The Intern Dojo example exits with timeout error - javascript

The example of Dojo tests run under Intern (https://github.com/theintern/intern-examples/tree/master/dojo-example) does not actually test anything; it fails when connecting to the Sauce Labs network:
$ npm test
> dojo-intern-example#0.1.0 test /home/bogdanbiv/WebstormProjects/intern-examples/dojo-example
> intern-runner config=tests/intern
Listening on 0.0.0.0:9001
Starting tunnel...
Using no proxy for connecting to Sauce Labs REST API.
**********************************************************
A newer version of Sauce Connect (build 1283) is available!
Download it here:
https://saucelabs.com/downloads/sc-4.3-linux.tar.gz
**********************************************************
Started scproxy on port 49172.
Starting secure remote tunnel VM...
Secure remote tunnel VM provisioned.
Tunnel ID: 2f904e21cf1e4c3e83f63a4b3089127c
Secure remote tunnel VM is now: booting
Secure remote tunnel VM is now: running
Remote tunnel host is: maki76020.miso.saucelabs.com
Using no proxy for connecting to tunnel VM.
Establishing secure TLS connection to tunnel...
Cleaning up.
Finished! Deleting tunnel.
Error: failed to connect to tunnel VM.
Error: failed to connect to tunnel VM.
at reject <node_modules/intern/node_modules/digdug/SauceLabsTunnel.js:353:17>
at readStartupMessage <node_modules/intern/node_modules/digdug/SauceLabsTunnel.js:381:12>
at <node_modules/intern/node_modules/digdug/SauceLabsTunnel.js:434:12>
at Array.some <native>
at Socket.<anonymous> <node_modules/intern/node_modules/digdug/SauceLabsTunnel.js:428:21>
at Socket.EventEmitter.emit <events.js:117:20>
at Socket.<anonymous> <_stream_readable.js:746:14>
at Socket.EventEmitter.emit <events.js:92:17>
at emitReadable_ <_stream_readable.js:408:10>
at emitReadable <_stream_readable.js:404:5>
npm ERR! weird error 1
npm WARN This failure might be due to the use of legacy binary "node"
npm WARN For further explanations, please read
/usr/share/doc/nodejs/README.Debian
npm ERR! not ok code 0
OK, it does complain about having an old Sauce Connect binary, but the error persists even after downloading the newest SC (4.3) and pointing the config at its path. I also updated .bin/intern-runner to use js as the running environment instead of the old node command. The user and password are the ones from the repository (I left them unchanged). I followed the documentation and uncommented the tunnel in the intern config file.
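For reference, here is a minimal sketch of what the relevant part of tests/intern.js looks like in Intern 2.x with the tunnel uncommented; the proxy port matches the "Listening on 0.0.0.0:9001" line in the log, but the browser and suite names are illustrative, not copied from the example repo:
define({
  proxyPort: 9001,
  proxyUrl: 'http://localhost:9001/',

  // Uncommenting this line is what makes intern-runner start Sauce Connect
  // through Dig Dug; credentials come from SAUCE_USERNAME/SAUCE_ACCESS_KEY
  // or from the config, depending on how the example is set up.
  tunnel: 'SauceLabsTunnel',

  environments: [
    { browserName: 'chrome' }
  ],

  suites: [ 'tests/unit/all' ],
  functionalSuites: [ 'tests/functional/all' ]
});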
UPDATE: This problem still occurs. I find it weird that a proxy is started ("Started scproxy on port 54687."), but, further down, the log says "Using no proxy for connecting to tunnel VM.". Aren't these lines supposed to match?
It could be that this mismatch has nothing to do with the original problem. The new Sauce Connect binary is still ignored.

UPDATE: Actually, this solution only affects the local client run (intern-client config=tests/intern), so it solves a different problem than the one originally posted. /UPDATE
The problem was that although I executed bower install as documented, the bower components were installed into a folder set by the global .bowerrc configuration, which was quite different from where the Dojo TodoMVC example expected its components.
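For anyone hitting the same mismatch: the folder bower installs into is controlled by the directory entry in .bowerrc, and a global .bowerrc silently overrides the project default. A minimal project-local .bowerrc (the path below is just the usual default, not necessarily what the TodoMVC example expects) looks like:
{
  "directory": "bower_components"
}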
Also submitted an issue at https://github.com/theintern/intern-examples/issues/10 and a pull request.

Related

NPM: request to https://registry.npmjs.org/corepack failed, reason: connect EHOSTUNREACH

NPM used to work without problems, but now, for some reason, anything I try that involves connecting to the registry times out.
The failure message I get from NPM is request to https://registry.npmjs.org/corepack failed, reason: connect EHOSTUNREACH 2606:4700::6810:1223:443
The command I'm running is npm update -g.
I'm on Arch Linux, and I installed the NPM package from arch. It is version 8.19.2 (the latest on arch).
I tried two DNS servers; the one I'm using now is Cloudflare (1.1.1.1).
Pinging "registry.npmjs.org" results in From 2600:1700:4630:c000::1 (2600:1700:4630:c000::1) icmp_seq=1 Destination unreachable: Address unreachable
But if I go to registry.npmjs.org in my web browser, I get the expected json result.
Any help is appreciated.
https://askubuntu.com/questions/32298/prefer-a-ipv4-dns-lookups-before-aaaaipv6-lookups/38468#38468
For some reason IPv6 requests are not working on my current network. I "solved" the issue for now by preferring IPv4 using the solution in the link above. Ultimately I'd like to find out why IPv6 is not working on my computer/network, but for now this is fine.
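For reference, the linked askubuntu answer amounts to uncommenting (or adding) the following line in /etc/gai.conf, which makes getaddrinfo prefer IPv4 results over IPv6:
precedence ::ffff:0:0/96  100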

Cypress Test Runner unexpectedly exited via a exit event with signal SIGSEGV in circleCI

I am stuck on this problem. I am running Cypress tests. When I run them locally, they run smoothly; when I run them in CircleCI, an error is thrown after some execution.
Here is what I am getting:
[334:1020/170552.614728:ERROR:bus.cc(392)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
[334:1020/170552.616006:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
[334:1020/170552.616185:ERROR:bus.cc(392)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
[521:1020/170552.652819:ERROR:gpu_init.cc(441)] Passthrough is not supported, GL is swiftshader
Current behavior:
When I run my specs headlessly on CircleCI, Cypress closes unexpectedly with a socket error.
Error message:
The Test Runner unexpectedly exited via a exit event with signal
SIGSEGV
Please search Cypress documentation for possible solutions:
https://on.cypress.io
Platform: linux (Debian - 10.5)
Cypress Version: 8.6.0
The issue was resolved by reverting the Cypress version back to 7.6.0.
I had this same issue within our Azure builds as well. We only recently migrated from Cypress 8.4.0; going back to that solved the problem.
Downgrade Cypress by running npm install cypress@7.6.0
Downgrading Cypress to 8.3.0 worked for me; you don't need to go back to versions earlier than that:
npm install cypress@8.3.0
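If your CI installs from package.json/package-lock.json, it also helps to pin the downgraded version exactly (no caret range) so the build doesn't drift back up; for example:
{
  "devDependencies": {
    "cypress": "8.3.0"
  }
}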
SIGSEGV means a segmentation fault error. Read all about it here.
"In practice, a segfault occurs when your program breaks some
fundamental rule set by the operating system. In that case, the
operating system sends your process a signal (SIGSEGV on Mac & Linux,
STATUS_ACCESS_VIOLATION on Windows), and typically the process shuts
down immediately."
This blog post also goes into detail about how this can occur, even if you don't directly interact with the operating system (since we're writing JavaScript). Please read it on the original author's site for context.
But in short - you are likely encountering this error because
a. you upgraded Cypress while on an old version of NodeJS, or
b. you upgraded NodeJS, while you have Cypress code that is directly or indirectly incompatible (this could be your own code, or one of its dependencies) with that NodeJS version
Since SIGSEGV is a complete shutdown, you have no stack trace or debug information to guide you, so you have to debug the old-fashioned way, turning tests and/or dependencies on or off to locate the problem in your code.
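With the Mocha-style syntax Cypress uses, one way to bisect is to temporarily exclude whole suites with describe.skip and re-run; the suite and test names below are made up for illustration:
// Bisect by disabling suites: run, halve the skipped set, repeat.
describe.skip('checkout flow', () => {
  it('pays with a saved card', () => {
    // ...temporarily excluded from the run
  });
});

describe('login flow', () => {
  it('logs in with valid credentials', () => {
    // ...still runs; if the SIGSEGV persists, the culprit is elsewhere
  });
});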

ECONNRESET while installing Angular CLI with npm

While trying to install Angular CLI on a machine with no proxy set and flawless internet, I get the following error:
4727 silly extract micromatch#^3.1.4 extracted to C:\Users\User\AppData
\Roaming\npm\node_modules\.staging\micromatch-7d604bf4 (38763ms)
4728 timing action:extract Completed in 265532ms
4729 verbose unlock done using C:\Users\User\AppData\Roaming\npm-cache\_locks\staging-eb8de851d6fef93d.lock for C:\Users\User\AppData\Roaming\npm\node_modules\.staging
4730 timing stage:rollbackFailedOptional Completed in 0ms
4731 timing stage:runTopLevelLifecycles Completed in 277531ms
4732 verbose type system
4733 verbose stack FetchError: request to https://registry.npmjs.org/mime-types/-/mime-types-2.1.18.tgz failed, reason: read ECONNRESET
4733 verbose stack at ClientRequest.req.on.err
[...]
4739 error code ECONNRESET
4740 error errno ECONNRESET
4741 error network request to https://registry.npmjs.org/mime-types/-/mime-types-2.1.18.tgz failed, reason: read ECONNRESET
4742 error network This is a problem related to network connectivity.
4742 error network In most cases you are behind a proxy or have bad network settings.
4742 error network
4742 error network If you are behind a proxy, please make sure that the
4742 error network 'proxy' config is set properly. See: 'npm help config'
4743 verbose exit [ 1, true ]
It usually fails on extracting the rxjs package. So far I've tried:
Setting the registry to an http:// version, but then it fails earlier, every time on the is-number package
reinstalling and updating npm/node
clearing the cache after every single operation
disabling the windows firewall
starting the command line with administrator rights
checked that proxy config is null
Nothing seems to be working. Do you have any ideas?
Edit: Maybe this will help: when I tried to update npm itself, it would hang immediately on rollbackFailedOptional; it managed to update itself only after changing the registry to the http:// version.
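For reference, the proxy/registry checks mentioned above boil down to commands like these (just a sketch of how to verify and reset those settings, not a guaranteed fix):
npm config get proxy
npm config get https-proxy
npm config delete proxy
npm config delete https-proxy
npm config set registry https://registry.npmjs.org/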
Downgrade to a more stable version of node/npm.
To install a specific version of npm, e.g., 5.6.0:
npm install -g npm@5.6.0

Node.JS: Why is my connection to localhost:3000 refused?

I'm a student going into back-end development for the first time and am trying to learn Node.js. I downloaded a PDF book about Node.js from SitePoint called "Jumpstart Node.JS". In following the instructions to set up the server on the command line, install the dependencies, and navigate to localhost:3000, I got nothing except the following message: "Connection refused: localhost:3000". Can somebody please tell me what might have gone wrong and how to fix it?
Edit1:
The instructions I followed are about setting up a Node.js server using the command line, so there is no code, only cmd commands; however, here is a quick summary of the process I followed:
Created an account on MongoLabs and then a database using the free pricing plan.
Installed Express using the command: npm install -g express@2.5.8.
Created an application with default options using this command: express authentication.
Modified the package.json file in System32.
Installed the dependencies by typing cd authentication, hitting Enter, and then typing the command: npm install.
Typed node app and hit Enter.
According to the instructions I should have seen the message "Welcome to express", but instead I got the error message.
In following the instructions to set up the server on the command line, install the dependencies, and navigate to localhost:3000
It seems that you didn't start the server.
Somewhere between installing the dependencies and navigating to the URL you need to actually start the server if you want it to serve the request.
Check that there is no copy of the server running in the background, or that another app is using the port currently.
(Your firewall should allow you to see which app has been allocated to that port.)
This is because Node.js needs to be the only app listening on that port on your computer.
Also, maybe try a different port?
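As a quick sanity check that Node can actually bind and serve on port 3000 (independent of the book's Express app), you can run a tiny throwaway server like this sketch and then reload localhost:3000 in the browser:
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('If you can read this, port 3000 is reachable.\n');
});

server.listen(3000, () => {
  console.log('Listening on http://localhost:3000');
});

server.on('error', (err) => {
  // EADDRINUSE here means another process already owns the port.
  console.error('Could not bind to port 3000:', err.code);
});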

What's the cause of the error 'getaddrinfo EAI_AGAIN'?

My server threw this today, which is a Node.js error I've never seen before:
Error: getaddrinfo EAI_AGAIN my-store.myshopify.com:443
at Object.exports._errnoException (util.js:870:11)
at errnoException (dns.js:32:15)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:78:26)
I'm wondering if this is related to the DynDns DDOS attack which affected Shopify and many other services today. Here's an article about that.
My main question is: what does dns.js do? What part of Node is it part of? How can I recreate this error with a different domain?
If you get this error with Firebase Cloud Functions, this is due to the limitations of the free tier (outbound networking only allowed to Google services).
Upgrade to the Flame or Blaze plans for it to work.
EAI_AGAIN is a DNS lookup timed-out error; it means there is a network connectivity error or a proxy-related error.
My main question is what does dns.js do?
dns.js is there for Node to get the IP address of the domain (in brief).
Some more info:
http://www.codingdefined.com/2015/06/nodejs-error-errno-eaiagain.html
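To see where the error comes from (and to try to recreate it, as the question asks), you can call dns.lookup directly, since that is the same getaddrinfo call http/https make under the hood; with a healthy resolver you will simply get an address (or ENOTFOUND for a bogus name), while EAI_AGAIN shows up when the resolver itself is unreachable or times out:
const dns = require('dns');

dns.lookup('my-store.myshopify.com', (err, address, family) => {
  if (err) {
    // With an unreachable or overloaded resolver this is where you would
    // see code 'EAI_AGAIN' with syscall 'getaddrinfo'.
    console.error(err.code, err.syscall, err.hostname);
    return;
  }
  console.log('Resolved to ' + address + ' (IPv' + family + ')');
});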
If you get this error from within a Docker container, e.g. when running npm install inside an Alpine container, the cause could be that the network changed since the container was started.
To solve this, just stop and restart the container:
docker-compose down
docker-compose up
Source: https://github.com/moby/moby/issues/32106#issuecomment-578725551
As xerq's excellent answer explains, this is a DNS timeout issue.
I wanted to contribute another possible answer for those of you using Windows Subsystem for Linux - there are some cases where something seems to be askew in the client OS after Windows resumes from sleep. Restarting the host OS will fix these issues (it's also likely restarting the WSL service will do the same).
For those who perform thousands or millions of requests per day and need a solution to this issue:
It's quite normal to get getaddrinfo EAI_AGAIN errors when performing a lot of requests on your server. Node.js itself doesn't perform any DNS caching; it delegates everything DNS-related to the OS.
Keep in mind that every http/https request performs a DNS lookup, which can become quite expensive. To avoid this bottleneck and the getaddrinfo errors, you can implement a DNS cache.
http.request (and https) accepts a lookup property, which defaults to dns.lookup():
http.get('http://example.com', { lookup: yourLookupImplementation }, response => {
// do something here with response
});
I strongly recommend using an already-tested module instead of writing a DNS cache yourself, since you'll have to handle TTL correctly, among other things, to avoid hard-to-track bugs.
I personally use cacheable-lookup, which is the one the got library uses (see its dnsCache option).
You can use it on specific requests
const http = require('http');
const CacheableLookup = require('cacheable-lookup');
const cacheable = new CacheableLookup();
http.get('http://example.com', {lookup: cacheable.lookup}, response => {
// Handle the response here
});
or globally
const http = require('http');
const https = require('https');
const CacheableLookup = require('cacheable-lookup');
const cacheable = new CacheableLookup();
cacheable.install(http.globalAgent);
cacheable.install(https.globalAgent);
NOTE: keep in mind that if a request is not performed through Node's http/https modules, using .install on the global agents won't have any effect on that request; for example, requests made using undici.
The OP's error specifies a host (my-store.myshopify.com).
The error I encountered is the same in all respects except that no domain is specified.
My solution may help others who are drawn here by the title "Error: getaddrinfo EAI_AGAIN"
I encountered the error when trying to serve a Node.js & Vue.js app from a different VM than the one where the code was originally developed.
The file vue.config.js read:
module.exports = {
  devServer: {
    host: 'tstvm01',
    port: 3030,
  },
};
When served on the original machine, the start-up output is:
App running at:
- Local: http://tstvm01:3030/
- Network: http://tstvm01:3030/
Using the same settings on a VM tstvm07 got me a very similar error to the one the OP describes:
INFO Starting development server...
10% building modules 1/1 modules 0 activeevents.js:183
throw er; // Unhandled 'error' event
^
Error: getaddrinfo EAI_AGAIN
at Object._errnoException (util.js:1022:11)
at errnoException (dns.js:55:15)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:92:26)
If it ain't already obvious, changing vue.config.js to read ...
module.exports = {
  devServer: {
    host: 'tstvm07',
    port: 3030,
  },
};
... solved the problem.
I started getting this error (different stack trace though) after making a trivial update to my GraphQL API application that is operated inside a docker container. For whatever reason, the container was having difficulty resolving a back-end service being used by the API.
After poking around to see if some change had been made in the docker base image I was building from (node:13-alpine, incidentally), I decided to try the oldest computer science trick of rebooting... I stopped and started the docker container and all went back to normal.
Clearly, this isn't a meaningful solution to the underlying problem - I am merely posting this since it did clear up the issue for me without going too deep down rabbit holes.
I was having this issue with docker-compose. It turns out I forgot to add my custom isolated named network to the service that couldn't be found.
TL;DR: Make sure, in your compose file, you have your custom networks defined on both services that need to talk to each other.
My error looked like this: Error: getaddrinfo EAI_AGAIN minio-service. The error was coming from my server's backend when making a call to minio-service using the minio-service hostname. This tells me that the minio-service container was not reachable from my server's container. The way I fixed this was by changing minio-service in my docker-compose from this:
docker-compose.yml
version: "3.8"
# ...
services:
server:
# ...
networks:
my-network:
# ...
minio-service:
# ... (missing networks: section)
# ...
networks:
my-network:
To include my custom isolated named network, like this:
docker-compose.yml
version: "3.8"
# ...
services:
server:
# ...
networks:
my-network:
# ...
minio-service:
# ...
networks:
my-network:
# ...
# ...
networks:
my-network:
More details on docker-compose networking can be found here.
This issue can be related to your hosts file setup.
Add the following line to your hosts file:
In Ubuntu: /etc/hosts
127.0.0.1 localhost
In windows: c:\windows\System32\drivers\etc\hosts
127.0.0.1 localhost
In my case the problem was the Docker networks' IP allocation range; see this post for details.
@xerq pointed this out correctly; here's some more reference:
http://www.codingdefined.com/2015/06/nodejs-error-errno-eaiagain.html
I got the same error. I solved it by updating the "hosts" file present under this location in Windows OS:
C:\Windows\System32\drivers\etc
Hope it helps!
In my case, while connected to a VPN, the error happens when running Ubuntu from inside Windows Terminal, but it doesn't happen when opening Ubuntu directly from Windows (not from inside Windows Terminal).
I had the same problem with AWS and Serverless. I tried the eu-central-1 region and it didn't work, so I had to change it to us-east-2 for the example.
I was getting this error after I recently added a new network to my docker-compose file.
I initially had these services:
services:
  frontend:
    depends_on:
      - backend
    ports:
      - 3005:3000
  backend:
    ports:
      - 8005:8000
I decided to add a new network which hosts other services I wanted my frontend service to have access to, so I did this:
networks:
  moar:
    name: moar-network
    attachable: true

services:
  frontend:
    networks:
      - moar
    depends_on:
      - backend
    ports:
      - 3005:3000
  backend:
    ports:
      - 8005:8000
Unfortunately, the above made it so that my frontend service was no longer visible on the default network, only on the moar network. This meant that the frontend service could no longer proxy requests to backend, so I was getting errors like:
Error occured while trying to proxy to: localhost:3005/graphql/
The solution is to add the default network to the frontend service's network list, like so:
networks:
  moar:
    name: moar-network
    attachable: true

services:
  frontend:
    networks:
      - moar
      - default # here
    depends_on:
      - backend
    ports:
      - 3005:3000
  backend:
    ports:
      - 8005:8000
Now we're peachy!
One last thing, if you want to see which services are running within a given network, you can use the docker network inspect <network_name> command to do so. This is what helped me discover that the frontend service was not part of the default network anymore.
Enabled Blaze and it still doesn't work?
Most probably you need to load .env from the right path: require('dotenv').config({ path: __dirname + './../.env' }); won't work (nor will any other path). Simply put the .env file in the functions directory, from which you deploy to Firebase.
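In other words, with the .env file placed inside the functions directory, the default dotenv lookup is enough; a minimal sketch (the exported function and the variable name are just examples, not anything Firebase defines):
const functions = require('firebase-functions');
require('dotenv').config(); // picks up functions/.env bundled with the deploy

exports.example = functions.https.onRequest((req, res) => {
  // MY_API_KEY is an illustrative name; use whatever your .env contains.
  res.send('key is ' + (process.env.MY_API_KEY ? 'set' : 'missing'));
});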
