HTTP request is being blocked - javascript

I am working on multiple apps that communicate with each other. I am using both Chrome and Firefox to test my apps. The problem persists in both browsers.
The problem:
I am sending a PUT request from app nr.1 to the Express Node server, which essentially sends an update to my Mongo database. Once it is updated, app nr.2 retrieves the updated value with a GET request. WebSockets are used to notify the apps of changes.
The problem, however, is that the HTTP GET requests on the receiving app nr.2 take multiple seconds to complete (after a few of them have been done).
To illustrate the lines above, look at the screenshot below:
the first few GET requests take 3-5 ms to complete, then the upcoming GET requests take up to 95634 ms to complete...
What could be the cause of this and how could this be fixed?

It is difficult to tell without seeing your whole stack.
Sometimes a reverse proxy that sits in front of your applications can cause issues like this.
It could be trying to route to IPv6 instead of IPv4, especially if you are using localhost in your GET request URLs. The fix is to use 127.0.0.1 instead of localhost.
Also, a high keepalive timeout setting on a proxy can cause this.
Good first places to look in a situation like this are:
Proxy logs
Node logs
Server logs (i.e. firewall or throttling)


Apollo Explorer Failed to load resource: net::ERR_CERT_AUTHORITY_INVALID

I am running an Apollo Server with Express to create an HTTP server:
const express = require("express");
const cors = require('cors');
const { ApolloServer } = require('apollo-server-express');
const http = require('http');
const app = express();
const server = new ApolloServer({ ... });
server.applyMiddleware({ app });
// enable pre-flight for cors requests
app.options('*', cors());
// Create the HTTP server
let httpServer = http.createServer(app);
httpServer.listen({ port: config.port });
Locally I can run the server and query it on Apollo Explorer without any issues.
However, when I deploy this server to the dev environment and try to access the Explorer page with the dev endpoint, I get a few errors.
The app.options() line with the cors argument seems to have solved some of them, but not all.
Errors I am getting (on Dev Tools console):
Failed to load resource: net::ERR_CERT_AUTHORITY_INVALID
POST https://dev.endpoint.service/graphql net::ERR_CERT_AUTHORITY_INVALID
Errors I am getting (as popups on the Explorer page):
Unable to reach server
To diagnose the problem, please run:
npx diagnose-endpoint@1.0.12 --endpoint=https://dev.endpoint.service/graphql
I've tried running the command as instructed in the error and got this result:
Diagnosing https://dev.endpoint.service/graphql
Could not find any problems with the endpoint. Would you please let us know about this at explorer-feedback@apollographql.com
Frankly, I'm not even sure I understand the problem.
Am I getting these errors because, even though I launch Apollo as an HTTP server without certificates, I am trying to access it via an HTTPS endpoint (which requires certificates)? I have to do this: the service is deployed in an AKS cluster, which is only accessible through the endpoint I am calling. But every service already there is also an HTTP service, not HTTPS, and is accessible through this same endpoint.
Also, even though these errors show up frequently, I am still able to query the server successfully most of the time in Explorer, and the data returned is exactly what I expected, which makes even less sense.
I am using the Edge browser but have also tried Chrome, with the same issues.
How can an error like this be intermittent?
Without any intervention on my part, sometimes it's like this:
Any help, hints, ideas, please.
Thank you so much.
As much as it pains me to admit, it seems the issue is related to the VPN my company is using.
There were a few tells that pointed in this direction, once I started paying attention:
We can't access the endpoint I mentioned without the VPN turned on.
Other services in the AKS cluster fail with the same error if they are called constantly through the same endpoint. I did not think to run that test at first, but then I realized that the Apollo server is constantly performing introspection to check the schema, which means it is called more often than the other services that lack this functionality.
We have monitoring tools to check pod statuses and so on, and nothing indicated any problem in this service, or that it needed any kind of pod scaling (due to an excessive number of requests).
I performed a kubectl port-forward test linking my localhost directly to the AKS cluster. Calling the service this way bypasses the endpoint I am normally forced to use before the request reaches the cluster. In one Apollo Studio window I was calling the service the normal way and seeing the error, while at the same time a second Apollo Studio window performing the same request through the port-forward bypass worked just fine. If it really were a problem with the service, it would be down in both windows.
Other colleagues were testing the service at the same time as me and said it was working fine for them, until it wasn't. Every developer on my team could be accessing the service at the same time, and the error would randomly show up for some, but not for others.
There are long periods where the error doesn't occur at all, like during lunch hours or after work hours, when I assume VPN traffic is much lower.

ECONNREFUSED when I request data from URL using NodeJS on company computer whilst connected to company network but I can open URL in browser

OK, I have searched around the forums and googled for a solution, but no one seems to have the exact same problem as me, or at least no one has posted about it that I can find.
The problem is that I can make a request (using the request module) to this API on my personal computer, but when I make the same request on the company work laptop while connected to the company network, it fails with an ECONNREFUSED error, despite the fact that I can navigate to the same URL in a browser on that laptop on the same network.
However, if I disconnect from the company network and connect to a hotspot or other Wi-Fi, the Node.js request retrieves data again.
Things I have tried: the built-in HTTPS module, passing in different headers and port numbers (the URL I am requesting only appears to work on port 443), and setting the User-Agent. I haven't tried requesting the data from my personal computer while connected to the company network, because I cannot.
It seems to me that my company is detecting that I am requesting the data via a script and blocking that, rather than blocking the site itself (so I can't even call IT and ask them to white-list the site, because it doesn't look blocked in the first place).
Any help will be appreciated. Thanks.
You can try ngrok: https://ngrok.com/download
Run your Node.js application, and in another shell run ngrok http PORT. It will give you a unique public URL, which you can use to request data.

Delay when sending information to client using node.js and socket.io

I have an application written in node.js with a timer function. Whenever a second has passed, the server sends the new time value to every connected client. While this works perfectly fine on localhost, it's very choppy when hosted online. Clients won't update immediately and the value will sometimes jump two or three seconds at a time.
I discovered, however, if I repeatedly send the timer data to the clients (using setInterval), it runs perfectly without any delay from anywhere.
Does anyone have any idea why this might be the case? It doesn't make sense to me why sending the same data more often would fix the issue. If anything, shouldn't it be slower? I was thinking I could use this approach and have the client notify the server when it has updated, but this seems unnecessary and inefficient.
I'm very new to node.js but this has got me stumped. Any insight would be greatly appreciated.
Where are you hosting it? Does it support WebSockets? Some hosts do not support or allow them. My guess is that your host is not allowing WebSockets and socket.io is falling back to the polling transport.
In your browser, you can find the websocket connection and inspect it in developer tools:
How do you inspect websocket traffic with Chrome Developer Tools?
If the first request does not receive a 101 Switching Protocols HTTP status to successfully upgrade to a WebSocket, you'll see the polling requests recur in the developer tools.

Web socket in VPC behind load-balancer giving errors

When I connect and send some sockets to my Linux node.js server inside a VPC and behind a load balancer, I get an unusually long delay, followed by: WebSocket connection to [address] failed: Connection closed before receiving a handshake response
And then a few seconds later I get responses for all the sockets I sent, and then everything works fine. No long delays.
But just on this initial connect, there's a horrible wait followed by this error message. Everything still works, it just takes a bit.
I'm using Amazon Web Service EC2 load-balancers and AWS VPCs
When I'm accessing the same server directly, I get no delays.
I was unable to connect to my server when having just a load-balancer.
I was unable to connect to my server when having just a VPC, so I can't isolate the problem to just my load-balancer or the VPC.
What's going on?
The correct answer was Michael's comment that I marked as helpful.
The first person who puts this into an answer format gets points.
The health of the connection from the load balancer to the server is determined by the way your Health Check is set up.
Try setting it up differently.
e.g.
Use a TCP-based Health Check rather than an HTTP-based one, and change the thresholds.
If you see different behaviour, you'll know that the Health Check is the issue.
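For a classic ELB, switching to a TCP-based health check can be done with the AWS CLI; the load balancer name and threshold values below are placeholders, not from the original post:

```shell
aws elb configure-health-check \
  --load-balancer-name my-websocket-lb \
  --health-check Target=TCP:80,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2
```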
It is hard to know exactly without debugging, but note that there are known issues with using Elastic Load Balancer for WebSockets: it parses HTTP requests (unless in TCP mode) and has a 60-second idle connection timeout.

Understanding mod_proxy and Apache 2 for writing a comet-server

I am currently trying to implement a simple HTTP server for a kind of comet technique (long-polling XHR requests). As JavaScript is very strict about cross-domain requests, I have a few questions:
As I understand it, any Apache worker is blocked while serving a request, so writing the "script" as a usual website would block Apache once all workers have a request to serve. --> Does not work!
I came up with the idea of writing my own simple HTTP server just for serving these long-polling requests. This server would be non-blocking, so each worker could handle many requests at the same time. As my site also contains content, images, etc., and my server does not need to serve content, I started it on a port other than 80. The problem now is that I can't interact between the JavaScript delivered by Apache and my comet server running on a different port, because of cross-domain restrictions. --> Does not work!
Then I came up with the idea of using mod_proxy to map my server onto a new subdomain. I couldn't really figure out how mod_proxy works, but I imagine I would end up with the same effect as in my first approach?
What would be the best way to combine this kind of classic website with these long-polling XHR requests? Do I need to implement content delivery on my server myself?
I'm pretty sure using mod_proxy will block a worker while the request is being processed.
If you can use 2 IPs, there is a fairly easy solution.
Let's say IP A is 1.1.1.1 and IP B is 2.2.2.2, and let's say your domain is example.com.
This is how it will work:
-Configure Apache to listen on port 80, but ONLY on IP A.
-Start your other server on port 80, but only on IP B.
-Configure the XHR requests to be on a subdomain of your domain, but with the same port. So the cross-domain restrictions don't prevent them. So your site is example.com, and the XHR requests go to xhr.example.com, for example.
-Configure your DNS so that example.com resolves to IP A, and xhr.example.com resolves to IP B.
-You're done.
This solution will work if you have 2 servers and each one has its IP, and it will work as well if you have one server with 2 IPs.
If you can't use 2 IPs, I may have another solution, I'm checking if it's applicable to your case.
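The Apache side of the two-IP setup above is mostly a matter of restricting the Listen directive; the IPs and domain are the example values from the answer, and the DocumentRoot is a placeholder:

```apache
# httpd.conf -- Apache binds only IP A; the comet server takes IP B
Listen 1.1.1.1:80

<VirtualHost 1.1.1.1:80>
    ServerName example.com
    DocumentRoot /var/www/example
</VirtualHost>
```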
This is a difficult problem. Even if you get past the security issues you're running into, you'll end up having to hold a TCP connection open for every client currently looking at a web page. You won't be able to create a thread to handle each connection, and you won't be able to "select" on all the connections from a single thread. Having done this before, I can tell you it's not easy. You may want to look into libevent, which memcached uses to a similar end.
Up to a point you can probably get away with setting long timeouts and allowing Apache to have a huge number of workers, most of which will be idle most of the time. Careful choice and configuration of the Apache worker module will stretch this to thousands of concurrent users, I believe. At some point, however, it will not scale up any more.
I don't know what your infrastructure looks like, but we have load-balancing boxes in the network racks called F5s. These present a single external domain but redirect traffic to multiple internal servers based on their response times, cookies in the request headers, etc. They can be configured to send requests for a certain path within the virtual domain to a specific server. Thus you could have example.com/xhr/foo requests mapped to a specific server that handles these comet requests. Unfortunately, this is not a software solution, but a rather expensive hardware one.
Anyway, you may need some kind of load-balancing system (or maybe you have one already), and perhaps it can be configured to handle this situation better than Apache can.
I had a problem years ago where I wanted customers using a client-server system with a proprietary binary protocol to be able to access our servers on port 80, because they were continuously having problems with firewalls on the custom port that the system used. What I needed was a proxy that would live on port 80 and direct the traffic to either Apache or the app server depending on the first few bytes of what came across from the client. I looked for a solution and found nothing that fit. I considered writing an Apache module, a plugin for DeleGate, etc., but eventually rolled my own custom content-sensing proxy service. That, I think, is the worst-case scenario for what you're trying to do.
To answer the specific question about mod-proxy: yes, you can setup mod_proxy to serve content that is generated by a server (or service) that is not public facing (i.e. which is only available via an internal address or localhost).
I've done this in a production environment and it works very, very well. Apache forwarding some requests to Tomcat via AJP workers, and others to a GIS application server via mod proxy. As others have pointed out, cross-site security may stop you working on a sub-domain, but there is no reason why you can't proxy requests to mydomain.com/application
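A minimal mod_proxy mapping along those lines looks like the following; the backend port 8080 and the /application path are assumptions for illustration:

```apache
# Requires mod_proxy and mod_proxy_http to be loaded
ProxyPass        /application http://localhost:8080/application
ProxyPassReverse /application http://localhost:8080/application
```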
To talk about your specific problem: I think you are really getting bogged down in looking at the problem as "long-lived requests" - i.e. assuming that when you make one of these requests, that's it, and the whole process needs to stop. It seems as though you are trying to solve an application-architecture issue via changes to system architecture. In fact, what you need to do is treat these background requests exactly as such, and multi-thread it:
Client makes the request to the remote service "perform task X with data A, B and C"
Your service receives the request and passes it on to a scheduler, which issues a unique ticket/token for the request. The service then returns this token to the client: "thanks, your task is in a queue running under token Z"
The client then hangs onto this token, shows a "loading/please wait" box, and sets up a timer that fires, say, every second
When the timer fires, the client makes another request to the remote service: "have you got the results for my task, token Z?"
Your background service can then check with the scheduler, and will likely return either an empty document ("no, not done yet") or the results
When the client gets the results back, it can simply clear the timer and display them.
As long as you're reasonably comfortable with threading (which you must be, given that you've indicated you're looking at writing your own HTTP server), this shouldn't be too complex. On top of the HTTP listener part you need:
Scheduler object - a singleton that really just wraps a first-in, first-out queue. New tasks go onto the end of the queue, and jobs can be pulled off from the front: just make sure that the code issuing a job is thread-safe (lest you get two workers pulling the same job off the queue).
Worker threads can be quite simple - get access to the scheduler and ask for the next job: if there is one, do the work and send the results; otherwise just sleep for a period and start over.
This way, you're never blocking Apache for longer than need be, as all you are doing is issuing requests for "do X" or "give me the results for X". You'll probably want to build in some safety features at a few points - such as handling tasks that fail, and making sure there is a timeout on the client side so it doesn't wait indefinitely.
For number 2: you can get around cross-domain restrictions by using JSONP.
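For reference, JSONP works by having the server wrap its JSON reply in a caller-supplied function name, which the page then loads via a script tag (script tags are not subject to the same-origin policy). A minimal sketch of the server-side wrapping, with illustrative names:

```javascript
// Wrap a JSON payload in the callback name the client passed in,
// e.g. via ?callback=handleUpdate in the query string.
function jsonpWrap(callbackName, payload) {
  return `${callbackName}(${JSON.stringify(payload)});`;
}

// The client defines handleUpdate() and adds a script tag pointing at
// the comet server; the browser executes the returned call.
console.log(jsonpWrap("handleUpdate", { time: 1234 }));
// -> handleUpdate({"time":1234});
```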
Three alternatives:
Use nginx. This means you run 3 servers: nginx, Apache, and your own server.
Run your server on its own port.
Use Apache mod_proxy_http (as you suggested yourself).
I've confirmed mod_proxy_http (Apache 2.2.16) works proxying a Comet application (powered by Atmosphere 0.7.1) running in GlassFish 3.1.1.
My test app with full source is here: https://github.com/ceefour/jsfajaxpush
