I have a JavaScript function that runs every 5 seconds and requests information from the same server via a jQuery AJAX call. The function runs indefinitely once the page is loaded.
For some reason the AJAX query is failing about once every minute or two, and showing
ERR_EMPTY_RESPONSE
in the console. The odd thing is, it fails for exactly 60 seconds, then starts working fine for another minute or two.
So far I've tried the following, with no success:
A different browser
A different internet connection
Changing the polling interval of the function (it still fails in 60-second stretches: e.g. running every 10 seconds, it fails 6 times in a row; every 5 seconds, 12 times; every 1 second, 60 times)
Suggestions from web searches, such as flushing the IP settings on my computer
I never had any problem on my last server, which was a VPS. I'm now running the site on shared hosting with GoDaddy and wonder if there's a problem at their end. Other sites and AJAX calls to the same server work fine during the downtimes, though.
I also used to run the site over HTTPS; now it's over plain HTTP only. Not sure if that's relevant.
Here's the guts of the function:
var interval = null;

function checkOrders() {
    interval = window.setInterval(function () {
        $.ajax({
            type: "POST",
            dataType: "json",
            url: "http://www.chipshop.co.nz/ajax/check_orders.php",
            data: { shopid: 699 },
            error: function (errorData) {
                // handle error
            },
            success: function (data) {
                // handle success
            }
        });
    }, 5000); // repeat until switched off, polling every 5 seconds
}
Solved: It turned out the problem was with GoDaddy's hosting. Too many POST requests resulted in a 60-second 'ban' on accessing that file. Changing to GET avoided this.
This page contains the answer from user emrys57:
For me, the problem was caused by the hosting company (Godaddy)
treating POST operations which had substantial response data (anything
more than tens of kilobytes) as some sort of security threat. If more
than 6 of these occurred in one minute, the host refused to execute
the PHP code that responded to the POST request during the next
minute. I'm not entirely sure what the host did instead, but I did
see, with tcpdump, a TCP reset packet coming as the response to a POST
request from the browser. This caused the http status code returned in
a jqXHR object to be 0.
Changing the operations from POST to GET fixed the problem. It's not
clear why Godaddy imposes this limit, but changing the code was easier
than changing the host.
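For anyone hitting the same thing, the change was only in the $.ajax options; a minimal sketch (cache: false is an extra precaution I'd suggest, since browsers may serve cached responses for repeated GETs):
$.ajax({
    type: "GET",   // GET instead of POST sidesteps the host's POST rate limit
    cache: false,  // jQuery appends a timestamp so repeated GETs aren't cached
    dataType: "json",
    url: "http://www.chipshop.co.nz/ajax/check_orders.php",
    data: { shopid: 699 },  // sent as ?shopid=699&_=<timestamp>
    error: function (errorData) { /* handle error */ },
    success: function (data) { /* handle success */ }
});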
Related
This behavior was not present all the time; it appeared out of nowhere about a month ago and then disappeared just as suddenly. The problem is that I can't identify what happened, and I have no server debugging tools because the issue only occurs in production.
Roughly 100 AJAX requests are triggered at the same time using a loop like this:
let url = "example.com/";
var methods = ["method1", "method2", "method3", "method4"]; // roughly 100 in reality

$.each(methods, function (index, value) {
    $.ajax({
        url: url + value,
        method: "POST",
        data: { params: "whatever", otherParams: "whatever" }
    }).done(function (data) {
        console.log(data);
    });
});
On the server side (Apache + PHP) the requests run selects, updates, and inserts against a relational database. Each request should be served by its own thread, since Apache is listening for concurrent connections.
In the network console, all requests start at (roughly) the same time, but here is the problem: the responses only arrive one after another finishes. If request 1 starts at 0 and takes 5 seconds, request 2 starts at 5, and request 3 starts when request 2 has finished. Every browser shows the same behavior.
The best logical explanation I can think of is that the database is locking some table when it performs an update or insert; some tables are huge and, without indexes, could take too long. But the staging environment points to the same database and works perfectly in parallel. So what is going on? Is it possible that PHP or Apache could get stuck this way for some reason? I also had the wilder idea that it could be a problem writing log files in the OS (Debian), but I have no idea how that works. I would be glad for any suggestion. Maybe I could reproduce the problem in a controlled environment and do something to prevent it happening again.
Some additional information: the API has two clients, one in Angular and the other in JavaScript + PHP. The behavior is exactly the same with both clients.
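To confirm the serialization from the client, I can log start and end times per request; a minimal sketch using the same loop (method names are placeholders, as above):
var methods = ["method1", "method2", "method3", "method4"]; // placeholder names
var t0 = performance.now();

$.each(methods, function (index, value) {
    var queued = performance.now() - t0;
    $.ajax({
        url: "example.com/" + value,
        method: "POST",
        data: { params: "whatever" }
    }).always(function () {
        var finished = performance.now() - t0;
        // If the requests really run in parallel, the "finished" times should
        // cluster together; if something serializes them (database locks, the
        // server, or the browser's per-host connection limit), each "finished"
        // time will be roughly n * (single response time).
        console.log(value, "queued at", queued.toFixed(0) + "ms,",
                    "finished at", finished.toFixed(0) + "ms");
    });
});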
I have a problem with IE and SignalR. I'm using it to perform a syncing action between two databases; the action completes successfully in Google Chrome / Firefox / Safari in every scenario.
Using IE, the sync performs successfully the first time, but only that once: on the second attempt a pending request gets stuck and the page freezes forever.
I found a solution online, which is changing the transport mode.
But the page still freezes.
if (isIE()) {
    $.connection.hub.start({ transport: ['serverSentEvents', 'foreverFrame'] }).done(function () {
        progressNotifier.server.DoMyLongAction();
    });
} else {
    $.connection.hub.start({ transport: ['serverSentEvents', 'longPolling'] }).done(function () {
        progressNotifier.server.DoMyLongAction();
    });
}
I'm using:
SignalR v2.1.0.0
.NET Framework v4.5
jQuery v1.8
Is this a SignalR issue, or am I doing something wrong?
Edit
My application uses a jQuery UI progress bar, and I update it with this code:
server side:
Clients.Caller.sendMessage(msg, 5, "Accounts");
client side:
progressNotifier.client.sendMessage = function (message, value, Entity) {
    pbar1.progressbar("value", value); // update the progress bar with the value sent from the server
};
It works in Firefox, so I thought it was a SignalR issue. Now I'm confused: if SignalR is working as expected, what causes this problem?
You can try using EventSource (SSE).
I am using this polyfill:
https://github.com/remy/polyfills/blob/master/EventSource.js
but modified for SignalR:
http://a7.org/scripts/jquery/eventsource_edited.js
I have been working with it for a year; SignalR just checks for window.EventSource, and it works.
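A minimal sketch of the idea, assuming you host a local copy of the modified polyfill above (the path is illustrative); it must run before the SignalR scripts do their transport detection:
// Load the polyfill only where native EventSource is missing (e.g. older IE).
// document.write keeps the load synchronous during page parse, so it is in
// place before jquery.signalR checks window.EventSource.
if (!window.EventSource) {
    document.write('<script src="/scripts/eventsource_edited.js"><\/script>');
}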
The solution you found online is not likely to help your issue.
I doubt your isIE() function is correctly identifying IE. If it was, SignalR should only be attempting to establish a "foreverFrame" connection, since IE does not even support "serverSentEvents". I would not expect IE to make any "/signalr/poll" requests, because those requests are only made by the "longPolling" transport.
Also, having a "pending" poll request in the IE F12 tool's network tab is entirely expected. This is how long polling is designed to work. Basically, as soon as a message is received the client makes a new ajax request (a long poll) to retrieve new messages. If no new messages are immediately available, the server will wait (for up to 110 seconds by default in the case of SignalR, not forever) for a new message to be sent to the client before responding to the pending long poll request with the new message.
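For illustration, the basic long-polling pattern boils down to something like this; it is a sketch of the general technique, not SignalR's actual transport code, and handleMessages is a hypothetical application handler:
// Generic long-poll loop: the server holds each request open until it has
// data (or its own timeout elapses), then the client immediately re-polls.
function poll() {
    $.ajax({ url: "/signalr/poll" })
        .done(function (messages) {
            handleMessages(messages); // hypothetical application handler
        })
        .always(function () {
            poll(); // issue the next long poll as soon as this one completes
        });
}
poll();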
Can you clarify exactly what issue you are having, other than seeing a pending poll request under the network tab? It would also help if you enabled tracing in the JS client, provided the console output, and showed all the "/signalr/..." requests in the network tab.
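For reference, client-side tracing is a single documented flag on the hub connection, set before calling start():
// Log transport negotiation, keep-alives and reconnects to the console.
$.connection.hub.logging = true;
$.connection.hub.start().done(function () {
    console.log("connected via", $.connection.hub.transport.name);
});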
Hi, I'm a new user of Atmosphere, and I set up a simple test that worked fine. We used long-polling, and the behavior was that my client would send the server a GET that would stay open until:
data was returned by the server
a minute elapsed
In both cases, the client would immediately send another GET for the server to hold open. Most of the time no data was sent, so every minute the GET would be "refreshed." I assumed this was the default behavior, perhaps because certain browsers or networks shut off a GET that exceeds a certain time limit, and this was a way to avoid that.
Question:
Is this refresh controlled by the client or the server? I poked around and couldn't figure out if the client was closing the connection on its own and sending a new request, or if it was the server.
The reason I ask is that the server got deployed, and now that refresh no longer occurs. My client GET now stays open for the full 5-minute (default) timeout, throws the timeout event, and then reconnects for another 5 minutes.
The server team claims "nothing changed," ha-ha. So did I do something, or what? Please let me know! Thank you!
request object:
var request = {
    url: 'xyz',
    transport: 'long-polling',
    reconnectInterval: 5000,
    maxReconnectOnClose: 20,
    enableXDR: true
};
Edit: the Atmosphere server was changed from 2.1.3 (working) to 2.0.7 (not working) when the deploy occurred. When it was changed back, the 1-minute refresh behavior reappeared. The problem is that 2.1.3 is not compatible with the server they are using, hence the downgrade.
Question: what is this feature called? Is it the heartbeat, or something else? Can someone tell me what change would cause this? I've looked through the release notes and nothing jumped out at me.
I have an area in my page where messages go when a database has changed. Some days the database changes so much that a new message is displayed every 10 minutes; other days it changes only a few times. The issue I am having is that the EventSource seems to time out after 1 hour 22 minutes, and the browser no longer receives notifications.
I am wondering if there is a way to keep an EventSource persistent, i.e. alive for as long as the browser is displaying the page. From what I have found in my Google searches, an EventSource should remain alive until the tab/window is closed; unfortunately there seems to be very little to find on the subject, and for me this doesn't seem to be the case.
You don't say where the socket closure is happening (on the browser, socket on client machine, socket on server-side, etc.) but it doesn't really matter as the fix is the same for all of them: send keep-alive messages.
The server should send a keep-alive message. Either every, say, 15 seconds; or only after 15 seconds of inactivity. (Whichever is easier to code, server-side, for you.) It can be as simple as an SSE comment: ":\n\n" (lines starting with colons are ignored). I prefer to send actual data, because:
You get to see a message, allowing client-side keep-alive checking (see below)
There is bound to be something useful you want to send, like a timestamp (for a check that client/server clocks are in sync), or metrics, etc.
On the client-side, run a timer with setTimeout() set to 20 seconds. Each time you receive any data from the server (whether genuine data, or your keep-alive), kill the timer, and start it again. Therefore the only time the time-out function will get called is if your server went more than 20 seconds without sending you anything. When that happens, kill the connection and reconnect.
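A minimal sketch of that client-side watchdog, assuming the server's keep-alive arrives as a real data message (an SSE comment line would not fire onmessage, which is one more reason to prefer real data); the URL and handleData are illustrative:
// Watchdog: if nothing (data or keep-alive) arrives for 20 seconds,
// assume the connection silently died; kill it and reconnect.
var es, watchdog;

function resetWatchdog() {
    clearTimeout(watchdog);
    watchdog = setTimeout(function () {
        es.close();   // kill the stale connection...
        connect();    // ...and reconnect
    }, 20000);
}

function connect() {
    es = new EventSource('/events'); // illustrative URL
    es.onmessage = function (e) {
        resetWatchdog();             // any message counts as proof of life
        if (e.data !== 'keep-alive') {
            handleData(e.data);      // hypothetical application handler
        }
    };
    resetWatchdog();
}

connect();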
The above is assuming the problem is at the socket-level. The problem might instead be the browser is crashing: perhaps it has run out of memory. The fix I'd do in that case is a once/hour timer (setTimeout() in JavaScript), to manually close and re-open the EventSource connection. Or clear out some memory buffers you might be using. A bit of profiling with FireBug or Chrome tools will tell you if you have a memory problem.
Plug: Over half of the "Making our App production quality" chapter in my coming-soon SSE book is about keep-alive and using LastId on the reconnect. Please buy when it comes out :-)
I had the same problem, with Chrome reporting "net::ERR_SPDY_PROTOCOL_ERROR 200" every two minutes.
Sending an SSE comment every minute solved the problem for me. See the Node.js / Express example code below.
exports.addWebServices = function(app) {
    app.get('/ws/clientEvent', function(req, res) {
        res.writeHead(200, {
            'Content-Type': 'text/event-stream',
            'Cache-Control': 'no-cache',
            'Connection': 'keep-alive'
        });

        /* Event handlers for SSE here */

        let keepAliveMS = 60 * 1000;

        function keepAlive() {
            // SSE comment for keep-alive. Chrome times out after two minutes.
            res.write(':\n\n');
            setTimeout(keepAlive, keepAliveMS);
        }

        setTimeout(keepAlive, keepAliveMS);
    });
};
On localhost everything works fine; the problem only occurs when I deploy the code to Heroku.
This is the simple AJAX call that I am using in my application to send some data to the server for processing.
When I add a large amount of data to the request, it fails; if I send less data with the AJAX request, it works fine.
$.ajax({
    url: 'Ajax.php',
    data: "data to send",
    type: 'POST',
    success: function (data) {
        console.log("success");
    },
    error: function (XMLHttpRequest, textStatus, errorThrown) {
        console.log("failed");
    }
});
Can anyone suggest why this is happening?
Heroku only allows 30 seconds for a request before it times out.
https://devcenter.heroku.com/articles/request-timeout
When this happens the router will terminate the request if it takes longer than 30 seconds to complete. The timeout countdown begins when the request leaves the router. The request must then be processed in the dyno by your application, and then a response delivered back to the router within 30 seconds to avoid the timeout.
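If the large payload genuinely needs more than 30 seconds of processing, one common workaround is to split it into smaller batches so each request finishes within the limit. A rough sketch, assuming your data is an array and Ajax.php can accept partial batches (both are assumptions about your application):
// Send `records` in sequential chunks, each as its own POST.
function sendInChunks(records, chunkSize) {
    if (records.length === 0) return;
    $.ajax({
        url: 'Ajax.php',
        type: 'POST',
        data: { batch: JSON.stringify(records.slice(0, chunkSize)) }
    }).done(function () {
        // Start the next chunk only after the previous one succeeds,
        // keeping each request comfortably under Heroku's 30-second limit.
        sendInChunks(records.slice(chunkSize), chunkSize);
    }).fail(function () {
        console.log("chunk failed");
    });
}

sendInChunks(allRecords, 100); // `allRecords` and the chunk size are illustrative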
It almost certainly has to do with system memory usage, especially in a scenario where you are parsing a lot of JSON objects back and forth.
And not only JSON data: any large payload sent via POST/GET can result in failure, because many JS-oriented frameworks such as Node.js still materialize it as objects in memory at the end.