Everything works fine on localhost, but a problem occurs when I deploy the code to Heroku.
This is the simple AJAX call I am using in my application.
I use AJAX to send some data to the server for processing.
When I add a large amount of data to the request, it fails.
If I send less data with the AJAX request, it works fine.
$.ajax({
    url: 'Ajax.php',
    data: "data to send",
    type: 'POST',
    success: function (data) {
        console.log("success");
    },
    error: function (XMLHttpRequest, textStatus, errorThrown) {
        console.log("failed");
    }
});
Can anyone suggest why this is happening?
Heroku only allows 30 seconds for a request before it times out.
https://devcenter.heroku.com/articles/request-timeout
When this happens the router will terminate the request if it takes longer than 30 seconds to complete. The timeout countdown begins when the request leaves the router. The request must then be processed in the dyno by your application, and then a response delivered back to the router within 30 seconds to avoid the timeout.
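One common way to stay under that limit is to avoid doing the heavy work inside the request itself: enqueue it, return immediately, and poll for the result, so every individual request finishes quickly. Below is a minimal sketch of that pattern; the /jobs endpoints and the job/status fields are hypothetical, not part of the question's code.
// Hypothetical sketch: enqueue the heavy work, then poll for completion.
// "/jobs" and "/jobs/" + id are assumed endpoints for illustration only.
$.ajax({
    url: '/jobs',          // server enqueues the work and returns a job id
    type: 'POST',
    data: "data to send",
    success: function (job) {
        var poll = setInterval(function () {
            $.getJSON('/jobs/' + job.id, function (status) {
                if (status.done) {
                    clearInterval(poll); // stop polling once finished
                    console.log("success");
                }
            });
        }, 2000); // each poll is a short request, well under the 30 s limit
    }
});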
It can also be related to memory usage, especially in a scenario where you are parsing a lot of JSON objects back and forth.
And not just JSON: any large payload sent via POST/GET can cause a failure, because many JS-oriented frameworks such as Node.js still parse the whole body into objects on the server.
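If the payload size itself is the trigger, one option is to split it into several smaller requests; a minimal sketch, assuming the server (Ajax.php from the question) can reassemble the pieces using the hypothetical chunkIndex/chunkCount parameters:
// Hypothetical sketch: send a large string in smaller POSTs.
// chunkIndex/chunkCount are assumed parameters; the server must reassemble.
function sendInChunks(bigString, chunkSize) {
    for (var i = 0, n = 0; i < bigString.length; i += chunkSize, n++) {
        $.ajax({
            url: 'Ajax.php',
            type: 'POST',
            data: {
                chunk: bigString.slice(i, i + chunkSize),
                chunkIndex: n,
                chunkCount: Math.ceil(bigString.length / chunkSize)
            }
        });
    }
}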
Related
I was calling an AJAX function on the page load event. When I loaded the page, there was no corresponding entry in the server log (I log entry and exit for every MVC server method). The request from JavaScript to the server took two to three minutes. The strange thing is that it works well when I test it locally and on the test server; it occurs only when I deploy the project to the real server!
I found some posts about different ways to make the call, xmlhttp and $.ajax(). I have already used both, but the problem persists.
This is my JavaScript code.
$(document).ready(function () {
    $.ajax({
        type: "GET",
        url: allservicesUrl,
        async: true,
        success: function (result, status, xhr) {
            var services = JSON.parse(xhr.responseText);
            for (var i in services) { // declare i to avoid leaking a global
                createServicecard(services[i]);
            }
        },
        error: function (xhr, status, err) {
            alert(xhr.responseText);
        }
    });
});
I want it to execute immediately. How do I fix this problem?
Thanks for the advice.
I saw an answer and the comments, then looked at the dev tools (network tab), and finally found the problem.
Now that I have corrected it, it works well. Thank you very much.
PS. The problem: connecting to the internet from inside a closed network.
Use the debugging/developer tools in your browser to troubleshoot this issue.
Look in the console to see if you have any JS errors, then look in the network tab. Clear the entries and reload the AJAX call.
You should be able to see whether your script is slow to send the request or the server is slow to answer.
Until you figure out whether the bottleneck is in the script or on the server, you can't fix it.
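A minimal sketch of that measurement, using the same call from the question: log a timer around the request and compare it with the timing column in the network tab.
console.time('allservices'); // starts a named timer in the console
$.ajax({
    type: "GET",
    url: allservicesUrl
}).always(function () {
    console.timeEnd('allservices'); // total time from send to response
});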
This behavior was not present all the time; it appeared out of nowhere about a month ago and then disappeared just as suddenly. The problem is that I can't identify what happened, and I have no server debugging tools because it only occurs in production.
Roughly 100 AJAX requests are triggered at the same time using a loop like this:
let url = "example.com/";
var methods = ["method1", "method2", "method3", "method4"]; // roughly 100 in practice
$.each(methods, function (index, value) {
    $.ajax({
        url: url + value,
        method: "POST",
        data: { params: "whatever", otherParams: "whatever" }
    }).done(function (data) {
        console.log(data);
    });
});
On the server side (Apache + PHP) there are selects, updates, and inserts against a relational database. Each request runs in its own thread/process, since Apache is listening for each one.
When I look at the network console, all the requests start at (roughly) the same time, but here is the problem: each response only arrives after the previous one finishes. If request 1 starts at 0 and takes 5 seconds, request 2 starts at 5, and request 3 starts when request 2 has finished. All browsers show the same behavior.
The best explanation I could come up with is that the database is locking some table when it performs an update or insert; some tables are huge and, without indexes, could take a long time. However, the staging environment points to the same database and works perfectly asynchronously. So what is going on? Is it possible that PHP or Apache could get stuck this way for some reason? I also had the wilder idea that it could be a problem writing log files in the OS (Debian), but I have no idea how that works. I would be glad for any suggestion. Maybe I could reproduce the problem in a controlled environment and do something to prevent it from happening again.
As additional information, the API has two clients, one in Angular and the other in JavaScript + PHP. The behavior is exactly the same with both clients.
I am developing a project using Django (Python) and JavaScript. When I trigger a long-running (time-consuming) process, approximately 20 minutes or more, from a view, the process starts successfully. I use a loader driven by AJAX to notify the user that the process is running; when the process completes, the loader stops and changes to the completed state.
The issue is that every time, 14.59 minutes after the process starts, the loader stops and the status changes to completed even though the process running in the background has not finished. The page crashes after that point. After the process completes, I bind the result under a tag in the web page; in that tag the 504 (Gateway Timeout) error appears, and the web console logs "Failed to load resource: the server responded with a status of 504 (Gateway Timeout)". If anyone knows a fix, please help me.
Is Django closing the connection after that time? If so, is it possible to set a timeout in the Django settings (settings.py)? I tried setting a timeout in the AJAX call, but the same issue returns. My suspicion is the Django development server: does it have a timeout? While searching for this issue I found the same type of problem reported for the nginx server. Does Django depend on nginx, or use it?
I have tried to provide all the information regarding my issue; if any further clarification is needed, please let me know.
The server always closes its connection after a particular amount of time. This can be modified by changing its configuration, but that would not be a good approach.
I would suggest you reduce the time taken by the script by:
Optimizing your code.
Caching data that is used often.
Using backend services that process your data and store it in temporary storage such as Memcached or Redis, which can be accessed quickly.
You should also handle the timeout of the AJAX call:
$.ajax({
    ...
    timeout: 1000, // milliseconds, so this is a 1-second limit
    error: function (jqXHR, textStatus, errorThrown) {
        if (textStatus === "timeout") {
            // do something on timeout / show an appropriate message
        }
    }
});
References for tweaking performance and optimizing coding techniques, thereby reducing time:
https://docs.djangoproject.com/en/1.10/topics/db/optimization/
https://docs.djangoproject.com/en/1.10/topics/performance/
https://realpython.com/blog/python/caching-in-django-with-redis/
I have a Javascript function that runs every 5 seconds and requests information from the same server via a jQuery AJAX call. The function runs indefinitely once the page is loaded.
For some reason the AJAX query is failing about once every minute or two, and showing
ERR_EMPTY_RESPONSE
in the console. The odd thing is, it fails for exactly 60 seconds, then starts working fine for another minute or two.
So far I've tried with no success:
Different browser
Different internet connection
Changing the polling time of the function (it still fails in 60-second intervals; e.g. running every 10 seconds it fails 6 times, or 5×12, or 1×60)
Web searches, which suggested flushing the IP settings on my computer
I never had any problem on my last server, which was a VPS. I'm now running this on shared hosting with GoDaddy and wonder if there's a problem at that end. Other sites and AJAX calls to the server work fine during the downtimes, though.
I also used to run the site over HTTPS; now it's over plain HTTP only. I'm not sure if that's relevant.
Here's the guts of the function:
var interval = null;
function checkOrders() {
    interval = window.setInterval(function () {
        $.ajax({
            type: "POST",
            dataType: "json",
            url: "http://www.chipshop.co.nz/ajax/check_orders.php",
            data: { shopid: 699 },
            error: function (errorData) {
                // handle error
            },
            success: function (data) {
                // handle success
            }
        });
    }, 5000); // repeat until switched off, polling every 5 seconds
}
Solved: it turned out the problem was with GoDaddy hosting. Too many POST requests resulted in a 60-second 'ban' from accessing that file; changing to GET avoided this.
This page contains the answer from user emrys57:
For me, the problem was caused by the hosting company (GoDaddy) treating POST operations which had substantial response data (anything more than tens of kilobytes) as some sort of security threat. If more than 6 of these occurred in one minute, the host refused to execute the PHP code that responded to the POST request during the next minute. I'm not entirely sure what the host did instead, but I did see, with tcpdump, a TCP reset packet coming as the response to a POST request from the browser. This caused the HTTP status code returned in a jqXHR object to be 0.
Changing the operations from POST to GET fixed the problem. It's not clear why GoDaddy impose this limit, but changing the code was easier than changing the host.
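A minimal sketch of that fix, applied to the polling call from the question above; only the request method changes, and jQuery appends the data to the query string for GET:
$.ajax({
    type: "GET", // was "POST"; GET avoids the host's POST throttling
    dataType: "json",
    url: "http://www.chipshop.co.nz/ajax/check_orders.php",
    data: { shopid: 699 } // sent as ?shopid=699 on the query string
});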
I have a very strange problem that occurs very rarely, and only in our production environment.
The production setup is:
Apache web server as the front layer
Apache Tomcat 6.0 as the application server (linked to the Apache web server via mod_jk)
I have a custom-made AJAX-based RPC component that uses jQuery for the AJAX calls. The data is sent using the POST method.
The client-side data (JavaScript objects) is sent to the server in JSON format and deserialized into Java objects on the server side.
The RPC call is executed by providing the following information:
var jsonParamObj = new Object();
jsonParamObj.param0 = objParam0;
var params = new Object();
params.jsontext = toJsonString(jsonParamObj);
where jsontext contains the real data to be transmitted. I am using the toJsonString JavaScript function available as an open-source JSON script (I previously used JSON.stringify but had the same problem).
Here is the jQuery call:
$.ajax({
    async: async,
    data: params,
    dataType: "json",
    type: "POST",
    url: this.ajaxAction + qs,
    contentType: "application/x-www-form-urlencoded; charset=UTF-8",
    error: function (XMLHttpRequest, textStatus, errorThrown) {
        alert('Connectivity Issue : ' + textStatus + ' Error : ' + errorThrown + ' Response : ' + XMLHttpRequest.responseText);
    },
    success: function (jsonobj) {
        if (jsonobj.JSON.ajaxSysError) {
            alert(jsonobj.JSON.ajaxSysError.message);
            return;
        }
        // do other work
    }
});
Now, the problem is that sometimes the data sent in params does not reach the server (neither Apache nor Tomcat), even though I have enabled the maximum level of logging verbosity. Whatever is sent through the query string (see qs), however, does reach the server.
The client browser is IE 7 (Windows XP Media Edition).
Can you share some thoughts that would help me debug this issue?
Thanks for reading this long question.
Jatan
Install Fiddler and look at the HTTP request that IE is sending.
Also, put the AJAX call in a try/catch block and check whether you're getting any JavaScript errors.
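A minimal sketch of that second suggestion, wrapping the call from the question (params, this.ajaxAction, and qs are the question's own names) so that any synchronous error thrown while building or sending the request is surfaced:
try {
    $.ajax({
        data: params,
        dataType: "json",
        type: "POST",
        url: this.ajaxAction + qs
    });
} catch (e) {
    // a synchronous JS error here would explain a request that never leaves the browser
    alert('JS error before the request was sent: ' + e.message);
}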