While searching for tips to speed up performance, one site suggested that I reduce the number of HTTP requests: with the help of Firebug, a Firefox extension, you can find out how many resource requests are made by your ASP.NET web page. It is very important to reduce the number of HTTP requests to your server; this helps reduce server load and allows more visitors to access your website.
Based on this, I checked my site and noticed that it has to wait a long time for the HTTP response.
How can I reduce this time to speed up performance?
The most probable reason is an expensive operation in one of the events of the page load process, which you need to optimize. To confirm, you can record a DateTime value in Page_PreInit and check the total processing time in Page_PreRender.
Most of the time, the database call will be the culprit.
I have a website up and running. It worked fine on localhost with no such errors, but after I put it online it started showing a 507 Insufficient Storage page whenever two or three users used the same page at the same time.
For example, there is a web page chat.php which runs an AJAX request to update the chats every 700 milliseconds. Side by side, two AJAX requests keep checking for messages and notifications. These AJAX requests are set up using JavaScript's setInterval method. When the page is accessed concurrently by two or more users, it does not load and shows the error, and sometimes it shows a 429 Too Many Requests error. So at most 4 requests can occur at any one time on the user's end, and that is only if the scripts run at the same time. Could this happen because of the limited entry processes? The hosting provides me with 10 entry processes by default. Please reply and leave a comment if you want me to post the setInterval code, even though I think the problem is something else.
For example, there is a web page chat.php which runs an AJAX request to update the chats every 700 milliseconds.
These AJAX requests are set up using JavaScript's setInterval method.
When the page is accessed concurrently by two or more users, it does not load and shows the error, and sometimes it shows a 429 Too Many Requests error.
So at most 4 requests can occur at any one time on the user's end, and that is only if the scripts run at the same time.
The hosting provides me with 10 entry processes by default.
Please take some time to read through (your own) quotes.
You state that you make an AJAX call to the server every 700 ms, and you do so using setInterval. There is a maximum of 4 requests per user and 10 in total. If there are 2 or more visitors, things go haywire.
I think multiple things may be causing issues here:
You hit the 10-request limit because of multiple users.
When 2 users each make 4 requests you're at 8; if anything else makes a request to the server, you very quickly hit the maximum of 10. With 3 users making 4 requests each you're at 12, which, according to your question, exceeds your limit.
You might be DoSing your own server.
Using setInterval to make AJAX requests is bad, really bad. The problem is that if you hit your server every 700 ms and the server needs more than those 700 ms to respond, you will keep stacking up requests. You will eventually hit whatever the limit is with just one user (although in certain cases the browser might protect you).
How to fix
I think 10 connections (if it is actually 10 connections, which is unclear to me) is very low. Regardless, you should refactor your code to stop using setInterval. Use something like a Promise to know when a request has finished before scheduling the next one, so that requests cannot pile up; see the sketch below. As a rule of thumb, never use setInterval unless you have a very good reason to; it is almost always better to use more intelligent scheduling.
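A minimal sketch of that idea, assuming a hypothetical /chat/poll endpoint and your own updateChat() render function (both are placeholders, not from the question):

```javascript
// Schedule the next poll only after the previous request has finished,
// instead of firing blindly every 700 ms with setInterval.
function poll() {
  fetch('/chat/poll')                        // hypothetical endpoint
    .then(response => response.json())
    .then(data => updateChat(data))          // your own render function
    .catch(err => console.error('poll failed:', err))
    .finally(() => setTimeout(poll, 700));   // wait 700 ms *after* completion
}

poll();
```

With this shape there is never more than one request in flight per user, no matter how slow the server is.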
You might also want to look into being more efficient with those requests: can you merge the calls that check for messages and notifications into one?
I'm working on a project in which I have to develop a simple PHP-based web module from which users (admins) can send SMS messages (follow-ups) to students, for advertising and other needs.
The SMS API is very simple: I just need to send a GET request to a cross-origin domain along with the phone number and message.
I tested it with file_get_contents("sms_api_url?credentials"); and it works fine.
What worries me is that the SMS will be sent to tons of numbers, so I have to send the request many times in a loop, which will take a lot of time and, I think, consume too many resources.
Also, the max execution time for PHP is set to 30 seconds, which I don't want to change.
I thought about using client-side JavaScript to send the cross-origin requests in a loop so that it wouldn't affect my server, but that wouldn't be secure, as it would reveal the API credentials.
What technology should I use to accomplish my goals and send tons of GET requests efficiently?
You've told us nothing about the actual volume you need to handle, the metrics for processing/connection time, nor what constraints there are on the implementation.
As it stands this is way too broad to answer. But some approaches you might consider are:
1) Running concurrent requests - but note that, just like domain sharding, this can undermine your bandwidth if overused
2) You can have PHP scripts running indefinitely outside the webserver (using the CLI SAPI) and these can be launched from a web session.
I thought about using client-side JavaScript to send the cross-origin requests in a loop so that it wouldn't affect my server, but that wouldn't be secure, as it would reveal the API credentials.
If you send directly to the endpoint, then yes, you'd need the credentials in the browser. But if you implement a proxy script on your web server which injects the credentials, you can invoke it from the browser without ever exposing them.
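For illustration only, here is what the browser side could look like, assuming a hypothetical send_sms.php proxy on your own server that holds the credentials and forwards the request to the provider (the endpoint name and payload shape are made up):

```javascript
// The page only talks to your own server; the API credentials stay there.
fetch('/send_sms.php', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ to: '+15551234567', text: 'Hello from the proxy' })
})
  .then(response => response.json())
  .then(result => console.log('queued:', result))
  .catch(err => console.error('send failed:', err));
```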
Using cron has certain advantages, but you really don't want to spawn a task from crond to send a single SMS message; it needs to run in batches, and you need to manage the concurrency.
You might want to consider switching to a different aggregator that offers bulk processing.
Regardless of the approach, you will need a way to store the messages/phone numbers and a locking mechanism around the retrieval processing.
Personally, I'd be tempted to look at using an MTA for this or perhaps even Kannel - but that's more an approach for handling volumes in excess of 300,000 per day.
Sending as many network requests as needed and doing it in less than 30 seconds are two requirements that somewhat contradict each other. Also, raw "efficiency" can just mean squeezing every last resource out of the server, which may not be desirable.
That said, I think the key points are:
I may be wrong, but as far as I know there are only two ways to prevent an unauthorised party from consuming a web service: private credentials and IP filtering. Neither is possible in browser-based JavaScript.
Don't make a human being stare at the screen until a task of this kind completes. There's absolutely no need to, and it can even cause the task to abort.
If you need to send the same text to different recipients, find out whether the SMS provider has an API that allows you to do it in a single API request. Large batch deliveries become one or two orders of magnitude harder when this feature is not available.
In short you need:
A command line script
A task scheduler (e.g. cron)
A preference for server stability over maximum efficiency (you may even want to throttle your requests)
Send the requests from the server, but don't do it in the PHP script that generates the page.
Instead, store information about the desired messages in a database.
Write another program which periodically checks the database for unsent messages and makes the calls to the API; you could run it with cron. A rough sketch of such a worker follows.
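To keep the examples in one language, here is a sketch of such a worker in JavaScript (Node.js 18+, which ships a global fetch); the same structure translates directly to a PHP CLI script. fetchUnsentMessages, markAsSent and SMS_API_URL are hypothetical placeholders for your own database access and provider details:

```javascript
// Run from cron (e.g. every minute): pull a batch of unsent messages,
// call the provider's GET API for each one, and record the result.
async function run() {
  const pending = await fetchUnsentMessages(100);          // e.g. SELECT ... WHERE sent = 0 LIMIT 100
  for (const msg of pending) {
    const url = `${SMS_API_URL}?number=${encodeURIComponent(msg.number)}` +
                `&text=${encodeURIComponent(msg.text)}`;
    const res = await fetch(url);                          // the same GET the provider already accepts
    if (res.ok) {
      await markAsSent(msg.id);                            // e.g. UPDATE ... SET sent = 1
    }
  }
}

run().catch(err => { console.error(err); process.exit(1); });
```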
I am building a mobile app using jQuery Mobile and jQuery, with a PHP back end.
My problem is that on certain pages I am sending and receiving multiple AJAX requests, which visibly slow down performance and cause memory leaks, which in turn lead to the app crashing on slow network connections.
What are the possible ways of optimizing these AJAX requests?
Note: I am sending some AJAX requests periodically (about once a second).
Some AJAX requests are sent based on events.
First off, correctly written Ajax code does not leak memory. So, if you actually have a leak, that is likely caused by incorrectly written code, not by using Ajax.
Then, there are a number of optimizations you can consider.
Combine multiple requests to your server into one larger request. This saves roundtrips to the server, saves bandwidth, saves battery and increases performance.
Don't poll every second. Be smarter about how often you poll your server.
Lengthen the polling interval, vary it based on likely activity, switch to "long polling", or switch to a webSocket so you can do server push with no polling at all.
Analyze what is causing excessive memory consumption and fix bugs or redesign to curtail that. There is no reason that lots of Ajax calls should "leak" memory. It will chew up battery life, but need not leak memory if coded correctly.
Oh, and don't hesitate to pull up some high-scale social web apps that already exist, open the debugger, switch to the network tab and study the heck out of what they are doing. You may as well learn from high-scale apps that have already solved this issue.
Long Polling Is Better Than Polling
If you poll at a set interval, it usually makes more sense to set a timeout on the server side and hold back the response until there is something to return or a given number of loops has passed. This is especially sensible for SPAs.
If you do keep polling, however, you should increase the interval with every empty response you get, as in the back-off sketch below.
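A small sketch of that back-off idea; the /poll endpoint, the {empty: ...} response shape and the render() handler are assumptions:

```javascript
let interval = 10000;            // start at 10 seconds
const maxInterval = 60000;       // never wait longer than a minute

function pollWithBackoff() {
  fetch('/poll')
    .then(response => response.json())
    .then(data => {
      if (data.empty) {
        interval = Math.min(interval * 2, maxInterval);  // nothing new: back off
      } else {
        interval = 10000;                                // activity: poll eagerly again
        render(data);                                    // your own handler
      }
    })
    .finally(() => setTimeout(pollWithBackoff, interval));
}

pollWithBackoff();
```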
Group Requests By Events, Not By Entity
Let's say you poll every 10 seconds to retrieve new messages, and you also poll every 10 seconds to retrieve new notifications. Instead of making two requests, make a single AJAX request and return one combined JSON response, which you then use to update DOM elements on the client side.
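For example (the /updates endpoint, the since parameter and the response shape are assumptions), one combined call instead of two:

```javascript
fetch('/updates?since=' + lastUpdateId)          // one request covers both features
  .then(response => response.json())
  .then(({ messages, notifications }) => {
    renderMessages(messages);                    // your own DOM update helpers
    renderNotifications(notifications);
  });
```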
Use Server-Sent Events Or Sockets If Possible
Sockets or Server-Sent Events will result in far fewer requests and will only notify you when something has actually happened on the server side. Here is a good comparison.
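A minimal Server-Sent Events sketch on the client side; the /events URL is an assumption, and the server has to respond with the text/event-stream format:

```javascript
const source = new EventSource('/events');       // one persistent connection

source.addEventListener('message', event => {
  const data = JSON.parse(event.data);
  renderMessages(data);                           // your own handler
});

source.onerror = () => console.warn('SSE connection lost; the browser will retry');
```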
Here are a few things you can do:
Firstly
Get rid of all the memory leaks. Put your application into a sandbox and give it the toughest time you can in order to surface any memory leaks. Investigate them and correct your code.
Then
Reduce the number of requests. Carefully research the best thresholds in your application and only trigger requests at those points. Try to batch your requests into a single request whenever possible.
Use GET requests whenever possible. They are comparatively simple and fast. This step is important for an application that triggers a lot of requests during its operation.
Choose the data format carefully. It could be plain text, JSON, or XML. Figure out which one has the lowest impact on your application and switch to it. Configure your server to use data compression if possible.
Optimize the server. Use proper Expires or Cache-Control headers for the content being served. Use ETags if possible.
Try putting your content on a CDN. This may place the resources geographically closer to the user and make your application work faster.
Also, if you want to build a chat server or send messages to the server quickly, you should use WebSockets, for example via socket.io.
There are mainly three ways to optimize (minimize) jQuery AJAX calls (a short sketch follows the list):
Either pass the done callback function as an argument when calling your function,
or return the jqXHR object from the function and assign the done callback to it,
or switch to using jQuery.ajax() instead and pass the entire options object.
Please refer to this link: How to optimize (minimize) jQuery AJAX calls
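As a quick sketch of the second option from the list above (the /api/data endpoint and parameters are made up), return the jqXHR and let each caller attach its own handlers:

```javascript
function loadData(params) {
  // Return the jqXHR so callers decide what happens on success or failure.
  return $.ajax({ url: '/api/data', data: params, dataType: 'json' });
}

loadData({ id: 42 })
  .done(function (response) {
    console.log('got', response);
  })
  .fail(function (jqXHR, textStatus) {
    console.error('request failed:', textStatus);
  });
```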
You can use the caching feature of the AJAX call for this: fetch all the data during the first request and store it in a cache, so that subsequent interactions work on the cached data instead of making new AJAX requests.
You can also optimize AJAX request and response times by reducing the amount of data requested and sent in each call, that is, by stripping out unwanted data.
Other than that, optimization depends on your code and database logic. Well-optimized code and database logic will increase performance and reduce app crashes.
Create a single variable for the AJAX request and reuse it.
Each second you create a new XHR request instance, which stays in memory until its done callback executes. The better way is to create the next AJAX request when the previous one completes, so that you only ever have one XHR request instance at a time (see the sketch below).
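A short sketch of that pattern with jQuery (the /chat/poll URL and handleData handler are placeholders): only one jqXHR instance exists at a time, because the next request is scheduled from the completion of the previous one.

```javascript
function requestNext() {
  $.ajax({ url: '/chat/poll', dataType: 'json' })
    .done(handleData)                       // your own handler
    .always(function () {
      setTimeout(requestNext, 1000);        // schedule only after the previous request completes
    });
}

requestNext();
```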
After the data has been sent to the server, clean up your object data, for example:
array.length = 0;   // empty the array in place
object = null;      // drop the reference so it can be garbage collected
image.src = "";     // release the image data
file = null;
As #jfriend00 said, AJAX requests don't generate memory leak issues...
it is probably your code that does!
The best way to do what you need is to open a socket layer to your data-layer service so that, instead of polling, you are notified when a domain-model change occurs.
Check out Socket.io...
If you don't want to implement a socket layer, I think there aren't many ways to increase performance.
One thing you can do, in addition, of course, to code improvement work (refactoring, etc.), is to implement a "tail management system":
Study how promises work (ES6, jQuery, Q, Bluebird);
Create a QueueService, a simple JavaScript class that knows how to serialize all (AJAX) tasks (promises); a minimal sketch follows this list;
Interpose the QueueService between your controllers and your data-layer access services;
Implement a simple, lightweight cache system that prevents unnecessary AJAX requests!
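A minimal sketch of what such a QueueService could look like (the names and structure are my own, not from the answer): every enqueued task, a function returning a promise, runs only after the previous one has settled, so AJAX calls never pile up in parallel.

```javascript
class QueueService {
  constructor() {
    this.tail = Promise.resolve();          // the end of the current chain
  }

  enqueue(task) {
    // Chain the task onto the tail; a failed task does not break the chain.
    const result = this.tail.then(() => task());
    this.tail = result.catch(() => {});
    return result;
  }
}

// Usage: the second call waits for the first, however long it takes.
const queue = new QueueService();
queue.enqueue(() => fetch('/api/messages'));
queue.enqueue(() => fetch('/api/notifications'));
```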
I am trying to optimize my site's speed and I'm using the great tool at pingdom.com. Right now, over 50% of the time it takes to load the page is "Wait" time, as shown in the screenshot below. What can I do to reduce this? Also, how typical is this figure? Are there benchmarks for this? Thanks!
EDIT:
OK, let me clarify a few things. There are no server-side scripts or database calls going on, just HTML, CSS, JS, and images. I have already done some things like pushing JS to the end of the body tag to get parallel downloads. I am aware that main.html and templates.html add to the overall wait time by being fetched synchronously after js.js downloads; that's not the problem. I am just surprised at how much "wait" time there is for each request. Does distance to the server affect this? What about being on a shared server, does that affect the wait time? Is there any low-hanging fruit to remedy these issues?
The most common reason for this in the case of Apache is the use of reverse DNS lookups. This means the server tries to figure out the name of your machine each time you make a request. That can take several seconds, which explains why you see a long wait time followed by a very quick load: the problem is not bandwidth.
The obvious solution is to disable hostname lookups in /etc/httpd/conf/httpd.conf:
HostnameLookups Off
However, this is usually not enough. In many cases Apache still does a reverse lookup even when you have disabled hostname lookups, so you need to take a careful look at each line of your Apache config. In particular, one of the most common culprits is logging. By default on many Red Hat/CentOS installations, the log format includes %h, which stands for "hostname" and requires Apache to do a reverse lookup. You can see this here:
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
You should change those %h entries to %a (the client IP address) to solve this problem.
If you have multiple server requests which the page is waiting on, make sure those requests are sent asynchronously in parallel so that you are not serializing them.
The slowest possible way to make multiple requests is to send one request, wait for its response, send the next request, wait for its response, and so on. It's usually much faster to send all requests asynchronously and then process the responses as they arrive. This shortens the total wait time to the longest wait time of any single request rather than the cumulative wait time of all requests.
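A sketch of the difference (the URLs are placeholders); the parallel version's total wait equals the slowest single request rather than the sum of all of them:

```javascript
async function loadSequential() {
  const first = await fetch('/api/first').then(r => r.json());    // waits fully...
  const second = await fetch('/api/second').then(r => r.json());  // ...before this even starts
  return [first, second];
}

async function loadParallel() {
  return Promise.all([
    fetch('/api/first').then(r => r.json()),
    fetch('/api/second').then(r => r.json()),                     // both in flight at once
  ]);
}
```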
If you are only making one single request, then all you can do on the client-side of things is to make sure that the request is sent to the server as early as possible in the page loading sequence so that other parts of the page can be doing their business while the request is processing, thus getting the initial request started sooner (and thus finishing sooner).
The wait time, also known as time to first byte (TTFB), is how long it takes for the server to send the first byte after the connection is initiated. If this is high, it means your server has to do a lot of work to render the page before sending it. We would need more information about what your site is doing to render the page.
TTFB is directly influenced by the "physical" distance between browser and server. A CDN proxy is the best way to shorten that distance. This, coupled with native caching capabilities, helps provide a swifter response by serving cached objects from the nearest POP (point of presence).
The effect will depend on the users' geo-location and the CDN's spread. Still, you can expect a significant improvement of 50-70% or more.
Speaking from experience, I have seen cases in which 90% of the content was cached and delivered directly from a proxy on another continent, on the other side of the globe.
This is an issue with the server... According to Pingdom, "The web browser is waiting for data from the server" is what defines the "Wait" time.
There isn't much you can do from the JavaScript or front-end code side to fix this.
I'm developing an AJAX project, but I'm confused about something.
I have 3 functions that send data to the server; every function has a specific job.
Is it better to group the AJAX requests that would be sent from each function and send them as one big request, which will reduce my request count and time, or to send each request separately, which will reduce the execution time of my code on the server side but will make many requests?
P.S.: For my application, it works in both cases.
If you can consolidate them into a single request neatly, I'd suggest you do that. Browsers restrict how many requests you can make simultaneously (it can be as low as 2), as do many servers.
Of course, as always, there are exceptions. If you have some data that you need to query every 10 seconds, and other data you need to query every 10 minutes, it's really not necessary to query the 10-minute data along with the 10-second data every 10 seconds.
Perhaps use a single request with options that control what is sent back. You can request large amounts of data, or small amounts, from the same endpoint depending on the options you pass along with the request itself (a sketch follows below).
Just theory.
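A rough sketch of that idea (the endpoint and the include parameter are assumptions): one endpoint, and the options decide how much comes back.

```javascript
function fetchData(options) {
  const params = new URLSearchParams(options);
  return fetch('/api/data?' + params.toString()).then(response => response.json());
}

// Every 10 seconds: only the cheap, frequently changing data.
fetchData({ include: 'status' });

// Every 10 minutes: the full payload, same request shape.
fetchData({ include: 'status,history,settings' });
```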
There are several factors involved here.
Putting them together on the client has the advantage of the client doing more of the work, which is good.
All but the newest browsers only allow 2 requests to the same domain at a time. If you have a lot of latency, this would cause you to want to group requests going to the same domain. However, newer browsers allow 8 requests at a time, so this problem will gradually go away.
But on the other hand
Do the requests need to all be made at the same time? Keeping the requests separate will allow you to maintain flexibility.
Probably most important: keeping the requests separate will probably result in more maintainable, straightforward code.
I would suggest doing whatever keeps the code in the browser straightforward. JavaScript (or frameworks that do AJAX for you) is difficult enough to maintain and coordinate. Keep it simple!