Client-side caching (with JavaScript) - javascript

I have an API that I'm querying via JavaScript on the client side and then displaying the result on the page (again via JS).
I have a limit of 5 queries per second. In practice I can send a maximum of 11 API calls in one loop.
What I need:
I need some way around the 11-query limit, because I usually need to make about 50 calls in one loop.
I need to make sure that I'm not sending the same API requests on every page refresh.
The obvious solution is caching. To meet the speed requirements, I would ideally like to cache the data on the client side.
The question:
How? I don't think cookies are a good solution because of the 4KB size limit. I heard about Google Gears (which Google used for offline Gmail), but a recent search showed that it no longer exists.

You can use localStorage, but only if you need the cache to persist between browser refreshes. If you don't, you can simply hold the data in memory, e.g. in an array or object of results.
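For example, here is a rough sketch of a localStorage cache with a time-to-live; the key scheme and the one-hour TTL are just assumptions for illustration:

```js
// Minimal sketch: cache an API response in localStorage with a TTL.
// The key naming scheme and the one-hour TTL are assumptions, not requirements.
const TTL_MS = 60 * 60 * 1000; // 1 hour

async function cachedFetch(url) {
  const key = 'apiCache:' + url;
  const cached = JSON.parse(localStorage.getItem(key) || 'null');
  if (cached && Date.now() - cached.storedAt < TTL_MS) {
    return cached.data; // still fresh: no network request on this page load
  }
  const data = await fetch(url).then(r => r.json());
  localStorage.setItem(key, JSON.stringify({ storedAt: Date.now(), data }));
  return data;
}
```

Keep in mind that localStorage typically gives you around 5 MB per origin and only stores strings, so larger responses have to be JSON-encoded and kept within that budget.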

Related

How to efficiently send tons of GET requests with PHP

I'm working on a project in which I have to develop a simple PHP-based web module from which the users (admins) can send SMS messages (follow-ups) to students, for advertisement and other needs.
The SMS API is very simple: I just need to send a GET request to a cross-origin domain along with the phone number and message.
I tested it with file_get_contents("sms_api_url?credentials"); and it works fine.
What worries me is that the SMS will be sent to TONS of numbers, so I have to send the request multiple times in a loop, which will take a lot of time and, I think, consume too many resources.
Also, the max execution time for PHP is set to 30 seconds, which I don't want to change.
I thought about using client-side JavaScript to send the cross-origin requests in a loop so that it won't affect my server, but that wouldn't be secure as it would reveal the API credentials.
What technology should I use to accomplish my goals and send tons of GET requests efficiently?
You've told us nothing about the actual volume you need to handle, the metrics for the processing/connection time, nor what constraints there are on the implementation.
As it stands this is way too broad to answer. But some approaches you might consider are:
1) Running concurrent requests - but note that, just like domain sharding, this can undermine your bandwidth if overused
2) You can have PHP scripts running indefinitely outside the webserver (using the CLI SAPI) and these can be launched from a web session.
I thought about using client-side JavaScript to send the cross-origin requests in a loop so that it won't affect my server, but that wouldn't be secure as it would reveal the API credentials.
If you send directly to the endpoint, then yes, you'd need the credentials in the browser. But if you implement a proxy script on your webserver which injects the credentials, the browser only ever talks to your proxy and the API credentials are never exposed.
Using cron has certain advantages - but you really don't want to be spawning a task from crond to send one SMS message - it needs to run in batches, and you need to manage the concurrency.
You might want to consider switching to a different aggregator who can offer bulk processing.
Regardless of the approach, you will need a way to store the messages/phone numbers and a locking mechanism around retrieval and processing.
Personally, I'd be tempted to look at using an MTA for this or perhaps even Kannel - but that's more an approach for handling volumes in excess of 300,000 per day.
Sending as many network requests as needed and staying under the 30-second limit are two requirements that somewhat contradict each other. Also, raw "efficiency" can simply mean squeezing every last resource out of the server, which may not be desirable.
That said, I think the key points are:
I may be wrong but, as far as I know, there are only two ways to prevent an unauthorised party from consuming a web service: private credentials and IP filtering. Neither is possible in browser-based JavaScript.
Don't make a human being stare at the computer until a task of this kind completes. There's absolutely no need to, and it can even cause the task to abort.
If you need to send the same text to different recipients, find out whether the SMS provider has an API that lets you do it in a single request. Large batch deliveries get one or two orders of magnitude harder when this feature is not available.
In short you need:
A command line script
A task scheduler (e.g. cron)
Prefer server stability to maximum efficiency (you may even want to throttle your requests)
Send the requests from the server, but don't do it in the PHP script that generates the page.
Instead, store information about the desired messages in a database.
Write another program which periodically checks the database for unsent messages and makes the calls to the API. You could run it using cron.
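For example, a minimal sketch of such a worker in Node-style JavaScript (the question is about PHP, but the structure is identical in any language; the queue array stands in for the "unsent messages" table and sendSms() for the aggregator's GET endpoint, both of which are placeholders):

```js
// Sketch of the queue-and-worker pattern: the web page only inserts rows,
// a separately scheduled process drains them. The in-memory array stands in
// for the database table; a real worker would query the database instead.
const queue = [
  { phone: '+10000000000', text: 'Hello' },
  { phone: '+10000000001', text: 'Hello' },
];

// Hypothetical call to the SMS aggregator's GET endpoint; the URL and
// parameter names are placeholders, not a real API.
async function sendSms(phone, text) {
  const url = 'https://sms.example.com/send?to=' + encodeURIComponent(phone) +
              '&msg=' + encodeURIComponent(text);
  const res = await fetch(url); // built-in fetch assumes Node.js 18+
  return res.ok;
}

async function drainQueue() {
  for (const msg of queue.splice(0)) {      // take everything currently queued
    try {
      await sendSms(msg.phone, msg.text);   // throttle here if the provider requires it
    } catch (err) {
      queue.push(msg);                      // leave failed messages for the next run
    }
  }
}

// Run once per invocation; cron (or a systemd timer) handles the scheduling.
drainQueue();
```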

Better way to design a web application with data persistence

For my web apps I always wonder which is the best way to design a proper web application with data persistence. At the moment I design a single HTML page each time, and all the content and data uploads are managed with jQuery AJAX requests, based on a RESTful model, to a remote server which takes care of the database. But in the end that sometimes means a lot of AJAX calls, and getting a huge amount of data can take a few seconds, which is not user-friendly.
Is there something like a guideline, or a standard way of developing, for designing web apps?
I've already looked over the Web Workers and WebSockets JavaScript APIs, but never used them yet. Has anybody tried them? Do they allow better performance than AJAX exchanges?
What is your way of developing web apps?
This isn't really the place for questions like this, but I will give you a few brief pointers.
AJAX requests shouldn't take long; if they are consistently slow then the problem is most likely your server-side code and inefficiencies there. WebSockets aren't going to give you any benefit over AJAX if your server is slow.
A common design is to load the minimal dataset required for the page to function, AJAXing any other required data to get the page responsive as quickly as possible.
Caching and pre-fetching are great ways to speed up your site. For instance, if you are running a MySQL query over and over, run it once, put the results in a caching service like memcached or MongoDB with an expiration of an hour (or so), and serve the cached response; this will speed up your server response times. Pre-fetching is anticipating what your user is going to do next and loading that data in the background without any user interaction.
Consider using localStorage or IndexedDB if your users are loading the same data repeatedly.

Best practice to use the same AJAX in multiple browser windows?

I am developing a website that has some sort of realtime update.
Now the website is generated with a javascript variable of the current ID of the dataset.
Then, at an interval of a few seconds, an AJAX call is made passing the current ID, and if there's something new the server returns it along with the latest ID, which is then updated in the JavaScript.
Very simple, but here comes the problem.
If the user opens the same page multiple times, every page makes these AJAX requests, which produces heavy server load.
Now I thought about the following approach:
The website is loaded with a javascript variable of the current timestamp and ID of the current dataset.
My desired refresh interval is for example 3 seconds.
In the website an interval counter counts up every second, and every time the timestamp reaches a state where (timestamp % 3 === 0) returns true, the content is updated.
The link looks like http://www.example.com/refresh.php?my-revision=123&timestamp=123456
Now this should ensure that every browser window calls the same URL.
Then I can turn on browser level caching.
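For illustration, the shared timestamp could be computed by rounding the current time down to the refresh interval, so every open window builds the identical URL (the URL is the example one above):

```js
// Round the current time down to the 3-second refresh interval so that all
// open windows request exactly the same URL and the browser cache can serve it.
const REFRESH_INTERVAL = 3; // seconds

function buildRefreshUrl(revision) {
  const seconds = Math.floor(Date.now() / 1000);
  const bucket = seconds - (seconds % REFRESH_INTERVAL);
  return 'http://www.example.com/refresh.php?my-revision=' + revision +
         '&timestamp=' + bucket;
}
```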
But I don't really like this solution.
I would prefer adding another layer of data sharing in a Cookie.
This shouldn't be much of a problem; I can just store every request in a cookie named by timestamp and data revision, with a TTL of 10 seconds or so, and check for its existence first.
BUT
The pages will make the request at the same time, so the whole logic of browser caching and cookies might not work, because the requests occur simultaneously and not one after another.
So I thought about limiting concurrent connections to 1 on the server side. But then I would need at least an extra vhost, because I really don't want to do that for the whole page.
And this lets me run into problems concerning cross-site policies!
Of course there are some super-complicated load-balancing / server-side solutions bound to request URI and IP address or something, but that's all extreme overkill!
It must be a common problem! Just think of Facebook chat. I really don't think they make all the requests in every window you have open...
Any ideas? I'm really stuck with this one!
Maybe I can do some inter-window JavaScript communication? That shouldn't be a problem if it's all on the same domain?
One thing I can of course do is server-side caching, which at least avoids DB connections and intensive calculations... but it is still a request which I would like to avoid.
You might want to check out Comet and Orbited.
This is best solved with server push technology.
The first thing is: Do server-side caching anyway, using Memcache or Redis or whatever. So you're defended against three machines doing the requests. But you knew that.
I think you're onto the right thing with cookies, frankly (but see below for a more modern option) — they are shared by all window instances, easily queried, etc. Your polling logic could look something like this:
On polling interval:
Look at content cookie: Is it fresher than what you have? If so, use it and you're done.
Look at status cookie; is someone else actively polling (e.g., cookie is set and not stale)? If yes, come back in a second.
Set status cookie: I'm actively polling at (now).
Do request
On response:
If the new data is newer than the (possibly updated) contents of the content cookie, set the content cookie to the new data
Clear status cookie if you're the one who set it
Basically, the status cookie acts as a semaphore indicating to all window instances that someone, somewhere is on the job of updating the content.
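A rough sketch of that polling loop in JavaScript; the cookie names, the /refresh.php URL, the {revision, data} response shape and render() are all placeholders for your own code:

```js
// Sketch of the cookie-as-semaphore polling described above.
const CONTENT_COOKIE = 'latestContent';
const STATUS_COOKIE = 'pollingStatus';
const STALE_MS = 5000;            // a status older than this counts as abandoned
let currentRevision = 0;

function readCookie(name) {
  const match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}

function writeCookie(name, value) {
  document.cookie = name + '=' + encodeURIComponent(value) + '; path=/';
}

function render(data) { /* update the page with the new data */ }

async function poll() {
  // 1. Content cookie fresher than what we have? Use it and we're done.
  const cached = JSON.parse(readCookie(CONTENT_COOKIE) || 'null');
  if (cached && cached.revision > currentRevision) {
    currentRevision = cached.revision;
    render(cached.data);
    return;
  }
  // 2. Someone else actively polling? Come back in a second.
  const status = Number(readCookie(STATUS_COOKIE) || 0);
  if (status && Date.now() - status < STALE_MS) {
    setTimeout(poll, 1000);
    return;
  }
  // 3. Claim the semaphore, then do the request ourselves.
  writeCookie(STATUS_COOKIE, String(Date.now()));
  const fresh = await fetch('/refresh.php?my-revision=' + currentRevision)
    .then(r => r.json());
  // 4. Publish the result if it is newer than whatever is in the cookie now,
  //    then clear the semaphore we set.
  const latest = JSON.parse(readCookie(CONTENT_COOKIE) || 'null');
  if (!latest || fresh.revision > latest.revision) {
    writeCookie(CONTENT_COOKIE, JSON.stringify(fresh));
  }
  writeCookie(STATUS_COOKIE, '');
  currentRevision = fresh.revision;
  render(fresh.data);
}

setInterval(poll, 3000);          // the 3-second interval from the question
```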
Your content cookie might contain the content directly, or if your content is large-ish and you're worried about running into limits, you could have each page have a hidden iframe, each with a unique name, and have your Ajax update write the output to the iframe. The content cookie would publish the name of the most up-to-date iframe, and other windows seeing that there's fresh content could use window.open to get at that iframe (since window.open doesn't open a window if you use the name of an existing one).
Be alert to race conditions. Although JavaScript within any given page is single-threaded (barring the explicit use of web workers), you can't expect that JavaScript in the other windows is necessarily running on the same thread (it is on some browsers, not on others — heck, on Chrome it's not even the same process). I also don't know that there's any guarantee of atomicity in writing cookies, so you'll want to be vigilant.
Now, HTML5 defines some useful inter-document communication mechanisms, and so you might consider looking to see if those exist and using them before falling back on this cookie approach, since they'll work in modern browsers today but not in older browsers you're probably having to deal with right now. Still, on the browsers that support it, great!
Web storage might also be an option worth investigating as an aspect of the above, but your clients will almost certainly have to give your app permissions and it's also a fairly new thing.
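For instance, on browsers that support Web Storage, the storage event gives you a cross-window notification for free (the key name and render() are placeholders):

```js
// Sketch: share the freshest result between windows of the same origin via
// localStorage plus the storage event, so only one window needs to poll.
window.addEventListener('storage', (event) => {
  // Fires in every *other* open window when one window writes the key.
  if (event.key === 'latestContent' && event.newValue) {
    const update = JSON.parse(event.newValue);
    render(update.data);
  }
});

// Whichever window actually polled the server publishes its result:
function publish(update) {
  localStorage.setItem('latestContent', JSON.stringify(update));
}

function render(data) { /* update this page's DOM */ }
```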

Is there a limit to how much data I should cache in browser memory?

I need to load a couple thousand records of user data (user contacts in a contact-management system, to be precise) from a REST service and run a search on them. Unfortunately, the REST service doesn't offer a search which meets my needs, so I'm reduced to just loading a bunch of data and searching through it myself. Loading the records is time-consuming, so I only want to do it once for each user.
Obviously this data needs to be cached. Unfortunately, server-side caching is not an option. My client runs apps on multiple servers, and there's no way to predict which server a given request will land on.
So, the next option is to cache this data on the browser side and run searches on it there. For a user with thousands of contacts, this could mean caching several megs of data. What problems might I run in to storing several megs of javascript data in browser memory?
Storing several megs of JavaScript data should cause no problems. Memory leaks will. Think about how much RAM modern computers have - a few megabytes is a molecule in a drop in the proverbial bucket.
Be careful when doing anything client side if you intend your users to use mobile devices. While desktops won't have an issue, Mobile Safari will stop working at (I believe) 10 MB of JavaScript data. (See this article for more info on Mobile Safari.) Other mobile browsers are likely to have similar memory restrictions. Figure out the minimal set of info that you can return to allow the user to perform the search, and then lazy-load richer records from the REST API as you need them.
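A sketch of that minimal-set-plus-lazy-loading idea (the /api/contacts URLs and field names are assumptions about the REST service, not its actual interface):

```js
// Keep only the fields needed for searching in memory; fetch the full record
// on demand. The endpoints and field names below are illustrative only.
let searchIndex = [];

async function loadSearchIndex() {
  // one bulk request for a slim projection of every contact
  searchIndex = await fetch('/api/contacts?fields=id,name,email')
    .then(r => r.json());
}

function search(term) {
  const t = term.toLowerCase();
  return searchIndex.filter(c =>
    c.name.toLowerCase().includes(t) || c.email.toLowerCase().includes(t));
}

async function openContact(id) {
  // the heavy, full record is only fetched when the user actually opens it
  return fetch('/api/contacts/' + id).then(r => r.json());
}
```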
As an alternative, proxy the REST service in question and create your own search on a server that you control. You could do this pretty quickly and easily with Python + Django + XML models. No doubt there are equally simple ways to do this with whatever your preferred dev language is. (In re-reading, I see that you can't do server-side caching, which may make this point moot.)
You can manage tens of thousands of records safely in the browser. I'm running search & sorting benchmarks with jOrder (http://github.com/danstocker/jorder) on such datasets with no problem.
I would look at a distributed server-side cache. If you keep the data in the browser, then as the system grows you will have to increase the browser cache lifetime to keep traffic down.

Pagination: Server Side or Client Side?

What is the best way to handle pagination? Server side, or doing it dynamically using JavaScript?
I'm working on a project which is heavy on AJAX and pulls in data dynamically, so I've been working on a JavaScript pagination system that uses the DOM - but I'm starting to think it would be better to handle it all server side.
What are everyone's thoughts?
The right answer depends on your priorities and the size of the data set to be paginated.
Server side pagination is best for:
Large data set
Faster initial page load
Accessibility for those not running javascript
Client side pagination is best for:
Small data set
Faster subsequent page loads
So if you're paginating for primarily cosmetic reasons, it makes more sense to handle it client side. And if you're paginating to reduce initial load time, server side is the obvious choice.
Of course, client side's advantage on subsequent page load times diminishes if you utilize Ajax to load subsequent pages.
Doing it on the client side will make your users download all the data up front, which might not be needed, and will remove the primary benefit of pagination.
The best way to do this for AJAX apps of this kind is to make an AJAX call to the server for the next page and update the current page using client-side script.
If you have large pages and a large number of pages, you are better off requesting pages in chunks from the server via AJAX. So let the server do the pagination, based on your request URL.
You can also pre-fetch the next few pages the user will likely view to make the interface seem more responsive.
If there are only a few pages, grabbing it all up front and paginating on the client may be a better choice.
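For example, a small sketch of the chunked, server-driven approach with a one-page prefetch (the /api/items endpoint and its page/size parameters are made up for the example):

```js
// Let the server paginate; the client asks for one chunk at a time and quietly
// warms the cache for the page the user is most likely to request next.
const pageCache = new Map();

function getPage(page, size = 25) {
  if (!pageCache.has(page)) {
    pageCache.set(page, fetch(`/api/items?page=${page}&size=${size}`)
      .then(r => r.json()));
  }
  return pageCache.get(page);
}

async function showPage(page) {
  const rows = await getPage(page);
  renderRows(rows);        // renderRows() stands in for your own DOM code
  getPage(page + 1);       // prefetch the next page in the background
}

function renderRows(rows) { /* build the table/list for the current page */ }
```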
Even with small data sizes the best choice would be server side pagination. You will not have to worry later if your web application scales further.
And for larger data sizes the answer is obvious.
Server side - send to the client just enough content for the current view.
In a practical world of limits, I would page on the server side to conserve all the resources associated with sending the data. Also, the server needs to protect itself from a malicious/malfunctioning client asking for a HUGE page.
Once that code is happily chugging along, I would add "smarts" to the client to get the "next" and "previous" page and hold that in memory. When the user pages to the next page, update your cache.
If the client software does this sort of page caching, do consider how quickly your data ages (how likely it is to change) and whether you should check that your cached page of data is still valid. Maybe re-request it if it's more than 2 minutes old. Maybe have a "dirty" flag in it. Something like that. Hope you find this helpful. :)
Do you mean that your JavaScript has all the data in memory, and shows one page a time? Or that it downloads each page from the server as it's needed, using AJAX?
If it's the latter, you also may need to think about sorting. If you sort using JavaScript, you'll only be able to sort one page at a time, which doesn't make much sense. So your sorting should be done on the server.
I prefer server-side pagination. However, when implementing it, you need to make sure that you're optimizing your SQL properly. For instance, I believe that in MySQL, if you use the LIMIT option with a large offset it doesn't use the index efficiently, so you may need to rewrite your SQL to use the index properly.
G-Man
One other thing to point out here is that very rarely will you be limited to simply paging through a raw dataset.
You might have to search for certain terms in one or more columns you are displaying, and then say sort on a few columns and then give the users the ability to page through this filtered dataset.
In a situation like this you might have to see whether it would be better to have this logic search and/or sort client side or server side.
Another thing to consider is that Amazon's CloudSearch API gives you some very powerful searching abilities, and obviously you'll want to let CloudSearch handle searching and sorting for you if you happen to have your data hosted there.
