AngularJS - "Real time"-like HTTP requests - javascript

I am using AngularJS 1.3 and I have a backend that supports only HTTP requests (no WebSockets).
What is the best option for getting the most "real time" data updates?
Right now I am using $interval and sending an HTTP request every second, but I am not satisfied with this approach and keep thinking there must be a better option.
Thank you!

Based on your description, there's not really an alternative, but you may be able to optimize the behavior based on the characteristics of your data and/or the user interface.
To minimize resource consumption, for example, pause the requests when the relevant parts of the interface aren't visible (e.g. the user is on a different "page", or has even switched to a different browser tab), as in the sketch below.
If the data is high volume but doesn't change frequently, you could set up your server to return a 304 Not Modified until the data actually changes.
You could also send only diffs instead of the full data set if that results in significant bandwidth savings.
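A minimal sketch of the tab-visibility idea, using the Page Visibility API (the module, controller, and endpoint names are hypothetical):

    // Poll with $interval, but skip requests while the tab is hidden,
    // so background tabs stop hitting the server every second.
    angular.module('app', []).controller('FeedCtrl', function ($scope, $http, $interval) {
      var POLL_MS = 1000;

      function poll() {
        if (document.hidden) return; // user is in another tab: skip this tick
        $http.get('/api/updates').then(function (res) {
          $scope.items = res.data;
        });
      }

      $interval(poll, POLL_MS);
    });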

Related

How to efficiently send tons of GET requests with PHP

I'm working on a project in which I have to develop a simple PHP-based web module from which the users (admins) can send SMS messages (follow-ups) to students, for advertisements and other needs.
The SMS API is very simple: I just need to send a GET request to a cross-origin domain along with the phone number and message.
I tested it with file_get_contents("sms_api_url?credentials"); and it works fine.
What worries me is that the SMS will be sent to TONS of numbers, so I have to send the request multiple times using a loop, which will take a lot of time and, I think, will consume too many resources.
Also, the max execution time for PHP is set to 30 seconds, which I don't want to change.
I thought of using client-side JavaScript to send the cross-origin requests in a loop so that it won't affect my server, but that wouldn't be secure, as it would reveal the API credentials.
What technology should I use to accomplish my goals and send tons of GET requests efficiently?
You've told us nothing about the actual volume you need to handle, the metrics for the processing/connection time, nor what constraints there are on the implementation.
As it stands, this is way too broad to answer. But some approaches you might consider are:
1) Running concurrent requests - but note that, just like domain sharding, this can undermine your bandwidth if overused
2) Running PHP scripts indefinitely outside the webserver (using the CLI SAPI); these can be launched from a web session
I thought of using client-side JavaScript to send the cross-origin requests in a loop so that it won't affect my server, but that wouldn't be secure, as it would reveal the API credentials.
If you send directly to the endpoint, then yes, you'd need the credentials in the browser. But if you implement a proxy script on your webserver which injects the credentials, the browser can trigger the sends without ever seeing the credentials.
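The proxy idea is language-agnostic; here is an illustrative sketch in Node/Express (the route, the SMS endpoint URL, and the parameter names are all hypothetical - a small PHP script would do the same job):

    // The browser calls /send-sms on our own server; the credentials are
    // added here, server-side, and never reach the client.
    var express = require('express');
    var https = require('https');
    var app = express();

    app.get('/send-sms', function (req, res) {
      var url = 'https://sms.example.com/send' +
                '?user=API_USER&pass=API_PASS' + // credentials stay on the server
                '&number=' + encodeURIComponent(req.query.number) +
                '&message=' + encodeURIComponent(req.query.message);
      https.get(url, function (apiRes) {
        res.sendStatus(apiRes.statusCode); // relay the API's status code
      });
    });

    app.listen(3000);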
Using cron has certain advantages - but you really don't want to be spawning a task from crond to send one SMS message - it needs to run in batches, and you need to manage the concurrency.
You might want to consider switching to a different aggregator who can offer bulk processing.
Regardless of the approach, you will need a way to store the messages/phone numbers and a locking mechanism around retrieval and processing.
Personally, I'd be tempted to look at using an MTA for this, or perhaps even Kannel - but that's more an approach for handling volumes in excess of 300,000 per day.
Sending as many network requests as needed while staying under 30 seconds are two requirements that somewhat contradict each other. Also, raw "efficiency" can just mean squeezing every single resource in the server, which may not be desirable.
That said, I think the key points are:
I may be wrong but, as far as I know, there are only two ways to prevent an unauthorised party from consuming a web service: private credentials and IP filtering. Neither is possible in browser-based JavaScript.
Don't make a human being stare at the screen until a task of this kind completes. There's absolutely no need to, and it can even cause the task to abort.
If you need to send the same text to different recipients, find out whether the SMS provider has an API that allows you to do it in a single API request. Large batch deliveries get one or two orders of magnitude harder when this feature is not available.
In short you need:
A command line script
A task scheduler (e.g. cron)
Prefer server stability to maximum efficiency (you may even want to throttle your requests)
Send the requests from the server, but don't do it in the PHP script that generates the page.
Instead, store information about the desired messages in a database.
Write another program which periodically checks the database for unsent messages and makes the calls to the API; you could run it using cron (see the sketch below).
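A sketch of that worker, shown here in Node for brevity (the db and sms modules are hypothetical data-access and API wrappers; a PHP CLI script would follow the same structure):

    // Hypothetical cron-driven worker: fetch unsent messages, send them
    // one at a time with a small delay, and mark each one as sent.
    var db = require('./db');   // assumed helpers: getUnsentMessages, markSent
    var sms = require('./sms'); // assumed wrapper around the SMS API's GET request

    db.getUnsentMessages(function (messages) {
      var i = 0;
      (function next() {
        if (i >= messages.length) return process.exit(0); // queue drained
        var msg = messages[i++];
        sms.send(msg.number, msg.text, function (err) {
          if (!err) db.markSent(msg.id);
          setTimeout(next, 200); // throttle to roughly 5 requests/second
        });
      })();
    });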

Better way to design a web application with data persistence

For my web apps I'm always wondering which way is best to design a proper web application with data persistence. For now I design a single HTML page each time, and all the content and data upload is managed with jQuery AJAX requests, based on a RESTful model, to a remote server which takes care of the database. But in the end that sometimes makes for a lot of AJAX calls, and getting a huge amount of data can take a few seconds, which is not user-friendly.
Is there something like a guideline, or a standard way of developing, for designing web apps?
I've already looked over the Web Workers and WebSockets JavaScript APIs, but never used them yet. Has anybody tried them? Do they allow better performance than AJAX exchanges?
What is your way of developing web apps?
This isn't really the place for questions like this, but I will give you a few brief pointers.
AJAX requests shouldn't take long; if they are consistently slow then the problem is most likely your server-side code and inefficiencies there. WebSockets aren't going to give you any benefit over AJAX if your server is slow.
A common design is to load the minimal dataset required for the page to function, AJAXing in any other required data, to get the page responsive as quickly as possible.
Caching and pre-fetching are great ways to speed up your site. For instance, if you are running a MySQL query over and over, run it once, put the results in a caching service like memcached or MongoDB with an expiration of an hour (or so), and serve the cached response; this will speed up your server response times. Pre-fetching is anticipating what your user is going to do next and loading that data in the background without any user interaction.
Consider using localStorage or IndexedDB if your users are loading the same data repeatedly.
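A minimal sketch of that client-side caching idea with localStorage and a time-to-live (the endpoint and the render callback are hypothetical):

    // Serve repeated requests from localStorage until the entry expires.
    function getCached(url, maxAgeMs, callback) {
      var entry = JSON.parse(localStorage.getItem(url) || 'null');
      if (entry && Date.now() - entry.time < maxAgeMs) {
        return callback(entry.data); // cache hit: no network request
      }
      $.getJSON(url, function (data) { // jQuery, as used in the question
        localStorage.setItem(url, JSON.stringify({ time: Date.now(), data: data }));
        callback(data);
      });
    }

    getCached('/api/dashboard', 60 * 60 * 1000, render); // cache for one hour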

Client-side caching (with JavaScript)

I have an API that I'm querying via JS on the client side and then displaying the result on a page (again via JS).
I have a limit of 5 queries per second. In practice I can send a maximum of 11 API calls in one loop.
What I need:
I need to somehow bypass the 11-query limit, because usually I need to make about 50 calls in one loop.
I need to make sure that I'm not sending the same API requests on every page refresh.
The obvious solution is caching. To comply with the speed requirements, ideally I would like to cache data on the client's side.
The question:
How? I don't think cookies are a good solution because of the 4 KB size limit. I heard about Google Gears (which they used for offline Gmail), but recent search results show that it no longer exists.
You can use localStorage, but only if you need the cache to survive refreshes of the browser. If you don't, then you can use memory, i.e. hold the results in an array or object.
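For the in-memory case, a minimal sketch (the endpoint and response format are assumptions):

    // Keep results for the lifetime of the page; repeated calls for the
    // same URL never hit the network again.
    var cache = {};

    function fetchOnce(url, callback) {
      if (url in cache) return callback(cache[url]);
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url);
      xhr.onload = function () {
        cache[url] = JSON.parse(xhr.responseText); // assumes a JSON response
        callback(cache[url]);
      };
      xhr.send();
    }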

WebSockets: useful for reducing overhead?

I am building a dynamic search (updated with every keystroke): my current scheme is to, at each keystroke, send a new AJAX request to the server and get data back in JSON.
I considered opening a WebSocket for every search "session" in order to save some overhead.
I know that this will save some time, but the question is: is it really worth it, considering these parameters:
80 ms average ping time
166 ms between each keystroke, assuming the user types relatively fast
A worst-case transfer rate of 1 MB/s, with each data pack that has to be received on every keystroke being no more than 1 KB
The app also takes something like 30-40 ms to weld the search results to the DOM.
I found this: HTTP vs Websockets with respect to overhead, but it was a different use case.
Will WebSockets reduce anything besides the pure HTTP overhead? How much is the HTTP overhead (assuming no cookies and minimal headers)?
I guess that HTTP opens a new network socket on each request, while a WebSocket allows us to use only one all the time. If my understanding is correct, what is the actual overhead of opening a new network socket?
It seems like WebSockets provide a better performance in situations like yours.
WebSocket:
small handshake header
full-duplex communication after the handshake
after the connection is established, only 2 bytes of framing are added per transmitted request/response
HTTP:
HTTP headers are sent along with each request
On the other hand, WebSocket is a relatively new technology, so it would be wise to investigate web browser support and potential network-related issues.
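A sketch of the one-socket-per-search-session idea (the endpoint, the input element's id, and the renderResults function are hypothetical):

    // Pay the handshake cost once; each keystroke is then a tiny frame.
    var input = document.querySelector('#search');
    var socket = new WebSocket('ws://example.com/search');

    input.addEventListener('keyup', function () {
      if (socket.readyState === WebSocket.OPEN) {
        socket.send(input.value); // a few bytes of framing, no HTTP headers
      }
    });

    socket.onmessage = function (event) {
      renderResults(JSON.parse(event.data)); // assumed rendering function
    };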
Ref:
http://websocket.org/quantum.html
http://www.youtube.com/watch?v=Z897fkPn7Rw
http://en.wikipedia.org/wiki/WebSocket#Browser_support

Checking for updates using AJAX - bandwidth optimization possibilities?

I'm creating a simple online multiplayer game, with which two players (clients) can play the game with each other. The data is sent to and fetched from a server, which manages all data concerning this.
The problem I'm facing is how to fetch updates from the server efficiently. The data is fetched using AJAX: every 5 seconds, data is fetched from the server to check for updates. This is however done using HTTP, which means all headers are sent each time as well. The data itself is kept to an absolute minimum.
I was wondering if anyone has tips on how to save bandwidth in this server/client scenario. Would it be possible to fetch using a custom protocol or something like that, to prevent all the headers (like 'Server: Apache') being sent every single time? I basically only need the data itself (only 9 bytes), not all the headers (which are 100 bytes if not more).
Thanks.
Comet or WebSockets
HTML5's WebSockets (as mentioned in other answers here) may have limited browser support at the moment, but using long-lived HTTP connections to push data (aka Comet) gives you similar "streaming" functionality in a way that even IE6 can cope with. Comet is rather tricky to implement, though, as it is something of a hack that takes advantage of the way browsers happened to be implemented at the time.
Also note that both techniques will require your server to handle a lot more simultaneous connections than it's used to, which can be a problem even if they're idle most of the time. This is sometimes referred to as the C10K problem.
This article has some discussion of WebSockets vs Comet.
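A minimal long-polling loop on the client, as a sketch (the endpoint and the handleUpdate function are hypothetical); the server holds each request open until there is an update:

    // Comet-style long polling: reconnect as soon as a response arrives,
    // so header overhead is paid once per update instead of once per poll.
    function longPoll() {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/game/updates');
      xhr.onload = function () {
        handleUpdate(JSON.parse(xhr.responseText)); // assumed game logic
        longPoll(); // immediately reconnect
      };
      xhr.onerror = function () {
        setTimeout(longPoll, 5000); // back off if the server is unreachable
      };
      xhr.send();
    }
    longPoll();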
Reducing header size
You may have some success reducing the HTTP headers to the minimum required to save bytes. But you will need to keep Date, as it is not optional according to the spec (RFC 2616). You will probably also need Content-Length to tell the browser the size of the body; you might be able to drop it and close the connection after sending the body bytes, but that would prevent the browser from taking advantage of HTTP/1.1 persistent connections.
Note that the Server header is not required, but Apache doesn't let you remove it completely - the ServerTokens directive controls this, and the shortest setting results in Server: Apache, as you already have. I don't think other webservers usually let you drop the Server header either, and if you're on a shared host you're probably stuck with Apache as configured by your provider.
HTML5 WebSockets will be the way to do this in the near future.
http://net.tutsplus.com/tutorials/javascript-ajax/start-using-html5-websockets-today/
This isn't possible in all browsers, but it is supported in newer ones (Chrome, Safari). You should use a framework that uses WebSockets and then gracefully degrades to long polling (you don't want to poll at fixed intervals unless there are always events waiting). This way you get the benefit of the newer browsers, and that pool will continue to expand as people upgrade.
For Java, the common solution is Atmosphere: http://atmosphere.java.net. It has a jQuery plugin as well as an abstraction at the servlet container level.
