High traffic solution for simple data graph website - javascript

I'm building a single page website that will display dynamic data (updating once a second) via a graph to its users. I'm expecting this page to receive a large amount of traffic.
My data is stored in Redis and I'm displaying the graph using Highcharts. I'm using Ruby/Sinatra as my application layer.
My question is: how best should I architect the link between the data store and the JavaScript graphing solution?
I've considered connecting directly to Redis, but that seems the least efficient option. I'm wondering whether an XML solution, where Ruby builds an XML file every second and Highcharts pulls its data from there, would be best, since the load then falls only on hitting that XML file.
But I wanted to see whether anyone on here has solved this previously or has any better ideas.

If the data is not user-specific, you should cache it into a representation that the client can read easily. For web browsers, JSON is usually the better choice.
You can cache it using Redis itself (Memcached and Varnish are other options). You should cache it every time new data arrives, and avoid transforming the data on each request. Requests should simply serve pre-computed information from the cache, just as you do with static content.
For a better experience on the client side, you should minimize the amount of data you download from the server. JSON serves this purpose better than XML.
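A minimal sketch of that write-time caching pattern in plain JavaScript: a plain object stands in for Redis, and the key name and payload shape are invented for illustration.

```javascript
// Cache-on-write: do the expensive serialization once per data arrival,
// so request handling is just a string lookup. The object below stands
// in for Redis; "graph:latest" and the sample shape are made up.
const cache = {};

// Called once per second, whenever a new sample arrives.
function onDataArrived(sample) {
  cache['graph:latest'] = JSON.stringify(sample); // transform at write time
}

// Called on every client request; no per-request transformation.
function handleRequest() {
  return cache['graph:latest'] || '[]';
}

onDataArrived([{ t: 1000, value: 42 }]);
```

The point is that the serialization runs once per second, regardless of how many clients are polling.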

Related

Why can't you build a RESTful API using data stored in a memory structure?

I'm learning how to build APIs so that I can make simple interactive data visualizations in the client and pull data using GET requests. The data source I'm using is about 12 MB, so I thought I'd put it on a backend and make requests as needed.
I don't need PUT or DELETE, just GET. I've found a lot of resources that show how to make very simple APIs using Python or Node, like this one, but in that guide Miguel Grinberg says that you'll need a database as well:
In place of a database we will store our task list in a memory structure. This will only work when the web server that runs our application is single process and single threaded. This is okay for Flask's own development web server. It is not okay to use this technique on a production web server, for that a proper database setup must be used.
Is this strictly true? What are the consequences of storing data in a memory structure? I don't wholly understand what he means by "single threaded" and "single process" either.
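To make the quoted caveat concrete, here is a minimal GET-only store backed by a memory structure (the task data is invented). It works fine in a single process; the comments note where multiple processes break it.

```javascript
// A memory-structure "database", as in Grinberg's example. The data
// lives in this process's heap: with one single-threaded process this
// is consistent, but a production server that forks several worker
// processes gives each worker its OWN independent copy of `tasks`, so
// a write handled by one worker is invisible to the others. A real
// database gives all workers one shared store.
const tasks = new Map([
  [1, { id: 1, title: 'buy groceries' }],
  [2, { id: 2, title: 'learn APIs' }],
]);

function getTask(id) {
  return tasks.get(id) || null; // null => would become a 404 response
}
```

For a strictly read-only 12 MB dataset the multi-process copies at least agree with each other, but each worker still pays the memory cost, and any mutation reintroduces the consistency problem he describes.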

Better way to design a web application with data persistence

For my web apps I'm always wondering which way is best to design a proper web application with data persistence. For now I design a single HTML page each time, and all the content and data uploads are managed with jQuery AJAX requests, based on a RESTful model, to a remote server which takes care of the database. But in the end that sometimes means a lot of AJAX calls, and fetching large amounts of data can take a few seconds, which is not user-friendly.
Is there something like a guideline, or a standard way of developing, for designing web apps?
I've already looked over the Web Workers and WebSockets JavaScript APIs, but I've never used them yet. Has anybody tried them? Do they allow better performance than AJAX exchanges?
How do you go about developing your web apps?
This isn't really the place for questions like this, but I will give you a few brief pointers.
AJAX requests shouldn't take long; if they are consistently slow, then the problem is most likely your server-side code and inefficiencies there. WebSockets aren't going to give you any benefit over AJAX if your server is slow.
A common design is to load the minimal dataset required for the page to function, then AJAX in any other required data, so the page becomes responsive as quickly as possible.
Caching and pre-fetching are great ways to speed up your site. For instance, if you are running the same MySQL query over and over, run it once, put the results in a caching service like memcached or MongoDB with an expiration of an hour (or so), and serve the cached response; this will speed up your server response times. Pre-fetching is anticipating what your user is going to do next and loading that data in the background without any user interaction.
Consider using localStorage or IndexedDB if your users are loading the same data repeatedly.
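The query-caching idea above can be sketched in plain JavaScript; a Map stands in for memcached, and the function names and TTL are illustrative.

```javascript
// Run the expensive query once, keep the result with an expiration
// time, and serve the cached copy until it goes stale.
const cache = new Map();

function cachedQuery(key, runQuery, ttlMs) {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) {
    return hit.value;                 // fresh: serve without re-querying
  }
  const value = runQuery();           // stale or missing: query once
  cache.set(key, { value, expires: Date.now() + ttlMs });
  return value;
}
```

Every caller within the TTL window gets the cached result; only the first caller (or the first after expiry) pays the cost of the real query.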

Best way to pass a query result to another page

Quick question:
I have a page that makes an AJAX call to my SQL server and receives an XML response. I parse the XML and display the relevant data in a table.
I have a button on the page that is used to display the table data in a simple line graph on a new page. Currently, I just re-query the database to re-get the data and create a new array object of that data for my graph.
The GET can take up to 2.5 seconds, and the final graph takes about 8-9 seconds to render. I am investigating alternatives to re-GETting the data.
So far I have:
localStorage (HTML5)
have PHP pass me the data instead of querying the DB
jquery plugin (DOMCache)
Any thoughts on this, or best practices?
Thanks for any input!
You might do best to just make sure the response is cached in your users' browsers. There are a variety of ways to make this happen (varying with the framework your server runs, the browsers your clients use, and so on), but the long and short of it is that relying on caching will save you from jumping through performance hoops in your codebase.
IE8+ is actually a kingpin in this area (much as I hate to admit it). Its aggressive caching of ajax responses is usually a serious pain in the arse, but in this case would prove valuable to your scenario.
Edit -
You mentioned SQL Server, so I'm making the assumption that you're running through an ASP.NET middle tier. If that's the case, here's an interesting article on caching ajax requests on the server and the client with the .NET framework.
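If HTTP caching turns out to be awkward to control, the localStorage option from the question's own list is straightforward to sketch: stash the parsed table data before navigating, and read it back on the graph page. The `storage` parameter is passed in so this runs outside a browser; on the real pages you would pass window.localStorage (the key name is invented).

```javascript
// Hand the query result to the graph page without a second GET.
// localStorage only holds strings, hence the JSON round trip.
function saveForGraphPage(storage, rows) {
  storage.setItem('graphData', JSON.stringify(rows));
}

function loadOnGraphPage(storage) {
  const raw = storage.getItem('graphData');
  return raw ? JSON.parse(raw) : null; // null => fall back to re-querying
}
```

The graph page should still handle the null case (storage cleared, direct navigation) by falling back to the existing GET.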

Fast database for JavaScript data visualization vs minifying CSV?

I have a 10 MB CSV file that is the fundamental data source for an interactive JavaScript visualization.
In the GUI, the user will typically make a selection of Geography, Gender, Year and Indicator.
The response is an array of 30 4-digit numbers.
I want the user experience to be as snappy as possible, so I am considering either delivering the full CSV file (compressed using various means ...) or having a backend service that nearly matches the speed of locally hosted data.
What are my options and what steps can I take to deliver the query response with maximum speed?
To deliver the full file, you might use a string-compression algorithm such as http://code.google.com/p/jslzjb/ combined with HTML5 web storage.
However, if it's not really necessary to have the full DB on the user's client (which might lead to further problems regarding DB updates, security, etc.), I would use a backend service with query caching.
I wouldn't transfer it to the client; who knows how fast their connection is? Maximum speed would come from creating an API and querying it from your client. That way only a request for data is transferred from the client (small in size) and a response is returned (only 30 four-digit numbers).
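A sketch of that backend-lookup idea: parse the CSV once at startup into a map keyed on the four selection fields, so each GET is a constant-time lookup returning the ~30-number array. The column names and key format are assumptions about the asker's data.

```javascript
// Build the index once when the server starts; `rows` is the parsed CSV.
function buildIndex(rows) {
  const index = new Map();
  for (const r of rows) {
    index.set(`${r.geo}|${r.gender}|${r.year}|${r.indicator}`, r.values);
  }
  return index;
}

// One GUI selection => one O(1) lookup; no per-request CSV scanning.
function query(index, geo, gender, year, indicator) {
  return index.get(`${geo}|${gender}|${year}|${indicator}`) || [];
}
```

Thirty four-digit numbers is a few hundred bytes of JSON, so the response itself is negligible; the latency budget is dominated by the network round trip, which caching headers can help with on repeat selections.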

How to create temporary files on the client machine, from Web Application?

I am creating a web application using JSP, Struts, EJB and Servlets. The application is a combined CRM and accounting package, so the database is very large. In order to make execution faster, I want to prevent round trips to the database.
For that purpose, what I want to do is create some temporary XML files on the client machine and use them whenever required. How can I do this, given that JavaScript does not permit it? Is there any way of doing this? Or is there any other solution I could adopt to make my application faster?
You do not have unfettered access to the client file system to create a temporary file on the client. The browser sandbox prevents this for very good reasons.
What you can do, perhaps, is make some creative use of caching in the browser. jQuery's data method is an example of this. TIBCO General Interface makes extensive use of a browser cache for XML data. Their code is open source and you could take a look to see how they've implemented their browser cache.
If the database is large and you are attempting to store large files, the browser is likely not going to be a great place for that data. If, however, the information you want to store is fairly small, using an in-browser cache may accomplish what you'd like.
You should be caching on the web server.
As you've no doubt realised by now, there is a very limited set of things you can do on the client machine from a web app (e.g., writing a cookie).
You can make your application use the browser plugin Google Gears, which gives you real client-side storage.
Apart from that, remember there is a huge overhead for every single request. If needed, you can easily stack a few hundred kB into one response, but far-away users might only be able to execute a few requests per second. Try to keep the number of requests down, even if it means adding overhead in the form of more data.
#justkt Actually, there is no good reason not to allow a web application to store data. Indeed, the HTML5 specifications include a database similar to the one offered by Google Gears; browser support is just a bit too sporadic to rely on that feature.
If you absolutely want to cache it on the client, you can create the file on your server and have your web app retrieve it. That way the browser will fetch it and keep it in the client cache.
But keep in mind that this could be a pain for the client if the file is large enough.
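That browser-cache approach hinges on HTTP revalidation: if the server sends an ETag and honours If-None-Match, a repeat visit costs only a tiny 304 response instead of re-downloading the whole file. A toy sketch of the handshake (the checksum here is purely illustrative, not a real ETag scheme):

```javascript
// Derive a weak content fingerprint to use as an ETag value.
function etagFor(body) {
  let h = 0;
  for (let i = 0; i < body.length; i++) {
    h = (h * 31 + body.charCodeAt(i)) >>> 0; // toy rolling checksum
  }
  return `"${h.toString(16)}"`;
}

// First request: full body plus ETag. Revalidation request with a
// matching If-None-Match: empty 304, client reuses its cached copy.
function respond(body, ifNoneMatch) {
  const tag = etagFor(body);
  if (ifNoneMatch === tag) {
    return { status: 304, headers: { ETag: tag }, body: '' };
  }
  return { status: 200, headers: { ETag: tag }, body };
}
```

In practice the servlet container or a front-end proxy would handle this, but the shape of the exchange is the same.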
