Fast database for JavaScript data visualization vs minifying CSV?

I have a 10 MB CSV file that is the fundamental data source for an interactive JavaScript visualization.
In the GUI, the user will typically make a selection of Geography, Gender, Year and Indicator.
The response is an array of 30 4-digit numbers.
I want the user experience to be as snappy as possible, and am considering either delivering the full CSV file (compressed using various means ...) or having a backend service that nearly matches the speed of locally hosted data.
What are my options and what steps can I take to deliver the query response with maximum speed?

To deliver the full file, perhaps a combination of a string-compression algorithm such as http://code.google.com/p/jslzjb/ and HTML5 Web Storage.
However, if it's not really necessary to have the full DB on the user's client (which might lead to further problems regarding DB updates, security, etc.), I would use a backend service with query caching etc.
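A minimal sketch of that client-side route, assuming the server gzips the CSV in transit and the text fits the localStorage quota (commonly around 5 MB, so a 10 MB file may need IndexedDB instead); the file name and column layout here are made up for illustration:

async function loadData() {
  // Cache the raw CSV once; later visits skip the network entirely.
  let csv = localStorage.getItem('vizData');
  if (!csv) {
    csv = await (await fetch('/data.csv')).text(); // gzipped on the wire
    try { localStorage.setItem('vizData', csv); } catch (e) { /* quota exceeded */ }
  }
  // Index rows by the selection fields for O(1) lookups in the GUI.
  const lookup = new Map();
  for (const line of csv.trim().split('\n').slice(1)) {
    const [geo, gender, year, indicator, ...values] = line.split(',');
    lookup.set(geo + '|' + gender + '|' + year + '|' + indicator, values.map(Number));
  }
  return lookup;
}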

I wouldn't transfer it to the client; who knows how fast their connection is? Maximum speed would be to create an API and query it from your client. That way only a request for data is transferred from the client (small in size) and a response is returned to the client (only 30 four-digit numbers).
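A minimal sketch of that API route, assuming Express and an in-memory Map built from the CSV at startup; the endpoint name and query parameters are illustrative:

const express = require('express');
const app = express();

const lookup = new Map(); // populated from the CSV at startup

app.get('/api/values', (req, res) => {
  const { geo, gender, year, indicator } = req.query;
  const values = lookup.get(geo + '|' + gender + '|' + year + '|' + indicator);
  if (!values) return res.status(404).json({ error: 'no such selection' });
  res.json(values); // ~30 four-digit numbers, only a few hundred bytes
});

app.listen(3000);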

Related

Vue.js to Node.js Server: How can I secure my POST call?

I have a simple single page application (a game) written in Vue.js with a backend in Node.js, all hosted on Heroku. My frontend in Vue uses axios to do the api call to my backend, which uses express and mysql libraries to query my database and get high scores or post a new score.
I gave the finished game to my friends and they realized right away that they could use Postman or something similar to send a simple POST request with a fake score, so I'd like to secure it.
I'm open to anything fairly simple, but I'd like to set a token that I can check in my Node.js code, and if it doesn't match, send a 403. I've tried setting an environment variable with a token, but the front end ends up displaying that token in the resources if I inspect the element (if I use a .env file and then read the value). I've also tried config.json files, but obviously there's no way to hide those values from anyone using inspect element. I tried checking req.hostname, but even when I send a request from Postman it returns a 200.
How can I secure my POST request?
Problem
As others have pointed out, there is no generic way to generate information client side that cannot be forged. The problem being that no matter how complex the rules to generate this information (e.g. game scores must be prime numbers), somebody might isolate those rules and create arbitrary information (e.g. fake prime number scores without playing the game).
For games this often leads to input processing (client) and game rules (server) being split between client and server, making it impossible to isolate score generation from the game rules. However, this introduces latency and asynchronicity, and requires heavy refactoring for client-side games - three difficult issues.
Solution
There are scenarios where a different solution exists. Take chess: given a chessboard, ask the client for the least possible number of moves until mate. The number of moves is the score, and the lowest score wins. The client must send the specific moves and the server verifies the result. In other words, the client-side information is the entire input the player generates for the game.
As a generic pattern this means: define the client-side (score) information as the entire game input. Record the entire input client side and re-run the game server side with this input. Verify the result.
Requirements:
Split input processing from game rules so that it can run with pre-defined input.
Implement equivalent server and client side game rules.
Eliminate any source of randomness! (E.g. use the same seed for the same random number generator or a server generated random number list)
You are close to this solution, as you have wisely chosen one language for server and client, and JavaScript represents numbers as platform-independent 64-bit floats (which avoids platform-dependent rounding differences). This solution avoids latency & asynchronicity, but does not allow multi-player games where atomic server-side updates and coupled player input are needed.
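A minimal sketch of the record-and-replay pattern; the toy game (clicks award random points) and the tiny seeded LCG are stand-ins to show how one pure step function can run identically on client and server:

// Deterministic RNG so client and server draw identical "random" numbers.
function seededRandom(seed) {
  let s = seed >>> 0;
  return () => ((s = (s * 1664525 + 1013904223) >>> 0) / 2 ** 32);
}

// Pure and deterministic: same seed + same inputs => same score.
function replay(seed, inputs) {
  const rand = seededRandom(seed);
  let score = 0;
  for (const input of inputs) {
    if (input.type === 'click') score += Math.floor(rand() * 10);
  }
  return score;
}

// Client: records inputs during play, then submits { seed, inputs, score }.
// Server: re-runs the game and rejects anything that doesn't reproduce.
function verify({ seed, inputs, score }) {
  return replay(seed, inputs) === score;
}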

How to read serial port data from JavaScript

I connected an Arduino to my laptop using USB, and I can read the serial data using Processing.
Is there any way to get this data in real time into a local web browser? For example, a text field that shows the value from the serial port? It does not have to be connected to the internet.
The JavaScript version of Processing does not support the following code, which would have been the ideal solution.
The Processing code is:
import processing.serial.*;

Serial myPort = new Serial(this, Serial.list()[0], 9600);
// read a byte from the serial port
int inByte = myPort.read();
// print it
println(inByte);
// now send this value somewhere...?
// ehm...
There is no way to directly access the local machine from the web browser. For security reasons, browsers have very limited access to the machine's resources.
One option would be to write an extension for the browser of your choosing, though extensions also have a variety of limitations.
Option two would be to use a local server to provide the functionality you need. Personally I recommend Node.js (it's lightweight, fast and easy to work with). You can read/write serial data using https://github.com/rwaldron/johnny-five (as @kmas suggested) or https://github.com/voodootikigod/node-serialport, and then you can use http://socket.io/ to create a simple service and connect to it through the browser. Socket.io uses WebSockets in modern browsers and works exceptionally well for real-time connections.
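A minimal sketch of that second option, assuming serialport v10+ and socket.io v4; the port path and baud rate ('/dev/ttyACM0', 9600) are examples to adjust for your setup:

const http = require('http');
const { SerialPort } = require('serialport');
const { ReadlineParser } = require('@serialport/parser-readline');
const { Server } = require('socket.io');

const server = http.createServer();
const io = new Server(server);

const port = new SerialPort({ path: '/dev/ttyACM0', baudRate: 9600 });
const parser = port.pipe(new ReadlineParser({ delimiter: '\n' }));

// Forward each serial line to every connected browser client.
parser.on('data', (line) => io.emit('serial', line.trim()));

server.listen(3000);

In the browser, socket.on('serial', ...) can then write each value into the text field as it arrives.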
I had a similar problem to solve. My data acquisition system (DAQ), like your Arduino, relays data over HTTP, TCP, FTP, as well as serial.
The hack I wrote uses Node.js on the server: it connects the DAQ to the server over TCP sockets using Node's "net" module, and connects the server to the HTML page using socket.io.
The code and context can be found in "How to get sensor data over TCP/IP in nodejs?".
I used TCP because I wanted to transmit data over a long distance; you would need to swap the TCP socket for a serial connection.
For serial-to-TCP redirection, you may use bloom from sensorMonkey for Windows or their Processing sketch for *nix/Mac OS.
If you want to send or receive serial data between an Arduino and the Processing code editor, just go to Sketch -> Import Library -> Serial, or simply write import processing.serial.*;

High traffic solution for simple data graph website

I'm building a single page website that will display dynamic data (updating once a second) via a graph to its users. I'm expecting this page to receive a large amount of traffic.
My data is stored in Redis and I'm displaying the graph using Highcharts. I'm using Ruby/Sinatra as my application layer.
My question is how best I should architect the link between the data store and the JavaScript graph solution.
I've considered connecting directly to Redis, but that seems the least efficient option. I'm wondering whether an XML solution, where Ruby builds an XML file every second and Highcharts pulls its data from there, is best, since then the load is only on hitting that XML file.
But I wanted to see whether anyone on here might have solved this previously or had any better ideas?
If the data is not user-specific, you should cache it in a representation that is easily read by the client; for web browsers, JSON is likely a better choice than XML.
You can cache it using Redis itself (Memcached and Varnish are other options). You should update the cache every time new data arrives and avoid transforming the data on each request. Requests should simply serve pre-computed information from the cache (as you would with static content).
For a better experience on the client side, you should minimize the amount of data you are downloading from the server. JSON serves this purpose better than XML.
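A minimal sketch of that pattern; the question uses Sinatra, but the same idea in Node.js (assuming the redis v4 and express packages, with a made-up cache key) looks like:

const express = require('express');
const { createClient } = require('redis');

const redis = createClient();
const app = express();

// Writer: runs once a second, whenever fresh data arrives.
async function refreshCache(latestPoints) {
  await redis.set('chart:data', JSON.stringify(latestPoints));
}

// Reader: every request serves the pre-computed JSON verbatim,
// so no per-request transformation happens under load.
app.get('/data.json', async (req, res) => {
  res.type('application/json').send(await redis.get('chart:data'));
});

redis.connect().then(() => app.listen(3000));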

Maximum json size for response to the browser

I am creating a tree with some custom controls built with JavaScript/jQuery.
To create the tree, we supply a JSON object as input for the JavaScript to iterate through and build the tree from.
The volume of data may go up to 25K nodes, and during a basic load test we found that the browser crashes at that volume.
The alternative solution is to load just the first level of nodes and load the rest on demand via AJAX requests; the first level can contain 500-1K nodes.
What is the maximum size a JSON response from the server should have? What is the best approach to process such a volume of data in the browser?
There is no maximum size limit on an HTTP response (beyond whatever limits the browser or server have been configured with, or the maximum size of an int).
The best approach is to use AJAX to load parts of the data only when they need to be shown.
An HTTP response has no size limit. JSON comes as an HTTP response, so it has no size limit either.
There might be a problem if the object parsed from the JSON response consumes too much memory; that will crash the browser. So it's best to test with different data sizes and check whether your app works correctly.
I think lazy-loading is the best approach for such large amounts of data. Especially when dealing with object literals.
See the High Performance Ajax Applications presentation from Yahoo.
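A minimal sketch of the lazy-loading approach, assuming a hypothetical /nodes?parent=<id> endpoint that returns one node's children:

// Fetch and render one level of the tree on demand.
async function loadChildren(parentId, container) {
  const res = await fetch('/nodes?parent=' + encodeURIComponent(parentId));
  const children = await res.json();
  for (const node of children) {
    const li = document.createElement('li');
    li.textContent = node.label;
    if (node.hasChildren) {
      // Defer the subtree until the user actually expands this node.
      li.addEventListener('click', () => loadChildren(node.id, li), { once: true });
    }
    container.appendChild(li);
  }
}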
Well, I think I am too late to give my two cents. Complementing shiplu.mokadd.im's answer: browser memory is a limitation, while the HTTP spec itself imposes no limit on how much data a response can carry.
But I have an application that uses Google Chrome (version 29.0.xx) and a Jetty server, where the response from Jetty has a payload of 335 MB. While receiving a response of that sheer size, Chrome stops with the message "IPC message is too big". Though this is specific to Google Chrome (I'm not sure about other browsers), there may be a threshold on the maximum response size.
There is no maximum size limit, but the practical size depends on the RAM, CPU and network bandwidth of the client system (the system the browser runs on), which has to parse the large JSON data. If the system is low-end and the JSON data is large, the browser hangs.

Is there a limit to how much data I should cache in browser memory?

I need to load a couple thousand records of user data (user contacts in a contact-management system, to be precise) from a REST service and run a search on them. Unfortunately, the REST service doesn't offer a search which meets my needs, so I'm reduced to just loading a bunch of data and searching through it myself. Loading the records is time-consuming, so I only want to do it once for each user.
Obviously this data needs to be cached. Unfortunately, server-side caching is not an option. My client runs apps on multiple servers, and there's no way to predict which server a given request will land on.
So, the next option is to cache this data on the browser side and run searches on it there. For a user with thousands of contacts, this could mean caching several megs of data. What problems might I run in to storing several megs of javascript data in browser memory?
Storing several megs of JavaScript data should cause no problems. Memory leaks will. Think about how much RAM modern computers have - a few megabytes is a molecule in the drop in the proverbial bucket.
Be careful when doing anything client side if you intend your users to use mobile devices. While desktops won't have an issue, Mobile Safari will stop working at (I believe) 10 MB of JavaScript data. (See this article for more info on Mobile Safari.) Other mobile browsers are likely to have similar memory restrictions. Figure out the minimal set of info that you can return to allow the user to perform the search, and then lazy load the richer records from the REST API as you need them; a sketch of this follows below.
As an alternative, proxy the REST service in question and create your own search on a server that you control. You could do this pretty quickly and easily with Python + Django + XML models. No doubt there are equally simple ways to do it in whatever your preferred dev language is. (In re-reading, I see that you can't do server-side caching, which may make this point moot.)
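A minimal sketch of the minimal-set-plus-lazy-load suggestion above; the /contacts endpoints and field names are hypothetical:

const index = []; // the minimal searchable set: { id, name, email }

async function loadIndex() {
  const res = await fetch('/contacts?fields=id,name,email');
  index.push(...await res.json());
}

function search(term) {
  const t = term.toLowerCase();
  return index.filter(c =>
    c.name.toLowerCase().includes(t) || c.email.toLowerCase().includes(t));
}

// Pull the full record only when the user opens a search result.
async function openContact(id) {
  const res = await fetch('/contacts/' + id);
  return res.json();
}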
You can manage tens of thousands of records safely in the browser. I'm running search & sorting benchmarks with jOrder (http://github.com/danstocker/jorder) on such datasets with no problem.
I would look at a distributed server-side cache. If you keep the data in the browser, then as the system grows you will have to increase the browser cache lifetime to keep traffic down.
