Is it possible to stream serialized data into a file with FlatBuffers? - javascript

I am making an online game in Node.js and trying to save game replays on my game's server. I am using FlatBuffers to serialize data for client-server communication, and I thought it would be cool to save my game's state frame by frame in a file.
I created the following tables in my .fbs schema file:
table Entity {
  id: ushort;
  pos: Vec2;
}

table Frame {
  entities: [Entity];
}

table Replay {
  frames: [Frame];
}
Is there a way to write all game state frames to the file on the fly? I know that I could just buffer N frames and save them in separate replay files, but I feel that there should be a better way. I want my replay to be in a single file; otherwise it would be very inconvenient to use afterwards.

The best way to do this is to make sure each individual FlatBuffer is a "size prefixed buffer" (it has a 32-bit size ahead of the actual buffer), which in JS you can create by calling this instead of the usual finish function: https://github.com/google/flatbuffers/blob/6da1cf79d90eb242e7da5318241d42279a3df3ba/js/flatbuffers.js#L715
Then you write these buffers one after the other into an open file. When reading, because the buffers start with a size, you can process them one by one. I don't see any helper functions for that in JS, so you may have to do this yourself: read 4 bytes to find out the size, then read size bytes, repeat.
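For illustration, a minimal Node.js sketch of that write/read loop might look like the following. It assumes code generated from the schema above gives you a way to build a Frame table, and that your flatbuffers version exposes Builder.finishSizePrefixed and asUint8Array; the import shape differs between library versions, so treat this as a sketch rather than a drop-in.

const fs = require('fs');
const flatbuffers = require('flatbuffers').flatbuffers; // older package layout; newer versions export directly

// Append one size-prefixed Frame to an open replay file descriptor.
// buildFrame is a caller-supplied callback that builds the Frame table and returns its offset.
function appendFrame(fd, buildFrame) {
  const builder = new flatbuffers.Builder(1024);
  const frameOffset = buildFrame(builder);
  builder.finishSizePrefixed(frameOffset);          // prepends the 32-bit size to the buffer
  fs.writeSync(fd, Buffer.from(builder.asUint8Array()));
}

// Walk the replay back: read a 4-byte little-endian size, then that many bytes, repeat.
function* readFrames(path) {
  const data = fs.readFileSync(path);
  let pos = 0;
  while (pos + 4 <= data.length) {
    const size = data.readUInt32LE(pos);
    yield data.subarray(pos + 4, pos + 4 + size);   // raw bytes of one Frame buffer
    pos += 4 + size;
  }
}

Each appendFrame call adds one self-delimiting buffer to the replay, so the whole game can live in a single file and still be processed frame by frame afterwards.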

Related

Bypass the 6-download limit for watching multiple videos

I have to code a website with the capability of watching many live streams (video-surveillance cameras) at the same time.
So far, I'm using MJPEG and JS to play my live videos and it is working well... but only up to 6 streams!
Indeed, I'm stuck with the 6-parallel-download limit most browsers have (link).
Does someone know how to bypass this limit? Is there a tip?
So far, my options are:
increase the limit (only possible in Firefox), but I don't like messing with my users' browser settings
merge the streams into one big stream/video on the server side, so that I have only one download at a time. But then I won't be able to deal with each stream individually, will I?
switch to a JPEG stream and deal with a queue of images to be refreshed on the front side (but if I have, say, 15 streams, I'm afraid I will collapse my client's browser under the requests (15 x 25 images/s))
Do I have any other options? Is there a tip or a lib? For example, could I merge my streams into one big pipe (so one download at a time) but still have access to each one individually in the front-end code?
I'm sure I'm on the right Stack Exchange site to ask this; if I'm not, please tell me ;-)
Why not stream (if you have control over the server side and the line is capable) over one connection? You make one request for all 15 streams to be sent/streamed over one connection (not one big stream), so the headers of each chunk have to match the appropriate stream ID. Read more: http://qnimate.com/what-is-multiplexing-in-http2/
More in-depth here: https://hpbn.co/http2/
With HTTP/1.0/1.1 you are out of luck for this scenario: back when they were developed, one video or MP3 file was already heavy stuff (workarounds were e.g. torrent libraries, but those were unreliable and not suited for most scenarios apart from mere downloading/streaming). For your interactive scenario, HTTP/2 is the way to go, IMHO.
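To make the idea concrete, here is a minimal sketch of such a server using Node's built-in http2 module. The TLS file paths, the /camera/ URL scheme, and the pipeMjpegFrames helper are illustrative assumptions, not a working camera backend:

const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('server-key.pem'),    // illustrative certificate paths
  cert: fs.readFileSync('server-cert.pem'),
});

// Every camera request from the same browser arrives as its own stream over a
// single TCP connection, so the 6-connections-per-host limit no longer applies.
server.on('stream', (stream, headers) => {
  const cameraId = headers[':path'].replace('/camera/', '');
  stream.respond({
    ':status': 200,
    'content-type': 'multipart/x-mixed-replace; boundary=frame', // MJPEG-style push
  });
  pipeMjpegFrames(cameraId, stream); // hypothetical helper that writes JPEG parts to this stream
});

server.listen(8443);

On the client side nothing special is needed: as long as the page and the streams are served over HTTP/2, the browser multiplexes all 15 requests onto one connection by itself.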
As Codebreaker007 said, I would prefer HTTP/2 stream multiplexing too. It is specifically designed to get around the very problem of too many concurrent connections.
However, if you are stuck with HTTP/1.x, I don't think you're completely out of luck. It is possible to merge the streams in a way that lets the client side destructure and manipulate the individual streams, although admittedly it takes a bit more work, and you might have to resort to client-side polling.
The idea is simple - define a really simple data structure:
[streamCount len1 data1 len2 data2 ...]
Byte 0 ~ 3: 32-bit unsigned int, number of merged streams
Byte 4 ~ 7: 32-bit unsigned int, length of stream 1's data (len1)
Byte 8 ~ 8+len1-1: binary data of stream 1
Byte 8+len1 ~ 8+len1+3: 32-bit unsigned int, length of stream 2's data
...
Each data segment is allowed to have a length of 0 and is handled no differently in that case.
On the client side, poll continuously for more data, expecting this data structure. Then destructure it and pipe the data into the individual streams' buffers; you can still manipulate the component streams individually.
On the server side, cache the data from the individual component streams in memory. Then, in each response, empty the cache, compose this data structure, and send it (see the sketch below).
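A rough sketch of that composing/destructuring, assuming 32-bit big-endian length fields and a Node.js server (function names are illustrative):

// Server side (Node.js): merge the cached chunks of N streams into one payload.
function composeMergedPayload(chunks) {        // chunks: Array<Buffer>, one per stream
  const parts = [Buffer.alloc(4)];
  parts[0].writeUInt32BE(chunks.length, 0);    // stream count
  for (const chunk of chunks) {
    const len = Buffer.alloc(4);
    len.writeUInt32BE(chunk.length, 0);        // per-stream length, may be 0
    parts.push(len, chunk);
  }
  return Buffer.concat(parts);
}

// Client side: destructure a polled response back into per-stream byte arrays.
function parseMergedPayload(buf) {             // buf: ArrayBuffer from the XHR/fetch response
  const view = new DataView(buf);
  const count = view.getUint32(0);             // DataView reads big-endian by default
  const streams = [];
  let offset = 4;
  for (let i = 0; i < count; i++) {
    const len = view.getUint32(offset);
    streams.push(new Uint8Array(buf, offset + 4, len));
    offset += 4 + len;
  }
  return streams;
}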
But again, this is very much a plaster solution. I would recommend using HTTP/2 streams as well, but this would be a reasonable fallback.

Client vs server image processing and display

Client vs server image processing.
We have a big system which runs on JSF (PrimeFaces), EJB3, and sometimes JavaScript logic (e.g. for using Firebase and such).
So we ran into this problem: we have a servlet to serve some images. The backend takes a query, extracts a BLOB image from the DB, turns the BLOB into an array of bytes, puts it in the browser session memory, and the servlet serves it at url-OurSite/image/idImage. The front end calls it with <img>(url/image/id)</img> and it works fine so far.
Then we started using a new, direct way to show images: we send the BLOB/raw data to the frontend and there we just convert it to Base64 and pass it to the HTML.
Base64 codec = new Base64();
String encoded = codec.encodeBase64String(listEvidenciaDev.get(i).getImgReturns());
Both work, for almost all cases.
Note: We didn't try this before because we couldn't pass the raw data through our layers of serialized objects and RMI. Now we can, of course.
So now there are two ways.
Either we send the data to the servlet and publish it at some URL, which means the backend does all the work and the frontend just calls the URL,
or we send the data to the frontend, which does some magic and transforms it into an img.
This brings 2 questions.
If we send the frontend a raw object, or make it call a URL to show the image content, does the final user download the same amount of data? This is important because we have some remote branch offices with poor internet connections.
Is it worth passing the hard work to the frontend (convert the data) or to the backend (convert and publish)?
EDIT:
My question is not about the BLOB (what I call raw data) being bigger than Base64.
It is: is passing the data as an object and transforming it into a readable picture heavier on bandwidth than passing a URL from our servlet with the actual image and loading it in the HTML?
I chose to close this question because we did some tests and the bandwidth usage on the front end was the same.
Anyway, we make use of both solutions.
If we don't want to burden the frontend with a lot of encoding, we set up a servlet for those images (which comes with more code and more server load). We look for the best optimization in each specific case.
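For reference, the two delivery options look roughly like this on the front end; this is only a sketch, and the element, endpoint, and variable names are illustrative:

function showEvidenceImage(img, idImage, encodedImage, useDataUri) {
  if (useDataUri) {
    // The backend sends the raw bytes and the frontend embeds them as a data URI.
    // Base64 inflates the payload by roughly a third, but saves an extra HTTP round trip.
    img.src = 'data:image/jpeg;base64,' + encodedImage;
  } else {
    // The backend publishes the image under a servlet URL and the browser fetches it.
    img.src = '/OurSite/image/' + idImage;
  }
}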

How to read a large file (>1GB) in JavaScript?

I use Ajax $.get to read a file from the local server. However, the page crashes since my file is too large (>1GB). How can I solve this problem? Are there other solutions or alternatives?
$.get("./data/TRACKING_LOG/GENERAL_REPORT/" + file, function(data){
console.log(data);
});
A solution, assuming that you don't have control over the report generator, would be to download the file in multiple smaller pieces using Range headers, process each piece, extract what's needed from it (I assume you'll be building some HTML components based on the report), and move on to the next piece.
You can tweak the piece size until you find a reasonable value, one that doesn't make the browser crash but also doesn't result in a large number of HTTP requests.
If you can control the report generator, you can configure it to generate multiple smaller reports instead of a huge one.
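A sketch of that approach using fetch and the Range header; the handlePiece callback and the 8MB piece size are placeholders, and the server must honour Range requests for this to work:

async function processInChunks(url, handlePiece, chunkSize = 8 * 1024 * 1024) {
  let start = 0;
  while (true) {
    const res = await fetch(url, {
      headers: { Range: 'bytes=' + start + '-' + (start + chunkSize - 1) },
    });
    if (res.status === 416) break;                 // requested range is past the end of the file
    if (!res.ok) throw new Error('HTTP ' + res.status);
    const piece = await res.arrayBuffer();
    handlePiece(new TextDecoder().decode(piece));  // extract what you need, then let the piece be GC'd
    if (piece.byteLength < chunkSize) break;       // a short read means this was the last piece
    start += chunkSize;
  }
}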
Split the file into a lot of smaller files, or give a set user FTP access. I doubt you'd want too many people downloading a gig each off your web server.

How to properly process base64 into a stored server image

I'm working on an add-item page for a basic webshop; the shop owner can add item images via drag/drop or by browsing directly. When images are selected, I store the base64 in an array. I'm now not too sure how best to deal with sending/storing these item images for proper use. After giving Google a bit of love, I'm thinking the image data could be sent as base64 and saved back to an image via something like file_put_contents('/item-images/randomNumber.jpg', base64_decode($base64)); and then adding the item's image paths to its database record for later retrieval. Below is an untested example of how I currently imagine sending the image data; is something like this right?
$("#addItem").click(function() {
var imgData = "";
$.each(previewImagesArray, function(index, value) {
imgData += previewImagesArray[index].value;
});
$.post
(
"/pages/add-item.php",
"name="+$("#add-item-name").val()+
"&price="+$("#add-item-price").val()+
"&desc="+$("#add-item-desc").val()+
"&category="+$("#add-item-category :selected").text()+
"&images="+imgData
);
return false;
});
I really appreciate any help; I'm relatively new to web development.
As you are doing, so do I, essentially: get the base64 from the browser, then post it back and store it. A few comments...
First, an HTML POST has no mandatory size limitation, but in practice your backend will limit the size of posted data (e.g. a 2M post_max_size in PHP). Since you are sending base64, you are reducing the effective payload you can send: every three bytes of image become four bytes of base64, so roughly a quarter of the network transfer is encoding overhead, and a 2M limit holds only about 1.5M of image data. Either send multiple posts or increase your post size limit to fit your needs.
Second, as #popnoodles mentioned, using a randomNumber will likely not be sufficient in the long term. Use either a database primary key or the tempnam family of functions to generate a unique identifier. I disagree with #popnoodles's implementation, however: it's quite possible for two different people to upload the same file. For example, my c2013 Winter Bash avatar on SO was taken from an online internet library; someone else could use that same icon. We would collide, so an MD5 of the contents is not sufficient in general, though in your use case it could be.
Finally, you probably will want to base64-decode, but give some thought to whether you need to. You could use a data: URL and inline the base64 image data. This has the same network issue as before: significantly more transfer is required to send it. But a data URL works very well for lots of very small images (e.g. avatars) or pages that will be cached for a very long time (especially if your users have reasonable data connections). Summary: consider the use case before presuming you need to base64-decode.

What is the most efficient way of sending data for a very large playlist over http?

I'm currently working on developing a web-based music player. The issue that I'm having is pulling a list of all the songs from the database and sending it to the client. The client has the ability to dynamically create playlists, and therefore must have access to a list of the entire library. This library can range upwards of 20,000 unique songs. I'm preparing the data on the server side using Django and this tentative scheme:
{
    id: "1",
    cover: "http://example.com/AlbumArt.jpg",
    name: "Track Name",
    time: "3:15",
    album: "Album Name",
    disc: (1, 2),
    year: "1969",
    mp3: "http://example.com/Mp3Stream.mp3"
},
{
    id: "2",
    ...
}
What is the best method of DYNAMICALLY sending this information to the client? Should I be using JSON? Can JSON effectively send this text file consisting of 20,000 entries? Is it possible to cache this playlist on the client side so this huge request doesn't have to happen every time the user logs in, but only when there is a change in the database?
Basically, what I need at this point is a dependable method of transmitting a text-based playlist consisting of around 20,000 objects, each with their own attributes (name, size, etc.), in a timely manner. Sort of like Google Music: when you log in, you are presented with all the songs in your library. How are they sending this list?
Another minor question that comes to mind is, can the browser (mainly Chrome) handle this amount of data without sacrificing usability?
Thank you so much for all your help!
I just took a look at the network traffic for Google Play, and they transmit the initial library screen (around 50 tracks) via JSON, with the bare minimum of metadata (name, track ID, and album art ID). When you load the main library page, it makes a request to an extremely basic HTML page that appears to insert items from an inline JS object (Gist Sample). The total file was around 6MB, but it was cached, so nothing needed to be transferred.
I would suggest doing a paginated JSON request to pull down the data, and using ETags and caching to ensure it isn't retransmitted unless it absolutely needs to be. And instead of a normal pagination of ?page=5&count=1000, try ?from=1&to=1000, so that deleting 995 will purge ?from=1&to=1000 from the cache, but not ?from=1001&to=2000 (whereas ?page=2&count=1000 would).
Google Play Music does not appear to use Local Storage, IndexedDB, or Web SQL, and loads everything from the cached file and parses it into a JS object.
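A sketch of pulling the library down in those fixed ranges (the /songs endpoint and parameter names are assumptions; ETag/If-None-Match revalidation is handled by the browser's HTTP cache, so unchanged ranges cost only a 304):

async function loadLibrary(totalSongs, pageSize = 1000) {
  const songs = [];
  for (let from = 1; from <= totalSongs; from += pageSize) {
    const to = Math.min(from + pageSize - 1, totalSongs);
    const res = await fetch('/songs?from=' + from + '&to=' + to);
    songs.push(...await res.json());   // each range is cached and revalidated independently
  }
  return songs;
}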
Have you seen this http://code.flickr.net/2009/03/18/building-fast-client-side-searches/ ?
I've been using this array system myself lately (for 35K objects) and it is fast (assuming you don't want to render them all on screen).
Basically, the server builds a long string of the form
1|2|3$cat|dog|horse$red|blue|green
which is sent as a single string in the HTTP response. Take the responseText field and convert it to arrays using:
var arr = request.responseText.split('$');
var ids = arr[0].split('|');
var names = arr[1].split('|');
Clearly, you end up with arrays of strings at the end, not objects, but arrays are fast for many operations. I've used $ and | as delimiters in this example, but for live use I use something more obscure. My 35K objects are completely handled in less than 0.5s (iPad client).
You can save the strings to localStorage, but watch the 5MB limit, or use a shim such as Lawnchair. (NB: I also like SpenserJ's answer, which may be easier to implement depending on your environment.)
This method doesn't easily work for all JSON data types; they need to be quite flat. I've also found these big arrays to behave well performance-wise, even on smartphones, iPod Touch, etc. (see jsperf.com for several tests around String.split and array searching).
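If you do cache the string in localStorage, here is a small sketch with a quota guard; the key name and field order follow the example string above:

function cacheLibraryString(raw) {
  try {
    localStorage.setItem('libraryString', raw);   // quota is commonly around 5MB per origin
  } catch (e) {
    // QuotaExceededError: too big to cache, fall back to downloading it each session
  }
}

function loadLibraryArrays() {
  const raw = localStorage.getItem('libraryString');
  if (!raw) return null;
  const parts = raw.split('$');                   // e.g. '1|2|3$cat|dog|horse$red|blue|green'
  return {
    ids: parts[0].split('|'),
    names: parts[1].split('|'),
    colours: parts[2].split('|'),
  };
}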
You could implement a file-like object that wraps the JSON file and spits out proper chunks.
For instance, since your JSON file is a single array of music objects, you could create a generator that wraps the file and returns chunks of that array.
You would have to do some string parsing to get the chunking of the JSON file right.
I don't know what generates your JSON content. If possible, I would consider generating a number of manageable files instead of one huge file.
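A sketch of that idea in Node.js; this version parses the file once and then yields slices, whereas a genuinely streaming version would need an incremental JSON parser:

const fs = require('fs');

function* chunkJsonArray(path, chunkSize = 1000) {
  const tracks = JSON.parse(fs.readFileSync(path, 'utf8'));  // expects one top-level array
  for (let i = 0; i < tracks.length; i += chunkSize) {
    yield tracks.slice(i, i + chunkSize);                    // one manageable piece at a time
  }
}

// e.g. write each chunk to its own file or response instead of one huge payload:
// for (const chunk of chunkJsonArray('library.json')) { ... }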
I would test the performance of sending the complete JSON in a single request. Chances are that the slowest part will be rendering the UI, not the response time of the JSON request. I recommend storing the JSON in a JavaScript object on the page and only rendering the UI dynamically as needed based on scrolling. The JavaScript object can serve as the data source for client-side scrolling. Should the JSON be too large, you may want to consider server-backed scrolling.
This solution will also be browser-agnostic (HTML < 5).
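A rough sketch of that scroll-driven rendering; the row height, element IDs, and song fields are illustrative, and the container is assumed to be a fixed-height scrollable element with a position:relative child:

var ROW_HEIGHT = 30; // px, illustrative

function renderVisible(songs, viewport, content) {
  content.style.height = songs.length * ROW_HEIGHT + 'px';        // keep the scrollbar proportional
  var first = Math.floor(viewport.scrollTop / ROW_HEIGHT);
  var count = Math.ceil(viewport.clientHeight / ROW_HEIGHT) + 1;
  content.innerHTML = songs.slice(first, first + count)
    .map(function (s, i) {
      var top = (first + i) * ROW_HEIGHT;
      return '<div style="position:absolute;top:' + top + 'px">' + s.name + ' - ' + s.album + '</div>';
    })
    .join('');                                                    // only the visible rows exist in the DOM
}

// var viewport = document.getElementById('playlist');            // scrollable container
// var content = document.getElementById('playlist-body');        // position:relative child
// viewport.addEventListener('scroll', function () { renderVisible(librarySongs, viewport, content); });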
