Simperium and binary (image / video) asset files from JS - javascript

Simperium looks like an awesome way to sync data across a variety of platforms and to deal with on/offline access from mobile.
For a project I'm working on, some of the data is in the form of generated image and video files. I can't find any information about whether it's possible to sync this kind of data through Simperium (I guess I could base64-encode the images, but that seems like a hack).
Or would I need to sync the URLs and then manually download these resources and somehow store them locally?

Simperium has basic support for binary files on the iOS side, currently in testing. This isn't yet available in the JavaScript library, but it will be. The way it works is similar to what you described. Simperium can handle the syncing of both a URL and its associated binary content in cases where that makes sense.
On iOS, binary files are stored to the local file system (though small files can indeed be stored as base64 encoded strings if you prefer).
In JavaScript, if you're working on the client side, the situation is less clear given the storage limits imposed by browsers, but you always have the option of syncing and using standard links, depending on what you're trying to do. On the server side, there are of course more options. If you have some use cases to share, you should get in touch.
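If you do go the base64 route in the meantime, the sketch below shows the general shape; bucket.update() here is a stand-in for whatever sync call your client exposes, since the Simperium JavaScript API for binary data isn't public yet:

function syncImage(bucket, id, file) {
    var reader = new FileReader();
    reader.onload = function() {
        // reader.result is a data: URL; keep only the base64 payload after the comma
        var base64 = reader.result.split(',')[1];
        // hypothetical sync call; substitute whatever your client actually exposes
        bucket.update(id, { mimeType: file.type, data: base64 });
    };
    reader.readAsDataURL(file);
}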

Related

File API, Persistent link to a local file

I'm currently working on a web script based on a game (porting the game to the web).
The script downloads data from my web host, so loading is slow (it has to download each file: maps, models, textures, ...).
To work around this, I added an option that allows users to select their local game data on their computer (using the File API with drag and drop) to parse content directly from disk and avoid downloading many megabytes from the web; the result is incredibly fast.
Here's the problem: each time they reload the browser, they have to re-select their files, again and again. It's not user-friendly.
So, is there a way to keep a reference to this game archive so the user doesn't have to redo the drag and drop every time? I know about the security concerns; I just want to know if there is something like a persistent URL.createObjectURL().
Note: the game data is about ~2 GB, so it's not possible to store it with the FileSystem API (and I don't want to copy it; it's a waste of space to copy data when you can just keep a reference to it).
Thank you :)
You have to have an input from the user
It is not possible to access files on a client's computer without the user confirming it. Once the user chooses a file (you can listen for this with the change event), you can use, for example, the FileReader API to read the file.
document.getElementById("input").addEventListener("change", function() {
    const reader = new FileReader();
    reader.onloadend = function() {
        // reader.result holds the file's text once reading has finished
        document.getElementById("output").innerText = reader.result;
    };
    reader.readAsText(this.files[0]);
});
<input id="input" type="file" />
<div id="output"></div>
Using localStorage to store a fraction of your files
You could use the localStorage API to store some of your files, but capacity is very limited, especially for a project like yours (a maximum of 5 to 10 MB on the most popular current browsers).
This would also make your code harder to write, as you would have to check every time what has been stored and load whatever has not been saved.
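As a rough sketch of that check-then-fetch pattern (loadAsset is a hypothetical helper; real code would also need to handle binary assets, which localStorage can only hold as strings):

function loadAsset(url, callback) {
    var cached = localStorage.getItem(url);
    if (cached !== null) return callback(cached);
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.onload = function() {
        try {
            localStorage.setItem(url, xhr.responseText);
        } catch (e) {
            // quota exceeded: the 5-10 MB budget is full, serve without caching
        }
        callback(xhr.responseText);
    };
    xhr.send();
}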
Caching
By using caching, you basically run into the same problem as with localStorage: each browser has its own maximum capacity.
The advantage of this method is that you do not have to worry about what has or hasn't been loaded, as the browser handles this by itself.
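If you do want explicit control over what gets cached, one option beyond plain HTTP caching is the Cache API inside a service worker. A sketch, assuming the game assets are served under a hypothetical /gamedata/ path:

self.addEventListener('fetch', function(event) {
    if (event.request.url.indexOf('/gamedata/') === -1) return;
    event.respondWith(
        caches.open('game-assets').then(function(cache) {
            return cache.match(event.request).then(function(hit) {
                // serve from the cache when possible, otherwise fetch and store a copy
                return hit || fetch(event.request).then(function(response) {
                    cache.put(event.request, response.clone());
                    return response;
                });
            });
        })
    );
});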
Using Flash
Now, if you really do not care about security at all, you could use a Flash plugin to store and load the files, and then use ExternalInterface to pass the data to your JavaScript code.
// ActionScript 3, inside the Flash plugin:
ExternalInterface.call("loaded", filename, data);

// And then in JavaScript:
// function loaded(filename, data) { /* store or use the data */ }
You could use SharedObject to save and load your data.
I am not an AS3 expert, please excuse any clumsiness.
A Desktop application?
The last option would be converting and bundling your game into a desktop application, for example through Electron, and then using something like NeDB, which is currently the tool Electron suggests for persistent storage.
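For illustration, a minimal NeDB sketch (the file name and document shape are placeholders):

// NeDB persists to a plain file on disk; 'gamedata.db' is a placeholder path
const Datastore = require('nedb');
const db = new Datastore({ filename: 'gamedata.db', autoload: true });

db.insert({ key: 'settings', volume: 0.8 }, function(err) {
    if (err) throw err;
    db.findOne({ key: 'settings' }, function(err, doc) {
        console.log(doc.volume); // 0.8
    });
});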
You may want to consider using IndexedDB. It is supported in recent browsers including Chrome, Firefox, and Safari (macOS and iOS). IndexedDB allows you to save Blob, File or ArrayBuffer as values in an IndexedDB object store.
See this question: IndexedDB: Store file as File or Blob or ArrayBuffer. What is the best option?
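For illustration, a minimal sketch of storing and reloading a File with IndexedDB (the database and key names here are just placeholders):

var open = indexedDB.open('asset-cache', 1);
open.onupgradeneeded = function() {
    open.result.createObjectStore('files');
};
open.onsuccess = function() {
    var db = open.result;
    // save a File (e.g. from a drop event) under a fixed key
    function saveFile(file) {
        db.transaction('files', 'readwrite').objectStore('files').put(file, 'archive');
    }
    // read it back on a later visit; result is undefined if nothing was saved
    function loadFile(callback) {
        var req = db.transaction('files').objectStore('files').get('archive');
        req.onsuccess = function() { callback(req.result); };
    }
};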

If I upload a new version of a javascript file to Amazon S3, should I expect browser caching problems?

We have a large number of people (10k+) who return to my clients' sites on a regular basis to use a web app we built, improve, and host for them. We have been making fairly frequent backward-incompatible updates to the web app's JavaScript as our app has improved and evolved. During deployments, the JavaScript is minified and concatenated into one file, loaded in the browser by require.js, and uploaded to and hosted on Amazon S3. The file name & URL currently don't change at all during updates. This last week we deployed a major refactor of the web app and got a few (but not a lot of) reports back that the app had stopped working for some people, particularly in Firefox. It seemed like a caching issue. We were able to see it initially in a few browsers in testing, but it seemed to go away after a refresh or two.
It dawned on me that I really don't know what browser-caching ramifications deploying a new version of a JavaScript file (with the same name) on S3 will have, and whether this situation warrants cache-busting, manipulating S3's headers, or anything else. Can someone help me get a handle on this? Are there actions I should be taking during deployments to ensure that browsers immediately get the new version of a JavaScript file? If not, we run the risk of the JavaScript and the server API being out of sync and failing, which I think is what happened here.
Not sure if it matters, but the site's server runs Django and the app and DB are deployed to Heroku. Static files are deployed to S3 using S3Boto via Django's collectstatic command.
This depends a lot on the behaviour of S3 and the headers it sends when files are requested from it. As you experienced, browsers show different caching behaviour, so the best option is to use unique filenames.
I would suggest using cache-buster hashes; this way you can be sure that browsers always request the new file, and you can use long cache-lifetime headers if you host the files on your own server.
You can, for example, create an MD5 hash of your minified file and append it to the filename (like mycss-322242fadcd23.css). Or you could use the revision number from your source control system. You have to use the cache buster in all links to this file, but you can normally do this easily in the templates where you embed your static resources. Depending on your application, you could probably use this Django plugin, which should do this work for you.
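As a rough illustration of the hash approach, a minimal Node.js sketch (the bundle name app.min.js and the 12-character hash length are placeholder choices):

const crypto = require('crypto');
const fs = require('fs');

// hash the minified bundle and write a copy under the hashed name
const source = fs.readFileSync('app.min.js');
const hash = crypto.createHash('md5').update(source).digest('hex').slice(0, 12);
const hashedName = 'app.min-' + hash + '.js';
fs.writeFileSync(hashedName, source);
// upload hashedName to S3 and reference it from your templates
console.log(hashedName);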

How can I generate and download a large file clientside using Javascript?

I have an application that displays a large set of data using Slickgrid. The dataset may be 30-50MB in size. I would like users to be able to download the current view of the data displayed. What is the best way to set this up?
I have considered the approach described here using data URIs, but the maximum size of a URI is much too small.
I have also considered the approach described here where the client posts arbitrary data to the server, which the server echos back as a download. I worry that the documents may exceed the maximum POST size.
Since you want to do this on the client side, and if HTML5 is an option, why not use the HTML5 FileSystem API?
One option is to use the HTML5 FileWriter API. As of today it's only supported in Chrome (and the BlackBerry browser of all things).
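A sketch of what that looks like with the prefixed Chrome API (the file name, size quota, and the csvData placeholder are illustrative):

// 'csvData' stands in for however you serialize the grid's current view
var csvData = 'col1,col2\n1,2';
window.webkitRequestFileSystem(window.TEMPORARY, 50 * 1024 * 1024, function(fs) {
    fs.root.getFile('export.csv', { create: true }, function(fileEntry) {
        fileEntry.createWriter(function(writer) {
            writer.onwriteend = function() {
                // fileEntry.toURL() gives a filesystem: URL to serve the download from
                window.location.href = fileEntry.toURL();
            };
            writer.write(new Blob([csvData], { type: 'text/csv' }));
        });
    });
});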

Web client cache

We are in the process of building a conceptual model for a web-based audio editor, and the first problem we've met is the client-side caching system. In my opinion as a server-side programmer, having a huge cache on the client side is a great idea, because in many cases it takes load off the server by avoiding repeated loads of the same data. Furthermore, such a cache could be a good candidate for a buffer for per-track operations, like filtering.
Our Flex programmer says that this is a big problem and is impossible in almost all cases. But I have serious doubts, because I know that the current version of Google Chrome can keep up to 2 GB in localStorage. Moreover, I've found this example of an online track editor, and its caching mechanism looks like it works pretty well.
Is it possible to cache some data (somewhere around 100-200 MB) on the client side using Flash and JS?
You can use SharedObject to store the data.
I am pretty sure the default size limit is too low for your needs, so your app will need to ask the user to accept a higher limit:
http://www.macromedia.com/support/documentation/en/flashplayer/help/help06.html
SharedObject is more reliable than the browser cache, and you control it from your app.
If you are using HTML5, you can store large data on the client side using HTML5's built-in database.
Here's what we did when writing a video editor. In Flash you can actually save files to the user's machine, with the restriction that it must be transparent to the user (i.e. the user initiates the action, goes through the OS dialog, and saves the file as they would normally save anything they download). Similarly, you can load a file from the user's computer, with the restriction that the user must initiate the action (by clicking with a pointing device or pressing a key).
This has certain advantages over the various local storage strategies, which are mostly opaque to users (people don't usually know how to erase cookies, SharedObjects, or the web storage that comes with more modern browsers, but they are perfectly capable of saving and deleting files on their own system). Furthermore, all the opaque local storages have restrictions that less savvy users might not know how to overcome, or that may not be possible to overcome at all: size, location, and ownership.
This will still be a bit of a hindrance for your audience, because every time they need to save a file they have to go through the OS dialog instead of just hitting Ctrl+S / Cmd+S / C-x C-s... But given all the other options, this, IMO, leaves the user with the most choice and delivers the best experience.
Another suggestion: you could, in principle, come up with a browser-based "enhanced" version of your application that users would install as a browser plugin (if it's an editor they use on a regular basis, why not?), in which case you wouldn't be limited to the clumsy options provided by web technologies. Chrome and Mozilla-based browsers encourage such development, though it's not standardized. Still, since these two browsers run on virtually any OS, that doesn't sound like locking your users into a particular platform...

Read EXIF data from img in JavaScript (cross-domain friendly)

I recently started building a bookmarklet that displays the EXIF data of any image on a website. I chose to use Nihilogic's binary.js and exif.js libraries. For images hosted on the same domain as the parent page, the script works wonderfully. But because it uses XHR to read the binary data of images, and because XHR is restricted to same-origin requests, I can't access the binary data of any cross-site-hosted images.
Is there a way to load the binary data of an image locally (or at least without using XHR) that preserves EXIF data?
I've explored a few other directions, but I fear I don't understand them well enough to determine if there's a solution:
JSONP - I'm assuming there's no way to get binary data into one of these things.
canvas tags - drawing an image to a canvas and re-encoding it produces very different base64 output than what PHP produces from the original file (the canvas only ever sees the decoded pixels), so I suspect the EXIF data no longer exists in the new encoding.
JavaScript is allowed to do special operations (like on the canvas) with same-origin images and CORS-enabled images because the browser can safely assume that those would be OK to upload to the server in the first place. But then it gets complicated...
I can't access the binary data of any cross-site-hosted images.
Yes, generally it is very important that you can't. More to the point, you can't do what you want with a bookmarklet.
You can't do this with a canvas either, because the CORS rules here are strict (for good reasons!).
In short, the reasoning is in general pretty much exactly the same. Browsers are in a unique security position: a random page on the internet can show you things that are private to you, such as a hypothetical image at C:\MyPhotos\privateImage1.jpg, assuming it could guess that file path.
But that webpage is most certainly not allowed to do anything with that file other than show it to you. It can't read the binary information (EXIF information or pixel information). JavaScript is not allowed to know what that image looks like or nearly any data associated with it.
If it were able to figure that out, a random webpage could try a bunch of file paths, eventually come across an image on your hard drive, and then upload that image's binary data to a server, in effect stealing your private image.
A browser extension would be far more suited to this task than a (JavaScript) bookmarklet because of this.
Purely from the client? I doubt it. How about XHR'ing a local PHP script that runs something like exif_imagetype()? That function works on remote images as well as local ones.
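On the bookmarklet side, that approach might look like the sketch below; /exif-proxy.php is a hypothetical same-origin script that downloads the remote image server-side and returns its EXIF fields as JSON:

function fetchExif(imgUrl, callback) {
    var xhr = new XMLHttpRequest();
    // the proxy does the cross-origin fetch, so this XHR stays same-origin
    xhr.open('GET', '/exif-proxy.php?url=' + encodeURIComponent(imgUrl));
    xhr.onload = function() { callback(JSON.parse(xhr.responseText)); };
    xhr.send();
}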
