I'm puzzled by the juxtaposition of pitching IndexedDB for offline (e.g.) single-page HTML apps with the fact that the documentation seems to indicate the browser can trash your local data at any time: "In addition, be aware that browsers can wipe out the database, such as in the following conditions"...
It seems like the options are
a) only design read-only offline apps or
b) just accept that once in a while some users of your offline app are going to get unlucky and lose all their work when the browser gets in a mood to delete your IndexedDB data.
My question is: is there any serious discussion of this issue anywhere, or (better, but too much to hope for) a serious read/write offline app that deals with the issue? My searches on the topic have been fruitless. For example, this complete offline todo app example manages to never mention the problem -- who wants to store even simple todo data in storage that the browser could wipe out at any moment and that can't trivially be backed up?
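(For what it's worth, one partial mitigation is to let users export their own data. A minimal sketch, with hypothetical database and store names, that reads a store out of IndexedDB and offers it as a downloadable JSON file:)

function backupStore(dbName, storeName) {
  var open = indexedDB.open(dbName);
  open.onsuccess = function () {
    var getAll = open.result.transaction(storeName, 'readonly')
                            .objectStore(storeName).getAll();
    getAll.onsuccess = function () {
      // Offer the store's contents as a JSON file the user can keep.
      var blob = new Blob([JSON.stringify(getAll.result)], { type: 'application/json' });
      var a = document.createElement('a');
      a.href = URL.createObjectURL(blob);
      a.download = storeName + '-backup.json';
      a.click();
    };
  };
}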
Related
Sharing cookies from the browser to the Electron app
I log in to my website. Then I start my Electron application, and I don't want to log in again; I need the cookies or tokens in order to stay logged in. Is it possible to share cookies from the browser with the Electron app?
Think about this from a security perspective: if any app could read any of the browser's cookies, then it would be simple to spy on users or impersonate them from the outside (via a malicious app, like spyware or something similar). So the answer is "maybe".
Firefox, for example, stores the cookies (assuming that you have not set a primary password for your profile) in an SQLite database in a well-defined folder. So you could definitely try to read them.
However, AFAIK most antivirus software is aware that this is a security problem and will thus nuke any app other than the browser which tries to access them.
So, as long as no antivirus software is installed and Firefox is used without a primary password, you "should be good".
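For illustration, here is roughly what reading those cookies could look like from Node/Electron. This is a sketch only: it assumes the better-sqlite3 package, and the profile folder name (which is randomized per install) is a placeholder.

const Database = require('better-sqlite3');

// Placeholder path; the real profile folder name varies per install.
const profile = '/home/user/.mozilla/firefox/XXXXXXXX.default-release';
const db = new Database(profile + '/cookies.sqlite', { readonly: true });

// moz_cookies is the table Firefox keeps its cookie jar in.
const rows = db
  .prepare('SELECT host, name, value FROM moz_cookies WHERE host LIKE ?')
  .all('%example.com');

console.log(rows);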
However, this is not a good idea, even from a user perspective: The connection between "I logged in via my browser" and "I am logged in in the app" is not intuitively clear. Also, some (most) users may consider this a breach of trust. After all, if you read their cookies, what else will you read? Who guarantees that you only use the cookies from your particular webpage? An app "randomly" reading your cookies is kind of creepy if you think about it.
Then there's another hurdle to overcome: how do you decide which of the multiple browsers installed on the system (and even for uninstalled ones, the users' profiles will probably still be left behind) is the "right" one? What do you do if multiple browsers have session cookies for your webpage? All this is not as easy as it might seem at first.
I suggest you look into other technologies, like OAuth2, which can reduce the "login process" inside your app to a single click when there's already a session open on the device. How this is implemented specifically is out of scope for this answer (and hard to explain and understand without the required background knowledge).
I've seen there are ways to store data on the client, e.g. using localStorage, sessionStorage, or IndexedDB.
AFAIK the main disadvantage of these technologies is that the browser may decide to clear out the stored data, say if the device is low on storage space (not sure if this is also true of localStorage).
I can't seem to find information on an alternative storage mechanism that is more persistent, i.e. one that won't get deleted by the browser based on some decision of its own.
Is there such a technology available? I am looking to use it next to ServiceWorkers for an offline first app.
I found something like this; is it included with Service Workers? (The article doesn't show much of the API.) How is browser support?
Clarification: I am fine with the data being deletable by the user; I just don't want the browser to delete it automatically based on some decision of its own.
Since your app runs on clients' devices and you don't have any real control over them, what you want is impossible. (Also, "the browser may decide to clear out the stored data" isn't quite the problem; rather, in some browsers and scenarios the app might not be able to store data, or even get a reference to storage, at all. Safari iframes and localStorage, for example, are not friends...)
Service workers do support IndexedDB, so why not use it?
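A minimal sketch of the idea: a service worker that queues incoming payloads in IndexedDB (the database and store names below are made up for illustration).

// sw.js
self.addEventListener('message', function (event) {
  var open = indexedDB.open('offline-queue', 1);
  open.onupgradeneeded = function () {
    open.result.createObjectStore('pending', { autoIncrement: true });
  };
  open.onsuccess = function () {
    // Stash the payload so it survives until the next sync opportunity.
    open.result.transaction('pending', 'readwrite')
               .objectStore('pending').add(event.data);
  };
});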
The app
I have a web app that currently uses AppCache for offline functionality since users of the system need to create documents offline. The document is first created offline and when internet access is available, the user can click "sync" which will send the document to the server and save it as a revision. To be more specific, the app does not save the change delta as a revision (the exact field modified) but rather the whole document in its entirety. So in other words, a "snapshot" document is saved.
The problem
Users can log in from different browsers and devices and work on their documents. When they click "sync", if the server's document is newer, the client's entire version is overwritten by the server's. This leads to one main issue: work done offline on one device can be silently wiped out by a newer snapshot that reached the server from another device.
This scenario occurs because the current implementation does not rely on deltas (small changes) but rather on snapshot revisions.
Some questions
1) My research indicates that I should be upgrading the "sync" mechanism to be expressed in deltas (small changes that can be applied independently). Is this a sound approach?
2) Should each delta be applied independently?
3) According to my research, revision deltas have a numeric value, not a timestamp. What exactly should this value be? How would I ensure both the server and the client agree on what the revision number should be?
Stack information
Angular on the frontend
IndexedDB to save documents locally (offline mode)
Postgres DB with JSONB in the backend
What you're describing is a version-control issue, like in this question. How you resolve it is your choice. Here are a few examples of other products with this problem:
Google Docs: A edits offline, B edits online; when A comes back online and syncs, Google Docs merges A's and B's edits
Apple Notes: same as Google Docs
Git/Subversion: throw an error, ask the user to resolve conflicts
Wunderlist: last edit overwrites the previous one
For your case, the simplest solution is to use Wunderlist's approach, but it seems that may cause a usability issue. What do your users expect to happen?
Answering your questions directly:
1) A custom sync implementation is necessary if you don't want overwrites.
2) This is a usability decision: what does the user expect?
3) True, revisions are numeric (e.g. r1, r2). To get server agreement, alter the return value of the last sync request: return the entire model to the client each time (or just a 200 OK when a normal sync happened), and if a model is returned, update the client with that latest model.
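As a rough illustration of that round trip (the endpoint name and payload shape are assumptions, not part of the original design):

// The client sends the revision it last saw; the server either accepts the
// snapshot and bumps the revision (r1 -> r2), or returns the newer model.
function sync(doc) {
  return fetch('/api/sync', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ id: doc.id, revision: doc.revision, snapshot: doc.data })
  })
    .then(function (res) { return res.json(); })
    .then(function (body) {
      doc.revision = body.revision;          // both sides now agree on the number
      if (body.model) doc.data = body.model; // adopt the server's newer model
      return doc;
    });
}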
In any case, the server should always be the source of truth. This post provides some good advice on server/mobile referential integrity:
To track inserts you need a Created timestamp ... To track updates you need to track a LastUpdate timestamp on your rows ... To track deletes you need a tombstone table.
Note that when you do a sync, you need to check the time offset between the server and the mobile device, and you need to have a method for resolving conflicts. Inserts are no big deal (they shouldn't conflict), but updates could conflict, and a delete could conflict with an update.
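A sketch of what such a conflict check might look like, assuming rows that carry a lastUpdate timestamp and tombstones for deletes (the field names are hypothetical):

function resolve(serverRow, clientRow, clockOffsetMs) {
  // Normalize the device's clock against the server's before comparing.
  var clientTime = clientRow.lastUpdate + clockOffsetMs;
  if (!serverRow) return clientRow;        // insert: no conflict possible
  if (serverRow.deleted) return serverRow; // a delete beats a stale update
  return clientTime > serverRow.lastUpdate ? clientRow : serverRow; // last write wins
}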
My question is a follow-up to this topic. I love the simplicity and performance of Firebase from what I have seen so far.
As I understand it, firebase.js syncs data snapshots from the server into an object in JavaScript memory. However, there is currently no functionality to cache this data to disk.
As a result:
Applications are required to have a connection when they start up, so there is no true offline access.
Bandwidth is wasted every time an app starts up by re-transmitting all previous data.
Since the snapshot data is sitting in memory as a JavaScript object, it should be quite trivial to serialize it as JSON and save it to localStorage, so the exact application state can be loaded the next time the app is started, online or not. But as the firebase.js code is minified and cryptic, I have no idea where to look.
PouchDB handles this very well on a CouchDB backend. (But it lacks the quick response time and simplicity of Firebase.)
So my questions are:
1. What data would I need to serialize to save a snapshot to localStorage? How can I then load this back into Firebase when the app starts?
2. Where can I download the original non-minified dev source code for firebase.js?
(By the way, two features that would help Firebase blow the competition out of the water: offline caching and map reduce.)
Offline caching and map reduce-like functionality are both in development. The firebase.js source is available here for dev and debugging.
You can serialize a snapshot locally using exportVal to preserve all priority data. If you aren't using priorities, a simple value will do:
var fb = new Firebase(URL);
fb.once('value', function (snapshot) {
  console.log('values with priorities', snapshot.exportVal());
  console.log('values without priorities', snapshot.val());
});
Later, if Firebase is offline (use .info/connected to help determine this) when your app is loaded, you can call .set() to put that data back into the local Firebase. When/if Firebase comes online, it will be synced.
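Putting those pieces together, a rough sketch (URL is a placeholder for your Firebase root; whether nested priorities survive a plain set() of the export format is worth verifying against the docs):

var fb = new Firebase(URL);

// Cache the latest snapshot; exportVal() keeps priorities, val() does not.
fb.on('value', function (snapshot) {
  localStorage.setItem('fbCache', JSON.stringify(snapshot.exportVal()));
});

// On startup, seed Firebase from the cache if we appear to be offline.
new Firebase(URL + '/.info/connected').once('value', function (snap) {
  var cached = localStorage.getItem('fbCache');
  if (snap.val() === false && cached) {
    fb.set(JSON.parse(cached)); // will sync up when the connection returns
  }
});

Keep in mind localStorage is typically capped at around 5 MB, so this only works for small datasets.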
However, this is truly only suitable for static data that only one person will access and change. Consider, for example, the fallout if I download the data and keep it locally for a week while several other users modify it; then I load my app offline, make one minor change, and come back online. My stale changes would blow away all the work done in between.
There are lots of ways to deal with this--conflict resolution, using security rules and update counters/timestamps to detect stale data and prevent regressions--but this isn't a simple affair and needs deep consideration before you head down this route.
I am building applications that are used on a touch screen in an educational environment. The applications gather data from user input. The data is then sent to a server. There are multiple units, and whilst exact synchronisation is not paramount, the gathered data (along with other data collected from another source) will be combined and distributed back to the touch screen applications.
The applications are being built in Backbone, with initial data loaded from a single JSON document. The JSON document is produced from a remote MySQL database and is downloaded (along with assets) on initialisation.
Whilst the app should, when possible, send new data back to the remote MySQL DB as soon as it is gathered, this may not always be possible, and I need to collect the data so as to send it when I can.
My first thoughts are that storing everything in localstorage and syncing whenever possible (clearing the localstorage each time a successful sync takes place) is the way to go.
Over the bank holiday weekend, I have been playing with meteor.js, and I think that if I write my own localStorage solution I may be reinventing the wheel, and a tricky wheel at that. It seems that Meteor.js has a way of mimicking a database offline in order to fake instant updating.
My question is: How can I use a similar technique to add some offline protection? Is there a JS framework, or backbone plugin I can utilise, or a technique I can tap into?
You can use Backbone.localStorage to save your models and collections to localStorage while the connection is offline.
Detecting whether your user is offline is as easy as noticing that your XHR requests are failing (https://stackoverflow.com/a/189443/1448860e).
To combine these two:
When you suspect the user is offline (an AJAX request to your backend gets no response), switch to Backbone.localStorage and store everything there. Inform the user!
When the user gets Internet connectivity again, save any changes from localStorage to the server. Inform the user again!
Voilà!
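A minimal sketch of that fallback, assuming the Backbone.localStorage plugin (the collection name and endpoint are placeholders):

var Docs = Backbone.Collection.extend({
  url: '/api/docs',
  localStorage: new Backbone.LocalStorage('docs-offline')
});

var docs = new Docs();
var attrs = { text: 'some user input' }; // the new model's attributes

// Try the server first; ajaxSync: true forces a normal XHR despite localStorage.
docs.create(attrs, {
  ajaxSync: true,
  error: function (model) {
    // The XHR failed: assume we're offline, persist locally, tell the user.
    model.save(null, { ajaxSync: false });
    console.log('Offline: saved locally, will sync when back online.');
  }
});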