Meteor reactive publish based on client session variable - javascript

I'm making a Meteor JS web app that presents the client with an HTML range slider tied to a session variable.
I want the server to publish only data with values less than the current value of the slider, sorted from newest to oldest. I have a lot of database entries (2000+). If I publish everything up to the slider's maximum, my browser is too slow. If I limit the publication to 100 entries or so, I miss out on a lot of data with small values (which happen to be older) when I bring the slider down.
What are the best practices for staying scalable (not sending too much data to the client)? Is a reactive publish function the key (re-subscribing on the slider's change event with its value as the argument)? That sounds like a lot of server round trips. Help!

Would pagination be acceptable from a UX standpoint? If so, there are packages that may help, for instance alethes:pages.
Otherwise, Adam is on the right track in suggesting Tracker.autorun (Tracker has replaced Deps).
As with any other publication, make sure your publish function only returns the fields that you need on the client, in order to minimize the data transferred and the memory consumption.
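A minimal sketch of that pattern, assuming a collection named Entries with value and createdAt fields (the collection, field, publication, and session-variable names are all assumptions):

```javascript
// server: parameterized publication that only returns matching documents
// and only the fields the client needs
Meteor.publish('entriesBelow', function (maxValue) {
  check(maxValue, Number); // validate the argument (requires the check package)
  return Entries.find(
    { value: { $lt: maxValue } },
    {
      sort: { createdAt: -1 },           // newest to oldest
      limit: 100,                        // cap what one subscription can send
      fields: { value: 1, createdAt: 1 } // strip everything else
    }
  );
});

// client: re-subscribe whenever the reactive session variable changes
Tracker.autorun(function () {
  Meteor.subscribe('entriesBelow', Session.get('sliderValue'));
});
```

Re-subscribing is cheaper than it sounds: Meteor's merge box diffs the old and new result sets on the server, so only documents that actually entered or left the set are sent over the wire.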

Related

How to store an object that is retrievable from the client-side in Javascript

Beginner question: I built a simple draggable to-do list that caches its state in a single object (tasks, containers, and index); currently it's stored in local storage. I am working on the server side using Express and Node.js, but I am confused as to where I would store the object. Would a database like MongoDB be a good choice, or is there an even simpler option? I assume I can keep the project static and have the server side just receive and serve JSON? Thanks!
If you plan to integrate it with a backend server, it is actually a good idea to store the object in a database. The benefit is that you can maintain the state of your to-do list no matter which machine you log in from: whether you access the app from the browser on your smartphone or on your desktop, both point to a single source of truth, your database. Think of it as a Trello board that stays in sync on every device. In your database you might record the task status, task ID, description, etc. If you want to go further, you can group this information per user, so every user has their own to-do list (which is not possible if you rely on conventional local storage). With a database, you can extend the functionality well beyond a simple to-do list.
Alternatively, you may consider a much simpler solution: serialize the object to a JSON file and store it on your server. This is feasible, albeit with limited flexibility.
I would recommend MongoDB Atlas or Firebase Realtime Database, as both are beginner-friendly and easy to use. Both are free of charge at limited usage and hosted in the cloud.
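If you go the simple JSON-file route, the Express side can stay very small. A minimal sketch, assuming a single shared state object (the route names and file path are assumptions):

```javascript
const express = require('express');
const fs = require('fs');

const app = express();
app.use(express.json());           // parse JSON request bodies
app.use(express.static('public')); // serve the static front-end

const STATE_FILE = './todo-state.json'; // hypothetical storage location

// return the saved state, or an empty default if nothing was saved yet
app.get('/api/state', (req, res) => {
  fs.readFile(STATE_FILE, 'utf8', (err, data) => {
    if (err) return res.json({ tasks: [], containers: [], index: 0 });
    res.type('json').send(data);
  });
});

// overwrite the saved state with whatever the client sends
app.put('/api/state', (req, res) => {
  fs.writeFile(STATE_FILE, JSON.stringify(req.body), (err) => {
    if (err) return res.sendStatus(500);
    res.sendStatus(204);
  });
});

app.listen(3000);
```

Swapping the fs calls for database reads and writes later would not change the API the front-end sees.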

AngularJS and MySQL real-time communication

I have built a web application using AngularJS (front-end) and PHP/MySQL (back-end).
I was wondering if there is a way to "watch" the MySQL database (without Node.js), so that if one user adds some data, the changes are synced to the other users too.
E.g. I know Firebase does that, but it's an object-oriented database and I am unable to do the advanced queries there that I can with SQL.
I was thinking of using $interval and $http to make AJAX requests, so that way I could detect changes in the database. That's possible, but it would mean thousands of HTTP requests to the server every day, plus interpreting PHP on each request.
I believe nothing is impossible; I just need an idea for how to do this, which I don't have, so that's why I am asking for help here.
If you want a form of "real-time communication" you'll likely have to incorporate some form of long-polling from the client (unless you use WebSockets, but that's a big topic covering a bunch of different things). You're right to be concerned about bandwidth and demand on the DB, though. So here's my suggestion:
If you don't have experience with WebSockets, log your events in a separate table/view and use the pub/sub method: subscribe entities to an event and broadcast that event to the table. Then long-poll against the watcher view to see whether changes have occurred, and if one did, query for the exact value. A sketch of the client side follows.
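A minimal sketch of the polling loop in AngularJS, assuming a lightweight /api/events endpoint that returns only the events newer than a given id (the endpoint, its response shape, and the 'app' module are assumptions):

```javascript
// assumes the 'app' module was created elsewhere via angular.module('app', [])
angular.module('app').controller('SyncCtrl', function ($scope, $http, $timeout) {
  var lastEventId = 0;

  function poll() {
    $http.get('/api/events', { params: { since: lastEventId } })
      .then(function (res) {
        var events = res.data.events || [];
        if (events.length) {
          lastEventId = events[events.length - 1].id;
          // only now fetch the exact rows that actually changed
          $scope.$broadcast('db:changed', events);
        }
      })
      .finally(function () {
        $timeout(poll, 2000); // re-poll after a short delay; tune to your load
      });
  }

  poll();
});
```

Because the events table is tiny compared to the real data, each poll stays cheap; the expensive queries only run when something actually changed.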
Another option would be to use a queue system with "deciders" that hold messages. Take a look at Amazon's SQS platform for a better explanation of how this could work. Basically you have a queue that holds messages, and a decider chooses where to store each message using some hash or sorting method (to reduce run time). When the client requests an update, the decider finds any messages that apply based on the hash/sort and returns them. Then you just have to decide how and when to delete the messages.
The second option requires a lot more tinkering, though, so it's really about your preference. I think the difficulty you'll run into is that most solutions have to deal with the fact that a message is delivered one or more times, so you'll need to track when someone has received the message and whether it can then be deleted from the queue/event table, or whether you still need to wait. Otherwise you'll consume a lot of memory.

Getting user info using Client ID

I have inserted the analytics.js tracking script into my code, and now I am trying to get user data such as medium, source, etc. using JavaScript and put it into variables. Is there a way I can do this using the Client ID?
I assume you mean getting the data in real time for use in your website. That is not possible.
The Client ID is not exposed in the reporting interface by default; you'd need to use a custom dimension.
There is a processing delay, so report data may only be reliable the next day.
While there is the (less reliable) data from the Real Time API (which at least contains medium and source information), it does not support custom dimensions, so you could not use the Client ID as a query key.
Also, to retrieve data from the API you need to be authenticated, which the current user of your webpage is not. So you would need to set up some kind of server-side proxy that handles authentication for you.
There are also API limits determining how many requests you can make in a given time frame; even a small site would exhaust those pretty quickly.
So while in theory this sounds doable, it is not actually feasible for any real-life purpose.
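For completeness, recording the Client ID into a custom dimension (the prerequisite mentioned above) happens at tracking time. A minimal sketch, assuming you have created a hit-scoped custom dimension at index 1 (the index is an assumption):

```javascript
// copy the Client ID into a custom dimension on every pageview
ga(function (tracker) {
  var clientId = tracker.get('clientId'); // analytics.js exposes it here
  ga('set', 'dimension1', clientId);      // dimension1 = your assumed custom dimension
  ga('send', 'pageview');
});
```

Even with this in place, the data only becomes queryable after Google's processing delay, so it does not help with the real-time use case.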

Improve client-server data sync functionality with deltas

The app
I have a web app that currently uses AppCache for offline functionality, since users of the system need to create documents offline. The document is first created offline, and when internet access is available the user can click "sync", which sends the document to the server and saves it as a revision. To be more specific, the app does not save the change delta as a revision (the exact fields modified) but rather the whole document in its entirety; in other words, a "snapshot" of the document is saved.
The problem
Users can log in from different browsers and devices and work on their documents. When they click "sync", if the server's document is newer, the client's entire version is overwritten by the server's. This leads to one main issue, depicted in the image below.
The scenario above occurs because the current implementation does not rely on deltas (small changes) but on snapshot revisions.
Some questions
1) My research indicates that I should be upgrading the "sync" mechanism to be expressed in deltas (small changes that can be applied independently). Is this a sound approach?
2) Should each delta be applied independently?
3) According to my research, revision deltas have a numeric value and not a timestamp. What exactly should the value be? How would I ensure that both the server and the client agree on what the revision number should be?
Stack information
Angular on the frontend
IndexedDB to save documents locally (offline mode)
Postgres DB with JSONB in the backend
What you're describing is a version control issue, like in this question. How to resolve it is your choice. Here are a few examples of how other products handle this problem:
Google Docs: A makes an edit offline, B makes an edit online, A goes online, sync, Google Docs combines A's and B's edits
Apple Notes: same as Google Docs
Git/Subversion: throw an error, ask the user to resolve conflicts
Wunderlist: last edit overwrites the previous one
For your case, the simplest solution is to use Wunderlist's approach, but it seems that may cause a usability issue. What do your users expect to happen?
Answering your questions directly:
1) A custom sync implementation is necessary if you don't want overwrites.
2) This is a usability decision; what does the user expect?
3) True, revisions are numeric (e.g. r1, r2). To get server agreement, alter the return value of the last sync request. You can return the entire model to the client each time (or just a 200 OK if a normal sync happened). If a model is returned to the client, update the client with that latest model. A sketch follows.
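A minimal sketch of such an endpoint, assuming an Express-style server; db and applyDelta are hypothetical stand-ins for your data layer:

```javascript
// sync endpoint with numeric revisions: fast-forward or reject-and-return
app.post('/api/documents/:id/sync', async function (req, res) {
  var baseRevision = req.body.baseRevision; // last revision the client saw
  var delta = req.body.delta;

  var doc = await db.getDocument(req.params.id); // hypothetical data layer

  if (baseRevision === doc.revision) {
    // client was up to date: apply the delta and bump the revision
    var updated = applyDelta(doc.data, delta); // hypothetical merge helper
    await db.saveDocument(req.params.id, updated, doc.revision + 1);
    return res.json({ revision: doc.revision + 1 }); // normal sync
  }

  // client was behind: return the latest model so it can rebase and retry
  res.status(409).json({ revision: doc.revision, model: doc.data });
});
```

Because the server assigns every new revision number itself, the client never has to guess; it simply records whatever number the server returns.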
In any case, the server should always be the source of truth. This post provides some good advice on server/mobile referential integrity:
To track inserts you need a Created timestamp ... To track updates you need to track a LastUpdate timestamp on your rows ... To track deletes you need a tombstone table.
Note that when you do a sync, you need to check the time offset between the server and the mobile device, and you need to have a method for resolving conflicts. Inserts are no big deal (they shouldn't conflict), but updates could conflict, and a delete could conflict with an update.
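A sketch of what that bookkeeping could look like, assuming rows carry created and lastUpdate timestamps and deletes are recorded as tombstones (all field names are assumptions):

```javascript
// classify server-side changes since the client's last successful sync
function changesSince(rows, tombstones, lastSyncTime) {
  return {
    inserted: rows.filter(function (r) { return r.created > lastSyncTime; }),
    updated: rows.filter(function (r) {
      return r.created <= lastSyncTime && r.lastUpdate > lastSyncTime;
    }),
    deleted: tombstones
      .filter(function (t) { return t.deletedAt > lastSyncTime; })
      .map(function (t) { return t.rowId; })
  };
}
```

The tombstone list is what lets a delete conflict with an update: both sides report a change to the same row, and your resolution rule decides which wins.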

What is a good way to store big JSON objects in couchDb?

I work on a web app which stores project data. Data is saved in a couchDb database A. The app pulls and pushes data with a local pouchDb database B, which is synced with A.
So the app can also work offline. When the user has their connection back, changes made to local DB B while offline are sent to A using a classic replication.
I store one document per project in couchDb; it is a big JSON object with lots of data (project todos, collaborators, advancements, risks, problems, etc.).
It is working like a charm, but I have some problems, and it seems I am using pouchDb the wrong way. Example situation:
User A is offline and he adds a todo on project 1.
User B is online and he adds a new collaborator on project 1.
User B changes are pushed to couchDb by the automatic sync.
The project 1 _rev has been incremented.
User B pulls his own changes from couchDb, because the app downloads all documents whenever any couchDb change is detected. Weird... I don't know how to prevent that. But the app still works fine, so it's not a big problem.
User A gets his connection back.
User A's changes are ignored because of the older _rev. But the user modified a different project property; can couchDb detect that itself and merge with the newer _rev?
I clearly see my problem: I'm using one document per project. I could use thousands of documents to store each property of each project, and my problem wouldn't happen, but that seems quite weird: to retrieve all the data of a project I would have to scan my whole database, check each document's type (collaborator, todo, ...?), and check whether the document is linked to the project by adding a new _projectId property to every document.
Currently I just have to request one document, which contains all the project data, and then I manipulate my JSON easily. It's quite convenient to handle.
How should I manage this? A project may contain on average 10 to 10,000 properties that multiple users can edit while online or offline.
But the user modified a different project property; can couchDb detect that itself and merge with the newer _rev?
PouchDB/CouchDB conflict handling is described in the PouchDB guide: http://pouchdb.com/guides/conflicts.html
the app downloads all documents whenever any couchDb change is detected. Weird... I don't know how to prevent that.
This is standard PouchDB/CouchDB behavior: you asked it to sync the whole database, so it synced the whole database. :) You can prevent it by using filtered replication: http://pouchdb.com/api.html#filtered-replication.
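A minimal sketch, assuming each document carries a projectId field and currentProjectId identifies the open project (both are assumptions):

```javascript
var localDB = new PouchDB('projects');
var remoteDB = new PouchDB('https://example.com/couchdb/projects');

localDB.sync(remoteDB, {
  live: true,   // keep syncing as changes arrive
  retry: true,  // survive temporary connection loss
  // a selector is evaluated on the server (CouchDB 2+),
  // so unrelated documents are never transferred
  selector: { projectId: currentProjectId }
});
```

With an older CouchDB you would use a design-document filter instead of a selector, but the principle is the same.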
How should I manage this? A project may contain on average 10 to 10,000 properties that multiple users can edit while online or offline.
It really depends on your data: how frequently it may change, what the unique identifier of a single "property" is, and so on. Storing 10,000 separate documents in PouchDB/CouchDB is not a crazy idea, though, and may help you out when it comes to conflicts, since only the individual documents touched by both users can ever be in conflict.
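It also need not mean a full database scan to load a project. A sketch of one way to model it, where the _id scheme and field names are assumptions:

```javascript
// one small document per item, with a composite _id that encodes the project
var todoDoc = {
  _id: 'project:42:todo:7c9a', // sorts together with the rest of project 42
  type: 'todo',
  projectId: '42',
  text: 'Ship the release',
  done: false
};

// fetch everything belonging to one project with an _id prefix range query
db.allDocs({
  include_docs: true,
  startkey: 'project:42:',
  endkey: 'project:42:\ufff0' // \ufff0 sorts after any ordinary character
}).then(function (result) {
  var docs = result.rows.map(function (row) { return row.doc; });
  // reassemble the in-memory project object from `docs`
});
```

This keeps the convenient "load the whole project at once" workflow while shrinking the unit of conflict from the whole project to a single item.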
In general, I'd recommend that you read the guide to conflict resolution linked above and review your options. There's also a plugin that may help you with conflict resolution: https://github.com/jo/pouch-resolve-conflicts
