Meteor server-side update/insert vs client side update/insert - javascript

I have a question about the advantages and disadvantages of doing a collection update/insert on the client versus the server. For example, say I have a method that takes the current player, unsets them as the current player, and then creates a new current player.
Meteor.methods({
  currentPlayer : function () {
    var id = Player.findOne({current: true})._id;
    Player.update(id, {$set: {current: false}});
    Player.insert({current: true});
    ...
What would be the advantages to doing this on the server vs doing the exact same thing on the client side:
'click #add' : function () {
  var id = Player.findOne({current: true})._id;
  Player.update(id, {$set: {current: false}});
  Player.insert({current: true});
  ...
Maybe there aren't any inherently important differences or advantages to either technique, but if there are, I would like to be aware of them. Thanks for your input!

I think Akshat has some great points. Basically there isn't a lot of difference in terms of latency compensation if you define the method on both the client and the server. In my opinion, there are a couple of reasons to use a method:
The operation can only be completed on the server or it results in some side effect that only makes sense on the server (e.g. sending an email).
You are doing an update and the permissions for doing the update are complex. For example maybe only the leader of a game can update certain properties of the players. Cases like that are extremely hard to express in allow/deny rules, but are easy to write using methods.
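For instance, a minimal sketch of such a method (the Games collection, leaderId field, and playerIds array are assumptions for illustration):
Meteor.methods({
  setPlayerName: function (playerId, name) {
    // hypothetical rule: only the game's leader may rename its players
    var game = Games.findOne({playerIds: playerId});
    if (!game || game.leaderId !== this.userId) {
      throw new Meteor.Error('not-authorized');
    }
    Player.update(playerId, {$set: {name: name}});
  }
});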
Personally, I prefer using methods in large projects because I find it's easier to reason about state mutations when all of the changes are forced to funnel through a small set of functions.
On the other hand, if you are working on a smaller project that doesn't have a lot of complex update rules, doing direct collection mutations may be a bit faster to write.

The main difference is latency compensation.
Under the hood, Player.update/insert/remove uses a Meteor.call anyway. The difference is that the client simulates the result of a successful operation in the browser before it has actually happened on the server.
So say your server is somewhere on the other side of the world, with 2-3 seconds of latency. If you update your player using Player.insert/update, the change is reflected instantly, as if it had already been inserted or updated. This can make the UI feel responsive.
Using a Meteor.method defined only on the server waits for the server to send back the updated record, meaning an update would take the 2-3 seconds to be reflected in your UI.
Using methods, you can be sure the data has been inserted on the server, at the cost of UI responsiveness. (You could also use the Player.insert and Player.update callbacks for this.)
With Meteor.methods you can also get this same latency compensation effect by defining the same method on the client side; the client version runs as a stub that simulates the result while the real call is in flight.
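A minimal sketch: defining the method in a file loaded on both client and server gives you the stub automatically:
// in a file shared by client and server
Meteor.methods({
  currentPlayer: function () {
    var id = Player.findOne({current: true})._id;
    Player.update(id, {$set: {current: false}});
    Player.insert({current: true});
  }
});

// on the client, the stub simulates the writes immediately;
// the server's real result wins once it arrives
Meteor.call('currentPlayer');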
There are more details on the specifics of how to do this in the docs: http://docs.meteor.com/#meteor_methods

Related

How to share a pouchDB between main thread and a web worker

In a web app using PouchDB, I have a slow-running function that finishes by updating a document in the DB. I want to move it off the main UI thread and into a web worker. However, we have lots of other code using PouchDB still in the main thread (e.g. the change event listener, but also code that deals with other documents). (For reference, the database size is on the order of 100MB; Vue 2 is used, so, in general, the UI can update when the data changes.)
This is where I seem to come unstuck immediately:
Shared memory is basically out, as all the browsers disable it by default
Even if it weren't, PouchDB is a class, and cannot be transferred(?).
Isolating all the db code, including the changes handler, into one web worker is a huge refactor; and then we still have the issue of having to pass huge chunks of data in and out of that web worker.
Moving all the code that uses the data into the web worker too, and just having the UI thread pass messages back and forth, is an even bigger refactor, and I've not thought through how it might interfere with Vue.
That seems to leave us with a choice of two extremes. Either rewrite the whole app from the ground up, possibly dropping Vue, or just do the slow, complex calculation in a web worker, then have it pass back the result, and continue to do the db.put() in the main UI thread.
Is it really an all or nothing situation? Are there any PouchDB "tricks" that allow working with web workers, and if so will we need to implement locking?
You're missing an option that I would choose in your situation: write a simple adapter that allows your worker code to query the DB in the main thread via messages. Get your data, process it in the worker, and send it back.
You only need to "wrap" the methods that you need in the worker. I recommend writing a class, or a set of async functions, in your worker to keep the code readable.
You don't need to worry about the amount of data passed. The serialization and deserialization are quite fast, and the transfer is basically a memcpy, so it does not take any appreciable time.
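A minimal sketch of that adapter idea, assuming a promise-returning PouchDB handle named db in the main thread (the message shape and the dbCall helper are illustrative, not part of PouchDB):
// main thread: answer query messages coming from the worker
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
  var msg = e.data; // {id, method, args}
  db[msg.method].apply(db, msg.args).then(function (result) {
    worker.postMessage({id: msg.id, result: result});
  }, function (err) {
    worker.postMessage({id: msg.id, error: String(err)});
  });
};

// worker: async wrapper that resolves when the main thread replies
var nextId = 0;
var pending = {};
function dbCall(method) {
  var args = Array.prototype.slice.call(arguments, 1);
  return new Promise(function (resolve, reject) {
    var id = nextId++;
    pending[id] = {resolve: resolve, reject: reject};
    self.postMessage({id: id, method: method, args: args});
  });
}
self.onmessage = function (e) {
  var p = pending[e.data.id];
  delete pending[e.data.id];
  e.data.error ? p.reject(e.data.error) : p.resolve(e.data.result);
};

// usage inside the worker, e.g.: dbCall('get', 'some-doc-id').then(...)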
I found this adapter plugin, which I guess counts as the "PouchDB trick" I was after: https://github.com/pouchdb-community/worker-pouch
It was trivial to add (see below), and has been used in production for 6-7 weeks, and appears to have fixed the problems we saw. (I say appears, as it is quite hard to see it having any effect, and we didn't have a good way to reproduce the slowdown problems users were seeing.)
const PouchDB = require('pouchdb-browser').default
const pouchdbWorker = require('worker-pouch')
PouchDB.adapter('worker', pouchdbWorker)
The real code is like this, but usePouchDBWorker has always been kept as true:
const PouchDB = require('pouchdb-browser').default
// const pouchdbDebug = require('pouchdb-debug')
if (usePouchDBWorker) {
  const pouchdbWorker = require('worker-pouch')
  PouchDB.adapter('worker', pouchdbWorker)
}
This code is used in both the web app and Electron builds. The web app is never used with older web browsers, so check the GitHub page if that might be a concern for your own use case.

Is node.js event loop like an ajax call?

I am confused about Node.js' advantages over other tech. I've been reading this article: http://www.toptal.com/nodejs/why-the-hell-would-i-use-node-js and this question, How to decide when to use Node.js?, to familiarize myself with it, and they have left me confused.
I am familiar with CPU-intensive tasks, like the computation of the Fibonacci series, but that's where my understanding ends.
For example, if I have a REST API that does all the computation or recommendation and is housed on a different server from the machine running Node, then Node.js won't have any trouble dealing with CPU-intensive tasks. It just calls the API and tells the client that the request is acknowledged.
I can't shake the thought of comparing Node.js with a simple AJAX call that sends a request from a form to the server, displays a ticker, then shows the result. I am guessing that Node.js is a web server doing lots of "AJAX-type" calls and handling concurrent connections.
Are my assumptions correct?
Is it also correct to assume that retrieving data from a database is an I/O operation, but creating a complex report from that data is a CPU-intensive one?
You are right about handling many AJAX requests; however, that's true in a worker-based model as well (PHP/Python worker threads).
The main difference is that in an event-based system there is only one worker doing all the computational parts of the code (such as filtering data, extra processing, etc.). When it calls I/O operations, like reading from a file or the database, Node doesn't control how long those take; instead of waiting for them to finish, it puts a callback in the queue and moves on with the next item in the queue (if any).
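For instance, in Node the single thread hands the I/O off and keeps going; a minimal sketch:
var fs = require('fs');

fs.readFile('orders.txt', 'utf8', function (err, data) {
  // runs later, once the read has finished
  if (err) throw err;
  console.log('order data ready:', data.length, 'characters');
});

// the thread does not wait for the file; it moves straight on
console.log('taking the next order...');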
As an analogy, think of a pizza outlet where only one person takes the order, hands it over to the kitchen, and, once the pizza is ready, cuts it, packs it, and gives it to the customer. Wherever there is a wait, he just moves on to the next task. This is what Node does; that person won't hang around next to the kitchen until the pizza gets cooked.
For the worker-based approach, think of bank tellers: you see a couple of them (maybe 5 or so); they take every kind of request, but they don't switch between customers/requests.
Refer to these resources for a deeper understanding of how JavaScript event loop works.
https://www.youtube.com/watch?v=8aGhZQkoFbQ
http://latentflip.com/loupe/
I can't answer all your doubts, but I'd like to give you some clarity about AJAX.
AJAX - Asynchronous JavaScript + XML - is a technique for making requests to a server. A Node.js server knows how to handle such requests, but saying that is the only thing it can do would be absolutely wrong. Node.js is single-threaded, hence asynchronous. As for whether it is good for CPU-intensive tasks, I would say why not, unless you want to solve those problems in a multithreaded fashion.

Best Server API and Client Side Javascript Interaction Methods?

Currently, I'm using setTimeout() to pause a for loop on a huge list so that I can add some styling to the page. For instance,
Eg: http://imdbnator.com/process?id=wtf&redirect=false
What I use setTimeout for:
I use setTimeout() to add images, text, and a CSS progress bar (see: Why doesn't Progress Bar dynamically change unlike Text?).
Clearly, as you can see, it is quite painful for a user to just browse through the page and hover over a few images. It gets extremely laggy. Is there any workaround to this?
My FOR Loop:
Each iteration of the for loop makes an AJAX request in the background to a PHP API. It definitely costs me some efficiency there, but how do all the other websites pull it off with such elegance? I mean, I've seen websites show a nice loading image, with no interference for the user, while making API requests. When I try to do something like that, I have to set a timeout every time.
Is it that they use better server-client interaction technologies, like the Node.js I've heard about?
Also, I've thought of a few alternatives, but run into other complications. I would greatly appreciate it if you could help me with each of these possible alternatives.
Method 1:
Instead of making an AJAX call to my PHP API through jQuery, I could do a completely server-side script altogether. But then the problem I run into is that I cannot make a good client-side page (as in my current page) that updates the progress bar and adds dynamic images after each item of the list is processed. Or is this possible?
Method 2: (Edited)
Like one of the useful answers below, I think the biggest problem is the server API and client interaction. WebSockets, as suggested there, look promising to me. Will they necessarily be a better fix than setTimeout? Is there any significant time difference if, let's say, I replace my current 1000 AJAX requests with a WebSocket?
Also, I would appreciate it if there is anything other than WebSockets that is better than an AJAX call.
How do professional websites achieve such fluid server and client side interactions?
Edit 1: Please explain how professional websites (such as http://www.cleartrip.com when you request flight details) provide a smooth client side while processing the server side.
Edit 2: As @Syd suggested, that is something I'm looking for. I think there is a lot of delay in my current client-server interaction, and WebSockets seem to be a fix for that. What are the other/best ways of improving server-client interaction, apart from standard AJAX?
Your first link doesn't work for me but I'll try to explain a couple of things that might help you if I understand your overall problem.
First of all, it is bad to have synchronous calls with large amounts of data that require processing in your main UI thread, because the user experience might suffer a lot. For reference, you might want to take a look at "Is it feasible to do an AJAX request from a Web Worker?"
If I understand correctly you want to load some data on demand based on an event.
Here you might want to sit back and think about what the best event for your need is; making an AJAX request every once in a while is quite different, especially when you have a lot of traffic. Also, you might want to check whether your previous request has completed before you initiate the next one (though this might not be needed in some cases). Have a look at async.js if you want to create chained asynchronous code execution without facing the JavaScript "pyramid of doom" effect and messy code.
Moreover, you might want to "validate, then halt" the event before making the actual request. For example, let's assume a user triggers a "mouseenter": you should not just fire an AJAX call. Hold your breath, use setTimeout, and check that the user didn't fire any other "mouseenter" event for the next 250 ms; this will allow your server to breathe. The same goes for implementations that load content based on scroll: you should not fire an event if the user scrolls like a maniac. So validate the events.
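A minimal sketch of that validate-then-fire idea (element and loadPreview are stand-ins for your own image element and AJAX call):
var hoverTimer;
element.addEventListener('mouseenter', function () {
  clearTimeout(hoverTimer); // a newer event cancels the pending request
  hoverTimer = setTimeout(function () {
    loadPreview(); // only fires if no other mouseenter arrived for 250 ms
  }, 250);
});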
Also, loops and iterations: we all know that if the damn loop is too long and does heavy lifting, you might experience unwanted results. In order to overcome this, you might want to look into timed loops (take a look at the snippet below): basically, loops that break after x amount of time and continue after a while. Here are some references that helped me with a three.js project: "optimizing-three-dot-js-performance-simulating-tens-of-thousands-of-independent-moving-objects" and "Timed array processing in JavaScript".
//Copyright 2009 Nicholas C. Zakas. All rights reserved.
//MIT Licensed
function timedChunk(items, process, context, callback){
  var todo = items.concat(); // create a clone of the original array

  setTimeout(function chunk(){ // named so it can reschedule itself (arguments.callee is deprecated)
    var start = +new Date();

    // process items until 50 ms have elapsed or the queue is empty
    do {
      process.call(context, todo.shift());
    } while (todo.length > 0 && (+new Date() - start < 50));

    if (todo.length > 0){
      setTimeout(chunk, 25); // yield to the UI, then continue
    } else {
      callback(items);
    }
  }, 25);
}
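For example, it might be driven like this (renderItem and the two helpers are hypothetical stand-ins for your per-item DOM work):
timedChunk(movieList, function renderItem(item) {
  addImageAndText(item); // hypothetical DOM update for one list item
  updateProgressBar();   // hypothetical progress bar tick
}, null, function (items) {
  console.log('done processing', items.length, 'items');
});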
cleartrip.com probably uses some of these techniques; from what I've seen, it gets a chunk of data when you visit the page and then fetches further chunks as you scroll. The trick here is to fire the request a little before the user reaches the bottom of the page, in order to provide a smooth experience. Regarding the left-side filters, they only filter data that is already in the browser; no more requests are made. So you fetch and keep something like a cache (though in other scenarios caching might be unwanted, e.g. for live data feeds).
Finally, if you are interested in further reading and smaller overhead in data transactions, you might want to take a look into "WebSockets".
You must use async AJAX calls. Right now, user interaction is blocked while the HTTP AJAX request is being made.
Q: "how professional websites (such as cleartrip.com) provide a smooth client side while processing the server side."
A: By using async AJAX calls

How to avoid dog-pile effect at Node.js & MongoDB & Redis stack?

When some cached value expires, or a new cache entry has to be generated for any reason, and we have huge traffic while no cache exists, there will be a heavy load on MongoDB and the response time increases significantly. This is typically called the "dog-pile effect". Everything works well after the cache is created.
I know that it's a very common problem which applies to all web applications using a database & cache system.
What should one do to avoid dog-pile effect at a Node.js & MongoDB & Redis stack? What are best practices and common mistakes?
One fairly proven way to keep the dogs from piling up is to keep a "lock" (e.g. in Redis) that prevents the cache-populating logic from firing up more than once. The first time the fetcher is called (for a given piece of content), the lock is acquired (for it) and set to expire (e.g. with SET ... NX EX 60). Any subsequent invocation of the fetcher for that content will fail to get the lock, so only one dog gets to the pile.
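A minimal sketch of that lock, assuming an ioredis-style client whose set() resolves to 'OK' only when the NX condition succeeds (the key names and rebuildFromMongo are illustrative):
function populateCache(contentId) {
  // only the dog that wins this SET gets to rebuild the cache
  return redis.set('lock:' + contentId, '1', 'EX', 60, 'NX')
    .then(function (acquired) {
      if (acquired !== 'OK') return; // someone else is already on it
      return rebuildFromMongo(contentId).then(function (value) {
        return redis.set('cache:' + contentId, JSON.stringify(value));
      });
    });
}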
The other thing you may want to put into place is some kind of rate limiting on the fetcher, regardless the content. That's also quite easily doable with Redis - feel free to look it up or ask another question :)
I'd just serve expired content until the new content is done caching, so that the database won't get stampeded.
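That serve-stale idea can be sketched roughly like this (the entry shape with a staleAt timestamp and the refresh/rebuild helpers are assumptions):
function getContent(key) {
  return redis.get('cache:' + key).then(function (raw) {
    var entry = raw && JSON.parse(raw); // e.g. {value: ..., staleAt: timestamp}
    if (entry && Date.now() < entry.staleAt) return entry.value;
    if (entry) {
      refreshInBackground(key); // hypothetical: rebuild without blocking readers
      return entry.value;       // hand out the stale copy in the meantime
    }
    return rebuildFromMongo(key); // nothing cached at all: this one must wait
  });
}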

How to cache results for autosuggest component?

I have a UI autosuggest component that performs an AJAX request as user types. For example, if user types mel, the response could be:
{
  suggestions: [{
    id: 18,
    suggestion: 'Melbourne'
  }, {
    id: 7,
    suggestion: 'East Melbourne'
  }, {
    id: 123,
    suggestion: 'North Melbourne'
  }]
}
The UI component implements client-side caching. So if the user now types b (results for melb are retrieved) and then hits Backspace, the browser already has the results for mel in memory, so they are immediately available. In other words, every client makes at most one AJAX call for any given input.
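For illustration, the client side amounts to a map keyed by the typed input (fetchSuggestions is a hypothetical wrapper around the AJAX call):
var cache = {};
function getSuggestions(query) {
  if (cache[query]) return Promise.resolve(cache[query]);
  return fetchSuggestions(query).then(function (results) {
    cache[query] = results; // at most one AJAX call per distinct input
    return results;
  });
}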
Now, I'd like to add server side caching on top of this. So, if one client performs an AJAX call for mel, and let's say there is some heavy computation going on to prepare the response, other clients would be getting the results without executing this heavy computation again.
I could simply have a hash of queries and results, but I'm not sure that this is the most optimal way to achieve this (memory concerns). There are ~20000 suggestions in the data set.
What would be the best way to implement the server side caching?
You could implement a simple cache with an LRU (least recently used) discard algorithm. Basically, set a few thresholds (for example: 100,000 items, 1 GB) and then discard the least recently used item (i.e., the item that is in cache but was last accessed longer ago than any of the other ones). This actually works pretty well, and I'm sure you can use an existing Node.js package out there.
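A minimal sketch of such an LRU using a Map, which iterates in insertion order (the threshold is illustrative):
var MAX_ITEMS = 100000;
var lru = new Map();

function lruGet(key) {
  if (!lru.has(key)) return undefined;
  var value = lru.get(key);
  lru.delete(key); // re-insert to mark the key as most recently used
  lru.set(key, value);
  return value;
}

function lruSet(key, value) {
  if (lru.has(key)) lru.delete(key);
  lru.set(key, value);
  if (lru.size > MAX_ITEMS) {
    // evict the least recently used entry (the first key in iteration order)
    lru.delete(lru.keys().next().value);
  }
}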
If you're going to be building a service that has multiple frontend servers, it might be easier and simpler to just set up memcached on a server (or even put it on a frontend server if you have a relatively low load). It's got an extremely simple TCP/IP protocol, and there are memcached clients available for Node.js.
Memcached is easy to set up and will scale for a very long time. Keeping the cache on separate servers also has the potential benefit of speeding up requests for all frontend instances, even the ones that have not received a particular request before.
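A rough sketch, assuming the memcached client package for Node.js (the server address, key prefix, TTL, and compute callback are placeholders):
var Memcached = require('memcached');
var memcached = new Memcached('cachehost:11211');

function getCachedSuggestions(query, compute, callback) {
  memcached.get('sugg:' + query, function (err, cached) {
    if (!err && cached) return callback(null, cached); // served from cache
    var results = compute(query); // the heavy computation, done at most once per TTL
    memcached.set('sugg:' + query, results, 3600, function () {
      callback(null, results); // cached for an hour (illustrative TTL)
    });
  });
}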
No matter what you choose to do, I would recommend keeping the caching out of the process that serves the requests. That makes it easy to just kill the cache if you have caching issues or need to free up memory for some reason.
(memory concerns). There are ~20000 suggestions in the data set.
20,000 results? Have you thought about how much memory that will actually take? My response assumes you're talking about 20,000 short strings, as presented in the example. I feel like you're optimizing for a problem you don't have yet.
If you're talking about a reasonably static piece of data, just keep it in memory. Even if you want to store it in a database, just keep it in memory. Refresh it periodically if you must.
If it's not static, just try and read it from the database on every request first. Databases have query caches and will chew through a 100KB table for breakfast.
Once you're actually getting enough hits for this to become an actual issue, don't cache it yourself. I have found that when you actually have a real need for a cache, other people have written it better than you would have. But if you really need one, go for an external one like Memcached or even something like Redis. Keeping that stuff external can make testing and scalability a heap easier.
But you'll know when you actually need a cache.
