which control flow library works with google closure library? - javascript

There are several JS libraries available for flow control.
However, when working with the Closure Compiler, the ones I have looked at so far do not work well with the compiler in ADVANCED mode.
Is there any Closure-compatible flow control library?
I am mostly interested in waiting on multiple results without complicating the code more than necessary.
What I want to achieve is to reduce loading time on user actions.
For one user action, multiple requests have to be made to the backend. To keep the code maintainable, at the moment I do one request at a time and handle potential errors after each step.
What I want to achieve is to fire the non-dependent requests together without complicating the error handling more than necessary.
I like the syntax of flow js:
var auth = flow.define(
    function(username, password) {
        sys.puts("trying " + username + ", " + password);
        this.password = password;
        keystore.get('userId:' + username, this);
    },
    function(err, userId) {
        keystore.get('user:' + userId + ':password', this);
    },
    function(err, passwordInDb) {
        if (passwordInDb == this.password) {
            sys.puts("Authenticated!");
        } else {
            sys.puts("Failed Authentication!");
        }
    }
);
It also allows spawning multiple async operations and collecting their results.
However, any state needed between callbacks is stored on "this", like "this.password" above.
Since that containing scope is not typed, the Closure Compiler will not be able to rename the property consistently (from my understanding) when in ADVANCED mode.
So I need an alternative that has a typed container object which is passed as a parameter (or as "this") through each function.

Can you use goog.async.Deferred from the Closure Library? It manages both async and sync workflows.
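For example, a minimal sketch of waiting on several independent requests with goog.async.DeferredList (the fetchDeferred helper and the endpoints are hypothetical):

goog.require('goog.async.Deferred');
goog.require('goog.async.DeferredList');
goog.require('goog.net.XhrIo');

/** @return {!goog.async.Deferred} Resolves with the response text. */
function fetchDeferred(url) {
    var d = new goog.async.Deferred();
    goog.net.XhrIo.send(url, function(e) {
        var xhr = e.target;
        if (xhr.isSuccess()) {
            d.callback(xhr.getResponseText());
        } else {
            d.errback(new Error('Request failed: ' + url));
        }
    });
    return d;
}

// Fire both requests at once; the callback runs once every one has finished.
goog.async.DeferredList.gatherResults([
    fetchDeferred('/api/user'),      // hypothetical endpoints
    fetchDeferred('/api/settings')
]).addCallback(function(results) {
    // results[0] and results[1] arrive in the order the deferreds were listed.
}).addErrback(function(err) {
    // A single errback handles a failure in any of the requests.
});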

Generally, you can use any library with Closure Compiler's advanced mode by creating an externs file for the library and loading the library's code separately (or concatenating it after compilation of your own code).
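For illustration, a minimal (hypothetical) externs sketch for the flow-js API shown above; the compiler then knows these names and will not rename calls to them:

// flow-externs.js -- hypothetical externs for the flow-js API used above.
// Externs only declare names and types; they contain no implementation.

/** @const */
var flow = {};

/**
 * @param {...Function} var_args Step functions invoked in sequence.
 * @return {Function}
 */
flow.define = function(var_args) {};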

OK, I've found a solution.
When I create a typed container object and pass it to each of the functions with goog.bind, it works.
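A minimal sketch of that solution, mirroring the flow-js example above (AuthState and the keystore calls are illustrative, not a real API):

/** @constructor */
function AuthState() {
    /** @type {string} */ this.username = '';
    /** @type {string} */ this.password = '';
}

function checkPassword(username, password) {
    var state = new AuthState();
    state.username = username;
    state.password = password;
    keystore.get('userId:' + username, goog.bind(onUserId, null, state));
}

function onUserId(state, err, userId) {
    keystore.get('user:' + userId + ':password',
        goog.bind(onPasswordInDb, null, state));
}

function onPasswordInDb(state, err, passwordInDb) {
    // Because AuthState is a declared type, ADVANCED mode renames
    // state.password the same way everywhere it appears.
    if (passwordInDb == state.password) { /* authenticated */ }
}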

A very tiny flow control lib I've written for the Closure Library is ready.js.
As per the description: "Watches over multiple async operations and triggers listeners when all or some are complete."
It's worth a look.

Related

Conflicting purposes of IndexedDB transactions

As I understand it, there are three somewhat distinct reasons to put multiple IndexedDB operations in a single transaction rather than using a unique transaction for each operation:
Performance. If you’re doing a lot of writes to an object store, it’s much faster if they happen in one transaction.
Ensuring data is written before proceeding. Waiting for the “oncomplete” event is the only way to be sure that a subsequent IndexedDB query won’t return stale data.
Performing an atomic set of DB operations. Basically, “do all of these things, but if one of them fails, roll it all back”.
#1 is fine, most databases have the same characteristic.
#2 is a little more unique, and it causes issues when considered in conjunction with #3. Let’s say I have some simple function that writes something to the database and runs a callback when it's over:
function putWhatever(obj, cb) {
    var tx = db.transaction("whatever", "readwrite");
    tx.objectStore("whatever").put(obj);
    tx.oncomplete = function () { cb(); };
}
That works fine. But now if you want to call that function as a part of a group of operations you want to atomically commit or fail, it's impossible. You'd have to do something like this:
function putWhatever(tx, obj, cb) {
    tx.objectStore("whatever").put(obj).onsuccess = function () { cb(); };
}
This second version of the function is very different from the first, because the callback runs before the data is guaranteed to be written to the database. If you try to read back the object you just wrote, you might get a stale value.
Basically, the problem is that you can only take advantage of one of #2 or #3. Sometimes the choice is clear, but sometimes not. This has led me to write horrible code like:
function putWhatever(tx, obj, cb) {
    if (tx === undefined) {
        tx = db.transaction("whatever", "readwrite");
        tx.objectStore("whatever").put(obj);
        tx.oncomplete = function () { cb(); };
    } else {
        tx.objectStore("whatever").put(obj).onsuccess = function () { cb(); };
    }
}
However even that still is not a general solution and could fail in some scenarios.
Has anyone else run into this problem? How do you deal with it? Or am I simply misunderstanding things somehow?
The following is just opinion as this doesn't seem like a 'one right answer' question.
First, performance is an irrelevant consideration. Avoid this factor entirely, unless later profiling suggests a material problem. Chances of perf issues are ridiculously low.
Second, I prefer to organize requests into transactions solely to maintain integrity. Integrity is paramount. Integrity as I define it here simply means that the database at any one point in time does not contain conflicting or erratic data. Essentially the database is never able to enter into a 'bad' state. For example, to impose a rule that cross-store object references point to valid and existing objects in other stores (a.k.a. referential integrity), or to prevent duplicated requests such as a double add/put/delete. Obviously, if the app were something like a bank app that credits/debits accounts, or a heart-attack monitor app, things could go horribly wrong.
My own experience has led me to believe that code involving indexedDB is not prone to the traditional facade pattern. I found that what worked best, in terms of organizing requests into different wrapping functions, was to design functions around transactions. I found that quite often there are very few DRY violations because every request is nearly always unique to its transactional context. In other words, while a similar 'put object' request might appear in more than one transaction, it is so distinct in its behavior given its separate context that it merits violating DRY.
If you go the function-per-request route, I am not sure why you are checking whether the transaction parameter is undefined. Have the caller create the transaction and then pass it to the request functions in turn. Expect the tx to always be defined and do not over-zealously guard against it. If it is ever not defined, there is either a serious bug in indexedDB or in your calling function.
Explicitly, something like:
function doTransaction1(db, onComplete) {
    var tx = db.transaction(...);
    tx.oncomplete = onComplete;
    doRequest1(tx);
    doRequest2(tx);
    doRequest3(tx);
}

function doRequest1(tx) {
    var store = tx.objectStore(...);
    // ...
}
// ...
If the requests should not execute in parallel, and must run in a series, then this indicates a larger and more difficult design issue.
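That said, if a strict series inside a single transaction is unavoidable, a minimal sketch (assuming a "whatever" store whose keyPath is "id") chains each request from the previous request's onsuccess, which keeps every request in the same transaction:

function doSerialTransaction(db, onComplete) {
    var tx = db.transaction("whatever", "readwrite");
    tx.oncomplete = onComplete;
    var store = tx.objectStore("whatever");

    store.put({id: 1, step: "first"}).onsuccess = function () {
        // Issued from inside the success callback, so it joins the same transaction.
        store.put({id: 2, step: "second"}).onsuccess = function () {
            store.put({id: 3, step: "third"});
            // No further request is issued after this one completes,
            // so the transaction commits and tx.oncomplete fires.
        };
    };
}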

How can I follow an _id in Mongoose.js to find an item and process it?

I am new to databases and these MongoDB/Mongoose.js async functions have annoyed the hell out of me over the last few hours. I have written and rewritten this bit so many times:
router.get('/districts', function(req, res) {
    var districtNames = [];
    // I'm using mongoose-simpledb, so db.District refers to the districts collection
    db.District.find(function(err, found) {
        found.forEach(function(element) {
            findParentProv(element, districtNames);
        });
        res.render('districts', {title: "Districts page", district_list: districtNames});
    });
});

function findParentProv(element, namesArray) {
    db.Province.findById(element.parent, function(err, found) {
        console.log(found.name);
        namesArray.push(element.name + " " + found.name);
    });
}
I want to get all items in the districts collection, follow their parent field (which contains an ObjectID), find that item from the provinces collection and push both their names as a string into districtNames.
How should I do this?
Well, you do seem to be on the right track.
The one major issue I see in your solution is that after kicking off all the async queries for parents, you immediately render with the (most likely still empty) districtNames array, without waiting for the queries to finish.
This is indeed very annoying, and not surprisingly so. MongoDB is a non-relational DB, and so join operations like what you're trying to do aren't easy to get right.
The solution that would probably require the least fundamental changes to what you're doing would be to wait on all the queries before calling res.render. The most basic way to do this would be to check the length of namesArray/districtNames after pushing each element, and once it reaches the desired size, only then call render. There are, however, more standardized ways of doing this, and I'd suggest looking into something like Async (specifically async.parallel) or a Promise framework such as Bluebird.
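A minimal sketch of that counting approach, using the asker's own names (error handling omitted for brevity):

router.get('/districts', function(req, res) {
    var districtNames = [];
    db.District.find(function(err, found) {
        var pending = found.length;
        if (pending === 0) {
            return res.render('districts',
                {title: "Districts page", district_list: districtNames});
        }
        found.forEach(function(element) {
            db.Province.findById(element.parent, function(err, province) {
                districtNames.push(element.name + " " + province.name);
                pending -= 1;
                if (pending === 0) {   // the last query just finished
                    res.render('districts',
                        {title: "Districts page", district_list: districtNames});
                }
            });
        });
    });
});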
Now, another approach to solving this problem is denormalizing the data. For someone with a relational background this probably sounds appalling, but in Mongo it might actually be a valid solution to just include the province names along with their IDs in the districts collection, in which case your one initial query would be sufficient.
Another approach, which might be suitable if you're dealing with relatively small collections, would be to run 2 queries, 1 for all the districts and 1 for all the provinces, and do the correlation in-app. Obviously, this isn't a very efficient solution, and should definitely be avoided if there's any chance the collections contain, or will in the future contain, more than a handful of objects.
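A sketch of that two-query, in-app join: build an id-to-name map from the provinces, then resolve each district against it.

router.get('/districts', function(req, res) {
    // Load both collections, then do the "join" in memory.
    db.Province.find(function(err, provinces) {
        var nameById = {};
        provinces.forEach(function(p) { nameById[p._id] = p.name; });

        db.District.find(function(err, districts) {
            var districtNames = districts.map(function(d) {
                return d.name + " " + nameById[d.parent];  // ObjectIds stringify as keys
            });
            res.render('districts', {title: "Districts page", district_list: districtNames});
        });
    });
});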
Your best bet moving forward is to use ES6 Promise patterns to help with your callback patterns.
suggested modules:
lodash [optional] has a lot of useful methods; not needed here, but you may want, for example, _.flatten or _.assign
i-promise will give you a native Promise (node 0.11.3+) or a scripted implementation
es6-promise is the fallback for i-promise to use
promisify-patch is an inline promisify for specific methods.
Install the modules required for this example:
npm install --save es6-promise i-promise promisify-patch
Use the Promise pattern with your example:
require('promisify-patch').patch();
var Promise = require('i-promise');

// Returns a promise that resolves to the list of names for display
function getDistricts() {
    // Get all of the db.District documents
    return db.District.find.bind(db.District).promise()
        // After the districts are retrieved...
        .then(function(districts) {
            // Resolve an array of promises; returns an array of results
            return Promise.all(districts.map(getDistrictProv)); // map each district via getDistrictProv
        });
}

// Returns a promise that resolves to a specific "district province" name
function getDistrictProv(district) {
    return db.Province.findById.bind(db.Province).promise(district.parent)
        .then(function(province) {
            return district.name + ' ' + province.name;
        });
}
...

// Express handler
router.get('/districts', function(req, res, next) {
    // Get the district names
    getDistricts()
        // ...then render with the names
        .then(function(names) {
            res.render('districts', {title: "Districts page", district_list: names});
        })
        // If there was an error in the promise chain, pass it along
        // so it can be handled by another express plugin
        .catch(next);
});
Disclosure: I made i-promise and promisify-patch to make it easier to convert node-style callbacks into promise chains in situations like this.
NOTE: If you are creating general-purpose libraries for Node or the browser that are not flow-control related, you should at least expose a node-style callback interface.
Further, you may wish to look into co and koa for using generators as well.
The question seemed to be about how to control the flow of data, for which promises are likely the best answer. If your issue is trying to fit non-relational data into a relational box or vice versa, you may want to re-evaluate your data structure as well:
http://blog.mongodb.org/post/88473035333/6-rules-of-thumb-for-mongodb-schema-design-part-3
You should probably have some key data for parents/children replicated to those affected documents in other collections. There are configuration options via Mongoose to support this, but that doesn't mean you should avoid the consideration.
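For reference, the usual Mongoose option for following a parent reference is a ref plus populate(); a sketch (schema fields are illustrative, and db.District is assumed to be the mongoose-simpledb model):

var mongoose = require('mongoose');

var districtSchema = new mongoose.Schema({
    name: String,
    // Store the parent's ObjectId and tell Mongoose which model it refers to.
    parent: { type: mongoose.Schema.Types.ObjectId, ref: 'Province' }
});

// populate() follows the stored _id and swaps in the referenced document.
db.District.find().populate('parent').exec(function(err, districts) {
    var names = districts.map(function(d) {
        return d.name + " " + d.parent.name;  // d.parent is now a Province doc
    });
});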
If you do many/large join operations like this, it will negatively affect your performance. This isn't meant as a religious comment; only that SQL vs. NoSQL considerations should be made depending on your actual needs.
The data in question seems highly cacheable and may well be better served by a relational/SQL database.

About Node's code style

EDIT
Thanks for all the answers.
I finally decided to use a tool like Step;
all I need is "flow control", and I don't want anything else that may slow down performance (I don't know exactly how big the effect would be, or whether it can be ignored).
So I just created a little tool for flow control:
line.js
/**
 * Create the "next" function
 *
 * @param {Array} tasks
 * @param {Number} index
 * @param {Number} last
 */
var next = function(tasks, index, last) {
    if (index == last) {
        return tasks[index + 1];
    } else {
        return function(data) {
            var nextIndex = index + 1;
            tasks[nextIndex](next(tasks, nextIndex, last), data);
        };
    }
};

/**
 * Invoke functions in line.
 */
module.exports = function() {
    var tasks = arguments,
        last = tasks.length - 2;
    tasks[0](next(tasks, 0, last));
};
usage:
var line = require("./line.js");

line(function(next) {
    someObj.find(function(err, docs) {
        // codes
        next(docs);
    });
}, function(next, docs) {
    // codes
});
Hope this helps.
EDIT END
As everyone knows, Node's built-in and third-party modules often provide async APIs, using "callback" functions to deliver the results.
That's cool, but sometimes you end up with code like this:
                // some code
            }
        }
    }
}
Code like this is hard to read.
I know a "deferred" library can solve this kind of problem.
Is there any good "deferred" module for Node?
And how is the performance if I write Node code with "deferred"?
It is a large problem with Node-based code; you frequently grow "callback pyramids". There are several approaches to dealing with the problem:
Code style:
Use this annoyance as an opportunity to break your code into bite sized chunks. It means you're likely going to have a proliferation of tiny named funcs - that's probably just fine, though! You might also find more opportunities for reuse.
Flow-control Libraries
There are exactly 593.72 billion flow control libraries out there. Here's some of the more popular ones:
Step: super-basic serial & parallel flow management.
seq is a heavier but more feature-full flow control library.
There's plenty more. Search the npm registry for "flow" and "flow control" (sorry, doesn't appear to be linkable)
Language Extensions
There are several attempts to provide a more synchronous-feeling syntax on top of JavaScript (or CoffeeScript), often based on the concepts behind the tame paper.
TameJS is the OkCupid team's answer to this.
IcedCoffeeScript: they've also ported TameJS over to CoffeeScript, as a fork.
streamline.js is very similar to TameJS.
StratifiedJS is a heavier approach to the problem.
This route is a deal-breaker for some:
It's not standard JavaScript; if you are building libraries/frameworks/etc, finding help will be more difficult.
Variable scope can behave in unexpected ways, depending on the library.
The generated code can be difficult to debug & match to the original source.
The Future:
The node core team is very aware of the problem, and is also working on lower-level components to help ease the pain. It looks like they'll be introducing a basic version of domains in v0.8, which provide a way of rolling up error handling (primarily avoiding the ubiquitous if (err) return callback(err) pattern).
This should start to lay a great foundation for cleaner flow control libraries, and start to pave the way for a more consistent way of dealing with callback pyramids. There's too much choice out there right now, and the community isn't close to agreeing on even a handful of standards yet.
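For the curious, a minimal sketch of what the v0.8 domains API looks like (domains were later deprecated in modern Node, but illustrate the rolled-up error handling described above):

var domain = require('domain');
var fs = require('fs');

var d = domain.create();
d.on('error', function(err) {
    // One place to handle any async error raised inside d.run()
    console.error('caught:', err.message);
});

d.run(function() {
    fs.readFile('does-not-exist', function(err, data) {
        if (err) throw err;   // rolled up to d.on('error') above
        console.log(data.toString());
    });
});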
References:
Mixu's Node book has an awesome chapter on this subject.
There are tons of "deferred" libraries. Have a look here http://eirikb.github.com/nipster/#promise and here http://eirikb.github.com/nipster/#deferred. Picking one is only a matter of style & simplicity :)
If you really don't like that, there's always the alternative of using named functions, which will reduce the indentation.
Instead of
setTimeout(function() {
    fs.readFile('file', function(err, data) {
        if (err) throw err;
        console.log(data);
    });
}, 200);
You can do this:
function dataHandler(err, data) {
    if (err) throw err;
    console.log(data);
}

function getFile() {
    fs.readFile('file', dataHandler);
}

setTimeout(getFile, 200);
The same thing, no nesting.
There are some libraries that may be useful in some scenarios, but as a whole you won't be excited after using them for everything.
Regarding the slowness issues: since node.js is async, the wrapped functions are not a big performance cost.
You could look here for a deferred-like library:
https://github.com/kriszyp/node-promise
Also, this question is very similar:
What nodejs library is most like jQuery's deferreds?
And as a final bonus, I suggest you take a look at CoffeeScript. It is a language that compiles to JavaScript and has a more pleasant syntax, since function braces are removed.
I usually like to use the async.js library, as it offers a few different options on how to execute the code.
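For illustration, a small sketch of async.series and async.parallel (using fs.readFile as a stand-in for any node-style async call):

var async = require('async');
var fs = require('fs');

// Series: each task starts after the previous one finishes.
async.series([
    function(cb) { fs.readFile('a.txt', 'utf8', cb); },
    function(cb) { fs.readFile('b.txt', 'utf8', cb); }
], function(err, results) {
    if (err) return console.error(err);   // the first error short-circuits
    console.log(results);                 // ['contents of a', 'contents of b']
});

// Parallel: tasks run together; results keep task order.
async.parallel([
    function(cb) { fs.readFile('a.txt', 'utf8', cb); },
    function(cb) { fs.readFile('b.txt', 'utf8', cb); }
], function(err, results) {
    console.log(results);
});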

What is the correct way to chain async calls in javascript?

I'm trying to find the best way to create async calls where each call depends on the prior call having completed. At the moment I'm chaining the methods by recursively calling a defined process function, as illustrated below.
This is what I'm currently doing.
var syncProduct = (function() {
    var done, log;
    var IN_CAT = 1, IN_TITLES = 2, IN_BINS = 3, IN_MAJOR = 4;
    var state = IN_CAT;
    var processNext = function(data) {
        switch (state) {
            case IN_CAT:
                SVC.sendJsonRequest(url("/api/lineplan/categories"), processNext);
                state = IN_TITLES;
                break;
            case IN_TITLES:
                log((data ? data.length : "No") + " categories retrieved!");
                SVC.sendJsonRequest(url("/api/lineplan/titles"), processNext);
                state = IN_BINS;
                break;
            case IN_BINS:
                log((data ? data.length : "No") + " titles retrieved!");
                SVC.sendJsonRequest(url("/api/lineplan/bins"), processNext);
                state = IN_MAJOR;
                break;
            default:
                log((data ? data.length : "No") + " bins retrieved!");
                done();
                break;
        }
    };
    return {
        start: function(doneCB, logCB) {
            done = doneCB; log = logCB; state = IN_CAT;
            processNext();
        }
    };
})();
I would then call this as follows:

var log = function(message) {
    // Impl removed.
};

syncProduct.start(function() {
    log("Product Sync Complete!");
}, log);
While this works perfectly fine for me I can't help but think there has to be a better (simpler) way. What happens later when my recursive calls get too deep?
NOTE: I am not using javascript in the browser but natively within the Titanium framework, this is akin to Javascript for Node.js.
There are lots of libraries and tools that do async chaining and control-flow for you and they mostly come in two main flavours:
Control-flow libraries
For example, see async, seq and step (callback based) or Q and futures (promise based). The main advantage of these is that they are just plain JS libraries that ease the pain of async programming.
In my personal experience, promise-based libraries tend to lead to code that looks more like usual synchronous code, since you return values using "return" and since promise values can be passed and stored around, similarly to real values.
On the other hand, continuation-based code is lower level, since it manipulates code paths explicitly. This can allow for more flexible control flow and better integration with existing libraries, but it can also lead to more boilerplate-heavy and less intuitive code.
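To make the comparison concrete, here is a sketch of the question's categories/titles/bins sequence as a promise chain (assuming getJson is a promisified wrapper around SVC.sendJsonRequest):

function syncProduct() {
    return getJson(url("/api/lineplan/categories"))
        .then(function(categories) {
            log(categories.length + " categories retrieved!");
            return getJson(url("/api/lineplan/titles"));
        })
        .then(function(titles) {
            log(titles.length + " titles retrieved!");
            return getJson(url("/api/lineplan/bins"));
        })
        .then(function(bins) {
            log(bins.length + " bins retrieved!");
        });
}

syncProduct().then(function() {
    log("Product Sync Complete!");
}, function(err) {
    log("Sync failed: " + err);   // one rejection handler for the whole chain
});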
Javascript CPS compilers
Extending the language to add native support for coroutines/generators lets you write asynchronous code in a very straightforward manner, and it plays nicely with the rest of the language, meaning you can use JavaScript if statements, loops, etc. instead of needing to replicate them with functions. This also means it's very easy to convert previously sync code into an async version. However, there is the obvious disadvantage that not every browser will run your JavaScript extension, so you will need to add a compilation step to your build process to convert your code to regular JS with callbacks in continuation-passing style. Anyway, one promising alternative is the generators in the ECMAScript 6 spec: while only Firefox supports them natively as of now, there are projects such as regenerator and Traceur to compile them back to callbacks. There are also other projects that create their own async syntax (since ES6 generators hadn't come up back then); in this category you will find things such as TameJS and IcedCoffeeScript. Finally, if you use Node.js, you could also take a look at Fibers.
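As a tiny illustration of the generator route, a minimal co-style driver that resumes a generator with each resolved promise (fetchJson is a hypothetical promise-returning helper):

// run() drives a generator that yields promises: each resolved value is fed
// back into the generator, and rejections surface as exceptions inside it.
function run(genFn) {
    var gen = genFn();
    function step(method, value) {
        var next;
        try {
            next = gen[method](value);       // resume the generator
        } catch (err) {
            return Promise.reject(err);      // the generator threw
        }
        if (next.done) return Promise.resolve(next.value);
        return Promise.resolve(next.value).then(
            function(v)   { return step('next', v); },
            function(err) { return step('throw', err); });
    }
    return step('next', undefined);
}

// Async code now reads top-to-bottom, with plain try/catch and loops available.
run(function* () {
    var user = yield fetchJson('/api/user');            // hypothetical helpers
    var posts = yield fetchJson('/api/posts/' + user.id);
    console.log(posts.length);
});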
My recommendation:
If you just want something simple that won't complicate your build process, I would recommend going with whatever control-flow library best fits your personal style and the libraries you already use.
However, if you expect to write lots of complicated and deeply integrated asynchronous code, I would strongly recommend at least looking into a compiler-based alternative.

Threads (or something like) in javascript

I need a piece of code to run continually, independently of other code. Is there a way to create a thread in JavaScript to run this function?
-- why setTimeout didn't work for me
I tried it, but it runs just a single time, and if I call the function recursively it throws the error "too much recursion" after a while. I need it running every 100 ms (it's communication with an embedded system).
-- as you asked, here is some code
function update(v2) {
    // I removed the use of v2 here for simplicity
    dump("update\n"); // this will just print the string
    setTimeout(new function() { update(v2); }, 100); // this try doesn't work
}

update(this.v);
It throws "too much recursion".
I am assuming you are asking about executing a function on a different thread. However, JavaScript does not support multithreading.
See: Why doesn't JavaScript support multithreading?
The JavaScript engines in all current browsers execute on a single thread. As stated in the post above, running functions on different threads would lead to concurrency issues, for example two functions modifying a single HTML element simultaneously.
As pointed out by others here, perhaps multi-threading is not what you actually need for your situation. setInterval might be adequate.
However, if you truly need multi-threading, JavaScript does support it through the web workers functionality. Basically, the main JavaScript thread can interact with the other threads (workers) only through events and message passing (strings, essentially). Workers do not have access to the DOM. This avoids any of the concurrency issues.
Here is the web workers spec: http://www.whatwg.org/specs/web-workers/current-work/
A more tutorial treatment: http://ejohn.org/blog/web-workers/
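A minimal sketch of that message-passing model for the asker's 100 ms polling case (two files; "poller.js" is a hypothetical file name):

// main.js -- spawn a worker and receive a message every 100 ms.
var worker = new Worker('poller.js');
worker.onmessage = function(e) {
    console.log('tick from worker:', e.data);
};

// poller.js -- runs on its own thread; no DOM access, only postMessage.
setInterval(function() {
    postMessage(Date.now());
}, 100);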
Get rid of the new keyword for the function you're passing to setTimeout(), and it should work.
function update(v2) {
    try {
        dump("update\n");
    } catch (err) {
        dump("Fail to update " + err + "\n");
    }
    setTimeout(function() { update(v2); }, 100);
}

update(this.v);
Or just use setInterval().
function update(v2) {
    try {
        dump("update\n");
    } catch (err) {
        dump("Fail to update " + err + "\n");
    }
}

var this_v = this.v;
setInterval(function() { update(this_v); }, 100);
EDIT: Referenced this.v in a variable since I don't know what the value of this is in your application.
window.setTimeout() is what you need.
Maybe you should look at JavaScript Workers (dedicated Web Workers provide a simple means for web content to run scripts in background threads). Here is a nice article that explains how this works and how to use it:
HTML5 web mobile tutorial
You can try a loop instead of recursion.
