Meteor._wrapAsync causes Meteor to be unresponsive - javascript

I'm using Meteor._wrapAsync to ensure that only one call to the function writeMeLater executes at any one time. If 10 calls to writeMeLater are made within 1 second, the other 9 calls should be queued up in order.
To check that writeMeLater is running synchronously, the timestamp fields in the Logs collection should be spaced 1 second apart.
Problem: with the following code, only the first call to writeMeLater executes; the other 9 do not appear to run at all. Why is this?
Server Code:
writeMeLater = function(data) {
  console.log('writeMeLater: ', data)
  // simulate taking 1 second to complete
  Meteor.setTimeout(function() {
    Logs.insert({data: data, timestamp: new Date().getTime()})
  }, 1 * 1000)
}

writeMeLaterSync = Meteor._wrapAsync(writeMeLater)

// simulate calling the function many times real quick
for(var i=0; i<10; i++) {
  console.log('Loop: ', i)
  writeMeLaterSync(i)
}
Output:
=> Meteor server running on: http://localhost:4000/
I20140119-11:04:17.300(8)? Loop: 0
I20140119-11:04:17.394(8)? writeMeLater: 0
Using an alternate version of writeMeLater, I get the same problem:
writeMeLater = function(data) {
  console.log('writeMeLater: ', data)
  setTimeout(Meteor.bindEnvironment(function() {
    Logs.insert({data: data, timestamp: new Date().getTime()})
  }), 1 * 1000)
}

TL;DR - your writeMeLater function needs to take a callback parameter.
Classic NodeJS asynchronous functions usually have this signature:
function async(params..., callback) {
  try {
    var result = compute(params);
    callback(null, result);
  }
  catch (error) {
    callback("something went wrong", null);
  }
}
They take any number of parameters, the last one being a callback to run when the computation is ready. That callback is itself called with 2 parameters: an error, which is null if everything went OK, and of course the result.
Meteor._wrapAsync expects to be given a function with this signature, and returns a new pseudo-synchronous function.
Meteor "synchronous" functions allow you to write code in a synchronous style, but they are not truly synchronous like NodeJS fs.readFileSync, for example, which BLOCKS the event loop until it's done (usually this is bad, unless you're writing a command-line app, which is not the case with Meteor).
Note: using the NodeJS fs *Sync functions in Meteor is bad because you might be tricked into thinking they are "Meteor synchronous" when they aren't: they will block your entire node process until they're done! You should instead use the async fs functions wrapped with Meteor._wrapAsync.
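For instance, here is a minimal sketch of wrapping Node's fs.readFile (which already follows the error-first callback convention) so it can be used in the Meteor-synchronous style; the file path is just a placeholder:
// server-side sketch
var fs = Npm.require('fs');

// the wrapped function "blocks" only the current fiber, not the whole node process
var readFileFiberSync = Meteor._wrapAsync(fs.readFile);

// usage, e.g. inside a Meteor method (path is a placeholder):
var contents = readFileFiberSync('/tmp/example.txt', 'utf8');
console.log(contents);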
A simplified clone of Meteor._wrapAsync would look like this:
// on the Meteor server, fibers/future can be loaded via Npm.require
var Future = Npm.require('fibers/future');

var wrapAsync = function(asyncFunc) {
  // return a function which appears to run synchronously thanks to fibers/future
  return function() {
    var future = new Future();
    // take the arguments...
    var args = arguments;
    // ...and append our callback at the end
    Array.prototype.push.call(args, function(error, result) {
      if (error) {
        throw error;
      }
      // our callback calls future.return, which unblocks future.wait
      future.return(result);
    });
    // call the async func with the computed args
    asyncFunc.apply(null, args);
    // wait until future.return is called
    return future.wait();
  };
};
There is a Future.wrap which does exactly this; Meteor._wrapAsync is a bit more complicated because it also handles Meteor environment variables using Meteor.bindEnvironment.
Fibers and Futures are a bit out of scope, so I won't dive into them here; be sure to check the eventedmind.com videos on the subject:
Introducing Fibers - https://www.eventedmind.com/feed/BmG9WmSsdzChk8Pye
Using Futures - https://www.eventedmind.com/feed/kXR6nWTKNctKariSY
Meteor._wrapAsync - https://www.eventedmind.com/feed/Ww3rQrHJo8FLgK7FF
Now that you understand how things need to be done to encapsulate async functions in Meteor, let's fix your code.
If your async function doesn't take a callback as its last argument, it will (obviously) never call one. The callback we append in the wrapped function is therefore never triggered, which means future.return is never called, and that is why your program blocks in the first place!
You simply have to rewrite writeMeLater to take a callback as its final argument:
var writeMeLater = function(data, callback) {
  console.log('writeMeLater: ', data);
  // simulate taking 1 second to complete
  Meteor.setTimeout(function() {
    Logs.insert({
      data: data,
      timestamp: new Date().getTime()
    });
    callback(null, "done processing " + data);
  }, 1 * 1000);
};
And you're good to go!
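With that change, the loop from the question should behave as intended, because each wrapped call now blocks its fiber until the callback fires:
writeMeLaterSync = Meteor._wrapAsync(writeMeLater);

// the calls now run one after another, so the timestamp fields
// in Logs should end up roughly 1 second apart
for (var i = 0; i < 10; i++) {
  console.log('Loop: ', i);
  writeMeLaterSync(i);
}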

Related

Structuring sequences of longer operations in Javascript

Consider a sequence of steps that need to be performed as part of preparing a heavy web page:
step1();
step2();
..
stepk();
Each step may take anywhere from 100 milliseconds to a few seconds, but we are uncertain in advance how long each step will take.
At least until Promise/await hits the street, my understanding is that we use callbacks along with setTimeout.
But how can we keep that from quickly becoming unwieldy? In the following sequence we have two concerns:
how to specify the timeout when the actual work could be up to two orders of magnitude in range
how to handle the passing of arguments - argK in the code shown below - to the nested function invocations
First two steps (of K):
function step1(args1, args2, args3, ..) {
  // do work for step1 using args1
  setTimeout(function() { step2(args2, args3); }, [some timeout..]);
}

function step2(args2, args3, ..) {
  // do work for step2 using args2
  setTimeout(function() { step3(args3 [, args4, args5 ..]); }, [some timeout..]);
}
So how can these sequential steps be structured so that we are not sending a growing list of args down an entire chain of functions?
Note: Web Workers may be a useful approach for some cases, but I want to be able to serve from the local file system, and that apparently precludes them:
http://blog.teamtreehouse.com/using-web-workers-to-speed-up-your-javascript-applications
Restricted Local Access
Web Workers will not work if the web page is being served directly
from the filesystem (using file://). Instead you will need to use a
local development server such as XAMPP.
Without promises or async/await, you must do it callback-hell style:
function step1(a, b, c) {
  setTimeout(() => {
    step2();
  }, someTimeout); // someTimeout: whatever delay step1 needs
}
Or you can pass references to the next step.
If step2 relies on results from step1:
function step1(a, b, c, done) {
  setTimeout(() => {
    done(a, b, step3);
  }, someTimeout);
}

function step2(d, e, done) {
  setTimeout(() => {
    done(e);
  }, someTimeout);
}
step1("cat","dog","mouse", step2);
If you want to pass args to step2 manually, and get results from step1:
function step1(a, b, c, done) {
  setTimeout(() => {
    done(a);
  }, someTimeout);
}

function step2(d, e, done) {
  return function(step1a) {
    setTimeout(() => {
      done(step1a, d);
    }, someTimeout);
  };
}

step1("cat", "dog", "mouse", step2("d", "e", step3));
This is probably as clean as you can get without Promisifying your async actions or implementing your own promise style.
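For comparison, here is a hedged sketch of the same chain once the steps are promisified (the delays and messages are placeholders); each step only receives the value the previous step resolved with, so there is no growing argument list:
// wrap a step's asynchronous work in a Promise so it can be chained
function delay(ms, value) {
  return new Promise(function(resolve) {
    setTimeout(function() { resolve(value); }, ms);
  });
}

function step1(a) { return delay(100, a + ' -> step1'); }
function step2(b) { return delay(200, b + ' -> step2'); }
function step3(c) { return delay(50,  c + ' -> step3'); }

// each .then only receives what the previous step resolved with
step1('start')
  .then(step2)
  .then(step3)
  .then(function(result) { console.log(result); });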
I've been reading up on generator functions and it seems they could be a vanilla JS solution.
Alex Perry wrote a great article with a relevant demo:
function step1() {
  setTimeout(function() {
    gen.next('data from 1');
  }, 500);
}

function step2(data) {
  setTimeout(function() {
    gen.next(`data from 2 and ${data[0]}`);
  }, 700);
}

function step3() {
  setTimeout(function() {
    gen.next('data from 3');
  }, 100);
}

function *sayHello() {
  var data = [];
  data.push(yield step1());
  data.push(yield step2(data));
  data.push(yield step3(data));
  console.log(data);
}

var gen = sayHello();
gen.next();
In the example above each asynchronous request returns some fake data. Each successive step receives an array containing the previous responses, so the earlier results can be used.
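If async/await is available in your environment, the same flow can be written without driving a generator by hand. A hedged sketch, assuming step1/step2/step3 are rewritten to return Promises instead of calling gen.next:
// assumes step1/step2/step3 return Promises that resolve with their fake data
async function sayHello() {
  const data = [];
  data.push(await step1());
  data.push(await step2(data));
  data.push(await step3(data));
  console.log(data);
}

sayHello();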

What is the meaning of using done as a parameter in a javascript function?

I am learning JavaScript, and it seems that done as a parameter in a function is a difficult concept to understand.
I want to know why it behaves like that (done as a parameter seems to be a "process completed" signal, I guess), and whether there is a good book or online resource to study this concept further.
For example, I am following along with a tutorial and it uses done as a parameter. The thing is that when I run the code on node via gulp (gulpfile.js), the process never stops when using done; if I skip done in the code it runs smoothly. I have tried to track down the problem, and I know that the problem is done as a parameter (I have checked this multiple times).
gulp.task('clean-styles', function(done) {
  var files = config.temp + '**/*.css';
  clean(files, done);
});

function clean(path, done) {
  log('Cleaning: ' + $.util.colors.blue(path));
  del(path, done).then(function(path) {
    console.log("path=", util.inspect(path, false, null))
    console.log('Deleted Files\/Folders:\n', path.join('\n'));
    console.log('Finishing clean')
  });
}
node version: 0.12.4
npm version: 2.10.1
gulp version: 3.9.0
Thanks a lot for any help, it will be really appreciated.
Salutations.
I can only explain the concept; what you are trying to achieve is not clear enough.
done is just an unofficial, conventional name for a function (a.k.a. a callback) that informs the calling function (the parent in the stack trace) that a task is completed.
Recall that JavaScript is asynchronous and functions can be passed around as variables.
Now, imagine a function startPrinting that has to call printText1, printText2 and printText3 and then output a message that the process is completed. We have:
function startPrinting() {
  printText1();
  printText2();
  printText3();
  console.log("completed");
}

function printText1() {
  $.get('http://ps-web.cloudapp.net/proxy.php?url=http://pastebin.com/raw.php?i=3uPCavLN', function(response) {
    console.log(response);
  });
}

function printText2() {
  $.get('http://ps-web.cloudapp.net/proxy.php?url=http://pastebin.com/raw.php?i=jZjqKgNN', function(response) {
    console.log(response);
  });
}

function printText3() {
  $.get('http://ps-web.cloudapp.net/proxy.php?url=http://pastebin.com/raw.php?i=SreCbunb', function(response) {
    console.log(response);
  });
}
Here, there is no assurance that "completed" will ALWAYS be printed after all three functions have finished, since they execute asynchronously.
In order to sort this out, JavaScript ninjas introduce a done function, so that startPrinting only prints "completed" when all three functions have finished. Notice how a function is passed to printText1, 2 and 3 below:
function startPrinting() {
  /* START OF DONE ROUTINE */
  var count = 0;
  var printCompleted = function() {
    count += 1;
    if (count == 3)
      console.log("completed");
  };
  /* END */
  printText1(printCompleted);
  printText2(printCompleted);
  printText3(printCompleted);
}

function printText1(done) {
  $.get('http://ps-web.cloudapp.net/proxy.php?url=http://pastebin.com/raw.php?i=3uPCavLN', function(response) {
    console.log(response);
    done();
  });
}

function printText2(done) {
  $.get('http://ps-web.cloudapp.net/proxy.php?url=http://pastebin.com/raw.php?i=jZjqKgNN', function(response) {
    console.log(response);
    done();
  });
}

function printText3(done) {
  $.get('http://ps-web.cloudapp.net/proxy.php?url=http://pastebin.com/raw.php?i=SreCbunb', function(response) {
    console.log(response);
    done();
  });
}
I hope you are able to apply this principle to better understand your context.
Functions are first class objects in JavaScript. You can pass them around like any other value. Once they have been passed to another function as an argument, then you can call them using the argument name (or call another function and pass them as an argument to that, or assign them properties, or convert them to strings, or whatever else you'd like to do with them).
function this_sets_the_body() {
  document.body.innerHTML = "Hello, world";
}

function this_calls_a_callback(im_a_callback) {
  im_a_callback();
}

this_calls_a_callback(this_sets_the_body);
In your code, you've written a function using an anonymous function expression:
function(done) {
// ...
}
… and you've told it to expect to be called with an argument which you are calling done.
Whatever value is passed to it, you are ignoring (your function doesn't mention done after the argument name).
The library you are using (presumably) is passing a function in there and expects you to call it once your function has done whatever it is going to do. This lets it wait until anything asynchronous that you are doing is finished.
So call done() when your code is done.
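In gulp terms, a minimal hedged sketch of that contract (the task name and the simulated asynchronous work are placeholders):
gulp.task('some-async-task', function(done) {
  // kick off whatever asynchronous work the task performs
  setTimeout(function() {
    // calling done() tells gulp the task has finished, so the process can exit
    done();
  }, 1000);
});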
It looks like your example is rather mixed up with respect to callbacks. In some places done is used as a callback: a function given from outside, to be called when everything is finished in an asynchronous process, signalling the end of the operation. In other places it looks like an argument provided by the execution method itself, and in yet another you use it in a promise. Anyway, as I am not familiar with gulp I can only guess, but I hope the following example explains the concepts of callback and (partially) promise. I would, however, recommend avoiding mixing callbacks and promises in the same code, as it leads to confusion.
gulp.task('clean-styles', function(done) {
  console.log(1);
  /* we are in the callback of gulp.task: we give the
   * latter this anonymous function to call when the
   * setup is ready, and it gives us the function done to
   * call when we are done, signalling any errors to the engine
   */
  var files = config.temp + '**/*.css';

  /* this defines the action to take when the files are actually deleted */
  var callback = function(err, message) {
    console.log(6);
    console.log(message); // expect: looks good
    // done is provided by gulp; calling it signals the engine that everything is completed
    done(err);
  };

  /* we call this function, but some bits (like the deletion)
   * run asynchronously. The function will return quickly, but
   * callback (a function) will only be called once the files are deleted */
  clean(files, callback);

  /* the execution of the gulp.task callback is finished,
   * but the files are not yet deleted */
  console.log(4);
});

/* done here is not the same done as above; it is actually
 * the function we supply in the call above, i.e. `callback` */
function clean(path, done) {
  /* the cleanup is starting */
  console.log(2);

  /* del is scheduled. It returns a `promise`, and if
   * we call `then`, the given anonymous function
   * will be executed when the files are deleted. This is
   * where we call the provided function `done` to
   * signal that the job is complete and execute some action */
  del(path).then(function() {
    /* the files are deleted and this callback is called */
    console.log(5);
    /* we let the outer caller know by calling `done`, which
     * was given to us from outside */
    done(null, "looks good"); // null means no error
  }).catch(function(err) {
    done(err, "looks bad"); // err is given back
  });

  /* the clean method is through, but the files are not yet deleted */
  console.log(3);
}
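For reference, assuming del succeeds, the numbered logs above should come out in this order, which makes the synchronous/asynchronous split visible (the task body and clean return before the promise resolves):
1
2
3
4
5
6
looks good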

Node.js async, but only handle first positive/defined result

What is the best way to create parallel asynchronous HTTP requests and take the first result that comes back positive? I am familiar with the async library for JavaScript and would be happy to use that, but am not sure if it has exactly what I want.
Background - I have a Redis store that serves as state for a server. There is an API we can call to get some data that takes much longer than reaching the Redis store.
In most cases the data will already be in the Redis store, but in some cases it won't be there yet and we need to retrieve it from the API.
The simple thing to do would be to query Redis, and if the value is not in Redis then go to the API afterwards. However, we'll needlessly lose 20-50ms if the data is not yet in our Redis cache and we have to go to the API after failing to find the data with Redis. Since this particular API server is not under great load, it won't really hurt to go to the API simultaneously/in parallel, even if we don't absolutely need the returned value.
// pseudocode below
async.minimum([
  function apiRequest(cb) {
    request(opts, function(err, response, body) {
      cb(err, body.result.hit);
    });
  },
  function redisRequest(cb) {
    client.get("some_key", function(err, reply) {
      cb(err, reply.result.hit);
    });
  }],
  function minimumCompleted(err, result) {
    // this minimumCompleted final callback function will only be fired once,
    // and would be fired by one of the above functions -
    // whichever one *first* returned a defined value for result.hit
  });
Is there a way to get what I am looking for with the async library or perhaps promises, or should I implement something myself?
Use Promise.any([ap, bp]).
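Here is a hedged sketch of that approach using the two sources from the question, wrapped as promises that reject on a miss so that Promise.any settles with the first usable hit (opts, client and the result shapes come from the question's pseudocode):
// wrap each source as a promise that rejects when there is no usable hit
const apiHit = new Promise((resolve, reject) => {
  request(opts, (err, response, body) => {
    if (err || !body.result.hit) return reject(err || new Error('api miss'));
    resolve(body.result.hit);
  });
});

const redisHit = new Promise((resolve, reject) => {
  client.get("some_key", (err, reply) => {
    if (err || !reply.result.hit) return reject(err || new Error('redis miss'));
    resolve(reply.result.hit);
  });
});

// Promise.any resolves with the first fulfilled promise,
// and rejects (with an AggregateError) only if both reject
Promise.any([apiHit, redisHit])
  .then((hit) => { /* first defined result */ })
  .catch(() => { /* neither source had the value */ });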
The following is a possible way to do it without promises. It is untested but should meet the requirements.
To meet the requirement of returning the first success and not just the first completion, I keep a count of the number of completions expected, so that an error can be ignored unless it is the last one.
function asyncMinimum(a, cb) {
  var triggered = false;
  var completions = a.length;
  function callback(err, data) {
    completions--;
    if (err && completions !== 0) return;
    if (triggered) return;
    triggered = true;
    return cb(err, data);
  }
  a.map(function (f) { return f(callback); });
}
asyncMinimum([
  function apiRequest(cb) {
    request(opts, function(err, response, body) {
      cb(err, body.result.hit);
    });
  },
  function redisRequest(cb) {
    client.get("some_key", function(err, reply) {
      cb(err, reply.result.hit);
    });
  }],
  function minimumCompleted(err, result) {
    // this minimumCompleted final callback function will only be fired once,
    // and would be fired by one of the above functions -
    // whichever one first had a defined value for body.result.hit
  });
The async.js library (and even promises) keep track of the number of asynchronous operations pending by using a counter. You can see a simple implementation of the idea in an answer to this related question: Coordinating parallel execution in node.js
We can use the same concept to implement the minimum function you want. Only, instead of waiting for the counter to count all responses before triggering a final callback, we deliberately trigger the final callback on the first response and ignore all other responses:
// IMHO, "first" is a better name than "minimum":
function first(async_functions, callback) {
  var called_back = false;
  var cb = function() {
    if (!called_back) {
      called_back = true; // block all other responses
      callback.apply(null, arguments);
    }
  };
  for (var i = 0; i < async_functions.length; i++) {
    async_functions[i](cb);
  }
}
Using it would be as simple as:
first([apiRequest, redisRequest], function(err, result) {
  // ...
});
Here's an approach using promises. It takes a little extra custom code because of the non-standard result you're looking for: you aren't just looking for the first call that doesn't return an error, you're looking for the first one that has a specific type of result, so that takes a custom result-checker function. And, if neither gets a result, we need to communicate that back to the caller by rejecting the promise. Here's the code:
function firstHit() {
  return new Promise(function(resolve, reject) {
    var missCntr = 0, missQty = 2;
    function checkResult(err, val) {
      if (err || !val) {
        // see if all requests failed
        ++missCntr;
        if (missCntr === missQty) {
          reject();
        }
      } else {
        resolve(val);
      }
    }
    request(opts, function(err, response, body) {
      checkResult(err, body.result.hit);
    });
    client.get("some_key", function(err, reply) {
      checkResult(err, reply.result.hit);
    });
  });
}
firstHit().then(function(hit) {
  // one of them succeeded here
}, function() {
  // neither succeeded here
});
The first promise to call resolve() will trigger the .then() handler. If both fail to get a hit, then it will reject the promise.

Uncaught TypeError: Cannot read property 'transaction' of null with an indexeddb

I am getting the error 'Uncaught TypeError: Cannot read property 'transaction' of null' in this section of code:
remoteDB.indexedDB.addAdmins = function() {
  var db = remoteDB.indexedDB.db;
  var trans = db.transaction("students", "readwrite");
  var request = trans.objectStore("administrators");
  /*
    this section edited out since the failure is on line 3
  */
  request.onsuccess = function(e) {
    console.log("success Adding: ", e);
  };
  request.onerror = function(e) {
    console.log(e.value);
  };
};
remoteDB.indexedDB.db is null. This appears to be a global variable reference. In order to create a transaction, that variable must be defined, not null, and refer to an open connection.
indexedDB is async. When you open a connection to indexedDB and save its handle in a global variable, there is no guarantee that the variable is still defined, non-null, and open at some later point in time, from within the context of another asynchronous function.
It will work sometimes, if you immediately open a transaction, and sometimes the db connection persists. But it is not guaranteed: if there are no open transactions on a database connection, the browser will close the connection at some point thereafter.
See
Why is db.transaction not working with indexeddb?
Is it bad to open several database connections in indexedDB?
Why is onupgradeneeded never called in this code?
etc.
This error usually occurs, in part, because of a programmer's lack of familiarity with asynchronous JavaScript. This is not a criticism; it just seems to be a common pattern. To avoid errors in the future, I suggest spending some time learning about asynchronous JavaScript.
For example, understand how the following works (or rather why it does not work as expected) before trying to use indexedDB:
var db;

function setDB() {
  db = 123;
}

setTimeout(setDB, 10);
console.log('Got a db variable! %s', db);
To really do this justice would be redundant with the thousands of other questions on Stack Overflow and the insightful articles and guides on the web, but here is an extreme crash course. (The snippet above logs undefined, because console.log runs immediately, before the timeout fires and setDB assigns the variable.) indexedDB.open is an asynchronous (async) function. The adjective asynchronous means a lot: an async function behaves very differently from a synchronous function. New JavaScript programmers generally only learn to program synchronously, so naturally you don't get why calling an async function in your sync code does not work. Sync example:
var a = 1;
var b = 2;
function sum(arg1, arg2) { return arg1 + arg2; }
var abSum = sum(a, b);
console.log('The sum of a + b is %s', abSum);
You know that b=2 is executed after a=1, and that sum=a+b is executed after b. The statements are executed in order, in line, in serial, one after the other, in the order you wrote them. You know that if you tried to put line 4 before line 1 above that it would not work because a and b do not yet have values. In synchronous code, you know that the function sum returns a value. It returns it immediately. So you know that abSum is immediately assigned the return value from calling sum(a,b).
Async code works very differently. In general, async functions do not return the value you want. Usually you pass a function (called a callback) into the async function, which only sort of guarantees that it will call the callback some time later. It does not return something for you to use.
var a = 1;
var b = 2;

function asyncSum(arg1, arg2, calledWhenFinished) {
  var sum = arg1 + arg2;
  calledWhenFinished(sum);
  return 'asyncSumFinished and called the callback';
}

// The following DOES NOT work as expected
var theResultOfSum = asyncSum(a, b, function(sum) {
  console.log('finished. The sum is %s', theResultOfSum);
});

// The following DOES work as expected
asyncSum(a, b, function(sum) {
  console.log('The sum is %s', sum);
});
Notice that here in the working example, I don't care about what asyncSum returns. After all, it does not return the sum, it just returns a string saying it finished. Now let's do something that is more genuinely async.
function moreGenuineAsyncSum(a, b, callback) {
  setTimeout(function() {
    var sum = a + b;
    console.log('The sum is %s', sum);
    callback(sum);
  }, 100);
  console.log('Requested the sum to be calculated');
  return 'Hey, I scheduled the callback function to execute in 100ms';
}
Here I really don't care about the return value of moreGenuineAsyncSum. In fact, it is worthless. It is just a string that says something. Also notice which console.log call gets executed first. The later line gets executed before the earlier line. Out of order. Out of the order in which it was written. Why is that? Because that is what async functions do, they do something at a later point in time.
indexedDB.open is an async function. It returns an IDBOpenDBRequest object, which is a type of request object. Most indexedDB functions are async and return request objects. Request objects do NOT hand you a value directly; they have callbacks as properties. Therefore:
var dbRequest = indexedDB.open('mydb', 1);
dbRequest.onsuccess = function(event) {
  // This gets called later. The following are all viable ways to get the IDBDatabase
  // object INSIDE THE BLOCK OF THIS FUNCTION ONLY. Any one of the following 3 lines
  // works exactly the same way.
  var db = this.result;
  var db = dbRequest.result;
  var db = event.target.result;
  console.log('Got a db connection! It is %s', db);

  // Now, INSIDE THE BLOCK OF THIS FUNCTION ONLY, do something:
  var myTransaction = db.transaction('myObjectStore', 'readwrite');
  var myObjectStore = myTransaction.objectStore('myObjectStore');
  // etc.
};
To sum this up before I write an entire book, the emphasis is on the INSIDE THE BLOCK comments above. Inside the block, the 'db' variable is guaranteed to be there, and be open, and be defined, and not null. Outside the block, the db variable does not exist.
So you might be saying: that makes it pretty damn annoying to use indexedDB. You are right, it is annoying. To make it less annoying, you can learn about promises, or you can write callback-like functions, or you can use one of the myriad design patterns that deal with callback hell. There are many. One is just using a pattern like EventTarget and its relation to the DOM.
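For example, a hedged sketch of wrapping the open request in a promise, so the connection is only used once it is guaranteed to exist ('mydb' and 'myObjectStore' are the same placeholder names as above):
function openDb(name, version) {
  return new Promise(function(resolve, reject) {
    var dbRequest = indexedDB.open(name, version);
    dbRequest.onsuccess = function(event) {
      resolve(event.target.result); // the IDBDatabase object
    };
    dbRequest.onerror = function(event) {
      reject(event.target.error);
    };
  });
}

// the db handle is only touched after the promise resolves
openDb('mydb', 1).then(function(db) {
  var myTransaction = db.transaction('myObjectStore', 'readwrite');
  var myObjectStore = myTransaction.objectStore('myObjectStore');
  // etc.
});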

NodeJS Callback Scoping Issue

I am quite new (just started this week) to Node.js, and there is a fundamental piece that I am having trouble understanding. I have a helper function which makes a MySQL database call to get a bit of information. I then use a callback function to get that data back to the caller, which works fine, but when I want to use that data outside of that callback I run into trouble. Here is the code:
/** Helper Function **/
function getCompanyId(token, callback) {
  var query = db.query('SELECT * FROM companies WHERE token = ?', token, function(err, result) {
    var count = Object.keys(result).length;
    if (count == 0) {
      return;
    } else {
      callback(null, result[0].api_id);
    }
  });
}
/*** Function which uses the data from the helper function ***/
api.post('/alert', function(request, response) {
  var data = JSON.parse(request.body.data);
  var token = data.token;
  getCompanyId(token, function(err, result) {
    // this works
    console.log(result);
  });
  // the problem is that I need result here so that I can use it elsewhere in this function.
});
As you can see I have access to the return value from getCompanyId() so long as I stay within the scope of the callback but I need to use that value outside of the callback. I was able to get around this in another function by just sticking all the logic inside of that callback but that will not work in this case. Any insight on how to better structure this would be most appreciated. I am really enjoying Node.js thus far but obviously I have a lot of learning to do.
Short answer - you can't do that without violating the asynchronous nature of Node.js.
Think about the consequences of trying to access result outside of your callback - if you need to use that value, and the callback hasn't run yet, what will you do? You can't sleep and wait for the value to be set - that is incompatible with Node's single threaded, event-driven design. Your entire program would have to stop executing whilst waiting for the callback to run.
Any code that depends on result should be inside the getCompanyId callback:
api.post('/alert', function(request, response) {
  var data = JSON.parse(request.body.data);
  var token = data.token;
  getCompanyId(token, function(err, result) {
    // Any logic that depends on result has to be nested in here
  });
});
One of the hardest parts about learning Node.js (and async programming in general) is learning to think asynchronously. It can be difficult at first, but it is worth persisting. You can try to fight it and code procedurally, but it will inevitably result in unmaintainable, convoluted code.
If you don't like the idea of multiple nested callbacks, you can look into promises, which let you chain methods together instead of nesting them. This article is a good introduction to Q, one implementation of promises.
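As a hedged sketch (using native Promises instead of Q, purely for illustration), the helper from the question could be wrapped so that the route handler chains on it:
// promise-returning wrapper around the callback-based helper
function getCompanyIdAsync(token) {
  return new Promise(function(resolve, reject) {
    getCompanyId(token, function(err, result) {
      if (err) return reject(err);
      resolve(result);
    });
  });
}

api.post('/alert', function(request, response) {
  var token = JSON.parse(request.body.data).token;
  getCompanyIdAsync(token)
    .then(function(companyId) {
      // everything that needs companyId still lives in this chain
      response.send({ companyId: companyId });
    })
    .catch(function(err) {
      response.status(500).send(err.message);
    });
});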
If you are concerned about having everything crammed inside the callback function, you can always name the function, move it out, and then pass the function as the callback:
getCompanyId(token, doSomethingAfter); // Pass the function in

function doSomethingAfter(err, result) {
  // Code here
}
My "aha" moment came when I began thinking of these as "fire and forget" methods. Don't look for return values coming back from the methods, because they don't come back. The calling code should move on, or just end. Yes, it feels weird.
As #joews says, you have to put everything depending on that value inside the callback(s).
This often requires passing down an extra parameter (or parameters). For example, if you are doing a typical HTTP request/response, plan on sending the response down every step along the callback chain, as in the sketch below. The final callback will (hopefully) set data in the response, or set an error code, and then send it back to the user.
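A hedged sketch of that pattern (lookupCompany and loadAlerts are hypothetical helpers, shown only to illustrate handing the response down the chain):
api.post('/alert', function(request, response) {
  var token = JSON.parse(request.body.data).token;
  lookupCompany(token, response);
});

// each step takes the response as an extra parameter and hands it on
function lookupCompany(token, response) {
  getCompanyId(token, function(err, companyId) {
    if (err) return response.status(500).send(err.message);
    loadAlerts(companyId, response);
  });
}

function loadAlerts(companyId, response) {
  // ...another async call would go here; the final step answers the request
  response.send({ companyId: companyId });
}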
If you want to avoid callback smells, you can use Node's EventEmitter class, like so.
At the top of the file, require the events module and create an emitter:
var EventEmitter = require('events').EventEmitter;
var emitter = new EventEmitter();
Then in your callback:
api.post('/alert', function(request, response) {
  var data = JSON.parse(request.body.data);
  var token = data.token;
  getCompanyId(token, function(err, result) {
    // this works
    console.log(result);
    emitter.emit('company:id:returned', result);
  });
  // the problem is that I need result here so that I can use it elsewhere in this function.
});
Then, after your function, you can use the on method anywhere, like so:
getCompanyId(token, function(err, result) {
  // this works
  console.log(result);
  emitter.emit('company:id:returned', result);
});

// the problem is that I need result here so that I can use it elsewhere in this function.
emitter.on('company:id:returned', function(results) {
  // do what you need with results
});
Just be careful to set up good namespacing conventions for your events so you don't end up with a mess of on handlers, and also watch the number of listeners you attach. Here is a good link for reference:
http://www.sitepoint.com/nodejs-events-and-eventemitter/
