I'm trying to get the Facebook share count for many URLs on one page. To do this I'm calling http://graph.facebook.com for each of them. Because I'm doing this in a loop I run into a problem: the loop finishes before my callback functions run, so the callbacks don't see the values I expect. My code looks like this:
$('span.share_counter').each(function() {
    site_url = $(this).attr('id');
    getJSON(site_url, function(err, data) {
        alert(site_url);
    });
});
Is there any way to make the loop wait until the callback function of getJSON finishes and then continue, or am I approaching this the wrong way?
There is basically no good way to do what you want, and you shouldn't be trying to. JavaScript runs on an event loop, which means that if one action blocks, the entire UI freezes.
Callbacks exist so the UI can keep going while other jobs get a turn to process. getJSON takes a callback for a reason: it could potentially block for a long time, which would be terrible for your event loop.
You should try to restructure your code so that you don't need the result of getJSON() before the loop can continue.
What you probably want to do instead:
$('span.share_counter').each(function() {
    var curElement = $(this);
    var site_url = curElement.attr('id');
    getJSON(site_url, function(err, data) {
        modifyElement(curElement, data);
    });
});
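If you also need to know when every request has come back (say, to total up the counts), a simple counter on top of the same loop is enough. This is only a sketch; allDone is a made-up name for whatever you want to run at the end:
var elements = $('span.share_counter');
var remaining = elements.length;

elements.each(function() {
    var curElement = $(this);
    var site_url = curElement.attr('id');
    getJSON(site_url, function(err, data) {
        modifyElement(curElement, data);
        if (--remaining === 0) {
            allDone(); // made-up: runs once after the last callback has fired
        }
    });
});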
I'm having problems with using async.eachLimit. It works properly for the first 10 elements, but it doesn't continue past that; it simply ends. So, if there are 100 elements, it only does the first 10. This is clearly an issue of me misunderstanding callbacks. What is the proper way of using eachLimit with an external function that does not contain a callback? Or is it required for such a function to have one?
async.eachLimit(items, 10, function(item, callback) {
    outsideFunction(item.attrOne, item.attrTwo);
    //callback(); ---> leads to all running in parallel.
},
function(err) {
    console.log(err);
});
Your issue here is that you're using an async library with a function that isn't asynchronous (or isn't behaving like it is). What async.eachLimit does is go over each item in the array, executing only limit of them at a time, and wait for callback() to be called to signal that the current iteration is finished and another one can be started.
In your code example, the callback (when uncommented) gets called immediately after outsideFunction is invoked, because that call is non-blocking. async says "I've been told it's done, I'll go on to the next one", so all 100 end up being executed at essentially the same time. If outsideFunction is an asynchronous function, it needs to accept a callback (or use promises) to say when it has finished executing; inside that callback you call the callback for async.eachLimit, and then it will only do 10 at a time the way you want. Here's an example:
async.eachLimit(items, 10, function(item, callback)
{
    outsideFunction(item.attrOne, item.attrTwo, function(someResult)
    {
        // Your outside function calls back saying it's finished ...
        callback(); // ... so we tell async we're done
    });
},
function(err)
{
    console.log(err);
});
If outsideFunction isn't your function and it really is asynchronous, then it is either using promises or you need to find a library that exposes its asynchronous functions properly. If the function isn't asynchronous at all, then async.eachLimit won't help you.
If it's your function, you should give it a callback parameter so it can signal when it's done (or have it return a promise).
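For illustration, if outsideFunction is yours, the change is usually as small as accepting a callback and invoking it when the underlying asynchronous work ends. A rough sketch, where someAsyncWork stands in for whatever callback-based operation is really inside:
function outsideFunction(attrOne, attrTwo, done) {
    someAsyncWork(attrOne, attrTwo, function(err, result) {
        // ... any processing of the result ...
        done(result); // tells the caller (and async.eachLimit) this item is finished
    });
}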
I'm a bit new to Node.js. I've run into a problem where I want to prevent a callback from running while it is already being executed. For example:
items.forEach(function(item) {
    doLongTask(item, function handler(result) {
        // If items.length > 1, this will get executed multiple times.
    });
});
How do I make the other invocations of handler wait for the first one to finish before going ahead? I'm thinking something along the lines of a queue, but I'm a newbie to Node.js so I'm not exactly sure what to do. Ideas?
There are already libraries which take care of that, the most used being async.
You will be interested in the async.eachSeries() function.
As for an actual example...
const async = require('async')

async.eachSeries(
    items,
    (item, next) => {
        // Do stuff with item, and when you are done, call next
        // ...
        next()
    },
    err => {
        // either there was an error in one of the handlers and
        // execution was stopped, or all items have been processed
    }
)
As for how the library does this, you are better off having a look at the source code.
It should be noted that this only makes sense if your item handler performs an asynchronous operation, like interfacing with the filesystem or the network. There is no operation in Node.js that would cause one piece of JS code to execute in parallel with another within the same process, so if all you do is some calculations, you don't need to worry about this at all.
How to prevent two callbacks from running simultaneously?
They won't run simultaneously: Node runs JavaScript on a single thread. Asynchronous operations can overlap, but the JavaScript thread will only ever be doing one thing at a time.
So presumably doLongTask is asynchronous. You can't use forEach for what you'd like to do, but it's still not hard: You just keep track of where you are in the list, and wait to start processing the next until the previous one completes:
var n = 0;
processItem();

function processItem() {
    if (n < items.length) {
        doLongTask(items[n], function handler(result) {
            ++n;
            processItem();
        });
    }
}
I have a node application that is not a web application: it completes a series of asynchronous tasks before returning 1. Immediately before returning, the results of the program are printed to the console.
How do I make sure all the asynchronous work is completed before returning? I was able to achieve something similar in a web application by making sure all tasks were completed before calling res.end(), but I don't have any equivalent final 'event' to call before letting a script return.
See below for my (broken) function, which attempts to wait until callStack is empty. I just discovered that this is a somewhat nonsensical approach, because node waits for processHub to complete before entering any of the asynchronous functions called in processObjWithRef.
function processHub(hubFileContents){
    var callStack = [];
    var myNewObj = {};
    processObjWithRef(samplePayload, myNewObj, callStack);
    while(callStack.length > 0){
        //do nothing
    }
    return 1;
}
Note: I have tried many times previously to achieve this kind of behavior with libraries like async (see my related question at How can I make this call to request in nodejs synchronous?), so please take the answer and comments there into account before suggesting any answers based on 'just use async'.
You cannot wait for an asynchronous event before returning; that's the definition of asynchronous! Trying to force Node into this programming style will only cause you pain. A naive example would be to check periodically to see whether callstack is empty:
var callstack = [...];

function processHub(contents) {
    doSomethingAsync(..., callstack);
}

// check every second to see if callstack is empty
var interval = setInterval(function() {
    if (callstack.length == 0) {
        clearInterval(interval);
        doSomething();
    }
}, 1000);
Instead, the usual way to do async work in Node is to give your function a callback parameter:
function processHub(hubFileContents, callback){
    var callStack = [];
    var myNewObj = {};
    processObjWithRef(samplePayload, myNewObj, callStack, function() {
        if (callStack.length == 0) {
            callback(some_results);
        }
    });
}
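A caller then consumes the result through that callback instead of a return value; for example, to print the results as your original program does:
processHub(hubFileContents, function(results) {
    console.log(results); // everything asynchronous has finished by the time this runs
});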
If you really want to return something, check out promises; they are guaranteed to deliver their result either right away (if already resolved) or at some point in the future when they are resolved.
function processHub(hubFileContents){
    var callStack = [];
    var myNewObj = {};
    return new Promise(function(resolve, reject) {
        // assuming processObjWithRef takes a callback
        processObjWithRef(samplePayload, myNewObj, callStack, function() {
            if (callStack.length == 0) {
                resolve(some_results);
            }
        });
    });
}
var processHubPromise = processHub(...);
processHubPromise.then(function(result) {
    // do something with 'result' when complete
});
The problem is with the design of your function: you want to return a synchronous result from a list of tasks that are executed asynchronously.
You should implement your function with an extra parameter, a callback, through which you pass the result (in this case, 1) for some consumer to do something with.
You also need a callback parameter in your inner function, otherwise you won't know when it ends. If that is not possible, then you should do some kind of polling (using setInterval, perhaps) to test the state of the callStack array (e.g. until it is empty).
Remember, in JavaScript you should never ever busy-wait. That will lock up your program entirely, since it runs on a single thread.
deasync is designed to address your problem exactly. Just replace
while(callStack.length > 0){
    //do nothing
}
with
require('deasync').loopWhile(function(){return callStack.length>0;});
The problem is that node.js is single-threaded, which means that if one function is running, nothing else runs (the event loop) until that function has returned. So you cannot block a function to make it return only after the async work is done.
You could, for example, set up a counter variable that counts started async tasks and decrement it from a callback in your async code that runs when a task finishes; when the counter reaches zero, all the tasks are done.
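A minimal sketch of that counter idea, assuming each task is a function that accepts a completion callback (tasks, startTask and the log message are all placeholders):
var pending = 0;

function startTask(task) {
    pending++;                       // one more async task in flight
    task(function() {                // the task calls this back when it finishes
        pending--;
        if (pending === 0) {
            console.log('all asynchronous tasks finished');
        }
    });
}

tasks.forEach(startTask);            // tasks: an array of functions taking a callback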
Node.js runs on a single-threaded event loop and leverages asynchronous calls for various things, like I/O operations.
If you need to wait for a number of asynchronous operations to finish before executing additional code, you can try using async: Node.js Async Tutorial
You'll need to start designing and thinking asynchronously, which can take a little while to get used to at first. This is a simple example of how you would tackle something like "returning" after a function call.
function doStuff(param, cb) {
    //do something
    var newData = param;
    //"return"
    cb(newData);
}

doStuff({some: data}, function(myNewData) {
    //you're done with doStuff in here
});
There are also a lot of helpful utility functions in the async library, available on npm.
I'd like to be able to dispatch a bunch of work via JavaScript to be done in the browser in such a way that the browser stays responsive throughout.
The approach I'm trying to take is to chunk up the work, passing each chunk to a function that is then queued with a setTimeout(func, 0) call.
I need to know when all the work is done, so I'm storing the returned timer ID in a map (id -> true|false). The mapping is set to false in the block of code right after I get the timer ID, and the queued function sets the mapping to true when it completes... except, of course, the queued function doesn't know its own timer ID.
Maybe there's a better/easier way... or some advice on how I can manipulate my map as I need to?
I would queue the work in an array, use one timeout to process the queue and call a callback once the queue is empty. Something like:
var work = [...];

var run = function(work, callback) {
    setTimeout(function step() {
        if (work.length > 0) {
            process(work.shift());
            setTimeout(step, 25);
        } else {
            callback();
        }
    }, 25);
};

run(work, function() {
    alert('Work is done!');
});
As JavaScript in browsers is single-threaded, there is no real advantage to running multiple timeouts (at least I think this is what you are doing). It may even slow down the browser.
I'd like to add that although JavaScript is single-threaded, you can still have multiple ajax calls in flight at once. I recently had a site that needed to make potentially hundreds of ajax calls, and the browser just couldn't handle it. I created a queue that used setTimeout to run 5 calls at a time. When one of the ajax calls returned, it fired a callback (handled by the single thread), which then made the next call on the stack.
Imagine you're a manager who can only talk to one person at a time. You give 5 employees assignments, then wait for their responses, which may come back in any order. Once the first employee comes back and gives you the information, you give them a new assignment and wait for the next employee (or perhaps even the same employee) to come back. So although you're "single-threaded", 5 things are going on at once.
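A rough sketch of that queue idea, simplified to leave out the setTimeout wrapper; the urls array, the limit of 5 and handleResult are placeholders:
var urls = [...];           // hundreds of URLs to fetch
var queue = urls.slice();
var active = 0;
var limit = 5;

function next() {
    while (active < limit && queue.length > 0) {
        active++;
        $.get(queue.shift(), function(result) {
            handleResult(result);   // placeholder for whatever you do with each response
            active--;
            next();                 // a slot has freed up, start the next call
        });
    }
}

next();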
There is an example right in the HTML Standard of how best to handle this:
To run tasks of several milliseconds back to back without any delay,
while still yielding back to the browser to avoid starving the user
interface (and to avoid the browser killing the script for hogging the
CPU), simply queue the next timer before performing work:
function doExpensiveWork() {
    var done = false;
    // ...
    // this part of the function takes up to five milliseconds
    // set done to true if we're done
    // ...
    return done;
}

function rescheduleWork() {
    var handle = setTimeout(rescheduleWork, 0); // preschedule next iteration
    if (doExpensiveWork())
        clearTimeout(handle); // clear the timeout if we don't need it
}

function scheduleWork() {
    setTimeout(rescheduleWork, 0);
}

scheduleWork(); // queues a task to do lots of work
The moment of finishing the work is pretty clear: it is when clearTimeout is called.
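Since the original question was about knowing when all the work is done, one hedged variation is to thread a completion callback through and invoke it at that clearTimeout point (onAllWorkDone is an assumed name, not part of the spec example):
function rescheduleWork(onAllWorkDone) {
    var handle = setTimeout(function() { rescheduleWork(onAllWorkDone); }, 0);
    if (doExpensiveWork()) {
        clearTimeout(handle);    // no further iterations are needed ...
        onAllWorkDone();         // ... so this is where "all work is done"
    }
}

rescheduleWork(function() {
    alert('All chunks processed');
});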
I have a recursive loop where, inside the function, I have at least 2 ajax get/post calls, and the recursion happens after the first ajax get. My function structure is like this:
function Loop() {
    $.get(url, data, function(result) {
        for loop to render the result {
            // render the result here
        }
        for loop to get another data using the result {
            $.post(url, result.data, function(postResult) {
                // I don't know what it did here since
                // I don't have an access to this post
            });
            // is there a way here that i will not proceed if the post is not done yet?
        }
        setTimeout("", 1000); // I wait for 1 second for the post to finish
        Loop(); // call the recursion
    }, "json");
}
Can anyone tell me what's wrong with this code? Why do I get a warning that my script is causing the computer to run slowly? I know that this code is the one causing it, but I don't know the workaround.
I know the second loop inside the get is using a lot of memory. Is there a way to avoid looping back if the ajax post is not finished yet?
Your setTimeout will not neatly pause the code for one second: it just sets a timer for an (in your case, empty) piece of code to run after a certain time, and the rest of the script continues to execute in the meantime.
So you're currently calling your recursion function a lot more frequently than you think you are. That's your first problem.
Your biggest problem, though, is that regardless of what you're doing with the result of your post, that's in another scope entirely, and you cannot break out of the Loop function from there. There is nothing in your code to break the recursion, so it is infinite, and very fast, and it sends off Ajax requests on top of that.
You need to describe in more detail what you want to achieve, and perhaps somebody can show you how you should do it. The only thing that is certain is that you need to use callbacks. I've written an example, but it makes a lot of assumptions and is only an approximation of what I think you want to achieve; no doubt you'll need to tweak it a bit to fit your needs. Hopefully it'll give you an idea of the workflow you need to use:
function Loop() {
    $.get(url, data, function(result) {
        for loop to render the result {
            // render the result here
        }
        // this is what you're looping over in your second loop
        var postQueue = result.someArray;
        renderChildData(postQueue, 0);
    }, "json");
}

function renderChildData(array, index) {
    // this is just one item in the loop
    var currentItem = array[index];
    $.post(url, currentItem, function(postResult) {
        // we have received the result for one item
        // render it, and proceed to fetch the next item in the list
        index++;
        if(index < array.length) {
            renderChildData(array, index);
        }
    });
}
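If you do want the whole cycle to start over, as your original Loop did, one place to do that is the branch where the last post has finished; purely as a sketch on top of the code above:
// inside the $.post callback of renderChildData:
index++;
if (index < array.length) {
    renderChildData(array, index);
} else {
    Loop(); // only restart the cycle once the final post has completed
}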
First of all, this line:
setTimeout("", 1000); // I wait for 1 second for the post to finish
doesn't make your script wait, since that's improper usage of the setTimeout function. I think you should consider using setInterval instead, and do it like this:
function Loop() {
    $.get(url, data, function(result) {
        for loop to render the result {
            // render the result here
        }
        for loop to get another data using the result {
            $.post(url, result.data, function(postResult) {
                // I don't know what it did here since
                // I don't have an access to this post
            });
            // is there a way here that i will not proceed if the post is not done yet?
        }
    }, "json");
}
setInterval(Loop, 1000);
This will execute your function every second, which I guess is exactly what you wanted. There is no reason to make a recursive call here.
This basically happens when a page runs a huge amount of code at once, so just try to reduce the amount of code being executed.