I'm trying to write a Sencha Touch 2.0 WebSql proxy that supports tree-data. I started from tomalex0's WebSql/Sqlite proxy. https://github.com/tomalex0
When modifying the script I ran into a strange debugging issue:
(I'm using Chrome 17.0.963.78 m)
The following snippet just gets jumped over. The transaction never takes place! But when I set a breakpoint above or below it and run the same code in the console, it does work!
dbConn.transaction(function (tx) {
    console.log(tx);
    if (typeof callback == 'function') {
        callback.call(scope || me, results, me);
    }
    tx.executeSql(sql, params, successcallback, errorcallback);
});
You can see the blue log; the green log is from the success handler. If the query were actually performed, the same log would appear again above (it's a SELECT * FROM ...; when run multiple times without changing the data, I would expect the same result).
I found out that when I add the code block to the watch expressions it also runs.
It isn't being skipped over. It is being scheduled, but not executed until much later, due to the asynchronous nature of the request:
http://ejohn.org/blog/how-javascript-timers-work/
Since the code making the asynchronous call is itself executed synchronously, the callback is deferred until after all of the synchronous code has finished, because JavaScript is single-threaded.
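The same scheduling can be demonstrated without WebSql at all; this is a minimal sketch using setTimeout, showing that an asynchronous callback only runs once the current synchronous code has finished:

```javascript
// Minimal sketch: an asynchronous callback is queued immediately,
// but only executed after the current synchronous code completes.
var order = [];

setTimeout(function () {
    order.push('async callback'); // runs later, on the next event-loop turn
}, 0);

order.push('sync code');
// Here order is still ['sync code']; the timer callback has not run yet.
```

The transaction callback in the snippet above behaves the same way: it is queued by dbConn.transaction() and only executes once the currently running script returns control to the event loop, which is why it appears to work when the script is paused at a breakpoint.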
Related
I have a very annoying problem with PM2's (version 5.1.0) pm2.delete(process, errback) function. After two days of debugging and trying to find the root of the problem, it is starting to look like an issue related to the PM2 library itself.
Here a simplified code snippet of what I am doing. I will explain the issue afterwards.
const debug = require("debug")("...");
const pm2 = require("pm2");
const Database = require("./Database");
...
class Backend {
    static cleanup() {
        return new Promise((resolve, reject) => {
            pm2.connect((error) => {
                if (error) {
                    debug("...");
                    reject();
                    return;
                }
                // MongoDB query and async forEach iteration
                Database.getCollection("...").find({
                    ...
                })
                .forEach((document) => {
                    pm2.delete("service_...", (error, process) => {
                        if (!error && process[0].pm2_env.status === "stopped") {
                            debug("...");
                        } else {
                            debug("...");
                        }
                    });
                })
                .catch(...)
                .then(...);
            });
        });
    }
    ...
}
Now to the problem: my processes do not get terminated and errback of pm2.delete(process, errback) is not executed AT ALL.
I printed the error parameter of the connect callback and it is always null, hence a connection to PM2 is established successfully.
I placed debug text directly at the beginning of the delete callback and it is not printed
I wrapped the delete function in a while loop which only stops once the delete callback has executed at least once, and the loop runs forever
I started a debug session and noticed that PM2's delete function in node_modules/pm2/lib/API.js (line 533) gets called, but for some reason the process does not get terminated and my provided callback function is never executed. I stepped through the commands in debug mode but still cannot tell where it fails to execute the callback (it seems to happen in PM2's API.js, though)
I also noticed that, when running the code step by step in debug mode with breakpoints, my process sometimes does get terminated by the API call if I cancel the execution at a certain point in between (however, the callback was still not executed)
I use PM2's delete function at another place of my software as well and there it is working like a charm
So for some reason the pm2.delete(process, errback) is not executed correctly and I don't know what to do at this point. Is someone experienced with PM2's source code or had a similar issue at some point? Any advice would be helpful.
It looks like I found the root of the problem:
At a later point in the promise chain, after the forEach call, I use pm2.disconnect();. After further investigation I noticed that the calls are not synchronized, which in my case means that PM2 gets disconnected before the delete calls have completely finished. This produces the described results and the weird debugging behaviour.
All in all, the API works perfectly fine, but one has to pay very close attention to asynchronous code, as it can cause really complicated behaviour which is also hard to debug.
One has to make sure that the delete calls have really finished before pm2.disconnect(); is called.
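One way to enforce that ordering is to wrap each pm2.delete call in a Promise and call pm2.disconnect() only after they have all resolved. The sketch below shows the pattern; deletePromise and cleanup are names invented for illustration, not part of the PM2 API:

```javascript
// Sketch: promisify the callback-style delete so all deletions
// can be awaited before disconnecting from the PM2 daemon.
function deletePromise(pm2, name) {
    return new Promise(function (resolve, reject) {
        pm2.delete(name, function (error, proc) {
            if (error) reject(error);
            else resolve(proc);
        });
    });
}

function cleanup(pm2, names) {
    // Disconnect only once every delete has completed.
    return Promise.all(names.map(function (name) {
        return deletePromise(pm2, name);
    })).then(function () {
        pm2.disconnect();
    });
}
```

The same idea works with any callback-based PM2 call that must finish before the daemon connection is dropped.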
Good evening guys -
I've noticed that different browsers seem to handle callbacks differently.
As an example, Firefox seems to let an $.ajax.done({}) callback interrupt the current javascript instruction, but Chrome won't handle the $.ajax.done({}) callback until all current instructions finish. It's like Chrome sends the call to the end of an instruction queue, and Firefox adds it to the top of the instruction stack.
(Bear in mind that this is probably entirely incorrect terminology - I really hope this is the right place to post this)
Explicit Example that outlines my best guess:
function load_a_bunch_of_stuff() {
    $.ajax({
        // ajax things here - e.g. load 10,000 whatevers from a server
    }).always(function () {
        ajaxStatus = "done!";
    });
}
function do_things_with_loaded_stuff() {
    // Loops as long as the status is "running" and the User is
    // willing to retry:
    while (ajaxStatus === "running" &&
           confirm("Waiting on ajax, try again?"));
    // Do some cool stuff after the $.ajax call finishes
}
// --
// Main: (Assume do_things_with_loaded_stuff() is called before the
// load_a_bunch_of_stuff() finishes)
var ajaxStatus = "running";
load_a_bunch_of_stuff();
do_things_with_loaded_stuff();
-- My best guess --
In this example - the loop will run until Firefox lets the .always({}) change the 'ajaxStatus' to "done!" (probably while the user tries to click on OK) and then we can carry on.
However in Chrome, the .always({}) doesn't ever fire because (I'm guessing) the callback is executed after the current set of instructions finishes. In other words, since the .always({}) is added to the end of the instruction set (rather than in the next slot), it's stuck in the loop and never reaches the .always({}).
This example is just something similar to an issue I've run into recently while trying to develop between the two browsers. Does anyone know if this interpretation is true?
Can anyone actually explain what's going on?
This is not specific to AJAX, it's just about asynchronous code.
Firefox allows asynchronous callbacks to run while you're in a modal dialog from confirm(), prompt(), or alert(). Chrome, Safari, and Internet Explorer don't.
This can be demonstrated using setTimeout(), it doesn't require $.ajax.
function load_a_bunch_of_stuff() {
    setTimeout(function () {
        ajaxStatus = "done";
    }, 3000);
}

function do_things_with_loaded_stuff() {
    // Loops as long as the status is "running" and the User is
    // willing to retry:
    while (ajaxStatus === "running" &&
           confirm("Waiting on ajax, try again?"));
    $("div").text("Done");
}

var ajaxStatus = "running";
load_a_bunch_of_stuff();
do_things_with_loaded_stuff();
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div>
Running
</div>
Note that if you take out the confirm() call, you should expect the loop to run forever in all browsers, since no JavaScript engine should allow asynchronous callbacks to interrupt the main flow.
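If you need to wait on an asynchronous flag, the portable alternative to a blocking loop is to poll with setTimeout, which yields back to the event loop between checks so the callback can actually run. A sketch; whenDone is an invented helper, not a jQuery or browser API:

```javascript
// Sketch: poll a condition without blocking the event loop.
// Unlike a busy while loop, this yields between checks, so
// asynchronous callbacks get a chance to run in every browser.
function whenDone(isDone, callback) {
    (function check() {
        if (isDone()) {
            callback();
        } else {
            setTimeout(check, 100); // check again in 100 ms
        }
    })();
}
```

For the example above, that would mean calling whenDone with a check of ajaxStatus and the "do cool stuff" code as the callback, instead of the while loop.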
In some Node.js scripts that I have written, I notice that even if the last line is a synchronous call, sometimes it doesn't complete before Node.js exits.
I have never seen a console.log statement fail to run/complete before exiting, but I have seen some other statements fail to complete before exiting, and I believe they are all synchronous. Of course, I can see why the callback of an async function would fail to fire in this case.
The code in question is a ZeroMQ .send() call like so:
var zmq = require('zmq');
var pub = zmq.socket('pub');
pub.bindSync('tcp://127.0.0.1:5555');

setInterval(function () {
    pub.send('polyglot');
}, 500);
The above code works as expected...but if I remove setInterval() and just call it like this:
var zmq = require('zmq');
var pub = zmq.socket('pub');
pub.bindSync('tcp://127.0.0.1:5555');
pub.send('polyglot'); //this message does not get delivered before exit
process.exit(0);
...Then the message will not get delivered - the program will apparently exit before the pub.send() call completes.
What is the best way to ensure a statement completes before exiting in Node.js ? Shutdown hooks would work here, but I am afraid that would just be masking the problem since you can't put everything that you need to ensure runs in a shutdown hook.
This problem can also be demonstrated this way:
if (typeof messageHandler[nameOfHandlerFunction] == 'function') {
    reply.send('Success');
    messageHandler[nameOfHandlerFunction](null, args);
} else {
    reply.send('Failure'); // ***this call might not complete before the error is thrown below.***
    throw new Error('SmartConnect error: no handler for ZMQ message sent from Redis CSV uploader.');
}
I believe this is a legit/serious problem because a lot of programs just need to publish messages and then die, but how can we effectively ensure all messages get sent (though not necessarily received)?
EDIT:
One (potential) way to fix this is to do:
socket.send('xyz');
socket.close(); // supposedly this will block until the above message is sent
process.exit(0);
Diving into zeromq.node, you can see that Socket.send just pushes your data onto _outgoing:
this._outgoing.push([msg, flags]);
... and then calls _flush iff zmq.ZMQ_SNDMORE is unset:
this._flush();
Looks like _flush is actually doing the socket write. If _flush() fails, it emits an error.
Edit:
I'm guessing that calling pub.unbind() before exiting will force _flush() to be called:
pub.unbind('tcp://127.0.0.1:5555', function (err) {
    if (err) console.log(err);
    process.exit(0); // Probably not even needed
});
I think the simple answer is that the socket.send() method is in fact asynchronous, and that is why we see the behavior described in the OP.
The question then is - why does socket.send() have to be asynchronous - could there not be a blocking/synchronous version that we could use instead for the purpose intended in the OP? Could we please have socket.sendSync()?
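There is no sendSync as far as I know, but the intent can be approximated by closing the socket and delaying the exit by one grace period so _flush() has a chance to run. A rough sketch; sendAndExit and the 100 ms delay are my own invention and not a delivery guarantee:

```javascript
// Rough sketch: queue the message, stop accepting new ones,
// and give the event loop time to flush before exiting.
function sendAndExit(socket, message) {
    socket.send(message);
    socket.close(); // no further messages after this point
    setTimeout(function () {
        process.exit(0); // exit after an arbitrary 100 ms grace period
    }, 100);
}
```

A timeout like this only papers over the race; a flush or drain callback from the library, if one existed, would be the proper fix.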
I've got a couple of questions about this small snippet adapted from a tutorial I found here.
var loader = (function ($, host) {
    return {
        loadTemplate: function (path) {
            var tmplLoader = $.get(path)
                .success(function (result) {
                    $("body").append(result);
                })
                .error(function (result) {
                    alert("Error Loading Template");
                }) // --> (1) SEMICOLON?

            // (2) How does this wire up an event to the previous
            // jQuery AJAX GET? Didn't it already happen?
            tmplLoader.complete(function () {
                $(host).trigger("TemplateLoaded", [path]);
            });
        }
    };
})(jQuery, document);
Is there supposed to be a semicolon there?
It seems like the AJAX GET is happening and then an event is getting wired to it - what am I missing here?
Is there supposed to be a semicolon there?
It's optional, but recommended.
It seems like the AJAX GET is happening and then an event is getting wired to it - what am I missing here?
AJAX is asynchronous, so it's very unlikely the request will already have completed right after sending it. So there's time to add another callback. And even if there weren't, it would work anyway, since jQuery implements those callbacks with promises. See example here.
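That last point is the key property of promises: a handler attached after the promise has already settled still fires. A plain-Promise sketch of the same idea, no jQuery required:

```javascript
// Sketch: attaching a handler to an already-settled promise
// still runs it; the handler is just invoked asynchronously.
var results = [];
var request = Promise.resolve('result'); // stands in for the finished AJAX call

// Handler attached right away.
request.then(function (value) {
    results.push('early: ' + value);
});

// Handler attached long after the promise has settled; it fires too.
setTimeout(function () {
    request.then(function (value) {
        results.push('late: ' + value);
    });
}, 50);
```

This is why tmplLoader.complete(...) works even if the GET has, improbably, already finished by the time it is called.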
With JavaScript, and AJAX in particular, it is important to understand how the browser goes about executing your code. When you make a request for remote data via an AJAX GET, the rest of your code keeps executing. Imagine if, as soon as you made a request for some JSON to a busy server (let's say it takes a couple of seconds), everything on your page stopped working during that time. It would be very difficult to write code that wasn't painful for the user to interact with. Luckily AJAX is async, meaning it makes the request and carries on as usual until the complete event (or equivalent) is fired, which executes your code pertinent to the data you just received. So when you specify that callback at the bottom of your snippet, you are telling the browser: "go do your thing for now, but when you hear back from the server, do all of these things".
Oh yeah, and semicolons are optional, but as a best practice, most people use them.
They are assigning the $.get to a variable and then adding a complete handler to it.
It's the same as doing this:
$.get('/path', function () {
    // success callback
}).error(function (e) {
    // errors
}).complete(function () {
    // always run
});
Just an unusual way of doing it.
My application is using JavaScript's WebSQL, and I am having an issue with the order of command execution. No matter what order my code is in, the queries get executed last. For example, in the following code 2 will be alerted before 1:
db.transaction(
    function (transaction) {
        transaction.executeSql(
            'SELECT * FROM contacts WHERE id = ?;',
            [id],
            function (transaction, result) {
                alert("1");
                if (result.rows.length != 0) {
                    user = result.rows.item(0).name;
                } else {}
            },
            errorHandler);
    });
alert("2");
message = id + '%1E' + name;
Any ideas why this happens?
When you alert("2"), the transaction hasn't finished yet, so the second function you pass to it has not been called. Since it's the success handler, I assume it will be called after the transaction has completed successfully. The third argument is the code to execute when the query fails, and only if it fails.
Anything outside of the event handler code is executed as soon as the page has loaded enough content to run the JavaScript. Note that the entire page need not load to execute alert("2"), just enough of the JS. Since these statements are so close together, there is basically zero chance that the transaction will ever complete before the alert("2") statement is reached and executed.
However, if you had enough code between alert("2") and db.transaction(...), it's possible (in what is called a race condition) that the callback could be executed before the alert("2") code.
You want to be careful with event handlers in this case, although it depends on what your success handler does. If it modifies the page DOM, then I would highly recommend wrapping the db.transaction() call (and the surrounding code) in an event handler that is bound to the page loading.
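Concretely, anything that depends on the query result has to move into the success callback. The sketch below uses a stand-in asynchronous query (setTimeout with a made-up result row) instead of WebSQL, so the ordering is easy to see:

```javascript
// Sketch: queryUser stands in for transaction.executeSql; any
// async API that delivers its result via a callback behaves alike.
function queryUser(id, onSuccess) {
    setTimeout(function () {
        onSuccess({ id: id, name: 'Alice' }); // fake result row
    }, 10);
}

queryUser(42, function (row) {
    // Equivalent of alert("1"): runs when the result arrives.
    var user = row.name;
    // Code that needs the result belongs here, not after the call.
    var message = row.id + '%1E' + user;
    console.log(message);
});
// Equivalent of alert("2"): this line runs first, before the callback.
```

The same restructuring applies to the original code: build message inside the executeSql success handler, not after db.transaction().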
This isn't an answer to your question, but I thought I should give you a warning about webSQL.
As of 18 November 2010, W3C had announced that they have deprecated the Web SQL Database recommendation draft and will no longer maintain it.
So while it may WORK in browsers at the moment, I wouldn't rely on it for the future.