Should I use try-catch when calling a jQuery plugin? - javascript

So, I have jQuery plugins (or any other plugins / functions / libraries, for that matter).
I was wondering if I should call each plugin inside a try-catch in order to avoid any undefined-type errors, which might otherwise block the execution of the rest of the script.
This is how/where I call the plugins right now.
(function($){
    $(document).ready(function(){
        // jquery plugin
        try {
            $("#app").plugin();
        } catch (e) {
            console.log(e);
        }
        // some other function applied to entire document
        try {
            libraryFunction();
        } catch (e) {
            console.log(e);
        }
    });
})(jQuery);
I know this is not code review, but if you have any suggestions on how to improve this code, please let me know.

I have dealt with scenarios where a plugin error actually broke the rest of the script, and wrapping the plugin call in a try-catch solved the problem.
Using try-catch a lot may slow down your script, but unless you're dealing with long loops, the performance difference will be unnoticeable.

It is always good practice to handle unknown behavior with a try-catch block. But even more important is knowing how to handle the exception once it's caught. In the above code, the exception is only being logged (which is good) and nothing else; in that case, your execution may still get blocked.
Additionally, it would be good to throw the exception back to the caller and let the caller handle the failure. For example, in the above code, if jQuery threw an exception, you could let the caller know about it, and the caller could decide to call the function again or do something else.
In short, after catching, the handling decides how your execution will recover. Logging alone will not help.
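A minimal sketch of that pattern, reusing the question's $("#app").plugin() call (the fallback text is invented):
function initPlugin() {
    try {
        $("#app").plugin();
    } catch (e) {
        console.log(e); // log for diagnostics...
        throw e;        // ...then rethrow so the caller knows it failed
    }
}

try {
    initPlugin();
} catch (e) {
    // The caller decides how to recover, e.g. retry once or show a fallback.
    $("#app").text("This feature is unavailable right now.");
}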
Edit:
An example to show why an exception should be thrown back:
Let's say I have an AppThread that asks a worker thread to store some data in an SQL database. Such a code flow will ideally not require the worker thread to return anything to the caller, because the WorkerThread simply executes some INSERT statements.
Now, during the insertion, the worker thread catches an SQLException, simply logs it and returns. The AppThread was never notified of this exception and simply assumes that the data was inserted as requested. After some time, the AppThread wants to read the same data from the database and asks the WorkerThread to fetch it using some Id. This time, the code will not throw any exception and the result set will simply be null. Remember, the AppThread was sure that the data would be present and will not know what to do if the result set is null. So in a way, the code execution gets blocked some time after the exception.
Now, had the WorkerThread notified the AppThread of the exception earlier, the AppThread would have been aware and could have reattempted the insert operation, or shown a dialog letting the user know that the data may need to be verified before attempting the insert again. Also, because the exception was passed back, its message would give the user more direct hints about what went wrong; they would not have to go back to the logs to check.
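The same trap exists in JavaScript. Here is a rough Promise-based sketch, where insertData is a hypothetical stand-in for the worker's INSERT call:
// Bad: the rejection is swallowed, so the caller assumes the insert succeeded.
function saveAndSwallow(record) {
    return insertData(record).catch(function (e) {
        console.log(e); // logged, but the caller is never told
    });
}

// Better: log, then re-throw so the rejection propagates to the caller,
// who can retry or warn the user before reading the data back.
function saveAndPropagate(record) {
    return insertData(record).catch(function (e) {
        console.log(e);
        throw e;
    });
}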

How to lock a thread in JavaScript in an energy-efficient way?

I want to patch the alert function in the browser to show additional text, but I need to await some data before I can show the necessary content in the alert. However, I can't postpone the alert call.
Also, I don't know of a way to close the alert and show another alert without a user action (if one exists, it could solve my problem too).
So, I have to await some data, but I can't break the alert behavior, which is to block execution of the code.
To await a response, I can do something like this:
var start = performance.now();
while (true) {
    var time = performance.now() - start;
    if (time >= 3000) break;
}
console.log('done');
But instead of checking the timer, I will check some data.
This approach should work, but it is terrible for performance: unlike alert, which simply freezes the thread and does nothing until the dialog is closed, it loads the CPU with useless work.
Is it possible to freeze a thread in a more energy-efficient way? I have to freeze the thread until I get some data from a worker.
Why is a Promise not solving your problem?
Promises do not block the main thread, so they do not reproduce the behavior of alert, which is what I need.
Blocking the thread is not user-friendly, and you shouldn't need it just to await some data.
I know about that, and this note is fair enough for developing web pages and applications.
But this case is special: I'm developing a feature for a browser extension that translates alerts. The browser extension must not modify the behavior of the page, so when a web site calls alert, the thread must freeze. The browser extension must not postpone an alert call, to avoid unexpected behavior on the page.
You can see the feature explained here: Feat: Implement optional translation of alerts and console logs #102
The only way I can think of to "block" without consuming CPU would be to make a synchronous XMLHttpRequest (these are deprecated, because blocking is not user-friendly). You'll need to set up a server that can read the payload of the request and reply after the specified amount of time.
const xh = new XMLHttpRequest();
xh.open('POST', urlToYourServer, false); // false makes it synchronous; POST, because browsers drop the body of a GET
xh.send('3');
where that '3' is the request body that the server parses (and responds after 3 seconds).
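For completeness, a minimal Node.js sketch of such a server (the port is made up; it reads the seconds from the request body and delays its reply):
// Minimal delay server: replies after the number of seconds sent in the body.
const http = require('http');

http.createServer(function (req, res) {
    var body = '';
    req.on('data', function (chunk) { body += chunk; });
    req.on('end', function () {
        var seconds = parseInt(body, 10) || 0;
        res.setHeader('Access-Control-Allow-Origin', '*'); // so a page on another origin can read the reply
        setTimeout(function () {
            res.end('done');
        }, seconds * 1000);
    });
}).listen(8080);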
That said, you should not do this - it's a very inelegant and user-unfriendly approach. It'll stop any other action (including browser repainting and other requests) from occurring while this is going on. Better to properly wait for whatever you need (whether that's through a .then of a Promise, or a callback, or something else) - but without more context in the question, how exactly to accomplish this is unclear.
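Still, as a rough sketch of the non-blocking route (myWorker is a stand-in for your existing Web Worker):
// Wait for the worker's data via a Promise instead of blocking the thread.
function getDataFromWorker(worker) {
    return new Promise(function (resolve) {
        worker.addEventListener('message', function handler(e) {
            worker.removeEventListener('message', handler); // one-shot listener
            resolve(e.data);
        });
    });
}

getDataFromWorker(myWorker).then(function (data) {
    console.log('done', data);
});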

In IndexedDB, is there any scenario where data from a transaction is written to disk but oncomplete does not fire?

As I understand it, there are two potential outcomes for an IndexedDB transaction:
There is some error, so no changes in the transaction are written and oncomplete never fires.
Everything works, so the changes are written and then oncomplete fires.
But I've heard reports from some users about my application intermittently not working in Chrome, and one of them claims to have debugged it to the point of identifying oncomplete as the problem. He says that sometimes oncomplete is not firing even though the data is being saved and no error messages are produced.
I understand that this might not be correct because I haven't been able to observe the problem myself and nobody can come up with a set of steps to reproduce it. But I have seen weird browser-specific bugs in IndexedDB before, especially when writing lots of data in multiple transactions to large object stores (which is where the problem occurs). Has anyone noticed something like this?
I took a quick look at the spec and it says
To determine if a transaction has completed successfully, listen to the transaction’s complete event rather than the success event of a particular request, because the transaction may still fail after the success event fires.
This is saying that there may be cases where a request has succeeded (i.e. some data may have been written) even though the transaction itself has failed (e.g. the write was interrupted?).
It's still wise to listen for the complete event, but if this is causing problems you may also want to listen for success on the individual requests, as you can then identify where the error happened on the client's machine and handle it accordingly, e.g. check how much was written and write the rest.
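For example, a small sketch of listening at both levels (db is assumed to be an already-open IDBDatabase; the store name and record are placeholders, with the store keyed on id):
var tx = db.transaction('store', 'readwrite');
var request = tx.objectStore('store').put({ id: 1, value: 'example' });

request.onsuccess = function () {
    // The request succeeded, but the transaction can still fail after this point.
    console.log('request ok');
};

tx.oncomplete = function () {
    // Only now is the data guaranteed to be committed.
    console.log('transaction complete');
};

tx.onerror = function (event) {
    // Inspect event.target.error to decide what needs to be rewritten.
    console.log('transaction failed', event.target.error);
};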

Do node.js domains automatically clean themselves up or do I have to call domain.dispose()

I'm a bit confused about node.js domains. I'm using them to catch errors that may be thrown in asynchronous code.
What I'm not sure about is whether domains automatically clean themselves up for garbage collection once domain.run(blah) has finished, or whether I have to manually call domain.dispose() once I am done with the domain.
The problem with domain.dispose() is that it also destroys all IO streams that the domain may have been intercepting, which is not what I want, as I'm just using this particular domain to catch errors thrown in asynchronous code.
Don't use it - it will be deprecated: https://github.com/joyent/node/issues/5018

Where do I start debugging a jQuery/JavaScript function that calls an API on my server?

Where do I start debugging a jQuery/JavaScript function that calls an API on my server, when it works perfectly well locally - but when uploaded to the server, just returns an HTTP 500 error?
I've tried Fiddler, but it shows nothing in JSON/XML - the only thing it does show is in the Auth section.
The server Event Logs show nothing around the times I'm trying to test this.
Does the Fiddler response suggest anything is wrong, or can anyone suggest what I may need to turn on in the Event Viewer to capture whatever these 500 errors may be?
Thanks for any help,
Mark
Try adding some console.log() messages that surround the JavaScript call and sit within the callback functions. Doing so will let you know where the failure is occurring. When debugging JavaScript I typically stick to the Network tab within Chrome Developer Tools, and to Firebug; these tools give you proper output from your console.log() messages.
Specifically, in your jQuery result handler I would add the following:
console.log(resultObject);
This will output the entire object tree so that you can drill down into the meat from within Firebug or Chrome Developer Tools... if you need to.
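For instance, something like this (the URL is a placeholder):
console.log('about to call the API');
$.ajax({ url: '/api/endpoint', dataType: 'json' })
    .done(function (resultObject) {
        console.log('success:', resultObject); // drill into the object tree in dev tools
    })
    .fail(function (jqXHR, textStatus, errorThrown) {
        console.log('failed:', jqXHR.status, textStatus, errorThrown);
    });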
If, for whatever reason, you are opposed to littering your code with log messages, then check that the call is actually happening when you test from your server. You can see whether JavaScript is sending the HTTP request by looking at your network traffic, either in Fiddler or in browser-based tools. If the request is not happening, then your code is breaking before the call, which, in your case, probably means environmental differences.
Is everything referenced and configured properly? Check for null values due to improper configuration or bad references.
500 is a "server error", which basically means something (could be almost anything) broke at the server side.
I would recommend:
Investigate your options for exception handling: http://www.asp.net/web-api/overview/web-api-routing-and-actions/exception-handling
Consider setting the IncludeErrorDetailPolicy to Always, though note that this is a setting that shouldn't be left in use in a production environment - Error messages returned from Web API method are omitted in non-dev environment
Examine server-side error logging. I'm a big fan of ELMAH. You'll need a little extra effort to get it working properly in Web API - http://blogs.msdn.com/b/webdev/archive/2012/11/16/capturing-unhandled-exceptions-in-asp-net-web-api-s-with-elmah.aspx

Delay script until all messages have been passed?

As usual, I have Googled this a fair bit and read up on the Message Passing API, but, again, I've had to resort to the good fellas at StackOverflow.
My question is this: when passing messages between a Google Chrome extension's background page and content script, is there any way to make it synchronous - that is, to delay the JavaScript until the messages are detected as having been successfully passed?
I have a function, immediately after the message-passing function, that makes use of the localStorage data that is passed. On first runs the script always results in an error, because the data is not passed fast enough.
Currently, I'm circumventing this with setTimeout(nextFunction, 250); but that's hardly an elegant or practical solution, as the amount and size of the values passed is always going to change, and I have no way of knowing how long passing the values will take. Plus, I would imagine that passing times vary with the browser version and the user's system.
In short, I need it to be dynamic.
I have considered something like
function passMessages(){
    chrome.extension.sendRequest({method: "methodName"}, function(response) {
        localStorage["lsName"] = response.data;
    });
    checkPassedMessages();
}

function checkPassedMessages(){
    if (!localStorage["lsName"]){
        setTimeout(checkPassedMessages, 100); //Recheck until data exists
    } else {
        continueOn();
    }
}
but I need to pass quite a lot of data (at least 20 values) and, frankly, it's not practical to check !localStorage["lsName1"] && !localStorage["lsName2"] etc., etc. Plus, I don't even know whether that would work.
Does anyone have any ideas?
Thanks
Update: Still no answer, unfortunately. Can anyone offer any help at all? :/
I don't know whether I'm interpreting your question wrongly. As far as I understand, you are sending a request from your extension page to a content script. The request handler in the content script does some operation on the message passed, after which you need control back in the extension page. If this is what you need, you have everything in the Google Extension Documentation. The following code works:
//Passing the message
function passMessages(){
    chrome.extension.sendRequest({method: "methodName"}, function(response) {
        //callback function that will be called from the receiving end
        continueOn();
    });
}

//Receiving the message
chrome.extension.onRequest.addListener(
    function(request, sender, sendResponse) {
        //Do the required operation with the message passed and call sendResponse
        sendResponse();
    });
You can solve the general case of this problem (i.e., on any platform where you are communicating between distinct threads of execution) by collecting the information passed while waiting for some sort of "go" message before you begin processing the collected information. You can use the same idea to have the sender wait for the complete reply.
Of course it's possible that your particular platform provides tools for doing this; but if not, you can always build the general solution by hand, as sketched below.
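A rough sketch of that hand-built approach, using the same (old) chrome.extension API as the answer above; the "setValue" and "go" method names are invented:
//Collect values as they arrive; only continue once the sender says "go".
var collected = {};

chrome.extension.onRequest.addListener(function (request, sender, sendResponse) {
    if (request.method === "setValue") {
        collected[request.key] = request.value; //stash each value
        sendResponse();
    } else if (request.method === "go") {
        sendResponse();
        continueOn(collected); //everything has arrived; start processing
    }
});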
