Javascript: Suppress "Stop running this script?", or how to use setTimeout?

I'm building a js library that reads binary files, including zip files.
Because there's no direct native support for arrays of binary data, when the zip files get large, there's a lot of copying that has to go on (See my other question for details).
This results in a "Stop Running this script?" alert. Now, I know this can happen if there's an infinite loop, but in my case, it's not an infinite loop. It's not a bug in the program. It just takes a loooooong time.
How can I suppress this?

This message is enabled for security reasons; otherwise you could block the user's browser with a simple never-ending loop. I don't think there's a way to deactivate it.
Think about splitting your processing into several parts and scheduling them via setTimeout. That should suppress the message, because the script is no longer running the whole time.

You could divide the process into increments, then use setTimeout to add a small delay.

In IE (and maybe Firefox too), the message is based on the number of statements executed, not the running time. If you can split some of the processing into a separate function, and defer it with setTimeout, I believe that it won't count toward the limit.
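For illustration, a minimal sketch of that pattern (all names here are illustrative, not from the asker's library): do the work in fixed-size chunks, and let a zero-delay setTimeout between chunks hand control back to the browser, which also resets IE's statement counter.
// Generic chunked-loop sketch; processInChunks/handleItem are made-up names.
function processInChunks(items, chunkSize, handleItem, onDone) {
    var index = 0;
    function nextChunk() {
        var end = Math.min(index + chunkSize, items.length);
        for (; index < end; index++) {
            handleItem(items[index]);
        }
        if (index < items.length) {
            setTimeout(nextChunk, 0); // yield to the browser between chunks
        } else if (onDone) {
            onDone();
        }
    }
    nextChunk();
}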

...answering my own question, so I could post the code I used.
The main issue was, I was reading the entire contents of a file, with a readToEnd() method, which actually reads one byte at a time. When reading a large file, it took a looooong time. The solution was to read asynchronously, and in batches.
This is the code I used:
readToEndAsync : function(callback) {
    _state = "";
    var slarge = "";   // accumulates completed batches
    var s = "";        // current batch
    var txtrdr = this;

    var readBatchAsync = function() {
        var c = 0;
        var ch = txtrdr.readChar();
        while (ch != null)
        {
            s += ch;
            c++;
            if (c > 1024)
            {
                // batch is full; stash it and yield
                slarge += s;
                s = "";
                break;
            }
            ch = txtrdr.readChar();
        }
        if (ch != null) {
            // more to read; schedule the next batch so the browser stays responsive
            setTimeout(readBatchAsync, 2);
        }
        else {
            // end of input; hand the full contents to the caller
            callback(slarge + s);
        }
    };

    // kickoff
    readBatchAsync();
    return null;
},
And to call it:
textReader.readToEndAsync(function(out) {
    say("The callback is complete");
    // the content is in "out"
});

I believe this warning is a browser feature (Firefox and others each have their own version of it), and it has nothing to do with the JavaScript language itself.
As far as I know you (the programmer) have no way of stopping it in your visitors' browser.

Related

How to structure my code to return a callback?

So I've been stuck on this for quite a while. I asked a similar question here: How exactly does done() work and how can I loop executions inside done()?
but I guess my problem has changed a bit.
So the thing is, I'm loading a lot of streams and it's taking a while to process them all. To make up for that, I want to at least load the streams that have already been processed onto my webpage, and continue processing the stream of tweets at the same time.
loadTweets: function(username) {
    $.ajax({
        url: '/api/1.0/tweetsForUsername.php?username=' + username
    }).done(function (data) {
        var json = jQuery.parseJSON(data);
        var jsonTweets = json['tweets'];
        $.Mustache.load('/mustaches.php', function() {
            for (var i = 0; i < jsonTweets.length; i++) {
                var tweet = jsonTweets[i];
                var optional_id = '_user_tweets';
                $('#all-user-tweets').mustache('tweets_tweet', { tweet: tweet, optional_id: optional_id });
                configureTweetSentiment(tweet);
                configureTweetView(tweet);
            }
        });
    });
}
This is pretty much the structure of my code right now. I guess the problem is the for loop, because nothing displays until the for loop is done. So I have two questions.
How can I get the stream of tweets to display on my website as they're processed?
How can I make sure the Mustache.load() is only executed once while doing this?
The problem is that UI updates and JS operations all run in the same thread. To solve this, use setTimeout so that the JS work is queued behind the pending UI operations. You can also pass a parameter for the time interval (around 4 ms) so that browsers with a slower JS engine can also perform smoothly.
...
var i = 0;
var timer = setInterval(function() {
    var tweet = jsonTweets[i++];
    var optional_id = '_user_tweets';
    $('#all-user-tweets').mustache('tweets_tweet', {
        tweet: tweet,
        optional_id: optional_id
    });
    configureTweetSentiment(tweet);
    configureTweetView(tweet);
    if (i === jsonTweets.length) {
        clearInterval(timer);
    }
}, 4); //Interval between loading tweets
...
NOTE
The solution is based on the following assumptions:
You are manipulating the DOM with the configureTweetSentiment and configureTweetView methods.
Ideally, the solution above is not the best one. Instead, you should build all the HTML in JavaScript first and append the final HTML string to a div at the end. You would see a drastic change in performance (Seriously!). A sketch of that approach follows this list.
You don't want to use web workers because they are not supported in old browsers. If that's not the case and you are not manipulating the DOM in the configure methods, then web workers are the way to go for data-intensive operations.
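For example, a minimal sketch of the build-once, append-once approach. This assumes the mustache plugin exposes a $.Mustache.render method that returns a string; if yours doesn't, any template-to-string call works the same way:
var html = '';
for (var i = 0; i < jsonTweets.length; i++) {
    // render each tweet to a string instead of touching the DOM per tweet
    html += $.Mustache.render('tweets_tweet', {
        tweet: jsonTweets[i],
        optional_id: '_user_tweets'
    });
}
$('#all-user-tweets').append(html); // a single DOM update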

Efficient scrolling of piped output in a browser window

I have a custom browser plugin (built with FireBreath) that invokes a local process on a user's machine and pipes stdout back to the browser. To do this I run the process through a popen() call, and as I read data from the pipe I fire a JSAPI event and send it back to the browser.
In the browser I append the output to a div as pre-formatted text and tell the div to scroll to the bottom.
Code in the browser plugin:
FILE* in;
if (!(in = _popen(command_string, "r")))
{
    return NULL;
}
while (fgets(buff, sizeof(buff), in) != NULL)
{
    send_output_to_browser(buff);
}
HTML & Javascript/jQuery:
<pre id="sync_status_window" style="overflow:scroll">
<span id="sync_output"></span>
</pre>
var onPluginTextReceived = function (text)
{
    $('#sync_output').append(text);
    var objDiv = document.getElementById('sync_status_window');
    objDiv.scrollTop = objDiv.scrollHeight;
}
This method works for the browsers I need it to (this is a limited use internal tool), but it's frustratingly laggy. My process usually finishes about 30-60 seconds before the output window finishes scrolling. So, how do I make this more efficient? Is there a better way to pipe this text back to the browser?
There are two optimizations I see potential in:
Keep a reference to your pre and span; you keep repeating the DOM tree search, which is quite costly.
Chunk up the output, either on the C side (preferable) or on the JS side.
A quick hack (without removing the dependency on jQuery, which should be done) could look like:
//Higher or global scope
var pluginBuffer = [];
var pluginTimeout = false;
var sync_status_window = document.getElementById('sync_status_window');

function onPluginTextReceived(text)
{
    pluginBuffer[pluginBuffer.length] = text;
    // batch DOM updates: flush at most every 333 ms
    if (!pluginTimeout) pluginTimeout = window.setTimeout(onPluginTimer, 333);
}

function onPluginTimer()
{
    var txt = pluginBuffer.join('');
    pluginBuffer = [];
    pluginTimeout = false;
    $('#sync_output').append(txt);
    sync_status_window.scrollTop = sync_status_window.scrollHeight;
}
Adapt it to your needs; I chose 333 ms for 3 updates per second.

IE stop script warning

Only in IE, I get a warning when loading my site containing JavaScript, saying that it's causing the page to run slowly (and asking if I want to stop it).
I've seen other posts about this, and I've looked for any long-running code or infinite loops, etc. The weird thing is, when I select 'No' (to not terminate the script), the page immediately loads properly. It's almost like this warning comes up right before the page is done loading. Has anybody experienced this, or know why this might be happening?
IE has its own way of making your life impossible.
Just deactivate the warning; you can research it further later if that becomes necessary.
This article might help you determine why IE is giving such a warning.
Adapted from http://www.codeproject.com/Tips/406739/Preventing-Stop-running-this-script-in-Browsers
// Magic IE<9 workaround for irritating "Stop running this script?" dialog.
RepeatOperation = function(anonymousOperation, whenToYield){
    var count = 0
    return function(){
        if (++count >= whenToYield){
            count = 0
            setTimeout(function(){anonymousOperation()}, 100)
        }
        else anonymousOperation()
    }
}

// How to implement the workaround:
//for (var i in retailers){ // Too big for IE<9! Use magic workaround:
var i = 0; var noInArray = retailers.length
var ro = new RepeatOperation(function(){
    // <<Your loop body here, using return instead of continue.>>
    if (++i < noInArray) ro()
    else alert("Loop finished.")
}, 200) // Run this in batches of n. 500 is too much for IE<9.
ro()

Getting functions from another script in JS

I load this JS code from a bookmarklet:
function in_array(a, b)
{
    for (i in b)
        if (b[i] == a)
            return true;
    return false;
}

function include_dom(script_filename) {
    var html_doc = document.getElementsByTagName('head').item(0);
    var js = document.createElement('script');
    js.setAttribute('language', 'javascript');
    js.setAttribute('type', 'text/javascript');
    js.setAttribute('src', script_filename);
    html_doc.appendChild(js);
    return false;
}

var itemname = '';
var currency = '';
var price = '';
var supported = new Array('www.amazon.com');
var domain = document.domain;
if (in_array(domain, supported))
{
    include_dom('http://localhost/bklts/parse/'+domain+'.js');
    alert(getName());
}
[...]
Note that the getName() function is in http://localhost/bklts/parse/www.amazon.com.js. This code works only the -second- time I click the bookmarklet (the function doesn't seem to get loaded until after the alert()).
Oddly enough, if I change the code to:
if (in_array(domain, supported))
{
    include_dom('http://localhost/bklts/parse/'+domain+'.js');
    alert('hello there');
    alert(getName());
}
I get both alerts on the first click, and the rest of the script functions. How can I make the script work on the first click of the bookmarklet without spurious alerts?
Thanks!
-Mala
Adding a <script> tag through DHTML makes the script load asynchronously, which means the browser will start loading it but won't wait for it to finish before running the rest of the script.
You can handle events on the tag object to find out when the script is loaded. Here is a piece of sample code I use that seems to work fine in all browsers, although I'm sure there's a better way of achieving this; I hope it points you in the right direction:
Don't forget to change tag to your object holding the <script> element, fnLoader to a function to call when the script is loaded, and fnError to a function to call if loading the script fails.
Bear in mind that those functions will be called at a later time, so they (like tag) must be available then (a closure would normally take care of that).
tag.onload = fnLoader;
tag.onerror = fnError;
tag.onreadystatechange = function() {
    if (!window.opera && typeof tag.readyState == "string") {
        /* Disgusting IE fix */
        if (tag.readyState == "complete" || tag.readyState == "loaded") {
            fnLoader();
        } else if (tag.readyState != "loading") {
            fnError();
        }
    } else if (tag.readyState == 4) {
        if (tag.status == 200) {
            fnLoader();
        }
        else {
            fnError();
        }
    }
};
It sounds like the loading of the external script (http://localhost/bklts/parse/www.amazon.com.js) isn't blocking execution until it is loaded. A simple timeout might be enough to give the browser a chance to update the DOM and then immediately queue up the execution of your next block of logic:
//...
if (in_array(domain, supported))
{
    include_dom('http://localhost/bklts/parse/'+domain+'.js');
    setTimeout(function() {
        alert(getName());
    }, 0);
}
//...
In my experience, if zero doesn't work for the timeout amount, then you have a real race condition. Making the timeout longer (e.g. 10-100) may fix it for some situations but you get into a risky situation if you need this to always work. If zero works for you, then it should be pretty solid. If not, then you may need to push more (all?) of your remaining code to be executed into the external script.
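Another option, sketched below, is to give include_dom itself a completion callback, combining the question's helper with the load-event handling from the earlier answer. The double-firing guard is a common old-IE precaution; treat this as an untested sketch:
function include_dom(script_filename, onloaded) {
    var head = document.getElementsByTagName('head').item(0);
    var js = document.createElement('script');
    js.setAttribute('type', 'text/javascript');
    js.setAttribute('src', script_filename);
    var done = false;
    js.onload = js.onreadystatechange = function() {
        if (!done && (!js.readyState || js.readyState == 'loaded' || js.readyState == 'complete')) {
            done = true; // guard against onload and onreadystatechange both firing
            onloaded();
        }
    };
    head.appendChild(js);
}

// Usage:
include_dom('http://localhost/bklts/parse/' + domain + '.js', function() {
    alert(getName());
});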
The best way I could get working: Don't.
Since I was calling the JS from a small loader bookmarklet anyway (which just tacks the script on to the page you're looking at) I modified the bookmarklet to point the src to a php script which outputs the JS code, taking the document.domain as a parameter. As such, I just used php to include the external code.
Hope that helps someone. Since it's not really an answer to my question, I won't mark this as the accepted answer. If someone has a better way, I'd love to know it, but I'll be leaving my code as is:
bookmarklet:
javascript:(function(){document.body.appendChild(document.createElement('script')).src='http://localhost/bklts/div.php?d='+escape(document.domain);})();
localhost/bklts/div.php:
<?php
print("
// JS code
");
$supported = array("www.amazon.com", "www.amazon.co.uk");
$domain = @$_GET['d'];
if (in_array($domain, $supported))
    include("parse/$domain.js");
print("
// more JS code
");
?>

How to stop intense Javascript loop from freezing the browser

I'm using Javascript to parse an XML file with about 3,500 elements. I'm using a jQuery "each" function, but I could use any form of loop.
The problem is that the browser freezes for a few seconds while the loop executes. What's the best way to stop freezing the browser without slowing the code down too much?
$(xmlDoc).find("Object").each(function() {
//Processing here
});
I would ditch the "each" function in favour of a for loop since it is faster. I would also add some waits using the "setTimeout" but only every so often and only if needed. You don't want to wait for 5ms each time because then processing 3500 records would take approx 17.5 seconds.
Below is an example using a for loop that processes 100 records (you can tweak that) at 5 ms intervals which gives a 175 ms overhead.
var xmlElements = $(xmlDoc).find('Object');
var length = xmlElements.length;
var index = 0;
var process = function() {
    for (; index < length; index++) {
        var toProcess = xmlElements[index];
        // Perform xml processing
        if (index % 100 == 99 && index + 1 < length) {
            // yield after every 100th record; the timeout resumes the loop
            index++;
            setTimeout(process, 5);
            return;
        }
    }
};
process();
I would also benchmark the different parts of the XML processing to see if there is a bottleneck somewhere that can be fixed. You can benchmark in Firefox using Firebug's profiler and by writing out to the console like this:
// start benchmark
var t = new Date();
// some xml processing
console.log("Time to process: " + (new Date() - t) + "ms");
Hope this helps.
Set a timeout between processing chunks to prevent the loop from eating up all the browser resources. In total it would only take a few seconds to process and loop through everything, which is not unreasonable for 3,500 elements.
// .get() turns the jQuery set into a plain array, so shift() is available
var xmlElements = $(xmlDoc).find('Object').get();
var processing = function() {
    var element = xmlElements.shift();
    //process element;
    if (xmlElements.length > 0) {
        setTimeout(processing, 5);
    }
}
processing();
I'd consider converting the 3,500 elements from XML to JSON server-side, or better yet, uploading it to the server already converted, so that it's native to JS from the get-go.
This would minimize your load and probably make the file size smaller too.
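As a rough sketch of what the consuming side could look like once the data is served as JSON (the /objects.json endpoint is hypothetical):
$.getJSON('/objects.json', function(objects) {
    for (var i = 0; i < objects.length; i++) {
        // objects[i] is already a plain JS object -- no XML traversal needed
    }
});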
You can call setTimeout() with a duration of zero and it will yield as desired.
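That is, something along these lines (both function names are illustrative):
doSomeWork();               // one slice of the heavy job
setTimeout(function() {
    continueProcessing();   // the next slice runs after the browser catches up
}, 0);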
Long loops without freezing the browser is possible with the Turboid framework. With it, you can write code like:
loop(function(){
// Do something...
}, number_of_iterations, number_of_milliseconds);
More details in this turboid.net article: Real loops in Javascript
Javascript is single-threaded, so aside from setTimeout, there's not much you can do. If using Google Gears is an option for your site, they provide the ability to run javascript in a true background thread.
You could use the HTML5 workers API, but that will only work on Firefox 3.1 and Safari 4 betas atm.
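A minimal sketch of the worker approach (worker.js, heavyProcessing, and rawXmlText are placeholder names; note that workers cannot touch the DOM, so only the data crunching belongs there):
// main page:
var worker = new Worker('worker.js');
worker.onmessage = function(e) {
    // e.data holds the processed results; update the DOM from here
};
worker.postMessage(rawXmlText); // hand the raw data to the worker

// worker.js:
// onmessage = function(e) {
//     postMessage(heavyProcessing(e.data));
// };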
I had the same problem, which was happening when the user refreshed the page successively. The cause was two nested for loops which ran more than 52,000 times. This problem was harsher in Firefox 24 than in Chrome 29, since Firefox would crash sooner (around 2000 ms sooner than Chrome). What I did, and it worked, was to use "for" loops instead of each, and then refactor the code so that the whole loop array is divided into 4 separate calls whose results are merged into one. This solution has proven to work.
Something like this:
var entitiesToLoop = ["..."]; // Mainly a big array

// declared before use, since function expressions are not hoisted
var loopForSubset = function (startIndex, endIndex) {
    for (var i = startIndex; i < endIndex; i++) {
        //Do your stuff as usual here
    }
};

loopForSubset(0, firstInterval);
loopForSubset(firstInterval, secondInterval);
...
The other solution which also worked for me was based on the same idea, implemented with the Worker API from HTML5. Use the same concept in workers, since they keep your browser from freezing by running in the background of your main thread. If just applying this with the Worker API did not work, place each instance of loopForSubset in a different worker and merge the results inside the main caller of the Worker.
I mean, this might not be perfect, but it has worked. I can help with more real code chunks if someone still thinks this might suit them.
You could try deferring each iteration with a zero-delay timeout:
$(xmlDoc).find("Object").each(function(index, element) {
    (function(el) {
        setTimeout(function() {
            //your stuff with el goes here
        }, 0);
    })(element);
});
This won't harm you much ;)
As a modification of tj111's answer, here is the full usable code:
//add pop and shift functions to the jQuery library. Put this somewhere in your code.
//pop is not used here, but you can use it in other parts of your code.
(function( $ ) {
    $.fn.pop = function() {
        var top = this.get(-1);
        this.splice(this.length - 1, 1);
        return top;
    };
    $.fn.shift = function() {
        var bottom = this.get(0);
        this.splice(0, 1);
        return bottom;
    };
})( jQuery );

//the core of the code:
var $divs = $('body').find('div'); //.each();
var s = $divs.length;
var mIndex = 0;
var process = function() {
    var $div = $divs.first(); // local name differs from $divs to avoid shadowing
    //here your own code.

    //progress bar:
    mIndex++;
    // e.g.: progressBar(mIndex/s*100., $pb0);

    //start new iteration.
    $divs.shift();
    if ($divs.size() > 0) {
        setTimeout(process, 5);
    } else {
        //when calculations are finished.
        console.log('finished');
    }
}
process();
