Some of the scripts that I run take a long time, and users might get concerned that a script has stopped working if they can't see the current status/step. I have added a spinner to the Sidebar to at least indicate that the script has started running, but I would like to do more than that.
Ideally, I would be able to update the Sidebar contents directly from GAS, but I gather that is not possible because of sandboxing. I have seen other questions and answers that discuss using success handlers in a daisy chain like this:
function uploadActivities() {
  google.script.run.withSuccessHandler(onSuccess).activities_upload();
}

function onSuccess(lastStatus) {
  $('#codestatus').text(lastStatus);
  google.script.run.step_two();
}
It is a hack: it would require me to split the code into smaller steps and pass values to the UI (values that don't belong in the UI) and then back to the code. I really don't like that approach, and maintenance could be a bear.
I have tried creating a var in GAS and updating that value as the code progresses. However, I can't find a way to get the UI to periodically check until the code execution is complete AND to successfully update the UI after each step.
Here is the code I have created:
function uploadActivities() {
  google.script.run.activities_upload();
  getStatus();
}

function getStatus() {
  var isActive = true;
  while (isActive) {
    var lastStatus = google.script.run.getStatus();
    $('#codestatus').text(lastStatus);
    if (lastStatus === 'Complete') { isActive = false; }
  }
}
In GAS I use this code:
var codeStatus = 'start';

function getStatus() {
  return codeStatus;
}

function activities_upload() {
  codeStatus = 'Started Execution';
  ...
  codeStatus = 'Extracting Values';
  ...
  codeStatus = 'Uploading Activities';
  ...
  codeStatus = 'Complete';
}
It runs the required code, and even updates the #codestatus div with the first value, but it never gets any value beyond the first one. Additionally, it creates an endless loop if there is an error during execution, so that isn't good either.
Is there a good, efficient, and safe way to complete this approach? Or, is there a better way to notify the user of the code execution status so they don't get worried if it takes a while, and can tell if there has been an issue?
I have struggled with this for some time. Unfortunately, I don't have a good fix for your approach, but I can show what I finally did and it seems to be working.
First, create an easy way to send a toast to your users.
function updateStatus_(alert, title) {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var title_ = title ? title : "";  // default to an empty title when none is supplied
  ss.toast(alert, title_);
}
Second, as required, use the toast to update the user.
function activities_upload() {
  updateStatus_('Started Execution');
  ...
  updateStatus_('Extracting Values');
  ...
  updateStatus_('Uploading Activities');
  ...
  updateStatus_('Complete');
}
This will alert the user with a temporary message as the code progresses, without requiring the user to dismiss an alert.
Please note that if the steps progress rapidly the user will see the toast flash on the screen only to be quickly replaced by the next toast. So, make sure you don't have too many throughout your execution.
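If a step is likely to flash by too quickly, you can also pass an explicit display time (in seconds) as the third argument to toast. A minimal sketch of that variation:

function updateStatus_(alert, title, seconds) {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  // Show the toast for a fixed number of seconds (defaulting to 5 here)
  // so even fast steps stay on screen long enough to read.
  ss.toast(alert, title ? title : "", seconds ? seconds : 5);
}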
To see the problem in action, see this jsbin. Clicking on the button triggers the buttonHandler(), which looks like this:
function buttonHandler() {
  var elm = document.getElementById("progress");
  elm.innerHTML = "thinking";
  longPrimeCalc();
}
You would expect that this code changes the text of the div to "thinking", and then runs longPrimeCalc(), an arithmetic function that takes a few seconds to complete. However, this is not what happens. Instead, "longPrimeCalc" completes first, and then the text is updated to "thinking" after it's done running, as if the order of the two lines of code were reversed.
It appears that the browser does not run "innerHTML" code synchronously, but instead creates a new thread for it that executes at its own leisure.
My questions:
What is happening under the hood that is leading to this behavior?
How can I get the browser to behave the way I would expect, that is, force it to update the "innerHTML" before it executes "longPrimeCalc()"?
I tested this in the latest version of Chrome.
Your surmise is incorrect. The .innerHTML update does complete synchronously (and the browser most definitely does not create a new thread). The browser simply does not bother to update the window until your code is finished. If you were to interrogate the DOM in some way that required the view to be updated, then the browser would have no choice.
For example, right after you set the innerHTML, add this line:
var sz = elm.clientHeight; // whoops that's not it; hold on ...
edit — I might figure out a way to trick the browser, or it might be impossible; it's certainly true that launching your long computation in a separate event loop will make it work:
setTimeout(longPrimeCalc, 10); // not 0, at least not with Firefox!
A good lesson here is that browsers try hard not to do pointless re-flows of the page layout. If your code had gone off on a prime number vacation and then come back and updated the innerHTML again, the browser would have saved some pointless work. Even if it's not painting an updated layout, browsers still have to figure out what's happened to the DOM in order to provide consistent answers when things like element sizes and positions are interrogated.
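You can convince yourself of that by reading the DOM back right after the assignment; a minimal sketch (the logged value already reflects the change even though the screen has not repainted yet):

function buttonHandler() {
  var elm = document.getElementById("progress");
  elm.innerHTML = "thinking";
  console.log(elm.innerHTML); // logs "thinking" immediately: the DOM node is already updated
  longPrimeCalc();            // ...but the repaint only happens after this returns
}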
I think the way it works is that the currently running code completes first, then all the page updates are done. In this case, calling longPrimeCalc causes more code to be executed, and only when it is done does the page update change.
To fix this you have to have the currently running code terminate, then start the calculation in another context. You can do that with setTimeout. I'm not sure if there's any other way besides that.
Here is a jsfiddle showing the behavior. You don't have to pass a callback to longPrimeCalc, you just have to create another function which does what you want with the return value. Essentially you want to defer the calculation to another "thread" of execution. Writing the code this way makes it obvious what you're doing (Updated again to make it potentially nicer):
function defer(f, callback) {
  var proc = function() {
    var result = f();  // keep the result local instead of leaking a global
    if (callback) {
      callback(result);
    }
  };
  setTimeout(proc, 50);
}
function buttonHandler() {
  var elm = document.getElementById("progress");
  elm.innerHTML = "thinking...";
  defer(longPrimeCalc, function (isPrime) {
    if (isPrime) {
      elm.innerHTML = "It was a prime!";
    } else {
      elm.innerHTML = "It was not a prime =(";
    }
  });
}
I have a simple html page containing a large table with more than 2000 rows. I have jQuery code written for searching and sorting in that table. It takes quite some time for searching and sorting (which is understandable).
What I want is to have a screen blocker in place while the script is searching or sorting the table. This behavior is observable during AJAX calls on many websites, and can be achieved with jQuery's global AJAX events (such as ajaxStart and ajaxComplete).
Is there any such method that can be used to put up a screen blocker for a long-running script? If not, what is the alternative?
I would recommend breaking it up and iterating with setTimeout.
For example, instead of:
function example1() {
  for (var i = 0; i < 1000; i++) {
    // SOME CODE
  }
}
You could write:
function example2() {
  var i = 0;
  helper();
  function helper() {
    // SOME CODE
    if (++i < 1000) {
      setTimeout(helper, 0);
    }
  }
}
You don't have to have every iteration in a different callback. You could convert 1000 iterations in one function call into 10 iterations per function call across 100 function calls, or whatever would be most suitable in your case. The idea is to not block the user interface for so long that the user will notice.
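For instance, a sketch of that batched variant (the batch size of 10 is arbitrary):

function example3() {
  var i = 0;
  helper();
  function helper() {
    // Run up to 10 iterations per timeslice, then yield back to the browser
    var end = Math.min(i + 10, 1000);
    for (; i < end; i++) {
      // SOME CODE
    }
    if (i < 1000) {
      setTimeout(helper, 0);
    }
  }
}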
Another idea would be to use Web Workers if you can, but this will not work in older browsers (which may or may not be a problem for you, for instance if you're writing a browser extension or you otherwise know what browsers your users will use).
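A minimal sketch of the Web Worker idea (the worker file name and the shape of the row data are illustrative); note that the worker has no DOM access, so you hand it the raw data and rebuild the table when it replies:

// main page
var worker = new Worker('sort-worker.js');  // hypothetical worker script
worker.onmessage = function (e) {
  rebuildTable(e.data);                     // your own function that re-renders the rows
};
worker.postMessage(rowData);                // rowData: a plain array extracted from the table

// sort-worker.js
onmessage = function (e) {
  var sorted = e.data.sort(function (a, b) {
    return a.name < b.name ? -1 : 1;        // sort key is illustrative
  });
  postMessage(sorted);
};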
If you do it the way you explained in your question then you will make the browser completely unresponsive during your calculations and you will most likely trigger a "slow script - do you want to kill it?" kind of warning.
jQuery blockUI will block elements or the page and is very customizable.
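For example, a sketch of how it might be wired up (the message markup and function names are illustrative); the small setTimeout gives the browser a chance to actually paint the blocker before the long-running work starts:

$('#sortButton').click(function () {
  $.blockUI({ message: '<h3>Sorting...</h3>' });  // cover the page
  setTimeout(function () {
    sortTable();     // your long-running sort/search routine
    $.unblockUI();   // remove the blocker when done
  }, 50);            // yield first so the blocker gets rendered
});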
When looking to improve a page's performance, one technique I haven't heard mentioned before is using setTimeout to prevent javascript from holding up the rendering of a page.
For example, imagine we have a particularly time-consuming piece of jQuery inline with the html:
$('input').click(function () {
  // Do stuff
});
If this code is inline, we are holding up the perceived completion of the page while the piece of jQuery is busy attaching a click handler to every input on the page.
Would it be wise to spawn a new thread instead:
setTimeout(function() {
  $('input').click(function () {
    // Do stuff
  });
}, 100);
The only downside I can see is that there is now a greater chance the user clicks on an element before the click handler is attached. However, this risk may be acceptable and we have a degree of this risk anyway, even without setTimeout.
Am I right, or am I wrong?
The actual technique is to use setTimeout with a time of 0.
This works because JavaScript is single-threaded. A timeout doesn't cause the browser to spawn another thread, nor does it guarantee that the code will execute in the specified time. However, the code will be executed when both:
The specified time has elapsed.
Execution control is handed back to the browser.
Therefore calling setTimeout with a time of 0 can be considered as temporarily yielding to the browser.
This means if you have long running code, you can simulate multi-threading by regularly yielding with a setTimeout. Your code may look something like this:
var batches = [...]; // Some array
var currentBatch = 0;

// Start long-running code, whenever browser is ready
setTimeout(doBatch, 0);

function doBatch() {
  if (currentBatch < batches.length) {
    // Do stuff with batches[currentBatch]
    currentBatch++;
    setTimeout(doBatch, 0);
  }
}
Note: While it's useful to know this technique in some scenarios, I highly doubt you will need it in the situation you describe (assigning event handlers on DOM ready). If performance is indeed an issue, I would suggest looking into ways of improving the real performance by tweaking the selector.
For example if you only have one form on the page which contains <input>s, then give the <form> an ID, and use $('#someId input').
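If the cost really is in attaching thousands of individual handlers, another option (a sketch, assuming a reasonably recent jQuery and a form with id someId) is a single delegated handler, which also covers inputs added to the DOM later:

// One handler on the form serves every <input> inside it
$('#someId').on('click', 'input', function () {
  // Do stuff
});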
setTimeout() can be used to improve the "perceived" load time -- but not the way you've shown it. Using setTimeout() does not cause your code to run in a separate thread. Instead setTimeout() simply yields the thread back to the browser for (approximately) the specified amount of time. When it's time for your function to run, the browser will yield the thread back to the javascript engine. In javascript there is never more than one thread (unless you're using something like "Web Workers").
So, if you want to use setTimeout() to improve performance during a computation-intensive task, you must break that task into smaller chunks, and execute them in-order, chaining them together using setTimeout(). Something like this works well:
function runTasks(tasks, idx) {
  idx = idx || 0;
  tasks[idx++]();
  if (idx < tasks.length) {
    setTimeout(function() { runTasks(tasks, idx); }, 1);
  }
}

runTasks([
  function() {
    /* do first part */
  },
  function() {
    /* do next part */
  },
  function() {
    /* do final part */
  }
]);
Note:
The functions are executed in order. There can be as many as you need.
When the first function returns, the next one is called via setTimeout().
The timeout value I've used is 1. This is sufficient to cause a yield, and the browser will take the thread if it needs it, or allow the next task to proceed if there's time. You can experiment with other values if you feel the need, but usually 1 is what you want for these purposes.
You are correct, there is a greater chance of a "missed" click, but with a low timeout value it's pretty unlikely.
I've only found rather complicated answers involving classes, event handlers and callbacks (which seem to me to be a somewhat sledgehammer approach). I think callbacks may be useful, but I can't seem to apply these in the simplest context. See this example:
<html>
<head>
  <script type="text/javascript">
    function myfunction() {
      longfunctionfirst();
      shortfunctionsecond();
    }

    function longfunctionfirst() {
      setTimeout('alert("first function finished");', 3000);
    }

    function shortfunctionsecond() {
      setTimeout('alert("second function finished");', 200);
    }
  </script>
</head>
<body>
  <a href="#" onclick="myfunction()">Call my function</a>
</body>
</html>
In this, the second function completes before the first function; what is the simplest way (or is there one?) to force the second function to delay execution until the first function is complete?
---Edit---
So that was a rubbish example but thanks to David Hedlund I see with this new example that it is indeed synchronous (along with crashing my browser in the test process!):
<html>
<head>
  <script type="text/javascript">
    function myfunction() {
      longfunctionfirst();
      shortfunctionsecond();
    }

    function longfunctionfirst() {
      var j = 10000;
      for (var i = 0; i < j; i++) {
        document.body.innerHTML += i;
      }
      alert("first function finished");
    }

    function shortfunctionsecond() {
      var j = 10;
      for (var i = 0; i < j; i++) {
        document.body.innerHTML += i;
      }
      alert("second function finished");
    }
  </script>
</head>
<body>
  <a href="#" onclick="myfunction()">Call my function</a>
</body>
</html>
As my ACTUAL issue was with jQuery and IE I will have to post a separate question about that if I can't get anywhere myself!
Well, setTimeout, per its definition, will not hold up the thread. This is desirable, because if it did, it would freeze the entire UI for the time it was waiting. If you really need to use setTimeout, then you should be using callback functions:
function myfunction() {
  longfunctionfirst(shortfunctionsecond);
}

function longfunctionfirst(callback) {
  setTimeout(function() {
    alert('first function finished');
    if (typeof callback == 'function')
      callback();
  }, 3000);
}

function shortfunctionsecond() {
  setTimeout('alert("second function finished");', 200);
}
If you are not using setTimeout, but are just having functions that execute for very long, and were using setTimeout to simulate that, then your functions would actually be synchronous, and you would not have this problem at all. It should be noted, though, that AJAX requests are asynchronous, and will, just as setTimeout, not hold up the UI thread until it has finished. With AJAX, as with setTimeout, you'll have to work with callbacks.
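For instance, a sketch of the same chain where the first step is an AJAX request instead of a timer (assumes jQuery is available; the URL is illustrative):

function longfunctionfirst(callback) {
  $.ajax({
    url: '/some/slow/endpoint',  // illustrative URL
    success: function (data) {
      alert('first function finished');
      if (typeof callback == 'function')
        callback();              // only now start the second step
    }
  });
}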
I am back to this question after all this time because it took me that long to find what I think is a clean solution:
The only way I know of to force sequential JavaScript execution is to use promises.
There are exhaustive explanations of promises at Promises/A and Promises/A+.
The only library implementing promises that I know is jQuery, so here is how I would solve the question using jQuery promises:
<html>
<head>
  <script src="http://code.jquery.com/jquery-1.9.1.min.js"></script>
  <script type="text/javascript">
    function myfunction() {
      promise = longfunctionfirst().then(shortfunctionsecond);
    }

    function longfunctionfirst() {
      d = new $.Deferred();
      setTimeout('alert("first function finished");d.resolve()', 3000);
      return d.promise();
    }

    function shortfunctionsecond() {
      d = new $.Deferred();
      setTimeout('alert("second function finished");d.resolve()', 200);
      return d.promise();
    }
  </script>
</head>
<body>
  <a href="#" onclick="myfunction()">Call my function</a>
</body>
</html>
By implementing a promise and chaining the functions with .then(), you ensure that the second function will be executed only after the first one has finished.
It is the call to d.resolve() in longfunctionfirst() that gives the signal to start the next function.
Technically, shortfunctionsecond() does not need to create a deferred and return a promise, but I fell in love with promises and tend to implement everything with promises, sorry.
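For what it's worth, a slightly tidied sketch of the same idea, keeping the deferreds local and passing functions (rather than strings) to setTimeout:

function longfunctionfirst() {
  var d = $.Deferred();
  setTimeout(function () {
    alert('first function finished');
    d.resolve();               // signal that the next step may start
  }, 3000);
  return d.promise();
}

function shortfunctionsecond() {
  var d = $.Deferred();
  setTimeout(function () {
    alert('second function finished');
    d.resolve();
  }, 200);
  return d.promise();
}

longfunctionfirst().then(shortfunctionsecond);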
I am an old hand at programming who recently came back to my old passion, and I am struggling to fit into this object-oriented, event-driven, bright new world. While I see the advantages of the non-sequential behavior of JavaScript, there are times when it really gets in the way of simplicity and reusability.
A simple example I worked on was to take a photo (on a mobile phone programmed in JavaScript, HTML, PhoneGap, ...), resize it, and upload it to a web site.
The ideal sequence is :
Take a photo
Load the photo in an img element
Resize the picture (Using Pixastic)
Upload it to a web site
Inform the user of success or failure
All this would be a very simple sequential program if each step returned control to the next one when it finished, but in reality:
Taking a photo is async, so the program attempts to load it into the img element before it exists
Loading the photo is async, so the resize starts before the img is fully loaded
Resizing is async, so the upload to the web site starts before the picture is completely resized
Uploading to the web site is async, so the program continues before the photo is completely uploaded
And by the way, 4 of the 5 steps involve callback functions.
My solution thus is to nest each step in the previous one and use .onload and other similar stratagems. It looks something like this:
takeAPhoto(function takeaphotocallback(photo) {
  photo.onload = function () {
    resizePhoto(photo, function resizePhotoCallback(photo) {
      uploadPhoto(photo, function uploadPhotoCallback(status) {
        informUserOnOutcome();
      });
    });
  };
  loadPhoto(photo);
});
(I hope I did not make too many mistakes boiling the code down to its essentials; the real thing is just too distracting.)
This is, I believe, a perfect example where async is no good and sync is good, because contrary to UI event handling we must have each step finish before the next one executes. But the code is a Russian-doll construction: it is confusing and unreadable, code reusability is hard to achieve because of all the nesting, and it is simply difficult to get all the needed parameters down to the innermost function without passing them through each container in turn or using evil global variables. I would also have loved for all this code to give me a return code, but the first container will have finished well before that return code is available.
Now, to go back to Tom's initial question: what would be the smart, easy-to-read, easy-to-reuse solution to what would have been a very simple program 15 years ago using, let's say, C and a dumb electronic board?
The requirement is in fact so simple that I have the impression I must be missing a fundamental understanding of JavaScript and modern programming. Surely technology is meant to fuel productivity, right?
Thanks for your patience
Raymond the Dinosaur ;-)
In your example, the first function does actually complete before the second function is started. setTimeout does not hold execution of the function until the timeout is reached, it will simply start a timer in the background and execute your alert statement after the specified time.
There is no native way of doing a "sleep" in JavaScript. You could write a loop that checks for the time, but that will put a lot of strain on the client. You could also do the Synchronous AJAX call, as emacsian described, but that will put extra load on your server. Your best bet is really to avoid this, which should be simple enough for most cases once you understand how setTimeout works.
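To illustrate what that warning means, a minimal sketch of such a busy-wait loop (shown only as the anti-pattern to avoid; it blocks the UI thread for the entire duration):

function busySleep(ms) {
  var end = new Date().getTime() + ms;
  while (new Date().getTime() < end) {
    // spin: nothing else (clicks, repaints, timers) can run while this loop executes
  }
}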
I had the same problem, this is my solution:
var functionsToCall = new Array();

function f1() {
  $.ajax({
    type: "POST",
    url: "/some/url",
    success: function(data) {
      doSomethingWith(data);
      // When done, call the next function..
      callAFunction("parameter");
    }
  });
}

function f2() {
  /*...*/
  callAFunction("parameter2");
}

function f3() {
  /*...*/
  callAFunction("parameter3");
}

function f4() {
  /*...*/
  callAFunction("parameter4");
}

function f5() {
  /*...*/
  callAFunction("parameter5");
}

function f6() {
  /*...*/
  callAFunction("parameter6");
}

function f7() {
  /*...*/
  callAFunction("parameter7");
}

function f8() {
  /*...*/
  callAFunction("parameter8");
}

function f9() {
  /*...*/
  callAFunction("parameter9");
}

function callAllFunctionsSy(params) {
  functionsToCall.push(f1);
  functionsToCall.push(f2);
  functionsToCall.push(f3);
  functionsToCall.push(f4);
  functionsToCall.push(f5);
  functionsToCall.push(f6);
  functionsToCall.push(f7);
  functionsToCall.push(f8);
  functionsToCall.push(f9);
  functionsToCall.reverse();
  callAFunction(params);
}

function callAFunction(params) {
  if (functionsToCall.length > 0) {
    var f = functionsToCall.pop();
    f(params);
  }
}
If you don't insist on using pure JavaScript, you can write sequential code in LiveScript, and it looks pretty good. You might want to take a look at this example:
# application
do
i = 3
console.log td!, "start"
<- :lo(op) ->
console.log td!, "hi #{i}"
i--
<- wait-for \something
if i is 0
return op! # break
lo(op)
<- sleep 1500ms
<- :lo(op) ->
console.log td!, "hello #{i}"
i++
if i is 3
return op! # break
<- sleep 1000ms
lo(op)
<- sleep 0
console.log td!, "heyy"
do
a = 8
<- :lo(op) ->
console.log td!, "this runs in parallel!", a
a--
go \something
if a is 0
return op! # break
<- sleep 500ms
lo(op)
Output:
0ms : start
2ms : hi 3
3ms : this runs in parallel! 8
3ms : hi 2
505ms : this runs in parallel! 7
505ms : hi 1
1007ms : this runs in parallel! 6
1508ms : this runs in parallel! 5
2009ms : this runs in parallel! 4
2509ms : hello 0
2509ms : this runs in parallel! 3
3010ms : this runs in parallel! 2
3509ms : hello 1
3510ms : this runs in parallel! 1
4511ms : hello 2
4511ms : heyy
In JavaScript, there is no way to make the code wait. I've had this problem, and the way I did it was to do a synchronous AJAX call to the server; the server actually sleeps or does some activity before returning, and the whole time the JS waits.
Example of synchronous AJAX: http://www.hunlock.com/blogs/Snippets:_Synchronous_AJAX
I tried the callback way and could not get it to work. What you have to understand is that values are still atomic even though execution is not. For example:
alert('1'); <--- these two functions will be executed at the same time
alert('2'); <--- these two functions will be executed at the same time
but doing it like this will force us to know the order of execution:
loop = 2;
total = 0;
for (i = 0; i < loop; i++) {
  total += 1;
  if (total == loop)
    alert('2');
  else
    alert('1');
}
Another way to look at this is to daisy chain from one function to another.
Have an array of functions that is global to all your called functions, say:
var arrf = [ f_final,
             f,
             another_f,
             f_again ];
Then set up an array of integers indexing the particular 'f's you want to run, e.g.
var runorder = [1,3,2,0];
Then call an initial function with 'runorder' as a parameter, e.g.
f_start(runorder);
Then at the end of each function, just pop the index to the next 'f' to execute off the runorder array and execute it, still passing 'runorder' as a parameter but with the array reduced by one.
var nextf = runorder.shift();
arrf[nextf](runorder);  // pass the remaining order along as the parameter
Obviously this terminates in a function, say at index 0, that does not chain onto another function.
This is completely deterministic, avoiding 'timers'.
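Putting the pieces together, a minimal runnable sketch of that approach (the function bodies are placeholders, and the callNext helper is a name introduced here for the shared shift-and-call step):

var arrf = [ f_final, f, another_f, f_again ];  // index 0 terminates the chain

function f_start(runorder) {
  callNext(runorder);
}

function callNext(runorder) {
  var nextf = runorder.shift();  // take the next index off the front
  arrf[nextf](runorder);         // call it, passing the remaining order along
}

function f(runorder)         { /* do stuff */ callNext(runorder); }
function another_f(runorder) { /* do stuff */ callNext(runorder); }
function f_again(runorder)   { /* do stuff */ callNext(runorder); }
function f_final(runorder)   { /* do stuff */ /* index 0: no further chaining */ }

f_start([1, 3, 2, 0]);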
Put your code in a string, then iterate over it with eval, using setTimeout and recursion to continue with the remaining lines. No doubt I'll refine this or just throw it out if it doesn't hit the mark. My intention is to use it to simulate really, really basic user testing.
The recursion and setTimeout make it sequential.
Thoughts?
var line_pos = 0;
var string = `
console.log('123');
console.log('line pos is ' + line_pos);
SLEEP
console.log('waited');
console.log('line pos is ' + line_pos);
SLEEP
SLEEP
console.log('Did i finish?');
`;
var lines = string.split("\n");

var r = function(line_pos) {
  for (var i = line_pos; i < lines.length; i++) {
    if (lines[i] == 'SLEEP') {
      // Pause, then resume reading from the line after the SLEEP marker
      setTimeout(function() { r(i + 1); }, 1500);
      return;
    }
    eval(lines[i]);
  }
  console.log('COMPLETED READING LINES');
  return;
};

console.log('STARTED READING LINES');
r.call(this, line_pos);
OUTPUT
STARTED READING LINES
123
line pos is 0
waited
line pos is 4
Did i finish?
COMPLETED READING LINES