"A Task was Canceled" running async JS on several concurrent offscreen browsers - javascript

I'm using CefSharp.OffScreen (C#) with Cef Package version 89.0.170.0.
I'm running 8 offscreen browsers concurrently to capture and render some web pages, but 7 of them throw a "Task canceled" exception when I try to run an async JS execution task at the beginning of the capture. It's as if only one of them were capable of running JS while the others wait their turn and fail miserably.
Here is the code I'm using to execute some JS:
JavascriptResponse result = await browser.MainFrame.EvaluateScriptAsync("some JS code", timeout: TimeSpan.FromMilliseconds(1000));
I tried putting a delay between each browser's capture job, and it fares a little better, but success still depends on how long each browser takes to execute its scripts, and some browsers continue to fail. I don't like this approach anyway, as it's unreliable.
Overall, despite having 8 different browser instances, it looks like there's only one JS execution engine running, stalling the other browsers.
Am I doing something wrong? Is there a way to make the browsers wait longer before canceling the task? And what makes them cancel the task in the first place?
Best regards.

Related

Protractor non angular tests do not obey waits and sleeps until there is an error in the code

I am using Protractor and Cucumber for automation tests on some non-Angular pages. I have set browser.ignoreSynchronization to true.
When I run a scenario, only the first line, which is browser.get(...), executes correctly: I can see the URL load fine. None of the following steps are executed (I don't see them run in the browser), yet the results show all green and all passed. None of the waits and sleeps in the code have any effect on execution.
However, if there is an error somewhere in the code, let's say in the last step of the scenario/step definition I have the wrong code browser.blah.something(), then I can see all the sleeps and waits being obeyed.
I don't understand what is going on! Why does this erroneous code cause Protractor to obey the timeouts? Why this weird behavior? Any ideas? Also, just wondering: why doesn't browser.blah.something() cause a compile-time error (an error before the tests start)?
Those errors are most likely syntax or type errors: things that are caught when the code is parsed, prior to execution, rather than failures in your tests.
There are a lot of reasons why the following lines might not be working; we can't say which without seeing the code.
My guess here is that the bunch of code that follows your first line consists of promises. In fact, wait (http://www.protractortest.org/#/api?view=webdriver.WebDriver.prototype.wait) itself returns a promise.
Promises run asynchronously, not synchronously, which is what you might have been expecting.
Here's a short example of what might be happening:
-> App accesses the url
-> App waits for 5 seconds (let's say this is a promise)
-> close the app
You might expect the app to access the URL, wait 5 seconds, and then close, but what actually happens is that the app accesses the URL and then immediately closes.
Why? Because the 5-second wait was only scheduled asynchronously; the main flow of the script never waited for it to resolve (JavaScript is single-threaded, but its timers and I/O are non-blocking; the event loop is worth reading about).
To counter this, you can chain them (https://javascript.info/promise-chaining) or use async/await, depending on the ES version you are targeting.
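As a rough illustration, here is a minimal sketch of the difference (the URL and the 5-second sleep are placeholders, not your actual steps):

// Unchained: get() returns a promise, but nothing waits for it,
// so the next statement runs immediately.
browser.get('http://example.com');
browser.sleep(5000); // scheduled, not awaited

// Awaited: each step runs only after the previous one resolves.
async function runScenario() {
  await browser.get('http://example.com');
  await browser.sleep(5000); // the 5 seconds actually elapse before continuing
}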
I won't delve into promises since that doesn't seem to be the target of the question, but in case promises are the reason, here's a great article to get started on them.
And to answer why browser.something() doesn't give an error: browser is actually ProtractorBrowser.prototype. I won't delve into it since it would be a long answer, but again, here's a great article.
Try doing the following:
console.log(browser)
browser.something = "abc"
console.log(browser)
The second log should show a new property, something, with the value "abc".
It is better now to use
browser.waitForAngularEnabled(false)
instead of
browser.ignoreSynchronization = true
http://www.protractortest.org/#/api?view=ProtractorBrowser.prototype.waitForAngularEnabled
Also, try putting this into beforeEach():
describe('my suite', () => {
  beforeEach(() => {
    browser.waitForAngularEnabled(false)
  })
  it('my test', () => {
    ...
  })
})
I suggest putting this into the onPrepare section of your config file, or into a beforeEach block, so it is set before all the tests run.
// Your Protractor configuration file
let conf = {
  // Some other options ...
  onPrepare: () => {
    browser.waitForAngularEnabled(false)
  }
}

Why google is using the term "Render-Blocking JavaScript"?

See:
https://developers.google.com/speed/docs/insights/BlockingJS
Google is talking there about "Render-Blocking JavaScript", but in my opinion that term is incorrect, confusing, and misleading. It almost looks like Google itself does not understand it?
The point is that JavaScript execution always pauses/blocks rendering AND always pauses/blocks the HTML parser (at least in Chrome and Firefox). It even blocks them in the case of an external JS file combined with an async script tag!
So talking about removing "render-blocking JavaScript" by, for example, using async implies that there is also non-blocking JavaScript, or that "async JavaScript execution" does not block rendering, but that's not true!
The correct term would be "Render-Blocking Download(s)". With async you avoid exactly that: downloading the JS file will not pause/block rendering. But the execution will still block rendering.
One more example which confirms that Google seems not to "understand" it.
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>Test</title>
  </head>
  <body>
    Some HTML line and this is above the fold
    <script>
      // Synchronous delay of 5 seconds
      var timeWhile = new Date().getTime();
      while (new Date().getTime() - timeWhile < 5000);
    </script>
  </body>
</html>
I tested it in Firefox and Chrome, and they show (render) "Some HTML line and this is above the fold" after 5 seconds, not within 5 seconds! It looks like Google thinks that in a case like this the JavaScript will not block rendering, but as theory predicts, it does. Before the JS execution starts, all of the HTML is already in the DOM (except the closing body/html tags), but rendering has not been done yet and is paused. If Google were really aware of this, Chrome would first finish rendering before starting to execute the JavaScript.
If you take the example above and you're using:
<script src="delay.js" async></script>
or
<script src="delay.js"></script>
instead of inline JavaScript, then it can give the same result as the example above. For example:
If the preloader (which scans ahead for files to download) has already downloaded delay.js before the HTML parser reaches the script tag.
Usually, external files from Google, Facebook, et cetera are already stored in the cache, so there is no download and the browser just takes the file from cache.
In cases like that (and also with async), the result will be the same as the example above (at least in a lot of cases), because if there is no extra download time, the JavaScript execution can start before the preceding HTML has finished rendering.
So in a case like that you could even consider putting no-cache / no-store on delay.js (or even an extra delay) to make your page render faster. By forcing a download (or an extra delay) you give the browser some extra time to finish rendering the preceding HTML before executing the render-blocking JavaScript.
So I really don't understand why Google (and others) use the term "Render-Blocking JavaScript" when, from theory and from real-life examples, it looks like the wrong term and the wrong way of thinking. I see no one talking about this on the internet, which I don't understand. I know I am f**king intelligent (j/k), but it seems weird to be the only one with the thoughts above.
I work with developers on Chrome, Firefox, Safari, and Edge, and I can assure you that the people working on these aspects of the browser understand the difference between async, defer, and neither. You might find others will react more politely to your questions if you ask them politely.
Here's an image from the HTML spec on script loading and execution:
This shows that the blocking happens during fetch if a classic script has neither async or defer. It also shows that execution will always block parsing, or certainly the observable effects of parsing. This is because the DOM and JS run on the same thread.
I tested it in Firefox and Chrome, and they show (render) "Some HTML line and this is above the fold" after 5 seconds, not within 5 seconds!
Browsers may render the line above, but nothing below. Whether the line above renders depends on how the event loop's timing lines up with the screen refresh.
It looks like Google thinks that in a case like this the JavaScript will not block rendering
I'm struggling to find a reference to this. You linked to my article in an email you sent me, which specifically talks about rendering being blocked during fetching.
In cases like that (and also with async), the result will be the same
That isn't guaranteed by the spec. You're relying on retrieval from the cache being instant, which may not be the case.
in a case like that you could even consider putting no-cache / no-store on delay.js (or even an extra delay) to make your page render faster. By forcing a download (or an extra delay) you give the browser some extra time to finish rendering the preceding HTML before executing the render-blocking JavaScript.
Why not use defer in this case? It achieves the same without the bandwidth penalty and unpredictability.
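For comparison, here is a minimal sketch of the two attributes applied to the delay.js example from the question (the comments describe the spec-defined loading semantics, not measurements):

<!-- async: downloads without blocking parsing, but executes as soon as it arrives, which can still interrupt rendering -->
<script src="delay.js" async></script>
<!-- defer: downloads without blocking parsing, and executes only after the document has been parsed -->
<script src="delay.js" defer></script>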
Maarten B, I did test your code, and you are indeed correct. Whether you use async, defer, or whatever, the lines above the inline JavaScript are not being rendered. The information in Google's documentation is therefore incorrect.

Is console.log atomic?

The print statement in Python is not thread-safe. Is it safe to use console.log in Node.js concurrently?
If so, then is it also interleave-safe? That is, if multiple (even hundreds) of callbacks write to the console, can I be sure that the output won't be clobbered or interleaved?
Looking at the source code, it seems that Node.js queues concurrent attempts to write to a stream (here). On the other hand, console.log's substitution flags come from printf(3). If console.log wraps around printf, then that can interleave output on POSIX machines (as shown here).
Please show me where the async ._write(chunk, encoding, cb) is implemented inside Node.js in your response to this question.
EDIT: If it is fine to write to a stream concurrently, then why does this npm package exist?
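For context, here is my own minimal sketch of the _write hook the question refers to (it is the per-chunk method a custom Writable stream implements, not Node's internal implementation): Node dispatches one chunk at a time and queues further writes until the callback fires.

const { Writable } = require('stream');

const sink = new Writable({
  // Node calls this with one chunk at a time; the next queued write
  // is dispatched only after callback() is invoked.
  write(chunk, encoding, callback) {
    process.stdout.write(chunk, callback);
  }
});

sink.write('first\n');
sink.write('second\n'); // buffered until the first chunk's callback fires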
Everything in Node.js is basically "atomic". That's because Node.js is single-threaded: a run of synchronous code can never be interrupted by other JavaScript.
The event loop of Node.js is single-threaded, but the async calls of Node.js are multi-threaded: it uses libuv under the hood, and libuv is a library that uses a pool of threads.
link:
https://medium.com/the-node-js-collection/what-you-should-know-to-really-understand-the-node-js-event-loop-and-its-metrics-c4907b19da4c
Based on what I see in my Node.js console, it is NOT "interleave-safe".
I can see that my console output is sometimes clobbered or interleaved. Not always: when I run my program, maybe every 5th run shows interleaved output from multiple log statements.
This may of course depend on your Node.js version and the OS you are running it on. For the record, my Node.js version is v12.13.0 and the OS is Windows 10.0.19042.
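A quick way to probe this yourself is a repro sketch along these lines (the tag format and line length are arbitrary choices of mine): fire off many concurrent writes of long, tagged lines and scan the output for lines spliced into one another.

// Each task logs one long, tagged line; if console.log is not
// interleave-safe, some lines will appear spliced into others.
for (let i = 0; i < 100; i++) {
  setImmediate(() => {
    console.log(`[task ${String(i).padStart(3, '0')}] ` + 'x'.repeat(8192));
  });
}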

Game Screeps. Error during simulation. CPU limit reached

I'm going through the Screeps (http://screeps.com/) simulation. I'm stuck on the stage where I need to send a worker to harvest resources. So I put the code from the tip into the script tab; the code is:
var creep = Game.creeps.Worker1;
var sources = creep.room.find(Game.SOURCES);
creep.moveTo(sources[0]);
creep.harvest(sources[0]);
My creep started moving to the source, but then it froze and I got an error (light red text) in the console:
CPU limit reached
What do I need to do to finish this step, and why am I getting this error?
It's a limitation of the Simulation Room mode. Commit the scripts, refresh the page and start simulating again and it should work.
From the documentation:
Please remember that the exact duration of the execution of your script is limited by the CPU time available in your service plan. In case of exceeding the limit, the script execution will be stopped. The exception is the Simulation Room, where the script execution is always limited to 5 seconds.
So it seems that your creep can't find anything to harvest within five seconds.
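One common pattern that avoids redundant work each tick (a sketch assuming the current Screeps API, where actions return status codes such as ERR_NOT_IN_RANGE; the tip's code may predate these constants) is to harvest first and only pathfind when out of range:

var creep = Game.creeps.Worker1;
var sources = creep.room.find(FIND_SOURCES);

// Only spend CPU on pathfinding when the creep is out of range.
if (creep.harvest(sources[0]) == ERR_NOT_IN_RANGE) {
    creep.moveTo(sources[0]);
}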
This appears to be a bug in Internet Explorer 11 (and probably older versions too, but I don't have older versions installed). I would create a creep, and everything would be fine and dandy until sometime between moving to a source and starting to collect from it, at which point the entire thing would freeze. I think this is just lazy programming, because I was able to get everything working by switching to Chrome 39.blah.blah.blah. If you're not using Chrome, I suggest using it for this game.

Javascript debugging

I have recently started to tinker with Project Euler problems, and I try to solve them in JavaScript. Doing this, I tend to produce many endless loops, and now I'm wondering: is there any better way to terminate the script than killing the tab in Firefox or Chrome?
Also, is Firebug still considered the "best" debugger? (Myself, I can't see much difference between Firebug and the web dev tools in Safari/Chrome.)
Anyhow, have a nice Sunday!
Firebug is still my personal tool of choice.
As for a way of killing your endless loops: some browsers will prevent them from hanging the page altogether. Otherwise I still prefer just pressing Ctrl+W, but that closes the tab.
Some of the other alternatives you can look into:
Opera : Dragonfly
Safari / Chrome : Web Inspector
Opera in particular has a nice set of developer tools which I have found pretty useful (Tools -> Advanced -> Developer Tools).
If you don't want to put in code to explicitly exit, try using a conditional breakpoint. If you open Firebug's script console and right-click in the gutter next to the code, it will insert a breakpoint and offer the option of triggering it only when some condition is met. For example, if your code were this:
var intMaxIterations = 10000;
var go = function() {
  while (intMaxIterations > 0) {
    /* DO SOMETHING */
    intMaxIterations--;
  }
};
... you could either wait for all 10,000 iterations of the loop to finish, or you could put a conditional breakpoint somewhere inside the loop and specify the condition intMaxIterations < 9000. This allows the code inside the loop to run 1,000 times (well, actually 1,001 times). At that point, if you wish, you can refresh the page.
But once the script goes into an endless loop (whether by mistake or by design), there's not much I know of that can stop it if you haven't prepared for it in advance. That's usually why, when I'm doing anything heavily recursive, I place a limit on the number of times a specific block of code can run. There are lots of ways to do this. If you consider the behaviour an actual error, consider throwing it. E.g.:
var intMaxIterations = 10000;
var go = function() {
  while (true) {
    /* DO SOMETHING */
    intMaxIterations--;
    if (intMaxIterations < 0) {
      throw "Too many iterations. Halting";
    }
  }
};
Edit:
It just occurred to me that, because you are the only person using this script, web workers are the ideal solution.
The basic problem you're seeing is that when JS goes into an endless loop, it blocks the browser, leaving it unresponsive to any events you would normally use to stop the execution. Web workers are just as fast, but they leave your browser unburdened and events fire normally. The idea is that you hand off your high-demand task (in this case, your Euler problem algorithm) to a web worker JS file, which executes in its own thread and so doesn't compete with the browser's UI. The net result is that your CPU still spikes like it does now, but your browser stays fast and responsive.
It is a bit of a pest setting up a web worker the first time, but in this case you only have to do it once. If your algorithm never returns, just hit a button and kill the worker thread; see the sketch below, and Using Web Workers on MDC for more info.
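A minimal sketch of that setup (the file name euler.js, the message shape, and the stop button are my own placeholders):

// euler.js (the worker): runs the heavy loop off the main thread
self.onmessage = function(e) {
  var result = 0;
  for (var i = 0; i < e.data.limit; i++) {
    result += i; // stand-in for the real Project Euler computation
  }
  self.postMessage(result);
};

// main page: start the worker, and kill it on demand if it never returns
var worker = new Worker('euler.js');
worker.onmessage = function(e) {
  console.log('Answer: ' + e.data);
};
worker.postMessage({ limit: 1e9 });

// wired to a "stop" button; the page stays responsive, so this still fires
document.getElementById('stop').onclick = function() {
  worker.terminate();
};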
While having Firebug or the WebKit debuggers is nice, a browser otherwise seems like overhead for Project Euler stuff. Why not use a standalone runtime like Rhino or V8?
