Calling Worker inside a long loop - javascript

Sorry if this is a duplicate, but I couldn't find my exact case. I'm playing around with Web Workers and it's pretty interesting. I was testing different cases and hit this.
Main :
var myWorker = new Worker("WK.js");
for (var i = 0; i <= 1000000000; i++) {
    myWorker.postMessage(i);
}
myWorker.onmessage = function (e) {
    alert(e.data);
};
Worker :
var sum = 0;
self.onmessage = function (e) {
    if (e.data == 1000000000) { postMessage("done" + sum); }
    sum += e.data;
};
On the worker script, I'm just summing up the passed values and posting back the sum once done. The problem I face is that the above code crashes my browser (all of them) for this number (~1000000000), whereas if I move that loop into the worker script, it works fine. So is there a limit on the number of postMessage calls per duration? Please note I do know this is bad code; it's just for testing.

Your browser may not have actually crashed. The problem is that your for loop executes on the browser's main UI thread. So basically the browser is busy running your code and can't respond to any user input, i.e. it is busy, and generally this results in a 'Not responding' message in Windows. The reason you don't get this in a worker is simply because that code executes in a completely separate (non-UI) thread.

It could be a memory / garbage collection issue. You're posting a billion messages, each at least the size of an integer, which in JavaScript is stored, I think, just like all other numbers, so 8 bytes. Ignoring any extra overhead per message, this means it needs to allocate at least 8 GB of memory.
I have to admit a level of ignorance of the garbage collector, but it might not be able to keep up with a billion objects using 8 GB of memory allocated in a short amount of time.
So is there a limit for the number of postMessage calls per duration
I suspect yes, although perhaps it's not clear what you mean by "duration" here.
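For reference, here is a minimal sketch (an assumption, not the asker's exact code) of the variant the question says works: move the loop into the worker, so only two messages ever cross the thread boundary.
Main :
// Only one message out and one message back - the UI thread stays free.
var myWorker = new Worker("WK.js");
myWorker.onmessage = function (e) {
    alert(e.data);
};
myWorker.postMessage(1000000000);
Worker :
// WK.js: the loop runs entirely on the worker thread.
self.onmessage = function (e) {
    var sum = 0;
    for (var i = 0; i <= e.data; i++) {
        sum += i;
    }
    postMessage("done" + sum);
};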

Related

Is there a reliable way to measure performance of code in JavaScript?

If you look at the code below, you will see a measurement of the performance of a very simple for loop.
var res = 0;
function runs() {
    var a1 = performance.now();
    var x = 0;
    for (var i = 0; i < 10**9; i++) {
        x++;
    }
    var a2 = performance.now();
    res += (a2 - a1);
}
for (var j = 0; j < 10; j++) {
    runs();
}
console.log(`=${res/10}`);
Additionally, just for good measure, this runs 10 times and averages the results. Now, the issue is that this is not reliable: it depends heavily on your CPU, memory, and whatever other programs are running on your device.
The first time it may run in 9 s, the second in 23 s, and a subsequent call may take 8 s.
Is there a way to measure performance regardless of CPU, memory, and everything else?
I am after something that will give a relative number of FLOPs, or any other measure, such that when you compare two pieces of code you know exactly which one executes faster.
For instance, a for loop with 1005 iterations should always register as slower than one with 1000 iterations.
Note: saying FLOPS is wrong in this context, as it means floating-point operations per second.
I would like to exclude time completely; I only need FPOs. Meaning I do not care about seconds, just about reliably knowing that, regardless of the device, the same code will always take, let's say, 2000 FPOs to execute.
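To illustrate what such a time-free measure could look like (purely a hypothetical sketch; no standard facility does this for you), one could instrument the code under test with an explicit operation counter:
// Hypothetical: count abstract operations instead of elapsed time.
function countedLoop(iterations) {
    let fpo = 0;
    let x = 0;
    for (let i = 0; i < iterations; i++) {
        x++;
        fpo++; // one abstract operation per iteration
    }
    return fpo; // deterministic on any device
}
// countedLoop(1005) always returns 5 more FPOs than countedLoop(1000).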

Did I get the number of operations per second in this way?

Look at this code:
function wait(time) {
    let i = 0;
    let a = Date.now();
    let x = a + (time || 0);
    let b;
    while ((b = Date.now()) <= x) ++i;
    return i;
}
If I run it in a browser (particularly Google Chrome, but I don't think it matters) as wait(1000), the machine will obviously freeze for a second and then return the accumulated value of i.
Let it be 10,000,000 (I'm getting values close to that). This value varies every time, so let's take an average number.
Did I just get the current number of operations per second of my machine's processor?
Not at all.
What you get is the number of loop cycles completed by the Javascript process in a certain time. Each loop cycle consists of:
Creating a new Date object
Comparing two Date objects
Incrementing a Number
Incrementing the Number variable i is probably the least expensive of these, so the function is not really reporting how long the increment itself takes.
Aside from that, note that the machine is doing a lot more than running a Javascript process. You will see interference from all sorts of activity going on in the computer at the same time.
When running inside a Javascript process, you're simply too far away from the processor (in terms of software layers) to make that measurement. Beneath Javascript, there's the browser and the operating system, each of which can (and will) make decisions that affect this result.
No. You can get the number of language operations per second, though the actual number of machine operations per second on a whole processor is more complicated.
Firstly, the processor is not wholly dedicated to the browser, so it is likely switching back and forth between prioritized processes. On top of that, memory access is obscured, and the processor uses extra operations to manage memory (page flushing, etc.) that are not transparent to you at any given time. On top of that, physical constraints mean that the real clock rate of the processor is dynamic... You can see it's pretty complicated already ;)
To really calculate the number of machine operations per second, you would need to measure the clock rate of the processor and multiply it by the number of instructions per cycle the processor can perform. Again this varies, but the manufacturer's specs will likely be a good enough estimate :P.
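For a rough illustration (numbers invented for the example): a 3 GHz core that retires 4 instructions per cycle peaks at about 3,000,000,000 x 4 = 12 billion instructions per second.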
If you wanted to use a program to measure this, you'd need to somehow dedicate 100% of the processor to your program and have it run a predictable set of instructions with no other hangups (like memory management). Then you'd need to include the number of instructions it takes to load the program's instructions into the code caches. This is not really feasible, however.
As others have pointed out, this will not help you determine the number of operations the processor performs per second, for the reasons the prior answers give. I do, however, think a similar experiment could be set up to estimate the number of operations executed by the JavaScript interpreter running in your browser. For example, given a function factorial(n), an operation that runs in O(n), you could execute factorial(100) repeatedly over the course of a minute.
// A simple O(n) workload to count against.
function factorial(n) {
    let result = 1;
    for (let i = 2; i <= n; i++) {
        result *= i;
    }
    return result;
}
function test() {
    let start = Date.now();
    let end = start + 60 * 1000;
    let numberOfExecutions = 0;
    while (Date.now() < end) {
        factorial(100);
        numberOfExecutions++;
    }
    // Each execution is ~100 operations, spread over 60 seconds,
    // so this estimates operations per second.
    return numberOfExecutions * 100 / 60;
}
The idea here is that factorial is by far the most time consuming function in the code. And since factorial runs in O(n) we know factorial(100) is approximately 100 operations. Note that this will not be exact and that larger numbers will make for better approximations. Also remember that this will estimate the number of operations executed by your interpreter and not your processor.
There is a lot of truth in all the previous comments, but I want to invert the reasoning a little bit, because I believe it is easier to understand it that way.
I believe the fairest way to calculate it is with the most basic loop, not relying on any dates or functions, and calculating the values afterwards.
You will see that the smaller the loop, the bigger the initial overhead is. It takes a small, roughly fixed amount of time to start and finish each function call, but past a certain point the results all converge on a number that can reasonably be taken as how many operations per second JavaScript can run.
My example:
const oneMillion = 1_000_000;
const tenMillion = 10_000_000;
const oneHundredMillion = 100_000_000;
const oneBillion = 1_000_000_000;
const tenBillion = 10_000_000_000;
const oneHundredBillion = 100_000_000_000;
const oneTrillion = 1_000_000_000_000;

function runABunchOfTimes(times) {
    console.time('timer');
    for (let i = 0; i < times; ++i) {}
    console.timeEnd('timer');
}
I tried this on a 2020 MacBook that already had a lot of load on it, with many processes running. At the very end I take the time the console showed it took to run and divide the number of runs by it. The oneTrillion and oneBillion runs come out virtually the same; however, at oneMillion and at 1000 iterations you can see they are not as performant, due to the initial overhead of setting up the for loop in the first place.
We usually try to stay away from O(n^2) and slower functions precisely because we do not want to approach that maximum. If you were to perform a find inside a map over an array of all the cities in the world (around 10_000 according to Google; I haven't counted), you would already reach 100_000_000 iterations, and they would certainly not be as cheap as iterating over an empty body like in my example. Your code would then take minutes to run, but I am sure you are aware of this, and that is why you posted the question in the first place.
Calculating how long it would take is tricky, not only because of the above but also because you cannot predict which device will run your function. Nowadays I can open the same page on my TV, my watch, or a Raspberry Pi, and none of them would be nearly as fast as the computer I used when writing these functions. Still, if I were to benchmark a device, I would use something like the function above, since it is the simplest loop operation I can think of.

In Screeps, is CPU limit enforced in a way that allows CPU limit robust code to be written?

In Screeps, each player's usage of CPU is limited, but the documentation for this feature doesn't make the way this is enforced sufficiently clear for writing CPU limit robust code. I've considered the following four possibilities:
1. The player's cycle is never interrupted.
At one extreme, the player's memory deserialize, main script execution, and memory re-serialize are never interrupted, and exceeding the CPU limit simply means that the player's cycle will be skipped on subsequent ticks until the CPU debt is repaid. CPU limit robust code isn't strictly necessary in this case, but it would still be useful to detect when the player's cycle is skipped and possibly start doing things more efficiently. This can easily be achieved using code like this:
module.exports.loop = function()
{
    var skippedTicks = 0;
    if ( 'time' in Memory )
    {
        skippedTicks = Game.time - Memory.time - 1;
    }
    // Main body of loop goes here, and possibly uses skippedTicks to try to do
    // things more efficiently.
    Memory.time = Game.time;
};
This way of managing players' CPU usage is vulnerable to abuse by infinite loops, and I'm almost certain that this is not the behaviour of Screeps.
2. The player's cycle is atomic.
The next possibility is that the player's cycle is atomic. If the CPU limit is exceeded, the player's cycle is interrupted, but neither the scheduled game state changes nor the changes to Memory are committed. It becomes more important to improve efficiency when an interrupted cycle is detected, because ignoring it means that the player's script will be unable to change game state or Memory. However, detecting interrupted cycles is still simple:
module.exports.loop = function()
{
    var failedTicks = 0;
    if ( 'time' in Memory )
    {
        failedTicks = Game.time - Memory.time - 1;
        // N failed ticks means the result of this calculation failed to commit N times.
        Memory.workQuota /= Math.pow( 2, failedTicks );
    }
    // Main body of loop goes here, and uses Memory.workQuota to limit the number
    // of active game objects to process.
    Memory.time = Game.time;
};
2.5. Changes to Memory are atomic but changes to Game objects are not.
EDIT: This possibility occurred to me after reading the documentation for the RawMemory object. If the script is interrupted, any game state changes already scheduled are committed, but no changes to Memory are committed. This makes sense, given the functionality provided by RawMemory: if the script is interrupted before a custom Memory serialize has run, the default JSON serialize runs instead, which would make custom Memory serialization more complicated - the custom deserialization would need to be capable of handling the default JSON in addition to whatever format the custom serialize wrote.
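A sketch of what that more complicated deserialization might look like (myCustomSerialize and myCustomDeserialize are hypothetical helpers; RawMemory.get()/RawMemory.set() are the documented raw-access API; this assumes the custom format never starts with '{'):
module.exports.loop = function()
{
    var raw = RawMemory.get();
    // If an interrupted tick committed the default JSON serialize, fall back
    // to JSON.parse; otherwise use the custom format.
    var mem = ( raw.charAt( 0 ) === '{' ) ? JSON.parse( raw ) : myCustomDeserialize( raw );
    // Main body of loop goes here, operating on mem.
    RawMemory.set( myCustomSerialize( mem ) );
};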
3. JavaScript statements are atomic.
Another possibility is that the player's cycle isn't atomic but JavaScript statements are. When the player's cycle is interrupted for exceeding the CPU limit, incomplete game state changes and Memory changes are committed, but with careful coding - a statement that makes a Screeps API call must assign the result of the call to a Memory key - the game state changes and Memory changes won't be inconsistent with each other. Writing fully CPU limit robust code for this case seems complicated - it's not a problem that I've solved yet, and I'd want to be sure that this is the true behaviour of Screeps before attempting it.
4. Nothing is atomic.
At the other extreme, not even single statements are atomic: a statement assigning the result of a Screeps API call to a key in Memory could be interrupted between the completion of the call and the assigning of the result, and both the incomplete game state changes and the incomplete memory changes (which are now inconsistent with each other) are committed. In this case, possibilities for writing CPU limit robust code are very limited. For example, although the presence of the value written to Memory by the following statement would indicate beyond doubt that the Screeps API call completed, its absence would not indicate beyond doubt that the call did not complete:
Memory.callResults[ Game.time ][ creep.name ] = creep.move( TOP );
Does anyone know which of these is the behaviour of Screeps? Or is it something else that I haven't considered? The following quote from the documentation:
The CPU limit 100 means that after 100 ms execution of your script will be terminated even if it has not accomplished some work yet.
hints that it could be case 3 or case 4, but not very convincingly.
On the other hand, the result of an experiment in simulation mode with a single creep, the following main loop, and selecting 'Terminate' in the dialog for a non-responding script:
module.exports.loop = function()
{
    var failedTicks = 0;
    if ( 'time' in Memory )
    {
        failedTicks = Game.time - Memory.time - 1;
        console.log( '' + failedTicks + ' failed ticks.' );
    }
    for ( var creepName in Game.creeps )
    {
        var creep = Game.creeps[ creepName ];
        creep.move( TOP );
    }
    if ( failedTicks < 3 )
    {
        // This intentional infinite loop was initially commented out, and
        // uncommented after Memory.time had been successfully initialized.
        while ( true )
        {
        }
    }
    Memory.time = Game.time;
};
was that the creep only moved on ticks where the infinite loop was skipped because failedTicks had reached its threshold value. This points towards case 2, but isn't conclusive, because the CPU limit in simulation mode is different from online - it appears to be infinite unless the script is terminated using the dialog's 'Terminate' button.
Case 4 by default, but modifiable to case 2.5
As nehegeb and dwurf suspected, and experiments with a private server have confirmed, the default behaviour is case 4. Changes to both game state and Memory that occurred before the interruption are committed.
However, the running of the default JSON serialize by the server main loop is controlled by the existence of an undocumented key '_parsed' in RawMemory; the key's value is a reference to Memory. Deleting the key at the start of the script's main loop and restoring it at the end has the effect of making the whole set of Memory changes made by the script's main loop atomic i.e. case 2.5:
module.exports.loop = function()
{
    // Run the default JSON deserialize. This also creates a key '_parsed' in
    // RawMemory - that '_parsed' key and Memory refer to the same object, and the
    // existence of the '_parsed' key tells the server main loop to run the
    // default JSON serialize.
    Memory;
    // Disable the default JSON serialize by deleting the key that tells the
    // server main loop to run it.
    delete RawMemory._parsed;
    ...
    // An example of code that would be wrong without a way to make it CPU limit
    // robust:
    mySpawn.memory.queue.push('harvester');
    // If the script is interrupted here, myRoom.memory.harvesterCreepsQueued is
    // no longer an accurate count of the number of 'harvester's in
    // mySpawn.memory.queue.
    myRoom.memory.harvesterCreepsQueued++;
    ...
    // Re-enable the default JSON serialize by restoring the key that tells the
    // server main loop to run it.
    RawMemory._parsed = Memory;
};
It is not #1 or #2. I'm betting it's #4: it would make the most sense to monitor CPU usage externally to the main loop and kill it whenever the limit is reached. #3 would require complicated code in the Screeps server to perform "statement-level" transactions. There is no CPU limit in the simulator, as you've discovered.
Most players solve this problem by simply putting critical code early in their main loop, e.g. tower code comes first, then spawn code, then creep movement/work. This also protects against uncaught exceptions in your code, as your most critical functions will (hopefully) have already executed. It is a poor solution for the CPU limit, though: in my observation, your code appears to be skipped every second tick once you've used all the CPU in your bucket and are constantly hitting your regular limit.
I don't have a CPU issue right now (I have a subscription), but I would solve this by putting CPU-intensive code near the end and, if possible, only executing it when you have plenty of CPU in your bucket and are nowhere near the 500 CPU per-tick limit. It also helps to have larger creeps: it's common for pathfinding, or even just movement (0.2 per move), to take up a fair chunk of CPU, and larger creeps mean fewer creeps.
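A minimal sketch of that ordering (runTowers, runSpawns, runCreeps, runExpensivePathfinding, and the thresholds are hypothetical; Game.cpu.bucket, Game.cpu.getUsed(), and Game.cpu.limit are from the Screeps API):
module.exports.loop = function()
{
    runTowers();  // most critical work first, so it survives an interruption
    runSpawns();
    runCreeps();
    // Optional, CPU-hungry work only runs when the bucket is healthy and this
    // tick still has plenty of headroom.
    if ( Game.cpu.bucket > 9000 && Game.cpu.getUsed() < Game.cpu.limit * 0.5 )
    {
        runExpensivePathfinding();
    }
};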
One of the in-game 'tips of the day' says:
TIP OF THE DAY: If CPU limit raises, your script will execute only partially.
Therefore I would say it's most likely #4!
Just like dwurf says, the following approach to script layout should do the trick in most cases:
Most players solve this problem by simply putting critical code early in their main loop, e.g. tower code comes first, then spawn code, then creep movement/work. [...]

something is not right with console.time();

I am testing my JavaScript's speed using the console.time() method, so it logs how long a function takes to run on page load.
if (window.devicePixelRatio > 1) {
    var images = $('img');
    console.time('testing');
    var imagesObj = images.length;
    for (var i = 0; i < imagesObj; i++) {
        var lowres = images.eq(i).attr('src'),
            highres = lowres.replace(".", "_2x.");
        images.eq(i).attr('src', highres);
    }
    console.timeEnd('testing');
}
But every time I reload the page it gives me a noticeably different value. Should it behave this way? Shouldn't it give me a consistent value?
I have loaded it 5 times in a row and the values are the following:
5.051 ms
4.977 ms
8.009 ms
5.325 ms
6.951 ms
I am running this on XAMPP and in Chrome btw.
Thanks in advance
console.time/timeEnd is working correctly, and the timing does indeed fluctuate by a tiny amount.
However, when dealing with such small numbers - the timings are all less than 1/100 of a second! - the deviation is irrelevant and can be influenced by a huge number of factors.
There is always variation; it could be caused by a number of things:
the server responding slightly slower (that can also block other parts of the browser)
your processor doing something else in the meantime
your processor clocking down to save power
random latency in the network
a browser extension doing something in the background
Also, Firefox has a system that tries to intelligently optimize JavaScript execution; in most cases it will perform better, but it is somewhat random.
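If you want a steadier number, one option (a sketch, not from the original answer) is to time many repetitions with performance.now() and average them, which smooths out most of this noise:
// Run fn `runs` times and return the mean duration in milliseconds.
function benchmark(fn, runs) {
    var total = 0;
    for (var i = 0; i < runs; i++) {
        var start = performance.now();
        fn();
        total += performance.now() - start;
    }
    return total / runs;
}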

Most efficient way to throttle continuous JavaScript execution on a web page

I'd like to continuously execute a piece of JavaScript code on a page, spending all available CPU time I can for it, but allowing browser to be functional and responsive at the same time.
If I just run my code continuously, it freezes the browser's UI and the browser starts to complain. Right now I pass a zero timeout to setTimeout, which then does a small chunk of work and loops back to setTimeout. This works, but does not seem to utilize all the available CPU. Can you think of any better ways of doing this?
Update: To be more specific, the code in question is rendering frames on canvas continuously. The unit of work here is one frame. We aim for the maximum possible frame rate.
Probably what you want is to centralize everything that happens on the page and use requestAnimationFrame to do all your drawing. So basically you would have a function/class that looks something like this (you'll have to forgive some style/syntax errors; I'm used to MooTools classes, so just take this as an outline):
var Main = function () {
    this.queue = [];
    this.actions = {};
    this.loop = this.loop.bind(this); // keep `this` when passed as a callback
    requestAnimationFrame(this.loop);
};
Main.prototype.loop = function () {
    while (this.queue.length) {
        var action = this.queue.shift(); // process queued events in order
        this.executeAction(action);
    }
    // do your rendering here
    requestAnimationFrame(this.loop);
};
Main.prototype.addToQueue = function (e) {
    this.queue.push(e);
};
Main.prototype.addAction = function (target, event, callback) {
    if (this.actions[target] === void 0) this.actions[target] = {};
    if (this.actions[target][event] === void 0) this.actions[target][event] = [];
    this.actions[target][event].push(callback);
};
Main.prototype.executeAction = function (e) {
    if (this.actions[e.target] !== void 0 && this.actions[e.target][e.type] !== void 0) {
        for (var i = 0; i < this.actions[e.target][e.type].length; i++) {
            this.actions[e.target][e.type][i](e); // invoke each registered callback
        }
    }
};
So basically you'd use this class to handle everything that happens on the page. Every event handler would be something like onclick='main.addToQueue(event)', or however you want to wire your events up; you just point them at adding the event to the queue, and use addAction to direct those events to whatever you want them to do. This way every user action gets executed as soon as your canvas has finished redrawing and before it gets redrawn again. As long as your canvas renders at a decent frame rate, your app should remain responsive.
EDIT: forgot the "this" in requestAnimationFrame(this.loop)
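A hypothetical wiring of this outline might look like the following; note that executeAction looks handlers up by string key, so this queues simplified event records rather than raw DOM events:
var main = new Main();
main.addAction('playButton', 'click', function (e) {
    // update application state here; the drawing happens in Main.prototype.loop
});
document.getElementById('playButton').addEventListener('click', function () {
    main.addToQueue({ target: 'playButton', type: 'click' });
});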
Web workers are something to try:
https://developer.mozilla.org/en-US/docs/DOM/Using_web_workers
You can tune your performance by changing the amount of work you do per invocation. In your question you say you do a "small chunk of work". Establish a parameter which controls the amount of work being done and try various values.
You might also try to set the timeout before you do the processing. That way the time spent processing should count towards any minimum the browsers set.
One technique I use is to keep a counter of iterations in my processing loop. Then set up an interval of, say, one second that displays the counter and clears it to zero. This provides a rough performance value with which to measure the effects of the changes you make.
In general this is likely to be very dependent on specific browsers, even versions of browsers. With tunable parameters and performance measurements you could implement a feedback loop to optimize in real-time.
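Putting those suggestions together, a sketch might look like this (chunkSize and doUnitOfWork are assumptions, not from the original post):
var chunkSize = 100; // the tunable parameter: units of work per invocation
var iterations = 0;
function processChunk() {
    setTimeout(processChunk, 0); // schedule the next chunk before working
    for (var i = 0; i < chunkSize; i++) {
        doUnitOfWork(); // hypothetical: render part of a frame, etc.
        iterations++;
    }
}
setInterval(function () {
    console.log(iterations + ' iterations/second'); // rough throughput gauge
    iterations = 0;
}, 1000);
processChunk();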
One can use window.postMessage() to work around the minimum delay that setTimeout enforces. See this article for details. A demo is available here.
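The technique boils down to something like this sketch (names are illustrative, not from the linked article): queue the callback and post a message to your own window, then run the callback from the message handler, which is not subject to setTimeout's minimum-delay clamp:
var zeroTimeoutQueue = [];
window.addEventListener('message', function (e) {
    if (e.source === window && e.data === 'zero-timeout') {
        e.stopPropagation();
        if (zeroTimeoutQueue.length) zeroTimeoutQueue.shift()();
    }
}, true);
function setZeroTimeout(fn) {
    zeroTimeoutQueue.push(fn);
    window.postMessage('zero-timeout', '*');
}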
