I'm using a web worker to pass some data at an interval of 10 ms. In the task manager I can see the working memory set increasing until I cancel the interval.
Here's what I'm doing:
Sending:
function send() {
    setInterval(function() {
        const data = {
            array1: get100Arrays(),
            array2: get500Arrays()
        };
        let json = JSON.stringify(data);
        let arbfr = str2ab(json);
        worker.postMessage(arbfr, [arbfr]);
    }, 10);
}
function str2ab(str) {
    var buf = new ArrayBuffer(str.length * 2); // 2 bytes for each char
    var bufView = new Uint16Array(buf);
    for (var i = 0, strLen = str.length; i < strLen; i++) {
        bufView[i] = str.charCodeAt(i);
    }
    return buf;
}
I also tried to do only this, but with no success:
// let json = JSON.stringify( data );
// let arbfr = str2ab(json);
worker.postMessage(data);
Anyone know why this might be leaking? I'm currently trying this on Chrome.
Memory leaks in web workers are usually caused by passing values back and forth between the main thread and the worker thread many times.
If possible, try to send the array to the web worker only once.
You can also detect whether Transferable Objects work properly (the buffer will be neutered after a successful transfer):
var ab = new ArrayBuffer(1);
try {
    worker.postMessage(ab, [ab]);
    if (ab.byteLength) {
        console.log('TRANSFERABLE OBJECTS are not supported in your browser!');
    }
    else {
        console.log('USING TRANSFERABLE OBJECTS');
    }
}
catch (e) {
    console.log('TRANSFERABLE OBJECTS are not supported in your browser!');
}
Following is the code to create a 2D matrix in JavaScript:
function Create2DArray(rows) {
    var arr = [];
    for (var i = 0; i < rows; i++) {
        arr[i] = [];
    }
    return arr;
}
Now I have a collection of 2D matrices inside an array:
const matrices = [];
for (let i = 1; i < 10000; i++) {
    matrices.push(new Create2DArray(i * 100));
}
// I'm just mocking it here. In reality we have data available in matrix form.
I want to do operations on each matrix like this:
for (let i = 0; i < matrices.length; i++) {
    ...doAnythingWithEachMatrix()
}
And since it will be a computationally expensive process, I would like to do it via a web worker so that the main thread is not blocked.
I'm using paralleljs for this purpose since it provides a nice API for multithreading. (Or should I use a native web worker? Please suggest.)
update() {
    for (let i = 0; i < matrices.length; i++) {
        var p = new Parallel(matrices[i]);
        p.spawn(function (matrix) {
            return doAnythingOnMatrix(matrix);
            // can be anything like transpose, scaling, translate etc...
        }).then(function (matrix) {
            // return back so that I can use those values to update the DOM,
            // or directly update the DOM here.
            // suggest a best way so that I can prevent crashes and improve performance.
        });
    }
    requestAnimationFrame(update);
}
So my question is: what is the best way of doing this?
Is it OK to create a new web worker or Parallel instance inside a for loop?
Would it cause memory issues?
Or is it OK to create a global instance of Parallel or a web worker and use it for manipulating each matrix?
Or suggest a better approach.
I'm using Parallel.js as an alternative to a raw web worker.
Is it OK to use Parallel.js for multithreading? (Or do I need to use a native web worker?)
In reality, the matrices contain position data. This data is processed by the web worker or Parallel.js instance behind the scenes, and the processed result is returned to the main app, which then uses it to draw items / update the canvas.
UPDATE NOTE
Actually, this is an animation, so it will have to be updated for each matrix on each tick.
Currently, I'm creating a new instance of Parallel inside the for loop. I fear that this is an unconventional approach, or that it could cause memory leaks. I need the best way of doing this. Please suggest.
UPDATE
This is my example:
Following our discussion in the comments, here is an attempt at using chunks. The data is processed in groups of 10 (a chunk), so that you can receive their results regularly, and we only start the animation after receiving 200 of them (a buffer) to get a head start (think of it like a video stream). These values may need to be adjusted depending on how long each matrix takes to process.
That being said, you added details afterwards about the lag you get. I'm not sure if this will solve it, or if the problem lies in your canvas update function. That's just a path to explore:
/*
 * A helper function to process data in chunks
 */
async function processInChunks({ items, processingFunc, chunkSize, bufferSize, onData, onComplete }) {
    const results = [];
    // For each group of {chunkSize} items
    for (let i = 0; i < items.length; i += chunkSize) {
        // Process this group in parallel
        const p = new Parallel(items.slice(i, i + chunkSize));
        // p.map does not return a real Promise, so we wrap it
        // in one to be able to await it
        const chunkResults = await new Promise(resolve => {
            return p.map(processingFunc).then(resolve);
        });
        // Add to the results
        results.push(...chunkResults);
        // Pass the results to a callback if we're above the {bufferSize}
        if (i >= bufferSize && typeof onData === 'function') {
            // Flush the results
            onData(results.splice(0, results.length));
        }
    }
    // In case there was less data than the wanted {bufferSize},
    // pass the results anyway
    if (results.length) {
        onData(results.splice(0, results.length));
    }
    if (typeof onComplete === 'function') {
        onComplete();
    }
}
/*
 * Usage
 */
// For the demo, a fake matrix Array
const matrices = new Array(3000).fill(null).map((_, i) => i + 1);
const results = [];
let animationRunning = false;
// For the demo, a function which takes time to complete
function doAnythingWithMatrix(matrix) {
    const start = new Date().getTime();
    while (new Date().getTime() - start < 30) { /* sleep */ }
    return matrix;
}
processInChunks({
    items: matrices,
    processingFunc: doAnythingWithMatrix,
    chunkSize: 10,   // Receive results after each group of 10
    bufferSize: 200, // But wait for at least 200 before starting to receive them
    onData: (chunkResults) => {
        results.push(...chunkResults);
        if (!animationRunning) { runAnimation(); }
    },
    onComplete: () => {
        console.log('All the matrices were processed');
    }
});
function runAnimation() {
    animationRunning = results.length > 0;
    if (animationRunning) {
        updateCanvas(results.shift());
        requestAnimationFrame(runAnimation);
    }
}
function updateCanvas(currentMatrixResult) {
    // Just for the demo, we're not really using a canvas
    canvas.innerHTML = `Frame ${currentMatrixResult} out of ${matrices.length}`;
    info.innerHTML = results.length;
}
<script src="https://unpkg.com/paralleljs@1.0/lib/parallel.js"></script>
<h1 id="canvas">Buffering...</h1>
<h3>(we've got a headstart of <span id="info">0</span> matrix results)</h3>
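As for the native alternative: a small pool of plain web workers can do the same job without the Parallel.js dependency. Here is a minimal sketch, assuming a hypothetical matrixWorker.js that receives a matrix, processes it, and posts the result back:
// Sketch: a fixed-size pool of native Web Workers.
// matrixWorker.js is a hypothetical script that replies to each
// posted matrix with its processed result.
function createPool(size, scriptUrl) {
    const idle = [];
    const queue = [];
    for (let i = 0; i < size; i++) {
        idle.push(new Worker(scriptUrl));
    }
    function runNext() {
        if (!idle.length || !queue.length) return;
        const worker = idle.pop();
        const job = queue.shift();
        worker.onmessage = (e) => {
            idle.push(worker); // hand the worker back to the pool
            job.resolve(e.data);
            runNext();
        };
        worker.postMessage(job.matrix);
    }
    return {
        process(matrix) {
            return new Promise((resolve) => {
                queue.push({ matrix, resolve });
                runNext();
            });
        }
    };
}

// Usage: reuse the same few long-lived workers for every matrix
const pool = createPool(4, 'matrixWorker.js');
matrices.forEach(m => pool.process(m).then(updateCanvas));
Reusing a handful of long-lived workers this way avoids spawning one instance per matrix inside the loop, which was the memory concern in the question.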
I have a Blob to construct: I received almost 100 parts (about 500 KB each) to decrypt and assemble into a Blob file.
It's actually working fine, but the decryption is CPU-intensive and freezes my page.
I tried different approaches, with jQuery deferreds and timeouts, but always the same problem.
Is there a way to not freeze the UI thread?
var parts = blobs.sort(function (a, b) {
    return a.part - b.part;
});
// our final byte arrays
var byteArrays = [];
for (var i = 0; i < blobs.length; i++) {
    // That job is intensive, and takes time
    byteArrays.push(that.decryptBlob(parts[i].blob.b64, fileType));
}
// create a new blob with all the data
var blob = new Blob(byteArrays, { type: fileType });
The body of the for(...) loop is synchronous, so the entire decryption process is synchronous; in simple words, decryption happens chunk after chunk. How about making it asynchronous, decrypting multiple chunks in parallel? In JavaScript terminology we can use asynchronous workers. These workers can work in parallel, so if you spawn 5 workers, for example, the total time is reduced to roughly T / 5 (T = total time in synchronous mode).
Read more about worker threads here:
https://blog.logrocket.com/node-js-multithreading-what-are-worker-threads-and-why-do-they-matter-48ab102f8b10/
Thanks to Sebastian Simon,
I took the worker route, and it's working fine.
var chunks = [];
var decryptedChunkFnc = function (args) {
    // My blob-building job here
};
// determine the maximum number of workers to use
var maxWorker = 5;
if (totalParts < maxWorker) {
    maxWorker = totalParts;
}
// no need for eval: keep the workers in a plain array
var workers = [];
for (var iw = 0; iw < maxWorker; iw++) {
    var wo = new Worker("decryptfile.min.js");
    var item = blobs.pop();
    wo.postMessage(MyObjectPassToTheFile);
    wo.onmessage = decryptedChunkFnc;
    workers.push(wo);
}
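For completeness, here is a sketch of what the worker side could look like. Everything in it is illustrative: decryptPart stands in for whatever decryption routine lives inside decryptfile.min.js, and the field names mirror the part/blob.b64 shape used in the question:
// Hypothetical worker script (decryptfile.js):
// receives one encrypted part, decrypts it, posts the bytes back.
onmessage = function (e) {
    var part = e.data;
    // decryptPart is a placeholder for the real decryption routine
    var bytes = decryptPart(part.blob.b64, part.fileType);
    postMessage({ part: part.part, bytes: bytes });
};
The main thread can then collect the decrypted parts in order and build the final Blob once all workers have reported back.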
I'm basically doing a lot of operations on objects in an array, so I decided to use web workers to process them in parallel. However, if I input an array with 10 objects, only 9 workers return a value. So I created this simple mockup that reproduces the problem:
var numbers = [12, 2, 6, 5, 5, 2, 9, 8, 1, 4];
var create = function(number) {
    var source = 'onmessage = function(e) { postMessage(e.data * 3) }';
    var blob = new Blob([source]);
    var url = window.URL.createObjectURL(blob);
    return new Worker(url);
};
var newnumbers = [];
for (var i = 0; i < numbers.length; i++) {
    var worker = create();
    worker.onmessage = function(e) {
        newnumbers.push(e.data);
        worker.terminate();
    };
    worker.postMessage(numbers[i]);
}
So basically, each number in the array gets multiplied by 3 and added to a new array, newnumbers. However, numbers.length = 10 and newnumbers.length = 9. I have debugged this for quite a while and verified that 10 workers were created.
I feel like I'm doing something stupidly wrong, but could someone explain?
You call terminate on the last worker before it processes the message, so the last worker never outputs anything.
This happens because the worker variable is actually a global variable instead of a local one. You can replace var worker with let worker to make it a local variable. If you are worried about browser compatibility for let, use an array to store the workers, or simply create a function scope.
terminate is called on the last worker because the var worker variable is set to the last worker when the loop ends. Note that the loop finishes executing before any worker starts processing (the loop is synchronous code).
So in your original code, instead of calling terminate() once on each worker, you call terminate() 10 times on the last worker.
var newnumbers = [];
for (var i = 0; i < numbers.length; i++) {
    let worker = create();
    worker.onmessage = function(e) {
        newnumbers.push(e.data);
        worker.terminate(); // "worker" refers to the unique variable created each iteration
    };
    worker.postMessage(numbers[i]);
}
Demo: https://jsfiddle.net/bqf5e9o1/2/
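For reference, here is a sketch of the function-scope alternative mentioned above, which works even without let because each callback invocation gets its own worker binding:
// Each forEach callback has its own scope, so "worker" is a fresh
// variable for every number, even in pre-ES6 environments.
var newnumbers = [];
numbers.forEach(function(number) {
    var worker = create();
    worker.onmessage = function(e) {
        newnumbers.push(e.data);
        worker.terminate(); // terminates this iteration's worker
    };
    worker.postMessage(number);
});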
I am streaming audio data in chunks through a WebSocket from the server:
ws.on('message', function incoming(message) {
    var readStream = fs.createReadStream("angular/data/google.mp3", {
        'flags': 'r',
        'highWaterMark': 128 * 1024
    });
    readStream.on('data', function(data) {
        ws.send(data);
    });
    readStream.on('end', function() {
        ws.send('end');
    });
    readStream.on('error', function(err) {
        console.log(err);
    });
});
On the client side:
var chunks = [];
var context = new AudioContext();
var soundSource;
var ws = new WebSocket(url);
ws.binaryType = "arraybuffer";
ws.onmessage = function(message) {
    if (message.data instanceof ArrayBuffer) {
        chunks.push(message.data);
    } else {
        createSoundSource(chunks);
    }
};
function createSoundSource(audioData) {
    soundSource = context.createBufferSource();
    for (var i = 0; i < audioData.length; i++) {
        context.decodeAudioData(audioData[i], function(soundBuffer) {
            soundSource.buffer = soundBuffer;
            soundSource.connect(context.destination);
            soundSource.start(0);
        });
    }
}
But setting the buffer (soundSource.buffer = soundBuffer;) a second time causes an error:
Uncaught DOMException: Failed to set the 'buffer' property on 'AudioBufferSourceNode': Cannot set buffer after it has been already been set
Any advice or insight into how best to update Web Audio API playback with new audio data would be greatly appreciated.
You cannot reset the buffer on an AudioBufferSourceNode once it has been set; these nodes are fire-and-forget. Each time you want to play a different buffer, you have to create a new AudioBufferSourceNode to continue playback. They are very lightweight nodes, so don't worry about performance even when creating tons of them.
To account for this, you can modify your createSoundSource function to simply create an AudioBufferSourceNode for each chunk inside the loop body, like this:
function createSoundSource(audioData) {
    for (var i = 0; i < audioData.length; i++) {
        context.decodeAudioData(audioData[i], function(soundBuffer) {
            var soundSource = context.createBufferSource();
            soundSource.buffer = soundBuffer;
            soundSource.connect(context.destination);
            soundSource.start(0);
        });
    }
}
I tried to keep the code style as close to the original as possible, but it's 2020, and a function taking advantage of modern features could actually look like this:
async function createSoundSource(audioData) {
    await Promise.all(
        audioData.map(async (chunk) => {
            const soundBuffer = await context.decodeAudioData(chunk);
            const soundSource = context.createBufferSource();
            soundSource.buffer = soundBuffer;
            soundSource.connect(context.destination);
            soundSource.start(0);
        })
    );
}
If you want to stop the old nodes as soon as new data arrives (it looks like that's what you wanted by resetting .buffer, but I'm not sure), you'll have to store them and call disconnect on all of them when it's time.
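A minimal sketch of that idea, assuming new data should replace whatever is currently playing (the names here are illustrative):
// Keep every started node so it can be silenced later.
const activeNodes = [];

function stopActiveNodes() {
    activeNodes.forEach((node) => {
        node.stop();
        node.disconnect();
    });
    activeNodes.length = 0;
}

async function playChunk(chunk) {
    const soundBuffer = await context.decodeAudioData(chunk);
    const soundSource = context.createBufferSource();
    soundSource.buffer = soundBuffer;
    soundSource.connect(context.destination);
    activeNodes.push(soundSource);
    soundSource.start(0);
}
Call stopActiveNodes() right before playing the replacement data, and the old sources go silent while the new ones start.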
Not positive, but I think you may have to handle your streaming WebSocket buffer a bit differently. Maybe the websocket-streaming-audio package source code can give you better clues on how to handle your scenario.
I'm writing a TCP game server in Node.js and am having issues with splitting the TCP stream into messages. Since I want to read numbers and floats from the buffer, I cannot find a suitable module to outsource this to, as all the ones I've found deal with simple strings ending with a newline delimiter.
I decided to go with prefixing each message with its length in bytes. I wrote a simple program to spam the server with random messages (well constructed, with a UInt16LE prefix giving the length of the message). I noticed that the longer I leave the programs running, the more memory my actual server uses. I tried using a debugging tool to trace the memory allocation, with no success, so I figured I'd post my code here and hope for a reply.
So here is my code... any tips or pointers as to where I'm going wrong, or what I could do differently or more efficiently, would be amazing!
Thanks.
server.on("connection", function(socket) {
var session = new sessionCS(socket);
console.log("Connection from " + session.address);
// data buffering variables
var currentBuffer = new Buffer(args.bufSize);
var bufWrite = 0;
var bufRead = 0;
var mSize = null;
var i = 0;
socket.on("data", function(dataBuffer) {
// check if buffer risk of overflow
if (bufWrite + dataBuffer.length > args.bufSize-1) {
var newBufWrite = 0;
var newBuffer = new Buffer(args.bufSize);
while(bufRead < bufWrite) {
newBuffer[newBufWrite] = currentBuffer[bufRead];
newBufWrite++;
bufRead++;
}
currentBuffer = newBuffer;
bufWrite = newBufWrite;
bufRead = 0;
newBufWrite = null;
}
// appending buffer
for (i=0; i<dataBuffer.length; i++) {
currentBuffer[bufWrite] = dataBuffer[i];
bufWrite ++;
}
// if beginning of message not acknowleged
if (mSize === null && (bufWrite - bufRead) >= 2) {
mSize = currentBuffer.readUInt16LE(bufRead);
}
// if difference between read and write is greater or equal to message mSize + 2
// +2 for the integer holding the message size
// this means that a full message is in the buffer and needs to be extracted
while ((bufWrite - bufRead) >= mSize+2) {
bufRead += 2;
var messageBuffer = new Buffer(mSize);
for(i=0; i<messageBuffer.length; i++) {
messageBuffer[i] = currentBuffer[bufRead];
bufRead++;
}
// this is where the message buffer would be passed to the router
router(session, messageBuffer);
messageBuffer = null;
// seeinf if another message length indicator is in the buffer
if ((bufWrite - bufRead) >= 2) {
mSize = currentBuffer.readUInt16LE(bufRead);
}
else {
mSize = null;
}
}
});
}
Buffer Frame Serialization Protocol (BUFSP): https://github.com/teambition/bufsp
It may be what you want: it encodes messages into buffers, writes them to TCP, then receives and splits the TCP stream buffers back into messages.
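For comparison with the manual cursor bookkeeping in the question, here is a minimal sketch of the same length-prefixed framing built on Buffer.concat (available in current Node.js), which removes the byte-by-byte copy loops; sessionCS and router are the question's own names:
// Sketch: accumulate incoming data and peel off complete
// [2-byte LE length][payload] frames as they become available.
server.on("connection", function(socket) {
    var session = new sessionCS(socket);
    var pending = Buffer.alloc(0);

    socket.on("data", function(data) {
        pending = Buffer.concat([pending, data]);
        while (pending.length >= 2) {
            var mSize = pending.readUInt16LE(0);
            if (pending.length < mSize + 2) break; // wait for the rest
            router(session, pending.slice(2, mSize + 2));
            pending = pending.slice(mSize + 2);
        }
    });
});
This is only a sketch, not a guaranteed fix for the leak, but it eliminates most of the manual read/write cursor state where off-by-one bugs and retained references tend to hide.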