Porting a ScriptProcessor-based application to AudioWorklet - JavaScript

Since the old Web Audio ScriptProcessorNode has been deprecated since 2014 and AudioWorklets arrived in Chrome 64, I decided to give them a try. However, I'm having difficulty porting my application. I'll give two examples from a nice article to illustrate my point.
First the scriptprocessor way:
var node = context.createScriptProcessor(1024, 1, 1);
node.onaudioprocess = function (e) {
    var output = e.outputBuffer.getChannelData(0);
    for (var i = 0; i < output.length; i++) {
        output[i] = Math.random();
    }
};
node.connect(context.destination);
Another one that fills a buffer and then plays it:
var node = context.createBufferSource(),
    buffer = context.createBuffer(1, 4096, context.sampleRate),
    data = buffer.getChannelData(0);
for (var i = 0; i < 4096; i++) {
    data[i] = Math.random();
}
node.buffer = buffer;
node.loop = true;
node.connect(context.destination);
node.start(0);
The big difference between the two is that the first one fills the buffer with new data during playback, while the second one generates all the data beforehand.
Since I generate a lot of data, I can't do it beforehand. There are a lot of examples for the AudioWorklet, but they all use other nodes, on which one can just call .start(), connect it, and it'll start generating audio. I can't wrap my head around a way to do this when I don't have such a method.
So my question basically is: how do I do the above example in an AudioWorklet, when the data is generated continuously in the main thread in some array and the playback of that data happens in the Web Audio thread?
I've been reading about the MessagePort thing, but I'm not sure that's the way to go either. The examples don't point me in that direction, I'd say. What I might need is the proper way to feed the process function in my AudioWorkletProcessor-derived class with my own data.
My current ScriptProcessor-based code is on GitHub, specifically in vgmplay-js-glue.js.
I've been adding some code to the constructor of the VGMPlay_WebAudio class, moving from the examples to the actual result, but as I said, I don't know in which direction to move now.
constructor() {
    super();
    this.audioWorkletSupport = false;
    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    this.context = new AudioContext();
    this.destination = this.destination || this.context.destination;
    this.sampleRate = this.context.sampleRate;
    if (this.context.audioWorklet && typeof this.context.audioWorklet.addModule === 'function') {
        this.audioWorkletSupport = true;
        console.log("Audioworklet support detected, don't use the old scriptprocessor...");
        this.context.audioWorklet.addModule('bypass-processor.js').then(() => {
            this.oscillator = new OscillatorNode(this.context);
            this.bypasser = new AudioWorkletNode(this.context, 'bypass-processor');
            this.oscillator.connect(this.bypasser).connect(this.context.destination);
            this.oscillator.start();
        });
    } else {
        this.node = this.context.createScriptProcessor(16384, 2, 2);
    }
}

So my question basically is: how do I do the above example in an AudioWorklet,
For your first example, there is already an AudioWorklet version of it:
https://github.com/GoogleChromeLabs/web-audio-samples/blob/gh-pages/audio-worklet/basic/js/noise-generator.js
I do not recommend the second example (aka buffer stitching), because it creates lots of source nodes and buffers, and thus can cause GC pauses that will interfere with the other tasks in the main thread. Also, a discontinuity can happen at the boundary of two consecutive buffers if the scheduled start time does not fall exactly on a sample. With that said, you won't be able to hear the glitch in this specific example because the source material is noise.
when the data is generated continuously in the main thread in some array and the playback of that data happens in the Web Audio thread.
The first thing you should do is separate the audio generator from the main thread. The audio generator must run in the AudioWorkletGlobalScope. That's the whole purpose of the AudioWorklet system: lower latency and better audio rendering performance.
In your code, VGMPlay_WebAudio.generateBuffer() should be called in the AudioWorkletProcessor.process() callback to fill the output buffer of the processor. That roughly matches what your onaudioprocess callback does.
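As a rough illustration, a minimal processor could look like the sketch below; the 'vgm-processor' name is made up, and the Math.random() line stands in for wherever VGMPlay_WebAudio.generateBuffer() ends up once it lives in the worklet scope:
// vgm-processor.js, runs in the AudioWorkletGlobalScope
class VGMProcessor extends AudioWorkletProcessor {
    process(inputs, outputs, parameters) {
        var output = outputs[0]; // first output, an array of channels
        for (var channel = 0; channel < output.length; channel++) {
            var samples = output[channel]; // Float32Array, 128 frames per render quantum
            for (var i = 0; i < samples.length; i++) {
                samples[i] = Math.random(); // stand-in for your generator's samples
            }
        }
        return true; // keep the processor alive
    }
}
registerProcessor('vgm-processor', VGMProcessor);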
I've been reading about the MessagePort thing, but I'm not sure that's the way to go either. The examples don't point me in that direction, I'd say. What I might need is the proper way to feed the process function in my AudioWorkletProcessor-derived class with my own data.
I don't think your use case requires MessagePort. I've seen other methods in the code, but they really don't do much other than starting and stopping the node. That can be done by connecting/disconnecting the AudioWorkletNode in the main thread. No cross-thread messaging is necessary.
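For example, starting and stopping from the main thread without any messaging could be as simple as this sketch (using the hypothetical 'vgm-processor' name from above):
// main thread: no MessagePort, just graph (dis)connection
var node = new AudioWorkletNode(context, 'vgm-processor');
node.connect(context.destination);  // audio starts rendering
// ... later ...
node.disconnect();                  // audio stops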
The code example at the end of your question can serve as the setup for the AudioWorklet. I am well aware that the separation between the setup and the actual audio generation can be tricky, but it will be worth it.
A few questions for you:
How does the game graphics engine send messages to the VGM generator?
Can the VGMPlay class live on the worker thread without any interaction with the main thread? I don't see any interaction in the code except for starting and stopping.
Is XMLHttpRequest essential to the VGMPlay class? Or can that be done somewhere else?

Related

How to change page size, margins, and document language through the Word API? Is there an alternate way to do that?

I tried to find a Word API for that, but I guess it is not available right now, so I thought I could do it by modifying the XML, but that also didn't work. I need to change the page size, margins, and document language.
await Word.run(async (context) => {
    var paragraphs = context.document.body;
    // Queue a command to load the style property.
    paragraphs.load("style");
    // Synchronize the document state by executing the queued commands,
    // and return a promise to indicate task completion.
    return context.sync().then(function () {
        // let replacedXml = ""
        // Queue a set of commands to get the OOXML of the body.
        var ooxml = paragraphs.getOoxml();
        // Synchronize the document state by executing the queued commands,
        // and return a promise to indicate task completion.
        return context.sync().then(function () {
            // console.log('Paragraph OOXML: ' + ooxml.value);
            console.log(ooxml.value);
            let str = String(ooxml.value);
            let replacedXml = ooxml.value;
            // paragraphs.items[0].insertOoxml(replacedXml, Word.InsertLocation.replace)
            // context.document.body.insertOoxml(replacedXml, Word.InsertLocation.replace);
            var range = context.document.getSelection();
            range.insertOoxml(replacedXml, "Replace");
            // console.log(replacedXml)
        });
    });
});
I tried to find a Word API for that, but I guess it is not available right now
The answer is yes: there is no such API with the functionality you expect, for now and in the near future.
so I thought I could do it by modifying the XML, but that also didn't work. I need to change the page size, margins, and document language
OOXML is indeed a powerful way to change a doc file, but it is only practical for the very experienced, performs somewhat poorly online, and may cause problems that are hard to diagnose. So in most cases we don't recommend using OOXML to achieve one's goal.
By the way, we suggest testing the above code in the Word desktop app. Only after ensuring the correctness of the code can we go on investigating.
Finally, you can submit your request at https://techcommunity.microsoft.com/t5/microsoft-365-developer-platform/idb-p/Microsoft365DeveloperPlatform if you really want a new API.

Multiple Meyda analyzers not possible: library issue or problem in my implementation?

OK, I'm using Meyda, a library for extracting audio features, in an Electron project. To handle everything related to audio in this project I implemented an Audio() class. Summarizing: I get the audio track, split it into left and right channels, and merge it again. For each channel, there is a Meyda analyzer extracting features. A simplified version of the code, showing only Meyda sending data to a spectrogram graph object, would be:
class Audio {
    constructor(audioElementID, spectrogramObj) {
        const audioContext = new AudioContext();
        this.audioElement = document.getElementById(audioElementID);
        const track = audioContext.createMediaElementSource(this.audioElement);
        const splitter = audioContext.createChannelSplitter(2);
        track.connect(splitter);
        this.gainNode = {
            master: audioContext.createGain(),
            left: audioContext.createGain(),
            right: audioContext.createGain()
        };
        splitter.connect(this.gainNode.left, 0);
        splitter.connect(this.gainNode.right, 1);
        const merger = audioContext.createChannelMerger(2);
        this.gainNode.left.connect(merger, 0, 0);
        this.gainNode.right.connect(merger, 0, 1);
        merger.connect(this.gainNode.master);
        this.gainNode.master.connect(audioContext.destination);
        // first analyzer
        this.analyzerLeft = Meyda.createMeydaAnalyzer({
            'audioContext': audioContext,
            'source': this.gainNode.left,
            'bufferSize': 1024,
            'featureExtractors': ['amplitudeSpectrum'],
            'callback': features => {
                spectrogramObj.left.updatePlot(features.amplitudeSpectrum);
            }
        });
        // second analyzer
        this.analyzerRight = Meyda.createMeydaAnalyzer({
            'audioContext': audioContext,
            'source': this.gainNode.right,
            'bufferSize': 1024,
            'featureExtractors': ['amplitudeSpectrum'],
            'callback': features => {
                spectrogramObj.right.updatePlot(features.amplitudeSpectrum);
            }
        });
    }
    play() {
        this.audioElement.play();
        this.analyzerLeft.start();
        this.analyzerRight.start();
    }
    pause() {
        this.audioElement.pause();
        this.analyzerLeft.stop();
        this.analyzerRight.stop();
    }
}
module.exports.Audio = Audio;
As you can see, I named the two analyzers differently. The problem is: only the last analyzer works. It actually seems that analyzerLeft and analyzerRight both point to the last analyzer created. If I add a third one, named thirdAnalyzer, and in the play() method do NOT call this.thirdAnalyzer.start(), the third one will be started anyway, and only it.
Is this a library issue or something related to my class implementation?
From what I can tell, it looks like Meyda only allows one MeydaAnalyzer at a time. When you create a new instance of the MeydaAnalyzer with the factory method, it receives the Meyda object itself as a second parameter. The MeydaAnalyzer uses this object to attach all its values to. Whenever you create the next MeydaAnalyzer, it will simply overwrite the previous values.
I'm not sure if this is a bug or a feature. But since you already filed an issue we will surely find out soon. :-)
In the meantime, you can work around this issue by copying the internal reference to the Meyda object directly after you create a new MeydaAnalyzer. This will, for example, make sure that each instance of the MeydaAnalyzer uses a different ScriptProcessorNode.
this.analyzerLeft = Meyda.createMeydaAnalyzer({
    // ...
});
this.analyzerLeft._m = { ...this.analyzerLeft._m };
But keep in mind that this hack relies on a private member of the MeydaAnalyzer class, which may or may not disappear in a future version of Meyda.
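Applied to the Audio class from the question, the workaround would be a copy right after each createMeydaAnalyzer call, something like this sketch (same fragile _m hack as above):
// first analyzer
this.analyzerLeft = Meyda.createMeydaAnalyzer({ /* ...same options as before... */ });
this.analyzerLeft._m = { ...this.analyzerLeft._m }; // snapshot before the next analyzer overwrites it
// second analyzer
this.analyzerRight = Meyda.createMeydaAnalyzer({ /* ...same options as before... */ });
this.analyzerRight._m = { ...this.analyzerRight._m };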

Phaser warning "Audio source already exists" when an mp3 sound sample is played consecutively

I'm a bit confused about loading and playing sound effects. My game is set up in different states: first the Preloader state makes sure all images and sounds are loaded. The GameState is the main game; this state is restarted for each next level. There are different levels, but the state is the same: it just changes a _levelIndex variable and reuses the same state.
The GameState adds the needed audio to the game in its .create() function, and this create function is called every time the GameState is started. See the code below:
mygame.Preloader.prototype = {
    preload: function() {
        this.loadingbar_bg = this.add.sprite(80, 512, "loadingbar_bg");
        this.loadingbar_fill = this.add.sprite(80, 512, "loadingbar_fill");
        this.load.setPreloadSprite(this.loadingbar_fill);
        // load sounds
        this.load.audio("button", ["snd/button.mp3", "snd/button.ogg"]);
        this.load.audio("punch", ["snd/punch.mp3", "snd/punch.ogg"]);
        this.load.audio("coin", ["snd/coin.mp3", "snd/coin.ogg"]);
    },
    create: function() {
        this.state.start("MainGame");
    },
};
mygame.GameState.prototype = {
    create: function() {
        this.stage.backgroundColor = "#f0f";
        // etc.
        // sound effects
        this.sound1 = this.game.add.audio("button");
        this.sound2 = this.game.add.audio("punch");
        this.sound3 = this.game.add.audio("coin");
        // etc.
    },
    update: function() {
        if (hitFace) {
            this.sound2.play();
            hitFace = false;
        }
    },
    doNextLevel: function() {
        this.sound1.play();
        this._levelIndex++; // next level
        this.state.start("MainGame"); // restart this state
    },
    // etc.
};
The problem is that when I play the punch sound a couple of times in a row, a couple of seconds apart, the console gives this warning (which Phaser raises in the code here):
Phaser.Sound: Audio source already exists
This warning appears even when the GameState is started for the first time.
I suspect it has to do with decoding the mp3 and ogg sounds. Do I have to decode the sound samples every time the player starts (or restarts) a level, i.e. every time the GameState restarts? In other words, if GameState.create() runs each time a level is (re)started and the audio samples are added using game.add.audio, will the decoded samples from the previous level be destroyed and have to be reloaded/decoded each time? That seems wasteful. So my questions are:
1. What does this message "Audio source already exists" mean? Or should I ignore it?
2. If I want to use sounds in a state, do I have to re-add them every time the state is started and .create() is called?
3. Also somewhat related: if I want to use the same sound sample in multiple different states (menu, game, options, etc.), do I have to call game.add.audio() for the same sound in each state?
Well, as far as I can see, your code seems to be doing things right, so I will try to answer your questions with the knowledge I have:
1. What does this message "Audio source already exists" mean? Or should I ignore it?
The message means that there is already an instance of that sound playing, as you can see at the place where it is raised:
if (this._sound && !this.allowMultiple)
{
    console.warn('Phaser.Sound: Audio source already exists');
    // this._disconnectSource();
}
It shows this warning if the sound you are trying to play is already being played by that Phaser.Sound instance and allowMultiple is not set... and there is the crux of the issue. allowMultiple, from the source code:
/**
 * @property {boolean} allowMultiple - This will allow you to have multiple instances of this Sound playing at once. This is only useful when running under Web Audio, and we recommend you implement a local pooling system to not flood the sound channels.
 * @default
 */
this.allowMultiple = false;
So basically it is complaining that you are trying to spawn several instances of a sound that is not allowed to play multiple times. You shouldn't ignore it; instead, set the right flags.
Questions 2 and 3:
You shouldn't have to re-add the resource; that's why you load the audio source into the engine in the first place, so it can be reused through all the levels. Nor do you have to do it for every state.
In order to reuse a Sound in multiple states, you should be able to add the audio (or any game object) to the global scope and access it from there (here I found someone trying to do what you ask about in the question).
Another way would be to add these resources as an attribute of the game object, so you don't pollute the global scope but only the Game object context, as in the sketch below.
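A minimal sketch of that idea (the sounds property name is made up for illustration; game.add.audio is the regular Phaser call from your code):
// In whichever state runs first, e.g. in its create():
this.game.sounds = this.game.sounds || {};
this.game.sounds.button = this.game.sounds.button || this.game.add.audio("button");
// Any later state can then reuse the same Sound object:
this.game.sounds.button.play();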
But I believe the better strategy is to add these audios in the different states and manage their creation/deletion in the states, mainly because JS is evil* and mutability may play a bad card on you.
*Not that evil
To resolve the warning, simply use the allowMultiple flag (created here), e.g.:
this.sound1 = this.game.add.audio("button"); // allowMultiple is false by default
this.sound2 = this.game.add.audio("punch");
// Allow multiple instances running at the same time for sound2
this.sound2.allowMultiple = true;
this.sound3 = this.game.add.audio("coin");
// Allow multiple instances running at the same time for sound3
this.sound3.allowMultiple = true;

How to initialize a child process with passed in functions in Node.js

Context
I'm building a general-purpose game-playing A.I. framework/library that uses the Monte Carlo Tree Search algorithm. The idea is quite simple: the framework provides the skeleton of the algorithm, the four main steps: Selection, Expansion, Simulation, and Backpropagation. All the user needs to do is plug in four simple(ish) game-related functions of their own making:
a function that takes in a game state and returns all possible legal moves to be played
a function that takes in a game state and an action and returns a new game state after applying the action
a function that takes in a game state and determines if the game is over and returns a boolean and
a function that takes in a state and a player ID and returns a value based on whether the player has won, lost, or drawn. With that, the algorithm has all it needs to run and select a move to make (see the sketch after this list).
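For concreteness, here is a minimal sketch of those four plug-in functions for a trivial "count to 10" game (all names are made up for illustration; the question doesn't specify the framework's actual API):
// Players alternately add 1 or 2 to a running total; whoever reaches 10 wins.
const legalMoves = (state) => (state.total < 10 ? [1, 2] : []);
const applyMove = (state, move) => ({ total: state.total + move, player: 1 - state.player });
const isGameOver = (state) => state.total >= 10;
// state.player is the player to move next, so 1 - state.player made the last move
const resultFor = (state, playerId) => ((1 - state.player) === playerId ? 1 : -1);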
What I'd like to do
I would love to make use of parallel programming to increase the strength of the algorithm and reduce the time it needs each game turn. The problem I'm running into is that, when using child processes in Node.js, you can't pass functions to the child process, and my framework is built entirely around functions passed in by the user.
Possible solution
I have looked at this answer, but I am not sure it would be the correct implementation for my needs. I don't need to continually pass functions through messages to the child process; I just need to initialize it with the functions that the framework's user passes in when initializing the framework.
I thought of one way to do it, but it seems so inelegant, on top of probably not being the most secure, that I find myself searching for other solutions. When the user initializes the framework and passes in his four functions, I could have a script write those functions to a new JS file (let's call it my-funcs.js) that would look something like:
const func1 = /* ...function implementation... */;
const func2 = /* ...function implementation... */;
const func3 = /* ...function implementation... */;
const func4 = /* ...function implementation... */;
module.exports = { func1, func2, func3, func4 };
Then, in the child process worker file, I guess I would have to find a way to lazily require my-funcs.js. Or maybe I wouldn't; I suppose it depends on how and when Node.js loads the worker file into memory. This all seems very convoluted.
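For what it's worth, the worker side of that write-to-file idea could be as simple as the following sketch (my-funcs.js is the file named above; the message shape and the runSimulation helper are hypothetical):
// worker.js, loaded via child_process.fork('./worker.js')
const { func1, func2, func3, func4 } = require('./my-funcs.js');

process.on('message', (task) => {
    // run one batch of MCTS simulations with the user's functions,
    // then report the result back to the parent process
    const result = runSimulation(task, func1, func2, func3, func4); // hypothetical helper
    process.send(result);
});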
Can you describe other ways to get the result I want?
child_process is less about running a user's function and more about spawning a new process to execute a file or command.
Node is inherently a single-threaded system, so for I/O-bound work the Node event loop is really good at switching between requests, getting each one a little farther along. See https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick/
What it looks like you're doing is trying to get JavaScript to run multiple threads simultaneously. Short answer: you can't... or rather, it's really hard. See: is it possible to achieve multithreading in nodejs?
So how would we do it anyway? You're on the right track: child_process.fork(). But it needs a hard-coded script file to run. So how do we get user-generated code into place?
I envision a datastore where you take userFn.toString() and save it to a queue. Then you fork the process and let it pick up the next unhandled item in the queue, marking that it did so. It then writes the results to another queue, which this "GUI" thread polls, returning the calculated results to the user. At that point you've got multi-threading... and race conditions.
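A stripped-down sketch of that fork-and-rehydrate idea, skipping the datastore/queue for brevity (file names and message shapes are made up; as noted below, eval'ing user code is a security trade-off):
// parent.js
const { fork } = require('child_process');

const userFn = (state) => state.total + 1; // stand-in for a user-supplied function
const child = fork('./worker.js');
// Functions can't be sent over IPC, so send their source text instead.
child.send({ type: 'init', src: userFn.toString() });
child.send({ type: 'run', state: { total: 41 } });
child.on('message', (msg) => console.log('result:', msg)); // result: 42

// worker.js
let fn = null;
process.on('message', (msg) => {
    if (msg.type === 'init') {
        fn = eval(`(${msg.src})`); // rehydrate the function from its source text
    } else if (msg.type === 'run' && fn) {
        process.send(fn(msg.state));
    }
});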
Another idea: create a REST service that accepts the userFn.toString() content and execs it. Then, in this module, you call out to the other "thread" (the service), await the results, and return them.
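That could look something like this sketch using Express (route and port are arbitrary choices; again, this evals whatever it is sent):
const express = require('express');
const app = express();
app.use(express.text()); // accept the function source as plain text

app.post('/run', (req, res) => {
    const fn = eval(`(${req.body})`); // rehydrate and trust the user's code
    res.json({ result: fn() });
});

app.listen(3000);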
Security: yeah, we just flung that out the window. Whether you're executing the user's function directly, calling child_process.fork() to do it, or shimming it through a service, you're trusting untrusted code. Sadly, there's really no way around this.
Assuming that security isn't an issue, you could do something like this:
// Client side
<input id="func1"> <!-- For example the user inputs '(gamestate)=>{return 1}' -->
<input id="func2">
<input id="func3">
<input id="func4">
<script>
socket.on('syntax_error', function(err) { alert(err); });
function submit_funcs_strs() {
    // Get the function strings from the user inputs and put them into an array
    socket.emit('functions', [document.getElementById('func1').value, document.getElementById('func2').value, ... ]);
}
</script>
// Server side
// The socket listener is async
socket.on('functions', (funcs_strs) => {
    let funcs = [];
    for (let i = 0; i < funcs_strs.length; i++) {
        try {
            funcs.push(eval(funcs_strs[i]));
        } catch (e) {
            if (e instanceof SyntaxError) {
                socket.emit('syntax_error', e.message);
                return;
            }
        }
    }
    // Run the algorithm here
});

How to run an unblocking background task in a Meteor/JavaScript client?

I'd like to run a resource-hungry task in the background on a Meteor client and keep the interface responsive for the user in the meantime. The task does some math (for example, finding prime numbers as described here: https://stackoverflow.com/a/22930538/2543628).
I've tried to follow the tips from https://stackoverflow.com/a/21351966, but the interface still always "freezes" until the task is complete.
setTimeout, setInterval, and packages like the one in my current approach also didn't help:
var taskQueue = new PowerQueue();
taskQueue.add(function(done) {
    doSomeMath();
    // It's still blocking/freezing the interface here until done() is reached
    done();
});
Can I do something to keep the interface responsive while doSomeMath() is running, or am I doing something wrong (though it doesn't look like there is much you could do wrong with PowerQueue)?
JavaScript libraries that solve the problem of asynchronous queuing assume that the queued tasks run in a concurrent but single-threaded environment like Node.js or your browser. In your case, however, you need more than just concurrency: you need multi-threaded execution to move your CPU-intensive computation off the UI thread. This can be achieved with web workers. Note that web workers are only supported in modern browsers, so keep reading if you don't care about IE9.
The article above should be enough to get you started; however, it's worth mentioning that the worker script needs to be kept outside of your application tree so it doesn't get bundled. An easy way to do this is to put it inside the public directory.
Here is a quick example where my worker computes a Fibonacci sequence (inefficiently):
public/fib.js
var fib = function(n) {
    if (n < 2) {
        return 1;
    } else {
        return fib(n - 2) + fib(n - 1);
    }
};

self.addEventListener('message', (function(e) {
    var n = e.data;
    var result = fib(n);
    self.postMessage(result);
    self.close();
}), false);
client/app.js
Meteor.startup(function () {
    var worker = new Worker('/fib.js');
    worker.postMessage(40);
    worker.addEventListener('message', function(e) {
        console.log(e.data);
    }, false);
});
When the client starts, it loads the worker and asks it to compute the 40th number in the sequence. This takes a few seconds to complete, but your UI should remain responsive. After the value is returned, it should print 165580141 to the console.
