I've been poking around the API to find what I'm looking for, as well as searching online (but examples of Windows Store apps are pretty scarce). What I'm essentially looking for is a starting point for analyzing audio in a Windows Store JavaScript app. If I were creating a simple visualizer, for example, I would need to detect the various kinds of "bumps" in the currently playing audio.
Can anybody point me in the right direction here? Is this something that's even possible in a Windows Store JavaScript app? Whether it's the audio of a selected song, or the device's currently playing song, or the audio on the microphone... either way is fine for my needs at the moment. I'm just looking for where to start in the analysis of the audio.
GGG's response sounded skeptical about the possibility of a signal processor on Windows RT, and I have to admit I don't know much about Windows RT either. But we know you will have JavaScript available, and it sounds like you are interested in Digital Signal Processing in JavaScript. Take a look at these resources; they could get you pointed in the right direction (a small sketch of one starting point follows the links):
https://github.com/corbanbrook/dsp.js
http://www.bores.com/courses/intro/index.htm
http://arc.id.au/SpectrumAnalyser.html
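As a concrete starting point, here is a minimal sketch of a "bump" (energy spike) detector built on the Web Audio AnalyserNode. It assumes the app's hosting engine supports the Web Audio API (worth verifying for your Windows container) and that the page has an <audio id="player"> element; the id and threshold are made up.

// Feed an <audio> element through an AnalyserNode and watch for energy spikes.
const audioEl = document.getElementById('player');
const ctx = new AudioContext();
const source = ctx.createMediaElementSource(audioEl);
const analyser = ctx.createAnalyser();
analyser.fftSize = 256;                         // 128 frequency bins

source.connect(analyser);
analyser.connect(ctx.destination);              // keep the audio audible

const bins = new Uint8Array(analyser.frequencyBinCount);
const THRESHOLD = 180;                          // tune by experiment

function tick() {
  analyser.getByteFrequencyData(bins);
  // Average of the lowest ~10 bins approximates the bass level.
  const bass = bins.slice(0, 10).reduce((sum, v) => sum + v, 0) / 10;
  if (bass > THRESHOLD) {
    // A "bump": drive the visualizer here.
  }
  requestAnimationFrame(tick);
}

audioEl.addEventListener('play', () => ctx.resume(), { once: true });
tick();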
Thanks kindly for your time and attention. I recognise this is a long shot, but I'm hoping someone might be generous enough to relay some advice or guidance. I am in the beginning phases of researching how I might build an app for a mobile device, using JavaScript and related tools, libraries and packages.
The concept for the app is that it will access the device camera, recognise faces, and overlay animated AR assets onto the device display. However, I want users to couple their phone with a wearable headset and use the app through a split-screen, VR-style display. I assume I'll need some sort of VR wrapper for the core AR application.
At the moment, I am focussed primarily on the graphical display and UI aspects, so that I can build a proof of concept to test whether the idea is even viable. I recognise I may be misguided to attempt this in JavaScript rather than native mobile languages - if this is the case, I would welcome any opinions on the matter. I'm asking about JavaScript because that's what I know, basically.
Thus far, I've been reading about various libraries such as WebXR, Three.js and others. I assume I'll need to use React Native, though it's not easy to get a clear sense of whether I should even be trying to achieve what I want using JavaScript. I have no code to show as yet.
Additionally, I recognise there are similar questions already posted to the forum - for example, this one: VR+AR on mobile phone.
I haven't found any recent threads that address this specific set of requirements so I do apologise if I've missed something. If there is info on the forum, grateful if someone could point me to the relevant thread. At the very least, thanks for reading. Cheers, all.
I found Snap's Lens Studio extremely intuitive and powerful. It provides templates for feature recognition, tracking, and physics, and it offers advanced controls for custom creative work. I would also expect it to keep receiving feature development, and it can be monetized.
...or do you want to expose yourself to more computer vision terminology and patterns? Try searching CodePen or CodeSandbox for terms such as MediaPipe, OpenCV, and face detection webcam. The broader overhead of a VR/AR app - topics like pupil distance, foot tracking, and predictive tracking - is probably best covered by an O'Reilly book or a John Carmack keynote.
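If it helps, below is a minimal, hedged scaffold for just the camera-plus-overlay part: it draws the live feed onto a canvas and paints placeholder graphics on top, with the face detection step stubbed out where a library such as MediaPipe or OpenCV.js would plug in. The element ids are made up.

// Draw the webcam feed to a canvas and overlay boxes where faces are detected.
const video = document.getElementById('camera');    // <video autoplay playsinline>
const canvas = document.getElementById('overlay');  // <canvas> stacked on top
const ctx2d = canvas.getContext('2d');

async function start() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'user' },
    audio: false,
  });
  video.srcObject = stream;
  await video.play();
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  requestAnimationFrame(render);
}

function detectFaces(frame) {
  // Placeholder: return bounding boxes from whichever detection library
  // you choose, e.g. [{ x, y, width, height }].
  return [];
}

function render() {
  ctx2d.drawImage(video, 0, 0, canvas.width, canvas.height);
  for (const box of detectFaces(video)) {
    ctx2d.strokeStyle = 'lime';
    ctx2d.strokeRect(box.x, box.y, box.width, box.height);
    // Replace the rectangle with animated AR assets here.
  }
  requestAnimationFrame(render);
}

start();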
I'm currently working on an experiential project building a sound recorder.
I will use an arduino with a button to start/stop recording, so that there are no interactions with the machine (a concealed Windows laptop) at all for the end user.
Looking for a way to record the sound on the machine from a microphone, ideally in JavaScript.
The sound should be recorded locally (no dependency on Wifi connection) and each sound should be saved as a separate file.
Once the project is done/installed, I will have no access to the machine any more so the files need to be easily accessed by a non technical user (hence the arduino/laptop combo and not a raspberry pi for example).
My forte is JS so I was hoping to do it using Electron but I haven't found a way to do this just yet.
I have tried the obvious navigator.mediaDevices.getUserMedia, which doesn't work on Electron for security reasons. There are a number of libraries out there, but the ones I saw don't work at all - they're outdated and haven't been updated in years.
I also tried using p5.js, which, despite being a bit convoluted, worked quite well, but it requires user input when saving the audio file, which is not an option given the installation will only have one button to start/stop recording as an interface.
Has anybody done this or is anyone able to put me in the right direction?
Sorry, I can't find any information on this. I'm doing a personal project using the Web Audio API, getting microphone input, but the sensitivity is way too high. A friend told me to research the keywords Decibel Threshold/Gating, but I can't seem to find any relevant information. Anyone have any resources? I've referenced a lot of open-source code, so keep the terminology to a minimum please. Thanks!
There's no way to turn down the hardware microphone gain from the Web Audio API, so if it's actually clipping, the user needs to turn down the gain. You could potentially detect that it's clipping (by looking for sample values close to plus or minus 1), and ask the user to turn it down.
If it's not clipping, but is still too loud for your purposes, you can just run it through a gain node. Or if you want to turn it down only if it's over a certain level, you can run it through a compressor node.
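A minimal sketch of both options, plus the clipping check mentioned above; the gain value and compressor settings are illustrative, not recommendations.

const ctx = new AudioContext();

async function setupMic() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const mic = ctx.createMediaStreamSource(stream);

  // Option 1: plain volume reduction.
  const gain = ctx.createGain();
  gain.gain.value = 0.4;                        // roughly -8 dB; tune to taste

  // Option 2: squash only the loud peaks.
  const compressor = ctx.createDynamicsCompressor();
  compressor.threshold.value = -30;             // dBFS level where compression kicks in
  compressor.ratio.value = 6;

  // Either node can be used on its own; here they're simply chained.
  mic.connect(gain).connect(compressor).connect(ctx.destination);

  // Rough clipping check: look for samples near +/-1 in the raw input.
  const analyser = ctx.createAnalyser();
  mic.connect(analyser);
  const buf = new Float32Array(analyser.fftSize);
  setInterval(() => {
    analyser.getFloatTimeDomainData(buf);
    if (buf.some((s) => Math.abs(s) > 0.98)) {
      console.warn('Input is clipping - ask the user to lower the mic gain.');
    }
  }, 250);
}

setupMic();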
I'm playing around a bit with the HTML5 <audio> tag and I noticed some strange behaviour that has to do with the currentTime attribute.
I wanted to have a local audio file played and let the timeupdate event detect when it finishes by comparing the currentTime attribute to the duration attribute.
This actually works pretty fine if I let the song play from the beginning to the end - the end of the song is determined correctly.
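Roughly, the detection boils down to something like this (a paraphrase rather than the exact code from the fiddle linked below; the small epsilon is there because currentTime rarely lands exactly on duration):

const audio = document.querySelector('audio');
audio.addEventListener('timeupdate', () => {
  if (audio.duration && audio.currentTime >= audio.duration - 0.25) {
    console.log('Track finished (or close enough).');
  }
});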
However, changing the currentTime manually (either directly through JavaScript or by using the browser's audio controls) results in the API no longer reporting the correct value of currentTime - it seems to be set some seconds ahead of the position that's actually playing.
(These "some seconds" ahead are based on Chrome; Firefox seems to go completely crazy, which makes the discrepancy way bigger.)
A little jsFiddle example about the problem: http://jsfiddle.net/yp3o8cyw/2/
Can anybody tell me why this happens - or am I just not getting what the API is supposed to do?
P.S.: I just noticed this actually only happens with MP3-encoded files; OGG files are doing totally fine.
After hours of battling this mysterious issue, I believe I have figured out what is going on here. This is not a question of .ogg vs .mp3, this is a question of variable vs. constant bitrate encoding on mp3s (and perhaps other file types).
I cannot take the credit for discovering this, only for scouring the interwebs. Terrill Thompson, a gentleman and a scholar, wrote a detailed article about this problem back on February 1st, 2015, which includes the following excerpt:
Variable Bit Rate (VBR) uses an algorithm to efficiently compress the media, varying between low and high bitrates depending on the complexity of the data at a given moment. Constant Bit Rate (CBR), in contrast, compresses the file using the same bit rate throughout. VBR is more efficient than CBR, and can therefore deliver content of comparable quality in a smaller file size, which sounds attractive, yes?
Unfortunately, there’s a tradeoff if the media is being streamed (including progressive download), especially if timed text is involved. As I’ve learned, VBR-encoded MP3 files do not play back with dependable timing if the user scrubs ahead or back.
I'm writing this for anyone else who runs into this syncing problem (which makes precise syncing of audio and text impossible), because if you do, it's a real nightmare to figure out what is going on.
My next step is to do some more testing, and finally to figure out an efficient way to convert all my .mp3s to constant bit rate. I'm thinking FFmpeg may be able to help, but I'll explore that in another thread. Thanks also to Loilo for originally posting about this issue and to Brad for the information he shared.
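In case it helps anyone else, here is one possible way to do that batch conversion from Node, assuming ffmpeg is installed and on the PATH. With libmp3lame, passing a fixed -b:a value produces constant-bit-rate output; the folder name and 192k bitrate are just examples.

// Re-encode every .mp3 in a folder to constant bit rate via ffmpeg.
const { execFileSync } = require('child_process');
const fs = require('fs');
const path = require('path');

const srcDir = './audio';                       // hypothetical input folder
for (const name of fs.readdirSync(srcDir)) {
  if (path.extname(name).toLowerCase() !== '.mp3') continue;
  const input = path.join(srcDir, name);
  const output = path.join(srcDir, 'cbr-' + name);
  execFileSync('ffmpeg', ['-i', input, '-codec:a', 'libmp3lame', '-b:a', '192k', output]);
}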
First off, I'm not actually able to reproduce your problem on my machine, but I only have a short MP3 file handy at the moment, so that might be the issue. In any case, I think I can explain what's going on.
MP3 files (MPEG) are very simple streams and do not have absolute positional data within them. It isn't possible from reading the first part of the file to know at what byte offset some arbitrary frame begins. The media player seeks in the file by needle dropping. That is, it knows the size of the entire track and roughly how far into the track your time offset is. It guesses and begins decoding, picking up as soon as it synchronizes to the next frame header. This is an imprecise process.
Ogg is a more robust container and has time offsets built into its frame headers. Seeking in an Ogg file is much more straightforward.
The other issue is that most browsers that support MP3 do so only because the codec is already available on your system. Playing Ogg Vorbis and MP3 are usually completely different libraries with different APIs. While the web standards do a lot to provide a common abstraction, minor implementation details cause quirks like you are seeing.
I was wondering if there is a way to control audio output device patching in HTML5/JavaScript? Like, if the user wanted to have one sound in my web app to go out of one audio device, and another sound out of a different audio device. I know the user can set the default output device on their computer, but for the web app I'm working on, I would like them to be able to send individual sounds to individual outputs while other sounds are playing, similar to the interface below (from a program called QLab).
I feel like the obvious answer is NO, and I do not want to resort to using Flash or Java. I MIGHT be okay with having to write some sort of browser plugin that interfaces with JavaScript.
So, after receiving basically zero helpful answers - and finding no further helpful information online - I think I figured out something that we, as developers, NEED to start requesting from browser vendors and the W3C. We need to be able to request hardware access from users, in a similar fashion to how we can request access to a user's location, or request to send a user push notifications.
Until web developers are allowed the same control as native application developers over hardware, we will be left at a huge disadvantage over what we can offer our users. I don't want to have my users install third/fourth party plugins to enable a little more control/access to their I/O. Users should not have to be inundated with keeping more software than just their web browser updated to have websites run well and securely. And I, for one, do not feel like it should be necessary to write in more languages than HTML, JavaScript, CSS, and PHP to get the same experience a user would get from a native application.
I have no idea how we approach browser vendors about this, but I feel like it would be good to start doing this.
I know this is a little old, but just this year a method called "setSinkId" was added that you can call on a media element (video, audio) to set the device its audio will be output to.
// setSinkId() lives on the DOM element (not the jQuery wrapper) and returns a Promise
$('#video-element')[0].setSinkId('default');
https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement/setSinkId
Though currently it seems only Chrome supports it. I haven't tested on Firefox or other web browsers.
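For a fuller picture, here is a hedged sketch that lists the available audio outputs with enumerateDevices() and then routes an element's sound to one of them. The element id is hypothetical, and device labels typically only show up once the page has been granted some media permission.

const video = document.getElementById('video-element');

navigator.mediaDevices.enumerateDevices().then((devices) => {
  const outputs = devices.filter((d) => d.kind === 'audiooutput');
  outputs.forEach((d) => console.log(d.deviceId, d.label));

  if (outputs.length > 1 && typeof video.setSinkId === 'function') {
    video.setSinkId(outputs[1].deviceId)        // setSinkId() returns a Promise
      .then(() => console.log('Audio routed to', outputs[1].label))
      .catch((err) => console.error('Could not switch output:', err));
  }
});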
I suggest you take a look at the Web Audio API:
Specs --- Tutorial
There is the destination property in the Web Audio API. However, it is a read-only property ... so not settable.
Here:
The destination property always correlates to the default hardware output of sound, whether it’s through speakers, attached headphones, or a Bluetooth headset.
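In practice that means you connect nodes to context.destination rather than pointing it somewhere else - a trivial illustration:

const ctx = new AudioContext();
const osc = ctx.createOscillator();
osc.connect(ctx.destination);   // audio is routed *to* the default output
osc.start();                    // destination itself can't be reassigned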
I'm working on a sound installation based on web audio and have run into the same problem. I want to map different channel outputs to different speakers. Have you had any progress on this?
This gentleman seems to have managed to do it: http://www.brucewiggins.co.uk/?p=311
I tested this out on an Apogee Quartet and it worked - outputting to 8 different channels.
I also found this article useful: http://www.html5audio.org/2013/03/surround-audio-comes-to-the-web.html
// Use four discrete output channels if the hardware reports at least four...
if (context.destination.maxChannelCount >= 4) {
  context.destination.channelCount = 4;
}
// ...otherwise, let's down-mix to 2.0
else {
  context.destination.channelCount = 2;
}
context.destination.channelCountMode = "explicit";
context.destination.channelInterpretation = "discrete";
context.destination.numberOfOutputs = 4; // note: numberOfOutputs is read-only, so this assignment has no effect
While you can certainly use the splitter and merger nodes to assign to specific channels on the output, the actual devices you output to are abstracted by the browser and inaccessible to your code.
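For what it's worth, here is a minimal sketch of that merger-node approach, with oscillators standing in for real sources; whether the extra channels actually reach separate physical outputs still depends on the browser and the device it opens.

const ctx = new AudioContext();

// Ask the destination for as many channels as the device reports (at least 2).
const outs = Math.max(ctx.destination.maxChannelCount, 2);
ctx.destination.channelCount = outs;
ctx.destination.channelCountMode = 'explicit';
ctx.destination.channelInterpretation = 'discrete';

const merger = ctx.createChannelMerger(outs);

const a = ctx.createOscillator();               // stand-ins for real sources
const b = ctx.createOscillator();
b.frequency.value = 220;

a.connect(merger, 0, 0);                        // source A -> channel 1
b.connect(merger, 0, Math.min(2, outs - 1));    // source B -> channel 3 if present
merger.connect(ctx.destination);
a.start();
b.start();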
I have done some experiments with 8-channel virtual audio cables and relaying that data to other sound devices outside of the browser. Unfortunately, I can't find a browser that will actually open an 8-channel sound card with more than 2 channels.
Hopefully, browsers in the future will provide more options. This flexibility will never come directly to JavaScript... nor should it. This is an abstraction done for you, and if the browser uses it correctly, it won't be an issue.