How can I take one frame from each MediaStream? - javascript

In our JavaScript app we are trying to extract a single frame from each video tag. Each tag displays a MediaStream from another user (with an audio track and a video track), which that user captured via navigator.mediaDevices.getUserMedia and then sent using the peerjs API with peerConnection.answer(stream).
Now I am trying to extract a single frame from the MediaStream's video track, which will then be sent to another server.
I have not dealt with MediaStreams before and would welcome any suggestions on how to implement this. I will post the entire code in the next few days, but it will take some time to crop out the relevant segments. Thank you.
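A minimal sketch of one possible approach, assuming stream is the received MediaStream: paint the current video frame into a canvas and post the resulting blob (the upload endpoint is a placeholder):

const video = document.createElement("video");
video.srcObject = stream;   // the MediaStream received via peerjs
await video.play();         // frames are only available while the video is playing

const canvas = document.createElement("canvas");
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
canvas.getContext("2d").drawImage(video, 0, 0);

canvas.toBlob((blob) => {
  const body = new FormData();
  body.append("frame", blob, "frame.jpg");
  fetch("https://example.com/frames", { method: "POST", body }); // placeholder endpoint
}, "image/jpeg");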

Related

Streaming Icecast Audio & Metadata with Javascript and the Web Audio API

I've been trying to figure out the best way to go about implementing an idea I've had for a while.
Currently, I have an icecast mp3 stream of a radio scanner, with "now playing" metadata that is updated in realtime depending on what channel the scanner has landed on. When using a dedicated media player such as VLC, the metadata is perfectly lined up with the received audio and it functions exactly as I want it to - essentially a remote radio scanner. I would like to implement something similar via a webpage, and on the surface this seems like a simple task.
If all I wanted to do was stream audio, using simple <audio> tags would suffice. However, HTML5 audio players have no concept of the embedded in-stream metadata that icecast encodes along with the mp3 audio data. While I could query the current "now playing" metadata from the icecast server status json, due to client & serverside buffering there could be upwards of 20 seconds of delay between audio and metadata when done in this fashion. When the scanner is changing its "now playing" metadata upwards of every second in some cases, this is completely unsuitable for my application.
There is a very interesting Node.JS solution that was developed with this exact goal in mind - realtime metadata in a radio scanner application: icecast-metadata-js. This shows that it is indeed possible to handle both audio and metadata from a single icecast stream. The live demo is particularly impressive: https://eshaz.github.io/icecast-metadata-js/
However, I'm looking for a solution that can run totally clientside without needing a Node.JS installation and it seems like that should be relatively trivial.
After searching most of the day today, it seems that there are several similar questions asked on this site and elsewhere, without any cohesive, well-laid out answers or recommendations. From what I've been able to gather so far, I believe my solution is to use a Javascript streaming function (such as fetch) to pull the raw mp3 & metadata from the icecast server, playing the audio via Web Audio API and handling the metadata blocks as they arrive. Something like the diagram below:
I'm wondering if anyone has any good reading and/or examples for playing mp3 streams via the Web Audio API. I'm still a relative novice at most things JS, but I get the basic idea of the API and how it handles audio data. What I'm struggling with is the proper way to implement a) the live processing of data from the mp3 stream, and b) detecting metadata chunks embedded in the stream and handling those accordingly.
Apologies if this is a long-winded question, but I wanted to give enough backstory to explain why I want to go about things the specific way I do.
Thanks in advance for the suggestions and help!
I'm glad you found my library icecast-metadata-js! This library can actually be used both client-side and in NodeJS. All of the source code for the live demo, which runs completely client side, is here in the repository: https://github.com/eshaz/icecast-metadata-js/tree/master/src/demo. The streams in the demo are unaltered and are just normal Icecast streams on the server side.
What you have in your diagram is essentially correct. ICY metadata is interlaced within the actual MP3 "stream" data. The metadata interval, i.e. how frequently ICY metadata updates happen, can be configured in the Icecast server configuration XML. It also depends on how frequently / accurately your source sends metadata updates to Icecast. The software used in the police scanner on my demo page updates almost exactly in time with the audio.
Usually, the default metadata interval is 16,000 bytes, meaning that for every 16,000 stream (mp3) bytes, a metadata update will be sent by Icecast. The metadata update always starts with a length byte. If the length byte is greater than 0, the length of the metadata update is the metadata length byte * 16.
ICY metadata is a string of key='value' pairs delimited by semicolons. Any unused length in the metadata update is null padded.
e.g. "StreamTitle='The Stream Title';StreamUrl='https://example.com';\0\0\0\0\0\0"
read [metadataInterval bytes] -> Stream data
read [1 byte] -> Metadata Length
if [Metadata Length > 0]
read [Metadata Length * 16 bytes] -> Metadata
byte length           | response data        | action
ICY Metadata Interval | stream data          | send to your audio decoder
1                     | metadata length byte | use to determine length of metadata string (do not send to audio decoder)
Metadata Length * 16  | metadata string      | decode and update your "Now Playing" (do not send to audio decoder)
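A minimal sketch of that read loop in JavaScript, assuming the entire response is already in one Uint8Array; a real implementation has to carry this state across fetch chunks, and the function and callback names are illustrative:

function parseIcy(bytes, metaInt, onAudio, onMetadata) {
  const decoder = new TextDecoder();
  let i = 0;
  while (i < bytes.length) {
    // metaInt bytes of mp3 data, then 1 length byte, then length * 16 bytes of metadata
    onAudio(bytes.subarray(i, Math.min(i + metaInt, bytes.length)));
    i += metaInt;
    if (i >= bytes.length) break;
    const metaLength = bytes[i++] * 16;
    if (metaLength > 0) {
      const metadata = decoder.decode(bytes.subarray(i, i + metaLength));
      onMetadata(metadata.replace(/\0+$/, "")); // strip the null padding
      i += metaLength;
    }
  }
}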
The initial GET request to your Icecast server will need to include the Icy-MetaData: 1 header, which tells Icecast to supply the interlaced metadata. The response header will contain the ICY metadata interval Icy-MetaInt, which should be captured (if possible) and used to determine the metadata interval.
In the demo, I'm using the client-side fetch API to make that GET request, and the response data is supplied into an instance of IcecastReadableStream which splits out the stream and metadata, and makes each available via callbacks. I'm using the Media Source API to play the stream data, and to get the timing data to properly synchronize the metadata updates.
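A sketch of that initial request with the fetch API; the stream URL is a placeholder, and reading Icy-MetaInt only works if the server exposes the header, per the CORS configuration below:

const response = await fetch("https://example.com/stream", {
  method: "GET",
  headers: { "Icy-MetaData": "1" }, // ask Icecast to interlace ICY metadata
});
// null unless exposed via Access-Control-Expose-Headers
const metaInt = parseInt(response.headers.get("Icy-MetaInt"), 10);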
This is the bare-minimum CORS configuration needed for reading ICY Metadata:
Access-Control-Allow-Origin: '*' // this can be scoped further down to your domain also
Access-Control-Allow-Methods: 'GET, OPTIONS'
Access-Control-Allow-Headers: 'Content-Type, Icy-Metadata'
icecast-metadata-js can detect the ICY metadata interval if needed, but it's better to allow clients to read it from the header with this additional CORS configuration:
Access-Control-Expose-Headers: 'Icy-MetaInt'
Also, I'm planning on releasing a new feature (after I finish with Ogg metadata) that encapsulates the fetch API logic, so that all a user needs to do is supply an Icecast endpoint and get audio / metadata back.

About how Google Analytics collects data

First of all, Hi to all of you (I'm new here).
I'm looking at how Google Analytics works, as I'm going to develop a similar tracking JS to collect all the data I need for my websites. As far as I can see, the ga.js script sends all the data (maybe not all, but a good part of it) with a GET request for a 1x1 GIF, with all the parameters appended.
Seen here: How does google analytics collect its data?
So, on the server side, it seems the only way to "read" all these parameters is to analyze the server logs and then collect everything into my database?
Is this the best option for getting user data?
I think server logging could "switch file" every 2 hours, so you can analyze the file from the past 2 hours and show "not that old" data on your graph!
Of course it will never be a "realtime" graph, but a 2-hour delay could be acceptable, I think.
I think you can simply put a script (PHP, for example) at the image path and have the script return the image as its response. By doing this you can act in real time, since the script has access to all the data that would otherwise end up in your server log.
If you want to try my solution, I think a good starting point (in PHP) would be this to create the GIF image; then you can use the data in $_SERVER to start gathering data!
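For completeness, the client-side half of this pattern is just a GET request with the data in the query string; a minimal sketch with an illustrative endpoint:

function track(params) {
  const img = new Image(1, 1); // the classic 1x1 tracking pixel
  img.src = "https://stats.example.com/collect.gif?" + new URLSearchParams(params);
}
track({ page: location.pathname, ref: document.referrer });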

Web Audio API: Collect all audio informations at "once"

I know that I can collect audio data from currently playing audio with getByteFrequencyData(), and I'll get back a Uint8Array.
Currently I collect all the data of one audio file by pushing the current data into an array on each animation frame. For example:
I have an audio file with a duration of 20 minutes.
After 20 minutes I then have all the audio data in one array, which looks something like this:
var data = [Uint8Array[1024], Uint8Array[1024], Uint8Array[1024], Uint8Array[1024], ... ];
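A minimal sketch of that per-frame collection loop, assuming an AnalyserNode named analyser is already connected to the playing source:

const data = [];
(function collect() {
  const bins = new Uint8Array(analyser.frequencyBinCount); // 1024 with the default fftSize of 2048
  analyser.getByteFrequencyData(bins);
  data.push(bins);
  requestAnimationFrame(collect); // runs for the full 20 minutes
})();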
Is there a faster way to get all this audio data, so I don't have to wait the full 20 minutes of the file and can get the audio data nearly instantly?
It would be good to receive the audio information in fixed steps, like 50 ms or so!
Instead of using an AudioContext, use an OfflineAudioContext to process the data. This can run much faster than real time. To get consistent data at well defined times, you'll also need to use a browser that implements the recently added suspend and resume feature for offline contexts so that you can sample the data for getByteFrequencyData at well-defined time intervals.
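A hedged sketch of that approach, assuming audioBuffer holds the decoded file (e.g. from decodeAudioData) and sampling every 50 ms:

const ctx = new OfflineAudioContext(
  audioBuffer.numberOfChannels, audioBuffer.length, audioBuffer.sampleRate
);
const source = ctx.createBufferSource();
source.buffer = audioBuffer;
const analyser = ctx.createAnalyser();
source.connect(analyser);
analyser.connect(ctx.destination);
source.start();

const step = 0.05; // 50 ms
const data = [];
(function scheduleSuspend(t) {
  if (t >= audioBuffer.duration) return;
  // rendering pauses at t; sample, schedule the next stop, then resume
  ctx.suspend(t).then(() => {
    const bins = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(bins);
    data.push(bins);
    scheduleSuspend(t + step);
    ctx.resume();
  });
})(step);
ctx.startRendering().then(() => {
  // `data` now holds one Uint8Array per 50 ms of the file
});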

Upload files asynchronously then save data about it

I am building a way for users to upload tracks with information about each track, but I would like to do this asynchronously, much like YouTube does.
At the moment there is an API endpoint of tracks that accepts a POST request with the uploaded file and all the meta data. It processes the track, validates everything and will then save the path to the track and all of its meta data in the database. This works perfectly but I am having trouble thinking of ways to do this asynchronously.
The user flow will be:
1) User selects a track and it starts uploading
2) A form to fill in meta data shows and user fills it in
3) Track is uploaded with its metadata to the endpoint
The problem is that the metadata form and the file upload are now two separate entities, and the file can finish uploading before the metadata is saved, or vice versa. Ideally, to overcome this, both the track and the metadata would be saved in the browser, as a cookie or something, until both are complete. At that point both would be sent to the endpoint and no changes would be required at the back end. As far as I am aware there is no way of saving files client side like this. Oh, apart from the FileSystem API, which is pretty much deprecated.
If anyone has any good suggestions about how to do this it would be much appreciated. In a perfect world there would be no changes to the back end at all, but small changes are probably going to be required. Preferably no database alterations, though.
Oh, by the way, I'm using Laravel and Ember.js, in case anyone knows of any packages already doing this.
I thought about this a lot a few months ago.
The closest solution I managed to put together is to upload the file and store its filename, size, upload time (this is crucial) and other attributes in the DB (as usual). Additionally, I added a temporary column (more like a flag) which is initially set to TRUE and is only negated once you send the metadata.
Separately, I set up a cron job (I used Symfony2, but in Laravel it's all the same) that runs every 15-30 minutes and deletes the files (and corresponding database records) that have temporary = TRUE and have exceeded the time window. In my case it was 15 minutes, but you could make it coarser (every hour or so).
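A sketch of the client-side half of that flow; the endpoint paths and response shape are illustrative, not an existing API:

// 1) upload the file right away; the server stores it with temporary = TRUE
const { id } = await fetch("/api/tracks", {
  method: "POST",
  body: trackFormData, // FormData containing the selected file
}).then((r) => r.json());

// 2) once the metadata form is submitted, attach it; the server clears the flag
await fetch(`/api/tracks/${id}/metadata`, {
  method: "PATCH",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(metadata),
});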
Hope this helps a bit :)

How to get access to the output buffer with the Web Audio API?

I'd like to access the audio data in the output/destination buffer. To be specific, I would like to save the data to a file. I would also like to add custom effects as AudioNodes.
How can I achieve this with the Web Audio API? I don't see an AudioDestinationBuffer interface or a way to add a custom AudioNode in the specs.
You'll have to add one of these http://www.w3.org/TR/webaudio/#JavaScriptAudioNode right before connecting to the destination.
This will give you access to the raw audio data, and any processing done by effect nodes etc. will already have been applied. Just make sure this is the very last node before the destination.
Here's a little something on how you use the JavaScriptAudioNode http://www.html5rocks.com/en/tutorials/webaudio/games/#toc-clip-detect, which I hope will illustrate how to access the audio data.
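For illustration, a minimal sketch of such a tap using createScriptProcessor, the later name for this node; ctx is your AudioContext and lastEffectNode stands in for the final node in your effect chain:

const processor = ctx.createScriptProcessor(4096, 2, 2);
processor.onaudioprocess = (e) => {
  // raw samples, with all upstream effect processing already applied
  const left = e.inputBuffer.getChannelData(0);
  // ...record or inspect `left` here (e.g. accumulate it for saving to a file)...
  // pass the audio through unchanged
  for (let ch = 0; ch < e.outputBuffer.numberOfChannels; ch++) {
    e.outputBuffer.getChannelData(ch).set(e.inputBuffer.getChannelData(ch));
  }
};
lastEffectNode.connect(processor); // illustrative: the last node in your chain
processor.connect(ctx.destination);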
JavaScriptNode, later renamed to ScriptProcessorNode, has been deprecated in favor of AudioWorkletNode. There is an example on the AudioWorkletProcessor MDN page.
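An equivalent sketch with an AudioWorkletNode; the processor module is inlined via a Blob URL here only to keep the example self-contained:

const moduleSource = `
  registerProcessor("tap", class extends AudioWorkletProcessor {
    process(inputs, outputs) {
      const input = inputs[0], output = outputs[0];
      for (let ch = 0; ch < input.length; ch++) output[ch].set(input[ch]);
      // ship a copy of channel 0 (one 128-sample render quantum) to the main thread
      if (input[0]) this.port.postMessage(input[0].slice(0));
      return true;
    }
  });
`;
await ctx.audioWorklet.addModule(
  URL.createObjectURL(new Blob([moduleSource], { type: "application/javascript" }))
);
const tap = new AudioWorkletNode(ctx, "tap");
tap.port.onmessage = (e) => { /* e.data: Float32Array of raw samples */ };
lastEffectNode.connect(tap); // illustrative: the last node in your chain
tap.connect(ctx.destination);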
