Bypass the 6-download limit for watching multiple videos - javascript

I have to code a website with the capability of watching many live streams (video-surveillance cameras) at the same time.
So far, I'm using MJPEG and JS to play my live videos, and it is working well ... but only for up to 6 streams!
Indeed, I'm stuck with the 6-parallel-download limit that most browsers have (link).
Does anyone know how to bypass this limit? Is there a trick?
So far, my options are:
increase the limit (only possible in Firefox), but I don't like messing with my users' browser settings
merge the streams into one big stream/video on the server side, so that I only have one download at a time. But then I won't be able to deal with each stream individually, will I?
switch to a JPEG stream and deal with a queue of images to be refreshed on the front end (but if I have, say, 15 streams, I'm afraid I will overwhelm my client's browser with requests (15 x 25 images/s))
Do I have any other options? Is there a trick or a lib? For example, could I merge my streams into one big pipe (so one download at a time) but still access each one individually in the front-end code?
I'm sure I'm on the right Stack Exchange site to ask this; if I'm not, please tell me ;-)

Why not stream everything over one connection (if you have control over the server side and the line can handle it)? You make one request for all 15 streams to be sent/streamed over one connection (not as one big stream), so the headers of each chunk have to match the appropriate stream ID. Read more: http://qnimate.com/what-is-multiplexing-in-http2/
More in-depth here: https://hpbn.co/http2/
With HTTP/1.0/1.1 you are out of luck for this scenario - back when they were developed, one video or MP3 file was already heavy stuff (workarounds were e.g. torrent libraries, but those were unreliable and not suited to most scenarios apart from mere downloading/streaming). For your interactive scenario, HTTP/2 is the way to go imho.
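A minimal client-side sketch of that idea, assuming the camera streams are proxied through a single origin served over HTTP/2 (the /camera/<n>/mjpeg endpoints are hypothetical): the page just requests every stream and lets the browser multiplex them over one connection, so each stream stays individually addressable.

// With HTTP/2 the browser multiplexes all requests to the same origin over one
// connection, so the ~6-connection limit of HTTP/1.x no longer applies.
// The /camera/<n>/mjpeg endpoints are hypothetical.
const container = document.getElementById('cameras');
for (let i = 1; i <= 15; i++) {
  const img = document.createElement('img');
  img.src = `/camera/${i}/mjpeg`; // each MJPEG stream remains its own element
  img.alt = `Camera ${i}`;
  container.appendChild(img);
}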

As Codebreaker007 said, I would prefer HTTP2 stream multiplexing too. It is specifically designed to get around the very problem of too many concurrent connections.
However, if you are stuck with HTTP1.x I don't think you're completely out of luck. It is possible to merge the streams in a way so that the clientside can destructure and manipulate the individual streams, although admittedly it takes a bit more work, and you might have to resort to clientside polling.
The idea is simple - define a really simple data structure:
[streamCount len1 data1 len2 data2 ...]
Byte 0 ~ 3: 32-bit unsigned int, number of merged streams
Byte 4 ~ 7: 32-bit unsigned int, length of the data of stream 1 (len1)
Byte 8 ~ 8+len1-1: binary data of stream 1
Byte 8+len1 ~ 8+len1+3: 32-bit unsigned int, length of the data of stream 2
...
Each stream's data is allowed to have a length of 0, and is handled no differently in that case.
On the clientside, poll continuously for more data, expecting this data structure. Then destructure it and pipe the data to the individual streams' buffer. Then you can still manipulate the component streams individually.
On the serverside, cache the data from individual component streams in memory. Then in each response empty the cache, compose this data structure and send.
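As a rough illustration, the client-side destructuring of that format could look like the sketch below; the polling URL, the onChunk callback, and the big-endian byte order are assumptions, not part of the answer above.

// Parse one merged payload: [streamCount, len1, data1, len2, data2, ...]
// (32-bit unsigned big-endian integers assumed).
function splitMergedPayload(arrayBuffer) {
  const view = new DataView(arrayBuffer);
  const streamCount = view.getUint32(0);
  const chunks = [];
  let offset = 4;
  for (let i = 0; i < streamCount; i++) {
    const len = view.getUint32(offset);
    offset += 4;
    // A zero-length entry simply yields an empty chunk for that stream.
    chunks.push(new Uint8Array(arrayBuffer, offset, len));
    offset += len;
  }
  return chunks; // chunks[i] is the latest data for stream i
}

// Client-side polling loop: fetch the merged payload and dispatch each chunk.
async function poll(url, onChunk) {
  for (;;) {
    const response = await fetch(url);           // url is hypothetical
    const buffer = await response.arrayBuffer();
    splitMergedPayload(buffer).forEach((chunk, i) => onChunk(i, chunk));
  }
}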
But again, this is very much a plaster solution. I would recommend using HTTP2 stream as well, but this would be a reasonable fallback.

Related

Streaming Icecast Audio & Metadata with Javascript and the Web Audio API

I've been trying to figure out the best way to go about implementing an idea I've had for a while.
Currently, I have an icecast mp3 stream of a radio scanner, with "now playing" metadata that is updated in realtime depending on what channel the scanner has landed on. When using a dedicated media player such as VLC, the metadata is perfectly lined up with the received audio and it functions exactly as I want it to - essentially a remote radio scanner. I would like to implement something similar via a webpage, and on the surface this seems like a simple task.
If all I wanted to do was stream audio, using simple <audio> tags would suffice. However, HTML5 audio players have no concept of the embedded in-stream metadata that icecast encodes along with the mp3 audio data. While I could query the current "now playing" metadata from the icecast server status json, due to client & serverside buffering there could be upwards of 20 seconds of delay between audio and metadata when done in this fashion. When the scanner is changing its "now playing" metadata upwards of every second in some cases, this is completely unsuitable for my application.
There is a very interesting Node.JS solution that was developed with this exact goal in mind - realtime metadata in a radio scanner application: icecast-metadata-js. This shows that it is indeed possible to handle both audio and metadata from a single icecast stream. The live demo is particularly impressive: https://eshaz.github.io/icecast-metadata-js/
However, I'm looking for a solution that can run totally clientside without needing a Node.JS installation and it seems like that should be relatively trivial.
After searching most of the day today, it seems that there are several similar questions asked on this site and elsewhere, without any cohesive, well-laid out answers or recommendations. From what I've been able to gather so far, I believe my solution is to use a Javascript streaming function (such as fetch) to pull the raw mp3 & metadata from the icecast server, playing the audio via Web Audio API and handling the metadata blocks as they arrive. Something like the diagram below:
I'm wondering if anyone has any good reading and/or examples for playing mp3 streams via the Web Audio API. I'm still a relative novice at most things JS, but I get the basic idea of the API and how it handles audio data. What I'm struggling with is the proper way to implement a) the live processing of data from the mp3 stream, and b) detecting metadata chunks embedded in the stream and handling those accordingly.
Apologies if this is a long-winded question, but I wanted to give enough backstory to explain why I want to go about things the specific way I do.
Thanks in advance for the suggestions and help!
I'm glad you found my library icecast-metadata-js! This library can actually be used both client-side and in NodeJS. All of the source code for the live demo, which runs completely client side, is here in the repository: https://github.com/eshaz/icecast-metadata-js/tree/master/src/demo. The streams in the demo are unaltered and are just normal Icecast streams on the server side.
What you have in your diagram is essentially correct. ICY metadata is interlaced within the actual MP3 "stream" data. The metadata interval, or frequency at which ICY metadata updates happen, can be configured in the Icecast server configuration XML. Also, it may depend on how frequently / accurately your source sends metadata updates to Icecast. The software used in the police scanner on my demo page updates almost exactly in time with the audio.
Usually, the default metadata interval is 16,000 bytes, meaning that for every 16,000 stream (mp3) bytes, a metadata update will be sent by Icecast. The metadata update always contains a length byte. If the length byte is greater than 0, the length of the metadata update is the metadata length byte * 16.
ICY Metadata is a string of key='value' pairs delimited by a semicolon. Any unused length in the metadata update is null padded.
i.e. "StreamTitle='The Stream Title';StreamUrl='https://example.com';\0\0\0\0\0\0"
read [metadataInterval bytes] -> Stream data
read [1 byte] -> Metadata Length
if [Metadata Length > 0]
    read [Metadata Length * 16 bytes] -> Metadata
byte length            | response data         | action
-----------------------|-----------------------|--------------------------------------------------------------------------
ICY Metadata Interval  | stream data           | send to your audio decoder
1                      | metadata length byte  | use to determine length of metadata string (do not send to audio decoder)
Metadata Length * 16   | metadata string       | decode and update your "Now Playing" (do not send to audio decoder)
The initial GET request to your Icecast server will need to include the Icy-MetaData: 1 header, which tells Icecast to supply the interlaced metadata. The response header will contain the ICY metadata interval Icy-MetaInt, which should be captured (if possible) and used to determine the metadata interval.
In the demo, I'm using the client-side fetch API to make that GET request, and the response data is supplied into an instance of IcecastReadableStream which splits out the stream and metadata, and makes each available via callbacks. I'm using the Media Source API to play the stream data, and to get the timing data to properly synchronize the metadata updates.
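As a hand-rolled sketch of that read loop (not the library's actual code), assuming the server exposes Icy-MetaInt through CORS and that streamUrl, onAudio, and onMetadata are supplied by the caller:

// Request interlaced metadata and split the response into audio and metadata.
async function readIcyStream(streamUrl, onAudio, onMetadata) {
  const response = await fetch(streamUrl, { headers: { 'Icy-MetaData': '1' } });
  const metaInt = parseInt(response.headers.get('Icy-MetaInt'), 10); // e.g. 16000
  const reader = response.body.getReader();

  let buffer = new Uint8Array(0);
  let audioBytesLeft = metaInt; // audio bytes remaining before the next length byte

  const append = (chunk) => {
    const merged = new Uint8Array(buffer.length + chunk.length);
    merged.set(buffer);
    merged.set(chunk, buffer.length);
    buffer = merged;
  };

  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    append(value);

    // Consume as many complete [audio][length byte][metadata] units as possible.
    for (;;) {
      if (audioBytesLeft > 0) {
        if (buffer.length === 0) break;
        const take = Math.min(audioBytesLeft, buffer.length);
        onAudio(buffer.subarray(0, take));        // send to your audio decoder
        buffer = buffer.subarray(take);
        audioBytesLeft -= take;
      } else {
        if (buffer.length < 1) break;
        const metaLength = buffer[0] * 16;        // metadata length byte * 16
        if (buffer.length < 1 + metaLength) break;
        if (metaLength > 0) {
          const text = new TextDecoder().decode(buffer.subarray(1, 1 + metaLength));
          onMetadata(text.replace(/\0+$/, ''));   // strip null padding, update "Now Playing"
        }
        buffer = buffer.subarray(1 + metaLength);
        audioBytesLeft = metaInt;                 // start the next audio block
      }
    }
  }
}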
This is the bare-minimum CORS configuration needed for reading ICY Metadata:
Access-Control-Allow-Origin: '*' // this can be scoped further down to your domain also
Access-Control-Allow-Methods: 'GET, OPTIONS'
Access-Control-Allow-Headers: 'Content-Type, Icy-Metadata'
icecast-metadata-js can detect the ICY metadata interval if needed, but it's better to allow clients to read it from the header with this additional CORS configuration:
Access-Control-Expose-Headers: 'Icy-MetaInt'
Also, I'm planning on releasing a new feature (after I finish with Ogg metadata) that encapsulates the fetch api logic so that all a user needs to do is supply an Icecast endpoint, and get audio / metadata back.

PHP: Request 50MB-100MB json - browser crash / do not display any result

Huge JSON server requests: around 50MB - 100MB, for example.
From what I know, it might crash when loading a huge amount of data into a table (I usually use datatables); the result: memory reaches almost 8 GB and the browser crashes. Chrome might not return a result, and Firefox will usually ask if I want to wait or kill the process.
I'm going to start working on a project which will send requests for huge JSONs, all compressed (done by the server-side PHP). The purpose of my report is to fetch data and display it all in a table that is easy to filter and order. So I can't see a use for lazy loading in this specific case.
I might use a vue-js datatable library this time (not sure which specifically).
What exactly is using so much of my memory? I know for sure that the JSON result is received. Is it the rendering/parsing of the JSON into the DOM? (I'm referring to the datatables example for now: https://datatables.net/examples/data_sources/ajax)
What are the best practices in this kind of situation?
I started researching this issue and noticed that there are posts from 2010 that no longer seem relevant at all.
There is no limit on the size of an HTTP response. There is a limit on other things, such as:
local storage
session storage
cache
cookies
query string length
memory (per your CPU limitations or browser allocation)
Instead, the problem is most likely with your implementation of your datatable. You can't just insert 100,000 nodes into the DOM and not expect some type of performance impact. Furthermore, if the datatable is performing logic against each of those rows as they come in, and processing them before the node insertion, that's also going to be a big no-no.
What you've done here is essentially pass the legwork of performing pagination from the server to the client, with dire consequences.
If you must return a response that big, consider using one of the storage options that browsers provide (a few mentioned above). Then paginate off of the stored JSON response.
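A minimal sketch of that client-side pagination, assuming the response is a JSON array and using hypothetical table markup and field names:

// Keep the parsed response out of the DOM and render one page of rows at a time.
const PAGE_SIZE = 100;
let rows = []; // the parsed JSON array, kept in memory (or in IndexedDB)

async function loadData(url) {
  const response = await fetch(url);
  rows = await response.json(); // parse once; do not insert everything into the DOM
  renderPage(0);
}

function renderPage(page) {
  const tbody = document.querySelector('#report tbody');
  tbody.innerHTML = '';
  const fragment = document.createDocumentFragment();
  for (const row of rows.slice(page * PAGE_SIZE, (page + 1) * PAGE_SIZE)) {
    const tr = document.createElement('tr');
    tr.textContent = `${row.id} ${row.name}`; // hypothetical fields
    fragment.appendChild(tr);
  }
  tbody.appendChild(fragment); // a single DOM insertion per page
}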

How internet connection download/upload speed should be calculated to one host

This is a cross-post from the Software Engineering Q&A site.
There are a couple of (actually, a lot of) websites that provide internet speed tests. I tried to build the same, but I'm still not able to get accurate results.
Trying
Created several files on the server, let's say of (1, 8, 32, 64, 128, 256, 512, 1024) KB.
Then, on the client side, I'm downloading each of them.
Measuring:
start time of the request to the server
time of the 1st response from the server
time it finishes downloading
Then, internet speed = total transferred size / time taken in seconds.
I checked a couple of other websites which do not download large files / large data (more than 5 KB); instead, a lot of requests are made to the server in parallel.
Also, there seems to be some smoothing or stabilizing factor, something which samples the data and calculates better results.
Here is how speedtest.net implements it, but I'm still not able to understand it properly:
https://support.speedtest.net/hc/en-us/articles/203845400-How-does-the-test-itself-work-How-is-the-result-calculated-
Can someone guide me to understand this and point me in the right direction to calculate internet speed?
Edit: I want to show my users on my web/app how much speed they are getting on it. For this I'm trying to apply general criteria, similar to Speedtest, but instead of measuring against multiple servers, I just want to try with one server only.
The general idea is to compute parameters that let you saturate the physical communication channel. The main part is to determine how many parallel downloads are needed to reach that goal.
A single connection is clearly not sufficient because there are many idle periods during which you could send other packets. In a very rough approximation, to receive data you send a request packet from A to B, and the data is then sent back from B to A; while that data is coming back, you can clearly request something else. You can also think of how many data packets can be in flight on the link from X to Y at once, just like several cars can be on the same road from B to A at the same time, each car being a packet from a given connection.
Determining the speed of a connection is highly dependent on many factors, and what is obtained is only an approximation.
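A rough sketch of such a measurement, downloading one of the test files over several parallel connections and computing throughput from the total; the /testfiles/1024kb.bin path and the connection count are assumptions:

// Estimate download throughput from total bytes and elapsed time.
async function measureDownloadMbps(url = '/testfiles/1024kb.bin', parallel = 4) {
  const start = performance.now();
  const sizes = await Promise.all(
    Array.from({ length: parallel }, async () => {
      // A cache-busting query string keeps the browser from serving a cached copy.
      const response = await fetch(`${url}?nocache=${Math.random()}`, { cache: 'no-store' });
      const data = await response.arrayBuffer();
      return data.byteLength;
    })
  );
  const seconds = (performance.now() - start) / 1000;
  const totalBits = sizes.reduce((a, b) => a + b, 0) * 8;
  return totalBits / seconds / 1e6; // megabits per second
}

// Averaging a few runs acts as a crude smoothing factor.
measureDownloadMbps().then((mbps) => console.log(`~${mbps.toFixed(1)} Mbit/s`));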

In node.js and express, how should a client send a large, up to 3k, amount of data to the server?

The client will be sending my server a change log containing a list of commands and parameters; whether it is JSON or not is TBD.
This payload can be 3 or 4 KB, and is not likely to be more.
What is the standard approach to deal with this requirement?
Should the client send JSON containing all of the changes as part of the request body?
Any recommendations? Lessons learned?
Just POST the data. 3-4 KB is nothing unless you're dealing with feature-phone WAP browsers in the middle of rural India, performance issues of the "OMG, I'm Google and care about every byte ever because of my zillion-user userbase" type, or something like that.
If you're really worried about payload size, you can gzip-base64 encode it before sending - but only do this if a) you really care about this (which is unlikely) and b) your payload is large enough that this saves you bandwidth. (gzip-base64'ing small payloads often increases their size, since there isn't enough data to get enough compression benefit to offset the 33% size increase from base64 encoding.)
You can use a normal JSON POST to send across 3-4 KB of data.
You should pay more attention to what you do with the data received on the server side: whether you buffer up all the data before you start processing it (storing it in a DB or elsewhere), or process it in chunks. If you are simply dumping the data into files on the server, you should create a Writable stream and pump the received chunks into it.
How are you going to process the received data on the server? But then, 3-4 KB is not really a worrying amount of data.
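A minimal Express sketch of receiving such a change log as a JSON POST body; the /changelog route and the payload shape are assumptions:

const express = require('express');
const app = express();

app.use(express.json({ limit: '1mb' })); // plenty of headroom for a 3-4 KB payload

app.post('/changelog', (req, res) => {
  const changes = req.body; // e.g. [{ command: 'set', params: {...} }, ...]
  // Process or persist the changes here (database, file, queue, ...).
  res.json({ received: Array.isArray(changes) ? changes.length : 0 });
});

app.listen(3000);

On the client, a plain POST with Content-Type: application/json and the serialized change log in the body is all that is needed.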
You can set the maximum upload size with
app.use(express.limit('5mb'));
if that's an issue. (In recent Express versions this is configured on the body parser instead, e.g. express.json({ limit: '5mb' }).)
But there shouldn't really be any limitation on this by default, except the max buffer size (which I believe is 1 GB).
It also sounds like this is something that you can just post to the server with a regular POST request; in other words, you use a form with a file input and just upload the file the regular way, as 4 KB isn't really a big file.

Can I send a "low priority" XMLHttpRequest that doesn't swamp the broadband?

I have a script which uploads a lot of POST data using jQuery, but this interferes with all other requests as the outgoing data swamps any other requests the browser (and other things, like ssh clients) might make.
Is it possible (unlikely, yes) to tell the connection to slow down a bit, as it's not a priority, and let other connections through?
jQuery is tagged because that's the major library I'm using, but I can work at a lower level if the answer requires it.
There was no definite answer to this question, but the ideas presented by commenters have worked over time.
When I need to upload a lot of similar data these days, I write a function which handles the actual upload. It gathers data and "pauses" for a few ms to see if there will be further upload requests. When sufficient time has passed with no upload requests (or the queue reaches a certain length or size), the function aggregates all the upload data into a single upload to a server-side function designed to split those apart, handle them, and return the various results to the function, which then handles callbacks if there are any.
The above may sound complex, but it has made a huge difference in reducing network swamping.
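A sketch of that batching approach; the /batch endpoint, the quiet period, and the callback wiring are assumptions rather than part of the original answer:

// Queue items, wait for a short quiet period, then send everything in one POST.
const queue = [];
let timer = null;
const QUIET_MS = 50;   // how long to wait for further upload requests
const MAX_QUEUE = 100; // flush early if the queue gets long

function enqueueUpload(item, callback) {
  queue.push({ item, callback });
  if (queue.length >= MAX_QUEUE) return flush();
  clearTimeout(timer);
  timer = setTimeout(flush, QUIET_MS);
}

function flush() {
  clearTimeout(timer);
  const batch = queue.splice(0, queue.length);
  if (batch.length === 0) return;

  // One aggregated request instead of many small ones competing for bandwidth.
  $.ajax({
    url: '/batch',
    method: 'POST',
    contentType: 'application/json',
    data: JSON.stringify(batch.map((entry) => entry.item)),
  }).done((results) => {
    // The server is expected to return one result per queued item, in order.
    batch.forEach((entry, i) => entry.callback && entry.callback(results[i]));
  });
}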
