What I'm trying to achieve is to make Chrome load a video file as data (via the Fetch API, XHR, whatever) and to play it using <video> while it's still being downloaded without issuing two separate requests for the same URL and without waiting until the file is completely downloaded.
It's easy to get a ReadableStream from the Fetch API (response.body), yet I can't find a way to feed it into the video element. I've figured out I need a blob URL for this, which can be created using a MediaSource object. However, the SourceBuffer#appendStream method, which sounds like just what is needed, isn't implemented in Chrome, so I can't connect the stream directly to the MediaSource object.
I can probably read the stream in chunks, create Uint8Arrays out of them, and use SourceBuffer#appendBuffer, but this means playback won't start immediately unless the chunk size is really small. It also feels like manually doing something these APIs should offer out of the box. If there are no other solutions and I go this way, what caveats should I expect?
Are there perhaps other ways to create a blob URL for a ReadableStream? Or is there a way to make fetch and <video> share a single request? There are so many new APIs that I could easily be missing something.
After hours of experimenting, I found a half-working solution:
const video = document.getElementById('audio');
const mediaSource = new MediaSource();
video.src = window.URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('audio/webm; codecs="opus"');
  const response = await fetch(audioSRC);
  const reader = response.body.getReader();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    // wait for the previous append to finish before appending the next chunk
    await new Promise((resolve) => {
      sourceBuffer.onupdateend = () => resolve();
      sourceBuffer.appendBuffer(value);
    });
  }
  mediaSource.endOfStream();
});
This is built on the MediaSource API: https://developer.mozilla.org/en-US/docs/Web/API/MediaSource
Also, I only tested this with the webm/opus format, but I believe it should work with other formats as well, as long as you specify the correct MIME type and codec string.
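Since the codec string must match the actual media, one option is to probe a few candidate strings before creating the source buffer. This is a hypothetical helper (the name and candidate list are made up); the checker is injectable so the selection logic itself can run outside a browser:

```javascript
// Pick the first MIME/codec string the browser's MediaSource can play.
// `isSupported` defaults to MediaSource.isTypeSupported in a browser.
function pickSupportedType(candidates, isSupported) {
  const check = isSupported || (typeof MediaSource !== 'undefined'
    ? MediaSource.isTypeSupported.bind(MediaSource)
    : () => false);
  return candidates.find(check) || null;
}

// In the setup above you would pick the type before addSourceBuffer:
// const type = pickSupportedType(['audio/webm; codecs="opus"', 'audio/mp4; codecs="mp4a.40.2"']);
// if (type) mediaSource.addSourceBuffer(type);
```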
I made this picture that explains what I'm trying to achieve.
Brief summary: I want users to be able to upload an encrypted video, so that only those who have the password can watch it as a stream.
The point of all this is for the server to not be able to decipher it.
What I've managed to do so far is only the first part: encrypting the chunks and sending them:
const encryptedStream = [] // collects the encrypted chunks

fileInput.addEventListener("change", async () => {
  const file = fileInput.files[0]
  const reader = file.stream().getReader()
  while (true) {
    const { value, done } = await reader.read()
    if (done) break
    handleChunk(value)
  }
  const out = {
    size: file.size,
    type: file.type,
    encryptedStream: encryptedStream
  }
})
function handleChunk(chunk) { // chunk: Uint8Array
  const parsedChunk = CryptoJS.enc.Utf8.parse(chunk)
  encryptedStream.push(CryptoJS.AES.encrypt(parsedChunk, "encryptionKey").toString())
}
I guess you can also do this with File.slice() but I did not test it.
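For reference, the File.slice() variant I mentioned (untested) could look like this; the chunk size is arbitrary:

```javascript
// Read a Blob/File in fixed-size chunks via slice(), as an alternative
// to file.stream().getReader().
async function* sliceChunks(file, chunkSize = 64 * 1024) {
  for (let offset = 0; offset < file.size; offset += chunkSize) {
    // slice() is cheap: it only records the byte range, no copy yet
    const slice = file.slice(offset, offset + chunkSize);
    yield new Uint8Array(await slice.arrayBuffer());
  }
}

// usage, mirroring the reader loop above:
// for await (const chunk of sliceChunks(file)) handleChunk(chunk)
```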
At this point I feel like I've looked into every API related to files/streams, both in Node and in vanilla JS, but I can't find a way to make this work. All I've managed is to build a Readable out of the encryptedStream array on the backend:
const Readable = require("stream").Readable
let readable = new Readable()
encryptedStream.forEach(aesChunk => readable.push(aesChunk))
readable.push(null)
I have doubts about how to stream it back to another frontend, and literally no idea how to play the video after decrypting it.
From what I've seen online, it looks like it's not possible to stream an encrypted file, only to download it.
Is this possible to do?
I have an audioContext that gets its media from createMediaElementSource. I want to parse this audio on the go into AudioBuffers or something similar that I can send over to another client over websockets.
let audioElement = document.querySelector('video')
let audioContext = new window.AudioContext()
let source = audioContext.createMediaElementSource(audioElement)
source.connect(deliverToOtherClientOrSomething)
I tried making an AudioWorkletNode, but the problem with this approach is that it doesn't let me end the chain there; it forces me to forward the audio to some other AudioContext node, which is unwanted.
So, in the end this problem was solved by using an AudioWorkletNode. When creating an AudioWorkletNode it is possible to pass options to it; one of these options is numberOfOutputs. Setting it to 0 answers my question completely.
Main file
const sendProcessor = new AudioWorkletNode(audioContext, 'send-processor', { numberOfOutputs: 0 })
sendProcessor.port.onmessage = (event) => {
  callback(event.data)
}
Processor file
class SendProcessor extends AudioWorkletProcessor {
  process(inputs) {
    this.port.postMessage(inputs[0][0]) // forward the first channel of the first input
    return true
  }
}
registerProcessor('send-processor', SendProcessor)
Imagine I have a video file and I want to build a blob URL from that file, then play it in an HTML page. So far I've tried this, but I couldn't make it work:
var files = URL.createObjectURL(new Blob([someVideoFile], { type: "video/mp4" }));
document.getElementById(videoId).setAttribute("src", files); // video tag id
document.getElementById(videoPlayer).load(); // this is the source tag id
document.getElementById(videoPlayer).play(); // this is the source tag id
It gives me a blob URL but won't play the video. Am I doing something wrong? I'm pretty new to Electron, so excuse me if my code isn't good enough.
I saw the similar questions mentioned in the comments, but they don't work for me, just as they didn't work for others on those pages.
I know this is an old question, but it still deserves a working answer.
In order to play a video in the renderer context, you're on the right track: you can use a blob URL and assign it as the video source. Except that a local file path is not a valid URL, which is why your current code doesn't work.
Unfortunately, in electron, currently there are only 3 ways to generate a blob from a file in the renderer context:
Have the user drag it into the window, and use the drag-and-drop API
Have the user select it via a file input: <input type="file">
Read the entire file with the 'fs' module, and generate a Blob from it
The third option (the only one without user input) can be done as long as nodeIntegration is enabled, or if it is done in a non-sandboxed preloader. To accomplish this via streaming rather than loading the entire file at once, the following module can be used:
// fileblob.js
const fs = require('fs');
// convert system file into blob
function fileToBlob(path, {bufferSize=64*1024, mimeType='application/octet-stream'}={}) {
  return new Promise((resolve, reject) => {
    // create incoming stream from file
    const stream = fs.createReadStream(path, {highWaterMark: bufferSize});
    // initialize empty blob
    var blob = new Blob([], {type: mimeType});
    stream.on('data', buffer => {
      // append each chunk by building a new blob that concatenates the chunk
      blob = new Blob([blob, buffer], {type: mimeType});
    });
    stream.on('close', () => {
      // resolve with resulting blob
      resolve(blob);
    });
    stream.on('error', reject);
  });
}
// convert blob into system file
function blobToFile(blob, path, {bufferSize=64*1024}={}) {
  return new Promise((resolve, reject) => {
    // create outgoing stream to file
    const stream = fs.createWriteStream(path);
    stream.on('error', reject);
    stream.on('ready', async () => {
      // iterate one chunk at a time
      for (let i = 0; i < blob.size; i += bufferSize) {
        // read chunk
        let slice = await blob.slice(i, i + bufferSize).arrayBuffer();
        // write chunk
        if (!stream.write(new Uint8Array(slice))) {
          // wait for next drain event
          await new Promise(resolve => stream.once('drain', resolve));
        }
      }
      // close file and resolve
      stream.on('close', () => resolve());
      stream.close();
    });
  });
}
module.exports = {
fileToBlob,
blobToFile,
};
Then, in a preloader or the main context with nodeIntegration enabled, something like the following would load the file into a blob and use it for the video player:
const {fileToBlob} = require('./fileblob');
fileToBlob("E:/nodeJs/test/app/downloads/clips/test.mp4", {mimeType:"video/mp4"}).then(blob => {
var url = URL.createObjectURL(blob);
document.getElementById(videoId).setAttribute("src", url);
document.getElementById(videoPlayer).load();
document.getElementById(videoPlayer).play();
});
Again, unfortunately this is slow for large files. We're still waiting for a better solution from electron:
https://github.com/electron/electron/issues/749
https://github.com/electron/electron/issues/35629
Try
video.src = window.URL.createObjectURL(vid);
For more details please refer to this answer
I've worded my title and tags in a way that should be searchable for both video and audio, as this question isn't specific to one. My specific case only concerns audio, though, so my question body is written with that in mind.
First, the big picture:
I'm sending audio to multiple P2P clients who will connect and disconnect at random intervals. The audio I'm sending is a stream, but each client only needs the part of the stream from the point at which they connected. Here's how I solved that:
Every {timeout} (e.g. 1000ms), create a new audio blob
Blob will be a full audio file, with all metadata it needs to be playable
As soon as a blob is created, convert to array buffer (better browser support), and upload to client over WebRTC (or WebSockets if they don't support)
That works well. There is a delay, but if you keep the timeout low enough, it's fine.
Now, my question:
How can I play my "stream" without having any audible delay?
I say stream, but I didn't implement it using the Streams API; it's a queue of blobs that gets updated every time the client gets new data.
I've tried a lot of different things like:
Creating a BufferSource, and merging two blobs (converted to audioBuffers) then playing that
Passing an actual stream from Stream API to clients instead of blobs
Playing blobs sequentially, relying on ended event
Loading next blob while current blob is playing
Each has problems, difficulties, or still results in an audible delay.
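For context, the first approach above (decoding blobs into AudioBuffers and scheduling them back-to-back) hinges on one piece of scheduling arithmetic. This is a sketch, not my working code; the names in the commented browser-only usage are placeholders:

```javascript
// Compute when the next decoded chunk should start so that chunks
// play back-to-back with no gap, but never start in the past.
function scheduleNext(nextStartTime, currentTime, duration) {
  const startAt = Math.max(nextStartTime, currentTime);
  return { startAt, nextStartTime: startAt + duration };
}

// Browser-only usage (assumes an existing `audioCtx` and incoming blobs):
// let next = 0;
// async function enqueue(blob) {
//   const buf = await audioCtx.decodeAudioData(await blob.arrayBuffer());
//   const src = audioCtx.createBufferSource();
//   src.buffer = buf;
//   src.connect(audioCtx.destination);
//   const s = scheduleNext(next, audioCtx.currentTime, buf.duration);
//   src.start(s.startAt);
//   next = s.nextStartTime;
// }
```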
Here's my most recent attempt at this:
let firstTime = true;
const chunks = [];

Events.on('audio-received', ({ detail: audioChunk }) => {
  chunks.push(audioChunk);
  if (firstTime && chunks.length > 2) {
    const currentAudio = document.createElement("audio");
    currentAudio.controls = true;
    currentAudio.preload = 'auto';
    document.body.appendChild(currentAudio);
    currentAudio.src = URL.createObjectURL(chunks.shift());
    currentAudio.play();

    const nextAudio = document.createElement("audio");
    nextAudio.controls = true;
    nextAudio.preload = 'auto';
    document.body.appendChild(nextAudio);
    nextAudio.src = URL.createObjectURL(chunks.shift());

    let currentAudioStartTime, nextAudioStartTime;
    currentAudio.addEventListener("ended", () => {
      nextAudio.play();
      nextAudioStartTime = new Date();
      if (chunks.length) {
        currentAudio.src = URL.createObjectURL(chunks.shift());
      }
    });
    nextAudio.addEventListener("ended", () => {
      currentAudio.play();
      currentAudioStartTime = new Date();
      console.log(currentAudioStartTime - nextAudioStartTime);
      if (chunks.length) {
        nextAudio.src = URL.createObjectURL(chunks.shift());
      }
    });
    firstTime = false;
  }
});
The audio-received event fires every ~1000 ms. This code works; it plays each chunk after the previous one finishes, but on Chrome there is a ~300 ms delay that's very audible: it plays the first chunk, goes quiet, then plays the second, and so on. On Firefox the delay is 50 ms.
Can you help me?
I can try to create a reproducible example if that would help.
I have an XHR object that downloads 1GB file.
function getFile(callback) {
  var xhr = new XMLHttpRequest();
  xhr.onload = function () {
    if (xhr.status == 200) {
      callback.apply(xhr);
    } else {
      console.log("Request error: " + xhr.statusText);
    }
  };
  xhr.open('GET', 'download', true);
  xhr.onprogress = updateProgress;
  xhr.responseType = "arraybuffer";
  xhr.send();
}
But the File API can't load all of that into memory, even from a worker; it throws an out-of-memory error:
btn.addEventListener('click', function () {
  getFile(function () {
    var worker = new Worker("js/saving.worker.js");
    worker.onmessage = function (e) {
      saveAs(e.data); // FileSaver.js creates a URL from the blob... but it's too large
    };
    worker.postMessage(this.response);
  });
});
Web Worker
onmessage = function (e) {
  var view = new DataView(e.data, 0);
  var file = new File([view], 'file.zip', { type: "application/zip" });
  postMessage(file);
};
I'm not trying to compress the file, this file is already compressed from server.
I thought about storing it in IndexedDB first, but I'd have to load it as a blob or file anyway; even if I request byte ranges, sooner or later I'll have to build this giant blob.
I want to create a blob: URL and hand it to the user once the browser has downloaded the file.
I'll use the FileSystem API for Google Chrome, but I want to make something for Firefox too. I looked into the FileHandle API, but found nothing useful.
Do I have to build a Firefox extension in order to do the same thing the FileSystem API does for Google Chrome?
Ubuntu 32-bit
Loading 1 GB+ with AJAX just to be able to monitor download progress isn't convenient; it fills up the memory.
Instead I would just send the file with a Content-Disposition header to save the file.
There are, however, ways to work around this and still monitor progress. One option is to have a second WebSocket that signals how much has been downloaded while you download normally with a GET request; the other option is described later, at the bottom.
I know you talked about using Blink's sandboxed filesystem in the comments, but it has some drawbacks: it may require permission if you use persistent storage; it only allows 20% of the available disk space that is left; and if Chrome needs to free some space, it will throw away the temporary storage of whichever other domain was least recently used. Besides, it doesn't work in private mode.
Not to mention that support for it is being dropped and it may never end up in other browsers, though it will most likely not be removed, since many sites still depend on it.
The only way to process a file this large is with streams. That is why I created StreamSaver. At the moment this only works in Blink (Chrome & Opera), but it will eventually be supported by other browsers, with the WHATWG spec to back it up as a standard.
fetch(url).then(res => {
  // One idea is to get the filename from the Content-Disposition header...
  const size = ~~res.headers.get('Content-Length')
  const fileStream = streamSaver.createWriteStream('filename.zip', size)
  const writeStream = fileStream.getWriter()
  // Later you will be able to just simply do
  // res.body.pipeTo(fileStream)
  // instead of pumping
  const reader = res.body.getReader()
  const pump = () => reader.read()
    .then(({ value, done }) => {
      // here you know how large the value (chunk) is and you can
      // figure out the download speed/progress when comparing it to the size
      return done
        ? writeStream.close()
        : writeStream.write(value).then(pump)
    })
  // Start the reader
  pump().then(() =>
    console.log('Closed the stream, Done writing')
  )
})
This will not take up any memory
I have a theory: if you split the file into chunks, store them in IndexedDB, and then later merge them together, it will work.
A blob isn't made of data... it's more like pointers to where a file can be read from.
Meaning that if you store them in IndexedDB and then do something like this (using FileSaver or an alternative):
finalBlob = new Blob([blob_A_fromDB, blob_B_fromDB])
saveAs(finalBlob, 'filename.zip')
But I can't confirm this since I haven't tested it; it would be good if someone else could.
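A sketch of that theory, assuming the claim above that merging Blobs only records references rather than copying bytes; the persistence part is browser-only and the object-store name is made up:

```javascript
// Merge chunk Blobs loaded back from IndexedDB into one final Blob.
function mergeBlobs(parts, type = 'application/zip') {
  // Blob construction just records the list of parts
  return new Blob(parts, { type });
}

// Browser-only side (hypothetical 'chunks' object store):
// db.transaction('chunks', 'readwrite').objectStore('chunks').add(chunkBlob);
// later: objectStore('chunks').getAll().onsuccess =
//   e => saveAs(mergeBlobs(e.target.result), 'filename.zip');
```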
Blobs are cool until you want to download a large file; there is a 600 MB limit (Chrome) for blobs, since everything is stored in memory.