Downloading a video using fluent-ffmpeg in nodejs and express - javascript

I am working on a side project to download videos from Reddit, but Reddit serves the video and audio as separate files, so I have to merge them first before delivering them to the client. I was able to do all of this with the following snippet of code:
const ffmpeg = require("fluent-ffmpeg");
const proc = new ffmpeg();

app.post('/download', async (req, res) => {
  const audio = "some audio link";
  const video = "some video link";
  proc.addInput(video)
    .output(`${somePath}/video.mp4`) // somePath = wherever the file should land
    .format('mp4')
    .on("error", err => console.log(err))
    .on('end', () => console.log('Done'));
  if (audio) {
    proc.addInput(audio);
  }
  proc.run();
});
Using the above code, the video is downloaded locally on the server at the specified path, but I want the client browser that sent the request to download it. I tried:
proc.pipe(res);
but it didn't work. It's my first time working with ffmpeg, so it would be nice if someone could give me a hint.

Add writeToStream(res, { end: true }); at the end to stream:
const ffmpeg = require("fluent-ffmpeg");

app.post('/download', async (req, res) => {
  const audio = "some audio link";
  const video = "some video link";
  ffmpeg(video)
    .format('mp4')
    .on("error", err => console.log(err))
    .on('end', () => console.log('Done'))
    .writeToStream(res, { end: true });
});
I hope it works.

Related

Downloading an mp3 file from S3 and manipulating it results in bad file

I wrote a script that downloads an MP3 file from my S3 bucket and then manipulates it before download (adding ID3 tags).
It works and the tags are injected properly, but the resulting file seems corrupted and is unplayable.
I can still see my tags through Mp3tag, so there is data in it, but no audio plays from the file.
Here's my code; I'm trying to figure out what went wrong:
const downloadFileWithID3 = async (filename, downloadName, injectedEmail) => {
  try {
    const data = await s3Client.send(
      new GetObjectCommand({
        Bucket: "BUCKETNAME",
        Key: filename,
      })
    );
    const fileStream = streamSaver.createWriteStream(downloadName);
    const writer = fileStream.getWriter();
    const reader = data.Body.getReader();
    const pump = () =>
      reader.read().then(({ value, done }) => {
        if (done) writer.close();
        else {
          const arrayBuffer = value;
          const writerID3 = new browserId3Writer(arrayBuffer);
          const titleAndArtist = downloadName.split("-");
          const [artist, title] = titleAndArtist;
          writerID3.setFrame("TIT2", title.slice(0, -4));
          writerID3.setFrame("TPE1", [artist]);
          writerID3.setFrame("TCOM", [injectedEmail]);
          writerID3.addTag();
          let taggedFile = new Uint8Array(writerID3.arrayBuffer);
          writer.write(taggedFile).then(pump);
        }
      });
    await pump()
      .then(() => console.log("Closed the stream, Done writing"))
      .catch((err) => console.log(err));
  } catch (err) {
    console.log(err);
  }
};
Hope you can help me solve this weird bug.
Thanks in advance!
OK, so I've figured it out: instead of consuming the stream chunk by chunk, I used getSignedUrl so the browser downloads straight from the S3 bucket, and it works.
Thanks everyone for trying to help out!
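For anyone hitting the same symptom: the corruption in the original snippet most likely came from running the ID3 writer on every chunk, so each chunk got its own tag prepended before being written out. If you do want to tag client-side, buffer the whole body first and tag exactly once. A minimal sketch of that buffering step (plain JS; `readAll` and `concatChunks` are helper names introduced here, and the actual tagging with browser-id3-writer would then be applied to the single returned buffer):

```javascript
// Concatenate all chunks of a stream into a single Uint8Array, so an ID3
// tag can be applied once to the whole file instead of once per chunk.
function concatChunks(chunks) {
  const total = chunks.reduce((n, c) => n + c.length, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const chunk of chunks) {
    out.set(chunk, offset);
    offset += chunk.length;
  }
  return out;
}

// Drain a web ReadableStream reader (like data.Body.getReader()) to the end.
async function readAll(reader) {
  const chunks = [];
  for (;;) {
    const { value, done } = await reader.read();
    if (done) return concatChunks(chunks);
    chunks.push(value);
  }
}
```

That said, the presigned-URL approach the asker settled on is simpler, since it skips client-side streaming entirely.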

Is there a way send a file stored on a remote URL from the server to the client efficiently?

Context:
This is my first time working with files in Node.js.
I am making a YouTube video downloader for personal use.
In the frontend I have multiple buttons, each representing a video quality; each button has attached to it a URL where the video can be found at that quality.
When a specific button is pressed, the function 'download' from 'client.js' is called and gets passed the button's URL and a filename.
My first try was to create a write stream into the public folder of my app and, after the download finished, fetch the video from that path in the frontend, but it was taking too long and was really inefficient for big files.
The current way works mostly the same, but this method is even slower than the first one.
How can I make the download more efficient? For example, when the user presses a button, the download should start right away.
client.js
const download = async (url, filename) => {
  const error = document.querySelector(".error")
  const status = document.querySelector(".status")
  try {
    status.textContent = "Downloading ..."
    const response = await axios.get("/download", {params: {url, filename}})
    window.location.href = response.request.responseURL
    error.textContent = ""
    status.textContent = "Download Complete"
  }
  catch(e) {
    error.textContent = "Cannot Download The Data!"
    status.textContent = ""
  }
}
server.js
app.get('/download', async (request, response) => {
  try {
    const URL = request.query.url
    const filename = request.query.filename
    response.attachment(filename)
    const { data } = await axios.get(URL, { responseType: 'stream' })
    data.pipe(response)
  }
  catch (e) {
    return response.status(404).send({error: "Url Not Found!"})
  }
})

Saving Readstream as File on async loop

=> What I need to do:
I'm trying to loop over multiple tracks fetched from the database.
For each track: get the audio file path/name, get a read stream from AWS, and save it to a temporary directory on my server.
Make some changes to the audio with FFMPEG.
Re-upload the changed audio file to AWS.
Delete the file from the temporary directory on my server.
So first I get all the tracks, loop over them, and call the function that does the audio processing on each track:
exports.updateAllTracksWithNewTag = async (req, res) => {
  try {
    const allTracks = await Tracks.findAll()
    await Promise.all(
      allTracks.map(async (track) => {
        return await this.updateOne(track)
      })
    )
    return res.status(200).json({ message: 'All tracks are done' })
  } catch (error) {
    return ResponseErrorFormated.responseError(error, res)
  }
}
This is the function that does the audio processing on each track:
exports.updateOne = async (track) => {
  const filesToDeleteFromTemp = []
  try {
    const fileAudioURL = track.MP3_audio_url
    const originalFileReadstream = await AWSS3Utils.getReadStream(
      URLConfig.URL_ASSETS.PRIVATE.USERS_TRACKS_AUDIOS.path.slice(1) + fileAudioURL
    )
    const tempPathToSaveStreamGetted = 'assets/temp/audios/'
    // It seems to loop over all the tracks up to this line, and only then continue the process
    console.log('SAVING FILE for track.id=', track.id) /////////////////////
    await FilesUtils.saveReadStreamAsFile(originalFileReadstream, tempPathToSaveStreamGetted, fileAudioURL)
    console.log('FILE SAVED for track.id=', track.id) /////////////////////
    filesToDeleteFromTemp.push(tempPathToSaveStreamGetted + fileAudioURL)
    const fileInfosForTag = {
      path: tempPathToSaveStreamGetted + fileAudioURL,
      filename: fileAudioURL,
    }
    console.log('CREATING TAGGED for track.id=', track.id) /////////////////////
    const resultTaggedMP3 = await FilesFormater.createTaggedAudioFromMP3(fileInfosForTag)
    console.log('TAGGED CREATED for track.id=', track.id) /////////////////////
    const readStreamTaggedMP3 = resultTaggedMP3.readStream
    const finalFilePathTaggedMP3 = resultTaggedMP3.finalFilePath
    const finalFileNameTaggedMP3 = resultTaggedMP3.finalFileNameTaggedMP3
    const newFileKeyMP3 =
      URLConfig.URL_ASSETS.PUBLIC.USERS_TRACKS_TAGGED.path.slice(1) + finalFileNameTaggedMP3
    filesToDeleteFromTemp.push(finalFilePathTaggedMP3)
    await AWSS3Utils.uploadFileFromReadstream(readStreamTaggedMP3, newFileKeyMP3)
    await FilesUtils.unlinkFiles(filesToDeleteFromTemp)
  } catch (error) {
    await FilesUtils.unlinkFiles(filesToDeleteFromTemp)
    throw error
  }
}
Expected result:
SAVING FILE for track.id=120
FILE SAVED for track.id=120
CREATING TAGGED for track.id=120
TAGGED CREATED for track.id=120
SAVING FILE for track.id=121
FILE SAVED for track.id=121
CREATING TAGGED for track.id=121
TAGGED CREATED for track.id=121
The actual result:
SAVING FILE for track.id=120
SAVING FILE for track.id=121
SAVING FILE for track.id=122
SAVING FILE for track.id=123
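That interleaving is exactly what Promise.all over allTracks.map gives you: every updateOne() starts immediately, so all the "SAVING" logs fire before any await completes. If the tracks must be processed one at a time, a plain for...of loop with await runs them sequentially. A minimal, self-contained reproduction of both behaviors (sleep stands in for the S3 download and ffmpeg work):

```javascript
// Minimal reproduction: Promise.all over .map() starts every updateOne()
// at once, while a for...of loop with await runs them one at a time.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const log = [];

async function updateOne(id) {
  log.push(`SAVING FILE for track.id=${id}`);
  await sleep(10); // stands in for the S3 download + ffmpeg work
  log.push(`FILE SAVED for track.id=${id}`);
}

// What the question's code does: all tracks start before any finishes.
async function concurrent(ids) {
  await Promise.all(ids.map((id) => updateOne(id)));
}

// Sequential variant: the next track starts only after the previous one is done.
async function sequential(ids) {
  for (const id of ids) {
    await updateOne(id);
  }
}
```

Note that Promise.all is not wrong per se; it is just concurrent by design, which can also exhaust disk or memory when each task writes temp files.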

Having issues with working with socket.io and mediastream api to send realtime, live audio chat between clients in a room

I am having an issue that I can't seem to figure out. I am creating an application in which clients in a room will be able to talk to each other via their microphones. More specifically, I need to stream audio from one client's microphone and send the data (while it is being recorded) to the server and then to a different client, so that they can hear the other person's voice in real time. I have a method of doing this, but it is inefficient, choppy, and blatantly bad...
My method as of now:
This snippet is on the client side of the person recording audio
setInterval(() => {
  navigator.mediaDevices.getUserMedia({ audio: true })
    .then(stream => {
      const mediaRecorder = new MediaRecorder(stream);
      mediaRecorder.start();
      const audioChunks = [];
      mediaRecorder.addEventListener("dataavailable", (event) => {
        audioChunks.push(event.data);
      });
      mediaRecorder.addEventListener("stop", () => {
        socket.emit('liveAudioToServer', audioChunks)
      });
      setTimeout(() => {
        mediaRecorder.stop();
      }, 2000);
    });
}, 2000);
This snippet is the server side:
socket.on('liveAudioToServer', (data) => {
  socket.broadcast.to(room).emit('liveAudioToClient', data)
})
And this snippet is the client side on the receiving end of the audio:
socket.on('liveAudioToClient', (data) => {
  const audioBlob = new Blob(data);
  const audioUrl = URL.createObjectURL(audioBlob);
  const audio = new Audio(audioUrl);
  audio.play()
})
Basically, this code sends an audio buffer to the server every two seconds, and the server broadcasts the buffer to the other clients in the room. Once the other clients receive the buffer, they compile it into audio that is played. The main issue is that you can clearly tell where the audio ends and begins; it is not smooth in the slightest. I tried a different solution that I feel could work with some tweaking, but it doesn't work as of now.
Other Method:
if (navigator.getUserMedia) {
  navigator.getUserMedia(
    {audio: true},
    function(stream) {
      const audioContext3 = new AudioContext();
      const audioSource3 = audioContext3.createMediaStreamSource(stream);
      const analyser3 = audioContext3.createAnalyser();
      audioSource3.connect(analyser3);
      analyser3.fftSize = 256;
      const bufferLength = analyser3.frequencyBinCount;
      const dataArray = new Uint8Array(bufferLength);
      function sendAudioChunks() {
        analyser3.getByteFrequencyData(dataArray);
        requestAnimationFrame(sendAudioChunks);
        socket.emit('liveAudioToServer', dataArray)
      }
      sendAudioChunks();
    },
    function() { console.log("Error 003.") }
  );
}
server side:
socket.on('liveAudioToServer', (data) => {
  socket.broadcast.to(getRoom(socket.id)).emit('liveAudioToClient', data)
})
other clients side (receiving audio)
socket.on('liveAudioToClient', (data) => {
  const audioBlob = new Blob([new Uint8Array(data)])
  const audioUrl = URL.createObjectURL(audioBlob);
  const audioo = new Audio(audioUrl);
  audioo.play()
})
I've really looked everywhere for a solution to this with no luck, so if anyone could help me, that would be greatly appreciated!

Playing an audio file using discord.js and ytdl-core

I'm trying to download and play an audio file fetched from youtube using ytdl and discord.js:
ytdl(url)
  .pipe(fs.createWriteStream('./music/downloads/music.mp3'));

var voiceChannel = message.member.voiceChannel;
voiceChannel.join().then(connection => {
  console.log("joined channel");
  const dispatcher = connection.playFile('./music/downloads/music.mp3');
  dispatcher.on("end", end => {
    console.log("left channel");
    voiceChannel.leave();
  });
}).catch(err => console.log(err));
isReady = true
I can successfully play the mp3 file in ./music/downloads/ without the ytdl part (ytdl(url).pipe(fs.createWriteStream('./music/downloads/music.mp3'));). But when that part is in the code, the bot just joins and leaves.
Here is the output with the ytdl part:
Bot has started, with 107 users, in 43 channels of 3 guilds.
joined channel
left channel
And here is the output without the ytdl part:
Bot has started, with 107 users, in 43 channels of 3 guilds.
joined channel
[plays mp3 file]
left channel
Why is that, and how can I solve it?
Use playStream instead of playFile when you need to play an audio stream:
const streamOptions = { seek: 0, volume: 1 };

var voiceChannel = message.member.voiceChannel;
voiceChannel.join().then(connection => {
  console.log("joined channel");
  const stream = ytdl('https://www.youtube.com/watch?v=gOMhN-hfMtY', { filter : 'audioonly' });
  const dispatcher = connection.playStream(stream, streamOptions);
  dispatcher.on("end", end => {
    console.log("left channel");
    voiceChannel.leave();
  });
}).catch(err => console.log(err));
You're doing it in an inefficient way. There's no synchronization between reading and writing. Wait for the file to be written to the filesystem, then read it!
Directly stream it
Redirect ytdl's output to the dispatcher: it is converted to Opus audio packets first, then streamed from your computer to Discord.
message.member.voiceChannel.join()
  .then(connection => {
    console.log('joined channel');
    connection.playStream(ytdl(url))
      // When no packets are left to send, leave the channel.
      .on('end', () => {
        console.log('left channel');
        connection.channel.leave();
      })
      // Handle errors without crashing the app (the dispatcher is an
      // EventEmitter, not a promise, so listen for 'error' instead of .catch()).
      .on('error', console.error);
  })
  .catch(console.error);
FWTR (First write, then read)
The approach you used was pretty close to success; the failure is that you don't synchronize the read with the write.
var stream = ytdl(url);
// Wait until writing is finished ('finish' is the write-stream event; 'end' never fires here)
stream.pipe(fs.createWriteStream('tmp_buf_audio.mp3'))
  .on('finish', () => {
    message.member.voiceChannel.join()
      .then(connection => {
        console.log('joined channel');
        connection.playStream(fs.createReadStream('tmp_buf_audio.mp3'))
          // When no packets are left to send, leave the channel.
          .on('end', () => {
            console.log('left channel');
            connection.channel.leave();
          })
          // Handle errors without crashing the app.
          .on('error', console.error);
      })
      .catch(console.error);
  });
