I'm having a bit of an issue and would really appreciate some insight.
What I'm trying to do is add album cover art to the MP3 file that will be downloaded from the front-end.
Context
I'm downloading a video stream from YouTube and converting it to mp3 using fluent-ffmpeg.
To get the video I use the ytdl npm module.
I then pipe this stream to the front-end.
What I've found
fluent-ffmpeg offers either pipe() or saveToFile().
When I use saveToFile() and actually save the output to an MP3 file, it works: I get the album cover.
But when I pipe() the output, whether to the front-end or into a file write stream, the song itself is saved properly, just without the album cover.
Here is my code
Back-end (NodeJS)
let video = ytdl(`http://youtube.com/watch?v=${videoId}`, {
  filter: (format) => format.container === 'mp4' && format.audioEncoding,
  quality: 'lowest'
});

let stream = new FFmpeg()
  .input(video)
  .addInput(`https://i.ytimg.com/vi/${videoId}/default.jpg`)
  .outputOptions([
    '-map 0:1',
    '-map 1:0',
    '-c copy',
    '-c:a libmp3lame',
    '-id3v2_version 3',
    '-metadata:s:v title="Album cover"',
    '-metadata:s:v comment="Cover (front)"'
  ])
  .format('mp3');
And then I pipe it to my front-end:
stream.pipe(res);

stream
  .on('end', () => {
    console.log('******* Stream end *******');
    res.end();
  })
  .on('error', (err) => {
    console.log('ERR', err);
    res.status(500).end();
  });
Front-end (React)
axios.get(url)
  .then(res => {
    axios(`${url}/download`, {
      method: 'GET',
      responseType: 'blob'
    })
      .then(stream => {
        const file = new Blob(
          [stream.data],
          { type: 'audio/mpeg' }
        );
        // Build a URL from the file
        const fileURL = URL.createObjectURL(file);
      })
      .catch(err => {
        console.log('ERROR', err);
      });
  })
  .catch(err => {
    console.log('ERROR', err);
  });
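The object URL is then used to trigger the download in the browser, roughly like this (the file name here is just a placeholder):

// Trigger a browser download from the object URL (file name is a placeholder)
const link = document.createElement('a');
link.href = fileURL;
link.setAttribute('download', 'track.mp3');
document.body.appendChild(link);
link.click();
link.remove();
URL.revokeObjectURL(fileURL);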
Unfortunately, it seems there is no way to complete this task with streams. I've researched a lot but found only an explanation of why this can't be done with FFmpeg and a piped output. njoyard wrote the following:
Actually this problem is not specific to Windows. Most formats write
stream information (duration, bitrate, keyframe position...) at the
beginning of the file, and thus ffmpeg can only write this information
when its output is seekable (because it has to finish processing
streams to the end before knowing what to write). Pipes are not
seekable, so you won't get this information when using an output pipe.
As for your note about the output format, ffmpeg determines the output
format from the output file extension, which is not possible with
pipes; that's why you have to specify the output format explicitly.
Here is the link if you want to read it for yourself: https://github.com/fluent-ffmpeg/node-fluent-ffmpeg/issues/159
So, the only solution I see is to save the file with the saveToFile() method and then attach it to the response.
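A rough sketch of what that could look like, reusing the command from the question (the temp path and download file name are placeholders, and error handling is minimal):

const fs = require('fs');

const tmpPath = `/tmp/${videoId}.mp3`; // placeholder temp path

new FFmpeg()
  .input(video)
  .addInput(`https://i.ytimg.com/vi/${videoId}/default.jpg`)
  .outputOptions([
    '-map 0:1',
    '-map 1:0',
    '-c copy',
    '-c:a libmp3lame',
    '-id3v2_version 3',
    '-metadata:s:v title="Album cover"',
    '-metadata:s:v comment="Cover (front)"'
  ])
  .format('mp3')
  .on('error', (err) => {
    console.log('ERR', err);
    res.status(500).end();
  })
  .on('end', () => {
    // The file is now complete and seekable, so the cover art has been written;
    // send it to the client and clean up the temp file afterwards
    res.download(tmpPath, 'track.mp3', () => fs.unlink(tmpPath, () => {}));
  })
  .saveToFile(tmpPath);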
Related
I'm trying to get a stream link for my video files from the Google Drive API so I can stream them on my web app, but it's not working properly. I have double-checked the docs for any errors in syntax and I can't seem to find any.
For context, here is my code:
drive.files.get({ fileId: myfileId, alt: 'media' }, { responseType: 'stream' }, (err, res) => {
  if (err) return console.log(`The API returned an error: ${err}`);
  console.log(res);
});
I'm getting a PassThrough object in the res.data field, and it's giving an error of "Unknown output format: 'media'". The file I'm trying to stream is a .mp4 file.
I have also double-checked my authentication and it's working fine, because I was able to retrieve my folder ID and file ID using the API.
Am I doing anything wrong here? Any help would be appreciated.
Thanks.
Once you have authenticated the client library, you can use the following code to get a stream link for a video file stored in Google Drive:
// Replace fileId with the ID of the video file you want to stream
const fileId = '1234567890';
// Get the file from Google Drive
const file = await drive.files.get({ fileId, alt: 'media' });
// Get the stream link for the file
const streamLink = file.data;
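If the goal is to pipe the video bytes to the browser rather than buffer them, the stream variant of the same call can be used. A rough sketch, assuming an Express route (the route path and header are assumptions) and an already-authenticated drive client:

const express = require('express');
const app = express();

// Hypothetical route; the file ID comes from the query string here
app.get('/stream', async (req, res) => {
  try {
    const driveRes = await drive.files.get(
      { fileId: req.query.fileId, alt: 'media' },
      { responseType: 'stream' }
    );
    res.setHeader('Content-Type', 'video/mp4');
    // driveRes.data is a readable stream of the file contents
    driveRes.data.pipe(res);
  } catch (err) {
    console.log(`The API returned an error: ${err}`);
    res.status(500).end();
  }
});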
I'm following the documentation for the Node.js implementation of the IBM Watson Text-to-Speech API.
I want to output the resulting file in MP3 format. The documentation recommends augmenting the base code, but I'm not sure how to do that. My code is producing unplayable MP3s.
Here is what it says in the documentation:
textToSpeech.synthesize(synthesizeParams)
  .then(response => {
    // The following line is necessary only for
    // wav formats; otherwise, `response.result`
    // can be directly piped to a file.
    return textToSpeech.repairWavHeaderStream(response.result);
  })
  .then(buffer => {
    fs.writeFileSync('hello_world.wav', buffer);
  })
  .catch(err => {
    console.log('error:', err);
  });
As it says, response.result can be directly piped to a file. This is one of my many attempts (it produces an error):
textToSpeech
  .synthesize(synthesizeParams)
  .then(response => {
    fs.writeFileSync('Hello.mp3', response.result);
  })
  .catch(err => {
    console.log('error:', err);
  });
How can I output the text-to-speech input as an MP3?
Provided your params request an MP3 file (this is the accept parameter), your code looks OK. So if the output file isn't being recognised as audio, it is most likely a text file containing an error message. That error message will indicate what is wrong, which will most likely be an unauthorised message.
I take it that your catch error block isn't logging anything.
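For what it's worth, here is a rough sketch of the MP3 variant of the documented example (the text and voice values are placeholders; the key parts are the accept parameter and piping response.result to a file):

const fs = require('fs');

// Placeholder params; `accept` is what selects MP3 output
const synthesizeParams = {
  text: 'Hello world',
  accept: 'audio/mp3',
  voice: 'en-US_AllisonV3Voice'
};

textToSpeech.synthesize(synthesizeParams)
  .then(response => {
    // For non-wav formats the result stream can be piped straight to a file
    response.result.pipe(fs.createWriteStream('hello_world.mp3'));
  })
  .catch(err => {
    console.log('error:', err);
  });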
I'm building a speech-to-audio web app that takes mic input, converts the recording to an MP3 (using the mic-recorder-to-mp3 NPM package), and then sends it to the Node.js/Express server side for storage and to pass along in a subsequent POST request to the speech-to-text API (rev.ai).
The recording works fine on the UI: I have the recording playing in an <audio> tag, it sounds fine, and it is the full-length recording:
stopBtn.addEventListener("click", () => {
  recorder
    .stop()
    .getMp3().then(([buffer, blob]) => {
      let newBlob = new Blob(buffer);
      recordedAudio.src = URL.createObjectURL(blob);
      recordedAudio.controls = true;
      sendData(blob);
    }).catch((e) => {
      console.log(e);
    });
});

function sendData(blob) {
  let fd = new FormData();
  fd.append('audio', blob);
  fetch('/audio', {
    headers: { Accept: "application/json", "Transfer-Encoding": "chunked" },
    method: "POST",
    body: fd
  });
}
Now, at first in my server-side Express route I was seeing multiple requests come through per recording and thought it was an error I could sort out later, so I put in a quick boolean check to see if the request was already being processed and, if so, just res.end() back to the UI.
This was all good and fine until I realized that only the first 4 seconds of the recording were being saved. That 4-second recording saved fine as an MP3 on the server side, plays correctly when opened in a music app, and also transcribed correctly in rev.ai, but it was still only 4 seconds.
I realized that the audio blob was being sent to the server in chunks and each chunk was part of the multiple requests I was seeing. So then I started looking into how to reassemble the chunks into one audio blob that can be saved as an MP3 and parsed correctly as audio by rev.ai, but nothing I've tried so far has worked. Here is my latest attempt:
app.post("/audio", async (req, res) => {
  let audioBlobs = [];
  let audioContent;
  let filename = `narr-${Date.now()}.mp3`;
  //let processed = false;

  req.on('readable', async () => {
    //if (!processed) {
    //  processed = true;
    //  let audioChunk = await req.read();
    //}
    while (null !== (audioChunk = await req.read())) {
      console.log("adding chunk");
      audioBlobs.push(audioChunk);
    }
  });

  req.on("end", () => {
    audioContent = audioBlobs.join('');
    fs.writeFile(`./audio/${filename}`, audioContent, async function(err) {
      if (err) {
        console.log("an error occurred");
        console.error(err);
        res.end();
      }
      const stream = fs.createReadStream(`./audio/${filename}`);
      let job = await client.submitJobAudioData(stream, filename, {}).then(data => {
        waitForRevProcessing(data.id);
      }).catch(e => {
        console.log("caught an error");
        console.log(e);
      });
      res.end();
    });
  });
});
The blob is saved on the server-side with this code, but it's not playable in a music app and rev.ai rejects the recording as it does not interpret the blob as an audio file.
Something about the way I'm reassembling the chunks is corrupting the integrity of the MP3 format.
I'm thinking this could be for a few reasons:
The chunks could be arriving at the server out of order, although that wouldn't make a whole lot of sense, considering that when I had the boolean check in place it seemed to be saving the first chunk and not a middle chunk
The last chunk is being left "open" or there's some metadata that's missing or padding that's messing with the encoding
These might not be the correct events to listen to for starting/ending the assembly
I'm hoping that Express/the http node module have something built-in to automatically handle this and I'm doing this manual reassembly unnecessarily - I was pretty surprised there was nothing off-the-shelf in Express to handle this, but maybe it's not as common a use case as I imagined?
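For reference, the direction I'm leaning is to keep the chunks as Buffers and concatenate them with Buffer.concat instead of joining them as strings. A rough sketch (untested; it also glosses over the fact that the body is multipart form data from FormData, which would still need to be parsed out, for example with a middleware like multer):

app.post("/audio", (req, res) => {
  const filename = `narr-${Date.now()}.mp3`;
  const chunks = [];

  // Keep every chunk as a Buffer; joining Buffers with '' converts them to
  // strings, which corrupts binary MP3 data
  req.on("data", (chunk) => chunks.push(chunk));

  req.on("end", () => {
    const audioBuffer = Buffer.concat(chunks);
    fs.writeFile(`./audio/${filename}`, audioBuffer, (err) => {
      if (err) {
        console.error(err);
        return res.status(500).end();
      }
      res.end();
    });
  });
});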
Any help that can be offered would be greatly appreciated.
I'm uploading an MP3 file to Google Cloud Storage, then converting it to a WAV file and saving the result back to Cloud Storage. The problem is that when I try to read the WAV file with scipy.io, an error is thrown: Unexpected end of file.
I then converted my MP3 file locally, uploaded it to Cloud Storage, and fetched and opened it the same way as the other one, and everything worked as expected.
After that I noticed that, despite the conversion being done with the same script, the two files have the same size, and when opened in a media player they have the same length with no apparent difference in sound quality; but converting them into strings and printing them to the console shows that they are not identical.
NOTES:
The MP3 file was uploaded to the cloud using an HTML form and saved with type audio/mpeg.
The WAV file was converted using the following code and saved with type audio/wav:
const remoteWriteStream = storage.bucket(bucketName)
  .file("path/to/file/file.wav")
  .createWriteStream({
    contentType: 'audio/wav'
  });

ffmpeg(audioContent)
  .withAudioChannels(1)
  .toFormat('wav')
  .on('error', (err) => {
    console.log("error while conversion " + err);
    reject(err);
  })
  .on('progress', (progress) => {
    console.log("conversion in progress");
  })
  .on('end', () => {
    console.log("conversion from mp3 to wav finished");
    resolve();
  })
  .pipe(remoteWriteStream, { end: true });
Code to convert the file locally:
ffmpeg(mp3File)
  .withAudioChannels(1)
  .toFormat('wav')
  .on('error', (err) => {
    console.log("error while conversion " + err);
    reject(err);
  })
  .on('progress', (progress) => {
    console.log("conversion in progress");
  })
  .on('end', () => {
    console.log("conversion from mp3 to wav finished");
    resolve();
  })
  .save('path/to/save/file/file.wav');
The conversion is done with JavaScript in both cases; fetching and opening the WAV file is done with Python.
Code to fetch and open the WAV file from the cloud:
import io
from scipy.io import wavfile

wavBucketFileBlob = bucket.get_blob('path/to/file/in/cloud/file.wav')
wavBucketFileString = wavBucketFileBlob.download_as_string()
rate, audio = wavfile.read(io.BytesIO(wavBucketFileString))
EDIT:
I think the problem is in the header.
WAV file that can't be read (saved in the cloud as audio/wav):
b'RIFF$\x01A\x01WAVEfmt \x10\x00\x00\x00\x01\x00\x01\x00D\xac\x00\x00\x88X\x01\x00\x02\x00\x10\x00LIST\xf8\x00\x00\x00INFOIART\n\x00\x00\x00Cat Power\x00ICMT6\x00\x00\x00https://open.spotify.com/track/2ilo3w0stilJKeQZS61FeN\x00ICOP\x1d\x00\x00\x002018 Domino Recording Co Ltd\x00\x00ICRD\x05\x00\x00\x002018\x00\x00IGNR\x08\x00\x00\x00Art Pop\x00INAM\x05\x00\x00\x00Stay\x00\x00IPRD\t\x00\x00\x00Wanderer\x00\x00IPRT\x05\x00\x00\x006/11\x00\x00ISFT\x0e\x00\x00\x00Lavf58.24.100\x00ITCH\x14\x00\x00\x00Domino Recording Co\x00data\x00\x00A\x01\x00\
wav file that can be read (saved in the cloud automatically when uploaded as audio/x-wav):
b'RIFF\xff\xff\xff\xffWAVEfmt \x10\x00\x00\x00\x01\x00\x01\x00D\xac\x00\x00\x88X\x01\x00\x02\x00\x10\x00LIST\xf8\x00\x00\x00INFOIART\n\x00\x00\x00Cat Power\x00ICMT6\x00\x00\x00https://open.spotify.com/track/2ilo3w0stilJKeQZS61FeN\x00ICOP\x1d\x00\x00\x002018 Domino Recording Co Ltd\x00\x00ICRD\x05\x00\x00\x002018\x00\x00IGNR\x08\x00\x00\x00Art Pop\x00INAM\x05\x00\x00\x00Stay\x00\x00IPRD\t\x00\x00\x00Wanderer\x00\x00IPRT\x05\x00\x00\x006/11\x00\x00ISFT\x0e\x00\x00\x00Lavf58.24.100\x00ITCH\x14\x00\x00\x00Domino Recording Co\x00data\xff\xff\xff\xff\
When converting my MP3 to WAV, I tried saving it as audio/x-wav, since that seemed to work, but it made no difference: I was still having the same problem.
My workaround:
I was using scipy.io to read the WAV file, so I switched to soundfile, which solved my problem but with a downside: the quality of my output after further processing seems slightly lower (though not really noticeable) than when I use scipy.io.
I'm using the Dropbox API to upload files. To upload the files to Dropbox I go through the following steps:
First upload file from form to a local directory on the server.
Read File from local directory using fs.createReadStream
Send the file to Dropbox via the Dropbox API.
The issue:
For some reason fs.createReadStream takes absolute ages when reading and uploading a large file. The file I'm trying to upload is only 12 MB, which is not big, yet it takes approximately 18 minutes to upload/process.
I don't know whether the issue is in createReadStream or in the Dropbox API code.
It works fine with files whose size is in the kilobytes.
My Code:
let options = {
  method: 'POST',
  uri: 'https://content.dropboxapi.com/2/files/upload',
  headers: {
    'Authorization': 'Bearer TOKEN HERE',
    'Dropbox-API-Arg': "{\"path\": \"/test/" + req.file.originalname + "\",\"mode\": \"overwrite\",\"autorename\": true,\"mute\": false}",
    'Content-Type': 'application/octet-stream'
  },
  // I think the issue is here.
  body: fs.createReadStream(`uploads/${req.file.originalname}`)
};
rp(options)
  .then(() => {
    return _deleteLocalFile(req.file.originalname);
  })
  .then(() => {
    return _generateShareableLink(req.file.originalname);
  })
  .then((shareableLink) => {
    sendJsonResponse(res, 200, shareableLink);
  })
  .catch(function (err) {
    sendJsonResponse(res, 500, err);
  });
Update:
const rp = require('request-promise-native');
I had a similar experience before, and after a large amount of head scratching and digging around I was able to resolve the issue, in my case anyway.
For me, the issue arose because the default chunk size for createReadStream() is quite small (64 KB), and for some reason this had a knock-on effect when uploading to Dropbox.
The solution, therefore, was to increase the chunk size.
// Try using chunks of 256 KB instead of the default 64 KB
body: fs.createReadStream(`uploads/${req.file.originalname}`, { highWaterMark: 256 * 1024 })
I believe you need to pipe the stream to the request. See the streaming section of the request docs: https://github.com/request/request#streaming
See also this answer: Sending large image data over HTTP in Node.js
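For what it's worth, a rough sketch of that piping approach with the plain request library (keeping the same options object as above, minus its body property; untested):

const request = require('request');

// Stream the file into the request rather than handing the read stream to
// `body`, so the data is sent to Dropbox as it is read from disk
fs.createReadStream(`uploads/${req.file.originalname}`)
  .pipe(request(options, (err, response, body) => {
    if (err) {
      return sendJsonResponse(res, 500, err);
    }
    sendJsonResponse(res, 200, body);
  }));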