I am trying to play a .ulaw file in React, and I didn't find any straightforward way to play it. So I tried to convert the µ-law data to WAV using the wavefile package, but after conversion it plays as noise, and I am not able to identify where the actual error is.
ulaw file details:
Sample rate: 8000 Hz
Bit rate: 64 kbps
Channel: mono
import voice from '../src/app/components/voice.ulaw'

const WaveFile = require('wavefile').WaveFile;
let wav = new WaveFile();

const playUlawFile = () => {
  fetch(voice)
    .then(r => r.text())
    .then(text => Buffer.from(text, "utf-8").toString("base64"))
    .then(result => {
      wav.bitDepth = '128000'
      wav.fromScratch(1, 8000, '64', result);
      wav.fromMuLaw();
      wav.toSampleRate(64000);
      return wav;
    }).then(wav => {
      const audio = new Audio(wav.toDataURI())
      return audio;
    }).then(audio => {
      audio.play();
    })
}
The changes below worked for me; there is still some noise, but the result is roughly an 80% match to the original sound.
const readFile = () => {
  fetch(voice)
    .then(r => r.text())
    .then(text => btoa(unescape(encodeURIComponent(text))))
    .then(result => {
      wav.fromScratch(1, 18000, '8m', Buffer.from(result, "base64"));
      wav.fromMuLaw(32);
      wav.toSampleRate(64000, { method: "sinc" });
      return wav;
    }).then(wav => {
      const audio = new Audio(wav.toDataURI())
      return audio;
    }).then(audio => {
      audio.play();
    })
}
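For what it's worth, much of the remaining noise likely comes from reading the binary file with r.text(), which mangles the µ-law bytes. Below is a minimal sketch of the conversion without the text round-trip, assuming the file is headerless 8 kHz µ-law and using the same wavefile API; treat it as a starting point rather than a drop-in fix.

const playUlawAsWav = () => {
  fetch(voice)
    .then(r => r.arrayBuffer()) // read raw bytes; r.text() corrupts binary data
    .then(buffer => {
      const wav = new WaveFile();
      // one channel, 8000 Hz, '8m' = 8-bit mu-law samples
      wav.fromScratch(1, 8000, '8m', new Uint8Array(buffer));
      wav.fromMuLaw(); // decode mu-law into 16-bit linear PCM
      new Audio(wav.toDataURI()).play();
    });
};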
I wrote a script that downloads an MP3 file from my S3 bucket and then manipulates it before download (adding ID3 tags).
It's working and the tags are injected properly, but the file seems to get corrupted and is unplayable.
I can still see my tags through Mp3tag, so there is data in the file, but no audio plays from it.
Here's my code; I'm trying to figure out what went wrong:
const downloadFileWithID3 = async (filename, downloadName, injectedEmail) => {
  try {
    const data = await s3Client.send(
      new GetObjectCommand({
        Bucket: "BUCKETNAME",
        Key: filename,
      })
    );
    const fileStream = streamSaver.createWriteStream(downloadName);
    const writer = fileStream.getWriter();
    const reader = data.Body.getReader();
    const pump = () =>
      reader.read().then(({ value, done }) => {
        if (done) writer.close();
        else {
          // Note: this block runs for every chunk of the stream, so an ID3
          // tag is written into each chunk rather than once per file
          const arrayBuffer = value;
          const writerID3 = new browserId3Writer(arrayBuffer);
          const titleAndArtist = downloadName.split("-");
          const [artist, title] = titleAndArtist;
          writerID3.setFrame("TIT2", title.slice(0, -4));
          writerID3.setFrame("TPE1", [artist]);
          writerID3.setFrame("TCOM", [injectedEmail]);
          writerID3.addTag();
          let taggedFile = new Uint8Array(writerID3.arrayBuffer);
          writer.write(taggedFile).then(pump);
        }
      });
    await pump()
      .then(() => console.log("Closed the stream, Done writing"))
      .catch((err) => console.log(err));
  } catch (err) {
    console.log(err);
  }
};
Hope you can help me solve this weird bug.
Thanks in advance!
OK, so I've figured it out: instead of using the chunks of the stream itself, I used getSignedUrl from the S3 bucket, and it works.
Thanks everyone for trying to help out!
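For reference, a minimal sketch of that getSignedUrl approach, assuming the same s3Client, bucket name, and browserId3Writer import as above: presign a GET URL for the object, fetch the whole file, tag it once, and hand it to the browser as a download.

import { GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const downloadFileWithID3 = async (filename, downloadName, injectedEmail) => {
  // Presign a GET for the object, then fetch the complete file in one piece
  const url = await getSignedUrl(
    s3Client,
    new GetObjectCommand({ Bucket: "BUCKETNAME", Key: filename }),
    { expiresIn: 3600 }
  );
  const arrayBuffer = await (await fetch(url)).arrayBuffer();

  // Tag the whole file once, instead of tagging every stream chunk
  const writerID3 = new browserId3Writer(arrayBuffer);
  const [artist, title] = downloadName.split("-");
  writerID3.setFrame("TIT2", title.slice(0, -4));
  writerID3.setFrame("TPE1", [artist]);
  writerID3.setFrame("TCOM", [injectedEmail]);
  writerID3.addTag();

  // Trigger the browser download of the tagged file
  const a = document.createElement("a");
  a.href = writerID3.getURL();
  a.download = downloadName;
  a.click();
};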
I have some code like this:
fetch(audioSRC).then(response => {
  return new Response(new ReadableStream({
    start(controller) {
      const reader = response.body.getReader();
      read();
      function read() {
        reader.read().then(({ done, value }) => {
          if (done) {
            controller.close();
            return;
          }
          controller.enqueue(value); // pass each chunk through to the new stream
          read();
        })
      }
    }
  }));
}).then(response => response.blob()).then(blob => {
  let vid = URL.createObjectURL(blob);
  player.src = vid;
})
I was wondering whether it is possible to take this stream while it's incomplete and play what IS downloaded in the video/audio element before the download is complete. If there is a way to do it smoothly, it would be good, so that the user doesn't have to wait until the file is fully downloaded.
It works with the MediaSource API: https://developer.mozilla.org/en-US/docs/Web/API/MediaSource
After hours of experimenting, I found a half-working solution: https://stackoverflow.com/a/68778572/11979842
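To expand on the MediaSource suggestion: the idea is to hand the element a MediaSource object URL and append fetched chunks to a SourceBuffer as they arrive, so playback can start before the download finishes. A minimal sketch for an MP3 source, reusing audioSRC and player from the question and assuming the browser supports audio/mpeg in Media Source Extensions:

const mediaSource = new MediaSource();
player.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('audio/mpeg');
  const reader = (await fetch(audioSRC)).body.getReader();

  const appendNext = async () => {
    const { done, value } = await reader.read();
    if (done) {
      mediaSource.endOfStream(); // no more data; the element finishes what it has
      return;
    }
    // appendBuffer is asynchronous; wait for 'updateend' before appending more
    sourceBuffer.addEventListener('updateend', appendNext, { once: true });
    sourceBuffer.appendBuffer(value);
  };
  appendNext();
});

player.play(); // may require a prior user gesture, depending on autoplay policy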
I am having an issue that I can't seem to figure out. I am creating an application in which clients in a room will be able to talk to each other via their microphones. More specifically, I need to stream audio from one client's microphone and send the data (while it is being recorded) to the server and then on to a different client, so that they can hear the other person's voice in real time. I have a method of doing this, but it is inefficient, choppy, and blatantly bad...
My method as of now:
This snippet is on the client side of the person recording audio:
setInterval(() => {
  navigator.mediaDevices.getUserMedia({ audio: true })
    .then(stream => {
      const mediaRecorder = new MediaRecorder(stream);
      mediaRecorder.start();
      const audioChunks = [];
      mediaRecorder.addEventListener("dataavailable", (event) => {
        audioChunks.push(event.data);
      });
      mediaRecorder.addEventListener("stop", () => {
        socket.emit('liveAudioToServer', audioChunks)
      });
      setTimeout(() => {
        mediaRecorder.stop();
      }, 2000);
    });
}, 2000);
This snippet is the server side:
socket.on('liveAudioToServer', (data) => {
  socket.broadcast.to(room).emit('liveAudioToClient', data)
})
And this snippet is the client side on the receiving end of the audio:
socket.on('liveAudioToClient', (data) => {
  const audioBlob = new Blob(data);
  const audioUrl = URL.createObjectURL(audioBlob);
  const audio = new Audio(audioUrl);
  audio.play()
})
Basically, what this code does is send an audio buffer to the server every two seconds, and the server then broadcasts the buffer to the other clients in the room. Once the other clients receive the buffer, they compile it into audio that is played. The main issue is that you can clearly tell where the audio ends and begins; it is not smooth in the slightest. However, I tried a different solution that, with some tweaking, I feel could work. But it doesn't work as of now.
Other Method:
if (navigator.getUserMedia) {
  navigator.getUserMedia(
    { audio: true },
    function (stream) {
      const audioContext3 = new AudioContext();
      const audioSource3 = audioContext3.createMediaStreamSource(stream);
      const analyser3 = audioContext3.createAnalyser();
      audioSource3.connect(analyser3);
      analyser3.fftSize = 256;
      const bufferLength = analyser3.frequencyBinCount;
      const dataArray = new Uint8Array(bufferLength);
      function sendAudioChunks() {
        // getByteFrequencyData fills dataArray with frequency magnitudes,
        // not raw audio samples, so the waveform cannot be rebuilt from it
        analyser3.getByteFrequencyData(dataArray);
        requestAnimationFrame(sendAudioChunks);
        socket.emit('liveAudioToServer', dataArray)
      }
      sendAudioChunks();
    },
    function () { console.log("Error 003.") }
  );
}
Server side:
socket.on('liveAudioToServer', (data) => {
  socket.broadcast.to(getRoom(socket.id)).emit('liveAudioToClient', data)
})
Other client's side (receiving the audio):
socket.on('liveAudioToClient', (data) => {
  const audioBlob = new Blob([new Uint8Array(data)])
  const audioUrl = URL.createObjectURL(audioBlob);
  const audioo = new Audio(audioUrl);
  audioo.play()
})
I've really looked everywhere for a solution to this and have had no luck, so if anyone could help me, that would be greatly appreciated!!!
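One thing that might help, as a hedged sketch rather than a full answer: a single MediaRecorder started with a timeslice keeps the capture continuous instead of re-acquiring the microphone every two seconds (variable names follow the snippets above). Note that with a timeslice only the first chunk carries the container headers, so the receiving side has to append the chunks as one continuous stream (for example via MediaSource) rather than playing each blob on its own.

navigator.mediaDevices.getUserMedia({ audio: true })
  .then(stream => {
    const mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.addEventListener('dataavailable', (event) => {
      // event.data is a small Blob; forward each chunk as it arrives
      socket.emit('liveAudioToServer', event.data);
    });
    // Emit 'dataavailable' every 250 ms instead of restarting the recorder
    mediaRecorder.start(250);
  });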
I am working on a side project to download videos from Reddit, but Reddit serves video and audio as separate files, so I have to merge them before the client downloads them. I was able to do all of this, as in the following snippet of code.
const ffmpeg = require("fluent-ffmpeg");
const proc = new ffmpeg();

app.post('/download', async (req, res) => {
  const audio = "some audio link";
  const video = "some video link";
  proc.addInput(video)
    .output('${some path}./video.mp4')
    .format('mp4')
    .on("error", err => console.log(err))
    .on('end', () => console.log('Done'));
  if (audio) {
    proc.addInput(audio);
  }
  proc.run()
});
Using the above code, the video is downloaded locally on the server at the specified path, but I want the video to be downloaded in the browser of the client who sent the request. I tried:
proc.pipe(res);
but it didn't work. It's my first time working with ffmpeg, so it would be nice if someone could give me a hint.
Add writeToStream(res, { end: true }); at the end to stream it:
const ffmpeg = require("fluent-ffmpeg");

app.post('/download', async (req, res) => {
  const audio = "some audio link";
  const video = "some video link";
  ffmpeg(video).format('mp4')
    .on("error", err => console.log(err))
    .on('end', () => console.log('Done'))
    .writeToStream(res, { end: true });
});
I hope it works.
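One caveat worth adding: the mp4 muxer normally needs a seekable output, so piping .format('mp4') straight into res can fail with a muxing error. As far as I know, you also need fragmented-mp4 flags when writing to a stream; here is a sketch combining that with the audio input from the question:

const ffmpeg = require("fluent-ffmpeg");

app.post('/download', async (req, res) => {
  const audio = "some audio link";
  const video = "some video link";
  ffmpeg(video)
    .addInput(audio)
    .format('mp4')
    // Fragmented mp4 can be muxed to a non-seekable stream like res
    .outputOptions('-movflags frag_keyframe+empty_moov')
    .on('error', err => console.log(err))
    .on('end', () => console.log('Done'))
    .writeToStream(res, { end: true });
});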
I'm trying to download and play an audio file fetched from YouTube using ytdl and discord.js:
ytdl(url)
  .pipe(fs.createWriteStream('./music/downloads/music.mp3'));

var voiceChannel = message.member.voiceChannel;
voiceChannel.join().then(connection => {
  console.log("joined channel");
  const dispatcher = connection.playFile('./music/downloads/music.mp3');
  dispatcher.on("end", end => {
    console.log("left channel");
    voiceChannel.leave();
  });
}).catch(err => console.log(err));
isReady = true
I can successfully play the mp3 file in ./music/downloads/ without the ytdl part (ytdl(url).pipe(fs.createWriteStream('./music/downloads/music.mp3'));). But when that part is in the code, the bot just joins and leaves.
Here is the output with the ytdl part:
Bot has started, with 107 users, in 43 channels of 3 guilds.
joined channel
left channel
And here is the output without the ytdl part:
Bot has started, with 107 users, in 43 channels of 3 guilds.
joined channel
[plays mp3 file]
left channel
Why is that, and how can I solve it?
Use playStream instead of playFile when you need to play an audio stream.
const streamOptions = { seek: 0, volume: 1 };

var voiceChannel = message.member.voiceChannel;
voiceChannel.join().then(connection => {
  console.log("joined channel");
  const stream = ytdl('https://www.youtube.com/watch?v=gOMhN-hfMtY', { filter: 'audioonly' });
  const dispatcher = connection.playStream(stream, streamOptions);
  dispatcher.on("end", end => {
    console.log("left channel");
    voiceChannel.leave();
  });
}).catch(err => console.log(err));
You're doing it in an inefficient way. There's no synchronization between reading and writing. Wait for the file to be written to the filesystem, then read it!
Directly stream it
Redirect ytdl's video output to the dispatcher; it will be converted to Opus audio packets first, then streamed from your computer to Discord.
message.member.voiceChannel.join()
  .then(connection => {
    console.log('joined channel');
    connection.playStream(ytdl(url))
      // When no packets are left to send, leave the channel.
      .on('end', () => {
        console.log('left channel');
        connection.channel.leave();
      })
      // Handle errors without crashing the app (the dispatcher is an
      // EventEmitter, not a promise, so listen for 'error').
      .on('error', console.error);
  })
  .catch(console.error);
FWTR (first write, then read)
The approach you used was pretty close to success; the failure is that you don't synchronize reading with writing.
var stream = ytdl(url);
// Wait until writing is finished ('finish' fires on the write stream)
stream.pipe(fs.createWriteStream('tmp_buf_audio.mp3'))
  .on('finish', () => {
    message.member.voiceChannel.join()
      .then(connection => {
        console.log('joined channel');
        connection.playStream(fs.createReadStream('tmp_buf_audio.mp3'))
          // When no packets are left to send, leave the channel.
          .on('end', () => {
            console.log('left channel');
            connection.channel.leave();
          })
          // Handle errors without crashing the app.
          .on('error', console.error);
      })
      .catch(console.error);
  });