I have a URL that fetches a music file from an AWS S3 bucket.
It takes about 10-15 seconds for the song to start playing, because the whole file is fetched before playback begins.
I want to fetch the song in chunks and play them in the HTML5 audio player,
so that once the user hits play, the song starts streaming while the rest of it is still being fetched.
import React, { useEffect } from 'react';
import Waveform from './Waveform';

// Concatenate two decoded AudioBuffers into a new, longer buffer
const appendBuffer = (buffer1, buffer2, context) => {
  const numberOfChannels = Math.min(
    buffer1.numberOfChannels,
    buffer2.numberOfChannels
  );
  const tmp = context.createBuffer(
    numberOfChannels,
    buffer1.length + buffer2.length,
    buffer1.sampleRate
  );
  for (let i = 0; i < numberOfChannels; i++) {
    const channel = tmp.getChannelData(i);
    channel.set(buffer1.getChannelData(i), 0);
    channel.set(buffer2.getChannelData(i), buffer1.length);
  }
  return tmp;
};

const Player = ({ selectedTrack }) => {
  const getData = async (selectedTrack) => {
    const fetchedData = await fetch(selectedTrack);
    const reader = fetchedData.body.getReader();
    const context = new AudioContext();
    reader.read().then(async function processAudio({ done, value }) {
      if (done) {
        console.log('Stream finished. Content received:');
        return;
      }
      try {
        console.log('processAudio -> value', value.buffer);
        const buffer = await context.decodeAudioData(value.buffer);
        const source = context.createBufferSource();
        // source was just created, so source.buffer is always null here
        const newAudioBuffer =
          source && source.buffer
            ? appendBuffer(source.buffer, buffer, context)
            : buffer;
        source.buffer = newAudioBuffer;
        source.connect(context.destination);
        // schedules the chunk at t = duration, not back-to-back
        source.start(source.buffer.duration);
        console.log(
          'processAudio -> source.buffer.duration',
          source.buffer.duration
        );
      } catch (error) {
        console.log('processAudio -> error', error);
      }
      return reader.read().then(processAudio);
    });
  };

  useEffect(() => {
    getData(selectedTrack);
  }, [selectedTrack]);

  return (
    <div className="player">
      <Waveform url={selectedTrack} />
      <br />
    </div>
  );
};

export default Player;
What I have now fetches the audio in chunks, but each chunk plays as soon as it is decoded, so playback runs much too fast and there is no way to make the chunks play one after another.
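Rather than decoding chunks by hand (decodeAudioData expects a complete file, which is why later chunks fail or overlap), one approach that matches the stated goal of streaming into the HTML5 audio player is the Media Source Extensions API. The following is a minimal sketch, under the assumptions that the S3 object allows CORS, the format is one MSE supports (audio/mpeg here; MediaSource.isTypeSupported can verify this per browser), and trackUrl plus the <audio> element stand in for your own:

const audio = document.querySelector('audio');
const mediaSource = new MediaSource();
audio.src = URL.createObjectURL(mediaSource);

// Resolves once the SourceBuffer can accept more data
const whenIdle = (sb) =>
  sb.updating
    ? new Promise((resolve) =>
        sb.addEventListener('updateend', resolve, { once: true })
      )
    : Promise.resolve();

mediaSource.addEventListener('sourceopen', async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('audio/mpeg');
  const response = await fetch(trackUrl); // trackUrl: your S3 URL
  const reader = response.body.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    await whenIdle(sourceBuffer);
    if (done) {
      mediaSource.endOfStream(); // no more chunks coming
      break;
    }
    sourceBuffer.appendBuffer(value); // hand the raw bytes to the element
  }
});

// play() typically has to run from a user-gesture handler
audio.play();

Unlike decodeAudioData, appendBuffer accepts raw byte ranges, so the element can start playing as soon as the first chunks arrive and keeps the chunks in order for you.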
I'm trying to replicate in React a metronome built with Vue. Basically the play function keeps calling itself while playing is true. Here is the Vue code:
data() {
  return {
    playing: false,
    audioContext: null,
    audioBuffer: null,
    currentTime: null
  };
},
mounted() {
  this.initContext();
},
methods: {
  toggle() {
    if (!this.playing) {
      this.currentTime = this.audioContext.currentTime;
      this.playing = true;
      this.play();
    } else {
      this.playing = false;
    }
  },
  play() {
    this.currentTime += 60 / this.$store.state.tempo;
    const source = this.audioContext.createBufferSource();
    source.buffer = this.audioBuffer;
    source.connect(this.audioContext.destination);
    source.onended = this.playing ? this.play : null;
    source.start(this.currentTime);
  },
  async initContext() {
    this.audioContext = new AudioContext();
    this.audioBuffer = await fetch("click.wav")
      .then(res => res.arrayBuffer())
      .then(arrayBuffer =>
        this.audioContext.decodeAudioData(arrayBuffer)
      );
  }
}
This is what I tried. The problem I have is that React state updates are asynchronous, so the value of playing captured by the running function never changes.
const firstRender = useRef(true);
const [bpm, setBpm] = useState<number>(60);
const [playing, setPlaying] = useState<boolean>(false);
const [context, setContext] = useState<AudioContext>(undefined);
const [buffer, setBuffer] = useState<AudioBuffer>(undefined);
const [currentTime, setCurrentTime] = useState<number>(0);

useEffect(() => {
  if (firstRender.current) {
    firstRender.current = false;
    initContext();
    return;
  }
});

const toggle = () => {
  if (!playing) {
    setCurrentTime(context.currentTime);
    setPlaying(true);
    play();
  } else {
    setPlaying(false);
  }
};

const play = () => {
  setCurrentTime(currentTime + 60 / bpm);
  let source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  // playing is read from this render's closure, so it is stale here
  source.onended = playing ? play : undefined;
  source.start(currentTime);
};

const initContext = async () => {
  const audioContext = new AudioContext();
  const audioBuffer = await fetch("click.wav")
    .then(res => res.arrayBuffer())
    .then(arrayBuffer =>
      audioContext.decodeAudioData(arrayBuffer)
    );
  setContext(audioContext);
  setBuffer(audioBuffer);
};
I guess the solution involves useEffect, but I could not manage to recreate the behavior. Any suggestion is appreciated!
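One common fix is to keep the value the self-calling callback needs in a ref, which is read synchronously and never goes stale, while the useState copy only drives the UI. A sketch reusing context, buffer, bpm, and setPlaying from the attempt above (playingRef and nextTimeRef are names introduced here for illustration):

const playingRef = useRef(false); // read inside play(), never stale
const nextTimeRef = useRef(0);    // AudioContext time of the next click

const play = () => {
  if (!playingRef.current) return; // stop rescheduling once toggled off
  nextTimeRef.current += 60 / bpm;
  const source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  source.onended = play;           // chain the next click
  source.start(nextTimeRef.current);
};

const toggle = () => {
  if (!playingRef.current) {
    playingRef.current = true;
    setPlaying(true);              // the state copy only drives rendering
    nextTimeRef.current = context.currentTime;
    play();
  } else {
    playingRef.current = false;
    setPlaying(false);
  }
};

If the tempo can change while the metronome runs, bpm has the same stale-closure problem and can be mirrored into a ref the same way.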
I need to take an audio recording and upload it to Firebase. I do this, but the file type that reaches Firebase is incorrect, and when the type is correct I cannot listen to the audio in Firebase.
import { useReactMediaRecorder } from "react-media-recorder";

const { status, startRecording, stopRecording, clearBlobUrl, mediaBlobUrl } =
  useReactMediaRecorder({
    audio: true,
    blobPropertyBag: {
      type: "audio/m4a",
    },
  });

const audioUpload = async (e: any) => {
  // reader is a FileReader populated elsewhere (not shown)
  saveMedia(reader.result?.toString(), user);
};
Firebase:
const uploadImage = async (
  file: any,
  user: IUser
): Promise<IUploadImage | null> => {
  const storage = getStorage(firebaseApp);
  const child = "1abcd/koray-test/" + idGenerator();
  const imageRef = ref(storage, child);
  const snapshot = await uploadToFirebase(imageRef, file);
  if (!snapshot) {
    return null;
  }
  const fbReference = snapshot.ref.fullPath;
  return { fbReference, file };
};
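A sketch of one likely fix, assuming the Firebase v9 modular SDK and reusing firebaseApp and idGenerator from the code above ("uploadRecording" is a name introduced here). Note that "audio/m4a" is not a registered MIME type (audio/mp4 is), and blobPropertyBag only relabels the blob without changing the actual encoding, so the safest route is to label the upload with the type the recorder really produced:

import { getStorage, ref, uploadBytes } from "firebase/storage";

const uploadRecording = async (mediaBlobUrl) => {
  // mediaBlobUrl comes from useReactMediaRecorder; turn it back into a Blob
  const blob = await fetch(mediaBlobUrl).then((res) => res.blob());
  const storage = getStorage(firebaseApp);
  const audioRef = ref(storage, "1abcd/koray-test/" + idGenerator());
  // contentType metadata is what lets the file play back from Firebase
  const snapshot = await uploadBytes(audioRef, blob, {
    contentType: blob.type || "audio/webm", // browsers usually record webm or mp4
  });
  return snapshot.ref.fullPath;
};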
So I have a function in a module. This is the index file:
const track = require('./components/trackPlays');
const Plays = track.Plays(username, expT, expA);
This is the module file:
const axios = require('axios');
const keys = require('../../../../keys/lfmconfig.json');

const Plays = (f, t, a) => {
  const requestURL = `http://ws.audioscrobbler.com/2.0/?method=track.getInfo&api_key=${keys.apikey}&username=${f}&artist=${a}&track=${t}&format=json`;
  axios.get(requestURL)
    .then((response) => {
      console.log(response.data.track.userplaycount);
      // this return only resolves the inner promise, not Plays itself
      return response.data.track.userplaycount;
    });
};
exports.Plays = Plays;
I want to use the value response.data.track.userplaycount in my index file, but when I try to return it, it just returns undefined.
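A sketch of the standard fix: Plays never returns anything (the return inside .then only resolves that inner promise), so have the function itself return the value via async/await:

const axios = require('axios');
const keys = require('../../../../keys/lfmconfig.json');

const Plays = async (f, t, a) => {
  const requestURL = `http://ws.audioscrobbler.com/2.0/?method=track.getInfo&api_key=${keys.apikey}&username=${f}&artist=${a}&track=${t}&format=json`;
  const response = await axios.get(requestURL);
  return response.data.track.userplaycount; // now returned from Plays
};
exports.Plays = Plays;

The caller then has to await the promise as well, e.g. const plays = await track.Plays(username, expT, expA); inside an async function.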
I need to upload a file (audio/video) using the default input type='file', and then I should pass the duration of the video in the API request. How can I do this?
const uploadFile = async (event) => {
  let file = event.target.files[0];
  // here the API POST request where I should pass the duration
};
You can get the audio duration with HTMLMediaElement.duration:
async function getDuration(file) {
  const url = URL.createObjectURL(file);
  return new Promise((resolve) => {
    const audio = document.createElement("audio");
    audio.muted = true;
    const source = document.createElement("source");
    source.src = url; // --> blob URL
    audio.preload = "metadata";
    audio.appendChild(source);
    audio.onloadedmetadata = function () {
      resolve(audio.duration);
    };
  });
}
Then in your function:
const uploadFile = async (event) => {
  const file = event.target.files[0];
  const duration = await getDuration(file);
  // here the API POST request where you pass the duration
};
You just need to create an element based on the user input (video/audio) and read its duration property:
const VIDEO = "video",
  AUDIO = "audio";

const uploadApiCall = (file, data = {}) => {
  // ----- YOUR API CALL CODE HERE -----
  document.querySelector("#duration").innerHTML = `${data.duration}s`;
  document.querySelector("#type").innerHTML = data.type;
};

let inputEl = document.querySelector("#fileinput");
inputEl.addEventListener("change", (e) => {
  let fileType = "";
  let file = inputEl.files[0];
  if (file.type.startsWith("audio/")) {
    fileType = AUDIO;
  } else if (file.type.startsWith("video/")) {
    fileType = VIDEO;
  } else {
    alert("Unsupported file");
    return;
  }
  let dataURL = URL.createObjectURL(file);
  let el = document.createElement(fileType);
  el.src = dataURL;
  el.onloadedmetadata = () => {
    uploadApiCall(file, {
      duration: el.duration,
      type: fileType
    });
  };
});
<form>
  <input type="file" accept="video/*,audio/*" id="fileinput" />
  <hr />
  Type: <span id="type"></span>
  <br />
  Duration: <span id="duration"></span>
</form>
In Vue 3 JS, I had to create a function first:
const getDuration = async (file) => {
  const url = URL.createObjectURL(file);
  return new Promise((resolve) => {
    const audio = document.createElement("audio");
    audio.muted = true;
    const source = document.createElement("source");
    source.src = url; // --> blob URL
    audio.preload = "metadata";
    audio.appendChild(source);
    audio.onloadedmetadata = function () {
      resolve(audio.duration);
    };
  });
}
The user would select an MP3 file. Then when it was submitted I could call that function in the Submit function:
const handleAudioSubmit = async () => {
  console.log('Your Episode Audio is being stored... please stand by!')
  if (file.value) {
    // returns a number that represents audio seconds
    duration.value = await getDuration(file.value)
    // remove the decimals by rounding
    duration.value = Math.round(duration.value)
    console.log("duration: ", duration.value)
    // load the audio file to Firebase Storage using a composable function
    await uploadAudio(file.value)
      .then((downloadURL) => {
        // composable function returns Firebase Storage location URL
        epAudioUrl.value = downloadURL
      })
      .then(() => {
        console.log("uploadAudio function finished")
      })
      .then(() => {
        // set the album fields, based on the album id, in the Firestore DB
        const updateAudio = doc(db, "artist", artistId.value, "albums", albumID.value)
        updateDoc(updateAudio, {
          audioUrl: epAudioUrl.value,
          audioDuration: duration.value
        })
        console.log("Audio URL and Duration added to Firestore!")
      })
      .then(() => {
        console.log('Episode Audio has been added!')
        router.push({ name: 'Next' })
      })
  } else {
    file.value = null
    fileError.value = 'Please select an audio file (MP3)'
  }
}
This takes some time to run and needs refactoring, but it works provided you give the async functions time to finish. Hope that helps!
I am trying to create a video chat with Twilio. I could turn on the webcam and run the video; however, I could not make the audio work. When I select the control, I can enlarge the video and enter picture-in-picture mode, but I cannot control the audio.
Here is the code:
import React from "react";
import Video from "twilio-video";

function App() {
  let localMediaRef = React.useRef(null);
  const [data, setIdentity] = React.useState({
    identity: null,
    token: null
  });
  const [room, setRoom] = React.useState({
    activeRoom: null,
    localMediaAvailable: null,
    hasJoinedRoom: null
  });

  async function fetchToken() {
    try {
      const response = await fetch("/token");
      const jsonResponse = await response.json();
      const { identity, token } = jsonResponse;
      setIdentity({
        identity,
        token
      });
    } catch (e) {
      console.error("e", e);
    }
  }

  React.useEffect(() => {
    fetchToken();
  }, []);

  // Attaches tracks to a specified DOM container
  const attachTracks = (tracks, container) => {
    tracks.forEach(track => {
      container.appendChild(track.attach());
    });
  };

  const attachParticipantTracks = (participant, container) => {
    const tracks = Array.from(participant.tracks.values());
    attachTracks(tracks, container);
  };

  // Called when this client joins a room
  const roomJoined = room => {
    console.log("Joined as '" + data.identity + "'");
    setRoom({
      activeRoom: room,
      localMediaAvailable: true,
      hasJoinedRoom: true
    });
    // Attach the LocalParticipant's tracks, if not already attached
    const previewContainer = localMediaRef.current;
    if (!previewContainer.querySelector("video")) {
      attachParticipantTracks(room.localParticipant, previewContainer);
    }
  };

  const joinRoom = () => {
    let connectOptions = {
      name: "Interview Testing"
    };
    let settings = {
      audio: true
    };
    console.log('data', data, data.token);
    Video.connect(
      data.token,
      connectOptions,
      settings
    ).then(roomJoined, error => {
      alert("Could not connect to Twilio: " + error.message);
    });
  };

  return (
    <div className="App">
      <FeatureGrid>
        <span onClick={joinRoom}>Webcam</span>
      </FeatureGrid>
      <PanelGrid>
        {room.localMediaAvailable ? (
          <VideoPanels>
            <VideoPanel ref={localMediaRef} />
          </VideoPanels>
        ) : (
          ""
        )}
      </PanelGrid>
    </div>
  );
}

export default App;
How do I enable audio too? Also, the video controls are shown only after a right click; can't we show them by default?
UPDATE: inspecting the tracks shows a LocalAudioTrack on the local side and a RemoteAudioTrack on the remote side.
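A hedged sketch of the likely fix, assuming twilio-video v2 (the exact version isn't stated above): connect() accepts the token plus a single options object, so the separate settings argument is silently ignored and audio: true never reaches the SDK. Also, in v2 participant.tracks holds TrackPublications, so it is the publication's .track that gets attached:

const joinRoom = () => {
  // audio must live inside the one options object that connect() accepts
  Video.connect(data.token, {
    name: "Interview Testing",
    audio: true,
    video: true
  }).then(roomJoined, error => {
    alert("Could not connect to Twilio: " + error.message);
  });
};

// participant.tracks contains TrackPublications in v2, so unwrap .track;
// track.attach() returns an <audio> or <video> element depending on kind
const attachParticipantTracks = (participant, container) => {
  participant.tracks.forEach(publication => {
    if (publication.track) {
      container.appendChild(publication.track.attach());
    }
  });
};

As for the controls: track.attach() creates the media element without the controls attribute, which is why they only appear via the right-click menu; setting el.controls = true on the attached element (or rendering your own UI) shows them by default.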