I am trying to create an extension that can change the volume of the current tab — specifically, one that can go above 100% volume.
I have found a way to change the audio of videos via the video HTML tag, but this is inconsistent across websites.
I can't find a Chrome API or anything else to help with this.
Does anyone have any insight into this topic?
This is the code I am currently using:
var videoElement = document.querySelector("video");
var audioCtx = new AudioContext();
var source = audioCtx.createMediaElementSource(videoElement);
var gainNode = audioCtx.createGain();
gainNode.gain.value = 1; // 1 = 100%; values above 1 amplify past the normal maximum
source.connect(gainNode);
gainNode.connect(audioCtx.destination);
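For the whole tab rather than a single video element, one option is the chrome.tabCapture extension API, which hands you a MediaStream of the tab's audio that can be routed through a GainNode. A sketch, assuming an extension page with the "tabCapture" permission; percentToGain is a hypothetical helper, and the browser check keeps the snippet from crashing outside an extension context:

```javascript
// Hypothetical helper: convert a percentage to a GainNode value
// (100% -> 1.0, 300% -> 3.0).
function percentToGain(percent) {
  return percent / 100;
}

// Extension-only section, guarded so it only runs where chrome.tabCapture exists.
if (typeof chrome !== 'undefined' && chrome.tabCapture) {
  chrome.tabCapture.capture({ audio: true, video: false }, (stream) => {
    const ctx = new AudioContext();
    // Route the captured tab audio through a gain stage and back out,
    // since capturing mutes the tab's normal output.
    const source = ctx.createMediaStreamSource(stream);
    const gainNode = ctx.createGain();
    gainNode.gain.value = percentToGain(300); // boost to 300%
    source.connect(gainNode);
    gainNode.connect(ctx.destination);
  });
}
```

This works on whatever the tab is playing, so it does not depend on the site using a video tag.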
I am making a website that is a simple piano. I hosted the site, but the audio now lags. When I run it from my local system it works fine, but not from the hosted site.
I want all the audio resources to load and be saved on the user's device (as a cache or something similar) so playback comes from the local copy and doesn't lag. I can't figure out how to do this; please help me with a solution.
This is what I tried:
const pianoKeys = document.querySelectorAll('.key');

function playSound(soundUrl) {
  const sound = new Audio(soundUrl);
  sound.currentTime = 0;
  sound.play();
}

pianoKeys.forEach((pianoKey, i) => {
  const num = i < 9 ? '0' + (i + 1) : (i + 1);
  const soundUrl = 'sounds/key' + num + '.ogg';
  pianoKey.addEventListener('mousedown', () => playSound(soundUrl));
});
The soundUrl builds the path to the audio file. Is there any way to load the file up front and play from the loaded copy?
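One common cause of this lag is that the code above constructs a new Audio object (and thus a fresh network request) on every mousedown. A sketch of preloading instead, creating one Audio per key at startup so each file is fetched once; keyUrl is a helper mirroring the original numbering scheme, and the document check keeps it testable outside a browser:

```javascript
// Helper mirroring the original numbering: key 0 -> 'sounds/key01.ogg',
// key 9 -> 'sounds/key10.ogg'.
function keyUrl(i) {
  const num = i < 9 ? '0' + (i + 1) : String(i + 1);
  return 'sounds/key' + num + '.ogg';
}

// Browser-only section.
if (typeof document !== 'undefined') {
  const pianoKeys = document.querySelectorAll('.key');

  // Create every Audio object once, up front, so the browser fetches and
  // caches each file before it is needed.
  const sounds = Array.from(pianoKeys, (_, i) => {
    const audio = new Audio(keyUrl(i));
    audio.preload = 'auto';
    return audio;
  });

  pianoKeys.forEach((pianoKey, i) => {
    pianoKey.addEventListener('mousedown', () => {
      sounds[i].currentTime = 0; // rewind so rapid presses restart the note
      sounds[i].play();
    });
  });
}
```

Reusing the same elements avoids the per-press request; for fully guaranteed local copies you would still need a service worker cache, but this alone usually removes the lag.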
I'm stuck on a problem: whenever I pass the stream from createMediaStreamDestination to an audio element's srcObject, no audio is played. My implementation is based on the answer posted here: Combine setSinkId with stereoPanner?
Initially, I have an audio element whose sound I isolate so that it plays only from the left speaker:
const audio = document.createElement('audio');
audio.src = audioUrl;
let audioContext = new AudioContext();
let source = audioContext.createMediaElementSource(audio);
let panner = audioContext.createStereoPanner();
let destination = audioContext.destination;
panner.pan.value = -1;
source.connect(panner).connect(destination);
The above plays sound fine once I add audio.play(), but I want to choose the specific speakers the audio plays out of while keeping the panner changes. Since AudioContext doesn't yet offer any way to set the sinkId, I created a new audio element and a MediaStreamAudioDestinationNode and passed its stream to the element's srcObject:
const audio = document.createElement('audio');
audio.src = audioUrl;
let audioContext = new AudioContext();
let source = audioContext.createMediaElementSource(audio);
let panner = audioContext.createStereoPanner();
let destination = audioContext.createMediaStreamDestination();
panner.pan.value = -1;
source.connect(panner).connect(destination);
const outputAudio = new Audio();
outputAudio.srcObject = destination.stream;
outputAudio.setSinkId(audioSpeakerId);
outputAudio.play();
With the new code, however, when I start my application, outputAudio doesn't play any sound at all. Is there anything wrong with my code that is causing the outputAudio element not to play? I'm fairly new to the Web Audio API; I tried implementing the code from the linked Stack Overflow thread, but it doesn't seem to work for me. Any help would be appreciated!
In the description of your first code block you mention that you additionally call audio.play() to start the audio. That is also necessary for the second code block to work: you need to start both audio elements.
Generally, calling play() on an audio element and creating a new AudioContext should happen in response to a user action, to make sure the browser's autoplay policy doesn't block the audio.
If all goes well, the state of your AudioContext should be "running".
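Putting that together, a sketch of the second snippet with both elements started from a click handler; audioUrl and audioSpeakerId are assumed to be defined as in the question, the '#start' button is hypothetical, and contextIsRunning is a small helper added for illustration:

```javascript
// Small helper (illustrative, not part of the original code): true when an
// AudioContext state string indicates it can produce sound.
function contextIsRunning(state) {
  return state === 'running';
}

// Browser-only section.
if (typeof document !== 'undefined') {
  document.querySelector('#start').addEventListener('click', async () => {
    const audio = document.createElement('audio');
    audio.src = audioUrl;
    const audioContext = new AudioContext();
    const source = audioContext.createMediaElementSource(audio);
    const panner = audioContext.createStereoPanner();
    const destination = audioContext.createMediaStreamDestination();
    panner.pan.value = -1;
    source.connect(panner).connect(destination);

    const outputAudio = new Audio();
    outputAudio.srcObject = destination.stream;
    await outputAudio.setSinkId(audioSpeakerId);

    // Start BOTH elements: the source element feeds the graph, and the
    // output element renders the MediaStream on the chosen device.
    audio.play();
    outputAudio.play();

    console.log(contextIsRunning(audioContext.state));
  });
}
```

Because everything runs inside the click handler, the autoplay policy is satisfied and the context should report "running".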
I want to mix several audio MediaStreams into one stream. I've been doing this with the Web Audio API's AudioContext and createMediaStreamSource.
But the final mixed audio stutters.
Does anyone have an idea how to optimize this to avoid the stuttering?
// init audio context
var audioContext = new AudioContext({ latencyHint: 0 });
var audioDestination = audioContext.createMediaStreamDestination();
// add audio streams
audioContext.createMediaStreamSource(audioStream1).connect(audioDestination);
audioContext.createMediaStreamSource(audioStream2).connect(audioDestination);
audioContext.createMediaStreamSource(audioStream3).connect(audioDestination);
audioContext.createMediaStreamSource(audioStream4).connect(audioDestination);
// get mixed audio stream tracks
var audioTrack = audioDestination.stream.getTracks()[0];
// get video track
var videoTrack = videoStream.getTracks()[0];
// combine video and audio tracks into single stream.
var finalStream = new MediaStream([videoTrack, audioTrack]);
// assign to video element
el_video.srcObject = finalStream;
You could try setting the latencyHint to 'playback' like this:
const audioContext = new AudioContext({ latencyHint: 'playback' });
This allows the browser to add a bit of latency to the audio graph, which can help on underpowered devices. Setting the latencyHint to 0, on the other hand, tells the browser to do things as fast as possible, which increases the risk of dropouts.
Having said that, the latencyHint is only a hint. The browser may very well ignore it. You can check what the browser is actually doing by inspecting the baseLatency property.
console.log(audioContext.baseLatency);
I used this JS library to export everything in my canvas as an MP4 video. I succeeded in exporting a video, but its duration is always 0.
Here's the library I used:
https://github.com/antimatter15/whammy
Here's the code I have so far; it can download the canvas and the elements inside it, but not the animation.
var canvas_video = document.querySelector('canvas').getContext('2d');
var encoder = new Whammy.Video(15);
var progress = document.getElementById('progress');
encoder.add(canvas_video); // only ever adds a single frame
encoder.compile(false, function (output) {
  var url = URL.createObjectURL(output);
  document.getElementById('download_link').href = url;
});
When I checked the console to debug it, it shows encodeFrame 0.
Can anyone advise on what I should do and whether I missed something?
For anyone who's still looking for the answer:
That library only outputs .webm, not .mp4.
WebM playback is supported by Chrome and Firefox, but Safari historically could not play it, so browsers without WebM support will stay stuck at time 0.
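Beyond the container format, the zero-duration symptom also points at a second issue: encoder.add() is called only once, so at most one frame is ever captured. Whammy needs one add() per frame while the animation runs. A sketch of a capture loop, where drawFrame is a hypothetical function that renders one step of the animation, and framesNeeded is a small helper:

```javascript
// Helper: how many frames to capture for a given duration and frame rate.
function framesNeeded(durationSeconds, fps) {
  return Math.round(durationSeconds * fps);
}

// Browser-only section, guarded so it only runs where Whammy is loaded.
if (typeof document !== 'undefined' && typeof Whammy !== 'undefined') {
  const ctx = document.querySelector('canvas').getContext('2d');
  const fps = 15;
  const encoder = new Whammy.Video(fps);
  const total = framesNeeded(3, fps); // capture 3 seconds of animation
  let frame = 0;

  function capture() {
    drawFrame(ctx, frame); // hypothetical: advance and draw one step
    encoder.add(ctx);      // one add() per frame
    if (++frame < total) {
      requestAnimationFrame(capture);
    } else {
      encoder.compile(false, function (output) {
        document.getElementById('download_link').href =
          URL.createObjectURL(output);
      });
    }
  }
  capture();
}
```

With multiple frames added, the compiled WebM has a real duration instead of 0.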
I have my JavaScript audio player working with .mp3s, but I'm not sure how to add a second audio format (.ogg) so the files will also play in Firefox. Can anyone help with this? Here is the array code:
var urls = [
  'audio/song1.mp3',
  'audio/song2.mp3',
  'audio/song3.mp3',
  'audio/song4.mp3'
];
var next = 0;
The easiest way to play sounds is with SoundManager 2, which uses Flash and HTML5 when available.
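An option that avoids an extra library is to feature-detect Ogg support with the audio element's canPlayType method and build the URL list with whichever extension the browser can play. This sketch assumes each song exists in both formats under audio/; pickExtension is a hypothetical helper:

```javascript
// Helper: choose the file extension based on whether Ogg is playable.
function pickExtension(canPlayOgg) {
  return canPlayOgg ? '.ogg' : '.mp3';
}

// Browser-only section.
if (typeof Audio !== 'undefined') {
  // canPlayType returns '', 'maybe', or 'probably'; anything non-empty
  // means the browser may be able to play the format.
  var probe = new Audio();
  var canPlayOgg = probe.canPlayType('audio/ogg; codecs="vorbis"') !== '';
  var ext = pickExtension(canPlayOgg);

  var urls = ['song1', 'song2', 'song3', 'song4'].map(function (name) {
    return 'audio/' + name + ext;
  });
  var next = 0;
}
```

Firefox has supported MP3 for years now, but this pattern still covers any browser that lacks one of the two formats.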