Controlling mixing with an oscillator or LFO - JavaScript

I have two oscillators with different waveshapes (triangle and square):
var oscTri = audioCtx.createOscillator();
var oscSqu = audioCtx.createOscillator();
oscTri.type = 'triangle';
oscSqu.type = 'square';
var mixTri = audioCtx.createGain();
var mixSqu = audioCtx.createGain();
oscTri.connect(mixTri);
oscSqu.connect(mixSqu);
mixTri.connect(audioCtx.destination);
mixSqu.connect(audioCtx.destination);
I'd like to control the mix of the two with a third oscillator, so that the output sound oscillates between the two waveforms (when the triangle gain is 1, the square gain is 0; when the triangle is 0.5, the square is 0.5; when the triangle is 0.75, the square is 0.25; and so on):
var modOsc = audioCtx.createOscillator();
How can I connect this modulator oscillator to get an "oscillation" between the two previous waveforms?

Set mixTri's gain to 1 and mixSqu's gain to -1, then connect modOsc to both gains' gain AudioParams; that should do the trick. Personally I would use filters, because I like them more. I made an example for you at https://gtube.de (my site): click on PUBLISH / SYNTHY DATABASE, open your example, then press the letter A to hear the effect. You can see the setup on the Synthesizer tab. My site doesn't do it with gain nodes, because they are not fixed there (it allows more than one key to be pressed at a time), but with only gain nodes it should work as well.
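Here is a minimal sketch of that kind of wiring, reusing the names from the question (my variant keeps both mix gains at 0.5 and feeds the second one an inverted copy of the LFO, so the two gains always sum to 1):
var modOsc = audioCtx.createOscillator();
modOsc.frequency.value = 0.5;         // LFO rate in Hz; pick whatever sweep speed you like
var modDepth = audioCtx.createGain(); // scales the LFO output to +/-0.5
modDepth.gain.value = 0.5;
var modInvert = audioCtx.createGain(); // inverted copy for the second mixer gain
modInvert.gain.value = -1;
mixTri.gain.value = 0.5;
mixSqu.gain.value = 0.5;
modOsc.connect(modDepth);
modDepth.connect(mixTri.gain);   // mixTri.gain now sweeps between 0 and 1
modDepth.connect(modInvert);
modInvert.connect(mixSqu.gain);  // mixSqu.gain sweeps the opposite way, 1 to 0
oscTri.start();
oscSqu.start();
modOsc.start();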
Cheers
Kilian

Related

Can I change the interval of the pitch shift while playing audio using tone.js?

I'll show you my code
audio_music = new Audio();
var track = audioContext.createMediaElementSource(audio_music);
//Import the music file (received from another source) as base64 data.
audio_music.src = "data:audio/ogg;base64,"+ data.music;
var splitter = audioContext.createChannelSplitter(6);
var merger = audioContext.createChannelMerger(2);
track.connect(splitter);
//gain nodes 2-5 omitted; they repeat the same pattern as 0 and 1
gainNode0 = audioContext.createGain(); gainNode0.gain.setValueAtTime((musicvolume*0.1), audioContext.currentTime);
gainNode1 = audioContext.createGain(); gainNode1.gain.setValueAtTime((musicvolume*0.1), audioContext.currentTime);
splitter.connect(gainNode0, 0);
splitter.connect(gainNode1, 1);
var pitchshift0 = new Tone.PitchShift(pitch);
var pitchshift1 = new Tone.PitchShift(pitch);
Tone.connect(gainNode0, pitchshift0);
Tone.connect(gainNode1, pitchshift1);
Tone.connect(pitchshift0, merger, 0, 0);
Tone.connect(pitchshift1, merger, 0, 1);
Tone.connect(merger, audioContext.destination);
I am not familiar with the AudioContext API or Tone.js, so I may be misunderstanding something, but my intention is to split the input source into six channels and process each one in the order: gain adjustment, pitch shift, then merge.
Everything else works, but I can't change the value of the pitch shift during playback.
I want something that works like the setValueAtTime I use on the GainNode, but for the pitch shift.
What should I do?
You can change the pitch by setting the pitch parameter:
pitchshift0.pitch = -12; // number of semitones to shift the pitch by
If you want to set this at a specific time during playback, you can use the Transport class to schedule this:
Tone.Transport.schedule(() => pitchshift0.pitch = -12, time /* The transport time you want to schedule it at */);
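For example, a minimal sketch (assuming the pitchshift nodes above; the 5-second offset is only an illustration, and the Transport has to be running for the scheduled callback to fire):
Tone.Transport.schedule(function (time) {
    pitchshift0.pitch = -12; // drop an octave five seconds into playback
    pitchshift1.pitch = -12;
}, 5);
Tone.Transport.start();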

Physijs simple collision between meshes without gravity

I am using Physijs to detect static collisions between my meshes, as I need to know which surfaces are intersecting.
I hacked together a simple demo that seems to work.
Currently I have to configure my scene to use gravity, which prevents me from positioning my meshes at any y position, as they start to fall or float.
Is there a simple way to remove gravity from the simulation and just use the mesh collision detection?
--- update ---
I had to explicitly set the mass of each mesh to 0 rather than leaving it blank. With mass = 0, gravity has no effect. Great!
However, the meshes are not reporting a collision.
Any ideas where I am going wrong?
Thanks
-lp
You cannot use Physijs for collision detection alone; it only comes as a full real-time physics simulation, based on the ammo.js library. When you set the mass of the meshes to 0, it made them static. They were then unresponsive to external forces, such as collision responses (i.e. the change of velocity applied to a mesh after a collision is detected) or gravity. Also, two static meshes that overlap each other do not fire a collision event.
Solution A: Use ammo.js directly
Ported from Bullet Physics, the library provides the necessary tools for running physics simulations, or for just detecting collisions between defined shapes (something Physijs doesn't expose). Here's a snippet for detecting collisions between two rigid spheres:
var bt_collision_configuration;
var bt_dispatcher;
var bt_broadphase;
var bt_collision_world;
var scene_size = 500;
var max_objects = 10; // Tweak this as needed
bt_collision_configuration = new Ammo.btDefaultCollisionConfiguration();
bt_dispatcher = new Ammo.btCollisionDispatcher(bt_collision_configuration);
var wmin = new Ammo.btVector3(-scene_size, -scene_size, -scene_size);
var wmax = new Ammo.btVector3(scene_size, scene_size, scene_size);
// This is one type of broadphase, Ammo.js has others that might be faster
bt_broadphase = new Ammo.bt32BitAxisSweep3(
    wmin, wmax, max_objects, 0, true /* disable raycast accelerator */);
bt_collision_world = new Ammo.btCollisionWorld(bt_dispatcher, bt_broadphase, bt_collision_configuration);
// Create two collision objects
var sphere_A = new Ammo.btCollisionObject();
var sphere_B = new Ammo.btCollisionObject();
// Move each to a specific location
sphere_A.getWorldTransform().setOrigin(new Ammo.btVector3(2, 1.5, 0));
sphere_B.getWorldTransform().setOrigin(new Ammo.btVector3(2, 0, 0));
// Create the sphere shape with a radius of 1
var sphere_shape = new Ammo.btSphereShape(1);
// Set the shape of each collision object
sphere_A.setCollisionShape(sphere_shape);
sphere_B.setCollisionShape(sphere_shape);
// Add the collision objects to our collision world
bt_collision_world.addCollisionObject(sphere_A);
bt_collision_world.addCollisionObject(sphere_B);
// Perform collision detection
bt_collision_world.performDiscreteCollisionDetection();
var numManifolds = bt_collision_world.getDispatcher().getNumManifolds();
// For each contact manifold
for (var i = 0; i < numManifolds; i++) {
    var contactManifold = bt_collision_world.getDispatcher().getManifoldByIndexInternal(i);
    var obA = contactManifold.getBody0();
    var obB = contactManifold.getBody1();
    contactManifold.refreshContactPoints(obA.getWorldTransform(), obB.getWorldTransform());
    var numContacts = contactManifold.getNumContacts();
    // For each contact point in that manifold
    for (var j = 0; j < numContacts; j++) {
        // Get the contact information
        var pt = contactManifold.getContactPoint(j);
        var ptA = pt.getPositionWorldOnA();
        var ptB = pt.getPositionWorldOnB();
        var ptdist = pt.getDistance();
        // Do whatever else you need with the information...
    }
}
// Oh yeah! Ammo.js wants us to deallocate
// the objects with 'Ammo.destroy(obj)'
I translated this C++ code into its JS equivalent. Some syntax may not carry over exactly, so check the Ammo.js API binding changes for anything that doesn't work.
Solution B: Use THREE's ray caster
The ray caster approach is less exact, but you can make it more precise by adding extra vertices to your shapes. Here's some code to detect a collision between two boxes:
// General box mesh data
var boxGeometry = new THREE.CubeGeometry(100, 100, 20, 1, 1, 1);
var boxMaterial = new THREE.MeshBasicMaterial({color: 0x8888ff, wireframe: true});
// Create box that detects collision
var dcube = new THREE.Mesh(boxGeometry, boxMaterial);
// Create box to check collision with
var ocube = new THREE.Mesh(boxGeometry, boxMaterial);
// Create ray caster
var rcaster = new THREE.Raycaster(new THREE.Vector3(0, 0, 0), new THREE.Vector3(0, 1, 0));
// Cast a ray through every vertex or extremity
for (var vi = 0, l = dcube.geometry.vertices.length; vi < l; vi++) {
    // Vertex position in world space
    var glovert = dcube.geometry.vertices[vi].clone().applyMatrix4(dcube.matrix);
    // Direction from the cube's position out to that vertex
    var dirv = glovert.sub(dcube.position);
    // Set up the ray caster: origin at the cube's position, pointing towards the vertex
    rcaster.set(dcube.position, dirv.clone().normalize());
    // Get collision result
    var hitResult = rcaster.intersectObject(ocube);
    // Check if the hit is within range of the other cube
    if (hitResult.length && hitResult[0].distance < dirv.length()) {
        // There was a hit detected between dcube and ocube
    }
}
Check out these links for more information (and maybe their source code):
Three.js-Collision-Detection
Basic Collision Detection, Raycasting with Three.js
THREE's ray caster docs

Gradually Change Web Audio API Panner

I'm trying to use a simple HTML range input to control the panning of my Web Audio API audio, but I can only get 3 "positions" for my audio output:
- Center
- 100% to the left
- 100% to the right
I would like to have something in between those positions, like 20% left and 80% right, and so on...
The code that I'm using is:
//Creating the node
var pannerNode = context.createPanner();
//Getting the value from the HTML input and using it on the position X value
document.getElementById('panInput').addEventListener('change', function () {
    pannerNode.setPosition(this.value, 0, 0);
});
And it refers to this input on my HTML file:
<input id="panInput" type="range" min="-1" max="1" step="0.001" value="0"/>
Does anyone know what I am doing wrong?
You shouldn't need to use two panners - the panner is already stereo. This older answer covers the question well:
How to create very basic left/right equal power panning with createPanner();
I've actually found simple left/right panning to be kind of difficult with the Web Audio API. It's really set up for surround / spatial stuff, and I honestly don't understand it very well.
The way that I usually do panning is like this:
var panLeft = context.createGain();
var panRight = context.createGain();
var merger = context.createChannelMerger(2);
source.connect(panLeft);
source.connect(panRight);
panLeft.connect(merger, 0, 0);
panRight.connect(merger, 0, 1);
merger.connect(context.destination);
document.getElementById('panInput').addEventListener('change', function () {
    var val = this.value;
    panLeft.gain.value = ( val * -0.5 ) + 0.5;
    panRight.gain.value = ( val * 0.5 ) + 0.5;
});
Basically, you send the signal to two gain nodes that you're going to use as your left and right channel. Then you take the value from your range element and use it to set the gain on each of the nodes.
This is sort of the lazy version though. In serious audio apps, there's usually a bit more math involved with the panning to make sure there aren't changes in overall level -- but hopefully this is enough to get you started.
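In case it helps, an equal-power version of the same listener would look something like this (a sketch of that "bit more math": map the slider value onto a quarter circle and use cosine/sine so the combined power stays roughly constant across the sweep):
document.getElementById('panInput').addEventListener('change', function () {
    // map val in [-1, 1] to an angle in [0, PI/2]
    var angle = (parseFloat(this.value) + 1) * 0.5 * (Math.PI / 2);
    panLeft.gain.value = Math.cos(angle);   // 1 at full left, 0 at full right
    panRight.gain.value = Math.sin(angle);  // 0 at full left, 1 at full right
});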
I'm quite sure there is a better and easier way to do this but, for now, it definitely works for me.
If anyone else has a better/cleaner way of doing it, please share it here!
Thanks to Kevin Ennis for giving me this hint!
JavaScript File
//Create a splitter to "separate" the stereo audio data into two channels.
var splitter = context.createChannelSplitter(2);
//Connect your source to the splitter (usually you will do this with the last audio node before the context destination)
audioSource.connect(splitter);
//Create two gain nodes (one for each side of the stereo image)
var panLeft = context.createGain();
var panRight = context.createGain();
//Connect the splitter outputs to the gain nodes we've just created (output 0 is the left channel, output 1 the right)
splitter.connect(panLeft, 0);
splitter.connect(panRight, 1);
//Read the value from a "range" input in the HTML (the markup for this input is shown at the end of this code)
var panPosition = document.getElementById("dispPanPositionLiveInput");
document.getElementById('panControl').addEventListener('change', function () {
    var val = this.value;
    panPosition.value = val;
    panLeft.gain.value = ( val * -0.5 ) + 0.5;
    panRight.gain.value = ( val * 0.5 ) + 0.5;
});
//Create a merger node, to get both signals back together
var merger = context.createChannelMerger(2);
//Connect both channels to the Merger
panLeft.connect(merger, 0, 0);
panRight.connect(merger, 0, 1);
//Connect the Merger Node to the final audio destination (your speakers)
merger.connect(context.destination);
HTML File
<input id="panControl" type="range" min="-1" max="1" step="0.001" value="0"/>

How to modulate an AudioParam with an LFO in the Web Audio API

How can I modulate any of the AudioParams in the Web Audio API, for example the gain value of a GainNode, using a low frequency oscillator?
https://coderwall.com/p/h1jnmg
var saw = context.createOscillator(),
    sine = context.createOscillator(),
    sineGain = context.createGain(); // was createGainNode() in the older Web Audio API
//set up our oscillator types
saw.type = 'sawtooth';
sine.type = 'sine';
//set the amplitude of the modulation
sineGain.gain.value = 10;
//connect the dots: the sine LFO, scaled by sineGain, drives the saw's frequency
sine.connect(sineGain);
sineGain.connect(saw.frequency);
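To actually hear the modulation, the carrier still needs to be routed to the output and both oscillators started (not shown in the snippet above):
saw.connect(context.destination); // route the carrier to the speakers
saw.start();  // both oscillators have to be started,
sine.start(); // including the one acting as the LFO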
You're not saving your actual nodes, just the value - so when you try to connect to oscillator.frequency, you're passing an integer value (400, the frequency you saved instead of the node). Try http://jsfiddle.net/GCSEq/6/ - this stores the nodes, and properly routes to the AudioParam:
this.oscillator = context.createOscillator();
this.gain = context.createGain();
and
osctest2.play(osctest.oscillator.frequency, 1000);
(You were getting an error in the console.)

Shadow map appearing in the wrong place

I'm trying to make use of the built-in shadow map plugin in three.js. After some initial difficulties I have a more or less acceptable image, with one last glitch: the shadow appears on top of some (all?) surfaces with normal (0, 0, 1). Below are pictures of the same model.
Three.js
Preview.app (Mac)
And the code used to setup shadows:
var shadowLight = new THREE.DirectionalLight(0xFFFFFF);
shadowLight.position.x = cx + dmax/2;
shadowLight.position.y = cy - dmax/2;
shadowLight.position.z = dmax*1.5;
shadowLight.lookAt(new THREE.Vector3(cx, cy, 0));
shadowLight.target.position.set(cx, cy, 0);
shadowLight.castShadow = true;
shadowLight.onlyShadow = true;
shadowLight.shadowCameraNear = dmax;
shadowLight.shadowCameraFar = dmax*2;
shadowLight.shadowCameraLeft = -dmax/2;
shadowLight.shadowCameraRight = dmax/2;
shadowLight.shadowCameraBottom = -dmax/2;
shadowLight.shadowCameraTop = dmax/2;
shadowLight.shadowBias = 0.005;
shadowLight.shadowDarkness = 0.3;
shadowLight.shadowMapWidth = 2048;
shadowLight.shadowMapHeight = 2048;
// shadowLight.shadowCameraVisible = true;
scene.add(shadowLight);
UPDATE: And a live example over here: http://jsbin.com/okobum/1/edit
Your code looks fine. You just need to play with the shadowLight.shadowBias parameter. This is always a bit tricky. (Note that the bias parameter can be negative.)
EDIT: Tighten up your shadow-camera near and far planes. This will help reduce both shadow acne and peter-panning. For example, in your live link, setting shadowLight.shadowCameraNear = 3*dmax worked for me.
You can also try adding depth to your table tops, if it's not already there.
You can try setting renderer.shadowMapCullFrontFaces = false. This will cull back faces instead of front ones.
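Putting those suggestions together, a rough sketch of the settings to experiment with (illustrative values reusing dmax from the question, not taken from the live example):
shadowLight.shadowBias = -0.005;          // the bias can be negative; tune it in small steps
shadowLight.shadowCameraNear = dmax;      // keep the near/far range as tight around the scene as possible
shadowLight.shadowCameraFar = dmax * 2;   // so the shadow map's depth precision is not wasted
renderer.shadowMapCullFrontFaces = false; // cull back faces instead of front ones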
