Can I use a (soundfont) sample, e.g. "C5.mp3", and extend or shorten its duration for a given time (without distorting the pitch)?
(It would be great if this were as easy as using an oscillator and changing the timing of NoteOn and NoteOff, but with a more natural sound than sine waves.) Can that be done easily without having to resort to MIDI.js or similar?
You would either need to loop the mp3 or record a longer sample. Looping can be tricky to make seamless without clicks or pops, though, and depending on the sample you would hear the attack of the note each time it looped.
.mp3s and other formats of recorded audio are all finite, predetermined sets of binary data. The reason it's so easy to manipulate sine waves with an oscillator is that the Web Audio API is dynamically generating the wave based on the input you're giving it.
Good soundfonts have loop points for endless sounds. See the example for WebAudioFont:
https://surikov.github.io/webaudiofont/examples/flute.html
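As a rough illustration of how loop points let you hold a note for an arbitrary time, here is a minimal Web Audio sketch (the loopStart/loopEnd values are placeholders you would take from the soundfont's loop metadata):

// Decode a sample once, then play it with looping over the sustain region
// so a NoteOn/NoteOff pair of any length keeps the original pitch.
const ctx = new AudioContext();

async function loadSample(url) {
  const data = await (await fetch(url)).arrayBuffer();
  return ctx.decodeAudioData(data);
}

function playNote(buffer, durationSec) {
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.loop = true;
  src.loopStart = 0.25;   // start of the sustain region (placeholder value)
  src.loopEnd = 0.75;     // end of the sustain region (placeholder value)
  src.connect(ctx.destination);
  src.start();                              // "NoteOn"
  src.stop(ctx.currentTime + durationSec);  // "NoteOff" after the requested time
}

loadSample('C5.mp3').then(buffer => playNote(buffer, 2.5));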
On this page I enter the data for the "home loan simulation", and a "video" is generated in response to the data I entered; it is something like a dynamic "video".
This "video" shows the data I entered.
I inspect the code and I don't see anything that looks like a video tag. It looks like a video, from the control bar to the full-screen option and the CC. It also has audio, although the voiceover of the "video" does not mention any dynamic data.
Does anyone know how this was done, or has an example of how to do it? Is there any way to do the same using JavaScript, CSS, and HTML?
Thank you.
This is the link:
https://www.grupobancolombia.com/personas/creditos/vivienda/simulador-credito-vivienda##sim-results
The video from your question is actually a very big SVG, as #Kaiido mentioned.
The animation script is very hard to understand. Here is just a part of it:
You can see it has more than 320,000 lines of code, and we have no clue what all these numbers mean. Of course, some of them are time codes and some are coordinates, but we would need reverse engineering to understand them.
Your original question, "is there any way to do the same using javascript, css, and html?", of course has the answer: yes. Almost any animation is possible.
But we need examples. OK, there are two possible ways to approach the animation: use an existing library or create your own. If you are interested in writing your own, just ask in the comments.
Use a library
Google suggests the animate.js library.
Here is an example of using controls (play/pause/resume/reset/set time) as in a real video player: click.
Here are 3 examples of using SVG: move along a path, morph into another shape, change line properties.
More examples using animate.js are here.
Write your own library
I use a self-written library of this kind in one of my projects. The idea is:
have an array of keyframes - this is where the animation changes. Each keyframe has: a start time, a duration (equivalent to having an "end time"), and a list of changes (objects or their properties).
I update the animation in a requestAnimationFrame() loop (because my animation only moves forward in time, I do not have controls)
when the current time becomes greater than a new keyframe's start time, I drop (remove) the previous keyframe from the array and apply the new objects/values
if the current time is greater than the keyframe start but less than the keyframe end, I use lerp (linear interpolation) to calculate the in-between values of the objects
But this description is just the idea, so that you can create something that suits your needs; a minimal sketch is shown below.
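A minimal sketch of that keyframe idea (the keyframe format and the animated property are assumptions for illustration, not the original project's code):

// Each keyframe: start time (ms), duration (ms), and the property values to reach.
const keyframes = [
  { start: 0,    duration: 1000, to: { x: 100 } },
  { start: 1000, duration: 2000, to: { x: 400 } }
];
const state = { x: 0 };                // the animated object
let from = { ...state };               // values at the start of the current keyframe
const startedAt = performance.now();
const lerp = (a, b, t) => a + (b - a) * t;

function tick(now) {
  const elapsed = now - startedAt;
  // drop keyframes that have finished and snap to their final values
  while (keyframes.length > 1 && elapsed >= keyframes[1].start) {
    Object.assign(state, keyframes.shift().to);
    from = { ...state };
  }
  const kf = keyframes[0];
  if (elapsed >= kf.start) {
    const t = Math.min((elapsed - kf.start) / kf.duration, 1);
    for (const key in kf.to) {
      state[key] = lerp(from[key], kf.to[key], t);   // in-between value
    }
  }
  // ...draw `state` here, e.g. position an SVG element at state.x...
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);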
Audio
I think the audio is just a normal audio tag in HTML:
<audio id="a">
<source src="horse.ogg" type="audio/ogg">
<source src="horse.mp3" type="audio/mpeg">
Your browser does not support the audio element.
</audio>
It can be controlled with methods and properties: look here. Example:
const a = document.getElementById('a');
a.currentTime = 0.8; // seek to 0.8 seconds from the start
a.play();
From what I can understand, they are using an API provided by a company. What they are doing is basically taking your inputs, processing them, and sending a bunch of info to the API as a POST request; the API then responds with a custom URL that they show to you in an iframe.
If you want to learn more you should check the company that provides the API: IndiVideo.
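A hypothetical sketch of that flow (the endpoint, field names, and iframe id are made up for illustration; they are not IndiVideo's actual API):

// Send the simulator inputs to the rendering API, then show the returned
// personalized video URL inside an iframe.
async function requestPersonalizedVideo(formData) {
  const response = await fetch('https://api.example.com/render-video', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(formData)                 // e.g. loan amount, term, rate
  });
  const { videoUrl } = await response.json();      // API answers with a custom URL
  document.getElementById('video-frame').src = videoUrl;
}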
Also, I don't think that this is the right place to ask for something like this.
It appears to be an SVG video that takes in your data and renders it out in real time. It's definitely something anyone can build, but it would take a little less effort if you used an API.
I would like to use a custom waveform with a WebAudio OscillatorNode. I'm new to audio synthesis and still struggle quite a lot with the mathematics (I can, at least, program).
The waveforms are defined as functions, so I have the function itself, and can sample the wave. However, the OscillatorNode.createPeriodicWave method requires two arrays (real and imag) that define the waveform in the frequency domain.
The AnalyserNode has FFT methods for computing an array (of bytes or floats) in the frequency domain, but it works with a signal from another node.
I cannot think of a way to feed a wavetable into the AnalyserNode correctly, but if I could, it only returns a single array, while OscillatorNode.createPeriodicWave requires two.
TL;DR: Starting with a periodic function, how do you compute the corresponding arguments for OscillatorNode.createPeriodicWave?
Since you have a periodic waveform defined by a function, you can compute the Fourier Series for this function. If the series has an infinite number of terms, you'll need to truncate it.
This is a bit of work, but this is exactly how the pre-defined Oscillator types are computed. For example, see the definition of the square wave for the OscillatorNode. The PeriodicWave coefficients for the square wave were computed in exactly this way.
If you know the bandwidth of your waveform, you can simplify the work a lot by not having to do the messy integrals. Just sample the waveform uniformly and fast enough, then use an FFT to get the coefficients you need for the PeriodicWave. Additional details are in the sampling theorem.
Or you can just assume that the sample rate of the AudioContext (typically 44.1 kHz or 48 kHz) is high enough, sample your waveform every 1/44100 or 1/48000 s, and compute the FFT of the resulting samples.
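As a sketch of that approach (a naive DFT over one period for clarity rather than an FFT; the harmonic count, sample count, and example function are assumptions):

// Sample one period of the function and compute the cosine/sine coefficients
// that createPeriodicWave expects (real[k] for cos terms, imag[k] for sin terms).
function periodicWaveFromFunction(ctx, fn, numHarmonics = 64, numSamples = 2048) {
  const samples = new Float32Array(numSamples);
  for (let n = 0; n < numSamples; n++) {
    samples[n] = fn(n / numSamples);        // fn takes a phase in [0, 1)
  }
  const real = new Float32Array(numHarmonics + 1);
  const imag = new Float32Array(numHarmonics + 1);
  for (let k = 1; k <= numHarmonics; k++) {
    let re = 0, im = 0;
    for (let n = 0; n < numSamples; n++) {
      const angle = 2 * Math.PI * k * n / numSamples;
      re += samples[n] * Math.cos(angle);
      im += samples[n] * Math.sin(angle);
    }
    real[k] = (2 / numSamples) * re;        // amplitude of cos(2πk·f·t)
    imag[k] = (2 / numSamples) * im;        // amplitude of sin(2πk·f·t)
  }
  return ctx.createPeriodicWave(real, imag);
}

// Usage: a triangle-like wave defined as a function of phase.
const ctx = new AudioContext();
const wave = periodicWaveFromFunction(ctx, p => 1 - 4 * Math.abs(p - 0.5));
const osc = ctx.createOscillator();
osc.setPeriodicWave(wave);
osc.frequency.value = 220;
osc.connect(ctx.destination);
osc.start();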
I just wrote an implementation of this. To use it, drag and drop the squares to form a waveform and then play the piano that appears afterwards. Watch the video in this tweet to see a usage example. The live demo is in alpha, so the code and UI are a little rough. You can check out the source here.
I didn't write any documentation, but I recorded some videos (Video 1) (Video 2) (Video 3) of me coding the project live. They should be pretty self-explanatory. There are a couple of bugs in there that I fixed later; for the working version, please refer to the GitHub link.
I found an app called https://skinmotion.com/ and, for learning purposes, I would like to create my own web version of it.
The web application works as follows. It asks the user for permission to access the camera. After that, video is captured. Once every second, an image is taken from the stream and processed. During this process, I look for a soundwave pattern in the image.
If the pattern is found, video recording stops and some action is executed.
Example of pattern - https://www.shutterstock.com/cs/image-vector/panorama-mini-earthquake-wave-on-white-788490724.
Ideally, it should work like QR codes - even a small QR code is detected, and detection should not depend on rotation or scaling.
I am no computer vision expert; this field is fairly new to me. I need some help. What is the best way to do this?
Should I train my own TensorFlow dataset and use tensorflow.js? Or is there an easier and more lightweight option?
My problem is that I could not find or come up with an algorithm for processing the captured image to make it as "comparable" as possible - scaling it up, rotating it, thresholding to black and white, etc.
I hope that after this transformation, resemble.js could be used to compare "original" and "captured" image.
Thank you in advance.
With Deep Learning
If there are certain wave patterns to be recognized, a classification model can be written using tensorflow.js.
However, if the model is to identify wave patterns in general, it is more complex: an object detection model would be needed. A rough sketch of the classification route is shown below.
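A very rough sketch of that classification route with tensorflow.js (the input size, the two-class labels, and the layer choices are placeholder assumptions, and the training data still has to be collected and labelled):

// Tiny CNN that classifies a 64x64 grayscale crop as "waveform" vs "not waveform".
const model = tf.sequential();
model.add(tf.layers.conv2d({ inputShape: [64, 64, 1], filters: 8, kernelSize: 3, activation: 'relu' }));
model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
model.add(tf.layers.conv2d({ filters: 16, kernelSize: 3, activation: 'relu' }));
model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
model.add(tf.layers.flatten());
model.add(tf.layers.dense({ units: 2, activation: 'softmax' }));
model.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy', metrics: ['accuracy'] });

// xs: [numExamples, 64, 64, 1] image tensor, ys: [numExamples, 2] one-hot labels
// await model.fit(xs, ys, { epochs: 10 });
// const prediction = model.predict(cameraCrop).argMax(-1);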
Without deep learning
Adding to the complexity would be detecting the waveform and playing audio from it. In this latter case, the image can be read byte by byte. The wave graph is drawn in a color that is different from the background of the image, so the area of interest can be identified and an array representing the waveform can be generated; a sketch of this follows below.
Then to play the audio from the array, it can be done as shown here
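A rough sketch of that pixel-scanning idea (the brightness threshold and the canvas setup are assumptions; a real photo would also need rotation and perspective correction first):

// Draw the captured frame onto a canvas, then, column by column, find the
// vertical extent of the dark trace and turn it into a normalized array.
function extractWaveform(video) {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx2d = canvas.getContext('2d');
  ctx2d.drawImage(video, 0, 0);
  const { data, width, height } = ctx2d.getImageData(0, 0, canvas.width, canvas.height);

  const samples = [];
  for (let x = 0; x < width; x++) {
    let top = -1, bottom = -1;
    for (let y = 0; y < height; y++) {
      const i = (y * width + x) * 4;               // RGBA, 4 bytes per pixel
      const brightness = (data[i] + data[i + 1] + data[i + 2]) / 3;
      if (brightness < 80) {                       // "dark enough" threshold (assumed)
        if (top === -1) top = y;
        bottom = y;
      }
    }
    // amplitude of the trace in this column, normalized to 0..1
    samples.push(top === -1 ? 0 : (bottom - top) / height);
  }
  return samples;
}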
I am trying to create a water ripple effect in a video embedded in the default HTML5 web player.
I am doing fine using images and an overlay canvas on top of them, but what I am trying to do now is to grab single frames from a video and output them to a canvas every 1-5 ms, following this tutorial.
And I am stuck at this point; I can output a frame into another canvas using the canvas.toDataURL() function.
I have seen advanced web-based video players that allow for applying Processing.js sketches on top of videos, would that be a good solution?
My question is: what would be the best and most reliable solution for applying visual effects (water ripples in this case) using JavaScript to a video playing in HTML5 media player.
In my opinion the best approach would be to use WebGL to create the effects, using the video as an input texture and a simple flat geometry that is manipulated using an animated bump-map - or - by directly manipulating the vertices - or - perhaps a shader program, and then output the result to a canvas.
I would not place the video in the DOM at all, but create control buttons and display those together with the output [webgl-]canvas in the DOM.
The obvious drawback, besides slightly more complex code, is computers that don't have a GPU (but that would be a drawback in any case, and even more so if you used a regular 2D canvas and pixel manipulation).
This is of course very broad in terms of code examples, but I assume you get the general idea; a rough sketch follows.
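A stripped-down sketch of that idea (the ripple math in the fragment shader is a made-up example, not a physically correct simulation, and the element lookups are placeholders):

// Draw each video frame as a WebGL texture and displace the sampling
// coordinates in the fragment shader to fake a ripple.
const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const gl = canvas.getContext('webgl');

const vsSource = `
  attribute vec2 aPos;
  varying vec2 vUV;
  void main() {
    vUV = aPos * 0.5 + 0.5;            // map clip space to texture space
    gl_Position = vec4(aPos, 0.0, 1.0);
  }`;
const fsSource = `
  precision mediump float;
  varying vec2 vUV;
  uniform sampler2D uVideo;
  uniform float uTime;
  void main() {
    // radial ripple: offset the UVs with a sine wave travelling from the centre
    vec2 toCentre = vUV - 0.5;
    float d = length(toCentre);
    vec2 offset = normalize(toCentre) * sin(d * 40.0 - uTime * 4.0) * 0.005;
    gl_FragColor = texture2D(uVideo, vUV + offset);
  }`;

function compile(type, src) {
  const s = gl.createShader(type);
  gl.shaderSource(s, src);
  gl.compileShader(s);
  return s;
}
const prog = gl.createProgram();
gl.attachShader(prog, compile(gl.VERTEX_SHADER, vsSource));
gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fsSource));
gl.linkProgram(prog);
gl.useProgram(prog);

// full-screen quad
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([-1, -1, 1, -1, -1, 1, 1, 1]), gl.STATIC_DRAW);
const loc = gl.getAttribLocation(prog, 'aPos');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
const uTime = gl.getUniformLocation(prog, 'uTime');

function render(t) {
  // upload the current video frame as the texture, then draw the quad
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  gl.uniform1f(uTime, t / 1000);
  gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
  requestAnimationFrame(render);
}
video.addEventListener('playing', () => requestAnimationFrame(render));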
A couple of notes:
[...] what I am trying to do now is to get single frames from a video and output it to a canvas every 1-5ms [...]
This is rather pointless with regard to the time budget, as a typical monitor can only refresh the image every ~16.7 ms, which means you're wasting at least 3-4 frames that are never displayed.
Also, a typical video source never runs faster than 30 FPS (in the US; 25 FPS in Europe), which means there is only a new frame every ~33.3 ms (there are of course special-case videos such as VR/AR, stereoscopic, games and "Mac"-recorded video etc. that may use higher frame rates, but for the most part anything above 30 FPS is usually wasted cycles). This allows a higher time budget for processing each frame.
I can output frame into another canvas using canvas.toDataURL() function [...]
Ouch! :) This comes with a huge overhead. The browser would have to use line filtering (in the case of PNG - well, some browsers skip this and use filter 0), compression and base-64 encoding, then apply the source to an image element, decode the base-64, decompress, defilter...
You can instead simply use the source canvas for drawImage() directly and only have to deal with an in-memory bitmap (super fast!), as in the sketch below.
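A small sketch of the drawImage() route (the element ids are placeholders); the frame is pulled straight from the video element on each display refresh, with no encoding round-trip:

// Copy the current video frame straight into a 2D canvas on every
// display refresh - no toDataURL(), no base-64, no decode step.
const video = document.getElementById('video');      // placeholder id
const canvas = document.getElementById('output');    // placeholder id
const ctx = canvas.getContext('2d');

function drawFrame() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  // ...apply the displacement/ripple pass on the canvas pixels here...
  requestAnimationFrame(drawFrame);
}
video.addEventListener('playing', () => requestAnimationFrame(drawFrame));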
All that being said: if simplicity is important code-wise, you can of course do all this using a 2D canvas. You can use drawImage() to displace pixels/blocks and therefore work on the GPU (though not necessarily faster than working on the bitmap directly, depending on how you apply the actual displacement).
But there are still many caveats, such as the video source and destination resolution, which have an exponential impact on performance, and the limited use of the GPU, since you would still have to do multiple serial operations versus parallel operations with WebGL/GPU, and so forth. In essence, performance will suffer compared to a WebGL solution.
If you want high performance you can use WebGL. The following is a GitHub repo for a water ripple project:
https://github.com/sirxemic/jquery.ripples/
The following is a running example of jQuery WebGL Ripples:
http://sirxemic.github.io/jquery.ripples/
I think this might help.
You can do it in several ways. You can write plain JavaScript code with canvas.
But I think it is best to use jQuery plugins. There are several plugins available for water ripples.
You may check the following links:
https://github.com/virtyaluk/paper-ripple
https://github.com/sirxemic/jquery.ripples
https://github.com/andyvr/water-ripple
A few days ago I started working on a small JS-based browser racing game.
I don't have much experience with JavaScript, but I learn fast. The game is based on PHP and a little JS for the animation. I am also trying to implement Node.js to make it realtime.
The game works, but I want to add sound so it is more interesting... however, I came across a little problem: gapless sound looping. I tried several methods and frameworks with no results, only partial success in Chrome, and that's not enough.
Please share some ideas/solutions/examples of how you would do it. Thanks in advance.
I managed to make a reasonable engine sound by recording a real engine in 250 RPM increments, then creating the additional very high RPM wav files by pitch-shifting and increasing the volume. I then mix between these continually playing wav files using multiple HTML5 Web Audio gainNodes.
The pitch shift is still noticeable, but not too bad. If you want perfect sound you will need to add additional wav files that start at one pitch and shift gradually to another, plus logic to fade these in/out appropriately.
Looping the audio is done by having two wav files that overlap slightly and fading between them through the overlap, so you don't hear a click. Alternatively, create one long wav file beforehand using this technique so the clicks are infrequent.
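A rough Web Audio sketch of that mixing idea (the file names, RPM steps, and crossfade weighting are assumptions for illustration):

// Keep several engine loops playing at once and crossfade their gains
// according to the current RPM, so transitions between samples stay smooth.
const ctx = new AudioContext();
const rpmSteps = [1000, 1250, 1500, 1750];              // assumed sample RPMs
const gains = [];

async function startEngineLoops() {
  for (let i = 0; i < rpmSteps.length; i++) {
    const data = await (await fetch(`engine_${rpmSteps[i]}.wav`)).arrayBuffer(); // placeholder files
    const buffer = await ctx.decodeAudioData(data);
    const src = ctx.createBufferSource();
    src.buffer = buffer;
    src.loop = true;                                    // each sample loops continuously
    const gain = ctx.createGain();
    gain.gain.value = 0;                                // silent until the RPM selects it
    src.connect(gain).connect(ctx.destination);
    src.start();
    gains.push(gain);
  }
}

function setRpm(rpm) {
  // give each loop a weight based on how close its RPM step is to the target
  for (let i = 0; i < rpmSteps.length; i++) {
    const distance = Math.abs(rpm - rpmSteps[i]);
    const weight = Math.max(0, 1 - distance / 250);     // full at the step, 0 one step away
    gains[i].gain.setTargetAtTime(weight, ctx.currentTime, 0.05);
  }
}

startEngineLoops().then(() => setRpm(1100));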