JavaScript: Detecting shape in canvas

I found an app called https://skinmotion.com/ and, for learning purposes, I would like to create my own web version of it.
The web application works as follows. It asks the user for permission to access the camera. After that, video is captured. Once every second, an image is taken from the stream and processed. During this processing, I look for a soundwave pattern in the image.
If the pattern is found, video recording stops and some action is executed.
Example of pattern - https://www.shutterstock.com/cs/image-vector/panorama-mini-earthquake-wave-on-white-788490724.
Ideally, it should work like QR codes: even a small QR code is detected, and detection should not depend on rotation or scaling.
I am no computer vision expert; this field is fairly new to me. I need some help. What is the best way to do this?
Should I train my own model with TensorFlow and use tensorflow.js? Or is there an easier and more "lightweight" option?
My problem is that I could not find or come up with an algorithm for processing the captured image to make it as "comparable" as possible: scaling it up, rotating it, thresholding to black and white, etc.
I hope that after this transformation, resemble.js could be used to compare the "original" and "captured" images.
Thank you in advance.

With Deep Learning
If there are certain wave patterns to be recognized, a classification model can be written using tensorflow.js.
However, if the model is to identify wave patterns in general, it is more complex: an object detection model would be needed.
Without deep learning
Adding to the complexity would be detecting the waveform and playing audio from it. In this latter case, the image can be read byte by byte. The wave graph is drawn in a certain color that is different from the background of the image, so the area of interest can be identified and an array representing the waveform can be generated.
Then, to play audio from the array, it can be done as shown here.
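For the non-deep-learning route, a rough sketch of that extraction step could look like the following. It assumes the captured frame has already been drawn onto a canvas and that the wave is darker than its background; the threshold value and function name are illustrative, not taken from the answer.

```javascript
// Hedged sketch: scan each column of the captured frame for the darkest pixel
// and treat its vertical position as the sample value for that column.
function extractWaveform(canvas, threshold = 100) {
  const ctx = canvas.getContext('2d');
  const { data, width, height } = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const samples = [];

  for (let x = 0; x < width; x++) {
    let darkestY = -1;
    let darkestValue = 255;
    for (let y = 0; y < height; y++) {
      const i = (y * width + x) * 4;
      const brightness = (data[i] + data[i + 1] + data[i + 2]) / 3;
      if (brightness < threshold && brightness < darkestValue) {
        darkestValue = brightness;
        darkestY = y;
      }
    }
    // Map the row to a value in [-1, 1]; columns with no dark pixel become 0.
    samples.push(darkestY === -1 ? 0 : 1 - (2 * darkestY) / height);
  }
  return samples; // array approximating the drawn waveform
}
```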

Related

How to convert a wavetable for use with `OscillatorNode.setPeriodicWave`?

I would like to use a custom waveform with a WebAudio OscillatorNode. I'm new to audio synthesis and still struggle quite a lot with the mathematics (I can, at least, program).
The waveforms are defined as functions, so I have the function itself, and can sample the wave. However, the OscillatorNode.createPeriodicWave method requires two arrays (real and imag) that define the waveform in the frequency domain.
The AnalyserNode has FFT methods for computing an array (of bytes or floats) in the frequency domain, but it works with a signal from another node.
I cannot think of a way to feed a wavetable into the AnalyserNode correctly, but if I could, it only returns a single array, while OscillatorNode.createPeriodicWave requires two.
TLDR Starting with a periodic function, how do you compute the corresponding arguments for OscillatorNode.createPeriodicWave?
Since you have a periodic waveform defined by a function, you can compute the Fourier Series for this function. If the series has an infinite number of terms, you'll need to truncate it.
This is a bit of work, but this is exactly how the pre-defined Oscillator types are computed. For example, see the definition of the square wave for the OscillatorNode. The PeriodicWave coefficients for the square wave were computed in exactly this way.
If you know the bandwidth of your waveform, you can simplify the work a lot by not having to do the messy integrals. Just uniformly sample the waveform fast enough, and then use an FFT to get the coefficients you need for the PeriodicWave. Additional details are in the sampling theorem.
Or you can just assume that sample rate of the AudioContext (typically 44.1 kHz or 48 kHz) is high enough and just sample your waveform every 1/44100 or 1/48000 sec and compute the FFT of the resulting samples.
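To make that concrete, here is a small sketch of the brute-force approach: sample one period of the function and compute the cosine/sine coefficients directly (a naive DFT rather than a fast FFT, which is fine for a handful of harmonics). The sample count and harmonic count are arbitrary choices, not values from the answer.

```javascript
// Naive Fourier analysis of one period of `fn`, where fn(t) is defined on [0, 1).
// Returns the `real` (cosine) and `imag` (sine) arrays createPeriodicWave expects.
function periodicWaveCoefficients(fn, numHarmonics = 32, numSamples = 2048) {
  const real = new Float32Array(numHarmonics + 1);
  const imag = new Float32Array(numHarmonics + 1);
  for (let k = 1; k <= numHarmonics; k++) {
    for (let n = 0; n < numSamples; n++) {
      const t = n / numSamples;
      real[k] += (2 / numSamples) * fn(t) * Math.cos(2 * Math.PI * k * t);
      imag[k] += (2 / numSamples) * fn(t) * Math.sin(2 * Math.PI * k * t);
    }
  }
  // real[0]/imag[0] stay 0: the DC terms are ignored by the Web Audio API.
  return { real, imag };
}

// Usage: a band-limited sawtooth-like wave from an arbitrary periodic function.
const audioCtx = new AudioContext();
const { real, imag } = periodicWaveCoefficients((t) => 2 * t - 1);
const osc = audioCtx.createOscillator();
osc.setPeriodicWave(audioCtx.createPeriodicWave(real, imag));
osc.connect(audioCtx.destination);
osc.start();
```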
I just wrote an implementation of this. To use it, drag and drop the squares to form a waveform and then play the piano that appears afterwards. Watch the video in this tweet to see a use example. The live demo is in alpha version, so the code and UI are a little rough. You can check out the source here.
I didn't write any documentation, but I recorded some videos (Video 1) (Video 2) (Video 3) of me coding the project live. They should be pretty self-explanatory. There are a couple of bugs in there that I fixed later. For the working version, please refer to the github link.

webaudio API: adjust play length of an audio sample (eg "C5.mp3")?

Can I use a (soundfont) sample e.g. "C5.mp3" and extend or shorten its duration for a given time (without distorting the pitch)?
(It would be great if this were as easy as using an oscillator and changing the timings of NoteOn and NoteOff, but with a more natural sound than sine waves.) Can that be done easily without having to resort to MIDI.js or similar?
You would either need to loop the mp3 or record a longer sample. Looping can be tricky to make seamless without clicks or pops though, and depending on the sample you would hear the attack of the note each time it looped.
.mp3s and other formats of recorded audio are all finite, predetermined sets of binary data. The reason it's so easy to manipulate sine waves with an oscillator is that the Web Audio API dynamically generates the wave based on the input you give it.
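If you do go the looping route, the Web Audio API lets you set loop points on an AudioBufferSourceNode. A minimal sketch; the file name and loop times are placeholders you would tune per sample:

```javascript
// Load a sample, loop a steady portion of it, and hold the note for an
// arbitrary duration. File name and loop points are illustrative only.
const audioCtx = new AudioContext();

async function playNote(url, durationSeconds) {
  const response = await fetch(url);
  const buffer = await audioCtx.decodeAudioData(await response.arrayBuffer());

  const source = audioCtx.createBufferSource();
  source.buffer = buffer;
  source.loop = true;
  source.loopStart = 0.25; // skip the attack (tune per sample)
  source.loopEnd = 0.75;   // loop a sustained part of the tone
  source.connect(audioCtx.destination);

  source.start();
  source.stop(audioCtx.currentTime + durationSeconds);
}

playNote('C5.mp3', 3); // hold "C5" for roughly three seconds
```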
Good soundfonts have loop points for endless sounds. See the example for WebAudioFont:
https://surikov.github.io/webaudiofont/examples/flute.html

Apply water ripple effect with JavaScript in HTML video player

I am trying to create a water ripple effect in a video embedded in HTML5 default web player.
I am doing it fine using images and an overlay canvas on top of them, but what I am trying to do now is to get single frames from a video and output them to a canvas every 1-5 ms, using this tutorial.
And I am stuck at this point: I can output a frame into another canvas using the canvas.toDataURL() function.
I have seen advanced web-based video players that allow applying Processing.js sketches on top of videos; would that be a good solution?
My question is: what would be the best and most reliable solution for applying visual effects (water ripples in this case) using JavaScript to a video playing in HTML5 media player.
In my opinion, the best approach would be to use WebGL to create the effects: use the video as an input texture on a simple flat geometry that is manipulated by an animated bump-map, or by directly manipulating the vertices, or perhaps by a shader program, and then output the result to a canvas.
I would not place the video in the DOM at all, but instead create control buttons and display those together with the output WebGL canvas in the DOM.
The obvious drawback, besides slightly more complex code, is performance on computers that don't have a GPU (but that would be a drawback in any case, and even more so if you used a regular 2D canvas and pixel manipulation).
This is of course very broad in terms of code example. But I assume you get the general idea.
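Still, for what it's worth, here is a bare-bones sketch of the idea: the video drives a texture, a full-screen quad is drawn, and a fragment shader displaces the texture lookup with a travelling sine wave. The element IDs and the ripple formula are made up for illustration; a production effect (like the jquery.ripples library linked in another answer below) is considerably more involved.

```javascript
// Hedged sketch: the current video frame becomes a WebGL texture and a
// fragment shader displaces the texture lookup with a moving sine "ripple".
const video = document.getElementById('vid');
const gl = document.getElementById('out').getContext('webgl');

const vsSource = `
  attribute vec2 pos;
  varying vec2 uv;
  void main() {
    uv = pos * 0.5 + 0.5;              // clip space -> texture coordinates
    gl_Position = vec4(pos, 0.0, 1.0);
  }`;

const fsSource = `
  precision mediump float;
  varying vec2 uv;
  uniform sampler2D frame;
  uniform float time;
  void main() {
    vec2 center = uv - 0.5;
    float d = length(center);
    // Radial ripple: push the sample position along a travelling sine wave.
    vec2 offset = (center / max(d, 0.001)) * sin(d * 40.0 - time * 4.0) * 0.01;
    gl_FragColor = texture2D(frame, uv + offset);
  }`;

function compile(type, src) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, src);
  gl.compileShader(shader);
  return shader;
}

const prog = gl.createProgram();
gl.attachShader(prog, compile(gl.VERTEX_SHADER, vsSource));
gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fsSource));
gl.linkProgram(prog);
gl.useProgram(prog);

// Two triangles covering the whole canvas.
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER,
  new Float32Array([-1, -1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1]), gl.STATIC_DRAW);
const pos = gl.getAttribLocation(prog, 'pos');
gl.enableVertexAttribArray(pos);
gl.vertexAttribPointer(pos, 2, gl.FLOAT, false, 0, 0);

// Texture that will hold the latest video frame (clamped, no mipmaps).
gl.bindTexture(gl.TEXTURE_2D, gl.createTexture());
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

const timeLoc = gl.getUniformLocation(prog, 'time');

function render(t) {
  if (video.readyState >= 2) {
    // Upload the current frame and draw the rippled full-screen quad.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
    gl.uniform1f(timeLoc, t / 1000);
    gl.drawArrays(gl.TRIANGLES, 0, 6);
  }
  requestAnimationFrame(render);
}
requestAnimationFrame(render);
```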
A couple of notes:
[...] what I am trying to do now is to get single frames from a video and output it to a canvas every 1-5ms [...]
This is sort of pointless in regard to the time budget, as a typical monitor can only refresh the image every ~16.7 ms, which means you're wasting at least 3-4 frames that are never displayed.
Also, a typical video source never runs faster than 30 FPS (in the US; 25 FPS in Europe), which means there is only a new frame every 33.3 ms (there are of course special-case videos such as VR/AR, stereoscopic, games and "Mac"-recorded video that may use a higher fps, but for the most part anything above 30 fps is usually wasted cycles). This allows a higher time budget for processing each frame.
I can output frame into another canvas using canvas.toDataURL() function [...]
Ouch! :) This comes with a huge overhead. The browser has to apply line filtering (in the case of PNG; well, some browsers skip this and use filter 0), compression and base-64 encoding, then you apply the source to an image element, which has to decode the base-64, decompress and de-filter...
You can instead simply use the source canvas with drawImage() directly and only have to deal with an in-memory bitmap (super fast!).
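In other words, something along these lines (the selectors are illustrative, and the ripple step itself is left as a comment):

```javascript
// Draw the current video frame straight onto the output canvas on each display
// frame; no encoding/decoding round-trip through a data URL.
const video = document.querySelector('video');
const out = document.querySelector('canvas');
const ctx = out.getContext('2d');

function draw() {
  ctx.drawImage(video, 0, 0, out.width, out.height);
  // ...apply the displacement/ripple effect here (e.g. block-wise drawImage()
  // calls or getImageData()/putImageData())...
  requestAnimationFrame(draw); // ~60 fps, in step with the display refresh
}
requestAnimationFrame(draw);
```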
All that being said: if simplicity is important code-wise, you can of course do all this using a 2D canvas. You can use drawImage() to displace pixels/blocks and therefore work on the GPU (though not necessarily faster than working on the bitmap directly, depending on how you apply the actual displacement).
But there are still many caveats, such as source and destination resolution, which has a huge impact on performance, and limited use of the GPU, as you would still have to do multiple serial operations versus parallel operations with WebGL, and so forth. In essence, performance will suffer compared to a WebGL solution.
If you want high performance, you can use WebGL. Following is a GitHub repo for a water ripple project:
https://github.com/sirxemic/jquery.ripples/
Following is a running example of jQuery WebGL Ripples:
http://sirxemic.github.io/jquery.ripples/
I think this might help.
You can do it in several ways. You can write plain JavaScript code with canvas.
But I think it is easiest to use jQuery plugins. There are several plugins available for water ripples.
You may check the following links:
https://github.com/virtyaluk/paper-ripple
https://github.com/sirxemic/jquery.ripples
https://github.com/andyvr/water-ripple

Generating events for gesture-controlled websites

I am very happy that I got the opportunity to work on a website that is gesture-based.
I have a few inspirations for this: link
I visited a lot of websites and googled it; Wikipedia and GitHub also didn't help much. There is not much information available, as these technologies are at a nascent stage.
I think I will have to use some JS for this project:
gesture.js (our custom javascript code)
reveal.js (framework for slideshows)
My questions are: how do gestures generate events, and how does my JavaScript interact with my webcam? Do I have to use some API or algorithms?
I am not asking for code; I am just asking about the mechanism, or some links providing vital info will do. I seriously believe that if the accuracy of this technology can be improved, it can do wonders in the near future.
To enable gestural interactions in a web app, you can use navigator.getUserMedia() to get video from your local webcam, periodically put video frame data into a canvas element and then analyse changes between frames.
There are several JavaScript gesture libraries and demos available (including a nice slide controller). For face/head tracking you can use libraries like headtrackr.js: example at simpl.info/headtrackr.
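A minimal sketch of that pipeline, using the current navigator.mediaDevices.getUserMedia API; the analyseFrame function and the 100 ms interval are placeholders:

```javascript
// Webcam -> <video> -> canvas -> raw pixel data, sampled a few times a second.
const video = document.createElement('video');
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');

navigator.mediaDevices.getUserMedia({ video: true })
  .then((stream) => {
    video.srcObject = stream;
    return video.play();
  })
  .then(() => {
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    setInterval(() => {
      ctx.drawImage(video, 0, 0);
      const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
      analyseFrame(frame); // placeholder: compare this frame with the previous one
    }, 100);
  });
```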
I'm playing a little bit with this at the moment, so from what I understand,
the most basic technique is:
you request access to the user's webcam to capture video.
when permission is given, you create a canvas in which to put the video.
you apply a filter (black and white) to the video.
you put some control points in the canvas frame (small areas in which all the pixel colors are registered)
you start attaching a function to each frame (for the purpose of this explanation, I'll only demonstrate left-right gestures)
At each frame:
If the frame is the first (F0), continue
If not, we subtract the previous frame's pixels (F(n-1)) from the current frame's (Fn)
if there was no movement between Fn and F(n-1), all the pixels will be black
if there was, you will see the difference Delta = Fn - F(n-1) as white pixels
Then you can test your control points to see which areas are lit up and store them
( ** )x = DeltaN
Repeat the same operations until you have two or more Delta variables; then subtract the control points of Delta(N-1) from the control points of DeltaN and you'll have a vector
( **)x = DeltaN
( ** )x = Delta(N-1)
( +2 )x = DeltaN - Delta(N-1)
You can now test whether the vector is positive or negative, or test whether its values exceed some value of your choosing, e.g.
if positive on x and value > 5
and trigger an event, then listen to it:
$(document).trigger('MyPlugin/MoveLeft', values)
$(document).on('MyPlugin/MoveLeft', doSomething)
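A hedged sketch of the subtraction and control-point steps described above (the ImageData frames would come from getImageData(); the threshold, region coordinates and event payload are illustrative):

```javascript
// Difference two frames: pixels that changed enough are marked as "lit".
function frameDelta(prev, curr, threshold = 40) {
  const mask = new Uint8Array(curr.width * curr.height);
  for (let i = 0, p = 0; i < curr.data.length; i += 4, p++) {
    const a = (curr.data[i] + curr.data[i + 1] + curr.data[i + 2]) / 3;
    const b = (prev.data[i] + prev.data[i + 1] + prev.data[i + 2]) / 3;
    mask[p] = Math.abs(a - b) > threshold ? 1 : 0;
  }
  return mask;
}

// Count how many changed pixels fall inside one control-point region.
function regionActivity(mask, width, x, y, w, h) {
  let sum = 0;
  for (let row = y; row < y + h; row++)
    for (let col = x; col < x + w; col++)
      sum += mask[row * width + col];
  return sum;
}

// Illustrative usage with two successive deltas (names are hypothetical):
// const before = regionActivity(delta1, width, 10, 60, 40, 40);
// const now    = regionActivity(delta2, width, 10, 60, 40, 40);
// if (now - before > 5) $(document).trigger('MyPlugin/MoveLeft', [now - before]);
```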
You can greatly improve the precision by caching the vectors, or by adding them up and only triggering an event when their values reach a sensible level.
You can also expect a shape in your first subtractions and try to map a "hand" or a "box",
and listen to changes in the shape's coordinates, but remember the gestures are in 3D and the analysis is 2D, so the same shape can change while moving.
Here's a more precise explanation. Hope my explanation helped.

Uses for canvas | Practical examples

I'm trying to understand what people use canvas for.
I see it in job postings and I read about it in Definitive JavaScript, but I don't quite get what it is used for.
I understand that you can draw 2D or 3D (usually 2D) objects, but why not just use GIMP or Photoshop and upload the image?
Is it so you can create dynamic images based on, say, user-specific data?
What is a practical example, or perhaps a link to a professional implementation of canvas (Definitive JavaScript shows basic stuff like drawing circles)?
MDN Tutorial
I have used a canvas to draw a graph, and it falls back to requesting a PHP-generated image if the browser doesn't support <canvas>. It's always a good idea to delegate processing from the server to the client, as this places less load on the server. In other words, instead of the server going "here's the stuff", it's more like "here's the data and the instructions to show it".
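A rough sketch of that fallback pattern (the element ID, the data array and the PHP endpoint are made-up placeholders):

```javascript
// Draw the graph client-side when <canvas> is supported; otherwise swap in a
// server-rendered image. `points` stands in for the data sent by the server.
const canvas = document.getElementById('graph');
const points = [12, 40, 25, 60, 33];

if (canvas.getContext) {
  const ctx = canvas.getContext('2d');
  ctx.beginPath();
  points.forEach((value, i) => {
    const x = (i / (points.length - 1)) * canvas.width;
    const y = canvas.height - value;
    i === 0 ? ctx.moveTo(x, y) : ctx.lineTo(x, y);
  });
  ctx.stroke();
} else {
  const img = new Image();
  img.src = '/graph.php?id=42'; // hypothetical PHP endpoint
  canvas.parentNode.replaceChild(img, canvas);
}
```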
Another use I've seen is to highlight areas of an imagemap when moused over.
<canvas> is central in HTML5 game development, since it is used to draw the entire game viewport. Without it, there is no game.
Is it so you can create dynamic images based on say...user-specific data?
Yes
We used <canvas> to build an interactive design editor for apparel in our e-commerce store — http://printio.ru/tees/new
The kind of interactivity we provide was only possible with Flash until recently.
Even on the back end, we use Node.js and <canvas>-based image processing and generation to take data from the online editor and create designs that are later used in the store. These canvas-generated designs are eventually printed on t-shirts, mugs, caps, bags, and so on.
I think that's a pretty practical example :)
This is all done via Fabric.js canvas library (developed by us as well).
<canvas> is also used for better performance. <canvas> is much faster and more dynamic than a JPEG or anything else. Example: http://www.profistart.com/internetseiten.html, where <canvas> was used for the background.
