How does Flash ActionScript read hand gestures using a webcam? - javascript

Sorry, I'm very much a newbie here. I've googled only to find the same results again and again, until out of desperation I decided to ask here...
I need to make a webpage that can detect/read hand gestures. At first I was completely lost about what to do, but eventually I found these questions:
Using webcam to track hand gestures
Generating events for gesture-controlled websites
How to detect hand gesture in live webcam using javascript?
hand gesture recognition with camera video in actionscript
After reading through those, I (thought I) understood that it could be done using either JavaScript or Flash ActionScript, and I've seen some examples of both. While I (thought I) understood that people do it in JavaScript by implementing image-processing algorithms and the like, I'm rather lost on how Flash does it. The most common use of gestures in Flash seems to be in 'webcam games', and then I found these tutorials:
Webcam Motion Detection: Using the BitmapData API in Flash 8
Np's Webcam Game Tutorial
While I (thought I) kind of get how the first one works, I'm lost on how the second one handles its collision detection... And I don't really understand why, while most of the JavaScript material I found has pretty good documentation, it's so hard to find anything comparable for Flash ActionScript; what I did find is rather old (the first tutorial says it was created in 2005, and the second was uploaded in 2007). I would just use JavaScript for those reasons, but the Incubator Quasimondo Minority Cube is really cool, and I haven't found a JavaScript implementation at that level...
So all of this leads me to a few questions, roughly:
How is hand gesture detection usually implemented in Flash, e.g. in webcam games?
Do those implementations really 'recognize' that it's a hand, or do they just react to anything that moves?
What support does Flash/ActionScript have for detecting gestures? Besides the Camera.activityLevel property, are there other properties or libraries?
I'm so sorry for my lack of knowledge...

All the examples I've seen simply use the 'image recognition method', so the results are not solid and sometimes there won't be anything to detect at all. That's why Eugene created an augmented reality framework, but you need a marker in order to use it. He did some experiments without markers, but I cannot find any of them now, sorry.
I've used the Kinect for gestures and it works pretty well, and it has an AIR SDK. Maybe you should give it a try :)
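To the second question above (does it really 'recognize' a hand?): the typical webcam-game approach is plain frame differencing, which reacts to anything that moves. As a rough illustration, here is the same trick those BitmapData tutorials use, sketched in JavaScript/canvas rather than ActionScript; every threshold and size here is arbitrary, and nothing in it knows what a hand is:
// Compare the current webcam frame to the previous one and count changed pixels.
navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
    var video = document.createElement('video');
    video.srcObject = stream;
    video.play();
    var canvas = document.createElement('canvas');
    canvas.width = 160;   // a small buffer is plenty for motion detection
    canvas.height = 120;
    var ctx = canvas.getContext('2d');
    var previous = null;
    setInterval(function () {
        ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
        var current = ctx.getImageData(0, 0, canvas.width, canvas.height);
        if (previous) {
            var changed = 0;
            for (var i = 0; i < current.data.length; i += 4) {
                // Compare the red channel of this pixel against the previous frame.
                if (Math.abs(current.data[i] - previous.data[i]) > 30) changed++;
            }
            if (changed > 500) console.log('motion detected:', changed, 'pixels changed');
        }
        previous = current;
    }, 100);
});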

Related

Workflow: Animation project to Web Background. Is WebGL/HTML5/JavaScript a viable alternative to video background limitations? Advice needed.

As many before me, I now face the greatest challenge any web dev/designer must encounter: creating a personal portfolio. If you're a perfectionist like me, then you know how difficult it is to find satisfaction in any one idea, especially when that idea is meant as a direct reflection upon your talent, skills, and overall ability. Making a website for a client: No problem. Making a website for yourself: Battle of the Century.
My previous personal portfolio websites were always basic, in that they used many of the common elements one might find in a typical web portfolio. A little jQuery Isotope here, a little Scroll.js there, wrap it up behind the Foundation framework for responsive design. Boom. Insta-presto, you've got a portfolio... one that looks and feels like everyone else's.
Now I'm not trying to reinvent the wheel. A portfolio should accomplish what it is meant to do: display one's previous experience/work and encapsulate one's overall "ability" or skill level.
For this round I really wanted to go with a full-width video background. The other design elements are easy to pull off: give the site an offset menu, perhaps go with a one-page minimalist design, or use Ajax for page/content navigation.
I keep having trouble with the video background. It's the limitations we face with HTML5 video and how it displays on mobile devices. Forcing or tricking a user into having to press a play button is one extra step that kills the idea, and using an image as a fallback defeats the awesomeness of using a video in the first place.
There is an alternative though. Using Three.js and taking advantage of the technology offered in machines today via graphics rendering, let's create an animation for use as a full width background. It's cross browser compatible and works well on tablets and mobile devices.
My question is to those with heavy JavaScript or Python experience and those who have used Three.js before. I typically use Cinema4D for animation as it provides a fluid and seamless workflow with After Effects. I've already created a 3D element, given it animation, and created a camera to capture it all.
How can one export from Cinema4D for use with WebGL and Three.js? Most tutorials/information on the web is extremely outdated. Even a viable workflow from C4D to Blender to WebGL would work for me, if only someone who understands the process could explain it.
Here are a few examples that have fueled the inspiration behind this project:
http://mrdoob.com/lab/javascript/threejs/css3d/periodictable/ - Built using CSS3 for the overall functionality, this is AWESOME and really what I am aiming to replicate aesthetically.
http://blogs.truthlabs.com/2013/11/12/illustrator-webgl-workflow-tips/ - This tutorial is fantastic, but since it is based around Blender I am in no man's land. For a simple solution this works fine; creating in C4D would take no time at all, it's getting it into WebGL that's the problem.
Thank you for any advice and taking the time to read through this post.
-Cheers,
Branden Dane

Full width video background: A non-HTML5, purely jQuery solution...maybe

Long time Stack Overflow creeper. This community has come up with some incredibly elegant solutions to rather perplexing questions.
I'm more of a CSS3 or PHP kinda guy when it comes to handling dynamically displayed content. Ideally someone with a solid knowledge base of jQuery and/or Javascript would be able to answer this one best. Here is the idea, along with the thought process behind it:
Create a Full Screen (width:100%; height:auto; background:cover;) Video background. But instead of going about using HTML5's video tag, a flash fallback, iFrame, or even .GIF, create a series of images, much like the animation render output of say Cinema4D, that if put together in sequential order create a seamless pseudo-video experience.
In Before "THAT's JUST A .GIF, YOU'RE AN IDIOT" Guy.
I believe jQuery/JavaScript could solve this. Would it or would it not be possible to write a script that essentially recognizes (or even adds) the div class of an image, sets that image to display for roughly 33.4ms (29.97 frames per second works out to about 33.4ms per frame), then sends it back in z-space while at the same time bringing in the next image in the sequential class order to display for another ~33.4ms, and so on and so forth until all of the images (or "frames") play out seamlessly, so the user would assume he/she is actually seeing a video, not knowing it's actually a .GIF on steroids?
Here's a more verbose way of explaining the intended result:
You have a 1-second, super awesome 1080p video clip (the video format doesn't matter for answering this question, just assume it's lossless and really pretty, k?). It's recorded at 29.97 frames per second. Break each frame into its own massive image file, leaving you with essentially 30 images. 24 frames a second would mean you'd have 24 images, 60 frames per second would mean you'd have 60 images, etc., etc.
If you have ever used Cinema4D, the output I am looking to recreate is similar to that of an animation render, where you are left with one .TIFF per frame, placed side by side so that when loaded into Photoshop or viewed in QuickTime you get a "slideshow" of images displaying so fast it looks like a video.
HTML would look something like this:
<div id="incredible-video">
    <div class="image-1">
        <img src="../movie/scene-one.tiff" />
    </div>
    <div class="image-2">
        <img src="../movie/scene-two.tiff" />
    </div>
    <div class="image-3">
        <img src="../movie/scene-three.tiff" />
    </div>
    <div class="image-4">
        <img src="../movie/scene-four.tiff" />
    </div>
    <div class="image-5">
        <img src="../movie/scene-five.tiff" />
    </div>
    ....etc.....
</div>
jQuery/JavaScript could handle appending the sequential image classes instead of writing it all out by hand for each "frame".
CSS would look like:
#incredible-video img {
    position: absolute;
    width: 100%;
    height: auto; /* or height: 100% with object-fit: cover for true 'background: cover' cropping; 'background: cover' itself isn't valid CSS */
}
But what would the jQuery/JavaScript need to be to pull this off, and can it be done? It would need to run right after window load and loop infinitely. Of course audio is not happening in this example, but say we don't need it. Say we just want our end user to have a visually appealing page, with a minimal design implemented in the UI.
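For what it's worth, the core loop is only a handful of lines. Here's a rough sketch assuming the markup above (one img per "frame" inside #incredible-video, all absolutely positioned); in practice you'd preload the images first and might drive it with requestAnimationFrame rather than setInterval:
$(function () {
    var $frames = $('#incredible-video img');
    var current = 0;
    var frameDuration = 1000 / 29.97;   // ≈ 33.4ms per frame
    $frames.hide().first().show();
    setInterval(function () {
        $frames.eq(current).hide();
        current = (current + 1) % $frames.length;   // wrap around for an infinite loop
        $frames.eq(current).show();
    }, frameDuration);
});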
I love video animation, and really love sites built with full-screen backgrounds. But pulling off this visual setup while keeping the site responsive is proving too strenuous a challenge. HTML5 will only get you so far, and it makes mobile compatibility null and void (data usage protection). .GIF files are MASSIVE compared to calling in a .mp4, .webm, or .ogg, so that option is out.
I've actually recently played around with Adobe Edge Animate. Using the Edge Hero .js library I was able to reproduce a similar project to this: http://www.edgehero.com/tutorials/starwars-crawl-tutorial
I found it worked on ALL devices. Very cool. It made me think that maybe it's possible to use this program, or to hit jQuery/JavaScript directly, to achieve the desired effect.
Thanks for taking a look at this one guys.
-Cheers,
Branden Dane
I found a viable solution to what I was looking to do. It's actually rather interesting. The answer introduces many interesting ideas about how we can display any kind of content dynamically on a site, in an app, or even in a full-fledged software application.
The answer came about while diving hard into WebGL, canvas animation (both 2D and 3D), 2D video game techniques, and 3D video game techniques. Instead of looking for that "perfect" workflow, if you are someone interested in creating visually effective design and really seeing what the bleeding edge can do for your thinking on development, skip the GUIs. Ignore the ads for software promising to make things doable in 5 minutes. It's not. However, we are getting there. Major events we have to look forward to in just a few months include:
1.) The universal agreement to implement WebGL natively: Opera, Chrome, and Firefox (of course) already do, Safari will move to ship with WebGL enabled rather than requiring the user to enable it manually, and even IE is going to give it a go (in IE 12).
2.) Unity 3D, an industry standard in game development, has announced that next month it will release version 5, and with it a complete, intuitive workflow from start to exporting in JavaScript (not JSON, actual JavaScript), to the Three.js library more specifically, as it is one of the most popular of the seemingly endless engines out today.
How does this answer my initial question?:
Though WebGL has been around for about 3 years now, we are only now starting to see it shine. It's far more than a simple video game engine. With Three.js we have a fully working JavaScript library capable of rendering in WebGL, to the canvas, or even with a bit of CSS3 magic. Can't use your great movie as a mobile background? Is it ruining the overall UI? Cheer up. Three.js can work with both 2D and 3D JavaScript draw functions, though not at the same time. However, other libraries exist that allow you to bypass this rule.
AND DRUM ROLL: it is, or can very easily be made, responsive or adaptive.
The answer to my question came from looking at custom preloaders. I realized I could create incredible looping animations in AE and export them as GIFs, which offered the quality I wanted, but no control, no optimization, no sound. However, PNG sequences CAN be exported. Then the epiphany hit. Before I just say what I am using to solve my problem, I'd like to leave a list of material that anyone looking to move beyond easy development and challenge the limits can use as a reference guide. This is in order, from where I began to where I am now. I hope it helps someone; the time to figure it all out is very much worth it.
1.) WebGL-Three.JS
WebGL opened my eyes to a new world. It's a technology quickly evolving, and it's here to stay. In a nutshell, the live applications you create now have access to more than just the CPU, but to the graphics card as well. With GPUs getting more and more powerful, and not so unreasonably priced, the possibilities are endless. The idea that we could be playing Crysis 3 "in-browser" without the need for a 3rd-party client is no fiction. It's the future. Apply that to websites. Mind blown.
2.) First Cinema4D, then playing around with Verold.com & PlayCanvas.com
C4D is just my personal favorite because of its easy integration with AE. You will find that when exporting your 3D models, textures, meshes, anything to Three.js (or any game engine, period), it is Blender that is the most widely supported. As of this writing, there are 2 separate C4D-to-Three.js workflows. Both are tedious, not always going to work, and actually just unnecessary. PlayCanvas was also a bit of a letdown. Verold, however, is an EXCELLENT browser-based 3D editor into which you can import a variety of files (even FBX with baked animations!), and when you are satisfied you can export to a standalone client or an iframe. The standalone client is superb. It is a bit glitchy, so have patience. You shouldn't get too comfortable with it anyway. Go back to your roots.
3.) iPhone app development, Android app development (to an impressive extent), websites, web apps, and more all function in a way that an application need only be made using JavaScript, HTML(5) and CSS(3). Once this is understood, and the truth hits you as to how much control you may not have known you had, the day becomes good indeed. Learn the code. With a million untested and horrible "GUIs" out there that claim to do what you want, avoid the useless search. Learn the code. You can never go wrong at that point.
4.) What code do I need to learn?
JavaScript is the most essential. More on that in a moment. Seriously dive into creating apps of any kind with Three.js. Mr.doob (co-creator of the library) has an EXCELLENT, well-documented website with tons of examples, tutorials, and source code for you to dive into. Chrome Experiments is your next awesome option to see how people are really taking this kind of development to a new level. In the process of learning Three.js you'll become more proficient with JavaScript. You will also start to play with things you maybe never had to before, like JSON or XML files for packaging data. You'll also learn how simple it is to use Three.js as a WebGL renderer, or even fall back to canvas and CSS3D if and when possible.
Before going on, I will make a caveat. I believe that once Unity 3D ships Three.js export for pro and free users, we will see much, much more 3D on the web. In that case, it can't hurt to download the software and play around a bit. It also serves as an excellent visual editor. There are exporters from Unity 3D to Three.js, but again, they are still at the pre-alpha stage.
2D or not 2D. That is the question.
After getting a little dirty with 3D, I moved into drawing in the 2D realm using the canvas. Flash still seems like a viable tool, but again, it's all about the code. Learn how to do it and you may find Flash is actually costing you time. I found 2D more difficult than 3D, because the nature of 2D has yet to radically change, at least in my lifetime. You'll need to start with sprite-sheet creation tutorials. Nothing incredibly hard if you know where to look. Use Photoshop, or an equivalent application. Either create as many "movement" frames as would be needed to loop the sprite seamlessly if they were put together in a GIF, OR render out a master image and cut it apart at the element's naturally distinct parts. Ex: you have a guy standing on a street corner you created; cut that character up into as many separate PNG files as you believe you need. The second method still works from the same kind of sprite sheet as the first; the difference is that the first scenario means writing CSS selectors and JavaScript for every single frame, which becomes increasingly difficult.
First solution: Using CSS and Javascript to plot "frames" meticulously put together in the sprite sheet. This really can become a pain if not done correctly all the way through.
Second solution: We lose the frame-by-frame effect if we need it, but our overall 2D animations will look incredible. Building this way also creates more efficient games when implementing physics engines and setting up collision detectors. We still use the same sprite sheet, but we only pick the frames we actually need. The idea is to use dynamic tweening between frames that are called together via JavaScript. In the end you have a fully animated sprite, but you could have done it with just one frame. Ex: you have a Stickman you want to show walking in a straight line. Solution one would jump frame by frame, creating a mild chop, to illustrate an animated walk. In solution two, we take the Stickman and chop his dynamic bits apart so we can call them through JavaScript, then build our sprite from JavaScript directly. To create the walking effect, we cut apart Stickman's legs and keep those separate in the sprite sheet from the rest of his body (unless you need to animate another body part as well). We map out the coordinates for each piece of Stickman. Free software like DarkFunctionEditor is one of many programs that will instantly generate a reliable sprite sheet for you, printing out the coordinates of your sprite sheet after you bake it. With this knowledge, head into JavaScript and declare the variables you wish to associate with the pieces of Stickman and their corresponding coordinates. Then use JavaScript to "build" all the pieces together. The walking animation is accomplished by the tween we talked about earlier. Each leg essentially runs on a beautifully fluid path you set in JavaScript. No chop. Very easy to customize and control. If you want to make it even easier on yourself, try using one of the many libraries for sprite animation, my favorite at the moment being CreateJS.
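To make that concrete, here's a minimal sketch of "solution two" in plain canvas JavaScript (no library): assemble a character from pieces of one sprite sheet and tween a piece in code instead of flipping whole frames. The sheet file name and every coordinate below are made-up placeholders; a real project would read them from whatever your sprite-sheet tool prints out after baking:
var canvas = document.createElement('canvas');
canvas.width = 200;
canvas.height = 300;
document.body.appendChild(canvas);
var ctx = canvas.getContext('2d');
var sheet = new Image();
sheet.src = 'stickman-parts.png';   // hypothetical sprite sheet
// Source rectangles for each piece inside the sheet (made-up numbers).
var parts = {
    body:     { sx: 0,  sy: 0, w: 40, h: 120 },
    leftLeg:  { sx: 50, sy: 0, w: 15, h: 80 },
    rightLeg: { sx: 70, sy: 0, w: 15, h: 80 }
};
function drawLeg(piece, x, y, angleDeg) {
    ctx.save();
    ctx.translate(x, y);                  // hip joint
    ctx.rotate(angleDeg * Math.PI / 180);
    ctx.drawImage(sheet, piece.sx, piece.sy, piece.w, piece.h, -piece.w / 2, 0, piece.w, piece.h);
    ctx.restore();
}
function draw(time) {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    // The body stays put...
    ctx.drawImage(sheet, parts.body.sx, parts.body.sy, parts.body.w, parts.body.h, 80, 40, parts.body.w, parts.body.h);
    // ...while the legs swing on a smooth sine path instead of jumping frame to frame.
    var swing = Math.sin(time / 200) * 15;
    drawLeg(parts.leftLeg, 90, 150, swing);
    drawLeg(parts.rightLeg, 105, 150, -swing);
    requestAnimationFrame(draw);
}
sheet.onload = function () { requestAnimationFrame(draw); };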
If you are looking to include collision detection or create particle systems, then you will need a physics engine. For 2D I am torn between two at the moment; right now I would put PhysicsJS over KineticJS. Both are fantastic. I believe PhysicsJS integrates more easily with CocoonJS and other mobile scripts.
My last words of advice: after reading this, understand that you will be working with JavaScript. You will have a bit of jQuery to make it easy, but you will encounter things that are difficult along the way. My HUGE recommendation is to move into learning how to build using NodeJS. It's an asynchronous JavaScript server-side and client-side development space. The documentation is wonderful. Your first stop should be learning about npm and Bower; then understand how to effectively bring Grunt into the workflow. Try out Node tools like Yeoman to give you "boilerplate" Node setups to start from. After you start understanding NodeJS mechanics and feel comfortable setting up your initial package.json, you'll find that all this JavaScript almost feels like it's writing itself after a certain point.
And that's all you need to know to get into 2D and 3D design and development. My initial question could have been answered using, say, a 3D-rendered fullscreen scene. However, my final solution came from a different method entirely.
After learning about 2D sprites and framing, and then looking at how GIFs are encoded, I had the idea to try creating PNG sprite animations. Not PNG GIFs, per se, but rather a 2D scene using a PNG sequence that I would then animate via JavaScript. I found a few great libraries on GitHub, both for my idea and for cool GIF-manipulation tricks.
My final choice was the GitHub repo "jquery.animateSprite". Instead of mulling through sprite sheets by hand, you take your individual PNGs, and this library gives you an incredible amount of control over how you store variables for later use, as well as over the animations you can pull off in general. For a full-screen, responsive background that works on any device (and can even be animated to sound...) I'd recommend this technique. It works much like a flip-book animation, except much, much more effectively.
I hope this helps someone along the way. If you have a question on anything I have mentioned here, or know of an area that needs further detail, then by all means please let me know.
-Cheers

In JavaScript, is it possible to use audio input as an event listener? (Idea construction)

There is an idea I have been toying with for the past few weeks. I am extremely serious about realising this concept, but I totally lack any know-how about the implementation. I have some thoughts which I'll share as I explain what the idea is.
We have websites. Most of them are responsive.
What is responsive web design?
By and large, responsive web design means that design and development should respond to the user's behaviour and environment based on screen size, platform and orientation. If I change my window size, my website should change its dimensions accordingly.
If I scale some object on the screen, then the website should rearrange/ rescale accordingly.
This is good, but nothing exciting (nowadays!).
I am still so limited by the screen and whatever happens inside it that what I do outside it seems very external and not seamless. Apart from my contact with the mouse/keyboard, I do not know any other way to communicate with the things that happen inside the screen. My screen experience is still a very external thing, not seamless with my cognition. I think this is also a reason why my computer always remains a computer and does not behave like a natural extension of the human body.
I was toying with an idea which I have no clue how to realize. I have a basic idea, but I need some technical/design help from those who are as fascinated by this as I am.
The idea is simple: Make the screen more responsive, but this time without use of a mouse or any such input device. All laptops and most desktops have a microphone. How about using this small device for input?
Let me explain:
Imagine, a website in which screen icons repopulate when you blow a whiff onto the screen. The song on your favourite playlist changes when you whistle to the screen.
Say you have an animated tree on the screen. That tree sheds its leaves when you blow air to it. The shedding depends on how fast you blow. Getting a small idea?
Let me put up some graphics (see the hyperlink below) which I think will make it clearer. I plan to make a website/API in which there is a person with long hair staring at you. If you blow air from the right side of your screen, her hair moves to the left. If you blow air from the left, her hair blows to the right. If you blow faintly, her hair scatters faintly. Some naughty stuff, say you whistle: the character on the screen winks, or, say, throws a disgusted expression, whatever.
The whole concept is that every element of the web must have a direct relation with the user who is sitting outside the screen. It gives a whole lot of realism to the website architecture if things like my whistle, whiff or say even my sneeze can do something to the website! I am not tied to the mouse or the keyboard for my response to be noted. Doesn’t that reduce a hell of a lot of cognitive load on the user?
See this image: http://imgur.com/mg4Whua
Now coming to the technical aspect that I need guidance on.
If I were building a regular responsive website in JavaScript, I'd use addEventListener("click", animate) or addEventListener("resize", animate), something like that. Here I want my event handler to be the audio input coming from the microphone. I also need to know where the audio is coming from, so I can decide which side the hair must fall and play the right animation.
So within the 180/360 degree span of the microphone, I need to catch not just the audio but also its angle, so that the right animation can be played. It'd be a crashing failure if the same animation played wherever I blow air from. It needs that element of realism.
I have asked around, and some people suggested that I try HTML5's WebRTC. I am still seeing if that works, but otherwise are there any more options? I guess Processing is one. Has anyone worked with its audio features?
I want to build a simple prototype first before I delve into the immense possibilities this idea could have. But if you have some really awesome thing in mind, please let me know about it. Exciting ideas are one thing, and exciting implementation totally another. We want both.
Are there such websites already? Is any work happening in this area?
Any small guidance counts!
There are plenty of ways to create your own events. Most libraries have some built-in way of doing so. Basically you're talking about the observer pattern, and here's a nice article explaining it in greater detail: https://dottedsquirrel.com/javascript/observer-pattern/
As far as listening to audio goes: with an AnalyserNode on the input signal and some ingenious code to determine that the sound is the one you want to listen for, firing the event is a piece of cake using the aforementioned custom events.
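To give a rough sense of that combination (not production code; the event name and threshold are arbitrary placeholders, and telling a real "whiff" apart from speech needs smarter analysis than a simple volume check):
navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
    var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    var analyser = audioCtx.createAnalyser();
    analyser.fftSize = 256;
    audioCtx.createMediaStreamSource(stream).connect(analyser);
    var data = new Uint8Array(analyser.frequencyBinCount);
    (function poll() {
        analyser.getByteFrequencyData(data);
        // Average level across the spectrum, 0..255.
        var level = data.reduce(function (a, b) { return a + b; }, 0) / data.length;
        if (level > 80) {   // arbitrary "blow" threshold
            document.dispatchEvent(new CustomEvent('blow', { detail: { level: level } }));
        }
        requestAnimationFrame(poll);
    })();
});
// Elsewhere, listen for it like any other event:
document.addEventListener('blow', function (e) {
    // e.detail.level tells you how strong the detected 'blow' was
    console.log('blow detected, level', e.detail.level);
});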
But, before diving into those: determining the angle of the sound? I do not think that is even possible. You might be able to work out which side the sound is coming from in a '2D' sense, but that certainly won't give you an exact angle. I think you'd need something rather more ingenious than a simple stereo mic setup to determine the angle.

Replicate the JavaScript glow around the giant cloud on the MobileMe login page?

If you access your MobileMe account online with Safari, you can select an icon and log in directly to the selected service; great feature, btw.
But if you access the same page using another browser like Firefox or Chrome, you will see a gorgeous login page with a big, no, huge cloud in the middle (the MobileMe logo) and interesting light balls coming out of it.
Here's the link:
https://auth.me.com/authenticate?service=mail&ssoNamespace=appleid&formID=loginForm&returnURL=aHR0cHM6Ly9tZS5jb20vbWFpbC8=
And the greatest thing is that you can mouse over those little light balls and they follow your mouse movement.
It's just beautiful and I have never seen anything like that in JavaScript. And I couldn't understand, by looking at their code, how they did it. Of course their JavaScript is compressed so I couldn't read it, but in the markup those shiny lights are just a bunch of canvas tags.
Does anyone have an idea of how to make something like that? It's probably way beyond my JavaScript skills, but it would be great to add such an effect to one of my projects.
Thanks in advance for all your suggestions ;)
That takes a lot of skill. I believe it's achievable with Processing.js:
http://processingjs.org/
Take a look at this quote:
So, how is this eye candy accomplished? Through over 6000 lines of (unminified) JS. MobileMe usually uses SproutCore for its applications, but after looking through the source code, I didn’t find a single reference to it. There did appear to be some resemblance of a library being used in the login page, however, but I think it is pretty custom. There appeared to be a class for each of the visual components on the screen, at least one if not two separate animation libraries (one 2d and one 3d), a particle rendering library, and libraries for dealing with canvas drawing and DOM manipulation.
So it looks like it was custom made. You can read more here: http://badassjs.com/post/1649735994/the-new-mobileme-login-page-has-some-badass-js
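If you just want a feel for the basic building block (particles drawn to a canvas that ease toward the mouse), here's a minimal plain-canvas sketch; it's nowhere near Apple's effect, and all the numbers are arbitrary:
var canvas = document.createElement('canvas');
canvas.width = 600;
canvas.height = 400;
document.body.appendChild(canvas);
var ctx = canvas.getContext('2d');
var mouse = { x: 300, y: 200 };
canvas.addEventListener('mousemove', function (e) {
    var rect = canvas.getBoundingClientRect();
    mouse.x = e.clientX - rect.left;
    mouse.y = e.clientY - rect.top;
});
var particles = [];
for (var i = 0; i < 30; i++) {
    particles.push({
        x: Math.random() * canvas.width,
        y: Math.random() * canvas.height,
        // Each particle keeps its own offset from the mouse so they don't pile up.
        ox: (Math.random() - 0.5) * 200,
        oy: (Math.random() - 0.5) * 200,
        r: 2 + Math.random() * 4
    });
}
(function tick() {
    // Fade the previous frame instead of clearing it, which leaves a glowing trail.
    ctx.fillStyle = 'rgba(0, 0, 0, 0.2)';
    ctx.fillRect(0, 0, canvas.width, canvas.height);
    particles.forEach(function (p) {
        // Ease each particle a little toward its target near the mouse.
        p.x += (mouse.x + p.ox - p.x) * 0.02;
        p.y += (mouse.y + p.oy - p.y) * 0.02;
        var glow = ctx.createRadialGradient(p.x, p.y, 0, p.x, p.y, p.r * 4);
        glow.addColorStop(0, 'rgba(180, 220, 255, 0.9)');
        glow.addColorStop(1, 'rgba(180, 220, 255, 0)');
        ctx.fillStyle = glow;
        ctx.beginPath();
        ctx.arc(p.x, p.y, p.r * 4, 0, Math.PI * 2);
        ctx.fill();
    });
    requestAnimationFrame(tick);
})();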
I hope this helps.

Dragging an object from an HTML page (using JavaScript) into a Flash container?

I am doing some pre-production on a project that requires drawing on a 3D canvas, for which I think Flash is the best way to go. But there is a chance down the line that this client might want the site to show up on the iPad, iPhone, or other mobile devices that don't support Flash.
So I was playing with the idea of doing everything in HTML and JavaScript except for the actual drawing/3D area, almost like using Flash as the element. I think HTML5 is too premature to start using for this, but it might be beneficial down the line. Chances are I will just go the entire Flash route, but I thought it would be interesting to try.
Anyway, my question is pretty top-level: 1) How hard would it be to drag an object from an HTML page using JavaScript, drop it into the Flash Player, and then manipulate it from there?
Are there any examples out there that have tried to do this?
Communication between Flash and an HTML page requires the use of ExternalInterface. That's a good place to start. These guys seem to have figured out some of the details.
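As a rough sketch of what the JavaScript side can look like, assuming the SWF has registered a callback (here called receiveDrop, a made-up name) via ExternalInterface.addCallback in its ActionScript, and that the SWF is embedded with a wmode that lets HTML drag events reach its container:
// Make an HTML element draggable and hand its data to the SWF on drop.
var draggable = document.getElementById('draggable-item');   // hypothetical element id
draggable.setAttribute('draggable', 'true');
draggable.addEventListener('dragstart', function (e) {
    e.dataTransfer.setData('text/plain', this.id);
});
// The SWF sits inside this container div; listen for drops over it.
var flashContainer = document.getElementById('flash-container');
flashContainer.addEventListener('dragover', function (e) {
    e.preventDefault();   // required so the drop event fires
});
flashContainer.addEventListener('drop', function (e) {
    e.preventDefault();
    var itemId = e.dataTransfer.getData('text/plain');
    // The object/embed element exposes callbacks registered via ExternalInterface.addCallback.
    var movie = document.getElementById('myFlashMovie');   // hypothetical SWF element id
    if (movie && typeof movie.receiveDrop === 'function') {
        movie.receiveDrop(itemId, e.offsetX, e.offsetY);    // calls into ActionScript
    }
});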
As far as the iPad and iPhone are concerned, Flash just ain't gonna happen.
However, I don't think it's fair to say HTML5 is "too premature". Lots of people are doing lots of cool things with the canvas element. You could too. :)
