I have an animated character (half pig, half woman) who plays an integral role in my client's brand. She performs several movements/actions, such as walking, running, and climbing in place, dancing in several different ways, and gesturing with her face and body.
On the website, she will float near the user's scroll position and perform different actions (i.e. play a specified segment of this motion/action) based on what the user is doing at the time. For example, while the user scrolls, the page might play the climbing loop; when focus is given to a lead-capture form, the character starts dancing; and when the user is typing in the 'email' field of said form, we jump to the super-fun part of the dance... I'm not sure it will be exactly those, but something along those lines.
With the exception of the dance, which will run around 60 seconds in its non-looped state, everything else will only be a couple of seconds max. So I'm trying to figure out the most efficient way to rig this - by which I mean the best way to use JavaScript to control the character's actions based on the user's actions.
I'm considering using animated GIFs for everything except the dancing and just switching out their src when appropriate (e.g. $pigImg.src='pig-smile.gif' when she is clicked and $pigImg.src='pig-climb.gif' while scrolling), then hiding the GIF (or displaying something blank) and playing the video at the appropriate timestamp when it's time for her to dance.
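To make that concrete, here's the kind of wiring I have in mind - a rough, untested sketch where the element IDs, file names, and the scroll-quiet timeout are all placeholders:

// Rough sketch: swap the character GIF based on what the user does.
// All IDs and file names here are placeholders.
var $pig = $('#pig');

// Clicking the character: play the smile loop.
$pig.on('click', function () {
    $pig.attr('src', 'pig-smile.gif');
});

// Scrolling: play the climb loop, then fall back to idle once
// scrolling stops (150 ms of quiet is an arbitrary threshold).
var scrollTimer;
$(window).on('scroll', function () {
    $pig.attr('src', 'pig-climb.gif');
    clearTimeout(scrollTimer);
    scrollTimer = setTimeout(function () {
        $pig.attr('src', 'pig-idle.gif');
    }, 150);
});

// Focus on the lead-capture form: hide the GIF and start the dance video.
$('#lead-form').on('focusin', function () {
    $pig.hide();
    var video = document.getElementById('pig-dance');
    video.currentTime = 0; // or jump straight to the "super-fun" timestamp
    video.play();
    $(video).show();
});

One side effect worth noting: swapping a GIF's src restarts it from its first frame, which is probably what we'd want here anyway.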
I think I'll do all of this within a <canvas> element to maximize the flexibility:simplicity ratio - if for no other reason than to be able to use clips shot against a green screen for things I haven't even planned yet.
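For the green-screen idea specifically, the usual canvas trick as I understand it is to draw each video frame to the canvas and zero out the alpha of sufficiently green pixels. A rough sketch - the IDs are placeholders and the thresholds are guesses that would need tuning per clip:

// Rough chroma-key sketch: copy each video frame to a canvas and zero
// the alpha of "green enough" pixels.
var video = document.getElementById('pig-video');
var canvas = document.getElementById('pig-canvas');
var ctx = canvas.getContext('2d');

function drawKeyedFrame() {
    if (video.paused || video.ended) return; // stop looping when playback stops
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    var frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
    var data = frame.data;
    for (var i = 0; i < data.length; i += 4) {
        var r = data[i], g = data[i + 1], b = data[i + 2];
        // Knock out pixels where green clearly dominates red and blue.
        if (g > 100 && g > r * 1.4 && g > b * 1.4) {
            data[i + 3] = 0;
        }
    }
    ctx.putImageData(frame, 0, 0);
    requestAnimationFrame(drawKeyedFrame);
}
video.addEventListener('play', drawKeyedFrame);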
I know this is a pretty broad intro, so I'll try to focus this post on one question: are there any potential obstacles I need to consider with my <canvas>-plus-mix-of-video-and-GIFs approach? I know that switching between the two media types may cause some mis-registration issues (i.e. the character not lining up 100% on the nose).
I'm sure this will end up being FAR more complex in reality than it is in my head right now, so making sure I'm not dreaming up something filled with technical holes I'm unaware of seems like a good starting point.
Long-time Stack Overflow creeper. This community has come up with some incredibly elegant solutions to rather perplexing questions.
I'm more of a CSS3 or PHP kinda guy when it comes to handling dynamically displayed content, so ideally someone with solid knowledge of jQuery and/or JavaScript would be best placed to answer this one. Here is the idea, along with the thought process behind it:
Create a full-screen (width:100%; height:auto; background-size:cover;) video background. But instead of using HTML5's video tag, a Flash fallback, an iframe, or even a .GIF, create a series of images - much like the animation render output of, say, Cinema4D - that, put together in sequential order, create a seamless pseudo-video experience.
In before the "THAT'S JUST A .GIF, YOU'RE AN IDIOT" guy.
I believe jQuery/JavaScript could solve this. Would it or would it not be possible to write a script that recognizes (or even adds) the div class of an image, displays that image for roughly 33.4 ms (one frame at 29.97 fps), then sends it back in z-space while firing in the next image in the sequential class order for another ~33.4 ms, and so on until all of the images (or "frames") play out so fluidly that the user assumes he/she is actually seeing a video - not knowing it's actually a .GIF on steroids?
Here's a more verbose way of explaining the intended result:
You have a 1-second super awesome 1080p video clip (video format doesn't matter for answering this question, just assume it's lossless and really pretty, k?). It's recorded at 29.97 frames per second. Break each frame into its own massive image file, leaving you with essentially 30 images. 24 frames per second would mean 24 images, 60 frames per second would mean 60 images, etc., etc.
If you have ever used Cinema4D, the output I am looking to recreate is like that of an animation render, where you are left with one .TIFF per frame, placed side by side so that when loaded into Photoshop or viewed in QuickTime you get a "slideshow" of images displaying so fast it looks like a video.
HTML would look something like this:
<div id="incredible-video">
  <div class="image-1">
    <img src="../movie/scene-one.tiff"/>
  </div>
  <div class="image-2">
    <img src="../movie/scene-two.tiff"/>
  </div>
  <div class="image-3">
    <img src="../movie/scene-three.tiff"/>
  </div>
  <div class="image-4">
    <img src="../movie/scene-four.tiff"/>
  </div>
  <div class="image-5">
    <img src="../movie/scene-five.tiff"/>
  </div>
  ....etc.....
</div>
jQuery/JavaScript could handle appending the sequential image classes instead of writing it all out by hand for each "frame".
CSS would look like:
#incredible-video img {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: auto; /* background:cover isn't valid here; it only applies to backgrounds, not <img> */
}
But what would the jQuery/JavaScript need to be to pull this off - and can it be done? It would need to fire right after window load and run on an infinite loop. Of course audio is not happening in this example, but say we don't need it. Say we just want our end user to have a visually appealing page, with a minimal design implemented in the UI.
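To be clear about what I'm imagining, here's an untested sketch of that loop, assuming the #incredible-video markup above (setInterval is used purely for illustration):

// Untested sketch: show one frame at a time at roughly 29.97 fps
// (about 33.4 ms per frame), looping forever after window load.
$(window).on('load', function () {
    var $frames = $('#incredible-video img');
    var current = 0;

    $frames.hide().eq(0).show(); // start on the first frame

    setInterval(function () {
        $frames.eq(current).hide();
        current = (current + 1) % $frames.length; // wrap around for an infinite loop
        $frames.eq(current).show();
    }, 1000 / 29.97);
});

(requestAnimationFrame would likely give steadier timing than setInterval, but it ticks at the display's refresh rate rather than the clip's native frame rate.)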
I love video animation, and I really love sites built with full-screen backgrounds. But building a site with this visual setup while keeping it responsive is proving too strenuous a challenge. HTML5 video will only get you so far, and it makes mobile compatibility null and void (data usage protection). .GIF files are MASSIVE compared to calling in an .mp4, .webm, or .ogg, so that option is out.
I've recently played around with Adobe Edge Animate. Using the Edge Hero .js library I was able to reproduce a project similar to this: http://www.edgehero.com/tutorials/starwars-crawl-tutorial
I found it worked on ALL devices. Very cool. It made me think that maybe it's possible to use this program, or jQuery/JavaScript directly, to achieve the desired effect.
Thanks for taking a look at this one guys.
-Cheers,
Branden Dane
I found a viable solution to what I was looking to do. It's actually rather interesting. The answer introduces many interesting ideas on how we can display any kind of content dynamically on a site, in an app, or even in a full-fledged software application.
The answer came about while diving hard into WebGL, canvas animation (both 2D and 3D), and 2D and 3D video game techniques. Instead of looking for that "perfect" workflow - if you are someone interested in creating visually effective design and really seeing what the bleeding edge can do for your thoughts on development - skip the GUIs. Ignore the ads for software promising to make things doable in 5 minutes. It's not. However, we are getting there. Two major events we have to look forward to in just a few months:
1.) The universal agreement to implement WebGL natively: Opera, Chrome and Firefox (of course), Safari moving to ship with WebGL enabled rather than the user having to enable it manually, and even IE going to give her a go (in IE 12).
2.) Unity 3D, an industry standard in game development, has announced that next month it will release version 5, and with it a complete, intuitive workflow from start to exporting in JavaScript (not JSON - actual JavaScript), targeting the Three.js library specifically, as it is one of the most popular of the seemingly endless game engines out today.
How does this answer my initial question?:
Though WebGL has been around for about 3 years now, we are only now starting to see it shine. It's far more than a simple video game engine. With Three.js we have a fully working JavaScript library capable of rendering in WebGL, to the canvas, or even with a bit of CSS3 magic. Can't use your great movie as a mobile background? Is it ruining the overall UI? Cheer up. Three.js can work with both 2D and 3D draw functions, though not at the same time; however, other libraries exist that let you bypass this rule.
AND DRUM ROLL: it is, or can very easily be made, responsive or adaptive.
The answer to my question came from looking at custom preloaders. I realized I could create incredible looping animations in AE and export them as GIFs, which offered the quality I wanted, but no control, no optimization, no sound. However, PNG sequences CAN be exported. Then the epiphany hit. Before I just say what I am using to solve my problem, I'd like to leave a list of material that anyone looking to move beyond easy development and challenge limits can use as a reference guide. It runs in order from where I began to where I am now. I hope it helps someone. The time spent finding it all out would be very much worth it.
1.) WebGL-Three.JS
WebGL opened my eyes to a new world. It's a quickly evolving technology and it's here to stay. In a nutshell, all live applications you create now have access to more than just the CPU, but the graphics card as well. With GPUs getting more and more powerful, and not so unreasonably priced, the possibilities are endless. The idea that we could be playing Crysis 3 "in-browser" without the need for a 3rd-party client is no fiction. It's the future. Apply that to websites. Mind blown.
2.) First Cinema4D, then playing around with Verold.com & PlayCanvas.com
C4D is just my personal favorite because of its easy integration with AE. You will find that for exporting your 3D models, textures, meshes - anything - to Three.js (or any game engine, period), it is Blender that is the most widely supported. As of this writing, there are two separate C4D-to-Three.js workflows. Both are tedious, not always going to work, and actually just unnecessary. PlayCanvas was also a bit of a letdown. Verold, however, is an EXCELLENT browser-based 3D editor in which you can import a variety of files (even FBX with baked animations!) and, when you are satisfied, export to a standalone client or an iframe. The standalone client is superb. It is a bit glitchy, so have patience. You shouldn't get comfortable with it anyway. Go back to your roots.
3.) iPhone app development, Android app development (to an impressive extent), websites, web apps, and more all function in a way that an application need only be made using JavaScript, HTML/5 and CSS/3. Once this is understood, and the truth hits you as to how much control you may not have known you had, the day becomes good indeed. Learn the code. With a million untested and horrible "GUIs" out there that claim to do what you want, avoid the useless search. Learn the code. You can never go wrong at that point.
4.) What code do I need to learn?
JavaScript is the most essential. More on that in a moment. Seriously dive into creating apps of any kind with Three.js. Mr.doob (co-creator of the library) has an EXCELLENT, well-documented website with tons of examples, tuts, and source code for you to dive into. Chrome Experiments is your next awesome option to see how people are really taking this kind of development to a new level. In the process of learning Three.js, you'll become more proficient with JavaScript. You will also start to play with things you maybe never had to, like JSON or XML files for packaging data. You'll also learn how simple it is to implement Three.js as a WebGL renderer, with fallbacks to canvas and even CSS 3D where possible.
Before going on, I will make a caveat. I believe that once Unity 3D ships its Three.js export for pro and free users, we will see much, much more 3D on the web. In that case, it can't hurt to download the software and play around a bit. It also serves as an excellent visual editor. There are exporters from Unity 3D to Three.js, but again, they are still at the pre-alpha stage.
2D or not 2D, that is the question.
After getting a little dirty with 3D, I moved into drawing in the 2D realm using the canvas. Flash still seems like a viable tool but, again, it's all about the code. Learn how to do it and you may find Flash is actually costing you time. I found 2D more difficult than 3D, because the nature of 2D has yet to radically change, at least in my lifetime. You'll need to start with sprite-sheet creation tutorials. Nothing incredibly hard if you know where to look. Use Photoshop, or an equivalent application. Either create as many "movement" frames as would be needed to loop the sprite seamlessly if they were put together in a GIF, or render out a master image and cut around the element's naturally distinct parts. Ex: you have a guy standing on a street corner; cut that character up into as many separate PNG files as you believe you need. The first scenario means writing CSS selectors and JavaScript to step through the frames, which can become increasingly difficult; the second method is all about building from the pieces of that same sprite sheet.
First solution: use CSS and JavaScript to plot "frames" meticulously put together in the sprite sheet. This really can become a pain if not done correctly all the way through.
Second solution: we lose the frame-by-frame effect if we need it, but our overall 2D animations will look incredible. Building this way also creates more efficient games when implementing physics engines and setting up collision detectors. We still use the same sprite sheet; however, we only need the frames we actually need. The idea is to use dynamic tweening between pieces that are called together via JavaScript. In the end you have a fully animated sprite, but could have done so with just one frame. Ex: you have a Stickman you want to show walking in a straight line. Solution one would jump frame by frame, creating a mild chop, to illustrate an animated walk. In solution two, we take the Stickman and chop his dynamic bits apart so we can call them through JavaScript, then build our sprite in JavaScript directly. To create the walking effect, we separate Stickman's legs in the sprite sheet from the rest of his body (unless you need to animate another body part as well). We map out the coordinates of each piece of Stickman. Free software like DarkFunctionEditor is one of many programs that will take care of generating a reliable sprite sheet for you, printing out your sprite sheet's coordinates after you bake it. With this knowledge, head into JavaScript and declare variables for the pieces of Stickman and their corresponding coordinates, then use JavaScript to "build" the pieces together. The walking animation is accomplished by the tween we talked about earlier: each leg essentially runs on a beautifully fluid path you set in JavaScript. No chop. Very easy to customize and control. If you want to make it even easier on yourself, try one of the many libraries for sprite animation, my favorite at the moment being CreateJS. A rough sketch of the idea follows below.
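To make that less abstract, here is a rough CreateJS sketch of the build-and-tween idea; every file name, coordinate, and duration below is made up for illustration:

// Hypothetical sketch of "solution two" with CreateJS: build Stickman
// from separate pieces and tween a leg instead of stepping through frames.
var stage = new createjs.Stage('game-canvas');

var body = new createjs.Bitmap('stickman-body.png');
var leftLeg = new createjs.Bitmap('stickman-leg.png');
var rightLeg = new createjs.Bitmap('stickman-leg.png');

body.x = 100; body.y = 50;
leftLeg.x = 105; leftLeg.y = 120;
rightLeg.x = 125; rightLeg.y = 120;
// (set regX/regY on the legs so they pivot at the hip, not the image corner)
stage.addChild(leftLeg, rightLeg, body);

// Swing each leg back and forth on a smooth, looping path: no chop.
createjs.Tween.get(leftLeg, { loop: true })
    .to({ rotation: 25 }, 300, createjs.Ease.quadInOut)
    .to({ rotation: -25 }, 300, createjs.Ease.quadInOut);
createjs.Tween.get(rightLeg, { loop: true })
    .to({ rotation: -25 }, 300, createjs.Ease.quadInOut)
    .to({ rotation: 25 }, 300, createjs.Ease.quadInOut);

// Redraw the stage every tick so the tweens are visible.
createjs.Ticker.addEventListener('tick', stage);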
If you are looking to include collision detection or create particle systems, you will need a physics engine. For 2D I am torn between two at the moment: right now I would put PhysicsJS over KineticJS. Both are fantastic. I believe PhysicsJS integrates more easily with CocoonJS and other mobile scripts.
My last words of advice: after reading this, understand that you will be working with JavaScript. You will have a bit of jQuery to make it easy, but you will encounter things that are difficult along the way. My HUGE recommendation is to move into learning how to build with NodeJS. It's an asynchronous JavaScript server-side and client-side development space. The documentation is wonderful. Your first stop should be learning about npm and Bower, then understand how to effectively implement Grunt into the workflow. Try out Node assets like Yeoman to give you "boilerplate" Node setups to start from. After you understand NodeJS mechanics and feel comfortable setting up your initial package.json, you'll find that all this JavaScript almost feels like it's writing itself after a certain point.
And that's all you need to know to get into 2D and 3D design and development. My initial question could have been answered with, say, a 3D-rendered full-screen scene. However, my final conclusion came from a different method entirely.
After learning about 2D sprites and framing, and then noticing the encoding process of GIFs, I had the idea to try and create PNG sprite animations - not PNG GIFs, per se, but rather a 2D scene using a PNG sequence that I would then animate via JavaScript. I found a few great libraries on GitHub, both for my idea and for cool GIF-manipulation ideas.
My final choice was the GitHub repo "jquery.animateSprite". Instead of mulling through sprite sheets, you take your individual PNGs, and this library gives you an incredible amount of control - in how you can store variables for later use, but also in the animations you can pull off in general. For a full-screen, responsive background that works on any device (and can even be animated to sound....) I'd recommend this technique. It works much like a flip-book animation, except much, much more effectively. A bare-bones version of the idea is sketched below.
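If you'd rather see the bones of the technique without the library, this is roughly what the flip-book loop comes down to (frame count, paths, and frame rate are placeholders; jquery.animateSprite layers named animations, fps control, and callbacks on top of this kind of loop):

// Bare-bones flip-book: cycle an <img> through a numbered PNG sequence.
// Frame count, paths, and frame rate are placeholders.
var frameCount = 30;
var img = document.getElementById('background-frame');
var current = 0;

// Preload every frame up front so the loop never stalls on a fetch.
var frames = [];
for (var i = 1; i <= frameCount; i++) {
    var f = new Image();
    f.src = 'frames/scene-' + i + '.png';
    frames.push(f);
}

setInterval(function () {
    current = (current + 1) % frameCount;
    img.src = frames[current].src;
}, 1000 / 24); // ~24 fps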
I hope this helps someone along the way. If you have a question on anything I have mentioned here, or know of an area that needs further detail, then by all means please let me know.
-Cheers
There is an idea I have been toying with for the past few weeks. I am extremely serious about realising this concept, but I totally lack any know-how about the implementation. I have some thoughts, which I'll share as I explain what the idea is.
We have websites. Most of them are responsive.
What is responsive web design?
By and large, responsive web design means that design and development should respond to the user's behaviour and environment based on screen size, platform and orientation. If I change my window size, my website should change its dimensions accordingly.
If I scale some object on the screen, the website should rearrange/rescale accordingly.
This is good, but nothing exciting (nowadays!).
I am again so damnfully limited by the screen, and by whatever happens inside it, that what I do outside it seems very external and not seamless. Apart from my contact with the mouse/keyboard, I do not know any other way to communicate with the things that happen inside the screen. My screen experience is still a very external feature, not seamless with my cognition. I think this is also a reason why my computer always remains a computer and does not behave like a natural extension of the human body.
I was toying with an idea I have no clue how to realize. I have a basic idea, but I need some technical/design help from those who are as fascinated by this as I am.
The idea is simple: make the screen more responsive, but this time without the use of a mouse or any such input device. All laptops and most desktops have a microphone. How about using this small device for input?
Let me explain:
Imagine a website in which screen icons repopulate when you blow a whiff onto the screen, or the song on your favourite playlist changes when you whistle at the screen.
Say you have an animated tree on the screen. That tree sheds its leaves when you blow air at it. The shedding depends on how fast you blow. Getting the idea?
Let me put up some graphics (see the hyperlink after this paragraph) which I think will make it clearer. I plan to make a website/API in which there is a person with long hair staring at you. If you blow air from the right side of your screen, her hair moves to the left. If you blow air from the left, her hair blows to the right. If you blow faintly, her hair scatters faintly. And some naughty stuff: say you whistle - the character on the screen winks, or throws a disgusted expression, whatever.
The whole concept is that every element of the web must have a direct relation with the user sitting outside the screen. It gives a whole lot of realism to the website architecture if things like my whistle, my whiff, or even my sneeze can do something to the website! I am not tied to the mouse or the keyboard for my response to be noted. Doesn't that remove a hell of a lot of cognitive load from the user?
See this image: http://imgur.com/mg4Whua
Now coming to the technical aspect that I need guidance on.
If I were building a regular responsive website in JavaScript, I'd use addEventListener("click", animate) or addEventListener("resize", animate) - something like that. Here, I want my event handler to be the audio input coming from the microphone. I also need to know where the audio originates, so I can decide which side the hair must fall to and play the right animation.
So across the microphone's 180/360-degree span, I need to capture not just the audio but also its angle, so that the right animation can be played. It would be a crashing failure if the same animation played wherever I blow air. It needs that element of realism.
I have asked around, and some people suggested I try HTML5's WebRTC. I am still seeing if that works, but are there any other options? I guess Processing is one. Has anyone worked with its audio features?
I want to build a simple prototype first before I delve into the immense possibilities this idea could have. But if you have some really awesome thing in mind, please let me know about it. Exciting ideas are one thing, and exciting implementation totally another. We want both.
Are there such websites already? Is any work happening in this area?
Any small guidance counts!
There are plenty of ways to create your own events. Most libraries have some built-in way of doing so. Basically you're talking about the observer pattern, and here's a nice article explaining it in greater detail: https://dottedsquirrel.com/javascript/observer-pattern/
As far as listening to audio goes: using an AnalyserNode on the input signal, plus some ingenious code to determine that the sound is the one you want to listen to, firing the event is a piece of cake using the aforementioned custom events.
But before diving into those: determining the angle of the sound? I do not think that is even possible. You might be able to tell roughly where a sound originates left-to-right with a stereo mic, but that certainly won't give you an angle. I think you'd need something rather more ingenious than a simple stereo mic setup to determine the angle.
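To sketch the feasible half of that - level detection plus a custom event - here is roughly what it could look like with the Web Audio API. The threshold and event name are arbitrary, and telling a whiff from a whistle would need real frequency analysis on top of this:

// Rough sketch: detect a loud burst on the microphone and fire a custom
// "blow" event. (Older WebKit builds need prefixed getUserMedia/AudioContext.)
navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
    var audioCtx = new AudioContext();
    var source = audioCtx.createMediaStreamSource(stream);
    var analyser = audioCtx.createAnalyser();
    analyser.fftSize = 256;
    source.connect(analyser);

    var data = new Uint8Array(analyser.frequencyBinCount);

    (function poll() {
        analyser.getByteFrequencyData(data);
        var level = 0;
        for (var i = 0; i < data.length; i++) level += data[i];
        level /= data.length; // average input level, 0-255

        if (level > 60) { // arbitrary "blow" threshold
            document.dispatchEvent(new CustomEvent('blow', { detail: { level: level } }));
        }
        requestAnimationFrame(poll);
    })();
});

// Elsewhere, it behaves like any other event:
document.addEventListener('blow', function (e) {
    // e.detail.level tells you how hard the user blew
});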
I am trying to create a feature where a user can switch (back and forth) between multiple videos while maintaining a single consistent audio track. Think of being able to watch a concert from multiple angles while listening to a single audio feed. The trouble I am having with this feature is that there cannot be any lag between video changes, or the audio will no longer sync with the videos (especially true after multiple changes).
I have tried two methods, both using HTML5 only (I would prefer not to use Flash, although I will eventually have a fallback), and neither has worked seamlessly, although depending on the browser and hardware they can come very close.
Basic Methods:
Method 1: Preloading all videos and changing the video src path on each click using JavaScript.
Method 2: Again preloading the videos, but using multiple <video> tags and switching between them using JavaScript on each click.
Is there any way to get either of these two methods to work seamlessly, without a gap? Should I be using a sleight-of-hand trick, like playing both videos concurrently for a second before revealing the second and stopping the first? Can this just not be done with HTML5 players? Can it be done with Flash?
I have seen this type of question a couple of times, for both video and audio, with no clear solution, but those posts were a couple of months old and I was hoping there is now a solution. Thanks for the help.
Worth adding that this is possible with the MediaSource API proposed by Google. This API allows you to feed arbitrary binary data to a single video element, so if you have your video split into chunks you can fetch those chunks via XHR and append them to your video element, and they'll be played without gaps.
Currently it's implemented only in Chrome, and you need to enable "Enable Media Source API on <video> elements" in chrome://flags to use it. Also, only the WebM container is currently supported.
Here is an article on HTML5Rocks that demonstrates how the API works: "Stream" video using the MediaSource API.
Another useful article that talks about chunked playlist: Segmenting WebM Video and the MediaSource API.
I hope this implementation gets adopted and gets wider media container support.
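For a feel of how the pieces fit together once the API settled down and lost its prefix (the codec string and chunk URLs below are placeholders; the articles above use XHR and the prefixed WebKitMediaSource):

// Rough outline of gapless chunked playback with the MediaSource API.
var video = document.querySelector('video');
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function () {
    var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8, vorbis"');
    var chunks = ['chunk1.webm', 'chunk2.webm', 'chunk3.webm']; // placeholder URLs
    var index = 0;

    function appendNext() {
        if (index >= chunks.length) {
            mediaSource.endOfStream();
            return;
        }
        fetch(chunks[index++])
            .then(function (res) { return res.arrayBuffer(); })
            .then(function (buf) { sourceBuffer.appendBuffer(buf); });
    }

    // Queue the next chunk as soon as the previous append finishes.
    sourceBuffer.addEventListener('updateend', appendNext);
    appendNext();
});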
UPDATE JUN 2014: Browser support is slowly getting better (thanks @Hugh Guiney for the tip):
Chrome Stable
FF 25+ has a flag media.mediasource.enabled [MDN]
IE 11+ on Windows 8.1 [MSDN]
Did you find a better way to do that?
I implemented a double-buffered playback using two video tags.
One is used for the current playback, and the second for preloading the next video.
When the video ends, I "swap" the tags:
function visualSwap() {
    // Hand the audio over: the preloaded player inherits the volume
    // of the one that was just playing, which is then muted.
    video.volume = video2.volume;
    video2.volume = 0;

    // Swap visibility by resizing: the preloaded video takes over the
    // display width while the finished one collapses to nothing.
    video.style.width = '300px';
    video2.style.width = '0px';
}
It has some non-deterministic behavior, so I am not 100% satisfied, but it's worth trying...
Changing the src is fast, but not gapless. I'm trying to find the best method for a media player I'm creating, and preloading the next track then switching the src on "ended" leaves a gap of about 10-20 ms - which may sound tiny, but it's enough to be noticeable, especially with music.
I've also tested using a second audio element that fires as soon as the first audio element's 'ended' event fires, and that incurred the same tiny gap.
It looks like (without resorting to elaborate hacks) there isn't a simple(ish) way of achieving gapless playback, at least right now.
It is possible - check this out: http://evelyn-interactive.searchingforabby.com/ It's all done in HTML5. They preload all the videos at the beginning and start them at the same time. I haven't had time yet to check exactly how they're doing it, but maybe it helps if you inspect their scripts via Firebug.
After many attempts, I ended up using something similar to Method 2. I found this site http://switchcam.com and basically copied their approach. I pre-buffered each video as its start time approached and then auto-played it as its starting point hit. I had all the current videos playing simultaneously (in a little div, as a UI bonus) and users could toggle between them, switching the "main screen view". Since all videos were playing at once, you could choose the audio, and the gap didn't end up being an issue.
Unfortunately, I never solved my problem exactly, and I am not sure my solution has the best performance, but it works okay with a fast connection. The gist of it is sketched below.
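In outline, it looked something like this (IDs and class names are made up, and the real version also handled buffering):

// Hypothetical sketch of the "all angles playing at once" approach.
var videos = Array.prototype.slice.call(document.querySelectorAll('video.angle'));

videos.forEach(function (video, i) {
    video.muted = i !== 0; // only one element carries the audio
    video.play();          // every angle runs in lockstep from the start

    // Clicking any small video promotes it to the main view. Nothing
    // seeks or restarts, so the shared audio never falls out of sync.
    video.addEventListener('click', function () {
        videos.forEach(function (other) {
            other.classList.remove('main-view');
        });
        video.classList.add('main-view'); // CSS sizes .main-view large, the rest small
    });
});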
Thanks for everyone's help!
I have a video playing on iOS 4.2 where I listen to the timeupdate event and pause the video at certain times. This is fairly accurate. However, I also have player controls that seek to certain parts of the video by setting currentTime.
There appears to be a bug where the seeked time is never accurate - not accurate enough for what I need to do with it. The problem gets worse as the length of the video increases. I've noticed that at the beginning of the video the seek time is around 0.5 milliseconds off the time I specify, but the error grows as I seek further along: seeking 2 minutes into a video is off by around 2 seconds.
I don't think this is a problem with my code, as I've replicated the same behaviour using the open-source jPlayer:
http://www.jplayer.org/HTML5.Media.Event.Inspector/
currentTime has caused me nothing but problems on iOS. It didn't even work on 3.2.
Is the problem I'm having now a known bug and is there a workaround for this?
I ran a test to see if I could confirm the same behavior on my emulated build of iOS 4.1.
I definitely experienced similar problems on both the iPhone and iPad, but I didn't notice the offset growing proportionately as the video got longer - it seemed more random. If I had to guess, I'd say the video is seeking to the keyframe prior to your requested time rather than to the correct position.
The iOS devices seem to be reporting currentTime accurately: you are able to pause the video in the correct place, and the timecode on the iPhone matches that on the desktop. It just won't cue up in the correct place.
What kind of video are you using? I tested h264 video encoded with ffmpeg.
It might be worth adding more keyframes to your video, or looking for a switch in your encoder that makes the content more easily seekable. I know Ogg video has support for indexing (see this post). That won't help this issue directly, but we might be able to find a parallel solution that works here.
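If it helps, forcing a denser keyframe interval at encode time is one switch away with ffmpeg/x264; something like the line below (a GOP size of 30 is just a starting point to experiment with):

ffmpeg -i input.mp4 -c:v libx264 -g 30 -keyint_min 30 -c:a copy output.mp4

With -g capping the distance between keyframes at 30 frames, a keyframe-snapping seek should land at most about a second (at 30 fps) from the requested time.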
This is going to be a problem for me very soon, so I'm very interested to see if you found a fix. Please post back if you have.