Best way to deliver dynamic video clips via web page... with interactivity - javascript

I have an animated character (half pig, half woman) who plays an integral role in my client's brand. She performs several movements/actions, such as walking/running/climbing in place, dancing in several different ways, and gesturing with facial and body movements.
On the website, she will be displayed floating near the user's scroll position and will perform different actions (i.e. play a specified segment of her motion) based on what the user is doing at the time. For example, while the user is scrolling, the page may play the climbing loop; when focus is given to a lead-capture form, the character starts dancing; and when the user is typing in the 'email' field of said form, we jump to the super-fun part of the dance... I'm not sure it will be exactly those, but something along those lines.
With the exception of the dance, which will run around 60 seconds in its non-looped state, everything else will only be a couple of seconds max. So I'm trying to figure out the most efficient way to rig this - by that I mean the best way to use JavaScript to control the character's actions based on the user's actions.
I'm considering using animated GIFs for everything except the dancing and just switching out their src when appropriate (e.g. $pigImg.src='pig-smile.gif' when she is clicked and $pigImg.src='pig-climb.gif' while scrolling...), then hiding the GIF (or displaying something blank) and streaming the video at the appropriate timestamp when it's time for her to dance.
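For what it's worth, the src-swapping part should only be a few lines. Here's a minimal sketch of the idea - the element ID, clip file names, and timings are placeholders, not anything from the actual site:

    // Hypothetical element and clip names - adjust to the real assets.
    const pigImg = document.getElementById('pig');

    const CLIPS = {
      idle:  'pig-idle.gif',
      climb: 'pig-climb.gif',
      smile: 'pig-smile.gif',
    };

    let scrollTimer = null;

    window.addEventListener('scroll', () => {
      pigImg.src = CLIPS.climb;
      // Fall back to the idle loop shortly after scrolling stops.
      clearTimeout(scrollTimer);
      scrollTimer = setTimeout(() => { pigImg.src = CLIPS.idle; }, 250);
    });

    pigImg.addEventListener('click', () => {
      pigImg.src = CLIPS.smile;
    });

One caveat: once a GIF is cached, some browsers resume it rather than restart it when the same src is reassigned, so a cache-busting query string (e.g. appending Date.now()) is a common workaround if a clip must always play from its first frame.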
I think I'll do all of this within a <canvas> element to maximize the flexibility:simplicity ratio - if for no other reason than to be able to use clips with a green-screen background to do things I haven't even planned yet.
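On the green-screen point: chroma-keying a video onto a canvas can be done frame by frame with getImageData/putImageData. A rough sketch, with element IDs and thresholds as placeholders you'd tune per clip:

    // Hypothetical elements: a playing <video> and an overlay <canvas>.
    const video = document.getElementById('pig-video');
    const canvas = document.getElementById('pig-canvas');
    const ctx = canvas.getContext('2d');

    function drawFrame() {
      if (video.paused || video.ended) return;
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
      const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
      const d = frame.data;
      for (let i = 0; i < d.length; i += 4) {
        const r = d[i], g = d[i + 1], b = d[i + 2];
        // Treat strongly green pixels as background: zero their alpha.
        if (g > 100 && g > r * 1.4 && g > b * 1.4) {
          d[i + 3] = 0;
        }
      }
      ctx.putImageData(frame, 0, 0);
      requestAnimationFrame(drawFrame);
    }

    video.addEventListener('play', () => requestAnimationFrame(drawFrame));

Two obstacles to plan around with the canvas route: the video must be same-origin (or served with CORS headers), or getImageData will throw a security error on the tainted canvas; and drawImage of an animated GIF only draws its first frame, so the GIF clips would have to remain <img> elements layered over the canvas (or be converted to sprite sheets or short videos) rather than drawn into it.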
I know this is a pretty broad intro, so I'll try to focus this post on one question: are there any potential obstacles I need to consider with my <canvas> plus mixed-video-and-GIFs approach? I know that switching between the two media types may cause some mis-registration issues (i.e. the character not lining up 100% on the nose).
I'm sure this will end up being FAR more complex in reality than it is in my head right now, so I guess making sure I'm not dreaming up something filled with technical holes I'm unaware of is a good starting point.

Related

How could I improve performance when repeatedly updating an SVG DOM in React JS?

Last year I tried to learn a bit of React JS for the project you can find here. I apologize for my rather vague/imprecise description below, but I'm by no means well versed in this.
Basically, there is a single <svg> tag, which contains a number of paths etc. as created by the user. The problem I have is that things become very slow as the number of paths grows. To my current understanding, this is because the entire SVG DOM gets updated repeatedly during user interactions that involve dragging the mouse or using the mouse wheel.
This holds true, particularly, for two user interactions:
a) Panning - all paths are moved at the same time. I think one might circumvent this by taking a snapshot image first and moving that around instead (see the sketch below). However, that's not a solution for the other user interaction, which is:
b) Expanding/collapsing paths - here, the coordinates of some of each path's points are modified. Every path must be modified in a different way, but all of them must be modified at once, and this must happen repeatedly, because the interaction is controlled with the mouse wheel: changes happen gradually and the user needs immediate visual feedback as they happen.
Particularly for b), I see no alternative that could be expressed as a single transformation or anything of the sort.
After extensive research last year, I came to the conclusion that choosing SVG to display and modify a lot of things dynamically on screen was the wrong decision in the first place, but I realized it too late, so I gave up and haven't touched the project since. I'm pretty certain there is no way to deal with the low performance that builds upon what I already have, and I have no intention of starting this project from scratch with a completely different approach. Also, the reason I chose SVG was that it's easy to manipulate.
In summary, I'd basically like to get confirmation that there is no feasible way to rescue this project.
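For (a) at least, the usual mitigation is a single transform on a wrapping <g>, so panning touches one attribute instead of every path. A minimal sketch, with a hypothetical ID and without the React plumbing:

    // <g id="pan-group"> is assumed to wrap all user-created paths.
    const group = document.getElementById('pan-group');

    let panX = 0, panY = 0;

    function onDrag(dx, dy) {
      panX += dx;
      panY += dy;
      // One attribute write pans everything, instead of N path updates.
      group.setAttribute('transform', `translate(${panX} ${panY})`);
    }

(b) genuinely can't be a single transform, but the per-path writes can bypass React's reconciliation during the interaction: mutate the path attributes directly via refs inside requestAnimationFrame, then commit the final coordinates to state once the wheel interaction ends.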

How to check if a video is stabilized or not?

I was wondering how to detect camera motion in a YouTube video.
I want to read in a YouTube link, process the video, and tell the user whether it was filmed using a tripod or whether it was super shaky.
Does anyone know where I would even start? It might not even be possible.
Just spitballing here, but I'd start by capturing frames that are close together at various points throughout the video.
You would take the frames from each section and compare them to each other for variations in composition - I don't know the best way to go about that; I'd probably start with something like colour detection in various spots. Either way, start building a "difference score".
Once you've gone through the frames for each section you sampled, you'll have a "difference score", and you can then start trying to figure out the cut-off point for calling a video shaky.
You probably couldn't do this anywhere close to real time, so be prepared for a bit of a wait while the video processes.
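A rough sketch of what that difference score could look like in the browser: draw two nearby frames to a small canvas and average the per-pixel luminance differences (the downscaled size and the use of luminance are arbitrary choices):

    // Downscale heavily; coarse differences are all we need.
    const canvas = document.createElement('canvas');
    canvas.width = 64;
    canvas.height = 36;
    const ctx = canvas.getContext('2d');

    // Grab the current frame's pixels from a <video> element.
    function framePixels(video) {
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
      return ctx.getImageData(0, 0, canvas.width, canvas.height).data;
    }

    // Mean per-pixel luminance difference between two captured frames.
    function differenceScore(a, b) {
      let score = 0;
      for (let i = 0; i < a.length; i += 4) {
        const la = 0.299 * a[i] + 0.587 * a[i + 1] + 0.114 * a[i + 2];
        const lb = 0.299 * b[i] + 0.587 * b[i + 1] + 0.114 * b[i + 2];
        score += Math.abs(la - lb);
      }
      return score / (a.length / 4);
    }

Note that you can't read pixels out of an embedded YouTube player: you'd need the raw video file, same-origin or CORS-enabled, which in practice likely means downloading and processing it server-side.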
Take some images every second or so (avoid sampling at a fixed frequency, because the footage itself may contain periodic motion - e.g. if we are on a boat filming during high waves).
Then you can convert the frames to black and white (not grey levels) and compare them, for example using the position of the minority colour (though that alone isn't going to work well).
The usual approach is edge detection - http://fr.mathworks.com/discovery/edge-detection.html - and comparing some of the edges between frames, to see which parts belong to the scene and move together and which do not. You must find "interest points" and calculate the vector each one travels between two frames. Some of these vectors will move together: that's an object. Then you have to figure out which object is the scene itself.
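A very crude sketch of the interest-point/vector idea: block-match a single patch from the centre of one frame against a small search window in the next. The patch and search sizes are arbitrary starting points, a real implementation would track many patches rather than one, and the frame is assumed to be comfortably larger than patch plus search range:

    // frameA/frameB are ImageData.data arrays of the same width w.
    function patchDiff(a, b, w, ax, ay, bx, by, size) {
      let diff = 0;
      for (let y = 0; y < size; y++) {
        for (let x = 0; x < size; x++) {
          const ia = ((ay + y) * w + (ax + x)) * 4;
          const ib = ((by + y) * w + (bx + x)) * 4;
          diff += Math.abs(a[ia] - b[ib]); // red channel only, for brevity
        }
      }
      return diff;
    }

    // Find the displacement that best aligns a central patch of frame A
    // with frame B; a large, frame-wide displacement suggests shake.
    function motionVector(frameA, frameB, w, h) {
      const size = 16, range = 8;
      const ax = (w - size) >> 1, ay = (h - size) >> 1;
      let best = { dx: 0, dy: 0, diff: Infinity };
      for (let dy = -range; dy <= range; dy++) {
        for (let dx = -range; dx <= range; dx++) {
          const d = patchDiff(frameA, frameB, w, ax, ay, ax + dx, ay + dy, size);
          if (d < best.diff) best = { dx, dy, diff: d };
        }
      }
      return best;
    }

If a large displacement between consecutive frames is shared by the whole frame, that's the signature of camera shake rather than subject motion: moving objects shift only their own patches, while shake shifts everything at once.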

Click through photo wall, with perspective

So I'm building a portfolio and sales website for a painter (my wife) based on WooCommerce (WordPress). This is a side project that I have plenty of time to finish. I want to build a live/moving photo wall, with perspective. The following photo will give you a (albeit very rough) idea.
Basically, I want to start off with 16 images (the number is actually arbitrary), apply perspective to them, and allow the visitor to click any of the pics and go to that image's associated page. Then, after a given time, I want new photos to show up.
I'm not particularly concerned whether I flip these pics randomly, like tiles, to introduce new ones, OR whether a column slides off and a new column is added (i.e. the adding of 17-20 in my picture). This is a semantic difference in the way I build this code and isn't part of my question (I don't think). All of the original pictures will be square and will be uploaded by the user, whom we assume has novice/intermediate computer experience.
So my question is about the approach. Do I:
Make my wall script (likely using jQuery and HTML <map> and <area>) take care of the flipping and linking, with the perspective and scaling done and cached on the back end.
That is, run every image uploaded to the server for the photo wall through an ImageMagick script that transforms it (i.e. applies perspective at the largest wall size) and then scales it down for the other columns, using a naming convention like: originalthumbnail_marilyn.png, perspective0_marilyn.png, perspective1_marilyn.png, etc. (with the number denoting the column and its scaled size). This will be harder on bandwidth (or maybe not, if compressed correctly) and easiest on the user's hardware (assuming non-mobile).
Use JavaScript & CSS (and possibly HTML5) to do everything: load the images into <div>s, use CSS3 skew/perspective transforms on them, and use JS to flip/move the tiles (I suppose I could do that with CSS too) - see the sketch after these options. I feel that this option looks the worst, because CSS clips horribly when using the transform property (on my browser, FF 30; I also made a quick demo at http://jsbin.com/febatohi/2/edit). It also requires the user's hardware to be able to handle all of the transforms, which is not always appreciated online. Maybe there is a way to handle this with a JS library I'm not aware of.
Use Flash. This is my least desirable option. It requires that I either not build this myself and pay someone else (pfft!), or that I acquire and learn Flash from Adobe (I said time wasn't an issue, but patience can be). However, it could produce the best-looking result, as I have seen things similar to this done in Flash. It's also a middle ground on hardware and bandwidth, but to me the most time-consuming, and it limits the wall to those browsers and users that have Flash (though I feel those without it are only a small percentage of users).
Other suggestions?
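For what it's worth, here is a rough sketch of the CSS option above, with the perspective applied in the browser instead of pre-rendered; the container ID, tile class, data attribute, and all the numbers are placeholders to tune:

    // Hypothetical markup: <div id="photo-wall"><img class="tile" data-href="..."> ...</div>
    const wall = document.getElementById('photo-wall');
    wall.style.perspective = '900px'; // shared vanishing point for all tiles

    document.querySelectorAll('#photo-wall .tile').forEach((tile, i) => {
      const column = i % 4;
      // Rotate each tile around Y; deeper columns sit further back.
      tile.style.transform = `rotateY(32deg) translateZ(${-60 * column}px)`;
      tile.addEventListener('click', () => {
        window.location.href = tile.dataset.href; // per-image page, set in markup
      });
    });

Setting perspective on the container rather than on each tile gives every tile a shared vanishing point, which is what makes the wall read as one receding plane instead of sixteen independently skewed images. (The clipping mentioned in the CSS option was largely a quirk of older engines; it's worth re-testing in a current browser.)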

In JavaScript, is it possible to use audio input as an event listener? (Idea construction)

There is an idea I have been toying with for the past few weeks. I am extremely serious about realising this concept, but I totally lack any know-how about the implementation. I have some thoughts, which I'll share as I explain what the idea is.
We have websites. Most of them are responsive.
What is responsive web design?
By and large, responsive web design means that design and development should respond to the user's behaviour and environment based on screen size, platform and orientation. If I change my window size, my website should change its dimensions accordingly.
If I scale some object on the screen, then the website should rearrange/ rescale accordingly.
This is good, but nothing exciting (nowadays!).
I am still so damnfully limited by the screen, and by whatever happens inside it, that what I do outside of it remains very external and not seamless. Apart from my contact with the mouse/keyboard, I don't know any other way to communicate with the things that happen inside the screen. My screen experience is still a very external affair, not seamless with my cognition. I think this is also a reason why my computer always remains a computer and does not behave like a natural extension of the human body.
I was toying with an idea which I have no clue how to realize. I have a basic concept, but I need some technical/design help from those who are as fascinated by this as I am.
The idea is simple: Make the screen more responsive, but this time without use of a mouse or any such input device. All laptops and most desktops have a microphone. How about using this small device for input?
Let me explain:
Imagine a website in which screen icons repopulate when you blow a whiff onto the screen, or the song on your favourite playlist changes when you whistle at the screen.
Say you have an animated tree on the screen. That tree sheds its leaves when you blow air at it, and the shedding depends on how fast you blow. Getting a small idea?
Let me add some graphics (see the hyperlink after this paragraph) which I think will make it clearer. I plan to make a website/API in which there is a person with long hair staring at you. If you blow air from the right side of your screen, her hair moves to the left; if you blow air from the left, her hair blows to the right; if you blow faintly, her hair scatters faintly. Some naughty stuff: say you whistle - the character on the screen winks, or throws a disgusted expression, whatever.
The whole concept is that every element of the web should have a direct relation with the user sitting outside the screen. It gives a whole lot of realism to the website if things like my whistle, whiff, or even my sneeze can do something to it - I am not tied to the mouse or the keyboard for my response to be noted. Doesn't that remove a hell of a lot of cognitive load from the user?
See this image: http://imgur.com/mg4Whua
Now coming to the technical aspect that I need guidance on.
If I were building a regular responsive website in JavaScript, I'd use addEventListener("click", animate) or addEventListener("resize", animate) - something like that. Here, I want my event to be driven by the audio input coming from the microphone. Also, I need to know where the audio is originating from, so that I can decide which side the hair must fall to and play that animation.
So across the microphone's 180/360-degree span, I need to capture not just the audio but also its angle, so that the right animation can be played. It would be a crashing failure if the same animation played wherever I blew air from; it needs that element of realism.
I have asked around, and some people suggested I try WebRTC in HTML5. I am still checking whether that works, but otherwise, are there any more options? I guess Processing is one - has anyone worked with its audio features?
I want to build a simple prototype first before I delve into the immense possibilities this idea could have. But if you have some really awesome thing in mind, please let me know about it. Exciting ideas are one thing, and exciting implementation totally another. We want both.
Are there such websites already? Is any work happening on this front?
Any small guidance counts!
There are plenty of ways to create your own events. Most libraries have some built-in way of doing so. Basically you're talking about the observer pattern and here's a nice article to explain it in greater detail: https://dottedsquirrel.com/javascript/observer-pattern/
As far as listening to audio goes: put an AnalyserNode on the input signal, add some ingenious code to determine whether the sound is the one you want to listen to, and firing the event is then a piece of cake using the aforementioned custom events.
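A rough sketch combining those two suggestions - an AnalyserNode polling the microphone level and dispatching a custom "blow" event past a threshold. The threshold and event name are arbitrary placeholders, and telling a blow from a whistle would need smarter spectral analysis than an average level:

    const audioCtx = new AudioContext();

    navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
      const source = audioCtx.createMediaStreamSource(stream);
      const analyser = audioCtx.createAnalyser();
      analyser.fftSize = 256;
      source.connect(analyser);

      const data = new Uint8Array(analyser.frequencyBinCount);

      function poll() {
        analyser.getByteFrequencyData(data);
        const level = data.reduce((sum, v) => sum + v, 0) / data.length;
        if (level > 90) { // crude "blow" threshold; tune by experiment
          document.dispatchEvent(new CustomEvent('blow', { detail: { level } }));
        }
        requestAnimationFrame(poll);
      }
      poll();
    });

    // Elsewhere, listen exactly like any built-in event:
    document.addEventListener('blow', (e) => {
      console.log('blow detected, level', e.detail.level);
    });

As a starting point for the smarter analysis: a blow shows up as broadband noise spread across many frequency bins, while a whistle is a narrow spike concentrated in a few adjacent bins.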
But before diving into those - determining the angle of the sound? I don't think that's feasible here. With a stereo setup you might be able to tell roughly which side the sound originates from, but that certainly won't give you an angle; you'd need something rather more ingenious than a simple stereo mic setup to determine that.

Slider that also magnifies around the cursor

It can be difficult to use (webpage) sliders that cover a large range with fine granularity. On the one hand, it is easy to move across the range. On the other hand, it is difficult to locate the exact point one wants, assuming a fine enough granularity.
I was thinking that a magnify effect around the cursor could solve this problem (assuming the problem really exists).
I looked for existing solutions or ideas via google, but couldn't find anything.
Any suggestions here?
I doubt this is what you're looking for, but... within Mac OS X, holding down the Control key and moving the scroll wheel will zoom in and out.
I'm having trouble thinking of a scenario where you'd have so much data that scrolling of this nature becomes a problem. In almost all scenarios it makes more sense to chunk the data up or reduce it down in some other way.
About the only thing that makes sense is the seek bar/scrubber on a video player. If your player is 400px wide with a 360px-wide scrubber, but the video is an hour long, the best granularity you'll get is 10 seconds per step (with the step size being 1 pixel).
If that isn't enough granularity, then it's possible you'll need to augment your scrubber with another UI convention - which could be a magnifier, but it could also be other things, like a "jump to point" text field that allows the user to enter a time and seek to that exact position.
It sounds like you're going for something (visually) like the OS X dock. This is called a fish-eye effect. There's a jQuery plugin for a fish-eye menu which you may be able to adapt and merge with a slider to get the functionality you're looking for.
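And a rough sketch of that fish-eye idea applied directly to a slider's tick marks, without the plugin - each tick scales up as the cursor approaches (the selectors and the falloff curve are placeholders):

    // Hypothetical markup: a track element containing .tick children.
    const track = document.getElementById('slider-track');
    const ticks = track.querySelectorAll('.tick');

    track.addEventListener('mousemove', (e) => {
      ticks.forEach((tick) => {
        const rect = tick.getBoundingClientRect();
        const center = rect.left + rect.width / 2;
        const dist = Math.abs(e.clientX - center);
        // Magnify nearby ticks; the effect falls off over ~80px.
        const scale = 1 + Math.max(0, 1 - dist / 80) * 1.5;
        tick.style.transform = `scaleY(${scale})`;
      });
    });

    track.addEventListener('mouseleave', () => {
      ticks.forEach((tick) => { tick.style.transform = 'scaleY(1)'; });
    });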
