It can be difficult to use (webpage) sliders that cover a large range with fine granularity. On the one hand, it is easy to move across the range; on the other, it is difficult to land on the exact point one wants, assuming the granularity is fine enough.
I was thinking that a magnify effect around the cursor could solve this problem (assuming the problem really exists).
I looked for existing solutions or ideas via Google, but couldn't find anything.
Any suggestions here?
I doubt this is what you're looking for, but... in Mac OS X, holding down the Control key and moving the scroll wheel zooms in and out.
I'm having trouble thinking of a scenario where you'd have so much data that this kind of scrolling becomes a problem. In almost all scenarios it makes more sense to chunk the data up or reduce it in some other way.
About the only thing that makes sense is the seek-bar/scrubber on a video player. If your player is 400px wide with a 360px wide scrubber, but the video is an hour long, the best granularity you'll get is 10 seconds-per-step (with the step-size being 1 pixel).
If that isn't enough granularity, then it's possible you'll need to augment your scrubber with another UI convention - which could be a magnifier - but it could also be other things, like a "jump to point" text field that would allow the user to enter a time and seek to that exact position.
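For what it's worth, here's a minimal sketch of that jump-to-point idea, assuming a hypothetical <video id="player"> element and an <input id="jump"> field (neither is from the original post):

// Seek the video to whatever "mm:ss" time the user types in.
var player = document.getElementById('player');
document.getElementById('jump').addEventListener('change', function (e) {
    var parts = e.target.value.split(':');                        // e.g. "12:34"
    var seconds = Number(parts[0]) * 60 + Number(parts[1] || 0);
    player.currentTime = seconds;                                 // exact seek, no pixel rounding
});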
It sounds like you're going for something (visually) like the OS X dock. This is called a fish-eye effect. There's a jQuery plugin for a fish-eye menu which you may be able to adapt and merge with a slider to give you the functionality that you're looking for.
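As a rough sketch of how the fish-eye idea could feel on a slider, here's one way to scale tick labels near the cursor; the #slider container and .tick elements are hypothetical markup, not from the plugin:

var slider = document.getElementById('slider');
var ticks = slider.querySelectorAll('.tick');
slider.addEventListener('mousemove', function (e) {
    for (var i = 0; i < ticks.length; i++) {
        var rect = ticks[i].getBoundingClientRect();
        var distance = Math.abs(e.clientX - (rect.left + rect.width / 2));
        // Scale from 2x directly under the cursor down to 1x at 100px away.
        var scale = 1 + Math.max(0, 1 - distance / 100);
        ticks[i].style.transform = 'scale(' + scale + ')';
    }
});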
I have a set of elements, set next to each other in a row. The number, scale, etc. of these is dynamic. I would like them to pass from one side of the screen to the other in an infinite loop, so that as an element leaves one side it comes in again on the opposite side, like this:
Here is a Codepen illustrating the above example. Imagine the black box is the viewport, so you can't see outside of it.
What is the easiest way to implement this conveyor belt/treadmill approach?
I've tried several ways of implementing this but am stuck finding a reliable, smooth, and flexible solution to what seems like a very simple problem. I've hit a wall, how would you do this?
I'm just looking for the concept, library, etc.
Could the GreenSock library work well for this?
If this is too ambiguous could anyone point me toward a more appropriate place to ask?
Thanks.
I don't know what makes you say it "seems like a very simple problem", because (for me) it clearly isn't. Let's break it down:
Make the conveyor belt move (I'm assuming you move the belt container for this).
Trigger whenever an element has completely left the screen.
Move that element in the DOM to the other end of the belt, and simultaneously adjust the belt position so the change in the DOM is not visible in the animation, which should remain smooth.
This is how I'd go about it, but there's a chance the animation might stagger/flicker when the change in the DOM is made, especially if you have other animations running in the page at the same time. If this happens, you might want to clone elements instead of moving them, and only delete the originals after the rendering of the clone has finished. It might "seem" (sic) like the same thing, but the browser will do them one after the other instead of at the same time. It sometimes helps.
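For what it's worth, here's a minimal sketch of those three steps in plain JavaScript. It assumes a hypothetical #belt container whose children sit in a row with no gaps, and it uses the whole window as the viewport:

var belt = document.getElementById('belt');
var beltX = 0;
var speed = 1; // pixels per frame

function step() {
    beltX -= speed;
    var first = belt.firstElementChild;
    // Once the first item has fully left the viewport, move it to the end
    // and shift the belt right by its width so nothing visibly jumps.
    if (first.getBoundingClientRect().right < 0) {
        var width = first.offsetWidth;
        belt.appendChild(first);
        beltX += width;
    }
    belt.style.transform = 'translateX(' + beltX + 'px)';
    requestAnimationFrame(step);
}
requestAnimationFrame(step);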
I'm a curious guy by nature so I'm already planning on making a fiddle with this at the end of the day. If I find anything notable or if I come up with another approach I'll update.
There is an idea I have been toying with for the past few weeks. I am extremely serious about realising this concept, but I totally lack the know-how for the implementation. I have some thoughts, which I'll share as I explain what the idea is.
We have websites. Most of them are responsive.
What is responsive web design?
By and large, responsive web design means that design and development should respond to the user's behaviour and environment based on screen size, platform and orientation. If I change my window size, my website should change its dimensions accordingly.
If I scale some object on the screen, then the website should rearrange/ rescale accordingly.
This is good, but nothing exciting (nowadays!).
I am still so painfully limited by the screen, and by whatever happens inside it, that what I do outside it seems very external and not seamless. Apart from my contact with the mouse/keyboard, I do not know any other way to communicate with the things that happen inside the screen. My screen experience is still a very external affair, not seamless with my cognition. I think this is also a reason why my computer always remains a computer and does not behave like a natural extension of the human body.
I was toying with an idea which I have no clue how to realize. I have a basic idea, but I need some technical/design help from those who are as fascinated by this as I am.
The idea is simple: Make the screen more responsive, but this time without use of a mouse or any such input device. All laptops and most desktops have a microphone. How about using this small device for input?
Let me explain:
Imagine a website in which the screen icons repopulate when you blow a whiff of air at the screen. The song on your favourite playlist changes when you whistle at the screen.
Say you have an animated tree on the screen. That tree sheds its leaves when you blow air at it. The shedding depends on how hard you blow. Getting the idea?
Let me put up some graphics (see the hyperlink after this paragraph) which I think will make it clearer. I plan to make a website/API in which there is a person with long hair staring at you. If you blow air from the right side of your screen, her hair moves to the left. If you blow air from the left, her hair blows to the right. If you blow faintly, her hair scatters only faintly. And some naughty stuff: say you whistle, and the character on the screen winks, or throws a disgusted expression, whatever.
The whole concept is that every element of the web must have a direct relation with the user who is sitting outside the screen. It gives a whole lot of realism to the website architecture if things like my whistle, whiff or say even my sneeze can do something to the website! I am not tied to the mouse or the keyboard for my response to be noted. Doesn’t that reduce a hell of a lot of cognitive load on the user?
See this image: http://imgur.com/mg4Whua
Now coming to the technical aspect that I need guidance on.
If I were building a regular responsive website in JavaScript, I'd use addEventListener("click", animate) or addEventListener("resize", animate) - something like that. Here I want my event to be the audio input coming from the microphone. I also need to know where the audio is originating from, so that I can decide which side the hair must fall and play that animation.
So within the microphone's 180/360-degree span, I need to capture not just the audio but also its angle, so that the right animation can be played. It would be a crashing failure if the same animation played wherever I blew air from. It needs to have that element of realism.
I have asked around, and some people suggested that I try HTML5's WebRTC. I am still checking whether that works, but are there any other options? I guess Processing is one. Has anyone worked with its audio features?
I want to build a simple prototype first before I delve into the immense possibilities this idea could have. But if you have some really awesome thing in mind, please let me know about it. Exciting ideas are one thing, and exciting implementation totally another. We want both.
Are there such websites already? Is any work happening on this front?
Any small guidance counts!
There are plenty of ways to create your own events. Most libraries have some built-in way of doing so. Basically you're talking about the observer pattern; here's a nice article explaining it in greater detail: https://dottedsquirrel.com/javascript/observer-pattern/
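As a minimal sketch of the custom-event half (the "whistle" event name is just illustrative):

// Anywhere in the page, subscribe to the custom event...
document.addEventListener('whistle', function (e) {
    console.log('whistle detected, strength:', e.detail.strength);
});

// ...and in the audio-analysis code, fire it once a whistle is recognized.
document.dispatchEvent(new CustomEvent('whistle', { detail: { strength: 0.8 } }));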
As far as listening to audio goes: run an AnalyserNode over the input signal, add some ingenious code to determine that the sound is the one you want to listen to, and firing the event becomes a piece of cake using the aforementioned custom events.
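Here's a rough sketch of that wiring, assuming you treat a "blow" as sustained broadband noise; the event name and the threshold of 60 are guesses you'd tune against a real microphone:

navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
    var audioCtx = new AudioContext();
    var analyser = audioCtx.createAnalyser();
    analyser.fftSize = 256;
    audioCtx.createMediaStreamSource(stream).connect(analyser);
    var data = new Uint8Array(analyser.frequencyBinCount);

    function poll() {
        analyser.getByteFrequencyData(data);
        var sum = 0;
        for (var i = 0; i < data.length; i++) sum += data[i];
        var average = sum / data.length;
        if (average > 60) {   // loud, noisy input: probably someone blowing
            document.dispatchEvent(new CustomEvent('blow', { detail: { strength: average } }));
        }
        requestAnimationFrame(poll);
    }
    poll();
});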
But, before diving into those: determining the angle of the sound? I do not think that is even possible. With a stereo microphone you might be able to tell roughly whether the sound comes from the left or the right, but that won't give you a real angle. I think you'd need something rather more ingenious than a simple stereo mic setup to determine the angle.
I don't know if this is possible, but the client is adamant. He wants his navigation bar contents to be aligned along a "Fibonacci Spiral".
This thing:
I don't even think CSS3 rotation works in every browser yet, and I have no clue whether any scripting language would give me even the faintest possibility of creating a custom, curving track for objects to follow instead of the standard (and pretty much only) horizontal and vertical alignment methods. However, I truly do embrace a good challenge. Backing down without an effort is hardly doing a good job.
If any of you know of any possibility, however remote, of achieving this effect, I would be amazed. Thank you for your time! If you think this is truly impossible in a current web browser, say so!
Interesting idea anyways. Hope you can make your client happy.
I thought I might chip in with something.
Found a jQuery-plugin that bends text along a curve:
http://tympanus.net/Development/Arctext/
Perhaps one could make a layout of square divs of diminishing size and specify a curve for each one?
Possible div-layout if you turn it around: http://upload.wikimedia.org/wikipedia/commons/9/95/FibonacciBlocks.svg
The plugin specifies the curving from a radius-value and can curve upwards or downwards. It does not seem to be constructed for tilted curves, but that can perhaps be modified.
EDIT: I experimented a bit with the plugin, and I believe it certainly is possible to achieve the effect you need, although as far as I can tell you do have to know trigonometry quite well to make it work properly.
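To give a concrete idea, here's a hedged sketch that skips the plugin and places nav links along a golden (logarithmic) spiral with plain JavaScript; the #nav container and .item children are hypothetical markup:

// #nav needs position: relative for the absolute positioning below to work.
var items = document.querySelectorAll('#nav .item');
var phi = (1 + Math.sqrt(5)) / 2;
var b = Math.log(phi) / (Math.PI / 2);   // golden spiral growth rate
var a = 10;                              // starting radius in pixels
var centerX = 200, centerY = 200;        // spiral origin inside #nav

for (var i = 0; i < items.length; i++) {
    var theta = i * 0.8;                 // angular step per item (tune to taste)
    var r = a * Math.exp(b * theta);     // r = a * e^(b * theta)
    items[i].style.position = 'absolute';
    items[i].style.left = (centerX + r * Math.cos(theta)) + 'px';
    items[i].style.top = (centerY + r * Math.sin(theta)) + 'px';
}

The growth rate b = ln(phi)/(pi/2) is what makes the spiral "golden": the radius grows by a factor of phi every quarter turn.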
Another option, and the easiest way I can think of so far, is to make use of an old classic: the image map!
http://en.wikipedia.org/wiki/Image_map
Just photoshop a nice spiral image however you like and use image mapping to set linkable areas. This can maybe be of interest: http://www.outsharked.com/imagemapster/default.aspx?demos.html
I'm new to HTML5/Canvas/Game programming, but have been tinkering around with it after reading a couple of books. I THINK I have a fairly good idea of how things work out. This question asks several smaller questions, but in general is basically a "structural approach" question. I'm not expecting verbose responses, but hopefully small pointers here and there :) Here is a link to a non-scrolling, and currently rather boring Super Mario World.
Super Mario World Test
NOTE: Controls are Left/Right and Spacebar to jump. This is only set up for Firefox right now, as I'm just learning.
Did I Do Something Wrong at This Point?
Currently I've just focused on how Mario runs and jumps, and think that I've gotten it down fairly okay. The coin box doesn't do anything and the background is just an image loaded in for looks. Here's my approach, please let me know if there is anything entirely wrong with this:
Allows Mario to jump by acting on two Y velocities (gravity and jump variables) - a simplified sketch follows this list
Allows Mario to run by acting on one X velocity (left or right "friction" + acceleration)
Sprites are swapped and positioned according to keypress/keydown
I'm not sure if this is right, but I'm using a constructor function to build an object, then inside the main animation loop I'm calling that object's prototype draw function to update all variables and redraw it.
I'm clearing the entire canvas each frame
Should I be splitting this into more than just a draw function, like Mario.move()?
I've set up a GroundLevel and a JumpLevel variable to create two planes of gameplay. The JumpLevel is set up to allow controlling how high Mario can jump on the fly. The two planes would allow the ground to rise like a hill - keeping the point at which gravity overrules Mario's jumping force at the same distance from the ground.
For clarity sake, everything is separated into different JS files, but would obviously consolidate.
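Roughly, the jump logic I described boils down to something like this (a simplified sketch; the names are illustrative, not my actual code):

var gravity = 0.5;          // downward acceleration per frame
var jumpVelocity = -10;     // initial upward velocity (canvas y grows downward)
var groundLevel = 400;      // y coordinate of the ground plane

function updateMario(mario, jumpPressed) {
    if (jumpPressed && mario.onGround) {
        mario.vy = jumpVelocity;     // kick off the jump
        mario.onGround = false;
    }
    mario.vy += gravity;             // gravity always drags the velocity back down
    mario.y += mario.vy;
    if (mario.y >= groundLevel) {    // landed
        mario.y = groundLevel;
        mario.vy = 0;
        mario.onGround = true;
    }
}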
Moving Forward:
Now that I've finished setting up how Mario moves around (there are a couple of other minor things I might add, like mushroom up/down and shooting fireballs, but I think I can figure those out), I'm really lost when it comes to visualizing the following and how HTML5/Canvas can handle it:
Scrolling background: I've tried setting up ground tiles and using screen wrapping, but that seems to cause a lot of uneven issues since I was moving the tiles in the opposite direction. Unfortunately, since I'm trying to account for acceleration, this threw off the count and was causing gaps in the ground, so I ditched the idea. Would a DIV beneath the canvas with a large background image be the best solution?
Enemies: Would I create enemies the same way and run a loop for collision detection on every enemy during each frame?
Background Boxes: I'm trying to allow Mario to stand on the boxes in the background, but am unsure how to approach this. I currently have boundaries set up for Mario to stay on the canvas; do I keep expanding these conditions to set up different boundaries based on the boxes? I can see that having several boxes on the screen and doing it this way would get kind of crazy, especially if I'd be doing the same hit testing for enemies. I know I'm missing something here...
Level Movement: This is somewhat related. When the Right key is pressed, basically everything in the level needs to move to the left. Would I need to track the positions of everything that could touch Mario (boxes for him to stand on and enemies for him to collide with) during every animation frame? That seems like it would get kind of inefficient.
Thanks to all! I'll keep this updated with results/solutions :)
Wow, okay. I really like your question because you've obviously done a lot of thinking on this, but partially because of that it's incredibly broad and conversational. You'd do better to find a forum to ask this question.
...That being said, I'm gonna answer the handful of points I'm qualified to, in no particular order. :)
Level Movement: That's a weird (read: inefficient) way to do it. I wouldn't do any calculations based on on-screen positions: track a canonical, camera-agnostic set of coordinates for everything in your level and update the visuals to match. This will stop you from running into weird niggling problems where framerate impacts what you can and can't walk through, or where slower computers let Mario run through enemies without being damaged. Tracking positions this way will incidentally fix a lot of your other problems.
You should absolutely be splitting this into multiple functions. Having movement code and rendering code in the same place is going to screw you, particularly by interacting malignantly with your update/refresh rate. It essentially means that every time the player does a tricky jump, the game does more updates than usual, which makes animation, hit detection, etc. much less consistent.
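For illustration, the usual fix is a fixed-timestep loop; update() and render() here stand in for your own hypothetical functions:

var STEP = 1000 / 60;   // simulate at a steady 60 updates per second
var last = performance.now();
var accumulator = 0;

function frame(now) {
    accumulator += now - last;
    last = now;
    while (accumulator >= STEP) {
        update(STEP);    // move Mario, apply gravity, check collisions
        accumulator -= STEP;
    }
    render();            // draw the current state once per animation frame
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);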
Enemies: I'd suggest rolling this in with everything else. Do one hit-detection pass against everything, and if you hit something check to see what it was. You could try to optimize this by only checking any given entity against objects within 100 pixels of itself, but if you do it this way you'll need to run separate collision detection events for every enemy. Letting the enemies clip through each other would be computationally cheaper.
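A minimal sketch of that single pass, assuming every entity exposes x/y/width/height and a hypothetical handleCollision():

// Standard axis-aligned bounding-box overlap test.
function intersects(a, b) {
    return a.x < b.x + b.width && a.x + a.width > b.x &&
           a.y < b.y + b.height && a.y + a.height > b.y;
}

// One pass: test Mario against everything, then branch on what was hit.
for (var i = 0; i < entities.length; i++) {
    if (intersects(mario, entities[i])) {
        handleCollision(mario, entities[i]);
    }
}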
Edit: I'd like to clarify about my first point on 'level movement.' Essentially, what you don't want to do is move every entity onscreen every time the camera does, or to store all entity locations as offsets from the camera location (in which case you're still effectively having to move everything, every time the camera moves.)
Your ideal approach is to store your enemy, block, terrain locations with X/Y coordinates that are offset from the absolute top-left of the level (at the very beginning.) In order to render a frame, you'd do essentially this: (pseudocode because we're talking about a hypothetical level format!)
function getVisible(cameraX, width, levelEntities) {
    var visibleElements = [];
    for (var i = 0; i < levelEntities.length; i++) {
        // Keep any entity whose x coordinate falls inside the camera window.
        if (levelEntities[i].x > cameraX && levelEntities[i].x < cameraX + width) {
            visibleElements.push(levelEntities[i]);
        }
    }
    return visibleElements;
}
Boom, you've got everything that should be inside the window. Now you subtract the camera's x offset from the entity's x location and ZAP, you've got its position on the canvas. Pose as a team, 'cause things just got real.
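In other words, something like this (drawSprite() and cameraX are hypothetical):

var visible = getVisible(cameraX, canvas.width, levelEntities);
for (var i = 0; i < visible.length; i++) {
    // Canvas position = level position minus the camera's x offset.
    drawSprite(visible[i], visible[i].x - cameraX, visible[i].y);
}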
You'll note that I'm not bothering to cull on the Y axis. This can be rectified by extrapolation, which I'm guessing you can handle because you've made it this far. This will be necessary if you want to do any Mario-style vertical exploration.
Yes, I know my pseudocode looks like C# and JavaScript's unholy lovechild. I'm sorry, that's just how I roll at 11:30 at night. ;)
I'm trying to build something in HTML5/Canvas to allow tracing over an image and alert if deviating from a predefined path.
I've figured out how to load an external image into the canvas, and allow mousedown/mousemovement events over it to draw over the image, but what I'm having trouble getting my head around is comparing the two.
The images are all simple black-on-white outlines, so from what I can tell a getPixel-style check can tell whether there is black underneath what has been drawn, or underneath where the mouse currently is.
I could do it with just the mouse position, but that would require defining the path of every image outline (and there are a fair number, hence ideally wanting to do it by analysing the underlying image).
I've been told that it's possible with Flash, but I would like to avoid that if possible so that compatibility with non-Flash platforms (namely the iPad) is maintained, as they are the primary target for the page.
Any insight or assistance would be appreciated!
I think you already touched upon the most straight-forward approach to solving this.
Given a black and white image on a canvas, you can attach a mousemove event handler to the element to track where the cursor is. If the user is holding left-mouse down, you want to determine whether or not they are currently tracing the pre-defined path. To make things less annoying for the user, I would approach this part of the problem by sampling a small window of pixels. Something around 9x9 pixels would probably be a good size. Note that you want your window size to be odd in both dimensions so that you have a symmetric sampling in both directions.
Using the location of the cursor, call getImageData() on the canvas. Your function call would look something like this: getImageData(center_x - Math.floor(window_size / 2), center_y - Math.floor(window_size / 2), window_size, window_size) so that you get a sample window of pixels with the center right over the cursor. From there, you could do a simple check to see if any non-white pixels are within the window, or you could be more strict and require a certain number of non-white pixels to declare the user on the path.
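A minimal sketch of that sampling check, assuming a 2D context ctx and an odd windowSize; the threshold of 200 for "non-white" is a guess to tune:

function isOnPath(ctx, centerX, centerY, windowSize) {
    var half = Math.floor(windowSize / 2);
    var data = ctx.getImageData(centerX - half, centerY - half,
                                windowSize, windowSize).data;
    // Pixels come back as flat RGBA values, four bytes per pixel.
    for (var i = 0; i < data.length; i += 4) {
        if (data[i] < 200 && data[i + 1] < 200 && data[i + 2] < 200) {
            return true;   // found a dark (path) pixel in the window
        }
    }
    return false;
}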
The key to making this work well, I think, is making sure the user doesn't receive negative feedback when they deviate the tiniest bit from the path (unless that's what you want). At that point you run the risk of making the user annoyed and frustrated.
Ultimately it comes down to one of two approaches. Either you load the actual vector path for the application to compare the user's cursor to (i.e. do point-in-path checks), or you sample pixel data from the image. If you don't require the perfect accuracy of point-in-path checking, I think pixel sampling should work fine.
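If you do go the vector route, here's a hedged sketch using the canvas 2D API's own hit testing; the path string is purely illustrative:

var path = new Path2D('M 10 10 C 40 80, 120 80, 150 10');  // the outline to trace
ctx.lineWidth = 12;   // tolerance: how far off the line still counts as "on"
canvas.addEventListener('mousemove', function (e) {
    if (!ctx.isPointInStroke(path, e.offsetX, e.offsetY)) {
        // the user has deviated from the path: warn them here
    }
});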
Edit: I just re-read your question and realized that, based on your reference to getPixel(), you might be using WebGL for this. The approach for WebGL would be the same, except you would of course be using different functions. I don't think you need to require WebGL, however, as a 2D context should give you enough flexibility (unless the app is more involved than it seems).