I set up an example jsfiddle to illustrate this with proper assets.
When your character is moving and the camera starts to pan, you will notice the background has a small "jitterness". This can be disabled by setting game.camera.roundPx to true.
However, if that is left disabled and you move the character, the character itself starts to jitter. Some things I've found in this adventure:
This only happens when moving with body.velocity.x, under both P2 and Arcade physics.
If you move the character with body.x or just x, it's absolutely fine.
If you remove the tilemap texture you can literally see the jitter happen before your eyes when moving. Example here -- Make sure you move far enough for the camera to pan.
I've also tried game.renderer.renderSession.roundPixels = false; to no avail.
This happens under both CANVAS and WEBGL render modes.
Great question! These jitters are caused by subpixel rendering.
Phaser can use non-integer values for the positions of sprites when game.camera.roundPx is false. According to the documentation for roundPx:
If a Camera has roundPx set to true it will call view.floor as part of its update loop, keeping its boundary to integer values. Set this to false to disable this from happening.
view.floor:
Runs Math.floor() on both the x and y values of this Rectangle.
When drawing to non-integer positions, the browser tries to interpolate to make it appear as if a pixel is placed between two pixels. This can cause a single pixel of your source image to be displayed as two physical pixels. The switching between these two states when the camera pans is what is causing the jittering. Here's an example:
That's why setting game.camera.roundPx = true fixes it.
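For reference, a minimal sketch of where that one-line fix sits (everything apart from the roundPx line is just the usual Phaser 2 setup and purely illustrative):

function create() {
    // ... usual setup: load the tilemap, create the player, game.camera.follow(player) ...
    game.camera.roundPx = true; // floors camera.view.x/y each update, so nothing is drawn at subpixel positions
}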
Using pixi.js to make a 2D game, when the main player dies (because he hit a fox, a car or something else), I would like to zoom in on it. The desired effect I have in mind is like the end of a level in Super Meat Boy.
Check it out, it's between 19:20 and 19:22: http://youtu.be/3VKWn41Bqss?t=19m20s
I have a worldLayer variable that is a DisplayObjectContainer. This layer holds every element of the world, so I'm scaling it, like so, every frame:
this.worldLayer.scale.x += GAME.config.dead_zoom_speed;
this.worldLayer.scale.y += GAME.config.dead_zoom_speed;
As you can imagine, this piece of code zooms toward the coordinate (0,0), which is the top left corner of the screen. But of course, I'd like to zoom in on the player, which is not at (0,0). It can be anywhere, actually.
Zooming in on a specific point would be doable if the DisplayObjectContainer had the anchor property, but it doesn't. This code, for example, with another lib than pixi, uses this technique: it modifies the anchor (called origin in that fiddle).
So my conclusion is that, for zooming in on a specific point, I have to use the position of the worldLayer. So for every frame:
scale the world layer
calculate the position of the scaled worldLayer so that the viewport (with a fixed size, right) is centered on the main character.
This last point is where I'm stuck. How would you go about calculating the position? My mind can't get around it.
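To make it concrete, here is roughly the per-frame update I'm picturing; the position lines are the part I can't verify (viewportWidth, viewportHeight and player.position are placeholders for however the game exposes those values, and I'm assuming the worldLayer's own origin stays at (0,0)):

this.worldLayer.scale.x += GAME.config.dead_zoom_speed;
this.worldLayer.scale.y += GAME.config.dead_zoom_speed;
// keep the player in the middle of the fixed-size viewport -- my guess is
// screen center minus the player's world position times the current scale
this.worldLayer.position.x = viewportWidth / 2 - player.position.x * this.worldLayer.scale.x;
this.worldLayer.position.y = viewportHeight / 2 - player.position.y * this.worldLayer.scale.y;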
With this comes the problem of centering without showing anything outside the world canvas. For instance, if the main character is near the right edge of the screen, it could not be totally centered in the viewport. But that's kind of another problem.
So I'd like to discuss this with you guys: have you ever had to implement such a feature? How? Am I missing something in the pixi.js API?
I was asking myself if there is a "more accurate mousemove" in JavaScript.
The normal event is fired when the mouse moves, but it's possible that it "jumps" over some pixels, so my question is whether there's a way to detect every pixel that was crossed.
An application that could use this would be something like a paint program where you want to draw something (e.g. a stroke).
The mouse does not cross every pixel, though. Especially with touchscreens.
You can see this in Microsoft Paint. If you drag the mouse back and forth while drawing, you'll see that it is just guessing and drawing lines in between the points the OS is sending it.
If you need to handle every pixel, then take the last pixel you saw, and the current pixel, and have your code find all of the pixels that fall on a line between the 2 points.
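A minimal sketch of that interpolation, using classic Bresenham line stepping (plotPixel is a placeholder for whatever your drawing code does with each pixel):

// Visit every pixel on the line from (x0, y0) to (x1, y1) -- Bresenham's line algorithm.
function plotLine(x0, y0, x1, y1, plotPixel) {
    var dx = Math.abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    var dy = -Math.abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    var err = dx + dy;
    while (true) {
        plotPixel(x0, y0);
        if (x0 === x1 && y0 === y1) break;
        var e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; } // step in x
        if (e2 <= dx) { err += dx; y0 += sy; } // step in y
    }
}

In a mousemove handler you would call plotLine(lastX, lastY, e.offsetX, e.offsetY, drawDot) and then store the current coordinates as the new lastX/lastY.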
I have a big horizontal strip image in Photoshop which is made up of lots of smaller elements. The background is transparent and the strip goes from smaller elements (left) to bigger elements (right). My goal is to make this strip interactive to mouse events.
Each element is some kind of polygonal image which is trimmed left and right and then exported as a png. It is then imported into a canvas.
The problem is that I can put them side by side, but since they are not rectangles I need a way to calculate the offset made up by the transparent pixels on each side of each element, to make them stick together correctly... I am using KineticJS to get a precise hit area for each element... So maybe there is a way to do it automatically with KineticJS, or there is some kind of operation I could do using each image's data?
My problem illustrated:
Any ideas?
Also, I am doing this simply because I would prefer a precise mouseOver hit area on each item (rather than a simple rectangle) and would rather avoid calculating each offset manually... But maybe that's not worth it?!
OK, so you have yourself a custom shape you want to use; here is a tutorial for that: http://www.html5canvastutorials.com/kineticjs/html5-canvas-kineticjs-shape-tutorial/. The simplest thing you can do, and even that seems fairly long, is to calculate the bounding lines for that shape (two somewhat vertical lines and two somewhat horizontal lines). Then you test whether the right vertical line of shape one crosses the left vertical line of shape two; if it does, set the coordinates of the second image so the shapes sit flush against each other.
http://www.mathopenref.com/coordintersection.html
// line1: y = a*x + b,  line2: y = c*x + d  -- see the possible intersection tests linked above
if (/* ...intersection test... */) { // or just test if some coordinate is left of some other coordinate
    shape2.setX(shape1.getX() + shape1.getWidth()); // account for image width, so they don't overlap
    shape2.setY(shape1.getY()); // no need to account for height
}
UPDATE: This is a very rough solution to the core of the problem. The next step would be to do more fine-tuning dependent on each image.
http://jsfiddle.net/9jkr7/15/
If you want precise areas, use an image map. With some clever finagling and a blank image gif you should be able to have the background you want whenever you hover over any particular area of the image map (might require javascript).
The other option I can think of would be to use SVG itself, or one of the many libraries in existence, to build interactive vector graphics into your page.
You could also write a function that calculates the left-most, top-most, right-most, and bottom-most non-transparent pixel by looking at all of the pixels in the image data. Here's a tutorial on that:
http://www.html5canvastutorials.com/advanced/html5-canvas-get-image-data-tutorial/
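A rough sketch of that scan (plain canvas code, not KineticJS-specific; it assumes the image is same-origin, since getImageData won't work otherwise):

// Find the bounding box of the non-transparent pixels of an image.
function opaqueBounds(img) {
    var canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0);
    var data = ctx.getImageData(0, 0, img.width, img.height).data;
    var left = img.width, top = img.height, right = -1, bottom = -1;
    for (var y = 0; y < img.height; y++) {
        for (var x = 0; x < img.width; x++) {
            var alpha = data[(y * img.width + x) * 4 + 3]; // 4 bytes per pixel: r, g, b, a
            if (alpha > 0) {
                if (x < left) left = x;
                if (x > right) right = x;
                if (y < top) top = y;
                if (y > bottom) bottom = y;
            }
        }
    }
    return { left: left, top: top, right: right, bottom: bottom };
}

The left and right values give you the transparent offsets on each side of a trimmed PNG, which is exactly what you need to butt two elements up against each other.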
I have an element of given dimensions (say, 100x300 px) living in a container of the same height and variable width that I want to transform using rotateX around -webkit-transform-origin: top center; while picking the -webkit-perspective of the container so that it appears that the bottom line of the image stays where it is but only expands to fill the entire container.
Wow, that sounds confusing. Here's a picture:
So basically, I want to create a trapezoid with a fixed upper width and a variable lower width. I can't, however, quite figure out the math behind the relations... JavaScript welcome. The following example works IF the body is 600px wide: http://jsfiddle.net/24qrQ/
Now the task is to change the perspective and rotation continuously with the body width. Any ideas?
Okay, after a glass of wine the maths came back to me:
First, let's look at the perspective / rotation ratio. Viewed from the side, it looks like this:
The red element is rotated around its upper edge. If we project its lower edge onto the lower edge of the container, the intersection between the projection line and the line perpendicular to the container at its upper edge is the required viewpoint. We get this by simple trigonometry (note that phi here is in radians, not degrees).
If we apply this, the lower edge of the element will always appear on the lower edge of the container. Now the free parameter is rotation. This seems to have the relation
rad = pi/2 - element.width / container.width
for sufficiently large widths; however, I can't quite wrap my head around the actual relationship. Here is a fiddle: http://jsfiddle.net/24qrQ/6/
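Here is a sketch of how this could be wired up in JS (the #container and #element ids are placeholders; the rotation line is the approximate relation above, and the perspective line is my reading of the side-view trigonometry, i.e. with the viewpoint level with the top edge, d = h * sin(phi) / (1 - cos(phi))):

// Recompute perspective and rotation from the current widths (rough sketch).
function update() {
    var container = document.getElementById('container'); // placeholder ids
    var element = document.getElementById('element');
    var h = element.offsetHeight;
    var phi = Math.PI / 2 - element.offsetWidth / container.offsetWidth; // approximate rotation relation
    var d = h * Math.sin(phi) / (1 - Math.cos(phi)); // viewpoint distance from the trigonometry above
    container.style.webkitPerspective = d + 'px';
    container.style.webkitPerspectiveOrigin = 'top center'; // viewpoint level with the upper edge
    element.style.webkitTransform = 'rotateX(' + phi + 'rad)'; // transform-origin: top center assumed, as in the question
}
window.addEventListener('resize', update);
update();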
Basically, you are trying to figure out how to put an object in 3D space, so it lines up with a 2D viewport. That's always a tricky thing.
I don't know what the math is, and most others probably don't either. This is hardly a common problem. But here's how I would go about figuring it out.
The only variable here is width. And the two values that would need to change based on the width are -webkit-perspective on the container and -webkit-transform on the inner element. So I would manually edit the values for a few different widths and record the 3D values that you had to enter to make things look right. (I'd use the web inspector to edit the values in real time so you get immediate feedback.)
Once you have a few data points, plot them out on a graph and then try to figure out how they change. I have a hunch it's a parabolic curve, but it may be hyperbolic or sinusoidal too; my 3D math isn't good enough to know for sure.
Then you can try to figure out an equation where, when you input the widths you've sampled, you get back the manual 3D values you set previously. Then use JS to read the width of the container and set the CSS values to make it look right.
I've done that with 3 widths 300, 450, 600:
http://jsfiddle.net/24qrQ/3/
Some trends are obvious. As width increases, perspective goes up at an increasing rate, and rotation goes down at an increasing rate.
Figuring out the exact formula is now up to you.
As a simpler alternative, if figuring out a formula becomes too difficult, you could manually curate a handful of widths and 3D values that look nice and store them in JS somewhere. Then you could just linearly interpolate between them. It wouldn't be exact, but it might be close enough.
It would also be less fun!
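For what it's worth, a sketch of that interpolation (tunedSamples stands in for whatever width/perspective/rotation triples you curate by hand, sorted by width):

// Linearly interpolate between hand-tuned samples of { width, perspective, rotation }.
function interpolate(samples, width) {
    if (width <= samples[0].width) return samples[0];
    for (var i = 1; i < samples.length; i++) {
        if (width <= samples[i].width) {
            var a = samples[i - 1], b = samples[i];
            var t = (width - a.width) / (b.width - a.width); // 0..1 between the two samples
            return {
                perspective: a.perspective + t * (b.perspective - a.perspective),
                rotation: a.rotation + t * (b.rotation - a.rotation)
            };
        }
    }
    return samples[samples.length - 1];
}

// var v = interpolate(tunedSamples, document.body.offsetWidth);
// container.style.webkitPerspective = v.perspective + 'px';
// element.style.webkitTransform = 'rotateX(' + v.rotation + 'deg)';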
I created a webgl animation using scenejs library (start it by clicking the button at the bottom left, note it plays music as well which you can't currently disable).
The problem I am encountering is that the floor/plane in the middle starts flickering and continues to flicker/blink throughout the animation. Only towards the very end does the flickering lessen, and by the end it stops completely (when the plane is about to end).
If I reduce the size of the plane to 10% of its size (from ~26000 to ~2600), it does not flicker at all.
I've tried adjusting the texture scales; it has no effect. Lowering the fps didn't seem to have an effect either. Does WebGL have problems rendering large objects? Are there any workarounds for this?
I could probably make the plane static and have its texture move instead, but that would certainly make a lot of things trickier, especially when more elements are added to it.
Update: Setting the requestAnimationFrame option had no effect, nor did removing the Flash video. The only time it works fine is when the plane is significantly smaller, or when it is reaching its end.
Scene looks fine to me - what happens when you remove the Flash?
PS. Share this on a jsFiddle if you like: http://jsfiddle.net/
Also, what happens when you use the requestAnimationFrame option for the render loop?
Example here:
http://scenejs.wikispaces.com/scene#Starting
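For reference, the general shape of a requestAnimationFrame-driven loop (a generic sketch, not SceneJS-specific; renderScene stands in for whatever call redraws your scene):

// Let the browser schedule each frame instead of a fixed setInterval timer.
function frame() {
    renderScene(); // placeholder for the actual render/update call
    window.requestAnimationFrame(frame);
}
window.requestAnimationFrame(frame);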