How to get Proper Resolution for VR Headset? - javascript

I'm creating a WebVR application using three.js. Currently I'm trying to set the resolution of the window to the resolution of the HMD in order to get a clearer picture. I've been trying to get this information from the VREyeParameters renderWidth/renderHeight properties. However, with the Vive I'm using, I keep getting a resolution much larger than the advertised 2160x1200 (the width came out as 3448). Am I grabbing the wrong information, and is there somewhere else I need to be getting these values from?

The values look correct.
You are rendering to a temporary texture that is then projected with barrel distortion onto the headset's display. For best quality, that temporary texture is rendered at a higher resolution than the physical panel.
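For reference, here is a minimal sketch of how those values are typically read and applied with the (now-deprecated) WebVR 1.1 API; renderer is assumed to be an existing THREE.WebGLRenderer:

```javascript
// Size the WebGL drawing buffer to what the HMD compositor expects.
// renderWidth/renderHeight are per-eye, so the full buffer is twice as wide.
navigator.getVRDisplays().then(function (displays) {
  if (!displays.length) return;
  var vrDisplay = displays[0];

  var leftEye = vrDisplay.getEyeParameters('left');
  var rightEye = vrDisplay.getEyeParameters('right');

  var width = leftEye.renderWidth + rightEye.renderWidth;
  var height = Math.max(leftEye.renderHeight, rightEye.renderHeight);

  // With the Vive this will be larger than 2160x1200; the extra resolution
  // is what the barrel-distortion pass samples from.
  renderer.setSize(width, height, false);
});
```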

Related

How to position elements consistently across different device sizes in React Native?

So I have an Image of a map on which users can plot points. The location the user has touched or clicked is saved by sending the e.nativeEvent.locationX and e.nativeEvent.locationY values to my database. This will allow the user to review the points they have plotted in the future.
What's the problem
So when I plot a point on my tablet and then review where I placed it on the map using my PC, it is displayed incorrectly. I assume this is due to different screen sizes and the image being a different size depending on the device.
How do I resolve this issue so that the plots are consistent no matter what device you are using?
You'll need to keep track of what size the image is when you're capturing the points. So, for example, if your point is (5,5) on an image that is being displayed at 25x25, you may want to store it in your database as the relative point (0.2, 0.2) -- i.e. (5/25, 5/25), or 20% along each axis -- rather than as an absolute point.
Then, on another device where that image gets displayed at 200x200, your point would be drawn at (40, 40) -- 200 * 0.2 on each axis.
If you need to keep track of what the Image/View size is, you can refer to plenty of existing questions on that topic such as: Get size of a View in React Native
Note: it's also important that you display the image with the same aspect ratio on both devices.
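A minimal sketch of that idea, assuming you already measure the displayed image size (for instance via onLayout) and pass it in as imageWidth/imageHeight (the function names here are illustrative):

```javascript
// Convert an absolute touch point into a relative (0..1) point for storage.
function toRelativePoint(locationX, locationY, imageWidth, imageHeight) {
  return {
    x: locationX / imageWidth,
    y: locationY / imageHeight,
  };
}

// Convert a stored relative point back into pixels for the current device.
function toAbsolutePoint(relativePoint, imageWidth, imageHeight) {
  return {
    x: relativePoint.x * imageWidth,
    y: relativePoint.y * imageHeight,
  };
}

// Example: a touch at (5, 5) on a 25x25 image is stored as (0.2, 0.2)
// and renders at (40, 40) when the same image is shown at 200x200.
const stored = toRelativePoint(5, 5, 25, 25);    // { x: 0.2, y: 0.2 }
const onPc = toAbsolutePoint(stored, 200, 200);  // { x: 40, y: 40 }
```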

How to make THREE.Mesh look volumetric with WebVR?

I'm working on porting an existing three.js project to WebVR + Oculus Rift. Basically, this app takes an STL file as input, creates a THREE.Mesh based on it and renders it in an otherwise empty scene. I managed to make it work in Firefox Nightly with the VREffect plugin for three.js and VRControls. The problem I have is that models rendered in VR aren't really 3D. Namely, when I move the HMD back and forth, the active 3D model doesn't get closer/farther, and I can't see different sides of the model. It looks as if the model were a flat background image stuck in its position. If I add a THREE.AxisHelper to the scene, it is transformed correctly when the HMD is moved.
Originally, THREE.OrbitControls were used in the app and models were rotated and moved properly.
There's quite some amount of source code so I'll post some snippets on demand.
It turned out that technically there was no problem. The issue was essentially a mismatch between the scale of my models and the scale of the Oculus movements. When VRControls is used with default settings, it reports the position of the HMD as it reads it from the Oculus, in meters. So the range of movement of my head could barely exceed 1 m, whereas the average size of my models is a few dozen of their own units. When I put them together in the same scene, it was as if the viewer were an ant looking at a giant model: naturally, the ant has to walk a while to see another side of the model. That's why it didn't look like a 3D body.
Fortunately, VRControls has a scale property that is meant for adjusting the scale of HMD movements. When I set it to about 30, everything works pretty well.
Thanks to @brianpeiris's comment, I decided to check the coordinates of the model and camera once again to make sure they weren't tied to each other. And that led me to the solution.
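For reference, a minimal sketch of what that fix looks like with the WebVR-era three.js examples (THREE.VRControls / THREE.VREffect); scene, effect and vrDisplay are assumed to be set up elsewhere, and the scale value of 30 is just what happened to fit these particular models:

```javascript
// Camera whose pose is driven by the HMD.
var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);

// VRControls copies the HMD pose (reported in meters) onto the camera.
var controls = new THREE.VRControls(camera);

// Scale up head movement so a ~1 m range of motion is meaningful
// next to models that are dozens of units across.
controls.scale = 30;

function animate() {
  controls.update();            // apply the (scaled) HMD pose to the camera
  effect.render(scene, camera); // VREffect renders one view per eye
  vrDisplay.requestAnimationFrame(animate);
}
vrDisplay.requestAnimationFrame(animate);
```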

How can I increase map rendering performance in HTML Canvas?

We are developing a web-based game. The map has a fixed size and is procedurally generated.
At the moment, all these polygons are stored in one array and checked whether they should be drawn or not. This requires a lot of performance. Which is the best rendering / buffering solution for big maps?
What I've tried:
Quadtrees. Problem: Performance still not as great because there are so many polygons.
Drawing sections of the map to offscreen-canvases. A test run: http://norizon.ch/repo/buffered-map-rendering/ Problem: The browser crashes when trying to buffer that much data and such big images (maybe 2000x2000) still seem to perform badly on a canvas.
(posting comments as an answer for convenience)
One idea could be, when the user is translating the map, to re-use the part that will still be in view, and to redraw only the stripe(s) that are no longer correct.
I believe (can you confirm?) that the most costly operation is the drawing, not finding which polygons to draw.
If so, you should use your quadtree to find the polygons that are within those stripes. Notice that, given JavaScript's overhead, a simple 2D bucket grid that stores the polygons within a given (x,y) tile might be faster to use (if the cost of the quadtree is too high); see the sketch after this answer.
Now, I'm not sure about the precise way you should do that; I'm afraid you'll have to experiment/benchmark, and maybe choose a preferred browser.
Problems:
• Copying a canvas onto itself can be very slow depending on the device/browser (it might in fact require two copies).
• Using an offscreen canvas can be very slow depending on the device/browser (it might not use hardware acceleration when off-screen).
If you are drawing things on top of the map, you can either use a secondary canvas on top of the map canvas, or you'll be forced to use an off-screen canvas that you'll copy on each frame.
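A minimal sketch of the 2D bucket grid mentioned above; the tile size and the polygons' axis-aligned bounding boxes (poly.bounds) are assumptions for illustration:

```javascript
// Build a flat grid of buckets; each bucket lists the polygons overlapping that tile.
function buildBuckets(polygons, mapWidth, mapHeight, tileSize) {
  var cols = Math.ceil(mapWidth / tileSize);
  var rows = Math.ceil(mapHeight / tileSize);
  var buckets = [];
  for (var i = 0; i < cols * rows; i++) buckets.push([]);

  polygons.forEach(function (poly) {
    // poly.bounds = { x, y, width, height }
    var x0 = Math.max(0, Math.floor(poly.bounds.x / tileSize));
    var y0 = Math.max(0, Math.floor(poly.bounds.y / tileSize));
    var x1 = Math.min(cols - 1, Math.floor((poly.bounds.x + poly.bounds.width) / tileSize));
    var y1 = Math.min(rows - 1, Math.floor((poly.bounds.y + poly.bounds.height) / tileSize));
    for (var y = y0; y <= y1; y++) {
      for (var x = x0; x <= x1; x++) {
        buckets[y * cols + x].push(poly);
      }
    }
  });
  return { buckets: buckets, cols: cols, rows: rows, tileSize: tileSize };
}

// Collect the polygons intersecting a stripe (e.g. the newly exposed edge of the view).
function polygonsInStripe(grid, stripe) {
  var found = [];
  var x0 = Math.max(0, Math.floor(stripe.x / grid.tileSize));
  var y0 = Math.max(0, Math.floor(stripe.y / grid.tileSize));
  var x1 = Math.min(grid.cols - 1, Math.floor((stripe.x + stripe.width) / grid.tileSize));
  var y1 = Math.min(grid.rows - 1, Math.floor((stripe.y + stripe.height) / grid.tileSize));
  for (var y = y0; y <= y1; y++) {
    for (var x = x0; x <= x1; x++) {
      grid.buckets[y * grid.cols + x].forEach(function (poly) {
        if (found.indexOf(poly) === -1) found.push(poly); // de-duplicate shared polygons
      });
    }
  }
  return found;
}
```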
I have tried a lot of things and this solution turned out to be the best for us.
Because our map has a fixed size, it is calculated server-side.
One big image atlas with all the required tiles is loaded at the beginning of the game. For each image in the atlas, a separate canvas is created. The client loads the whole map data into one two-dimensional array whose values determine which tile has to be drawn. Maybe it would be even better if the map were drawn on a separate canvas, so that only the stripes have to be painted. But the performance is really good, so we won't change that.
Three conclusions:
• Images are fast. getImageData is not!
• JavaScript does not yet have great support for multithreading, so we don't calculate the map client-side at game time.
• Quadtrees are fast. Arrays are faster.
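A minimal sketch of that tile-atlas approach, assuming the atlas image is already loaded, mapData is the two-dimensional array of tile indices, and the atlas lays its tiles out in a single row (the original answer additionally pre-blits each atlas tile onto its own canvas; this sketch draws straight from the atlas for brevity, and all names are illustrative):

```javascript
var TILE_SIZE = 64; // size of one tile in the atlas, in pixels (assumption)

// Draw the visible portion of the map straight from the atlas.
// ctx: 2D context of the game canvas, atlas: the loaded atlas Image,
// mapData: mapData[row][col] = index of the tile within the atlas.
function drawMap(ctx, atlas, mapData, viewX, viewY, viewWidth, viewHeight) {
  var firstCol = Math.max(0, Math.floor(viewX / TILE_SIZE));
  var firstRow = Math.max(0, Math.floor(viewY / TILE_SIZE));
  var lastCol = Math.min(mapData[0].length, Math.ceil((viewX + viewWidth) / TILE_SIZE));
  var lastRow = Math.min(mapData.length, Math.ceil((viewY + viewHeight) / TILE_SIZE));

  for (var row = firstRow; row < lastRow; row++) {
    for (var col = firstCol; col < lastCol; col++) {
      var tileIndex = mapData[row][col];
      ctx.drawImage(
        atlas,
        tileIndex * TILE_SIZE, 0, TILE_SIZE, TILE_SIZE,    // source rect in the atlas
        col * TILE_SIZE - viewX, row * TILE_SIZE - viewY,  // destination on screen
        TILE_SIZE, TILE_SIZE
      );
    }
  }
}
```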

Javascript/JQuery Calculation of Dimensions

I am looking to achieve something like this. An HTML view has a finite number of images (shown as red boxes in the image below). Are there any browser/jQuery APIs available today (cross-browser) which will let me calculate the dimensions of the remaining space (shown as green boxes) quickly? In the example shown below, it is easy to calculate the green area dimensions using simple geometry, given the dimensions of the red boxes. But I am talking about very complex scenarios and complicated combinations of images.
Appreciate any help. Thanks.
If all your images are absolutely positioned, you can calculate their positions from the top and left properties, e.g. $('#elementID').offset().top and $('#elementID').offset().left.
From my experience working with DOM element dimensions, you cannot rely on them for exact values, and you certainly can't rely on them for the same values cross-browser. You can get OK results, but if you have complex scenarios then you will probably come undone at some point.
One way I have achieved similar things in the past is by drawing images to HTML5 Canvas. Using canvas you can have very fine-grained control. I have even iterated canvases pixel-by-pixel to get pixel perfect measurements of items on the canvas.
Check out this tutorial for a brief overview of drawing an image.
UPDATE
There is no easy way to do it. Using this method is low-level and will require you to use mathematics, and possibly byte-level image data from the canvas. However, if your problem is as complex as you suggest, then you will have to get stuck in. When I did something similar I was also looking for an easy way to achieve what I wanted in the browser, then spent a month getting to grips with the canvas API, learning about byte-level colour data etc., but in the end I got what I needed, and ended up with something quite unique, as it was difficult to achieve in a browser.
To get started, I would first look at implementing a layered canvas by absolutely positioning multiple canvases on top of each other, then drawing a single image on each one. You already know the sizes of the images, and you can decide the coordinates of where to draw each image, so that's a start. In fact, that may be all you need: if you track each image as you draw it by storing its coords and dimensions, you should be able to build up an accurate numeric picture of where all your images are in 2D space.
Using those numbers you should then be able to calculate any empty spaces. However, that part is beyond me and probably a question for Mathematics Stack Exchange (which is actually down at the moment :D).
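A minimal sketch of the bookkeeping described above: each image goes onto its own absolutely positioned canvas and its rectangle is recorded, so the occupied regions are known exactly (all names here are illustrative):

```javascript
var drawnRects = []; // every rectangle an image has been drawn into

// Create an absolutely positioned canvas layer and draw one image onto it.
function drawImageLayer(container, img, x, y, width, height) {
  var canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  canvas.style.position = 'absolute';
  canvas.style.left = x + 'px';
  canvas.style.top = y + 'px';
  container.appendChild(canvas);

  canvas.getContext('2d').drawImage(img, 0, 0, width, height);

  // Record exactly where this image sits, in container coordinates.
  drawnRects.push({ x: x, y: y, width: width, height: height });
  return canvas;
}

// With drawnRects known, a point is "empty space" if no rectangle contains it.
function isEmptyPoint(px, py) {
  return !drawnRects.some(function (r) {
    return px >= r.x && px < r.x + r.width && py >= r.y && py < r.y + r.height;
  });
}
```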

Combine Vector advantages with Bitmap in an HTML canvas element - how?

What I am trying to do is create a game with an extreme amount of zoom-ability on a canvas element. I would like to combine the advantage of vector graphics (being able to create them programmatically at runtime) with the high performance of bitmap images.
What I would like to do is programmatically create the first-frame image of a game "sprite" as a vector image. After the first frame, though, I do not want to keep wasting CPU cycles on redrawing it; I would like to cache it as a bitmap/high-performance image for that zoom level.
Following this, if the user zooms in by >20%, I then redraw the image with a higher level of detail vector image. As above, this vector image would then be cached and optimized.
As you can see here, this would be a pretty basic spaceship. I would first render it programmatically as a vector and then rasterize it, I guess? The goal is to avoid wasting CPU.
If the user zooms in...
A new vector image of the same shape would be drawn, albeit with a much higher level of detail. This is basically a Level Of Detail system. In this case as well, after the initial programmatic draw, I would "raster" the image for maximum performance.
Does anyone have ideas on what tools I would need to make this a reality inside of a HTML canvas? (The rest of the game will be running inside of the canvas element..)
Thank you very much for your thoughts.
Edit: I wanted to add... perhaps the route of rendering an image via SVG (programmatically), then pushing that PNG into the canvas using drawImage(), might provide some success? Something similar? Hmm...
Check out the article below; it seems there is no standard method to do what you want, and it may fail in IE.
http://svgopen.org/2010/papers/62-From_SVG_to_Canvas_and_Back/#svg_to_canvas
You should perhaps go with an all-SVG game, or set a maximum zoom level for your game and use big images as sprite assets. It would not have been a problem using Flash, but I guess you won't go with Flash anyway.
Maybe there is a framework that can translate SVG into a "canvas drawing sequence", but I would not bet on high performance in that case.
I managed to answer my own question.
The way to do this is to first create an SVG image, and then convert it to a PNG on the client using "canvg". The PNG can be created at different levels of detail based on what you want, and in this way you can build a dynamic LOD system.
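A minimal sketch of that flow, assuming the classic canvg(canvasElement, svgString) entry point from canvg v1/v2; the SVG string, sprite names and target size here are just placeholders:

```javascript
// Render an SVG string to an offscreen canvas at the desired LOD size,
// then freeze it into a PNG data URL that can be drawn cheaply later.
function prerenderSprite(svgString, width, height) {
  var canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;

  // canvg parses the SVG and draws it onto the canvas.
  canvg(canvas, svgString);

  return canvas.toDataURL('image/png');
}

// Usage: cache a bitmap for the current zoom level, then draw it each frame.
var spriteImg = new Image();
spriteImg.src = prerenderSprite(shipSvgString, 128, 128); // shipSvgString is assumed
// ... later, in the render loop:
// ctx.drawImage(spriteImg, shipX, shipY);
```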
Flash does something similar automatically by caching a bitmap image of the SVG file... it's called "pre-rendering". If the SVG isn't scaled and its alpha isn't changed, Flash will just use the bitmap instead (much faster than continuously re-rendering the SVG, in complex cases). The size (and thus detail) of the PNG output can be modified however you like, so pre-rendering can also be done in response to events.
From this information, I have decided to implement the LOD system such that SVG is used while the user is actively zooming (scaling the target "sprite"), and then, as the zoom slows down, a PNG pre-render is computed. Also, at extremely high levels of zoom, I simply use the SVG, as it is much easier for the CPU to render SVGs at high resolution than bitmap images that cover most of the screen. (Just take a look at some of the HTML5 icon tests that put lots of icons on the screen... the bigger the icons are, the slower they run.)
Thanks very much for everyone's comments here, and I hope that my question/answer helps someone.
