I want to display text with WebGL, and I know there isn't a built-in way to do this. However, I know it can be done with textures. I am new to OpenGL and don't have much experience with shaders, so I would appreciate an explanation of how to set up the shaders for this. I would like to draw an entire string on the same object rather than a bunch of separate letters, and the strings are NOT preset; they will not always be the same. How can I get the text to appear? Also, how do I know how to space each letter?
I read post #7 on this page, and that sounds like what I want to do, but I don't understand exactly what it all means. (It's mostly the shader stuff I don't understand.)
By the way, I am using sylvester.js
There are many ways to render text but one of the simplest is called bitmap font rendering.
All you need to get started is a sprite sheet with all of the letters you might want to render. Then you simply render a quad with the texture coordinates set to the location of the character you want to draw. To render a full sentence, just draw a bunch of quads, each representing a single letter.
Your sprite sheet will look something like a grid texture with one cell per character.
Once you have that, you'll need the texture coordinates, essentially (x, y) coordinates in the range 0 to 1, for each character in the sprite texture. Use these when generating the quad meshes. You'll end up drawing a row of textured quads, one per letter, to the screen.
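For example, with a 16x16 grid of ASCII glyphs in the sprite sheet, the cell for each character falls straight out of its character code. Here is a rough sketch of building one quad (two triangles) per letter plus a minimal shader pair; the grid layout, monospace metrics, and attribute/uniform names are illustrative assumptions, not from any particular library:

    // Assumed layout: 16x16 grid of glyph cells, indexed by character code.
    var GRID = 16;
    var CELL = 1 / GRID; // size of one glyph cell in texture space (0..1)

    // Build interleaved [x, y, u, v] data: one quad (two triangles) per letter.
    // Whether v needs flipping depends on how you upload the texture.
    function buildTextQuads(text, charW, charH) {
      var verts = [];
      for (var i = 0; i < text.length; i++) {
        var code = text.charCodeAt(i);
        var u = (code % GRID) * CELL;           // left edge of the cell
        var v = Math.floor(code / GRID) * CELL; // top edge of the cell
        var x = i * charW;                      // simple monospace advance
        verts.push(
          x,         0,     u,        v + CELL,
          x + charW, 0,     u + CELL, v + CELL,
          x,         charH, u,        v,
          x,         charH, u,        v,
          x + charW, 0,     u + CELL, v + CELL,
          x + charW, charH, u + CELL, v
        );
      }
      return new Float32Array(verts);
    }

    // Minimal shaders: transform each quad and sample the sprite sheet.
    var vsSource =
      'attribute vec2 aPosition;' +
      'attribute vec2 aTexCoord;' +
      'uniform mat4 uMatrix;' + // e.g. an orthographic projection matrix
      'varying vec2 vTexCoord;' +
      'void main() {' +
      '  gl_Position = uMatrix * vec4(aPosition, 0.0, 1.0);' +
      '  vTexCoord = aTexCoord;' +
      '}';

    var fsSource =
      'precision mediump float;' +
      'uniform sampler2D uFont;' + // the sprite-sheet texture
      'varying vec2 vTexCoord;' +
      'void main() {' +
      '  gl_FragColor = texture2D(uFont, vTexCoord);' +
      '}';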
Now that you have text on the screen, you can get fancy and take into account the glyph kerning between the letters. This allows you to render more natural text.
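A simple way to do that is a per-glyph advance width plus a small kerning table for specific pairs; the metrics below are made up for illustration, real values would come from your font tool:

    // Hypothetical metrics; real values come from your font/sprite-sheet tool.
    var advance = { A: 14, V: 13, T: 13, o: 10 }; // width of each glyph
    var kerning = { AV: -3, To: -2 };             // adjustments for pairs

    function layout(text) {
      var x = 0, positions = [];
      for (var i = 0; i < text.length; i++) {
        positions.push(x);                   // x offset for this glyph's quad
        x += advance[text[i]] || 10;
        var pair = text[i] + (text[i + 1] || '');
        x += kerning[pair] || 0;             // pull pairs like "AV" together
      }
      return positions;
    }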
Unfortunately, I can't find a tutorial to point you to, and it's not really something I can whip together for you here. There are many pieces to the puzzle, and it's no small task (matrix math, cameras, orthographic projection, texture coordinates, textures, sprites, generating meshes, etc.).
If you'd like, you can look through one of my projects where I have done this with WebGL. I even generate the initial sprite sheet using JavaScript + 2D canvas.
Sprite Sheet generated here:
https://github.com/zfedoran/prefab.js/blob/master/app/graphics/spriteFont.js
Quad Mesh generated in this file:
https://github.com/zfedoran/prefab.js/blob/master/app/controllers/labelController.js
Wrapper around WebGL:
https://github.com/zfedoran/prefab.js/blob/master/app/graphics/device.js
Or You Could
Watch Notch (the guy who made Minecraft) do this, in only about 30 minutes, in Java (fast forward to 2:21 hours in):
http://www.twitch.tv/notch/b/487451713
http://www.twitch.tv/notch/b/487621698
Good luck, and have fun :)
Three.js has actual text glyph support. In addition, dimensionthree.net uses textures on shapes. If you need source, let me know.
There is also my http://taccGL.org library, which can draw HTML text on a 2D canvas and then use it as a texture on 3D objects drawn on a 3D/WebGL canvas.
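The same canvas-to-texture idea is easy to sketch in plain WebGL: draw the text with the 2D canvas API, then upload the canvas as a texture. The sizes and font here are arbitrary, and gl is assumed to be an existing WebGLRenderingContext:

    function makeTextTexture(gl, text) {
      var canvas = document.createElement('canvas');
      canvas.width = 256;
      canvas.height = 64;
      var ctx = canvas.getContext('2d');
      ctx.font = '32px sans-serif';
      ctx.fillStyle = 'white';
      ctx.fillText(text, 8, 40);

      var texture = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, texture);
      // A canvas element can be passed directly to texImage2D.
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, canvas);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
      return texture;
    }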
I have a use case where I need to render a significant amount (~50,000 glyphs) of crisp, scalable text strings on a canvas element. The best solution I've tried so far involves triangulating text drawn on a canvas element (the text was drawn using the fillText method), then uploading matrix uniforms and the Float32Array of triangles representing that string to the GPU via WebGL. Using this method, I was able to render 100,000 glyphs at about 30fps. Glyphs become blocky at very high zoom levels, but that is fine for my use case.
However, this method has an overhead of about 250ms per string, since I first draw the string to an in-memory canvas element, read the pixel data, turn the bitmap image into a vector, and then triangulate the vector data. Searching the web for solutions, I came across two interesting open-source projects:
OpenType.js: https://opentype.js.org/
Earcut: https://github.com/mapbox/earcut
So now I want to rewrite my initial proof of concept to use OpenType and Earcut: OpenType for feeding curve data into Earcut, and Earcut for triangulating that data and returning an array representing the points of each triangle.
My problem is, I can't figure out how to take the data OpenType provides and convert it into the format that Earcut accepts. Can anyone provide assistance with this?
More Info:
This Stack Overflow question has some great information but lacks some of the implementation details: Better Quality Text in WebGL. I suppose what I am trying to accomplish is the "Font as Geometry" approach described in the first answer.
You can create a path using Font.getPath. A path consists of move-to, line-to, curve-to, quad-to, and close instructions, accessed via path.commands. You will need to convert the bezier curve instructions into small line segments first, of course.
Once you have a set of closed paths, you need to determine which ones are holes. Inner outlines are oriented in the opposite direction to outer ones, and you can assign each one to the smallest outer outline that contains it. Once you have groups of <outer outline and a set of holes>, you should be able to feed them to the earcut library.
This is a simple implementation that assumes there are no intersections. For me it worked very well for most fonts, except for very few "fancy" fonts that have intersecting paths.
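A minimal sketch of that pipeline for a single glyph, assuming opentype.js and earcut are loaded (font is an opentype.js Font instance). Curves are flattened with a crude fixed step, and the contour with the largest area is treated as the outer outline; grouping holes under the correct outer outline for whole strings works as described above:

    function glyphToTriangles(font, glyph, size) {
      var path = font.getPath(glyph, 0, 0, size);
      // Flatten move/line/curve commands into closed polygons.
      var contours = [], current = null, x = 0, y = 0;
      path.commands.forEach(function (cmd) {
        if (cmd.type === 'M') {
          current = [];
          contours.push(current);
        } else if (cmd.type === 'Q' || cmd.type === 'C') {
          for (var t = 1; t <= 10; t++) { // crude fixed-step flattening
            var s = t / 10, px, py;
            if (cmd.type === 'Q') {
              px = (1-s)*(1-s)*x + 2*(1-s)*s*cmd.x1 + s*s*cmd.x;
              py = (1-s)*(1-s)*y + 2*(1-s)*s*cmd.y1 + s*s*cmd.y;
            } else {
              px = (1-s)*(1-s)*(1-s)*x + 3*(1-s)*(1-s)*s*cmd.x1
                 + 3*(1-s)*s*s*cmd.x2 + s*s*s*cmd.x;
              py = (1-s)*(1-s)*(1-s)*y + 3*(1-s)*(1-s)*s*cmd.y1
                 + 3*(1-s)*s*s*cmd.y2 + s*s*s*cmd.y;
            }
            current.push(px, py);
          }
        }
        if (cmd.type !== 'Z') { // 'Z' carries no coordinates
          if (cmd.type === 'M' || cmd.type === 'L') current.push(cmd.x, cmd.y);
          x = cmd.x; y = cmd.y;
        }
      });
      // Assume the largest contour is the outer outline, the rest are holes.
      contours.sort(function (a, b) {
        return Math.abs(area(b)) - Math.abs(area(a));
      });
      var vertices = [], holeIndices = [];
      contours.forEach(function (c, i) {
        if (i > 0) holeIndices.push(vertices.length / 2);
        Array.prototype.push.apply(vertices, c);
      });
      // earcut returns triangle indices into the flat vertex list.
      return { vertices: vertices, indices: earcut(vertices, holeIndices, 2) };
    }

    function area(pts) { // signed area, also usable for the orientation test
      var a = 0;
      for (var i = 0; i < pts.length; i += 2) {
        var j = (i + 2) % pts.length;
        a += pts[i] * pts[j + 1] - pts[j] * pts[i + 1];
      }
      return a / 2;
    }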
Here's a working example: https://jsbin.com/gecakub/edit?html,js,output
Instead of creating meshes for each string, you could also create them for individual characters, and then position them yourself using kerning data from the library.
Edit: this solution will only work for TTF fonts, though it can easily be adjusted for CFF (.otf) by ignoring path orientation and using a better "path A is inside path B" check, unless the font has intersecting paths.
How can I draw a bezier line between two non-static DOM elements?
The two lines should be drawn between the
<div class="brick small">Line starts here</div>
and the
<div class="brick small">Line ends here</div>
of this CodePen: https://codepen.io/anon/pen/XeamWe
Note that the boxes can be dragged. If one of the elements changes its position, the line should be updated accordingly.
If I'm not wrong I can't use a canvas, right? What can I use instead?
Let me point you toward the answer I believe you're looking for: an element type called SVG, which is supported by most if not all of today's web browsers (so you won't need to plug in anything external). With SVG you can draw lines and shapes, apply graphical filters much like in Photoshop, and many other useful things. The one to point out here is the so-called 'path': a shape that can consist of straight lines with sharp corners, curved (bezier) lines, or both combined.
The easiest way to create such paths is to first draw them in, for example, Illustrator, save the shape in the SVG format, open that file in a text editor, and copy the generated markup into your HTML, where it is supported. The drawn shape will then be displayed on your site. In your case, though, you won't get around the slightly more complex structuring of the paths, because you want to control them with JavaScript. I would suggest first exporting a few simple paths from Illustrator this way, studying them in code, and manipulating their bezier values in JavaScript until you get the hang of how they work. Once you've done that, you will be able to create the exact bezier shape you have in mind and (knowing the positions of the elements you want to connect) position it so that it connects your boxes, as in the sketch below.
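For the concrete case in the question, a single cubic bezier path whose d attribute is rebuilt from the two boxes' positions is enough. A rough sketch; the overlay styling and the element lookup at the bottom are illustrative:

    var SVG_NS = 'http://www.w3.org/2000/svg';
    var svg = document.createElementNS(SVG_NS, 'svg');
    svg.style.cssText =
      'position:fixed;top:0;left:0;width:100%;height:100%;pointer-events:none;';
    var path = document.createElementNS(SVG_NS, 'path');
    path.setAttribute('fill', 'none');
    path.setAttribute('stroke', '#333');
    path.setAttribute('stroke-width', '2');
    svg.appendChild(path);
    document.body.appendChild(svg);

    function updateLine(fromEl, toEl) {
      var a = fromEl.getBoundingClientRect();
      var b = toEl.getBoundingClientRect();
      var x1 = a.right, y1 = a.top + a.height / 2; // start: right edge
      var x2 = b.left,  y2 = b.top + b.height / 2; // end: left edge
      var dx = (x2 - x1) / 2;                      // control-point offset
      path.setAttribute('d',
        'M ' + x1 + ' ' + y1 +
        ' C ' + (x1 + dx) + ' ' + y1 + ', ' + (x2 - dx) + ' ' + y2 +
        ', ' + x2 + ' ' + y2);
    }

    // Re-run on every drag/move event, e.g. (hypothetical selectors):
    // updateLine(document.querySelector('#start'), document.querySelector('#end'));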
Paths can even be decorated with markers, like an arrowhead at the end or beginning of the path; you can even design your own markers to look however you like, and much more if you dig deeper into it.
Good luck! :)
I'm searching for one or more best practices for the following problem. I'll try to describe it as abstractly as possible, so the solution can be applied to scenarios I have not yet thought of.
Data available: voxels (volumetric pixels) forming a cube, each with x, y, z coordinates and a color attached.
Goal: use OpenGL to display this data as you move through it from different sides.
Question: What's the best practice for rendering those voxels depending on the viewpoint? Which type of object can store the data?
Consider the following:
The cube of data can be treated as z layers of x-y data. It should be possible to view in between layers; the displayed color should then be interpolated from the closest matching voxels (a rough sketch of that lookup follows below).
For my application, I have data sets of (x, y, z) = (512, 512, 128) and more, containing medical data (scans of hearts, brains, ...).
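The in-between-layers lookup mentioned above is just a linear blend of the two neighboring slices. A rough sketch, assuming one value per voxel in a flat array laid out [z][y][x] (extend to RGBA or full trilinear filtering as needed):

    function sampleBetweenLayers(voxels, dims, x, y, z) {
      var z0 = Math.floor(z);
      var z1 = Math.min(z0 + 1, dims.z - 1);
      var t = z - z0;                         // fractional position between slices
      function idx(zz) { return (zz * dims.y + y) * dims.x + x; }
      return (1 - t) * voxels[idx(z0)] + t * voxels[idx(z1)];
    }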
What I've tried so far:
Evaluated different frameworks (PIXI.js, three.js) and worked through a few WebGL tutorials.
If something is not yet clear enough, please ask.
There are two major ways to represent and render 3D datasets: rasterization and ray tracing.
One fair rasterization approach is surface reconstruction, using algorithms such as Marching Cubes, Dual Contouring, or Dual Marching Cubes.
Three.js has a Marching Cubes implementation in its examples section. You basically create polygons from your voxels for classical rasterization. It may be faster than it seems: depending on the level of detail you want to reach, the process can be fast enough to run more than 60 times per second for thousands of vertices.
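As a rough sketch of how that example helper is wired up (the names follow the three.js example code and may differ between versions; producing normalized densities from your scan data is up to you):

    import * as THREE from 'three';
    import { MarchingCubes } from 'three/addons/objects/MarchingCubes.js';

    const resolution = 64; // field is resolution^3 density samples
    const effect = new MarchingCubes(
      resolution,
      new THREE.MeshNormalMaterial(),
      false,   // enableUvs
      false,   // enableColors
      500000   // max polygon count
    );
    effect.isolation = 0.5; // density threshold that defines the surface

    // Copy your (normalized) densities into the helper's flat field array.
    function updateField(densities) {
      effect.reset();
      for (let i = 0; i < effect.field.length; i++) {
        effect.field[i] = densities[i];
      }
      // Depending on your three.js version you may need effect.update() here.
    }

    // Then add the mesh to your existing scene: scene.add(effect);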
However, unless you simply want to represent cubes (which I doubt) instead of a surface, you will also need more information associated with each of your voxels than just positions and colors.
The other way is ray casting. Unless you find a really efficient ray-casting algorithm, a naive implementation will take a serious performance hit.
You can cast rays from your camera position through your data structure, stop marching when you reach a surface, and project the intersection point back to screen space with the desired color.
You can draw the resulting pixels into a texture buffer and map it onto a full-screen quad with a simple shader.
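In its most naive form (an axis-aligned view marching straight down z), that loop looks something like this; sampleDensity and densityToColor stand in for your own data access and transfer function:

    // Writes one pixel per screen position into an ImageData buffer.
    function raycast(imageData, dims, threshold, sampleDensity, densityToColor) {
      var w = imageData.width, h = imageData.height, data = imageData.data;
      for (var py = 0; py < h; py++) {
        for (var px = 0; px < w; px++) {
          // March along z until the first voxel above the density threshold.
          for (var z = 0; z < dims.z; z++) {
            var d = sampleDensity(px, py, z);
            if (d >= threshold) {
              var rgb = densityToColor(d);
              var o = (py * w + px) * 4;
              data[o] = rgb[0]; data[o + 1] = rgb[1];
              data[o + 2] = rgb[2]; data[o + 3] = 255;
              break;
            }
          }
        }
      }
    }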
In both cases, you need more information than just colors and cubes. For example, you need at least density values at each corner of your voxels for Marching Cubes, or intersection normals along voxel edges (Hermite data) for Dual Contouring.
The same goes for ray casting: you need at least some density information to figure out where the surface lies.
One of the keys is also how you organize the data in your structure, especially for out-of-core access.
I am looking to achieve something like this: an HTML view has a finite number of images (the red boxes in my illustration). Are there any browser/jQuery APIs available today (cross-browser) that will let me quickly calculate the dimensions of the remaining space (the green boxes)? In my example it is easy to calculate the green areas' dimensions using simple geometry, given the dimensions of the red boxes. But I am talking about very complex scenarios and complicated combinations of images.
Appreciate any help. Thanks.
If your images all have absolute positioning, you can calculate dimensions through the top and left properties, like $('#elementID').offset().top and $('#elementID').offset().left
From my experience working with DOM element dimensions, you cannot rely on them for exact values, and certainly can't rely on them for the same values cross-browser. You can get OK results, but if you have complex scenarios then you will probably come undone at some point.
One way I have achieved similar things in the past is by drawing the images to an HTML5 canvas. Using canvas you can have very fine-grained control. I have even iterated canvases pixel by pixel to get pixel-perfect measurements of items on the canvas.
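For example, reading the pixels back and testing the alpha channel gives you an occupancy map you can measure free space against; a rough sketch:

    function buildOccupancy(canvas) {
      var ctx = canvas.getContext('2d');
      var img = ctx.getImageData(0, 0, canvas.width, canvas.height);
      var occupied = new Uint8Array(canvas.width * canvas.height);
      for (var i = 0; i < occupied.length; i++) {
        occupied[i] = img.data[i * 4 + 3] > 0 ? 1 : 0; // alpha > 0 => covered
      }
      return { occupied: occupied, width: canvas.width, height: canvas.height };
    }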
Check out this tutorial for a brief overview of drawing an image.
UPDATE
There is no easy way to do it. This method is low-level and will require you to use mathematics, and possibly byte-level image data from the canvas. However, if your problem is as complex as you suggest, then you will have to get stuck in. When I did something similar, I was also looking for an easy way to achieve what I wanted in the browser, then spent a month getting to grips with the canvas API, learning about byte-level colour data, etc. But in the end I got what I needed, and ended up with something quite unique, as it was difficult to achieve in a browser.
To get started, I would first look at implementing layered canvases by absolutely positioning multiple canvases on top of each other, then drawing a single image on each one. You already know the sizes of the images, and you can decide the coordinates of where to draw each image, so that's a start. In fact, that may be all you need: you can track each image as you draw it by storing coordinates and dimensions, and you should be able to build up an accurate numeric picture of where all your images are in 2D space.
Using those numbers you should then be able to calculate any empty spaces. However, that part is beyond me and probably a question for Mathematics Stack Exchange (which is actually down at the moment :D).
I am searching for a 2D physics engine to simulate gravity using images, preferably PNG images with transparency, so the engine will know how to calculate collisions based on the opaque parts of the image. I have only found JavaScript engines that work with primitive shapes and basic HTML elements, but not with images.
I don't know of any way to do exactly what you want, but you can try drawing your shapes to an HTML5 canvas and using Box2D.js to handle the shape collisions.
One thing you could do is compute the convex hull of your image (you can have a look here) and then use those hulls to compute collisions and so on (using GJK, for example; you can find some great explanations here or here).
As noted by micnic, I guess you can indeed use Box2D.js and feed a b2PolygonShape with the non-transparent pixels of your images (or you can compute their contours and use the contours as input for the b2PolygonShape). A sketch of the convex-hull route follows below.
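A rough sketch of that: collect the opaque pixels through a scratch canvas, then run Andrew's monotone chain over them (for large images you would sample only edge pixels, but the idea is the same):

    // Coordinates of all non-transparent pixels in an (already loaded) image.
    function opaquePoints(img) {
      var canvas = document.createElement('canvas');
      canvas.width = img.width;
      canvas.height = img.height;
      var ctx = canvas.getContext('2d');
      ctx.drawImage(img, 0, 0);
      var data = ctx.getImageData(0, 0, img.width, img.height).data;
      var pts = [];
      for (var y = 0; y < img.height; y++)
        for (var x = 0; x < img.width; x++)
          if (data[(y * img.width + x) * 4 + 3] > 0) pts.push([x, y]);
      return pts;
    }

    // Andrew's monotone chain: returns the hull vertices in order.
    function convexHull(points) {
      points.sort(function (a, b) { return a[0] - b[0] || a[1] - b[1]; });
      function cross(o, a, b) {
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0]);
      }
      var lower = [], upper = [], i, p;
      for (i = 0; i < points.length; i++) {
        p = points[i];
        while (lower.length >= 2 &&
               cross(lower[lower.length - 2], lower[lower.length - 1], p) <= 0)
          lower.pop();
        lower.push(p);
      }
      for (i = points.length - 1; i >= 0; i--) {
        p = points[i];
        while (upper.length >= 2 &&
               cross(upper[upper.length - 2], upper[upper.length - 1], p) <= 0)
          upper.pop();
        upper.push(p);
      }
      return lower.slice(0, -1).concat(upper.slice(0, -1));
    }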