How to make a Three.js shader that outputs a string? - javascript

I'm developing a web application with Three.js that renders a scene in ASCII, much like the example given here. I'm having issues with frame rate though.
I've tried all sorts of different algorithms to convert the rendered scene into an ASCII string. Some were slower than the example, some much faster, but all were too slow for rendering large scenes, even with the WebGL renderer.
Now I'm considering moving this conversion process over to the GPU via a shader, although I'm not sure how to make a Three.js shader output a string. Ideally I would also like to be able to input a custom string of ASCII characters to be used as a palette, though there isn't a string type in GLSL.
Thanks! :)

See this sample
It basically takes a texture, let's call it the map texture, and uses each pixel as a lookup into another texture of sprite images.
In your case you'd change those tiles to ASCII characters and you'd render your 3D scene to the map texture by attaching it to a framebuffer. In other words:
render your scene to a texture
use that texture as a lookup into another texture of ASCII characters.
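A minimal sketch of that second step, assuming the scene was already rendered into renderTarget.texture and that asciiAtlas is an already-loaded texture holding 16 glyphs in a single row, ordered dark to bright (that left-to-right glyph order is effectively your "palette string", baked into the atlas):

var asciiMaterial = new THREE.ShaderMaterial({
  uniforms: {
    map:        { value: renderTarget.texture },       // the rendered scene
    atlas:      { value: asciiAtlas },                 // glyph sprite sheet
    cells:      { value: new THREE.Vector2(160, 90) }, // characters across / down
    glyphCount: { value: 16.0 }
  },
  vertexShader: [
    'varying vec2 vUv;',
    'void main() {',
    '  vUv = uv;',
    '  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
    '}'
  ].join('\n'),
  fragmentShader: [
    'uniform sampler2D map, atlas;',
    'uniform vec2 cells;',
    'uniform float glyphCount;',
    'varying vec2 vUv;',
    'void main() {',
    '  vec2 cell  = floor(vUv * cells) / cells;            // which character cell',
    '  vec2 inner = fract(vUv * cells);                     // position inside the cell',
    '  float lum  = dot(texture2D(map, cell).rgb, vec3(0.299, 0.587, 0.114));',
    '  float glyph = floor(lum * (glyphCount - 1.0) + 0.5); // pick a glyph by brightness',
    '  gl_FragColor = texture2D(atlas, vec2((glyph + inner.x) / glyphCount, inner.y));',
    '}'
  ].join('\n')
});

Draw a full-screen quad with this material after rendering the scene into the render target. No string ever exists in JavaScript; the "characters" are just tile lookups done per fragment.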

Related

Copy mesh thousand times and animate without big performance hit?

I have a demo where I used hundreds of cubes that have exactly the same geometry and texture, for example:
texture = THREE.ImageUtils.loadTexture ...
material = new THREE.MeshLambertMaterial( map: texture )
geometry = new THREE.BoxGeometry( 1, 1, 1 )
cubes = []
for i in [0..1000]
  cubes.push new THREE.Mesh geometry, material

# ... on every frame
for cube in cubes
  # do something with each cube
Once all the cubes are created I start moving them on the screen.
All of them have the same texture and the same size; they just change position and rotation. The problem is that when I start using many hundreds of cubes, the computer starts to struggle to render it.
Is there any way I could tell Three.js / WebGL that all those objects are the same object, that they are just identical copies in different positions?
I read something about BufferGeometry and Geometry2 being able to boost performance for this sort of situation, but I'm not exactly sure what would be best in this case.
Thank you
Is there any way I could tell Three.js / WebGL that all those objects are the same object, that they are just identical copies in different positions?
Unfortunately there's nothing that can automatically determine and optimize render calls in that regard. That would be pretty awesome.
I read something about BufferGeometry and Geometry2 being able to boost performance for this sort of situation but i'm not exactly sure what would be the best in this case.
So, the deal here is this: the normal THREE.Geometry class three.js provides is built for developer convenience, but is a bit far from how data is handled by WebGL. This is what DirectGeometry (earlier called Geometry2) and BufferGeometry are for. A BufferGeometry is a representation of how WebGL expects data for drawcalls to be held: it contains a typed array for every attribute of the geometry. The conversion from Geometry to BufferGeometry happens automatically every time geometry.verticesNeedUpdate is set to true.
If you don't change any of the attributes, this conversion will happen once per geometry (of which you have 1) so this is completely ok and moving to a buffer-geometry won't help (simply because you are already using it).
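For reference, this is the kind of data a BufferGeometry holds, one typed array per attribute (a minimal sketch; addAttribute was renamed setAttribute in later three.js versions):

var geometry = new THREE.BufferGeometry();
var vertices = new Float32Array([
  0, 0, 0,   // vertex 1
  1, 0, 0,   // vertex 2
  0, 1, 0    // vertex 3
]);
geometry.addAttribute('position', new THREE.BufferAttribute(vertices, 3)); // 3 floats per vertex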
The main problem you face with several hundred geometries is the number of drawcalls required to render the scene. Generally speaking, every instance of THREE.Mesh represents a single drawcall. And those drawcalls are expensive: a single drawcall that outputs hundreds of thousands of triangles is no problem at all, but thousands of drawcalls with 100 triangles each will very quickly become a serious performance problem.
Now, there are different ways to reduce the number of drawcalls using three.js. The first is (as already mentioned in the comments) to combine multiple meshes/geometries into a single one (in the end, meshes are just a collection of triangles, so there's no requirement that they form a single "body" or anything like that). This isn't too practical in your case, as it would involve applying the position and rotation of each of your cubes via JS and updating the vertex arrays accordingly on each frame.
What you are really looking for is a WebGL-feature called geometry instancing.
This is not as easy to use as regular meshes and geometries, but not too complicated either.
With instancing, you can render a huge number of objects in a single drawcall. All of the rendered objects will share a single geometry (your cube geometry with its vertices, normals and UV coordinates). The instancing happens when you add special attributes of type InstancedBufferAttribute that contain an independent value for each instance. So you could add two per-instance attributes for position and rotation (or a single per-instance transformation matrix if you like).
These examples should pretty much be what you are looking for:
http://threejs.org/examples/?q=instancing
The only difficulty with instancing as of now is the material: you will need to provide a custom vertex-shader that knows how to apply your per-instance-attributes to the vertex-positions from the original geometry (this can also be seen in the code of the examples).
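A rough sketch of what that looks like (three.js API of that era; treat it as a starting point, not a drop-in solution):

var box = new THREE.BoxBufferGeometry(1, 1, 1);
var geometry = new THREE.InstancedBufferGeometry();
geometry.index = box.index;
geometry.attributes.position = box.attributes.position;
geometry.attributes.uv = box.attributes.uv;

var count = 1000;
var offsets = new Float32Array(count * 3);
for (var i = 0; i < count; i++) {
  offsets[i * 3 + 0] = (Math.random() - 0.5) * 100; // x
  offsets[i * 3 + 1] = (Math.random() - 0.5) * 100; // y
  offsets[i * 3 + 2] = (Math.random() - 0.5) * 100; // z
}
geometry.addAttribute('instanceOffset', new THREE.InstancedBufferAttribute(offsets, 3));

var material = new THREE.ShaderMaterial({
  vertexShader: [
    'attribute vec3 instanceOffset;', // one value per instance, not per vertex
    'void main() {',
    '  gl_Position = projectionMatrix * modelViewMatrix * vec4(position + instanceOffset, 1.0);',
    '}'
  ].join('\n'),
  fragmentShader: 'void main() { gl_FragColor = vec4(1.0); }'
});
scene.add(new THREE.Mesh(geometry, material)); // 1000 cubes, a single drawcall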
You have a webgl tag, so I'm going to give a non-three.js answer.
The best way to handle this is to allocate a float texture holding the model transform matrix data (or just vec3 positions, if that's all you need). Then you allocate a mesh chunk containing all your cube data and add an additional attribute which I refer to as the modelTransform index. For each "cube instance" in the mesh chunk, write the modelTransform index value corresponding to the correct offset in the model transform data texture.
On each frame, you calculate the correct model transform data for all the cubes and write it to the model transform data texture at the correct offsets, then upload the texture to the GPU.
In the vertex shader, access the model transform data via the modelTransform index attribute and the float texture. The rest is the same.
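In GLSL the lookup might look roughly like this (all names hypothetical; note that vertex texture fetches require the GPU to expose at least one vertex texture unit):

attribute vec3 position;
attribute float modelTransformIndex;   // per-cube index baked into the mesh chunk
uniform sampler2D modelTransformTex;   // float texture, one vec3 position per cube
uniform float texWidth;                // width of that texture in texels
uniform mat4 projectionMatrix, modelViewMatrix;

void main() {
  vec2 lookup = vec2((modelTransformIndex + 0.5) / texWidth, 0.5);
  vec3 offset = texture2D(modelTransformTex, lookup).xyz;
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position + offset, 1.0);
}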
This is what I am using in my engine and it works well for smallish objects such as cubes. Note, however, that updating 150000 cubes at 60 FPS will likely take most of your CPU resources in JS. This is unavoidable regardless of which instancing scheme you take.
If the motion/animation of each cube is fixed, then an even better way is to upload a velocity attribute and an initial creation timestamp attribute for each cube instance. On each frame, send the current time as a uniform and calculate the position as "pos += attr_velocity * getDeltaTime(attr_initTime, unif_currentTime);". This skips the CPU work altogether and allows you to render a much higher number of cubes.
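The per-frame position then falls out of the vertex shader alone, along these lines (names mirror the ones above and are equally hypothetical):

attribute vec3 position;
attribute vec3 attr_velocity;   // per-instance velocity
attribute float attr_initTime;  // per-instance creation timestamp
uniform mat4 projectionMatrix, modelViewMatrix;
uniform float unif_currentTime; // sent once per frame

void main() {
  vec3 pos = position + attr_velocity * (unif_currentTime - attr_initTime);
  gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}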

Three.js running out of texture units

I have written an app using Three.js (r73) that allows the user to load multiple .dae files using the ColladaLoader.
If the user selects a sufficient number of objects, the texture will not show for any of the objects... at this point I get this:
WebGLRenderer: trying to use 26 texture units while this GPU supports only 16
The error message seems fairly self-explanatory - does this mean I can only load 16 textures at any one time? Is there a way around this? Can I render my scene with half my objects, clear the texture units, and then render the other half?
Quite new to Three.js - so sorry if it's a stupid question.
This number is based on what your GPU supports, you can see it listed here at WebGL Report, under Max Texture Image Units: 16.
Many people confuse this number with how many textures you can have in a single scene; that is not what it means. This number represents how many textures you can use for a single object (i.e. in a single draw call).
So if you have an extremely complicated object with hundreds of separate textures, you'll have to find a way to either merge the textures together or split the object into multiple objects that can be drawn separately.
However, if you draw 1000 separate objects, each with a different texture, this shouldn't be a problem.
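You can query the actual limits of the client's GPU directly from the WebGL context, for example:

var gl = renderer.getContext(); // or canvas.getContext('webgl')
console.log(gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS));          // fragment shader, per draw call
console.log(gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS));   // vertex shader, per draw call
console.log(gl.getParameter(gl.MAX_COMBINED_TEXTURE_IMAGE_UNITS)); // both stages combined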
The warning comes from exceeding the maximum number of "total" texture units, not the vertex texture units. Refer to WebGLRenderer.js, function getTextureUnit(), for the reasoning behind this and the printing of this error message (e.g. https://searchcode.com/codesearch/view/96702746/, line 4730).
To avoid the warning, analyse the shaders and reduce the number of texture units they require for rendering.

Using a cubemap texture as diffuse source in place of a 2D one

I'm trying to project massive, 32k-resolution equirectangular maps on spheres.
Since a 32k texture is hardly accessible for older graphics cards that only support 1k-2k sized textures, and scaling a 32k image down to 1k loses a tremendous amount of detail, I've resolved to split each source map by projecting it onto 6 cube faces to make up a cubemap, so that more detail can be displayed on older cards.
However, I'm having trouble actually displaying these cubemapped spheres with three.js. I can set the MeshPhongMaterial.envMap to my cubemap, but of course this makes the sphere mesh reflect the texture instead.
An acceptable result can be produced by using ShaderMaterial along with ShaderLib['cube'] to "fake" a skysphere of some sort. But this drops all ability for lighting, normal mapping and all the other handy (and pretty) things possible with MeshPhongMaterial.
If at all possible, I'd like to not have to write an entire shader from scratch for such a simple tweak (switching one texture2D call to textureCube). Is there a way to coerce three.js to read the diffuse term from a cubemap instead of a 2D texture, or a simple way to edit the shader three.js uses internally?
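For reference, the one-call swap being described looks like this in a bare ShaderMaterial (a minimal sketch with no lighting; cubeTexture is an assumed, already-loaded THREE.CubeTexture, and a full solution would patch THREE.ShaderLib.phong the same way):

var material = new THREE.ShaderMaterial({
  uniforms: { tCube: { value: cubeTexture } },
  vertexShader: [
    'varying vec3 vDir;',
    'void main() {',
    '  vDir = normalize(position);', // object-space direction works for a sphere
    '  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
    '}'
  ].join('\n'),
  fragmentShader: [
    'uniform samplerCube tCube;',
    'varying vec3 vDir;',
    'void main() {',
    '  gl_FragColor = textureCube(tCube, vDir);', // instead of texture2D(map, vUv)
    '}'
  ].join('\n')
});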

Display text with WebGL

I want to display text with WebGL, and I know that there is not a built-in way to do this. However, I know it can be done with textures. I am new to OpenGL and don't really have much experience with shaders, so it would help if someone could explain how to set up the shaders for this. I would like to draw the entire string on the same object instead of a bunch of separate letters, and the strings are NOT preset; they will not always be the same. How can I get the text to appear? Also, how do I know how to space each letter?
I read post #7 at this page, and that sounds like it's what I want to do, but I don't understand exactly what it all means. (It's mostly the shader stuff I don't understand.)
By the way, I am using sylvester.js
There are many ways to render text but one of the simplest is called bitmap font rendering.
All you need to get started is a sprite sheet with all of the letters you might want to render. Then you simply render a quad with the texture coordinates set to the location of the character you want to draw. To render a full sentence, just draw a bunch of quads, each representing a single letter.
Your sprite sheet will be a single texture containing a grid of all the glyphs you might want to render.
Once you have that, you'll need the texture coordinates, essentially (x, y) coordinates in the range 0 to 1, for each character in the sprite texture. Use these when generating quad meshes. You'll end up drawing a run of textured quads, one per letter, to the screen.
Now that you have text on the screen, you can get fancy and take into account the glyph kerning between the letters. This allows you to render more natural text.
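As a concrete sketch, computing the texture coordinates and quads from a hypothetical 16x16 ASCII sprite sheet could look like this:

function glyphUVs(ch) {
  var code = ch.charCodeAt(0);
  var col = code % 16;             // column in the 16x16 grid
  var row = Math.floor(code / 16); // row in the grid
  var s = 1 / 16;                  // size of one cell in UV space
  return {
    u0: col * s, v0: 1 - (row + 1) * s, // bottom-left of the cell
    u1: (col + 1) * s, v1: 1 - row * s  // top-right of the cell
  };
}

// One quad (two triangles) per letter, with a fixed advance; real kerning
// would adjust the advance per character pair.
function buildTextQuads(text) {
  var positions = [], uvs = [];
  for (var i = 0; i < text.length; i++) {
    var g = glyphUVs(text[i]), x = i;
    positions.push(x, 0, 0,  x + 1, 0, 0,  x + 1, 1, 0,
                   x, 0, 0,  x + 1, 1, 0,  x, 1, 0);
    uvs.push(g.u0, g.v0,  g.u1, g.v0,  g.u1, g.v1,
             g.u0, g.v0,  g.u1, g.v1,  g.u0, g.v1);
  }
  return { positions: new Float32Array(positions), uvs: new Float32Array(uvs) };
}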
Unfortunately, I can't find a tutorial to point you to, and it's not really something that I can whip together for you here. There are many pieces to the puzzle and it's no small task (matrix math, cameras, orthographic projection, texture coords, textures, sprites, generating meshes, etc...).
If you'd like you can look through one of my projects where I have done this with WebGL. I even generate the initial sprite sheet using javascript + 2d canvas.
Sprite Sheet generated here:
https://github.com/zfedoran/prefab.js/blob/master/app/graphics/spriteFont.js
Quad Mesh generated in this file:
https://github.com/zfedoran/prefab.js/blob/master/app/controllers/labelController.js
Wrapper around WebGL:
https://github.com/zfedoran/prefab.js/blob/master/app/graphics/device.js
Or You Could
Watch Notch (the guy who made Minecraft) do this, in only about 30 minutes, in Java (fast forward to 2:21 hours in):
http://www.twitch.tv/notch/b/487451713
http://www.twitch.tv/notch/b/487621698
Good luck, and have fun :)
Three.js has actual text glyph support. In addition, dimensionthree.net uses textures on shapes. If you need source, let me know.
There is also my http://taccGL.org library, which can draw HTML text on a 2D canvas and then use it as a texture on 3D objects drawn on a 3D/WebGL canvas.

Get the image of an element that gets drawn in the three.js while rendering

Well, the way I think three.js handles the render is that it turns each element into an image and then draws it on a context.
Where can I get more information on that?
And if I am right, is there a way to get that image and manipulate it?
Any information will be appreciated.
Three.js internally has a description of what the scene looks like in 3D space, including all the vertices and materials among other things. The rendering process takes that 3D representation and projects it into a 2D space. Three.js has several renderers, including WebGLRenderer (the most common), CanvasRenderer, and CSS3DRenderer. They all use different methods to draw that 2D projection:
WebGLRenderer uses JavaScript APIs that map to OpenGL graphics commands. As much as possible, the client's GPU takes parts of the 3D representation and more or less performs the 2D projection itself. As each geometry is rendered, it is painted onto a buffer. The complete buffer is a frame, which is the complete 2D projection of the 3D space that shows up in your <canvas>.
CanvasRenderer uses JavaScript APIs for 2D drawing. It does the 2D projection internally (which is slower) but otherwise works similarly to the WebGLRenderer at a high level.
CSS3DRenderer uses DOM elements and CSS3D transforms to represent the 3D scene. This roughly means that the browser takes normal 2D DOM elements and transforms them into 3D space to match the Three.js 3D internal representation, then projects them back onto the page in 2D.
(All this is highly simplified.)
It's important to understand that the frame rendered by the WebGL and Canvas renderers is the resulting picture that you see on your screen, but it's not an <img>. Typically, your browser will render 60 frames per second. You can extract a frame by dumping the <canvas> into an image. Typically you'll want to stop the animation loop to do this, as otherwise you might not capture the frame you want. Capturing frames this way is slow, and given that your browser renders so many frames per second, there is no easy way to capture every frame.
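With three.js, capturing a frame might look like the sketch below (preserveDrawingBuffer, or capturing immediately after render(), is generally needed for WebGL, since the buffer may be cleared once the frame is presented):

var renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });
// ... set up scene and camera, then:
renderer.render(scene, camera);
var img = new Image();
img.src = renderer.domElement.toDataURL('image/png'); // the frame, as an image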
Additionally, Chrome has built-in canvas inspection tools which allow you to take a closer look at each frame the browser paints.
You can't easily intercept the buffer as Three.js is rendering the frame, but you can draw directly onto the canvas as you normally would. renderer.context is the graphics context that Three.js draws onto, where renderer is the Renderer instance you create when setting up a Three.js scene. (A graphics context is basically a helper to assemble the buffer that makes up the frame.)
