Using Native WebGL API to assign a video texture to Sphere - javascript

I'm programming with raw WebGL and want to assign a video texture to a sphere. I've tried many, many times and failed every time. Is there a good way to apply a video texture to a sphere using only the native WebGL API, without any other library or framework such as three.js (I'm not even including gl-matrix)? As I understand it, to put a video texture on a sphere you have to construct the sphere's vertex positions, texture coordinates and vertex indices, and you have to update the texture frame after frame. I did all of that, but nothing displays on the screen. Can anyone help me and show me a demo of this? Thank you.
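No accepted answer is recorded here, but below is a minimal sketch (not a complete program) of the two pieces the question describes, assuming a playing <video> element and a WebGL context already exist: building the sphere's positions, texture coordinates and indices, and re-uploading the current video frame on every animation frame. Shader compilation, buffer setup and the projection/model-view matrices (which you would have to write by hand, since gl-matrix is excluded) are omitted.

```javascript
// Sketch only: sphere geometry plus a video-backed texture in raw WebGL.
function createSphere(radius, latBands, lonBands) {
  const positions = [], uvs = [], indices = [];
  for (let lat = 0; lat <= latBands; lat++) {
    const theta = (lat * Math.PI) / latBands;
    const sinT = Math.sin(theta), cosT = Math.cos(theta);
    for (let lon = 0; lon <= lonBands; lon++) {
      const phi = (lon * 2 * Math.PI) / lonBands;
      positions.push(
        radius * Math.cos(phi) * sinT,   // x
        radius * cosT,                   // y
        radius * Math.sin(phi) * sinT    // z
      );
      uvs.push(1 - lon / lonBands, 1 - lat / latBands);
    }
  }
  for (let lat = 0; lat < latBands; lat++) {
    for (let lon = 0; lon < lonBands; lon++) {
      const a = lat * (lonBands + 1) + lon;
      const b = a + lonBands + 1;
      indices.push(a, b, a + 1, b, b + 1, a + 1);   // two triangles per quad
    }
  }
  return { positions, uvs, indices };
}

const gl = document.querySelector('canvas').getContext('webgl');
const video = document.querySelector('video');   // must be playing (autoplay/muted or user gesture)

const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
// Video frames are usually not power-of-two, so clamp and avoid mipmaps.
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);

function updateVideoTexture() {
  if (video.readyState >= video.HAVE_CURRENT_DATA) {
    gl.bindTexture(gl.TEXTURE_2D, texture);
    // Re-upload the current video frame; call this once per rendered frame.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  }
}

function render() {
  updateVideoTexture();
  // ...bind the position/uv/index buffers, set the matrices,
  // then gl.drawElements(gl.TRIANGLES, indexCount, gl.UNSIGNED_SHORT, 0)
  requestAnimationFrame(render);
}
requestAnimationFrame(render);
```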

Related

Volume ray tracing using three.js without textures

I'm trying to visualize hydrogen wave functions, and would like to do this using volume ray tracing/casting. All the guides online for creating volume renderings are based on having 2D textures from some medical imaging. In my case, I don't have any data as images; instead, the 3D data is already in memory (I'm using it to generate particles right now).
Do I really need to convert all my 3D data to 2D textures, only to load them in again, and fake a 3D texture? If not, how can it be done without textures?
Yes, from your link I understand that you have a function that takes a 3D coordinate and returns a probability between 0 and 1. You can use this directly during the evaluation of each ray.
For each ray,
for each distance ∆ along the ray
calculate the coordinates at distance ∆ from the camera
calculate the probability at those coordinates using your function
add the probability to the ray's accumulated color
Using this method, you skip the particle positions that you have rendered in the linked example, and use the function directly.
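A hedged sketch of that loop as a GLSL fragment shader, written as a string the way it would be embedded in JavaScript (e.g. in a ShaderMaterial). probabilityAt, rayOrigin and vRayDir are placeholders standing in for the asker's wave function, the camera position and the per-fragment ray direction:

```javascript
const fragmentShader = /* glsl */ `
  uniform vec3 rayOrigin;   // camera position (assumed to be supplied as a uniform)
  varying vec3 vRayDir;     // ray direction (assumed to come from the vertex shader)

  // Placeholder: evaluate |psi|^2 analytically here instead of reading a texture.
  float probabilityAt(vec3 p) {
    return exp(-length(p));
  }

  void main() {
    vec3 dir = normalize(vRayDir);
    float stepSize = 0.05;   // distance increment along the ray
    float accum = 0.0;
    for (int i = 0; i < 200; i++) {
      vec3 p = rayOrigin + dir * (float(i) * stepSize);
      accum += probabilityAt(p) * stepSize;   // accumulate density along the ray
    }
    gl_FragColor = vec4(vec3(accum), 1.0);
  }
`;
```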

Using a cubemap texture as diffuse source in place of a 2D one

I'm trying to project massive, 32k-resolution equirectangular maps on spheres.
Since a 32k texture is hardly accessible for older graphics cards supporting 1k-2k sized textures, and scaling a 32k image down to 1k loses a tremendous amount of detail, I've resorted to splitting each source map by projecting it onto the 6 faces of a cubemap, so that more detail can be displayed on older cards.
However, I'm having trouble actually displaying these cubemapped spheres with three.js. I can set the MeshPhongMaterial.envMap to my cubemap, but of course this makes the sphere mesh reflect the texture instead.
An acceptable result can be produced by using ShaderMaterial along with ShaderLib['cube'] to "fake" a skysphere of some sort. But this drops all ability for lighting, normal mapping and all the other handy (and pretty) things possible with MeshPhongMaterial.
If at all possible, I'd like to not have to write an entire shader from scratch for such a simple tweak (switching one texture2D call to textureCube). Is there a way to coerce three.js to read the diffuse term from a cubemap instead of a 2D texture, or a simple way to edit the shader three.js uses internally?
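No answer is recorded here, but one possible approach, sketched below, is to keep MeshPhongMaterial and patch its shader via material.onBeforeCompile (available in reasonably recent three.js), swapping the #include <map_fragment> chunk for a textureCube lookup. The uniform name diffuseCube and the varying vWorldDir are made up for this example, and the exact chunk contents differ between three.js versions, so treat this as a starting point rather than a drop-in solution:

```javascript
const material = new THREE.MeshPhongMaterial();

material.onBeforeCompile = (shader) => {
  shader.uniforms.diffuseCube = { value: cubeTexture };   // cubeTexture: an assumed THREE.CubeTexture

  shader.vertexShader = shader.vertexShader
    .replace('#include <common>', '#include <common>\nvarying vec3 vWorldDir;')
    .replace('#include <begin_vertex>',
      '#include <begin_vertex>\nvWorldDir = normalize(position);');   // direction from the sphere centre

  shader.fragmentShader = shader.fragmentShader
    .replace('#include <common>',
      '#include <common>\nuniform samplerCube diffuseCube;\nvarying vec3 vWorldDir;')
    .replace('#include <map_fragment>',
      'diffuseColor *= textureCube(diffuseCube, vWorldDir);');   // diffuse term from the cubemap
};
```

Lighting, normal mapping and the rest of the Phong pipeline are untouched, since only the diffuse lookup is replaced.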

THREE.JS : How do I dynamically change a specific face for a JSON loaded model?

I have a JSON loaded mirror that I would like to hook up to a webcam. The mirror's reflection will be updated with video coming from a canvas. I was able to follow this source http://threejs.org/examples/#canvas_materials_video
to display a video on a canvas.
However, I need the video texture to run on a specific face of the object. I've tried targeting it via geometry.faces[i].materialIndex to no avail. The animation is moving, so using a separate plane to make it look like the texture is on the model is also not optimal. Is there any advice on what to do?
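No answer is recorded here, but a sketch of the usual approach follows (API details vary across three.js versions, and mirrorFaceIndices below is a hypothetical list of the faces that make up the mirror surface in the loaded JSON geometry):

```javascript
const video = document.getElementById('webcam');        // assumed <video> element fed by the webcam
const videoTexture = new THREE.VideoTexture(video);     // or update a canvas-backed texture manually, as in the linked example
videoTexture.minFilter = THREE.LinearFilter;

const baseMaterial = new THREE.MeshPhongMaterial({ color: 0x888888 });
const mirrorMaterial = new THREE.MeshBasicMaterial({ map: videoTexture });

// Point the mirror faces at material slot 1, leave the rest on slot 0.
mirrorFaceIndices.forEach((i) => {
  geometry.faces[i].materialIndex = 1;
});
geometry.groupsNeedUpdate = true;   // old Geometry API; flags the materialIndex change

const mesh = new THREE.Mesh(geometry, [baseMaterial, mirrorMaterial]);
scene.add(mesh);
```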

How to get angles value of perspective camera in Three.js?

How can I get the value of each angle of my perspective camera in the 3D scene?
I'm using Three.js library.
To be more accurate, I've marked what I want to find out with the sign below:
These are the coordinates I need to know:
I need this because I'm creating a real-world map engine with movement through the 3D scene via the mouse cursor.
What I'm trying to achieve is available here:
http://www.zephyrosanemos.com/windstorm/current/live-demo.html
As you can see in this sample, new terrain is loaded when the camera intersects a new location (and tiles that are no longer visible are garbage-collected once the camera leaves the old viewport):
Here is a screenshot from my three.js application:
As you can see, I'm loading my scene statically: only one plane with buildings is available (the building data is loaded from my server; I originally took it from some OSM services).
It can only be controlled with the keyboard (e.g. the 3D scene moves to a new location when you press the arrow keys; you can also see empty space in the map :) that's only because I trimmed the data in the DB for testing purposes. When the application is ready it won't be empty; it's just much easier to work with a small number of records in the DB). All meshes are deleted, and with each new movement the new data is loaded and the new buildings are rendered.
But I want them to load dynamically with camera movement, as in the dynamic terrain generation example. I suppose I should prepare a large matrix of planes that loads data only for the 8 surrounding planes (as in the terrain generation sample) and write the logic for the camera intersecting/leaving the old view to make this dynamic.
So... I want you to help me with this piece of hard task :)
To get the field of view angle, simply read the value of this field:
THREE.PerspectiveCamera.fov
With that angle you can build an imaginary view cone (the camera frustum) and test it for collisions. For the collision part, refer to this question:
How to detect collision in three.js?
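For the specific case here (deciding which terrain/building tiles the camera can see), a sketch using three.js's own frustum helper follows; tileBoundingSphere is a hypothetical THREE.Sphere around a candidate tile, and in older three.js releases the method is called setFromMatrix rather than setFromProjectionMatrix:

```javascript
const frustum = new THREE.Frustum();
const projScreenMatrix = new THREE.Matrix4();

camera.updateMatrixWorld();
projScreenMatrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
frustum.setFromProjectionMatrix(projScreenMatrix);   // older releases: frustum.setFromMatrix(...)

// Load tiles whose bounding volume falls inside the view, unload the rest.
if (frustum.intersectsSphere(tileBoundingSphere)) {
  // the camera can see this tile: make sure its buildings are loaded
} else {
  // the tile has left the view: dispose of its meshes
}
```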

Get the image of an element that gets drawn in the three.js while rendering

The way I think three.js handles rendering is that it turns each element into an image and then draws it onto a context.
Where can I get more information on that?
And if I'm right, is there a way to get that image and manipulate it?
Any information will be appreciated.
Three.js internally has a description of what the scene looks like in 3D space, including all the vertices and materials among other things. The rendering process takes that 3D representation and projects it into a 2D space. Three.js has several renderers, including WebGLRenderer (the most common), CanvasRenderer, and CSS3DRenderer. They all use different methods to draw that 2D projection:
WebGLRenderer uses JavaScript APIs that map to OpenGL graphics commands. As much as possible, the client's GPU takes parts of the 3D representation and more or less performs the 2D projection itself. As each geometry is rendered, it is painted onto a buffer. The complete buffer is a frame, which is the complete 2D projection of the 3D space that shows up in your <canvas>.
CanvasRenderer uses JavaScript APIs for 2D drawing. It does the 2D projection internally (which is slower) but otherwise works similarly to the WebGLRenderer at a high level.
CSS3DRenderer uses DOM elements and CSS3D transforms to represent the 3D scene. This roughly means that the browser takes normal 2D DOM elements and transforms them into 3D space to match the Three.js 3D internal representation, then projects them back onto the page in 2D.
(All this is highly simplified.)
It's important to understand that the frame rendered by the WebGL and Canvas renderers is the resulting picture that you see on your screen, but it's not an <img>. Typically, your browser will render 60 frames per second. You can extract a frame by dumping the <canvas> into an image. Typically you'll want to stop the animation loop to do this, as otherwise you might not capture the frame you want. Capturing frames this way is slow, and given how many frames per second your browser renders, there is no easy way to capture every one of them.
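A sketch of that extraction, assuming a WebGLRenderer; note that the WebGL drawing buffer is normally cleared after compositing, so either create the renderer with preserveDrawingBuffer: true or call toDataURL immediately after a render call:

```javascript
const renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });
// ...add renderer.domElement to the page, build the scene and camera...

renderer.render(scene, camera);                              // render the frame you want to capture
const dataURL = renderer.domElement.toDataURL('image/png');  // dump the <canvas> to a PNG data URL

const img = new Image();
img.src = dataURL;   // now an ordinary image you can inspect, manipulate or download
```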
Additionally, Chrome has built-in canvas inspection tools which allow you to take a closer look at each frame the browser paints.
You can't easily intercept the buffer as Three.js is rendering the frame, but you can draw directly onto the canvas as you normally would. renderer.context is the graphics context that Three.js draws onto, where renderer is the Renderer instance you create when setting up a Three.js scene. (A graphics context is basically a helper to assemble the buffer that makes up the frame.)
