TL;DR: How do I make textures appear bigger than the faces they are attached to, with a fading effect, so that all the textures overlap each other?
I'm learning three.js by trying to recreate the game Warzone 2100. :)
I'm loading a default texture for the ground with:
var texture = THREE.ImageUtils.loadTexture('tile-53.png'); // Specify file
texture.wrapS = texture.wrapT = THREE.RepeatWrapping; // Make the texture repeat
texture.repeat.set(map_width, map_height); // Repeat for every face
texture.anisotropy = 100; // Request a high level of anisotropic filtering (the hardware caps this, typically at 16)
At the moment it looks like this. Now compare it to this.
Warzone 2100 finally started looking good with the new renderer, especially because they made the textures render bigger than the faces and overlap each other, making the sharp borders vanish. Is it possible to achieve the same effect with three.js, and if so, how would I go about it?
A texture is attached to its geometry in the first place (speaking about 3D), so making just the texture itself overlap other textures is not really possible. You can perfectly well make your geometries overlap each other, though.
For your textures looking "bigger", try looking here.
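To illustrate the overlap idea, here is a minimal sketch (not the game's actual technique): each ground tile gets its own plane that is slightly larger than its grid cell, and a tile texture whose edges fade to transparent so neighbouring tiles blend into each other. The tile size, overlap factor, and texture path are placeholders.
var tileSize = 10;  // placeholder world size of one grid cell
var overlap = 1.5;  // each tile extends 50% past its cell so neighbours overlap

var tileTexture = new THREE.TextureLoader().load('tile-53.png'); // assumed to have faded (alpha) edges
var tileMaterial = new THREE.MeshBasicMaterial({
    map: tileTexture,
    transparent: true, // lets the faded edges blend with the tiles underneath
    depthWrite: false  // avoids z-fighting between the overlapping planes
});

for (var x = 0; x < map_width; x++) {
    for (var y = 0; y < map_height; y++) {
        // each plane is larger than its grid cell, so adjacent tiles overlap
        var tile = new THREE.Mesh(
            new THREE.PlaneGeometry(tileSize * overlap, tileSize * overlap),
            tileMaterial
        );
        tile.rotation.x = -Math.PI / 2; // lay the plane flat as ground
        tile.position.set(x * tileSize, 0, y * tileSize);
        scene.add(tile);
    }
}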
Related
I have a THREE.PerspectiveCamera to which I add a THREE.CameraHelper.
cameraLocal = new THREE.PerspectiveCamera(70, 1, 20, 120);
scene.add(cameraLocal);
cameraLocalHelper = new THREE.CameraHelper(cameraLocal);
cameraLocal.add(cameraLocalHelper);
However, when I rotate the camera,
cameraLocal.rotateX(0.1);
the CameraHelper rotates by a larger amount than the camera.
I've made a demo that shows this.
Initially, cameraLocal can't see the helper lines drawn by the CameraHelper. However, if cameraLocal is rotated either way about the x-axis, the helper lines come into view, presumably because the CameraHelper rotates by a different amount.
Could anyone point out what I'm doing wrong here?
I'm using the build of three.js from 5-Aug-2019.
CameraHelper needs to be added directly to the scene.
Do not try to add it as a child of the camera itself.
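For reference, a corrected version of the setup from the question (a sketch, keeping the same variable names):
cameraLocal = new THREE.PerspectiveCamera(70, 1, 20, 120);
scene.add(cameraLocal);

cameraLocalHelper = new THREE.CameraHelper(cameraLocal);
scene.add(cameraLocalHelper); // add the helper to the scene, not to cameraLocal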
three.js r.107
I am writing a 3D game in JavaScript with three.js. I made a skybox, and it works, but if I make my camera's near and far distances too small it doesn't show.
I understand why this happens: the camera attached to my player doesn't see as far as the skybox. If I make my camera's "near" and "far" attributes large enough (corresponding to the size of my game map) I can make it so that my skybox is always within range, but I don't want that, since I don't want the camera to see all the objects that far away.
Any ideas on how to force the camera to see the skybox but still have a small "far" attribute so as to not see all the objects in the world?
Any help would be greatly appreciated.
There's scene.background, which can be set to a CubeTexture. The background is drawn behind everything and isn't clipped by the camera's far plane, so you don't need an oversized skybox mesh.
Just want to add an example, because someone might find it useful here:
var loader = new THREE.CubeTextureLoader();
loader.load([
    // CubeTextureLoader expects the faces in the order +X, -X, +Y, -Y, +Z, -Z
    './img/sky/galaxy+X.jpg', './img/sky/galaxy-X.jpg',
    './img/sky/galaxy+Y.jpg', './img/sky/galaxy-Y.jpg',
    './img/sky/galaxy+Z.jpg', './img/sky/galaxy-Z.jpg'
], function (texture) {
    scene.background = texture;
});
I have a forked three.js codepen experiment that has square particles floating around.
But I'm trying to modify it so that I can pass text (geometry?) into it, replacing the square particles, somewhat like a word / tag cloud. Is this possible?
Link to current codepen:
https://codepen.io/farisk/pen/pWQGxB
Here's what I wish to achieve:
I'm currently not sure where to start.
I was thinking of somehow using a text geometry
var loader = new THREE.FontLoader();
let font = loader.parse(fontJSON);
var geometry = new THREE.TextGeometry("hello", {font: font, size: 120, height: 10, material: 0});
But someone mentioned that this is not the right way? I'm pretty new to three.js / html canvas so any help is appreciated.
Passing in a different geometry per particle is usually not possible in particle systems, because using the same geometry for every particle is exactly what makes these systems efficient.
To achieve the effect that you are looking for there are basically two options:
Render all texts into a single sprite texture (texture atlas) and provide texture coordinates for each particle so that each particle renders the correct text. (Only two-dimensional rendering of the text, and not scalable to a large number of texts.) See this example.
Make each text object its own geometry and render them without a particle system; see the sketch below for one way to do this. (You lose the performance gain of particle systems.)
If you really just want a tag cloud, you could also use plain JavaScript and transform the positions of DOM text elements according to some calculated 3D positions.
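In the spirit of the second option, here is a minimal sketch where each word becomes its own object: the word is drawn onto a 2D canvas, the canvas is used as the texture of a THREE.Sprite, and the sprites are scattered in 3D. The word list, canvas size, and placement are placeholders.
function makeTextSprite(text) {
    // draw the word onto a small canvas
    var canvas = document.createElement('canvas');
    canvas.width = 256;
    canvas.height = 64;
    var ctx = canvas.getContext('2d');
    ctx.font = '48px sans-serif';
    ctx.fillStyle = '#ffffff';
    ctx.textAlign = 'center';
    ctx.textBaseline = 'middle';
    ctx.fillText(text, canvas.width / 2, canvas.height / 2);

    // use the canvas as a sprite texture
    var texture = new THREE.CanvasTexture(canvas);
    var material = new THREE.SpriteMaterial({ map: texture, transparent: true });
    return new THREE.Sprite(material);
}

['hello', 'world', 'three.js'].forEach(function (word) {
    var sprite = makeTextSprite(word);
    sprite.position.set(
        (Math.random() - 0.5) * 200,
        (Math.random() - 0.5) * 200,
        (Math.random() - 0.5) * 200
    );
    sprite.scale.set(40, 10, 1); // match the canvas aspect ratio
    scene.add(sprite);
});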
I am new to Three.js. I am using this example with a 6-image cube for a panorama effect, where one can pan and zoom in and out around the cube.
https://threejs.org/examples/?q=panorama#webgl_panorama_equirectangular
I want to figure out how, at the maximum zoom-in level, I can transition the user into a different panorama cube (with a different image source), mapped to this particular cube part. So I would, sort of, open the next scene to take the user further to the next level of their journey.
This is nearly what Google Street View does when you click on arrows to move forward down the road.
I do not see many examples out there. I researched and saw this may be possible by creating two scenes? I would appreciate any ideas on how to make it work.
Detecting WHEN to transition:
In the example, the mouse event handlers are already provided. The zoom is handled in onDocumentMouseWheel by adjusting the camera's fov property. "Zoom In" reduces the fov, and "Zoom Out" increases it. It would be trivial to detect when the fov has reached a minimum/maximum value, which would trigger your transition to a new scene.
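As a rough sketch (the bounds and the transitionToNextScene helper are placeholders, not part of the example):
var MIN_FOV = 10, MAX_FOV = 75; // placeholder zoom limits

function onDocumentMouseWheel(event) {
    var fov = camera.fov + event.deltaY * 0.05;
    camera.fov = Math.min(Math.max(fov, MIN_FOV), MAX_FOV); // clamp, as the example does
    camera.updateProjectionMatrix();

    if (camera.fov === MIN_FOV) {
        transitionToNextScene(); // hypothetical: decide and load the next panorama here
    }
}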
Detecting WHERE to transition:
The next step is determining into which new scene you will transition. You could do something hotspot-like, where you shoot a ray from the camera to see if it hit a particular place (for example a THREE.Sphere which you have strategically positioned). But for simplicity, let's assume you only have the 6 directions you mentioned, and that you're still using the example's mouse control.
Camera movement is handled in onDocumentMouseMove by updating the lat and lon variables (which appear to be in degrees). (Note: It seems lon increases without bounds, so for clarity it might be good to give it a reset value so it can only ever be between 0.0-359.99 or something.) You can get all math-y to check the corners better, or you could simply check your 45's:
if (lat > 45) {
    // you're looking up
} else if (lat < -45) {
    // you're looking down
} else {
    // you're looking at a side, check "lon" instead
}
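For the last branch, here is a sketch of the lon check (assuming lon has been wrapped into the 0-359.99 range suggested above; which quadrant maps to which face depends on the example's math, so the labels are placeholders):
lon = ((lon % 360) + 360) % 360; // wrap into 0-359.99

if (lon >= 315 || lon < 45) {
    // facing "side A" (map each quadrant to one of your panorama faces)
} else if (lon < 135) {
    // facing "side B"
} else if (lon < 225) {
    // facing "side C"
} else {
    // facing "side D"
}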
Your look direction determines to which scene you will transition, should you encounter your maximum zoom.
Transitioning
There are lots of ways you can do this. You could simply replace the texture on the cube that makes up the panorama. You could swap in a totally different THREE.Scene. You could reset the camera--or not. You could play with the lights dimming out/in while the transition happens. You could apply some post-processing to obscure the transition effect. This part is all style, and it's all up to you.
Addressing @Marquizzo's concern:
The lighting is simply a suggestion for a transition. The example doesn't use a light source because the material is a MeshBasicMaterial (doesn't require lighting). The example also doesn't use scene.background, but applies the texture to an inverted sphere. There are other methods one can use if you simply can't affect the "brightness" of the texture (such as CSS transitions).
I added the following code to the example to make it fade in and out, just as an example.
// These are in the global scope, defined just before the call to init();
// I moved "mesh" to the global scope to access its material during the animation loop.
var mesh = null,
    colorChange = -0.01;

// This code is inside the "update" function, just before the call to renderer.render(...);
// It causes the color of the material to vary between white/black, giving the fading effect.
mesh.material.color.addScalar(colorChange);
if (mesh.material.color.r + colorChange < 0 || mesh.material.color.r + colorChange > 1) { // not going full epsilon checking for an example...
    colorChange = -colorChange;
}
One could even affect the opacity value of the material to make one sphere fade away, and another sphere fade into place.
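A sketch of that opacity variant (assuming two sphere meshes, oldMesh fading out while newMesh fades in, both with transparent materials):
oldMesh.material.transparent = true;
newMesh.material.transparent = true;

// inside the update loop:
oldMesh.material.opacity = Math.max(0, oldMesh.material.opacity - 0.01);
newMesh.material.opacity = Math.min(1, newMesh.material.opacity + 0.01);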
My main point is that the transition can be accomplished in a variety of ways, and that it's up to @Vad to decide what kind of effect to use.
I'm a newbie with three.js and WebGL.
In my application, there is a 3D scene containing two objects:
The first object is a big sphere.
The second object is a smaller sphere, which sits on the surface of the first.
The big sphere rotates around its axis, and the camera can also be rotated around the spheres.
Since the small sphere sits on the surface of the big sphere, it rotates along with it. The small sphere is visible to us when the big sphere turns its side toward the camera, and it is hidden when the big sphere is between it and the camera.
The question is: how do I determine when the small sphere is visible to the camera, and when it is not?
Also, I need to get the 2D screen coordinates of the small sphere when it is visible. How can I do this?
This can be accomplished with three.js's built-in raycaster and projector functionalities. To start, try taking a look at this demo and its source code. Here is another example. In this way you can determine which objects are closer to an invisible line that is emitted from the camera's position.
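For example, a minimal occlusion test with THREE.Raycaster might look like this (a sketch, assuming your meshes are named bigSphere and smallSphere):
var raycaster = new THREE.Raycaster();
var direction = new THREE.Vector3();

function isSmallSphereVisible() {
    // cast a ray from the camera toward the small sphere
    direction.subVectors(smallSphere.position, camera.position).normalize();
    raycaster.set(camera.position, direction);

    var hits = raycaster.intersectObjects([bigSphere, smallSphere]);
    // visible if the first thing the ray hits is the small sphere
    return hits.length > 0 && hits[0].object === smallSphere;
}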
Otherwise, if you are simply interested in which of the two objects is closer to the camera, you can simply check which of their positions is at a lesser distance from the camera. The three-dimensional distance formula comes in handy:
bigSphereDistance = Math.sqrt( Math.pow(camera.position.x - big.position.x,2) +
Math.pow(camera.position.y - big.position.y,2) +
Math.pow(camera.position.z - big.position.z,2) );
smallSphereDistance = Math.sqrt( Math.pow(camera.position.x - small.position.x,2) +
Math.pow(camera.position.y - small.position.y,2) +
Math.pow(camera.position.z - small.position.z,2) );
//then check...
bigSphereDistance > smallSphereDistance ? /*case*/ : /*case*/;
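For reference, the same distances can be computed with three.js's built-in helper (a sketch, using the same variable names):
bigSphereDistance = camera.position.distanceTo(big.position);
smallSphereDistance = camera.position.distanceTo(small.position);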
Intuitively, the small sphere is visible when its distance is less than that of the big sphere, with a buffer of the small sphere's radius.
To answer your second question, finding any object's 2D screen coordinates can be accomplished like this.
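As a sketch of that projection step (assuming renderer is your WebGLRenderer), Vector3.project converts a world position into normalized device coordinates, which can then be mapped to pixels:
var projected = smallSphere.position.clone().project(camera); // NDC in [-1, 1]
var halfWidth = renderer.domElement.clientWidth / 2;
var halfHeight = renderer.domElement.clientHeight / 2;
var screenX = projected.x * halfWidth + halfWidth;
var screenY = -projected.y * halfHeight + halfHeight; // y is flipped between NDC and screen space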