I am trying to add some camera controls to a three.js scene.
I looked at this example and it seems that it is handled entirely with these lines:
controls = new THREE.TrackballControls( camera );
controls.rotateSpeed = 1.0;
controls.zoomSpeed = 1.2;
controls.panSpeed = 0.8;
controls.noZoom = false;
controls.noPan = false;
controls.staticMoving = true;
controls.dynamicDampingFactor = 0.3;
These lines use THREE.TrackballControls, which comes from js/controls/TrackballControls.js.
My question is: what exactly is TrackballControls.js? I cannot find it in the three.js download bundle. Is it an extension? Where can I find it (apart from taking it directly from the example's files)?
TrackballControls.js is in the jsm/controls sub-directory of the examples directory.
https://github.com/mrdoob/three.js/tree/master/examples/jsm/controls
It is part of the examples -- not the library. You must include it explicitly in your project. You are free to modify it to your liking.
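Because it ships under examples/jsm, it is an ES module that you import alongside the core library. A minimal sketch, assuming three was installed from npm so the examples path resolves:

import * as THREE from 'three';
import { TrackballControls } from 'three/examples/jsm/controls/TrackballControls.js';

const controls = new TrackballControls( camera, renderer.domElement );
// TrackballControls has no internal loop of its own; call controls.update() every frame.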
You may also want to consider OrbitControls, which is appropriate if your scene has a natural "up" direction.
three.js r.147
I noticed that the TrackballControls linked by @WestLangley is much slower than the old version used by this example.
Fiddle with new code: https://jsfiddle.net/vt8n6dcs/1/
Fiddle with old code: https://jsfiddle.net/vt8n6dcs/
I tested it with Firefox 41.0.2. I ran no benchmarks, but the performance degradation is quite evident: when you start the rotation using the mouse, the image update sometimes lags. It also happens with the old version, but far less frequently. Not surprisingly, performance seems about the same in Chrome 48.0.2564.82.
Furthermore, mouse sensitivity is much lower: you have to move the mouse a lot to get an appreciable effect on the image. This happens in both Firefox and Chrome.
The only problem I found with the old version is that the center of the controls is always set to the center of the page. You can fix it by using the newer version's code for the handleResize() function:
this.handleResize = function () {
    if ( this.domElement === document ) {
        this.screen.offsetLeft = 0;
        this.screen.offsetTop = 0;
        this.screen.width = window.innerWidth;
        this.screen.height = window.innerHeight;
    } else {
        var box = this.domElement.getBoundingClientRect();
        // adjustments come from similar code in the jquery offset() function
        var d = this.domElement.ownerDocument.documentElement;
        this.screen.offsetLeft = box.left + window.pageXOffset - d.clientLeft;
        this.screen.offsetTop = box.top + window.pageYOffset - d.clientTop;
        this.screen.width = box.width;
        this.screen.height = box.height;
    }
    this.radius = ( this.screen.width + this.screen.height ) / 4;
};
You can include it directly from the examples. Insert the following line into your HTML file's <head>:
<script src="https://threejs.org/examples/js/controls/TrackballControls.js"></script>
Here is a demo.
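For completeness, a minimal usage sketch, assuming the script above still defines the global THREE.TrackballControls (as the legacy examples/js build did):

var controls = new THREE.TrackballControls( camera, renderer.domElement );

// keep the control's internal screen rectangle in sync with the window
window.addEventListener( 'resize', function () {
    controls.handleResize();
}, false );

function animate() {
    requestAnimationFrame( animate );
    controls.update(); // TrackballControls must be updated every frame
    renderer.render( scene, camera );
}
animate();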
Related
I have a simple JavaScript application that spins a wheel based on drag velocity. It currently works fine in the browser, despite some bugs. I started testing it on my iPad and the JavaScript doesn't load at all; I'm assuming it encounters an internal error. I went through it, running an alert at the end of the program and uncommenting each line sequentially, and the hang-ups are in the innerHTML modifications inside the drawFrame function. I'm wondering what I'm missing here; I've read that the syntax is supported on iPad, but it baffles me why this won't run. The code is attached in a Pastebin for reference: https://pastebin.com/H19b0sN5. Below is an example of code that would break the display.
var drawFrame = function(){
    if(isDragging){
        var mouseAngle = getMouseAngle()
        var delta = getAngleDelta(lastFrameMouseAngle, mouseAngle);
        currentWheelAngle += delta;
        lastFrameMouseAngle = mouseAngle
        angleHistoryQueue.push(currentWheelAngle)
        if (angleHistoryQueue.length > velocityAverageSpan){
            angleHistoryQueue.shift()
        }
    }
    else{
        currentWheelAngle += angularVelocity
        if( angularVelocity != 0){
            var direction = angularVelocity / Math.abs(angularVelocity);
            angularVelocity = direction * Math.max(Math.abs(angularVelocity) - deceleration, 0)
        }
    }
    head.innerHTML = `<h1>${currentWheelAngle}</h1>`
    wheelImage.style.transform = `rotate(${currentWheelAngle}deg)`
}
This turned out to be an ES6 template literal compatibility issue, as Robin Zigmond advised me to look into.
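For anyone hitting the same thing on an older iOS Safari, the two offending lines can be rewritten without template literals. A minimal ES5-compatible sketch:

// plain string concatenation instead of ES6 template literals
head.innerHTML = '<h1>' + currentWheelAngle + '</h1>';
wheelImage.style.transform = 'rotate(' + currentWheelAngle + 'deg)';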
I've been trying to experiment with box2d and threejs.
Box2d has a series of JS ports, and I've used them successfully in projects so far, as well as threejs in others. But I'm finding that when I include the latest threejs and box2dweb together, threejs seems to misbehave just by being loaded alongside box2dweb. Maybe I'm missing something really simple, like a better way to load them in together, or a way to section them off from one another?
I've tried a few iterations of the box2d JS code now, and I always seem to run into the same problem with later versions of threejs and box2d together (currently threejs version 91).
The problem I'm seeing is very weird.
I'm really hoping someone from either the box2d camp or threejs camp can help me out with this one, please?
Below is a very simple example where I don't initialize anything to do with box2d; just having the library included causes problems. You can test this by removing that resource, after which it behaves as it should.
The demo below uses threejs 91 and box2dweb. Every few seconds it is supposed to create either a box or a simple sphere, each with a random colour. In this very simple demo you will see that the mesh type never changes and the colour seems to propagate across all mesh instances. However, if you remove the box2dweb resource from the left tab, it works absolutely fine. Very odd :/
jsfiddle link here
class Main {
    constructor(){
        this._container = null;
        this._scene = null;
        this._camera = null;
        this._renderer = null;
        console.log('| Main |');
        this.init();
    }
    init(){
        this.initScene();
        this.addBox(0, 0, 0);
        this.animate();
    }
    initScene() {
        this._container = document.getElementById('viewport');
        this._scene = new THREE.Scene();
        this._camera = new THREE.PerspectiveCamera(75, 600 / 400, 0.1, 1000);
        this._camera.position.z = 15;
        this._camera.position.y = -100;
        this._camera.lookAt(new THREE.Vector3());
        this._renderer = new THREE.WebGLRenderer({antialias:true});
        this._renderer.setPixelRatio( 1 );
        this._renderer.setSize( 600, 400 );
        this._renderer.setClearColor( 0x000000, 1 );
        this._container.appendChild( this._renderer.domElement );
    }
    addBox(x,y,z) {
        var boxGeom = new THREE.BoxGeometry(5,5,5);
        var sphereGeom = new THREE.SphereGeometry(2, 5, 5);
        var colour = parseInt(Math.random()*999999);
        var boxMat = new THREE.MeshBasicMaterial({color:colour});
        var rand = parseInt(Math.random()*2);
        var mesh = null;
        if(rand == 1) {
            mesh = new THREE.Mesh(boxGeom, boxMat);
        }
        else {
            mesh = new THREE.Mesh(sphereGeom, boxMat);
        }
        this._scene.add(mesh);
        mesh.position.x = x;
        mesh.position.y = y;
        mesh.position.z = z;
    }
    animate() {
        requestAnimationFrame( this.animate.bind(this) );
        this._renderer.render( this._scene, this._camera );
    }
}

var main = new Main();
window.onload = main.init;

//add either a box or a sphere with a random colour every now and again
setInterval(function() {
    main.addBox(((Math.random()*100)-50), ((Math.random()*100)-50), ((Math.random()*100)-50));
}.bind(this), 4000);
So the way I'm including the library locally is just a simple:
<script src="js/vendor/box2dweb.js"></script>
So just by including the box2d library, threejs starts to act weird. I have tested this across multiple computers too, and with multiple versions of both box2d (mainly box2dweb) and threejs.
Later versions of threejs seem to have some conflicts with box2d.
From my research, most of the box2d conversions to JS are more or less marked as not thread-safe.
I'm not sure if this could be the cause.
I also found examples where people have successfully used box2d with threejs, but always with quite an old version of threejs; when I update them, exactly the same problems occur as in my example.
Below is a demo I found (I wish I could credit the author); here is a copy of the fiddle using threejs 49:
jsfiddle here
...and below is the same fiddle with just the threejs resource swapped from 49 to 91:
jsfiddle here
It's quite an odd one; maybe the two libraries just don't play together anymore, but it would be great if someone could help or has a working example of them working together on the latest threejs version.
I have tried a lot of different box2d versions but always found the same problem. Could this be a problem with conflicting libraries or unsafe threads?
I have also tried linking to the resources included in the fiddles provided.
Any help really appreciated!!
I have written a Three.js application using StereoEffect, using 3 scenes for overlaying purposes, by clearing renderer depth.
(Using this approach Three.js - Geometry on top of another).
However, I now need to use VREffect, for better compatibility with headsets such as the Gear VR, using the WebVR Polyfill.
The following are snippets from the code, to show how it's set up:
const renderer = new THREE.WebGLRenderer({antialias: false})
renderer.setSize(window.innerWidth, window.innerHeight)
renderer.autoClear = false;
document.body.appendChild(renderer.domElement)
effect = new THREE.VREffect(renderer)
effect.separation = 0
effect.setSize(window.innerWidth, window.innerHeight)
let vrDisplay
navigator.getVRDisplays().then(displays => {
    if (displays.length > 0)
        vrDisplay = displays[0]
})

// Add button to enable the VR mode (display stereo)
const vrButton = VRSamplesUtil.addButton("Enter VR", "E", "/vr/cardboard64.png", () => {
    vrDisplay.requestPresent([{source: renderer.domElement}])
})
... The rest of the code ...
Inside my animation loop:
renderer.clear()
effect.render(scene, camera)
renderer.clearDepth()
effect.render(scene2, camera)
effect.render(scene3, camera)
However, this approach doesn't seem to work when using VREffect (only when entering VR mode; e.g. viewing it on my desktop works fine). I think the issue is that renderer.clear() or renderer.clearDepth() is not taking effect, as the canvas is pitch black apart from some elements in scene3.
Furthermore, when commenting out the rendering of scene2 and scene3, I can perfectly well see everything in the first scene, rendered correctly.
Looking through the code in VREffect, and StereoEffect, I couldn't figure out which part rendered my changes useless.
Any help/hints would be greatly appreciated.
I never did find out how to fix that issue, but I worked around it by using StereoEffect instead of VREffect for browsers other than the Gear VR's: VREffect worked perfectly fine there, but not in a normal phone browser where Cardboard might be used. Here's what I did, in case anyone else runs into this issue:
From the example above, I turned the vrButton bit into this:
const vrButton = VRSamplesUtil.addButton("Enter VR", "E", "/images/cardboard64.png", () => {
    if(navigator.userAgent.includes("Mobile VR")){
        vrDisplay.requestPresent([{source: renderer.domElement}])
    }else {
        effect = new THREE.StereoEffect(renderer)
        effect.separation = 0
        effect.setSize(window.innerWidth, window.innerHeight)
        document.getElementById("vr-sample-button-container").style.display = "none"
    }
})
where I switched over from VREffect to StereoEffect when the 'View in VR' button is clicked.
With this approach, however, the content will not be fullscreen and the device will eventually go to sleep. To fix both issues, you can have the user tap the screen to manually turn on fullscreen with this:
renderer.domElement.addEventListener("click", () => {
    const el = renderer.domElement
    // request fullscreen through whichever (possibly prefixed) API the browser exposes
    if (document.fullscreenEnabled) el.requestFullscreen()
    else if (document.webkitFullscreenEnabled) el.webkitRequestFullscreen()
    else if (document.mozFullScreenEnabled) el.mozRequestFullScreen()
    else if (document.msFullscreenEnabled) el.msRequestFullscreen()
})
Obviously, this is not as good of a user experience, and you don't get the nice UI, so if someone finds this and knows of an actual fix, please leave an answer/comment. I'll update this if I find anything myself.
Some final thoughts: as far as I know, the Gear VR browser has some sort of native WebVR implementation, whereas elsewhere the polyfill was used, so that could be part of the issue.
I'm ultimately trying to draw a polygon on top of my house. I can do that.
The problem is that on zoom-out, zoom-in, and rotation (or camera move) the polygon doesn't stick to the top of my house. I received great help from this answer, so now I'm trying to go through the sample code, but there are a lot of Cesium methods and functionality that I need to learn.
The sample code I am trying to follow is the gold standard that appears to be baked into the existing camera controller, here.
I call testMe with the mousePosition as a Cartesian3, and the SceneMode is 3D, so pickGlobe is executed.
Here is my code:
var pickedPosition;
var scratchZoomPickRay = new Cesium.Ray();
var scratchPickCartesian = new Cesium.Cartesian3();

function testMe(mousePosition) {
    if (Cesium.defined(scene.globe)) {
        if(scene.mode !== Cesium.SceneMode.SCENE2D) {
            pickedPosition = pickGlobe(viewer, mousePosition, scratchPickCartesian);
        } else {
            pickedPosition = camera.getPickRay(mousePosition, scratchZoomPickRay).origin;
        }
    }
}

var pickGlobeScratchRay = new Cesium.Ray();
var scratchDepthIntersection = new Cesium.Cartesian3();
var scratchRayIntersection = new Cesium.Cartesian3();

function pickGlobe(viewer, mousePosition, result) {
    var globe = scene.globe;
    var camera = scene.camera;
    if (!Cesium.defined(globe)) {
        return undefined;
    }
    var depthIntersection;
    if (scene.pickPositionSupported) {
        depthIntersection = scene.pickPosition(mousePosition, scratchDepthIntersection);
    }
    var ray = camera.getPickRay(mousePosition, pickGlobeScratchRay);
    var rayIntersection = globe.pick(ray, scene, scratchRayIntersection);
    var pickDistance;
    if(Cesium.defined(depthIntersection)) {
        pickDistance = Cesium.Cartesian3.distance(depthIntersection, camera.positionWC);
    } else {
        pickDistance = Number.POSITIVE_INFINITY;
    }
    var rayDistance;
    if(Cesium.defined(rayIntersection)) {
        rayDistance = Cesium.Cartesian3.distance(rayIntersection, camera.positionWC);
    } else {
        rayDistance = Number.POSITIVE_INFINITY;
    }
    var scratchCenterPosition = new Cesium.Cartesian3();
    if (pickDistance < rayDistance) {
        var cart = Cesium.Cartesian3.clone(depthIntersection, result);
        return cart;
    }
    var cart = Cesium.Cartesian3.clone(rayIntersection, result);
    return cart;
}
Here is my problem and the result:
Here are my questions to get this code working:
1. How do I get scene.pickPositionSupported to be true? I'm using Chrome on Windows 10. I cannot find anything about this in the sample code, and I haven't had much luck with the documentation or Google.
2. Why is rayIntersection not getting set? ray and scene have values, and scratchRayIntersection is an empty Cartesian3.
I think if I can get those two statements working, I can probably get the rest of the pickGlobe method working.
WebGL Graphics Report:
I clicked on Get WebGL and the cube is spinning!
Picking positions requires that the underlying WebGL implementation support depth textures, either through the WEBGL_depth_texture or WEBKIT_WEBGL_depth_texture extensions. scene.pickPositionSupported is returning false because this extension is missing. You can verify this by going to http://webglreport.com/ and looking at the list of extensions; I have both of the above listed there. There is nothing you can do in your code itself to make it suddenly return true, it's a reflection of the underlying browser.
That being said, I know for a fact that Chrome supports the depth texture and it works on Windows 10, so this sounds like a likely video-card driver issue. I fully expect that downloading and installing the latest drivers for your system will solve the problem.
As for rayIntersection, from a quick look at your code I only expect it to be defined if the mouse is actually over the globe, which may not always be the case. If you can reduce this to a runnable Sandcastle example, it would be easier for me to debug.
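If you would rather check from code than visit webglreport.com, a quick console sketch using plain WebGL (no Cesium APIs involved) along these lines should tell you whether the extension is exposed:

// quick check: does this browser/GPU expose a depth-texture extension?
var gl = document.createElement('canvas').getContext('webgl');
console.log(!!(gl && (gl.getExtension('WEBGL_depth_texture') ||
                      gl.getExtension('WEBKIT_WEBGL_depth_texture'))));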
OK. So it turned out that I had a totally messed up Cesium environment. I had to delete it and reinstall it in my project (npm install cesium --save-dev). Then I had to fix a few paths and VOILA! It worked. Thanks to both of you for all your help.
I'm trying to make a game using SVG images for scalability and for procedurally making physical objects from them (see matter.js for how).
The problem I'm having is that if I load two different SVG textures and then render them, the second has the first layered underneath it.
This doesn't happen with raster images, and it doesn't happen with the canvas renderer, only with WebGL.
Is there a way to stop this or am I doing the SVGs wrong?
var renderer = PIXI.autoDetectRenderer(
    window.innerWidth,
    window.innerHeight,
    {
        backgroundColor : 0xffffff,
        resolution:2
    }
);

// add viewport and fix resolution doubling
document.body.appendChild(renderer.view);
renderer.view.style.width = "100%";
renderer.view.style.height = "100%";

var stage = new PIXI.Container();

//load gear svg
var texture = PIXI.Texture.fromImage('https://upload.wikimedia.org/wikipedia/commons/thumb/0/0b/Gear_icon_svg.svg/2000px-Gear_icon_svg.svg.png');
var gear = new PIXI.Sprite(texture);
//position and scale
gear.scale = {x:0.1,y:0.1};
gear.position = {x:window.innerWidth / 2,y:window.innerHeight / 2};
gear.anchor = {x:0.5,y:0.5};

//load heart svg
var texture2 = PIXI.Texture.fromImage('https://upload.wikimedia.org/wikipedia/commons/thumb/4/42/Love_Heart_SVG.svg/2000px-Love_Heart_SVG.svg.png');
var heart = new PIXI.Sprite(texture2);
//position and scale
heart.scale = {x:0.1,y:0.1};
heart.position = {x:window.innerWidth/4,y:window.innerHeight / 2};
heart.anchor = {x:0.5,y:0.5};

//add to stage
stage.addChild(gear);
stage.addChild(heart);

// start animating
animate();
function animate() {
    gear.rotation += 0.05;
    // render the container
    renderer.render(stage);
    requestAnimationFrame(animate);
}
<script src="https://github.com/pixijs/pixi.js/releases/download/v4.8.2/pixi.min.js"></script>
Well, this example seems to work pretty well!
var beeSvg = "https://s3-us-west-2.amazonaws.com/s.cdpn.io/106114/bee.svg";
beeTexture = new PIXI.Texture.fromImage(beeSvg, undefined, undefined, 1.0);
var bee = new PIXI.Sprite(beeTexture)
See more at: https://codepen.io/osublake/pen/ORJjGj
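Note that the snippets in the question actually point at Wikimedia's pre-rendered PNG thumbnails ("...2000px-Gear_icon_svg.svg.png"), so Pixi never sees an SVG at all. If I read the Pixi v4 signature correctly as Texture.fromImage(imageUrl, crossorigin, scaleMode, sourceScale), the last argument is the scale at which an .svg source is rasterised; a hedged sketch reusing the bee URL above:

// assumption: Pixi v4's Texture.fromImage(url, crossorigin, scaleMode, sourceScale)
// rasterises an .svg source at sourceScale, so 4.0 bakes a 4x-resolution texture
var beeTexture4x = PIXI.Texture.fromImage(beeSvg, undefined, undefined, 4.0);
var bigBee = new PIXI.Sprite(beeTexture4x);
bigBee.scale.set(0.25); // displayed at the original size, but from a sharper texture
stage.addChild(bigBee);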
So I think you're mixing concepts a bit.
SVG is one thing and WebGL is another.
SVG's are rendered by the browser and you can scale them up or down without losing quality/resolution or whatever you want to call it.
This, however, is not possible in WebGL, because WebGL rasterises images. It's a bit like taking a screenshot and putting it in a layer in Photoshop: you can manipulate that image, but you can't scale it up without starting to see the pixels.
So the short answer is: you can't use SVGs in WebGL hoping to make your graphics "scale".
Regarding your example above, the result is expected: you are loading two PNG textures and overlaying them.