I'm creating a 3D game using Babylon.js and saw that a DynamicTexture (https://doc.babylonjs.com/how_to/dynamictexture) lets you use canvas elements for the texture: var ctx = myDynamicTexture.getContext();. Is there some way to project a PhaserJS canvas onto a texture of a 3D element in Babylon.js? I know it can be done in three.js, but I like Babylon.js better and don't want to switch.
The second parameter of the DynamicTexture constructor can be an already-existing canvas with a 2D context (https://doc.babylonjs.com/how_to/dynamictexture#creating-and-applying-a-dynamic-texture). The canvas's width and height will define the texture's size:
const phaserCanvas = getMyPhaserCanvas();
const dt = new DynamicTexture('phaser canvas', phaserCanvas, babylonScene);
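To actually see the Phaser output on a mesh, assign the texture to a material and re-upload the canvas whenever Phaser draws a new frame. A minimal sketch under those assumptions, where plane is a hypothetical mesh you want to project onto:
const mat = new StandardMaterial('phaser mat', babylonScene);
mat.diffuseTexture = dt;
plane.material = mat;
// Copy the latest Phaser frame into the texture before each Babylon render
babylonScene.onBeforeRenderObservable.add(() => dt.update());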
In WebGL, we create a rendering context with:
const canvas = document.getElementById("mycanvas");
let gl = canvas.getContext("webgl2");
In every example I see, people use gl.canvas instead of canvas, and I'd like to know why. For instance, to find the aspect ratio:
let aspectRatio = gl.canvas.clientWidth / gl.canvas.clientHeight;
Why don't we use the canvas element instead of gl.canvas?
I'm currently learning three.js. I want to display a 16:9 photo in my scene.
This is the code for adding an array of images to my scene:
const material = new MeshBasicMaterial({
  map: loader.load(images[i]),
  transparent: true,
  opacity: 1,
});
const plane = new Mesh(new PlaneGeometry(imageWidth, 45), material);
plane.overdraw = true;
plane.position.x = i * (imageWidth + imageOffset);
plane.position.y = 0;
this.introImages.push({
  obj: plane,
  round: 1,
});
this.introImagesGroup.add(this.introImages[i].obj);
Now I'm getting the console warning:
THREE.WebGLRenderer: image is not power of two (1600x900). Resized to 1024x512
I have read that texture dimensions should be a power of two so the texture can be stored in memory in an optimized way. That makes me wonder: is the way I'm putting my images into the scene correct, or is there another way to handle images that don't follow this rule in three.js?
You probably don't want three.js to scale your image down, because you'd lose resolution, so you want to scale it up to the next power of two. You have two options:
You could scale up your image and export it at a power-of-two resolution (2048 x 1024) in your favorite photo editor.
You could dynamically generate a 2048 x 1024 canvas, draw the image onto it scaled up, and use that canvas as your texture source:
const imgURL = "path/to/whatever.jpg";
// Create image element
const image = document.createElement('img');
image.src = imgURL;
// Once image is loaded
image.onload = () => {
  // Create canvas, size it to power-of-two dimensions
  const canvas = document.createElement('canvas');
  canvas.width = 2048;
  canvas.height = 1024;
  // Draw image on canvas, scaled to fit
  const ctx = canvas.getContext('2d');
  ctx.drawImage(image, 0, 0, 2048, 1024);
  // Create texture and flag it for upload to the GPU
  const texture = new THREE.Texture(canvas);
  texture.needsUpdate = true;
};
Then you can assign that texture variable to whatever material you want, and three.js won't complain.
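For instance, still inside the onload handler where texture is in scope, a minimal usage sketch (mesh here stands for whatever object you're texturing):
const material = new THREE.MeshBasicMaterial({ map: texture });
mesh.material = material;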
Edit:
If you want to avoid texture resizing, you can instead change texture.minFilter to THREE.NearestFilter or THREE.LinearFilter, and the engine won't give you warnings. The downside is that your textures can look grainy or aliased when scaled down, since they won't be mipmapped.
That may be the result you like, or it may look bad, depending on your project. You can see the effects of using NearestFilter in this example: https://threejs.org/examples/?q=filter#webgl_materials_texture_filters
I have read that texture dimensions should be a power of two so the texture can be stored in memory in an optimized way, which makes me wonder whether the way I'm putting my images into the scene is correct, or whether there is another way to handle images that don't follow this rule in three.js.
In WebGL 1 you need power-of-two (POT) textures for mipmapping. The mentioned warning disappears if you set the .minFilter property of your canvas texture to THREE.LinearFilter. Keep in mind that mipmaps are not necessary in every scenario.
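A minimal sketch of that filter change, assuming texture is the canvas texture from the code above:
texture.minFilter = THREE.LinearFilter;
// Optionally skip mipmap generation entirely, since the mipmaps won't be used
texture.generateMipmaps = false;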
I'm very new to three.js and WebGL, and I am getting very strange-looking shadows with a directional light.
Here is my code for the renderer:
this.renderer.shadowMapEnabled = true;
this.renderer.shadowMapSoft = true;
this.renderer.shadowCameraNear = 3;
this.renderer.shadowCameraFar = this.camera.far;
this.renderer.shadowCameraFov = 75;
this.renderer.shadowMapBias = 0.0039;
this.renderer.shadowMapDarkness = 0.5;
this.renderer.shadowMapWidth = 1024;
this.renderer.shadowMapHeight = 1024;
Any ideas?
The problem is that your light's shadow camera covers too large an area for the object you are shadowing. You can visualize the shadow camera by setting
light.shadowCameraVisible = true
Then try reducing the size of your light source by varying the parameter d below
light.shadowCameraLeft = -d
light.shadowCameraRight = d
light.shadowCameraTop = d
light.shadowCameraBottom = -d
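(Note: in current three.js versions these properties have moved onto light.shadow.camera, and shadowCameraVisible has been replaced by a CameraHelper. A rough modern equivalent, assuming light is a THREE.DirectionalLight:)
const d = 50; // shrink d until the frustum tightly fits the shadowed objects
light.shadow.camera.left = -d;
light.shadow.camera.right = d;
light.shadow.camera.top = d;
light.shadow.camera.bottom = -d;
light.shadow.camera.updateProjectionMatrix();
// Visualize the shadow camera's frustum instead of shadowCameraVisible
scene.add(new THREE.CameraHelper(light.shadow.camera));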
This results from the way DirectionalLight shadows are created in three.js (I had this question before). The approach used for a directional light is about the same as any other shadow creation: it renders a shadow map. With a directional light, that shadow map is rendered with an orthographic camera. So think of your light as an OrthographicCamera and consider how it views the scene: the light looks at the scene from the direction of the directional light and builds a shadow map from that view. That view, of course, has a different projection matrix than your main camera, so the main camera's projection must transform the shadow into its own view. This produces shadows that look like your image shows; indeed, the pixelation of the shadow reveals how the shadow camera is oriented and what its scale is.
There's no way in three.js to create a true orthographic shadow using the standard lights, and getting ideal coverage of the shadow map from the camera's perspective is also not possible.
I have this
function doFirst() {
  var x = document.getElementById('canvas');
  var canvas = x.getContext('webgl') || x.getContext("experimental-webgl");
}
And I want to draw an image 'sheep.png' on the canvas. I use this, but it is not working:
var pic = new Image();
pic.src = "images/sheep.png";
pic.addEventListener("load", function() { canvas.drawImage(pic,0,0,0)}, false);
drawImage is only for use with the 2D context; you can't use it on a WebGL context.
In order to use it in webgl, you'll need to build a mesh with your image used as the texture.
If you're not familiar with webgl, you might want to look at three.js as an alternative that's easier to use.
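If you do go that route, here is a minimal sketch, assuming a three.js scene, camera, and renderer are already set up:
// Load the image as a texture and show it on a simple plane
const texture = new THREE.TextureLoader().load('images/sheep.png');
const material = new THREE.MeshBasicMaterial({ map: texture, transparent: true });
const sheep = new THREE.Mesh(new THREE.PlaneGeometry(1, 1), material);
scene.add(sheep);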
drawImage() takes 3, 5, or 9 arguments; a four-argument call like drawImage(pic, 0, 0, 0) doesn't match any of its overloads. And as noted above, it only exists on the 2D context.
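For reference, the valid CanvasRenderingContext2D.drawImage() overloads (ctx here is a 2D context, not WebGL):
ctx.drawImage(image, dx, dy);
ctx.drawImage(image, dx, dy, dWidth, dHeight);
ctx.drawImage(image, sx, sy, sWidth, sHeight, dx, dy, dWidth, dHeight);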
I want to take an irregularly shaped section from an existing image and render it as a new image in Javascript using HTML5 canvases. So, only the data inside the polygon boundary will be copied. The approach I came up with involved:
Draw the polygon in a new canvas.
Create a mask using clip
Copy the data from the original canvas using getImageData (a rectangle)
Apply the data to the new canvas using putImageData
It didn't work: the entire rectangle (i.e. the content from the source outside the boundary) still appears. This question explains why:
"The spec says that putImageData will not be affected by clipping regions." Dang!
I also tried drawing the shape, setting context.globalCompositeOperation = "source-in", and then using putImageData. Same result: no mask applied. I suspect for a similar reason.
Any suggestions on how to accomplish this goal? Here's basic code for my work in progress, in case it's not clear what I'm trying to do. (Don't try too hard to debug this; it's cleaned up/extracted from code that uses a lot of functions that aren't shown here, just to illustrate the logic.)
// coords is the polygon data for the area I want
context = $('canvas')[0].getContext("2d");
context.save();
context.beginPath();
context.moveTo(coords[0], coords[1]);
for (i = 2; i < coords.length; i += 2) {
  context.lineTo(coords[i], coords[i + 1]);
}
//context.closePath();
context.clip();
$img = $('#main_image');
copy_canvas = new_canvas($img); // just creates a new canvas matching dimensions of image
copy_ctx = copy_canvas.getContext("2d");
tempImage = new Image();
tempImage.src = $img.attr("src");
copy_ctx.drawImage(tempImage, 0, 0, tempImage.width, tempImage.height);
// returns array x,y,x,y with t/l and b/r corners for a polygon
corners = get_corners(coords);
var data = copy_ctx.getImageData(corners[0], corners[1], corners[2], corners[3]);
//context.globalCompositeOperation = "source-in";
context.putImageData(data, 0, 0);
context.restore();
Don't use putImageData. Instead, create an extra in-memory canvas with document.createElement, draw the mask into it, and apply it with drawImage() and globalCompositeOperation (depending on the draw order, you need to pick the right mode).
I do something similar in my demo (mind the CasparKleijne.Canvas.GFX.Composite function).
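A minimal sketch of that approach, assuming srcImage is the loaded source image, coords is the polygon array from the question, and destinationCtx is the 2D context you ultimately want to draw into:
// Build the in-memory canvas that will hold the masked image
const maskCanvas = document.createElement('canvas');
maskCanvas.width = srcImage.width;
maskCanvas.height = srcImage.height;
const mctx = maskCanvas.getContext('2d');
// Draw the full source image first
mctx.drawImage(srcImage, 0, 0);
// destination-in keeps only the pixels covered by what we draw next
mctx.globalCompositeOperation = 'destination-in';
mctx.beginPath();
mctx.moveTo(coords[0], coords[1]);
for (let i = 2; i < coords.length; i += 2) {
  mctx.lineTo(coords[i], coords[i + 1]);
}
mctx.closePath();
mctx.fill();
// maskCanvas now holds only the polygon area; composite it normally
destinationCtx.drawImage(maskCanvas, 0, 0);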