I’m having weird issues exporting a model from Three.js and getting it to appear properly in PowerPoint. When exported to glTF and imported into PowerPoint it looks like this:
The black areas aren't the issue; it's the overlapping faces across the model showing through that are the problem.
I’ve set and unset these flags on the export and nothing changes:
let options = {
    binary: true,
    forcePowerOfTwoTextures: true,
    onlyVisible: true,
    embedImages: true,
    forceIndices: true,
};
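For context, this is roughly how those options get passed to the exporter, assuming the parse(input, onDone, options) signature of the exporter version in use (saveArrayBuffer is a hypothetical helper):

const exporter = new THREE.GLTFExporter();
exporter.parse(scene, (result) => {
    // With binary: true, result is an ArrayBuffer (.glb).
    saveArrayBuffer(result, 'model.glb'); // hypothetical helper
}, options);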
But when I run it through the GLTF online converter by Lewy here: https://blackthread.io/gltf-converter/ with nothing but 'embed images' turned on, it then works in PowerPoint without issue:
So I'm left wondering: because his site re-generates the model from the uploaded one, is it getting cleaned up by the GLTFLoader in the process of reading it?
I’m at a complete dead end as to how to fix this within three.js without using the external uploader/re-exporter :(
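In case it helps anyone reproduce this, the converter's round trip could presumably be replicated locally. A rough, untested sketch (saveArrayBuffer is again a hypothetical helper):

const exporter = new THREE.GLTFExporter();
exporter.parse(scene, (glb) => {
    // Re-read the exported binary with GLTFLoader, then export the result again.
    new THREE.GLTFLoader().parse(glb, '', (gltf) => {
        exporter.parse(gltf.scene, (cleaned) => {
            saveArrayBuffer(cleaned, 'cleaned.glb'); // hypothetical helper
        }, options);
    });
}, options);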
UPDATE:
It seems to be an issue with the alpha blend mode when a material is set to transparent. If the blend mode is changed to 'mask', the issue disappears, but then texture areas whose alpha falls below the alphaTest value are discarded entirely. This issue only turns up for models imported into PowerPoint; when I upload to SketchFab it's all fine.
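For anyone hitting the same thing, here is a minimal sketch of forcing mask mode before export. It assumes (unverified) that GLTFExporter writes alphaMode "MASK" when a material has alphaTest > 0 and transparent off:

scene.traverse((node) => {
    if (node.isMesh) {
        const mats = Array.isArray(node.material) ? node.material : [node.material];
        for (const mat of mats) {
            if (mat.transparent) {
                mat.transparent = false; // stop exporting alphaMode "BLEND"
                mat.alphaTest = 0.5;     // exported as the mask's alphaCutoff; value is a guess
                mat.needsUpdate = true;
            }
        }
    }
});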
Related
I have a .gltf file. I can open it in a 3D viewer, but when I try to load it in the browser it loads (no error in the console), yet it doesn't show up in the scene. I also tried to load the model in the three.js editor (https://threejs.org/editor/), but that failed too.
How can I fix this? Please help.
The file link: https://drive.google.com/file/d/1ckC1qiIRGDskO240Y5N7oIBJufocEiTb/view?usp=sharing
It all works fine. Your 3D model is simply too big. Change the scale in THREE.js to 0.001, or change the size of the model in an editor like Blender. Also add a light source of some sort; otherwise it will be all black.
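A sketch of both fixes (the loader setup and light values are assumptions; the 0.001 scale is from above):

const loader = new THREE.GLTFLoader();
loader.load('model.gltf', (gltf) => {
    gltf.scene.scale.set(0.001, 0.001, 0.001); // shrink the oversized model
    scene.add(gltf.scene);
});

// Add some light so the materials aren't rendered black.
scene.add(new THREE.AmbientLight(0xffffff, 0.5));
const sun = new THREE.DirectionalLight(0xffffff, 1.0);
sun.position.set(5, 10, 7);
scene.add(sun);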
Copied from a GitHub issue:
Hello, I was experimenting with loading models using the custom layer interface and ran into a problem. I'm actually drawing the custom layer through DeckGL and react-map-gl, but I believe the heart of the problem lies in the custom layer, which belongs here, so I'm asking the question here. The problem is pretty similar to issue #8936, but the solution there doesn't work; in fact, it's the cause of the problem as far as I understood. If you try clearing manually or set renderer.autoClear to true, the underlying map disappears and only a white background is shown with the model laid on top.
This can easily be seen using the official example and its jsfiddle. Just set autoClear to true.
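For reference, the render hook in the official custom-layer example boils down to something like this sketch (matrix handling simplified; camera, scene, renderer, and map are assumed to exist):

const customLayer = {
    id: '3d-model',
    type: 'custom',
    renderingMode: '3d',
    render: function (gl, matrix) {
        camera.projectionMatrix = new THREE.Matrix4().fromArray(matrix); // model transform omitted
        renderer.state.reset();     // resync three.js with Mapbox's GL state
        renderer.autoClear = false; // flipping this to true wipes the map underneath
        renderer.render(scene, camera);
        map.triggerRepaint();
    },
};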
I was actually experimenting with post-processing, which requires using composer.render instead of renderer.render, and the former clears the buffers during its passes.
I thought this was perhaps a non-transparent canvas issue, so I initialized the three.js renderer with a separate canvas with alpha set to true and the clear color set to all 0's. But at alpha 0 the canvas shows white, and if you increase the alpha it starts showing whatever clearColor was set to. It never shows the underlying Mapbox canvas.
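Roughly what that initialization looked like (a sketch; the canvas element is an assumption):

const renderer = new THREE.WebGLRenderer({
    canvas: document.getElementById('three-canvas'), // hypothetical overlay canvas
    alpha: true, // transparent drawing buffer
});
renderer.setClearColor(0x000000, 0); // clear color all 0's, alpha 0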
A screencap to demonstrate the problem clearly:
clearColor all 0's and alpha 0.
ClearColor is red and alpha is 0.2.
Intended picture when using clearing:
Update: OK, it seems the transparency is working fine. When I changed the clearColor to red and the alpha to 0.4, I noticed there was a brief period where the map was shown with a red overlay, like so:
However, as soon as the model appears, the map disappears and the image resembles the second one. So something's wrong there.
***UPDATE*** This has to be an issue with the files and the way they are exported; I just don't know what that issue is. I have downloaded some more example models and they all render just fine.
I am experiencing an issue with Three.js when loading .obj and .mtl files.
I have a bunch of objects and their corresponding material files exported from 3ds. I am not the one who exported these files, and I am not a 3D modeler; however, if this turns out to be an issue with the files, I can ask the modeler to export them again.
I have used THREE.js a few times and never come across this issue. I am loading the .mtl and .obj files using the following:
mtlLoader.load("stands/objects/Table&Chairs.mtl", function(materials){
materials.preload();
var objLoader = new THREE.OBJLoader();
objLoader.setMaterials(materials);
objLoader.load("stands/objects/Table&Chairs.obj", function(object){
scene.add(object);
object.position.set(-5, 0, 4);
});
});
My problem is that the object loads fine and there are no errors, but nothing is shown; the object is not being rendered to the scene.
If I download some example assets from other sources and switch out the files being loaded, without changing anything else, the object renders.
This leads me to believe it could be an issue with the way in which the files are being exported.
Screenshot of my .obj failing to render:
Screenshot of the example .obj being rendered:
Any help as to what could be causing this would be much appreciated.
I have uploaded the objects and materials here.
Mine are the Table&Chairs, the example ones are the Tent_Poles_01 files.
The great thing about .OBJ files is that you can just open them up in the text editor of your choice and see what is inside. This will give you a rough idea of position and scale:
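For example, the v records are plain-text vertex coordinates, so their magnitudes tell you the model's size and location (hypothetical values):

v 6012.35 0.00 -2087.41
v 6013.12 74.50 -2088.92
v 6010.80 0.00 -2091.30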
Looking at the vertices in your tent model, it seems to have a size of about 2 units in each dimension, centered around the origin.
Looking at the vertices in the Table & Chairs model, they have a size of a couple of hundred/thousand units, and start near (+6000,0,-2000).
In other words, the first suspect would be your model being rendered somewhere far outside the visible viewport.
Usually, when you work together with a 3D artist, you discuss the scale you want to work at beforehand, and have them nicely align the model with the origin.
You can (sort of) correct this in code, but it will not be as practical.
mesh.geometry.center() will recenter the geometry around (0, 0, 0)... but note that this is usually not what you want. For example, a table will then sit halfway through the floor in its default centered position.
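A sketch of working around that caveat: recenter, then lift the object so its lowest point rests on the floor (names are assumptions):

mesh.geometry.center(); // geometry is now centered on (0, 0, 0)
var box = new THREE.Box3().setFromObject(mesh);
mesh.position.y -= box.min.y; // rest the lowest point on y = 0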
To scale the geometry to an absolute size, use something like:
var currentSize = new THREE.Box3().setFromObject(mesh).getSize(new THREE.Vector3());
mesh.geometry.scale(
    targetWidth / currentSize.x,
    targetHeight / currentSize.y,
    targetDepth / currentSize.z
);
I'm working on porting an existing three.js project to WebVR + Oculus Rift. Basically, this app takes an STL file as input, creates a THREE.Mesh from it, and renders it on an empty scene. I managed to make it work in Firefox Nightly with the VREffect plugin to three.js and VRControls. The problem I have is that models rendered in VR aren't really 3D. Namely, when I move the HMD back and forth, an active 3D model doesn't get closer/farther, and I can't see different sides of the model. It looks as if the model were a flat background image stuck in its position. If I add THREE.AxisHelper to the scene, it is transformed correctly when the HMD is moved.
Originally, THREE.OrbitControls were used in the app and models were rotated and moved properly.
There's a fair amount of source code, so I'll post snippets on demand.
It turned out that technically there was no problem. The issue was essentially the mismatch between the scale of my models and the Oculus movements. When VRControls is used with default settings, it reports the position of the HMD as it reads it from the Oculus, in meters. So the range of movement of my head could barely exceed 1 m, whereas the average size of my models is a few dozen of their own units. When I used them together in the same scene, it was like an ant looking at a giant model: naturally, the ant has to walk a while to see another side of the model. That's why it didn't seem like a 3D body.
Fortunately, there's a scale property on VRControls that can be used to adjust the scale of HMD movements. When I set it to about 30, everything worked pretty well.
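The fix itself is tiny; the value 30 is from above, and the construction follows the old VRControls example usage:

var controls = new THREE.VRControls(camera);
controls.scale = 30; // HMD movement is reported in meters; multiply it by this factor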
Thanks to @brianpeiris's comment, I decided to check the coordinates of the model and the camera once again to make sure they weren't knit together. And that led me to the solution.
I am facing a very obscure issue.
I am loading .obj 3D models into a WebGL application I am working on. I am parsing them with a custom function, and I know it works correctly because I get this:
Granted, the texture is stretched way too much, but that's the way the UVs are mapped.
But when I load a goose model, here's what I get:
An untextured goose. I am using the same texture.
Weirder still is the fact that the texture renders only around the eyes. If I don't draw the wireframe, I get this:
Those two tiny specks that I've circled are the eyes.
Just to make sure that it wasn't this model, I tried another one:
I have no idea why it won't render the texture everywhere else.
Here's what I've tried:
Recalculated normals and exported from Blender again. Same outcome.
Loaded the models in Photoshop and they preview correctly, meaning the models themselves are fine.
Checked my parsing of the .obj file into a usable format for WebGL over and over. Also, since the cube loads correctly, I assume it's not that.
Tried switching off mipmapping, disabling CULL_FACE, and changing the winding order. No difference. (See the sketch after this list.)
Used different texture images. The texture images are all NPOT; I tried power-of-two images as well. Nothing changed, except that with different images the texels of the little eyes or the fragment on the axe changed to match the texture.
Checked the UV mappings; they all look fine. No negative or out-of-range values.
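For the mipmapping/NPOT item above, this is the kind of texture setup that makes NPOT images legal in WebGL 1 (a sketch; gl and texture are assumed to exist):

gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR); // no mipmaps
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);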
I am totally out of ideas. If someone has had similar experiences, I'd appreciate pointers as to where to look next.
I generally don't like to ask questions without posting the code, but the code is very convoluted because I'm using sweet.js for custom JavaScript syntax. Let me know if anyone would like to look at it, and I'll try to clean some of it up and upload it somewhere.
From your images I wondered whether you were drawing a constant number of polygons, or whether the draw started and then silently failed partway because of some bug. That's why I asked whether the polygons that did render were the first ones in the models.
And so it seems that there was indeed a hard-coded limit on the number of drawn polygons.
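In other words, a bug pattern like this (names are hypothetical; note the count argument is the number of indices, not polygons):

gl.drawElements(gl.TRIANGLES, 3000, gl.UNSIGNED_SHORT, 0);       // buggy: hard-coded limit
gl.drawElements(gl.TRIANGLES, indexCount, gl.UNSIGNED_SHORT, 0); // fixed: the model's real index count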
Glad you found it!