Is it possible to move 3D models between hosts? - javascript

Here's a nice one for you guys:
--> I have a website with a THREE.js scene containing a 3D object.
--> I have a mobile app with a scene and a QR scanner. It can use .obj files and .vrx files (a special format for Viro React).
I want this mobile app to load the 3D model by scanning a QR code on the website with the THREE.js scene (containing the model I want to show in my mobile app).
What I thought of is this: transform the 3D model into a JSON object, save it temporarily on a remote host, and turn the URL pointing to that 3D object into a QR code I can scan with the phone. The phone then fetches the object from the remote server, converts it to .obj, and displays it in its scene.
Does this sound like a viable solution? Do you think there's a better solution, or are there similar projects that have been done before?
The main problem is that the 3D model is different each time, and the handoff has to happen almost instantaneously for a good user experience.
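The website side of that pipeline can be sketched in a few lines. This is a minimal sketch, assuming three.js on the page; the upload endpoint and its `{ url }` response shape are hypothetical, while `toJSON()` is the standard three.js `Object3D` serializer:

```javascript
// Sketch of the handoff: serialize the model, upload it, and get back
// a URL to encode in the QR code. The endpoint and the { url } response
// shape are hypothetical placeholders for your own server.

function modelPayload(object3d) {
  // three.js Object3D exposes toJSON() (the "Object/Scene" JSON format)
  return JSON.stringify(object3d.toJSON());
}

async function publishModel(object3d, endpoint) {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: modelPayload(object3d),
  });
  const { url } = await res.json(); // server replies with a temporary URL
  return url; // encode this string in the QR code with any QR library
}
```

On the phone, the app scans the QR code, fetches the JSON from that URL, and converts it to whatever format the Viro React scene needs; keeping the payload small (geometry only, textures served separately or omitted) is what makes the "almost instantaneous" requirement realistic.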

Related

What do Sketchfab or Thangs use to show models?

I built a site that serves as a library for a specific category of 3D models. I currently use model-viewer to show .glb files of the models, but on mobile devices the performance isn't good: the page often crashes and randomly reloads, and the problem occurs even more when the site is loaded through the Instagram web viewer, which a lot of our customers use. I've tried sites such as Thangs and Sketchfab and this never happens on their websites, so can someone tell me what they use to show model previews?
Sketchfab uses its own render engine with lots of optimizations applied on the fly. It's not a fork of three.js, AFAIK.

Viewing a 3D model in the browser

I would like to know how I can go from a 3D model to a movable, in-browser model. An example of this is the Roblox website: if you go to any player profile, there is a character model that you can move around. My model is in .obj format. Would this format be compatible with the technology Roblox uses?
Export your model from Maya to a Wavefront (.obj) file. Then you can use a library like three.js, along with this script, to load and view .obj files in the browser:
obj to three.js JSON
Programming the movable camera can also be done with the library. If you want to use the default formats instead, it will be tricky: you may need to write the parser yourself, assuming one doesn't already exist and the spec is freely available.
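For the "write the parser yourself" case, the core of a Wavefront .obj reader is small. This is a minimal sketch handling only vertex positions and triangular faces; real files also carry normals, UVs, materials, and polygonal faces, which three.js's OBJLoader handles for you:

```javascript
// Minimal .obj parser sketch: positions and triangle faces only.
// Not a substitute for a full loader; it ignores vn/vt/mtl and quads.
function parseObj(text) {
  const vertices = []; // flat [x, y, z, x, y, z, ...]
  const indices = [];  // 0-based triangle indices
  for (const line of text.split("\n")) {
    const parts = line.trim().split(/\s+/);
    if (parts[0] === "v") {
      vertices.push(Number(parts[1]), Number(parts[2]), Number(parts[3]));
    } else if (parts[0] === "f") {
      // "f 1 2 3" or "f 1/1/1 2/2/2 3/3/3"; .obj indices are 1-based
      const idx = parts.slice(1).map((p) => parseInt(p, 10) - 1);
      indices.push(idx[0], idx[1], idx[2]);
    }
  }
  return { vertices, indices };
}
```

The resulting flat arrays map directly onto a GPU-friendly indexed geometry (position attribute plus index buffer) in three.js or any WebGL setup.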

How does AR.js use its tracking backend?

I want to extend AR.js by adding my own tracking backend. However, I'm having trouble finding any documentation on the architecture of this library or how it interacts with the underlying components. Likewise, it would be useful to have more information on how AR.js relates to ARToolKit, Tango, A-Frame, WebVR, ARCore, and WebARonARCore. Since the area is quite new and thriving, there are a lot of projects going on simultaneously, and it's sometimes confusing and hard to differentiate their functionality.
The backend I need to implement is object recognition based on YOLO. I have a prototype running: an Android Unity Tango application that offloads video captured from the device camera to an edge node, where it is processed in real time; the information about recognized objects is sent back to the device, where it is used to render annotations. I'd like these annotations to be represented as A-Frame tags, in order to make content layering easy using JavaScript.
Any ideas/pointers are welcome.
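For the last part (representing recognition results as A-Frame tags), one approach is to map each detection coming back from the edge node onto attributes of a dynamically created `<a-entity>`. The `{ label, position }` detection shape below is an assumption about your own protocol, not an AR.js or A-Frame API:

```javascript
// Map one recognized object to A-Frame component attributes.
// The { label, position } detection shape is hypothetical and stands in
// for whatever your edge node sends back.
function annotationAttributes(detection) {
  return {
    text: `value: ${detection.label}; align: center`,
    position: `${detection.position.x} ${detection.position.y} ${detection.position.z}`,
  };
}

// Browser-side usage (A-Frame assumed loaded on the page):
//   const el = document.createElement("a-entity");
//   const attrs = annotationAttributes({ label: "chair", position: { x: 0, y: 1.5, z: -2 } });
//   for (const [k, v] of Object.entries(attrs)) el.setAttribute(k, v);
//   document.querySelector("a-scene").appendChild(el);
```

Because the annotations are then ordinary DOM elements, layering and styling content on top of them from JavaScript is straightforward, which seems to be the goal here.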

Forge Viewer: zoom in on an asset inside a room (front of the asset)

I am using the JavaScript Forge Viewer.
I am trying to select an asset in a building and then zoom in on it with the camera inside the room where the asset is. I am trying to use
let boundingBox = this.viewer.utilities.getBoundingBox(false);
this.viewer.navigation.fitBounds(false, boundingBox, true);
But this zooms in from the wrong direction (from outside the room, to be precise).
Is it possible to automatically detect the front of an asset, rotate the camera towards it, and then zoom in?
Apologies for the long wait. It has been confirmed by the dev team that Revit rooms are not translated by the Model Derivative service, so room-related functions or APIs are not currently supported by the Forge Viewer.
Besides, we found that your request, zooming in on an asset inside a room, would only be appropriate for BIM apps developed with the Forge tech, and there might be many different kinds of use cases based on this request. So it's hard to design a general function or API for it in the Forge Viewer, and it might not be supported in future releases either.
However, we encourage developers like you to implement this feature yourself, and here is a workaround:
1. Open your Revit project with room elements only via Navisworks, upload it to Forge for translation, and use the result as a secondary model in your viewer app. [Here is some info about rooms from my colleague (link).]
2. Convert the fragments of the selected asset in your app into a pure THREE.Geometry. [Here is an example of accessing mesh info in the Forge Viewer (link).]
3. Compute the BoundingSphere of the THREE.Geometry from step 2, and treat the sphere center as the central point of the selected asset.
4. Do three.js raytracing against the room geometries from the BoundingSphere center to find rays without any obstruction between the camera and the selected asset. [Here is an example that shows how to use three.js raytracing with the Forge Viewer (link).]
5. Treat the rays from step 4 as sight lines.
6. Pick a desired sight line from step 5 to re-calculate the position, target, and pivot of the camera.
P.S. Since this is just a workaround, not a formal solution, you use it at your own risk.
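The geometric core of the raytracing steps above can be sketched with plain vector math, independent of the Forge Viewer APIs. In this sketch the room geometry is crudely approximated by bounding spheres; a real implementation would raycast against the actual room meshes (e.g. with THREE.Raycaster), but the selection logic is the same:

```javascript
// From the asset's bounding-sphere center, try candidate sight-line
// directions and keep the first one no obstacle blocks. Obstacles are
// approximated as { center, radius } spheres (a stand-in for real meshes).
function unobstructedDirection(center, obstacles, directions) {
  for (const dir of directions) {
    const blocked = obstacles.some((o) =>
      rayHitsSphere(center, dir, o.center, o.radius));
    if (!blocked) return dir; // usable sight line for the camera
  }
  return null; // every candidate direction is obstructed
}

function rayHitsSphere(origin, dir, c, r) {
  // Standard ray-sphere intersection test; dir is assumed unit length.
  const oc = { x: c.x - origin.x, y: c.y - origin.y, z: c.z - origin.z };
  const t = oc.x * dir.x + oc.y * dir.y + oc.z * dir.z; // projection onto ray
  if (t < 0) return false; // sphere center is behind the ray origin
  const d2 = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - t * t;
  return d2 <= r * r; // closest approach within the sphere radius?
}
```

Once a direction survives the test, placing the camera at `center + direction * distance` and setting the target and pivot to `center` gives the "front of the asset" framing asked about.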

Are there any automated tools that can generate 2D images from COLLADA-format 3D models for display on a website?

First off, I have very little experience with 3d modeling, so this may be a poorly worded question. If that is the case, then I apologize.
Essentially, I have a large database of COLLADA format 3d models that need to be displayed in a gallery on a website. The number of models is on the order of thousands, so it would be preferable for any type of display format to be automated.
My initial thought was to display these files in 3d using WebGL. However, the lack of support from Internet Explorer is, unfortunately, a deal breaker.
Also, any other JavaScript API for 3D model display would probably not be feasible as far as loading time goes, given that these do not involve any sort of hardware acceleration.
My next best option would be to have multiple 2D images of the models taken from various angles. However, with the number of models in this database, it would be nearly impossible to manually output 2D images of each model.
So, my question, then, is this: are there any tools that can be used to auto-generate images from a large set of 3D models? Or, better yet, is there a way these images can be rendered directly from the model for display in the browser without an excessive amount of load time?
Thank you so much!
You could use meshtool to generate 2D screenshots from 3D models, either on the command line or from the Python API.
Here's an example from the command line of saving a single screenshot:
meshtool --load_collada file.dae --save_screenshots ss.png
There's also a command to take more than one screenshot, rotating around the model:
meshtool --load_collada file.dae --save_rotate_screenshots ss 10 800 600
This would save 10 screenshots of size 800x600 to files named ss.1.png, ss.2.png, etc. You can also use the Python API of meshtool to do any custom export you want. It uses Panda3D under the hood, which is very easy to use.
