I would like to know how I can go from a 3D model to a moveable, in-browser model. An example of this is the Roblox website: if you go to any player profile, there is a character model that you can move around. My model is in .obj format. Would this format be compatible with the technology Roblox uses?
Export your model from Maya to a Wavefront (.obj) file. Then you can use a library like three.js, along with this script, to load and view .obj files within the browser:
obj to three.js JSON
Programming the moveable camera can also be done using the library. If you want to use other formats, it can be tricky: you may need to write the parser yourself if one doesn't already exist, provided the spec is freely available.
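If you do end up writing a parser yourself, the core of the .obj format is simple. Here is a minimal sketch that reads only `v` (vertex) and `f` (face) records into flat arrays, the shape three.js buffer geometries expect; real files also carry normals, UVs, and materials, which a full loader like three.js's OBJLoader handles for you:

```javascript
// Minimal .obj parser sketch: handles only "v" and "f" records.
// Not a full loader -- no normals, UVs, or materials.
function parseObj(text) {
  const vertices = []; // flat [x, y, z, x, y, z, ...]
  const faces = [];    // triangle vertex indices (0-based)
  for (const line of text.split("\n")) {
    const parts = line.trim().split(/\s+/);
    if (parts[0] === "v") {
      vertices.push(Number(parts[1]), Number(parts[2]), Number(parts[3]));
    } else if (parts[0] === "f") {
      // "f 1 2 3" or "f 1/1/1 2/2/2 3/3/3"; .obj indices are 1-based
      const idx = parts.slice(1).map(p => Number(p.split("/")[0]) - 1);
      // fan-triangulate faces with more than three corners
      for (let i = 1; i + 1 < idx.length; i++) {
        faces.push(idx[0], idx[i], idx[i + 1]);
      }
    }
  }
  return { vertices, faces };
}

const result = parseObj("v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3");
```

The resulting flat arrays can be fed straight into a three.js BufferGeometry or a raw WebGL vertex buffer.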
I know that Adobe 3D PDFs support JavaScript, and I have JavaScript code using Three.js for 3D models in a 3D world.
Is it possible to have the same (or similar) code in a 3D PDF?
I mean: can I load 3D models, manipulate them, and have all of that inside a 3D PDF?
What do I need in order to do this?
Can I create a scene with the Adobe API?
Is the Adobe API free?
You can add JavaScript to a 3D Annotation using Adobe Acrobat manually, or with just about any PDF library tool that supports Rich Media Annotations or allows you to access objects at the COS dictionary level. I can advise better if you can tell me which programming languages you are most proficient in. One thing to note is that there is no physics engine or collision detection in the Acrobat JavaScript for 3D API; you'd need to add that yourself... though it is certainly possible.
Here's a nice one for you guys:
--> I have a website with a THREE.js scene containing a 3D object.
--> I have a mobile app with a scene and a QR scanner. It can use .obj files and .vrx files (a special format for Viro React).
I want this mobile app to load the 3D model by scanning a QR code on the website with the THREE.js scene (containing the model I want to show in my mobile app).
What I thought of is: transform the 3D model into a JSON object, save it temporarily to a remote host, and turn the URL pointing to this 3D object into a QR code which I can scan with the phone. The phone then fetches the object from the remote server, converts it to .obj, and displays it in the scene.
Does this sound like a viable solution? You think there's a better solution or other similar projects that have been done before?
The main problem is that this 3D model is different each time, and it has to happen almost instantaneously for a good user experience.
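The round trip described above can be sketched in plain JavaScript. three.js objects already serialize via `object.toJSON()` (and deserialize via `THREE.ObjectLoader`); a plain object stands in here so the flow is visible without the library, and the upload endpoint and URL scheme are assumptions, not any real API:

```javascript
// Sketch of the QR hand-off. The host URL and "/models/<id>.json" path
// are hypothetical; only the JSON round trip itself is real.
function encodeModel(model) {
  // with three.js this would be JSON.stringify(mesh.toJSON())
  return JSON.stringify(model);
}

function modelUrl(baseUrl, id) {
  // this URL is what goes into the QR code the phone scans
  return baseUrl + "/models/" + encodeURIComponent(id) + ".json";
}

function decodeModel(json) {
  // on the phone: parse the payload back into an object
  // (three.js: new THREE.ObjectLoader().parse(JSON.parse(json)))
  return JSON.parse(json);
}

// Round trip with a stand-in "model":
const payload = encodeModel({ type: "Mesh", vertices: [0, 0, 0, 1, 0, 0] });
const restored = decodeModel(payload);
const url = modelUrl("https://example-host.test", "scan-42");
```

For the "almost instantaneous" requirement, the serialization and upload are cheap; the dominant cost will be the phone's network fetch, so keeping the JSON payload small (or compressed) matters most.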
I want to extend AR.js by adding my own tracking backend. However, I am having trouble finding any documentation on the architecture of this library or how it interacts with underlying components. Likewise, it would be useful to have more information on how AR.js relates to ARToolKit, Tango, A-Frame, WebVR, ARCore and WebARonARCore. Since the area is quite new and thriving, there are a lot of projects going on simultaneously, and it can be confusing and hard to differentiate their functionality.
The backend I need to implement is object recognition based on YOLO. I have a prototype running: an Android Unity Tango application that offloads video captured from the device camera to an edge node, where it is processed in real time; the information about recognized objects is sent back to the device, where it is used to render annotations. I'd like these annotations to be represented as A-Frame tags, in order to make content layering easy using JavaScript.
Any ideas/pointers are welcomed.
First off, I have very little experience with 3d modeling, so this may be a poorly worded question. If that is the case, then I apologize.
Essentially, I have a large database of COLLADA format 3d models that need to be displayed in a gallery on a website. The number of models is on the order of thousands, so it would be preferable for any type of display format to be automated.
My initial thought was to display these files in 3D using WebGL. However, the lack of support from Internet Explorer is, unfortunately, a deal breaker.
Also, any other JavaScript API for 3D model display would probably not be feasible as far as loading time goes, given that these do not involve any sort of hardware acceleration.
My next best option would be to have multiple 2d images of the models taken from various angles. However, with the number of models in this database, it would be nearly impossible to manually output 2d images of each model.
So, my question, then, is this: are there any tools that can be used to auto-generate images from a large set of 3d models? Or, even better yet, is there a way that these images can be rendered directly from the model to be displayed in the browser without an excessive amount of load time?
Thank you so much!
You could use meshtool to generate 2D screenshots from 3D models, either on the command line or from the Python API.
Here's an example from the command line of saving a single screenshot:
meshtool --load_collada file.dae --save_screenshots ss.png
There's also a command to take more than one screenshot, rotating around the model:
meshtool --load_collada file.dae --save_rotate_screenshots ss 10 800 600
This would save 10 screenshots of size 800x600 to files named ss.1.png, ss.2.png, etc. You can also use the Python API of meshtool to do any custom export you want. It uses Panda3D under the hood, which is very easy to use.
I'm looking for a way to load a virtual human (i.e., a model rigged with a skeleton, preferably with moving eyebrows etc.) onto a web page. The functionality should be similar to the dated library Haptek Player (http://www.youtube.com/watch?v=c2iIuiT3IW8), but allow for a transparent background. Ideally it would be in WebGL/O3D, since that can be directly integrated with my existing code. However, if there's an implementation out there already in Flash3D or a different plugin, I can quickly switch my codebase to ActionScript.
I've investigated trying to send the Haptek Player vertices to a Float32Array (used by WebGL) using an npapi plugin. I can place the vertex data into a javascript array and draw the virtual human. The vertex data cannot be changed, however, since the array must be copied to a typed array (Float32Array) to be used by WebGL.
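The typed-array hand-off described above can be shown in a few lines. WebGL (and three.js's BufferAttribute) consumes a Float32Array, so the plain array coming out of the plugin has to be copied once; after that, updates must be written into the typed array itself. The vertex values below are placeholders:

```javascript
// Plain JS array as it might arrive from the npapi plugin (placeholder data)
const pluginVertices = [0.0, 1.0, 0.0, -1.0, 0.0, 0.0, 1.0, 0.0, 0.0];

// One-time copy into the typed array WebGL can consume.
// Note: this is a copy -- later writes to pluginVertices have no effect.
const positions = new Float32Array(pluginVertices);

// To animate, mutate the typed array in place each frame (no re-allocation);
// with three.js you would then set attribute.needsUpdate = true so the
// GPU buffer is re-uploaded.
function updateVertex(index, x, y, z) {
  positions[index * 3] = x;
  positions[index * 3 + 1] = y;
  positions[index * 3 + 2] = z;
}

updateVertex(0, 0.0, 2.0, 0.0); // move the first vertex up
```

So the copy itself isn't the obstacle; the key is to keep one long-lived Float32Array and write the plugin's per-frame vertex data into it, rather than re-copying a fresh JS array every frame.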
Thanks for any input!
Try http://expression.sourceforge.net/
It's C++ and OpenGL; you can experiment with converting the code to JavaScript using Emscripten.