Does WebGL contain push/popMatrix? - javascript

Does WebGL contain push/popMatrix? And if not, how would I go about recreating them?

No. WebGL is based on OpenGL ES 2.0, so there is no built-in matrix management or fixed-function pipeline. The model-view and projection matrices need to be managed entirely in your own code and passed to shaders at draw time. You don't really need push and pop matrix if you are using a scene graph or some other similar scene-management system; all you really need is a good matrix and vector math library.
If you are still set on using push and pop matrix, you could simply keep an array of matrices and write push and pop functions that save the current matrix onto the array or restore it from there.
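A minimal sketch of such a stack (the variable names are placeholders, and a real app would use a math library such as glMatrix for the actual matrix math):

```javascript
// Minimal matrix stack mirroring OpenGL's pushMatrix/popMatrix.
// The "current" matrix is a plain array of 16 numbers (column-major 4x4).
var matrixStack = [];
var currentMatrix = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1]; // identity

function pushMatrix() {
  // Save a copy, not a reference, so later edits don't clobber the saved state.
  matrixStack.push(currentMatrix.slice());
}

function popMatrix() {
  if (matrixStack.length === 0) {
    throw new Error("popMatrix called on an empty stack");
  }
  currentMatrix = matrixStack.pop();
}
```

The only subtlety is the `.slice()`: pushing the array itself would store a reference, so later modifications to the current matrix would silently change the saved copy too.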
I would get the OpenGL ES 2.0 Programming Guide if you need more help transitioning to WebGL. The book's website (http://opengles-book.com/) contains a download link to sample code with demos for a variety of platforms and languages, including WebGL. It also contains a decent math library if you need it.

From what I see here and there, there is no native support, but you can easily build such a stack with an Array.

Related

Use Spline interpolation in WebGL

How can I use spline interpolation in 3D with WebGL? This is for molecule modeling, specifically trace representation. I have been searching for information about it, but I haven't found how to use it in WebGL. Will I need to create the algorithm myself?
You can use it, but there are no ready-to-use facilities in WebGL. What you need to do is, instead of generating geometry for straight lines between atoms (?), generate appropriately segmented splines. Note that WebGL has no geometry shaders, so the segmentation has to happen in your own code; in WebGL 2, features such as transform feedback may come in handy to generate such splines in a performant manner.
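As an illustration of segmenting a spline into line geometry, here is a sketch using Catmull-Rom interpolation over a list of 3D control points. This is one common spline choice, not anything WebGL provides, and all names here are placeholders:

```javascript
// Catmull-Rom interpolation of one coordinate between points p1 and p2,
// with p0 and p3 as neighbouring control points; t is in [0, 1].
function catmullRom(p0, p1, p2, p3, t) {
  var t2 = t * t, t3 = t2 * t;
  return 0.5 * ((2 * p1) +
                (-p0 + p2) * t +
                (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2 +
                (-p0 + 3 * p1 - 3 * p2 + p3) * t3);
}

// Expand an array of [x, y, z] control points into a denser polyline,
// suitable for uploading to a WebGL vertex buffer (via a Float32Array).
function splineSegments(points, segmentsPerSpan) {
  var out = [];
  for (var i = 0; i + 1 < points.length; i++) {
    var p0 = points[Math.max(i - 1, 0)];          // clamp at the ends
    var p1 = points[i];
    var p2 = points[i + 1];
    var p3 = points[Math.min(i + 2, points.length - 1)];
    for (var s = 0; s < segmentsPerSpan; s++) {
      var t = s / segmentsPerSpan;
      out.push([
        catmullRom(p0[0], p1[0], p2[0], p3[0], t),
        catmullRom(p0[1], p1[1], p2[1], p3[1], t),
        catmullRom(p0[2], p1[2], p2[2], p3[2], t)
      ]);
    }
  }
  out.push(points[points.length - 1].slice());    // include the final point
  return out;
}
```

Catmull-Rom is convenient for a trace representation because the curve passes through every control point (each atom position), unlike a Bézier curve.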
However, if you're implementing a molecular viewer or editor and need to display alpha helices, beta sheets, and such for a protein, the best way may be to render them as pre-made primitives (e.g., modeled in 3D software, exported, and used as ready-made geometry).

Parrot AR.Drone controlled by OpenCV in C++

I'm currently building a texture classifier with the C++ API of OpenCV. I was looking to use this to recognise textures and, ideally, help a Parrot AR.Drone 2.0 navigate to a specific texture. I have found the documentation on NodeCopter and its OpenCV bindings. I wasn't sure whether this would require me to rewrite my program in JavaScript?
If there is some sort of interface, is it feasible to run my program in the background, pull images from the Parrot, analyse them, and send control commands back to the Parrot?
I have been working with OpenCV for about 3 months and have some basic understanding of Node.
Thanks in advance!
There are lots of ways to interface with a Parrot AR drone. NodeCopter is one option, but there are others. ROS has good AR drone bindings I've used, which would give you tons of flexibility at the expense of some complexity.
You might also consider building your C++ program into a stand-alone executable and calling it from Node.js. You could also interface with the AR.Drone API directly.
It's not too hard to write a program to control an AR.Drone with some sort of OpenCV-based tracking. JavaScript would probably be my suggestion as the easiest way to do that, but as @abarry alluded, you could do it with any language that has bindings for the AR.Drone communications protocol and OpenCV.
The easiest thing would be to have a single program that controls the drone, and processes images with OpenCV. You don't need to run anything in the background.
copterface is a Node.js application that uses node-ar-drone and node-opencv to recognize faces and steer the drone toward them. It might be a good starting point for your application.
Just to give an example in another language, turboshrimp-tracker is a Clojure application that shows you live video from the drone, lets you select a region of the video containing an object, and then tracks that object using OpenCV. It doesn't actually steer the drone toward the tracked object, but that would be pretty easy to add.

Are there any automated tools that can generate 2d images from COLLADA format 3d models for display on a website?

First off, I have very little experience with 3d modeling, so this may be a poorly worded question. If that is the case, then I apologize.
Essentially, I have a large database of COLLADA format 3d models that need to be displayed in a gallery on a website. The number of models is on the order of thousands, so it would be preferable for any type of display format to be automated.
My initial thought was to display these files in 3d using WebGL. However, the lack of support from Internet Explorer is, unfortunately, a deal breaker.
Also, any other JavaScript API for 3d model display would probably not be feasible as far as loading time goes, given that these do not involve any sort of hardware acceleration.
My next best option would be to have multiple 2d images of the models taken from various angles. However, with the number of models in this database, it would be nearly impossible to manually output 2d images of each model.
So, my question is this: are there any tools that can auto-generate images from a large set of 3d models? Or, better yet, is there a way these images can be rendered directly from the model and displayed in the browser without an excessive amount of load time?
Thank you so much!
You could use meshtool to generate 2D screenshots from 3D models, either on the command line or from the Python API.
Here's an example from the command line of saving a single screenshot:
meshtool --load_collada file.dae --save_screenshots ss.png
There's also a command to take more than one screenshot, rotating around the model:
meshtool --load_collada file.dae --save_rotate_screenshots ss 10 800 600
This would save 10 screenshots of size 800x600 to files named ss.1.png, ss.2.png, etc. You can also use the Python API of meshtool to do any custom export you want. It uses Panda3D under the hood, which is very easy to use.

Virtual Human library for WebGL or Flash

I'm looking for a way to load a virtual human (i.e., a model rigged with a skeleton, preferably with moving eyebrows, etc.) onto a web page. The functionality should be similar to the dated library Haptek Player (http://www.youtube.com/watch?v=c2iIuiT3IW8), but allow for a transparent background. Ideally it would be in WebGL/O3D, since it can be directly integrated with my existing code. However, if there's an implementation out there already in Flash 3D or a different plugin, I can quickly switch my codebase to ActionScript.
I've investigated trying to send the Haptek Player vertices to a Float32Array (used by WebGL) using an NPAPI plugin. I can place the vertex data into a JavaScript array and draw the virtual human. The vertex data cannot be changed, however, since the array must be copied to a typed array (Float32Array) to be used by WebGL.
Thanks for any input!
Try http://expression.sourceforge.net/
It's C++ and OpenGL; you can experiment with converting the code to JavaScript using Emscripten.

Bing Maps - Javascript vs Silverlight

Currently, I am evaluating the creation of a map-based system to plot data. This data would consist of shape layers (a grid, stored in a SQL 2008 Geography column) and multiple points (~5500 initially, lat/lon points in the same DB) that will plot the location of items on the grid. So, my question is: is there a large difference between the Silverlight Bing Maps implementation and the JavaScript-based implementation? Here is what I can gather from my research:
Silverlight Pros
Can handle large amounts of data more quickly
API/SDK to tie directly to .NET application code
JavaScript Pros
Do not have to download/install Silverlight on client side
Can leverage jQuery or other frameworks to pull data from a webservice (I know SL can do this too, using WCF, but I know jQuery rather well)
I know from this list that it looks like I should go with Silverlight; however, I have also never done a bit of coding using the XAML stuff. Most of my experience of late is with the .NET MVC stuff, and I cannot help but take that into account as well. Does anyone know the performance 'ratio' between Silverlight and JavaScript, or at what point a JavaScript implementation will choke? One more thing: I have looked at the DataConnect project on CodePlex, but it seems to be broken; I cannot get the WKT or XAML functions to work either on their live site or in the downloaded project.
If anyone out there has done a comparison/has words of wisdom for guidance/can add to my list for either of the two, I am all ears.
EDIT
I found a great JavaScript/.NET MVC application example using SQL 2008 on CodePlex: Ajax Map Data Connector. It gives examples of pulling polygons, lines, and points of interest from the database, placing them on the map using image tiles or the MS API, as well as using intersection to determine items around a point or within a bounding box.
Personally I prefer the JavaScript version because it's more multiplatform (e.g. mobiles) and easy to integrate into a webapp (plus I also love jQuery), but I think the deciding factor is probably what you want to use the application for.
However, for JavaScript, even though I love version 7, you may want to stick with version 6.3 for now because too many core components were removed (but are planned to be re-added in the future), e.g. infoboxes and client-side clustering. Of course you can do your own implementations, which is what I did personally, but I would still advise using 6.3 for now.
I'd go with the JavaScript control (better support for multiple devices, currently more actively developed than the Silverlight control, and better suited to your skill set). However, don't try to plot 5,500 points on it. It will die.
What's more, if you're thinking about plotting 5,500 points then there's something wrong with your application design anyway: an end user is not going to be able to discern that many different points on the map. Let them filter for particular types of points, only retrieve those that are visible in the current map view, or use clustering to group points at higher zoom levels; you should only be looking to have at most maybe 100-200 visible data points on the map at any one time. If you really must plot that many points, then pre-render them as a tile layer and cache that rather than trying to plot dynamic vector data on the map.
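The clustering idea can be sketched as follows: bucket the lat/lon points into a coarse grid and plot one pin per occupied cell. This is pure illustration, independent of the Bing Maps API (which has its own pin and layer objects); all names here are placeholders:

```javascript
// Grid-based clustering: group lat/lon points into cells of a given size
// and return one representative per occupied cell (the cell centroid),
// along with a count. The cell size would shrink as the user zooms in.
function clusterPoints(points, cellSizeDegrees) {
  var cells = {};
  points.forEach(function (p) {
    // Points whose coordinates fall in the same grid cell share a key.
    var key = Math.floor(p.lat / cellSizeDegrees) + ':' +
              Math.floor(p.lon / cellSizeDegrees);
    if (!cells[key]) cells[key] = { lat: 0, lon: 0, count: 0 };
    cells[key].lat += p.lat;
    cells[key].lon += p.lon;
    cells[key].count += 1;
  });
  // One output marker per occupied cell, positioned at the centroid.
  return Object.keys(cells).map(function (key) {
    var c = cells[key];
    return { lat: c.lat / c.count, lon: c.lon / c.count, count: c.count };
  });
}
```

With ~5,500 points this runs in linear time and reduces the map to a handful of markers per zoom level; the `count` field can drive the marker label or size.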
And I disagree with wildpeaks: v7.0 is the latest stable release of the Bing Maps AJAX platform, and is a major change from v6.3. If you start coding with v6.3 now, you'll only have to go through upheaval at a later date when you migrate to v7.0. Best to start off with v7.0 rather than learn a deprecated API.
