XML3D: generateRay - javascript

I'm implementing a draggable object in XML3D and I need help with the xml3d.generateRay function. It takes two numbers as arguments; if I understand correctly, those are the x,y coordinates of a point in projection space that the ray passes through. But are those coordinates relative to the window (the top-left corner of the browser window) or to the xml3d element's upper-left corner?
Second question: how can I get the hit point from getElementByRay?
The specification says different things in different versions, and since there is no spec for 4.9, I'm asking here.

The coordinates are given in window space, so relative to the top left corner of the window.
You can get the hit point and hit normal from getElementByRay by passing two XML3DVec3 objects into the function, e.g.:
var hitPoint = new XML3DVec3();
var hitNormal = new XML3DVec3();
xml3dElement.getElementByRay(ray, hitPoint, hitNormal);
The function will fill the vectors with the hit point and normal in world space.
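Putting the two answers together, a minimal picking sketch could look like this (untested; the element id and event wiring are my own assumptions):
var xml3dElement = document.getElementById('myXml3d');
xml3dElement.addEventListener('click', function (evt) {
    // clientX/clientY are window-relative, matching what generateRay expects
    var ray = xml3dElement.generateRay(evt.clientX, evt.clientY);
    var hitPoint = new XML3DVec3();
    var hitNormal = new XML3DVec3();
    var hit = xml3dElement.getElementByRay(ray, hitPoint, hitNormal);
    if (hit) { // assuming the method returns the hit element (or null)
        console.log('hit at', hitPoint.x, hitPoint.y, hitPoint.z);
    }
});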

Find 'view' co-ordinates in vis.js

I'm working on a modification to vis.js's Graph3d to do a filled line graph, like this:
The hard part - unsurprisingly - is working out the rendering order for the polygons. I think I can do this by checking whether a ray from the viewer to a given line B crosses line A:
In this example, since line A is "in the way" of line B, we should draw line A first. I'll use a snippet of code from How do you detect where two line segments intersect? to check whether the lines cross.
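(For reference, the parametric segment test that answer describes looks roughly like this; the function and variable names here are mine:)
function segmentsIntersect(p, p2, q, q2) {
    var r = { x: p2.x - p.x, y: p2.y - p.y };   // direction of segment 1
    var s = { x: q2.x - q.x, y: q2.y - q.y };   // direction of segment 2
    var rxs = r.x * s.y - r.y * s.x;            // 2D cross product
    if (rxs === 0) return false;                // parallel or collinear
    var t = ((q.x - p.x) * s.y - (q.y - p.y) * s.x) / rxs;
    var u = ((q.x - p.x) * r.y - (q.y - p.y) * r.x) / rxs;
    return t >= 0 && t <= 1 && u >= 0 && u <= 1;
}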
However, I haven't figured how to find the position of the user's view. I kind of assumed this would be the camera object, so wrote a little bit of debug code to draw the camera on the graph:
var camera = this._convert3Dto2D(this.camera.getCameraLocation());
ctx.strokeStyle = Math.random() > 0.5 ? '#ff0000' : '#00ff00';
ctx.beginPath();
ctx.moveTo(camera.x, camera.y);
ctx.lineTo(camera.x, camera.y + 5);
ctx.stroke();
In fact, the camera co-ordinates as measured by this are always at 0,0,0 on the graph (which would be the far top right on the above screengrab). What I need, I think, is effectively the bottom of the screen.
How can I find this? Or is there a better way to achieve what I'm trying to do?
I don't know if this is still an active issue, but FWIW, Graph3D has internal handling of the sort ordering.
All graph points are sorted with respect to the viewpoint, using a representative coordinate called point.bottom. The rendering is then done using this ordering, with the most distant elements drawn first. This works fine as long as none of the elements intersect; in that case, you can expect artefacts.
Basically, all you need to do is define point.bottom per graph polygon, and Graph3D will then pick it up from there.
If you are still interested in working on this:
This happens in Graph3d.js, method Graph3d.prototype._calcTranslations(). For an example, have a look at how the Grid and Surface graph elements are initialized in Graph3d.prototype._getDataPoints(). The relevant code is:
obj = {};
obj.point = point3d;                              // the element's data point
obj.trans = undefined;                            // filled in by _calcTranslations()
obj.screen = undefined;                           // 2D screen projection, computed later
obj.bottom = new Point3d(x, y, this.zRange.min);  // representative point used for the depth sort

Three.JS: Get position of rotated object

In Three.JS, I am able to rotate an object about its origin. If I do this with a line, for instance, the line rotates, but the positions of its vertices are not updated with their new locations. Is there some way to apply the rotation matrix to the positions of the vertices to find their new locations? Say I rotate a line with points at (0,0,0) and (0,100,100) by 45° on the x axis, 20° on the y axis, and 100° on the z axis. How would I go about finding the actual position of the vertices with respect to the entire scene?
Thanks
Yes, 'entire scene' means world position.
THREE.Vector3 has an applyMatrix4() method, so you can do the same things the shader does. To project a vertex into world space you would do this:
yourPoint.applyMatrix4(yourObject.matrixWorld);
To transform that into camera space, apply the inverse of the camera's world matrix next:
yourPoint.applyMatrix4(camera.matrixWorldInverse);
To get a screen position in the -1 to 1 range, apply the projection matrix:
yourPoint.applyMatrix4(camera.projectionMatrix);
You would access your point like this:
var yourPoint = yourObject.geometry.vertices[0]; //first vertex
Also, rather than doing this three times, you can combine the matrices into one. Keeping in mind that multiplyMatrices(a, b) computes a * b, the combined projection-view-model matrix is built like this:
var neededPVMmatrix = new THREE.Matrix4().multiplyMatrices(camera.matrixWorldInverse, yourObject.matrixWorld);
neededPVMmatrix.multiplyMatrices(camera.projectionMatrix, neededPVMmatrix);
If you need a good tutorial on what this does under the hood, I recommend this:
Alteredq posted everything there is to know about three.js matrices here
Edit
One thing to note though: if you want just the rotation, not the translation, you need to use the upper 3x3 portion of the model's world matrix, which is the rotation matrix. This might be slightly more complicated. I forget exactly what three.js gives you, but I think the normalMatrix would do the trick; or you can convert your THREE.Vector3 to a THREE.Vector4 and set .w to 0, which will prevent any translation from being applied.
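An untested sketch of the w = 0 trick (variable names are mine): with w set to 0, the matrix's translation column has no effect, so only the rotation/scale part is applied.
var dir = new THREE.Vector4(0, 100, 100, 0); // w = 0: a direction, not a position
dir.applyMatrix4(yourObject.matrixWorld);    // rotated and scaled, but not translated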
Edit 2
If you want to move the line point in your example, instead of applying it to the particle, apply it to:
var yourVertexWorldPosition = geo.vertices[1].clone(); // this is your second line point, whatever you set it to in your init function
yourVertexWorldPosition.applyMatrix4(line.matrixWorld); // this transforms the new vector into world space based on the matrix you provide (line.matrixWorld)
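An untested sketch tying these steps to the example in the question (a line from (0,0,0) to (0,100,100), rotated 45°/20°/100°; an existing scene is assumed):
var geo = new THREE.Geometry(); // legacy Geometry, as used above
geo.vertices.push(new THREE.Vector3(0, 0, 0));
geo.vertices.push(new THREE.Vector3(0, 100, 100));
var line = new THREE.Line(geo, new THREE.LineBasicMaterial());
line.rotation.set(45 * Math.PI / 180, 20 * Math.PI / 180, 100 * Math.PI / 180);
scene.add(line);
line.updateMatrixWorld(true); // make sure matrixWorld reflects the rotation

// world ("entire scene") positions of the two vertices:
var v0 = geo.vertices[0].clone().applyMatrix4(line.matrixWorld);
var v1 = geo.vertices[1].clone().applyMatrix4(line.matrixWorld);
console.log(v0, v1);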

Exclude overlaid element from Google Maps viewport bounds

I am using the Google Maps API v3 to create an inline map on a website. In its container element, I also have an absolutely positioned overlay which shows some detail information, visually hovering over the map. Depending on context, this element may grow up to the size of the entire map element.
All this works fine; however, the Maps instance of course still considers the overlaid part of the map a valid, usable part of the map. This means that, especially when the overlay is at maximum height, setCenter doesn't focus on the visible center, and routes drawn with DirectionsRenderer end up partially underneath the overlay.
See this image:
Is there a way to limit the actual viewport to the blueish area, so that setCenter centers on the arrow tip and setBounds fits to the blue part?
I have managed to implement an acceptably functional workaround for the time being.
Some general notes which are good to know:
Every Map object has a Projection, which can convert between LatLng coordinates and map points.
The map points a Projection uses for calculation are in 'world' coordinates, meaning they are pixels on the world map at zoom level 0.
Every zoom level exactly doubles the number of pixels shown. This means that the number of screen pixels per map-point unit equals 2 ^ zoom.
The samples below assume a 300px wide sidebar on the right - adapting to other borders should be easy.
Centering
Using this knowledge, it becomes trivial to write a custom function for off-center centering:
function setCenter(latlng)
{
    var z = Math.pow(2, map.getZoom()); // pixels per world-coordinate unit
    var pnt = map.getProjection().fromLatLngToPoint(latlng);
    // shift the centre east by 150 screen pixels' worth of world units, so
    // the target sits in the middle of the visible (non-overlaid) area
    map.setCenter(map.getProjection().fromPointToLatLng(
        new google.maps.Point(pnt.x + 150 / z, pnt.y)));
}
The crucial bits here are the z variable and the pnt.x + 150/z calculation in the final line. Because of the above assumptions, this offsets the centre point by 150 screen pixels at the current zoom level, which compensates for the 300 pixels hidden behind the right sidebar.
Bounding
The bounds issue is far less trivial. The reason for this is that to offset the points correctly, you need to know the zoom level. For recentering this doesn't change, but for fitting to previously unknown bounds it nearly always will. Since Google Maps uses unknown margins itself internally when fitting to bounds, there is no reliable way to predict the required zoom level.
Thus a possible solution is to invoke a two-step rocket. First off, call fitBounds with the entire map. This should make the bounds and zoom level at least nearly correct. Then right after that, do a second call to fitBounds corrected for the sidebar.
The following sample implementation should be called with a LatLngBounds object as parameter, or no parameters to default to the current bounds.
function setBounds(bnd, cb)
{
    var prj = map.getProjection();
    if (!bnd) bnd = map.getBounds();
    var ne = prj.fromLatLngToPoint(bnd.getNorthEast()),
        sw = prj.fromLatLngToPoint(bnd.getSouthWest());
    if (cb) {
        // second pass: the zoom is now (nearly) right, so widen the box
        // east by 300 screen pixels' worth of world units for the sidebar
        ne.x += 300 / Math.pow(2, map.getZoom());
    } else {
        // first pass: schedule a corrected refit once fitBounds settles
        google.maps.event.addListenerOnce(map, 'bounds_changed',
            function() { setBounds(bnd, 1); });
    }
    map.fitBounds(new google.maps.LatLngBounds(
        prj.fromPointToLatLng(sw), prj.fromPointToLatLng(ne)));
}
What we do here at first is get the actual points of the bounds, and since cb isn't set we install a once-only event on bounds_changed, which is then fired after the fitBounds is completed. This means that the function is automatically called a second time, after the zoom has been corrected. The second invocation, with cb=1, then offsets the box to correct for the 300 pixel wide sidebar.
In certain cases this can lead to a slightly odd animation, but in practice I've only seen that happen when rapidly spam-clicking buttons that trigger a fit operation. It runs perfectly well otherwise.
Hope this helps someone :)
You can use the map's panBy() method, which allows you to change the center of the map by a given distance in pixels.
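For example, with the 300px sidebar described above, you could shift the view by half the sidebar's width after centering (the sign convention is worth double-checking against your layout):
map.panBy(150, 0); // pan so the centred point ends up 150px left of the element's centre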
Hope this helps!
I had a similar need and ended up just forcing some "padding" to the east of a LatLngBounds object.
On the upside, it's simple and it works. On the downside it's not really versatile. Just a quick little hack.
// start with a standard LatLngBounds object, extending as you usually would...
var bounds = new google.maps.LatLngBounds();
// ...
var ne = bounds.getNorthEast();
var sw = bounds.getSouthWest();
// the multiplier used to add space; positive for east, negative for west
var lngPadding = 1.5;
var extendedLng = ne.lng() + (ne.lng() - sw.lng()) * lngPadding;
// make a real copy of the original (plain assignment would only alias it)
// and extend it with the new Lng
var extendedBounds = new google.maps.LatLngBounds(sw, ne);
extendedBounds.extend(new google.maps.LatLng(ne.lat(), extendedLng));
map.fitBounds(extendedBounds);

Create SVGPoint inside an element with user coordinate

I have a small project running (using JavaScript) to learn SVG.
I would like to be able to track a point in a shape with its own user coordinate system. My idea is to find the coordinates of the point within the shape, then create an SVGPoint to pass around for that element. I have seen the createSVGPoint method in examples, but it seems it is used in the context of the SVG root (that is, document.documentElement.createSVGPoint() works).
When I use (in Firefox)
inSvgObj.createSVGPoint()
where inSvgObj is an SVG element, the web console says "TypeError: inSvgObj.createSVGPoint is not a function". Is it possible to create an SVGPoint within that element, to subsequently set it with values representing coordinates in that element's user coordinate system?
EDIT (after considering Robert Longson's answer):
Given that an SVGPoint can be created only from the SVG root, and that I have been unable to find a way to move it into another element's coordinate system, I have found it more convenient to use a different SVG type: SVGMatrix. In case it helps someone (as I have spent some time trying to deal with this): it is possible to manipulate the analogous values inside an SVGPoint by creating an SVGMatrix that works as a simulated point (for the purposes of coordinates). To that end, the methods createSVGMatrix(), getCTM() and multiply() (this last on SVGMatrix) are used. To illustrate, below is a (JS) function that takes 4 arguments: the x-coordinate in the user coordinate system (ucs) to transform from, the y-coordinate in that ucs, the object whose ucs we are transforming from, and an object in the ucs we want to transform to. It returns an object with three properties: the x-coordinate in the transformed ucs, its y-coordinate, and 1 (for consistency with the SVG Recommendations).
function coorUcsAToUcsB(ucsAx, ucsAy, svgObjUcsA, svgObjUcsB){
    var ctmUcsA = svgObjUcsA.getCTM();           // ucsA -> viewport cs
    var ctmUcsB = svgObjUcsB.getCTM().inverse(); // viewport cs -> ucsB
    var mtx = document.getElementsByTagName('svg')[0].createSVGMatrix();
    mtx.e = ucsAx; // the translation column (e, f) carries the coordinates
    mtx.f = ucsAy;
    var simulSvgP = ctmUcsB.multiply(ctmUcsA.multiply(mtx)); //1
    return {"x": simulSvgP.e, "y": simulSvgP.f, "z": 1};
}
//1 This line produces an SVG matrix whose translation column holds the coordinates in ucsB, computed from the analogous matrix holding coordinates in ucsA: it takes the coordinates in ucsA to the viewport's cs, and from there to coordinates in ucsB. For an explanation of the matrix operation, see this.
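A hypothetical usage (the element ids are mine), expressing the point (10, 20) of one group's ucs in another group's ucs:
var groupA = document.getElementById('groupA');
var groupB = document.getElementById('groupB');
var p = coorUcsAToUcsB(10, 20, groupA, groupB); // {x: ..., y: ..., z: 1}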
Any comments, in particular pointing out an existing method that does the same, or any drawbacks, will be more than welcome.
You create the SVGPoint using the root element, but once you've done that you can set whatever values in it you want. When you assign those values to an object, the object will interpret them in its own coordinate system.
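A minimal sketch of that approach (the element id is mine): create the point on the root, then convert it through the target element's CTM with matrixTransform():
var svgRoot = document.querySelector('svg');
var shape = document.getElementById('myShape');
var pt = svgRoot.createSVGPoint();
pt.x = 10; // coordinates meant in shape's user space
pt.y = 20;
// shape-local -> screen coordinates, and back again:
var onScreen = pt.matrixTransform(shape.getScreenCTM());
var backLocal = onScreen.matrixTransform(shape.getScreenCTM().inverse());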

How would you create a particle SURFACE emitter based on a created canvas shape? HTML5 Canvas JS

I have a shape (a quarter circle) that I've created using the HTML canvas methods:
moveTo
lineTo
quadraticCurveTo
How do I go about exploding the shape into particles and then returning them to form a circle?
I'm not going to write the code for you because it will take some time, and I'm sure you can find examples on the web, but I'll tell you the theory you need to know in order to make such a thing.
Create an in-memory canvas (using document.createElement('canvas')) that will never be seen on the page. This canvas must be at least as large as your object; I'm going to assume it is exactly as large. We'll call it tempCanvas, and its context tempCtx.
Draw your object to tempCtx.
There will be some event that you didn't mention exactly but I'm sure you have in mind. Either you press a button or click on the object and it "explodes". For the sake of picking something I'll assume you want it to explode on click.
So to do the explosion:
Draw the object onto your normal context: ctx.drawImage(tempCanvas, x, y) so the user sees something
You're going to want to have an array of pixels for the location of each pixel in tempCanvas. So if tempCanvas is 20x30 you'll want an array of [20][30] to correspond.
You have to keep data for each of those pixels. Specifically, their starting point, which is easy, because pixel [2][4]'s starting point is (2,4)! And also their current location, which is identical to starting point at first but will change on each frame.
When the explosion event occurs keep track of the original mouse x and y position.
At this point every single pixel has a vector, which means you have a direction. If you clicked in the middle of the object, you'll want to save the mouse coordinates of (10,15) (see note 1). So now all of the pixels of the to-be-exploded image have their trajectory. There's a bit of math here that I'm taking for granted, but if you search separate topics either on SO or on the internet, you'll find out how to find the slope etc. of these lines and continue them.
For every frame hereafter you must take each pixel [x][y] and use ctx.drawImage(tempCanvas, x, y, 1, 1, newX, newY, 1, 1) where x and y are the same as the pixel's [x][y] and the newX and newY are calculated using the vector and finding what the next point would be along its line.
The result will be each pixel of the image being drawn in a location that is slightly more away from the original click point. If you continue to do this frame after frame it will look as if the object has exploded.
That's the general idea, anyway. Let me know if any of it is unclear.
note 1: Most likely your normal canvas won't be the same size as the to-explode object. Maybe the object is placed at 100,100 so you really clicked on 110, 115 instead of 10,15. I'm omitting that offset just for the sake of simplicity.
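A rough, untested sketch of the frame loop those steps describe (tempCanvas, ctx and the 20x30 size come from the steps above; all other names are mine):
var W = 20, H = 30; // tempCanvas dimensions
var particles = [];
for (var px = 0; px < W; px++) {
    for (var py = 0; py < H; py++) {
        particles.push({ sx: px, sy: py,  // source pixel in tempCanvas
                         x: px, y: py,    // current position
                         dx: 0, dy: 0 }); // per-frame velocity
    }
}

function explode(mouseX, mouseY) {
    particles.forEach(function (p) {
        // direction away from the click point, normalised, times a speed
        var vx = p.x - mouseX, vy = p.y - mouseY;
        var len = Math.sqrt(vx * vx + vy * vy) || 1;
        p.dx = 2 * vx / len;
        p.dy = 2 * vy / len;
    });
    requestAnimationFrame(frame);
}

function frame() {
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    particles.forEach(function (p) {
        p.x += p.dx;
        p.y += p.dy;
        // draw this particle's single source pixel at its current spot
        ctx.drawImage(tempCanvas, p.sx, p.sy, 1, 1, p.x, p.y, 1, 1);
    });
    requestAnimationFrame(frame);
}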
