getPointAtLength with SVG not working correctly - javascript

I have a huge SVG file and all of its data is in one single path element. It's something like this:
<path d="M724 1541c133,-400 36,-222 334,-520 76,-75 440,-37 557,-37 145,291 111,
-32 111,445 0,344 -3,260 483,260 457,0 177,-111 409,-111 0,-62 0,-124 0,-186
-368,0 190,-111 -409,-111 -143,0 2,40 -148,223 ... huge SVG "/>
I am converting this huge SVG path to polylines using the function "getPointAtLength", as suggested in this answer to my earlier question: https://stackoverflow.com/a/39405746/2934699
The problem is this: the SVG is not one continuous path; it contains several shapes (rectangles, circles, ...) that are not connected. But when I use the method from the link above, all my shapes end up connected. Is there some way to solve this problem?

There are two possible approaches I can think of to solve your problem.
1. Quick and dirty
As you loop through the path, calculate the distance from the last path point. If that distance exceeds some limit, you can consider that you have stepped into a new subpath. So begin a new polyline.
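A minimal sketch of this approach, assuming a hypothetical "mypath" id and a sampling step and jump threshold you tune to your drawing:
// sample the path and split whenever consecutive sample points are suspiciously far apart
var path = document.getElementById("mypath");
var step = 5;                      // sampling interval along the path
var jumpLimit = step * 4;          // anything larger is treated as a jump to a new subpath
var totalLen = path.getTotalLength();
var polylines = [];
var current = [];
var prev = null;
for (var len = 0; len <= totalLen; len += step) {
  var p = path.getPointAtLength(len);
  if (prev) {
    var d = Math.hypot(p.x - prev.x, p.y - prev.y);
    if (d > jumpLimit && current.length) {   // probably the start of a new subpath
      polylines.push(current);
      current = [];
    }
  }
  current.push([p.x, p.y]);
  prev = p;
}
if (current.length) polylines.push(current);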
2. More accurate but trickier
Preprocess the path using the mypath.pathSegList property. That property holds a list of the path commands in the path.
Then loop through the pathSegList and take note of where each move command is. These mark the start of each subpath.
As you loop through flattening the path, call the mypath.getPathSegAtLength() function. It returns the index of the pathseg entry at that length.
Compare that with the data you recorded in the preprocessing step to see if you have moved into a new subpath
If you have, start a new polyline (or polygon)
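A hedged sketch of this approach (it relies on the deprecated pathSegList/getPathSegAtLength API discussed below, and again assumes a hypothetical "mypath" id):
// map every segment index to its subpath number (count of moveto commands seen so far)
var path = document.getElementById("mypath");
var segList = path.pathSegList;
var subpathOf = [];
var moveCount = 0;
for (var i = 0; i < segList.numberOfItems; i++) {
  var letter = segList.getItem(i).pathSegTypeAsLetter;
  if (letter === "M" || letter === "m") moveCount++;
  subpathOf.push(moveCount);
}
// while flattening, start a new polyline whenever the subpath number changes
var step = 5;
var totalLen = path.getTotalLength();
var polylines = [];
var current = [];
var lastSub = -1;
for (var len = 0; len <= totalLen; len += step) {
  var sub = subpathOf[path.getPathSegAtLength(len)];
  if (sub !== lastSub && current.length) {
    polylines.push(current);
    current = [];
  }
  lastSub = sub;
  var p = path.getPointAtLength(len);
  current.push([p.x, p.y]);
}
if (current.length) polylines.push(current);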
One complication is that Chrome has deprecated support for the pathSegList property and has instead moved to the new SVG2 API for this (mypath.getPathData()). Fortunately there is a polyfill for Chrome that adds back support for the old API. Or you can switch to the new API and use a different polyfill so that it works in older browsers.
You can find details on the two polyfills here

Related

Optimizing smooth tween between svg paths in JavaScript / React Native

I'm currently porting an application to React Native that captures user input as a stroke and animates it to the correct position to match an svg (pictures below). In the web, I use a combination of multiple smoothing libraries & pixijs to achieve perfectly smooth transitions with no artifacts.
With React Native & reanimated I'm limited to functions I can write by hand to handle the interpolation between two paths. Currently what I'm doing is:
Convert the target svg to a fixed number N of points
Smooth the captured input and convert it to a series of N points
Loop over each coordinate and interpolate the value between those two points (linear interpolation)
Run the resulting points array through a Catmull-Rom function
Render the resulting SVG curve
Steps 1 & 2 I can cache prior to the animation, but steps 3, 4 & 5 need to happen on each render.
Unfortunately, using this method, I'm limited to around N = 300 as the maximum number of points before dropping frames. I'm also still seeing some artifacts at the end of the animation that I don't know how to fix.
This is sufficient, but given that on the web I can animate tens of thousands of points without dropping frames, I feel like I am missing a key performance optimization here. For example, is there a way to combine steps 3 & 4? Are there more performant algorithms than Catmull-Rom?
Is there a better way to achieve a smooth transition between two vector paths using just pure JavaScript (or dropping into Swift if that is possible)?
Is there something more I can do to remove the artifacts pictured in the last photo? I'm not sure what these are called technically so it's hard for me to research - the catmull-rom spline removed most of them but I still see a few at the tail ends of the animation.
Animation end/start state:
Animation middle state:
Animation start/end state (with artifact):
You might want to have a look at flubber.js
Also, why not ditch the Catmull-Rom for simple linear sections? They are probably detailed enough with 1000+ points.
If neither helps, or you want to get as fast as possible, you might want to leverage the GPU's power for embarrassingly parallel workloads like the interpolation between two N-sized arrays.
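As a rough illustration of the linear variant (a sketch with made-up names: it interpolates two equally sized point arrays and emits a path of straight segments in a single pass, effectively merging steps 3-5 of the question):
// lerp two N-sized point arrays and build an SVG path of straight segments
function lerpPath(from, to, t) {
  var d = "";
  for (var i = 0; i < from.length; i++) {
    var x = from[i][0] + (to[i][0] - from[i][0]) * t;
    var y = from[i][1] + (to[i][1] - from[i][1]) * t;
    d += (i === 0 ? "M" : "L") + x + " " + y;
  }
  return d;
}
// usage: re-render each frame with lerpPath(capturedPoints, targetPoints, progress)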
edit:
Also consider using the Skia renderer, which already leverages the GPU and supports features that fit your use case perfectly:
import {Canvas, Path, Skia, interpolatePath} from "@shopify/react-native-skia";

// obviously you need to modify this to use your arrays
const path1 = Skia.Path.Make();
path1.moveTo(0, 0);
path1.lineTo(100, 0);

const path2 = Skia.Path.Make();
path2.moveTo(0, 0);
path2.lineTo(0, 100);

// you have to do this dynamically (maybe using Skia animations)
let animationProgress = 0.5;

// the interpolation magic is already implemented for you
let path = interpolatePath(animationProgress, [0, 1], [path1, path2]);

const PathDemo = () => {
  return (
    <Canvas style={{ flex: 1 }}>
      <Path path={path} color="lightblue" />
    </Canvas>
  );
};

JS Canvas get pixel value very frequently

I am creating a video game based on Node.js/WebGL/Canvas/PIXI.js.
In this game, blocks have arbitrary shapes: they can be circles, polygons, or anything else. So my physics engine needs to know exactly where things are, which pixels are walls and which are not. Since I don't think PIXI allows this, I create an invisible canvas where I draw all the walls' images of the map. Then I use getImageData to build an "isWall(x, y)" function:
function isWall(x, y) {
  // ctx is the invisible canvas's 2D context (getImageData lives on the context)
  return ctx.getImageData(x, y, 1, 1).data[3] != 0;
}
However, this is very slow (it takes up to 70% of the CPU time of the game, according to Chrome profiling). Also, since I introduced this function, I sometimes get the error "Oops, WebGL crashed" without any additional detail.
Is there a better method to access the value of the pixel? I thought about storing everything in a static bit array (walls have a fixed size), with 1 corresponding to a wall and 0 to a non-wall. Is it reasonable to have a 10-million-cells array in memory?
Some thoughts:
For a first check: use collision regions for all of your objects. The regions can even be defined per side depending on the shape (i.e. complex shapes). Only check for collisions inside intersecting regions.
Use half resolution for hit-test bitmaps (or even 25% if your scenario allows). Our brains are not capable of detecting pixel-accurate collisions when things are moving, so this can be taken advantage of.
For complex shapes, pre-store the whole bitmap (based on its region(s)) but convert it to a single-value typed array such as a Uint8Array with high and low values (re-use this instead of fetching pixels one by one via the context). Subtract the object's position and use the result as a delta into your shape region, then hit-test against the "bitmap" (see the sketch after these notes). If the shape rotates, transform the incoming check points accordingly (there is probably a sweet spot where updating the bitmap becomes faster than transforming a bunch of points; you need to test for your scenario).
For close-to-square shaped objects, compromise and use a simple rectangle check.
For circles and ellipses, compare squared distances against the squared radius so you can skip the square root.
In some cases you can perhaps use collision predictions which you calculate before the game starts, when you know all objects' positions, directions and velocities (calculate the complete motion path, find intersections for those paths, calculate time/distance to those intersections). If your objects change direction etc. due to other events during their path, this will of course not work so well (or try and see if re-calculating is beneficial or not).
I'm not sure why you would need 10m bits stored in memory; it's doable though, but you will need to use something like a quad-tree and split the array up so it becomes efficient to look up a pixel state. IMO you will only need to store "bits" for the complex shapes, and you can limit it further by defining multiple regions per shape. For simpler shapes just use vectors (rectangles, radius/distance). Do performance tests often to find the right balance.
In any case, these sorts of things have to be hand-optimized for the specific scenario, so this is just a general take on it. Other factors such as high velocities, rotation, reflection etc. will affect the approach, and it can quickly become very broad. Hope this gives some input though.
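A minimal sketch of the region pre-check plus a pre-stored Uint8Array hit map at half resolution, as suggested above (the rasterization callback and region shape are assumptions for illustration):
// axis-aligned region check: only do pixel tests when regions intersect
function regionsIntersect(a, b) {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

// pre-store a half-resolution hit map for a complex shape once, by drawing it
// to an offscreen canvas and keeping only the alpha channel as 0/1 values
function buildHitMap(drawShape, w, h) {
  var scale = 0.5;
  var cw = Math.ceil(w * scale), ch = Math.ceil(h * scale);
  var cvs = document.createElement("canvas");
  cvs.width = cw; cvs.height = ch;
  var ctx = cvs.getContext("2d");
  ctx.scale(scale, scale);
  drawShape(ctx);                              // caller renders the shape here
  var data = ctx.getImageData(0, 0, cw, ch).data;
  var map = new Uint8Array(cw * ch);
  for (var i = 0; i < map.length; i++) map[i] = data[i * 4 + 3] > 0 ? 1 : 0;
  return {map: map, w: cw, h: ch, scale: scale};
}

// hit test in object-local coordinates (subtract the object's position first)
function hitTest(hitMap, localX, localY) {
  var x = (localX * hitMap.scale) | 0;
  var y = (localY * hitMap.scale) | 0;
  if (x < 0 || y < 0 || x >= hitMap.w || y >= hitMap.h) return false;
  return hitMap.map[y * hitMap.w + x] === 1;
}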
I use bit arrays to store 0 || 1 info and it works very well.
The information is stored compactly and gets/sets are very fast.
Here is the bit library I use:
https://github.com/drslump/Bits-js/blob/master/lib/Bits.js
I've not tried with 10m bits so you'll have to try it on your own dataset.
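If you would rather not pull in a library, a minimal packed bit array is only a few lines (this is a generic sketch, not the linked library's API):
// packed bit array: 8 flags per byte, constant-time get/set
function BitArray(size) {
  this.data = new Uint8Array(Math.ceil(size / 8));
}
BitArray.prototype.set = function (i, on) {
  if (on) this.data[i >> 3] |= 1 << (i & 7);
  else this.data[i >> 3] &= ~(1 << (i & 7));
};
BitArray.prototype.get = function (i) {
  return (this.data[i >> 3] >> (i & 7)) & 1;
};
// 10 million pixels fit in about 1.25 MB this way
var walls = new BitArray(10 * 1000 * 1000);
walls.set(12345, true);
walls.get(12345); // 1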
The solution you propose is very "flat", meaning each pixel must have a corresponding bit. This results in a large amount of memory being required, even if the information is stored as bits.
An alternative is to test data ranges instead of testing each pixel:
If the number of wall pixels is small versus the total number of pixels you might try storing each wall as a series of "runs". For example, a wall run might be stored in an object like this (warning: untested code!):
// an object containing all horizontal wall runs
var xRuns={}
// an object containing all vertical wall runs
var yRuns={}
// define a wall that runs on y=50 from x=100 to x=185
// and then runs on x=185 from y=50 to y=225
var y=50;
var x=185;
if(!xRuns[y]){ xRuns[y]=[]; }
xRuns[y].push({start:100,end:185});
if(!yRuns[x]){ yRuns[x]=[]; }
yRuns[x].push({start:50,end:225});
Then you can quickly test an [x,y] against the wall runs like this (warning untested code!):
function isWall(x, y){
  if (xRuns[y]) {
    var a = xRuns[y];
    var i = a.length;
    while (i--) {
      var run = a[i];
      if (x >= run.start && x <= run.end) { return(true); }
    }
  }
  if (yRuns[x]) {
    var a = yRuns[x];
    var i = a.length;
    while (i--) {
      var run = a[i];
      if (y >= run.start && y <= run.end) { return(true); }
    }
  }
  return(false);
}
This should require very few tests because the x & y exactly specify which arrays of xRuns and yRuns need to be tested.
It may (or may not) be faster than testing the "flat" model because there is overhead getting to the specified element of the flat model. You'd have to perf test using both methods.
The wall-run method would likely require much less memory.
Hope this helps...Keep in mind the wall-run alternative is just off the top of my head and probably requires tweaking ;-)

Node.js/Javascript library to test if point is in geojson multipolygon

Is there some library for node.js or javascript in general that provides a function to check if a coordinate is in a geojson multipolygon?
I'm trying to create a small HTTP API that tells me which multipolygons (representing countries, counties, cities, etc.) contain a given coordinate.
I thought I'd hold a list of all multipolygons and their bounding boxes in memory, and then first check for each polygon whether its bounding box contains the coordinate. If yes, it'll then check whether the coordinate is in the multipolygon itself.
I know there's a library called "clipper" that got ported to JavaScript, but it seems that the library does not provide a simple "pointInPolygon" function, even though the library itself is very powerful. Is it still possible with this library?
Additionally, I've found another library called "geojson-js-utils" but it does not seem to support multipolygons (at least it's not mentioned there)
I've found some other libraries that can check if a point is in a polygon, but I don't know how to use them to check if a point is in a multipolygon.
Any hints?
In the newest Clipper there is an efficient PointInPolygon function. It uses the algorithm from "The Point in Polygon Problem for Arbitrary Polygons" by Hormann & Agathos.
The documentation of Javascript Clipper's PointInPolygon function says:
ClipperLib.Clipper.PointInPolygon()
Number PointInPolygon(IntPoint pt, Path poly)
Returns 0 if false, -1 if pt is on poly and +1 if pt is in poly.
Usage:
var poly = [{X:10,Y:10},{X:110,Y:10},{X:110,Y:110},{X:10,Y:110}];
var pt = new ClipperLib.IntPoint(50,50);
var inpoly = ClipperLib.Clipper.PointInPolygon(pt, poly);
// inpoly is 1, which means that pt is in polygon
To test a multipolygon, you can traverse its subpolygons and check each of them using PointInPolygon.
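A rough sketch of that traversal for a GeoJSON MultiPolygon, using the function above (the coordinate conversion and hole handling are assumptions you may need to adapt; Clipper works on {X, Y} integer-style points, so real longitude/latitude values may need scaling):
// GeoJSON MultiPolygon coordinates: [ polygon ][ ring ][ position ],
// ring 0 is the outer boundary, further rings are holes
function toClipperPath(ring) {
  return ring.map(function (pos) { return {X: pos[0], Y: pos[1]}; });
}

function pointInMultiPolygon(lng, lat, multiPolygonCoords) {
  var pt = new ClipperLib.IntPoint(lng, lat);
  for (var i = 0; i < multiPolygonCoords.length; i++) {
    var rings = multiPolygonCoords[i];
    // must be inside the outer ring...
    if (ClipperLib.Clipper.PointInPolygon(pt, toClipperPath(rings[0])) === 0) continue;
    // ...and outside every hole
    var inHole = false;
    for (var j = 1; j < rings.length; j++) {
      if (ClipperLib.Clipper.PointInPolygon(pt, toClipperPath(rings[j])) === 1) {
        inHole = true;
        break;
      }
    }
    if (!inHole) return true;
  }
  return false;
}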

Create SVGPoint inside an element with user coordinate

I have a small project (to learn SVG) running (using javascript).
I would like to be able to track a point in a shape that has its own user coordinate system. My idea is to find the coordinates of the point within the shape, then create an SVGPoint so that I can pass it on to that element. I have seen the createSVGPoint method in examples, but it seems it is only used in the context of the SVG root (that is, document.documentElement.createSVGPoint() works).
When I use (in Firefox)
inSvgObj.createSVGPoint()
where inSvgObj is an SVG element (not the root), the web console says "TypeError: inSvgObj.createSVGPoint is not a function". Is it possible to create an SVGPoint within that element, to subsequently set with values representing coordinates in that element's user coordinate system?
EDIT (after considering Robert Longson's answer):
Given that an SVGPoint can only be created from the SVG root, and that I have been unable to find a way to move it into another element, I have found it more convenient to use a different SVG type: SVGMatrix. In case it helps someone (as I have spent some time trying to deal with this), it is possible to manipulate the analogous values inside an SVGMatrix that works as a simulated point for the purposes of coordinates. To that end the methods createSVGMatrix(), getCTM() and multiply() (this last on SVGMatrix) are used. To illustrate, here is a (js) function that takes 4 arguments: the x-coordinate in the user coordinate system (ucs) to transform from, the y-coordinate in that ucs, the object whose ucs we want to transform from, and an object in the ucs we want to transform to. It returns an object with three properties: the x-coordinate in the target ucs, its y-coordinate and 1 (for consistency with the SVG Recommendation).
function coorUcsAToUcsB(ucsAx, ucsAy, svgObjUcsA, svgObjUcsB){
  var ctmUcsA = svgObjUcsA.getCTM();
  var ctmUcsB = svgObjUcsB.getCTM().inverse();
  var mtx = document.getElementsByTagName('svg')[0].createSVGMatrix();
  mtx.e = ucsAx;
  mtx.f = ucsAy;
  var simulSvgP = ctmUcsB.multiply(ctmUcsA.multiply(mtx)); //1
  return {"x": simulSvgP.e, "y": simulSvgP.f, "z": 1};
}
//1 this line builds a matrix whose translation components (e, f) hold the ucsA coordinates, maps them to the viewport's coordinate system and from there into ucsB; reading e and f of the result gives the transformed point. For an explanation of the matrix operation, see this.
Any comments, in particular if I have overlooked an existing method that does the same, or any drawbacks, will be more than welcome.
You create the SVGPoint using the root element, but once you've done that you can set whatever values in it you want. When you pass that point to an object, the object will interpret the values in its own coordinate system.
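A small sketch of that pattern (the element id is hypothetical): the point is created from the root, filled with values meant in the target element's user coordinates, and mapped with that element's CTM when needed.
var svgRoot = document.querySelector("svg");
var shape = document.getElementById("myShape");      // hypothetical element

// the point itself is just a pair of numbers; create it from the root
var pt = svgRoot.createSVGPoint();
pt.x = 20;                                            // values in shape's user space
pt.y = 30;

// when you need it in the viewport's space, run it through shape's CTM
var inViewport = pt.matrixTransform(shape.getCTM());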

apply all transform matrices

I am looking for a reasonably fast way to apply all transform matrices of a given SVG graphic. In other words: the algorithm should remove all "transform" attributes and convert all coordinates of the graphic to absolute coordinates.
Is there any library that can do this, or is there any SVG DOM interface method that could do that?
EDIT:
If I call the consolidate method like this:
$.each( svg.find( 'path' ), function( i ){
this.transform.baseVal.consolidate();
});
nothing happens. If I call it like this:
$.each( svg.find( 'path' ), function( i ){
this.transform.animVal.consolidate();
});
I get this error:
So, how should I use the "consolidate" method, and on which elements should I call it?
Greetings
philipp
Here's a jsFiddle with some javascript library code (based in part on Raphael.js) to bake the transforms into all paths' data:
http://jsfiddle.net/ecmanaut/2Wez8/
(Not sure what's up with Opera here, though; it's usually best in class on SVG. I may be stumbling in some way the other browsers are more forgiving about.)
The consolidate method only reduces the list of matrices to a single matrix. And the error you get on the animVal example is because you are not allowed to modify the animated values (consolidate destructively modifies the transform list).
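For completeness, a small sketch of how consolidate is typically called; note it collapses the transform list into one matrix but leaves the path data untouched:
// consolidate() must be called on baseVal (animVal is read-only)
var node = document.querySelector("path");
var single = node.transform.baseVal.consolidate();   // SVGTransform or null
if (single) {
  var m = single.matrix;                              // the combined matrix
  console.log(m.a, m.b, m.c, m.d, m.e, m.f);
}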
To answer your question: no, there's no existing method in the SVG DOM that applies the transforms by modifying the values of paths etc. There are options in Inkscape (and Illustrator too IIRC) for applying transforms like that.
If you're looking for a library or utility that does this you can try SVG Cleaner.
SVG Cleaner didn't seem to apply all transforms for me, but Inkscape does. Here's the command line I use when I need to apply one:
inkscape copy-of-file.svg --select=id-of-node \
--verb=EditCut --verb=EditPaste \
--verb=FileSave --verb=FileClose
Assuming you have "Transforms -> Store transformation" set to "Optimized" in inkscape's prefs (I believe it is on by default), this will apply it and produce the wanted result. Be sure you operate on a copy of your input file, as the operation replaces your original file!
If you are running this on a mac, you may want to first do this in your shell:
alias inkscape=/Applications/Inkscape.app/Contents/Resources/bin/inkscape
Better use EditPasteInPlace instead of EditPaste. EditPaste pastes at the mouse location, which is not the location of the node.
In order to retrieve the path relative to some other, parent DOM node (e.g. the SVG root container), you can use the technique here.
It uses getTransformToElement(), which calculates the transform between a parent and some node in the SVG tree. The returned matrix contains methods to return the inverse of the transform, etc., so you can do practical things with it.
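A short sketch of that technique (getTransformToElement is part of SVG 1.1 and has since been removed from Chrome, so a polyfill or a getScreenCTM-based fallback may be needed; the ids are illustrative):
var svgRoot = document.querySelector("svg");
var node = document.getElementById("someShape");      // hypothetical id

// matrix that maps node's user space into the root's user space
var toRoot = node.getTransformToElement(svgRoot);

// run a coordinate from the node through it to get the "baked" position
var pt = svgRoot.createSVGPoint();
pt.x = 10;
pt.y = 20;
var absolute = pt.matrixTransform(toRoot);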
How to determine size of Raphael object after scaling & rotating it?
I don't know exactly what you are trying to achieve, but this has the power to do it.
