I'm sorry if this turns out to have a trivial answer, but I have not been able to figure out how to find the orientation of an irregular polygon (given as an array of points). By orientation I mean the angle between the x-axis and the long axis of the object.
Searching around, I found a MATLAB function called regionprops() that is almost exactly what I need.
https://www.mathworks.com/help/images/ref/regionprops.html
Is there a JavaScript equivalent that I just cannot find, or would it be advisable to learn how to include MATLAB in my project?
It seems you need to calculate image moments. Information about image orientation can be derived by first using the second-order central moments to construct a covariance matrix.
Also consider principal component analysis (PCA).
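As a rough sketch of that idea in plain JavaScript, assuming the polygon is an array of {x, y} vertices and treating the vertex list itself as the point set (for a filled region you would sum over all interior pixels instead; the function name is my own):

```javascript
// Orientation of a polygon's long axis from second-order central moments.
// Works best when the outline vertices are reasonably evenly spaced.
function polygonOrientation(points) {
  const n = points.length;
  // centroid of the vertices
  let cx = 0, cy = 0;
  for (const p of points) { cx += p.x; cy += p.y; }
  cx /= n; cy /= n;
  // second-order central moments (entries of the covariance matrix)
  let mu20 = 0, mu02 = 0, mu11 = 0;
  for (const p of points) {
    const dx = p.x - cx, dy = p.y - cy;
    mu20 += dx * dx;
    mu02 += dy * dy;
    mu11 += dx * dy;
  }
  // orientation of the principal axis, in radians
  return 0.5 * Math.atan2(2 * mu11, mu20 - mu02);
}
```

For points scattered along a 45° line this returns approximately π/4.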
Related
Recently I have been attempting to modify the source code of this page. The underlying technique of this interactive program is called sketch-rnn, a deep learning algorithm that can generate sketches. I need to access the real-time images on the canvas so that I can use a convolutional neural network (CNN): feed the image as a 2D array to the network so that I can further improve the program. Is there any p5.js function that can help me achieve that?
It depends in what format the CNN accepts input.
The simplest thing I can think of is using plain JavaScript (outside of p5.js) to access the <canvas /> element.
For example this is something you can try in your browser console on the sketch_rnn_demo page:
// access the default p5.js Canvas
canvasElement = document.querySelector('#defaultCanvas0')
// export the data as needed, for example encoded as a Base64 string:
canvasElement.toDataURL()
If you want to access pixels, you can via the Canvas context and getImageData():
//access <canvas/> context
var context = canvasElement.getContext('2d');
//access pixels:
context.getImageData(0,0,canvasElement.width,canvasElement.height);
This will return a 1D array of unsigned 8-bit integers (values from 0–255) in R,G,B,A order
(i.e. pixel0R, pixel0G, pixel0B, pixel0A, pixel1R, pixel1G, pixel1B, pixel1A, ...etc.)
If you want to use p5.js instead, call loadPixels() first, then access the pixels[] array which is the same format as above.
You can also use get(x,y) in p5.js, which gives you 2D access to pixel data; however, this is much slower.
If your CNN takes in a 2D array, you still need to create this 2D array yourself and populate it with pixel values (using pixels[] or get(), for example). Be sure to double-check the CNN input:
Is it a 2D array of 32-bit integers (e.g. R,G,B,A or A,R,G,B packed as a single int (0xAARRGGBB or 0xRRGGBBAA), just RGB, etc.)?
What resolution should the 2D array be? (Your sketch-rnn canvas may be a different size, and you might need to resize it to match what the CNN expects as input.)
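For example, turning the flat RGBA array into a 2D array of grayscale values could look like this (the function name and the luminance weighting are my choices, not part of p5.js):

```javascript
// Convert the flat RGBA array from getImageData().data (or p5.js
// pixels[]) into a 2D array of grayscale values, one row per scanline.
// Assumes `data` has width * height * 4 entries in R,G,B,A order.
function toGrayscale2D(data, width, height) {
  const rows = [];
  for (let y = 0; y < height; y++) {
    const row = [];
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;
      // standard luminance weighting of the R, G, B channels
      row.push(0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2]);
    }
    rows.push(row);
  }
  return rows;
}
```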
Update
I've just re-read the question and realised the answer above only covers half of it. The other half, about sketch-rnn, is missing.
(I happen to have worked on a cool sketch-rnn project in the past)
Personally I believe the question could've been phrased better: the CNN part is confusing. My understanding now is that you have a canvas, probably from p5.js, and you want to feed information from there to sketch-rnn to generate new drawings. What still isn't clear is what happens to this canvas: is it something you generate and have control over, is it simply loading some external images, or something else?
If the input to sketch-rnn is a canvas, you would need to extract paths/vector data from the pixel/raster data. This functionality moves away from p5.js into image processing/computer vision and is therefore not built into the library; however, you could use a specialised library like OpenCV.js and its findContours() functionality.
I actually started a library to make it easier to interface between OpenCV.js and p5.js, and you can see a basic contour example here. To access the contours as an array of p5.Vector instances, you'd use something like myContourFinder.getPolylines() to get everything, or myContourFinder.getPolyline(0) to get the first one.
It's also worth asking if you need to convert pixels to paths (for sketch-rnn strokes) in the first place. If you have control over how things are drawn into that canvas (e.g. your own p5.js sketch), you could easily keep track of the points being drawn and simply format them in the sketch-rnn stroke format.
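If you do have control over the drawing, the stroke bookkeeping is simple. A minimal sketch, assuming sketch-rnn's plain "stroke-3" format of [dx, dy, penLifted] triples (check the format your model version actually expects; the function name is my own):

```javascript
// Convert an array of strokes (each an array of {x, y} points) into
// stroke-3 triples: offsets from the previous point, with penLifted = 1
// on the last point of each stroke.
function toStrokeFormat(strokes) {
  const result = [];
  let prev = { x: 0, y: 0 };
  for (const stroke of strokes) {
    stroke.forEach((p, i) => {
      result.push([p.x - prev.x, p.y - prev.y, i === stroke.length - 1 ? 1 : 0]);
      prev = p;
    });
  }
  return result;
}
```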
In terms of using sketch-rnn in js, the sketch-rnn demo you've linked above actually uses p5.js and you can find more examples on the magenta-demos github repo (basic_predict is a good start).
Additionally, there's another library called ml5, which is a nice and simple way to make use of modern machine learning algorithms from p5.js, including sketch-rnn. As you can see on the documentation page, there is even a ready-to-remix p5.js editor sketch.
Unfortunately I won't have the time to put all of the above together as a nice, ready-to-use example, but I do hope there is enough information here on how to take these ingredients and put them together into your own sketch.
I want to create an HTML5 canvas animation like the one on this site: https://flowstudio.co/.
I have started with GSAP, but it looks like creating something like this is a really big task.
I would have to create almost every point/movement individually, and I have no idea if there is a faster/better way.
Currently I have only looked at GSAP without plugins.
Is there some special tool or (GSAP) plugin that can help create this?
Or should I maybe use d3.js?
I also tried to find a tutorial for this, but it looks like there is nothing for this more advanced case.
Thanks for the help!
The example you provided is using THREE.js, and I would suggest you use it too, since you also want to operate in 3D space.
When you want to animate a large amount of points, you will need to use a vertex shader. That's because a vertex shader allows you to calculate all of the point positions in one step (thanks to parallel computing on the GPU), whereas doing it the 'normal' way (on the CPU) is very bad for performance, since every single point has to be calculated one by one. (Here you can see the difference.)
The way you animate the points is a little different than you might think: you don't want to apply an animation to every. single. point...
Instead you will need three things that you will pass to the shader:
- an array containing the starting point positions,
- an array containing the final point positions,
- a blend parameter (just a float variable taking values from 0 to 1).
Then you use GSAP to animate only the blend parameter, and the shader does the rest (for example, when the blend parameter is 0.5, each point's position is exactly halfway between the starting position and the final position that you provided to the shader).
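For clarity, here is what the shader effectively computes per point, expressed in plain JavaScript (in GLSL it would be a single mix(startPosition, endPosition, blend) call; the function name here is illustrative):

```javascript
// Linear interpolation between start and end positions, driven by a
// single `blend` value in [0, 1] (the value GSAP would tween).
function blendPositions(start, end, blend) {
  return start.map((p, i) => ({
    x: p.x + (end[i].x - p.x) * blend,
    y: p.y + (end[i].y - p.y) * blend,
    z: p.z + (end[i].z - p.z) * blend,
  }));
}
```

The difference on the GPU is that this loop runs for all points in parallel, once per frame, with only `blend` changing.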
The example you provided is also using some kind of Perlin noise function, which you will also have to implement in the shader.
It's a lot to bite off at one time, but here are some great tutorials from Yuri Artyukh that will help you achieve something similar:
https://www.youtube.com/watch?v=XjZ9iu_Z9G8&t=5713s
https://www.youtube.com/watch?v=QGMygnzlifk
https://www.youtube.com/watch?v=RKjfryYz1qY
https://www.youtube.com/watch?v=WVTLnYL84hQ&t=4452s
Hope it helps and...good luck!
I'm combining Three.js 3D content with HTML and SVG content. The Three.js CSSLoader does a pretty good job synchronizing placement of HTML and SVG content in the 3D world.
But the SVG/HTML coordinate systems are 'left-handed', whereas the Three.js coordinate system is 'right-handed'. This basically means that their y-axes are reversed. In SVG/HTML, y/top increases as you go down the screen, while Three.js uses the more standard mathematical convention of y increasing as you go up the screen.
I have to continually convert from one to the other, which is pretty error prone. I know I am not the first to run into this (for example, look here). Has someone come up with a general solution? Here's what I tried:
Do everything in an Object3D with .scale.y = -1. As you may suspect, this turns out to be a disaster. It turns everything inside-out, and don't even try to put your camera in there.
Do everything in an Object3D with .rotation.x = Math.PI. This is more promising, but the z-axis is no longer consistent with the HTML concept of z-index. Still, this is what I'm using now.
In HTML, don't use top, use bottom. In SVG, do everything inside a <g transform="scale(1, -1)"> inside a <g transform="translate(0, imageHeight)">. However, I feel this would be more confusing for developers, and the imageHeight has to be kept up to date at all times, which is yet another burden.
Has anyone come up with something better? Perhaps a library to help with this?
I would suggest you use the SVG global transform attribute. If you post an example of your code, I could edit the answer and include the example here, maybe as a JSFiddle.
Basically, you will need to add the transformation to your SVG; in your case, to change the direction of the y-axis, you can use "scale(1, -1)".
See the W3 documentation with examples in the following link:
http://www.w3.org/TR/SVG/coords.html#SVGGlobalTransformAttribute
The first common use of this attribute:
Most ProjectedCRS have the north direction represented by positive values of the second axis and conversely SVG has a y-down coordinate system. That's why, in order to follow the usual way to represent a map with the north at its top, it is recommended for that kind of ProjectedCRS to use the ‘svg:transform’ global attribute with a 'scale(1, -1)' value as in the third example below.
They have some examples there too, I hope it solves your problem. :)
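If it helps, the per-point conversion can also be done in JavaScript before handing coordinates to either side. A minimal sketch, assuming world units map 1:1 to CSS pixels and the Three.js origin sits at the center of the viewport (both function names are my own):

```javascript
// HTML/SVG coordinates: y-down, origin at the top-left corner.
// Three.js coordinates (assumed here): y-up, origin at viewport center.
function htmlToThree(x, y, width, height) {
  return { x: x - width / 2, y: height / 2 - y };
}
function threeToHtml(x, y, width, height) {
  return { x: x + width / 2, y: height / 2 - y };
}
```

Note the y formula is its own inverse, which makes round-tripping cheap.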
I have a B-Spline curve. I have all the knots, and the x,y coordinates of the Control Points.
I need to convert the B-Spline curve into Bezier curves.
My end goal is to be able to draw the shape on an HTML5 canvas element. The B-Spline comes from a DXF file, which doesn't support Beziers, while a canvas only supports Beziers.
I've found several articles which attempt to explain the process, however they are quite a bit over my head and really seem to be very theory intensive. I really need an example or step by step help.
Here's what I've found:
(Explains B-Splines),(Converting to Beziers),(Javascript Example)
The last link is nice because it contains actual code; however, it doesn't seem to take into account the weights assigned to the nodes. I think this is kind of important, as the weights seem to influence whether the curve passes through a control point.
I can share my nodes or control points if that would be useful. If someone would point me to a step-by-step procedure or help me with some pseudo- (or actual) code, I would be so grateful.
I wrote a simple JavaScript implementation of Boehm's algorithm for cubic B-Splines a while back. It's a fairly straightforward implementation involving polar values, described in section 6.3 of Computer Aided Geometric Design (Sederberg).
If you're just interested in the implementation, I've linked the classes I wrote here: bsplines.js
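For the common special case of a uniform cubic B-spline (uniform knot vector, no rational weights), each span even has a closed-form conversion to one cubic Bezier segment, which you can then draw with canvas bezierCurveTo(). A minimal sketch (the {x, y} point format and function name are my own; the general non-uniform or weighted case still needs Boehm-style knot insertion):

```javascript
// Four consecutive control points P0..P3 of a *uniform* cubic B-spline
// map to one cubic Bezier segment Q0..Q3. Slide a window of four points
// along the control polygon to convert the whole curve.
function uniformBsplineSegmentToBezier(p0, p1, p2, p3) {
  const pt = (x, y) => ({ x, y });
  return [
    pt((p0.x + 4 * p1.x + p2.x) / 6, (p0.y + 4 * p1.y + p2.y) / 6), // Q0
    pt((2 * p1.x + p2.x) / 3,        (2 * p1.y + p2.y) / 3),        // Q1
    pt((p1.x + 2 * p2.x) / 3,        (p1.y + 2 * p2.y) / 3),        // Q2
    pt((p1.x + 4 * p2.x + p3.x) / 6, (p1.y + 4 * p2.y + p3.y) / 6), // Q3
  ];
}
```

On the canvas you would then moveTo(Q0) and bezierCurveTo(Q1, Q2, Q3) for each segment.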
This could be helpful - https://github.com/Tagussan/BSpline
My project has moved on and I no longer need it, but this seems to be a pretty useful way to feed control points and have a curve drawn.
I want to code a little game dealing with fonts and letters. I want to make them move around in 2D space, and I am using box2dweb as the physics engine, which is actually doing a great job. At the moment, all I am struggling with is the problem of building the b2Body for a letter. Box2D can only handle primitive, convex shapes, and to build a more complex hitbox I have to combine several of them. In the image I tried to illustrate what I would like to achieve: an algorithm that takes an SVG path of a letter and generates a series of b2 shapes which represent the hitbox.
All in all, I have no idea where I could find information about this, or whether there is a library capable of doing it. Even if such a library is not available in JavaScript, I could do the job on the server.
I know that there are paper.js and Raphaël, some clever vector libraries, but I have not found any hint of how to solve this yet.
I would be happy for any kind of help: links to resources, or the correct name of the problem in a mathematical sense.
Greetings and thanks in advance...
Philipp
I just want to leave the result of my investigation here; maybe it will help someone. The initial idea is based on »ear cutting« (also known as »ear clipping«, »ear culling«, or »ear cropping«). A demo here describes this. But the algorithm which produces fewer, Box2D-suitable polygons is shown in a demo here. The idea is to merge as many triangles as possible, as long as they remain convex and, in this case, do not have more than eight edges. A triangle is suitable to be added to a polygon if one can find two points in the triangle and two adjacent points in the polygon with the same x and y coordinates.
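The convexity-and-size test used when deciding whether a merge is allowed can be sketched like this (the function name is mine; Box2D's default vertex limit is 8):

```javascript
// A polygon is Box2D-friendly if every consecutive edge pair turns in
// the same direction (all cross products share one sign, i.e. convex)
// and the vertex count stays within Box2D's limit.
function isValidB2Polygon(points, maxVertices = 8) {
  const n = points.length;
  if (n < 3 || n > maxVertices) return false;
  let sign = 0;
  for (let i = 0; i < n; i++) {
    const a = points[i], b = points[(i + 1) % n], c = points[(i + 2) % n];
    const cross = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
    if (cross !== 0) {
      const s = Math.sign(cross);
      if (sign !== 0 && s !== sign) return false; // turn direction flipped: concave
      sign = s;
    }
  }
  return true;
}
```

Run this on each candidate merge; if it fails, keep the triangle as its own fixture instead.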