I'm working on image editing in JavaScript. I have to create masks with different tools (rectangle, brush, magic wand...) and save them to the database. To avoid sending all the pixel coordinates, I'd like to vectorize the mask and send only the vertices.
How can I get those vertices from the CANVAS context?
The canvas is a raster surface. If you are treating it as a vector surface then you will need to record all of the operations applied in a separate data structure and send that to the server.
You may find that using SVG makes more sense than canvas in your case.
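As a rough illustration of recording the operations (the shape format and the endpoint below are inventions for the sake of the example, not an existing API), something like this keeps the vertices alongside the raster drawing:

var maskShapes = [];

function addRectangle(ctx, x, y, w, h) {
  // Record the four vertices instead of every covered pixel
  maskShapes.push({ type: 'rect', vertices: [[x, y], [x + w, y], [x + w, y + h], [x, y + h]] });
  ctx.fillRect(x, y, w, h); // still rasterize for on-screen feedback
}

function addPolygon(ctx, vertices) {
  // e.g. a traced outline produced by the magic wand tool
  maskShapes.push({ type: 'polygon', vertices: vertices });
  ctx.beginPath();
  ctx.moveTo(vertices[0][0], vertices[0][1]);
  for (var i = 1; i < vertices.length; i++) {
    ctx.lineTo(vertices[i][0], vertices[i][1]);
  }
  ctx.fill();
}

// Only the vertex lists go to the server ('/api/mask' is a placeholder):
// fetch('/api/mask', { method: 'POST', body: JSON.stringify(maskShapes) });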
Related
We want to load a heavy SVG JSON object (around 10k vector objects) into the canvas. Right now we are able to do this, but the process lags badly due to the number of vector objects in the canvas. Below is the flow we are following:
Load the SVG, which is saved in the database as a JSON object, into the canvas so the user can edit it.
Convert the canvas to SVG.
Save the edited SVG in the database as a JSON object.
This works fine when there are fewer vector objects (less than 2K), but when the number goes higher the system starts lagging and sometimes it crashes. We want to manage around 15k - 20k vector objects. We are using fabricjs for this.
Try using the latest version of fabricjs (2.3). It uses a canvas for caching each path, so a path is redrawn only when you resize it.
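A rough sketch of loading with caching enabled (assuming fabricjs 2.x; objectCaching, renderOnAddRemove and loadFromJSON are standard fabricjs APIs, but the exact tuning is an assumption):

var canvas = new fabric.Canvas('c', {
  renderOnAddRemove: false // skip a re-render after every single object is added
});
fabric.Object.prototype.objectCaching = true; // cache each object to its own bitmap

canvas.loadFromJSON(jsonFromDatabase, function () {
  canvas.renderAll(); // render once, after all 10k+ objects are in
});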
Question:
Using WebGL, what is the most efficient way I can draw a bitmap on the offscreen buffer given that a raster operation must be performed for every pixel/fragment? Raster operations may vary between draw calls.
Context:
As part of the application I am developing, I have to develop a function used to copy a bitmap onto a destination buffer. The bitmap can be smaller than the canvas and is placed using offsets for the X and Y co-ordinates. Also, given a ROP code, a raster operation can be performed when copying the bitmap. The full list of ROPs can be found here.
In OpenGL ES 1.x, there was a function called glLogicOp() which would perform the required binary operations when drawing. However, since WebGL is based on OpenGL ES 2.0, this function does not exist. I cannot yet think of a straightforward way to solve this problem. I have a few ideas, but I am not yet confident they will offer optimal performance.
Possible solutions:
Perform the ROPs in JavaScript by keeping a typed array with the colour data for the offscreen buffer (sketched after this list). This, however, would rely on the CPU to compute the ROP for each pixel. On draw, I would upload the offscreen data to a texture and draw it on the offscreen buffer using a simple shader.
Use the ping-pong texture technique described here. Having two textures, I could sample from one and write to the other; however, I would be forced to draw the whole offscreen buffer on every draw call, even when the bitmap to be drawn covers only a small fraction of the canvas. A possible optimization would be to switch the textures only when a ROP dependent on the destination pixel is used.
Use readPixels() to read data from the offscreen buffer before a draw call requiring a ROP dependent on the destination pixel. The data of the pixels on which the bitmap will be drawn is read, uploaded to a texture, and passed to the shader together with the bitmap itself to be used for sampling. The readPixels function, however, is said to be one of the slowest WebGL functions since it requires synchronization between the CPU and GPU.
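For the first option, a minimal sketch of a CPU-side blit with a ROP (the pixel layout and the rop callback are assumptions for illustration, not an established API):

// dst and src are Uint8Array RGBA views; ox/oy place the bitmap
function blitWithRop(dst, dstWidth, src, srcWidth, srcHeight, ox, oy, rop) {
  for (var y = 0; y < srcHeight; y++) {
    for (var x = 0; x < srcWidth; x++) {
      var d = 4 * ((oy + y) * dstWidth + (ox + x));
      var s = 4 * (y * srcWidth + x);
      for (var c = 0; c < 3; c++) { // R, G, B channels
        dst[d + c] = rop(dst[d + c], src[s + c]) & 0xff;
      }
      dst[d + 3] = 255; // opaque alpha
    }
  }
}

// e.g. the SRCAND raster op (dest AND source):
// blitWithRop(offscreen, width, bitmap, bw, bh, x, y, function (d, s) { return d & s; });

The resulting array would then be uploaded with texImage2D and drawn with a pass-through shader, as described above.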
EDIT
Using the ping-pong texture technique, I think I will have to draw the whole screen every time since, for every drawing, in order to sample the destination pixel correctly I would need a representation of the current offscreen state (the screen before drawing). Unless I draw the whole screen every time, both textures would only represent alternate drawings rather than the real offscreen image (see the sketch after the example below).
It is easier to show this visually but consider this example:
1) I draw a rectangle on the left half of the screen with the first draw call having tex1 as sampling source and tex2 as drawing destination.
2) I draw a rectangle on the right half of the screen with the second draw call having tex2 as sampling source and tex1 as drawing destination.
3) When I try to draw a horizontal line across the whole screen on the next draw call, since tex1 (source) does not contain the rectangle drawn in the first draw call, sampling from the left part of the screen would result in wrong colors of destination pixels.
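For reference, here is a minimal sketch of the ping-pong swap I have in mind (the gl setup and drawFullscreenQuad are assumed helpers, not part of the question; all gl.* calls are standard WebGL). Note that every call draws the full screen so the destination texture receives the complete previous state:

function createTarget(gl, width, height) {
  var tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  var fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
  return { tex: tex, fbo: fbo };
}

var src = createTarget(gl, width, height);
var dst = createTarget(gl, width, height);

function drawBitmap(bitmapTex) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, dst.fbo);
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, src.tex); // previous offscreen state
  gl.activeTexture(gl.TEXTURE1);
  gl.bindTexture(gl.TEXTURE_2D, bitmapTex); // bitmap being copied
  drawFullscreenQuad(); // shader outputs rop(dest, src) per fragment
  var t = src; src = dst; dst = t; // swap roles for the next call
}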
I'm creating a whiteboard application using ASP.NET/JavaScript. HTML5 canvas is my current tool for developing whiteboard features (drawing, text, image, shape ...). How can I save my scene so that I can load it later (and continue drawing)? I've seen a function for saving a PNG image of the canvas; is that the only way of saving canvas data, or is there another way that lets me save canvas data along with its semantics? For instance, can I save the lines, shapes, texts and images drawn on the canvas? What are my options?
A canvas is a rasterised output of the instructions given to it; purely pixel data is stored. If you need to know the steps taken to get there, you will need to log them in the JavaScript you are using to create it in the first place. Otherwise, if it was relatively simple, you could draw each element to stacked / offscreen canvases and export these separately. Obviously this could get expensive depending on the amount you need to do.
There are things like this that 'undo' canvas operations, but essentially it's just storing an array of lines and redrawing them all from scratch every time. If you click undo it removes a line from the array; otherwise it adds one. To do what you desire, you would need to store an array of operations to be drawn like this from a completely blank canvas.
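A bare-bones sketch of that approach (the operation format here is made up for illustration):

var ops = [];

function addLine(x1, y1, x2, y2) {
  ops.push({ type: 'line', x1: x1, y1: y1, x2: x2, y2: y2 });
  redraw();
}

function undo() {
  ops.pop();
  redraw();
}

function redraw() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  for (var i = 0; i < ops.length; i++) {
    var op = ops[i];
    if (op.type === 'line') {
      ctx.beginPath();
      ctx.moveTo(op.x1, op.y1);
      ctx.lineTo(op.x2, op.y2);
      ctx.stroke();
    }
  }
}

// The same array is what you would persist server-side:
// var saved = JSON.stringify(ops);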
I'm new to HTML5 and JavaScript, and I'm trying to use the canvas element to create a high(ish) quality output image.
The current purpose is to allow users to add their own images (JPEG, PNG, SVG); this works like a charm. However, when I render the image it's always a 32-bit PNG. How can I create a higher quality picture, preferably using JavaScript?
Also, when I output the file it always seems to keep the same resolution as the canvas; how can I change this, preferably using JavaScript?
Thanks in advance, guys. I looked around for a while but I couldn't find the answer to this :(
If you want to create a larger image with getImageData or toDataURL then you have to:
Create an in-memory canvas that is larger than your normal canvas
Redraw your entire scene onto your in-memory canvas. You will probably want to call ctx.scale(theScale, theScale) on your in-memory context in order to scale your images, but this heavily depends on how they were created in the first place (images? Canvas paths?)
Call getImageData or toDataURL on the in-memory canvas and not your normal canvas.
By in-memory canvas I mean:
var inMem = document.createElement('canvas');
// The larger size
inMem.width = blah;
inMem.height = blah;
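And the redraw-at-scale step might look like this (drawScene is a hypothetical stand-in for however you drew the original scene):

var theScale = 2; // e.g. double the output resolution
inMem.width = canvas.width * theScale;
inMem.height = canvas.height * theScale;

var inMemCtx = inMem.getContext('2d');
inMemCtx.scale(theScale, theScale);
drawScene(inMemCtx); // re-run the same drawing code on the larger canvas

var dataUrl = inMem.toDataURL('image/png'); // higher-resolution output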
Well, firstly, when you draw the image to the canvas it's not a PNG quite yet; it's a raw bitmap that the canvas API operates on as you call its methods. It's then converted to a PNG for the browser to display, and that's what you get when you use toDataURL. When using toDataURL you're able to choose which image format you want the output to be; JPEG is supported along with PNG, and some browsers support additional formats. (Don't think of it as a PNG converted to another format, because it's not.)
And I don't know exactly what you mean by higher quality through adding more bits per pixel. 32 bits are enough for all RGBA channels to have true-color 8-bit depth, giving you more colors than the human eye can distinguish at once. Depending on the lighting and the angle at which the user views your picture, their perception of the colors may vary, but not the quality, which I'd say depends only on the resolution. In any case, the canvas was not designed to work with deeper colors, and honestly that much color information isn't necessary for any kind of scene you could render on a canvas; it's only relevant for high-definition movies and games made by big studios. Even if you could use deep colors on the canvas, it would depend on support from the user's video card and screen, which most people don't have.
If you wish to add information not directly concerned with the color of each pixel, but rather with potential transformations it could undergo, you would be better off creating your own type capable of carrying the imageData accepted by the canvas API, keeping its 32-bit format, plus the additional information in a corresponding array.
And yes, the output image has the same resolution as the canvas does, but there are a couple of ways provided for you to resize your final composition. Do as Simon Sarris said: create an offscreen canvas whose resolution is the final resolution you want your image to be, then either:
Resize the raster image by calling drawImage with the toDataURL-generated image, making use of the resizing parameters (see the sketch after this list), or
Redraw your scene there, as Simon said, which will reduce quality loss if your composition contains shapes created through the canvas API and not only image files put together.
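A sketch of the first option (the resolution values are just examples; quality suffers compared to redrawing, since this scales pixels rather than shapes):

var img = new Image();
img.onload = function () {
  var off = document.createElement('canvas');
  off.width = 1920; // desired output resolution
  off.height = 1080;
  // the last two drawImage arguments stretch the source to the new size
  off.getContext('2d').drawImage(img, 0, 0, off.width, off.height);
  var result = off.toDataURL('image/png');
};
img.src = canvas.toDataURL('image/png');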
In case you know the final resolution you want beforehand, just set the width and height of the canvas to it; the CSS width and height can be set differently so that your canvas fits in your webpage as you wish.
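In other words (the sizes here are arbitrary examples):

canvas.width = 1920; // bitmap resolution used for export
canvas.height = 1080;
canvas.style.width = '600px'; // displayed smaller on the page
canvas.style.height = '338px';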
I'm writing an app for shape manipulation, such that after creating simple shapes the user can create more complex ones by clipping the shapes against each other (e.g. combining two circles into a figure 8 stored as a single path rather than a group, or intersecting two circles to create a "bite" mark), and I am trying to decide on a graphics library to use.
SVG seems to handle 80% of the functionality I need out of the box (shape storage, movement, rotation, scaling). The problem is that the other 20% (using clipping to create a new set of complex polygons) seems impossible to achieve without recreating SVG functionality in my own modules (I'd have to store each shape once for drawing inside SVG, and once for processing the clipping myself). I could be wrong about SVG, but from reading about the Raphael library (based on SVG), it seems it only handles clipping with a rectangle, and even that clipping is temporary (it only renders part of the shape, but still stores the entire shape to be rerendered once the clipping rectangle is moved). Perhaps I'm just confused about the SVG standard, but even retrieving/parsing the paths to compute a new path from subsets of previous paths seems non-obvious in SVG (there is a Subpath() function, but I don't see anything to find the points of intersection of two polygon perimeters, or to combine several subpaths into a single path).
As a result, Canvas seems like a better alternative, since it doesn't introduce the extra overhead of keeping track of shapes I'd already have to track to make my own clipping implementation work. Not only that, I've already implemented a polygon class that can be moved, rotated, and scaled. Canvas has some other issues, however (I'd have to implement my own redraw method, which I'm sure will not be as efficient as SVG's, which takes advantage of browser-specific frameworks in Chrome and Firefox; and I'd have to accept IE incompatibility, which is handled for free with libraries like Raphael).
Thanks
This may address what you're mentioning.
Clipping can be done with non-rectangular objects using the 'clipPath' element.
For example, here I have an element with id 'clipper' that defines what to clip out, and a path that is subject to the clipping. Not sure if they intersect in this snippet.
<g clip-rule="nonzero">
  <clipPath id="clipper">
    <ellipse rx="70" ry="95" clip-rule="evenodd"/>
  </clipPath>
  <!-- stuff to be clipped -->
  <path clip-path="url(#clipper)" d="M -100 0 a 100 50 0 0 0 200 0"/>
</g>
This is just a snippet from something I have. Hope it helps.
Seems to me that you are trying to do 2D constructive geometry. Since SVG runs in retained mode, the objects you draw are stored and the various operations are then performed on them. With Canvas you are drawing against a bitmap, so the changes take effect immediately. Since your users will in turn perform more operations on your simpler shapes to create ever more complex ones, Canvas should in the long term be a better fit.
The only outstanding question is what will be done with those objects once your users are finished with them. If you zoom the image it will get the jaggies. SVG avoids that problem, but you trade off greater complexity and a performance impact.
SVG and canvas are two different web graphics technologies, each with different functionality.
Canvas
Canvas is a bitmap with an immediate-mode graphics application programming interface (API) for drawing on it. Canvas is a "fire and forget" model that renders its graphics directly to its bitmap and then subsequently has no sense of the shapes that were drawn; only the resulting bitmap stays around.
More Information about canvas - http://www.queryhome.com/51054/about-html5-canvas
SVG
SVG stands for Scalable Vector Graphics.
SVG is known as a retained-mode graphics model, persisting in an in-memory model. Analogous to HTML, SVG builds an object model of elements, attributes, and styles. When an SVG element appears in an HTML5 document, it behaves like an inline block and is part of the HTML document tree.
More Information about SVG - http://www.queryhome.com/50869/about-svg-part-1
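To make the retained-mode point concrete, a shape added to the SVG DOM stays addressable afterwards (the values below are arbitrary examples):

var svgNS = 'http://www.w3.org/2000/svg';
var circle = document.createElementNS(svgNS, 'circle');
circle.setAttribute('cx', '50');
circle.setAttribute('cy', '50');
circle.setAttribute('r', '40');
document.querySelector('svg').appendChild(circle);

// Later edits just update the retained element; no manual redraw needed.
circle.setAttribute('fill', 'red');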
See here for more information about canvas vs svg in detail - Comparing svg vs canvas
You're right - you'll have to perform the clipping and the creation of new shapes mathematically regardless of whether you use SVG or Canvas. I'm biased, but it seems like it would be more useful to use SVG, since you also get things like DOM events on the shapes (mouse, dragging) and serialization into a graphical format for free.
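For example, hit-testing and events come straight from the DOM (the id here is hypothetical):

document.getElementById('myCircle').addEventListener('mousedown', function (e) {
  // the browser has already resolved which shape was hit; start a drag here
});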