Load a heavy SVG (around 10k vector objects) into canvas - javascript

We want to load a heavy SVG, saved in the database as a JSON object (around 10k vector objects), into the canvas. We are able to do this now, but the process lags badly because of the number of vector objects on the canvas. This is the flow we follow:

1. Load the SVG, which is saved in the database as a JSON object, into the canvas so the user can edit it.
2. Convert the canvas back to SVG.
3. Save the edited SVG in the database as a JSON object.

This works fine when the number of vector objects is small (under 2K), but when it grows the system starts lagging and sometimes crashes. We want to handle around 15k-20k vector objects. We are using Fabric.js for this.

Try using the latest version of Fabric.js (2.3). It uses an offscreen canvas to cache each path, so a path is redrawn only when you resize it.
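On top of caching, adding thousands of objects one by one triggers a re-render per add, which is often the real bottleneck. A minimal sketch of batching the adds, assuming Fabric.js 2.x is loaded globally (the element id `'c'` and batch size are illustrative):

```javascript
// Split a large list into batches so objects can be added incrementally.
function chunkArray(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Browser-only sketch (assumes fabric is available):
if (typeof fabric !== 'undefined') {
  const canvas = new fabric.Canvas('c', {
    renderOnAddRemove: false // skip the automatic re-render on every add()
  });
  const objects = []; /* ...the 10k deserialized vector objects... */
  for (const batch of chunkArray(objects, 500)) {
    canvas.add(...batch); // add in bulk, no render yet
  }
  canvas.requestRenderAll(); // one render at the end
}
```

`objectCaching` is on by default in Fabric.js 2.x, so each object already paints from its own cache canvas; disabling `renderOnAddRemove` avoids 10k full re-renders during the initial load.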

Related

Is there a way to merge images in JavaScript without using canvas?

I'm trying to export an image (PNG) which is larger than the maximum canvas size. I've tiled the canvas export, so that each tile is small enough to be generated (with toBlob). Now I need a way to merge the images together, but can't find a way that itself doesn't use a canvas. Is this possible somehow?
Kaiido's answer worked great. I'm tiling the canvas export with getImageData (moving the contents of the canvas for each tile), then looping over all the ImageData tiles and creating a new flat array of RGBA values which I send to fast-png's encode. It's not very fast, especially on mobile (a ~30MP image takes about 40s to merge on an iPhone X), but I'm doing it on a background thread with a Worker.
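The merge step described above is pure array copying and needs no canvas. A minimal sketch of placing one RGBA tile into the full-size buffer (the function name is illustrative; the flat arrays are the `data` of each tile's ImageData):

```javascript
// Writes one RGBA tile into the full-size RGBA buffer at pixel (dx, dy).
// `full` and `tile` are flat arrays with 4 bytes per pixel, e.g. the
// Uint8ClampedArray returned by ctx.getImageData(...).data.
function copyTile(full, fullWidth, tile, tileWidth, tileHeight, dx, dy) {
  for (let row = 0; row < tileHeight; row++) {
    const src = row * tileWidth * 4;                 // start of tile row
    const dst = ((dy + row) * fullWidth + dx) * 4;   // start in full image
    full.set(tile.subarray(src, src + tileWidth * 4), dst);
  }
}
```

After all tiles are copied in, the `full` buffer can be handed to fast-png's encode along with the overall width and height.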

Vectorize canvas drawing

I'm working on image editing in JavaScript. I have to create masks with different tools (rectangle, brush, magic wand...) and save them to the database. To avoid sending all the pixel coordinates, I'd like to vectorize the mask and send only the vertices.
How can I get those vertices from the canvas context?
The canvas is a raster surface. If you are treating it as a vector surface then you will need to record all of the operations applied in a separate data structure and send that to the server.
You may find that using SVG makes more sense than canvas in your case.

Saving HTML5 canvas data

I'm creating a whiteboard application using ASP.NET/JavaScript. HTML5 canvas is my current tool for developing whiteboard features (drawing, text, images, shapes...). How can I save my scene so that I can load it later (and continue drawing)? I've seen a function for saving a PNG image of the canvas; is that the only way of saving canvas data, or is there a way that lets me save canvas data along with its semantics? For instance, can I save the lines, shapes, text and images drawn on the canvas? What are my options?
A canvas is a rasterised output of the instructions given to it; only pixel data is stored. If you need to know the steps taken to get there, you will need to log them in the JavaScript you use to create the drawing in the first place. Otherwise, if the scene is relatively simple, you could draw each element to stacked/offscreen canvases and export these separately. Obviously this could get expensive depending on how much you need to draw.
There are tools that 'undo' canvas operations, but essentially they just store an array of lines and redraw them all from scratch every time. If you click undo, one line is removed from the array; otherwise one is added. To do what you want, you would need to store an array of operations to be drawn like this, starting from a completely blank canvas.
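A minimal sketch of such an operation array with save, load and undo (all names are illustrative; the persisted JSON is what you would send to the ASP.NET backend):

```javascript
// Whiteboard state as a list of drawing operations rather than pixels.
function createWhiteboard() {
  let ops = [];
  return {
    addOp(op) { ops.push(op); },
    undo() { return ops.pop(); },           // drop the latest operation
    save() { return JSON.stringify(ops); }, // persist, e.g. POST to server
    load(json) { ops = JSON.parse(json); }, // restore a saved session
    redraw(ctx) {
      // Replays every stored op from scratch onto a 2D context.
      ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
      for (const op of ops) {
        if (op.type === 'line') {
          ctx.beginPath();
          ctx.moveTo(op.x1, op.y1);
          ctx.lineTo(op.x2, op.y2);
          ctx.stroke();
        }
        // ...text, shapes, images
      }
    }
  };
}
```

Loading a saved session is then just `load()` followed by `redraw()`, after which the user can keep drawing.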

Get the image of an element that gets drawn in the three.js while rendering

The way I think three.js handles rendering is that it turns each element into an image and then draws it onto a context.
Where can I get more information on that?
And if I am right, is there a way to get that image and manipulate it?
Any information will be appreciated.
Three.js internally has a description of what the scene looks like in 3D space, including all the vertices and materials among other things. The rendering process takes that 3D representation and projects it into a 2D space. Three.js has several renderers, including WebGLRenderer (the most common), CanvasRenderer, and CSS3DRenderer. They all use different methods to draw that 2D projection:
WebGLRenderer uses JavaScript APIs that map to OpenGL graphics commands. As much as possible, the client's GPU takes parts of the 3D representation and more or less performs the 2D projection itself. As each geometry is rendered, it is painted onto a buffer. The complete buffer is a frame, which is the complete 2D projection of the 3D space that shows up in your <canvas>.
CanvasRenderer uses JavaScript APIs for 2D drawing. It does the 2D projection internally (which is slower) but otherwise works similarly to the WebGLRenderer at a high level.
CSS3DRenderer uses DOM elements and CSS3D transforms to represent the 3D scene. This roughly means that the browser takes normal 2D DOM elements and transforms them into 3D space to match the Three.js 3D internal representation, then projects them back onto the page in 2D.
(All this is highly simplified.)
It's important to understand that the frame rendered by the WebGL and Canvas renderers is the resulting picture you see on your screen, but it's not an <img>. Typically, your browser will render 60 frames per second. You can extract a frame by dumping the <canvas> into an image; you'll usually want to stop the animation loop first, as otherwise you might not capture the frame you want. Capturing frames this way is slow, and given how many frames per second the browser renders, there is no easy way to capture every frame.
Additionally, Chrome has built-in canvas inspection tools which allow you to take a closer look at each frame the browser paints.
You can't easily intercept the buffer as Three.js is rendering the frame, but you can draw directly onto the canvas as you normally would. renderer.context is the graphics context that Three.js draws onto, where renderer is the Renderer instance you create when setting up a Three.js scene. (A graphics context is basically a helper to assemble the buffer that makes up the frame.)
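Dumping a frame, as described above, boils down to calling `toDataURL` on the renderer's canvas right after a render. A minimal sketch (the function name is illustrative; the renderer should be created with `{ preserveDrawingBuffer: true }`, otherwise the WebGL buffer may already be cleared when it is read back):

```javascript
// Renders one frame and reads it back as a PNG data URL.
function captureFrame(renderer, scene, camera) {
  renderer.render(scene, camera); // make sure the frame is current
  return renderer.domElement.toDataURL('image/png');
}
```

The returned data URL can be set as the `src` of an `<img>`, drawn onto a 2D canvas for further manipulation, or uploaded.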

How to increase the canvas bit depth to create a higher quality picture in JavaScript? And how to increase the canvas size when outputting?

I'm new to HTML5 and JavaScript, and I'm trying to use the canvas element to create a high(ish) quality output image.
The current purpose is to allow users to add their own images (JPEG, PNG, SVG); this works like a charm. However, when I render the image it's always a 32-bit PNG. How can I create a higher quality picture, preferably using JavaScript?
Also, when I output the file it always seems to keep the same resolution as the canvas. How can I change this, preferably using JavaScript?
Thanks in advance, guys. I looked around for a while but couldn't find the answer to this :(
If you want to create a larger image with getImageData or toDataURL then you have to:
1. Create an in-memory canvas that is larger than your normal canvas.
2. Redraw your entire scene onto the in-memory canvas. You will probably want to call ctx.scale(theScale, theScale) on the in-memory context in order to scale your drawing, but this heavily depends on how it was created in the first place (images? canvas paths?).
3. Call getImageData or toDataURL on the in-memory canvas, not your normal canvas.
By in-memory canvas I mean:
var inMem = document.createElement('canvas');
// The larger size, e.g. a multiple of the on-screen canvas
inMem.width = canvas.width * theScale;
inMem.height = canvas.height * theScale;
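Putting the three steps above together, a minimal sketch of the full export (the function name and the `drawScene` callback are illustrative; `drawScene(ctx)` stands for however you replay your own drawing commands):

```javascript
// Redraws the scene at `scale` times the source canvas size and returns
// a PNG data URL at the higher resolution.
function exportAtScale(srcCanvas, scale, drawScene) {
  const out = document.createElement('canvas'); // the in-memory canvas
  out.width = srcCanvas.width * scale;
  out.height = srcCanvas.height * scale;
  const ctx = out.getContext('2d');
  ctx.scale(scale, scale); // every subsequent draw call is magnified
  drawScene(ctx);          // redraw vectors/images at the larger size
  return out.toDataURL('image/png');
}
```

Because the scene is redrawn rather than upscaled from pixels, paths and text stay sharp at the larger output size.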
Well firstly, when you draw an image to the canvas it's not a PNG quite yet; it's a raw bitmap that the canvas API operates on as you call its methods. It's only converted to a PNG for the browser to display, and that's what you get when you use toDataURL. With toDataURL you can choose which image format you want the output to be; I think JPEG and BMP are supported along with PNG. (Don't think of it as a PNG converted to another format, because it's not.)
I also don't know exactly what you mean by higher quality through adding more bits per pixel. 32 bits are enough for all RGBA channels to have a true-color 8-bit depth, giving you more colors than the human eye can distinguish at once. Depending on the lighting and the angle at which a user views your picture, their perception of the colors may vary, but not the quality, which really only depends on the resolution. In any case, the canvas was not designed to work with deeper colors, and that much color information isn't necessary for any kind of scene you could render on a canvas; it's only relevant for high-definition films and games made by big studios. Also, even if you could use deep color on the canvas, it would depend on support from the user's video card and screen, which most people don't have.
If you wish to attach information not directly concerned with the color of each pixel, but perhaps with potential transformations, you would be better off creating your own type that carries the imageData accepted by the canvas API, keeping its 32-bit format, along with the additional information in a corresponding array.
And yes, the output image has the same resolution as the canvas, but there are a couple of ways to resize your final composition. Do as Simon Sarris said: create an offscreen canvas whose resolution is the final resolution you want your image to have, then either:
- resize the raster image by calling drawImage with the resizing parameters, passing in the image generated by toDataURL, or
- redraw your scene there, as Simon said, which reduces quality loss if your composition contains shapes created through the canvas API and not just image files put together.
If you already know the final resolution you want, just set the width and height of the canvas to it; the CSS width and height can be set differently so that the canvas fits your webpage as you wish.
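The first option, rescaling with drawImage, can be sketched as follows (the element id `'myCanvas'`, the helper name, and the 1920x1080 bound are illustrative):

```javascript
// Computes output dimensions that fit within (maxW, maxH) while keeping
// the source aspect ratio.
function fitSize(srcW, srcH, maxW, maxH) {
  const scale = Math.min(maxW / srcW, maxH / srcH);
  return { width: Math.round(srcW * scale), height: Math.round(srcH * scale) };
}

// Browser-only: rescale an existing canvas by drawing it onto another one.
if (typeof document !== 'undefined') {
  const src = document.getElementById('myCanvas');
  const { width, height } = fitSize(src.width, src.height, 1920, 1080);
  const out = document.createElement('canvas');
  out.width = width;
  out.height = height;
  // 9-argument drawImage: source rect -> destination rect (rescales)
  out.getContext('2d').drawImage(src, 0, 0, src.width, src.height,
                                 0, 0, width, height);
  const png = out.toDataURL('image/png');
}
```

This rescales already-rasterised pixels, so for vector content the second option (redrawing the scene at the target size) gives better quality.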
