KineticJS class hierarchy clarification - JavaScript

After reviewing the KineticJS docs I have come up with the following:
Kinetic.Node - Nodes are entities that can be transformed, layered, and have events bound to them.
Kinetic.Shape (Node) - Shapes are primitive objects such as rectangles, circles, text, lines, etc.
Kinetic.Container (Node) - Containers are used to contain nodes or other containers.
Kinetic.Stage (Container(Node)) - A stage is used to contain multiple layers. add(Layer)
Kinetic.Layer (Container(Node)) - Layers are tied to their own canvas element and are used to contain groups or shapes. add(Node)
Kinetic.Group (Container(Node)) - Groups are used to contain shapes or other groups. add(Node)
Kinetic.BaseLayer (Container(Node)) - ???
Kinetic.FastLayer (Container(Node)) - used for layers that don't need user interaction (update thanks to markE)
Kinetic.Collection (Array) - This class is used in conjunction with Kinetic.Container#get
What exactly are BaseLayer and FastLayer used for? In the documentation, FastLayer has the exact same description as Layer, and BaseLayer's entry only says that it is a constructor.
In one of the commit comments it is implied that FastLayer does not have to remove a hit canvas ... I am guessing this is because it does not have one, which is what makes it faster?
Some clarification on what these two classes do, and how to use them effectively, would be appreciated.
EDIT: Updated the question to reflect markE's input. Does anyone have insight on BaseLayer?

Note: as of this post, the fast layer was introduced only several days ago. But as I understand it...
The new fast layer is the old layer but with eventing turned off.
The KineticJS docs say:
If you don't need node nesting, mouse and touch interactions, or event
pub/sub, you should use FastLayer instead of Layer to create your
layers. It renders about 2x faster than normal layers.
The fast layer is used for layers that don't need user interaction:
a static background layer with no user interaction required.
a static layer that is manipulated and drawn entirely through JS code with no user interaction required.
Drawing fast layers is faster because there is no overhead related to eventing.
Normal layers also have a supporting offscreen canvas that handles hit-testing and dragging.
I suspect the fast layer does not have this overhead either, since hit-testing and dragging are related to eventing.
Having said this...I need to investigate this new tool more myself. ;-)
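In the meantime, here is a minimal usage sketch, assuming a KineticJS build recent enough to include Kinetic.FastLayer; the container id and shape settings are just placeholders, not anything prescribed by the library:

// Minimal sketch; 'container' is a placeholder div id.
var stage = new Kinetic.Stage({
  container: 'container',
  width: 600,
  height: 400
});

// Static background: no eventing overhead, so it draws faster.
var background = new Kinetic.FastLayer();
background.add(new Kinetic.Rect({
  x: 0, y: 0, width: 600, height: 400, fill: '#ddd'
}));

// Interactive content stays on a normal layer, which keeps its hit canvas
// so events and dragging keep working.
var interactive = new Kinetic.Layer();
var circle = new Kinetic.Circle({
  x: 300, y: 200, radius: 40, fill: 'tomato', draggable: true
});
circle.on('click', function () {
  console.log('clicked');
});
interactive.add(circle);

stage.add(background);
stage.add(interactive);
stage.draw();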


KineticJS- Why does it use so many hidden canvas elements?

I am just starting to learn about the canvas element; however, I believe that it is double buffered.
Looking through the code for KineticJS, it seems that Kinetic.Stage creates two canvases (not in the DOM): a Kinetic.SceneCanvas and a Kinetic.HitCanvas. When you add a layer to the stage, it seems to create two more canvases, another scene and hit canvas, one of which it displays in the DOM. Why does it need so many overlapping canvases? Or have I misread the code and/or missed the point?
Thanks
Taken straight from the KineticJS GitHub README:
Kinetic stages are made up of user defined layers. Each layer has two canvas renderers, a scene renderer and a hit graph renderer. The scene renderer is what you can see, and the hit graph renderer is a special hidden canvas that's used for high performance event detection. Each layer can contain shapes, groups of shapes, or groups of other groups. The stage, layers, groups, and shapes are virtual nodes, similar to DOM nodes in an HTML page.
Additionally, in the features section, KineticJS lists:
High performance event detection via color map hashing
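To illustrate what "color map hashing" means, here is a simplified sketch of the general technique (not KineticJS's actual internals): each shape is drawn onto the hidden hit canvas in a unique flat color, and on a mouse event the pixel under the cursor is read back and its color looked up to find the shape.

// Simplified sketch of color-map hit detection (not KineticJS's actual code).
var hitCanvas = document.createElement('canvas');
hitCanvas.width = 600;
hitCanvas.height = 400;
var hitCtx = hitCanvas.getContext('2d');

var shapes = {};   // colorKey -> shape
var nextId = 1;

function registerShape(shape) {
  // Encode an integer id as a unique RGB color.
  var id = nextId++;
  var color = 'rgb(' + (id & 255) + ',' + ((id >> 8) & 255) + ',' + ((id >> 16) & 255) + ')';
  shapes[color] = shape;
  // Fill the shape's outline with its key color on the hidden hit canvas.
  hitCtx.fillStyle = color;
  shape.fillPath(hitCtx);   // assumed helper: the shape traces and fills its own path
}

function shapeAt(x, y) {
  var p = hitCtx.getImageData(x, y, 1, 1).data;
  var key = 'rgb(' + p[0] + ',' + p[1] + ',' + p[2] + ')';
  return shapes[key] || null;   // O(1) lookup instead of per-shape hit math
}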

Storing shapes in JavaScript array to redraw after some operation

I am developing an editor in HTML5. I have buttons that create shapes when clicked, including triangles, rectangles, hexagons, pentagons, heptagons, lines, and so on. Now I also want to perform operations on these shapes, such as rotate, flip, undo, redo, etc. I want to save these drawn objects in a JavaScript array (or something similar) so I can recreate them after performing operations on the canvas, since individual shapes cannot be rotated or flipped in canvas; they have to be redrawn. How can I achieve this? Thanks in advance.
I have a project where, if you click on an image of a rectangle, you can then draw a rectangle, and if you click on an ellipse, you can then draw an ellipse. My shapes are stored as objects which are then drawn using canvas and can be flipped, rotated, etc. I have not implemented undo/redo.
My project is at http://canvimation.github.com/
The source code for my project is at https://github.com/canvimation/canvimation.github.com
The master branch is the current working code. You are welcome to use any of the code or fork the project.
As you said, you have to clear your context and redraw your shapes any time you change them.
It's not mandatory to clear and redraw the whole context; you can just redraw the region in which a shape was modified.
So you have to think of your shapes as objects (in an OOP way) with their own properties and render method.
What I'd do is create another class to apply transformations to a shape (a flip is just a -1 scale).
If you go this way, it could become a huge amount of work (the more features you add, the more complex your code becomes, and the initial design of your application may need to be rethought along the way).
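As a rough illustration of that object-based approach (a sketch only; the Rect class and the redraw helper are made-up names for this example), each shape keeps its own state plus a render method, and the whole array is replayed after every change:

// Sketch of the shapes-as-objects approach; Rect/redraw are illustrative names.
function Rect(x, y, w, h, color) {
  this.x = x; this.y = y; this.w = w; this.h = h;
  this.color = color;
  this.rotation = 0;   // radians
  this.scaleX = 1;     // set to -1 to flip horizontally
}

Rect.prototype.render = function (ctx) {
  ctx.save();
  ctx.translate(this.x + this.w / 2, this.y + this.h / 2);
  ctx.rotate(this.rotation);
  ctx.scale(this.scaleX, 1);
  ctx.fillStyle = this.color;
  ctx.fillRect(-this.w / 2, -this.h / 2, this.w, this.h);
  ctx.restore();
};

var shapes = [];   // every drawn object lives here

function redraw(ctx, canvas) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  shapes.forEach(function (s) { s.render(ctx); });
}

// Example: rotate the first shape, then replay the scene.
// shapes.push(new Rect(20, 20, 100, 60, 'teal'));
// shapes[0].rotation += Math.PI / 8;
// redraw(ctx, canvas);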
What I can suggest is to use a framework that already does the job.
For example, cgSceneGraph is designed to let developers add their own rendering methods and provides a lot of methods to manipulate them. I'm the designer of the framework; feel free to ask more about how to apply transformations or create your own nodes (tutorials and examples are already on the website, but I'll be pleased to help you).

Algorithm to draw connections between nodes without overlapping nodes

I have a series of nodes in a graph. The nodes are placed by the user in specific spots. The nodes are guaranteed to not overlap and, in fact, to have a buffer of space between them. These nodes are connected and each edge joins to a node at a specific point. I need to draw the edges between the nodes such that the edges:
(required) do not overlap the parent nodes
(ideally) would not overlap any node
I am not worried about edge crossings. Bonus points if there's an implementation of this in JavaScript. I am unable to use any libraries outside of plain JavaScript.
One solution could be using Bézier Curves:
"A Bézier curve is defined by a set of control points P0 through Pn,
where n is called its order (n = 1 for linear, 2 for quadratic, etc.).
The first and last control points are always the end points of the
curve; however, the intermediate control points (if any) generally do
not lie on the curve."
So the basic idea is to use the parent node(s) as intermediate control points. You may also use points along the edges as intermediate control points to keep edges from overlapping.
In the Wikipedia article you can find nice animations explaining it.
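For example, with the plain canvas API a quadratic Bézier edge needs only one control point. In this sketch (all coordinates and the bend amount are placeholder values), the control point is pushed out perpendicular to the straight segment between the two node centers, so the curve bows away from whatever sits between them:

// Sketch: draw a curved edge between two node centers with a quadratic Bézier.
// The control point is offset perpendicular to the segment so the edge bows
// away from the straight line (and thus away from nodes sitting on it).
function drawEdge(ctx, a, b, bend) {
  var mx = (a.x + b.x) / 2, my = (a.y + b.y) / 2;   // midpoint of the segment
  var dx = b.x - a.x, dy = b.y - a.y;
  var len = Math.sqrt(dx * dx + dy * dy) || 1;
  // Unit normal to the segment, scaled by the desired bend.
  var cx = mx - (dy / len) * bend;
  var cy = my + (dx / len) * bend;

  ctx.beginPath();
  ctx.moveTo(a.x, a.y);
  ctx.quadraticCurveTo(cx, cy, b.x, b.y);
  ctx.stroke();
}

// Usage with placeholder coordinates:
// drawEdge(ctx, {x: 50, y: 200}, {x: 350, y: 180}, 60);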
For a JavaScript implementation I took a look at the following libs:
jsdraw2d (LGPL license), which has a nice demo and is well referenced; it has also been implemented using HTML5 and SVG for performance (jsdraw2dX).
jsbezier on Google Code
But if you google "javascript bezier library" you can find more.
If you are familiar with C# and .NET, you can explore the Microsoft.GLEE library (description is here and here) via ILSpy, or even, theoretically, save the sources to a .csproj, modify them, and recompile to JavaScript with Script#.

Creating a complex Feature / Vector in OpenLayers

The closest post I could find to my question is Compound complex feature in OpenLayers. Alas, no one answered it. I am quite proficient in JavaScript but relatively new to OpenLayers and its complex API. I have created complex Controls prior to this. However, this time I am looking to create a complex Feature / Vector. The general idea of it is that the feature has a display icon (like a pin, for example) as the main component. The component is interactive and responds to user actions (select, drag, etc). Upon selection, I desire to render additional vectors that are logically associated with this component (circles, rectangles, etc). These Vectors listen to user interactions as well.
Previously, in the case of Controls, I was able to use the source of other controls to make sense of the development direction and proceed successfully. It's a little harder with Features / Vectors, imho.
I started by extending OpenLayers.Feature.Vector with code like OpenLayers.Feature.Vector.CustomClass = OpenLayers.Class(OpenLayers.Feature.Vector, {...});. The constructor takes parameters specific to my feature, creates several geometry objects (points, polygon, lines), adds them to an OpenLayers.Geometry.Collection, and invokes the OpenLayers.Feature.Vector constructor with the collection passed into it.
Unfortunately, I realized that in order to display an icon, I cannot just use a Geometry.Point but need to create a Vector for it. That threw me off a bit, because I would be creating Vectors within my custom Vector object. It is nothing unusual in general, but I wonder if this is the way things are done in OpenLayers. As I have mentioned, I do not find the API documentation very useful, as it simply states general function headers and brief descriptions.
I would greatly appreciate it if someone could point me in the right direction (I haven't found many tutorials online beyond the basic "create a marker with a custom image" type). If the description isn't clear, let me know and I'll try to provide additional information.
I've had to tackle similar problems in the past. The best approach with OpenLayers (or any mapping tool for that matter) is usually to separate your layers into feature classes, each of which represents a collection of points, lines, or polygons. Once you create all of your layers, you can create a select control that listens for events on each of these layers and responds appropriately.
If you need to logically associate subsets of these features together, you could store references to these features externally, or within the parent feature's attributes object.
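A rough sketch of that layout, assuming the classic OpenLayers 2 API (the layer names, the related attribute, and the handler bodies are placeholders): one vector layer per feature class, with a single SelectFeature control watching all of them.

// Sketch using the OpenLayers 2 API; 'map' is an existing OpenLayers.Map instance,
// and layer names / handlers are placeholders.
var pinLayer    = new OpenLayers.Layer.Vector('Pins');     // the icon features
var extrasLayer = new OpenLayers.Layer.Vector('Extras');   // circles, rectangles, ...
map.addLayers([pinLayer, extrasLayer]);

// One select control can listen on several layers at once.
var select = new OpenLayers.Control.SelectFeature([pinLayer, extrasLayer], {
  onSelect: function (feature) {
    // e.g. look up related features stored in the pin's attributes object
    var related = feature.attributes.related || [];
    // ... render or highlight the associated circles/rectangles here
  },
  onUnselect: function (feature) {
    // ... hide the associated geometry again
  }
});
map.addControl(select);
select.activate();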
My solution is to provide a FeatureCollection GeoJSON as the complex/compound data. In my case the FeatureCollection consists of many Point features and one LineString feature. OpenLayers can consume this GeoJSON:
var features = (new ol.format.GeoJSON()).readFeatures(geojson);
... and return the collection of features. You can then iterate over those features and attach some unifying attribute/object to each one. Then, when you define event handlers (hover or select/click), access the unifying attribute to get hold of any other related feature.
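For instance (a sketch assuming OpenLayers 3+; the 'groupId' attribute name, the projection option, and the map/geojson variables are illustrative):

// Sketch, assuming OpenLayers 3+; 'groupId' is an illustrative attribute name,
// and 'map' / 'geojson' are assumed to exist already.
var features = new ol.format.GeoJSON().readFeatures(geojson, {
  featureProjection: 'EPSG:3857'   // reproject if the map uses Web Mercator
});

// Tag every feature of the collection with the same unifying attribute.
features.forEach(function (feature) {
  feature.set('groupId', 'pin-42');
});

var source = new ol.source.Vector({ features: features });
var layer  = new ol.layer.Vector({ source: source });
map.addLayer(layer);

// Later, in a select/hover handler, use the attribute to find siblings:
// var related = source.getFeatures().filter(function (f) {
//   return f.get('groupId') === selected.get('groupId');
// });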

Choosing the right technology (SVG vs Canvas)

I'm writing an app for shape manipulation: after creating simple shapes, the user can create more complex ones by clipping the shapes against each other (e.g. combining two circles into a figure 8 stored as a single path rather than a group, or performing an intersection of two circles to create a "bite" mark). I am trying to decide on a graphics library to use.
SVG seems to handle 80% of the functionality I need out of the box (shape storage, movement, rotation, scaling). The problem is that the other 20% (using clipping to create a new set of complex polygons) seems impossible to achieve without recreating SVG functionality in my own modules (I'd have to store the shape once for drawing inside SVG, and once for processing the clipping myself). I could be wrong about SVG, but from reading about the Raphael library (based on SVG), it seems like it only handles clipping with a rectangle, and even that clipping is temporary (it only renders part of the shape, but still stores the entire shape to be re-rendered once the clipping rectangle is moved). Perhaps I'm just confused about the SVG standard, but even retrieving/parsing the paths to compute a new path from subsets of previous paths seems non-obvious in SVG (there is a Subpath() function, but I don't see anything to find the points of intersection of two polygon perimeters, or to combine several subpaths into a single path).
As a result, Canvas seems like a better alternative, since it doesn't introduce the extra overhead of keeping track of shapes I'd already have to keep track of to make my own clipping implementation work. Not only that, I've already implemented a polygon class that can be moved, rotated, and scaled. Canvas has some other issues, however (I'd have to implement my own redraw method, which I'm sure will not be as efficient as the SVG one that takes advantage of browser-specific optimizations in Chrome and Firefox; and I'd have to accept IE incompatibility, which is handled for free by libraries like Raphael).
Thanks
This may address what you're mentioning.
Clipping can be done with non-rectangular objects using the 'clipPath' element.
For example, here I have an element with an id of 'clipper' that defines the clipping region, and a path that is subject to the clipping. Not sure if they intersect in this snippet.
<g clip-rule="nonzero">
  <clipPath id="clipper">
    <ellipse rx="70" ry="95" clip-rule="evenodd"/>
  </clipPath>
  <!-- stuff to be clipped -->
  <path clip-path="url(#clipper)" d="M -100 0 a 100 50 0 1 0 200 0"/>
</g>
This is just a snippet from something I have. Hope it helps.
It seems to me that you are trying to do 2D constructive geometry. Since SVG runs in retained mode, the objects you draw are stored and then the various operations are performed on them. With Canvas you are running against a bitmap, so the changes take effect immediately. Since your users will in turn perform more operations on your simpler shapes to create ever more complex ones, Canvas should in the long term be a better fit.
The only outstanding question is what will be done with those objects once your users are finished with them. If you zoom the image it will get jaggies. SVG avoids that problem, but you trade that off against greater complexity and a performance impact.
SVG and canvas are both browser graphics technologies, but each works quite differently.
Canvas
Canvas is a bitmap with an immediate-mode graphics application programming interface (API) for drawing on it. Canvas is a "fire and forget" model that renders its graphics directly to its bitmap and subsequently has no sense of the shapes that were drawn; only the resulting bitmap stays around.
More Information about canvas - http://www.queryhome.com/51054/about-html5-canvas
SVG
SVG stands for Scalable Vector Graphics.
SVG is a retained-mode graphics model that persists in an in-memory object model. Analogous to HTML, SVG builds an object model of elements, attributes, and styles. When the svg element appears in an HTML5 document, it behaves like an inline block and is part of the HTML document tree.
More Information about SVG - http://www.queryhome.com/50869/about-svg-part-1
See here for more information about canvas vs svg in detail - Comparing svg vs canvas
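To make the immediate-mode vs retained-mode distinction concrete, here is a minimal sketch (the 'myCanvas' and 'mySvg' element ids are placeholders): the canvas call leaves only pixels behind, while the SVG rectangle remains a DOM node you can restyle or move later.

// Sketch contrasting the two models; 'myCanvas' and 'mySvg' are placeholder ids.

// Canvas (immediate mode): the rectangle is rasterized and forgotten.
var ctx = document.getElementById('myCanvas').getContext('2d');
ctx.fillStyle = 'steelblue';
ctx.fillRect(10, 10, 120, 80);   // no object survives this call, only pixels

// SVG (retained mode): the rectangle stays in the DOM and can be changed later.
var SVG_NS = 'http://www.w3.org/2000/svg';
var rect = document.createElementNS(SVG_NS, 'rect');
rect.setAttribute('x', 10);
rect.setAttribute('y', 10);
rect.setAttribute('width', 120);
rect.setAttribute('height', 80);
rect.setAttribute('fill', 'steelblue');
document.getElementById('mySvg').appendChild(rect);

// A second later, move the SVG rect; the canvas one would have to be redrawn.
setTimeout(function () {
  rect.setAttribute('x', 200);
}, 1000);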
You're right - you'll have to perform the clipping and the creation of new shapes mathematically regardless of whether you use SVG or Canvas. I'm biased, but it seems like it would be more useful to use SVG, since you also get things like DOM events on the shapes (mouse, dragging) and serialization into a graphical format for free.
