Examples of JS hierarchical tree with mixed canvas/DIV approach

I want to provide a visualisation of groups of data on a website, with each group containing multiple fields. The groups are related to other groups in a largely hierarchical fashion.
The Spacetree examples from the JavaScript InfoVis Toolkit provide almost all of the functionality, with the major caveat that the entire graph is rendered to a canvas. Node types are therefore visually restricted to canvas drawing elements.
Instead, I'm looking for a library that allows <div>s to be rendered (each with my multiple fields, icons, Javascript functionality, etc.) and visually linked in a similar fashion to the Spacetree examples. Essentially, the general concept is similar to UML or database diagrams.
I suppose that I could just use the InfoVis toolkit, overlay my <div>s and limit interactivity, but I'm wondering if anyone has come across a library that does this out of the box (and preferably for free).

It's already doing just that! Looking at the example on the InfoVis site, there is a chunk of JavaScript which actually constructs HTML nodes for the nodes shown on screen. All you need to do, it seems, is modify that section to produce your own HTML chunk:
//This method is called on DOM label creation.
//Use this method to add event handlers and styles to
//your node.
onCreateLabel: function(label, node) {
    label.id = node.id;
    label.innerHTML = node.name;
    label.onclick = function() {
        st.onClick(node.id);
    };
    //set label styles
    var style = label.style;
    style.width = 40 + 'px';
    style.height = 17 + 'px';
    style.cursor = 'pointer';
    style.color = '#fff';
    //style.backgroundColor = '#1a1a1a';
    style.fontSize = '0.8em';
    style.textAlign = 'center';
    style.textDecoration = 'underline';
    style.paddingTop = '3px';
},
The important line is
label.innerHTML = node.name;
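For the richer nodes described in the question, that line can just as well build a small HTML fragment instead of plain text. A minimal sketch (the node.data fields and the CSS class are made-up placeholders, not part of the toolkit):
onCreateLabel: function(label, node) {
    label.id = node.id;
    // build a small card instead of a plain text label; the data fields
    // (icon, owner, status) are hypothetical properties of your own node data
    label.innerHTML =
        '<div class="node-card">' +
            '<img src="' + node.data.icon + '" alt=""/>' +
            '<strong>' + node.name + '</strong>' +
            '<span>' + node.data.owner + '</span>' +
            '<span>' + node.data.status + '</span>' +
        '</div>';
    label.onclick = function() {
        st.onClick(node.id);
    };
},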

Related

How to get the X and Y position from an SVG polygon element with JavaScript?

I'm using the SVG located at http://upload.wikimedia.org/wikipedia/commons/3/32/Blank_US_Map.svg in a project and interacting with it via d3.js. I'd like to create a click-to-zoom effect like http://bl.ocks.org/2206590; however, that example relies on path data stored in a JSON object to calculate the centroid. Is there any way to load path data in d3 from an existing SVG to get the centroid?
My (hackish) attempt so far:
function get_centroid(sel) {
    var coords = d3.select(sel).attr('d');
    coords = coords.replace(/ *[LC] */g, '],[')
                   .replace(/ *M */g, '[[[')
                   .replace(/ *z */g, ']]]')
                   .replace(/ /g, '],[');
    return d3.geo.path().centroid({
        "type": "Feature",
        "geometry": { "type": "Polygon", "coordinates": JSON.parse(coords) }
    });
}
This seems to work on some states, such as Missouri, but others like Washington fail because my SVG data parsing is so rudimentary. Does d3 support something like this natively?
The D3 functions all seem to assume you're starting with GeoJSON. However, I don't actually think you need the centroid for this; what you really need is the bounding box, and fortunately that is available directly from the SVG DOM interface:
function getBoundingBoxCenter(selection) {
    // get the DOM element from a D3 selection
    // you could also use "this" inside .each()
    var element = selection.node();
    // use the native SVG interface to get the bounding box
    var bbox = element.getBBox();
    // return the center of the bounding box
    return [bbox.x + bbox.width / 2, bbox.y + bbox.height / 2];
}
This is actually slightly better than the true centroid for the purpose of zooming, as it avoids some projection issues you might otherwise run into.
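For example, wiring it into a click-to-zoom handler might look something like this sketch (zoomTo stands in for whatever zoom/translate routine you already use):
d3.selectAll("path").on("click", function() {
    // "this" is the clicked <path>; wrap it back into a selection
    var center = getBoundingBoxCenter(d3.select(this));
    // hand the centre to your own zoom logic (zoomTo is a hypothetical helper)
    zoomTo(center[0], center[1]);
});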
The accepted answer was working great for me until I tested in Edge. I can't comment since I don't have enough reputation, but I was using this solution and found an issue with Microsoft Edge, whose bounding box does not expose x or y, just top/left/bottom/right, etc.
So the above code should be:
function getBoundingBoxCenter(selection) {
    // get the DOM element from a D3 selection
    // you could also use "this" inside .each()
    var element = selection.node();
    // use the native SVG interface to get the bounding box
    var bbox = element.getBBox();
    // return the center of the bounding box
    return [bbox.left + bbox.width / 2, bbox.top + bbox.height / 2];
}
From here
The solution is to use the .datum() method on the selection, which hands back the GeoJSON feature bound to the element:
var element = d3.select("#element");
var centroid = path.centroid(element.datum());
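For context, this assumes the paths were created by binding GeoJSON features in the usual d3 way, so each element carries its feature as bound data. A sketch (featureCollection and the id accessor are placeholders for your own data):
// paths created from bound GeoJSON features, so element.datum() returns a feature
var path = d3.geo.path();
svg.selectAll("path")
    .data(featureCollection.features) // hypothetical FeatureCollection
  .enter().append("path")
    .attr("d", path)
    .attr("id", function(d) { return d.id; });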

Layout DirectedGraph (dagre) only on a subset of nodes

I'm looking for a way to lay out only a subset of nodes of a directed graph with the JointJS / Rappid diagramming library.
I need some "fixed" nodes in the graph and to lay out the "others", assuming that they can be connected to each other or to some of the fixed nodes (the graph is already added to the paper).
Since the joint.layout.DirectedGraph.layout API must be used on a graph object, I was wondering if there is any mechanism to keep some nodes of the graph "fixed" during the layout calculation (some property to add to the cell object, for example).
Something like this could also be fine, but the getSubgraph API must not also pull in the incoming and outgoing links:
var subGraph = graph.getSubgraph([A, B]);
joint.layout.DirectedGraph.layout(subGraph, layoutOpt);
Looking at the docs I was not able to identify this kind of feature.
If this feature is not supported, is there any other approach that I could use to achieve my goal? (Of course I could also lay out the entire graph and re-apply my fixed coordinates when the operation ends, but I was looking for something better than that.)
So I'm pretty sure that the auto-layout functionality using Dagre is now only available in the Rappid version of JointJS (i.e. the paid one). What I do is use Dagre separately to perform the layout calculations and then iterate back through the elements, using the Dagre output to change their positions manually. Not ideal, but it does allow you to do whatever you want in terms of only looking at a subset of nodes. Basic code is below (graphObj is the JointJS graph object); you should be able to use it as a starting point if you want to work with a subset of elements and links:
var edges = [];
// decorate each JointJS element with the fields dagre expects
var elements = graphObj.getElements();
elements.forEach(function(element) {
    element.label = element.id;
    element.width = element.attributes.size.width;
    element.height = element.attributes.size.height;
});
// build dagre edges from the JointJS links (source/target are the element objects)
var links = graphObj.getLinks();
links.forEach(function(link) {
    edges.push({ source: link.getSourceElement(), target: link.getTargetElement() });
});
// run the layout; dagre writes the computed coordinates back onto each element as element.dagre
dagre.layout()
    .nodeSep(150)
    .edgeSep(100)
    .rankSep(150)
    .rankDir("LR")
    .nodes(elements)
    .edges(edges)
    .run();
// apply the computed positions back to the JointJS elements
elements.forEach(function(element) {
    element.position(element.dagre.x, element.dagre.y);
    element.attributes.prop.metadata.x = element.dagre.x;
    element.attributes.prop.metadata.y = element.dagre.y;
});
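If you only want to lay out a subset, you can filter before handing anything to Dagre. A sketch, assuming you mark your fixed cells yourself (the isFixed attribute below is hypothetical):
// lay out only the non-fixed elements; isFixed is your own marker on the cells
var subset = graphObj.getElements().filter(function(element) {
    return !element.get('isFixed');
});

// keep only the links whose two endpoints are both inside the subset
var subsetIds = {};
subset.forEach(function(element) { subsetIds[element.id] = true; });

var subsetEdges = graphObj.getLinks()
    .filter(function(link) {
        return subsetIds[link.getSourceElement().id] &&
               subsetIds[link.getTargetElement().id];
    })
    .map(function(link) {
        return { source: link.getSourceElement(), target: link.getTargetElement() };
    });
// then feed subset and subsetEdges into dagre exactly as above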

Point coordinate translate to specific Surface in Famo.us

We have a pretty complex web app built in Meteor. The UI is mainly nested HTML elements. Now we are trying to rewrite the UI with Famo.us so we can get better performance as well as add great animation effects. One feature in our app is that when the user drags on top of an element A, we need to draw a new element B based on the precise position of the mouse events in B. That is, we need to calculate the coordinate of a point in any element, even when the element has complex transforms. We were using the 'webkitConvertPointFromPageToNode' function in WebKit browsers (we only support WebKit) to do the job. Does Famo.us have a similar function so I can calculate a point's coordinate in a specific Surface? Or do you have any suggestions on how to accomplish such a feature with the current API?
Thanks
Given that the transforms in Famo.us are all backed by absolute positioning, finding the coordinates in any given surface is pretty straightforward. In the Event object you can grab the offsetX and offsetY of the target surface.
Check out the example below.
Hope it helps!
var Engine = require('famous/core/Engine');
var Surface = require('famous/core/Surface');
var StateModifier = require('famous/modifiers/StateModifier');
var Transform = require('famous/core/Transform');

var context = Engine.createContext();

var surface = new Surface({
    size: [200, 200],
    properties: {
        backgroundColor: 'green',
        color: 'white',
        textAlign: 'center',
        lineHeight: '200px'
    }
});

surface.on('mousemove', function(e) {
    surface.setContent("x: " + e.offsetX + ", y: " + e.offsetY);
});

surface.state = new StateModifier({
    transform: Transform.translate(100, 100, 0)
});

context.add(surface.state).add(surface);
I have found the right way to do this.
First, I dug into the problem mentioned in my comment: the offsetX/offsetY values are actually based on the child surfaces. offsetX/offsetY are generated by the DOM's MouseEvent and copied into Famo.us with no modification, and the DOM doesn't provide the coordinate of the mouse point relative to 'currentTarget', only relative to 'target', the element on which the event occurred. So we can only take the clientX/clientY coordinates in the viewport and then calculate the coordinate of that point on the target element ourselves. There is no official API to do that calculation either; only WebKit provides the 'webkitConvertPointFromPageToNode' API, because the layout engine knows all about the position and transforms of a specific element.
But then I realised that with Famo.us, we know the transforms of each surface! In the render tree, all the modifiers on the path from the root context to a RenderNode form the transform for that node and the nodes below it. We can multiply them together to get one transform matrix M, then do a coordinate-system transformation to compute the point's coordinate in the node's local coordinate system.
But Famo.us doesn't have a direct API to get all the modifiers for a node, so I did it myself in my code. I would suggest that Famo.us add a 'parent' reference to each RenderNode; then we could get them easily for any node.
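The idea looks roughly like the following sketch (not my actual code): it assumes you keep your own parent references so you can collect the StateModifiers above a surface (getModifierChain is a hypothetical helper for that), and that those modifiers are translation-only, so only the translation entries of each 16-element matrix matter:
// convert a page-level point into a surface's local coordinates by subtracting
// the translation of every modifier above it (translation-only transforms assumed)
function pageToSurface(surface, clientX, clientY) {
    var x = clientX;
    var y = clientY;
    getModifierChain(surface).forEach(function(modifier) { // hypothetical helper
        var m = modifier.getTransform(); // the StateModifier's current 16-element matrix
        x -= m[12]; // translation x lives at index 12
        y -= m[13]; // translation y lives at index 13
    });
    return [x, y];
}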
It took me a while, but this works for me:
var myX = event.clientX;
var myY = event.clientY;
// event.path is non-standard (Chromium only): walk from the event target up to the root
for (var i = 0; i < event.path.length; i++) {
    if (event.path[i].style === undefined)
        continue;
    var matrix = event.path[i].style.transform;
    // match matrix3d(...) values, allowing negative and decimal numbers
    var matrixPattern = /^\w*\(((-?\d*\.?\d+),\s*)*(-?\d*\.?\d+)\)/i;
    if (matrixPattern.test(matrix)) {
        var values = matrix.replace(/^\w*\(/, '').replace(')', '').split(/\s*,\s*/);
        // entries 12 and 13 of a matrix3d() are its x and y translation
        myX -= values[12];
        myY -= values[13];
    }
}
Tested with align and size modifiers.

How to get the HTML file of an edited canvas in HTML5

I am trying to make a website which helps its users to create a page by dragging and dropping elements onto the canvas. The user should be able to save the HTML file of the edited canvas. I cannot figure out how to convert the changes made to the canvas into an HTML file.
I don't think it's possible to get markup out of a canvas. I've searched for a month but can't find a valid solution, though maybe some experts will know. Best of luck, buddy.
Canvas is basically just a bitmap image. Whatever you draw on the canvas is stored as pixels, not as elements, so changes to the canvas are just changes in pixel values. To do what you wish, you would need to store your 'elements' as 'objects' within your code, where each 'object' holds all the required data for your 'element'.
It would then be possible to open a new window and export code into it using document.writeln.
The code below may give you an idea of what sort of thing would be needed
var newwindow = window.open('', '_blank');
newwindow.document.writeln('<!DOCTYPE HTML>');
newwindow.document.writeln('<html>');
newwindow.document.writeln('<head>');
newwindow.document.writeln('<style>');
newwindow.document.writeln('#element0 {');
// obj0 is one of your stored element objects
newwindow.document.writeln('background:' + obj0.background + ';');
newwindow.document.writeln('width:' + obj0.width + ';');
newwindow.document.writeln('}');
newwindow.document.writeln('</style>');
newwindow.document.writeln('</head>');
newwindow.document.writeln('<body>');
newwindow.document.writeln('<div id="element0"></div>');
newwindow.document.writeln('</body>');
newwindow.document.writeln('</html>');
newwindow.document.close();
Hope this helps
Canvas won't help you here for anything other than to visualize the objects you have dropped onto it.
You need to record the objects you drop in a "shadow" structure behind the scenes, so to speak. That is to say: build an object list internally which you can then use as source data to render:
Canvas visualization of it
Raw HTML code from it.
You can, for example, drop an image onto the canvas and your code will record a new object (the intention with the following code is to show the principle, not to provide a full working solution):
var myObjects = [];
/// a drop occurred
var o = new myElement(x, y, width, height, id, type);
myElement is a pre-defined object that you set up in advance to hold the given arguments.
Then push the object to your object stack and render it to canvas:
myObjects.push(o);

for (var i = 0, o; o = myObjects[i]; i++) {
    /// draw the look of this object here to canvas
}
When you then need an HTML version of it you do the same:
for (var i = 0, o; o = myObjects[i]; i++) {
    var el = '<' + o.type + ' id="' + o.id + ' .... other things here
}
This way you can produce canvas graphics, HTML, send data over a socket etc.
The key in this sort of thing is to keep the raw base data available. In this case that would be the element type you want to drop, plus its position and dimensions. For HTML you also have to consider things such as nesting, but that would require a bit more code than shown here.
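A minimal sketch of that idea (the myElement constructor and both render functions are deliberately bare-bones placeholders, just to show one object list feeding both outputs):
// a bare-bones version of the pre-defined object mentioned above
function myElement(x, y, width, height, id, type) {
    this.x = x;
    this.y = y;
    this.width = width;
    this.height = height;
    this.id = id;
    this.type = type; // e.g. 'div' or 'img'
}

// draw every stored object onto the canvas (placeholder visualization)
function renderToCanvas(ctx) {
    myObjects.forEach(function(o) {
        ctx.strokeRect(o.x, o.y, o.width, o.height);
    });
}

// produce raw HTML for the same objects
function renderToHTML() {
    return myObjects.map(function(o) {
        return '<' + o.type + ' id="' + o.id + '" style="position:absolute;' +
               'left:' + o.x + 'px;top:' + o.y + 'px;' +
               'width:' + o.width + 'px;height:' + o.height + 'px;"></' + o.type + '>';
    }).join('\n');
}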

How can I stop elements overlapping using JavaScript and the Raphael JavaScript library

I'm generating multiple, randomly sized, circular elements using the Raphael JavaScript library, but because it's random a lot of the circular elements being generated overlap or cover each other. What I wanted to know is: is there any way with JavaScript to tell if one element is already in a particular position, so as to avoid the overlapping? Essentially, I want to create random elements on a canvas, of a random size, that don't overlap or cover each other.
There are a couple of test files I created to give you an idea of what I'm doing. The first one generates random objects and the second places them on a grid to stop the overlapping.
http://files.nicklowman.co.uk/movies/raphael_test_01/
http://files.nicklowman.co.uk/movies/raphael_test_03/
The easiest way is to create an object and give it a repulsive force that degrades towards zero at its edge. As you drop these objects onto the canvas, the objects will push away from each other until they reach a point of equilibrium.
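A rough sketch of that relaxation idea (simplified: rather than a smoothly decaying force, it just pushes overlapping pairs apart a little on each pass), working on plain {x, y, r} records rather than Raphael elements; you would sync the element positions afterwards:
// push overlapping circles apart on each pass until things settle
function relax(circles, iterations) {
    for (var it = 0; it < iterations; it++) {
        for (var i = 0; i < circles.length; i++) {
            for (var j = i + 1; j < circles.length; j++) {
                var a = circles[i], b = circles[j];
                var dx = b.x - a.x, dy = b.y - a.y;
                var dist = Math.sqrt(dx * dx + dy * dy) || 1;
                var overlap = a.r + b.r - dist;
                if (overlap > 0) {
                    // move each circle half the overlap apart, along the line joining the centres
                    var push = overlap / 2;
                    a.x -= (dx / dist) * push; a.y -= (dy / dist) * push;
                    b.x += (dx / dist) * push; b.y += (dy / dist) * push;
                }
            }
        }
    }
}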
Your examples aren't working for me, so I cannot visualize your exact scenario.
Before you "drop" an element on the canvas, you could query the positions of your other elements and do some calculations to check if the new element will overlap.
A very simple example of this concept using circle elements might look like this:
function overlap(circ1, circ2) {
    // read the centre coordinates and radius of each circle individually
    // (attr() with an array of names returns an array, not an object)
    var c1 = { cx: circ1.attr("cx"), cy: circ1.attr("cy"), r: circ1.attr("r") };
    var c2 = { cx: circ2.attr("cx"), cy: circ2.attr("cy"), r: circ2.attr("r") };
    // circles overlap when the distance between centres is less than the sum of the radii
    var dist = Math.sqrt(Math.pow(c1.cx - c2.cx, 2) + Math.pow(c1.cy - c2.cy, 2));
    return dist < (c1.r + c2.r);
}

var next_drop = paper.circle(x, y, r);
for (var i in circles) {
    if (overlap(next_drop, circles[i])) {
        // do something
    }
}
Of course calculating just where you're going to place a circle after you've determined it overlaps with others is a little more complicated.
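One simple alternative is rejection sampling: keep trying random spots and only keep the circle once it fits. A sketch reusing the overlap() helper above (the paper dimensions and the retry limit are assumptions about your setup):
// try random positions until the new circle doesn't overlap anything, then keep it
function placeNonOverlapping(paper, circles, r, maxTries) {
    for (var attempt = 0; attempt < (maxTries || 100); attempt++) {
        var x = r + Math.random() * (paper.width - 2 * r);
        var y = r + Math.random() * (paper.height - 2 * r);
        var candidate = paper.circle(x, y, r);
        var clear = true;
        for (var i = 0; i < circles.length; i++) {
            if (overlap(candidate, circles[i])) { clear = false; break; }
        }
        if (clear) {
            circles.push(candidate);
            return candidate;
        }
        candidate.remove(); // discard and try another random spot
    }
    return null; // gave up: no free spot found
}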
