My team and I are looking at using mxGraph to programmatically generate diagrams. We've created a graph and saved it as an XML file. How do we go from there to an SVG file?
I understand mxGraph uses SVG natively for its display, and that it can write SVG files with the graph XML embedded in them so they can be reopened in diagrams.net. How can I get mxGraph to put our graph on an SVG canvas and then serialize it to disk?
// from https://stackoverflow.com/a/57829704
const jsdom = require("jsdom");
const { JSDOM } = jsdom;
const dom = new JSDOM();
const fs = require('fs');
global.window = dom.window;
global.document = window.document;
global.XMLSerializer = window.XMLSerializer;
global.navigator = window.navigator;
const mxgraph = require("mxgraph")({
  mxImageBasePath: "./src/images",
  mxBasePath: "./src"
});
const {mxGraph, mxCodec, mxUtils, mxConstants, mxSvgCanvas2D, mxImageExport} = mxgraph;
function makeHelloWorld() {
  // Extracted from https://github.com/jgraph/mxgraph/blob/master/javascript/examples/helloworld.html
  const graph = new mxGraph();

  // Gets the default parent for inserting new cells. This
  // is normally the first child of the root (ie. layer 0).
  var parent = graph.getDefaultParent();

  // Adds cells to the model in a single step
  graph.getModel().beginUpdate();
  try {
    var v1 = graph.insertVertex(parent, null, 'Hello,', 20, 20, 80, 30);
    var v2 = graph.insertVertex(parent, null, 'World!', 200, 150, 80, 30);
    var e1 = graph.insertEdge(parent, null, '', v1, v2);
  } finally {
    // Updates the display
    graph.getModel().endUpdate();
  }

  return graph;
}
const helloWorldGraph = makeHelloWorld();
function graphToXML(graph) {
  var encoder = new mxCodec();
  var result = encoder.encode(graph.getModel());
  return mxUtils.getXml(result);
}
const xml = graphToXML(helloWorldGraph);
fs.writeFileSync('./graph.xml', xml);
Everything up to this point works--we can output an XML file. Now we have to get from there to an SVG.
I've created an SVG canvas, like so:
function createSvgCanvas(graph) {
  const svgDoc = mxUtils.createXmlDocument();
  const root = (svgDoc.createElementNS != null)
    ? svgDoc.createElementNS(mxConstants.NS_SVG, 'svg')
    : svgDoc.createElement('svg');

  if (svgDoc.createElementNS == null) {
    root.setAttribute('xmlns', mxConstants.NS_SVG);
    root.setAttribute('xmlns:xlink', mxConstants.NS_XLINK);
  } else {
    root.setAttributeNS('http://www.w3.org/2000/xmlns/', 'xmlns:xlink', mxConstants.NS_XLINK);
  }

  const bounds = graph.getGraphBounds();
  root.setAttribute('width', (bounds.x + bounds.width + 4) + 'px');
  root.setAttribute('height', (bounds.y + bounds.height + 4) + 'px');
  root.setAttribute('version', '1.1');

  svgDoc.appendChild(root);

  const svgCanvas = new mxSvgCanvas2D(root);
  return svgCanvas;
}
The problems are:
1. The new canvas is sized for the graph, but it doesn't have the graph on it yet.
2. I need to figure out how to turn the SVG canvas into an SVG file on disk.
EDIT: I've added the following to the end of the program:
const canvas = createSvgCanvas(helloWorldGraph);
const imgExport = new mxImageExport();
// adapted from https://jgraph.github.io/mxgraph/docs/js-api/files/util/mxImageExport-js.html
imgExport.drawState(helloWorldGraph.getView().getState(helloWorldGraph.model.root), canvas);
const xml2 = mxUtils.getXml(canvas);
const svgString = '<?xml version="1.0" encoding="UTF-8"?>\n'
  + '<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">\n'
  + xml2;
fs.writeFileSync('./graph.svg', svgString);
This gets me "Uncaught TypeError: Failed to execute 'serializeToString' on 'XMLSerializer': parameter 1 is not of type 'Node'." I've tried a few things, but I'm not familiar enough with mxGraph to get the Node it needs out of my canvas.
For point 1, you can have a look at the draw.io code, which provides details about how to set up the canvas.
https://github.com/jgraph/drawio/blob/v13.2.3/src/main/webapp/js/diagramly/EditorUi.js#L1756
https://github.com/jgraph/drawio/blob/v13.2.3/src/main/webapp/js/mxgraph/Graph.js#L7780
But this code is intended to be used in a browser, so you may run into issues in your script, which relies on JSDOM.
You will have to use an mxImageExport to draw onto the canvas.
Then wrap the produced node to generate the SVG string (in the following, svgRoot is the element produced by the custom code and updated by the mxImageExport):
'<?xml version="1.0" encoding="UTF-8"?>\n'
  + '<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">\n'
  + mxUtils.getXml(svgRoot)
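Applied to the script above, that might look roughly like this (a sketch, not tested; it assumes mxSvgCanvas2D keeps the <svg> element it was given as its root property, so it is that element, not the canvas object, that gets serialized):

// Sketch: serialize the <svg> root element, not the mxSvgCanvas2D wrapper.
const canvas = createSvgCanvas(helloWorldGraph);
const svgRoot = canvas.root; // the <svg> element created in createSvgCanvas

const imgExport = new mxImageExport();
const state = helloWorldGraph.getView().getState(helloWorldGraph.model.root);
imgExport.drawState(state, canvas);

const svgString = '<?xml version="1.0" encoding="UTF-8"?>\n'
  + '<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">\n'
  + mxUtils.getXml(svgRoot);

fs.writeFileSync('./graph.svg', svgString);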
For point 2 in a script
Once you have the SVG string, you can write it directly into a file, as you did to store the mxGraph model in the XML file.
For point 2 in a browser
You can use the download and href attributes of the Anchor element (https://developer.mozilla.org/en-US/docs/Web/HTML/Element/a#Attributes)
First, generate a data href containing the encoded value of the SVG string produced in point 1:
'data:image/svg+xml,' + encodeURIComponent(svg)
Then create the anchor with this href data and trigger it, for instance when clicking a button.
More details in https://ourcodeworld.com/articles/read/189/how-to-create-a-file-and-generate-a-download-with-javascript-in-the-browser-without-a-server
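Put together, that browser-side download could look something like this (a sketch; the function and file names are just for illustration):

// Sketch: offer the SVG string as a download via a temporary anchor element.
function downloadSvg(svg, filename) {
  const a = document.createElement('a');
  a.href = 'data:image/svg+xml,' + encodeURIComponent(svg);
  a.download = filename;
  document.body.appendChild(a);
  a.click();
  document.body.removeChild(a);
}

downloadSvg(svgString, 'graph.svg');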
To generate SVG files from XML, you can download the desktop version of diagrams.net from http://get.diagrams.net and run it on the command line.
Documentation for the command line flags can be found here or by running draw.io --help.
/Applications/draw.io.app/Contents/MacOS/draw.io -x -f svg graph.xml
However, SVG files generated in this way lack the embedded XML data which would allow them to be reopened in diagrams.net. It may be possible to reinsert this data using regex or another method.
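For example, a rough sketch of such a reinsertion (this assumes, without guarantee, that diagrams.net looks for the model in a content attribute on the <svg> root element, which is what its own exports appear to use):

// Sketch: splice the mxGraph model XML back into the exported SVG as a
// "content" attribute so diagrams.net can reopen it (an assumption, not
// verified against every diagrams.net version).
const fs = require('fs');

const svg = fs.readFileSync('./graph.svg', 'utf8');
const model = fs.readFileSync('./graph.xml', 'utf8');

// XML-escape the model so it can sit inside an attribute value.
const escaped = model
  .replace(/&/g, '&amp;')
  .replace(/</g, '&lt;')
  .replace(/>/g, '&gt;')
  .replace(/"/g, '&quot;');

const embedded = svg.replace('<svg ', '<svg content="' + escaped + '" ');
fs.writeFileSync('./graph-embedded.svg', embedded);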
I have used the THREE.js GLTFLoader to load a glb file, then used the USDZExporter to export it to usdz. When I tried to open it in the browser it opened in Safari, but it didn't show in ARKit in Object mode, and in AR mode it appears above my head. This is a simple file:
https://drive.google.com/file/d/1uAZNZLWI-zdtjcetfT2tIBh9a0tyzGyi/view?usp=sharing
This is my attempt:
const loader = new GLTFLoader().setPath(`${origin}${folderPath}`);
loader.load(modelName, async function (gltf) {
  model = gltf.scene;
  scene.add(model);

  const exporter = new USDZExporter();
  const arraybuffer = await exporter.parse(model);
  const blob = new Blob([arraybuffer], { type: 'application/octet-stream' });

  const link = document.getElementById('usdz-link');
  link.style.display = '';
  link.href = URL.createObjectURL(blob);
});
I have also tried to center the glb model with this code:
const box = new THREE.Box3().setFromObject(gltf.scene);
const center = box.getCenter(new THREE.Vector3());
var sceneCopy = gltf.scene.clone();
sceneCopy.position.x += (gltf.scene.position.x - center.x);
sceneCopy.position.y += (gltf.scene.position.y - center.y);
sceneCopy.position.z += (gltf.scene.position.z - center.z);
and then exported sceneCopy to usdz, but unfortunately this didn't help.
I think we answered this one through Slack, but I'll repost here for closure.
The offset of the model needs to be close to the origin for it to appear correctly in the quicklook viewer on iPad/iPhone.
You can manually set the offset by changing these three lines...
https://github.com/wallabyway/quicklook-example/blob/6f1e1453b983e0effc4ad82b2eda5eac905865a9/alliedbim-piping.gltf#L1023-L1025
which is the opposite of these 3 values:
https://github.com/wallabyway/quicklook-example/blob/6f1e1453b983e0effc4ad82b2eda5eac905865a9/alliedbim-piping.gltf#L1044-L1046
Basically, when you use filtering in forge-convert-utils, the center option doesn't take into account the offset.
There is a GitHub repo feature request to fix this...
https://github.com/petrbroz/forge-convert-utils/issues/44
and a branch with a possible solution.
https://github.com/petrbroz/forge-convert-utils/commit/bb4bd0a13c685c34966a3bf5c2784ba9b1343a7d
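If you want a workaround on the three.js side in the meantime, you could also try moving the model itself to the origin before exporting; a minimal sketch (inside the async loader callback; this is not the forge-convert-utils fix, just a workaround):

// Sketch: translate the model so its bounding-box center sits at the origin,
// then export; QuickLook positions the usdz relative to the origin.
const box = new THREE.Box3().setFromObject(model);
const center = box.getCenter(new THREE.Vector3());
model.position.sub(center);
model.updateMatrixWorld(true);

const exporter = new USDZExporter();
const arraybuffer = await exporter.parse(model);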
Here is my problem: I have created an image collage function in JavaScript. (I started off with some code from this post, btw: dragging and resizing an image on html5 canvas.)
I have 10 canvas elements stacked on top of each other, and all parameters, including the 2d context, image data, positions etc. for each canvas, are held in instances of the function 'collage'.
This is working fine; I can manipulate each canvas separately (drag, resize, adding frames, etc). But now I want the user to be able to save the current work.
So I figure that maybe it would be possible to create a blob that contains all the object instances, and then save the blob as a file on disk.
This is the function collage (I also push each instance to the array collage.instances, to be able to have numbered indexes):
function collage() {
  this.canvas_board = '';
  this.canvas = '';
  this.ctx = '';
  this.canvasOffset = '';
  this.offsetX = '';
  this.offsetY = '';
  this.startX = '';
  this.startY = '';
  this.imageX = '';
  this.imageY = '';
  this.mouseX = '';
  this.mouseY = '';
  this.imageWidth = '';
  this.imageHeight = '';
  this.imageRight = '';
  this.imageBottom = '';
  this.imgframe = '';
  this.frame = 'noframe';
  this.img = '';
  collage.instances.push(this);
}

collage.instances = [];
I tried with something like this:
var oMyBlob = new Blob(collage.instances, {type: 'multipart/form-data'});
But that doesn't work (only contains about 300 bits of data).
Anyone who can help? Or maybe suggest an alternative way to save the current collage work? It must of course be possible to open the blob and repopulate the object instances.
Or maybe I am making this a bit more complicated than it has to be... but I am stuck right now, so I would appreciate any hints.
You can extract each layer's image data to DataURLs and save the result as a json object.
Here's a quick demo: http://codepen.io/gunderson/pen/PqWZwW
The process literally takes each canvas and saves out its data for later import.
The use of jquery here is for convenience:
$(".save-button").click(function() {
var imgData = JSON.stringify({
layers: getLayerData()
});
save(imgData, "myfile.json");
});
function save(filecontents, filename) {
try {
var $a = $("<a>").attr({
href: "data:application/json;," + filecontents,
download: filename
})[0].click();
return filecontents;
} catch (err) {
console.error(err);
return null;
}
}
function getLayerData() {
var imgData = [];
$(".layer").each(function(i, el) {
imgData.push(el.toDataURL("image/png"));
});
return imgData;
}
To restore, you can use a FileReader to read the contents of the JSON back into the browser, then make an <img> for each layer, set each img.src to the corresponding dataURL from your JSON, and from there draw each <img> onto its canvas in the img's onload handler.
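A restore sketch along those lines (assuming the saved JSON is the {layers: [...]} object produced by the save button above):

// Sketch: read the saved JSON back in, then redraw each layer from its data URL.
function restore(file) {
  var reader = new FileReader();
  reader.onload = function() {
    var layers = JSON.parse(reader.result).layers;
    $(".layer").each(function(i, canvas) {
      var img = new Image();
      img.onload = function() {
        canvas.getContext("2d").drawImage(img, 0, 0);
      };
      img.src = layers[i];
    });
  };
  reader.readAsText(file);
}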
Add a reference (src URL) for the image to the instance, then serialize the instance array as JSON and use e.g. localStorage:
localStorage.setItem("currentwork", JSON.stringify(collage.instances));
Then to restore you would need to do:
var tmp = localStorage.getItem("currentwork");
collage.instances = tmp ? JSON.parse(tmp) : [];
You then need to iterate through the array and reload the images using proper onload handling. Finally re-render everything.
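Roughly like this (a sketch; it assumes each instance was saved with a src reference as suggested, that the layer canvases carry a "layer" class, and that the non-serializable fields are re-created on load):

// Sketch: rebuild the canvas/context/image references after JSON.parse,
// since only plain data (positions, sizes, the src string) survives serialization.
collage.instances.forEach(function(inst, i) {
  inst.canvas = document.getElementsByClassName("layer")[i];
  inst.ctx = inst.canvas.getContext("2d");
  var img = new Image();
  img.onload = function() {
    inst.img = img;
    inst.ctx.drawImage(img, inst.imageX, inst.imageY, inst.imageWidth, inst.imageHeight);
  };
  img.src = inst.src; // the reference you serialized
});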
Can you store image data on the client? Yes, but it's not recommended. It will take a lot of space, and if there is too much you will not be able to save all the data, the user may refuse to allow more storage space, and so on.
Keeping a link to the image on a server is a better approach for these things, IMO. But if you disagree, look into IndexedDB (or WebSQL, although it's deprecated) for local storage that can be expanded in available space. localStorage can only hold between 2.5 and 5 MB, i.e. no image data, and only strings. Each char takes two bytes, and data-URIs add 33% on top, so it will run empty pretty fast.
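A minimal IndexedDB sketch for comparison (the database and store names here are made up for illustration):

// Sketch: store one layer's pixels as a blob in IndexedDB instead of localStorage.
var request = indexedDB.open("collageDB", 1);

request.onupgradeneeded = function(e) {
  e.target.result.createObjectStore("layers");
};

request.onsuccess = function(e) {
  var db = e.target.result;
  document.querySelector("canvas").toBlob(function(blob) {
    db.transaction("layers", "readwrite")
      .objectStore("layers")
      .put(blob, "layer-0"); // keyed per layer
  });
};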
I'm trying to load an image and put its data into an HTML Image element, but without success.
var fs = require("fs");
var content = fs.read('logo.png');
After reading the content of the file, I have to convert it somehow to an Image or just draw it to a canvas. I was trying to convert the binary data to a Base64 Data URL with this code I found on Stack Overflow:
function base64encode(binary) {
  return btoa(unescape(encodeURIComponent(binary)));
}

var base64Data = 'data:image/png;base64,' + base64encode(content);
console.log(base64Data);
The returned Base64 is not a valid Data URL. I've tried a few more approaches, but without success. Do you know the best (shortest) way to achieve this?
This is a rather ridiculous workaround, but it works. Keep in mind that PhantomJS' (1.x?) canvas is a bit broken, so the canvas.toDataURL function returns largely inflated encodings. The smallest that I found was, ironically, image/bmp.
function decodeImage(imagePath, type, callback) {
  var page = require('webpage').create();
  var htmlFile = imagePath + "_temp.html";
  fs.write(htmlFile, '<html><body><img src="' + imagePath + '"></body></html>');

  var possibleCallback = type;
  type = callback ? type : "image/bmp";
  callback = callback || possibleCallback;

  page.open(htmlFile, function() {
    page.evaluate(function(imagePath, type) {
      var img = document.querySelector("img");

      // the following is copied from http://stackoverflow.com/a/934925
      var canvas = document.createElement("canvas");
      canvas.width = img.width;
      canvas.height = img.height;

      // Copy the image contents to the canvas
      var ctx = canvas.getContext("2d");
      ctx.drawImage(img, 0, 0);

      // Get the data-URL formatted image
      // Firefox supports PNG and JPEG. You could check img.src to
      // guess the original format, but be aware that using "image/jpg"
      // will re-encode the image.
      window.dataURL = canvas.toDataURL(type);
    }, imagePath, type);

    fs.remove(htmlFile);

    var dataUrl = page.evaluate(function() {
      return window.dataURL;
    });

    page.close();
    callback(dataUrl, type);
  });
}
You can call it like this:
decodeImage('logo.png', 'image/png', function(imgB64Data, type) {
  //console.log(imgB64Data);
  console.log(imgB64Data.length);
  phantom.exit();
});
or this
decodeImage('logo.png', function(imgB64Data, type) {
  //console.log(imgB64Data);
  console.log(imgB64Data.length);
  phantom.exit();
});
I tried several things. I couldn't figure out the encoding of the file as returned by fs.read. I also tried to dynamically load the file into the about:blank DOM through file://-URLs, but that didn't work. I therefore opted to write a local html file to the disk and open it immediately.
I have a difficult question for you, which I've been struggling with for some time now.
I'm looking for a solution where I can save a file to the user's computer without using local storage, because local storage has a 5 MB limit. I want the "Save to file" dialog, but the data I want to save is only available in JavaScript, and I would like to avoid sending the data back to the server and then downloading it again.
The use case is that the service I'm working on saves compressed and encrypted chunks of the user's data, so the server has no knowledge of what is in those chunks. Sending the data back to the server would cause 4 times the traffic, and the server would receive the unencrypted data, which would render the whole encryption useless.
I found a JavaScript API for saving data to the user's computer with the "Save to file" dialog, but work on it has been discontinued and it isn't fully supported. It's this: http://www.w3.org/TR/file-writer-api/
So since I have no window.saveAs, what is the way to save data from a Blob object without sending everything to the server?
Would be great if I could get a hint what to search for.
I know that this works, because MEGA is doing it, but I want my own solution :)
Your best option is to use a blob url (which is a special url that points to an object in the browser's memory) :
var myBlob = ...;
var blobUrl = URL.createObjectURL(myBlob);
Now you have the choice to simply redirect to this url (window.location.replace(blobUrl)), or to create a link to it. The second solution allows you to specify a default file name:
var link = document.createElement("a"); // Or maybe get it from the current document
link.href = blobUrl;
link.download = "aDefaultFileName.txt";
link.innerHTML = "Click here to download the file";
document.body.appendChild(link); // Or append it whereever you want
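If you want the download to start without the user clicking the link, you can also trigger it programmatically and release the blob url afterwards; a small sketch:

link.click(); // start the download
setTimeout(function() {
  URL.revokeObjectURL(blobUrl); // free the blob once the download has started
}, 0);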
FileSaver.js implements saveAs for certain browsers that don't have it
https://github.com/eligrey/FileSaver.js
Tested with FileSaver.js 1.3.8 on Chromium 75 and Firefox 68, neither of which has a native saveAs.
The working principle seems to be to just create an <a> element and click it with JavaScript, oh the horrors of the web.
Here is a demo that saves a blob generated with canvas.toBlob to your download folder with the chosen name mypng.png:
var canvas = document.getElementById("my-canvas");
var ctx = canvas.getContext("2d");
var pixel_size = 1;

function draw() {
  console.log("draw");
  for (x = 0; x < canvas.width; x += pixel_size) {
    for (y = 0; y < canvas.height; y += pixel_size) {
      var b = 0.5;
      ctx.fillStyle =
        "rgba(" +
        (x / canvas.width) * 255 + "," +
        (y / canvas.height) * 255 + "," +
        b * 255 +
        ",255)"
      ;
      ctx.fillRect(x, y, pixel_size, pixel_size);
    }
  }
  canvas.toBlob(function(blob) {
    saveAs(blob, 'mypng.png');
  });
}
window.requestAnimationFrame(draw);
<canvas id="my-canvas" width="512" height="512" style="border:1px solid black;"></canvas>
<script src="https://cdnjs.cloudflare.com/ajax/libs/FileSaver.js/1.3.8/FileSaver.min.js"></script>
Here is an animated version that downloads multiple images: Convert HTML5 Canvas Sequence to a Video File
See also:
how to save canvas as png image?
JavaScript: Create and save file
HERE is the direct way.
canvas.toBlob(function(blob) {
  console.log(typeof(blob)); // you have a 'blob' here
  var blobUrl = URL.createObjectURL(blob);

  var link = document.createElement("a"); // Or maybe get it from the current document
  link.href = blobUrl;
  link.download = "image.jpg";
  link.innerHTML = "Click here to download the file";
  document.body.appendChild(link); // Or append it wherever you want

  // Can add an id to be specific if there are multiple anchor tags, and use #id
  document.querySelector('a').click();
}, 'image/jpeg', 1); // JPEG at 100% quality
I spent a while coming up with this solution; comment if this helps.
Thanks to Sebastien C's answer.
The fs-web node dependency was more useful for this:
npm i fs-web
Usage
import * as fs from 'fs-web';

async processFetch(url, file_path = 'cache-web') {
  const fileName = `${file_path}/${url.split('/').reverse()[0]}`;
  let cache_blob: Blob;

  await fs.readString(fileName).then((blob) => {
    cache_blob = blob;
  }).catch(() => { });

  if (!!cache_blob) {
    this.prepareBlob(cache_blob);
    console.log('FROM CACHE');
  } else {
    await fetch(url, {
      headers: {},
    }).then((response: any) => {
      return response.blob();
    }).then((blob: Blob) => {
      fs.writeFile(fileName, blob).then(() => {
        return fs.readString(fileName);
      });
      this.prepareBlob(blob);
    });
  }
}
From a file picker or input type=file file chooser, save the filename to local storage:
HTML:
<audio id="player1">Your browser does not support the audio element</audio>
JavaScript:
function picksinglefile() {
  var fop = new Windows.Storage.Pickers.FileOpenPicker();
  fop.suggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.musicLibrary;
  fop.fileTypeFilter.replaceAll([".mp3", ".wav"]);
  fop.pickSingleFileAsync().then(function (file) {
    if (file) {
      // save the file name to local storage
      localStorage.setItem("alarmname$", file.name.toString());
    } else {
      alert("Operation Cancelled");
    }
  });
}
Then later in your code, when you want to play the file you selected, use the following, which gets the file using only its name from the music library. (In the UWP package manifest, set your 'Capabilities' to include 'Music Library'.)
var l = Windows.Storage.KnownFolders.musicLibrary;
var f = localStorage.getItem("alarmname$").toString(); // retrieve file by name

l.getFileAsync(f).then(function (file) {
  // storagefile file is available, create URL from it
  var s = window.URL.createObjectURL(file);
  var x = document.getElementById("player1");
  x.setAttribute("src", s);
  x.play();
});
I'm using openxml in my HTML5 mobile app to generate word documents on the mobile device.
In general openxml works fine and is straightforward, but I'm struggling with an annoying problem.
The document generation only works the first time after I've started the app. That first time I can open and view the document. Restarting the app means:
- Redeploying from the development machine
- Removing the app from the task pane (pushing it aside; I assume the app is removed then?)
The second time, I get a message that the document is corrupted and I'm unable to view the file.
UPDATE:
I can't reproduce this behaviour when I'm running the app connected to the remote debugger without having a breakpoint set. Doing it this way I always get a working document.
It doesn't make a difference whether I make any changes to the document or not. Simply opening and saving reproduces this error.
After doing some research, I've found that the structure of the docx zip file is the same for the working and the corrupt file. They also have the same file length. But in the corrupt docx I've found some files with a wrong/invalid CRC. See here an example when trying to extract a corrupt file from the zip. Other files work as expected.
The properties for this file show the invalid CRC (the CRC in a working version is 44D3906C).
Code for processing the doc-template:
/*
 * Process the template
 */
function processTemplate(doc64, callback)
{
  "use strict";

  console.log("PROCESS TEMPLATE");

  var XAttribute = Ltxml.XAttribute;
  var XCData = Ltxml.XCData;
  var XComment = Ltxml.XComment;
  var XContainer = Ltxml.XContainer;
  var XDeclaration = Ltxml.XDeclaration;
  var XDocument = Ltxml.XDocument;
  var XElement = Ltxml.XElement;
  var XName = Ltxml.XName;
  var XNamespace = Ltxml.XNamespace;
  var XNode = Ltxml.XNode;
  var XObject = Ltxml.XObject;
  var XProcessingInstruction = Ltxml.XProcessingInstruction;
  var XText = Ltxml.XText;
  var XEntity = Ltxml.XEntity;
  var cast = Ltxml.cast;
  var castInt = Ltxml.castInt;

  var W = openXml.W;
  var NN = openXml.NoNamespace;
  var wNs = openXml.wNs;

  var doc = new openXml.OpenXmlPackage(doc64);

  // add a paragraph to the beginning of the document.
  var body = doc.mainDocumentPart().getXDocument().root.element(W.body);
  var tpl_row = ((doc.mainDocumentPart().getXDocument().descendants(W.tbl)).elementAt(1).descendants(W.tr)).elementAt(2);
  var newrow = new XElement(tpl_row);

  doc.mainDocumentPart().getXDocument().descendants(W.tbl).elementAt(1).add(newrow);

  // callback(doc);

  var mod_file = null;
  var newfile;
  var path;

  if (doc != null && doc != undefined) {
    mod_file = doc.saveToBlob();

    // Start writing document
    path = "Templates";
    newfile = "Templates/Bau.docx";
    console.log("WRITE TEMPLATE DOCUMENT");

    fs.root.getFile("Templates/" + "MyGenerated.docx", {create: true, exclusive: false},
      function(fileEntry)
      {
        fileEntry.createWriter(
          function(fileWriter)
          {
            fileWriter.onwriteend = function(e) {
              console.log("TEMPLATE DOCUMENT WRITTEN:" + e.target.length);
            };
            fileWriter.onerror = function(e) {
              console.log("ERROR writing DOCUMENT:" + e.code + ";" + e.message);
            };

            var blobreader = new FileReader();
            blobreader.onloadend = function()
            {
              fileWriter.write(blobreader.result); // reader.result contains the contents of blob as a typed array
            };
            blobreader.readAsArrayBuffer(mod_file);
          },
          null);
      }, null);
  }
}
Any ideas what I'm doing wrong?
Thanks for posting about the error. There were some issues with jszip.js that I encountered when I was developing the Open XML SDK for JavaScript.
At the following link, there is a sample javascript app that demonstrates generating a document.
Open XML SDK for JavaScript Demo
In that app you can save multiple DOCXs, one after another, and they are not corrupted.
In order to work on this issue, I need to be able to re-produce locally. Maybe you can take that little working web app and replace parts with your parts until it is generating invalid files?
Cheers, Eric
P.S. I am traveling and have intermittent access to internet. If you can continue the thread on OpenXmlDeveloper.org, then it will help me to answer quicker. :-)
What made it work for me was changing the way of adding images (Parts) to the document. I was using the type "binary" for adding images to the document. I changed this to "base64".
So I changed the source from:
mydoc.addPart( "/word/"+reltarget, openXml.contentTypes.png, "binary", fotodata ); // add Image Part to doc
to:
mydoc.addPart( "/word/"+reltarget, openXml.contentTypes.png, "base64", window.btoa(fotodata) ); // add Image Part to doc