File converted with the three.js USDZExporter not shown on iPhone - javascript

I used the THREE.js GLTFLoader to load a .glb file, then used the USDZExporter to export it to .usdz. When I open the result on an iPhone, it opens in Safari, but nothing is shown in AR Quick Look's Object mode, and in AR mode the model appears above my head. Here is a simple sample file:
https://drive.google.com/file/d/1uAZNZLWI-zdtjcetfT2tIBh9a0tyzGyi/view?usp=sharing
This is my attempt:
const loader = new GLTFLoader().setPath(`${origin}${folderPath}`);
loader.load(modelName, async function (gltf) {
  model = gltf.scene;
  scene.add(model);

  const exporter = new USDZExporter();
  const arraybuffer = await exporter.parse(model);
  const blob = new Blob([arraybuffer], { type: 'application/octet-stream' });

  const link = document.getElementById('usdz-link');
  link.style.display = '';
  link.href = URL.createObjectURL(blob);
});
I have also tried to center the glb model with this code:
const box = new THREE.Box3().setFromObject(gltf.scene);
const center = box.getCenter(new THREE.Vector3());
const sceneCopy = gltf.scene.clone();
sceneCopy.position.x += gltf.scene.position.x - center.x;
sceneCopy.position.y += gltf.scene.position.y - center.y;
sceneCopy.position.z += gltf.scene.position.z - center.z;
and then exported sceneCopy to USDZ, but unfortunately that didn't help.

I think we answered this one through Slack, but I'll repost here for closure.
The model's offset needs to be close to the origin for it to appear correctly in the Quick Look viewer on iPad/iPhone.
You can manually set the offset by changing these three lines...
https://github.com/wallabyway/quicklook-example/blob/6f1e1453b983e0effc4ad82b2eda5eac905865a9/alliedbim-piping.gltf#L1023-L1025
which are the opposite of these three values:
https://github.com/wallabyway/quicklook-example/blob/6f1e1453b983e0effc4ad82b2eda5eac905865a9/alliedbim-piping.gltf#L1044-L1046
Basically, when you use filtering in forge-convert-utils, the center option doesn't take the offset into account.
There is a GitHub repo feature request to fix this...
https://github.com/petrbroz/forge-convert-utils/issues/44
and a branch with a possible solution.
https://github.com/petrbroz/forge-convert-utils/commit/bb4bd0a13c685c34966a3bf5c2784ba9b1343a7d
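If you want to do the recentering on the three.js side rather than by editing the glTF, the idea is the same: translate the model so its bounding-box center lands at the origin before exporting. Below is a minimal sketch; `centerOffset` is a hypothetical helper that just restates the arithmetic in plain JS, and the commented lines show the equivalent with the standard three.js Box3/Vector3 API (an assumption about your setup, not code from the question):

```javascript
// Hypothetical helper: given an axis-aligned bounding box as plain
// {min: [x, y, z], max: [x, y, z]} arrays, return the translation that
// moves the box's center to the origin.
function centerOffset(box) {
  return box.min.map((lo, i) => -((lo + box.max[i]) / 2));
}

// The equivalent in three.js (sketch, not taken from the question):
//   const box = new THREE.Box3().setFromObject(model);
//   const center = box.getCenter(new THREE.Vector3());
//   model.position.sub(center); // bounding-box center now sits at the origin
//   const arraybuffer = await new USDZExporter().parse(model);
```

Applying the translation to the model itself (rather than to a clone that is never re-exported with its transforms baked) is the part that matters for Quick Look.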

Related

Blob download not working with JavaScript inside a WebView

I have a React application which needs to support both WebViews and browsers. I have a feature wherein an API returns a base64 string, and we need to convert it to a PDF and provide a link for the user to download it. Below is a sample code snippet.
Code Snippet:
const convertBase64StringToPDF = (data: string) => {
  const byteCharacters = atob(data);
  const byteNumbers = new Array(byteCharacters.length);
  for (let i = 0; i < byteCharacters.length; i++) {
    byteNumbers[i] = byteCharacters.charCodeAt(i);
  }
  const byteArray = new Uint8Array(byteNumbers);
  return new Blob([byteArray], { type: 'application/pdf' });
};

getbase64StringData(baseUrl2)
  .then((response) => {
    const blob = convertBase64StringToPDF(response.foobar);
    const link = document.createElement('a');
    link.href = window.URL.createObjectURL(blob);
    link.download = 'MyDoc' + new Date() + '.pdf';
    link.click();
  });
The problem I'm facing is that when a user clicks the link we create, the download isn't triggered in WebViews (it works fine in regular browsers). I searched for solutions, but most of them suggest making changes in the WebView code. The problem is I don't have control over the app code that embeds the WebView. Is there a way I can solve this in the React/JavaScript application? I don't care where the download happens; I just need it to be triggered and the file stored on the user's device.
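One workaround worth trying (a sketch, not a guaranteed fix, since WebView behavior varies by host app): skip the Blob entirely and navigate to a data: URI built from the base64 string the API already returns. Some WebViews hand data: navigations to the host's download handler even when they ignore programmatic clicks on blob: links.

```javascript
// Build a PDF data: URI straight from the base64 payload.
// `base64` is assumed to be the raw base64 string from the API response.
function base64ToPdfDataUri(base64) {
  return 'data:application/pdf;base64,' + base64;
}

// Usage sketch, replacing the blob/anchor approach from the question:
//   window.location.href = base64ToPdfDataUri(response.foobar);
```

Whether the host app honors this still depends on its WebView configuration, so it is best treated as a fallback path alongside the existing blob link.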

Outputting SVG files from mxGraph

My team and I are looking at using mxGraph to programmatically generate diagrams. We've created a graph and saved it as an XML file. How do we go from there to an SVG file?
I understand mxGraph uses SVG files natively for the display, and that it can write SVG files that have XML encoded in them so they can be reopened in diagrams.net. How can I get mxGraph to put our graph on an SVG canvas and then serialize it to disk?
// from https://stackoverflow.com/a/57829704
const jsdom = require("jsdom");
const { JSDOM } = jsdom;
const dom = new JSDOM();
const fs = require('fs');
global.window = dom.window;
global.document = window.document;
global.XMLSerializer = window.XMLSerializer;
global.navigator = window.navigator;
const mxgraph = require("mxgraph")({
mxImageBasePath: "./src/images",
mxBasePath: "./src"
});
const {mxGraph, mxCodec, mxUtils, mxConstants, mxSvgCanvas2D} = mxgraph;
function makeHelloWorld() {
  // Extracted from https://github.com/jgraph/mxgraph/blob/master/javascript/examples/helloworld.html
  const graph = new mxGraph();
  // Gets the default parent for inserting new cells. This
  // is normally the first child of the root (ie. layer 0).
  const parent = graph.getDefaultParent();
  // Adds cells to the model in a single step
  graph.getModel().beginUpdate();
  try {
    const v1 = graph.insertVertex(parent, null, 'Hello,', 20, 20, 80, 30);
    const v2 = graph.insertVertex(parent, null, 'World!', 200, 150, 80, 30);
    graph.insertEdge(parent, null, '', v1, v2);
  } finally {
    // Updates the display
    graph.getModel().endUpdate();
  }
  return graph;
}
const helloWorldGraph = makeHelloWorld();
function graphToXML(graph) {
  const encoder = new mxCodec();
  const result = encoder.encode(graph.getModel());
  return mxUtils.getXml(result);
}
const xml = graphToXML(helloWorldGraph);
fs.writeFileSync('./graph.xml', xml);
Everything up to this point works; we can output an XML file. Now we have to get from there to an SVG.
I've created an SVG canvas, like so:
function createSvgCanvas(graph) {
  const svgDoc = mxUtils.createXmlDocument();
  const root = (svgDoc.createElementNS != null)
    ? svgDoc.createElementNS(mxConstants.NS_SVG, 'svg')
    : svgDoc.createElement('svg');
  if (svgDoc.createElementNS == null) {
    root.setAttribute('xmlns', mxConstants.NS_SVG);
    root.setAttribute('xmlns:xlink', mxConstants.NS_XLINK);
  } else {
    root.setAttributeNS('http://www.w3.org/2000/xmlns/', 'xmlns:xlink', mxConstants.NS_XLINK);
  }
  const bounds = graph.getGraphBounds();
  root.setAttribute('width', (bounds.x + bounds.width + 4) + 'px');
  root.setAttribute('height', (bounds.y + bounds.height + 4) + 'px');
  root.setAttribute('version', '1.1');
  svgDoc.appendChild(root);
  return new mxSvgCanvas2D(root);
}
The problems are:
The new canvas is sized for the graph, but it doesn't have the graph on it yet.
I need to figure out how to turn the SVG canvas into an SVG file on disk.
EDIT: I've added the following to the end of the program:
const canvas = createSvgCanvas(helloWorldGraph);
const imgExport = new mxImageExport();
// adapted from https://jgraph.github.io/mxgraph/docs/js-api/files/util/mxImageExport-js.html
imgExport.drawState(helloWorldGraph.getView().getState(helloWorldGraph.model.root), canvas);
const xml2 = mxUtils.getXml(canvas);
const svgString = '<?xml version="1.0" encoding="UTF-8"?>\n'
  + '<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">\n'
  + xml2;
fs.writeFileSync('./graph.svg', svgString);
This gets me "Uncaught TypeError: Failed to execute 'serializeToString' on 'XMLSerializer': parameter 1 is not of type 'Node'." I've tried a few things, but I'm not familiar enough with mxGraph to get the Node it needs out of my canvas.
For point 1, you can have a look at the draw.io code, which shows how to set up the canvas:
https://github.com/jgraph/drawio/blob/v13.2.3/src/main/webapp/js/diagramly/EditorUi.js#L1756
https://github.com/jgraph/drawio/blob/v13.2.3/src/main/webapp/js/mxgraph/Graph.js#L7780
But this code is intended to be used in a browser, so you may have issues running it in your script, which relies on JSDOM.
You will have to use an mxImageExport to use the canvas.
Then wrap the produced node to generate the SVG string (in the following, svgRoot is the Element produced by the custom code and updated by the mxImageExport):
'<?xml version="1.0" encoding="UTF-8"?>\n'
+ '<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">\n'
+ mxUtils.getXml(svgRoot)
For point 2 in a script:
Once you have the SVG string, you can write it directly into a file, as you did to store the mxGraph model in the XML file.
For point 2 in a browser:
You can use the download and href attributes of the anchor element (https://developer.mozilla.org/en-US/docs/Web/HTML/Element/a#Attributes).
First, generate a data href containing the encoded SVG string produced in point 1:
'data:image/svg+xml;charset=utf-8,' + encodeURIComponent(svg)
Then create the anchor with this data href and trigger it, for instance when the user clicks a button.
More details in https://ourcodeworld.com/articles/read/189/how-to-create-a-file-and-generate-a-download-with-javascript-in-the-browser-without-a-server
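Putting the browser steps above together, a minimal sketch (the function names are mine, not from mxGraph; note the `;charset=utf-8,` separator in the data URI):

```javascript
// Wrap an SVG string in a data: URI suitable for an anchor's href.
function svgDataUri(svg) {
  return 'data:image/svg+xml;charset=utf-8,' + encodeURIComponent(svg);
}

// Create a temporary anchor and trigger the download (browser only).
function downloadSvg(svg, fileName) {
  const link = document.createElement('a');
  link.href = svgDataUri(svg);
  link.download = fileName;
  link.click();
}
```

For example, `downloadSvg(svgString, 'graph.svg')` at the end of the browser flow would prompt the file save.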
To generate SVG files from XML, you can download the desktop version of diagrams.net from http://get.diagrams.net and run it on the command line.
Documentation for the command-line flags is available by running draw.io --help.
/Applications/draw.io.app/Contents/MacOS/draw.io -x -f svg graph.xml
However, SVG files generated in this way lack the embedded XML data which would allow them to be reopened in diagrams.net. It may be possible to reinsert this data using regex or another method.

BodyPix (Tensorflow.js) error: Unknown input type: [object ImageData]

I am trying to load images serverside (locally) using node.js so that I can run the segmentation functions on them.
In the BodyPix README it states that the segmentPerson function accepts ImageData object images:
"Params in segmentPerson()
image - ImageData|HTMLImageElement|HTMLCanvasElement|HTMLVideoElement The input image to feed through the network."
https://github.com/tensorflow/tfjs-models/tree/master/body-pix
Here is my code:
var bodyPix = require("@tensorflow-models/body-pix");
var tfjs = require("@tensorflow/tfjs");
var inkjet = require('inkjet');
var createCanvas = require('canvas');
var fs = require('fs');

async function loadAndPredict(data) {
  const net = await bodyPix.load({
    architecture: 'MobileNetV1',
    outputStride: 16,
    multiplier: 0.75,
    quantBytes: 2
  });
  const imgD = createCanvas.createImageData(new Uint8ClampedArray(data.data), data.width, data.height);
  const segmentation = await net.segmentPerson(imgD, {
    flipHorizontal: false,
    internalResolution: 'medium',
    segmentationThreshold: 0.7
  });
  const maskImage = bodyPix.toMask(segmentation, false);
}

inkjet.decode(fs.readFileSync('./person.jpg'), function(err, data) {
  if (err) throw err;
  console.log('OK: Image');
  loadAndPredict(data);
});
It loads an image from the current directory, converts it to the specified ImageData format, then feeds it into the segmentPerson function. The half of the code that doesn't relate to loading and formatting the image is taken straight from the GitHub README and has worked for me when using HTML image elements.
However, it returns Unknown input type: [object ImageData] on the call to net.segmentPerson(imgD, ...).
I haven't been able to find a solution to this issue, so any help or guidance would be GREATLY appreciated. Thank you.
Check this repo: https://github.com/ajaichemmanam/Posenet-NodeServer
PoseNet (which BodyPix uses) runs there in a Node.js server script. You can check how the image is loaded and used for PoseNet estimation.
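Another angle: segmentPerson() also accepts a tf.Tensor3D in addition to the DOM types listed in the README, so you can build the tensor yourself from the decoded RGBA bytes and skip ImageData entirely. A sketch follows; the commented tfjs calls at the bottom are an assumption about the question's setup, not tested here:

```javascript
// Strip the alpha channel from interleaved RGBA pixel data so the result
// can be reshaped into a [height, width, 3] tensor.
function rgbaToRgb(data) {
  const out = new Uint8ClampedArray((data.length / 4) * 3);
  for (let i = 0, j = 0; i < data.length; i += 4) {
    out[j++] = data[i];     // R
    out[j++] = data[i + 1]; // G
    out[j++] = data[i + 2]; // B
  }
  return out;
}

// Assumed usage with the question's variables (not run here):
//   const rgb = rgbaToRgb(data.data);
//   const tensor = tfjs.tensor3d(Array.from(rgb), [data.height, data.width, 3], 'int32');
//   const segmentation = await net.segmentPerson(tensor, { ... });
```

This sidesteps the type check that rejects node-canvas's ImageData, since the model only ever sees a tensor.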

tensorflow js RGB to Lab from input img

I have been searching and debugging but cannot find anything that works for me. I'm building a web application that converts black-and-white images to color. There is an input with which I load the image and run inference (currently with an image-to-image model).
The thing is, I want to convert the image from RGB to Lab as a preprocessing step before it enters the network, because that is how I intend to train it. My code is as follows:
var myInput = document.getElementById('myFileInput');

function processPic() {
  if (myInput.files && myInput.files[0]) {
    var reader = new FileReader();
    reader.onload = function (e) {
      $('#prev_img_id').attr('src', e.target.result);
      // Initiate the JavaScript Image object.
      var image = new Image();
      // Set the Base64 string returned from FileReader as source.
      image.src = e.target.result;
      image.onload = function () {
        const webcamImage = tf.fromPixels(this);
        const batchedImage = webcamImage.expandDims(0);
        predict(batchedImage.toFloat().div(tf.scalar(127)).sub(tf.scalar(1)));
      };
    };
    reader.readAsDataURL(myInput.files[0]);
  }
}
myInput.addEventListener('change', processPic, false);

function predict(the_img) {
  // get predictions
  let pred = mobilenet.predict(the_img);
  // retrieve the highest-probability class label
  let cls = pred.argMax().buffer().values[0];
  alert(IMAGENET_CLASSES[cls]);
}
I wrote some code to do this using tensorflow.js operations. The code could be optimized further with matrix multiplications, but it works and should put you on the right path if this is still relevant:
RGB2LAB TFJS
There are some resources around covering the RGB-to-LAB conversion, e.g. http://www.easyrgb.com/en/math.php.
You could also give this JS implementation a try (https://github.com/antimatter15/rgb-lab, which actually uses the equations from the easyrgb website), calling the rgb2lab() function inside your image.onload.
To access the image data required by rgb2lab(), you can have a look at this SO thread (How do I access/change pixels in a javascript image object?), i.e. using an intermediary canvas.
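For reference, here is a plain-JS sketch of the conversion itself, following the antimatter15/rgb-lab implementation linked above (sRGB gamma expansion, D65 reference white; the constants come from the easyrgb equations):

```javascript
// Convert an [r, g, b] triple (each 0-255) to CIE [L, a, b].
function rgb2lab(rgb) {
  let r = rgb[0] / 255, g = rgb[1] / 255, b = rgb[2] / 255;
  // sRGB gamma expansion to linear RGB
  r = r > 0.04045 ? Math.pow((r + 0.055) / 1.055, 2.4) : r / 12.92;
  g = g > 0.04045 ? Math.pow((g + 0.055) / 1.055, 2.4) : g / 12.92;
  b = b > 0.04045 ? Math.pow((b + 0.055) / 1.055, 2.4) : b / 12.92;
  // Linear RGB -> XYZ, normalized to the D65 white point
  let x = (r * 0.4124 + g * 0.3576 + b * 0.1805) / 0.95047;
  let y = (r * 0.2126 + g * 0.7152 + b * 0.0722) / 1.00000;
  let z = (r * 0.0193 + g * 0.1192 + b * 0.9505) / 1.08883;
  // XYZ -> Lab
  x = x > 0.008856 ? Math.cbrt(x) : 7.787 * x + 16 / 116;
  y = y > 0.008856 ? Math.cbrt(y) : 7.787 * y + 16 / 116;
  z = z > 0.008856 ? Math.cbrt(z) : 7.787 * z + 16 / 116;
  return [116 * y - 16, 500 * (x - y), 200 * (y - z)];
}
```

As a quick sanity check, white maps to L near 100 with a and b near 0, and black maps to [0, 0, 0].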

Save to Local File from Blob

I have a difficult question for you, one I've been struggling with for some time now.
I'm looking for a way to save a file to the user's computer without using local storage, because local storage has a 5 MB limit. I want the "Save to file" dialog, but the data I want to save is only available in JavaScript, and I would like to avoid sending the data back to the server just to send it down again.
The use case is that the service I'm working on saves compressed and encrypted chunks of the user's data, so the server has no knowledge of what's in those chunks. Sending the data back to the server would cause four times the traffic, and the server would receive the unencrypted data, which would render the whole encryption useless.
I found a JavaScript API for saving data to the user's computer with a "Save to file" dialog, but work on it has been discontinued and it isn't fully supported: http://www.w3.org/TR/file-writer-api/
So, since I have no window.saveAs, what is the way to save data from a Blob object without sending everything to the server?
It would be great if I could get a hint about what to search for.
I know this is possible, because MEGA is doing it, but I want my own solution :)
Your best option is to use a blob URL (a special URL that points to an object in the browser's memory):
var myBlob = ...;
var blobUrl = URL.createObjectURL(myBlob);
Now you can either simply redirect to this URL (window.location.replace(blobUrl)) or create a link to it. The second solution allows you to specify a default file name:
var link = document.createElement("a"); // Or maybe get it from the current document
link.href = blobUrl;
link.download = "aDefaultFileName.txt";
link.innerHTML = "Click here to download the file";
document.body.appendChild(link); // Or append it whereever you want
FileSaver.js implements saveAs for certain browsers that don't have it
https://github.com/eligrey/FileSaver.js
Tested with FileSaver.js 1.3.8 on Chromium 75 and Firefox 68, neither of which has saveAs natively.
The working principle seems to be to just create an <a> element and click it with JavaScript (oh, the horrors of the web).
Here is a demo that saves a blob generated with canvas.toBlob to your download folder under the chosen name mypng.png:
var canvas = document.getElementById("my-canvas");
var ctx = canvas.getContext("2d");
var pixel_size = 1;

function draw() {
  console.log("draw");
  for (var x = 0; x < canvas.width; x += pixel_size) {
    for (var y = 0; y < canvas.height; y += pixel_size) {
      var b = 0.5;
      ctx.fillStyle = "rgba(" +
        (x / canvas.width) * 255 + "," +
        (y / canvas.height) * 255 + "," +
        b * 255 +
        ",255)";
      ctx.fillRect(x, y, pixel_size, pixel_size);
    }
  }
  canvas.toBlob(function (blob) {
    saveAs(blob, 'mypng.png');
  });
}
window.requestAnimationFrame(draw);
<canvas id="my-canvas" width="512" height="512" style="border:1px solid black;"></canvas>
<script src="https://cdnjs.cloudflare.com/ajax/libs/FileSaver.js/1.3.8/FileSaver.min.js"></script>
Here is an animated version that downloads multiple images: Convert HTML5 Canvas Sequence to a Video File
See also:
how to save canvas as png image?
JavaScript: Create and save file
Here is the direct way:
canvas.toBlob(function (blob) {
  console.log(typeof blob); // you have the blob here
  var blobUrl = URL.createObjectURL(blob);
  var link = document.createElement("a"); // or get it from the current document
  link.href = blobUrl;
  link.download = "image.jpg";
  link.innerHTML = "Click here to download the file";
  document.body.appendChild(link); // or append it wherever you want
  link.click(); // clicking the link we just created avoids grabbing the wrong anchor
}, 'image/jpeg', 1); // JPEG at 100% quality
It took me a while to come up with this solution; comment if this helps.
Thanks to Sebastien C's answer. I found the fs-web Node package useful for this:
npm i fs-web
Usage:
import * as fs from 'fs-web';

async processFetch(url, file_path = 'cache-web') {
  const fileName = `${file_path}/${url.split('/').reverse()[0]}`;
  let cache_blob: Blob;

  await fs.readString(fileName).then((blob) => {
    cache_blob = blob;
  }).catch(() => { });

  if (!!cache_blob) {
    this.prepareBlob(cache_blob);
    console.log('FROM CACHE');
  } else {
    await fetch(url, {
      headers: {},
    }).then((response: any) => {
      return response.blob();
    }).then((blob: Blob) => {
      fs.writeFile(fileName, blob).then(() => {
        return fs.readString(fileName);
      });
      this.prepareBlob(blob);
    });
  }
}
From a file picker or input type=file chooser, save the file name to local storage:
HTML:
<audio id="player1">Your browser does not support the audio element</audio>
JavaScript:
function picksinglefile() {
  var fop = new Windows.Storage.Pickers.FileOpenPicker();
  fop.suggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.musicLibrary;
  fop.fileTypeFilter.replaceAll([".mp3", ".wav"]);
  fop.pickSingleFileAsync().then(function (file) {
    if (file) {
      // save the file name to local storage
      localStorage.setItem("alarmname$", file.name.toString());
    } else {
      alert("Operation Cancelled");
    }
  });
}
Then later in your code, when you want to play the file you selected, use the following, which retrieves the file from the music library using only its name. (In the UWP package manifest, set your 'Capabilities' to include 'Music Library'.)
var l = Windows.Storage.KnownFolders.musicLibrary;
var f = localStorage.getItem("alarmname$").toString(); // retrieve file by name
l.getFileAsync(f).then(function (file) {
  // the StorageFile is available; create a URL from it
  var s = window.URL.createObjectURL(file);
  var x = document.getElementById("player1");
  x.setAttribute("src", s);
  x.play();
});
