I'm passing a few kB of data (a generated PNG file) from a Unity3D WebGL context to JavaScript so that the user can download the PNG file without leaving the WebGL context. Unity uses Emscripten and embeds the JS as a jslib. It's the first time I've looked at Emscripten or used pointers in JS, and I'm having trouble finding the basics in the Emscripten docs.
It's working, but I think it's a poor implementation. Here's the code:
mergeInto(LibraryManager.library, {
  JSDownload: function(filenamePointer, dataPointer, dataLength) {
    var filename = Pointer_stringify(filenamePointer);
    var data = new Uint8Array(dataLength);
    for (var i = 0; i < dataLength; i++) {
      data[i] = HEAPU8[dataPointer + i];
    }
    var blob = new Blob([data], {type: 'application/octet-stream'});
    if (window.navigator.msSaveOrOpenBlob) {
      window.navigator.msSaveOrOpenBlob(blob, filename);
    } else {
      var elem = window.document.createElement('a');
      elem.href = window.URL.createObjectURL(blob);
      elem.download = filename;
      document.body.appendChild(elem);
      elem.click();
      document.body.removeChild(elem);
    }
  }
});
What bothers me is stepping through the data like that. Since I already have the address and the length, I want to create the 'data' array at the known address, like I would with * and & in C, rather than copying it byte by byte, or if I do have to copy it, at least do it in one hit rather than in a loop. I think my biggest issue is not knowing where to look for the documentation; I've found more by looking at random projects on GitHub than here: https://emscripten.org/docs/api_reference/preamble.js.html
Any help would be appreciated, thanks.
So you don't like this part?
var data = new Uint8Array(dataLength);
for (var i = 0; i < dataLength; i++) {
  data[i] = HEAPU8[dataPointer + i];
}
var blob = new Blob([data], {type: 'application/octet-stream'});
You can make it a one-liner:
var blob = new Blob([HEAPU8.subarray(dataPointer, dataPointer + dataLength)], {type: 'application/octet-stream'});
// or this
var blob = new Blob([new Uint8Array(HEAPU8.buffer, dataPointer, dataLength)], {type: 'application/octet-stream'});
Both of them should be much faster than your original code, and both should have exactly the same performance as each other. That's because they create the new Blob directly from HEAPU8 without building a duplicate array the way your original code does.
HEAPU8 is a Uint8Array, one of the TypedArray family. One really important thing about a TypedArray is that it is not itself the buffer/data; rather, it is a "view" of the underlying ArrayBuffer object (HEAPU8.buffer) which holds the actual data. See ArrayBufferView.
So HEAPU8 provides an interface to the HEAPU8.buffer ArrayBuffer object, specifically WebAssembly.Memory.buffer in Emscripten, making it look like a uint8_t array. Emscripten also provides HEAPU16, HEAPU32, HEAPF32, etc., but they all share the same ArrayBuffer, just with different views.
What .subarray(start, end) and new Uint8Array(buffer, offset, length) do is create a new "view" of the ArrayBuffer object over the specified range, not copy the buffer, so the performance penalty is minimal.
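To see the view semantics in action, here is a small standalone snippet (plain JavaScript, no Emscripten required):
// A standalone demo of view semantics: writes through one view are
// visible through another, which confirms no copy is made.
var buffer = new ArrayBuffer(8);        // the actual storage
var full = new Uint8Array(buffer);      // a view over the whole buffer
var part = full.subarray(2, 6);         // a view over bytes 2..5, no copy

full[2] = 42;
console.log(part[0]);                   // 42: the write shows through both views

var copy = full.slice(2, 6);            // .slice() copies, unlike .subarray()
full[2] = 7;
console.log(copy[0]);                   // still 42: the copy is independent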
I"m a noob in javascript so I am sorry if my question is a simple one. anyway,
I'm writing a code that creates a batch file in order to open a certain file in the default application defined by the operation system. For example, pdf files will open in Adobe's Acrobat Reader. To do so, I'm using the FileSaver.js
And my code goes like this:
$(document).ready(function() {
  $('#openPdf').click(function() {
    saveAs(data2blob(myPDF), 'openPDF.bat');
  });
});
function data2blob(data, isBase64) {
  var chars = "";
  if (isBase64)
    chars = atob(data);
  else
    chars = data;
  var bytes = new Array(chars.length);
  for (var i = 0; i < chars.length; i++)
    bytes[i] = chars.charCodeAt(i);
  var blob = new Blob([new Uint8Array(bytes)],
                      {type: "text/plain;charset=utf-8"});
  return blob;
}
with myPDF being a string for a specific file which I'm certain exists. When I test my code in IE, it works perfectly. However, when I try it in Firefox, the file created is 'openPDF.bat.sdx' instead of 'openPDF.bat'. I've checked that it is indeed the same file, only with the added extension. Does anyone have an idea what the reason for this is, and how can I overcome it?
I finally managed to download the file in Firefox the same way as in IE. I've made one minor change in the code: when creating the blob variable in the data2blob function, I used:
var blob = new Blob([new Uint8Array(bytes)], {type: "application/octet-stream"});
Not sure what the difference is between the way it was before and how it is now, except for the result, of course. (Presumably Firefox picks the saved file's extension based on the blob's declared MIME type, and application/octet-stream leaves the filename untouched.)
I have to read a big file which the user uploads, using JavaScript's File API. Since this file is huge, reading it as-is crashes the browser. So I slice the file, and that forms an array of blobs (can the slice method create an array of any other type?).
JS Code:
var chunkSize = 100000;
var blobs = [];
var currentStart = 0;
// File.slice treats the end offset as exclusive, so the next chunk must
// start exactly where the previous one ended (no +1), and the loop must
// keep running until the final chunk has been pushed.
while (currentStart < file.size) {
  var currentEnd = Math.min(currentStart + chunkSize, file.size);
  blobs.push(file.slice(currentStart, currentEnd));
  currentStart = currentEnd;
}
After this I have an array of blobs known as 'blobs'. How do I store this array? (I am using localforage (IndexedDB).) Currently I am storing the blobs array as it is.
JS Code:
localforage.setItem(file.name, blobs, function() {
  localforage.getItem(file.name, function(err, value) {
    var fullRetrievedBlobArray = [];
    var x = value;
  });
});
Can converting it to something else help? Or should I read the blobs in the 'blobs' array and convert them to one big array? (If yes, how do I do that?)
After retrieving, x will contain the array of blobs which I had stored. Now how do I merge these blobs to get the original file back as one big data URL?
What if I upload a video in a similar way? How can I merge the blob array to form a data URL to add to the src attribute of an HTML5 video tag?
Also, can someone provide a little bit of a high-level explanation of how localforage stores this array? Does it directly set it?
The Blob constructor takes a mixed array of buffers, strings, and other blobs:
var hugeBlob = new Blob(blobs, {type: "video/mp4"});
document.getElementById("video").src = URL.createObjectURL(hugeBlob);
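For the retrieval side, a minimal sketch along the same lines (reusing the file.name key and the "video" element id from the question):
// Sketch: read the stored chunk array back and merge it into one Blob.
localforage.getItem(file.name, function(err, blobs) {
  if (err) { console.error(err); return; }
  var hugeBlob = new Blob(blobs, {type: "video/mp4"});
  document.getElementById("video").src = URL.createObjectURL(hugeBlob);
});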
I am trying to save an image canvas to disk as a .png in a Chrome extension, with the file name reflecting its MD5 hash. For this I use something like this:
var img = document.createElement("img");
img.src = canvas.toDataURL("image/png");
var image_data = atob(img.src.split(',')[1]);
var arraybuffer = new ArrayBuffer(image_data.length);
var view = new Uint8Array(arraybuffer);
for (var i = 0; i < image_data.length; i++) {
  view[i] = image_data.charCodeAt(i);
}
var blob = new Blob([view], {type: 'image/png'});
var url = (window.webkitURL || window.URL).createObjectURL(blob);
var filename;
var b = new FileReader();
b.readAsDataURL(blob);
b.onloadend = function () {
  // note: filename is only set once onloadend has fired
  filename = SparkMD5.hash(b.result);
};
// ....some code
chrome.downloads.download({url: url, filename: filename + '.png', saveAs: false});
The file is saved correctly, but the MD5 hash that I get in code via SparkMD5 is different from the one I see in Windows after the file is saved. I cannot understand why. I've experimented a bit with different approaches to saving (XMLHttpRequest directly, etc.), but no luck yet. Probably I misunderstand some basic concept, as I am a bit of a newbie to web programming.
I have also saved files via chrome.pageCapture.saveAsMHTML with the use of FileReader, and in that case the MD5 hashes are equal.
What is wrong, and is there a way to get an equal MD5 for the filename and the final file while saving a .png from a Chrome extension?
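A likely culprit: SparkMD5.hash(b.result) hashes the base64 data-URL string, not the PNG bytes that end up on disk, so the two digests will never match. A minimal sketch that hashes the raw bytes instead, using spark-md5's incremental ArrayBuffer API:
// Hash the same bytes that are written to disk, not their base64 encoding.
var spark = new SparkMD5.ArrayBuffer();
spark.append(arraybuffer);   // the ArrayBuffer built from the canvas data above
var filename = spark.end();  // hex MD5 digest of the raw PNG bytes
chrome.downloads.download({url: url, filename: filename + '.png', saveAs: false});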
I have a Rails app on Heroku (Cedar). It has a page where I render the canvas data into an image using the toDataURL() method. I'm trying to upload the returned base64 image data string directly to S3 using JavaScript (bypassing the server side). The problem is that this isn't a file; how do I upload the base64-encoded data directly to S3 and save it as a file there?
I have found a way to do this after a lot of searching and looking at different tutorials.
You have to convert the data URI to a blob and then upload that file to S3 using CORS; if you are working with multiple files, I have separate XHR requests for each.
I found this function, which turns the data URI into a blob that can then be uploaded to S3 directly using CORS (Convert Data URI to Blob):
function dataURItoBlob(dataURI) {
  var binary = atob(dataURI.split(',')[1]);
  var array = [];
  for (var i = 0; i < binary.length; i++) {
    array.push(binary.charCodeAt(i));
  }
  return new Blob([new Uint8Array(array)], {type: 'image/jpeg'});
}
Here is a great tutorial on uploading directly to S3; you will need to customise the code to allow for the blob instead of files.
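For reference, a minimal sketch of the upload step itself, assuming you already have a CORS-enabled presigned S3 URL (signedS3Url below is a placeholder, not something from the tutorial):
// Sketch: PUT the blob straight to S3 over CORS.
// signedS3Url is a hypothetical presigned URL generated server-side.
var blob = dataURItoBlob(canvas.toDataURL('image/jpeg'));
var xhr = new XMLHttpRequest();
xhr.open('PUT', signedS3Url, true);
xhr.setRequestHeader('Content-Type', 'image/jpeg');
xhr.onload = function() {
  if (xhr.status === 200) console.log('upload complete');
};
xhr.send(blob);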
Jamcoope's answer is very good; however, the Blob constructor is not supported by all browsers, most notably Android 4.1 and Android 4.3. There are Blob polyfills, but xhr.send(...) will not work with the polyfill. The best bet is something like this:
var u = dataURI.split(',')[1],
    binary = atob(u),
    array = [];
for (var i = 0; i < binary.length; i++) {
  array.push(binary.charCodeAt(i));
}
var typedArray = new Uint8Array(array); // typed array constructors require `new`
// now typedArray.buffer can be passed to xhr.send
If anyone cares: here is the CoffeeScript version of the function given above!
convertToBlob = (base64) ->
  binary = atob base64.split(',')[1]
  array = []
  for i in [0...binary.length]
    array.push binary.charCodeAt i
  new Blob [new Uint8Array array], {type: 'image/jpeg'}
Not sure if the OP has already solved this, but I'm working on a very similar feature. In doing a little research, I came across these articles, which might be helpful:
http://blog.danguer.com/2011/10/25/upload-s3-files-directly-with-ajax/
http://www.tweetegy.com/2012/01/save-an-image-file-directly-to-s3-from-a-web-browser-using-html5-and-backbone-js/
I'm making an export function for an HTML5 game of mine, and my current saving method is a crude serialization of game data followed by:
// this is JavaScript
var gameData = "abc"; // this is actually a HUGE string of over 2MB
try {
  document.location = "data:text/octet-stream," + encodeURIComponent(JSON.stringify(gameData));
} catch (e) {
  console.log(e);
}
From: Using HTML5/Javascript to generate and save a file
I don't mind the fact that I can't use it for big strings, but I'd like it to generate a warning informing the user that this method doesn't work; unfortunately, Chrome (16) crashes without catching that exception.
Is there a better way to implement this kind of export? The important thing for me is that it works locally. The File API would be a better solution, but it doesn't work locally.
AFAIK this is not possible client-side, but a 1.99MB file can be saved this way in Chrome, so maybe you should try to compress/optimize your game data a little. One way to do that is to use JSZip.
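For example, a minimal sketch with a modern JSZip (3.x API; the "save.json" and "save.zip" names are arbitrary):
// Sketch: DEFLATE-compress the serialized game data before exporting it.
var zip = new JSZip();
zip.file("save.json", JSON.stringify(gameData));
zip.generateAsync({type: "blob", compression: "DEFLATE"}).then(function(blob) {
  var a = document.createElement("a");
  a.href = URL.createObjectURL(blob);
  a.download = "save.zip"; // hypothetical filename
  a.click();
});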
To check whether the current browser is Google Chrome, so you can warn that the method doesn't work with long strings there, you can use something like this:
if (gameData.length > 1999999 && window.chrome) {
  alert("Sorry, this export method does not work in Google Chrome");
  return;
}
I assume that the JSON.stringify call is working, but that the document.location assignment fails because the URI is too big.
What you can do is convert your URI to a blob and then call
URL.createObjectURL(blob)
This will create a URL that points to an internal object (internal to the browser), and it will work as expected.
Here is a method to convert a data URI to a blob:
function dataURItoBlob(dataURI) {
  // convert base64 to raw binary data held in a string
  // doesn't handle URLEncoded DataURIs
  var byteString;
  if (dataURI.split(',')[0].indexOf('base64') >= 0)
    byteString = atob(dataURI.split(',')[1]);
  else
    byteString = unescape(dataURI.split(',')[1]);
  // separate out the mime component
  var mimeString = dataURI.split(',')[0].split(':')[1].split(';')[0];
  // write the bytes of the string to an ArrayBuffer
  var ab = new ArrayBuffer(byteString.length);
  var ia = new Uint8Array(ab);
  for (var i = 0; i < byteString.length; i++) {
    ia[i] = byteString.charCodeAt(i);
  }
  // write the ArrayBuffer to a blob, and you're done
  return new Blob([ab], {type: mimeString});
}
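Putting it together for the export case above, a sketch (the anchor element and filename below are illustrative, not from the original answer):
// Sketch: export the game data through a short blob URL instead of a giant data URI.
var dataURI = "data:text/octet-stream," + encodeURIComponent(JSON.stringify(gameData));
var blob = dataURItoBlob(dataURI);
var a = document.createElement("a");
a.href = URL.createObjectURL(blob); // a short internal URL, so no length limit
a.download = "gamedata.txt";        // hypothetical filename
a.click();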