IE11 doesn't work well with Blob constructor & javascript array - javascript

I've been using the HTML5 Blob to consolidate a bunch of data, including files and strings. Since the files and strings to be sent are not known ahead of time, and I need to pack all the data together in my JS code and send it immediately, I collect the available data in a JavaScript array and then pass that array to the Blob constructor. It works fine in Chrome and Firefox, but throws a JavaScript error in IE11.
Unhandled exception at line 161, column 9 in ##$%.js
0x800a139e - JavaScript Runtime Error: InvalidStateError
My code is as follows:
var blobPackage_array = [];
if (userType != null)
    blobPackage_array.push(userType);
if (userInfo != null)
    blobPackage_array.push(userInfo);
for (var i = 0; i < fileList.length; i++) {
    blobPackage_array.push(fileList[i]);
}
var blobPackage = new Blob(blobPackage_array); // throws a JavaScript runtime error in IE11
I previously suspected that IE doesn't support Blob, so I tested this:
var blobPackage = new Blob(["test", fileList[0]]);
It worked fine, with no error. My best guess is that IE doesn't recognise blobPackage_array as a valid argument to the Blob constructor. But Blob doesn't have an append method, and I can't know in advance how many files need to be included, which means I can't construct the Blob in a single fixed expression.
Has anyone encountered this? Is there anything I can use to work around it? I'd appreciate any suggestion.
Update! For some reason, I can not use FormData instead; it has to be a Blob...
Can anybody help me with this?
Update again! Thanks to your kind replies, there has been some progress. I checked MSDN; the Blob constructor example there looks like this: var blobObject = new Blob([new Uint8Array(array)], { type: 'image/png' });. I tried to construct a Uint8Array from blobPackage_array with var uint8array = new Uint8Array(blobPackage_array);, but data is lost in that conversion. However, var blobPackage = new Blob([uint8array]); does work, without errors. So I just need to fix the conversion problem.

I figured this out. I'm such an idiot.. IE doesn't recognize my original blobPackage_array as a valid argument because of the variables I push into it:
if (userType != null)
    blobPackage_array.push(userType);
I just need to convert userType to a string before pushing it:
if (userType != null)
    blobPackage_array.push(new String(userType));
So, don't bother converting all the data to Uint8Array...
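The reason seems to be that IE11 only accepts Blob parts that are Blob, ArrayBuffer, typed-array, or string values; anything else in the array (a number, boolean, plain object, etc.) makes the constructor throw InvalidStateError. A minimal sketch of a general-purpose guard, assuming every non-binary entry can safely be serialized as a string (the toBlobPart helper is illustrative, not from the original post):
function toBlobPart(value) {
    // Blobs (including Files), ArrayBuffers and typed arrays pass through untouched.
    if (value instanceof Blob || value instanceof ArrayBuffer || ArrayBuffer.isView(value)) {
        return value;
    }
    // Everything else is coerced to a string, which IE11's Blob constructor accepts.
    return String(value);
}
var blobPackage = new Blob(blobPackage_array.map(toBlobPart));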

Related

Converting a Blob object into a File, for Ms Edge

I have a Blob object, which is an image, and I am trying to convert it into a File object, but it throws errors in MS Edge version 41. I am using FormData in the first two attempts.
Attempt 1
fd.set('file', blobObj, fileName);
return (fd.get('file'));
This resulted in an error
object doesn't support this property or method 'set'
Attempt 2
I replaced set with append and then I got this
object doesn't support this property or method 'get'
Attempt 3
I replaced FormData entirely with new logic that looked like this:
let fileObject = new File([u8arr], fileName, { type: mime });
and I got an error saying
object doesn't support this action
Is there any other method that can be used? Can I directly use blob as a file?
AFAIK, your third approach should work.
Try hard-coding the MIME type to "image/jpeg" / "image/png", include the last-modified date, and then verify:
var fileInstance = new File([blob], "FileName",{type:"image/jpeg", lastModified:new Date()})
If you are displaying it via JavaScript, you can use something like this:
var URL = window.URL || window.webkitURL;
var url_instance = URL.createObjectURL(blob);
var image_source = new Image();
image_source.src = url_instance;
document.body.appendChild(image_source);
A File object is a specific kind of Blob; it's just missing two properties: lastModifiedDate and name (the file name).
So, you could convert the blob object to a file object using the following code:
var blobToFile = function blobToFile(theBlob, fileName) {
    // A Blob() is almost a File() - it's just missing the two properties below, which we add
    theBlob.lastModifiedDate = new Date();
    theBlob.name = fileName;
    return theBlob;
};
var file = blobToFile(blob, "test.png");
For more detailed information about using the above code, please check this sample.
Besides, please check the FormData method browser compatibility; from it we can see that most of the methods require Microsoft Edge 44+ (EdgeHTML 18+; for more detail, please check this article).
So, if you want to use the FormData set or get methods, please try to upgrade Windows (Microsoft Edge is part of the operating system and can't be updated separately; it receives updates through Windows Update, like the rest of the operating system). Otherwise, you could use a plain JavaScript object to store the blob or file object.
Detailed update steps: select Start > Settings > Update & Security > Windows Update, then select Check for updates and install any available updates.
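If set and get are the only missing pieces, a plain object keyed by field name can stand in while you build the payload, since append has long been supported in Edge. A minimal sketch under that assumption (the field names are illustrative):
// Stage values in a plain object so they can be read back (like get) or replaced (like set).
var pending = {};
pending.file = blobToFile(blob, "test.png");

// At send time, copy the entries into a FormData using the widely supported append().
var fd = new FormData();
Object.keys(pending).forEach(function (key) {
    fd.append(key, pending[key]);
});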

how to correctly convert pdf file to base64 in browser?

I have three failing versions of the following code in a Chrome extension, which attempts to intercept a click on a link pointing to a PDF file, fetch that file, convert it to base64, and then log it. But I'm afraid I don't really know anything about binary formats and encodings, so I'm royally messing this up.
var links = document.getElementsByTagName("a");

function transform(blob) {
    return btoa(String.fromCharCode.apply(null, new Uint8Array(blob)));
}

function getlink(link) {
    var x = new XMLHttpRequest();
    x.open("GET", link, true);
    x.responseType = 'blob';
    x.onload = function (e) {
        console.log("Raw response:");
        console.log(x.response);
        console.log("Direct transformation:");
        console.log(btoa(x.response));
        console.log("Mysterious thing I got from SO:");
        console.log(transform(x.response));
        window.location.href = link;
    };
    x.onerror = function (e) {
        console.error(x.statusText);
    };
    x.send(null);
}

for (i = 0, len = links.length; i < len; i++) {
    var l = links[i];
    l.addEventListener("click", function (e) {
        e.preventDefault();
        e.stopPropagation();
        e.stopImmediatePropagation();
        getlink(this.href);
    }, false);
}
Version 1 doesn't set x.responseType or call transform. It was my original, naive implementation. It threw an error: "The string to be encoded contains characters outside of the Latin1 range."
After googling that error, I found this prior SO question, which suggests that when parsing an image:
The response type needs to be set to blob. So this code does that.
There's some weird line whose purpose I don't understand at all: String.fromCharCode.apply(null, new Uint8Array(blob)).
Because I know nothing about binary formats, I guessed, probably stupidly, that making a PDF base64 would be the same as making some random image format base64. So, in fine SO tradition, I copied code that I don't really understand. In stages.
Version 2 of the code just sets the response type to blob but doesn't try the second transformation. The code ran and logged something that looked like a base64 string, but one that is clearly incorrect. In its entirety, it logged:
W29iamVjdCBCbG9iXQ==
Which is just goofily wrong. It's obviously too short for a 46k PDF file, and a reference base64 encoding I created with Python from the command line was much, much longer, as one would expect.
Version 3 of the code then also applies the mysterious transformation using String.fromCharCode and all the rest, which I shoved into the transform function.
However, that doesn't log anything at all: a blank line appears in the console in its appropriate place. No errors, no nonsense output, just a blank line.
I know I'm getting the correct file from prior testing. Also, the call to log the raw response object produces Blob {size: 45587, type: "application/pdf"}, which is the correct filesize for the pdf I'm experimenting with, so the blob actually contains what it should when it gets into the browser.
I'm using, and only need to support, a current version of chrome.
Can someone tell me what I'm doing wrong?
Thanks!
If you only need to support modern browsers, you should also be able to use FileReader#readAsDataURL.
That would let you do something like this:
var reader = new FileReader();
reader.addEventListener("load", function () {
    console.log(reader.result);
}, false);
// The function accepts Blobs and Files
reader.readAsDataURL(x.response);
This logs a data URI, which will contain your base64 data.
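If you want only the raw base64 string rather than the full data URI, one common approach is to strip everything up to and including the first comma; a small sketch:
reader.addEventListener("load", function () {
    // reader.result looks like "data:application/pdf;base64,JVBERi0x..."
    var base64 = reader.result.slice(reader.result.indexOf(",") + 1);
    console.log(base64);
}, false);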
I think I've found my own solution. The response type needs to be arraybuffer not blob.
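That matches the symptoms above: with responseType set to blob, btoa(x.response) coerces the Blob to the string "[object Blob]" (whose base64 encoding is exactly W29iamVjdCBCbG9iXQ==), and new Uint8Array(x.response) on a Blob produces an empty array, hence version 3's blank line. A minimal sketch of the arraybuffer variant inside getlink, chunking the String.fromCharCode call so large files don't exceed the argument limit of apply (the 32K chunk size is an arbitrary choice):
function transform(buffer) {
    var bytes = new Uint8Array(buffer);
    var binary = "";
    var chunk = 0x8000; // 32K characters per fromCharCode call
    for (var i = 0; i < bytes.length; i += chunk) {
        binary += String.fromCharCode.apply(null, bytes.subarray(i, i + chunk));
    }
    return btoa(binary);
}

var x = new XMLHttpRequest();
x.open("GET", link, true);
x.responseType = 'arraybuffer';
x.onload = function () {
    console.log(transform(x.response)); // base64 of the raw PDF bytes
};
x.send(null);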

How to load a PDF into a blob so it can be uploaded?

I'm working on a testing framework that needs to pass files to the drop listener of a PLUpload instance. I need to create blob objects to pass inside a DataTransfer object of the sort generated by a drag/drop event. I have it working fine for text files and image files. I would like to add support for PDFs, but it seems that I can't get the encoding right after retrieving the response. The response is coming back as text because I'm using Sahi to retrieve it in order to avoid cross-domain issues.
In short: the string I'm receiving is UTF-8 encoded, so the content looks like what you see when you open a PDF in a text editor. I am wondering how to convert this back into the necessary format to create a blob, so that after the document gets uploaded everything looks okay.
What steps do I need to go through to convert the UTF-8 string into the proper blob object? (Yes, I am aware I could submit an XHR request and change the responseType property and maybe get closer; however, due to complications with the way Sahi operates, I won't explain here why I would prefer not to go that route.)
Also, I'm not familiar enough with this, but I have a hunch that I may be losing data by retrieving it as a string. If that's the case, I'll find another approach.
The existing code and the most recent approach I have tried are here:
var data = '%PDF-1.7%����115 0 obj<</Linearized 1/L ...'
var arr = [];
var utf8 = unescape(encodeURIComponent(data));
for (var i = 0; i < utf8.length; i++) {
arr.push(utf8.charCodeAt(i));
}
var file = new Blob(arr, {type: 'application/pdf'});
It looks like you were close. I just did this for a site which needed to read a PDF from another website and drop it into a fileuploader plugin. Here is what worked for me:
var url = "http://some-websites.com/Pdf/";
// You may not need this part if you have the PDF data locally already
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
    if (this.readyState == 4 && this.status == 200) {
        //console.log(this.response, typeof this.response);
        // now convert your Blob from the response into a File and give it a name
        var fileOfBlob = new File([this.response], 'your_file.pdf');

        // Now do something with the File
        // for the fileuploader (blueimp), just use the add method
        $('#fileupload').fileupload('add', {
            files: [fileOfBlob],
            fileInput: $(this)
        });
    }
};
xhr.open('GET', url);
xhr.responseType = 'blob';
xhr.send();
I found help on the XHR as blob here. Then this SO answer helped me with naming the File. You might be able to use the Blob by itself, but you won't be able to give it a name unless it's passed into a File.
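If the PDF bytes are already in hand as a "binary" string (as in the question, where Sahi returns the response as text), here is a hedged sketch of turning that string into a typed array and then a File; it assumes each character code is the original byte value, which only holds if the transport did not decode the bytes as UTF-8 (if it did, as the asker suspects, some bytes are already lost and the PDF will come out corrupt):
function binaryStringToFile(data, fileName) {
    var bytes = new Uint8Array(data.length);
    for (var i = 0; i < data.length; i++) {
        bytes[i] = data.charCodeAt(i) & 0xff;
    }
    // Note: the Blob/File constructors take an array of parts,
    // not an array of individual byte values.
    return new File([bytes], fileName, { type: 'application/pdf' });
}
var file = binaryStringToFile(data, 'your_file.pdf');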

Read Raw Data in with Mozilla Add-on

I'm trying to read and write raw data from files using Mozilla's add-on SDK. Currently I'm reading data with something like:
function readnsIFile(fileName, callback) {
    var nsiFile = new FileUtils.File(fileName);
    NetUtil.asyncFetch(nsiFile, function (inputStream, status) {
        var data = NetUtil.readInputStreamToString(inputStream, inputStream.available(), {charset: "UTF-8"});
        callback(data, status, nsiFile);
    });
}
This works for text files, but when I start messing with raw bytes outside of Unicode's normal range, it doesn't work. For example, if a file contains the byte 0xff, then that byte and anything past that byte isn't read at all. Is there any way to read (and write) raw data using the SDK?
You've specified an explicit charset in the options to NetUtil.readInputStreamToString.
When you omit the charset option, the data will be read as raw bytes. (Source)
function readnsIFile(fileName, callback) {
    var nsiFile = new FileUtils.File(fileName);
    NetUtil.asyncFetch(nsiFile, function (inputStream, status) {
        // Do not specify a charset at all!
        var data = NetUtil.readInputStreamToString(inputStream, inputStream.available());
        callback(data, status, nsiFile);
    });
}
The suggestion to use io/byte-streams is OK as well, but keep in mind that that SDK module is still marked experimental, and that using ByteReader via io/file as the example suggests is not a good idea because this would be sync I/O on the main thread.
I don't really see the upside, as you'd use NetUtil anyway.
Anyway, this should work:
const {ByteReader} = require("sdk/io/byte-streams");
function readnsIFile(fileName, callback) {
    var nsiFile = new FileUtils.File(fileName);
    NetUtil.asyncFetch(nsiFile, function (inputStream, status) {
        var reader = new ByteReader(inputStream);
        var data = reader.read(); // with no byte count, reads the remainder of the stream
        reader.close();
        callback(data, status, nsiFile);
    });
}
Also, please keep in mind that reading large files like this is problematic. Not only will the whole file be buffered in memory, obviously, but:
The file is read into a char (byte) array first, so there will be a temporary buffer in the stream of at least file.size length (via asyncFetch).
Both NetUtil.readInputStreamToString and ByteReader use another char (byte) array into which they read the result from the inputStream, but ByteReader does that in 32K chunks, while NetUtil.readInputStreamToString uses one big buffer of file.length.
The data is then copied into the resulting jschar/wchar_t (word) array, a.k.a. a JavaScript string, i.e. you need at least file.size * 2 bytes in memory.
E.g., reading a 1MB file would require more than fileSize * 4 = 4MB of memory (NetUtil.readInputStreamToString) and/or more than fileSize * 3 = 3MB of memory (ByteReader) during the read operation. After the operation, 2MB of that memory will still be alive, storing the resulting data in a JavaScript string.
Reading a 1MB file might be OK, but a 10MB file might already be problematic on mobile (Firefox for Android, Firefox OS) and a 100MB file would be problematic even on desktop.
You can also read the data directly into an ArrayBuffer (or Uint8Array), which has more efficient storage for byte arrays than a JavaScript string and avoids the temporary buffers of NetUtil.readInputStreamToString and/or ByteReader.
function readnsIFile(fileName, callback) {
    var nsiFile = new FileUtils.File(fileName);
    NetUtil.asyncFetch(nsiFile, function (inputStream, status) {
        var bs = Cc["@mozilla.org/binaryinputstream;1"].
                 createInstance(Ci.nsIBinaryInputStream);
        bs.setInputStream(inputStream);
        var len = inputStream.available();
        var data = new Uint8Array(len);
        bs.readArrayBuffer(len, data.buffer);
        bs.close();
        callback(data, status, nsiFile);
    });
}
PS: The MDN documentation might state something about "iso-8859-1" being the default if the charset option is omitted in the NetUtil.readInputStreamToString call, but the documentation is wrong. I'll fix it.

XMLHttpRequest: Browser support for sendAsBinary?

Is Firefox the only that supports the sendAsBinary method?
At the moment, I believe only FF3+ supports this, though there is a workaround for Chrome.
The links around http://code.google.com/p/chromium/issues/detail?id=35705 are very confusing, but I do not think there is any workaround on Chrome 8 for POST'ing binary data.
You can convert the data to base64 and upload that, but then the server has to be able to decode it.
Chrome 9 (currently in Dev channel, not even Beta yet) lets you do XmlHttpRequest.send(blob) where the blob's bytes are sent as-is (not converted to utf-8), so the non-standard XmlHttpRequest.sendAsBinary() is not necessary for binary file uploads.
You must create this blob from the "binary" string that is in evt.target.result after a successful FileReader.readAsBinaryString(). That requires using ArrayBuffer and Uint8Array, which are not available in Chrome 8.
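A minimal sketch of that approach for current browsers, assuming file comes from an <input type="file"> and that the Blob constructor is available (the original Chrome 9 workaround predates it and used BlobBuilder); the upload URL is illustrative:
var reader = new FileReader();
reader.onload = function (evt) {
    var binary = evt.target.result; // one character per byte
    var bytes = new Uint8Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i) & 0xff;
    }
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/upload", true); // illustrative endpoint
    xhr.send(new Blob([bytes]));       // the bytes are sent as-is, no UTF-8 re-encoding
};
reader.readAsBinaryString(file);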
As far as I know, yes, only Firefox supports it. It's not part of the W3C standard, so there's no guarantee that it'll ever be supported by any other browser.
I had the same error, but I'm also using Prototype.js. It seems to have a replacement for the map function, and it was throwing a TypeError for me: Object ..file data here.. has no method 'each'
So I used this replacement instead:
// fix sendAsBinary for Chrome
try {
    if (typeof XMLHttpRequest.prototype.sendAsBinary == 'undefined') {
        XMLHttpRequest.prototype.sendAsBinary = function (text) {
            var data = new ArrayBuffer(text.length);
            var ui8a = new Uint8Array(data, 0);
            for (var i = 0; i < text.length; i++) ui8a[i] = (text.charCodeAt(i) & 0xff);
            this.send(ui8a);
        };
    }
} catch (e) {}
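With the polyfill in place, the same calling code works in Firefox (native sendAsBinary) and Chrome (the shim above); a short usage sketch with an illustrative endpoint and binaryString value:
var xhr = new XMLHttpRequest();
xhr.open("POST", "/upload", true);
xhr.sendAsBinary(binaryString);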
The workaround for Chrome is explained at the following URL:
http://code.google.com/p/chromium/issues/detail?id=35705
