I am converting fonts from opentype.js to base64 using this method:
const buffer = glyphs.toArrayBuffer() // from opentype
let binary = []
const bytes = new Uint8Array(buffer)
const len = bytes.byteLength
for (let i = 0; i < len; i++) {
  binary[i] = String.fromCharCode(bytes[i])
}
return window.btoa(binary.join(''))
I take this base64 string and embed it as a css string:
src: url('data:text/plain;charset=utf-8;base64,BASE64DATA')
95% of fonts work in Chrome and 100% in Safari. Some fonts fail in Chrome with
Failed to decode downloaded font
If I encode the font using a more native approach, such as the CLI, it works fine. I have read that the browser method above leaves something to be desired for multi-byte characters, but it works fine in Safari and also decodes back to the TTF.
I have played around with a few variations of the encoder, MIME types, etc., to no avail.
Here's a gist of the offending base64.
How do I make this work in Chrome?
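For reference, here is a minimal sketch of an alternative way to build the data URL, letting FileReader do the base64 encoding instead of the manual character loop (the font/ttf MIME type and the @font-face usage are assumptions, not necessarily the right fix for the Chrome error):
const buffer = glyphs.toArrayBuffer() // from opentype
const blob = new Blob([buffer], { type: 'font/ttf' }) // assumed MIME type for a TTF
const reader = new FileReader()
reader.onload = () => {
  // reader.result is a complete "data:font/ttf;base64,..." URL
  const css = `@font-face { font-family: 'MyFont'; src: url('${reader.result}'); }`
  console.log(css)
}
reader.readAsDataURL(blob)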
Related
I'm having a really weird issue only on Google Chrome and Chromium.
The background is:
I upload files to my server using the multi-part upload method, meaning that I break the files into chunks of 10 MB and send each chunk to the server. This works flawlessly in all browsers with files of any size; the issue started when I needed to encrypt each chunk.
For encryption I use CryptoJS: before uploading a chunk, I encrypt it and get the resulting Blob to upload. This works fine in Chrome when I have to upload fewer than 50 chunks (50 blobs, around 500 MB in total); after that I get a POST http://(...) net::ERR_FILE_NOT_FOUND.
Weirdly, this works in every other browser, including Opera, which is basically Chrome nowadays; it fails only in Chrome and Chromium. I tested it in IE, Firefox, Edge, Safari, Opera, Chrome, and Chromium.
Below you can see how my code works so you can get an idea. This is not the real code I use in the app; rather, it's test code I wrote that yields the same result.
Instead of getting a slice (File.slice) of the File I'm going to upload as a chunk and encrypting it to get the blob, I'm going to generate a bogus blob with the size of my chunk. I put the setTimeout in to simulate the time it takes to encrypt a blob. As I said before, I get the same result as with my real code by doing this:
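For context, a real chunk would come from File.prototype.slice; a hypothetical getNextChunk (the helper commented out below) might look roughly like this, using the same currentPart and constants.chunkSize variables as the test code:
function getNextChunk() {
  // Slice the next chunkSize bytes out of the File being uploaded.
  var start = currentPart * constants.chunkSize;
  var end = Math.min(start + constants.chunkSize, file.size);
  return file.slice(start, end);
}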
function uploadNext(prevResponse) {
  if (currentPart == totalPartsFile)
    return;
  //var chunk = getNextChunk();
  var totalSize = file.size;
  setTimeout(function() {
    // Generate a bogus blob of the chunk size instead of encrypting a real slice
    var blob = new Blob([new ArrayBuffer(constants.chunkSize)], {
      type: 'application/octet-stream',
      name: file.name
    });
    console.log(blob);
    blob.encrypted = true;
    blob.key = encryptionKey;
    blob.mimeType = file.mimeType;
    blob.name = file.name;
    blob.originalFileSize = originalFileSize || file.size;
    uploadFile(objectId, currentPart, blob, totalSize, prevResponse, function(resp) {
      uploadNext(resp);
    });
  }, 1000);
}
The code above is where my blob is generated; below is the upload part:
function uploadFile(objectId, index, blob, totalSize, prevResponse, callback) {
  var format = "encrypted";
  var params = "?format=" + format + (format === "encrypted" ? "&encoding=base64" : "");
  var endPoint = constants.contentServiceUrl + resourceService.availableResources.addContents.link.split(':objectId').join(objectId) + params;
  var formData = new FormData();
  formData.append("totalFileSizeBytes", totalSize);
  formData.append("partIndex", index);
  formData.append("partByteOffset", previousOffset);
  formData.append("chunkSize", blob.size);
  formData.append("totalParts", totalPartsFile);
  formData.append("filename", blob.name);
  if (currentPart != 0) {
    formData.append("uploadId", prevResponse.uploadId);
    formData.append("bucket", prevResponse.bucket);
  }
  if (finalChunk) {
    for (var key in etags1) {
      formData.append("etags[" + key + "]", etags1[key]);
    }
  }
  formData.append("data", blob);
  previousOffset += blob.size;
  var request = {
    method: 'POST',
    url: endPoint,
    data: formData,
    headers: {
      'Content-Type': 'multipart/form-data'
    }
  };
  $http(request)
    .success(function(d) {
      _.extend(etags1, d.etags);
      console.log(d);
      callback(d);
    })
    .error(function(d) {
      console.log(d);
    });
}
Of course there are other supporting variables and code that I didn't put here, but this is enough to give an idea of what we're dealing with.
In this example I'm using AngularJS' $http module, but I've tried with pure XMLHttpRequest as well and I got the same result.
As I said, I only get the POST http://(...) net::ERR_FILE_NOT_FOUND error with files bigger than 499 MB (50+ chunks), and only in Chrome.
I'm posting this here because I've been looking for a solution but couldn't find anything related to this problem. The closest thing I found on the internet was this issue in the Chromium project's issue tracker:
https://code.google.com/p/chromium/issues/detail?id=375297
At this point I really don't know what to do anymore, so I'd like to know if anyone has had a similar problem in the past and managed to fix it.
Thank you for the answers in advance.
Chrome can only allocate about 500 MB for any blob, so if you try to allocate 500 MB + 1 byte, it will simply drop that byte. To solve this you will have to read the file in chunks of 499 MB and then merge the file on the server.
Or you can try something like JSZip and then upload the zip; that worked for me.
var zip = new JSZip();
zip.file("file1", "content1");
zip.file("file2", "content2");
Digging into the Chromium source files, I found the blob limits.
ChromeOS:
RAM - 20%
Disk - 50% Note: The disk is the user partition, so the operating system can still function if this is full.
Android:
RAM - 1%
Disk - 6%
Desktop:
RAM - 20%, or 2 GB if x64.
Disk - 10%
Chromium repo link: https://cs.chromium.org/chromium/src/storage/browser/blob/blob_memory_controller.cc?l=63
It seems to be a Firebug plugin issue; try disabling it. That worked for me.
In Firefox I had a problem when loading a file in chunks; after I disabled the plugins, the memory leak no longer appeared. Maybe that will help you.
In Chrome everything works fine.
We transform HTML to PDF in the backend (PHP) using dompdf. The generated output from dompdf is Base64 encoded with
$output = $dompdf->output();
base64_encode($output);
This Base64 encoded content is saved as a file on the server. When we decode this file content like this:
cat /tmp/55acbaa9600f4 | base64 -D > test.pdf
we get a proper PDF file.
But when we transfer the Base64 content to the client as a string value inside a JSON object (the server provides a RESTful API...):
{
"file_data": "...the base64 string..."
}
and then decode it with atob() and create a Blob object in order to download the file, the PDF is always "empty"/broken.
$scope.downloadFileData = function(doc) {
  DocumentService.getFileData(doc).then(function(data) {
    var decodedFileData = atob(data.file_data);
    var file = new Blob([decodedFileData], { type: doc.file_type });
    saveAs(file, doc.title + '.' + doc.extension);
  });
};
When we log the decoded content, it seems that the content is "broken", because several symbols are not the same as when we decode the content on the server using base64 -D.
When we encode/decode the content of simple text/plain documents, it works as expected, but binary (non-ASCII) formats do not.
We have searched the web for many hours, but didn't find a solution for this that works for us. Does anyone have the same problem and can provide us with a working solution? Thanks in advance!
This is an example of the Base64-encoded content of a PDF document as produced on the server:
JVBERi0xLjMKMSAwIG9iago8PCAvVHlwZSAvQ2F0YWxvZwovT3V0bGluZXMgMiAwIFIKL1BhZ2VzIDMgMCBSID4+CmVuZG9iagoyIDAgb2JqCjw8IC9UeXBlIC9PdXRsaW5lcyAvQ291bnQgMCA+PgplbmRvYmoKMyAwIG9iago8PCAvVHlwZSAvUGFnZXMKL0tpZHMgWzYgMCBSCl0KL0NvdW50IDEKL1Jlc291cmNlcyA8PAovUHJvY1NldCA0IDAgUgovRm9udCA8PCAKL0YxIDggMCBSCj4+Cj4+Ci9NZWRpYUJveCBbMC4wMDAgMC4wMDAgNjEyLjAwMCA3OTIuMDAwXQogPj4KZW5kb2JqCjQgMCBvYmoKWy9QREYgL1RleHQgXQplbmRvYmoKNSAwIG9iago8PAovQ3JlYXRvciAoRE9NUERGKQovQ3JlYXRpb25EYXRlIChEOjIwMTUwNzIwMTMzMzIzKzAyJzAwJykKL01vZERhdGUgKEQ6MjAxNTA3MjAxMzMzMjMrMDInMDAnKQo+PgplbmRvYmoKNiAwIG9iago8PCAvVHlwZSAvUGFnZQovUGFyZW50IDMgMCBSCi9Db250ZW50cyA3IDAgUgo+PgplbmRvYmoKNyAwIG9iago8PCAvRmlsdGVyIC9GbGF0ZURlY29kZQovTGVuZ3RoIDY2ID4+CnN0cmVhbQp4nOMy0DMwMFBAJovSuZxCFIxN9AwMzRTMDS31DCxNFUJSFPTdDBWMgKIKIWkKCtEaIanFJZqxCiFeCq4hAO4PD0MKZW5kc3RyZWFtCmVuZG9iago4IDAgb2JqCjw8IC9UeXBlIC9Gb250Ci9TdWJ0eXBlIC9UeXBlMQovTmFtZSAvRjEKL0Jhc2VGb250IC9UaW1lcy1Cb2xkCi9FbmNvZGluZyAvV2luQW5zaUVuY29kaW5nCj4+CmVuZG9iagp4cmVmCjAgOQowMDAwMDAwMDAwIDY1NTM1IGYgCjAwMDAwMDAwMDggMDAwMDAgbiAKMDAwMDAwMDA3MyAwMDAwMCBuIAowMDAwMDAwMTE5IDAwMDAwIG4gCjAwMDAwMDAyNzMgMDAwMDAgbiAKMDAwMDAwMDMwMiAwMDAwMCBuIAowMDAwMDAwNDE2IDAwMDAwIG4gCjAwMDAwMDA0NzkgMDAwMDAgbiAKMDAwMDAwMDYxNiAwMDAwMCBuIAp0cmFpbGVyCjw8Ci9TaXplIDkKL1Jvb3QgMSAwIFIKL0luZm8gNSAwIFIKPj4Kc3RhcnR4cmVmCjcyNQolJUVPRgo=
If you atob() this, you don't get the same result as on the console with base64 -D. Why?
Your issue looks identical to the one I needed to solve recently.
Here is what worked for me:
const binaryImg = atob(base64String);
const length = binaryImg.length;
const arrayBuffer = new ArrayBuffer(length);
const uintArray = new Uint8Array(arrayBuffer);
for (let i = 0; i < length; i++) {
  uintArray[i] = binaryImg.charCodeAt(i);
}
const fileBlob = new Blob([uintArray], { type: 'application/pdf' });
saveAs(fileBlob, 'filename.pdf');
It seems that just doing a base64 decode is not enough... you need to put the result into a Uint8Array. Otherwise the binary string gets re-encoded as UTF-8 when it is passed to the Blob constructor, and the PDF pages appear blank.
I found this solution here:
https://github.com/sayanee/angularjs-pdf/issues/110#issuecomment-579988190
You can use btoa() and atob(); they work in most browsers.
For example:
var enc = btoa("this is some text");
alert(enc);
alert(atob(enc));
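Note that btoa()/atob() only handle Latin-1 strings and throw on characters outside that range; a rough sketch of a Unicode-safe encoder (assuming TextEncoder is available, i.e. a reasonably modern browser):
// Encode an arbitrary Unicode string as base64 by going through its UTF-8 bytes.
function base64EncodeUnicode(str) {
  var bytes = new TextEncoder().encode(str); // UTF-8 bytes
  var binary = '';
  for (var i = 0; i < bytes.length; i++) {
    binary += String.fromCharCode(bytes[i]);
  }
  return btoa(binary);
}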
JSON and base64 are completely independent.
Here's a JSON stringifier/parser (and direct GitHub link).
Here's a base64 Q&A. Here's another one.
I'm trying to find a clean and consistent approach for downloading the contents of a canvas as an image file.
For Chrome or Firefox, I can do the following
// Convert the canvas to a base64 string
var image = canvas.toDataURL();
image = image.replace(/^data:image\/[a-z]+;/, 'data:application/octet-stream;');
// use the base64 string as the 'href' attribute
var download = $('<a download="' + filename + '" target="_blank" href="' + image + '">');
Since the above doesn't work in IE, I'm trying to build a Blob with the 'window.navigator.msSaveOrOpenBlob' function.
var image = canvas.toDataURL();
image = image.replace(/^data:image\/[a-z]+;base64,/, '');
// Convert from base64 to an ArrayBuffer
var byteString = atob(image);
var buffer = new ArrayBuffer(byteString.length);
var intArray = new Uint8Array(buffer);
for (var i = 0; i < byteString.length; i++) {
  intArray[i] = byteString.charCodeAt(i);
}
// Use the native blob constructor
blob = new Blob([buffer], {type: "image/png"});
// Download this blob
window.navigator.msSaveOrOpenBlob(blob, "test.png");
In the example above, do I really have to convert the canvas to base64, the base64 to an ArrayBuffer, and finally the ArrayBuffer to a Blob? (Firefox has a canvas.toBlob function, but again that's not available in IE.) Also, this only works in IE10, not IE9.
Does anyone have any suggestions for a solution that will work in Chrome, Safari, Firefox, IE9, and IE10?
Yes you have to do all the extra footwork.
toBlob support in Chrome and FF is very recent, even though it's been in the spec for several years. It's recent enough that it wasn't even on the radar when MS made IE9 and IE10.
Unfortunately MS has no intention of changing the IE9 or IE10 canvas implementations, which means they will exist as they are for all eternity, with bugs and missing pieces. (IE10's canvas has several bugs that IE9 does not, which are fixed again in IE11, like this gem. It's a real shambles.)
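For what it's worth, here is a rough sketch of how the footwork can be consolidated into one helper (saveCanvasAsPng and downloadBlob are illustrative names; the toBlob branch only applies to browsers that ship it, and the anchor fallback needs download-attribute support):
function saveCanvasAsPng(canvas, filename) {
  // Prefer the native toBlob where it exists (recent Chrome/Firefox).
  if (canvas.toBlob) {
    canvas.toBlob(function (blob) { downloadBlob(blob, filename); }, 'image/png');
    return;
  }
  // Fallback: data URL -> binary string -> typed array -> Blob.
  var byteString = atob(canvas.toDataURL('image/png').split(',')[1]);
  var intArray = new Uint8Array(byteString.length);
  for (var i = 0; i < byteString.length; i++) {
    intArray[i] = byteString.charCodeAt(i);
  }
  downloadBlob(new Blob([intArray], { type: 'image/png' }), filename);
}

function downloadBlob(blob, filename) {
  if (window.navigator.msSaveOrOpenBlob) {
    // IE10+ path
    window.navigator.msSaveOrOpenBlob(blob, filename);
  } else {
    // Browsers that support the download attribute on anchors
    var a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = filename;
    document.body.appendChild(a);
    a.click();
    document.body.removeChild(a);
    URL.revokeObjectURL(a.href);
  }
}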
I'm trying to find a cross browser way to store data locally in HTML5. I have generated a chunk of data in a Blob (see MDN). Now I want to move this Blob to the actual filesystem and save it locally. I've found the following ways to achieve this;
1. Use the <a download> attribute. This works only in Chrome currently.
2. Microsoft introduces a saveAs function in IE 10 which will achieve this.
3. Open the Blob URL in the browser and save it that way.
None of these seems to work in Safari, though. While (1) works in Chrome, (2) in IE, and (3) in Firefox, none of them works in Safari 6. The download attribute is not yet implemented, and when trying to open a blob using the URL, Safari complains that URLs starting with blob: are not valid URLs.
There is a good script called FileSaver.js that encapsulates (1) and (3), but it does not work with the latest Safari version.
Is there a way to save Blobs locally in a cross browser fashion?
FileSaver.js has been updated recently, and it works on IE10, Safari 5+, etc.
See: https://github.com/eligrey/FileSaver.js/#supported-browsers
The file name sucks, but this works for me in Safari 8:
window.open('data:attachment/csv;charset=utf-8,' + encodeURI(csvString));
UPDATE: No longer working in Safari 9.x
The only solution that I have come up with is making a data: url instead. For me this looks like:
window.open("data:image/svg+xml," + encodeURIComponent(currentSVGString));
Here data is the ArrayBuffer coming from the response of an HTTP REST call in JS. This works in Safari; however, there may be an issue with the file name, as it comes out as "untitled".
var binary = '';
var bytes = new Uint8Array(data);
var len = bytes.byteLength;
for (var i = 0; i < len; i++) {
  binary += String.fromCharCode(bytes[i]);
}
var base64 = 'data:' + contentType + ';base64,' + window.btoa(binary);
var uri = encodeURI(base64);
var anchor = document.createElement('a');
document.body.appendChild(anchor);
anchor.href = uri;
anchor.download = fileName;
anchor.click();
document.body.removeChild(anchor);
Have you read this article? http://updates.html5rocks.com/2012/06/Don-t-Build-Blobs-Construct-Them
According to http://caniuse.com/#search=blob, blobs can be used in Safari.
You could construct a servlet which delivers the data via a standard http:// URL, so you can avoid using a blob: URL. Just make a request to that URL and build your blob.
Afterwards you can save it in your filesystem or local storage.
The download attribute has been supported since ~Safari 10.1, so currently this is the way to go.
This is the only thing that worked for me on Safari.
var newWindow = window.open();
const blobPDF = await renderMapPDF(); // Your async stuff goes here
if (!newWindow) throw new Error('Window could not be opened.');
newWindow.location = URL.createObjectURL(blobPDF);
I have a rails app on Heroku (cedar env). It has a page where I render the canvas data into an image using the toDataURL() method. I'm trying to upload the returned base64 image data string directly to S3 using JavaScript (bypassing the server side). The problem is that this isn't a file: how do I upload the base64-encoded data directly to S3 and save it as a file there?
I have found a way to do this after a lot of searching and looking at different tutorials.
You have to convert the data URI to a blob and then upload that blob to S3 using CORS; when working with multiple files, I use a separate XHR request for each.
I found this function, which turns your data URI into a blob that can then be uploaded to S3 directly using CORS (Convert Data URI to Blob):
function dataURItoBlob(dataURI) {
  var binary = atob(dataURI.split(',')[1]);
  var array = [];
  for (var i = 0; i < binary.length; i++) {
    array.push(binary.charCodeAt(i));
  }
  return new Blob([new Uint8Array(array)], {type: 'image/jpeg'});
}
Here is a great tutorial on uploading directly to S3; you will need to customise the code to allow for the blob instead of files.
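As a rough sketch of that customisation (the bucket URL and the signed policy fields below are placeholders that depend on your own S3 CORS/policy setup), the blob simply takes the place of a File in the FormData:
var blob = dataURItoBlob(canvas.toDataURL('image/jpeg'));
var formData = new FormData();
// ...append whatever signed policy fields your bucket requires (key, policy, signature, ...)
formData.append('file', blob, 'canvas.jpg'); // the blob is sent as if it were a file
var xhr = new XMLHttpRequest();
xhr.open('POST', 'https://your-bucket.s3.amazonaws.com/', true);
xhr.onload = function () { console.log('S3 responded with status', xhr.status); };
xhr.send(formData);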
Jamcoope's answer is very good; however, the Blob constructor is not supported by all browsers, most notably Android 4.1 and Android 4.3. There are Blob polyfills, but xhr.send(...) will not work with the polyfill. The best bet is something like this:
var u = dataURI.split(',')[1],
    binary = atob(u),
    array = [];
for (var i = 0; i < binary.length; i++) {
  array.push(binary.charCodeAt(i));
}
var typedArray = new Uint8Array(array);
// now typedArray.buffer can be passed to xhr.send
If anyone cares, here is the CoffeeScript version of the function given above!
convertToBlob = (base64) ->
  binary = atob base64.split(',')[1]
  array = []
  for i in [0...binary.length]
    array.push binary.charCodeAt i
  new Blob [new Uint8Array array], {type: 'image/jpeg'}
Not sure if OP has already solved this, but I'm working on a very similar feature. In doing a little research, I came across these articles that might be helpful.
http://blog.danguer.com/2011/10/25/upload-s3-files-directly-with-ajax/
http://www.tweetegy.com/2012/01/save-an-image-file-directly-to-s3-from-a-web-browser-using-html5-and-backbone-js/