I have a problem: my site converts recorded audio to base64 and sends it to my server, which decodes the base64 back into an audio file and works with it.
But when the site is used from an iPhone, only about one second of the recording remains on the server after the conversion. On every other device and operating system the audio arrives and is processed normally. How can this be fixed?
The Python code on the server that does the conversion:
...
data = json.loads(await request.body())
# Decode the base64 payload and write it to disk as a .wav file
with open(f'{UPLOAD_DIRECTORY}/{ID_file}.wav', 'wb') as file:
    file.write(base64.b64decode(data['audio']))
...
The JavaScript code on the site that produces the base64:
...
var reader = new window.FileReader();
reader.readAsDataURL(audioBlob);
reader.onloadend = function() {
    // reader.result is a data URL: "data:<mime>;base64,<payload>"
    base64 = reader.result;
    // Keep only the base64 payload after the comma
    base64 = base64.split(',')[1];
    ...
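For context, here is a minimal sketch of how such a payload might be posted to the server as JSON, matching the data['audio'] field the Python side reads; the /upload endpoint is an assumption, since the real route is not shown:

reader.onloadend = function() {
    // Strip the "data:<mime>;base64," prefix and keep only the payload
    var base64 = reader.result.split(',')[1];
    // '/upload' is a hypothetical endpoint; substitute the real route
    fetch('/upload', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ audio: base64 })
    });
};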
I have a question about the File API and uploading files in JavaScript, and how I should approach this.
I have already built a file uploader that was quite simple: it took the files from an input and made a request to the server, and the server then handled the files and saved a copy in an uploads directory.
However, I am now trying to give people the option to preview a file before uploading it. So I took advantage of the File API, specifically new FileReader() and readAsDataURL().
The file object has a list of properties such as .size and .lastModifiedDate, and I added the readAsDataURL() output to my file object as a property for easy access in my Angular ng-repeat().
My question is: it occurred to me as I was doing this that I could store the data URL in a database rather than upload the actual file. I was unsure whether modifying the File object directly by adding its data URL as a property would affect its transfer.
What is the best practice? Is it better to upload a file or can you just store the dataurl and then output that, since that is essentially the file itself? Should I not modify the file object directly?
Thank you.
Edit: I should also note that this is a project for a customer who wants it to be hard for users to simply take uploaded content from the application, save it, and redistribute it. Would saving the files as URLs in a database mitigate right-click-save-as behavior, or not really?
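For reference, a minimal sketch of the preview approach described above, assuming a plain file input and using an illustrative previewUrl property to hold the data URL for display:

document.querySelector('input[type="file"]').addEventListener('change', function (e) {
    Array.from(e.target.files).forEach(function (file) {
        var reader = new FileReader();
        reader.onload = function () {
            // Attach the data URL to the File object purely for display;
            // this does not alter the file's binary content when it is uploaded later.
            file.previewUrl = reader.result;
        };
        reader.readAsDataURL(file);
    });
});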
There is more than one way to preview a file. The first is a data URL via FileReader, as you mention, but there is also URL.createObjectURL(), which is faster.
Encoding to and decoding from base64 takes longer: it needs more computation and more CPU/memory than working with the binary data directly.
I can demonstrate this below:
var url = 'https://upload.wikimedia.org/wikipedia/commons/c/cc/ESC_large_ISS022_ISS022-E-11387-edit_01.JPG'

fetch(url).then(res => res.blob()).then(blob => {
  // Simulates a file as if you were to upload it through a file input and listen for onchange
  var files = [blob]
  var img = new Image
  var t = performance.now()
  var fr = new FileReader

  img.onload = () => {
    // show it...
    // $('body').append(img)
    var ms = performance.now() - t
    document.body.innerHTML = `it took ${ms.toFixed(0)}ms to load the image with FileReader<br>`

    // Now create an object URL instead of using base64, which takes time to
    // 1) encode the blob to base64
    // 2) decode it back again from base64 to binary
    var t2 = performance.now()
    var img2 = new Image

    img2.onload = () => {
      // show it...
      // $('body').append(img)
      var ms2 = performance.now() - t2
      document.body.innerHTML += `it took ${ms2.toFixed(0)}ms to load the image with URL.createObjectURL<br><br>`
      document.body.innerHTML += `URL.createObjectURL was ${(ms - ms2).toFixed(0)}ms faster`
    }

    img2.src = URL.createObjectURL(files[0])
  }

  fr.onload = () => (img.src = fr.result)
  fr.readAsDataURL(files[0])
})
The base64 string will also be roughly a third larger than the binary, since every 3 bytes become 4 characters. For mobile devices you would want to save that bandwidth and battery.
But then there is also the latency of doing an extra request, and that is where HTTP/2 comes to the rescue.
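One caveat not shown in the snippet above: an object URL keeps a reference to the underlying blob alive, so it is good practice to release it once the image has loaded, roughly like this:

var objectUrl = URL.createObjectURL(file)
img.onload = () => URL.revokeObjectURL(objectUrl)
img.src = objectUrl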
I have a blob variable which represents an audio file. When this code is executed, a valid data URL string is logged to the console, but it is very small and works only in Chrome; the browser probably stores it somewhere. Is it possible to convert it into a data URL string that contains all the audio data? Also, everything works properly on other machines.
var reader = new FileReader();
reader.readAsDataURL(blob);
reader.onloadend = function() {
    base64data = reader.result;
    console.log(base64data);
}
Thanks.
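As a side note, a minimal sketch of the same conversion wrapped in a Promise, which makes it easier to work with the complete data URL once it is ready (assuming blob is a valid Blob):

function blobToDataURL(blob) {
    return new Promise(function (resolve, reject) {
        var reader = new FileReader();
        reader.onloadend = function () { resolve(reader.result); };
        reader.onerror = reject;
        reader.readAsDataURL(blob);
    });
}

blobToDataURL(blob).then(function (dataUrl) {
    console.log(dataUrl.length, 'characters');
});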
We transform HTML to PDF in the backend (PHP) using dompdf. The generated output from dompdf is Base64 encoded with
$output = $dompdf->output();
base64_encode($output);
This Base64 encoded content is saved as a file on the server. When we decode this file content like this:
cat /tmp/55acbaa9600f4 | base64 -D > test.pdf
we get a proper PDF file.
But when we transfer the Base64 content to the client as a string value inside a JSON object (the server provides a RESTful API...):
{
"file_data": "...the base64 string..."
}
and then decode it with atob() and create a Blob object to download the file later on, the resulting PDF is always "empty"/broken.
$scope.downloadFileData = function(doc) {
    DocumentService.getFileData(doc).then(function(data) {
        var decodedFileData = atob(data.file_data);
        var file = new Blob([decodedFileData], { type: doc.file_type });
        saveAs(file, doc.title + '.' + doc.extension);
    });
};
When we log the decoded content, it seems that the content is "broken", because several symbols are not the same as when we decode the content on the server using base64 -D.
When we encode/decode the content of simple text/plain documents, it works as expected. But binary (or non-ASCII) formats do not.
We have searched the web for many hours, but didn't find a solution for this that works for us. Does anyone have the same problem and can provide us with a working solution? Thanks in advance!
Here is an example of the Base64-encoded content of a PDF document, as produced on the server:
JVBERi0xLjMKMSAwIG9iago8PCAvVHlwZSAvQ2F0YWxvZwovT3V0bGluZXMgMiAwIFIKL1BhZ2VzIDMgMCBSID4+CmVuZG9iagoyIDAgb2JqCjw8IC9UeXBlIC9PdXRsaW5lcyAvQ291bnQgMCA+PgplbmRvYmoKMyAwIG9iago8PCAvVHlwZSAvUGFnZXMKL0tpZHMgWzYgMCBSCl0KL0NvdW50IDEKL1Jlc291cmNlcyA8PAovUHJvY1NldCA0IDAgUgovRm9udCA8PCAKL0YxIDggMCBSCj4+Cj4+Ci9NZWRpYUJveCBbMC4wMDAgMC4wMDAgNjEyLjAwMCA3OTIuMDAwXQogPj4KZW5kb2JqCjQgMCBvYmoKWy9QREYgL1RleHQgXQplbmRvYmoKNSAwIG9iago8PAovQ3JlYXRvciAoRE9NUERGKQovQ3JlYXRpb25EYXRlIChEOjIwMTUwNzIwMTMzMzIzKzAyJzAwJykKL01vZERhdGUgKEQ6MjAxNTA3MjAxMzMzMjMrMDInMDAnKQo+PgplbmRvYmoKNiAwIG9iago8PCAvVHlwZSAvUGFnZQovUGFyZW50IDMgMCBSCi9Db250ZW50cyA3IDAgUgo+PgplbmRvYmoKNyAwIG9iago8PCAvRmlsdGVyIC9GbGF0ZURlY29kZQovTGVuZ3RoIDY2ID4+CnN0cmVhbQp4nOMy0DMwMFBAJovSuZxCFIxN9AwMzRTMDS31DCxNFUJSFPTdDBWMgKIKIWkKCtEaIanFJZqxCiFeCq4hAO4PD0MKZW5kc3RyZWFtCmVuZG9iago4IDAgb2JqCjw8IC9UeXBlIC9Gb250Ci9TdWJ0eXBlIC9UeXBlMQovTmFtZSAvRjEKL0Jhc2VGb250IC9UaW1lcy1Cb2xkCi9FbmNvZGluZyAvV2luQW5zaUVuY29kaW5nCj4+CmVuZG9iagp4cmVmCjAgOQowMDAwMDAwMDAwIDY1NTM1IGYgCjAwMDAwMDAwMDggMDAwMDAgbiAKMDAwMDAwMDA3MyAwMDAwMCBuIAowMDAwMDAwMTE5IDAwMDAwIG4gCjAwMDAwMDAyNzMgMDAwMDAgbiAKMDAwMDAwMDMwMiAwMDAwMCBuIAowMDAwMDAwNDE2IDAwMDAwIG4gCjAwMDAwMDA0NzkgMDAwMDAgbiAKMDAwMDAwMDYxNiAwMDAwMCBuIAp0cmFpbGVyCjw8Ci9TaXplIDkKL1Jvb3QgMSAwIFIKL0luZm8gNSAwIFIKPj4Kc3RhcnR4cmVmCjcyNQolJUVPRgo=
If you atob() this, you don't get the same result as on the console with base64 -D. Why?
Your issue looks identical to the one I needed to solve recently.
Here is what worked for me:
// Decode the base64 string into a binary string
const binaryImg = atob(base64String);

// Copy each character code into a byte array
const length = binaryImg.length;
const arrayBuffer = new ArrayBuffer(length);
const uintArray = new Uint8Array(arrayBuffer);
for (let i = 0; i < length; i++) {
    uintArray[i] = binaryImg.charCodeAt(i);
}

// Build the Blob from the raw bytes, not from the binary string
const fileBlob = new Blob([uintArray], { type: 'application/pdf' });
saveAs(fileBlob, 'filename.pdf');
It seems that a base64 decode alone is not enough: you need to put the result into a Uint8Array, otherwise the PDF pages appear blank. (Passing the decoded binary string straight to the Blob constructor re-encodes it as UTF-8, which corrupts any byte above 0x7F.)
I found this solution here:
https://github.com/sayanee/angularjs-pdf/issues/110#issuecomment-579988190
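A shorter equivalent of the loop above, assuming a browser that supports Uint8Array.from():

const fileBlob = new Blob(
    [Uint8Array.from(atob(base64String), c => c.charCodeAt(0))],
    { type: 'application/pdf' }
);
saveAs(fileBlob, 'filename.pdf');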
You can use btoa() and atob(); they work in most modern browsers. For example:
var enc = btoa("this is some text");
alert(enc);
alert(atob(enc));
JSON and base64 are completely independent.
Here's a JSON stringifier/parser (and direct GitHub link).
Here's a base64 Q&A. Here's another one.
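To illustrate that independence, a minimal sketch of carrying a base64 string inside a JSON payload and recovering the original text (the field name payload is just an example):

var message = { payload: btoa("this is some text") };

// JSON only ever sees an ordinary string; base64 is applied before and decoded after
var json = JSON.stringify(message);
var roundTripped = atob(JSON.parse(json).payload);

console.log(roundTripped); // "this is some text"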
I'm working on a project to encrypt/decrypt files in JavaScript. To save the encrypted/decrypted file to disk, I'm using a Blob. The whole process works: the files get encrypted correctly (and some tests show me that they decrypt correctly too), and I can save even large files with the Blob method (I was using data URIs before, and they caused browser crashes with files larger than about 1 MB). But for some reason, I can't save the decrypted blob content into a file correctly. When it's a TXT file, I get this at the beginning of the file content:
data:text/plain;base64,
followed by the text content encoded in base64. I need it to be saved as the original file, not in base64. When I decrypt an EXE file, it's corrupted; if I open it in a text editor, I get:
data:application/x-msdownload;base64,
Again, it looks like the file is being saved in base64 with this header attached. Here's my code that creates/saves the blob content (in the decrypt routine):
reader.onload = function(e) {
    var decrypted = CryptoJS.AES.decrypt(e.target.result, password)
        .toString(CryptoJS.enc.Latin1);

    var blob = new Blob([decrypted]);
    var objectURL = window.URL.createObjectURL(blob);

    if (!/^data:/.test(decrypted)) {
        alert("Invalid pass phrase or file! Please try again.");
        return false;
    }

    a.attr('href', '' + objectURL);
    a.attr('download', file.name.replace('.encrypted', ''));
    step(4);
};
reader.readAsText(file);
How can I save the files with their original content, rather than the data URL header plus base64-encoded data?
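One direction worth exploring (a sketch only, not a confirmed fix): since the decrypted string is itself a data URL, strip the data:...;base64, header and decode the payload into raw bytes before constructing the Blob, along the lines of the Uint8Array approach shown earlier:

var decrypted = CryptoJS.AES.decrypt(e.target.result, password)
    .toString(CryptoJS.enc.Latin1);

// "data:application/x-msdownload;base64,AAAA..." -> header and payload
var parts = decrypted.split(',');
var mimeType = parts[0].slice(5, parts[0].indexOf(';')); // text between "data:" and ";"
var binaryString = atob(parts[1]);

// Copy the decoded bytes into a typed array so the Blob holds raw binary
var bytes = new Uint8Array(binaryString.length);
for (var i = 0; i < binaryString.length; i++) {
    bytes[i] = binaryString.charCodeAt(i);
}

var blob = new Blob([bytes], { type: mimeType });
var objectURL = window.URL.createObjectURL(blob);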
Using Windows Azure storage services, I have created a container and subsequently created a BlockBlob (a JPEG image) using the PUT REST API. I can log into my Azure portal and download the image.
When I call the GET API, Azure successfully returns the blob in the response body, and I think it's the same raw binary I uploaded.
I'm calling the GET API via an XHR request in my JavaScript (Sencha Touch) application. I can see the response (the raw binary), but I cannot figure out how to turn that binary into an image I can display.
I've tried the following:
rawBinary = response.responseText;
encodedBinary = btoa(unescape(encodeURIComponent(rawBinary)));
img.setSrc('data:' + file.type + ';base64,' + encodedBinary);
...which gives me something like this:
data:image/jpeg;base64,77+977+977+977+9ABBKRklGAAEBAAABAAEAAO+/ve+/vQBYRXhpZgAATU0AKgAAAAgAAgESAAMAAAABAAEAAO+/vWkABAAAAAEAAAAmAAAAAAAD77+9AQADAAAAAQABAADvv70CAAQAAAABAAAKIO+/vQMABAAAAAEAAAfvv70AAAAA77.......
This correctly sets a background URL on a DIV as a base64 encoded image... but nothing displays. It looks like a valid base64 string, and there are no errors in my console or network tabs. But nothing shows.
Can anyone help?
EDIT: Below is what the "binary" response looks like in the XHR response body:
����JFIF��XExifMM*�i&��
����C ��C��� "��
���}!1AQa"q2���#B��R��$3br�
...etc... VERY long response of unreadable characters
Can you try this jsfiddle by Vlad - http://jsfiddle.net/79NnG/
function hexToBase64(str) {
    return btoa(String.fromCharCode.apply(null,
        str.replace(/\r|\n/g, "")
           .replace(/([\da-fA-F]{2}) ?/g, "0x$1 ")
           .replace(/ +$/, "")
           .split(" ")
    ));
}

var img = new Image();
// getBinary() is presumably defined in the linked jsfiddle and returns a hex dump of the image
img.src = "data:image/jpeg;base64," + hexToBase64(getBinary());
alert(hexToBase64(getBinary()));
document.body.appendChild(img);
Also this post should help you - How to display binary data as image - extjs 4
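For completeness, another direction worth trying (a sketch, not a confirmed fix for the Sencha Touch setup): ask XHR for binary directly instead of reading responseText, then build an object URL from the resulting Blob, as in the earlier examples. The URL below is a placeholder for the real Azure GET endpoint:

var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://myaccount.blob.core.windows.net/container/image.jpg');
xhr.responseType = 'blob'; // keep the bytes intact instead of decoding them as text
xhr.onload = function () {
    var img = new Image();
    img.src = URL.createObjectURL(xhr.response);
    document.body.appendChild(img);
};
xhr.send();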