I upload a file from the browser using a file type input. I have a system where I receive that file on the client side, read its binary content, convert it to base64 (using btoa), do some more processing as part of my system, and then the system uploads it to a remote Apache server as a multipart request.
When I download the file, I do the reverse: parse the response, get the raw string, convert it to base64, send that string to the client side, convert the base64 string back into binary (using atob), and save the file with the octet-stream type using the Blob constructor and the createObjectURL method. The file is saved to disk, so no problem there. But when I open the file, its content is unreadable.
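Roughly, the conversion flow looks like this (a simplified sketch, not my exact code; the helper names are just for illustration):
// Upload side: read the selected file and encode it as base64
function fileToBase64(file, callback) {
    var reader = new FileReader();
    reader.onload = function () {
        var bytes = new Uint8Array(reader.result);
        var binary = "";
        for (var i = 0; i < bytes.length; i++) {
            binary += String.fromCharCode(bytes[i]);
        }
        callback(btoa(binary)); // this base64 goes into the multipart upload
    };
    reader.readAsArrayBuffer(file);
}
// Download side: decode the base64 received from the server and save it
function saveBase64(base64, filename) {
    var binary = atob(base64);
    var bytes = new Uint8Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
    }
    var blob = new Blob([bytes], { type: "application/octet-stream" });
    var a = document.createElement("a");
    a.href = URL.createObjectURL(blob);
    a.download = filename;
    a.click();
}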
I did some research and compared the base64 from the upload with the base64 from the download, and I can see that the base64 string gets corrupted somewhere in the download process.
For example (the strings below are made up, but they show what is happening):
Upload base64:
UIABvcfce+tre2df
Download corrupted base64:
UIABvcfDuuuvvv55rrre2df
As shown above, ce+t has been changed to Duuuvvv55rr.
Hence my question: how do I prevent this mutation (or corruption) of the base64 string?
I have searched a lot on Stack Overflow and other online platforms for direct or indirect references, but nothing has worked so far.
I have a piece of client-side code that exports a .docx file from Google Drive and sends the data to my server. It's pretty straightforward: it just exports the file, makes it into a blob, and sends the blob to a POST endpoint.
gapi.client.drive.files.export({
    fileId: file_id,
    mimeType: "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
}).then(function (response) {
    // the zip file data is now in response.body
    var blob = new Blob([response.body], {type: "application/vnd.openxmlformats-officedocument.wordprocessingml.document"});
    // send the blob to the server to extract
    var request = new XMLHttpRequest();
    request.open('POST', 'return-xml.php', true);
    request.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
    request.onload = function() {
        // the extracted data is in the request.responseText
        // do something with it
    };
    request.send(blob);
});
Here is my server-side code to save this file onto my server so I can do things with it:
<?php
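// write the raw request body straight into tmp/document.docx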
file_put_contents('tmp/document.docx', fopen('php://input', 'r'));
When I run this, the file is created on my server. However, I believe it is corrupted, because when I try to unzip it (as you can with a .docx), this happens:
$ mv tmp/document.docx tmp/document.zip
$ unzip tmp/document.zip
Archive: document.zip
error [document.zip]: missing 192760059 bytes in zipfile
(attempting to process anyway)
error [document.zip]: start of central directory not found;
zipfile corrupt.
(please check that you have transferred or created the zipfile in the
appropriate BINARY mode and that you have compiled UnZip properly)
Why isn't it recognizing it as a proper .zip file?
You should first download the original zip and compare its content with what you receive on your server; you can do this e.g. with Total Commander or the command-line "diff" tool.
When you do this, you will see whether your zip is changed during the transfer.
With that information you can keep investigating WHY it is changed.
E.g. if an ASCII 10 in your zip file is transformed into a 13 or into a "13 10" sequence, it could be a line-ending problem in the file transfer.
When you open files in PHP with fopen(..., 'r'), it can happen that \n characters are transformed if you are on Windows; you could try fopen(..., 'rb'), which enforces BINARY reading of the file without translating line endings.
#see: https://stackoverflow.com/a/7652022/2377961
#see php documentation fopen
I think it may depend on that "application/x-www-form-urlencoded" content type. When you read the request data with php://input, some HTTP request properties get saved along with it, so the .zip is corrupted. Try opening the .zip file and looking at what is actually inside it.
To fix it, if the problem is what I described above, try changing the Content-Type to application/octet-stream.
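For example, something along these lines, keeping the rest of the request from the question unchanged (just a sketch):
var request = new XMLHttpRequest();
request.open('POST', 'return-xml.php', true);
// send the blob as a raw binary body instead of declaring it as form-urlencoded data
request.setRequestHeader("Content-type", "application/octet-stream");
request.onload = function() {
    // the extracted data is in request.responseText
};
request.send(blob);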
I would suggest using base64 to encode the binary data into a text stream before posting; I've done this before and it works well. URL-encoding raw binary data isn't going to work. Then, on your server, you base64-decode it to convert it back to binary before storing it.
Once it's in base64 you can post it as text.
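On the client side that could look roughly like this (a sketch; the data parameter name is an assumption, and the server would then base64-decode that value before writing it to disk):
// read the exported blob as a data URI, strip the prefix, and post the base64 as text
var reader = new FileReader();
reader.onload = function () {
    var base64 = reader.result.split(',')[1]; // drop the "data:...;base64," prefix
    var request = new XMLHttpRequest();
    request.open('POST', 'return-xml.php', true);
    request.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
    request.send("data=" + encodeURIComponent(base64));
};
reader.readAsDataURL(blob);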
Well, to me it is not a ZIP file. Looking at the Drive API you can see that application/vnd.openxmlformats-officedocument.wordprocessingml.document is not zipped the way application/zip is. You should handle the file as a DOCX, I think. Have you tried that?
You are sending a Blob (binary data) with "Content-type", "application/x-www-form-urlencoded" but no URL encoding applied to the Blob, so the file that PHP receives is not a ZIP file; it's corrupted. Change the Content-Type or apply URL encoding to the Blob. You can get a better idea by looking at MDN - Sending forms through JavaScript. These questions should help too: question 1, question 2. You must send the file properly.
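For example, one way to send the file properly is with FormData, so the browser builds a real multipart/form-data request (a sketch; the "file" field name and reading it via $_FILES on the PHP side are assumptions):
var formData = new FormData();
formData.append("file", blob, "document.docx");
var request = new XMLHttpRequest();
request.open('POST', 'return-xml.php', true);
// do not set Content-Type yourself: the browser adds multipart/form-data with the boundary
request.onload = function() {
    // handle request.responseText
};
request.send(formData);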
A PDF file is generated client-side using jsPDF, encoded in base64 using btoa(), and sent to a PHP API, where it is decoded and saved as a binary file. But it isn't working: I'm getting a malformed PDF.
PHP code:
$destination = 'test/file.pdf';
$content = base64_decode($content);
$uploaded = file_put_contents($destination, $content);
If I compare both files (the PDF downloaded directly from the frontend, which works, versus the one downloaded from the server), this is what I get:
Original PDF File fragment (I cannot disclose the full file):
Post encode/decode one:
What could be causing this difference? Does it seem to be an encoding problem?
I cannot comment, because I need 50 rep :) Leaving an answer instead.
Make sure you are doing your POST request correctly. Instead of your PDF file, try posting another file to the server, for example an image, and then try to open the posted image file on the server.
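As a quick sanity check on the client side, something like this should work (a sketch: doc is the jsPDF instance, upload.php is a placeholder endpoint, and the important part is URL-encoding the base64 so characters like + survive the transfer):
// post the base64 PDF as a URL-encoded form field
var base64 = btoa(doc.output());
var xhr = new XMLHttpRequest();
xhr.open('POST', 'upload.php', true);
xhr.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
xhr.onload = function () { console.log(xhr.responseText); };
xhr.send('content=' + encodeURIComponent(base64));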
I've done some looking at the Cloudinary API and the upload examples for Node.js, and it looks like the server-side uploads use a file path, while the client-side uploads require a frontend input tag. I already have a frontend for users to select and crop a picture to their liking, and this gives me a data URI. I'd like to save this file to Cloudinary without having to use their built-in frontend option. Is this possible? It would basically mean being able to call some kind of upload function that can take a URI or a file blob.
Cloudinary supports uploading files using a data-URI encoded string too.
Please make sure that you send your content as a Data-URI as explained here: http://en.wikipedia.org/wiki/Data_URI_scheme.
For example, in Node.js:
cloudinary.uploader.upload("data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
    function(result) { console.log(result); });
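So if the cropped image's data URI is already available on the client, you can just send that string to your backend and pass it straight through (a sketch assuming an Express route and a JSON body field named image_data, both of which are illustrative, not part of Cloudinary's API):
// Express route receiving { image_data: "data:image/png;base64,..." } from the frontend
app.post('/upload', function (req, res) {
    cloudinary.uploader.upload(req.body.image_data, function (result) {
        res.json(result); // result contains the stored image's URL, public_id, etc.
    });
});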
I'm trying to save the result of a XMLHTTPRequest to a PDF file.
I am communicating with a server that I call to get a chunk of data formatted as PDF data.
I'm using XMLHttpRequest to log into the server and then make a search request, which in turn creates a PDF that is streamed back to me through the XMLHttpRequest.
I need to save that result as a PDF so I can later open it in Acrobat.
When I save the response text to a file, the result is not a valid PDF. The request is doing something to the stream that makes it invalid as a PDF.
I have no control over that server, so I can't make it send back a link to a temporary valid PDF file.
Is there a way around that ?
Is there a way to encode that stream into a valid PDF file ?
I am using JavaScript for this application.
Thanks
Erez
What you could do is return an XML file from your HTTP server, like the following:
<resp><![CDATA[YOURPDFSTREAM]]></resp>
and then retrieve the resp node to build your PDF file.
Don't forget to encode your stream in the desired encoding format as well.
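For example, assuming the server base64-encodes the PDF bytes inside the CDATA section, the client could rebuild the file roughly like this (the URL and filename are placeholders):
var xhr = new XMLHttpRequest();
xhr.open('GET', '/search-pdf', true);
xhr.onload = function () {
    // parse the XML wrapper and pull the encoded PDF out of the resp node
    var xml = new DOMParser().parseFromString(xhr.responseText, 'application/xml');
    var base64 = xml.getElementsByTagName('resp')[0].textContent;
    var binary = atob(base64);
    var bytes = new Uint8Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
    }
    var blob = new Blob([bytes], { type: 'application/pdf' });
    var a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = 'result.pdf';
    a.click();
};
xhr.send();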