When I encode image data to a base64 string, I use the server file path to get the image data with fs.readFile(). My question: does this mean other people can decode the base64 string and recover the server path from the encoded data, as below?
...
fs.readFile(destinationFilePath, function(error, data){
    fulfill(data.toString('base64'));
});
I don't want to leak my server path, so I also tried encoding from the host URL as in the code below. I'm not sure this is the correct way to use base64? I don't get any error, but I also get no response - did I miss something?
var base64EncodeData = function(destinationFilePath) {
    return new Promise(function (fulfill, reject){
        var request = require('request').defaults({ encoding: null });
        request.get(destinationFilePath, function (error, response, body) {
            if (!error && response.statusCode == 200) {
                var data = "data:" + response.headers["content-type"] + ";base64," + Buffer.from(body).toString('base64');
                console.log(data);
                fulfill(data);
            } else {
                // Without this branch the promise never settles on failure,
                // which would explain seeing no response and no error.
                reject(error || new Error('Request failed with status ' + (response && response.statusCode)));
            }
        });
    });
};
No, you don't leak your server path by base64 encoding images. The base64 you are generating only includes a base64 representation of the binary image data. Indeed, by base64 encoding them you remove any use of a path when you display them, for example within an HTML page:
<img alt="base64 image" src="data:image/png;base64,isdRw0KGgot5AAANdSsDIA..." />
The src attribute contains the data: scheme indicating that inline data is being provided, the file's MIME type image/png, the encoding base64, and the encoded image data isdRw0KGgot5AAANdSsDIA....
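For completeness, a minimal sketch of building such a data URI server-side with fs.readFile; the mimeTypes lookup table here is an assumption for illustration (derive the MIME type however suits your setup):

var fs = require('fs');
var path = require('path');

// Hypothetical extension-to-MIME lookup, purely for illustration.
var mimeTypes = { '.png': 'image/png', '.jpg': 'image/jpeg', '.gif': 'image/gif' };

function fileToDataUri(filePath, callback) {
    fs.readFile(filePath, function (error, data) {
        if (error) return callback(error);
        var mimeType = mimeTypes[path.extname(filePath).toLowerCase()] || 'application/octet-stream';
        // Only the MIME type and the base64 bytes end up in the string;
        // nothing about the server path survives the encoding.
        callback(null, 'data:' + mimeType + ';base64,' + data.toString('base64'));
    });
}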
I have a C# backend generating a zip file in memory (with System.IO.Compression) and sending it to my front end. If I download the zip file before sending it, it works fine and is in ANSI encoding (checked with Notepad++).
This is how I return my file currently. I have tried many different ways to do it, such as simply returning a file without headers, but right now it looks like this:
[HttpPost]
[Route("GetUserWorkContext")]
public async Task<IActionResult> GetUserWorkContext([FromBody] GetUserWorkContextRequest request, [FromServices] IExportManager exportManager)
{
    var zipStream = await exportManager.GetUserWorkContext(userId, request.IncludeArchived);
    HttpContext.Response.Headers.Add("Content-Disposition", "attachment; filename = test.zip; charset=Windows-1252");
    HttpContext.Response.Headers.Add("Content-Length", zipStream.ToArray().Length.ToString());
    return File(zipStream.ToArray(), "application/octet-stream");
}
It seems that no matter how I download the file with JavaScript (front end), it is saved with UTF-8 encoding (checked with Notepad++ again). I tried using js-file-download (https://www.npmjs.com/package/js-file-download) and creating blobs, but everything I end up downloading is encoded in UTF-8.
How should I go about downloading this file in JavaScript without corrupting the archive?
Here is my current attempt in JavaScript, using a piece of code I found here (JavaScript blob filename without link) to download the file:
function getUserWorkContext({ includeArchived }) {
    return new Promise(function (resolve, reject) {
        Axios.post('/api/Export/GetUserWorkContext', { includeArchived })
            .then((response) => {
                if (response.data) {
                    var blobObject = new Blob([response.data], { type: 'application/zip;charset=Windows-1252' });
                    downloadFile(blobObject, "test.zip");
                    resolve();
                }
            })
            .catch(reject);
    });
}
function downloadFile(file, fileName) {
    if (navigator.msSaveBlob) { // For IE and Edge
        return navigator.msSaveBlob(file, fileName);
    } else {
        let link = document.createElement('a');
        link.href = window.URL.createObjectURL(file);
        link.download = fileName;
        document.body.appendChild(link);
        link.dispatchEvent(new MouseEvent('click', { bubbles: true, cancelable: true, view: window }));
        link.remove();
        window.URL.revokeObjectURL(link.href);
    }
}
Note: the actual zip is 3,747 KB, whereas the zip downloaded from JavaScript is always much bigger, in this case 6,917 KB.
This is a problem with axios:
I guess you should use blob or arraybuffer as the responseType for axios.
{ responseType: 'blob' }
// `responseType` indicates the type of data that the server will respond with
// options are: 'arraybuffer', 'document', 'json', 'text', 'stream'
// browser only: 'blob'
responseType: 'json' // default
Check this answer: https://stackoverflow.com/a/60461828/2487565
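Applied to the code from the question, a minimal sketch (reusing the downloadFile helper from above; the charset parameter on the Blob is dropped, since the body is now treated as opaque binary):

Axios.post('/api/Export/GetUserWorkContext', { includeArchived }, { responseType: 'blob' })
    .then((response) => {
        // response.data is already a Blob, so its bytes pass through untouched
        downloadFile(new Blob([response.data], { type: 'application/zip' }), 'test.zip');
    });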
=== 8< ======================= previous version ======================= 8< ===
Your Content-Disposition header is wrong. There is no charset parameter for the Content-Disposition header.
Check the docs: MDN HTTP Content-Disposition
That's why your file is still sent in UTF-8, since your charset parameter has no effect.
To use UTF-8:
Delete both Content- headers from the C# code and the charset parameter from the JavaScript:
var blobObject = new Blob([response.data], {type: 'application/zip'});
If you really need to use Windows-1252, you can try to set it with the content type parameter.
return File(zipStream.ToArray(), "application/octet-stream;charset=Windows-1252");
Check this also: Charset Encoding in ASP.NET Response
By the way, UTF-8 is the preferred charset encoding: W3 QA choosing encodings
And yes, @nikneem, there is no need for the Content-Disposition and Content-Length headers. They will be generated automatically.
Basically, I need to receive a PNG or JPEG image from my server and show it inside an img tag on my website.
My architecture looks like this:
Client - Server1 (my server) - Server2 (some public server)
Client sends Ajax request to Server1.
Server1 sends request to Server2.
Server2 sends image back to Server1.
Server1 sends image back to Client.
Client code:
$("#testButton").click(function() {
$.ajax({
method: "get",
async: false,
url: "/test"
}).done(function (response){
alert(response.imageData);
$("#resultImage").attr("src", "data:image/png;base64," + response.imageData);
});
});
Server1 code:
var request = require('request'); // required module, missing from the snippet

router.get('/test', function(req, res, next){
    var url = 'https://ion.gemma.feri.um.si/ion/services/geoserver/demo1/wms?service=WMS&version=1.1.0&request=GetMap&layers=demo1%3Amaribor-dof25&bbox=546631.6237364038%2C156484.86830455417%2C550631.7865026393%2C160485.0310707898&width=767&height=768&srs=EPSG%3A3794&format=image%2Fjpeg';
    request(url, function (error, response, body) {
        console.log('error:', error);
        console.log('statusCode:', response && response.statusCode);
        console.log(response.headers['content-type']);
        console.log(body);
        return res.json({ imageData: body});
    });
});
If I enter the URL above directly into an img src, the image is shown correctly.
The image is also shown correctly when I enter the URL directly into the browser.
When I receive the image data back on my client from Server1, the data looks like this:
Any ideas how to fix this?
Since you're building a base64 encoded image on the front end, the backend must return a base64 encoded image.
You are returning the image in UTF-8 format, which of course won't work. UTF-8 is the default encoding when using the request package.
You can use the encoding property of the request package for that, or pass encoding: null and convert body to a base64 string using .toString('base64'):
request({ url, encoding: 'base64' }, function (error, response, body) {
    console.log('error:', error);
    console.log('statusCode:', response && response.statusCode);
    console.log(response.headers['content-type']);
    console.log(body);
    return res.json({ imageData: body});
});
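For reference, a sketch of the encoding: null alternative mentioned above; with encoding: null, body arrives as a Buffer:

request({ url, encoding: null }, function (error, response, body) {
    // body is a Buffer here, so the explicit conversion is lossless
    return res.json({ imageData: body.toString('base64') });
});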
Now response.imageData is a base64 encoded string that you can use with the data:image/png;base64, prefix.
Keep in mind that you're hardcoding png on the front end. If you're going to work with different formats, you can send the full src from the server:
// 'content-type' may not always be image/png or image/jpeg; improve the logic,
// I'll leave that to you.
const src = `data:${response.headers['content-type']};base64,${body}`;
return res.json({ src });
Another option is to remove the Ajax call and send the image directly, without base64.
front
$("#resultImage").attr("src", "/test");
back
app.get('/test', (req, res) => {
    let url = 'https://ion.gemma.feri.um.si/ion/services/geoserver/demo1/wms?service=WMS&version=1.1.0&request=GetMap&layers=demo1%3Amaribor-dof25&bbox=546631.6237364038%2C156484.86830455417%2C550631.7865026393%2C160485.0310707898&width=767&height=768&srs=EPSG%3A3794&format=image%2Fjpeg';
    request(url).pipe(res);
});
I'm trying to upload a binary file to Google Drive via the
multipart upload API v3.
Here's the hex representation of the content of the file:
FF FE
For some reason the above content gets encoded as UTF-8 (I assume)
when I try to POST it, enclosed in a multipart payload:
--BOUNDARY
Content-Type: application/json
{"name": "F.ini"}
--BOUNDARY
Content-Type: application/octet-stream
ÿþ <-- in the outbound request, this gets UTF-8 encoded
--BOUNDARY--
Hex representation of the file that ultimately gets stored on server side:
C3 BF C3 BE
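That doubling is consistent with UTF-8: read as a binary string, the bytes FF FE become the code points U+00FF (ÿ) and U+00FE (þ), each of which UTF-8 encodes as two bytes. A quick check:

// How U+00FF U+00FE serialize under UTF-8:
console.log(new TextEncoder().encode('\u00ff\u00fe')); // Uint8Array [195, 191, 195, 190] = C3 BF C3 BE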
The problem only occurs in the sending stage:
if I check the length of the content read from the file I always get 2;
regardless of whether I use FileReader#readAsBinaryString or FileReader#readAsArrayBuffer
(producing a string with length 2, and an ArrayBuffer with byteLength 2, respectively).
Here's the minimal code that I'm using to generate the multipart payload:
file = picker.files[0]; // 'picker' is a file picker
reader = new FileReader();
reader.onload = function (e) {
    content = e.target.result;
    boundary = "BOUNDARY";
    meta = '{"name": "' + file.name + '"}';
    console.log(content.length); // gives 2 as expected
    payload = [
        "--" + boundary, "Content-Type: application/json", "", meta, "", "--" + boundary,
        "Content-Type: application/octet-stream", "", content, "--" + boundary + "--"
    ].join("\r\n");
    console.log(payload.length); // say this gives n
    xhr = new XMLHttpRequest();
    xhr.open("POST", "/", false);
    xhr.setRequestHeader("Content-Type", "multipart/related; boundary=" + boundary);
    xhr.send(payload); // this produces a request with a 'Content-Length: n+2' header
                       // (corresponding to the length increase due to UTF-8 encoding)
};
reader.readAsBinaryString(file);
My question is twofold:
Is there a way to avoid this automatic UTF-8 encoding? (Probably not, because
this answer
implies that the UTF-8 encoding is part of the XHR spec.)
If not, what is the correct way to "inform" the Drive API that my file content is UTF-8 encoded?
I have tried these approaches, with no success:
appending ; charset=utf-8 or ; charset=UTF-8 to the binary part's Content-Type header
doing the same to the HTTP header on the parent request
(Content-Type: multipart/related; boundary=blablabla, charset=utf-8;
also tried replacing the comma with a semicolon)
I need the multipart API because AFAIU the "simple" API
does not allow me to upload into a folder
(it only accepts a filename as metadata, via the Slug HTTP header,
whereas the JSON metadata object in the multipart case allows a parent folder ID to be specified as well).
(Just thought of mentioning this because the "simple" API handles things correctly
when I directly POST the File (from the picker) or ArrayBuffer (from FileReader#readAsArrayBuffer) as the XHR's payload.)
I do not want to utilize any third-party libraries because
I want to keep things as light as possible, and
keeping aside reinventing-the-wheel and best-practices stuff, anything that is accomplished by a third party library should be doable via plain JS as well (this is just a fun exercise).
For the sake of completeness I tried uploading the same file via the GDrive web interface, and it got uploaded just fine;
however the web interface seems to base64-encode the payload, which I would rather like to avoid
(as it unnecessarily bloats up the payload, esp. for larger payloads which is my eventual goal).
How about this modification?
Modification points:
Used new FormData() for creating the multipart/form-data.
Used reader.readAsArrayBuffer(file) instead of reader.readAsBinaryString(file).
Sent the file as a blob. In this case, the data is sent as application/octet-stream.
Modified script:
file = picker.files[0]; // 'picker' is a file picker
reader = new FileReader();
reader.onload = function (e) {
    var content = new Blob([file]); // the raw bytes, sent as application/octet-stream
    var meta = {name: file.name, mimeType: file.type};
    var accessToken = gapi.auth.getToken().access_token;
    var payload = new FormData();
    payload.append('metadata', new Blob([JSON.stringify(meta)], {type: 'application/json'}));
    payload.append('file', content);
    xhr = new XMLHttpRequest();
    xhr.open('post', 'https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart');
    xhr.setRequestHeader('Authorization', 'Bearer ' + accessToken);
    xhr.onload = function() {
        console.log(xhr.response);
    };
    xhr.send(payload);
};
reader.readAsArrayBuffer(file);
Note:
In this modified script, I put the endpoint and the header including the access token. So please modify this for your environment.
In this case, I used a scope of https://www.googleapis.com/auth/drive.
Reference:
Using FormData Objects
In my environment, I could confirm that this script works. But if it doesn't work in your environment, I apologize.
I have a response from an eBay API:
--MIMEBoundaryurn_uuid_C91296EA5FF69EE9571479882375576565344 Content-Type: application/xop+xml; charset=utf-8; type="text/xml"
Content-Transfer-Encoding: binary Content-ID:
<0.urn:uuid:C91296EA5FF69EE9571479882375576565345>
Success1.1.02016-11-23T06:26:15.576Z514
--MIMEBoundaryurn_uuid_C91296EA5FF69EE9571479882375574545344 Content-Type: application/zip Content-Transfer-Encoding: binary
Content-ID:
PKY'uIi[��#�50014028337_report.xmlUT y�2Xy�2Xux
00�R�j�#��+��[��PlX#�(�x,=l�q]Lfewc��w Ĥ��O��١�HT���t��GGT�
��6�;���'������.$����=d����m;c}Wߦ�RW�A
f�����g�I��4U��x��3��f���ғ{f��xj�,+���ۖI%5��B's��G,#��t,L{�c�����MD笓��)!�9��
�M�o;8_��<�i�y����sz���u���=��Ջ^2�S��%+2�2�`QV�$�����~?�w�ǥ�_Q�퉦�'PKY'uIi[��#���50014028337_report.xmlUTy�2Xux
00PK\�
--MIMEBoundaryurn_uuid_C91296EA5FF69EE9571479882375576565344--
This is of type string, and I extracted the attached zip file data, i.e.:
PKY'uIi[��#�50014028337_report.xmlUT y�2Xy�2Xux
00�R�j�#��+��[��PlX#�(�x,=l�q]Lfewc��w Ĥ��O��١�HT���t��GGT�
��6�;���'������.$����=d����m;c}Wߦ�RW�A
f�����g�I��4U��x��3��f���ғ{f��xj�,+���ۖI%5��B's��G,#��t,L{�c�����MD笓��)!�9��
�M�o;8_��<�i�y����sz���u���=��Ջ^2�S��%+2�2�`QV�$�����~?�w�ǥ�_Q�퉦�'PKY'uIi[��#���50014028338_report.xmlUTy�2Xux
00PK\�
This shows that it has a report.xml in it. So when I write this data to a zip file, the zip file is created, but extracting it gives an error.
fs.writeFile("./static/DownloadFile.zip", fileData, 'binary', function(err){
if (err) throw err;
console.log("success");
});
How can I write this data to a zip file properly? Please advise; I can provide more information if required.
EDIT:
I tried writing the zip file in PHP, and it works with this code:
$zipFilename="DownloadFile.zip";
$data = $fileData;
$handler = fopen($zipFilename, 'wb')
or die("Failed. Cannot Open $zipFilename to Write!</b></p>");
fwrite($handler, $data);
fclose($handler);
Please advise how I can achieve the same thing in Node.js.
Depending on what HTTP client you are using, the implementation might change a little.
With axios I'm doing something like so:
I'm requesting a zip file, so I specify the Accept header as application/zip.
In order to get a Buffer and not binary, specify the responseType as arraybuffer:
const res = await axios.get('/routToThat/file', {
headers: {
Accept: 'application/zip',
},
responseType: 'arraybuffer',
});
By doing the latter, instead of receiving a binary string from the response:
A#B�ArE⏾�7�ϫ���f�걺N�����Yg���o_M^�D�T�U X_���e?� hi\...
I receive a Buffer:
Buffer(22781691) [80, 75, 3, …]
Once the request is resolved and I have that Buffer, I use that same writeFile function from fs.
NOTE: I'm not specifying the encoding in writeFile.
fs.writeFile(name, res.data, (err) => {
    if (err) throw err;
    console.log("success");
});
As I can see in your code example, your binary data is already mangled by the request module. Just use the setting
encoding: null
in request, and body will contain the zip file as valid binary data (now a Buffer instead of a UTF-8 string!) that you can write out or decompress. As long as you see the question marks, you still have the encoding issue.
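A minimal sketch of the round trip under that setting; reportUrl is a hypothetical stand-in for your actual eBay API call, and the multipart parsing from your dump is skipped since the point here is only the encoding option:

var fs = require('fs');
var request = require('request');

// 'reportUrl' is hypothetical - substitute your actual eBay API request.
request({ url: reportUrl, encoding: null }, function (err, response, body) {
    if (err) throw err;
    // body is a Buffer, so the zip bytes survive untouched; no 'binary'
    // encoding argument is needed when writing a Buffer.
    fs.writeFile("./static/DownloadFile.zip", body, function (err) {
        if (err) throw err;
        console.log("success");
    });
});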
Using the request module to load a webpage, I notice that for the UK pound symbol £ I sometimes get back the Unicode replacement character \uFFFD.
An example URL that I'm parsing is this Amazon UK page: http://www.amazon.co.uk/gp/product/B00R3P1NSI/ref=s9_newr_gw_d38_g351_i2?pf_rd_m=A3P5ROKL5A1OLE&pf_rd_s=center-2&pf_rd_r=0Q529EEEZWKPCVQBRHT9&pf_rd_t=101&pf_rd_p=455333147&pf_rd_i=468294
I'm also using the iconv-lite module to decode using the charset returned in the response header:
request(urlEntry.url, function(err, response, html) {
    const contType = response.headers['content-type'];
    const charset = contType.substring(contType.indexOf('charset=') + 8, contType.length);
    const encBody = iconv.decode(html, charset);
    ...
But this doesn't seem to be helping. I've also tried decoding the response HTML as UTF-8.
How can I avoid this Unicode replacement char?
Firstly, the Amazon webpage is encoded in ISO-8859-1, not UTF-8. This is what causes the Unicode replacement character. You can check this in the response headers. I used curl -i.
Secondly, the README for request says:
encoding - Encoding to be used on setEncoding of response data. If null, the body is returned as a Buffer. Anything else (including the default value of undefined) will be passed as the encoding parameter to toString() (meaning this is effectively utf8 by default).
It is UTF-8 by default... and (after a little experimentation) we find that sadly it doesn't support ISO-8859-1. However, if we set the encoding to null, we can then decode the resulting Buffer using iconv-lite.
Here is a sample program.
var request = require('request');
var iconvlite = require('iconv-lite');

var url = "http://www.amazon.co.uk/gp/product/B00R3P1NSI/ref=s9_newr_gw_d38_g351_i2?pf_rd_m=A3P5ROKL5A1OLE&pf_rd_s=center-2&pf_rd_r=0Q529EEEZWKPCVQBRHT9&pf_rd_t=101&pf_rd_p=455333147&pf_rd_i=468294";

request({url: url, encoding: null}, function (error, response, body) {
    if (!error && response.statusCode == 200) {
        var encoding = 'ISO-8859-1';
        var content = iconvlite.decode(body, encoding);
        console.log(content);
    }
});
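If you'd rather not hardcode the charset, you could parse it from the Content-Type response header (as the question already attempts) and fall back to ISO-8859-1; a sketch:

function charsetFromContentType(contentType) {
    // e.g. "text/html; charset=ISO-8859-1" -> "ISO-8859-1"
    var match = /charset=([^;]+)/i.exec(contentType || '');
    return match ? match[1].trim() : 'ISO-8859-1'; // assumed fallback
}

request({url: url, encoding: null}, function (error, response, body) {
    if (!error && response.statusCode == 200) {
        var encoding = charsetFromContentType(response.headers['content-type']);
        console.log(iconvlite.decode(body, encoding));
    }
});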
This question is somewhat related, and I used it whilst figuring this out:
http.get and ISO-8859-1 encoded responses