In Node.js, I'm getting a response from an API:
{
  "file": "PHN0eWxlPnRlRrU3VRbUNDJyAvPjwvcD4K",
  "mime_type": "text/html",
  "document_type": "shippingLabel"
}
To reconstruct the file, the data in the file field needs to be base64-decoded and interpreted according to the mime_type.
Help me get the file (for example as a .pdf) and save it to a directory.
Using fs.writeFileSync(file, data[, options]):
const fs = require('fs');

// get your response somehow...
const response = {
  file: 'PHN0eWxlPnRlRrU3VRbUNDJyAvPjwvcD4K',
  mime_type: 'text/html',
  document_type: 'shippingLabel'
};

// LUT for MIME type to extension
const ext = {
  'text/html': 'html',
  // ...
};

// save to shippingLabel.html
fs.writeFileSync(`${response.document_type}.${ext[response.mime_type]}`, response.file, 'base64');
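The question mentions wanting a .pdf; if the API returns a mime_type of "application/pdf", the same approach works once the lookup table has a matching entry. A minimal sketch, assuming a response object shaped like the one above but carrying PDF data:

// hypothetical PDF entry in the same kind of lookup table
const extensions = {
  'text/html': 'html',
  'application/pdf': 'pdf'
};

// decode the base64 payload explicitly and write e.g. shippingLabel.pdf
const buffer = Buffer.from(response.file, 'base64');
fs.writeFileSync(`${response.document_type}.${extensions[response.mime_type]}`, buffer);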
Related
I have to convert JS code to Python.
The JS code performs a file upload via a POST request, using fetch().
This is the JS code:
<input type="file" />
<button onclick="upload()">Upload data</button>

<script>
upload = async () =>
{
  const fileField = document.querySelector('input[type="file"]');
  await uploadDoc(fileField.files[0]);
};

uploadDoc = async (file) =>
{
  let fd = new FormData();
  fd.append('file', file);
  fd.append('descr', 'demo_upload');
  fd.append('title', name);
  fd.append('contentType', 'text');
  fd.append('editor', user);

  let resp = await fetch(url, { method: 'POST', mode: 'cors', body: fd });
};
</script>
The code works and complies with the fetch() docs, provided here:
https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#uploading_a_file
Now when I try to recreate this in Python, I get a 500 HTTP status code.
This is the Python code:
from urllib import request
from urllib.parse import urlencode
import json

with open('README.md', 'rb') as f:
    upload_credentials = {
        "file": f,
        "descr": "testing",
        "title": "READMEE.md",
        "contentType": "text",
        "editor": username,
    }

    url_for_upload = ""  # here you place the upload URL

    req = request.Request(url_for_upload, method="POST")
    form_data = urlencode(upload_credentials)
    form_data = form_data.encode()
    response = request.urlopen(req, data=form_data)

    http_status_code = response.getcode()
    content = response.read()

    print(http_status_code)
    print(content)
This doesn't work, however, and I get this error:
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 500:
Someone with both JS and Python experience might be able to see what is wrong on the Python side, or how to convert the fetch() call to Python.
I think the real problem is that urlencode() does not build multipart/form-data: it just stringifies the file object, whereas fetch() with a FormData body sends an actual multipart upload.
I recommend the requests module, which is more widely used, builds the multipart body for you, and is easier in this case.
import requests  # pip install requests

url = "https://www.example.com/upload_file"

files = {
    # tuple of (file name, file object, per-file content type); open in binary mode
    "my_file": ("FILE_NAME.md", open("README.md", "rb"), "text/plain")
}

data = {  # in case you want to send a request payload too
    "foo": "bar"
}

# don't set the Content-Type header yourself: requests generates the
# multipart/form-data header (including the boundary) from `files`
r = requests.post(url, files=files, data=data)

if r.status_code == 200:
    print(r.text)  # response as a string (use r.content for bytes, e.g. an image)
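To mirror the fields the original FormData appends (descr, title, contentType, editor), the data dict can carry the same keys; the values below are placeholders taken from the question, not anything the target API is known to require:

# same form fields the JS code appends; values are illustrative
data = {
    "descr": "demo_upload",
    "title": "READMEE.md",
    "contentType": "text",
    "editor": "some_username",  # whatever `user` held in the JS code
}

r = requests.post(url, files=files, data=data)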
I have written a function to download an xlsx file via a service. The download itself works, but when I open the file I get the error message that the file extension or file format is invalid. How can I solve the problem?
Code:
// Service
getDownloadPlan(): Observable<any> {
  const url = `/download-plan?sales-plan=0&personnel-plan=0&investment-plan=0&loan-plan=0&material-cost-plan=0`;
  return this.http.get(`${environment.baseUrl}` + url, { responseType: 'blob' });
}

// TS
downloadPlanBwa() {
  this.planBwaService.getDownloadPlan().subscribe(response => {
    const downloadFile: any = new Blob([response], { type: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet' });
    fileSaver.saveAs(downloadFile, 'Plan');
  }, error => console.log('ERROR'),
    () => console.log('SUCCESSFUL')
  );
}
If I use the MIME type application/vnd.ms-excel;charset=utf-8 (which is for the xls format), it works.
What do I need to change in my code to successfully open xlsx files?
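One thing worth checking, as an assumption rather than a confirmed fix: file-saver saves the blob under exactly the name you pass, so 'Plan' produces a file with no extension, and Excel may reject it even when the bytes are a valid xlsx. A minimal sketch with the extension added:

// same subscribe body as above, but passing a file name with the .xlsx extension
const downloadFile: any = new Blob([response], {
  type: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
});
fileSaver.saveAs(downloadFile, 'Plan.xlsx');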
The URL below points to a zip file which contains a file called bundlesizes.json. I am trying to read the contents of that json file within my React application (no Node server/backend involved):
https://dev.azure.com/uifabric/cd9e4e13-b8db-429a-9c21-499bf1c98639/_apis/build/builds/8838/artifacts?artifactName=drop&api-version=4.1&%24format=zip
I was able to get the contents of the zip file by doing the following
const url =
  'https://dev.azure.com/uifabric/cd9e4e13-b8db-429a-9c21-499bf1c98639/_apis/build/builds/8838/artifacts?artifactName=drop&api-version=4.1&%24format=zip';

const response = await Axios({
  url,
  method: 'GET',
  responseType: 'stream'
});

console.log(response.data);
This emits the zip file (non-ASCII characters). However, I am looking to read the contents of the bundlesizes.json file within it.
For that I looked at jszip and tried the following:
var zip = new JSZip();

zip.createReader(
  new zip.BlobReader(response.data),
  function(reader: any) {
    // get all entries from the zip
    reader.getEntries(function(entries: any) {
      if (entries.length) {
        // get first entry content as text
        entries[0].getData(
          new zip.TextWriter(),
          function(text: any) {
            // text contains the entry data as a String
            console.log(text);

            // close the zip reader
            reader.close(function() {
              // onclose callback
            });
          },
          function(current: any, total: any) {
            // onprogress callback
            console.log(current);
            console.log(total);
          }
        );
      }
    });
  },
  function(error: any) {
    // onerror callback
    console.log(error);
  }
);
However, this does not work for me and errors out.
How can I read the contents of a file inside the zip from within my React application using JavaScript/TypeScript?
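For what it's worth, a minimal sketch of how this might look with JSZip's loadAsync/file().async() API, assuming the response is fetched as an arraybuffer instead of a stream (responseType: 'stream' only works in Node, not in the browser); the entry name comes from the question:

import Axios from 'axios';
import JSZip from 'jszip';

async function readBundleSizes(url: string) {
  const response = await Axios({
    url,
    method: 'GET',
    responseType: 'arraybuffer' // the browser needs arraybuffer or blob, not 'stream'
  });

  // load the zip from the raw bytes, then read one entry as text
  const zip = await JSZip.loadAsync(response.data);
  const entry = zip.file('bundlesizes.json');
  if (entry) {
    const text = await entry.async('string');
    console.log(JSON.parse(text));
  }
}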
THE SITUATION:
Frontend: Vue. Backend: Laravel.
Inside the web app I need to let the user download certain pdf files:
I need Laravel to take the file and return it as a response of an API GET request.
Then inside my Vue web app I need to get the file and download it.
THE CODE:
API:
$file = public_path() . "/path/test.pdf";

$headers = [
    'Content-Type' => 'application/pdf',
];

return response()->download($file, 'test.pdf', $headers);
Web app:
downloadFile() {
  this.$http.get(this.apiPath + '/download_pdf')
    .then(response => {
      let blob = new Blob([response.data], { type: 'application/pdf' })
      let link = document.createElement('a')
      link.href = window.URL.createObjectURL(blob)
      link.download = 'test.pdf'
      link.click()
    })
}
OUTCOME:
Using this code I do manage to download a pdf file. The problem is that the pdf is blank.
Somehow the data got corrupted (not a problem with this particular pdf file; I have tried several pdf files with the same outcome).
RESPONSE FROM SERVER:
The response itself from the server is fine.
PDF:
The problem may be with the pdf data itself. It definitely looks like corrupted data: response.data was mostly ???? and other unreadable characters.
THE QUESTION:
How can I properly download a pdf file using Laravel for the API and Vue for the web app?
Thanks!
SOLUTION:
The code above was correct. What was missing was setting the proper responseType of arraybuffer.
I got scared by those ???? inside the response, and that misled me.
Those question marks were actually fine, since a PDF is binary data and is meant to be read by a proper reader.
THE ARRAYBUFFER:
An arraybuffer is precisely what is used to hold binary data.
This is the definition from the Mozilla website:
The ArrayBuffer object is used to represent a generic, fixed-length
raw binary data buffer. You cannot directly manipulate the contents of
an ArrayBuffer; instead, you create one of the typed array objects or
a DataView object which represents the buffer in a specific format,
and use that to read and write the contents of the buffer.
The responseType string indicates the type of the response. By telling the client it is an arraybuffer, it treats the data accordingly.
And just by adding the responseType I managed to properly download the pdf file.
THE CODE:
This is the corrected Vue code (exactly as before, but with the addition of the responseType):
downloadFile() {
  this.$http.get(this.appApiPath + '/testpdf', {responseType: 'arraybuffer'})
    .then(response => {
      let blob = new Blob([response.data], { type: 'application/pdf' })
      let link = document.createElement('a')
      link.href = window.URL.createObjectURL(blob)
      link.download = 'test.pdf'
      link.click()
    })
}
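If you want to double-check that the payload really is a PDF rather than corrupted text, you can peek at the first bytes of the ArrayBuffer inside the .then callback; this check is my own addition, not part of the original solution. A valid PDF starts with the %PDF signature:

// inside .then(response => { ... }), with responseType: 'arraybuffer'
const firstBytes = new Uint8Array(response.data.slice(0, 4))
console.log(String.fromCharCode(...firstBytes)) // a valid file logs "%PDF"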
EDIT:
This is a more complete solution that takes other browsers' behavior into account:
downloadContract(booking) {
  this.$http.get(this.appApiPath + '/download_contract/' + booking.id, {responseType: 'arraybuffer'})
    .then(response => {
      this.downloadFile(response, 'customFilename')
    }, response => {
      console.warn('error from download_contract')
      console.log(response)
      // Manage errors
    })
},

downloadFile(response, filename) {
  // It is necessary to create a new blob object with mime-type explicitly set
  // otherwise only Chrome works like it should
  var newBlob = new Blob([response.body], {type: 'application/pdf'})

  // IE doesn't allow using a blob object directly as link href
  // instead it is necessary to use msSaveOrOpenBlob
  if (window.navigator && window.navigator.msSaveOrOpenBlob) {
    window.navigator.msSaveOrOpenBlob(newBlob)
    return
  }

  // For other browsers:
  // Create a link pointing to the ObjectURL containing the blob.
  const data = window.URL.createObjectURL(newBlob)
  var link = document.createElement('a')
  link.href = data
  link.download = filename + '.pdf'
  link.click()

  setTimeout(function () {
    // For Firefox it is necessary to delay revoking the ObjectURL
    window.URL.revokeObjectURL(data)
  }, 100)
},
You won't be able to do the download from Laravel to Vue directly, since both are running on different ports, I assume.
Even if you try something like this:
public function getDownload()
{
    // PDF file is stored under project/public/download/info.pdf
    $file = public_path() . "/download/info.pdf";

    $headers = [
        'Content-Type' => 'application/pdf',
    ];

    return response()->download($file, 'filename.pdf', $headers);
}
It won't help, as you are sending the headers to the Laravel port. Try using a Vue.js library and send the PDF content through it.
Try this; it works for me.
From the Laravel backend:
$pdf = PDF::loadView('your_view_name', ['data' => $data]);
return $pdf->output();
From the Vue.js frontend:
axios({
  url: 'http://localhost:8000/api/your-route',
  method: 'GET',
  responseType: 'blob',
}).then((response) => {
  var fileURL = window.URL.createObjectURL(new Blob([response.data]));
  var fileLink = document.createElement('a');

  fileLink.href = fileURL;
  fileLink.setAttribute('download', 'file.pdf');
  document.body.appendChild(fileLink);

  fileLink.click();
});
downloadFile: function () {
  this.$http.post('{{ route('download.download') }}', {
    _token: "{{ csrf_token() }}",
    inputs: this.inputs
  }, {responseType: 'arraybuffer'}).then(response => {
    var filename = response.headers.get('content-disposition').split('=')[1].replace(/^\"+|\"+$/g, '')
    var url = window.URL.createObjectURL(new Blob([response.body], {type: response.headers.get('content-type')}))
    var link = document.createElement('a')
    link.href = url
    link.setAttribute('download', filename)
    document.body.appendChild(link)
    link.click()
  });
},
How can we ad-hoc decode/uncompress the output produced by the nock recorder so we can see the response as text? I guess we do not understand whether the response is gzipped and/or encoded.
The object works fine when we load it into nock, and our tests behave as we expect. To see what the API actually produced, we are having to put logging statements in the implementation file.
We are recording the responses and saving them as JSON:
nock.recorder.rec({output_objects: true, dont_print: true});
JSON.stringify(nock.recorder.play())
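For context, a minimal sketch of how we write the recorded objects out to a cassette file (the output path is just an example):

const fs = require('fs');
const nock = require('nock');

nock.recorder.rec({ output_objects: true, dont_print: true });

// ... exercise the code that makes the HTTP calls ...

// serialize the recorded call objects and save them as a cassette file
const recorded = nock.recorder.play();
fs.writeFileSync('cassettes/auth.json', JSON.stringify(recorded, null, 2));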
And our file looks like:
[
  {
    "scope": "https://some.api.com:443",
    "method": "POST",
    "path": "/auth?key=some_key",
    "body": {
      "logonId": "user#api.com",
      "logonPassword": "secret"
    },
    "status": 400,
    "response": [
"1f8b0800000000000000458cbd6ac34010067b3fc5c735691263bb741344ec42f827420a492916692d1d9cb461f71c218cdf3d97266e6786b92d00c7aaa205290d1c59cd6d71bb3fff8b376939a1cd6abd7ac003cf89b97a5f96757efecc8ef9aede9fb2fc586455f5f55eeedca33db119757f0f5704266334a2ca4d44ec19170941263f76f06657b62dd6cb2af919ec9357cc7255f0cb403e4014df643689b6687d3b3e450c149b1e534f1113a3a71f868cb8f8c04b7ca48b8fa08efcf8ea16f75fa1776d91ee000000"
    ],
    "headers": {
      "cache-control": "no-store, no-cache, must-revalidate",
      "content-encoding": "gzip",
      "content-type": "application/json",
      "transfer-encoding": "chunked",
      "connection": "Close"
    }
  }
]
Nock serializes the compressed (gzipped) response as a hex buffer;
luckily xxd can revert the hex buffer to binary data, which can then be gunzipped to get the plain JSON text.
In summary:
echo <YOUR-HEX-BUFFER-HERE> | xxd -r -p | gunzip
With reference to the example in the question:
$ echo 1f8b0800000000000000458cbd6ac34010067b3fc5c735691263bb741344ec42f827420a492916692d1d9cb461f71c218cdf3d97266e6786b92d00c7aaa205290d1c59cd6d71bb3fff8b376939a1cd6abd7ac003cf89b97a5f96757efecc8ef9aede9fb2fc586455f5f55eeedca33db119757f0f5704266334a2ca4d44ec19170941263f76f06657b62dd6cb2af919ec9357cc7255f0cb403e4014df643689b6687d3b3e450c149b1e534f1113a3a71f868cb8f8c04b7ca48b8fa08efcf8ea16f75fa1776d91ee000000 \
> | xxd -r -p \
> | gunzip
{
  "errorParameters": {},
  "errorCode": 2010,
  "errorKey": "_ERR_INVALID_EMAILPASSWORD",
  "errorMessage": "Please correct the following issues: 1.Sorry either your e-mail or password didn't match what we have on file. Try it again?"
}
Also, at the moment I'm answering, there are active discussions and proposals on the nock project, so this may change in future releases; with reference to:
https://github.com/nock/nock/issues/1212
https://github.com/nock/nock/pull/1372
The response from the HTTP request is coming back as gzipped data, indicated by the content-encoding header. Nock is saving this data as a hex-encoded buffer string.
You can convert these cassettes into json with the following utility:
var zlib = require('zlib');
var fs = require('fs');
var path = require('path');

var argv = process.argv.slice(2);

var filename = path.resolve(argv[0]);
var file = fs.readFileSync(filename, { encoding: 'utf8' });
var cassettes = JSON.parse(file);

cassettes.forEach(function (cassette) {
  if (cassette.headers['content-encoding'] !== 'gzip') {
    return;
  }

  // the recorded response is a hex string of gzipped bytes
  var response = Buffer.from(cassette.response[0], 'hex');
  var contents = zlib.gunzipSync(response).toString('utf8');

  cassette.response = JSON.parse(contents);
  delete cassette.headers['content-encoding'];
});

fs.writeFileSync(filename, JSON.stringify(cassettes, null, 2), { encoding: 'utf8' });
Note: this will overwrite the original cassette file with one in which all gzipped responses have been converted to JSON. Also note that I'm not checking the content type, so you'll need to adapt this if you have responses that aren't JSON.
Use jq with xxd to extract and decode the response field (the recorded file is an array of cassettes, and response is an array of hex chunks):
jq -r '.[0].response[0]' file.json | xxd -r -p | gunzip
A little late to the party, but using Franco Rondini's and ChiperSoft's answers I came up with this:
const zlib = require('zlib');

// Goes from a hex representation of gzipped binary data to an object
module.exports.decode = input => {
  if (typeof input.join === 'function') {
    input = input.join('');
  }
  const tempBuffer = Buffer.from(input, 'hex');
  const unzippedBuffer = zlib.gunzipSync(tempBuffer);
  const contents = unzippedBuffer.toString('utf8');
  return JSON.parse(contents);
};

// Goes from an object to a zipped buffer encoded in hex
module.exports.encode = input => {
  const inputAsString = JSON.stringify(input);
  const tempBuffer = Buffer.from(inputAsString);
  const zippedBuffer = zlib.gzipSync(tempBuffer);
  return zippedBuffer.toString('hex');
};
This is probably not perfect but it was helpful to be able to replace replies on the fly with objects.
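For example, assuming the module above is saved as cassette-codec.js and the recorded cassette sits in cassette.json (both names are made up here), a recorded reply can be decoded, tweaked, and re-encoded like this:

const fs = require('fs');
const codec = require('./cassette-codec');

const cassettes = JSON.parse(fs.readFileSync('cassette.json', 'utf8'));

// decode the hex-encoded gzipped reply into a plain object
const reply = codec.decode(cassettes[0].response);
reply.errorMessage = 'Edited on the fly';

// re-encode it and put it back in the shape nock expects (an array of hex chunks)
cassettes[0].response = [codec.encode(reply)];

fs.writeFileSync('cassette.json', JSON.stringify(cassettes, null, 2));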