I am using Node.js 8.0.0 and want to update a file on a platform. The platform provides an API with very clear usage instructions.
In this case I have to use the PUT method with the right hostname, path, and auth keys, and send the file itself with Content-Type: multipart/form-data. I really want to use the Node https module and avoid installing anything else.
I tried the request HTTP client (https://github.com/request/request) and it worked like a charm, but as I said, I would like to use what Node already ships with. I have seen working answers here that use request, but none that use the Node https module.
Using https.request I managed to reach the right URL and pass auth, but the platform always responds with a correlation error (specific to this platform, not something you can Google, I guess).
function update(method, path, params) {
  return new Promise((resolve, reject) => {
    https.request({
      method,
      host: HOST,
      path: path + (params ? '?' + qs.stringify(params) : ''),
      auth: `${USER}:${PASS}`,
      formData: document, // note: formData is a request-library option; the https module silently ignores it
    }, res => {
      let body = '';
      res
        .on('data', message => body += message)
        .on('error', e => reject(e))
        .on('end', () => resolve(body));
    })
      .end();
  });
}
Where:
const document = fs.createReadStream(path.resolve(__dirname, '../../src/', 'myfile.xlf'));
And I call the update function like this:
await operations.update('PUT', '/right/path/to/update', {
  id: `${rightId}`,
});
With this code I don't have any auth problems and I can communicate with the platform; in fact, with other API methods (GET, POST) I can retrieve statistics and the like. But in the case described above, the response is a 400 Bad Request error, which I am sure comes down to the way I am trying to send the file.
Using the request HTTP client, I get no errors and manage to update the document with this code:
function update(path, params) {
  const url = 'https://' + HOST + path + (params ? '?' + qs.stringify(params) : '');
  return new Promise((resolve, reject) => {
    try {
      resolve(requests.put({
        url,
        formData: document,
      }).auth(USER, PASS));
    } catch (err) {
      reject(err);
    }
  });
}
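For reference, here is a minimal sketch of what sending multipart/form-data with only the built-in https module involves: building the body by hand with a boundary, per-part headers, and a closing delimiter. The field name "file" and the part's Content-Type are assumptions; the platform's API docs dictate the real values (HOST, USER, PASS as in the question):
const https = require('https');
const fs = require('fs');
const path = require('path');

function uploadFile(requestPath) {
  return new Promise((resolve, reject) => {
    // Any unique string works as a boundary, as long as it does not occur in the file
    const boundary = '----NodeUploadBoundary' + Date.now().toString(16);
    const file = fs.readFileSync(path.resolve(__dirname, '../../src/', 'myfile.xlf'));
    const head = Buffer.from(
      '--' + boundary + '\r\n' +
      'Content-Disposition: form-data; name="file"; filename="myfile.xlf"\r\n' +
      'Content-Type: application/octet-stream\r\n\r\n'
    );
    const tail = Buffer.from('\r\n--' + boundary + '--\r\n');
    const body = Buffer.concat([head, file, tail]);

    const req = https.request({
      method: 'PUT',
      host: HOST,
      path: requestPath,
      auth: `${USER}:${PASS}`,
      headers: {
        'Content-Type': 'multipart/form-data; boundary=' + boundary,
        'Content-Length': body.length, // length in bytes of the assembled body
      },
    }, res => {
      let data = '';
      res.on('data', chunk => data += chunk);
      res.on('end', () => resolve(data));
    });
    req.on('error', reject);
    req.end(body);
  });
}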
Problem
This is related to Get UTF-8 html content with Node's http.get, but that answer isn't working for me.
I'm trying to call the Stack Overflow API questions endpoint:
https://api.stackexchange.com/2.3/questions?site=stackoverflow&filter=total
Which should return the following JSON response:
{"total":21951385}
Example Code
I'm using the Node https module to submit a GET request like this:
import { get, RequestOptions } from 'https'

const getRequest = (url: string) => new Promise((resolve, reject) => {
  const options: RequestOptions = {
    headers: {
      'Accept': 'text/*',
      'Accept-Encoding': 'identity',
      'Accept-Charset': 'utf8',
    }
  }
  const req = get(url, options, (res) => {
    res.setEncoding('utf8');
    let responseBody = '';
    res.on('data', (chunk) => responseBody += chunk);
    res.on('end', () => resolve(responseBody));
  });
  req.on('error', (err) => reject(err));
  req.end();
})
And then invoking it like this:
const questionsUrl = 'https://api.stackexchange.com/2.3/questions?&site=stackoverflow&filter=total'
const resp = await getRequest(questionsUrl)
console.log(resp)
However, I get the response:
▼�
�V*�/I�Q�22�454�0�♣��♥‼���↕
What I've Tried
I've tried several variations of the following:
- Calling setEncoding('utf8') on the response stream
- Setting the Accept header to text/*, which "provides a text MIME type, but without a subtype"
- Setting the Accept-Encoding header to identity, which "indicates the identity function (that is, without modification or compression)"
This code also works just fine with pretty much any other API server, for example using the following url:
https://jsonplaceholder.typicode.com/todos/1
But the Stack Overflow API URL works everywhere else I've tried it, so there must be a way to instruct Node to handle it.
My suggestion is to use an http library that supports both promises and gzip built in. My current favorite is got(). http.get() is like the least featured http request library anywhere. You really don't have to write all this yourself. Here's what your entire code would look like with the got() library:
const got = require('got');
function getRequest(url) {
  return got(url).json();
}
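Called like this (same endpoint as in the question; top-level await is assumed here):
const data = await getRequest('https://api.stackexchange.com/2.3/questions?site=stackoverflow&filter=total');
console.log(data); // { total: 21951385 }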
This library handles all of these things for you automatically:
- Promises
- JSON conversion
- Gzip decoding
- 2xx status detection (other status codes like 404 are turned into a promise rejection, which your code does not do)
And it has many, many other useful features for general use. The days of coding manually with http.get() should be long over. No need to rewrite code that has already been written and well-tested for you.
FYI, there's a list of very capable http libraries here: https://github.com/request/request/issues/3143. You can pick the one that has the API you like the best.
Response Header - Content-Encoding - Gzip
As jfriend00 pointed out, it looks like the server isn't respecting the Accept-Encoding value being passed and is returning a gzipped response nonetheless.
Unzipping Response
According to the answer on How do I ungzip (decompress) a NodeJS request's module gzip response body?, you can unzip like this:
import { get } from 'https'
import { createGunzip } from 'zlib'
const getRequest = (url: string) => new Promise((resolve, reject) => {
  const req = get(url, (res) => {
    const buffer: string[] = [];
    if (!res.headers['content-encoding']?.includes('gzip')) {
      console.log('utf8')
      res.setEncoding('utf8'); // ensures chunks arrive as strings, matching the string[] buffer
      res.on('data', (chunk) => buffer.push(chunk));
      res.on('end', () => resolve(buffer.join("")))
    } else {
      console.log('gzip')
      const gunzip = createGunzip();
      res.pipe(gunzip);
      gunzip.on('data', (data) => buffer.push(data.toString()))
      gunzip.on("end", () => resolve(buffer.join("")))
      gunzip.on("error", (e) => reject(e))
    }
  });
  req.on('error', (err) => reject(err));
  req.end();
})
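Usage sketch (same endpoint as above; parsing the body as JSON is an assumption about the payload):
const body = await getRequest('https://api.stackexchange.com/2.3/questions?site=stackoverflow&filter=total');
console.log(JSON.parse(body)); // { total: ... }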
I am currently trying to send an image captured via ngx-webcam directly to a Face Detection API through my Node.js backend, without saving it to the server first. The problem is that I keep getting a header error in my Node.js code. How can I resolve this issue?
I noticed that the image URL being passed is quite long. Could that be an issue?
Image url:
"data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAMCAgICAgMCAgIDAwMDBAYEBAQEBAgGBgUGCQgKCgkICQkKDA8MCgsOCwkJDRENDg8QEBEQCgwSExIQEw8QEBD/2wBDAQMDAwQDBAgEBAgQCwkLEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBD/wAARCAHgAoADASIAAhEBAxE..."
My error is:
TypeError [ERR_HTTP_INVALID_HEADER_VALUE]: Invalid value "undefined" for header "Content-Length"
at ClientRequest.setHeader (_http_outgoing.js:473:3)
at FormData.<anonymous> (C:\Users\Roger\Documents\GitHub\angular-face-recognition-app\back-end\node_modules\form-data\lib\form_data.js:321:13)
at C:\Users\Roger\Documents\GitHub\angular-face-recognition-app\back-end\node_modules\form-data\lib\form_data.js:265:7
at C:\Users\Roger\Documents\GitHub\angular-face-recognition-app\back-end\node_modules\form-data\node_modules\async\lib\async.js:251:17
at done (C:\Users\Roger\Documents\GitHub\angular-face-recognition-app\back-end\node_modules\form-data\node_modules\async\lib\async.js:126:15)
at C:\Users\Roger\Documents\GitHub\angular-face-recognition-app\back-end\node_modules\form-data\node_modules\async\lib\async.js:32:16
at C:\Users\Roger\Documents\GitHub\angular-face-recognition-app\back-end\node_modules\form-data\node_modules\async\lib\async.js:248:21
at C:\Users\Roger\Documents\GitHub\angular-face-recognition-app\back-end\node_modules\form-data\node_modules\async\lib\async.js:572:34
at C:\Users\Roger\Documents\GitHub\angular-face-recognition-app\back-end\node_modules\form-data\lib\form_data.js:105:13
at FSReqWrap.oncomplete (fs.js:153:21)
Front end: Angular
Component file:
// captures an image
public handleImage(webcamImage: WebcamImage): void {
  // stores it in the webcamImage variable
  this.webcamImage = webcamImage;
  // uses fda.sendImage to send webcamImage to the API via a service
  this.fda.sendImage(this.webcamImage.imageAsDataUrl).subscribe(res => {});
}
Service file
sendImage(imgUrl) {
  console.log(imgUrl);
  const obj = {
    url: imgUrl
  };
  return this.http.post(`${this.uri}`, obj);
}
Backend: node.js
Route file
facedetAPIRoutes.route("/").post(function (req, res) {
  let imageUrl = req.body.url;
  myFaceDetAPI.recognizeImg(imageUrl).then(function (result) {
    // here is your response back
    res.json(result);
  });
});
Function file for the API call (uses a promise):
// I believe the problem lies somewhere in here
this.recognizeImg = (url) => {
  let requestString = "https://lambda-face-recognition.p.rapidapi.com/recognize";
  let imgURL = url;
  let promise = new Promise(function (resolve, reject) {
    unirest.post(requestString)
      .header("X-RapidAPI-Key", API_KEY)
      .attach("files", fs.createReadStream(imgURL)) // imgURL is a data URL here, not a file path
      .field("album", ALBUM_NAME)
      .field("albumkey", ALBUM_KEY)
      .end(result => {
        console.log("successfully recognized image");
        resolve(result.body); // giving the response back
      });
  });
  return promise;
}
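A side note (an assumption based on the error trace, not something confirmed in the question): fs.createReadStream expects a filesystem path, but the value arriving here is a base64 data URL, which would explain form-data failing to compute a Content-Length. A hypothetical helper to decode it into a Buffer first:
// Hypothetical helper: decode a base64 data URL into a Buffer
function dataUrlToBuffer(dataUrl) {
  const base64 = dataUrl.split(',')[1]; // strip the "data:image/jpeg;base64," prefix
  return Buffer.from(base64, 'base64');
}
The Buffer could then be written to a temporary file or attached directly, depending on what the HTTP client supports.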
You should try adding x-rapidapi-host and content-type headers.
.headers({
  "content-type": "application/x-www-form-urlencoded",
  "x-rapidapi-host": "lambda-face-recognition.p.rapidapi.com",
  "x-rapidapi-key": "",
  "useQueryString": true
})
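For context, a sketch of those headers merged into the unirest chain from the question (API_KEY, ALBUM_NAME, and ALBUM_KEY as before; whether the API wants urlencoded or multipart content is an assumption to verify against the RapidAPI docs):
unirest.post("https://lambda-face-recognition.p.rapidapi.com/recognize")
  .headers({
    "content-type": "application/x-www-form-urlencoded",
    "x-rapidapi-host": "lambda-face-recognition.p.rapidapi.com",
    "x-rapidapi-key": API_KEY,
    "useQueryString": true
  })
  .attach("files", fs.createReadStream(imgURL))
  .field("album", ALBUM_NAME)
  .field("albumkey", ALBUM_KEY)
  .end(result => {
    console.log(result.body);
  });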
I am developing a React chat application with Electron (for desktops). I want to make HTTP requests to certain websites to get URL metadata (Open Graph, schema.org, Twitter Card, etc.), which cannot be done without disabling webSecurity in Electron.
a) Is it a good idea to disable webSecurity in Electron, given that users can send each other pretty much anything?
b) I have managed to achieve this with Electron's net package. I used it in React (the renderer process) and it works smoothly, with no need to disable webSecurity. However, when an invalid URL is provided it throws an exception in the main process (net::ERR_NAME_NOT_RESOLVED), which pops up an error dialog box. Is there a way to catch this exception in the renderer process?
Below is how I used the Electron net package.
const { net } = window.require('electron').remote

function ScrapeMeta(url) {
  var promise = new Promise((resolve, reject) => {
    const options = {
      url: url,
      timeout: 2000
    };
    const request = net.request(options)
    // without an 'error' listener, a failed lookup throws in the main process
    request.on('error', (err) => reject(err))
    request.on('response', (response) => {
      var body = '';
      response.on('data', function (d) {
        body += d;
      });
      response.on('end', () => {
        if (response.statusCode == 200) {
          ParseMeta(body)
            .then(meta => resolve(meta))
            .catch(err => reject(err))
        } else {
          reject("request failed with " + response.statusCode);
        }
      })
    })
    request.end();
  })
  return promise;
}
What is the best way to achieve this? Thanks.
I am currently in the process of creating a REST API for my personal website. I'd like to include some downloads, and I would like to offer the possibility of selecting multiple ones and downloading those as a zip file.
My first approach was pretty easy: an array of URLs, a request for each of them, zip everything, send it to the user, delete it. However, I think this approach is too dirty, considering there are things like streams around, which seem quite fitting for this task.
Now, I have been experimenting and am currently struggling with the basic concept of working with streams and events across different scopes.
The following worked:
const r = request(url, options);
r.on('response', function (res) {
  res.pipe(fs.createWriteStream('./file.jpg'));
});
From my understanding, r is an incoming stream in this scenario. I listen for the response event on it, and as soon as it occurs, I pipe it to a stream that I use to write to the file system.
My first step was to refactor this to fit my case better, but I already failed here:
async function downloadFile(url) {
  return request({ method: 'GET', uri: url });
}
Now I wanted a function which calls downloadFile() with different URLs and saves all those files to disk using createWriteStream() again:
const urls = ['https://download1', 'https://download2', 'https://download3'];
urls.forEach((element, i) => {
  downloadFile(element).then(data => {
    // each download needs its own file name, otherwise they overwrite each other
    data.pipe(fs.createWriteStream(`file${i}.jpg`));
  });
});
Using the debugger I found out that the "response" event is non-existent on the data object. Maybe that's already the issue? Moreover, I figured out that data.body contains the bytes of my downloaded document (a PDF in this case), so I wonder if I could just stream this somewhere else?
After reading some Stack Overflow threads I found the following module: archiver.
Reading this thread: Dynamically create and stream zip to client
@dankohn suggested an approach like this:
archive
  .append(fs.createReadStream(file1), { name: 'file1.txt' })
  .append(fs.createReadStream(file2), { name: 'file2.txt' });
This makes me assume I need to be able to extract a stream from my data object to proceed.
Am I on the wrong track here, or am I getting something fundamentally wrong?
Edit: lmao thanks for fixing my question I dunno what happened
Using archiver seems to be a valid approach; however, it is advisable to use streams when feeding large data from the web into the zip archive, otherwise the whole archive data would need to be held in memory.
zip-stream, which archiver builds on, exposes a minimal API for adding one entry at a time from a stream. For reading a stream from the web, request comes in handy.
Example
// npm install --save express zip-stream request
const request = require('request');
const ZipStream = require('zip-stream');
const express = require('express');

const app = express();

app.get('/archive.zip', (req, res) => {
  var zip = new ZipStream();
  zip.pipe(res);

  var stream = request('https://loremflickr.com/640/480');
  zip.entry(stream, { name: 'picture.jpg' }, err => {
    if (err)
      throw err;
  });
  zip.finalize();
});

app.listen(3000);
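Requesting http://localhost:3000/archive.zip then streams each entry into the response as it is fetched, so the archive is never buffered to disk or held fully in memory.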
Update: Example for using multiple files
Adding an example which processes the next file in the callback function of zip.entry() recursively.
app.get('/archive.zip', (req, res) => {
  var zip = new ZipStream();
  zip.pipe(res);

  var queue = [
    { name: 'one.jpg', url: 'https://loremflickr.com/640/480' },
    { name: 'two.jpg', url: 'https://loremflickr.com/640/480' },
    { name: 'three.jpg', url: 'https://loremflickr.com/640/480' }
  ];

  function addNextFile() {
    var elem = queue.shift();
    var stream = request(elem.url);
    zip.entry(stream, { name: elem.name }, err => {
      if (err)
        throw err;
      if (queue.length > 0)
        addNextFile();
      else
        zip.finalize();
    });
  }

  addNextFile();
});
Using Async/Await
You can encapsulate zip.entry() in a promise and process the queue with async/await (assuming the surrounding handler is async):
for (const elem of queue) {
  const stream = request(elem.url);
  await new Promise((resolve, reject) => {
    zip.entry(stream, { name: elem.name }, err => {
      if (err) reject(err);
      else resolve();
    });
  });
}
zip.finalize();
I have my own REST API to call in order to download a file. (In the end, the file could be stored on different kinds of servers: Amazon S3, locally, etc.)
To get a file from S3, I should use this method:
var url = s3.getSignedUrl('getObject', params);
This will give me a downloadable link to call.
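For reference, a sketch of the params object this call expects (bucket, key, and expiry are placeholders; Expires is the link lifetime in seconds):
var params = {
  Bucket: 'my-bucket',
  Key: 'path/to/my-file.pdf',
  Expires: 60
};
var url = s3.getSignedUrl('getObject', params);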
Now, my question is: how can I use my own REST API to download a file when it comes from that link? Is there a way to redirect the call?
I'm using Hapi for my REST server.
{
  method: "GET", path: "/downloadFile",
  config: { auth: false },
  handler: function (request, reply) {
    // TODO
    reply({})
  }
},
Instead of using a redirect to download the desired file, just return an unbuffered stream from S3. An unbuffered stream can be obtained from the HttpResponse within the AWS SDK. This means there is no need to download the file from S3 first, read it in, and then have the requester download it again.
FYI, I use this getObject() approach with Express and have never used Hapi; however, I think the route definition below is pretty close and hopefully captures the essence of what I'm trying to achieve.
Hapi.js route
const getObject = require('./getObject');

{
  method: "GET", path: "/downloadFile",
  config: { auth: false },
  handler: function (request, reply) {
    let key = '';    // get key from request
    let bucket = ''; // get bucket from request
    return getObject(bucket, key)
      .then((response) => {
        reply.statusCode(response.statusCode);
        // headers is a plain object, so iterate over its keys
        Object.keys(response.headers).forEach((header) => {
          reply.header(header, response.headers[header]);
        });
        return reply(response.readStream);
      })
      .catch((err) => {
        // handle err
        reply.statusCode(500);
        return reply('error');
      });
  }
},
getObject.js
const AWS = require('aws-sdk');
const S3 = new AWS.S3(<your-S3-config>);

module.exports = function getObject(bucket, key) {
  return new Promise((resolve, reject) => {
    // Get the file from the bucket
    S3.getObject({
      Bucket: bucket,
      Key: key
    })
      .on('error', (err) => {
        return reject(err);
      })
      .on('httpHeaders', (statusCode, headers, response) => {
        // If the Key was found inside Bucket, prepare a response object
        if (statusCode === 200) {
          let responseObject = {
            statusCode: statusCode,
            headers: {
              'Content-Disposition': 'attachment; filename=' + key
            }
          };
          if (headers['content-type'])
            responseObject.headers['Content-Type'] = headers['content-type'];
          if (headers['content-length'])
            responseObject.headers['Content-Length'] = headers['content-length'];
          responseObject.readStream = response.httpResponse.createUnbufferedStream();
          return resolve(responseObject);
        }
      })
      .send();
  });
}
Return an HTTP 303 redirect with the Location header set to the object's public URL in the S3 bucket.
If your bucket is private, then you need to proxy the request instead of performing a redirect, unless your clients also have access to the bucket (or you redirect to a pre-signed URL).
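A minimal sketch of that redirect in the question's Hapi route (assuming a Hapi version with reply.redirect() and the pre-signed URL from s3.getSignedUrl(); Bucket and Key are placeholders):
{
  method: "GET", path: "/downloadFile",
  config: { auth: false },
  handler: function (request, reply) {
    // A pre-signed URL also works for private buckets
    var url = s3.getSignedUrl('getObject', { Bucket: 'my-bucket', Key: 'my-file.pdf' });
    reply.redirect(url); // 302 by default; see the Hapi docs for other redirect codes
  }
},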