I'm a node.js beginner. I'm trying to request a JSON file from a URL (e.g. 'http://www.example.com/sample_data.json').
My goal is to download/request the file only once when the server loads and then save it on the client side so I can manipulate/change it locally.
I tried
var file = request('http://example.com/sample_data.json')
but it returns an import module error.
If anyone could give me a start that would be great!
thanks
To do that I would use the request module.
var request = require('request');

request('http://example.com/sample_data.json', function (error, response, body) {
    if (!error && response.statusCode == 200) {
        // body is a string; parse it to get a usable object
        var importedJSON = JSON.parse(body);
        console.log(importedJSON);
    }
});
For more information about the module check this link: https://github.com/request/request
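Since your goal above is to also save the file locally after the first request, here is a minimal sketch (assuming the example URL; the local filename is just illustrative) that persists the parsed JSON with Node's built-in fs module:

var request = require('request');
var fs = require('fs');

request('http://example.com/sample_data.json', function (error, response, body) {
    if (!error && response.statusCode == 200) {
        var importedJSON = JSON.parse(body);
        // Persist the JSON locally so it can be reloaded and modified later
        fs.writeFile('sample_data.json', JSON.stringify(importedJSON, null, 2), function (err) {
            if (err) throw err;
            console.log('Saved sample_data.json');
        });
    }
});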
Just some basics about node, and some first things to try:
1) request is a good choice to use for getting the file, but did you do an npm install? "npm install request --save"
2) in order to use the module, you have to "require" it at the top of your code, like: var request = require('request');
I'd start by checking those things first.
Related
I've started to build a TypeScript library (intended to be used on the server side), and right now I'm trying to use the Node REPL to play around with my code and see what happens in certain situations. I've built and required the file, but now I'm having a problem: I have a function that takes an HTTP request (type Request from express.js), and I'd like to run it in the REPL, providing it with a copy of a request that I previously made from my browser. Is this feasible?
I thought maybe I could do it by either:
doing regex magic on the request exported as cURL or
sending the request to node, but then how am I going to receive it while in the repl?
I'm not sure I understand your use-case, but you can try something like this:
In some temp folder type:
npm install "request-promise"
Then from the same temp folder, enter the REPL and type:
(async () => {const response = await require("request-promise").get("https://cnn.com"); console.log(response)})()
This example uses GET, but it can easily be changed to other HTTP methods.
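For instance, a POST with a JSON body might look like this sketch (the URL and payload are placeholders):

(async () => {
    const response = await require("request-promise").post({
        uri: "https://example.com/api",  // placeholder URL
        body: { name: "test" },          // placeholder payload
        json: true                       // serialize the body and parse the response as JSON
    });
    console.log(response);
})()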
I've found a fairly simple way to do what I want. It involved quickly setting up a basic Express server (set up following this tutorial):
mkdir scratch && cd scratch && npm init
(select the defaults, except set the entry point to app.js)
npm i express
Create an app.js (vi app.js) with the following contents:
var express = require('express');
var app = express();

app.get('/', function (req, res) {
    // Re-created per request: tracks objects already serialized so circular references are dropped
    var cache = [];
    res.send(JSON.stringify(req, (key, value) => {
        if (typeof value === 'object' && value !== null) {
            // Duplicate reference found, discard key to break the cycle
            if (cache.includes(value)) return;
            // Store value in our collection
            cache.push(value);
        }
        return value;
    }));
});

app.listen(3000, function () {
    console.log('Example app listening on port 3000!');
});
(See this answer for JSON.stringify's custom replacer, i.e. its second argument.) You can optionally use the flatted package instead, which I discovered later and which is surely better.
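With flatted (npm install flatted), the handler shrinks to something like this sketch; note that flatted's output is not plain JSON and has to be read back with flatted's parse rather than JSON.parse:

var express = require('express');
var { stringify } = require('flatted');
var app = express();

app.get('/', function (req, res) {
    // flatted serializes circular structures without a custom replacer
    res.send(stringify(req));
});

app.listen(3000);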
Now do the following:
Run the app with node app.js
In your browser, navigate to the website where your desired request is posted to.
Open your browsers development tools (Ctrl+shift+c works for me).
Go to the network tab.
Find the request that interests you and right click on it.
Click on copy > copy as curl (or similar, depending on which browser you're using).
Run that curl request, but change the URL it's posted to to 127.0.0.1:3000 (e.g. change curl 'example.com' \...etc to curl '127.0.0.1:3000' \...etc).
You should now get that request on standard output as a JSON object, and it's in the format that Express usually deals with. Yay! Now pipe it into your clipboard (likely xclip -selection c on Linux) or, probably even better, redirect it to a file.
...
Step 2 - ?
Step 3 - Profit :)
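Once you have the file, back in the REPL you can load the captured request and feed it to the function under test. A sketch, where the filename and myHandler are just illustrative:

const fs = require('fs');

// Load the captured request (filename is illustrative)
const req = JSON.parse(fs.readFileSync('captured-request.json', 'utf8'));
myHandler(req); // myHandler stands in for your function that takes a Request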
I crawled some content with JavaScript and want to print it in HTML.
The JavaScript code below is in a file named 'js.js' (it worked well from the command line):
var request = require('request');
var cheerio = require('cheerio');

request('...URL...', function (err, res, body) {
    if (err) console.log('Err :' + err);
    var $ = cheerio.load(body);
    $('.class').each(function () {
        var content = $(this).find('.abc').text().trim();
        document.write(content);
    });
});
But "error:require is not defined" was printed, so I looking for solutions.
I found this page and followed the advice, which said to use webpack or Browserify.
The new code (2 MB after bundling) gives me two new errors: "failed to fetch" and "access-control-allow-origin". What should I do?
The require() function does not exist in browser/client JavaScript; that is why you need to use webpack to bundle Node.js code into browser-compatible JavaScript.
For the "access-control-allow-origin", the url you are tying to connect to does not allow response to unknown origin.
If you own the API/URL, you could add a response header Access-Control-Allow-Origin: *.
For reference: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin
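If the server happens to be an Express app you control, a minimal sketch of adding that header to every response (the route and payload are just illustrative):

var express = require('express');
var app = express();

// Allow any origin to read responses from this server
app.use(function (req, res, next) {
    res.set('Access-Control-Allow-Origin', '*');
    next();
});

app.get('/data', function (req, res) {
    res.json({ ok: true }); // illustrative payload
});

app.listen(3000);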
I want to download a page (https://www.csfd.cz/tvurce/65871) in Node.js, but I just get random data.
�}Ms�F������+i"��)�Jْ;�e���7�KM0��LƩ��]��Yg��b��
Ow7U��J�#�K�9��L
I thought it was just wrong encoding, but even the size is wrong (the downloaded page has 44 KB, whereas this file has only 19 KB). What's more surprising is that simply downloading it with Python works fine.
Python code:
import requests
url = "https://www.csfd.cz/tvurce/65871"
r = requests.get(url)
with open('pyth.txt', 'wb') as handle:
    handle.write(r.content)
JavaScript code:
const request = require('request-promise')
const fs = require('fs')
request('https://www.csfd.cz/tvurce/65871').then((html) => {
    fs.writeFileSync('output.html', html)
})
I also tried additional methods like request.get with parameters and so on, but still the same result. Can you please tell me what I am doing wrong?
Use the gzip option of the request module; see the examples in the request module docs (https://github.com/request/request).
You also need the followRedirect and followAllRedirects parameters to automatically follow 301 and 302 redirects, because your request is returning a 302:
curl -X GET https://www.csfd.cz/tvurce/65871 --compressed -v -i
Response : 302
<h1>Redirect</h1>
<p><a href="https://www.csfd.cz/tvurce/65871-kit-harington/">Please
click here to continue</a>.</p>
In addition, replace your writeFileSync with the standard asynchronous writeFile function:
const request = require('request')
const fs = require('fs')

request.get({
    url: 'https://www.csfd.cz/tvurce/65871',
    gzip: true,               // decompress the gzipped response body
    followRedirect: true,
    followAllRedirects: true  // follow redirects on non-GET requests too
}, function (err, response, body) {
    if (err || !response || response.statusCode != 200) {
        // error case, handle it here
    } else {
        fs.writeFile('output.html', body, 'utf8', function (err) {
            if (err) {
                // write failed, handle it here
            } else {
                // success
            }
        });
    }
})
I tried different things, different options and encodings, some parsers, and I didn't get it to work with request and request-promise. From the docs, I would say you aren't doing anything wrong.
I tried then a different module, unirest (npm install unirest --save), and it worked out of the box.
const unirest = require('unirest');
const fs = require('fs');
unirest.get('https://www.csfd.cz/tvurce/65871')
    .end(function (res) {
        console.log(res.body);
        fs.writeFileSync('output.html', res.body)
    });
Hope this is of help.
Read the Content-Encoding header. It's most likely compressed, which would explain the size difference.
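A quick way to check it, sketched with the same request module and the URL from the question:

const request = require('request');

request.get('https://www.csfd.cz/tvurce/65871', function (err, response) {
    if (err) throw err;
    // If this prints "gzip" or "deflate", the body arrived compressed
    console.log(response.headers['content-encoding']);
});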
I've distilled my issue down to some really basic functionality here. Basically, we're sending a request to a server (you can go ahead and copy/paste the URL and see the JSON document we get in response).
We get the response and pipe it into a write stream to save it as a .json file, but the problem is that the file keeps being cut off. Is the .json file too large? Or am I missing something? I'm a Node.js newbie; I'd massively appreciate any help I can get.
var fs = require('fs');
var url = 'https://crest-tq.eveonline.com/market/10000002/history/?type=https://crest-tq.eveonline.com/inventory/types/34/'
var request = require('request');
request(url).pipe(fs.createWriteStream('34_sell.json'));
var fs = require('fs');
var request = require('request');
var url = 'https://crest-tq.eveonline.com/market/10000002/history/?type=https://crest-tq.eveonline.com/inventory/types/34/';
request(url).pipe(fs.createWriteStream('34_sell.json'));
Not an answer, but this is the code I used. I'm using request version 2.74.0 and Node version v5.4.1.
Try writing a GET request to the URL and sending the JSON as the response, and add an error-handling statement like if (err) throw err. Console-log it and see the result. Hope it works.
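In that spirit, a sketch that adds error and completion handlers around the same pipe (the URL is the one from the question), so a truncated or failed download at least surfaces an error:

var fs = require('fs');
var request = require('request');

var url = 'https://crest-tq.eveonline.com/market/10000002/history/?type=https://crest-tq.eveonline.com/inventory/types/34/';

request(url)
    .on('error', function (err) { throw err; })          // network/request errors
    .pipe(fs.createWriteStream('34_sell.json'))
    .on('error', function (err) { throw err; })          // file system errors
    .on('finish', function () { console.log('done'); }); // stream fully flushed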
What is wrong with this code? I want to pull my hair out! I'm getting JSON from the Instagram API. Console-logging just body gives me the JSON, but when I do something like body.data or body.pagination, I get nothing! Help please, and thank you.
var express = require("express"),
app = express(),
https = require("https"),
fs = require("fs"),
request = require("request");
request("https://api.instagram.com/v1/tags/nofilter/media/recent?access_token=xxxxx&scope=public_content&count=1", function(error, response, body) {
if (!error && response.statusCode == 200) {
console.log(body) // returns all the relevant JSON
console.log(body.data) // **returns undefined!!!!!**
}
}).pipe(fs.createWriteStream("./test.txt"))
body is literally what it says on the tin: the body of the HTTP response. In other words, it's a string, so you need to run it through JSON.parse to actually access it as an object.
console.log(JSON.parse(body).data);
Obviously, if you were going to use this for real, you'd assign the parsed object to a variable rather than running it every time you access it.
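For example:

var parsed = JSON.parse(body);  // parse once, reuse everywhere
console.log(parsed.data);       // now defined
console.log(parsed.pagination); // also accessible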