D3.csv not loading the csv file - javascript

For some reason, I am not able to load any .csv file with the d3.csv(...) function. I load a csv file with that function and print the data directly to the console.
This is what I do:
I create a file called irisData.csv using Notepad. It contains some data from the Iris dataset at https://archive.ics.uci.edu/ml/datasets/Iris:
sepal_length,sepal_width,petal_length,petal_width,species
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5,3.6,1.4,0.2,Iris-setosa
5.4,3.9,1.7,0.4,Iris-setosa
I write this code in my irisTest.html to print the data to the console and check that it works correctly.
...
d3.csv("irisData.csv", type, function(error, data){
console.log(data);
});
...
Not sure if this is relevant, but I will put it up anyway: in order to run my html, I use Node.js to run a server. This is the code for the server, server.js:
var http = require('http');
var fs = require('fs');

http.createServer(function(req, res){
    fs.readFile('./irisTest.html', function(err, data){
        if(err){
            res.writeHead(500, {'Content-Type': 'text/plain'});
            res.end('500 - Internal Error');
        }else{
            res.writeHead(200, {'Content-Type': 'text/html'});
            res.end(data);
        }
    });
}).listen(3000);
So what I would expect is that the console prints out an array of objects holding the data from the csv file. Something like this:
[
    {data...}
    {data...}
    {data...}
    ...
]
However, what I get is the code of my irisTest.html (the html source itself) wrapped into objects. I realized that it doesn't matter what I put instead of "irisData.csv" as the path in d3.csv("irisData.csv", ...); it always outputs my own code, as below. So I thought it might be a problem with the path to the csv file, but that shouldn't be the case: all files are in the same folder.
[
    ...
    {<!DOCTYPE html>: "d3.csv("irisData.csv", type, function(error, data){"}
    {<!DOCTYPE html>: "console.log(data);"}
    {<!DOCTYPE html>: "});}"}
    ...
]
Does anyone know what is going on?

As specified in the documentation here, an anonymous row-conversion function is expected where you have type (which is never defined in your snippet). I quote the example from the docs:
d3.csv("example.csv", function(d) {
return {
year: new Date(+d.Year, 0, 1), // convert "Year" column to Date
make: d.Make,
model: d.Model,
length: +d.Length // convert "Length" column to number
};
}, function(error, rows) {
console.log(rows);
});
So, in your case, reading the csv file should be done this way:
d3.csv("irisData.csv",function(data){
console.log(data);
},function(error, rows){
console.log(rows);
});
Here is a working example in a gist; check the console to see the data object.
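Note also that the server code in the question returns irisTest.html for every request, whatever URL is asked for, so d3.csv's request for irisData.csv also receives the HTML page back; that matches the output shown above exactly. Below is a minimal sketch of a handler that serves the CSV as well (file names as in the question; the URL-based routing is an assumption):

var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
    // serve the csv when d3.csv requests it, the page otherwise
    var isCsv = req.url === '/irisData.csv';
    var file = isCsv ? './irisData.csv' : './irisTest.html';
    fs.readFile(file, function (err, data) {
        if (err) {
            res.writeHead(500, {'Content-Type': 'text/plain'});
            return res.end('500 - Internal Error');
        }
        res.writeHead(200, {'Content-Type': isCsv ? 'text/csv' : 'text/html'});
        res.end(data);
    });
}).listen(3000);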

Related

Error creating json file in node js

I have followed many of the solutions provided in previous questions, but mine is not working. The problem is with the .json extension. Whenever I use filename.json, the app crashes with ERR_CONNECTION_RESET but successfully creates an empty .json file. However, if I change the extension to filename.txt, fs.writeFile successfully creates filename.txt with the data inside and the app works as expected. Did I miss any configuration needed to create the JSON file?
Here is the example code I used.
var fs = require('fs');

var jsonData = '{"persons":[{"name":"John","city":"New York"},{"name":"Phil","city":"Ohio"}]}';

// parse json
var jsonObj = JSON.parse(jsonData);
console.log(jsonObj);

// stringify JSON Object
var jsonContent = JSON.stringify(jsonObj);
console.log(jsonContent);

fs.writeFile("./public/output.json", jsonContent, 'utf8', function(err) {
    if (err) {
        console.log("An error occurred while writing JSON Object to File.");
        return console.log(err);
    }
    console.log("JSON file has been saved.");
});
So, ERR_CONNECTION_RESET means that the connection was closed midway. My guess, as in the comments, is that a file-watching dev server is restarting: it sees the new .json file appear under public/, reloads, and kills the in-flight connection. A .txt file presumably isn't in its watch list, which is why that extension works.
Try starting the watcher with --ignore public/**/*.json and it should work.
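If the watcher is nodemon (an assumption; the --ignore flag above matches nodemon's CLI), the same exclusion can also live in a nodemon.json file next to package.json:

{
    "ignore": ["public/**/*.json"]
}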

Will fs.readFile() cache the file's contents in the server's memory after the first read?

I would like to know if the following code caches the file's contents in the server's memory after reading it once. I ask because I don't want to re-read the file every time a user requests the page; I would prefer to have it cached after the first read.
fs.exists(fileName, function (exists) {
    if (!exists) {
        console.log("== 404 error");
        resp.writeHead(404, {'Content-Type': 'text/html'});
        resp.end(pageError);
        return;
    }
    fs.readFile(fileName, 'utf8', function (err, data) {
        if (err) {
            resp.writeHead(404, {"Content-Type": "text/html"});
            resp.end(pageError);
            return;
        }
        var contentType = getContentType(req.url);
        var mimeType = mimeTypes[contentType];
        resp.writeHead(200, {"Content-Type": mimeType});
        resp.end(data);
    });
});
NOTE: I only want to know how to do this using built-in Node.js modules (no Express).
You shouldn't use fs.exists() as it's deprecated; instead, use fs.stat() if you only want to check for existence. If you are going to open and read a file after checking for existence, just use fs.readFile() and handle the error for a nonexistent file accordingly. This is noted in the fs docs for fs.access() but applies to fs.stat() as well. Below is the excerpt from the Node.js docs.
Using fs.access() to check for the accessibility of a file before calling fs.open(), fs.readFile() or fs.writeFile() is not recommended. Doing so introduces a race condition, since other processes may change the file's state between the two calls. Instead, user code should open/read/write the file directly and handle the error raised if the file is not accessible.
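Following that advice, a minimal sketch of the question's handler without the existence check might look like this (fileName, resp, and pageError play the same roles as in the question's code):

const fs = require('fs');

function serveFile(fileName, resp, pageError) {
    fs.readFile(fileName, 'utf8', function (err, data) {
        if (err) {
            // most commonly ENOENT: the file does not exist, so serve the 404 page;
            // other I/O errors are treated the same way here for brevity
            resp.writeHead(404, {'Content-Type': 'text/html'});
            resp.end(pageError);
            return;
        }
        resp.writeHead(200, {'Content-Type': 'text/html'});
        resp.end(data);
    });
}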
fs.readFile() does not do any caching for you; this is something you'll need to create and manage yourself. The example below shows how to create a file cache using a JS object as a dictionary, keeping file contents indexed by filename. It's important not to put gigabytes of data in the fileCache object; this approach works well for lots of smaller files.
fileCache just needs to be in scope of getFileFromCache() and in a place that won't be garbage collected during runtime.
const fs = require('fs')

const fileCache = {}

const getFileFromCache = (filename, cb) => {
    if (fileCache[filename]) {
        return cb(null, fileCache[filename])
    }
    fs.readFile(filename, 'utf8', (err, data) => {
        if (err) {
            return cb(err)
        }
        fileCache[filename] = data
        return cb(null, data)
    })
}
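For instance, wired into the question's request handler (a sketch; fileName, resp, pageError, and mimeType as in the question's code):

getFileFromCache(fileName, (err, data) => {
    if (err) {
        resp.writeHead(404, {'Content-Type': 'text/html'})
        return resp.end(pageError)
    }
    resp.writeHead(200, {'Content-Type': mimeType})
    resp.end(data)
})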
Will fs.readFile() cache the file's contents in the server's memory after the first read?
No. fs.readFile() itself does no caching.
But, the underlying operating system will do file caching and as long as there isn't so much other file activity going on that the cached read gets flushed, then the OS will probably fetch the file from a local memory cache the 2nd, 3rd times you read it.
If you want to assure the caching yourself, then you should just store the contents yourself once you first read it and from then on, you can just use the previously read contents.
You could store the file data in a variable, or in a global variable (by using global.<varname> = <filedata>) if you want to access it across multiple modules.
Of course, as George Campbell said, any modification to the file won't be noticed by your program, since it won't re-read the file.
So I would do something like this:
function sendResponse(data) {
    let contentType = getContentType(req.url);
    let mimeType = mimeTypes[contentType];
    resp.writeHead(200, {"Content-Type": mimeType});
    resp.end(data);
}

if (global.fileData) {
    return sendResponse(global.fileData);
}

fs.readFile(fileName, 'utf8', function (err, data) {
    if (err) {
        resp.writeHead(404, {"Content-Type": "text/html"});
        resp.end(pageError);
        return;
    }
    global.fileData = data;
    sendResponse(global.fileData);
});
The first time, global.fileData will be empty, so you'll proceed with fs.readFile, store the file content in global.fileData, and send the response.
The second time, global.fileData will contain the data, so you just send the response with it and won't read the file again.
For further reference take a look at the official NodeJS documentation: https://nodejs.org/api/globals.html#globals_global
Another thing you should do is replace fs.exists with fs.access or fs.stat (I usually use fs.access), because the exists method is deprecated.
https://nodejs.org/api/fs.html#fs_fs_stat_path_callback
https://nodejs.org/api/fs.html#fs_fs_access_path_mode_callback
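If you still want a separate check, a minimal fs.access() sketch could look like this (fileName, resp, and pageError as in the question; the race-condition caveat quoted earlier still applies):

const fs = require('fs');

fs.access(fileName, fs.constants.F_OK, function (err) {
    if (err) {
        // the file is missing (or not visible to this process)
        resp.writeHead(404, {'Content-Type': 'text/html'});
        return resp.end(pageError);
    }
    // the file exists; proceed to read and serve it
});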
Happy coding!

How do I run a Frisby.js test inside a function

I can't figure out why this Frisby test won't run!
Basically I'm trying to import JSON from a file and check it against a response from a request. The test runner doesn't seem to find any tests when I run this file.
Could anyone suggest a better way to do this? I'm thinking about trying a different way to handle the file reading. I know about readFileSync(), but I don't want to use that if I don't have to! Any help would be appreciated.
var fs = require('fs')
var frisby = require('frisby')

function readContent(callback, url, file) {
    fs.readFile(file, 'utf8', function (err, content) {
        if (err) return callback(err)
        var data = JSON.parse(content)
        callback(null, data)
    })
}

readContent(function (err, content) {
    frisby.create('Testing API')
        .get(url)
        .expectStatus(200)
        .expectBodyContains(content)
        .toss()
}, url, file)
Here's one I prepared earlier:
// Read a JSON file from disk, and compare its contents with what comes back from the API.
fs.readFile(path.resolve(__dirname, 'GET_ReferenceTypes.json'), 'utf-8', function(error, data){
    if (error) throw error
    frisby.create('GET ReferenceTypes, inside readFile callback')
        .get(URL + 'ReferenceTypes?requestPersonId=2967&id=99')
        .expectStatus(200)
        // JSON.parse() is required to convert the string into a proper JSON object for comparison.
        // The .replace() strips the BOM character from the beginning of the unicode file.
        .expectJSON(JSON.parse(data.replace(/^\uFEFF/, '')))
        .toss();
});
Double check the encoding on your JSON file, because this whole thing comes apart without the .replace() call.

Node xml2js is returning 'undefined' when using parseString()

I'm using this package elsewhere and it's working just fine; however, in one particular case with one XML file I'm getting "undefined" errors.
Example:
var fs = require('fs');
var parseString = require('xml2js').parseString;

fs.readFile('./XML/theXMLfile13mb.xml', 'ascii', function(err, data){
    if (err) {
        console.log("Could not open file " + err);
        process.exit(1);
    }
    parseString(data, function (err, result) {
        console.log(result); // Returns undefined
        var json1 = JSON.stringify(result); // Gives an error
        var json = JSON.parse(json1);
    });
});
The xml2js docs don't really mention how this might happen or what it might mean. I've tried other XML files and they work fine. This particular XML file is no bigger than the others, nor does it appear to be any less intact (it opens fine in my browser and all the data is presented as expected).
Any ideas on how I could troubleshoot this?
You need to convert the data from a Buffer to a String; use this:
parseString(data.toString(), function (err, result) {
Instead of:
parseString(data, function (err, result) {
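Put together, the read-and-parse flow might look like this (a sketch using the question's file path; reading without an encoding yields a Buffer, which is then converted explicitly):

var fs = require('fs');
var parseString = require('xml2js').parseString;

fs.readFile('./XML/theXMLfile13mb.xml', function (err, data) {
    if (err) {
        console.log("Could not open file " + err);
        process.exit(1);
    }
    // data is a Buffer here, since no encoding was passed to readFile
    parseString(data.toString(), function (err, result) {
        if (err) return console.log(err);
        console.log(result); // the parsed object, no longer undefined
    });
});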

Render an image with nodejs

I am storing an image in mongodb with BinData type.
I can query the database using mongojs with this.
db.images.findOne({
    file_name: 'temp.jpg'
}, function(err, data){
    console.log(data.image); // image buffer appears on the console
    res.writeHead(200, {'Content-Type': 'image/jpg'});
    res.end(data.image);
});
This produces "TypeError: first argument must be a string or Buffer".
I am pretty sure this has something to do with buffers or encoding.
Can someone please explain what I should be doing to the image data before sending it to the browser?
Set the correct content type and the correct content length. Here is a short example of how I serve my files from MongoDB GridFS:
.getFileFromGridFs(doc, function(err, buffer){
    if (err) return next(err);
    // note: despite the name, the callback value is piped below, so it is a readable stream
    res.setHeader('content-type', doc.mimeType);
    res.setHeader('content-length', doc.size);
    buffer.pipe(res);
})
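Applied to the question's findOne handler, the fix might look something like this (a sketch: it assumes mongojs hands back the BinData field as a BSON Binary whose .buffer property holds the underlying Node Buffer; verify this against your driver version):

db.images.findOne({
    file_name: 'temp.jpg'
}, function (err, data) {
    if (err) throw err;
    // unwrap the BSON Binary into a plain Node Buffer if needed
    var img = data.image.buffer ? data.image.buffer : data.image;
    res.writeHead(200, {
        'Content-Type': 'image/jpg',
        'Content-Length': img.length
    });
    res.end(img);
});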
