I'm trying to push a new object into an external JSON document, but I am having problems pulling in the JSON file to write to. I have an external, local file called check.json.
How do I call the external JSON file correctly in Node?
var newData = JSON.parse(check.json);
newData.check.push({
  cheap: $el.text()
});
check.json = JSON.stringify(newData);
You can use the fs (file system) module to read and write files; specifically, the readFile and writeFile methods.
For example, to read:
const fs = require('fs');

fs.readFile('/path/to/my/json.json', function (err, data) {
  if (err) throw err;
  var newData = JSON.parse(data); // data is a Buffer; JSON.parse accepts it
});
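And to write the modified object back, here is a minimal read-modify-write sketch; the path, the check array, and the pushed value mirror the question and are assumptions:
const fs = require('fs');

fs.readFile('/path/to/my/json.json', function (err, data) {
  if (err) throw err;
  var newData = JSON.parse(data);
  newData.check.push({ cheap: 'some value' }); // stand-in for $el.text() from the question
  fs.writeFile('/path/to/my/json.json', JSON.stringify(newData), function (err) {
    if (err) throw err;
  });
});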
That said, flat files are not a good fit for data that is read and written like a database. You can run into race conditions and lose data. You would be better off with a real database that takes care of that sort of thing for you.
SQLite gives you a simple single file (but puts lots of protection around it). If you really want JSON as the storage format, then you could look at something like CouchDB.
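For illustration, a minimal sketch of the SQLite route, assuming the sqlite3 npm package is installed and a hypothetical table layout:
// Sketch only: the table name, column, and inserted value are made up.
const sqlite3 = require('sqlite3').verbose();
const db = new sqlite3.Database('check.db');

db.serialize(function () {
  db.run('CREATE TABLE IF NOT EXISTS checks (cheap TEXT)');
  db.run('INSERT INTO checks (cheap) VALUES (?)', ['$9.99']);
});

db.close();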
I'm trying to make a live JSON database of IDs and a tag. The database refreshes by reading from the JSON file, and I have traced my problem to Node.js not writing to disk, though I don't quite know why.
This is my reading operation, and yes there is a file there with proper syntax.
let dbraw = fs.readFileSync('db.json');
var db = JSON.parse(dbraw);
This is my writing operation, where I need to save the updates to disk.
var authorid = msg.author.id
db[authorid] = "M";
fs.writeFileSync('db.json', JSON.stringify(db));
Am I doing something wrong? Is it a simple error I am just forgetting? Is there an easier or more efficient way to do this that I am forgetting about? I can't seem to figure out what exactly is going wrong, but it has something to do with these two bits. There are no errors in my console, just the blank JSON file it reads every time in the read operation.
There is a problem with your JSON file's path.
Try using __dirname.
__dirname tells you the absolute path of the directory containing the currently executing file.
— source (DigitalOcean)
Example:
If the JSON file is in the root directory:
let dbraw = fs.readFileSync(__dirname + '/db.json');
var db = JSON.parse(dbraw);
If the JSON file is in a subdirectory:
let dbraw = fs.readFileSync(__dirname + '/myJsonFolder/' + 'db.json');
var db = JSON.parse(dbraw);
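As a small variation on the same idea, path.join can build the path instead of string concatenation; this is just a sketch assuming the same folder layout:
const path = require('path');
const fs = require('fs');

// Equivalent to the concatenation above, but path.join handles the separators for you.
let dbraw = fs.readFileSync(path.join(__dirname, 'myJsonFolder', 'db.json'));
var db = JSON.parse(dbraw);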
Side note: I suggest you read about Google Firestore, as it will be a faster way to work with real-time updates.
Here's a simple block that does what is desired
const fs = require('fs');
let file_path = __dirname + '/db.json',
dbraw = fs.readFileSync(file_path),
db = JSON.parse(dbraw),
authorid = 'abc';
console.log(db);
db[authorid] = "M";
fs.writeFileSync(file_path, JSON.stringify(db));
dbraw = fs.readFileSync(file_path); // re-read the file to confirm the write
db = JSON.parse(dbraw);
console.log(db);
I've added a couple of log statements for debugging. This works and so there may be something else that's missing or incorrect in your flow. The most probable issue would be that of different path references as pointed out by jfriend00 in the comment to your question.
As for better solutions, here are a few suggestions:
Use require for the JSON file directly instead of reading it with fs if the file is small; require will do the parsing for you (see the sketch after this list).
Use the async fs functions.
Stream the file if it is large.
Consider a cache such as Redis, or a database, as the storage layer to reduce your app's serialization and deserialization overhead.
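A minimal sketch of the first two suggestions; the file name is hypothetical:
// Small file: let require read and parse it for you (the result is cached per path).
const db = require('./db.json');

// Async fs functions instead of the *Sync variants, so the event loop is not blocked.
const fsp = require('fs').promises;

async function save(db) {
  await fsp.writeFile(__dirname + '/db.json', JSON.stringify(db));
}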
I am looking to access the locally created PDF on the server (Node.js/Express) side. I am fairly new to JavaScript and haven't seen any method to access the file.
I think I can use something like ${__dirname}/${user_details.Invoice_No_latest}.pdf but I am not sure if it is correct; any suggestions please.
As can be seen below, the PDF that is generated (using html-pdf) is getting saved locally in the folder D:\Programming\Repo\JavaScript-Project\ec-server\22.pdf
Once I get access to the file, I will use something similar to the code below to save it to the MySQL database.
Snippet of code that I am looking to use afterwards:
var jsfile = Buffer.concat(chunks);

var query = 'INSERT INTO `files` SET ?',
    values = {
      type: 'pdf',
      data: jsfile
    };

mysql.query(query, values, function(err) {
  if (err) throw err; // TODO: improve
  // do something after successful insertion
});
Just looking to see if there is any simple way to access and play with any file in Node.js/Express that is stored locally there.
I found a way to use a locally stored file using the fs.readFile(/* Arguments Here */) function as shown here:
fs.readFile(`${__dirname}\\${Invoice_No_Actual}.pdf`, (err, data) => {
  if (err) res.status(500).send(err);
  else {
    console.log(data);
    res.contentType("blob");
    res.send(`data:application/pdf;base64,${Buffer.from(data).toString("base64")}`);
  }
});
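A small variation on the same idea, assuming the same Invoice_No_Actual variable and Express res object: build the path with path.join so it works on both Windows and POSIX, and send the raw buffer with the PDF content type.
const path = require('path');
const fs = require('fs');

// path.join builds the same path portably, without hard-coded backslashes.
const filePath = path.join(__dirname, `${Invoice_No_Actual}.pdf`);
fs.readFile(filePath, (err, data) => {
  if (err) return res.status(500).send(err);
  res.contentType('application/pdf');
  res.send(data);
});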
I am reviewing a Node.js program someone wrote to merge objects from two files and write the data to MongoDB. I am struggling to wrap my head around how this is working, although I ran it and it works perfectly.
It lives here: https://github.com/muhammad-asad-26/Introduction-to-NodeJS-Module3-Lab
To start, there are two JSON files, each containing an array of 1,000 objects which were 'split apart' and are really meant to be combined records. The goal is to merge the 1st object of both files together, then both 2nd objects, and so on through both 1,000th objects in each file, and insert the results into a db.
Here are the excerpts that give you context:
const customerData = require('./data/m3-customer-data.json')
const customerAddresses = require('./data/m3-customer-address-data.json')

mongodb.MongoClient.connect(url, (error, client) => {
    customerData.forEach((element, index) => {
        element = Object.assign(element, customerAddresses[index])
        //I removed some logic which decides how many records to push to the DB at once
        var tasks = [] //this array of functions is for use with async, not relevant
        tasks.push((callback) => {
            db.collection('customers').insertMany(customerData.slice(index, recordsToCopy), (error, results) => {
                callback(error)
            });
        });
    })
})
As far as I can tell,
element = Object.assign(element, customerAddresses[index])
is modifying the current element during each iteration, i.e. the JSON object in the source file.
To back this up,
db.collection('customers').insertMany(customerData.slice(index, recordsToCopy)
further seems to confirm that, when writing the completed merged data to the database, the author is reading out of that original customerData file, which makes sense only if the completed merged data is living there.
Since the source files are unchanged, the two things that are confusing me are, in order of importance:
1) Where does the merged data live before being written to the db? The customerData file is unchanged at the end of runtime.
2) What's it called when you access a JSON file using array syntax? I had no idea you could read files without the functionality of the fs module or similar. The author read the files using only require('filename'). I would like to read more about that.
Thank you for your help!
Question 1:
The merged data lives in the customerData variable before it's sent to the database. It exists only in memory at the time insertMany is called, and is passed in as a parameter. There is no reason for anything on the file system to be overwritten; in fact, it would be inefficient to modify that .json file every time you called the database. Storing that information is the job of the database, not of a file within your application. If you wanted to overwrite the file, it would be easy enough: just add something like fs.writeFile('./data/m3-customer-data.json', JSON.stringify(customerData), 'utf8', () => console.log('overwritten')); after the insertMany. Be sure to include const fs = require('fs');. To make it clearer what is happening, try writing the value of customerData.length to the file instead.
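A minimal sketch with hypothetical two-element data, just to show that Object.assign mutates the objects already held in memory and touches nothing on disk:
// Stand-ins for what require() would give you from the two .json files.
const customerData = [{ name: 'Ada' }];
const customerAddresses = [{ city: 'London' }];

customerData.forEach((element, index) => {
  Object.assign(element, customerAddresses[index]); // mutates the object inside customerData
});

console.log(customerData); // [ { name: 'Ada', city: 'London' } ]
// The .json files on disk are untouched; only the in-memory array changed.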
Question 2:
Look at the docs on require() in Node. All it's doing is parsing the data in the JSON file.
There's no magic here. A static JSON file is parsed to an array using require and stored in memory as the customerData variable. Its values are manipulated and sent to another computer (the database server), where they can be stored. As the code was originally written, the only purpose that JSON file serves is to be read.
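A quick sketch of the two equivalent ways to load the same file, using the paths from the question:
// 1) Let require do the read and the parse (the result is cached per path).
const customerData = require('./data/m3-customer-data.json');

// 2) The explicit equivalent with fs.
// Note: fs resolves this path against the working directory, while require
// resolves it against the module's own directory.
const fs = require('fs');
const sameData = JSON.parse(fs.readFileSync('./data/m3-customer-data.json', 'utf8'));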
I tried to add a CSV file into Aerospike using Node.js with the put() command. It is showing all records but storing only the last record. I need to store the entire CSV file into Aerospike using the Node.js client.
client.put(key, rec, function (error) {
  if (error) {
    console.log('error: %s ', error.message)
  } else {
    console.log('Record %d written to database successfully.', count)
  }
})
How do I store a CSV file in Aerospike using the Node.js client?
This is a repeated action, if I understand it correctly. The problem is that since key remains unchanged, put will overwrite the record each time: the last write overwrites the penultimate, the penultimate its predecessor, and so on. You will either need to use a different key for each record, or concatenate your rows inside the loop and call client.put once after the loop with the concatenated string. A sketch of the first approach follows.
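This sketch assumes the aerospike Node.js client, that rows holds the parsed CSV records, and made-up namespace and set names:
const Aerospike = require('aerospike');

// One key per CSV row, so put() does not overwrite earlier records.
rows.forEach((row, index) => {
  const key = new Aerospike.Key('test', 'csvRecords', index);
  client.put(key, row, function (error) {
    if (error) {
      console.log('error: %s ', error.message);
    } else {
      console.log('Record %d written to database successfully.', index);
    }
  });
});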
Another option for loading CSV files would be to use the Aerospike Data Loader tool: https://github.com/aerospike/aerospike-loader.
I'm working on a NoDB CMS in Meteor, but I'm new to both Meteor and JavaScript frameworks.
How do I go about reading and writing files to the server?
Within the Node fs module you have a writeFile function.
var fs = Npm.require('fs'); // server-side: Node's fs module, loaded via Npm.require in Meteor

getUser = Meteor.users.findOne({_id : Meteor.userId()});
userObject = JSON.stringify(getUser);

var path = process.env["PWD"] + "/public/";
fs.writeFile(path + Meteor.userId() + '.txt', userObject,
  function (err) {
    if (err) throw err;
    console.log('Done!');
  }
);
The above snippet would create a file with all the information of the user. You could access the properties of the result of your query with something like getUser._id to prepare your data parameter (a String or Buffer) for pretty printing.
All of this, of course, is server side.
You can try to use Npm.require inside the startup function, like so:
Meteor.startup(function () {
  fs = Npm.require('fs');
});
But you should definitely have a look at collectionFS, which does what you are looking for: storing files on the server and allowing you to retrieve them.
An added advantage is that you can distribute everything over many nodes of a MongoDB cluster.
To manipulate image files, you can use ImageMagick with Node.js; this should allow you to transform them in any way you need.
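For example, a minimal sketch assuming the imagemagick npm package is available on the server; the paths and size are hypothetical:
var im = Npm.require('imagemagick');

// Resize an image stored on the server into a thumbnail.
im.resize({
  srcPath: process.env["PWD"] + "/public/original.jpg",
  dstPath: process.env["PWD"] + "/public/thumb.jpg",
  width: 256
}, function (err, stdout, stderr) {
  if (err) throw err;
  console.log('Image resized');
});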
The node fs module is a start. http://nodejs.org/api/fs.html
You might want to be a bit more specific with your question though, as it's kind of broad.