Search and update value in json file in node js - javascript

I'm trying to create a key-value datastore in Node.js, using a JSON file as the database. I want to read the value of a particular key and then update or delete it.
For example (dummy data):
{
  "name": "my name",
  "no": 12345,
  "course": {
    "name": "cname",
    "id": "cid"
  }
}
Now I want to change it to:
{
  "name": "my name",
  "no": 12345,
  "course": {
    "name": "cname1",
    "id": "cid1"
  },
  "fees": {
    "primaryFee": 1000,
    "donation": 100
  }
}
Or even delete the course key along with its value.
One approach I thought of is to read the entire file, store it in a variable (JSON parsed), update the value in that variable, and write the whole data back to the file.
But this is not efficient, since it reads and writes the whole file on every update.
So is there a more efficient approach to update or delete a particular key in the file itself?

Yes, even in a high-level language like JS it is possible to update parts of a file, though this is more common in low-level languages such as C.
For Node.js, check out the official documentation of the file system module, especially the write function: https://nodejs.org/api/fs.html#fs_fs_write_fd_buffer_offset_length_position_callback
Other possible solutions to your problem:
Use a database, as eol suggested
Use an append-only file where you only append the updates; this is more efficient because the whole file does not have to be rewritten (see the sketch after this list)
Split your database file into several files (this is what databases often do under the hood)
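To illustrate the append-only idea, here is a minimal sketch; the updates.log file name and the record format are my own assumptions, not anything from the question:

const fs = require('fs');

const LOG = 'updates.log'; // hypothetical log file, one JSON record per line

// Appending a single update is cheap: only the new record is written.
function setKey(key, value) {
  fs.appendFileSync(LOG, JSON.stringify({ op: 'set', key, value }) + '\n');
}

function deleteKey(key) {
  fs.appendFileSync(LOG, JSON.stringify({ op: 'del', key }) + '\n');
}

// Rebuild the current state by replaying the log from the beginning.
function loadState() {
  const state = {};
  if (!fs.existsSync(LOG)) return state;
  for (const line of fs.readFileSync(LOG, 'utf8').split('\n')) {
    if (!line) continue;
    const record = JSON.parse(line);
    if (record.op === 'set') state[record.key] = record.value;
    else if (record.op === 'del') delete state[record.key];
  }
  return state;
}

setKey('course', { name: 'cname1', id: 'cid1' });
deleteKey('course');
console.log(loadState()); // {}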

Since you are working on a file-based data store, you can use the fs module in Node.js to read and write files. fs.readFileSync returns the whole content of the file, which you then parse as JSON and modify (see the sketch below).
So AFAIK the only solution is to read the whole file and then update or delete the value. In the worst case you will traverse the whole object, i.e. O(n), when updating or deleting a key.
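A minimal sketch of that read-modify-write cycle, assuming the data lives in a hypothetical db.json:

const fs = require('fs');

// Read and parse the whole file, mutate the object, write everything back.
const data = JSON.parse(fs.readFileSync('db.json', 'utf8'));

data.course = { name: 'cname1', id: 'cid1' };    // update a key
data.fees = { primaryFee: 1000, donation: 100 }; // add a key
delete data.no;                                  // delete a key

fs.writeFileSync('db.json', JSON.stringify(data, null, 2));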

You've discovered why databases are so complicated ;)
There isn't really a good way to update a json file without completely reading & parsing the data, editing and then writing the result again.
If you're looking for a simple file-based database, have a look at SQLite (a small sketch follows).
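For illustration, a minimal sketch using the better-sqlite3 package (one of several SQLite bindings for Node.js); the key-value table layout is my own choice:

// npm install better-sqlite3
const Database = require('better-sqlite3');
const db = new Database('store.db');

// A simple key-value table; values are stored as JSON strings.
db.exec('CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)');

db.prepare('INSERT OR REPLACE INTO kv (key, value) VALUES (?, ?)')
  .run('course', JSON.stringify({ name: 'cname1', id: 'cid1' }));

const row = db.prepare('SELECT value FROM kv WHERE key = ?').get('course');
console.log(JSON.parse(row.value)); // { name: 'cname1', id: 'cid1' }

db.prepare('DELETE FROM kv WHERE key = ?').run('course');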

For making these types of data changes, use a database; with JSON files you have to load the full data into a variable and perform the operations on that.

Related

how to insert data into pages in nodeJS

I have recently begun to work with NodeJS, after a long time working with PHP, and I'm wondering if there is anything similar to 'echo': a way of sending the data in parts, allowing me to insert my own values in between. For example, writing the structure for a section of a website in HTML, then using Node to send the relevant data inside the div.
The short answer is "yes". You can repeatedly write to the response object and then call end on it when you are done.
response.setHeader("Content-Type", "text/plain");
response.write("foo");
response.write("bar");
response.write("baz");
response.end();
However, that generally isn't the approach taken for good reason.
Separating your concerns (e.g. splitting "data fetching" and "HTML generation") usually makes things more manageable.
This is why the most common way to build pages server-side with Node.js is to collect all the data for the page into an object and then pass it into a template.
You can, of course, split the data collection logic up into functions in their own modules so you can end up with something like:
const my_data = {
  header: get_header_data(request),
  content: get_template_name_data(request),
};
response.render('template_name', my_data);
With as much division into smaller chunks as you like.

Recursive REST API Query

For some background, I have a number of enterprise systems management products that run a REST API called Redfish. I can visit the root folder on one such device by going here:
https://ip.of.device/redfish/v1
If I browse the raw JSON from a GET request, the output includes JSON data that looks like this. I'm truncating it due to its length, so there may be some JSON syntax errors here.
{
  "Description": "Data",
  "AccountService": {
    "#odata.id": "/redfish/v1/AccountService"
  },
  "CertificateService": {
    "#odata.id": "/redfish/v1/CertificateService"
  }
}
Perhaps in my searching I'm using the wrong terminology, but each of those #odata.id items is basically a 'folder' I can navigate into. Each folder has additional data values, but still more folders. I can capture the contents of folders I know about via JavaScript and parse the JSON simply enough, but there are hundreds of folders here, some multiple layers deep, and from one device to the next the folders are sometimes different.
Due to the size and dynamic nature of this data, is there a way to either recursively query this from the API itself or recursively 'scrape' an API's #odata.id 'folder' structure like this using JavaScript? I'm tempted to write a bunch of nested queries in foreach loops, but there's surely a better way to accomplish this.
My eventual goal is to perform this from nodejs, parse the data, then present the data in a web form for a user to select what fields to keep, which we'll store for faster lookups in a mongodb database along with the path to the original data for more targeted api queries later.
Thanks in advance for any suggestions.
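Not from the question itself, but one common pattern for this is a recursive walk: fetch a resource, collect every #odata.id link in it, and recurse into each link not yet visited. A minimal sketch, assuming Node.js 18+ (for the global fetch) and ignoring authentication and self-signed-certificate handling:

const visited = new Set();

// Collect every "#odata.id" value anywhere inside a parsed JSON document.
function collectLinks(node, links = []) {
  if (Array.isArray(node)) {
    node.forEach(item => collectLinks(item, links));
  } else if (node && typeof node === 'object') {
    for (const [key, value] of Object.entries(node)) {
      if (key === '#odata.id' && typeof value === 'string') links.push(value);
      else collectLinks(value, links);
    }
  }
  return links;
}

// Depth-first crawl starting from the service root, skipping visited paths.
async function crawl(base, path, results = {}) {
  if (visited.has(path)) return results;
  visited.add(path);
  const response = await fetch(base + path);
  const doc = await response.json();
  results[path] = doc;
  for (const link of collectLinks(doc)) {
    await crawl(base, link, results);
  }
  return results;
}

crawl('https://ip.of.device', '/redfish/v1')
  .then(tree => console.log(Object.keys(tree)));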

Writing to a .json file with node.js filesystem

As seen here: https://anidiots.guide/coding-guides/storing-data-in-a-json-file.html
It shows you how to create a point system in discord.js. But what caught my eye is how they used let points = JSON.parse(fs.readFileSync("./points.json", "utf8")); to read the file.
So I am trying to learn how to make a database where I keep the points plus money that can be redeemed daily and shared, kind of like a bank, but I don't know how to do that. If anyone could help me with a hastebin link, or anywhere I can learn in depth how to use the JSON.parse(fs.readFileSync("./points.json", "utf8")) thing, that would be great.
And if you want to see my bot in action, don't hesitate to use https://discord.me/knut
The line you're asking about is made of two calls, to the functions JSON.parse and fs.readFileSync.
JSON.parse. This function receives a bunch of text and transforms (parses) it into a JavaScript object. It can be very useful when you want to, for example, build something dynamically based on the content of a file. Maybe W3Schools is a good place to start looking for info about it.
Example (note the string must be valid JSON, so keys and string values need double quotes):
var string = '{"id": 4, "name": "Volley"}';
var parsedObject = JSON.parse(string);
console.log(parsedObject.id); // 4
console.log(parsedObject.name); // Volley
fs.readFileSync. As you probably know, most I/O functions in JavaScript and Node.js are asynchronous: instead of calling them and getting the returned value, you define a callback within which you use the value you want. fs.readFileSync is just the synchronous version of fs.readFile(path, callback); it returns the content of the read file directly (see the sketch below). Here you have the docs about that function.
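For contrast, a minimal sketch of the asynchronous version, assuming the same points.json file:

const fs = require('fs');

// Asynchronous version: the file content arrives in a callback
// instead of being returned directly.
fs.readFile('./points.json', 'utf8', (err, data) => {
  if (err) throw err;
  const points = JSON.parse(data);
  console.log(points);
});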
These functions are actually simple to use; you shouldn't struggle to find examples or to try them yourself.
If you want to imitate what the tutorial said, then you would need to define another file with the money of each point, or edit the first file if you can, so you could have an object like
var point_and_money = {
  points: [...],
  money: [...]
};
or two objects with the separate information
var points = JSON.parse(fs.readFileSync("./points.json", "utf8"));
var money = JSON.parse(fs.readFileSync("./money.json", "utf8"));
Hope I gave you a hint about what you asked
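To complete the round trip that the read line starts, here is a minimal sketch of updating and saving the file; the per-user field names and the user id are my own assumptions:

const fs = require('fs');

// Read, update, and write back the points file.
const points = JSON.parse(fs.readFileSync('./points.json', 'utf8'));

const userId = '12345'; // hypothetical user id
points[userId] = points[userId] || { points: 0, money: 0 };
points[userId].points += 10; // award points
points[userId].money += 5;   // award the daily money

fs.writeFileSync('./points.json', JSON.stringify(points, null, 2));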
Not really sure what you are trying to achieve.
JSON.parse(fs.readFileSync("./points.json", "utf8"));
This line reads a JSON file and parses it into a JavaScript object. Nothing less and nothing more. This can also be done in Node.js via
var points = require('./points.json');
You mentioned something about how to do a database. Basically I am not sure if you want to develop a database or would be better off using an existing one. Look at MongoDB, SQLite, IndexedDB, etc. There are tons of databases for almost every use case.
Remember that your line of code reads synchronously, in a blocking way, which hurts once the file gets large.
And when multiple users access the file at the same time, you need to handle that somehow. So I'd definitely suggest looking at an existing database solution, which leaves you more time to focus on your business logic.
I hope I understood your question correctly and that my answer helps.
Maybe this one is also a good question to start: Lightweight Javascript DB for use in Node.js

Storing large amounts of text in js object vs xml or json

I'm making an RPG and storing dialog text in a JS object, like so:
var dialog = {
  quests: {
    Lee: {
      "1 - Introductions": {
        "chinese": [
          "Hi, I'm Lee.",
          "I checked your information, I think we can use you..."
        ]
      }
    }
  }
};
Then accessing it as such:
game.data.NPCdialog = dialog.quests[game.data.currNPC][currTask]["chinese"][0];
I asked how to use require.js to dynamically load js files because I want to store several npc_dialog files per level and load them as needed. I asked that here:
using requireJS to dynamically load js files
Both repliers mention using XML or JSON to store the dialog text rather than a JS object, and loading it with AJAX.
Why is storing the text in that format better?
Storing it as JSON or XML isn't better or worse. It's just easier to load and use in your scenario; a simpler solution.
The nice thing about storing the dialog in smaller JSON files and loading them with AJAX is that the site uses fewer resources in the browser, and you load the required data only when it's needed (a loading sketch follows). If the data isn't too large, having it all loaded in a JavaScript object provides faster access.
One thing to note is that your JavaScript object could contain script and not just data, whereas JSON or XML files are limited to data alone.
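For illustration, a minimal sketch of loading one per-level dialog file on demand with the browser's fetch API; the dialog/levelN.json path is a made-up convention:

// Load the dialog for one level only when the player reaches it.
// The dialog/levelN.json path is hypothetical.
function loadDialog(level) {
  return fetch('dialog/level' + level + '.json')
    .then(response => response.json());
}

loadDialog(1).then(dialog => {
  game.data.NPCdialog = dialog.quests[game.data.currNPC][currTask]["chinese"][0];
});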

Piping/streaming JavaScript objects in Node.js

I'm trying to wrap my head around Node.js streams; note that I'm pretty new to JavaScript and Node, the last languages I really got were Perl and PHP :-D
I've read the Buffer/Streams documentation at nodejs.org, watched James Halliday at LXJS, read his stream-handbook and Thorsten Lorenz's event-stream post. I'm starting to understand the basics :)
I process data which is serialized in RDF (which is neither JSON nor XML). I manage to fetch the data (in real code via request) and parse it into a JS object using the rdfstore module.
So far I do this:
fs.createReadStream('myRDFdata.ttl').pipe(serialize()).pipe(process.stdout);
Where serialize() does the job of parsing and serializing the code at the same time right now. I use the through module to interface with the stream.
Now I have some more methods (not the real function declaration but I hope you get the point):
getRecipe(parsedRDF) -> takes the parsed RDF (as a JavaScript object) and tells me how to use it
createMeal(parsedRDF, recipe) -> takes the parsed RDF and the recipe from above and creates a new RDF object out of it
this new object needs to get serialized and sent to the browser
(In the real world getRecipe will have to do a user interaction in the browser)
I like the idea of chaining this together via pipes for higher flexibility when I enhance the code later. But I don't want to serialize it to a RDF serialization every time but just send around the JS object. From what I've read in the documentation I could use the stringify module to get a string out of each step for piping it to the next step. But:
does this actually make sense, i.e. do I add unnecessary overhead, or is it negligible?
I don't see how I could give the parsedRDF to both methods, given the dependency that getRecipe has to be called first and its output is input for createMeal as well. Are there modules which help me with that?
It might be that I have to ask the user for the final recipe selection so I might need to send stuff to the browser there to get the final answer. Can I do something like this over sockets while the pipe is "waiting"?
I hope this shows what I'm trying to do, if not I will try to give more details/rephrase.
Update: After sleeping on it, I figured out some more things:
It probably doesn't make sense to serialize a format like RDF into something non-standard when there are official serialization formats. So instead of using stringify I will simply pass an official RDF serialization between the steps.
This does imply that I parse/serialize the objects in each step, and this surely adds overhead. The question is: do I care? I could extend the RDF module I use to parse from a stream and serialize into one.
I can solve the dependency between getRecipe and createMeal by simply adding some information from getRecipe to parsedRDF; this can be done very easily with RDF without breaking the original data model. But I would still be interested to know whether I could handle dependencies like this with pipes.
Yes, it's okay to make a stream of JS objects;
you just have to remember to pipe it through something that will serialize the stream again before writing it to IO.
I'd recommend writing a module called rdf-stream that parses and serializes RDF; you would use it like this:
var fs = require('fs')
var rdf = require('rdf-stream')

fs.createReadStream(file)  // get a text stream
  .pipe(rdf.parse())       // turn it into objects
  .pipe(transform)         // optional, do something with the objects
  .pipe(rdf.stringify())   // turn back into text
  .pipe(process.stdout)    // write to IO
and it could also be used by other people working with RDF in Node, awesome!
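For a concrete picture of what the parse/stringify halves of such a module could look like, here is a minimal sketch using Node's built-in stream.Transform in object mode; the line-based "parsing" is a stand-in for real RDF handling:

const { Transform } = require('stream');

// Text in, JS objects out: a stand-in "parser" that treats every line
// as one record (real RDF parsing would go here; chunk boundaries are
// ignored for brevity).
const parse = () => new Transform({
  readableObjectMode: true,
  transform(chunk, encoding, callback) {
    for (const line of chunk.toString().split('\n')) {
      if (line) this.push({ line }); // emit one JS object per line
    }
    callback();
  }
});

// JS objects in, text out: serialize each object back to a string.
const stringify = () => new Transform({
  writableObjectMode: true,
  transform(obj, encoding, callback) {
    callback(null, JSON.stringify(obj) + '\n');
  }
});

process.stdin.pipe(parse()).pipe(stringify()).pipe(process.stdout);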
