I'm using a variable to store a JSON file that is used as a reference in my code. I have code that checks whether the variable is outdated; if it is, it gets updated from its source.
const someFile = require('./something.json')
Every time the file was outdated, the program tried to update it, and the update succeeded. However, Node.js kept using the old .json file (the one that had been replaced) as the reference, making my code output an outdated response.
So how can I tell Node.js to use the updated file? Thank you in advance!
//myjson.json
{
"data": {
"name": "uday",
"age": 25
}
}
controller.js
var myJSON = require('./myjson.json');
var fs = require('fs');
console.log("before change-->", myJSON);
myJSON.data.age = 26;
console.log("after change-->", myJSON);
fs.writeFileSync('myjson.json', JSON.stringify(myJSON));
Console result:
before change--> { data: { name: 'uday', age: 25 } }
after change--> { data: { name: 'uday', age: 26 } }
Your issue is with the const keyword: when a value is assigned to a constant it cannot be reassigned (Info). Just use the let keyword and your code should work.
Update
If both files exist:
let someFile = require('./ab.json')
console.log(someFile); // will print the content of ab.json
someFile = require('./bc')
console.log(someFile); // will print the content of bc.json
But this only works for files that already exist when the app starts. For a file created at run time you cannot use require; you need another option, such as the fs module (Docs Node 8.x).
P.S.: While that's a valid solution, if the file is JSON and changes regularly, you can just use Redis and its Keyspace Notifications if this is a large-scale application; if not, fs is the way to go.
As described in the Require Documentation, require will cache the file, so even if you update the file and use require again to get it, you won't see the changes.
Caching
Modules are cached after the first time they are loaded. This means
(among other things) that every call to require('foo') will get
exactly the same object returned, if it would resolve to the same
file.
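For reference, that cache is exposed as require.cache, so one known workaround (not what this answer goes on to recommend; just a sketch) is to delete the cached entry before requiring again:
// the module cache is keyed by resolved filename;
// deleting an entry forces the next require() to re-read the file
delete require.cache[require.resolve('./something.json')];
var someFile = require('./something.json'); // fresh contents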
Instead, you can try reading the file each time:
var fs = require('fs');
// read a fresh copy from disk instead of using require's cached module
var data = fs.readFileSync('something.json', 'utf8');
var someFile = JSON.parse(data);
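If you need to re-check the file regularly, you can wrap this in a small helper so every call re-reads from disk (a minimal sketch; loadJson is a hypothetical name):
function loadJson(path) {
  // parse a fresh copy from disk on every call, bypassing require's cache
  return JSON.parse(fs.readFileSync(path, 'utf8'));
}
var someFile = loadJson('something.json'); // re-read whenever freshness matters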
Related
Hello, I am currently developing a website for my personal needs, for which I need to regularly save data in a "data.json" file that I can then retrieve for later use.
I then remembered that I had already used the Node.js file system ("fs") module before, but this time I can't manage to use it in my project, as it doesn't seem to apply to my HTML files...
I suspect that there are other, easier solutions, or that I must be using Node.js wrong in this case.
I use the Sublime Text editor and I work on two HTML files linked to the same script.js and style.css files.
Do you have any solutions to propose?
Thank you, cordially,
Florent
Let's say, on the backend:
const fs = require('fs'); // file system module

const htmlTxt = `
<html>
...
</html>
`
// write the file locally
fs.writeFileSync("sampleHtml.html", htmlTxt);
// read the contents of the file
console.log(fs.readFileSync("sampleHtml.html", "utf8"));
I would use the File system module.
So on your Node.js server, you would have to add:
const fs = require('fs');
then create your JSON:
let jsonElement = {
aa: 'aa',
test: 123,
};
let data = JSON.stringify(jsonElement);
And save it to a file:
fs.writeFileSync('json_file.json', data);
https://nodejs.org/api/fs.html
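And to read the JSON back later, the reverse of the same round trip (a short sketch using the file written above):
const loadedJson = JSON.parse(fs.readFileSync('json_file.json', 'utf8'));
console.log(loadedJson.test); // 123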
I'm trying to write into a text file, but not at the end like appendFile() does, and not by replacing the entire content...
I saw it was possible to choose where to start writing with the start parameter of fs.createWriteStream() -> https://nodejs.org/api/fs.html#fs_fs_createwritestream_path_options
But there is no parameter to say where to stop writing, right? So it removes the whole end of my file after the point where I write with this function.
const fs = require('fs');
var logger = fs.createWriteStream('result.csv', {
flags: 'r+',
start: 20 // start writing at byte offset 20
})
logger.write('5258,525,98951,0,1\n') // example: a new line to write
Is there a way to specify where to stop writing in the file, so I end up with something like:
....
data from beginning
....
5258,525,98951,0,1
...
data till the end
...
I suspect you mean, "Is it possible to insert in the middle of the file." The answer to that is: No, it isn't.
Instead, to insert, you have to:
Determine how big what you're inserting is
Copy the data at your insertion point to that many bytes later in the file
Write your data
Obviously when doing #2 you need to be sure that you're not overwriting data you haven't copied yet (either by reading it all into memory first or by working in blocks, from the end of the file toward the insertion point).
(I've never looked for one, but there may be an npm module out there that does this for you...)
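For small files, the simplest version of those steps is to read the whole file into memory and write it back with the new bytes spliced in. A minimal sketch (insertAt is a hypothetical helper; the path and offset are taken from the question):
const fs = require('fs');

function insertAt(path, offset, text) {
  const original = fs.readFileSync(path); // the whole file as a Buffer
  const before = original.slice(0, offset); // bytes up to the insertion point
  const after = original.slice(offset); // bytes being shifted later in the file
  fs.writeFileSync(path, Buffer.concat([before, Buffer.from(text), after]));
}

insertAt('result.csv', 20, '5258,525,98951,0,1\n');
For large files you would instead work in blocks from the end of the file toward the insertion point, exactly as described above.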
You could read/parse your file at first. Then apply the modifications and save the new file.
Something like:
const fs = require("fs");
const fileData = fs.readFileSync("result.csv", { encoding: "utf8" });
const fileDataArray = fileData.split("\n");
const newData = "5258,525,98951,0,1";
const index = 2; // the new row goes in after the first two rows
fileDataArray.splice(index, 0, newData); // insert data into the array
const newFileData = fileDataArray.join("\n"); // create the new file
fs.writeFileSync("result.csv", newFileData, { encoding: "utf8" }); // save it
I'm trying to use the File and Directory Entries API to create a file uploader tool that will allow me to drop an arbitrary combination of files and directories into a browser window, to be read and uploaded.
(I'm fully aware that similar functionality can be achieved by using a file input element with webkitdirectory enabled, but I'm testing a use case where the user isn't forced to put everything into a single folder.)
Using the Drag and Drop API, I've managed to read the DataTransfer items and convert them to FileSystemEntry objects using DataTransferItem.webkitGetAsEntry.
From there, I am able to tell if the entry is a FileSystemFileEntry or a FileSystemDirectoryEntry. My plan, of course, is to recursively walk the directory structure, if any, which I should be able to do using the FileSystemDirectoryReader method readEntries, like this:
handleDrop(event) {
event.preventDefault();
event.stopPropagation();
//assuming I dropped only one directory
const directory = event.dataTransfer.items[0];
const directoryEntry = directory.webkitGetAsEntry();
const directoryReader = directoryEntry.createReader();
directoryReader.readEntries(function(entries){
// callback: the "entries" param is an Array
// containing the directory entries
});
}
However, I'm running into the following issue: in Chrome, the readEntries method only returns 100 entries. Apparently this is the expected behavior, as the way to obtain subsequent files from the directory is to call readEntries again. However, I'm finding this impossible to do. A subsequent call to the method throws the error:
DOMException: An operation that depends on state cached in an interface object was made but the state had changed since it was read from disk.
Does anyone know a way around this? Is this API hopelessly broken for directories of 100+ files in Chrome? Is this API deprecated? (Not that it was ever "precated".) In Firefox, readEntries returns the whole directory content at once, which is apparently against the spec, but it is usable.
Please advise.
Of course, as soon as I had posted this question the answer hit me. What I was trying to do was akin to the following:
handleDrop(event) {
event.preventDefault();
event.stopPropagation();
//assuming I dropped only one directory
const directory = event.dataTransfer.items[0];
const directoryEntry = directory.webkitGetAsEntry();
const directoryReader = directoryEntry.createReader();
directoryReader.readEntries(function(entries){
// callback: the "entries" param is an Array
// containing the directory entries
});
directoryReader.readEntries(function(entries){
// call readEntries a second time
});
}
The problem with this is that readEntries is asynchronous, so I'm trying to call it while it's "busy" reading the first batch (I'm sure lower-level programmers will have a better term for that). A better way of achieving what I was trying to do:
handleDrop(event) {
event.preventDefault();
event.stopPropagation();
//assuming I dropped only one directory
const directory = event.dataTransfer.items[0];
const directoryEntry = directory.webkitGetAsEntry();
const directoryReader = directoryEntry.createReader();
function read(){
directoryReader.readEntries(function(entries){
if(entries.length > 0) {
//do something with the entries
read(); //read the next batch
} else {
//do whatever needs to be done after
//all files are read
}
});
}
read();
}
This way we ensure the FileSystemDirectoryReader is done with one batch before starting the next one.
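If you prefer promises, the same batch-until-empty pattern can be wrapped up like this (just a sketch; readAllEntries is a hypothetical helper name, and readEntries' error callback becomes the rejection):
function readAllEntries(directoryReader) {
  return new Promise(function(resolve, reject) {
    const allEntries = [];
    function read() {
      directoryReader.readEntries(function(entries) {
        if (entries.length > 0) {
          allEntries.push(...entries); // collect this batch
          read(); // read the next batch
        } else {
          resolve(allEntries); // an empty batch means the directory is exhausted
        }
      }, reject);
    }
    read();
  });
}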
This is in p5.js, which includes most JavaScript functions!
I am trying to make a save file for my game. By this I mean: the user presses the save button in my game, it updates an array that is saved in a file included in the game package, and the player keeps playing. How would I do something like this (create files that can be accessed and changed by my code)?
var SM = {
//save files
sf1: [[1,0,0,0,0],
[0,0,0,0,0],
[0,0,0,0,0]],
sf2: [[1,0,0,0,0],
[0,0,0,0,0],
[0,0,0,0,0]],
sf3: [[1,0,0,0,0],
[0,0,0,0,0],
[0,0,0,0,0]]
};
One more thing (FOR PROCESSING CODERS FROM HERE ON): I tried to use Processing functions like saveStrings() and loadStrings(), but I couldn't get saveStrings() to save to a specific location, nor could I properly load a txt file. Here is the code I used for that:
var result;
function preload() {
result = loadStrings('assets/nouns.txt');
}
function setup() {
background(200);
var ind = floor(random(result.length));
text(result[ind], 10, 10, 80, 80);
}
I had a folder called assets within the sketch folder, and assets had a txt file called nouns with strings in it (downloaded via saveStrings, then manually moved), but the sketch won't get past the loading screen?
If you are running it from a browser, you can't save or load a file the way you want, period. Saving and loading files in browser JavaScript involves user interaction, and the user gets to pick the file and where it saves.
If you want to save it locally, instead of trying to write it to a file, you should write and read it from localStorage, which you can then do just fine.
// save
localStorage.setItem('saveData', data);
// load
const data = localStorage.getItem('saveData');
If it is somehow a game run directly on the client (out of the browser), like written in Node.js, then you'd want to use the fs functions.
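In that (Node.js) case, the same save/load round trip with fs would look roughly like this (a sketch; 'save.json' is just an example filename, and SM is the object from the question):
const fs = require('fs');

// save: serialize the object and write it to disk
fs.writeFileSync('save.json', JSON.stringify(SM));

// load: read the file back and parse it into an object
const loaded = JSON.parse(fs.readFileSync('save.json', 'utf8'));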
To expand a bit, if you have your save data as an object:
const saveData = {
state: [1,2,3],
name: 'player'
};
Then to save it, you would simply call:
localStorage.setItem('saveData', JSON.stringify(saveData));
You'll want to stringify it when you save it so it round-trips properly. To read it back, use getItem():
const data = JSON.parse(localStorage.getItem('saveData') || '{}');
(That extra || '{}' bit will handle if it hasn't been saved before and give you an empty object.)
It's actually much easier than trying to write a JavaScript file that you would then read in. Even if you were writing a file, you'd probably want to write it as JSON, not JavaScript.
In order to save strings into a file in JavaScript, I would recommend this previous StackOverflow question, which provides a link to a very clear and easy-to-use library for managing files in JavaScript.
I am using XPCOM to read/write file(s) on my hard drive (since Java is no longer supported on FF 16/17/18+, I have to use this). I use it in my Firefox extension(s) (I use iMacros). In this document I found this example.
var string = '\u5909\u63db\u30c6\u30b9\u30c8';
// the snippet assumes an nsILocalFile instance; created here so it runs
var file = Components
.classes['@mozilla.org/file/local;1']
.createInstance(Components.interfaces.nsILocalFile);
file.initWithPath('C:\\temp\\temp.txt');
file.create(file.NORMAL_FILE_TYPE, 0666);
var charset = 'EUC-JP';
var fileStream = Components
.classes['@mozilla.org/network/file-output-stream;1']
.createInstance(Components.interfaces.nsIFileOutputStream);
fileStream.init(file, 2, 0x200, false);
var converterStream = Components
.classes['@mozilla.org/intl/converter-output-stream;1']
.createInstance(Components.interfaces.nsIConverterOutputStream);
converterStream.init(fileStream, charset, string.length,
Components.interfaces.nsIConverterInputStream.DEFAULT_REPLACEMENT_CHARACTER);
converterStream.writeString(string);
converterStream.close();
fileStream.close();
So this code does the following: if the file doesn't exist, it creates it and saves the data in it. However, if the file does exist, it returns an error.
If I comment out that part of the code (and the file exists), it just overwrites the old data with the new.
I need this code to create the file if it doesn't exist; if it does exist, just move on without an error and save the data on a new line without overwriting.
Like this.
before:
data11, data12, data13
data21, data22, data23
after:
data11, data12, data13
data21, data22, data23
data31, data32, data33
data41, data42, data43
Try passing 18 as the second parameter when you init the output stream (instead of 2).
fileStream.init(file, 18, 0x200, false);
That adds the PR_APPEND flag to the io mode parameter (it's 0x10; the 2 is for PR_WRONLY).
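Spelled out with named constants (just for readability; the values are the ones given above):
var PR_WRONLY = 0x02; // open for writing only
var PR_APPEND = 0x10; // move to the end of the file before each write
fileStream.init(file, PR_WRONLY | PR_APPEND, 0x200, false); // 0x12 === 18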