Reading a file and storing it in an array in JavaScript

First of all, I'm programming in JavaScript, but not for a website or anything like that. It's just a .js file in a folder on my PC (later I pass that .js file to other people so they can use it).
Now, I want to read a .txt file in the same folder as the script and store its contents in a variable. I'd like to do something like this: read the file and store it in an array, then split the file up everywhere there is a },
Then, if a string (input by the user; I already have this covered) contains a substring from the array, it would call a function.
Can you please help me?

As we answered the first part of your question in the comments, here is my solution to the second part.
You can add an event listener on the input and check the user input against the values in your array. I may have misunderstood what exactly you mean by "substring", though.
var myData = ["world", "one", "two", "blue"];
document.getElementById('theInput').addEventListener('input', checkInput);

function checkInput() {
  var input = this.value;
  if (myData.indexOf(input) > -1) {
    console.log("match!");
    // call your function
  }
}

<input id='theInput' type='text'/>

If it doesn't need to run in the browser, you can use Node and its fs (File System) module to read and write files:
Node
Node fs (File System)
If it does need to run in the browser, you can use XMLHttpRequest and Ajax,
or use an <input type="file"> element together with FileReader.

open a file as base 64 in nodejs

Quick question here; I'm pretty confident this is not complicated and there's a good chance it's a duplicate, but I still cannot find a way to do it.
I'm on the back end, and the front end sends me a CSV file in the request body. I cannot change that.
What I need to do is simply parse it, but here is the trick: I can't "open" the given data into something I can work on with csv-parse.
The format of the data, when I do a console.log, is a base64 data URL:
console.log(myfile); // data:text/csv;name=toto.csv;base64,VHlwZSBk (and so on)
console.log(fs.readFileSync(myfile, "base64")); // Error: ENOENT: no such file or directory
I also tried Buffer.from(myfile, "base64").toString(), and again the result isn't the data expected.
EDIT: it seems that using myfile = myfile.replace("data:text/csv;name=toto.csv;base64,", "") does the trick with Buffer.from().toString(),
but I want something more generic; I guess something like that already exists?
Thanks in advance.
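A more generic version of the replace() trick from the edit (this is a common pattern, not from the thread) is to strip everything up to the first comma, so the MIME type and name parameters don't have to be hard-coded:

```javascript
// Generic sketch: decode the payload of any base64 data URL,
// whatever its MIME type or name parameters.
function decodeDataUrl(dataUrl) {
  // Everything up to and including the first comma is metadata
  // (e.g. "data:text/csv;name=toto.csv;base64,"); the rest is the payload.
  const base64 = dataUrl.slice(dataUrl.indexOf(',') + 1);
  return Buffer.from(base64, 'base64').toString('utf8');
}

// Example:
// const csvText = decodeDataUrl(myfile);
// csvText can now be handed to csv-parse.
```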

Login validation through JSON using JS

New to coding. Part of my assignment requires me to validate login credentials through JSON.
My user.json file would look something like this: first the email address, then the password.
{
  "mary#mary.com": "12345678",
  "joseph#gmail.com": "293sfvdet"
}
My website will ask for the login and will go through the JSON to validate the information. I am only allowed to use JSON, JS and HTML, and I am definitely not familiar with JSON.
I would like to know how I can access my JSON file through JS and how I should go about using JSON for validation.
You may link your local JSON file in the page (note that for this to work as written, the file has to assign the JSON text to a variable named user, which technically makes it a JS file rather than pure JSON):
<script type="text/javascript" src="user.json"></script>
Then you can use it like:
var userdata = JSON.parse(user);
First of all (although you may already be aware of this), this is not a secure way to implement a login at all, but since this is for an assignment, I assume that isn't really relevant to you.
Instead of writing your user data into a separate JSON file, you could just declare a constant inside your JS code, like so:
const users = {
  "mary#mary.com": "12345678",
  "joseph#gmail.com": "293sfvdet"
}
After that, you could just validate the entered credentials against it.
You may want to read this to learn more about JavaScript objects.
You're going to want to use JavaScript to read the contents of the JSON file (jQuery can help with this if you're allowed to use a library; otherwise, look into fetch).
Then you'll take the returned object, which will look something like response = { "mary#mary.com": "12345678", "joseph#gmail.com": "293sfvdet" }, and check whether the email/password entered matches any record in the JSON.
var enteredEmail = 'mary#mary.com';
var enteredPassword = '12345678';

if (response[enteredEmail] == enteredPassword) {
  // login validated.
}
As other users have pointed out, storing user info in a JSON file like this is not a secure practice - nor is it practical. However, if it's just an assignment, this should work.
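Putting the fetch-based approach together, a minimal sketch (the isValidLogin helper name and the user.json path are illustrative assumptions, not from the thread):

```javascript
// Checks the entered credentials against the parsed JSON object,
// which has the shape { "email": "password", ... }.
function isValidLogin(users, email, password) {
  // hasOwnProperty guards against matching inherited keys like "toString".
  return Object.prototype.hasOwnProperty.call(users, email)
    && users[email] === password;
}

// In the browser, fetch and parse the file first, then validate:
// fetch('user.json')
//   .then(res => res.json())
//   .then(users => {
//     if (isValidLogin(users, enteredEmail, enteredPassword)) {
//       // login validated.
//     }
//   });
```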

Why is the foreach loop NOT making a change to the file?

I am reviewing a Node.js program someone wrote to merge objects from two files and write the data to a MongoDB database. I am struggling to wrap my head around how this works, although I ran it and it works perfectly.
It lives here: https://github.com/muhammad-asad-26/Introduction-to-NodeJS-Module3-Lab
To start, there are two JSON files, each containing an array of 1,000 objects which were 'split apart' and are really meant to be combined records. The goal is to merge the 1st object of both files together, then both 2nd objects, and so on through both 1,000th objects, and insert them into a db.
Here are the excerpts that give you context:
const customerData = require('./data/m3-customer-data.json')
const customerAddresses = require('./data/m3-customer-address-data.json')

mongodb.MongoClient.connect(url, (error, client) => {
  customerData.forEach((element, index) => {
    element = Object.assign(element, customerAddresses[index])
    // I removed some logic which decides how many records to push to the DB at once
    var tasks = [] // this array of functions is for use with async, not relevant
    tasks.push((callback) => {
      db.collection('customers').insertMany(customerData.slice(index, recordsToCopy), (error, results) => {
        callback(error)
      });
    });
  })
})
As far as I can tell,
element = Object.assign(element, customerAddresses[index])
is modifying the current element during each iteration, i.e. the JSON object in the source file.
To back this up,
db.collection('customers').insertMany(customerData.slice(index, recordsToCopy))
further seems to confirm that when writing the completed merged data to the database, the author is reading out of that original customerData file, which makes sense only if the completed merged data lives there.
Since the source files are unchanged, the two things that are confusing me are, in order of importance:
1) Where does the merged data live before being written to the db? The customerData file is unchanged at the end of runtime.
2) What's it called when you access a JSON file using array syntax? I had no idea you could read files without the fs module or similar; the author reads the files using only require('filename'). I would like to read more about that.
Thank you for your help!
Question 1:
The merged data lives in the customerData variable before it's sent to the database. It exists only in memory at the time insertMany is called, and is passed in as a parameter. There is no reason for anything on the file system to be overwritten; in fact, it would be inefficient to modify that .json file on every database call. Storing that information is the job of the database, not of a file inside your application.
If you wanted to overwrite the file, it would be easy enough: just add something like fs.writeFile('./data/m3-customer-data.json', JSON.stringify(customerData), 'utf8', () => console.log('overwritten')); after the insertMany, and be sure to include const fs = require('fs');. To make it clearer what is happening, try writing the value of customerData.length to the file instead.
Question 2:
Look at the docs on require() in Node. All it's doing is parsing the data in the JSON file.
There's no magic here. A static JSON file is parsed to an array using require and stored in memory as the customerData variable. Its values are manipulated and sent to another computer, where they can be stored. As the code was originally written, the only purpose that JSON file serves is to be read.
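A tiny standalone sketch of what's happening (no MongoDB involved; the two hard-coded arrays stand in for the two require()'d JSON files):

```javascript
// Stand-ins for the two require()'d JSON files, already parsed into memory.
const customerData = [{ name: 'Ann' }, { name: 'Bob' }];
const customerAddresses = [{ city: 'Oslo' }, { city: 'Lima' }];

// Object.assign mutates each element in place, so after the loop
// customerData holds the merged records. Only the in-memory arrays
// change; nothing on disk is ever touched.
customerData.forEach((element, index) => {
  Object.assign(element, customerAddresses[index]);
});

console.log(customerData[0]); // merged record, living in memory only
```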

Fine-Uploader Replace File but keep UUID

Does Fine-Uploader have the concept of replacing an existing file but keeping the same UUID?
My uploaders are restricted to one file only. Clicking upload again (after a file has already been successfully uploaded) results in a new UUID being created for the new file. I'd like to keep the same UUID, since it may already be cross-linked to other data points in our back end.
Reusing a UUID for multiple files defeats the entire purpose of a UUID. So this is not supported.
I had the exact same use case and did not find a direct solution. The easiest solution for me was to handle the onSubmit event and change the UUID to a value that you create in your back end.
var sameUuid = qq.getUniqueId();
var uploader = new qq.FineUploader({
  callbacks: {
    onSubmit: function (id, fileName) {
      this.setUuid(id, sameUuid);
    }
  }
});
You could also generate a second UUID that does not change and send it along as a parameter or as a custom header.

Mirth channelMap in source JavaScript

In my source connector, I'm using JavaScript for my database work due to my requirements and parameters.
The end result is storing the data.
ifxResults = ifxConn.executeCachedQuery(ifxQuery); //var is declared
I need to use these results in the destination transformer.
I have tried channelMap.put("results", ifxResults);.
I get the following error ReferenceError: "channelMap" is not defined.
I have also tried to use return ifxResults but I'm not sure how to access this in the destination transformer.
Do you want to send each row as a separate message through your channel? If so, sounds like you want to use the Database Reader in JavaScript mode. Just return that ResultSet (it's really a CachedRowSet if you use executeCachedQuery like that) and the channel will handle the rest, dispatching an XML representation of each row as discrete messages.
If you want to send all rows in the result set aggregated into a single message, that will be possible with the Database Reader very soon: MIRTH-2337
Mirth Connect 3.5 will be released next week so you can take advantage of it then. But if you can't wait or don't want to upgrade then you can still do this with a JavaScript Reader:
var processor = new org.apache.commons.dbutils.BasicRowProcessor();
var results = new com.mirth.connect.donkey.util.DonkeyElement('<results/>');

while (ifxResults.next()) {
  var result = results.addChildElement('result');
  for (var entries = processor.toMap(ifxResults).entrySet().iterator(); entries.hasNext();) {
    var entry = entries.next();
    result.addChildElement(entry.getKey(), java.lang.String.valueOf(entry.getValue()));
  }
}

return results.toXml();
I know this question is kind of old, but here's an answer just for the record.
For this answer, I'm assuming that you are using a Source connector type of JavaScript Reader, and that you're trying to use channelMap in the JavaScript Reader Settings editing pane.
The problem is that the channelMap variable isn't available in this part of the channel. It's only available in filters and transformers.
It's possible that what you want can be accomplished by using the globalChannelMap variable, e.g.
globalChannelMap.put("results", ifxResults);
I usually need to do this when I'm processing one record at a time and need to pass some setting to the destination channel. If you do it like I've done in the past, then you would first create a globalChannelMap key/value in the source channel's transformer:
globalChannelMap.put("ProcID", "TestValue");
Then go to the Destinations tab and select your destination channel to make sure you're sending it to the destination (I've never tried this for channels with multiple destinations, so I'm not sure if anything different needs to be done).
(Screenshot: Destination tab of the source channel)
Notice that ProcID is now listed in the Destination Mappings box. Click the New button next to the Map Variable box and you'll see Variable 1 appear. Double click on that and put in your mapping key, which in this case is ProcID.
Now go to your destination channel's source transformer. There you would enter the following code:
var SentValue = sourceMap.get("ProcID");
Now SentValue in your destination transformer has whatever was in ProcID when your source channel relinquished control.
