Map over a collection to upsert into the database. How to batch upsert? - javascript

Say, I have a data structure coming in from the frontend as follows:
const userData = [
  {
    id: 11223,
    bb: [
      {
        id: 12,
      },
      {
        id: 34,
        bbb: "bbb",
      },
    ],
  },
  {
    id: 4234,
    ...
  },
];
Because none, some, or all of the data may already be in the database, here is what I have come up with:
const collection = [];
for (let i = 0; i < userData.length; i++) {
  const cur = userData[i];
  const subCur = cur.bb;
  const updatedCur = await db.cur.upsert({
    where: {
      id: cur.id
    },
    update: {
      ...
    },
    create: {
      ...
    },
  });
  collection.push(updatedCur);
  for (let j = 0; j < subCur.length; j++) {
    const latest = subCur[j];
    await db.subcur.upsert({
      where: {
        id: latest.id
      },
      update: {
        ...
      },
      create: {
        ...
      },
    });
  }
}
To summarise, I am looping over the userData and upserting each object one by one. Within the loop, I loop over the child collections and upsert them into the db.
My concern is that I am making a lot of separate calls to the db this way. Is this the best way to do this?
Aside:
I previously tried to do multiple inserts within the upsert; however, I got stuck on the update section since, to my knowledge, we cannot upsert multiple records within the update nested inside upsert. Is this correct?
UPDATE:
As requested by Ryan, here is what the Schema looks like:
model Cur {
  id     Int
  subCur SubCur[]
  ...
}
model SubCur {
  id    Int
  cur   Cur @relation(fields: [curId], references: [id])
  curId Int
  ...
}
To summarise, there are many models like 'SubCur' with a 1-n relation to the 'Cur' model. As the userData payload may contain some data that is new and some that updates rows already in the db, I was curious what the best approach is to upsert the data. To be specific, do I have to insert each record one at a time?

I assumed your schema to be this:
model Cur {
  id Int @id
}
model Subcur {
  id  Int @id
  bbb String?
}
And here's a better version:
const collection = await prisma.$transaction(
  userData.map(cur =>
    prisma.cur.upsert({
      where: { id: cur.id },
      update: {},
      create: { id: cur.id },
    })
  )
)
await prisma.$transaction(
  userData
    .flatMap(cur => cur.bb)
    .map(latest =>
      prisma.subcur.upsert({
        where: {
          id: latest.id,
        },
        update: {
          bbb: latest.bbb,
        },
        create: {
          id: latest.id,
          bbb: latest.bbb,
        },
      })
    )
)
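Since $transaction accepts a plain array of Prisma operations, the two batches can also be combined into one atomic transaction so parents and children commit together. A sketch, assuming the same schema as above:
await prisma.$transaction([
  // parent upserts run first...
  ...userData.map(cur =>
    prisma.cur.upsert({
      where: { id: cur.id },
      update: {},
      create: { id: cur.id },
    })
  ),
  // ...then the child upserts, all in a single transaction
  ...userData
    .flatMap(cur => cur.bb)
    .map(latest =>
      prisma.subcur.upsert({
        where: { id: latest.id },
        update: { bbb: latest.bbb },
        create: { id: latest.id, bbb: latest.bbb },
      })
    ),
])
The returned array holds parent and child records in the same order, so slice off the first userData.length entries if you only need the parents.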

Related

How to copy the elements of an array into an array in a Mongodb document

I have an array
let data = [a, b, c, d, e];
And I want to insert the elements of this array into an array in a MongoDb document. This is my schema.
const userSchema = mongoose.Schema({
  info: reqString,
  data: [String]
});
As you can see the data field represents an array. In my index.js I use a for-loop to iterate through the array.
for (var i = 0; i < data.length; i++) {
  connectToMongoDB(data[i]);
}
And connectToMongoDB() is here:
const connectToMongoDB = async (_data) => {
  await mongo().then(async (mongoose) => {
    try {
      console.log('Connected to mongodb!');
      await userSchema.findOneAndUpdate({
        info: 'facts',
      }, {
        $push: {
          data: _data,
        }
      });
    } finally {
      mongoose.connection.close();
      console.log('Disconnected from mongodb!');
    }
  });
}
However, although a document is created in MongoDB, the data field, which represents the array, remains empty. I welcome all suggestions.
From your question it is not clear whether you want to set the data field value to your _data array, or append your _data array to the existing array in the data field.
1. To replace the array of the data field:
await userSchema.findOneAndUpdate(
  { info: 'facts' },
  { $set: { data: _data } }
).exec();
2. To append an array to the array of the data field:
await userSchema.findOneAndUpdate(
  { info: 'facts' },
  { $push: { data: { $each: _data } } }
).exec();
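With $each, the per-element loop and the repeated connect/disconnect from the question become unnecessary; one call can append the whole array. A sketch reusing the mongo() helper and names from the question:
const connectToMongoDB = async (_data) => {
  const mongoose = await mongo(); // assumed connection helper from the question
  try {
    // append every element of _data in a single round-trip
    await userSchema.findOneAndUpdate(
      { info: 'facts' },
      { $push: { data: { $each: _data } } }
    ).exec();
  } finally {
    await mongoose.connection.close();
  }
};
// usage: one call for the entire array, no for-loop needed
await connectToMongoDB(data);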

saveData method saves twice

I am building a React app that includes one separate component for CRUD functionality of Products and another separate component for CRUD functionality of Suppliers.
I am using the same saveData method for both components (the Create functionality of CRUD). It is triggered when the user presses Save after filling in the input fields of a Product or Supplier. The saveData method is located in a central ProductsAndSuppliers.js file that is available to both the Products and Suppliers components.
In both of the Product & Supplier components, there is a table showing the Products or Suppliers already present as dummy data.
I made a button at the bottom of each page to add a new Product or Supplier... depending on which tab the user has selected on the left side of the screen (Product or Supplier).
Since I am using the same saveData method in both cases, I have the same problem whenever I try to add a new Product or Supplier to each respective table after filling out the input fields: my new Product or Supplier is added, but twice, and I can't figure out why.
I have tried using a spread operator to add the new item to the collection but am having no success:
saveData = (collection, item) => {
  if (item.id === "") {
    item.id = this.idCounter++;
    this.setState((collection) => {
      return { ...collection, item }
    })
  } else {
    this.setState(state => state[collection]
      = state[collection].map(stored =>
        stored.id === item.id ? item : stored))
  }
}
Here is my original saveData method that adds the new Product or Supplier, but twice:
saveData = (collection, item) => {
  if (item.id === "") {
    item.id = this.idCounter++;
    this.setState(state => state[collection]
      = state[collection].concat(item));
  } else {
    this.setState(state => state[collection]
      = state[collection].map(stored =>
        stored.id === item.id ? item : stored))
  }
}
my state looks like this:
this.state = {
  products: [
    { id: 1, name: "Kayak", category: "Watersports", price: 275 },
    { id: 2, name: "Lifejacket", category: "Watersports", price: 48.95 },
    { id: 3, name: "Soccer Ball", category: "Soccer", price: 19.50 },
  ],
  suppliers: [
    { id: 1, name: "Surf Dudes", city: "San Jose", products: [1, 2] },
    { id: 2, name: "Field Supplies", city: "New York", products: [3] },
  ]
}
There are issues with both of your implementations.
Starting with the top one:
// don't do this
this.setState((collection) => {
  return { ...collection, item }
})
In this case, collection is your component state and you're adding a property called item to it. You're going to get this as a result:
{
  products: [],
  suppliers: [],
  item: item
}
The correct way to do this with the spread operator is to return an object that represents the state update. You can use a computed property name to target the appropriate collection:
this.setState((state) => ({
  [collection]: [...state[collection], item]
}))
* Note that both this and the example below are using the implicit return feature of arrow functions. Note the parens around the object.
In the second code sample you're:
1. mutating the existing state directly, which you should not do;
2. returning an array instead of a state update object.
// don't do this
this.setState(state =>
  state[collection] = state[collection].concat(item)
);
Assignment expressions return the assigned value, so this code returns an array instead of an object and I'd frankly be surprised if this worked at all.
The correct implementation is the same as above except it uses concat instead of spread to create the new array:
this.setState(state => ({
  [collection]: state[collection].concat(item)
}));
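Putting both branches together, a corrected saveData could look like this (a sketch against the state shape shown in the question):
saveData = (collection, item) => {
  if (item.id === "") {
    item.id = this.idCounter++;
    // add: build a new array instead of mutating state
    this.setState(state => ({
      [collection]: [...state[collection], item]
    }));
  } else {
    // update: swap in the edited item, again without mutation
    this.setState(state => ({
      [collection]: state[collection].map(stored =>
        stored.id === item.id ? item : stored
      )
    }));
  }
};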
As an aside, here are some needlessly fancy, arguably silly id generators:
const nextId = (function idGen(start = 100) {
  let current = start;
  return () => current++;
})(100);
console.log(nextId()); // 100
console.log(nextId()); // 101
console.log(nextId()); // 102
// ----------------
// a literal generator, just for fun
const ids = (function* IdGenerator(start = 300) {
  let id = start;
  while (true) {
    yield id++;
  }
})();
console.log(ids.next().value); // 300
console.log(ids.next().value); // 301
console.log(ids.next().value); // 302

How to add dynamic elements to an object in typescript

I am sorry if I am asking a very basic question; I have done some research on the internet but haven't found anything useful.
I have a typescript object like :
var productIds = ["one", "two", "three"];
let searchfilter = {
  or: [{
    id: { match: productIds[0] }
  }, {
    id: { match: productIds[1] }
  }, {
    id: { match: productIds[2] }
  }]
};
My productIds array is dynamic and may hold a different number of values.
How can I create the same structure for a dynamic number of values? I tried forEach, but I'm not sure about the syntax.
productIds.forEach(function(value) {
  // not sure if this is the right syntax; I am not getting the desired results.
  searchfilter.or = { id: { match: value } };
});
Can you help me with it?
You can create your full or array with a simple .map() :
var productIds = ["1", "2", "3"];
let searchfilter = {
  or: productIds.map(id => ({ id: { match: id } }))
};
However, Mongo (which I believe you are using) has a $match stage whose $in operator is made to match against a list:
{
  $match: {
    productIds: {
      $in: productIds
    }
  }
}
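As a full query, that stage might be used like this (a sketch; the products collection and the productIds field name are assumptions):
db.products.aggregate([
  // keep documents whose productIds field contains any value from the array
  { $match: { productIds: { $in: productIds } } }
])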
I'll keep it as simple as I can
var productIds = ["one", "two", "three"];
let searchfilter = productIds.map(p => {
  return { id: { match: p } };
});
// function
addNewProduct(id: string) {
  this.searchfilter.push({ id: { match: id } });
}

MongoDB retrieve all keys with Node.js [duplicate]

I'd like to get the names of all the keys in a MongoDB collection.
For example, from this:
db.things.insert( { type : ['dog', 'cat'] } );
db.things.insert( { egg : ['cat'] } );
db.things.insert( { type : [] } );
db.things.insert( { hello : [] } );
I'd like to get the unique keys:
type, egg, hello
You could do this with MapReduce:
mr = db.runCommand({
  "mapreduce": "my_collection",
  "map": function() {
    for (var key in this) { emit(key, null); }
  },
  "reduce": function(key, stuff) { return null; },
  "out": "my_collection" + "_keys"
})
Then run distinct on the resulting collection to find all the keys:
db[mr.result].distinct("_id")
["foo", "bar", "baz", "_id", ...]
With Kristina's answer as inspiration, I created an open source tool called Variety which does exactly this: https://github.com/variety/variety
You can use aggregation with the new $objectToArray aggregation operator in version 3.4.4 to convert all top key-value pairs into document arrays, followed by $unwind and $group with $addToSet to get distinct keys across the entire collection. (Use $$ROOT for referencing the top level document.)
db.things.aggregate([
  { "$project": { "arrayofkeyvalue": { "$objectToArray": "$$ROOT" } } },
  { "$unwind": "$arrayofkeyvalue" },
  { "$group": { "_id": null, "allkeys": { "$addToSet": "$arrayofkeyvalue.k" } } }
])
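Run against the four sample documents from the question, this pipeline returns a single document along these lines (the order inside allkeys is not guaranteed, since $addToSet builds a set):
{ "_id" : null, "allkeys" : [ "type", "egg", "hello", "_id" ] }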
You can use the following query for getting keys in a single document.
db.things.aggregate([
  { "$match": { _id: "<<ID>>" } }, /* Replace with the document's ID */
  { "$project": { "arrayofkeyvalue": { "$objectToArray": "$$ROOT" } } },
  { "$project": { "keys": "$arrayofkeyvalue.k" } }
])
A cleaned-up and reusable solution using pymongo:
from pymongo import MongoClient
from bson import Code

def get_keys(db, collection):
    client = MongoClient()
    db = client[db]
    map = Code("function() { for (var key in this) { emit(key, null); } }")
    reduce = Code("function(key, stuff) { return null; }")
    result = db[collection].map_reduce(map, reduce, "myresults")
    return result.distinct('_id')
Usage:
get_keys('dbname', 'collection')
>> ['key1', 'key2', ... ]
If your target collection is not too large, you can try this under the mongo shell client:
var allKeys = {};
db.YOURCOLLECTION.find().forEach(function(doc) {
  Object.keys(doc).forEach(function(key) { allKeys[key] = 1 });
});
allKeys;
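Object.keys then turns the accumulator object into a plain array of field names:
Object.keys(allKeys); // e.g. [ "_id", "type", "egg", "hello" ]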
If you are using MongoDB 3.4.4 or above, you can use the aggregation below, built on the $objectToArray and $group stages:
db.collection.aggregate([
  { "$project": {
    "data": { "$objectToArray": "$$ROOT" }
  }},
  { "$project": { "data": "$data.k" }},
  { "$unwind": "$data" },
  { "$group": {
    "_id": null,
    "keys": { "$addToSet": "$data" }
  }}
])
Try this:
doc = db.things.findOne();
for (var key in doc) print(key);
Using python. Returns the set of all top-level keys in the collection:
# Using pymongo and a connection named 'db'
from functools import reduce  # required import on Python 3

reduce(
    lambda all_keys, rec_keys: all_keys | set(rec_keys),
    map(lambda d: d.keys(), db.things.find()),
    set()
)
Here is a sample that works in Python; it returns the results inline.
from pymongo import MongoClient
from bson.code import Code

db = MongoClient().test  # assumption: adjust the connection and database name as needed

mapper = Code("""
function() {
    for (var key in this) { emit(key, null); }
}
""")
reducer = Code("""
function(key, stuff) { return null; }
""")
distinctThingFields = db.things.map_reduce(
    mapper, reducer,
    out={'inline': 1},
    full_response=True)
## do something with distinctThingFields['results']
I'm surprised no one here has answered using simple JavaScript and Set logic to automatically filter duplicate values. A simple example on the mongo shell is below:
var allKeys = new Set();
db.collectionName.find().forEach(function (o) { for (var key in o) allKeys.add(key) });
for (let key of allKeys) print(key);
This will print all possible unique keys in the collection named collectionName.
I think the best way to do this, as mentioned here, is in mongod 3.4.4+, but without using the $unwind operator and using only two stages in the pipeline. Instead we can use the $mergeObjects and $objectToArray operators.
In the $group stage, we use the $mergeObjects operator to return a single document whose keys/values come from all documents in the collection.
Then comes the $project stage, where we use $map and $objectToArray to return the keys.
let allTopLevelKeys = [
  {
    "$group": {
      "_id": null,
      "array": {
        "$mergeObjects": "$$ROOT"
      }
    }
  },
  {
    "$project": {
      "keys": {
        "$map": {
          "input": { "$objectToArray": "$array" },
          "in": "$$this.k"
        }
      }
    }
  }
];
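The pipeline is then passed to aggregate as usual, for example against the question's collection:
db.things.aggregate(allTopLevelKeys);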
Now if we have nested documents and want to get their keys as well, this is doable. For simplicity, let's consider a document with a simple embedded document that looks like this:
{field1: {field2: "abc"}, field3: "def"}
{field1: {field3: "abc"}, field4: "def"}
The following pipeline yields all keys (field1, field2, field3, field4).
let allFirstSecondLevelKeys = [
  {
    "$group": {
      "_id": null,
      "array": {
        "$mergeObjects": "$$ROOT"
      }
    }
  },
  {
    "$project": {
      "keys": {
        "$setUnion": [
          {
            "$map": {
              "input": {
                "$reduce": {
                  "input": {
                    "$map": {
                      "input": { "$objectToArray": "$array" },
                      "in": {
                        "$cond": [
                          { "$eq": [{ "$type": "$$this.v" }, "object"] },
                          { "$objectToArray": "$$this.v" },
                          ["$$this"]
                        ]
                      }
                    }
                  },
                  "initialValue": [],
                  "in": { "$concatArrays": ["$$this", "$$value"] }
                }
              },
              "in": "$$this.k"
            }
          }
        ]
      }
    }
  }
];
With a little effort, we can also get the keys for all subdocuments in an array field, where the elements are objects as well.
This works fine for me:
var arrayOfFieldNames = [];
var items = db.NAMECOLLECTION.find();
while (items.hasNext()) {
  var item = items.next();
  for (var index in item) {
    arrayOfFieldNames[index] = index;
  }
}
for (var index in arrayOfFieldNames) {
  print(index);
}
Maybe slightly off-topic, but you can recursively pretty-print all keys/fields of an object:
function _printFields(item, level) {
  if ((typeof item) != "object") {
    return
  }
  for (var index in item) {
    print(" ".repeat(level * 4) + index)
    if ((typeof item[index]) == "object") {
      _printFields(item[index], level + 1)
    }
  }
}
function printFields(item) {
  _printFields(item, 0)
}
Useful when all objects in a collection have the same structure.
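For example, against a single document (the collection name is assumed):
printFields(db.things.findOne());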
To get a list of all the keys minus _id, consider running the following aggregate pipeline:
var keys = db.collection.aggregate([
  { "$project": {
    "hashmaps": { "$objectToArray": "$$ROOT" }
  } },
  { "$group": {
    "_id": null,
    "fields": { "$addToSet": "$hashmaps.k" }
  } },
  { "$project": {
    "keys": {
      "$setDifference": [
        {
          "$reduce": {
            "input": "$fields",
            "initialValue": [],
            "in": { "$setUnion": ["$$value", "$$this"] }
          }
        },
        ["_id"]
      ]
    }
  } }
]).toArray()[0]["keys"];
I know I am late to the party, but if you want a quick solution in Python that finds all keys (even nested ones), you can do it with a recursive function:
def get_keys(dl, keys=None):
    keys = keys or []
    if isinstance(dl, dict):
        keys += dl.keys()
        list(map(lambda x: get_keys(x, keys), dl.values()))
    elif isinstance(dl, list):
        list(map(lambda x: get_keys(x, keys), dl))
    return list(set(keys))
and use it like:
dl = db.things.find_one({})
get_keys(dl)
If your documents do not have identical keys, you can do:
dl = db.things.find({})
list(set(list(map(get_keys, dl))[0]))
but this solution can surely be optimized.
Generally, this approach solves finding keys in nested dicts, so it is not MongoDB-specific.
Based on @Wolkenarchitekt's answer (https://stackoverflow.com/a/48117846/8808983), I wrote a script that can find patterns in all keys in the db, and I think it can help others reading this thread:
"""
Python 3
This script get list of patterns and print the collections that contains fields with this patterns.
"""
import argparse
import pymongo
from bson import Code
# initialize mongo connection:
def get_db():
client = pymongo.MongoClient("172.17.0.2")
db = client["Data"]
return db
def get_commandline_options():
description = "To run use: python db_fields_pattern_finder.py -p <list_of_patterns>"
parser = argparse.ArgumentParser(description=description)
parser.add_argument('-p', '--patterns', nargs="+", help='List of patterns to look for in the db.', required=True)
return parser.parse_args()
def report_matching_fields(relevant_fields_by_collection):
print("Matches:")
for collection_name in relevant_fields_by_collection:
if relevant_fields_by_collection[collection_name]:
print(f"{collection_name}: {relevant_fields_by_collection[collection_name]}")
# pprint(relevant_fields_by_collection)
def get_collections_names(db):
"""
:param pymongo.database.Database db:
:return list: collections names
"""
return db.list_collection_names()
def get_keys(db, collection):
"""
See: https://stackoverflow.com/a/48117846/8808983
:param db:
:param collection:
:return:
"""
map = Code("function() { for (var key in this) { emit(key, null); } }")
reduce = Code("function(key, stuff) { return null; }")
result = db[collection].map_reduce(map, reduce, "myresults")
return result.distinct('_id')
def get_fields(db, collection_names):
fields_by_collections = {}
for collection_name in collection_names:
fields_by_collections[collection_name] = get_keys(db, collection_name)
return fields_by_collections
def get_matches_fields(fields_by_collections, patterns):
relevant_fields_by_collection = {}
for collection_name in fields_by_collections:
relevant_fields = [field for field in fields_by_collections[collection_name] if
[pattern for pattern in patterns if
pattern in field]]
relevant_fields_by_collection[collection_name] = relevant_fields
return relevant_fields_by_collection
def main(patterns):
"""
:param list patterns: List of strings to look for in the db.
"""
db = get_db()
collection_names = get_collections_names(db)
fields_by_collections = get_fields(db, collection_names)
relevant_fields_by_collection = get_matches_fields(fields_by_collections, patterns)
report_matching_fields(relevant_fields_by_collection)
if __name__ == '__main__':
args = get_commandline_options()
main(args.patterns)
As per the MongoDB documentation, a combination of distinct
(Finds the distinct values for a specified field across a single collection or view and returns the results in an array.)
and the indexes collection operation
(Returns an array that holds a list of documents that identify and describe the existing indexes on the collection.)
is what would return all possible values for a given key, or index.
So, in a given method, one could use something like the following to query a collection for all its registered indexes and return, say, an object with the indexes as keys (this example uses async/await for NodeJS, but obviously you could use any other asynchronous approach):
async function GetFor(collection, index) {
  let currentIndexes;
  let indexNames = [];
  let final = {};
  let vals = [];
  try {
    currentIndexes = await collection.indexes();
    await ParseIndexes();
    // Check if a specific index was queried, otherwise, iterate for all existing indexes
    if (index && typeof index === "string") return await ParseFor(index, indexNames);
    await ParseDoc(indexNames);
    await Promise.all(vals);
    return final;
  } catch (e) {
    throw e;
  }

  function ParseIndexes() {
    return new Promise(function (result) {
      let err;
      for (let ind in currentIndexes) {
        let index = currentIndexes[ind];
        if (!index) {
          err = "No Key For Index " + index; break;
        }
        let Name = Object.keys(index.key);
        if (Name.length === 0) {
          err = "No Name For Index"; break;
        }
        indexNames.push(Name[0]);
      }
      return result(err ? Promise.reject(err) : Promise.resolve());
    })
  }

  async function ParseFor(index, inDoc) {
    if (inDoc.indexOf(index) === -1) throw "No Such Index In Collection";
    try {
      await DistinctFor(index);
      return final;
    } catch (e) {
      throw e
    }
  }

  function ParseDoc(doc) {
    return new Promise(function (result) {
      let err;
      for (let index in doc) {
        let key = doc[index];
        if (!key) {
          err = "No Key For Index " + index; break;
        }
        vals.push(new Promise(function (pushed) {
          DistinctFor(key)
            .then(pushed)
            .catch(function (err) {
              return pushed(Promise.resolve());
            })
        }))
      }
      return result(err ? Promise.reject(err) : Promise.resolve());
    })
  }

  async function DistinctFor(key) {
    if (!key) throw "Key Is Undefined";
    try {
      final[key] = await collection.distinct(key);
    } catch (e) {
      final[key] = 'failed';
      throw e;
    }
  }
}
So querying a collection with the basic _id index would return the following (the test collection only had one document at the time of the test):
Mongo.MongoClient.connect(url, function (err, client) {
  assert.equal(null, err);
  let collection = client.db('my db').collection('the targeted collection');
  GetFor(collection, '_id')
    .then(function () {
      // returns
      // { _id: [ 5ae901e77e322342de1fb701 ] }
    })
    .catch(function (err) {
      // manage your error..
    })
});
Mind you, this uses methods native to the NodeJS driver. As some other answers have suggested, there are other approaches, such as the aggregation framework. I personally find this approach more flexible, as you can easily create and fine-tune how to return the results. Obviously, this only addresses top-level attributes, not nested ones.
Also, to guarantee that all documents are represented should there be secondary indexes (other than the main _id one), those indexes should be set as required.
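For example, with the Node driver an index can be created up front so the field shows up in collection.indexes() (the field name here is hypothetical):
// ensure a secondary index exists on a field you want reported
await collection.createIndex({ someField: 1 });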
We can achieve this using a mongo js file. Add the code below to a getCollectionName.js file and run it from a Linux console as shown below:
mongo --host 192.168.1.135 getCollectionName.js
db_set = connect("192.168.1.135:27017/database_set_name"); // for local testing
// db_set.auth("username_of_db", "password_of_db"); // if required
db_set.getMongo().setSlaveOk();
var collectionArray = db_set.getCollectionNames();
collectionArray.forEach(function(collectionName) {
  if (collectionName == 'system.indexes' || collectionName == 'system.profile' || collectionName == 'system.users') {
    return;
  }
  print("\nCollection Name = " + collectionName);
  print("All Fields :\n");
  var arrayOfFieldNames = [];
  var items = db_set[collectionName].find();
  // var items = db_set[collectionName].find().sort({'_id': -1}).limit(100); // if you want fast & scan only last 100 records of each collection
  while (items.hasNext()) {
    var item = items.next();
    for (var index in item) {
      arrayOfFieldNames[index] = index;
    }
  }
  for (var index in arrayOfFieldNames) {
    print(index);
  }
});
quit();
Thanks @ackuser
Following the thread from @James Cropcho's answer, I landed on the following tool, which I found super easy to use: mongoeye. It is a binary tool, which is exactly what I was looking for.
Using this tool it took about 2 minutes to get my schema exported from the command line.
I know this question is 10 years old, but there is no C# solution, and this took me hours to figure out. I'm using the .NET driver and System.Linq to return a list of the keys.
var map = new BsonJavaScript("function() { for (var key in this) { emit(key, null); } }");
var reduce = new BsonJavaScript("function(key, stuff) { return null; }");
var options = new MapReduceOptions<BsonDocument, BsonDocument>();
var result = await collection.MapReduceAsync(map, reduce, options);
var list = result.ToEnumerable().Select(item => item["_id"].ToString());
This one-liner extracts all keys from a collection into a comma-separated, sorted string:
db.<collection>.find().map((x) => Object.keys(x)).reduce((a, e) => { for (el of e) { if (!a.includes(el)) { a.push(el) } }; return a }, []).sort((a, b) => a.toLowerCase().localeCompare(b.toLowerCase())).join(", ")
The result of this query typically looks like this:
_class, _id, address, city, companyName, country, emailId, firstName, isAssigned, isLoggedIn, lastLoggedIn, lastName, location, mobile, printName, roleName, route, state, status, token
I extended Carlos LM's solution a bit so it's more detailed.
Example of a schema:
var schema = {
  _id: 123,
  id: 12,
  t: 'title',
  p: 4.5,
  ls: [{
    l: 'lemma',
    p: {
      pp: 8.9
    }
  },
  {
    l: 'lemma2',
    p: {
      pp: 8.3
    }
  }]
};
Type into the console:
var schemafy = function(schema, i, limit) {
  var i = (typeof i !== 'undefined') ? i : 1;
  var limit = (typeof limit !== 'undefined') ? limit : false;
  var type = '';
  var array = false;
  for (var key in schema) {
    type = typeof schema[key];
    array = (schema[key] instanceof Array) ? true : false;
    if (type === 'object') {
      print(Array(i).join(' ') + key + ' <' + ((array) ? 'array' : type) + '>:');
      schemafy(schema[key], i + 1, array);
    } else {
      print(Array(i).join(' ') + key + ' <' + type + '>');
    }
    if (limit) {
      break;
    }
  }
}
Run:
schemafy(db.collection.findOne());
Output:
_id <number>
id <number>
t <string>
p <number>
ls <object>:
 0 <object>:
  l <string>
  p <object>:
   pp <number>
I was trying to write this in Node.js and finally came up with the following:
db.collection('collectionName').mapReduce(
  function() {
    for (var key in this) {
      emit(key, null);
    }
  },
  function(key, stuff) {
    return null;
  }, {
    "out": "allFieldNames"
  },
  function(err, results) {
    var fields = db.collection('allFieldNames').distinct('_id');
    fields
      .then(function(data) {
        var finalData = {
          "status": "success",
          "fields": data
        };
        res.send(finalData);
        deleteCollection(db, 'allFieldNames'); // helper defined elsewhere
      })
      .catch(function(err) {
        res.send(err);
        deleteCollection(db, 'allFieldNames');
      });
  });
After reading the newly created collection "allFieldNames", delete it:
db.collection("allFieldNames").remove({}, function (err,result) {
db.close();
return;
});
I have one simpler workaround...
What you can do is: while inserting data/documents into your main collection "things", you also insert the attributes into one separate collection, let's say "things_attributes".
So every time you insert into "things", you fetch from "things_attributes", compare the values of that document with your new document's keys, and if any new key is present, append it to that document and re-insert it.
This way things_attributes will have only one document of unique keys, which you can easily get whenever you need it by using findOne().
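A sketch of that idea with the Node driver, using $addToSet with $each in place of the manual fetch-compare-reinsert cycle (the helper name and tracker document shape are my own):
// insert the document, then fold its keys into a single tracker document
async function insertThing(db, doc) {
  await db.collection('things').insertOne(doc);
  await db.collection('things_attributes').updateOne(
    { _id: 'keys' },
    { $addToSet: { keys: { $each: Object.keys(doc) } } }, // only unseen keys get appended
    { upsert: true }
  );
}
// later, every key ever inserted is one findOne() away:
// const { keys } = await db.collection('things_attributes').findOne({ _id: 'keys' });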

JavaScript Array Comparison Function

I currently have an array of users, each of which has a unique _id key/value.
user = [{_id: "1", ... }, {_id: "2", ... }, ... ]
I also have two other arrays, one named teams and another named accounts.
teams = [{ _id: "1", members: [{ userId: "2" }, { userId: "4" }, ... ], ... }]
accounts = [{ _id: "1", authorizedUsers: [{ userId: "3"}, ... ], ownerTeamId: "2" }, ... ]
I'm trying to create two comparison functions which take a user argument and output numberOfTeams and numberOfAccounts for the corresponding user.
I have attempted the numberOfTeams below but I'm not sure if it's the most optimal.
numberOfTeams(user) {
  let count = 0;
  teams.forEach(team => {
    team.members.forEach(member => {
      if (member.userId === user._id) {
        count++
      }
    })
  });
  return count;
}
With numberOfAccounts, I'm stuck on how to count an account when authorizedUsers contains user._id, OR when the account's ownerTeamId refers to a team whose members include user._id.
It’s probably a good start to write a function to get the teams a user belongs to:
function containsUserId(users, id) {
  return users.some(user => user.userId === id);
}
function getUserTeams(user, teams) {
  return teams.filter(team =>
    containsUserId(team.members, user._id));
}
because then you can write numberOfTeams using it:
numberOfTeams(user) {
  return getUserTeams(user, teams).length;
}
then a similar function to get accounts:
function getUserAccounts(user, accounts) {
  const userTeamIds = new Set(
    getUserTeams(user, teams).map(team => team._id)
  );
  return accounts.filter(account =>
    containsUserId(account.authorizedUsers, user._id) ||
    userTeamIds.has(account.ownerTeamId));
}
then numberOfAccounts using it:
numberOfAccounts(user) {
  return getUserAccounts(user, accounts).length;
}
Essentially: use more functions so you can understand the steps you’re taking to solve your own problem and, in doing so, use those steps more effectively.
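For example, with the sample data from the question in scope:
const user = { _id: "2" };
numberOfTeams(user);    // 1: user 2 appears in team 1's members
numberOfAccounts(user); // 0: user 2 is neither an authorized user nor on an owning team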
