I am building a CRUD API for a post-processing tool. I have some data with a structure like:
{
_date: '3/19/2021',
monitor: 'metric1',
project: 'bluejays',
id1: 'test-pmon-2',
voltageCondition: 'HV',
testData: [],
id: null
}
Previously, I maintained a separate MongoDB collection that stored high-level data about the jobs, and that is what would be displayed. This worked, but I would like to use just one collection for this group of data.
My current method iterates through all objects returned by the query. The query is relatively small and I set a limit on the return, so speed and memory allocation are not really a concern. Still, I would like to develop a more efficient method. This is my current code:
jobHeaderList_2 = [];
queriedData.forEach(doc => {
if (jobHeaderList_2.length == 0) {
jobHeaderList_2.push({
id1: doc.id1,
project: doc.project,
monitor: [doc.monitor],
date: doc._date
})
}
else {
let index = jobHeaderList_2.map(obj => {
return obj.id1;
}).indexOf(doc.id1);
if (index == -1) {
jobHeaderList_2.push({
id1: doc.id1,
project: doc.project,
monitor: [doc.monitor],
date: doc._date
})
}
else if (jobHeaderList_2[index].monitor.includes(doc.monitor) == false) {
jobHeaderList_2[index].monitor.push(doc.monitor);
}
}
});
This works fine, but I would like someone more experienced to suggest a better way of doing this. Basically, I want to group all objects with the same id1 into a single object that stores the monitor values in an array.
I do not want to change the data structure because it's efficient for the plotting that occurs elsewhere in the application.
Here is a solution using vanilla JavaScript, using an ES6 Map to key the data by id1.
let map = new Map(queriedData.map(doc => [doc.id1, {
    id1: doc.id1,
    project: doc.project,
    monitor: [],
    date: doc._date
}]));
for (let {id1, monitor} of queriedData) map.get(id1).monitor.push(monitor);
let jobHeaderList_2 = [...map.values()];
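One difference from the question's code: this version does not deduplicate monitor values. If duplicate monitors can appear for the same id1, a Set could be used per group; a minimal sketch of that variant:
// Variant that also deduplicates monitor values, mirroring the includes() check in the question
let map = new Map(queriedData.map(doc => [doc.id1, {
    id1: doc.id1,
    project: doc.project,
    monitor: new Set(),
    date: doc._date
}]));
for (let { id1, monitor } of queriedData) map.get(id1).monitor.add(monitor);
let jobHeaderList_2 = [...map.values()].map(h => ({ ...h, monitor: [...h.monitor] }));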
Another option using vanilla JavaScript that doesn't require any new knowledge:
const queriedData = [
{
_date: '3/19/2021',
monitor: 'metric1',
project: 'bluejays',
id1: 'test-pmon-2',
voltageCondition: 'HV',
testData: [],
id: null,
},
{
_date: '3/20/2021',
monitor: 'metric2',
project: 'yellowjays',
id1: 'test-pmon-2',
voltageCondition: 'HV',
testData: [],
id: null,
},
{
_date: '3/21/2021',
monitor: 'metric3',
project: 'orangejays',
id1: 'test-pmon-3',
voltageCondition: 'HV',
testData: [],
id: null,
},
]
function accumulateMonitor(queriedData) {
const jobHeaderList_2 = []
const indexById = {}
queriedData.forEach((doc) => {
const index = indexById[doc.id1]
if (index === undefined) {
indexById[doc.id1] = jobHeaderList_2.length
jobHeaderList_2.push({
id1: doc.id1,
project: doc.project,
monitor: [doc.monitor],
date: doc._date,
})
} else {
const jobHeader = jobHeaderList_2[index]
jobHeader.monitor.push(doc.monitor)
}
})
return jobHeaderList_2
}
console.log(accumulateMonitor(queriedData))
/*
[
{
id1: 'test-pmon-2',
project: 'bluejays',
monitor: [ 'metric1', 'metric2' ],
date: '3/19/2021'
},
{
id1: 'test-pmon-3',
project: 'orangejays',
monitor: [ 'metric3' ],
date: '3/21/2021'
}
]
*/
Well I have figured out an answer using pipelining!
I used the $group pipeline stage, the $addToSet accumulator, and the $addFields stage.
const test1 = await pmon.aggregate([
{
$group:{
_id:"$id1",
monitor:{
$addToSet: "$monitor"
}
}
},
{
$addFields:{
id1:"$_id"
}
}
]);
Essentially it works as follows:
group all documents by id1
collect the monitor values using $addToSet
finally, add an id1 field back, since id1 was used as the value for the _id key.
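If the project and date fields from the original jobHeaderList_2 shape are still needed, a $first accumulator could be added to the $group stage. A sketch of that extension (my assumption, not part of the original pipeline):
const test2 = await pmon.aggregate([
    {
        $group: {
            _id: "$id1",
            monitor: { $addToSet: "$monitor" },
            // $first keeps the first value encountered per group (add a $sort stage first if order matters)
            project: { $first: "$project" },
            date: { $first: "$_date" }
        }
    },
    {
        $addFields: { id1: "$_id" }
    }
]);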
Related
I have a possibly infinite category tree and I would like to add, update or remove categories at any level with setState in React. I know this is possible with recursion, but I don't have enough experience to manage this problem on my own. Here is how the data could look:
const categories = [
{
id: "1",
name: "category1",
subCategories: [
{
id: "sub1",
name: "subcategory1",
subCategories: [
{ id: "subsub1", name: "subsubcategory1", subCategories: [] },
{ id: "subsub2", name: "subsubcategory2", subCategories: [] }
]
},
{ id: "sub2", name: "subcategory2", subCategories: [] }
]
},
{
id: "2",
name: "category2",
subCategories: []
}
]
Assuming the tree you pass in is a single category object rather than the top-level array, the add and remove functions could look like the following (the same pattern works for update):
function add (tree, newCategory, parentId) {
if(tree.id === parentId)
return {
...tree,
subCategories: tree.subCategories.concat(newCategory)
}
return {
...tree,
subCategories: tree.subCategories.map(c => add(c, newCategory, parentId))
}
}
function remove (tree, idToRemove) {
if(tree.subCategories.map(c => c.id).includes(idToRemove))
return {
...tree,
subCategories: tree.subCategories.filter(c => c.id !== idToRemove)
}
return {
...tree,
subCategories: tree.subCategories.map(c => remove(c, idToRemove))
}
}
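Since the question's top-level categories value is an array of root nodes, these functions could be applied by mapping over it. A usage sketch, assuming a setCategories updater from useState (the names here are hypothetical):
// Add a new subcategory under the node with id "sub2", immutably, across every root node
const newCategory = { id: "sub3", name: "subcategory3", subCategories: [] };
setCategories(prev => prev.map(root => add(root, newCategory, "sub2")));

// Remove the node with id "subsub1"
setCategories(prev => prev.map(root => remove(root, "subsub1")));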
Prologue
To update a nested property in an immutable way, you need to copy or perform immutable operations on all its parents.
Setting a property on a nested object:
return {
...parent,
person: {
...parent.person,
name: 'New Name'
}
}
Arrays: You may pre-clone the array, or use a combination of .slice(), .map(), .filter() and the spread syntax (...). Warning: .splice() mutates the array.
(This can be a long topic, so this is just a very fast overview.)
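For instance, a few common immutable array updates might look like this (a quick sketch with made-up data):
const cats = [{ id: "1" }, { id: "2" }, { id: "3" }];

const added   = [...cats, { id: "4" }];                                         // append
const removed = cats.filter(c => c.id !== "2");                                 // remove by id
const updated = cats.map(c => (c.id === "3" ? { ...c, name: "renamed" } : c));  // update one item
// .slice() copies a range; by contrast, .splice() would mutate cats in place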
immer
As this can quickly get very ugly on objects with deep nesting, the immer lib pretty much becomes a must at some point. Immer "creates new immutable state from mutations".
import { produce } from 'immer'

const newState = produce(oldState, draft => {
    draft[1].subCategories[3].subCategories[1].name = 'Category'
})
In the extreme case, you can combine immer with lodash to create mutations in arbitrary places:
import set from 'lodash/set'
const newState = produce(oldState, draft => {
    set(draft, [0, 'subCategories', 5, 'subCategories', 3], { id: 5, name: 'Cat' })
})
"You might not need lodash" website has the recursive implementation for lodash/set. But, seriously, just use lodash.
PLUSES:
If you are using redux-toolkit, immer already is auto-applied on reducers, and is exposed as createNextState (check docs with care).
Deeply nested state can be an interesting use case for normalizr (long talk).
This is how the recursive function could look.
The arguments:
id: the id to look for
cats: the categories array to loop over
nestSubCategory: (boolean) whether to add the new object inside the matched category's subCategories array, or as a sibling next to it
subCategory: the category object we want to insert
const addCategories = (id, cats, nestSubCategory, subCategory) => {
    const cat = cats.find(item => item.id === id)
    if (cat) {
        if (nestSubCategory) {
            cat.subCategories.push(subCategory)
        } else {
            cats.push(subCategory)
        }
        return true
    }
    // not found at this level: recurse into every non-empty subCategories array
    return cats
        .filter(item => item.subCategories?.length)
        .some(item => addCategories(id, item.subCategories, nestSubCategory, subCategory))
}

addCategories("sub2", categories, true, { id: "blabla2", name: "blablacategory1", subCategories: [] })
//console.log(categories)
console.log(
JSON.stringify(categories)
)
Remember to update the state by replacing the entire categories array once the function has executed.
Be careful with recursion 🖖🏽
You can remove items in a similar way; I leave the pleasure of experimenting to you.
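For reference, a removal in the same mutating style could look roughly like this (my own sketch, not part of the original answer):
// Remove the category with the given id anywhere in the tree (mutates cats in place)
const removeCategory = (id, cats) => {
    const index = cats.findIndex(item => item.id === id)
    if (index !== -1) {
        cats.splice(index, 1)
        return true
    }
    // not at this level: recurse into every non-empty subCategories array
    return cats.some(item => item.subCategories?.length && removeCategory(id, item.subCategories))
}

removeCategory("subsub2", categories)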
I'm a beginner in backend development (and development in general). I've been stuck on one problem for two weeks, so I'm asking for an expert opinion.
I have collections in Mongo - Order and Items. I generated IDs for them through UUID (I don't know if this info is important or not).
The Order object stores an array of items (items[]) with the id and quantity of a specific item:
Order = {
_id: '123123123123123',
items: [
{
_id: '0001',
quantity: 1
},
{
_id: '0002',
quantity: 4
}
]
}
The Items look like this:
Items = [
{
_id: '0001',
title: 'Pizza',
price: 650,
description: 'Some text'
},
{
_id: '0002',
title: 'Pasta',
price: 500,
description: 'Some text'
}
]
The question is: how do I look up each Item by id from the Items array, put its full information (title, price, and description) into the Order's items array, and send that to the front end?
I tried using for loops and forEach, but with no result. Accessing a specific item in Order works (e.g. Order.items[0]._id), but in a loop (e.g. Order.items[i]._id) I get an error that Order.items[i] is undefined.
Controller:
class testController {
async getTestOrder(req, res) {
try {
const id = req.params.id
const order = await Order.findOne({_id: id})
const items = await Item.find({})
for (let i =0; i < order.items.length; i++){
for (let j = 0; j < items.length; j++){
if(order.items[i]._id === items[j]._id){
order.items.push(items[j]) // don't know how to add new data, leaving quantity
}
}
}
res.send(order)
} catch (error) {
console.log(error);
res.status(404).json({message: 'Order was not found'})
}
}
}
As a result, I should get the following response to the front:
Order = {
_id: '123123123123123',
items: [
{
_id: '0001',
quantity: 1,
title: 'Pizza',
price: 650,
description: 'Some text'
},
{
_id: '0002',
quantity: 4,
title: 'Pasta',
price: 500,
description: 'Some text'
}
]
}
You can loop over the Order.items array and find the matching entry in the Items array based on _id, then merge it and assign it back onto the Order.items element:
Order.items.forEach(oi => {
let obj = Items.find(i => i._id === oi._id); //find the object
oi = Object.assign(oi,obj); //merge
})
To make the search for the right item easier, I created a little auxiliary object itms (a "hash") that allows me to access the target item directly:
const order = { _id: '123123123123123', items: [{ _id: '0001', quantity: 1 }, { _id: '0002', quantity: 4 }] };
const items = [
    { _id: '0001', title: 'Pizza', price: 650, description: 'Some text' },
    { _id: '0002', title: 'Pasta', price: 500, description: 'Some text' }
];
const itms = {}; // lookup keyed by _id for direct access
items.forEach(o => itms[o._id] = o);
console.log(order.items.map(o => ({ quantity: o.quantity, ...itms[o._id] })));
You're modifying the order.items array while iterating over it, so the loop keeps growing. If you run your code (the for loop) in a browser console you'll get an 'Uncaught out of memory' error.
Instead of mutating the existing items, consider creating new ones:
const result = { // creating new order object
...order, // getting all order object props, but overwriting items
items: order.items.map(orderItem => { // iterating over order.items
const additionalData = items.find(i => i._id === orderItem._id); // find desired item by _id
return { // returning a new enriched item
...orderItem, // old item props (_id and quantity)
...(additionalData || {}) // found item props (title, price and description); Array.find returns undefined when nothing matches, so `{}` ensures we don't spread an undefined value
};
})
}
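In the question's controller, the same idea would mean building result from the fetched data and sending it, instead of pushing into order.items. A sketch, assuming Mongoose's .lean() to get plain objects (the question doesn't show the model setup):
// inside getTestOrder, with id = req.params.id as in the question
const order = await Order.findOne({ _id: id }).lean();
const items = await Item.find({}).lean();

const result = {
    ...order,
    items: order.items.map(orderItem => ({
        ...orderItem,
        ...(items.find(i => i._id === orderItem._id) || {})
    }))
};
res.send(result);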
Here is what my documents look like:
{
id: 123,
tasks: {
5f6effae74a3802fe80d02a4: Object
5f6f289b73cc4e43546733bb: Object
5f6f28d873cc4e43546733bc: Object
5f6f291073cc4e43546733bd: Object,
5f6f291073cc4e43541211cc: Object,
5f6f291073cc4e43465662eq: Object
},
taskIds: [
5f6effae74a3802fe80d02a4,
5f6f289b73cc4e43546733bb,
5f6f28d873cc4e43546733bc,
5f6f291073cc4e43546733bd
]
}
I need to delete each object in tasks whose key matches an id from taskIds, so I thought about mapping over taskIds like this:
const tasksDocument = await db.collection.findOne({id: 123})
tasksDocument.taskIds.map(async (id) => await tasksDocument.updateOne({ $unset: { [`tasks.${[id]}`]: "" } }))
But I'm wondering if there is a cleaner way to do it without mapping?
If you're using Mongo version 4.2+ you can use pipelined update which allows you to use aggregation operators in an update.
Our strategy will be to convert the object to an array, filter it against taskIds, and then convert it back to an object. We will achieve this using operators like $arrayToObject, $objectToArray, $filter and more.
db.collection.update(
{
'id': 123
},
[
{
'$set': {
tasks: {
$arrayToObject: {
$filter: {
input: {$objectToArray: '$tasks'},
as: 'task',
cond: {
$eq: [
{
$size: {
$setIntersection: [['$$task.k'], '$taskIds']
}
},
0
]
}
}
}
}
}
}
])
For older Mongo versions you have to split this into two calls and do it in code, similar to your solution (just with a single $unset update instead of one per task):
const tasksDocument = await db.collection.findOne({id: 123});
// $unset in a classic update takes an object of field paths, not an array
const unsetObj = {};
tasksDocument.taskIds.forEach(v => { unsetObj[`tasks.${v}`] = ""; });
await db.collection.updateOne({id: 123}, {$unset: unsetObj});
I am doing some filtering using React Context and I am having difficulty updating a nested array value when a filter is selected.
I want to filter by a minimum price, which the user selects in a dropdown. I then dispatch an action to store that in the reducer's state. However, when I try to update an inner array (homes: []) that lives inside the developments array (which is populated with data on load), I seem to wipe out the existing data outside the inner array.
In a nutshell, I need to keep the existing developments array intact and filter by price within each homes array. I have provided a copy of my example code below; please let me know if I have explained this well enough!
export const initialState = {
priceRange: {
min: null
},
developments: []
};
// Once populated on load, the developments array (in the initialState object)
// will have a structure like this,
// I want to be able to filter the developments by price which is found below
developments: [
    {
        name: 'Foo',
        location: 'Bar',
        distance: 'xxx miles',
        homes: [
            {
                name: 'Foo',
                price: 100000
            },
            {
                name: 'Bar',
                price: 200000
            }
        ]
    }
]
case 'MIN_PRICE':
return {
...state,
priceRange: {
...state.priceRange,
min: action.payload
},
developments: [
...state.developments.map(development => {
// Something here is causing it to break I believe?
development.homes.filter(house => house.price < action.payload);
})
]
};
<Select onChange={event=>
dropdownContext.dispatch({ type: 'MIN_PRICE', payload: event.value }) } />
You have to separate homes from the other properties, then you can apply the filter and rebuild a development object:
return {
...state,
priceRange: {
...state.priceRange,
min: action.payload
},
developments: state.developments.map(({homes, ...other}) => {
return {
...other,
homes: homes.filter(house => house.price < action.payload)
}
})
}
I'm using the normalizr util to process an API response that isn't based on an id model. As far as I know, normalizr typically works with data that has ids, but maybe there is some way to generate ids "on the go"?
My API response example:
```
// input data:
const inputData = {
    doctors: [
        {
            name: 'Jon',
            post: 'chief'
        },
        {
            name: 'Marta',
            post: 'nurse'
        },
        //....
    ]
}

// expected output data:
const outputData = {
    entities: {
        nameCards: {
            uniqueID_0: { id: 'uniqueID_0', name: 'Jon', post: 'uniqueID_3' },
            uniqueID_1: { id: 'uniqueID_1', name: 'Marta', post: 'uniqueID_4' }
        },
        positions: {
            uniqueID_3: { id: 'uniqueID_3', post: 'chief' },
            uniqueID_4: { id: 'uniqueID_4', post: 'nurse' }
        }
    },
    result: 'uniqueID_0'
}
```
P.S.
I heard about normalizr generating IDs "under the hood" for cases like mine, but I did not find such a solution.
As mentioned in this issue:
Normalizr is never going to be able to generate unique IDs for you. We don't do any memoization or anything internally, as that would be unnecessary for most people.
Your working solution is okay, but will fail if you receive one of these entities again later from another API endpoint.
My recommendation would be to find something that's constant and unique on your entities and use that as something to generate unique IDs from.
And then, as mentioned in the docs, you need to set idAttribute to replace 'id' with another key:
import { normalize, schema } from 'normalizr';
import omit from 'lodash/omit';

const data = { id_str: '123', url: 'https://twitter.com', user: { id_str: '456', name: 'Jimmy' } };
const user = new schema.Entity('users', {}, { idAttribute: 'id_str' });
const tweet = new schema.Entity('tweets', { user: user }, {
    idAttribute: 'id_str',
    // Apply everything from entityB over entityA, except for "favorites"
    mergeStrategy: (entityA, entityB) => ({
        ...entityA,
        ...entityB,
        favorites: entityA.favorites
    }),
    // Remove the URL field from the entity
    processStrategy: (entity) => omit(entity, 'url')
});
const normalizedData = normalize(data, tweet);
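Applied to the question's doctors data, idAttribute can also be a function, so something constant on the entity (as the issue above recommends) can serve as the key. A sketch, assuming name is unique enough to act as the id:
import { normalize, schema } from 'normalizr';

// key doctors by their (assumed unique) name instead of a missing id field
const doctor = new schema.Entity('doctors', {}, {
    idAttribute: (value) => value.name
});

const normalizedDoctors = normalize(inputData.doctors, [doctor]);
// normalizedDoctors.entities.doctors is then keyed by 'Jon', 'Marta', ...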
EDIT
You can always provide unique ids using an external lib or by hand:
inputData.doctors = inputData.doctors.map((doc, idx) => ({
...doc,
id: `doctor_${idx}`
}))
Alternatively, use a processStrategy, which is basically a function, and assign your ids there, i.e. value.id = uuid(). See this example: https://github.com/paularmstrong/normalizr/issues/256