frontendtasks = [
{"id": 1, "name": "User Deletion", "script": "UserDeletion"},
{"id": 2, "name": "User Creation", "script_name": "UserCreation"}
]
backendtasks = [
{"id": 1, "name": "User Deletion", "script": "UserDeletion_V2"}
]
I'm trying to delete the entry with id = 1 in frontendtasks and push the matching entry from backendtasks with this code:
if (backendtasks != 0) {
for (updated_task in backendtasks ) {
for (oldtask in frontendtasks) {
if (frontendtasks[oldtask].id == backendtasks[updated_task].id) {
frontendtasks[oldtask] = backendtasks[updated_task]
delete backendtasks[updated_task];
break;
}
}
}
for (new_task in backendtasks) {
frontendtasks.unshift(backendtasks[new_task])
}
}
This is really slow and CPU hits 100% in browser with 700 items. Is there any efficient way to implement this?
Don't loop through both arrays; instead, use an object to map backend ids to values:
const mappings = {};
for (const task of backendtasks) {
mappings[task.id] = task;
}
for (let i = 0; i < frontendtasks.length; i++) {
const curid = frontendtasks[i].id;
if (curid in mappings) {
frontendtasks[i] = mappings[curid];
delete mappings[curid];
}
}
// push is faster than unshift
for (const key in mappings) {
frontendtasks.push(mappings[key]);
}
Approach: since you have two arrays, I would suggest first normalizing the backend array into an object, then iterating over the frontend array and looking each entry up in the normalized object; an object lookup is O(1), compared to O(n) for an array search.
function getFrontendTasks(){
const frontendtasks = [
{"id": 1, "name": "User Deletion", "script": "UserDeletion"},
{"id": 2, "name": "User Creation", "script_name": "UserCreation"}
]
const backendtasks = [
{"id": 1, "name": "User Deletion", "script": "UserDeletion_V2"}
]
const normalizedBackendTasks = backendtasks.reduce((acc, val) => ({...acc, [val.id]: val}), {});
const newFrontendTasks = frontendtasks.map((task) => normalizedBackendTasks[task.id] || task);
return newFrontendTasks
}
console.log(getFrontendTasks())
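One note on the normalization step: spreading the accumulator inside reduce copies the object on every iteration, so building the lookup is itself O(n²). A minimal sketch of an O(n) alternative using Object.fromEntries (same sample data as the question):

```javascript
const backendtasks = [
  { id: 1, name: "User Deletion", script: "UserDeletion_V2" }
];
// fromEntries consumes [key, value] pairs in a single pass,
// without re-copying an accumulator on each step.
const normalized = Object.fromEntries(backendtasks.map(t => [t.id, t]));

const frontendtasks = [
  { id: 1, name: "User Deletion", script: "UserDeletion" },
  { id: 2, name: "User Creation", script_name: "UserCreation" }
];
// Replace any frontend task that has a backend counterpart.
const merged = frontendtasks.map(t => normalized[t.id] || t);
```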
Creating a mapping table reduces the time complexity from O(n^2) to O(n), by removing the nested for loops, which is very expensive.
Try the following code:
const map = {};
backendtasks.forEach(bt => (map[bt.id] = bt));
frontendtasks.forEach((ft, idx) => {
if (map[ft.id]) {
frontendtasks[idx] = map[ft.id];
delete map[ft.id];
}
});
frontendtasks = frontendtasks.concat(Object.values(map));
Somehow I didn't see the map() function in any solution, so here is one that creates a new array as shown below.
This returns a new array with the new value. As you can see, it takes an array, an id (this could be anything of any type, though), and a callback.
It searches for the id in the array and runs the callback when found, which is an efficient way to do what you want.
In the callback, I used find() on backendtasks, simply because I need to find the item that has the same id (id: 1).
When found, it returns the item from backendtasks and completes the function by returning that value from the map() callback.
So this should be O(n), considering that the callback only runs once, and it's a more elegant solution for multiple uses, in my opinion.
const frontendtasks: any[] = [];
const backendtasks: any[] = [];
const fn = (arr: any[], id: number, callback: (removed: any) => any) => {
return arr.map((ft) => {
if (ft.id !== id) return ft;
else return callback(ft);
});
};
fn(frontendtasks, 1, (rm) => backendtasks.find((bt) => bt.id === rm.id));
I have two lists in javascript that are of same structure like below:
var required_documents = [{"id":1,"dt":1},{"id":2,"dt":2},{"id":3,"dt":3}];
var existing_documents = [{"id":1,"dt":1},{"id":2,"dt":2},{"id":3,"dt":4}];
I need to remove all records from the database that are in the existing_documents list (comparing on "dt") but NOT in the required_documents list.
For the above scenario I should remove only {"id":3,"dt":4} and insert {"id":3,"dt":3}. I am not sure how to compare on just one property. The snippet below is something I found on Stack Overflow some time ago but can't find again; apologies for not referencing it.
required_documents.forEach((obj) => {
const elementInArr2 = existing_documents.find((o) => o.dt === obj.dt);
console.log('found elementinarr: ' + obj.dt);
});
This returns the matching objects (dt:1, dt:2, dt:3), but I need dt:4 from the existing documents list, as it is the one that is not in the required documents list and needs to be deleted. How can I get just the one that is not in the required documents list?
Assuming both id and dt properties are significant, I would first create a means of hashing an entry and then build a hashed set of required_documents.
Then you can filter out anything from existing_documents that is in the set, leaving only the results you want.
const required_documents = [{"id":1,"dt":1},{"id":2,"dt":2},{"id":3,"dt":3}];
const existing_documents = [{"id":1,"dt":1},{"id":2,"dt":2},{"id":3,"dt":4}];
// a simple stringify hash
const createHash = ({ id, dt }) => JSON.stringify({ id, dt });
const requiredHashSet = new Set(required_documents.map(createHash));
const result = existing_documents.filter(
(doc) => !requiredHashSet.has(createHash(doc))
);
console.log(result);
The hash creation can be anything that produces a comparable entity that can uniquely identify a record.
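For example, for flat records like these, a simple composite string key works too (assuming neither field's text can contain the separator):

```javascript
const required_documents = [{id:1,dt:1},{id:2,dt:2},{id:3,dt:3}];
const existing_documents = [{id:1,dt:1},{id:2,dt:2},{id:3,dt:4}];

// Template-literal key instead of JSON.stringify: cheaper, and
// order-independent since we list the fields explicitly.
const keyOf = ({ id, dt }) => `${id}|${dt}`;
const requiredKeys = new Set(required_documents.map(keyOf));

// Keep only the existing docs whose (id, dt) pair is not required.
const toRemove = existing_documents.filter(doc => !requiredKeys.has(keyOf(doc)));
```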
You need to run it twice to confirm there are no elements left in existing_documents, so create a function and reuse it:
var required_documents = [{"id":1,"dt":1},{"id":2,"dt":2},{"id":3,"dt":3}];
var existing_documents = [{"id":1,"dt":1},{"id":2,"dt":2},{"id":3,"dt":4}]
let output = [];
output = output.concat(extractUniqueValues(required_documents, output));
output = output.concat(extractUniqueValues(existing_documents, output));
console.log(output)
function extractUniqueValues(input, output){
return input.filter((item)=>{
return !output.find(v => v.dt == item.dt)
})
}
You can do it like below:
var required_documents = [
{ id: 1, dt: 1 },
{ id: 2, dt: 2 },
{ id: 3, dt: 3 },
];
var existing_documents = [
{ id: 1, dt: 1 },
{ id: 2, dt: 2 },
{ id: 3, dt: 4 },
];
for (let index = 0; index < required_documents.length; index++) {
const element = required_documents[index];
for (let i = existing_documents.length - 1; i >= 0; i--) {
const child = existing_documents[i];
if (element.id === child.id && element.dt === child.dt) {
existing_documents.splice(i, 1);
}
}
}
console.log("not exist", existing_documents);
console.log("unique items", required_documents);
LOG not exist [{"dt": 4, "id": 3}]
LOG unique items [{"dt": 1, "id": 1}, {"dt": 2, "id": 2}, {"dt": 3, "id": 3}]
If you don't care about time complexity, something like this should work:
var new_documents = existing_documents.filter(ed => {
return !required_documents.find(rd => rd.dt == ed.dt);
});
Edit: Okay, I just reread your question and I'm a bit confused. Do you want the object {id: 3, dt: 3} inside the new array as well?
I have a Typescript project where I want to join all the values of an Object except one.
This is my Object:
let dataInit = {
"host": "CAPR",
"ua": "RMA",
"country": "VE",
"page":3
};
This is what I do:
let dataJoin = Object.values(dataInit).join(',')
This is what I get:
CAPR,RMA,VE,3
I need to know how to remove the 'page' property, this is what I want:
CAPR,RMA,VE
If you don't know the other attributes, you can first use Object.entries() to give you an array of arrays containing the keys and values. That array can then be filtered to remove the "page" element, mapped to just contain the value, and finally joined.
let dataInit = {
"host": "CAPR",
"ua": "RMA",
"country": "VE",
"page":3
};
console.log(
Object.entries(dataInit)
.filter(([key,val]) => key !== "page")
.map(([_,val]) => val)
.join(",")
)
I would destructure the object first and create a new one
const { host, ua, country } = dataInit
const dataNew = { host, ua, country }
And then call the values join method on the new object.
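Put together, the two steps look like this:

```javascript
const dataInit = { host: "CAPR", ua: "RMA", country: "VE", page: 3 };

// Pull out only the properties we want to keep...
const { host, ua, country } = dataInit;
const dataNew = { host, ua, country };

// ...then join their values.
const dataJoin = Object.values(dataNew).join(",");
// → "CAPR,RMA,VE"
```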
You could filter the resulting array:
let dataJoin = Object.values(dataInit).filter(element => typeof element === "string").join(",");
Or you can use destructuring as presented in the other comments.
You can use a for...in loop to iterate over the object's properties and skip the page property.
let dataInit = {
"host": "CAPR",
"ua": "RMA",
"country": "VE",
"page": 3
};
const convertObject = (obj) => {
let list = [];
for (const prop in obj) {
if (prop !== "page") {
list.push(obj[prop]);
}
}
return list.join(",");
}
console.log(convertObject(dataInit));
Let's say I have the following array of objects:
let teams = [
{ Name: "Los Angeles Lakers", Championships: "17" },
{ Name: "Boston Celtics", Championships: "17" },
{ Name: "Cleveland Cavaliers", Championships: "01" },
{ Name: "San Antonio Spurs", Championships: "05" },
];
I sort the array:
teams.sort((a, b) => parseInt(a.Championships) - parseInt(b.Championships));
And so I have:
The thing is now I want to have distinct values for the property Championships, so the final result would be (I don't care which final result I will have, I just want to make it work):
Or the following:
How can I do it properly?
Thank you all!
There are a couple ways of doing this.
A naïve approach would be just to iterate over the teams collection and search the results to see if they already include a team with that specific championship count. If not, add it. This can be slow because you iterate over the results N times for N items, but it is probably good enough for normal use cases.
const distinctBy = (array, keySelector) => {
return array.reduce((result, i) => {
const index = result.findIndex((j) => keySelector(j) === keySelector(i));
if (index === -1) {
result.push(i);
}
return result;
}, []);
};
distinctBy(teams, (team) => team.Championships);
The performance of this can be improved by using a Set to store known keys, potentially reducing the iteration count by an order of magnitude compared to the previous solution.
const distinctBy = (array, keySelector) => {
const result = [];
const keys = new Set();
for (let item of array) {
const key = keySelector(item);
if (!keys.has(key)) {
keys.add(key);
result.push(item);
}
}
return result;
};
distinctBy(teams, (team) => team.Championships);
If you already know your data set is sorted, you can simply iterate the collection and include values as long as the current key does not match the previous key. This is a common technique in databases and is very fast. Again, it requires the data to be sorted.
const distinctBy = (array, keySelector) => {
const result = [];
let previous = "";
for (let item of array) {
const key = keySelector(item);
if (key != previous) {
result.push(item);
}
previous = key;
}
return result;
};
distinctBy(teams, (team) => team.Championships);
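If the input is not guaranteed to be sorted and you don't mind which duplicate survives, a compact alternative (not shown in the snippets above) keys a Map on the selector; note it keeps the last item seen per key:

```javascript
const teams = [
  { Name: "Los Angeles Lakers", Championships: "17" },
  { Name: "Boston Celtics", Championships: "17" },
  { Name: "Cleveland Cavaliers", Championships: "01" },
  { Name: "San Antonio Spurs", Championships: "05" },
];

// Map keys are unique; setting the same key twice overwrites,
// so each Championships value survives exactly once.
const distinct = [...new Map(teams.map(t => [t.Championships, t])).values()];
```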
I am trying to optimize selecting data from a large table (an array of objects).
I'd like to save multiple values from a single row and then write to localStorage.
let customerTable = [
{
"customer": "Apple Computers",
"contact": "Steve Jobs",
"id": 1,
"city": "Cupertino"
},
{
"customer": "Microsoft",
"contact": "Bill Gates",
"id": 2,
"city": "Redmond"
},
{
"customer": "Microsoft",
"contact": "Satya Nadella",
"id": 3,
"city": "Redmond"
}
]
let selectedRow = customerTable
.filter(i => { return i.customer === selectedCustomer })
.filter(i => { return i.contact === selectedContact })
let id = selectedRow
.map(a => a.id)
.filter((item, pos, self) => {return self.indexOf(item) === pos}) // Remove duplicates
let city = selectedRow
.map(a => a.city)
.filter((item, pos, self) => { return self.indexOf(item) === pos })
Is there a more performant method to selecting multiple values from a data model of this type?
The filters look fine; however, you could combine them into a single composed predicate.
Getting unique values can be optimized using Set and reduce:
let id = [...selectedRow.reduce(
(result,item)=>result.add(item.id)
,new Set()
)]
Or as Jonas pointed out (so you don't need reduce):
let id = [...new Set(
selectedRow.map(item=>item.id)
)]
let customerTable = [
{
"customer": "Apple Computers",
"contact": "Steve Jobs",
"id": 1,
"city": "Cupertino"
},
{
"customer": "Microsoft",
"contact": "Bill Gates",
"id": 2,
"city": "Redmond"
},
{
"customer": "Microsoft",
"contact": "Bill Gates",
"id": 2,
"city": "Redmond"
}
]
let selectedCustomer = "Microsoft";
let selectedContact = "Bill Gates";
// [[id], [city]]
let results = customerTable.reduce((_i, _j) => {
if(!(_j.customer === selectedCustomer && _j.contact === selectedContact)) return _i;
_i[0].push(_j.id);
_i[1].push(_j.city);
return _i;
}, [[], []])
.map((v) => v.filter((i, j, a) => a.indexOf(i) === j));
The last .map/.filter is for removing duplicated items (as per the question); if you are sure there will not be any duplicates, you can remove that line.
If there is no need for other filtering and no duplicates, the code will look like:
let results = customerTable.reduce((_i, _j) => {
if(!(_j.customer === selectedCustomer && _j.contact === selectedContact)) return _i;
_i[0] = _j.id;
_i[1] = _j.city;
return _i;
}, [-1, ""])
In general you want to reduce the number of loops you have, so you shouldn't use multiple array operations when the same result can be achieved with one loop. The operations you're performing can be optimized.
let selectedRow = customerTable
.filter(i => { return i.customer === selectedCustomer })
.filter(i => { return i.contact === selectedContact });
loops through the array twice. It can be rewritten to loop through the array only once:
let selectedRow = customerTable
.filter(i => {return i.customer === selectedCustomer && i.contact === selectedContact});
The other examples also use multiple array operations that could be performed in one loop.
Your current code computes selectedRow as an array of all matching customer/contact pairs, with city and id as arrays of the unique cities and ids. All of this can be done in a single loop:
// using Set for performance as suggested by #HMR
let selectedRows = [], cities = new Set(), ids = new Set();
for (let i = 0; i < customerTable.length; i++) {
if (customerTable[i].customer === selectedCustomer && customerTable[i].contact === selectedContact) {
selectedRows.push(customerTable[i]);
// include uniqueness constraint on cities and ids
cities.add(customerTable[i].city);
ids.add(customerTable[i].id);
}
}
Depending on where you're getting your data if you can refactor it, you would get better performance on this search with a hashmap (object) with customer, contact, or some combination of both as the keys.
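A sketch of that refactor, assuming a composite `customer|contact` key (the key format and variable names here are my own, not from the question):

```javascript
const customerTable = [
  { customer: "Apple Computers", contact: "Steve Jobs", id: 1, city: "Cupertino" },
  { customer: "Microsoft", contact: "Bill Gates", id: 2, city: "Redmond" },
  { customer: "Microsoft", contact: "Satya Nadella", id: 3, city: "Redmond" }
];

// Build the index once; each key maps to every row for that pair.
const byCustomerContact = new Map();
for (const row of customerTable) {
  const key = `${row.customer}|${row.contact}`;
  if (!byCustomerContact.has(key)) byCustomerContact.set(key, []);
  byCustomerContact.get(key).push(row);
}

// One O(1) lookup instead of filtering the whole table per query.
const rows = byCustomerContact.get("Microsoft|Bill Gates") || [];
```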
First, why do you have duplicate IDs? Ain't the point of IDs that they are unique?
Aside from optimizing the actual code, you can also optimize the data you are filtering; as mentioned in the comment, it's faster to search a short list than a long one.
So if your Array changes relatively rarely compared to how often you search the data, it may be worth creating one or more indices. This could be as simple as:
let ixCustomer = new Map();
const ixCustomerKey = item => item.customer.charAt(0).toLowerCase();
const add = (index, key, value) => {
if (index.has(key)) index.get(key).push(value);
else index.set(key, [value]);
}
for(let item of customerTable){
add(ixCustomer, ixCustomerKey(item), item);
}
so that if you search you don't have to search customerTable but only a subset, which if you choose the right way to index the data, should be way smaller than the original Array.
let id = new Set();
let city = new Set();
let arr = ixCustomer.get(selectedCustomer.charAt(0).toLowerCase()) || [];
for(let item of arr){
if(item.customer !== selectedCustomer || item.contact !== selectedContact) continue;
id.add(item.id);
city.add(item.city);
}
But you need to know whether this overhead is worth it for your data and your use case.
How do I add a uniqueId field to the JSON below? The array holds a large amount of data and needs a dynamically generated unique identifier on each existing entry.
[{"title":"Accompanying"},{"title":"Chamber music"},{"title":"Church
music"}......]
so it should look as follows:
[{"title":"Accompanying", "uniqueId": 1},{"title":"Chamber music", "uniqueId": 2}..]
uniqueId can be a number or a GUID.
Note: I don't know whether "title" or any other fields will be present, so I can't map the fields by name.
I would go for a simple for loop
let myArray = [{"title":"Accompanying"},{"title":"Chamber music"},{"title":"Church music"}];
let i = 0, ln = myArray.length;
for (i;i<ln;i++){
myArray[i].uniqueId = i+1;
}
console.log(myArray);
If this is a one time thing you could do the following:
const newArray = oldArray.map((x, i) => ({
// If the object is dynamic you can spread it out here and add the ID
...x,
// Use the items index in the array as a unique key
uniqueId: i,
}));
If you want to use a guid generator instead (I'd recommend that) just replace i with whatever you use to generate a GUID and ensure that any time you add to the collection you generate a new GUID for the data.
const newArray = oldArray.map((x) => ({ ...x, uniqueId: generateGuid() }));
const yourDynamicObjects = [
{
title: 'A title',
author: 'A. Author'
},
{
foo: 'bar',
},
{
quotient: 2,
irrational: Math.sqrt(2)
}
];
const updatedData = yourDynamicObjects.map((x, i) => ({ ...x, uniqueId: i, }));
console.log(updatedData);
You can use map and, in its callback function, use the index parameter to create uniqueId.
item.title is not actually known, as it's a dynamic array, so I could not map by particular field names
In this case use Object.keys to get an array of all the keys. Then loop over it and add each key to a new object:
let k = [{
"title": "Accompanying"
}, {
"title": "Chamber music"
}, {
"title": "Church"
}]
let getArrayKey = Object.keys(k[0]);
let n = k.map(function(item, index) {
let obj = {};
getArrayKey.forEach(function(elem) {
obj[elem] = item[elem];
})
obj.uniqueId = index + 1
return obj;
});
console.log(n)
You can also use the spread operator:
let k = [{
"title": "Accompanying"
}, {
"title": "Chamber music"
}, {
"title": "Church"
}]
let n = k.map(function(item, index) {
return { ...item,
uniqueId: index + 1
};
});
console.log(n)