How to control array sorting in knockout.mapping?

Please see the jsfiddle example. I have an observable array of persons; each person has an id, firstName and lastName. The user can sort this array by any property, in either direction.
Then, at some point, I need to update my array, which I do using the knockout.mapping.fromJS function:
this.rockStarsMapping = {
    create: function (options) {
        // just to make sure that key works
        console.log('created');
        return options.data;
    },
    key: function (data) {
        return data.id;
    }
};
this.rockStars = ko.observableArray([]);
this.getRockStars = function () {
    // here should be some ajax call instead of a stub
    var newRockStars = [
        { id: 1, firstName: "John", lastName: "Lehnon" },
        { id: 2, firstName: "Paul", lastName: "McCartney" },
        ...
    ];
    ko.mapping.fromJS(newRockStars, self.rockStarsMapping, self.rockStars);
};
The problem is that the new array is sorted by id (for example), and that ordering prevails over the ordering of the existing array.
My current solution is to remember the current sort column name and sort direction, but it doesn't work right, because other columns may have other sort directions. For example, in my jsfiddle code, try sorting by Id descending first (6, 5, 4, ...), then sort by First name. Now if you click "Get rock stars", the sort order of the Id column changes, which is not the desired behavior.
How do I keep the sort order as it is? And, more importantly, how do I make sure new items end up in the right place in the array?
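For what it's worth, one workaround that is not part of the original question (a sketch only; previousOrder is a hypothetical helper variable) is to capture the current order of keys before the mapping call and restore it afterwards, pushing keys that were not present before to the end:
self.getRockStars = function () {
    var newRockStars = [ /* ajax result */ ];
    // remember the user's current ordering by key before mapping
    var previousOrder = self.rockStars().map(function (p) { return p.id; });
    ko.mapping.fromJS(newRockStars, self.rockStarsMapping, self.rockStars);
    // observableArray.sort sorts the underlying array and notifies subscribers
    self.rockStars.sort(function (a, b) {
        var ia = previousOrder.indexOf(a.id), ib = previousOrder.indexOf(b.id);
        if (ia === -1 && ib === -1) return 0; // both new: keep their relative order
        if (ia === -1) return 1;              // new items go to the end
        if (ib === -1) return -1;
        return ia - ib;
    });
};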

Related

Updating an object that is used inside another object

I have an app that allows you to create employees, but I have a problem when renaming employee positions.
I have an array of positions that looks something like this:
positions: [
    { id: 1, title: 'Masseuse' },
    ...
];
If I create an employee, I have to select a position from a dropdown, and the employees end up looking something like this:
employees: [
{ id: 1, name: 'John Doe', title: 'Masseuse' },
...
];
This approach works fine until the user renames a position. For example, if the user renames 'Masseuse' to 'Massage Therapist', the position dropdown will update as expected, but the employees with that position will still say 'Masseuse'.
If a user renames a position, do I also need to find each employee with that position and update them individually? Or is there a different approach that I should be taking? I'm wondering if the employee object should store the position ID since that will never change and then somehow use that to display their position title, but I don't know how that would work.
I'm not experienced with backend development or database architecture yet, so this may not be important information, but I'm only using a fake REST API at the moment. I will eventually set up an actual database, but haven't gotten to that yet.
FWIW, I'm using Angular and the following mock API: https://github.com/typicode/json-server
You could store the position id on the employee instead of the title, and then display the title by looking it up from the stored position id.
employees: [
    { id: 1, name: 'John Doe', titleId: 1 },
    ...
];
Then your dropdown would look like the below:
<select [(ngModel)]="editingEmployee.titleId">
    <option *ngFor="let x of positions" [value]="x.id">{{x.title}}</option>
</select>
And when displaying an employee, you can look up the title in the positions array using the employee's stored titleId.
<div *ngFor="let employee of employees">
    {{employee.name}} is a {{getPosition(employee.titleId)}}
</div>
And in your component
getPosition(titleId) {
    const position = this.positions.filter(p => p.id === titleId);
    return position[0] ? position[0].title : '';
}
Refer to this working stackblitz
Nice reasoning.
What you are thinking is the correct way to go about it.
This aligns with the concept of database normalization, which is a structured way of reducing redundancy in your data architecture.
See also database normalization on Wikipedia: link
One possible solution would be to store a position id instead of the title inside the employee object.
Then, when you need to display it, you can construct a "view" of that object:
let positions = [
    { id: 1, title: 'Masseuse' },
    ...
];
let employees = [
    { id: 1, name: 'John Doe', positionId: 1 },
    ...
];

function getPositionTitle(id, positions) {
    const position = positions.find(p => p.id === id);
    if (!position) return '';
    return position.title;
}

function prepare(employee, positions) {
    const title = getPositionTitle(employee.positionId, positions);
    return {
        ...employee,
        title
    };
}

const employeeView = prepare(employees[0], positions);
// employeeView = { id: 1, name: 'John Doe', positionId: 1, title: 'Masseuse' }
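If you need to render a whole list rather than a single employee, you could run the same helper over the collection (a small sketch building on the code above; employeeViews is just a hypothetical name):
const employeeViews = employees.map(e => prepare(e, positions));
// [{ id: 1, name: 'John Doe', positionId: 1, title: 'Masseuse' }, ...]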

Array filtration and extraction of data, appending to a new array

I have an array containing nested arrays, and I want to extract the data and append it to a new array.
Which methods should I use for this extraction or filtration, using a library such as lodash?
DATA
[
    [
        {
            _id: 588d9b8a608f2a66c298849f,
            email: 'sd#',
            password: '$2a$10$6..L3c3tANi6ydt9gZbc1O6prPfUd3RB.ner5lilxRyEwo1lPsSoC',
            isJobSeeker: true,
            __v: 0,
            lastName: 'shrestha',
            firstName: 'manish',
            isSeeker: true
        }
    ],
    [
        {
            _id: 588dbb4f7a48ce0d26cb99fd,
            jobId: [Object],
            seekerId: 588d9b8a608f2a66c298849f,
            employerId: 588d7d6c0ec4512feb819825,
            __v: 0,
        }
    ]
]
REQUIRED DATA
[
    {
        _id: 588d9b8a608f2a66c298849f,
        email: 'sd#',
        password: '$2a$10$6..L3c3tANi6ydt9gZbc1O6prPfUd3RB.ner5lilxRyEwo1lPsSoC',
        isJobSeeker: true,
        __v: 0,
        lastName: 'shrestha',
        firstName: 'manish',
        isSeeker: true,
        jobId: [{}, {}, {}] // ARRAY WITH OBJECTS
    }
]
I also want to rename the jobId key to a custom key such as jobs.
Following is my attempt:
console.log('Data filteration', data);
const filteredData = [];
filteredData.push(data[0][0]);
data[1].forEach((i) => {
    filteredData[0].jobs = i.jobId;
});
console.log('filteredData', filteredData);
First, you should clean up your data to give it a better structure.
[
    [
        { ... }
    ],
    [
        { ... }
    ]
]
With this data structure, it is difficult to understand what the inner arrays signify. Instead, you should use an object; that makes the purpose of each array explicit and your code more readable.
var data=[[{_id:"588d9b8a608f2a66c298849f",email:"sd#",password:"$2a$10$6..L3c3tANi6ydt9gZbc1O6prPfUd3RB.ner5lilxRyEwo1lPsSoC",isJobSeeker:!0,__v:0,lastName:"shrestha",firstName:"manish",isSeeker:!0}],[{_id:"588dbb4f7a48ce0d26cb99fd",jobId:["test","test1"],seekerId:"588d9b8a608f2a66c298849f",employerId:"588d7d6c0ec4512feb819825",__v:0}]];
var cleanedData = {
    userData: data[0],
    userJobMap: data[1]
};

var result = cleanedData.userData.reduce(function (p, c) {
    if (c.isJobSeeker) {
        var job = cleanedData.userJobMap.filter(x => x.seekerId === c._id);
        // To copy the object and not the reference
        var t = Object.assign({}, c, { jobId: job[0].jobId });
        p.push(t);
    }
    return p;
}, []);

console.log(result);
References
Array.map iterates over all elements and returns a transformed value for each one, say a single property of every element, or double the value of every number in the array. Note that this yields an array of the same size.
Array.filter, on the other hand, is used to filter an array based on a condition. It returns a subset of the original data, but the elements themselves stay the same; you cannot change their structure.
Array.reduce addresses cases where you need to return only selected elements, with transformed values. You could achieve the same by chaining .filter().map(), but that is overkill, as it results in two passes over the data (O(2n)).
Object.assign: In JS, objects are passed by reference. So if you assign an object to a variable, you are not copying the entire object, only a reference to it; if you change anything through this variable, the change is also reflected in the original object. To avoid this, you need to copy the value, which is where Object.assign comes in. Note that it is not supported by old browsers; for those, you can check the following post: What is the most efficient way to deep clone an object in JavaScript?
Note: all these array functions are part of the functional programming paradigm and are used to make your code more readable and concise, but they come at the expense of performance. A traditional for loop will always perform faster. So if you need to focus on performance, prefer for (the difference is very small, but it can add up across multiple cases and become substantial).
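To make the distinctions above concrete, here is a small illustrative sketch (the nums array and variable names are made up for this example, not taken from the question):
var nums = [1, 2, 3, 4];
var doubled = nums.map(function (n) { return n * 2; });          // [2, 4, 6, 8] - same length, transformed values
var evens = nums.filter(function (n) { return n % 2 === 0; });   // [2, 4] - subset, elements unchanged
var doubledEvens = nums.reduce(function (acc, n) {               // [4, 8] - filter and transform in a single pass
    if (n % 2 === 0) acc.push(n * 2);
    return acc;
}, []);

var original = { a: 1 };
var reference = original;                // same object: changing reference.a also changes original.a
var copy = Object.assign({}, original);  // shallow copy: changing copy.a leaves original.a intact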

How can I select the latest version of an object from a ForerunnerDb collection

I have a collection which contains a series of objects generated over time. Since I have disparate types stored in the same collection, I have a TypeId and a UID per object (where the UID identifies objects that refer to the same entity over time). I am trying to choose the most recent object from the collection, and running into serious difficulties grasping how to do so without manually iterating a query result - something I'd rather avoid since I think it could become expensive when the collection gets larger.
For example:
var db; // assigned elsewhere
var col = db.collection("res");
col.primaryKey("resId");
col.insert({
    resId: 1,
    TypeId: "Person",
    UID: "Bob",
    Data: { Age: 20, Name: "Bob" }
});
col.insert({
    resId: 2,
    TypeId: "Person",
    UID: "Bob",
    Data: { Age: 25, Name: "Bob" }
});
col.insert({
    resId: 3,
    TypeId: "Car",
    UID: "TeslaModelX",
    Data: { Manufacturer: "Tesla", Owner: "Bob" }
});
col.insert({
    resId: 4,
    TypeId: "Person",
    UID: "Bill",
    Data: { Age: 22, Name: "Bill" }
});
From col, I want the query to select, for each UID, the most recent object with TypeId="Person", ranked by resId descending, i.e. I'd expect to select objects 4 and 2, in that order.
The collection above is contrived, but in reality I'd expect there to be certainly thousands of entries and potentially tens of thousands, with maybe hundreds of versions of each UID. In other words, I'd rather not return the full collection of objects, grouped or otherwise, and iterate it.
I have tried the following, but since the $distinct operator is applied before the $orderBy one, this doesn't help:
col.find(
    {
        TypeId: { $eq: "Person" },
        $distinct: { UID: 1 }
    },
    {
        $orderBy: {
            resId: -1
        }
    }
);
I have in mind that I should be able to use the $groupBy, $limit and $aggregate clauses to identify the per group desired IDs, and then use a subquery to find the precise (non-aggregated) elements, but as yet I haven't managed to get anything to do what I want. Any ideas?
My current solution is to include a Deleted property amongst my objects, and set it to true for all existing non-deleted objects in the DB before I insert new entries. This lets me do what I want but also stops me from, for instance, choosing the best available within a known timeframe or similar.
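For reference, the Deleted-flag workaround described above might look roughly like this (a sketch only, not from the original question; it assumes ForerunnerDB's update(query, update) form and reuses the contrived data):
// mark all existing live versions of this entity as superseded
col.update({ TypeId: "Person", UID: "Bob" }, { Deleted: true });
// insert the new version as the only non-deleted one
col.insert({ resId: 5, TypeId: "Person", UID: "Bob", Data: { Age: 26, Name: "Bob" }, Deleted: false });
// the latest version per UID is then just a plain find
var latest = col.find({ TypeId: "Person", Deleted: false }, { $orderBy: { resId: -1 } });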
You can do this like:
var tmpObj = {};
col.sort({ resId: -1 }, col.find({
    "TypeId": "Person"
})).filter(function (doc) {
    return col._match(doc, {
        $distinct: {
            UID: 1
        }
    }, {}, 'and', tmpObj);
});
It's a bit dirty since it's not neatly wrapped up in a single command, but it's as clean as you'll get it in ForerunnerDB v1.x.
Version 2 will have a new query pipeline system that would allow for exactly this sort of usage, something like this:
col.pipeline()
    .find({ "TypeId": "Person" })
    .orderBy({ "resId": 1 })
    .distinct({ "UID": 1 })
    .then(function (err, data) {
        console.log(data);
    });
Source: I'm the developer of ForerunnerDB.

How to recreate a table with jQuery DataTables

I'm essentially using the top answer here (from sdespont) to try to destroy some tables.
I have one table that shows me the status of a .csv file being uploaded.
FileuploadTable:
FileName FileType FileSize AvailableActions
I have a second table, CSVTable, which displays the data from the .csv file.
I need to provide the user the ability to reset the form, i.e. get rid of the .csv and all of its data: destroy() the two tables separately and empty() them of all the data that was there initially.
Here are the issues I'm running into:
I can't seem to set the column titles of FileUploadTable after destroy() and empty(). When I attempt to upload a new file, the elements are still on the page, just empty, even though the same initialization code is being called.
I can't seem to get rid of the column titles in CSVTable after destroy() and empty(). When I attempt to upload a different csv, it tries to match column headers against the ones that should have been destroyed, but they don't match because, even though CSVTable was destroyed and emptied, the column titles are still there.
Not sure what I'm missing. They're being set properly on initial create.
$(elem).DataTable()
Can anyone show me a basic working implementation of destroying/emptying DataTables and then re-initializing with different data, so I can try to mimic it? My brain is mush from looking at their docs for the last 3 days, making no progress.
Example of my data object
[
    {
        // key = column title
        // "val" = data in row
        // object = row
        key: "val",
        // i.e.
        FirstName: "Bob",
        LastName: "Barker",
        Age: 800,
        // etc
    },
    // etc
]
OK. You can make a simple iteration over your data using Object.keys() that produces column objects on the fly, holding the corresponding data and title values:
var columns = [], keys = Object.keys(data[0]);
for (var i = 0; i < keys.length; i++) {
    columns.push({ data: keys[i], title: keys[i] });
}
Use that inside a general function that initialises the table and takes care of destroying and emptying it if it has already been initialised:
var table = null;

function initTable(data) {
    var columns = [], keys = Object.keys(data[0]);
    for (var i = 0; i < keys.length; i++) {
        columns.push({ data: keys[i], title: keys[i] });
    }
    if (table) {
        table.destroy();
        $('#example').empty();
    }
    table = $('#example').DataTable({
        data: data,
        columns: columns
    });
}
Now imagine the following are the success handlers of your AJAX calls, or however you get the new data that should be populated into the table:
$('#insert1').on('click', function () {
    var data = [
        { FirstName: "Bob", LastName: "Barker", Age: 800 },
        { FirstName: "John", LastName: "Doe", Age: 'N/A' }
    ];
    initTable(data);
});
$('#insert2').on('click', function () {
    var data = [
        { Animal: "Lion", Taxon: 'Panthera leo' },
        { Animal: "Cheetah", Taxon: 'Acinonyx jubatus' }
    ];
    initTable(data);
});
demo -> http://jsfiddle.net/d5pb3kto/

How to retrieve a sorted list of objects in ydn-db?

I'm following an example from the docs here, http://dev.yathit.com/ydn-db/getting-started.html, the first example under "Sorting".
My code:
var schema = {
    stores: [
        {
            name: "authors",
            keyPath: "id",
            indexes: [
                { keyPath: "born" }
            ]
        }
    ]
};
var db = new ydn.db.Storage("library", schema);
db.put("authors", [{ id: "111", born: "zzz" }, { id: "555", born: "bbb" }, { id: "999", born: "aaa" }]).done(function() {
    // query with default ordering
    db.values("authors").done(function(r) {
        console.log("A list of objects as expected", r);
    });
    // query ordered by the "born" field
    db.values(new ydn.db.Cursors("authors", "born", null, false)).done(function(r) {
        console.log("This is a list of ids, not objects", r);
    });
});
Changing the query from default ordering to ordering by a particular column seems to change its behaviour from returning a list of objects to just returning a list of ids. Am I doing something wrong? How do I get a list of objects?
It should be:
// query ordered by the "born" field
db.values(new ydn.db.IndexValueCursors("authors", "born", null, false)).done(function(r) {
    console.log("list of objects sorted by born", r);
});
or:
// query ordered by the "born" field
db.values("authors", "born", null, false).done(function(r) {
    console.log("list of objects sorted by born", r);
});
or simply:
db.values("authors", "born").done(function(r) {
    console.log("list of objects sorted by born", r);
});
A good API should handle these common queries easily, without having to read the documentation; I will think about a better API. For now, you have to read how iterators work: http://dev.yathit.com/api-reference/ydn-db/iterator.html The reference value of ydn.db.Cursors is the primary key, which is why values returns a list of primary keys, whereas the reference value of ydn.db.IndexValueCursors is the record value, so values returns a list of objects. In fact, this is how the IndexedDB API works.
Another point is that the two queries above have different performance characteristics. The second method, a direct query, is faster than the first method, which uses an iterator. This is because the iterator walks records one by one, whereas the second method uses a batch query. The difference is much bigger on WebSQL, since it does not support iteration.
