CouchDB Document Update Handlers: JavaScript

I am trying to create a generic document update handler.
I am using:
function (doc, req) {
  var field = req.query.field;
  var value = req.query.value;
  var message = 'set ' + field + ' to ' + value;
  doc[field] = value;
  return [doc, message];
}
This works OK with simple JSON but not with a nested object such as
"abc": {"ax": "one", "by": "two", ...}
my curl command is:
curl -X PUT 'http://127.0.0.1:5984/db/_design/updatehandler/_update/inplace/id?field=abc.ax&value=three'
The result is that a new top-level field named abc.ax is created, and the existing abc: {ax: "one"} is left untouched.
With a simpler example:
if I have: "xyz":"five"
curl -X PUT 'http://127.0.0.1:5984/db/_design/updatehandler/_update/inplace/id?field=xyz&value=ten'
... works correctly.
I have not yet tried the generic process on "pqr": [s, t, u], but I guess this may require a different modification as well.
Ideally one wants something that works in at least the three cases above, as long as the solution is not so complex that it isn't worth the effort.
Could someone kindly help here or refer me to some JavaScript examples, please?
Many thanks.
John

function (doc, req) {
  // Recursively merge the new document into the stored one.
  // Arrays (which have a numeric length) are treated as plain
  // values and overwritten rather than merged.
  function merge(nDoc, oDoc) {
    for (var f in nDoc) {
      var tmpNewDoc = nDoc[f],
          tmpDoc = oDoc[f];
      var type = typeof tmpNewDoc;
      if (type === 'object' && tmpNewDoc !== null && tmpNewDoc.length === undefined && tmpDoc !== undefined) {
        merge(tmpNewDoc, tmpDoc);
      } else {
        oDoc[f] = tmpNewDoc;
      }
    }
  }
  if (!doc) {
    return [null, toJSON({
      error: 'not_found',
      reason: 'No document was found with the specified ID, or an incorrect method was used.'
    })];
  }
  try {
    var newDoc = JSON.parse(req.body);
    merge(newDoc, doc);
  } catch (e) {
    return [null, toJSON({
      error: 'bad_request',
      reason: 'Invalid JSON or processing error'
    })];
  }
  return [doc, toJSON({
    doc: doc,
    ok: true
  })];
}
Simply pass the new document in the request body to this handler. It will merge the new values into the stored document (warning: arrays will be overwritten, not merged). If you also want to merge arrays, you can either use a third-party library or extend the recursive merge function yourself.
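For example, the handler above might be invoked like this (hypothetical design-doc and document names, matching the question's setup):
curl -X PUT 'http://127.0.0.1:5984/db/_design/updatehandler/_update/inplace/id' \
     -H 'Content-Type: application/json' \
     -d '{"abc": {"ax": "three"}}'
If you would rather keep the question's original query-string style (field=abc.ax&value=three), a minimal sketch of a dotted-path variant might look like this (an untested assumption on my part; it treats every intermediate path segment as a plain object):
function (doc, req) {
  if (!doc) return [null, 'no document'];
  var path = (req.query.field || '').split('.');
  var value = req.query.value;
  var target = doc;
  // Walk down to the parent of the last path segment,
  // creating intermediate objects as needed.
  for (var i = 0; i < path.length - 1; i++) {
    if (typeof target[path[i]] !== 'object' || target[path[i]] === null) {
      target[path[i]] = {};
    }
    target = target[path[i]];
  }
  target[path[path.length - 1]] = value;
  return [doc, 'set ' + req.query.field + ' to ' + value];
}
Array fields such as "pqr": [s, t, u] would still need their own handling (for example, numeric segments indexing into arrays).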


Azure CosmosDb Stored Procedure IfMatch Predicate

In a DocDb stored procedure, as the first step in a process retrieving data that I'm mutating, I read and then use the data iff it matches the etag like so:
collection.readDocument(reqSelf, function(err, doc) {
  if (doc._etag == requestEtag) {
    // Success - want to update
  } else {
    // CURRENTLY: Discard the read result I just paid lots of RUs to read
    // IDEALLY: check whether response `options` or similar indicates retrieval
    //          was skipped due to doc not being present with that etag anymore
    ...
    // ... Continue with an alternate strategy
  }
});
Is there a way to pass an options object to the readDocument call such that the callback will be informed "It's changed, so we didn't get it, as you requested"?
(My real problem here is that I can't find any documentation other than the readDocument undocumentation in the js-server docs)
Technically you can do that by creating a responseOptions object and passing it to the call.
function sample(selfLink, requestEtag) {
  var collection = getContext().getCollection();
  var responseOptions = { accessCondition: { type: "IfMatch", condition: requestEtag } };
  var isAccepted = collection.readDocument(selfLink, responseOptions, function(err, doc, options) {
    if (err) {
      throw new Error('Error thrown. Check the status code for PreconditionFailed errors');
    }
    var response = getContext().getResponse();
    response.setBody(doc);
  });
  if (!isAccepted) throw new Error('The query was not accepted by the server.');
}
However, even if the etag you provide is not the one the document currently has, you won't get an error; you will still get the document itself back. Access conditions simply aren't honored by the readDocument function in a stored procedure.
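Where the etag does get enforced is on writes. As a point of comparison (a sketch of my own, assuming the server-side replaceDocument options accept an etag for optimistic concurrency):
function replaceIfUnchanged(selfLink, requestEtag, newDoc) {
  var collection = getContext().getCollection();
  // The write should fail with a PreconditionFailed error if the stored
  // etag no longer matches requestEtag.
  var isAccepted = collection.replaceDocument(selfLink, newDoc, { etag: requestEtag },
    function (err, replaced) {
      if (err) throw new Error('Replace failed (check for a PreconditionFailed etag mismatch): ' + err.message);
      getContext().getResponse().setBody(replaced);
    });
  if (!isAccepted) throw new Error('The replace was not accepted by the server.');
}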
Thanks to some pushing from @Nick Chapsas, and this self-answer from @Redman, I worked out that in my case I can achieve my goal (either read the current document via the self-link, or the newer one that has replaced it bearing the same id) by instead generating an alt link within the stored procedure, like so:
var docId = collection.getAltLink() + "/docs/" + req.id;
var isAccepted = collection.readDocument(docId, {}, function (err, doc, options) {
  if (err) throw err;
  // Will be null or not depending on whether it exists
  executeUpsert(doc);
});
if (!isAccepted) throw new Error("readDocument not Accepted");

MongoDB - Mongoose - TypeError: save is not a function

I am attempting to perform an update to a MongoDB document (using Mongoose) by first using .findById to get the document, then updating the fields in that document with new values. I am still a bit new to this, so I used a tutorial to figure out how to get it working, and I have been updating the code for my needs. Here is the tutorial: MEAN App Tutorial with Angular 4. The original code had a schema defined, but my requirement is for a generic MongoDB interface that will simply take whatever payload is sent to it and pass it along to MongoDB. The original tutorial had something like this:
exports.updateTodo = async function(todo) {
  var id = todo.id;
  try {
    // Find the old Todo object by its id
    var oldTodo = await ToDo.findById(id);
  } catch (e) {
    throw Error("Error occurred while finding the Todo");
  }
  // If no old Todo object exists, return false
  if (!oldTodo) {
    return false;
  }
  console.log(oldTodo);
  // Edit the Todo object
  oldTodo.title = todo.title;
  oldTodo.description = todo.description;
  oldTodo.status = todo.status;
  console.log(oldTodo);
  try {
    var savedTodo = await oldTodo.save();
    return savedTodo;
  } catch (e) {
    throw Error("An error occurred while updating the Todo");
  }
}
However, since I don't want a schema and want to allow anything through, I don't want to assign static values to specific field names like, title, description, status, etc. So, I came up with this:
exports.updateData = async function(update) {
  var id = update.id;
  // Check the existence of the query parameters; if they don't exist, assign a default value
  var dbName = update.dbName ? update.dbName : 'test';
  var collection = update.collection ? update.collection : 'testing';
  const Test = mongoose.model(dbName, TestSchema, collection);
  try {
    // Find the existing Test object by its id
    var existingData = await Test.findById(id);
  } catch (e) {
    throw Error("Error occurred while finding the Test document - " + e);
  }
  // If no existing Test object exists, return false
  if (!existingData) {
    return false;
  }
  console.log("Existing document is " + existingData);
  // Edit the Test object
  existingData = JSON.parse(JSON.stringify(update));
  // This was another way to overwrite existing field values, but it
  // performs a "shallow copy", so it's not desirable
  //existingData = Object.assign({}, existingData, update);
  //existingData.title = update.title;
  //existingData.description = update.description;
  //existingData.status = update.status;
  console.log("New data is " + existingData);
  try {
    var savedOutput = await existingData.save();
    return savedOutput;
  } catch (e) {
    throw Error("An error occurred while updating the Test document - " + e);
  }
}
My original problem with this was that I had a lot of issues getting the new values to overwrite the old ones. Now that that's been solved, I am getting the error of "TypeError: existingData.save is not a function". I am thinking the data type changed or something, and now it is not being accepted. When I uncomment the static values that were in the old tutorial code, it works. This is further supported by my console logging before and after I join the objects, because the first one prints the actual data and the second one prints [object Object]. However, I can't seem to figure out what it's expecting. Any help would be greatly appreciated.
EDIT: I figured it out. Apparently Mongoose documents have their own type, "Model", which gets lost if you do anything drastic to the underlying data, such as round-tripping it through JSON.stringify. I used Object.prototype.constructor to check the actual object type, like so:
console.log("THIS IS BEFORE: " + existingData.constructor);
existingData = JSON.parse(JSON.stringify(update));
console.log("THIS IS AFTER: " + existingData.constructor);
And I got this:
THIS IS BEFORE: function model(doc, fields, skipId) {
  model.hooks.execPreSync('createModel', doc);
  if (!(this instanceof model)) {
    return new model(doc, fields, skipId);
  }
  Model.call(this, doc, fields, skipId);
}
THIS IS AFTER: function Object() { [native code] }
Which showed me what was actually going on. I added this to fix it:
existingData = new Test(JSON.parse(JSON.stringify(update)));
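As an aside (my addition, not part of the original post), two Mongoose features might simplify this. A schema declared with strict mode off keeps unknown fields, and Document#set merges plain-object values into a fetched document without destroying the Model instance. A minimal sketch, assuming Mongoose 4+ and that update contains only fields to overwrite:
// Hypothetical schema-less model: with strict off, unknown fields
// in the payload are kept instead of silently dropped.
const TestSchema = new mongoose.Schema({}, { strict: false });

// Inside updateData, replacing the JSON round-trip after findById:
existingData.set(update);                      // merges fields, keeps the Model instance
const savedOutput = await existingData.save(); // .save() is still a function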
On a related note, I should probably just use the native MongoDB driver at this point, but it's working, so I'll just put it on my to do list for now.
You've now found a solution, but I would suggest using the MongoDB driver, which would make your code look something along the lines of this and would make the original issue disappear:
// MongoDB Settings
const MongoClient = require(`mongodb`).MongoClient;
const mongodb_uri = `mongodb+srv://${REPLACE_mongodb_username}:${REPLACE_mongodb_password}@url-here.gcp.mongodb.net/test`;
const db_name = `test`;
let client; // holds the connected client
let db; // allows us to reuse the database connection once it is opened

// Open MongoDB Connection
const open_database_connection = async () => {
  try {
    client = await MongoClient.connect(mongodb_uri);
  } catch (err) { throw new Error(err); }
  db = client.db(db_name);
};

exports.updateData = async update => {
  // open database connection if it isn't already open
  try {
    if (!db) await open_database_connection();
  } catch (err) { throw new Error(err); }

  // update document
  let savedOutput;
  try {
    savedOutput = await db.collection(`testing`).updateOne( // .save() is being deprecated
      { // filter
        _id: update.id // the '_id' might need to be 'id' depending on how you have set your collection up; usually it is '_id'
      },
      { // update
        $set: update // I've assumed that you are overwriting the fields you are updating, hence the '$set' operator; this also assumes the update object only contains fields that should be updated
      }
      // If you want to add a new document when the id isn't found, add the line below
      // , { upsert: true }
    );
  } catch (err) { throw new Error(`An error occurred while updating the Test document - ${err}`); }
  if (savedOutput.matchedCount !== 1) return false; // if you add '{ upsert: true }' above, remove this line as it will create a new document
  return savedOutput;
}
The collection testing would need to be created before this code runs, but that is a one-time thing and is very easy - if you are using MongoDB Atlas, you can use MongoDB Compass or your online admin console to create the collection without a single line of code...
As far as I can see, you shouldn't need to duplicate the update object. The above reduces the database calls from two to one and allows you to reuse the database connection, potentially anywhere else in the application, which would help to speed things up. Also, don't store your MongoDB credentials directly in the code.
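For example (a minimal sketch of my own, not from the original answer), the credentials could come from environment variables instead:
// Hypothetical: read credentials from the environment rather than the source.
// Run with e.g. MONGODB_USERNAME=... MONGODB_PASSWORD=... node app.js
const username = encodeURIComponent(process.env.MONGODB_USERNAME);
const password = encodeURIComponent(process.env.MONGODB_PASSWORD);
const mongodb_uri = `mongodb+srv://${username}:${password}@url-here.gcp.mongodb.net/test`;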

Why can’t I catch certain exceptions in a MarkLogic request?

I have some code that exercises the “invalid values” setting on an element range index. In this case, I have configured a dateTime element range index on the onDate element in my database (which will apply to both XML elements and JSON properties). I’ve set that index to reject invalid values. This setting means if I try to set the value of an onDate element and it is not castable to a dateTime or is null (literal null in JSON or xsi:nil="true" in XML), my update will fail. (The opposite behavior is to completely ignore invalid values.)
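For reference, such an index might be configured with the Admin API along these lines (a sketch on my part; the database name and exact parameters are assumptions, so check the Admin API docs):
// Hypothetical Server-Side JavaScript sketch using the Admin API
const admin = require('/MarkLogic/admin.xqy');
let config = admin.getConfiguration();
const dbid = xdmp.database('Documents'); // assumed database name
const index = admin.databaseRangeElementIndex(
  'dateTime', // scalar type
  '',         // namespace URI
  'onDate',   // element local name
  '',         // collation (unused for dateTime)
  false,      // range value positions
  'reject');  // invalid values: 'reject' or 'ignore'
config = admin.databaseAddRangeElementIndex(config, dbid, index);
admin.saveConfiguration(config);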
I tried the following code in Server-Side JavaScript in MarkLogic 8.0-4:
'use strict';
declareUpdate();
var errors = [];
var inputs = {
  '/37107-valid.json': (new Date()).toISOString(),
  '/37107-invalid.json': 'asdf', // Should throw an error
  '/37107-null.json': null
};
for (var uri in inputs) {
  try {
    xdmp.documentInsert(
      uri,
      { 'onDate': inputs[uri] },
      xdmp.defaultPermissions(),
      ['37107'] // Collections
    );
  } catch (err) {
    errors.push(err);
  }
}
errors.length;
I would have expected my request to succeed and to end up with 1 === errors.length: only the second insert should fail, since 'asdf' is not castable as a dateTime and is not null. However, instead I get an XDMP-RANGEINDEX error and my transaction fails. Why doesn’t my try/catch work here?
The issue is how MarkLogic processes update transactions. Rather than actually changing the data with each xdmp.documentInsert(…) call, MarkLogic queues up all of the updates and applies them atomically at the end of the request. (This is also why you can’t see database updates within the same transaction.) Thus, the error isn’t thrown until after the loop has executed and the database tries to commit the queued updates. The behavior is the same in XQuery (slightly simplified):
let $uris := (
  '/37107-valid.xml',
  '/37107-invalid.xml',
  '/37107-null.xml'
)
let $docs := (
  <onDate>{fn:current-dateTime()}</onDate>,
  <onDate>asdf</onDate>,
  <onDate xsi:nil="true" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"/>
)
return
  for $uri at $i in $uris
  return
    try {
      xdmp:document-insert($uri, $docs[$i], (), ('37107'))
    } catch ($err) {
      xdmp:log($err)
    }
In order to catch the errors synchronously, you’d need to put each update into its own transaction. In general, this approach will be much slower and resource intensive than MarkLogic’s default transaction handling. However, it’s illustrative here to demonstrate what’s happening under the covers and can come in handy for specific use cases, like this one.
In the example below, I use xdmp.invokeFunction() to “call” a function in a separate transaction from the parent request. (First-class functions for the win!) This allows the updates to be fully applied (or rolled back with an error) and the calling module to see the updates (or errors). I’ve wrapped the low-level xdmp.invokeFunction() in my own applyAs() function to provide some niceties, like correctly passing function arguments to the curried function.
'use strict';
var errors = [];
var inputs = {
  '/37107-valid.json': (new Date()).toISOString(),
  '/37107-invalid.json': 'asdf',
  '/37107-null.json': null
};
var insert = applyAs(
  function(uri, value) {
    return xdmp.documentInsert(
      uri,
      { 'onDate': value },
      xdmp.defaultPermissions(),
      ['37107']
    );
  },
  { isolation: 'different-transaction', transactionMode: 'update' },
  'one'
);
for (var uri in inputs) {
  try {
    insert(uri, inputs[uri]);
  } catch (err) {
    errors.push(err);
  }
}
errors.length; // Correctly returns 1
// <https://gist.github.com/jmakeig/0a331823ad9a458167f6>
function applyAs(fct, options, returnType /* 'many', 'one', 'iterable' (default) */) {
  options = options || {};
  return function() {
    var params = Array.prototype.slice.call(arguments);
    // Curry the function to include the params by closure.
    // xdmp.invokeFunction requires that invoked functions have
    // an arity of zero.
    var f = (function() {
      return fct.apply(null, params);
    }).bind(this);
    // Allow passing in a user name, rather than an id
    if (options.user) { options.userId = xdmp.user(options.user); delete options.user; }
    // Allow the functions themselves to declare their transaction mode
    if (fct.transactionMode && !(options.transactionMode)) { options.transactionMode = fct.transactionMode; }
    var result = xdmp.invokeFunction(f, options); // xdmp.invokeFunction returns a ValueIterator
    switch (returnType) {
      case 'one':
        // return fn.head(result); // 8.0-5
        return result.next().value;
      case 'many':
        return result.toArray();
      case 'iterable':
      default:
        return result;
    }
  }
}

How to query objects in the CloudCode beforeSave?

I'm trying to compare a new object with the original using CloudCode beforeSave function. I need to compare a field sent in the update with the existing value. The problem is that I can't fetch the object correctly. When I run the query I always get the value from the sent object.
UPDATE: I tried a different approach and could get the old record (the one already saved in Parse). But the new one, sent in the request, was overridden by the old one. WHAT?! Another issue is that, even though the code sent a response.success(), the update wasn't saved.
I believe that I'm missing something pretty obvious here. Or I'm facing a bug or something...
NEW APPROACH
Parse.Cloud.beforeSave('Tasks', function(request, response) {
  if (!request.object.isNew()) {
    var Task = Parse.Object.extend("Tasks");
    var newTask = request.object;
    var oldTask = new Task();
    oldTask.set("objectId", request.object.id);
    oldTask.fetch()
      .then(function(oldTask) {
        console.log(">>>>>> Old Task: " + oldTask.get("name") + " version: " + oldTask.get("version"));
        console.log("<<<<<< New Task: " + newTask.get("name") + " version: " + newTask.get("version"));
        response.success();
      }, function(error) {
        response.error(error.message);
      });
  }
});
OBJ SENT {"name":"LLL", "version":333}
LOG
I2015-10-02T22:04:07.778Z]v175 before_save triggered for Tasks for user tAQf1nCWuz:
Input: {"original":{"createdAt":"2015-10-02T17:47:34.143Z","name":"GGG","objectId":"VlJdk34b2A","updatedAt":"2015-10-02T21:57:37.765Z","version":111},"update":{"name":"LLL","version":333}}
Result: Update changed to {}
I2015-10-02T22:04:07.969Z]>>>>>> Old Task: GGG version: 111
I2015-10-02T22:04:07.970Z]<<<<<< New Task: GGG version: 111
NOTE: I'm testing the call via cURL and in the Parse console.
CloudCode beforeSave
Parse.Cloud.beforeSave("Tasks", function( request, response) {
var query = new Parse.Query("Tasks");
query.get(request.object.id)
.then(function (oldObj) {
console.log("-------- OLD Task: " + oldObj.get("name") + " v: " + oldObj.get("version"));
console.log("-------- NEW Task: " + request.object.get("name") + " v: " + request.object.get("version"));
}).then(function () {
response.success();
}, function ( error) {
response.error(error.message);
}
);
});
cURL request
curl -X PUT \
-H "Content-Type: application/json" \
-H "X-Parse-Application-Id: xxxxx" \
-H "X-Parse-REST-API-Key: xxxxx" \
-H "X-Parse-Session-Token: xxxx" \
-d "{\"name\":\"NEW_VALUE\", \"version\":9999}" \
https://api.parse.com/1/classes/Tasks/VlJdk34b2A
JSON Response
"updatedAt": "2015-10-02T19:45:47.104Z"
LOG
The log prints the original and the new value, but I don't know how to access it either.
I2015-10-02T19:57:08.603Z]v160 before_save triggered for Tasks for user tAQf1nCWuz:
Input: {"original":{"createdAt":"2015-10-02T17:47:34.143Z","name":"OLD_VALUE","objectId":"VlJdk34b2A","updatedAt":"2015-10-02T19:45:47.104Z","version":0},"update":{"name":"NEW_VALUE","version":9999}}
Result: Update changed to {"name":"NEW_VALUE","version":9999}
I2015-10-02T19:57:08.901Z]-------- OLD Task: NEW_VALUE v: 9999
I2015-10-02T19:57:08.902Z]-------- NEW Task: NEW_VALUE v: 9999
After a lot of trial and error I could figure out what was going on.
Turns out that Parse merges any objects with the same class and id into one instance. That was the reason why I always had either the object registered in the DB or the one sent by the user. I honestly can't make sense of such behavior, but anyway...
The Parse JavaScript SDK offers a method called Parse.Object.disableSingleInstance that disables this "feature". But, once the method is called, all objects already defined become undefined. That includes the sent object, which means that you can't save the sent object for later reference either.
The only option was to save the keys and values of the sent object and recreate it later. So, I needed to capture the request before calling disableSingleInstance, transform it into JSON, then disable single instance, fetch the object saved in the DB, and recreate the sent object using the saved JSON.
It's not pretty and definitely isn't the most efficient code, but I couldn't find any other way. If someone out there has another approach, by all means tell me.
Parse.Cloud.beforeSave('Tasks', function(request, response) {
  if (!request.object.isNew()) {
    var id = request.object.id;
    var jsonReq;
    var Task = Parse.Object.extend("Tasks");
    var newTask = new Task();
    var oldTask = new Task();
    // Getting the new object
    var queryNewTask = new Parse.Query(Task);
    queryNewTask.get(id)
      .then(function(result) {
        newTask = result;
        // Saving values as JSON for later reference
        jsonReq = result.toJSON();
        // Disable the merge of objects w/ the same class and id.
        // It will also undefine all Parse objects,
        // including the one sent in the request.
        Parse.Object.disableSingleInstance();
        // Getting the object saved in the DB
        oldTask.set("objectId", id);
        return oldTask.fetch();
      }).then(function(result) {
        oldTask = result;
        // Recreating the new Task sent
        for (var key in jsonReq) {
          newTask.set(key, jsonReq[key]);
        }
        // Do your job here
      }, function(error) {
        response.error(error.message);
      });
  }
});
If I were you, I would pass in the old value as a parameter to the cloud function so that you can access it under request.params.(name of parameter). I don't believe that there is another way to get the old value. An old SO question said that you can use .get(), but you're claiming that that is not working. Unless you actually already had 9999 in the version...
edit - I guess beforeSave isn't called like a normal function... so create an "update version" function that passes in the current Task and the version you're trying to update to, perhaps?
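To illustrate that suggestion (a hypothetical sketch of my own; the function and parameter names are made up):
// Hypothetical custom cloud function used instead of beforeSave:
// the client passes the version it last saw, so the old and new
// values are both available in request.params.
Parse.Cloud.define("updateVersion", function(request, response) {
  var query = new Parse.Query("Tasks");
  query.get(request.params.taskId).then(function(task) {
    if (task.get("version") !== request.params.oldVersion) {
      return response.error("version conflict");
    }
    task.set("version", request.params.newVersion);
    return task.save().then(function() { response.success(task); });
  }, function(error) {
    response.error(error.message);
  });
});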
Rather than performing a query, you can see the modified attributes by checking which keys are dirty, meaning they have been changed but not saved yet.
The JS SDK includes dirtyKeys(), which returns the keys that have been changed. Try this out.
var attributes = request.object.attributes;
var changedAttributes = [];
for (var attribute in attributes) {
  if (request.object.dirty(attribute)) {
    changedAttributes.push(attribute);
    // request.object.get(attribute) has changed, so its key is pushed onto the array
  }
}
For clarification: to get an attribute's original (pre-save) value, you will have to fetch the stored object and call get() on it. It should be noted that this will count as another API request.
Hey, this worked perfectly for me:
var dirtyKeys = request.object.dirtyKeys();
var query = new Parse.Query("Question");
var clonedData = null;
query.equalTo("objectId", request.object.id);
query.find().then(function(data) {
  var clonedPatch = request.object.toJSON();
  clonedData = data[0];
  clonedData = clonedData.toJSON();
  console.log("this is the data : ", clonedData, clonedPatch, dirtyKeys);
  response.success();
}).then(null, function(err) {
  console.log("the error is : ", err);
});
For those coming to this thread in 2021-ish: if you have the server data loaded in the client SDK before you save, you can resolve this issue by passing that server data in the context option of the save() function, then using it in the beforeSave / afterSave cloud functions.
// e.g. JS client SDK
const options = {
  context: {
    before: doc._getServerData() // object data, as loaded
  }
};
doc.save(null, options);
// #beforeSave cloud fn
Parse.Cloud.beforeSave(className, async (request) => {
  const { before } = request.context;
  // ... do something with before ...
});
Caveat: this won't help if the attributes weren't already loaded into the client object, since _getServerData() only returns what the client has.
Second caveat: Parse will not handle (de)serialization for you in your cloud function, e.g.:
{
  before: { // < posted as context
    status: {
      is: 'atRisk',
      comment: 'Its all good now!',
      at: '2021-04-09T15:39:04.907Z', // string
      by: [Object] // pojo
    }
  },
  after: {
    status: { // < posted as doc's save data
      is: 'atRisk',
      comment: 'Its all good now!',
      at: 2021-04-09T15:39:04.907Z, // instanceof Date
      by: [ParseUser] // instanceof ParseUser
    }
  }
}

Breeze Partial initializer

I have a Single Page Application that is working pretty well so far, but I have run into an issue I am unable to figure out. I am using Breeze to populate a list of projects to be displayed in a table. There is way more info than what I actually need, so I am doing a projection on the data. I want to add a Knockout computed onto the entity. To accomplish this I registered an entity constructor like so...
metadataStore.registerEntityTypeCtor(entityNames.project, function () { this.isPartial = false; }, initializeProject);
The initializeProject function uses some of the values in the project to determine what the values should be for the computed. For example if the Project.Type == "P" then the rowClass should = "Red".
The problem I am having is that all the properties of Project are null except for ProjNum, which happens to be the key. I believe the issue arises because I am doing the projection, since I have registered other initializers for other types and they work just fine. Is there a way to make this work?
EDIT: I thought I would just add a little more detail for clarification. The values of all the properties are set to Knockout observables; when I interrogate the properties using the JavaScript debugger in Chrome, the _latestValue of any of the properties is null. The only property that is set is ProjNum, which is also the entity key.
EDIT2: Here is the client-side code that does the projection:
var getProjectPartials = function (projectObservable, username, forceRemote) {
  var p1 = new breeze.Predicate("ProjManager", "==", username);
  var p2 = new breeze.Predicate("ApprovalStatus", "!=", "X");
  var p3 = new breeze.Predicate("ApprovalStatus", "!=", "C");
  var select = 'ProjNum,Title,Type,ApprovalStatus,CurrentStep,StartDate,ProjTargetDate,CurTargDate';
  var isQaUser = cookies.getCookie("IsQaUser");
  if (isQaUser == "True") {
    p1 = new breeze.Predicate("QAManager", "==", username);
    select = select + ',QAManager';
  } else {
    select = select + ',ProjManager';
  }
  var query = entityQuery
    .from('Projects')
    .where(p1.and(p2).and(p3))
    .select(select);
  if (!forceRemote) {
    var p = getLocal(query);
    if (p.length > 1) {
      projectObservable(p);
      return Q.resolve();
    }
  }
  return manager.executeQuery(query).then(querySucceeded).fail(queryFailed);

  function querySucceeded(data) {
    var list = partialMapper.mapDtosToEntities(
      manager,
      data.results,
      model.entityNames.project,
      'ProjNum'
    );
    if (projectObservable) {
      projectObservable(list);
    }
    log('Retrieved projects using breeze', data, true);
  }
};
and the code for the partialMapper.mapDtosToEntities function.
var defaultExtension = { isPartial: true };

function mapDtosToEntities(manager, dtos, entityName, keyName, extendWith) {
  return dtos.map(dtoToEntityMapper);

  function dtoToEntityMapper(dto) {
    var keyValue = dto[keyName];
    var entity = manager.getEntityByKey(entityName, keyValue);
    if (!entity) {
      extendWith = $.extend({}, extendWith || defaultExtension);
      extendWith[keyName] = keyValue;
      entity = manager.createEntity(entityName, extendWith);
    }
    mapToEntity(entity, dto);
    entity.entityAspect.setUnchanged();
    return entity;
  }

  function mapToEntity(entity, dto) {
    for (var prop in dto) {
      if (dto.hasOwnProperty(prop)) {
        entity[prop](dto[prop]);
      }
    }
    return entity;
  }
}
EDIT3: Looks like it was my mistake. I found the error when I looked closer at initializeProject. Below is what the function looked like before I fixed it.
function initializeProject(project) {
  project.rowClass = ko.computed(function() {
    if (project.Type == "R") {
      return "project-list-item info";
    } else if (project.Type == "P") {
      return "project-list-item error";
    }
    return "project-list-item";
  });
}
The issue was with project.Type: I should have used project.Type(), since it is an observable. It is a silly mistake that I have made too many times since starting this project.
EDIT4: Inside initializeProject some parts are working and others aren't. When I try to access project.ProjTargetDate() I get null, same with project.StartDate(). Because of the null value I get an error thrown from the moment library, as I am working with these dates to determine when a project is late. I tried removing the select from the client query and the call to the partial-entity mapper, and when I did that everything worked fine.
You seem to be getting closer. I think a few more guard clauses in your initializeProject method would help and, when working with Knockout, one is constantly battling the issue of parentheses.
Btw, I highly recommend the Knockout Context Debugger plugin for Chrome for diagnosing binding problems.
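For instance, the guard clauses might look something like this (a hypothetical sketch; isLate is a made-up example of the moment-based computed described in EDIT4):
function initializeProject(project) {
  project.rowClass = ko.computed(function () {
    // Guard: on a partial entity the observable may be missing or hold null
    var type = project.Type && project.Type(); // note the parentheses
    if (type === "R") { return "project-list-item info"; }
    if (type === "P") { return "project-list-item error"; }
    return "project-list-item";
  });
  project.isLate = ko.computed(function () {
    var target = project.ProjTargetDate && project.ProjTargetDate();
    if (!target) { return false; } // don't hand null to moment
    return moment(target).isBefore(moment());
  });
}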
Try toType()
You're working very hard with your DTO mapping, following along with John's code from his course. Since then there's a new way to get projection data into an entity: add toType(...) to the end of the query like this:
var query = entityQuery
  .from('Projects')
  .where(p1.and(p2).and(p3))
  .select(select)
  .toType('Project'); // cast to Project
It won't solve everything, but you may be able to do away with the DTO mapping.
Consider DTOs on the server
I should have pointed this out first. If you're always cutting this data down to size, why not define the client-facing model to suit your client? Create DTO classes of the right shape(s) and project into them on the server before sending data over the wire.
You can also build metadata to match those DTOs so that Project on the client has exactly the properties it should have there ... and no more.
I'm writing about this now. Should have a page on it in a week or so.
