dojo.data.objectStore.deleteItem - javascript

I have a dojo.store.Memory wrapped in a dojo.data.ObjectStore which I am then plugging into a dataGrid. I want to delete an item from the store and have the grid update. I have tried every combination I can think of with no success. For example:
var combinedStore = new dojo.data.ObjectStore({
    objectStore: new dojo.store.Memory({data: combinedItems})
});
combinedStore.fetch({query: {id: 'itemId'}, onComplete: function (items) {
    var item = items[0];
    combinedStore.deleteItem(item);
    combinedGrid.setStore(combinedStore);
}});
combinedGrid.setStructure(gridLayout);
This throws no errors, but combinedStore.objectStore.data still has the item that was meant to be deleted and the grid still displays it. (There also seems to be a complete mismatch between combinedStore.objectStore.data and combinedStore.objectStore.index.)

There's a simple solution, luckily! The delete is successfully happening; however, you need to save the ObjectStore after the deletion for it to be committed.
Change your code to look like this:
onComplete: function (items) {
    var item = items[0];
    combinedStore.deleteItem(item);
    combinedStore.save();
    combinedGrid.setStore(combinedStore);
}
That little save should do the trick. (Please note: the save must occur after the deleteItem - if you put it outside the fetch block, it will actually run before the onComplete, because fetch is asynchronous!)
Working example: http://pastehtml.com/view/b34z5j2bc.html (Check your console for results.)

This does seem rather poorly documented at present in the new dojo.store documentation.
The old dojo.data.api.Write documentation makes it fairly clear. An excerpt from http://dojotoolkit.org/reference-guide/dojo/data/api/Write.html:
Datastores that implement the Write interface act as a two-phase intermediary between the client and the ultimate provider or service that handles the data. This allows for the batching of operations, such as creating a set of new items and then saving them all back to the persistent store with one function call.
The save API is defined as asynchronous. This is because most datastores will be talking to a server and not all I/O methods for server communication can perform synchronous operations.
Datastores track all newItem, deleteItem, and setAttribute calls on items so that the store can both save the items to the persistent store in one chunk and have the ability to revert out all the current changes and return to a pristine (unmodified) data set.
Revert should only revert the store items on the client side back to the point the last save was called.
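As a minimal sketch of that two-phase behaviour, using the wrapped Memory store from the question and assuming the standard Write-style save()/revert() methods on dojo.data.ObjectStore:

combinedStore.deleteItem(item);  // tracked locally, not yet committed
combinedStore.revert();          // discards the pending delete
combinedStore.deleteItem(item);
combinedStore.save();            // commits all pending changes in one chunk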
dojo.store has evolved from dojo.data and seems to follow many of its behavioral aspects.
The new dojo.store documentation at http://www.sitepen.com/blog/2011/02/15/dojo-object-stores/ manages to talk specifically about the delete operation without mentioning having to call save() (in fact I can't find the word 'save' on that page at all).
I'm staying away from dojo.store as long as possible; hopefully it will be easier to follow in 1.7 or later, whenever I'm forced to use it for real :)

Related

Changes in cache are not saved without async/await

I have a react-native application that uses react-navigation. There is a functional component with a handler for a button click.
I recently had a problem with async/await: I called an async method from a non-async method and it did not work as I expected. I debugged it a little and found out that the async method is called and does everything it should, but afterwards the changes are lost.
The non-async method looked like this:
const handleDone = () => {
    api.events.removeEventFromCache(eventId);
    navigation.navigate(routes.Events);
};
When the method is called, an object is removed from the cache and the user is navigated to another screen. api.events.removeEventFromCache(eventId) got called and finished successfully, and I even checked the cache to see that the object was removed. The thing is that after navigation.navigate(routes.Events) it is suddenly still in the cache.
Adding the async/await keywords solved the problem, but I do not really understand why:
const handleDone = async () => {
    await api.events.removeEventFromCache(eventId);
    navigation.navigate(routes.Events);
};
I thought it would do everything without waiting for the result, but why did the result disappear? This is not a question about execution order or waiting for the result. I do not really care about the result, I just want it to be done.
This is the log made without the keywords:
--> in cache now 3
remove the event from cache
navigate to events
cache length before remove 3
--> in cache now 3
cache length set 2
cache length checked 2
--> in cache now 3
A log with the keywords:
--> in cache now 3
remove the event from cache
cache length before remove 3
cache length set 2
cache length checked 2
navigate to events
--> in cache now 2
Yes, there is a difference in execution but my question is about the result in cache.
When you log the output before and after navigation, you are logging in two different contexts.
To explain this, let's say you have a cache object cache from which you wish to remove the event.
The way your code without the keywords executes is as follows:
cache is loaded by the api method to be edited
The navigation method executes; it sends a copy of the current cache to the next screen and discards the previous one.
cache-copy is created and dispatched by the navigation method.
Your api method is currently still working with the cache object, not cache-copy.
cache is edited by the api method but is then discarded, as the new screen is now using the cache-copy object.
In the second scenario:
The api method receives cache
The event is removed from cache
The navigation method receives the updated cache and creates cache-copy
cache-copy now has the updated list of events
The important thing to note is where and when exactly the cache-copy object is being created. If it is created after the event is removed, the code will work just fine.
Let's say your navigation method executed at the exact instant the api method removed the event; your code would then run as expected even if async/await weren't used.
async/await is just working as expected. When managing promises, you have two options:
// Using promises
const handleDone = () => {
    api.events.removeEventFromCache(eventId).then(() => {
        navigation.navigate(routes.Events);
    }); // You can manage failure with .catch()
};
and using async/await just as you posted: it waits until the promise has settled, it doesn't stop everything else by itself. Also, it is good practice to wrap it in a try/catch block in case the Promise fails.
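For instance, a sketch of your handler with that try/catch added (same names as in your code):

const handleDone = async () => {
    try {
        await api.events.removeEventFromCache(eventId);
        navigation.navigate(routes.Events);
    } catch (err) {
        // The promise rejected; report or handle the failure here
        console.error(err);
    }
};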
The Promise object represents the eventual completion (or failure) of an asynchronous operation, and its resulting value.
That means that when you call api.events.removeEventFromCache(eventId) it won't complete immediately, so you have to use one of the two options above.

How to initialize a child process with passed in functions in Node.js

Context
I'm building a general purpose game playing A.I. framework/library that uses the Monte Carlo Tree Search algorithm. The idea is quite simple, the framework provides the skeleton of the algorithm, the four main steps: Selection, Expansion, Simulation and Backpropagation. All the user needs to do is plug in four simple(ish) game related functions of his making:
a function that takes in a game state and returns all possible legal moves to be played
a function that takes in a game state and an action and returns a new game state after applying the action
a function that takes in a game state and determines if the game is over and returns a boolean and
a function that takes in a state and a player ID and returns a value based on whether the player has won, lost or the game is a draw. With that, the algorithm has all it needs to run and select a move to make.
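To illustrate, the four plug-in functions might have shapes like these (every name below is hypothetical, just a sketch of the contract described above):

// Hypothetical signatures for the four user-supplied functions:
const getLegalMoves = (state) => [];           // all legal moves for this state
const applyMove     = (state, move) => state;  // new state after applying move
const isGameOver    = (state) => false;        // has the game ended?
const getResult     = (state, playerId) => 0;  // e.g. 1 = win, 0 = draw, -1 = loss

// The framework would receive them once, at initialization, e.g.:
// mcts.init({ getLegalMoves, applyMove, isGameOver, getResult });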
What I'd like to do
I would love to make use of parallel programming to increase the strength of the algorithm and reduce the time it needs to run each game turn. The problem I'm running into is that, when using Child Processes in NodeJS, you can't pass functions to the child process, and my framework is entirely built on functions passed in by the user.
Possible solution
I have looked at this answer but I am not sure this would be the correct implementation for my needs. I don't need to be continually passing functions through messages to the child process, I just need to initialize it with functions that are passed in by my framework's user, when it initializes the framework.
I thought about one way to do it, but it seems so inelegant, on top of probably not being the most secure, that I find myself searching for other solutions. When the user initializes the framework and passes his four functions to it, I could have a script write those functions to a new js file (let's call it my-funcs.js) that would look something like:
const func1 = {... function implementation...}
const func2 = {... function implementation...}
const func3 = {... function implementation...}
const func4 = {... function implementation...}
module.exports = {func1, func2, func3, func4}
Then, in the child process worker file, I guess I would have to find a way to lazily require my-funcs.js. Or maybe I wouldn't; I guess it depends on how and when Node.js loads the worker file into memory. This all seems very convoluted.
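For what it's worth, the worker side of that file-based idea could be as small as this sketch (the path and the message shape are assumed):

// worker.js - requires the generated file and runs one of the functions
const { func1, func2, func3, func4 } = require('./my-funcs.js');

process.on('message', (state) => {
    // e.g. run the user's move generator against the received state
    process.send(func1(state));
});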
Can you describe other ways to get the result I want?
child_process is less about running a user's function and more about starting a new process to exec a file or command.
Node is inherently a single-threaded system, so for I/O-bound things, the Node Event Loop is really good at switching between requests, getting each one a little farther. See https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick/
What it looks like you're trying to do is get JavaScript to run multiple threads simultaneously. Short answer: you can't ... or rather, it's really hard. See is it possible to achieve multithreading in nodejs?
So how would we do it anyway? You're on the right track: child_process.fork(). But it needs a hard-coded function to run. So how do we get user-generated code into place?
I envision a datastore where you can take userFn.toString() and save it to a queue. Then fork the process and let it pick up the next unhandled item in the queue, marking that it did so. Then write the results to another queue, which this "GUI" thread polls, returning the calculated results to the user. At this point, you've got multi-threading ... and race conditions.
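Stripped of the queueing, the core fork-plus-toString() mechanism might look like this sketch (file names hypothetical; the child rebuilds the function with eval, with all the security caveats discussed below):

// parent.js (sketch)
const { fork } = require('child_process');

const child = fork('./worker.js');
const userFn = (gamestate) => gamestate.score + 1;  // user-supplied function

child.send({ fnSource: userFn.toString(), state: { score: 1 } });
child.on('message', (result) => {
    console.log(result);  // 2
    child.kill();
});

// worker.js (sketch)
process.on('message', ({ fnSource, state }) => {
    const fn = eval('(' + fnSource + ')');  // untrusted code runs here
    process.send(fn(state));
});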
Another idea: create a REST service that accepts the userFn.toString() content and execs it. Then in this module, you call out to the other "thread" (service), await the results, and return them.
Security: Yeah, we just flung this out the window. Whether you're executing the user's function directly, calling child_process#fork to do it, or shimming it through a service, you're trusting untrusted code. Sadly, there's really no way around this.
Assuming that security isn't an issue you could do something like this.
// Client side
<input id="func1"> <!-- For example the user inputs '(gamestate)=>{return 1}' -->
<input id="func2">
<input id="func3">
<input id="func4">
<script>
socket.on('syntax_error', function (err) { alert(err); });
function submit_funcs_strs() {
    // Get the function strings from the user inputs and put them into an array
    socket.emit('functions', [
        document.getElementById('func1').value,
        document.getElementById('func2').value,
        document.getElementById('func3').value,
        document.getElementById('func4').value
    ]);
}
</script>
// Server side
// Socket listener is async
socket.on('functions', (funcs_strs) => {
    let funcs = [];
    for (let i = 0; i < funcs_strs.length; i++) {
        try {
            funcs.push(eval(funcs_strs[i]));
        } catch (e) {
            if (e instanceof SyntaxError) {
                socket.emit('syntax_error', e.message);
                return;
            }
        }
    }
    // Run algorithm here
});

Wait until the data returns, synchronous call with angular and breeze

I'm working on a small blog engine where the user can create a blog entry and link tags to it. It is a many-to-many relation, but because Breeze cannot yet manage this relation I have to expose the join table to Breeze so that I can persist the data step by step. And my problem is here.
Tables:
BlogEntry
BlogEntryTag
Tag
Scenario:
user opens the "new blog entry" form or selects an existing one to be edited
enters the text, etc
selects one or more tags
Business logic:
create a new entity by Breeze / query the selected one
save the blog entry (1st server call, which gives back the blog_id if the blog entry is a new one)
check the already existing connections between the tags and the blog entry; if the blog entry is edited then the already existing blogEntry-tag relations might change (2nd server call)
based on the tag name, select the tag_id from the tag table (3rd server call)
create the BlogEntryTag entities by Breeze
persist the BlogEntryTag entities into the database (4th server call)
I think these steps must execute consecutively.
I have this code, and as you can see in the attached screenshot, the console logging marked '_blogEntryEnttity' does not wait until the data returns from the server; it executes before the logging marked '_blogEntryEnttity inside'. The code then throws a reference exception when it tries to set the title property a few lines later.
var blogEntryEntityQueryPromise = datacontext.blogentry.getById(_blogsObject.id);
blogEntryEntityQueryPromise.then(function (result) {
    console.log('result', result);
    _blogEntryEntity = result[0];
    console.log('_blogEntryEnttity inside', _blogEntryEntity);
    // if I need synchronous execution then I have to put the code here which must be executed consecutively
});
console.log('_blogEntryEnttity', _blogEntryEntity);

// mapping the values we got
_blogEntryEntity.title = _blogsObject.title;
_blogEntryEntity.leadWithMarkup = _blogsObject.leadWithMarkup;
_blogEntryEntity.leadWithoutMarkup = _blogsObject.leadWithoutMarkup;
_blogEntryEntity.bodyWithMarkup = _blogsObject.bodyWithMarkup;
_blogEntryEntity.bodyWithoutMarkup = _blogsObject.bodyWithoutMarkup;
console.log('_blogEntryEnttity', _blogEntryEntity);
The example comes from here.
My question is: why does it not wait until the data comes back? What is the way of handling cases like this?
I figured out that if I need synchronous execution I should place the code into the success method that follows the data retrieval on the promise. However, I really don't like this solution because my code will get ugly after a while and hard to maintain.
The datacontext.blogentry.getById method looks like below, and the implementation is in an abstract class; you can find that code below too. The whole repository pattern comes from John Papa's course on Pluralsight.
Repository class method
function getById(id) {
    return this._getById(this.entityName, id);
}
Abstract repository class method. According to Breeze's documentation page the EntityQuery class' execute method returns a Promise.
function _getById(resource, id) {
    var self = this;
    var manager = self.newManager;
    var Predicate = breeze.Predicate;
    var p1 = new Predicate('id', '==', id);
    return EntityQuery.from(resource)
        .where(p1)
        .using(manager).execute()
        .then(success).catch(_queryFailed);

    function success(data) {
        return data.results;
    }
}
I appreciate your help in advance!
I don't think you need all these round trips. I'd do this:
Query all available Tag entities, so they'll be in the EntityManager's cache (you need these to populate the UI anyway).
If it's an existing BlogEntry, just query the BlogEntry and all its associated BlogEntryTag entities; Breeze will connect the BlogEntryTags to their associated Tags in the cache. You'll add/delete BlogEntryTags if the user selects/unselects Tags for the BlogEntry.
var query = EntityQuery.from("BlogEntries").where("id", "==", id).expand("BlogEntryTags");
If it's a new BlogEntry, it won't have any BlogEntryTags. You'll create these when you save, after the user selects some tags.
Save the added/updated BlogEntry and any added/deleted BlogEntryTag entities to the database in a single saveChanges call.
See the Presenting Many-to-Many doc and its associated plunker for a deeper dive. The UI is different from what you want, but the underlying concepts are useful.
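As a rough sketch of that single-save step (entity and property names are assumed from the question's tables, not from a real model):

// Create the join entity for each newly selected tag, then save once:
var link = manager.createEntity('BlogEntryTag', {
    blogEntryId: blogEntry.id,
    tagId: selectedTag.id
});
manager.saveChanges(); // persists the BlogEntry and its BlogEntryTags in one call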
why does it not wait until the data comes back?
Because promises don't magically synchronize execution. They're still asynchronous, they still rely on callbacks.
What is the way of handling cases like this?
You need to put the code that should wait in the then callback.
However, I really don't like this solution because my code will get ugly after a while and hard to maintain.
Not really; you can write concise and elegant asynchronous code with promises. If your code is becoming too much spaghetti, abstract parts of it into their own functions. You should be able to get to a clean and flat promise chain.
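For example, a flat chain for the save flow in the question might look like this (the helper functions here are hypothetical, one per step):

datacontext.blogentry.getById(_blogsObject.id)
    .then(function (results) {
        var entry = results[0];
        return applyChanges(entry, _blogsObject);  // hypothetical: map the form values
    })
    .then(saveBlogEntry)      // hypothetical: 1st server call
    .then(syncBlogEntryTags)  // hypothetical: remaining calls, each returning a promise
    .catch(handleError);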

restangular save ignores changes to restangular object

I'm trying to call save on a restangularized object, but the save method is completely ignoring any changes made to the object; it seems to have bound the original unmodified object.
When I run this in the debugger I see that when my saveSkill method (see below) is entered, right before I call save, the skill object reflects the changes I made to its name and description fields. If I then do a "step into" I go into the Restangular.save method. However, the 'this' variable within restangular.save has my old skill, with the name and description equal to whatever they were when loaded. It's ignoring the changes I made to my skill.
The only way I could see this happening is if someone called bind on the save, though I can't see why Restangular would do that. My only guess is that it's due to my calling $object, but I can't find much in the way of documentation to confirm this.
I'm afraid I can't copy and paste; all my code examples are typed by hand, so forgive any obvious syntax issues as typos. I don't know how much I need to describe, so here is the shortened version; I can retype more if needed:
state('skill.detail', {
    url: '/:id',
    data: { pageTitle: 'Skill Detail' },
    template: 'template.tpl.html',
    controller: 'SkillFormController',
    resolve: {
        isCreate: function () { return false; },
        skill: function (SkillService, $stateParams) {
            return SkillService.get($stateParams.id, {"$expand": "people"}).$object;
        }
    }
});
my SkillService looks like this:
angular.module('project.skill').factory('SkillService', ['Restangular', function (Restangular) {
    var route = "skills";
    var SkillService = Restangular.all(route);
    SkillService.restangularize = function (element, parent) {
        var skill = Restangular.restangularizeElement(parent, element, route);
        return skill;
    };
    return SkillService;
}]);
Inside my template.tpl.html I have your standard text boxes bound to name and description, and a save button. The save button calls saveSkill(skill) on my SkillFormController, which looks like this:
$scope.saveSkill = function (skill) {
    skill.save().then(function (returnedSkill) {
        toaster.pop('success', "YES!", returnedSkill.name + " saved.");
        // ...(other irrelevant stuff)
    });
};
If it matters, I have an addElementTransformer hook that runs a method calling skill.addRestangularMethod() to add a getPeople method to all skill objects. I don't include the code since I doubt it's relevant, but if needed I can elaborate on it.
I got this to work, though I honestly still don't know entirely why it works; I just know the fix I used.
First, as stated in the comments, Restangular does bind all of its methods to the original restangularized object. This usually works since it's simply aliasing the restangularized object; as long as you use that object, your modifications will work.
This can be an issue with Restangular.copy() vs angular.copy(). Restangular.copy() makes sure to restangularize the copied object properly, rebinding the Restangular methods to the new copy. If you call only angular.copy() instead of Restangular.copy(), you will get results like mine above.
However, I was not doing any copy of the object (okay, I saved a master copy to revert to if cancel was hit, but that used Restangular.copy() and, besides, wasn't being used in my simple save scenario).
As far as I can tell my problem was using the .$object call on the restangular promise. I walked through Restangular enough to see it was doing some extra logic restangularizing methods after a promise returns, but I didn't get to the point of following $object's logic. However, replacing the $object call with a then() function that did nothing but save the returned result has fixed my issues. If someone can explain how, I would love to update this question, but I can't justify using work time to keep hunting down an already-fixed problem, even though I really would like to understand the cause better.
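For reference, this is roughly what the resolve looks like with then() instead of $object (a sketch based on the service above):

skill: function (SkillService, $stateParams) {
    return SkillService.get($stateParams.id, {"$expand": "people"})
        .then(function (skill) {
            return skill; // the properly restangularized element
        });
}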

Conflicting purposes of IndexedDB transactions

As I understand it, there are three somewhat distinct reasons to put multiple IndexedDB operations in a single transaction rather than using a unique transaction for each operation:
Performance. If you’re doing a lot of writes to an object store, it’s much faster if they happen in one transaction.
Ensuring data is written before proceeding. Waiting for the “oncomplete” event is the only way to be sure that a subsequent IndexedDB query won’t return stale data.
Performing an atomic set of DB operations. Basically, “do all of these things, but if one of them fails, roll it all back”.
#1 is fine, most databases have the same characteristic.
#2 is a little more unique, and it causes issues when considered in conjunction with #3. Let’s say I have some simple function that writes something to the database and runs a callback when it's over:
function putWhatever(obj, cb) {
    var tx = db.transaction("whatever", "readwrite");
    tx.objectStore("whatever").put(obj);
    tx.oncomplete = function () { cb(); };
}
That works fine. But now if you want to call that function as a part of a group of operations you want to atomically commit or fail, it's impossible. You'd have to do something like this:
function putWhatever(tx, obj, cb) {
    tx.objectStore("whatever").put(obj).onsuccess = function () { cb(); };
}
This second version of the function is very different from the first, because the callback runs before the data is guaranteed to be written to the database. If you try to read back the object you just wrote, you might get a stale value.
Basically, the problem is that you can only take advantage of one of #2 or #3. Sometimes the choice is clear, but sometimes not. This has led me to write horrible code like:
function putWhatever(tx, obj, cb) {
    if (tx === undefined) {
        tx = db.transaction("whatever", "readwrite");
        tx.objectStore("whatever").put(obj);
        tx.oncomplete = function () { cb(); };
    } else {
        tx.objectStore("whatever").put(obj).onsuccess = function () { cb(); };
    }
}
However, even that still is not a general solution and could fail in some scenarios.
Has anyone else run into this problem? How do you deal with it? Or am I simply misunderstanding things somehow?
The following is just opinion as this doesn't seem like a 'one right answer' question.
First, performance is an irrelevant consideration. Avoid this factor entirely, unless later profiling suggests a material problem. Chances of perf issues are ridiculously low.
Second, I prefer to organize requests into transactions solely to maintain integrity. Integrity is paramount. Integrity as I define it here simply means that the database at any one point in time does not contain conflicting or erratic data. Essentially the database is never able to enter into a 'bad' state. For example, to impose a rule that cross-store object references point to valid and existing objects in other stores (a.k.a. referential integrity), or to prevent duplicated requests such as a double add/put/delete. Obviously, if the app were something like a bank app that credits/debits accounts, or a heart-attack monitor app, things could go horribly wrong.
My own experience has led me to believe that code involving indexedDB is not prone to the traditional facade pattern. I found that what worked best, in terms of organizing requests into different wrapping functions, was to design functions around transactions. I found that quite often there are very few DRY violations because every request is nearly always unique to its transactional context. In other words, while a similar 'put object' request might appear in more than one transaction, it is so distinct in its behavior given its separate context that it merits violating DRY.
If you go the function-per-request route, I am not sure why you are checking if the transaction parameter is undefined. Have the caller create the transaction and then pass it to the requests in turn. Expect the tx to always be defined and do not over-zealously guard against it. If it is ever not defined there is either a serious bug in indexedDB or in your calling function.
Explicitly, something like:
function doTransaction1(db, onComplete) {
    var tx = db.transaction(...);
    tx.oncomplete = onComplete;
    doRequest1(tx);
    doRequest2(tx);
    doRequest3(tx);
}

function doRequest1(tx) {
    var store = tx.objectStore(...);
    // ...
}
// ...
If the requests should not execute in parallel, and must run in a series, then this indicates a larger and more difficult design issue.
