Ember websocket/new object collision workaround clobbering hasMany promise

I have an Ember 3.8 app that uses Ember Data 3.8 and a websocket. We use the standard Ember Data API to preload data in Routes before processing, and for creating and deleting records. We also push all relevant updates to the database back to the browser through a websocket, where we pushPayload it into the store.
This pattern doesn't seem too uncommon - I've seen people discuss it elsewhere. A common problem with this pattern, however, is how to handle the collision when you are creating a record and the websocket sends down a copy of this new record before your POST resolves. Normally a pushPayload would simply update the record already existing with the payload's ID. For a record mid-save, the pushPayload creates a new record. Then when the POST returns and Ember Data tries assigning the ID to the pending record, it discovers there was already a record with that ID (from the websocket) and throws an exception.
To get around this, we wrapped createRecord in some code that detects the collision and unloads the existing record with that ID so the pending record can be the true instance.
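For concreteness, here is a minimal sketch of that kind of guard at the adapter level. The createRecord hook, peekRecord, and unloadRecord are real Ember Data 3.8 APIs, but the guard logic itself is illustrative (and assumes a JSON:API-shaped payload), not our exact wrapper:

// app/adapters/application.js
import DS from 'ember-data';

export default DS.JSONAPIAdapter.extend({
  createRecord(store, type, snapshot) {
    return this._super(store, type, snapshot).then((payload) => {
      // If the websocket already pushed this record, evict that copy so
      // the pending record can take the ID without colliding.
      let id = payload.data && payload.data.id;
      let duplicate = id && store.peekRecord(type.modelName, id);
      if (duplicate) {
        store.unloadRecord(duplicate);
      }
      return payload;
    });
  }
});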
This worked well for a while, but now we're getting some odd behavior.
One of the models we create this way is the target of an async hasMany from another model.
Example:
// app/models/model-parent.js
import DS from 'ember-data';

export default DS.Model.extend({
  children: DS.hasMany('model-child', { async: true })
});
When we have a collision during a save of 'model-child', the PromiseManyArray returned by get('children') is destroyed, but the content of the PromiseManyArray is not. Further, parent.get('children') keeps returning this destroyed PromiseManyArray. Later we try binding a computed to it, which throws an exception trying to bind to the destroyed PromiseManyArray.
e.g.:
> modelParent1.get('children').isDestroyed
true
> modelParent1.get('children').content.isDestroyed
false
children: computed('modelParent1.children.[]')
^^^ blows up at controller setup time when trying to add a chain watcher to the destroyed PromiseManyArray
I've tried reloading the parent object and also reloading the hasMany to get a 'fixed' PromiseManyArray but it still returns the broken one.
I know this is a pretty specific setup but would appreciate any advice for any level of our issue - the websocket collision, the broken hasMany, etc.
I wouldn't know where to begin creating a twiddle for this, but if my workaround doesn't actually work I'll give it a shot.


Firestore Offline Cache & Promises

This question is a follow-up on Firestore offline cache. I've read the offline cache documentation but am confused on one point.
One commenter answered in the prior question (about a year ago):
"Your Android code that interacts with the database will be the same whether you're connected or not, since the SDK simply works the same."
In the API documentation for DocumentReference's set method, I just noticed that it says:
Returns: non-null Promise containing void. A promise that resolves once the data has been successfully written to the backend. (Note that it won't resolve while you're offline.)
Emphasis mine. Wouldn't this bit in the documentation suggest that the code won't behave the same, or am I missing something? If I'm waiting on the .set() to resolve before allowing some user interaction, it sounds from this bit like I need to adjust the code for an offline case differently than I would normally.
The CollectionReference's add method worries me a bit more. It doesn't have exactly the same note but says (emphasis mine):
A Promise that resolves with a DocumentReference pointing to the newly created document after it has been written to the backend.
That is a little more vague, as I'm not sure whether "backend" in this case is a superset of "cache" and "server" or whether it's meant to denote only the server. If this one doesn't resolve, that would mean that the following wouldn't work, correct?
return new Promise((resolve, reject) => {
  let ref = firestore.collection(path)
  ref.add(data)
    .then(doc => {
      resolve({ id: doc.id, data: data })
    })
  ...
})
Meaning, the .add() would not resolve, .then() would not run, and I wouldn't have access to the id of the document that was just added. I hope I'm just misunderstanding something and that my code can continue to function as-is both online and offline.
You have two concerns here, which are not really related. I'll explain both separately.
For the most part, developers don't typically care if the promise from a document update actually resolves or not. It's almost always "fire and forget". What would an app gain to know that the update hit the server, as long as the app behaves the same way regardless? The local cache has been updated, and all future queries will show that the document has been updated, even if the update hasn't been synchronized with the server yet.
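As a minimal illustration (assuming a firestore handle like the one in your snippet; the collection and document names are made up), a fire-and-forget write simply drops the promise:

// The local cache is updated synchronously, so later queries will see
// this change even if the sync to the server happens much later.
firestore.collection('todos').doc('t1').set({ done: true });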
The primary exception to this is transactions. Transactions require that the server be online, because round trips need to be made between the client and server in order to ensure that the update was atomic. Transactions simply don't work offline. If you need to know if a transaction worked, you need to be online. Unlike normal document writes, transactions don't persist in local cache. If the app is killed before the transaction completes on the server, the transaction is lost.
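For example, a counter increment like this sketch (collection and field names are made up) returns a promise that will never resolve while you're offline, because the read-modify-write round trip requires the server:

firestore.runTransaction((tx) => {
  const ref = firestore.collection('counters').doc('visits');
  // The get and update must happen atomically against the server.
  return tx.get(ref).then((snap) => {
    tx.update(ref, { count: snap.data().count + 1 });
  });
});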
Your second concern is about newly added documents where the id of the document isn't defined at the time of the update. It's true that add() returns a promise that only resolves when the new document exists on the server. You can't know the id of the document until the promise delivers you the DocumentReference of the new document.
If this behavior doesn't work for you, you can generate a new id for a document by simply calling doc() with no arguments instead of add(). doc() immediately returns the DocumentReference of the new (future) document that hasn't been written (until you choose to write it). In both the case of doc() and add(), these DocumentReference objects contain unique ids generated on the client. The difference is that with doc(), you can use the id immediately, because you get a DocumentReference immediately. With add(), you can't, because the DocumentReference isn't provided until the promise resolves. If you need that new document id right now, even while offline, use doc() instead of add(). You can then use the returned DocumentReference to create the document offline, stored in the local cache, and synchronized later. The update will then return a promise that resolves when the document is actually written.
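Concretely, here is a sketch of the doc() approach, reusing the firestore, path, and data from your snippet:

// doc() with no arguments mints a unique id on the client immediately;
// no network round trip is needed to learn the new document's id.
const ref = firestore.collection(path).doc()
console.log(ref.id) // available right now, even offline

// The write lands in the local cache immediately; this promise resolves
// only once the document has actually been written to the backend.
ref.set(data).then(() => {
  // synchronized with the server
})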

Angular - initiate a response when data loads/changes

Relative Angular newbie here, and I am wrestling with what would seem like something most applications need:
Watching a model/data and doing something when that model is hydrated and/or has a state change.
The use case would be: when a user logs in (the user model gets initiated), a complementary directive/controller sees the state change and then requests from the backend a list of this user's corresponding data elements (i.e. notifications, emails, friends, etc.).
I've searched through StackOverflow and elsewhere, and it always appears that a shared service is the way to go; however, I never find a definitive answer about how the directives are supposed to watch the state change. Some suggest a broadcast/watch, while others say that is a bad pattern.
Our app currently employs a shared UserService, which contains a model representation of a User (data and simple methods like fullName()).
This service also has a subscription hook that directives can subscribe to:
onLogin: (fn) ->
  $rootScope.$on 'userService::login', fn
and the usage is:
UserService.onLogin(myFunction)
When the UserService loads the User, it then broadcasts userService::login and all the listeners are run. Hence everyone that shares the UserService can subscribe and respond to a User logging in.
This all works, but I was thinking there must be a built-in Angular way for the directives to just know about the state change and then run myFunction (i.e. make additional data calls).
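For instance, something like this $watch sketch is what I imagine (the directive and property names are made up, not from our app):

// Watch the shared service's user property; the listener fires when the
// model goes from undefined to hydrated, and on any later state change.
app.directive('userNotifications', function (UserService) {
  return {
    link: function (scope) {
      scope.$watch(
        function () { return UserService.user; },
        function (user) {
          if (user) {
            // user model hydrated: fetch notifications, emails, friends, etc.
          }
        }
      );
    }
  };
});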
Thoughts and feelings would be extremely appreciated!

Sails.js, API executing without route configuration and controller.method

Hey guys, I have a peculiar situation in Sails.js.
I created a new model and controller named "sponsor" using sails generate api sponsor on the command line.
Then I created the route 'post /create/new/sponsor': 'SponsorController.create', created a method "create" inside the controller, and structured the respective model.
Now when I try it out in Postman using a legitimate configuration, everything works fine, as it is supposed to: a new sponsor gets created!
THE PROBLEM IS:
Even when I hit the URL localhost:port/sponsor, a new sponsor gets created (this should return status 404, as there is no such route defined).
Then I deleted the controller method "create" and tried the URL localhost:port/sponsor again; strangely, it still works!
The only error I got was in the command prompt, saying "Invalid usage of publishCreate():: Values must have an 'id' instead ... (the body passed)".
I checked the other APIs that I created before; everything works normally according to the routes defined, i.e. if the URL doesn't match a route, the response is status 404.
I want to know why this is happening.
Sails.js has blueprints enabled by default. These automatically create GET, PUT, POST, and DELETE routes for your controllers at the URL localhost:port/sponsor.
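Roughly speaking, blueprints give you for free what these manual routes would define (illustrative, not exhaustive):

// Approximate manual equivalent of the blueprint REST routes:
module.exports.routes = {
  'GET /sponsor': 'SponsorController.find',
  'GET /sponsor/:id': 'SponsorController.findOne',
  'POST /sponsor': 'SponsorController.create', // why your POST worked
  'PUT /sponsor/:id': 'SponsorController.update',
  'DELETE /sponsor/:id': 'SponsorController.destroy'
};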
See Concepts & Reference for more information.
It is possible to turn off blueprints in config/blueprints.js: uncomment actions and set its value to false:
actions: false,
The "rest" key in config/blueprints.js also needs to be uncommented and set to false to disable the auto-generated REST routes, i.e. localhost:port/sponsor:
rest: false,
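Put together, a minimal sketch of config/blueprints.js with both settings disabled:

// config/blueprints.js
module.exports.blueprints = {
  actions: false, // no implicit routes bound to controller actions
  rest: false // no auto-generated REST routes like POST /sponsor
};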
Thank you @Callum for pointing out the solution.
It's important to realize that, even if you haven't defined these routes yourself, as long as a model exists with the same name as the controller, Sails will respond with built-in CRUD logic in the form of a JSON API, including support for sort, pagination, and filtering.
Best, Callum

How to migrate the database in sails.js?

So I created a new Sails.js project, then ran
$ sails generate api user
like the loading page suggested. But now when I fire up the server with sails lift I get an error:
sails lift
info: Starting app...
-----------------------------------------------------------------
Excuse my interruption, but it looks like this app
does not have a project-wide "migrate" setting configured yet.
(perhaps this is the first time you're lifting it with models?)
In short, this setting controls whether/how Sails will attempt to automatically
rebuild the tables/collections/sets/etc. in your database schema.
You can read more about the "migrate" setting here:
http://sailsjs.org/#/documentation/concepts/ORM/model-settings.html?q=migrate
In a production environment (NODE_ENV==="production") Sails always uses
migrate:"safe" to protect inadvertent deletion of your data.
However during development, you have a few other options for convenience:
1. safe - never auto-migrate my database(s). I will do it myself (by hand)
2. alter - auto-migrate, but attempt to keep my existing data (experimental)
3. drop - wipe/drop ALL my data and rebuild models every time I lift Sails
What would you like Sails to do?
info: To skip this prompt in the future, set `sails.config.models.migrate`.
info: (conventionally, this is done in `config/models.js`)
Is there a sails migrate command I have to run? I know in rails I would do something like rake db:migrate. What's the procedure in sails after the generate command?
It's not an error; it just tells you that you did not specify a default migration strategy.
Just open config/models.js and uncomment the line where it says migrate.
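A minimal sketch of config/models.js with the setting uncommented (pick whichever strategy fits):

// config/models.js
module.exports.models = {
  migrate: 'alter' // or 'safe' / 'drop', see below
};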
Like the information "popup" tells you, you can choose between
safe
alter
drop
Drop will delete all your tables and recreate them, which is good for a new project where you want to seed fresh dummy data all the time.
Alter will try to keep your data but will change your tables when you change your models. If Sails can't keep the data, it will be deleted.
Safe is, like the name says, the safest. It will do nothing at all to your tables.
If you want different behavior for different tables, you can specify the same option directly in a model, which overrides the default setting for that model only.
So say you have a User model and want to keep its data, but want all other models recreated every time you sails lift. You should add
migrate: 'safe'
to the model directly and use drop as the default strategy.
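A sketch of that per-model override (the attribute is just an example):

// api/models/User.js
module.exports = {
  migrate: 'safe', // keep User data even while other models get dropped
  attributes: {
    name: { type: 'string' }
  }
};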
I like alter personally, but that might be opinionated.
You do not need to do anything else: if there's a model and migrate is set to drop or alter, it will be migrated when sails lift is run.
You can read more about model settings here
As a side note, you can see exactly what Sails is doing to your tables during lift by setting the log level to verbose in your config/env/development.js file.
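A sketch of that setting (in sails.config the level nests under log):

// config/env/development.js
module.exports = {
  log: {
    level: 'verbose' // logs the schema changes Sails makes during lift
  }
};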

Single record persistence with ember-data

In Ember.js with ember-data (using the 1.0pre versions) all changes to the data are saved into a defaultTransaction on the store. When the store is committed with store.commit() ALL changes to the data are saved back to the API (using the RESTAdapter).
I would like more control over objects being persisted. So for now, I have been getting instances of store and adapter, then calling something like adapter.createRecord(store, type, record) or updateRecord where type is the App.Person model and record is an instance of that model.
This is using internal bits of the DS.RESTAdapter that I don't think are meant to be used directly. While it works, I'm hoping there is a better way to gain more control over persistence than store.commit(). The business logic and UX of my application require finer control.
// Create a dedicated transaction so that committing it persists only the
// records added to it, independently of the store's defaultTransaction.
var transaction = router.get('store').transaction();
var person = transaction.createRecord(App.Person);
person.set('name', 'Thanatos');
transaction.commit(); // saves just this transaction's records
Watch Yehuda Katz's presentation regarding this:
http://www.cloudee.com/preview/collection/4fdfec8517ee3d671800001d
