createList does not pick up relationship in Mirage js - javascript

I am trying to create multiple posts that belong to a user using Mirage js createList facility. I have created models with corresponding relationships:
models: {
user: Model.extend({
posts: hasMany(),
}),
post: Model.extend({
user: belongsTo()
})
}
In the seeds method, I am trying to create a list of posts and allocate them to a user with this code:
seeds(server) {
let posts = server.createList("post", 2);
server.create("user", {
name: "John",
posts: [posts],
});
}
Unfortunately, when the HTTP request hits this.get("/users"); I receive a Mirage error, which I understand but can't fix:
Mirage: You're trying to create a user model and you passed in "model:post(1),model:post(2)" under the posts key, but that key is a HasMany relationship. You must pass in a Collection, PolymorphicCollection, array of Models, or null.
As far as I can tell I am passing an array of Models? How can I fix it, please?

So the problem was:
posts: [posts]
This part returns an array (or collection) already:
let posts = server.createList("post", 2);
So wrapping it in another array, [posts], is incorrect.
With server.create("...") you would wrap the single model in an array to hook it up, but server.createList already returns an array.
Correct syntax is:
seeds(server) {
let posts = server.createList("post", 2);
server.create("user", {
name: "John",
posts: posts,
});
}
or even shorter:
seeds(server) {
let posts = server.createList("post", 2);
server.create("user", {
name: "John",
posts,
});
}
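For comparison, when the posts are created one at a time with server.create, wrapping them in an array literal is exactly right, since each call returns a single model. A minimal sketch of that variant:
seeds(server) {
  // server.create returns a single model, so here the array literal is what hooks them up
  let firstPost = server.create("post");
  let secondPost = server.create("post");
  server.create("user", {
    name: "John",
    posts: [firstPost, secondPost],
  });
}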

Related

using prisma, how do you access newly created record from within a nested write (update first, then create within)

Using Prisma, I have a question about accessing a record that was newly created from within a nested write (update first, then create within).
I'm following along with the examples on this page in the Prisma docs.
In particular, I am looking at the following two items in the data model:
Note, I have slightly modified the example for the purposes of this question by adding a counter to User.
model User {
id Int @id @default(autoincrement())
email String @unique
posts Post[]
counter Int
}
model Post {
id Int @id @default(autoincrement())
title String
author User? @relation(fields: [authorId], references: [id])
authorId Int?
}
Now, let's say you want to create a new Post and connect it to the User and at the same time, you also want to increment the counter.
My assumption, after reading this section of the doc is that you would need to update the existing User and within that, create a new Post record.
i.e.
const user = await prisma.user.update({
where: { email: 'alice@prisma.io' },
data: {
// increment the counter for this User
counter: {
increment: 1,
},
// create the new Post for this User
posts: {
create: { title: 'Hello World' },
},
},
})
My question is this. In the above scenario, how do you access the newly created Post in the return of the query?
In particular, say you want to get the id of the new Post?
From what I can tell, the returned user object could possibly include the array of all associated Posts, i.e. if you added this to the update query...
include: {
posts: true,
}
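For reference, a minimal sketch of the whole update call with that include attached (this is just the two snippets above combined):
const user = await prisma.user.update({
  where: { email: 'alice@prisma.io' },
  data: {
    counter: { increment: 1 },
    posts: { create: { title: 'Hello World' } },
  },
  // returns every related Post on user.posts, not just the newly created one
  include: { posts: true },
})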
But I can't yet see how, using Prisma, you get the individual Post that you just created as part of this update query.
I have found one way of achieving this that works.
It uses the transaction API.
Basically, instead of trying to use a nested write that begins with an update on User and contains a nested create on Post...
I split the two operations out into separate variables and run them together using a prisma.$transaction() as follows.
const updateUser = prisma.user.update({
where: { email: 'alice@prisma.io' },
data: {
// increment the counter for this User
counter: {
increment: 1,
},
},
});
const createPost = prisma.post.create({
data: {
// create the new Post for this User
title: 'Hello World',
author: {
connect: {
email: 'alice@prisma.io',
},
},
},
});
const [ updateUserResult, createPostResult ] = await prisma.$transaction([updateUser, createPost ]);
// if createPostResult, then can access details of the new post from here...
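From there, the fields of the new Post are available directly on createPostResult, for example:
// createPostResult is the newly created Post record
const newPostId = createPostResult.id;
console.log(newPostId, createPostResult.title);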

How can I update the value of an item in an array in firebase? [duplicate]

I'm currently trying Firestore, and I'm stuck at something very simple: "updating an array (aka a subdocument)".
My DB structure is super simple. For example:
proprietary: "John Doe",
sharedWith:
[
{who: "first#test.com", when:timestamp},
{who: "another#test.com", when:timestamp},
],
I'm trying (without success) to push new records into the sharedWith array of objects.
I've tried:
// With SET
firebase.firestore()
.collection('proprietary')
.doc(docID)
.set(
{ sharedWith: [{ who: "third@test.com", when: new Date() }] },
{ merge: true }
)
// With UPDATE
firebase.firestore()
.collection('proprietary')
.doc(docID)
.update({ sharedWith: [{ who: "third@test.com", when: new Date() }] })
Neither works. These queries overwrite my array.
The answer might be simple, but I couldn't find it...
Firestore now has two functions that allow you to update an array without re-writing the entire thing.
Link: https://firebase.google.com/docs/firestore/manage-data/add-data, specifically https://firebase.google.com/docs/firestore/manage-data/add-data#update_elements_in_an_array
Update elements in an array
If your document contains an array field, you can use arrayUnion() and
arrayRemove() to add and remove elements. arrayUnion() adds elements
to an array but only elements not already present. arrayRemove()
removes all instances of each given element.
Edit 08/13/2018: There is now support for native array operations in Cloud Firestore. See Doug's answer below.
There is currently no way to update a single array element (or add/remove a single element) in Cloud Firestore.
This code here:
firebase.firestore()
.collection('proprietary')
.doc(docID)
.set(
{ sharedWith: [{ who: "third@test.com", when: new Date() }] },
{ merge: true }
)
This says to set the document at proprietary/docID such that sharedWith = [{ who: "third@test.com", when: new Date() }], but not to affect any existing document properties. It's very similar to the update() call you provided; however, the set() call will create the document if it does not exist, while the update() call will fail.
So you have two options to achieve what you want.
Option 1 - Set the whole array
Call set() with the entire contents of the array, which will require reading the current data from the DB first. If you're concerned about concurrent updates you can do all of this in a transaction.
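A minimal sketch of that read-then-set flow, reusing the docID and sharedWith field from the question (the transaction-based variant is shown in a later answer):
const docRef = firebase.firestore().collection('proprietary').doc(docID);
docRef.get().then(snapshot => {
  // read the current array (or start from an empty one), append, then write it back
  const data = snapshot.exists ? snapshot.data() : {};
  const sharedWith = data.sharedWith || [];
  sharedWith.push({ who: "third@test.com", when: new Date() });
  return docRef.set({ sharedWith }, { merge: true });
});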
Option 2 - Use a subcollection
You could make sharedWith a subcollection of the main document. Then
adding a single item would look like this:
firebase.firestore()
.collection('proprietary')
.doc(docID)
.collection('sharedWith')
.add({ who: "third@test.com", when: new Date() })
Of course this comes with new limitations. You would not be able to query
documents based on who they are shared with, nor would you be able to
get the doc and all of the sharedWith data in a single operation.
Here is the latest example from the Firestore documentation:
firebase.firestore.FieldValue.arrayUnion
var washingtonRef = db.collection("cities").doc("DC");
// Atomically add a new region to the "regions" array field.
washingtonRef.update({
regions: firebase.firestore.FieldValue.arrayUnion("greater_virginia")
});
// Atomically remove a region from the "regions" array field.
washingtonRef.update({
regions: firebase.firestore.FieldValue.arrayRemove("east_coast")
});
You can use a transaction (https://firebase.google.com/docs/firestore/manage-data/transactions) to get the array, push onto it and then update the document:
const booking = { some: "data" };
const userRef = this.db.collection("users").doc(userId);
this.db.runTransaction(transaction => {
// This code may get re-run multiple times if there are conflicts.
return transaction.get(userRef).then(doc => {
if (!doc.data().bookings) {
transaction.set(userRef, {
bookings: [booking]
});
} else {
const bookings = doc.data().bookings;
bookings.push(booking);
transaction.update(userRef, { bookings: bookings });
}
});
}).then(function () {
console.log("Transaction successfully committed!");
}).catch(function (error) {
console.log("Transaction failed: ", error);
});
Sorry, late to the party, but Firestore solved this back in August 2018, so if you are still looking for it, here it is: all issues with regards to arrays are solved.
Official blog post: https://firebase.googleblog.com/2018/08/better-arrays-in-cloud-firestore.html
array-contains, arrayRemove and arrayUnion for checking, removing and updating arrays. Hope it helps.
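As a quick illustration of the query side, array-contains filters documents by array membership. A small sketch reusing the cities/regions example from the documentation snippet above (assuming db = firebase.firestore()):
// find cities whose "regions" array contains "west_coast"
db.collection("cities")
  .where("regions", "array-contains", "west_coast")
  .get()
  .then(querySnapshot => {
    querySnapshot.forEach(doc => console.log(doc.id, doc.data()));
  });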
To build on Sam Stern's answer, there is also a third option which made things easier for me, and that is using what Google calls a Map, which is essentially a dictionary.
I think a dictionary is far better for the use case you're describing. I usually use arrays for stuff that isn't really updated too much, so they are more or less static. But for stuff that gets written a lot, specifically values that need to be updated for fields that are linked to something else in the database, dictionaries prove to be much easier to maintain and work with.
So for your specific case, the DB structure would look like this:
proprietary: "John Doe"
sharedWith:{
whoEmail1: {when: timestamp},
whoEmail2: {when: timestamp}
}
This will allow you to do the following:
var whoEmail = 'first@test.com';
var sharedObject = {};
// dot notation targets just this nested field; update() does not take a merge option
sharedObject['sharedWith.' + whoEmail + '.when'] = new Date();
firebase.firestore()
.collection('proprietary')
.doc(docID)
.update(sharedObject);
The reason for defining the object as a variable is that using 'sharedWith.' + whoEmail + '.when' directly as a key in the update call will result in an error, at least when using it in a Node.js Cloud Function.
Edit (adding an explanation):
Say you have an array you want to update an existing Firestore document field with. You can use set(yourData, { merge: true }); passing SetOptions (the second param of the set function) with { merge: true } is a must in order to merge the changes instead of overwriting them. Here is what the official documentation says about it:
An options object that configures the behavior of set() calls in DocumentReference, WriteBatch, and Transaction. These calls can be configured to perform granular merges instead of overwriting the target documents in their entirety by providing a SetOptions with merge: true.
You can use this:
const yourNewArray = [{who: "first@test.com", when: timestamp},
{who: "another@test.com", when: timestamp}]
collectionRef.doc(docId).set(
{
proprietary: "jhon",
sharedWith: firebase.firestore.FieldValue.arrayUnion(...yourNewArray),
},
{ merge: true },
);
hope this helps :)
addToCart(docId: string, prodId: string): Promise<void> {
return this.baseAngularFirestore.collection('carts').doc(docId).update({
products:
firestore.FieldValue.arrayUnion({
productId: prodId,
qty: 1
}),
});
}
I know this is really old, but to help newbies with the issue:
Firebase v9 provides a solution using arrayUnion and arrayRemove.
await updateDoc(documentRef, {
proprietary: arrayUnion({ sharedWith: [{ who: "third@test.com", when: new Date() }] })
});
check this out for more explanation
Other than the answers mentioned above, this will do it.
Using Angular 5 and AngularFire2 (or use firebase.firestore() instead of this.afs):
// say you have the following object and
// database structure as you mentioned in your post
data = { who: "third@test.com", when: new Date() };
...othercode
addSharedWith(data) {
const postDocRef = this.afs.collection('posts').doc('docID');
postDocRef.valueChanges().subscribe(post => {
// Grab the existing sharedWith array
// If post.sharedWith doesn't exist, initialize it with an empty array
const foo = { 'sharedWith': post.sharedWith || [] };
// push the new entry onto the existing array
foo['sharedWith'].push(data);
// pass the updated array to Firestore
postDocRef.update(foo);
// using .set() would overwrite everything;
// .update() only updates existing values,
// which is why sharedWith is initialized with an empty array
});
}
We can use the arrayUnion({}) method to achieve this.
Try this:
collectionRef.doc(ID).update({
sharedWith: admin.firestore.FieldValue.arrayUnion({
who: "first#test.com",
when: new Date()
})
});
Documentation can be found here: https://firebase.google.com/docs/firestore/manage-data/add-data#update_elements_in_an_array
Consider John Doe a document rather than a collection
Give it a collection of things and thingsSharedWithOthers
Then you can map and query John Doe's shared things in that parallel thingsSharedWithOthers collection.
proprietary: "John Doe"(a document)
things(collection of John's things documents)
thingsSharedWithOthers(collection of John's things being shared with others):
[thingId]:
{who: "first#test.com", when:timestamp}
{who: "another#test.com", when:timestamp}
Then set thingsSharedWithOthers:
firebase.firestore()
.collection('thingsSharedWithOthers')
.doc(docID) // a document reference (id chosen here for illustration) is needed before calling set()
.set(
{ [thingId]: { who: "third@test.com", when: new Date() } },
{ merge: true }
)
If you want to update an array in a Firebase document, you can do this:
var documentRef = db.collection("Your collection name").doc("Your doc name")
documentRef.update({
yourArrayName: firebase.firestore.FieldValue.arrayUnion("The Value you want to enter")});
Although firebase.firestore.FieldValue.arrayUnion() provides the solution for updating an array in Firestore, when used with .set() it also requires {merge: true}. If you do not use {merge: true}, set() will delete all other fields in the document while writing the new value. Here is working code for updating an array without losing data in the referenced document using the .set() method:
const docRef = firebase.firestore().collection("your_collection_name").doc("your_doc_id");
docRef.set({yourArrayField: firebase.firestore.FieldValue.arrayUnion("value_to_add")}, {merge:true});
If anybody is looking for a Java Firestore SDK solution to add items to an array field:
List<String> list = java.util.Arrays.asList("A", "B");
Object[] fieldsToUpdate = list.toArray();
DocumentReference docRef = getCollection().document("docId");
docRef.update(fieldName, FieldValue.arrayUnion(fieldsToUpdate));
To delete items from the array, use FieldValue.arrayRemove().
If the document contains a nested object (a map), dot notation can be used to reference and update nested fields.
Node.js example:
const users = {
name: 'Tom',
surname: 'Smith',
favorites: {
sport: 'tennis',
color: 'red',
subject: 'math'
}
};
const update = await db.collection('users').doc('Tom').update({
'favorites.sport': 'snowboard'
});
or an Android SDK example:
db.collection("users").document("Tom")
.update("favorites.sport", "snowboard");
There is a simple hack in Firestore:
use a path with "." as the property name:
propertyname.arraysubname.${id}:
db.collection("collection")
.doc("docId")
.update({arrayOfObj: fieldValue.arrayUnion({...item})})

Resolve Custom Types at the root in GraphQL

I feel like I'm missing something obvious. I have IDs stored as [String] that I want to be able to resolve to the full objects they represent.
Background
This is what I want to enable. The missing ingredient is the resolvers:
const bookstore = `
type Author {
id: ID!
books: [Book]
}
type Book {
id: ID!
title: String
}
type Query {
getAuthor(id: ID!): Author
}
`;
const my_query = `
query {
getAuthor(id: 1) {
books { # <-- should resolve bookIds to actual books I can query
title
}
}
}
`;
const REAL_AUTHOR_DATA = [
{
id: 1,
books: ['a', 'b'],
},
];
const REAL_BOOK_DATA = [
{
id: 'a',
title: 'First Book',
},
{
id: 'b',
title: 'Second Book',
},
];
Desired result
I want to be able to drop a [Book] in the SCHEMA anywhere a [String] exists in the DATA and have Books load themselves from those Strings. Something like this:
const resolve = {
Book: id => fetchToJson(`/some/external/api/${id}`),
};
What I've Tried
This resolver does nothing; the console.log doesn't even get called:
const resolve = {
Book(...args) {
console.log(args);
}
}
HOWEVER, this does get some results...
const resolve = {
Book: {
id(id) {
console.log(id)
return id;
}
}
}
Where the console.log does emit 'a' and 'b'. But I obviously can't scale that up to X number of fields and that'd be ridiculous.
What my team currently does is tackle it from the parent:
const resolve = {
Author: {
books: ({ books }) => books.map(id => fetchBookById(id)),
}
}
This isn't ideal because maybe I have a type Publisher { books: [Book]} or a type User { favoriteBooks: [Book] } or a type Bookstore { newBooks: [Book] }. In each of these cases, the data under the hood is actually [String] and I do not want to have to repeat this code:
const resolve = {
X: {
books: ({ books }) => books.map(id => fetchBookById(id)),
}
};
The fact that defining the Book.id resolver led to the console.log actually firing makes me think this should be possible, but I'm not finding an answer anywhere online, even though this seems like it'd be a pretty common use case.
What I've Investigated
Schema Directives seem like overkill to get what I want, and I just want to be able to plug [Book] anywhere a [String] actually exists in the data without having to do [Book] @rest('/external/api') in every single place.
Schema Delegation. In my use case, making Books publicly queryable isn't really appropriate and just clutters my Public schema with unused Queries.
Thanks for reading this far. Hopefully there's a simple solution I'm overlooking. If not, then GQL why are you like this...
If it helps, you can think of this way: types describe the kind of data returned in the response, while fields describe the actual value of the data. With this in mind, only a field can have a resolver (i.e. a function to tell it what kind of value to resolve to). A resolver for a type doesn't make sense in GraphQL.
So, you can either:
1. Deal with the repetition. Even if you have ten different types that all have a books field that needs to be resolved the same way, it doesn't have to be a big deal. Obviously in a production app, you wouldn't be storing your data in a variable and your code would be potentially more complex. However, the common logic can easily be extracted into a function that can be reused across multiple resolvers:
const mapIdsToBooks = ({ books }) => books.map(id => fetchBookById(id))
const resolvers = {
Author: {
books: mapIdsToBooks,
},
Library: {
books: mapIdsToBooks,
}
}
2. Fetch all the data at the root level instead. Rather than writing a separate resolver for the books field, you can return the author along with their books inside the getAuthor resolver:
function resolve(root, args) {
const author = REAL_AUTHOR_DATA.find(row => row.id === args.id)
if (!author) {
return null
}
return {
...author,
books: author.books.map(id => fetchBookById(id)),
}
}
When dealing with databases, this is often the better approach anyway because it reduces the number of requests you make to the database. However, if you're wrapping an existing API (which is what it sounds like you're doing), you won't really gain anything by going this route.

Editing Received Object After Angular HTTP Request

In a project I am working on, after obtaining a list of objects from an HTTP "get" request, one of the fields for each object is a string containing a status, "DEAD", "IDLE", etc. Is there any way to edit the structure of the object that comes in the list so it contains a few more fields based on that status value? For example, after the transformation each of the objects in the list would have the boolean fields isDead, isIdle, etc. Is this what the transformResponse() method in Angular does?
You can do something like this.
private getData(): void {
this.http.get('https://reqres.in/api/users?page=2').pipe(map((res: any) => {
return res.data;
})).subscribe((data) => {
this.data = data.map((item) => {
return {
id: item.id,
first_name: item.first_name,
last_name: item.last_name,
avatar: item.avatar,
age: 50
}
});
});
};
Here I am requesting a list of data, and for each of the items in the list I am appending an age attribute.
You can find a working example here.
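Applied to the status example from the question, the same map-inside-subscribe approach can derive the boolean flags. A minimal sketch (the endpoint and the status field here are hypothetical):
private getDevices(): void {
  // '/api/devices' and item.status are assumptions for illustration only
  this.http.get<any[]>('/api/devices').subscribe((items) => {
    this.devices = items.map((item) => ({
      ...item,
      isDead: item.status === 'DEAD',
      isIdle: item.status === 'IDLE',
    }));
  });
}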

Chat App Tut: purpose of populate()?

I'm in the process of learning FeathersJS and so far it seems like everything I wish Meteor was. Keep up the great work!
Right now I'm working through the Chat App tutorial but have run into some confusion. I don't quite understand what's going on in this section of the tutorial, specifically the populate hook in messages.hooks.js:
'use strict';
const { authenticate } = require('feathers-authentication').hooks;
const { populate } = require('feathers-hooks-common');
const processMessage = require('../../hooks/process-message');
module.exports = {
before: {
all: [ authenticate('jwt') ],
find: [],
get: [],
create: [ processMessage() ],
update: [ processMessage() ],
patch: [ processMessage() ],
remove: []
},
after: {
all: [
// What's the purpose of this ?
populate({
schema: {
include: [{
service: 'users',
nameAs: 'user',
parentField: 'userId',
childField: '_id'
}]
}
})
],
find: [],
get: [],
create: [],
update: [],
patch: [],
remove: []
},
error: {
all: [],
find: [],
get: [],
create: [],
update: [],
patch: [],
remove: []
}
};
Here's process-message.js:
'use strict';
// Use this hook to manipulate incoming or outgoing data.
// For more information on hooks see: http://docs.feathersjs.com/api/hooks.html
module.exports = function() {
return function(hook) {
// The authenticated user
const user = hook.params.user;
// The actual message text
const text = hook.data.text
// Messages can't be longer than 400 characters
.substring(0, 400)
// Do some basic HTML escaping
.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
// Override the original data
hook.data = {
text,
// Set the user id
userId: user._id,
// Add the current time via `getTime`
createdAt: new Date().getTime()
};
// Hooks can either return nothing or a promise
// that resolves with the `hook` object for asynchronous operations
return Promise.resolve(hook);
};
};
I understand that before a create, update, or patch is executed on the messages service, the data is sent to processMessage() which sanitizes the data and adds the user ID to it.
Questions:
After processMessage(), is the data immediately written to the database?
After the data is written to the database, the after hooks are executed, correct?
What's the purpose of the populate hook then ?
Thanks :)
To better understand hooks and other cool stuff in Feathers, it is good to do some basic logging. Anyway, here is the flow:
CLIENT -> BEFORE ALL hook -> OTHER BEFORE hooks (create, update, ...) -> Database -> ERROR hook (only if an error occurred in a previous step) -> AFTER ALL hook -> OTHER AFTER hooks (create, update, ...) -> FILTERS -> CLIENT
As for the populate hook, it serves the same purpose as a populate in your database: Feathers does it for you, instead of you writing the populate query yourself.
Based on your example, you would expect your schema to have something like this:
{
...,
userId : [{ type: <theType>, ref: 'users' }]
}
And you want to add another field, named user, and populate it with data from the users service by matching its _id with the userId.
The populate hook is one of the hooks provided by the feathers-hooks-common module. Its function is to provide data after joining the various tables in the database. Since each table is represented by an individual Service you can think of join happening between the service on which the populate hook is being called and another service.
Hence in the following piece of code a schema object is being passed to the populate hook:
populate({
schema: {
include: [{
service: 'users',
nameAs: 'user',
parentField: 'userId',
childField: '_id'
}]
}
})
It is basically telling the hook to include data from the 'users' service.
It is also telling it to name the additional data as 'user'.
It is saying that the parentField for the join (i.e. the field in the service which has the hook, in your case the messages service) is 'userId'.
It is saying that the childField for the join (i.e. the field in the 'users' service) is '_id'.
When the data is received, it will have all the fields from the messages table plus an additional object under the key user containing the key/value pairs from the users table.
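For illustration, a populated message coming back from the service might then look roughly like this (the exact user fields depend on your users service):
{
  text: "Hello there",
  userId: "someUserId",      // set by process-message
  createdAt: 1519859200000,  // set by process-message
  user: {                    // added by the populate hook
    _id: "someUserId",
    email: "user@example.com"
  }
}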
