I am not able to find a Firebase object by specifying only the child key; it works only if I specify the entire path to the object. Why is this?
This works:
ref.child(`users/${user.uid}/watchlist/${key}`)
.once('value')
.then(function(snapshot) {
console.log(snapshot.val());
});
This doesn't work:
ref.child(key)
.once('value')
.then(function(snapshot) {
console.log(snapshot.val());
});
My Firebase connection:
export const ref = firebase.database().ref()
I'm not sure how else you would expect it to work. child() is not a "search" function; it only looks at direct children of the reference, or at child paths that you specify. It does not crawl through the nested Firebase structure "searching" for the specified key. That would be incredibly inefficient. And even if it did do that, what would happen if there were multiple matching keys? Would it return all of them? What if they were at different depths? Etc.
The Firebase Realtime Database is basically a big JSON-like data tree. The only way to get to a piece of data that you want is to know the entire path to that data. Otherwise there wouldn't be any structure at all.
I suppose if you wanted to, you could load the entire Firebase data structure and then write your own function for finding arbitrary keys, no matter where they were in the tree. But that has the same problematic inefficiencies as if it were built into the Firebase library itself - which it is not.
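For illustration only, here is a sketch of what such a client-side search might look like. findKey is a hypothetical helper, not part of the Firebase SDK, and calling it means downloading your entire database, which is exactly the inefficiency described above:

// Walk the plain object returned by snapshot.val() and collect every
// path at which the given key occurs, at any depth
function findKey(tree, key, path = '') {
  if (tree === null || typeof tree !== 'object') return [];
  let matches = [];
  for (const childKey of Object.keys(tree)) {
    const childPath = path ? `${path}/${childKey}` : childKey;
    if (childKey === key) matches.push(childPath);
    matches = matches.concat(findKey(tree[childKey], key, childPath));
  }
  return matches;
}

ref.once('value').then(snapshot => {
  console.log(findKey(snapshot.val(), key));
});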
Perhaps this will illustrate better how child() works. This:
ref.child(`users/${user.uid}/watchlist/${key}`);
Is basically the same as this:
ref.child("users").child(user.uid).child("watchlist").child(key);
Although I suspect that the second method is less efficient, I don't know for sure.
Got a weird bug using FaunaDB with Node.js running on a Netlify Function.
I am building out a quick proof-of-concept and initially everything worked fine. I had a Create query that looked like this:
const faunadb = require('faunadb');
const q = faunadb.query;
const CreateFarm = (data) => (
q.Create(
q.Collection('farms'),
{ data },
)
);
As I said, everything here works as expected. The trouble began when I tried to start normalizing the data FaunaDB sends back. Specifically, I want to merge the Fauna-generated ID into the data object, and send just that back with none of the other metadata.
I am already doing that with other resources, so I wrote a helper query and incorporated it:
const faunadb = require('faunadb');
const q = faunadb.query;
const Normalize = (resource) => (
q.Merge(
q.Select(['data'], resource),
{ id: q.Select(['ref', 'id'], resource) },
)
);
const CreateFarm = (data) => (
Normalize(
q.Create(
q.Collection('farms'),
{ data },
),
)
);
This Normalize function works as expected everywhere else. It builds the correct merged object, ID included, with no weird side effects. However, when used with CreateFarm as above, I end up with two identical farms in the DB!!
I've spent a long time looking at the rest of the app. There is definitely only one POST request coming in, and CreateFarm is definitely only being called once. My best theory was that since Merge copies the first resource passed to it, Create is somehow getting called twice on the DB. But reordering the Merge call does not change anything. I have even tried passing in an empty object first, but I always end up with two identical objects created in the end.
Your helper creates an FQL query with two separate Create expressions. Each is evaluated and creates a new Document. This is not related to the Merge function.
Merge(
Select(['data'], Create(
Collection('farms'),
{ data },
)),
{ id: Select(['ref', 'id'], Create(
Collection('farms'),
{ data },
)) },
)
Use Let to create the document, then Update it with the id. Note that this increases the number of Write Ops required for your application; it will basically double the cost of creating Documents. But for what you are trying to do, this is how to do it.
Let(
  {
    newDoc: Create(Collection("farms"), { data }),
    id: Select(["ref", "id"], Var("newDoc")),
    data: Select(["data"], Var("newDoc"))
  },
  Update(
    Select(["ref"], Var("newDoc")),
    {
      data: Merge(
        Var("data"),
        { id: Var("id") }
      )
    }
  )
)
Aside: why store id in the document data?
It's not clear why you might need to do this. Indexes can be created on the Ref values themselves. If your client receives a Ref, then that can be passed into subsequent queries directly. In my experience, if you need the plain id value in an application, transform the Document as close to that point in the application as possible (for example, using ids as keys for an array of web components).
There's even a slight Compute advantage for using Ref values rather than re-building Ref expressions from a Collection name and ID. The expression Ref(Collection("farms"), "1234") counts as 2 FQL functions toward Compute costs, but reusing the Ref value returned by queries is free.
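As a sketch (the '1234' ID and the farmRef variable are placeholders):

// Re-building the Ref: Ref + Collection count as 2 FQL functions
q.Get(q.Ref(q.Collection('farms'), '1234'));

// Reusing a Ref value returned by an earlier query costs nothing extra
q.Get(farmRef);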
Working with GraphQL, the _id field is abstracted out for you, because working with Document types in GraphQL would be pretty awful. However, the best practice for FQL queries is to use Refs directly as much as possible.
Don't let me talk in absolute terms, though! I generally believe that there's a reason for everything. If you believe you really need to duplicate the ID in the Document's data, then I would be interested in a comment explaining why.
How to get access from one collection to another collection in Firebase in JS v9
Firebase's JS API v9 brought significant changes.
One of the biggest is the fact that a DocumentReference doesn't allow access to subcollections anymore. Or at least, not directly from the DocumentReference itself, the way we used to do it with v8.
In v8, for example, we could do something like this:
//say we have a document reference
const myDocument = db.collection("posts").doc(MY_DOC_ID);
//we can access the subcollection from the document reference and,
//for example, do something with all the documents in the subcollection
myDocument.collection("comments").get().then((querySnapshot) => {
querySnapshot.forEach((doc) => {
// DO SOMETHING
});
});
With v9, we have a different approach. Let's say we get our document:
const myDocument = doc(db, "posts", MY_DOC_ID);
As you can see, the way we write the code is different. In v8 we used to chain methods off objects. With v9, everything switched to a more functional style, where we use free functions such as doc(), collection() and so on.
So, in order to do the same thing we did with the above example and do something with every doc in the subcollection, the code for v9 API should look like this:
const subcollectionSnapshot = await getDocs(collection(db, "posts", MY_DOC_ID, "comments"));
subcollectionSnapshot.forEach((doc) => {
// DO SOMETHING
});
Note that we can pass additional parameters to functions such as collection() and doc(). The first one will always be the reference to the database, the second one will be the root collection and from there onward, every other parameter will be added to the path. In my example, where I wrote
collection(db, "posts", MY_DOC_ID, "comments")
it means
go into the "posts" collection
pick the document with id equal to MY_DOC_ID
go into the "comments" subcollection of that document
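The same path rule applies to doc(). For example, here is a sketch of referencing a single comment directly (COMMENT_ID is a placeholder):

import { doc, getDoc } from "firebase/firestore";

// equivalent to the path "posts/{MY_DOC_ID}/comments/{COMMENT_ID}"
const commentRef = doc(db, "posts", MY_DOC_ID, "comments", COMMENT_ID);
const commentSnapshot = await getDoc(commentRef);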
All of the examples in the Gatsby documentation seem to assume you want to define an exports.onCreateNode first to parse your data, and then define a separate exports.createPages to do your routing.
However, that seems needlessly complex. A much simpler option would seem to be to just use the graphql function provided to createPages:
exports.createPages = async ({ graphql, actions }) => {
const { createPage } = actions;
const { data } = await graphql(query);
// use data to build page
data.someArray.forEach(datum =>
createPage({ path: `/some/path/${datum.foo}`, component: SomeComponent }));
However, when I do that, I get an error:
TypeError: filepath.includes is not a function
I assume this is because my path prop for createPage is a string and it should be "slug". However, all the approaches for generating slugs seem to involve doing that whole exports.onCreateNode thing.
Am I missing a simple solution for generating valid slugs from a path string? Or am I misunderstanding Gatsby, and for some reason I need to use onCreateNode every time I use createPage?
It turns out the error I mentioned:
TypeError: filepath.includes is not a function
Wasn't coming from the path prop at all: it was coming from the (terribly named) component prop ... which does not take a component function/class! Instead it takes a path to a component (why they don't call the prop componentPath is just beyond me!)
But all that aside, once I fixed "component" to (sigh) no longer be a component, I was able to get past that error and create pages ... and it turns out the whole onCreateNode thing is unnecessary.
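For reference, a sketch of the fixed call; the template path ./src/templates/some-component.js is just an example, and query is the same query string as in the snippet above:

const path = require('path');

exports.createPages = async ({ graphql, actions }) => {
  const { createPage } = actions;
  const { data } = await graphql(query);
  data.someArray.forEach(datum =>
    createPage({
      path: `/some/path/${datum.foo}`,
      // a file path to the component, not the component itself
      component: path.resolve('./src/templates/some-component.js'),
    }));
};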
Why Do I Need to Use exports.onCreateNode to Create Pages?
You do not.
Gatsby heavily uses GraphQL behind the scenes. The Gatsby documentation teaches that approach because many of the features in Gatsby are only available via GraphQL.
You can create pages without GraphQL, as you do in your answer with data.someArray.forEach, but that is not the intended way. By skipping createNodeField you will not be able to query for these fields within your page queries. If you don't need these fields via GraphQL, then your solution is perfect.
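For contrast, a minimal sketch of the documented onCreateNode approach, assuming Markdown nodes (the node type check and the slug logic are just examples):

const { createFilePath } = require('gatsby-source-filesystem');

exports.onCreateNode = ({ node, getNode, actions }) => {
  const { createNodeField } = actions;
  if (node.internal.type === 'MarkdownRemark') {
    // stored under fields.slug, so page queries can ask for it
    createNodeField({
      node,
      name: 'slug',
      value: createFilePath({ node, getNode }),
    });
  }
};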
I'm using DataLoader for batching the requests/queries together.
In my loader function I need to know the requested fields, so that I can issue a SELECT field1, field2, ... FROM query instead of a SELECT * FROM query...
What would be the best approach using DataLoader to pass down the resolveInfo needed for it? (I use resolveInfo.fieldNodes to get the requested fields)
At the moment, I'm doing something like this:
await someDataLoader.load({ ids, args, context, info });
and then in the actual loaderFn:
const loadFn = async options => {
const ids = [];
let args;
let context;
let info;
options.forEach(a => {
ids.push(a.ids);
if (!args && !context && !info) {
args = a.args;
context = a.context;
info = a.info;
}
});
return new DataProvider().get({ ...args, ids }, context, info);
};
but as you can see, it's hacky and doesn't really feel good...
Does anyone have an idea how I could achieve this?
I am not sure there is a good answer to this question, simply because Dataloader is not made for this use case, but I have worked extensively with Dataloader, written similar implementations, and explored similar concepts in other programming languages.
Let's understand why Dataloader is not made for this use case and how we could still make it work (roughly as in your example).
Dataloader is not made for fetching a subset of fields
Dataloader is made for simple key-value lookups. That means that, given a key like an ID, it will load the value behind it. For that it assumes that the object behind the ID will always be the same until it is invalidated. This is the single assumption that enables the power of Dataloader. Without it, the three key features of Dataloader won't work anymore:
Batching requests (multiple requests are done together in one query)
Deduplication (requests to the same key twice result in one query)
Caching (consecutive requests of the same key don't result in multiple queries)
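A minimal sketch of these three behaviours with the dataloader package:

const DataLoader = require('dataloader');

const userLoader = new DataLoader(async ids => {
  console.log('batch:', ids); // logs once: batch: [ 1, 2 ]
  return ids.map(id => ({ id }));
});

async function demo() {
  // Batching: all three loads below are collected into one batch call.
  // Deduplication: load(1) twice puts the key 1 into the batch only once.
  await Promise.all([userLoader.load(1), userLoader.load(1), userLoader.load(2)]);
  // Caching: this is served from the cache, no new batch call is made
  await userLoader.load(1);
}

demo();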
This leads us to the following two important rules if we want to maximise the power of Dataloader:
Two different entities cannot share the same key, otherwise we might return the wrong entity. This sounds trivial, but it is not in your example. Let's say we want to load a user with ID 1 and the fields id and name. A little bit later (or at the same time) we want to load the user with ID 1 and the fields id and email. These are technically two different entities and they need to have different keys.
The same entity should have the same key all the time. Again, this sounds trivial, but it really is not in the example. The user with ID 1 and fields id and name should be the same as the user with ID 1 and fields name and id (notice the order).
In short a key needs to have all the information needed to uniquely identify an entity but not more than that.
So how do we pass fields down to Dataloader?
await someDataLoader.load({ ids, args, context, info });
In your question you have provided a few more things to your Dataloader as a key. First, I would not put args and context into the key. Does your entity change when the context changes (e.g. you are querying a different database now)? Probably yes, but do you want to account for that in your Dataloader implementation? I would instead suggest creating new Dataloaders for each request, as described in the docs.
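For example, a sketch with Apollo Server (the ApolloServer setup, typeDefs, resolvers and db are assumed here; createLoaders is the factory defined below):

const { ApolloServer } = require('apollo-server');

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // a fresh set of loaders per request, so nothing is cached across requests
  context: () => ({ ...createLoaders(db) }),
});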
Should the whole request info be in the key? No, but we need the fields that are requested. Apart from that, the implementation you provided is wrong and would break when the loader is called with two different resolve infos. You only set the resolve info from the first call, but it might really be different on each object (think about the first user example above). Ultimately we could arrive at the following implementation of a Dataloader:
const DataLoader = require('dataloader');

// This function creates unique cache keys for different selected
// fields
function cacheKeyFn({ id, fields }) {
  const sortedFields = [...new Set(fields)].sort().join(';');
  return `${id}[${sortedFields}]`;
}

function createLoaders(db) {
  const userLoader = new DataLoader(async keys => {
    // Create a set with all requested fields; always include the id
    // so that we can map the rows back to the keys below
    const fields = keys.reduce((acc, key) => {
      key.fields.forEach(field => acc.add(field));
      return acc;
    }, new Set(['id']));
    // Get all our ids for the DB query
    const ids = keys.map(key => key.id);
    // Please be aware of possible SQL injection, don't copy + paste
    const result = await db.query(`
      SELECT
        ${[...fields].join(', ')}
      FROM
        user
      WHERE
        id IN (${ids.join()})
    `);
    // A batch function must return one result per key, in key order
    return keys.map(key => result.find(row => row.id === key.id));
  }, { cacheKeyFn });
  return { userLoader };
}
// now in a resolver
resolve(parent, args, ctx, info) {
// https://www.npmjs.com/package/graphql-fields
return ctx.userLoader.load({ id: args.id, fields: Object.keys(graphqlFields(info)) });
}
This is a solid implementation, but it has a few weaknesses. First, we are overfetching a lot of fields if we have different field requirements in the same batch request. Second, if we have fetched an entity with the cache key 1[id;name], we could also answer (at least in JavaScript) the keys 1[id] and 1[name] with that object. Here we could build a custom map implementation that we could supply to Dataloader. It would be smart enough to know these things about our cache.
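A rough sketch of what such a map could look like, using DataLoader's cacheMap option and the key format produced by cacheKeyFn above (the batchFn argument stands in for the batch function from createLoaders; note that DataLoader stores Promises in this map):

class SubsetAwareCacheMap {
  constructor() {
    this.map = new Map();
  }
  // keys look like "1[id;name]", as produced by cacheKeyFn above
  parse(key) {
    const [, id, fieldList] = key.match(/^(.*)\[(.*)\]$/);
    return { id, fields: fieldList.split(';') };
  }
  get(key) {
    if (this.map.has(key)) return this.map.get(key);
    // answer a subset-of-fields key from a cached superset entry
    const { id, fields } = this.parse(key);
    for (const [cachedKey, value] of this.map) {
      const cached = this.parse(cachedKey);
      if (cached.id === id && fields.every(f => cached.fields.includes(f))) {
        return value;
      }
    }
    return undefined;
  }
  set(key, value) {
    this.map.set(key, value);
  }
  delete(key) {
    return this.map.delete(key);
  }
  clear() {
    this.map.clear();
  }
}

new DataLoader(batchFn, { cacheKeyFn, cacheMap: new SubsetAwareCacheMap() });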
Conclusion
We see that this is really a complicated matter. I know it is often listed as a benefit of GraphQL that you don't have to fetch all fields from a database for every query, but the truth is that in practice this is seldom worth the hassle. Don't optimise what is not slow. And even if it is slow, is it a bottleneck?
My suggestion is: write trivial Dataloaders that simply fetch all (needed) fields. If you have one client, it is very likely that for most entities the client fetches all fields anyway; otherwise they would not be part of your API, right? Then use something like query introspection to measure slow queries and find out which field exactly is slow. Then you optimise only the slow thing (see for example my answer here that optimises a single use case). And if you are a big e-commerce platform, please don't use Dataloader for this. Build something smarter, and don't use JavaScript.
I am new to React and I don't know what's the best way to do this.
I have a list of cars, and clicking a row should slide to a full-page details view of that car.
My code structure is:
I have an App which renders two components: CarList and CarDetails. CarDetails is hidden initially. The reason I rendered CarDetails in App is that it's a massive fixed template, so I would like to render it once when the app is loaded and only update its data when a row is clicked.
CarList also renders the CarRow component, which is fine.
Now my problem is that I have a getDetails function on the CarRow component which makes a call to get the details based on the car id. How do I get the CarDetails component's data updated? I used
this.setState({itemDetails:data});
but it seems the state of CarRow is not the same reference as the state in CarDetails.
Any help?
This is a fundamental issue that lots of thought and man-hours have gone into trying to solve. It probably can't be answered, except on a surface level, in a StackOverflow post. It's not React-centric, either. This is an issue across most applications, regardless of the framework you're using.
Since you asked in the context of React, you might consider reading into flux, which is the de-facto implementation of this one-way data-flow idea in concert with React. However, that architecture is by no means "the best". There are simply advantages and disadvantages to it like everything else.
Some people don't like the idea of the global "event bus" that flux proposes. If that's the case, you can simply implement your own intermediate data layer API that collects query callbacks and A) invokes the callbacks on any calls to save data and B) refreshes any appropriate queries to the server. For now, though, I'd stick with flux as it will give you an idea of the general principles involved in having the things that most people consider to be "good", like a single source of truth for your data, one way flow, etc.
To give a concrete example of the callback idea (saveToServer and queryServer below are stand-ins for your actual API calls):
// data layer
const listeners = [];
const data = {
save: save,
query: query
};
function save(someData) {
  // save data to the server (saveToServer is a stand-in for your
  // actual API call), and then notify every registered listener
  saveToServer(someData).then(data => {
    listeners.forEach(listener => listener(data));
  });
}

function query(params, callback) {
  // query the server with the params (queryServer is a stand-in),
  // deliver the result, and remember the callback so that future
  // saves can refresh it
  listeners.push(callback);
  queryServer(params).then(callback);
}
// component
componentWillMount() {
data.query(params, data => this.setState({ myData: data }));
},
save() {
// when the save operation is complete, it will "refresh" the query above
data.save(someData);
}
This is a very distilled example and doesn't address optimization, such as the potential for memory leaks when moving to different views and invoking "stale" callbacks; however, it should give you a general idea of another approach.
The two approaches have the same policy (a single source of truth for data and one-way data flow) but different implementations (a global "event bus", which necessitates keeping track of events, or the simple callback method, which can necessitate a form of memory management).