I have a data structure like the following at the URL
www.example.firebase.com/
{
  "companyList" : {
    "compkey1" : {
      "url1" : "somelink1",
      "url2" : "somelink2"
    },
    "compkey2" : {
      "url1" : "somelink1",
      "url2" : "somelink2"
    }
  }
}
What I want to achieve is for Firebase to first return only the list of companies, i.e.
compkey1
compkey2
and not any child data.
Then, if the user wants to see a specific company, I want them to go to that URL, like so:
www.example.firebase.com/companyList/compkey2
I'm new to Firebase, so please explain accordingly.
The Firebase JavaScript client always retrieves complete nodes. It has no option to retrieve only the keys.
If you want to retrieve only the keys/names of the companies, you'll have to store them in a separate node.
{
  "companyList" : {
    "compkey1" : {
      "url1" : "somelink1",
      "url2" : "somelink2"
    },
    "compkey2" : {
      "url1" : "somelink1",
      "url2" : "somelink2"
    }
  },
  "companyKeys" : {
    "compkey1" : true,
    "compkey2" : true
  }
}
A common recommendation for Firebase (and many other NoSQL databases) is to model your data in the way your application needs to read it. In the example above, you need to read a list of company keys, so that is what you should model.
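With that structure in place, reading just the keys is a cheap read of the companyKeys node. A minimal sketch with the JavaScript web SDK (the path matches the structure above):
// Read only the lightweight companyKeys node; the heavy companyList is never downloaded
firebase.database().ref('companyKeys').once('value').then(function(snapshot) {
  var keys = Object.keys(snapshot.val() || {});
  console.log(keys); // ["compkey1", "compkey2"]
});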
Note: the Firebase REST API does have a shallow=true parameter that will return only the keys. But I recommend solving the problem by modeling the data differently instead.
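For completeness, this is what such a shallow REST call looks like (a sketch; the database URL is a placeholder). The response maps each child key to true:
// GET .../companyList.json?shallow=true returns { "compkey1": true, "compkey2": true }
fetch('https://example.firebaseio.com/companyList.json?shallow=true')
  .then(function(res) { return res.json(); })
  .then(function(shallow) {
    console.log(Object.keys(shallow)); // ["compkey1", "compkey2"]
  });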
Firebase has a shallow parameter which can retrieve only the keys. I verified it's way faster (by a factor of 100) than retrieving the whole nodes.
Here it is in Google Apps Script (sorry):
class FirebaseNamespace {
  // Lazily initialize and cache the database connection
  get database() {
    if (!this._database) {
      var firebaseUrl = "https://mydatabase.firebaseio.com/";
      var secret = "mysecret";
      this._database = FirebaseApp.getDatabaseByUrl(firebaseUrl, secret);
    }
    return this._database;
  }
  get(path, parameters) {
    return this.database.getData(path, parameters);
  }
  // Retrieve only the keys at a path via the shallow parameter
  keys(path) {
    return Object.keys(this.get(path, {shallow: true}));
  }
  save(path, value) {
    this.database.setData(path, value);
    return value;
  }
}
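For example, a hypothetical usage of the class above:
var fb = new FirebaseNamespace();
var companyKeys = fb.keys('companyList'); // only the keys, no child data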
So I have this app built in Vue, using Vuex. I connect to a Node/Express backend with Socket.IO so the server can push data to the clients instantly when needed.
The data pushed to the clients comes as objects, which are stored in an array in Vuex. Each object pushed into the array has a unique string attached to it.
This string is used to compare against the objects already in the Vuex array: duplicates are not stored, new objects are.
I then use ...mapGetters to get the array from Vuex and loop through it. For each object a component is rendered.
HOWEVER - sometimes the same object is rendered twice, even though the array in Vuex clearly only contains one copy.
Here is the code in the Vuex store:
export default new Vuex.Store({
  state: {
    insiderTrades: [],
  },
  mutations: {
    ADD_INSIDER_TRADE(state, insiderObject) {
      if (state.insiderTrades.length === 0) {
        // push object into array
        state.insiderTrades.unshift(insiderObject);
      } else {
        state.insiderTrades.forEach((trade) => {
          // check if the insider trade is in the store
          if (trade.isin === insiderObject.isin) {
            // return if it already exists
            return;
          } else {
            // push object into array
            state.insiderTrades.unshift(insiderObject);
          }
        });
      }
    },
  },
  getters: {
    insiderTrades(state) {
      return state.insiderTrades;
    },
  },
});
Here is some of the code in App.vue:
mounted() {
  // establish connection to server
  this.$socket.on('connect', () => {
    this.connectedState = 'ansluten';
    this.connectedStateColor = 'green';
    console.log('Connected to server');
  });
  // if disconnected, swap to "disconnected" state
  this.$socket.on('disconnect', () => {
    this.connectedState = 'ej ansluten';
    this.connectedStateColor = 'red';
    console.log('Disconnected from server');
  });
  // receive an insider trade and add it to the store
  this.$socket.on('insiderTrade', (insiderObject) => {
    this.$store.commit('ADD_INSIDER_TRADE', insiderObject);
  });
},
Your forEach iterates the existing items and unshifts the new item once for every existing item that doesn't match, so one incoming object can be added several times. Use Array.find instead:
ADD_INSIDER_TRADE(state, insiderObject) {
  // add the trade only if no stored trade has the same isin
  const exists = state.insiderTrades.find(trade => trade.isin === insiderObject.isin);
  if (!exists) state.insiderTrades.unshift(insiderObject);
},
You also don't need the initial length check; find on an empty array simply returns undefined.
I have a page that consists of two components, and each of them has its own request for data. For example:
<MovieInfo movieId={queryParamsId}/>
const GET_MOVIE_INFO = gql`
  query($id: String!) {
    movie(id: $id) {
      name
      description
    }
  }
`;
The next component:
<MovieActors movieId={queryParamsId}/>
const GET_MOVIE_ACTORS = gql`
  query($id: String!) {
    movie(id: $id) {
      actors
    }
  }
`;
For each of these queries I use the Apollo hook:
const { data, loading, error } = useQuery(GET_DATA, { variables: { id: queryParamsId } });
Everything is fine, but I got a warning message:
Cache data may be lost when replacing the movie field of a Query object.
To address this problem (which is not a bug in Apollo Client), either ensure all objects of type Movie have IDs, or define a custom merge function for the Query.movie field, so InMemoryCache can safely merge these objects: { ... }
It works OK in Google Chrome, but this error affects Safari: everything crashes. I'm 100% sure it's because of this warning message. On the first request I set the Movie data in the cache; on the second request to the same query I just replace the old data with the new, so the previously cached data is undefined. How can I resolve this problem?
Here is the same solution mentioned by Thomas, but a bit shorter:
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
YOUR_FIELD: {
// shorthand
merge: true,
},
},
},
},
});
This is the same as the following:
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
YOUR_FIELD: {
merge(existing, incoming, { mergeObjects }) {
return mergeObjects(existing, incoming);
},
},
},
},
},
});
Solved!
cache: new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        YOUR_FIELD: {
          merge(existing = [], incoming: any) {
            return { ...existing, ...incoming };
            // this part depends on what you actually need to do; in my
            // case I had to save my incoming data as a single object in the cache
          }
        }
      }
    }
  }
})
});
The other answers still work, but as of Apollo Client >= 3.3 there's an easier option that doesn't require listing specific fields or writing a custom merge function. Instead, you only have to specify the type and it will merge all fields for that type:
const cache = new InMemoryCache({
typePolicies: {
YOUR_TYPE_NAME: {
merge: true,
}
}
});
From your example query, I'd guess that an id field should be available though? Try requesting the id in your query; that solves the problem in a much cleaner way.
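For example, based on the first query from the question:
const GET_MOVIE_INFO = gql`
  query($id: String!) {
    movie(id: $id) {
      id # with an id, InMemoryCache can normalize Movie objects and merge them safely
      name
      description
    }
  }
`;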
Had the same issue, caused by data values that were inconsistent with our schema: a value type within an entity was missing its id value, the result of an incomplete data migration.
Temporary solution:
const typePolicies = {
PROBLEM_TYPE: {
keyFields: false as false,
},
PARENT_TYPE: {
fields: {
PROBLEM_FIELD: {
merge: true
}
}
}
}
I'm defining a Firebase rule to read data by userId. userId is a flag on each piece of data created by a user, which is, of course, the user's uid.
Here's the rule below:
{
"rules": {
"items": {
"$itemId": {
".read": "auth !== null && root.child('items/$itemId/userId').val() === auth.uid"
}
}
}
}
And I'm accessing data on the client side like so:
firebase.database().ref(`/items`)
.once('value')
.then(snapshot => {
const places = []
const data = snapshot.val();
for (let key in data) {
places.push({
...data[key],
key: key
});
}
})
I want to give data access only to the owner, based on the userId flag on each item.
According to the Firebase documentation, rules are not filters. However, last year (2018) query-based rules were introduced; I think the blog post explains this better than the docs.
So to keep your current data structure, your rule should change to:
{
"rules": {
"items": {
".read": "query.orderByChild == 'userId' && query.equalTo == auth.uid"
}
}
}
Then you have to change your query as well:
firebase.database().ref(`/items`).orderByChild('userId').equalTo(userId)...
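Spelled out in full, the read could look like this (a sketch that reuses the snapshot handling from the question; equalTo must match the uid the rule expects):
firebase.database().ref('/items')
  .orderByChild('userId')
  .equalTo(firebase.auth().currentUser.uid)
  .once('value')
  .then(snapshot => {
    const places = [];
    snapshot.forEach(child => {
      places.push({ ...child.val(), key: child.key });
    });
  });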
One last thing: it seems your database structure could be pointing to other needs.
If you want to make your data accessible only to the user who created it, and also have an admin that can see everything, then a better solution is to denormalize your data. Your data structure would be:
user_items: {
  uid1: {
    key1: { /* full object here */ }
  }
},
items: {
  key1: { /* partial item here, just name and photo; think of a list */ }
},
admins: {
  uid1: true
}
Here the admin problem is solved with an admins node; this could be used in conjunction with custom claims and Firebase Functions.
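For example, the read rule could then allow either the owner's query or an admin (a sketch, assuming the admins node above):
".read": "root.child('admins').child(auth.uid).val() === true || (query.orderByChild == 'userId' && query.equalTo == auth.uid)"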
And since there is a places variable in your code, maybe you also need something like GeoFire for locations.
When creating a new node, I want to create and push that same data into a different node.
The doors/111111111111/ins node is the node I am pushing the new data into:
root: {
doors: {
111111111111: {
MACaddress: "111111111111",
inRoom: "-LBMH_8KHf_N9CvLqhzU",
ins: {
// I am creating several "key: pair"s here, something like:
1525104151100: true,
1525104151183: true,
}
}
},
rooms: {
-LBMH_8KHf_N9CvLqhzU: {
ins: {
// I want the function to clone the same data here:
1525104151100: true,
1525104151183: true,
}
}
}
}
This is my function, which is throwing this error:
TypeError: change.before.ref.parent.child(...).val is not a function
Code:
exports.updateRoom = functions.database.ref('/doors/{MACaddress}/ins').onWrite((change, context) => {
const beforeData = change.before.val(); // data before the write
console.log(beforeData); // all good so far
const afterData = change.after.val(); // data after the write
console.log(afterData); // all good so far
const roomPushKey = change.before.ref.parent.child('/inRoom').val(); // ERROR
console.log(roomPushKey);
return change.after.ref.parent.parent.parent.child('/rooms').child(roomPushKey).child('/ins').set(afterData);
});
What is wrong with my path? How can I get it to update the other node?
child() is a method on Reference that returns another Reference object. Reference doesn't have a val() method because it doesn't contain any data. It's just a reference.
To get data outside the location of the database trigger, you need to query the Realtime Database for it. Use the once() method for that. This is extremely common, and you should be able to use samples and documentation to figure out what you need to do.
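A minimal sketch of what that could look like here (the inRoom flag is read with once() before writing the copy):
exports.updateRoom = functions.database.ref('/doors/{MACaddress}/ins').onWrite((change, context) => {
  const afterData = change.after.val(); // data after the write
  // inRoom lives outside the trigger path, so query it instead of calling val() on a Reference
  return change.after.ref.parent.child('inRoom').once('value')
    .then(snapshot => {
      const roomPushKey = snapshot.val();
      // mirror the ins data into the matching room
      return change.after.ref.root.child(`rooms/${roomPushKey}/ins`).set(afterData);
    });
});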
I'm using Angular Fullstack for a web app.
I'm posting my data with $http.post() as the object:
{ title: "Some title", tags: ["tag1", "tag2", "tag3"] }
When I edit my object and try to $http.put() for example:
{ title: "Some title", tags: ["tag1"] }
In the console I get HTTP PUT 200, but when I refresh the page I still receive the object with all 3 tags.
This is how I save it in MongoDB:
exports.update = function(req, res) {
if (req.body._id) {
delete req.body._id;
}
Question.findByIdAsync(req.params.id)
.then(handleEntityNotFound(res))
.then(saveUpdates(req.body))
.then(responseWithResult(res))
.catch(handleError(res));
};
function saveUpdates(updates) {
return function(entity) {
var data = _.merge(entity.toJSON(), updates);
var updated = _.extend(entity, data);
return updated.saveAsync()
.spread(function(updated) {
return updated;
});
};
}
Can someone explain how to save the object with the removed items?
What am I doing wrong?
It is pretty bad practice to use things like _.merge or _.extend in client code (meaning your Node.js client to the database, not the browser) after retrieving from the database. Notably, _.merge is the problem here, as it is not going to "take away" things, but rather "augment" what is already there with the information you have provided. That's not what you want here, but there is also a better way.
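A quick illustration of why (assuming Lodash):
// _.merge combines arrays index by index; elements beyond the incoming
// array's length are kept, so nothing is ever removed
_.merge({ tags: ["tag1", "tag2", "tag3"] }, { tags: ["tag1"] });
// => { tags: ["tag1", "tag2", "tag3"] }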
You should simply be using "atomic operators" like $set to do this instead:
Question.findByIdAndUpdateAsync(
req.params.id,
{ "$set": { "tags": req.body.tags } },
{ "new": true }
)
.then(function(result) {
// deal with returned result
});
You also really should be targeting your endpoints rather than having a "generic" object write. So the above would be specifically targeted at "PUT" for the related "tags" only, and would not touch other fields in the object.
If you really must throw a whole object at it and expect an update from all the content, then use a helper to fix the update statement correctly:
function dotNotate(obj, target, prefix) {
  target = target || {};
  prefix = prefix || "";
  Object.keys(obj).forEach(function(key) {
    if (typeof(obj[key]) === "object") {
      // recurse into nested objects, building up the dotted key prefix
      dotNotate(obj[key], target, prefix + key + ".");
    } else {
      target[prefix + key] = obj[key];
    }
  });
  return target;
}
var update = { "$set": dotNotate(req.body) };
Question.findByIdAndUpdateAsync(
req.params.id,
update,
{ "new": true }
)
.then(function(result) {
// deal with returned result
});
Which will correctly structure the update no matter what object you throw at it.
Though in this case, just passing the body directly is probably good enough:
Question.findByIdAndUpdateAsync(
req.params.id,
{ "$set": req.body },
{ "new": true }
)
.then(function(result) {
// deal with returned result
});
There are other approaches with atomic operators that you could also fit into your logic for handling. But it is best to apply them per element: at least at the level of root document properties, with things like arrays treated separately as children.
All the atomic operations interact with the document "in the database" and "as it is at modification". Pulling data from the database, modifying it, then saving it back offers no such guarantees that the data has not already been changed, and you may just be overwriting other changes that were already committed.
In truth, your "browser client" should have been aware that the "tags" array had the other two entries, and then your "modify request" should simply $pull the entries to be removed from the array, like so:
Question.findByIdAndUpdateAsync(
req.params.id,
{ "$pull": { "tags": { "$in": ["tag2", "tag3"] } } },
{ "new": true }
)
.then(function(result) {
// deal with returned result
});
And then, "regardless" of the current state of the document on the server when modified, those changes would be the only ones made. So if something else had modified it and added "tag4", and the client had not yet received notification of that change before sending its own modification, then the return response would include that as well and everything would stay in sync.
Learn the update modifiers of MongoDB, as they will serve you well.