I am using TypeScript and the Firebase Realtime Database (cannot use Firestore), and I have data in the form of an interface MyData as follows:
enum RunStatus {
  RUNNING,
  ENDED
}

interface IResults {
  firstItem: string;
  secondItem: number;
}

interface MyData {
  status: RunStatus;
  results: IResults;
}
Suppose I have 10 clients that might end up writing this data simultaneously to (1) change status from RUNNING to ENDED and (2) set the results field.
What I want is for only the first client to be able to do this, so I need to use a transaction of some sort.
My data is stored at this path: "/some/path/here/mydata".
As near as I can tell from the limited documentation, my writes should look something like this:
import { Database, ref, child, runTransaction } from "firebase/database";

class MyDatabase {
  db: Database;

  constructor(db_: Database) {
    this.db = db_;
  }

  writeMyData(newData: MyData) {
    const path = "/some/path/here/mydata";
    const reference = child(ref(this.db), path);

    runTransaction(reference, (currentData) => {
      if (currentData) {
        if (currentData.status != RunStatus.ENDED && newData.status == RunStatus.ENDED) {
          currentData.status = newData.status;
          currentData.results = newData.results;
          // Or, I could have just set currentData = newData
        }
      }
      return currentData;
    });
  }
}
Is this correct? And what exactly does it do if multiple clients try to run this at the same time? The documentation says something about runTransaction being automatically rerun if currentData is updated by another client while the first client is writing. Can someone please explain to me if this is correct, and what is happening here?
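For reference, a minimal sketch of the documented behavior (my own illustration, using the modular Web SDK and the MyData/RunStatus types above): the updater may be rerun with fresher data if another client writes first, and returning undefined aborts the transaction, so only one client's write commits.

import { Database, ref, runTransaction } from "firebase/database";

async function endRunOnce(db: Database, newData: MyData): Promise<boolean> {
  const result = await runTransaction(ref(db, "/some/path/here/mydata"), (currentData) => {
    if (currentData == null) {
      // The first attempt may see null if nothing is cached locally; return it unchanged
      // and the SDK reruns the updater with the actual server value.
      return currentData;
    }
    if (currentData.status === RunStatus.ENDED) {
      return; // abort: another client already ended the run
    }
    return { ...currentData, status: newData.status, results: newData.results };
  });
  return result.committed; // true only for the client whose write was accepted
}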
Let's say I've got a GraphQL query that looks like this:
query {
Todo {
label
is_completed
id
}
}
But the client that consumes the data from this query needs a data structure that's a bit different, e.g. a TypeScript interface like:
interface Todo {
  title: string // "title" is just a different name for "label"
  data: {
    is_completed: boolean
    id: number
  }
}
It's easy enough to just use an alias to return label as title. But is there any way to make it return both is_completed and id under an alias called data?
There is no way to do that. Either change the schema to reflect the client's needs or transform the response after it is fetched on the client side.
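If you do transform it on the client, a minimal TypeScript sketch (type and field names here are only illustrative) could look like this:

// Shape returned by the query (label can also be aliased to title server-side)
interface TodoQueryResult {
  label: string;
  is_completed: boolean;
  id: number;
}

interface Todo {
  title: string;
  data: { is_completed: boolean; id: number };
}

// Reshape each fetched row into the structure the client expects
function toClientTodo(row: TodoQueryResult): Todo {
  return {
    title: row.label,
    data: { is_completed: row.is_completed, id: row.id },
  };
}

// e.g. const todos = response.data.Todo.map(toClientTodo);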
I'm using AngularFire and the Realtime Database.
I have a database like so:
actions: {
  uid1: {
    meta...
  },
  uid2: {
    meta...
  },
  uid3: {
    meta...
  },
  uid4: {
    meta...
  }
}
I've made an editor showing all of those actions. Now let's say I add an action (uid5): my valueChanges method sends me back all the uids, when I only need the updated value.
I would like something a bit like this:
ngOnInit() {
  this.db.object(`actions`).valueChanges().subscribe(
    (actions) => console.log(actions) // the first time I get uid1{}, uid2{}, ..., uid4{};
                                      // the second time, after adding uid5, I would get uid5{} only
  );
}
So is it possible? Is there some specific event for this, or should I make a feature request?
There is nothing built into Firebase or AngularFire to get only the new items, so you'll have to build something yourself.
The two most common options:
Store a timestamp value in each node, and then query for only items after "now" with something like ref.orderByChild("timestamp").startAt(Date.now()) (see the sketch below).
Start at keys after now, with something like ref.orderByKey().startAt(ref.push().key).
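A rough sketch of the first option with AngularFire's list API (this assumes every action node stores a numeric timestamp field when it is written; the import path depends on your AngularFire version):

import { Component, OnInit } from '@angular/core';
import { AngularFireDatabase } from '@angular/fire/database';

@Component({ selector: 'app-actions', template: '' })
export class ActionsComponent implements OnInit {
  constructor(private db: AngularFireDatabase) {}

  ngOnInit() {
    const now = Date.now();
    // Only children whose timestamp is >= now come through as child_added events
    this.db.list('actions', ref => ref.orderByChild('timestamp').startAt(now))
      .stateChanges(['child_added'])
      .subscribe(change => console.log(change.key, change.payload.val()));
  }
}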
I want to send some values for a field to Cloud Firestore, but I don't want them to be persisted (saved) in Cloud Firestore.
Code:
const message = {
  persistentData: {
    id: 'dSXYdieiwoDUEUWOssd',
    text: 'Hi dear how are you',
    date: new Date(),
  },
  nonPersistentData: {
    securityCode: 393929949
  }
};
db.collection('messages').doc(message.persistentData.id).set(message).catch(e => {});
In the above code I want to persist (save) persistentData, but I don't want to save nonPersistentData online or offline, because I only need it to check the real data in a Firestore rule. So I don't want it to be accessible in the cache (offline) or on the server (online)...
This is simply not possible with Firestore. There is a similar question here. You need to separate the data into public (persistent) and private (non-persistent) data. One possible solution would be:
From the client, push the private data which contains the securityCode to a new collection called securityCodes and store the id of the new entry.
Because you don't want this info to be available to anyone, you can add a security rule
match /securityCodes/{securityCode} {
  // No one can read from this collection; documents can only be created
  allow create: if true;
}
In your public data, add the id of the previously added document
data = {
  id: 'dSXYdieiwoDUEUWOssd',
  text: 'Hi dear how are you',
  date: new Date(),
  securityId: <id of the secretCode entry>
}
In your security rules, get the secret code using the securityId you are sending with the public data. Example:
match /collectionId/documentId {
  // assumes the securityCodes document stores the secret under a "code" field
  allow create: if get(/databases/$(database)/documents/securityCodes/$(request.resource.data.securityId)).data.code == 'someknowncode';
}
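A rough sketch of that flow on the client side (the securityCodes collection, securityId field, and code field are the names assumed in the rules above; adjust them to your schema):

import firebase from 'firebase/app';
import 'firebase/firestore';

async function sendMessage(db: firebase.firestore.Firestore, message: {
  persistentData: { id: string; text: string; date: Date };
  nonPersistentData: { securityCode: number };
}) {
  // 1. Write the secret to its own collection; the rules only allow "create" on it.
  const secretRef = await db.collection('securityCodes').add({
    code: message.nonPersistentData.securityCode,
  });

  // 2. Write the public document, referencing the secret by id so the rules can get() it.
  await db.collection('messages').doc(message.persistentData.id).set({
    ...message.persistentData,
    securityId: secretRef.id,
  });
}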
I have an Entity called Trip. The structure is:
What I want is whenever a new trip is created, the room column should be populated with ${tripId}_${someRandomStringHere}. So for example, I just created a new trip using this body:
The response should be:
The newly created trip has the id of 15. So, the response has the room valued at 15_4gupvdo0ea408c25ia0qsbh because again: ${tripId}_${someRandomStringHere}.
This is working as expected whenever I POST the request and create the trip. BUT whenever I query all the trips created, the room property of each trip object shows null!
Look at the /api/trips:
The room property is NULL. What the heck, I don't understand what is happening.
My Trip Entity code is:
import { PrimaryGeneratedColumn, Column, CreateDateColumn,
  UpdateDateColumn, Entity, Unique, ManyToOne, AfterInsert, JoinColumn, getConnection } from 'typeorm'
import { DriverEntity } from 'src/driver/driver.entity';

@Entity('trips')
export class TripEntity {
  @PrimaryGeneratedColumn()
  id: number

  @Column()
  destination: string

  @Column('decimal')
  destination_lat: number

  @Column('decimal')
  destination_long: number

  @Column()
  maxPassenger: number

  @Column()
  totalPassenger: number

  @Column({ nullable: true })
  room: string

  @CreateDateColumn()
  created_at: Date

  @UpdateDateColumn()
  updated_at: Date

  // --------->HERE: The after insert
  @AfterInsert()
  async createSocketRoom(): Promise<void> {
    const randomString = Math.random().toString(36).substring(2, 15) + Math.random().toString(36).substring(2, 15)
    this.room = `${this.id}_${randomString}`
  }

  // Trip belongs to driver
  // Adds driver_id to trips table
  @ManyToOne(type => DriverEntity, driver => driver.trips)
  @JoinColumn({ name: 'driver_id' })
  driver: DriverEntity
}
My Trip Service Code is:
async create(data: CreateTripDTO) {
  const { driver_id } = data
  const driver = await this.driverRepository.findOne({ where: { id: driver_id } })
  const trip = await this.tripRepository.create(data)
  trip.driver = driver
  await this.tripRepository.save(trip)
  return trip
}
I don't think I need to include the Trip Controller code, but anyway...
I don't know why this is happening, because I have my User Entity with @BeforeUpdate and it works fine...
After reading a lot of similar GitHub issues and watching YouTube tutorials [Hi Ben Awad! :D], I found a somewhat-fix by using Subscribers.
Actually, I don't know what the difference between the Listener and the Subscriber is. Maybe I am using them wrong. Can someone enlighten me please? For example, the difference between AfterInsert as an Entity Listener vs AfterInsert in an Entity Subscriber: when is the best case to use each? Something like that. Anyway, back to the "fix..."
I created a Trip Subscriber:
import { EventSubscriber, EntitySubscriberInterface, InsertEvent } from "typeorm";
import { TripEntity } from "src/trip/trip.entity";

@EventSubscriber()
export class TripSubscriber implements EntitySubscriberInterface<TripEntity> {
  // Denotes that this subscriber only listens to the Trip Entity
  listenTo() {
    return TripEntity
  }

  // Called after entity insertion
  async afterInsert(event: InsertEvent<any>) {
    console.log(`AFTER ENTITY INSERTED: `, event.entity);
    const randomString = Math.random().toString(36).substring(2, 15) + Math.random().toString(36).substring(2, 15)
    // query the trip with the id of event.entity
    const trip = await event.manager.getRepository(TripEntity).findOne(event.entity.id)
    // populate the room with the desired format
    trip.room = `${trip.id}_${randomString}`
    // save it!
    await event.manager.getRepository(TripEntity).save(trip)
  }
}
At first it was not working, but after digging for hours again I found I needed to add a subscribers property in ormconfig.json with the path to my subscribers for it to work, e.g.:
"subscribers": [
  "src/subscriber/*.ts"
]
Again, the Trip Subscriber code seems like spaghetti to me, because I already have the event.entity object but I do not know how to update it without querying it again and saving it through event.manager.getRepository(). Can someone please fix this code for me, or show the proper way of doing it?
NOW, it is working!
the request body:
the /api/trips res:
My questions are:
Why is it not working whenever I use that listener method (@AfterInsert) instead of the subscriber? Is it not the proper way to do it? Then why is it in the docs? Or is it for another use case?
Do I really have to use a subscriber to achieve this? That is so many steps.
I came from Rails, so having to create files/subscribers just to do this is somewhat tiring. ActiveRecord's after_save callback makes it very easy.
PS: I'm new to NestJS and TypeORM.
The @AfterInsert method will just modify your JS object after the insert into the DB is done. That is the reason why your code is not working. You have to use the @BeforeInsert decorator. @BeforeInsert will modify your JS entity/object before it is inserted/saved into the DB.
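For illustration, a minimal sketch of that suggestion, stripped down to the relevant columns. Note that the auto-generated id is not assigned yet when @BeforeInsert runs, so only the random part of room can be prepared there; prefixing it with the id would still require a write after the insert (hook, subscriber, or trigger).

import { Entity, PrimaryGeneratedColumn, Column, BeforeInsert } from 'typeorm';

@Entity('trips')
export class TripEntity {
  @PrimaryGeneratedColumn()
  id: number;

  @Column({ nullable: true })
  room: string;

  // Runs before the INSERT, so the value is included in the same write.
  // The generated id does not exist yet at this point.
  @BeforeInsert()
  generateRoom() {
    const randomString =
      Math.random().toString(36).substring(2, 15) +
      Math.random().toString(36).substring(2, 15);
    this.room = randomString;
  }
}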
What appears to be happening with your AfterInsert is that you are creating the random room string just fine, but you are not saving the value to the database; you are only using the returned id so that you can create the string. What you could do in your AfterInsert is run the save() function from the EntityManager or Repository once more and commit the value to the database, similar to what is happening in your Subscriber. I haven't dived too deep into the Subscriber/Listener vs Before-/AfterInsert decorators, so I can't give a deeper answer to your questions.
If you'd rather not make two commits to the database, you can always query for the most recent id and increment it by 1 (thus matching what the new object's id should be) with something like:
const maxTrip = await this.tripRepository.findOne({ select: ['id'], order: { id: 'DESC' } });
const trip = await this.tripRepository.create(data);
const randomString = Math.random().toString(36).substring(2, 15) + Math.random().toString(36).substring(2, 15);
trip.room = `${maxTrip.id + 1}_${randomString}`;
trip.driver = driver;
await this.tripRepository.save(trip);
It's a little clunky to look at, but it doesn't require two writes to the database (though you'll definitely need to ensure that, after creation, the id embedded in room matches the trip's actual id).
Your last option would be to create a Trigger in your database that does the same thing as your JavaScript code.
You can just call this.save() at the end of the createSocketRoom() function.
https://i.stack.imgur.com/oUY8n.png (my query after using that!)
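If you go that route, a rough sketch of what this answer seems to mean (it assumes TripEntity extends TypeORM's BaseEntity, i.e. the active-record pattern, and it still results in a second write after the insert):

import { Entity, BaseEntity, PrimaryGeneratedColumn, Column, AfterInsert } from 'typeorm';

@Entity('trips')
export class TripEntity extends BaseEntity {
  @PrimaryGeneratedColumn()
  id: number;

  @Column({ nullable: true })
  room: string;

  @AfterInsert()
  async createSocketRoom(): Promise<void> {
    const randomString =
      Math.random().toString(36).substring(2, 15) +
      Math.random().toString(36).substring(2, 15);
    this.room = `${this.id}_${randomString}`;
    await this.save(); // second write, now that this.id exists
  }
}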
I am storing location data via GeoFire in my database, here is the structure:
geofire
  -Ke1uhoT3gpHR_VsehIv
  -Kdrel2Z_xWI280XNfGg
    g: "dr5regw90s"
    l
      0: 40.7127837
      1: -74.00594130000002
It is advised to store a location's information and its GeoFire data in separate nodes, however I see an advantage to storing some extra data under these geofire nodes, such as the name of a business. This way I'd only have to make one call to my Firebase to fetch nearby locations as well as their names.
Is this achievable via the key_entered method? Has anyone created a similar solution? Is this truly that bad of an idea even if the location info is consistently updated?
Any input is appreciated!
Firstly, the structure you are using for the app is wrong.
Let us take the example of a users app.
When you use Geofire, you have two lists of data:
a list of user details for each user
a list of latitude and longitude coordinates for each user
You are trying to store both in the same structure, like this:
"users" : {
<userId> : {
"userData" : "userData",
<geofireData>
...
}
}
Trying to store user data and GeoFire data in one node is a bad idea, since you're mixing mostly static data (the properties of your user) with highly volatile data (the geo-location information). Separating the two out leads to better performance, which is why GeoFire enforces it.
This is why when you try to add geolocation data to a user node it overwrites previous user details for that node.
Your database structure should be like this.
"Users" : {
<UserId> : {
"UserData" : "UserData",
...
}
}
"Users_location" : {
<UserId> : {
<geofireData> ...
}
}
Hence for the same user id you create two structures: one for the user details and another for the geolocation details of that user.
How to push the user and set the GeoFire data:
String userId = ref.child("users").push().getKey();
ref.child("users").child(userId).setValue(user);
geoFire = new GeoFire(ref.child("user_location"));
geoFire.setLocation(userId, new GeoLocation(lattitude, longitude));
The userId you use for the geolocation is the same as the one you got from push().
Hence for each userId you have user details in one structure and location details in another structure.
To get the data, first you need to do a GeoQuery on the users_location node and then get the data in the onKeyEntered method. The parameter key is the userId from my example.
geoFire = new GeoFire(FirebaseDatabase.getInstance().getReference().child("users_location"));
geoQuery = geoFire.queryAtLocation(geoLocation, radius);
geoQuery.addGeoQueryEventListener(new GeoQueryEventListener() {
    @Override
    public void onKeyEntered(String key, GeoLocation location) {
        // retrieve data
        // use this key (which is the userId) to fetch the user details from the user details structure
    }
});
Happy coding :)
It might be late, but just in case someone else is facing the same issue:
It's very possible to integrate and manipulate GeoFire data with existing data.
The main challenge faced by many developers is that they can't update the GeoFire geolocation with geoFire.setLocation("id", location) without deleting the existing data, as reported in geofire-java/issues/134.
This is my solution for that.
1. Setting mixed data with GeoFire on Firebase
Let's consider below Structure:
|- userExtra :
   |- userId1 :
      |- userId : "userId1",
      |- userName : "Name1",
      |- <geofireData> (l and g)
      |- ...
   |- userId2 :
      |- userId : "userId2",
      |- userName : "Name2",
      |- <geofireData> (l and g)
      |- ...
You can generate the geoHash with GeoHash geoHash = new GeoHash(location) and set the value on the child directly in Firebase.
/* Java code */
void setGeoFireMixData(UserExtra user, Location location) {
    DatabaseReference ref = FirebaseDatabase.getInstance().getReference();
    ref = ref.child("users").child(user.userId);

    // Setting user data if needed
    ref.child("userId").setValue(user.userId);
    ref.child("Name").setValue(user.name);
    // .... other references

    // Setting GeoFire data
    GeoHash geoHash = new GeoHash(location);
    ref.child("l").setValue(Arrays.asList(location.latitude, location.longitude));
    ref.child("g").setValue(geoHash.getGeoHashString());
}
2. Fetching mixed data with GeoFire
About the question:
Is this achievable via the key_entered method? Has anyone created a similar solution? Is this truly that bad of an idea even if the location info is consistently updated?
I'd say, you can't retrieve both extra data and GeoFire using only the GeoQueryEventListener with onKeyEntered. You will need to use GeoQueryDataEventListener. For that, you can define the PoJo of your data considering the structure defined above.
...
GeoQueryDataEventListener geoQueryDataEventListener = new GeoQueryDataEventListener() {
    @Override
    public void onDataEntered(DataSnapshot dataSnapshot, GeoLocation location) {
        UserExtra user = dataSnapshot.getValue(UserExtra.class);
        actionOnUser(user);
    }
    ...
};
...
geoQuery = geoFire.queryAtLocation(new GeoLocation(latitude, longitude), 0.5);
geoQuery.addGeoQueryDataEventListener(geoQueryDataEventListener);
...
The class UserExtra can be defined as below:
public class UserExtra implements Serializable {
    String userId;
    String name;
    ...
    List<Double> l; // GeoLocation (latitude, longitude)
    String g;       // GeoHash
}
That will give you all the data in your node including the geoFire data.
Here is the solution I am using now for JavaScript:
Save any extra info in the key itself, separated by a symbol, i.e. an underscore.
Make sure the auth id is part of the key; it is used in the security rules to ensure that only that user can write to their node:
"rules": {
"geofire": {
".read":"true",
"$extrainformation_authid":{
".write":"$extrainformation_authid.contains(auth.uid)"
}
}
}
Client-side, split the key on the underscore to recover the extra information.
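For illustration, a small sketch of that key convention in TypeScript (the function names are mine, and it assumes the auth uid is appended last):

// Build the key before writing the location under it with GeoFire
function buildGeoKey(extraInfo: string, uid: string): string {
  return `${extraInfo}_${uid}`;
}

// Client-side, recover the pieces from a key returned by a geo query
function parseGeoKey(key: string): { extraInfo: string; uid: string } {
  const parts = key.split('_');
  const uid = parts.pop() as string; // the uid was appended last
  return { extraInfo: parts.join('_'), uid };
}

// Example: buildGeoKey('myBusinessName', currentUser.uid) => "myBusinessName_<uid>"
// The rule ".write": "$extrainformation_authid.contains(auth.uid)" then passes for that user.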