I'm using the Firebase Realtime Database to store measured weather data (temperature, air pressure, etc.).
Let's say every 15 minutes a new value gets added to my db. I want to use Firebase Functions to extract certain values automatically (maxima, minima, 24h high/low, etc.) because I want to display these values on my website. It seemed like a good idea because this way all the work would be done on the back end, and the JavaScript on my website would just read the values from my db instead of querying endlessly.
Now I'm no expert on Firebase Functions and ran into some trouble trying to get Firebase to read and compare all these values. My db-tree looks something like this:
The idea is to use .onWrite to listen for new entries in, say, 'weather/temps' and compare each new entry with 'history/extrema/maximum/temp'. I don't really know how to read the current maximum value inside the function that would update its value. So far, my code looks like this:
How can I read data in my function from any point in my db and use it for comparison etc.?
You need to asynchronously read "history/extrema/maximum/temp" and then compare and set. But doing that read on every invocation is a costly operation and takes time!
In Cloud Functions you can store global variables; they survive for as long as the function instance stays warm. If your function is not called for some time, the system destroys the instance and you lose the variable.
So a good approach is: if the global variable, say maxTemp, is null, read "history/extrema/maximum/temp" and store it in the global variable, then compare it with the new value; if the new value is higher, update both the maxTemp variable and "history/extrema/maximum/temp".
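A minimal sketch of that pattern, using onCreate rather than onWrite since entries are only ever added, and assuming the paths from the question plus a temp field on each entry (the function name and field name are placeholders):

```javascript
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Cached across invocations for as long as this function instance stays warm.
let maxTemp = null;

exports.updateMaxTemp = functions.database
  .ref('/weather/temps/{entryId}')
  .onCreate(async (snapshot) => {
    const newTemp = snapshot.val().temp; // assumed field name
    const maxRef = admin.database().ref('history/extrema/maximum/temp');

    // Cold start: read the current maximum from the database once.
    if (maxTemp === null) {
      const maxSnap = await maxRef.once('value');
      maxTemp = maxSnap.val();
    }

    // Compare, and update both the cache and the database if needed.
    if (maxTemp === null || newTemp > maxTemp) {
      maxTemp = newTemp;
      await maxRef.set(newTemp);
    }
    return null;
  });
```

Note that separate warm instances each keep their own copy of maxTemp, so if strict correctness matters you would wrap the update in a transaction instead.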
I want to receive data from a JavaScript file using PHP in Firebase, in the following structure.
Not like this.
From how I understood your question, I think you're looking to have your data added with an auto-generated Firebase ID. So I think what you're looking for is the push() method:
Generates a new child location using a unique key and returns its Reference.
This is the most common pattern for adding data to a collection of items.
If you provide a value to push(), the value will be written to the generated location. If you don't pass a value, nothing will be written to the Database and the child will remain empty (but you can use the Reference elsewhere).
The unique keys generated by push() are ordered by the current time, so the resulting list of items will be chronologically sorted. The keys are also designed to be unguessable (they contain 72 random bits of entropy).
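For example, a short sketch using the namespaced Web SDK (the 'users' path and the object fields are placeholders):

```javascript
const ref = firebase.database().ref('users');

// push() with a value: the value is written under an auto-generated key.
const newUserRef = ref.push({ name: 'Ada', score: 42 });
console.log('Generated key:', newUserRef.key);

// push() without a value: only a key is generated; nothing is written
// until you call set() on the returned reference.
const emptyRef = ref.push();
emptyRef.set({ name: 'Grace' });
```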
Also see Firebase Database: Read and Write Data on the Web - Update specific fields.
I wanted to know which approach would be better for the scenario below:
I have 100 product types (e.g. dress, pants) and each has >100 brands. I have one service API with two endpoints:
Endpoint 1: returns all product types and all corresponding brands.
Endpoint 2: returns all brands for a single product type.
My approach is:
Call endpoint 1 and store the result in local storage; then, when you move from one product page to another, you don't have to call the API again to get the brands for that product type.
But someone suggested:
Don't store anything; call endpoint 2 whenever you land on a product page.
Which approach is best with respect to time, accuracy, and code maintenance?
I wouldn't use localStorage unless I absolutely had to. Local caches get stale, so you have to sync them. localStorage can fail if you exceed the storage limit. And by necessity it adds complexity to your code (two sources of data instead of one).
If you find that calling the API is too slow for each screen, you can optimize for performance at that point. I wouldn't do it beforehand.
Even if you have to optimize, local storage wouldn't be my first choice. I'd cache the data in memory (a global variable) and perform client-side routing or something.
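A rough sketch of what that in-memory cache could look like, assuming a generic fetch-based API client (the URL and response shape are placeholders):

```javascript
// productType -> list of brands; lives only for the current page session.
const brandCache = {};

async function getBrands(productType) {
  if (!brandCache[productType]) {
    // Endpoint 2: fetch brands for a single product type.
    const res = await fetch(`/api/brands?type=${encodeURIComponent(productType)}`);
    brandCache[productType] = await res.json();
  }
  return brandCache[productType];
}
```

With client-side routing, the page (and therefore the cache) survives navigation between product pages, so each product type is fetched at most once per session.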
I am running MySQL 5.6. I have a number of various "name" columns in the database (in various tables). These get imported every year by each customer as a CSV data dump. There are a number of places that these names are displayed throughout this website. The issue is, the names have almost no formatting (and to this point, no sanitization existed upon importation):
Phil Eaton, PHIL EATON, Phil EATON, etc.
Thus, the website sometimes looks like a mess when these names are involved. There are a number of ways I can think of to fix this, but none is particularly appealing.
First, I could filter in JavaScript. However, as I said, these names appear in many places throughout this (large) site, and I may end up missing a page. The names don't already sit inside nicely "name"-classed divs/spans, etc.
Second, I could filter in PHP (the backend). This seems about as effective as doing it in JavaScript. I could do it in the API, but there is still no central method for pulling names from the database, so I could still miss an API call anyway.
Finally, the obvious "best" way is to sanitize the existing data in place for each name column, and at the same time immediately start sanitizing all names that get imported each time we add a customer. The issue with the first part is that there are hundreds of millions of rows of names in the database; updating these could take a long time and be disruptive to the clients' daily routines.
So the most appealing short-term fix is to invoke a function every time a column is selected. That way I could "decorate" every name column with a formatting function so the names appear uniform on the frontend. Ultimately, my question is: is it possible to invoke a specific function in SQL to format each row every time a specific column is selected? In other words, can I call a stored procedure every time a column is selected? (The point being that I'm trying to keep the formatting in SQL to avoid having to propagate formatting calls throughout the codebase.)
In MySQL you can't trigger something on SELECT, but I have an idea (it's only an idea; I don't have time to try it right now, sorry).
You could probably create a VIEW on this table with the same structure, but with the stored procedure applied to the name fields, and select from this view in your PHP (a sketch follows below).
But it has two drawbacks:
You have to modify all your SELECT statements in your PHP code.
The server will always call that procedure. Maybe you can store the formatted values and check for them (i.e. cache them).
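A rough sketch of the view idea, assuming a customers table with a single name column; the inline expression only title-cases the first word, so a real solution would likely call a stored function that handles multi-word names:

```sql
CREATE VIEW customers_formatted AS
SELECT id,
       -- Simplistic formatting: uppercase the first letter, lowercase the rest.
       CONCAT(UCASE(LEFT(name, 1)), LCASE(SUBSTRING(name, 2))) AS name
FROM customers;
```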
On the other hand, I agree with HLGEM: I also suggest formatting the data on import, because it's very bad practice to import unchecked data into a DB (SQL injection?). Batch processing is also a good idea to clean up the mess.
I presume names are selected frequently, so invoking a sanitization function every time they are read could severely slow down your system. Further, you can't get this with a simple setting; you would have to change every bit of SQL code that runs and includes names.
Personally, the way I would handle it is to fix the imports so they store a sanitized version of new names. It is a bad idea to put any data directly into a database without some sort of staging and clean-up.
Then I would tackle the old names and fix them in batches in a nightly run, scheduled for when the fewest people are using the system. You would have to do some testing on dev to determine how big a batch you can run without interfering with other things the database is doing. The larger the batch, the sooner you get through all the names; even though this will take time, it is the surest method of getting the data cleaned up, and over time the data will look better to the users. If the design of your database allows you to identify the more active names (such as an is_active flag for a customer, or an order in the last year), I would prioritize the update by that. Alternatively, you could clean up one client at a time, starting with whichever one noticed the problem and is driving this change.
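A hedged sketch of one nightly batch, reusing a simplistic single-word title-casing expression (substitute your real formatting function, and tune the LIMIT based on testing on dev; the customers table name is a placeholder):

```sql
-- Rewrites at most 1000 not-yet-formatted rows per run; repeat the run
-- until it affects zero rows. BINARY forces a case-sensitive comparison,
-- since most MySQL collations would otherwise treat 'PHIL' = 'Phil'.
UPDATE customers
SET name = CONCAT(UCASE(LEFT(name, 1)), LCASE(SUBSTRING(name, 2)))
WHERE BINARY name <> CONCAT(UCASE(LEFT(name, 1)), LCASE(SUBSTRING(name, 2)))
LIMIT 1000;
```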
The other answers give some possible solutions, but the short answer for the specific option you are asking about is: no. There is no such thing as a "SELECT statement trigger", let alone for a single column. Triggers come close to this kind of expectation, but only for INSERT, UPDATE, and DELETE operations.
At the top of a file, can I put something like...
var collection = db.mongo.collection('test', function(err, collection){return collection});
and then, in any of the file's functions, use collection.find(), etc.?
I guess my question is... is collection a reference to the collection or a copy of the data?
If data in the collection changes, will I still get up-to-date data by querying the collection variable?
Thanks!!
collection is a reference to the collection object. Until you issue a find() (or findOne()) you don't have real data in your hands, and even then it returns a Cursor object, leaving the collection object untouched.
Storing collections or cursors will not store your data. Remember that you could be dealing with millions of records; holding the data itself could overwhelm the server's memory. Instead, Mongo returns cursors and references for you to filter on. In PHP there is a function called iterator_to_array that you can pass the cursor to, and it converts it into an array of data; in the Node.js driver the cursor has an equivalent toArray() method. But it usually doesn't make sense to materialize everything: filter the information until you have a manageable data size, then iterate over the cursor and do your thing. If you have something like a config array, instead of several documents try to store everything in one document and fetch it with findOne().
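To make the reference-versus-data distinction concrete, here is a small sketch with a recent version of the Node.js driver (the connection string, database, collection name, and filter are placeholders):

```javascript
const { MongoClient } = require('mongodb');

async function main() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  const db = client.db('mydb');

  // Only a reference to the collection; no data is fetched here.
  const collection = db.collection('test');

  // find() returns a cursor; still no documents have been pulled.
  const cursor = collection.find({ active: true });

  // Documents are fetched only as the cursor is consumed, so each
  // new query reflects the current state of the collection.
  for await (const doc of cursor) {
    console.log(doc);
  }

  // For small result sets you could materialize instead:
  // const docs = await collection.find({ active: true }).toArray();

  await client.close();
}

main().catch(console.error);
```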
But in the end, I guess that's just a design question: whether your data can be filtered down or not.
I'm designing a MongoDB database that works with a script that periodically polls a resource, gets back a response, and stores it in the database. Right now my database has one collection with four fields: id, name, timestamp, and data.
I need to be able to find out which names had changes in the data field between script runs, and which did not.
In pseudocode,
if (data[name][timestamp] == data[name][timestamp + 1]) {
    // data has not changed
    store in collection 1
} else {
    // data has changed between script runs for this name
    store in collection 2
}
Is there a query that can do this without iterating and running JavaScript over each item in the collection? There are millions of documents, so that would be pretty slow.
Should I create a new collection, named with the timestamp, every time the script runs? Would that make it faster or more organized? Is there a better schema that could be used?
The script runs once a day so I won't run into a namespace limitation any time soon.
OK, this is a neat question, because the short answer is basically: you will have to iterate and run JavaScript over each item.
The part where this gets "neat" is that it isn't really different from what an SQL solution would have to do: you're basically joining the table to itself on matching names and successive timestamps. Even if a relational DB could handle such a beast, it definitely wouldn't be fast with millions of entries.
So the truth is, you're doing this the right way. Here are the extra details I would use to make this cleaner.
Ensure that you have an index on name/timestamp.
Run a db.mycollection.find().forEach() across the data set.
For each entry you're going to: a) perform the comparison, b) save appropriately, and c) update a flag indicating that this record has been processed (see the sketch below).
On future runs you should be able to add a query to your find: db.mycollection.find({flag: {$exists: false}}).forEach(...).
Use db.eval() to help with speed.
The reason for the "Name/Timestamp" index is that you're going to be looking up each "successor" by "Name/Timestamp", so you want to be quick here.
The reason for the "processed" flag is that you should never have to re-run the same item. If given timestamp 'n' you find 'n+1', then that's the only 'n+1' you're going to have.
Honestly, if you're only running this once a day, it's quite likely that the speed will be just fine, especially if you're only running on new records. Just assume that it's going to take several minutes.