Get changes based on uploaded time from firebase - javascript

I have initialized a Realtime Database using Firebase, and I am detecting live changes to the database using
const ref = firebase.database().ref("test");
ref.on('value', function (dataSnapshot) {
  console.log(dataSnapshot.val());
});
But this returns values in ascending key order, whereas I want them ordered by time. I tried using time in hh:mm (IST) format, but if one message is marked 11:59 (am) and another 01:02 (pm), this returns the second message first.
What will be the best way to fix this?
Example data in my database =>

It is not clear what you mean by "time in ascending order".
None of your example data mentions time; they are just usernames and text.
If you want to order times correctly, it is best to use the ISO date format.
This stores 1:02 pm as 13:02, which sorts after 11:59. Its sorting characteristics are ideal.
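For instance, ISO-8601 strings sort chronologically under a plain lexicographic sort (a minimal sketch with made-up timestamps):

```javascript
// ISO-8601 strings sort chronologically under plain string comparison
const times = ["2021-06-22T13:02:00Z", "2021-06-22T11:59:00Z"];
times.sort(); // lexicographic sort
console.log(times[0]); // "2021-06-22T11:59:00Z", i.e. 11:59 am correctly before 1:02 pm
```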
Use an international time standard to store your times
An international time standard, UTC, has great advantages over national times. It is not subject to change with location, political decisions, or season. You can always interconvert with the user's local time, at the time of entry or display.
Example
const dateString = (new Date()).toISOString();
console.log(dateString)
// Result:
// 2021-06-22T14:40:37.985Z
// If you want to use them as Firebase keys, they must not contain a ".", so you might clean it up like this:
const cleanDateString = (new Date()).toISOString().replace(".","-")
console.log(cleanDateString)
// Result:
// 2021-06-22T14:47:44-445Z
Even better, use a Firebase PushID
The above date-and-time system will work if you are using it to sort the remarks made by a single person, but will not be good as a message identifier if a single space is shared by all people, since 2 people will eventually make a message at the same millisecond, and will get the same message ID.
To deal with that it is better practice to use a Firebase Push ID.
An explanation is given here: In Firebase when using push() How do I get the unique ID and store in my database
Or from Firebase itself, here:
https://firebase.google.com/docs/database/admin/save-data
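To see why push IDs stay both time-ordered and unique, here is a simplified sketch of the idea (an illustration only, not Firebase's actual algorithm): a timestamp prefix gives chronological ordering, and a random suffix breaks ties within the same millisecond.

```javascript
// Simplified push-ID sketch: 8-char base-64 timestamp prefix + 12-char random suffix.
// The alphabet is in ASCII order, so string comparison matches chronological order.
const CHARS = "-0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz";

function pushIdSketch(nowMs) {
  let ts = nowMs;
  let prefix = "";
  for (let i = 0; i < 8; i++) {        // encode timestamp, most significant char first
    prefix = CHARS[ts % 64] + prefix;
    ts = Math.floor(ts / 64);
  }
  let suffix = "";
  for (let i = 0; i < 12; i++) {       // random tail breaks same-millisecond ties
    suffix += CHARS[Math.floor(Math.random() * 64)];
  }
  return prefix + suffix;
}

const a = pushIdSketch(1624370437985);
const b = pushIdSketch(1624370437986); // one millisecond later
console.log(a < b); // true: later timestamps always sort after earlier ones
```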

Related

Querying a row with times - postgresql

At present I have a simple query that will get me fixtures from my DB that belong to a particular league and are then ordered by kick off time.
response = await pool.query('SELECT * FROM fixtures WHERE league_name = $1 ORDER BY kick_off ASC', [leagueName]);
Now if I want to expand on this and return the fixtures that are upcoming or in play (say, a 2-hour buffer on top of the current time), what queries should I be looking at? The kick_off time is stored as a VARCHAR, e.g. 15:00 or 19:00.
I was thinking about grabbing the time from the user's browser with JavaScript and passing that in for a comparison, but timezones would make this tricky, right? So my thinking is to do it on the server so I know it's consistent.
I am not sure about the queries, though.
If I am wrong with my assumptions, I'm happy to be corrected and to learn here.
Thanks
Storing a time as a varchar makes this unnecessarily hard (borderline impossible). The reason is that your DB doesn't understand it as a time, so you'd really have to roll your own time-sorting functions. Ugh.
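A quick illustration of the problem (plain JavaScript, with made-up times): compared as strings, "9:00" sorts after "15:00", because '9' is greater than '1' character-wise.

```javascript
// VARCHAR-style times compared as strings sort in the wrong order
const kickOffs = ["9:00", "15:00", "11:30"];
kickOffs.sort(); // plain string sort
console.log(kickOffs); // ["11:30", "15:00", "9:00"], not chronological
```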
What you need to do is change kick_off to be a TIME, TIMESTAMP, or even TIMESTAMP WITH TIME ZONE. Then your query would look like:
SELECT * FROM fixtures
WHERE league_name = $1
AND kick_off > NOW()-interval '2 hour'
AND kick_off < NOW()+interval '2 hour'
ORDER BY kick_off ASC;
No need to muck about with the browser's time, your DB already knows what time it is!
(Note: I'm assuming a game takes 2 hours, but I'm not sure what the sport in question is, and even if I were, I'm not sure of the average game length. Change the NOW() - interval to suit.)

Firebase get by date and order by voteCount

I have a question about sorting. I want to get the first 10 posts added today, this week, and this month (so I have 30 posts total, but each 10 posts come from a different part of the database). This works perfectly, but the problem is that I want to sort all of them by voteCount.
Is there any way to sort them by date and voteCount?
My reference:
const ref = firebase.database().ref('bookmarks')
  .orderByChild('date')
  .startAt(date.end)
  .endAt(date.start)
  .limitToLast(10)
If you're willing to pad-print the numbers into a string like this, you can sort on basically anything:
"sortKey": "9899999-8521969121365-000009692795"
Don't forget Firebase can't sort descending, so you have to deal with that yourself. That's why my sortKey's first field is 9899999: the value was actually 100000, but that portion is a descending sort, so I subtracted it from 9999999.
Also don't forget to add an indexOn in your rules to avoid client-side sorting!
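A sketch of how such a key might be built (the field widths and the 9999999 inversion base are taken from this answer's example values):

```javascript
// Build a pad-printed sort key: inverted voteCount (descending),
// then a millisecond timestamp and a counter (ascending), all zero-padded.
function makeSortKey(voteCount, timestampMs, counter) {
  const invertedVotes = 9999999 - voteCount; // invert so higher counts sort first
  return [
    String(invertedVotes).padStart(7, "0"),
    String(timestampMs).padStart(13, "0"),
    String(counter).padStart(12, "0")
  ].join("-");
}

console.log(makeSortKey(100000, 8521969121365, 9692795));
// "9899999-8521969121365-000009692795"
```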

timezones with moment-timezone

I am checking the list of timezones using moment-timezone from moment.js
moment.tz.names()
is giving 583 items. The list is very detailed and very long; how can I get the main timezones out of it so I can create a drop-down list?
it goes like this :
"Africa/Abidjan", "Africa/Accra", "Africa/Addis_Ababa",
"Africa/Algiers", "Africa/Asmara", "Africa/Asmera", "Africa/Bamako",
"Africa/Bangui", "Africa/Banjul", "Africa/Bissau", "Africa/Blantyre",
"Africa/Brazzaville", "Africa/Bujumbura", "Africa/Cairo",
"Africa/Casablanca", "Africa/Ceuta", "Africa/Conakry", "Africa/Dakar",
"Africa/Dar_es_Salaam", "Africa/Djibouti", "Africa/Douala",
"Africa/El_Aaiun", "Africa/Freetown", "Africa/Gaborone",
"Africa/Harare", "Africa/Johannesburg", "Africa/Juba",
"Africa/Kampala", "Africa/Khartoum", "Africa/Kigali",
"Africa/Kinshasa", "Africa/Lagos", "Africa/Libreville", "Africa/Lome",
"Africa/Luanda", "Africa/Lubumbashi", "Africa/Lusaka",
"Africa/Malabo", "Africa/Maputo", "Africa/Maseru", "Africa/Mbabane",
"Africa/Mogadishu", "Africa/Monrovia", "Africa/Nairobi",
"Africa/Ndjamena", "Africa/Niamey", "Africa/Nouakchott",
"Africa/Ouagadougou", "Africa/Porto-Novo", "Africa/Sao_Tome",
"Africa/Timbuktu", "Africa/Tripoli", "Africa/Tunis",
"Africa/Windhoek", "America/Adak", "America/Anchorage",
"America/Anguilla", "America/Antigua", "America/Araguaina",
"America/Argentina/Buenos_Aires", "America/Argentina/Catamarca",
"America/Argentina/ComodRivadavia", "America/Argentina/Cordoba",
"America/Argentina/Jujuy", "America/Argentina/La_Rioja",
"America/Argentina/Mendoza", "America/Argentina/Rio_Gallegos",
"America/Argentina/Salta", "America/Argentina/San_Juan",
"America/Argentina/San_Luis", "America/Argentina/Tucuman",
"America/Argentina/Ushuaia", "America/Aruba", "America/Asuncion",
"America/Atikokan", "America/Atka", "America/Bahia",
"America/Bahia_Banderas", "America/Barbados", "America/Belem",
"America/Belize", "America/Blanc-Sablon", "America/Boa_Vista",
"America/Bogota", "America/Boise", "America/Buenos_Aires", ..........
thank you
I don't know of any such list. After all, who decides which time zones are the "main timezones"? However, one option is to do what Microsoft Windows does and sort the list by timezone offset:
From the moment docs:
moment("2016-03-03").tz("America/Toronto").format('Z');
will give you the offset of that particular zone, e.g., -05:00. You can get the offset for each zone, sort by offset, and then present the list. The text in the list can be the zone name, or the part after the /, with _ replaced with space (e.g., America/New_York -> New York).
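A sketch of that sort-and-label step, using hypothetical hard-coded offsets (in a real app you would read each offset from moment-timezone, e.g. via moment.tz(name).utcOffset()):

```javascript
// Hypothetical offsets in minutes; in practice get them from moment-timezone
const zones = [
  { name: "America/New_York", offset: -300 },
  { name: "Africa/Cairo", offset: 120 },
  { name: "America/Adak", offset: -600 }
];

const options = zones
  .slice()
  .sort((a, b) => a.offset - b.offset)                    // order by UTC offset
  .map(z => z.name.split("/").pop().replace(/_/g, " "));  // "New_York" -> "New York"

console.log(options); // ["Adak", "New York", "Cairo"]
```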
As everyone has said, the list is the list. What I did to narrow it down was use a session variable holding the user's timezone; if that is null, I default to American/UK/South African/Australian timezones. For my company, this is where the majority of our customers come from. Not many Canadians, strangely enough.
This is all in PHP, and not JavaScript, but it shouldn't be too hard to read and figure out. What's important are the steps used to narrow down the results.
Here's the gist:
https://gist.github.com/NoMan2000/a262a96b159164882cd7
The output.html shows you what the default timeZone breakdown is.
The most important file is the CountryCodesAndAbbr.php file.
https://gist.github.com/NoMan2000/a262a96b159164882cd7#file-countrycodesandabbr-php
It has an array of five different country ISOs stored in JSON format. It makes a cURL request, gets the JSON response, and then stores it in a file so I don't have to keep re-downloading the files each time and bombarding the APIs.
For our purposes, only worry about the getContinentAndCountry method, as that contains all the data we'll need. This is what moment does for you in JavaScript, with some tweaks to how the API gets called.
The TimeFormatter is a class for dealing with how PHP internally handles timezones, and I need to translate between the user's country-code and the displayed timezones so I don't overwhelm them.
In PHP, I use the user's country that I get when they sign up, so I have a region to pass into the method to narrow down the results. If you don't have a session variable or some other information saved, use this API.
http://freegeoip.net
This will give you the country when you make a request to it with the user's IP. Now you have the country.
If you don't get a result back, then do what I do and default to the regions where you get the most customers, with an optional <select> field that will allow them to pick the country they want.
You have to pick whether you are going to use the IP or whether you have some other way of getting the value from the user.
Lastly, pass in the country to the TimeFormatter->getTimezonesByCountryCodeLookup(region)
https://gist.github.com/NoMan2000/a262a96b159164882cd7#file-timeformatter-php
That takes the two-letter country code and looks it up. The end result is a much smaller list that the user can navigate. I combined this with chosen.js, which let users just type their abbreviation to narrow it down.
You will have to adapt this code to your own uses, but hopefully it shows you how to go about doing it.

Dynamic Frequency Map from MongoDB Keys

I'm using MiniMongo through Meteor, and I'm trying to create a frequency table based off of a dynamic set of queries.
I have two main fields, localHour and localDay. I expect many overlaps, and I'd like to determine where the most overlaps occur. My current method of doing this is so.
if (TempStats.findOne({
  localHour: hours,
  localDay: day
})) { // checks if there is already an entry for the same day/hour
  TempStats.update({ // if so, just increment frequency
    localHour: hours,
    localDay: day
  }, {
    $inc: {freq: 1}
  });
} else { // if nothing exists yet, insert a new entry
  TempStats.insert({
    localHour: hours,
    localDay: day,
    freq: 1
  });
}
Essentially, this code runs every time I have new data to insert. It works fine at the moment: after all the data is inserted, I can sort by frequency to find which combination of hour and day occurs most often (TempStats.find({}, {sort: {freq: -1}}).fetch()).
However, I'm looking for a way to search by frequency for any key. For instance, finding the day on which everything occurs most often, as opposed to the date-and-hour pair. With my current approach I would need multiple collections and a different insert method for each, which is a bit ridiculous. Is there a Mongo (specifically MiniMongo) solution for building frequency maps based on keys?
Thanks!
It looks like MiniMongo does not in fact support aggregation, which makes this kind of operation difficult. One way to go about it would be to aggregate yourself at the end of each day and insert that aggregate record into your db (without the hour field, or with it set to something like -1). Alternatively, though more wastefully, you could update that record at the time of each insert. This would allow you to use the same collection for both, and is fairly common in other dbs.
Also you should consider #nickmilon's first suggestion since the use of an upsert statement with the $inc operator would reduce your example to a single operation per data point.
A small note on your code: the else branch is not really required. Your update will do the complete job if you combine it with the option upsert: true; it will insert a new document, and $inc will set the freq field to 1 as desired. See: here and here.
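An in-memory model of what upsert + $inc achieves (plain JavaScript, not Meteor; a Map stands in for the collection): one operation per data point, with no separate existence check.

```javascript
// Model the upsert + $inc pattern: insert-or-increment in a single step.
function upsertInc(counts, localDay, localHour) {
  const key = localDay + "#" + localHour;       // compound key for the day/hour pair
  counts.set(key, (counts.get(key) || 0) + 1);  // no findOne needed first
}

const counts = new Map();
[["Mon", 9], ["Mon", 9], ["Tue", 14]].forEach(([d, h]) => upsertInc(counts, d, h));
console.log(counts.get("Mon#9")); // 2
```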
For alternative ways to count your frequencies: assuming you store the date as a datetime object, I would suggest using an aggregation (I am not sure if they have added aggregation support to minimongo yet, but there are solutions). With aggregation you can use datetime operators such as $hour, $week, etc. for filtering, and $count to count the frequencies, without having to keep counts in the database.
This is basically a simple map-reduce problem.
First, don't separate the derived data into two fields; this violates DB best practices. If the data comes to you this way, use it to create a Date object.
I assume you have a bunch of collections that are being subscribed to, and you then aggregate all of those into this temporary local collection. This is the mapping step of the map-reduce pattern. At this point, since your query is unknown, it's a waste of CPU (even though it's the client's) to aggregate. Map first, reduce second. What you should have is a collection full of datetimes; call it TempMapCollection if you wish. Now use forEach() and pass in your reduce function (by day, by hour, etc.).
You can reduce into another local collection, or into a javascript object. I like using collections, but if the objects are complex, you'll get EJSON errors all up in there. Since your objects are nothing more than a datetime, let's use collections.
so you've got something like:
TempMapCollection.find().forEach(function (doc) {
  var date = doc.dateTime.getDate();
  TempReduceCollection.upsert({timequery: date}, {$inc: {freq: 1}});
});
Now query your reduce collection. This has the added benefit that you won't have to re-map if you want to do 2 unique queries.

Range query for MongoDB pagination

I want to implement pagination on top of a MongoDB. For my range query, I thought about using ObjectIDs:
db.tweets.find({ _id: { $lt: maxID } }, { limit: 50 })
However, according to the docs, the structure of the ObjectID means that "ObjectId values do not represent a strict insertion order":
The relationship between the order of ObjectId values and generation time is not strict within a single second. If multiple systems, or multiple processes or threads on a single system generate values, within a single second; ObjectId values do not represent a strict insertion order. Clock skew between clients can also result in non-strict ordering even for values, because client drivers generate ObjectId values, not the mongod process.
I then thought about querying with a timestamp:
db.tweets.find({ created: { $lt: maxDate } }, { limit: 50 })
However, there is no guarantee the date will be unique — it's quite likely that two documents could be created within the same second. This means documents could be missed when paging.
Is there any sort of ranged query that would provide me with more stability?
It is perfectly fine to use ObjectId(), though your syntax for pagination is wrong. You want:
db.tweets.find().limit(50).sort({"_id":-1});
This says you want tweets sorted by _id value in descending order, and you want the most recent 50. Your problem is that pagination is tricky when the current result set is changing, so rather than using skip for the next page, you want to make note of the smallest _id in the result set (the 50th most recent _id value) and then get the next page with:
db.tweets.find( {_id : { "$lt" : <50th _id> } } ).limit(50).sort({"_id":-1});
This will give you the next "most recent" tweets, without new incoming tweets messing up your pagination back through time.
There is absolutely no need to worry about whether the _id value strictly corresponds to insertion order; it will be 99.999% close enough, and no one actually cares at the sub-second level which tweet came first. You might even notice Twitter frequently displays tweets out of order; it's just not that critical.
If it is critical, then you would have to use the same technique but with "tweet date" where that date would have to be a timestamp, rather than just a date.
Wouldn't a tweet's "actual" timestamp (i.e. the time tweeted, and the criterion you want it sorted by) be different from its "insertion" timestamp (i.e. the time it was added to the local collection)? This depends on your application, of course, but it's a likely scenario that tweet inserts could be batched or otherwise end up inserted in the "wrong" order. So, unless you work at Twitter (and have access to collections inserted in the correct order), you wouldn't be able to rely on just $natural or ObjectID for sorting logic.
Mongo docs suggest skip and limit for paging:
db.tweets.find({created: {$lt: maxID}}).
          sort({created: -1, username: 1}).
          skip(50).limit(50); // second page
There is, however, a performance concern when using skip:
The cursor.skip() method is often expensive because it requires the server to walk from the beginning of the collection or index to get the offset or skip position before beginning to return result. As offset increases, cursor.skip() will become slower and more CPU intensive.
This happens because skip does not fit the MapReduce model and is not an operation that scales well: you have to wait for a sorted collection to become available before it can be "sliced". limit(n) sounds like an equally poor method, as it applies a similar constraint "from the other end"; however, with sorting applied, the engine is able to somewhat optimize the process by keeping only n elements in memory per shard as it traverses the collection.
An alternative is to use range based paging. After retrieving the first page of tweets, you know what the created value is for the last tweet, so all you have to do is substitute the original maxID with this new value:
db.tweets.find({created: {$lt: lastTweetOnCurrentPageCreated}}).
          sort({created: -1, username: 1}).
          limit(50); // next page
Performing a find condition like this can be easily parallelized. But how do you deal with pages other than the next one? You don't know the begin date for page 5, 10, 20, or even the previous page! #SergioTulentsev suggests creative chaining of methods, but I would advocate pre-calculating first-last ranges of the aggregate field in a separate pages collection; these could be re-calculated on update. Furthermore, if you're not happy with DateTime (note the performance remarks) or are concerned about duplicate values, you should consider compound indexes on timestamp + an account tie-breaker (since a user can't tweet twice at the same time), or even an artificial aggregate of the two:
db.pages.
    find({pagenum: 3})
> {pagenum: 3; begin: "01-01-2014#BillGates"; end: "03-01-2014#big_ben_clock"}
db.tweets.
    find({_sortdate: {$lt: "03-01-2014#big_ben_clock", $gt: "01-01-2014#BillGates"}}).
    sort({_sortdate: -1}).
    limit(50) // third page
Using an aggregate field for sorting will work "on the fold" (although perhaps there are more kosher ways to deal with that condition). This could be set up as a unique index with values corrected at insert time, with a single tweet document looking like:
{
_id: ...,
created: ..., //to be used in markup
user: ..., //also to be used in markup
_sortdate: "01-01-2014#BillGates" //sorting only, use date AND time
}
The following approach will work even if multiple documents are inserted/updated at the same millisecond, and even from multiple clients (which generate the ObjectIds). For simplicity, in the following queries I project only _id and lastModifiedDate.
For the first page, fetch the results sorted by lastModifiedDate (descending), then ObjectId (ascending).
db.product.find({},{"_id":1,"lastModifiedDate":1}).sort({"lastModifiedDate":-1, "_id":1}).limit(2)
Note down the ObjectId and lastModifiedDate of the last record fetched in this page (loid, lmd).
For the second page, include a query condition to search for (lastModifiedDate = lmd AND _id > loid) OR (lastModifiedDate < lmd):
db.product.find({$or:[{"lastModifiedDate":{$lt:lmd}},{$and:[{"lastModifiedDate":lmd},{"_id":{$gt:loid}}]}]},{"_id":1,"lastModifiedDate":1}).sort({"lastModifiedDate":-1, "_id":1}).limit(2)
Repeat the same for subsequent pages.
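The compound cursor condition can be modelled in plain JavaScript (hypothetical in-memory documents; lmd and loid are the last page's final values, as above):

```javascript
// Keyset predicate: a doc belongs to the next page if it is strictly
// older, or equally old but with a greater _id (the tie-breaker).
function isAfterCursor(doc, lmd, loid) {
  return doc.lastModifiedDate < lmd ||
         (doc.lastModifiedDate === lmd && doc._id > loid);
}

const docs = [
  { _id: "a1", lastModifiedDate: 300 },
  { _id: "b2", lastModifiedDate: 200 },
  { _id: "c3", lastModifiedDate: 200 }, // same millisecond as b2
  { _id: "d4", lastModifiedDate: 100 }
];

// Page 1 ended at (lmd = 200, loid = "b2"); page 2 is:
const page2 = docs.filter(d => isAfterCursor(d, 200, "b2"));
console.log(page2.map(d => d._id)); // ["c3", "d4"]
```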
ObjectIds should be good enough for pagination if you limit your queries to the previous second (or don't care about the subsecond possibility of weirdness). If that is not good enough for your needs then you will need to implement an ID generation system that works like an auto-increment.
Update:
To query the previous second of ObjectIds you will need to construct an ObjectID manually.
See the specification of ObjectId http://docs.mongodb.org/manual/reference/object-id/
Try using this expression to do it from a mongos.
{
  _id: {
    $lt: ObjectId(Math.floor((new Date).getTime()/1000 - 1).toString(16) + "ffffffffffffffff")
  }
}
The 'f's at the end are there to max out the possible random bits that are not associated with a timestamp, since you are doing a less-than query.
I recommend doing the actual ObjectId creation on your application server rather than on the mongos, since this type of calculation can slow you down if you have many users.
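The same bound can be built as a plain 24-character hex string on the application server (a sketch; ObjectId() would then wrap the resulting string):

```javascript
// Build the 24-char hex value for "every ObjectId older than N seconds ago":
// 8 hex chars of Unix-seconds timestamp + 16 'f's to max out the random bits.
function objectIdUpperBound(nowMs, secondsAgo) {
  const ts = Math.floor(nowMs / 1000) - secondsAgo;
  return ts.toString(16).padStart(8, "0") + "f".repeat(16);
}

const bound = objectIdUpperBound(Date.now(), 1);
console.log(bound.length); // 24
```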
I have built pagination using the MongoDB _id this way:
// import ObjectId from mongodb
let sortOrder = -1;
let query = [];
if (prev) {
  sortOrder = 1;
  query.push({title: 'findTitle', _id: {$gt: ObjectId('_idValue')}});
}
if (next) {
  sortOrder = -1;
  query.push({title: 'findTitle', _id: {$lt: ObjectId('_idValue')}});
}
db.collection.find(query).limit(10).sort({_id: sortOrder});
