Time-sensitive data in Node.js

I'm building an application with Node.js and MongoDB, and some of the data is only valid for a limited time after it has been inserted into the database.
I'd like to remove it from the database (via code) after three days (or any other number of days / span of time).
Currently, my solution is to have a field in my schema that records when the document was posted, and then remove the document once the current time is more than 3 days past the insertion time, but I'm having trouble figuring out a good way to write that in code.
Are there any standard ways to accomplish something like this?

There are two basic ways to accomplish this with a TTL index. A TTL index will let you define a special type of index on a BSON Date field that will automatically delete documents based on age. First, you will need to have a BSON Date field in your documents. If you don't have one, this won't work. http://docs.mongodb.org/manual/reference/bson-types/#document-bson-type-date
Then you can either delete all documents after they reach a certain age, or set expiration dates for each document as you insert them.
For the first case, assuming you wanted to delete documents after 1 hour you would create this index:
db.mycollection.ensureIndex( { "createdAt": 1 }, { expireAfterSeconds: 3600 } )
assuming you had a createdAt field that was a date type. MongoDB will take care of deleting all documents in the collection once they reach 3600 seconds (or 1 hour) old.
For the second case, you will create an index with expireAfterSeconds set to 0 on a different field:
db.mycollection.ensureIndex( { "expireAt": 1 }, { expireAfterSeconds: 0 } )
If you then insert a document with an expireAt field set to a date, MongoDB will delete that document at that date and time:
db.mycollection.insert( {
"expireAt": new Date('June 6, 2014 13:52:00'),
"mydata": "data"
} )
You can read more detail about how to use TTL indexes here:
http://docs.mongodb.org/manual/tutorial/expire-data/
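If you happen to be doing this through Mongoose on the Node.js side, the same first approach can be expressed at the schema level; the following is only a rough sketch (the Item model and its fields are placeholder names, not something from the question) using Mongoose's expires option, which creates the TTL index for you, with the question's three-day window:
// Sketch assuming Mongoose; "Item" and its fields are placeholder names.
var mongoose = require('mongoose');

var itemSchema = new mongoose.Schema({
  mydata: String,
  // "expires" tells Mongoose to build a TTL index on this Date field,
  // equivalent to { expireAfterSeconds: 259200 } (3 days).
  createdAt: { type: Date, default: Date.now, expires: 60 * 60 * 24 * 3 }
});

var Item = mongoose.model('Item', itemSchema);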

Related

V-Calendar component in Vuetify; setting up events to scale across months

I'm looking for advice on the best way to store event dates in Postgres when it comes to fetching them and displaying them on a calendar. I'm using a node/expressjs backend with Postgres as a data store. On the front end I'm using Vue with Vuetify/Nuxt. Vuetify has a lot of convenient UI components, more specifically the v-calendar component.
I've got a few edge cases that I'm having a hard time wrapping my head around.
I want to be able to fetch events for the current month from the database and events that spill over from one month to the next, and to the next, etc. What is the best way to do this? How should I model my database table and fetch the records (I'm using Postgres)? An event needs a name, start and end. Should I instead store the total duration of the event in a unix timestamp and query the events by range between a given month duration (in seconds)?
Any advice would be welcome.
Store your events with their beginning and end dates in a range type.
You can then use the range overlap operator && to figure out which events belong on a certain month's calendar.
For instance, if you have an event with a duration column of type daterange defined as '[2020-01-01, 2020-03-31]'::daterange, it will match the following condition:
where duration && '[2020-02-01, 2020-03-01)'
Please note that the closing ) is deliberate since that excludes the upper limit from the range (in this case, 1 March).
In case you would rather not store the start and end dates inside a range type, you can always construct one on the fly:
where daterange(start_date, end_date, '[]') && '[2020-02-01, 2020-03-01)'
The range for the current month can be calculated on the fly:
select daterange(
date_trunc('month', now())::date,
(date_trunc('month', now()) + interval '1 month')::date, '[)'
);
daterange
-------------------------
[2020-07-01,2020-08-01)
(1 row)
Or for a three-month calendar:
select daterange(
(date_trunc('month', now()) - interval '1 month')::date,
(date_trunc('month', now()) + interval '2 month')::date, '[)'
);
daterange
-------------------------
[2020-06-01,2020-09-01)
(1 row)
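To tie this back to the node/express backend mentioned in the question, a minimal sketch with the pg module might look like the following (the events table and its start_date/end_date columns are assumptions, not part of your schema):
// Sketch assuming the "pg" module and an events(start_date, end_date) table.
const { Pool } = require('pg');
const pool = new Pool();

async function eventsForCurrentMonth() {
  const sql = `
    SELECT id, name, start_date, end_date
    FROM events
    WHERE daterange(start_date, end_date, '[]') &&
          daterange(date_trunc('month', now())::date,
                    (date_trunc('month', now()) + interval '1 month')::date, '[)')`;
  const { rows } = await pool.query(sql);
  return rows;
}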
The way we stored and retrieved events was that every time a user scrolls in the calendar, I use a method to return events whose start_date_time falls in the current month plus the previous and next month, for a total of 3 months. This way we catch any calendar overlap. We use Laravel in the backend, but you should be able to get the general gist of the method. Our tableify method just formats data for us.
My DB structure is as follows (removing subjective data):
CREATE TABLE calendar_events (
id bigserial NOT NULL,
calendar_event_category_id int4 NOT NULL,
name varchar(512) NOT NULL,
description text NULL,
start_date_time timestamp(0) NOT NULL,
end_date_time timestamp(0) NULL,
"data" json NULL,
user_id int4 NULL,
created_at timestamp(0) NULL,
updated_at timestamp(0) NULL,
CONSTRAINT calendar_events_pkey PRIMARY KEY (id),
CONSTRAINT calendar_events_calendar_event_category_id_foreign FOREIGN KEY (calendar_event_category_id) REFERENCES calendar_event_categories(id),
CONSTRAINT calendar_events_user_id_foreign FOREIGN KEY (user_id) REFERENCES users(id)
);
My index method:
public function index(Request $request)
{
    $currentDate = empty($request->filter_date) ? Carbon::now() : new Carbon($request->filter_date);
    if (! empty($request->filter_date)) {
        return api_response('Retrieved Calendar Events.',
            CalendarEvent::tableify($request,
                CalendarEvent::where('start_date_time', '>=', $currentDate->subMonth(1)->isoFormat('YYYY-MM-DD'))
                    ->where('start_date_time', '<=', $currentDate->addMonth(2)->isoFormat('YYYY-MM-DD'))
                    ->orderby('start_date_time', 'DESC')
            )
        );
    } else {
        return api_response('Retrieved Calendar Events.',
            CalendarEvent::tableify($request, CalendarEvent::orderby('start_date_time', 'DESC'))
        );
    }
}
That's the way I solved the overlap problem. Every time the user scrolls, the frontend checks whether the month has changed; if so, it updates the calendar with the latest 3-month chunk.

Best way to get rid of old messages/posts in a collection?

I know this website prefers answers over discussions but I am quite lost on this.
What would be a sufficient enough way to get rid of old messages that are stored in a collection? As they are messages, there will be a large amount of them.
What I have so far is either deleting messages using
if (Messages.find().count() > 100) {
Messages.remove({
_id: Messages.findOne({}, { sort: { createdAt: 1 } })._id
});
}
and I have also tried using expire.
Is there any other/more efficient way to do this?
Depending on how you define the age to expiry, there are two ways you can go about this.
The first one would be to use "TTL indexes" that automatically prune some collections based on time. For instance, you might have a logs table to log all the application events and you only want to keep the logs for the last hour. To implement this, add a date field to your logs document. This will indicate the age of the document. MongoDB will use this field to determine if your document is expired and needs to be removed:
db.log_events.insert({
  "name": "another log entry",
  "createdAt": new Date()
})
Now add a TTL index to your collection on this field. In the example below I used an expireAfterSeconds value of 3600, which will remove log documents once they are an hour old:
db.log_events.createIndex({ "createdAt": 1 }, { expireAfterSeconds: 3600 })
So for your case you would need to define an appropriate expiry time in seconds. For more details refer to the MongoDB documentation on expiration of data using TTL indexes.
The second approach involves manually removing the documents based on a date range query. For the example above, given the same collection, to remove documents older than an hour you need to create a date that represents an hour ago relative to the current timestamp and use that date as the query in the remove method of the collection:
var now = new Date(),
hourAgo = new Date(now.getTime() - (60 * 60 * 1000));
db.log_events.remove({"createdAt": { "$lte": hourAgo }})
The above will delete log documents older than an hour.
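If you go the manual route, something has to run that removal periodically. As a minimal sketch (the 10-minute interval is arbitrary; the Messages collection and createdAt field are the names from your question), on the Meteor server you could do:
// Sketch: prune old messages every 10 minutes on the server.
Meteor.setInterval(function () {
  var hourAgo = new Date(Date.now() - 60 * 60 * 1000);
  Messages.remove({ createdAt: { $lte: hourAgo } });
}, 10 * 60 * 1000);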

Dynamic Frequency Map from MongoDB Keys

I'm using MiniMongo through Meteor, and I'm trying to create a frequency table based off of a dynamic set of queries.
I have two main fields, localHour and localDay. I expect many overlaps, and I'd like to determine where the most overlaps occur. My current method of doing this is as follows.
if(TempStats.findOne({
localHour: hours,
localDay: day
})){//checks if there is already some entry on the same day/hour
TempStats.update({//if so, we just increment frequency
localHour: hours,
localDay: day
},{
$inc: {freq: 1}
})
} else {//if nothing exists yet, we put in a new entry
TempStats.insert({
localHour: hours,
localDay: day,
freq: 1
});
}
Essentially, this code runs every time I have new data I want to insert. It works fine at the moment, in that, after all data is inserted, I can sort by frequency to find what set of hours & days occurs the most often (TempStats.find({}, {sort: {freq: -1}}).fetch()).
However, I'm looking more for a way to search by frequency for any key. For instance, searching for the day which everything occurs on the most often as opposed to both the date and hour. With my current way of doing this, I would need to have multiple databases and different methods of inserting for each, which is a bit ridiculous. Is there a Mongo (specifically MiniMongo) solution to do frequency maps based on keys?
Thanks!
It looks like minimongo does not in fact support aggregation, which makes this kind of operation difficult. One way to go about it would be to aggregate yourself at the end of each day and insert that aggregate record into your db (without the hour field, or with it set to something like -1). Alternatively (though more wastefully) you could update that record at the time of each insert. This would allow you to use the same collection for both and is fairly common in other dbs.
Also you should consider #nickmilon's first suggestion since the use of an upsert statement with the $inc operator would reduce your example to a single operation per data point.
A small note on your code: the part in the else statement is not really required. Your update will do the complete job if you combine it with the option upsert: true; it will insert a new document when none matches, and $inc will set the freq field to 1 as desired.
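In other words, the whole if/else block could collapse to a single upsert, roughly like this (a sketch using the same field names as your code):
// Insert the doc if no day/hour entry exists yet, otherwise just bump freq.
TempStats.upsert(
  { localHour: hours, localDay: day },
  { $inc: { freq: 1 } }
);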
For alternative ways to count your frequencies: assuming you store the date as a datetime object, I would suggest using an aggregation (I am not sure whether minimongo has added support for aggregation yet, but it works on the server). With aggregation you can use date operators such as $hour, $week, etc. for grouping and count the frequencies without having to keep counts in the database.
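For illustration only, on a server-side collection such an aggregation could look roughly like this, assuming a single BSON Date field named dateTime (a placeholder, since your documents currently store localHour/localDay instead):
// Count how many documents fall into each hour of the day, most frequent first.
db.tempstats.aggregate([
  { $group: { _id: { $hour: "$dateTime" }, freq: { $sum: 1 } } },
  { $sort: { freq: -1 } }
])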
This is basically a simple map-reduce problem.
First, don't separate the derived data into 2 fields. This violates DB best practices. If the data comes to you this way, use it to create a Date object. I assume you have a bunch of collections that are being subscribed to and then you aggregate all those into this temporary local collection. This is the mapping of the map-reduce pattern. At this point, since your query is unknown, it's a waste of CPU (even though it's your client) to aggregate. Map first, reduce second. What you should have is a collection full of datetimes. Call it TempMapCollection if you wish. Now, use a forEach() and pass in your reduce function (by day, by hour, etc).
You can reduce into another local collection, or into a javascript object. I like using collections, but if the objects are complex, you'll get EJSON errors all up in there. Since your objects are nothing more than a datetime, let's use collections.
so you've got something like:
TempMapCollection.find().forEach(function(doc) {
  // reduce by hour of day here; swap getHours() for getDay(), getDate(), etc.
  var hour = doc.dateTime.getHours();
  TempReduceCollection.upsert({ timequery: hour }, { $inc: { freq: 1 } });
})
Now query your reduce collection. This has the added benefit that you won't have to re-map if you want to do 2 unique queries.
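Querying the reduced collection is then just a sort on the counter, for example:
// Most frequent time buckets first.
TempReduceCollection.find({}, { sort: { freq: -1 } }).fetch();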

In Meteor, how do I find a record in a collection given the year, month, day?

So I have a collection of posts where a JavaScript Date object is stored in the "submitted" field. Given the year/month/day (2015/02/03 for example), I need to be able to pull the records that fit this description.
I tried something like this, but it didn't work. I'm clueless as to the correct syntax:
Posts.find({$where : 'return this.submitted.getMonth() == 2 && this.submitted.getDay() == 3 && this.submitted.getYear() == 2015'})
Also, is it better for me to just separately store 3 separate variables in the collection to begin with? Like submitted.year, submitted.month, submitted.day. It sounds a lot simpler, but it requires me to add in a whole bunch of fields, which may seem redundant.
This is a MongoDB question. The solution is to search within a date range. Tweak the query below:
Posts.find({submitted: {$gte: new Date('2015-02-03'), $lt: new Date('2015-02-04')}})
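If you need to build that range from year/month/day values rather than hard-coded strings, a small sketch would be (note that JavaScript Date months are zero-based):
// Start of 3 Feb 2015 and start of the following day (local time).
var start = new Date(2015, 1, 3);
var end = new Date(2015, 1, 4);
Posts.find({ submitted: { $gte: start, $lt: end } });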

Range query for MongoDB pagination

I want to implement pagination on top of a MongoDB. For my range query, I thought about using ObjectIDs:
db.tweets.find({ _id: { $lt: maxID } }, { limit: 50 })
However, according to the docs, the structure of the ObjectID means that "ObjectId values do not represent a strict insertion order":
The relationship between the order of ObjectId values and generation time is not strict within a single second. If multiple systems, or multiple processes or threads on a single system generate values, within a single second; ObjectId values do not represent a strict insertion order. Clock skew between clients can also result in non-strict ordering even for values, because client drivers generate ObjectId values, not the mongod process.
I then thought about querying with a timestamp:
db.tweets.find({ created: { $lt: maxDate } }, { limit: 50 })
However, there is no guarantee the date will be unique — it's quite likely that two documents could be created within the same second. This means documents could be missed when paging.
Is there any sort of ranged query that would provide me with more stability?
It is perfectly fine to use ObjectId() though your syntax for pagination is wrong. You want:
db.tweets.find().limit(50).sort({"_id":-1});
This says you want tweets sorted by _id value in descending order and you want the most recent 50. Your problem is the fact that pagination is tricky when the current result set is changing - so rather than using skip for the next page, you want to make note of the smallest _id in the result set (the 50th most recent _id value) and then get the next page with:
db.tweets.find( {_id : { "$lt" : <50th _id> } } ).limit(50).sort({"_id":-1});
This will give you the next "most recent" tweets, without new incoming tweets messing up your pagination back through time.
There is absolutely no need to worry about whether _id value is strictly corresponding to insertion order - it will be 99.999% close enough, and no one actually cares on the sub-second level which tweet came first - you might even notice Twitter frequently displays tweets out of order, it's just not that critical.
If it is critical, then you would have to use the same technique but with "tweet date" where that date would have to be a timestamp, rather than just a date.
Wouldn't a tweet's "actual" timestamp (i.e. the time tweeted, and the criterion you want it sorted by) be different from a tweet's "insertion" timestamp (i.e. the time it was added to the local collection)? This depends on your application, of course, but it's a likely scenario that tweet inserts could be batched or otherwise end up being inserted in the "wrong" order. So, unless you work at Twitter (and have access to collections inserted in correct order), you wouldn't be able to rely just on $natural or ObjectID for sorting logic.
Mongo docs suggest skip and limit for paging:
db.tweets.find({created: {$lt: maxDate}}).
sort({created: -1, username: 1}).
skip(50).limit(50); //second page
There is, however, a performance concern when using skip:
The cursor.skip() method is often expensive because it requires the server to walk from the beginning of the collection or index to get the offset or skip position before beginning to return result. As offset increases, cursor.skip() will become slower and more CPU intensive.
This happens because skip does not fit into the MapReduce model and is not an operation that would scale well, you have to wait for a sorted collection to become available before it can be "sliced". Now limit(n) sounds like an equally poor method as it applies a similar constraint "from the other end"; however with sorting applied, the engine is able to somewhat optimize the process by only keeping in memory n elements per shard as it traverses the collection.
An alternative is to use range based paging. After retrieving the first page of tweets, you know what the created value is for the last tweet, so all you have to do is substitute the original maxDate with this new value:
db.tweets.find({created: {$lt: lastTweetOnCurrentPageCreated}}).
sort({created: -1, username: 1}).
limit(50); //next page
Performing a find condition like this can be easily parallelized. But how to deal with pages other than the next one? You don't know the begin date for pages number 5, 10, 20, or even the previous page! #SergioTulentsev suggests creative chaining of methods but I would advocate pre-calculating first-last ranges of the aggregate field in a separate pages collection; these could be re-calculated on update. Furthermore, if you're not happy with DateTime (note the performance remarks) or are concerned about duplicate values, you should consider compound indexes on timestamp + account tie (since a user can't tweet twice at the same time), or even an artificial aggregate of the two:
db.pages.
find({pagenum: 3})
> {pagenum:3; begin:"01-01-2014#BillGates"; end:"03-01-2014#big_ben_clock"}
db.tweets.
find({_sortdate: {$lt: "03-01-2014#big_ben_clock", $gt: "01-01-2014#BillGates"}}).
sort({_sortdate: -1}).
limit(50) //third page
Using an aggregate field for sorting will work "on the fold" (although perhaps there are more kosher ways to deal with the condition). This could be set up as a unique index with values corrected at insert time, with a single tweet document looking like
{
_id: ...,
created: ..., //to be used in markup
user: ..., //also to be used in markup
_sortdate: "01-01-2014#BillGates" //sorting only, use date AND time
}
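Creating the supporting unique index on that artificial field is then a one-liner (a sketch using the same illustrative field name):
// Unique index so duplicate _sortdate values are rejected at insert time.
db.tweets.createIndex({ "_sortdate": -1 }, { unique: true })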
The following approach will work even if there are multiple documents inserted/updated in the same millisecond, even from multiple clients (which generate the ObjectIds). For simplicity, in the following queries I am projecting _id and lastModifiedDate.
For the first page, fetch the results sorted by lastModifiedDate (descending) and ObjectId (ascending):
db.product.find({},{"_id":1,"lastModifiedDate":1}).sort({"lastModifiedDate":-1, "_id":1}).limit(2)
Note down the ObjectId and lastModifiedDate of the last record fetched in this page. (loid, lmd)
For the second page, include a query condition that matches (lastModifiedDate = lmd AND _id > loid) OR (lastModifiedDate < lmd):
db.product.find({$or:[{"lastModifiedDate":{$lt:lmd}},{$and:[{"lastModifiedDate":lmd},{"_id":{$gt:loid}}]}]},{"_id":1,"lastModifiedDate":1}).sort({"lastModifiedDate":-1, "_id":1}).limit(2)
Repeat the same for subsequent pages.
ObjectIds should be good enough for pagination if you limit your queries to the previous second (or don't care about the subsecond possibility of weirdness). If that is not good enough for your needs then you will need to implement an ID generation system that works like an auto-increment.
Update:
To query the previous second of ObjectIds you will need to construct an ObjectID manually.
See the specification of ObjectId http://docs.mongodb.org/manual/reference/object-id/
Try using this expression to do it from a mongos.
{ _id :
{
$lt : ObjectId(Math.floor((new Date).getTime()/1000 - 1).toString(16)+"ffffffffffffffff")
}
}
The 'f's at the end are to max out the possible random bits that are not associated with a timestamp, since you are doing a less-than query.
I recommend doing the actual ObjectId creation on your application server rather than on the mongos, since this type of calculation can slow you down if you have many users.
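As an illustrative sketch only (given a connected db handle from the Node.js driver), a similar boundary ObjectId can be built on the application server with the driver's createFromTime() helper; note that it zero-fills the non-timestamp bytes, so a $lt query against it matches everything created before the start of the current second:
// Sketch using the Node.js driver's ObjectId helper.
var ObjectId = require('mongodb').ObjectId;

var boundaryId = ObjectId.createFromTime(Math.floor(Date.now() / 1000));

db.collection('tweets').find({ _id: { $lt: boundaryId } });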
I have built pagination using the MongoDB _id this way.
// import ObjectId from mongodb
let sortOrder = -1;
let query = { title: 'findTitle' };
if (prev) {
  sortOrder = 1;
  query._id = { $gt: ObjectId('_idValue') }; // _idValue: boundary _id from the current page
}
if (next) {
  sortOrder = -1;
  query._id = { $lt: ObjectId('_idValue') };
}
db.collection.find(query).limit(10).sort({ _id: sortOrder });
