Creating a mongo view that depends on the current time - javascript

I have a collection that has a date field, and I want to create a mongo view that filters the documents by the current date. For example, I want my view to contain all the documents of the last 7 days.
I have a JavaScript script that creates the view with an aggregation pipeline. I used the JavaScript method new Date() to write the condition for the last 7 days:
{
  "$lt": [
    { "$subtract": [new Date(), "$DateOfDocument"] }, // difference in milliseconds
    1000 * 60 * 60 * 24 * 7 // 7 days in milliseconds
  ]
}
But when I execute the script that creates the view, mongo evaluates new Date() first and then creates the view, with the result of new Date() stored as an ISODate. Now the aggregation pipeline filters the view by the last time I executed the script, not by the actual current date.
{
  "$lt": [
    { "$subtract": [ISODate("2018-02-05T06:52:32.10+0000"), "$DateOfDocument"] },
    604800000
  ]
}
Is there any way to get a view filtered by the current date? Is there an aggregation operator for the current date, like Oracle's sysdate? I don't want to execute the script that recreates the view every time I want to read the view.

Looks like this feature is in the works for MongoDB 3.7.
https://jira.mongodb.org/browse/SERVER-23656
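For reference, that ticket shipped in MongoDB 4.2 as the $$NOW aggregation variable, which is evaluated each time the view is read rather than when it is created. A minimal sketch, assuming MongoDB 4.2+ (the field name is from the question; the view and collection names are placeholders):

```javascript
// Pipeline using $$NOW so the 7-day window is computed at query time,
// not frozen at view-creation time.
const sevenDaysMs = 1000 * 60 * 60 * 24 * 7;

const pipeline = [
  {
    $match: {
      $expr: {
        $lt: [{ $subtract: ["$$NOW", "$DateOfDocument"] }, sevenDaysMs]
      }
    }
  }
];

// In the mongo shell:
// db.createView("lastSevenDays", "myCollection", pipeline);
```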

Related

V-Calendar component in Vuetify; setting up events to scale across months

I'm looking for advice on the best way to store event dates in Postgres when it comes to fetching them and displaying them on a calendar. I'm using a Node/Express backend with Postgres as a data store. On the front end I'm using Vue with Vuetify/Nuxt. Vuetify has a lot of convenient UI components, more specifically the v-calendar component:
V-Calendar
I've got a few edge cases that I'm having a hard time wrapping my head around.
I want to be able to fetch events for the current month from the database, including events that spill over from one month into the next (and the next, etc.). What is the best way to do this? How should I model my table and fetch the records? An event needs a name, start and end. Should I instead store the total duration of the event as a Unix timestamp and query the events by a range covering the given month (in seconds)?
Any advice would be welcome.
Store your events with their beginning and end dates in a range type
You can then use the overlap operator && to figure out which events belong on a certain month's calendar.
For instance, if you have an event with a duration column of type daterange defined as '[2020-01-01, 2020-03-31]'::daterange, it will match the following condition:
where duration && '[2020-02-01, 2020-03-01)'
Please note that the closing ) is deliberate since that excludes the upper limit from the range (in this case, 1 March).
In case you would rather not store the start and end dates inside a range type, you can always construct one on the fly:
where daterange(start_date, end_date, '[]') && '[2020-02-01, 2020-03-01)'
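The test that && performs can be expressed in any language: for half-open [start, end) ranges, two ranges overlap iff each starts before the other ends. A sketch in JavaScript (the function name is made up for illustration):

```javascript
// Half-open range overlap test, mirroring what Postgres's && operator
// does for [start, end) ranges.
function overlaps(aStart, aEnd, bStart, bEnd) {
  return aStart < bEnd && bStart < aEnd;
}

// The Jan 1 - Mar 31 event above (end exclusive: Apr 1) overlaps February:
const onCalendar = overlaps(
  new Date('2020-01-01'), new Date('2020-04-01'),
  new Date('2020-02-01'), new Date('2020-03-01')
); // true
```

Note that with half-open ranges, adjacent ranges (one ending exactly where the other begins) do not count as overlapping, which is why the closing ) on the month range matters.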
The range for the current month can be calculated on the fly:
select daterange(
date_trunc('month', now())::date,
(date_trunc('month', now()) + interval '1 month')::date, '[)'
);
daterange
-------------------------
[2020-07-01,2020-08-01)
(1 row)
Or for a three-month calendar:
select daterange(
(date_trunc('month', now()) - interval '1 month')::date,
(date_trunc('month', now()) + interval '2 month')::date, '[)'
);
daterange
-------------------------
[2020-06-01,2020-09-01)
(1 row)
The way we stored and retrieved events: every time a user scrolls in the calendar, I use a method to return start_date_time records for the current month plus the previous and next month, for a total of 3 months. This way we catch any calendar overlap. We use Laravel on the backend, but you should be able to get the general gist of the method. Our tableify method just formats data for us.
My DB structure is as follows (removing subjective data):
CREATE TABLE calendar_events (
id bigserial NOT NULL,
calendar_event_category_id int4 NOT NULL,
name varchar(512) NOT NULL,
description text NULL,
start_date_time timestamp(0) NOT NULL,
end_date_time timestamp(0) NULL,
"data" json NULL,
user_id int4 NULL,
created_at timestamp(0) NULL,
updated_at timestamp(0) NULL,
CONSTRAINT calendar_events_pkey PRIMARY KEY (id),
CONSTRAINT calendar_events_calendar_event_category_id_foreign FOREIGN KEY (calendar_event_category_id) REFERENCES calendar_event_categories(id),
CONSTRAINT calendar_events_user_id_foreign FOREIGN KEY (user_id) REFERENCES users(id)
);
My index method:
public function index(Request $request)
{
    $currentDate = empty($request->filter_date) ? Carbon::now() : new Carbon($request->filter_date);

    if (! empty($request->filter_date)) {
        return api_response('Retrieved Calendar Events.',
            CalendarEvent::tableify($request,
                CalendarEvent::where('start_date_time', '>=', $currentDate->subMonth(1)->isoFormat('YYYY-MM-DD'))
                    ->where('start_date_time', '<=', $currentDate->addMonth(2)->isoFormat('YYYY-MM-DD'))
                    ->orderby('start_date_time', 'DESC')
            )
        );
    } else {
        return api_response('Retrieved Calendar Events.',
            CalendarEvent::tableify($request, CalendarEvent::orderby('start_date_time', 'DESC'))
        );
    }
}
That's the way I solved the overlap problem. Every time the user scrolls, the frontend checks whether the month changed; if so, it updates the calendar with the latest 3-month chunk.
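For anyone not using Laravel/Carbon, the same 3-month window can be computed in plain JavaScript (a sketch; the helper name is made up):

```javascript
// Returns the half-open window [first day of previous month,
// first day of the month after next), matching the
// "previous + current + next month" strategy described above.
function threeMonthWindow(date) {
  const start = new Date(date.getFullYear(), date.getMonth() - 1, 1);
  const end = new Date(date.getFullYear(), date.getMonth() + 2, 1); // exclusive
  return { start, end };
}

const { start, end } = threeMonthWindow(new Date(2020, 6, 15)); // 15 July 2020
// start -> 1 June 2020, end -> 1 September 2020 (exclusive)
```

The Date constructor normalizes out-of-range months, so the window is correct across year boundaries as well (e.g. a January date yields a December start).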

Creating and comparing dates inside CosmosDB stored procedures

There is limited guidance for CosmosDB stored procedures and their handling of new Date() and the comparison of dates.
The following code is a CosmosDB stored procedure to 'freeze' the writing of documents after a given time. The property currentDoc.FreezeDate is in ISO-8601 format, e.g. '2017-11-15T13:34:04Z'.
Note: this is an example of the situation I'm trying to understand. It is not production code.
function tryUpdate(newDoc) {
    __.queryDocuments(
        __.getSelfLink(),
        { /* query to fetch the document */ },
        (error, results) => {
            var currentDoc = results[0]; // doc from the database

            // fail if the document is still locked
            if (new Date(currentDoc.FreezeDate) < new Date()) {
                getContext().getResponse().setBody({ success: false });
                return;
            }

            // else update the document
            /* snip */
        }
    );
}
My question is: within CosmosDB stored procedures, is new Date() affected by timezones, especially given that the database may be in a different region than the invoking code? Is the date comparison code here valid in all situations?
As far as I can see, CosmosDB stores DateTime values without the corresponding timezone, i.e. not as a DateTimeOffset. This means it should not matter where the code is executed, since the value is always normalized to something like this:
"2014-09-15T23:14:25.7251173Z"
JavaScript Date objects are timestamps: they merely contain the number of milliseconds since the epoch. There is no timezone info in a Date object. Which calendar date (day, minutes, seconds) this timestamp represents is a matter of interpretation (via one of the to...String methods).
(taken from Parse date without timezone javascript)
In other words, no matter where you are in the world, new Date() will always have the same value internally.
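This is easy to verify: two ISO-8601 strings denoting the same instant in different offsets parse to the same internal timestamp (using the example date from the question):

```javascript
// Same instant, expressed in UTC and in UTC+02:00
const utc = new Date('2017-11-15T13:34:04Z');
const plusTwo = new Date('2017-11-15T15:34:04+02:00');

console.log(utc.getTime() === plusTwo.getTime()); // true
```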
If you want to remove uncertainty in exchange for readability, I would recommend only storing the seconds or milliseconds since the epoch (Unix time). This is also what a Date uses internally (new Date().getTime() returns the milliseconds). Incidentally, the internal Cosmos document field _ts is also an epoch timestamp (in seconds).
Be aware that the value of new Date() might be off from the correct global time by a couple of minutes; I don't know if Azure/Cosmos guarantees a certain deviation window.

Best way to get rid of old messages/posts in a collection?

I know this website prefers answers over discussions, but I am quite lost on this.
What would be a good enough way to get rid of old messages that are stored in a collection? As they are messages, there will be a large number of them.
What I have so far are either deleting messages using
if (Messages.find().count() > 100) {
Messages.remove({
_id: Messages.findOne({}, { sort: { createdAt: 1 } })._id
});
}
and I have also tried using expire.
Is there any other/more efficient way to do this?
Depending on how you define the age to expiry, there are two ways you can go about this.
The first one would be to use "TTL indexes" that automatically prune a collection based on time. For instance, you might have a logs collection for all the application events and you only want to keep the logs for the last hour. To implement this, add a date field to your log documents; this will indicate the age of each document. MongoDB will use this field to determine whether a document is expired and needs to be removed:
db.log_events.insert({
    "name": "another log entry",
    "createdAt": new Date()
})
Now add a TTL index to your collection on this field. In the example below I used an expireAfterSeconds value of 3600, which will remove logs once they are an hour old:
db.log_events.createIndex({ "createdAt": 1 }, { expireAfterSeconds: 3600 })
So for your case you would need to define an appropriate expiry time in seconds. For more details refer to the MongoDB documentation on expiration of data using TTL indexes.
The second approach involves manually removing the documents with a date range query. Given the same collection as above, to remove documents older than an hour you need to create a date that represents an hour ago relative to the current timestamp and use it as the query in the collection's remove method:
var now = new Date(),
hourAgo = new Date(now.getTime() - (60 * 60 * 1000));
db.log_events.remove({"createdAt": { "$lte": hourAgo }})
The above will delete log documents older than an hour.

Time sensitive data in Node.js

I'm building an application in Node.js and MongoDB, and the application has some time-sensitive data: once a piece of data has been inserted into the database,
I'd like to remove it from the database (via code) after three days (or any number of days / amount of time).
Currently, my solution is to have a member in my Schema that records when the document was posted, and to remove the document once the current time is more than 3 days past the insertion, but I'm having trouble figuring out a good way to write that in code.
Are there any standard ways to accomplish something like this?
There are two basic ways to accomplish this with a TTL index. A TTL index lets you define a special type of index on a BSON Date field that will automatically delete documents based on age. First, you will need a BSON Date field in your documents; if you don't have one, this won't work. http://docs.mongodb.org/manual/reference/bson-types/#document-bson-type-date
Then you can either delete all documents after they reach a certain age, or set expiration dates for each document as you insert them.
For the first case, assuming you wanted to delete documents after 1 hour you would create this index:
db.mycollection.createIndex( { "createdAt": 1 }, { expireAfterSeconds: 3600 } )
assuming you had a createdAt field that was a date type. MongoDB will take care of deleting all documents in the collection once they reach 3600 seconds (or 1 hour) old.
For the second case, you will create an index with expireAfterSeconds set to 0 on a different field:
db.mycollection.createIndex( { "expireAt": 1 }, { expireAfterSeconds: 0 } )
If you then insert a document with an expireAt field set to a date, MongoDB will delete that document at that date and time:
db.mycollection.insert( {
"expireAt": new Date('June 6, 2014 13:52:00'),
"mydata": "data"
} )
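For the three-day window in the question, the expireAt value can be computed relative to insertion time instead of hard-coding a date (a sketch; the collection name is from the answer above):

```javascript
// Expire this document 3 days after insertion
const threeDaysMs = 3 * 24 * 60 * 60 * 1000;
const expireAt = new Date(Date.now() + threeDaysMs);

// db.mycollection.insert({ "expireAt": expireAt, "mydata": "data" })
```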
You can read more detail about how to use TTL indexes here:
http://docs.mongodb.org/manual/tutorial/expire-data/

MongoDB query for document older than 30 seconds

Does anyone have a good approach for querying a collection for documents that are older than 30 seconds? I'm creating a cleanup worker that marks items as failed after they have been in a specific state for more than 30 seconds.
Not that it matters, but I'm using mongojs for this one.
Every document has a created time associated with it.
If you want to do this using mongo shell:
db.requests.find({created: {$lt: new Date((new Date())-1000*60*60*72)}}).count()
...will find the documents that are older than 72 hours ("now" minus "72*60*60*1000" msecs). 30 seconds would be 1000*30.
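Applied to the 30-second case in the question, the cutoff date would be computed like this (plain JavaScript, so it works from mongojs as well; the collection and field names are from the example above):

```javascript
// Documents created before this cutoff are older than 30 seconds
const cutoff = new Date(Date.now() - 30 * 1000);

// db.requests.find({ created: { $lt: cutoff } })
```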
We are assuming you have a created_at or similar field in your documents that holds the time the document was inserted or last modified, depending on which is important to you.
Rather than iterating over the results, you might want to look at the multi option of update to apply your change to all documents that match your query. Setting the time you want to look past should be fairly straightforward.
In shell syntax, which should be pretty much the same as the driver's:
db.collection.update(
    { created_at: { $lt: time }, state: oldstate },
    { $set: { state: newstate } },
    false,
    true
)
The first false is for upsert, which does not make sense in this usage, and the final true marks this as a multi-document update.
If the documents are indeed going to be short lived and you have no other need for them afterwards, then you might consider capped collections. You can have a total size or time to live option for these and the natural insertion order favours processing of queued entries.
You could use something like this:
var d = new Date();
d.setSeconds(d.getSeconds() - 30);
db.mycollection.find({ created_at: { $lt: d } }).forEach(function(err, doc) {} );
The TTL option is also an elegant solution. It's an index that deletes documents automatically after x seconds, see here: https://docs.mongodb.org/manual/core/index-ttl/
Example code would be:
db.yourCollection.createIndex({ created:1 }, { expireAfterSeconds: 30 } )
