How to search by duration in Algolia - javascript

Let's say I am building a hotel booking platform, and every Room record has an availability calendar. A common search criterion is searching by duration: the user inputs a start date and an end date, and the database fetches rooms that are not occupied during that period.
I have implemented a very naive approach where I store the occupied days as an array of days.
attribute :occupied_at_i do
  array = []
  if !occupied_at.empty?
    occupied_at.each do |date|
      array << Time.parse(date).to_i
    end
  end
  array
end
And then on the client side, I add the following JavaScript code to cross-check whether each selected day is in the numeric refinements:
// Date Filters
$('.date-field').on('change', function() {
  // Only apply the refinement once every date field has a value
  if (_.every(_.map($('.date-field'), function(date) { return _.isEmpty(date.value); }), function(n) { return n == false; })) {
    helper.clearRefinements('occupied_at_i');
    var arrayOfDates = addDateFilters();
    _.forEach(arrayOfDates, function(n) {
      helper.addNumericRefinement('occupied_at_i', '!=', n);
    });
    showClearAllFilters();
    helper.search();
  }
});
So this is obviously not a good way to do it. I am wondering: what is a better way to leverage Algolia and search by duration?
Thanks

Hope this helps others (since the question was asked long back).
It is always better to do the filtering on the server side and render the results. One approach would be to add datetime fields start_date and end_date to the Room model, so that when the user inputs a start date and an end date, records are fetched based on occupied status in that duration. One simple query would be:
Room.where.not("start_date >= ? AND end_date <= ?", start_date, end_date)
A better solution would be to have another model that saves room booking details with start and end datetime fields. This way a particular room can have multiple bookings, each saved as its own record in the room bookings collection. The query would then become:
Room.left_joins(:room_bookings).where(
  "room_bookings.start_date > ? OR room_bookings.end_date < ?",
  end_date, start_date
)
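The no-overlap condition in that query can be sanity-checked outside the database. Here is a minimal plain-JavaScript sketch (the booking objects and field names are illustrative, not from the original code); ISO date strings compare correctly as plain strings:

```javascript
// A room is free for [start, end] when no booking overlaps that range.
// Two ranges overlap unless one ends before the other starts, which is
// the same condition the ActiveRecord query above expresses.
function isRoomFree(bookings, start, end) {
  return bookings.every(function (b) {
    return b.endDate < start || b.startDate > end;
  });
}

var bookings = [
  { startDate: '2024-03-10', endDate: '2024-03-12' },
  { startDate: '2024-03-20', endDate: '2024-03-22' }
];

console.log(isRoomFree(bookings, '2024-03-13', '2024-03-19')); // true
console.log(isRoomFree(bookings, '2024-03-11', '2024-03-15')); // false
```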

Related

Sql Server query that returns prices from each shop on each date and adds a 0 if no data is present for the shopId on a particular date

I have this SQL Server database table called productPrices:
shopId int
price decimal(18, 2)
dateFound datetime
I want to make a SQL query whose results are formatted for use in a chart.js line chart. The line chart takes the following parameters:
An array of X Values - which in this context will be the date/day (from lowest to highest)
Each shopId contains an array of prices.
I have made a codepen version of how the data should be displayed in the line chart (using just data added manually)
https://codepen.io/nickbuus/pen/OJxEGqK
The problem is that if no price was added for a shop on a particular day, a data value (for instance 0) still needs to be present, since each price array's length has to match the X values array's length.
How could I make a query that fills in 0 when a particular shopId doesn't have a value on a particular date?
What is the best way to format the data returned from the SQL query for the structure the chart.js line chart uses?
Ideally what you should be doing here is creating a Calendar Table. I'm not going to cover how you create a calendar table here, as a search in your favourite search engine of something like "Calendar Table SQL Server" will give you a huge wealth of resources and give some great explanations and their use cases.
Once you have a Calendar table, you need to CROSS JOIN that to your Shops table; which I also assume you have. Then you can simply LEFT JOIN to your Prices table.
So a parametrised query might look like this:
SELECT S.ShopID,
       COUNT(P.Price) AS Prices,
       C.CalendarDate AS DateFound
FROM dbo.Calendar C
     CROSS JOIN dbo.Shops S
     LEFT JOIN dbo.Prices P ON C.CalendarDate = P.DateFound --Though DateFound is a datetime, I assume its time portion is 00:00:00.000
                            AND S.ShopId = P.ShopID
WHERE C.CalendarDate >= @StartDate
  AND C.CalendarDate < DATEADD(DAY, 1, @EndDate)
GROUP BY S.ShopID,
         C.CalendarDate;
If you can't, for some reason, create a Calendar table and you don't have (and can't create) a Shop table (I strongly suggest you do create one though) then you'll need to use an inline tally to create your Calendar, and DISTINCT to get the Shop IDs.
A parametrised query would look something like this:
WITH N AS(
    SELECT N
    FROM (VALUES(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL))N(N)),
Tally AS(
    SELECT 0 AS I
    UNION ALL
    SELECT TOP(DATEDIFF(DAY, @StartDate, @EndDate))
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS I
    FROM N N1, N N2, N N3), --1,000 rows. Add more cross joins for more rows
Calendar AS(
    SELECT DATEADD(DAY, T.I, @StartDate) AS CalendarDate
    FROM Tally T),
Shops AS(
    SELECT DISTINCT P.ShopID
    FROM dbo.Prices P)
SELECT S.ShopID,
       COUNT(P.Price) AS Prices,
       C.CalendarDate AS DateFound
FROM Calendar C
     CROSS JOIN Shops S
     LEFT JOIN dbo.Prices P ON C.CalendarDate = P.DateFound --Though DateFound is a datetime, I assume its time portion is 00:00:00.000
                            AND S.ShopId = P.ShopID
GROUP BY S.ShopID,
         C.CalendarDate;
db<>fiddle
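Once rows of (shopId, dateFound, price) come back, with the calendar-table query above already filling 0 for missing days, shaping them for chart.js is a simple pivot. A sketch in plain JavaScript (the row and dataset field names are assumptions for illustration):

```javascript
// Pivot flat rows into a chart.js-style { labels, datasets } object.
// Assumes every (shopId, date) pair is present and rows are ordered by
// date, as the calendar-table query above guarantees.
function toChartData(rows) {
  var labels = [];
  var byShop = {};
  rows.forEach(function (r) {
    if (labels.indexOf(r.dateFound) === -1) labels.push(r.dateFound);
    if (!byShop[r.shopId]) byShop[r.shopId] = [];
    byShop[r.shopId].push(r.price);
  });
  var datasets = Object.keys(byShop).map(function (shopId) {
    return { label: 'Shop ' + shopId, data: byShop[shopId] };
  });
  return { labels: labels, datasets: datasets };
}

var rows = [
  { shopId: 1, dateFound: '2021-01-01', price: 10 },
  { shopId: 2, dateFound: '2021-01-01', price: 0 },
  { shopId: 1, dateFound: '2021-01-02', price: 12 },
  { shopId: 2, dateFound: '2021-01-02', price: 9 }
];
console.log(toChartData(rows));
```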

V-Calendar component in Vuetify; setting up events to scale across months

I'm looking for advice on the best way to store event dates in Postgres when it comes to fetching them and displaying them on a calendar. I'm using a node/expressjs backend with Postgres as the data store. On the front end I'm using Vue with Vuetify/Nuxt. Vuetify has a lot of convenient UI components, more specifically the v-calendar component:
V-Calendar
I've got a few edge cases that I'm having a hard time wrapping my head around.
I want to be able to fetch events for the current month from the database and events that spill over from one month to the next, and to the next, etc. What is the best way to do this? How should I model my database table and fetch the records (I'm using Postgres)? An event needs a name, start and end. Should I instead store the total duration of the event in a unix timestamp and query the events by range between a given month duration (in seconds)?
Any advice would be welcome.
Store your events with their beginning and end dates in a range type
You can then use the overlap && range operator to figure out which events belong on a certain month's calendar.
For instance, if you have an event with a duration column of type daterange defined as '[2020-01-01, 2020-03-31]'::daterange, it will match the following condition:
where duration && '[2020-02-01, 2020-03-01)'
Please note that the closing ) is deliberate since that excludes the upper limit from the range (in this case, 1 March).
In case you would rather not store the start and end dates inside a range type, you can always construct one on the fly:
where daterange(start_date, end_date, '[]') && '[2020-02-01, 2020-03-01)'
The range for the current month can be calculated on the fly:
select daterange(
date_trunc('month', now())::date,
(date_trunc('month', now()) + interval '1 month')::date, '[)'
);
daterange
-------------------------
[2020-07-01,2020-08-01)
(1 row)
Or for a three-month calendar:
select daterange(
(date_trunc('month', now()) - interval '1 month')::date,
(date_trunc('month', now()) + interval '2 month')::date, '[)'
);
daterange
-------------------------
[2020-06-01,2020-09-01)
(1 row)
The way we stored and retrieved events was that every time a user scrolls in the calendar, I use a method to return events by start_date_time for the current month plus the previous and next month, for a total of three months. This way we catch any calendar overlap. We use Laravel on the backend, but you should be able to get the general gist of the method. Our tableify method just formats data for us.
My DB structure is as follows (removing subjective data):
CREATE TABLE calendar_events (
    id bigserial NOT NULL,
    calendar_event_category_id int4 NOT NULL,
    name varchar(512) NOT NULL,
    description text NULL,
    start_date_time timestamp(0) NOT NULL,
    end_date_time timestamp(0) NULL,
    "data" json NULL,
    user_id int4 NULL,
    created_at timestamp(0) NULL,
    updated_at timestamp(0) NULL,
    CONSTRAINT calendar_events_pkey PRIMARY KEY (id),
    CONSTRAINT calendar_events_calendar_event_category_id_foreign FOREIGN KEY (calendar_event_category_id) REFERENCES calendar_event_categories(id),
    CONSTRAINT calendar_events_user_id_foreign FOREIGN KEY (user_id) REFERENCES users(id)
);
My index method:
public function index(Request $request)
{
    $currentDate = empty($request->filter_date) ? Carbon::now() : new Carbon($request->filter_date);

    if (! empty($request->filter_date)) {
        return api_response('Retrieved Calendar Events.',
            CalendarEvent::tableify($request,
                CalendarEvent::where('start_date_time', '>=', $currentDate->subMonth(1)->isoFormat('YYYY-MM-DD'))
                    ->where('start_date_time', '<=', $currentDate->addMonth(2)->isoFormat('YYYY-MM-DD'))
                    ->orderby('start_date_time', 'DESC')
            )
        );
    } else {
        return api_response('Retrieved Calendar Events.', CalendarEvent::tableify($request, CalendarEvent::orderby('start_date_time', 'DESC')));
    }
}
That's the way I solved the overlap problem. Every time the user scrolls the frontend checks if a month was changed, if so, it updates the calendar with the latest 3 month chunk.

PostgreSQL Database - How to query a table in order and sequentially (infinite scroll)

I am not lost when dealing with databases but also not an expert.
I want to implement infinite scroll on my website, which means data needs to be in order, either by date_created or id descending. My initial thought was to use LIMIT and OFFSET in a query like this (using SQLalchemy):
session.query(Posts).filter(Posts.owner_id == _userid_).filter(Posts.id < post_id).order_by(desc(Posts.id)).limit(5).all()
which translates to something like this:
SELECT * from posts WHERE owner_id = _userid_ AND id < _post_id_ ORDER BY id DESC LIMIT 10 OFFSET _somevalue_;
and in my js:
var minimum_post_id = 0;
var posts_list = [];
var post_ids = [];

function infinite_load(_userid_, _post_id_) {
  fetch('/users/' + _userid_ + '/posts/' + _post_id_)
    .then(r => r.json())
    .then(data => {
      console.log(data);
      data.posts.forEach(post => { posts_list.push(post); post_ids.push(post.id); });
      minimum_post_id = Math.min(...post_ids);
    });
}

infinite_load(1, minimum_post_id); // random user id
However, i was researching to see if this was efficient and came across this: https://www.eversql.com/faster-pagination-in-mysql-why-order-by-with-limit-and-offset-is-slow/
Basically it is saying that LIMIT with OFFSET is bad because the database still has to fetch and count all of the offset rows, only to throw them away.
So my question is: is my implementation inadequate? How do I efficiently query a database sequentially?
Pagination -- done correctly -- has a few more barbs than a simple "What id range did we show last page? Add 10 to limit and offset." Some quick questions to whet your appetite, then a suggestion:
While a user is looking at items positioned 11 through 20, a record is inserted at position 15. What is returned to the user upon clicking the 'Next' pagination button?
Conversely, while a user is looking at records positioned 101 through 110, 10 arbitrary records below position 100 are removed. What does the user get after a 'Next' pagination click? Or a 'Previous' pagination click?
Depending on your data model, schema, and UI requirements, these can be simple or really difficult to answer.
Now, to why LIMIT/OFFSET is the wrong way to do it ... It's not, actually, provided you have a small enough dataset -- and that can be plenty large for most sites. In other words, pick what works for your setup.
Meanwhile, for the pedagogically minded under the "really large" data set assumption: it's the OFFSET that is the killer part of that query (as it requires the results to be tallied, sorted, counted, then skipped before the LIMIT can kick in). So, how can we remove the OFFSET? Incorporate it into the CONSTRAINT section of your query.
Your query orders by ID, then offsets by some number. Remove the offset, by ensuring that the ID is greater (or less) than what the current screen shows for the user:
SELECT * FROM posts
WHERE
owner_id = _userid_
AND id < _last_displayed_id
ORDER BY id DESC
LIMIT 10;
Similarly, if you're ordering by time, then, make your pagination button (or scroll handler) request new records after/before the last item already presented to the user.
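The keyset ("seek") technique in that query can be mimicked over an in-memory array to see what it returns. A sketch in plain JavaScript (the data and page size are illustrative):

```javascript
// Keyset pagination: instead of an OFFSET, remember the last id shown
// and ask only for ids below it, exactly as the SQL above does.
function nextPage(posts, lastDisplayedId, pageSize) {
  return posts
    .filter(function (p) { return p.id < lastDisplayedId; })
    .sort(function (a, b) { return b.id - a.id; })
    .slice(0, pageSize);
}

var posts = [];
for (var i = 1; i <= 25; i++) posts.push({ id: i });

var page1 = nextPage(posts, Infinity, 10);                    // ids 25..16
var page2 = nextPage(posts, page1[page1.length - 1].id, 10);  // ids 15..6
console.log(page1[0].id, page2[0].id); // 25 15
```

No row is ever skipped over: each page starts exactly where the previous one ended, which is why an index on id serves this query directly.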

Trying to sort a find using MongoDB with Meteor [duplicate]

I am working on my first project using Meteor, and am having some difficulty with sorting.
I have a form where users enter aphorisms that are then displayed in a list. Currently the most recent aphorisms automatically display at the bottom of the list. Is there an easy way to have the most recent appear at the top of the list instead?
I tried:
Template.list.aphorisms = function () {
return Aphorisms.find({}, {sort: {$natural:1}});
};
And am stumped because the Meteor docs don't have many examples.
Assuming that date_created is in a valid date format along with the timestamp, you should insert the parsed value of date_created using the JavaScript Date.parse() function, which gives the number of milliseconds between January 1, 1970 and the date value contained in date_created.
As a result, the most recently added record will contain a greater value of date_created than any record inserted before it.
Now when fetching the records, sort the cursor in descending order of the date_created parameter as:
Aphorisms.find({}, {sort: {date_created: -1}});
This will sort records from newer to older.
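The same newest-first ordering can be reproduced in plain JavaScript on the millisecond values that Date.parse() yields (the sample documents are illustrative):

```javascript
// Sort newest-first by date_created, mirroring {sort: {date_created: -1}}.
var docs = [
  { text: 'older',  date_created: Date.parse('2014-01-01T00:00:00Z') },
  { text: 'newest', date_created: Date.parse('2014-03-01T00:00:00Z') },
  { text: 'oldest', date_created: Date.parse('2013-12-01T00:00:00Z') }
];

docs.sort(function (a, b) { return b.date_created - a.date_created; });
console.log(docs.map(function (d) { return d.text; })); // → ['newest', 'older', 'oldest']
```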
I've found the following to be a cleaner solution:
Template.list.aphorisms = function () {
return Aphorisms.find().fetch().reverse();
};
Given that the entire collection already exists in the reverse of the order that you would like, you can simply fetch all the objects into an array and reverse it.

Time sensitive data in Node.js

I'm building an application with Node.js and MongoDB, and the application holds time-sensitive data: a piece of data that is inserted into the database
should be removed from it (via code) after three days (or any given number of days / time span).
Currently, my solution is to have a member in my Schema that records when the document was posted, and to remove the document once the current time is more than 3 days past the insertion time, but I'm having trouble figuring out a good way to write this in code.
Are there any standard ways to accomplish something like this?
There are two basic ways to accomplish this with a TTL index. A TTL index will let you define a special type of index on a BSON Date field that will automatically delete documents based on age. First, you will need to have a BSON Date field in your documents. If you don't have one, this won't work. http://docs.mongodb.org/manual/reference/bson-types/#document-bson-type-date
Then you can either delete all documents after they reach a certain age, or set expiration dates for each document as you insert them.
For the first case, assuming you wanted to delete documents after 1 hour you would create this index:
db.mycollection.ensureIndex( { "createdAt": 1 }, { expireAfterSeconds: 3600 } )
assuming you had a createdAt field that was a date type. MongoDB will take care of deleting all documents in the collection once they reach 3600 seconds (or 1 hour) old.
For the second case, you will create an index with expireAfterSeconds set to 0 on a different field:
db.mycollection.ensureIndex( { "expireAt": 1 }, { expireAfterSeconds: 0 } )
If you then insert a document with an expireAt field set to a date mongoDB will delete that document at that date and time:
db.mycollection.insert( {
    "expireAt": new Date('June 6, 2014 13:52:00'),
    "mydata": "data"
} )
You can read more detail about how to use TTL indexes here:
http://docs.mongodb.org/manual/tutorial/expire-data/
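For the three-day window from the question, the expireAt value can be computed in Node before the insert. A minimal sketch (the helper name is illustrative; the actual insert call is omitted):

```javascript
// Compute an expireAt value N days in the future, to store alongside
// the document for the expireAfterSeconds: 0 index shown above.
function expireAtInDays(days, from) {
  var base = from || new Date();
  return new Date(base.getTime() + days * 24 * 60 * 60 * 1000);
}

var inserted = new Date('2014-06-03T13:52:00Z');
var expireAt = expireAtInDays(3, inserted);
console.log(expireAt.toISOString()); // 2014-06-06T13:52:00.000Z
```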
