Methods for tracking changes when making realtime updates to a webpage - javascript

I'm looking to update a list of orders (and statuses) real-time on a webpage. The orders in the (MySQL) database are updated asynchronously through other processes (PHP).
I'm familiar with the mechanics of pushing data to pages (polling, event-source). This is not about that.
What I'm struggling with is figuring out exactly what data to push to each user, so that I am both:
not needlessly updating list entries that haven't changed, and
not missing any updates.
My table does have a DateTime column last_update_date that I update when there are any changes to the order. I know MySQL doesn't really have any event triggers that can trigger other code.
Ideas so far:
In my JS I could track the time of the last request and on every subsequent request, ask for data since that time. This doesn't work because JS time will most likely not match server MySQL time.
The same could probably be done by storing the server time in the user session. I feel like this would work most of the time, but depending on the timing of the DB update and the requests, changes could be missed, since the DB only stores a DATETIME with a precision of 1 second.
I'm sure there's a more atomic way to do this, I am just drawing a blank though. What are suitable design patterns for this?

You are correct that you must poll your database for changes, and that MySQL can't push changes to other applications.
The trick is to use server time throughout for your polling. Use a table to keep track of polling. For example, suppose your users have user_id values. Then make a poll table consisting of
user_id INT primary key
polldate DATETIME
First, make sure your user has an entry in the poll table showing a long-ago polldate. (INSERT IGNORE doesn't overwrite an existing row in the table.)
SET @userid := <<your user's id>>;
INSERT IGNORE INTO poll (user_id, polldate) VALUES (@userid, '1970-01-01');
Then, each time you poll, do this sequence of operations.
Lock the poll row for the user:
START TRANSACTION;
SELECT polldate INTO @polldate
FROM poll
WHERE user_id = @userid
FOR UPDATE;
Retrieve the updated rows you need: those changed since the last poll.
SELECT t.whatever, t.whatelse
FROM transaction_table t
JOIN poll p ON t.user_id = p.user_id
WHERE t.user_id = @userid
AND t.last_update_date > p.polldate;
Update the poll table's polldate column
UPDATE poll p
JOIN (
    SELECT user_id, MAX(last_update_date) AS last_update_date
    FROM transaction_table
    WHERE user_id = @userid
    GROUP BY user_id
) t ON t.user_id = p.user_id
SET p.polldate = t.last_update_date
WHERE p.user_id = @userid
AND t.last_update_date > p.polldate;
And commit the transaction.
COMMIT;
Every time you use this sequence you'll get the items from your transaction table that have been updated since the preceding poll. If there are no items, the polldate won't change. And, it's all in server time.
You need the transaction in case some other client updates a transaction table row between your SELECT and your UPDATE queries.
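For illustration, here is how the whole per-poll sequence could be wrapped server-side. This is a sketch in Node using the mysql2 package (the question's stack is PHP, so treat it purely as an illustration of the ordering, not as this answer's implementation):

const mysql = require('mysql2/promise');

// One poll cycle: lock the user's poll row, read the changed rows,
// advance the poll pointer, commit. Assumes the INSERT IGNORE setup
// above has already run for this user.
async function pollOnce(pool, userId) {
    const conn = await pool.getConnection();
    try {
        await conn.beginTransaction();
        // Lock this user's poll row so concurrent polls serialize.
        const [[poll]] = await conn.query(
            'SELECT polldate FROM poll WHERE user_id = ? FOR UPDATE', [userId]);
        const [changes] = await conn.query(
            'SELECT t.* FROM transaction_table t WHERE t.user_id = ? AND t.last_update_date > ?',
            [userId, poll.polldate]);
        if (changes.length > 0) {
            // Advance the pointer to the newest change we are returning.
            const newest = changes.reduce(
                (max, c) => (c.last_update_date > max ? c.last_update_date : max),
                poll.polldate);
            await conn.query('UPDATE poll SET polldate = ? WHERE user_id = ?',
                [newest, userId]);
        }
        await conn.commit();
        return changes;
    } catch (err) {
        await conn.rollback();
        throw err;
    } finally {
        conn.release();
    }
}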

The solution O.Jones provided makes tracking updates atomic, but it fails if the following all occurs within one second:
An order update is written to the table (update 1)
A poll action occurs
An order update is written to the table (update 2)
In this scenario, the next poll action will either miss update 2 or duplicate update 1, depending on whether you use > or >= in your query. This is not the fault of the code; it's a limitation of the MySQL DATETIME type, which by default has a resolution of 1 second. Fractional-seconds support (DATETIME(6), available since MySQL 5.6.4) mitigates this somewhat, but still does not guarantee atomicity.
The solution I ended up using was creating an order_changelog table:
CREATE TABLE `order_changelog` (
    `id` INT NOT NULL AUTO_INCREMENT,
    `order_id` INT NOT NULL,
    `update_date` DATETIME NOT NULL,
    PRIMARY KEY (`id`)
);
This table gets a new row any time a change to an order is made, effectively enumerating every update.
For the client side, the server stores the last ID from order_changelog that was sent in the session. Every time the client polls, I get all rows from order_changelog that have an ID greater than the ID stored in the session and join the orders to it.
$last_id = (int) $_SESSION['last_update_id']; // cast: never interpolate raw data into SQL
$sql = "SELECT o.*, MAX(c.id) AS update_id
FROM order_changelog c
LEFT JOIN orders o ON c.order_id = o.id
WHERE c.id > $last_id
GROUP BY o.id
ORDER BY o.order_date";
(MAX(c.id) gives a deterministic update_id per order, and its largest value is what goes back into $_SESSION['last_update_id'] for the next poll.)
I now am guaranteed to have all the orders since last poll, with no duplicates, and I don't have to track individual clients.
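Client-side, the poll loop itself can then stay trivial. A sketch (the /orders/updates endpoint and the updateOrderRow renderer are hypothetical stand-ins for your own):

// The server keeps last_update_id in the session, so the client just
// fetches and renders whatever changed since the last poll.
setInterval(function () {
    fetch('/orders/updates', { credentials: 'same-origin' })
        .then(function (res) { return res.json(); })
        .then(function (orders) {
            orders.forEach(updateOrderRow);
        });
}, 5000);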

Related

How to know how many items a Firestore query will return while implementing pagination

Firestore has this guide on how to paginate a query:
Firestore - Paginate data with query cursors
They show the following example:
Paginate a query
Paginate queries by combining query cursors with the limit() method. For example, use the last document in a batch as the start of a cursor for the next batch.
var first = db.collection("cities")
    .orderBy("population")
    .limit(25);
return first.get().then(function (documentSnapshots) {
    // Get the last visible document
    var lastVisible = documentSnapshots.docs[documentSnapshots.docs.length - 1];
    console.log("last", lastVisible);
    // Construct a new query starting at this document,
    // get the next 25 cities.
    var next = db.collection("cities")
        .orderBy("population")
        .startAfter(lastVisible)
        .limit(25);
});
QUESTION
I get the example, but how can I know how many items (in total, without the limit restriction) that query will return? I'll need that to calculate the number of pages and control the pagination component, won't I?
I can't simply display next and back buttons without knowing the limit.
How is it supposed to be done? Am I missing something?
You can't know the size of the result set in advance. You have to page through all the results to get the total size. This is similar to not being able to know the size of a collection without also recording that yourself somewhere else - it's just not scalable to provide this information, in the way that Cloud Firestore needs to scale.
This is not possible; the iterator cannot know how many documents it contains, as they are fetched via a gRPC stream.
But there is a workaround. It takes some setup:
1) Keep a counter in a Firestore doc, which you increment or decrement inside a transaction every time you add or remove an entry.
2) Store the current count in a field of each new entry, e.g. position = 10.
Then you create an index on that field (position DESC).
This way you can emulate skip+limit with a where("position", "<", N).orderBy("position", "desc").
It's complex, but it does the trick.
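A sketch of that workaround, using the same v8-style SDK as the example above (the counters doc, the position field, and totalCount are assumptions for illustration, not Firestore API):

// On every insert, atomically bump the counter and stamp the new doc
// with its position, inside one transaction.
var counterRef = db.collection("counters").doc("cities");
db.runTransaction(function (tx) {
    return tx.get(counterRef).then(function (snap) {
        var count = (snap.exists ? snap.data().count : 0) + 1;
        tx.set(counterRef, { count: count });
        tx.set(db.collection("cities").doc(), { name: "Tokyo", position: count });
    });
});

// Later, page 3 with a page size of 25 becomes a plain range query,
// with totalCount read once from the counter doc:
db.collection("cities")
    .where("position", "<=", totalCount - 2 * 25)
    .orderBy("position", "desc")
    .limit(25);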

Breezejs Unique Constraint with Delete

I have the following table:
CREATE TABLE Foo (
    Id int not null primary key,
    YesNo char(1) not null default('N')
)
That has the following constraint - "one and only one row may have the Value 'Y'"
CREATE UNIQUE NONCLUSTERED INDEX [IX_YesNo] ON [dbo].[Foo]
(
[YesNo] ASC
)
WHERE ([YesNo]=('Y'))
The application code (Breeze JS) enforces that one row is always 'Y'. So if you Delete the Row with YesNo = 'Y', the BLL sets another Row's YesNo field to be Y.
origEntity.entityAspect.setDeleted();
otherEntity.YesNo('Y');
When performing the actual DB operations, Breeze FIRST updates the other row to 'Y', prior to performing the delete of the original, which violates the unique constraint. Is there an easy way to make the DELETE happen first, or do I need special server-side delete handling?
Breeze does not control the order of operations performed on the server. You didn't say what technology you're using on the server but the question tags tell me it is EF and SQL Server. In that case, it is EF that is doing the updates before the delete.
I wish there was a way to tell EF what to do. That is not possible so far as I know.
You can take over, and it isn't hard to do, especially if you can isolate this sequence of operations from others. Take a look at the beforeSave... methods. If you need both parts of the save inside the same transaction (likely), learn how to set up your own ambient transaction so that you can make two calls to EF (or the database directly): one to do the deletes, and the other to do the updates.
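If you can live without a single transaction, there is also a purely client-side way to force the ordering: issue two saves so the delete commits before the update. A sketch, using Breeze's ability to save a specific list of entities (manager is your EntityManager):

// Caveat: two separate transactions. A failure between them can leave
// no row set to 'Y', so only use this if that window is acceptable.
origEntity.entityAspect.setDeleted();
manager.saveChanges([origEntity]).then(function () {
    // The DELETE has committed; now it is safe to flip the other row.
    otherEntity.YesNo('Y');
    return manager.saveChanges();
}, function (error) {
    console.log('Save failed: ', error);
});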

Compound Query JS SDK paRse.com

I have one class Messages with 3 principal fields:
id, FromUser, ToUser
I want a query where ToUser equals a given value and FromUser is not repeated. In other words: get all the distinct FromUser values who sent me a message.
Any Idea?
Thanks!
As @Fosco says, "group by" or "select distinct" are not supported yet in Parse.com.
Moreover, keep in mind the restrictions on the query limit (max 1000 results per query) and on request timeouts (3 seconds in beforeSave events, 7-10 seconds in custom Cloud Code functions). For "count" queries, the request timeout is the relevant restriction.
I'm working on Parse.com too, and I've changed the structure of my db model a lot, often adding redundant (denormalized) columns to several classes and keeping them carefully updated for each necessary query.
For cases like yours, I suggest making a custom function that takes two parameters (say, myLimit and myOffset) for lazy loading, selects a slice at a time, and programmatically filters the resulting array (with a simple for loop, or one of the utilities in UnderscoreJS). Start with small slices (e.g. 200-300 records per query) and stop when a query returns zero results (end reached). You could count all the items before starting, but the timeout limitation could cause you problems. If this doesn't work as expected, try doing the same client side, as sketched below.
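A client-side version of that slice-and-filter idea might look like this (a sketch; fetchSenders is a made-up helper, and it assumes FromUser is stored as a plain string id):

// Page through Messages for one ToUser, 200 at a time, collecting
// distinct FromUser values until a slice comes back empty.
function fetchSenders(toUser, offset, seen, callback) {
    var query = new Parse.Query("Messages");
    query.equalTo("ToUser", toUser);
    query.limit(200);
    query.skip(offset);
    query.find().then(function (messages) {
        if (messages.length === 0) {
            callback(Object.keys(seen));    // end reached
            return;
        }
        for (var i = 0; i < messages.length; i++) {
            seen[messages[i].get("FromUser")] = true;   // dedupe
        }
        fetchSenders(toUser, offset + 200, seen, callback);
    });
}
fetchSenders("myUserId", 0, {}, function (senders) {
    console.log(senders);
});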
You could also take a different approach: create another table/class and, for each new message, add the FromUser to it ONLY if it doesn't already exist there for that specific ToUser.
Hope it helps

Javascript function taking too long to complete?

Below is a snippet of code that I am having trouble with. The purpose is to check for duplicate entries in the database and call back with a boolean h, true or false. For testing purposes I am returning a true boolean for h, but by the time the alert(duplicate_count); line gets executed, duplicate_count is still 0, even though the "count +1" alert does get executed.
To me it seems like the function updateUserFields is taking longer to execute, so it finishes after the final alert is reached.
Any ideas or suggestions? Thanks!
var duplicate_count = 0;
for (var i = 0; i < skill_id.length; i++) {
    function updateUserFields(h) {
        if (h) {
            duplicate_count++;
            alert("count +1");
        } else {
            alert("none found");
        }
    }
    var g = new cfc_mentoring_find_mentor();
    g.setCallbackHandler(updateUserFields);
    g.is_relationship_duplicate(resource_id, mentee_id, section_id[i], skill_id[i], active_ind, table);
}
alert(duplicate_count);
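(For reference: the calls are asynchronous, so every callback fires after the final alert. One common fix, sketched here against the same CFC proxy objects from the question, is to count completed callbacks and read the total only when the last one returns.)

var duplicate_count = 0;
var completed = 0;

function updateUserFields(h) {
    if (h) {
        duplicate_count++;
    }
    completed++;
    if (completed === skill_id.length) {
        // Every callback has returned; the total is now meaningful.
        alert(duplicate_count);
    }
}

for (var i = 0; i < skill_id.length; i++) {
    var g = new cfc_mentoring_find_mentor();
    g.setCallbackHandler(updateUserFields);
    g.is_relationship_duplicate(resource_id, mentee_id, section_id[i], skill_id[i], active_ind, table);
}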
There is no reason whatsoever to use client-side JavaScript/jQuery to remove duplicates from your database. Security concerns aside (and there are a lot of those), there is a much easier way to make sure the entries in your database are unique: use SQL.
SQL is capable of expressing the requirement that there be no duplicates in a table column, and the database engine will enforce that for you, never letting you insert a duplicate entry in the first place. The syntax varies very slightly by database engine, but whenever you create the table you can specify that a column must be unique.
Let's use SQLite as our example database engine. The relevant part of your problem is right now probably expressed with tables something like this:
CREATE TABLE Person(
    id INTEGER PRIMARY KEY ASC
    -- Other fields here
);
CREATE TABLE MentorRelationship(
    id INTEGER PRIMARY KEY ASC,
    mentorID INTEGER,
    menteeID INTEGER,
    FOREIGN KEY (mentorID) REFERENCES Person(id),
    FOREIGN KEY (menteeID) REFERENCES Person(id)
);
However, you can enforce uniqueness, i.e. require that each (mentorID, menteeID) pair is unique, by making the pair (mentorID, menteeID) the primary key. This works because you are only allowed one copy of each primary key. Then, the MentorRelationship table becomes
CREATE TABLE MentorRelationship(
    mentorID INTEGER,
    menteeID INTEGER,
    PRIMARY KEY (mentorID, menteeID),
    FOREIGN KEY (mentorID) REFERENCES Person(id),
    FOREIGN KEY (menteeID) REFERENCES Person(id)
);
EDIT: As per the comment, alerting the user to duplicates but not actually removing them
This is still much better with SQL than with JavaScript. When you do this in JavaScript, you read one database row at a time, send it over the network, wait for it to come to your page, process it, throw it away, and then request the next one. With SQL, all the hard work is done by the database engine, and you don't lose time by transferring unnecessary data over the network. Using the first set of table definitions above, you could write
SELECT mentorID, menteeID
FROM MentorRelationship
GROUP BY mentorID, menteeID
HAVING COUNT(*) > 1;
which will return all the (mentorID, menteeID) pairs that occur more than once.
Once you have a query like this working on the server (and are also pulling out all the information you want to show to the user, which is presumably more than just a pair of IDs), you need to send this over the network to the user's web browser. Essentially, on the server side you map a URL to return this information in some convenient form (JSON, XML, etc.), and on the client side you read this information by contacting that URL with an AJAX call (see jQuery's website for some code examples), and then display that information to the user. No need to write in JavaScript what a database engine will execute orders of magnitude faster.
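The client side of that can stay tiny. A sketch with jQuery (the /api/duplicate-pairs URL is hypothetical; use whatever URL you mapped on your server):

// Fetch the duplicate report as JSON and show it; all the heavy
// lifting already happened in the database.
$.getJSON('/api/duplicate-pairs', function (pairs) {
    pairs.forEach(function (p) {
        console.log('duplicate:', p.mentorID, p.menteeID);
    });
});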
EDIT 2: As per the second comment, checking whether an item is already in the database
Almost everything I said in the first edit applies, except for two changes: the schema and the query. The schema should become the second of the two schemas I posted, since you don't want the database engine to allow duplicates. Also, the query should be simply
SELECT COUNT(*) > 0
FROM MentorRelationship
WHERE mentorID = @mentorID AND menteeID = @menteeID;
where @mentorID and @menteeID are the items that the user selected, inserted into the query by a query builder library and not by string concatenation. Then, the server will get a true value if the item is already in the database, and a false value otherwise. The server can send that back to the client via AJAX as before, and the client (that's your JavaScript page) can alert the user if the item is already in the database.
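The round trip for that check might look like this on the client (again a sketch; the URL and response shape are hypothetical):

// Ask the server whether this (mentor, mentee) pair already exists
// before letting the user submit it again.
$.getJSON('/api/relationship-exists',
    { mentorID: mentorID, menteeID: menteeID },
    function (result) {
        if (result.exists) {
            alert('That mentoring relationship is already in the database.');
        }
    });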

How to get the last record of the table and update a column value and insert to the same column with new record using AngularJS/JavaScript/jQuery

I am working in Web API & MVC, using AngularJS for CRUD operations.
In my DB I have a table "Accounts". It has a column named "ID" which is inserted as
1 for the first record, 2 for the second record, etc.
This column's value increments according to the last record in the table,
and this process should happen on the client side only.
Thanks in advance
If I understand your question correctly, you want to send the next ID from the db to the client, create a new record on the client with that ID, and then insert it into the db.
1) One option is to create an empty record in the db and send its id/primary key to the client. This approach has a potential problem: if the client stops or chooses to cancel the operation, a lot of empty records get created.
2) Otherwise, you can fetch the last ID using select max(id) from table and then use id+1 for the new record. The problem with this approach is that when multiple clients try to update the db at the same time, all of them will/may end up with the same id.
3) You can overcome the problem in (2) with locking mechanisms, but that's not how you should do it; it's not worth it.
In my opinion, in most cases the client doesn't need to know the id of the record before it is created. Once the record is created, you can send the id back from the db, as sketched below.
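A minimal AngularJS sketch of that last option (the /api/accounts endpoint is hypothetical; the server inserts with an AUTO_INCREMENT column and returns the generated id):

// Let the database assign the ID and read it back from the response,
// instead of computing it client-side.
app.factory('AccountService', ['$http', function ($http) {
    return {
        create: function (account) {
            return $http.post('/api/accounts', account).then(function (response) {
                return response.data.id;   // id generated by the db
            });
        }
    };
}]);

// Usage: AccountService.create({ name: 'Foo' }).then(function (id) { ... });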
Once you're within the table, do everything in there. It's faster and easier.
UPDATE table1 SET column = (SELECT MAX(id) FROM Accounts WHERE user = ?) WHERE user = ?
