I have a meteor application that is displaying records in a table using the code below:
Template.records.helpers({
  trackingData: function() {
    return Tracking.find({}, {$sort: {fullDate: -1}});
  }
});
And
<table>
...
{{#each trackingData}}
<tr class="record" id="{{_id._str}}">
...
{{/each}}
...
</table>
And
Meteor.publish('tracking', function(filter, offset) {
  var records = Tracking.find(filter, {
    sort: {fullDate: -1},
    limit: 10,
    skip: offset * 10
  });
  return records;
});
For some reason, when a new record is added it always shows up at the bottom of the table. Based on the sort I have in place, the new record should show at the top. What's odd is that when I refresh the page the record stays at the bottom, but when I stop my app and restart it, the record shows at the top like it should. What might I be missing that would cause this odd sorting behavior?
You might want to use Tracker.flush() or similar:
Normally, when you make changes (like writing to the database), their impact (like updating the DOM) is delayed until the system is idle. This keeps things predictable — you can know that the DOM won’t go changing out from under your code as it runs. It’s also one of the things that makes Meteor fast.
Tracker.flush forces all of the pending reactive updates to complete. For example, if an event handler changes a Session variable that will cause part of the user interface to rerender, the handler can call flush to perform the rerender immediately and then access the resulting DOM.
Put differently, Collection.find returns a cursor, and the order in which its documents come out is determined where the cursor is consumed; the client must apply its own sort rather than rely on the order in which documents arrived.
The topic was covered by this blog as well:
When you publish documents to the client, they are merged with other documents from the same collection and rearranged into an in-memory data store called minimongo. The key word being rearranged.

Many new meteor developers have a mental model of published data as existing in an ordered list. This leads to questions like: "I published my data in sorted order, so why doesn't it appear that way on the client?" That's expected. There's one simple rule to follow: if you need your documents to be ordered on the client, sort them on the client. Sorting in a publish function isn't usually necessary unless the result of the sort changes which documents are sent (e.g. you are using a limit).

You may, however, want to retain the server-side sort in cases where the data transmission time is significant. Imagine publishing several hundred blog posts but initially showing only the most recent ten. In this case, having the most recent documents arrive on the client first would help minimize the number of template renderings.
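Applied to the question's helper, the rule just means moving the sort into the client-side query: `Tracking.find({}, { sort: { fullDate: -1 } })` (note that Minimongo expects the key `sort`, not `$sort`). Since minimongo stores documents unordered, this is equivalent to sorting the fetched array yourself, which can be sketched in plain JavaScript (sample documents are made up):

```javascript
// Documents as they might sit in minimongo: insertion order is arbitrary.
const docs = [
  { _id: "a", fullDate: new Date("2024-01-02") },
  { _id: "b", fullDate: new Date("2024-01-05") },
  { _id: "c", fullDate: new Date("2024-01-03") },
];

// Plain-JS equivalent of Tracking.find({}, { sort: { fullDate: -1 } }):
// newest date first, regardless of arrival order.
const sorted = docs.slice().sort((x, y) => y.fullDate - x.fullDate);
console.log(sorted.map((d) => d._id)); // [ 'b', 'c', 'a' ]
```

Because the sort runs on every read of the reactive cursor, a newly inserted document lands in the right position immediately, which is exactly the behavior the question is missing.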
When using a transaction and a select * ... FOR UPDATE to lock a row, is it possible to do a "soft" commit that would write the changes so far to the table so they become permanent, while retaining the lock on the row?
In this specific use case, I have a long running function that triggers a series of operations based on a particular record. During that long running function, the row should remain locked for modification by other parts of the application.
However, at different stages of the function there are side-effect triggers that need to be committed to the database (and made permanent).
If anything happens past one of those steps it would only roll back to that point.
If I just COMMIT, my current transaction finishes (and I can't run further operations in it) and any other queued operation kicks in.
COMMIT AND CHAIN doesn't prevent existing pending transactions from kicking in first.
Is there a way to do this at the database level?
No, that is not possible. If you need to prevent concurrent data modifications for a longer time, long transactions are not a good solution. You should solve this with application logic, for example by adding a boolean column that indicates that the row is being worked on.
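As a minimal sketch of that flag-based approach (table and column names are assumptions, not from the question): the claim is taken in its own short transaction, so it commits immediately and survives while the long-running work proceeds in further short transactions. The SQL is shown in comments; the claim semantics are simulated in plain JavaScript.

```javascript
// Assumed schema, sketched in comments:
//
//   -- claim the row in a short transaction; the WHERE clause makes it atomic:
//   UPDATE jobs SET in_progress = true
//   WHERE id = $1 AND NOT in_progress RETURNING id;
//
//   -- release the claim once the long-running work is finished:
//   UPDATE jobs SET in_progress = false WHERE id = $1;
//
// In-memory simulation of those claim semantics:
const row = { id: 1, in_progress: false };

function claim(r) {
  if (r.in_progress) return false; // another worker already owns the row
  r.in_progress = true;            // "commit" the claim
  return true;
}

const first = claim(row);  // succeeds and persists
const second = claim(row); // fails while the work is in progress
console.log(first, second);
```

Each step of the long-running function can then COMMIT normally, and a crash rolls back only the current step, which is the "soft commit" behavior the question asks for, achieved in application logic rather than at the database level.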
Let's add a <FlatList/> into our application.
The first requirement we have is to render a predefined set of 5 items. We define a constant in our component, pass it into the list via the data prop and it works just fine...
... until we decide to store this data on a server and expose it via an API. OK, no problem: we'll fetch the data in our componentDidMount() method, put it into state when it finishes loading, pass that state to the data prop, and it also works just fine...
... until we notice that we have a huge delay before we can show the first item of the list. That is because the amount of data we're loading from the API grew significantly over time. Maybe now it is some REST resource collection consisting of thousands of items.
Naturally, we decide to implement pagination in our API. And that is when things start to get interesting... When do we load the next page of the resource collection? We reach for the wonderful React Native API reference, examine the FlatList part of it, and figure out that it has a very handy onEndReached callback prop. Wonderful! Let's load the next page of our collection every time this callback is called! It would work like a charm...
... until we receive a bug report in the mail. In it, a user tells us that the data is not sorted properly in the list, that some items are duplicated, and that some items are just missing.
After some quick debugging we are able to reproduce the issue and figure out what causes it. Just set onEndReachedThreshold = { 5 } and scroll the list very fast: the onEndReached callback fires again before the previous invocation has finished.
Inside our component, we have a variable pageId storing the last page ID we loaded. Each time onEndReached fires we use it to construct the next page URL and then increment it. The problem is that this callback is invoked concurrently, so the same value of pageId is used multiple times.
I used to do a bit of multithreading programming before, I've heard of mutexes, semaphores, and atomicity. I would like to be able to acquire an exclusive lock on the pageId to use it in this concurrent callback.
But after a quick Internet search, it seems that JS does not provide such tools out of the box. I found some libraries like this one, but it doesn't look like a good candidate for a dependency: it's not very actively developed, it's not made by a major vendor, etc. It looks more like a hobby project.
The question is: what are the industry-standard rock-solid tools or patterns for thread-safe React Native programming? How can I solve the described concurrency issue in a React Native application?
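For what it's worth, the standard answer is that JavaScript's single-threaded event loop already gives you atomicity between awaits, so a promise chain behaves like a mutex: each handler queues behind the previous one, and pageId is read and incremented without interleaving. A minimal sketch (all names are assumptions, with the network fetch stubbed out):

```javascript
// Serialize page loads on a promise chain so concurrent onEndReached
// callbacks can never reuse the same pageId.
let pageId = 0;
let chain = Promise.resolve();
const requested = [];

function loadNextPage() {
  // Each call queues behind the previous one instead of running concurrently.
  chain = chain.then(async () => {
    const current = pageId; // read...
    pageId += 1;            // ...and increment before any other call can run
    requested.push(current); // stand-in for: await fetch(`/items?page=${current}`)
  });
  return chain;
}

// Simulate onEndReached firing three times in quick succession.
Promise.all([loadNextPage(), loadNextPage(), loadNextPage()]).then(() => {
  console.log(requested); // [ 0, 1, 2 ] -- each page requested exactly once
});
```

The same idea, plus an early return while a load is already in flight (an `isLoading` flag), is what most production FlatList pagination code does; no locking library is needed.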
I'm creating an events app with React Native. I just wanted some advice on which would be the better, more performant, and more scalable way to structure my data model in Firestore.
I have 2 collections Events and Users.
A user creates an event, which goes into the Events collection. In my app, users can then go to the main page and view a list of events from that collection.
I also want to have a second page in the app a "users profile" page where users can view a list of their own events, update and delete them.
My question is which would be better:
1. store the event's key in an array in users/user1
2. store basically a duplicate of the event in a subcollection called events in users/user1
I feel that option 1 might be better: just store a reference to the doc in an array, so I don't have duplicates of the event, and if the user has to update the event, only one write has to be made, to the actual event in the events collection.
The event is likely to have more fields added in the future, like a comments field, so I feel that by going with option 1 I don't have to keep doing double work, although I might have to read twice: first users/user1 to get the array of keys (events: [{dockey}]), then use each key to get the actual event doc in the events collection.
Thank you for any feedback and advice
There is no simple right or wrong answer when you need to choose between these two options. Data duplication is the key to faster reads, not just in the Firebase Realtime Database or Cloud Firestore, but in general. Any time you add the same data to a different location, you're duplicating data in favor of faster read performance. Unfortunately, in return you get more complex updates and higher storage/memory usage. Note, though, that extra calls are not expensive in the Firebase Realtime Database, whereas in Firestore they are. How much duplicated data versus how many extra database calls is optimal for you depends on your needs and your willingness to let go of the "Single Point of Definition" mindset, which is admittedly quite subjective.
After finishing a few Firebase projects, I find that my reading code gets drastically simpler if I duplicate data. But of course the writing code gets more complex at the same time. It's a trade-off between these two and your needs that determines the optimal solution for your app.
Please also take a look at my answer from this post where I have explained more about collections, maps and arrays in Firestore.
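To make the trade-off concrete, here is option 1 sketched with plain objects (collection and field names are made up for illustration): the user document stores only event keys, so one write updates the event for everyone, at the cost of a second read to resolve each reference.

```javascript
// "References, not copies": the user doc holds event keys only.
const eventsCollection = {
  e1: { title: "Launch party", date: "2024-06-01" },
  e2: { title: "Meetup", date: "2024-06-15" },
};
const userDoc = { name: "user1", eventIds: ["e1", "e2"] };

// Read twice: first the user doc, then each referenced event.
const myEvents = userDoc.eventIds.map((id) => eventsCollection[id]);

// One write updates the single source of truth; no duplicate to keep in sync.
eventsCollection.e1.title = "Launch party (rescheduled)";
console.log(myEvents[0].title); // "Launch party (rescheduled)"
```

Option 2 would copy the event object under the user instead, making the profile page a single read but forcing every update (and every future field, like comments) to be written in two places.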
I've been writing a Component Entity System in javascript for a while now but I keep returning to a root issue.
How do you handle entity-specific functionality, that is to say, functionality tied to a single instance or a single type?
Here's the current situation:
The way I have structured things, when an item entity is stored in inventory by another entity it isn't destroyed, merely stripped of most of its components. This is so that if it is dropped, or perhaps retrieved for use, it can be reactivated with its old state. The components that are stripped are stored in an InstanceDataComponent attached to the entity (this is just a JSON object).
There is a small system for managing the internals of whether an item can be picked up and adding an inventory record with a hash, quantity, and id for the thing being stored but something needs to manage the transformation of that entity from its "item" state to its "stored" state. What should do this? It seems to me that the details of which component to remove and what data to alter will need to be nearly unique for each item.
Suppose that in the future I want an entity to switch between two behaviors on the fly. For example, to pace to and fro until it is disturbed then pathfind to the player. What will handle that transition?
I get the feeling I've got a fundamental misunderstanding of the issues and typical architecture here. What would be a clean way to handle this? Should I perhaps add a component for each set of behavior transitions? Wouldn't that end up with far too many components that are glorified callback wrappers? Or am I missing something about how an entity should be altered when it is stored in inventory?
Some thoughts for others who might be going through this situation.
My current solution (after several iterations) is to fire a global event, e.g. itemPickupSystem:storedItem, and an entity can attach handlers for any such events inside its factory method. This isn't scalable, for a number of reasons. I've been considering moving those events into a queue to be executed later.
My factory methods have turned into a hodgepodge of callback definitions and things are degrading into callback hell. In addition, this events system has to go, it is the only part of the entire system that breaks the serial nature of the game loop. Until now each system fired in a defined order and all logic resided inside those systems. Now I can't guarantee that an entity will be in a specific state because those callbacks could have been fired at different points. Finally, because execution is being turned over carte blanche to code that isn't part of the core functionality there is no way to know how large that call stack will get when an event is fired.
Sometimes it's easiest to think of this problem in terms of pragmatic network replication, and the boundaries between components come naturally.
Let's say that your actor can both wield, and store, a sword.
I would not anticipate a 'transform' from an inventorySword into a presentationSword, but rather that there's a natural delineation between an 'inventoryItem' and a 'swordPresentation'.
On the server, each player would be assigned a list of items in their inventory. Each item would have an ID unique to the world. The inventory item might be derived as 'SwordItem' and have properties for SwordType and Condition%.
On the client, a 'swordPresentation' component might handle the job of which mesh to display, which socket to attach animation data to when displayed via a 1st person camera, and how to smooth animation transitions. But none of that matters to the actual state of the game, it's simply how our client is seeing the world.
Potentially, if you were distributing the state of the game to each client, all you would pass over the network would be the current player's inventory and, for the other players, which item each one has currently equipped and where they are (assuming they're in eyesight).
So, consider creating a factory that creates a 'swordPresentation' based on an inventory item, finding the bare minimum you can pass in as parameters to create a representation of the component (maybe it's sword type, sword condition %, etc.).
Whatever that bare minimum is, that's what you want to serialize as your inventory item.
Establishing a clear delineation between replicated state and client-side presentation means better performance and fewer vulnerabilities when you're writing a multiplayer game. When you're writing a single-player game, it'll help you understand what goes into a save file.
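That factory idea can be sketched as follows (every name here is an assumption for illustration): the inventory item is the minimal serialized state, and the presentation component is derived from it on demand rather than stored.

```javascript
// The inventory item is the bare minimum of replicated/serialized state...
const inventoryItem = { id: 42, swordType: "katana", condition: 0.8 };

// ...and the presentation component is derived from it, never saved.
function makeSwordPresentation(item) {
  return {
    mesh: item.swordType + "_mesh",              // which mesh to display
    wear: 1 - item.condition,                    // drives a scratches/wear effect
    animationSet: "onehanded_" + item.swordType, // which animations to blend
  };
}

const presentation = makeSwordPresentation(inventoryItem);
console.log(presentation.mesh); // "katana_mesh"
```

Storing an item in inventory then isn't a "transformation" at all: you simply discard the presentation component and keep the inventory item, and the factory rebuilds the presentation whenever the item is equipped again.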
Most of Meteor revolves around collections, cursors, and fetching new documents when they appear in a collection and match the criteria. I, however, am working with bigger documents that contain multiple fields and have a deep and unpredictable structure. At the top level there is a clear schema, but some subdocuments are unpredictable JSON data.
But let's look at a simpler example:
Reports = new Mongo.Collection('reports');

Meteor.publish('reports', function() {
  return Reports.find({});
});
Then, on the client side, I open a report and render it on screen using rather complicated, not-only-HTML rendering functionality. There is a free-text comment field embedded within the report, and when it changes I want to save it automatically:
Meteor.call("autosaveReport", reportId, comment);
and then there is a Meteor method that writes the comment:
Meteor.methods({
  "autosaveReport": function(reportId, comment) {
    Reports.update({_id: reportId}, {$set: {comment: comment}});
  }
});
The problem is that every time a comment is autosaved, Meteor's Tracker reruns all the subscriptions and finds related to this report. And since the report is big and has complicated rendering, that reload is visible to the user and defeats the purpose of seamless autosaving.
So, the question: is it possible to trigger reactivity only on parts of a Mongo document? Currently I have solved it by manually comparing the old and new document on rendering, and if there is no difference in the core fields, stopping the re-render. That feels odd and against the Meteor spirit.
In your helper or route that sets the data context for your template, use {reactive: false} in the find:
return Reports.find(query,{reactive: false});
That way the helper won't update when the underlying object changes.
That flag is all-or-nothing, however; it doesn't let you be selective about which changes to observe and which to ignore.