Changing a Meteor collection subscription for all clients - javascript

I am developing a webapp in which I'd need one client, associated with the admin, to trigger an event (e.g., a new value selected in a dropdown list) which in turn tells all the other connected clients to change their subscription, possibly using a parameter, i.e., the newly selected value.
Something along the lines of
Template.bid.events
  "change .roles": (e, tpl) ->
    e.preventDefault()
    role = tpl.$("select[name='role']").val()
    Meteor.subscribe role
Of course this works for the current client only.
One way I thought of would be keeping a separate collection that points at the current collection to be used, so the clients can programmatically act on that. It feels cumbersome, though.
Is there a Meteor-way to achieve this?
Thanks

In meteor, whenever you have a problem that sounds like: "I need to synchronize data across clients", you should use a collection. I realize it seems like overkill just to send one piece of data, but I assure you it's currently the path of least resistance.
There are ways you can expose pseudo-collections which don't actually write to mongo, but for your use case that really sounds like overkill - new Mongo.Collection is the way to go.
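For instance, here is a minimal sketch of that approach (the collection and field names, Settings and selectedRole, are my own, as is the Bids collection): the admin writes the selected value into a one-document collection, and every client reactively re-subscribes whenever that document changes.

// Shared code (client and server): a tiny collection holding the admin's selection.
Settings = new Mongo.Collection('settings');

if (Meteor.isServer) {
  // Everyone can see the current selection.
  Meteor.publish('settings', function () {
    return Settings.find({_id: 'global'});
  });
  // Role-specific data; Bids is a hypothetical collection.
  Meteor.publish('bidsForRole', function (role) {
    check(role, String);
    return Bids.find({role: role});
  });
}

if (Meteor.isClient) {
  Meteor.subscribe('settings');
  // Re-runs on every client whenever the settings document changes,
  // swapping the data subscription accordingly (the old subscription
  // inside the autorun is stopped automatically on rerun).
  Tracker.autorun(function () {
    var settings = Settings.findOne('global');
    if (settings && settings.selectedRole) {
      Meteor.subscribe('bidsForRole', settings.selectedRole);
    }
  });
}

The admin's change handler then only has to run Settings.upsert('global', {$set: {selectedRole: role}}), guarded by an allow/deny rule or a method so that only the admin may write it.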

You can use streams to set up a simple line of communication between connected clients and the server. It doesn't store data in MongoDB. Just let all connected clients listen to a stream and switch subscriptions when a new message comes in with the subscription name. Make sure only the client associated with your admin can push messages to the stream.
Available package: https://atmospherejs.com/lepozepo/streams
Examples: http://arunoda.github.io/meteor-streams/
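For completeness, a rough sketch of how that could look (based on the emit/on/permissions API those docs describe; the stream name, event name, and isAdmin flag are my own assumptions):

var roleStream = new Meteor.Stream('roles');

if (Meteor.isServer) {
  // Everyone may listen, but only the admin may push.
  roleStream.permissions.read(function (eventName) {
    return true;
  });
  roleStream.permissions.write(function (eventName) {
    var user = Meteor.users.findOne(this.userId);
    return !!(user && user.isAdmin); // the isAdmin flag is an assumption
  });
}

if (Meteor.isClient) {
  // Every client switches its subscription when told to.
  roleStream.on('roleChanged', function (role) {
    Meteor.subscribe(role);
  });
  // The admin's change handler would broadcast:
  // roleStream.emit('roleChanged', role);
}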

Related

How to remove particular messages in rabbitmq before publishing new messages?

I have a subscriber which pushes data into queues. The messages look like this:
{
  "Content": {
    "_id": "5ceya67bbsbag3",
    "dataset": {
      "upper": {},
      "lower": {}
    }
  }
}
Now a new message can be pushed with the same content id but different data. In that case I want to delete the old message with the same id, or replace it, retaining only the latest message.
I have not found a direct solution for this in RabbitMQ. Please guide me on how to do this.
I have already gone through some posts.
Post 1
Post 2
What you are trying to achieve cannot be trivially solved with RabbitMQ (or rather the AMQP protocol).
RabbitMQ queues are simple FIFO queues and don't offer any means of accessing elements beyond publishing at one end and consuming from the other.
Therefore, the only way to "update" an already existing message without relying on another service would be to fetch all the messages until you find the one you are interested in, discard it, and publish the new one together with the other messages you fetched along with it.
Overall, the recommendation when using RabbitMQ with regard to message duplication is to make consumption idempotent. In other words, consuming two messages deemed to be the same should lead to the same outcome.
One way to achieve idempotency is to rely on a secondary cache where you store the message identifiers and their validity. Once a consumer fetches a new message from RabbitMQ, it would check the cache to see if it's a valid message or not and act accordingly.
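As a sketch of that idea in Node (using amqplib and node-redis v4; the queue name, key scheme, and handleMessage helper are all illustrative): the producer bumps a per-id version counter in the cache, and the consumer only processes a message if it still carries the latest version, so stale duplicates are skipped.

const amqp = require('amqplib');
const { createClient } = require('redis');

// Producer: stamp each message with the newest version for its content id.
async function publish(channel, redis, body) {
  body._v = await redis.incr('ver:' + body.Content._id);
  channel.sendToQueue('datasets', Buffer.from(JSON.stringify(body)));
}

// Consumer: drop any message whose version is no longer the latest.
async function consume() {
  const redis = createClient();
  await redis.connect();
  const conn = await amqp.connect('amqp://localhost');
  const channel = await conn.createChannel();
  await channel.assertQueue('datasets');

  channel.consume('datasets', async (msg) => {
    const body = JSON.parse(msg.content.toString());
    const latest = await redis.get('ver:' + body.Content._id);
    if (String(body._v) === latest) {
      handleMessage(body); // your actual processing (hypothetical helper)
    }
    channel.ack(msg); // stale duplicates are acked away without processing
  });
}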
I think this is a slightly wrong way to use RabbitMQ: only immutable tasks (ones not intended to change) should be put into queues for a worker to consume.
An alternative way to implement your particular task (see the sketch after this list):
- push only immutable data into the queue: { "content": { "_id": "5ceya67bbsbag3" } .. }
- store the mutable data in a db (Mongo) or an in-memory db (something like Redis is a good fit here);
- whenever an update is needed, update it in the db;
- let your worker fetch the required data from the db using the "_id" reference.
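A minimal worker for that split might look like this (amqplib plus node-redis; the queue and key names are made up). The queue carries only the immutable _id; the worker reads the latest dataset from the store at consume time, so any update that happened between publish and consume is picked up automatically.

const amqp = require('amqplib');
const { createClient } = require('redis');

async function worker() {
  const redis = createClient();
  await redis.connect();
  const conn = await amqp.connect('amqp://localhost');
  const channel = await conn.createChannel();
  await channel.assertQueue('tasks');

  channel.consume('tasks', async (msg) => {
    // The message holds only the immutable reference...
    const { _id } = JSON.parse(msg.content.toString());
    // ...the mutable data always comes fresh from the store.
    const dataset = JSON.parse(await redis.get('dataset:' + _id));
    // ...process dataset...
    channel.ack(msg);
  });
}

// On every update, simply overwrite the record:
//   await redis.set('dataset:' + someId, JSON.stringify(newDataset));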
I am not sure removing a message is a good idea, if your requirement is just that the latest data should always be maintained for the same id. Since messages are consumed in order, the last message's data is what ends up applied anyway, so I don't see an issue here with RabbitMQ.

Storing websocket (channels) connection objects in Redux

I want to use websockets in my redux app and have problems with storing connection objects (phoenix channels).
I have a dynamic collection with the ability to add and remove items. When the user adds an item, the app should create a new Phoenix channel based on the connection, subscribe, and store the channel object, because I have to do some work with it later (for example, call leave() on the channel when the user removes the item). Unfortunately, the store in redux is all immutable, so there is no option to handle this there. Any help would be appreciated.
Definitely don't put it in the store. Per the Redux FAQ, only serializable data should go into the store. The standard place to put things like persistent connection objects is inside middleware. And, in fact, there are literally dozens of existing middlewares that demonstrate that approach, with most of them listed over at redux-ecosystem-links. You should be able to use some of those as examples.
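As an illustration, here is a middleware sketch that owns the socket and channel objects in a closure (the action types, payload fields, and topic naming are all my own; the Phoenix calls follow its documented Socket/channel API):

import { Socket } from 'phoenix';

export function createChannelMiddleware() {
  const socket = new Socket('/socket');
  socket.connect();
  const channels = {}; // itemId -> channel; lives here, never in the store

  return (store) => (next) => (action) => {
    switch (action.type) {
      case 'channels/JOIN': {
        const channel = socket.channel('item:' + action.itemId);
        channel.on('update', (payload) =>
          store.dispatch({ type: 'items/UPDATED', itemId: action.itemId, payload })
        );
        channel.join();
        channels[action.itemId] = channel;
        break;
      }
      case 'channels/LEAVE': {
        const channel = channels[action.itemId];
        if (channel) {
          channel.leave(); // called when the user removes the item
          delete channels[action.itemId];
        }
        break;
      }
    }
    return next(action);
  };
}

Components then just dispatch plain, serializable actions like { type: 'channels/JOIN', itemId: 42 }, and the middleware does the imperative channel work.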

Keeping a client-side sync of Sails.js collection, using sockets

I very much like Meteor's pub/sub. I wonder if there is a way to get a similar workflow, using sails.js or just a socket library in general.
In particular, what I would like to be able to do is something along the lines of:
// Server-side:
App.publish('myCollection', -> collection.find({}))
// Client-side:
let myCollection = App.subscribe('myCollection')
let bob = myCollection.find({name: 'Bob'})
myCollection.insert({name: 'Amelie'}, callback)
All interaction with the server should happen in the background.
I very much like Meteor's pub/sub. I wonder if there is a way to get a similar workflow, using sails.js or just a socket library in general
Basically yes, at least for realtime sync between backend and frontend. Let's review what Meteor has and answer point by point.
Pub/sub
The Pub/Sub concept, as stated by Sabbir, is also supported by sails.js, though the basics are slightly different:
in meteor, the client can subscribe to whatever it wants, and the server controls what the client receives by publishing only what it chooses;
whereas in sails.js, the server both subscribes client sockets and publishes to all bound sockets.
Note that, by default:
meteor contains the autopublish package, which just notifies every client without any kind of filtering. To achieve some filtering, you have to meteor remove autopublish; then you can control what each client receives by attaching a mongo query to the publication, as explained here.
sails, by default, on its automatic "select" blueprint actions, auto-subscribes the calling socket to events on the objects returned by the "select".
As a server-side conclusion:
Subscribe: just call the find or findOne blueprint default actions through a socket (attaching some where filters or not), and your socket will automatically be subscribed to every event concerning the returned objects => you don't have to code anything on the server, in most cases, for the Subscribe logic.
Publish: every blueprint default action (create, update, destroy, add, remove) auto-publishes to subscribed sockets => you don't have to code anything on the server, in most cases, for the Publish logic.
(Though, if you find yourself implementing some manual controller actions, sails API helps you publishing and subscribing easily)
Client handling
Therefore, with both meteor and sails, clients only receive what they're supposed to receive. Time for front-end now.
Philosophy
meteor, on one hand, with its isomorphic dimension, provides a front-end connector by nature, exposing its data-bound collections.
sails, on the other hand, is front-end agnostic and can be consumed by any HTTP REST connector (JS or not), such as $http, $resource, or more advanced ones like Restangular.
Though, being aware of the complexity of using raw sockets with their API (when it comes to sessions, CORS, CSRF and such), they developed a JavaScript socket.io wrapper called sails.io.js, designed to be REST-like-over-socket, and it just works like a charm.
Basically, the main difference is that meteor is one step higher-level than sails, because it provides the logic of syncing collections and objects.
All interaction with the server should happen in the background.
sails.io.js, the official front-end component, is just not that high-level. When it comes to Angular.js, though, you can find some community connectors that aim to provide, more or less, the same features as mongo data-bound collections and objects. There are sails-resource, spinnaker and angular resource sails. I tried a couple of them, and I have to say I was disappointed: the abstraction level is so high that it just becomes annoying, IMHO. For example, with not-very-RESTful-friendly custom actions, like a login, it becomes very hard to adapt them to your needs.
==> I would advise using a low-level connector, such as angularSails or (my preferred) https://github.com/janpantel/angular-sails, or even raw sails.io.js if you're not using Angular.
Edit: just found a Backbone version, by the creator of Sails.
It just works great, and believe me, the "keep my collection in sync with that socket" code is so ridiculously small that finding a module for it is just not worth it.
Some code please, stop talking
In particular, what I would like to be able to do is something along the lines of:
Server
Meteor
# Server-side:
App.publish('myCollection', -> collection.find({}))
Sails
//Nothing to do, just sails generate api myCollection
Client
Meteor
# Client-side:
myCollection = App.subscribe('myCollection')
Sails, with sails.io.js
(Here using lodash for convenience)
var myCollection;

sails.io.get('/myCollection').then(
  function (res) {
    myCollection = res.data;
  },
  function (err) {
    // Handle error
  }
);

sails.io.on('myCollection', function (msg) {
  switch (msg.verb) {
    case 'created':
      myCollection.push(msg.data);
      break;
    case 'updated':
      _.extend(_.find(myCollection, 'id', msg.id), msg.data);
      break;
    case 'destroyed':
      _.remove(myCollection, 'id', msg.id);
      break;
  }
});
(I leave the find where and create to your imagination with the docs.)
All interaction with the server should happen in the background.
Well, with Sails this only exists for Angular, via sails resources.
I'm not very familiar with that process, so I'll leave you to read here or here, but once again I'd choose the manual .on() method.
Since I asked this question, I've learned a few things and some new projects have popped up. I decided against sails.io, because when developing with React.js, most of the community's weight is behind webpack, but sails.io uses gulp. I realize these can be used together and there is even an npm package for this, but I wasn't too keen on making my stack bigger than it had to be, so I went with a simple express.js server that I could tailor to my needs.
In order to sync my data, I'm using rethinkdb which allows me to asynchronously watch the database for changes and then publish the changes to the clients through websockets.
I've set up a simple script where I keep an instance of a baobab tree on both the client and the server.
When the tree gets modified on the server, it sends transaction data to the appropriate clients through the websocket.
The client then merges the transaction into its own tree.
This method does not use local storage and keeps the data in memory in the node.js process. The data in the transaction is also quite redundant.
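The changefeed half of that setup looks roughly like this (rethinkdb driver plus the ws package; the table name and message shape are mine):

const r = require('rethinkdb');
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

r.connect({ host: 'localhost', port: 28015 }).then(function (conn) {
  // changes() asynchronously yields {old_val, new_val} for every write.
  return r.table('items').changes().run(conn).then(function (cursor) {
    cursor.each(function (err, change) {
      if (err) { return console.error(err); }
      const msg = JSON.stringify(change); // the "transaction" to merge client-side
      wss.clients.forEach(function (client) {
        if (client.readyState === WebSocket.OPEN) { client.send(msg); }
      });
    });
  });
});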
The future plan has always been to set something up using redis and local storage ...
... until yesterday when I found deepstream.io!
This is a tool that does exactly what I want and need! Nothing more, nothing less.
Another project worth mentioning is meatier: "like meteor, but meatier". It is composed of many other well-supported open source projects, so you could even pick and choose.

Conditional publish events

Introduction
I'm building a private messaging system using sails, but this question can apply to pretty much anything. I'll be using the messaging system as an example to make the question more clear. As a bit of background info, I'm working with the latest sails 0.10 RC.
The problem
Sails allows you to use redis for sessions and pubsub, which allows you to scale over multiple servers. This is all very neat and works brilliantly, but it leaves me with the question of how to publish events to specific connected sockets (clients).
Sometimes you wish to only publish events to participants, as is the case with a private messaging system. Only the author and recipient should be notified of new messages in the thread. How would you accomplish this? I know you can subscribe a client to a specific model instance, notifying the client of changes in said model; I also know it's possible to subscribe a client to a model, notifying them of newly created (saved) model instances. It's the latter, the create verb that's causing me a bit of trouble. I don't want everyone that's using the messaging system to receive updates for new messages in threads they're not in. This would be a privacy issue.
TL;DR
How can I filter which clients receive the create verb event based on the value of a property (author and recipient) on the model in question? Is there any other way to make sure only these clients receive updates for the model?
You have a few options here, but all of them involve not really using the default publishCreate method, which will just blast out the created message to everyone who was subscribed to it via .watch().
The first option is to use associations to link your Message model to the users who should know about it, and then listen for the publishAdd message instead of publishCreate. For example, if there's an association between a Message instance and the User instances who represent the sender and recipient, then the default publishCreate logic will also trigger a publishAdd for the related users, indicating that a new Message has been added to their messages (or whatever you name it) collection.
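In model terms, that association could look something like this (a sketch in Sails 0.10 association syntax; the attribute names are mine):

// api/models/User.js
module.exports = {
  attributes: {
    name: 'string',
    // messages received; 'via' points back at Message.recipient
    inbox: { collection: 'message', via: 'recipient' }
  }
};

// api/models/Message.js
module.exports = {
  attributes: {
    body: 'string',
    sender: { model: 'user' },
    recipient: { model: 'user' }
  }
};

A client subscribed to its own User record would then receive an event with verb 'addedTo' (rather than 'created') whenever a message lands in that user's inbox collection.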
The second option is to override the default publishCreate for Message, to have it send only to the correct users. For example, if only the recipient should be notified, then in api/models/Message.js you could do:
// api/models/Message.js
module.exports = {

  attributes: {...},

  publishCreate: function (values, req, options) {
    User.publish(values.recipient, {
      verb: "created",
      data: values,
      id: values.id
    }, req);
  }

};
As a slight alternative, you can place your custom code in the model's afterPublishCreate method instead, which the default publishCreate will then call. This has the benefit of maintaining the default code that handles calling publishAdd for associated models; the trick would just be to make sure that no one was subscribed to the Message model via .watch(), so that the default publishCreate doesn't send out created messages to users who shouldn't see them.

Dropping a Mongo Database Collection in Meteor

Is there any way to drop a Mongo database collection from within the server-side JavaScript code with Meteor? (Really drop the whole thing, not just its contents via Meteor.Collection.remove({}).)
In addition, is there also a way to drop a Meteor.Collection from within the server side JavaScript code without dropping the corresponding database collection?
Why do that?
Searching in the subdocuments (subdocuments of the user document, e.g. userdoc.mailbox[12345]) with underscore or similar turns out quite slow (e.g. for large mailboxes).
On the other hand, putting all messages (in the context of the mailbox example) of all users in one big DB and then searching* all messages for one or more particular messages turns out to be very, very slow for many users with large mailboxes, too.
There is also the size limit for Mongo documents, so if I store all messages of a user in his/her user-document, the mailbox's maximum size is < 16 MB together with all other user-data.
So I want to have a separate collection for each of my users to use as a mailbox; then the maximum size for one message is 16 MB (very acceptable) and I can search a mailbox using mongo queries.
Furthermore, since I'm using Meteor, it would be nice to have this mongo collection loaded as a Meteor.Collection whenever a user logs in. When a user deactivates his/her account, the collection should of course be dropped; if the user just logs out, only the Meteor.Collection should be dropped (and restored when he/she logs in again).
To some extent, I got this working already: each user has their own collection for the mailbox, but if anybody cancels their account, I have to delete that particular Mongo collection manually. Also, I have to keep all mongo collections alive as Meteor.Collections at all times because I cannot drop them.
This is a working server-side code snippet for one-collection-per-user mailboxes:
var mailboxes = {};

Meteor.users.find({}, {fields: {_id: 1}}).forEach(function (user) {
  mailboxes[user._id] = new Meteor.Collection("Mailbox_" + user._id);
});

Meteor.publish("myMailbox", function (_query, _options) {
  if (this.userId) {
    return mailboxes[this.userId].find(_query, _options);
  }
});
while a client just subscribes with a certain query with this piece of client-code:
myMailbox = new Meteor.Collection("Mailbox_" + Meteor.userId());

Deps.autorun(function () {
  var filter = Session.get("mailboxFilter");
  if (_.isObject(filter) && filter.query && filter.options)
    Meteor.subscribe("myMailbox", filter.query, filter.options);
});
So if a client manipulates the session variable "mailboxFilter", the subscription is updated and the user gets a new bunch of messages in the minimongo.
It works very nicely; the only thing missing is dropping the db collection.
Thanks for any hint already!
*I previously wrote "dropping" here, which was a total mistake. I meant searching.
A solution that doesn't use a private method is:
myMailbox.rawCollection().drop();
This is better in my opinion because Meteor could randomly drop or rename the private method without any warning.
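Tied back to the question's per-user mailboxes, that could look like this on the server (a sketch; mailboxes is the map from the question, and rawCollection() hands back the node MongoDB driver collection, whose drop() returns a Promise):

Meteor.methods({
  cancelAccount: function () {
    var box = mailboxes[this.userId];
    if (box) {
      // Drop the underlying MongoDB collection...
      box.rawCollection().drop().catch(function (err) {
        console.error('could not drop mailbox:', err);
      });
      // ...and forget the Meteor.Collection wrapper as well.
      delete mailboxes[this.userId];
    }
  }
});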
You can completely drop the collection myMailbox with myMailbox._dropCollection(), directly from meteor.
I know the question is old, but it was the first hit when I searched for how to do this
Searching in the subdocuments...
Why use subdocuments? A document per user I suppose?
each message must be its own document
That's the better way: a collection of messages, each id'ed to its user. That way, you can filter what a user sees when doing publish/subscribe.
dropping all messages in one db turns out to be very slow for many users with large mailboxes
That's because most NoSQL DBs (if not all) are geared towards read-intensive operations, not write-intensive ones. So writing (updating, inserting, removing, wiping) will take more time.
Also, some online services (I think it was Twitter or Yahoo) will tell you when deactivating the account: "Your data will be deleted within the next N days," or something along those lines. One reason is that your data takes time to delete.
The user is leaving anyway, so you can just tell the user that the account has been deactivated and the data will be deleted from your databases in the following days. To respond to the user immediately, do the remove operation asynchronously by passing it a blank callback.
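In Meteor terms, that might look like the following sketch (the Messages collection and the method name are assumptions):

Meteor.methods({
  deactivateAccount: function () {
    Meteor.users.update(this.userId, {$set: {deactivated: true}});
    // Passing a callback makes remove() asynchronous on the server,
    // so the method returns immediately while the cleanup runs on.
    Messages.remove({userId: this.userId}, function (err) {
      if (err) console.error('background cleanup failed:', err);
    });
  }
});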

Categories

Resources