Can you combine socket.io emit features? - javascript

I was trying to send coordinate data for a game by using the broadcast modifier and volatile modifier at the same time. Is that able to be done?
socket.broadcast.volatile.emit('coords', x, y);

Yes, you can do that. All the socket.broadcast and socket.volatile properties do is execute a getter that sets the appropriate flag and then returns the socket object, which allows you to chain them. Here's the code for creating those getters:
var flags = [
  'json',
  'volatile',
  'broadcast',
  'local'
];
flags.forEach(function (flag) {
  Object.defineProperty(Socket.prototype, flag, {
    get: function () {
      this.flags[flag] = true;
      return this;
    }
  });
});
Then, when you call .emit(), all the flags are passed on to the adapter that does the actual sending, as you can see here. So the adapter receives the emit() together with the flags (both broadcast and volatile) and acts accordingly.
Note that the main feature of volatile is that a message is not queued if the client connection is not currently available (the client is mid-poll, or has lost the connection and may be reconnecting).
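The chaining works because each getter only flips a flag and returns the same object. A stripped-down, self-contained sketch of that pattern (a stand-in for illustration, not the actual socket.io source) shows why socket.broadcast.volatile.emit(...) ends up setting both flags:

```javascript
// Minimal stand-in for a socket with chainable flag getters,
// mirroring the pattern quoted above.
function FakeSocket() {
  this.flags = {};
}

['json', 'volatile', 'broadcast', 'local'].forEach(function (flag) {
  Object.defineProperty(FakeSocket.prototype, flag, {
    get: function () {
      this.flags[flag] = true; // set the flag...
      return this;             // ...and return the socket so calls chain
    }
  });
});

FakeSocket.prototype.emit = function (event) {
  // A real socket hands this.flags to the adapter here,
  // then resets them for the next emit.
  var used = this.flags;
  this.flags = {};
  return used;
};

var s = new FakeSocket();
var flagsUsed = s.broadcast.volatile.emit('coords', 10, 20);
// flagsUsed now has both broadcast and volatile set to true
```

Note the reset at the end of emit(): that is why the modifiers only apply to the very next call, not to every subsequent emit.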

Related

Firebase synchronisation of locally-modified data: handling errors & global status

I have two related questions regarding the Firebase web platform's
synchronisation of locally-modified data to the server:
Every client sharing a Firebase database maintains its own internal version of any active data.
When data is updated or saved, it is written to this local version of the database.
The Firebase client then synchronizes that data with the Firebase servers and with other clients on a 'best-effort' basis.
1. Handling sync errors
The data-modification methods
(set(),
remove(), etc)
can take an onComplete callback parameter:
A callback function that will be called when synchronization to the Firebase servers
has completed. The callback will be passed an Error object on failure; else null.
var onComplete = function (error) {
  if (error) {
    console.log('Synchronization failed');
  } else {
    console.log('Synchronization succeeded');
  }
};
fredRef.remove(onComplete);
In the example above, what kind of errors should the fredRef.remove() callback expect to receive?
Temporary errors?
Client is offline (network connection lost) ?
Firebase server is temporarily overloaded or down for maintenance, but will be available again soon?
Permanent errors?
Permission denied (due to security rules) ?
Database location does not exist?
Is there a way to distinguish between temporary and permanent errors?
How should we handle / recover from these errors?
For temporary errors, do we need to call fredRef.remove() again after a short period of time, to retry the operation?
2. Global sync status
I realise that each call to set() and remove() will receive an individual sync success/failure
result in the onComplete callback.  But I'm looking for a way to determine the
global sync status of the whole Firebase client.
I'd like to use a beforeunload event listener
to warn the user when they attempt to leave the page before all modified data has been synced to the server,
and I'm looking for some function like firebase.isAllModifiedDataSynced().  Something like this:
window.addEventListener('beforeunload', function (event) {
  if (!firebase.isAllModifiedDataSynced()) {
    event.returnValue = 'Some changes have not yet been saved. If you ' +
      'leave this page, your changes will be lost.';
  }
});
(Google Drive, for example, provides this same functionality with its "All changes saved" indicator.)
I'm aware of the special /.info/connected location:
it is useful for a client to know when it is online or offline.
Firebase clients provide a special location at /.info/connected which is updated every time the client's connection state changes.
Here is an example:
var connectedRef = new Firebase("https://<YOUR-FIREBASE-APP>.firebaseio.com/.info/connected");
connectedRef.on("value", function (snap) {
  if (snap.val() === true) {
    alert("connected");
  } else {
    alert("not connected");
  }
});
The special /.info/connected location can be connected to a beforeunload event listener like this:
var connectedRef = new Firebase('https://myapp.firebaseio.com/.info/connected');
var isConnected = true;
connectedRef.on('value', function (snap) {
  isConnected = snap.val();
});
window.addEventListener('beforeunload', function (event) {
  if (!isConnected) {
    event.returnValue = 'Some changes have not yet been saved. If you ' +
      'leave this page, your changes will be lost.';
  }
});
My question is:
If isConnected is true, does this also mean that all modified data has been synced to the server?
i.e. Does "connected" also mean "synced"?
If not, how can the app determine the global sync status of the whole Firebase client?
Is there a special /.info/synchronized location?
Does the app need to manually keep track of the sync success/failure result of every onComplete callback?
In the example above, what kind of errors should the fredRef.remove() callback expect to receive?
Client is offline (network connection lost) ?
No, this will not cause an error to be passed to the completion listener. It will simply cause the completion listener to not be called (yet).
Firebase server is temporarily overloaded or down for maintenance, but will be available again soon?
No. This is essentially the same as being without a network connection.
Permission denied (due to security rules) ?
Yes, this will indeed cause an error to be passed to the completion handler.
Database location does not exist?
No, this will not cause an error to be passed to the completion listener.
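Taken together, this means that any error which actually reaches the callback is effectively permanent (a security-rules rejection being the typical case), so a retry loop is pointless there. A sketch of an onComplete written with that in mind; the 'PERMISSION_DENIED' code string is an assumption about the legacy SDK's error objects, so inspect the error in your own version:

```javascript
// Classify an onComplete error, assuming (per the answers above) that
// any error which reaches the callback is permanent. The
// 'PERMISSION_DENIED' code string is an assumption about the legacy
// Firebase SDK; inspect the error object in your own version.
function classifySyncResult(error) {
  if (!error) return 'success';
  if (error.code === 'PERMISSION_DENIED') return 'permission-denied';
  return 'permanent-failure';
}

var onComplete = function (error) {
  console.log('sync result:', classifySyncResult(error));
};
// fredRef.remove(onComplete);
```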
If isConnected is true, does this also mean that all modified data has been synced to the server? i.e. Does "connected" also mean "synced"?
No, it does not. .info/connected fires with true as soon as a connection to the database is established, regardless of whether any local writes have been flushed yet.
If not, how can the app determine the global sync status of the whole Firebase client?
There is currently no way to determine whether your local data is up to date with the server.
Is there a special /.info/synchronized location?
No, such a location doesn't exist.
Does the app need to manually keep track of the sync success/failure result of every onComplete callback?
That depends on the use case. But if you simply want to know when all your writes have been executed, push a dummy value and wait for that write to complete. Since Firebase executes writes in order, you can be certain at that stage that all earlier writes have completed as well.
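That dummy-write trick can be wrapped in a small helper. A sketch, assuming a legacy-SDK-style ref with child()/push() and a set() that takes an onComplete callback; the 'sync-markers' path is a made-up side location for illustration:

```javascript
// Write a throwaway marker and invoke cb once it completes. Because
// Firebase applies writes in order, all earlier writes on this
// connection have been acknowledged by then. Assumes a legacy-SDK
// style ref with child()/push() and set(value, onComplete); the
// 'sync-markers' path is a made-up side location for illustration.
function whenWritesFlushed(rootRef, cb) {
  var marker = rootRef.child('sync-markers').push();
  marker.set(Date.now(), function (error) {
    marker.remove(); // clean the marker up again
    cb(error || null);
  });
}
```

You could call this from a beforeunload listener, but note that beforeunload gives you no time to wait for a round trip, so in practice it pairs better with a "pending writes" flag you maintain yourself.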

pusher api: how can I check if the connection is established?

I'm using the pusher interface, and I would like to write a fallback for environments where the pusher service is unavailable. I can't find a way to check if the pusher subscription is ok.
I tried this
$scope.channel.bind('pusher:error', function () {
  console.log("pusher:error");
});
and also pusher:subscription_error but it does nothing.
any help will be appreciated.
It's important to highlight the difference between connection and subscription.
A connection is a persistent connection to Pusher over which all communication takes place.
A subscription is a request for data. In Pusher these are represented by channels. Subscriptions and associated data use the established connection and multiple subscriptions are multiplexed over a single connection.
To determine if the Pusher service is reachable or not you should check the connection state.
However, if you ever do see the service unreachable, I'd also recommend contacting Pusher support, since this shouldn't normally happen.
Detecting & Querying Connection State
It's possible to detect connection state by binding to events on the connection object.
pusher.connection.bind('state_change', function (states) {
  var prevState = states.previous;
  var currState = states.current;
});
You can also query the current state directly:
var currentState = pusher.connection.state;
Full documentation on this can be found here:
https://pusher.com/docs/client_api_guide/client_connect#connection-states
The example in the question appears to use Angular, so you'll need to get a reference to the connection object from the $scope. If you're using pusher-angular then the API should be the same as in the normal Pusher library.
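For the fallback the question asks about, the documented unavailable and failed connection states are the ones that signal the service cannot be reached. A minimal sketch (startPollingFallback is a hypothetical function of your own):

```javascript
// Decide whether to fall back, based on Pusher's documented
// connection states: 'unavailable' means the service is temporarily
// unreachable, 'failed' means no supported transport is available.
function shouldFallBack(state) {
  return state === 'unavailable' || state === 'failed';
}

// Wiring it up (assumes an existing `pusher` instance and a
// hypothetical startPollingFallback() of your own):
// pusher.connection.bind('state_change', function (states) {
//   if (shouldFallBack(states.current)) {
//     startPollingFallback();
//   }
// });
```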
Subscription Status
You can bind to two events to determine the result of a subscription:
pusher:subscription_succeeded
pusher:subscription_error
The code to use these looks as follows:
var channel = pusher.subscribe('my-channel');
channel.bind('pusher:subscription_succeeded', function () {
  // Yipee!!
});
channel.bind('pusher:subscription_error', function () {
  // Oh nooooos!
});
Documentation on the success event can be found here:
https://pusher.com/docs/client_api_guide/client_events#subscription_succeeded
Docs on the error event can be found here:
https://pusher.com/docs/client_api_guide/client_events#subscription_error

Mongodb Tailable Cursor in nodejs, how to stop stream

I use below code to get the data from mongodb capped collection
function listen(conditions, callback) {
  db.openConnectionsNew([req.session.client_config.db], function (err, conn) {
    if (err) { console.log({ err: err }); return callback(err); }
    coll = db.opened[db_name].collection('messages');
    latestCursor = coll.find(conditions).sort({ $natural: -1 }).limit(1);
    latestCursor.nextObject(function (err, latest) {
      if (latest) {
        conditions._id = { $gt: latest._id };
      }
      options = {
        tailable: true,
        awaitdata: true,
        numberOfRetries: -1
      };
      stream = coll.find(conditions, options).sort({ $natural: -1 }).stream();
      stream.on('data', callback);
    });
  });
}
and then I use sockets.broadcast(roomName,'data',document);
on client side
io.socket.get('/get_messages/', function(resp){
});
io.socket.on('data', function notificationReceivedFromServer ( data ) {
console.log(data);
});
this works perfectly as I am able to see the any new document which is inserted in db.
I can see in mongod -verbose that every 200 ms the query with {$gt: latest_id} runs again, and that is fine, but I have no idea how I can close this query. I am very new to Node.js and this is my first time using MongoDB's tailable option, so I am totally lost. Any help or clue is highly appreciated.
What is returned from the .stream() method on the Cursor object returned from .find() is an implementation of the node stream interface; specifically, a "readable" stream.
As such, its "data" event is emitted whenever there is new data received and available in the stream to be read.
There are other methods such as .pause() and .resume() which can be used to control the flow of these events. Typically you would call these "inside" a "data" event callback, where you wanted to make sure the code in that callback was executed before the "next" data event was processed:
stream.on("data", function (data) {
  // pause before processing
  stream.pause();
  // do some work, possibly with a callback
  something(function (err, result) {
    // Then resume when done
    stream.resume();
  });
});
But of course this is just a matter of "scoping". So as long as the "stream" variable is defined in a scope where another piece of code can access it, then you can call either method at any time.
Again, by the same token of scoping, you can just "undefine" the "stream" object at any point in the code, making the "event processing" redundant.
// Just overwrite the object
stream = undefined;
Also worth knowing: the newer "version 2.x" of the node driver wraps a "stream interface" directly into the standard Cursor object, without the need to call .stream() to convert. Node streams are very useful and powerful things, and it is well worth coming to terms with their usage.
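Putting the scoping advice together with an explicit teardown, stopping the tailable stream might be sketched like this. destroy() is the generic readable-stream teardown; depending on your driver version the cursor may expose close() instead, so treat the exact call as an assumption:

```javascript
// Keep the stream in a scope the rest of the code can reach, then
// tear it down explicitly instead of just dropping the reference.
// destroy() is the generic readable-stream teardown; some driver
// versions expose close() on the cursor instead.
var activeStream = null;

function startListening(stream, onData) {
  activeStream = stream;
  stream.on('data', onData);
}

function stopListening() {
  if (!activeStream) return;
  activeStream.removeAllListeners('data'); // stop delivering documents
  if (typeof activeStream.destroy === 'function') {
    activeStream.destroy();                // ask the driver to kill the cursor
  }
  activeStream = null;
}
```

In the listen() function from the question, that would mean assigning the result of .stream() to activeStream (or returning it) so stopListening() can reach it later.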

resending peerconnection offer

I'm currently trying to rebroadcast my local stream to all my peer connections. options I tried:
1) Loop through all my peer connections and recreate them with the new local stream. Problem that I encounter here is the fact that createOffer is asynchronous.
2) create 1 sdp and send it to all peers. Problem: no video
Would anyone have a way to resend an offer to a list of peers?
Each PC needs to recreate an offer (as bwrent said).
As you are obviously using p2p multiparty (multiple peer connections), you might want to pass the peerID into the createOffer success callback every time; then you don't have to worry about it being asynchronous. You need to make the full handshake (offer, answer, candidate) peerID-dependent.
(Simplified) Example from our SDK
Skyway.prototype._doCall = function (targetMid) {
  var self = this;
  var pc = this._peerConnections[targetMid]; // this is thread / asynchronous safe
  pc.createOffer(
    function (offer) {
      self._setLocalAndSendMessage(targetMid, offer); // pass the targetID down the callback chain
    },
    function (error) { self._onOfferOrAnswerError(targetMid, error); },
    constraints
  );
};

Skyway.prototype._setLocalAndSendMessage = function (targetMid, sessionDescription) {
  var self = this;
  var pc = this._peerConnections[targetMid]; // this is thread / asynchronous safe
  pc.setLocalDescription(
    sessionDescription,
    function () { self._sendMessage({ target: targetMid, ... }); }, // success callback
    function () {} // error callback
  );
};
If by "asynchronous" you mean that when a callback fires it holds the wrong variable for whom to send to (because the loop has ended and the variable contains the last "person"), you can use an immediately-invoked function to scope it:
for (var i = 0; i < peerConnections.length; i++) {
  (function (id) {
    // Inside here you have the right id. Even if the loop has ended and
    // the i variable has changed to something else, the id variable is
    // still the same.
  })(i);
}
This is a bit like Alex's answer, as his answer also describes an example of scoping the variable inside the function executing .createOffer.
Another way to handle this correctly is to use renegotiation. Whenever you change a stream, the onnegotiationneeded event handler is fired automatically. Inside this handler you create a new offer and send it to the other person. As you mentioned you have multiple peer connections listening to the stream, you need to know whom to send the sdp to. If you add the person's id to the rtc object, you can get it back inside the onnegotiationneeded handler via this.id.
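A sketch of that renegotiation wiring, using the callback-style WebRTC API as in the rest of this answer; sendOffer is a hypothetical signalling helper of your own:

```javascript
// Attach a renegotiation handler to one peer connection, keyed by the
// remote peer's id. `sendOffer` is a hypothetical signalling helper of
// your own; the handler name follows the WebRTC API, which fires
// onnegotiationneeded when streams change.
function attachRenegotiation(pc, peerId, sendOffer) {
  pc.onnegotiationneeded = function () {
    pc.createOffer(function (offer) {
      pc.setLocalDescription(offer, function () {
        sendOffer(peerId, offer); // route the new SDP to the right peer
      }, function (err) { console.error(err); });
    }, function (err) { console.error(err); });
  };
}

// Usage: call once per peer connection when it is created, e.g.
// attachRenegotiation(peerConnections[i], ids[i], sendOffer);
```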

Strange issue with socket.on method

I am facing a strange issue with calling socket.on methods from the Javascript client. Consider below code:
for (var i = 0; i < 2; i++) {
  var socket = io.connect('http://localhost:5000/');
  socket.emit('getLoad');
  socket.on('cpuUsage', function (data) {
    document.write(data);
  });
}
Here basically I am calling a cpuUsage event which is emitted by socket server, but for each iteration I am getting the same value. This is the output:
0.03549148310035006
0.03549148310035006
0.03549148310035006
0.03549148310035006
Edit: Server side code, basically I am using node-usage library to calculate CPU usage:
socket.on('getLoad', function (data) {
  usage.lookup(pid, function (err, result) {
    cpuUsage = result.cpu;
    memUsage = result.memory;
    console.log("Cpu Usage1: " + cpuUsage);
    console.log("Cpu Usage2: " + memUsage);
    /*socket.emit('cpuUsage', result.cpu);
    socket.emit('memUsage', result.memory);*/
    socket.emit('cpuUsage', cpuUsage);
    socket.emit('memUsage', memUsage);
  });
});
Whereas on the server side, I am getting different values for each emit and socket.on. I find it very strange that this is happening. I tried setting data = null after each socket.on call, but it still prints the same value. I don't know what phrase to search for, so I posted here. Can anyone please guide me?
Please note: I am basically Java developer and have a less experience in Javascript side.
You are making the assumption that when you use .emit(), a subsequent .on() will wait for a reply, but that's not how socket.io works.
Your code basically does this:
it emits two getLoad messages directly after each other (which is probably why the returning value is the same);
it installs two handlers for a returning cpuUsage message being sent by the server;
This also means that each time you run your loop, you're installing more and more handlers for the same message.
Now I'm not sure what exactly it is you want. If you want to periodically request the CPU load, use setInterval or setTimeout. If you want to send a message to the server and want to 'wait' for a response, you may want to use acknowledgement functions (not very well documented, but see this blog post).
But as a rule, for each type of message you should call socket.on('MESSAGETYPE', ...) only once during the runtime of your code.
EDIT: here's an example client-side setup for a periodic poll of the data:
var socket = io.connect(...);
socket.on('connect', function () {
  // Handle the server response:
  socket.on('cpuUsage', function (data) {
    document.write(data);
  });
  // Start an interval to query the server for the load every 30 seconds:
  setInterval(function () {
    socket.emit('getLoad');
  }, 30 * 1000); // milliseconds
});
Use this line instead:
var socket = io.connect('iptoserver', {'force new connection': true});
Replace iptoserver with the actual ip to the server of course, in this case localhost.
Edit: that is, if you want to create multiple clients.
Otherwise, you have to move the initialisation of the socket variable to before the for loop.
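If the intent was one connection queried several times (rather than several clients), create the socket and register the handler once, and put only the emit in the loop. A sketch:

```javascript
// One connection, one handler; only the request is repeated.
// `render` stands in for whatever the page does with the value.
function setupUsageClient(socket, render) {
  socket.on('cpuUsage', render); // registered exactly once
  return function requestLoad() {
    socket.emit('getLoad');      // safe to call as often as needed
  };
}

// Usage in the browser:
// var socket = io.connect('http://localhost:5000/');
// var requestLoad = setupUsageClient(socket, function (data) {
//   document.write(data);
// });
// for (var i = 0; i < 2; i++) requestLoad();
```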
I suspected that the call returns the average CPU usage since the process started, which seems to be the case here. Checking the node-usage documentation page (average-cpu-usage-vs-current-cpu-usage) I found:
By default CPU Percentage provided is an average from the starting
time of the process. It does not correctly reflect the current CPU
usage. (this is also a problem with linux ps utility)
But if you call usage.lookup() continuously for a given pid, you can
turn on keepHistory flag and you'll get the CPU usage since last time
you track the usage. This reflects the current CPU usage.
The docs also give an example of how to use it:
var pid = process.pid;
var options = { keepHistory: true };
usage.lookup(pid, options, function (err, result) {
  // result.cpu now reflects usage since the previous lookup
});
