Add to WebSocket.onmessage() the way jQuery adds to events? - javascript

I'm writing a single-page, WebSocket-heavy site, and I'd like to keep my code grouped first by "page" (I think I need a new word, since it never posts back), then by section, then by concept, and so on.
I'd like to split WebSocket.onmessage handling across my code in much the same way that $('#someElement') can repeatedly have an event such as click(function(){}) added to it.
Can this be done with WebSocket.onmessage(function(){})? If so, how?
As jQuery programmers happily know, an event handler can be set initially and then added to in multiple places across the JS. That's my favorite thing about JS: the "put it anywhere, as long as it's in order" ability. It makes code organization much easier, for me at least.
With WebSockets, most of the client-side action for me so far is in the WebSocket.onmessage() handler, since WebSocket.send() can be used anywhere and really just ships JS data to the server.
onmessage() now owns my page: whatever's in it initiates most major actions, such as fading out the login screen to the first content screen on a "login successful" type message.
According to my limited understanding of JS, the onmessage handler must be set all in one place. It's a pain to keep scrolling back (or tabbing to another file) to change it after I've changed the JS around it, far, far away.
How can I add to the WebSocket.onmessage() handler in multiple places across the JS?

To answer your last question:
how can I add to the onmessage handler in multiple places across the JS?
You can define your own (global) event dispatcher that accepts an arbitrary number of handler functions. Here's an example:
// Storage for named handlers and their contexts
window.eventHandlers = {};
window.eventContexts = {};

window.bind = function (name, func, context) {
    if (typeof this.eventHandlers[name] == "undefined") {
        this.eventHandlers[name] = [func];
        this.eventContexts[name] = [context];
    } else {
        var found = false;
        for (var index in this.eventHandlers[name]) {
            if (this.eventHandlers[name][index] == func && this.eventContexts[name][index] == context) {
                found = true;
                break;
            }
        }
        if (!found) {
            this.eventHandlers[name].push(func);
            this.eventContexts[name].push(context);
        }
    }
};

window.trigger = function (name, args) {
    if (typeof this.eventHandlers[name] != "undefined") {
        for (var index in this.eventHandlers[name]) {
            var obj = this.eventContexts[name][index];
            this.eventHandlers[name][index].apply(obj, [args]);
        }
    }
};
// === Usage ===
// First, bind an event handler somewhere in your code (it can be anywhere, since the `bind` method is global).
window.bind("on new email", function (params) { ... });

// Then trigger "on new email" from your socket's `onmessage` handler once the appropriate message arrives.
// (`socket` is your WebSocket instance; onmessage is a property you assign, not a method you call.)
socket.onmessage = function (message) {
    // Work on the data and trigger based on it (assuming the server sends JSON)
    var data = JSON.parse(message.data);
    window.trigger("on new email", { subject: data.subject, email: data.email });
};
This code is part of an open source project I worked on before. It gives events names and lets you set a context for your handler (for methods instead of plain functions). You can then call trigger inside your socket's onmessage handler. I hope this is what you're looking for.

You can create a wrapper which will handle WS events on itself. See this CoffeeScript example:
class WebSocketConnection
  constructor: (@url) ->
    @ws = new WebSocket(@url)
    @ws.onmessage = @onMessage
    @callbacks = []

  addCallback: (callback) ->
    @callbacks.push callback

  onMessage: (event) =>
    for callback in @callbacks
      callback.call @, event

# and now use it
conn = new WebSocketConnection(url)
conn.addCallback (event) =>
  console.log event
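For readers not using CoffeeScript, here is a rough plain-JavaScript sketch of the same idea (the class and method names mirror the snippet above and are otherwise illustrative):
// A minimal JavaScript sketch of the same wrapper idea (illustrative, not production code).
function WebSocketConnection(url) {
    var self = this;
    this.ws = new WebSocket(url);
    this.callbacks = [];
    this.ws.onmessage = function (event) {
        // Fan the single onmessage event out to every registered callback
        self.callbacks.forEach(function (callback) {
            callback.call(self, event);
        });
    };
}

WebSocketConnection.prototype.addCallback = function (callback) {
    this.callbacks.push(callback);
};

// Usage: register handlers from anywhere in your code
var conn = new WebSocketConnection(url);
conn.addCallback(function (event) {
    console.log(event);
});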

You can do it with addEventListener:
socket.addEventListener('message', function (event) {
    console.log('Message from server ', event.data);
});
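Because addEventListener does not overwrite previously registered listeners, you can register 'message' handlers from as many places as you like. A small sketch, assuming the same socket instance is shared between modules (message shapes below are made up):
// Sketch: each module adds its own listener to the shared `socket` (your single WebSocket instance).

// login.js
socket.addEventListener('message', function (event) {
    var msg = JSON.parse(event.data); // assuming the server sends JSON
    if (msg.type === 'login successful') {
        // fade out the login screen, show the first content screen, ...
    }
});

// email.js
socket.addEventListener('message', function (event) {
    var msg = JSON.parse(event.data);
    if (msg.type === 'new email') {
        // update the inbox, ...
    }
});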

I've constructed a CoffeeScript class to solve this problem. It's similar to @Valent's but a bit more full-featured, so I figured I'd share it. It provides "on", "off", and "clear" methods for web socket events and also provides forwarding functions for "send" and "close", so that you pretty much never have to touch the socket directly. If you do need access to the actual WebSocket object, you can get to it via superWebSocket.ws.
Edit: I added a getConnection static method to produce URL-dependent singletons. This way there's only one connection per URL, and if you attempt to create a second one, you just get the existing connection. It also protects against anyone calling the constructor directly.
Edit: I communicate across the socket in JSON. I added some code that runs JSON.stringify on any non-string passed into send, and also attempts JSON.parse on any message received by a handler.
superSockets = {}

class SuperWebSocket
  @getConnection: (url) ->
    superSockets[url] ?= new SuperWebSocket url
    superSockets[url]

  constructor: (url) ->
    if arguments.callee.caller != SuperWebSocket.getConnection
      throw new Error "Calling the SuperWebSocket constructor directly is not allowed. Use SuperWebSocket.getConnection(url)"
    @ws = new WebSocket url
    events = ['open', 'close', 'message', 'error']
    @handlers = {}
    events.forEach (event) =>
      @handlers[event] = []
      @ws["on#{event}"] = (message) =>
        if message?
          try
            message = JSON.parse message.data
          catch error
        for handler in @handlers[event]
          handler message
        null

  on: (event, handler) =>
    @handlers[event] ?= []
    @handlers[event].push handler
    this

  off: (event, handler) =>
    handlerIndex = @handlers[event].indexOf handler
    if handlerIndex != -1
      @handlers[event].splice handlerIndex, 1
    this

  clear: (event) =>
    @handlers[event] = []
    this

  send: (message) =>
    if typeof(message) != 'string'
      message = JSON.stringify message
    @ws.send message

  close: => @ws.close()
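For illustration, here's how the class might be used once compiled to JavaScript (the URL and message shape below are made up):
// Hypothetical usage of the compiled SuperWebSocket class (URL and message are illustrative).
var conn = SuperWebSocket.getConnection('ws://example.com/socket');

// Register as many 'message' handlers as you like, from anywhere in the code
conn.on('message', function (msg) {
    console.log('got', msg);
});

// Non-strings are JSON.stringify'd by send()
conn.send({ type: 'ping' });

// Asking for the same URL again returns the same connection
console.log(SuperWebSocket.getConnection('ws://example.com/socket') === conn); // true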

Related

Manually firing off signal event?

I'm wondering if there is a way to manually cause a simplepeer Peer object to fire the signal data event that fires when an initiating peer is created.
More precisely, what I want to do is create Peer objects with the initiator property set to false and then manually turn them into initiating peers.
Is this possible, or should I look elsewhere?
Update 1:
I tried creating a non-initiating peer and then setting initiator to true via a function, but that did not trigger the signal event.
So basically this:
let p = new SimplePeer();

p.on('signal', function (data) {
    console.log('SIGNAL', JSON.stringify(data));
});

const turnOn = function () {
    p.initiator = true;
};

turnOn();
*This is not the actual code, just the parts that have to do with the question.

Starting Alexa Skill in a specific state

Earlier I ran into the issue of Alexa not changing the state back to the blank state, and found out that there is a bug in doing that. To avoid this issue altogether, I decided that I wanted to force my skill to always begin with START_MODE.
I used this as my reference, where they set the state of the skill by doing alexa.state = constants.states.START before alexa.execute() at Line 55. However, when I do the same in my code, it does not work.
Below is what my skill currently looks like:
exports.newSessionHandler = {
    LaunchRequest () {
        this.handler.state = states.START;
        // Do something
    }
};

exports.stateHandler = Alexa.CreateStateHandler(states.START, {
    LaunchRequest () {
        this.emit("LaunchRequest");
    },
    IntentA () {
        // Do something
    },
    Unhandled () {
        // Do something
    }
});
I'm using Bespoken-tools to test this skill with Mocha, and when I directly feed IntentA like so:
alexa.intended("IntentA", {}, function (err, p) { /*...*/ })
The test complains, Error: No 'Unhandled' function defined for event: Unhandled. From what I gather, this can only mean that the skill, at launch, is in the blank state (because I have not defined any Unhandled for that state), which must mean that alexa.state isn't really a thing. But then that makes me wonder how they made it work in the example code above.
I guess a workaround to this would be to create an alias for every intent that I expect to have in the START_MODE, by doing:
IntentA () {
    this.handler.state = states.START;
    this.emitWithState("IntentA");
}
But I want to know if there is a way to force my skill to start in a specific state because that looks like a much, much better solution in my eyes.
The problem is that when you get a LaunchRequest, there is no state, as you discovered. If you look at the official Alexa examples, you will see that they solve this by doing what you said: making an 'alias' intent for each of their intents and using it just to change the state and then call itself via emitWithState.
This is likely the best way to handle it, as it gives you the most control over which state and intent get called.
Another option, assuming you want EVERY new session to start in the same state, is to leverage the 'NewSession' event. This event is triggered before a launch request, and all new sessions are funneled through it. Your code will look somewhat like this:
NewSession () {
    if (this.event.request.type === Events.LAUNCH_REQUEST) {
        this.emit('LaunchRequest');
    } else if (this.event.request.type === "IntentRequest") {
        this.handler.state = states.START;
        this.emitWithState(this.event.request.intent.name);
    }
};
A full example of this can be seen here (check out the Handlers.js file): https://github.com/alexa/skill-sample-node-device-address-api/tree/master/src
I would also recommend reading through this section on the Alexa GitHub: https://github.com/alexa/alexa-skills-kit-sdk-for-nodejs#making-skill-state-management-simpler
EDIT:
I took a second look at the reference you provided, and it looks like they are setting the state outside of an Alexa handler. So, assuming you want to mimic what they are doing, you would not set the state in your intent handler, but rather in the Lambda handler itself (where you create the alexa object).
exports.handler = function (event, context, callback) {
    var alexa = Alexa.handler(event, context);
    alexa.appId = appId;
    alexa.registerHandlers(
        handlers,
        stateHandlers
    );
    alexa.state = START_MODE;
    alexa.execute();
};

JavaScript: same functions, different implementation decided at runtime

What is the best way to change JavaScript implementations at run time?
I have a web application which connects to the server via SignalR.
If there is any problem connecting to the server using SignalR at runtime, I want to switch the service functions' implementations to use regular XHR instead.
I have one js file with the following functions to connect via SignalR:
function initializeConnection() {
// Initialize connection using SignalR
}
function sendEcho() {
// Sending echo message using signalR
}
And another js file with the same functions for connection via XHR:
function initializeConnection() {
// Initialize connection using XHR
}
function sendEcho() {
// Sending echo message using XHR
}
I know it is impossible to have them loaded at the same time.
I know I can use one file with a toggle within each function.
I thought maybe I can switch between these files by loading & unloading them at runtime. Is this possible? If so, is this the best way for such an issue?
What is the best way for supplying different implementations at runtime?
One way to do it is to define both implementations as objects with the same signatures and just assign the chosen namespace to a variable:
;var MyStuff = {
    // SignalR
    SignalR: {
        initializeConnection: function () { console.log('SignalR.initializeConnection()'); },
        sendEcho: function () { console.log('SignalR.sendEcho()'); }
    },
    // XHR
    XHR: {
        initializeConnection: function () { console.log('XHR.initializeConnection()'); },
        sendEcho: function () { console.log('XHR.sendEcho()'); }
    }
};

// Do whatever check you want to
var mNamespace = (1 === 2) ? MyStuff.SignalR : MyStuff.XHR;

// Call the chosen implementation
mNamespace.initializeConnection();
You can also keep them split across two files and add them both to MyStuff dynamically:
// File 1
;var MyStuff = (MyStuff === undefined) ? {} : MyStuff;
MyStuff.SignalR = {..};

// File 2
;var MyStuff = (MyStuff === undefined) ? {} : MyStuff;
MyStuff.XHR = {..};
One pattern that can help you is the "lazy function definition" or "self-defining function" pattern. As its name suggests, it consists of redefining a function at runtime. It's useful when your function has to do some initial preparatory work and needs to do it only once.
In your case, this "preparatory" work would be selecting the function that handles the client-server connection.
For instance:
var sendMessage = function () {
    // Perform a check, or try a first message using your default connection flavour.
    // Depending on the result, redefine the function accordingly:
    sendMessage = sendMessageUsingWhatever;
};

// Use sendMessage anywhere you want; it'll use the proper protocol
This pattern was particularly handy when dealing with browsers and their peculiarities:
var addHandler = document.body.addEventListener ?
    function (target, eventType, handler) {
        target.addEventListener(eventType, handler, false);
    } :
    function (target, eventType, handler) {
        target.attachEvent("on" + eventType, handler);
    };
In this case, it is used to determine which way to attach event listeners, depending on the availability (or not) of a particular method.
It has its drawbacks though. For instance, any properties you've previously added to the original function will be lost when it redefines itself.
Hope it helps or at least gives you some ideas.
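To tie this back to the question, here is a minimal hedged sketch of the lazy-definition idea applied to the SignalR/XHR split. The MyStuff.SignalR / MyStuff.XHR namespaces come from the earlier snippet; signalRIsAvailable() is a hypothetical probe, not a real API:
// Sketch only: the first call picks an implementation, then rebinds itself so later
// calls skip the check. signalRIsAvailable() is a made-up placeholder for whatever
// connectivity test you actually perform.
var impl;

var initializeConnection = function () {
    impl = signalRIsAvailable() ? MyStuff.SignalR : MyStuff.XHR;
    // Self-redefinition: from now on, call the chosen implementation directly
    initializeConnection = function () {
        return impl.initializeConnection.apply(impl, arguments);
    };
    return initializeConnection(); // also do the real work on this first call
};

var sendEcho = function () {
    // Delegates to whichever implementation initializeConnection() selected
    return impl.sendEcho.apply(impl, arguments);
};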

socket.io, adding message handler dynamically

I've happily written a Node.js server which uses socket.io to communicate with the client.
This all works well.
The socket.on('connection', ...) handler got a bit big, which made me think of an alternative way to organize my code and add the handlers in a generator function like this:
sessionSockets.on('connection', function (err, socket, session) {
    control.generator.apply(socket, [session]);
});
The generator takes an object that contains the socket events and their respective handler functions:
var config = {
    // handler for event 'a'
    a: function (data) {
        console.log('a');
    },
    // handler for event 'b'
    b: function (data) {
        console.log('b');
    }
};

function generator(session) {
    // set up socket.io handlers as per config
    for (var method in config) {
        console.log('CONTROL: adding handler for ' + method);
        // 'this' is the socket, since generator is called that way
        this.on(method, function (data) {
            console.log('CONTROL: received ' + method);
            config[method].apply(this, data);
        });
    }
};
I was hoping that this would add the socket event handlers to the socket, which it kind of does, but when any event comes in it always calls the handler added last, in this case always the b function.
Anyone have any clue what I am doing wrong here?
The problem appears because by the time the this.on callback fires (say, a few seconds after you bind it), the for loop has already finished and the method variable holds its last value.
To fix that you may use some JavaScript magic:
// set up socket.io handlers as per config
var socket = this;
for (var method in config) {
    console.log('CONTROL: adding handler for ' + method);
    (function (realMethod) {
        socket.on(realMethod, function (data) {
            console.log('CONTROL: received ' + realMethod);
            config[realMethod].apply(this, data);
        });
    })(method); // declare a function and call it immediately (passing the current method)
}
This "magic" is hard to understand when you first see it, but when you get it, the things become clear :)

PhantomJS: Ensuring that the response object stays alive in server.listen(...)

I'm using server.listen(...) from PhantomJS. I realize that it is largely experimental and shouldn't be used in production. I'm using it for a simple screenshot server that generates screenshots for a URL; it's a toy project I'm using to play around with PhantomJS. I've noticed an issue with long-running requests in particular, where the response object becomes unavailable. Here are the relevant snippets from my code:
var service = server.listen(8080, function (request, response) {
    response.statusCode = 200;
    if (loglevel === level.VERBOSE) {
        log(request);
    } else {
        console.log("Incoming request with querystring:", request.url);
    }
    var params = parseQueryString(request.url);
    if (params[screenshotOptions.ACTION] === action.SCREENSHOT) {
        getScreenshot(params, function (screenshot) {
            response.headers["success"] = screenshot.success; // <-- here is where I get the error that response.headers is unavailable. Execution pretty much stops at that point for that particular request.
            response.headers["message"] = screenshot.message;
            if (screenshot.success) {
                response.write(screenshot.base64);
            } else {
                response.write("<html><body>There were errors!<br /><br />");
                response.write(screenshot.message.replace(/\n/g, "<br />"));
                response.write("</body></html>");
            }
            response.close();
        });
    } else {
        response.write("<html><body><h1>Welcome to the screenshot server!</h1></body></html>");
        response.close();
    }
});
getScreenshot is an asynchronous method that uses the WebPage.open(...) function to open a web page; that function is also asynchronous. What seems to be happening is that when the callback passed into getScreenshot is finally called, the response object has already been deleted. I basically end up with the following error from PhantomJS:
Error: cannot access member `headers' of deleted QObject
I believe this is because the request times out and the connection is closed. The documentation mentions calling response.write("") at least once to ensure that the connection stays open. I tried calling response.write("") at the beginning of server.listen(...), and I even tried a fairly hacky solution where I used setInterval(...) to perform a response.write("") every 500 milliseconds (I even lowered it to as little as 50). I also made sure to clear the interval once I was done. However, I still get this issue.
Is this something that I'm just going to have to deal with until they make the webserver module more robust? Or is there a way around it?
I was able to figure this out. It appears that while loading certain pages with WebPage.open (for example http://fark.com and http://cnn.com), multiple onLoadFinished events are fired. This results in the callback of WebPage.open being called multiple times. So what happens is that when control comes back to the calling function, I've already closed the response, and so the response object is no longer valid. I fixed this by creating a flag before the WebPage.open function is called. Inside the callback, I check the flag to see if I've already handled a previous onLoadFinished event. Once I'm done with whatever I have to do inside the WebPage.open callback, I update the flag to show that I've finished processing. This way, spurious (at least in the context of my code) onLoadFinished events are no longer serviced.
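A minimal sketch of the guard described above (the shape of getScreenshot and the option names are illustrative, not the OP's actual code):
// Sketch of the "handle onLoadFinished only once" guard (illustrative names only).
function getScreenshot(params, callback) {
    var page = require('webpage').create();
    var handled = false; // flag created before page.open is called

    page.open(params.url, function (status) {
        if (handled) {
            // A spurious/duplicate onLoadFinished already ran; ignore it.
            return;
        }
        var success = (status === 'success');
        callback({
            success: success,
            message: success ? "" : "Failed to load " + params.url,
            base64: success ? page.renderBase64('PNG') : null
        });
        handled = true; // mark this request as processed
    });
}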
(Note that the following refers to PhantomJS 1.9.7 while the OP was likely referring to 1.6.1 or older.)
In the event that multiple onLoadFinished events are being fired, you can use page.open() instead of listening for onLoadFinished yourself. Using page.open() will wrap your handler in a private handler to ensure that your callback is only called once.
From the source:
definePageSignalHandler(page, handlers, "_onPageOpenFinished", "loadFinished");

page.open = function (url, arg1, arg2, arg3, arg4) {
    var thisPage = this;
    if (arguments.length === 1) {
        this.openUrl(url, 'get', this.settings);
        return;
    }
    else if (arguments.length === 2 && typeof arg1 === 'function') {
        this._onPageOpenFinished = function () {
            thisPage._onPageOpenFinished = null;
            arg1.apply(thisPage, arguments);
        }
        this.openUrl(url, 'get', this.settings);
        return;
    }
    // ... Truncated for brevity
This functionality is exactly the same as the other answer, exposed as part of the official API.
