connection = new WebSocket("ws://localhost:1050/join?username=test")
connection.onopen = function(){
    alert('Connection open!');
}
connection.onmessage = function(e){
    var server_message = e.data;
    alert(server_message);
}
connection.onclose = function() {
    alert("websocket closing")
}
The connection to the server is established and an alert is displayed for 'Connection open!'. However, immediately afterwards the connection closes. The server does not call close, and there seem to be no other errors in the console. This is happening in both Chrome and Firefox.
I have looked at a number of similar examples on the web, but to no avail.
To keep the WebSocket open, prevent the handler from returning by adding return false; in connection.onmessage, like this:
connection.onmessage = function(e){
    var server_message = e.data;
    alert(server_message);
    return false;
}
I believe I've stumbled across the solution that OP found but failed miserably to explain. I don't have enough reputation to comment, otherwise I'd be responding to all of the confused comments begging for clarification on OP's response.
The short version is that I think OP was referring to his server-side connection handler when he said "All I had to do was block the handler from returning before the websocket connection closes".
It turns out my server was closing the webSocket automatically because I didn't understand how a certain webSocket function worked. Specifically, I was using a Python server script with asyncio/websockets and the following code:
async def receiveCommandsLoop(player):
    while True:
        msg = await player.websocket.recv()
        print(msg)

async def handleClient(websocket, path):
    username = await websocket.recv()
    player = players[username]
    ...
    # Start task to listen for commands from player
    asyncio.get_event_loop().create_task(receiveCommandsLoop(player))

start_server = websockets.serve(handleClient, '', 8765)
The idea was that websockets.serve would use handleClient to begin the connection and do some setup, then create a new task with receiveCommandsLoop that would take over the job of communication.
But it turns out that when you call websockets.serve, the library assumes that once your handler (in this case, handleClient) returns, you are done with the socket, and it closes the connection automatically.
Thus, by the time receiveCommandsLoop was run, handleClient had already returned, and the WebSocket had been automatically closed.
I was able to fix this by simply modifying my handleClient function to directly run the loop originally contained in receiveCommandsLoop. Hope this helps someone out there.
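For reference, here is a minimal sketch of that change (same names as the snippet above, with the unrelated setup elided as "..."): the handler itself runs the receive loop, so it does not return, and the library does not close the socket, while the client is still connected.
import websockets

async def handleClient(websocket, path):
    username = await websocket.recv()
    player = players[username]  # same players dict as in the snippet above
    ...                         # the rest of the original setup, unchanged
    # Run the receive loop here instead of spawning a separate task:
    # handleClient only returns (and the connection only gets closed)
    # once the client disconnects.
    while True:
        msg = await websocket.recv()
        print(msg)

start_server = websockets.serve(handleClient, '', 8765)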
This can also happen when you are sending binary data over a WebSocket connection but one side (client or server) tries to interpret it as text; many libraries and frameworks do this unless you explicitly specify that you want binary data.
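As a hypothetical illustration (using the Python websockets library, not necessarily the setup from the question): with that library the frame type follows the Python type you pass, so mixing up text and binary on either end can lead to surprise closes.
import websockets

# Hypothetical handler showing text vs. binary frames with the Python
# "websockets" library: a str payload is sent as a text frame, bytes as binary.
async def handler(websocket, path):
    await websocket.send("hello")       # text frame
    await websocket.send(b"\x00\x01")   # binary frame

start_server = websockets.serve(handler, 'localhost', 8765)

# On the browser side, binary frames arrive as Blob objects unless you opt in:
#   connection.binaryType = "arraybuffer";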
It could also be a login problem. The WebSocket will close automatically if the site requires authentication but no authentication information was provided.
Piecing together hints from this post and others, I found a solution that works when using the Python websocket server example found everywhere, which includes something like:
async def handler(websocket, path):
    data = await websocket.recv()
    reply = f"Data received as: {data}!"
    await websocket.send(reply)
To those of us new to websockets, I think the natural assumption is that the handler function will be called each time the client sends a message, which turns out not to be the case. As others mention, the connection closes as soon as the handler function returns once. The solution I found is to change it to:
async def handler(websocket, path):
    async for data in websocket:
        reply = f"Data received as: {data}!"
        print(data)
        await websocket.send(reply)
My client-side javascript code is equivalent to the OP's and I didn't have to change anything for this to work.
Unfortunately I can't explain why async for data in websocket: makes it actually wait forever and spontaneously run the inner code block each time a message is received, but it does for me and I get all the expected log messages both on the python server side and the client javascript console.
If anyone more knowledgeable on this topic can comment on whether this is a good-for-general-use solution or if there's a gotcha to look out for here, it would be much appreciated.
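The best guess I can offer (an approximation based on the observed behavior, not the library's actual source) is that async for data in websocket: behaves roughly like a recv() loop that ends cleanly when the connection closes, which is why the handler no longer returns after a single message:
import websockets

async def handler(websocket, path):
    # Roughly what "async for data in websocket:" does under the hood:
    while True:
        try:
            data = await websocket.recv()     # wait for the next message
        except websockets.ConnectionClosed:
            break                             # client disconnected; now the handler may return
        reply = f"Data received as: {data}!"
        await websocket.send(reply)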
Fixed it!
All I had to do was block the handler from returning before the websocket connection closes
I'm trying to build a real-time notification system for users using Server-Sent Events in Python.
The issue I'm facing is that when I refresh the browser, the browser hits the EventSource URL again, and as per the docs it should send event.lastEventId as part of the headers in the next request. I'm getting None every time I refresh the page.
<!DOCTYPE html>
<html>
<head><title>SSE test</title></head>
<body>
    <ul id="list"></ul>
    <script>
        const evtSource = new EventSource("/streams/events?token=abc");
        evtSource.onmessage = function(event) {
            console.log('event', event);
            console.log('event.lastEventId', event.lastEventId);
            const newElement = document.createElement("li");
            const eventList = document.getElementById("list");
            newElement.innerHTML = "message: " + event.data;
            eventList.appendChild(newElement);
        }
    </script>
</body>
</html>
On the server side
import asyncio
from asyncio.queues import Queue

from sse_starlette.sse import EventSourceResponse
from starlette.requests import Request
from starlette.status import HTTP_200_OK

@event_router.get(
    "/streams/events",
    status_code=HTTP_200_OK,
    summary='',
    description='',
    response_description='')
async def event_stream(request: Request):
    return EventSourceResponse(send_events(request))

async def send_events(request: Request):
    try:
        key = request.query_params.get('token')
        last_id = request.headers.get('last-event-id')
        print('last_id ', last_id)  # this value is always None
        connection = Queue()
        connections[key] = connection
        while RUNNING:
            next_event = await connection.get()
            print('event', next_event)
            yield dict(data=next_event, id=next_event['id'])
            connection.task_done()
    except asyncio.CancelledError as error:
        pass
Now, as per every doc on SSE, when the client reconnects or refreshes the page it should send the last-event-id in the headers. I'm trying to read it using request.headers.get('last-event-id'), but it is always null.
Any pointers on how to get the last event ID would be helpful. Also, how would I make sure that I don't send the same events again later, once the user has already seen them? My entire logic is based on the last-event-id received by the server, so if it's None after the user has read events with IDs 1 to 4, how would I make sure on the server that I don't send those back, even though last-event-id is null for that user?
Adding browser screenshots: the first shows that the events are being received by the browser, e.g. {alpha: abc, id: 4}; the second shows that the received event is setting lastEventId correctly.
I think this is a misunderstanding. Where did you get the part that says "when client reconnects or refreshes the page it will send the last-event-id in headers"?
My understanding is that the last ID is sent only on a reconnect. When you refresh the page, that is not seen as a broken connection followed by a reconnect: you are completely restarting the whole page and forming a brand-new SSE connection to the server from the browser.
Keep in mind that your SSE URL might include a request parameter driven by something on the page. You could have two tabs open on the same "page" but with different items selected in each, so that this request parameter differs between them; they are then two different SSE connections altogether. One does not affect the other, except that you can hit the browser's maximum connection limit if you have too many. In the same way, clicking refresh is like opening a new tab: it is a totally new connection.
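If you need the last seen ID to survive a full page refresh, one option (a sketch building on the send_events() generator above, not something sse_starlette does for you) is to persist the ID on the client (for example in localStorage), append it to the EventSource URL, and have the server accept it as a fallback when the Last-Event-ID header is missing. Both the last_id query parameter and the events_since() helper below are hypothetical names for illustration.
from starlette.requests import Request

# Sketch only: "last_id" is a hypothetical query parameter the page appends to
# the EventSource URL (e.g. read back from localStorage after a refresh).
async def send_events(request: Request):
    last_id = request.headers.get('last-event-id') or request.query_params.get('last_id')
    for event in events_since(last_id):   # hypothetical helper: replay only events newer than last_id
        yield dict(data=event, id=event['id'])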
I'm creating an inactivity check for my bot, where it sends a message to the user if X minutes have passed since the last message they sent.
bot.dialog('SomeDialog',
    function(session, args){
        let text = "The text sent to the user";
        session.send(text, session.message.text);
        check(session); //The function where I send the session to do the checking
        session.endDialog();
    }
);
The check function is where the problem happens:
function check(session){
    if(!session.conversationData.talked){
        session.conversationData.talked = 1;
    }
}
When I run it, I always get
Cannot read property 'conversationData' of undefined
If I use session.conversationData.talked within bot.dialog it works, but not in the check function.
What am I doing wrong here?
Your code snippet works fine on my side; maybe you can provide the whole picture of your project for further analysis.
However, for your requirement, you can consider using the node package botbuilder-timeout.
This could be an "async" timing issue. The session on your browser / server needs to be sync'd.
Is this JS server side, or browser side? And what framework is this intended for?
I have a complicated project (not my code...) where a Flask server launches computations using SCOOP -- that is, in another thread.
I'd like to know how I can send intermediary data from my SCOOP thread to display it on my Flask web page. I am not afraid of a little Javascript.
Python websockets seems like the way to go, but I'm unsure how to use it.
Let's say I use JavaScript this way in my Flask web page to fetch my data (example from the websockets docs):
var ws = new WebSocket("ws://127.0.0.1:5678/");
var messages = document.createElement('ul');
ws.onmessage = function (event) {
    // Do stuff
}
Now, all my data is encapsulated in an object (Calibration2 -- again, not my code!). So, the following example (still from the websockets docs) does not suit me:
@asyncio.coroutine
def time(websocket, path):
    while True:
        now = datetime.datetime.utcnow().isoformat() + 'Z'
        yield from websocket.send(now)
        yield from asyncio.sleep(random.random() * 3)

start_server = websockets.serve(time, '127.0.0.1', 5678)
Because I want the handler coroutine to be part of my Calibration2 class, and to call it whenever Calibration2 wants to. But according to the websockets docs, the coroutine has to have this signature, with websocket and path. So how can I access Calibration2's internals from such a function?
I'm pretty sure my issue is mostly a misunderstanding of Python scopes. I'm not a pro (yet!). So, if you can point me in some direction, I'd be glad, thanks!
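Not a definitive answer, but one common pattern (a sketch assuming the same (websocket, path) handler signature as the example above, with a made-up latest attribute standing in for Calibration2's real data): make the handler a bound method of the class, or bind extra arguments with functools.partial, so it still matches the expected signature while having access to the object's state.
import asyncio
import functools
import websockets

class Calibration2:
    def __init__(self):
        self.latest = "no data yet"   # hypothetical stand-in for the real intermediary data

    async def handler(self, websocket, path):
        # A bound method still has the (websocket, path) shape that
        # websockets.serve expects, and "self" exposes the object's internals.
        while True:
            await websocket.send(str(self.latest))
            await asyncio.sleep(1)

calib = Calibration2()
start_server = websockets.serve(calib.handler, '127.0.0.1', 5678)

# Alternatively, bind the object to a plain function with functools.partial:
async def handler(calib, websocket, path):
    await websocket.send(str(calib.latest))

# start_server = websockets.serve(functools.partial(handler, calib), '127.0.0.1', 5678)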
I am facing a strange issue with socket.on handlers in the JavaScript client. Consider the code below:
for(var i=0;i<2;i++) {
    var socket = io.connect('http://localhost:5000/');
    socket.emit('getLoad');
    socket.on('cpuUsage',function(data) {
        document.write(data);
    });
}
Here I am basically listening for a cpuUsage event emitted by the socket server, but for each iteration I get the same value. This is the output:
0.03549148310035006
0.03549148310035006
0.03549148310035006
0.03549148310035006
Edit: Here is the server-side code; I am using the node-usage library to calculate CPU usage:
socket.on('getLoad', function (data) {
    usage.lookup(pid, function(err, result) {
        cpuUsage = result.cpu;
        memUsage = result.memory;
        console.log("Cpu Usage1: " + cpuUsage);
        console.log("Cpu Usage2: " + memUsage);
        /*socket.emit('cpuUsage',result.cpu);
        socket.emit('memUsage',result.memory);*/
        socket.emit('cpuUsage',cpuUsage);
        socket.emit('memUsage',memUsage);
    });
});
Whereas on the server side, I am getting different values for each emit. I find it very strange that this is happening. I tried setting data = null after each socket.on call, but it still prints the same value. I don't know what phrase to search for, so I posted here. Can anyone please guide me?
Please note: I am primarily a Java developer and have less experience on the JavaScript side.
You are making the assumption that when you use .emit(), a subsequent .on() will wait for a reply, but that's not how socket.io works.
Your code basically does this:
it emits two getLoad messages directly after each other (which is probably why the returning value is the same);
it installs two handlers for a returning cpuUsage message being sent by the server;
This also means that each time you run your loop, you're installing more and more handlers for the same message.
Now I'm not sure what exactly it is you want. If you want to periodically request the CPU load, use setInterval or setTimeout. If you want to send a message to the server and want to 'wait' for a response, you may want to use acknowledgement functions (not very well documented, but see this blog post).
But you should assume that for each type of message, you should only call socket.on('MESSAGETYPE', ) once during the runtime of your code.
EDIT: here's an example client-side setup for a periodic poll of the data:
var socket = io.connect(...);
socket.on('connect', function() {
    // Handle the server response:
    socket.on('cpuUsage', function(data) {
        document.write(data);
    });

    // Start an interval to query the server for the load every 30 seconds:
    setInterval(function() {
        socket.emit('getLoad');
    }, 30 * 1000); // milliseconds
});
Use this line instead:
var socket = io.connect('iptoserver', {'force new connection': true});
Replace iptoserver with the actual IP of the server, of course; in this case, localhost.
Edit: that is, if you want to create multiple clients. Otherwise you have to place the initialization of the socket variable before the for loop.
I suspected the call returns the average CPU usage since process startup, which seems to be the case here. Checking the node-usage documentation page (average-cpu-usage-vs-current-cpu-usage), I found:
By default CPU Percentage provided is an average from the starting time of the process. It does not correctly reflect the current CPU usage. (this is also a problem with linux ps utility)
But If you call usage.lookup() continuously for a given pid, you can turn on keepHistory flag and you'll get the CPU usage since last time you track the usage. This reflects the current CPU usage.
The documentation also gives an example of how to use it:
var pid = process.pid;
var options = { keepHistory: true }
usage.lookup(pid, options, function(err, result) {
});
Sorry in advance, I have a couple of questions on createReadStream() here.
Basically what I'm doing is dynamically building a file and streaming it to the user using fs once it is finished. I'm using .pipe() to make sure I'm throttling correctly (stop reading if the buffer's full, start again once it's not, etc.). Here's a sample of the code I have so far.
http.createServer(function(req, res) {
    var stream = fs.createReadStream('<filepath>/example.pdf', {bufferSize: 64 * 1024})
    stream.pipe(res);
}).listen(3002, function() {
    console.log('Server listening on port 3002')
})
I've read in another Stack Overflow question (sorry, I lost it) that if you're using the regular res.send() and res.end(), .pipe() works great, as it calls .send and .end and adds throttling.
That works fine for most cases, except that I want to remove the file once the stream is complete, and not using .pipe() means I'd have to handle throttling myself just to get a callback.
So I'm guessing that I'll want to create my own fake "res" object that has .send() and .end() methods doing what res usually does; however, in .end() I'll put additional code to clean up the generated file. My question is basically: how would I pull that off?
Help with this would be much appreciated, thanks!
The first part about downloading can be answered by Download file from NodeJS Server.
As for removing the file after it has all been sent, you can just add your own event handler to remove the file once everything has been sent.
var stream = fs.createReadStream('<filepath>/example.pdf', {bufferSize: 64 * 1024})
stream.pipe(res);

var had_error = false;
stream.on('error', function(err){
    had_error = true;
});
stream.on('close', function(){
    if (!had_error) fs.unlink('<filepath>/example.pdf');
});
The error handler isn't 100% needed, but then you don't delete the file if there was an error while you were trying to send it.