WebSocket returns 403 after 101 - javascript

I am using WebSocket (MQTT.js) to connect to AWS IoT Core.
My keep-alive is 10 seconds.
The first time, the MQTT client is able to connect to the AWS IoT Core service without problems. As the keep-alive is small, I see disconnects from time to time. MQTT.js tries to reconnect, and it is able to do so, as we can see in the photo below with the 101 response. I can see many logs in CloudWatch for AWS IoT Core (connect, publish, subscribe, ...).
After 12 reconnects, I receive a 403, as shown below in the Firefox debug tool,
with this response header:
I see nothing in CloudWatch.
When I refresh the page, the same thing repeats.
Any ideas?
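
For reference, a minimal sketch of the setup described above (the endpoint URL and topic are placeholders, not the real values, and the SigV4 URL signing or custom authorizer that AWS IoT requires for WebSocket connections is omitted):

const mqtt = require('mqtt');

// keepalive is in seconds; a value of 10 makes the broker drop the
// connection quickly when pings are missed, hence the frequent reconnects.
const client = mqtt.connect('wss://example-ats.iot.us-east-1.amazonaws.com/mqtt', {
  keepalive: 10,
  reconnectPeriod: 1000 // ms to wait before each reconnect attempt
});

client.on('connect', () => client.subscribe('my/topic'));
client.on('error', (err) => console.error('MQTT error:', err));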

Related

How do I catch ping/pong frames sent to my ws listener from the server?

The ws package for Node.js hides incoming ping frames by default and silently responds to them with pong frames.
How can I catch these incoming ping frames and take note of them?
You just listen for a ping event: https://github.com/websockets/ws/blob/master/doc/ws.md#event-ping
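
A minimal sketch of that event in use (the URL is a placeholder):

const WebSocket = require('ws');

const ws = new WebSocket('wss://example.com');
ws.on('ping', (data) => {
  // ws still answers with a pong automatically; this listener only observes.
  console.log('ping received:', data.toString());
});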
The real answer here is RTFM.
You need a Node app for that. The app and your front-end (FE) will have open WebSockets via which they will communicate.
Conceptually, you run a Node server and open a WebSocket endpoint on it. Then you serve your FE to users. The FE in the user's browser opens a connection back to the server via the WebSocket. The server sends/pushes messages to the FE via this open channel, and the client can also send messages to the app.
WebSockets differ from simple requests in that you can PUSH data to the FE; with simple requests, the FE can only PULL data from the server.
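
A minimal sketch of that pattern, assuming the ws package (the port and messages are illustrative):

const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8080 });
wss.on('connection', (socket) => {
  socket.send('hello from the server'); // the server PUSHES unprompted
  socket.on('message', (msg) => console.log('from FE:', msg.toString())); // the FE can send too
});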

Browser closes WebSocket connection with error 1006 one second after it is established when I'm behind OpenVPN

I have a small HTTP + WebSocket server hosted on an Amazon VPS. index.html has JS code to connect to the WebSocket server and exchange data with it. When I connect to my server directly using the public IP or domain name, everything works fine.
However, I don't want this server to be public, so I configured OpenVPN to connect to it privately.
Sometimes everything works as expected over OpenVPN: when I enter the local (inside-VPN) server's IP address in my browser (Chrome or Opera), it successfully loads index.html, connects to my WebSocket server, and successfully exchanges data over the WebSocket connection.
But sometimes (or on some days), one second after the WebSocket connection is established, it is closed by the browser with error code 1006 and without any description. My script tries to reconnect the WebSocket one second later, but the result is the same every time.
I can't figure out why everything sometimes works and why at other times I can't use WebSocket over OpenVPN for several hours.
Can somebody explain why error 1006 occurs when using WebSocket over OpenVPN, and how to eliminate it in code or by reconfiguring Chrome, Opera, or OpenVPN?
I discovered that the problem only occurs when either side of the WS connection sends a large message.
I guess that if there is some middleware like a VPN, firewall, or proxy between the browser and the WebSocket server, a large WS message can exceed some internal packet size or limit of that middleware, and it will interrupt the connection between browser and server during the transfer. This unexpected disconnect results in error 1006 in your browser.
If your clients experience unexpected disconnects with error 1006, try to minimize the WebSocket message sizes in your API. If you need to send large amounts of data, don't send them in one chunk; you are better off slicing the data and sending multiple short messages, as in the sketch below.
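
A minimal sketch of that slicing idea (the chunk size and end-of-message marker are illustrative choices, not part of the original answer):

const CHUNK_SIZE = 16 * 1024; // 16 KiB per message; tune for your middleware

function sendInChunks(ws, data) {
  for (let offset = 0; offset < data.length; offset += CHUNK_SIZE) {
    ws.send(data.slice(offset, offset + CHUNK_SIZE));
  }
  ws.send('EOF'); // application-level marker so the receiver can reassemble
}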

WebRTC ICE failed, signaling process

I have some trouble with my WebRTC application. The app is simple: just a client and a receiver. The client should send its media (camera, audio, or screen) to the receiver, and the receiver should then show the incoming stream.
The trouble starts with the signaling process, I guess. The console just says:
ICE failed, see about:webrtc for more details
To develop this app I followed the tutorial on Felix Hagspiel's blog, but I am unable to make it work on my system and share the media.
I did not define a STUN server because I'm on the same machine and the same network, so it should not need to do NAT traversal to discover the peers' IPs; I read this in a question here on Stack Overflow.
I do not want somebody to write my app for me; I only need to know where it could be failing so I can try to resolve it myself. I have been searching for information for about 4 days, but nothing has helped.
I am using Firefox 45.0.1 on Linux (Manjaro) for both the client and the receiver. The signaling server is a Node.js app.
The code is in pastebins:
Websocket Server
Receiver
Client
webrtc.js
Lastly, the about:webrtc log: Log. The file Cliente_New.html is the client and the file Pi_new.html is the receiver.
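
For reference, a minimal sketch of the configuration the question describes: a peer connection created with no ICE servers, relying on host candidates on the local network (the logging is an illustrative addition):

const pc = new RTCPeerConnection({ iceServers: [] });

// Watching the state transitions helps narrow down where ICE fails.
pc.oniceconnectionstatechange = () => {
  console.log('ICE state:', pc.iceConnectionState);
};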

WebSockets not working in my Rails app when run on a Unicorn server, but working on a Thin server

I'm learning Ruby on Rails to build a real-time web app with WebSockets on Heroku, but I can't figure out why the websocket connection fails when running on a Unicorn server. I have my Rails app configured to run on Unicorn both locally and on Heroku using a Procfile...
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
...which I start locally with $ foreman start. The failure occurs when creating the WebSocket connection on the client in JavaScript...
var dispatcher = new WebSocketRails('0.0.0.0:3000/websocket'); //I update the URL before pushing to Heroku
...with the following error in the Chrome JavaScript console: 'WebSocket connection to ws://0.0.0.0:3000/websocket failed. Connection closed before receiving a handshake response.'
...and when I run it on Unicorn on Heroku, I get a similar error in the Chrome JavaScript console: 'WebSocket connection to ws://myapp.herokuapp.com/websocket failed. Error during WebSocket handshake: Unexpected response code: 500.'
The stack trace in the Heroku logs says: RuntimeError (eventmachine not initialized: evma_install_oneshot_timer)
What's strange is that it works fine when I run it locally on a Thin server using the command $ rails s.
I've spent the last five hours researching this problem online and haven't found the solution. Any ideas for fixing this, or even ideas for getting more information out of my tools, would be greatly appreciated!
UPDATE: I found it strange that websocket-rails only supported EventMachine-based web servers, while faye-websocket, which websocket-rails is based upon, supports many multithread-capable web servers.
After further investigation and testing, I realised that my earlier assumption had been wrong. Instead of requiring an EventMachine-based web server, websocket-rails appears to require a multithread-capable web server (so no Unicorn) which supports rack.hijack. (Puma meets these criteria while being comparable in performance to Unicorn.)
With this assumption, I tried solving the EventMachine not initialized error using the most direct method, namely, initializing EventMachine, by inserting the following code in an initializer config/initializers/eventmachine.rb:
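# Start the EventMachine reactor in a background thread unless it is already running.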
Thread.new { EventMachine.run } unless EventMachine.reactor_running? && EventMachine.reactor_thread.alive?
and.... success!
I have been able to get Websocket Rails working on my local server over a single port using a non-EventMachine-based server without Standalone Server Mode. (Rails 4.1.6 on ruby 2.1.3p242)
This should be applicable on Heroku as long as you have no restriction in web server choice.
WARNING: This is not an officially supported configuration for websocket-rails. Care must be taken when using multithreaded web servers such as Puma, as your code and that of its dependencies must be thread-safe. A (temporary?) workaround is to limit the maximum threads per worker to one and increase the number of workers, achieving a system similar to Unicorn.
Out of curiosity, I tried Unicorn again after fixing the above issue:
The first websocket connection was received by the web server (Started GET "/websocket" for ...) but the state of the websocket client was stuck on connecting, seeming to hang indefinitely. A second connection resulted in HTTP error code 500, along with app error: deadlock; recursive locking (ThreadError) showing up in the server console output.
By the (potentially dangerous) action of removing Rack::Lock, the deadlock error can be resolved, but connections still hang, even though the server console shows that the connections were accepted.
Unsurprisingly, this fails. From the error message, I think Unicorn is incompatible for reasons related to its network architecture (threading/concurrency). But then again, it might just be some bug in this particular Rack middleware...
Does anyone know the specific technical reason for why Unicorn is incompatible?
ORIGINAL ANSWER:
Have you checked the ports for both the web server and the WebSocket server, and their debug logs? Those error messages sound like they are connecting to something other than a WebSocket server.
A key difference between the two web servers you have used is that one (Thin) is EventMachine-based and one (Unicorn) is not. The Websocket Rails project wiki states that a Standalone Server Mode must be used for non-EventMachine-based web servers such as Unicorn (which would require an even more complex setup on Heroku, as it requires a Redis server). The error message RuntimeError (EventMachine not initialized: evma_install_oneshot_timer) suggests that standalone mode was not used.
Heroku, AFAIK, only exposes one internal port (provided as an environment variable) externally as port 80. A WebSocket server normally requires its own socket address (port number), which can be worked around by reverse-proxying the WebSocket server. Websocket-Rails appears to get around this limitation by hooking into an existing EventMachine-based web server (which Unicorn does not provide) via Rack hijacking.

Web socket in VPC behind load-balancer giving errors

When I connect and send some sockets to my Linux Node.js server, which sits inside a VPC and behind a load balancer, I get an unusually long delay, followed by: WebSocket connection to [address] failed: Connection closed before receiving a handshake response
And then a few seconds later I get responses for all the sockets I sent, and then everything works fine. No long delays.
But on this initial connect there's a horrible wait, followed by this error message. Everything still works; it just takes a while.
I'm using Amazon Web Services EC2 load balancers and AWS VPCs.
When I access the same server directly, I get no delays.
I was unable to connect to my server with just a load balancer, and I was also unable to connect with just a VPC, so I can't isolate the problem to either the load balancer or the VPC.
What's going on?
The correct answer was Michael's comment, which I marked as helpful.
The first person who puts this into answer format gets the points.
The health of the connection from the load balancer to the server is determined by the way your Health Check is set up.
Try setting it up differently, e.g. use a TCP-based Health Check rather than an HTTP-based one, and change the thresholds.
If you see some different behaviour, you'll know that the Health Check is the issue.
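
A hedged sketch of that change using the AWS SDK for JavaScript and the classic ELB API (the load balancer name, target, and thresholds are illustrative):

const AWS = require('aws-sdk');
const elb = new AWS.ELB({ region: 'us-east-1' });

elb.configureHealthCheck({
  LoadBalancerName: 'my-load-balancer', // placeholder name
  HealthCheck: {
    Target: 'TCP:8080',   // TCP check instead of, say, HTTP:8080/health
    Interval: 30,
    Timeout: 5,
    UnhealthyThreshold: 2,
    HealthyThreshold: 2
  }
}, (err, data) => {
  if (err) console.error(err);
  else console.log('health check updated');
});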
It is hard to know exactly without debugging, but note that there are issues with using an Elastic Load Balancer for WebSockets: it parses HTTP requests (unless in TCP mode), and it has a 60-second idle connection timeout.
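
A minimal browser-side sketch of working around that idle timeout with an application-level keepalive (the URL and the 30-second interval are illustrative, and the server must tolerate these extra messages):

const socket = new WebSocket('wss://my-elb.example.com/ws');

socket.addEventListener('open', () => {
  // Send something every 30 s to stay under the 60 s idle timeout.
  const timer = setInterval(() => {
    if (socket.readyState === WebSocket.OPEN) socket.send('ping');
  }, 30000);
  socket.addEventListener('close', () => clearInterval(timer));
});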
