I want to limit the users who connect to my live streaming page, so that users can't share passwords and log in multiple times under the same account from different locations.
I don't mind if one user logs in on two different devices at his home, like a computer and a Google TV, for example. This makes me think restricting based on source IP address is the right way to handle this.
The problem is that if the user logs in and I record their IP address, restricting them to that one IP means they can't change locations.
With HTTP, after they log in, the connection is gone, so I've lost track of whether they are still watching the stream from the CDN or not.
That makes me think I should use JavaScript on the client to disconnect the player if the user logs in from another location.
This means I need to have a way to communicate to logged in clients in a reasonably scalable way.
Can you suggest an appropriate way to handle this problem? I have the feeling there must be a simple and scalable solution for this.
It really depends on how sensitive you are to people bypassing your access controls.
If you're ok with some people bypassing them, then you can perform access control on the client side and have the client ping the server every 60 seconds or so to tell the server that it's still streaming. On the server, store the user's IP address in an expiring queue, so the IP address expires out of the queue in, say, 3 minutes if the player stops pinging. By pinging the server, I mean sending a simple HTTP GET request to keep the session open. When the user closes the client, the pings stop and your server expires that client's IP address after 3 minutes. At that point the user could log in from a different IP address.
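A sketch of that expiring-queue idea in Node.js. The 60-second ping and 3-minute expiry come from the description above; the function and variable names are illustrative, not from any particular library:

```javascript
// Expiring-queue sketch: each keep-alive ping refreshes a user's session;
// a session with no ping for EXPIRY_MS is considered gone.
const EXPIRY_MS = 3 * 60 * 1000; // 3 minutes, as suggested above
const activeSessions = new Map(); // userId -> { ip, lastPing }

// Called on each keep-alive GET from the player (every ~60 seconds).
function recordPing(userId, ip, now = Date.now()) {
  activeSessions.set(userId, { ip, lastPing: now });
}

// Called when a user tries to start a stream from some IP. Returns true
// if there is no other, still-live session for this user elsewhere.
function mayConnect(userId, ip, now = Date.now()) {
  const session = activeSessions.get(userId);
  if (!session || now - session.lastPing > EXPIRY_MS) return true; // expired or absent
  return session.ip === ip; // same location is fine
}
```

The ping endpoint would call recordPing on every keep-alive GET, and the stream-start handler would call mayConnect before handing out the stream URL.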
It's important to keep in mind that savvy users would be able to watch the networking events in a browser like Chrome and see where your content is being served from and easily bypass any restrictions because you have no control over the CDN itself.
If you need stricter control, you'll need to serve the content from your server. Then you'll absolutely know when a client has stopped accessing a stream.
There might be a middle ground. If you're worried about streaming speed without a CDN, consider taking a look at CloudFlare.com. CloudFlare is a CDN layer that sits in front of all of your HTTP requests, even the dynamic ones. Static requests are served from the edge, like a normal CDN, but dynamic requests are reverse-proxied through CloudFlare's network back to your server each time. If you set up your streaming requests to look like dynamic content to CloudFlare, you would gain the potential benefit of streaming over a low-latency, high-bandwidth connection to the edge while still being able to track users individually.
I have not been able to get an answer to this anywhere online. I want to remove possible jitter from my nodejs server. I am using socket.io to create connections to node.
If a user goes to a specific part of my website, a connection is started. However, if the user refreshes the site too quickly and often, the connection is created very frequently, and issues arise with my server.
While I realize this could be solved a couple of different ways, I am hoping there is a server-side solution. Meaning: whenever a user connects, make sure the user stays connected for at least 5 seconds before moving on; otherwise, disconnect the user. Thanks for any insight!
First off, a little background. With a default configuration, when a socket.io connection starts, it first makes 2-5 HTTP requests and then, once it has established the "logical" connection, it tries to upgrade to the webSocket transport. If that succeeds, it keeps that webSocket connection as a long-lasting connection and sends socket.io packets over it.
If the client refreshes in the middle of the transition to a webSocket connection, it creates a period of unknown state on the server: the server isn't sure whether the user is still mid-transition to a lasting webSocket connection, has left entirely, is having some sort of connection issue, or is doing some refresh thing. You can easily end up with the server thinking there are multiple connections, all from the same user, all still in the process of being confirmed. It can be a bit messy if your server is sensitive to that kind of thing.
The quickest thing you can do is to force the connection process to go immediately to the webSocket transport. You can do that in the client by adding an option to your connection code:
let socket = io(yourURL, {transports: ["websocket"]});
You can also configure the server to only accept webSocket connections if you're trying to protect against other types of connections besides those from your own web pages.
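For the server side, a minimal configuration sketch (assuming Socket.IO v4; the port number and handler are illustrative):

```javascript
// Sketch: a Socket.IO v4 server configured to accept only the webSocket
// transport, skipping the initial HTTP long-polling phase entirely.
const { Server } = require("socket.io");

const io = new Server(3000, {
  transports: ["websocket"], // refuse long-polling connections
});

io.on("connection", (socket) => {
  // With the option above, this is always the webSocket transport.
  console.log("connected via", socket.conn.transport.name);
});
```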
This will then go through the usual webSocket connection process, which starts with a single HTTP request that is then "upgraded" to the webSocket protocol. One connection, one socket. The server will know right away whether the user is or isn't connected. And once they've switched over to the webSocket protocol, the server will know immediately if the user hits refresh, because the browser will close the webSocket immediately.
The "start with http first" feature in socket.io is largely present because, in the early days of webSockets, some browsers didn't yet support them and some network infrastructure (like corporate proxies) didn't always support webSocket connections. The browser issue is completely gone now: all browsers in use support webSocket connections. I don't personally have any data on the corporate proxy issue, but I don't hear about problems with webSockets these days, so I don't think that's much of an issue any more either.
So, the above change will get you a quick, confirmed connection and get rid of the confusion around whether a user is or isn't connected early in the connection process.
Now, if you still have users messing things up with rapid refreshes, you probably need to implement some protection on your server. If you cookie each user that arrives on your server, you can create middleware that keeps track of how many page requests have come from the browser with that cookie in some recent time interval, and returns an error page explaining that they can't make requests that quickly. I would implement this at the web-page level, not the webSocket level, as that gives users better feedback to stop hitting refresh. If it's really refreshes you're trying to protect against, and not general navigation on your site, then keep a record of the cookie + URL combination, and if you see even two of those within a few seconds, return the error page instead of the expected content. Redirecting to an error page forces a more conscious action to get back to the right page before they can reach the content.
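The cookie-plus-URL check described above can be sketched as a small pure function. The 3-second window, the cookie id, and the return values are assumptions for illustration; in practice this would sit inside Express-style middleware that reads the cookie and either renders the page or the error page:

```javascript
// Rapid-refresh guard: two hits on the same cookie + URL within WINDOW_MS
// get the error page instead of the content.
const WINDOW_MS = 3000; // "within a few seconds"
const recentHits = new Map(); // `${cookieId}|${url}` -> timestamp of last hit

function refreshGuard(cookieId, url, now = Date.now()) {
  const key = `${cookieId}|${url}`;
  const last = recentHits.get(key);
  recentHits.set(key, now);
  // First hit, or last hit long enough ago: serve the real content.
  return last !== undefined && now - last < WINDOW_MS ? "error-page" : "content";
}
```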
I am creating an online game.
My game will dynamically create servers (websockets) using hourly server hosting, and the amount of servers / lobbies will expand / shrink depending on the number of players online. It is not very feasible to have pre-created servers constantly running. I then have an IP address for each server.
The problem comes when I want to connect to the game IP from my website. I can't connect unless I disable CORS protection and also disable HTTPS on my website. I could do this to begin with, but it isn't great in the long term, both for security reasons and because the browser flags the page as insecure when it is served over HTTP.
The other option is to create a subdomain on my domain for each game server IP and set up HTTPS. The problem with this is that DNS could take hours to update, which is a problem because I am dynamically creating and destroying servers.
The final option is to create a single proxy server (possibly spread across multiple nodes using load balancing) that forwards messages from the client to the actual individual game servers. The problem with this is the extra latency it adds, which is obviously a problem for games.
How can I achieve a secure WebSocket connection to an IP address in the browser, or is there a better compromise than what I've come up with? With WebAssembly and browser OpenGL on the way, I would expect something like this to have been thought through and some solution to exist.
I have logic on my server that mostly makes curl requests (e.g. accessing social networks). However, some of the sites will start blocking my servers' IPs soon.
I can, of course, use a VPN or deploy multiple servers per location, but it won't be accurate, and some of the networks might still block the user's account.
I am trying to find a creative solution to run it from the user's browser (it is OK to ask for permission, as it is an action the user is explicitly trying to execute), though I am trying to avoid extra installations (e.g. downloadable plugins/extensions or a desktop app).
Is there a way to turn the client's browser into a server proxy, to run those curl calls from their machine instead of sending them from my own server (e.g. using WebSockets, polling, etc.)?
It depends on exactly what sort of curl requests you are making. In theory, you could simulate these using an XMLHttpRequest. However, for security reasons these are generally not allowed to access resources hosted on a different site. (Imagine the sort of issues it could cause for instance if visiting any website could cause your browser to start making requests to Facebook to send messages on your behalf.) Basically it will depend on the Cross-origin request policy of the social networks that you are querying. If the requests being sent are intended to be publicly available without authentication then it is possible that your system will work, otherwise it will probably be blocked.
Say I want to limit the number of connections from a single device to my WebSocket server. For this I can compare IP addresses and reject duplicate IPs (if the maximum is 1 connection per device). But if two devices try to connect to my server from the same network IP, the second one will be rejected.
Is there another way to identify a single device and reject it if it exceeds the maximum number of WebSocket connections?
I am using Node.js with websocket/ws module.
The only real solution here is to require an account login before granting any meaningful access to your service, and to rigorously control how accounts are created and handed out. Then you can easily prevent multiple access from the same account by checking whether the account is currently logged in. This approach also lets you implement rate limiting, abuse detection, and even account banning (either temporary or permanent) when repeated abuse is detected.
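A minimal sketch of that account check, suitable for use alongside the websocket/ws module mentioned in the question. The names and the limit of one connection per account are illustrative; it returns the oldest socket so the caller can close it, which is usually friendlier than refusing the new connection:

```javascript
// Per-account connection limiting: track live sockets per account and,
// when the limit is exceeded, hand back the oldest socket for closing.
const MAX_PER_ACCOUNT = 1;
const accountConnections = new Map(); // accountId -> array of sockets (oldest first)

// Call when an authenticated client connects. Returns the socket that
// should be closed (the oldest) if the limit was exceeded, else null.
function registerConnection(accountId, socket) {
  const list = accountConnections.get(accountId) || [];
  list.push(socket);
  accountConnections.set(accountId, list);
  if (list.length > MAX_PER_ACCOUNT) return list.shift(); // evict the oldest
  return null;
}
```

With ws, the caller would do something like `const old = registerConnection(id, ws); if (old) old.close();` inside the connection handler, plus matching cleanup on close.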
IP address is not a meaningful solution because you don't have any way of knowing what the real IP address is of the client when they are behind a proxy or NAT and trying to use the IP address in that case can end up falsely blocking lots of legitimate users because they may all share one common internet IP address even though they all have their own private IP address on their own local network. Contrary to what Myst wrote, your load balancer layer can't reliably get access to the real client IP address either so this isn't an issue of application layer vs. network layer. It's an issue where your end of the connection (at all layers) does not necessarily have access to the real client IP address (because of client-side proxies and NAT).
If you just want to erect some obstacles to prevent casual or accidental login from the same browser, and there's some reason you don't want to require account login, then you can cookie the browser upon first access and, on subsequent accesses, check whether a browser with that cookie is already connected. This is not the least bit secure. It's trivial to defeat (just use a second browser, a second computer, or disable cookies), but it does prevent users from accidentally doing this and may even keep some non-sophisticated users from doing it on purpose. Still, the cookie protection is, at best, a weak obstacle.
I'm aware that this might not be the answer you want, but it is the answer you will probably get when all is said and done:
Connection limiting (which is in essence a security-related concern) cannot be safely resolved by the application layer. It should be resolved by the proxy (or, sometimes, the load balancer) layer, and even that approach isn't fully reliable (due to limited data and the multitude of intermediaries involved in internet connections).
Consider that the remote IP your application has access to isn't the real IP address.
When collecting the data from the socket layer, the application will have access to the proxy's IP address, the host's, or some other intermediary's... but not the client's actual IP address.
When collecting data from the HTTP layer (e.g. a forwarding header such as X-Forwarded-For), the data collected isn't reliable: it's provided by the client and can be easily tampered with, forged, or spoofed.
Simply put, the application doesn't have enough data to actually implement this security measure.
In other words, this layer of security should be implemented before your application is exposed to the incoming connection, not by the application layer itself.
The solution suggested by #NewToJS is by far the best approach an application can muster: limiting access by using a login system, where only registered users can establish a persistent connection and each user has a limit on the number of possible connections (often disconnecting the oldest connection is better than refusing the new one).
We have a Rails website where users take online quizzes. 20-30% of the time, users report that they were unable to complete the quiz due to an internet disconnection. Is there any way to track how many times a disconnection occurred while a user was on a particular page?
It depends on how and why you want to track it. On the client side, you can use the window.navigator.onLine property and the browser's online/offline events, and attempt to log to analytics when the user is online again to get an idea of how frequently this happens - MDN.
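A client-side sketch of that idea, with the counting kept in a small helper; the event wiring and the analytics endpoint in the comments are assumptions, not part of the answer:

```javascript
// Track how many times the browser went offline while the page was open,
// and report the count once connectivity returns.
function makeDropTracker() {
  let dropCount = 0;
  return {
    onOffline() {
      dropCount += 1; // browser lost connectivity
    },
    onOnline() {
      const drops = dropCount; // report and reset on reconnect
      dropCount = 0;
      return drops;
    },
  };
}

// In the browser you would wire it up roughly like this (illustrative):
// const tracker = makeDropTracker();
// window.addEventListener("offline", () => tracker.onOffline());
// window.addEventListener("online", () =>
//   navigator.sendBeacon("/api/connectivity", String(tracker.onOnline())));
```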
If you wanted to know on the server side, depending on the resources available to you, you may want to create a websocket (MDN) connection to the client. The websocket is an ongoing connection between your Rails app and the client's browser, and any disconnection should be noticeable on the server side, where you can keep track of it. There are existing libraries for using websockets with Rails, but remember that this option will likely take up more server resources, as it requires persistent connections with all the online users of your site. On the other hand, you could transmit all your application data over this channel, saving the other connections that might otherwise be required.
Another option, which would require fewer resources but possibly more work on your part, would be to have a script on the client that polls the server in some way to let it know it's still online. You could link that keep-alive request to the user's information and determine a reasonable timeout if one doesn't arrive, which may require a scheduled task on the server.
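The server side of that keep-alive option could be sketched like this. The 30-second timeout and the function names are assumptions; the sweep would run from the scheduled task mentioned above:

```javascript
// Keep-alive bookkeeping: each ping stores a timestamp per user; a periodic
// sweep marks users offline when their last ping is older than the timeout.
const PING_TIMEOUT_MS = 30 * 1000;
const lastSeen = new Map(); // userId -> timestamp of last keep-alive

// Handler for the client's keep-alive request.
function keepAlive(userId, now = Date.now()) {
  lastSeen.set(userId, now);
}

// Run periodically (e.g. from a scheduled task). Returns the users whose
// keep-alives have timed out, removing them from the live set.
function sweepDisconnected(now = Date.now()) {
  const gone = [];
  for (const [userId, t] of lastSeen) {
    if (now - t > PING_TIMEOUT_MS) {
      gone.push(userId); // likely disconnected mid-quiz
      lastSeen.delete(userId);
    }
  }
  return gone;
}
```

Each user in the returned list could be recorded as a disconnection event against the page they were on.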