WebSockets: How to identify devices connected through same network? - javascript

Say I want to limit the number of connections from a single device to my WebSocket server. For this I can compare IP addresses and reject duplicate IPs (if the maximum is 1 connection per device). But if two devices try to connect to my server from the same network IP, the second one will be rejected.
Is there another way to identify a single device and reject it if it exceeds the maximum number of WebSocket connections?
I am using Node.js with websocket/ws module.

The only real solution here is to require an account login before granting any meaningful access to your service and to rigorously control how accounts are created and handed out. Then, you can easily prevent multiple connections from the same account by just checking whether the account is already and currently logged in. This approach also allows you to implement rate limiting, service abuse detection and even account banning (either temporary or permanent) when repeated abuse is detected.
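For example, with the websocket/ws module mentioned in the question, a minimal sketch of this check might look like the following (the `token` query parameter and the `verifyUser()` helper are made-up placeholders for whatever login/session mechanism you actually use):

```js
// Sketch only: one live WebSocket per account, rejecting a second login.
// The `token` query parameter and verifyUser() are placeholders for your
// real login/session mechanism.
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8080 });
const activeUsers = new Map(); // userId -> currently connected socket

wss.on('connection', (ws, req) => {
  const token = new URL(req.url, `http://${req.headers.host}`).searchParams.get('token');
  const userId = verifyUser(token); // placeholder: resolve token -> account id

  if (!userId) {
    ws.close(4001, 'login required');
    return;
  }
  if (activeUsers.has(userId)) {
    ws.close(4002, 'account already connected');
    return;
  }

  activeUsers.set(userId, ws);
  ws.on('close', () => activeUsers.delete(userId));
});
```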
IP address is not a meaningful solution because you don't have any way of knowing what the real IP address is of the client when they are behind a proxy or NAT and trying to use the IP address in that case can end up falsely blocking lots of legitimate users because they may all share one common internet IP address even though they all have their own private IP address on their own local network. Contrary to what Myst wrote, your load balancer layer can't reliably get access to the real client IP address either so this isn't an issue of application layer vs. network layer. It's an issue where your end of the connection (at all layers) does not necessarily have access to the real client IP address (because of client-side proxies and NAT).
If you just want to erect some obstacles to prevent casual or accidental login from the same browser and there's some reason you don't want to require account login, then you can cookie the browser upon first access and then, on subsequent accesses, check whether a browser with that cookie is already connected. This is not the least bit secure. It's trivial to defeat (just use a second browser, a second computer or disable cookies), but it does prevent users from accidentally doing this and may even keep some non-sophisticated users from doing it on purpose. Because it is so easy to defeat, the cookie protection is, at best, a weak obstacle.
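A rough sketch of that weak cookie obstacle, assuming the page already received a `clientId` cookie on its first HTTP access (the cookie name is invented for illustration) and that the WebSocket handshake carries it back:

```js
// Sketch of the weak cookie-based obstacle: assumes the page already set a
// `clientId` cookie on first HTTP access (name invented for illustration)
// and that the handshake request carries it back in the Cookie header.
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8081 });
const connectedClientIds = new Set();

function readCookie(req, name) {
  const match = (req.headers.cookie || '').match(new RegExp(name + '=([^;]+)'));
  return match ? match[1] : null;
}

wss.on('connection', (ws, req) => {
  const clientId = readCookie(req, 'clientId');

  // No cookie, or that browser is already connected: turn it away.
  // Trivial to defeat (second browser, cookies disabled), as noted above.
  if (!clientId || connectedClientIds.has(clientId)) {
    ws.close(4003, 'browser already connected');
    return;
  }

  connectedClientIds.add(clientId);
  ws.on('close', () => connectedClientIds.delete(clientId));
});
```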

I'm aware that this might not be the answer you want, but it is the answer you will probably get when all is said and done:
Connection limiting (which is in essence a security-related concern) cannot be safely resolved by the application layer. It should be resolved by the proxy (or, sometimes, the load balancer) layer, and even this approach isn't fully reliable (due to limited data and the multitude of intermediaries involved in internet connections).
Consider that the remote IP your application has access to isn't the real IP address.
When collecting the data from the socket layer, the application will have access to the proxy's IP address, the load balancer's, or some other intermediary's... but not the client's actual IP address.
When collecting data from the HTTP layer (e.g. forwarded-for headers, as opposed to the socket's remoteAddress), the data collected isn't reliable. It's provided by the client and can easily be tampered with, forged or spoofed.
Simply put, the application doesn't have enough data to actually implement this security measure.
In other words, this layer of security should be implemented before your application is exposed to the incoming connection, not by the application layer itself.
The solution suggested by @NewToJS is by far the best approach an application can take: limiting access by using a login system, where only registered users can establish a persistent connection and each user has a limit on the number of possible connections (often disconnecting the oldest connection is better than refusing the new one).
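As a sketch of that "disconnect the oldest" variant (not code from either answer; the `authenticate()` helper and the limit of 2 are invented for illustration):

```js
// Sketch of the "limit per user, disconnect the oldest" idea. The
// authenticate() helper and the limit of 2 are invented for illustration.
const { WebSocketServer } = require('ws');

const MAX_SOCKETS_PER_USER = 2;
const socketsByUser = new Map(); // userId -> sockets, oldest first

const wss = new WebSocketServer({ port: 8082 });

wss.on('connection', (ws, req) => {
  const userId = authenticate(req); // placeholder for your login check
  if (!userId) {
    ws.close(4001, 'login required');
    return;
  }

  const sockets = socketsByUser.get(userId) || [];
  sockets.push(ws);
  socketsByUser.set(userId, sockets);

  // Over the limit: drop the oldest connection instead of refusing this one.
  if (sockets.length > MAX_SOCKETS_PER_USER) {
    sockets.shift().close(4000, 'replaced by a newer connection');
  }

  ws.on('close', () => {
    const remaining = (socketsByUser.get(userId) || []).filter((s) => s !== ws);
    socketsByUser.set(userId, remaining);
  });
});
```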

Related

Maintain communication between two clients even if their IP addresses change

I'm trying to figure out if it is possible to have something like this scenario:
Say we have two people, Alice and Bob. Alice wants to send some data (doesn't matter what this data is) to Bob, and vice versa. I know that WebRTC can be used to serverless-ly exchange messages, but that requires Alice and Bob knowing each other's IP addresses. Now, it's relatively easy for Alice and Bob to share their IP addresses once, to initialize a connection, but what happens if one of them happens to connect to a different network; maybe Bob is in a coffee shop, for instance, and his IP address is thus different? The previously initialized connection wouldn't be to his current IP address, so they'd have to reinitialize the connection; but how?
It would seem to me that there would already need to be some sort of preexisting communication between the two so they could share their IP addresses, but then why not just communicate through the method they communicate their IP addresses instead? Alternatively, there could be a server that connects the two, but that defeats the serverless part of the system.
So, is there any way to maintain communication between two clients even if they happen to change networks, and thus IP addresses? Perhaps there is a more fixed method of identifying devices than their IP addresses, like I've seen in this SO answer, but it's years old, so maybe there's something new? I'd be implementing this in JS, across multiple devices/OSs, so that answer probably wouldn't work. Any ideas/examples would be greatly appreciated; I mainly want to know if this is even possible, and, if so, how.
Simple Answer
No. While this is technically possible (through QUIC connection migration, see below), it is unrealistic and rarely needed. See my longer explanation below.
Longer Answer/Question to think about
In short, this is not a needed feature of WebRTC. Let me ask you a question:
Alice and Bob are in a data channel, exchanging chat messages over WebRTC. To create a WebRTC connection, you need to use an ICE server (first link, second link), whose job, to quote Wikipedia, is:
...to find ways for the two computers to talk to each other as directly as possible...
This means that it will use Alice's current IP address to make an offer to Bob through a STUN or TURN server. If, like you said, Alice were to change IP addresses, she would need to change location. That means she would need to move far enough for her IP address to change; in practice, that probably means she gets in a car and drives somewhere, or calls an Uber or a cab, or rides her bike. In most of these scenarios she will need to close her computer, which ends the p2p connection. If, by some weird wizardry, she doesn't close her computer or the connection, the browser will very likely refresh, hence re-connecting to the WebRTC data channel from the new IP address. So when will you actually need to create a WebRTC data channel and handle IP changes? Long explanation coming to a conclusion: clients changing IP addresses without ending/resetting the connection just doesn't happen in practice.
If you want to look into other alternatives, here are some examples:
ALTERNATIVES
Adding an IP event listener ★
Now this isn’t an actual global variable that you can check, but you can use an online API (some are listed here) to check the user’s IP address, store it in a variable (or localStorage), and check if the IP changes. In a loop, you would do some simple logic to check If it does, you reset the WebRTC connection, if not, you keep the loop going.
Using a “piping” server
You can set up a simple chat by using an HTTP/HTTPS server that is already set up for you, explained here, called a piping server. You can look at the article for more information, but it promotes a serverless chat system (usable without creating your own server) that isn't exposed to the difficulties of changing IP addresses. However, you need to know the peer's ID, and they need to know yours, which effectively makes this solution obsolete because you need to have some sort of communication before establishing this simple chat.
Using Node.js, Websockets, and/or Socket.io ★
If you want to create a simple chat app or a data channel, Node.js and Socket.io are the way to go. This is super simple; however, it involves a server, which is why I left it for last. Still, I highly recommend it for its ease and simplicity, and it is not reliant on IP addresses. See here for a very good starting immersion into Node.js and the Express framework. I am far from an expert on WebSockets, but MDN is always a good place to start. However good raw WebSockets are, I think Socket.io is much easier for beginners, so if you are willing to trade a bit of speed for simplicity, you should start here. These are all good server-side chat starting points.
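For a concrete starting point, here is roughly what the classic Socket.io chat relay from its getting-started guide looks like (a trimmed sketch, not production code):

```js
// Roughly the Socket.io getting-started chat relay: Express serves the page,
// Socket.io broadcasts every "chat message" event to all connected clients.
const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

app.get('/', (req, res) => res.sendFile(__dirname + '/index.html'));

io.on('connection', (socket) => {
  socket.on('chat message', (msg) => {
    io.emit('chat message', msg); // broadcast to everyone, sender included
  });
});

server.listen(3000, () => console.log('listening on http://localhost:3000'));
```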
Links
Simple Answer
QUIC connection migration
IP Listener
SO Question
ICE Servers
MDN docs, Wikipedia
Piping Server
Simple article, Github repo
Node.js, Websockets, and Socket.io
Node.js and Express setup, Websocket intro, and Socket.io intro
All alternatives that are starred (★) are personally recommended.
Yes, it is possible.
You need to use an FQDN (or a sub-domain) instead of an IP address, together with a DNS server and a client-side utility or tool that updates the DNS record whenever the IP address changes.
There are free solutions on the web like no-ip.com, Cloudflare, etc.
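A sketch of what such a client-side updater could look like in Node.js (18+, for global fetch); the update endpoint, hostname and credentials below are placeholders, so use whatever update API your DNS provider actually documents:

```js
// Sketch of a dynamic-DNS updater: check the public IP and, when it changes,
// call the provider's update endpoint. UPDATE_URL, HOSTNAME and the
// credentials are placeholders; use the update API your provider documents.
const UPDATE_URL = 'https://dns.example.com/update'; // placeholder endpoint
const HOSTNAME = 'myserver.example.com';             // the FQDN clients use

let lastIp = null;

async function updateDnsIfChanged() {
  const res = await fetch('https://api.ipify.org?format=json');
  const { ip } = await res.json();
  if (ip === lastIp) return;

  await fetch(`${UPDATE_URL}?hostname=${HOSTNAME}&myip=${ip}`, {
    headers: {
      Authorization: 'Basic ' + Buffer.from('user:password').toString('base64'),
    },
  });
  lastIp = ip;
}

setInterval(updateDnsIfChanged, 60_000); // check once a minute
```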
A more modern approach to this would be using QUIC for transport. It has a session ID in the payload and uses UDP as the transport. This handles the very common case where a NAT changes its public IP. From Cloudflare's blog post:
One of the features QUIC is intended to deliver is called “connection migration” and will allow QUIC end-points to migrate connections to different IP addresses and network paths at will.
So assume Alice and Bob are sending messages via QUIC and their connection is given a session ID. Alice's NAT changes its public IP and source port. Since UDP is being used, Alice's messages are still being sent to Bob's public IP. Bob receives these UDP messages, looks at the embedded QUIC header and sees that it contains the same session ID as when he was speaking to Alice. Bob then starts using Alice's new public IP and port as the destination for the conversation.
Naturally this seems open to redirection attacks, but there is crypto layered on top of these mechanisms to prevent this and other attacks.
References:
Cloudflare Blog
Amazing blog on NAT behavior and setting up initial connectivity behind them
QUIC connection migration

How to get MAC address and IP address using node.js or Angular [duplicate]

Can someone give me some pointers on picking up the user's MAC address from an HTTP request?
The users will be from outside my network.
It depends on your network setup. But probably no.
Here is a short refresher on Ethernet and IP. The MAC address is the unique address of the network card. It is used to identify which host on the network segment a packet is for. You can use ARP to get the MAC address for an IP address, but this works as expected only if you are on the same network segment.
So the question is, what is a network segment? It depends on the technology you use, but here are the common cases. An entire wireless network is one network segment: every user on it can talk via Ethernet to every other user. On wired networks it depends on the hardware. If you have good old BNC or a hub, you have one network segment with all users; again, each user can talk to any other. With a switch in the network, a network segment is only the cable that connects you to the switch; there you can only talk to the switch via Ethernet, and every other user needs at least IP.
Too bad that in most situations with HTTP, which builds on TCP/IP, you are practically never in the same network segment as your user. You can use ARP, but you will only get the MAC address of the first hop. It gets better: depending on your hardware, you may not even be on an IP network that is based on Ethernet at all (ATM, for example)...
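For completeness, here is what that same-segment ARP lookup could look like from Node.js; it shells out to the OS arp tool, whose flags and output format vary between Linux, macOS and Windows, so treat it as illustrative only:

```js
// Only useful for hosts on the same network segment, as explained above.
// Shells out to the OS `arp` tool (Linux/macOS syntax shown; Windows uses
// `arp -a <ip>`), so treat the parsing as illustrative only.
const { execFile } = require('child_process');

function lookupMac(ip, callback) {
  execFile('arp', ['-n', ip], (err, stdout) => {
    if (err) return callback(err);
    // Match a MAC like aa:bb:cc:dd:ee:ff (Windows prints dashes instead).
    const match = stdout.match(/(?:[0-9a-f]{2}[:-]){5}[0-9a-f]{2}/i);
    callback(null, match ? match[0] : null);
  });
}

lookupMac('192.168.1.10', (err, mac) => {
  if (err) throw err;
  console.log(mac || 'no ARP entry (host probably not on this segment)');
});
```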
I don't think there is a way to do it in ASP.NET.
MAC is a property of a TCP packet, and on HTTP level there're no packets or MACs (for example, a single HTTP request might be assembled of several TCP packets).
You could try using a packet sniffer (like WireShark) to capture TCP packets, and then analyze them to extract MACs and map them to HTTP requests.
Anyway, you won't get any useful data unless the user is in the same network segment as your server.
UPD. As was pointed out in the comments, I mixed up the network layers.
MAC address is a property of Ethernet frame, not a TCP packet.
The conclusion is still correct, however.
This is not possible, unless you intend to create an ActiveX component, in which case it will only work on IE.

How do I make my Node.js websocket socket.io server more resilient to DoS?

My project has a WebSocket server built in Node.js using socket.io.
Reviewing the causes of a recent outage, I found that the front-end app gets into a state where it keeps making malformed connection attempts which the server rejects. It does so in a loop with no back-off.
What happens on this server is that the single Node.js CPU thread gets clogged up with the backlog and it creates a cascading effect: no new request can be made, no other processing can happen, and so on.
The easy way to fix this is on the client -- figure out why it goes into the rapid-fire loop and add some exponential back-off.
However, this doesn't solve the problem of a similar issue happening in the future. So, I need to find a way to make my server more resilient.
One approach could be to use the backlog parameter when calling server.listen. That, however, could prevent legitimate client requests from going through.
I'd love to be able to somehow identify an out-of-control client. IP address might not work well because of NAT, proxies, and firewalls.
So, what would be a good way of protecting my server from this type of DoS?
The ideal place to intercept a DoS attack is BEFORE the connection gets to your application server. That would typically be in a router, firewall or load balancer and you'd rely on the means that it has for rate limiting from a particular source. If you are paying for a hosting service, this should be one of the features the hosting service has to offer as many of the larger or more visible tenants will want that type of service.
From an accidentally misbehaved browser (like you describe in your question), you can cookie a request that has been denied and then rate limit any client that presents that cookie (limit them to no more than N connection requests/minute).
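A sketch of that cookie-plus-rate-limit idea applied at the WebSocket upgrade step with the ws module; the `dosId` cookie name and the 10-attempts-per-minute budget are arbitrary choices for illustration, and the cookie itself would be set elsewhere (e.g. by the page that served the client):

```js
// Sketch: rate limit WebSocket connection attempts per client cookie during
// the HTTP upgrade. The `dosId` cookie name and the budget of 10 attempts
// per minute are arbitrary; the cookie would be set elsewhere (e.g. by the
// page that served the client). Clients without the cookie fall back to
// being bucketed by IP address.
const http = require('http');
const { WebSocketServer } = require('ws');

const MAX_ATTEMPTS_PER_MINUTE = 10;
const attemptsByClient = new Map(); // client id -> recent attempt timestamps

const server = http.createServer();
const wss = new WebSocketServer({ noServer: true });

server.on('upgrade', (req, socket, head) => {
  const cookieMatch = (req.headers.cookie || '').match(/dosId=([^;]+)/);
  const clientId = cookieMatch ? cookieMatch[1] : req.socket.remoteAddress;

  const now = Date.now();
  const recent = (attemptsByClient.get(clientId) || []).filter((t) => now - t < 60_000);
  recent.push(now);
  attemptsByClient.set(clientId, recent);

  if (recent.length > MAX_ATTEMPTS_PER_MINUTE) {
    // Too many attempts this minute: refuse the handshake cheaply.
    socket.write('HTTP/1.1 429 Too Many Requests\r\n\r\n');
    socket.destroy();
    return;
  }

  wss.handleUpgrade(req, socket, head, (ws) => wss.emit('connection', ws, req));
});

wss.on('connection', (ws) => {
  /* normal socket handling goes here */
});

server.listen(8080);
```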
You cannot rely on a cookie for an actual purposeful DoS attack since attackers probably won't preserve cookies if it interferes with their attack. For that, all any infrastructure can do is to look for identifying information in the source (which is typically the IP address). If you accidentally sweep up a few legitimate clients who happen to be sharing the same NAT or proxy (and thus get identified as the same IP address as where a DoS attack is coming from), then that's just the nature of the problem. There isn't much you can do about that. You have to protect the integrity of the service at all cost and there really isn't much else you can do if cookies aren't being preserved by a real attacker.
If you choose to try to implement this type of protection yourself, then you can either try implementing it in your application server (and accept some performance hit for doing so) or you can deploy an intermediary on your same host such as NGINX to serve as mitigation for DoS attacks. Here's an article on using NGINX for that: Mitigating DDoS Attacks with NGINX and NGINX Plus.

Suitability of scrypt for password authentication in the browser using Dart

I've written a Go server with a custom binary WebSocket protocol, and a Dart client. User authentication on the server employs scrypt with the recommended parameters N=16384, r=8, p=1 (with a salt of length 16 and a generated key length of 64); my i7 desktop takes maybe a second or two to crank through authentication on the server side. That's compared to practically instant, say, SHA-512 authentication.
I had trouble finding scrypt implementations for Dart and while this one works, generating the same hash with the same parameters in a browser (Firefox) takes too long to practically complete. I can get it down to a handful of seconds on the same machine using N=1024 and r<=8 but if I settle on that for compatibility, on the server side, the authentication time is for practical purposes instant again.
Scrypt is great on the server side, but I'm wondering if it's practical for a browser client. Admittedly I haven't seen any (or many) examples of people using scrypt for browser authentication. Should I persevere and tackle the performance (e.g. maybe using other javascript libraries from Dart), or is this a basic limitation at the moment? How low can you wind down the scrypt parameters before you may as well just use more widely available, optimised crypto hashing algos such as SHA?
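For reference, the parameters being discussed (N=16384, r=8, p=1, a 16-byte salt, a 64-byte key) map onto Node's built-in crypto.scrypt as follows; the question's server is Go and its client Dart, so this is only to illustrate the cost being measured:

```js
// For reference only: the discussed parameters with Node's built-in
// crypto.scrypt, just to illustrate the server-side cost. The question's
// server is Go and its client Dart; this is not code from either of them.
const crypto = require('crypto');

const password = 'correct horse battery staple'; // example input
const salt = crypto.randomBytes(16);             // 16-byte salt

console.time('scrypt N=16384, r=8, p=1');
crypto.scrypt(password, salt, 64, { N: 16384, r: 8, p: 1 }, (err, key) => {
  if (err) throw err;
  console.timeEnd('scrypt N=16384, r=8, p=1');
  console.log('derived key:', key.toString('hex').slice(0, 32) + '...');
});
```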
Use HTTPS. If you're hashing the password in the browser and then sending the hash to the server for comparison, what's to prevent an attacker from simply sniffing the hashed password and hijacking the session by sending the same hash himself?
Even if you come up with an encryption scheme to prevent that, the attacker could simply inject an additional <script> tag with a keylogger via a MITM attack to steal the password before it's encrypted.
Basically no matter how you cut it, you have to use HTTPS to ensure that your communications are not sniffable and no MITM attack has taken place. And once your connection is already secured over HTTPS, which is encrypted with a (minimum) 128-bit key and would take longer than the known age of the universe to crack, you might as well just use the HTTPS connection to send your password and doing additional encryption of the password client-side is probably not necessary.
#maaartinus ...
I've never thought about not using HTTPS. I'm curious if offloading the password-based key derivation overhead to the client makes any sense.
If I may, I'll come at this problem from a non-Web direction and come back to the browser use case. Way back when, I worked with EFT*POS security standards and with secure communications for financial transactions, for example the credit-card machine in the supermarket; that's just to establish my grounding on this topic. That said, I think the original question HAS been covered comprehensively. I decided to add a comment to enrich the conversation in this area (it is quite topical).
The procedure is about a conversation between the terminal (iPhone, smartphone, browser, etc.) and the host. Premise: you naturally don't want anyone sniffing your username/password pairing. Assume your typical web page or login screen at work. Over the Intranet, LAN, WAN and VPN, whatever you type is dispatched from your keyboard to the host. These links may already be encrypted these days. The WWW on the Internet has two main options via the browser: HTTP (clear text) and HTTPS (encrypted). Let's just stick with the (username, password) pair.
Your terminal (such as browser or mobile) needs to be "trusted" by the host (server, phone company, etc).
There's a lot of standard stuff you can (and should) do first, and you can get creative from there on. Think of it as a pyramid. At the bottom are things you can do on your PC. That's the base of the pyramid, and there's loads of good information about that (e.g. the Electronic Frontier Foundation, EFF); it is about protecting yourself, your data (intangible property) and your rights.
That said, here are a few points to consider:
Everything sent via HTTP is clear-text. It can be read and copied.
A hash sent over clear-text can be decoded to get the password. It is just maths.
Even if you use scrypt or another method, the hash is decodable, given enough time.
If you're on the web, any hash implemented in the browser (terminal/client) is transparent to anyone who can load the web page and the JavaScript code (as pointed out by ntoskrnl above).
HTTPS, on the other hand, sends everything as a hash. In addition, the hash used is negotiated, is unique to the conversation, and is agreed when the session is established. It is a slightly 'better-er' hash over the whole of the message.
The main thing making it better-er in the first instance is the negotiation. The idea overall is that message hashes are based on a key known only to the two end-points.
Once again that can be cracked if you have enough time, etc. The main thing making this challenging is establishing the to-and-fro for the negotiations.
Let's back up a little and consider cryptography. The notion is to hide the message in a way that permits the message to be revealed. Think of this as a lock and key, where the door is the procedure/algorithm and your message is the contents of the room.
HTTPS works to separate the lock from the key in a pragmatic fashion, in time and via process.
Whatever is done in the HTTPS room stays in the HTTPS room. You must have the key to enter, poke about and do unwanted stuff. Imho, any extra security should ONLY be considered within an HTTPS space.
There are methods to improve on that foundation. I think of security like a pyramid; these sit about 4 or 5 layers above base considerations like the transport protocol.
Such options include:
SMS authentication number to your phone.
Something like a dongle or personalised ID-key.
A physical message like an s-mail or e-mail with an authentication number.
Anything else you come up with.
In summary, if your need says 'make it safe', there are many things that can be accomplished. If you can't use HTTPS, hashing password(s) locally needs to be managed extremely carefully. Hashes have vulnerabilities. When you are not using HTTPS, anything you can do in the browser is like wet rice paper trying to stave off a sword.

How to limit live streaming page to one connection per user?

I want to limit the users who connect to my live streaming page, such that users can't share passwords and log in multiple times under the same account from different locations.
I don't mind if one user logs in on two different devices at his home, like a computer and a Google TV, for example. This makes me think restricting based on source IP address is the right way to handle this.
The problem is that if the user logs in, and I record their IP address, and restrict them to logging in just from that IP, they can't change locations.
With HTTP, after they log in, it's connectionless, so I've lost track of if they are watching the stream from the CDN or not.
It makes me think I should use JavaScript on the client to disconnect the player if the user logs in from another location.
This means I need to have a way to communicate to logged in clients in a reasonably scalable way.
Can you suggest an appropriate way to handle this problem? I have the feeling there must be a simple and scalable solution for this.
It really depends on how sensitive you are to people bypassing your access controls.
If you're ok with some people bypassing them, then you can perform access control on the client side and have the client ping the server every 60 seconds or so telling the server that it's still streaming. Then in the server, store the IP address of the user in an expiring queue. So the IP address would expire out of the queue in, say, 3 minutes if the player stops pinging the server. And by pinging the server, I mean sending a simple http GET request to keep the session open. When the user closes the client, the pings would stop and your server would expire the IP address for that client after 3 minutes. At that point the user could log in with a different IP address.
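A sketch of that ping-plus-expiring-queue approach with Express; the /ping route, the `user` query parameter and the 3-minute expiry follow the description above, but the names are made up:

```js
// Sketch of the ping + expiring-queue idea with Express. The /ping route,
// the `user` query parameter and the 3-minute expiry mirror the description
// above; the names are made up.
const express = require('express');

const EXPIRY_MS = 3 * 60 * 1000;  // expire an idle entry after 3 minutes
const lastPingByUser = new Map(); // userId -> { ip, lastSeen }

const app = express();

// The player calls this every ~60 seconds while it is streaming.
app.get('/ping', (req, res) => {
  const userId = req.query.user; // assumes the client identifies itself
  const entry = lastPingByUser.get(userId);
  const now = Date.now();

  // A different IP is only allowed once the previous one has expired.
  if (entry && entry.ip !== req.ip && now - entry.lastSeen < EXPIRY_MS) {
    return res.status(403).send('already streaming from another location');
  }

  lastPingByUser.set(userId, { ip: req.ip, lastSeen: now });
  res.sendStatus(200);
});

app.listen(3000);
```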
It's important to keep in mind that savvy users would be able to watch the networking events in a browser like Chrome and see where your content is being served from and easily bypass any restrictions because you have no control over the CDN itself.
If you need stricter control, you'll need to serve the content from your server. Then you'll absolutely know when a client has stopped accessing a stream.
There might be a middle ground. If you're worried about streaming speed without a CDN, you might consider taking a look at CloudFlare.com. CloudFlare is a CDN layer that sits in front of all of your HTTP requests, even the dynamic ones. The static requests are served from the edge, like a normal CDN, but the dynamic requests are reverse-proxied through CloudFlare's network back to your server each time. If you set up your streaming requests to look like dynamic content to CloudFlare, then you would gain the potential benefit of streaming over a low-latency, high-bandwidth connection to the edge point while still being able to track users individually.
