I've run into an issue using Laravel Echo Server on our staging/production servers. Locally, everything works as intended, but deploying to our staging servers has been a bit of a nightmare. As it stands right now, public channels work properly, but we get an authentication error when trying to join a private channel.
I've narrowed this down to being a session issue: the session isn't being sent along with the socket requests. This seems to be because the staging servers are routed to person.staging.website.com, but the socket server had to be set up at ws.website.com because of some complications with AWS not allowing us access to the SSL certificate needed to configure the echo server. So we set up a subdomain with a Let's Encrypt certificate to get it up and running.
Now, I realize that I can just change the SESSION_DOMAIN in our .env files to .website.com, but I'm getting some pushback since people then won't be able to be logged into different subdomains at the same time. Is there any way I can set up Laravel's sessions to work with two different, specific subdomains instead of wildcarding every subdomain? For testing, I'd need it to work with person.staging.website.com and ws.website.com, but production would need different values.
Any suggestions or clever work-arounds for this?
I'm trying to get the client IP as a way to remember a particular user, so the server knows who they are next time they visit, without them needing to log in or sign up for anything. This is a React front end with a Node.js backend.
I tried my app locally and it seems to work fine. But I deployed it to Heroku and now I'm getting different IP addresses each time I reload. It keeps the same IP for the duration of a visit, but once I reload (refresh) the page, my IP changes.
[Method: 'POST'] [Path: '/api/posts'] [IP '::ffff:***.63.***.219']
[Method: 'POST'] [Path: '/api/posts'] [IP '::ffff:***.47.***.144']
(actual IPs modified)
This is my console output. As you can see, the IP is completely different each time, and it looks nothing like my own IP. I'm getting the IP from the request object (request.ip).
Why is the IP different each time on Heroku but stable on my local machine? Is there another method for getting the client IP that I should be using? Or is this a Heroku problem? I've looked for answers about this but have come up empty, which makes me think it is specific to Heroku.
According to the Heroku documentation, all requests go through a proxy layer that acts mainly as a load balancer (if I understand it correctly). You can read the client IP address from the X-Forwarded-For header that the proxy sets, but relying on it is not recommended!
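For example, if the backend is Express (an assumption; the question only says Node.js), enabling the trust proxy setting makes request.ip reflect the address Heroku's router reports in X-Forwarded-For rather than the router's own address. A minimal sketch:

```js
// Minimal sketch, assuming an Express app running behind Heroku's router.
const express = require('express');
const app = express();

// Trust the proxy chain so request.ip is derived from X-Forwarded-For
// instead of the address of the Heroku router itself.
app.set('trust proxy', true);

app.post('/api/posts', (request, response) => {
  // With 'trust proxy' enabled, request.ip is the left-most entry of
  // X-Forwarded-For, i.e. the client as reported by Heroku's router.
  console.log(`[Method: '${request.method}'] [Path: '${request.path}'] [IP '${request.ip}']`);
  response.sendStatus(204);
});

app.listen(process.env.PORT || 3000);
```

Note that clients can prepend arbitrary values to X-Forwarded-For, which is part of why relying on it for identification is discouraged.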
Additional:
For security reasons you should avoid using the IP to identify a user, because that opens the door to session hijacking. Use technologies like cookies instead!
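As a rough illustration of the cookie approach, here is a sketch that tags each new visitor with a random identifier. It assumes an Express backend with the cookie-parser middleware, neither of which is confirmed by the question:

```js
// Sketch: identify a returning visitor with a cookie instead of an IP.
// Assumes Express and the cookie-parser middleware are available.
const express = require('express');
const cookieParser = require('cookie-parser');
const crypto = require('crypto');

const app = express();
app.use(cookieParser());

app.use((req, res, next) => {
  let visitorId = req.cookies.visitorId;
  if (!visitorId) {
    // First visit: mint a random identifier and store it client-side.
    visitorId = crypto.randomBytes(16).toString('hex');
    res.cookie('visitorId', visitorId, {
      httpOnly: true,                    // not readable from client-side JS
      maxAge: 1000 * 60 * 60 * 24 * 365, // roughly one year
    });
  }
  req.visitorId = visitorId;
  next();
});

app.get('/api/whoami', (req, res) => {
  res.json({ visitorId: req.visitorId });
});

app.listen(process.env.PORT || 3000);
```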
I'm new to WebSockets in general, but I get the main concept.
I am trying to build a simple multiplayer game and would like to have a server selection where I can run sockets on multiple IPs and connect each client through whichever one they pick, spreading the connections out to improve performance. This is hypothetical, assuming thousands of players at once, but I would like some insight into how this would work and whether there are any resources I can use to integrate it beforehand, to prevent extra work at a later date. Is this at all possible? As I understand it, Node.js runs on a server and uses the Socket.io dependency to create sockets within it, so I can't think of a way to route connections through another server unless I had multiple sites running it separately.
The first question I have is this:
Are you hosting on AWS or in a local datacenter?
The reason I ask is because Socket.io requires sticky sessions to work properly across multiple servers. Because Socket.io will attempt to upgrade each connection, and because that upgrade request must reach the original server that authorized the session, you'll need to route WebSocket (TCP) connections back to that original server via sticky sessions. Unfortunately AWS makes this extremely tricky and will require you to learn how to:
A) Modify Elastic Load Balancer policies to forward protocol information.
B) Split TCP connections apart from standard web requests using something like HAProxy or NGINX. This is necessary in order to handle WebSocket upgrade requests properly, as you will be setting TCP to sticky and web requests to round-robin.
C) Attach your Socket.io configuration to a common storage source, like Redis (ElastiCache), as sketched below.
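For point C, a minimal sketch of what attaching Socket.io to Redis can look like, assuming the socket.io-redis adapter package (newer Socket.io releases use @socket.io/redis-adapter with a slightly different setup):

```js
// Sketch: share Socket.io state across servers via Redis (e.g. ElastiCache).
// Assumes the socket.io-redis adapter and a Socket.io 2.x-style API.
const http = require('http');
const socketIo = require('socket.io');
const redisAdapter = require('socket.io-redis');

const server = http.createServer();
const io = socketIo(server);

// Every Node instance points at the same Redis host, so broadcasts and
// room membership are coordinated across the whole cluster.
io.adapter(redisAdapter({ host: process.env.REDIS_HOST || 'localhost', port: 6379 }));

server.listen(process.env.PORT || 3000);
```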
Once you've figured out what's needed for AWS (or if you've got full control over request routing at your local datacenter), you'll want to architect your Socket.io application to use multicast rooms rather than direct socket messaging.
Example:
To send a message to the users in game #4444, emit the message to the room 'games:4444' rather than directly to each user's socket.
If your Socket.io instance is configured to use Redis, Redis will automatically take care of maintaining the list of people who are connected to your 'games:4444' room. Otherwise you'll need to maintain the list yourself using a database or some other shared mechanism.
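A small sketch of that room-based pattern, continuing from the io instance configured above ('join-game' and 'game-message' are hypothetical event names):

```js
// Sketch: multicast via rooms instead of messaging sockets directly.
// Continues from the `io` instance in the previous sketch.
io.on('connection', (socket) => {
  socket.on('join-game', (gameId) => {
    // Put this socket in the room for its game, e.g. 'games:4444'.
    socket.join(`games:${gameId}`);
  });

  socket.on('game-message', ({ gameId, payload }) => {
    // Emit to everyone in the room; with the Redis adapter attached,
    // this reaches players connected to any server in the cluster.
    io.to(`games:${gameId}`).emit('game-message', payload);
  });
});
```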
Other than that, there are plenty of resources online that can help you figure out each step along the way. I'd start with understanding something like HAProxy and how it can help split your socket traffic apart from your web requests.
I've set up a remote, hosted JavaScript server (DreamFactory Server, http://www.dreamfactory.com/) that responds via REST APIs.
Locally, I'm running an AngularJS application through the grunt web server via $ grunt serve
https://www.npmjs.com/package/grunt-serve
I have set up CORS on the remote server to allow '*' for multiple http:// connection types. THIS WORKS CORRECTLY.
My question is: how can I limit the CORS configuration to allow connections only from my home grunt web server?
I've tried creating an entry for "localhost", "127.0.0.1", my home internet IP as reported by whatismyip.com, the DNS name my provider lists for my home IP when I ping it, and a DynDNS entry that I created for my home internet IP... None of them work, except for '*' (which allows any site to connect).
I think it is an educational issue for me to understand what that CORS entry should look like to allow ONLY a connection from my home web server.
Is this possible? If so, what and where should I be checking in order to find the correct entry to put in the CORS configuration?
-Brian
For CORS to work and actually apply restrictions, the client requesting the connection must support and enforce it. In an odd sort of way (from a security point of view), restricting access using CORS requires a self-policing client, one that follows the prescribed access rules. Modern browsers all follow the rules, so it generally works for applications that are served through a browser.
But, CORS access restrictions do not prevent other types of clients (such as any random script in any language) from accessing your API.
In other words, CORS is really about access rules from web pages that are enforced by the local browser. It doesn't sound like your grunt/angular code would necessarily be something that implements and enforces CORS.
If you really want to prevent other systems from accessing your DreamFactory Server, then you will need to implement some server-side access restrictions in the API server itself.
If you just have one client accessing it, and that client uses "protected" code that is not public, then you could implement a password or some sort of logon credentials, and your one client would be the only one that has those credentials.
If the access is always from one particular fixed IP address, you could refuse connections on your server from any IP address that was not in a config file you maintained.
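As a generic illustration of that idea (not DreamFactory-specific), an Express-style middleware that rejects requests from addresses outside a maintained allowlist might look like this:

```js
// Sketch: refuse requests from IPs that are not in a maintained allowlist.
// Assumes an Express server; the addresses would come from your own config.
const express = require('express');
const app = express();

const ALLOWED_IPS = new Set(['203.0.113.7']); // placeholder address

// Needed if the server sits behind a proxy/load balancer; otherwise req.ip
// would be the proxy's address rather than the client's.
app.set('trust proxy', true);

app.use((req, res, next) => {
  // Behind some stacks req.ip arrives IPv6-mapped, e.g. '::ffff:203.0.113.7'.
  const ip = req.ip.replace(/^::ffff:/, '');
  if (!ALLOWED_IPS.has(ip)) {
    return res.status(403).send('Forbidden');
  }
  next();
});

app.get('/api/records', (req, res) => res.json({ ok: true }));

app.listen(process.env.PORT || 3000);
```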
You can't secure an API with CORS; for that you will need to implement an authentication scheme on your server. There are essentially four steps to do this (a rough sketch of the first three follows the list).
1. Update the headers your server sends with a few additional Access-Control statements.
2. Tell Angular to allow cross-domain requests.
3. Pass credentials in your API calls from Angular.
4. Implement an HTTP authentication scheme on your web server or in your API code.
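Here is a rough sketch of steps 1–3, assuming a Node/Express API and an AngularJS client; a real DreamFactory setup would express the same idea through its own CORS configuration:

```js
// Server side (sketch, assuming Express): allow only one specific origin
// and permit credentials on cross-origin requests.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.set('Access-Control-Allow-Origin', 'http://localhost:9000'); // your grunt server's origin (port is an assumption)
  res.set('Access-Control-Allow-Credentials', 'true');
  res.set('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS');
  res.set('Access-Control-Allow-Headers', 'Authorization, Content-Type');
  if (req.method === 'OPTIONS') return res.sendStatus(204);
  next();
});

app.listen(process.env.PORT || 3000);
```

On the client, AngularJS needs to be told to send credentials along with cross-domain requests:

```js
// Client side (sketch): AngularJS configuration to pass credentials.
angular.module('myApp', []).config(['$httpProvider', function ($httpProvider) {
  $httpProvider.defaults.withCredentials = true;
}]);
```

Step 4, the actual authentication scheme, then lives on the server side (for example HTTP Basic or token-based auth).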
This post by Georgi Naumov is a good place to look for details of an implementation in Angular and PHP.
AngularJS $http, CORS and http authentication
We are developing a web application that uses Strophe.js to communicate with an Openfire server for XMPP chat. The web application is hosted on Tomcat, and both Tomcat and Openfire reside on the same server. Strophe.js uses BOSH (essentially HTTP long-polling) as the communication mechanism between the client and Openfire.
Our Tomcat instance authenticates users (form-based) against a users table in our database. We've configured our Openfire instance to read from the same table, so that mobile apps can connect directly to our chat server using the user's credentials.
We also have Apache running as a reverse proxy. This might be TMI for the problem at hand, but more information can't hurt. The URL scheme looks like the following:
http://myserver/web Our web interface. Goes to http://myserver:8080/
http://myserver/chat Forwards to the Openfire BOSH URL (what Strophe.js connects to). Goes to http://myserver:7070/http-bind (Openfire BOSH endpoint)
The problem I'm trying to figure out is how to log in to our Openfire server from the browser. For example, when a user goes to the login.jsp page and enters their credentials, the server forwards them to index.jsp. The Strophe.js connection will then try to connect to the chat server (/chat), but at that point the username and password are no longer available to the JavaScript code.
I need to figure out how to securely authenticate the user in the web browser with the openfire server AFTER authentication has occurred. I've looked around for some examples, but there's not much information out there (or rather, I don't know what to look for).
Some Possible Solutions
1.) The first strategy I tried was creating an AuthProvider implementation in Openfire that takes the browser's cookie as the password, makes an HTTP request to Tomcat with that cookie, and, if that succeeds, deems the user authenticated. This worked at first, but when deploying I found that I needed to configure Tomcat to allow document.cookie to be populated with the JSESSIONID. After reading a bit about this, it seems that using cookies this way is not recommended from a security standpoint. Jeff Atwood has a post, Protecting Your Cookies: HttpOnly, that discusses the security issues stemming from cookies being accessible to JavaScript. Although I am not completely opposed to using cookies, is there a better way?
2.) A solution I have also thought of (but haven't implemented yet) is providing a REST endpoint that creates tokens the user can fetch once they are logged in and use as passwords for the Openfire server (a rough client-side sketch follows). This seems a little better, but I'll need to create a new table, manage token expiration, etc.
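For what it's worth, the browser side of option 2 might look roughly like this; the /api/chat-token endpoint and the JID format are assumptions, not something that exists yet:

```js
// Sketch: fetch a short-lived chat token from the web app (the browser is
// already authenticated against Tomcat), then use it as the XMPP password.
// '/api/chat-token' and the 'user@myserver' JID format are hypothetical.
function connectToChat(onConnected) {
  fetch('/api/chat-token', { credentials: 'same-origin' })
    .then((response) => response.json())
    .then(({ username, token }) => {
      const connection = new Strophe.Connection('/chat'); // the proxied BOSH endpoint
      connection.connect(`${username}@myserver`, token, (status) => {
        if (status === Strophe.Status.CONNECTED) {
          onConnected(connection);
        }
      });
    });
}
```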
If anyone has tackled this problem, please let me know. It would be greatly appreciated.
I'm running a game with a server.js backend (which is hosted and run on my localhost), and the frontend is on a GitHub Pages site. The GitHub page connects to the server on my localhost through a config that points to 127.0.0.1. I realize that I will be able to play this from my localhost this way, but will other people be able to?
Basically the index.html connects to the visitor's localhost to look for the running server.
A visual representation (sort of):
[nullwalker.github.io/index.html] ----> [localhost(127.0.0.1)/server.js]
What should I do to allow myself to play from the computer that's hosting the server backend as well as others being able to play?
You would need to host it in a live environment. There are ways, via port forwarding, to use your computer's IP (gateway) to allow others to connect; however, ISPs will typically try to stop you from using your dynamic IP statically. The safest bet is to launch a cheap VPS and host it there.
http://www.howtogeek.com/66214/how-to-forward-ports-on-your-router/
This article seems to explain port forwarding well enough.
As for the VPS, you can find extremely cheap ones really easily. If you do not expect a lot of players, a cheap one should be fine; if you expect more, then using your own connection is dangerous anyway.
Unless they have the same server running on their own localhost, no, and they almost surely don't. You should get a host (digitalocean.com is very popular and good, but there are many others), run the server there, and connect to that instead of localhost.
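Concretely, the change is in the frontend config and in how the server binds. A sketch, assuming a plain Node HTTP server and a placeholder hostname, since the project's actual config keys aren't shown:

```js
// Frontend config (sketch): point at the hosted server, not the visitor's machine.
// 'game.example.com' is a placeholder for your VPS / host address.
const SERVER_URL = 'https://game.example.com'; // was 'http://127.0.0.1:3000'

// server.js (sketch): listen on all interfaces so remote clients can reach it.
const http = require('http');
const server = http.createServer((req, res) => {
  res.end('game server placeholder'); // the real game handlers would go here
});
server.listen(process.env.PORT || 3000, '0.0.0.0');
```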