What am I working on?
I am trying to establish communication between a PHP app and a telephony system's REST API.
Since my site is written entirely in PHP, I decided to build the communication in PHP by making cURL calls to the API.
To bring you up to speed, there are two types of communication between the user and the API, and I like to put them into two categories:
Send Once / Receive Once. An example of this would be a user attempting to dial a new phone number: "dial 800-123-4567." The API takes the request and returns an interaction id that lets the user control the call (i.e. disconnect, mute, put on hold...).
Send Once / Receive Every Second. In this communication, I will create a "persistent" connection between the user's session and the API. Then, every second, I will check the API for new messages. After a message from the API is received, I must update the user's cache, read the latest user cache, and finally send the browser the cache data.
Problem? HTTP is stateless.
Every request the user sends to the web server generates a new TCP connection. The issue is that every second I query the API for new messages, I open a new TCP connection. On average, about 200 TCP connections are needed at any given time per user. So if I have 300 users using the app/server, that is about 60,000 TCP connections open to the web server. As you can clearly see, this solution does not scale well, and it is only a matter of time before the server blows up in my face... :(
Another issue is that PHP is not asynchronous, which causes problems if the communication with the API takes too long or returns errors.
FWIW, I have tried to use a JavaScript SharedWorker to eliminate some of the overhead. I even tried Server-Sent Events, but a user still generated too many TCP connections to the server. Nothing fixed the problem; I was only able to reduce the connections a little.
Can Nodejs help?
I was advised by a couple of people to use Node.js instead of PHP for this task. Of course, I am not going to rewrite my PHP application in Node.js, as this would be insane since my app is huge.
I would like to consider running a Node.js server as a middleman between the PHP server and the API service. The idea is to have a WebSocket server running on the Node side. The client then passes any communication to the WebSocket, and the WebSocket forwards it to the API. It does not sound bad at a high level, but once I dig deeper, it seems to get trickier.
Nodejs Challenge
When a user logs into my PHP app, I validate their credentials, and once they are in, I create a session which is stored in a MySQL database. For a session to be valid, the following must hold:
The IP address must match the IP that created the session.
The user-agent data must also match (I can live without this for Node.js).
The idle time of the session must be less than 900 seconds.
In order for Node.js to start communicating, it must first create a new connection to the API. After the connection is accepted, Node.js must keep track of the following data received from the API:
CSRF token
Session id
HTTP cookie
In order for Node.js to make a connection to the API, it must pass a username, password, server name, port, and station name. I have all the needed info stored in a MySQL database, and I can easily get it using PHP.
The challenge is that Node.js has to take the PHP session, validate it, pull the needed API info from the database, and then establish the connection to the API.
Questions
Can Node.js use the PHP session to validate the user? If so, how?
How can Node.js manage TCP connections so as to prevent me from overloading the server?
This is your arrangement:
[User browser] -> [PHP] -> [Node.js] -> [API]
When your user's browser sends a request to your PHP server, the request includes a cookie; one of the values in this cookie is the session id, which PHP then uses to look up the session. The session id acts like a password: PHP will assume that if you've got the session id, then you are the original user that was issued that id.
When your PHP script communicates with Node, it needs to pass along that session id as part of the request. In Node you then just need to look up the corresponding session in your sessions table in MySQL.
PHP session data is stored as the serialised $_SESSION array. To extract data from it you will need to unserialise it first. There are a number of libraries out there that provide this functionality (e.g. https://github.com/naholyr/js-php-unserialize, https://github.com/kvz/phpjs/blob/master/functions/var/unserialize.js). However, if the session data is simple and conforms to a known format, you could 'hand parse' it.
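As a rough sketch of that lookup in Node (the table and column names, credentials, and the php-unserialize npm package wrapping js-php-unserialize are all assumptions to adapt to your setup):

const mysql = require('mysql2/promise');
const PHPUnserialize = require('php-unserialize'); // wraps js-php-unserialize

// Validate a PHP session id against the MySQL sessions table and,
// if valid, return the unserialised $_SESSION data.
async function validateSession(sessionId, clientIp) {
  const db = await mysql.createConnection({
    host: 'localhost', user: 'app', password: 'secret', database: 'app'
  });
  const [rows] = await db.execute(
    'SELECT ip_address, data, last_activity FROM sessions WHERE session_id = ?',
    [sessionId]
  );
  await db.end();
  if (rows.length === 0) return null;                          // unknown session id
  const s = rows[0];
  if (s.ip_address !== clientIp) return null;                  // IP must match
  if (Date.now() / 1000 - s.last_activity > 900) return null;  // 900 s idle limit
  return PHPUnserialize.unserializeSession(s.data);            // serialised $_SESSION -> JS object
}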
Related
I have an Apache server A set up that currently hosts a web page with a bar chart (using Chart.js). The data is currently pulled from a local SQLite database every couple of seconds, and the chart is updated.
I now want to use a separate server B on a Raspberry Pi to send data to the server to be used for the chart, rather than using the database on server A.
So one server sends a file to another server, which somehow realises this and accepts it and processes it.
The data can either be sent and placed into the current SQLite database, or bypass the database and have the chart update directly from the Pi's sent information.
I have come across HTTP POST requests, but I'm not sure if that's what I need or quite how to implement it.
I have managed to get the Pi to simply host a JSON file (viewable from the external IP address) and pull the data from it with a simple requests.get('ip_address/json_file') in Python, but this doesn't seem like the most robust or secure solution.
Any help with what I should be using much appreciated, thanks!
Maybe I didn't quite understand your request but this is the solution I imagined:
You create a Frontend with WebSocket support that connects to Server A
Server B (the one running on the Raspberry Pi) sends a POST request with the JSON to Server A
Server A accepts the JSON and sends it to all clients connected via the WebSocket protocol
Server B ----> Server A <----> Frontend
This way you do not expose your Raspberry Pi directly, and every request made by the Frontend goes only to Server A.
To provide a better user experience, you could also create a GET endpoint on Server A to retrieve the latest received JSON. When the user loads the Frontend for the first time, it calls that endpoint, so even if the Raspberry Pi has yet to send updated data, the user at least gets an insight into the latest available data.
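A minimal sketch of Server A along those lines, assuming Express and Socket.IO (any WebSocket library would do; the route, port, and event name are illustrative):

const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
app.use(express.json());
const server = http.createServer(app);
const io = new Server(server);

let latest = null; // last JSON received from Server B

// Server B (the Raspberry Pi) POSTs its JSON here
app.post('/data', (req, res) => {
  latest = req.body;
  io.emit('chart-data', latest); // push to every connected Frontend
  res.sendStatus(204);
});

// the Frontend calls this on first load to catch up on the latest data
app.get('/data', (req, res) => res.json(latest));

server.listen(3000);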
I have a file with the contents:
id,value,location
1234,pass,/temp/...
234,fail,/temp/r/...
2343,pass,/temp/status/...
The above file is updated continuously for about an hour by some program. I need to send this file's information to the browser, build a table, and show all the data dynamically whenever a user visits http://localhost:6666/getdata. How do I achieve this with:
1. CGI (Python or Perl), or
2. Node.js, or
3. the Bottle framework
as the backend.
There could be 10k entries after an hour.
Let's say the file was created at 12:00 PM and a user requested http://localhost:6666/getdata at 12:10 PM. For the next 50 minutes the data has to update dynamically (continuously), so it feels like live data to the user.
To regularly send data from server to client, the usual design would be for the client to establish either a webSocket or socket.io connection to the server. That connection will then be long-lived and data can be sent either direction over the connection.
This allows the server to send data to the client whenever it wants to without waiting for the client to ask for data. The client then listens for that incoming data on the existing connection (with the appropriate event handlers) and processes the data when it arrives - doing whatever is appropriate for the data (like displaying it).
The socket.io library is a higher-level abstraction built on top of webSocket, and it offers a number of useful features beyond what webSocket offers (such as auto-reconnect, auto-detection of a dropped or non-functioning connection, a messaging layer, etc...) which are generally helpful (which is why that library is so popular for this use). There are socket.io libraries both for use in a browser and for many server platforms (including node.js).
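For the CSV question above, a minimal node.js sketch of that design could look like this (the file name data.csv, the port, and the one-second poll are all assumptions):

const fs = require('fs');
const http = require('http');
const { Server } = require('socket.io');

const FILE = 'data.csv';
const server = http.createServer();
const io = new Server(server);
let offset = 0; // bytes of the file already broadcast

io.on('connection', socket => {
  // a late visitor gets a replay of everything accumulated so far
  fs.readFile(FILE, 'utf8', (err, text) => {
    if (!err) socket.emit('rows', text.split('\n').filter(Boolean));
  });
});

// once a second, read any bytes appended since last time and broadcast them
setInterval(() => {
  fs.stat(FILE, (err, st) => {
    if (err || st.size <= offset) return;
    const stream = fs.createReadStream(FILE, { start: offset, end: st.size - 1 });
    let chunk = '';
    stream.on('data', d => (chunk += d));
    stream.on('end', () => {
      offset = st.size;
      io.emit('rows', chunk.split('\n').filter(Boolean));
    });
  });
}, 1000);

server.listen(6666);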
I want to send messages from my server to my client when a function is called. Using the code from this answer messages can be successfully sent from Server to Client every second.
I am building an application that runs node in the background. Ideally, I would like to be able to click a button that calls a function in the node server.js file, which takes a parameter and sends that message to the client. The function in question would look like this:
function sendToClient(message) {
  // emit the message to the first connected client
  clients[0].emit('foo', message);
}
This would send the passed-in message to the first client. How can I go about this?
In the terminal, after you run node server.js, is there a way to call a function from the server file using the terminal? This could be a possible solution if so.
The best way to send messages from server to client right now is using webSockets. The basic concept is this:
Client A loads web page from server B.
Client A runs some javascript that creates a webSocket connection to server B.
Server B accepts that webSocket connection and the socket stays open for the duration of the life of the web page.
Server B registers event handlers to handle incoming messages from the web page.
Client A registers event handlers to handle incoming messages from the server.
At any point in time, the server can proactively send data to the client page and it will receive that data.
At any point in time, the client may send data to the server and it will receive that data.
A popular node.js library that makes webSocket support pretty easy is socket.io. It has both client and server support so you can use the same library for both ends of the connection. The socket.io library supports the .emit() method mentioned in your question for sending a message over an active webSocket connection.
You don't directly call functions from client to server. Instead, you send a message that triggers the server to run some particular code, or vice versa. This is cooperative programming: the remote end has to be coded to support what you're asking it to do, so you send it a message along with some optional data, and it receives that message and data and executes some code with them.
So, suppose you wanted the server to tell the client anytime a temperature changed so that the client could display in their web page the updated temperature (I actually have a Raspberry Pi node.js server that does exactly this). In this case, the client web page establishes a webSocket connection to the server when the page loads. Meanwhile, the server has its own process that is monitoring temperature changes. When it sees that the temperature has changed some meaningful amount, it sends a temperature change message to each connected client with the new temperature data. The client receives that message and data and then uses that to update its UI to show the new temperature value.
The transaction could go the other way too. The client could have a matrix of information that it wants the server to carry out some complicated calculation on. It would send a message to the server with the type of calculation indicated in the message type, and then send the matrix as the data for the message. The server would receive that message, see that this is a request to do a particular type of calculation on some data, and then call the appropriate server-side function and pass it the client data. When the result was finished on the server, it would send a message back to the client with the result. The client would receive that result and then do whatever it needed to with the calculated result.
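Here is a minimal sketch of both directions with socket.io (the port, event names, and the trivial "calculation" are all illustrative):

// server.js
const { Server } = require('socket.io');
const io = new Server(3000);

io.on('connection', socket => {
  // client-initiated: run a calculation and return the result via the ack callback
  socket.on('sum-matrix', (matrix, ack) => {
    ack(matrix.flat().reduce((a, b) => a + b, 0));
  });
  // server-initiated: push unsolicited data whenever the server decides to
  const timer = setInterval(() => socket.emit('temperature', 20 + Math.random() * 5), 5000);
  socket.on('disconnect', () => clearInterval(timer));
});

// client (browser, with the socket.io client script loaded):
// const socket = io('http://localhost:3000');
// socket.emit('sum-matrix', [[1, 2], [3, 4]], total => console.log('sum:', total));
// socket.on('temperature', t => console.log('new temperature:', t));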
Note, if the transactions are only from client to server with a response then coming back from the server, a webSocket is not needed for that type of transaction. That can be done with just an Ajax call. Client makes ajax call to the server, server formulates a response and returns the response. Where webSockets are most uniquely useful is if you want to initiate the communication from the server and send unsolicited data to the client at a time that the server decides. For that, you need some continuous connection between client and server which is what a webSocket is designed to be.
It appears there may be more to your question about how to communicate from a C# server to your node.js server so it can then notify the client. If this is the case, then since the node.js server is already a web server, I'd just add a route to it so the C# server can make an http request that passes some data to the node.js server, which it can then use to notify the appropriate client via the above-described webSocket connection. Depending upon your security needs, you may want to implement some level of security so that the http request can only be sent locally from your C# server, not from the outside world to your node.js server.
In order to send a command to a client via the console there are two options: single-process or multi-process.
Single Process
When the command is run from console, temporary socket.io server starts listening on a port.
Once the client connects, send the message to the client.
Disconnect and stop the console app.
The reason this works is that socket.io clients are always trying to connect to the server. As long as the browser is open, they will try to connect. So even if the server only comes on for a few seconds, it should connect and receive messages. If the client is not running then simply create a timeout that will stop the console app and inform the user that it failed to broadcast the command.
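A minimal sketch of this single-process console app (the port, event name, and 10-second timeout are assumptions):

const { Server } = require('socket.io');
const io = new Server(8080);
const message = process.argv[2] || 'hello';

// if no client connects in time, report failure and quit
const timeout = setTimeout(() => {
  console.error('failed to broadcast the command: no client connected');
  process.exit(1);
}, 10000);

io.on('connection', socket => {
  clearTimeout(timeout);
  socket.emit('command', message);          // deliver the console message
  setTimeout(() => process.exit(0), 500);   // give the packet time to flush
});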
While this approach is very easy, it's neither robust nor efficient. For small projects it would work, but you'll have better luck with the next approach:
Multi-Process
This approach is much more reliable, expandable, and just better looking when you are talking about architecture. Here's the basic summary:
Spin up a stand-alone server that connects with clients.
Create a very similar console node app that will send a message to the server to forward on to clients.
Console app completes but the main server stays up and running.
This technique is just interprocess communication. Luckily you already have Socket.IO on the primary server, so your console app just needs to be another socket.io client. Check out this answer on how to implement that.
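The console side of that can stay tiny; here is a sketch, assuming the main server's socket.io listens on port 8080 and forwards a made-up 'broadcast-command' event:

const { io } = require('socket.io-client');
const socket = io('http://localhost:8080');

socket.on('connect', () => {
  // hand the command to the long-running server, which forwards it to browsers
  socket.emit('broadcast-command', process.argv[2]);
  socket.disconnect();
  process.exit(0);
});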
The downside to this is that you must secure that socket communication. Maybe you can restrict it to localhost connections only; that way you need access to the server to send the run-command message (you absolutely don't want web clients executing code on other web clients).
Overall it comes down to the needs of your project. If this is a quick little experiment you want to try out, then just do it single-process. But if you will be hosting an express server (other web servers are available) and it needs to be running anyway, then multi-process is the way to go!
Example
I've created a simple example of this process using only Socket.io. Instructions to run it all are in the readme.
Implementations
In order to have C# (app) -> Node.js (server) -> Browser (client) communication, I would do one of the following:
Use redis as a message queue (add items to the queue with the app, consume with the server, which sends commands to client).
Live on the wild side and merge your NodeJS and C# runtimes with Edge.js. If you can run NodeJS from C# you will be able to send messages from your app to the server (messages are then handled by the server, just like any other socket.io client-server model).
Easier, but still kinda hacky, use System.Diagnostics.Process to start a console tool explained in the Multi-Process section. This would simply run the process with arbitrary parameters. Not very robust but worth considering, simple means harder to break (And again, messages are then handled by the server, just like any other socket.io client-server model).
I would create a route for sending the message and pass the message as a POST parameter. From the CLI you can use curl, or really call it from anywhere:
app.post('/create', function(req, res) {
  var data = req.body; // requires body-parser / express.json() middleware
  if (data.type && data.content && data.listeners) {
    notify(data);
    res.sendStatus(200);
  } else {
    res.sendStatus(400);
  }
});

var notify = function(notification) {
  ns_mynamespace.in(notification.listeners.users)
    .emit("notification", {
      id: notification.id,
      title: 'hello',
      text: notification.content
    });
};
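Invoked from the CLI it could look like this (assuming the server listens on port 3000 and the express.json() middleware is enabled; both are assumptions):

curl -X POST http://localhost:3000/create \
  -H "Content-Type: application/json" \
  -d '{"type":"alert","content":"hello world","listeners":{"users":"room1"}}'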
Quick background:
Full JavaScript SPA: an AngularJS client that talks to a RESTful API server. I am trying to work out the best authentication for the API server. The client will have roles, and I am not concerned if the user can see areas of the client they aren't allowed to, because the server should be airtight.
Authentication flow:
User POSTs username and password to, let's say, /api/authenticate.
If the user is valid, the server generates an API token (a SHA or MD5 hash of some fields) and some other metadata determining roles, to pass back in the reply to 1).
The token is stored in a session cookie (no expiry, HttpOnly, SSL only).
Each request after authentication takes the token from the cookie and verifies this is the user.
SSL is used on the server.
Questions:
Is this the best way to secure the server?
Do I need to worry about replay attacks with SSL? If so, what is the best way to manage this?
I tried to think of a way to do HMAC security with AngularJS but I can't store a private key on a javascript client.
I initially went with the HTTP authentication method, but sending the username and password with each request seems odd.
Any suggestions or examples would be appreciated.
I'm currently working on a similar situation using angularjs+node as a REST API, authenticating with HMAC.
I'm in the middle of working on this though, so my tune may change at any point. Here's what I have so far. Anyone willing to poke holes in this, I welcome that as well:
User authenticates, username and password over https
Server (in my case node.js+express) sends back a temporary universal private key to authenticated users. This key is what the user will use to sign HMACs client side and is stored in LocalStorage on the browser, not a cookie (since we don't want it going back and forth on each request).
The key is stored in nodejs memory and regenerates every six hours, keeping a record of the last key generated. For 10 seconds after the key changes, the server actually generates two HMACs: one with the new key, one with the old key. That way requests that were made while the key changed are still valid. If the key changed, the server sends the new one back to the client so it can store it in LocalStorage. The key is a SHA256 of a UUID generated with node-uuid, hashed with crypto. And after typing this out, I realize this may not scale well, but anyway...
The key is then stored in LocalStorage on the browser (the app actually spits out a your-browser-is-too-old page if LocalStorage is not supported before you can even try to login).
Then all requests beyond the initial authentication send three custom headers:
Auth-Signature: HMAC of username+time+request.body (in my case request.body is a JSON.stringify()'d representation of the request vars) signed with the locally stored key
Auth-Username: the username
X-Microtime: A unix timestamp of when the client generated its HMAC
The server then checks the X-Microtime header, and if the gap between X-Microtime and now is greater than 10 seconds, drop the request as a potential replay attack and throw back a 401.
Then the server generates its own HMAC using the same sequence as the client, Auth-Username+X-Microtime+req.body, using the 6-hour private key in node memory.
If the HMACs are identical, trust the request; if not, 401. And we have the Auth-Username header if we need to deal with anything user-specific on the API.
All of this communication is intended to happen over HTTPS obviously.
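A minimal sketch of that server-side check using Node's built-in crypto module (header names as above; key rotation and the dual-HMAC grace window are omitted for brevity):

const crypto = require('crypto');

function verifyRequest(req, privateKey) {
  // reject anything outside the 10-second replay window
  const time = Number(req.headers['x-microtime']);
  if (Math.abs(Date.now() / 1000 - time) > 10) return false;

  // recompute the HMAC exactly as the client did: username + time + body
  const payload = req.headers['auth-username'] + time + JSON.stringify(req.body);
  const expected = crypto.createHmac('sha256', privateKey)
    .update(payload)
    .digest('hex');

  // constant-time comparison so the check itself doesn't leak information
  const given = Buffer.from(req.headers['auth-signature'] || '', 'hex');
  const want = Buffer.from(expected, 'hex');
  return given.length === want.length && crypto.timingSafeEqual(given, want);
}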
Edit:
The key would have to be returned to the client after each successful request to keep the client up to date with the dynamic key. This is problematic since it basically does the same thing a cookie does.
You could make the key static and never-changing, but that seems less secure because the key would never expire. You could also assign a per-user key that gets returned to the client on login, but then you still have to do user lookups on each request anyway; you might as well just use basic auth at that point.
Edit #2
So, after doing some testing of my own, I've decided to go with a backend proxy in front of my REST API, still using HMAC.
Angular connects to a same-domain backend; the backend runs the HMAC procedure from above, with the private key stored on this proxy. Having this on the same domain allows us to block CORS.
On successful auth, Angular just gets a flag, and we store the logged-in state in LocalStorage. No keys, just something that identifies the user and is OK to be made public. For me, the presence of this stored value is what determines whether the user is logged in. We remove the LocalStorage entry when they log out or when we decide to invalidate their "session".
Subsequent calls from Angular to the same-domain proxy contain a user header. The proxy checks for the user header (which can only be set by us, because we've blocked cross-site access), returns 401 if it is not set, and otherwise forwards the request through to the API, HMAC'd as above. The API passes the response back to the proxy and thus back to Angular.
This allows us to keep private bits out of the front end, while still letting us build an API that can authenticate quickly without DB calls on every request and remain stateless. It also allows our API to serve other interfaces, like a native mobile app. Mobile apps would just be bundled with the private key and run the HMAC sequence for each of their requests.
I'm building an app where the user may occasionally make a search. I'd like to run the search through Google, but I'm unsure whether, if I have many users, I will hit Google's search quota. Any individual user will not make more than one or two searches a day on the app, but cumulatively it could potentially be much more.
Will doing client-side retrieval of a Google query avoid this problem and not identify my server as the origin IP?
Yes, if you do a GET request from the client, the client's IP will be the source IP.
Since you are doing a GET from the client's side, the TCP/IP connection is opened by the client, so it is the client's IP that the site sees as the requesting IP. However, if you would like the site to see your IP instead, you can re-route the request via AJAX to your server, have your server do the GET, and send the results asynchronously back to the client.
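A minimal sketch of that server-side proxy variant (the endpoint, port, and target URL are illustrative; assumes Node 18+ for the built-in fetch):

// server-side proxy: the remote site sees the server's IP, not the client's
const express = require('express');
const app = express();

app.get('/proxy-search', async (req, res) => {
  try {
    const url = 'https://example.com/search?q=' + encodeURIComponent(req.query.q || '');
    const response = await fetch(url); // the outbound GET comes from the server
    res.send(await response.text());
  } catch (err) {
    res.status(502).send('upstream request failed');
  }
});

app.listen(3000);

// the browser then calls the proxy instead of the remote site directly:
// fetch('/proxy-search?q=term').then(r => r.text()).then(console.log);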