Listen for HTTP requests on a webpage - JavaScript

I'm creating a script for my website that needs a bi-directional connection (the script is a chat room window). But we don't want to open a socket; instead, we want the client (the script) and the server to each be able to send HTTP requests to the other.
(By the way, the website server and the script server are two different servers.)
It is very easy for the client (the script) to send HTTP requests to our server, but it is a big problem for the client (the script) to listen for HTTP requests.
I have done some searching but found nothing; maybe this requirement is so unusual that it is seldom needed? Is it possible for a script embedded in a webpage to listen for HTTP requests?
Thanks!

In order to receive an HTTP request, you must:
have a publicly accessible IP address
have an open port, publicly accessible
bind a program to that port to listen for HTTP requests
Browsers fall down on all three counts. You cannot expect all of your clients to have a publicly reachable, unblocked IP address with exactly the port you want standing open. But even if that were the case, there is no way for the browser to listen for incoming requests: no browser API exists for it, partly because the browser is an HTTP client and not a server, partly because such an API would hand an extremely powerful tool to all sorts of attackers, and partly because it would often be useless anyway, since the browser typically cannot be reached in the first place (see point 1).
So, no, you cannot turn the browser into an HTTP server.
Use WebSockets.
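A minimal sketch of a browser-side WebSocket client (the endpoint URL and message shapes are hypothetical), showing the bi-directional connection the question asks for over a single connection:

    // Hypothetical chat endpoint; one connection carries traffic both ways.
    const ws = new WebSocket("wss://chat.example.com/room");

    ws.addEventListener("open", () => {
      // Client -> server: no extra HTTP request needed.
      ws.send(JSON.stringify({ type: "join", room: "lobby" }));
    });

    ws.addEventListener("message", (event) => {
      // Server -> client: the server can push at any time.
      console.log("server says:", event.data);
    });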

Related

Managing online users on server without using websockets

I would like to show a list of connected users without using WebSockets.
I thought of using the HTTP header Connection: keep-alive to get persistent connections.
Then, when clients leave the website, a handler on the beforeunload event would notify the server that the client is about to leave the list.
But how can the server notify the rest of the connected clients to update their lists? (Remember: without using WebSockets, and if possible, without clients polling the server at an interval.)
Using the Connection: keep-alive header means that the browser and server carry out multiple HTTP request/response exchanges over one TCP connection, instead of opening and closing a TCP connection for each request. But this still doesn't allow the server to push data whenever it likes; for the server to respond with anything, the client still has to make a request. So keep-alive isn't really related to real-time push events.
if possible, without clients polling the server at an interval
This isn't really possible. As I said, a server cannot send data to a client over HTTP unless the client requested it first.
So you either have to make interval requests for the user list, or you can "simulate" pushing from the server with HTTP long-polling.
The basic idea is that the server never "finishes" its response to a client request, but sends its response in chunks, where each chunk is really treated on the client side as a separate piece of data. This solution is hacky and has a lot of cons, but either way, HTTP long-polling more or less simulates pushing data in real time.
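A rough client-side sketch of that long-polling loop, assuming a hypothetical /users/changes endpoint that the server holds open until the user list changes (renderUserList is an assumed rendering function):

    async function pollUserList() {
      while (true) {
        try {
          // The server delays its response until there is something to report.
          const response = await fetch("/users/changes");
          const users = await response.json();
          renderUserList(users); // assumed rendering function
        } catch (err) {
          // Timeout or network error: back off briefly, then reconnect.
          await new Promise((resolve) => setTimeout(resolve, 3000));
        }
      }
    }
    pollUserList();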

Why is an HTTP request needed to get the IP address in the browser

I need to get the user's IP address from the browser. I know we can get device information from the browser with plain JS and no HTTP requests involved (OS and browser info via the User-Agent), but to get the IP address you need to make an HTTP request, as the browser attaches the IP address as a header of the request, so you can read it server-side or in the response of that request in the UI.
I am lacking some basic understanding: I can't see why an HTTP request is required, or at what point the IP address is added as a header. If the browser doesn't know it, how does the header get attached?
I believe the OSI model is the basic knowledge you are looking for:
https://en.wikipedia.org/wiki/OSI_model
An HTTP request is just the top layer of the whole network stack.
The IP protocol is handled at the Network Layer (Layer 3), and the source address never arrives at the Application Layer (Layer 7).
The statement "the browser attaches the IP address as a header of the request" is wrong.
Normally an HTTP request does not carry source IP information in its headers; see https://en.wikipedia.org/wiki/List_of_HTTP_header_fields for the standard headers.
But you are right that the server side can figure out the client's IP. How does it achieve that?
In fact, HTTP is an Application Layer protocol, while the source IP belongs to the Internet layer.
The Internet protocol suite (TCP/IP) takes care of that: the server reads the client's address from the TCP connection itself, not from the HTTP message.
This also means it's impossible to get your IP directly in the browser. Moreover, sometimes it's even impossible to get your public IP address from within your own system.
For example, a WiFi access point normally uses DHCP to assign you a private IP only, and uses NAT to rewrite your packets when you send and receive requests.
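A minimal Node.js sketch of the server side, reading the address from the TCP socket rather than from any header:

    const http = require("http");

    http.createServer((req, res) => {
      // Filled in by the transport layer, not by the browser.
      // Behind NAT this is the public address the server sees, not your private one.
      const ip = req.socket.remoteAddress;
      res.end("Your IP appears to be " + ip);
    }).listen(8080);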

Is it possible to prevent cookies from being sent in every HTTP request?

I recently found (here: Does every web request send the browser cookies?) that every HTTP request contains the cookies related to a domain whenever a request is made to that domain.
Given this, what happens when the request is not sent through a browser but from Node.js, for example? Is it possible that no cookie information is sent in the request?
Is it also possible to prevent them from being sent in browser requests?
Browsers
It is not possible to prevent the browser from sending cookies.
This is why it is generally recommended (Yahoo developer best practices; see the section "Use Cookie-free Domains for Components") to serve static content like CSS and images from a different domain that is cookie-free.
When the browser requests a static image and sends cookies along with the request, the server has no use for those cookies, so they only create network traffic for no good reason. You should make sure static components are requested with cookie-free requests: create a subdomain and host all your static components there.
Programmatically
From any programming language, by contrast, you can choose whether or not to send cookies.
Cookie management is up to the programmer, because HTTP libraries are written to make single requests.
So if a first request returns cookies, you need to read them explicitly, hold them locally somewhere, and put them into a second request to the same server if you need to.
So from Node.js, if you don't explicitly add cookies to your requests, the HTTP call doesn't carry them.
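A sketch of that manual handling with Node.js's built-in fetch (the URLs are hypothetical):

    async function withManualCookies() {
      // First request: read any cookie the server sets, explicitly.
      const login = await fetch("https://example.com/login", { method: "POST" });
      const cookie = login.headers.get("set-cookie");

      // Second request: the cookie travels only because we attach it ourselves.
      return fetch("https://example.com/profile", {
        headers: cookie ? { cookie } : {},
      });
    }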
You can use fetch with the credentials option set to omit; see
https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
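For example (the URL is hypothetical):

    // credentials: "omit" tells the browser not to attach cookies to this request.
    fetch("https://static.example.com/logo.png", { credentials: "omit" })
      .then((response) => response.blob())
      .then((blob) => console.log("fetched", blob.size, "bytes"));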
You can strip cookies with a proxy server. For example, our product WinGate allows you to modify requests (and responses), and you could use this to clear the Cookie header in requests.
However, this will prevent a large number of websites from functioning properly, as cookies are used to transport session IDs, so that the server can identify each connection/request your browser makes as belonging to the same "session". HTTP itself has no concept of a session.
Disclaimer: I work for Qbik who make WinGate.

Connecting to a socket via JavaScript (without Flash)

I have a browser based app that needs to communicate with another service running on the client machine via a socket connection from the browser using JavaScript.
I need to post and parse XML back and forth on the socket.
I am unable to go down the Flash path, as its cross-domain security is a barrier: the service listening on the socket cannot be modified to support Flash's crossdomain.xml policy.
What are my options for a pure JS based solution?
You've got two major problems here:
It's difficult in JavaScript to access non-HTTP resources,
It's difficult in JavaScript to access resources not loaded from the same server.
There are exceptions to both of these, but the conjunction of exceptions available might not exactly match with what you need. Here are some possibilities:
Some sort of proxy on your own server that connects back to the machine with the XML service on behalf of your web app.
If you can control the client machine somewhat, you can run a server on it that embeds the XML in a JSONP-formatted HTTP response, which you can access by adding simple script tags; you can send messages the other way by using a script tag to request a URL with your data encoded into it.
If by "socket" you mean an HTTP connection, then there are a number of options. One is to add an Access-Control-Allow-Origin header to the HTTP responses; then you can do GETs and POSTs using normal XMLHttpRequests in recent browsers, as in the sketch below.
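A sketch of that last option, assuming the local service can be taught to emit one extra header (the port, origin, and payload are made up):

    // On the local service (Node.js stand-in for the XML service):
    const http = require("http");
    http.createServer((req, res) => {
      // Allow the web app's origin to read this response cross-origin.
      res.setHeader("Access-Control-Allow-Origin", "https://app.example.com");
      res.setHeader("Content-Type", "application/xml");
      res.end("<status>ok</status>");
    }).listen(8081);

    // In the browser, a normal cross-origin request then works:
    // fetch("http://localhost:8081/").then((r) => r.text()).then(console.log);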
JavaScript will not allow you to create a raw socket connection from the browser; it would violate the same-origin policy. If you could somehow save an applet/SWF to the local machine, you could serve it up as file:/// and it might be able to communicate with localhost (maybe! not tested).
Maybe putting a proxy in front of this unmodifiable socket server could open up some options for you. You could then use something like Flash, or avoid sockets altogether.
Your options for socket-based interaction are limited to plugins that support such live functionality. The options generally break down as Flash, Java, and Silverlight, all of which, aside from Java if I recall correctly, have similar policy requirements.
If you control your own server, you could create a socket service to proxy the request to the final destination. Or, depending on the interaction, you can use standard Ajax-style requests and have the socket interaction on your server-side code. If you don't need a persistent connection, having the socket interaction via the server is your best bet.

Understanding mod_proxy and Apache 2 for writing a comet-server

I am currently trying to implement a simple HTTP server for a kind of comet technique (long-polling XHR requests). As JavaScript is very strict about cross-domain requests, I have a few questions:
As I understand it, any Apache worker is blocked while serving a request, so writing the "script" as a usual website page would block Apache once all workers have a request to serve. --> Does not work!
I came up with the idea of writing my own simple HTTP server solely for serving these long-polling requests. This server should be non-blocking, so each worker could handle many requests at the same time. As my site also contains content, images, etc., and my server does not need to serve content, I started it on a port other than 80. The problem now is that the JavaScript delivered by Apache can't interact with my comet server running on a different port, because of cross-domain restrictions. --> Does not work!
Then I came up with the idea of using mod_proxy to map my server onto a new subdomain. I couldn't really figure out how mod_proxy works, but I imagine I would end up with the same effect as in my first approach?
What would be the best way to combine this kind of classic website with these long-polling XHR requests? Do I need to implement content delivery in my own server?
I'm pretty sure using mod_proxy will block a worker while the request is being processed.
If you can use 2 IPs, there is a fairly easy solution.
Let's say IP A is 1.1.1.1 and IP B is 2.2.2.2, and let's say your domain is example.com.
This is how it will work:
- Configure Apache to listen on port 80, but ONLY on IP A.
- Start your other server on port 80, but only on IP B.
- Configure the XHR requests to go to a subdomain of your domain, on the same port, so the cross-domain restrictions don't prevent them (for plain XHR you may still need document.domain or CORS headers, since subdomains count as different origins). So your site is example.com, and the XHR requests go to xhr.example.com, for example.
- Configure your DNS so that example.com resolves to IP A, and xhr.example.com resolves to IP B.
- You're done.
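A sketch of the Apache side of the first step, using the example addresses above:

    # httpd.conf: bind Apache to IP A only, leaving IP B free for the comet server.
    Listen 1.1.1.1:80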
This solution will work if you have two servers, each with its own IP, and it will work just as well if you have one server with two IPs.
If you can't use two IPs, I may have another solution; I'm checking whether it's applicable to your case.
This is a difficult problem. Even if you get past the security issues you're running into, you'll end up having to hold a TCP connection open for every client currently looking at a web page. You won't be able to create a thread to handle each connection, and you won't be able to "select" on all the connections from a single thread. Having done this before, I can tell you it's not easy. You may want to look into libevent, which memcached uses to a similar end.
Up to a point you can probably get away with setting long timeouts and allowing Apache to have a huge number of workers, most of which will be idle most of the time. Careful choice and configuration of the Apache worker module will stretch this to thousands of concurrent users, I believe. At some point, however, it will not scale up any more.
I don't know what your infrastructure looks like, but we have load-balancing boxes in the network racks called F5s. These present a single external domain, but redirect traffic to multiple internal servers based on their response times, cookies in the request headers, etc. They can be configured to send requests for a certain path within the virtual domain to a specific server. Thus you could have example.com/xhr/foo requests mapped to a specific server to handle these comet requests. Unfortunately, this is not a software solution, but a rather expensive hardware one.
Anyway, you may need some kind of load-balancing system (or maybe you have one already), and perhaps it can be configured to handle this situation better than Apache can.
I had a problem years ago where I wanted customers using a client-server system with a proprietary binary protocol to be able to access our servers on port 80, because they were continuously having problems with firewalls on the custom port the system used. What I needed was a proxy living on port 80 that would direct the traffic to either Apache or the app server depending on the first few bytes coming across from the client. I looked for a solution and found nothing that fit; I considered writing an Apache module, a plugin for DeleGate, etc., but eventually rolled my own custom content-sensing proxy service. That, I think, is the worst-case scenario for what you're trying to do.
To answer the specific question about mod_proxy: yes, you can set up mod_proxy to serve content that is generated by a server (or service) that is not public-facing (i.e. one only available via an internal address or localhost).
I've done this in a production environment and it works very, very well: Apache forwarding some requests to Tomcat via AJP workers, and others to a GIS application server via mod_proxy. As others have pointed out, cross-site security may stop you working on a sub-domain, but there is no reason why you can't proxy requests to mydomain.com/application.
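A sketch of such a mod_proxy setup (paths and ports are illustrative; mod_proxy and mod_proxy_http must be loaded):

    <VirtualHost *:80>
        ServerName mydomain.com
        # Forward /application to the internal comet server on localhost.
        ProxyPass        /application http://localhost:8080/application
        ProxyPassReverse /application http://localhost:8080/application
    </VirtualHost>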
To talk about your specific problem: I think you are really getting bogged down in viewing these as "long-lived requests", i.e. assuming that once you make one of these requests, the whole process needs to stop until it completes. You seem to be trying to solve an application-architecture issue with changes to the system architecture. In fact, what you need to do is treat these background requests as exactly that, and multi-thread it:
Client makes the request to the remote service "perform task X with data A, B and C"
Your service receives the request and passes it to a scheduler, which issues a unique ticket/token for it. The service then returns this token to the client: "thanks, your task is in a queue running under token Z".
The client then hangs onto this token, shows a "loading/please wait" box, and sets up a timer that fires, say, every second.
When the timer fires, the client makes another request to the remote service: "have you got the results for my task? It's token Z".
Your background service can then check with the scheduler and will return either an empty "no, not done yet" document or the results.
When the client gets the results back, it can simply clear the timer and display them.
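A browser-side sketch of that loop (the endpoints, field names, and one-second interval are assumptions):

    async function runTask(payload) {
      // 1. Submit the task; the service answers immediately with a token.
      const submitted = await fetch("/tasks", {
        method: "POST",
        body: JSON.stringify(payload),
      });
      const { token } = await submitted.json();

      // 2. Poll every second until the scheduler reports results.
      return new Promise((resolve) => {
        const timer = setInterval(async () => {
          const poll = await fetch("/tasks/" + token);
          const status = await poll.json();
          if (status.done) {
            clearInterval(timer); // 3. Clear the timer and display the results.
            resolve(status.results);
          }
        }, 1000);
      });
    }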
As long as you're reasonably comfortable with threading (which you must be, if you're looking at writing your own HTTP server), this shouldn't be too complex. On top of the HTTP listener part you need:
A scheduler object: really a singleton that just wraps a "first in, first out" queue. New tasks go onto the end of the queue, and jobs can be pulled off the front; just make sure the code that issues a job is thread-safe, lest you get two workers pulling the same job off the queue.
Worker threads, which can be quite simple: get access to the scheduler and ask for the next job; if there is one, do the work and store the results, otherwise just sleep for a period and start over.
This way you're never blocking Apache for longer than need be, as all you are doing is issuing requests for "do X" or "give me the results for X". You'll probably want to build in some safety features at a few points, such as handling tasks that fail, and making sure there is a timeout on the client side so it doesn't wait indefinitely.
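An illustrative sketch of the scheduler side in Node.js (single-threaded, so the thread-safety concern above is handled for free; performTask and the token format are assumptions):

    const queue = [];            // first in, first out
    const results = new Map();   // token -> finished results
    let nextToken = 0;

    function submit(task) {
      const token = String(nextToken++);
      queue.push({ token, task });
      return token;              // hand this back to the client immediately
    }

    // Worker loop: take the next job if there is one, otherwise idle and retry.
    setInterval(() => {
      const job = queue.shift();
      if (job) results.set(job.token, performTask(job.task)); // performTask is assumed
    }, 250);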
For number 2: you can get around cross-domain restrictions by using JSONP.
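A sketch of the JSONP trick (the endpoint and callback name are made up): the comet server on the other port wraps its JSON in a call to a function the page defines, and a script tag, which is exempt from the XHR same-origin rules, loads it:

    // The page defines the callback the server will call.
    function onComet(data) {
      console.log("polled data:", data);
    }

    // A script tag may load from any origin or port; the server responds with:
    //   onComet({"msg": "hello"})
    const script = document.createElement("script");
    script.src = "http://example.com:8124/poll?callback=onComet";
    document.head.appendChild(script);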
Three alternatives:
Use nginx. This means you run 3 servers: nginx, Apache, and your own server.
Run your server on its own port.
Use Apache mod_proxy_http (as you yourself suggested).
I've confirmed mod_proxy_http (Apache 2.2.16) works proxying a Comet application (powered by Atmosphere 0.7.1) running in GlassFish 3.1.1.
My test app with full source is here: https://github.com/ceefour/jsfajaxpush
