I want to capture particular HTTP requests from a Flash game and then alter the HTTP responses it receives. I can currently do this using Fiddler, but I want to write some Javascript that achieves the same programmatically. Is it possible to capture and alter HTTP browser traffic with JS like this?
Regarding my motivations, I am part of a community who enjoy playing an ancient Flash game. Part of the game involves uploading your own levels to the game's server. Unfortunately, this is broken - when you request the level from the server via the game, the server always reports failure, presumably due to no longer being maintained. So, in order to play our levels, we are using Fiddler to capture the game's HTTP requests that ask the server for the level data and then altering the server failure response by inserting our level data. I am trying to automate this process on a webpage.
Is HTTP packet sniffing feasible in Javascript? Or will we continue to be limited by native desktop solutions like Fiddler?
Web-based proxies are totally a thing. In the same manner that your current solution uses Fiddler as an intermediary between your web browser and the game server, a website can act as an intermediary between your browser and another website by simply making HTTP requests itself and then sending the modified results to the user.
To diagram:
Browser -> Fiddler -> WebPage (Game) -> Fiddler -> Browser
...is roughly equivalent to...
Browser -> WebPage (Proxy Server) -> WebPage (Game) -> WebPage (Proxy Server) -> Browser
And you could in theory write your proxy server entirely in javascript (see: full stack javascript)!
But based on the fact that you ask about javascript specifically, I'm going to guess that you are not interested in your proxy page having a meaningful back end. This may be a problem. If you would like your proxy website to be entirely client side javascript, your diagram suddenly looks more like this:
Browser -> WebPage (Proxy Server) -> Browser -> WebPage (Game) -> Browser -> WebPage (Proxy Server) -> Browser
This is a problem because web browsers take steps to prevent this behavior by default (see: Same Origin Policy). Most client-side javascript proxy solutions I can imagine violate the Same Origin Policy to a significant degree (if you have control of the site serving the game, you could look into CORS headers or JSONP requests - but it doesn't sound like this is an option).
If you can engineer a solution that doesn't violate same-origin policy you may be successful with an entirely client-side solution. In this case, I would recommend looking into async calls as a starting point (see: jQuery AJAX).
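If a thin back end turns out to be acceptable after all, a minimal sketch of the server-side proxy described above might look like this in Node.js. The /getlevel path, the gameserver.example.com host, and the mylevel.dat file are all placeholders for your game's actual endpoints and data:

```js
// Minimal Node.js proxy sketch (no external dependencies).
// Requests for level data are answered locally; everything
// else is forwarded to the real game server unchanged.
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  if (req.url.startsWith('/getlevel')) {
    // The broken server would report failure here, so we
    // answer with our own level data instead.
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end(fs.readFileSync('./mylevel.dat'));
  } else {
    const upstream = http.request(
      { host: 'gameserver.example.com', path: req.url, method: req.method },
      (gameRes) => {
        res.writeHead(gameRes.statusCode, gameRes.headers);
        gameRes.pipe(res);
      }
    );
    req.pipe(upstream);
  }
}).listen(8080);
```

Point the game (or a local hosts-file/proxy setting) at this server instead of the real one, and the interception happens transparently.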
Related
I have logic on my server that mostly does curl requests (e.g. accessing social networks). However, some of the sites will be blocking my server(s)' IPs soon.
I can, of course, use a VPN or deploy multiple servers per location, but it won't be accurate, and some of the networks might still block the user account.
I am trying to find a creative solution to run it from the user's browser (it is OK to ask for his permission, as it is an action he is explicitly trying to execute), though I am trying to avoid extra installations (e.g. downloadable plugins/extensions or a desktop app).
Is there a way to turn the client browser into a server-proxy, to run those curl calls from his machine instead of sending them from my own server? (e.g. using web-sockets, polling, etc.)
It depends on exactly what sort of curl requests you are making. In theory, you could simulate these using an XMLHttpRequest. However, for security reasons these are generally not allowed to access resources hosted on a different site. (Imagine the sort of issues it could cause for instance if visiting any website could cause your browser to start making requests to Facebook to send messages on your behalf.) Basically it will depend on the Cross-origin request policy of the social networks that you are querying. If the requests being sent are intended to be publicly available without authentication then it is possible that your system will work, otherwise it will probably be blocked.
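For illustration, this is roughly what such a browser-side request would look like; whether it succeeds depends entirely on the target site's CORS headers (the URL is a placeholder):

```js
// Sketch: replaying a "curl-like" GET from the user's browser.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.example.com/public-data');
xhr.onload = function () {
  // Only reached if the response carried a permissive
  // Access-Control-Allow-Origin header.
  console.log(xhr.responseText);
};
xhr.onerror = function () {
  // Disallowed cross-origin requests fail here with no
  // detail exposed to the page, by design.
  console.error('Request blocked or failed');
};
xhr.send();
```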
I'm trying to log all the requests that sites in my browser make behind the scenes. I can do it manually using Chrome's developer tools or Firebug, but I want to have either (a) a quick JS bookmarklet that I can run on sites when I want to log requests, or (b) a Chrome/Firefox extension to do so. I found this thread asking roughly the same thing, but I want to catch AJAX requests too. How can I go about this?
Fiddler
http://www.telerik.com/fiddler
This application runs outside of your browser to inspect all data transmitted between your computer and the internet. It's what I use to debug application design and I think it would be great for what you need.
Note that once it is running, it will automatically "log" all requests, and they can easily be saved for reviewing later. There are also loads of extensions to the application that may do the same for you.
Key Features
HTTP/HTTPS Traffic Recording
Fiddler is a free web debugging proxy which logs all HTTP(s) traffic between your computer and the Internet. Use it to debug traffic from virtually any application that supports a proxy like IE, Chrome, Safari, Firefox, Opera and more.
Web Session Manipulation
Easily manipulate and edit web sessions. All you need to do is set a breakpoint to pause the processing of the session and permit alteration of the request/response. You can also compose your own HTTP requests to run through Fiddler.
Web Debugging
Debug traffic from PC, Mac or Linux systems and mobile devices. Ensure the proper cookies, headers and cache directives are transferred between the client and server. Supports any framework, including .NET, Java, Ruby, etc.
Security Testing
Use Fiddler for security testing your web applications -- decrypt HTTPS traffic, and display and modify requests using a man-in-the-middle decryption technique. Configure Fiddler to decrypt all traffic, or only specific sessions.
Performance Testing
Fiddler lets you see the “total page weight,” HTTP caching and compression at a glance. Isolate performance bottlenecks with rules like “Flag any uncompressed responses larger than 25kb.”
Update:
Google Chrome Developer Tools (specifically the Network tab): you can easily see network traffic directly from the current webpage and monitor all HTTP information such as request and response headers, cookies and timing elements.
Try using jQuery's Global Ajax Event Handlers.
These methods register handlers to be called when certain events, such as initialization or completion, take place for any Ajax request on the page. The global events are fired on each Ajax request if the global property in jQuery.ajaxSetup() is true, which it is by default. Note: Global events are never fired for cross-domain script or JSONP requests, regardless of the value of global.
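For example, a small sketch using those documented handlers to log every jQuery-initiated request on a page (note they see only requests made through jQuery, not raw XMLHttpRequest or fetch calls):

```js
// Log all jQuery Ajax traffic on the page.
$(document).ajaxSend(function (event, jqXHR, settings) {
  console.log('AJAX request:', settings.type, settings.url);
});
$(document).ajaxComplete(function (event, jqXHR, settings) {
  console.log('AJAX response:', settings.url, 'status', jqXHR.status);
});
```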
I have read many posts on SO and the web regarding the keywords in my question title and learned a lot from them. Some of the questions I read are related to specific implementation challenges while others focus on general concepts. I just want to make sure I understood all of the concepts and the reasoning why technology X was invented over technology Y and so on. So here goes:
HTTP Polling: basically AJAX, using XMLHttpRequest.
HTTP Long Polling: AJAX, but the server holds on to the response until it has an update; as soon as it does, it sends the update and the client can send another request. The disadvantage is the additional header data that needs to be sent back and forth, causing extra overhead (a minimal client loop is sketched after this list).
HTTP Streaming: similar to long polling, but the server responds with a "Transfer-Encoding: chunked" header, so we do not need to initiate a new request every time the server sends some data (saving the extra header overhead). The drawback here is that we have to "understand" and figure out the structure of the data to distinguish between multiple chunks sent by the server.
Java Applet, Flash, Silverlight: They provide the ability to connect to socket servers over tcp/ip but since they are plugins, developers don't want to depend on them.
WebSockets: they are the new API which tries to address the shortcomings of the above methods in the following manner:
The only advantage of WebSockets over plugins like Java Applets, Flash or Silverlight is that WebSockets are natively built into browsers and do not rely on plugins.
The only advantage of WebSockets over http streaming is that you don't have to make an effort to "understand" and parse the data received.
The only advantage of WebSockets over Long Polling is the elimination of the extra header overhead and of opening and closing a socket connection for every request.
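For reference, a minimal long-polling client loop of the kind described in the list above; the /updates endpoint and the handleUpdate function are assumptions:

```js
function poll() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/updates');
  xhr.onload = function () {
    if (xhr.status === 200) {
      handleUpdate(xhr.responseText); // your own message handler
    }
    poll(); // reconnect immediately, as long polling requires
  };
  xhr.onerror = function () {
    setTimeout(poll, 5000); // back off briefly on errors
  };
  xhr.send();
}
poll();
```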
Are there any other significant differences that I am missing? I'm sorry if I am re-asking or combining many of the questions already on SO into a single question, but I just want to make perfect sense out of all the info that is out there on SO and the web regarding these concepts.
Thanks!
There are more differences than the ones you have identified.
Duplex/directional:
Uni-directional: HTTP poll, long poll, streaming.
Bi-directional: WebSockets, plugin networking
In order of increasing latency (approximate):
WebSockets
Plugin networking
HTTP streaming
HTTP long-poll
HTTP polling
CORS (cross-origin support):
WebSockets: yes
Plugin networking: Flash via policy request (not sure about others)
HTTP *: some recent support
Native binary data (typed arrays, blobs):
WebSockets: yes
Plugin networking: not with Flash (requires URL encoding across ExternalInterface)
HTTP *: recent proposal to enable binary type support
Bandwidth efficiency, in decreasing order:
Plugin networking: Flash sockets are raw except for initial policy request
WebSockets: connection setup handshake and a few bytes per frame
HTTP streaming (re-use of server connection)
HTTP long-poll: connection for every message
HTTP poll: a connection for every message, plus connections for polls that return no data
Mobile device support:
WebSocket: iOS 4.2 and up. Some Android via Flash emulation or using Firefox for Android or Google Chrome for Android which both provide native WebSocket support.
Plugin networking: some Android. Not on iOS
HTTP *: mostly yes
Javascript usage complexity (from simplest to most complicated). Admittedly complexity measures are somewhat subjective.
WebSockets
HTTP poll
Plugin networking
HTTP long poll, streaming
Also note that there is a W3C proposal for standardizing HTTP streaming called Server-Sent Events. It is currently fairly early in its evolution and is designed to provide a standard Javascript API with comparable simplicity to WebSockets.
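For a flavour of that API, a minimal Server-Sent Events client sketch; /stream is a placeholder endpoint that must respond with Content-Type: text/event-stream:

```js
var source = new EventSource('/stream');
source.onmessage = function (event) {
  console.log('server said:', event.data);
};
source.onerror = function () {
  // The browser reconnects automatically after errors.
  console.log('connection lost, retrying...');
};
```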
Some great answers from others that cover a lot of ground. Here's a little bit extra.
The only advantage of WebSockets over plugins like Java Applets, Flash or Silverlight is that WebSockets are natively built into browsers and do not rely on plugins.
If by this you mean that you can use Java Applets, Flash, or Silverlight to establish a socket connection, then yes, that is possible. However you don't see that deployed in the real world too often because of the restrictions.
For example, intermediaries can and do shutdown that traffic. The WebSocket standard was designed to be compatible with existing HTTP infrastructure and so is far less prone to being interfered with by intermediaries like firewalls and proxies.
Moreover, WebSocket can use ports 80 and 443 without requiring dedicated ports, again thanks to the protocol design being as compatible as possible with existing HTTP infrastructure.
Those socket alternatives (Java, Flash, and Silverlight) are difficult to use securely in a cross-origin architecture. Thus people attempting to use them cross-origin will often tolerate the insecurities rather than go to the effort of doing it securely.
They can also require additional "non-standard" ports to be opened (something administrators are loath to do) or policy files that need to be managed.
In short, using Java, Flash, or Silverlight for socket connectivity is problematic enough that you don't see it deployed in serious architectures too often. Flash and Java have had this capability for probably at least 10 years, and yet it's not prevalent.
The WebSocket standard was able to start with a fresh approach, bearing those restrictions in mind, and hopefully having learned some lessons from them.
Some WebSocket implementations use Flash (or possibly Silverlight and/or Java) as their fallback when WebSocket connectivity cannot be established (such as when running in an old browser or when an intermediary interferes).
While some kind of fallback strategy for those situations is smart, even necessary, most of those that use Flash et al will suffer from the drawbacks described above. It doesn't have to be that way -- there are workarounds to achieve secure cross-origin capable connections using Flash, Silverlight, etc -- but most implementations won't do that because it's not easy.
For example, if you rely on WebSocket for a cross-origin connection, that will work fine. But if you then run in an old browser or a firewall/proxy interfered and rely on Flash, say, as your fallback, you will find it difficult to do that same cross-origin connection. Unless you don't care about security, of course.
That means it's difficult to have a single unified architecture that works for native and non-native connections, unless you're prepared to put in quite a bit of work or go with a framework that has done it well. In an ideal architecture, you wouldn't notice whether the connections were native or not; your security settings would work in both cases; your clustering settings would still work; your capacity planning would still hold; and so on.
The only advantage of WebSockets over http streaming is that you don't have to make an effort to "understand" and parse the data received.
It's not as simple as opening up an HTTP stream and sitting back as your data flows for minutes, hours, or longer. Different clients behave differently and you have to manage that. For example some clients will buffer up the data and not release it to the application until some threshold is met. Even worse, some won't pass the data to the application until the connection is closed.
So if you're sending multiple messages down to the client, it's possible that the client application won't receive the data until 50 messages worth of data has been received, for example. That's not too real-time.
While HTTP streaming can be a viable alternative when WebSocket is not available, it is not a panacea. It needs a good understanding to work in a robust way out in the badlands of the Web in real-world conditions.
Are there any other significant differences that I am missing?
There is one other thing that no one has mentioned yet, so I'll bring it up.
The WebSocket protocol was designed to be a transport layer for higher-level protocols. While you can send JSON messages or what-not directly over a WebSocket connection, it can also carry standard or custom protocols.
For example, you could do AMQP or XMPP over WebSocket, as people have already done. So a client could receive messages from an AMQP broker as if it were connected directly to the broker itself (and in some cases it is).
Or if you have an existing server with some custom protocol, you can transport that over WebSocket, thus extending that back-end server to the Web. Often an existing application that has been locked in the enterprise can broaden its reach using WebSocket, without having to change any of the back-end infrastructure.
(Naturally, you'd want to be able to do all that securely so check with the vendor or WebSocket provider.)
Some people have referred to WebSocket as TCP for the Web: just as TCP transports higher-level protocols, so does WebSocket, but in a way that's compatible with Web infrastructure.
So while sending JSON (or whatever) messages directly over WebSocket is always possible, one should also consider existing protocols. Because for many things you want to do, there's probably a protocol that's already been thought of to do it.
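As a small illustration of that layering, the browser WebSocket API lets the client propose subprotocols during the handshake, and the server picks one; the broker URL here is a placeholder:

```js
// 'xmpp' matches the XMPP-over-WebSocket example above.
var ws = new WebSocket('wss://broker.example.com/ws', ['xmpp']);
ws.onopen = function () {
  console.log('negotiated subprotocol:', ws.protocol);
};
```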
I'm sorry if I am re-asking or combining many of the questions already on SO into a single question, but I just want to make perfect sense out of all the info that is out there on SO and the web regarding these concepts.
This was a great question, and the answers have all been very informative!
If I may ask one additional thing: I came across an article somewhere that says that HTTP streaming may also be cached by proxies while WebSockets are not. What does that mean?
That's a good point. To understand this, think about a traditional HTTP scenario... Imagine a browser opened a web page, so it requests http://example.com, say. The server responds with HTTP that contains the HTML for the page. Then the browser sees that there are resources in the page, so it starts requesting the CSS files, JavaScript files, and images of course. They are all static files that will be the same for all clients requesting them.
Some proxies will cache static resources so that subsequent requests from other clients can get those static resources from the proxy, rather than having to go all the way back to the central web server to get them. This is caching, and it's a great strategy to offload requests and processing from your central services.
So client #1 requests http://example.com/images/logo.gif, say. That request goes through the proxy all the way to the central web server, which serves up logo.gif. As logo.gif passes through the proxy, the proxy will save that image and associate it with the address http://example.com/images/logo.gif.
When client #2 comes along and also requests http://example.com/images/logo.gif, the proxy can return the image and no communication is required back to the web server in the center. This gives a faster response to the end user, which is always great, but it also means that there is less load on the center. That can translate to reduced hardware costs, reduced networking costs, etc. So it's a good thing.
The problem arises when the logo.gif is updated on the web server. The proxy will continue to serve the old image, unaware that there is a new one. This leads to the whole business of cache expiry, whereby the proxy will only cache the image for a short time before it "expires" and the next request goes through the proxy to the web server, which then refreshes the proxy's cache. There are also more advanced solutions where a central server can push out to known caches, and so on, and things can get pretty sophisticated.
How does this tie in to your question?
You asked about HTTP streaming where the server is streaming HTTP to a client. But streaming HTTP is just like regular HTTP except you don't stop sending data. If a web server serves an image, it sends HTTP to the client that eventually ends: you've sent the whole image. And if you want to send data, it's exactly the same, but the server just sends for a really long time (like it's a massively gigantic image, say) or even never finishes.
From the proxy's point of view, it cannot distinguish between HTTP for a static resource like an image, or data from HTTP streaming. In both of those cases, the client made a request of the server. The proxy remembered that request and also the response. The next time that request comes in, the proxy serves up the same response.
So if your client made a request for stock prices, say, and got a response, then the next client may make the same request and get the cached data. Probably not what you want! If you request stock prices you want the latest data, right?
So it's a problem.
There are tricks and workarounds to handle problems like that, it is true. Obviously you can get HTTP streaming to work, since it's in use today. It's all transparent to the end user, but the people who develop and maintain those architectures have to jump through hoops and pay a price. It results in over-complicated architectures, which means more maintenance, more hardware, more complexity, more cost. It also means developers often have to care about something they shouldn't have to when they should just be focusing on the application, GUI, and business logic -- they shouldn't have to be concerned about the underlying communication.
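One of the simpler workarounds alluded to above is for the streaming server to mark its responses as uncacheable, so that well-behaved proxies pass them through instead of replaying them. A sketch in Node.js; the header values are standard HTTP, and the endless stream is purely illustrative:

```js
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/plain',
    'Cache-Control': 'no-cache, no-store, must-revalidate',
    'Pragma': 'no-cache', // for older HTTP/1.0 proxies
    'Expires': '0'
  });
  // ...then stream chunks for as long as the connection stays open.
  const timer = setInterval(() => res.write('data chunk\n'), 1000);
  req.on('close', () => clearInterval(timer));
}).listen(8080);
```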
HTTP limits the number of connections a client can have with a server to 2 (although this can be mitigated by using subdomains), and IE has been known to enforce this eagerly. Firefox and Chrome allow more (although I can't remember off the top of my head exactly how many). This might not seem like a huge issue, but if you are constantly using one connection for real-time updates, all other requests have to bottleneck through the other HTTP connection. And having more open connections from clients puts more load on the server.
WebSockets are a TCP-based protocol and as such don't suffer from this HTTP-level connection limit (but, of course, browser support is not uniform).
You know, a web server. Right now my Socket.IO server loads from a batch file that runs a JavaScript file. Can you use Node and make the socket.io server load from a web browser? Like a web-server utility tool or something of the sort.
That's explicitly not possible due to the design of WebSockets. A WebSocket connection starts as a special HTTP request that, after the handshake, drops the HTTP protocol and strips it down into the WebSocket protocol -- a nearly bare protocol similar to (but slightly more managed than) raw TCP. Because a web browser cannot accept incoming HTTP requests, it can never act as the server side of that handshake.
This was done specifically so it wouldn't be possible to write a drive-by botnet website to use scores of users' computers for DDOS attacks without their knowing, amongst other security concerns.
So it wouldn't surprise me if Flash supported that kind of behavior. ;) (I know Java can, but who enables Java applets?)
I'd say you can, though I can't think of a good use case.
You would need to put the startup code somewhere where the web server could run it and you would need to get the web server to return some information to the browser to allow it to then connect. You would also have to insert the socket.io code into the browser after the socket server had started.
So I think it would indeed be possible, but rather complex for little gain. I suppose one possible use case would be to restart a socket server after failure. Actually, I'd do that a slightly different way, probably by calling an external script from Node.
Fortunately, the answer is no. If you mean loaded/launched: no. But you can create a script on a server that launches another server once a URL is requested by a client.
I am currently trying to implement a simple HTTP server for a kind of comet technique (long-polling XHR requests). As JavaScript is very strict about cross-domain requests, I have a few questions:
As I understand it, any Apache worker is blocked while serving a request, so writing the "script" as a usual website would block Apache once all workers have a request to serve. --> Does not work!
I came up with the idea of writing my own simple HTTP server solely for serving these long-polling requests. This server should be non-blocking, so each worker could handle many requests at the same time. As my site also contains content / images etc., and my server does not need to serve content, I started it on a port other than 80. The problem now is that I can't interact between the JavaScript delivered by my Apache and my comet server running on a different port, because of cross-domain restrictions. --> Does not work!
Then I came up with the idea of using mod_proxy to map my server onto a new subdomain. I couldn't really figure out how mod_proxy works, but I imagine I would end up with the same effect as in my first approach?
What would be the best way to combine this kind of classic website with these long-polling XHR requests? Do I need to implement content delivery in my own server?
I'm pretty sure using mod_proxy will block a worker while the request is being processed.
If you can use 2 IPs, there is a fairly easy solution.
Let's say IP A is 1.1.1.1 and IP B is 2.2.2.2, and let's say your domain is example.com.
This is how it will work:
-Configure Apache to listen on port 80, but ONLY on IP A.
-Start your other server on port 80, but only on IP B (a sketch of binding a server to a single IP follows below).
-Configure the XHR requests to be on a subdomain of your domain, but with the same port. So the cross-domain restrictions don't prevent them. So your site is example.com, and the XHR requests go to xhr.example.com, for example.
-Configure your DNS so that example.com resolves to IP A, and xhr.example.com resolves to IP B.
-You're done.
This solution will work if you have 2 servers and each one has its IP, and it will work as well if you have one server with 2 IPs.
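If the custom long-poll server happens to be written in Node.js, for example, binding it to only IP B is a one-liner; 2.2.2.2 is the placeholder address from above:

```js
const http = require('http');

// Listen on port 80, but only on IP B, leaving IP A
// free for Apache to claim the same port.
http.createServer((req, res) => {
  // ...hold the request open until there is data to send...
  res.end('response when ready');
}).listen(80, '2.2.2.2');
```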
If you can't use 2 IPs, I may have another solution, I'm checking if it's applicable to your case.
This is a difficult problem. Even if you get past the security issues you're running into, you'll end up having to hold a TCP connection open for every client currently looking at a web page. You won't be able to create a thread to handle each connection, and you won't be able to "select" on all the connections from a single thread. Having done this before, I can tell you it's not easy. You may want to look into libevent, which memcached uses to a similar end.
Up to a point you can probably get away with setting long timeouts and allowing Apache to have a huge number of workers, most of which will be idle most of the time. Careful choice and configuration of the Apache worker module will stretch this to thousands of concurrent users, I believe. At some point, however, it will not scale up any more.
I don't know what your infrastructure looks like, but we have load-balancing boxes in the network racks called F5s. These present a single external domain, but redirect the traffic to multiple internal servers based on their response times, cookies in the request headers, etc. They can be configured to send requests for a certain path within the virtual domain to a specific server. Thus you could have example.com/xhr/foo requests mapped to a specific server to handle these comet requests. Unfortunately, this is not a software solution, but a rather expensive hardware solution.
Anyway, you may need some kind of load-balancing system (or maybe you have one already), and perhaps it can be configured to handle this situation better than Apache can.
I had a problem years ago where I wanted customers using a client-server system with a proprietary binary protocol to be able to access our servers on port 80, because they were continuously having problems with firewalls on the custom port that the system used. What I needed was a proxy that would live on port 80 and direct the traffic to either Apache or the app server depending on the first few bytes of what came across from the client. I looked for a solution and found nothing that fit. I considered writing an Apache module, a plugin for DeleGate, etc., but eventually rolled my own custom content-sensing proxy service. That, I think, is the worst-case scenario for what you're trying to do.
To answer the specific question about mod_proxy: yes, you can set up mod_proxy to serve content that is generated by a server (or service) that is not public-facing (i.e. one that is only available via an internal address or localhost).
I've done this in a production environment and it works very, very well: Apache forwarding some requests to Tomcat via AJP workers, and others to a GIS application server via mod_proxy. As others have pointed out, cross-site security may stop you working on a sub-domain, but there is no reason why you can't proxy requests to mydomain.com/application.
To talk about your specific problem: I think you are really getting bogged down in looking at the problem as "long-lived requests" - i.e. assuming that when you make one of these requests, that's it, the whole process needs to stop. It seems as though you are trying to solve an issue with application architecture via changes to system architecture. In fact, what you need to do is treat these background requests exactly as such, and multi-thread it:
Client makes the request to the remote service "perform task X with data A, B and C"
Your service receives the request: it passes it on to a scheduler which issues a unique ticket / token for the request. The service then returns this token to the client: "thanks, your task is in a queue running under token Z"
The client then hangs onto this token, shows a "loading/please wait" box, and sets up a timer that fires, say, every second
When the timer fires, the client makes another request to the remote service: "have you got the results for my task, it's token Z"
Your background service can then check with your scheduler, and will likely return an empty document ("no, not done yet") or the results
When the client gets the results back, it can simply clear the timer and display them. (A sketch of this client-side loop follows below.)
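A minimal sketch of the client side of those steps, using jQuery for brevity; the /submit and /result endpoints and the response shapes are assumptions:

```js
// Submit a task, receive a token, then poll for the result.
function submitTask(taskData, onDone) {
  $.post('/submit', taskData, function (resp) {
    var token = resp.token;           // "running under token Z"
    var timer = setInterval(function () {
      $.get('/result', { token: token }, function (result) {
        if (result.done) {            // scheduler says the job finished
          clearInterval(timer);       // stop polling
          onDone(result.value);       // display the results
        }                             // otherwise keep polling
      });
    }, 1000);                         // fires every second, as described
  });
}
```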
So long as you're reasonably comfortable with threading (which you must be, given that you've indicated you're looking at writing your own HTTP server), this shouldn't be too complex - on top of the HTTP listener part:
Scheduler object - a singleton that really just wraps a "first in, first out" queue. New tasks go onto the end of the queue, and jobs can be pulled off from the beginning: just make sure that the code to issue a job is thread-safe (lest you get two workers pulling the same job from the queue).
Worker threads can be quite simple - get access to the scheduler and ask for the next job: if there is one, do the work and send the results; otherwise just sleep for a period and start over.
This way, you're never going to be blocking Apache for longer than need be, as all you are doing is issuing requests for "do x" or "give me the results for x". You'll probably want to build in some safety features at a few points - such as handling tasks that fail, and making sure there is a time-out on the client side so it doesn't wait indefinitely.
For number 2: you can get around cross-domain restrictions by using JSONP.
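A small sketch of the JSONP pattern; the comet server's host and port are placeholders, and it must wrap its JSON reply in the named callback:

```js
// The browser will load and execute a <script> from any origin,
// which is exactly what JSONP exploits.
function handleData(data) {
  console.log('got update:', data);
}
var script = document.createElement('script');
script.src = 'http://comet.example.com:8080/updates?callback=handleData';
document.head.appendChild(script);
```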
Three alternatives:
Use nginx. This means you run 3 servers: nginx, Apache, and your own server.
Run your server on its own port.
Use Apache mod_proxy_http (as you yourself suggested).
I've confirmed mod_proxy_http (Apache 2.2.16) works proxying a Comet application (powered by Atmosphere 0.7.1) running in GlassFish 3.1.1.
My test app with full source is here: https://github.com/ceefour/jsfajaxpush