Find a server in local network via client javascript in browser - javascript

Is it possible to find the IP of an HTTP server (which will respond to a specific request) on the local network using only client-side JavaScript in the browser? It should work in mobile and desktop browsers.
The poor man's way of trying every IPv4 address doesn't seem feasible because I cannot reliably get the local IP (http://net.ipcalf.com doesn't work in IE or in any mobile browser on iOS). UDP broadcasts don't seem possible from JavaScript either. Is there any other possibility?
My only other alternative seems to be developing a native discovery app for all relevant platforms (Windows, Mac, iOS, Android).

You can just guess the IP range of the local network and send an AJAX request to each IP in that range. Every request will produce an error: if it doesn't time out, you get a No 'Access-Control-Allow-Origin' error instead. You can distinguish the two by setting a fixed timeout and tracking how long it takes until the error occurs. If the tracked time is equal to or greater than your timeout, you can suspect a timeout (nothing there); if the error arrives faster, you can suspect that there is a server behind that IP.
Note: this is not a reliable way to search for servers, and it is very CPU intensive.
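A minimal sketch of the timing-based scan described above, assuming a guessed 192.168.1.x range, port 80 and a 1500 ms timeout (all illustrative values; also note that scanning http:// targets from an https:// page will be blocked as mixed content):

```js
// Probe one IP: resolve true if something seems to be listening there.
function probe(ip, timeoutMs = 1500) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  const started = performance.now();
  return fetch(`http://${ip}/`, { signal: controller.signal })
    .then(() => true) // a response arrived, so something answered
    .catch(() => {
      // A fast network/CORS error suggests a host answered; hitting the
      // timeout suggests nothing is listening at that address.
      return (performance.now() - started) < timeoutMs;
    })
    .finally(() => clearTimeout(timer));
}

async function scan() {
  for (let i = 1; i < 255; i++) {
    const ip = `192.168.1.${i}`;
    if (await probe(ip)) {
      console.log('possible server at', ip);
    }
  }
}

scan();
```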

There's no purely client-side way to get that information, no. You'd have to query a site that provides DNS lookup with a cross-domain-friendly API, but that's not purely client-side.

Related

Turning your browser into proxy server

I have logic on my server that mostly makes curl requests (e.g. accessing social networks). However, some of these sites will soon start blocking my server's IPs.
I could of course use a VPN or deploy multiple servers per location, but that won't be accurate, and some of the networks might still block the user's account.
I am trying to find a creative solution to run it from the user's browser (it is OK to ask for permission, as it is an action the user is explicitly trying to execute). I am also trying to avoid extra installations (e.g. downloadable plugins/extensions or a desktop app).
Is there a way to turn the client's browser into a proxy for the server, so that those curl calls run from their machine instead of being sent from my own server (e.g. using WebSockets, polling, etc.)?
It depends on exactly what sort of curl requests you are making. In theory, you could simulate these using an XMLHttpRequest. However, for security reasons these are generally not allowed to access resources hosted on a different site. (Imagine the sort of issues it could cause for instance if visiting any website could cause your browser to start making requests to Facebook to send messages on your behalf.) Basically it will depend on the Cross-origin request policy of the social networks that you are querying. If the requests being sent are intended to be publicly available without authentication then it is possible that your system will work, otherwise it will probably be blocked.
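A small illustration of that point - whether this works depends entirely on the target site's CORS policy; the URL below is a placeholder, not a real endpoint of any particular social network:

```js
// Placeholder URL; not a real endpoint.
fetch('https://api.example-social-network.com/public/profile/123')
  .then(response => response.json())
  .then(data => console.log('allowed by CORS:', data))
  .catch(err => {
    // If the site does not send Access-Control-Allow-Origin for your origin,
    // the browser blocks the response and the request fails here instead.
    console.error('blocked by the cross-origin policy:', err);
  });
```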

Socket connection from chrome extension being blocked by proxy/firewall

I have a web app in javascript that connects to a socket using socket.io and a Chrome Extension which connects in the same way and to the same server.
Everything works fine on most computers and internet connections, but on one of my customers' computers the Chrome Extension fails to connect (the web app connects successfully).
By inspecting the extension's console for background.js (the script within the extension creating the socket connection) I see that it is not trying to connect to the right URL (my socket server) but to an unknown URL which seems to be a proxy: https://gateway.zscloud.net/auT?origurl=http%3A%2F%2Fmy_socket_server_domain...
Since this is happening only in that specific computer (from the 10 or so that I have tried with so far) using different internet connections (corporate network, guests network, mobile hotspot) and since other computers in those same networks DID succeed in connecting, I assume something installed or configured in the problematic computer is catching the connection request before it happens and tries to redirect it through a proxy.
Again, this happens only in the context of the Chrome Extension. The very same computer using the same internet connection DOES succeed in connecting from a web page in the same browser (Google Chrome).
Does anybody know what the problem could be? The client is not aware of having a security software (firewall, antivirus, etc...) that could be causing this, but it's a computer managed by his company so an admin could have done that for him. If that was the case, however, shouldn't the connection from the webpage be captured too? Is there anything specific to socket connections in Chrome Extensions that differ from regular web apps?
Thanks!
WebSocket connections differ from normal HTTP requests: they require a protocol upgrade after the connection is established, which some (!) proxies may be unable to support.
I was at some point behind one such (transparent) proxy at work; however, it does not attempt to intercept HTTPS, which means I could use wss: WebSockets but not ws: WebSockets.
...which you should be using anyway! With Let's Encrypt on the market, the barrier to entry for HTTPS is very low. If any sensitive data at all is sent through that connection, it's in your best interest.
For the record, that particular proxy is part of ZScaler which is a security solution. Sadly, it includes HTTPS MITM, so the above is unlikely to solve the problem (but should be implemented anyway!). It's set up as an OS-level proxy - if that setting is possible to change, or override with Chrome's proxy settings, that would fix it. However, that's going to piss off network security!
If you can't do that, then your client is SOL and should complain up the chain about the security solution breaking legitimate applications.
Edit: I looked around and found this, which seems to claim that using SSL (that is, wss:) is enough. But that's from 2012 - perhaps before ZScaler was able to MITM all HTTPS traffic.
It should be possible to test whether the switch to wss: will work using https://www.websocket.org/echo.html - if it can connect, then everything will work over wss:.
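A quick way to check this from the problematic machine, along the lines of the echo test linked above (the endpoint is websocket.org's public echo server; swap in your own wss:// URL to test your server instead):

```js
// Try a secure WebSocket connection and report whether it gets through.
const ws = new WebSocket('wss://echo.websocket.org');
ws.onopen = () => {
  console.log('wss: connection established through the proxy');
  ws.send('ping');
};
ws.onmessage = (event) => console.log('echoed back:', event.data);
ws.onerror = () => console.log('wss: connection failed - the proxy is likely intercepting TLS');
```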

How can I monitor outgoing requests from my browser in javascript?

I'm trying to log all the requests that sites in my browser make behind the scenes. I can do it manually using Chrome's developer tools or Firebug, but I want either (a) a quick piece of JS that I can bookmark and run on sites when I want to log requests, or (b) a Chrome/Firefox extension to do so. I found this thread asking roughly the same thing, but I want to catch AJAX requests too. How can I go about this?
Fiddler
http://www.telerik.com/fiddler
This application runs outside of your browser to inspect all data transmitted between your computer and the internet. It's what I use to debug application design and I think it would be great for what you need.
Note that once running, it will automatically "log" all requests, and they can be easily saved for reviewing later. There are also loads of extensions for the application that may do the same for you.
Key Features
HTTP/HTTPS Traffic Recording
Fiddler is a free web debugging proxy which logs all HTTP(s) traffic between your computer and the Internet. Use it to debug traffic from virtually any application that supports a proxy like IE, Chrome, Safari, Firefox, Opera and more.
Tamper with client requests and server responses
Web Session Manipulation
Easily manipulate and edit web sessions. All you need to do is set a breakpoint to pause the processing of the session and permit alteration of the request/response. You can also compose your own HTTP requests to run through Fiddler.
Inspect and debug traffic from any client
Web Debugging
Debug traffic from PC, Mac or Linux systems and mobile devices. Ensure the proper cookies, headers and cache directives are transferred between the client and server. Supports any framework, including .NET, Java, Ruby, etc.
Decrypt HTTPS web sessions
Security Testing
Use Fiddler for security testing your web applications -- decrypt HTTPS traffic, and display and modify requests using a man-in-the-middle decryption technique. Configure Fiddler to decrypt all traffic, or only specific sessions.
Test the performance of your web sites and apps
Performance Testing
Fiddler lets you see the “total page weight,” HTTP caching and compression at a glance. Isolate performance bottlenecks with rules like “Flag any uncompressed responses larger than 25kb.”
Update:
Google Chrome Developer Tools (specifically the Network tab) lets you easily see network traffic directly from the current web page and monitor all HTTP information, such as request and response headers, cookies and timing.
Try to use jQuery Global Ajax Event Handlers
These methods register handlers to be called when certain events, such as initialization or completion, take place for any Ajax request on the page. The global events are fired on each Ajax request if the global property in jQuery.ajaxSetup() is true, which it is by default. Note: Global events are never fired for cross-domain script or JSONP requests, regardless of the value of global.
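For example, a minimal bookmarklet-style snippet using those global handlers to log every Ajax request the page makes (assuming the page itself uses jQuery and has not set global: false):

```js
// Log every Ajax request the page sends, and its completion status.
$(document).ajaxSend(function (event, jqXHR, settings) {
  console.log('AJAX request:', settings.type, settings.url);
});

$(document).ajaxComplete(function (event, jqXHR, settings) {
  console.log('AJAX complete:', settings.url, 'status', jqXHR.status);
});
```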

Chrome.socket listener in browser

So I've searched very thoroughly and tried a lot of things to create a TCP or UDP listener for the Chrome.socket API in the browser in JavaScript. I am very confused now. So far I have tried using:
WebSockets
WebTCP sockets
WebRTC
PeerJS
Node net module in chrome extension
And several more small options. So far, none of these options seem to work properly. The closest I got was a readyState of zero in the browser: the server accepts the connection, but the readyState of the socket in the browser just stays zero (meaning it is still connecting).
I have two questions.
Is it possible?
Is it even possible to listen for incoming TCP/UDP connections made from a Chrome extension on a separate website with only client-side JavaScript?
How?
If so, how would I be able to create this?
For security reasons, you will never be able to create raw sockets in pure client-side code.
Protocols like WebSockets or WebRTC require that the server explicitly acknowledge the protocol and allow you to connect.
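To make the contrast concrete, the closest a plain page can get is a WebSocket client, and even that only works because the server on the other end explicitly speaks the WebSocket protocol (the URL below is a placeholder):

```js
// A WebSocket handshake only succeeds if the server implements the protocol;
// a plain TCP or UDP listener will not accept this connection.
const socket = new WebSocket('ws://localhost:8080');
socket.onopen = () => socket.send('hello');
socket.onmessage = (event) => console.log('server said:', event.data);
socket.onerror = () => console.log('handshake failed');
```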

Do AJAX applications that use POST requests always fail in Internet Explorer?

I have recently discovered that the intermittent failures experienced by users running my application in Internet Explorer are due to a bug in Internet Explorer itself. The bug is in the HTTP stack and should affect all applications using POST requests from IE. The result is a failure characterized by a request that seems to hang for about 5 minutes (depending on server type and configuration) and then fails from the server end. The browser application will error out of the POST request after the server gives up. I explain the IE bug in detail below.
As far as I can tell, this will happen with any application using XMLHttpRequest to send POST requests to the server, if the request is sent at the wrong moment. I have written a sample program that attempts to send POSTs at exactly those times: it sends continuous POSTs to the server at the precise moment the server closes the connections. The interval is derived from the Keep-Alive header sent by the server.
I am finding that when running from IE to a server with a bit of latency (i.e. not on the same LAN), the problem occurs after only a few POSTs. When it happens, IE locks up so hard that it has to be force closed. The ticking clock is an indication that the browser is still responding.
You can try it by browsing to: http://pubdev.hitech.com/test.post.php. Please take care that you don't have any important unsaved information in any IE session when you run it, because I am finding that it will crash IE.
The full source can be retrieved at: http://pubdev.hitech.com/test.post.php.txt. You can run it on any server that has php and is configured for persistent connections.
My questions are:
What are other people's experiences with this issue?
Is there a known strategy for working around this problem (other than "use another browser")?
Does Microsoft have better information about this issue than the article I found (see below)?
The problem is that web browsers and servers by default use persistent connections as described in RFC 2616 section 8.1 (see http://www.ietf.org/rfc/rfc2616.txt). This is very important for performance--especially for AJAX applications--and should not be disabled. There is however a small timing hole where the browser may start to send a POST on a previously used connection at the same time the server decides the connection is idle and decides to close it. The result is that the browser's HTTP stack will get a socket error because it is using a closed socket. RFC 2616 section 8.1.4 anticipates this situation, and states, "...clients, servers, and proxies MUST be able to recover from asynchronous close events. Client software SHOULD reopen the transport connection and retransmit the aborted sequence of requests without user interaction..."
Internet Explorer does resend the POST when this happens, but when it does it mangles the request. It sends the POST headers, including the Content-Length of the data as posted, but it does not send the data. This is an improper request, and the server will wait an unspecified amount of time for the promised data before failing the request with an error. I have been able to demonstrate this failure 100% of the time using a C program that simulates an HTTP server, which closes the socket of an incoming POST request without sending a response.
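That C program isn't shown here; as a rough sketch of the same idea (not the author's code), a fake server that closes the socket of an incoming POST without responding could look like this in Node.js:

```js
// Fake HTTP endpoint that drops the connection on any POST, never replying,
// to provoke the client's retry behaviour described above.
const net = require('net');

const server = net.createServer((socket) => {
  socket.on('data', (chunk) => {
    if (chunk.toString().startsWith('POST')) {
      socket.destroy(); // close the socket with no response at all
    }
  });
});

server.listen(8080, () => console.log('listening on 8080'));
```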
Microsoft seems to acknowledge this failure in http://support.microsoft.com/kb/895954. They say that it affects IE versions 6 through 9. They provide a hotfix for this problem, which has shipped with all versions of IE since IE 7. The hotfix does not seem satisfactory for the following reasons:
It is not enabled unless you use regedit to add a key called FEATURE_SKIP_POST_RETRY_ON_INTERNETWRITEFILE_KB895954 to the registry. This is not something I would expect my users to have to do.
The hotfix does not actually fix the broken POST. Instead, if the socket gets closed as anticipated by the RFC, it simply errors out immediately without trying to resend the POST. The application still fails--it just fails sooner.
The self-contained PHP program linked above demonstrates the bug. It attempts to send continuous POSTs to the server at the precise moment the server closes the connections, with the interval derived from the Keep-Alive header sent by the server.
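The PHP source itself isn't reproduced here; as a purely hypothetical sketch of the client-side part, the reproduction loop amounts to something like this, where keepAliveSeconds stands in for whatever timeout the server's Keep-Alive header advertises:

```js
// Hypothetical reproduction loop, not the original program.
// keepAliveSeconds stands in for the value parsed from the server's
// Keep-Alive response header (e.g. "timeout=5").
const keepAliveSeconds = 5;

function sendPost() {
  const xhr = new XMLHttpRequest();
  xhr.open('POST', 'test.post.php', true);
  xhr.onload = scheduleNext;
  xhr.onerror = function () {
    console.log('POST failed - connection reused just as the server closed it?');
    scheduleNext();
  };
  xhr.send('payload=1');
}

function scheduleNext() {
  // Fire the next POST right around the moment the server's idle timer expires,
  // so the browser tries to reuse a connection the server is about to close.
  setTimeout(sendPost, keepAliveSeconds * 1000);
}

sendPost();
```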
We've encountered this problem with IE on a regular basis. There is no good solution. The only solution that is guaranteed to solve the problem is to ensure that the web server's keep-alive timeout is higher than the browser's keep-alive timeout (by default, IE's is 60 s). Any situation where the web server is set to a lower value can result in IE attempting to reuse the connection and sending a request that gets rejected with a TCP RST because the socket has been closed. If the web server's keep-alive timeout is higher than IE's, then IE's reuse of connections ensures that the socket won't be closed. With high-latency connections you'll also have to factor in the latency, as the time spent in transit could be an issue.
Keep in mind however, that increasing the keepalive on the server means that an idle connection is using server sockets for that much longer. So you may need to size the server to handle a large number of inactive idle connections. This can be a problem as it may result in a burst of load to the server that the server isn't able to handle.
Another thing to keep in mind: you note that RFC 2616 section 8.1.4 states: "...clients, servers, and proxies MUST be able to recover from asynchronous close events. Client software SHOULD reopen the transport connection and retransmit the aborted sequence of requests without user interaction..."
You forgot a very important part. Here's the full text:
Client software SHOULD reopen the transport connection and retransmit the aborted sequence of requests without user interaction so long as the request sequence is idempotent (see section 9.1.2). Non-idempotent methods or sequences MUST NOT be automatically retried, although user agents MAY offer a human operator the choice of retrying the request(s). Confirmation by user-agent software with semantic understanding of the application MAY substitute for user confirmation. The automatic retry SHOULD NOT be repeated if the second sequence of requests fails.
An HTTP POST is non-idempotent as defined by 9.1.2. Thus the behavior of the registry hack is actually technically correct per the RFC.
No, generally POST works in IE. What you describe may be an issue, but it isn't such a major one as to deserve this huge a post.
And when you issue a POST Ajax request, to make sure every browser inconsistency is covered, just use jQuery.
One more thing:
No one sane will tell you to "use another browser", because IE is widely used and needs to be taken care of (well, except IE6 and, for some, maybe even some newer versions).
So POST has to work in IE, but to cover yourself against unexpected buggy behavior, use jQuery and you can sleep well.
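For what it's worth, a plain jQuery POST with an error handler along the lines that answer suggests (the URL and data below are placeholders), so any browser quirk at least surfaces as a visible error rather than a silent hang:

```js
// Placeholder URL and payload; jQuery normalizes the XHR handling across browsers.
$.ajax({
  type: 'POST',
  url: '/api/save',
  data: { name: 'value' },
  success: function (response) {
    console.log('saved:', response);
  },
  error: function (jqXHR, textStatus) {
    console.log('POST failed:', textStatus);
  }
});
```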
I have never encountered this issue, and our clients mostly run IE6.
I suspect you've configured your keep-alive timer to be too long. Most people configure it to be under 1 second, because persistent connections are only meant to speed up page loading, not to service Ajax calls.
If you have keep-alive configured for too long you'll face much more severe problems than IE crashing - your server will run out of file descriptors for open sockets!*
* Note: incidentally, opening and not closing connections to HTTP servers is a well-known DoS attack that tries to force the server to reach its maximum open-socket limit, which is why most server admins also configure connection timeouts to avoid having sockets open for too long.
