How to register a service worker on localhost subdomain? - javascript

I have a domain at https://example.co.uk, and one at https://chat.example.co.uk, both using service workers in production. When developing on localhost, I can use a service worker on the https://example.co.uk domain (on localhost:1337), but not when using the chat subdomain (on chat.localhost:1337), or any other subdomain.
This is not an issue on the live version, but it makes development quite difficult when working on the service workers' code.
Am I missing something important, or is there something I can do to allow the service worker to register anyway?
I tried turning the #allow-insecure-localhost Chrome flag on, but I don't think that was the problem.

As you've observed, Chrome whitelists only http://localhost itself for features that require a secure context, not subdomains of localhost (which, as far as I know, aren't even resolvable without jumping through some hoops in your local domain resolution setup).
To facilitate local testing, I'd recommend having your web server listen for requests on localhost on two different ports, e.g. 1337 and 1338. That is probably the easiest way to simulate the two distinct origins you have in production, with their isolated security contexts.
It may, of course, require some additional effort to refactor your configuration so that in your dev environment you communicate with the same localhost domain on a different port instead of with a subdomain.
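As a rough illustration of that refactor, here's a minimal sketch (the port numbers and hostnames are placeholders for whatever your app actually uses) where the client derives the chat origin from the hostname it's served on:

// Minimal sketch: in dev, a second localhost port stands in for the chat subdomain.
var chatOrigin = location.hostname === 'localhost'
  ? 'http://localhost:1338'        // dev: simulated chat.example.co.uk
  : 'https://chat.example.co.uk';  // production: the real subdomain

// Each origin then registers its own service worker as usual.
navigator.serviceWorker.register('/sw.js');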

Related

Socket connection from chrome extension being blocked by proxy/firewall

I have a web app in javascript that connects to a socket using socket.io and a Chrome Extension which connects in the same way and to the same server.
Everything works fine in most computers and internet connections, but one of my customer's computer is failing to have the Chrome Extension connected (the web app connects successfully).
By inspecting the extension's console for background.js (the script within the extension creating the socket connection) I see that it is not trying to connect to the right URL (my socket server) but to an unknown URL which seems to be a proxy: https://gateway.zscloud.net/auT?origurl=http%3A%2F%2Fmy_socket_server_domain...
Since this is happening only on that specific computer (out of the 10 or so I have tried so far) across different internet connections (corporate network, guest network, mobile hotspot), and since other computers on those same networks DID succeed in connecting, I assume something installed or configured on the problematic computer is intercepting the connection request and redirecting it through a proxy.
Again, this happens only in the context of the Chrome Extension. The very same computer using the same internet connection DOES succeed in connecting from a web page in the same browser (Google Chrome).
Does anybody know what the problem could be? The client is not aware of having any security software (firewall, antivirus, etc.) that could be causing this, but it's a computer managed by his company, so an admin could have done that for him. If that were the case, however, shouldn't the connection from the webpage be captured too? Is there anything specific to socket connections in Chrome Extensions that differs from regular web apps?
Thanks!
WebSocket connections differ from normal HTTP requests: after the initial HTTP exchange they require a protocol upgrade, which some proxies may be unable to support.
I was at some point behind one such (transparent) proxy at work; however, it did not attempt to intercept HTTPS, which meant I could use wss: WebSockets but not ws: WebSockets.
...which you should be using anyway! With Let's Encrypt on the market, the barrier to entry for HTTPS is very low. If any sensitive data at all is sent through that connection, it's in your best interest.
For the record, that particular proxy is part of ZScaler, which is a security solution. Sadly, it includes HTTPS MITM, so the above is unlikely to solve the problem (but should be implemented anyway!). It's set up as an OS-level proxy; if that setting can be changed, or overridden with Chrome's proxy settings, that would fix it. However, that's going to piss off network security!
If you can't do that, then your client is SOL and should complain up the chain about the security solution breaking legitimate applications.
Edit: I looked around and found this, which seems to claim that using SSL (that is, wss:) is enough. But that's from 2012 - perhaps before ZScaler was able to MITM all HTTPS traffic.
It should be possible to test whether the switch to wss: will work using https://www.websocket.org/echo.html - if it can connect, then everything will work over wss:
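If you'd rather test from code, here's a minimal sketch using the plain WebSocket API against that same echo server:

// If this connects and echoes from behind the proxy, wss: should work
// for your own server as well.
var ws = new WebSocket('wss://echo.websocket.org/');
ws.onopen = function () { ws.send('ping'); };
ws.onmessage = function (e) { console.log('echoed:', e.data); };
ws.onerror = function (e) { console.log('wss: appears blocked', e); };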

What is the correct CORS entry for limiting an http:// connection to a remote, hosted web server from a grunt web server on a home network?

I've setup a remote, hosted javascript server (DreamFactory Server http://www.dreamfactory.com/) that responds via REST API's.
Locally, I'm running an AngularJS application through the grunt web server via $ grunt serve
https://www.npmjs.com/package/grunt-serve
I have set up CORS on the remote server to allow '*' for multiple http:// connection types. THIS WORKS CORRECTLY.
My question is how I can limit the CORS configuration to only allow a connection from my home, grunt web server?
I've tried to create an entry for "localhost", "127.0.0.1", also my home Internet IP that is reported from whatismyip.com, the dns entry that my provider lists for my home IP when I ping it, a dyndns entry that I create for my home internet IP... None of them work, except for '*' (which allows any site to connect).
I think it is an educational issue for me to understand what that CORS entry should look like to allow ONLY a connection from my home web server.
Is this possible? If so, what and where should I be checking in order to find the correct entry to clear in the CORS configuration?
-Brian
For CORS to work and actually apply restrictions, the client requesting the connection must support and enforce it. In an odd sort of way (from a security point of view), restricting access using CORS requires a self-policing client, one that follows the prescribed access rules. This holds for modern browsers, as they all follow the rules, so it generally works for applications that are served through a browser.
But, CORS access restrictions do not prevent other types of clients (such as any random script in any language) from accessing your API.
In other words, CORS is really about access rules from web pages that are enforced by the local browser. It doesn't sound like your grunt/angular code would necessarily be something that implements and enforces CORS.
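To illustrate, here's a minimal sketch (the URL is a placeholder for your DreamFactory endpoint) of a non-browser client that your CORS configuration cannot stop:

// Run under Node.js 18+, which has a built-in fetch.
// CORS headers in the response are simply ignored outside a browser.
fetch('https://your-dreamfactory-server.example/api/v2/things')
  .then(function (res) { return res.json(); })
  .then(function (data) { console.log(data); });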
If you really want to prevent other systems from accessing your DreamFactory Server, then you will need to implement some server-side access restrictions in the API server itself.
If you just have one client accessing it and that client uses "protected" code that is not public, then you could implement a password or some sort of logon credentials, and your one client would be the only one holding them.
If the access is always from one particular fixed IP address, you could refuse connections on your server from any IP address that was not in a config file you maintained.
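A rough sketch of that last idea, assuming a Node/Express-style server sits in front of the API (the file name and port are made up):

// Refuse any request whose source IP is not listed in allowed-ips.json.
const fs = require('fs');
const express = require('express');

const allowed = new Set(JSON.parse(fs.readFileSync('allowed-ips.json', 'utf8')));
const app = express();

app.use(function (req, res, next) {
  if (!allowed.has(req.ip)) return res.status(403).send('Forbidden');
  next();
});

app.listen(8080);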
You can't secure an API with CORS; for that, you will need to implement an authentication scheme on your server. There are essentially four steps to do this:
1. Update the headers your server sends with a few additional Access-Control statements.
2. Tell Angular to allow cross-domain requests.
3. Pass credentials in your API calls from Angular.
4. Implement an HTTP authentication scheme on your web server or in your API code.
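On the Angular side, steps 2 and 3 might look roughly like this (a minimal sketch for AngularJS 1.x; the API URL is a placeholder):

// Send credentials (cookies/auth headers) along with cross-domain calls.
angular.module('app', []).config(function ($httpProvider) {
  $httpProvider.defaults.withCredentials = true;
});

// The server must then reply with Access-Control-Allow-Credentials: true
// and echo your exact origin in Access-Control-Allow-Origin (not '*').
angular.module('app').run(function ($http) {
  $http.get('https://your-dreamfactory-server.example/api/v2/things')
    .then(function (res) { console.log(res.data); });
});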
This post by Georgi Naumov is a good place to look for details of an implementation in Angular and PHP.
AngularJS $http, CORS and http authentication

Using Service Worker API in a multi site environment

So with all the new stuff like notifications and offline caching available now with the Service Worker API, I've been looking to add it to my web app.
The only thing I can't seem to figure out is how to deal with the https/ssl issue.
My site allows people to host websites in an online no-code environment. New sites are accessed via subdomains off the main domain. By itself this should only require a wildcard subdomain SSL cert, as far as I can see.
The complication I'm facing is that premium sites can add their own top-level domain, which will break the service worker as far as I can tell.
All these sites only require the user to sign up once, so users are shared between sites, and you can also get your notifications and messages cross-site.
I would like to take advantage of the notifications part of the api for mobile but I'm going to need to get around this issue first.
Any help or enlightenment on this would be much appreciated :).
As Alex Russell pointed out in his article:
Service Worker scripts must be hosted at the same origin
and a Service Worker can't work outside its scope. Subdomains are not the same origin, so you'll need a specific worker for each client's page.
However, I can't see a problem here: when someone enters yourpremiumclient.com, the DNS provider (e.g. Cloudflare, which offers free HTTPS and can force HTTPS) will point to your server, where a worker can install and control that domain's scope. Of course, the same worker won't be able to control your default scope, e.g. yourclient.yourdomain.com.
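Each site then registers its own worker, served from its own origin; a minimal sketch (the script path is whatever you serve per site):

// This runs on each site (subdomain or custom domain) separately, and the
// resulting worker only controls pages on that one origin.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js').then(function (reg) {
    console.log('service worker controls scope:', reg.scope);
  });
}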

Circumventing the same-origin policy with DNS trickery

I'm writing a web app with Javascript which needs to access a third-party API (located on x.apisite.com and y.apisite.com). I was using XMLHTTPRequest, but when serving the files from my own local server, this fails because of the same-origin policy.
Now, this web app is supposed to be installed on my mobile device, where any downloaded files will be cached. So, I changed my DNS entries to point x.apisite.com and y.apisite.com to my own local server. I then download the files and then change the DNS entries back to the correct ones. I thought that since the browser thinks that the scripts were downloaded from *.apisite.com, I could now make XMLHTTPRequests to *.apisite.com. However, this does not seem to be the case, I still get same-origin policy errors.
What am I doing wrong?
Here's the basic idea of what I'm doing:
<!DOCTYPE html>
<html>
<head>
<!-- this will actually be downloaded from my own local server -->
<script src="http://x.apisite.com/script-0.js">
<script src="http://y.apisite.com/script-1.js">
...
In script-0.js, I make an XMLHTTPRequest to x.apisite.com, and likewise in script-1.js, I access y.apisite.com.
Practical answer (not recommended): Create CNAME records from domains that you control to the third-party domains, then use those domains and hope that the third-party hosts aren't looking at the HTTP Host header. Note that this won't work if the clients attempt to authenticate the third-party hosts either, for example when using HTTPS (some client browsers may force the use of HTTPS in certain scenarios).
Ideal answer: Ask the third-party to authorize requests made by code that came from your origin domain using CORS (some hosts already allow requests from code from any origin, you should check that).
Alternative: If the third party doesn't want to give clients the go-ahead to make cross-origin requests with code from your domain, then you have to make those requests yourself, from your server. The code you send to the client browsers will then only interact with your own origin. This means users will have to trust you with their credentials if you're proxying requests for them (if that's relevant), or you must have credentials of your own that authenticate your server to the third-party hosts. It also means you take on the traffic load, which may or may not be heavy depending on the application. There are many other potential implications, all deriving from the fact that you explicitly take responsibility for these requests.
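A server-side proxy along those lines could look like this minimal sketch (assuming a Node/Express server and Node 18+ for the built-in fetch; the route and port are made up):

// The browser talks only to our origin; we forward to the third party.
const express = require('express');
const app = express();

app.get('/proxy/x/*', async function (req, res) {
  const upstream = await fetch('http://x.apisite.com/' + req.params[0]);
  res.status(upstream.status).send(await upstream.text());
});

app.listen(8080);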
Note: While this may sound a bit complicated, it is useful to understand the trust mechanics between the user, the user's client browser, the code executing in the browser, the origin of that code, and the domains to which that code makes requests. Keep the best interests of each party in mind, and it'll be easy to find a solution for your specific problem.
Final answer (everybody hates it, but you probably expected it): "It depends on what exactly you're trying to do." (Sorry.)

Serve a html/js/css application from CDN or application server (RoR, node.js etc)?

I'm doing a rich internet application (html/js/css) which has to communicate with a backend application server (RoR or node.js) through XHR/Websocket.
I wonder what the best way is to serve the RIA files to the web browsers: CDN or RoR/node.js as static file servers?
Doesn't the latter make it impossible for the browser to communicate with the backend server due to the same-origin policy?
Thanks
The same-origin policy applies to the requests your scripts make, not to where static files are served from.
// You are on www.test.com
$.get('http://api.someotherorigin.com/things.json', function (res) {
  // I'll get a same-origin policy error
});
This is why people use getJSON/JSONP in these cases. The policy even applies to subdomains, depending on how things are set up.
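For reference, the JSONP variant of the call above would look something like this (a minimal sketch; the URL is a placeholder, and the API must support a callback parameter):

// jQuery treats callback=? as a JSONP request, which sidesteps the
// same-origin policy by loading the response as a <script> tag.
$.getJSON('http://api.someotherorigin.com/things.json?callback=?', function (res) {
  console.log(res);
});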
A CDN has the benefit of serving your static files from a cookieless, often geolocation-optimized source. You almost certainly don't need this during development.
The benefits later on are that you are likely going to have only a few servers (or just one) in a location that may favor nearby users and give a crappy RTT to folks who aren't close. Additionally, your domain is likely going to carry cookies for authentication, session IDs, etc.; if you use a CDN, you avoid sending these cookies along with every single request for static files, reducing the overall request/response sizes.
Just host the files yourself. You can serve static files quite easily using connect
connect.static
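For example, a minimal sketch (note that in current versions of connect, the static middleware lives in the separate serve-static package):

// Serve everything under ./public on port 8080.
const connect = require('connect');
const serveStatic = require('serve-static');
const http = require('http');

const app = connect();
app.use(serveStatic(__dirname + '/public'));
http.createServer(app).listen(8080);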
You may request popular JavaScript files from a CDN if you want to take advantage of caching; jscdn and the Google CDN are popular.
But your own personal HTML/CSS files should be on a static file server. (You can use something else like nginx to serve those through a subdomain if you want.)
