Using the Service Worker API in a multi-site environment - javascript

So with all the new stuff like notifications and offline caching now available with the Service Worker API, I've been looking to add it to my web app.
The only thing I can't seem to figure out is how to deal with the HTTPS/SSL issue.
My site allows people to host websites in an online no-code environment. New sites are accessed via subdomains off the main domain, which by itself should only require a wildcard subdomain SSL cert.
The complication I'm facing is that premium sites can add their own top-level domain, which will break the service worker as far as I can tell.
All these sites only require the user to sign up once, so users are shared between sites and you can also get your notifications and messages cross-site.
I would like to take advantage of the notifications part of the API for mobile, but I'm going to need to get around this issue first.
Any help or enlightenment on this would be much appreciated :).

As Alex Russell pointed out in his article:
Service Worker scripts must be hosted at the same origin
and a Service Worker can't work outside its scope. Subdomains are not the same origin, so you'll need a separate worker for each client's page.
However, I can't see a problem here: when someone enters yourpremiumclient.com, the DNS provider (e.g. Cloudflare, which offers free HTTPS and can force HTTPS) will point to your server, where a worker can install and control that domain's scope. Of course, the same worker won't be able to control your default scope, e.g. yourclient.yourdomain.com.
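In practice that just means each origin, whether a default subdomain or a premium custom domain, registers its own worker. A minimal sketch of the registration snippet that could be served unchanged from every site (the /sw.js path and root scope are assumptions, adjust to your setup):

```javascript
// Served on every origin (yourclient.yourdomain.com, yourpremiumclient.com, ...).
// Each origin gets its own independent registration and worker instance,
// scoped to that origin only. '/sw.js' and scope '/' are assumptions.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker
    .register('/sw.js', { scope: '/' })
    .then((registration) => {
      console.log('SW registered for', window.location.origin,
                  'with scope', registration.scope);
    })
    .catch((err) => {
      console.error('SW registration failed:', err);
    });
}
```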

Related

How to access resources in a private subnet from the Apple App Store and Google Play

Would love to get people's thoughts on this.
I have a front-end application that lives on the Apple App Store. It interacts with custom JavaScript APIs that we've built and that are deployed on an EKS cluster. The cluster and the EC2 instances of the cluster live in private subnets in AWS, but are exposed to the world through an application load balancer that lives in a public subnet.
Since the front-end application lives on Apple's servers, I can't think of an easy way to securely access the APIs in AWS without exposing them to the world. This is what I have in mind:
Use API keys. Not ideal, as the keys could still potentially be scraped from a header
Restrict access to the APIs to the Apple server network via ACLs and security groups. Again, not something that seems achievable since there is no network CIDR that Apple provides (that I know of)
Set up some sort of SSH tunnel
I've hit a wall on this and would really appreciate anyone's thoughts if they've had a similar issue.
Thanks!
In Google CDP you can have another type of ACL which monitors the client URL: if requests don't come from your.frontend.app, they are denied. Check whether AWS offers something similar.
I'd also recommend thinking about the following, if possible in your project:
1.) CSRF strategy. Issue tokens to clients, which must then be provided on every request to the API (see the sketch after this list).
2.) Access limiter. Maintain a fingerprint or session for your clients and count/limit requests as you need, e.g. if a request didn't first pass through an index page, no API call is possible because the client never collected a token.
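A minimal sketch of that token hand-off, assuming a Node/Express backend (the routes, header name, and in-memory store are hypothetical and purely for illustration):

```javascript
// Hypothetical Express sketch: the index route hands out a short-lived token,
// and API routes reject requests that don't carry a valid one.
const express = require('express');
const crypto = require('crypto');

const app = express();
const issuedTokens = new Set(); // in-memory for illustration; use a real store in practice

// Client loads the app shell first and receives a token.
app.get('/index', (req, res) => {
  const token = crypto.randomBytes(32).toString('hex');
  issuedTokens.add(token);
  setTimeout(() => issuedTokens.delete(token), 15 * 60 * 1000); // expire after 15 min
  res.json({ apiToken: token });
});

// API routes require the token; direct or scripted calls without it are denied.
app.use('/api', (req, res, next) => {
  const token = req.get('X-Api-Token');
  if (!token || !issuedTokens.has(token)) {
    return res.status(403).json({ error: 'missing or invalid token' });
  }
  next();
});

app.get('/api/data', (req, res) => res.json({ ok: true }));

app.listen(3000);
```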

Discover Web Service in local private network with javascript

I searched for a few options on my issue but couldn't find any useful information unfortunately.
Here is my issue:
Suppose I have one computer that runs a REST service on a specific port, let's say 5555, inside a private network.
Now I have a front-end/browser application (JavaScript) that could be opened on a mobile phone or computer. When a device is connected to the same network (say, wireless) and opens the front-end application, it should somehow discover the REST service running on the other computer, but I can't find a solution to that challenge.
I can't find the service's IPv4 address on the network since the WebRTC workaround got shut down. I would have to traverse all possible private IP ranges to find the running service, which seems like overkill.
Anyone got any idea how to solve this challenge?
Most web apps actually use the port-scan approach, which you are trying to avoid. I could think of some other approaches:
Have the service also publish an mDNS service under a specific name, e.g. foo.local. Your web app can then simply ship with a static configuration using that hostname (a rough sketch follows below). This will, however, require you to be able to control the service, and your network/host needs to be capable of using mDNS.
Require the admin of the service to register the local IP address in a public DNS server. This will require manual config of the URL in the web app, but you at least avoid having to discover the address.
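Here is what the static-hostname approach could look like on the front end, assuming the service advertises itself via mDNS as foo.local and serves its REST API on port 5555 (the hostname and the /status endpoint are assumptions for illustration):

```javascript
// Sketch assuming the backend advertises itself via mDNS as 'foo.local'
// and serves its REST API on port 5555. The browser never scans the
// network; it just resolves the well-known hostname like any other.
const SERVICE_BASE_URL = 'http://foo.local:5555';

async function fetchFromLocalService(path) {
  try {
    const response = await fetch(`${SERVICE_BASE_URL}${path}`);
    if (!response.ok) {
      throw new Error(`Service responded with ${response.status}`);
    }
    return await response.json();
  } catch (err) {
    // Host not resolvable or unreachable: mDNS not supported on this
    // network/device, or the service is down.
    console.error('Could not reach local service:', err);
    return null;
  }
}

// Usage (hypothetical endpoint):
fetchFromLocalService('/status').then((data) => console.log(data));
```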
What you are describing is essentially a network scan, which is a security issue if a browser can do it, though it is usually possible in home networks. I would add a DNS server to that local network and use a local domain name to access the service. I don't know of any other standard way to advertise where the service is.

How Do I Make a Service Worker Compatible with Wildcard Subdomains?

I have a fairly simple question. I have a manifest.json file for my new service worker that lists "start_url" as "https://example.com" and the scope is "/". That works great unless the URL has a subdomain. In that case I get several errors saying that the manifest start url is not valid, it has been ignored, and that the manifest has no matching service worker.
The service worker still works but I would like to eliminate these errors. I use wildcard subdomains for all listings categorized by geographic location (ex: https://city-state.example.com). That lets me feather out the categories on the other side of the domain name (ex: https://city-state.example.com/category/subcategory). Is there a way to use something like https://(*).example.com for the start url or scope to avoid this error?
A service worker is scoped to a single origin, and no higher in the file path than the directory it is served from.
These rules exist to provide security and to prevent third-party scripts from attaching service workers to invade your site.
You will have to replicate your service worker on each origin. But honestly, unless the application is exactly the same you will want to customize the service worker logic to the specific application.
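If the application really is identical across subdomains, one way to get rid of the manifest errors is to generate manifest.json per request so start_url always matches the origin that served it. A rough sketch, assuming an Express server sits in front of all the subdomains (the field values are illustrative):

```javascript
// Hypothetical Express sketch: build manifest.json per request so that
// start_url and scope always match whichever subdomain served the page
// (e.g. https://city-state.example.com gets its own valid manifest).
const express = require('express');
const app = express();

app.get('/manifest.json', (req, res) => {
  const origin = `https://${req.get('host')}`;
  res.json({
    name: 'Example Listings',   // illustrative values
    short_name: 'Listings',
    start_url: `${origin}/`,
    scope: '/',
    display: 'standalone',
  });
});

app.listen(8080); // in practice, behind your existing TLS-terminating proxy
```

If the manifest is a static file, a simpler alternative may be to make start_url relative (e.g. "/"), since it is resolved against the URL the manifest was fetched from, which would make it valid on every subdomain.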

How to register a service worker on localhost subdomain?

I have a domain at https://example.co.uk, and one at https://chat.example.co.uk, both using service workers in production. When developing on localhost, I can use a service worker on the https://example.co.uk domain (on localhost:1337), but not when using the chat subdomain (on chat.localhost:1337), or any other subdomains.
This is not an issue on the live version, but it makes development quite difficult when working on the service workers' code.
Am I missing something important, or is there something I can do to allow the service worker to register anyway?
I tried turning the #allow-insecure-localhost Chrome flag on, but I don't think that was the problem.
As you've observed, only http://localhost is whitelisted by Chrome for features that require secure contexts, not subdomains of localhost, which I don't think work without jumping through some hoops in your local domain resolution setup.
To facilitate local testing, I'd recommend instructing your web server to listen to requests on localhost on two different ports, e.g. 1337 and 1338. That would probably be the easiest way to simulate the two different origins that you have in production, with their isolated security contexts.
It might obviously require some additional effort to refactor your configuration so that, instead of communicating with a subdomain in your dev environment, you communicate with the same localhost domain on a different port.
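A minimal sketch of that setup with a Node/Express dev server (the ports and static directories are assumptions):

```javascript
// Dev-only sketch: serve the main app and the "chat" app from the same
// localhost hostname on two different ports, which gives two distinct
// origins (http://localhost:1337 and http://localhost:1338) where
// service workers can register. Ports and build paths are assumptions.
const express = require('express');

const mainApp = express();
mainApp.use(express.static('dist/main'));
mainApp.listen(1337, () => console.log('main app on http://localhost:1337'));

const chatApp = express();
chatApp.use(express.static('dist/chat'));
chatApp.listen(1338, () => console.log('chat app on http://localhost:1338'));
```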

Securing an API for use with Javascript widget

I'm writing a javascript plugin which will be installed by bloggers/website owners. It will communicate with my remote API.
I'm wondering how to secure the API to ensure that only domains owned by users who have registered an account with the service can access resources from the API. I've read up on OAuth2 and understand the basics, but because the plugin will run from within the browser and not server to server, I'm not sure how secure this can be.
Tons of services like Mixpanel, Google Analytics, and Olark use the same concept (i.e. website owners install a line of JS on their site), so it must be a solved problem.
You can insert window.location checks into your script to prevent other people from including it directly off of your servers.
However, it is impossible to prevent people from downloading the scripts locally, removing your protection, then hosting it themselves.
You can require an API key in all server-side requests, but enemies can easily steal API keys from legitimate sites.
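For what the window.location check mentioned above might look like, here is a rough sketch (the allowed-host list is hypothetical):

```javascript
// Sketch of a client-side origin check embedded in the widget script.
// This only deters casual hotlinking; anyone can strip it out of a
// self-hosted copy, so real enforcement still belongs on the API side.
(function () {
  const ALLOWED_HOSTS = ['customer-blog.example.com', 'another-customer.example.org']; // hypothetical

  const host = window.location.hostname;
  const allowed = ALLOWED_HOSTS.some(
    (h) => host === h || host.endsWith(`.${h}`)
  );

  if (!allowed) {
    console.warn('Widget loaded from an unregistered domain:', host);
    return; // refuse to initialize
  }

  // ...initialize the widget and talk to the remote API here...
})();
```

Any real gatekeeping still has to happen server-side (for example, checking the request's Origin header against the domains registered to an account), since the client-side check above is trivially removable.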
