I have been toying around with service workers and sw-toolbox. Both are great approaches, but each seems to have its weaknesses.
My project started out using Google's method of writing service workers (link). The way I see it, you have to manually update the version number for cache busting. I could be wrong, but I also don't think the pages that the user has visited get cached.
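Roughly the kind of hand-written service worker I mean (the cache name and file list are just placeholders):
const CACHE_NAME = 'static-cache-v1'; // has to be bumped by hand on every release

self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open(CACHE_NAME).then(function (cache) {
      // only this fixed list ever gets cached; visited pages are not added
      return cache.addAll(['/', '/index.html', '/app.js']);
    })
  );
});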
Compared to the sw-toolbox method, all I need to add is the following code:
self.toolbox.router.default = self.toolbox.networkFirst;

self.toolbox.router.get('/(.*)', function (req, vals, opts) {
  return self.toolbox.networkFirst(req, vals, opts)
    .catch(function (error) {
      if (req.method === 'GET' && req.headers.get('accept').includes('text/html')) {
        return self.toolbox.cacheOnly(new Request(OFFLINE_URL), vals, opts);
      }
      throw error;
    });
});
That solves the problem of caching visited pages. Here is my issue: after applying sw-toolbox to my project, the old service worker doesn't get cleared or replaced by the new one unless I go to dev tools and clear it.
Any ideas how to get around this?
Here is my issue: after applying sw-toolbox to my project, the old service worker doesn't get cleared or replaced by the new one unless I go to dev tools and clear it.
The browser checks for updates to the service worker file whenever it navigates to a page in the service worker's scope (and on certain functional events, such as push and sync). If there is a byte difference between the service worker files, the browser will install the new service worker. You only need to update the service worker manually in dev tools because the app is still running, and the browser does not want to activate a new service worker while the old one is still in use.
If you close all pages associated with the service worker (like a user would when leaving your app), the browser will be able to activate the new service worker the next time your page is opened.
If you want to force the new service worker to take over, you can add self.skipWaiting(); to the install event. Here is some documentation with an example.
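A minimal sketch of what that looks like:
self.addEventListener('install', event => {
  // Activate this service worker as soon as it finishes installing,
  // instead of waiting for all tabs using the old worker to close.
  self.skipWaiting();
});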
You can learn just about everything you need to know about the service worker's life cycle from this post by Jake Archibald.
As far as caching & cache management goes, tools like sw-toolbox will handle cache busting for you. And actually, Workbox is a new tool that is meant to replace sw-toolbox & sw-precache. It will also handle cache busting and cache management (by comparing file hashes & setting/tracking resource expiration dates).
Generally speaking, you should always use a tool like Workbox to write your service workers. Writing them by hand is error prone and you are likely to miss corner cases.
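For instance, Workbox can generate a precache manifest with revision hashes for you; here is a rough sketch (the exact setup and API surface depend on the Workbox version, and the URLs and revisions below are placeholders):
importScripts('https://storage.googleapis.com/workbox-cdn/releases/4.3.1/workbox-sw.js');

// In a real project this list is generated by a build tool such as workbox-cli.
workbox.precaching.precacheAndRoute([
  { url: '/index.html', revision: 'abc123' },
  { url: '/app.js', revision: 'def456' }
]);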
Hope that helps.
P.S. If you end up not using skipWaiting and instead only updating when the page is closed & re-opened by a user, you can still enable automatic updating for development. In Chrome's dev tools, Application > Service Workers has an Update on reload option to automatically update the service worker.
I don't know whether sw-toolbox has cache busting built in. Typically, when you change the service worker and need to purge the previous version's cache, you should do that in the activate event handler.
The best practice here is to name your caches with the sw version number included. Here is some example code from an online course I have on service worker caching that might get you started:
self.addEventListener("activate", event => {
console.log("service worker activated");
//on activate
event.waitUntil(caches.keys()
.then(function (cacheNames) {
cacheNames.forEach(function (value) {
if (value.indexOf(config.version) < 0) {
caches.delete(value);
}
});
return;
})
);
});
Related
I have a service worker. Here's the install event:
self.addEventListener('install', function (event) {
  console.log('Installing Service Worker ...', event);
  return self.skipWaiting()
    .then(() => caches.open(CACHE_STATIC_NAME))
    .then(function (cache) {
      return cache.addAll([
        './file1.html',
        './file2.html'
      ]);
    });
});
For some reason, when I edit the service worker code and update the query parameter in the service worker file URL, it installs but does not activate (according to Chrome DevTools) — even though I've called self.skipWaiting().
Oddly, if I go into the console, switch to the service worker's scope, and call self.skipWaiting() myself, it activates immediately.
I've been trying to work out what's going on for many hours now, and I'm completely stumped. Is there something I'm missing here?
The old SW might not stop while it still has tasks running, for example a long-running fetch request (e.g. server-sent events / EventSource, or fetch streams; I don't think WebSockets can cause this, though, as the SW should ignore them).
I find the behaviour differs between browsers, however. Chrome seems to wait while the existing task is running (so skipWaiting will appear to fail), but Safari seems to kill the task and activate the new SW.
A good way to test whether this is causing your issue would be to kill your server just after you request skipWaiting (to kill the network connections). (Just clicking "Offline" in Dev Tools doesn't seem to kill all running connections; for example, EventSources stay running.)
You can have the SW ignore certain routes (below), or you could try and force the requests to terminate (maybe using AbortController).
self.addEventListener('fetch', function (event) {
  const { method, url } = event.request;
  // Let the browser handle non-GET requests and the long-polling endpoint directly
  if (method !== 'GET') return;
  if (url === 'https://example.com/poll') return;
  event.respondWith(
    caches.match(event.request).then(function (response) {
      return response || fetch(event.request);
    })
  );
});
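A rough sketch of the AbortController idea, from the page side (the /poll URL is just a placeholder):
// Keep an AbortController for the long-running request so it can be
// cancelled before the new service worker tries to take over.
const controller = new AbortController();

fetch('https://example.com/poll', { signal: controller.signal })
  .then(response => {
    // consume the long-running response / stream here
  })
  .catch(err => {
    if (err.name === 'AbortError') {
      console.log('long-running request cancelled');
    }
  });

// Later, before triggering skipWaiting:
controller.abort();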
The process for skipWaiting is in this spec:
https://w3c.github.io/ServiceWorker/#try-activate-algorithm
But I don't find it very clear on whether the browser should wait for running tasks, terminate them, or perhaps transfer them to the new SW before activating it; and as mentioned, browsers seem to behave differently at the moment.
I experienced the same issue.
I verified it using the trick suggested by @ej.daly: stop the server, and the waiting service worker becomes active within a few minutes.
After that, I applied a hack in my code: reload the window if the waiting service worker isn't activated within the next 3 seconds.
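Roughly what that hack looks like (a sketch; the 3-second delay is arbitrary, and reloading mainly helps because it tears down the page's long-running connections so the waiting worker can finally activate):
navigator.serviceWorker.register('/sw.js').then(reg => {
  if (!reg.waiting) return;
  setTimeout(() => {
    // Still stuck in the waiting state? Reload so the old worker's
    // open connections are dropped and the new one can take over.
    if (reg.waiting) {
      window.location.reload();
    }
  }, 3000);
});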
BTW, I am developing a library to make service-worker installation easy for the various (commonly used) scenarios in our company. Have a look at https://github.com/DreamworldSolutions/workbox-installer
I have a create-react-app project and I am running into an issue: when there is a new production release, users don't get the latest version unless they clear the cache or, in the best case, refresh the page after they access it.
My Cloudflare cache expiry is set to 4 hours, but users still get the old version even after this period. This leaves me thinking that it is a service worker issue.
Are there any other reasons that lead to this behaviour?
What are the possible solutions?
Is unregistering the SW considered a good solution for this issue, given that I don't need my app to run offline at the moment?
If it is a good solution what are the consequences of unregistering it?
Do I need to use cache-control headers (i.e. max-age=0) in my index.html?
I know it is a lot of questions, but I wanted to show the directions I am thinking of and the areas I am a bit confused about.
Thank you for your time and help in advance.
Adding versioning to your service worker cache is one of the ways to ensure the new service worker gets installed whenever there is a new build. Just add a script that increments the cache version with each new build; this causes a byte difference in the service worker, which is enough for the browser to trigger a new install event.
In your service worker file, add something like:
const version = 1;
let cacheV = 'foo' + version;
In your activate event, add logic to delete the old caches when there is a version mismatch.
self.addEventListener("activate", function(event) {
event.waitUntil(
caches.keys().then(function(cacheNames) {
return Promise.all(
cacheNames.map(function(cacheName) {
if (cacheV!== cacheName && cacheName.startsWith("foo")) {
return caches.delete(cacheName);
}
})
);
})
);
});
Also, you can add update logic to your fetch listener to fetch the latest file from the network, for example:
event.waitUntil(update(event.request));
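The update() helper isn't defined above; one possible shape for it (an assumption on my part: it re-fetches the resource and refreshes the entry in the versioned cache) is:
function update(request) {
  return caches.open(cacheV).then(function (cache) {
    return fetch(request).then(function (response) {
      // store a fresh copy so the next load gets the latest version
      return cache.put(request, response.clone()).then(function () {
        return response;
      });
    });
  });
}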
I have built a portal which provides access to several features, including trouble ticket functionality.
The client has asked me to make trouble ticket functionality available offline. They want to be able to "check out" specific existing tickets while online, which are then accessible (view/edit) while the user's device is out-of-range of any internet connection. Also, they want the ability to create new tickets while offline. Then, when the connection is available, they will check in the changed/newly created tickets.
I have been tinkering with Service Workers and reviewing some good documentation on them, and I feel I have a basic understanding of how to cache the data.
However, since I only want to make the Ticketing portion of the portal available offline, I don't want the service worker caching or returning cached data when any other page of the portal is being accessed. All pages are in the same directory, so the service worker, once loaded, would by default intercept all requests from all pages in the portal.
How can I set up the service worker to only respond with cached data when the Tickets page is open?
Do I have to manually check the window.location value when fetch events occur? For example,
if (window.location == 'https://www.myurl.com/tickets')
{
// try to get the request from network. If successful, cache the result.
// If not successful, try returning the request from the cache.
}
else
{
// only try the network, and don't cache the result.
}
There are many supporting files that need to be loaded for the page (e.g. CSS files, JS files, etc.), so it's not enough to simply check the request.url for the page name. Will window.location be accessible in the service worker event, and is this a reasonable way to accomplish this?
Use service worker scoping
I know that you mentioned that you currently have all pages served from the same directory... but if you have any flexibility over your web app's URL structure at all, then the cleanest approach would be to serve your ticket functionality from URLs that begin with a unique path prefix (like /tickets/) and then host your service worker from /tickets/service-worker.js. The effort to reorganize your URLs may be worthwhile if it means being able to take advantage of the default service worker scoping and just not have to worry about pages outside of /tickets/ being controlled by a service worker.
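For example (assuming the worker file is moved under /tickets/):
// Registered from a page such as /tickets/index.html. With the worker
// hosted at /tickets/service-worker.js, its default scope is /tickets/,
// so pages outside that path are never controlled by it.
navigator.serviceWorker.register('/tickets/service-worker.js')
  .then(reg => console.log('scope:', reg.scope)); // e.g. "https://www.myurl.com/tickets/"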
Infer the referrer
There's information in this answer about determining what the referring window client URL is from within your service worker's fetch handler. You can combine that with an initial check in the fetch handler to see if it's a navigation request and use that to exit early.
const TICKETS = '/tickets';

self.addEventListener('fetch', event => {
  const requestUrl = new URL(event.request.url);
  if (event.request.mode === 'navigate' && requestUrl.pathname !== TICKETS) {
    return;
  }

  const referrerUrl = ...; // See https://stackoverflow.com/questions/50045641
  if (referrerUrl.pathname !== TICKETS) {
    return;
  }

  // At this point, you know that it's either a navigation for /tickets,
  // or a request for a subresource from /tickets.
});
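For completeness, here is one way you might determine the referring client inside the fetch handler (a sketch, and not necessarily what the linked answer does; event.request.referrer can be empty, for instance under a strict Referrer-Policy):
self.addEventListener('fetch', event => {
  // event.request.referrer is the URL of the page that issued the request,
  // or '' if no referrer information is available.
  const referrer = event.request.referrer;
  const referrerUrl = referrer ? new URL(referrer) : null;
  if (!referrerUrl || referrerUrl.pathname !== '/tickets') {
    return; // not a /tickets subresource; let the browser handle it
  }
  // caching logic for /tickets subresources would go here
});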
I have a question about the service worker update process.
In my project there are 2 files related to sw:
"sw.js", placed in website root, will be NOT cached (by Cache API and Web Browser).
My service worker manages the cache of all statics files and all dynamic url pages.
Sometimes I need to update it, and the client must detect that there's an update and apply it immediately.
"sw_main.js" is the script that installs my sw. This file is cached by Cache API because my app must work offline.
Inside we can find:
var SW_VERSION = '1.2';
navigator.serviceWorker.register("sw.js?v=" + SW_VERSION, { scope: "/" }).then(....
The problem is: because sw_main.js is cached, if I change SW_VERSION and then deploy the web app, clients will not update because they cannot see the change in that file.
Which is the best way to manage the SW update process?
As far as I know, there are 3 ways to trigger a SW update:
push and sync events (but I'm not implementing these)
calling .register() only if the service worker URL has changed (but in my case that's not possible, because sw_main.js is cached, so I'm not able to change the SW URL)
navigation to an in-scope page (I think we have the same cache problem as in point 2)
I also read this: "Your service worker is considered updated if it's byte-different to the one the browser already has".
Does that mean that if I change the content of sw.js (which is not cached), the update will be detected automatically?
Thank you
I found 2 possible solutions.
First of all, I want to say that it's better to also cache sw.js (using the Cache API), because when you're offline it will be requested by sw_main.js.
FIRST SOLUTION:
Use the service worker's cache as a fallback and always attempt to go network-first via a fetch().
Do this only for sw.js and maybe sw_main.js.
You lose some of the performance gains that a cache-first strategy offers, but these JS files are very small, so I don't think it's a big problem.
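A minimal sketch of that network-first handling, limited to the two service worker files (the cache name is just a placeholder):
self.addEventListener('fetch', function (event) {
  const url = new URL(event.request.url);
  if (url.pathname !== '/sw.js' && url.pathname !== '/sw_main.js') {
    return; // other requests are handled by the rest of my service worker
  }
  event.respondWith(
    fetch(event.request)
      .then(function (response) {
        // network succeeded: refresh the cached copy for offline use
        return caches.open('sw-files').then(function (cache) {
          cache.put(event.request, response.clone());
          return response;
        });
      })
      .catch(function () {
        // offline: fall back to the previously cached copy
        return caches.match(event.request);
      })
  );
});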
SECOND SOLUTION:
What if your cached sw.js file has changed?
We can hook into the updatefound event (onupdatefound) on the service worker registration.
Even though you can cache tons of files, the Service Worker only checks the hash of your registered service-worker.js.
If that file has only 1 little change in it, it will be treated as a new version.
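A sketch of hooking that from the page (this is what I'd put in sw_main.js):
navigator.serviceWorker.register('sw.js?v=' + SW_VERSION, { scope: '/' }).then(function (registration) {
  registration.addEventListener('updatefound', function () {
    const newWorker = registration.installing;
    newWorker.addEventListener('statechange', function () {
      if (newWorker.state === 'installed' && navigator.serviceWorker.controller) {
        // A new sw.js has been installed while an old one still controls
        // the page: notify the user or reload to pick up the update.
        console.log('New service worker version available');
      }
    });
  });
});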
So this confirms my previous question! I'll try it!
If it works, the second solution is the best.
Background
I'm new to service workers, but I'm working on a library that is intended to become "offline-first" (really, almost "offline-only"). (FWIW, the intent is to allow consumers of the library to provide JSON config representing tabular multilinear texts and get in return an app which allows their users to browse these texts, in a highly customizable manner, by paragraph/verse ranges.)
Other projects will install the library as a dependency and then supply information via our JavaScript API, such as the path of a JSON config file indicating the files that our app will consume, in order to produce an (offline) app for them.
While I know we could do any of the following:
require users to provide a hard-coded path from which our service worker's install script could use waitUntil with its own JSON request to retrieve the user's necessary files
skip the service worker's install step for the JSON file and rely on fetch events to update the cache, providing a fallback display if the user completed the install and went offline before those fetches could occur
post some state info from our main script to a server which the service worker, once registered, would query before completing its install event
...but all of these choices seem less than ideal because, respectively:
Our library's consumers may prefer to be able to designate their own location for their JSON config.
Given that the JSON config designates files critical to showing their users anything useful, I'd rather not allow an install to complete only to tell the user they have to go back online to get the rest of the files, if they weren't able to remain online after the install event for all the required fetches to occur.
Besides wanting to avoid more trips to the server and extra code, I'd prefer for our code to be so offline-oriented as to be able to work entirely on mere static file servers.
Question:
Is there some way to pass a message or state information into a service worker before the install event occurs, whether as part of the query string of the service worker URL, or through a messaging event? The messaging event could even technically arrive after the install event begins as long as it can occur before a waitUntil within the install is complete.
I know I could test this myself, but I'd like to know what the best practices might be anyway when the critical app files must themselves be obtained dynamically, as in libraries such as ours.
I'm guessing IndexedDB might be the sole alternative here (i.e., saving the config info or the path of the JSON config to IndexedDB, registering a service worker, and retrieving the IndexedDB data from within the install event)? Even this would not be ideal, as I'm letting users define a namespace for their storage, and I need a way for that to be passed into the worker too; otherwise, multiple such apps on the same origin could clash.
Using a Query Parameter
If you find it useful, then yes, you can provide state during service worker installation by including a query parameter to your service worker when you register it, like so:
// Inside your main page:
const pathToJson = '/path/to/file.json';
const swUrl = '/sw.js?pathToJson=' + encodeURIComponent(pathToJson);
navigator.serviceWorker.register(swUrl);

// Inside your sw.js:
self.addEventListener('install', event => {
  const pathToJson = new URL(location).searchParams.get('pathToJson');
  event.waitUntil(
    fetch(pathToJson)
      .then(response => response.json())
      .then(jsonData => {
        // Do something with jsonData
      })
  );
});
A few things to note about this approach:
If you fetch() the JSON file in your install handler (as in the code sample), that will effectively happen once per version of your service worker script (sw.js). If the contents of the JSON file change, but everything else stays the same, the service worker won't automatically detect that and repopulate your caches.
Following from the first point, if you work around that by, e.g., including hash-based versioning in your JSON file's URL, each time you change that URL, you'll end up installing a new service worker. This isn't a bad thing, per se, but you need to keep it in mind if you have logic in your web app that listens for service worker lifecycle events.
Alternative Approaches
You also might find it easier to just add files to your caches from within the context of your main page, since browsers that support the Cache Storage API expose it via window.caches. Precaching the files within the install handler of a service worker does have the advantage of ensuring that all the files have been cached successfully before the service worker installs, though.
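For example, from the window context (the cache name and URL list are placeholders):
// In the main page, not the service worker:
if ('caches' in window) {
  window.caches.open('config-cache-v1').then(cache => {
    // addAll() resolves only once every listed resource has been fetched and stored
    return cache.addAll(['/path/to/file.json', '/texts/volume1.json']);
  });
}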
Another approach is to write the state information to IndexedDB from the window context, and then read from IndexedDB inside of your service worker's install handler.
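A minimal sketch of that IndexedDB hand-off (the database, store, and key names here are placeholders):
// In the main page: store the config path before registering the worker.
function saveConfig(pathToJson) {
  return new Promise((resolve, reject) => {
    const open = indexedDB.open('sw-config', 1);
    open.onupgradeneeded = () => open.result.createObjectStore('config');
    open.onsuccess = () => {
      const tx = open.result.transaction('config', 'readwrite');
      tx.objectStore('config').put(pathToJson, 'pathToJson');
      tx.oncomplete = () => resolve();
      tx.onerror = () => reject(tx.error);
    };
    open.onerror = () => reject(open.error);
  });
}

saveConfig('/path/to/file.json').then(() => navigator.serviceWorker.register('/sw.js'));

// In sw.js: read the path back inside the install handler.
function readConfig() {
  return new Promise((resolve, reject) => {
    const open = indexedDB.open('sw-config', 1);
    open.onsuccess = () => {
      const tx = open.result.transaction('config', 'readonly');
      const request = tx.objectStore('config').get('pathToJson');
      request.onsuccess = () => resolve(request.result);
      request.onerror = () => reject(request.error);
    };
    open.onerror = () => reject(open.error);
  });
}

self.addEventListener('install', event => {
  event.waitUntil(
    readConfig()
      .then(pathToJson => fetch(pathToJson))
      .then(response => response.json())
      .then(jsonData => {
        // Do something with jsonData
      })
  );
});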
Update 3:
And since it is not supposed to be safe to rely on globals within the worker, my messaging solution seems even less sound. I think it has to be either Jeff Posnick's solution or, in some cases, importScripts may work.
Update 2:
Although not directly related to the topic of this thread (the install event), as per a discussion starting at https://github.com/w3c/ServiceWorker/issues/659#issuecomment-384919053 , there are some issues, particularly with using this message-passing approach for the activate event. Namely, the activate event may never fail, and thus never be retried, leaving one's application in an unstable state. (A failed install at least won't apply the new service worker to old pages, whereas activate will keep fetches on hold until the event completes, which it may never do if it is left waiting for a message that never arrives; and nothing short of a new worker can correct that, since new pages won't be able to load to send the message again.)
Update:
Although I got the client from within the install script in Chrome, I wasn't able to receive the message back with navigator.serviceWorker.onmessage for some reason.
However, I was able to fully confirm the following approach in its place:
In the service worker:
self.addEventListener('install', e => {
  e.waitUntil(
    new Promise((resolve, reject) => {
      self.addEventListener('message', ({data: {myData}}) => {
        // Do something with `myData` here
        // then when ready, `resolve`
      });
    })
  );
});
In the calling script:
navigator.serviceWorker.register('sw.js').then((r) => {
  r.installing.postMessage({myData: 100});
});
@JeffPosnick's is the best answer for the simple case I described in the OP, but I thought I'd present my discovery that one can get messages into and out of a service worker script early (tested on Chrome), for example as follows:
In the service worker:
self.addEventListener('install', e => {
  e.waitUntil(self.clients.matchAll({
    includeUncontrolled: true,
    type: 'window'
  }).then((clients) => new Promise((resolve, reject) => {
    if (clients && clients.length) {
      const client = clients.pop();
      client.postMessage('send msg to main script');
      // One should presumably be able to poll to check for a
      // variable set in the SW message listener below
      // and then `resolve` when set.
      // Despite the unreliability of setting globals in SW's
      // I believe this could be safe here as the `install`
      // event is to run while the main script is still open.
    }
  })));
});

self.addEventListener('message', e => {
  console.log('SW receiving main script msg', e.data);
  e.ports[0].postMessage('sw response');
});
In the calling script:
navigator.serviceWorker.addEventListener('message', (e) => {
  console.log('msg recd in main script', e.data);
  e.source.postMessage('sending back to sw');
});

return navigator.serviceWorker.register(
  'sw.js'
).then((r) => {
  // navigator.serviceWorker.ready.then((r) => { // This had been necessary at some point in my testing (with r.active.postMessage), but not working for me atm...
  // Sending a subsequent message
  const messageChannel = new MessageChannel();
  messageChannel.port1.onmessage = (e) => {
    if (e.data.error) {
      console.log('err', e.data.error);
    } else {
      console.log('data', e.data);
    }
  };
  navigator.serviceWorker.controller.postMessage('sending to sw', [messageChannel.port2]);
  // });
});
});