Precaching with Stale-While-Revalidate Strategy in Workbox

Question
Is it possible to precache a file using a different strategy? i.e. Stale While Revalidate?
Or, should I just load the script in the DOM and then add a route for it in the worker with the correct strategy?
Background
This is quite a weird case so I will try to explain it as best I can...
We have two repos: the PWA and the Games
Both are statically hosted on the same CDN
Due to the Games repo being separate, the PWA has no access to the versioning of the game js bundles
Therefore, the solution I have come up with is to generate an unversioned manifest (game-manifest.js) in the Games build
The PWA will then precache this file, loop through its contents, and append each entry to the existing precache manifest
However, given the game-manifest.js has no revision and is not hashed, we need to apply either a Network First, or Stale While Revalidate strategy in order for the file to be updated when new versions become available
See the following code as a clearer example of what I am trying to do:
import { precacheAndRoute } from 'workbox-precaching';

// Load the game manifest
// THIS FILE NEEDS TO BE PRECACHED, but under the strategy
// of stale-while-revalidate, or network-first.
importScripts('https://example.cdn.com/games/js/game-manifest.js');

// Something like...
self.__gameManifest.forEach(entry => {
  self.__precacheManifest.push({
    url: entry
  });
});

// Load the assets to be precached
precacheAndRoute(self.__precacheManifest);

Generally speaking, it's not possible to swap in an alternative strategy when using workbox-precaching. It's always going to be cache-first, with the versioning info in the precache manifest controlling how updates take place.
There's a larger discussion of the issue at https://github.com/GoogleChrome/workbox/issues/1767
The recommended course of action is to explicitly set up runtime caching routes using the strategy that you'd prefer, and potentially "prime" the cache by adding entries to it in advance during the install step.
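For instance, a runtime stale-while-revalidate route combined with a cache primed during the install step might look like the following sketch (Workbox v5-style module imports assumed; the manifest URL comes from the question above and the cache name is a placeholder):

import { registerRoute } from 'workbox-routing';
import { StaleWhileRevalidate } from 'workbox-strategies';

const GAME_CACHE = 'game-manifest';
const MANIFEST_URL = 'https://example.cdn.com/games/js/game-manifest.js';

// "Prime" the cache during install so the manifest is available
// even before the first runtime fetch.
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(GAME_CACHE).then((cache) => cache.add(MANIFEST_URL))
  );
});

// Serve the manifest stale-while-revalidate at runtime, so an updated
// copy is fetched in the background each time it's used.
registerRoute(
  ({ url }) => url.pathname.endsWith('/game-manifest.js'),
  new StaleWhileRevalidate({ cacheName: GAME_CACHE })
);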

Related

Merging Images Using Javascript/React

I am creating a website in which each user has an "avatar". An avatar has different accessories like hats, facial expressions, etc. I made this previously on a PHP website, but I am using React to create this new website. I am loading each user's avatar and its item links from Firestore. I do not want to use absolute positioning or CSS; I want the avatar to be a single image.
Example of what I am trying to achieve:
I found this library: https://github.com/lukechilds/merge-images, which seems to be exactly what I need, but I cannot load external images without getting an error.
Any solutions to this error or suggestions for an alternative would be greatly appreciated.
My code:
render() {
  mergeImages([
    'http://example.com/images/Avatar.png',
    'http://example.com/images/Hat.png',
  ])
    .then((b64) => {
      document.querySelector('img.abc').src = b64;
    })
    .catch(error => console.log(error));

  return (
    ...
    <img className="abc" src='' width={100} height={200} alt="avatar"/>
    ...
  );
}
The merge-images package has some quirks. One of those quirks is that it expects individual images to either be served from your local server (example: http://localhost:3000/images/head.png, http://localhost:3000/images/eyes.png, and http://localhost:3000/images/mouth.png) or that those individual images be imported into a single file.
Working example: https://github.com/mattcarlotta/merge-images-example (this example includes the first three options explained below, with the fourth option utilizing the end result of a third-party CDN)
To run the example, clone the repo:
git clone https://github.com/mattcarlotta/merge-images-example
Change directory:
cd merge-images-example
Then install dependencies:
yarn install
Then run the development server:
yarn dev
Option 1:
The simplest implementation would be to import them into an AvatarFromFiles component. However, as written, it isn't reusable and isn't suitable for dynamically selected avatars.
Option 2:
You may want to serve them from the local server, like the AvatarFromLocalServer component with a Webpack dev config. Then you would retrieve stored strings from an API and pass them down from state into the component. Once again, this still requires the images to be present in the images folder, but more importantly, it isn't ideal for a production environment because the images folder must be placed outside of the src folder to be served. This could also lead to security issues. Therefore, I don't recommend this option at all.
Option 3:
Same as Option 1, but lazy-loaded like the AvatarFromLazyFiles component, and therefore flexible. You can load images by name; however, it still requires that all of the images be present at runtime and during production compilation. In other words, what you have on hand is what you get.
Option 4:
So... the ideal option would be to build an image microservice or use a CDN that handles all things images (uploading, manipulating/merging, and serving images). The client would only select/upload new images to this microservice/CDN, while the microservice/CDN handles everything else. This may require a bit more work, but it offers the most flexibility, is easy to implement, and gives the best performance, as it offloads all the work from the client to the dedicated service.
In conclusion: if you plan on having a set amount of images, use option 3; otherwise, use option 4.
Problem
This is a CORS issue. The images are coming from a different origin that's not your server.
If you look at the source of the library, you'll notice it uses a <canvas> under the hood to merge the images and then reads back the resulting data. A canvas cannot read back images loaded from another domain, and there's good reasoning behind this: loading an image into a canvas is a way to fetch data, and since the canvas contents can be retrieved as base64, a malicious page could steal information by first loading it into a <canvas> and then pulling it out.
You can read about it directly from the spec for the <canvas> element.
Solution
You need to serve the images either from the same origin (essentially, the same domain) or include Access-Control-Allow-Origin: ... in the HTTP headers that serve the images. There are ways to do this in Firebase Storage, or in whatever other server solution you might use.
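If the image host does send those CORS headers, the browser also has to make a CORS request so the canvas isn't tainted; merge-images exposes a crossOrigin option for this (check the library's README for your version). A minimal sketch, with placeholder CDN URLs:

mergeImages(
  [
    'https://cdn.example.com/images/Avatar.png',
    'https://cdn.example.com/images/Hat.png',
  ],
  { crossOrigin: 'anonymous' } // sets crossOrigin on the underlying Image objects
).then((b64) => {
  document.querySelector('img.abc').src = b64;
});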

Handle "Loading chunk failed" errors with Lazy-loading/Code-splitting

We are developing a Vue.js application based on Vue CLI 3 with Vue Router and Webpack. The routes are lazy-loaded and the chunk file names contain a hash for cache busting. In general, everything is working fine.
However, there is a problem during the deployment. Steps to reproduce are the following.
User opens the application (let's assume route "/"), thus the main chunk file is loaded.
We change something in the application and deploy a new version.
Old chunk files are removed
New chunk files are being added (i.e. hashes in the chunk file names change)
User clicks a link to another route (e.g. "/foo")
An error occurs as the application tries to load a chunk file that has been renamed: Error: "Loading CSS chunk foo failed. (/assets/css/foo.abc123.css)" (this might be CSS or JavaScript)
What is the best way to avoid errors like this?
One approach that should work is just to retain old chunk files and delete them at a later time. That, however, complicates the deployment of new versions as you need to keep track of old versions and always also deploy the old chunk files with the new version.
Another (naive) approach is to just reload as soon as such an error is detected (e.g. Vue Lazy Routes & loading chunk failed). It somewhat works, but it reloads the old route, not the new one. At least it ensures that consecutive route changes work again.
Any other ideas? Maybe there is something in webpack that could fix this?
Do NOT cache the entry file (usually index.html).
We add:
expires 0;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate';
in our nginx server config.
Then, after you have refreshed the client's code, you can use vue-router's error hook to detect the failure and handle it properly.
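A minimal sketch of that hook, assuming router is your vue-router instance (the regex matches webpack's chunk-loading error messages):

router.onError((error) => {
  if (/Loading (CSS )?chunk .+ failed/i.test(error.message)) {
    // Guard against an endless reload loop if the chunk keeps failing;
    // clear the flag again after a successful navigation.
    if (!sessionStorage.getItem('chunk-failed-reloaded')) {
      sessionStorage.setItem('chunk-failed-reloaded', '1');
      window.location.reload();
    }
  }
});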
As long as you have a versioned API, you can keep using the old app files (just leave them on the server and delete them after a few days).
You will get problems as soon as your API changes during deployments.
I assume you deploy a new API each time you deploy new JS code.
Then you can:
Pass the API version (simply use the git hash) to the application as a header with every response (JS resources, CSS, API requests, 404 responses)
Store the API version in your main JS entry point (or make it accessible somehow, e.g. as generated constant)
On each server response, check if the Server version matches your main client version.
If it does not: display a prominent warning to the user (like the cookie banners) that they should reload the page (this allows the user to save changes, in the hope that the API did not change for that save button).
For async components, we display normal 'not found' messages if loading fails, together with a reload button that appears instead of the component. Reloading without user interaction will cause a lot of confusion.
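For illustration, the client-side version check from the list above could be a small fetch wrapper. APP_VERSION, the X-App-Version header name, and showReloadBanner are all hypothetical names, not a standard:

const APP_VERSION = process.env.VUE_APP_VERSION; // e.g. a git hash injected at build time

async function fetchWithVersionCheck(url, options) {
  const response = await fetch(url, options);
  const serverVersion = response.headers.get('X-App-Version'); // assumed header name
  if (serverVersion && serverVersion !== APP_VERSION) {
    showReloadBanner(); // your UI code: prominent "please reload" warning
  }
  return response;
}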

How to cache signed files (Service worker)

Several front-end applications append a signature to their files to control caching; each time a file changes, that signature changes.
How do I get the service worker to handle these signatures and the cache?
Signature Examples:
sw-d58e3582afa99040e27b92b13c8f2280.js
sw.js?_gc=20180101
I'm working on an application that is already finished, and I'm trying to implement a service worker so that certain features become available offline.
For example, in this section I need to declare what will be cached; however, the application changes the files' signatures to control the cache. (Today it looks like this:)
caches.open('my-cache').then(function(cache) {
  return cache.addAll([
    '/index.html',
    '/styles.css',
    '/main.js'
  ]);
});
The application is always changing "styles.css" to "styles.css?v=1527624807103_1" (a timestamp). As far as I understand, "styles.css" is not the same as "styles.css?v=1527624807103_1".
You should not version or add a signature parameter to the URL used for your actual, top-level service worker file. As explained in "The Service Worker Lifecycle":
It can land you in a problem like this:
index.html registers sw-v1.js as a service worker.
sw-v1.js caches and serves index.html so it works offline-first.
You update index.html so it registers your new and shiny sw-v2.js.
If you do the above, the user never gets sw-v2.js, because sw-v1.js is serving the old version of index.html from its cache. You've put yourself in a position where you need to update your service worker in order to update your service worker.
As for the actual URLs that are included inside of your service worker file, which may legitimately contain versioning/hashes/signatures, your best bet is to use a tool that integrates with your build process and will generate the list of URLs based on the latest versions of each file that should be precached.
Workbox is one such tool, and there are others, like sw-precache (a precursor to Workbox), and offline-plugin.
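As a sketch of what that integration looks like with workbox-build (directory names are placeholders), the build step globs your output folder and writes a service worker whose precache manifest contains a revision hash per file:

const { generateSW } = require('workbox-build');

generateSW({
  globDirectory: 'dist/',
  globPatterns: ['**/*.{html,css,js,png}'],
  swDest: 'dist/sw.js',
}).then(({ count, size }) => {
  console.log(`Precached ${count} files, ${size} bytes total.`);
});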

Does sw-precache activation of new service worker guarantees cache busting?

I am using sw-precache along with sw-toolbox to allow offline browsing of cached pages of an Angular app.
The app is served through a node express server.
One problem we ran into is that the index.html sometimes doesn't seem to be updated in the cache, although other assets have been updated on activation of the new service worker.
This leaves users with an outdated index.html that is trying to load a no-longer-existing versioned asset, in this case /scripts/a387fbeb.modules.js.
I am not entirely sure what's happening, because cached copies of index.html on different browsers have the same hash, yet only some of them have been correctly updated.
On one browser, the outdated (problematic) index.html (cached with the hash 2cdd5371d1201f857054a716570c1564) includes:
<script src="scripts/a387fbeb.modules.js"></script>
in its content. (This file no longer exists in the cache or on the remote server.)
On another browser, the updated (good) index.html (cached with the same hash 2cdd5371d1201f857054a716570c1564) includes:
<script src="scripts/cec2b711.modules.js"></script>
These two have the same cache hash, although the content returned to the browsers is different!
What should I make of this? Does this mean that sw-precache doesn't guarantee atomic cache busting when new SW activates? How can one protect from this?
If it helps, this is the generated service-worker.js file from sw-precache.
Note: I realize I can use a networkFirst strategy (at least for index.html) to avoid this. But I'd still like to understand this and figure out a way to use a cacheFirst strategy to get the best performance.
Note 2: I saw in other related questions that one can change the name of the cache to force-bust all the old cache. But this seems to defeat the idea of sw-precache busting only updated content? Is this the way to go?
Note 3: Even if I hard-reload the browser where the website is broken, the site works only because the hard reload skips the service worker cache; the cache itself is still wrong. The service worker doesn't seem to re-activate, my guess being that this specific SW has already been activated but failed to bust the cache correctly. Subsequent non-hard-refresh visits still see the broken index.html.
(The answers here are specific to the sw-precache library. The details don't apply to service workers in general, but the concepts about cache maintenance may still apply to a wider audience.)
If the content of index.html is dynamically generated by a server and depends on other resources that are either inlined or referenced via <script> or <link> tags, then you need to specify those dependencies via the dynamicUrlToDependencies option. Here's an example from the app-shell-demo that ships as part of the library:
dynamicUrlToDependencies: {
  '/shell': [
    ...glob.sync(`${BUILD_DIR}/rev/js/**/*.js`),
    ...glob.sync(`${BUILD_DIR}/rev/styles/all*.css`),
    `${SRC_DIR}/views/index.handlebars`
  ]
}
(/shell is used there instead of /index.html, since that's the URL used for accessing the cached App Shell.)
This configuration tells sw-precache that any time any of the local files that match those patterns change, the cache entry for the dynamic page should be updated.
If your index.html isn't being generated dynamically by the server, but instead is updated during build time using something like this approach, then it's important to make sure that the step in your build process that runs sw-precache happens after all the other modifications and replacements have taken place. This means using something like run-sequence to ensure that the service worker generation isn't run in parallel with other tasks.
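A sketch of that ordering with gulp 3 and run-sequence (the task names are placeholders for your own build steps):

const gulp = require('gulp');
const runSequence = require('run-sequence');

gulp.task('build-and-sw', (callback) => {
  // generate-service-worker runs only after build and rev-assets finish,
  // so sw-precache sees the final, rewritten files.
  runSequence('build', 'rev-assets', 'generate-service-worker', callback);
});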
If the above information doesn't help you, feel free to file a bug with more details, including your site's URL.

javascript (DOJO) file caching - Client side

I want to implement caching of the JavaScript files (Dojo Toolkit) which are not going to change. Currently my home page takes about 15-17 seconds to load, and upon refresh it takes 5-6 seconds. Is there a way to reuse the cached files when the page is loaded in a new browser session? I do not want the browser to make requests to the server on load of the application home page in a new browser session. Also, is there an option to set the expiry to a certain number of days? I tried with the META tag and it isn't helping much; either I'm doing something wrong or I'm not implementing it correctly.
I have implemented the Dojo compression toolkit and see a slight, but not significant, improvement in performance.
Usually your browser should do that already. Please check that caching is really turned on and not limited to the session.
However, with a custom Dojo build, whose app profile defines layers, the build puts all your code together and bundles it with dojo.js (the files are still available independently). The result is just one HTTP request for all of the code (a larger file, but fetched only once). The speed gained from reduced HTTP requests is much more than a cache could ever provide.
For details refer to the tutorial: http://dojotoolkit.org/documentation/tutorials/1.8/build/
Caching is done by the browser, whose behaviour is influenced by the Cache-Control HTTP header (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html). Normally, the browser asks whether a newer version of a resource is available, so you still get one short request for each resource.
From my experience, even with very aggressive caching, where the browser is instructed not to ask the server for new versions of resources for a given period of time, checking the browser cache for such an immense number of resources is itself a costly process.
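For reference, such an expiry is set on the server rather than in the page. A minimal sketch with an Express static server (the paths and duration are assumptions, not part of the original setup):

const express = require('express');
const app = express();

// Let the browser reuse the Dojo build output for 30 days
// without revalidating against the server.
app.use('/js', express.static('public/js', { maxAge: '30d' }));

app.listen(3000);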
The real solution is custom builds. You have written something about "dojo compression", so I assume you're acquainted with the Dojo build profiles. They are rather poorly documented, but once you're successful with one, you should end up with some big file(s) containing layer(s), in the following format:
require({cache:{
  "name/of/dojo/resource": function() { ... here comes the content of the JS file ... },
  ...
}});
It's a multi-definition file that inlines all the module definitions within a single layer. So, loading such a file loads many modules in a single request. But you must load the layer explicitly.
In order to get the layers to run, I had to add an extra require to each of my entry JS files (those that are referenced in the headers of the HTML file):
require(["dojo/domReady!"], function(){
// load the layers
require(['dojo/dojo-qm',/*'qmrzsv/qm'*/], function(){
// here is your normal dojo code, all modules will be loaded from the memory cache
require(["dojo", "dojo/on", "dojo/dom-attr", "dojo/dom-class"...],
function(dojo,on,domAttr,domClass...){
....
})
})
})
It has significantly improved the performance. The bottleneck was loading a large number of little JavaScript modules, not parsing them. Loading and parsing all modules at once is much cheaper than loading hundreds of them on demand.
