We are developing a Vue.js application based on Vue CLI 3 with Vue Router and Webpack. The routes are lazy-loaded and the chunk file names contain a hash for cache busting. In general, everything is working fine.
However, there is a problem during deployment. Steps to reproduce:
1. User opens the application (let's assume route "/"), so the main chunk file is loaded.
2. We change something in the application and deploy a new version.
3. Old chunk files are removed.
4. New chunk files are added (i.e. the hashes in the chunk file names change).
5. User clicks a link to another route (e.g. "/foo").
6. An error occurs as the application tries to load a chunk file that has been renamed: Error: "Loading CSS chunk foo failed. (/assets/css/foo.abc123.css)" (this might be CSS or JavaScript).
What is the best way to avoid errors like this?
One approach that should work is just to retain old chunk files and delete them at a later time. That, however, complicates the deployment of new versions as you need to keep track of old versions and always also deploy the old chunk files with the new version.
Another (naive) approach is to just reload as soon as such an error is detected (e.g. Vue Lazy Routes & loading chunk failed). It somewhat works, but it reloads the old route, not the new one. At least it ensures that consecutive route changes work again.
Any other ideas? Maybe there is something in webpack that could fix this?
Do NOT cache the entry file (usually index.html).
We add:
expires 0;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate';
in our nginx server config.
Then, after you have deployed new client code, you can use vue-router's error hook to detect the failure and handle it appropriately. A sketch follows.
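A minimal sketch of such a hook, using router.onError from vue-router; the message-matching regex and the sessionStorage guard are my own additions, not part of the library:
// Reload once when a lazy-loaded chunk fails, guarding against a reload loop.
router.onError(function (error) {
  if (/Loading( CSS)? chunk .+ failed/i.test(error.message)) {
    if (!sessionStorage.getItem('chunk-failed-reload')) {
      sessionStorage.setItem('chunk-failed-reload', '1');
      window.location.reload();
    }
  }
});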
As long as you have a versioned API, you can keep serving the old app files (just leave them on the server and delete them after a few days).
You will get problems as soon as your API changes during deployments.
I assume you deploy a new API each time you deploy new JS code.
Then you can (see the sketch after this list):
Pass the API version (simply use the git hash) to the application as a header with every response (JS resources, CSS, API requests, 404 responses)
Store the API version in your main JS entry point (or make it accessible somehow, e.g. as a generated constant)
On each server response, check whether the server version matches your main client version.
If it does not: display a prominent warning to the user (like the cookie banners) telling them to reload the page (this lets the user save their changes first, in the hope that the API for that save action did not change).
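A minimal sketch of that check, assuming the server sends its version in an X-Api-Version header; the header name, the CLIENT_VERSION constant, and showReloadBanner are all hypothetical:
// CLIENT_VERSION is assumed to be substituted at build time by the bundler
// (e.g. with the git hash).
const CLIENT_VERSION = process.env.CLIENT_VERSION;

async function apiFetch(url, options) {
  const response = await fetch(url, options);
  // Compare the server's version header against the client build.
  const serverVersion = response.headers.get('X-Api-Version');
  if (serverVersion && serverVersion !== CLIENT_VERSION) {
    showReloadBanner(); // hypothetical: shows a "please reload" banner
  }
  return response;
}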
For async components, we display a normal 'not found' message if loading fails, together with a reload button that appears in place of the component (a sketch follows). Reloading without user interaction would cause a lot of confusion.
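A minimal sketch using Vue 2's advanced async component syntax; ErrorWithReloadButton is a hypothetical component that renders the message and the reload button:
const Foo = () => ({
  // The chunk to load lazily.
  component: import('./Foo.vue'),
  // Rendered in place of the component if loading fails.
  error: ErrorWithReloadButton,
  // Treat loading as failed after 10 seconds.
  timeout: 10000
});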
So the obvious answer is that it's necessary because it serves routed paths from the server, so that we don't get 404s.
However, solutions like angular-cli-ghpages solve this by adding a script to the app that parses parameters returned in a 404 page and then reroutes the app to the correct state.
So, just curious: are there any drawbacks to this, and why would it not be used in general instead of solutions like Angular Universal or Rendertron?
For example this is what spa-github-pages says:
A quick SEO note - while it's never good to have a 404 response, it appears based on Search Engine Land's testing that Google's crawler will treat the JavaScript window.location redirect in the 404.html file the same as a 301 redirect for its indexing. From my testing I can confirm that Google will index all pages without issue, the only caveat is that the redirect query is what Google indexes as the url. For example, the url example.tld/about will get indexed as example.tld/?p=/about. When the user clicks on the search result, the url will change back to example.tld/about once the site loads.
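For reference, a minimal sketch of the client-side half of that technique, assuming the 404 page encodes the original path as a p query parameter as in the quoted example:
// In index.html: restore the real URL that the 404 page encoded
// as a query parameter, before the client-side router initializes.
(function () {
  const redirect = new URLSearchParams(window.location.search).get('p');
  if (redirect) {
    window.history.replaceState(null, '', redirect);
  }
})();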
Because of two main things:
First page load speed;
SEO
Robots do not run JavaScript, so they parse what they get from the server, and that is where Universal comes in.
Even with an app built with --aot and served by gh-pages, with a 404 page that is a clone of the index, the client/robot still needs to fetch the first files, parse them, and finally mount the final view. gh-pages does not serve the final HTML state.
I have recently been working on an Electron application which requires storing data in a JavaScript file that gets decrypted and displayed when the user logs in, and encrypted when the user logs out. While logged in, the user has the option to add data to the JavaScript file. Unfortunately, when this process is complete, the new data is not displayed, despite using the exact same code for the re-display as for the initial display. I am confident that this is because the JavaScript file needs to be reloaded (file changes are not registered by Electron). I have tried the electron-reload module, but it seems to only allow for live reloads. I need a module or solution which allows me to do something like this:
var reload = require('some-reload-module');
reload.reload('../path/to/file.js');
...
Any solutions would be welcome as I have so far had no luck. Thank you in advance!
This happens because require caches its results in require.cache. To get around this, you can simply delete the entry from the cache.
// Initially require the file; the result is cached.
require('../path/to/file.js');
// Delete the cached version of the module.
delete require.cache[require.resolve('../path/to/file.js')];
// Re-require the file; the file is re-executed and the new result is cached.
require('../path/to/file.js');
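Wrapped up as a small helper (a sketch; the function name is my own):
// Re-require a module, bypassing the require cache.
function requireUncached(modulePath) {
  delete require.cache[require.resolve(modulePath)];
  return require(modulePath);
}

var data = requireUncached('../path/to/file.js');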
I am using sw-precache along with sw-toolbox to allow offline browsing of cached pages of an Angular app.
The app is served through a node express server.
One problem we ran into is that index.html sometimes doesn't seem to be updated in the cache, although other assets have been updated on activation of the new service worker.
This leaves users with an outdated index.html that tries to load a versioned asset that no longer exists, in this case /scripts/a387fbeb.modules.js.
I am not entirely sure what's happening, because the outdated copy and the correctly updated copies of index.html on different browsers appear under the same hash.
On one browser, the outdated (problematic) index.html (cached with hash 2cdd5371d1201f857054a716570c1564) includes:
<script src="scripts/a387fbeb.modules.js"></script>
in its content (this file no longer exists in the cache or on the remote).
On another browser, the updated (good) index.html (cached with the same hash 2cdd5371d1201f857054a716570c1564) includes:
<script src="scripts/cec2b711.modules.js"></script>
The two entries have the same cache hash, although the content returned to the browsers is different!
What should I make of this? Does this mean that sw-precache doesn't guarantee atomic cache busting when new SW activates? How can one protect from this?
In case it helps, this is the generated service-worker.js file from sw-precache.
Note: I realize I could use a network-first strategy (at least for index.html) to avoid this. But I'd still like to understand what's happening, and to find a way to use a cache-first strategy to get the most out of performance.
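For reference, a minimal sketch of such a network-first route with sw-toolbox; the script path and the route pattern are assumptions:
// In the service worker: try the network for the app shell first,
// falling back to the cache when offline.
importScripts('sw-toolbox.js');
toolbox.router.get('/index.html', toolbox.networkFirst);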
Note 2: I saw in other related questions that one can change the name of the cache to force-bust all the old cache entries. But this seems to defeat the point of sw-precache only busting updated content? Is this the way to go?
Note 3: Even if I hard-reload the browser where the website is broken, the site works, because the hard reload skips the service worker cache; the cache itself, however, remains wrong. The service worker doesn't seem to re-activate, my guess being that this specific SW has already been activated but failed to bust the cache correctly. Subsequent non-hard-refresh visits still see the broken index.html.
(The answers here are specific to the sw-precache library. The details don't apply to service workers in general, but the concepts about cache maintenance may still apply to a wider audience.)
If the content of index.html is dynamically generated by a server and depends on other resources that are either inlined or referenced via <script> or <link> tags, then you need to specify those dependencies via the dynamicUrlToDependencies option. Here's an example from the app-shell-demo that ships as part of the library:
dynamicUrlToDependencies: {
  '/shell': [
    ...glob.sync(`${BUILD_DIR}/rev/js/**/*.js`),
    ...glob.sync(`${BUILD_DIR}/rev/styles/all*.css`),
    `${SRC_DIR}/views/index.handlebars`
  ]
}
(/shell is used there instead of /index.html, since that's the URL used for accessing the cached App Shell.)
This configuration tells sw-precache that any time any of the local files that match those patterns change, the cache entry for the dynamic page should be updated.
If your index.html isn't being generated dynamically by the server, but instead is updated during build time using something like this approach, then it's important to make sure that the step in your build process that runs sw-precache happens after all the other modifications and replacements have taken place. This means using something like run-sequence to ensure that the service worker generation isn't run in parallel with other tasks.
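A minimal sketch of that ordering with gulp and run-sequence; the task names are assumptions:
var gulp = require('gulp');
var runSequence = require('run-sequence');

gulp.task('build', function (callback) {
  // Generate the service worker only after every other build step has finished.
  runSequence('html', 'scripts', 'styles', 'generate-service-worker', callback);
});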
If the above information doesn't help you, feel free to file a bug with more details, including your site's URL.
I just removed the # from the URLs of my Angular single-page app.
I did it like this:
$locationProvider.html5Mode(true);
And it worked fine.
My problem is that when I directly enter any URL into the browser it shows a 404 error, while everything works fine when I navigate through the app via links.
Eg: www.example.com/search
www.example.com/search_result
www.example.com/project_detail?pid=19
All these URLs work fine. But when I directly enter any of them into my browser, I get a 404 error.
Any thoughts on this?
Thanks in advance.
Well, I had a similar problem. The server-side implementation in my case used Spring.
Routing on the client side ensures that all URL changes are resolved on the client. However, when you directly enter such a URL in the browser, the browser actually goes to the server to retrieve a web page corresponding to that URL.
Now in your case, since these are VIRTUAL URLs that are only meaningful on the client side, the server throws a 404.
You can capture the page-not-found exception in your server-side implementation and redirect to the default page [route] of your app. In Spring we have handlers for page-not-found exceptions, so I guess they'll be available in your server-side implementation too.
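The same idea as a minimal sketch in Node/Express (an assumed setup, not the Spring one from my case):
var express = require('express');
var path = require('path');
var app = express();

// Serve the built app assets first.
app.use(express.static(path.join(__dirname, 'public')));

// Any other route falls back to index.html so the client-side router
// can resolve it instead of the server returning a 404.
app.get('*', function (req, res) {
  res.sendFile(path.join(__dirname, 'public', 'index.html'));
});

app.listen(3000);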
When using the History API you are saying:
"Here is a new URL. The other JavaScript I have just run has transformed the page into the page you would have got by visiting that URL."
This requires that you write server side code that will build the page in that state for the other URLs. This isn't a trivial thing to do and will usually require a significant amount of work.
However, in exchange for that work you get robustness and performance. When one of those URLs is visited it will:
work even if the JS fails for any reason (such as a dropped network connection or a client (such as a search engine) that doesn't support JS)
load faster than loading the homepage and then transforming it with JS
You need to use rewrite rules. Angular is a single-page app, so all your requests should go to the same file (index.html). You can do this by creating an .htaccess file.
Assuming your main page is index.html.
Something like this (not tested):
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ /index.html [L,QSA]
The RewriteCond skips the rewrite for files that actually exist (your scripts, styles, and images). The L flag means that if the rule matches, no further RewriteRules are processed. QSA means that the URL query parameters are also passed along to the rewritten URL.
More info about htaccess: http://httpd.apache.org/docs/2.2/howto/htaccess.html
In my project there is a public folder and a script inside it: public/worker.js, which contains a piece of code:
alert('foo');
I call this script using a Worker:
new Worker('worker.js');
I launch Meteor and connect to my app. foo is alerted.
If I change the public/worker.js code to anything else:
alert('bar');
The server refreshes the clients, and the client refreshes the page, but it won't get the new code, instead using the old one (alerting foo instead of the new shiny bar). Clearing the cache and then refreshing fixes the issue. CTRL+F5 does not fix it; it does not seem to work for this kind of script call (at least not in the version of Firefox I tested with).
Why is this happening, exactly?
How can I prevent it?
You should alter the response header for the file. Maybe this gets you going: Explicit HTTP Response Headers for files in Meteor's public directory
The script is cached and the browser does not pull the new version from the server.
We need to edit the headers of the responses for files in the /workers folder, using the following code server-side (I wrapped it in a package with api.use('webapp')):
WebApp.rawConnectHandlers.use('/workers', function(req, res, next) {
  res.setHeader('cache-control', 'must-revalidate');
  next();
});
Using WebApp.connectHandlers did not work, the callback was never called, so I used rawConnectHandlers instead.
I am not 100% sure it is the best way to go, but it works.
I've not found out exactly why, but browsers (at least Chrome) seem to treat worker scripts differently from other JavaScript files with regard to caching on page refresh, even if the headers sent by the server are the same. Refreshing the page makes the browser check for new versions of scripts referenced in script tags, but not of those used as workers.
The way I've fixed this is that at build time I include a version number/build time/md5 of the file contents in the file name, so it ends up as something like worker.12333.js. The advantage is that if each filename references a file that is essentially immutable, you can set far-future expires headers... So instead of telling the browser to never cache the worker script, it can cache it forever. https://github.com/felthy/grunt-cachebuster is one such tool that does this for JavaScript included via script tags, but there are probably others.
The issue with this is that there must be some mechanism to tell the JavaScript the updated filename, so it knows to call new Worker('worker.12333.js');. I'm not sure if existing tools handle that, but the way I do it is to use the project build time in seconds as the unique key for all the files:
<html build-time="12333">
...
and then access it via JavaScript so it can work out the latest worker script filename (a sketch follows). It's not perfect, but it's fairly simple. You can probably come up with other mechanisms depending on your requirements.
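A minimal sketch of that lookup, assuming the build-time attribute from the snippet above:
// Read the build time off the <html> element and use it to construct
// the current worker script filename.
var buildTime = document.documentElement.getAttribute('build-time');
var worker = new Worker('worker.' + buildTime + '.js');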