I have an issue with an Electron app (19.x) built with React (18).
One of my testers, when leaving the application running overnight, will occasionally return in the morning to a new blank Electron window and the following in its devtools console:
ChunkLoadError: Loading chunk 3495 failed. react-dom.production.min.js:189
(timeout: file://C:/Users/<rest of filepath to chunk inside app.asar>)
The stacktrace resolves back to a line such as this:
const SystemOutagePage = lazy(() => import('src/pages/systemOutage'));
which in normal operation is triggered when the application loses connectivity with various backend services. So the window opening is expected (e.g. if the user's home router reboots overnight), but the failure to load the local outage page is not.
My question is: What could be causing this chunk load timeout error?
Some notes:
The path of the module it is trying to lazy load is a local filesystem file.
The chunk filepath it is trying to access is valid - the application's files are deployed (and replaced) in their entirety during app installation/update.
The system outage page works correctly when tested. This issue so far has affected only one user, and only when they leave the application running overnight.
This issue is likely related to the user's system being a laptop that hibernates overnight; the suspend/resume cycle appears to be problematic for the app.
The issue is likely resolved by using Electron's powerSaveBlocker API to prevent the OS from suspending the application while it's in use (minimal sketch below). I'll update and accept this answer if the solution holds.
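This is the shape of it in the main process - where exactly you start and stop the blocker is up to you, this is just how I've wired it in:
const { app, powerSaveBlocker } = require('electron');

let blockerId;

app.whenReady().then(() => {
  // Ask the OS not to suspend the app while it's running.
  blockerId = powerSaveBlocker.start('prevent-app-suspension');
});

app.on('before-quit', () => {
  // Release the blocker on the way out.
  if (blockerId !== undefined && powerSaveBlocker.isStarted(blockerId)) {
    powerSaveBlocker.stop(blockerId);
  }
});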
Related
We have a web app which fulfills the PWA criteria. So far, when opening the app on an Android device, we received the PWA installation prompt.
Now, if possible, we would like to generate the manifest.json dynamically on the client-side. I’m following the steps outlined in the following article, which look quite promising:
How to Setup Your Web App Manifest Dynamically Using Javascript
We generate the JSON and set it as blob URL through client-side JS:
const stringManifest = JSON.stringify(manifest);
const blob = new Blob([stringManifest], { type: 'application/json' });
const manifestUrl = URL.createObjectURL(blob);
document.querySelector('#manifest-placeholder').setAttribute('href', manifestUrl);
But now, when I open the app on an Android device I no longer see the PWA prompt. Yet, the manifest file obviously gets interpreted, as e.g. the icon and start_url are correctly set when I try to add the app to the home screen.
Any experience here whether setting the manifest.json is possible at all for a PWA? Anything I might be missing?
I faced a similar issue. I came to the conclusion that it could be due to the browser checking for a valid manifest before your dynamically-created manifest has been inserted into the DOM.
If the browser thinks there is no valid manifest available for PWA installation, the install prompt will not show, which is stated clearly enough in the documentation. However, debugging this is rather confusing, as tools such as Lighthouse in Chrome DevTools' Audits tab will say that the app is installable and everything is fine and dandy...
It could be the case that the browser only checks for a valid manifest once during page load - unfortunately I can't find any firm details anywhere about this.
I was able to solve this by ensuring the HTML document's initial render has a valid manifest for PWA installation, containing some default values (in my case, based off of window.location.pathname). Then, later on, when I have all the appropriate data available, I create the desired manifest and replace it in the DOM. When the user then installs the PWA, the browser uses the data from the desired manifest.
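Roughly, the shape of it looks like this - the element id, the default manifest path and the function name are just placeholders for whatever your page uses:
// index.html ships with a valid default manifest from the very first render:
//   <link rel="manifest" id="manifest-placeholder" href="/manifest.default.json">
// Later, once the real data is available, swap the href for a blob URL:
function applyDynamicManifest(manifest) {
  const stringManifest = JSON.stringify(manifest);
  const blob = new Blob([stringManifest], { type: 'application/json' });
  const manifestUrl = URL.createObjectURL(blob);
  document.querySelector('#manifest-placeholder').setAttribute('href', manifestUrl);
}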
We have an Aurelia SPA that is served from a .NET MVC application. The SPA is bundled using Webpack.
Under certain conditions seemingly random JavaScript functions and objects will be undefined. We are unable to navigate to certain routes because of this. Refreshing the browser fixes these issues.
The steps we have found to reproduce this behavior are not always reliable.
The SPA is open in the browser (specifically Chrome in this case).
We deploy a new version of the code to our server; .NET and JavaScript.
The previously open browser stays open for about 12 hours.
Then we see errors like Cannot read property 'split' of undefined when navigating to certain routes. The undefined objects preventing route navigation are not always the same.
I cannot reproduce this behavior on my localhost.
Without deploying new code, I have left my browser open over a weekend and returned to a functioning application.
Any suggestions would be greatly appreciated. I am unsure how the deploy can be related since the browser should be unaware of any server changes.
I have fixed this problem by adding a contenthash to my chunk files.
output: {
  path: path.resolve(bundleOutputDir),
  publicPath: "dist/",
  filename: "[name].js",
  // The contenthash changes whenever a chunk's content changes, so after a
  // deploy the browser requests the new file instead of reusing a stale cached chunk.
  chunkFilename: "[name].[contenthash].js",
},
Our webpack build outputs a number of files. Most of them are required at startup and are included in the index.cshtml of our MVC app's home controller.
e.g. <script type="text/javascript" src="~/dist/app.js" asp-append-version="true"></script>
asp-append-version="true" appends a version query string to these URLs so that the browser knows whether it can still use its cached copy.
The problem is that the other chunk files that webpack outputs are not referenced in this way. They are referenced from the files that the index.cshtml references.
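To make the distinction concrete (the module path below is made up): a lazily loaded module like the one in this sketch is split into its own chunk, and the request for that chunk is issued by webpack's runtime from inside app.js rather than by a script tag in index.cshtml, so asp-append-version never applies to it.
// Hypothetical lazily loaded module: webpack emits it as a separate chunk
// (named by chunkFilename) and requests it at runtime when this import() runs.
import('./reports/reportsPage').then((reportsPage) => {
  // use the module here
});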
This is what I believe was happening.
The web app would be open in the browser.
We would release new code to the server.
The browser goes idle.
The user returns to the browser.
The browser wakes up from being idle and requests the files again.
The ones with asp-append-version="true" are updated while the ones with a static name are fetched from cache.
Any mismatch between the freshly fetched files and a stale cached chunk will throw off the application because of the way webpack ties its chunks to the build they came from.
I have observed a rather strange anomaly in a web application.
Tech stack:
Front-End: ReactJS
Back-End: .NET Core application + Kestrel
Behavior
The root HTML page of the application loads several key JS files required for creating the web app.
For some reason it's possible that a given file of, say, 500 KB loads only about halfway and is then executed.
An error can be seen in the console that an exception occurred on line so and so, indicating that the file has not loaded completely and is therefore corrupt.
Also, if this happens once for a user, on page refresh the browser will reuse the incomplete file from cache.
I know it would be extremely helpful if I could provide additional logs/network call headers etc., but I do not have access to them at the moment.
I am guessing that the files are being served in some strange way, as I would assume a browser should know the total size of the resource it's requesting and therefore be able to tell when it has not loaded completely.
Every time I deploy an update to our web application, customers ring in with issues where their browser hasn't picked up that index.html has changed, and since the name of the .js file has changed they run into errors - presumably because their index.html still points to the old JavaScript file, which no longer exists.
What is the correct way to ensure that users always get the latest version when the system is updated?
We have an HTML5 + AngularJS web application. It uses Webpack to bundle the vendor and app JavaScript into two .js files. The filenames include a hash to ensure they are different once released.
Some other information
I can never replicate this issue locally (and by that I mean in debug, on our staging site or our production site)
We use CloudFlare but purge the entire cache after release
We have a mechanism in JS that checks, on page load and every 5 minutes, whether the version of our API has changed, and if so shows a "Please refresh your browser" message. Clicking this runs window.location.reload(true); (a simplified sketch of this check follows the list).
Our backend is IIS
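For clarity, that version check is roughly the following - the endpoint, response shape and banner function are simplified stand-ins for our real code:
let knownApiVersion = null;

async function checkApiVersion() {
  const response = await fetch('/api/version');  // stand-in endpoint
  const { version } = await response.json();     // stand-in response shape
  if (knownApiVersion === null) {
    knownApiVersion = version;
  } else if (version !== knownApiVersion) {
    // Stand-in UI hook; its button runs window.location.reload(true);
    showRefreshBanner();
  }
}

checkApiVersion();                            // on page load
setInterval(checkApiVersion, 5 * 60 * 1000);  // and every 5 minutes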
If you need users to pick up the latest index.html when they load your site immediately after you've updated the file, make index.html non-cacheable. That will mean the browser, CloudFlare, and any intermediate proxies aren't allowed to cache it, and that one file will always be served from your canonical server.
Naturally, that has a traffic and latency impact (for you and them), but if that's really your requirement, I don't see any other option.
There are spins on this. It might not be index.html itself that isn't cacheable; you could insert another resource (a tiny JavaScript file that writes out the correct script tags) if index.html is really big and it's important to cache it, etc. But if you need the change picked up immediately, you'll need a non-cacheable resource that identifies the change.
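In practice, "non-cacheable" means serving index.html with a header along these lines:
Cache-Control: no-cache
which makes every client revalidate index.html with the origin on each load (use no-store instead if you don't want it cached at all). With an IIS backend this can be set per file in web.config, e.g. a <clientCache cacheControlMode="DisableCache" /> element (under system.webServer/staticContent) scoped to a <location path="index.html"> section - and it's worth double-checking that CloudFlare isn't overriding it with its own caching rules.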
I have a Chrome App that displays slideshow/dashboard on the screen. Right now all of the website files (html and .js) are stored in the app. So when the application starts, it can be used completely offline because all of the files it needs are stored internally.
I'm looking for a way to host the dashboard's files (.js files) on a web server, that the app will query, pull down, and cache fresh copies of at startup. This will make it easier to update the dashboard but still allow it to be (mostly) offline.
I have been able to have the app query the server successfully and pull down the Javascript files as strings. Normally, I would just inject the JavaScript into the page but that's not allowed with Chrome Apps. My question is, how can I take the JavaScript (pulled down from the server) and load it in the page?