Service worker file not found in offline mode - javascript

My web app is set up like this:
I have an Apache server running on port 80, with virtual hosts for my domain and various subdomains, plus port 443 for SSL.
I have a Node.js application running on port 5000.
All traffic to my main domain is forwarded from Apache to Node.js using ProxyPass.
Node.js then serves the service workers.
Problem:
On the first load of the application in online mode, it works properly: it shows the "Service worker registered over the complete domain" message as expected, and some CSS, JS, and image requests even return a response from the service worker.
But when I open the same page in offline mode, the service worker doesn't play its part in opening the web app.
Error:
1. Uncaught (in promise) TypeError: Failed to fetch(…).
An unknown error occurred when fetching the script for the service worker file.
Any help on implementing this properly would be much appreciated.

Your browser will attempt to refetch the JavaScript file corresponding to the current service worker on every navigation request. You can read more about that in this Stack Overflow answer.
What you're seeing logged reflects the fact that the JavaScript for your service worker can't be fetched (which makes sense, because you're offline). In that case, the previous JavaScript that your browser already knows about and has cached is reused, and your service worker should function as expected. The noise in the DevTools console about the service worker's JavaScript failing to be fetched can be ignored.
If you're seeing failures related to the actual functioning of the service worker itself (for instance, retrieving cached resources fails), that would point to an issue with your implementation. But it doesn't sound like that's what's being logged.
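For reference, this is roughly what the cached worker keeps doing while offline; a minimal fetch handler along these lines (illustrative, not necessarily the poster's actual worker) serves cached responses and only falls back to the network when there is no match:
self.addEventListener('fetch', event => {
  event.respondWith(
    // Serve from the cache when a match exists (e.g. while offline),
    // otherwise fall through to the network.
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});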

Make sure the HTML files are cached, and that the service worker script is served from the top level, so its scope covers the whole site.
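To illustrate the "top level" point, here is a minimal registration sketch (paths are illustrative): a worker served from /service-worker.js can control the entire origin, while one served from a subdirectory only controls that subdirectory and below.
// Register from the site root so the worker's scope covers every page.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js')
    .then(reg => console.log('Service worker registered with scope:', reg.scope))
    .catch(err => console.error('Service worker registration failed:', err));
}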

Related

Enforcing reload of remote index.html in an Angular progressive web app

I have an Angular 8 web app that uses Angular PWA and service worker.
The customer wants to add maintenance mode to the system to be able to cut off the users and display a simple HTML page with the downtime information.
On the server side it works simply by returning 503 with a maintenance mode HTML page. While the page is based on a template, I cannot include it in my SPA web app, because the customer's admins might want to adjust it for the situation (e.g. to explain how long the maintenance will last).
The question boils down to this:
how to enforce the web browser to show the remote index.html with maintenance text instead of the cached index.html? How is it usually done in a typical Angular SPA application, that was created using a Visual Studio Angular SPA template?
The longer explanations:
The problem is the client side. Since this is a PWA that mostly communicates with the back end through an API, there are three potential points at which the user might receive a 503 response:
1. When loading the site for the very first time. This is not a problem; users get the maintenance page immediately.
2. When the site has already been loaded and the user does a normal page refresh (not a full reload). This is a problem because the service worker returns the cached SPA index.html instead of the modified index.html on the server. Fortunately, my SPA makes some web requests immediately after launching, so I can catch the 503 there, which means it all boils down to the third point:
3. Most often, the 503 status will be caught while the user is working on some task and the app calls my server API. I have a global error handler (based on ErrorHandler) attached to my HTTP service calls in my Angular code, so I can catch the 503 response. But what should I do next? How do I force a full reload of the website, ignoring the service worker cache, so the remote index.html gets reloaded?
What have I already attempted:
As a quick & dirty workaround, I added the following code to my error handler:
if (error.status === 503) {
  // The server should have an offline page. If not, this request will lead to a 404.
  document.location.href = "offline.html?_=" + Math.floor(Math.random() * 1000000000);
}
This assumes that the remote server has offline.html file instead of newly modified index.html.
The problem with this approach is that the user can't simply keep refreshing the page: they'll keep getting offline.html, and eventually a 404, instead of the revived index.html once the server is restored from maintenance mode. It would be nicer to show the maintenance text at the same index.html URL, but unfortunately the browser keeps displaying the cached SPA index.html instead.
In case someone else stumbles upon the same question: I ended up with this somewhat ugly solution:
else if (error.status === 503) {
  // We assume the server has sent the entire offline HTML page as the body,
  // and we rewrite the current page with it entirely.
  document.open();
  // error.error is the response body
  document.write(error.error);
  document.close();
}
This way:
- If the user loads the website for the first time, they receive the server's 503 response immediately.
- If the user loads the website from the browser's cache, the website will receive the 503 somewhat later, but most likely very soon because of the immediate API requests, which brings us to the last case:
- If the user receives a 503 status when calling the API, the entire visible document gets replaced with the text from the server.
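If rewriting the document is too ugly, another option (a sketch, not from the original answer; it assumes only the standard navigator.serviceWorker API) is to unregister the service worker and then hard-reload, so the next navigation bypasses the service worker cache and fetches the remote index.html:
if (error.status === 503 && 'serviceWorker' in navigator) {
  // Unregister every service worker registration, then reload so the
  // next navigation goes to the network instead of the worker's cache.
  navigator.serviceWorker.getRegistrations()
    .then(regs => Promise.all(regs.map(reg => reg.unregister())))
    .then(() => window.location.reload());
}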

Using sw-toolbox with Parse JS SDK for runtime caching / offline does not work

I have an AngularJS app, and it uses Parse as the backend. I've made it offline-friendly by using a service worker: it uses sw-precache for caching the static resources at build time, and also uses runtime caching for certain whitelisted URLs (such as the Parse API backend) and some other URLs.
Even with this in place, the runtime caching does not take effect once the app goes offline; it still tries to reach the /config URL on my Parse server. I've tried whitelisting the URL with the pattern below, and it still does not cache the response.
My (Grunt generated) service worker snippet is
// Runtime cache configuration, using the sw-toolbox library.
toolbox.router.get(/^https:\/\/xxx\.xxx\.com/, toolbox.networkFirst, {
  "networkTimeoutSeconds": 10,
  "cache": {
    "origin": "https://xxx.xxx.com",
    "maxEntries": 10,
    "name": "main-cache",
    "maxAgeSeconds": 1800
  }
});
According to the docs, this config should cache any request to the domain in the whitelisted URL. I've tried playing around with maxAgeSeconds and maxEntries, and still have not seen the client-side caching that many people there described.
The idea is to have a completely offline-capable AngularJS + Parse.com application that survives page reloads (we already use a lot of IndexedDB and localStorage while offline). Right now the application works as long as the page is not reloaded, but the minute it is, it makes an API call to the /config URL and the page remains blank with a few console error messages.
Would appreciate any help on this.
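One way to check whether the route above ever caches anything (a debugging sketch, using the 'main-cache' name from the config) is to inspect Cache Storage from the page's DevTools console:
// List all caches, then dump the URLs stored in 'main-cache'.
caches.keys()
  .then(names => {
    console.log('Existing caches:', names);
    return caches.open('main-cache');
  })
  .then(cache => cache.keys())
  .then(requests => console.log('main-cache entries:', requests.map(r => r.url)));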

Does it make sense to save a service worker in the cache?

From my service worker, I am saving some assets to the browser cache, as well as the service worker script itself, and it works in the sense that I can see the service worker URL together with all the other assets in my DevTools cache tab.
Now, when I go offline, my service worker listens to the fetch event and gets all assets from cache.
However, there seems to be no fetch event when the page tries to register the worker itself, therefore I'm getting the following errors in the console:
[screenshots of the console errors]
Am I missing something? After all, does it make sense to cache the service-worker script itself?
According to the specification of the update algorithm (which is also run when registering), step 7.2 says:
Set request’s skip service worker flag and request’s redirect mode to "error".
That means the request for your service worker script will never pass through the service worker itself. Instead, the script is cached in its own cache according to its own rules. What you see as errors are the browser's failing attempts to get a fresh version of the service worker.
As Jeff Posnick says in one of his replies, you can safely ignore these errors.
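In other words, there's no need to add the worker script to your own cache. A minimal install handler sketch (the cache name and asset list are illustrative) caches the app shell and deliberately leaves the service worker script out, since the browser stores that copy itself:
const CACHE_NAME = 'app-cache-v1'; // illustrative cache name

self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(CACHE_NAME).then(cache =>
      // The service worker script itself is intentionally absent here:
      // the browser manages its own copy per the update algorithm above.
      cache.addAll(['/', '/index.html', '/app.js', '/styles.css'])
    )
  );
});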

How do I load a Web Worker script over HTTPS?

I am attempting to use a Web Worker to offload some CPU intensive calculations into a separate thread. For a little context, I am taking an audio stream from getUserMedia and saving it into a file to be uploaded to my service after it is complete. I am able to retrieve the stream from the user and play it back via the WebAudio API and through an HTML5 player, but now I need to take the next step of saving it into a file.
The problem:
My main service runs over an HTTPS connection, since it is restricted to signed-in users only. I have a worker script that does what I need it to, and I am attempting to load the script into my worker via a relative path. I am receiving the following error:
Mixed Content: The page at 'https://someurl.com:1081/some/path' was loaded over HTTPS,
but requested an insecure Worker script
'http://someurl.com/some/path/lib/assets/javascripts/worker.js'.
This request has been blocked; the content must be served over HTTPS.
I figured it was because I was using a relative path in my code like so:
worker = new Worker('lib/assets/javascripts/worker.js');
I wanted to rule this out so I made the following change:
worker = new Worker('https://someurl.com:1081/some/path/lib/assets/javascripts/worker.js');
This did not solve my error. It appears that the Worker is loading my script via HTTP no matter what URL I use. I couldn't find any reference on how to use a Web Worker via HTTPS, so I am hoping someone can provide some insight.
Possible Solution
I do want you to know there is a possible solution, but it seems a bit hacky to me. I can load my worker script as a Blob and pass that directly into the Worker. If this is the only solution, I can make it work, but I was hoping to find a way to make the script load via HTTPS.
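For reference, the Blob approach mentioned above usually looks something like this (a sketch; fetching the worker source this way and the message payload are illustrative):
// Fetch the worker source over HTTPS, wrap it in a Blob,
// and construct the Worker from an object URL.
fetch('lib/assets/javascripts/worker.js')
  .then(response => response.text())
  .then(source => {
    const blob = new Blob([source], { type: 'application/javascript' });
    const worker = new Worker(URL.createObjectURL(blob));
    worker.postMessage('start'); // illustrative message
  });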
Have you tried
//someurl.com:1081/some/path/lib/assets/javascripts/worker.js
instead of
https://someurl.com:1081/some/path/lib/assets/javascripts/worker.js
Just something I found here:
Deezer content is served over HTTP
I solved this. The error itself was misleading and caused me to go down a rabbit hole looking for the solution.
The issue here actually stems from the way I have this service configured. The service that starts the web worker is actually proxied behind another service, and all requests go through the parent service. This works great for most requests, but was causing an error in this case. Instead of forwarding the request on this port to my app, the web worker was attempting to download the worker script from the parent service itself. This means the error stemmed from the fact that the script wasn't found, not that the protocol was incorrect.
To solve this, I had to pass in a localized script location from Rails using its asset pipeline. This allowed the worker to grab the script and actually work.
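A hedged sketch of what that can look like on the client: the Rails template embeds the fingerprinted worker URL in the page (e.g. via a meta tag rendered with the asset pipeline), and the JavaScript reads it instead of hard-coding a relative path (the meta tag name and fingerprint here are illustrative):
// The Rails template would render something like:
//   <meta name="worker-src" content="/assets/worker-abc123def.js">
const workerSrc = document.querySelector('meta[name="worker-src"]').content;
const worker = new Worker(workerSrc);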

Websockets not working in my Rails app when I run on Unicorn server, but works on a Thin server

I'm learning Ruby on Rails to build a real-time web app with WebSockets on Heroku, but I can't figure out why the websocket connection fails when running on a Unicorn server. I have my Rails app configured to run on Unicorn both locally and on Heroku using a Procfile...
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
...which I start locally with $ foreman start. The failure occurs when creating the WebSocket connection on the client in JavaScript...
var dispatcher = new WebSocketRails('0.0.0.0:3000/websocket'); //I update the URL before pushing to Heroku
...with the following error in the Chrome Javascript console, 'websocket connection to ws://0.0.0.0:3000/websocket' failed. Connection closed before receiving a handshake response.
...and when I run it on Unicorn on Heroku, I get a similar error in the Chrome Javascript console, 'websocket connection to ws://myapp.herokuapp.com/websocket' failed. Error during websocket handshake. Unexpected response code: 500.
The stack trace in the Heroku logs says, RuntimeError (eventmachine not initialized: evma_install_oneshot_timer):
What's strange is that it works fine when I run it locally on a Thin server using the command $ rails s.
I've spent the last five hours researching this problem online and haven't found the solution. Any ideas for fixing this, or even ideas for getting more information out of my tools, would be greatly appreciated!
UPDATE: I found it strange that websocket-rails only supported EventMachine-based web servers, while faye-websocket, which websocket-rails is built upon, supports many multithread-capable web servers.
After further investigation and testing, I realised that my earlier assumption had been wrong. Instead of requiring an EventMachine-based web server, websocket-rails appears to require a multithread-capable (so no Unicorn) web server which supports rack.hijack. (Puma meets this criteria while being comparable in performance to Unicorn.)
With this assumption, I tried solving the EventMachine not initialized error using the most direct method, namely, initializing EventMachine, by inserting the following code in an initializer config/initializers/eventmachine.rb:
Thread.new { EventMachine.run } unless EventMachine.reactor_running? && EventMachine.reactor_thread.alive?
and.... success!
I have been able to get Websocket Rails working on my local server over a single port using a non-EventMachine-based server without Standalone Server Mode. (Rails 4.1.6 on ruby 2.1.3p242)
This should be applicable on Heroku as long as you have no restriction in web server choice.
WARNING: This is not an officially supported configuration for websocket-rails. Care must be taken when using multithreading web servers such as Puma, as your code and that of its dependencies must be thread-safe. A (temporary?) workaround is to limit the maximum threads per worker to one and increase the number of workers, achieving a system similar to Unicorn.
Out of curiosity, I tried Unicorn again after fixing the above issue:
- The first websocket connection was received by the web server (Started GET "/websocket" for ...), but the websocket client was stuck in the connecting state, seeming to hang indefinitely.
- A second connection resulted in HTTP error code 500, along with app error: deadlock; recursive locking (ThreadError) showing up in the server console output.
By the (potentially dangerous) action of removing Rack::Lock, the deadlock error can be resolved, but connections still hang, even though the server console shows that the connections were accepted.
Unsurprisingly, this fails. From the error messages, I think Unicorn is incompatible for reasons related to its network architecture (threading/concurrency). But then again, it might just be a bug in this particular Rack middleware...
Does anyone know the specific technical reason for why Unicorn is incompatible?
ORIGINAL ANSWER:
Have you checked the ports for both the web server and the WebSocket server and their debug logs? Those error messages sound like they are connecting to something other than a WebSocket server.
A key difference between the two web servers you have used is that one (Thin) is EventMachine-based and one (Unicorn) is not. The Websocket Rails project wiki states that Standalone Server Mode must be used for non-EventMachine-based web servers such as Unicorn (which would require an even more complex setup on Heroku, as it needs a Redis server). The error message RuntimeError (eventmachine not initialized: evma_install_oneshot_timer) suggests that standalone mode was not used.
Heroku, as far as I know, only exposes one internal port (provided as an environment variable) externally as port 80. A WebSocket server normally requires its own socket address (port number), which can be worked around by reverse-proxying the WebSocket server. Websocket-rails appears to get around this limitation by hijacking Rack to hook into an existing EventMachine-based web server, which Unicorn does not provide.
