Nuxt - dynamic params do not work after building - javascript

In a Nuxt project ("spa" mode) I have a URL with a dynamic param, /shop/:product, which can be, for example:
/shop/ipad-128gb-rose-gold
/shop/subway-gift-card
/shop/any-string
etc.
Using this directory structure works fine in the development environment:
pages/
shop/
_product.vue
However, it does not work in production. Looking into the generated bin/ folder, I see that there is nothing inside the shop/ directory. I see that Nuxt mentions a solution here: https://nuxtjs.org/api/configuration-generate/#routes
But in my situation, I don't know what the :product param will be (could be any string).
I am fetching the product details in pages/shop/_product.vue from the server (if the product exists), and otherwise handling the error. So how do I do that in a production build?
I think I am misunderstanding the Nuxt solution -- am I really supposed to generate a route for every existing product slug?

The solution for me was to use:
// nuxt.config.js
export default {
  // ...
  generate: {
    fallback: true
  }
}
I am serving the app out of the built dist/ folder, and I came across this in the Netlify deployment docs:
For a single-page app there is a problem with refresh: by default on Netlify the site redirects to "404 not found". Any pages that are not generated fall back to SPA mode, and if you then refresh or share that link you will get Netlify's 404 page. This is because pages that are not generated don't actually exist as files; they are part of a single-page application, so if you refresh such a page you will get a 404 because the URL doesn't actually exist on the server. By redirecting to the 404.html, Nuxt will reload your page correctly in the SPA fallback.
The easiest way to fix this is by adding a generate property in your nuxt.config and setting fallback: true. Then it will fall back to the generated 404.html when in SPA mode instead of Netlify's 404 page.
References:
https://nuxtjs.org/faq/netlify-deployment/
https://nuxtjs.org/api/configuration-generate/#fallback

When you generate static pages, Nuxt produces a directory with an index.html for each route. How did you expect it to be dynamic if you serve static HTML?
You have two solutions:
don't use npm run generate; run Nuxt on the server instead. With this solution you avoid AJAX in the browser: Nuxt performs the request and sends the rendered HTML to the browser. Good for SEO.
have your web server (nginx) point all requests to /index.html (see the sketch below); at that point JavaScript takes over, and it can correctly find the slug and query the products via AJAX. Bad for SEO, because the content is fetched via AJAX after the page finishes loading.
Documentation and configuration details for both can be found on Nuxt's website.
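A minimal nginx sketch of the second option, assuming the generated files were copied to /var/www/dist (the path and server block are assumptions, adjust to your setup):

server {
  listen 80;
  # Wherever the generated files live (assumption)
  root /var/www/dist;

  location / {
    # Serve the requested file if it exists, otherwise fall back to the SPA entry point
    try_files $uri $uri/ /index.html;
  }
}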

Related

Error page does not work in production for Sapper export

Basically, non-existent routes get caught when using the dev command and the error/404 page gets displayed. But when using export and uploading the generated files to a web server, this does not work. Instead, the index page is displayed, but none of the logic works, such as clicking another link to navigate.
I had a catch-all slug in the code before, but removed it and deleted all the files generated by the export command to make sure it was gone. Could this be the issue? What would the slug file look like?
When using sapper export, the script starts from your index page and visits (and renders) all pages reachable by links on that page. This way you get a static version of your website that you can upload to your hosting. It replaces the server-side rendering Sapper normally does, but only for the first page the user visits; everything after that works as normal.
Since a 404 page is only shown when the user goes somewhere that does not exist, you will (usually) not have a link to it, and therefore the script will not render that page.
In order to tell Sapper to also crawl that page, you have to add it as an entry point.
In package.json:
"export": "sapper export --entry \"/ /404\""
This extra parameter tells the script to start at / (the main index file) and then do the entire process again starting at /404 (which shouldn't exist and should thus trigger your error page).

Handle "Loading chunk failed" errors with Lazy-loading/Code-splitting

We are developing a Vue.js application based on Vue CLI 3 with Vue Router and Webpack. The routes are lazy-loaded and the chunk file names contain a hash for cache busting. In general, everything is working fine.
However, there is a problem during the deployment. Steps to reproduce are the following.
User opens the application (let's assume route "/"), thus the main chunk file is loaded.
We change something in the application and deploy a new version.
Old chunk files are removed
New chunk files are being added (i.e. hashes in the chunk file names change)
User clicks a link to another route (e.g. "/foo")
An error occurs as the application tries to load a chunk file that has been renamed: Error: "Loading CSS chunk foo failed.
(/assets/css/foo.abc123.css)" (this might be CSS or JavaScript)
What is the best way to avoid errors like this?
One approach that should work is just to retain old chunk files and delete them at a later time. That, however, complicates the deployment of new versions as you need to keep track of old versions and always also deploy the old chunk files with the new version.
Another (naive) approach is to just reload as soon as such an error is detected (e.g. Vue Lazy Routes & loading chunk failed). It somewhat works, but it reloads the old route, not the new one. But at least it ensures that consecutive route changes work again.
Any other ideas? Maybe there is something in webpack that could fix this?
Do NOT cache the entry file (usually index.html).
We add:
expires 0;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate';
in our nginx server config.
Then, after you have deployed new client code, you can use vue-router's error hook to detect the error and handle it appropriately.
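A minimal sketch of that idea, assuming a standard vue-router setup (the message check is a heuristic, not an official API):

// router is your vue-router instance
router.onError((error) => {
  // Chunk load failures typically mention "Loading chunk ... failed"
  // or "Loading CSS chunk ... failed" in the error message.
  if (/loading (css )?chunk .* failed/i.test(error.message)) {
    // Force a full reload so the browser fetches the fresh entry file
    // and the newly hashed chunk names.
    window.location.reload()
  }
})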
As long as you have a versioned API, you can keep serving the old app files (just leave them on the server and delete them after a few days).
You will get problems as soon as your API changes during deployments.
I assume you deploy a new API each time you deploy new JS code. Then you can:
Pass the API version (simply use the git hash) to the application as a header with every response (JS resources, CSS, API requests, 404 responses)
Store the API version in your main JS entry point (or make it accessible somehow, e.g. as a generated constant)
On each server response, check whether the server version matches your main client version.
If it does not, display a prominent warning to the user (like the cookie banners) that they should reload the page (this allows the user to save changes, in the hope that the API did not change for that save button). A sketch of this check follows below.
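A rough sketch of that check, assuming the server sends a hypothetical X-App-Version header and the client's own version is embedded at build time (both names are assumptions, not part of the original answer):

// Embedded at build time, e.g. via webpack's DefinePlugin (assumption)
const CLIENT_VERSION = process.env.APP_VERSION

async function fetchWithVersionCheck(url, options) {
  const response = await fetch(url, options)

  // Compare the version the server reports with the version this bundle was built from
  const serverVersion = response.headers.get('X-App-Version')
  if (serverVersion && CLIENT_VERSION && serverVersion !== CLIENT_VERSION) {
    // Don't reload automatically; just tell the user a new version is available
    showUpdateBanner() // hypothetical UI helper that renders the warning
  }

  return response
}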
For async components, we display a normal 'not found' message if loading fails, together with a reload button that appears in place of the component. Reloading without user interaction would cause a lot of confusion.
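One way to wire that up, sketched with Vue 2's advanced async component syntax (the component names are hypothetical placeholders):

import LoadingSpinner from './LoadingSpinner.vue' // hypothetical spinner component
import LoadFailed from './LoadFailed.vue'         // hypothetical 'not found' message with a reload button

const AsyncWidget = () => ({
  // The component to load lazily as its own chunk
  component: import('./Widget.vue'),
  // Shown while the chunk is loading
  loading: LoadingSpinner,
  // Shown in place of the component if the chunk fails to load
  // (e.g. because a deployment removed the old hashed file)
  error: LoadFailed,
  // Treat the load as failed if it takes longer than this (ms)
  timeout: 10000
})

export default {
  components: { AsyncWidget }
}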

Why is Angular Universal necessary?

So the obvious answer is that it's necessary because it serves routed paths from the server, so that we don't get 404s.
However, solutions like angular-cli-ghpages solve this by adding a script to the app that parses parameters returned in a 404 response and then reroutes the app to the correct state.
So, just curious: are there any drawbacks to this, and why wouldn't it be used in general instead of solutions like Angular Universal or Rendertron?
For example, this is what spa-github-pages says:
A quick SEO note - while it's never good to have a 404 response, it appears based on Search Engine Land's testing that Google's crawler will treat the JavaScript window.location redirect in the 404.html file the same as a 301 redirect for its indexing. From my testing I can confirm that Google will index all pages without issue, the only caveat is that the redirect query is what Google indexes as the url. For example, the url example.tld/about will get indexed as example.tld/?p=/about. When the user clicks on the search result, the url will change back to example.tld/about once the site loads.
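For context, the 404-redirect trick the question refers to works roughly like this (a simplified sketch of the idea, not the actual spa-github-pages script):

// In 404.html: remember the requested path in a query parameter
// and bounce back to the index page.
const requested = window.location.pathname + window.location.search + window.location.hash
window.location.replace(window.location.origin + '/?p=' + encodeURIComponent(requested))

// In index.html, before the SPA boots: restore the original URL
// so the client-side router can take over.
const params = new URLSearchParams(window.location.search)
if (params.has('p')) {
  window.history.replaceState(null, '', decodeURIComponent(params.get('p')))
}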
Because of two main things:
First page load speed;
SEO
Robots do not run JavaScript, so they parse what they get from the server, and that is where Universal comes in.
Even with an --aot-built app served by gh-pages, with a 404 page that is a clone of the index, the client/robot still needs to fetch the initial files, parse them, and finally mount the final view. gh-pages does not serve the final HTML state.

How do I add another page in an angular-cli project?

Based on the comments on another of my questions (gradle how to add javascript files to a directory in the war file), I'm trying to use angular-cli to help build and manage an Angular project. However, I cannot seem to find any documentation on how to create a second webpage in the project, which to me seems like a very basic task. I tried creating a "component" with ng g component {component name}, but this didn't add anything to the build result.
I had missed the section of the Angular docs on routing, since I did not make the connection between the word "routing" and what I wanted to do. Routing as described there works perfectly when using Node as your server. However, other web servers such as Tomcat (which I am using for this project) will not, since ng build only generates an index.html file. Node knows that it should re-route URLs under the Angular base to that file, but Tomcat doesn't. A proxy server such as Apache needs to be placed in front of the Tomcat server to redirect the URLs to the base URL of the application.
With that out of the way, here are the basics of routing:
1. create a component for each "page" (the component does not need to be responsible for the whole page displayed; see 2)
2. create a "shell" component that contains features that will be on all pages, e.g. toolbar, side navigation.
3. add <router-outlet></router-outlet> at the point in the shell component where components for sub-URLs should appear (note that they are inserted into the DOM after this tag, not within it).
4. in the imports for your module, add RouterModule.forRoot(). This function takes an array of Route. Each route has a path and a component property; path is the URL (relative to the base URL) that will cause component to be inserted into the DOM. Note that path values should not begin with a slash.
5. add <a> tags with the routerLink property bound to the URL of your new page. Note that here, there should be a leading slash. (See the sketch after this list.)
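A minimal sketch of points 3-5, with hypothetical component and path names:

// app.module.ts (sketch; ShellComponent, HomeComponent and AboutComponent are hypothetical)
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { RouterModule, Routes } from '@angular/router';

import { ShellComponent } from './shell/shell.component';
import { HomeComponent } from './home/home.component';
import { AboutComponent } from './about/about.component';

// Paths are relative to the base URL and do not start with a slash (point 4)
const routes: Routes = [
  { path: '', component: HomeComponent },
  { path: 'about', component: AboutComponent },
];

@NgModule({
  declarations: [ShellComponent, HomeComponent, AboutComponent],
  imports: [BrowserModule, RouterModule.forRoot(routes)],
  bootstrap: [ShellComponent],
})
export class AppModule {}

<!-- shell.component.html: routerLink values do start with a slash (point 5) -->
<a routerLink="/">Home</a>
<a routerLink="/about">About</a>
<router-outlet></router-outlet>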

Handling Dynamic Routes Without a Server

Is it possible to serve a dynamic html page without a backend server or without using a front-end framework like Angular?
Edit
To clarify, the index file is served from a backend. This question is about how to handle routing between the index and dynamic pages.
I have an application that consists of two files: index.html and dynamic.html. When the user clicks an option, say "Option A", they are served dynamic.html and the URL is updated to /option-a. With a server this is no problem, and if the user visits the app from the landing page it isn't a problem either, because a cookie can be set. However, suppose a user visits my-domain/option-a directly. That route doesn't exist and there is no server to redirect, so it will 404. They would have to visit dynamic.html instead.
I think this architecture demands that there's either a server to handle route redirects or a SPA framework.
Is there something I'm missing?
Your SPA framework will be active only once your HTML page is loaded, and for that you need to redirect any URL the user tries on your domain to that HTML file. For this you obviously need a server (and since you are talking about my-domain/option-a, I assume you have at least a basic server). You can refer to this link to get an idea of how a server can redirect a URL to a specific HTML file: Nodejs - Redirect url.
Once the HTML is loaded, you can initialize your SPA framework and decide which template to load based on the URL.
Note: without a server you will access URLs like file://somepath/index.html, and anything other than this URL will result in a 404 that no SPA framework can handle.
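A minimal sketch of that redirect idea using Express (the framework choice, port, and dist/ folder are assumptions):

const express = require('express')
const path = require('path')

const app = express()

// Serve the static assets that do exist (JS, CSS, images, dynamic.html, ...)
app.use(express.static(path.join(__dirname, 'dist')))

// Any other URL (e.g. /option-a) falls back to index.html, where client-side
// code can inspect location.pathname and decide what to show.
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'dist', 'index.html'))
})

app.listen(3000)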
I think the solution is to use a static site generator such as Jekyll or Middleman, which allows you to convert information into static pages. That way you are functionally building a bunch of pages, but they are all compiled ahead of time. You can add dynamic content loaded from a YAML file, and it will compile the content into separate HTML pages.
It is not possible, but there is a workaround using URL parameters, like this:
my-folder/index.html
my-folder/index.html?=about
my-folder/index.html?=about/sublevel
my-folder/index.html?=profile
my-folder/index.html?=./games
const urlParams = new URLSearchParams(location.search);
const route = urlParams.get('');
console.log(route);
// Should print "about" "about/sublevel" "profile" "./games"
Of course this approach is not as clean as using a server for routing, but it's the best you can get without a server.
BTW, I tried an alternative solution: creating symlinks for all the target routes pointing to the same index.html file. But it did not work, because the browser (Firefox) redirects by default when it finds a symlink, so the home page is shown all the time.
