How do I set preload headers in Next.js when the files have random build names?
I've seen from the page source that the static files are preloaded using link tags in the html. I'd much rather have these preloaded in the headers, as that also enables HTTP/2 Server Push.
According to the documentation, you can set custom headers in next.config.js. This is fine, but the main problem is that the file names get random strings every build.
For example, I've also got some local font files that I'd like to push, as well as the CSS file generated by Tailwind.
How would you set preload headers for the resources in Next.js?
EDIT:
I have managed to hardcode font files in the headers, as these get to keep their random names on rebuild. Tailwind CSS seems to be impossible to hardcode this way, as it gets a new name right after I rebuild. I guess I could modify the build folder in that case, but both of these methods are less than ideal.
Why isn't this a more common issue for people using React/Next.js? As far as I know, using HTTP/2 Server Push makes everything much faster, as long as the server supports it.
Here is a working (but is it efficient?) solution, which requires setting up an Apache2 or NGINX reverse proxy:
Use a custom server.
Intercept the response body, search for <link rel=preload> HTML tags,
and set a Link HTTP header for each one.
You could use this library.
Configure the reverse proxy (NGINX or Apache2) to automatically push resources by intercepting Link HTTP headers.
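A minimal sketch of the interception step, assuming the tags look like what Next.js emits in <head> (a real implementation would use a proper HTML parser or the library mentioned above rather than regexes):

```javascript
// Turn <link rel="preload"> tags found in an HTML body into a single
// Link header value. The tag shape and regexes here are assumptions;
// this is an illustration, not a robust parser.
function preloadTagsToLinkHeader(html) {
  const links = [];
  const re = /<link\s+[^>]*rel=["']?preload["']?[^>]*>/gi;
  for (const tag of html.match(re) || []) {
    const href = (tag.match(/href=["']([^"']+)["']/) || [])[1];
    const as = (tag.match(/as=["']([^"']+)["']/) || [])[1];
    if (href) links.push('<' + href + '>; rel=preload' + (as ? '; as=' + as : ''));
  }
  return links.join(', ');
}

// Example input resembling a Next.js build artifact:
const sample = '<link rel="preload" href="/_next/static/css/abc123.css" as="style">';
console.log(preloadTagsToLinkHeader(sample));
// → </_next/static/css/abc123.css>; rel=preload; as=style
```

In a custom server you would compute this value from the rendered body and call res.setHeader('Link', value) before sending the response, so the reverse proxy can see it.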
See also: https://github.com/vercel/next.js/issues/2961
Related
I am creating a website in which each user has an "avatar". An avatar has different accessories like hats, facial expressions, etc. I previously built this on a PHP website, but I am using React for this new site. I load each user's avatar and its item links from Firestore. I do not want to use absolute positioning or CSS; I want the avatar to be one image.
Example of what I am trying to achieve:
I found this library: https://github.com/lukechilds/merge-images which seems to be exactly what I need but I cannot load in external images or I get this error:
Any solutions to this error or suggestions to an alternative would be greatly appreciated.
My code:
render() {
  // Note: starting the merge inside render() re-runs it on every re-render;
  // componentDidMount is a better place for this side effect.
  mergeImages([
    'http://example.com/images/Avatar.png',
    'http://example.com/images/Hat.png',
  ])
    .then((b64) => {
      document.querySelector('img.abc').src = b64;
    })
    .catch((error) => console.log(error));
  return (
    ...
    <img className="abc" src="" width={100} height={200} alt="avatar" />
    ...
  );
}
The merge-images package has some quirks. One of those quirks is that it expects individual images to either be served from your local server (example: http://localhost:3000/images/head.png, http://localhost:3000/images/eyes.png, and http://localhost:3000/images/mouth.png) or that those individual images be imported into a single file.
Working example: https://github.com/mattcarlotta/merge-images-example (this example includes the first three options explained below with the fourth option utilizing the end result of using a third party CDN)
To run the example, clone the repo:
git clone https://github.com/mattcarlotta/merge-images-example
Change directory:
cd merge-images-example
Then install dependencies:
yarn install
Then run the development server:
yarn dev
Option 1:
The simplest implementation would be to import them into an AvatarFromFiles component. However, as written, it isn't reusable and isn't suitable for dynamically selected avatars.
Option 2:
You may want to serve them from the local server, like the AvatarFromLocalServer component with a Webpack dev config. Then you would retrieve stored strings from an API and pass them from state down into the component. Once again, this still requires the images to be present in the images folder, but more importantly, it isn't ideal for a production environment because the images folder must be placed outside the src folder to be served. This could also lead to security issues. Therefore, I don't recommend this option at all.
Option 3:
Same as Option 1, but lazy loaded like the AvatarFromLazyFiles component, and therefore flexible. You can load images by name; however, it still requires that all of the images be present at runtime and during production compilation. In other words, what you have on hand is what you get.
Option 4:
So... the ideal option would be to build an image microservice, or use a CDN, that handles all things images (uploading, manipulating/merging, and serving). The client would only select/upload new images to this microservice/CDN, which handles everything else. This may require a bit more work, but it offers the most flexibility, is easy to integrate, and gives the best performance, as it offloads all the work from the client to the dedicated service.
In conclusion: if you plan on having a fixed set of images, use option 3; otherwise, use option 4.
Problem
This is a CORS issue. The images are coming from a different origin that's not your server.
If you look at the source of the library, you'll notice it uses a <canvas> under the hood to merge the images and then reads back the resulting data. Canvas cannot work with images loaded from another domain, and there's good reasoning behind this: loading an image into a canvas is a way to fetch data, and since you can retrieve that data from the canvas as base64, a malicious site could steal information by first loading an image into a <canvas> and then pulling it out.
You can read about it directly from the spec for the <canvas> element.
Solution
You need to serve the images either from the same origin (essentially, the same domain) or include an Access-Control-Allow-Origin: ... header in the HTTP response that serves the images. There are ways to do this in Firebase Storage or whatever other server solution you use.
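For example, if the images live in Firebase Storage (backed by a Google Cloud Storage bucket), a CORS policy can be applied with gsutil. The bucket name below is a placeholder, and for production you would restrict origin to your own domain rather than "*":

```shell
# Sketch: allow cross-origin GETs on the bucket that serves the avatar images.
cat > cors.json <<'EOF'
[
  {
    "origin": ["*"],
    "method": ["GET"],
    "maxAgeSeconds": 3600
  }
]
EOF
gsutil cors set cors.json gs://your-bucket.appspot.com
```

The images must then also be requested with crossOrigin set to "anonymous" on the client so the canvas is not tainted.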
Files are created and deleted dynamically, and their names change over time because of the CMS plugin's CSS and JS minification process. How can I make Nginx push all the JS and CSS files in a directory?
I tried:
index index.php;
http2_push 'path/to/files' *min.css; #not working
http2_push 'path/to/files' *min.js; #not working
http2_push 'path/to/file' favicon.ico; #works fine
Forgive my English, I'm not a native speaker. Thanks for your time.
Update: After fruitlessly searching for a solution, I decided to go the long way: I modified the base plugin into a custom one that creates files with a fixed name every time instead of a dynamic one. I removed the variable parts of the name by dropping $ctime and $hash from the static-file generation.
index index.php;
http2_push 'path/to/files' static-name.min.css; #working
http2_push 'path/to/files' static-name.min.js; #working
http2_push 'path/to/file/' *.min.js; # still doesn't work, but it doesn't matter anymore. Thanks for the answers.
Get PHP to do it for you.
First of all set up the following config in Nginx:
http2_push_preload on;
Then get PHP to send preload link HTTP headers in the response to index.php:
header('Link: </styles/file.css>; rel=preload; as=style');
Nginx will then use the preload HTTP headers as instructions to send HTTP/2 push requests.
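Putting the two pieces together, the relevant nginx config might look like the sketch below. The paths and the php-fpm socket are assumptions for illustration:

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;
    root /var/www/html;
    index index.php;

    # Turn Link: rel=preload response headers from PHP into HTTP/2 pushes
    http2_push_preload on;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}
```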
This assumes your PHP code either knows the files you want to push or can find out.
Using preload hints also means that HTTP/1.1 requests will get them too, telling the browser to request these resources ASAP, even before parsing the returned HTML.
The main downsides of this option are that you 1) can't do this for static resources (e.g. if using index.html instead of index.php) and 2) it won't start pushing until the index.php response is ready. For the latter, HTTP status 103 Early Hints would allow an earlier response, but I can't find anything to suggest that Nginx supports this relatively new status code yet.
I'm building a web application. I'm linking to separate css and js files and I want to manage cache.
If the JS script or CSS style file has been updated, force a reload and replace that file; otherwise, get the file from cache.
Is that possible? How to do that?
By default, CSS and JavaScript files are cached in the client browser. When you update your CSS or JavaScript file, you just need to bump the version in the HTML header, like this:
foo.css?ver=002
foo.js?ver=002
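In the HTML, these versioned references look like this (file names are examples):

```html
<link rel="stylesheet" href="foo.css?ver=002">
<script src="foo.js?ver=002"></script>
```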
This depends a lot on the server, as caching in browsers is based on a set of headers sent by the server, including Cache-Control, Expires, and ETag, and on the way it handles headers from the client, including If-Modified-Since and If-None-Match. These allow the browser to ask the server to return a file only if it doesn't already have the latest version; if, based on those headers, the server determines that the browser already has the latest version, it can return a 304 Not Modified response.
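A typical revalidation round-trip with these headers looks roughly like this (abbreviated; header values are examples):

```http
GET /foo.css HTTP/1.1
Host: example.com
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
ETag: "abc123"
Cache-Control: max-age=3600
```

The 304 response carries no body, so the browser reuses its cached copy.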
You can also use "cache buster" query parameters as sia suggested: add a query parameter to the file name, which will be ignored by most servers, but which you can use to indicate that there is a new version of the same file. While the query parameter won't let you control what version is downloaded, it will be part of the key in the browser cache, so when the parameter changes, the browser will download the file again.
There is an excellent rundown of how HTTP caching works over on MDN.
I have a python 2.7 app on Google Appengine. One of the JS files is served via a python script, not a standard static handler. The app.yaml config is shown below:
- url: /js/foo.js
script: python.js.write_javascript.app
secure: optional
The request for foo.js is part of a code snippet that clients of our service place on their website, so it can't really be updated. python.js.write_javascript.app basically just reads in a JS template file, substitutes in a few customer-specific values, and prints to the browser.
What I'm wondering is, how do we set the correct headers so this request is cached correctly. Without any custom headers, appengine's default is to tell the browser never to cache this. This is obviously undesirable because it creates unnecessary load on our app.
Ideally, I would like to have browsers make a new request only when the template has been updated. Another option would be to cache per session.
Thanks
It looks like Google handles this automatically. I just print the response with the correct JavaScript headers but without any cache headers, and Google's CDN caches it for me. I'm not sure what the default cache lifetime is, but I saw no increase in instances or cost after implementing this.
So, I want to add versioning to my CSS and JS files. The way I would like to do this is by appending a query string to the end of the asset path, so
/foo/bar/baz.css
Becomes
/foo/bar/baz.css?version=1
This will work for proxies and browser caches; however, I was wondering whether Akamai will recognize this as a new file and re-request it from the origin server. My assumption is that it would, but I figured I'd ask if anyone knew for sure.
Yes. It matches exact URLs for all GET requests.
Not quite. It depends on the CDN configuration. Query String values are usually not part of the cache-key. So, when setting up the CDN delivery config, make sure you explicitly add the option to include the Query String as part of the cache-key. Otherwise, you will end up serving inconsistent versions due to having a cache key that does not vary based on the query string value, in this case, the asset version.
I prefer to have a url like '/css/DEVELOPER_BASE/foo/baz/style.css'.
Your build/deploy scripts do a global find-and-replace of '/css/DEVELOPER_BASE/' with '/css/[version_number]/'.
To make this work you then have two options.
Your deploy script copies the css files from '/css/DEVELOPER_BASE/' to '/css/[version_number]/'
Your web server does an alias (not redirect) for '/css/[version_number]/' to '/css/DEVELOPER_BASE/'
This will keep you from having to worry about how browsers and CDNs handle query parameters.
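For the second option, an nginx alias sketch might look like this (the on-disk path is an assumption):

```nginx
# Serve any versioned CSS URL from the single DEVELOPER_BASE directory,
# so /css/20240101/style.css maps to /css/DEVELOPER_BASE/style.css on disk.
location ~ ^/css/\d+/(.*)$ {
    alias /var/www/site/css/DEVELOPER_BASE/$1;
}
```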