I am having trouble understanding why in the docs it is stated that having a custom server disables Automatic Static Optimization.
Before deciding to use a custom server, please keep in mind that it should only be used when the integrated router of Next.js can't meet your app requirements. A custom server will remove important performance optimizations, like serverless functions and Automatic Static Optimization.
My understanding is that, thanks to it, during the build phase (next build) Next.js will automatically generate an HTML file (for pages that qualify) which will then be served for future requests.
What I have tried
I have created a static page with no getServerSideProps or getInitialProps, which should therefore be pre-rendered during the build phase thanks to Automatic Static Optimization.
I have added a console.log() to the functional page component to see when the component is rendered: i.e. whether it renders on the server per request or only on the client.
The static page component code:
export default function Static() {
  console.log("The static page component is being rendered.")
  return <div>Hello from static page!</div>
}
I have created a custom server that lets all requests be handled by the Next.js handler.
Custom server code:
const express = require('express')
const next = require('next')

const dev = process.env.NODE_ENV !== 'production'
const app = next({ dev })
const handle = app.getRequestHandler()

app.prepare().then(() => {
  const server = express()

  // Pass every request straight to the Next.js request handler
  server.all('*', (req, res) => {
    return handle(req, res)
  })

  server.listen(3000)
})
I tested serving the app with both the built-in server (next start) and the custom server mentioned above.
Results
After running next build, in both cases a corresponding HTML file was generated for the Static page. When accessing the static page route, in both cases the logged message only appeared in the browser's console and not in Node's console. When requesting the static route via curl and analysing the response, I could see <div>Hello from static page!</div> present. From that I inferred that it is actually serving the pre-rendered HTML and thus using Automatic Static Optimization.
Questions
The docs state that a custom server disables Automatic Static Optimization, which as I understand it runs during the build step (next build). How is it possible that in my testing it still worked: the HTML file was generated and served for all requests to that static page route?
If a custom server really disables Automatic Static Optimization, what prevents the Next.js handler in the custom server from using the files already generated during the next build step and serving them just as the built-in server would?
Have I misunderstood what the Automatic Static Optimization is really doing? Or something else?
Thanks!
You're correct: Automatic Static Optimization does work with a custom server when you let Next.js handle the requests. The warning probably refers to when you're actually using the custom server to handle page requests yourself, instead of just passing them to Next.js.
Here's a quote from a co-author of Next.js:
Overall we recommend not adding a custom server, not to make you use Vercel but to make sure we can optimize the whole stack end to end. Automatic static optimization is always there, but if you're using a custom server there's some downsides like you can't remap routes which can lead to bugs in your application, hence why we don't recommend it.
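To make that distinction concrete, the kind of custom-server route handling the warning is aimed at looks roughly like the sketch below (my own illustration: the /blog/:id to /post mapping is made up, and it reuses the server and app variables from the custom server above):

// Hypothetical route remapping inside the custom server: requests to
// /blog/123 are rendered with the page at pages/post.js and a query of
// { id: '123' }, bypassing Next.js' default file-system routing.
server.get('/blog/:id', (req, res) => {
  return app.render(req, res, '/post', { id: req.params.id })
})

// All other requests still go through the default Next.js handler
server.all('*', (req, res) => handle(req, res))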
Related
Can anyone please explain to me the concept of client side and server side in Next.js as mentioned in their documentation? What I know is that Next.js works on React, which is client side and runs in the browser, and that server side means the API (backend). Any help would be appreciated. Thanks
From Next.js documentation:
This function gets called at build time on server-side. It won't be called on client-side, so you can even do direct database queries. See the "Technical details" section.
import { promises as fs } from 'fs'
import path from 'path'

export async function getStaticProps() {
  const postsDirectory = path.join(process.cwd(), 'posts')
  const filenames = await fs.readdir(postsDirectory)
  return { props: { filenames } }
}
I started writing a Next.js app a few months ago; I'll explain as far as I know, so check whether it is helpful.
Your understanding of client and server (API) is correct, but in the case of Next.js there is another client side and server side, since Next.js is used for Server-Side Rendering (SSR).
Put simply, the same page, e.g. pages/home.js, is rendered on the server when you do a browser hard reload of https://example.com/home. Pages written under the /pages/ folder are rendered on the server for that initial request, so the DOM elements of the page are available in the page source (the "view page source" option in the browser), which crawlers use too.
You can tell the difference by checking whether typeof window !== 'undefined': window only exists in the browser (the client), while the page source shown by the browser reflects the server-side render.
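As a rough illustration, the same check behaves differently depending on where the code runs:

function whereAmI() {
  if (typeof window === 'undefined') {
    // No window object: running in Node.js during server-side rendering
    return 'server'
  }
  // window exists: running in the browser after hydration
  return 'client'
}

console.log(`Rendering on the ${whereAmI()}`)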
You can also check this inside pages:
Create a Next.js project
Have two pages index.js and home.js
In home.js, write a Home.getInitialProps method, which plays a role similar to useEffect or componentDidMount in a plain React component. componentDidMount and useEffect only run on the client, so any data fetching needed before the initial render has to be done in getInitialProps or other related methods:
Home.getInitialProps = async (context) => {
  const { req, query, res, asPath, pathname } = context
  if (req) {
    // req only exists on the server: this call happened during a
    // server-side render (hard reload or direct request to /home)
  } else {
    // no req (and window is defined): this is a client-side call,
    // triggered when routing happens, e.g. Router.push('/home') from
    // the index page or from components rendered by pages/index.js
  }
  return {}
}
Let me know if you need some more details, we can explore and figure it out.
I have an SSR Angular app which I am trying to transform into a PWA. I want it to be server-side rendered for SEO and for the "fast first rendering" that it provides.
The PWA mode works fine when combined with SSR, but once the app is loaded, when we refresh it, the client index HTML file is loaded instead of the server-side rendered page.
I have dug into the code of ngsw-worker.js and I saw this:
// Next, check if this is a navigation request for a route. Detect circular
// navigations by checking if the request URL is the same as the index URL.
if (req.url !== this.manifest.index && this.isNavigationRequest(req)) {
  // This was a navigation request. Re-enter `handleFetch` with a request for
  // the URL.
  return this.handleFetch(this.adapter.newRequest(this.manifest.index), context);
}
I have no control over this file since it's from the framework and not exposed to developers.
Did anybody find a solution or workaround for this?
Up-to-date answer (v11.0.0)
Angular now has a navigationRequestStrategy option which lets you prioritize server requests for navigation. An extract from the changelog:
service-worker: add the option to prefer network for navigation
requests (#38565) (a206852), closes #38194
To be used wisely! This warning appears in the documentation:
The freshness strategy usually results in more requests sent to the
server, which can increase response latency. It is recommended that
you use the default performance strategy whenever possible.
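For reference, a minimal sketch of how the option can be set in ngsw-config.json (trimmed to the relevant fields; a real config will also contain assetGroups etc.):

{
  "$schema": "./node_modules/@angular/service-worker/config/schema.json",
  "index": "/index.html",
  "navigationRequestStrategy": "freshness"
}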
Old answer (for archaeological purposes)
I have found a working solution: the navigationUrls property of ngsw-config.json contains a list of navigation URLs to include or exclude (with an exclamation mark), as explained in the documentation.
Then I configured it like this:
"navigationUrls": [
"!/**"
]
This way, none of the URLs redirect to index.html and the server-side rendered app comes into play when the app is first requested (or refreshed), whatever the URL is.
To go further, the three kinds of URLs managed by the service worker are:
Non-navigation URLs: static files cached by the service worker and listed in the generated ngsw.json file with their corresponding hashes
Navigation URLs: redirected to index.html by default, forwarded to the server if the "!/**" configuration is used
GET requests to the backend: forwarded to the backend
In order to distinguish a GET XMLHttpRequest from a navigation request, the service worker uses the Request.mode property and the Accept header, which contains text/html when navigating and application/json, text/plain, */* when requesting the backend.
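As a rough sketch (not the actual ngsw-worker.js code), such a check could look like:

// Simplified illustration of how a service worker can tell a navigation
// apart from an API call; the real ngsw logic is more involved.
function isNavigationRequest(req) {
  const accept = req.headers.get('Accept') || '';
  return req.mode === 'navigate' || accept.includes('text/html');
}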
Edit: this is actually not good practice, for two reasons:
Depending on the network quality, there is no guarantee that the server-side version will render faster than the cached browser version
It breaks the "update in background" mechanism. Indeed, the server-side rendered app will always refer to the latest versions of the JavaScript files
For more details on this, please take a look at the Angular's team member answer to my feature request: https://github.com/angular/angular/issues/30861
I'm new to the react world and to the fullstack world as a whole but I've searched endlessly for an answer to the following and some guidance would be really appreciated.
I'm creating an app using React and Express. It requires authentication, so I was planning on using Passport to help. The client-side JS uses React Router to navigate through the website. That's all fine, but my issue is with the initial GET request made by the browser.
I'll first describe my specific app requirements and then generalize what I don't understand.
As I said, my application requires OAuth2 authentication. If you try to GET a path on my website and you're not logged in, it should just load the login page. If you are logged in, it loads as normal and finds your path. Similar to Facebook, I'd like the login URL to be the same as the "feed" page: just as facebook.com's '/' route is either the login page or your news feed depending on whether you are signed in, I want the same thing.
From what I understand, Passport authenticates on the back end by checking the request header. So I understand that I should have some kind of middleware that says "if the user is signed in, continue down the routes, otherwise render the sign-in page"... How is this done? What would the code look like? My only experience with Express is from an intro class which used res.render to send back an HTML file and pass it through a template engine like Handlebars. But I have no idea how it'd work with React routes. Would I still use res.render()? Something else?
Let's say my index.html has the root div to inject the React app into. If I had to guess, I'd send back that index.html page along with the .js file containing the routes, and somehow, on the backend, indicate which route I want it to match in my React routes (either the login one or the one the user requested)??
More generally, I guess I'm just confused about how the initial request to a website using React routes is handled. 1) How does the server interact with everything to render what I asked for? 2) What would the code look like for that? My only experience with React is from a basic Udemy course that just used "react-scripts start" to render the page.
After spending the entire day Googling this question it led me to SSR which is a rabbit-hole of its own and I'm not even sure if its what I need to help me. Is it?
I'm clearly missing some fundamental knowledge as this is really tripping me up so if you have any resources to learn more just post them. Thanks!
I understand your struggle, as I've had to go through it myself when combining front-end with back-end, specifically React and Node. So first things first: we know that the browser/client will always initiate a request to the server, so how does React Router take control of the routes? It's actually quite simple: all you have to do is return the entire React app from any route on your Express server. The code will look something like this:
const express = require('express');
const path = require('path');
const app = express();

app.get('/*', (req, res) => {
  // Return the React app's index.html (assumes a CRA-style build/ folder)
  res.sendFile(path.join(__dirname, 'build', 'index.html'));
});

app.listen(3000);
Once the React app renders in the user's browser, it takes control of routing, instead of requests going to the Express server (don't worry about paths: React will render according to the URL based on the code you wrote on the client side, and it will also take care of authentication vs. the feed page when it checks local storage, cookies, etc.). But what happens when we request data from our server? Since the server returns the React app on every route, we need to set up an API route to handle data requests.
app.get('/api/v1/*', (req, res, next) => {
  // Return some data in JSON format
  res.json({ message: 'some data' });
});
Hopefully, this gives you insight about what you were looking for.
I think the fundamental gap you're struggling with stems from the fact that a lot of those 'intro courses' shove the entire browser client into the application server to get things up and running quickly; as in, the Node server renders the entire React app AND operates as an API...
// Ajax request from React app to: http://example.com/api
app.use('/api/*', (req, res) => {
  res.send({ /* some JSON object */ });
});

// User visits in browser: http://example.com/**/*
app.use('/*', (req, res) => {
  res.render(/* entire React App sent to browser */);
});
The first request (assuming the user doesn't visit /api/* ) will just send down the React bundle. Further user navigation within the client would generally send XHR requests (or open WebSockets) from the React app to Express routes running on the same node program.
In many situations it makes sense to have these parts of your program separated, such as by having React delivered from a completely different location than where it requests data. There are many reasons for this, but optimizing computing resources for their differing demands on CPU, memory, network etc., and manageability of code/deployment, are the big reasons for me.
For example...
User visits: http://example.com *
Nginx, Apache, a 'cloud proxy', etc. directs the traffic to a static React bundle, which has no authentication and never makes contact with your Node server.
If the user has authenticated previously, they will have a token in local storage (if you're using JWTs for authentication), and your React app will be configured to always check for these tokens when it is initially loaded.
If the user has a token, the app will send an Ajax request in the background with the token as an Authorization Bearer header; the server will send back user data and the app will then redirect them to an 'authenticated page' like the FB feed you mention (see the sketch after this list).
If they don't have a token, or the token authentication fails, React will redirect them to the Login or Registration page.
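A minimal sketch of that client-side flow, assuming JWTs in local storage, a react-router history object, and a made-up /api/me endpoint (all names here are illustrative):

// Runs when the React app first loads in the browser
async function bootstrapAuth(history) {
  const token = localStorage.getItem('token');
  if (!token) {
    history.push('/login');            // no token: show the login page
    return;
  }
  const res = await fetch('/api/me', { // hypothetical "who am I" endpoint
    headers: { Authorization: `Bearer ${token}` },
  });
  if (res.ok) {
    history.push('/feed');             // token accepted: show the feed
  } else {
    localStorage.removeItem('token');  // stale token: back to login
    history.push('/login');
  }
}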
React
React basically hijacks the browser's native 'location' functionality (what's displayed after your domain name). So any events after the initial page load (button clicks and such) are handled entirely by React internally, and it uses those routes to determine what to display or what data to fetch from the API through Ajax (XHR).
If the user performs a hard page reload then that request will go back to the server and it will perform the whole cycle over again
React Router
Allows you to do two things simultaneously (a small sketch follows this list)...
Manipulate the browser Location and History objects.
Use that History and Location information elsewhere by detecting changes and sending off events.
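For instance, with React Router's v5-style API, a small illustration of both points (my own sketch, not from the original answer):

import React from 'react';
import { BrowserRouter, Switch, Route, Link, useLocation } from 'react-router-dom';

function CurrentPath() {
  // React Router exposes the Location info so components can react to changes
  const location = useLocation();
  return <p>You are at {location.pathname}</p>;
}

export default function App() {
  return (
    <BrowserRouter>
      <nav>
        <Link to="/login">Login</Link> | <Link to="/feed">Feed</Link>
      </nav>
      <Switch>
        <Route path="/login">{/* login form */}</Route>
        <Route path="/feed">{/* feed page */}</Route>
      </Switch>
      <CurrentPath />
    </BrowserRouter>
  );
}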
SSR
I've only toyed around with SSR so I can't speak to it in depth, but it provides extremely low latency for initial renders, doing it in one network request, so you want to use it in areas of your program where that's important.
Not sure if this answers your question, but let me know if you would like me to elaborate on anything or provide some more detailed resources.
SSR is a little bit confusing for developers with less experience, so let's forget it for now.
It will be easier for you to assume that the frontend JavaScript (React) and the backend JavaScript (Node.js) are two separate apps, and that they communicate with each other via an API.
Here is code that shows the Login component or the Feed component depending on whether you are signed in:
import React, { Component } from "react";
import axios from "axios";

// FeedsComponent and LoginComponent are placeholders for your own components
class Home extends Component {
  constructor(props) {
    super(props);
    const accessToken = localStorage.getItem("accessToken");
    this.state = {
      accessToken,
      feeds: []
    };
  }

  componentDidMount() {
    if (this.state.accessToken) {
      axios(`api/feeds?accessToken=${this.state.accessToken}`).then(({ data }) => {
        this.setState({
          feeds: data
        });
      });
    }
  }

  render() {
    if (this.state.accessToken) {
      return <FeedsComponent feeds={this.state.feeds} />;
    }
    return <LoginComponent />;
  }
}

export default Home;
and this is your backend
const express = require("express");
const app = express();
app.get('/api/feeds', (req, res, ) => {
const feeds = [
{},
{}
]
res.status(200).json(feeds);
});
app.listen(3001);
Just keep in mind that they are two separate apps; they can be in two different folders, on different servers, on different ports.
Simply point Express to the folder containing your React files or build files.
app.use(express.static(__dirname + '/dist'));
where 'dist' contains the build files
See docs for more details
I'm building a chat dashboard and widget with which a customer should be able to put the widget into their page. Some similar examples would be Intercom or Drift.
Currently, the "main" application is written in Meteor.js (it's front end is in React). I've written a <Widget /> component and thrown it inside a /widget directory. Inside this directory, I also have an index.jsx file, which simply contains the following:
import React from 'react';
import ReactDOM from 'react-dom';
import Widget from './Widget'; // path to the <Widget /> component

ReactDOM.render(
  <Widget />,
  document.getElementById('widget-target')
);
I then set up a webpack configuration with an entry point at index.jsx; when webpack is run, it spits out a bundle.js in a public directory.
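Roughly, the config looks something like this (paths and the babel-loader setup are simplified/assumed):

// webpack.config.js
const path = require('path');

module.exports = {
  entry: './widget/index.jsx',
  output: {
    path: path.resolve(__dirname, 'public'),
    filename: 'bundle.js',
  },
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        exclude: /node_modules/,
        use: 'babel-loader', // React/JSX presets configured in .babelrc
      },
    ],
  },
  resolve: { extensions: ['.js', '.jsx'] },
};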
This can then be included on another page by simply including a script and div:
<script src="http://localhost:3000/bundle.js" type="text/javascript"></script>
<div id="widget-target"></div>
A few questions:
What is wrong with this implementation? Are there any security issues to be aware of? Both the examples linked earlier seem to make use of an iframe in one form or another.
What is the best way to communicate with my main meteor application? A REST API? Emit events with Socket.io? The widget is a chat widget, so I need to send messages back and forth.
How can I implement some sort of unique identifier/user auth for the user and the widget? Currently, the widget is precompiled.
1 What is wrong with this implementation? Are there any security issues to be aware of? Both the examples linked earlier seem to make use of an iframe in one form or another.
As #JeremyK mentioned, you're safer within an iFrame. That being said, there's a middle route that many third parties (Facebook, GA, ...) are using, including Intercom:
Ask users to integrate your bundled code within their webpage. It's then up to you to ensure you're not introducing a security vulnerability on their site. This code will do two things (a rough sketch follows the list below):
take care of setting up an iframe, where the main part of your service is going to happen. You can position it, style it, etc. This ensures that all the logic happening in the iframe is safe and you're not exposed.
expose some API between your customer webpage and your iframe, using window messaging.
the main code (the iframe code) is then loaded by this first script asynchronously, and not included in it.
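A rough sketch of such a loader shim (the widget URL, global name, and message format are all made up for illustration):

// Tiny script customers embed; the real widget lives in the iframe.
(function () {
  const WIDGET_ORIGIN = 'https://widget.example.com'; // assumed host for the iframe content

  // 1. Set up the iframe where the actual widget runs
  const frame = document.createElement('iframe');
  frame.src = WIDGET_ORIGIN + '/widget.html';
  frame.style.cssText = 'position:fixed;bottom:16px;right:16px;width:360px;height:480px;border:0;';
  document.body.appendChild(frame);

  // 2. Expose a small API to the host page, backed by window messaging
  window.MyChatWidget = {
    open: () => frame.contentWindow.postMessage({ type: 'open' }, WIDGET_ORIGIN),
    identify: (user) => frame.contentWindow.postMessage({ type: 'identify', user }, WIDGET_ORIGIN),
  };

  // 3. The heavy widget code is loaded asynchronously inside the iframe,
  //    not included in this shim.
})();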
For instance, Intercom asks customers to include some script on their page: https://developers.intercom.com/docs/single-page-app#section-step-1-include-intercom-js-library that's pretty small (https://js.intercomcdn.com/shim.d97a38b5.js). This loads extra code that sets up the iframe and exposes their API, which makes it easy to interact with the iframe: closing it, setting user properties, etc.
2 What is the best way to communicate with my main meteor application? A REST API? Emit events with Socket.io? The widget is a chat widget, so I need to send messages back and forth.
You've three options:
Build your widget as an entire Meteor app. This will increase the size of the code that needs to be loaded. In exchange for the extra code, you can communicate with your backend through the Meteor API (like Meteor.call), get reactivity on all data (for instance, if you send a response to a user through your main Meteor application, the response pops up on the client with no extra work, as long as both apps use the same database; they don't need to be on the same server), and keep the optimistic UI. In short, you get everything Meteor offers here, and it's probably going to be easier to integrate with your existing backend, which I assume is Meteor.
Don't include Meteor. Since you're building a chat app, you'll probably need socket.io over a traditional REST API. For sure, you can do a mix of both.
Use Meteor DDP (it's kind of like socket.io, but for Meteor; Meteor apps use it for all requests to the server). This will include fewer things than the full Meteor, will probably be easier to integrate with your Meteor backend than a REST API / socket.io, and will be some extra work compared to the full Meteor.
3 How can I implement some sort of unique identifier/user auth for the user and the widget?
This part should probably do some work on the customer website (vs. in your iframe) so that you can set cookies on their page and send that data to your iframe, which will talk to your server and identify the user. Whether you use artwells:accounts-guest (which is based on meteor:accounts-base) is going to depend on whether you decide to include Meteor in your iframe.
If you don't have Meteor in your iframe, you can do something like:
Handle user creation yourself, by simply doing this on your server:
const token = createToken(); // your own random-token helper
Users.insert({ tokens: [token] });
// send the token back to your iframe
// and set it as a cookie on your customer's website
Then, for each call to your server, in your iframe:
let token;
const makeRequest = async (request) => {
  token = token || getCookieFromCustomerWebsite();
  // pass the token along with your HTTP / socket.io / ... request,
  // e.g. in a header
  return await callServer(token, request);
};
On the server, have a middleware that sets the user. Mine looks like:
const loginAs = (userId, cb) => {
  DDP._CurrentInvocation.withValue(new DDPCommon.MethodInvocation({
    isSimulation: false,
    userId,
  }), cb);
};

// my middleware that runs on all API requests for a non-Meteor client
export const identifyUserIfPossible = (req, res, next) => {
  const token = req.headers.authorization;
  if (!token) {
    return next();
  }
  const user = Users.findOne({ tokens: token });
  if (!user) {
    return next();
  }
  loginAs(user._id, () => {
    next();
    // Now Meteor.userId() === user._id for all calls made during that request
    // So you can do Meteor.call('someMethod') as you'd do on a full Meteor stack
  });
};
Asking your customers to embed your code like this doesn't follow the principles of Security by Design.
From their point of view, you are asking them to embed your prebundled code into their website, exposing their site to any hidden security risks (inadvertent or deliberately malicious) that exist in your code, which would have unrestricted access to their website's DOM, local storage, etc.
This is why using an iframe is the preferred method to embed third-party content in a website, as that content is sandboxed from the rest of its host site.
Further, following the security principle of 'Least Privilege', they (with your guidance/examples) can set the sandbox attribute on the iframe and explicitly lock down, via a whitelist, the privileges the widget will have.
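For example, a short sketch of applying such a whitelist from the embedding page or loader script (the element id and the chosen privileges are illustrative, not a recommendation for your exact widget):

// Grant the widget iframe only the capabilities it needs, nothing more
const frame = document.querySelector('#my-chat-widget-frame'); // hypothetical id
frame.setAttribute(
  'sandbox',
  // whitelist: scripts, same-origin requests, and form posts are allowed;
  // top-level navigation, popups, etc. stay blocked
  'allow-scripts allow-same-origin allow-forms'
);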
Loading your widget in an iframe will also give you more flexibility in how it communicates with your servers. This could now be a normal Meteor client, using Meteor's DDP to communicate with your servers. Your other suggestions are also possible.
User auth/identification depends on the details of your system. This could range from using Meteor Accounts, which would give you password or social auth solutions, to an anonymous accounts solution such as artwells:accounts-guest.
html5rocks article on sandboxed-iframes
I want to restrict a certain subtree only to authenticated users. The basic setup is as follows (fat removed):
app.use(express.bodyParser())
   .use(express.cookieParser('MY SECRET'))
   .use(express.cookieSession())
   .use('/admin', isAuthenticatedHandler)
   .use('/admin', adminPanelHandler);
Where the handler function is:
isAuthenticatedHandler = function(req, res, next) {
  if (!req.session.username) {
    res.redirect('login');
  } else {
    next();
  }
};
The problem is that even though I provide the redirect destination as a relative path ('login'), it doesn't lead to <mount_point>/login, i.e. /admin/login, but to /login, which of course throws a 404.
From the expressjs API reference:
This next redirect is relative to the mount point of the application.
For example if you have a blog application mounted at /blog, ideally
it has no knowledge of where it was mounted, so where a redirect of
/admin/post/new would simply give you `http://example.com/admin/post/new`,
the following mount-relative redirect would give you
`http://example.com/blog/admin/post/new`:
res.redirect('admin/post/new');
Am I misreading this?
The issue here is that while you are using your middleware off of /admin, your app itself is not mounted at /admin. Your app is still off of the root, and your configuration simply says to only use your isAuthenticatedHandler middleware if the request comes in off the /admin path.
I whipped together this gist. Notice how it uses 2 Express applications, one mounted inside the other (line 23 pulls this off). That is an example of mounting the application at a different point rather than just putting a given middleware at a given point. As presently written, that example will give you an endless redirect, since the isAuthenticatedHandler fires for everything off of / in the child application, which equates to /admin overall. Using 2 separate applications might introduce other issues you're not looking to deal with, and I only include the example to show what Express means when it talks about mounting entire applications.
For your present question, you'll either need to follow what Yashua is saying and redirect to /admin/login or mount your admin interface as a separate Express application.
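To illustrate the second option, here's a rough Express 3-era sketch (matching the APIs used in the question; the route handlers are placeholders): the admin interface is its own app mounted at /admin, and the login route is registered before the auth check so the redirect can't loop.

const express = require('express');

const admin = express(); // the admin interface as its own application

// register the login page before the auth check so it stays reachable
admin.get('/login', function (req, res) { res.send('admin login page'); });

admin.use(function (req, res, next) {
  if (!req.session.username) {
    // mount-relative redirect: resolves to /admin/login per the quoted docs
    return res.redirect('login');
  }
  next();
});

admin.get('/', function (req, res) { res.send('admin panel'); });

const app = express();
app.use(express.cookieParser('MY SECRET'))
   .use(express.cookieSession())
   .use('/admin', admin); // mount the whole admin app at /admin

app.listen(3000);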
What are you trying to achieve? Why not just redirect to '/admin/login'? And the mount point they are talking about is the place where your Express app is located, not necessarily the current URL. So /blog might be set up on your server to be the root of your app, while / might be a totally different app. At least that's the way I read this.