When running Next.js with a custom server, what does .prepare(), called on the Next.js application, do?
It does five things, in this order, awaiting each step:
1. Verify the TypeScript setup
2. Load custom routes
3. Add the exportPathMap entries to the routes – this makes next export's exportPathMap work in development mode too, so the user doesn't have to define a custom server that reads the exportPathMap (see the sketch below)
4. Start the HotReloader
5. Record telemetry
[source code]
In short, it prepares the Next.js app so that another server (Express, in their example) can be used for handling SSR.
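For the exportPathMap step, a hypothetical next.config.js could look like the sketch below; the page names and query values are made up for illustration, and .prepare() is what makes these extra routes resolvable during development as well:

// next.config.js – hypothetical exportPathMap; the pages and query values
// here are placeholders. With a custom server, .prepare() adds these paths
// to the router so they also resolve in development, not just in next export.
module.exports = {
  exportPathMap: async function () {
    return {
      '/': { page: '/' },
      '/about': { page: '/about' },
      '/posts/hello': { page: '/post', query: { slug: 'hello' } },
    }
  },
}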
I am having trouble understanding why the docs state that having a custom server disables Automatic Static Optimization.
Before deciding to use a custom server, please keep in mind that it should only be used when the integrated router of Next.js can't meet your app requirements. A custom server will remove important performance optimizations, like serverless functions and Automatic Static Optimization.
My understanding is that, thanks to it, during the build phase (next build) Next.js will automatically generate an HTML file (for pages that qualify) which will then be served for future requests.
What I have tried
I have created a static page with no getServerSideProps or getInitialProps that should be pre-rendered during the build phase thanks to Automatic Static Optimization.
I have added a console.log() to the functional page component to know when the component is being rendered, i.e. whether it renders on the server per request or only on the client.
The static page component code:
export default function Static() {
  console.log("The static page component is being rendered.")
  return <div>Hello from static page!</div>
}
I have created a custom server that lets all requests be handled by the Next.js handler.
Custom server code:
const express = require('express')
const next = require('next')

const dev = process.env.NODE_ENV !== 'production'
const app = next({ dev })
const handle = app.getRequestHandler()

app.prepare().then(() => {
  const server = express()
  server.all('*', (req, res) => {
    return handle(req, res)
  })
  server.listen(3000) // port is arbitrary here
})
I tested serving the app with both the built-in server (next start) and my custom server mentioned above.
Results
After running next build, in both cases a corresponding HTML file was generated for the static page. When accessing the static page route, in both cases the logged message only appeared in the browser's console and not in Node's console. When requesting the static route via curl and analysing the response, I could see <div>Hello from static page!</div> present. From that I inferred that it is actually serving the pre-rendered HTML and thus using Automatic Static Optimization.
Questions
The docs state that a custom server disables Automatic Static Optimization, which by my understanding runs during the build step (next build). How is it possible that in my testing it worked, i.e. generated the HTML file and served it for all requests to that static page route?
If a custom server really disables Automatic Static Optimization, what is preventing the Next.js handler in the custom server from using the files already generated by the next build step and serving them just as the built-in server would?
Have I misunderstood what the Automatic Static Optimization is really doing? Or something else?
Thanks!
You're correct: Automatic Static Optimization does work with a custom server when you let Next.js handle the requests. The warning probably refers to when you're actually using the custom server to handle page requests yourself, instead of just passing them to Next.js.
Here's a quote from a co-author of Next.js:
Overall we recommend not adding a custom server, not to make you use Vercel but to make sure we can optimize the whole stack end to end. Automatic static optimization is always there, but if you're using a custom server there's some downsides like you can't remap routes which can lead to bugs in your application, hence why we don't recommend it.
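To make the distinction concrete, here's a sketch (not from the docs) of the kind of custom page handling the warning is about: remapping a route and rendering it yourself with app.render() instead of passing it to the default handler. The /posts/:id route and /post page are made up for this example.

const express = require('express')
const next = require('next')

const app = next({ dev: process.env.NODE_ENV !== 'production' })
const handle = app.getRequestHandler()

app.prepare().then(() => {
  const server = express()

  // Remapped route: /posts/123 is rendered with the /post page.
  // Handling page requests yourself like this is the case the warning targets.
  server.get('/posts/:id', (req, res) =>
    app.render(req, res, '/post', { id: req.params.id })
  )

  // Everything else still goes straight to the Next.js handler,
  // which can serve the statically optimized output as-is.
  server.all('*', (req, res) => handle(req, res))

  server.listen(3000)
})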
I'm using Keystone 6 for my backend and I'm trying to integrate Stripe, which requires access to the underlying Express app to pass a client secret from the server to the client. This is highlighted in the Stripe documentation here: https://stripe.com/docs/payments/payment-intents#passing-to-client
I'm having trouble figuring out how to access the Express app in Keystone 6, though, and they don't seem to mention anything about this in the documentation. Any help would be appreciated.
The short answer is Keystone 6 doesn't support this yet.
The longer answer has two parts:
This functionality is coming
We've been discussing this requirement internally and the priority of it has been raised. We're updating the public roadmap to reflect this next week.
The functionality itself should arrive soon after. (Unfortunately I can't commit to a release date.)
Getting access to the Express app is possible, it's just a real pain right now
If you look at Keystone's start command you can see where it calls createExpressServer().
This just returns an express app with the GraphQL API and a few other bits and bobs.
But there's actually nothing forcing you to use the built-in keystone start command – you can copy this code, hack it up and just run it directly yourself.
E.g. you could replace this...
const server = await createExpressServer(
  config,
  graphQLSchema,
  keystone.createContext,
  false,
  getAdminPath(cwd)
);
With...
const server = express();

server.get('/hello-world', (req, res) => {
  res.send('Hello');
});

const keystoneServer = await createExpressServer(
  config,
  graphQLSchema,
  keystone.createContext,
  false,
  getAdminPath(cwd)
);

server.use(keystoneServer);
And your /hello-world endpoint should take precedence over the stuff Keystone adds.
Unfortunately, this doesn't work for the dev command so, in your local environment you'll need to do it differently.
One option is to start a second Express server that you control, put it on a different port, and include your custom routes there.
You can still do this from within your Keystone app codebase but having different URLs in different environments can be annoying.
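A minimal sketch of that dev-only second server, using port 3100 to match the dev CUSTOM_ENDPOINT example below (the filename and route are placeholders):

// dev-custom-server.js – run alongside `keystone dev`, on its own port.
// Port 3100 and the /hello-world route are assumptions for illustration.
const express = require('express');

const app = express();

app.get('/hello-world', (req, res) => {
  res.send('Hello');
});

app.listen(3100, () => {
  console.log('Custom endpoints listening on http://localhost:3100');
});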
You'll probably need an environment variable just for your custom endpoints URL, with values like this in production:
# Production
GRAPHQL_ENDPOINT="https://api.example.com/api/graphql"
CUSTOM_ENDPOINT="https://api.example.com/hello-world"
And this in dev:
# Dev
GRAPHQL_ENDPOINT="http://localhost:3000/api/graphql"
CUSTOM_ENDPOINT="http://localhost:3100/hello-world"
It's ugly but it does work.
I'll update this answer when the "official" functionality lands.
I am very new to Strapi, and I am looking for the best way to create an import job for it.
This job is supposed to run as a cron job and get its data from a temporary database, which means no uploaded files etc.
Strapi is deployed as a Docker container to Kubernetes.
There is an example importer plugin, but it is too heavy and has an unnecessary frontend, while I am looking for something lighter.
You can use a Strapi controller, e.g. Restaurant is a controller.
Call it in the browser like:
localhost/restaurant/import
module.exports = {
  import: async (ctx) => {
    // getRecord() – fetch the data to import (code goes here)
    // strapi.services.restaurant.create(passData) – create the Strapi record
  },
};
1 - get the data, e.g. from an API such as https://www.example.com/wp-json/wp/v2/
2 - create the Strapi record with strapi.services.restaurant.create()
You can call localhost/restaurant/import from a cron job.
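Putting those two steps together, here is a rough sketch of the import action. It assumes Strapi v3's strapi.services API mentioned above; the axios dependency, source URL and field mappings are placeholders:

// api/restaurant/controllers/restaurant.js
const axios = require('axios'); // assumed to be installed

module.exports = {
  import: async (ctx) => {
    // 1 - get the data from the external source (placeholder URL)
    const { data } = await axios.get('https://www.example.com/wp-json/wp/v2/posts');

    // 2 - create a Strapi record for each fetched item (placeholder field mapping)
    for (const item of data) {
      await strapi.services.restaurant.create({
        name: item.title,
        description: item.content,
      });
    }

    ctx.send({ imported: data.length });
  },
};

// (the /restaurant/import route also needs an entry in the API's routes.json
// so that the URL above actually reaches this action)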
I'm using the JavaScript version of the botframework. I've followed the documentation to enable telemetry logging in Application Insights. When I access the logs I can see that custom events are being logged.
The issue is that the bot-specific identifiers, such as user_Id, session_Id and conversation_Id, are not being logged. This can be seen in the screen capture (not reproduced here).
In the applicationInsightsTelemetryClient.js file there is a function called addBotIdentifiers. As far as I can tell, it is this function that is responsible for adding the bot specific identifiers.
The first lines of the function look like this:
function addBotIdentifiers(envelope, context) {
if (context.correlationContext && context.correlationContext.activity) {
Inspecting this function shows that the context argument is always null.
This leads me to my questions.
Why is it null?
Any suggestions on what I need to do to have it set appropriately?
Update
In digging into this further it appears the code starting at line 26 in the applicationInsightsTelemetryClient.js file isn't being called. Could this be the cause of the missing context later on in the addBotIdentifiers function?
Looks like the documentation has a missing line in step 7. We will correct the document ASAP. Meanwhile, please add the line below to your index.js, following https://github.com/microsoft/BotBuilder-Samples/blob/main/samples/javascript_nodejs/21.corebot-app-insights/index.js#L113
// Enable the Application Insights middleware, which helps correlate all activity
// based on the incoming request.
server.use(restify.plugins.bodyParser());
Further investigation has shown what the issue was. It's not immediately obvious.
I compared the code in my index.js file with the one in the 21.corebot-app-insights BotBuilder sample.
Note that the setup of the Restify server happens after the creation of the bot adapter. It is also after the configuration of the main dialog and the middleware.
In my code the setup of the Restify server and the bot adapter / dialogs was intermingled. This appears to have been the cause of the problem.
The main lesson here for me, and for anyone who stumbles across this post later, is that the setup of the Restify server should be at the end of the index.js file, to ensure all of the bot framework is set up first.
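For anyone wanting to see that ordering in code, here is a condensed sketch loosely based on the 21.corebot-app-insights sample linked above; the telemetry, dialog and bot wiring is elided and the names are placeholders:

const restify = require('restify');
const { BotFrameworkAdapter } = require('botbuilder');

// 1. Create the adapter, telemetry client/middleware, dialogs and bot first.
const adapter = new BotFrameworkAdapter({
    appId: process.env.MicrosoftAppId,
    appPassword: process.env.MicrosoftAppPassword
});
// ... telemetry client, telemetry middleware, dialogs and bot go here ...

// 2. Only then, at the end of index.js, set up the Restify server.
const server = restify.createServer();
server.use(restify.plugins.bodyParser());

server.post('/api/messages', (req, res) => {
    adapter.processActivity(req, res, async (context) => {
        // await bot.run(context); // placeholder for the bot created above
    });
});

server.listen(process.env.port || process.env.PORT || 3978);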
I've got a new API from the backend team in a new project. When I call the API it returns "you need to enable java...", whereas I had used Postman for another project before without this problem... Is it related to the API, the server or something else?
I don't think that Postman is capable of executing JavaScript in its console.
Try doing the same in a web browser and it will work (you won't see this error message).
I spent some time pondering this, and then suddenly I realized what was going on. Possible causes:
- The endpoint does not exist; it could be a misspelling.
- The endpoint is not at the path you expect it to be. Try adding or removing "/" at the beginning of the URL, particularly if you don't specify the hostname; i.e. fetch('getusername') is different from fetch('/getusername'). This may be acceptable in development but NOT once deployed, because it points to a different path.
- The endpoint may be working fine in development, but somewhere in production/staging it raised an exception.
I updated Postman and now it works. I'm not sure if it was because of the update or the restart.
I had this problem with a project built using the new template in Visual Studio 2022 for a React app with .NET Core.
In my case I was only getting the response "You need to enable JavaScript to run this app" with calls to a new controller I added. Calls to the built-in WeatherForecastController were working just fine. My new controller was configured the same as the built-in controller, so I could not figure out why this was happening.
It has to do with how this project template creates both a React app and a back-end API accessible on the same port. There's a setupProxy.js file that defines routes that should be forwarded to the API; all other routes are redirected to index.html. This is exactly what was happening in my case: because my new controller had not been added to setupProxy.js, the middleware was redirecting the request to index.html, and because the request came from Postman rather than a browser, the message about enabling JavaScript was displayed.
The solution is that each controller must be explicitly mapped in setupProxy.js or else it won't be proxied correctly. After making this change it worked perfectly in Postman as well as fetch calls from the React app.
const context = [
  "/weatherforecast", // built-in controller that comes with the project template in VS2022
  "/recaptcha"        // controller I created (this line must be added)
];
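For reference, this is roughly how the template's setupProxy.js consumes that context array via http-proxy-middleware; the target URL below is an assumption (the generated file derives it from ASP.NET Core environment variables):

// ClientApp/src/setupProxy.js (simplified sketch)
const { createProxyMiddleware } = require('http-proxy-middleware');

// Only these paths are forwarded to the .NET back end; anything else
// falls through to the React dev server and ends up at index.html.
const context = [
  "/weatherforecast",
  "/recaptcha"
];

module.exports = function (app) {
  app.use(createProxyMiddleware(context, {
    target: 'https://localhost:7042', // assumed back-end URL for illustration
    secure: false
  }));
};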
While calling a REST API with Postman, this issue can also come up if you miss the endpoint; add the endpoint to the URL and check again.
What worked for me was to turn off / deselect the User-Agent header field under the request headers.