I'm building an editorial site that regularly adds new content. As such, they've requested that articles and article content are loaded at load time rather than having to run a build and wait for it. Speed is quite important to them, and even after spending some time optimizing the build speed, they need it quicker.
I have the articles pulling in fine using the Sanity client, but when you click on an article it looks for the-slug-of-the-article.js rather than some kind of article page. If I could have it load something like article.js, I could use the slug param from the URL to grab the page content, but I'm not sure how to actually load that page instead of the-slug.js without changing the URL structure.
To make things a bit more confusing, there's actually a category in here too, so the URL looks like: www.example.com/category/the-slug-of-the-article
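For reference, what I have in mind is something like a client-only route, where one page component matches every article URL. An untested sketch of what I mean (article.js and the param names are just my guesses):

// gatsby-node.js - untested sketch: serve src/pages/article.js for every
// /<category>/<slug> URL without changing the URL structure.
exports.onCreatePage = ({ page, actions }) => {
  if (page.path === '/article/') {
    // matchPath lets this one page handle any /:category/:slug request,
    // so the component can read the params and fetch content at runtime.
    page.matchPath = '/:category/:slug';
    actions.createPage(page);
  }
};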
Short answer: you can't build dynamic pages at "load" time using a static site generator.
TL;DR
I assume that you are generating a final client site. Otherwise, you can achieve what you are trying to do using an EC2 instance on AWS (or the equivalent on any other service) that runs gatsby develop. Gatsby has a feature to refresh the content automatically by sending a curl request to its own domain with the __refresh URL parameter. By its nature, of course, it's only available in gatsby develop:
curl -X POST http://localhost:8000/__refresh
To do so, you will need to enable the ENABLE_GATSBY_REFRESH_ENDPOINT environment variable, so you will need to adapt your run commands.
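For example (assuming a Unix-like shell), you can set the variable inline when starting the development server:

ENABLE_GATSBY_REFRESH_ENDPOINT=true gatsby develop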
Of course, gatsby develop won't refine and build the "final" static site, with all the SEO, code, and asset optimizations that gatsby build applies, so this approach may only work for testing purposes.
In your case, Sanity adds a watcher that makes this work easier for you. You will only need to enable the watchMode flag like:
{
  resolve: `gatsby-source-sanity`,
  options: {
    projectId: `abc123`,
    dataset: `blog`,
    token: process.env.SANITY_TOKEN,
    graphqlTag: 'default',
    watchMode: true,
  },
},
Note: watchMode is set to false by default.
If that's not your case and you want to build a static site that shows "live" content as soon as an article is published, you will need to configure webhooks between your CMS (where the articles are written) and the server.
Basically, a webhook is an action that triggers/notifies another action on the server. For example, as soon as an article is published in the CMS, the webhook will notify the server to run a deployment to publish the article. Of course, this action will take some time, depending on your project's size and code optimization, from tens of seconds to over 15 minutes (in the worst cases).
The webhook implementation will depend on the CMS and the server, but more or less all of them are quite similar.
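For example (purely as an illustration), if your site is hosted on Netlify, the webhook on the CMS side boils down to a POST request against a build hook URL (the hook ID below is a placeholder):

curl -X POST -d '{}' https://api.netlify.com/build_hooks/YOUR_BUILD_HOOK_ID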
Keep in mind that you are serving static content that needs to be compiled, bundled, and built; that's why Gatsby (and indeed all static site generators) is so fast, so you can't bypass this process in a final client site to show "live" articles using this technology.
Related
Long time reader. First time poster. I've been using a Django site in a Windows-only house for about 2 years. We're running scripts on Cisco network equipment with Paramiko. These tasks are run on multiple devices at times and are time-consuming (anywhere from 20 seconds to 3 minutes). I want to submit the script for processing and provide the user with live updates while the script runs. My research says that JavaScript can do this with Ajax by not requiring a page reload. I'd want to attempt it with Alpine.js due to its perceived simplicity. My background is in simpler static HTML/CSS type sites.
Has anyone tried the Django/Alpine combo at this time? I know Channels/Redis/Celery are popular, but these async requests are onesy twosy affairs, and I feel that a full task queue manager is complete overkill, while not being compatible with Windows for me at this time.
I'm not looking for specific code to fix my problem, more direction on how "best" to handle this issue of the user sitting and waiting on a script.
Alpine.js could work for your purposes. It's designed to play nicely with data fetching APIs and libraries such as the fetch Web API and JavaScript HTTP clients.
It's quite easy to adopt into your workflow since you can just include an HTML script tag and start writing Alpine.js components, so it would be a good fit for a Django or other server-rendered type of application, where this is one of the few places you need a bit of client-side interactivity.
As for the queuing/job tracking, all you really need is to keep track of script runs/jobs somehow and have an HTTP API endpoint that exposes this information to your frontend (i.e. your Alpine.js widget) through an HTTP call.
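As a rough sketch (untested; the /jobs/42/status/ endpoint and its JSON shape are assumptions about your Django side), an Alpine.js widget that polls such an endpoint could look like this:

<!-- Poll the job status every 2 seconds until the script run finishes. -->
<div x-data="{ status: 'pending', output: '', timer: null }"
     x-init="timer = setInterval(async () => {
         const res = await fetch('/jobs/42/status/');
         const data = await res.json();
         status = data.status;
         output = data.output;
         if (status === 'done') clearInterval(timer);
       }, 2000)">
  <p>Job status: <span x-text="status"></span></p>
  <pre x-text="output"></pre>
</div>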
New to k6, working with a web application that presents a spinner briefly on the home page while css and js files load.
Once the files are loaded and scripts are available, a login form is added (replacing the spinner).
With k6, is there a way to wait until a specific element (the login form) is present in the body before continuing with the next step (i.e. populating the username and password and submitting the form to log in)?
Currently, when I review the response body, I see the spinner element only. Adding a delay does not appear to affect the body returned, even though the login form should, in theory, have been added to the page.
If the element is added to the body after the initial page load, will it be detected by k6 and made available in the response?
Thanks for your help.
Bill
k6 doesn't work like a browser - the load tests are written in JavaScript, but when you request an HTML file, the JavaScript in that file isn't executed. It usually can't be executed even with eval() or something like that, since k6 doesn't have a DOM or any of the usual browser APIs. So you have to explicitly specify any HTTP requests you want your k6 scripts to make, and in your case I assume that the spinner and login form are generated by JavaScript somewhere in the home page.
To simplify working with such highly dynamic websites when you use k6, you can use the site normally in your browser, record the browser session as a .har file and export it, and then convert that .har file to a k6 script with the k6 convert command like this: k6 convert session.har -O k6_script.js. You can find more information about the whole process here.
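If you'd rather write the requests by hand, a minimal script could look something like this (the /login URL and form field names are assumptions about your app):

import http from 'k6/http';
import { check, sleep } from 'k6';

export default function () {
  // Fetch the home page; k6 gets the spinner markup but never runs its JS.
  http.get('https://example.com/');

  // Submit the login form directly to the endpoint the browser would call.
  const res = http.post('https://example.com/login', {
    username: 'testuser',
    password: 'secret',
  });

  check(res, { 'logged in': (r) => r.status === 200 });
  sleep(1);
}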
k6 doesn't execute client-side code, nor does it render anything. It makes requests against the target system, putting load on it. This makes it efficient at making a large number of requests, but it creates new things that must be solved in certain cases.
Capturing all the requests necessary - typically, using k6 convert to convert a HAR file works well to give you the foundation of a script. I suggest using the other conversion options to limit any third-party requests, e.g. --only or --skip. More info here: https://support.loadimpact.com/4.0/how-to-tutorials/how-to-convert-har-to-k6-test/
Since you recorded your browser session, if your application/site uses anything to prevent CSRF attacks, you must handle those values/correlate them. E.g. .NET sites use VIEWSTATE: if you were testing a .NET app, you would need to instruct the VUs to extract the VIEWSTATE from the response body and reuse it in any requests that require it (see the sketch after the next point).
In a similar vein to point 2, if you are submitting a form, you probably don't want to use the same details over and over again. That typically just tests how well your system can cache, or results in failing requests (if you are logging in and your system doesn't support concurrent logins for the same user, as one example). k6 is able to use CSV or JSON data as a source for data parameterization. You can also generate some of this inline if it's not too complex. Some examples are here: https://docs.k6.io/docs/open-filepath-mode
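A rough sketch combining points 2 and 3 (the URLs, field names, and users.json fixture are all assumptions):

import http from 'k6/http';

// Init context: load test users from a JSON fixture,
// e.g. [{ "user": "u1", "pass": "p1" }, ...]
const users = JSON.parse(open('./users.json'));

export default function () {
  // Vary the submitted credentials per iteration (point 3).
  const u = users[Math.floor(Math.random() * users.length)];

  // Fetch the form page and extract the anti-CSRF token,
  // e.g. __VIEWSTATE on a .NET site (point 2).
  const page = http.get('https://example.com/login.aspx');
  const viewstate = page.html().find('input[name="__VIEWSTATE"]').attr('value');

  // Reuse the extracted token in the request that requires it.
  http.post('https://example.com/login.aspx', {
    __VIEWSTATE: viewstate,
    username: u.user,
    password: u.pass,
  });
}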
I am a young developer, and I work on the development of a site whose content is stored on Contentful. Currently, on each page reload, the JavaScript retrieves the content from Contentful via the API.
The content of the site is not likely to change often, so I would like to cache it.
The site is hosted on Netlify.
So I thought I could fetch the content from Contentful during the Node build and store it in a "cache" that the JavaScript could use when loading the page. And when something is modified on Contentful, a webhook would trigger a rebuild on Netlify.
I do not know if my thinking is the right one; thank you for your help and your answers.
Contentful actually has caching built into its service so you shouldn't need to do anything to get the benefits of caching on your website. Quoting from the Contentful Docs:
There are no limits enforced on requests that hit our CDN cache, i.e. the request doesn't count towards your rate limit and you can make an unlimited amount of cache hits. For requests that do hit the Contentful Delivery API, rate limits of 78 requests per second and 280800 requests per hour are enforced by default. Higher rate limits may apply depending on your current plan.
See https://www.contentful.com/developers/docs/references/content-delivery-api/#/introduction/api-rate-limits for full details
If you want to do additional caching on top of the Contentful API, you could use a Node library that'll do it for you. Something like APICache would work pretty well in this use case.
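For instance, a minimal Express setup with apicache in front of Contentful might look like this (a sketch only; the space ID, /articles endpoint, and content type are placeholders):

const express = require('express');
const apicache = require('apicache');
const contentful = require('contentful');

const app = express();
const cache = apicache.middleware;
const client = contentful.createClient({
  space: 'YOUR_SPACE_ID',
  accessToken: process.env.CONTENTFUL_TOKEN,
});

// Serve /articles from an in-memory cache for 10 minutes at a time,
// so repeat page loads don't hit the Contentful API at all.
app.get('/articles', cache('10 minutes'), async (req, res) => {
  const entries = await client.getEntries({ content_type: 'article' });
  res.json(entries.items);
});

app.listen(3000);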
If rebuilding the site when new content is published, rather than rendering it on page view, is important to you, I'd encourage you to take a look at static sites. Contentful has some great webhook support that you can use together with Netlify to help rebuild your site any time an author pushes new content. Check out this tutorial about using Gatsby for more details - https://www.contentful.com/blog/2018/02/28/contentful-gatsby-video-tutorials/
It seems better to cache the pages separately (instead of caching the whole site) and use a cron job to compare the cache of each page (maybe weekly) against the current version. If it is different, regenerate the cache for that page. You might also want to trigger that manually, possibly on deploys or in the rare event that there is a change on a given page.
Anyway, before you start doing all this caching work, you should check whether your site is anywhere near being overwhelmed by requests. If not, then caching can be postponed until later, which would be wise: if your site's nature changes over time and changes start occurring often, you might need a different cache, or even no cache at all.
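A sketch of that cron idea (every name here is hypothetical, and it assumes Node 18+ for the global fetch): hash each page's current content, compare it with the stored hash, and regenerate only on a mismatch:

const crypto = require('crypto');
const fs = require('fs');

const hash = (s) => crypto.createHash('sha256').update(s).digest('hex');

async function checkPage(url, hashFile) {
  const body = await (await fetch(url)).text();
  const fresh = hash(body);
  const stale = fs.existsSync(hashFile) ? fs.readFileSync(hashFile, 'utf8') : '';
  if (fresh !== stale) {
    fs.writeFileSync(hashFile, fresh);
    // regenerateCache(url); // hypothetical hook into your build/caching step
  }
}

checkPage('https://example.com/about', './about.hash');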
Ancient website:
User navigates to a url via the address bar or an href; a server call is made for that particular page
The page is returned (either static HTML or HTML rendered on the server by ASP.NET MVC, etc.)
EVERY page reloads everything, which is slow - a reason to go to an SPA
Angular 2 SPA:
User navigates to a url via the address bar or router
A server call is made for the component's html/javascript
ONLY the stuff within the router outlet is loaded, not the navbar, etc (main advantage of SPAs)
HOWEVER, HTML is not actually received from the server as is - Angular 2 code/markup is - and then this markup is processed on the CLIENT before it can be displayed as the plain HTML the browser can understand - SLOW? Enter Angular Universal?
Angular Universal:
First time users of your application will instantly see a server rendered view which greatly improves perceived performance and the overall user experience.
So, in short:
User navigates to url via search bar or router
Instead of returning Angular components, Angular Universal actually turns those components into html AND then sends them to the client. This is the ONLY advantage.
TLDR:
Is my understanding of what Angular Universal does correct? (last bullet point above).
And most importantly, assuming I understand what it does, how does it achieve this? My understanding is that IIS or whatever just returns requested resources, so how does Angular Universal pre-process them (edit: would I basically be running something akin to an API that returns processed HTML)?
This HAS to mean that the server makes all the initial API calls needed to display the initial view, for example from route resolves...correct?
Edit: Let's focus on this approach to narrow down the question:
The second approach is to dynamically re-render your application on a web server for each request. There are still several caching options available with this approach to improve scalability and performance, but you would be running your application code within the context of Angular Universal for each request.
The approach here:
The first option is to pre-render your application which means that you would use one of the Universal build tools (i.e. gulp, grunt, broccoli, webpack, etc.) to generate static HTML for all your routes at build time. Then you could deploy that static HTML to a CDN.
is beyond me, seeing as there is a bunch of dynamic content there, yet we pre-render static HTML.
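For concreteness, here is roughly what I imagine the per-request rendering setup looks like (a sketch based on @nguniversal/express-engine; the paths and module names are my assumptions):

const express = require('express');
const { ngExpressEngine } = require('@nguniversal/express-engine');
// The server bundle produced by the Universal build (assumed output path).
const { AppServerModule } = require('./dist/server/main');

const app = express();

// Register a view engine that bootstraps the Angular app on the server
// and renders the requested route to plain HTML.
app.engine('html', ngExpressEngine({ bootstrap: AppServerModule }));
app.set('view engine', 'html');
app.set('views', 'dist/browser');

// Static assets are returned as-is; every other URL is rendered per request,
// which is when route resolves and initial API calls run on the server.
app.get('*.*', express.static('dist/browser'));
app.get('*', (req, res) => res.render('index', { req }));

app.listen(4000);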
HTML5 appcache is a good way to cache resources on the client side and make them available when the user goes offline. I have been playing around with this technology for some time now, and it works pretty well so far with static resources like HTML, CSS, JS, images, etc.
Now I have come to a situation where I need to integrate some PHP functionality into my application. For the sake of simplicity, let's say my show_tasks.php does this:
<?php
// Print the list of task names from the database.
printMyTasks();
?>
So I have cached this page inside my myCache.appcache file like this:
CACHE MANIFEST
#v1.0
show_tasks.php
Now the issue is this.
When the user accesses this page for the first time (given he is online), the browser caches the html page along with the list of tasks that were there at that moment. But after that, even when he is online, it does not connect to the database and fetch the latest data. Instead it always shows the cached version from the first run.
I understand that in order to update the cache on the client side, a change must be made to the show_tasks.php file itself (not the data it manipulates), and the myCache.appcache file should also be updated. (In my case, a change to show_tasks.php won't happen.)
I am looking for a solution so that my application works in a way such that:
If the user is online, read the tasks from the database and show them. (Don't show the cached version.)
If the user is offline, show the cached version, but with the most recently updated data. (The list of tasks fetched the last time he accessed it online.)
I have looked into the Using appcache with php & database question too. But I am wondering if there is another way of achieving this. (Probably using some client-side functionality.) Obviously I am willing to use JavaScript/jQuery.
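For example, I imagine something along these lines with jQuery (untested; get_tasks.php returning the task list as JSON is a hypothetical endpoint):

function renderTasks(tasks) {
  var $list = $('#tasks').empty();
  $.each(tasks, function (_, task) {
    $list.append($('<li>').text(task.name));
  });
}

$(function () {
  if (navigator.onLine) {
    // Online: always fetch fresh data and remember it for offline use.
    $.getJSON('get_tasks.php', function (tasks) {
      localStorage.setItem('tasks', JSON.stringify(tasks));
      renderTasks(tasks);
    });
  } else {
    // Offline: fall back to whatever was fetched on the last online visit.
    renderTasks(JSON.parse(localStorage.getItem('tasks') || '[]'));
  }
});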
What will be a good approach to handle this scenario?