I am a young developer working on a site whose content is stored on Contentful. Currently, on each page load, the JavaScript retrieves the content from Contentful via the API.
The content of the site is not likely to change often, so I would like to cache it.
The site is hosted on Netlify.
So I thought I could fetch the content from Contentful during the Node build, store it in a "cache" that the JavaScript could use when loading the page. And whenever content is modified on Contentful, a webhook would trigger a rebuild on Netlify.
I do not know if my reasoning is sound; thank you for your help and your answers.
Contentful actually has caching built into its service so you shouldn't need to do anything to get the benefits of caching on your website. Quoting from the Contentful Docs:
There are no limits enforced on requests that hit our CDN cache, i.e. the request doesn't count towards your rate limit and you can make an unlimited amount of cache hits. For requests that do hit the Contentful Delivery API, rate limits of 78 requests per second and 280800 requests per hour are enforced by default. Higher rate limits may apply depending on your current plan.
See https://www.contentful.com/developers/docs/references/content-delivery-api/#/introduction/api-rate-limits for full details
If you want to do additional caching on top of the Contentful API, you could use a Node library that handles it for you. Something like apicache would work pretty well in this use case.
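To make that concrete, here's a minimal sketch, assuming an Express server sits between the page and Contentful; the route, the TTL, and the fetchContentFromContentful helper are illustrative assumptions, not part of either library:

const express = require('express');
const apicache = require('apicache');

const app = express();

// Cache successful responses from this route for 10 minutes, so repeated
// page loads are served from memory instead of calling Contentful again.
app.get('/api/content', apicache.middleware('10 minutes'), (req, res) => {
  fetchContentFromContentful() // hypothetical helper that calls the Delivery API
    .then((content) => res.json(content));
});

app.listen(3000);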
If rebuilding the site when new content is published, rather than rendering it on page view, is important to you, I'd encourage you to take a look at static site generators. Contentful has great webhook support that you can use together with Netlify to rebuild your site any time an author pushes new content. Check out this tutorial about using Gatsby for more details - https://www.contentful.com/blog/2018/02/28/contentful-gatsby-video-tutorials/
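If you go the build-time route you describe, here's a minimal sketch of the idea, assuming the official contentful SDK (the file names and environment variable names are illustrative):

// build-content.js: run during the Netlify build, before bundling
const fs = require('fs');
const contentful = require('contentful');

const client = contentful.createClient({
  space: process.env.CONTENTFUL_SPACE_ID,
  accessToken: process.env.CONTENTFUL_DELIVERY_TOKEN,
});

client.getEntries().then((entries) => {
  // Snapshot the content next to the static assets so the client-side
  // JavaScript can fetch this file instead of hitting the API on page load.
  fs.writeFileSync('public/content.json', JSON.stringify(entries.items));
});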
It seems better to cache the pages separately (instead of caching the whole site) and use a cron job to compare each page's cache (maybe weekly) against the current version. If it differs, regenerate the cache for that page. You might also want to trigger that manually, possibly on deploys or in the rare event of a change on a given page.
Anyway, before you start on all this caching work, you should check whether your site is anywhere near being overwhelmed by requests. If not, caching can be postponed, which would be wise: if your site's nature changes over time and content starts changing often, you might need a different cache, or even no cache at all.
Related
I'm building an editorial site that regularly adds new content. As such, they've requested that articles and article content be loaded in at load time rather than having to run a build and wait for it. Speed is quite important to them, and even after spending some time optimizing the build, they need it quicker.
I have the articles pulling in fine using the sanity client, but when you click on an article it's looking for the-slug-of-the-article.js rather than some kind of article page. If I could have it load something like article.js I could use the slug param from the URL to grab the page content, but I'm not sure how to actually load the page instead of the-slug.js without changing the URL structure.
To make things a bit more confusing, there's actually a category in here too, so the URL looks like: www.example.com/category/the-slug-of-the-article
Short answer: you can't build dynamic pages at "load" time using a static site generator.
TL;DR
I assume that you are generating a final client-facing site. Otherwise, you can achieve what you are trying to do using an AWS EC2 instance (or similar in any other service) that runs gatsby develop. Gatsby offers a feature to refresh the content automatically by sending a curl request to its own domain with the __refresh URL parameter. By its nature, of course, this is only available in gatsby develop:
curl -X POST http://localhost:8000/__refresh
To do so, you will need to enable the ENABLE_GATSBY_REFRESH_ENDPOINT environment variable, so you will need to adapt your run commands.
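For example, on a Unix-like shell the develop command might become (adapt the syntax for Windows):

ENABLE_GATSBY_REFRESH_ENDPOINT=true gatsby develop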
Of course, gatsby develop won't produce the refined, "final" static site, with all the SEO, code improvements and asset optimization that gatsby build provides, so this approach may only work for testing purposes.
In your case, Sanity provides a watcher that makes this easier for you. You only need to enable the watchMode flag, like:
// in gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-sanity`,
      options: {
        projectId: `abc123`,
        dataset: `blog`,
        token: process.env.SANITY_TOKEN,
        graphqlTag: 'default',
        watchMode: true, // watch for content changes while the server runs
      },
    },
  ],
}
Note: by default it is set to false.
If that's not your case and you want to build a static site that shows "live" content as soon as an article is published, you will need to configure webhooks between your CMS (where the articles are written) and the server.
Basically, a webhook is a notification that triggers another action on the server. For example, as soon as an article is published in the CMS, the webhook notifies the server to run a deployment that publishes the article. Of course, this action takes some time, depending on your project's size and code optimization: from tens of seconds up to 15+ minutes in the worst cases.
The webhook implementation will depend on the CMS and the server, but more or less all of them are quite similar.
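For instance, with Netlify the server side reduces to a build hook URL that the CMS webhook POSTs to whenever an article is published; the hook ID below is a placeholder for the one Netlify generates for your site:

curl -X POST -d '{}' https://api.netlify.com/build_hooks/YOUR_HOOK_ID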
Keep in mind that you are serving static content that needs to be compiled, bundled, and built; that's why Gatsby (and indeed every static site generator) is so fast, and why you can't bypass this process in a final client-facing site to show "live" articles using this technology.
I'm working on a purchase page for a new product that the inventor is expecting to receive significant media coverage for (time will tell...). We're building a simple one-page product page using HTML and CSS, with Stripe's hosted checkout pages.
We're trying to minimise the amount of back-end logic needed, as back-end responses are less able to be cached by CloudFlare.
We do however need to show the product pricing in different currencies depending on whether the visitor is in the UK (GBP), Europe (EUR) or US/Rest of World (USD).
CloudFlare will pass a header (HTTP_CF_IPCOUNTRY) with a country code to our upstream webserver (when turned on) - but this won't always be available if we're aiming to cache the entire page.
Any clever ideas?
I'm thinking an ajax call to a geo-location service perhaps?
It seems like you will need something dynamic to achieve this. If you can offload it to a geolocation service with JavaScript, as you mentioned, that would probably be best.
If you end up having to build the dynamic component yourself, I think you would make the fewest dynamic requests with a page that redirects to a static page for each region based on the user's location. A page like this would be easy to build with Cloudflare Workers and cheap ($5/month + $0.50 per million requests). You should probably give users the option to override their region manually, given the general inaccuracy of IP geolocation.
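As a sketch of that Workers idea (the country-to-region mapping and target URLs are illustrative assumptions; request.cf.country is populated by Cloudflare):

const EU_COUNTRIES = ['AT', 'BE', 'DE', 'ES', 'FR', 'IE', 'IT', 'NL', 'PT']; // abbreviated list

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const country = request.cf && request.cf.country;
  let region = 'usd'; // default: US / rest of world
  if (country === 'GB') {
    region = 'gbp';
  } else if (EU_COUNTRIES.includes(country)) {
    region = 'eur';
  }
  // Redirect to a fully static, cacheable page for that region.
  return Response.redirect(`https://www.example.com/${region}/`, 302);
}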
There is an option to make it completely static by asking the user for their location via the browser and then mapping that to a currency/region with a JavaScript function, but users will likely reject your location request en masse, and I wouldn't consider it a truly usable option.
Long time reader. First time poster. I've been running a Django site in a Windows-only house for about 2 years. We're running scripts on Cisco network equipment with Paramiko. These tasks sometimes run on multiple devices and are time consuming (anywhere from 20 seconds to 3 minutes). I want to submit the script for processing and give the user live updates while the script runs. My research says that JavaScript can do this with Ajax, without requiring a page reload. I'd like to attempt it with Alpine.js due to its perceived simplicity. My background is in simpler static HTML/CSS-type sites.
Has anyone tried the Django/Alpine combo? I know Channels/Redis/Celery are popular, but these async requests are onesy-twosy affairs, and I feel that a full task queue manager is complete overkill, besides not being compatible with Windows for me at this time.
I'm not looking for specific code to fix my problem, more direction on how "best" to handle this issue of the user sitting and waiting on a script.
Alpine.js could work for your purposes. It's designed to play nicely with data fetching APIs and libraries such as the fetch Web API and JavaScript HTTP clients.
It's quite easy to adopt into your workflow, since you can just include an HTML script tag and start writing Alpine.js components. That makes it a good fit for a Django or other server-rendered application where this is one of the few places you need a bit of client-side interactivity.
As for the queuing/job tracking, all you really need is to keep track of script runs/jobs somehow and expose that information to your frontend (i.e. your Alpine.js widget) through an HTTP API endpoint.
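As a sketch of what the Alpine.js side might look like (the /api/jobs/42/ endpoint and its JSON shape are assumptions about your Django API, not Alpine features):

<div
  x-data="{ status: 'queued', output: '' }"
  x-init="setInterval(async () => {
    const res = await fetch('/api/jobs/42/');
    const job = await res.json();
    status = job.status;
    output = job.output;
  }, 2000)"
>
  <p>Status: <span x-text="status"></span></p>
  <pre x-text="output"></pre>
</div>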
The app
I have a web app that currently uses AppCache for offline functionality since users of the system need to create documents offline. The document is first created offline and when internet access is available, the user can click "sync" which will send the document to the server and save it as a revision. To be more specific, the app does not save the change delta as a revision (the exact field modified) but rather the whole document in its entirety. So in other words, a "snapshot" document is saved.
The problem
Users can log in from different browsers and devices and work on their documents. When they click "sync", if the server's document is newer, the client's version is overridden by the server's in its entirety, so edits made offline on one device can be silently lost.
This scenario occurs because the current implementation does not rely on deltas (small changes) but rather on snapshot revisions.
Some questions
1) My research indicates that I should be upgrading the "sync" mechanism to be expressed in deltas (small changes that can be applied independently). Is this a sound approach?
2) Should each delta be applied independently?
3) According to my research, revision deltas have a numeric value and not a timestamp. What should the value for this be exactly? How would I ensure both the server and the client agree on what the revision number should be?
Stack information
Angular on the frontend
IndexedDB to save documents locally (offline mode)
Postgres DB with JSONB in the backend
What you're describing is a version control issue, like in this question. The choice of how to resolve it is yours. Here are a few examples of other products with this problem:
Google docs: A makes edit offline, B makes edit online, A goes online, Sync, Google Docs combines A and B's edits
Apple notes: Same as Google Docs
Git/Subversion: Throw an error, ask user to resolve conflicts
Wunderlist: Last edit overwrites previous
For your case, the simplest solution is to use Wunderlist's approach, but it seems that may cause a usability issue. What do your users expect to happen?
Answering your questions directly:
A custom sync implementation is necessary if you don't want overwrites.
This is a usability decision, what does the user expect?
True, revisions are numeric (e.g. r1, r2). To get server agreement, alter the return value of the last sync request. You can return the entire model to the client each time (or just a 200 OK if a normal sync happened). If a model is returned to the client, update the client with the latest model.
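A sketch of the client side of that agreement, assuming a hypothetical POST /api/documents/:id/sync endpoint and a { revision, data } document shape:

async function syncDocument(doc) {
  const res = await fetch(`/api/documents/${doc.id}/sync`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ revision: doc.revision, data: doc.data }),
  });

  if (res.status === 200) {
    // Normal sync: the server accepted our snapshot as the next revision.
    doc.revision += 1;
  } else {
    // The server had a newer revision, so it returns the full latest model
    // and the client replaces its local copy (the server is the source of truth).
    const latest = await res.json();
    doc.revision = latest.revision;
    doc.data = latest.data;
  }
  return doc;
}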
In any case, the server should always be the source of truth. This post provides some good advice on server/mobile referential integrity:
To track inserts you need a Created timestamp ... To track updates you need to track a LastUpdate timestamp on your rows ... To track deletes you need a tombstone table.
Note that when you do a sync, you need to check the time offset between the server and the mobile device, and you need to have a method for resolving conflicts. Inserts are no big deal (they shouldn't conflict), but updates could conflict, and a delete could conflict with an update.
I am creating a complex social networking website that is all one single page that never refreshes unless a user presses the refresh button on the browser.
The issue here is that when I edit files and upload them to the server they don't take effect unless the user refreshes the browser.
How would I go about fixing this problem? Should I refresh the browser on a timed interval? Or should I poll the server every 10 minutes to check whether the browser should refresh?
Any suggestions?
Server
I would communicate the version number through whatever means you're already using for data transfer. Presumably that's some kind of API, but it may be sockets or whatever else.
Whatever the case, I would recommend that with each response - a tidy way is in the header, as suggested in comments by Kevin B - you transmit the current application version.
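A sketch of the server half, assuming an Express-style API and a version read from package.json (the header name is just a convention, not a standard):

const express = require('express');
const { version } = require('./package.json');

const app = express();

app.use((req, res, next) => {
  // Attach the current application version to every response.
  res.set('X-App-Version', version);
  next();
});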
Client
It is then up to the client to handle changes to the version number supplied. It will know from initial load and more recent requests what the version number has been up until this point. You might want to consider different behaviour depending on what the change in version is.
For example, if it is a patch number change, you might want to present to the user the option of reloading, like Outlook.com does. A feature change might do the same with a different message advertising the fact that new functionality is available, and a major version change may just disable the site and tell the user to reload to regain access.
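A sketch of the client half under the same assumptions, where APP_VERSION is baked into the bundle at build time and the two prompt helpers are hypothetical UI functions:

async function fetchWithVersionCheck(url, options) {
  const res = await fetch(url, options);
  const served = res.headers.get('X-App-Version');

  if (served && served !== APP_VERSION) {
    const servedMajor = served.split('.')[0];
    const currentMajor = APP_VERSION.split('.')[0];
    if (servedMajor !== currentMajor) {
      showBlockingReloadPrompt(); // major change: require a reload to regain access
    } else {
      showReloadBanner(); // minor/patch change: offer a reload, Outlook.com-style
    }
  }
  return res;
}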
You'll notice that I've skated around automatic reloading. This is not so much a technical issue as a UX one. Having a SPA reload with no warning (which may well result in data loss) is not great, and I'd advise against it, especially for patch version changes.
Edit
Of course, if you're not using any kind of API or other means of dynamically communicating with the server, you will have to resort to polling an endpoint that gives you a version, and then handle it on the client in the same way. Polling isn't super tidy, but it's certainly better - in my strong opinion - than reloading on a timer on the off chance that the application has updated in the interim.
Are you talking about changing the client-side code of the app, or the content? You can have the client call the server for updated content using AJAX requests; one possibility would be to do so whenever the user changes state in the app or opens a page that loads a particular controller. If you are talking about changing the HTML or JavaScript, I believe the user would need to reload to get those updates.