Programmatically create Gatsby pages from API data - javascript

This is a similar question to this.
I'm looking for help programmatically creating pages using createPage or createPages and am having trouble - the docs give an example for creating pages from markdown files but not much explanation.
I am using a plugin in plugins\characters\gatsby-node.js to add data from the Rick & Morty API to the GraphQL data layer. My plugin is at the bottom of the question in case it is relevant.
The plugin does add the data successfully as I can see the data in http://localhost:8000/___graphql, and I have successfully managed to use the data in (static) pages.
Where I am lost: I would like to create a page for each individual character, using the URL characters/<characterIdHere> for each page. I am aware that I need to add some logic to my gatsby-node.js file (the main one, or the plugin's version of....?), but this is the part I am stuck on: I do not know what to put into gatsby-node.js. The examples I can find all use JSON or markdown files, whereas I would like to use data that I have pulled into Gatsby's data layer from an API. I have researched this for a few hours and played around with it before asking, but have not had any luck.
The component on the pages I would like to create should look something like this:
const CharactersViewSingle = ({ character }) => {
  return (
    <div>
      <Helmet>
        <title>{character.name && character.name}</title>
      </Helmet>
      <NavBar />
      <CharactersViewBox character={character} width={300} height={520} />
    </div>
  )
}
The above code is taken from what the component returned when I was using create-react-app.
The GraphQL query I use to get the data on other (static) pages (which reflects the structure of the data I would like to use) looks like this:
export const query = graphql`
  query CharactersQuery {
    allCharacters(limit: 5) {
      edges {
        node {
          id
          name
          status
          gender
          image
        }
      }
    }
  }
`
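For reference, a minimal sketch of how a (static) page component might consume that query's result through the data prop; the markup is just illustrative:
import React from "react"
import { graphql } from "gatsby"

// The exported `query` above lives in the same file; Gatsby injects its result as `data`.
const CharactersPage = ({ data }) => (
  <div>
    {data.allCharacters.edges.map(({ node }) => (
      <div key={node.id}>{node.name}</div>
    ))}
  </div>
)

export default CharactersPage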
Plugin code:
const axios = require("axios")

exports.sourceNodes = async ({
  actions,
  createNodeId,
  createContentDigest,
}) => {
  const { createNode } = actions
  const integerList = (start, length) =>
    Array.from({ length: length }, (v, k) => k + start)
  const rickMortyURL = `https://rickandmortyapi.com/api/character/${integerList(
    1,
    493
  )}`
  const rickMorty = await axios.get(rickMortyURL)
  rickMorty.data.forEach(character => {
    const nodeContent = JSON.stringify(character)
    const nodeMeta = {
      id: character.id.toString(),
      //id: createNodeId(`char-data-${character.id}`),
      parent: null,
      children: [],
      internal: {
        type: `Characters`,
        content: nodeContent,
        contentDigest: createContentDigest(character),
      },
    }
    const node = Object.assign({}, character, nodeMeta)
    createNode(node)
  })
}

Gatsby's createPages API is what you might be looking for.
I used it to create multiple pages like blog1, blog2, blog3 etc...
In the same way, you can create multiple pages for your characters.
Since you mentioned you have a GraphQL call to get your characters, using:
const pages = await graphql(`
  query CharactersQuery {
    allCharacters(limit: 5) {
      edges {
        node {
          id
          name
          status
          gender
          image
        }
      }
    }
  }
`)
The above GraphQL call returns its results in pages.data.allCharacters.edges.
Now you can iterate over them with forEach and call createPage to create the pages.
Below is the complete mock code you might need to add to your gatsby-node.js file:
const path = require('path');

exports.createPages = async ({ graphql, actions }) => {
  const { createPage } = actions
  const templateOfYourCharacterPage = path.resolve(`src/templates/exampleTemplateFile.jsx`)
  const pages = await graphql(`
    query CharactersQuery {
      allCharacters(limit: 5) { # drop the limit to create a page for every character
        edges {
          node {
            id
            name
            status
            gender
            image
          }
        }
      }
    }
  `)
  const characters = pages.data.allCharacters.edges
  characters.forEach(edge => {
    createPage({
      path: `/characters/${edge.node.id}`, // gives you the characters/<characterId> URLs you want
      component: templateOfYourCharacterPage,
      context: { id: edge.node.id, name: edge.node.name } // This is passed to your component as pageContext (and as query variables).
    })
  })
}
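For completeness, here is a minimal sketch of what the template file referenced above might look like (src/templates/exampleTemplateFile.jsx is just the placeholder name used here). The id passed through context is available both as pageContext and as a GraphQL query variable, so the template can query the single character; NavBar, CharactersViewBox and Helmet are assumed from the question's snippet, and the characters field name assumes Gatsby derives it from the Characters node type created by the plugin:
import React from "react"
import { graphql } from "gatsby"
import { Helmet } from "react-helmet"
// NavBar and CharactersViewBox are assumed to be the components from the question.

const CharacterTemplate = ({ data }) => {
  const character = data.characters
  return (
    <div>
      <Helmet>
        <title>{character.name}</title>
      </Helmet>
      <NavBar />
      <CharactersViewBox character={character} width={300} height={520} />
    </div>
  )
}

export default CharacterTemplate

// $id is filled from the context passed to createPage.
export const query = graphql`
  query CharacterById($id: String!) {
    characters(id: { eq: $id }) {
      id
      name
      status
      gender
      image
    }
  }
`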

Related

Problem with React Query's Infinite Query, using Edamam API

I currently have some issues trying to add the infinite query feature to a recipes app I'm working on using the Edamam API.
All the examples I have found (even in React Query's documentation) implement infinite scroll using a page/cursor number system. I understand this is the ideal way, but the Edamam API doesn't handle paginated queries that way.
Instead, the API returns the following structure for each recipe query (let's assume we are searching for "chicken"; this would be the JSON structure):
{
  from: 1,
  to: 20,
  count: 10000,
  _links: {
    next: {
      href: "https://api.edamam.com/api/recipes/v2?q=chicken&app_key=APIKEYc&_cont=CHcVQBtNNQphDmgVQntAEX4BYldtBAAGRmxGC2ERYVJ2BwoVX3cVBWQSY1EhBQcGEmNHVmMTYFEgDQQCFTNJBGQUMQZxVhFqX3cWQT1OcV9xBB8VADQWVhFCPwoxXVZEITQeVDcBaR4-SQ%3D%3D&type=public&app_id=APPID",
      title: "Next Page"
    }
  },
  hits: [{ ... }] // This is where the actual recipes are
}
As you can see, there is no numbering system for paginated queries; instead, it's a whole URL, and it's giving me a hard time since I'm also new to React Query.
I tried the following, but it just fetches the same data over and over again as I reach the bottom of the page:
const getRecipes = async ({ pageParam }) => {
  try {
    const path = pageParam
      ? pageParam
      : `https://api.edamam.com/api/recipes/v2?q=${query}&app_id=${process.env.REACT_APP_APP_ID}&app_key=${process.env.REACT_APP_API_KEY}&type=public`;
    const response = await axios.get(path);
    return response.data;
  } catch (error) {
    console.log(error);
  }
};
const { ref, inView } = useInView();

useEffect(() => {
  inView && fetchNextPage();
}, [inView]);

const {
  data,
  isFetching,
  isFetchingNextPage,
  error,
  status,
  hasNextPage,
  fetchNextPage,
} = useInfiniteQuery(
  ["recipes", query],
  ({ pageParam = "" }) => getRecipes(pageParam),
  {
    getNextPageParam: (lastPage) => lastPage._links.next.href,
  }
);
Since the next page param is a whole URL, I just say that IF there is a pageParam, then use that URL for the request, if not, then do a normal request using the query value the user is searching for.
Please help!
Since the next page param is a whole URL, I just say that IF there is a pageParam, then use that URL for the request, if not, then do a normal request using the query value the user is searching for.
I'd say that this is the correct approach. The only code issue I can see in your example is that you destructure pageParam and then pass the pageParam string to getRecipes:
({ pageParam = "" }) => getRecipes(pageParam),
but in getRecipes, you expect an object to come in (which you again destructure):
const getRecipes = async ({ pageParam }) => {
You can fix that by either changing the call site or the function signature, and then it should work.
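A minimal sketch of the call-site fix, keeping getRecipes as written; the optional chaining in getNextPageParam is an extra guard so that a page without a next link makes hasNextPage false (returning undefined is how React Query v3 signals the end of the list):
// Pass an object through, so the { pageParam } destructuring in getRecipes works:
useInfiniteQuery(
  ["recipes", query],
  ({ pageParam = "" }) => getRecipes({ pageParam }),
  {
    // If the last page has no next link, this returns undefined and hasNextPage becomes false.
    getNextPageParam: (lastPage) => lastPage?._links?.next?.href,
  }
);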

How do I save an object in IndexedDB?

I want to store my API data in the browser's IndexedDB. I would have tried localStorage, but it has a limit of 5MB and my JSON data is more than 7MB. I want to save it in IndexedDB for faster access. I want to save the whole data in JSON format, but I don't know how to set up the schema for the database. The fetched data is in testData:
const db = new Dexie("ReactDexie");

db.version(1).stores({
  test: "++id title" // Don't know how to set the schema here for my JSON object
})

db.open().catch((err) => {
  console.log(err.stack || err)
})

var transaction = db.transaction([testData], IDBTransaction.READ_WRITE);
var objstore = transaction.objectStore(testData);
for (var i = 0; i < testData.length; i++) {
  objstore.put(testData[i]);
}
Follow these steps for a good architecture and reusable components (a sample project is created here):
1) Create one file; let's just name it indexDB.js:
import Dexie from 'dexie';

const db = new Dexie('ReactDexie');

db.version(1).stores({
  testData: 'datakey'
});

export default db;
2) Now make one utility function that will store the API data (let's assume this is in a file utility.js):
import db from './indexDB.js';

export async function saveDataInIndexDB(data) {
  if (data) {
    if (db.testData) db.testData.clear();
    db.testData.add({ datakey: 'datakey', data }).then(() => {});
  }
}
3) A function for fetching the data from IndexedDB (also in the utility.js file):
export async function getDataFromIndexDB() {
  const testData = await db.testData
    .where('datakey')
    .equals('datakey')
    .toArray();
  if (testData && testData.length > 0) {
    return testData[0];
  }
  return null;
}
4) I am considering the following sample JSON data (suppose you are getting this data in App.js):
const sampleJSONdata = {
  type: 'articles',
  id: '1',
  attributes: {
    title: 'JSON:API paints my bikeshed!',
    body: 'The shortest article. Ever.',
    created: '2015-05-22T14:56:29.000Z',
    updated: '2015-05-22T14:56:28.000Z'
  }
};
5) Store and fetch the data as follows (you can call the utility.js functions in your App.js file):
saveDataInIndexDB(sampleJSONdata);

const getDataFromDB = async () => {
  let data = await getDataFromIndexDB();
  console.log('Data ', data);
};

getDataFromDB();
The sample project is created here; you can refer to it for further use. More about the schema and a useful Dexie-related article can be found here.
Note: please clear the site data first, as you might face multiple-version issues from your earlier attempts (in that case you can bump the schema version or add an extra version).
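If you do run into such a version conflict, a minimal sketch of what bumping the schema version in indexDB.js might look like (same store as above; Dexie upgrades an existing database on open when it sees a higher version number):
// indexDB.js -- declare a higher version to let Dexie upgrade the existing database.
db.version(2).stores({
  testData: 'datakey'
});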

[Gatsby][GraphQL]: running a query after retrieving filter parameters from another query

Quick background:
I've got a listings project with around 40 cities and 16 regions that I'm targeting. I'm programmatically creating a search results page for each city: example.com/london, example.com/paris, etc. Then I need each city page to have a query that retrieves only the listings related to that city.
As of now, I'm querying the same listings on each search page and then filtering the results on the client, in the component. The problem with that solution is that each page's page-data.json loads thousands of listings that I don't need.
I don't expect the listings to exceed a few thousand, which is why I don't want to add Apollo to query directly from the client; I'd like all pages to be rendered at build time. Filtering of results and pagination will be done in the component, by filtering the array of results once the page loads.
The way I imagined that was:
Run a query to retrieve the list of cities.
For each city retrieved, run the actual page query with cityId as a filter parameter. For performance reasons, I'd like that to happen in gatsby-node.js rather than passing cityId into pageContext and then running a page query from page.js (which I couldn't make work either, for some reason).
Here's my gatsby-node.js
const path = require('path')

function slugify(str) {
  str = str.replace(/^\s+|\s+$/g, ''); // trim
  str = str.toLowerCase();

  // remove accents, swap ñ for n, etc.
  var from = "ãàáäâąęẽèéëêćìíïîõòóöôùúüûñńçłśżź·/_,:;";
  var to = "aaaaaaeeeeeeciiiiooooouuuunnclszz------";
  for (var i = 0, l = from.length; i < l; i++) {
    str = str.replace(new RegExp(from.charAt(i), 'g'), to.charAt(i));
  }

  str = str.replace(/[^a-z0-9 -]/g, '') // remove invalid chars
    .replace(/\s+/g, '-') // collapse whitespace and replace by -
    .replace(/-+/g, '-'); // collapse dashes

  return str;
};
exports.createPages = async ({ graphql, actions }) => {
  const { createPage } = actions;

  const listingQueryResults = await graphql(`
    query {
      allDatoCmsListing {
        nodes {
          company {
            cities {
              cityName
              region {
                regionName
              }
            }
            companyName
            address
            logo {
              fixed(imgixParams: {w: "128", h: "128", fit: "fillmax"}) {
                src
              }
            }
            #Companywide Terms
            insurancePolicy
            otherInsuranceTerms
            pricePerKm
            minAge
            deposit
            bookingPhoneNumber
            displayPhoneNumber
            bookingEmail
          }
          featuredInCountry
          monthlyPrice
          listingTitle
          pricesIncludeVat
          id
          originalId
          featuredImage {
            fluid(imgixParams: {fit: "crop", w: "800", h: "600", crop: "focalpoint"}) {
              aspectRatio
              base64
              height
              sizes
              src
              srcSet
              tracedSVG
              width
            }
            originalId
          }
          gallery {
            fluid {
              width
              tracedSVG
              srcSet
              src
              sizes
              height
              base64
              aspectRatio
            }
          }
          featuredInCity
          featuredInRegion
          listingDescription
          make {
            makeName
          }
          spec
          seats
          topSpeed
          transmission {
            transmissionType
          }
          weekendLimit
          weekendNoDepositPrice
          weekendPrice
          weeklyLimit
          weeklyNoDepositPrice
          weeklyPrice
          acceleration
          collectionDropoff
          color {
            colorName
            colorValue
          }
          dailyLimit
          dailyNoDepositPrice
          dailyPrice
          doors
          engine {
            engineType
          }
          engineSize
          horsepower
          monthlyLimit
          monthlyNoDepositPrice
          noDepositPricingAvailable
          #Listing Terms
          applyCompanywideTerms
          insurancePolicy
          otherInsuranceTerms
          pricePerKm
          minAge
          deposit
          listingApproved
        }
      }
    }
  `);
  const listingTemplate = path.resolve(`src/templates/listing.js`);

  listingQueryResults.data.allDatoCmsListing.nodes.forEach(node => {
    createPage({
      path: `/oferta/${node.originalId}-${slugify(node.listingTitle)}`,
      component: listingTemplate,
      context: {
        listing: node
      }
    });
  });

  const queryResults = await graphql(`
    query {
      allDatoCmsCity {
        nodes {
          cityName
          cityCase
          id
        }
      }
      allDatoCmsRegion {
        nodes {
          regionName
          regionCase
          id
        }
      }
    }
  `);

  const searchTemplate = path.resolve(`src/templates/search.js`);

  queryResults.data.allDatoCmsCity.nodes.forEach(node => {
    createPage({
      path: `/${slugify(node.cityName)}`,
      component: searchTemplate,
      context: {
        search: node,
      }
    });
  });

  queryResults.data.allDatoCmsRegion.nodes.forEach(node => {
    createPage({
      path: `/${slugify(node.regionName)}`,
      component: searchTemplate,
      context: {
        search: node
      }
    })
  })

  const emptySearch = {
    cityName: null,
    regionName: null
  }

  createPage({
    path: `/cala-polska`,
    component: searchTemplate,
    context: {
      search: emptySearch
    }
  })
};
I guess the more precise question is:
What's the best way to achieve the above? That is, to get all cities & regions,
then loop through the cities & regions and query each city & region separately, as opposed to running the exact same query and getting results for all cities/regions on every city/region page?
So I've spent some hours on this and found a solution that works wonders for me. I went from 5 page queries per second to hundreds, because I now query only once for all cities, regions and listings.
Then I wrote a filter function that's just a chain of filters:
const FilterResults = (arr, params) => (
  arr.filter(/* city filter - if the filter is null, return true and move on to the next one */)
    .filter(/* region filter - same as the city filter */)
)
That function returns an array of listings.
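A fleshed-out sketch of what that chain might look like. It assumes the listings query also fetches the city/region ids under company.cities (the query shown above only fetches the names, so either add the ids there or compare on names instead):
// Hypothetical implementation of the filter chain described above.
const FilterResults = (listings, params) =>
  listings
    .filter(listing =>
      // city filter: if no city is requested, let everything through
      params.city == null ||
      listing.company.cities.some(city => city.id === params.city)
    )
    .filter(listing =>
      // region filter: same idea as the city filter
      params.region == null ||
      listing.company.cities.some(city => city.region.id === params.region)
    );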
We loop the city results as follows:
query.allDatoCmsCity.nodes.forEach(node => {
  const params = {
    city: node.id
  }
  const results = FilterResults(query.allDatoCmsListing.nodes, params)

  // Then our typical Gatsby createPage
  createPage({
    path: `/${slugify(node.cityName)}`,
    component: searchTemplate,
    context: {
      search: node,
      listings: results
    }
  });
})
This way we query for all listings only once, instead of 56 times (because I was using a page query in the template, which would essentially be run for every page created with createPage).
Not only is this cleaner code, it's also much more performant. Hope I helped someone as much as I did myself ;)
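For completeness, a minimal sketch of how src/templates/search.js might consume what is passed through context; since the listings arrive as pageContext, the template needs no page query of its own (the markup is just illustrative):
import React from "react"

const SearchTemplate = ({ pageContext }) => {
  // listings defaults to an empty array for pages (like /cala-polska) that don't pass any.
  const { search, listings = [] } = pageContext
  return (
    <div>
      <h1>{search.cityName || search.regionName || "All locations"}</h1>
      {listings.map(listing => (
        <div key={listing.id}>{listing.listingTitle}</div>
      ))}
    </div>
  )
}

export default SearchTemplate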

Modify default find service in Strapi

I have 2 collection types in my Strapi setup: product and review, where a product has many reviews.
I want to add 2 new fields to the response of /products and /products/:id:
averageRating: number
totalReviews: number
I want to override the default find service to implement this, but I am unable to find the source code for strapi.query("product").find(params, populate) to override it.
If possible, I need this done in a single query rather than making multiple queries.
So far I have:
find(params, populate) {
  return strapi.query("product").model.query(db => {
    // I want the same query as when I run `strapi.query("product").find(params, populate)`
  });
},
But I am unsure of how to handle the params and populate in the exact same way that .find(params, populate) does.
After digging into the source code, I found a solution:
const { convertRestQueryParams, buildQuery } = require("strapi-utils");

function find(params, populate) {
  const model = strapi.query("product").model;
  const filters = convertRestQueryParams(params);
  const query = buildQuery({ model, filters });

  return model
    .query((qb) => {
      const totalReviewsQuery = strapi.connections
        .default("reviews")
        .count("*")
        .where("product", strapi.connections.default.ref("products.id"))
        .as("total_reviews");

      const averageRatingQuery = strapi.connections
        .default("reviews")
        .avg("rating")
        .where("product", strapi.connections.default.ref("products.id"))
        .as("average_rating");

      query(qb);
      qb.column("*", totalReviewsQuery, averageRatingQuery);
    })
    .fetchAll({
      withRelated: populate,
      publicationState: filters.publicationState,
    })
    .then((results) => results.toJSON());
}
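For context, in Strapi v3 a service override like this would typically live in the product's service file, where exporting find replaces the default implementation; a minimal sketch (the path follows Strapi v3's conventional layout):
// api/product/services/product.js
const { convertRestQueryParams, buildQuery } = require("strapi-utils");

module.exports = {
  find(params, populate) {
    // ...the implementation shown above...
  },
};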

How to clone a node to another path based on a reference value from the initial path on Google Cloud Functions?

I am trying to clone an "original" node's data (as soon as I create the data) to a path that is based on the original node's path.
This is my data structure:
root: {
  doors: {
    111111111111: {
      MACaddress: "111111111111",
      inRoom: "-LBMH_8KHf_N9CvLqhzU", // I will need this value for the clone's path
      ins: {
        // I am creating several "key: pair"s here, something like:
        1525104151100: true,
        1525104151183: true,
      }
    }
  },
  rooms: {
    -LBMH_8KHf_N9CvLqhzU: {
      ins: {
        // I want the function to clone the same data here:
        1525104151100: true,
        1525104151183: true,
      }
    }
  }
}
My cloud function is now like this:
exports.updateRoom = functions.database.ref('/doors/{MACaddress}/ins').onWrite((change, context) => {
  const beforeData = change.before.val(); // data before the write
  const afterData = change.after.val(); // data after the write
  const roomPushKey = change.before.ref.parent.child('/inRoom');
  console.log(roomPushKey); // this is retrieving all the info about the ref "inRoom" but not its value...
Question 1) How can I get to this ref/node's value?
My code goes on, trying to get the value like this:
  roomPushKey.once('child_added').then(function(dataSnapshot) {
    let snapVal = dataSnapshot.val();
    console.log(snapVal);
  });
Question 2 (which I think is basically question 1 rephrased): How can I get the snapVal outside the .then() method's scope?
  return change.after.ref.parent.parent.parent.child('/rooms')
    .child(snapVal).child('/ins').set(afterData);
    // snapVal should go above
});
Error message: ReferenceError: snapVal is not defined
The following should work.
const admin = require("firebase-admin");
....
....
exports.updateRoom = functions.database.ref('/doors/{MACaddress}').onWrite((change, context) => {
  const afterData = change.after.val(); // data after the write
  const roomPushKey = afterData.inRoom;
  const ins = afterData.ins;

  const updates = {};
  updates['/rooms/' + roomPushKey] = ins;

  return admin.database().ref().update(updates)
    .catch(error => {
      console.log(error);
      // + other error treatment if necessary
    });
});
Here are some explanations:
You get the roomPushKey by reading the "data after the write" as an object: roomPushKey = afterData.inRoom. You don't need to do roomPushKey.once('child_added').then()
Once you have the roomPushKey, you create a new child node in the rooms node by using update() and creating an object with square-bracket notation, which allows you to assign the id of the node (i.e. roomPushKey) as the key.
Note that you could also do:
return admin.database().ref('/rooms/' + roomPushKey).set(ins);
Note also that you have to import firebase-admin in order to be able to do return admin.database().ref()...
Finally, I would suggest that you have a look at the three following videos from the Firebase team: youtube.com/watch?v=7IkUgCLr5oA&t=517s, youtube.com/watch?v=652XeeKNHSk&t=27s and youtube.com/watch?v=d9GrysWH1Lc. They are a must-watch for anyone starting to write Cloud Functions.
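For reference, the .set() variant above as a complete function (a minimal sketch; the guard is added because onWrite also fires when the door node is deleted, and the catch stays on the write promise inside the handler):
exports.updateRoom = functions.database.ref('/doors/{MACaddress}').onWrite((change, context) => {
  const afterData = change.after.val();

  // Nothing to clone if the door node was deleted.
  if (!afterData) return null;

  // Same effect as the update() above: write the ins data under /rooms/<roomPushKey>.
  return admin.database().ref('/rooms/' + afterData.inRoom).set(afterData.ins)
    .catch(error => console.log(error));
});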
