I'm building an infinite scroller in Cycle.js. I have an AJAX service that returns X results per page. The first request uses a page id of 0, but subsequent requests need to use the page id returned in the first result set. The results are placed in the DOM, and when the user scrolls to the bottom of the visible results, a new set is loaded and concatenated to the list. I have something that works, but not as well as I would like.
The shortened version looks like:
const onScroll$ = sources.DOM.select('document').events('scroll')
  .map((e) => {
    return e.srcElement.scrollingElement.scrollTop;
  })
  .compose(debounce(1000));

const onHeight$ = sources.DOM.select('#items').elements()
  .map((e) => {
    return e[0].scrollHeight;
  });

const scroll$ = onScroll$.startWith(0);
const height$ = onHeight$.startWith(0);

const itemsXHR$ = sources.HTTP.select('items')
  .flatten();

const id$ = itemsXHR$
  .map(res => res.body.data.content.listPageId)
  .take(2)
  .startWith(0);

const getitems$ = xs.combine(scroll$, height$, id$)
  .map(([scrollTop, contentHeight, sid]) => {
    if (scrollTop > (contentHeight - window.innerHeight - 1)) {
      const ms = new Date().getTime();
      return {
        url: `data/${sid}?_dc=${ms}`,
        category: 'items',
        method: 'GET'
      };
    }
    return {};
  });

const items$ = itemsXHR$
  .fold((acc = [], t) => [...acc, ...t.body.data.content.items])
  .startWith(null);

const vdom$ = items$.map(items =>
  div('#items', [
    ul('.search-results', items == null ? [] : items.map(data =>
      li('.search-result', [
        a({ attrs: { href: `/${data.id}` } }, [
          img({ attrs: { src: data.url } })
        ])
      ])
    ))
  ])
);
The main issue is that the AJAX request is linked to scroll position, so it's possible to fire multiple requests during a single scroll.
At the moment I debounce the scroll, but that is not ideal here.
Any ideas on how to orchestrate the streams so that only one request is sent when needed (when the user is near the bottom of the page)?
I thought maybe I could use a unique id per page and .dropRepeats on getitems$, but I don't have a unique page id in the result.
You could filter the scroll$ to take only the bottom of the page.
const onScroll$ = sources.DOM.select('document').events('scroll')
  .map((e) => {
    return e.srcElement.scrollingElement.scrollTop;
  })
  .filter( /* filter by checking scrollTop is near bottom */ )
  .compose(debounce(1000));
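The predicate itself can be kept pure. A minimal sketch, reusing the near-bottom check from the question's getitems$ (the names and default threshold are illustrative):

```javascript
// Returns true when the user has scrolled to within `threshold` pixels of
// the bottom. scrollTop would come from scroll$, contentHeight from height$,
// and viewportHeight would be window.innerHeight in the browser.
const isNearBottom = (scrollTop, contentHeight, viewportHeight, threshold = 1) =>
  scrollTop > contentHeight - viewportHeight - threshold;
```

Combined with `.compose(dropRepeats())` after the filter, this ensures repeated scroll events near the bottom collapse into a single trigger.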
I'm trying to retrieve data from Firestore with the code below, but I only get one result. When I delete that document, I get the other document with the same query. I don't know what else to do.
const retrieveNetwork2 = async () => {
  const query = geocollection.near({
    center: new firebase.firestore.GeoPoint(15.5, -90.25),
    radius: 1000,
  });
  await query.get().then((querySnapshot) => {
    querySnapshot.docs.map((doc) => {
      let workingX = doc.data().name;
      setReada(workingX);
    });
  });
};
The problem is that you call setReada for each document that you get back, so at the end of processing all documents, setReada will only know about the last document in the results.
It's hard to say what exactly you need to change without seeing your setReada, but it'll typically be something like:
query.get().then((querySnapshot) => {
  let results = querySnapshot.docs.map((doc) => doc.data().name);
  setReada(results);
});
So with this, setReada now receives an array with the name value of each document.
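To make the shape of that call concrete, the mapping can be isolated as a pure function and exercised against a stubbed snapshot (the stub below is illustrative, not the real Firestore API):

```javascript
// Collect the `name` field of every doc in a query snapshot into one array,
// so state is set once with all results instead of once per document.
const namesFromSnapshot = (querySnapshot) =>
  querySnapshot.docs.map((doc) => doc.data().name);

// Stubbed snapshot standing in for a real QuerySnapshot:
const fakeSnapshot = {
  docs: [
    { data: () => ({ name: 'Place A' }) },
    { data: () => ({ name: 'Place B' }) },
  ],
};
```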
I built a project from Frontend Mentor called Advice Generator App using HTML, CSS & JS. There is a bug with my solution in Firefox Developer Edition. When I click the button, the event listener runs the fetch function, makes a request, and updates the UI according to the data received in the response. The problem is that when I click again, the UI doesn't change and the console shows the same response as before. This didn't happen in Brave, and I suppose other Chromium-based browsers behave the same, so why does it happen in Firefox Developer Edition?
Link of repo: https://github.com/Perlishnov/advice-generator-app-main
Link of website: https://advice-generator-app-main-kappa.vercel.app/
"use strict";
//Html Elements
const rollDice = document.getElementById("roll-dice");
const adviceNumber = document.getElementById("advice-number");
const adviceParagraph = document.getElementById("advice");
const url = "https://api.adviceslip.com/advice";
//Dice button logic
rollDice.addEventListener("click", () => {
  fetch(url)
    .then((response) => response.json())
    .then(
      (data) => (
        (adviceNumber.textContent = data.slip.id),
        (adviceParagraph.textContent = data.slip.advice)
      )
    );
});
It looks like a caching issue: if the URL is the same, the browser uses what is already in its cache.
The simplest way to fix that is to add a counter to your URL, to force the cache to be bypassed.
"use strict";
//Html Elements
const rollDice = document.getElementById('roll-dice');
const adviceNumber = document.getElementById('advice-number');
const adviceParagraph = document.getElementById('advice');
const url = {
  ref: 'https://api.adviceslip.com/advice',
  count: 0
};
//Dice button logic
rollDice.onclick = () => {
  fetch(`${url.ref}?c=${++url.count}`)
    .then(r => r.json())
    .then(data => {
      adviceNumber.textContent = data.slip.id;
      adviceParagraph.textContent = data.slip.advice;
    });
};
<div>
  <h3 id="advice-number"> n </h3>
  <p id="advice">...</p>
</div>
<button id="roll-dice"> roll-dice </button>
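The URL-counter trick boils down to making each request URL unique. A minimal sketch of just that part (the function name is illustrative):

```javascript
// Returns a function that produces a fresh, cache-busting URL on each call
// by appending an incrementing query parameter.
const makeBustedUrl = (base) => {
  let count = 0;
  return () => `${base}?c=${++count}`;
};
```

Because each generated URL differs, the browser's HTTP cache never matches a previous response.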
Well, I discovered while reading on the Frontend Mentor website that this happens because Firefox caches the response from the API. So all you have to do is pass an object with a cache property set to "no-cache" as the second argument of the fetch API, like this:
"use strict";
//Html Elements
const rollDice = document.getElementById("roll-dice");
const adviceNumber = document.getElementById("advice-number");
const adviceParagraph = document.getElementById("advice");
const url = "https://api.adviceslip.com/advice";
//Dice button logic
rollDice.addEventListener("click", () => {
  fetch(url, { cache: "no-cache" })
    .then((response) => response.json())
    .then(
      (data) => (
        (adviceNumber.textContent = data.slip.id),
        (adviceParagraph.textContent = data.slip.advice)
      )
    );
});
I am working on a React app where I have a table with a scroller. On every scroll I update the page number and make a subsequent API call with the updated page number, but the page number updates so fast that it exceeds the maximum page number, so the API returns an empty array and that leads to incomplete data.
Here's my code:
handleScroll = async ({ scrollTop }) => {
  console.log('hey');
  if (this.props.masterName && this.props.codeSystem) {
    const params = {};
    await this.props.setPageNumber(this.props.page_num + 1);
    params.code_system_category_id = this.props.masterName;
    params.code_systems_id = this.props.codeSystem;
    params.page_num = this.props.page_num;
    if (this.props.entityName) params.entity_name = this.props.entityName;
    if (this.props.status) params.status = this.props.status;
    console.log(params);
    await this.props.fetchCodeSets(params);
  }
}
This function gets called on every scroll. On every scroll I increment the page number by 1 using await, and also make the API call via this.props.fetchCodeSets using await so that the scroll doesn't advance before the API call completes, but the scroll handler keeps firing and that leads to the error explained above.
Here's my table with scroll:
<StyledTable
  height={250}
  width={this.props.width}
  headerHeight={headerHeight}
  rowHeight={rowHeight}
  rowRenderer={this.rowRenderer}
  rowCount={this.props.codeSets.length}
  rowGetter={({ index }) => this.props.codeSets[index]}
  LoadingRow={this.props.LoadingRow}
  overscanRowCount={5}
  tabIndex={-1}
  className='ui very basic small single line striped table'
  columnsList={columns}
  onScroll={() => this.handleScroll('scroll')}
/>
I am using react-virtualized table and the docs can be found here:
https://github.com/bvaughn/react-virtualized/blob/master/docs/Table.md
Any leads can definitely help!
You are loading a new page on every scroll interaction. If the user scrolls down by 5 pixels, do you need to load an entire page of data? And then another page when they scroll down another 2 pixels? No. You only need to load a page when you have reached the end of the available rows.
You could use some math to figure out which pages need to be loaded based on the scrollTop position in the onScroll callback and the rowHeight variable.
But react-virtualized contains an InfiniteLoader component that can handle this for you. It will call a loadMoreRows function with the startIndex and the stopIndex of the rows that you should load. It does not keep track of which rows have already been requested, so you'll probably want to do that yourself.
Here, I am storing the API responses in a dictionary keyed by index, essentially a sparse array, to support any edge cases where the responses come back out of order.
We can check if a row is loaded by seeing if there is data at that index.
We will load subsequent pages when the loadMoreRows function is called by the InfiniteList component.
import { useState } from "react";
import { InfiniteLoader, Table, Column } from "react-virtualized";
import axios from "axios";

const PER_PAGE = 10;
const ROW_HEIGHT = 30;

export default function App() {
  const [postsByIndex, setPostsByIndex] = useState({});
  const [totalPosts, setTotalPosts] = useState(10000);
  const [lastRequestedPage, setLastRequestedPage] = useState(0);

  const loadApiPage = async (pageNumber) => {
    console.log("loading page", pageNumber);
    const startIndex = (pageNumber - 1) * PER_PAGE;
    const response = await axios.get(
      // your API is probably like `/posts/page/${pageNumber}`
      `https://jsonplaceholder.typicode.com/posts?_start=${startIndex}&_end=${
        startIndex + PER_PAGE
      }`
    );
    // This only needs to happen once
    setTotalPosts(parseInt(response.headers["x-total-count"]));
    // Save each post to the correct index
    const posts = response.data;
    const indexedPosts = Object.fromEntries(
      posts.map((post, i) => [startIndex + i, post])
    );
    setPostsByIndex((prevPosts) => ({
      ...prevPosts,
      ...indexedPosts
    }));
  };

  const loadMoreRows = async ({ startIndex, stopIndex }) => {
    // Load pages up to the stopIndex's page. Don't load previously requested.
    const stopPage = Math.floor(stopIndex / PER_PAGE) + 1;
    const pagesToLoad = [];
    for (let i = lastRequestedPage + 1; i <= stopPage; i++) {
      pagesToLoad.push(i);
    }
    setLastRequestedPage(stopPage);
    return Promise.all(pagesToLoad.map(loadApiPage));
  };

  return (
    <InfiniteLoader
      isRowLoaded={({ index }) => !!postsByIndex[index]}
      loadMoreRows={loadMoreRows}
      rowCount={totalPosts}
      minimumBatchSize={PER_PAGE}
    >
      {({ onRowsRendered, registerChild }) => (
        <Table
          height={500}
          width={300}
          onRowsRendered={onRowsRendered}
          ref={registerChild}
          rowCount={totalPosts}
          rowHeight={ROW_HEIGHT}
          // return empty object if not yet loaded to avoid errors
          rowGetter={({ index }) => postsByIndex[index] || {}}
        >
          <Column label="Title" dataKey="title" width={100} />
          <Column label="Description" dataKey="body" width={200} />
        </Table>
      )}
    </InfiniteLoader>
  );
}
CodeSandbox Link
The placeholder API that I am using takes start and end indexes instead of page numbers, so going back and forth from index to page number to index in this example is silly. But I am assuming that your API uses numbered pages.
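The index-to-page arithmetic inside loadMoreRows can be pulled out into a pure helper to reason about separately (a sketch; the function name is illustrative):

```javascript
const PER_PAGE = 10;

// Given the last visible row index and the highest page already requested,
// return the 1-based page numbers that still need to be fetched.
const pagesForRange = (stopIndex, lastRequestedPage, perPage = PER_PAGE) => {
  const stopPage = Math.floor(stopIndex / perPage) + 1;
  const pages = [];
  for (let p = lastRequestedPage + 1; p <= stopPage; p++) pages.push(p);
  return pages;
};
```

Tracking lastRequestedPage this way is what prevents the same page from being fetched twice when InfiniteLoader asks for overlapping ranges.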
I am trying to do Firestore reactive pagination. I know there are posts, comments, and articles saying that it's not possible, but anyway...
When I add a new message, it kicks out or "removes" the previous message.
Here's the main code. I'm paginating 4 messages at a time:
async getPaginatedRTLData(queryParams: TQueryParams, onChange: Function) {
  let collectionReference = collection(firestore, queryParams.pathToDataInCollection);
  let collectionReferenceQuery = this.modifyQueryByOperations(collectionReference, queryParams);
  //Turn query into snapshot to track changes
  const unsubscribe = onSnapshot(collectionReferenceQuery, (snapshot: QuerySnapshot) => {
    snapshot.docChanges().forEach((change: DocumentChange<DocumentData>) => {
      //Now save data to format later
      let formattedData = this.storeData(change, queryParams);
      onChange(formattedData);
    });
  });
  this.unsubscriptions.push(unsubscribe);
}
For completeness, this is how I'm building my query:
let queryParams: TQueryParams = {
  limitResultCount: 4,
  uniqueKey: '_id',
  pathToDataInCollection: messagePath,
  orderBy: {
    docField: orderByKey,
    direction: orderBy
  }
}

modifyQueryByOperations(
  collectionReference: CollectionReference<DocumentData> = this.collectionReference,
  queryParams: TQueryParams) {
  //Extract query params
  let { orderBy, where: where_param, limitResultCount = PAGINATE } = queryParams;
  let queryCall: Query<DocumentData> = collectionReference;
  if (where_param) {
    let { searchByField, whereFilterOp, valueToMatch } = where_param;
    //collectionReferenceQuery = collectionReference.where(searchByField, whereFilterOp, valueToMatch)
    queryCall = query(queryCall, where(searchByField, whereFilterOp, valueToMatch));
  }
  if (orderBy) {
    let { docField, direction } = orderBy;
    //collectionReferenceQuery = collectionReference.orderBy(docField, direction)
    queryCall = query(queryCall, fs_orderBy(docField, direction));
  }
  if (limitResultCount) {
    //collectionReferenceQuery = collectionReference.limit(limitResultCount)
    queryCall = query(queryCall, limit(limitResultCount));
  }
  if (this.lastDocInSortedOrder) {
    //collectionReferenceQuery = collectionReference.startAt(this.lastDocInSortedOrder)
    queryCall = query(queryCall, startAt(this.lastDocInSortedOrder));
  }
  return queryCall;
}
See, the last message is removed when I add a new message to the collection. What's worse, it's not consistent. I debugged this and Firestore is removing the message.
I almost feel like this is a bug in Firestore's handling of listeners.
As mentioned in the comments and confirmed by you, the problem you are facing occurs because some values of the fields that you are searching for in your query changed while the listener was still active, which makes the listener treat the document as a removed one.
This is proven by the fact that the records are not being deleted from Firestore itself, but are just being excluded from the listener.
This can be fixed by creating a better querying structure, separating the old data from the new data incoming from the listener, which you mentioned you've already done in the comments as well.
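One way to sketch that separation: keep a local map of messages keyed by the query's uniqueKey ('_id') and fold listener changes into it, so a document leaving the listener's window never has to evict older, already-paginated data. The change shape below is a simplified stand-in for Firestore's DocumentChange, not the real API:

```javascript
// Fold one listener change into local state keyed by _id. A 'removed' change
// here only means the doc left the listener's result window, so the caller
// can decide whether to actually drop it or keep it as old paginated data.
const applyChange = (messagesById, change) => {
  const next = { ...messagesById };
  if (change.type === 'removed') {
    delete next[change.doc._id];
  } else {
    next[change.doc._id] = change.doc; // 'added' or 'modified'
  }
  return next;
};
```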
So I've been working on a scraper. Everything was fine until I tried scraping data for each individual link.
To explain: I've got a scraper which scrapes data about apartments. The first URL is the page where the articles are located (approx. 29-30 should be fetched). That page doesn't have information about square meters, so I need to run another scraper for each scraped link and get the square meters from there.
Here is the code that I have:
const axios = require('axios');
const cheerio = require('cheerio');

const url = `https://www.olx.ba/pretraga?vrsta=samoprodaja&kategorija=23&sort_order=desc&kanton=9&sacijenom=sacijenom&stranica=2`;

axios.get(url).then((response) => {
  const articles = [];
  const $ = cheerio.load(response.data);
  $('div[id="rezultatipretrage"] > div')
    .not('div[class="listitem artikal obicniArtikal i index"]')
    .not('div[class="obicniArtikal"]')
    .each((index, element) => {
      $('span[class="prekrizenacijena"]').remove();
      const getLink = $(element).find('div[class="naslov"] > a').attr('href');
      const getDescription = $(element)
        .find('div[class="naslov"] > a > p')
        .text();
      const getPrice = $(element)
        .find('div[class="datum"] > span')
        .text()
        .replace(/\.| ?KM$/g, '')
        .replace(' ', '');
      const getPicture = $(element)
        .find('div[class="slika"] > img')
        .attr('src');
      articles[index] = {
        id: getLink.substring(27, 35),
        link: getLink,
        description: getDescription,
        price: getPrice,
        picture: getPicture,
      };
    });
  articles.map((item, index) => {
    axios.get(item.link).then((response) => {
      const $ = cheerio.load(response.data);
      const sqa = $('div[class="df2 "]').first().text();
    });
  });
  console.log(articles);
});
The first part of the code works as it should; I've been struggling with the second part.
I'm mapping over articles because, for each link, I need to load it into the axios function and get the data about square meters.
My desired output would be the updated articles: with their old keys and values, but with an added key sqm holding the scraped square meters.
Any ideas on how to achieve this?
Thanks!
You could simply add the information about the square meters to the current article/item, something like:
const articlePromises = Promise.all(articles.map((item) => {
  return axios.get(item.link).then((response) => {
    const $ = cheerio.load(response.data);
    const sqa = $('div[class="df2 "]').first().text();
    item.sqm = sqa;
  });
}));

articlePromises.then(() => {
  console.log(articles);
});
Note that you need to wait for all mapped promises to resolve, before you log the resulting articles.
Also note that using async/await you could rewrite your code to be a bit cleaner, see https://javascript.info/async-await.
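As a rough async/await sketch of the same flow, with the per-link fetch injected so the merge logic can be exercised without network access (enrichArticles and fetchSqm are illustrative names, not part of the original code):

```javascript
// Enrich each article with a `sqm` key. fetchSqm(link) stands in for the
// axios + cheerio call that scrapes the square meters from one article page.
const enrichArticles = async (articles, fetchSqm) => {
  await Promise.all(
    articles.map(async (item) => {
      item.sqm = await fetchSqm(item.link);
    })
  );
  return articles;
};
```

In the real scraper, fetchSqm would be `(link) => axios.get(link).then((res) => cheerio.load(res.data)('div[class="df2 "]').first().text())`.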