paginating through a CRUD API - javascript

I am writing a client that will query a CRUD web API. I will be using socket.io.get('/api'). The problem is: I want to paginate the results, so I can start displaying stuff while my client is still receiving the data.
The results from the API come as JSON, like:
[
  {
    "id": "216754",
    "date": "2015-07-30T02:00:00.000Z"
  },
  {
    "id": "216755",
    "date": "2015-08-30T02:00:00.000Z"
  }
]
The API lets me construct a URL query where I can limit the size of each result array. So I can make a query like /api?skip=10&limit=10, and it will get me the results from item 10 to item 19. What I want to be able to do is to keep looping and receiving results until the results array has fewer than 10 items (which will mean we have reached the end of the dataset). And I need that to be asynchronous, so I can start to work on the data right from the start and update whatever work I have done each time a new page is received.

Is it an infinite scroll that you are trying to do? Or do you want to call all the pages asynchronously and be able to receive page 3 before page 2? Reading the question, I understand it is the second.
You can't rely on "until the results array is less than length = 10", since you want to launch all the calls at the same time.
You should do a first query to retrieve the number of records. Then you will know how many pages there are, and you can generate all the URLs you need and call them asynchronously.
It could look like this (code not tested):
var nbItemsPerPage = 10;
socket.io.get(
  '/api/count', // <= you have to code the controller that returns the total count
  function (resCount) {
    var nbPages = Math.ceil(resCount / nbItemsPerPage);
    for (var i = 0; i < nbPages; i++) {
      // JavaScript will loop without waiting for the responses.
      // By the time a callback runs, the value of "i" will have changed,
      // so we use an immediately invoked function to create a new scope
      // that stores pageNum for every iteration.
      (function (pageNum) {
        socket.io.get(
          '/api',
          {skip: nbItemsPerPage * pageNum, limit: nbItemsPerPage},
          function (resGet) {
            console.log('Result of page ' + pageNum + ' received');
            console.log(resGet);
          }
        );
      })(i);
    }
  }
);
If what you are trying to achieve is an infinite scroll page, then you have to load page n+1 only after you have received the content of page n, and in that case you can rely on results.length < 10.
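For that infinite-scroll case, a minimal sketch of the sequential loop (untested; it assumes the same socket.io.get signature as above, and display is a hypothetical rendering function):

var nbItemsPerPage = 10;

function loadPage(pageNum) {
  socket.io.get(
    '/api',
    {skip: nbItemsPerPage * pageNum, limit: nbItemsPerPage},
    function (results) {
      display(results); // hypothetical: render this page as soon as it arrives
      if (results.length === nbItemsPerPage) {
        loadPage(pageNum + 1); // a full page: there may be more
      }
      // A short page means we reached the end of the dataset.
    }
  );
}

loadPage(0);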

Related

REST API search results are not the same as the results returned by the search page

I have a search REST API. When I run it through SharePoint Designer, I don't get the same number of results as returned by the search page on the SharePoint site. I have tried using different source IDs, and also tried to use the default source ID from the result sources, but I always get the same results, so I am not sure what I am doing wrong.
My other thought is: is there a way to get all the results from the default search function built into SharePoint?
var ct = SP.ClientContext.get_current(); // get_current() is called directly, not with "new"
var keywordQuery = new Microsoft.SharePoint.Client.Search.Query.KeywordQuery(ct);
// "ctx" below is the search display template context object, distinct from the client context "ct"
var queryStr = ctx.DataProvider.get_currentQueryState().k;
keywordQuery.set_queryText(queryStr);
keywordQuery.set_trimDuplicates(false);
keywordQuery.set_enableSorting(true);
keywordQuery.set_sourceId("xxxxxx-xxxx-xxxx-xxx-xxxxxxx"); // a method call, not an assignment
keywordQuery.set_rowLimit(500);
var searchExecutor = new Microsoft.SharePoint.Client.Search.Query.SearchExecutor(ct);
var results = searchExecutor.executeQuery(keywordQuery);
ct.executeQueryAsync(onQuerySuccess, onQueryFail);

function onQuerySuccess() {
  results.m_value.ResultTables[1].ResultRows.forEach(function (row) {
    console.log(row);
    var name = row.name;
    if (!$isNull(name)) {
      console.log(name);
    }
  });
}

function onQueryFail() {
  // handle the error here
}
Usually, the results are paginated. That means that instead of returning all the results at once, they are divided into parts, and each part (page) is sent separately.
For example, when you search on google.com, instead of returning all 15,000,000-odd results, Google returns only 10 or so. To get the next 10 results, you click the next button in the pagination menu at the bottom of the page.
This is done so that the API and the network don't get overloaded. Imagine how large a response with 15,000,000 records would be.
This is what's happening to you. In the response you received, see if there's a field with a URL for the next page; Microsoft usually does things this way. If you call that URL, you'll get the next page. If that's not there, see if the URL you called takes a parameter somewhere that lets you select the page.
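A rough sketch of the first approach (assuming an OData-style response where the results live in body.value and the link in body['@odata.nextLink']; the exact property names vary by API, so check your actual response):

function fetchAllPages(url, onPage) {
  fetch(url)
    .then(function (res) { return res.json(); })
    .then(function (body) {
      onPage(body.value); // hand the current page of results to the caller
      if (body['@odata.nextLink']) {
        // Keep following the "next page" link until it disappears
        fetchAllPages(body['@odata.nextLink'], onPage);
      }
    });
}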

Limit GET requests (D3.js/Twitter API)

I am currently using D3.js and have modified my chart from listening to mouseover/mouseout to mousemove. This has brought quite a few issues to the chart, but none more so than my GET statuses/show/:id requests.
Previously, I would have points on my chart to hover over, and if there was a tweet within half an hour of that point (from a tweet DB in the backend), it would send a GET request to fetch that tweet.
My problem now is that because I'm using mousemove in proximity to these points on my chart, as opposed to mouseover, it's firing hundreds of times, and the GET requests are limited to 900 in a 15-minute window.
var tweet_arr = [];
for (var j in data_tweets) {
  var tweet_time = timeParser(data_tweets[j]['timestamp_s']);
  var point_time = timeParser(d.timestamp);
  var diff = point_time.getTime() - tweet_time.getTime();
  if (diff <= 1800000 && diff >= -1800000) {
    tweet_arr.push(data_tweets[j]);
  } else {
    var tweet_list = [];
    d3.selectAll(".panel-body")
      .data(tweet_list)
      .exit()
      .remove();
  }
}
twitterapi.fetch().getTweets(tweet_arr, tweet_urls[0], tweet_urls[1]);
This function finds the nearest point on the x-axis and checks my collection of tweet data; if there is a tweet within half an hour of it, the tweet is added to an array called tweet_arr, which is then passed into my fetch() function, which makes an AJAX call to the Flask framework where I run my GET request by ID.
What I would ideally want is some check so that if the request to fetch a specific tweet has been carried out in, say, the last 5 seconds, the fetch() function doesn't run.
How would I go about doing something like this?
Have a look at debounce and throttle from underscore.js: http://underscorejs.org/#debounce and http://underscorejs.org/#throttle
Here's a good, short post about debouncing requests: https://www.google.de/amp/s/davidwalsh.name/javascript-debounce-function/amp
For a comparison between throttle and debounce, see https://gist.github.com/makenova/7885923
You need to define your fetch logic in a separate function and pass that function into _.debounce.
Have a look at this example: https://codepen.io/anon/pen/EQwzpZ?editors=0011
const fetchFromTwitter = function(s) { console.log(s) }
var lazyFetch = _.debounce(fetchFromTwitter, 100)
lazyFetch('This is')
lazyFetch('is')
lazyFetch('gonna be')
lazyFetch('legen ... ')
lazyFetch('wait for it')
lazyFetch('... dary')
lazyFetch('LEGENDARY')
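And if you literally want the "only once per 5 seconds" check from the question, a minimal hand-rolled guard could look like this (untested; it assumes the same twitterapi.fetch() call as in your code):

var lastFetch = 0;
function maybeFetchTweets(tweet_arr, url1, url2) {
  var now = Date.now();
  if (now - lastFetch < 5000) return; // fetched within the last 5 seconds: skip
  lastFetch = now;
  twitterapi.fetch().getTweets(tweet_arr, url1, url2);
}

This is essentially what _.throttle(fn, 5000) gives you for free, with better handling of the trailing call.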

Javascript Algolia - How to initiate a new search with an empty search string?

I have a piece of functionality in my Angular app in which I have some searchable items that exist within my Algolia database. The thing is...they aren't searchable by any string. They are only searchable via facets.
The problem is that when I run my initial .search() function with an empty string + my search filters/facets, I get a list returned and everything is all fine and dandy. HOWEVER, when I go to run the function again to refresh the list, it just comes back with the same results and actually never fires a new request, unless I change one of the filters/facets.
Is there any way to force a search query any time I want, without having to specify a "new" search criteria?
Here is my search function:
searchForAuditions() {
  // Set up the filters/options
  let options = {
    highlightPreTag: '<span class="highlighted">',
    highlightPostTag: '</span>',
    hitsPerPage: 10,
  };
  // Add a clause to make sure "is_deleted" is false
  let isDeleted = ` is_deleted: "false"`;
  let facets = `${isDeleted}`;
  // Strip trailing spaces and split into an array of words
  let facetWords = facets.replace(/\s+$/, '').split(" ");
  // Remove a trailing "AND" or "OR"
  if ((facetWords[facetWords.length - 1] === "AND") || (facetWords[facetWords.length - 1] === "OR")) {
    facetWords.splice(-1, 1);
    facets = facetWords.join(' ');
  }
  if (facets) {
    options['filters'] = facets;
  }
  this.auditionsIndex.search('', options).then(result => {
    this.results = result.hits;
  });
}
Thanks in advance!
From the Algolia JavaScript client documentation:
To avoid performing the same API calls twice, search results will be stored in a cache that is tied to your JavaScript client and index objects. Whenever a call for a specific query (and filters) is made, we store the results in a local cache. If you ever call the exact same query again, we read the results from the cache instead of doing an API call.
This is particularly useful when your users are deleting characters from their current query, to avoid useless API calls. Because it is stored as a simple JavaScript object in memory, the cache is automatically reset whenever you reload the page.
To resolve this, you should use index.clearCache() or client.clearCache().
Read more about the built-in cache system here: https://github.com/algolia/algoliasearch-client-javascript#cache
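For example, a minimal sketch reusing the auditionsIndex from the question: clear the cache right before searching, so the same empty-string query triggers a real API call (refreshAuditions is a hypothetical wrapper name):

refreshAuditions() {
  this.auditionsIndex.clearCache(); // drop any cached results for this index
  this.searchForAuditions();        // the same query now hits the API again
}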

Functional Javascript BaconJS, how can I push more values to an event stream?

I'm attempting to create a stack of AJAX responses in BaconJS that processes them in a first-in, first-out fashion, but each 'out' event should wait for user input.
This is where I'm at now: Live JSBin
var pages = Bacon.fromArray([1,2,3])
var next = $("#next").asEventStream('click').map(true);
pages.flatMapConcat(asyncFunction).zip(next).log("responses")
function asyncFunction(page) {
// Simulating something like an AJAX request
return Bacon.later(1000 + (Math.random() * 3000), "Page "+ page)
}
Currently this synchronously outputs an event from the pages EventStream each time #next is clicked, which is the behavior I want.
However, I am unable to figure out how to push more values to the pages EventStream. I have attempted to replace the pages EventStream with a Bus and to push values like this (which doesn't work):
var pages = new Bacon.Bus()
pages.push("value")
How do I push more values to an EventStream?
I know this is an OLD post, but a Bus would work. Just push the numbers (you had an array of numbers before) into it:
var pages = new Bacon.Bus();
// Bacon.fromArray([1,2,3])
var next = $("#next").asEventStream('click').map(true);
pages.flatMapConcat(asyncFunction).zip(next).log("responses")
function asyncFunction(page) {
// Simulating something like an AJAX request
return Bacon.later(1000 + (Math.random() * 3000), "Page "+ page)
}
pages.push(1);
pages.push(2);
pages.push(3);
I cloned your JSBin and changed it to use a Bus.
As mentioned previously, you could also stream the source of the page values using something like Bacon.fromEvent or Bacon.fromBinder.
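For completeness, a minimal sketch of the fromBinder variant: instead of calling push on a Bus yourself, you get a sink function to feed values through (here just re-creating the original 1, 2, 3):

var pages = Bacon.fromBinder(function (sink) {
  sink(1);
  sink(2);
  sink(3);
  return function () {}; // unsubscribe hook (nothing to clean up here)
});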

atomic 'read-modify-write' in javascript

I'm developing an online store app, and using Parse as the back-end. The count of each item in my store is limited. Here is a high-level description of what my processOrder function does:
1. find the items users want to buy from the database
2. check whether the remaining count of each item is enough
3. if step 2 succeeds, update the remaining count
4. check if the remaining count has become negative; if it has, revert it to the old value
Ideally, the above steps should be executed exclusively. I have learned that JavaScript is single-threaded and event-based, so here are my questions:
1. There is no way in JavaScript to put the above steps in a critical section, right?
2. Assume only 3 items are left, and two users each try to order 2 of them. The remaining count will end up as -1 for one of the users, so it needs to be reverted to 1 in that case. Now imagine a third user tries to order 1 item while the remaining count is -1: he will fail, although he should be allowed to order. How do I solve this problem?
Following is my code:
Parse.Cloud.define("processOrder", function(request, response) {
Parse.Cloud.useMasterKey();
var orderDetails = {'apple':2, 'pear':3};
var query = new Parse.Query("Product");
query.containedIn("name", ['apple', 'pear']);
query.find().then(function(results) {
// check if any dish is out of stock or not
_.each(results, function(item) {
var remaining = item.get("remaining");
var required = orderDetails[item.get("name")];
if (remaining < required)
return Parse.Promise.error(name + " is out of stock");
});
return results;
}).then(function(results) {
// make sure the remaining count does not become negative
var promises = [];
_.each(results, function(item) {
item.increment("remaining", -orderDetails[item.get("name")]);
var single_promise = item.save().then(function(savedItem) {
if (savedItem.get("remaining") < 0) {
savedItem.increment("remaining", orderDetails[savedItem.get("name")]);
return savedItem.save().then(function(revertedItem) {
return Parse.Promise.error(savedItem.get("name") + " is out of stock");
}, function(error){
return Parse.Promise.error("Failed to revert order");
});
}
}, function(error) {
return Parse.Promise.error("Failed to update database");
});
promises.push(single_promise);
});
return Parse.Promise.when(promises);
}).then(function() {
// order placed successfully
response.success();
}, function(error) {
response.error(error);
});
});
no way in Javascript to put the above steps in a critical section, right?
See, here is the amazing part. In JavaScript, everything runs in a critical section. There is no preemption, and multitasking is cooperative: once your code has started running, no other code can run until yours completes.
The problem is that you're doing IO, and IO in JavaScript yields back to the event loop before actually happening, unlike in blocking code. So when you create and run a query, you don't actually continue running right away (that's what your callback/promise code is about).
Ideally, the above steps should be executed exclusively.
Sadly, that's not a JavaScript problem; it's a host environment problem, in this case Parse. You have to explicitly yield control to other code when you use their APIs (through callbacks and promises), and it is up to them to solve it.
Lucky for you, Parse has atomic counters. From the API docs:
To help with storing counter-type data, Parse provides methods that atomically increment (or decrement) any number field. So, the same update can be rewritten as.
gameScore.increment("score");
gameScore.save();
There are also atomic array operations which you can use here. Since you can do step 3 atomically, you can guarantee that the counter represents the actual inventory.
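Applied to your inventory, a rough sketch of the resulting pattern for a single item (item and required as in your code; the decrement happens atomically on the server, and the value reported back by save() tells you whether to compensate):

item.increment("remaining", -required); // atomic decrement, no read-modify-write race
item.save().then(function (saved) {
  if (saved.get("remaining") < 0) {
    // Oversold: atomically put the count back and report the failure
    saved.increment("remaining", required);
    return saved.save().then(function () {
      return Parse.Promise.error(saved.get("name") + " is out of stock");
    });
  }
  return saved; // enough stock: the decrement stands
});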
