I am using the Node.js Cassandra driver and I want to be able to retrieve the previous and next pages. So far the documentation only shows how to retrieve the next page: you save the pageState from the current page and pass it as a parameter. Sadly there is no info on how to navigate to the previous page.
As I see it there are two options:
Save each pageState and page as a key-value pair and use the pageState for the page that you want to navigate to.
Save the retrieved data in an array and use the array to navigate to the previous page. (I don't think this is a good solution, as I'll have to store large chunks of the data in memory.)
Neither method seems like an elegant solution to me, but if I have to choose I'll use the first one.
Is there any way to do this out of the box using the Node.js Cassandra driver?
Another thing is that in the documentation, manual paging is used by calling the eachRow function. If I understand it correctly, it gives you every row as soon as it is read from the database. The problem is that this is implemented in my API and I am returning the data for the current page in the HTTP response. So in order to do that, I'd have to push each row into a custom array and then return the array once the data for the current page is retrieved. Is there a way to use execute with manual paging, as the above seems redundant?
Thanks
EDIT:
This is my data model:
CREATE TABLE store_customer_report (
store_id uuid,
segment_id uuid,
report_time timestamp,
sharder int,
customer_email text,
count int static,
first_name text,
last_name text,
PRIMARY KEY ((store_id, segment_id, report_time, sharder), customer_email)
) WITH CLUSTERING ORDER BY (customer_email ASC);
I am displaying the data in a grid, so that the user can navigate through it.
As I write this I have thought of a way to do this without needing the previous-page functionality, but nevertheless I think this is a valid case and it would be great if there were an elegant solution to it.
Sadly there is no info on how to navigate to the previous page.
That is correct: when you make a query and there are more rows, Cassandra returns a paging state to fetch the next set of rows, but not the previous ones. While what the paging state represents is abstracted away, it is generally a pointer to where to continue reading the next set of data; there really isn't a concept of reading the previous set of data (because you just read it).
Save each pageState and page as a key-value pair and use the pageState for the page that you want to navigate to.
This is the strategy I'd recommend too; of course, to collect the paging states you actually have to make the queries.
Is there any way to do this out of the box using the Nodejs Cassandra driver?
Not that I am aware of, unfortunately. If you want to go back to previous pages you are going to need to track the state yourself.
Is there a way to use execute with the manual paging as the above seems redundant?
Yep, you can provide pageState in the options parameter with execute as well and it will be honored, i.e.:

client.execute('SELECT * FROM table', [], { pageState: pageState }, function (err, result) {
  // result.rows holds the current page; result.pageState (if present) is the
  // state to pass along to fetch the following page
});
There is an option to be "more" manual; this was common before the paging features existed. You can store the last partition/clustering keys of the current page you are sending back in the response (possibly encrypted depending on what it is; most likely generate the partition key server-side and only send/receive the clustering keys to avoid security issues). Then for your "next page" response you just start your CQL query from there. To go back a page, reverse the ORDER BY clause on the clustering key and walk backwards. That's all the pageState really does anyway.
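Given the schema from your edit, here is a minimal sketch of that approach (page size, client setup, and variable names are illustrative, not from the driver docs):

const pageSize = 20;

// Next page: everything after the last customer_email on the current page
const nextQuery =
  'SELECT * FROM store_customer_report ' +
  'WHERE store_id = ? AND segment_id = ? AND report_time = ? AND sharder = ? ' +
  'AND customer_email > ? LIMIT ?';

// Previous page: walk backwards from the first customer_email of the current
// page, then reverse the rows in memory so they render in ASC order again
const prevQuery =
  'SELECT * FROM store_customer_report ' +
  'WHERE store_id = ? AND segment_id = ? AND report_time = ? AND sharder = ? ' +
  'AND customer_email < ? ORDER BY customer_email DESC LIMIT ?';

client.execute(prevQuery,
  [storeId, segmentId, reportTime, sharder, firstEmailOnPage, pageSize],
  { prepare: true },
  function (err, result) {
    if (err) throw err;
    const previousPage = result.rows.reverse(); // back to ASC for display
  });

Since the whole partition key is fixed for a given grid, only the customer_email bounds need to travel back and forth with the client.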
Related
I have a page with user data, with Ajax pagination using PHP. Users' names are listed in descending order (the last inserted data is shown on the first page). Suppose there are 3 pages with 2 entries (user names) per page. Imagine I visited the second page and it shows some data, say the user names Alan and Arun. At the same time a couple of records are inserted into the database, and then I request the 3rd page. It will show the same data as the previous page, because of the records just inserted into the database. My question is: how can I handle this? It will confuse the user. Hope you get my point.
One easy solution I can think of would be a timestamp taken when you first requested the page. You keep this timestamp while navigating through the pages and only display entries older than it. This way, no new entries would pollute the result and mess with the pagination.
But you'd need to think of a mechanism to let the user include newer entries, e.g. a box displayed at the top saying "New entries available. Click here to refresh.". A click on this box would refresh the timestamp and navigate back to page 1.
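A minimal sketch of the snapshot idea in an Express-style JavaScript handler (the route, table, and column names are hypothetical, and db stands in for whatever query helper you use):

const express = require('express');
const app = express();

app.get('/users', async (req, res) => {
  // First visit: pin "now" as the snapshot; page links keep passing it along
  const snapshot = req.query.snapshot || new Date().toISOString();
  const page = parseInt(req.query.page, 10) || 1;
  const perPage = 2;

  // Only rows older than the snapshot are shown, so inserts made after
  // the first request can't shift entries between pages
  const rows = await db.query(
    'SELECT name FROM users WHERE created_at <= ? ORDER BY created_at DESC LIMIT ? OFFSET ?',
    [snapshot, perPage, (page - 1) * perPage]
  );
  res.json({ snapshot, page, rows });
});

The same idea ports directly to PHP; the key is simply threading the snapshot value through every pagination link.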
You can create some kind of ping service in JS to see if there is any new update on the server. If there is, fetch it and update your app. Also, you can give each record a unique identifier to solve the problem of duplication.
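For instance, a bare-bones polling loop (the endpoint and id scheme are assumptions):

let latestId = 0; // unique identifier of the newest record we already have

setInterval(async () => {
  const res = await fetch(`/api/users?after=${latestId}`);
  const rows = await res.json();
  if (rows.length > 0) {
    latestId = rows[rows.length - 1].id; // unique ids prevent duplicates
    // e.g. show a "new entries available" banner instead of reshuffling pages
  }
}, 10000); // ping every 10 seconds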
In my react based single page application, my page is divided in two panes.
Left Pane: Filter Panel.
Right Pane: Grid (table containing data that passes through applied filters)
In summary, I have an application that looks very similar to amazon.com. By default, when user hits an application's root endpoint (/) in the browser, I fetch last 7 days of data from the server and show it inside the grid.
The filter panel has a couple of filters (e.g. a time filter to fetch data that falls inside a specified time interval, IDs to search for data with a specific id, etc.) and a search button attached to the header of the filter panel. Hitting the search button makes a POST call to the server with the selected filters in the POST form body; the server returns the data that matches the passed filters, and my frontend application displays this data inside the grid.
Now, when someone hits the search button in the filter panel, I want to reflect the selected filters in the query parameters of the URL, because it will help me share these URLs with other users of my website, so that they can see the filters I applied and see only the data matching those filters in the grid.
The problem here is that if, on search button click, I use HTTP GET with query parameters, I will end up breaking the application because of the limit imposed on URL length by different browsers.
Please suggest the correct way to create such URLs, so that they set the selected filters in the filter panel without causing any side effects in my application.
Possible solution: considering the fact that we cannot directly put plain strings in query parameters because of URL length limitations in different browsers (note: the specification does not limit the length of an HTTP GET request, but different browsers implement their own limitations), we could use something like a message digest or hash (which converts input of arbitrary length into output of fixed length) and save it in a DB so the server can understand the request and serve the content back. This is just a thought; I am not sure whether it is an ideal solution to this problem.
Behavior of other heavily used websites:
amazon.com, newegg.com -> use hashed URLs.
kayak.com -> since they have very well defined keywords, they use short forms like IN for INDIA, BLR for Bangalore etc., and combine this with negation logic to further optimize maximum URL length. Not checked, but this will presumably break after a large selection of filters.
flipkart.com -> appends strings directly to query parameters and breaks after the limit is breached. Verified this.
In response to #cauchy's answer, we need to make a distinction between hashing and encryption.
Hashing
Hashes are by necessity irreversible. In order to map the hash to the specific filter combination, you would either need to
hash each permutation of filters on the server for every request to try matching the requested hash (computationally intensive) or
store a map of hash to filter combination on the server (memory intensive).
For the vast majority of cases, option 1 is going to be too slow. Depending on the number of filters and options, option 2 may require a sizable map, but it's still your best option.
Encryption
In this scheme, the server would send its public key to the client, then the client could use that to encrypt its filter options. The server would then decrypt the encrypted data with its private key. This is good, but your encrypted data will not be fixed length. So, as more options are selected, you run into the same problem of indeterminate parameter length.
Thus, in order to ensure your URL is short for any number of filters and options, you will need to maintain a mapping of hash->selection on the server.
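A small sketch of that hash->selection mapping in Node.js (the store and URL shape are hypothetical; a real setup would persist to a database rather than an in-memory Map):

const crypto = require('crypto');

// Hypothetical store: short hash -> filter selection JSON
const filterStore = new Map();

function shortUrlForFilters(filters) {
  // Note: callers must serialize filters in a stable key order so that
  // the same selection always produces the same hash
  const json = JSON.stringify(filters);
  const hash = crypto.createHash('sha256').update(json).digest('hex').slice(0, 12);
  filterStore.set(hash, json);
  return `/search?f=${hash}`; // fixed length no matter how many filters
}

function filtersForHash(hash) {
  const json = filterStore.get(hash);
  return json ? JSON.parse(json) : null;
}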
How should we handle permanent vs temporary links?
You mentioned in your comment above
If we use some persistent store to save the mapping from this hash to the actual filters, we would ideally want to segregate long-lived "permalinks" from short-lived ephemeral URLs, and use that understanding to efficiently expire the short-lived hashes.
You likely have a service on the server that handles all of the filters that you support in your application. The trick here is letting that service also manage the hashmap. As more filters and options are added/removed, the service will need to re-hash each permutation of filter selections.
If you need strong support for permalinks, then whenever you remove filters or options, you'll want to maintain the "expired" hashes and change their mapping to point to a reasonable alternative hash.
When do we update hashes in our DB?
There are lots of options, but I would generally prefer build time. If you're using a CI solution like Jenkins, Travis, AWS CodePipeline, etc., then you can add a build step to update your DB (a minimal hashing sketch follows the list below). Basically, you're going to...
Keep a persistent record of all the existing supported filters.
On build, check to see if there are any new filters. If so...
Add those filters to the record from step 1.
Hash all new filter permutations (just those that include your new filters) and store those in the hash DB
Check to see if any filters have been removed. If so...
Remove those filters from the record from step 1.
Find all the hashes for permutations that include those filters and either...
remove those hashes from the DB (weak permalinks), or
point that hash to a reasonable alternative hash in the DB (strong permalinks).
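As promised above, a minimal sketch of the build-step hashing in JavaScript (the DB helper and the shape of a filter selection are assumptions):

const crypto = require('crypto');

// Canonical hash for one filter selection: sort the ids so equivalent
// selections always map to the same short hash
function hashSelection(selection) {
  const canonical = JSON.stringify([...selection].sort());
  return crypto.createHash('sha256').update(canonical).digest('hex').slice(0, 12);
}

// Hypothetical build step: register hashes for newly added permutations
async function registerNewPermutations(newPermutations, hashDb) {
  for (const selection of newPermutations) {
    await hashDb.put(hashSelection(selection), JSON.stringify(selection));
  }
}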
Let's analyse your problem and the possible solutions.
Problem: You want a URL which carries information about the applied filters, so that when you share that URL the user doesn't land on an arbitrary page.
Solutions:
1) Append the applied filters to the URL. To achieve this you will need to shorten the keys of the filter types and the filter values, so that the URL length doesn't grow too much per filter.
Drawback: This is not the most reliable solution; as the number of filters increases, the URL length has to increase, with no way around it.
2) Append a unique key (hash) of the applied filters to the URL. To achieve this you will need to make changes on both server and client. On the client side you will need an encoding algorithm that converts the applied filters to a unique hash. On the server side you will need a decoding algorithm that converts the unique hash back to the applied filters. Now whenever such a URL is hit, the client can make a POST API call that takes this hash and gives back the array of applied filters, or the logic to convert this hash can live on the client side alone.
Do all this in componentWillMount to avoid any side effects.
I think the 2nd solution is scalable and efficient in almost all cases.
I have an application that uses view panels to display data. One view panel displays unprocessed records and the other displays processed records. The user chooses an unprocessed record (using the "show values in this column as links" option) and is directed to a page where they input information. They then click on a button that updates the documents using doc.replaceItemValue statements in JavaScript. The user is then directed back to the view panel that displays the unprocessed records. In order to have the just-processed record not show up among the unprocessed records, I have to reindex the database. I am using database.updateFTIndex(false) to accomplish this.
Is there a better way to accomplish this? If two or more users are submitting records, will their individual index updates step on each other?
I never had to worry about this when using mysql.
Thanks for any advice.
I've used that technique for a while in production and not been notified of any issues. Updating an index via the Database Properties or a View gives the message that it has been queued for update on the server, but I'm not sure if the same happens with the programmatic call. It may well do.
In my scenario, I'm consolidating a lot of data into individual documents, so although intensive use periodically, it's not a huge number of documents being updated at any one time.
I'm also running the update to the index via sessionAsSigner; I had assumed that would be needed for authority purposes.
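For reference, a sketch of how that programmatic call can look in SSJS (the database handle and placement are assumptions; adjust to your setup):

// Refresh the full-text index as the signer, so ordinary users don't
// need elevated rights on the database
var db = sessionAsSigner.getDatabase(database.getServer(), database.getFilePath());
db.updateFTIndex(false); // false = update the existing index, don't create a new one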
This is a performance-related question.
I have a requirement to show all the orders made by the customer during that day (there can be at most 10 orders).
The screen looks like this: [screenshot omitted]
Right now, on click of the order row I make an Ajax call, get the data from the server, and show it on the front end.
The end result looks like this: [screenshot omitted]
I am thinking of another approach: during page start-up (document ready), load all the data related to that customer for that day, store it in a global JavaScript array, and on click of the order row show the data by looping over the array.
Could anybody please tell me which is the best approach?
If you know that everyone opening that page will go ahead and toggle all the rows then go ahead and preload everything. Otherwise it is much better to load only the data you need, thus make small ajax calls when the user requests data for a specific row.
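A small sketch of that on-demand approach with a one-request-per-row cache (the endpoint and render function are hypothetical):

const detailsCache = {};

async function showOrderDetails(orderId) {
  // Fetch each order's details at most once, then serve from the cache
  if (!detailsCache[orderId]) {
    const res = await fetch(`/api/orders/${orderId}`);
    detailsCache[orderId] = await res.json();
  }
  renderDetails(detailsCache[orderId]); // your existing display logic
}

With at most 10 orders per day, this keeps the initial page light while making repeat clicks on the same row free.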
The answer will depend on the specifics of your application, and how it is used.
How expensive is it to obtain the full list of orders? How often does a user need to see all the orders? Will your users tolerate short pauses while retrieving data from the server, or are they more likely to complain about page load time?
Neither approach is always better or worse, it just depends.
What I did for the CRUD in my app is select all the items from the back end, load them into the front end, and loop over them using JS; to be specific, I used Ajax.
Think of my app as a todo list. Even if a user inserts a new item, I suppose I still need to select all the items from the DB again after the insert query, right? The same goes for delete; I could use remove(), but I still need to reload so that my item ids don't get messed up. Correct?
I am using AngularJS ng-repeat; I can't just do id++, so I bind the id in ng-repeat to the object that I got as JSON from the DB.
If I have a thousand items that will cause a problem, because I trigger the load function in the back end too often. How do I solve that?
Loading all the items from the back end is an invitation for disaster. It will kill both the back end and the front end. It also becomes a serious usability problem if you dump thousands of rows of data into the UI; how will the user wade through the data and act on it? Provide some way to filter the items. For example, if it is a todo list, display one day at a time (the default being today). For any other use case we can provide a similar filtering mechanism. That way you query limited data from the back end, take it to the UI and display it. If you cannot filter like this, at least provide some pagination to limit the data you query and transport to the UI.
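A sketch of that server-side filtering in an Express-style handler (the route, table, and db helper are assumptions):

const express = require('express');
const app = express();

app.get('/api/todos', async (req, res) => {
  // Default to today's items instead of the whole table
  const day = req.query.day || new Date().toISOString().slice(0, 10);
  const limit = Math.min(parseInt(req.query.limit, 10) || 50, 100); // hard cap
  const offset = parseInt(req.query.offset, 10) || 0;

  // db is an assumed promise-based query helper
  const items = await db.query(
    'SELECT * FROM todos WHERE due_date = ? ORDER BY id LIMIT ? OFFSET ?',
    [day, limit, offset]
  );
  res.json(items);
});

On the Angular side, ng-repeat then only ever binds the slice the user asked for, rather than the entire table.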