I am building an application in which users store their favorite content series (each of which has a unique ID).
The problem is: I want to check that those IDs exist in the database, and also to cap the size of this list at roughly 100 items.
So far I have tried to implement an RLS policy, but validating the content of the posted data there is too complex, especially for arrays.
I am considering a sort of gateway API that sits between the user and their profile's database to enforce more checks, but before working on that I want to be sure there is no way to implement this by relying on Supabase alone.
You can use a database constraint to cap the list at 100 items, and a SQL check to verify that the IDs exist in the database.
To limit the size of the list, add a check constraint to the table that stores the user's favorite content series, ensuring the array never contains more than 100 elements.
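A minimal sketch of such a constraint, assuming a hypothetical profiles table with a favorite_series array column (all names here are illustrative):

-- Cap the favorites array at 100 elements.
-- array_length() returns NULL for an empty array, hence the coalesce().
ALTER TABLE profiles
  ADD CONSTRAINT favorite_series_max_100
  CHECK (coalesce(array_length(favorite_series, 1), 0) <= 100);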
To check that the IDs exist, you can use a SQL query with the IN operator that verifies each ID in the user's list against the database.
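Since Postgres foreign keys cannot reference individual array elements, one way to run that check automatically on every write is a trigger. A minimal sketch, assuming the same hypothetical profiles table and a series table whose primary key is id:

-- Reject writes whose favorite_series contains an unknown ID.
CREATE OR REPLACE FUNCTION check_favorite_series()
RETURNS trigger AS $$
BEGIN
  IF EXISTS (
    SELECT 1
    FROM unnest(NEW.favorite_series) AS fav(id)
    WHERE fav.id NOT IN (SELECT id FROM series)
  ) THEN
    RAISE EXCEPTION 'unknown series id in favorite_series';
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER favorite_series_check
  BEFORE INSERT OR UPDATE ON profiles
  FOR EACH ROW EXECUTE FUNCTION check_favorite_series();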
Both checks can be implemented with Supabase alone, without the need for a gateway API.
I am new to Amazon DynamoDB. I have created a user table and an address table, and I have assigned a user_id in the address table to each address. I want to retrieve all users with their particular addresses. How can I get user info together with the address in one query, rather than querying both tables and merging afterwards? Is there any way, like JOIN in MySQL?
DynamoDB is not meant for these types of queries; aggregation queries in particular are not ideal. DynamoDB is mainly good for fast lookups with predefined access patterns (e.g. get all items in the shopping cart for user ID X).
Since addresses are unique properties of users, you might be able to add an attribute to the user table. So you would have one table with all user data and their properties, including the address, which you can query by user ID.
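For illustration, with such a single table the user lookup (address included) becomes one query by user ID. The sketch below uses PartiQL, DynamoDB's SQL-compatible query language, with made-up table and attribute names:

-- One item per user; address is just another attribute on that item.
SELECT user_id, name, address
FROM "users"
WHERE user_id = 'u-42';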
If you need to support different queries, I'd suggest you note them all down before deciding on your data model (e.g. get all users sorted by last name, get all users living in city X). If the DynamoDB data model becomes too complex to support all these access patterns, you might need to switch to a SQL-like DB instead.
Edit: note that there are ways to model relationships in DynamoDB, but they are not trivial. For some examples, see the link below. But as suggested above, first define your access patterns before deciding on your data model.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-modeling-nosql-B.html
I'm developing a web app in PHP that lets me generate product orders. When I first started I was thinking only about my own product orders; however, I have 3 more sales representatives and I would like them to use it as well.
The problem I'm currently facing is that when looking up product orders (via AJAX), all users can see all the product orders. I would like each user to see only *their* product orders, but I'm not sure what the query should be.
What would the query be considering these tables?
Thanks in advance!
UPDATE:
So, thanks to #Nicolas' answer, I was able to get only the information for a selected vendor. However, the nombre_cliente (client name) from the clientes table is wrong for every row: it shows only the first row's name. All the other information is correct. (Screenshots of the results and of the loop over the data were attached here.)
Thanks in advance to everyone.
From what I understand, you need to know who is asking the server on every request. There are multiple ways to achieve this, and we don't have enough information about your infrastructure to recommend one: are you using PHP sessions, an API with API keys, etc.? Either way, to get only the product orders of a particular vendor, you would execute a SQL query that looks like this:
SELECT * FROM `facturas` WHERE `id_vendedor` = 1;
In this example, 1 is the ID of the connected user.
This request filters the orders and returns only the ones placed by the user with ID 1.
You could also join both tables together and filter by some other property of the users table. Let's say we want every order by users whose first name contains "john"; we would write a request like this:
SELECT facturas.* FROM `facturas` JOIN `users` ON `facturas`.`id_vendedor` = `users`.`user_id` WHERE `users`.`firstname` LIKE '%john%';
In this case, we join both tables, mapping every id_vendedor to a user_id in the users table, and filter by firstname. The LIKE pattern translates to: 'anything john anything'.
IMPORTANT
If you are issuing SQL requests in any language, make sure you bind your parameters rather than concatenating user input into the query; otherwise you are open to SQL injection.
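The answers here don't show the PHP side, but the same idea can be sketched with MySQL's own prepared statements (the vendor ID 1 is just an example value):

-- Prepare once with a placeholder, then bind the value at execution time;
-- the user input never becomes part of the SQL text itself.
PREPARE get_orders FROM 'SELECT * FROM facturas WHERE id_vendedor = ?';
SET @vendor_id = 1;
EXECUTE get_orders USING @vendor_id;
DEALLOCATE PREPARE get_orders;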
Thanks to #Nicolas' help, I was able to fix my problem. The final query is:
SELECT * FROM facturas JOIN clientes ON facturas.id_cliente = clientes.id_cliente WHERE facturas.id_vendedor = $current_user;
($current_user is the variable I use to keep track of who is logged in.)
In my React-based single-page application, the page is divided into two panes.
Left Pane: Filter Panel.
Right Pane: Grid (table containing data that passes through applied filters)
In summary, I have an application that looks very similar to amazon.com. By default, when a user hits the application's root endpoint (/) in the browser, I fetch the last 7 days of data from the server and show it in the grid.
The filter panel has a couple of filters (e.g. a time filter to fetch data that falls inside a specified interval, IDs to search for data with a specific ID, etc.) and a search button in its header. Hitting the search button makes a POST call to the server with the selected filters in the form body; the server returns the data matching those filters, and my frontend displays it in the grid.
Now, when someone hits the search button in the filter panel, I want to reflect the selected filters in the URL's query parameters, because that lets me share these URLs with other users of my website, so that they can see the filters I applied and only the data matching those filters in the grid.
The problem is that if, on search button click, I use an HTTP GET with query parameters, I will end up breaking the application because of the URL length limits imposed by different browsers.
Please suggest the correct way to create such URLs, so that the selected filters are restored in the filter panel without causing side effects in my application.
Possible solution: since we cannot put arbitrarily long plain strings in query parameters because of browser URL length limits (note: the specification does not limit the length of an HTTP GET request, but browsers implement their own limits), we could use something like a message digest or hash (which converts input of arbitrary length into output of fixed length) and save it in a DB so the server can understand the request and serve the content back. This is just a thought; I am not sure whether it is an ideal solution to this problem.
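As a toy illustration of that fixed-length property (the filter payload below is made up, and SHA2() assumes a MySQL backend):

-- Always a 64-character hex digest, no matter how many filters are selected.
SELECT SHA2('{"interval":"7d","ids":[11,42]}', 256) AS filter_hash;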
Behavior of other heavily used websites:
- amazon.com, newegg.com: use hashed URLs.
- kayak.com: since they have very well-defined keywords, they use short forms like IN for India and BLR for Bangalore, and combine this with negation logic to further optimize maximum URL length. Not checked, but this will presumably break after a large selection of filters.
- flipkart.com: appends strings directly to query parameters and breaks after the limit is breached (verified).
In response to #cauchy's answer, we need to make a distinction between hashing and encryption.
Hashing
Hashes are by necessity irreversible. In order to map a hash back to the specific filter combination, you would need to either:
1. hash each permutation of filters on the server for every request, trying to match the requested hash (computationally intensive), or
2. store a map of hash to filter combination on the server (memory intensive).
For the vast majority of cases, option 1 is going to be too slow. Depending on the number of filters and options, option 2 may require a sizable map, but it's still your best option.
Encryption
In this scheme, the server would send its public key to the client, the client would use it to encrypt its filter options, and the server would then decrypt the encrypted data with its private key. This is good, but your encrypted data will not be fixed-length, so as more options are selected you run into the same problem of indeterminate parameter length.
Thus, in order to ensure your URL is short for any number of filters and options, you will need to maintain a mapping of hash->selection on the server.
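A minimal sketch of such a mapping, assuming a relational store and hypothetical names throughout:

-- One row per shared filter selection, keyed by its hash.
CREATE TABLE filter_permalinks (
  hash         CHAR(64) PRIMARY KEY,           -- e.g. hex SHA-256 of the canonical filter JSON
  filters      TEXT NOT NULL,                  -- serialized filter selection
  is_permalink BOOLEAN NOT NULL DEFAULT FALSE, -- long-lived share vs. ephemeral URL
  alias_of     CHAR(64) NULL,                  -- optional redirect target for retired hashes
  created_at   TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Resolving a shared URL: bind the hash from the query parameter.
SELECT filters FROM filter_permalinks WHERE hash = ?;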
How should we handle permanent vs temporary links?
You mentioned in your comment above
"If we use some persistent store to save the mapping between this hash and the actual filters, we would ideally want to segregate long-lived 'permalinks' from short-lived ephemeral URLs, and use that understanding to efficiently expire the short-lived hashes."
You likely have a service on the server that handles all of the filters that you support in your application. The trick here is letting that service also manage the hashmap. As more filters and options are added/removed, the service will need to re-hash each permutation of filter selections.
If you need strong support for permalinks, then whenever you remove filters or options, you'll want to maintain the "expired" hashes and change their mapping to point to a reasonable alternative hash.
When do we update hashes in our DB?
There are lots of options, but I would generally prefer build time. If you're using a CI solution like Jenkins, Travis, AWS CodePipeline, etc., then you can add a build step to update your DB. Basically, you're going to...
1. Keep a persistent record of all the existing supported filters.
2. On build, check whether there are any new filters. If so:
   - Add those filters to the record from step 1.
   - Hash all new filter permutations (just those that include the new filters) and store them in the hash DB.
3. Check whether any filters have been removed. If so:
   - Remove those filters from the record from step 1.
   - Find all the hashes for permutations that include those filters and either:
     - remove those hashes from the DB (weak permalinks), or
     - point each hash to a reasonable alternative hash in the DB (strong permalinks); see the sketch after this list.
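Continuing the hypothetical filter_permalinks table sketched earlier (the hash literals are made-up examples), the two policies might look like:

-- Weak permalinks: drop hashes whose permutations used a removed filter.
DELETE FROM filter_permalinks
 WHERE hash = 'e3b0c44298fc1c14';

-- Strong permalinks: keep the row, but point it at a surviving hash.
UPDATE filter_permalinks
   SET alias_of = '9f86d081884c7d65'
 WHERE hash = 'e3b0c44298fc1c14';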
Let's analyse your problem and the possible solutions.
Problem: you want a URL that carries information about the applied filters, so that when you share it, users don't land on an arbitrary page.
Solutions:
1) Append the applied filters to the URL. To achieve this you will need to shorten the keys (filter types) and values of each filter, so that the URL length doesn't grow too much per filter.
Drawback: this is not the most reliable solution; as the number of filters increases, the URL length has to increase, there is no way around it.
2) Append a unique key (hash) of the applied filters to the URL. To achieve this you will need changes on both server and client. On the client side you need an encoding algorithm that converts the applied filters into a unique hash; on the server side, a decoding algorithm that converts the hash back into the applied filters. Then, whenever such a URL is hit, the client can either make a POST API call that sends the hash and gets back the array of applied filters, or keep the decoding logic entirely on the client side.
Do all this in componentWillMount to avoid any side effect.
I think the 2nd solution is scalable and efficient in almost all cases.
We are looking to use Algolia Search for an application. We like the convenience of Algolia but are stuck on one point: we have custom user groups, and each user group can only see a subset of the records. When we push records to Algolia, all the records show up in search. How do we pair Algolia with our custom logic, so that specific users see only their permitted records and nothing else shows up in the search results?
The best way to handle this use case is to encode the permission information directly inside your records (such as a group or a user). For example, you can add a permission array to each record:
"permission": ["group1", "user42"]
You then just need to add this permission attribute to the list of attributes for faceting and apply the restriction in your query via a facetFilters argument (e.g. facetFilters=permission:group1).
I would also recommend using the secured API key feature, which applies this restriction securely even when the query comes from a browser or mobile app: an HMAC-SHA256 signature is computed in your backend from the API key and the restriction, ensuring no one can change the restriction.
I need to fetch a lot of data (maybe some 10K records) from the DB and show it as a report (I use DataTable), with data filter/search and pagination.
Question: which of the options below is the best/recommended way?
Option 1: fetch all the records at once and store them on the front end (as an object); when a filter is applied, filter the object and display the result. Likewise, I would not hit the DB for pagination (since I already have all the records on hand).
Option 2: contact the DB every time a filter/search is applied. Likewise for pagination: for example, if page 5 is selected, I send a query to the DB to fetch only that page's data and display it. (Note: the number of records per page is also selectable.)
If there is any other better way, please guide me.
Thanks,
I am not familiar with DataTable, but it appears to be similar to jqGrid, which I'm familiar with.
I prefer your proposed option #2. You are better off fetching only what you need: if you're only displaying, say, 100 rows, it's wasteful (both in terms of bandwidth and local memory usage) to fetch 10k rows at once.
Use LIMIT on the MySQL side to fetch only the records you need. If you want, say, records 201 through 300 for page 3, you'd add LIMIT 200, 100 to the end of your query (the first parameter to LIMIT says "skip the first 200 rows" and the second says "fetch 100 rows"). If DataTable works like jqGrid, you should be able to re-query the database and repopulate your table when the user changes pages, with the fetch done in the background via AJAX, which conserves bandwidth. The query is identical except for the range specified by the LIMIT at the end.
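For instance, a sketch of that page-3 query (the orders table and sort column are placeholders):

-- Page 3 at 100 rows per page: skip the first 200 rows, return the next 100.
SELECT *
FROM orders
ORDER BY created_at DESC
LIMIT 200, 100;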
Think of it this way: say you use GMail and you never archive your messages, so your inbox contains 20,000 emails, but only shows 100 per page. Do you think Google has designed the GMail front-end so that all 20k subject and from lines are fetched at once and stored locally, or is the server queried again when the user changes pages? (It's the latter.)