Get pending friend requests from one table - javascript

I'm trying to get all pending friend requests from a table with the schema (id, user_id, friend_id).
A pending request is just a single row, meaning user 1 sent a request to user 2:

id | user_id | friend_id
1  | 1       | 2
When accepted it becomes two rows, and I am able to self-join this table to find all accepted friendships (this is working just fine).
An accepted request for reference:

id | user_id | friend_id
1  | 1       | 2
2  | 2       | 1
My accepted query looks like this (I'm using Bookshelf.js and Knex.js):

const friends = new Friends();
return friends.query((qb) => {
  qb.select('friends.user_id', 'friends.friend_id');
  // Self-join: an accepted friendship has a reciprocal row
  qb.join('friends as friendsTwo', 'friends.user_id', 'friendsTwo.friend_id');
  qb.where('friends.user_id', '=', id);
}).fetchAll();
How can I modify this to get only the one-way relationships?
My first thought was leftJoin, but I couldn't seem to get it to work, so if anyone knows an answer or has seen a good one, please point me to it. Thanks :)

I think you can move away from the idea of a join entirely in this project. I did something similar in previous projects, and I think it is best to add a third field, isMutual, indicating whether the friendship is mutual or not. The field is self-explanatory, and I think you get the idea. After that, your table would look like this:
No accepted friendship:

id | user_id | friend_id | isMutual
1  | 1       | 2         | false
Accepted friendship:

id | user_id | friend_id | isMutual
1  | 2       | 1         | true
2  | 1       | 2         | true
I believe doing this will actually benefit your system, as your queries will be faster; also try creating a compound index on (user_id, friend_id). This solution is more about shifting the load.
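If you go this route, a knex migration could add the flag and the compound index together. A minimal sketch (the isMutual column and the (user_id, friend_id) index are the ones suggested above; the migration file layout itself is an assumption):

// Hypothetical knex migration: add the mutual flag and the compound index
exports.up = (knex) =>
  knex.schema.alterTable('friends', (table) => {
    table.boolean('isMutual').notNullable().defaultTo(false);
    table.index(['user_id', 'friend_id']);
  });

exports.down = (knex) =>
  knex.schema.alterTable('friends', (table) => {
    table.dropIndex(['user_id', 'friend_id']);
    table.dropColumn('isMutual');
  });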
Conclusion
Using this, you can achieve faster queries, but all the hard work done during read operations in the previous schema shifts to write operations; in other words, writes become slower.
And make sure you use a transaction when updating the isMutual field.
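A sketch of what that accept flow could look like in knex (the acceptRequest function and its shape are assumptions, not from the question):

// Accepting a request: flag the original row and insert the reciprocal row
// inside one transaction, so the pair can never get out of sync.
function acceptRequest(knex, userId, friendId) {
  return knex.transaction(async (trx) => {
    await trx('friends')
      .where({ user_id: userId, friend_id: friendId })
      .update({ isMutual: true });
    await trx('friends').insert({
      user_id: friendId,
      friend_id: userId,
      isMutual: true,
    });
  });
}

With this schema, pending requests become a plain where('user_id', id).where('isMutual', false) filter, no join needed.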

qb.leftJoin('friends as friendsTwo', 'friends.user_id', 'friendsTwo.friend_id');
qb.whereNull('friendsTwo.user_id');
The LEFT JOIN keeps all rows (pending or accepted), and the whereNull then filters to keep only those with no matching record in friendsTwo, thus returning only the pending links.
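Put together with the accepted query from the question, the whole pending query could look like this. A sketch: the extra andOn clause is my addition so that the reciprocal match is exact for the (user_id, friend_id) pair, not something from the original answer:

const friends = new Friends();
return friends.query((qb) => {
  qb.select('friends.user_id', 'friends.friend_id');
  // Try to find the reciprocal row for each friendship row
  qb.leftJoin('friends as friendsTwo', function () {
    this.on('friends.user_id', '=', 'friendsTwo.friend_id')
        .andOn('friends.friend_id', '=', 'friendsTwo.user_id');
  });
  // No reciprocal row found, so the request is still pending
  qb.whereNull('friendsTwo.user_id');
  qb.where('friends.user_id', '=', id);
}).fetchAll();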


Increment the "commentsNumber" in the post table for all comments in the comments table that have a specific postId

This is the post table, and this is the comments table (screenshots omitted).
Is there a way to increase the value of "commentsNumber" for each comment that has postId equal to the post's id? I hope I made myself understood in some way.
The idea was to increase the value of "commentsNumber" and then fetch the data, so as to show the number of comments on the post. Of course, if there's a better way to handle this, suggestions are welcome.
I should mention that the "commentsNumber" column was not present initially; if someone knows how to do this through a query and thinks that's better, tell me.
When including sample data in your question, do not insert images! Instead you should include a markdown table of data (tableconvert.com makes this very easy) and/or the CREATE TABLE statements and INSERTS so we can quickly and easily reproduce your example/problem.
In the vast majority of cases, storing the redundant count of child records is considered premature optimisation (at best). RDBMSes can perform these simple tasks incredibly quickly and efficiently, as long as the data is indexed appropriately.
It should be calculated on the fly:

SELECT p.*, COUNT(c.id) AS commentsNumber
FROM posts p
LEFT JOIN comments c
  ON p.id = c.postId
GROUP BY p.id

It is important that there is an index on comments.postId, but it should already be there, as it is a foreign key.
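If you do decide to keep the redundant column and only need to populate it once, a correlated subquery in an UPDATE does it. A sketch, using knex.raw as a stand-in runner (the table and column names are the ones from the question; MySQL syntax is assumed):

// One-time backfill of commentsNumber from the actual comment rows
knex.raw(`
  UPDATE posts p
  SET commentsNumber = (
    SELECT COUNT(*)
    FROM comments c
    WHERE c.postId = p.id
  )
`);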

ArrayCollection (Collection of forms) index collision in Symfony 2

I am using Symfony2 to build my page.
When I try to update a collection of forms (as described in the cookbook entry "How to Embed a Collection of Forms"), I get a collision between the indexes in the frontend and the indexes of the ArrayCollection in the backend.
I've got the relation User <-> Address (OneToMany). A user wants to create/update/delete his addresses, so he can add and delete address elements in the frontend with the help of the JavaScript part. He does the following:
(1) Adds new address (has index: 0)
(2) Adds new address (has index: 1) and instantly removes this address again
(3) Adds new address (has index: 2).
When he clicks on save button, the following code saves/updates the user (and its addresses):
$this->em->persist($user);
$this->em->flush();
New addresses, for example, are then correctly persisted to the database.
Now the user wants to update the address with, e.g., index 0.
When he now clicks the save button, it updates the address with "index 0", but at the same time it adds the address with "index 2" to the database (object) again.
To better understand the problem, I've drawn a small illustration (handmade, sorry for my bad art skills; image omitted).
Now I've got the address with "index 1" twice within my object/database.
I know why this happens: the first "index 1" address gets mapped to ArrayCollection element "number 1", and the second gets mapped to "number 2" (because of the frontend name "index 2").
You could say: "it just fills up the addresses until it reaches the frontend index in the backend".
But how can I fix this behaviour?
Side note:
This behaviour occurs with AJAX requests, because if you reloaded the page after clicking the save button, the frontend indexes would be correctly reindexed to match the backend.
My suggestion for handling the situation: reindex the frontend indexes after clicking save, using the server-side indexes. Is this a clean solution, or the only solution to my problem?
Yes, this is a problem of the Symfony form collection, and it has no easy solution IMHO. But I have to ask: why don't you do exactly the same thing a page refresh does? You can refresh only the HTML snippet containing the collection; the HTML for the snippet can come from the server side. Back to your question: yes, reindexing is a good solution unless you want to try writing a custom collection type on your own.
symfony/symfony/issues/7828
There is a similar problem with validation in collections: symfony/symfony/issues/7468.
I think the default collection type and the tutorial in the Symfony docs have some drawbacks. Hope that helps.
I have worked around this issue on the client side by modifying the JavaScript/jQuery code given in the Symfony documentation.
Instead of numbering new elements by counting the sub-elements, I look at the last element's id and extract its index with a regular expression.
When adding an element, I increment the last index by 1. That way, I never use the same index twice.
Here is my code:

// Initialize the default index at 0
var index = 0;
// Look for collection fields in the form
var $findinput = $container.find(':input');
// If fields were found, look for the last existing index
if ($findinput.length > 0) {
    // Read the id of the last field
    var myString = $findinput.last().attr('id');
    // Regular expression extracting the number from an id made of letters, hyphens and underscores
    var myRegex = /^[-_A-Za-z]+([0-9]+)[-_A-Za-z]*$/;
    // Execute the regular expression on the last collection field id
    var test = myRegex.exec(myString);
    // Extract the last index and increment it by 1 (exec returns null when nothing matches)
    if (test) index = parseInt(test[1], 10) + 1;
}
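For completeness, the computed index is then used exactly where the cookbook snippet used its counter. A sketch based on the documented data-prototype/__name__ convention ($container is the variable from the code above):

// Adding a new row: substitute the computed index into the prototype
// instead of the collection length, then bump it for the next addition
var prototype = $container.attr('data-prototype');
var newForm = prototype.replace(/__name__/g, index);
$container.append(newForm);
index++;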
I ran into this problem a couple of times during the past two years. Usually, following the Symfony tutorial How to Embed a Collection of Forms does the job just fine. You need to do a little bit of JavaScript coding to add the edit/update functionality, but other than that you should be fine with this approach.
If, on the other hand, you have a really complex form which uses AJAX for validation/saving/calculations/business logic/etc., I've found it's usually better to store the final data in an array in the session. After submitting the form, inside the if ($form->isValid()) {...} block, you would have:

$collection = new ArrayCollection($mySessionPlainArray);
$user->setAddress($collection);

I would like to warn you to be careful with the serialization of your data; you might get some awkward exceptions or misbehavior if you're using entities (see my question).
I'm sorry I can't provide more code, but the solution to this problem is sometimes quite complex.

CouchDB map/reduce function to show limited results for a user by date

I am one of many SQL users who probably have a hard time transitioning to the NoSQL world. I have a scenario where I have tonnes of entries in my database, but I would only like to get the most recent ones, which is easy; the catch is that they should all be for the same user. I'm sure it's simple, but after loads of trial and error without a good solution, I'm asking you for help!
So, my keys look like this (because I'm thinking that's the way to go!):

emit([doc.eventTime, doc.userId], doc);

My question then is: how would I go about getting only the 10 most recent results from CouchDB, for that one specific user? The reason I include the time in the key is that I think it's the simplest way to sort the results descending, as I want the ten latest actions, for example.
If I had to do it in SQL, I'd do this, to give you an exact example:

SELECT * FROM table WHERE userId = ID ORDER BY eventTime DESC LIMIT 10

I hope someone out there can help :-)
Change your key to:
emit([doc.userId, doc.eventTime], null);
Query with:
view?descending=true&startkey=[<user id>,{}]&endkey=[<user id>]&limit=10
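For context, a complete map function emitting that key could look like this (a sketch; the field names are the ones from the question):

function (doc) {
  // userId first keeps one user's rows contiguous in the index;
  // eventTime second sorts them chronologically within that user
  if (doc.userId && doc.eventTime) {
    emit([doc.userId, doc.eventTime], null);
  }
}

Note that with descending=true the startkey and endkey swap roles, which is why the query above starts at [<user id>, {}]. Emitting null and adding include_docs=true to the query is also cheaper than emitting the whole doc as the value.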
So add something like this to a view...

"test": {
  "map": "function(doc) { var key = doc.userId; var value = { time: doc.eventTime, userid: doc.userId }; emit(key, value); }"
}

And then call the view (assuming userId = "123"):

http://192.168.xxx.xxx:5984/dbname/_design/docname/_view/test?key="123"&limit=10

You will need to add some logic to the map to get the most recent, as I don't believe order is preserved in any manner.

How to read from another row in javascript step of Pentaho?

I'm working on an ETL process with Pentaho Data Integration (Spoon, formerly known as Kettle).
In the Modified JavaScript step of Pentaho you can set a start, end and transform script. In the transform script you write code that is executed once for each row, and from there I don't know how to access the data of the previous row (if that's even possible).
I need access to the previous row because all rows are ordered by product, store and date (respectively), and the goal is to take the quantity on hand from the previous row and add the quantity sold or received in the current row (same product, same store, but a different date). I also need the previous row to compare its product and store with those of the current row, because if either of them changes I must reset the field quantity_on_hand (I do this with a column named initial_stock present on every row).
In pseudocode it would be something like this (if I didn't have the restriction that the code written in the step is executed only once per row):
while (!all_rows_processed()) {
    current_row.quantity_on_hand = current_row.initial_stock;
    while (id_product_current_row == id_product_previous_row && id_store_current_row == id_store_previous_row) {
        current_row.quantity_on_hand = previous_row.quantity_on_hand + current_row.stock_variation;
    }
}
This related question couldn't help me.
Any ideas to solve my problem would be appreciated.
May I ask you to reconsider the Group By step? It seems suitable for your scenario.
If you sort the stream by your date/store/article combination, you can calculate a cumulative sum of the sold/received quantities. This way you have a running total of the inventory variation that is reset on a group basis.
Also take a look both at this blog post and at the forum post it quotes.
I doubt you need to go to JavaScript for this. Check out the Analytic Query step; it will allow you to bring a value from the previous row into the current one.
The JavaScript step gives you tremendous flexibility, but if you can do it with the regular transform steps, it will typically be much faster.
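That said, if you do stay in the Modified JavaScript step, you can carry state across rows: variables initialised in the Start script persist, because it runs once and shares its scope with the per-row Transform script. A minimal sketch, assuming the field names from the question:

// Start script: runs once, before the first row
var prevProduct = null;
var prevStore = null;
var prevQty = 0;

// Transform script: runs once per row
var quantity_on_hand;
if (id_product == prevProduct && id_store == prevStore) {
  quantity_on_hand = prevQty + stock_variation;
} else {
  quantity_on_hand = initial_stock;
}
prevProduct = id_product;
prevStore = id_store;
prevQty = quantity_on_hand;
// Expose quantity_on_hand as a new output field in the step's fields grid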
Use Analytic Query. With this step you can access the previous/next record; in fact, you can read not just the previous and next records but N rows forward or N rows backward.
Check the following URLs for a clearer explanation:
http://wiki.pentaho.com/display/EAI/Analytic+Query
http://www.nicholasgoodman.com/bt/blog/2009/01/30/the-death-of-prevrow-rowclone/
Thanks all, I've got the solution to my problem.
I've combined all your suggestions and used the Analytic Query, Modified JavaScript and Group By steps.
Although the question wasn't very well formulated, the problem I had was to calculate the stock level on each row (there was one row for each product, date and store combination).
First (after sorting the rows by product_id, store_id and date ascending), I used the Analytic Query step grouped by product_id and store_id, because with this step I got a new field previous_date that identifies the first row of each group (previous_date is null on the row of the group where the date is the oldest).
Then I needed to calculate the quantity_on_hand of each [product, store] group at its first row (the first date of each group, since it's sorted by date), because the initial_stock is different for each group. This is because (sum(quantity_received) - sum(quantity_sold)) != quantity_on_hand.
Finally (and this was the key), I used the Group By step as @andtorg suggested, as the next image shows (image omitted).
The link that @andtorg suggested was very useful; it even includes two .ktr example files.
Thank you so much for the help!

Getting stream posts by fql query with limit returns unexpected results

Because of the October 2013 migration, the Graph API page feed call doesn't return the likes.count field, so I had to make another API call with the FQL query shown below (if there is a way to get the like count per post in one API call, please help me; otherwise read on):

select post_id, like_info.like_count from stream where source_id = page_id_here limit 5

But it returned only 3 results, not 5 as expected! This happens mostly when I use an access token generated by an application that has the October 2013 migration enabled. So how do I solve the problem? If I set a time limit in the query, I can't be sure what time limit to specify, because my limit setting may be 5, 50, 100 or 500, and obviously setting a time period is not a proper solution.
Also, if I increase the limit value, say by 50, then for 5 desired posts (limit value = 55) it may work out, but for 100 desired posts (limit = 150) it might still not return 100 items, and likewise for larger numbers.
So I am puzzled, and this is making my clients wait. Any suggestions and solutions are very much welcome.
For some odd reason, when you specify a subquery (be it a useful or useless one), the desired results are more likely to show up. Example:
SELECT post_id,like_info.like_count
FROM stream
WHERE post_id IN (SELECT post_id FROM stream WHERE source_id = page_id_here)
limit 5
Hope this will help!
