ON DUPLICATE KEY UPDATE with Web SQL and variables - javascript

I am trying to do this, but it is not working:
tx.executeSql('INSERT INTO MOVIE (id, rate) VALUES(?,?) ON DUPLICATE KEY UPDATE rate=VALUES(rate)',[result.id,score]);
My id is an INT NOT NULL UNIQUE and rate an INT.
I think my syntax is wrong... Do you have a solution?
Thx :)
Anthony.

As stated in the Web SQL Database specification:
User agents must implement the SQL dialect supported by Sqlite 3.6.19.
So my guess is that you are going to face the same issues you get with SQLite. As discussed in SQLite UPSERT - ON DUPLICATE KEY UPDATE, the MySQL-style ON DUPLICATE KEY UPDATE clause is not supported by SQLite, so I suggest trying one of the solutions provided in the answers there. Maybe something like this would work:
db.transaction(function (tx) {
  tx.executeSql('INSERT OR IGNORE INTO MOVIE VALUES (?, ?)', [result.id, score]);
  tx.executeSql('UPDATE MOVIE SET rate = ? WHERE id = ?', [score, result.id]);
});
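Alternatively, SQLite's INSERT OR REPLACE conflict clause (one of the answers in that thread) should collapse this into a single statement. A minimal sketch, which only works cleanly here because MOVIE has just the two columns (REPLACE deletes the conflicting row before inserting, so any other columns would be reset):
db.transaction(function (tx) {
  // REPLACE deletes any row that conflicts on the UNIQUE id, then inserts anew
  tx.executeSql('INSERT OR REPLACE INTO MOVIE (id, rate) VALUES (?, ?)', [result.id, score]);
});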
By the way, Web SQL Database is deprecated.

Related

Security Context with several elements on CubeJS

I'm currently working on Cube.js, building a cube in which I want to restrict the data to a user based on their gender. So I end up with:
cube(`Data`, {
  sql: `select * from my_table where ${SECURITY_CONTEXT.user_gender.filter(user_gender)}`,
  ...
as explained here
But now I want to restrict the data to a user based on both gender AND age. How should I proceed? I was thinking about something like this...
cube(`Data`, {
  sql: `select * from my_table where ${SECURITY_CONTEXT.user_gender.user_age.filter(user_gender,user_age)}`, // ????
  ...
...but it seems weird to chain two "attributes" like .user_gender.user_age.filter onto SECURITY_CONTEXT.
I hope someone has already tried something like that.
Thank you!
You'll need to use SECURITY_CONTEXT twice:
cube(`Data`, {
  sql: `select * from my_table where ${SECURITY_CONTEXT.user_gender.filter(user_gender)} AND ${SECURITY_CONTEXT.user_age.filter(user_age)}`,
  ...
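Each filter() expands to its own parameterized predicate (and, if memory serves from the docs, a no-op 1 = 1 when that key is missing from the security context), so chaining them with AND composes cleanly. A fuller sketch of the cube, passing the column names as strings (an assumption; adjust to your schema):
cube(`Data`, {
  sql: `select * from my_table
    where ${SECURITY_CONTEXT.user_gender.filter('user_gender')}
    and ${SECURITY_CONTEXT.user_age.filter('user_age')}`,
  // measures and dimensions as before
  ...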

typeorm efficient bulk update

I have an update query using TypeORM on a PostgreSQL database, like the one below, which runs over a list of 20+ items frequently (once every 30 seconds). It takes approximately 12 seconds for the update, which is far too slow for my needs.
for (const item of items) {
  await getConnection().createQueryBuilder().update(ItemEntity)
    .set({ status: item.status, data: item.data })
    .whereInIds(item.id)
    .execute();
}
Is it possible to perform such a bulk update in a single query, instead of iterating over the items? If so, how?
item.status and item.data are unique for each item.
There is a way to work around this with an upsert: insert an array of rows that already exist in the db, and use ON CONFLICT to update them.
const queryInsert = manager
  .createQueryBuilder()
  .insert()
  .into(Entity)
  .values(updatedEntities)
  .orUpdate(["column1", "column2", "otherEntityId"], "PK_table_entity")
  .execute();
This will run something like:
INSERT INTO entity (
  "id", "column1", "column2", "otherEntityId"
) VALUES
  ($1, $2, $3, $4),
  ($5, $6, $7, $8)
ON CONFLICT
  ON CONSTRAINT "PK_table_entity"
DO UPDATE SET
  "column1" = EXCLUDED."column1",
  "column2" = EXCLUDED."column2",
  "otherEntityId" = EXCLUDED."otherEntityId"
But you need to be aware that orUpdate does not support entity relations; you will need to pass the id column of the related entity. It also doesn't do any manipulation for the naming strategy. Another caveat is that it only works if you're not using @PrimaryGeneratedColumn for your PK (you can use @PrimaryColumn instead).
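To illustrate that last caveat, a sketch of the asker's ItemEntity shaped so the upsert can target it (column types are my own assumptions):
import { Entity, PrimaryColumn, Column } from 'typeorm';

@Entity()
export class ItemEntity {
  @PrimaryColumn() // not @PrimaryGeneratedColumn, so the upsert can match on it
  id: number;

  @Column()
  status: string;

  @Column()
  data: string;
}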
Using pure psql this can be done as described in the answers to: Update multiple rows in same query using PostgreSQL
However, the UpdateQueryBuilder from TypeORM does not support a from clause.
For now, I think that a raw query is the only way, i.e. getManager().query("raw sql ...").
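For reference, a minimal sketch of that raw-query route, using Postgres's UPDATE ... FROM (VALUES ...) form; the item table and column names are assumptions based on the entity above:
// Build one parameterized row per item: ($1, $2, $3), ($4, $5, $6), ...
const valueRows = items
  .map((_, i) => `($${i * 3 + 1}::int, $${i * 3 + 2}, $${i * 3 + 3})`)
  .join(', ');
const params = items.flatMap(item => [item.id, item.status, item.data]);

await getManager().query(
  `UPDATE item AS i
   SET status = v.status, data = v.data
   FROM (VALUES ${valueRows}) AS v(id, status, data)
   WHERE i.id = v.id`,
  params,
);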

Knex subquery to sum data from 2nd table

I'm trying to write a query using knex to SUM the votes for each question but am not getting the correct sum. I can write the subquery in SQL but can't seem to piece it all together. I am a student and not sure if I'm doing something wrong with Knex or if my underlying logic is wrong. Thanks in advance for any help!
My knex query looks like this
return knex
  .from('question')
  .select(
    'question.id AS question_id',
    knex.raw(
      `count(DISTINCT vote) AS number_of_votes`, // this returns the number_of_votes for each question_id as expected
    ),
    knex.raw(
      `sum(vote.vote) AS sum_of_votes`, // something wrong here... E.g., question_id 1 has 3 down votes so the sum should be -3, however I am getting -9
    ),
  )
  .leftJoin('user', 'question.user_id', 'user.id')
  .leftJoin('vote', 'question.id', 'vote.question_id')
  .groupBy('question.id', 'user.id');
There are 3 tables that look like:
user
  id
  user_name
question
  id
  title
  body
  user_id (FK references user.id)
vote
  question_id (FK references question.id)
  user_id (FK references user.id)
  vote (-1 or 1)
  PRIMARY KEY (question_id, user_id)
I did manage to write the query as a stand-alone SQL query and verified that it works as expected. This is what I am trying to accomplish in the above knex query:
SELECT question.id, sum(vote.vote) AS sum_of_votes FROM question LEFT JOIN vote ON question.id = vote.question_id GROUP BY question.id;
So, broadly your SQL query is correct (after fixing a couple of typos), although as @felixmosh points out, it has no user information in it: might be tricky to figure out who voted for what! But perhaps you don't need that for your purposes.
Your posted solution will do the trick, but is perhaps not the most efficient query for the job as it involves a subquery and several joins. Here's the SQL it generates:
SELECT "question"."id" AS "question_id",
count(DISTINCT vote) AS number_of_votes,
(
SELECT sum(vote) FROM vote
WHERE question_id = question.id
GROUP BY question_id
) AS sum_of_votes
FROM "question"
LEFT JOIN "user" ON "question"."user_id" = "user"."id"
LEFT JOIN "vote" ON "question"."id" = "vote"."question_id"
GROUP BY "question"."id", "user"."id";
We can take a simpler approach to get the same information. How about this?
SELECT question_id,
count(vote) AS number_of_votes,
sum(vote) AS sum_of_votes
FROM vote
GROUP BY question_id;
This gets all the information you were looking for, without joining any tables or using subqueries. It also avoids DISTINCT, which can miscount the votes. The Knex to generate such a query looks like this:
knex("vote")
.select("question_id")
.count("vote AS number_of_votes")
.sum("vote AS sum_of_votes")
.groupBy("question_id")
You only really need to join tables here if you were looking for further information from those tables (such as the user's name or the question's title).
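If you did want, say, the question's title as well, one sketch (not tested against your schema) joins back to question and adds the extra column to the grouping:
knex("vote")
  .join("question", "question.id", "vote.question_id")
  .select("vote.question_id", "question.title")
  .count("vote AS number_of_votes")
  .sum("vote AS sum_of_votes")
  .groupBy("vote.question_id", "question.title")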
After hours of trying to figure this out I finally got it. Here is the solution:
return knex
  .from('question')
  .select(
    'question.id AS question_id',
    knex.raw(
      `count(DISTINCT vote) AS number_of_votes`,
    ),
    knex.raw(
      `(SELECT sum(vote) FROM vote WHERE question_id = question.id GROUP BY question_id) AS sum_of_votes`,
    ),
  )
  .leftJoin('user', 'question.user_id', 'user.id')
  .leftJoin('vote', 'question.id', 'vote.question_id')
  .groupBy('question.id', 'user.id');

How to save Item in dynamodb with GSI condition?

I have a DynamoDB table that has a Global Secondary Index with a range key (email, hashedPassword).
I want to save an item only if the email is not duplicated. I used attribute_not_exists but it doesn't work. I also tried:
ConditionExpression: "#email <> :email",
ExpressionAttributeNames: {"#email": "email"},
ExpressionAttributeValues: {":email": userInfo.email}
without success.
Can anyone help me, please?
Thank you.
The condition expression for DynamoDB only works on the item it is working with, and not across items.
In other words, condition expression does not get evaluated against other items.
For example, if you are creating a new item, you can only enforce the email constraint if you use the Primary Key (Partition + Sort Key if you have one) as the unique constraint.
Some options you have:
Perform a read before the insert. This is not going to guarantee uniqueness of the email, but should catch a lot of duplicates.
Use Email as the Primary Key (see the sketch after this list).
Perform a consistent read after the insert, and roll back (delete) the new item if another item already uses the email.
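For the second option, a minimal sketch using the AWS SDK v2 DocumentClient; the Users table name is my own placeholder:
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

docClient.put({
  TableName: 'Users', // placeholder table name
  Item: { email: userInfo.email, hashedPassword: userInfo.hashedPassword },
  // Rejects the write if an item with this partition key already exists
  ConditionExpression: 'attribute_not_exists(email)',
}, function (err) {
  if (err && err.code === 'ConditionalCheckFailedException') {
    // the email is already taken
  }
});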
HTH

CouchDB map/reduce function to show limited results for a user by date

I am one of many SQL users who have a hard time transitioning to the NoSQL world. I have a scenario where I have tonnes of entries in my database, but I would only like to get the most recent ones, which is easy; the catch is that they should all be for the same user. I'm sure it's simple, but after loads of trial and error without a good solution, I'm asking you for help!
So, my keys look like this.. (because I'm thinking that's the way to go!)
emit([doc.eventTime, doc.userId], doc);
My question then is: how would I go about getting only the 10 most recent results from CouchDB, for that one specific user? The reason I include the time in the key is that I think it's the simplest way to sort the results in descending order, since I want the ten most recent actions, for example.
If I had to do it in SQL i'd do this, to give you an exact example.
SELECT * FROM table WHERE userId = ID ORDER BY eventTime DESC LIMIT 10
I hope someone out there can help :-)
Change your key to:
emit([doc.userId, doc.eventTime], null);
Query with:
view?descending=true&startkey=[<user id>,{}]&endkey=[<user id>]&limit=10
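Putting userId first groups all of a user's rows together, and eventTime then orders them within that user; with descending=true the start and end keys swap roles, which is why startkey carries the {} sentinel (it sorts after any timestamp). A minimal design-document sketch, with the design-doc and view names my own:
{
  "_id": "_design/events",
  "views": {
    "by_user_time": {
      "map": "function (doc) { if (doc.userId && doc.eventTime) { emit([doc.userId, doc.eventTime], null); } }"
    }
  }
}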
So add something like this to a view...
"test": {
"map": "function(doc) { key = doc.userId; value = {'time': doc.eventTime, 'userid': doc.userId}; emit(key, value)}"
}
And then call the view (assuming userId = "123"):
http://192.168.xxx.xxx:5984/dbname/_design/docname/_view/test?key="123"&limit=10
You will need to add some logic to the map to get the most recent, as I don't believe order is preserved in any manner.
