I'm trying to insert a new course row into my Courses database from Postman and I cannot get the syntax right.
I have 159 values that need to go into a single course. Is there any way to summarise them instead of having to write out every value?
Currently this is my query:
const addCourse = "INSERT INTO courses VALUES (value1 'value2','value3', 'value4', 'value5', 'value6', 'value7', 'value8', 'value9', 'value10', 'value11', 'value12', 'value13', 'value14', 'value15', 'value16', 'value17', 'value18', 'value19', 'value20', 'value21', 'value22', 'value23', 'value24', 'value25', 'value26', 'value27', 'value28', 'value29', 'value30', 'value31', 'value32', 'value33', 'value34', 'value35', 'value36', 'value37', 'value38', 'value39', 'value40', 'value41', 'value42', 'value43', 'value44', 'value45', 'value46', 'value47', 'value48', 'value49', 'value50', 'value51', 'value52', 'value53', 'value54', 'value55', 'value56', 'value57', 'value58', 'value59', 'value60', 'value61', 'value62', 'value63', 'value64', 'value65', 'value66', 'value67', 'value68', 'value69', 'value70', 'value71', 'value72', 'value73', 'value74', 'value75', 'value76', 'value77', 'value78', 'value79', 'value80', 'value81', 'value82', 'value83', 'value84', 'value85', 'value86', 'value87', 'value88', 'value89', 'value90', 'value91', 'value92', 'value93', 'value94', 'value95', 'value96', 'value97', 'value98', 'value99', 'value100', 'value101', 'value102', 'value103', 'value104', 'value105', 'value106', 'value107', 'value108', 'value109', 'value110', 'value111', 'value112', 'value113', 'value114', 'value115', 'value116', 'value117', 'value118', 'value119', 'value120', 'value121', 'value122', 'value123', 'value124', 'value125', 'value126', 'value127', 'value128', 'value129', 'value130', 'value131', 'value132', 'value133', 'value134', 'value135', 'value136', 'value137', 'value138', 'value139', 'value140', 'value141', 'value142', 'value143', 'value144', 'value145', 'value146', 'value147', 'value148', 'value149', 'value150', 'value151', 'value152', 'value153', 'value154', 'value155', 'value156', 'value157 ', 'value158' )"
This is my courseController code:
const addCourse = (req, res) => {
    console.log(req.body);
    const { id_curso } = req.body;
    // check whether the curso already exists
    pool.query(queries.checkIdCursoExists, [id_curso], (error, results) => {
        if (results.rows.length) {
            res.send("Este curso ja existe.");
        }
        // add course
        pool.query(queries.addCourse, (error, results) => {
            if (error) throw error;
            res.status(201).send("Curso criado com sucesso");
        });
    });
};
The problem I encounter is this error message whether I have value1 in quotes or not:
error: type "value1" does not exist
The course is not posted onto my database.
The path to your answer lies in your INSERT statement. I guess your courses table has 159 columns in it. (That is a great many columns and may suggest the need to normalize your table. SQL handles multiple-row data more efficiently than multiple-column data where it makes sense to do so. But you did not ask about that.)
The INSERT syntax is either this:
INSERT INTO tbl (col1, col2, col3) VALUES (const, const, const);
or this:
INSERT INTO tbl VALUES (const, const, const);
The first syntax allows you to insert a row without giving values for every column in the table. You use the second syntax. It requires you to give one constant value for each column of your table. But your insert looks like this:
INSERT INTO courses VALUES (value1 'value2', ... , 'value158')
I see some problems with this.
You only have 158 values, but your question says you have 159.
value1, the first value in your list, isn't a constant.
You need a comma after your first value.
All your value constants are text strings. Yet you mentioned float in the title of your question.
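Since you asked about summarising the 159 values: one option (a sketch only, assuming the pg pool from your controller and that the request body lists the fields in table-column order) is to generate the placeholders programmatically and pass the values as query parameters, which also makes the quoting and typing problems go away:

// Sketch: build "$1, $2, ..., $159" instead of typing every value by hand.
// Assumption: req.body's fields are in the same order as the table columns.
const values = Object.values(req.body);
const placeholders = values.map((_, i) => `$${i + 1}`).join(', ');

pool.query(`INSERT INTO courses VALUES (${placeholders})`, values, (error, results) => {
    if (error) throw error;
    res.status(201).send("Curso criado com sucesso");
});

With parameters, pg sends each value with its proper type, so numeric (float) columns no longer receive quoted text strings.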
I have a Postgres server with one column (say marks) of type VARCHAR(255) that is supposed to hold numbers; a SELECT * query returns ['100','50','21','14', ...] etc.
I would like to run a range query on it: the user passes [10,30] and gets ['21','14'] as the result. I think this requires casting at the time the BETWEEN query runs, but I cannot get it to work properly.
I am using Sequelize.js, which is generating the following query:
SELECT "id"
FROM "token_attributes" AS "token_attributes"
WHERE "token_attributes"."attributesDirectoryId" = 3
AND CAST('token_attributes.attributeValue' AS INTEGER) BETWEEN 10 AND 30;
This query also fails on the server. The Sequelize query object that is being created is:
{
where: {
attributesDirectoryId: 3,
attributeValue: Where { attribute: [Cast], comparator: '=', logic: [Object] }
},
attributes: [ 'id' ]
}
I have used the following code to create the where condition (cast and where were imported from sequelize):
let whereFilter = {}
let value = where(cast(`${tableName}.attributeValue`, 'integer'), { [Op.between]: rangeAsInt })
whereFilter['attributeValue'] = value
So this is basically calling table.findAll({ where: whereFilter }). I am not sure how to make Sequelize generate the correct SQL, or what the correct SQL would even be. Can anyone help?
Found the issue: I had missed the sequelize.col function:
let whereFilter = {}
let value = where(cast(col(`${tableName}.attributeValue`), 'integer'), { [Op.between]: rangeAsInt })
whereFilter['attributeValue'] = value
and the query would be :
SELECT "id"
FROM "token_attributes" AS "token_attributes"
WHERE "token_attributes"."attributesDirectoryId" = 3
AND CAST("token_attributes"."attributeValue" AS INTEGER) BETWEEN 10 AND 30;
This is a simplified version of the DB structure that I'm working with:
"user" : {
"nhbAQ9p8BrMoAIbJNKvLlXTdiNz2" : {
"log" : {
"-LhMVugmjmIdqwrJSURp" : {
"a" : 25120,
"timeStamp" : 1560312000000,
},
"-Lh_Z9GsJJvlMOpVV9jU" : {
"a" : 19033,
"timeStamp" : 1564718400000,
}
}
}
}
I'm having issues filtering and retrieving the value of "a" for a given user id (e.g. nhbAQ9p8BrMoAIbJNKvLlXTdiNz2) and timeStamp (e.g. 1560312000000).
I've tried combinations of orderByChild(), equalTo(), and adding a once() listener to do the task but they've only returned null so far.
The code that I have:
firebase.database().ref('user/' + userID + '/log').orderByChild('timeStamp').equalTo(targetTimeStamp).once('value').then(function(snapshot){
    let userLog = snapshot.val().a
})
where userID is a string and targetTimeStamp is a number.
I checked the doc and a post about orderByChild() but I'm still not sure what is causing it to return null.
This is my first time posting a question; please comment if there's any way I can make this clearer. Any help is much appreciated!
You say that userID and targetTimeStamp are both strings.
That is the reason nothing is returned: in the database the value of the timeStamp property is a number, and comparing a number to a string never returns a match.
To make the query work, convert the string to a number:
...equalTo(parseInt(targetTimeStamp)).once(...
Aside from that a query against the Firebase Database may potentially have multiple results. So the snapshot contains a list of those results. Even if there is only a single result, the snapshot will contain a list of one result.
So you need to handle that case too:
firebase.database().ref('user/' + userID + '/log').orderByChild('timeStamp').equalTo(parseInt(targetTimeStamp)).once('value').then(function(results){
    results.forEach(function(snapshot) {
        let userLog = snapshot.val().a
    })
})
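Note that userLog above only exists inside the forEach callback. If you want the matching values afterwards, collect them first; a sketch with the same assumed structure:

firebase.database().ref('user/' + userID + '/log')
    .orderByChild('timeStamp')
    .equalTo(parseInt(targetTimeStamp, 10))
    .once('value')
    .then(function(results) {
        const logValues = [];
        results.forEach(function(snapshot) {
            logValues.push(snapshot.val().a);
        });
        console.log(logValues); // e.g. [25120] for timeStamp 1560312000000
    });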
I'm trying to create a relationship between two tables; however, I run into issues when I try to insert my object into the Postgres database. I keep getting the following error: insert or update on table "tournament" violates foreign key constraint "tournament_league_id_foreign", which I guess is related to my knex syntax?
Insert into db
var data = {
id: id,
name: name,
league_id: leagueId
};
var query = knex('tournament').insert(data).toString();
query += ' on conflict (id) do update set ' + knex.raw('name = ?, updated_at = now()',[name]);
knex.raw(query).catch(function(error) {
log.error(error);
})
Knex tables
knex.schema.createTable('league', function(table) {
table.increments('id').primary();
table.string('slug').unique();
table.string('name');
table.timestamp('created_at').notNullable().defaultTo(knex.raw('now()'));
table.timestamp('updated_at').notNullable().defaultTo(knex.raw('now()'));
}),
knex.schema.createTable('tournament', function(table) {
table.string('id').primary();
table.integer('league_id').unsigned().references('id').inTable('league');
table.string('name');
table.boolean('resolved');
table.timestamp('created_at').notNullable().defaultTo(knex.raw('now()'));
table.timestamp('updated_at').notNullable().defaultTo(knex.raw('now()'));
})
When you created the tournament table, for the column league_id you specified .references('id').inTable('league'). This means that for every row in that table there must exist a row in the league table whose id equals the value of the league_id field in the tournament row. Apparently your insert (is this your only insert?) adds a row to tournament whose league_id does not exist in league. As a rule, the foreign key constraint (i.e. the .references part) implies that you must create the league first and then the tournaments in that league (which actually makes sense).
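In code, that ordering could look like this (a sketch with illustrative names, not taken from the question; on PostgreSQL, knex's .returning('id') hands back the generated league id):

knex('league')
    .insert({ slug: 'premier', name: 'Premier League' })
    .returning('id')
    .then(function(rows) {
        // recent knex versions return [{ id: ... }], older ones return [id]
        const leagueId = typeof rows[0] === 'object' ? rows[0].id : rows[0];
        return knex('tournament').insert({
            id: 'tournament-1',
            league_id: leagueId, // now satisfies tournament_league_id_foreign
            name: 'Spring Cup'
        });
    })
    .catch(function(error) {
        log.error(error);
    });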
I have JSON data from which I am creating nodes and relationships between nodes using the https://github.com/thingdom/node-neo4j connector.
I have the following JSON format:
{
att0:"abcd",
att1:"val1",
att2:"val2",
att3:"val3",
att4:"val4",
att5:"val5",
att6:"val6",
att7:"val7",
att8:"val8"
} // ... around 1000 more objects like this
Here att0+att1 gives me a unique id after an MD5 hash (call it UID1),
att4 gives me a unique id after an MD5 hash (call it UID2),
and att7 gives me a unique id after an MD5 hash (call it UID3).
I am creating two nodes with the following properties:
Node 1 :
{
id: UID1 ,
att3:"val3"
}
Node 2 :
{
id:UID2,
att5:"val5",
att6:"val6"
}
Relationship from Node 1 --> Node 2 :
{
id:UID3,
att5:"val8"
}
Following is my data insertion query:
for (i = 0; i < 1000; i++) { // 1000 objects in the JSON
    // create UID1, UID2 and UID3 for each object as described above
    // and build the query string:
    query_string = `MERGE (n:nodes_type1 {id:'UID1'})
        ON CREATE SET n={ id:'UID1', att3:'val3'}, n.count=1
        ON MATCH SET n.count = n.count + 1
        MERGE (m:nodes_type2 {id:'UID2'})
        ON CREATE SET m={ id:'UID2', att5:'val5', att6:'val6'}, m.count=1
        ON MATCH SET m.count = m.count + 1
        MERGE (n)-[x:relation_type {id:'UID3'} ]->(m)
        ON CREATE SET x={ att8:'val8', id:'UID3' }, x.count=1
        ON MATCH SET x.count = x.count + 1 RETURN n`
    db.query(query_string, params, function (err, results) {
        if (err) {
            console.log(err);
            throw err;
        }
        console.log("Node Created !!!")
    });
}
First I cleared my Neo4j database externally (using the Neo4j browser UI).
Now the problem: when I query MATCH (n:nodes_type2) RETURN COUNT(n), since there are 1000 objects in the JSON it should create 1000 nodes, but the result comes out above 1000 (around 9000) and changes every time I clear the data and restart the script. Looking at the results, there are multiple nodes with the same UID. Shouldn't the MERGE query handle the node match and increment the counter? MERGE does increment the counter, but at some point a new node is created with the same UID.
Based on your given query, I assume the UUIDs generated are different on each loop iteration:
1000 loops, 3 queries with 3 different node labels.
Can you count the distinct UUIDs you get from your database, like:
MATCH (n) RETURN count(DISTINCT n.id)
I assume your queries are executed massively in parallel;
make sure to have a unique constraint installed for nodes_type1(id) and nodes_type2(id), otherwise MERGE cannot guarantee uniqueness.
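For example (Neo4j 3.x constraint syntax, which matches the {param} placeholder style used below):

CREATE CONSTRAINT ON (n:nodes_type1) ASSERT n.id IS UNIQUE;
CREATE CONSTRAINT ON (m:nodes_type2) ASSERT m.id IS UNIQUE;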
Also, you should change your query to use parameters instead of literal values, and it should look like this:
MERGE (n:nodes_type1 {id:{id1}})
ON CREATE SET n.att3={att3},n.count=1
ON MATCH SET n.count = n.count +1
MERGE (m:nodes_type2 {id:{id2}})
ON CREATE SET m.att5={att5}, m.att6={att6},m.count=1
ON MATCH SET m.count = m.count +1
MERGE (n)-[x:relation_type {id:{id3}} ]->(m)
ON CREATE SET x.att8={att8},x.count=1
ON MATCH SET x.count = x.count+1
RETURN n, x, m
I don't think the id and counter on the relationship make sense in a real use case, but for your test it might be OK.
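A sketch of passing those parameters through node-neo4j (your code already hands db.query a params object; the property names just have to match the {…} placeholders; UID values are computed per loop iteration as in the question):

// keys match the {id1}, {att3}, ... placeholders in the parameterized query
const params = {
    id1: UID1, att3: 'val3',
    id2: UID2, att5: 'val5', att6: 'val6',
    id3: UID3, att8: 'val8'
};

db.query(query_string, params, function (err, results) {
    if (err) throw err;
});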