Sorting by partition key/sort key does not work - javascript

I'm working on an IoT project where I need to read some data from a device.
I use AWS, and I'm currently working on some lambda function code. But I can't figure out how to get the last (newest) item from my database.
My database has two keys:
Partition key: device_id (Number)
Sort key: sample_time (Number)
This is part of the code I wrote to retrieve the newest reading from my IoT device:
case "GET /data/newest":
body = await dynamo
.query({
TableName: "bikelock_db",
KeyConditionExpression: 'device_id = :id',
ExpressionAttributeValues: {
":id": 1,
},
Limit: 1,
ScanForwardIndex: false,
})
.promise();
break;
This code, however, only returns the first added (oldest) item from the database.
Changing ScanForwardIndex: false to true doesn't change a thing.
I thought the sort key would order the results automatically, but it does not.
Any idea what I'm missing, or why it isn't working?

Try ScanIndexForward and I bet it'll work. You transposed the two words, so the option isn't being applied and the query falls back to the default ascending order.
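For reference, the corrected parameters (everything else in the question's query stays the same):

  Limit: 1,
  ScanIndexForward: false, // sort descending by sample_time so the newest item comes first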

Related

In sails/waterline get maximum value of a column in a database agnostic way

While using Sails as an ORM (version 1.0), I notice that there is a function called Model.avg (as well as sum). However, there is no function to get the maximum or minimum of a column in a model; does that mean it isn't necessary because it is already covered by other functions?
Now in my database I need to get the maximum id in a list, and I have it working for PostgreSQL by using a native query:
const maxnum = await Order.getDatastore().sendNativeQuery('SELECT MAX("orderNr") FROM "order"');
While this isn't the most difficult thing, it is not what I truly want: it is limited to SQL-based datastores (so we couldn't easily move to MongoDB), and the syntax might differ for another SQL database type.
So I wonder: can this be transformed in such a way that it doesn't rely on sendNativeQuery?
You can try .query() to execute a raw SQL query using the specified model's datastore, and if you want you can try pg, an NPM package used for communicating with PostgreSQL databases:
Pet.query('SELECT pet.name FROM pet WHERE pet.name = $1', [ 'dog' ],
  function(err, rawResult) {
    if (err) { return res.serverError(err); }
    sails.log(rawResult);
    // (The result format depends on the SQL query that was passed in
    // and the adapter you're using.)
    // Then parse the raw result and do whatever you like with it.
    return res.ok();
  });
You can use the limit and sort options Waterline provides to get a single record with the maximal value, then just extract that value.
const orderModels = await Order.find({
  where: {},
  select: ['orderNr'],
  limit: 1,
  sort: 'orderNr DESC'
});
// .find() resolves to an array, so take the first (and only) record.
console.log(orderModels[0].orderNr);
Like most things in Waterline, it's probably not as efficient as an SQL SELECT MAX query (or some equivalent in Mongo, etc.), but it should allow swapping out the database with no maintenance. Last note: don't forget to handle the case of no records found, since .find() will then resolve to an empty array.

DynamoDB: Query only every 10th value

I am querying data between two specific unixtime values, for example:
all data between 1516338730 (today, 6:12) and 1516358930 (today, 11:48)
My database receives a new record every minute. Now, when I want to query the data of the last 24 h, it's way too dense; every 10th minute would be perfect.
My question is: how can I read only every 10th database record using DynamoDB?
As far as I know, there's no possibility to use modulo or something similar that suits my needs.
This is my AWS Lambda code so far:
var read = {
  TableName: "user",
  ProjectionExpression: "#time, #val",
  // TIME is a DynamoDB reserved word, so the #time alias has to be used in the expression as well.
  KeyConditionExpression: "Id = :id and #time between :time_1 and :time_2",
  ExpressionAttributeNames: {
    "#time": "TIME",
    "#val": "user_data"
  },
  ExpressionAttributeValues: {
    ":id": event, // primary key
    ":time_1": 1516338730,
    ":time_2": 1516358930
  },
  ScanIndexForward: true
};
docClient.query(read, function(err, data) {
  if (err) {
    callback(err, null);
  } else {
    callback(null, data.Items);
  }
});
You say that you insert 1 record every minute?
The following might be an option:
At insertion time, set another field on the record, let's call it MinuteBucket, which is calculated as the timestamp's minute-of-hour value mod 10.
If you do this via a stream function, you can handle new records, and then write something to touch old records to force the calculation.
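A minimal sketch of the write side (deviceId, payload, and the table name are placeholders based on the question):

var timestamp = Math.floor(Date.now() / 1000);
// Derive the bucket from the sample's unix timestamp: minute-of-hour mod 10.
var minuteBucket = new Date(timestamp * 1000).getUTCMinutes() % 10;

docClient.put({
  TableName: "user",
  Item: {
    Id: deviceId,      // partition key
    TIME: timestamp,   // sort key
    user_data: payload,
    MinuteBucket: minuteBucket
  }
}, function (err) {
  if (err) console.error(err);
});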
Your query would change to this:
/*...snip...*/
KeyConditionExpression: "Id = :id and #time between :time_1 and :time_2",
// MinuteBucket is not a key attribute, so it goes in a filter rather than the key condition.
FilterExpression: "MinuteBucket = :bucket_id",
/*...snip...*/
ExpressionAttributeValues: {
  ":id": event, // primary key
  ":time_1": 1516338730,
  ":time_2": 1516358930,
  ":bucket_id": 0 // can be 0-9; if you want the first record to be closer to time_1, set this to the :time_1 minute value mod 10
},
/*...snip...*/
Just as a follow-up thought: if you want to speed up your queries, perhaps investigate using the MinuteBucket in an index, though that might come at a higher price.
I don't think that it is possible with the DynamoDB API.
There is FilterExpression, which contains conditions that DynamoDB applies after the Query operation, but before the data is returned to you.
But AFAIK it isn't possible to use a custom function there, and the built-in functions are limited.
As a workaround, you could mark every 10th item on the client side, and then query with a check on attribute_exists (or on an attribute value) to filter them, as in the sketch below.
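A sketch of the query side of that workaround (the marker attribute name every_tenth is made up for illustration):

var read = {
  TableName: "user",
  KeyConditionExpression: "Id = :id and #time between :time_1 and :time_2",
  // Applied after the key condition; filtered-out items still consume read capacity.
  FilterExpression: "attribute_exists(every_tenth)",
  ExpressionAttributeNames: { "#time": "TIME" },
  ExpressionAttributeValues: {
    ":id": event,
    ":time_1": 1516338730,
    ":time_2": 1516358930
  }
};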
BTW, it would be nice to create an index on the 'Id' attribute with sort key 'TIME' to improve query performance.

Algolia Instant Search Firebase Cloud Function - how to get the other value?

I don't know much JavaScript, so I used Algolia's Instant Search for Firebase GitHub repository to build my own function.
My function:
exports.indexentry = functions.database.ref('/posts/{postid}/text').onWrite(event => {
  const index = client.initIndex(ALGOLIA_POSTS_INDEX_NAME);
  const firebaseObject = {
    text: event.data.val(),
    timestamp: event.data.val(),
    objectID: event.params.postid
  };
  // ...
});
In the Algolia index, the timestamp field ends up with the same value as the text field, but in the Firebase backend the timestamp is different. How do I fix this?
I tried different statements to get the timestamp value, but couldn't.
Edit
Expected outcome:
{
  text: "random text",
  timestamp: "time stamp string",
  author: "author name",
  objectID: "object ID"
}
Actual outcome:
{
  text: "entered text",
  objectID: "object ID"
}
I'm not really clear about your goal. The event has a timestamp property. Have you tried:
const firebaseObject = {
  text: event.data.val(),
  timestamp: event.timestamp, // <= CHANGED
  objectID: event.params.postid
};
If you want a numeric timestamp instead of a string, use Date.parse(event.timestamp).
EDIT 2: The answer can be found here.
Original answer: what Bob Snyder said about the timestamp event is correct.
There may be other fields as well, for example author_name, that we may need to index; is there a generalized way to do that, or do I write separate functions for every field?
If you want a general way to add all fields, I think what you are looking for can be found here. This should give you the right guidance to get what you want, i.e. save your whole object into the Algolia index.
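A minimal sketch of that idea, assuming the function is triggered on the whole post node rather than on /text (names as in the question):

exports.indexentry = functions.database.ref('/posts/{postid}').onWrite(event => {
  const index = client.initIndex(ALGOLIA_POSTS_INDEX_NAME);
  // Copy every field of the post, then attach the Algolia objectID.
  const firebaseObject = Object.assign({}, event.data.val(), {
    objectID: event.params.postid
  });
  return index.saveObject(firebaseObject);
});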
EDIT:
index.saveObject(firebaseObject, function(err, content) {
  if (err) {
    throw err;
  }
  console.log('Firebase object indexed in Algolia', firebaseObject.objectID);
});
event.data.val() returns the entire Firebase snapshot. If you want a specific value from your data, you add its key after .val(). For example, if every post has an author stored in your Firebase database under the key "author", you can get this value using var postAuthor = event.data.val().author.
I've included some samples from my code for those interested. Inside my cloud functions I can access the post data like this:
const postToCopy = event.data.val(); // entire post
const table = event.data.val().group;
const category = event.data.val().category;
const region = event.data.val().region;
const postKey = event.data.val().postID;

Mongo check if a document already exists

In the MEAN app I'm currently building, the client side makes a $http POST request to my API with a JSON array of SoundCloud track data specific to that user. What I now want to achieve is for those tracks to be saved to my app database under a 'tracks' table. That way I'm able to load tracks for that user from the database and also have the ability to create unique client URLs (/tracks/:track).
Some example data:
{
  artist: "Nicole Moudaber",
  artwork: "https://i1.sndcdn.com/artworks-000087731284-gevxfm-large.jpg?e76cf77",
  source: "soundcloud",
  stream: "https://api.soundcloud.com/tracks/162626499/stream.mp3?client_id=7d7e31b7e9ae5dc73586fcd143574550",
  title: "In The MOOD - Episode 14"
}
This data is then passed to the API like so:
app.post('/tracks/add/new', function (req, res) {
  var newTrack;
  for (var i = 0; i < req.body.length; i++) {
    newTrack = new tracksTable({
      for_user: req.user._id,
      title: req.body[i].title,
      artist: req.body[i].artist,
      artwork: req.body[i].artwork,
      source: req.body[i].source,
      stream: req.body[i].stream
    });
    tracksTable.find({ for_user: req.user._id, stream: req.body[i].stream }, function (err, trackTableData) {
      if (err)
        console.log('MongoDB Error: ' + err);
      // stuck here - read below
    });
  }
});
The point at which I'm stuck, as marked above, is this: I need to check whether that track already exists in the database for that user, and save it if it doesn't. Then, once the loop has finished and all tracks have either been saved or ignored, a 200 response needs to be sent back to my client.
I've tried several methods so far and nothing seems to work; I've really hit a wall, so help/advice on this would be greatly appreciated.
Create a compound index and make it unique.
Using the index below will ensure that no two documents have the same for_user and stream.
trackSchema.index({ for_user: 1, stream: 1 }, { unique: true });
Now use the MongoDB batch operation to insert multiple documents:
// docs is the array of tracks you are going to insert.
trackTable.collection.insert(docs, options, function (err, savedDocs) {
  // savedDocs is the array of docs saved.
  // By checking savedDocs you can see how many tracks were actually inserted.
});
Make sure to validate your objects yourself, as by using .collection we are bypassing Mongoose.
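A sketch of how the route could put this together (the { ordered: false } option and the 11000 duplicate-key error code are assumptions; with an unordered insert, duplicates fail individually while the remaining tracks are still written):

app.post('/tracks/add/new', function (req, res) {
  var docs = req.body.map(function (t) {
    return {
      for_user: req.user._id,
      title: t.title,
      artist: t.artist,
      artwork: t.artwork,
      source: t.source,
      stream: t.stream
    };
  });
  tracksTable.collection.insert(docs, { ordered: false }, function (err, savedDocs) {
    // Duplicate-key errors (code 11000) just mean the track was already there.
    if (err && err.code !== 11000) return res.status(500).send(err);
    return res.status(200).end();
  });
});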
Make a unique _id based on user and track. In mongo you can pass in the _id that you want to use.
Example:
{
  _id: "NicoleMoudaber InTheMOODEpisode14",
  artist: "Nicole Moudaber",
  artwork: "https://i1.sndcdn.com/artworks-000087731284-gevxfm-large.jpg?e76cf77",
  source: "soundcloud",
  stream: "https://api.soundcloud.com/tracks/162626499/stream.mp3?client_id=7d7e31b7e9ae5dc73586fcd143574550",
  title: "In The MOOD - Episode 14"
}
_id must be unique, and Mongo won't let you insert another document with the same _id. You could also use it to find the record later: db.collection.find({_id: "NicoleMoudaber InTheMOODEpisode14"}),
or you could find all tracks for the user with db.collection.find({_id: /^NicoleMoudaber/}), and it will still use the index.
There is another method for this that I can explain if you don't like this one.
Both lookup patterns will work in a sharded environment as well as on a single replica set, whereas "unique" indexes do not work in a sharded environment.
The SoundCloud API provides a track id, so just use it.
Then, before inserting data, you do:
tracks.find({ id_soundcloud: 25645456 }).exec(function (err, track) {
  if (track.length) { console.log("do nothing"); } else { /* insert */ }
});

Inconsistent mongo results with unique field

Not sure when this issue cropped up, but I am not able to fetch items from Mongo consistently. I have 4000+ items in the DB. Here's the schema:
var Order = new Schema({
  code: {
    type: String,
    unique: true
  },
  ...
});
Now run some queries:
Order.find().exec(function(err, orders) {
  console.log(orders.length); // always 101
});
Order.find().limit(100000).exec(function(err, orders) {
  console.log(orders.length); // varies, sometimes 1150, 1790, 2046 - never more
});
Now, if I remove the unique: true from the schema, it will always return the correct total:
Order.find().exec(function(err, orders) {
  console.log(orders.length); // always 4213 (correct total)
});
Any idea as to why this behavior occurs? As far as I know, the codes are all unique (orders from a merchant). This was tested on 3.8.6 and 3.8.8.
OK, the issue was indeed the unique index being missing/corrupted. I am guilty of adding the unique index later on in the game, and I probably had some dups already, which prevented Mongo from creating the index.
I removed the duplicates and then in the Mongo shell did this:
db.orders.ensureIndex({ code: 1 }, { unique: true, dropDups: true });
I would have thought the above would remove the dups itself, but it would just die because of the dups. I am sure there is a shell way to do this, but I just removed them with some JS code, then ran the above to recreate the index, which can be verified with:
db.orders.getIndexes()
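For reference, a shell-side sketch of finding and removing the duplicate codes before recreating the index (assumes the field is code, as in the schema above):

db.orders.aggregate([
  // Group by code and collect the _ids of each group.
  { $group: { _id: "$code", count: { $sum: 1 }, ids: { $push: "$_id" } } },
  { $match: { count: { $gt: 1 } } }
]).forEach(function (dup) {
  // Keep the first document, remove the rest.
  dup.ids.slice(1).forEach(function (id) {
    db.orders.remove({ _id: id });
  });
});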
