I want to know the difference between the AWS SDK DynamoDB client and the DynamoDB DocumentClient. In which use cases should we use the DynamoDB client over the DocumentClient?
const dynamoClient = new AWS.DynamoDB.DocumentClient();
vs
const dynamo = new AWS.DynamoDB();
I think this can be best answered by comparing two code samples which do the same thing.
Here's how you put an item using the DynamoDB client:
var dynamodb = new AWS.DynamoDB();

var params = {
  Item: {
    "AlbumTitle": {
      S: "Somewhat Famous"
    },
    "Artist": {
      S: "No One You Know"
    },
    "SongTitle": {
      S: "Call Me Today"
    }
  },
  TableName: "Music"
};

dynamodb.putItem(params, function (err, data) {
  if (err) console.log(err);
  else console.log(data);
});
Here's how you put the same item using the DocumentClient API:
var params = {
  Item: {
    "AlbumTitle": "Somewhat Famous",
    "Artist": "No One You Know",
    "SongTitle": "Call Me Today"
  },
  TableName: "Music"
};

var documentClient = new AWS.DynamoDB.DocumentClient();

documentClient.put(params, function (err, data) {
  if (err) console.log(err);
  else console.log(data);
});
As you can see, with the DocumentClient the Item is specified in a more natural way. Similar differences exist in all other operations that update DDB (update(), delete()) and in the items returned from read operations (get(), query(), scan()).
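For instance, here is a minimal sketch of reading that same item back with the DocumentClient (assuming the Music table is keyed on Artist and SongTitle, which the samples above don't state). Note that data.Item comes back as plain JavaScript values, with no AttributeValue wrappers:

var params = {
  TableName: "Music",
  Key: {
    "Artist": "No One You Know",  // assumed partition key
    "SongTitle": "Call Me Today"  // assumed sort key
  }
};

documentClient.get(params, function (err, data) {
  if (err) console.log(err);
  else console.log(data.Item); // e.g. { AlbumTitle: "Somewhat Famous", Artist: "No One You Know", ... }
});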
As per the announcement of the DocumentClient:
The document client abstraction makes it easier to read and write data to Amazon DynamoDB with the AWS SDK for JavaScript. Now you can use native JavaScript objects without annotating them as AttributeValue types.
It's basically a simpler way of calling DynamoDB in the SDK, and it also converts annotated response data to native JS types. Generally you should only use the regular DynamoDB client for more "special" operations on your database, like creating or deleting tables, which usually fall outside the CRUD scope.
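For example, creating a table is only possible with the low-level client; the sketch below assumes a Music table matching the put samples above:

var dynamodb = new AWS.DynamoDB();

var params = {
  TableName: "Music", // assumed, to match the samples above
  AttributeDefinitions: [
    { AttributeName: "Artist", AttributeType: "S" },
    { AttributeName: "SongTitle", AttributeType: "S" }
  ],
  KeySchema: [
    { AttributeName: "Artist", KeyType: "HASH" },
    { AttributeName: "SongTitle", KeyType: "RANGE" }
  ],
  ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 }
};

// The DocumentClient exposes no createTable(); this needs the regular client.
dynamodb.createTable(params, function (err, data) {
  if (err) console.log(err);
  else console.log(data);
});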
In simpler words, the DocumentClient is nothing but a wrapper around the DynamoDB client. As mentioned in the other answers and in the AWS documentation below, it offers convenience of use, converting annotated response data to native JS types and abstracting away the notion of attribute values.
Another noticeable difference is that the scope of the DocumentClient is limited to item-level operations, while the DynamoDB client provides a broader range of operations in addition to the item-level ones.
From AWS document client documentation
The document client simplifies working with items in Amazon DynamoDB by abstracting away the notion of attribute values. This abstraction annotates native JavaScript types supplied as input parameters, as well as converts annotated response data to native JavaScript types.
I am new to NodeJS and I am trying to create a web application using the Express framework and MySQL. I get that in MVC architecture the views are, for example, the *.ejs files. The controllers are supposed to have the logic and the models should focus on the database.
But still I am not quite sure what is supposed to be inside the model. I have the following code in my controller (probably wrong, not following MVC design):
const mysql = require('mysql');
const db = mysql.createConnection(config);

db.query(query, (err, result) => {
  if (err) {
    return res.redirect('/');
  }
  res.render('index.ejs', {
    users: result
  });
});
Now from what I've read, the controller should ask the model to execute the query against the database, get the results and render the view (index.ejs).
My question is this: What should be inside the model.js file? Can I make something like this?
controller.js
const db = require('./models/model.js');

db.connect();
const results = db.query(query);
if (results != null) {
  res.render('index.ejs', {
    users: results
  });
}
model.js will make a query to MySQL, handle any errors and return the result.
From what I've read I have two options. Option 1: pass a callback function to the model and let the model render the view (I think that's wrong; the model should not communicate with the view, or should it?). Option 2: possibly use async/await and wait for the model to return the results, but I am not sure if this is possible or how.
The model is the programmatic representation of the data stored in the database. Say I have an employees table with the following schema:
name: string
age: number
company: company_foreign_key
And another table called companies
name: string
address: string
I therefore have two models: Company and Employee.
The model's purpose is to load database data and provide a convenient programmatic interface to access and act upon this data.
So, my controller might look like this:
var db = require('mongo');
var employeeName = "bob";

db.connect(function (err, connection) {
  const Employee = require('./models/Employee.js'); // get model class
  let employee = new Employee(connection); // instantiate object of model class
  employee.getByName(employeeName, function (err, result) { // convenience method getByName
    employee.getEmployeeCompany(result, function (err, companyResult) { // convenience method getEmployeeCompany
      if (companyResult) { // the controller now uses the results from the model and passes them to a view
        res.render('index.ejs', {
          company: companyResult
        });
      }
    });
  });
});
Basically, the model provides a convenient interface to the raw data in the database. The model executes the queries underneath, and provides convenient methods as a public interface for the controller to access. E.g., the employee model, given an employee object, can find the employee's company by executing that query.
The above is just an example and, given more thought, a better interface could be thought up. In fact, Mongoose provides a great example of how to set up model interfaces. If you look at how Mongoose works, you can use the same principles and apply them to your custom model implementation.
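Applied to your MySQL setup, a model module might look roughly like this. It's only a sketch under assumptions: the employees table, its name column, and the getByName method are illustrative, not from any particular library:

// models/Employee.js
class Employee {
  constructor(connection) {
    this.connection = connection; // a mysql connection created by the caller
  }

  // Run the query inside the model and hand only the rows (or an error)
  // back to the controller through a callback.
  getByName(name, callback) {
    this.connection.query(
      'SELECT * FROM employees WHERE name = ?',
      [name],
      (err, rows) => {
        if (err) return callback(err);
        callback(null, rows);
      }
    );
  }
}

module.exports = Employee;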
While using Sails as an ORM (version 1.0), I notice that there is a function called Model.avg (as well as sum). However, there is no maximum or minimum function to get the maximum or minimum from a column in a model; does that mean it's considered unnecessary because it's covered by other functions already?
Now in my database I need to get the "maximum id" in a list, and I have it working for PostgreSQL by using a native query:
const maxnum = await Order.getDatastore().sendNativeQuery('SELECT MAX(\"orderNr\") FROM \"order\"')
While this isn't the most difficult thing, it is not what I truly want: it is limited to SQL-based datastores (so we wouldn't be able to move easily to MongoDB), and the syntax might actually differ between SQL database types.
So I wonder - can this be transformed in such a way it doesn't rely on sendNativeQuery?
You can try .query() to execute a raw SQL query using the specified model's datastore, and if you want you can try pg, an NPM package used for communicating with PostgreSQL databases:
Pet.query('SELECT pet.name FROM pet WHERE pet.name = $1', ['dog'],
  function (err, rawResult) {
    if (err) { return res.serverError(err); }

    sails.log(rawResult);
    // (The result format depends on the SQL query that was passed in
    // and the adapter you're using.)

    // Then parse the raw result and do whatever you like with it.
    return res.ok();
  });
You can use the limit and sort options Waterline provides to get a single model with the maximal value (then just extract that value).
const orderModels = await Order.find({
  where: {},
  select: ['orderNr'],
  limit: 1,
  sort: 'orderNr DESC'
});

// .find() always returns an array, so the single record is at index 0
console.log(orderModels[0].orderNr);
Like most things in Waterline, it's probably not as efficient as an SQL SELECT MAX query (or some equivalent in Mongo, etc.), but it should allow swapping out the database with no maintenance. Last note: don't forget to handle the case of no models found.
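For instance, a minimal guard for that empty case, continuing the snippet above:

if (orderModels.length === 0) {
  // the table is empty, so there is no maximum orderNr to report
} else {
  console.log(orderModels[0].orderNr);
}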
I'd like to break this into smaller, tighter questions, but I don't know what I don't know enough to do that yet. So hopefully I can get specific answers to help do that.
The scope of the solution requires receiving & parsing a lot of records; 2013 had ~17 million certificate transactions, while I'm only interested in a very small subset on the order of 40,000 records.
In pseudocode:
iterate dates(thisDate)
  send message to API for thisDate
  receive JSON as todaysRecords
  examine todaysRecords to look for whatever criteria match inside the structure
  append a subset of todaysRecords to recordsOut
save recordsOut to a SQL/CSV file.
There's a large database of Renewable Energy Certificates for use under the Australian Government RET Scheme, called the REC Registry, and as well as the web interface linked to here, there is an API provided that has a simple call logic as follows:
http://rec-registry.gov.au/rec-registry/app/api/public-register/certificate-actions?date=<user provided date> where:
The date part of the URL should be provided by the user
Date format should be YYYY-MM-DD (no angle brackets & 1 date limit)
A JSON is returned (with potentially 100,000s of records on each day).
The API documentation (13pp PDF) is here, but it mainly goes into explaining the elements of the returned structure, which is less relevant to my question. It includes two sample JSON responses.
While I know some JavaScript (mostly not in a web context), I'm not sure how to send this message within a script, and I figure I'd need to do it server-side to be able to process (filter) the returned information and then save the records I'm interested in. I'll have no issue parsing the JSON (if I can use JS) and copying the objects I wish to save, but I'm not sure where to even start doing this. Do I need a LAMP setup to do this (or MAMP, since I'm on OS X), or is there a more light-weight JS way I can execute this? I've never known how to save a file from within web-browser JS; I thought it was banned for security reasons, but I guess there are ways and means.
If I can rewrite this question to be more clear and effective in soliciting an answer, I'm happy for edits to the question also.
I guess maybe I'm after some boilerplate code for calling a simple API like this, and the stack or application context in which I need to do it. I realise there are potentially several ways to execute this, but I'm looking for the most straightforward for someone with JS knowledge and not much PHP/Python experience (but willing to learn what it takes).
Easy, right?
Ok, to point you in the right direction.
Requirements
If the language of choice is JavaScript, you'll need to install Node.js. No server whatsoever needed.
The same is valid for PHP or Python or whatever; no Apache needed, just the language interpreter.
Running a script with node
Create a file.js somewhere. To run it, you'll just need to type node file.js in the console (in the directory the file lives in).
Getting the info from the REC webservice
Here's an example of a GET request:
var https = require('https');
var fs = require('fs');

var options = {
  host: 'rec-registry.gov.au',
  port: 443,
  path: '/rec-registry/app/api/public-register/certificate-actions?date=2015-06-03'
};

var jsonstr = '';
var request = https.get(options, function (response) {
  process.stdout.write("downloading data...");

  response.on('data', function (chunk) {
    process.stdout.write(".");
    jsonstr += chunk;
  });

  response.on('end', function () {
    process.stdout.write("DONE!");
    console.log(' ');
    console.log('Writing to file...');
    fs.writeFile("data.json", jsonstr, function (err) {
      if (err) {
        return console.error('Error saving file');
      }
      console.log('The file was saved!');
    });
  });
});

request.on('error', function (e) {
  console.log('Error downloading file: ' + e.message);
});
Transforming a JSON string into an object/array
Use JSON.parse.
Parsing the data
examine todaysRecords to look for whatever criteria match inside the structure
Can't help you there, but should be relatively straightforward to look for the correct object properties.
NOTE: Basically, what you get from the request is a string. You then parse that string with
var foo = JSON.parse(jsonstr);
In this case foo is an object. The resulting "certificates" are actually inside the property result, which is an array:
var results = foo.result;
In this example the array contains about 1700 records, and the structure of a certificate is something like this:
"actionType": "STC created",
"completedTime": "2015-06-02T21:51:26.955Z",
"certificateRanges": [{
"certificateType": "STC",
"registeredPersonNumber": 10894,
"accreditationCode": "PVD2259359",
"generationYear": 2015,
"generationState": "QLD",
"startSerialNumber": 1,
"endSerialNumber": 72,
"fuelSource": "S.G.U. - solar (deemed)",
"ownerAccount": "Solargain PV Pty Ltd",
"ownerAccountId": 25782,
"status": "Pending audit"
}]
So, to access, for instance, the "ownerAccount" of the first "certificateRanges" of the first "certificate" you would do:
var results = JSON.parse(jsonstr).result;
var ownerAccount = results[0].certificateRanges[0].ownerAccount;
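From there, the "examine and append" steps of your pseudocode can be a plain filter over that array. A rough sketch; the fuel-source match is just a placeholder criterion, not one from your question:

var results = JSON.parse(jsonstr).result;

// Keep only certificates where at least one range matches the placeholder criterion
var recordsOut = results.filter(function (cert) {
  return cert.certificateRanges.some(function (range) {
    return range.fuelSource.indexOf('solar') !== -1;
  });
});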
Creating a CSV
The best way is to create an abstract structure (that meets your needs) and convert it to a CSV.
There's a good npm library called json2csv that can help you there
Example:
var fs = require('fs');
var json2csv = require('json2csv');

var fields = ['car', 'price', 'color']; // csv titles
var myCars = [
  {
    "car": "Audi",
    "price": 40000,
    "color": "blue"
  }, {
    "car": "BMW",
    "price": 35000,
    "color": "black"
  }, {
    "car": "Porsche",
    "price": 60000,
    "color": "green"
  }
];

json2csv({ data: myCars, fields: fields }, function (err, csv) {
  if (err) console.log(err);
  fs.writeFile('file.csv', csv, function (err) {
    if (err) throw err;
    console.log('file saved');
  });
});
If you wish to append instead of writing to a new file, you can use
fs.appendFile('file.csv', csv, function (err) { });
How are scope keys used for write operations?
When I try to use a (scoped) write key, the API responds 401 Unauthorized; the "master write key" works like a charm. Using a scope key for read operations works as well.
I assume my selection of filters etc. isn't working out, but I can't find any details in the documentation on how scope keys work for write operations.
(For context, I am working to constrain scope keys to enforce certain parameter values. In essence, using the scope keys to "shard" the collections on a given key so that multiple tenants can write to the same collection, while not being able to falsify each other's values.)
I use a filter like the following:
{
  "filters": [
    {
      "property_name": "whatever",
      "operator": "eq",
      "property_value": "client value"
    }
  ],
  "allowed_operations": ["write"]
}
I use the .NET SDK to create the scoped key, and can verify the filter values by decrypting the key afterwards. It will be used in a web app, so I'm using the Keen IO JS library, similar to:
var client = new Keen({
  projectId: "…",
  writeKey: "…", // <- generated scoped write key goes here
  readKey: "…",
  protocol: "https",
  host: "api.keen.io/3.0",
  requestType: "jsonp"
});
client.addEvent("my-collection", { /* … */ }, function (err, res) { /* … */ });
What's the SOP for scoped writes on Keen.IO?
SOP: the Keen API expects you to supply the write-enabled scoped key as the writeKey in your POST request.
Scoped keys for write operations won't perform as you're suggesting (at least not today). Currently scoped keys for writes do nothing more than obfuscate your master/write keys. All property data that you append as part of each event must still be supplied in the JSON payload of the addEvent method on the client side.
We generally recommend server side implementations in cases where you need to protect/manipulate your writes before you send them to Keen.
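As a rough sketch of that server-side approach (assuming the same JS SDK running under Node; the recordEvent helper and the whatever property are illustrative, echoing the filter in your question):

var Keen = require('keen-js');

var client = new Keen({
  projectId: "…", // placeholders, as in the question
  writeKey: "…"   // the master/write key stays on the server
});

// The server stamps the tenant property itself, so clients cannot falsify it.
function recordEvent(tenantId, event, callback) {
  event.whatever = tenantId;
  client.addEvent("my-collection", event, callback);
}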
I'm building a relatively big NodeJS application, and I'm currently trying to figure out how to fetch the data I need from the DB. Here is a part of my models:
One user has one role, which has access to many modules (where there's a table role_modules to link roles and modules).
In Rails, I would do something like user.role.modules to retrieve the list of the modules he has access to. In NodeJS it's a bit more complicated. I'm using node-orm2 along with PostgreSQL. Here is what I have so far:
req.models.user.find({email: req.body.user}, function (err, user) {
  user[0].getRole(function (err, role) {
    role.getModules(function (err, modules) {
      var list_modules = Array();
      modules.forEach(function (item) {
        console.log(item);
        list_modules.push(item.name);
      });
    });
  });
});
But I can't do this, because item only contains role_id and module_id. If I want to have the name, I would have to do item.getModule(function () {...}), but the results would be asynchronous... so I don't really see how I could end up with an array containing the names of the modules a user has access to. Do you have any idea?
Also, isn't that much slower than actually running a single SQL query with many JOINs? Because as I see it, the ORM makes multiple queries to get the data I want here.
Thank you!
I wrote an ORM called bookshelf.js that aims to simplify associations and eager loading relations between models in SQL. This is what your query would probably look like to load the role & modules on a user given your description:
var Module = Bookshelf.Model.extend({
  tableName: 'modules'
});

var Role = Bookshelf.Model.extend({
  tableName: 'roles',
  modules: function () {
    return this.belongsToMany(Module);
  }
});

var User = Bookshelf.Model.extend({
  tableName: 'users',
  role: function () {
    return this.hasOne(Role);
  }
});
User.forge({email: req.body.user})
  .fetch({
    require: true,
    withRelated: ['role.modules']
  })
  .then(function (user) {
    // user now has the role and associated modules eager loaded
    console.log(user.related('role'));
    console.log(user.related('role').related('modules'));
  }, function (err) {
    // gets here if no user was found
  });
Might be worth taking a look at.