Setting
I'm building an app with a microservice architecture.
At some point I need to pass a mongoose session (and its transaction) to another microservice, so that if a single error happens in one microservice, I can roll back all the changes made by all the microservices.
What I tried
Sending ClientSession
I first tried to send the ClientSession object directly to my microservice.
However, since ClientSession is not serializable, I cannot send it properly.
Received error:
ERROR HttpError [InternalServerError]: Converting circular structure to JSON
--> starting at object with constructor 'MongoClient'
| property 's' -> object with constructor 'Object'
| property 'sessionPool' -> object with constructor 'ServerSessionPool'
--- property 'client' closes the circle
Sending Transaction
I then tried to send the transaction started by my session in microservice A and attach it to a freshly created session in microservice B.
Microservice A:
const session = await mongoose.startSession();
session.startTransaction();
const { transaction } = session;
try {
  await callMicroserviceB(transaction);
  await session.commitTransaction();
} catch (e) {
  await session.abortTransaction();
} finally {
  await session.endSession();
}
Microservice B:
const { transaction } = event;
const session = await mongoose.startSession();
session.transaction = transaction;
// ... Microservice code
await session.endSession();
When performing any query on a Model, I receive the following error:
Error: session.transaction.transition is not a function
Is there a way to share a ClientSession or a Transaction?
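For what it's worth, I don't believe this is possible: a ClientSession is bound to the MongoClient (and its server session pool) that created it, which is exactly why serializing it fails with the circular-structure error above, and why grafting a deserialized transaction onto a fresh session breaks its internal state machine (the transition method is lost in serialization). Cross-service rollback is usually handled with a saga or two-phase-commit pattern instead. For reference, here is a minimal sketch of the supported single-process pattern, where the session object itself is passed to every query (the Order and Payment models are placeholders for illustration):

const mongoose = require("mongoose");

// Hypothetical models, for illustration only.
const Order = mongoose.model("Order", new mongoose.Schema({ status: String }));
const Payment = mongoose.model("Payment", new mongoose.Schema({ orderId: String, paid: Boolean }));

async function placeOrder(orderId) {
  const session = await mongoose.startSession();
  try {
    // withTransaction commits on success, aborts on error, and retries
    // transient failures; every query joins via the { session } option.
    await session.withTransaction(async () => {
      await Order.create([{ _id: orderId, status: "pending" }], { session });
      await Payment.updateOne({ orderId }, { $set: { paid: true } }, { session });
    });
  } finally {
    await session.endSession();
  }
}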
Related
I have a getUser function that returns a User entity after saving it into the database, for caching reasons. Here is the pseudo-code for it:
export const getUser = async (userId: string): Promise<User> => {
  if (userStoredInDatabase) return userStoredInDatabase // <- pseudo code
  // ...if no user stored -> fetch external services to get user name, avatar, ...
  const user = new User()
  user.id = userId // <- PRIMARY KEY
  // ...add data to the user
  return getManager().save(user)
}
I use this function in two distinct routes of a simple Express API:
app.get('/users/:userId/profile', async (req, res) => {
  const user = await getUser(req.params.userId)
  // ...
})

app.get('/users/:userId/profile-small', async (req, res) => {
  const user = await getUser(req.params.userId)
  // ...
})
So far so good, until I hit the problem that my frontend needs to fetch /users/:userId/profile and /users/:userId/profile-small at the exact same time to show the two profiles. If the user is not yet cached in the database, .save(user) is called twice almost simultaneously, and one of the two requests is answered with an error due to an invalid SQL insertion, as the given user.id already exists.
I know I could just delay one of the requests to make it work well enough, but I'm not in favor of that hack.
Do you have any idea how to concurrently .save() a User even if it is called at the same time from two different contexts, so that TypeORM knows for one of the two calls that user.id already exists and therefore does an update instead of an insert?
Using a delay will not solve the problem, since the user can open three tabs at the same time. Concurrency has to be handled.
Catch the primary-key violation that occurs when someone else has already stored the user between the get-user and persist-user code blocks.
Here is an example:
export const getUser = async (userId: string): Promise<User> => {
  try {
    if (userStoredInDatabase) return userStoredInDatabase // <- pseudo code
    // ...if no user stored -> fetch external services to get user name, avatar, ...
    const user = new User()
    user.id = userId // <- PRIMARY KEY
    // ...add data to the user
    return await getManager().save(user) // await here so a rejected save is caught below
  } catch (e) {
    // Identify a duplicate-key error from the properties of e. This differs
    // per SQL engine (e.g. e.code === '23505' on Postgres, e.errno === 1062
    // on MySQL); debugging the caught error will show what yours looks like.
    const isDuplicatePrimaryKey = (e as any).code === '23505'
    if (isDuplicatePrimaryKey) {
      // Someone else won the race; load and return their row instead.
      return (await getManager().findOne(User, userId))!
    }
    throw e // e is not a duplicate primary key, so the issue is somewhere else
  }
}
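An alternative that avoids the race entirely (a sketch, assuming TypeORM 0.2.x and a database with ON CONFLICT / INSERT IGNORE support, such as Postgres or MySQL) is to let the database resolve the conflict with the query builder's orIgnore(), then read the row back:

export const getUser = async (userId: string): Promise<User> => {
  // ...fetch external services to get user name, avatar, ... (as in the question)
  await getManager()
    .createQueryBuilder()
    .insert()
    .into(User)
    .values({ id: userId /* ...other fetched fields */ })
    .orIgnore() // ON CONFLICT DO NOTHING / INSERT IGNORE: the loser of the race becomes a no-op
    .execute()
  // The row now exists regardless of who won the race; read the canonical copy back.
  return (await getManager().findOne(User, userId))!
}

Both callers insert, one insert is silently ignored, and both then read the same stored row.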
I'm having problems when I deploy my Alexa skill to Google Cloud Functions.
I'm using the following code.
My skill.js file
//imports here...
Router.post("/:locale", async (req, res) => {
  const requestEnvelope = JSON.stringify(req.body);
  try {
    await new SkillRequestSignatureVerifier().verify(requestEnvelope, req.headers);
    await new TimestampVerifier().verify(requestEnvelope);
  } catch (err) {
    console.log(`Error with message: ${err.message}`);
    console.log(`Error object: ${JSON.stringify(err)}`);
    return res.status(400).send(err.message);
  }
  const responseASK = await skill.invoke(req.body);
  return res.status(200).send(responseASK);
});
module.exports = Router;
In my server.js file
const functions = require("firebase-functions"); // provides functions.https.onRequest below
const skillRoute = require("./skill");
const express = require("express");

const server = express();
server.use("/", skillRoute);

exports.server = functions.https.onRequest(server);
The deploy goes well, and I can invoke my Alexa skill normally. But when running the validations for distribution, I get the following problem:
The skill is rejecting the request when executed with additional properties in the JSON request.
When we invoke the skill with additional parameters, the skill is rejecting it when we expect this to be accepted. Future versions of the Alexa Skills Kit may add new properties to the JSON request and response formats, while maintaining backward compatibility for the existing properties. Your code must be resilient to these types of changes. For example, your code for de-serializing a JSON request must not break when it encounters a new, unknown property. Please ensure that your code can handle new attributes and does not break when it encounters new properties in the JSON request.
Documentation Help:
Request and response JSON reference
On my server console I get the following errors:
Error with message: request body and signature does not match
and:
Error object: {"name":"AskSdk.SkillRequestSignatureVerifier Error"}
It seems to me that it may be the result of some ask-sdk-core update, as I have other servers working with the same code.
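One thing worth checking (an assumption on my part, not a confirmed diagnosis): SkillRequestSignatureVerifier must be given the exact raw bytes that Alexa signed, and JSON.stringify(req.body) re-serializes the already-parsed body, which does not always reproduce the original octets (key order, whitespace, unicode escaping). Google Cloud Functions exposes the untouched payload as req.rawBody (a Buffer), so a sketch of the handler verifying against it:

Router.post("/:locale", async (req, res) => {
  // Use the raw bytes Cloud Functions preserved, not a re-stringified body.
  const requestEnvelope = req.rawBody.toString("utf8");
  try {
    await new SkillRequestSignatureVerifier().verify(requestEnvelope, req.headers);
    await new TimestampVerifier().verify(requestEnvelope);
  } catch (err) {
    console.log(`Error with message: ${err.message}`);
    return res.status(400).send(err.message);
  }
  const responseASK = await skill.invoke(req.body);
  return res.status(200).send(responseASK);
});

This would also explain why certification fails specifically when extra properties are added to the request: the re-serialized body then diverges from what was signed, matching the "request body and signature does not match" error above.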
Following a tutorial, I am trying to set up an AWS Lambda function that passes a SQL query to an AWS RDS Aurora Serverless MySQL database using the Data API and returns the query results (presumably as JSON).
The code I used is below (where the params are stored as environment variables):
const AWS = require('aws-sdk')
const RDS = new AWS.RDSDataService()

exports.handler = async (event, context) => {
  console.log(JSON.stringify(event, null, 2)) // Log the entire event passed in

  // Get the sqlStatement string value
  // TODO: Implement a more secure way (e.g. escaping the string) to avoid SQL injection
  var sqlStatement = event.sqlStatement;

  // The Lambda environment variables for the Aurora cluster ARN, database name,
  // and the ARN of the AWS Secret holding the master credentials of the serverless DB
  var DBSecretsStoreArn = process.env.DBSecretsStoreArn;
  var DBAuroraClusterArn = process.env.DBAuroraClusterArn;
  var DatabaseName = process.env.DatabaseName;

  const params = {
    awsSecretStoreArn: DBSecretsStoreArn,
    dbClusterOrInstanceArn: DBAuroraClusterArn,
    sqlStatements: sqlStatement,
    database: DatabaseName
  }

  try {
    let dbResponse = await RDS.executeSql(params)
    console.log(JSON.stringify(dbResponse, null, 2))
    return JSON.stringify(dbResponse)
  } catch (error) {
    console.log(error)
    return error
  }
}
I run the following test from the Lambda console (where "Bonds" is the name of an existing table in my database):
{
  "sqlStatement": "SELECT * FROM Bonds"
}
My test is logged as a success, with a blank output {} and the following error information logged:
INFO TypeError: Converting circular structure to JSON
--> starting at object with constructor 'Request'
| property 'response' -> object with constructor 'Response'
--- property 'request' closes the circle
at JSON.stringify (<anonymous>)
at Runtime.exports.handler (/var/task/index.js:25:24)
Does anyone know how I can successfully retrieve data with this method, and/or what the above error means?
RDS.executeSql(params) does not return a promise that you can await. It simply constructs a request object for you.
Replace it with await RDS.executeSql(params).promise() to get the value that you want.
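A minimal sketch of the corrected block inside the handler:

try {
  // .promise() turns the AWS.Request into a Promise that resolves with the
  // actual Data API response, so await now waits for the query result.
  const dbResponse = await RDS.executeSql(params).promise()
  console.log(JSON.stringify(dbResponse, null, 2))
  return dbResponse
} catch (error) {
  console.log(error)
  return error
}

Note that executeSql is also deprecated in the RDS Data API; executeStatement (with sql, resourceArn, secretArn, and database parameters) is its replacement, so it may be worth migrating while you are at it.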
References:
executeSql
Using Javascript Promises (in AWS SDK)
I'm attempting to replicate a remote CouchDB database to a local one. I can successfully access the remote docs and create a local PouchDB instance. However, as soon as I initiate a replication or sync command, I'm returned a PouchDB error with an empty array (i.e. no message, code, etc.).
// This is the portion of code that is failing:
try {
  const result = await localDb.replicate.from(remoteDb)
} catch (err) {
  console.log(err)
}
The local.ini file on the CouchDB server has the following options set:
[chttpd]
port = 5984
bind_address = 192.168.1.1
[httpd]
enable_cors = true
[cors]
origins = *
Any ideas?
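Not a confirmed answer, but an empty error object from a PouchDB replication is often a CORS failure that the browser reports opaquely. Two things worth checking (assumptions based on CouchDB 2.x+ behavior): enable_cors may need to live under [chttpd] (the clustered interface serving port 5984) rather than [httpd], and the [cors] section usually needs the credentials/headers/methods keys as well. A sketch of the adjusted local.ini:

[chttpd]
port = 5984
bind_address = 192.168.1.1
enable_cors = true

[cors]
origins = *
credentials = true
headers = accept, authorization, content-type, origin, referer
methods = GET, PUT, POST, HEAD, DELETE

After changing local.ini, restart CouchDB and retest; the browser's network tab will show whether the preflight OPTIONS request now succeeds.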
I'm writing a Node app to log some information in a Mongo database.
Below is the snippet that is called each time I need to store a log entry in the Mongo database.
const mongo = {}
const mongo_cli = require('mongodb').MongoClient

module.exports = {
  log (l) {
    mongo_cli.connect(the_mongo_url, (error, client) => {
      if (error) throw error;
      mongo.cli = client;
      mongo.db = client.db(the_database);
      // insert and update operations
    });
  }
}
The code above works for now: I can insert and update logs, but at the price of one (or more) connections that I never close, because of my shaky grasp of callback functions.
So, how can I structure it better so that I make only one mongo_cli.connect call and don't consume too many resources?
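A common approach (a sketch, keeping the question's the_mongo_url and the_database placeholders and using a hypothetical logs collection) is to connect once and cache the promise, so every call to log reuses the same client and its built-in connection pool:

const { MongoClient } = require('mongodb')

let dbPromise = null

// Lazily connect exactly once; concurrent callers share the same promise,
// so only a single client (with its own connection pool) is ever created.
function getDb () {
  if (!dbPromise) {
    dbPromise = MongoClient.connect(the_mongo_url).then((client) => client.db(the_database))
  }
  return dbPromise
}

module.exports = {
  async log (l) {
    const db = await getDb()
    // insert and update operations, e.g.:
    await db.collection('logs').insertOne({ message: l, at: new Date() })
  }
}

The client's pool handles concurrency internally, so there is no need to open or close a connection per log call; let the client live for the lifetime of the process.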