Where should I initialize pg-promise - javascript

I just started to learn nodejs-postgres and found the pg-promise package.
I read the docs and examples, but I don't understand where I should put the initialization code. I'm using Express and I have many routes.
Do I have to put the whole initialization (including the pg-monitor init) into every single file where I'd like to query the db, or do I only need to include and initialize/configure it in server.js?
If I initialize it only in server.js, what should I include in the other files where I need a db query?
In other words, it's not clear to me whether pg-promise and pg-monitor configuration/initialization is a global or a local action.
It's also unclear whether I need to create a db variable and end pgp for every single query:
var db = pgp(connection);
db.query(...).then(...).catch(...).finally(pgp.end);

You need to initialize the database connection only once. If it is to be shared between modules, then put it into its own module file, like this:
const initOptions = {
    // initialization options;
};

const pgp = require('pg-promise')(initOptions);

const cn = 'postgres://username:password@host:port/database';
const db = pgp(cn);

module.exports = {
    pgp, db
};
See supported Initialization Options.
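Any other module then simply requires the shared objects. For example, a route file might look like this (a minimal sketch; the file layout and query are assumptions, not part of the answer):
// routes/users.js - hypothetical route module reusing the shared db object
const {db} = require('../db'); // the module shown above

module.exports = app => {
    app.get('/users/:id', async (req, res) => {
        try {
            // db.one resolves with exactly one row, or rejects
            const user = await db.one('SELECT * FROM users WHERE id = $1', [req.params.id]);
            res.json(user);
        } catch (err) {
            res.status(500).send(err.message);
        }
    });
};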
UPDATE-1
And if you try creating more than one database object with the same connection details, the library will output a warning into the console:
WARNING: Creating a duplicate database object for the same connection.
at Object.<anonymous> (D:\NodeJS\tests\test2.js:14:6)
This points out that your database usage pattern is bad, i.e. you should share the database object, as shown above, not re-create it over and over. And since version 6.x this has become critical: each database object maintains its own connection pool, so duplicating database objects also results in poor connection usage.
Also, it is not necessary to export pgp, the initialized library instance. Instead, you can just do:
module.exports = db;
And if in some module you need to use the library's root, you can access it via property $config:
const db = require('../db'); // your db module
const pgp = db.$config.pgp; // the library's root after initialization
UPDATE-2
Some developers have been reporting (issue #175) that certain frameworks, like NextJS, manage to load modules in a way that breaks the singleton pattern, which results in the database module being loaded more than once and produces the duplicate-database warning, even though from the NodeJS point of view it should just work.
Below is a workaround for such integration issues, forcing the singleton into the global scope using Symbol. Let's create a reusable helper for creating singletons...
// generic singleton creator:
export function createSingleton<T>(name: string, create: () => T): T {
    const s = Symbol.for(name);
    let scope = (global as any)[s];
    if (!scope) {
        scope = {...create()};
        (global as any)[s] = scope;
    }
    return scope;
}
Using the helper above, you can modify your TypeScript database file into this:
import * as pgLib from 'pg-promise';
import {createSingleton} from './singleton'; // the helper above; adjust the path to wherever you placed it

const pgp = pgLib(/* initialization options */);

interface IDatabaseScope {
    db: pgLib.IDatabase<any>;
    pgp: pgLib.IMain;
}

export function getDB(): IDatabaseScope {
    return createSingleton<IDatabaseScope>('my-app-db-space', () => {
        return {
            db: pgp('my-connect-string'),
            pgp
        };
    });
}
Then, at the beginning of any file that uses the database, you can do this:
import {getDB} from './db';
const {db, pgp} = getDB();
This will ensure a persistent singleton pattern.

A "connection" in pgp is actually an auto-managed pool of multiple connections. Each time you make a request, a connection will be grabbed from the pool, opened up, used, then closed and returned to the pool. That's a big part of why vitaly-t makes such a big deal about only creating one instance of pgp for your whole app. The only reason to end your connection is if you are definitely done using the database, i.e. you are gracefully shutting down your app.

Related

How can I invoke a firebase storage function locally and manually?

I am fairly familiar with invoking firebase firestore functions manually for testing; I have done this in the past with the docs here. In short, I make a wrappedOnWrite variable, assign it to the test SDK's wrap(myFirebaseFunction), and then invoke it with an object that represents the firestore collection before and after the call. Example:
const testEnv = functions({ projectId: **** }, '****');

let wrappedOnWrite:
    | WrappedScheduledFunction
    | WrappedFunction<Change<QueryDocumentSnapshot>>;

beforeAll(() => {
    // wrap the firebase function, ready to be invoked
    wrappedOnWrite = testEnv.wrap(moderateCompetition);
});

const changeDoc: Change<QueryDocumentSnapshot> = {
    before: testEnv.firestore.makeDocumentSnapshot(dataBefore, path),
    after: testEnv.firestore.makeDocumentSnapshot(dataAfter, path),
};

// call the firebase function with the changes input to the function
await wrappedOnWrite(changeDoc);
This is great for testing my firestore collections with jest; however, I have never seen this done with firebase storage, and I haven't been able to find many useful resources either. I have a pretty basic storage .onFinalize function:
export const blurImages = functions.storage.object().onFinalize(async (object) => {});
Is there any way to manually test it like in the example I gave above? When I initially deployed the function, it ran recursively and charged my company a couple of dollars, and I need to be able to test it periodically so that that recursion doesn't happen on the live function. Thanks in advance :)
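For what it's worth, firebase-functions-test appears to support storage functions symmetrically: wrap the function, then invoke it with an ObjectMetadata object. A hedged sketch, assuming your version exposes testEnv.storage.makeObjectMetadata (verify against its docs; all paths and field values below are made up):
// hedged sketch - wrap the storage function the same way as the firestore one
const wrappedOnFinalize = testEnv.wrap(blurImages);

// build a minimal ObjectMetadata; real finalize events carry more fields
const objectMetadata = testEnv.storage.makeObjectMetadata({
    name: 'images/test.png',          // hypothetical object path
    bucket: 'my-project.appspot.com', // hypothetical bucket
    contentType: 'image/png',
});

await wrappedOnFinalize(objectMetadata);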

How do I import a local json file as a starter for my in-memory resource in Express

I am working on a simple todo app with node.js express, and I wanted to manipulate some resource in memory instead of connecting to a database.
I have a local json file todo.json with a predefined data set, and I wanted to use that as a starter, with the CRUD operations built on top of it.
So I have a function initializeTodos and a function getTodos
import { readFile } from 'fs/promises'

const initializeTodos = async () =>
    JSON.parse(
        await readFile(process.cwd() + '/src/resources/todo/todo.json', 'utf-8')
    )

export const getTodos = async () => {
    return initializeTodos()
}
Then in each route handler I would call getTodos to get the todo list and perform CRUD operations on it. But now the issue is that every time I call getTodos, it in turn calls initializeTodos, which re-reads the static json file. That means any changes I make after getTodos are not kept in memory; the data gets reset on every call to getTodos.
I guess I could write back to the disk for each crud operation but I really wanted to keep it simple here to just do it in memory. Is there a way I can achieve that?
But now the issue is, every time I call getTodos it in turn calls initializeTodos
Then don't call initializeTodos.
You should load the file once at the start of your app and assign the data to a global variable that is shared throughout your application. That will be your 'database', all in memory.
The updates and reads will then be going to the same place, so you will see updated results every time, i.e. the routes will read and write from that global variable.
Once you have this working, refactor the global variable out into its own class, call it ToDoInMemoryDb, and hide all access behind it to keep things clean; global shared vars can lead you into bad habits.
On app shutdown, you can persist the latest value of the variable back to disk, so that next time you have all the edits made.
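A minimal sketch of that pattern, keeping the question's file path (the module name and the init/persist hooks are assumptions):
// todoStore.js - hypothetical module: loads todo.json once; every route
// that imports it reads and writes the same in-memory array
import { readFile, writeFile } from 'fs/promises'

const path = process.cwd() + '/src/resources/todo/todo.json'

let todos = []

// call once at app startup
export const init = async () => {
    todos = JSON.parse(await readFile(path, 'utf-8'))
}

// every caller gets the same shared array
export const getTodos = () => todos

// call on app shutdown to keep the edits for next time
export const persist = async () => {
    await writeFile(path, JSON.stringify(todos, null, 2))
}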

Set up internal dependencies at build time

I'm looking at using Terraform to build a small multi-cloud project. The project will call some API, perform some manipulation of the data received, then store it in the cloud.
I've written a Typescript project that I intend to upload as either an AWS Lambda function or an Azure Function, depending on the choice of cloud provider, set at deployment.
The problem I'm facing is that the Typescript project must be able to switch its storage mechanism depending on which cloud provider it is being deployed to. I have something similar to this, with the cloud provider set as an environment variable on deployment:
export const handler = async (event: any = {}): Promise<any> => {
    const users: User[] = await fetchUsers();
    users.PerformSomeUpdate();

    const userRepository: IUserRepository = getUserRepository();
    userRepository.Save(users);

    return JSON.stringify(event, null, 2);
}

function getUserRepository(): IUserRepository {
    if (process.env["CLOUD_PROVIDER"] == "AWS") {
        return new S3Repository();
    }
    if (process.env["CLOUD_PROVIDER"] == "AZURE") {
        return new AzureBlobRepository();
    }
    throw new Error("CLOUD_PROVIDER not set"); // so every code path returns or throws
}
This works fine, but I'd rather not have to check the cloud provider on each execution of the function. Is there a way I can set this dependency at the build/deployment stage instead?
I've looked into several DI frameworks, but I don't think they're the right answer, as a DI container resolves dependencies at run time.
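One way to move the decision to build time (not from the post; a sketch of a common pattern) is to keep the handler logic provider-agnostic and give each cloud its own entry point, letting the bundler (e.g. esbuild or webpack) include only the entry that matches the deployment target. All file and symbol names other than those from the question are hypothetical:
// handler.core.ts - provider-agnostic logic; the repository is injected
// once at module load, so nothing is checked per invocation
import { IUserRepository } from './userRepository';
import { fetchUsers, User } from './users';

export const createHandler = (userRepository: IUserRepository) =>
    async (event: any = {}): Promise<any> => {
        const users: User[] = await fetchUsers();
        userRepository.Save(users);
        return JSON.stringify(event, null, 2);
    };

// handler.aws.ts - entry point bundled for AWS deployments,
// e.g. esbuild src/handler.aws.ts --bundle --outfile=dist/handler.js
import { S3Repository } from './s3Repository';
import { createHandler } from './handler.core';
export const handler = createHandler(new S3Repository());

// handler.azure.ts - entry point bundled for Azure deployments
import { AzureBlobRepository } from './azureBlobRepository';
import { createHandler } from './handler.core';
export const handler = createHandler(new AzureBlobRepository());
Terraform can then pick which artifact to build and upload from the same variable that selects the cloud provider.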

How to configure AWS CDK Account and Region to look up a VPC

I am learning the AWS CDK, and this is a problem I can't seem to figure out. JS/Node are not languages I use often, so if there is some obvious native thing that I am missing, please don't be too harsh. I'm trying to deploy a container to an existing VPC / new ECS Cluster. The following code isn't my whole script but is an important part. Hopefully, it gives the idea of what I'm trying to do.
// import everything first

const stack_name = "frontend";

class Frontend extends core.Stack {
    constructor(scope, id, props = {}) {
        super(scope, id);
        console.log("env variable " + JSON.stringify(props));
        const base_platform = new BasePlatform(this, id, props);

        // this bit doesn't matter, I'm just showing the functions I'm calling to set everything up
        const fargate_load_balanced_service = ecs_patterns.ApplicationLoadBalancedFargateService();
        this.fargate_load_balanced_service.taskDefinition.addToTaskRolePolicy();
        this.fargate_load_balanced_service.service.connections.allowTo();
        const autoscale = this.fargate_load_balanced_service.service.autoScaleTaskCount({});
        this.autoscale.scale_on_cpu_utilization();
    }
}
class BasePlatform extends core.Construct {
    constructor(scope, id, props = {}) {
        super(scope, id);
        this.environment_name = "frontend";
        console.log("environment variables " + JSON.stringify(process.env));

        // This bit is my problem child
        const vpc = ec2.Vpc.fromLookup(this, "VPC", {
            vpcId: 'vpc-##########'
        });

        // this bit doesn't matter, I'm just showing the functions I'm calling to set everything up
        const sd_namespace = service_discovery.PrivateDnsNamespace.from_private_dns_namespace_attributes();
        const ecs_cluster = ecs.Cluster.from_cluster_attributes();
        const services_sec_grp = ec2.SecurityGroup.from_security_group_id();
    }
}
const app = new core.App();
const _env = { account: process.env.CDK_DEFAULT_ACCOUNT, region: process.env.CDK_DEFAULT_REGION };
new Frontend(app, stack_name, { env: _env });
app.synth();
When I run CDK synth, it spits out:
Error: Cannot retrieve the value from context provider vpc-provider since the account/region is not specified at the stack level. Either configure "env" with explicit account and region when you define your stack or use the environment variables "CDK_DEFAULT_ACCOUNT" and "CDK_DEFAULT_REGION" to inherit environment information from the CLI (not recommended for production stacks)
But I don't know why. My usage here fits several other Stack Overflow answers to similar questions, it looks like the examples in the AWS docs, and when I console.log(process.env), it spits out the correct/expected values of CDK_DEFAULT_REGION and CDK_DEFAULT_ACCOUNT. When I log "env" it spits out the expected values as well.
So my question is, how do I configure my environment so ec2.Vpc.fromLookup knows my account info, or how do I pass the values properly to "env"?
As I understand it, you must specify an environment explicitly if you want to use environment specifics at synth time.
The AWS CDK distinguishes between not specifying the env property at all and specifying it using CDK_DEFAULT_ACCOUNT and CDK_DEFAULT_REGION. The former implies that the stack should synthesize an environment-agnostic template. Constructs that are defined in such a stack cannot use any information about their environment. For example, you can't write code like if (stack.region === 'us-east-1') or use framework facilities like Vpc.fromLookup (Python: from_lookup), which need to query your AWS account. These features do not work at all without an explicit environment specified; to use them, you must specify env.
If you want to share environment variables with the cli you can do it like this:
new MyDevStack(app, 'dev', {
    env: {
        account: process.env.CDK_DEFAULT_ACCOUNT,
        region: process.env.CDK_DEFAULT_REGION
    }
});
Pass the props with env to the parent construct constructor explicitly, as mentioned by Nick Cox:
class BasePlatform extends core.Construct {
    constructor(scope, id, props = {}) {
        super(scope, id, props);
        // ...
    }
}
Since I was not able to comment, I am posting my query here.
From the look of it, there is just a single stack, frontend. So I believe you can also try hard-coding the account id and region in code and see if it works.
Also, I am curious what the output of this is:
console.log("environment variables " + JSON.stringify(process.env));
Replace super(scope, id) with super(scope, id, props);
The props need to be passed to super for the vpc-provider to use them.
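Applied to the question's code, that is a one-line change in the Frontend stack, which is what receives {env: _env} (a sketch):
class Frontend extends core.Stack {
    constructor(scope, id, props = {}) {
        super(scope, id, props); // forward props so env reaches the Stack
        // ...rest unchanged; Vpc.fromLookup inside BasePlatform can now
        // resolve the account/region from its enclosing stack
    }
}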
The easiest way is to use the AWS CLI (aws configure). You will need programmatic access for your user, and to generate access keys from the AWS console.
AWS CDK documentation

Polkadot-js Babel error when adding custom @polkadot/types

I'm setting up a Nuxt.js app with Polkadot-JS. When I make a request to a custom substrate runtime module with my @polkadot/types, I get this error: Class constructor Struct cannot be invoked without 'new'.
This is for a Nuxt.js app with the official typescript setup. In the past, I've tried to set it up with clean Nuxt.js and Vue, but always got the same error. Only if I set up clean NodeJS (with or without typescript) or use the @polkadot react apps does it work well.
I've created a repository to try some other ways.
API call:
class VecU32 extends Vector.with(u32) {}

class Kind extends Struct {
    constructor(value) {
        super({
            stuff: VecU32
        }, value);
    }
}

const Alice = "5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY";
const provider = new WsProvider("ws://127.0.0.1:9944");

const typeRegistry = getTypeRegistry();
typeRegistry.register({ Kind });

const api = await ApiPromise.create(provider);
// With types provided in the create function - works well
// types: {
//     Kind: {
//         stuff: "Vec<u32>"
//     }
// }

const res = await api.query.template.kinds(Alice);
console.log(res);
I expect empty (or some values, depending on what is in the blockchain) result output, but the actual output is the error, Class constructor Struct cannot be invoked without 'new'.
Short answer:
Instead of registering the class with typeRegistry.register({ Kind });, register the type's structure:
typeRegistry.register({
    Kind: {
        stuff: 'Vec<u32>'
    }
});
Longer answer
When you call typeRegistry.register({ Kind });, you're trying to register the Typescript class as a custom type in the registry, but the types you need to pass to the API's type registry have nothing to do with your Typescript types; the two are not directly associated with each other.
Even if you were writing plain Javascript, you would still need to register your custom Substrate types with the Polkadot-JS API.
The types passed to the API are used to decode and encode the data you're sending and receiving to/from your substrate node. They are compliant with the SCALE codec which is also implemented in the Substrate core Rust code. Using these types makes sure that the data can be correctly de- and encoded in different environments and by different languages.
You can read more about it here: https://substrate.dev/docs/en/overview/low-level-data-format
The Javascript representation of these types are what's listed as "Codec types" in the Polkadot-JS docs:
https://polkadot.js.org/api/types/#codec-types
All the other types you find in the Polkadot-JS docs are extensions of these low-level codec types.
What you need to pass to the JS API are all the custom types of all your custom substrate modules, so that the API knows how to de- and encode your data. So in your case, what you declared here in Rust:
pub struct Kind {
    stuff: Vec<u32>,
}
needs to be registered like this in Javascript:
typeRegistry.register({
    Kind: {
        stuff: 'Vec<u32>'
    }
});
Your Typescript types, on the other hand, are there to make sure that the data you're handling client-side, in your frontend written in Typescript, has the correct types.
They're only needed by Typescript and add an extra layer of safety, but the types themselves are not needed to communicate with the API. Your data definitely needs to have the correct format to prevent errors, though.
You can think of https://www.npmjs.com/package/@polkadot/types as a Substrate/Polkadot-specific version of https://github.com/DefinitelyTyped/DefinitelyTyped
But even if you're not using Typescript, https://polkadot.js.org/api/types/ is still 100% your go-to reference.
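Equivalently, as the commented-out block in the question already shows, the same definitions can be passed straight to ApiPromise.create (a sketch against the API version used in the question):
// same effect as registering the types up front
const api = await ApiPromise.create({
    provider,
    types: {
        Kind: {
            stuff: 'Vec<u32>'
        }
    }
});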
