Consuming BigQuery API Client through middleware and locating secrets.json files - javascript

I want to use a secrets folder to store my BigQuery API client secrets JSON, and use the BigQuery client library for Node to query the database. But the documentation only shows how to load credentials from a JSON file path, not how to pass them in directly.
I am using TypeScript to query a table and append rows. In Next.js, I am a little confused about how to reference the path of a secrets folder in a way that won't be exposed to the client side and stays available to my middleware.
Here's the function in question:
import type { NextApiHandler } from 'next';
import axios from 'axios';
import { BigQuery } from '@google-cloud/bigquery';
import bqSecrets from '../../../bigquery/keys.json';

const options = {
  keyFilename: '../../../secrets/keys.json',
  projectId: 'XXXXXXXXXXXXXXXXXXXXXX',
};
const bigquery = new BigQuery(options);

const submitUserData: NextApiHandler = async (request, response) => {
  const { geoData, googleData, post, userRole } = request.body;
  if (!post) response.json({ error: false, msg: 'Not Required' });
  else {
    delete googleData.isAuthenticated;
    try {
      const rep = await bigquery
        .dataset('stackconnect')
        .table('googleoAuth')
        .insert([requestPayload]);
      console.log(rep);
      response.json({ error: false, msg: 'Success' });
    } catch (e) {
      console.log('Error = ', e);
      response.json({ error: true, msg: 'Error' });
    }
  }
};
But this throws an error saying: File not Found. Can someone help me figure out how to locate the file in Next.js, or how to pass the JSON data directly to the BigQuery Node client?
Additionally, I want to know the ideal place in Next.js to store secrets that are not exposed to the client side.

After digging into the source code, I realized that there's an attribute called credentials, so I was able to make it work with the following modification:
import bqSecrets from '../../../bigquery/keys.json';

const options = {
  credentials: bqSecrets,
  projectId: 'project_id',
};
const bigquery = new BigQuery(options);
I hope this helps anyone who's looking to inject JSON credentials directly into the BigQuery Node client.
Additionally, I am still not sure about the file structure of Next.js, as I don't know whether any assets other than those placed under the public folder are exposed to the client side.
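One pattern that sidesteps the file-location question entirely is to keep the service-account JSON in a server-side environment variable (for example in .env.local; Next.js only exposes variables whose names start with NEXT_PUBLIC_ to the browser) and parse it at startup. A minimal sketch, assuming a hypothetical BQ_CREDENTIALS variable that holds the raw key JSON:

import { BigQuery } from '@google-cloud/bigquery';

// BQ_CREDENTIALS is assumed to hold the full service-account JSON string;
// variables without the NEXT_PUBLIC_ prefix are only readable server-side.
const credentials = JSON.parse(process.env.BQ_CREDENTIALS ?? '{}');

const bigquery = new BigQuery({
  credentials,
  projectId: credentials.project_id, // project_id is a field of the key JSON
});

This keeps the key out of the repository and out of the client bundle, since nothing under pages/api is ever shipped to the browser.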

Related

"Access denied" accessing images in myS3 bucket from my express server

I have some problems accessing my S3 images via a GET request from my express server.
I have a mongo database where I store text information for the items on my webpage, and I save the image key that I send to my S3 bucket. Now when I try to get all the items and their respective png images, this error comes up:
...aws-sdk\lib\request.js:31
throw err;
^
AccessDenied: Access Denied ...
even though my user authorization in S3 is fine.
Because I need to fetch all the items for a productPage component, I do it like this:
//ROUTER FILE
router.get("/cust/test", async (req, res) => {
  try {
    let tests;
    tests = await Test.find();
    tests.map((t) => {
      const png = t.png;
      const readStream = s3DwnPng(png);
      readStream.pipe(res);
      console.log(png);
    });
    res.status(200).json(tests);
    console.log(tests);
  } catch (err) {
    res.status(500).json(err);
  }
});

//S3 FILE
function s3DwnPng(fileKey) {
  const dwnParams = {
    Bucket: process.env.AWS_BUCKET_NAME,
    Key: `png/${fileKey}`,
  };
  return s3.getObject(dwnParams).createReadStream();
}
exports.s3DwnPng = s3DwnPng;
but this does not work for me.
Could someone help me?
And is it worth insisting on accessing the images by passing them through my server? I'm considering switching to a public policy with private CORS access to lighten the load on my server; is it really secure to do so?
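A middle ground between proxying every image and a fully public bucket is to return short-lived presigned URLs and let the browser fetch from S3 directly; this also avoids the problem above where several image streams and a JSON body are all written to the same response. A rough sketch against the v2 aws-sdk client already in use (s3PngUrl and the 60-second expiry are illustrative, not from the original code):

//S3 FILE (sketch)
function s3PngUrl(fileKey) {
  // getSignedUrl returns a time-limited URL without reading the object
  return s3.getSignedUrl("getObject", {
    Bucket: process.env.AWS_BUCKET_NAME,
    Key: `png/${fileKey}`,
    Expires: 60, // seconds the link stays valid
  });
}

//ROUTER FILE (sketch): one JSON response; the browser loads each image itself
router.get("/cust/test", async (req, res) => {
  try {
    const tests = await Test.find();
    res.status(200).json(
      tests.map((t) => ({ ...t.toObject(), pngUrl: s3PngUrl(t.png) }))
    );
  } catch (err) {
    res.status(500).json(err);
  }
});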

Lowdb (json database) with Next.js via Netlify returns internal server error 500 on API route, works locally

I've got a really simple JSON flat file db setup that works when running locally but doesn't work once it's hosted on Netlify. I don't get any other error info besides a 500 error on the server. I get the error even if all I do is import the clusterDB object, so something is happening with the lowdb object. I've also tried using another json db library called StormDB and I get the same issue.
Returning the JSON file from my API route with a static import (no db libraries) also works fine.
I'm new to Next.js, and this seems like it might be related to the SSR side of things, since API routes run only on the server. Do I need to structure my files differently? Are these libraries not compatible? Lowdb says it works with Node, and everything works locally for me.
Here is my db init file (root/db/db.js)
import { Low, JSONFileSync } from 'lowdb'

// Cluster DB Setup
const adapter = new JSONFileSync('cluster-db.json')
const clusterDB = new Low(adapter)

// Initialize if empty
clusterDB.read()
clusterDB.data ||= { clusters: [] }
clusterDB.write()

export { clusterDB }
And my only API route (root/pages/api/clusters.js)
import { clusterDB } from '../../db/db'

export default async function handler(req, res) {
  await clusterDB.read()
  switch (req.method) {
    case 'POST':
      let newCluster = { severity: req.query.severity, comments: req.query.comments, date: req.query.date }
      clusterDB.data.clusters.push(newCluster)
      clusterDB.write()
      res.status(200).json({ status: "Success", cluster: newCluster })
      break;
    case 'GET':
      if (clusterDB.data.clusters) {
        res.status(200).json(clusterDB.data.clusters)
      } else {
        res.status(404).json({ status: "404" })
      }
      break;
  }
  res.status(200).json({ test: "yay" })
}
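One thing worth checking in the Netlify deploy is the database path: new JSONFileSync('cluster-db.json') resolves against the process's current working directory, which inside a bundled serverless function is usually not the project root, and the function filesystem is generally read-only outside /tmp, so the write() calls can fail too. A sketch that at least makes the read path explicit (an assumption-laden starting point for debugging, not a guaranteed fix; whether process.cwd() points at the bundle root depends on the host):

import path from 'path'
import { Low, JSONFileSync } from 'lowdb'

// Resolve an absolute path so bundling/cwd changes don't break the lookup.
// Writes will still fail if the host filesystem is read-only.
const file = path.join(process.cwd(), 'db', 'cluster-db.json')
const adapter = new JSONFileSync(file)
const clusterDB = new Low(adapter)

export { clusterDB }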

I keep getting a 403 from Firebase storage when trying to read image files

I'm having a hard time understanding the whole token part of Firebase uploads.
I want to simply upload user avatars, save them to the database, and then read them on the client side.
const storageRef = firebase.storage().ref();
storageRef.child(`images/user-avatars/${user.uid}`).put(imageObj);
Then, in my cloud function, I grab the new url like this:
exports.writeFileToDatabase = functions.storage.object().onFinalize(object => {
  const bucket = defaultStorage.bucket();
  const path = object.name as string;
  const file = bucket.file(path);
  return file
    .getSignedUrl({
      action: "read",
      expires: "03-17-2100"
    })
    .then(results => {
      const url = results[0];
      const slicedPath = path.split("/");
      return db
        .collection("venues")
        .doc(slicedPath[1])
        .set({ images: FieldValue.arrayUnion(url) }, { merge: true });
    });
});
I've enabled IAM in the Google APIs platform and have added the Cloud Functions service agent to the App Engine default service account.
I feel like the exact same configuration has worked before, but now it sometimes doesn't even write the new url, or I get a 403 when trying to read it. I can't find any explanation or error pointing to what I'm doing wrong.
EDIT:
Forgot to add this piece of code, but FieldValue is set at the top of the document as
const FieldValue = admin.firestore.FieldValue;
EDIT:
This is the exact error I get: Failed to load resource: the server responded with a status of 403 ()
I got it when I tried to use the following link, generated automatically by the function above, as the source for an image component:
https://storage.googleapis.com/frothin-weirdos.appspot.com/images/user_avatars/yElCIVY4bAY5g5LnoOBhqN6mDNv2?GoogleAccessId=frothin-weirdos%40appspot.gserviceaccount.com&Expires=1742169600&Signature=qSqPuuY4c5xmdnpvfZh39Pw3Vyu2B%2FbGMD1rQwHDBUZTAnKwP11MaOFQt%2BTV53krkIgvJgQT0Xl3UUxkngmW9785fUri75SSPoBk0z4DKyZnEBLxgTGRE8MzmXadQ%2BHDJ3rSI8IkkoomdnANpLsPN9oySshZ1h4BfOBvAmK0hQ4Gge1glH7qhxFjVWfX3tovZoL8e2smhuCRXxDsZtJh0ihbIeZUEnX8lGic%2B9IT6y4OskS2ZlrZNjvM10hcEesoPdHsT4oCvfhCNbUcJcueRKfsWlDCd9m6qmf42WVOc7UI0nE0oEvysMutWY971GVRKTLwIXRnTLSNOr6fSvJE3Q%3D%3D
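In case it helps with debugging: signed URLs minted inside Cloud Functions depend on the runtime service account being allowed to sign blobs, and a missing signing permission is a common source of 403s like this one. If public read access to avatars is acceptable, a simpler alternative is to skip signing altogether, mark the object public, and store its plain storage URL. A sketch reusing the bucket, file, path, and db objects from the function above:

return file.makePublic().then(() => {
  // Public objects are reachable at a stable, unsigned URL
  const url = `https://storage.googleapis.com/${bucket.name}/${path}`;
  const slicedPath = path.split("/");
  return db
    .collection("venues")
    .doc(slicedPath[1])
    .set({ images: FieldValue.arrayUnion(url) }, { merge: true });
});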

Best way to structure a service in node.js/express?

I am starting to move the logic away from the routes in the express application and into a service provider. One of these routes deals with streams; not only that, it also requires some more logic to take place once the stream is finished. Here is an example of the express route.
router.get("/file-service/public/download/:id", async(req, res) => {
try {
const ID = req.params.id;
FileProvider.publicDownload(ID, (err, {file, stream}) => {
if (err) {
console.log(err.message, err.exception);
return res.status(err.code).send();
} else {
res.set('Content-Type', 'binary/octet-stream');
res.set('Content-Disposition', 'attachment; filename="' + file.filename + '"');
res.set('Content-Length', file.metadata.size);
stream.pipe(res).on("finish", () => {
FileProvider.removePublicOneTimeLink(file);
});
}
})
} catch (e) {
console.log(e);
res.status(500).send(e);
}
})
And here is one of the functions inside the service provider.
this.publicDownload = async (ID, cb) => {
  const bucket = new mongoose.mongo.GridFSBucket(conn.db, {
    chunkSizeBytes: 1024 * 255,
  })
  let file = await conn.db.collection("fs.files")
    .findOne({ "_id": ObjectID(ID) })
  if (!file || !file.metadata.link) {
    return cb({
      message: "File Not Public/Not Found",
      code: 401,
      exception: undefined
    })
  } else {
    const password = process.env.KEY;
    const IV = file.metadata.IV.buffer
    const readStream = bucket.openDownloadStream(ObjectID(ID))
    readStream.on("error", (e) => {
      console.log("File service public download stream error", e);
    })
    const CIPHER_KEY = crypto.createHash('sha256').update(password).digest()
    const decipher = crypto.createDecipheriv('aes256', CIPHER_KEY, IV);
    decipher.on("error", (e) => {
      console.log("File service public download decipher error", e);
    })
    cb(null, {
      file,
      stream: readStream.pipe(decipher)
    })
  }
}
Because it is not wise to pass res or req into the service provider (I'm guessing because of unit testing), I have to return the stream inside the callback. From there I pipe that stream into the response and also add an on-finish handler to remove a one-time download link for the file. Is there any way to move more of this logic into the service provider without passing res/req into it? Or am I going about this all wrong?
Is there any way to move more of this logic into the service provider without passing res/req into it?
As we've discussed in comments, you have a download operation that is part business logic and part web logic. Because you're streaming the response with custom headers, it's not as simple as "business logic get me the data and I'll manage the response completely on my own" like many classic database operations are.
If you are going to keep them completely separate while letting the download process encapsulate as much as it can, you would have to create a higher bandwidth interface between your service provider and the Express code that knows about the res object than just the one callback you have now.
Right now, you only have one operation supported and that's to pass the piped stream. But, the download code really wants to specify the content-type and size information (that's where it's known inside the download code) and it wants to know when the write stream is done so it can do its cleanup logic. And, something you don't show is proper error handling if there's an error while streaming the data to the client (with proper cleanup in that case too).
If you want to move more code into the downloader, you'd have to essentially make a little interface that allows the service code to drive more than one operation on the response, but without having an actual response object. That interface doesn't have to be a full response stream. It could just have methods on it for getting notified when the stream is done, starting the streaming, setting headers, etc...
As I've said in the comments, you will have to decide if that actually makes the code simpler or not. Design guidelines are not absolute. They are things to consider when making design choices. They shouldn't drive you in a direction that gives you code that is significantly more complicated than if you had made different design choices.
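To make that concrete, here is one possible shape for such an interface; every name in it (out, setHeaders, fail, stream, decipherStreamFor) is illustrative rather than taken from the original code. The route hands the provider a small writer object, and the provider drives headers, piping, and cleanup without ever touching res:

// Route: adapt res into a minimal writer interface
router.get("/file-service/public/download/:id", (req, res) => {
  FileProvider.publicDownload(req.params.id, {
    setHeaders: (headers) => res.set(headers),
    fail: (code, msg) => res.status(code).send(msg),
    stream: () => res, // the provider pipes into this and watches for finish
  });
});

// Provider: owns the whole download flow; knows nothing about Express.
// decipherStreamFor stands in for the AES setup shown earlier.
this.publicDownload = async (ID, out) => {
  const bucket = new mongoose.mongo.GridFSBucket(conn.db, { chunkSizeBytes: 1024 * 255 });
  const file = await conn.db.collection("fs.files").findOne({ "_id": ObjectID(ID) });
  if (!file || !file.metadata.link) {
    return out.fail(401, "File Not Public/Not Found");
  }
  out.setHeaders({
    'Content-Type': 'binary/octet-stream',
    'Content-Disposition': 'attachment; filename="' + file.filename + '"',
    'Content-Length': file.metadata.size,
  });
  bucket.openDownloadStream(ObjectID(ID))
    .pipe(decipherStreamFor(file))
    .pipe(out.stream())
    .on("finish", () => removePublicOneTimeLink(file))
    .on("error", (e) => console.log("Public download stream error", e));
};

Whether that is actually simpler than passing res is exactly the judgment call described above; the gain is that the finish/cleanup logic now lives next to the business logic instead of in the route.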

Nodejs mssql/msnodesqlv8 issue sending semicolon in database request

Attempting to build a basic API to interact with an MSSQL v12 database using Nodejs. I have been able to connect to the database using the mssql/msnodesqlv8 package, but parameterized queries are failing with the following:
code: 'EREQUEST',
number: 102,
state: undefined,
originalError:
{ Error: [Microsoft][SQL Server Native Client 11.0][SQL Server]Incorrect syntax near ''. sqlstate: '42000', code: 102 },
name: 'RequestError' }
Debug: internal, implementation, error
I used SQL Server Profiler and saw that the query was coming in as
exec sp_executesql N'declare @SecurityKey nvarchar (MAX);set @SecurityKey=@P1;exec database.getSecurityBySecurityId @SecurityKey;',N'@P1 nvarchar(20)',N'XXXXXXXX'
and failing. After some investigation, it seems to be an issue with the semicolons after the declare and set statements, which my SQL Server version seems to reject (very new to MSSQL, will need to read up). Removing the semicolons did indeed fix the issue when I ran the query manually.
So my question is this: is there a way to get msnodesqlv8 to work with my version of MSSQL, and if so, how? Is there a way to omit these semicolons?
If you think there is a better way, I would like to hear it, as I am new to Nodejs + MSSQL.
Contents of getSecurity.sql
exec database.getSecurityBySecurityId @SecurityKey
Contents of index.js:
"use strict";
const utils = require("../utils");
const api = async ({ sql, getConnection }) => {
const sqlQueries = await utils.loadSqlQueries("events");
const getSecurity = async SecurityKey => {
const cnx = await getConnection();
const request = await cnx.request();
request.input('SecurityKey', SecurityKey);
return request.query(sqlQueries.getSecurity);
};
return {
getSecurity
};
};
module.exports = { api };
I was able to work around this by editing the library.
In ./lib/msnodesqlv8.js you can find where it concatenates the query string:
...
}
if (input.length) command = `declare ${input.join(',')} ${sets.join(';')};${command};`
if (output.length) {
  command += `select ${output.join(',')};`
  handleOutput = true
}
...
Editing this will allow you to control how the command string is assembled.
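For illustration, the patched line might separate the generated statements with whitespace instead of semicolons; this is a sketch of the local edit described above, not upstream library code:

if (input.length) command = `declare ${input.join(',')} ${sets.join(' ')} ${command}`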
