I am starting to move logic out of the routes in my Express application and into a service provider. One of these routes deals with streams; on top of that, it also requires some more logic to run once the stream is finished. Here is an example of the Express route.
router.get("/file-service/public/download/:id", async (req, res) => {
  try {
    const ID = req.params.id;
    // default {} so the destructuring doesn't throw when only an error is passed back
    FileProvider.publicDownload(ID, (err, { file, stream } = {}) => {
      if (err) {
        console.log(err.message, err.exception);
        return res.status(err.code).send();
      } else {
        res.set('Content-Type', 'binary/octet-stream');
        res.set('Content-Disposition', 'attachment; filename="' + file.filename + '"');
        res.set('Content-Length', file.metadata.size);
        stream.pipe(res).on("finish", () => {
          FileProvider.removePublicOneTimeLink(file);
        });
      }
    })
  } catch (e) {
    console.log(e);
    res.status(500).send(e);
  }
})
And here is one of the functions inside the service provider.
this.publicDownload = async (ID, cb) => {
  const bucket = new mongoose.mongo.GridFSBucket(conn.db, {
    chunkSizeBytes: 1024 * 255,
  })
  let file = await conn.db.collection("fs.files")
    .findOne({ "_id": ObjectID(ID) })
  if (!file || !file.metadata.link) {
    return cb({
      message: "File Not Public/Not Found",
      code: 401,
      exception: undefined
    })
  } else {
    const password = process.env.KEY;
    const IV = file.metadata.IV.buffer
    const readStream = bucket.openDownloadStream(ObjectID(ID))
    readStream.on("error", (e) => {
      console.log("File service public download stream error", e);
    })
    const CIPHER_KEY = crypto.createHash('sha256').update(password).digest()
    const decipher = crypto.createDecipheriv('aes256', CIPHER_KEY, IV);
    decipher.on("error", (e) => {
      console.log("File service public download decipher error", e);
    })
    cb(null, {
      file,
      stream: readStream.pipe(decipher)
    })
  }
}
Because it is not wise to pass res or req into the service provider (I'm guessing because of unit testing), I have to return the stream inside the callback. From there I pipe that stream into the response and also attach a finish handler to remove the one-time download link for the file. Is there any way to move more of this logic into the service provider without passing res/req into it? Or am I going about this all wrong?
Is there any way to move more of this logic into the service provider without passing res/req into it?
As we've discussed in comments, you have a download operation that is part business logic and part web logic. Because you're streaming the response with custom headers, it's not as simple as "business logic get me the data and I'll manage the response completely on my own" like many classic database operations are.
If you are going to keep them completely separate while letting the download process encapsulate as much as it can, you would have to create a higher bandwidth interface between your service provider and the Express code that knows about the res object than just the one callback you have now.
Right now, you only have one operation supported and that's to pass the piped stream. But, the download code really wants to specify the content-type and size information (that's where it's known inside the download code) and it wants to know when the write stream is done so it can do its cleanup logic. And, something you don't show is proper error handling if there's an error while streaming the data to the client (with proper cleanup in that case too).
If you want to move more code into the downloader, you'd have to essentially make a little interface that allows the service code to drive more than one operation on the response, but without having an actual response object. That interface doesn't have to be a full response stream. It could just have methods on it for getting notified when the stream is done, starting the streaming, setting headers, etc...
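To make that concrete, here is a rough, hypothetical sketch of such an interface. The method names (setHeaders, send, onFinish, onError) are invented for illustration, not an existing API, and a real version would also need a way to report the "file not found" error:

// Service provider: drives a tiny "response port" instead of res itself.
this.publicDownload = async (ID, out) => {
  // ...same lookup, error handling and decipher setup as in the code above...
  out.setHeaders({
    'Content-Type': 'binary/octet-stream',
    'Content-Disposition': `attachment; filename="${file.filename}"`,
    'Content-Length': file.metadata.size
  });
  out.send(readStream.pipe(decipher), {
    onFinish: () => this.removePublicOneTimeLink(file),
    onError: (e) => console.log("Public download failed mid-stream", e)
  });
};

// Express route: a thin adapter that implements that interface over res.
router.get("/file-service/public/download/:id", (req, res) => {
  FileProvider.publicDownload(req.params.id, {
    setHeaders: (headers) => res.set(headers),
    send: (stream, { onFinish, onError }) =>
      stream.pipe(res).on("finish", onFinish).on("error", onError)
  });
});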
As I've said in the comments, you will have to decide if that actually makes the code simpler or not. Design guidelines are not absolute. They are things to consider when making design choices. They shouldn't drive you in a direction that gives you code that is significantly more complicated than if you had made different design choices.
The Apollo GraphQL documentation states:
The onError link can retry a failed operation based on the type of GraphQL error that's returned. For example, when using token-based authentication, you might want to automatically handle re-authentication when the token expires.
This is followed up by their sample code:
onError(({ graphQLErrors, networkError, operation, forward }) => {
  if (graphQLErrors) {
    for (let err of graphQLErrors) {
      switch (err.extensions.code) {
        // Apollo Server sets code to UNAUTHENTICATED
        // when an AuthenticationError is thrown in a resolver
        case "UNAUTHENTICATED":
          // Modify the operation context with a new token
          const oldHeaders = operation.getContext().headers;
          operation.setContext({
            headers: {
              ...oldHeaders,
              authorization: getNewToken(),
            },
          });
          // Retry the request, returning the new observable
          return forward(operation);
      }
    }
  }
  // To retry on network errors, we recommend the RetryLink
  // instead of the onError link. This just logs the error.
  if (networkError) {
    console.log(`[Network error]: ${networkError}`);
  }
});
My question is in regard to getNewToken(). Since no code was provided for this function, I want to know (assuming this is another request to the backend, and I am not sure how it could not be) whether you are able to, and/or supposed to, make it a GraphQL query/mutation, or whether you should make the request through axios, for example.
One problem, if it can/should be a GraphQL query or mutation, is that the onError code is defined in the same file as the ApolloClient (since ApolloClient needs access to onError), so when I tried to implement this by retrieving a new token through a GraphQL mutation I got the following error:
React Hook "useApolloClient" is called in function "refresh" that is
neither a React function component nor a custom React Hook function.
After trying the useQuery/useMutation hooks and realizing I cannot use them outside of a React component (they must be called at the top level of one), I found this post whose answers suggested getting the client from useApolloClient and calling client.mutate instead, but I still ran into issues. My code was as follows (and I tried multiple iterations of this same code, with useApolloClient() outside the function, inside it, etc.):
const refresh = () => {
  const client = useApolloClient();
  const refreshFunc = () => {
    client
      .mutate({ mutation: GET_NEW_TOKEN })
      .then((data) => {
        console.log(data);
      })
      .catch((err) => {
        console.log(err);
      });
  };
  refreshFunc();
};
I could capitalize it to Refresh, but this still would not work and would break the Rules of Hooks.
To clarify, all the above would do is replace the console.logs with setting session storage to the newly retrieved token and then retrying the original request with onError.
In another post I found while looking into this, the user's getNewToken request was a REST request using axios:
const getNewToken = async () => {
  try {
    const { data } = await axios.post(
      "https://xxx/api/v2/refresh",
      { token: localStorage.getItem("refreshToken") }
    );
    localStorage.setItem("refreshToken", data.refresh_token);
    return data.access_token;
  } catch (error) {
    console.log(error);
  }
};
From my understanding, if I wanted to implement it this way I would have to change my backend to include Express, as I am only using Apollo Server. I could definitely be wrong about that, as my backend knowledge is quite limited, and I would love to be corrected there.
So my question is: what is the best way to do this, whether natively using GraphQL queries/mutations (if possible), doing it with axios, or maybe there is another best practice for this seemingly common task that I am unaware of.
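To make the GraphQL-only option concrete, this is roughly what I imagine getNewToken could look like as a plain HTTP request against the GraphQL endpoint itself (no hooks, no axios); the endpoint URL, the refreshToken mutation and the response shape are just placeholders for whatever my schema actually exposes:

const getNewToken = async () => {
  // Placeholder endpoint URL; Apollo Server serves a standard HTTP GraphQL endpoint
  const response = await fetch("https://my-server.example/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      // Placeholder mutation and fields; depends on the actual schema
      query: `
        mutation RefreshToken($token: String!) {
          refreshToken(token: $token) {
            accessToken
          }
        }
      `,
      variables: { token: sessionStorage.getItem("refreshToken") },
    }),
  });
  const { data } = await response.json();
  sessionStorage.setItem("token", data.refreshToken.accessToken);
  return data.refreshToken.accessToken;
};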
My Node.js functions return an error from a try/catch block, such as the one below, if the user doesn't exist or if the database is not reachable:
router.delete('/delete/:email', async (req, res) => {
  var email = req.params.email;
  try {
    let result = await User.remove({ "email": email });
    res.status(204).send(email);
  } catch (err) {
    res.status(400).send(err);
  }
});
I can also return an Error from the Node.js server myself:
return res.status(400).send(new Error(`The user with email ${email} doesn't exist.`));
The first problem is that I can't find the error message, which is embedded somewhere deep in the body of the returned Error object. It is stored in one of its 100+ attributes. Where should I look for it so I can display it on screen for the end user to read?
Then, the err object generated by the try/catch block has a different set of attributes compared to the Error object created with new Error("Here is my error message"). Is there a way to normalize the returned errors so they all have the same or similar attributes?
You don't need to return the whole error object from the server, and arguably shouldn't since error messages can expose internals about your code and infrastructure.
One way you could handle this is to format and return an error message from the server yourself. Assuming you're using Express, this would look something like:
return res.status(400).json({ message: `The user with email ${email} doesn't exist.` });
Alternatively, you could use an error-handling middleware like strong-error-handler (https://github.com/strongloop/strong-error-handler), which automatically builds a JSON-formatted message that's easier to parse. Keep in mind that the content of the message differs depending on whether debug mode is set to true or false.
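As a rough sketch of how that might be wired up (based on the options shown in its README, and assuming an Express app object), any route can then just call next(err) and get a consistent JSON error response:

const errorHandler = require('strong-error-handler');

// Register after all routes and other middleware.
app.use(errorHandler({
  debug: app.get('env') === 'development', // include stack traces only in development
  log: true,                               // log errors to the console
}));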
If you want to develop a secure web application with clean error handling, I suggest the following structure.
Step 1. At the front end, divide your API calls into four main operations, e.g. insert, update, query and filter.
Now, whenever your page loads and you want to show data fetched from the server, your API call should look like 'https://domainname.tld/server/query', and you send a payload with that call describing the data you need.
At the backend, probably in Server.js, handle it like this:
app.all("/server/query", function (req, res) {
  try {
    console.log(a); // note: `a` is not defined here, so this line throws and exercises the catch block
    // some database or io blocking process
  } catch (error) {
    // error handling
    var err = writeCustomError(error.message || error.errmsg || error.stack);
    res.status(417).json(err).end();
  }
});
function writeCustomError(message) {
  var errorObject = {};
  errorObject.message = message;
  errorObject.code = 10001; // as you want
  errorObject.status = "failed";
  return errorObject;
}
In the try block you can also handle logical errors using the same function, i.e. writeCustomError.
If you use this approach you can also implement end-to-end encryption and send only an eP ('encrypted payload') and an eK ('encryption key'); by doing this, end users (even malicious ones) cannot easily work out your server API calls.
If you are wondering how you will route different paths at the server, the simplest solution is to send a URI in the payload from the client to the server, for example:
Say a user wants to reset their password: call the API at https://domain.tld/server/execute and send a JSON object in the payload like this: {uri: "reset-password", old: "", new: ""}.
At the backend, use:
app.all("/server/execute", function (req, res, next) {
  try {
    // decrypt the request payload (e.g. from req.body) into a `payload` object
    req.url = payload.uri;
    next();
  } catch (error) {
    // error handling
    var err = writeCustomError(error.message || error.errmsg || error.stack);
    res.status(417).json(err).end();
  }
});
app.all("/reset-password", function (req, res) {
  try {
    // reset logic
  } catch (error) {
    // error handling
    var err = writeCustomError(error.message || error.errmsg || error.stack);
    res.status(417).json(err).end();
  }
});
This way, only the developer knows where the password-reset logic lives, how it is called, and what parameters are required.
I also suggest creating separate router files for Express, such as QueryRouter, InsertRouter, etc.
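A minimal sketch of what such a router file might look like (the file names and paths are just examples):

// routers/QueryRouter.js
const express = require('express');
const router = express.Router();
// writeCustomError would need to be exported from a shared module and required here

router.all('/server/query', function (req, res) {
  try {
    // some database or io blocking process
  } catch (error) {
    res.status(417).json(writeCustomError(error.message || error.errmsg || error.stack)).end();
  }
});

module.exports = router;

// Server.js
app.use(require('./routers/QueryRouter'));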
Also try to implement end-to-end encryption. If you have any questions about this post, feel free to comment.
I am a little stumped on how to handle this the best way possible. I've decided to rewrite this controller, and I think I need to make use of Promise.all() here.
Premise:
In this application, the Admin user must be able to bulk upload a bunch of .pdf's at once that are for multiple users. The .pdf's adhere to a specific naming convention from which my backend upload controller, using a regex, pulls out a first and last name. These .pdf's are auto-generated by a program that always names them exactly the same way, so there is no human error in misspelling names.
Each call to the database and the AWS S3 bucket is made within an Array.prototype.map() callback that loops through the files, uploads each one to the S3 bucket, takes the Key name of the file returned from s3.upload(), and saves that Key to a user model in MongoDB as a reference to their file(s) within the S3 bucket.
Example Code:
This is what I currently have (and it does work, somewhat). This is the block of code responsible for what I described above. employeeFiles is created further up in the controller and contains an array of objects that each have a file and an id property. The file name destructuring and user matching happen further up in the controller as well, and the employeeFiles array is the result of that. The id property contains the Mongo _id of the employee, and the file property contains the file to be saved. This all works perfectly, and I don't think that code is needed for context here. filetype is a variable available within the scope of the controller:
const employeeFileUploadToDb = () => {
  employeeFiles.map((employee, i) => {
    const { file, id } = employee;
    const params = {
      Bucket: S3_BUCKET_NAME,
      Body: file.buffer,
      Key: `${filetype}/${file.originalname}`
    };
    s3.upload(params, (err, data) => {
      if (err) {
        next(err);
      }
      if (data) {
        //Save reference to Employee model
        let dataObj = {
          key: data.key,
          fileName: file.originalname,
          date: Date.now()
        };
        Employee.findOneAndUpdate(
          { _id: id },
          { $push: { [`${filetype}`]: dataObj } }
        )
          .then(resp => res.send(200))
          .catch(err => next(err));
      }
    });
  });
};
I am making use of next() to handle any errors within the s3.upload() and findOneAndUpdate() functions (I do realize findOneAndUpdate() is deprecated). My idea here is that if there is an error with one of the functions, next() will send it to my error-handler middleware and keep going, rather than halting the whole process.
Inside every iteration of s3.upload(), I make a call to my database so that I can save the reference to the file uploaded to the S3 bucket. Inside the then() method of Employee.findOneAndUpdate(), I return a 200 response to let my client know everything has been uploaded to S3 and saved in my DB. So on each iteration of this map() function, I am returning a 200. If I have 10 files, I am returning 200 ten times.
I feel that I can convert this into an async function and make use of Promise.all() to return a single status code upon completion. Returning that many status codes seems a bit crazy to me. But I am not too sure how to approach this while using a map() function to loop and make an async call on every iteration.
Hope this makes sense, and thank you in advance for looking at this!
I would split it up into a 2-step process. Upload in bulk and then save to mongo if it all worked out.
const employeeFileUploadToDb = () => {
  const uploadFiles = files => files.map(({ file, id }) => new Promise((resolve, reject) => {
    // ... build the S3 upload params for this file, as in the original code ...
    s3.upload(params, (err, data) => {
      if (err) {
        return reject(err);
      }
      resolve({ data, id, file });
    });
  }));

  Promise.all(uploadFiles(employeeFiles))
    .then(results => {
      // Handle saving to mongo
    })
    .catch(err => next(err));
};
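And a sketch of the second step (assuming each upload promise resolves with { data, id, file } as above, and that filetype, Employee, res and next are in scope as in the question): save all the references, then answer the client exactly once.

Promise.all(uploadFiles(employeeFiles))
  .then(results =>
    Promise.all(results.map(({ data, id, file }) =>
      Employee.findOneAndUpdate(
        { _id: id },
        { $push: { [filetype]: { key: data.key, fileName: file.originalname, date: Date.now() } } }
      )
    ))
  )
  .then(() => res.sendStatus(200)) // one response for the whole batch
  .catch(err => next(err));        // one error handler for both steps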
So, this is the first time I'm trying to take a large js app written in one file and modularize it into separate files. The goal is to create a more organized base of files rather than one big one.
There are a lot of API calls and a lot of shared information. I'm making use of module.exports, but I'm not sure that it's the best way to go about it. I'd like some advice on how to do it more correctly, or whether I should use some other method. I'm using module.exports to pass back specific data rather than functions.
For example, here's the authentication function which was in the larger file and now in authenticate.js (some irrelevant parts were taken out):
module.exports.authenticate = (logger) => {
  return new Promise((resolve, reject) => {
    const authentication = new logger("Authentication Service");
    fs.createReadStream('auth.json').pipe(request.post(('https://example.com/auth'), function (error, response, body) {
      authentication.log('Authenticating API access');
      body = JSON.parse(body);
      const token = body.response.token;
      if (typeof (token) === 'undefined' || token === '') {
        return reject('No Token Available');
      }
      authentication.log('Successfully logged in.');
      module.exports.token = token;
      resolve();
    }));
  })
}
So specifically, I'm using 'module.exports.token = token;' to pass back the token info that was just retrieved from the API call. I'm doing this in quite a few modules, though, for different pieces of information.
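For comparison, the only alternative I can think of is to resolve the promise with the token instead of attaching it to module.exports, roughly like this (sketch only, same request flow as above):

module.exports.authenticate = (logger) => {
  return new Promise((resolve, reject) => {
    // ...same fs.createReadStream / request.post flow as above...
    resolve(token); // hand the token back to the caller
  });
};

// caller
const { authenticate } = require('./authenticate');
authenticate(logger).then((token) => {
  // use the token directly instead of reading module.exports.token later
});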
Is this proper and good practice?
Thanks!
I have a Node.js server which queries a MySQL database. It serves both as an API endpoint that returns JSON and as the backend server for my Express application, where it returns the retrieved list as an object to the view.
I am looking into implementing flat-cache to reduce the response time. Below is the code snippet.
const flatCache = require('flat-cache');
var cache = flatCache.load('productsCache');
//get all products for the given customer id
router.get('/all/:customer_id', flatCacheMiddleware, function (req, res) {
  var customerId = req.params.customer_id;
  //implemented custom handler for querying
  queryHandler.queryRecordsWithParam('select * from products where idCustomers = ? order by CreatedDateTime DESC', customerId, function (err, rows) {
    if (err) {
      res.status(500).send(err.message);
      return;
    }
    res.status(200).send(rows);
  });
});
//caching middleware
function flatCacheMiddleware(req, res, next) {
  // parentheses so the fallback to req.url actually applies
  var key = '__express__' + (req.originalUrl || req.url);
  var cacheContent = cache.getKey(key);
  if (cacheContent) {
    res.send(cacheContent);
  } else {
    res.sendResponse = res.send;
    res.send = (body) => {
      cache.setKey(key, body);
      cache.save();
      res.sendResponse(body);
    };
    next();
  }
}
I ran the node.js server locally and the caching has indeed greatly reduced the response time.
However there are two issues I am facing that I need your help with.
Before adding the flatCacheMiddleware middleware, I received the response in JSON; now when I test, it sends me HTML. I am not too well versed in JS strict mode (planning to learn it soon), but I am sure the answer lies in the flatCacheMiddleware function.
So what do I modify in the flatCacheMiddleware function so it would send me JSON?
I manually added a new row to the products table for that customer, and when I called the endpoint, it still showed me the old rows. So at what point do I clear the cache?
In a web app it would ideally be when the user logs out, but if I am using this as an API endpoint (and even on a web app there is no guarantee that the user will log out the traditional way), how do I determine whether new records have been added and the cache needs to be cleared?
Appreciate the help. If there are any other node.js caching related suggestions you all can give, it would be truly helpful.
I found a solution to the issue by parsing the cached content back into JSON.
Change line:
res.send(cacheContent);
To:
res.send(JSON.parse(cacheContent));
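Another option is to make the middleware itself always cache and serve a JSON string, so the Content-Type stays application/json either way. A rough sketch of that (same key format as in the question; res.type('json') is standard Express):

function flatCacheMiddleware(req, res, next) {
  const key = '__express__' + (req.originalUrl || req.url);
  const cacheContent = cache.getKey(key);
  if (cacheContent) {
    return res.type('json').send(cacheContent); // serve the cached JSON string
  }
  res.sendResponse = res.send;
  res.send = (body) => {
    cache.setKey(key, JSON.stringify(body)); // always store a JSON string
    cache.save();
    res.sendResponse(body);
  };
  next();
}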
I created a 'brute force' cache invalidation method. Calling the clear method will clear both the cache file and the data stored in memory. You have to call it after a DB change. You can also try deleting a specific key using cache.removeKey('key');.
function clear(req, res, next) {
  try {
    cache.destroy();
    // respond once: success here, failure in the catch below
    res.status(200).json({ 'message': 'cache invalidated' });
  } catch (err) {
    logger.error(`cache invalidation error ${JSON.stringify(err)}`);
    res.status(500).json({
      'message': 'cache invalidation error',
      'error': JSON.stringify(err)
    });
  }
}
Note that calling cache.save() will remove entries cached by other API functions. Changing it to cache.save(true) will 'prevent the removal of non visited keys' (as mentioned in a comment in the flat-cache documentation).
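Applied to the middleware in the question, that would just be:

cache.setKey(key, body);
cache.save(true); // noPrune: keep keys that other routes/requests have cached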