Load local file using Netlify functions - javascript

I've written a script which takes a JSON file and outputs it to an API endpoint using Netlify's Functions feature (https://functions.netlify.com/). For the most part, this works without a hitch, however, one of my endpoints has a lot of text and for ease of editing, I've split the large text blocks into markdown files which I then loaded into the endpoint.
Locally, this works perfectly, but when deployed I get a console error saying Failed to load resource: the server responded with a status of 502 (). I presume this is because I used a Node fs method and Netlify doesn't allow that; however, I can't find any information about this.
The code I've used is here:
const marked = require('marked')
const clone = require('lodash').cloneDeep
const fs = require('fs')
const resolve = require('path').resolve
const data = require('../data/json/segments.json')
// Clone the object
const mutatedData = clone(data)
// Mutate the cloned object
mutatedData.map(item => {
  if (item.content) {
    const file = fs.readFileSync(resolve(`./src/data/markdown/${item.content}`), 'utf-8')
    item.content = marked(file)
  }
})

exports.handler = function(event, context, callback) {
  callback(null, {
    statusCode: 200,
    body: JSON.stringify({data: mutatedData})
  });
}
I've also attempted to replace
const file = fs.readFileSync(resolve(`./src/data/markdown/${item.content}`), 'utf-8')
with
const file = require(`../data/markdown/${item.content}`)
but that complains about a missing loader, and I'd like to avoid adding webpack configs if possible since I'm using create-react-app. Besides, I doubt it would help, as I'd still be accessing the file system after build time.
Has anyone else come across this issue before?

At the time this answer was written (September 2019), Netlify does not seem to upload auxiliary files to AWS Lambda; it appears that only the script that exports the handler is uploaded. Even if you have multiple scripts exporting multiple handlers, Netlify seems to upload them into isolated "containers" (different AWS instances), which means the scripts cannot see each other via relative paths. Disclaimer: I only tested with a free account and there could be settings I'm not aware of.
Workaround:
For auxiliary scripts, turn them into npm packages, add them to package.json, and require them in your main script. They will be installed and made available to use.
For static files, host them on Netlify just as you did before adding AWS Lambda, and make HTTP requests from your main script to fetch the files.
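For example, here is a minimal sketch of that second approach; the site URL, the markdown path, and the use of node-fetch are assumptions, not part of the original setup:
const fetch = require('node-fetch') // list this in package.json so it gets bundled with the function
const marked = require('marked')

exports.handler = async function (event, context) {
  // Fetch a markdown file that is deployed as a static asset on the site itself
  const response = await fetch('https://your-site.netlify.app/data/markdown/example.md')
  const markdown = await response.text()

  return {
    statusCode: 200,
    body: JSON.stringify({ content: marked(markdown) })
  }
}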

How do you ensure a service worker caches a consistent set of files?

I have a progressive web app (PWA) consisting of several files including index.html, manifest.json, bundle.js and serviceWorker.js. I update my app by uploading all these files to my host. In case it matters, I am using Firebase so I use firebase deploy to upload the files.
Usually everything works correctly: When an existing user opens the app they still see the old version but in the background the new service worker installs any changed files to the cache. Then when the user next opens the app it activates and they see the new version.
But I have a problem when a user opens the app a short time after I deploy it. What seems to happen is: the host delivers the new serviceWorker.js but the old bundle.js. And so the install puts the old bundle.js in its new cache. The user gets the old functionality or, worse, might get an app made up of an inconsistent mixture of new and old files.
I guess it could be argued that it is the host's fault for not updating atomically, but I have no control over Firebase. And it does not sound possible anyway because the browser is sending a series of independent fetches and there can be no guarantee that they will all return a consistent version.
In case it helps, here is my serviceWorker.js. The cacheName strings such as "app1-a0f43550e414" are generated by my build pipeline. Here a0f43550e414 is the hash of the latest bundle.js so that the cache is only updated if the content of bundle.js has changed.
"use strict";
const appName = "app1";
const cacheLookup = {
"app1-aefa820f62d2": "/",
"app1-a0f43550e414": "bundle.js",
"app1-23d94a4a7388": "manifest.json"
};
self.addEventListener("install", function (event) {
event.waitUntil(
Promise.all(Object.keys(cacheLookup).map(cacheName =>
caches.open(cacheName)
.then(cache => cache.add(cacheLookup[cacheName]))
))
);
});
self.addEventListener("activate", event => {
event.waitUntil(
caches.keys().then(cacheNames =>
Promise.all(cacheNames.map(cacheName => {
if (cacheLookup[cacheName]) {
// cacheName holds a file still needed by this version
} else if (!cacheName.startsWith(appName + "-")) {
// Do not delete the cache of other apps at same scope
} else {
console.log("Deleting out of date cache:", cacheName);
return caches.delete(cacheName);
}
}))
)
);
});
const handleCacheMiss = request =>
new Promise((_, reject) => {
reject(Error("Not in service worker cacheLookup: " + request.url));
});
self.addEventListener("fetch", event => {
const request = event.request;
event.respondWith(
caches.match(request).then(cachedResponse =>
cachedResponse || handleCacheMiss(request)
)
);
});
I have considered bundling all my HTML, CSS and JavaScript into a giant file so it cannot be inconsistent. But a PWA needs several supporting files that cannot be bundled, including the service worker, manifest and icons. If I bundle all that I can, the user can still get stuck with an old version of the bundle and still have inconsistent supporting files. And anyway, in the future I would like to increase granularity by doing less bundling and having more files, so on a typical update only a few small files would need to be fetched.
I have also considered uploading bundle.js and the other files with a different filename for each version. The service worker's fetch could hide the name change so other files like index.html can still refer to it as bundle.js. But I don't see how this works the first time a browser loads the app. And I don't think you can rename index.html or manifest.json.
It sounds like your request for bundle.js inside of your install handler might be fulfilled by the HTTP cache, instead of via the network.
You can try changing this current snippet:
cache.add(cacheLookup[cacheName])
to explicitly create a Request object that has its cache mode set to reload to ensure that the response isn't provided by the HTTP cache:
cache.add(new Request(cacheLookup[cacheName], {cache: 'reload'}))
Alternatively, if you're concerned about non-atomic global deployments and you can go through the effort to generate sha256 or better hashes as part of your build process, you can make use of subresource integrity to ensure that you're getting the correct response bytes from the network that your new service worker expects.
Adapting your code would look something like the following, where you'd actually have to generate the correct sha256 hashes for each file you care about at build time:
const cacheLookup = {
  "sha256-[hash]": "/",
  "sha256-[hash]": "bundle.js",
  "sha256-[hash]": "manifest.json"
};
// Later...
cache.add(new Request(cacheLookup[cacheName], {
  cache: 'reload',
  integrity: cacheName,
}));
If there's an SRI mismatch, then the request for a given resource will fail, and that will cause the cache.add() to reject, which will in turn cause the overall service worker installation to fail. Service worker installation will be retried the next time an update check happens, at which point (hopefully!) the deployment will be finished and the SRI will be valid.
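For reference, generating those integrity values at build time can be as simple as hashing each file with Node's crypto module; the sketch below is only illustrative, and the dist/ paths are placeholders for whatever your build pipeline produces:
// build-sri.js - a minimal sketch, not tied to any particular bundler
const crypto = require('crypto');
const fs = require('fs');

const sriFor = filePath =>
  'sha256-' + crypto.createHash('sha256').update(fs.readFileSync(filePath)).digest('base64');

const cacheLookup = {
  [sriFor('dist/index.html')]: '/',
  [sriFor('dist/bundle.js')]: 'bundle.js',
  [sriFor('dist/manifest.json')]: 'manifest.json'
};

// Write or inject this object into serviceWorker.js as part of the build.
console.log(JSON.stringify(cacheLookup, null, 2));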

How do I globally install a Node.js command-line script on the server?

As a prototype, I’ve written a simple Node.js command-line script which loads a .txt file and outputs the contents. I’ve read many articles which suggest that Node.js scripts are usually executed via the command-line and should be installed globally, so that’s the direction I’ve taken, even if there are other techniques.
For the sake of this query, my globally installed command is called my-prototype.
This all works fine on the desktop but I intend to host this prototype on a server, using Express.
How do I globally install my-prototype on the server and, if that’s possible, will I be able to execute it in the same way as in my local tests, as a child process?:
const { execSync } = require('child_process')
const output = execSync(`${process.execPath} my-prototype`)
console.log(output.toString())
I'm not entirely sure that globally installing command-line scripts is meant for use on a server, and wonder whether that's for desktop use only.
If you just need to output the contents, you don't need the command line at all; or rather, you only need it once, to start Node.js:
const fs = require('fs');
const express = require('express');
const app = express();

app.get('/', async (req, res) => {
  let fileContains = fs.readFileSync('./package.json', 'utf8');
  try {
    fileContains = JSON.parse(fileContains);
  } catch (e) {}
  res.json(fileContains);
});

app.listen(80);
Also, read a little more about the pm2 module; it's very useful.
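For reference, a typical pm2 workflow for the Express example above (assuming it is saved as server.js) looks roughly like this:
npm install -g pm2
pm2 start server.js --name my-prototype   # keep the app running in the background
pm2 logs my-prototype                     # tail its output
pm2 startup && pm2 save                   # restart it automatically after a reboot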

How to Use "file-type" NPM Module Client-Side?

I'm attempting to use the "file-type" NPM module (which I have working on the server) client side to validate mime type prior to a file upload to an S3 bucket.
The readme for the module includes an example of using it in the browser:
const FileType = require('file-type/browser');
const url = 'https://upload.wikimedia.org/wikipedia/en/a/a9/Example.jpg';
(async () => {
  const response = await fetch(url);
  const fileType = await FileType.fromStream(response.body);
  console.log(fileType);
  //=> {ext: 'jpg', mime: 'image/jpeg'}
})();
That obviously won't work directly in the browser due to the "require", and if I link to the NPM file directly, I get:
Uncaught ReferenceError: require is not defined
So I've tried using Webpack (with which I'm not well versed at all, but have followed the official Webpack tutorial, and a few other tutorials) to create a "main.js" file and then accessed that via a script tag.
For example, running this through Webpack:
import * as mimeFromBuffer from 'file-type/browser.js';
export function mimeTyping(mimeValidationBytes) {
  (async () => {
    const fileType = await mimeFromBuffer(mimeValidationBytes);
    console.log(fileType);
    //=> {ext: 'jpg', mime: 'image/jpeg'}
  })();
  return;
}
Results in this when I call the mimeTyping client side:
Uncaught ReferenceError: mimeTyping is not defined
I've tried Vinay's answer from this question in the browser:
import fileType from 'file-type';
const blob = file.slice(0, fileType.minimumBytes);
const reader = new FileReader();
reader.onloadend = function(e) {
  if (e.target.readyState !== FileReader.DONE) {
    return;
  }
  const bytes = new Uint8Array(e.target.result);
  const { ext, mime } = fileType.fromBuffer(bytes);
  // ext is the desired extension and mime is the mimetype
};
reader.readAsArrayBuffer(blob);
Which gets me this:
Uncaught SyntaxError: Cannot use import statement outside a module
I've researched that error a bit, with a common solution being to add "type":"module" to package.json. This did not help (error was unchanged).
I found a similar question in the module's Github repository, which suggests:
Although this module is primarily designed for Node.js, using it in the browser is possible as well. That is indeed what 'file-type/browser' is intended for. It provides the right dependencies and the right functions to the JavaScript module bundler. Some dependencies, which are typically present in a Node.js environment but missing in a browser environment, may need to be passed (polyfilled) to your module bundler. Although in some cases it may be a bit tricky to configure, note that this is a pretty common task handled by a module bundler.
I'm struggling to understand what next steps that suggests.
Another comment at the bottom of that thread indicates the author had success with:
import { fromBuffer } from 'file-type/core';
...
const buffer = await uploadedFile.arrayBuffer();
const types = await fromBuffer(buffer);
I'm not sure how to implement that, and what other code I need (I'm guessing this gets passed to Webpack, and probably requires an export statement, and then an import on the client side?)
I've tried passing this into Webpack:
const fileType = require('file-type/browser');
module.exports = fileType;
But linking to the output file again gets:
Uncaught SyntaxError: Cannot use import statement outside a module
I think I understand conceptually what I need to do: pass the NPM module to Webpack, which in turn parses it and finds any dependencies, gets those dependencies, and creates a JavaScript file I can use client side. It seems I'm doing something in there wrong.
I've spent days trying to understand how to use NPM modules client side (still quite foggy in my mind) and trying all sorts of variants on the above code - would really appreciate some guidance (first time posting a question here - please go easy on me!).
Thank you!
Edit: I don't think this is a duplicate - I did review How to import JS library from node_modules, Meteor Npm-module client-side?, how to use node.js module system on the clientside, How to call a server-side NodeJS function using client-side JavaScript and Node Js application in client machine but no suggestion I tried in any of those seemed to help.
Finally got this working. In case anyone else is stuck on this, here's an explanation (apologies for the lack of brevity - probably this should be a blog post...).
To flesh out the use case a bit further, I'm using Uppy to allow users to upload files to an AWS S3 bucket. The way this works is that, when the user uploads a file, Uppy makes a call to my server where an AWS pre-signed URL is generated and passed back to the client. The client then uses that pre-signed URL to upload the file directly to the S3 bucket, bypassing the server, such that the file doesn't pass through the server at any point.
The problem I was attempting to solve was that files missing an extension ended up uploaded with the content / MIME type set as "application/octet", because it seems the browser, Uppy, and S3 all rely on the file extension to decide the file type (rather than parsing the so-called "magic bytes" of the file), and if the file extension is missing, AWS defaults to "application/octet". This causes issues when users attempt to open the file, as they are not handled correctly (i.e. a png file without an extension and with an "application/octet" content / MIME type opens a download dialog rather than being previewed, etc.). I also want to validate the MIME type / file type in cases even where the extension exists so that I can exclude certain types of files, and so the files get handled appropriately when they are later downloaded (where the MIME type will again be validated) if an incorrect file extension is used.
I use the "file-type" NPM module to determine the mimetype server side, and that's straight forward enough, but changing the file's content type / MIME type when generating the AWS pre-signed URL is not enough to fix the problem - it still gets uploaded as "application/octet". I wanted to use the same module client side so we get the exact same results on the client as on the server, and needed in any case to determine the MIME type and set it accordingly pre-upload but post-pre-signed URL. I had no idea how to do this (i.e. use "file-type" client side - the meat of my question).
I finally gave up on Webpack - nothing I tried worked. So I switched to Browserify, and the sample browser code at the "file-type" repository worked at once! So then I was left trying to figure out how to pass a function through Browserify to use in the client side code.
This proved impossible for me - I couldn't figure out how to pass the asynchronous IIFE through to my code. So instead, I moved my Uppy code into the code I pass to Browserify:
// Get the dependency, which we've added to node via "npm install file-type":
const FileType = require('file-type/browser');

// When the user adds a file for upload to Uppy...
uppy.on('file-added', (file) => {
  // 1. Create a filereader:
  const filereader = new FileReader();

  filereader.onloadend = function(evt) {
    // 4. Once the filereader has successfully finished reading the file...
    if (evt.target.readyState === FileReader.DONE) {
      // Get the unsigned 8 bit int8Array (ie Uint8Array) of the 600 bytes (this looks like '[119,80,78,71...]'):
      const uint = new Uint8Array(evt.target.result);
      // Determine the mime type using the "file-type" NPM package:
      (async () => {
        // Pass in our 600 bytes ("uint"):
        const fileType = await FileType.fromBuffer(uint);
        console.log(fileType); // outputs => {ext: 'jpg', mime: 'image/jpeg'}
        // Do some validation here...
        //
        // Assign the results to the file for upload - we're done!:
        file.extension = fileType.ext;
        file.meta.type = fileType.mime;
        file.type = fileType.mime;
      })();
    }
  };

  // 2. Grab the first 600 bytes of the file for mime type analysis server side - most mime
  // types use the first few bytes, but some (looking at you, Microsoft...) start at the
  // 513th byte; ISO CD files start at the 32,770th byte, so we'll ignore those rather than
  // send that much data for each file - users of this system certainly aren't expected to
  // upload ISO CD files! Also, some .zip files may start their identifying bytes at the
  // 29,153nd byte - we ignore those too (mostly, .zip files start at the first, 31st, or 527th byte).
  const blob = file.data.slice(0, 600);

  // 3. Start reading those 600 bytes...continues above at 'filereader.onloadend':
  filereader.readAsArrayBuffer(blob);
})
That all goes into a file I call "index.js" and then, having installed Browserify at the command line via "npm install -g browserify", I use this at the command line to create the file ("main.js") I link to in my client side code:
browserify index.js -o main.js
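For completeness, one possible alternative to inlining the Uppy code would be to expose a helper on window from inside the Browserify bundle. The sketch below is an untested variation on that idea (getMimeType is a made-up name), not what I ended up using:
// index.js - hypothetical variant: expose a helper instead of inlining the Uppy code
const FileType = require('file-type/browser');

// Other scripts on the page can call window.getMimeType(blob) once main.js has loaded.
window.getMimeType = async function (blob) {
  // Read the first 600 bytes, as in the snippet above (Blob.arrayBuffer needs a modern browser).
  const buffer = await blob.slice(0, 600).arrayBuffer();
  return FileType.fromBuffer(new Uint8Array(buffer)); // e.g. {ext: 'jpg', mime: 'image/jpeg'}
};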

Front end Sensitive info

I am building my first React app and am not sure about front-end security. I am making a call to the following third-party library: emailjs.sendForm(serviceID, templateID, templateParams, userID);
The userId field is sensitive information. I make the following call in my onSubmit handler. I am wondering if I need to secure this information somehow? Also, is there a way for me to check whether a user can see this information by inspecting the page and finding the code in the method?
emailjs
  .sendForm(
    "gmail",
    "client-email",
    "#form",
    "**here_is_the_sensitive_info**"
  )
  .then(() => {
    resetForm({});
  })
  .catch(() => {
    const acknowledgement = document.createElement("H6");
    acknowledgement.innerHTML = "Something went wrong, please try.";
    document.getElementById("form").appendChild(acknowledgement);
  });
In this case, EmailJS is meant to be used in the browser, so I don't think that the userId is sensitive at all.
In their own documentation, you can see the following instruction to get started.
<script type="text/javascript"
        src="https://cdn.jsdelivr.net/npm/emailjs-com@2.4.1/dist/email.min.js">
</script>
<script type="text/javascript">
  (function(){
    emailjs.init("YOUR_USER_ID");
  })();
</script>
That said, anyone can definitely see this in the source of the page in their browser. You are right to be cautious with anything sensitive in client-side JavaScript.
To avoid anyone using your userId on their own website (which is very unlikely since it only triggers emails that you configured), you can whitelist your own domain with their paid plan apparently.
The .env file, when used in a frontend project, only serves to set environment variables that are used at compilation time. The file never gets to the browser, but the values are often just interpolated (e.g. with the DefinePlugin) in the final bundle source, so there's nothing necessarily more secure here.
WARNING: Do not store any secrets (such as private API keys) in your React app!
Environment variables are embedded into the build, meaning anyone can view them by inspecting your app's files.
# (s) for sensitive info
.env  ->  compilation  ->  bundle  ->  browser  ->  third-party
 (s)          (s)           (s)         (s)           (s)
That said, when used in a Node.js server, the .env file serves to set, again, environment variables, but this time, at the start of the application. These values are not shared with the frontend though, so one way to use this as a secure solution is to expose your own endpoint, whitelisting only your own domain, which then uses the sensitive information only on the server.
.env  ->  Node.js server  ->  third-party
 (s)          (s)                (s)
               ^
               | (api call)
...bundle  ->  browser
But then again, here, EmailJS' userId is not sensitive information.
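If you did have a truly sensitive key, a minimal sketch of the server-side approach described above could look like the following; the /api/send-email endpoint and the SENDER_API_KEY name are made up for illustration, and EmailJS itself doesn't require any of this:
// server.js - the secret stays on the server; the browser only calls this endpoint
const express = require('express');
const app = express();

app.use(express.json());

app.post('/api/send-email', async (req, res) => {
  const apiKey = process.env.SENDER_API_KEY; // set on the server, never shipped to the browser
  // ...call the third-party email API here with apiKey and req.body...
  res.json({ ok: true });
});

app.listen(3000);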
You should never have sensitive info in the frontend. You should have, for instance, a Node.js instance running that exposes an endpoint to the frontend, and call that endpoint. Then, inside your Node.js application, you should have a .env file with your credentials.
Then just use the .env info from your Node.js server.
If you keep sensitive info in the frontend, you are exposing everything.
1. First, install dotenv in your project:
npm install dotenv
2. Check your package.json to confirm it was installed; you should see an entry like "dotenv": "^10.0.0".
3. Configure it at the top of your file with require('dotenv').config(); after that, read the values anywhere you need them through process.env.
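A tiny illustration of those steps, with a made-up variable name:
# .env (keep this file out of version control)
API_SECRET=some-secret-value

// index.js
require('dotenv').config(); // reads .env into process.env at startup
console.log(process.env.API_SECRET ? 'secret loaded' : 'API_SECRET is not set');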
For more information on handling sensitive info, please see
https://www.youtube.com/watch?v=17UVejOw3zA
Thank you.

How to download files and store them to s3 by nodejs (through heroku)?

I want to know how to download files and store them in AWS S3 from Heroku, without going through a web page.
I'm using wget in Node.js to download audio files, and I want to re-encode them with ffmpeg. I have to store each file first, but Heroku doesn't provide space to store it.
I then found that Heroku can connect with AWS S3, but the sample code is a page for users to upload files through, and I just want to do it in code.
This is the code, which isn't connected with S3 yet:
const wget = spawn('wget', ['-O', 'upload_audio/' + sender_psid + '.aac', audioUrl]);

// spit stdout to screen
wget.stdout.on('data', function (data) { process.stdout.write(data.toString()); });

// spit stderr to screen
wget.stderr.on('data', function (data) { process.stdout.write(data.toString()); });

wget.on('close', function (code) {
  console.log(process.env.PATH);
  const ffmpeg = spawn('ffmpeg', ['-i', 'upload_audio/' + sender_psid + '.aac', '-ar', '16000', 'upload_audio/' + sender_psid + '.wav', '-y']);
  ...
});
And I get this error:
upload_audio/1847432728691358.aac: No such file or directory
...
Error: ENOENT: no such file or directory, open 'upload_audio/1847432728691358.wav'
How can I solve this problem?
Could anyone give some suggestions, please?
Or is there a way to do this without using S3?
I don't recommend using S3 until you've tried Express's static file serving. On the server, that looks like this:
app.use(express.static(path.join(__dirname, 'my_audio_files_folder')));
More on serving static files in Express is available here.
AWS S3 seems to be adding unneeded complexity, but if you have your heart set on it, have a look at this thread.
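If you do go the S3 route, a rough sketch with the aws-sdk package might look like the following; the bucket name and the /tmp path are assumptions (Heroku's ephemeral filesystem does allow writing temporary files, they just don't persist across dyno restarts):
const fs = require('fs');
const AWS = require('aws-sdk'); // npm install aws-sdk

// Credentials are read from the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables.
const s3 = new AWS.S3();

function uploadToS3(localPath, key) {
  return s3.upload({
    Bucket: 'my-audio-bucket',   // assumption: replace with your bucket
    Key: key,                    // e.g. sender_psid + '.wav'
    Body: fs.createReadStream(localPath)
  }).promise();
}

// Example: after ffmpeg finishes writing /tmp/1847432728691358.wav
// uploadToS3('/tmp/1847432728691358.wav', '1847432728691358.wav')
//   .then(result => console.log('Uploaded to', result.Location))
//   .catch(console.error);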
