env file not loaded by Next.js

I'm trying to load an env variable with Next.js version 9.5.
I've created a file named .env.production at the root of my project, following the official Next.js documentation here: https://nextjs.org/docs/basic-features/environment-variables
Here's my env file content:
# APP VARIABLES
# ...
# Webservices
NEXT_PUBLIC_WEBSERVICE_HOST=$WEBSERVICE_HOST
# Keycloak
NEXT_PUBLIC_KEYCLOAK_HOST=$KEYCLOAK_HOST
NEXT_PUBLIC_KEYCLOAK_REALM=$KEYCLOAK_REALM
NEXT_PUBLIC_KEYCLOAK_CLIENT_ID=$KEYCLOAK_CLIENT_ID
I need these variables to be available in the browser as well, so I've prefixed their names with NEXT_PUBLIC_.
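Note: Next.js resolves and inlines NEXT_PUBLIC_ variables into the client bundle at build time, replacing each process.env.NEXT_PUBLIC_* reference with a literal string. Roughly, with a made-up value for illustration:
// What the source code says:
const keycloakHost = process.env.NEXT_PUBLIC_KEYCLOAK_HOST;
// What the browser bundle effectively contains after `next build`,
// assuming the variable resolved to this (hypothetical) value at build time:
const keycloakHost = "https://keycloak.example.com";
So any value that is only provided at runtime (for example, a $KEYCLOAK_HOST that is first set when the container starts) will not reach client-side code.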
Once I launch the app, I can see that the env file is loaded correctly:
> project# start /app
> NODE_ENV=production node dist/server/server.js
info - Loaded env from /app/.env.production
> Config - API host: https://<redacted>
> Config - Keycloak host: https://<redacted>
> Ready on localhost:3000 - env production
But in the browser, when I try to call the API using this piece of code,
const keycloakUrl = process.env.NEXT_PUBLIC_KEYCLOAK_HOST as string
const keycloakRealm = process.env.NEXT_PUBLIC_KEYCLOAK_REALM as string
export const clientId = process.env.NEXT_PUBLIC_KEYCLOAK_CLIENT_ID as string

const instance = axios.create({
  baseURL: `${keycloakUrl}/auth/realms/${keycloakRealm}`,
  timeout: 30000,
})
// ...
instance.post('/').catch(console.error);
I can see that my variables keycloakUrl and keycloakRealm are empty when I call my API.
How can I solve this problem?

ImportError: cannot import name 'fetch' from 'js'

I am working on a Coursera IBM project with Jupyter on Ubuntu 22.04 and am having problems with the very first cell of one section:
# Download and read the `spacex_launch_geo.csv`
from js import fetch
import io
import pandas as pd

URL = 'https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/spacex_launch_geo.csv'
resp = await fetch(URL)
spacex_csv_file = io.BytesIO((await resp.arrayBuffer()).to_py())
spacex_df = pd.read_csv(spacex_csv_file)
spacex_df.head(15)
and get the error:
ImportError: cannot import name 'fetch' from 'js' (/home/tim/.local/lib/python3.10/site-packages/js/__init__.py)
I have made sure to pip install js on my machine. Any ideas would be helpful. The only thing I notice on the command line is that it says:
Requirement already satisfied: js in ./.local/lib/python3.10/site-packages (1.0)
Is this a problem with a global vs. per-user installation of js? If so, what can I do to alleviate the issue? Thank you.
There's no need to use the js module (it is the browser bridge provided by Pyodide and only exists when Python runs in the browser, e.g. in JupyterLite; the js package on PyPI is unrelated). Just comment out the respective lines and read the file directly with pandas, which accepts a URL:
# Download and read the `spacex_launch_geo.csv`
#from js import fetch
#import io
import pandas as pd

URL = 'https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/spacex_launch_geo.csv'
#resp = await fetch(URL)
#spacex_csv_file = io.BytesIO((await resp.arrayBuffer()).to_py())
spacex_df = pd.read_csv(URL)
spacex_df.head(15)

Browser-sync - proxy a domain gets HTTP error 403 - you don't have authorization to view this page

I run a gulp task using the Node.js module browser-sync, as below.
=== File gulpfile.js ===
let gulp = require('gulp');
let browserSync = require('browser-sync').create();

gulp.task('browser-sync', function () {
  browserSync.init({
    open: true,
    injectChanges: true,
    proxy: 'https://generalgulp.devsunset',
    host: '192.168.1.76',
    serveStatic: ['.'],
    https: {
      key: 'C:\\WebProjects\\GeneralGulp\\resources\\certificates\\server-generalgulp.key',
      cert: 'C:\\WebProjects\\GeneralGulp\\resources\\certificates\\server-generalgulp.crt'
    }
  });
});
=== ===
My local project information is as below (all latest as of this post):
Node version: 17.1.0
NPM version: 8.1.3
gulp: 4.0.2
NPM module browser-sync: 2.27.7
I run the browser-sync task. The output looks good.
==>
Using gulpfile C:\WebProjects\GeneralGulp\gulpfile.js
[Browsersync] Starting 'browser-sync'...
[Browsersync] Proxying: https://generalgulp.devsunset
Access URLs:
Local: https://localhost:3000
External: https://192.168.1.76:3000
UI: http://localhost:3001
UI External: http://localhost:3001
==>
I have already added the SSL certificate for this domain to the trusted root store. I also have DNS records pointing this domain (https://generalgulp.devsunset) to the IP addresses 127.0.0.1 and 192.168.1.76.
I can access the site from both the local and external addresses.
However, when I try to access the local resources using the proxied domain (https://generalgulp.devsunset), I get an HTTP 403:
Access to <my_custom_domain> was denied. You are not authorized to view this page.
I suppose that when running my gulp "browser-sync" task, it will translate the custom domain to https://localhost:3000 or https://192.168.1.76:3000.
I have followed the documentation at https://browsersync.io/docs exactly, and I have made an attempt with all the solutions I could find; those solutions led me to the gulp task above.
I would appreciate any suggestions on what to troubleshoot further: why can't my browser-sync "proxy" my domain? Is there a parameter missing in my gulp task?
Thanks!
I have modified the "proxy" parameter as below, and it works when I access the proxied domain with the given port (in my case http(s)://generalgulp.devsunset:3000):
gulp.task('browser-sync', function () {
  browserSync.init({
    open: true,
    injectChanges: true,
    proxy: 'generalgulp.devsunset',
    host: '192.168.1.76',
    serveStatic: ['.'],
    https: {
      key: 'C:\\WebProjects\\GeneralGulp\\resources\\certificates\\server-generalgulp.key',
      cert: 'C:\\WebProjects\\GeneralGulp\\resources\\certificates\\server-generalgulp.crt'
    }
  });
});
This is an acceptable temporary solution within the scope of the current question.
However, what I expect is for browser-sync to auto-forward traffic from the custom domain (http(s)://generalgulp.devsunset) to http://192.168.1.76:3000.
Does browser-sync allow users to do that?
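One option to try (a sketch, untested; it assumes the DNS entries above already point the domain at this machine, and binding a port below 1024 may require elevated privileges depending on the OS) is to make Browsersync itself listen on the standard HTTPS port via its port option, so the proxied domain works without a port suffix:
gulp.task('browser-sync', function () {
  browserSync.init({
    open: true,
    injectChanges: true,
    proxy: 'generalgulp.devsunset',
    host: '192.168.1.76',
    port: 443, // standard HTTPS port, so https://generalgulp.devsunset needs no :3000
    serveStatic: ['.'],
    https: {
      key: 'C:\\WebProjects\\GeneralGulp\\resources\\certificates\\server-generalgulp.key',
      cert: 'C:\\WebProjects\\GeneralGulp\\resources\\certificates\\server-generalgulp.crt'
    }
  });
});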

How to load data from init-db.js in docker-compose

I am using Docker to containerize a Flask API. The Flask API accesses MongoDB using the "links" keyword in the docker-compose.yml file shown below:
app:
  build: .
  command: python -u app.py
  ports:
    - "5000:5000"
  volumes:
    - .:/app
  # Linking db.
  links:
    - db
db:
  image: mongo:latest
  hostname: test_mongodb
  environment:
    - MONGO_INITDB_DATABASE=Exploit_Resources
    - MONGO_INITDB_ROOT_USERNAME=root
    - MONGO_INITDB_ROOT_PASSWORD=pass
  # Data provider.
  volumes:
    - ./init-db.js:/docker-entrypoint-initdb.d/init-db.js:ro
  ports:
    - 27017:27017
The link is working fine, and the file given under volumes (init-db.js) is supposed to provide the Mongo container with data. Here is my init-db.js file:
db = db.getSiblingDB("Exploit_Resources");
db.EDB.drop();
db.EDB.insertMany([
  // Insert data here.
]);
The data I want to provide to MongoDB comes from a remote CSV file. I tried using PapaParse to access and feed the data, but I was having some difficulties importing it.
I tried this code inside the insertMany method:
const { StringStream } = require("scramjet");
const request = require("request");

request.get("https://srv.example.com/main.csv")   // fetch csv
  .pipe(new StringStream())                       // pass to stream
  .CSVParse()                                     // parse into objects
  .consume(object => console.log("Row:", object)) // do whatever you like with the objects
  .then(() => console.log("all done"))
I got the code from here. It did not work because a ReferenceError was thrown, saying that 'require' is not defined.
Is there a way to fix this code, or is there another way to get data from a remote CSV file and provide it to the MongoDB container from init-db.js?
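For what it's worth, the files in docker-entrypoint-initdb.d are executed by the mongo shell, which is not a Node.js runtime, so require (and npm packages such as scramjet) are not available there. A minimal sketch of one alternative is to seed the database from a separate Node.js script run against the db service (the file name seed.js, the naive comma-splitting CSV parsing, and the placeholder URL are assumptions, not from the question; the credentials come from the compose file above):
// seed.js - run with `node seed.js` from a container or host on the compose network
const { MongoClient } = require('mongodb'); // npm install mongodb
const https = require('https');

// Download the remote CSV as a string.
function fetchCsv(url) {
  return new Promise((resolve, reject) => {
    https.get(url, (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => resolve(body));
    }).on('error', reject);
  });
}

(async () => {
  const csv = await fetchCsv('https://srv.example.com/main.csv'); // placeholder URL
  // Naive CSV parsing: header row becomes the keys; no quoted-field handling.
  const [header, ...rows] = csv.trim().split('\n');
  const keys = header.split(',');
  const docs = rows.map((row) => {
    const values = row.split(',');
    return Object.fromEntries(keys.map((key, i) => [key, values[i]]));
  });
  const client = await MongoClient.connect('mongodb://root:pass@db:27017');
  await client.db('Exploit_Resources').collection('EDB').insertMany(docs);
  await client.close();
  console.log(`Inserted ${docs.length} documents`);
})();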

WebAuthn Relying Party ID for various Setups

I have an Angular 11 project which implements WebAuthn registration. The backend is Spring Boot 2.4.
WebAuthn login should work in two parts of the project, the "main" and the "viewer".
The domain setup is rather complicated:
Main Project
Urls
Local: https://localhost:4202
Staging: https://company.com (local Kubernetes Server)
Prod: https://company-project.com
Viewer Project
Urls
Local: https://localhost:4200
Staging: https://viewer.develop.plattform.intra.company.com (local Kubernetes Server)
Prod: https://viewer.company-project.com
Code
environment.ts
prodUrls: ['company-project.com'],
webauthn: {
  name: "Company DEV",
  rpId: "localhost"
}
environment.prod.ts (replaced at build)
prodUrls: ['company-project.com'],
webauthn: {
  name: "Company Prod",
  rpId: "plattform.intra.company.com" // gets overridden by values in "prodUrls"
}
webauthn.service.ts
private _getRelyingPartyInfo(): RelyingParty {
  let rpId = environment.webauthn.rpId;
  /**
   * Check if the hostname matches one of our prod hostnames
   * and use that instead.
   */
  environment.prodUrls.forEach((url, index) => {
    if (location.hostname.indexOf(url) > -1) {
      rpId = environment.prodUrls[index];
    }
  });
  const rp = {
    id: rpId,
    name: environment.webauthn.name
  };
  return rp;
}
The Issues
It works locally, using the rpId localhost (both Backend and Frontend locally)
It does NOT work on staging --> Backend throws
WebAuthnException message: rpIdHash doesn't match the hash of preconfigured rpId.
It should work on Prod using company-project.com as rpId (scared to deploy as it does not work on staging)
What I tried
For staging, I changed the rpId to develop.plattform.intra.company.com and I can register and log in on "main". Logging in on "viewer" throws an error as well.
The spec is not very specific about what should work: https://www.w3.org/TR/webauthn/#relying-party-identifier; it only says what shouldn't work. I assume that the multiple subdomains complicate things on staging?
What would be the correct rpId for staging, and is the assumption correct that company-project.com as rpId should work on prod?
What's your code to get the assertion? You might also be running into this other question. You need to set the get assertion RP ID to the same RP ID used for registration. If you don't, it will default to the origin, which for your subdomain will be different.
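As a minimal sketch of that point (the challenge variable and the surrounding option values are assumptions, not from the question):
// During login, pin the assertion request to the same RP ID used at registration.
const assertion = await navigator.credentials.get({
  publicKey: {
    challenge: challengeFromServer, // Uint8Array issued by the backend (hypothetical name)
    rpId: environment.webauthn.rpId, // must match the RP ID the credential was registered under
    userVerification: 'preferred',
  },
});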

aws-serverless-express with serverless-offline stage breaks routing

I'm using the serverless package with:
- aws-serverless-express
- serverless-offline
When I run sls offline, everything runs properly, but the paths I get are:
ANY | http://localhost:3001/dev/
ANY | http://localhost:3001/dev/{proxy+}
My serverless.yaml is:
functions:
  app:
    handler: src/lambda.handler
    events:
      - http:
          path: /
          method: ANY
          cors: true
      - http:
          path: /{proxy+}
          method: ANY
          cors: true
I know there is a "stage" setting I can change, but then my express routes have to look for it:
// this won't work
app.get('/r', (req, res) => {
  res.send('ready');
})
// this will work
app.get('/dev/r', (req, res) => {
  res.send('ready');
})
But in production, or if I use any other "stage", my routes won't work unless I prefix them with the stage.
Any ideas?
Thanks
Based on the docs:
sls offline --noPrependStageInUrl
should work for you
or via config:
custom:
  serverless-offline:
    noPrependStageInUrl: true
Available CLI options:
--apiKey Defines the API key value to be used for endpoints marked as private Defaults to a random hash.
--corsAllowHeaders Used as default Access-Control-Allow-Headers header value for responses. Delimit multiple values with commas. Default: 'accept,content-type,x-api-key'
--corsAllowOrigin Used as default Access-Control-Allow-Origin header value for responses. Delimit multiple values with commas. Default: '*'
--corsDisallowCredentials When provided, the default Access-Control-Allow-Credentials header value will be passed as 'false'. Default: true
--corsExposedHeaders Used as additional Access-Control-Exposed-Headers header value for responses. Delimit multiple values with commas. Default: 'WWW-Authenticate,Server-Authorization'
--disableCookieValidation Used to disable cookie-validation on hapi.js-server
--enforceSecureCookies Enforce secure cookies
--hideStackTraces Hide the stack trace on lambda failure. Default: false
--host -o Host name to listen on. Default: localhost
--httpPort Http port to listen on. Default: 3000
--httpsProtocol -H To enable HTTPS, specify directory (relative to your cwd, typically your project dir) for both cert.pem and key.pem files
--ignoreJWTSignature When using HttpApi with a JWT authorizer, don't check the signature of the JWT token. This should only be used for local development.
--lambdaPort Lambda http port to listen on. Default: 3002
--noPrependStageInUrl Don't prepend http routes with the stage.
--noAuth Turns off all authorizers
--noTimeout -t Disables the timeout feature.
--prefix -p Adds a prefix to every path, to send your requests to http://localhost:3000/[prefix]/[your_path] instead. Default: ''
--printOutput Turns on logging of your lambda outputs in the terminal.
--resourceRoutes Turns on loading of your HTTP proxy settings from serverless.yml
--useChildProcesses Run handlers in a child process
--useWorkerThreads Uses worker threads for handlers. Requires node.js v11.7.0 or higher
--websocketPort WebSocket port to listen on. Default: 3001
--webSocketHardTimeout Set WebSocket hard timeout in seconds to reproduce AWS limits (https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html#apigateway-execution-service-websocket-limits-table). Default: 7200 (2 hours)
--webSocketIdleTimeout Set WebSocket idle timeout in seconds to reproduce AWS limits (https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html#apigateway-execution-service-websocket-limits-table). Default: 600 (10 minutes)
--useDocker Run handlers in a docker container.
--layersDir The directory layers should be stored in. Default: ${codeDir}/.serverless-offline/layers'
--dockerReadOnly Marks if the docker code layer should be read only. Default: true
--allowCache Allows the code of lambda functions to cache if supported.
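Separately from serverless-offline: if a deployed stage still ends up as a prefix in the paths your express app sees, one pattern (a sketch under that assumption, not something from the docs above) is to register the same router both at the root and under the stage prefix:
const express = require('express');
const app = express();
const router = express.Router();

router.get('/r', (req, res) => res.send('ready'));

// Mount once at the root and once under the stage prefix, so the same routes
// resolve whether or not the stage is prepended to the incoming path.
const stage = process.env.STAGE || 'dev'; // hypothetical env var carrying the stage name
app.use('/', router);
app.use(`/${stage}`, router);

module.exports = app;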
