I am trying to create a Deployment or ReplicaSet with the Kubernetes JavaScript client. The Kubernetes JavaScript client documentation is virtually non-existent.
Is there any way to achieve this?
Assuming that by:
createDeployment()
you are referring to createNamespacedDeployment(),
you can use the code snippet below to create a Deployment with the JavaScript client library:
const k8s = require('@kubernetes/client-node');
const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const k8sApi = kc.makeApiClient(k8s.AppsV1Api); // <-- notice the AppsV1Api
// Definition of the deployment
var amazingDeployment = {
    metadata: {
        name: 'nginx-deployment'
    },
    spec: {
        selector: {
            matchLabels: {
                app: 'nginx'
            }
        },
        replicas: 3,
        template: {
            metadata: {
                labels: {
                    app: 'nginx'
                }
            },
            spec: {
                containers: [
                    {
                        name: 'nginx',
                        image: 'nginx'
                    }
                ]
            }
        }
    }
};
// Sending the request to the API
k8sApi.createNamespacedDeployment('default', amazingDeployment).then(
    (response) => {
        console.log('Yay! \nYou spawned: ' + amazingDeployment.metadata.name);
    },
    (err) => {
        console.log('Oh no. Something went wrong :(');
        // console.log(err) <-- Get the full output!
    }
);
Disclaimer!
This code assumes that you have your ~/.kube/config already configured!
Running this code for the first time with:
$ node deploy.js
should output:
Yay!
You spawned: nginx-deployment
You can check if the Deployment exists by running:
$ kubectl get deployment nginx-deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 6m57s
Running this code once again will output (deployment already exists!):
Oh no. Something went wrong :(
Additional resources:
Github.com: Kubernetes-client: Javascript
Be careful when you try to deploy different kinds of resources, such as a Deployment or a Service.
You need to create the client for the correct API group:
const k8sApi = kc.makeApiClient(k8s.AppsV1Api) for Deployments, or kc.makeApiClient(k8s.CoreV1Api) for core resources such as Namespaces, Services and so on.
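A minimal sketch of picking the client per resource kind (the resources listed in the comments are common examples, not an exhaustive mapping):
const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
kc.loadFromDefault();

// Deployments, ReplicaSets, StatefulSets, DaemonSets live in apps/v1:
const appsApi = kc.makeApiClient(k8s.AppsV1Api);

// Namespaces, Services, ConfigMaps, Pods live in the core v1 group:
const coreApi = kc.makeApiClient(k8s.CoreV1Api);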
First, you create a kube config object and then create the associated API client, i.e.:
import k8s from '@kubernetes/client-node';
const kubeConfig = new k8s.KubeConfig();
kubeConfig.loadFromCluster(); // Or whatever method you choose
const api = kubeConfig.makeApiClient(k8s.CoreV1Api); // Or whatever API you'd like to use.
const namespace = 'default';
const manifest = new k8s.V1ConfigMap();
// ... additional manifest setup code...
await api.createNamespacedConfigMap(namespace, manifest);
This is the gist of it. If you'd like, I recently created a library with the intention of simplifying interactions with the Kubernetes JavaScript API, and it can be found here:
https://github.com/ThinkDeepTech/k8s
If it doesn't help you directly, perhaps it can serve as an example of how to interact with the API. I hope that helps!
Also, make sure the application executing this code has the permissions (i.e., the K8s Role, RoleBinding and ServiceAccount configs) necessary to perform the actions you're attempting. Otherwise, it'll error out.
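If the code runs inside the cluster, a minimal sketch of creating that ServiceAccount, Role and RoleBinding with the same client library (the names, namespace and verbs here are assumptions to adapt to what your app actually does):
const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
kc.loadFromDefault();

const coreApi = kc.makeApiClient(k8s.CoreV1Api);
const rbacApi = kc.makeApiClient(k8s.RbacAuthorizationV1Api);

async function setupRbac() {
    // ServiceAccount the app will run as (hypothetical name)
    await coreApi.createNamespacedServiceAccount('default', {
        metadata: { name: 'deployer' }
    });

    // Role allowing it to manage Deployments in the namespace
    await rbacApi.createNamespacedRole('default', {
        metadata: { name: 'deployment-manager' },
        rules: [
            {
                apiGroups: ['apps'],
                resources: ['deployments'],
                verbs: ['get', 'list', 'create', 'update', 'delete']
            }
        ]
    });

    // Bind the Role to the ServiceAccount
    await rbacApi.createNamespacedRoleBinding('default', {
        metadata: { name: 'deployment-manager-binding' },
        roleRef: {
            apiGroup: 'rbac.authorization.k8s.io',
            kind: 'Role',
            name: 'deployment-manager'
        },
        subjects: [
            { kind: 'ServiceAccount', name: 'deployer', namespace: 'default' }
        ]
    });
}

setupRbac().catch((err) => console.error(err));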
I have an Electron Forge config file set up with many options and it all works automagically and beautifully (thanks Forge team!!). But I have found certain situations where I might want to handle a bare npm run package differently than a full npm run make (which, as I understand it, runs the package script as part of its process). Is there any way to programmatically detect whether the package action was run directly from the command line rather than as part of the make process, so that I can invoke different Forge configuration options depending on which it is? For example, sometimes I just want to do a quick build for local testing and skip certain unnecessary, time-consuming steps such as notarizing on macOS and some prePackage/postPackage hook functions. Ideally I'm looking for a way to do something like this in my Forge config file:
//const isMake = ???
module.exports = {
    packagerConfig: {
        osxNotarize: isMake ? {appleId: "...", appleIdPassword: "..."} : undefined
    },
    hooks: {
        prePackage: isMake ? someFunction : differentFunction
    }
}
You can do it by process.argv[1]:
let isMake;
if (process.argv[1].endsWith('electron-forge-make.js')) {
    isMake = true;
} else {
    isMake = false;
}

module.exports = {
    // ...
}
When you inspect process.argv, the first element is the path to the Node.js executable and the second is the path to the Electron Forge script being run.
The make script ends with electron-forge-make.js and the package script ends with electron-forge-package.js, so you can look at how that path ends and determine whether it's package or make.
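Putting that together with the config from the question, a sketch might look like the following (the hook functions are placeholders standing in for the question's someFunction/differentFunction, and the notarize credentials are assumed to come from environment variables):
// forge.config.js
const isMake = process.argv[1].endsWith('electron-forge-make.js');

// Placeholder hooks; substitute your real prePackage logic
const fullPrePackage = async () => { console.log('make: running all prePackage steps'); };
const quickPrePackage = async () => { console.log('package: skipping slow steps'); };

module.exports = {
    packagerConfig: {
        // Only notarize during a full `npm run make`
        osxNotarize: isMake
            ? { appleId: process.env.APPLE_ID, appleIdPassword: process.env.APPLE_ID_PASSWORD }
            : undefined
    },
    hooks: {
        prePackage: isMake ? fullPrePackage : quickPrePackage
    }
};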
I'm using Webpack 5 and I want to have a Service Worker that will intercept fetch requests and return responses locally without hitting the network. I also want to be able to import npm modules within the Service Worker. I used to use a library called serviceworker-webpack-plugin for this purpose, but it's no longer maintained (and throws errors when I use it). The Webpack docs recommend using Workbox, but this seems to be just for caching assets in the Service Worker, as far as I can gather.
Could someone tell me what the correct approach is in 2020 for creating a Service Worker with Webpack 5?
Add the service worker to your webpack.config.js entry field:
entry: {
    'app': "./src/index.js",
    'service-worker': "./src/service-worker.ts",
},
output: {
    filename: "[name].js",
},
This will emit dist/app.js and dist/service-worker.js, and it will be possible to import things in both.
serviceworker-webpack-plugin also provided a way for the service worker to see a list of all the bundle files it should cache; that functionality is not available directly with this approach and requires writing a small webpack plugin to get, as sketched below.
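A minimal sketch of such a plugin, assuming the service worker entry from above is emitted as service-worker.js; it prepends the list of emitted asset names to that file as a global the worker can read:
// webpack.config.js (sketch)
class InjectAssetListPlugin {
    apply(compiler) {
        const { Compilation, sources } = compiler.webpack;
        compiler.hooks.thisCompilation.tap('InjectAssetListPlugin', (compilation) => {
            compilation.hooks.processAssets.tap(
                { name: 'InjectAssetListPlugin', stage: Compilation.PROCESS_ASSETS_STAGE_SUMMARIZE },
                (assets) => {
                    const swName = 'service-worker.js';
                    if (!assets[swName]) return;
                    // List of all emitted files, e.g. ['app.js', 'service-worker.js', ...]
                    const assetList = JSON.stringify(Object.keys(assets));
                    const original = assets[swName].source();
                    compilation.updateAsset(
                        swName,
                        new sources.RawSource(`self.__BUNDLE_ASSETS = ${assetList};\n${original}`)
                    );
                }
            );
        });
    }
}
Inside the service worker you could then do something like caches.open('v1').then((cache) => cache.addAll(self.__BUNDLE_ASSETS)).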
2022 Answer
This is supported in webpack pretty much out of the box.
https://webpack.js.org/guides/progressive-web-application/
This will give you basic caching of the assets which webpack is handling for you.
You can get more advanced:
https://developer.chrome.com/docs/workbox/modules/workbox-webpack-plugin/
Note this is using Workbox from Google. I've used this for years in an offline-first app and it works very well.
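From the webpack PWA guide, the basic setup is just adding the Workbox plugin to your config, e.g.:
// webpack.config.js
const WorkboxPlugin = require('workbox-webpack-plugin');

module.exports = {
    // ...
    plugins: [
        // Generates a service worker that precaches webpack's output assets
        new WorkboxPlugin.GenerateSW({
            clientsClaim: true,
            skipWaiting: true,
        }),
    ],
};
GenerateSW writes the service worker for you; if you need your own fetch-handling logic (as in the original question), Workbox's InjectManifest mode takes a hand-written worker file (swSrc) instead.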
Webpack 5 should do this for you out of the box, similar to other workers:
When combining new URL for assets with new Worker/new SharedWorker/navigator.serviceWorker.register webpack will automatically create a new entrypoint for a web worker.
new Worker(new URL("./worker.js", import.meta.url)).
The syntax was chosen to allow running code without bundler too. This syntax is also available in native ECMAScript modules in the browser.
(From https://webpack.js.org/blog/2020-10-10-webpack-5-release/#native-worker-support)
With webpack 4, our service worker code looked like this:
// eslint-disable-next-line @typescript-eslint/ban-ts-comment
// @ts-ignore
import downloadWorker from 'worker-plugin/loader!../workers/downloadHelper'
navigator.serviceWorker.register(downloadWorker).then( //...
With webpack 5, the code became:
navigator.serviceWorker
    // eslint-disable-next-line @typescript-eslint/ban-ts-comment
    // @ts-ignore
    .register(new URL('../workers/downloadHelper.ts', import.meta.url))
    .then( // ...
We removed worker-plugin from our webpack 5 configuration; the webpack 4 code made use of it in the import statement.
The key is using new URL() - webpack 5 will interpret this as being a web worker and will create a chunk for this service worker and link up the url properly.
We had to add the eslint-disable-next-line and @ts-ignore because the interface for navigator.serviceWorker.register expects a string rather than a URL. It appears that webpack correctly passes a string, but TypeScript can't know that, since it runs before webpack does.
Don't overcomplicate it.
You can make a service worker work in just two steps: create it and register it.
Create a .js file, e.g. sw.js, and write in it:
self.addEventListener('fetch', function (event) {
    event.respondWith(
        caches.open('mysite-dynamic').then(function (cache) {
            return cache.match(event.request).then(function (response) {
                var fetchPromise = fetch(event.request).then(function (networkResponse) {
                    cache.put(event.request, networkResponse.clone());
                    return networkResponse;
                });
                return response || fetchPromise;
            });
        }),
    );
});
That's the stale-while-revalidate approach.
Now register it.
if ('serviceWorker' in navigator) {
    window.addEventListener('load', function () {
        navigator.serviceWorker.register('/sw.js').then(function (registration) {
            // Registration was successful
            console.log('ServiceWorker registration successful with scope: ', registration.scope);
        }, function (err) {
            // registration failed :(
            console.log('ServiceWorker registration failed: ', err);
        });
    });
}
So I'm using the Contentful API to get some content from my account and display it in my Next.js app (I'm using Next 9.4.4). Very basic here. Now, to protect my credentials, I'd like to use environment variables (I've never used them before and I'm new to all of this, so I'm a little bit lost).
I'm using the following to create the Contentful client in my index.js file:
const client = require('contentful').createClient({
    space: 'MYSPACEID',
    accessToken: 'MYACCESSTOKEN',
});
MYSPACEID and MYACCESSTOKEN are hardcoded, so I'd like to put them in a .env file to protect them and not make them public when deploying on Vercel.
I've created a .env file and filled it like this:
CONTENTFUL_SPACE_ID=MYSPACEID
CONTENTFUL_ACCESS_TOKEN=MYACCESSTOKEN
Of course, MYACCESSTOKEN and MYSPACEID contain the right keys.
Then in my index.js file, I do the following:
const client = require('contentful').createClient({
    space: `${process.env.CONTENTFUL_SPACE_ID}`,
    accessToken: `${process.env.CONTENTFUL_ACCESS_TOKEN}`,
});
But it doesn't work when I use yarn dev; I get the following console error:
{
    sys: { type: 'Error', id: 'NotFound' },
    message: 'The resource could not be found.',
    requestId: 'c7340a45-a1ef-4171-93de-c606672b65c3'
}
Here is my homepage, showing how I retrieve the content from Contentful and pass it as props to my components:
const client = require('contentful').createClient({
    space: 'MYSPACEID',
    accessToken: 'MYACCESSTOKEN',
});

function Home(props) {
    return (
        <div>
            <Head>
                <title>My Page</title>
                <link rel="icon" href="/favicon.ico" />
            </Head>
            <main id="page-home">
                <Modal />
                <NavTwo />
                <Hero item={props.myEntries[0]} />
                <Footer />
            </main>
        </div>
    );
}

Home.getInitialProps = async () => {
    const myEntries = await client.getEntries({
        content_type: 'mycontenttype',
    });
    return {
        myEntries: myEntries.items
    };
};

export default Home;
Where do you think my error comes from?
While researching my issue, I've also tried to understand how API routes work in Next.js, as I've read it could be better to make API requests in pages/api/, but I don't understand how to get the content and then pass the response into my page components like I did here.
Any help would be much appreciated!
EDIT:
So I've fixed this by adding my env variables to my next.config.js like so:
const withSass = require('@zeit/next-sass');

module.exports = withSass({
    webpack(config, options) {
        const rules = [
            {
                test: /\.scss$/,
                use: [{ loader: 'sass-loader' }],
            },
        ];
        return {
            ...config,
            module: { ...config.module, rules: [...config.module.rules, ...rules] },
        };
    },
    env: {
        CONTENTFUL_SPACE_ID: process.env.CONTENTFUL_SPACE_ID,
        CONTENTFUL_ACCESS_TOKEN: process.env.CONTENTFUL_ACCESS_TOKEN,
    },
});
If you are using the latest version of Next.js (above 9),
then follow these steps:
Create a .env.local file in the root of the project.
Add the prefix NEXT_PUBLIC_ to all of your environment variables,
e.g. NEXT_PUBLIC_SOMETHING=12345.
Use them in any JS file with the process.env prefix,
e.g. process.env.NEXT_PUBLIC_SOMETHING
You can't make this kind of request from the client-side without exposing your API credentials. You have to have a backend.
You can use a Next.js API route (/pages/api) to make the request to Contentful and then pass the result to your front-end, as sketched below.
Just create a .env file, add the variables, and reference them in your API route as follows:
process.env.CONTENTFUL_SPACE_ID
Since Next.js 9.4 you don't need next.config.js for that.
By adding the variables to next.config.js you've exposed the secrets to the client side; anyone can see them.
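A minimal sketch of such an API route (the file name, content type and the front-end fetch URL are assumptions):
// pages/api/entries.js
import { createClient } from 'contentful';

const client = createClient({
    space: process.env.CONTENTFUL_SPACE_ID,
    accessToken: process.env.CONTENTFUL_ACCESS_TOKEN,
});

export default async function handler(req, res) {
    // Runs only on the server, so the credentials never reach the browser
    const entries = await client.getEntries({ content_type: 'mycontenttype' });
    res.status(200).json(entries.items);
}
The page can then fetch('/api/entries') (or call getEntries directly in getStaticProps/getServerSideProps) and receive the items without ever exposing the token.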
New Environment Variables Support
Create a Next.js App with Contentful and Deploy It with Vercel
Blog example using Next.js and Contentful
I recommend updating to Next.js 9.4 or above and using this example:
.env.local
NEXT_PUBLIC_SECRET_KEY=i7z7GeS38r10orTRr1i
and in any part of your code you could use:
.js
const SECRET_KEY = process.env.NEXT_PUBLIC_SECRET_KEY
Note that you must use the full key name "NEXT_PUBLIC_SECRET_KEY" and not only "SECRET_KEY".
When you run it, make sure that the log says:
$ next dev
Loaded env from E:\awesome-project\client\.env.local
ready - started server on http://localhost:3000
...
To read more about environment variables, see this link.
Don't put sensitive things in next.config.js. However, in my case I have some env variables that aren't sensitive at all, and I need them on the server side as well as the client side, so you can do:
// .env file:
VARIABLE_X=XYZ

// next.config.js
module.exports = {
    env: {
        VARIABLE_X: process.env.VARIABLE_X,
    },
}
You have to make a simple change in next.config.js
const nextConfig = {
    reactStrictMode: true,
    env: {
        MYACCESSTOKEN: process.env.MYACCESSTOKEN,
        MYSPACEID: process.env.MYSPACEID,
    }
}

module.exports = nextConfig
Change it like this.
Refer to the docs.
You need to add a next.config.js file in your project. Define env variables in that file and those will be available inside your app.
npm i --save dotenv-webpack@2.0.0   # version 3.0.0 has a bug
Create a .env.development.local file in the root and add your environment variables there:
AUTH0_COOKIE_SECRET=eirhg32urrroeroro9344u9832789327432894###
NODE_ENV=development
AUTH0_NAMESPACE=https:ilmrerino.auth0.com
Create next.config.js in the root of your app:
const Dotenv = require("dotenv-webpack");
module.exports = {
webpack: (config) => {
config.resolve.alias["#"] = path.resolve(__dirname);
config.plugins.push(new Dotenv({ silent: true }));
return config;
},
};
However, those env variables are only going to be accessible on the server. If you want to use any of them on the client, you have to add one more piece of configuration:
const path = require("path");
const Dotenv = require("dotenv-webpack");

module.exports = {
    webpack: (config) => {
        config.resolve.alias["@"] = path.resolve(__dirname);
        config.plugins.push(new Dotenv({ silent: true }));
        return config;
    },
    env: {
        AUTH0_NAMESPACE: process.env.AUTH0_NAMESPACE,
    },
};
For me, the solution was simply restarting the local server :)
It gave me a headache, and then I fixed it by accident.
It did not occur to me that env variables are loaded when the server starts.
I have a Node.js application/module which works OK with a plug-in concept, i.e.
my module acts like a proxy with additional capabilities, such as adding new functionality to the out-of-the-box functionality (methods).
To do this you need to do the following:
clone my application
create a new folder called extenders (inside my app)
in this folder provide two files:
extend.js with your logic as functions/methods
extend.json which defines your API (to know which file to invoke)
Note: the JS & JSON file names must be identical
For example, let's assume that this is your extend.json file:
{
    "extenders": [
        {
            "path": "run",
            "fn": "runFn"
        }
    ]
}
In this case, when the user puts the following link in the browser:
localhost:3000/run
I invoke the runFn function (which exists in the extend.js file) with its logic, and this works as expected (under the hood I read the JSON & JS files and invoke the function like extender[fnName](req, res)); see the sketch below.
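A minimal sketch of that under-the-hood loading, assuming an Express app and the extenders/extend.json + extenders/extend.js layout described above:
const path = require('path');
const express = require('express');

// Read the JSON config and the matching JS file, then mount one route per extender
function mountExtenders(app, extendersDir) {
    const config = require(path.join(extendersDir, 'extend.json'));
    const extender = require(path.join(extendersDir, 'extend.js'));

    config.extenders.forEach(({ path: route, fn: fnName }) => {
        app.get('/' + route, (req, res) => extender[fnName](req, res));
    });
}

const app = express();
mountExtenders(app, path.join(__dirname, 'extenders'));
app.listen(3000);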
Now I want to support the use case of adding an external extender via code.
For example, the user would do something like:
var myModule = require('myModule');
myModule.extend('./pathTo/newExternalPathforExtendersFolder');
so that when my module runs it checks whether new external extenders exist with all their configuration and, if so, refers to them at runtime (to the JS & JSON files).
My questions are:
1. When my module is starting, I need to find out who has registered with my module and then apply my logic to that module. How can this be done in Node?
2. If there is another solution in Node, please let me know.
You could implement an initialization function in your API to give module users freedom. For example:
var yourModule = require('yourModule').init({
    extenders: [
        {
            "path": "run",
            "fn": "runFn"
        }
    ]
});

yourModule.listen(3000);
Or, as MattW wrote, you can implement it as an Express middleware, so module users could use it with their own server. For example:
var yourModule = require('yourModule').init({
    extenders: [
        {
            "path": "run",
            "fn": "runFn"
        }
    ]
});

var app = require('express')();
app.use(yourModule.getMiddleware());
Look at webpack-dev-server and webpack-dev-middleware as another example; hopefully there's some similarity with your task. Webpack also deals with fs and configs, and they simply split the middleware and the standalone server into separate modules. There's no "working code" here because we'd need your module's code to talk about a wrapper, which would depend on the yourModule implementation. Just some thoughts.
If I'm not misunderstanding your problem, maybe this approach can help you.
I think you could list your extenders in an ACL-like JSON which includes not only the path or the fnName, but also the path to the JS file or any other property you need, such as whether it's active, or security parameters.
extenders: [
    {
        "path": "run",
        "fn": "runFn",
        "file": "file_path",
        "api-key": true,
        "active": true
    }
]
Then you can preload your modules by reading the ACL JSON and keep them cached, ready to be used by extend().
var Router = {
    extenders: {},

    init: function () {
        this.extenders = {};
        this.loadExtenderFiles();
    },

    loadExtenderFiles: function () {
        var key, extender;
        // Iterate extender files
        for (key in ACL_JSON) {
            // extender load logic
            extender = ACL_JSON[key];
            if (extender.active) {
                this.extenders[extender.fn] = require(extender.file);
            }
        }
    },

    // this fn should allow you to Router.extend() anywhere during the request process
    extend: function (fn, request, response) {
        // Parse request/response to match your extender module pattern
        // extender process logic
        this.extenders[fn](request, response);
    }
};

module.exports = Router;
So Router.init() should do the caching work on server init, and Router.extend() should resolve your API request or extend one being processed.
Hope it helps you!
I believe a simple router should satisfy your requirements:
var userModule = require('userModule');

router.use('/run', function (req, res, next) {
    return next(userModule(req));
}).all(yourReverseProxy);
I've created a Node.js app with Express, and currently I use console.log to log messages and morgan for Express... For production use, what is the recommended approach for error handling and logging? Are there recommended modules to use?
Examples would be very useful!
I tried the following:
var winston = require('winston'),
    path = require('path');

module.exports = function () {
    var logger = new winston.Logger({
        levels: {
            info: 1
        },
        transports: [
            new (winston.transports.File)({
                level: 'info',
                filename: path.join(process.cwd(), '/logs/log.json'),
            })
        ]
    });
}
I have used winston in the past quite effectively. In the below excerpt we are creating a custom log level called info such that we can call logger.info to log messages. I believe there are numerous default levels defined on winston which are well documented.
The second part is to define a transport. In winston this is essentially a storage device for your logs. You can define multiple transports in the array, including console logging, file logging, rotated file logging, etc. These are all well documented here. In my example I have created a file transport where the log file is located under log/logs.json within the root of the application. Every time I now call logger.info('blah blah blah') I will see a new log entry in the file.
var winston = require('winston'),
    path = require('path');

// Log to file.
var logger = new winston.Logger({
    levels: {
        info: 1
    },
    transports: [
        new (winston.transports.File)({
            level: 'info',
            filename: path.join(process.cwd(), '/log/logs.json'),
        })
    ]
});
// Write to log.
logger.info("something to log");