Azure function is not triggered locally - javascript

I developed my Azure Functions app locally using VS Code and pushed it to the Azure cloud. The app contains Event Hub-triggered functions, and I used to debug my code locally through VS Code without problems. But now, when I run func host start --debug, the functions in my app start but nothing is triggered, even though I can see them being triggered in the cloud through their logs. It's driving me mad: why are they not triggered locally? They are enabled, and I have restarted my function app several times, but I got nothing.
My app is https://butterflyfnapp.azurewebsites.net

In addition to Mikhail's answer, another option is to create a separate consumer group on the Event Hub for each environment (such as cloud and development/VS) and configure them in the Application settings and local.settings.json respectively.
Then add ConsumerGroup = "%consumergroup%" to the EventHubTrigger attribute in your function, where consumergroup is an example of the variable name in the settings.
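For a JavaScript function, the equivalent setting goes in function.json. A minimal sketch, assuming a hub named myeventhub and a connection setting called EventHubConnection (both placeholders):
{
  "bindings": [
    {
      "type": "eventHubTrigger",
      "direction": "in",
      "name": "eventHubMessages",
      "eventHubName": "myeventhub",
      "connection": "EventHubConnection",
      "consumerGroup": "%consumergroup%"
    }
  ]
}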
Besides the above options, you also have the ability to test a non-HTTP-triggered function locally using an HTTP POST request against the local admin endpoint. In other words, your function can be tested locally the same way it is done in the portal. More details here.
The following is an example of testing an EventHubTrigger function using an HTTP POST request:
url: http://localhost:7071/admin/functions/MyFunction
payload:
{
  "input": "{\"Id\":1234,\"Name\":\"abcd\"}"
}
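Assuming the Functions host is running locally on the default port 7071, the same request could be sent with curl, for example:
curl -X POST http://localhost:7071/admin/functions/MyFunction \
  -H "Content-Type: application/json" \
  -d '{"input": "{\"Id\":1234,\"Name\":\"abcd\"}"}'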

Event Hub consumer information (checkpoints) is stored in Blob Storage. If you share the Blob Storage connection string between the development and production environments, they will use the same checkpoints, so they will compete against each other.
My guess is that your cloud deployment always processes the events and updates the checkpoint to the latest position, so your local deployment then picks up that checkpoint and finds nothing to do.
To make sure this doesn't happen, create an additional "dev" Blob Storage account and point the local connection string setting at that storage.
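For example, the local host could use a dedicated dev storage account via local.settings.json, something along these lines (the setting names besides AzureWebJobsStorage and all values are placeholders):
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<dev storage account connection string>",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "EventHubConnection": "<event hub connection string>",
    "consumergroup": "dev"
  }
}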

How to secure API Keys in Environment Variables in a Vue CLI 4 and Electron project

I'm trying to develop a desktop app which would need to make a few private API calls, authenticated using some secret keys.
The keys are created for me by external IT service providers outside of my organisation - they are responsible for the security so there are a few constraints:
Even though they have already taken steps on their end to secure the API, and there are mitigation strategies in place should a breach happen, they would still like me to treat the keys with a security-conscious mindset and take whatever steps I can on my end to make sure they remain secure.
I'm not allowed to just create a random middleware/gateway on a private server or serverless platform to perform the API calls on my app's behalf, as these calls may contain business data.
I have done some research and from what I can find, the general recommendation is to set up a ".env" file in the project folder and use environment variables in that file to store the API keys.
But upon reading the Vue CLI documentation I found the following:
WARNING
Do not store any secrets (such as private API keys) in your app!
Environment variables are embedded into the build, meaning anyone can
view them by inspecting your app's files.
So, given the constraints, is there a way to store these keys securely in a Vue CLI 4 + Electron Desktop app project?
Thanks.
In general, especially if you have a lot of environment variables, it is better practice to store them in a dotenv (.env) file; however, that file could be leaked when you package your Electron app. In this case it is better to set your environment variables from the terminal/command line instead. To do so, follow this guide (https://www.electronjs.org/docs/api/environment-variables).
Keep in mind that anything requiring the API key or other private information should stay on the backend, i.e. the Electron main process, which then sends only the results to the Vue front end.
Here's an example of how you could implement this:
On Windows, from CMD:
set SOME_SECRET="a cool secret"
On POSIX:
$ export SOME_SECRET="a cool secret"
Main process:
// Other electron logic
const { ipcMain } = require("electron");
// "API" below is a placeholder for whatever HTTP client you use (e.g. axios)
// Listen for an event sent from the client to do something with the secret
ipcMain.on("doSomethingOnTheBackend", (event, data) => {
  // The secret never leaves the main process; only the result should be sent back
  API.post("https://example.com/some/api/endpoint", { token: process.env.SOME_SECRET, data });
});
Client side:
const { ipcRenderer } = require("electron");
ipcRenderer.send("doSomethingOnTheBackend", {username: "test", password: "some password"});
Also note that to use ipcRenderer on the client side, nodeIntegration needs to be enabled.
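For reference, a minimal sketch of enabling it when creating the window in the main process (the window size is illustrative; in newer Electron versions you may also need contextIsolation: false for require to work in the renderer):
const { BrowserWindow } = require("electron");

const win = new BrowserWindow({
  width: 800,
  height: 600,
  webPreferences: {
    // Allows require("electron") and other Node APIs in the renderer
    nodeIntegration: true
  }
});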
Here are some more resources to help you get started:
https://www.electronjs.org/docs/api/ipc-renderer
https://www.electronjs.org/docs/api/ipc-main

How to run a script on a newly created EC2 instance via AWS SDK?

I'm currently using AWS's Javascript SDK to launch custom EC2 instances and so far so good.
But now, I need these instances to be able to run some tasks when they are created, for example, clone a repo from Github, install a software stack and configure some services.
This is meant to emulate a similar behaviour I have for local virtual machine deployment. In this case, I run some provisioning scripts with Ansible that get the job done.
For my use case, which would be the best option amongst AWS's different services to achieve this using AWS's Javascript SDK?
Is there any way I could maybe have a template script to which I pass some runtime-obtained variables in order to execute some tasks on the instance I just created? I read about user-data, but I can't figure out how that fits in with AWS's SDK. Also, it doesn't seem to be customisable.
At the end of the day, I think I need a way to use the SDK to do this:
"On the newly created instance, run this script that is stored in such place, replacing these
placeholder values in the script with these I'm giving you now"
Any hints?
As Mark B. stated, UserData is the way to go for executing commands on instance launch. Since you tagged the question with javascript, here's an example of passing it in the ec2.runInstances call:
let AWS = require('aws-sdk')
let ec2 = new AWS.EC2({ region: 'YOUR_REGION' })

// Example commands to create a folder, a file and delete it
let commands = [
  '#!/usr/bin/env bash',
  'mkdir /home/ubuntu/test',
  'touch /home/ubuntu/test/examplefile',
  'rm -rf /home/ubuntu/test'
];

let params = {
  // ...YOUR PARAMS HERE...
  // You need to encode the script with Base64 for it to be executed by the user-data interpreter
  UserData: Buffer.from(commands.join('\n')).toString('base64')
}

ec2.runInstances(params).promise().then(res => { console.log(res) })
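To cover the "replace these placeholder values" part of the question: since the user-data script is just a string built in JavaScript, you can interpolate runtime-obtained values into the commands before encoding them. A hypothetical sketch (repoUrl and branch are made-up variables standing in for whatever you obtain at runtime):
// Values obtained at runtime
let repoUrl = 'https://github.com/example-org/example-repo.git'; // hypothetical
let branch = 'main';

// Build the user-data script with the runtime values baked in
let bootCommands = [
  '#!/usr/bin/env bash',
  `git clone --branch ${branch} ${repoUrl} /home/ubuntu/app`,
  'cd /home/ubuntu/app && ./provision.sh'
];

let userData = Buffer.from(bootCommands.join('\n')).toString('base64');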
When you launch the new instances you can provide the user-data at that time, in the same AWS SDK/API call. That's the best place to put any server initialization code.
The only other way to kick off a script on the instance via the SDK is via the SSM service's Run Command feature. But that requires the instance to already have the AWS SSM agent installed. This is great for remote server administration, but user-data is more appropriate for initializing an instance on first boot.
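For completeness, a minimal sketch of the SSM Run Command route with the JavaScript SDK, assuming the target instance already has the SSM agent running and the necessary instance profile (the instance ID and commands are placeholders):
let AWS = require('aws-sdk');
let ssm = new AWS.SSM({ region: 'YOUR_REGION' });

let params = {
  DocumentName: 'AWS-RunShellScript',   // built-in SSM document that runs shell commands
  InstanceIds: ['i-0123456789abcdef0'], // placeholder instance ID
  Parameters: {
    commands: ['echo "hello from SSM" > /tmp/ssm-test.txt']
  }
};

ssm.sendCommand(params).promise().then(res => { console.log(res.Command.CommandId); });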

Accessing API in a function in a file in node.js and then deploying that file through firebase

I am new to Firebase. I am trying to create a weather bot on Dialogflow, but Firebase doesn't seem to be able to access the OpenWeather API when the index.js file is deployed. At the same time, this works just fine from the command prompt.
The following error occurs while executing
https.get("https://api.openweathermap.org/data/2.5/weather?q="+city+"&APPID={APPID}",function(response){...})
Error: Firebase.child failed: First argument was an invalid path: "undefined". Paths must be non-empty strings and can't contain ".", "#", "$", "[", or "]"
What's the problem here? How do I get around this?
function xyz() {
  // Code
  var https = require("https");
  var city = "London";
  https.get("https://api.openweathermap.org/data/2.5/weather?q=" + city + "&APPID={APPID}", function (response) {
    // Code
  });
  // Code
}
Google Cloud Functions does not allow outbound network calls to APIs other than Google services on the free (Spark) plan. If you want to make such calls, you have to upgrade your plan. The reason it works on your local system is that your local machine places no such restriction on outbound network calls.
You can find out more about pricing here: Google Pricing Plans.
A small piece of advice from my side: if you don't want to pay, use AWS Lambda instead and make the outbound network calls from its free tier.

Meteor: How can I get useraccounts package to write a new user doc into a remote collection?

I'm using the packages accounts-password and useraccounts:bootstrap, and it all works fine, meaning the sign-on form creates a new doc in the Meteor.users collection. But I don't want any collections on the client-facing app, so I have a second app running to which I successfully connect via DDP.connect(), and I can exchange all necessary docs/collections via pub/sub and by calling methods on the remote app.
The only thing that doesn't work is the useraccount doc. I've used (on the client app):
remote.subscribe('users', Meteor.userId(), function() {
});
and (on the remote app):
Meteor.publish('users', function() {
return Meteor.users.find({});
});
even though I'm not sure whether a pub/sub is already included in the package. Still, the doc is written to the local (client) app and not to the remote one.
How can I achieve this?
useraccounts:core simply makes use of Accounts.createUser on the server side (see this line) within a method called from the client side (see this other line).
So the new user object is created on the server side and not on the client (though it flows all the way down to the client thanks to DDP and the default users subscriptions...).
If you're really looking to change the default behaviour provided by the Meteor core accounts packages (accounts-base and accounts-password in this case...), you should try to override the Accounts.createUser method, which is where it all begins...
In any case, be warned that the current user is published to the client by default: see these lines
Finally, to prevent useraccounts:core from using the Accounts API, you could try to override the AtCreateUserServer method and deal with the creation of a new user on a remote application from inside there.
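A very rough, hypothetical sketch of the override approach on the server of the client-facing app, forwarding the new user to the remote app over DDP instead of creating it locally (the remote URL and the createUserOnRemote method are made up):
// Server side of the client-facing app
const remote = DDP.connect('https://remote-app.example.com'); // hypothetical URL

Accounts.createUser = function (options) {
  // Forward the user document to the remote app instead of inserting it locally;
  // 'createUserOnRemote' is a method you would have to define on the remote app
  return remote.call('createUserOnRemote', options);
};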
The accounts-base package provides such functionality.
The accounts-base package exports two constructors, called AccountsClient and AccountsServer, which are used to create the Accounts object that is available on the client and the server, respectively.
Nevertheless, these two constructors can be instantiated more than once, to create multiple independent connections between different accounts servers and their clients, in more complicated authentication situations.
Documentation: Accounts (multi-server)
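A minimal sketch of wiring the client-facing app's accounts to the remote app with this API (the remote URL is a placeholder, and this only covers the connection, not the useraccounts UI):
// On the client-facing app
import { DDP } from 'meteor/ddp-client';
import { AccountsClient } from 'meteor/accounts-base';

// Connect to the remote app and run all account operations over that connection
const remoteConnection = DDP.connect('https://remote-app.example.com');
const remoteAccounts = new AccountsClient({ connection: remoteConnection });

// remoteAccounts.users is now backed by the remote app's user collection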

Is there an equivalent of Netscape navigator functions in nodejs?

Can I access the inbuilt navigator functions like isinNet() or DomainNameorHost() from nodejs?
Since nodeJS runs on the server, not the browser, you can't access functions that are only provided in a browser.
Most developers use a framework like Express to create a web service on Node.js.
In a route, such as
app.route("/play", function(req,res){
// code that handles URL /play
});
there is a callback function that is called when a request arrives for that route.
The req object parameter contains everything about the request.
req.ip is the upstream (incoming) ip address.
I looked around on npm for a module that might map remote IPs to hostnames and could not find one. Presumably all it would do is a reverse DNS lookup, which could take time and hold up request processing.
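If you did want to attempt that lookup yourself, Node's built-in dns module can do a reverse lookup. A small, purely illustrative sketch inside the route handler above (the lookup is asynchronous, may be slow, and will fail for many IPs):
const dns = require("dns");

app.get("/play", function (req, res) {
  // Reverse-resolve the client's IP address to hostnames
  dns.reverse(req.ip, function (err, hostnames) {
    if (err) {
      return res.send("Could not resolve " + req.ip);
    }
    res.send("You appear to be coming from: " + hostnames.join(", "));
  });
});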
