How to pass event parameters to AWS Lambda function using API Gateway? - javascript

I have an AWS Lambda function written in Python that is initiated by a Zapier trigger I set up. Since I pass some input parameters to the function in the Zapier trigger, I can access the input parameters in my Python code through variables such as event['parameter1']. It works perfectly.
I'm trying to call the same Lambda function from the Airtable Scripting environment. To do that, I set up an API Gateway trigger for the Lambda function, but I can't figure out how to pass input parameters from the vanilla JS environment. Below is the code I have, which gives me "Internal Server Error".
Your help would be definitely appreciated!
const awsUrl = "https://random-id.execute-api.us-west-2.amazonaws.com/default/lambda-function";
let event = {
  "queryStringParameters": {
    "gdrive_folder_id": consFolderId,
    "invitee_email": email
  }
};

let response = await fetch(awsUrl, {
  method: "POST",
  body: JSON.stringify(event),
  headers: {
    "Content-Type": "application/json",
  }
});
console.log(await response.json());
[Edited] Plus, here's the code of the Lambda function and the latest CloudWatch log after a successful execution invoked by Zapier. It's a simple piece of code that automates Google Drive folder sharing based on 2 inputs (folder ID + email address). Please bear with me for the poor code quality!
from __future__ import print_function
from googleapiclient.discovery import build
from google.oauth2 import service_account

SCOPES = ['https://www.googleapis.com/auth/drive']
SERVICE_ACCOUNT_FILE = 'service.json'

def lambda_handler(event, context):
    """Shows basic usage of the Drive v3 API.
    Prints the names and ids of the first 10 files the user has access to.
    """
    # 2-legged OAuth from Google service account
    creds = service_account.Credentials.from_service_account_file(
        SERVICE_ACCOUNT_FILE, scopes=SCOPES)
    drive_service = build('drive', 'v3', credentials=creds)

    # change multiple permissions with batch requests
    folder_id = event['gdrive_folder_id']
    email_address = event['invitee_email']

    def callback(request_id, response, exception):
        if exception:
            # Handle error
            print(exception)
        else:
            print("Permission Id: {}".format(response.get('id')))

    batch = drive_service.new_batch_http_request(callback=callback)
    user_permission = {
        'type': 'user',
        'role': 'writer',
        'emailAddress': email_address
    }
    batch.add(drive_service.permissions().create(
        fileId=folder_id,
        body=user_permission,
        fields='id',
    ))
    batch.execute()

I'm not a Python expert and I don't know how you've set up your API Gateway integration with Lambda, but I believe your code might have two issues:
1.) An Internal Server Error response from the API Gateway endpoint often points to a problem in the integration between API Gateway and your Lambda function. In this case I can't see where you are returning a valid response back to API Gateway. In your example the return value of batch.execute() is probably returned, right? However, by default API Gateway expects an object that contains a statusCode and a body, and optionally headers. Have a look at the AWS Lambda handler documentation for Python and its examples. This documentation page might also be of interest to you.
2.) In your function you access the event data like event['gdrive_folder_id']. However, I can't see you parsing the event data anywhere. Are you using a custom integration between your API Gateway and the Lambda function? Because with a proxy integration, API Gateway sends an object that has a body field, and you'd need to read the HTTP body from there. See the examples on this documentation page and the sketch below.
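To illustrate both points, here is a minimal sketch of the handler for a Lambda proxy integration (an assumption on my side; adapt it to your actual integration and keep your sharing logic):

import json

def lambda_handler(event, context):
    # With a proxy integration the HTTP payload arrives as a JSON string in event['body'].
    body = json.loads(event.get('body') or '{}')
    params = body.get('queryStringParameters', {})  # matches the structure your JS currently sends
    folder_id = params.get('gdrive_folder_id')
    email_address = params.get('invitee_email')

    # ... create the Drive permission exactly as before ...

    # API Gateway expects statusCode and body (and optionally headers); body must be a string.
    return {
        'statusCode': 200,
        'body': json.dumps({'folder_id': folder_id, 'invitee_email': email_address})
    }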
Here are some more things you can check on your own:
Have you also checked what you get when you just print the event data? Also, does batch.execute() wait for the batch processing, and does it return anything? If so, what does it return?
One note here: you haven't told us anything about the integration between your API Gateway and your Lambda function. Since you can do some mapping between API Gateway and AWS Lambda, it is possible that you are converting the request and response outside of the Lambda function, in which case my suggestions above are wrong. Let me know whether this is the case and we can investigate further.

Related

AWS - API Gateway, Lambda Authorizers, and ‘challenge/response’ style authentication

I have an automation that takes a Webhook from a service and posts it to Slack. The webhook goes to an API Gateway URL, authorizes through a Lambda authorizer function, and then goes to the Lambda function. This process is outlined here.
The verification is in the header["authentication"] field, which is validated by the authorizer. I scoped that text in the authorizer, and it passes it to the Lambda proxy as authenticationToken. If validated, a certificate is created and the webhook is passed to Lambda. This has all been working for a few months.
This year, as many companies tend to do, the service behind the webhook is deprecating this auth method and changing its authentication to a "Challenge-Response Check" style. The service first sends a validation webhook; the authorizer needs to build a hashed token out of several headers, including a timestamp, and pass that hashed token back before the real webhook payload is sent. Here is a Node/Express sample of how to do this:
app.post('/webhook', (req, res) => {
  var response
  console.log(req.body)
  console.log(req.headers)
  // construct the message string
  const message = `v0:${req.headers['x-zm-request-timestamp']}:${JSON.stringify(req.body)}`
  const hashForVerify = crypto.createHmac('sha256', process.env.ZOOM_WEBHOOK_SECRET_TOKEN).update(message).digest('hex')
  // hash the message string with your Webhook Secret Token and prepend the version semantic
  const signature = `v0=${hashForVerify}`
  // you validating the request came from Zoom https://marketplace.zoom.us/docs/api-reference/webhook-reference#notification-structure
  if (req.headers['x-zm-signature'] === signature) {
    // Zoom validating you control the webhook endpoint https://marketplace.zoom.us/docs/api-reference/webhook-reference#validate-webhook-endpoint
    if (req.body.event === 'endpoint.url_validation') {
      const hashForValidate = crypto.createHmac('sha256', process.env.ZOOM_WEBHOOK_SECRET_TOKEN).update(req.body.payload.plainToken).digest('hex')
      response = {
        message: {
          plainToken: req.body.payload.plainToken,
          encryptedToken: hashForValidate
        },
        status: 200
      }
....
Is this methodology supported by the authorizer/Lambda method? The API Gateway authorizer would need to send several headers to the Lambda authorizer function, instead of just the one. I see a new feature that changes the authorizer Type to Request (from Token), but I was not able to pass the headers to the auth function that way after re-creating and re-deploying the API Gateway.
Or, should I get rid of the authorizer altogether and just do my authentication in the actual Lambda?
Or, should I start using Lambda's newer function URL feature and get rid of API Gateway altogether?
What is the best workflow in this use case?
Any advice or links appreciated.
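For reference, here is roughly how I picture the validation step if I dropped the authorizer and handled it in the Lambda itself (a rough Python sketch assuming a proxy integration; my real function and runtime may differ, and the secret comes from an environment variable as in the sample above):

import hashlib
import hmac
import json
import os

def lambda_handler(event, context):
    # Proxy integration assumed: the webhook body arrives as a JSON string.
    body = json.loads(event.get("body") or "{}")

    # The endpoint.url_validation challenge: hash plainToken with the secret
    # and echo both back so the service can verify we control the endpoint.
    if body.get("event") == "endpoint.url_validation":
        plain_token = body["payload"]["plainToken"]
        secret = os.environ["ZOOM_WEBHOOK_SECRET_TOKEN"]
        encrypted_token = hmac.new(secret.encode(), plain_token.encode(), hashlib.sha256).hexdigest()
        return {
            "statusCode": 200,
            "body": json.dumps({"plainToken": plain_token, "encryptedToken": encrypted_token}),
        }

    # ... otherwise verify x-zm-signature and forward the real payload to Slack ...
    return {"statusCode": 200, "body": "ok"}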

firebase functions calls to GCP - with a 403 - IAM blocks the call

Hello in this awesome 2022, that WILL be better!
here's the case we got and the problem we are facing:
We have two parts of the same project - firebase (firebase functions created in javascript - project A) as well as google cloud (cloud function created in python - project B).
From the FirebaseFunction part (A) we are sending a POST into the CloudFunction (B)
We want this request to be authenticated with the IAM on GCP side (B)
And so in project A:
const firebase = require("firebase-admin");
const botInformURL = "URLTOHIT";
const token = await firebase.auth().createCustomToken("XXX-notification-system");
const output = await axios.post(botInformURL, {
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer " + token,
  },
  method: "post",
  body: JSON.stringify({ additionalMessage }),
});
On the Google Cloud Platform side (for the Cloud Function in project B) I have added the service account email ((...)projectA(...)iam.gserviceaccount.com) as a Cloud Functions Invoker (tried Admin as well, without success).
And now - I am being blocked on the GCP with a 403.
What am I missing here?
When an unauthenticated caller sends a request to the Cloud Function, they will see a 401/403 status code response. In this scenario, the way out is to ensure that ‘allUsers’ has ‘roles/cloudfunctions.invoker’ role in the Cloud Function's IAM. You may refer to this documentation for more on this.
But yes, you are correct, if you use ‘allUsers’ it would expose the endpoint publicly and anyone would be able to access it.
If you want to avoid this, you have to follow the below steps:
From the receiving function:
1. You need to configure the receiving function to accept requests from the calling function.
2. Use the gcloud functions add-iam-policy-binding command.
From the calling function:
1. Create a Google-signed OAuth ID token with the audience (aud) set to the URL of the receiving function.
2. Include the ID token in an Authorization: Bearer ID_TOKEN header in the request to the function.
You may also refer to the Stackoverflow link for more information.
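For illustration, here is a minimal sketch of that flow. Your calling function is in Node (I believe the Node equivalent uses google-auth-library's getIdTokenClient), but the idea is the same; the URL below is a placeholder and the google-auth and requests packages are assumed:

import google.auth.transport.requests
import google.oauth2.id_token
import requests

# Placeholder for the receiving function's trigger URL in project B.
receiving_url = "https://REGION-PROJECT_B.cloudfunctions.net/botInform"

# Mint a Google-signed ID token whose audience (aud) is the receiving function's URL,
# using the calling service account's Application Default Credentials.
auth_request = google.auth.transport.requests.Request()
token = google.oauth2.id_token.fetch_id_token(auth_request, receiving_url)

resp = requests.post(
    receiving_url,
    headers={"Authorization": "Bearer " + token, "Content-Type": "application/json"},
    json={"additionalMessage": "hello"},
)
print(resp.status_code)

Note that, as far as I know, a Firebase custom token from createCustomToken is not a Google-signed ID token, which is why IAM rejects the call.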

Use git credential manager to fetch azure devops api instead of personal access token

I am trying to call the Azure DevOps Git API to get information about repositories and branches in JS.
In order to achieve that, I made a little application with the following code:
$(document).ready(function() {
  var personalToken = btoa(':' + '<personalAccessToken>');
  fetch('https://dev.azure.com/<company>/<project>/_apis/git/repositories?api-version=5.1', {
    method: 'GET',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Basic ' + personalToken
    }
  }).then(function(response) {
    return response.json();
  }).then(function(repositories) {
    console.log("There are " + repositories.count + " repositories");
  }).catch(function(error) {
    console.log('Fetch error: ' + error.message);
  });
});
This code is working great but, as you can see, my personal access token is written directly inside the code... which is really bad...
When I am using git on the command line, I don't have to specify any credential information because I use Git Credential Manager for Windows. That means my personal access token is already stored, cached and used automatically every time I run a git command, like clone, etc.
So, I would like my JS code to do the same thing: use my stored credentials automatically to fetch the API, without requiring me to put my personal access token in the code.
I have already searched for hours but can't find out if it is possible.
I have already searched for hours but can't find out if it is possible.
Sorry, but as far as I know it's impossible. The way you're calling the REST API is similar to using Invoke-RestMethod to call a REST API in PowerShell.
In both scenarios, the process tries to find a PAT for authentication in the current session/context and won't even try to search the cache in Git Credential Manager.
You should distinguish between accessing the Azure DevOps service via the REST API and via code:
Rest API:
POST https://dev.azure.com/{organization}/{project}/{team}/_apis/wit/wiql?api-version=5.1
Request Body:
{
"query": "Select [System.Id], [System.Title], [System.State] From WorkItems Where [System.WorkItemType] = 'Task' AND [State] <> 'Closed' AND [State] <> 'Removed' order by [Microsoft.VSTS.Common.Priority] asc, [System.CreatedDate] desc"
}
Corresponding Code in C#:
VssConnection connection = new VssConnection(new Uri(azureDevOpsOrganizationUrl), new VssClientCredentials());
// create http client and query for results
WorkItemTrackingHttpClient witClient = connection.GetClient<WorkItemTrackingHttpClient>();
Wiql query = new Wiql() { Query = "SELECT [Id], [Title], [State] FROM workitems WHERE [Work Item Type] = 'Bug' AND [Assigned To] = @Me" };
WorkItemQueryResult queryResults = witClient.QueryByWiqlAsync(query).Result;
Maybe you can consider using a limited PAT, limiting its scope to Code only:
I know there exist other authentication mechanisms:
For an interactive JavaScript project: ADALJS and the Microsoft-supported client libraries.
You can give it a try, but I'm not sure it will work for you since you're not using a real code-based way to access the Azure DevOps service... Hope it helps :)
If you have the script set up in an Azure Runbook, you can store the PAT as an encrypted variable there and have the script pull it from there before running, rather than having it directly written into the code.
$encryptedPatVarName = "ADO_PAT"
$adoPat = Get-AutomationVariable -Name $encryptedPatVarName
$adoPatToken = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($adoPat)"))
$adoHeader = @{authorization = "Basic $adoPatToken"}
The above is the PowerShell version of it. I have seen some people do it with other languages as well.

Javascript - Querying VoltDB api with fetch

I'm trying to query a VoltDB using its api:
const url = 'http://server:8080/api/1.0/'
const queryParam = encodeURIComponent('select * from table')
const queryURL = url + `?Procedure=@AdHoc&Parameters=['${queryParam}']&jsonp=console.log`
fetch(queryURL).then( response => {
response.text().then( text => console.log(text) )
})
That code throws a "No Access-Control-Allow-Origin" error.
If I change the fetch call to this:
fetch(queryURL, { mode: 'no-cors' }).then( response => {
response.text().then( text => console.log(text) )
})
It does nothing
This is a browser security feature. If you are serving a web page from one URL and, within the page, you have embedded calls to another host or port, the browser won't allow it.
One way to get around this is to add a proxy to your web server, so it can make the calls to port 8080, and pass the responses back to the web page from the same origin.
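For example, a minimal same-origin proxy could look roughly like this (Flask is used purely for illustration, and the route name and port are assumptions; you would do the equivalent in whatever server already serves your page):

from flask import Flask, Response, request
import requests

app = Flask(__name__)
VOLTDB_API = "http://localhost:8080/api/1.0/"  # assumed VoltDB HTTP endpoint

@app.route("/voltdb", methods=["GET", "POST"])
def voltdb_proxy():
    # Forward the query string / form data unchanged and relay VoltDB's JSON response,
    # so the browser only ever talks to the page's own origin.
    upstream = requests.request(request.method, VOLTDB_API,
                                params=request.args, data=request.form)
    return Response(upstream.content, status=upstream.status_code,
                    mimetype="application/json")

if __name__ == "__main__":
    app.run(port=3000)

The page would then call /voltdb on its own origin instead of calling port 8080 directly.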
You may see some answers on Stack Overflow about using CORS to get around this error, but that requires changing the headers that VoltDB uses on port 8080, so that's not something you can do yourself, and we have no plans to do that.
Another solution is to use the voltdb.js file provided in some of our demos, such as the NBBO demo dashboard: https://github.com/VoltDB/voltdb/tree/master/examples/nbbo/web
I think this uses low-level javascript to open a socket to make the HTTP call without using XMLHttpRequest, so it avoids the No Access-Control-Allow-Origin error.
In the example, the code that is specific to the NBBO example is in demo.js, voltdb-dashboard.js contains code that is common to various example dashboards, and voltdb.js is the base library that provides access to call procedures asynchronously.
You should encode all of the URI parameters, not just the procedure parameters:
$ curl --data 'Procedure=@AdHoc&Parameters=["select count(*) from store;"]' http://127.0.0.1:8080/api/1.0/
{"status":1,"appstatus":-128,"statusstring":null,"appstatusstring":null,"results":[{"status":-128,"schema":[{"name":"C1","type":6}],"data":[[100000]]}]}
or
$ curl --data 'Procedure=%40AdHoc&Parameters=%5B%22select+count(*)+from+store%3B%22%5D' http://127.0.0.1:8080/api/1.0/; echo
{"status":1,"appstatus":-128,"statusstring":null,"appstatusstring":null,"results":[{"status":-128,"schema":[{"name":"C1","type":6}],"data":[[100000]]}]}

authenticating a service account to call a Google API with JavaScript client library

I want to make JSON-RPC calls from localhost (WAMP environment) to the Google FusionTables API (and a couple of other APIs) using the Google Client Library for JavaScript
Steps I have taken:
set up a project on the Google Developer Console
enabled the FusionTables API
created a service account and downloaded the JSON file.
successfully loaded the JS client library with the auth package: gapi.load('client:auth2', initAuth);
constructed the init method parameter the following 3 ways:
the downloaded JSON verbatim
the downloaded JSON modified to include the scope
just the client ID and scope
tried (and failed) to initialize the GoogleAuth instance: gapi.auth2.init(params)
function failed(reason) {
  console.log(reason);
}

gapi.load('client:auth2', initAuth);

function initAuth() {
  var APIkey = 'MY API KEY';
  gapi.client.setApiKey(APIkey); // I understand this to be unnecessary with authorized requests, included just for good measure
  var GDTSAKey = 'MY SERVICE ACCOUNT KEY';
  var scopes = 'https://www.googleapis.com/auth/fusiontables';
  gapi.auth2.init({
    client_id: "101397488004556049686",
    scope: 'https://www.googleapis.com/auth/fusiontables'
  }).then(signin, failed("couldn't initiate"));
  // passing the downloaded JSON object verbatim as parameter to init didn't work either
} // initAuth()

function signin() {
  gapi.auth2.getAuthInstance().signIn().then(makeAPIcall), failed("couldn't sign-in");
}

function makeAPIcall() {
  gapi.client.load('fusiontables', 'v2', function() {
    var tableId = '1PSI_...';
    var table = gapi.client.fusiontables.table.get(tableId);
    document.querySelector("#result").innerHTML = table;
  });
}
based on JS client library >> Samples
The gapi.auth2.init method invokes the second callback (which I understand to be an error handler): failed("couldn't initiate"). But then, curiously, I also get "couldn't sign-in", which could only have originated from within the provided success handler. What's going on? How do I get this to work?
Note: I am only willing to try the CORS/xhr approach if there is no way to do it with the JS client library.
What's going on?
You are trying to use a service account with the Google JavaScript client library, which does not support service accounts.
How do I get this to work?
Switch to OAuth2 authentication or, if you must use a service account, switch to a server-side language such as PHP or Python, which do support service account authentication.
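For example, a minimal server-side sketch in Python (assuming the google-auth and google-api-python-client packages, a downloaded service account key file, and that the FusionTables API is enabled for your project):

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ['https://www.googleapis.com/auth/fusiontables']
SERVICE_ACCOUNT_FILE = 'service.json'  # the JSON key you downloaded

# Service accounts are supported here, unlike in the browser JS client library.
creds = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES)
fusiontables = build('fusiontables', 'v2', credentials=creds)

# The table ID is a placeholder; use your own table's ID.
table = fusiontables.table().get(tableId='1PSI_...').execute()
print(table)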
