I would like to add an auth token to the HTTP request headers every time an HTTP request is sent, and if authorization fails, I want to redirect the user to the login page. Should I decorate the HTTP driver, or is there a better way to do it?
I came up with a solution that decorates the HTTP driver, but I'm not sure this is the correct way of doing it. Here's the code I have written so far:
import Rx from 'rx';
import {makeHTTPDriver} from '@cycle/http';
function makeSecureHTTPDriver({eager = false} = {}) {
return function secureHTTPDriver(request$) {
const httpDriver = makeHTTPDriver(eager);
const securedRequest$ = request$
.map(request => {
const token = localStorage.getItem('token');
if (token) {
request.headers = request.headers || {};
request.headers['X-AUTH-TOKEN'] = token;
}
return request;
});
const response$ = httpDriver(securedRequest$);
//todo: check response and if it fails, redirect to the login page
return response$;
}
}
export default makeSecureHTTPDriver;
Here is how I use makeSecureHTTPDriver:
const drivers = {
DOM: makeDOMDriver('#app'),
HTTP: makeSecureHTTPDriver()
};
This is a little late; I don't frequent SO very much. I'd suggest using other drivers instead, to avoid placing any logic in your drivers.
import storageDriver from '@cycle/storage'
import {makeHTTPDriver} from '@cycle/http'
function main(sources) {
const {storage, HTTP} = sources
const token$ = storage.local.getItem('token')
.startWith(null)
const request$ = createRequest$(sources)
const secureRequest$ = request$.withLatestFrom(token$,
(request, token) => token ?
Object.assign(request, {headers: {'X-AUTH-HEADER': token}}) :
request
)
return {HTTP: secureRequest$, ...}
}
Cycle.run(main, {
...
storage: storageDriver,
HTTP: makeHTTPDriver()
})
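To also cover the //todo from the original question (redirecting to the login page when authorization fails), the same main() can derive a redirect stream from the responses. The sketch below is untested and makes a few assumptions: that sources.HTTP is the raw response$$ metastream of older @cycle/http versions, that failed responses surface as stream errors carrying a status field (superagent's behaviour), and that a @cycle/history-style driver accepting a stream of paths is registered as the history sink. Adjust it to whatever your HTTP driver version actually emits:
const failedResponse$ = HTTP
  .map(response$ =>
    response$
      .map(() => null) // successful responses are not needed here
      .catch(error => Rx.Observable.just(error)) // keep the error as a value
  )
  .mergeAll()
  .filter(error => error && error.status === 401)

const redirect$ = failedResponse$.map(() => '/login')

return {HTTP: secureRequest$, history: redirect$}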
I'm not sure if this will help, but the HTTP driver is superagent under the hood, so you can pass it an object with the required info, like here.
But in regard to your issue, I think that the HTTP driver might need this option added to the driver itself, so you can dictate whether the driver should be secure or not, e.g.:
const drivers = {
DOM: makeDOMDriver('#app'),
HTTP: makeHTTPDriver({secure: true})
};
Your implementation looks OK to me, so it might be worth having it in the driver itself.
I'd create an issue in the HTTP driver repo and see what the community thinks; you can also ask people to interact via the gitter channel :-)
In the Amplify documentation, under the Storage/File access levels section there is a paragraph that states:
Files are stored under private/{user_identity_id}/ where the user_identity_id corresponds to the unique Amazon Cognito Identity ID for that user.
How to fetch user_identity_id from the lambda function?
The request to the lambda is authorized, the event.requestContext.authorizer.claims object is available, and I can see the user data, but not the user_identity_id.
EDIT: Now I see that there is a field event.requestContext.identity.cognitoIdentityId, but the value is null. Still need to find the way to fetch it.
OK, so there's no straightforward way to map a Cognito identity ID to a Cognito user. There is a lengthy discussion here where a couple of workarounds can be found. For now, I'm going to use this solution, where, instead of identity_id, you can specify a custom attribute (most likely the sub) as the folder name.
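For reference, the sub workaround needs nothing beyond the claims that are already on the authorized request. A minimal sketch (how you then use it as a folder name depends on your storage setup):
// The Cognito User Pool `sub` claim is available on an authorized
// API Gateway request. Note that this is NOT the Cognito Identity ID,
// just a stable per-user attribute that can be used as a folder name instead.
exports.handler = async (event) => {
  const sub = event.requestContext.authorizer.claims.sub;
  return {
    statusCode: 200,
    body: JSON.stringify({ sub }),
  };
};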
EDIT: There is another solution that might help (found somewhere on the internet, and I verified that it works)
const AWS = require('aws-sdk')
const cognitoIdentity = new AWS.CognitoIdentity();
function getCognitoIdentityId(jwtToken) {
const params = getCognitoIdentityIdParams(jwtToken);
return cognitoIdentity
.getId(params)
.promise()
.then(data => {
if (data.IdentityId) {
return data.IdentityId;
}
throw new Error('Invalid authorization token.');
});
}
function getCognitoIdentityIdParams(jwtToken) {
const loginsKey = `cognito-idp.${process.env.REGION}.amazonaws.com/${process.env.USERPOOLID}`;
return {
IdentityPoolId: `${process.env.IDENTITY_POOL_ID}`,
Logins: {
[loginsKey]: jwtToken,
},
};
}
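A minimal sketch of how this helper might be wired into a Lambda handler, assuming the user's ID token arrives in the Authorization header of an API Gateway proxy event and that REGION, USERPOOLID and IDENTITY_POOL_ID are set as environment variables:
// Hypothetical wiring for the helper above.
exports.handler = async (event) => {
  try {
    const identityId = await getCognitoIdentityId(event.headers.Authorization);
    return { statusCode: 200, body: JSON.stringify({ identityId }) };
  } catch (err) {
    return { statusCode: 401, body: JSON.stringify({ message: err.message }) };
  }
};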
If the user accesses the lambda through GraphQL via the AppSync service, then the identity is stored in event.identity.owner.
Here is some TypeScript code I use to pull the user_identity_id from the event. However, the user doesn't always call the lambda directly, so the user_identity can also be passed in when the call comes from an authorized IAM role.
export function ownerFromEvent(event: any = {}): string {
if (
event.identity.userArn &&
event.identity.userArn.split(":")[5].startsWith("assumed-role")
) {
// This is a request from a function over IAM.
return event.arguments.input.asData.owner;
} else {
return event.identity.owner;
}
}
For anyone else still struggling with this, I was finally able to use the AWS SDK for JavaScript v3 to obtain a Cognito user's IdentityId and credentials in a Lambda function invoked via API Gateway with a Cognito User Pool Authorizer, using the Cognito user's identity jwtToken passed in the Authorization header of the request.
Here is the code used in my JavaScript Lambda Function:
const IDENTITY_POOL_ID = "us-west-2:7y812k8a-1w26-8dk4-84iw-2kdi849sku72"
const USER_POOL_ID = "cognito-idp.us-west-2.amazonaws.com/us-west-2_an976DxVk"
const { CognitoIdentityClient } = require("@aws-sdk/client-cognito-identity");
const { fromCognitoIdentityPool } = require("@aws-sdk/credential-provider-cognito-identity");
exports.handler = async (event,context) => {
const cognitoidentity = new CognitoIdentityClient({
credentials: fromCognitoIdentityPool({
client: new CognitoIdentityClient(),
identityPoolId: IDENTITY_POOL_ID,
logins: {
[USER_POOL_ID]:event.headers.Authorization
}
}),
});
var credentials = await cognitoidentity.config.credentials()
console.log(credentials)
// {
// identityId: 'us-west-2:d393294b-ff23-43t6-d8s5-59876321457d',
// accessKeyId: 'ALALA2RZ7KTS7STD3VXLM',
// secretAccessKey: '/AldkSdt67saAddb6vddRIrs32adQCAo99XM6',
// sessionToken: 'IQoJb3JpZ2luX2VjEJj//////////...', // sessionToken cut for brevity
// expiration: 2022-07-17T08:58:10.000Z
// }
var identity_ID = credentials.identityId
console.log(identity_ID)
// us-west-2:d393294b-ff23-43t6-d8s5-59876321457d
const response = {
statusCode: 200,
headers: {
"Access-Control-Allow-Headers": "*",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods" : "OPTIONS,POST,GET,PUT"
},
body:JSON.stringify(identity_ID)
};
return response;
}
After a Cognito user has signed in to my application, I can use the Auth module of aws-amplify and fetch() in my React Native app to invoke the lambda function shown above, sending a request to my API Gateway trigger (authenticated with a Cognito User Pool Authorizer) with the following code:
import { Auth } from 'aws-amplify';
var APIGatewayEndpointURL = 'https://5lstgsolr2.execute-api.us-west-2.amazonaws.com/default/-'
var response = {}
async function getIdentityId () {
var session = await Auth.currentSession()
var IdToken = await session.getIdToken()
var jwtToken = await IdToken.getJwtToken()
var payload = {}
await fetch(APIGatewayEndpointURL, {method:"POST", body:JSON.stringify(payload), headers:{Authorization:jwtToken}})
.then(async(result) => {
response = await result.json()
console.log(response)
})
}
More info on how to Authenticate using aws-amplify can be found here https://docs.amplify.aws/ui/auth/authenticator/q/framework/react-native/#using-withauthenticator-hoc
I'm kind of new to Redis, and I'm currently experiencing a project standstill because I don't know any other way to set and get in Redis.
My problem is that I'm building a url shortener: when the user posts (a POST request) a url to the server, I set the url as the key and a nanoid-generated code as the value, and send the nanoid code back to the user. But when the user sends a GET request with the url code to the server, I have to check if the url is already cached and redirect the user to it, which I can't do, because the actual url has been set as the key, not the url code, so the lookup always returns undefined. Can you help me with this problem? Is there some other way to do this? Many thanks in advance! Here is the code:
import redis from 'redis';
import http from 'http';
import express, { Router, Request, Response, NextFunction } from 'express';
import { promisify } from 'util';
import { nanoid } from 'nanoid';
interface Handler {
(req: Request, res: Response, next: NextFunction): Promise<void> | void;
}
interface Route {
path: string;
method: string;
handler: Handler | Handler[];
}
const { PORT = 8080} = process.env;
// I'm using a docker container
const { REDIS_URL = 'redis://cache:6379' } = process.env;
const redisClient = redis.createClient({
url: REDIS_URL
});
const initCache = async () =>
new Promise((resolve, reject) => {
redisClient.on('connect', () => {
console.log('Redis client connected');
resolve(redisClient);
});
redisClient.on('error', error => reject(error));
});
async function getShortenedURL(url: string) {
const urlCode = nanoid(7);
redisClient.setex(url, 3600, urlCode);
return urlCode;
}
const getAsync = promisify(redisClient.get).bind(redisClient);
async function getFromCache(key: string) {
const data = await getAsync(key);
return data;
}
const routes = [
{
path: '/:url',
method: 'get',
handler: [
async ({ params }: Request, res: Response, next: NextFunction) => {
try {
const { url } = params;
const result = await getFromCache(url);
if (result) {
res.redirect(301, result);
} else {
throw new Error('Invalid url');
}
} catch (error) {
console.error(error);
}
}
]
},
{
path: '/api/url',
method: 'post',
handler: [
async ({ body }: Request, res: Response, next: NextFunction) => {
const { url } = body;
const result = await getFromCache(url);
result ? res.status(200).send(`http://localhost:${PORT}/${result}`) : next();
},
async ({ body }: Request, res: Response) => {
const result = await getShortenedURL(body.url as string);
res.status(200).send(result);
}
]
}
];
const applyRoutes = (routes: Route[], router: Router) => {
for (const route of routes) {
const { method, path, handler } = route;
(router as any)[method](path, handler);
}
};
const router = express();
applyRoutes(routes, router);
const server = http.createServer(router);
async function start() {
await initCache();
server.listen(PORT, () => {
console.log(`Server is running on http://localhost:${PORT}...`)
}
);
}
start();
As I understand it, you need to make sure that you do not shorten and store any given url twice.
You could encode the url and use it as the short version and as the key at the same time, e.g.
www.someurltoshorten.com -> encoded value
{key: value} -> {encoded value: www.someurltoshorten.com}
If a user wants to shorten a url, you encode it first and you should get the exact same hash for the exact same url.
Once you get the encoded value, you can use the SET command with the "GET" option. You can also use the expire (EXAT) option to clean up old urls (those that nobody is looking for anymore) using the expiry feature that is built into Redis.
It will do the following for you:
Set key to hold the string value (the key is the short version of the url and the value is the url itself)
If the key already exists, it will overwrite the value and reset (extend) the TTL (time to live) if you set one.
And the "GET" option will return the old value if it exists or null.
With one command you will be able to:
Create a value in Redis
Get the value if it already exists, resetting the TTL (it makes sense to extend it), all of that without any extra code and with one command only.
The flow may look as follows:
A user inputs a url to be shortened:
you encode the url
you store it in Redis using the SET command where the key is the encoded value and the value is the url.
you return the encoded value, which you already know. There is no need to check whether the url has already been shortened, because the SET command will either create a new entry or update the existing one.
A user inputs a shortened url
you encode the url
you store it in Redis using the SET command where the key is the encoded value and the value is the url.
you get the url from the value that was returned by the SET command thanks to the "GET" option.
The only difference between the two cases is in whether you return the shortened url or the normal url
Basically, you need one Redis command for all of that to work.
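Here is a minimal sketch of that idea in Node, assuming node-redis v4, Redis 6.2+ (for the GET option on SET) and Node 16+ (for the base64url digest); hashUrl is a made-up helper, and the redirect path simply uses a plain GET:
const { createClient } = require('redis');
const { createHash } = require('crypto');

// node-redis v4: call `await client.connect()` once at startup.
const client = createClient({ url: process.env.REDIS_URL });

// Deterministic code: the same url always hashes to the same key.
function hashUrl(url) {
  return createHash('sha256').update(url).digest('base64url').slice(0, 7);
}

async function shorten(url) {
  const code = hashUrl(url);
  // One round trip: store (or refresh) code -> url with a 1h TTL and
  // get back the previous value if the key already existed.
  const previous = await client.set(code, url, { EX: 3600, GET: true });
  return { code, alreadyExisted: previous !== null };
}

async function resolve(code) {
  // Redirect path: look the code up and return the original url (or null).
  return client.get(code);
}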
I did not test the encoding/hashing of the url and it may not work with all types of url. You need to check which encoding would cover all cases.
But the idea here is the concept itself. It's similar to how we handle passwords. When you register, the password is hashed. Then, when you log in and provide the same password, we can hash it again and compare hashes. Secure hashing with bcrypt, as an example, can be expensive (it can take a lot of time).
For urls you need to make sure that encoding/hashing always produces the same result for the same url.
Keep in mind the length of the keys, as described here: https://redis.io/topics/data-types-intro#redis-keys
You should use the hash code generated for the URL as the key for your dictionary, since you intend to look up by the shortened URL later.
POST --> hash the URL, encode it as needed for your length restrictions, return the shortened key as the shortened URL, and put <Hash, URL> in your map.
GET --> the user gives the shortened key; look up the shortened key in the dictionary and return the actual URL.
When a user creates a post in my RESTful application, I want to set the response status code to 201.
I followed the documentation and created start/hooks.js as follows:
'use strict'
const { hooks } = require('@adonisjs/ignitor')
hooks.after.httpServer(() => {
const Response = use('Adonis/Src/Response')
Response.macro('sendStatus', (status) => {
this.status(status).send(status)
})
})
Now in my PostController.js, I have this:
async store( {request, response, auth} ) {
const user = await auth.current.user
response.sendStatus(201)
}
But I am getting 500 HTTP code at this endpoint.
What am I doing wrong?
I noticed when I run Response.hasMacro('sendStatus') I get false.
In fact, Adonis already has this out of the box for all response codes...
Just write response.created(.....).
You can also use for example: .badRequest(), .notFound(), etc...
More info on: https://adonisjs.com/docs/4.1/response#_descriptive_methods
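So the store method from the question could become something like the following (a sketch based on the question's controller; what you send back in the body is up to you):
async store({ request, response, auth }) {
  const user = await auth.current.user
  // ... create the post for this user ...
  // responds with a 201 Created status
  return response.created({ ok: true })
}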
I solved this problem yesterday: the macro has to be registered with a regular function instead of an arrow function, so that this refers to the response:
hooks.after.httpServer(() => {
const Response = use('Adonis/Src/Response')
Response.macro('sendStatus', function (status) {
this.status(status).send(status)
})
})
I'm in the process of setting up a GraphQL endpoint with Serverless/Lambda and am receiving an error when trying to connect to the GraphQL Playground that comes with graphql-yoga. When I go to my route that has the playground (/playground), it launches the playground interface, however it just says:
Server cannot be reached
In the top right of the playground. It's worth noting I'm using the makeRemoteExecutableSchema utility to proxy to another GraphQL endpoint (which is my CMS, Prismic). I don't believe this is the issue, as I have successfully connected to it with the playground when testing on a normal Express server.
Here is the code in my handler.js
'use strict';
const { makeRemoteExecutableSchema } = require('graphql-tools');
const { PrismicLink } = require("apollo-link-prismic");
const { introspectSchema } = require('graphql-tools');
const { ACCESS_TOKEN, CMS_URL } = process.env;
const { GraphQLServerLambda } = require('graphql-yoga')
const lambda = async () => {
const link = PrismicLink({
uri: CMS_URL,
accessToken: ACCESS_TOKEN
});
const schema = await introspectSchema(link);
const executableSchema = makeRemoteExecutableSchema({
schema,
link,
});
return new GraphQLServerLambda({
schema: executableSchema,
context: req => ({ ...req })
});
}
exports.playground = async (event, context, callback) => {
context.callbackWaitsForEmptyEventLoop = false;
const graphQl = await lambda();
return graphQl.playgroundHandler(event, context, callback);
};
I have followed this guide to get it running up to this point, and I'm fairly sure I've followed similar steps where they apply to what I'm trying to do, but I can't seem to figure out where I've gone wrong.
Thanks,
Could you take a look at what version of the graphql-yoga package you are using?
I had a similar problem using the Apollo server in combination with Kentico Cloud Headless CMS and I found this issue:
https://github.com/prisma/graphql-yoga/issues/267
I'm encountering a problem calling a SOAP API from a ReactJS app; here is a sample of the code I'm using:
const $ = require("jquery");
const soap = require('soap-everywhere');
import cookie from 'react-cookies';
class Phonebook {
list () {
const url = 'http://url/to/wsdl';
let args = {
};
console.log(soap);
soap.createClient(url, (err, client) => {
client.list(args, (err, result) => {
console.log(result);
});
});
}
}
module.exports = new Phonebook();
And I would like to pass custom HTTP headers which are stored in browser cookies, like:
"IPBX_SESSION": cookie.load("IPBX_SESSION"),
"IPBX_MODE": cookie.load("IPBX_MODE")
But in this browser SOAP module, it seems there is no method to add custom HTTP headers in a way where it could be like:
client.addHTTPHeaders({
"IPBX_SESSION": cookie.load("IPBX_SESSION")
});
This is a custom SOAP API over which we do not have full control; we only have access to the web server options.