Alexa Smarthome-Skill without AWS - javascript

I'm a little confused at the moment.
Is it possible to build an Alexa Smart Home Skill without hosting it on AWS?
For my last custom skill I used Alexa-App, but as far as I know this doesn't support Amazon's Smart Home API. I also haven't found any other library that does support the Smart Home API.
Maybe you can help me find a lib so I can host my Smart Home Skill on my own server.
Preferred languages: JavaScript and Ruby

Is it possible to build an Alexa Smart Home Skill without hosting it on AWS?
No, it is not -- not entirely, anyway.
Alexa supports hosting custom skills entirely externally. They call this "hosting a skill as a web service" -- that is, a web-accessible endpoint that Alexa can send requests to. However:
Web services can only be used for custom skills.
https://developer.amazon.com/docs/custom-skills/host-a-custom-skill-as-a-web-service.html
Smart Home Skills must run in Lambda. Of course, the Lambda function for a Smart Home Skill can make its own external requests to the "device cloud" -- whatever that means to you, and which may involve servers of your own -- but those requests (HTTPS, or any other custom protocol you might use) are made from inside the Lambda function that Alexa invokes.
Your skill code, which is hosted as a Lambda function, receives and parses the directive, validating the authentication information. Your skill communicates with your systems, or device cloud, using communication channels you've defined to turn on the customer's kitchen light. (emphasis added)
https://developer.amazon.com/docs/smarthome/understand-the-smart-home-skill-api.html
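To make that concrete, here is a minimal sketch (Node.js) of a Smart Home handler running in Lambda that forwards an Alexa.PowerController directive to a hypothetical device cloud over HTTPS. The device-cloud host, path and callDeviceCloud helper are assumptions of mine; only the directive and response shapes come from the Smart Home API (payload version 3).

```js
// Minimal Smart Home skill handler running in AWS Lambda (sketch).
// The device-cloud URL and API below are hypothetical -- substitute your own backend.
const https = require('https');

exports.handler = async (event) => {
  const directive = event.directive;

  if (directive.header.namespace === 'Alexa.PowerController') {
    const turnOn = directive.header.name === 'TurnOn';
    const endpointId = directive.endpoint.endpointId;
    const token = directive.endpoint.scope.token; // OAuth token from account linking

    // Call out to your own "device cloud" from inside the Lambda function.
    await callDeviceCloud(`/devices/${endpointId}/power`, { on: turnOn }, token);

    // Alexa.Response event; a context with reported properties is normally included too.
    return {
      event: {
        header: {
          namespace: 'Alexa',
          name: 'Response',
          messageId: `${directive.header.messageId}-R`,
          correlationToken: directive.header.correlationToken,
          payloadVersion: '3',
        },
        endpoint: directive.endpoint,
        payload: {},
      },
    };
  }
  // ...handle other namespaces here (Alexa.Discovery, Alexa.ReportState, etc.)
};

// Hypothetical helper: HTTPS POST to your own server.
function callDeviceCloud(path, body, token) {
  const payload = JSON.stringify(body);
  return new Promise((resolve, reject) => {
    const req = https.request(
      {
        hostname: 'devices.example.com', // placeholder for your device cloud
        path,
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Content-Length': Buffer.byteLength(payload),
          Authorization: `Bearer ${token}`,
        },
      },
      (res) => {
        res.on('data', () => {});
        res.on('end', resolve);
      }
    );
    req.on('error', reject);
    req.end(payload);
  });
}
```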

Related

How to secure the source code of react native application?

I am building an application that has an auth system and a lot of POST requests.
I want to know how to make my backend endpoints accept only requests that are coming from my application, not from anything else like Postman.
For example, if a user submitted a registration form, a post request is sent to my backend with user info, how can I make sure this post request is coming from my application?
What I was thinking of is saving a secret on the client's side that is to be sent with each request to the backend, so that I can make sure the request is coming from my app.
I think SSL pinning is meant for this.
I know that anyone can access my app source code if they extract the APK file.
I want to make sure that no one can alter or steal my source code.
I read that I can make my code unreadable by obfuscating it (I still need to figure out how I am going to do that in my EAS build), is this enough?
And I have to use JailMonkey to detect if the device is rooted.
I am using Expo secure store to save my sensitive info on the client side.
Is this approach good enough, is there anything I am missing?
I have zero information about security, this is just what I learned through searching.
Let me know if you have better suggestions.
Thank you in advance.
The Difference Between WHO and WHAT is Accessing the API Server
I want to know how to make my backend endpoints accept only requests that are coming from my application, not from anything else like Postman.
First, you need to understand the difference between WHO and WHAT is accessing the API Server to be in a better position to look for a solution to your problem.
I wrote a series of articles around API and Mobile security, and in the article Why Does Your Mobile App Need An Api Key? you can read in detail the difference between who and what is accessing your API server, but I will extract here the main takes from it:
The what is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
The who is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
So think about the who as the user your API server will be able to Authenticate and Authorize access to the data, and think about the what as the software making that request on behalf of the user.
When you grasp this idea and it's ingrained in your mindset, then you will look at mobile API security with another perspective, and be able to see attack surfaces that you never thought existed before.
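To make the distinction concrete, here is a small sketch of a typical request; the endpoint and variable names are made up. The Authorization header tells the API server who is making the request, while nothing in the request reliably proves what is making it.

```js
// Hypothetical request: endpoint, token and key names are illustrative only.
const userAccessToken = '<OAuth2/OIDC access token obtained at login>';
const APP_API_KEY = '<key shipped inside the app binary>';

fetch('https://api.example.com/v1/profile', {
  method: 'GET',
  headers: {
    // WHO: identifies the authenticated user of the app.
    Authorization: `Bearer ${userAccessToken}`,
    // WHAT: merely *claims* the request comes from your app; anyone who extracts
    // this key from the APK can send the same header from Postman, a bot or a script.
    'x-api-key': APP_API_KEY,
  },
});
```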
Certificate Pinning and MitM Attacks
What I was thinking of is saving a secret on the client's side that is to be sent with each request to the backend, so that I can make sure the request is coming from my app. I think SSL pinning is meant for this.
Certificate pinning on the mobile app side serves to guarantee that the app is talking only with your API server and not anything else, like when a MitM attack occurs and the app has its requests intercepted, and potentially modified and/or replayed, or simply saved to later extract the secrets from it.
Pinning doesn't guarantee to your API server that the request is indeed coming from what it expects, a genuine and unmodified version of your mobile app, unless you implement mutual pinning, which isn't encouraged, because you would need to ship the private key for the API server certificate in the mobile app. Even if you do so, all an attacker needs to do is extract the private key, and they will be able to communicate with your API server as if they were your genuine mobile app.
I don't have an article on implementing pinning in a react-native mobile app, but you can take a look at the one I wrote for Android to better understand the whole process. Read my article Securing HTTPS with Certificate Pinning on Android on how you can implement certificate pinning, and by the end you will understand how it can prevent a MitM attack.
In this article you have learned that certificate pinning is the act of associating a domain name with their expected X.509 certificate, and that this is necessary to protect trust based assumptions in the certificate chain. Mistakenly issued or compromised certificates are a threat, and it is also necessary to protect the mobile app against their use in hostile environments like public wifis, or against DNS Hijacking attacks.
You also learned that certificate pinning should be used anytime you deal with Personal Identifiable Information or any other sensitive data, otherwise the communication channel between the mobile app and the API server can be inspected, modified or redirected by an attacker.
Finally you learned how to prevent MitM attacks with the implementation of certificate pinning in an Android app that makes use of a network security config file for modern Android devices, and later by using TrustKit package which supports certificate pinning for both modern and old devices.
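React Native's fetch doesn't expose pinning directly, so as a purely conceptual sketch here is what pinning to a certificate fingerprint looks like in plain Node.js; the host and fingerprint are placeholders. On the device itself you would use the Android network security config, TrustKit, or a community react-native pinning module as described in the article.

```js
// Conceptual pinning sketch (Node.js, not React Native). Host and fingerprint are placeholders.
const https = require('https');
const tls = require('tls');

// SHA-256 fingerprint of the certificate (you could also pin the public key) you expect.
const EXPECTED_FINGERPRINT = 'AA:BB:CC:...:FF';

const options = {
  hostname: 'api.example.com',
  port: 443,
  path: '/v1/health',
  method: 'GET',
  // Runs during the TLS handshake; returning an Error aborts the connection.
  checkServerIdentity: (host, cert) => {
    const err = tls.checkServerIdentity(host, cert); // keep the default hostname check
    if (err) return err;
    if (cert.fingerprint256 !== EXPECTED_FINGERPRINT) {
      return new Error('Certificate pinning failure: unexpected certificate');
    }
    return undefined; // pin matched, proceed
  },
};

https
  .request(options, (res) => {
    console.log('status:', res.statusCode);
    res.resume();
  })
  .on('error', (e) => console.error(e.message))
  .end();
```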
Bypassing Certificate Pinning
I think SSL pinning is meant for this.
The good news is that you already learned how good pinning is at preventing MitM attacks; the bad news is that it can be bypassed, and yes, I also wrote an article on how to do it on Android (sorry for not being specific to react-native). If you want to learn the mechanics of it, then read my article How to Bypass Certificate Pinning with Frida on an Android App:
Today I will show how to use the Frida instrumentation framework to hook into the mobile app at runtime and instrument the code in order to perform a successful MitM attack even when the mobile app has implemented certificate pinning.
Bypassing certificate pinning is not too hard, just a little laborious, and allows an attacker to understand in detail how a mobile app communicates with its API, and then use that same knowledge to automate attacks or build other services around it.
Code Obfuscation and Modifying Code
I know that anyone can access my app source code if they extract the APK file. I want to make sure that no one can alter or steal my source code.
Sorry, but once you release it to the public it is up for grabs for everyone, and even if heavily obfuscated it's still possible to modify it statically or at runtime.
I read that I can make my code unreadable by obfuscating it (I still need to figure out how I am going to do that in my EAS build), is this enough?
No. You can use the best obfuscation tool, but an attacker well versed in deobfuscation techniques will still be able to understand your code and modify it statically or at runtime. Several open-source tools exist to make this easy, and if you read the article on bypassing certificate pinning then you already saw an example of doing it at runtime with Frida:
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
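For a flavour of what that looks like in practice, here is a Frida script sketch; the class and method names are hypothetical (in an obfuscated build the attacker would first locate the renamed equivalents), but the hooking API is Frida's own.

```js
// Frida script (JavaScript) -- class and method names are made up for illustration.
Java.perform(() => {
  const SecurityChecks = Java.use('com.example.app.SecurityChecks'); // hypothetical class
  SecurityChecks.isIntegrityOk.implementation = function () {
    console.log('isIntegrityOk() called -- forcing it to return true');
    return true; // the app now behaves as if it had not been tampered with
  };
});
```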
RASP - Runtime Application Self-Protection
And I have to use JailMonkey to detect if the device is rooted.
Using Frida, the check can be modified to always return that the device is not rooted. Also, JailMonkey may not detect all the ways used to hide that a device is rooted, and this is a moving target, because hackers and developers are in a constant cat and mouse game.
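For reference, a typical JailMonkey check on the React Native side looks roughly like the sketch below; the point is that it runs on a device the attacker controls, so a runtime hook can simply make it report false.

```js
import JailMonkey from 'jail-monkey';

// Client-side root/jailbreak check -- a useful extra layer, but not a guarantee,
// because a runtime hook (e.g. Frida) can force this call to always return false.
if (JailMonkey.isJailBroken()) {
  // Decide your policy here: warn the user, limit functionality, or refuse to run.
  console.warn('Device appears to be rooted or jailbroken');
}
```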
Sensitive Info Security
I am using Expo secure store to save my sensitive info on the client side.
Even when a secret is securely stored, it needs to be used at some point, and an attacker can hook Frida on that point and extract the secret, or grab it with a MitM attack.
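For completeness, the Expo SecureStore usage itself is straightforward (sketch below, key name is illustrative); the storage is not the weak point, the moment the value is read back and attached to a request is.

```js
import * as SecureStore from 'expo-secure-store';

async function saveAndUseToken(token) {
  // Store the value in the platform keychain/keystore.
  await SecureStore.setItemAsync('api_token', token);

  // ...later, when making a request:
  const apiToken = await SecureStore.getItemAsync('api_token');
  // As soon as the value is in memory and sent over the wire, a runtime hook or a
  // MitM proxy can capture it -- secure storage alone doesn't prevent that.
  return apiToken;
}
```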
Possible Solutions
Is this approach good enough, is there anything I am missing?
From all I wrote it may look like, no matter what you do, you are doomed to fail at properly securing your sensitive info and at guaranteeing that your API server knows that what is making the request is the genuine mobile app it expects. But security is all about applying as many layers of defence as possible, as done in medieval castles, prisons, etc., because this increases the level of effort, time and expertise required to succeed in an attack.
You now need to find a solution that allows you to detect MitM attacks, tampered and modified APK binaries, and Frida present at runtime, and that can deliver a runtime secret to mobile apps that pass a mobile app attestation which guarantees with a very high degree of confidence that such threats are not present. Unfortunately I don't know of any open-source project that can deliver all these features, but a commercial solution exists (I work there), and if you want to learn more about it you can read the article:
Hands-on Mobile App and API Security - Runtime Secrets Protection
In a previous article we saw how to protect API keys by using Mobile App Attestation and delegating the API requests to a Proxy. This blog post will cover the situation where you can’t delegate the API requests to the Proxy, but where you want to remove the API keys (secrets) from being hard-coded in your mobile app to mitigate against the use of static binary analysis and/or runtime instrumentation techniques to extract those secrets.
We will show how to have your secrets dynamically delivered to genuine and unmodified versions of your mobile app, that are not under attack, by using Mobile App Attestation to secure the just-in-time runtime secret delivery. We will demonstrate how to achieve this with the same Astropiks mobile app from the previous article. The app uses NASA's picture of the day API to retrieve images and descriptions, which requires a registered API key that will be initially hard-coded into the app.
Do You Want To Go The Extra Mile?
In any response to a security question I always like to reference the excellent work from the OWASP foundation.
For APIs
OWASP API Security Top 10
The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.
For Mobile Apps
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
OWASP - Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
Short answer: you can't.
I want to know how to make my backend endpoints accept only requests
that are coming from my application, not from anything else like
Postman
The only thing you can do here is CORS (see Cross-Site Request Forgery Prevention) to stop other websites from calling your API.
And you can't make it so that only your application can communicate with the server.
You can hard-code parameters in the request that the application sends to the server, but hackers can listen to the requests made from devices.
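To see why that only raises the bar a little, here is a hedged Express sketch of the server side of such a hard-coded secret (the header name and route are made up): once the value is pulled out of the APK or observed on the wire, the check passes for anyone.

```js
const express = require('express');
const app = express();

// Shared secret that would also be hard-coded in the mobile app (illustrative only).
const APP_SHARED_SECRET = 'replace-me';

app.use((req, res, next) => {
  // Anyone who extracts this value from the app binary or sniffs the traffic can
  // send the same header from Postman, a bot or a script -- the check still passes.
  if (req.get('x-app-secret') !== APP_SHARED_SECRET) {
    return res.status(401).json({ error: 'unauthorized' });
  }
  next();
});

app.post('/register', express.json(), (req, res) => {
  res.json({ ok: true }); // placeholder handler
});

app.listen(3000);
```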
I know that anyone can access my app source code if they extract the
APK file. I want to make sure that no one can alter or steal my source
code.
Short answer: you also can't.
You can use ProGuard to obfuscate native Android code, and iOS ships a compiled binary on release, but neither of those applies to your JS.
So basically anyone can read your JS bundle in a plain text editor.
Maybe in the future Facebook will provide something for Hermes.

Deploying a JS/Python/PostgreSQL app on AWS

I have, as the title says, a JS/Python/PostgreSQL app that I would like to deploy using AWS. I feel as though I could figure out deployment of the 3 pieces as separate, discrete entities, but what I haven't been able to figure out is how the 3 pieces will communicate once they are live.
The site will be a lightly trafficked one where only I can add resources to the db. Additionally, what AWS services would you recommend for hosting each part?
Thanks kindly. And please let me know if I can provide any more helpful info.
One service that you can look at is AWS Lightsail, which can spin up your application and connect it to your database, and is designed for hosting such applications.
Another way would be to have your Python app send requests to an AWS Lambda through API Gateway; the Lambda executes your SQL command and returns the data. Those are two options; one more service you can explore is AWS Amplify, which can do the same as well.
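If you go the API Gateway + Lambda route, the handler can be as small as the sketch below (Node.js with the pg client; the table name and environment variables are assumptions), and your front end then simply calls the API Gateway URL. In practice you would reuse the connection across invocations or put RDS Proxy in front, but that's beyond this sketch.

```js
// Node.js Lambda behind API Gateway that queries PostgreSQL (e.g. on RDS).
// Connection details come from environment variables configured on the function.
const { Client } = require('pg');

exports.handler = async () => {
  const client = new Client({
    host: process.env.DB_HOST,
    database: process.env.DB_NAME,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    port: 5432,
  });

  await client.connect();
  try {
    // "resources" is a placeholder table name.
    const result = await client.query('SELECT id, title FROM resources ORDER BY id');
    return {
      statusCode: 200,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(result.rows),
    };
  } finally {
    await client.end();
  }
};
```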

How to communicate securely between shell app and micro application(frontend) via pubsub

I have a shell application, which is the container application that performs all the API communication. I also have multiple micro applications which just broadcast the API request signal to the shell application.
Now, keeping security in mind, how can the shell application ensure that an API request signal is coming from a trusted micro app that I own?
To be very precise, my ask is: is there a way to let the shell application know that the signal is coming from a micro app that it owns, and not from an untrusted source (like hacking, XSS)?
As per the micro-frontend architecture, each micro frontend should make calls to its own API (microservice). However, your shell app can provide some common/global library which can help the micro frontends make the AJAX call. But the onus of making the call must remain with the individual micro frontend.
From your question it is unclear if your apps are running in iframes, or are being loaded directly into your page.
In the case of iframes you're using postMessage, and you can check the origin of a received message via event.origin. Compare this with a list of allowed domains.
If your micro apps are directly on your page then you just control what is allowed to load into them.
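A minimal sketch of that origin check in the shell (the origins and message shape are placeholders):

```js
// Shell (parent) window: only act on messages from micro apps you host.
const ALLOWED_ORIGINS = [
  'https://micro-app-a.example.com',
  'https://micro-app-b.example.com',
];

window.addEventListener('message', (event) => {
  if (!ALLOWED_ORIGINS.includes(event.origin)) {
    return; // ignore messages from unknown origins
  }
  console.log('API request signal from trusted micro app:', event.data);
});

// Micro app (inside an iframe): always name the target origin explicitly.
window.parent.postMessage(
  { type: 'API_REQUEST', path: '/orders' },
  'https://shell.example.com'
);
```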
So, in most micro frontends, each micro app does its own API calls to the corresponding microservice on the backend, and the shell app is ignorant of it. The most the shell app would do relative to this is pass some app config to all micro apps, containing things like the hostnames of the various backends and an auth token if all the backends are using the same auth.
But to ensure the shell app doesn't have, say, an advertisement with malicious code trying to pose as another microapp, well..
how are the microapps talking to the shell? Is there a common custom event? The name of the customEvent would have to be known to the intruder, but that's only security-by-obscurity, which isn't real.
other methods like postMessage are between window objects, which I don't think help your case.
You might be able to re-use the authToken that the shell and microapps both know, since it was communicated at startup. But if you have microapps that come and go that won't work either.
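If the signalling happens through a DOM CustomEvent, about the best you can do in-page is have the shell hand each micro app a token when it mounts it and check that token on every signal. A sketch of that idea is below (names are illustrative); note it is still only as strong as keeping the token away from other code running in the same page.

```js
// Shell side (sketch). The token is generated once and passed to each micro app
// the shell mounts, e.g. via the micro app's mount/bootstrap props.
const shellToken = crypto.randomUUID();

document.addEventListener('api-request', (event) => {
  const { token, request } = event.detail || {};
  if (token !== shellToken) {
    return; // drop signals that don't carry the token handed out by the shell
  }
  console.log('trusted micro app asked for', request);
});

// Micro app side: include the token the shell gave it at mount time.
// (Shown with the same variable here only because this is a single-file sketch.)
document.dispatchEvent(
  new CustomEvent('api-request', {
    detail: { token: shellToken, request: { method: 'GET', path: '/orders' } },
  })
);
```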

AWS serverless and javascript - is it secure?

So I am digging into the 'serverless' architecture, and after going over a tutorial with Angular as the front end and Node.js Lambdas exposed via an API, I am not sure if it is secure at all. The Angular website that I built makes calls to an AWS API which is linked to a Lambda function. Because it is Angular and it is visible in the client's browser, important secret keys such as AWSCognito.config.update({accessKeyId: 'something', secretAccessKey: 'something'}); can be seen.
When creating those keys, AWS lets you see them once and then hides the secretAccessKey, so I guess it is not quite reasonable to leave it in a JS file? I am still learning the fundamentals of AWS, so let me know what you think and what the best solution is, thanks!
Because it is javascript and it is all visible to the client
That isn't true.
JavaScript is a programming language.
JavaScript you send to the browser to run on the browser is visible to the owner of the browser. You seem to be conflating this with "All JavaScript".
JavaScript you send to AWS to run on AWS is not visible to the owner of the browser.
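A sketch of what that split looks like in practice (URLs are placeholders): the browser code carries no AWS credentials at all, and the Lambda relies on its execution role instead of an accessKeyId/secretAccessKey pair in the source.

```js
// Browser side (Angular or plain JS): call your API Gateway endpoint.
// No accessKeyId or secretAccessKey appears anywhere in this code.
async function loadItems() {
  const res = await fetch('https://abc123.execute-api.eu-west-1.amazonaws.com/prod/items');
  return res.json();
}

// Lambda side (runs on AWS, never shipped to the browser). The AWS SDK inside the
// function picks up temporary credentials from the execution role, so no secret
// keys need to live in the source code at all.
exports.handler = async () => {
  const items = [{ id: 1, name: 'example' }]; // e.g. fetched from DynamoDB/RDS here
  return { statusCode: 200, body: JSON.stringify(items) };
};
```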

Is a Spring Boot microservice with a non-Java front-end client possible?

I've implemented the shell of a microservices-based REST API application. I have simply followed the guides in Pivotal Spring's own documentation, using Eureka and Ribbon for load balancing. Everything works. I have a discovery server with a handful of independent services which can register with the discovery server.
Now, my problem is that I might prefer not to write my client-side app in Java - maybe Angular or node.js, etc. However, the load balancing and connecting to the discovery server is all done in Java in the examples I've followed.
Is it possible to use JavaScript to do the same things that the Eureka client does with the Spring Boot microservices so that I don't need to be constrained in my choices of browser client technology? Does anybody have any advice for how this should be approached? I had difficulty finding any articles that cover this, to be honest.
Yes. You can definitely choose the technology of your choice for developing the front-end application. From your front-end application, you make calls to the API endpoints that you expose via your Spring Boot application.
You might want to expose your services via a single API gateway that will help you route requests to the designated microservices using your discovery server.
Actually, you should not be doing load balancing/service discovery etc. in the front end. So the question of whether it is possible in JavaScript, or with which libraries, is irrelevant.
Typically you'll have an API gateway or a (load balancing) proxy which works with your service registry and routes requests accordingly. In the current project we use Consul for service registry and Nginx + consul-template as proxy. We plan to migrate to some API gateway.
With this setup your front-end will connect to just one central endpoint which would do load balancing/routing to individual service instances behind the scenes. Thus your front-end will not need to implement anything like Eureka/Ribbon etc.
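In other words, from a JavaScript front end this collapses to ordinary HTTP calls against one base URL (placeholder below); the gateway or proxy consults the service registry and picks a healthy instance for you.

```js
// The front end only knows the gateway's address, not individual service instances.
const API_BASE = 'https://api.example.com'; // placeholder for your gateway/proxy

async function getOrders(customerId) {
  // The gateway routes /orders/... to an instance of the orders service that it
  // discovered via Eureka/Consul behind the scenes.
  const res = await fetch(`${API_BASE}/orders/${customerId}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
```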
