I'm really new to this VPS stuff. I've always wanted to host my Discord bots on one, then I saw Google Cloud offering $300 in credit, so I figured why not. After I signed up I created an EC2 instance. The thing is, I've never used a VPS over SSH. I've tried tutorials on YouTube, but none of them work. My question is: how do I connect to my VPS over SSH using FileZilla or PuTTY? I've tried PuTTYgen like in the YouTube videos, but it doesn't get me into my "captionedcan" account, so I can't really upload anything to "captionedcan"'s folders.
First of all, I can see that you are familiar with AWS, but I want to let you know that some of the services have different names. For example, Amazon Elastic Compute Cloud (Amazon EC2) has its equivalent in Google Cloud Platform under the name Compute Engine. You can check the following document, which provides a side-by-side comparison of the various services available on AWS and Google Cloud.
On the other hand, a simple way to connect to a Linux instance (your VPS) is through the Google Cloud Console. You can follow these steps:
In the Cloud Console, go to the VM instances page.
In the list of virtual machine instances, click SSH in the row of the instance that you want to connect to.
I hope you find this information useful.
I'm exploring using Azure blob storage with my bot. I'd like to use it as a persistent store for state, as well as storing transcripts.
I configure the BlobStorage object like this:
const { BlobStorage } = require('botbuilder-azure');

// Container name and connection string are read from app settings.
const storageProvider = new BlobStorage({
    containerName: process.env.BlobContainerName,
    storageAccountOrConnectionString: process.env.BlobConnectionString
});
As sensitive information is stored in these files, especially transcripts, I'm working with my team on securing the storage account and the container within it.
We have created a system-assigned managed identity for the app service hosting the bot, and we have given this identity the 'Storage Blob Data Contributor' role, which, as I understand it, provides read, write, and delete access to the stored content.
Unfortunately, when the bot tries to access the storage, the access attempt fails. I see the following error in the 'OnTurnError' trace:
StorageError: Forbidden
Interestingly, running the bot locally with the same blob storage connection string works, which suggests that the issue is related to the service identity and/or the permissions it has.
Does anyone know what could be causing the error? Are more permissions required on the storage account? Any thoughts on increasing the logging around the error, to potentially see a more detailed message, would also be most welcome.
At this moment in time I do not believe that the framework supports using a system-assigned managed identity to access blob storage.
While looking into this I found a number of Node.js examples that use two specific packages for accessing blob storage with a system-assigned identity. Specifically:
@azure/storage-blob - https://www.npmjs.com/package/@azure/storage-blob
@azure/identity - https://www.npmjs.com/package/@azure/identity
The identity package provides the functionality to obtain a token for a credential, which is then used by the storage-blob package to interact with the storage account.
If I look at the dependency tree for the bot framework I don’t see either of these packages. Instead I see:
azure-storage - https://www.npmjs.com/package/azure-storage
botbuilder-azure - https://www.npmjs.com/package/botbuilder-azure
Taking a deep dive into these two packages, I don't see any code for connecting to an Azure storage account that uses a credential. The only code I can find uses access keys. My current conclusion, therefore, is that the bot framework doesn't support accessing a storage account using a credential.
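For reference, here is a rough sketch of the pattern those two packages enable; the storage account URL and container name below are placeholders I've made up, and this is not something botbuilder-azure does today:

// Sketch only: token-based blob access using the system-assigned managed identity.
// '<your-storage-account>' and 'bot-state' are placeholder names.
const { DefaultAzureCredential } = require('@azure/identity');
const { BlobServiceClient } = require('@azure/storage-blob');

// Inside the App Service, DefaultAzureCredential resolves to the
// system-assigned managed identity.
const credential = new DefaultAzureCredential();
const blobServiceClient = new BlobServiceClient(
    'https://<your-storage-account>.blob.core.windows.net',
    credential
);
const containerClient = blobServiceClient.getContainerClient('bot-state');

async function readBlob(blobName) {
    const blobClient = containerClient.getBlobClient(blobName);
    const response = await blobClient.download();
    // ... consume response.readableStreamBody as needed
}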
While we could explore adding code that uses these packages, such significant development work is outside the scope of our project at present.
If anyone with more knowledge than I can see that this is incorrect please let me know via a comment and I'll explore further.
For the moment we have settled on continuing to use access keys, as it is no less secure than the way the bot accesses other services, such as Cognitive Services like QnA Maker.
I have a Node.js and Vue.js project where a user provides their AWS credentials and a pointer to some online resource (which stores a large amount of data), and some algorithm is then run on this data in the AWS account the user provided.
I am running into two difficulties with this and would like to ask for some help.
Firstly, I want to deploy some simple JavaScript code in the cloud to test that everything works. What is the easiest way to do that? How can the npm packages aws-sdk and aws-lambda help me? Do I necessarily need to give my debit card details to use AWS just for quick testing purposes?
The second thing is: does AWS offer an authorization library/tool, like Facebook does, so the user just needs to enter their username and password into a window and is automatically authorized (probably with OAuth, since that is likely what they use)?
In addition, I would appreciate any general advice on how to approach this problem: how can I run code over a huge amount of data in users' cloud accounts? Maybe another cloud platform is more appropriate? Thank you!
This is a big question, so I'll provide some pointers for you to do further reading on:
to start with, decide if you want your webapp to be server-based (EC2, Node.js, and Express) or serverless (CloudFront, API Gateway, Lambda, and S3)
learn how to use Cognito as a way to get AWS credentials associated with a social provider login (such as Facebook or Google)
to operate in another user's AWS account, you should leverage cross-account IAM roles (they create a role and give you permission to assume it; see the sketch after this list)
on the question of running code against large amounts of data, the repository for this data will typically be S3 or perhaps Redshift in some situations, and the compute environment could be any one of Lambda (short lifetime, serverless), EMR (clustered Hadoop, Spark etc.), EC2 (vanilla VM), Athena (SQL queries over content in S3), or ECS (containers). You haven't given enough information to help decide which might be more suitable.
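To make the cross-account point above concrete, here is a rough Node.js sketch of assuming a role the other account has created for you; the role ARN, session name, and external ID are made-up placeholders:

// Sketch: obtain temporary credentials for the customer's account via STS,
// then use them with a service client. All identifiers below are placeholders.
const AWS = require('aws-sdk');

const sts = new AWS.STS();

async function getCustomerS3Client() {
    const { Credentials } = await sts.assumeRole({
        RoleArn: 'arn:aws:iam::123456789012:role/DataProcessingRole',
        RoleSessionName: 'my-app-session',
        ExternalId: 'agreed-external-id' // recommended for third-party access
    }).promise();

    // Temporary credentials scoped to whatever the customer's role allows.
    return new AWS.S3({
        accessKeyId: Credentials.AccessKeyId,
        secretAccessKey: Credentials.SecretAccessKey,
        sessionToken: Credentials.SessionToken
    });
}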
The simplest option to test things out is likely to be S3 (for storage) and EC2 (use t2.micro instances in Free Tier, deploy your web application just like you would on any other Linux environment).
Yes, to use AWS you need an account and to get an account you need to supply a credit card. There is a substantial Free Tier in your first year, however.
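And as a quick smoke test of the aws-sdk package mentioned in the question, something like the following writes an object to S3 and reads it back (the bucket name and region are placeholders):

// Minimal aws-sdk smoke test against S3; run on an EC2 instance or locally
// with credentials configured. Bucket name and region are placeholders.
const AWS = require('aws-sdk');

const s3 = new AWS.S3({ region: 'us-east-1' });

async function smokeTest() {
    await s3.putObject({
        Bucket: 'my-test-bucket',
        Key: 'hello.txt',
        Body: 'hello from the free tier'
    }).promise();

    const obj = await s3.getObject({ Bucket: 'my-test-bucket', Key: 'hello.txt' }).promise();
    console.log(obj.Body.toString());
}

smokeTest().catch(console.error);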
I am trying to get started with CouchDB. While standing up a cluster of three Docker containers is easy enough using this GitHub repo, I cannot create a JavaScript program to query the database.
All the guides I find say the required libraries live at:
http://127.0.0.1:15984/_utils/script/jquery.couch.js
and
http://127.0.0.1:15984/_utils/script/jquery.js
However, when I try to run this, I keep getting a not found response. What am I doing wrong?
Notes:
I know the port is normally 5984, but with my Docker cluster 15984 is the correct choice.
Going to localhost:15984/_utils, localhost:25984/_utils, and localhost:35984/_utils takes me to the CouchDB interface.
I cannot connect to CouchDB using the containers' IP addresses.
I can also use 172.17.0.1 in place of localhost.
I'm changing scopes in an app for Google Classroom. I removed .readonly from the courses scope and added student listing:
var SCOPES = "https://www.googleapis.com/auth/classroom.courses https://www.googleapis.com/auth/classroom.coursework.students";
I get this error when requesting students even after logging out and attempting to re-authenticate:
Request had insufficient authentication scopes
It seems the token has been cached somewhere.
This GitHub issue, although for Google Sheets, says the token is in the Documents/.credentials/ folder. I don't have this folder, though, on my MacBook Pro (Sierra 10.12.6).
Where can I find that folder and remove the saved scopes so it reauthenticates and accepts my new scopes?
If you change the scopes your application needs, the user will have to re-authorize it, especially if you go from a read-only scope to a read-write scope. This is because you are requesting additional permissions beyond what you had originally asked for. See the list of Google Classroom scopes.
Assuming you are using the Google .NET client library (I'm guessing you are, since that is the GitHub project you linked to), you can find the user credentials in the %appdata% folder on your machine. By deleting that file you can force re-authentication.
Note: there should be a way of forcing re-auth via code, but I can't remember the command right now; I'll have to look it up.
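If the app is actually the Node.js flavour (the JavaScript scope string in the question suggests it might be), one rough sketch of forcing the consent screen again with googleapis is to delete whatever token file the quickstart cached and request the auth URL with prompt: 'consent'; the client ID, secret, and redirect URI below are assumed placeholders:

// Sketch: ask Google to show the consent screen again so the new scopes are
// granted, instead of silently reusing the previously cached token.
// CLIENT_ID / CLIENT_SECRET / REDIRECT_URI are placeholder values.
const { google } = require('googleapis');

const oAuth2Client = new google.auth.OAuth2(CLIENT_ID, CLIENT_SECRET, REDIRECT_URI);

const SCOPES = [
    'https://www.googleapis.com/auth/classroom.courses',
    'https://www.googleapis.com/auth/classroom.coursework.students'
];

const authUrl = oAuth2Client.generateAuthUrl({
    access_type: 'offline',
    prompt: 'consent', // forces re-approval so the token reflects the new scopes
    scope: SCOPES
});
console.log('Authorize this app by visiting:', authUrl);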
I am working on an Azure web application. The code compiles and runs fine on my local machine, but when I upload the package to the Azure platform, the web role won't start and shows a Busy status with the message: "Waiting for role to start... System is initializing. [2012-04-30T09:19:08Z]"
Neither OnStart() nor Run() contains any code, and I am not blocking the return of OnStart.
However, I am using window.setInterval in JavaScript; the function retrieves values from the database every 10 seconds.
What can be done to resolve this?
In most situations where a role (web or worker) is stuck, I have found the following steps very useful:
Always add RDP access to your role, because in many cases when a role is stuck you can still RDP into the instance and investigate the problem yourself. In some cases you will not be able to RDP into the instance because the dependent services are not yet ready to let you in. So if you have RDP enabled, or can RDP into your instance, try to log in.
Once you have RDP'd into your instance, get the local machine's IP address and hit the site directly in a browser. The internal IP address starts with 10.x.x.x, so you can open the browser based on your endpoint configuration, i.e. http://10.x.x.x:80 or https://10.x.x.x:443.
If you cannot get into the instance over RDP, your best bet is to get the diagnostics info to understand where the problem could be. The diagnostics info is collected on your instance and sent to Azure Storage, as configured by you in your WebRole.cs (web role) or WorkerRole.cs (worker role) code. Once diagnostics is working in your role, you can inspect the diagnostics data in your configured Azure blob/table storage to understand the issue.
If you don't have RDP access and don't have Azure Diagnostics configured (or could not get any diagnostics data to understand the problem), your best bet is to contact the Azure Support team (24x7 via web or phone) at the link below; they have the ability to access your instance (with your permission) and provide a root cause.
https://support.microsoft.com/oas/default.aspx?gprid=14928&st=1&wfxredirect=1&sd=gn
When contacting Azure Support, please provide your Subscription ID, Deployment ID, your Azure Live Account ID and a small description of your problem.
Two things to check:
Make sure your diagnostics connection string is pointing to a real account, and not dev storage
Check the session state provider in web.config. By default, it points to SQL Express LocalDB, which won't exist in Windows Azure.
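For reference, the two settings being described usually look something like the following; the account name and key are placeholders, and the session-state change shown is just one option, not the only fix:

<!-- ServiceConfiguration.*.cscfg: point diagnostics at a real storage
     account rather than UseDevelopmentStorage=true -->
<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
         value="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=..." />

<!-- web.config: avoid the default SQL Express LocalDB session-state provider,
     e.g. by switching to in-process session state while you diagnose -->
<sessionState mode="InProc" />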
The reason for this that I run into the most is a missing or invalid assembly, but there are several great posts that can help diagnose this, so I won't dive into the matter too deeply myself.
http://social.msdn.microsoft.com/Forums/en-US/windowsazuretroubleshooting/thread/6c739db9-4f6a-4203-bbec-eba70733ec16
http://blogs.msdn.com/b/tom/archive/2011/02/25/help-with-windows-azure-role-stuck-in-initializing-busy-stopped-state.aspx
http://blogs.staykov.net/2009/12/windows-azure-role-stuck-in.html