I am trying to upload files to S3 from an app written in JavaScript. Because it is a mobile app, I am limited in the libraries I can use. I had the upload working with FormData until it was decided to use SAML and delegate authentication. Now the temporary credentials are being obtained fine, but AWS::S3 refuses to recognize them and throws the error: The AWS Access Key Id you provided does not exist in our records.
My code is below:
console.log("AWS temp credentials: " + JSON.stringify(delegated_jwt.Credentials));
var aws_creds = delegated_jwt.Credentials;
var secret = aws_creds.SecretAccessKey;
var policyBase64 = base64.encode(JSON.stringify(POLICY_JSON));
console.log ("policy base64: " + policyBase64 );
var signature = CryptoJS.enc.Base64.stringify(CryptoJS.HmacSHA1(policyBase64, secret));
console.log("signature: " + signature);
var key = "user_uploads" + "/" + delegated_jwt.Subject + '/' + (new Date).getTime() + ".jpg";
console.log("AWS::S3 key: " + key);
var params = new FormData();
params.append('key', key);
params.append('acl', 'private');
params.append('Content-Type', "image/jpeg");
params.append('AWSAccessKeyId', aws_creds.AccessKeyId);
params.append('policy', policyBase64);
params.append('signature', signature);
params.append('file', captured.uri);
var xhr = new XMLHttpRequest();
xhr.open('POST', 'https://mybucket.s3.amazonaws.com/', true);
xhr.onload = () => {
...
When I used permanent access and secret keys, it worked fine. If there is something wrong with my AWS settings, how do I debug it? What else should I check?
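One thing worth checking (an assumption on my part, since I cannot see your policy or the STS response): temporary credentials from STS are only accepted by S3 when the session token is sent along with the request. For a browser-based POST upload that means an extra form field, roughly like this:

// Sketch only: with temporary SAML/STS credentials, S3 also needs the
// session token; without it the AccessKeyId is unknown to S3, which is
// exactly the error above. Field name is from the S3 POST documentation.
params.append('x-amz-security-token', aws_creds.SessionToken);

If S3 then complains about the policy instead, the same field most likely also has to be listed in the POLICY_JSON conditions.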
I have a finished project, and when I upload it to GitHub Pages it doesn't work: it won't load any scripts, externally linked fonts, or API data. The API only supports HTTP, and GitHub Pages is served over HTTPS. Is there any way around this without changing APIs?
The API is OpenWeatherMap.
$(document).ready(function(){
    var temp = $('.temperature');
    var APIKEY = ''; // API key redacted
    var loc = $('#search').val();

    function updateByCity(loc){
        var url = "http://api.openweathermap.org/data/2.5/weather?q=" + loc + "&APPID=" + APIKEY;
        sendRequest(url);
    }

    function k2f(k){
        return Math.round(k * (9 / 5) - 459.67);
    }

    function ascii(a){
        return String.fromCharCode(a);
    }

    $('.enter').click(function(event){
        event.preventDefault();
        var loc = $('#search').val();
        var url = "http://api.openweathermap.org/data/2.5/weather?q=" + loc + "&APPID=" + APIKEY;
        console.log(url);
        var xmlhttp = new XMLHttpRequest();
        xmlhttp.onreadystatechange = function(){
            // only parse the body once the full response has arrived
            if (xmlhttp.readyState !== 4 || xmlhttp.status !== 200) return;
            var data = JSON.parse(xmlhttp.responseText);
            var datatext = data.id;
            var name = data.name;
            var locname = name;
            var temptext = k2f(data.main.temp) + ascii(176) + "F";
            console.log(temp);
            console.log(url);
            $('.temperature').text(temptext);
            $('.city').text(name);
        };
        xmlhttp.open("GET", url, true);
        xmlhttp.send();
    });
});
No, there won't be an easy way around this restriction as it is important for the security and integrity of your website. If you access resources from an HTTPS encrypted page via an unencrypted connection, the user will always see security warnings.
You could set up a proxy that accesses the API via HTTP and passes the calls on to the browser via HTTPS. Note that this may cause considerable overhead in terms of development effort.
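As a rough sketch of that idea (not production code; the port and the pass-through of the query string are arbitrary choices, and you would still need to host this somewhere that serves it to the browser over HTTPS), a tiny Node relay could look like this:

// Minimal relay: the browser calls this over HTTPS, the relay calls the
// HTTP-only OpenWeatherMap endpoint and streams the response back.
const http = require('http');

http.createServer(function (req, res) {
    // forward ?q=...&APPID=... unchanged to the HTTP-only API
    const query = req.url.split('?')[1] || '';
    http.get('http://api.openweathermap.org/data/2.5/weather?' + query, function (apiRes) {
        res.writeHead(apiRes.statusCode, { 'Content-Type': 'application/json' });
        apiRes.pipe(res);
    }).on('error', function () {
        res.writeHead(502);
        res.end('{"error":"upstream request failed"}');
    });
}).listen(3000);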
The simplest solution would probably be to switch to a weather data provider that offers HTTPS by default.
Scenario:
I would like to invoke an already defined workflow or a custom action from a web page located outside the CRM Dynamics context (let's say MS CRM 2011, 2013, 2015, 2016 and 365).
My solution:
My idea is to define a kind of controller page inside the CRM context, accessible from the web, and to execute the REST call from that page (through JavaScript).
This page would read the input parameters and execute the right REST call.
Does it make sense? Could you suggest a better implementation?
Thanks in advance!
If you have the resources, you can set up a service using the following method and then call it via AJAX.
// These directives belong at the top of the file; the code needs references
// to Microsoft.Xrm.Sdk and Microsoft.Crm.Sdk.Messages.
using System;
using System.Configuration;
using System.Net;
using System.ServiceModel.Description;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Client;

private static void ExecuteWorkflow(Guid workflowId, Guid entityId)
{
    try
    {
        // connection string pointing at the CRM organization service
        string url = ConfigurationManager.ConnectionStrings["crm"].ConnectionString;
        ClientCredentials cc = new ClientCredentials();
        cc.Windows.ClientCredential = CredentialCache.DefaultNetworkCredentials;
        OrganizationServiceProxy _service = new OrganizationServiceProxy(new Uri(url), null, cc, null);

        // ask CRM to run the published workflow against the given record
        ExecuteWorkflowRequest request = new ExecuteWorkflowRequest()
        {
            WorkflowId = workflowId,
            EntityId = entityId
        };
        ExecuteWorkflowResponse r = (ExecuteWorkflowResponse)_service.Execute(request);
        _service.Dispose();
    }
    catch (Exception ex)
    {
        // Handle exception
    }
}
If you're unable to have the service on the same domain as the CRM server, you should be able to impersonate.
cc.Windows.ClientCredential.Domain = "DOMAIN";
cc.Windows.ClientCredential.Password = "PASSWORD";
cc.Windows.ClientCredential.UserName = "USERNAME";
You can find more details here.
https://msdn.microsoft.com/en-us/library/microsoft.crm.sdk.messages.executeworkflowrequest.aspx
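For the "call it via AJAX" part, here is a minimal sketch of what the browser side might look like, assuming you expose the method above behind a hypothetical endpoint such as /api/executeworkflow (the route name and JSON shape are made up for illustration):

var xhr = new XMLHttpRequest();
xhr.open("POST", "/api/executeworkflow", true); // hypothetical route wrapping ExecuteWorkflow above
xhr.setRequestHeader("Content-Type", "application/json");
xhr.onload = function () {
    if (xhr.status === 200) {
        console.log("Workflow triggered");
    } else {
        console.error("Workflow call failed: " + xhr.status);
    }
};
xhr.send(JSON.stringify({ workflowId: "<workflow GUID>", entityId: "<record GUID>" }));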
You can invoke a workflow from JavaScript like this.
You can query the workflowId by its name and its type (definition).
var entityId = "";   // the GUID of the entity record
var workflowId = ""; // the GUID of the workflow
var url = "";        // your organization root
var orgServicePath = "/XRMServices/2011/Organization.svc/web";
url = url + orgServicePath;
var request;
request = "<s:Envelope xmlns:s=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
"<s:Body>" +
"<Execute xmlns=\"http://schemas.microsoft.com/xrm/2011/Contracts/Services\" xmlns:i=\"http://www.w3.org/2001/XMLSchema-instance\">" +
"<request i:type=\"b:ExecuteWorkflowRequest\" xmlns:a=\"http://schemas.microsoft.com/xrm/2011/Contracts\" xmlns:b=\"http://schemas.microsoft.com/crm/2011/Contracts\">" +
"<a:Parameters xmlns:c=\"http://schemas.datacontract.org/2004/07/System.Collections.Generic\">" +
"<a:KeyValuePairOfstringanyType>" +
"<c:key>EntityId</c:key>" +
"<c:value i:type=\"d:guid\" xmlns:d=\"http://schemas.microsoft.com/2003/10/Serialization/\">" + entityId + "</c:value>" +
"</a:KeyValuePairOfstringanyType>" +
"<a:KeyValuePairOfstringanyType>" +
"<c:key>WorkflowId</c:key>" +
"<c:value i:type=\"d:guid\" xmlns:d=\"http://schemas.microsoft.com/2003/10/Serialization/\">" + workflowId + "</c:value>" +
"</a:KeyValuePairOfstringanyType>" +
"</a:Parameters>" +
"<a:RequestId i:nil=\"true\" />" +
"<a:RequestName>ExecuteWorkflow</a:RequestName>" +
"</request>" +
"</Execute>" +
"</s:Body>" +
"</s:Envelope>";
var req = new XMLHttpRequest();
req.open("POST", url, false);
// Responses will return XML. It isn't possible to return JSON.
req.setRequestHeader("Accept", "application/xml, text/xml, */*");
req.setRequestHeader("Content-Type", "text/xml; charset=utf-8");
req.setRequestHeader("SOAPAction", "http://schemas.microsoft.com/xrm/2011/Contracts/Services/IOrganizationService/Execute");
req.send(request);
If request.status is 200, the request was successful. This was tested in a CRM 2011 environment.
I recommend creating a WCF REST service or a Web API that references IOrganizationService and uses the CRM service operations from there. It is better to call an intermediate service than to call the IOrganizationService directly.
I am using ADAL JS to authenticate users against Azure AD. As I am new to ADAL JS, I started with the following articles, which I found very informative:
Introducing ADAL JS v1
ADAL JavaScript and AngularJS – Deep Dive
After reading the articles, I had the impression that ADAL JS intercepts service calls and, if the service URL is registered as one of the endpoints in the AuthenticationContext configuration, attaches the JWT token as the Authorization Bearer header.
However, I found that this is not happening in my case. After some digging, it seemed to me that it is only possible if the adal-angular counterpart is also used, which I am not using, simply because my web application is not based on Angular.
Please let me know whether my understanding is correct. If I need to add the bearer header explicitly, I can do that, but I am more concerned about whether I am missing some out-of-the-box facility.
Additional details: my present configuration looks like the following:
private endpoints: any = {
    "https://myhost/api": "here_goes_client_id"
};
...
private config: any;
private authContext: any = undefined;
...
this.config = {
    tenant: "my_tenant.onmicrosoft.com",
    clientId: "client_id_of_app_in_tenant_ad",
    postLogoutRedirectUri: window.location.origin,
    cacheLocation: "sessionStorage",
    endpoints: this.endpoints
};
this.authContext = new (window["AuthenticationContext"])(this.config);
Also, on the server side (Web API), the authentication configuration (Startup.Auth) is as follows:
public void ConfigureOAuth(IAppBuilder app, HttpConfiguration httpConfig)
{
    app.UseWindowsAzureActiveDirectoryBearerAuthentication(
        new WindowsAzureActiveDirectoryBearerAuthenticationOptions
        {
            Tenant = "my_tenant.onmicrosoft.com",
            TokenValidationParameters = new TokenValidationParameters
            {
                ValidAudience = "client_id_of_app_in_tenant_ad"
            }
        });
}
However, the Authorization header in request.Headers is always null.
UPDATE: It seems that the same applies to auto-renewal of tokens as well; when used in conjunction with adal-angular, token renewal works seamlessly by calling AuthenticationContext.acquireToken(resource, callback) under the hood. Please correct me if I am wrong.
After reading the articles, I had the impression that ADAL JS intercepts service calls and, if the service URL is registered as one of the endpoints in the AuthenticationContext configuration, attaches the JWT token as the Authorization Bearer header.
This will work only if your application is Angular-based. As you mentioned, the logic for this lives in adal-angular.
If, however, you want to stick to pure JS, you will not get the automatic "get-access-token-and-attach-it-to-header" support. You can use the acquireToken(resource, callback) API to get a token for the endpoint, but you will have to do some work in the code that sends the request to the API.
This might give you some idea: https://github.com/Azure-Samples/active-directory-javascript-singlepageapp-dotnet-webapi/blob/master/TodoSPA/App/Scripts/Ctrls/todoListCtrl.js. This sample does not use Angular.
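Roughly, the manual flow looks like this (a sketch only; the API route is a placeholder, and the resource value is whatever you mapped the endpoint to in config.endpoints):

// Acquire a token for the API resource and attach it to the request yourself.
// authContext is the AuthenticationContext instance created in the question.
var resource = "here_goes_client_id"; // value registered for the endpoint in config.endpoints
authContext.acquireToken(resource, function (error, token) {
    if (error || !token) {
        console.error("ADAL token acquisition failed: " + error);
        return;
    }
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "https://myhost/api/values", true); // placeholder API route
    xhr.setRequestHeader("Authorization", "Bearer " + token);
    xhr.onload = function () { console.log(xhr.responseText); };
    xhr.send();
});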
ADAL.JS is incompatible with the v2.0 implicit flow. I could not get it working, since I set my project up recently, and I don't think projects are backwards compatible.
This was very confusing, and it took me a long time to figure out that I was mixing up the versions and that ADAL.JS can't be used with v2.0. Once I removed it, things went much more smoothly: just a couple of XHR requests and a popup window, no magic actually required!
Here is code for v2:
function testNoADAL() {
    var clientId = "..guid..";
    var redirectUrl = "..your one..";
    var authServer = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize?";
    var responseType = "token";
    var stateParam = Math.random() * new Date().getTime();
    var authUrl = authServer +
        "response_type=" + encodeURI(responseType) +
        "&client_id=" + encodeURI(clientId) +
        "&scope=" + encodeURI("https://outlook.office.com/Mail.ReadWrite") +
        "&redirect_uri=" + encodeURI(redirectUrl) +
        "&state=" + stateParam;
    var popupWindow = window.open(authUrl, "Login", 'width=' + 300 + ', height=' + 600 + ', top=' + 10 + ', left=' + 10 + ',location=no,toolbar=yes');
    if (popupWindow.focus) {
        popupWindow.focus();
    }
}
Note: redirectUrl is what gets loaded in the popup window, so the page at that URL needs code in it to pass the location hash back to the opener, such as this:
<script>window.opener.processMicrosoftAuthResultUrl(location.hash);window.close();</script>
function processMicrosoftAuthResultUrl(hash) {
    if (hash.indexOf("#") == 0) {
        hash = hash.substr(1);
    }
    var obj = getUrlParameters(hash);
    if (obj.error) {
        if (obj.error == "invalid_resource") {
            errorDialog("Your Office 365 needs to be configured to enable access to Outlook Mail.");
        } else {
            errorDialog("ADAL: " + obj.error_description);
        }
    } else {
        if (obj.access_token) {
            console.log("ADAL got access token!");
            var token = obj.access_token;
            var url = "https://outlook.office.com/api/v2.0/me/MailFolders/Inbox/messages";
            $.ajax({
                type: "GET",
                url: url,
                headers: {
                    'Authorization': 'Bearer ' + token,
                },
            }).done(function (data) {
                console.log("got data!", data);
                var message = "Your latest email is: " + data.value[0].Subject + " from " + data.value[0].From.EmailAddress.Name + " on " + df_FmtDateTime(new Date(data.value[0].ReceivedDateTime));
                alertDialog(message);
            }).fail(function () {
                console.error('Error getting todo list data');
            });
        }
    }
}
function getUrlParameters(url) {
    // get querystring and turn it into an object
    if (!url) return {};
    if (url.indexOf("?") > -1) {
        url = url.split("?")[1];
    }
    if (url.indexOf("#") > -1) {
        url = url.split("#")[0];
    }
    if (!url) return {};
    url = url.split('&');
    var b = {};
    for (var i = 0; i < url.length; ++i) {
        var p = url[i].split('=', 2);
        if (p.length == 1) {
            b[p[0]] = "";
        } else {
            b[decodeURIComponent(p[0])] = decodeURIComponent(p[1].replace(/\+/g, " "));
        }
    }
    return b;
}
I've got a website (in ASP.NET) through which I want users to upload photos to a Google Cloud Storage bucket. The users doing the uploading CAN be authenticated with Google if necessary (though I would prefer that they weren't; the site is locked down with usernames, passwords and captchas).
Slightly unrelated: the photos in the bucket have to be visible to everyone who has the link (some of our clients have IT departments who refuse to allow them to use Google accounts, and we can't change their minds). Ideally I would want the link returned when the photo is uploaded.
I have the following JavaScript code, which I think should work (basically it's taken from here):
<script type="text/javascript" src="https://apis.google.com/js/client:plusone.js"></script>
<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>
<script type="text/javascript" src="https://apis.google.com/js/client.js"></script>
<script type="text/javascript">
var PROJECT = 'MY_PROJECT';
var clientId = 'MY_CLIENT_ID_ENDING_IN_apps.googleusercontent.com';
var apiKey = 'MY_API_KEY';
var scopes = 'https://www.googleapis.com/auth/devstorage.read_write';
//quick question - I've got a photo in the bucket already, and its
//URL points to a v1_internal folder. Would that mean that the API
//version is v1_internal?
var API_VERSION = 'v1';
var BUCKET = 'MY_BUCKET';
var object = "";
//question - when using a specific group, should this read group-blahblah
//or, for instance, owners-blahblah
//or even group-owners-blahblah?
var GROUP = 'group-MY_LONG_GROUP_ID';
//stuck on these next few ones
var ENTITY = 'group-Owners';
var ROLE = 'OWNER';
var ROLE_OBJECT = 'OWNER';
function insertObject(event) {
try {
var fileData = event.target.files[0];
}
catch(e) {
//'Insert Object' selected from the API Commands select list
//Display insert object button and then exit function
//filePicker.style.display = 'block';
return;
}
var boundary = '-------314159265358979323846';
var delimiter = "\r\n--" + boundary + "\r\n";
var close_delim = "\r\n--" + boundary + "--";
var reader = new FileReader();
reader.readAsBinaryString(fileData);
reader.onload = function(e) {
var contentType = fileData.type || 'application/octet-stream';
var metadata = {
'name': fileData.name,
'mimeType': contentType
};
var base64Data = btoa(reader.result);
var multipartRequestBody =
delimiter +
'Content-Type: application/json\r\n\r\n' +
JSON.stringify(metadata) +
delimiter +
'Content-Type: ' + contentType + '\r\n' +
'Content-Transfer-Encoding: base64\r\n' +
'\r\n' +
base64Data +
close_delim;
//Note: gapi.client.storage.objects.insert() can only insert
//small objects (under 64k) so to support larger file sizes
//we're using the generic HTTP request method gapi.client.request()
var request = gapi.client.request({
'path': '/upload/storage/' + API_VERSION + '/b/' + BUCKET + '/o',
'method': 'POST',
'params': {'uploadType': 'multipart'},
'headers': {
'Content-Type': 'multipart/mixed; boundary="' + boundary + '"'
},
'body': multipartRequestBody});
//Remove the current API result entry in the main-content div
//listChildren = document.getElementById('main-content').childNodes;
//if (listChildren.length > 1) {
// listChildren[1].parentNode.removeChild(listChildren[1]);
//}
//look at http://stackoverflow.com/questions/30317797/uploading-additional-metadata-as-part-of-file-upload-request-to-google-cloud-sto
try{
//Execute the insert object request
executeRequest(request, 'insertObject');
//Store the name of the inserted object
object = fileData.name;
}
catch(e) {
alert('An error has occurred: ' + e.message);
}
}
}
</script>
Currently, I can execute the function. The file makes it into the function (I've been able to call an alert on fileData.name). However, it doesn't end up in the bucket, and no error message is shown.
Does this code look okay, or is the problem with how the storage bucket is set up? And am I using the correct values (or have I formatted them correctly)?
Sorted it. For those who have tried using this code and, like me, didn't bother to read it all: Google's sample uses an "authorize" button. Basically, the example code they provided lets you more or less "attach" the file, and then a separate button does the actual (authorized) uploading. I hadn't noticed this myself.
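For anyone hitting the same thing, here is a rough sketch of that missing authorization step, using the clientId and scopes variables from the snippet above (the callback wiring is an assumption for illustration, not the exact sample code):

//Sketch: run the OAuth consent/authorization step first, so that
//gapi.client.request() sends an access token with the upload request.
function checkAuth(callback) {
    gapi.auth.authorize(
        {'client_id': clientId, 'scope': scopes, 'immediate': false},
        function (authResult) {
            if (authResult && !authResult.error) {
                callback(); //e.g. enable the file input that calls insertObject
            } else {
                console.error('Authorization failed', authResult);
            }
        });
}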
This is probably a dumb question, but I am new to web programming. I am trying to communicate with Google Drive using client-side JavaScript and CORS. I first used the JS client library, and that worked fine:
request = gapi.client.drive.files.list( {'q': " trashed = false " } );
Using CORS, my code looks like:
var xhr = new XMLHttpRequest();
var mysearch = encodeURIComponent("q=trashed=false");
xhr.open('GET', "https://www.googleapis.com/drive/v2/files?" + mysearch, true);
xhr.setRequestHeader('Authorization', 'Bearer ' + accessToken);
xhr.onload = function() { handleResponse(xhr.responseText); };
xhr.onerror = function() { handleResponse(null); };
xhr.send();
I have tried:
var mysearch = encodeURIComponent("q=trashed=false");
var mysearch = encodeURIComponent("trashed=false");
var mysearch = encodeURIComponent("q='trashed=false'");
They all return the list of all the files. If I don't have a search string, I also get all the files.
I would also like to add other search parameters, joined with &, but I can't even get a single one to work.
How do I format the mysearch string?
Encode only the value part of the q parameter:
var url = 'https://www.googleapis.com/drive/v2/files?q=' + encodeURIComponent('trashed=false')
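If you want to combine several search terms, or add other request parameters, keep the whole q value as one encoded string and append the other parameters with & outside the encoding (a sketch; the extra terms and fields are just examples):

// q can hold several search terms combined with "and"; other request
// parameters (maxResults, fields, ...) are appended separately with &.
var query = encodeURIComponent("trashed=false and mimeType='image/jpeg'");
var url = 'https://www.googleapis.com/drive/v2/files?q=' + query +
    '&maxResults=10&fields=' + encodeURIComponent('items(id,title)');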