Stream dynamically created zip to server via HTTP Post - javascript

Currently my Node.js code running on my local machine takes a JSON object, creates a zip file of the JSON data locally, and then uploads that newly created zip file to a destination web service. The JSON object passed in as a parameter is the result of querying a DB and aggregating the results into one large JSON object.
The problem I am facing is that I am unsure how to remove the need to create the zip file locally before making the HTTP POST request.
/**
 * Makes a https POST call to a specific destination endpoint to submit a form.
 * @param {string} hostname - The hostname of the destination endpoint.
 * @param {string} path - The path on the destination endpoint.
 * @param {Object} formData - the form of key-value pairs to submit.
 * @param {Object} formHeaders - the headers that need to be attached to the http request.
 * @param {Object} jsonData - a large JSON Object that needs to be compressed prior to sending.
 * @param {Object} logger - a logging mechanism.
 */
function uploadDataToServer(hostname, path, formData, formHeaders, jsonData, logger){
    // jsonData: the large amount of JSON data
    // hostname: the destination serverless host
    // path: the destination path
    let url = 'https://' + hostname + path;
    return new Promise((resolve, reject) => {
        var zip = new JSZip();
        zip.file("aggResults.json", JSON.stringify(jsonData));
        zip
            .generateNodeStream({type: 'nodebuffer', streamFiles: true})
            .pipe(fs.createWriteStream('test.zip'))
            .on('finish', function () {
                const readStream = fs.createReadStream('test.zip');
                console.log("test.zip written.");
                request.post(url, {
                    headers: formHeaders,
                    formData: {
                        uploadFile: readStream
                    },
                }, function (err, resp, body) {
                    if (err) {
                        reject(err);
                    } else {
                        resolve(body);
                    }
                });
            });
    });
}
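A minimal sketch of one way to drop the intermediate file, assuming the same JSZip and request libraries: hand the generated zip stream straight to the multipart form instead of piping it through test.zip.
function uploadDataToServer(hostname, path, formData, formHeaders, jsonData, logger) {
    const url = 'https://' + hostname + path;
    return new Promise((resolve, reject) => {
        const zip = new JSZip();
        zip.file('aggResults.json', JSON.stringify(jsonData));
        request.post(url, {
            headers: formHeaders,
            formData: {
                // the zip is streamed straight into the multipart body;
                // nothing is written to disk
                uploadFile: {
                    value: zip.generateNodeStream({type: 'nodebuffer', streamFiles: true}),
                    options: {filename: 'aggResults.zip', contentType: 'application/zip'}
                }
            }
        }, (err, resp, body) => err ? reject(err) : resolve(body));
    });
}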

Related

How to pass parameter value in function nodejs

How do I pass a parameter value after the function has been created? Here I want to pass the path parameter value into the for loop. Please check the second snippet, where I am trying to access the function and then pass the value of the parameter.
I want to create a mock server client from the first function and then make a call to the backend at a specific path.
Check below, from the mockServerClient package.
/**
 * Start the client communicating at the specified host and port
 * for example:
 *
 * var client = mockServerClient("localhost", 1080);
 *
 * @param host the host for the server to communicate with
 * @param port the port for the server to communicate with
 * @param path the path if server was deployed as a war
 */
mockServerClient = function (host, port, path) {
    /**
     * The default headers added to the mocked response when using mockSimpleResponse(...)
     */
    var defaultResponseHeaders = [
        {"name": "Content-Type", "values": ["application/json; charset=utf-8"]},
        {"name": "Cache-Control", "values": ["no-cache, no-store"]}
    ];
    var defaultRequestHeaders = [];
Outside the for loop:
const client = mockServerClient('localhost', '8080');
Inside the for loop, a call is made to the backend at a specific path:
var array = [path1, path2, .....]; // array of some paths
for (let i = 0; i < array.length; i++) {
    // here I want to pass the path
    // client.<pass the value of the parameter as path>
    // this client will make some backend call...
}
If I understand you correctly, you want to create a mock server client from the first function and then make a call to the backend at a specific path.
If I am wrong, then please say so in a comment.
A simple 'Client' class will do the job.
class Client {
    constructor(host, port) {
        this.host = host;
        this.port = port;
    }
    call(path) {
        // code to send the request
    }
}
You can simply create a client object and call the 'call' function to send the request.
const paths = [path1, path2, path3];
const client = new Client('localhost', 8080);
// using the 'path' package to join the paths
client.call(path.join(...paths));
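If the goal is instead one backend call per path, as in the question's loop, a minimal usage sketch (path1 through path3 stand in for real path strings):
const client = new Client('localhost', 8080);
const paths = [path1, path2, path3];
// one backend call per path
for (const p of paths) {
    client.call(p);
}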

Obtain the result of stored procedure in node.js

Can I get multiple SELECT results from distinct tables (many rows) in just one stored procedure in MySQL and retrieve those results in Node.js?
Like in .NET with SQL Server, where we can use "NextResult".
[Image of the stored procedure omitted.]
Here you have an example using the mssql node package; you can find the documentation about this package here.
var sql = require('mssql'),
    connectionObj = {
        user: 'myUser',
        password: 'myPassword',
        server: 'http://mysqlserver.whatever',
        options: {
            database: 'myDB'
        }
    };
/**
 * Opens connection to sql server.
 *
 * @return {Object} Connection object.
 */
function openConnection() {
    var connection = new sql.Connection(connectionObj);
    return connection;
};
/**
 * Closes connection.
 *
 * @param {Object} connection - Connection Object.
 * @return {Promise} Promise.
 */
function closeConnection(connection) {
    return connection.close();
};
/**
 * Does a request to sql server.
 *
 * @param {Object} connection - Connection object.
 * @param {string} query - Query string to compute.
 * @return {Promise} Promise.
 */
function doRequest(connection, query) {
    return connection.request().query(query);
};
/**
 * Gets Request.
 *
 * @param {Object} connection - Connection object.
 */
function getRequest(connection) {
    return new sql.Request(connection);
};
var request, conn = openConnection();
// First open the connection with the DB (the method uses the object created at the beginning).
conn.connect()
    .then(function() {
        // Now create a Request.
        request = getRequest(conn);
        // Execute your stored procedure.
        return request.execute('usp_get_Masters');
    })
    .then(function(result) {
        console.log(result); // Here in result you have the result of the stored procedure execution.
    })
    .catch(function(error) {
        console.log(error);
    });
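If the stored procedure contains several SELECTs (the NextResult case from the question), it produces several recordsets. A hedged sketch of consuming them; note that the shape of the resolved value depends on the mssql version (3.x resolves execute() with an array of recordsets, 4.x+ with an object carrying a recordsets array):
request.execute('usp_get_Masters')
    .then(function(result) {
        var recordsets = Array.isArray(result) ? result : result.recordsets;
        recordsets.forEach(function(rows, i) {
            console.log('Result set ' + i + ':', rows); // one entry per SELECT
        });
    })
    .catch(function(error) {
        console.log(error);
    });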

AWS Rekognition JavaScript SDK using Bytes

The AWS Rekognition JavaScript API states that for the rekognition.compareFaces(params, ...) method, the SourceImage and TargetImage can take Bytes or an S3Object. I want to use Bytes, which can be
"Bytes — (Buffer, Typed Array, Blob, String)"
a Blob of image bytes up to 5 MB.
When I pass the Base64-encoded string of the images, the JS SDK re-encodes it (i.e. it gets double-encoded), hence the server responds with an error saying
{"__type":"InvalidImageFormatException","Message":"Invalid image encoding"}
Did anyone manage to use the compareFaces JS SDK API using base64-encoded images (not S3Object)? Any JavaScript examples using the Bytes param would help.
The technique from the AWS Rekognition JS SDK Invalid image encoding error thread worked.
Convert the base64 image encoding to an ArrayBuffer:
function getBinary(base64Image) {
    var binaryImg = atob(base64Image);
    var length = binaryImg.length;
    var ab = new ArrayBuffer(length);
    var ua = new Uint8Array(ab);
    for (var i = 0; i < length; i++) {
        ua[i] = binaryImg.charCodeAt(i);
    }
    return ab;
}
Pass it into Rekognition as the Bytes parameter:
var data = canvas.toDataURL('image/jpeg');
var base64Image = data.replace(/^data:image\/(png|jpeg|jpg);base64,/, '');
var imageBytes = getBinary(base64Image);
var rekognitionRequest = {
    CollectionId: collectionId,
    Image: {
        Bytes: imageBytes
    }
};
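A request shaped like this (a CollectionId plus image Bytes) fits collection-based operations such as searchFacesByImage; a hedged usage sketch, assuming the rekognition client and collectionId are set up elsewhere:
rekognition.searchFacesByImage(rekognitionRequest, function(err, data) {
    if (err) console.error(err);
    else console.log(data.FaceMatches); // faces in the collection matching the image
});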
Based on the answer supplied by @Sean, I wanted to add another way to get the bytes from a URL request using axios and pass them to rekognition.detectLabels(), or to the other various detection methods for Amazon Rekognition.
I went ahead and created a promise for fs.readFile that should work with the async/await structure, plus some regex to determine whether you need a URL fetch or a file read as a fallback.
I've also added a check for the 'Gray' and 'World Of Warcraft' labels. Not sure if anyone else experiences that, but lorempixel seems to throw those labels every once in a while. I've seen them show up on an all-black image before as well.
/* jshint esversion: 6, node:true, devel: true, undef: true, unused: true */
const AWS = require('aws-sdk'),
axios = require('axios'),
fs = require('fs'),
path = require('path');
// Get credentials from environmental variables.
const {S3_ACCESS_KEY, S3_SECRET_ACCESS_KEY, S3_REGION} = process.env;
// Set AWS credentials.
AWS.config.update({
accessKeyId: S3_ACCESS_KEY,
secretAccessKey: S3_SECRET_ACCESS_KEY,
region: S3_REGION
});
const rekognition = new AWS.Rekognition({
apiVersion: '2016-06-27'
});
startDetection();
// ----------------
async function startDetection() {
let found = {};
found = await detectFromPath(path.join(__dirname, 'test.jpg'));
console.log(found);
found = await detectFromPath('https://upload.wikimedia.org/wikipedia/commons/9/96/Bill_Nye%2C_Barack_Obama_and_Neil_deGrasse_Tyson_selfie_2014.jpg');
console.log(found);
found = await detectFromPath('http://placekitten.com/g/200/300');
console.log(found);
found = await detectFromPath('https://loremflickr.com/g/320/240/text');
console.log(found);
found = await detectFromPath('http://lorempixel.com/400/200/sports/');
console.log(found);
// Sometimes 'Grey' and 'World Of Warcraft' are the only labels...
if (found && found.labels.length === 2 && found.labels.some(i => i.Name === 'Gray') && found.labels.some(i => i.Name === 'World Of Warcraft')) {
console.log('⚠️', '\n\tMaybe this is a bad image...`Gray` and `World Of Warcraft`???\n');
}
}
// ----------------
/**
* @param {string} path URL or filepath on your local machine.
* @param {Number} maxLabels
* @param {Number} minConfidence
* @param {array} attributes
*/
async function detectFromPath(path, maxLabels, minConfidence, attributes) {
// Convert path to base64 Buffer data.
const bytes = (/^https?:\/\//gm.exec(path)) ?
await getBase64BufferFromURL(path) :
await getBase64BufferFromFile(path);
// Invalid data.
if (!bytes)
return {
path,
faces: [],
labels: [],
text: [],
celebs: [],
moderation: []
};
// Pass buffer to rekognition methods.
let labels = await detectLabelsFromBytes(bytes, maxLabels, minConfidence),
text = await detectTextFromBytes(bytes),
faces = await detectFacesFromBytes(bytes, attributes),
celebs = await recognizeCelebritiesFromBytes(bytes),
moderation = await detectModerationLabelsFromBytes(bytes, minConfidence);
// Filter out specific values.
labels = labels && labels.Labels ? labels.Labels : [];
faces = faces && faces.FaceDetails ? faces.FaceDetails : [];
text = text && text.TextDetections ? text.TextDetections.map(i => i.DetectedText) : [];
celebs = celebs && celebs.CelebrityFaces ? celebs.CelebrityFaces.map(i => ({
Name: i.Name,
MatchConfidence: i.MatchConfidence
})) : [];
moderation = moderation && moderation.ModerationLabels ? moderation.ModerationLabels.map(i => ({
Name: i.Name,
Confidence: i.Confidence
})) : [];
// Return collection.
return {
path,
faces,
labels,
text,
celebs,
moderation
};
}
/**
* https://nodejs.org/api/fs.html#fs_fs_readfile_path_options_callback
*
* @param {string} filename
*/
function getBase64BufferFromFile(filename) {
return (new Promise(function(resolve, reject) {
fs.readFile(filename, 'base64', (err, data) => {
if (err) return reject(err);
resolve(Buffer.from(data, 'base64'));
});
})).catch(error => {
console.log('[ERROR]', error);
});
}
/**
* https://github.com/axios/axios
*
* @param {string} url
*/
function getBase64BufferFromURL(url) {
return axios
.get(url, {
responseType: 'arraybuffer'
})
.then(response => Buffer.from(response.data)) // response.data is already an ArrayBuffer
.catch(error => {
console.log('[ERROR]', error);
});
}
/**
* https://docs.aws.amazon.com/rekognition/latest/dg/labels.html
* https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Rekognition.html#detectLabels-property
*
* @param {Buffer} bytes
* @param {Number} maxLabels
* @param {Number} minConfidence
*/
function detectLabelsFromBytes(bytes, maxLabels, minConfidence) {
return rekognition
.detectLabels({
Image: {
Bytes: bytes
},
MaxLabels: typeof maxLabels !== 'undefined' ? maxLabels : 1000,
MinConfidence: typeof minConfidence !== 'undefined' ? minConfidence : 50.0
})
.promise()
.catch(error => {
console.error('[ERROR]', error);
});
}
/**
* https://docs.aws.amazon.com/rekognition/latest/dg/text-detection.html
* https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Rekognition.html#detectText-property
*
* @param {Buffer} bytes
*/
function detectTextFromBytes(bytes) {
return rekognition
.detectText({
Image: {
Bytes: bytes
}
})
.promise()
.catch(error => {
console.error('[ERROR]', error);
});
}
/**
* https://docs.aws.amazon.com/rekognition/latest/dg/celebrities.html
* https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Rekognition.html#recognizeCelebrities-property
*
* @param {Buffer} bytes
*/
function recognizeCelebritiesFromBytes(bytes) {
return rekognition
.recognizeCelebrities({
Image: {
Bytes: bytes
}
})
.promise()
.catch(error => {
console.error('[ERROR]', error);
});
}
/**
* https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html
* https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Rekognition.html#detectModerationLabels-property
*
* @param {Buffer} bytes
* @param {Number} minConfidence
*/
function detectModerationLabelsFromBytes(bytes, minConfidence) {
return rekognition
.detectModerationLabels({
Image: {
Bytes: bytes
},
MinConfidence: typeof minConfidence !== 'undefined' ? minConfidence : 60.0
})
.promise()
.catch(error => {
console.error('[ERROR]', error);
});
}
/**
* https://docs.aws.amazon.com/rekognition/latest/dg/faces.html
* https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Rekognition.html#detectFaces-property
*
* @param {Buffer} bytes
* @param {array} attributes Attributes can be "ALL" or "DEFAULT". "DEFAULT" includes: BoundingBox, Confidence, Landmarks, Pose, and Quality.
*/
function detectFacesFromBytes(bytes, attributes) {
return rekognition
.detectFaces({
Image: {
Bytes: bytes
},
Attributes: typeof attributes !== 'undefined' ? attributes : ['ALL']
})
.promise()
.catch(error => {
console.error('[ERROR]', error);
});
}
/**
* https://docs.aws.amazon.com/rekognition/latest/dg/API_CompareFaces.html
* https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Rekognition.html#compareFaces-property
*
* @param {Buffer} sourceBytes
* @param {Buffer} targetBytes
* @param {Number} similarityThreshold
*/
function compareFaces(sourceBytes, targetBytes, similarityThreshold) {
return rekognition
.compareFaces({
SourceImage: {
Bytes: sourceBytes
},
TargetImage: {
Bytes: targetBytes
},
SimilarityThreshold: typeof similarityThreshold !== 'undefined' ? similarityThreshold : 0.0
})
.promise()
.catch(error => {
console.error('[ERROR]', error);
});
}
Resources:
https://github.com/axios/axios
https://docs.aws.amazon.com/rekognition/latest/dg/labels.html
https://docs.aws.amazon.com/rekognition/latest/dg/text-detection.html
https://docs.aws.amazon.com/rekognition/latest/dg/celebrities.html
https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html
https://docs.aws.amazon.com/rekognition/latest/dg/faces.html
https://docs.aws.amazon.com/rekognition/latest/dg/API_CompareFaces.html
https://docs.aws.amazon.com/rekognition/latest/dg/image-bytes-javascript.html
AWS JavaScript SDK Reference:
detectLabels
detectText
recognizeCelebrities
detectModerationLabels
detectFaces
compareFaces
Reference:
Download an image using Axios and convert it to base64
Upload a binary file to S3 using AWS SDK for Node.js
AWS Rekognition JS SDK Invalid image encoding error
Pipe a stream to s3.upload()
untarring files to S3 fails, not sure why
Using Promises with fs.readFile in a loop
How do I return the response from an asynchronous call?
NodeJS UnhandledPromiseRejectionWarning
How do I check whether an array contains a string in TypeScript?
Do you need to use path.join in node.js?
I was running into a similar issue when reading in a file in Node as a byte array buffer and sending it to Rekognition.
I solved it by instead reading in the base64 representation, then turning it into a buffer like this:
const aws = require('aws-sdk');
const fs = require('fs');
const rekognition = new aws.Rekognition({
    apiVersion: '2016-06-27'
});
// pull base64 representation of image from file system (or somewhere else)
fs.readFile('./test.jpg', 'base64', (err, data) => {
    // create a new buffer out of the string passed to us by fs.readFile()
    const buffer = Buffer.from(data, 'base64');
    // now that we have things in the right type, send it to rekognition
    rekognition.detectLabels({
        Image: {
            Bytes: buffer
        }
    }).promise()
        .then((res) => {
            // print out the labels that rekognition sent back
            console.log(res);
        });
});
This might also be relevant to people getting the message: Expected params.Image.Bytes to be a string, Buffer, Stream, Blob, or typed array object.
It seems that converting the string to a buffer works more consistently, but documentation on it is very hard to find.
For Node, you can use this to convert the params from the string (make sure you strip off the data:...;base64, prefix, up to and including the comma):
var params = {
    CollectionId: collectionId,
    Image: {
        Bytes: Buffer.from(imageBytes, 'base64')
    }
};
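For compareFaces specifically (the method in the question), the same conversion applies to both images. A hedged sketch, assuming sourceBase64 and targetBase64 are base64 strings with the data URL prefix already stripped:
rekognition.compareFaces({
    SourceImage: {Bytes: Buffer.from(sourceBase64, 'base64')},
    TargetImage: {Bytes: Buffer.from(targetBase64, 'base64')},
    SimilarityThreshold: 80 // only report matches at or above 80% similarity
}, function(err, data) {
    if (err) console.error(err);
    else console.log(data.FaceMatches);
});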
In browser JavaScript, you can convert with atob and pass the ArrayBuffer using this code:
function getBinary(base64Image) {
    var binaryImg = atob(base64Image);
    var length = binaryImg.length;
    var ab = new ArrayBuffer(length);
    var ua = new Uint8Array(ab);
    for (var i = 0; i < length; i++) {
        ua[i] = binaryImg.charCodeAt(i);
    }
    return ab;
}
I had the same issue you had, and I'm going to tell you how I solved it.
The image types Amazon Rekognition supports are JPEG and PNG.
That means that if your input image file is encoded in another format, like WebP, you will always get that same error.
After converting images that were not JPEG or PNG encoded to JPEG, I solved the problem.
Hope you solve this problem too!
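One way to do that conversion in Node, sketched with the sharp package (an assumption; any image library that outputs JPEG or PNG works):
const sharp = require('sharp');
// re-encode whatever format the input is in (e.g. WebP) as JPEG
// before handing the bytes to Rekognition
function toJpegBytes(inputBuffer) {
    return sharp(inputBuffer).jpeg().toBuffer(); // resolves with a JPEG Buffer
}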

Renaming a file while opening in browser

I have a requirement as follows. I use AngularJS and JavaScript.
1. The user clicks on a document in the browser. I get the document path and open it: window.open(documentpath);
2. But the document saved in the directory has a file name replaced by an ID and NO extension: abc/files/4
3. The actual filename is in the DB as Id: 4, Filename: Hello.pdf
So when I open the file, I get abc/files/4, which has no format in it, and it doesn't open the file.
How can I open the file with the right name, abc/files/Hello.pdf?
First, I want to take the path abc/files/4, and I don't want to download the file; just store it somewhere locally like cache/Temp to get the file contents, rename 4 to Hello.pdf, and then open it in the browser. All of this should happen in the background and open the file correctly when the user clicks on it.
Is it possible using JavaScript and AngularJS? Please let me know.
JavaScript usually has no access to the local file system for security reasons.
What you have to do instead is pass the file name along with your HTTP response. To do this, add this header to the response (a server-side sketch follows the links below):
Content-Disposition: inline; filename="Hello.pdf"
See also:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition
How to set response filename without forcing saveas dialog
How to encode the filename parameter of Content-Disposition header in HTTP?
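For instance, a minimal sketch of sending that header from a Node/Express route (an assumption, since the question doesn't say what serves the files; lookupFilenameById is a hypothetical DB helper that maps the id 4 to Hello.pdf):
const express = require('express');
const app = express();

app.get('/files/:id', async (req, res) => {
    // hypothetical helper: look up the real filename for this id in the DB
    const filename = await lookupFilenameById(req.params.id);
    res.setHeader('Content-Type', 'application/pdf');
    res.setHeader('Content-Disposition', 'inline; filename="' + filename + '"');
    res.sendFile('/abc/files/' + req.params.id);
});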
This will let you download the file with an async request and open it in a base64-encoded data URL. It WON'T set the name; it just forces the display as PDF. You can use this if you have no access to your server and can't implement Aaron Digulla's method, which is infinitely better.
/**
 *
 * jquery.binarytransport.js
 *
 * @description. jQuery ajax transport for making binary data type requests.
 * @version 1.0
 * @author Henry Algus <henryalgus@gmail.com>
 *
 */
$.ajaxTransport("+binary", function(options, originalOptions, jqXHR) {
    // check for conditions and support for blob / arraybuffer response type
    if (window.FormData && ((options.dataType && (options.dataType == 'binary')) || (options.data && ((window.ArrayBuffer && options.data instanceof ArrayBuffer) || (window.Blob && options.data instanceof Blob))))) {
        return {
            // create new XMLHttpRequest
            send: function(headers, callback) {
                // setup all variables
                var xhr = new XMLHttpRequest(),
                    url = options.url,
                    type = options.type,
                    async = options.async || true,
                    // blob or arraybuffer. Default is blob
                    dataType = options.responseType || "blob",
                    data = options.data || null,
                    username = options.username || null,
                    password = options.password || null;
                xhr.addEventListener('load', function() {
                    var data = {};
                    data[options.dataType] = xhr.response;
                    // make callback and send data
                    callback(xhr.status, xhr.statusText, data, xhr.getAllResponseHeaders());
                });
                xhr.open(type, url, async, username, password);
                // setup custom headers
                for (var i in headers) {
                    xhr.setRequestHeader(i, headers[i]);
                }
                xhr.responseType = dataType;
                xhr.send(data);
            },
            abort: function() {}
        };
    }
});
var blobToBase64 = function(blob, cb) {
    var reader = new FileReader();
    reader.onload = function() {
        var dataUrl = reader.result;
        var base64 = dataUrl.split(',')[1];
        cb(base64);
    };
    reader.readAsDataURL(blob);
};
$.ajax("Description.pdf", {
    dataType: "binary"
}).done(function(data) {
    blobToBase64(data, function(base64encoded) {
        window.open("data:application/pdf;base64," + base64encoded);
    });
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
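For what it's worth, a shorter modern sketch without the jQuery transport plugin, using the Fetch API and an object URL in place of the base64 data URL:
fetch('Description.pdf')
    .then(resp => resp.blob())
    .then(blob => {
        // an object URL also forces the browser's built-in PDF viewer
        window.open(URL.createObjectURL(blob));
    });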

How to pass Service OAuth access_token from PHP to JavaScript

I am working on an application that is using Google Cloud Storage. I would like to use the JSON API to manage uploading/downloading the pictures our users provide. My goal is to grab the access_token using my service account method and pass it off to the JSON API JavaScript client.
Here's my fancy pants class to grab the access_token:
<?php
require_once 'google-api-php-client/src/Google_Client.php';
require_once 'google-api-php-client/src/contrib/Google_StorageService.php';
class Model_Storage_Auth
{
    const CLIENT_ID = "clientid.apps.googleusercontent.com";
    const SERVICE_ACCOUNT_NAME = "accountname@developer.gserviceaccount.com";
    const KEY_FILE = "/super/secret/path/key.p12";
    const ACCESS_TOKEN = 'access_token';
    const APP_NAME = 'Fancy App';
    private $google_client;
    function __construct()
    {
        $this->google_client = new Google_Client();
        $this->google_client->setApplicationName(self::APP_NAME);
    }
    public function getToken()
    {
        //return '{}';
        if(is_null($this->google_client->getAccessToken()))
        {
            try{$this->google_client->setAccessToken(Session::get(self::ACCESS_TOKEN, '{}'));}catch(Exception $e){}
            if(is_null($this->google_client->getAccessToken()))
            {
                $scope = array();
                $scope[] = 'https://www.googleapis.com/auth/devstorage.full_control';
                $key = file_get_contents(self::KEY_FILE);
                $this->google_client->setAssertionCredentials(new Google_AssertionCredentials(
                    self::SERVICE_ACCOUNT_NAME,
                    $scope,
                    $key
                ));
                $this->google_client->setClientId(self::CLIENT_ID);
                Google_Client::$auth->refreshTokenWithAssertion();
                $token = $this->google_client->getAccessToken();
                Session::set(self::ACCESS_TOKEN, $token);
            }
        }
        return $this->google_client->getAccessToken();
    }
}
I took the Google JavaScript example and modified it a little bit to try to add my implementation; here it is:
<?php
$access_token = json_decode(html_entity_decode($access_token), true);
?>
<!--
Copyright (c) 2012 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.
To run this sample, replace YOUR_API_KEY with your application's API key.
It can be found at https://code.google.com/apis/console under API
Access. Activate the Google Cloud Storage service at
https://code.google.com/apis/console/ under Services
-->
<!DOCTYPE html>
<html>
<head>
<meta charset='utf-8' />
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
<script src="https://apis.google.com/js/client.js"></script>
<script type="text/javascript">
/**
* The number of your Google Cloud Storage Project.
*/
var projectNumber = 'HAS REAL NUMBER IN MINE';
/**
* Enter a client ID for a web application from the Google Developer
* Console. In your Developer Console project, add a JavaScript origin
* that corresponds to the domain from where you will be running the
* script.
*/
var clientId = 'YOUR_CLIENT_ID';
/**
* Enter the API key from the Google Developer Console, by following these
* steps:
* 1) Visit https://code.google.com/apis/console/?api=storage
* 2) Click on "API Access" in the left column
* 3) Find section "Simple API Access" and use the "API key." If sample is
* being run on localhost then delete all "Referers" and save. Setting
* should display "Any referer allowed."
*/
var apiKey = 'YOUR_API_KEY';
/**
* To enter one or more authentication scopes, refer to the documentation
* for the API.
*/
var scopes = 'https://www.googleapis.com/auth/devstorage.full_control';
/**
* Constants for request parameters. Fill these values in with your custom
* information.
*/
var API_VERSION = 'v1beta1';
var PROJECT = projectNumber;
/**
* The name of the new bucket to create.
*/
var BUCKET = 'code-sample-bucket';
/**
* The name of the object inserted via insertObject method.
*/
var object = "";
/**
* Get this value from the API console.
*/
var GROUP =
'group-0000000000000000000000000000000000000000000000000000000000000000';
/**
* Valid values are user-userId, user-email, group-groupId, group-email,
* allUsers, allAuthenticatedUsers
*/
var ENTITY = 'allUsers';
/**
* Valid values are READER, OWNER
*/
var ROLE = 'READER';
/**
* Valid values are READER, OWNER
*/
var ROLE_OBJECT = 'READER';
/**
* A list of example calls to the Google Cloud Storage JavaScript client
* library, as well as associated explanations of each call.
*/
var listApiRequestExplanations = {
'listBuckets': 'This API call queries the Google Cloud Storage API ' +
'for a list of buckets in your project, and returns the result as ' +
'a list of Google Cloud Storage buckets.',
'listObjects': 'This API call queries the Google Cloud Storage API ' +
'for a list of objects in your bucket, and returns the result as ' +
'a list of Google Cloud Storage objects.',
'listBucketsAccessControls': 'This API call queries the Google Cloud ' +
'Storage API for the list of access control lists on buckets in your ' +
'project and returns the result as a list of Google Cloud Storage ' +
'Access Control Lists.',
'listObjectsAccessControls': 'This API call queries the Google Cloud ' +
'Storage API for the list of access control lists on objects in your ' +
'bucket and returns the result as a list of Google Cloud Storage ' +
'Access Control Lists.',
'getBucket': 'This API call queries the Google Cloud Storage API ' +
'for a bucket in your project, and returns the result as a ' +
'Google Cloud Storage bucket.',
'getBucketAccessControls': 'This API call queries the Google Cloud ' +
'Storage API for the access control list on a specific bucket ' +
'and returns the result as a Google Cloud Storage Access Control List.',
'getObjectAccessControls': 'This API call queries the Google Cloud ' +
'Storage API for the access control list on a specific object ' +
'and returns the result as a Google Cloud Storage Access Control List.',
'insertBucket': 'This API call uses the Google Cloud Storage API ' +
'to insert a bucket into your project.',
'insertObject': 'This API call uses the Google Cloud Storage API ' +
'to insert an object into your bucket.',
'insertBucketAccessControls': 'This API uses the Google Cloud ' +
'Storage API to insert an access control list on a specific bucket ' +
'and returns the result as a Google Cloud Storage Access Control List.',
'insertObjectAccessControls': 'This API uses the Google Cloud ' +
'Storage API to insert an access control list on a specific object ' +
'and returns the result as a Google Cloud Storage Access Control List.',
'deleteBucket': 'This API uses the Google Cloud Storage API to delete ' +
'an empty bucket and returns an empty response to indicate success.',
'deleteObject': 'This API uses the Google Cloud Storage API to delete ' +
'an object and returns an empty response to indicate success.'
};
/**
* Google Cloud Storage API request to retrieve the list of buckets in
* your Google Cloud Storage project.
*/
function listBuckets() {
var request = gapi.client.storage.buckets.list({
'projectId': PROJECT
});
executeRequest(request, 'listBuckets');
}
/**
* Google Cloud Storage API request to retrieve the list of objects in
* your Google Cloud Storage project.
*/
function listObjects() {
var request = gapi.client.storage.objects.list({
'bucket': BUCKET
});
executeRequest(request, 'listObjects');
}
/**
* Google Cloud Storage API request to retrieve the access control list on
* a bucket in your Google Cloud Storage project.
*/
function listBucketsAccessControls() {
var request = gapi.client.storage.bucketAccessControls.list({
'bucket': BUCKET
});
executeRequest(request, 'listBucketsAccessControls');
}
/**
* Google Cloud Storage API request to retrieve the access control list on
* an object in your Google Cloud Storage project.
*/
function listObjectsAccessControls() {
var request = gapi.client.storage.objectAccessControls.list({
'bucket': BUCKET,
'object': object
});
executeRequest(request, 'listObjectsAccessControls');
}
/**
* Google Cloud Storage API request to retrieve a bucket in
* your Google Cloud Storage project.
*/
function getBucket() {
var request = gapi.client.storage.buckets.get({
'bucket': BUCKET
});
executeRequest(request, 'getBucket');
}
/**
* Google Cloud Storage API request to retrieve a bucket's Access Control
* List in your Google Cloud Storage project.
*/
function getBucketAccessControls() {
var request = gapi.client.storage.bucketAccessControls.get({
'bucket': BUCKET,
'entity': GROUP
});
executeRequest(request, 'getBucketAccessControls');
}
/**
* Google Cloud Storage API request to retrieve an object's Access Control
* List in your Google Cloud Storage project.
*/
function getObjectAccessControls() {
var request = gapi.client.storage.objectAccessControls.get({
'bucket': BUCKET,
'object': object,
'entity': GROUP
});
executeRequest(request, 'getObjectAccessControls');
}
/**
* Google Cloud Storage API request to insert a bucket into
* your Google Cloud Storage project.
*/
function insertBucket() {
resource = {
'id': BUCKET,
'projectId': PROJECT
};
var request = gapi.client.storage.buckets.insert({
'resource': resource
});
executeRequest(request, 'insertBucket');
}
/**
* Google Cloud Storage API request to insert an object into
* your Google Cloud Storage bucket.
*/
function insertObject(event) {
try{
var fileData = event.target.files[0];
}
catch(e) {
//'Insert Object' selected from the API Commands select list
//Display insert object button and then exit function
filePicker.style.display = 'block';
return;
}
const boundary = '-------314159265358979323846';
const delimiter = "\r\n--" + boundary + "\r\n";
const close_delim = "\r\n--" + boundary + "--";
var reader = new FileReader();
reader.readAsBinaryString(fileData);
reader.onload = function(e) {
var contentType = fileData.type || 'application/octet-stream';
var metadata = {
'name': fileData.name,
'mimeType': contentType
};
var base64Data = btoa(reader.result);
var multipartRequestBody =
delimiter +
'Content-Type: application/json\r\n\r\n' +
JSON.stringify(metadata) +
delimiter +
'Content-Type: ' + contentType + '\r\n' +
'Content-Transfer-Encoding: base64\r\n' +
'\r\n' +
base64Data +
close_delim;
//Note: gapi.client.storage.objects.insert() can only insert
//small objects (under 64k) so to support larger file sizes
//we're using the generic HTTP request method gapi.client.request()
var request = gapi.client.request({
'path': '/upload/storage/v1beta2/b/' + BUCKET + '/o',
'method': 'POST',
'params': {'uploadType': 'multipart'},
'headers': {
'Content-Type': 'multipart/mixed; boundary="' + boundary + '"'
},
'body': multipartRequestBody});
//Remove the current API result entry in the main-content div
listChildren = document.getElementById('main-content').childNodes;
if (listChildren.length > 1) {
listChildren[1].parentNode.removeChild(listChildren[1]);
}
try{
//Execute the insert object request
executeRequest(request, 'insertObject');
//Store the name of the inserted object
object = fileData.name;
}
catch(e) {
alert('An error has occurred: ' + e.message);
}
}
}
/**
* Google Cloud Storage API request to insert an Access Control List into
* your Google Cloud Storage bucket.
*/
function insertBucketAccessControls() {
resource = {
'entity': ENTITY,
'role': ROLE
};
var request = gapi.client.storage.bucketAccessControls.insert({
'bucket': BUCKET,
'resource': resource
});
executeRequest(request, 'insertBucketAccessControls');
}
/**
* Google Cloud Storage API request to insert an Access Control List into
* your Google Cloud Storage object.
*/
function insertObjectAccessControls() {
resource = {
'entity': ENTITY,
'role': ROLE_OBJECT
};
var request = gapi.client.storage.objectAccessControls.insert({
'bucket': BUCKET,
'object': object,
'resource': resource
});
executeRequest(request, 'insertObjectAccessControls');
}
/**
* Google Cloud Storage API request to delete a Google Cloud Storage bucket.
*/
function deleteBucket() {
var request = gapi.client.storage.buckets.delete({
'bucket': BUCKET
});
executeRequest(request, 'deleteBucket');
}
/**
* Google Cloud Storage API request to delete a Google Cloud Storage object.
*/
function deleteObject() {
var request = gapi.client.storage.objects.delete({
'bucket': BUCKET,
'object': object
});
executeRequest(request, 'deleteObject');
}
/**
* Removes the current API result entry in the main-content div, adds the
* results of the entry for your function.
* @param {string} apiRequestName The name of the example API request.
*/
function updateApiResultEntry(apiRequestName) {
listChildren = document.getElementById('main-content')
.childNodes;
if (listChildren.length > 1) {
listChildren[1].parentNode.removeChild(listChildren[1]);
}
if (apiRequestName != 'null') {
window[apiRequestName].apply(this);
}
}
/**
* Determines which API request has been selected, and makes a call to add
* its result entry.
*/
function runSelectedApiRequest() {
var curElement = document.getElementById('api-selection-options');
var apiRequestName = curElement.options[curElement.selectedIndex].value;
updateApiResultEntry(apiRequestName);
}
/**
* Binds event listeners to handle a newly selected API request.
*/
function addSelectionSwitchingListeners() {
document.getElementById('api-selection-options')
.addEventListener('change',
runSelectedApiRequest, false);
}
/**
* Template for getting JavaScript sample code snippets.
* @param {string} method The name of the Google Cloud Storage request
* @param {string} params The parameters passed to method
*/
function getCodeSnippet(method, params) {
var objConstruction = "// Declare your parameter object\n";
objConstruction += "var params = {};";
objConstruction += "\n\n";
var param = "// Initialize your parameters \n";
for (i in params) {
param += "params['" + i + "'] = ";
param += JSON.stringify(params[i], null, '\t');
param += ";";
param += "\n";
}
param += "\n";
var methodCall = "// Make a request to the Google Cloud Storage API \n";
methodCall += "var request = gapi.client." + method + "(params);";
return objConstruction + param + methodCall;
}
/**
* Executes your Google Cloud Storage request object and, subsequently,
* inserts the response into the page.
* @param {string} request A Google Cloud Storage request object issued
* from the Google Cloud Storage JavaScript client library.
* @param {string} apiRequestName The name of the example API request.
*/
function executeRequest(request, apiRequestName) {
request.execute(function(resp) {
console.log(resp);
var apiRequestNode = document.createElement('div');
apiRequestNode.id = apiRequestName;
var apiRequestNodeHeader = document.createElement('h2');
apiRequestNodeHeader.innerHTML = apiRequestName;
var apiRequestExplanationNode = document.createElement('div');
apiRequestExplanationNode.id = apiRequestName + 'RequestExplanation';
var apiRequestExplanationNodeHeader = document.createElement('h3');
apiRequestExplanationNodeHeader.innerHTML = 'API Request Explanation';
apiRequestExplanationNode.appendChild(apiRequestExplanationNodeHeader);
var apiRequestExplanationEntry = document.createElement('p');
apiRequestExplanationEntry.innerHTML =
listApiRequestExplanations[apiRequestName];
apiRequestExplanationNode.appendChild(apiRequestExplanationEntry);
apiRequestNode.appendChild(apiRequestNodeHeader);
apiRequestNode.appendChild(apiRequestExplanationNode);
var apiRequestCodeSnippetNode = document.createElement('div');
apiRequestCodeSnippetNode.id = apiRequestName + 'CodeSnippet';
var apiRequestCodeSnippetHeader = document.createElement('h3');
apiRequestCodeSnippetHeader.innerHTML = 'API Request Code Snippet';
apiRequestCodeSnippetNode.appendChild(apiRequestCodeSnippetHeader);
var apiRequestCodeSnippetEntry = document.createElement('pre');
//If the selected API command is not 'insertObject', pass the request
//parameters to the getCodeSnippet method call as 'request.B.rpcParams'
//else pass request parameters as 'request.B'
if (apiRequestName != 'insertObject') {
apiRequestCodeSnippetEntry.innerHTML =
getCodeSnippet(request.B.method, request.B.rpcParams);
//Selected API Command is not 'insertObject'
//hide insert object button
filePicker.style.display = 'none';
} else {
apiRequestCodeSnippetEntry.innerHTML =
getCodeSnippet(request.B.method, request.B);
}
apiRequestCodeSnippetNode.appendChild(apiRequestCodeSnippetEntry);
apiRequestNode.appendChild(apiRequestCodeSnippetNode);
var apiResponseNode = document.createElement('div');
apiResponseNode.id = apiRequestName + 'Response';
var apiResponseHeader = document.createElement('h3');
apiResponseHeader.innerHTML = 'API Response';
apiResponseNode.appendChild(apiResponseHeader);
var apiResponseEntry = document.createElement('pre');
apiResponseEntry.innerHTML = JSON.stringify(resp, null, ' ');
apiResponseNode.appendChild(apiResponseEntry);
apiRequestNode.appendChild(apiResponseNode);
var content = document.getElementById('main-content');
content.appendChild(apiRequestNode);
});
}
/**
* Set required API keys and check authentication status.
*/
function handleClientLoad() {
gapi.client.setApiKey(apiKey);
window.setTimeout(checkAuth, 1);
}
/**
* Authorize Google Cloud Storage API.
*/
function checkAuth() {
gapi.auth.authorize({
client_id: clientId,
scope: scopes,
immediate: true
}, handleAuthResult);
}
/**
* Handle authorization.
*/
function handleAuthResult(authResult) {
var authorizeButton = document.getElementById('authorize-button');
if (authResult && !authResult.error) {
authorizeButton.style.visibility = 'hidden';
initializeApi();
filePicker.onchange = insertObject;
} else {
authorizeButton.style.visibility = '';
authorizeButton.onclick = handleAuthClick;
} console.log(gapi.auth);
}
/**
* Handle authorization click event.
*/
function handleAuthClick(event) {
gapi.auth.authorize({
client_id: clientId,
scope: scopes,
immediate: false
}, handleAuthResult);
return false;
}
/**
* Load Google Cloud Storage API v1beta1.
*/
function initializeApi() {
gapi.client.load('storage', API_VERSION);
}
/**
* Driver for sample application.
*/
$(window)
.bind('load', function() {
gapi.auth.setToken('<?php print $access_token["access_token"]; ?>');
gapi.auth.token= '<?php print $access_token["access_token"]; ?>';
console.log(gapi.auth.getToken());
addSelectionSwitchingListeners();
handleClientLoad();
});
</script>
</head>
<body>
<!--Add a button for the user to click to initiate auth sequence -->
<button id="authorize-button" style="visibility: hidden">Authorize</button>
<header>
<h1>Google Cloud Storage JavaScript Client Library Application</h1>
</header>
<label id="api-label">Try a sample API call!</label>
<select id="api-selection-options">
<option value="null">
Please select an example API call from the dropdown menu
</option>
<option value="listBuckets">
List Buckets
</option>
<option value="listObjects">
List Objects
</option>
<option value="listBucketsAccessControls">
List Buckets Access Control List
</option>
<option value="listObjectsAccessControls">
List Objects Access Control List
</option>
<option value="getBucket">
Get Bucket
</option>
<option value="getBucketAccessControls">
Get Bucket Access Controls
</option>
<option value="getObjectAccessControls">
Get Object Access Controls
</option>
<option value="insertBucket">
Insert Bucket
</option>
<option value="insertObject">
Insert Object
</option>
<option value="insertBucketAccessControls">
Insert Bucket Access Controls
</option>
<option value="insertObjectAccessControls">
Insert Object Access Controls
</option>
<option value="deleteBucket">
Delete Bucket
</option>
<option value="deleteObject">
Delete Object
</option>
</select>
<br/>
<input type="file" id="filePicker" style="display: none" />
<div id="main-content">
</div>
</body>
</html>
Does anyone have any ideas? I'm not being successful; my console prints this out as an error when loading the page:
https://accounts.google.com/o/oauth2/auth?client_id=YOUR_CLIENT_ID&scope=ht…ifechurch.tv&response_type=token&state=412841043%7C0.2670569503&authuser=0
While it is possible to retrieve an access token in your server code for a service account and then include it in a web page or JavaScript response to give a browser direct access to Google Cloud Storage, this is generally a very dangerous approach. The access token allows anyone who has it full access to all of your Cloud Storage resources, with no opportunity for your trusted server-side code to apply access control restrictions specific to your application (for example, allowing a user of your application to upload/download only the files/objects they should have access to).
Direct file upload/download to Cloud Storage can, however, be achieved a slightly different way. Google Cloud Storage supports signed URLs: a URL that you can generate in your server code (using the same private key you used to get an access token) that allows the bearer to upload or download a specific file for a specific period of time.
Documentation for this can be found here:
https://developers.google.com/storage/docs/accesscontrol#Signed-URLs
And an example from PHP here:
https://groups.google.com/forum/#!msg/google-api-php-client/jaRYDWdpteQ/xbNTLfDhUggJ
Once you generate this URL on your server, you can pass it to the browser, which in turn can PUT directly to that URL, or GET directly from it, without the need for any special Google JavaScript libraries. See for example: Google Storage REST PUT With Signed URLs
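For illustration, a minimal sketch of building such a signed URL (the V2 scheme) in Node, assuming the service account's .p12 key has been converted to a PEM private key; the same string-to-sign and RSA-SHA256 signature apply in PHP:
const crypto = require('crypto');

// build a signed URL granting a time-limited GET on one object
function makeSignedUrl(bucket, object, privateKeyPem, clientEmail, ttlSeconds) {
    const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
    // string to sign: verb, content-md5, content-type, expiration, resource
    const stringToSign = ['GET', '', '', expires, '/' + bucket + '/' + object].join('\n');
    const signature = crypto.createSign('RSA-SHA256')
        .update(stringToSign)
        .sign(privateKeyPem, 'base64');
    return 'https://storage.googleapis.com/' + bucket + '/' + object +
        '?GoogleAccessId=' + encodeURIComponent(clientEmail) +
        '&Expires=' + expires +
        '&Signature=' + encodeURIComponent(signature);
}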
I have recently completed a similar project using Ajax to access my PHP class, which controls the access token renewals, connection URLs, etc. As it is a live connection over HTTPS, no sensitive information is visible in the code, and I can then load the relevant data directly into JavaScript.
Hope this helps.
