How to allow other users to execute a script on Google Apps Script - javascript

I have a script that works fine on my account, but I want other users to be able to use that script over the API Executable.
var serviceScript = new ScriptService(new BaseClientService.Initializer()
{
    HttpClientInitializer = UserInfo.Credentials,
    ApplicationName = "Read Google Scripts .NET",
});

var scriptId = "scriptId";
var Param = new { spreadsheetId = spreadsheetId };

var exec = new ExecutionRequest();
exec.Function = "createDocument";
exec.DevMode = true;
exec.Parameters = new List<object>();
exec.Parameters.Add(Param);

var script = serviceScript.Scripts.Run(exec, scriptId);
var result = script.Execute();
This is the error I get when another user tries to access it:
GoogleApiException: The service script has thrown an exception. HttpStatusCode is Forbidden. The caller does not have permission
at Google.Apis.Requests.ClientServiceRequest<TResponse>.ParseResponse(HttpResponseMessage response)
I am using OAuth 2.0, and after creating the Apps Script project I have two Client IDs.
I have deployed the script as 'Anyone with Google account', but it is not accessible over the API, although it works fine as a Web Application.

It looks like it's only possible if I give everybody the Editor role on the script.
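For reference, a hedged sketch of the manifest setting that governs API access, assuming the calling application's OAuth client lives in the same Cloud project as the script (which the Apps Script API requires): in appsscript.json, the executionApi block declares who may run the script through the API, for example:
{
  "executionApi": {
    "access": "ANYONE"
  }
}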

Related

Google Drive File IDs to Personal Website using Drive API & Javascript

I have a Google Drive folder filled with many audio files, and I want my website to play one at random when a user clicks a button. I have the logic set up already, and it's working if I manually enter the Google Drive links into my array like this:
var fileArray = ["https://docs.google.com/uc?export=open&id=XXXXXXX", "https://docs.google.com/uc?export=open&id=XXXXXXXXX"];
The problem is I have hundreds of files, and I update the Drive often, so I want it to update by itself. That is where the Google Drive API comes in. Unfortunately, I have now found myself going from 1 line of code to many lines of code and being totally lost. I have an API_KEY & CLIENT_ID. The documentation at Drive API -> Javascript Quickstart has a "handleAuthClick" function, but I don't want the user to have to sign in, as they will never be making changes to the Drive. I don't even see where it takes the folder ID, which I don't understand. People talk about JavaScript origins, redirect URIs, and the Google Picker API. The documentation at Drive API -> files.get has cURL & HTTP examples, and I've also seen people talking about JSON files storing information like service account details. Is it possible to just keep everything in my JavaScript script? Here is an example of some code I've tried:
<script src="https://apis.google.com/js/client:platform.js"></script>
<script type="text/javascript">
  var fileArray = [];
  const API_KEY = 'XXXXX';
  const CLIENT_ID = 'XXXXX';
  const CLIENT_SECRET = 'XXXXXX';
  const REDIRECT_URI = 'https://developers.google.com/oauthplayground';
  const DISCOVERY_DOC = 'https://www.googleapis.com/discovery/v1/apis/drive/v3/rest';
  const { google } = require('googleapis');
  const SCOPES = 'https://www.googleapis.com/auth/drive.metadata.readonly';

  function getFileIDs() {
    var folderId = 'XXXXXX';
    let response;
    try {
      response = await gapi.client.drive.files.list({
        'fileId': folderId,
        'fields': 'files(id)',
      });
      for (var i = 0; i < files.length; i++) {
        var currentID = files[i].id;
        var currentDriveLink = 'https://docs.google.com/uc?export=open&id=' + currentID;
        fileArray.push(currentDriveLink);
      }
    }
  }
</script>
I just want the minimum code to get to file.id; then I can prepend the URL structure and push it to my array. Any thoughts are helpful, thank you.
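For what it's worth, a minimal sketch of that "minimum code", assuming the folder is shared publicly ("Anyone with the link"), since an API key alone can only read public files; it reuses the same placeholder key and folder ID as above:
<script src="https://apis.google.com/js/api.js"></script>
<script>
  var fileArray = [];

  gapi.load('client', async function () {
    await gapi.client.init({
      apiKey: 'XXXXX', // the API key
      discoveryDocs: ['https://www.googleapis.com/discovery/v1/apis/drive/v3/rest'],
    });
    const response = await gapi.client.drive.files.list({
      q: "'XXXXXX' in parents",   // the folder ID; files.list filters with a query, not a fileId
      fields: 'files(id)',
      pageSize: 1000,
    });
    response.result.files.forEach(function (file) {
      fileArray.push('https://docs.google.com/uc?export=open&id=' + file.id);
    });
  });
</script>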

Script to transfer user data to another account (for example, md@org.com) after deleting a user account from G Suite Admin SDK

I'm trying to develop a Google Apps Script to transfer user data to another account (for example, md@org.com) after deleting a user account via the G Suite Admin SDK.
I've tried, and I'm unable to find a script anywhere that transfers user data after the Google mail account is deleted.
function onFormSubmit(e) {
  deleteUsers(e);
}

function deleteUsers(e) {
  var ss = SpreadsheetApp.openById('1Z0cNwh2BJLrq1bMQS3eU1tWLrjz2DLUne8CY3rMM7OE');
  var sheet = ss.getSheetByName('Delete Users');
  var data = sheet.getDataRange().getValues();
  var len = data.length;
  for (var i = 1; i < len; i++) {
    var user = data[i][0];            // first column: user to delete
    var transferToEmail = data[i][1]; // second column: "Transfer to Email"
    Logger.log(user);
    // use try/catch in case a user is already removed
    try {
      AdminDirectory.Users.remove(user);
    } catch (err) {
      Logger.log(err);
    }
  }
}
Some Google APIs are integrated with Google Apps Script as advanced services; unfortunately, the Data Transfer API isn't one of them. That is why the answer to How to execute Data Transfer API? uses UrlFetchApp to make an HTTP request to the Data Transfer API, instead of something like AdminDirectory, which calls the Directory API / Reports API.
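For reference, a minimal sketch of what that UrlFetchApp request can look like. Assumptions: the https://www.googleapis.com/auth/admin.datatransfer scope is authorized for the script, and the three IDs are the numeric user IDs from the Directory API plus the application ID from the Data Transfer API's applications.list, not email addresses.
function transferData(oldOwnerUserId, newOwnerUserId, applicationId) {
  var payload = {
    oldOwnerUserId: oldOwnerUserId,
    newOwnerUserId: newOwnerUserId,
    applicationDataTransfers: [{ applicationId: applicationId }]
  };
  var response = UrlFetchApp.fetch('https://admin.googleapis.com/admin/datatransfer/v1/transfers', {
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() },
    payload: JSON.stringify(payload)
  });
  Logger.log(response.getContentText());
}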

Trying to post a URL segment to Firebase

I am running a script on a public webpage, and I want to post part of the URL into Firebase.
I can insert a button that retrieves the URL segment as a string variable, but I can't post automatically to Firebase from the open page because of permissions. Is there any way to do this other than creating an external page and posting the variable manually? Here is the script I am using. It runs fine on external pages, but I want to run it from the public page.
function pushit() {
  firebase.initializeApp(config);
  var url = location.href;
  var filename = url.substr(38, 8);
  var database = firebase.database();
  var ref = database.ref('url/data');
  var data = { url: filename };
  ref.push(data);
  console.log("Push successful!");
}
The error I get is:
Uncaught ReferenceError: pushit is not defined
at HTMLButtonElement.onclick (index.html)
I created a popup window instead, which sends the data to Firebase. I realized that running this sort of code on other websites was not possible.
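A rough sketch of that workaround, with a hypothetical push.html URL: the public page only opens a small page you host and hands it the URL segment, and that hosted page (where the Firebase SDK and your config are allowed to run) does the actual push.
// On the public page: open the hosted helper page with the segment in the hash.
function pushit() {
  var filename = location.href.substr(38, 8);
  window.open('https://your-site.example.com/push.html#' + encodeURIComponent(filename),
    'pushWindow', 'width=300,height=120');
}

// On push.html: read the segment back and push it, as in the original code.
// firebase.initializeApp(config);
// firebase.database().ref('url/data').push({ url: decodeURIComponent(location.hash.slice(1)) });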

Gmail attachment using the Gmail API

I am trying to download an attachment using the Gmail API, and below is the code for that:
var Data = req.body;
var parts = Data.payload.parts;
for (var i = 0; i < parts.length; i++) {
  // use let so each async callback below sees its own part, not the last one
  let part = parts[i];
  if (part.filename && part.filename.length > 0) {
    var attachId = part.body.attachmentId;
    var request = gapi.client.gmail.users.messages.attachments.get({
      'id': attachId,
      'messageId': message.id,
      'userId': userId
    });
    request.execute(function (attachment) {
      callback(part.filename, part.mimeType, attachment);
    });
  }
}
I followed the Gmail API documentation on getting the attachment, and since it requires authorization as mentioned there, how do I pass the refreshToken, clientSecret, clientId, accessToken, etc., and is that even required in the first place?
Currently I am getting "Gmail is not defined". I have installed gapi and included it as:
var cs = require("coffee-script/register");
var gapi = require('gapi');
I haven't used gapi inside a Node.js environment, but from my experience using the gapi library in Chrome extensions, after loading the gapi script you need to load Gmail separately, something like this:
gapi.client.load('gmail', 'v1', callback);
And after that you can start using it. That's probably the reason for the "Gmail is not defined" error.
Additionally, you can always make API calls without using the gapi library.
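For example, here is a minimal sketch using the official googleapis Node client instead of gapi. It assumes you already have a client ID, client secret, and refresh token from an OAuth flow, plus a known message ID and attachment ID.
const { google } = require('googleapis');

async function downloadAttachment(clientId, clientSecret, refreshToken, messageId, attachmentId) {
  const auth = new google.auth.OAuth2(clientId, clientSecret);
  auth.setCredentials({ refresh_token: refreshToken });

  const gmail = google.gmail({ version: 'v1', auth: auth });
  const res = await gmail.users.messages.attachments.get({
    userId: 'me',
    messageId: messageId,
    id: attachmentId,
  });

  // the attachment body is base64url-encoded
  const data = res.data.data.replace(/-/g, '+').replace(/_/g, '/');
  return Buffer.from(data, 'base64');
}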

Is it possible to write a web crawler in JavaScript?

I want to crawl a page, check for the hyperlinks on that page, follow those hyperlinks, and capture data from the resulting pages.
Generally, browser JavaScript can only crawl within the domain of its origin, because fetching pages would be done via Ajax, which is restricted by the Same-Origin Policy.
If the page running the crawler script is on www.example.com, then that script can crawl all the pages on www.example.com, but not the pages of any other origin (unless some edge case applies, e.g., the Access-Control-Allow-Origin header is set for pages on the other server).
If you really want to write a fully-featured crawler in browser JS, you could write a browser extension: for example, Chrome extensions are packaged Web applications that run with special permissions, including cross-origin Ajax. The difficulty with this approach is that you'll have to write multiple versions of the crawler if you want to support multiple browsers. (If the crawler is just for personal use, that's probably not an issue.)
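As a rough sketch of what an in-browser, same-origin crawl step can look like with standard browser APIs (a cross-origin URL here would be blocked as described above):
// Fetch a same-origin page, parse it, and collect the links it points to.
async function crawlSameOrigin(path) {
  const res = await fetch(path);                 // blocked by CORS if cross-origin
  const html = await res.text();
  const doc = new DOMParser().parseFromString(html, 'text/html');
  return Array.from(doc.querySelectorAll('a[href]'))
    .map(a => new URL(a.getAttribute('href'), location.href))
    .filter(u => u.origin === location.origin)   // stay on this origin
    .map(u => u.href);
}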
If you use server-side JavaScript, it is possible.
You should take a look at node.js.
An example of a crawler can be found in the link below:
http://www.colourcoding.net/blog/archive/2010/11/20/a-node.js-web-spider.aspx
Google's Chrome team released puppeteer in August 2017, a Node library which provides a high-level API for both headless and non-headless Chrome (headless Chrome has been available since version 59).
It uses an embedded version of Chromium, so it is guaranteed to work out of the box. If you want to use a specific Chrome version, you can do so by launching puppeteer with an executable path as a parameter, such as:
const browser = await puppeteer.launch({executablePath: '/path/to/Chrome'});
An example of navigating to a webpage and taking a screenshot of it shows how simple it is (taken from the GitHub page):
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({path: 'example.png'});
  await browser.close();
})();
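Since the question is about following hyperlinks, here is a similar sketch with the same puppeteer API that collects a page's links instead of taking a screenshot:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  // evaluate in the page context and return every anchor's absolute href
  const links = await page.$$eval('a[href]', anchors => anchors.map(a => a.href));
  console.log(links); // feed these back into your crawl queue
  await browser.close();
})();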
You can crawl pages using server-side JavaScript with the help of headless WebKit. For crawling, there are a few libraries such as PhantomJS and CasperJS; there is also a newer wrapper around PhantomJS called Nightmare JS which makes the work easier.
There are ways to circumvent the same-origin policy with JS. I wrote a crawler for Facebook that gathered information from the profiles of my friends and my friends' friends and allowed filtering the results by gender, current location, age, marital status (you catch my drift). It was simple. I just ran it from the console. That way your script gets the privilege to make requests on the current domain. You can also make a bookmarklet to run the script from your bookmarks.
Another way is to provide a PHP proxy. Your script accesses the proxy on the current domain and requests files from another domain through PHP. Just be careful with those: they might get hijacked and used as a public proxy by a third party if you are not careful.
Good luck, maybe you make a friend or two in the process like I did :-)
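A rough sketch of the browser side of that proxy approach (proxy.php and its url parameter are hypothetical; the proxy itself is a separate server-side script on your own domain):
// Ask a same-origin proxy to fetch a page from another domain on our behalf.
fetch('/proxy.php?url=' + encodeURIComponent('https://other-domain.example.com/page'))
  .then(res => res.text())
  .then(html => {
    // parse the returned HTML and pull out whatever the crawler needs
    const doc = new DOMParser().parseFromString(html, 'text/html');
    console.log(doc.title);
  });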
My typical setup is to use a browser extension with cross-origin privileges set, which injects both the crawler code and jQuery.
Another take on JavaScript crawlers is to use a headless browser like PhantomJS or CasperJS (which boosts Phantom's powers).
This is what you need: http://zugravu.com/products/web-crawler-spider-scraping-javascript-regular-expression-nodejs-mongodb
They use NodeJS, MongoDB and ExtJS as the GUI.
Yes, it is possible:
Use Node.js (it's server-side JS).
There is NPM (a package manager that handles 3rd-party modules) in Node.js.
Use PhantomJS in Node.js (a third-party module that can crawl through websites).
There is a client-side approach for this, using the Firefox Greasemonkey extension. With Greasemonkey you can create scripts to be executed each time you open the specified URLs.
Here is an example:
If you have URLs like these:
http://www.example.com/products/pages/1
http://www.example.com/products/pages/2
then you can use something like this to open all pages containing the product list (execute this manually):
var j = 0;
for (var i = 1; i < 5; i++) {
  setTimeout(function () {
    j = j + 1;
    window.open('http://www.example.com/products/pages/' + j, '_blank');
  }, 15000 * i);
}
Then you can create a script to open all products in a new window for each product-list page, and include this URL pattern in Greasemonkey for that:
http://www.example.com/products/pages/*
And then a script for each product page to extract the data, call a web service passing the data, close the window, and so on (sketched below).
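A rough Greasemonkey sketch of that last per-product script; the @match pattern, the selectors, and the web-service endpoint are hypothetical and would need to match the real product pages:
// ==UserScript==
// @name     Product page scraper (sketch)
// @match    http://www.example.com/products/*
// @grant    GM_xmlhttpRequest
// ==/UserScript==
// Hypothetical selectors; adjust them to the real product page.
var name = document.querySelector('h1') ? document.querySelector('h1').textContent.trim() : '';
var price = document.querySelector('.price') ? document.querySelector('.price').textContent.trim() : '';
GM_xmlhttpRequest({
  method: 'POST',
  url: 'https://my-webservice.example.com/collect',   // hypothetical endpoint
  data: JSON.stringify({ name: name, price: price, url: location.href }),
  headers: { 'Content-Type': 'application/json' },
  onload: function () { window.close(); }              // close the tab when done
});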
I made an example JavaScript crawler on GitHub.
It's event-driven and uses an in-memory queue to store all the resources (i.e. URLs).
How to use it in your node environment:
var Crawler = require('../lib/crawler')
var crawler = new Crawler('http://www.someUrl.com');
// crawler.maxDepth = 4;
// crawler.crawlInterval = 10;
// crawler.maxListenerCurrency = 10;
// crawler.redisQueue = true;
crawler.start();
Here I'm just showing you two core methods of a JavaScript crawler.
Crawler.prototype.run = function() {
  var crawler = this;
  process.nextTick(() => {
    //the run loop
    crawler.crawlerIntervalId = setInterval(() => {
      crawler.crawl();
    }, crawler.crawlInterval);
    //kick off first one
    crawler.crawl();
  });
  crawler.running = true;
  crawler.emit('start');
}

Crawler.prototype.crawl = function() {
  var crawler = this;
  if (crawler._openRequests >= crawler.maxListenerCurrency) return;
  //go get the item
  crawler.queue.oldestUnfetchedItem((err, queueItem, index) => {
    if (queueItem) {
      //got the item, start the fetch
      crawler.fetchQueueItem(queueItem, index);
    } else if (crawler._openRequests === 0) {
      crawler.queue.complete((err, completeCount) => {
        if (err) throw err;
        crawler.queue.getLength((err, length) => {
          if (err) throw err;
          if (length === completeCount) {
            //no open requests, no unfetched items: stop the crawler
            crawler.emit("complete", completeCount);
            clearInterval(crawler.crawlerIntervalId);
            crawler.running = false;
          }
        });
      });
    }
  });
};
Here is the GitHub link: https://github.com/bfwg/node-tinycrawler.
It is a JavaScript web crawler written in under 1,000 lines of code.
This should put you on the right track.
You can make a web crawler driven from a remote JSON file that opens all links from a page in new tabs as soon as each tab loads, except the ones that have already been opened. If you set up a browser extension running in a basic browser (nothing runs except the web browser and an internet-config program) and had it shipped and installed somewhere with good internet, you could build a database of webpages with an old computer; it would just need to retrieve the content of each tab. You could do that for about $2,000, contrary to most estimates for search-engine costs. You'd just need to make your algorithm rank pages based on how often a term appears in the innerText property of the page, the keywords, and the description. You could also set up another PC to recrawl old pages from the one-time database and add more. I'd estimate it would take about 3 months and $20,000, maximum. A sketch of the tab-opening part of that extension appears below.
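A rough background-script sketch of that tab-opening part (the seed-list URL is hypothetical, and in a real extension the "seen" set would need to be persisted):
const seen = new Set();

async function crawlFromRemoteList() {
  const res = await fetch('https://example.com/seeds.json'); // hypothetical remote JSON file of URLs
  const urls = await res.json();
  for (const url of urls) {
    if (!seen.has(url)) {
      seen.add(url);
      chrome.tabs.create({ url: url, active: false }); // open each unseen link in a background tab
    }
  }
}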
Axios + Cheerio
You can do this with axios and cheerio. Check the axios docs for the response format.
const cheerio = require('cheerio');
const axios = require('axios');

//crawl
//get url
var url = 'http://amazon.com';
axios.get(url)
  .then((res) => {
    //response format
    var body = res.data;
    var statusCode = res.status;
    var statusText = res.statusText;
    var headers = res.headers;
    var request = res.request;
    var config = res.config;

    //jquery
    let $ = cheerio.load(body);

    //example
    //meta tags
    var title = $('meta[name=title]').attr('content');
    if (title == undefined || title == 'undefined') {
      title = $('title').text();
    }
    var description = $('meta[name=description]').attr('content');
    var keywords = $('meta[name=keywords]').attr('content');
    var author = $('meta[name=author]').attr('content');
    var type = $('meta[http-equiv=content-type]').attr('content');
    var favicon = $('link[rel="shortcut icon"]').attr('href');
  })
  .catch(function (e) {
    console.log(e);
  });
Node-Fetch + Cheerio
You can do the same thing with node-fetch and cheerio.
const cheerio = require('cheerio');
const fetch = require('node-fetch');

var url = 'http://amazon.com';
fetch(url, {
  method: "GET",
})
  .then(function (response) {
    //response body as text (resolved by the next .then)
    return response.text();
  })
  .then(function (html) {
    //jquery
    let $ = cheerio.load(html);

    //meta tags
    var title = $('meta[name=title]').attr('content');
    if (title == undefined || title == 'undefined') {
      title = $('title').text();
    }
    var description = $('meta[name=description]').attr('content');
    var keywords = $('meta[name=keywords]').attr('content');
    var author = $('meta[name=author]').attr('content');
    var type = $('meta[http-equiv=content-type]').attr('content');
    var favicon = $('link[rel="shortcut icon"]').attr('href');
  })
  .catch((error) => {
    console.error('Error:', error);
  });
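And since the original question is about following hyperlinks, here is a short sketch of extracting them once cheerio has loaded the HTML (the same idea works after either example above):
const cheerio = require('cheerio');

function extractLinks(html, baseUrl) {
  const $ = cheerio.load(html);
  return $('a[href]')
    .map((i, el) => new URL($(el).attr('href'), baseUrl).href) // resolve relative links
    .get();
}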
