Get Latest Vimeo Portfolio Video - javascript

I have a client that wants to pull the latest video in a specific Vimeo Portfolio. I can pull in the latest video on the entire account using JS like so:
http://codepen.io/buschschwick/pen/pgrmvg
var vimeoUserName = 'yellowboxfilms';
// Tell Vimeo what function to call
var videoCallback = 'latestVideo';
var oEmbedCallback = 'embedVideo';
// Set up the URLs
var videosUrl = 'http://vimeo.com/api/v2/' + vimeoUserName + '/videos.json?callback=' + videoCallback;
var oEmbedUrl = 'http://vimeo.com/api/oembed.json';
// This function puts the video on the page
function embedVideo(video) {
  videoEmbedCode = video.html;
  document.getElementById('embed').innerHTML = unescape(video.html);
}
// This function uses oEmbed to get the last clip
function latestVideo(videos) {
  var videoUrl = videos[0].url;
  // Get the oEmbed stuff
  loadScript(oEmbedUrl + '?url=' + encodeURIComponent(videoUrl) + '&callback=' + oEmbedCallback);
}
// This function loads the data from Vimeo
function loadScript(url) {
  var js = document.createElement('script');
  js.setAttribute('type', 'text/javascript');
  js.setAttribute('src', url);
  document.getElementsByTagName('head').item(0).appendChild(js);
}
// Call our init function when the page loads
window.onload = function() {
  loadScript(videosUrl);
};
But I want to pull the latest video in a specific portfolio. I found the API call, but I get an authorization error.
http://codepen.io/buschschwick/pen/jWLoWb
var latestVideo = function() {
  var vimeoAPI = 'https://api.vimeo.com/users/414104/portfolios';
  $.getJSON(vimeoAPI).done(function(data) {
    console.log(data);
  });
};
latestVideo();
I think it might need an OAuth token or something like that, but trying to find out how to pass that got me nowhere, and I feel the Vimeo API docs aren't helping either. Any help or guidance would be much appreciated. Thanks!

Here are Vimeo's authentication docs: https://developer.vimeo.com/api/authentication
You can generate a single token on your app page, or you can generate the token on a server.
Vimeo's token generation does not yet support client-side authorization, so be aware that if you ship the token to the client, anyone can take that token and make API calls.
You can reduce the risk by requesting read-only scopes, but that token will still have access to private data.
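Once you have a token, you pass it in an Authorization header on every request. Here is a rough sketch against the portfolio endpoints mentioned in the question, using jQuery like the CodePen above (the portfolio ID is a placeholder you would look up first, for example via the /users/{user_id}/portfolios listing):
var accessToken = 'YOUR_TOKEN';        // generated on your Vimeo app page
var userId = '414104';
var portfolioId = 'YOUR_PORTFOLIO_ID'; // placeholder: pick one from the portfolios listing

// Fetch the videos in the portfolio; the token is sent as a bearer header.
$.ajax({
  url: 'https://api.vimeo.com/users/' + userId + '/portfolios/' + portfolioId + '/videos',
  headers: { 'Authorization': 'Bearer ' + accessToken }
}).done(function (response) {
  // API responses wrap results in a `data` array; which entry is "latest"
  // depends on the portfolio's sort order.
  console.log(response.data[0]);
});
Again, anything you ship to the browser exposes that token, so prefer a read-only scope or proxy the call through your own server.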

Related

How do I get the Figma API to work with the Google Apps Script API?

I am thinking of creating a Google Slides to Figma exporter using Google Apps Script. To start, I would like to route the shapes created in Google Slides to Figma. How would I go about setting up my file? I also don't know how to set up the OAuth communication between Google and Figma, or whether it's even possible.
I believe that I can start with:
References
Figma reference
https://github.com/figma/plugin-samples/blob/master/react/src/code.ts
Google Apps Script reference
https://github.com/gsuitedevs/apps-script-samples/blob/master/slides/style/style.gs#L30
Get Figma Shape
// Incomplete sketch: load a font, then create rectangles from a plugin message
var file = projectId.key(); // placeholder: reference to the Figma file
await figma.loadFontAsync({ family: "Roboto", style: "Regular" });

var figmaShape = function () {
  figma.ui.onmessage = msg => {
    if (msg.type === 'create-rectangles') {
      const nodes = [];
      for (let i = 0; i < msg.count; i++) {
        const rect = figma.createRectangle();
        rect.x = i * 150;
        rect.fills = [{type: 'SOLID', color: {r: 1, g: 0.5, b: 0}}];
        figma.currentPage.appendChild(rect);
        nodes.push(rect);
      }
      figma.currentPage.selection = nodes;
      figma.viewport.scrollAndZoomIntoView(nodes);
    }
    figma.closePlugin();
  };
};
Get Google Docs File Shape
var powerPointFile = DriveApp.getFileById("### Slide file ID ###");
function powerPointShape() {
  var slide = SlidesApp.getActivePresentation().getSlides()[0];
  // left, top, width, height (the height value is a placeholder)
  var shape = slide.insertShape(SlidesApp.ShapeType.TEXT_BOX, 100, 200, 300, 60);
  return shape.getObjectId();
}
Create new Figma file
file.getSlides.shape = (powerPointShape) => {
  this.powerPointShape.getRight() = this.figmaShape(rect.x);
  this.powerPointShape.getLeft() = this.figmaShape(rect.y);
};
But from there, would I also need to get the file ID from Google Apps Script into a Figma file?
And after looking at https://github.com/alyssaxuu/figma-to-google-slides/blob/master/Chrome%20Extension/background.js I wonder whether I would have to create a Chrome extension or a Google Slides plugin.
How about this answer?
Issue and workaround:
Unfortunately, it seems that the shapes of Google Slides cannot be put onto a page of a Figma file, because the API appears to have no methods for inserting shapes. However, it turns out that the pages of a Figma file can be retrieved as images using the Figma API.
In this answer, I would like to propose a sample script that puts the pages of a Figma file into Google Slides as images, using the Figma API with an access token. So you can use the Figma API directly from Google Apps Script.
Usage:
1. Retrieve access token
You can see the method for retrieving the access token here. Although OAuth2 can also be used to retrieve an access token, in your situation I thought that generating the access token directly on the site might be more suitable, so that is what this answer uses. Please retrieve the access token as follows.
Generate a personal access token
Login to your Figma account.
Head to the Account Settings from the top-left menu inside Figma.
Find the Personal Access Tokens section.
Click Create new token.
A token will be generated. This will be your only chance to copy the token, so make sure you keep a copy of this in a secure place.
The access token looks like #####-########-####-####-####-############. In Google Apps Script, authorization is done with the header {"X-Figma-Token": accessToken}.
2. Retrieve file key
In order to retrieve a Figma file using the Figma API, the file key is required. You can take it from the file's URL.
The URL of the file looks like https://www.figma.com/file/###/sampleFilename. In this case, ### is the file key.
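If you want to pull the file key out of a URL programmatically, a small helper like this could be used (just a sketch; the regular expression is an assumption based on the URL format above):
function getFileKey(figmaUrl) {
  // e.g. "https://www.figma.com/file/###/sampleFilename" -> "###"
  var match = figmaUrl.match(/figma\.com\/file\/([^\/]+)/);
  return match ? match[1] : null;
}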
3. Run script
The sample script is as follows. Before you run the script, please set the variables of accessToken and fileKey.
function myFunction() {
  var accessToken = "###"; // Please set your access token.
  var fileKey = "###"; // Please set the file key.
  var baseUrl = "https://api.figma.com/v1";
  var params = {
    method: "get",
    headers: {"X-Figma-Token": accessToken},
    muteHttpExceptions: true,
  };
  var fileInfo = JSON.parse(UrlFetchApp.fetch(baseUrl + "/files/" + fileKey, params));
  var children = JSON.parse(UrlFetchApp.fetch(baseUrl + "/images/" + fileKey + "?format=jpg&scale=3&ids=" + fileInfo.document.children.map(function(c) {return c.id}).join(","), params));
  if (!children.err) {
    var s = SlidesApp.create("sampleSlide");
    var slide = s.getSlides()[0];
    var keys = Object.keys(children.images);
    keys.forEach(function(c, i) {
      slide.insertImage(children.images[c]);
      if (i != keys.length - 1) slide = s.insertSlide(i + 1);
    });
  } else {
    throw new Error(children);
  }
}
When myFunction() is run, the file information is first retrieved using fileKey. Then all pages are retrieved as images from the file information, and each page is placed on its own slide of a new Google Slides presentation.
I think the behavior of this script is similar to the script shown at the bottom of your question.
Note:
This is a sample script. So please modify it for your actual situation.
References:
Figma API
Class UrlFetchApp
Class SlidesApp
If I misunderstood your question and this was not the direction you want, I apologize.

Gmail Attachment using gmail API

I am trying to download an attachment using the Gmail API, and below is the code for that:
var Data = req.body;
var parts = Data.payload.parts;
for (var i = 0; i < parts.length; i++) {
  var part = parts[i];
  if (part.filename && part.filename.length > 0) {
    var attachId = part.body.attachmentId;
    var request = gapi.client.gmail.users.messages.attachments.get({
      'id': attachId,
      'messageId': message.id,
      'userId': userId
    });
    request.execute(function(attachment) {
      callback(part.filename, part.mimeType, attachment);
    });
  }
}
I have used the link Gmail API to get the attachment, and since it requires authorization as mentioned, how do I pass the refreshToken, clientSecret, clientId, accessToken, etc., or is that even required in the first place?
Currently I am getting "Gmail is not defined". I have installed gapi and included it as:
var cs = require("coffee-script/register");
var gapi = require('gapi');
I haven't used gapi inside a Node.js environment, but from my experience using the gapi library in Chrome extensions, after loading the gapi script you need to load gmail separately, something like this:
gapi.client.load('gmail', 'v1', callback);
After that you can start using it. That's probably the reason for the "Gmail is not defined" error.
Additionally, you can always make API calls without using gapi library.
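For example, with a valid OAuth2 access token you can call the REST endpoint for attachments directly. A rough browser-side sketch using fetch (the accessToken, messageId, and attachmentId are assumed to be already available; the attachment body comes back base64url-encoded in the data field):
var endpoint = 'https://www.googleapis.com/gmail/v1/users/me/messages/' +
               messageId + '/attachments/' + attachmentId;

fetch(endpoint, {
  headers: { 'Authorization': 'Bearer ' + accessToken }
})
  .then(function (res) { return res.json(); })
  .then(function (attachment) {
    // attachment.data is base64url-encoded; convert to standard base64 before decoding.
    var base64 = attachment.data.replace(/-/g, '+').replace(/_/g, '/');
    var bytes = atob(base64);
    console.log('Attachment size:', attachment.size, 'decoded length:', bytes.length);
  })
  .catch(function (err) { console.error(err); });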

Real-time basic web analytics with Javascript

I need to develop an in-house real-time analytics solution (similar to GA or mixpanel for example) that collects:
Information from the website itself (URL)
Information from the user's browser (lang, device, OS, etc.)
Information from the referring source, etc.
... and sends this data to the server with a single-pixel image request, similar to how GA and other solutions work:
Google Analytics works by the inclusion of a block of JavaScript code
on pages in your website. When users to your website view a page, this
JavaScript code references a JavaScript file which then executes the
tracking operation for Analytics. The tracking operation retrieves
data about the page request through various means and sends this
information to the Analytics server via a list of parameters attached
to a single-pixel image request.
I wonder if there's any open-source project available that does this part, which I could use as a base to build on. There's Piwik, but it's too feature-packed and too heavy for my requirements.
Edited to add: I'm doing something specific with the data, otherwise I'd just use the existing solutions.
Try
var img = new Image();
img.width = img.height = 1;
var res = window.navigator;
var data = {};
var _plugins = {};
Array.prototype.slice.call(navigator.plugins).forEach(function(v, k) {
  _plugins[v.name.toLowerCase().replace(/\s/g, "-")] = {
    "name": v.name,
    "description": v.description,
    "filename": v.filename
  };
});
data.url = window.location.href;
data.ref = document.referrer;
data.nav = res;
data._plugins = _plugins;
// set `img` `dataset` with `data`,
// send `img` to server, decode `img` `dataset` at server
img.dataset.stats = JSON.stringify(data);
There are two big open-source analytics solutions.
Piwik, as you mentioned, is a well-documented and pretty mature solution. Drilling into its code to see how Piwik does things will give you some insights.
Open Web Analytics is the other big player in the game: a simpler tool that will help you understand how basic tracking is done.
Depending on the data you want to track, I would also suggest taking a look at this tutorial, which uses sockets to track real-time data.
Last but not least, you can also check what Crazy Egg does if you want to track user interactivity.

How to read/write google spreadsheets from javascript with supported APIs?

I am terribly confused about how one is supposed to write a JavaScript client (non-gadget) for a private Google Spreadsheet using supported APIs. I have no difficulty getting an OAuth2 Drive API client going, but then there is no spreadsheet support!
https://developers.google.com/apis-explorer
This issue crudely asks for the spreadsheet API to appear on that page:
http://code.google.com/p/google-api-javascript-client/issues/detail?id=37
I am probably missing something obvious, so thank you for your kindness to help me...
Update:
Wow, this is kicking my behind! So, I am going down the path of attempting to take the access_token from the OAuth2 workflow and then set the gdata API Authorization header like so:
service = new google.gdata.client.GoogleService('testapp');
service.setHeaders({'Authorization': 'Bearer '+ access_token});
Unfortunately, the Chrome console shows that this header is not actually being sent to Google when I do something like
service.getFeed(url, cb, eb);
Uffff!
In order to get information from Google Spreadsheets, just send a GET request to the relevant link with the access token attached. The urlLocation is found by going to Google Drive and copying the long string of digits and letters in the URL after "key=".
I also used jQuery in this example.
Code:
var urlLocation = ''; // Put the Spreadsheet location here
var url = 'https://spreadsheets.google.com/feeds/list/' + urlLocation + '/od6/private/full?access_token=' + token;
$.get(url, function(data) {
  console.log(data);
});
In order to get a JSON representation, use this instead:
var urlLocation = ''; // Same as above
var url = 'https://spreadsheets.google.com/feeds/list/' + urlLocation + '/od6/private/full?alt=json-in-script&access_token=' + token + '&callback=?';
$.getJSON(url, function(data) {
  console.log(data);
});
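To actually read rows out of the JSON response, a rough sketch (assuming the legacy list-feed shape, where each row is an entry and cell values live under gsx$<lowercased column header>.$t):
$.getJSON(url, function(data) {
  data.feed.entry.forEach(function(row) {
    // Assumes the sheet has a column named "Name"; adjust to your own headers.
    console.log(row['gsx$name'] && row['gsx$name']['$t']);
  });
});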

Is it possible to write a web crawler in JavaScript?

I want to crawl a page, check for the hyperlinks on that page, follow those hyperlinks, and capture data from the resulting pages.
Generally, browser JavaScript can only crawl within the domain of its origin, because fetching pages would be done via Ajax, which is restricted by the Same-Origin Policy.
If the page running the crawler script is on www.example.com, then that script can crawl all the pages on www.example.com, but not the pages of any other origin (unless some edge case applies, e.g., the Access-Control-Allow-Origin header is set for pages on the other server).
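Within that constraint, a same-origin crawler is not much code. A rough sketch using fetch and DOMParser (the breadth-first queue and page limit are my own choices, not a library API):
// Breadth-first crawl limited to the current origin.
async function crawl(startUrl, maxPages = 20) {
  const seen = new Set();
  const queue = [startUrl];
  while (queue.length && seen.size < maxPages) {
    const url = queue.shift();
    if (seen.has(url)) continue;
    seen.add(url);
    const html = await fetch(url).then(r => r.text());
    const doc = new DOMParser().parseFromString(html, 'text/html');
    for (const a of doc.querySelectorAll('a[href]')) {
      const next = new URL(a.getAttribute('href'), url);
      if (next.origin === location.origin) queue.push(next.href);
    }
  }
  return [...seen];
}

crawl(location.href).then(pages => console.log(pages));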
If you really want to write a fully-featured crawler in browser JS, you could write a browser extension: for example, Chrome extensions are packaged web applications that run with special permissions, including cross-origin Ajax. The difficulty with this approach is that you'll have to write multiple versions of the crawler if you want to support multiple browsers. (If the crawler is just for personal use, that's probably not an issue.)
If you use server-side JavaScript, it is possible.
You should take a look at Node.js.
An example of a crawler can be found in the link below:
http://www.colourcoding.net/blog/archive/2010/11/20/a-node.js-web-spider.aspx
Google's Chrome team released puppeteer in August 2017, a Node library which provides a high-level API for both headless and non-headless Chrome (headless Chrome has been available since version 59).
It uses an embedded version of Chromium, so it is guaranteed to work out of the box. If you want to use a specific Chrome version, you can do so by launching puppeteer with an executable path as a parameter, such as:
const browser = await puppeteer.launch({executablePath: '/path/to/Chrome'});
An example of navigating to a web page and taking a screenshot of it shows how simple it is (taken from the GitHub page):
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({path: 'example.png'});
  await browser.close();
})();
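Since the question is also about following hyperlinks, here is a small follow-up sketch (my own, not from the GitHub page) that collects every link on the page with page.$$eval; those URLs could then be queued and visited the same way:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  // Grab the href of every anchor on the page.
  const links = await page.$$eval('a', anchors => anchors.map(a => a.href));
  console.log(links);
  await browser.close();
})();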
We can crawl pages using JavaScript server-side with the help of headless WebKit. For crawling, we have a few libraries like PhantomJS and CasperJS; there is also a newer wrapper on PhantomJS called NightmareJS which makes the work easier.
There are ways to circumvent the same-origin policy with JS. I wrote a crawler for Facebook that gathered information from the Facebook profiles of my friends and my friends' friends and allowed filtering the results by gender, current location, age, marital status (you catch my drift). It was simple. I just ran it from the console. That way your script gets the privilege to make requests on the current domain. You can also make a bookmarklet to run the script from your bookmarks.
Another way is to provide a PHP proxy. Your script will access the proxy on the current domain and request files from another with PHP. Just be careful with those: they might get hijacked and used as a public proxy by a 3rd party if you are not careful.
Good luck, maybe you make a friend or two in the process like I did :-)
My typical setup is to use a browser extension with cross-origin privileges set, which injects both the crawler code and jQuery.
Another take on JavaScript crawlers is to use a headless browser like PhantomJS or CasperJS (which boosts Phantom's powers).
This is what you need http://zugravu.com/products/web-crawler-spider-scraping-javascript-regular-expression-nodejs-mongodb
They use NodeJS, MongoDB and ExtJs as GUI
Yes, it is possible:
Use Node.js (it's server-side JS).
There is npm (a package manager that handles 3rd-party modules) in Node.js.
Use PhantomJS in Node.js (a third-party module that can crawl through websites).
There is a client-side approach for this, using the Firefox Greasemonkey extension. With Greasemonkey you can create scripts to be executed each time you open specified URLs.
Here is an example:
If you have URLs like these:
http://www.example.com/products/pages/1
http://www.example.com/products/pages/2
then you can use something like this to open all pages containing the product list (execute this manually):
var j = 0;
for (var i = 1; i < 5; i++) {
  setTimeout(function() {
    j = j + 1;
    window.open('http://www.example.com/products/pages/' + j, '_blank');
  }, 15000 * i);
}
then you can create a script to open all products in a new window for each product-list page, and include this URL pattern in Greasemonkey for that:
http://www.example.com/products/pages/*
and then a script for each product page to extract the data, call a web service passing the data, close the window, and so on.
I made an example JavaScript crawler on GitHub.
It's event-driven and uses an in-memory queue to store all the resources (i.e. URLs).
How to use it in your Node environment:
var Crawler = require('../lib/crawler')
var crawler = new Crawler('http://www.someUrl.com');
// crawler.maxDepth = 4;
// crawler.crawlInterval = 10;
// crawler.maxListenerCurrency = 10;
// crawler.redisQueue = true;
crawler.start();
Here I'm just showing you the two core methods of the crawler.
Crawler.prototype.run = function() {
  var crawler = this;
  process.nextTick(() => {
    // the run loop
    crawler.crawlerIntervalId = setInterval(() => {
      crawler.crawl();
    }, crawler.crawlInterval);
    // kick off the first one
    crawler.crawl();
  });
  crawler.running = true;
  crawler.emit('start');
}

Crawler.prototype.crawl = function() {
  var crawler = this;
  if (crawler._openRequests >= crawler.maxListenerCurrency) return;
  // go get the item
  crawler.queue.oldestUnfetchedItem((err, queueItem, index) => {
    if (queueItem) {
      // got the item, start the fetch
      crawler.fetchQueueItem(queueItem, index);
    } else if (crawler._openRequests === 0) {
      crawler.queue.complete((err, completeCount) => {
        if (err) throw err;
        crawler.queue.getLength((err, length) => {
          if (err) throw err;
          if (length === completeCount) {
            // no open requests, no unfetched items: stop the crawler
            crawler.emit("complete", completeCount);
            clearInterval(crawler.crawlerIntervalId);
            crawler.running = false;
          }
        });
      });
    }
  });
};
Here is the GitHub link: https://github.com/bfwg/node-tinycrawler.
It is a JavaScript web crawler written in under 1,000 lines of code.
This should put you on the right track.
You can make a web crawler driven from a remote JSON file that opens all links from a page in new tabs as soon as each tab loads, except for links that have already been opened. If you set this up with a browser extension running on a basic machine (nothing running except the web browser and an internet-config program) and had it shipped and installed somewhere with good internet, you could build a database of web pages with an old computer. You would just need to retrieve the content of each tab. You could do that for about $2,000, contrary to most estimates for search-engine costs. Your algorithm would just need to rank pages based on how often a term appears in the page's innerText property, keywords, and description. You could also set up another PC to re-crawl old pages from the one-time database and add more. I'd estimate it would take about three months and $20,000 at most.
Axios + Cheerio
You can do this with axios and cheerio. Check the axios docs for the response format.
const cheerio = require('cheerio');
const axios = require('axios');

// crawl: fetch the URL and parse the response
var url = 'http://amazon.com';
axios.get(url)
  .then((res) => {
    // response format
    var body = res.data;
    var statusCode = res.status;
    var statusText = res.statusText;
    var headers = res.headers;
    var request = res.request;
    var config = res.config;
    // jQuery-style parsing
    let $ = cheerio.load(body);
    // example: meta tags
    var title = $('meta[name=title]').attr('content');
    if (title == undefined || title == 'undefined') {
      title = $('title').text();
    }
    var description = $('meta[name=description]').attr('content');
    var keywords = $('meta[name=keywords]').attr('content');
    var author = $('meta[name=author]').attr('content');
    var type = $('meta[http-equiv=content-type]').attr('content');
    var favicon = $('link[rel="shortcut icon"]').attr('href');
  }).catch(function (e) {
    console.log(e);
  });
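The example above only reads meta tags. Since the question also asks about following hyperlinks, here is a rough sketch (my own addition) that collects the links on a page so they can be crawled in turn:
const cheerio = require('cheerio');
const axios = require('axios');

// Fetch a page and return every absolute link found on it.
async function getLinks(url) {
  const res = await axios.get(url);
  const $ = cheerio.load(res.data);
  const links = [];
  $('a[href]').each((i, el) => {
    // Resolve relative hrefs against the page URL.
    links.push(new URL($(el).attr('href'), url).href);
  });
  return links;
}

getLinks('https://example.com')
  .then(links => console.log(links))
  .catch(err => console.error(err));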
Node-Fetch + Cheerio
You can do the same thing with node-fetch and cheerio.
const cheerio = require('cheerio');
const fetch = require('node-fetch');

var url = 'http://amazon.com';
fetch(url, {
  method: "GET",
}).then(function(response) {
    // return the response body text
    return response.text();
  })
  .then(function(html) {
    // jQuery-style parsing
    let $ = cheerio.load(html);
    // meta tags
    var title = $('meta[name=title]').attr('content');
    if (title == undefined || title == 'undefined') {
      title = $('title').text();
    }
    var description = $('meta[name=description]').attr('content');
    var keywords = $('meta[name=keywords]').attr('content');
    var author = $('meta[name=author]').attr('content');
    var type = $('meta[http-equiv=content-type]').attr('content');
    var favicon = $('link[rel="shortcut icon"]').attr('href');
  })
  .catch((error) => {
    console.error('Error:', error);
  });
