I am trying to implement an identity check for my Chrome extension, since I want to sell it to some users.
The code below uses the chrome.identity API to get the unique OpenID together with the email of the logged-in account. It then fetches data from a pastebin and checks whether the id is included.
If it is not, I want to block the user from using the extension. What would be the best approach?
My code:
// license check
chrome.identity.getProfileUserInfo({ accountStatus: 'ANY' }, async function (info) {
    // info.id is the stable OpenID of the signed-in profile, info.email its address
    const email = info.email;
    console.log(info.id);
    // fetch the license list (a plain-text paste of allowed ids)
    let response = await fetch('https://pastebin.com/*****');
    let data = await response.text();
    console.log(data.indexOf(info.id));
    if (data.indexOf(info.id) !== -1) {
        console.log('included');
    } else {
        console.log('not included');
        // block chrome extension usage;
    }
});
I found a pretty simple solution that should work for most cases.
// license check
chrome.identity.getProfileUserInfo({ accountStatus: 'ANY' }, async function (info) {
    const email = info.email;
    console.log(info.id);
    let response = await fetch('https://pastebin.com/*****');
    let data = await response.text();
    console.log(data.indexOf(info.id));
    if (data.indexOf(info.id) !== -1) {
        console.log('included');
    } else {
        console.log('not included');
        // block chrome extension usage:
        // index.html has to be in the extension folder and can have e.g. an <h1>
        // which says "Invalid license"
        chrome.browserAction.setPopup({ popup: 'index.html' });
    }
});
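Note that chrome.browserAction is a Manifest V2 API; under Manifest V3 the same call moves to chrome.action. A minimal sketch of the equivalent blocking step, assuming an "action" key is declared in manifest.json:

// Manifest V3 equivalent of the line above
chrome.action.setPopup({ popup: 'index.html' });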
I want to get users' messages via the Gmail API, which requires Google authorization. I managed to authorize the user with the following code:
let authBtn = document.getElementById('authorize_button');

const CLIENT_ID = 'XXXXXX-XXXXXXXXXXX.apps.googleusercontent.com';
const API_KEY = 'XXXXXX-XXXXXXXXXXXXXXX';
const DISCOVERY_DOC = 'https://www.googleapis.com/discovery/v1/apis/gmail/v1/rest';
const SCOPES = 'https://www.googleapis.com/auth/gmail.readonly';

let tokenClient;
let gapiInited = false;
let gisInited = false;

authBtn.style.visibility = 'hidden';

function gapiLoaded() {
    gapi.load('client', initializeGapiClient);
}

async function initializeGapiClient() {
    await gapi.client.init({
        apiKey: API_KEY,
        discoveryDocs: [DISCOVERY_DOC],
    });
    gapiInited = true;
    maybeEnableButtons();
}

function gisLoaded() {
    tokenClient = google.accounts.oauth2.initTokenClient({
        client_id: CLIENT_ID,
        scope: SCOPES,
        callback: '', // set at request time in handleAuthClick
    });
    gisInited = true;
    maybeEnableButtons();
}

function maybeEnableButtons() {
    if (gapiInited && gisInited) {
        authBtn.style.visibility = 'visible';
    }
}

function handleAuthClick() {
    tokenClient.callback = async (resp) => {
        if (resp.error !== undefined) throw (resp);
        authBtn.innerText = 'Refresh';
        await getMessages();
    };
    if (gapi.client.getToken() === null) {
        // no token in this page's memory: ask for consent
        tokenClient.requestAccessToken({ prompt: 'consent' });
    } else {
        // token already present: skip the consent screen
        tokenClient.requestAccessToken({ prompt: '' });
    }
}
In the above code, gapi.client.getToken() returns null on every page load, so every time I refresh the page I have to reauthorize the user with prompt: 'consent'.
I also want the user to stay signed in until they sign out.
How can I achieve this by modifying the above code? Can someone please help me?
You are using a system that requires a server-side authentication flow; read about properly handling that here:
https://developers.google.com/identity/gsi/web/guides/verify-google-id-token
The gapi JavaScript is browser code (you obviously know this, since the question is full of DOM-related code), so authentication is fundamentally not possible entirely in the browser without a server-side flow to handle the callbacks from Google that occur out-of-band from the browser.
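A minimal Node sketch of the server-side verification step that guide describes, assuming the google-auth-library package and the same CLIENT_ID as the browser code:

const { OAuth2Client } = require('google-auth-library');

const CLIENT_ID = 'XXXXXX-XXXXXXXXXXX.apps.googleusercontent.com'; // same client id as in the browser
const client = new OAuth2Client(CLIENT_ID);

// Verify an ID token sent up from the browser and return the stable Google user id.
async function verifyIdToken(idToken) {
    const ticket = await client.verifyIdToken({ idToken, audience: CLIENT_ID });
    const payload = ticket.getPayload();
    return payload.sub; // 'sub' is the unique, stable account identifier
}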
The only exception I can find to the rule of needing a server-side component is the Credential Manager API:
https://developers.google.com/identity/gsi/web/guides/display-browsers-native-credential-manager
It seems to simplify things significantly, but from what I can tell it supports Chrome only (possibly including Chromium-based browsers such as Edge and Brave, but maybe not plain Chromium, since it appears to rely on Google accounts in the browser itself: login is handled by the user in the browser directly, before they visit your site, rather than by your code for your website).
As the title says, how would I inject a content script only on user-specified websites?
Right now I'm saving the user-specified websites in storage.
I tried doing it programmatically by injecting the content script upon tab creation/update, but I ran into cross-origin issues and couldn't get it to work.
const injectContentScripts = function (sites) {
    // if not provided, use default.
    if (!sites)
        sites = 'website.com'
    const urls = sites.split('\n').map(url => `*://${url}/*`)
    chrome.tabs.query({ url: urls, status: 'complete' }, function (tabs) {
        for (const tab of tabs) {
            // chrome.tabs.executeScript is the Manifest V2 injection API
            chrome.tabs.executeScript(tab.id, { file: 'ext/jquery.min.js', allFrames: true, runAt: 'document_end' }, function () {
                chrome.tabs.executeScript(tab.id, { file: 'content.js', allFrames: true, runAt: 'document_end' })
            })
        }
    })
}
I also had the idea that I could just rewrite the manifest.json file whenever the user updates their website list, but that wouldn't work, since a packed extension becomes a read-only .crx file, right?
I tried searching for this case but couldn't find anyone asking about it. Sorry if I'm missing something; I'm not that great at English.
Actually, I found the solution myself. :D
Well, in case anyone needs something like this as well, I got it working with the following:
chrome.webNavigation.onDOMContentLoaded.addListener(async (e) => {
    const saved = await chrome.storage.sync.get(['sites'])
    // if not provided, use default.
    if (!saved.sites)
        saved.sites = 'website.com'
    // extract the hostname part of the navigated url
    const split = e.url.split('://')
    const splitUrl = split[1] ? split[1].split('/')[0] : false
    // if there is no url or the frame is not the main frame, return.
    if (!splitUrl || e.frameId != 0)
        return
    for (const url of saved.sites.split('\n')) {
        if (splitUrl.match(url)) {
            injectContentScripts(e.tabId)
            break;
        }
    }
})

const injectContentScripts = async function (tabId) {
    if (tabId == undefined)
        return
    try {
        // I'm using this npm library, which allows async/await: https://www.npmjs.com/package/chrome-extension-async
        await chrome.tabs.executeScript(tabId, { file: 'ext/jquery.min.js', allFrames: true, runAt: 'document_end' })
        await chrome.tabs.executeScript(tabId, { file: 'content.js', allFrames: true, runAt: 'document_end' })
    } catch (e) {
        console.error(e)
    }
}
There is a built-in URL matcher, but I wasn't sure how to access those functions. Anyway, this works fine; a cleaner way would be with a RegExp, but it works, so it's fine I guess. :D
For this to work, you have to specify these permissions in the manifest file:
"permissions": [
"webNavigation",
"<all_urls>"
],
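Under Manifest V3, where chrome.tabs.executeScript no longer exists, the same idea can be expressed with the chrome.scripting API instead; a minimal sketch, assuming the "scripting" and "storage" permissions plus host permissions for the listed sites:

// Re-register the dynamic content scripts whenever the user's site list changes.
async function registerForSites(sites) {
    // drop previously registered dynamic scripts before re-adding them
    await chrome.scripting.unregisterContentScripts();
    await chrome.scripting.registerContentScripts([{
        id: 'user-sites',
        js: ['ext/jquery.min.js', 'content.js'],
        matches: sites.split('\n').map(s => `*://${s}/*`),
        runAt: 'document_end',
        allFrames: true,
    }]);
}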
I am trying to figure out how to show a message when someone registers on a form I made but the email address is already in use. I am using Firebase. I am not familiar with fetchSignInMethodsForEmail and am wondering how to use and implement it.
I am thinking I can use an if statement:
if (email exists) {
    // push the firebase user to the directed page
} else {
    statement.style.display = 'block';
}
I am also curious how to do this with passwords as well.
Thank you
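For the direct check asked about here, the v8 method is fetchSignInMethodsForEmail; a minimal sketch, assuming statement is the element holding the warning and that this runs inside an async submit handler:

// Returns the sign-in methods already registered for this address ([] if none).
const methods = await firebase.auth().fetchSignInMethodsForEmail(email);
if (methods.length === 0) {
    // email is free: continue with registration / redirect
} else {
    statement.style.display = 'block'; // show the "email already used" message
}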
Listen for that error. However, I prefer to merge the accounts and let the user sign in. Below is an example snippet; it assumes you want to allow email-link authentication (no password required). Firebase also offers a pre-rolled UI that supports passwords and federation/OAuth (Twitter, Facebook, etc.).
try {
    // assumed context: attempt to link the email-link credential to the currently signed-in user
    await firebase.auth().currentUser.linkWithCredential(credential);
} catch (error) {
    if (error.code === "auth/email-already-in-use") {
        // REMEMBER AUTH CURRENT USER OBJECT
        previousUser = firebase.auth().currentUser;
        // WE MUST HANDLE DB READ AND DELETE WHILE SIGNED IN AS PREVIOUS USER PER FIRESTORE SECURITY RULES
        // (localUserDoc and apples come from the surrounding cart code)
        if (localUserDoc) {
            if (localUserDoc.data().apples) {
                apples = localUserDoc.data().apples;
            }
        }
        // DELETE CURRENT USER RECORD WHILE STILL SIGNED IN
        await firebase.firestore().collection("users").doc(previousUser.uid).delete();
        // CLEAN UP DONE. NOW SIGN IN USING EMAIL LINK CREDENTIAL
        try {
            var firebaseUserObj = await firebase.auth().signInWithCredential(credential);
            // FIRESTORE USER RECORD FOR EMAIL LINK USER WAS CREATED WHEN THEY ADDED APPLE TO CART
            try {
                var doc = await firebase.firestore().collection("users").doc(firebaseUserObj.user.uid).get();
                if (doc.exists) {
                    if (doc.data().apples) {
                        apples = apples + doc.data().apples;
                    }
                }
                await firebase.firestore().collection("users").doc(firebaseUserObj.user.uid).update({
                    apples: apples
                });
            } catch (error) {
                console.log("Error getting document:", error);
            }
            previousUser.delete();
        } catch (error) {
            console.log(".signInWithCredential err ", error);
        }
    }
}
I've written a webapp that lets you store images in localStorage until you hit save (so it works offline, if the signal is poor).
When localStorage reaches 5MB, Google Chrome produces an error in the JavaScript console:
Uncaught Error: QUOTA_EXCEEDED_ERR: DOM Exception 22
How do I increase the size of the localStorage quota on Google Chrome?
5MB is a hard limit, and that is stupid. IndexedDB gives you ~50MB, which is more reasonable. To make it easier to use, try Dexie.js: https://github.com/dfahlander/Dexie.js
Update:
Dexie.js was actually still an overkill for my simple key-value purposes so I wrote this much simpler script https://github.com/DVLP/localStorageDB
With this you have ~50MB and can get and set values like this:
// Setting values
ldb.set('nameGoesHere', 'value goes here');
// Getting values - callback is required because the data is being retrieved asynchronously:
ldb.get('nameGoesHere', function (value) {
console.log('And the value is', value);
});
Copy/paste the line below so ldb.set() and ldb.get() from the example above will become available.
!function(){function e(t,o){return n?void(n.transaction("s").objectStore("s").get(t).onsuccess=function(e){var t=e.target.result&&e.target.result.v||null;o(t)}):void setTimeout(function(){e(t,o)},100)}var t=window.indexedDB||window.mozIndexedDB||window.webkitIndexedDB||window.msIndexedDB;if(!t)return void console.error("indexDB not supported");var n,o={k:"",v:""},r=t.open("d2",1);r.onsuccess=function(e){n=this.result},r.onerror=function(e){console.error("indexedDB request error"),console.log(e)},r.onupgradeneeded=function(e){n=null;var t=e.target.result.createObjectStore("s",{keyPath:"k"});t.transaction.oncomplete=function(e){n=e.target.db}},window.ldb={get:e,set:function(e,t){o.k=e,o.v=t,n.transaction("s","readwrite").objectStore("s").put(o)}}}();
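If you prefer promises over callbacks, a one-line wrapper around ldb.get from the script above makes it awaitable:

// Promise wrapper for ldb.get, so values can be awaited
const ldbGet = key => new Promise(resolve => ldb.get(key, resolve));
// usage: const value = await ldbGet('nameGoesHere');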
You can't, it's hard-wired at 5MB. This is a design decision by the Chrome developers.
In Chrome, the Web SQL db and cache manifest also have low limits by default, but if you package the app for the Chrome App Store you can increase them.
See also Managing HTML5 Offline Storage - Google Chrome.
The quota is for the user to set: how much space they wish to allow each website.
Since the purpose is to restrict the web pages, the web pages cannot change that restriction.
If storage is low, you can prompt the user to increase it.
To find out whether storage is low, you can probe the localStorage size by saving an object and then deleting it.
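A minimal sketch of such a probe, using a hypothetical __probe__ key; setItem throws a quota error when the write does not fit:

// Returns true if localStorage can still hold roughly `bytes` more data.
function localStorageHasRoom(bytes) {
    try {
        localStorage.setItem('__probe__', 'x'.repeat(bytes));
        localStorage.removeItem('__probe__');
        return true;
    } catch (e) {
        return false; // quota exceeded
    }
}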
You can't, but if you store JSON in your localStorage you can use a library to compress the data, such as: https://github.com/k-yak/JJLC
demo: http://k-yak.github.io/JJLC/
Here you can test your program; you should also handle the cases where the quota is exceeded.
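As an illustration of the approach, here is the same idea with the widely used lz-string library (not JJLC's own API), assuming its script is loaded:

// Compress before writing, decompress after reading.
// The UTF-16 variants match localStorage's internal string encoding.
const bigObject = { /* ... large JSON payload ... */ };
const packed = LZString.compressToUTF16(JSON.stringify(bigObject));
localStorage.setItem('data', packed);
const restored = JSON.parse(LZString.decompressFromUTF16(localStorage.getItem('data')));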
https://stackoverflow.com/a/5664344/2630686 The above answer is amazing. I applied it in my project and implemented a full solution for requesting all kinds of resources.
// First, reference the ldb code from the answer mentioned above.
export function get_file({ url, d3, name, enable_request = false }) {
    if (name === undefined) { // alternatively, derive the saved-data name by parsing the url
        name = url.split('?')[0].split('/').at(-1).split('.')[0];
    }
    const html_name = location.href.split('/').at(-1).split('.')[0];
    name = `${html_name}_${name}`;
    const is_outer = is_outer_net(url); // helper defined elsewhere: true when the url starts with http or //

    // Try to read cached data first; resolve with null when nothing is cached.
    let cached;
    if (is_outer && !enable_request) {
        if (localStorage[name]) {
            cached = Promise.resolve(JSON.parse(localStorage[name]));
        } else {
            cached = new Promise(r => ldb.get(name, r));
        }
    } else {
        cached = Promise.resolve(null);
    }

    // Return the whole chain, so the caller always receives the final data.
    return cached.then(data => {
        if (data) {
            return data;
        }
        const method = url.split('.').at(-1);
        let req;
        if (d3 && d3[method]) { // d3 loaders supported (d3.json, d3.csv, ...)
            req = d3[method](url);
        } else {
            if (url.startsWith('~/')) {
                // local file access: needs a local service that returns file data for the requested address
                url = `http://localhost:8010/get_file?address=${url}`;
            }
            // parse the response according to the requested data type
            req = fetch(url).then(res => url.endsWith('txt') ? res.text() : res.json());
        }
        return req.then(data => {
            if (is_outer) {
                try {
                    localStorage[name] = JSON.stringify(data); // save to localStorage first
                } catch (e) {
                    ldb.set(name, data); // fall back to ldb if the 5MB quota is exceeded
                }
            }
            return data;
        });
    });
}
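An example call, assuming the ldb snippet is loaded (the url here is illustrative):

// The first call fetches over the network and caches; later calls are served locally.
get_file({ url: 'https://example.com/data.json' }).then(data => console.log(data));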
I want to create a scraper that:
opens a headless browser,
goes to a url,
logs in (there is Steam OAuth),
fills in some inputs,
and clicks 2 buttons.
My problem is that every new instance of the headless browser clears my login session, and then I need to log in again and again...
How do I keep the session across instances? (using Puppeteer with headless Chrome)
Or how can I open an already-logged-in headless Chrome instance? (if I am already logged in in my main Chrome window)
There is an option to save user data using the userDataDir option when launching Puppeteer. This stores the session and other browsing data related to that Chrome profile.
puppeteer.launch({
    userDataDir: './user_data'
});
It doesn't go into great detail but here's a link to the docs for it: https://pptr.dev/#?product=Puppeteer&version=v1.6.1&show=api-puppeteerlaunchoptions
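A minimal runnable sketch of this (the target url is illustrative); every launch that points at the same userDataDir reuses the cookies and storage written by earlier runs:

const puppeteer = require('puppeteer');

(async () => {
    // Reuses cookies/localStorage persisted in ./user_data by previous runs.
    const browser = await puppeteer.launch({ userDataDir: './user_data' });
    const page = await browser.newPage();
    await page.goto('https://example.com');
    // ... log in once here; subsequent runs start already authenticated ...
    await browser.close();
})();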
In Puppeteer you have access to the session cookies through page.cookies().
So once you log in, you can get every cookie and save it in a JSON file:
const fs = require('fs');
const cookiesFilePath = 'cookies.json';
// Save Session Cookies
const cookiesObject = await page.cookies()
// Write cookies to temp file to be used in other profile pages
fs.writeFile(cookiesFilePath, JSON.stringify(cookiesObject),
    function (err) {
        if (err) {
            console.log('The file could not be written.', err)
        } else {
            console.log('Session has been successfully saved')
        }
    })
Then, on your next iteration right before using page.goto() you can call page.setCookie() to load the cookies from the file one by one:
const previousSession = fs.existsSync(cookiesFilePath)
if (previousSession) {
    // If the file exists, load the cookies
    const cookiesString = fs.readFileSync(cookiesFilePath);
    const parsedCookies = JSON.parse(cookiesString);
    if (parsedCookies.length !== 0) {
        for (let cookie of parsedCookies) {
            await page.setCookie(cookie)
        }
        console.log('Session has been loaded in the browser')
    }
}
Check out the docs:
https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#pagecookiesurls
https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#pagesetcookiecookies
For a version of the above solution that actually works and doesn't rely on jsonfile (instead using the more standard fs) check this out:
Setup:
const fs = require('fs');
const cookiesPath = "cookies.txt";
Reading the cookies (put this code first):
// If the cookies file exists, read the cookies.
const previousSession = fs.existsSync(cookiesPath)
if (previousSession) {
    const content = fs.readFileSync(cookiesPath);
    const cookiesArr = JSON.parse(content);
    if (cookiesArr.length !== 0) {
        for (let cookie of cookiesArr) {
            await page.setCookie(cookie)
        }
        console.log('Session has been loaded in the browser')
    }
}
Writing the cookies:
// Write Cookies
const cookiesObject = await page.cookies()
fs.writeFileSync(cookiesPath, JSON.stringify(cookiesObject));
console.log('Session has been saved to ' + cookiesPath);
For writing cookies into the page (restoring a saved session):
async function writingCookies() {
    const cookieArray = require(C.cookieFile); // C.cookieFile can be replaced by ('./filename.json')
    await page.setCookie(...cookieArray);
    await page.cookies(C.feedUrl); // C.feedUrl can be ('https://example.com')
}
For reading cookies from the page and saving them to a file, you have to install jsonfile in your project: npm install jsonfile
const jsonfile = require('jsonfile');
async function getCookies() {
    const cookiesObject = await page.cookies();
    jsonfile.writeFile('linkedinCookies.json', cookiesObject, { spaces: 2 },
        function (err) {
            if (err) {
                console.log('The Cookie file could not be written.', err);
            } else {
                console.log("Cookie file has been successfully saved in the current working directory: '" + process.cwd() + "'");
            }
        })
}
Call these two functions using await and it will work for you.