I came across "capture screen with electron when rendering a web site" while looking for a way to enable screen sharing inside my Electron app. However, desktopCapturer is always undefined on my side, and the only way I can access the sources is from the main process. I would like to know if there is a way to have the sources defined when I do something like this:
let all_sources = undefined;

ipcRenderer.on('SET_SOURCES', (ev, sources) => {
  all_sources = sources;
  console.log("The sources are: ", all_sources);
});

const wait_function = function() {
  return new Promise(resolve => {
    setTimeout(function() {
      resolve(all_sources);
    }, 4000);
  });
};

contextBridge.exposeInMainWorld("myCustomGetDisplayMedia", async () => {
  ipcRenderer.send('GET_SOURCES'); // send() returns nothing, so there is no point awaiting it
  await wait_function(); // want to make sure all_sources is defined
  const selectedSource = all_sources[0]; // take the Entire Screen, just for testing purposes
  return selectedSource;
});
This is all inside my preload script.
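For reference, the main-process side that feeds these channels might look roughly like this (a sketch; the channel names match the preload code above, and in recent Electron versions desktopCapturer is only available in the main process):
// main.js (sketch)
const { ipcMain, desktopCapturer } = require('electron');

ipcMain.on('GET_SOURCES', async (event) => {
  const sources = await desktopCapturer.getSources({ types: ['screen', 'window'] });
  // send back only the fields the renderer needs
  event.sender.send('SET_SOURCES', sources.map(s => ({ id: s.id, name: s.name })));
});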
thanks
I have code that uses window.foo.abc as a condition for displaying something.
I want to test this functionality with Cypress, and I want to mock this value to be false and true.
How can I do that?
I've tried
before(function() {
  Cypress.on('window:before:load', win => {
    window.foo.abc = true;
  });
});
and
cy.window().then(win => {
  window.foo.abc = true;
});
with no success.
How can I mock this value?
thanks 🙏
This code is incorrect:
Cypress.on('window:before:load', win => {
  window.foo.abc = true;
});
It should be:
Cypress.on('window:before:load', win => {
  win.foo.abc = true;
});
You don't have to use it in before(), but it should be at the top of the spec.
But I suspect it still won't work even after that correction; most likely the app resets foo to a new object during loading, i.e. during cy.visit().
You can use the 2nd block instead:
cy.visit('...')  // visit before changing
cy.window().then(win => {
  win.foo.abc = true;  // correct the syntax here as well
})
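Put together, a minimal spec might look like this (a sketch; the visited route and the final assertion are placeholders):
describe('window.foo.abc flag', () => {
  it('shows the conditional content when abc is true', () => {
    cy.visit('/');            // let the app finish creating window.foo
    cy.window().then(win => {
      win.foo.abc = true;     // set the flag only after loading is done
    });
    // assert on the conditionally displayed content here
  });
});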
"Most likely due to page navigation"
The page I'm trying to use has the following behavior. To get to the content I want, I have to click a button. But on clicking that button, the content I want is loaded on a new tab. The current tab I'm on navigates to a useless ad. When I try to do anything with "page" (page.eval, page.url()), it gives me the above error (the actual browser gives an error about the page having been moved permanently).
How do I get puppeteer to follow the new tab instead of getting stuck on the old one?
I've tried creating a separate third tab with Puppeteer's newPage and goto, which kind of works, but it runs into other issues down the road. I'm looking for a different way.
Edit:
I followed the answer below and did this:
const [newPage, oldPage] = await Promise.all([getNewPage(), getOldPage()]);
console.log("new page", newPage.url());
console.log("old page", oldPage.url());
function getOldPage() {
  return new Promise((resolve) => {
    page.evaluate(function () {
      let element = document.querySelector("button[class*=OfferCta__]");
      element.click();
    });
    resolve(page);
  });
}

function getNewPage() {
  return new Promise((resolve) => {
    browser.on("targetcreated", checkNewTarget);
    function checkNewTarget(target) {
      if (target.type() === "page") {
        browser.off("targetcreated", checkNewTarget);
        resolve(target.page());
      }
    }
  });
}
It didn't work; I got this console output:
3:17:40 PM web.1 | new page https://www.nike.com/register?cid=4942550&cp=usns_aff_nike__PID_2210202_WhaleShark+Media%3A+RetailMeNot.com&cjevent=3a020cab18a211eb830d00030a1c0e0c
3:17:40 PM web.1 | old page chrome-error://chromewebdata/
So by the time the Promise checks the old URL that I need, it is already erroring out.
EDIT:
It turns out I was blocking navigation requests when I was trying to block third-party scripts. This caused my button press to fail.
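For anyone hitting the same thing: the fix amounts to never aborting navigation requests when intercepting. A sketch of blocking third-party scripts without breaking navigations (ALLOWED_ORIGIN is a placeholder, not part of the original code):
const ALLOWED_ORIGIN = 'https://www.example.com'; // placeholder for your own origin

await page.setRequestInterception(true);
page.on('request', request => {
  if (request.isNavigationRequest()) {
    request.continue(); // aborting these is what broke the button click
  } else if (request.resourceType() === 'script' && !request.url().startsWith(ALLOWED_ORIGIN)) {
    request.abort(); // drop third-party scripts only
  } else {
    request.continue();
  }
});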
Maybe something like this:
const [newPage] = await Promise.all([
  getNewPage(),
  page.click(selector),
]);

// ...

function getNewPage() {
  return new Promise((resolve) => {
    browser.on('targetcreated', checkNewTarget);
    function checkNewTarget(target) {
      if (target.type() === 'page') {
        browser.off('targetcreated', checkNewTarget);
        resolve(target.page());
      }
    }
  });
}
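The important detail is that getNewPage() attaches its targetcreated listener before the click is issued, so Promise.all guarantees the new tab cannot appear in the gap between clicking and listening.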
I'm using WebDriverIO (v5.18.7) and I'm trying to write something that can go to each URL and scroll down in increments until it reaches the bottom, then move on to the next URL. The issue I'm having is that when it comes to the scrolling portion of the script, it might scroll once before moving on to the next URL.
From what I understand, the WebDriverIO documentation says the commands are sent asynchronously and handled by the framework behind the scenes. So I tried sticking with the framework and used browser.execute / browser.executeAsync, but wasn't able to get it working.
Here's what I have that seems close to what I want. Any guidance would be appreciated!
const { doesNotMatch } = require('assert');
const assert = require('assert');
const { Driver } = require('selenium-webdriver/chrome');

// variable for list of URLs
const arr = browser.config.urls;

describe('Getting URL and scrolling', () => {
  it('Get URL and scroll', async () => {
    // let i = 0;
    for (const value of arr) {
      await browser.url(value);
      await browser.execute(() => {
        const elem = document.querySelector('.info-section');
        // scroll until this reaches the end.
        // add another for loop with a counter?
        elem.scrollIntoView(); // using this for now as a placeholder
      });
      // i += 1;
    }
  });
});
Short answer: $('.info-section').scrollIntoView()
See https://webdriver.io/docs/api/element/scrollIntoView.html
WebdriverIO supports sync and async modes, see https://webdriver.io/docs/sync-vs-async.html
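For the incremental-scrolling part of the question, something along these lines might work (a sketch, assuming arr holds the URLs as in the question; the 500px step and 250ms pause are arbitrary):
describe('Getting URL and scrolling', () => {
  it('Get URL and scroll', async () => {
    for (const value of arr) {
      await browser.url(value);
      let atBottom = false;
      while (!atBottom) {
        // scroll one step, then report whether the bottom was reached
        atBottom = await browser.execute(() => {
          window.scrollBy(0, 500);
          return window.scrollY + window.innerHeight >= document.body.scrollHeight;
        });
        await browser.pause(250); // give lazy-loaded content a moment to render
      }
    }
  });
});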
I have a service worker in sw.js; it uses a template engine to insert the commit hash as a version number. I set the cache name like this:
var version = '{{ commit_hash }}';
self.cacheName = 'cache-' + version;
I have some scripts being added to the cache on the worker's install, but there are scripts that are dynamically loaded on the page. I would like to load all the scripts/css on the first load without forcing the user to wait for the app to install first.
I can get all the content on the page with the following code at the bottom of index.html:
var toCache = ['/'];

var css = document.getElementsByTagName("link");
for (const el of css) {
  var href = el.getAttribute("href");
  if (href) {
    toCache.push(href);
  }
}

var js = document.getElementsByTagName("script");
for (const el of js) {
  var src = el.getAttribute("src");
  if (src) {
    toCache.push(src);
  }
}
That works fine; now I would just need to open the correct cache, fetch the files that aren't already present, and store them. Something like:
toCache.forEach(function(url) {
  caches.match(url).then(function(result) {
    if (!result) {
      fetch(url).then(function(response) {
        caches.open(cacheName).then(cache => {
          cache.put(url, response);
        });
      });
    }
  });
});
Is there a way to get the cacheName from the service worker inside a script tag in a different file?
And yes, I know that I could simplify this greatly by doing the check in the for/of loops. I broke it apart so it would be easier to describe.
No.
JavaScript executing in the window context cannot access the service worker's context and vice versa. You have to implement a workaround of some sort.
Remember that you can use postMessage to communicate between the two.
Using this blog I was able to pass messages from the service worker and back. First, I added the following function at the top of sw.js:
function clientPostMessage(client, message) {
  return new Promise(function(resolve, reject) {
    var channel = new MessageChannel();
    channel.port1.onmessage = function(event) {
      if (event.data.error) {
        reject(event.data.error);
      } else {
        resolve(event.data);
      }
    };
    client.postMessage(message, [channel.port2]);
  });
}
This lets my service worker post a message to the window and receive the response back through a promise.
Then, in my index.html file I added the following to a script tag:
navigator.serviceWorker.addEventListener('message', event => {
  switch (event.data) {
    case "addAll":
      var toCache = [];
      var css = document.getElementsByTagName("link");
      for (const el of css) {
        var href = el.getAttribute("href");
        if (href) {
          toCache.push(href);
        }
      }
      var js = document.getElementsByTagName("script");
      for (const el of js) {
        var src = el.getAttribute("src");
        if (src) {
          toCache.push(src);
        }
      }
      event.ports[0].postMessage(toCache);
      break;
    default:
      console.log(event.data);
  }
});
This listens for messages from the service worker; if it is an "addAll" message, it gathers all the scripts and linked content on the page and posts back an array of their URLs.
Finally, I added the following to my activate event listener function in sw.js:
// Get all the clients, and post a message to each
clients.matchAll().then(clients => {
  clients.forEach(client => {
    // Post "addAll" to get a list of files to cache
    clientPostMessage(client, "addAll").then(message => {
      // For each file, check if it already exists in the cache
      message.forEach(url => {
        caches.match(url).then(result => {
          // If there's nothing in the cache, fetch the file and cache it
          if (!result) {
            fetch(url).then(response => {
              caches.open(cacheName).then(cache => {
                cache.put(url, response);
              });
            });
          }
        });
      });
    });
  });
});
For each client, the service worker sends an "addAll" message to the page and gets the result. For each item in the result, it checks whether the value is already in the cache and, if not, fetches and adds it.
With this method, the install listener of the service worker only needs to contain:
self.addEventListener('install', event => {
  if (self.skipWaiting) {
    self.skipWaiting();
  }
  event.waitUntil(
    caches.open(cacheName).then(cache => {
      return cache.addAll([
        '/',
        '/index.html',
      ]);
    })
  );
});
It seems to be working well so far; if anyone has suggestions or sees any errors, I'd be happy to hear them! You can also tell me how improper this is, but it makes my life a lot easier when adding service workers to pre-existing projects that rely on scripts that aren't bundled together.
This is a bit of an edge case, but it would be helpful to know.
When developing an extension using webpack-dev-server to keep the extension code up to date, it would be useful to listen for "webpackHotUpdate".
Chrome extensions with content scripts often have two sides to the equation:
Background
Injected Content Script
When using webpack-dev-server with HMR, the background page stays in sync just fine. However, content scripts require a reload of the extension in order to reflect the changes. I can remedy this by listening to the "webpackHotUpdate" event from the hot emitter and then requesting a reload. At present I have this working in a terrible, very unreliable, hacky way.
var hotEmitter = __webpack_require__(XX)

hotEmitter.on('webpackHotUpdate', function() {
  console.log('Reloading Extension')
  chrome.runtime.reload()
})
XX simply represents the number that is currently assigned to the emitter's module. As you can imagine, this changes whenever the build changes, so it's a very temporary proof-of-concept sort of thing.
I suppose I could set up my own socket but that seems like overkill, given the events are already being transferred and I simply want to listen.
I am just recently getting more familiar with the webpack ecosystem so any guidance is much appreciated.
Okay!
I worked this out by looking around here:
https://github.com/facebookincubator/create-react-app/blob/master/packages/react-dev-utils/webpackHotDevClient.js
Many thanks to the create-react-app team for their judicious use of comments.
I created a slimmed-down version of this specifically for handling the reload condition during extension development.
var SockJS = require('sockjs-client')
var url = require('url')

// Connect to WebpackDevServer via a socket.
var connection = new SockJS(
  url.format({
    // Default values - update to your own
    protocol: 'http',
    hostname: 'localhost',
    port: '3000',
    // Hardcoded in WebpackDevServer
    pathname: '/sockjs-node',
  })
)

var isFirstCompilation = true
var mostRecentCompilationHash = null

connection.onmessage = function(e) {
  var message = JSON.parse(e.data)
  switch (message.type) {
    case 'hash':
      handleAvailableHash(message.data)
      break
    case 'still-ok':
    case 'ok':
    case 'content-changed':
      handleSuccess()
      break
    default:
      // Do nothing.
  }
}

// Is there a newer version of this code available?
function isUpdateAvailable() {
  /* globals __webpack_hash__ */
  // __webpack_hash__ is the hash of the current compilation.
  // It's a global variable injected by Webpack.
  return mostRecentCompilationHash !== __webpack_hash__
}

function handleAvailableHash(data) {
  mostRecentCompilationHash = data
}

function handleSuccess() {
  var isHotUpdate = !isFirstCompilation
  isFirstCompilation = false
  if (isHotUpdate) { handleUpdates() }
}

function handleUpdates() {
  if (!isUpdateAvailable()) return
  console.log('%c Reloading Extension', 'color: #FF00FF')
  chrome.runtime.reload()
}
When you are ready to use it (during development only) you can simply add it to your background.js entry point
const path = require('path')

module.exports = {
  entry: {
    background: [
      path.resolve(__dirname, 'reloader.js'),
      path.resolve(__dirname, 'background.js')
    ]
  }
}
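To keep the reloader out of production builds, the entry can be made conditional (an illustration building on the config above; it assumes NODE_ENV is set by your build scripts):
const isDev = process.env.NODE_ENV !== 'production'

module.exports = {
  entry: {
    background: [
      // only prepend the reloader during development
      ...(isDev ? [path.resolve(__dirname, 'reloader.js')] : []),
      path.resolve(__dirname, 'background.js')
    ]
  }
}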
For actually hooking into the event emitter, as originally asked, you can just require it from webpack/hot/emitter, since that file exports the EventEmitter instance that's used.
if (module.hot) {
  var lastHash

  var upToDate = function upToDate() {
    return lastHash.indexOf(__webpack_hash__) >= 0
  }

  var clientEmitter = require('webpack/hot/emitter')

  clientEmitter.on('webpackHotUpdate', function(currentHash) {
    lastHash = currentHash
    if (upToDate()) return
    console.log('%c Reloading Extension', 'color: #FF00FF')
    chrome.runtime.reload()
  })
}
This is just a stripped down version straight from the source:
https://github.com/webpack/webpack/blob/master/hot/dev-server.js
I've fine-tuned the core logic of the crx-hotreload package and come up with a build-tool-agnostic solution (meaning it will work with Webpack, but also with anything else).
It asks the extension for its directory (via chrome.runtime.getPackageDirectoryEntry) and then watches that directory for file changes. Once a file is added/removed/changed inside that directory, it calls chrome.runtime.reload().
If you also need to reload the active tab (when developing a content script), you should run a tabs.query, get the first (active) tab from the results, and call reload on it as well; see the sketch at the end of this answer.
The whole logic is ~35 lines of code:
/* global chrome */

const filesInDirectory = dir => new Promise(resolve =>
  dir.createReader().readEntries(entries =>
    Promise.all(entries.filter(e => e.name[0] !== '.').map(e =>
      e.isDirectory
        ? filesInDirectory(e)
        : new Promise(resolve => e.file(resolve))
    ))
      .then(files => [].concat(...files))
      .then(resolve)
  )
)

const timestampForFilesInDirectory = dir => filesInDirectory(dir)
  .then(files => files.map(f => f.name + f.lastModifiedDate).join())

const watchChanges = (dir, lastTimestamp) => {
  timestampForFilesInDirectory(dir).then(timestamp => {
    if (!lastTimestamp || (lastTimestamp === timestamp)) {
      setTimeout(() => watchChanges(dir, timestamp), 1000)
    } else {
      console.log('%c 🚀 Reloading Extension', 'color: #FF00FF')
      chrome.runtime.reload()
    }
  })
}

// Init if in dev environment
chrome.management.getSelf(self => {
  if (self.installType === 'development' &&
    'getPackageDirectoryEntry' in chrome.runtime
  ) {
    console.log('%c 📦 Watching for file changes', 'color: #FF00FF')
    chrome.runtime.getPackageDirectoryEntry(dir => watchChanges(dir))
  }
})
You should add this script to your manifest.json file's background scripts entry:
"background": { "scripts": ["reloader.js", "background.js"] }
And a Gist with a light explanation in the Readme: https://gist.github.com/andreasvirkus/c9f91ddb201fc78042bf7d814af47121
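For the active-tab reload mentioned above, a minimal sketch using the chrome.tabs API might look like this (reloadExtensionAndTab is a hypothetical helper; you would call it in place of the bare chrome.runtime.reload() in the watcher):
// Sketch: reload the active tab first, then the extension itself,
// so the freshly injected content script matches the new code.
const reloadExtensionAndTab = () => {
  chrome.tabs.query({ active: true, currentWindow: true }, tabs => {
    if (tabs[0]) {
      chrome.tabs.reload(tabs[0].id)
    }
    chrome.runtime.reload()
  })
}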