Open a custom URL using window.open() in Firefox - javascript

In Firefox 88 it seems that opening a custom deep-linking URL with window.open(customURL, '_parent') reloads the current tab. Is there any solution to this problem? Should I use an <iframe> instead of window.open()? The behaviour is different in Chrome, however.
Required behaviour: window.open() opens the deep-linked app while the web app (written in Angular) continues to run in the same frame.

Recommend an iframe
Some browsers are particularly wary of window.open, as it is abused quite heavily and therefore often blocked. An iframe, however, is fairly unremarkable and, if you point it at a supported app scheme, has the side effect of opening the app.
Here's an approach you could try.
function checkIsAppSupported(appSchemeUrl, msThreshold) {
  return new Promise((resolve) => {
    let docBlurOccurred = false;

    // createElement's second argument is not an attribute map, so set
    // src and style on the element directly
    const iframe = document.createElement('iframe');
    iframe.src = appSchemeUrl;
    iframe.style.display = 'none';
    document.body.appendChild(iframe);

    function onDocBlur() {
      docBlurOccurred = true;
    }

    // blur fires on window (not document) when focus leaves the page
    window.addEventListener('blur', onDocBlur);
    // or try the newer
    // document.addEventListener('visibilitychange', onDocBlur)

    setTimeout(() => {
      // cleanup
      iframe.remove();
      window.removeEventListener('blur', onDocBlur);
      resolve(docBlurOccurred);
    }, msThreshold); // checks whether focus changed within msThreshold
  });
}

async function tryAppSupported() {
  const isAppSupported = await checkIsAppSupported('my-app://', 500);
  if (isAppSupported) {
    // do something
  }
}

tryAppSupported();
Meaning, if the main window loses focus around the time the iframe loads, you can reasonably assume the app scheme was actually handled and that the app took focus away from the current window.
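Once the check resolves to true, you can trigger the deep link the same way, through a short-lived hidden iframe, so the Angular app keeps running in its frame instead of being torn down by window.open(customURL, '_parent'). A minimal sketch (my-app:// is a placeholder scheme):
function openDeepLink(appSchemeUrl) {
  // Navigating a hidden iframe hands the URL to the OS scheme handler
  // without unloading or reloading the current page.
  const iframe = document.createElement('iframe');
  iframe.style.display = 'none';
  iframe.src = appSchemeUrl;
  document.body.appendChild(iframe);

  // The frame is only needed long enough for the scheme to be dispatched.
  setTimeout(() => iframe.remove(), 2000);
}

// e.g. after checkIsAppSupported('my-app://', 500) resolves to true:
// openDeepLink('my-app://some/path');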
There are some npm packages that do this type of thing, for example
https://www.npmjs.com/package/open-native-app, which also takes care of the per-OS differences in how this behaviour is checked.
Happy coding.

Related

Electron "ready-to-show" event not working as expected

Here is a block of code from my application Codey.
src/main.js
// Show window once it has finished initialising
docsWindow.once("ready-to-show", () => {
  if (darkMode) {
    docsWindow.webContents.send("dark-mode:toggle");
  }
  if (!isDarwin) {
    docsWindow.webContents.send("platform:not-darwin");
  }
  docsWindow.webContents.send("docs:jump", section);
  docsWindow.show();
});
src/docs/renderer.js
window.api.darkMode.toggle.receive(toggleDarkMode);
When darkMode = true, toggleDarkMode is never run.
My application has two different windows - an editor and a docs window. For both windows, on the "ready-to-show" event, "dark-mode:toggle" is sent to the renderer process. However, the docs window fails to run the toggleDarkMode function whilst it works for the editor window.
Note: The application must be packaged using "yarn package" as some features do not work in the dev environment.
Any help will be much appreciated.
(Repo: https://github.com/Liamohara/Codey)
For anyone who's interested, I found a solution to my issue.
src/main.js
  // Show window once it has finished initialising
- docsWindow.once("ready-to-show", () => {
+ docsWindow.webContents.once("did-finish-load", () => {
    if (darkMode) {
      docsWindow.webContents.send("dark-mode:toggle");
    }
    if (!isDarwin) {
      docsWindow.webContents.send("platform:not-darwin");
    }
    docsWindow.webContents.send("docs:jump", section);
    docsWindow.show();
  });
In the docs window, there is more HTML content to be rendered than in the editor window. As a result, the DOM takes longer to load and the "ready-to-show" event is emitted before it has finished. Using the "did-finish-load" event ensures that the API functions are not called before the DOM has finished loading.
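For context, an API like window.api.darkMode.toggle.receive is typically exposed from a preload script with contextBridge; the exact shape in Codey may differ, but a minimal sketch along those lines could look like this (the api object layout here is an assumption):
// preload.js (sketch, not the project's actual preload)
const { contextBridge, ipcRenderer } = require("electron");

contextBridge.exposeInMainWorld("api", {
  darkMode: {
    toggle: {
      // Forward "dark-mode:toggle" messages from the main process to
      // whatever callback the renderer registers.
      receive: (callback) => ipcRenderer.on("dark-mode:toggle", () => callback())
    }
  }
});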

Making a Chrome extension for a site that uses React. How to persist changes after re-rendering?

I want my extension to add elements to a page, but the site uses React, which means that every time something changes the page is re-rendered without the elements I added.
I'm only passingly familiar with React from some tutorial apps I built a long time ago.
I implemented #wOxxOm's comment to use MutationObserver and it worked very well.
static placeAndObserveMutations(insertionBoxSelector) {
  let placer = new Placement();
  placer.place(insertionBoxSelector);
  placer.observeMutations(insertionBoxSelector);
}

place(insertionBoxSelector) {
  let box = $(insertionBoxSelector);
  this.insertionBox = box; // insertionBox is the element that the content
                           // will be appended to
  this.addedBox = EnterBox.addInfo(box); // addedBox is the content
  // Worth noting that at this point it's fairly empty. It'll get filled by
  // async ajax calls while this is running. And all that will still be there
  // when it's added back in the callback later.
}

observeMutations(insertionBoxSelector) {
  let observer = new MutationObserver(this.replaceBox.bind(this));
  // this.insertionBox is a jQuery object and observe() doesn't accept that,
  // so grab the underlying DOM node from the selector instead
  let insertionBox = document.querySelector(insertionBoxSelector);
  observer.observe(insertionBox, { attributes: true });
}

replaceBox() {
  this.insertionBox.append(this.addedBox);
  _position(this.addedBox);
}
That being said, someone suggested adding the content above the React root node (i.e. directly under the body) and just positioning it absolutely relative to the window; there's a sketch of that at the end of this answer. As that will also solve a separate problem I was having on some pages of the site, I'll probably go with it. It's also far simpler.
But I thought this was still an interesting solution to a rare problem so I wanted to post it as well.
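For reference, a minimal sketch of that simpler approach, appending the box directly under <body> (outside the React root) and pinning it to the viewport; the id and styles here are only illustrative:
const addedBox = document.createElement("div");
addedBox.id = "my-extension-box"; // illustrative id

// position: fixed keeps it pinned to the viewport, and because it lives
// directly under <body> rather than inside the React root, React's
// re-renders never remove it.
Object.assign(addedBox.style, {
  position: "fixed",
  top: "16px",
  right: "16px",
  zIndex: "2147483647"
});

document.body.appendChild(addedBox);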

Electron does not listen keydown event

I am a backend developer who was given a little project to fix.
My boss gave me an Electron project that runs on touch devices.
I know that I can listen for key events in JavaScript using the document object, but in Electron's main process that does not work; it says document cannot be found.
So I implemented the following, so that when I or another support person presses F12, the dev tools open in the Electron app.
mainWindow = new BrowserWindow({
  'web-preferences': { 'web-security': false }
});

mainWindow.onkeydown = function (e) {
  console.log("Key down");
  if (e.which === 123) {
    console.log("Key is F12");
    mainWindow.webContents.openDevTools();
  }
};
But this code is not working for me. I have no idea how to listen for the F12 key being pressed.
Unfortunately, I cannot add a button to the UI to show the devtools, because the customers must not be able to press it.
Sometimes I need to see the real-time Console tab of the devtools on the device.
There is a known issue in Electron (which has lately been marked as wontfix) that prevents catching key events in the usual JS way.
There is also a small library called electron-localshortcut that circumvents this issue by hijacking the Electron global shortcuts API while the window is active.
Use it like this in your main.js:
const electronLocalshortcut = require('electron-localshortcut');

electronLocalshortcut.register(mainWindow, 'F12', () => {
  // Open DevTools
  mainWindow.webContents.openDevTools();
});
Without additional libraries, you can use Electron's globalShortcut module. Note that shortcuts can only be registered once the app is ready, so the register call is wrapped in app.whenReady() here:
const { app, BrowserWindow, globalShortcut } = require("electron");

app.whenReady().then(() => {
  globalShortcut.register("CmdOrCtrl+F12", () => {
    mainWindow.isFocused() && mainWindow.webContents.toggleDevTools();
  });
});
I think F12 is reserved, so I use Ctrl+F12, which is not far off.
You can use the mousetrap library to add the shortcut; it can be installed through npm and bypasses the Electron problem mentioned in the accepted answer.
A code example in the renderer process would be:
var Mousetrap = require('mousetrap');
Mousetrap.bind('4', function() { console.log('4'); });
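To tie that back to the original question, you could bind F12 with Mousetrap in the renderer and ask the main process to toggle the DevTools over IPC. A rough sketch, where the 'toggle-devtools' channel name is just an assumption, not anything built in (this also presumes nodeIntegration is enabled so require works in the renderer):
// renderer process (sketch)
var Mousetrap = require('mousetrap');
var ipcRenderer = require('electron').ipcRenderer;

Mousetrap.bind('f12', function () {
  ipcRenderer.send('toggle-devtools'); // hypothetical channel name
});

// main process (sketch)
var ipcMain = require('electron').ipcMain;

ipcMain.on('toggle-devtools', function () {
  mainWindow.webContents.toggleDevTools();
});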

How can I access the DOM of a <webview> in Electron?

I'm just getting started with Electron, with prior experience with node-webkit (nw.js).
In nw.js, I was able to create iframes and then access the DOM of said iframe in order to grab things like the title, favicon, &c. When I picked up Electron a few days ago to port my nw.js app to it, I saw advice to use webviews instead of iframes, simply because they were better. Now, the functionality I mentioned above was relatively easy to do in nw.js, but I don't know how to do it in Electron (and examples are slim to none). Can anyone help?
Also, I have back/forward buttons for my webview (and I intend on having more than one). I saw in the documentation that I could call functions for doing so on a webview, but nothing I have tried worked either (and, I haven't found examples of them being used in the wild).
I dunno who voted to close my question, but I'm glad it didn't go through. Other people have this question elsewhere online too. I also explained what I wanted to achieve, but w/e.
I ended up using ipc-message. The documentation could use more examples/explanations for the layperson, but hey, I figured it out. My code is here and here, but I will also post examples below should my code disappear for whatever reason.
This code is in aries.js, and this file is included in the main renderer page, which is index.html.
var ipc = require("ipc");
var webview = document.getElementsByClassName("tabs-pane active")[0];

// Keep the timer outside the handler so a new mouseover can cancel the
// previous fade-out (this addresses the old TODO about cancelling the setTimeout).
var timer;

webview.addEventListener("ipc-message", function (e) {
  if (e.channel === "window-data") {
    // console.log(e.args[0]);
    $(".tab.active .tab-favicon").attr("src", e.args[0].favicon);
    $(".tab.active .tab-title").html(e.args[0].title);
    $("#url-bar").val(e.args[0].url);
    $("#aries-titlebar h1").html("Aries | " + e.args[0].title);
  }

  if (e.channel === "mouseover-href") {
    // console.log(e.args[0]);
    $(".linker").html(e.args[0]).stop().addClass("active");
    clearTimeout(timer);
    timer = setTimeout(function () {
      $(".linker").stop().removeClass("active");
    }, 1500);
  }
});
This next bit of code is in browser.js, and this file gets injected into my <webview>.
var ipc = require("ipc");

document.addEventListener("mouseover", function (e) {
  var hoveredEl = e.target;
  if (hoveredEl.tagName !== "A") {
    return;
  }
  ipc.sendToHost("mouseover-href", hoveredEl.href);
});

document.addEventListener("DOMContentLoaded", function () {
  var data = {
    "title": document.title,
    "url": window.location.href,
    // need to make my own version, can't rely on Google forever
    // maybe have this URL fetcher hosted on hikar.io?
    "favicon": "https://www.google.com/s2/favicons?domain=" + window.location.href
  };
  ipc.sendToHost("window-data", data);
});
I haven't found a reliable way to inject jQuery into <webview>s, and I probably shouldn't because the page I would be injecting might already have it (in case you're wondering why my main code is jQuery, but there's also regular JavaScript).
Besides guest-to-host IPC calls, as NetOperatorWibby describes, it is also very useful to go from host to guest. The only way to do this at present is to use <webview>.executeJavaScript(code, userGesture). This API is a bit crude, but it works.
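For example, from the host page you can run code inside the guest and read back the result of the last expression (in recent Electron versions executeJavaScript returns a Promise; older versions took a callback instead):
// Host page: run this after the webview's dom-ready event has fired.
const webview = document.querySelector("webview");

webview.executeJavaScript("document.title").then((title) => {
  console.log("guest title:", title);
});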
If you are working with a remote guest, e.g. "extending" a third-party web page, you can also use the webview preload attribute, which executes your custom script before any other scripts run on the page. Just note that, for security reasons, the preload mechanism will nuke any functions created in the root namespace of your custom JS file once your script finishes; however, this custodial process will not nuke any objects you declare in the root. So if you want your custom functions to persist, bundle them into a singleton object and your custom APIs will persist after the page fully loads.
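So a webview preload script might look something like this, bundling everything into a single root-level object so it survives that cleanup (the object and function names are made up for the example); the host could later invoke it with, e.g., webview.executeJavaScript('myGuestApi.highlightLinks()'):
// webview-preload.js (sketch)
// A bare function declared at the root would be cleaned up after this
// script finishes, but properties of a root-level object persist.
window.myGuestApi = {
  highlightLinks: function () {
    document.querySelectorAll("a").forEach(function (a) {
      a.style.outline = "1px solid red";
    });
  }
};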
[update] Here is a simple example that I just finished writing: Electron-Webview-Host-to-Guest-RPC-Sample
This relates to the previous answer (I am not allowed to comment): important information regarding the ipc module for users of Electron 1.x.
The ipc module was split into two separate modules:
ipcMain for the main process
ipcRenderer for the renderer process
So, the above examples need to be corrected, instead of
// Outdated - doesn't work in 1.x
var ipc = require("ipc");
use:
// In main process.
var ipcMain = require('electron').ipcMain
And:
// In renderer process.
var ipcRenderer = require('electron').ipcRenderer
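For example, in the webview guest script shown earlier, the sendToHost call inside the DOMContentLoaded handler would now go through ipcRenderer:
// In the <webview> guest script (Electron 1.x and later).
var ipcRenderer = require('electron').ipcRenderer;

ipcRenderer.sendToHost("window-data", data);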
See the section on 'Splitting the ipc module' at http://electron.atom.io/blog/2015/11/17/electron-api-changes.

iOS UI Automation wait until web view is ready and rendered

I am creating an iOS UI Automation JavaScript script with Instruments to automate taking a screenshot of my iOS app. The tool I am using to automate the screenshots is Snapshot.
I am using a webview for part of my app, and I want to take a screenshot of the fully rendered webview before proceeding with the rest of the script.
So currently my script looks like this:
var target = UIATarget.localTarget();
target.frontMostApp().mainWindow().scrollViews()[0].webViews()[0].links()[0].tap();
// take screen shot here
target.frontMostApp().navigationBar().leftButton().tap();
But when it takes the screenshot, the webview is not yet fully rendered, so it captures an empty screen and then goes back to the main screen.
Is there a way to wait until the webview is fully loaded, then take a screen shot and continue rest of the script?
The Illuminator framework (full disclosure, I wrote it) provides a function called waitForChildExistence() in its Extensions.js that allows you to continuously evaluate a reference to an element until it actually appears.
You would do something like this to wait up to 10 seconds for the webview to load:
var webView = target.frontMostApp().mainWindow().scrollViews()[0].webViews()[0];
webView.tap();

webView.waitForChildExistence(10, true, "a specific link in the web view", function (wb) {
  // the argument wb will contain the reference to our var webView
  return wb.links()["the link you are waiting for"];
});
// take screen shot here
target.frontMostApp().navigationBar().leftButton().tap();
You can use a computed style: put some inert style property on the element (I used clip) and check it at one-second intervals (if you use CSS files, it's better to create a class with that property and apply the class to the element).
The function below is the one I use to read the computed style.
function getStyle(el, prop) {
  try {
    if (typeof getComputedStyle !== "undefined") {
      return getComputedStyle(el, null).getPropertyValue(prop);
    } else {
      return el.currentStyle[prop];
    }
  } catch (e) {
    return el.currentStyle[prop];
  }
}
For the timer, you can poll with setInterval; a rough sketch follows.
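A rough sketch of that polling loop, assuming the page applies a marker class that sets a non-default clip on the element once rendering is done (the selector and interval are placeholders):
// Poll the computed style once per second until the marker value shows up.
var readyCheck = setInterval(function () {
  var el = document.querySelector("#content"); // placeholder selector
  if (el && getStyle(el, "clip") !== "auto") {
    clearInterval(readyCheck);
    // the web view is rendered; take the screenshot / continue here
  }
}, 1000);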
Be aware that computed styles may vary between browsers.
Hope it helps...
