CEF Python ExecuteJavascript will not set value of an input element - javascript

When I try browser.ExecuteJavascript("alert('ExecuteJavaScript works!');") it works fine (pops up an alert when the browser is created). When I try browser.ExecuteJavascript("document.getElementsByName('q')[0].value = 24;") nothing happens. So I know that ExecuteJavascript is working, but how come the input element doesn't change when I try to set its value? The code I am trying is below; if anyone has an idea as to why that particular JavaScript will not execute, I would be very grateful.
from cefpython3 import cefpython as cef
import platform
import sys

def main():
    sys.excepthook = cef.ExceptHook
    cef.Initialize()
    browser = cef.CreateBrowserSync(url="https://www.google.com", window_title="Hello World!")
    browser.ExecuteJavascript("document.getElementsByName('q')[0].value = 24")
    cef.MessageLoop()
    cef.Shutdown()

if __name__ == '__main__':
    main()

The DOM is not yet ready when the browser has only just been created. Open the Developer Tools window via the mouse context menu and you will see the error. You should use LoadHandler to detect when the window finishes loading the web page or when the DOM is ready. Options:
Implement LoadHandler.OnLoadingStateChange:
def main():
    ...
    browser.SetClientHandler(LoadHandler())

class LoadHandler(object):
    def OnLoadingStateChange(self, browser, is_loading, **_):
        if not is_loading:
            browser.ExecuteJavascript("document.getElementsByName('q')[0].value = 24")
Implement LoadHandler.OnLoadStart and inject JS code that adds a DOMContentLoaded event listener which executes the actual code; a rough sketch of this option follows below.
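A minimal sketch of that second option, assuming the handler is registered with browser.SetClientHandler(LoadHandler()) just like option 1; the injected JS simply wraps the original assignment in a DOMContentLoaded listener:
class LoadHandler(object):
    def OnLoadStart(self, browser, **_):
        # Called as soon as loading begins; the injected JS defers the DOM
        # access until the document has actually been parsed.
        browser.ExecuteJavascript(
            "document.addEventListener('DOMContentLoaded', function() {"
            "  document.getElementsByName('q')[0].value = 24;"
            "});")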
See Tutorial > Client handlers:
https://github.com/cztomczak/cefpython/blob/master/docs/Tutorial.md#client-handlers
See also API reference for LoadHandler.

Related

How to disable javascript error debugging in IE WebBrowser Control

The application is using the IE WebBrowser Control. However, sometimes JavaScript error dialogs come up; to solve this, the put_silent property was used on the WebBrowser element, but that disables all the dialogs. So is there a way to disable JavaScript error debugging in the WebBrowser control?
Right-click on your control and click Inspect Element. If you did not disable the IE menu, it should open the Developer window on the right or bottom side. Select the "Debug" tab there, click on the hexagon, and check "Don't stop on exception" or "Stop on unhandled exceptions". I believe this is a global setting for the browser, so you may be able to do it just from IE.
Update 1
First implement IDocHostUIHandler and wrap the external handler calls. It is declared in Mshtmhst.h, so you probably have to include it. Don't forget about the IUnknown members; they also have to be wrapped. ATL wizards can be used to implement interfaces, but either way you will have to understand exactly what you are doing:
class MyDocHostUIHandler : public IDocHostUIHandler
{
public:
    IDocHostUIHandler* externalHandler;

    HRESULT EnableModeless(BOOL fEnable)
    {
        return externalHandler->EnableModeless(fEnable);
    }
    HRESULT FilterDataObject(IDataObject* pDO, IDataObject** ppDORet)
    {
        return externalHandler->FilterDataObject(pDO, ppDORet);
    }
    // ... wrap all the functions from the external handler like above
};
Create an instance of your class:
MyDocHostUIHandler* myHandler = new MyDocHostUIHandler();
Then, in your code, call it as specified in MSDN.
First you get the MSHTML object:
CComPtr<IHTMLDocument2> m_spDocument;
hr = m_WebBrowser->get_Document(&m_spDocument);// Get the MSHTML object
Then you get the existing default handler
ComPtr<IOleObject> spOleObject;
hr = m_spDocument.As(&spOleObject);
ComPtr<IOleClientSite> spClientSite;//<--this will be the default handler
hr = spOleObject->GetClientSite(&spClientSite);
Save the existing handler to your class, so you will be able to wrap its functions
//see myHandler is the instance of interface you implemented in first step
myHandler->externalHandler = spClientSite;
Get the custom doc:
ComPtr<ICustomDoc> spCustomDoc;
hr = m_spDocument.As(&spCustomDoc);//m_spDocument it is the pointer to your MSHTML
Now replace the handler in MSHTML:
//myHandler is the instance of class you implemented above
spCustomDoc->SetUIHandler(myHandler);
After this step, the MSHTML should not notice anything, but you will be able to add breakpoints in your MyDocHostUIHandler class and see which function is called by your MSHTML and when.

Send event to js from swift or objective-c

I have created the following class (condensed version); here's a reference to the full file:
https://github.com/cotyembry/CastRemoteNative/blob/7e74dbc56f037cc61241f6ece24a94d8c52abb32/root/ios/CastRemoteNative/NativeMethods.swift
@objc(NativeMethods)
class NativeMethods: RCTEventEmitter {
  @objc(sendEventToJSFromJS)
  func sendEventToJSFromJS() {
    self.emitEvent(eventName: "test", body: "bodyTestString")
  }
  func emitEvent(eventName: String, body: Any) {
    self.sendEvent(withName: eventName, body: body)
  }
}
This works perfectly and fires my callback listener in my JavaScript code when I call the emitEvent method like the following; it's an altered snippet from
https://github.com/cotyembry/CastRemoteNative/blob/7e74dbc56f037cc61241f6ece24a94d8c52abb32/root/js/Components/ChromecastDevicesModal.js
From the javascript side
import {
NativeModules,
NativeEventEmitter
} from 'react-native'
//here I bring in the swift class to use inside javascript
var NativeMethods = NativeModules.NativeMethods;
//create an event emitter to use to listen for the native events when they occur
this.eventEmitter = new NativeEventEmitter(NativeMethods);
//listen for the event once it sends
this.subscription = this.eventEmitter.addListener('test', (body) => { console.log('in test event listener callback', body)});
NativeMethods.sendEventToJSFromJS() //call the native method written in swift
I simply have the sendEventToJSFromJS method invoked on a button press in javascript
Again, this works and the console.log('in test event listener callback', body) code works and runs on the javascript side
My Issue where this does NOT work:
If I were to do the following inside the Swift file after defining the class, it would not work:
var nativeMethodsInstance = NativeMethods()
nativeMethodsInstance.sendEventToJSFromSwift()
Why? Because the following error is thrown:
Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'bridge is not set. This is probably because you've explicitly synthesized the bridge in NativeMethods, even though it's inherited from RCTEventEmitter.'
So, when creating an instance of NativeMethods, versus not... what is the difference?
For additional information:
Objective-C gets the same bridge not set issue when I write these same snippets of code in .h and .m files instead of in .swift files
I found where the error message is getting printed in the native code, but it just has the variable
_bridge
and is checking to see if it is nil
The files this error comes from are:
RCTEventEmitter.h
RCTEventEmitter.m
here is the relevant snippet from RCTEventEmitter.m:
- (void)sendEventWithName:(NSString *)eventName body:(id)body
{
  RCTAssert(_bridge != nil, @"bridge is not set. This is probably because you've "
            "explicitly synthesized the bridge in %@, even though it's inherited "
            "from RCTEventEmitter.", [self class]);

  if (RCT_DEBUG && ![[self supportedEvents] containsObject:eventName]) {
    RCTLogError(@"`%@` is not a supported event type for %@. Supported events are: `%@`",
                eventName, [self class], [[self supportedEvents] componentsJoinedByString:@"`, `"]);
  }
  if (_listenerCount > 0) {
    [_bridge enqueueJSCall:@"RCTDeviceEventEmitter"
                    method:@"emit"
                      args:body ? @[eventName, body] : @[eventName]
                completion:NULL];
  } else {
    RCTLogWarn(@"Sending `%@` with no listeners registered.", eventName);
  }
}
Where does this _bridge value get set, and how, so that in the cases where it is failing I can know how to set it?
I found the following also in RCTEventEmitter.h
@property (nonatomic, weak) RCTBridge *bridge;
The error mentions that the bridge is inherited in RCTEventEmitter, so is this maybe an issue with the weak part of the bridge property?
Or do I need to change my strategy altogether?
I know it probably has to be something to do with me not fully understanding the
@synthesize bridge = _bridge;
part of the code, and all the languages being mixed in doesn't help much lol...
This is really hard, so any help would be much appreciated!
Thanks so much for your time
here is a link to the full project at the point in its history where the code matched my question above (since I have since made changes to the project):
https://github.com/cotyembry/CastRemoteNative/tree/7e74dbc56f037cc61241f6ece24a94d8c52abb32
I figured it out
Warning: this solution uses a deprecated React Native method - I could not figure out how to "properly" inherit from RCTEventEmitter and send an event... every time I tried, the _bridge would end up being nil
Make sure Swift is bridged to Objective-C (if you're using Swift to send the event to JavaScript)
Do not create instances of the exported native modules (whether they are written in Swift or Objective-C)
Let React Native's underlying implementation do this, and for each and every class that needs to send an event, export that particular native class (the Objective-C implementation or Swift code, i.e. the native module) to React Native. This allows the JavaScript to listen for the event.
var publicBridgeHelperInstance = PublicBridgeHelper() // instantiate the Objective-C class from inside the .swift file, to use later when a reference to the bridge is needed to send an event to the JavaScript written in React Native

@objc(DeviceManager) // export the Swift module to Objective-C
class DeviceManager: NSObject {
  @objc(deviceDidComeOnline:) // expose the function to Objective-C
  public func deviceDidComeOnline(_ device: GCKDevice) {
    // imagine this deviceDidComeOnline function gets called from the native code (totally independent of JavaScript) - honestly it could be called from a native button click as well, just to test that it works...
    // emit an event to a JavaScript function in a React Native component that is listening for the event, like so:

    // 1. get a reference to the bridge to send an event through from native to JavaScript in React Native (here is where my custom code comes in to make this actually work)
    let rnBridge = publicBridgeHelperInstance.getBridge() // this gets the bridge that is stored in AppDelegate.m, set from the `rootView.bridge` variable (more on this later)

    // (if you want to print the bridge here to make sure it is not `nil`, go ahead:)
    print("rnBridge = \(rnBridge)")

    // 2. actually send the event through the eventDispatcher
    rnBridge?.eventDispatcher().sendAppEvent(withName: "test", body: "testBody data!!!")
  }
}
In AppDelegate.h put (in addition to the code that was already in the file):
#import "YourProjectsBridgingHeaderToMakeThisCodeAvailableInSwift.h" //replace this with your actual header you created when creating a swift file (google it if you dont know how to bridge swift to objective c)
#interface PublicBridgeHelper: NSObject
-(RCTBridge*)getBridge;
#end
In AppDelegate.m put (in addition to the code that was already in the file):
#import <React/RCTRootView.h>

RCTBridge *rnBridgeFromRootView;

@implementation PublicBridgeHelper // this is created SIMPLY to return rnBridgeFromRootView (defined above) to my Swift class when actually sending the event to the JavaScript that defines a React Native component
- (RCTBridge *)getBridge {
  NSLog(@"rnBridgeFromRootView = %@", rnBridgeFromRootView);
  return rnBridgeFromRootView;
}
@end
Important - also make sure to add the following line of code to the Objective-C-to-Swift bridging header, to make this PublicBridgeHelper definition available for use in the .swift code:
#import "AppDelegate.h"
Finally, here is how to set the rnBridgeFromRootView variable used in AppDelegate.m (which gets returned and used in the .swift code right before sending the event to JavaScript).
open AppDelegate.m and in the method body of
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { ... }
include the following after the line of code that instantiates the rootView variable
i.e. after the line that probably looks like
RCTRootView *rootView = [[RCTRootView alloc] initWithBundleURL:jsCodeLocation moduleName:@"YourProjectNameProbably" initialProperties:nil launchOptions:launchOptions];
add:
rnBridgeFromRootView = rootView.bridge; // set the bridge to be exposed, returned later, and used by the Swift class
Now to explain the publicBridgeHelperInstance.getBridge() part that is in the .swift file
publicBridgeHelperInstance is an instance of an Objective-C class, which gives the Swift class the ability to get a reference to the React Native bridge
If you are still having problems understanding my answer after reading this, I made a video about it, which you can watch here:
https://www.youtube.com/watch?v=GZj-Vm9cQIg&t=9s

Qt function runJavaScript() does not execute JavaScript code

I am trying to implement the displaying of a web page in Qt. I chose to use the Qt WebEngine to achieve my task. Here's what I did:
Wrote a sample web page consisting of an empty form.
Wrote a JS file with just an API to create a radio button inside the form.
In my code, it looks like this :
View = new QWebEngineView(this);
// read the js file using QFile
file.open("path to jsFile");
myJsApi = file.readAll();
View->page()->runJavaScript(myJsApi);
View->page()->runJavaScript("createRadioButton(\"button1\");");
I find that the runJavaScript() function has no effect on the web page. I can see the web page in the output window, but the radio button I expected is not present. What am I doing wrong?
I think you will have to connect the signal loadFinished(bool) of your page() to a slot, then execute runJavaScript() in this slot.
void yourClass::mainFunction()
{
    View = new QWebEngineView(this);
    connect(View->page(), SIGNAL(loadFinished(bool)), this, SLOT(slotForRunJS(bool)));
}

void yourClass::slotForRunJS(bool ok)
{
    // read the js file using QFile
    file.open("path to jsFile");
    myJsApi = file.readAll();
    View->page()->runJavaScript(myJsApi);
    View->page()->runJavaScript("createRadioButton(\"button1\");");
}
I had this problem: runJavaScript didn't have any effect. I had to put some HTML content into the view (with page()->setHtml(...)) before running it.
Check the application output; it might contain JavaScript errors. Even if your JS code is valid, you might encounter the situation where the script runs before the DOMContentLoaded event, that is, while document.readyState == 'loading'. In that case the DOM might not be available yet, nor the variables or functions provided by other scripts. If your code depends on them, then when you detect this readyState either wait for the event or call the function later, after a timeout. The timeout approach might be needed if you need the result of the code execution, as that can only be obtained synchronously.
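The readyState check described above could look roughly like this. Note this sketch is in Python with PyQt5 rather than the C++ used above (the QWebEngineView/runJavaScript calls map one-to-one), and the HTML page and createRadioButton body are stand-ins, not the asker's actual files:
import sys
from PyQt5.QtWidgets import QApplication
from PyQt5.QtWebEngineWidgets import QWebEngineView

# Stand-in page that defines createRadioButton(), as the question's JS file would.
HTML = """
<html><body>
  <form id="myForm"></form>
  <script>
    function createRadioButton(name) {
      var r = document.createElement('input');
      r.type = 'radio';
      r.name = name;
      document.getElementById('myForm').appendChild(r);
    }
  </script>
</body></html>
"""

# Injected JS that tolerates running before the DOM is ready.
JS = """
if (document.readyState === 'loading') {
    document.addEventListener('DOMContentLoaded', function () {
        createRadioButton('button1');
    });
} else {
    createRadioButton('button1');
}
"""

app = QApplication(sys.argv)
view = QWebEngineView()
# Inject only once the page has finished loading; JS errors show up in the application output.
view.loadFinished.connect(lambda ok: view.page().runJavaScript(JS))
view.setHtml(HTML)
view.show()
sys.exit(app.exec_())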

Firefox: How do I get an nsIMessageManager instance from a JS Module under Electrolysis (e10s)?

I'm trying to port my Firefox extension to work under Electrolysis / e10s / multi-process mode. I've got a feature that requires registration through nsIComponentRegistrar so it's in a JSM which gets loaded only once (per process). I'm running in the child scope, so I don't have access to things like files, but my feature requires that. So I want to sendSyncMessage() to the parent process to fetch that detail (just the path to a file in this case).
The docs even mention doing something like this explicitly. But in the JSM I don't have a message manager in scope to call sendSyncMessage() on. How do I get a handle to (the right?) one? When I get called I don't have anything relating to the content document/window in scope.
Update, for clarity:
var c = Cc['@mozilla.org/childprocessmessagemanager;1'];
var s = c.getService(Ci.nsISyncMessageSender);
var response = s.sendSyncMessage('id', {'data': 'x'});
dump('response len?? ' + response.length + '\n');
This code produces 0 responses, even running directly in the frame script (not in the JSM which the frame script loads). If I just use the globally available sendSyncMessage() in the frame script then it gets the 1 response I expect.
"#mozilla.org/childprocessmessagemanager;1" is the way to go. Use that in child process JSMs.
However, as MDN puts it:
In addition to Message Managers centered around window and tab objects
there is also a separate hierarchy focusing on process boundaries.
Therefore, you cannot use the regular frame script messengers, but have to use "@mozilla.org/parentprocessmessagemanager;1" in the parent (main) process.
child.jsm
let cpmm = Cc["@mozilla.org/childprocessmessagemanager;1"].
           getService(Ci.nsISyncMessageSender);
cpmm.sendSyncMessage("addon:present?")[0] === "yup"
parent.jsm
let ppmm = Cc["@mozilla.org/parentprocessmessagemanager;1"].
           getService(Ci.nsIMessageListenerManager);
ppmm.addMessageListener("addon:present?", m => "yup");
Core code uses this scheme in various places, e.g. Network:SampleRate
This may work, no promises.
Try loading:
Cc["#mozilla.org/globalmessagemanager;1"].getService(Ci.nsIMessageListenerManager);
If that doesn't work then try using:
Cc['#mozilla.org/childprocessmessagemanager;1'].getService(Ci.nsISyncMessageSender);
Or vice-versa

QtWebKit bridge: call JavaScript functions

I am writing a hybrid application with HTML interface and Python code.
I can access Python functions via a shared object:
pythonPart.py:
class BO(QObject):
    def __init__(self, parent=None):
        super(BO, self).__init__(parent)

    @Slot(str)
    def doStuff(self, txt):
        print(txt)

bridgeObj = BO()
# init stuff and frame...
frame.addToJavaScriptWindowObject('pyBridge', bridgeObj)
frame.evaluateJavaScript('alert("Alert from Python")')
frame.evaluateJavaScript('testMe()')
frame.evaluateJavaScript('alert("Starting test");testMe();alert("Test over")')
jsPart.js:
function testMe() { alert('js function testMe called'); }
pyBridge.doStuff("bla");
testMe();
Calling Python functions from JS works, as does calling testMe from JS. Calling "standard" JS functions like alert from Python works, too.
The last two Python lines won't:
evaluateJavaScript("testMe()") doesn't do anything at all.
The last line executes the first alert and won't continue after that.
EDIT: I already tried having some time.sleep() between loading and calling the evaluateJavaScript and I'm loading the webpage from the local machine.
The most likely problem is that the JavaScript just isn't loaded yet. Adding time.sleep() calls doesn't help for that, those will also block the Qt event loop from continuing, not just your Python code.
Try waiting for the page to have fully loaded instead, for example using the loadFinished signal:
def onLoad():
    frame.evaluateJavaScript('testMe()')

frame.loadFinished.connect(onLoad)
Additionally, for getting more debug information in situations like this, you might want to implement QtWebKit.QWebPage.javaScriptConsoleMessage.
There are at least two errors in the example code.
Firstly, when you add the object to the javascript window, you call it "pyBridge", but you then try to reference it in the javascript as "bridgeObj". Obviously, this will throw a ReferenceError which will prevent any further execution of the script.
Secondly, the doStuff method is missing a self argument, which will cause a TypeError to be raised by PySide.
Dealing with those two issues should be enough to fix your example code, so long as you make sure that the bridge object is added to the javascript window before the html is loaded. This step is required if you want to reference the bridge object in top-level javascript code. However, if the bridge object is only ever referenced in function-level code, it can be safely added to the javascript window after the html has loaded.
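A rough sketch of that ordering, assuming PySide with QtWebKit; the page URL is a placeholder, and the page is assumed to load jsPart.js (which defines testMe()):
import sys
from PySide.QtCore import QObject, QUrl, Slot
from PySide.QtGui import QApplication
from PySide.QtWebKit import QWebView

class BO(QObject):
    @Slot(str)
    def doStuff(self, txt):
        print(txt)

app = QApplication(sys.argv)
view = QWebView()
frame = view.page().mainFrame()
bridgeObj = BO()

# Add the bridge object *before* loading the HTML, so top-level JS can use pyBridge.
frame.addToJavaScriptWindowObject('pyBridge', bridgeObj)

# Only call into the page's JS once it has finished loading.
def onLoad(ok):
    frame.evaluateJavaScript('testMe()')

view.loadFinished.connect(onLoad)
view.load(QUrl('file:///path/to/page.html'))  # placeholder path
view.show()
sys.exit(app.exec_())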
