I use the following code to load a page into MyWebView:
private void DisplayLocalPage(string filename)
{
    var html = new HtmlWebViewSource();
    html.BaseUrl = DependencyService.Get<IBaseUrl>().Get();
    html.Html = ReadFile(filename);
    MyWebView.Source = html;
    MyWebView.Eval("alert(200)");
}
The page renders fine and all of its scripts run, but this alert is never fired. What are the possible reasons?
Please try this:
webView.Navigated += (o, s) => {
    webView.Eval("alert('text')");
};
I think I figured out why it might not work: Eval is called before the WebView is fully initialized and the page is rendered, so the script is triggered but has no effect. Wiring the call through the Navigated handler, as above, avoids that.
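Putting the two pieces together, a minimal sketch of the original DisplayLocalPage with the handler wired up before the source is assigned (IBaseUrl and ReadFile are the asker's own helpers and are assumed to exist; untested):

private void DisplayLocalPage(string filename)
{
    // Subscribe before navigating; if this method is called repeatedly,
    // unsubscribe the previous handler first to avoid duplicate alerts.
    MyWebView.Navigated += (sender, args) =>
    {
        if (args.Result == WebNavigationResult.Success)
            MyWebView.Eval("alert(200)");   // page is loaded, scripts can run now
    };

    var html = new HtmlWebViewSource();
    html.BaseUrl = DependencyService.Get<IBaseUrl>().Get();
    html.Html = ReadFile(filename);
    MyWebView.Source = html;                // navigation starts here
}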
Related
I am using WPF's WebBrowser control to load a simple web page. On this page I have an anchor or a button. I want to capture the click event of that button in my application's code behind (i.e. in C#).
Is there a way for the WebBrowser control to capture click events on the loaded page's elements?
In addition, is it possible to communicate event-triggered data between the page and the WebBrowser? All of the above should be possible, am I right?
Edit: Probable solution:
I have found the following link that might be a solution. I haven't tested it yet, but it's worth a shot. I will update this question depending on my test results.
http://support.microsoft.com/kb/312777
Link taken from: Source
OK, answer found - tested and it works:
Add a reference from the COM tab called: Microsoft HTML Object Library
The following is example code:
You will need two components: a WebBrowser (webBrowser1) and a TextBox (textBox1).
public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
        webBrowser1.LoadCompleted += new LoadCompletedEventHandler(webBrowser1_LoadCompleted);
    }

    private void webBrowser1_LoadCompleted(object sender, NavigationEventArgs e)
    {
        mshtml.HTMLDocument doc;
        doc = (mshtml.HTMLDocument)webBrowser1.Document;
        mshtml.HTMLDocumentEvents2_Event iEvent;
        iEvent = (mshtml.HTMLDocumentEvents2_Event)doc;
        iEvent.onclick += new mshtml.HTMLDocumentEvents2_onclickEventHandler(ClickEventHandler);
    }

    private bool ClickEventHandler(mshtml.IHTMLEventObj e)
    {
        textBox1.Text = "Item Clicked";
        return true;
    }
}
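For the second part of the question (passing data from the page back to C#), WPF's WebBrowser also supports ObjectForScripting, which the page can reach through window.external. A minimal sketch, assuming the page contains a button; the ScriptBridge class and OnPageEvent method names are made up for illustration, and this is untested:

using System.Runtime.InteropServices;
using System.Windows;

// The type must be ComVisible so the page's script can call into it.
[ComVisible(true)]
public class ScriptBridge
{
    // Called from JavaScript as: window.external.OnPageEvent('some data');
    public void OnPageEvent(string data)
    {
        MessageBox.Show("Page says: " + data);
    }
}

// Wiring it up (e.g. in MainWindow's constructor):
//   webBrowser1.ObjectForScripting = new ScriptBridge();
//
// And in the page itself:
//   <button onclick="window.external.OnPageEvent('button clicked')">Click me</button>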
Here is another example.
I was trying to inject a remote JavaScript file and execute some code when it was ready, by adding a DOM element <script src="{path to remote file}" /> to the header - essentially the same idea as jQuery.getScript(url, callback).
The code below works fine.
HtmlElementCollection head = browser.Document.GetElementsByTagName("head");
if (head != null)
{
    HtmlElement scriptEl = browser.Document.CreateElement("script");
    IHTMLScriptElement element = (IHTMLScriptElement)scriptEl.DomElement;
    element.src = url;
    element.type = "text/javascript";
    head[0].AppendChild(scriptEl);

    // Listen for readyState changes
    ((mshtml.HTMLScriptEvents2_Event)element).onreadystatechange += delegate
    {
        if (element.readyState == "complete" || element.readyState == "loaded")
        {
            Callback.execute(callbackId);
        }
    };
}
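Callback.execute(callbackId) in the snippet above is the poster's own helper. If all you need is to run a function defined by the injected file once it has loaded, the WinForms Document.InvokeScript method can be used instead. A rough, untested variant of the same idea (onRemoteScriptReady is a made-up function name that the remote file is assumed to define):

using System.Windows.Forms;
using mshtml;

static void InjectRemoteScript(WebBrowser browser, string url)
{
    // Create a <script> element pointing at the remote file and add it to <head>.
    HtmlElement scriptEl = browser.Document.CreateElement("script");
    var element = (IHTMLScriptElement)scriptEl.DomElement;
    element.src = url;
    element.type = "text/javascript";
    browser.Document.GetElementsByTagName("head")[0].AppendChild(scriptEl);

    // When the script has finished loading, call a global function it is assumed to define.
    ((HTMLScriptEvents2_Event)element).onreadystatechange += delegate
    {
        if (element.readyState == "complete" || element.readyState == "loaded")
            browser.Document.InvokeScript("onRemoteScriptReady");
    };
}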
Hello! .NET 4.0, console. I need to write a page parser that runs in console mode and returns the page source exactly as it is displayed to the user after it has finished loading - without clicking buttons, scrolling, or other events. I use the code below, but it returns something completely different from what I need. How can I get the result I want? Should I use other methods or components?
wb = new WebBrowser();
wb.Navigate(linkNorm);
wb.ScriptErrorsSuppressed = true;
wb.DocumentCompleted += new WebBrowserDocumentCompletedEventHandler(w_DocumentCompleted);
while (wb.ReadyState != WebBrowserReadyState.Complete)
{
    Application.DoEvents();
}
originalText = wb.DocumentText;
wb.Dispose();

void w_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    Trace.WriteLine(wb.DocumentText);
}
Changing it to wb.Document.Body.OuterHtml did not help either; the result is just as bad.
I think this is a pretty simple task that somebody has probably already solved.
Note: I need the full HTML text in a string variable after all the JavaScript has run.
Most of the answers I have read concerning this subject point to either the System.Windows.Forms.WebBrowser class or the COM interface mshtml.HTMLDocument from the Microsoft HTML Object Library assembly.
The WebBrowser class did not lead me anywhere. The following code fails to retrieve the HTML code as rendered by my web browser:
[STAThread]
public static void Main()
{
    WebBrowser wb = new WebBrowser();
    wb.Navigate("https://www.google.com/#q=where+am+i");
    wb.DocumentCompleted += delegate(object sender, WebBrowserDocumentCompletedEventArgs e)
    {
        mshtml.IHTMLDocument2 doc = (mshtml.IHTMLDocument2)wb.Document.DomDocument;
        foreach (IHTMLElement element in doc.all)
        {
            System.Diagnostics.Debug.WriteLine(element.outerHTML);
        }
    };

    Form f = new Form();
    f.Controls.Add(wb);
    Application.Run(f);
}
The above is just an example. I'm not really interested in finding a workaround for figuring out the name of the town where I am located. I simply need to understand how to retrieve that kind of dynamically generated data programmatically.
(Call new System.Net.WebClient().DownloadString("https://www.google.com/#q=where+am+i"), save the resulting text somewhere, search for the name of the town where you are currently located, and let me know if you were able to find it.)
But yet when I access "https://www.google.com/#q=where+am+i" from my Web Browser (ie or firefox) I see the name of my town written on the web page. In Firefox, if I right click on the name of the town and select "Inspect Element (Q)" I clearly see the name of the town written in the HTML code which happens to look quite different from the raw HTML that is returned by WebClient.
After I got tired of playing with System.Windows.Forms.WebBrowser, I decided to give mshtml.HTMLDocument a shot, just to end up with the same useless raw HTML:
public static void Main()
{
    mshtml.IHTMLDocument2 doc = (mshtml.IHTMLDocument2)new mshtml.HTMLDocument();
    doc.write(new System.Net.WebClient().DownloadString("https://www.google.com/#q=where+am+i"));
    foreach (IHTMLElement e in doc.all)
    {
        System.Diagnostics.Debug.WriteLine(e.outerHTML);
    }
}
I suppose there must be an elegant way to obtain this kind of information. Right now all I can think of is to add a WebBrowser control to a form, have it navigate to the URL in question, send the keys CTRL+A, copy whatever happens to be displayed on the page to the clipboard, and attempt to parse it. That's a horrible solution, though.
I'd like to contribute some code to Alexei's answer. A few points:
Strictly speaking, it may not always be possible to determine with 100% certainty when a page has finished rendering. Some pages are quite complex and use continuous AJAX updates. But we can get quite close by polling the page's current HTML snapshot for changes and checking the WebBrowser.IsBusy property. That's what LoadDynamicPage does below.
Some time-out logic has to be present on top of the above, in case the page rendering never ends (note the CancellationTokenSource).
Async/await is a great tool for coding this, as it gives a linear code flow to our asynchronous polling logic, which greatly simplifies it.
It's important to enable HTML5 rendering using the Browser Feature Control registry key, as WebBrowser runs in IE7 emulation mode by default. That's what SetFeatureBrowserEmulation does below.
This is a WinForms app, but the concept can easily be converted into a console app (a rough sketch follows the code below).
This logic works well on the URL you've specifically mentioned: https://www.google.com/#q=where+am+i.
using Microsoft.Win32;
using System;
using System.ComponentModel;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
using System.Windows.Forms;

namespace WbFetchPage
{
    public partial class MainForm : Form
    {
        public MainForm()
        {
            SetFeatureBrowserEmulation();
            InitializeComponent();
            this.Load += MainForm_Load;
        }

        // start the task
        async void MainForm_Load(object sender, EventArgs e)
        {
            try
            {
                var cts = new CancellationTokenSource(10000); // cancel in 10s
                var html = await LoadDynamicPage("https://www.google.com/#q=where+am+i", cts.Token);
                MessageBox.Show(html.Substring(0, 1024) + "..."); // it's too long!
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

        // navigate and download
        async Task<string> LoadDynamicPage(string url, CancellationToken token)
        {
            // navigate and await DocumentCompleted
            var tcs = new TaskCompletionSource<bool>();
            WebBrowserDocumentCompletedEventHandler handler = (s, arg) =>
                tcs.TrySetResult(true);

            using (token.Register(() => tcs.TrySetCanceled(), useSynchronizationContext: true))
            {
                this.webBrowser.DocumentCompleted += handler;
                try
                {
                    this.webBrowser.Navigate(url);
                    await tcs.Task; // wait for DocumentCompleted
                }
                finally
                {
                    this.webBrowser.DocumentCompleted -= handler;
                }
            }

            // get the root element
            var documentElement = this.webBrowser.Document.GetElementsByTagName("html")[0];

            // poll the current HTML for changes asynchronously
            var html = documentElement.OuterHtml;
            while (true)
            {
                // wait asynchronously, this will throw if cancellation requested
                await Task.Delay(500, token);

                // continue polling if the WebBrowser is still busy
                if (this.webBrowser.IsBusy)
                    continue;

                var htmlNow = documentElement.OuterHtml;
                if (html == htmlNow)
                    break; // no changes detected, end the poll loop

                html = htmlNow;
            }

            // consider the page fully rendered
            token.ThrowIfCancellationRequested();
            return html;
        }

        // enable HTML5 (assuming we're running IE10+)
        // more info: https://stackoverflow.com/a/18333982/1768303
        static void SetFeatureBrowserEmulation()
        {
            if (LicenseManager.UsageMode != LicenseUsageMode.Runtime)
                return;

            var appName = System.IO.Path.GetFileName(System.Diagnostics.Process.GetCurrentProcess().MainModule.FileName);
            Registry.SetValue(@"HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION",
                appName, 10000, RegistryValueKind.DWord);
        }
    }
}
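Regarding the console-app point above: a rough, untested sketch of how the same WebBrowser logic can be hosted in a console application. The control needs an STA thread and a Windows message loop, which the code below provides explicitly; the polling from LoadDynamicPage would go inside the DocumentCompleted handler.

using System;
using System.Threading;
using System.Windows.Forms;

static class Program
{
    static void Main()
    {
        string html = null;

        // WebBrowser is an ActiveX control: it needs an STA thread and a message pump.
        var thread = new Thread(() =>
        {
            var browser = new WebBrowser { ScriptErrorsSuppressed = true };
            browser.DocumentCompleted += (s, e) =>
            {
                // In a real conversion, start the change-polling logic here and only
                // exit once the HTML has stopped changing (as in LoadDynamicPage).
                html = browser.DocumentText;
                Application.ExitThread();   // stop the message loop below
            };
            browser.Navigate("https://www.google.com/#q=where+am+i");
            Application.Run();              // pump messages until ExitThread is called
        });
        thread.SetApartmentState(ApartmentState.STA);
        thread.Start();
        thread.Join();

        Console.WriteLine(html ?? "(nothing retrieved)");
    }
}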
Your web-browser code looks reasonable: wait for something, then grab the current content. Unfortunately there is no official "I'm done executing JavaScript, feel free to steal content" notification from either the browser or the JavaScript.
Some sort of active wait (not Sleep but a Timer) may be necessary, and it will be page-specific. Even if you use a headless browser (e.g. PhantomJS) you'll have the same issue.
I have implemented an IE extension using C++. Its function is to inject JavaScript into the web page's head tag whenever the extension icon is clicked. I have used the execScript method for the script injection.
It works fine, but when I refresh the web page, click on any link on the page, or enter another URL, the injected script vanishes.
I don't want the script to vanish; I want it to persist in the web browser.
How can I achieve that? I am new to IE extension development, any help would be highly appreciated.
Thanks.
STDMETHODIMP CBlogUrlSnaggerAddIn::Exec(
    const GUID *pguidCmdGroup, DWORD nCmdID,
    DWORD nCmdExecOpt, VARIANTARG *pvaIn, VARIANTARG *pvaOut)
{
    HRESULT hr = S_OK;
    CComPtr<IDispatch> spDispDoc;
    hr = m_spWebBrowser->get_Document(&spDispDoc);
    if (SUCCEEDED(hr) && spDispDoc)
    {
        CComPtr<IHTMLDocument2> spHTMLDoc;
        hr = spDispDoc.QueryInterface<IHTMLDocument2>(&spHTMLDoc);
        if (SUCCEEDED(hr) && spHTMLDoc)
        {
            VARIANT vrt = {0};
            CComQIPtr<IHTMLWindow2> win;
            hr = spHTMLDoc->get_parentWindow(&win);
            CComBSTR bstrScript = L"function fn() {alert('helloooo');}var head = document.getElementsByTagName('head')[0],script = document.createElement('script');script[script.innerText ? 'innerText' : 'textContent'] = '(' + fn + ')()';head.appendChild(script);head.parentNode.replaceChild(script,'script');";
            CComBSTR bstrLanguage = L"javascript";
            HRESULT hrexec = win->execScript(bstrScript, bstrLanguage, &vrt);
        }
    }
    return hr;
}
Instead of writing the execScript code in the Exec event, try adding that piece of code in the OnDocumentComplete method. Use the sink map to set up the event handling. A sample is provided below.
BEGIN_SINK_MAP(CMyClass)
    SINK_ENTRY_EX(1, DIID_DWebBrowserEvents2, DISPID_DOCUMENTCOMPLETE, OnDocumentComplete)
END_SINK_MAP()
Implement the DocumentComplete in your class file.
void STDMETHODCALLTYPE CMyClass::OnDocumentComplete(IDispatch *pDisp, VARIANT *pvarURL)
{
    // Inject the scripts here
}
Update:
I haven't tried this, but I guess the DownloadBegin event would serve your purpose. It is similar to the DocumentComplete event mapped above; the only thing that differs is the DISPID_DOWNLOADBEGIN. Map a corresponding handler method to that DISPID and give it a try.
BEGIN_SINK_MAP(CMyClass)
    SINK_ENTRY_EX(1, DIID_DWebBrowserEvents2, DISPID_DOWNLOADBEGIN, OnDocumentLoad)
END_SINK_MAP()
The handler is similar to the DocumentComplete handler:
void STDMETHODCALLTYPE CMyClass::OnDocumentLoad(IDispatch *pDisp, VARIANT *pvarURL)
{
    // Inject scripts here
}
http://msdn.microsoft.com/en-us/library/cc136547(v=vs.85).aspx
I am injecting this JS to get the navigation destination URL at run time. I am getting JS exceptions. What is wrong?
static String NAVIGATING_FUNCTION = "window.onbeforeunload = function(){ window.external.notify(' + location.href + ''); };";
webView.InvokeScript("eval", new String[] { NAVIGATING_FUNCTION });
Also, please tell me how to cancel the navigation and return to the previous page using this JS.
If I understand correctly (I haven't used the WebView yet), you want to report the current location before leaving the page and then cancel the navigation?
You could try this:
window.onbeforeunload = function(){ window.external.notify(location.href); return false; };
I am not sure why you had quotes in there; location.href is evaluated inside that web view, and it is passed to notify as a string.
Also, take a look at the responses to this thread. One of them says:
Did you get your problem solved? In the Release Preview, you need to
add some code that looks like this:
List<Uri> allowedUris = new List<Uri>();
allowedUris.Add(e.Uri);
allowedUris.Add(new Uri("http://www.bing.com"));
Browser.AllowedScriptNotifyUris = allowedUris;
Another thing, this thread: Can I get the current HTML or Uri for the Metro WebView control?, talks about the LoadCompleted event handler.
See if that helps.
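For completeness, a rough sketch of the C# side, assuming the Windows 8 (Metro) WebView with its LoadCompleted, ScriptNotify and InvokeScript members; this is untested and only illustrates the wiring:

// Install the onbeforeunload hook once the page has loaded.
webView.LoadCompleted += (s, e) =>
{
    webView.InvokeScript("eval", new[]
    {
        "window.onbeforeunload = function () { window.external.notify(location.href); };"
    });
};

// Pick up whatever string the page passed to window.external.notify(...).
webView.ScriptNotify += (s, e) =>
{
    System.Diagnostics.Debug.WriteLine("Leaving page, current URL: " + e.Value);
};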