With WebBrowser control, invoking a javascript function doesn't fire DocumentCompleted event - javascript

In my C# code, I'm using a WebBrowser control in a Form to display my data.
Initially I fill in my data through a
webViewer.DocumentText = someHtml; (1)
This one produces the sequence of Navigating, Navigated, and DocumentCompleted events, as expected.
Then at a later stage I make some calls to
webViewer.Navigate(new Uri("javascript:myFunction(x,y)")); (2)
Since I'm on the Compact Framework, I understand this is the only way to call some JavaScript (as seen in this discussion).
Now, in terms of events, I only receive the Navigating event; neither Navigated nor DocumentCompleted ever fires. WebBrowser.ReadyState will also stay at "Loading" forever. I'm a bit puzzled by this behaviour.
I've dumbed down myFunction to the simplest possible one to prove that this behaviour isn't related to the actual content of the function (e.g. an endless loop or a critical error).
While myFunction is indeed called, the ReadyState being stuck at Loading prevents further updates of the DocumentText as done in (1), which blocks further normal execution.
I imagine I might have missed something about the use of this control, maybe there's a way to reset its state if it doesn't happen automatically, but I'm running out of ideas. Any hint?

Related

window.pageYOffset is sometimes 0 instead of a valid value

I'm having a problem where sometimes when my JavaScript in a Web page gets the value of window.pageYOffset it is inexplicably 0 even though I know the user is viewing the middle of the document and its value should be huge, like 650000. Note that a huge percentage of the time I get a reasonable value. But sometimes it's zero and sometimes it's a seemingly random small value, like in the 6000 range when I'm expecting 650000.
Rather than post a bunch of code, I'd like to ask some general questions to help me figure out where to begin to look.
This page is being displayed in an iOS WKWebView (though this problem can manifest in a similar context in an Android app). JavaScript methods in my app can be invoked in one of several ways:
When my app is notified that the page has finished loading (via a delegate method), it invokes a JavaScript method using evaluateJavaScript from the Objective-C code.
My app can call evaluateJavaScript at other times, not just when the page finishes loading.
A JavaScript function may be called as the result of a timer firing.
A JavaScript function may be called as the result of a scroll event.
I have been operating under the assumption that the JavaScript code on the page is always operating in a single thread. That is, I don't have a situation where a timer firing, a scroll event happening, or even a call from the Objective-C code (using evaluateJavaScript) is interrupting anything that might be happening in the JavaScript runtime. So I shouldn't have to worry about interrupting some system-level activity that is modifying window.pageYOffset while I'm trying to access it.
So that's my first question: Am I correct that someone outside my code is invoking my JavaScript methods on a single thread and not monkeying with the DOM on another thread?
My second question is related: My code modifies the DOM, adding and removing div elements. I've been assuming that those modifications are synchronous -- if I insert an element with insertAfter or insertBefore, I expect that the child/parent/sibling pointers are accurate upon return, and I assume that I can immediately access things like the top and left values on some other element and they will have been updated to reflect the inserted/removed element. The point being that I shouldn't have to "wait" for the DOM to "stabilize" after making changes and before checking something like window.pageYOffset. Is this correct?
One more clue: To help mitigate this, I have had good luck simply testing window.pageYOffset for zero at the top of a function. If it is zero, I call myself back on a timer (with just a 1 msec delay). If I do that long enough, it will eventually be non-zero.
Perhaps after reading all this, none of the detail is relevant and you know the answer to the basic question: Why do I sometimes get an invalid value (usually 0) in window.pageYOffset when the same line of code gives a valid value at other times?
The problem turned out to be that there is a period of time, between when I give the WKWebView a new HTML string to render and when it tells me it has finished loading the page, during which the existing page is still active. During this time, timers continue to fire, but some document and window properties will not be valid.
Because of the difficulty of debugging JavaScript running in this environment, I was tricking myself into thinking "eventually pageYOffset becomes valid" when in fact what I was seeing was that the new page eventually finished loading, and it was this new page that was generating valid calls to my timer functions.
In my particular case (may not work for everyone) I am able to detect the value of window.pageYOffset at the top of my timer function and if it is 0, call myself back after a brief delay. This allows me to handle the case where, for some reason, window.pageYOffset is just not yet valid (my test will eventually pass and my timer function will continue as usual) and the case where everything is in the process of being thrown away in favor of the new page (in which case the timer will not fire because the page goes away).
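The retry-on-zero workaround described above can be sketched as a small polling helper. This is a minimal sketch, not the asker's actual code: getValue and isValid are placeholders for reading window.pageYOffset and rejecting zero, and delayMs stands in for the brief delay mentioned in the answer.

```javascript
// Minimal sketch of the "retry until the value looks valid" workaround.
// getValue and isValid are hypothetical placeholders for reading
// window.pageYOffset and checking it against zero.
function pollUntilValid(getValue, isValid, onValid, delayMs) {
  const value = getValue();
  if (isValid(value)) {
    onValid(value); // the property finally reports a usable value
  } else {
    // Not valid yet (or the page is being torn down); try again shortly.
    // If the old page is discarded, its timers die with it, so this
    // simply stops firing in the "page being replaced" case.
    setTimeout(() => pollUntilValid(getValue, isValid, onValid, delayMs), delayMs);
  }
}
```

In the browser, getValue would read window.pageYOffset and isValid would reject 0; the self-rescheduling timer covers both the "not yet valid" case and the "page is going away" case, exactly as the answer describes.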

React view not successfully listening to store

I have a React controller-view that subscribes to two different stores, and initiates calls that will cause each of them to emit:
componentWillMount() {
  UserStore.addChangeListener(this._onChange);
  EventStore.addChangeListener(this._onChange);
}

componentDidMount() {
  UserService.getUser(this.props.params.slug);
  EventService.fetchEventsForUser(this.props.params.slug);
}
For the UserStore listener, this works fine. The user is fetched from the API, which comes back to the UserStore, which emits a change. If I put console.log statements in, I can see all of this flow in exactly the way I'd expect, ending in the _onChange function of my controller-view.
But. For some inexplicable reason, the change listener added to the EventStore isn't firing. The events are correctly fetched from the API and come back to the EventStore method that I expect, which correctly updates EventStore's internal state, and it even fires emitChange() just fine (I've verified with copious console.log statements). But _onChange in the controller-view is never called!
I've been trying to troubleshoot this for close to double-digit hours now. I don't feel any closer to an answer than when I started. I don't even know where to look.
Other notes
I tried removing the UserStore listener from my controller-view, just setting static content in UserStore. I thought maybe there could be a race condition between the two updating. Nothing changed.
I tried splitting my controller-view's _onChange into two separate functions, again with the suspicion that they weren't playing nicely together. The console.logs showed that _onUserChange was called as expected, but _onEventChange was never called at all.
This used to work! In the master branch, which the pull request I've been linking to branches from, it works! Everything's structured a bit differently there: the controller-view I've been linking to only listens to changes in UserStore, while its child component listens for changes in EventStore. If I duplicate this structure, though, it still doesn't work. In my new branch, whatever component is listening to changes in EventStore seems to be completely deaf to such changes.
Also on master, EventStore did much more computation than it does now. I very much doubt that it being slow would make its listeners more likely to hear it, though. UserStore is equivalently simple, and its listeners have no problem hearing it.
Sometimes it does work. About 1 page load in 100, it works. The component updates its state, and the server-loaded content actually displays. I haven't been able to reproduce this or determine what causes it.
Another component in this app also listens for EventStore changes, and also rarely hears them.
You can see it failing to work at http://life.chadoh.com/#/chadoh. If you then visit http://life.chadoh.com/#/chadoh/week/0, you'll see that an event has correctly been loaded from the server. But for some reason the listeners were not alerted. If you refresh the page on http://life.chadoh.com/#/chadoh/week/0, the sidebar will never be updated with events (the difference is whether it's initially rendered before or after EventStore has already been loaded with data).
Changing the get events() method in EventStore to a regular events() has no effect.
Putting the EventService.fetchEvents call inside a setTimeout does not yield better results.
Putting the listen calls inside the constructor does not yield better results.
I see EventsStore extends BaseStore which extends events.EventEmitter. Is that the node.js standard library event emitter?
If so, you may have a naming conflict on _events.
Your store uses _events to hold data: https://github.com/chadoh/life/blob/no-dates/src/stores/EventStore.js#L10
But EventEmitter also uses _events to store event handlers: https://github.com/joyent/node/blob/d13d7f74d794340ac5e126cfb4ce507fe0f803d5/lib/events.js#L140-L186
As a result, event data may overwrite the collection of event handlers, causing handlers to never fire (because they aren't there!).
You could try using a different key to hold data (eg _eventData) and see if that solves your problem!

DOM MutationObservers: How to support this one important use of DOM3 Mutation Events?

I get wordy sometimes: tl;dr: read the bold text.
The motivation behind deprecating Mutation Events is well understood; their efficacy in achieving many types of tasks is questionable.
However, today, I have discovered a use for them that is highly dependent on those very same undesired properties.
I will first present the question, and then present the reasons that lead me to the question, because the question will be absurd without it.
Is it possible to use the new Mutation Observers in a way that we can have the VM stop at the instant of the change (like the DOM3 Mutation Events do), rather than report it to me after the fact?
Basically, the very thing that makes the Mutation Observer performant and "reasonable" is its asynchronicity, which means (necessarily, it seems) throwing away the stack, pushing a record mutation to a list, and delivering the list to qualified Observers at the next tick or several ticks later.
What I am after is precisely that stack trace of the DOM3 Mutation Event. I really hope this will work: the Mutation Event callback (which I am allowed to write) will have a stack trace that leads me back to the actual code that created the element I'm listening for. So in theory I'd write a Mutation Event handler like this:
// NOT in an onload cb
$("div#haystack").on('DOMNodeInserted', function(evt) {
  if (is_needle(evt.target)) {
    report(new Error().stack); // please, Chrome, tell me what code created the needle
  }
});
This gives me the golden answer.
It seems that Mutation Observers will make it impossible to extract this information. What, then, am I to do once Mutation Events are completely taken out? They have been deprecated for a while now.
Now, to explain a little better the real actual circumstances, and why this matters.
I have been trying to kill a bug which I describe here: I have built a full-DOM serializer which nicely spits back out every element that exists on the webpage, and in comparing them, the broken page and the working page are identical. I have tested this and it is pretty thorough: it captures every little thing that's different. Whatever hovery thing my mouse happens to be over, the CSS class that gets consequently set will be reflected in the HTML dump. Any text of any form on the page will show up if you search for it (provided it doesn't span across elements). All inline JS (and, more importantly, all differences between inline JS) is present.
I have then gone on to verify that the broken page is missing several event handlers. So none of the clickable items respond to hover or clicks, and therefore no useful work can be done on the interactive form. This is not known to be the only problem, but it does fully explain the behavior. Given that the DOM has no differences in inline JS that explains the difference in behavior, then it must be the case that either the content of the linked resources or the invisible properties of elements (event handlers being in this category) are causing the difference in behavior.
Now I know which elements are supposed to have handlers, but I know not where in the comically large code base (ballpark: 200K lines of JS all loaded as one resource, assembled by several M lines of Perl serverside code) lies the code that assigns the events.
I have tried JS methods to watch modifications of object properties, such as this one (there are many, but all work on the same principle of defining setters and getters), which works the first time and then breaks the app afterward. Apparently assigning setters and getters causes the system to stop functioning. It's not clear to me how I can take that approach of watching property assignments to the point where I can get a list of code points that hit a specific element. It might be feasible, but surely not if I can only fire it once and it breaks everything thereafter.
So watching variables with JS is out.
I might be able to manually instrument jQuery itself, so that when my is_needle() succeeds on the element processed by jQuery, I log all event-related functions performed by jQuery on that element. This is dreadful, and I will resort to this if my Mutation Observer approach fails.
There are yet more ways to skin the cat of course. I could use the handy getEventListeners() on my target element when it is working to get the list of event listener functions that are on it, and then look at the code there, and search the code base to find those functions, and then analyze the code to find out all the places there those functions are inserted into event handlers. That is actually pretty straightforward.
Have you considered simply instrumenting .addEventListener function calls one way or another, e.g. via debugger breakpoints or by modifying the DOM element prototype to replace it with a wrapper method? This would be browser-specific but should be sufficient for your debugging needs.
You also might want to try firefox's tracer, available in nightlies I think. It basically records function execution without the need to use breakpoints or instrumenting code.
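The first suggestion, wrapping addEventListener so every registration records a stack trace, can be sketched as below. This assumes a platform with a standard EventTarget (browsers, or Node 15+); in a browser you might patch Element.prototype or EventTarget.prototype depending on which version you target.

```javascript
// Record every addEventListener call along with the stack that made it,
// so you can search a large code base for the code that binds a handler.
const registrations = [];
const originalAdd = EventTarget.prototype.addEventListener;

EventTarget.prototype.addEventListener = function (type, listener, options) {
  registrations.push({
    type,
    target: this,
    stack: new Error().stack, // points back at the code doing the binding
  });
  return originalAdd.call(this, type, listener, options);
};

// Any later registration is now captured:
const target = new EventTarget();
target.addEventListener('click', () => {});
// registrations[0].type is 'click', and registrations[0].stack names
// the file and line that performed the binding
```

The wrapper delegates to the original method, so handlers still fire normally; you only gain a searchable log of who bound what.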

Debugging JS/CoffeeScript code: Events, Callbacks etc

Recently I've been finding it difficult to understand what's happening in a CoffeeScript/Backbone app. It's hard to trace what's going on without a very slow step-through. The problem, I think, is: I know an event is triggered (a Backbone view event), but I don't know which functions are called because of it. There may be more than one. I may not even know in which view partial the event is defined (so I can't put a breakpoint?)
Is there a debugger which plots the execution of the program as a graph, so that I can zoom into what I need, or maybe something I can use to "visualize" the execution of my code? If not, what should I be looking out for? I'm not sure where to put a breakpoint, since I don't know where some events are triggered. And sometimes I find it hard to understand why stepping through the code jumps here and there; maybe it's multiple events and their handlers executing?
Everything in Backbone (Views, Models, Collections, Routers) extends Backbone.Events. That means they have an _events property that contains each bound event (e.g. change) with an array of their subscribers.
In order to access this open your javascript console in chrome, firefox or safari (or anything but IE) and enter the name of a globally accessible instantiated object with ._events at the end. E.g.
products._events
After pressing enter you should be able to expand this and see what is published and subscribed.

DOM input events vs. setTimeout/setInterval order

I have a block of JavaScript code running on my page; let's call it func1. It takes several milliseconds to run. While that code is running, the user may click, move the mouse, enter some keyboard input, etc. I have another block of code, func2, that I want to run after all of those queued-up input events have resolved. That is, I want to ensure the order:
func1
All handlers bound to input events that occurred while func1 was running
func2
My question is: Is calling setTimeout func2, 0 at the end of func1 sufficient to guarantee this ordering, across all modern browsers? What if that line came at the beginning of func1—what order should I expect in that case?
Please back up your answers with either references to the relevant specs, or test cases.
Update: It turns out that no, it's not sufficient. What I failed to realize in my original question was that input events aren't even added to the queue until the current code block has been executed. So if I write
// time-consuming loop...
setTimeout func2, 0
then only after that setTimeout is run will any input events (clicks, etc.) that occurred during the time-consuming loop be queued. (To test this, note that if you remove, say, an onclick callback immediately after the time-consuming loop, then clicks that happened during the loop won't trigger that callback.) So func2 is queued first and takes precedence.
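The core queueing behaviour, that a zero-delay timeout never runs before the current synchronous block finishes, no matter where in the block it was scheduled, can be seen outside the browser too. A minimal sketch (func1/func2 named as in the question; input events themselves can't be simulated here):

```javascript
const order = [];

function func1() {
  // Schedule func2 at the *beginning* of func1...
  setTimeout(function func2() { order.push('func2'); }, 0);
  // ...then do time-consuming synchronous work.
  for (let i = 0; i < 1e6; i++) {}
  order.push('func1 done');
}

func1();
order.push('more sync code');
// The timer can only fire once the call stack is empty, so the final
// order is always: 'func1 done', 'more sync code', 'func2'.
```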
Setting a timeout of 1 seemed to work around the issue in Chrome and Safari, but in Firefox, I saw input events resolving after timeouts as high as 80 (!). So a purely time-based approach clearly isn't going to do what I want.
Nor is it sufficient to simply wrap one setTimeout ... 0 inside of another. (I'd hoped that the first timeout would fire after the input events queued, and the second would fire after they resolved. No such luck.) Nor did adding a third, or a fourth, level of nesting suffice (see Update 2 below).
So if anyone has a way of achieving what I described (other than setting a timeout of 90+ milliseconds), I'd be very grateful. Or is this simply impossible with the current JavaScript event model?
Here's my latest JSFiddle testbed: http://jsfiddle.net/EJNSu/7/
Update 2: A partial workaround is to nest func2 inside of two timeouts, removing all input event handlers in the first timeout. However, this has the unfortunate side effect of causing some—or even all—input events that occurred during func1 to fail to resolve. (Head to http://jsfiddle.net/EJNSu/10/ and try rapidly clicking the link several times to observe this behavior. How many clicks does the alert tell you that you had?) So this, again, surprises me; I wouldn't think that calling setTimeout func2, 0, where func2 sets onclick to null, could prevent that callback from being run in response to a click that happened a full second ago. I want to ensure that all input events fire, but that my function fires after them.
Update 3: I posted my answer below after playing with this testbed, which is illuminating: http://jsfiddle.net/TrevorBurnham/uJxQB/
Move the mouse over the box (triggering a 1-second blocking loop), then click multiple times. After the loop, all the clicks you performed play out: The top box's click handler flips it under the other box, which then receives the next click, and so on. The timeout triggered in the mouseenter callback does not consistently occur after the click events, and the time it takes for the click events to occur varies wildly across browsers even on the same hardware and OS. (Another odd thing this experiment turned up: I sometimes get multiple jQuery mouseenter events even when I move the mouse steadily into the box. Not sure what's going on there.)
I think you are on the wrong track with your experiments. One problem is of course that you are fighting different message loop implementations here. The other (the one you didn't recognize it seems) is different double click handling. If you click the link twice you won't get two click events in MSIE - it's rather one click event and a dblclick event (for you that looks like the second click was "swallowed"). All other browsers seem to generate two click events and a dblclick event in this scenario. So you need to handle dblclick events as well.
As message loops go, Firefox should be easiest to handle. From all I know, Firefox adds messages to the queue even when JavaScript code is running. So a simple setTimeout(..., 0) is sufficient to run code after the messages are processed. You should refrain from hiding the link after func1() is done however - at this point clicks aren't processed yet and they won't trigger event handlers on a hidden element. Note that even a zero timeout doesn't get added to the queue immediately, current Firefox versions have 4 milliseconds as the lowest possible timeout value.
MSIE is similar, only that there you need to handle dblclick events as I mentioned before. Opera seems to work like that as well but it doesn't like it if you don't call event.preventDefault() (or return false from the event handler which is essentially the same thing).
Chrome however seems to add the timeout to the queue first and only add incoming messages after that. Nesting two timeouts (with zero timeout value) seems to do the job here.
The only browser where I cannot make things work reliably is Safari (version 4.0 on Windows). The scheduling of messages seems random there, looks like timers there execute on a different thread and can push messages into the message queue at random times. In the end you probably have to accept that your code might not get interrupted on the first occasion and the user might have to wait a second longer.
Here is my adaptation of your code: http://jsfiddle.net/KBFqn/7/
If I'm understanding your question correctly, you have a long-running function but you don't want to block the UI while it is running? After the long-running function is done you then want to run another function?
If so, instead of using timeouts or intervals you might want to use Web Workers. All modern browsers, including IE10, should support Web Workers.
I threw together an example page (couldn't put it on jsfiddle since Web Workers rely on an external .js file that has to be hosted on the same origin).
If you click A, B, C or D a message will be logged on the right. When you press start a Web Worker starts processing for 3 seconds. Any clicks during those 3 seconds will be immediately logged.
The important parts of the code are here:
func1.js The code that runs inside the Web Worker
onmessage = function (e) {
  var result,
      data = e.data, // get the data passed in when this worker was called
      // data now contains the JS literal {theData: 'to be processed by func1'}
      startTime;

  // busy-wait for a second
  startTime = (new Date).getTime();
  while ((new Date).getTime() - startTime < 1000) {
    continue;
  }

  result = 42;

  // return our result
  postMessage(result);
};
The code that invokes the Web Worker:
var worker = new Worker("func1.js");

// this is the callback which will fire when "func1.js" is done executing
worker.onmessage = function(event) {
  log('Func1 finished');
  func2();
};

worker.onerror = function(error) {
  throw error;
};

// send some data to be processed
log('Firing Func1');
worker.postMessage({theData: 'to be processed by func1'});
At this point, I'm prepared to say that, regrettably, there is no solution to this problem that will work under all browsers, in every scenario, every time. In a nutshell: If you run a JavaScript function, there's no way to reliably distinguish between input events that the user triggered during that time and those the user triggered afterward. This has interesting implications for JS developers, especially those working with interactive canvases.
My mental model of how JS input events work was off the mark. I'd thought that it went
The user clicks a DOM element while code is running
If that element has a click event handler, the callback is queued
When all blocking code has executed, the callback is run
However, my experiments, and those contributed by Wladimir Palant (thanks, Wladimir) show that the correct model is
The user clicks a DOM element while code is running
The browser captures the coordinates, etc. of the click
Some time after all blocking code has executed, the browser checks which DOM element is at those coordinates, then runs the callback (if any)
I say "some time after" because different browsers seem to have very different behaviors for this—in Chrome for Mac, I can set a setTimeout func2, 0 at the end of my blocking code and expect func2 to run after the click callbacks (which run only 1-3ms after the blocking code finished); but in Firefox, the timeout always resolves first, and the click callbacks typically happen ~40ms after the blocking code finished executing. This behavior is apparently beyond the purview of any JS or DOM spec. As John Resig put it in his classic How JavaScript Timers Work:
When an asynchronous event occurs (like a mouse click, a timer firing, or an XMLHttpRequest completing) it gets queued up to be executed later (how this queueing actually occurs surely varies from browser-to-browser, so consider this to be a simplification).
(Emphasis mine.)
So what does this mean from a practical standpoint? This is a non-issue as the execution time of blocking code approaches 0. Which means that this problem is yet another reason to hew to that old advice: Break up your JS operations into small chunks to avoid blocking the thread.
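The "break work into small chunks" advice can be sketched like this (the names are illustrative, not from the question): process a batch synchronously, then yield to the event queue with a zero-delay timeout so queued clicks and scrolls can be handled between chunks.

```javascript
// Process a large array in small synchronous chunks, yielding to the
// event queue between chunks so queued input events can be handled.
function processInChunks(items, handleItem, chunkSize, done) {
  let index = 0;
  function step() {
    const end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      handleItem(items[index]);
    }
    if (index < items.length) {
      setTimeout(step, 0); // yield: queued events run before the next chunk
    } else {
      done();
    }
  }
  step();
}

// Example: square the numbers 0..9, three at a time.
const squares = [];
processInChunks(
  [...Array(10).keys()],
  (n) => squares.push(n * n),
  3,
  () => console.log('done, last square:', squares[squares.length - 1]) // 81
);
```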
Web workers, as Useless Code suggested, are even better when you can use them—but be aware that you're foregoing compatibility with Internet Explorer and all major mobile browsers.
Finally, I hope browser-makers will move forward on standardizing input events in the future. This is one of many quirks in that area. I hope Chrome will lead the way to the future: excellent thread isolation, low event latency, and relatively consistent queueing behavior. A web developer can dream, can't he?
You can use dispatchEvent with a custom event name at the end of your function. This won't work on IE, but is still possible; just use fireEvent instead.
Take a look at this:
http://jsfiddle.net/minitech/NsY9V/
Click "start the long run", and click on the textbox and type in it. Voilà!
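The dispatchEvent approach can be sketched with the standard EventTarget API (browsers, or Node 15+; old IE needed fireEvent, as the answer notes). One caveat worth knowing: dispatchEvent invokes matching listeners synchronously, so this signals completion rather than reordering the event queue.

```javascript
// Sketch: signal the end of a long-running function by dispatching a
// custom event that other code can listen for.
const bus = new EventTarget();
const logged = [];

bus.addEventListener('long-run-done', (event) => {
  logged.push('handled ' + event.type);
});

function longRun() {
  // ...time-consuming work would go here...
  logged.push('work finished');
  // dispatchEvent runs matching listeners synchronously:
  bus.dispatchEvent(new Event('long-run-done'));
}

longRun();
// logged is now ['work finished', 'handled long-run-done']
```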
You can make the event handlers check to see if a flag is set by func1; if so queue func2 if not already queued.
This may be either elegant or ugly depending on how specialized func2 is. (Actually it's probably just ugly.) If you choose this approach, you need some way to hook events, or alternatively your own bindEvent(event,handler,...) function which wraps the handler and binds the wrapped handler.
The correctness of this approach depends on all the events during func1 being queued at the same time. If this is not the case, you can either make func2 idempotent, or (depending on the semantics of func2) put an ugly "cannot be called again for N milliseconds" lock on it.
Please describe your scenario in more detail.
What you need to do:
Some time ago I needed something similar, so I built a simple JavaScript routine that serializes async calls into one synchronous sequence. Maybe you could use that approach, with one variant added.
For example, this is how it works:
first, register all async or sync routines
second, register the end callback
third, register the calls to the routines with your parameters
fourth, start the process
In your case it would be necessary to add one call routine, and that routine should be the unit of work (UoW) for the user's actions.
The main problem then is not calling the routines in order of execution, but tracking the changes made by the user:
first, register all async or sync routines
second, register the end callback
third, register the calls to the routines with your parameters
--register your first routine
--register BlockUi // perhaps so the view accepts no more changes
--register UiWriter // UoW of the changes made by the user
--register your last routine
fourth, start the process
In real code, that is one call to a dummy function:
function Should_Can_Serializer_calls()
{
    RegisterMethods(model);
    model.Queue.BeginUnitProcess(); // clear the execution stack, among other things
    model.Queue.AddEndMethod(SucessfullEnd); // callback to the end routine
    model.AbstractCall("func1", 1, "edu", 15, ""); // set the first routine to execute
    model.AbstractCall("BlockUi"); // track changes and user actions
    model.AbstractCall("UiWork"); // track changes and user actions
    model.AbstractCall("func2", "VALUE"); // set the second routine to execute
    model.Process(); // start the calls
}
The methods themselves need to be async; for that you could use this library: http://devedge-temp.mozilla.org/toolbox/examples/2003/CCallWrapper/index_en.html
So, what do you want to do?
