Can JavaScript talk to Selenium 2? - javascript

I know I can get Selenium 2's WebDriver to run JavaScript and get return values, but so much asynchronous stuff is happening that I would like JavaScript to talk to Selenium instead of the other way around. I have done some searching and haven't found anything like this. Do people just generally use implicitly_wait? That seems likely to fail, since it's not possible to time everything. A perfect example would be to let Selenium know when an XHR completed, or when an asynchronous animation with undetermined execution time finished.
Is this possible? We're using Selenium 2 with Python on Saucelabs.

You should look into the execute_async_script() method (JavascriptExecutor.executeAsyncScript in Java, IJavaScriptExecutor.ExecuteAsyncScript() in .NET), which allows you to wait for a callback function. The callback function is automatically appended to the arguments array in your JavaScript function. So, assuming you already have a JavaScript function on the page that waits for the condition you want, you could do something like the following (Java code below; C# and Python code should be similar):
String script = "var callback = arguments[arguments.length - 1];"
    + "callback(myJavaScriptFunctionThatWaitsUntilReady());";
driver.manage().timeouts().setScriptTimeout(15, TimeUnit.SECONDS);
((JavascriptExecutor) driver).executeAsyncScript(script);
It might be possible to be even more clever and pass the callback function directly to an event that returns the proper data. You can find more information on the executeAsyncScript() function in the project JavaDocs, and can find sample code for this in the project source tree. There's a great example of waiting for an XHR to complete in the tests in this file.
If this isn't yet available in the version of the Python bindings available for use with SauceLabs, I would expect it to be available before long. Admittedly, in a sense, this is pushing the "poll for desired state" from your test case into JavaScript, but it would make your test more readable.
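For instance, here is a hedged sketch of a script body you might hand to execute_async_script() to signal Selenium once all XHRs have settled. It assumes the page uses jQuery, whose jQuery.active counter tracks in-flight requests:
var callback = arguments[arguments.length - 1];
(function poll() {
    // jQuery.active is jQuery's count of in-flight XHRs.
    if (window.jQuery && jQuery.active === 0) {
        callback(true);        // tell Selenium the XHRs are done
    } else {
        setTimeout(poll, 100); // check again in 100ms
    }
})();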

Theoretically it is possible, but I would advise against it.
The solution would probably involve some jQuery on the site that sets a variable to true when the JavaScript processing has finished.
You would then set Selenium up to poll with getEval until this variable becomes true, and then do something in Selenium.
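For illustration only, the page-side half of that idea might look like this sketch (window.seleniumReady is a hypothetical flag name, and ajaxStop assumes jQuery):
window.seleniumReady = false;
jQuery(document).ajaxStop(function() {
    window.seleniumReady = true; // all XHRs have settled
});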
It would meet your requirements, but it's a really bad idea. If for some reason your jQuery doesn't set the trigger variable to true (or whatever state you expect), Selenium will sit there indefinitely. You could put a really long timeout on it, but then what would be the difference between that and just having Selenium do a getEval and wait for a specific element to appear?
It sounds like you are trying to overengineer your solution, and it will cause you more pain in the future with very few additional benefits.

Not to be overly blunt, but if you want your App to talk to your Test Runner, then you're doing it wrong.
If you need to wait for an XHR to finish, you could try displaying a spinner and then test that the spinner has disappeared to indicate a successful request.
In regards to the animation, when the animation has completed, maybe its callback could add a class indicating that the animation has finished and then you could test for the existence of that class.
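As a rough sketch (assuming jQuery, with a hypothetical element and class name), the animation callback might look like this:
$('#panel').fadeIn(400, function() {
    // A test can now wait for an element matching '#panel.animation-done'.
    $(this).addClass('animation-done');
});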

Testing animation with selenium is opening a can of worms. The tests can be quite brittle and cause many false positives.
The problem is that the calls are asynchronous, which makes it difficult to track the behaviour and the changes in the state of the page.
In my experience the asynchronous call can be so quick that the spinner is never displayed, and the page may skip a state entirely (or at least one that Selenium can detect).
Waiting for the state of the page to transition can make the tests less brittle, however the false positives cannot be removed entirely.
I recommend manual testing for animation.

Related

Getting callback to behave synchronously

I have an issue with JavaScript callbacks. I have a class hierarchy in which every class has a method getVisual() that returns a visual representation of the given object. All of them work nicely and synchronously. For one more class in the hierarchy, which I am implementing now to introduce a new feature, getVisual() needs to wait for an Image.onload() to finish before it can produce the visual representation. As the getVisual() methods of all the classes are synchronous, I have a problem, I guess. The only way I see is to either figure out a way to wait for onload to finish, which according to all the Stack Overflow articles I have read is not recommended, or to completely change my application architecture so that the caller of getVisual() only requests a visual and offers a method to be called once the visual has been created. I would really hate to change the whole architecture because of one single operation. Is there any way around it?
Cheers T
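To make the constraint concrete, here is a hedged sketch of the asynchronous variant the asker is trying to avoid (renderVisual and the callback parameter are hypothetical):
function getVisual(callback) {
    var img = new Image();
    img.onload = function() {
        // Only here is the image ready, so the result must be delivered
        // through a callback rather than a return value.
        callback(renderVisual(img));
    };
    img.src = 'visual.png';
}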

Dealing with stale elements when using WebDriver with Backbone.js

We are using Backbone.js and having issues when running our WebDriver tests. We are getting the following error:
org.openqa.selenium.StaleElementReferenceException: Error Message => 'Element does not exist in cache'
Our understanding is that this is caused when we are finding an element, and executing an action on that element (e.g. click()). The element that we have found has gone 'stale', and we suspect that element has been re-rendered or modified.
We have seen lots of solutions that we are not keen on:
Use Thread.Sleep(...). We don't want explicit sleeps in our code.
Using a retry strategy, either as a loop or by try-catching the StaleElementReferenceException. We feel this is not the right/clean solution, and it is prone to breaking in the future.
Some people are using WebDriverWait and waiting until some JavaScript function returns true. We have seen people wait for notifyWhenNoOutstandingRequests(callback) in Angular, but can't find anything obvious for Backbone.
We are hoping there is a clean solution that does not involve explicit sleeping, or some form of looping. Any thoughts?
I looked into WebDriverWaits a bit more and I think I've come up with a combination of expectations that works for us:
wait.until(refreshed(elementToBeClickable(...)));
The refreshed expectation is a wrapper for other expectations that deals with StaleElementReferenceException, and the elementToBeClickable expectation checks that the element is clickable. Interestingly, looking at the source for the built-in expectations, some of them handle StaleElementReferenceException themselves, while others (e.g. presenceOfElementLocated) don't and need to be wrapped in the refreshed expectation; I think that's what initially threw me off when I first looked at WebDriverWaits.
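If you do want a page-side signal in the spirit of Angular's notifyWhenNoOutstandingRequests, a hypothetical Backbone-side flag (all names invented here) might look like this:
var MyView = Backbone.View.extend({
    render: function() {
        this.$el.html(this.template(this.model.toJSON()));
        window.renderComplete = true; // a WebDriverWait can poll this flag
        return this;
    }
});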

Javascript Rule of thumb for delay length while using setTimeout() to allow a "loading" popup to appear

I'm using the setTimeout() function in JavaScript to allow a popup that says "loading" to be shown while I'm parsing some XML data. I found that at small enough delay values (below 10ms) the popup doesn't have time to appear before the browser freezes for a moment to do the actual work.
At 50ms, it has plenty of time, but I don't know how well this will translate to other systems. Is there some sort of "rule of thumb" that would dictate the amount of delay necessary to ensure a visual update without causing unnecessary delay?
Obviously, it'll depend on the machine on which the code is running etc., but I just wanted to know if there was anything out there that would give a little more insight than my guesswork.
The basic code structure is:
showLoadPopup();

var t = setTimeout(function()
{
    parseXML(); // real work
    hideLoadPopup();
}, delayTime);
Thanks!
UPDATE:
Turns out that parsing XML is not something Web Workers can usually do, since they don't have access to the DOM, the document, etc. So, in order to accomplish this, I actually found a different article here on Stack Overflow about parsing XML inside a Web Worker. Check out the page here.
By serializing my XML object into a string, I can then pass it into the Web Worker through a message post, and then, using the JavaScript-only XML parser that I found in the aforementioned link, turn it back into an XML object within the Web Worker, do the parsing needed, and then pass back the desired text as a string without making the browser hang at all.
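A minimal sketch of that hand-off, assuming a hypothetical worker file xml-worker.js that parses the serialized string and posts back the desired text (xmlDoc and displayResult are placeholders):
// main page
var worker = new Worker('xml-worker.js');
worker.onmessage = function(e) {
    hideLoadPopup();
    displayResult(e.data); // displayResult is a placeholder consumer
};
showLoadPopup();
worker.postMessage(new XMLSerializer().serializeToString(xmlDoc));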
Ideally you would not ever have to parse something on the client side that actually causes the browser to hang. I would look into moving this to an ajax request that pulls part of the parsed xml (child nodes as JSON), or look at using Web Workers or a client side asynchronous option.
There appears to be no rule of thumb for this question, simply because a setTimeout() delay was not the best solution for the problem. Using alternative methods to do the real meat of the work was the real solution, rather than using a setTimeout() call to allow for a visual update to the page.
Given options were:
HTML 5's new Web Worker option (alternative information)
Using an AJAX request
Thanks for the advice, all.

Call setTimeout without delay

You quite often see code like this in JavaScript libraries:
setTimeout(function() {
...
}, 0);
I would like to know why such wrapper code is used.
Very simplified:
Browsers are single-threaded, and this single thread (the UI thread) is shared between the rendering engine and the JS engine.
If the thing you want to do takes a lot of time (we're talking cycles here, but still) it could halt (pause) the rendering (flow and paint).
In browsers there also exists "the bucket", where all events are first put to wait for the UI thread to be done with whatever it's doing. As soon as the thread is done, it looks in the bucket and picks the first task in line.
Using setTimeout you create a new task in the bucket after the delay, and let the thread deal with it as soon as it's available for more work.
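A two-line demonstration of that queueing: even with a 0ms delay, the callback goes to the back of the bucket, so 'second' prints first:
setTimeout(function() { console.log('first... eventually'); }, 0);
console.log('second'); // runs before the timeout callback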
A story:
After a 0ms delay, a new task for the function is created and put in the bucket. At that exact moment the UI thread is busy doing something else, and there is another task in the bucket already. After 6ms the thread becomes available and gets the task in front of yours. Good, you're next. But what? That was one huge thing! It has been like foreeeeeever (30ms)!! At last, the thread is done with that and comes and gets your task.
Most browsers have a minimum delay that is greater than 0, so putting 0 as the delay means: put this task in the bucket ASAP. But telling the UA to put it in the bucket ASAP is no guarantee it will execute at that moment. The bucket is like the post office: it could be that there is a long queue of other tasks. Post offices are also single-threaded, with only one person helping all the task... sorry, customers, with their tasks. Your task has to get in line like everyone else.
If the browser doesn't implement its own ticker, it uses the tick cycles of the OS. Older browsers had minimum delays between 10-15ms. HTML5 specifies that if the delay is less than 4ms, the UA should increase it to 4ms. This is said to be consistent across browsers released in 2010 and onward.
See How JavaScript Timers Work by John Resig for more detail.
Edit: Also see What the heck is the event loop anyway? by Philip Roberts from JSConf EU 2014. This is mandatory viewing for all people touching front-end code.
There are a couple of reasons why you would do this:
There is an action you don't want to run immediately but do want to run at some near future time period.
You want to allow other previously registered handlers from a setTimeout or setInterval to run
When you want to execute the rest of your code without waiting for a long-running piece to finish, you put the long-running piece in an anonymous function passed to setTimeout. Otherwise the code after it will have to wait until it is done.
Example:
function callMe()
{
    for (var i = 0; i < 100000; i++)
    {
        document.title = i;
    }
}

var x = 10;
setTimeout(callMe, 0);

var el = document.getElementById('test-id');
el.innerHTML = 'Im done before callMe method';
That is the reason I use it.
Apart from the previous answers, I'd like to add another useful scenario I can think of: to "escape" from a try-catch block. A setTimeout delay from within a try-catch block is executed outside the block, and any exception will propagate to the global scope instead.
Perhaps the best example scenario: in today's JavaScript, with the more common use of so-called Deferreds/Promises for asynchronous callbacks, you are (often) actually running inside a try-catch.
Deferreds/Promises wrap the callback in a try-catch to be able to detect and propagate an exception as an error in the async chain. This is all good for functions that need to be in the chain, but sooner or later you're "done" (i.e. fetched all your ajax) and want to run plain non-async code where you don't want exceptions to be "hidden" anymore.
AFAIK Dojo, Kris Kowal's Q, MochiKit and the Google Closure lib use try-catch wrapping (not jQuery, though).
(On a couple of odd occasions I've also used the technique to restart singleton-style code without causing recursion, i.e. doing a teardown-restart in the same loop.)
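As a small sketch of that escape hatch (somePromise and handleResult are placeholders for a chain built with one of those libraries):
somePromise
    .then(handleResult)
    .catch(function(err) {
        // Rethrow outside the promise's try-catch so the error
        // surfaces in the global scope instead of being swallowed.
        setTimeout(function() { throw err; }, 0);
    });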
To allow any previously set timeouts to execute.

Is there any reason to use a synchronous XMLHttpRequest?

It seems most everyone does asynchronous requests with XMLHttpRequest but obviously the fact that there is the ability to do synchronous requests indicates there might be a valid reason to do so. So what might that valid reason be?
Synchronous XHRs are useful for saving user data. If you handle the beforeunload event you can upload data to the server as the user closes the page.
If this were done using the async option, then the page could close before the request completes. Doing this synchronously ensures the request completes or fails in an expected way.
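A hedged sketch of that pattern (the /save endpoint and unsavedData are placeholders):
window.addEventListener('beforeunload', function() {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/save', false); // third argument false = synchronous
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify(unsavedData));
});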
I think they might become more popular as HTML5 standards progress. If a web application is given access to web workers, I could foresee developers using a dedicated web worker to make synchronous requests, as Jonathan said, to ensure one request happens before another. With the current single-threaded situation, it is a less-than-ideal design because it blocks until the request is complete.
Update:
The text below hinted at, but was unsuccessful in delivering, the point that with the advent of better asynchronous request handling, there really is no reason to use synchronous requests unless you intend to purposely block the user from doing anything until a request is complete - which sounds malicious :)
Although this may sound bad, there may be times when it's important that a request (or series of requests) occur before a user leaves a page, or before an action is performed; blocking other code execution (e.g., preventing the back button) could possibly reduce errors/maintenance for a poorly designed system. That said, I've never seen it in the wild and stress that it should be avoided.
Libraries like Promise feign synchronicity by chaining processes via callbacks. This suits the majority of development needs, where the desire is to have ordered, non-blocking events that let the browser remain responsive for the user (good UX).
As stated in the Mozilla docs, there are cases where you have to use synchronous requests; however, a workaround using beacon (not available in IE/Safari) is also listed for such cases. While this is experimental, if it ever reaches standards acceptance, it could put a nail in the synchronous-request coffin.
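For comparison, the beacon workaround mentioned above might look like this hedged sketch (the /save endpoint and unsavedData are placeholders):
window.addEventListener('unload', function() {
    // Queues the data for delivery without blocking page unload.
    navigator.sendBeacon('/save', JSON.stringify(unsavedData));
});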
You'd want to perform synchronous calls in any sort of transaction-like processing, or wherever any order of operation is necessary.
For instance, let's say you want to customize an event to log you out after playing a song. If the logout operation occurs first, then the song will never be played. This requires synchronizing the requests.
Another reason would be when working with a WebService, especially when performing math on the server.
Example: Server has a variable with value of 1.
Step (1) Perform Update: add 1 to variable
Step (2) Perform Update: set variable to the power of 3
End Value: variable equals 8
If Step (2) occurs first, then the end value is 2, not 8; thus order of operation matters and synchronization is needed.
There are very few times that a synchronous call may be justified in a common real world example. Perhaps when clicking login and then clicking a portion of the site that requires a user to be logged in.
As others have said, it will tie up your browser, so stay away from it where you can.
Instead of synchronous calls, though, often users want to stop an event that is currently loading and then perform some other operation. In a way this is synchronization, since the first event is quit before the second begins. To do this, use the abort() method on the xml connection object.
I'd say that if you consider blocking the user's browser while the request completes acceptable, then sure use a synchronous request.
If serialization of requests is your aim, then this can be accomplished using async requests, by having the onComplete callback of your previous request fire the next in line.
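A minimal sketch of that chaining approach (the URL list and done callback are placeholders):
function getInOrder(urls, done) {
    if (urls.length === 0) { return done(); }
    var xhr = new XMLHttpRequest();
    xhr.open('GET', urls[0], true); // true = asynchronous
    xhr.onload = function() {
        getInOrder(urls.slice(1), done); // fire the next request in line
    };
    xhr.send();
}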
There are many real world cases where blocking the UI is exactly the desired behaviour.
Take an app with multiple fields, where some fields must be validated by an xmlhttp call to a remote server that takes as input the field's value along with the values of other fields.
In synchronous mode, the logic is simple, the blocking experienced by the user is very short and there is no problem.
In async mode, the user may change the values of other fields while the initial one is being validated. These changes will trigger other xmlhttp calls with values from the initial field not yet validated. What happens if the initial validation failed? Pure mess. If sync mode becomes deprecated and prohibited, the application logic becomes a nightmare to handle. Basically the application has to be rewritten to manage locks (e.g. disabling other items during validation). Code complexity increases tremendously. Failing to do so may lead to logic failures and ultimately data corruption.
Basically the question is: what is more important, non-blocked UI experience or risk of data corruption ? The answer should remain with the application developer, not the W3C.
I can see a use for synchronous XHR requests when a resource in a variable location must be loaded before other static resources in the page that depend on it can fully function. In point of fact, I'm implementing such an XHR request in a little sub-project of my own, where JavaScript resources reside in variable locations on the server depending on a set of specific parameters. Subsequent JavaScript resources rely on those variable resources, and such files MUST be guaranteed to load before the other, dependent files are loaded, thus making the application whole.
That idea really expands on vol7ron's answer. Transaction-based procedures are the only time synchronous requests should be made. In most other cases, asynchronous calls are the better alternative; after the call, the DOM is updated as necessary. In many cases, such as user-based systems, you could have certain features locked to unauthorized users until they have logged in; those features are then unlocked via a DOM update after the asynchronous call.
Finally, I agree with most individuals' points on the matter: wherever possible, synchronous XHR requests should be avoided, since the browser locks up during synchronous calls. When synchronous requests are implemented, they should be done in a manner where the browser would normally be locked anyway, say in the HEAD section before page loading actually occurs.
jQuery uses synchronous AJAX internally under some circumstances. When inserting HTML that contains scripts, the browser will not execute them. The scripts need to be executed manually. These scripts may attach click handlers. Assume a user clicks on an element before the handler is attached and the page would not function as intended. Therefore to prevent race conditions, synchronous AJAX would be used to fetch those scripts. Because synchronous AJAX effectively blocks everything else, it can be sure that scripts and events execute in the right order.
As of 2015, desktop JavaScript apps are becoming more popular. Usually in those apps, when loading local files (and loading them using XHR is a perfectly valid option), the load speed is so fast that there is little point in overcomplicating the code with async. Of course there might be cases where async is the way to go (requesting content from the internet, loading really big files or a huge number of files in a single batch), but otherwise sync works just fine (and is much easier to use).
Reason:
Let's say you have an ajax application which needs to do half a dozen HTTP GETs to load various data from the server before the user can do any interaction.
Obviously you want this triggered from onload.
Synchronous calls work very well for this without any added complexity to the code. It is simple and straightforward.
Drawback:
The only drawback is that your browser locks up until all data is loaded or a timeout happens. As for the ajax application in question, this isn't much of a problem because the application is of no use until all the initial data is loaded anyway.
Alternative?
However, many browsers lock up all windows/tabs while the JavaScript is busy in any one of them, which is a stupid browser design problem; as a result, blocking on possibly slow network GETs is not polite if it keeps users from using other tabs while waiting for the ajax page to load.
However, it looks like synchronous GETs have been removed or restricted in recent browsers anyway. I'm not sure if that's because somebody decided they were just always bad, or if browser writers were confused by the W3C Working Draft on the topic.
http://www.w3.org/TR/2012/WD-XMLHttpRequest-20120117/#the-open-method does make it look like (see section 4.7.3) you are not allowed to set a timeout when using blocking mode. That seems counterintuitive to me: whenever one does blocking IO it's polite to set a reasonable timeout, so why allow blocking IO but not with a user-specified timeout?
My opinion is that blocking IO has a vital role in some situations but must be implemented correctly. While it is not acceptable for one browser tab or window to lock up all other tabs or windows, that's a browser design flaw. Shame where shame is due. But it is perfectly acceptable in some cases for an individual tab or window to be non-responsive for a couple of seconds (i.e. using blocking IO/HTTP GET): for example, on page load, perhaps a lot of data needs to be loaded before anything can be done anyway. Sometimes properly implemented blocking code is the cleanest way to do it.
Of course equivalent functionality can be obtained in this case using asynchronous HTTP GETs, but what sort of goofy routine is required?
I guess I would try something along these lines:
On document load, do the following:
1: Set up 6 global "Done" flag variables, initialized to 0.
2: Execute all 6 background GETs (assuming the order doesn't matter).
Then, the completion callback for each of the 6 HTTP GETs would set its respective "Done" flag.
Also, each callback would check all the other done flags to see whether all 6 HTTP GETs had completed. The last callback to complete, upon seeing that all the others had finished, would then call the REAL init function, which would set everything up now that the data was all fetched.
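A compact sketch of that pattern, using a single counter in place of six separate flags (the URLs and realInit are placeholders):
var urls = ['/a', '/b', '/c', '/d', '/e', '/f'];
var remaining = urls.length;
urls.forEach(function(url) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true); // true = asynchronous
    xhr.onload = function() {
        remaining -= 1;
        if (remaining === 0) {
            realInit(); // the last callback to finish kicks things off
        }
    };
    xhr.send();
});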
If the order of the fetching mattered, or if the webserver were unable to accept multiple requests at the same time, then you would need something like this:
In onload(), the first HTTP GET would be launched.
In its callback, the second one would be launched.
In its callback, the third, and so on and so forth, with each callback launching the next HTTP GET. When the last one returned, it would call the real init() routine.
What happens if you make a synchronous call in production code?
The sky falls down.
No seriously, the user does not like a locked up browser.
I use it to validate a username, during the check that the username does not exist already.
I know it would be better to do this asynchronously, but then I would need different code for this particular validation rule. Let me explain better: my validation setup uses some validation functions, which return true or false depending on whether the data is valid.
Since the function has to return, I cannot use asynchronous techniques, so I just make the request synchronous and hope that the server answers promptly enough not to be too noticeable. If I used an AJAX callback, I would have to handle the rest of the execution differently from the other validation methods.
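A sketch of such a validator (the endpoint and response shape are hypothetical):
function isUsernameFree(name) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/check-username?name=' + encodeURIComponent(name), false); // synchronous
    xhr.send();
    // Must return true or false synchronously, like the other validators.
    return JSON.parse(xhr.responseText).available;
}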
Sometimes you have an action that depends on others. For example, action B can only be started once A is finished. The synchronous approach is usually used to avoid race conditions. Sometimes using a synchronous call is a simpler implementation than building complex logic to check every state of your asynchronous calls that depend on each other.
The problem with this approach is that you "block" the user's browser until the action is finished (until the request returns, finishes, loads, etc). So be careful when using it.
I use synchronous calls when developing code: whatever you did while the request was commuting to and from the server can obscure the cause of an error.
When it's working, I make it asynchronous, but I try to include an abort timer and failure callbacks, because you never know...
SYNC vs ASYNC: What is the difference?
Basically it boils down to this:
console.info('Hello, World!');

doSomething(function handleResult(result) {
    console.info('Got result!');
});

console.info('Goodbye cruel world!');
When doSomething is synchronous this would print:
Hello, World!
Got result!
Goodbye cruel world!
In contrast, if doSomething is asynchronous, this would print:
Hello, World!
Goodbye cruel world!
Got result!
Because the function doSomething is doing its work asynchronously, it returns before its work is done, so we only get the result after Goodbye cruel world! has been printed.
If we depend on the result of an async call, we need to place the dependent code in the callback:
console.info('Hello, World!');

doSomething(function handleResult(result) {
    console.info('Got result!');
    if (result === 'good') {
        console.info('I feel great!');
    }
    else {
        console.info('Goodbye cruel world!');
    }
});
As such, just the fact that two or three things need to happen in order is no reason to do them synchronously (though sync code is easier for most people to work with).
WHY USE SYNCHRONOUS XMLHTTPREQUEST?
There are some situations where you need the result before the called function completes. Consider this scenario:
function lives(name) {
    return (name !== 'Elvis');
}

console.info('Elvis ' + (lives('Elvis') ? 'lives!' : 'has left the building...'));
Suppose we have no control over the calling code (the console.info line) and need to change the function lives to ask the server... There is no way we can make an async request to the server from within lives and still have the response before lives completes, so we wouldn't know whether to return true or false. The only way to get the result before the function completes is by making a synchronous request.
As Sami Samhuri mentions in his answer, a very real scenario where you may need an answer from your server before your function terminates is the onbeforeunload event, as it's the last function from your app that will ever run before the window is closed.
I DON'T NEED SYNC CALLS, BUT I USE THEM ANYWAY AS THEY ARE EASIER
Please don't. Synchronous calls lock up your browser and make the app feel unresponsive. But you are right: async code is harder. There is, however, a way to make dealing with it much easier. Not as easy as sync code, but it's getting close: Promises.
Here is an example: two async calls should both complete successfully before a third piece of code may run:
var carRented = rentCar().then(function(car) {
    gasStation.refuel(car);
});

var hotelBooked = bookHotel().then(function(reservation) {
    reservation.confirm();
});

Promise.all([carRented, hotelBooked]).then(function() {
    // At this point our car is rented and our hotel booked.
    goOnHoliday();
});
Here is how you would implement bookHotel:
function bookHotel() {
    return new Promise(function(resolve, reject) {
        if (roomsAvailable()) {
            var reservation = reserveRoom();
            resolve(reservation);
        }
        else {
            reject(new Error('Could not book a reservation. No rooms available.'));
        }
    });
}
See also: Write Better JavaScript with Promises.
XMLHttpRequest is traditionally used for asynchronous requests. Sometimes (for debugging, or for specific business logic) you would like to change all or several of the async calls on one page to sync.
You would like to do it without changing everything in your JS code. The async/sync flag gives you that ability, and if your code is designed correctly, you need only change one line of code or the value of one var at execution time.
Firefox (and probably all non-IE browsers) does not support a timeout on async XHR.
Stackoverflow discussion
Mozilla Firefox XMLHttpRequest
HTML5 WebWorkers do support timeouts, so you may want to wrap a sync XHR request in a WebWorker with a timeout to implement async-like XHR-with-timeout behaviour.
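A hedged sketch of the worker side of that wrapper (the message shape is invented here); note that setting a timeout on a synchronous XHR is only allowed inside workers, not in documents:
// xhr-worker.js
self.onmessage = function(e) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', e.data.url, false); // synchronous, but off the UI thread
    xhr.timeout = e.data.timeout; // permitted in a worker, unlike in a document
    try {
        xhr.send();
        self.postMessage({ ok: true, body: xhr.responseText });
    } catch (err) {
        self.postMessage({ ok: false, error: String(err) });
    }
};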
I just had a situation where asynchronous requests for a list of URLs, called in succession using forEach (and a for loop), would cause the remaining requests to be cancelled. I switched to synchronous requests and they work as intended.
Synchronous XHR can be very useful for (non-production) internal tool and/or framework development. Imagine, for example, you wanted to load a code library synchronously on first access, like this:
get draw()
{
    if (!_draw)
    {
        let file;
        switch (config.option)
        {
            case 'svg':
                file = 'svgdraw.js';
                break;
            case 'canvas':
                file = 'canvasdraw.js';
                break;
            default:
                file = 'webgldraw.js';
        }

        var request = new XMLHttpRequest();
        request.open('GET', file, false);
        request.send(null);
        _draw = eval(request.responseText);
    }
    return _draw;
}
Before you get yourself in a tizzy and blindly regurgitate the evils of eval, keep in mind that this is only for local testing. For production builds, _draw would already be set.
So, your code might look like this:
foo.drawLib.draw.something(); //loaded on demand
This is just one example of something that would be impossible to do without sync XHR. You could load this library up front, yes, or use a promise/callback, but you could not load the lib synchronously without sync XHR. Think about how much this type of thing could clean up your code...
What you can do with this for tooling and frameworks (running locally) is limited only by your imagination. Though it appears imagination is a bit limited in the JavaScript world.
Using synchronous HTTP requests is a common practice in the mobile advertisement business.
Companies (aka "Publishers") that build applications often run ads to generate revenue. For this they install advertising SDKs into their app. Many exist (MoPub, Ogury, TapJob, AppNext, Google Ads AdMob).
These SDKs will serve ads in a webview.
When serving an ad to a user, it has to be a smooth experience, especially when playing a video. There should be no buffering or loading at any moment.
To solve this, precaching is used, where the media (pictures, videos, etc.) are loaded synchronously in the background of the webview.
Why not do it asynchronously?
This is part of a globally accepted standard
The SDK listens for the onload event to know when the ad is "ready" to be served to the user
With the deprecation of synchronous XMLHttpRequests, the ad business will most likely be forced to change this standard in the future, unless another way can be determined.
Well, here's one good reason. I wanted to do an HTTP request and then, depending on the result, call click() on an input type=file. This is not possible with asynchronous XHR or fetch: the callback loses the "user action" context, so the call to click() is ignored. Synchronous XHR saved my bacon.
onclick(event) {
    // here I can, but I don't want to.
    // document.getElementById("myFileInput").click();
    fetch("Validate.aspx", { method: "POST", body: formData, credentials: "include" })
        .then((response) => response.json())
        .then(function (validResult) {
            if (validResult.success) {
                // here, I can't.
                document.getElementById("myFileInput").click();
            }
        });
}
Because chrome.webRequest.*.addListener does not support asynchronous handlers.
