I'm trying to load successive URLs using JavaScript from the Firefox developer console.
So far, I've tried with different versions of this code:
function redirect() {
  var urls = ["http://www.marca.com", "http://www.yahoo.es"];
  for (i = 0; i < urls.length; i++) {
    setTimeout(location.assign(urls[i], 5000));
  }
}
But this code only redirects to the last URL in the array. What I want is for every page to be fully loaded before iterating to the next one.
I've also tried using window.onload, but with no luck either; it's always the last URL that is loaded.
I guess this must be something very basic (I'm new to javascript), but can't find any solution to this.
Any help or hints of what I'm doing wrong here would be very appreciated. Thanks in advance!
Inspecting the console after running your code shows that a request is sent to the first URL but is aborted as soon as the loop runs a second time, and the browser instead redirects to the last URL.
My suggestion would be to open the pages in different tabs, if that works for you.
You can do:
for (i = 0; i < urls.length; i++) {
  window.open(urls[i], "_blank");
}
This will open the pages in new tabs.
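If the pages must be visited one after another in the same tab instead, note two separate issues in the original snippet: setTimeout was handed the *result* of calling location.assign rather than a function, and the delay argument ended up inside the assign call. A minimal sketch of the corrected scheduling, with a hypothetical navigate callback so the timing logic stands on its own (keep in mind that in a normal page the first real navigation unloads the script, so later timers die with it; the pattern only fully works from a context that survives navigation, such as an extension):

```javascript
// Sketch: schedule each navigation with a staggered delay.
// `navigate` is a hypothetical callback standing in for location.assign.
function scheduleRedirects(urls, delayMs, navigate) {
  urls.forEach(function (url, i) {
    // pass a function to setTimeout, not the result of calling one
    setTimeout(function () { navigate(url); }, i * delayMs);
  });
}
// In the console, roughly:
// scheduleRedirects(["http://www.marca.com", "http://www.yahoo.es"], 5000,
//                   function (url) { location.assign(url); });
```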
I have been trying to download all the USA and Canada servers from the NordVPN website: https://nordvpn.com/ovpn/
Downloading them manually is time-consuming because I have to scroll down every time and identify each US-related server, so I wrote a simple JavaScript snippet that can be run in the Chrome DevTools console:
var servers = document.getElementsByClassName("mr-2");
var inc_servers = [];
for (var i = 0; i < servers.length; i++) {
  var main_server = servers[i];
  var server = servers[i].innerText;
  if (server.includes("ca")) {
    var parent_server = main_server.parentElement.parentElement;
    parent_server.querySelector(".Button.Button--primary.Button--small").click();
    inc_servers.push(server);
  }
}
console.log(JSON.stringify(inc_servers));
I also wrote a simple Python script that automatically clicks "Save" in the file dialog:
import pywinauto

while True:
    try:
        app2 = pywinauto.Application().connect(title=u'Save As', found_index=0)
        app2.SaveAs.Save.click()
    except Exception:
        pass
The script finds all the elements and works up to that point, but when the JavaScript clicks each element, maybe because there are too many of them, it returns an error:
VM15575:8 Throttling navigation to prevent the browser from hanging. See https://crbug.com/1038223. Command line switch --disable-ipc-flooding-protection can be used to bypass the protection
Is there a better alternative for my problem, or a way to fix the error message above? I tried running this command in my command prompt: switch --disable-ipc-flooding-protection
but it also returns an error: 'switch' is not recognized as an internal or external command,
operable program or batch file.
I only know basic JavaScript and Python. Thanks
So right off the bat: your script is simply triggering downloads too fast. Adding a small delay between each download keeps Chrome from throttling the navigations and lets your JavaScript finish.
var servers = document.getElementsByClassName("mr-2");
var inc_servers = [];
for (var i = 0; i < servers.length; i++) {
  var main_server = servers[i];
  var server = servers[i].innerText;
  if (server.includes("ca")) {
    var parent_server = main_server.parentElement.parentElement;
    // Add a 1 second delay between each download (fast enough on my
    // computer; might be too fast for yours.)
    await new Promise(resolve => setTimeout(resolve, 1000));
    parent_server.querySelector(".Button.Button--primary.Button--small").click();
  }
}
// Remove the logging; just tell the user that it worked.
console.log("Done downloading all files.");
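If your console rejects the top-level await, the same delay pattern can be factored into an async helper. clickWithDelay below is a hypothetical name, and the selector in the usage comment is the one assumed by the snippet above:

```javascript
// Sketch: the delay loop factored into an async helper, for consoles that
// reject top-level await. `clickWithDelay` is a hypothetical name.
async function clickWithDelay(elements, delayMs) {
  var sleep = function (ms) {
    return new Promise(function (resolve) { setTimeout(resolve, ms); });
  };
  for (var i = 0; i < elements.length; i++) {
    elements[i].click();
    await sleep(delayMs); // throttle so Chrome doesn't block the navigations
  }
}
// In the console, roughly:
// clickWithDelay(
//   Array.from(document.querySelectorAll(".Button--primary.Button--small")),
//   1000);
```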
This is more of a temporary solution, but this script seems like it only needs to be run once, so it'll work for now.
(your python code runs fine. Nothing to do there)
Hope this helped.
I'm very new to JavaScript and Puppeteer as well.
I'm trying to grab some innerHTML from a series of web pages inside a forum. The pages' URLs follow a pattern that has a prefix and '/page-N' at the end, N being the page number.
So I decided to loop through the pages using a for loop and template literals to load a new page URL on each loop, until I reach the final number of pages, contained in the variable C.numberOfPages.
Problem is: the code inside the page.evaluate() function is not working, when I run my code I get the TypeError: Cannot read property of undefined. I've checked and the source of the problem is that document.getElementById('discussion_subentries') is returning undefined.
I've tested the same code that is inside the page.evaluate() function in Chrome Dev Tools and it works fine, returning the innerHTML I wanted. All of those .children[] concatenations were necessary due to the structure of the page I'm scraping, and they work fine at the browser, returning the proper value.
So how do I make it work in my Puppeteer script?
for (let i = 1; i <= C.numberOfPages; i++) {
  let URL = `${C.url}page-${i}`;
  await page.goto(URL);
  await page.waitForSelector('#discussion_subentries');
  let pageData = await page.evaluate(() => {
    let discussionEntries = document.getElementById('discussion_subentries')
      .children[1];
    let discussionEntryMessages = [];
    for (let j = 0; j < discussionEntries.childElementCount; j++) {
      let thisEntryMessage =
        discussionEntries.children[j].children[0].children[1].children[1]
          .children[1].innerHTML;
      discussionEntryMessages.push(thisEntryMessage);
    }
    return discussionEntryMessages;
  });
  entryData.discussionEntryMessages.push(pageData);
}
page.evaluate is not the problem; it works exactly as it does in the devtools. The problem is most probably that waitForSelector doesn't do its job properly and doesn't wait for the element to be fully loaded before going further. Try debugging by adding a sleep instead of waitForSelector, to confirm that that's the problem.
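To test that theory, one sketch (assuming the page object from the question) replaces waitForSelector with waitForFunction, which waits until the container actually has its child entries rather than merely existing; waitForSubentries is a hypothetical helper name:

```javascript
// Sketch, assuming the `page` object from the question. waitForFunction
// re-runs the predicate inside the page until it returns truthy, so it waits
// for the *children* to exist, not just the container element.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function waitForSubentries(page) {
  await page.waitForFunction(() => {
    const el = document.getElementById('discussion_subentries');
    return el && el.children.length > 1; // children[1] must exist
  });
}
// Debug variant: replace the wait with `await sleep(3000);` before
// page.evaluate(...) to confirm timing is the culprit.
```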
I've been getting the warning message
You have included the Google Maps API multiple times on this page.
This may cause unexpected errors.
Each time I open a different tab of my website, as each tab has its own map to show to the users.
The way I've made the code to call the google API was this:
function loadMapScript() {
  var script = document.createElement("script");
  script.src = "https://maps.google.com/maps/api/js";
  script.type = "text/javascript";
  document.getElementsByTagName("head")[0].appendChild(script);
}
And I call it here:
$(document).ready(function () {
  loadMapScript();
  ... ... ...
The website has various tabs, and each time one of them opens the script is called, hence why it's there multiple times; I got that far. What I didn't get is how to stop this from happening. I tried to perform a few verifications inside the loadMapScript function, but they did not work at all. I'd like to know if someone knows a way to do this verification inside loadMapScript, to prevent it from adding the Google API script more than once.
I faced the same type of problem. It occurs when your web page includes the Maps API more than once.
In my case there was a .js file that was also calling the Maps API, so first check whether you are including the API more than once; if you are, remove one of the calls.
I figured it out; it required the following verification inside loadMapScript (note that the found flag has to be initialized first), and it was simpler than I thought:
var found = false;
var children = document.getElementsByTagName("head")[0].childNodes;
for (i = 0; i < children.length; i++) {
  if (children[i].src == script.src) {
    found = true;
    break;
  }
}
if (found == false) {
  document.getElementsByTagName("head")[0].appendChild(script);
}
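An alternative guard, sketched here as a rewritten loadMapScript, skips scanning the head's child nodes and instead tests for the API object (and for any previously injected tag) before appending; the URL substring in the selector is an assumption based on the script.src used above:

```javascript
// Sketch: guard against double-loading by checking for the API object, or a
// previously injected tag, before appending the script.
function loadMapScript() {
  if (window.google && window.google.maps) {
    return; // API already loaded; nothing to do
  }
  if (document.querySelector('script[src*="maps.google.com/maps/api/js"]')) {
    return; // tag already injected and still loading
  }
  var script = document.createElement("script");
  script.src = "https://maps.google.com/maps/api/js";
  script.type = "text/javascript";
  document.getElementsByTagName("head")[0].appendChild(script);
}
```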
Thanks for the attempt to help at least
I have no clue what I'm doing wrong here! This should be working, I believe. I'm writing a Chrome extension, and this should get the current tab's URL and set the HTML of #current-tab to that URL (for testing purposes). The code successfully reaches the callback, but tab.url is undefined when I put it in an alert box, and its value is not put in #current-tab. Here's the code I have:
$('#get-tab').click(function () {
  chrome.tabs.query({ "active": true }, function (tab) {
    for (var i = 0; i < tab.length; i++) {
      alert(tab[i].url);
      $('#current-tab').append(tab[i].url);
    }
  });
});
chrome.tabs.query actually returns an array of Tab objects, so you would have to reference the tab inside of the array (even if it is only one Tab):
$('#get-tab').click(function () {
  chrome.tabs.query({ "active": true }, function (tab) {
    for (var i = 0; i < tab.length; i++) {
      alert(tab[i].url);
      $('#current-tab').append(tab[i].url);
    }
  });
});
I figured out what was wrong! Apparently you need to reload the extension to refresh changes to the manifest file! I had added the permissions later, but had not reloaded the extensions in the extension manager, so the change had not taken effect! We're rolling now!
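For reference, a sketch of the query pattern with the array handled explicitly. showCurrentTabUrl is a hypothetical wrapper; currentWindow narrows the match to the focused window, and, as discovered above, reading tab.url requires the "tabs" permission in manifest.json (and a reload of the extension after changing it):

```javascript
// Sketch: chrome.tabs.query hands the callback an *array* of Tab objects,
// so grab the first match explicitly. `showCurrentTabUrl` is hypothetical.
function showCurrentTabUrl(render) {
  chrome.tabs.query({ active: true, currentWindow: true }, function (tabs) {
    var tab = tabs[0]; // tabs is an array, even when there is only one match
    if (tab) {
      render(tab.url); // undefined without the "tabs" permission
    }
  });
}
// e.g. showCurrentTabUrl(function (url) { $('#current-tab').append(url); });
```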
I am trying to intercept window.location changes to do some native work in an Android app. To be more specific, I override this call in WebViewClient:
public boolean shouldOverrideUrlLoading(WebView view, String url)
to look for anything starting with "native://". The JavaScript code is like this:
function callNative() {
  window.location = "native://doSomeNativeWork()";
}
function callNativeManyTimes(count) {
  for (i = 0; i < count; i++) {
    callNative();
  }
}
The problem I am seeing is that if I set window.location many times very quickly (as in the code above), I only get one call to the WebViewClient on the native side. If I make the calls 50 ms apart, I get every one of them. I am guessing the browser is doing some optimization around this.
I think I can solve this by not using window.location and instead embedding a native object into JavaScript and calling methods on that object. I am just wondering why this is happening. Can someone more familiar with JS share some insight?
Thanks
I had exactly the same problem. You can solve it by creating iframes. I create iframes with the native URLs and then delete each iframe after 2 seconds, just so we don't have too many lying around in the DOM. It worked like a charm for me; you can create as many iframes as you want.
I also included a cache buster on the iframe URL, even though I'm not sure it's needed. Better safe than sorry.
function callNative(url) {
  var _frame = document.createElement('iframe');
  _frame.width = 0; _frame.height = 0; _frame.frameBorder = 0;
  document.body.appendChild(_frame);
  if (url.indexOf('?') >= 0) {
    url = url + "&cb=";
  } else {
    url = url + "?cb=";
  }
  _frame.src = url + Math.round(Math.random() * 1e16);
  // Remove the iframe
  setTimeout(function () { document.body.removeChild(_frame); }, 2000);
}

function callNativeManyTimes(count) {
  for (i = 0; i < count; i++) {
    callNative('native://doSomeNativeWork()');
  }
}
Note that I used the approach above on iOS. I'm not sure it's exactly the same on Android, but judging from the question and the other answer, I guess it is.
I would guess that loading a URL is asynchronous (it would probably be a bad idea to block the execution of a script until a URL has been resolved). As such, setting window.location will presumably only queue up the loading of the new URL, which will be done in a different thread.
Waiting 50ms is a hack that may or may not work. You need to find a different approach. You need something that guarantees that each one of those URLs will be resolved. If the order doesn't matter, you could just use images, like somebody suggested. Otherwise, you could use a native JavaScript interface (which is probably the better approach).
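A sketch of the JavaScript side of such a native interface. It assumes the Android code has registered a bridge object with something like webView.addJavascriptInterface(bridge, "NativeBridge"); both the object name and the method name are illustrative, not from the question. Unlike window.location assignments, each call here reaches native code synchronously, so none of them are coalesced away:

```javascript
// Sketch: calling an Android-registered bridge object directly.
// "NativeBridge" and "doSomeNativeWork" are illustrative names; the Android
// side is assumed to have called addJavascriptInterface with that name.
function callNativeManyTimes(count) {
  for (var i = 0; i < count; i++) {
    if (window.NativeBridge) {
      window.NativeBridge.doSomeNativeWork(); // synchronous call into Java
    }
  }
}
```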