Function has no reference to my predefined variable - javascript

I'm very new to JavaScript, so I may not have the basics right.
I'm trying to open a new page based on the current page's URL, but it seems that the function has no reference to the current URL.
It should be a simple Firefox add-on that looks at the URL, replaces a certain part, and opens the "new" page in a new tab.
browser.browserAction.onClicked.addListener(function() {
    var path = window.location.href
    var newPath = path.replace('www', 'test')
    var creating = browser.tabs.create({
        "url": "${newPath}"
    });
});
Being on www.google.com, it should open a new tab with test.google.com. Instead it opens a tab with the following URL: moz-extension://fde3def8-cf60-4536-b96b-1bf7ed91a8da/$%7BnewPath%7D.
Looking at the end of that URL, I think the function has no reference to the variable. When I replace the newPath variable in the last line with a static sample URL, e.g. www.facebook.com, it works fine.

Extension pages, such as an extension pop-up page, are separate pages. They run in a separate context that has no (or very little) connection to the regular currently open page.
Your path is the URL of the extension, which apparently happens to be "moz-extension://fde3def8-cf60-4536-b96b-1bf7ed91a8da/". You are trying to replace parts of that URL and open the result in a new tab.
Read up on architecture of extensions first:
Anatomy of an extension
Architecture of extensions (for Chrome, but the concept is the same)
To accomplish what you are trying to do, you will probably have to first query the active tab using tabs.query({active: true}) to get its URL.
Note on asynchronous execution:
Many extension-related APIs (including tabs.query) are asynchronous (promise-based in Firefox). Promises might be a bit difficult to grasp for beginners.
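Putting the pieces together, here is a minimal sketch of the listener (assuming a background script with the `tabs` permission; the `toTestUrl` helper name is my own, not from the question):

```javascript
// Hypothetical helper: swap the "www" part of the URL for "test".
function toTestUrl(url) {
    return url.replace('www', 'test');
}

// Only register the listener when the WebExtension API is available.
if (typeof browser !== 'undefined') {
    browser.browserAction.onClicked.addListener(function () {
        // tabs.query is asynchronous and promise-based in Firefox.
        browser.tabs.query({ active: true, currentWindow: true }).then(function (tabs) {
            browser.tabs.create({ url: toTestUrl(tabs[0].url) });
        });
    });
}
```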
You are also confusing regular strings with template strings:
"url": "${newPath}" is incorrect. It should simply be "url": newPath.

String interpolation in JS works only with backticks, not with quotation marks, as in: "url": `${newPath}`

Related

Simple JavaScript bulk-download script for Humble Bundle library is only returning the last item

Trying to make a small JavaScript that will download all my e-books (for example) on Humble Bundle. I realize that something like this has been done before, but all the solutions I've encountered so far work in the purchases section, not the library. I also realize that Humble Bundle, at some point, added a "bulk download" button on each e-book purchase page, making the aforementioned solutions obsolete.
I prefer not asking for help, but at this point, I just want to make my script work and learn why it is not. I don't want to take the easy way out using any third-party add-on or application (e.g. download managers). I have tried this in jQuery as well, but have gotten the same results below. Would like to do it in vanilla JS, but welcome any helpful suggestions!
Here is my code:
var domItem = document.querySelectorAll("div.selector-content div.text-holder h2"),
    domItemName, domItemDownload;

domItem.forEach(function(itemBtn) {
    itemBtn.click();
    domItemName = document.querySelector("div.details-holder div.details-heading div.text-holder h2");
    domItemDownload = document.querySelectorAll("div.details-holder div.js-button-holder div.js-download-button h4");
    domItemDownload.forEach(function(downloadBtn) {
        console.log(domItemName.innerText + ": " + downloadBtn.innerText);
        downloadBtn.click();
    });
});
What I expect to happen for each e-book is an output of the e-book name and type of e-book it is downloading (PDF, etc.) and then navigating to the URL obtained by clicking on the download button. An example of the URL is here: https://dl.humble.com/torrents/unixpowertools.mobi.torrent?gamekey=xxxxx&ttl=xxxxx&t=xxxxx.
This works as expected up to the point where it downloads all the torrent files: the browser console log will say that it has navigated to each URL to download the needed file, but only the last entry gets downloaded. For example, say I have three e-books and each of them has a PDF torrent file; the script will click everything as expected and the browser will say something like the following:
CSS Refactoring: PDF main.min.js:10:15514
CSS: The Definitive Guide: PDF main.min.js:10:15514
D3 Data-Driven Documents Pocket Primer: PDF main.min.js:10:15514
Navigated to https://dl.humble.com/torrents/cssrefactoring.pdf.torrent?gamekey=xxxxx&ttl=xxxxx&t=xxxxx
Navigated to https://dl.humble.com/torrents/css_thedefinitiveguide.pdf.torrent?gamekey=xxxxx&ttl=xxxxx&t=xxxxx
Navigated to https://dl.humble.com/torrents/d3datadrivendocuments_pocketprimer.pdf.torrent?gamekey=xxxxx&ttl=xxxxx&t=xxxxx
However, I will only get the torrent file for that last entry. No matter what type of e-book it is, whether it is a direct download or the torrent file, no matter where I start and end the loop, or whether I use Chrome or Firefox, I always download only the last entry's file.
So, after seeing that I can get the e-books' download URLs by clicking on the download buttons, I tried random ones directly in the browser and was able to download each of them individually, so I know the URLs are working as expected. To just get to an expected result, I then copy-pasted all the URLs in the console log and put them into an array. I then looped through the array with the following script, but still get the same result:
var urls = [
    'https://dl.humble.com/torrents/cssrefactoring.pdf.torrent?gamekey=xxxxx&ttl=xxxxx&t=xxxxx',
    'https://dl.humble.com/torrents/css_thedefinitiveguide.pdf.torrent?gamekey=xxxxx&ttl=xxxxx&t=xxxxx',
    'https://dl.humble.com/torrents/d3datadrivendocuments_pocketprimer.pdf.torrent?gamekey=xxxxx&ttl=xxxxx&t=xxxxx'
];

for (var i = 0; i < urls.length; i++) {
    document.location.href = urls[i];
}
Based on my research, this sounds just like a closure issue. However, using techniques like those found on https://dzone.com/articles/why-does-javascript-loop-only-use-last-value have not resolved the issue. Furthermore, my understanding of a closure issue is that I shouldn't be seeing the browser "navigating" to each URL, but instead expect it to say it is navigating to the same URL many times.
I also thought that maybe this was an issue with the browser trying to download too many files from the server too quickly, so I tried implementing a wait in three ways: setTimeout, setInterval, and wrote a function to while-loop until a specified time has elapsed (bad, I know). This still gave the same result, but slower.
I am sure the issue is something simple but having worked on and abandoned this particular task many times before, I just need a set of fresh, more experienced eyes on it.
This is my first post, so I appreciate your time reading this and let me know if there is any more information you may need or if I need to fix up my post.
It is not related to closures. When you navigate to a link, the browser abandons the current page and starts opening the new one. If you navigate to another link while the first page is still being opened, the browser aborts that loading process and opens the new one instead. You get the same behaviour with .click(): each click causes a navigation that aborts the previous ones, so only the last page is opened.
Instead you could open each link in a new tab:
for (var i = 0; i < urls.length; i++) {
    window.open(urls[i], "download");
}

JavaScript. Getting "resource"

When I browse to, let's say, example.com/page/name?source=illia, I get to the example.com/password page. This is how the application is set up.
In the dev tools, on the Network tab, I can see "resources" (not sure if I name it correctly) in the Name column.
So there are /name?source=illia and /password and all other items.
The question is how can I access /name?source=illia with js. Based on that I'd like to change the workflow.
document.referrer is an empty string
UPDATE:
Here is the screenshot from the devtools. Is it possible to get diagnostic?source=illia#example.com with javascript?
You can call this JavaScript to get the whole URL:
window.location.href
Then you can extract what you need. One easy way is to split:
var url = window.location.href;
alert(url.split('?')[1]);
Hope this is useful!
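A more robust alternative to splitting strings by hand is the URL API, available in modern browsers (the example URL is the one from the question, hard-coded here for illustration):

```javascript
// Parse a URL and read its parts instead of splitting on "?".
var url = new URL('https://example.com/page/name?source=illia');
console.log(url.pathname);                   // "/page/name"
console.log(url.searchParams.get('source')); // "illia"
```

In a page you would pass window.location.href to the URL constructor instead of a literal string.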

Replacing XML DataIsland Files in asp.net

The app I am working on, currently uses XML dataisland files to retrieve the dropdown data. Below is the code that defines the file.
<xml id="DefaultDataIslands" src="../XMLData/DataIslands-<%=((User.Identity).Database)%>.xml">
</xml>
Below is an example code that uses these XML dataislands.
var oDataIsland = document.getElementById("DefaultDataIslands");
var oXmlNodes = oDataIsland.XMLDocument.selectNodes("XMLDataIslands/DataIsland[@ID='DIMRGroup']/Option");
This oDataIsland line is used about 4k times in total across the application. The application itself is intranet-only, so I can even ask the users to download the XML files directly. The whole point is to keep the required changes to a minimum while removing all traces of XML tags. I want to make sure the application works in Chrome once this is completed.
I checked the link from mozilla regarding the dataislands here. https://developer.mozilla.org/en/docs/Using_XML_Data_Islands_in_Mozilla
Below is the code based on that mozilla link.
var doc = document.getElementById(strDataSource).contentDocument;
var mainNode = doc.getElementsByTagName("DataIsland");
var oXmlNodes;
var strOptions = "";

// finds the selected node based on the id, and gets the options for that id
for (var i = 0; i < mainNode.length; i++) {
    if (mainNode[i].getAttributeNode("ID").nodeValue == strDataMember) {
        oXmlNodes = mainNode[i].getElementsByTagName("Option");
    }
}
This code reads the data properly, works perfectly in IE (10 with standards mode, no quirks), and was an easy enough change to make in the full solution.
The only problem is that the document.getElementById(strDataSource).contentDocument; line fails in Chrome. This is the same line as mentioned in Mozilla's documentation, but somehow the contentDocument property is undefined in Chrome.
So, I need some other suggestion on how to get this fixed. I have tried other methods: using HTTPRequest (too many requests per page), using JSON (requires changing existing methods completely), and processing the XML on the back end instead of the client side (requires architectural changes). So far, all these ideas have failed.
Is there any other method that I can use? Or should I fix the contentDocument issue?
To allow contentDocument in Chrome, you will have to use the --allow-file-access-from-files flag. Following are the steps for doing so:
Find your Chrome installation path, e.g.
C:\Users\your-user-name\AppData\Local\Google\Chrome\Application
Launch the Google Chrome browser from a command-line window with the additional argument --allow-file-access-from-files, e.g.:
"path to your chrome installation\chrome.exe" --allow-file-access-from-files
As a temporary method you can use each time you are testing: copy the existing Chrome launcher, add the flag as above, and save it with a new name, e.g. "Chrome - testing".
Alternatively, you can simply create a new launcher with the above argument and use it to start Chrome.

Custom dojo widgets not loading on default.htm

I've got a web page (default.htm) that loads some custom dojo widgets. The widgets load fine when the entire URL is typed:
http://www.eg/default.htm
but when the site is hit as:
http://www.eg
the widgets don't load.
when they load properly (when default.htm is specified) the console message is:
XHR finished loading:
GET "http://www.eg/Templates/WatershedMap.htm"
when they don't load the console message is:
OPTIONS http://templates/WatershedMap.htm net::ERR_NAME_NOT_RESOLVED
I'm running iis 7. Does anyone have an idea of how I might fix this?
Thanks
I suspect that in your dojoConf or data-dojo-conf, you are using location.pathname, is that correct? Or perhaps directly in your xhr request where WatershedMap.htm is loaded?
When you view the page with just http://www.eg/, the location.pathname is just a slash, "/". If then, for example, xhr tries to do this:
xhr(location.pathname + "/Templates/WatershedMap.html")...
... then the request will actually go to //Templates/WatershedMap.html.
That double slash means "protocol-relative URL". The browser will take the same protocol scheme (http/https) as the current page and append whatever comes after the double slash.
In other words, that will actually try to make a cross-domain request to http://Templates, which triggers a preflight OPTIONS request.
However, when your page is loaded with http://www.eg/foo/, the location.pathname will be "/foo/", and the request will go to http://www.eg/foo/Templates/WatershedMap.htm.
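You can see the protocol-relative resolution directly with the URL API (a quick illustration, not part of the original answer):

```javascript
// Resolving "//Templates/..." against the page URL: the segment after the
// double slash becomes the hostname, not a path component.
var resolved = new URL('//Templates/WatershedMap.html', 'http://www.eg/');
console.log(resolved.hostname); // "templates"
console.log(resolved.href);     // "http://templates/WatershedMap.html"
```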
You will have to share some more code if you need help to pinpoint the problem. Look through your code for location.pathname and see if you find anything that may be relevant.
Edit: based on your comment, your dojoConf has the following:
packages: [{
    name: "Templates",
    location: location.pathname.replace(/\/[^/]+$/, "") + "/Templates"
}]
The line with replace() in it takes the current page's path (for example /foo/bar.htm), and removes the last slash and everything after it, then appends "/Templates".
It is supposed to ensure that whenever you load something that starts with "Templates" (for example, if you do dojo/text!Templates/Map.htm), it will look in the same directory on your server as the current page.
However, when you are on http://www.eg/, the pathname is simply a slash, and nothing is removed. So you end up with "//Templates". As mentioned earlier, this becomes a protocol-relative URL, with Templates as the hostname. Definitely not what you want!
On the other hand, when you are on http://www.eg/default.htm, the pathname is /default.htm, so all of that is stripped away and you're left with just "/Templates". This is what you want!
You could solve it by simply replacing the line with:
location: location.pathname.replace(/\/[^/]*$/, "") + "/Templates"
Only a single character difference (+ became *)! Now it will remove the single slash when you are viewing http://www.eg/ as well.
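To see the one-character difference in action, compare the two patterns on the two pathnames (a quick check, runnable in any JS console):

```javascript
// "+" requires at least one non-slash character after the last slash,
// while "*" also matches a bare trailing slash.
var oldRule = function (p) { return p.replace(/\/[^/]+$/, "") + "/Templates"; };
var newRule = function (p) { return p.replace(/\/[^/]*$/, "") + "/Templates"; };

console.log(oldRule("/"));            // "//Templates" (no match, nothing removed)
console.log(newRule("/"));            // "/Templates"
console.log(oldRule("/default.htm")); // "/Templates"
console.log(newRule("/default.htm")); // "/Templates"
```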
In my opinion, though, it's better to use an explicit path. If you know that /Templates will always be http://www.eg/Templates, you may as well do:
location: "/Templates"

How to bypass document.domain limitations when opening local files?

I have a set of HTML files using JavaScript to generate navigation tools, indexing, TOC, etc. These files are only meant to be opened locally (e.g., file://) and not served on a web server. Since Firefox 3.x, we run into the following error when clicking a nav button that would generate a new frame for the TOC:
Error: Permission denied for <file://> to get property Location.href from <file://>.
I understand that this is due to security measures within FF 3.x that were not in 2.x, in that the document.domain does not match, so it's assuming this is cross-site scripting and is denying access.
Is there a way to get around this issue? Perhaps just a switch to turn off/on within Firefox? A bit of JavaScript code to get around it?
In Firefox:
In the address bar, type about:config,
then type network.automatic-ntlm-auth.trusted-uris in the search bar.
Enter a comma-separated list of servers (i.e., intranet,home,company).
Another way is editing user.js.
In user.js, write:
user_pref("capability.policy.policynames", "localfilelinks");
user_pref("capability.policy.localfilelinks.sites", "http://site1.com http://site2.com");
user_pref("capability.policy.localfilelinks.checkloaduri.enabled", "allAccess");
But if you want to stop all verification, just write the following line into the user.js file:
user_pref("capability.policy.default.checkloaduri.enabled", "allAccess");
You may use this in Firefox to read the file:
function readFile(arq) {
    netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
    var file = Components.classes["@mozilla.org/file/local;1"]
                         .createInstance(Components.interfaces.nsILocalFile);
    file.initWithPath(arq);
    // open an input stream from the file
    var istream = Components.classes["@mozilla.org/network/file-input-stream;1"]
                            .createInstance(Components.interfaces.nsIFileInputStream);
    istream.init(file, 0x01, 0444, 0);
    istream.QueryInterface(Components.interfaces.nsILineInputStream);
    var line = {}, lines = [], hasmore;
    do {
        hasmore = istream.readLine(line);
        lines.push(line.value);
    } while (hasmore);
    istream.close();
    return lines;
}
Cleiton's method will work for yourself, or for any users who you expect will go through this manual process (not likely unless this is a tool for you and your coworkers or something).
I'd hope that this type of thing would not be possible, because if it is, that means that any site out there could start opening up documents on my machine and reading their contents.
You can have all files that you want to access in subfolders relative to the page that is doing the request.
You can also use JSONP to load files from anywhere.
Add "file://" to network.automatic-ntlm-auth.trusted-uris in about:config
