How to bypass document.domain limitations when opening local files?

I have a set of HTML files using JavaScript to generate navigation tools, indexing, TOC, etc. These files are only meant to be opened locally (e.g., file://) and not served on a web server. Since Firefox 3.x, we run into the following error when clicking a nav button that would generate a new frame for the TOC:
Error: Permission denied for <file://> to get property Location.href from <file://>.
I understand that this is due to security measures introduced in FF 3.x that were not present in 2.x: the document.domain values do not match, so Firefox assumes this is cross-site scripting and denies access.
Is there a way to get around this issue? Perhaps just a switch to turn off/on within Firefox? A bit of JavaScript code to get around it?

In Firefox:
In the address bar, type about:config, then type network.automatic-ntlm-auth.trusted-uris in the search bar and enter a comma-separated list of servers (e.g., intranet,home,company).
Another way is editing user.js.
In user.js, write:
user_pref("capability.policy.policynames", "localfilelinks");
user_pref("capability.policy.localfilelinks.sites", "http://site1.com http://site2.com");
user_pref("capability.policy.localfilelinks.checkloaduri.enabled", "allAccess");
But if you want to stop all verification, just write the following line in the user.js file:
user_pref("capability.policy.default.checkloaduri.enabled", "allAccess");

You can use this in Firefox to read a file:
function readFile(arq) {
    // Requires a privileged context; this API is deprecated in modern Firefox.
    netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
    var file = Components.classes["@mozilla.org/file/local;1"]
                         .createInstance(Components.interfaces.nsILocalFile);
    file.initWithPath(arq);
    // Open an input stream from the file (read-only, mode 0444).
    var istream = Components.classes["@mozilla.org/network/file-input-stream;1"]
                            .createInstance(Components.interfaces.nsIFileInputStream);
    istream.init(file, 0x01, 0444, 0);
    istream.QueryInterface(Components.interfaces.nsILineInputStream);
    var line = {}, lines = [], hasmore;
    do {
        hasmore = istream.readLine(line);
        lines.push(line.value);
    } while (hasmore);
    istream.close();
    return lines;
}

Cleiton's method will work for you, or for any users whom you expect to go through this manual process (not likely, unless this is a tool for you and your coworkers or something).
I'd hope that this type of thing would not be possible, because if it is, that means that any site out there could start opening up documents on my machine and reading their contents.

You can have all files that you want to access in subfolders relative to the page that is doing the request.
You can also use JSONP to load files from anywhere.
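For example, a minimal JSONP-style loader could look like this (the file name, callback name, and payload are all assumptions for illustration):
// Contents of the hypothetical data.js, which wraps its payload in a call
// to a globally defined callback: handleData({"title": "TOC"});
function handleData(data) {
    console.log("Loaded:", data);
}

var script = document.createElement("script");
script.src = "data.js"; // relative path, so it also works over file://
document.getElementsByTagName("head")[0].appendChild(script);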

Add "file://" to network.automatic-ntlm-auth.trusted-uris in about:config

Related

GET net::ERR_FAILED error when probing for extension

There is a certain Chrome extension and I want to get a PNG file from it by XMLHttpRequest. If the extension is enabled, I want to write 'load' to the console, and if the extension is disabled, I want to write 'error' to the console.
It works fine, but if the extension is disabled, Chrome writes an error to the console that I do not want to appear:
How can I remove this error from the console?
(I have tried window.onerror but it doesn't work)
The code:
var loadHref = function(href) {
    var xmlhttp = new XMLHttpRequest();
    xmlhttp.onload = function() { console.log('load'); };
    xmlhttp.onerror = function() { console.log('error'); };
    xmlhttp.open('GET', href);
    xmlhttp.send();
};
loadHref('chrome-extension://77672b238520494cba8855547dd00ba8/img/icon24.png');
Basically, you can't silence those errors, as they are not JS errors but network errors.
Assuming your goal is to detect that a specific extension is present:
Assume you need it at a specific domain and for a specific extension that is controlled by you.
In this case, the optimal approach is externally_connectable communication. Here's a sample.
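A rough sketch of that handshake, assuming a hypothetical extension ID and message format (the page's domain must be listed under "externally_connectable" in the extension's manifest):
// on the web page
var extensionId = "abcdefghijklmnopabcdefghijklmnop"; // hypothetical ID
if (window.chrome && chrome.runtime && chrome.runtime.sendMessage) {
    chrome.runtime.sendMessage(extensionId, { ping: true }, function (response) {
        if (chrome.runtime.lastError || !response) {
            console.log("extension not installed");
        } else {
            console.log("extension installed");
        }
    });
}

// in the extension's background page, replying to the probe:
// chrome.runtime.onMessageExternal.addListener(function (msg, sender, sendResponse) {
//     sendResponse({ pong: true });
// });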
Assume you need it at a non-specific domain not known in advance, but you control the extension.
In this case, a content script can be injected (probably with "run_at": "document_start") to add something to the document signalling the presence of the extension, for example by injecting a page-level script that sets a variable (see the sketch at the end of this answer).
Assume you don't control the extension.
Well, in that case you're screwed. If an extension won't cooperate in the manners described above, probing its web-accessible resources (if any!) is the only way to detect it, short of watching for specific content script activity in the DOM (again, if any).
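For the content-script approach in the second case, here is a minimal sketch (the marker id and manifest wiring are assumptions, not from any real extension):
// content script, declared in the extension's manifest with "run_at": "document_start"
var marker = document.createElement("div");
marker.id = "my-extension-is-here"; // hypothetical marker id
marker.style.display = "none";
document.documentElement.appendChild(marker);

// on the web page, once the DOM is ready
if (document.getElementById("my-extension-is-here")) {
    console.log("extension detected");
} else {
    console.log("extension not detected");
}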
Actually, there is already an existing issue reported for the error when the Chrome Cast extension is not installed with google-cast-sdk, and based on that issue tracker, it hasn't been totally resolved yet. There are, however, workarounds given in one of the comments:
The workaround would be to either install the Google Cast extension or disable network warnings (please note you may miss some warnings that could be of interest to you) so you don't see these additional logs.
And, you may also try the probable solutions given in this SO post - Google chrome cast sender error if chrome cast extension is not installed or using incognito - and who knows, it might help. :)

Is there any way to save image/PDF content into the local file system, applicable to all browsers? [closed]

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 10 years ago.
I got image content in an array buffer from an ajax response, and appended that array buffer to a BlobBuilder. Now I want to write these contents to a file. Is there any way to do this?
I used window.requestFileSystem; it is working fine in Chrome but not working in Mozilla.
Here is my piece of code:
function retrieveImage(studyUID, seriesUID, instanceUID, sopClassUID, nodeRef) {
    window.requestFileSystem = window.requestFileSystem || window.webkitRequestFileSystem;
    var xhr = new XMLHttpRequest();
    var url = "/alfresco/createthumbnail?ticket=" + ticket + "&node=" + nodeRef;
    xhr.open('GET', url, true);
    xhr.responseType = 'arraybuffer';
    xhr.onload = function(e) {
        if (this.status == 200) {
            window.requestFileSystem(window.TEMPORARY, 1024 * 1024, function(fs) {
                var fn = '';
                if (sopClassUID == '1.2.840.10008.5.1.4.1.1.104.1') {
                    fn = instanceUID + '.pdf';
                } else {
                    fn = instanceUID + '.jpg';
                }
                fs.root.getFile(fn, { create: true }, function(fileEntry) {
                    fileEntry.createWriter(function(writer) {
                        writer.onwriteend = function(e) {
                            console.log(fileEntry.fullPath + " created");
                        };
                        writer.onerror = function(e) {
                            console.log(e.toString());
                        };
                        var bb;
                        if (window.BlobBuilder) {
                            bb = new BlobBuilder();
                        } else if (window.WebKitBlobBuilder) {
                            bb = new WebKitBlobBuilder();
                        }
                        bb.append(xhr.response);
                        if (sopClassUID == '1.2.840.10008.5.1.4.1.1.104.1') {
                            writer.write(bb.getBlob('application/pdf'));
                        } else {
                            writer.write(bb.getBlob('image/jpeg'));
                        }
                    }, fileErrorHandler);
                }, fileErrorHandler);
            }, fileErrorHandler);
        }
    };
    xhr.send();
}
The script of a web page is not allowed to write arbitrary files (such as PDFs) to the client's storage. And you should be thankful, because that means web pages have a hard time trying to put malware on your machine.
Instead you should redirect the user (or open a new window/tab) to a URL where the browser can find the content desired for download, and let it handle it. Use the response headers to tell the client to download it or display it, as explained here.
If you need to create the downloaded content dynamically, then manage it on the server, making it an active page (.php, .jsp, .aspx, etc.). What matters is to have the correct MIME type in the header of the response.
Note: yes, I'm telling you not to use ajax, just window.open. Edit: I guess you may want to present the images in an img; in that case it is the same, just put the url in the src attribute and use no ajax, only some javascript to update the attribute if appropriate.
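As a rough illustration of the header-based approach, here is a minimal sketch using Node's built-in http module (the file name, MIME type, and port are assumptions):
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
    // The Content-Disposition header tells the browser to download the
    // response instead of rendering it.
    res.setHeader('Content-Type', 'application/pdf');
    res.setHeader('Content-Disposition', 'attachment; filename="report.pdf"');
    fs.createReadStream('report.pdf').pipe(res); // hypothetical file
}).listen(8080);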
Given your comment I understand that you want:
To cache the image in the client to avoid having to get it back from the server every time.
To allow the user to customize his experience by allowing the use of images from local storage.
Now, again for security reasons, arbitrary access to the client's files is not allowed. In this case it works both ways: first it prevents the webpage from spying on you, and second it prevents you from injecting malicious content into the page.
So, for the first part, as far as I know the default is to cache images [this is handled by your browser, and yes, you should clean it from time to time because it tends to grow]. If that is not working for you, I guess you could try to use a cache manifest.
About the second, the usual way would be to use local storage [which, again, is handled by your browser, but is not arbitrary access to the client's files] to store/retrieve the url of the image and use it to present the image.
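A minimal sketch of that local storage approach (the key name and element id are made up for illustration):
// Store the URL of the user's chosen image once, e.g. after a selection.
localStorage.setItem('customImageUrl', 'http://example.com/images/avatar.png');

// Later, on page load, retrieve it and use it to present the image.
var url = localStorage.getItem('customImageUrl');
if (url) {
    document.getElementById('avatar').src = url; // hypothetical img element
}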
The image can still be saved at the server, and yes, it can be cached. To get it to the server - of course - you can always upload it with <input type="file" ... /> and you may need to set enctype on your form. - You already knew that, right? - On the server, store the image in a database (or dedicated folder). Now the page that is responsible for retrieving the image should:
check the request method
check user's permissions (identify it by the session / cookie)
check the parameters of the request (if any)
set the header
output the file from the database (or dedicated folder)
Now, let's say you want to allow this to work as an xcopy-deployable application (that just happens to run in a browser). In this case you can always tell the user to store the images he wants in a particular location and access them with a relative path.
Or - just because - you are hosting in a place where there is no chance of server-side scripting. So you have to go along only with what javascript gives you. Well, you cannot use a relative path here, since it is not local... and if you try to use a local absolute path, the browser will just diss you (I mean, it just ignores it).
So, you can't get the image from a file of the client, and you can't store it on the server...
Well, as you know there is a working draft for that, and I notice it is what you are trying. The problem is that it is a working draft. The initial implementation is held back by security issues, to quote Jonas Sicking:
The main problem with exposing this functionality to the web is security. You wouldn’t want just any website to read or modify your images. We could put up a prompt like we do with the GeoLocation API, given that this API potentially can delete all your pictures from the last 10 years, we probably want something more. This is something we are actively working on. But it’s definitely the case here that security is the hard part here, not implementing the low-level file operations.
So, I guess the answer is "not yet"? In fact, considering Microsoft's approach of only providing the parts of the standard that reach recommendation status, and also its approach of launching a new version of IE with each new version of Windows, you will have to wait a while to have support in all the browsers. First wait until the File API reaches recommendation status. Then wait until Microsoft updates IE to support it. And if, by any chance (as it seems will happen) it is only for IE10 (or a future IE11), and those don't work on a Windows before Windows 8, you will be waiting for a lot of people to upgrade.
If this is your situation, I would suggest getting an API for some image hosting web site, and using that instead. (That will probably not be free, or not private, so you could just change your web hosting already.)
You can't have a common way to store the response in files that is compatible with all browsers.
There is a way - you can use FileReader in JavaScript - but that again wouldn't work in IE either.
I had a similar problem a few weeks ago. What I did was make an ajax request to a server, passing the content; the server stored the content for me in a file, then returned a reference to the stored file.
I stored my files in a temp database table, and the server action returned the id for the file, by which we can access the file from the database whenever we want.
You can also store your files on the server in some thumbnail folder, but I preferred the database.
If you need any more specifics, let me know.
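A rough sketch of that round trip (the endpoint and response format are assumptions; arrayBuffer stands for the content already received, and the Blob constructor used here is only available in newer browsers):
var xhr = new XMLHttpRequest();
xhr.open('POST', '/storeFile', true); // hypothetical server endpoint
xhr.onload = function () {
    // The server replies with an id by which the file can be fetched later.
    var fileId = JSON.parse(xhr.responseText).id;
    console.log('stored as', fileId);
};
xhr.send(new Blob([arrayBuffer], { type: 'image/jpeg' }));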

How to get cross-domain communication to work in iframes?

I have an iframe-based online help system that has worked well for years. With IE8 it chokes on some of the JavaScript that calls location.toString(). This same code works fine in IE6.
Specifically, the code is:
var iss = parent.left.location.toString();
var isInd = iss.indexOf("indexframe");
I get a "permission denied" error. I believe the problem is related to cross-domain communications, which I'm not sure I fully understand. The whole package runs locally using local HTML and javascript files. I'm not trying to have a frame in one domain control a frame in another domain. Or maybe I'm way off base in assuming this is the problem.
Could someone help me to understand what I need to do to work around this issue?
If the iframe and the parent document are in the same domain then you should not get that error. It suggests to me that the documents are in different domains.
If the iframe is in www.mydomain.com and the document is in help.mydomain.com YOU WILL GET AN ERROR! The pages must think they are in the exact same domain.
In both documents you could add javascript the set the domain:
document.domain = "mydomain.com";
Setting document.domain on both pages lets them drop into the shared host domain, which allows you to communicate across the frames. Of course, if the pages are in different host domains then this won't work and JavaScript will throw the error.
Typically when accessing the content of another iframe, I use something like this:
var f = document.getElementById('IdOfIFrame'),
    d = f.contentDocument || f.contentWindow;
alert(d.location);
If you are indeed accessing 2 domains from your site, and you own both of them, you can create an xml file that specifies which domains should be allowed to share. See the spec document. This opt-in cross-site access is supported by more than just Adobe (MS Silverlight for one). Here is Silverlight's support spec.
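For reference, a minimal policy file of that kind might look like the sketch below (the domain is an assumption, and note that such policy files are honored by plugin runtimes like Flash and Silverlight, not by plain JavaScript):
<?xml version="1.0"?>
<cross-domain-policy>
    <allow-access-from domain="*.mydomain.com" />
</cross-domain-policy>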

Take Screenshot of Browser via JavaScript (or something else)

For support reasons I want a user to be able to take a screenshot of the current browser window as easily as possible and send it over to the server.
Any (crazy) ideas?
That would appear to be a pretty big security hole in JavaScript if you could do this. Imagine a malicious user installing that code on your site with a XSS attack and then screenshotting all of your daily work. Imagine that happening with your online banking...
However, it is possible to do this sort of thing outside of JavaScript. I developed a Swing application that used screen capture code like this which did a great job of sending an email to the helpdesk with an attached screenshot whenever the user encountered a RuntimeException.
I suppose you could experiment with a signed Java applet (shock! horror! noooooo!) that hung around in the corner. If executed with the appropriate security privileges given at installation it might be coerced into executing that kind of screenshot code.
For convenience, here is the code from the site I linked to:
import java.awt.Dimension;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.io.File;
...
public void captureScreen(String fileName) throws Exception {
    Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
    Rectangle screenRectangle = new Rectangle(screenSize);
    Robot robot = new Robot();
    BufferedImage image = robot.createScreenCapture(screenRectangle);
    ImageIO.write(image, "png", new File(fileName));
}
...
Please see the answer shared here for a relatively successful implementation of this:
https://stackoverflow.com/a/6678156/291640
Utilizing:
https://github.com/niklasvh/html2canvas
You could try to render the whole page in canvas and save this image back to the server. Have fun :)
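A minimal sketch with html2canvas (assuming a recent promise-based build of the library; the upload endpoint is hypothetical):
html2canvas(document.body).then(function (canvas) {
    // Serialize the rendered page and post it to the server.
    var dataUrl = canvas.toDataURL('image/png');
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/support/screenshot', true); // hypothetical endpoint
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.send('image=' + encodeURIComponent(dataUrl));
});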
A webpage can't do this (or at least, I would be very surprised if it could, in any browser) but a Firefox extension can. See https://developer.mozilla.org/en/Drawing_Graphics_with_Canvas#Rendering_Web_Content_Into_A_Canvas -- when that page says "Chrome privileges" that means an extension can do it, but a web page can't.
Seems to me that support needs (at least) the answers for two questions:
What does the screen look like? and
Why does it look that way?
A screenshot -- a visual -- is very necessary and answers the first question, but it can't answer the second.
As a first attempt, I'd try to send the entire page up to support. The support tech could display that page in his browser (answers the first question); and could also see the current state of the customer's html (helps to answer the second question).
I'd try to send as much of the page as is available to the client JS by way of AJAX or as the payload of a form. I'd also send info not on the page: anything that affects the state of the page, like cookies or session IDs or whatever.
The customer might have a submit-like button to start the process.
I think that would work. Let's see: it needs some CGI somewhere on the server that catches the incoming user page and makes it available to support, maybe by writing a disk file. Then the support person can load (or have loaded automatically) that same page. All the other info (cookies and so on) can be put into the page that support sees.
PLUS: the client JS that handles the submit-button onclick( ) could also include any useful JS variable values!
Hey, this can work! I'm getting psyched :-)
HTH
-- pete
I've seen people do this with two approaches:
setup a separate server for screenshotting and run a bunch of Firefox instances on there; check out these two gems if you're doing it in Ruby: selenium-webdriver and headless
use a hosted solution like http://url2png.com (way easier)
You can also do this with the Fireshot plugin. I use the following code (that I extracted from the API code so I don't need to include the API JS) to make a direct call to the Fireshot object:
var element = document.createElement("FireShotDataElement");
element.setAttribute("Entire", true);
element.setAttribute("Action", 1);
element.setAttribute("Key", "");
element.setAttribute("BASE64Content", "");
element.setAttribute("Data", "C:/Users/jagilber/Downloads/whatev.jpg");
if (typeof(CapturedFrameId) != "undefined")
    element.setAttribute("CapturedFrameId", CapturedFrameId);
document.documentElement.appendChild(element);
var evt = document.createEvent("Events");
evt.initEvent("capturePageEvt", true, false);
element.dispatchEvent(evt);
Note: I don't know if this functionality is only available for the paid version or not.
Perhaps http://html2canvas.hertzen.com/ could be used. Then you can capture the display and then process it.
You might try PhantomJS, a headless browsing toolkit.
http://phantomjs.org/
The following JavaScript example demonstrates basic screenshot functionality:
var page = require('webpage').create();
page.settings.userAgent = 'UltimateBrowser/100';
page.viewportSize = { width: 1200, height: 1200 };
page.clipRect = { top: 0, left: 0, width: 1200, height: 1200 };
page.open('https://google.com/', function () {
    page.render('output.png');
    phantom.exit();
});
I understand this post is 5 years old, but for the sake of future visits I'll add my own solution here which I think solves the original post's question without any third-party libraries apart from jQuery.
var pageClone = $('html').clone();
// Make sure that CSS and images load correctly when opening this clone
pageClone.find('head').append("<base href='" + location.href + "' />");
// OPTIONAL: Remove potentially interfering scripts so the page is totally static
pageClone.find('script').remove();
var htmlString = pageClone.html();
You could remove other parts of the DOM you think are unnecessary, such as the support form if it is in a modal window. Or you could choose not to remove scripts if you prefer to maintain some interaction with dynamic controls.
Send that string to the server, either in a hidden field or by AJAX, and then on the server side just attach the whole lot as an HTML file to the support email.
The benefits of this are that you'll get not just a screenshot but the entire scrollable page in its current form, plus you can even inspect and debug the DOM.
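For instance, the AJAX hand-off could be as small as this sketch (the endpoint is an assumption; jQuery is already in use here, so $.post keeps it short):
$.post('/support/snapshot', { html: htmlString }, function () {
    alert('Page snapshot sent to support.');
});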
Print Screen? Old school and a couple of keypresses, but it works!
This may not work for you, but on IE you can use the snapsie plugin. It doesn't seem to be in development anymore, but the last release is available from the linked site.
I think you need an ActiveX control; I can't imagine doing it without one. You can have the user install it first; after the installation, the ActiveX control should work on the client side and you can capture the screen.
We are temporarily collecting Ajax states, data in form fields and session information. Then we re-render it at the support desk. Since we test and integrate for all browsers, there are hardly any support cases for display reasons.
Have a look at the red button at the bottom on holidaycheck
Alternatively there is html2canvas. But it is only applicable to newer browsers, and I've never tried it.
In JavaScript? No. I do work for a security company (sort of NetNanny type stuff) and the only effective way we've found to do screen captures of the user is with a hidden application.

Accessing and modifying tabs opened using window.open in Google Chrome

I used to be able to do this to create an exported HTML page containing some data. But the code is not working with the latest version of Google Chrome (It works all right with Chrome 5.0.307.11 beta and all other major browsers).
function createExport(text) {
    var target = window.open();
    target.title = 'Memonaut - Exported View';
    target.document.open();
    target.document.write(text);
    target.document.close();
}
Chrome now complains that the domains don't match and disallows the JavaScript calls as unsafe. How can I access and modify the document of a newly opened browser-tab in such a scenario?
I also ran into this problem with a local page loaded via the file:// protocol (in Chromium 5.0.342.9 (Developer Build 43360) under Linux). The exact error message is:
Unsafe JavaScript attempt to access frame with URL about:blank from frame with URL file:///home/foo/bar/index.htm. Domains, protocols and ports must match.
Apparently the protocols don't match, but the good news is: when this page is on a web server, Chromium also opens the new window as "about:blank", but it doesn't complain any longer. It also works when using a local web server accessed via http://localhost.
EDIT: there is a bug filed upstream about this. According to this comment, it is fixed and it will be rolled into trunk shortly.
UPDATE: this bug is now fixed, the following test case works properly:
var target = window.open();
target.title = 'Memonaut - Exported View';
target.document.open();
target.document.write("test");
target.document.close();
Here is the explanation, I think:
http://groups.google.com/group/chromium-dev/browse_thread/thread/9844b1823037d297?pli=1
Are you accessing any data from another domain? Not sure, but that might be causing this problem.
One alternative would be a data: protocol URL.
https://developer.mozilla.org/en/data_URIs
http://msdn.microsoft.com/en-us/library/cc848897%28VS.85%29.aspx
var durl = "data:text/html," + encodeURIComponent(text);
var target = window.open(durl);
Supported in all modern browsers except IE7 and below.
You can also try closing the tab/window itself by using this code.
Originally I made this small Greasemonkey script some time ago in order to close bad popups and ad windows; it worked fairly well (not too brilliant though)...
//window.addEventListener("load", function () {
window.addEventListener("onbeforeunload", function () {
try {
// clear inner html content to prevent malicious JS overrides.
document.getElementsByTagName("html")[0].innerHTML = "";
window.open("javascript:window.close();", "_self", "");
window.open("javascript:window.close();", "_self", "");
}
catch (e) {}
}(), false);
