I am building a Chrome extension and am reading conflicting information about the webRequest API, specifically whether it will be deprecated or not. My extension uses chrome.webRequest.onBeforeSendHeaders.addListener() to read outgoing request headers before a request is sent and to modify the request based on some conditions, along with caching some of the request header values.
So my question is: if I submit a Chrome extension using this API and the method above, will I be allowed to publish it and have people freely download and use it (possibly even monetize it), and will the webRequest API be deprecated soon? Please only answer if you are 100 percent sure; I have read many contradictory Stack Overflow answers to similar questions.
The documentation seems very vague to me, and I specifically remember Google adding an orange-coloured banner on https://developer.chrome.com/docs/extensions/reference/webRequest/ warning that this API would be deprecated soon, which is no longer there (I could be misremembering this).
Edit: I know there is declarativeNetRequest but it is not powerful enough for what I require.
No, it's not deprecated.
All ManifestV3 extensions can use the webRequest API in a read-only way, i.e. you should remove 'blocking' from addListener and remove webRequestBlocking from manifest.json.
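For example, a read-only listener under ManifestV3 might look like this (a minimal sketch; the URL filter and the logging are purely illustrative, and the manifest is assumed to declare the "webRequest" permission plus host permissions):
chrome.webRequest.onBeforeSendHeaders.addListener(
  (details) => {
    // Without 'blocking', the return value is ignored: you can read and cache
    // header values here, but you cannot alter the request.
    for (const header of details.requestHeaders) {
      console.log(details.url, header.name, header.value);
    }
  },
  { urls: ["<all_urls>"] },
  ["requestHeaders"] // add "extraHeaders" if you need headers Chrome hides by default
);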
If you want to change the headers, you'll have to keep using ManifestV2 (in general) or try preparing a set of rules beforehand in declarativeNetRequest. In some cases you can add a rule dynamically inside a webRequest.onBeforeRequest listener so that the rule has a chance to apply to the incoming headers.
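A rough sketch of that second idea (assumptions: the rule id, header name, value, and URL filters are placeholders, and the manifest also declares the "declarativeNetRequest" permission):
chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    // Register a dynamic rule keyed to the URL we just observed; it may only take
    // effect for this or a subsequent matching request, so timing is not guaranteed.
    chrome.declarativeNetRequest.updateDynamicRules({
      removeRuleIds: [1],
      addRules: [{
        id: 1,
        priority: 1,
        action: {
          type: "modifyHeaders",
          requestHeaders: [{ header: "x-example", operation: "set", value: "some-value" }]
        },
        condition: { urlFilter: details.url, resourceTypes: ["xmlhttprequest"] }
      }]
    });
  },
  { urls: ["https://example.com/*"] }
);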
Force-installed (via policies) ManifestV3 extensions can still use the 'blocking' mode and webRequestBlocking permission.
I am currently trying to detect if a user has a certain Chrome extension installed. The Chrome extension is not my own and I do not have the source code to it. I have tried methods in numerous posts but they all fail. What I've tried and why it failed is detailed below.
This results in 'cannot read property connect of undefined' when executed:
var myPort=chrome.extension.connect('idldbjenlmipmpigmfamdlfifkkeaplc', some_object_to_send_on_connect);
I tried loading a resource of the extension as follows to test whether it's there, but going to this URL in the browser results in a 'your file was not found' Chrome error page (note that I found this path by going to C:\Users\\AppData\Local\Google\Chrome\User Data\Default\Extensions\idldbjenlmipmpigmfamdlfifkkeaplc\1.0.0.1_0\ on my local Windows machine):
chrome-extension://idldbjenlmipmpigmfamdlfifkkeaplc/1.0.0.1_0/icon_16.png
Using the chrome.management API, but this results in the console error 'cannot read property get of undefined' when executed:
chrome.management.get("idldbjenlmipmpigmfamdlfifkkeaplc", function(a){console.log(a);});
And most other answers I've come across seem to involve the extension being written by the same person who is trying to check for it.
Assuming you need it from a website
The connect/message method implies that the extension specifically listed your website in the list of origins it expects connections from. This is unlikely unless you wrote the extension yourself, as this list cannot be a wildcard matching every domain.
Referring to files within the extension from a web context will fail as if there were a network error, unless the extension declared them as web-accessible. This used to work before 2012, but Google closed that off as a fingerprinting method - now extensions have to explicitly list resources that can be accessed. The extension you specifically mention doesn't list any files as web-accessible, so this route is closed as well.
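For completeness, the probing technique looks roughly like this; it only works when the target extension actually lists the file under web_accessible_resources (the file name below is just taken from the question and is an assumption):
var probe = new Image();
probe.onload = function () { console.log('resource loaded, so the extension is installed'); };
probe.onerror = function () { console.log('resource blocked or missing - no conclusion'); };
probe.src = 'chrome-extension://idldbjenlmipmpigmfamdlfifkkeaplc/icon_16.png';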
chrome.management is an extension API; websites cannot use it at all.
Lastly, if an extension has a content script that somehow modifies the DOM of your webpage, you may detect those changes. But it's not very reliable, as content scripts can change their logic between versions. Again, in your specific case the extension listens to a DOM event but does not make it visible in any way that the event was received - so this route is closed too.
Note that, in general, you cannot determine that content script code runs alongside yours, as it runs in an isolated context.
All in all, there is no magic solution to that problem. The extension has to cooperate to be discoverable, and you cannot bypass that.
Assuming you need it from another extension
Origins whitelisted for the connect/message method default to all extensions; however, for this to work the target extension needs to listen to the onConnectExternal or onMessageExternal event, which is not common.
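A minimal sketch of that check (the message format is an assumption, and the target must have an onMessageExternal listener for this to ever answer):
chrome.runtime.sendMessage(
  'idldbjenlmipmpigmfamdlfifkkeaplc',   // the other extension's ID
  { ping: true },                        // arbitrary payload
  function (response) {
    if (chrome.runtime.lastError || response === undefined) {
      console.log('no answer: extension absent, or it does not listen to onMessageExternal');
    } else {
      console.log('got a reply, so the extension is installed', response);
    }
  }
);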
Web-accessible resources have the same restrictions for access from other extensions, so the situation is not better.
Observing a page for changes with your own content script is possible, but again there may be no observable ones and you cannot rely on those changes being always the same.
Similar to extension-webpage interaction, content scripts from different extensions run in isolated contexts, so it's not possible to directly "catch" code being run.
The chrome.management API from an extension is the only surefire way to detect that a third-party extension is installed, but note that it requires the "management" permission, with its scary warnings.
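With that permission in your own extension's manifest, the check itself is short (the ID below is the one from the question):
chrome.management.get('idldbjenlmipmpigmfamdlfifkkeaplc', function (info) {
  if (chrome.runtime.lastError) {
    console.log('not installed:', chrome.runtime.lastError.message);
  } else {
    console.log('installed:', info.name, 'enabled:', info.enabled);
  }
});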
The idea is quite simple in concept:
I would like to create a userscript that will let me press a button and save something on the page (most commonly, and most problematically, images).
Note: A userscript is a script that is injected client-side (by browser extensions such as Tampermonkey and Greasemonkey) and is used to add functionality to a site.
To do so I merely need to call the saveAs() function and pass it the data.
The question then becomes: how do I obtain the data?
Most approaches I've seen run into trouble when the resource is not from the same domain as the page, perhaps (I'm not sure exactly how this works).
Now, Tampermonkey (and Greasemonkey) have created a function to deal with this problem specifically - GM_xmlhttpRequest, which can circumvent the need for proper CORS headers.
This however creates another request to the server, for a file that has already been downloaded.
My question is: Is there a way to not have to send secondary requests to the server?
Here is a chronicle of my efforts:
From what research I've managed to do, you can create a canvas and draw the image in there. However, this "taints" the canvas, preventing you from calling the functions that extract its data (such as .toBlob() or .toDataURL()).
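To illustrate the dead end (a sketch; 'someImage' stands for any cross-origin image already on the page):
var img = document.getElementById('someImage');       // cross-origin image, loaded without CORS
var canvas = document.createElement('canvas');
canvas.width = img.naturalWidth;
canvas.height = img.naturalHeight;
canvas.getContext('2d').drawImage(img, 0, 0);          // drawing works, but taints the canvas
canvas.toBlob(function (blob) { /* never reached */ }); // throws a SecurityError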
CORS offers two mechanisms as far as I understand it: setting the proper HTTP headers, which requires control of the server, and a special attribute that can be put on HTML elements: crossorigin.
I tried adding this attribute post-load and it doesn't work; you still get a tainted canvas.
Tampermonkey offers several different options for when to run the script. So the next idea was to run when the DOM is loaded but the resources haven't yet been fetched. It seems the earliest this is possible is document-end (any earlier and the getElementById call returns null). However, this currently produces an error when the image on the page is loaded (before any of my other code runs):
Image from origin '...' has been blocked from loading by Cross-Origin Resource Sharing policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin '...' is therefore not allowed access.
There's also the --disable-web-security flag in Chrome, but I'd rather not go there.
No, there is no way to do it without a new request to the server.
When the first request is made without CORS approval, the image is marked as unsafe by the browser, which will then block a few features: canvas' toDataURL, getImageData and toBlob, or, in the case of audio files, AudioContext's createMediaElementSource and AnalyserNode's methods, and probably some others.
There is nothing you can do to circumvent this security, once it's marked as unsafe, it is unsafe.
You then have to make a new request to the server to get a new copy of the file, in a safe way this time.
Commonly, you would just set the crossOrigin attribute on the media element before making the request, once the server has been properly configured to answer such requests.
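In other words, something like this, assuming the server sends an Access-Control-Allow-Origin header for your page's origin (the URL is a placeholder):
var img = new Image();
img.crossOrigin = 'anonymous';        // must be set before src, i.e. before the request is made
img.onload = function () {
  var canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  canvas.getContext('2d').drawImage(img, 0, 0);
  canvas.toBlob(function (blob) { /* usable: the canvas is not tainted */ });
};
img.src = 'https://example.com/picture.png';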
Now in your case, it seems clear that you can't configure the servers your script will be used on.
But as you noticed, extensions such as Greasemonkey or Tampermonkey have access to more features than basic JavaScript run from a webpage. Among these features there is one that allows the browser to be less strict about such cross-origin requests, and this is what the GM_xmlhttpRequest method does.
But once again, even extensions don't have enough power to unmark media already flagged as unsafe.
You must perform a new request, using their less restricted mechanism.
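For the userscript case, that second request could look roughly like this (a sketch; it assumes Tampermonkey's GM_xmlhttpRequest with blob support and a matching @grant line, FileSaver's saveAs() being available, and imgElement standing for the image you want to save):
GM_xmlhttpRequest({
  method: 'GET',
  url: imgElement.src,          // re-request the already-displayed image
  responseType: 'blob',
  onload: function (response) {
    saveAs(response.response, 'image.png');   // hand the blob to FileSaver's saveAs()
  }
});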
I want to do a PUT request to a different domain, but the script fails only in IE.
I figured out what the problem was: in IE, under Internet Options > Security tab > Custom level > Miscellaneous, the 'Access data sources across domains' option was set to Disable. The only way I was able to get my PUT request to work was setting that option to Allow.
So my question: is there a way I can get this working without forcing end users to change that option?
There is XDomainRequest(), which can be used for cross-domain requests in IE, but this method doesn't support PUT.
IE9 and older do not support the PUT method in cross-domain requests, only GET and POST.
You could use a library like Xdomain or EasyXDM to get a CORS alternative using the postMessage hack.
I prefer Xdomain because it hijacks the native XMLHttpRequest and provides a "drop-in" solution. EasyXDM forces you to use its own API, which means more conditional coding overhead; however, it supports IE6/IE7.
The main takeaway? Don't stop supporting CORS! Just make IE behave itself and opt in to the future.
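For background, the postMessage hack those libraries build on looks roughly like this (a heavily simplified sketch; proxy.html is a hypothetical page you would host on the target domain, and https://api.example.com is a placeholder):
// On your page: load a hidden iframe served from the target domain and ask it to do the request.
var frame = document.createElement('iframe');
frame.style.display = 'none';
frame.src = 'https://api.example.com/proxy.html';
document.body.appendChild(frame);
frame.onload = function () {
  frame.contentWindow.postMessage(
    JSON.stringify({ method: 'PUT', path: '/resource', body: 'payload' }),
    'https://api.example.com');
};
window.addEventListener('message', function (event) {
  if (event.origin !== 'https://api.example.com') return;   // only trust the proxy
  console.log('response from proxy:', event.data);
});
// Inside proxy.html (same origin as the API), the counterpart performs a same-origin request:
// window.addEventListener('message', function (event) {
//   var req = JSON.parse(event.data);
//   var xhr = new XMLHttpRequest();
//   xhr.open(req.method, req.path);
//   xhr.onload = function () { event.source.postMessage(xhr.responseText, event.origin); };
//   xhr.send(req.body);
// });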
I see that enablePrivilege is deprecated in Firefox. I am trying to adapt my intranet code base to this.
The most critical place is assigning the 'view' of a 'tree' element. This requires elevated privs, though I really don't understand why. Is there another way to do this that does not require the elevated privileges? Will a way to do this be provided before enablePrivilege goes away?
The application is not an extension but a signed JAR file that runs as content.
Looking through bug 546848, it appears that Mozilla doesn't plan to allow websites with elevated privileges any more. This functionality introduces security risks that are simply not worth it (similarly to remote XUL in general). The proposed solution would be to use a Firefox extension to perform any special actions that might be needed. Ideally, you would move your entire web application UI into an extension and only leave the server as a backend. But I guess that this solution would require too much effort on your side. A simpler solution would be a single-purpose extension that receives a message from your website and sets the tree view.
Interaction between privileged and non-privileged pages describes how this communication could be implemented. Your website would set a property _myTreeView on the <tree> element and dispatch an event on it. The extension would receive the event, verify that event.target.ownerDocument.defaultView.location.host is your intranet website (important, allowing any website to trigger your extension would be a security hole) and then set event.target.view = event.target.wrappedJSObject._myTreeView. See XPCNativeWrapper documentation on why wrappedJSObject is necessary here.
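A rough sketch of that handoff, where the event name "SetTreeView", the host check, and the view variable are all placeholders:
// In the intranet page (unprivileged): stash the view on the element and notify the extension.
var tree = document.getElementById('myTree');
tree._myTreeView = myTreeView;
var evt = document.createEvent('Events');
evt.initEvent('SetTreeView', true, false);
tree.dispatchEvent(evt);
// In the extension overlay (privileged): the last argument 'true' accepts untrusted events.
// document.addEventListener('SetTreeView', function (event) {
//   var host = event.target.ownerDocument.defaultView.location.host;
//   if (host !== 'intranet.example.com') return;   // never let arbitrary sites trigger this
//   event.target.view = event.target.wrappedJSObject._myTreeView;
// }, false, true);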
I have a page where I don't want the outbound links to send a referrer so the destination site doesn't know where they came from.
I'm guessing this isn't possible, but I just want to make sure there isn't any hidden JavaScript magic that could do it and that would work with some (if not most) browsers.
Maybe some clever HTTP status code redirecting kung-fu?
Something like this would be perfect
link
The attribute you are looking for is rel="noreferrer": https://html.spec.whatwg.org/multipage/links.html#link-type-noreferrer
According to https://caniuse.com/rel-noreferrer, all the major browsers have supported it since at least 2015, though Opera Mini does not (and, of course, some users may be using older browser versions).
For anyone who's visiting in 2015 and beyond, there's now a proper solution gaining support.
The HTTP Referrer Policy spec lets you control referrer-sending for links and subresources (images, scripts, stylesheets, etc.) and, at the moment, it's supported on Firefox, Chrome, Opera, and Desktop Safari 11.1.
Edge, IE11, iOS Safari, and desktop versions of Safari prior to 11.1 support an older version of the spec with never, always, origin, and default as the options.
According to the spec, both old and new browsers can be supported by specifying multiple policy values; unrecognized ones will be ignored and the last recognized one will win:
<meta name="referrer" content="never">
<meta name="referrer" content="no-referrer">
Also, if you want to apply it to audio, img, link, script, or video tags which require a crossorigin attribute, prefer crossorigin="anonymous" where possible, so that only the absolute minimum (the Origin header) will be shared.
(You can't get rid of the Origin header while using CORS because the remote sites need to know what domain is making the request in order to allow or deny it.)
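For instance, when injecting such a subresource from script, the two hints can be combined (a sketch; the URL is a placeholder):
var img = document.createElement('img');
img.crossOrigin = 'anonymous';        // CORS request: only the Origin header identifies your site
img.referrerPolicy = 'no-referrer';   // and the Referer header is suppressed entirely
img.src = 'https://third-party.example/picture.png';
document.body.appendChild(img);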
HTML 5 includes rel="noreferrer", which is supported in all major browsers. So for these browsers, you can simply write:
<a href="http://example.com" rel="noreferrer">link</a>
There's also a shim available for other browsers: https://github.com/knu/noreferrer
Bigmack is on the right track, but a JavaScript location change still sends a referrer in Firefox. Using a meta refresh seems to solve the problem for me:
<a href='data:text/html;charset=utf-8, <html><meta http-equiv="refresh" content="0; url=http://google.ca"></html>'>Link</a>
I was trying to figure this out too.
The solution I thought of was to use a data url to hide the actual page I am coming from.
<a href='data:text/html;charset=utf-8, <html><script>window.location = "http://google.ca";</script></html>'>Link</a>
This link opens a page that only contains javascript to load a different page.
In my testing, no referrer is given to the final destination. I don't know what it would send as a referrer if it tried anyway - maybe the data URL, which wouldn't give away where you came from.
This works in Chrome, which is my only concern for my current problem, but for browsers that don't allow JavaScript in data-URL pages you could probably try a meta refresh instead.
In addition to the information already provided, there is lots more on the topic here: https://w3c.github.io/webappsec-referrer-policy/#referrer-policy-no-referrer
In particular, it allows you to send or not send referrer information depending on whether the request is same-origin or cross-origin, if you need different rules for each.
Which policy to use depends on your specific use case. For example, if you are pulling in images/CSS/JavaScript from third-party websites, you may not want to identify the URL you are doing this from and hence would use the no-referrer option. Whereas if you are linking out to other websites from your own website, you may want them to know that you are sending them traffic. Always think through the implications of this on both sides. If these two needs conflict, there are other options, such as adding UTM tracking parameters to the end of URLs, which may come in handy for some people. Full details here: https://www.contradodigital.com/2014/06/03/importance-utm-tracking-parameters-social-media/
I don't know if I'm missing something here, and I am very happy to be corrected, but wouldn't a URL shortening service meet your needs here?
Presumably the logs at the destination site would only show the domain of the shortening service, not the initial referring domain, so you would remain hidden.