Get Checksum of file from webRequest - javascript

I am writing a Google Chrome extension that contains a whitelist of SHA256 checksums for JavaScript files. Only files on this whitelist are supposed to execute; all others should be blocked.
Currently, I use this inefficient but working code to compute a checksum and block the request:
chrome.webRequest.onHeadersReceived.addListener(
  function(details) {
    if (details.type == "script") {
      // getURL() and sha256() are helpers defined elsewhere in the extension;
      // getURL() synchronously re-downloads the script body so it can be hashed
      return getURL(details.url, false, function(data) {
        var content = sha256(data);
        if (hashList.indexOf(content) == -1) {
          console.error(details.url + " not in whitelist");
          return {cancel: true};
        }
      });
    }
  }, {
    urls: ["<all_urls>"],
    types: ["main_frame", "script"]
  }, ["blocking", "responseHeaders"]
);
The problems with this code:
It is quite inefficient, as it requires downloading every file twice.
It has to run synchronously to stop the files before they reach the client, which makes the delay even bigger.
I want to know whether there are ways to circumvent these issues by accessing the source file without downloading it twice (I didn't find anything in the documentation), or at least to improve the performance by other means.

Unfortunately, it is not possible to get the request body using the webRequest API.
This is partly by design, since buffering the content of every request could lead to performance problems. See this discussion for more details.
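If the second download is unavoidable, the hashing step can at least use the browser's built-in Web Crypto API instead of a script-level sha256() implementation. A minimal sketch, assuming a secure context such as an extension page (where crypto.subtle is available):
// Sketch: re-fetch a script and hash it with the built-in Web Crypto API
function sha256OfUrl(url) {
  return fetch(url)
    .then(function(response) { return response.arrayBuffer(); })
    .then(function(buffer) { return crypto.subtle.digest("SHA-256", buffer); })
    .then(function(digest) {
      // convert the ArrayBuffer digest to a lowercase hex string
      return Array.prototype.map.call(new Uint8Array(digest), function(b) {
        return ("0" + b.toString(16)).slice(-2);
      }).join("");
    });
}
This does not remove the double download, but it moves the expensive hashing into native code.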

Related

Intercept and replace images in web app with Chrome extension

I am attempting to write a Chrome extension (for personal use) to swap/replace images loaded by a webpage with alternate images. I had this working for some time using chrome.webRequest, but am now attempting to bring it up to speed with Manifest V3.
My general solution is to host my replacement images on my own server, along with a script that returns a JSON list of those images. I fetch that list and, for each image, create a dynamic redirect rule with chrome.declarativeNetRequest.updateDynamicRules.
This all works beautifully if I request an image to be replaced in a main frame. I can see the successful match with an onRuleMatchedDebug listener, and (of course) the path is dutifully redirected.
However, when I load the web app that in turn loads the image (with javascript, presumably with xmlhttprequest?), the redirect rule does not trigger. The initiator (a javascript source file) is on the same domain and similar path to the images being replaced.
//manifest.json
{
  "name": "Image replace",
  "description": "Replace images in web app",
  "version": "2.0",
  "manifest_version": 3,
  "background": {"service_worker": "background.js"},
  "permissions": [
    "declarativeNetRequestWithHostAccess"
    // "declarativeNetRequestFeedback" // Not necessary once tested
  ],
  "host_permissions": [
    // "https://domain1.com/outerframe/*", // Not necessary
    "https://domain2.com/innerframe/*",
    "https://domain3.com/*",
    "https://myexample.com/*"
  ]
}
// background.js
//chrome.declarativeNetRequest.onRuleMatchedDebug.addListener((info) => console.log(info)); // Not necessary once tested
var rules = [];
var idx = 1;
fetch("https://myexample.com/list") // returns json list like: ["subdir1\/image1.png", "subdir1\/image2.png", "subdir2\/image1.png"]
  .then((response) => response.json())
  .then((data) => {
    console.log(data);
    for (const path of data) {
      var src = "https://domain2.com/innerframe/v*/files/" + path; // wildcards a version number
      var dst = "https://myexample.com/files/" + path;
      rules.push({
        "id": idx++,
        "action": {
          "type": "redirect",
          "redirect": {
            "url": dst
          }
        },
        "condition": {
          "urlFilter": src,
          // In the end I only needed main_frame and image, not xmlhttprequest
          "resourceTypes": ["main_frame", "image"]
        }
      });
    }
    chrome.declarativeNetRequest.updateDynamicRules({"addRules": rules, "removeRuleIds": rules.map(r => r.id)});
  });
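For what it's worth, the installed rules can be read back after the update to confirm they registered; a small sketch (a debugging aid, not part of the original code):
// Sketch: list the dynamic rules that are actually installed
chrome.declarativeNetRequest.getDynamicRules((installed) => {
  console.log("dynamic rules installed:", installed.map((r) => r.id));
});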
Again, this DOES all work IF I load a source image directly in Chrome, but it fails when the image is loaded by the JavaScript app.
I also attempted to test the match by specifying the proper initiator with testMatchOutcome, but my browser seems to claim this API does not exist. I'm not at all sure what could be wrong here.
// snippet attempted after the above updateDynamicRules call
chrome.declarativeNetRequest.testMatchOutcome({
  "initiator": "https://domain2.com/innerframe/files/script.js",
  "type": "xmlhttprequest",
  "url": "https://domain2.com/innerframe/v001/files/subdir/image1.png"
}, (outcome) => console.log(outcome));
I would expect a redirect to "https://myexample.com/files/subdir/image1.png"
Instead, I get this error:
Uncaught (in promise) TypeError: chrome.declarativeNetRequest.testMatchOutcome is not a function
The documentation at https://developer.chrome.com/docs/extensions/reference/declarativeNetRequest/#method-testMatchOutcome says it is supported in Chrome 103+. I'm running Chrome 108.0.5359.72.
Thanks!
Edit: Example code updated to reflect my answer below.
I've managed to work out why direct requests were redirected while script-loaded ones were not. My problem was with the initiator and host permissions. I had been relying on the Chrome developer tools to provide the initiator, which in the above example originated with domain2.com. However, the actual host permission I needed was for a third domain (call it domain3.com), which turned out to be the source of the content that loaded scripts from domain2.com.
I discovered this when I recalled that host permissions allow "<all_urls>", which is not a good idea long term, but it did allow the redirects to complete. From there, my onRuleMatchedDebug listener could fire and log the characteristics of the redirect to the console, which showed me the proper initiator I was missing.
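A minimal version of such a listener might look like the sketch below (onRuleMatchedDebug requires the "declarativeNetRequestFeedback" permission and only fires for unpacked extensions):
// Sketch: log each matched rule together with the request URL and its initiator
chrome.declarativeNetRequest.onRuleMatchedDebug.addListener((info) => {
  console.log("rule", info.rule.ruleId, "matched", info.request.url,
              "initiator:", info.request.initiator);
});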
Having a concise view of the redirects I need, I can now truncate some of these options to only the ones actually needed (edited in original question).
Subsequent to that, I thought to look back at the HTTP requests in the developer tools and inspect the Referer header, which also had what I needed.
So, silly oversights aside, I would like to leave this question open a little while longer in case anyone has an idea why chrome.declarativeNetRequest.testMatchOutcome seems unavailable in Chrome 108.0.5359.72 even though it is documented for 103+. I'd chalk it up to the documentation being wrong, but it seems this function must have shipped at some point and somehow been erroneously removed? Barring any insights, I might just submit it as a bug.

Chrome ignores certificate pinning from extension

I am creating a secure Chrome extension that is supposed to safely connect to an external https URL that I do not own and that doesn't use certificate pinning itself. Using HPKP, I want to pin the certificate in the browser extension to verify that the host is legitimate.
I did some research, where it is claimed that this is not possible apart from launching Chrome via the command line. It was hard to find much information about HPKP for Chrome (extensions).
Using the webRequest and webRequestBlocking permissions, I tried to intercept the response headers. The theory is that if I can add the HPKP header to the response headers, I can simulate the browser filtering future traffic based on this header. I do this with the code below.
chrome.webRequest.onHeadersReceived.addListener(function(details) {
  // 86400 = 1 day, to prevent lockout on a missing update
  var addHeader = [
    {name: "Public-Key-Pins", value: 'max-age=30; pin-sha256="+sCGKoPvhK0bw4OcPAnWL7QYsM5wMe/mn1t8VYqY9mM="; pin-sha256="bumevWtKeyHRNs7ZXbyqVVVcbifEL8iDjAzPyQ60tBE="'},
    {name: "Strict-Transport-Security", value: "max-age=86400"}
  ];
  // Overwrite any existing headers of the same name
  // (note: the loop needs .length; comparing against the array itself never iterates)
  for (var i = 0; i < details.responseHeaders.length; i++) {
    var existHeader = details.responseHeaders[i];
    for (var o = 0; o < addHeader.length; o++) {
      var writeHeader = addHeader[o];
      if (existHeader.name.toLowerCase() == writeHeader.name.toLowerCase()) {
        details.responseHeaders[i].value = writeHeader.value;
        addHeader[o].written = true;
      }
    }
  }
  // Append any headers that were not already present
  for (var i = 0; i < addHeader.length; i++) {
    var writeHeader = addHeader[i];
    if (!writeHeader.hasOwnProperty("written")) {
      details.responseHeaders.push({name: writeHeader.name, value: writeHeader.value});
    }
  }
  return {responseHeaders: details.responseHeaders};
}, {urls: ["<all_urls>"]}, ["blocking", "responseHeaders"]);

chrome.webRequest.onResponseStarted.addListener(function(details) {
  console.log(details);
}, {urls: ["<all_urls>"]}, ["responseHeaders"]);
The SHA256 hashes are random certificates found online, so when I visit a website I expect the browser to block the request as long as it is an https URL (the RFC ignores http ones). Sadly, nothing happens at all, even though the console shows that the headers did indeed get attached.
My question is: did I do something wrong somewhere else, or is there another way to have my browser extension verify that it is talking to the correct host?

online offline check using javascript [duplicate]

This question already has answers here:
Detect the Internet connection is offline?
(22 answers)
Closed 8 years ago.
How do you check if there is an internet connection using jQuery? That way I could have some conditionals saying "use the Google-cached version of jQuery in production; use either that or a local version during development, depending on the internet connection".
The best option for your specific case might be:
Right before your closing </body> tag:
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<script>window.jQuery || document.write('<script src="js/vendor/jquery-1.10.2.min.js"><\/script>')</script>
This is probably the easiest way given that your issue is centered around jQuery.
If you wanted a more robust solution you could try:
var online = navigator.onLine;
Read more about the W3C's spec on offline web apps; however, be aware that this works best in modern browsers. With older browsers it may not work as expected, or at all.
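The same spec also defines online and offline events, so a page can react to connectivity changes instead of polling navigator.onLine; for example:
// React to connectivity changes (the same caveats apply as for navigator.onLine)
window.addEventListener("online", function() { console.log("connection restored"); });
window.addEventListener("offline", function() { console.log("connection lost"); });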
Alternatively, an XHR request to your own server isn't a bad way to test your connectivity. One of the other answers states that there are too many points of failure for an XHR, but if your XHR is flawed when establishing its connection, it will also be flawed during routine use anyhow. If your site is unreachable for any reason, then your other services running on the same servers will likely be unreachable too. That decision is up to you.
I wouldn't recommend making an XHR request to someone else's service, even google.com for that matter. Make the request to your server, or not at all.
What does it mean to be "online"?
There seems to be some confusion around what being "online" means. Consider that the internet is a bunch of networks, yet sometimes you're on a VPN without access to the internet "at large" or the world wide web. Often companies have their own networks with limited connectivity to other external networks, so you could still be considered "online". Being online only entails that you are connected to a network, and says nothing about the availability or reachability of the services you are trying to reach.
To determine if a host is reachable from your network, you could do this:
function hostReachable() {
  // Handle IE and more capable browsers
  var xhr = new ( window.ActiveXObject || XMLHttpRequest )( "Microsoft.XMLHTTP" );
  // Open a new synchronous HEAD request to the root hostname, with a random param to bust the cache
  xhr.open( "HEAD", "//" + window.location.hostname + "/?rand=" + Math.floor((1 + Math.random()) * 0x10000), false );
  // Issue the request and handle the response
  try {
    xhr.send();
    return ( xhr.status >= 200 && (xhr.status < 300 || xhr.status === 304) );
  } catch (error) {
    return false;
  }
}
You can also find the Gist for that here: https://gist.github.com/jpsilvashy/5725579
Details on local implementation
Some people have commented, "I'm always being returned false." That's probably because you're testing it on your local server. Whatever server you're making the request to needs to be able to respond to the HEAD request; that can of course be changed to a GET if you want.
OK, maybe a bit late in the game, but what about checking with an online image?
I mean, the OP needs to know whether to grab the Google CDN copy or the local jQuery copy, but that doesn't mean the browser can't read JavaScript no matter what, right?
<script>
function doConnectFunction() {
  // Grab the Google CDN copy
}
function doNotConnectFunction() {
  // Grab the local jQuery copy
}

var i = new Image();
i.onload = doConnectFunction;
i.onerror = doNotConnectFunction;
// CHANGE THE IMAGE URL TO ANY IMAGE YOU KNOW IS LIVE
i.src = 'http://gfx2.hotmail.com/mail/uxp/w4/m4/pr014/h/s7.png?d=' + escape(Date());
// escape(Date()) is necessary to keep the image from being served from cache
</script>
Just my 2 cents
Five-years-later version:
Today there are JS libraries for this, if you don't want to get into the nitty-gritty of the different methods described on this page.
One of these is https://github.com/hubspot/offline. It checks the connectivity of a predefined URI, by default your favicon. It automatically detects when the user's connectivity has been re-established and provides neat events like up and down, which you can bind to in order to update your UI.
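Binding to those events might look like this (a sketch based on the library's documented API):
// Sketch: update the UI when Offline.js detects a connectivity change
Offline.on("down", function() { console.log("connection lost"); });
Offline.on("up", function() { console.log("connection restored"); });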
You can mimic the Ping command.
Use Ajax to request a timestamp from your own server, define a timer using setTimeout with 5 seconds, and if there's no response, try again.
If there's no response after 4 attempts, you can assume that the internet is down.
You can then run this routine at regular intervals, like every 1 or 3 minutes.
That seems like a good, clean solution to me.
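A rough sketch of that routine in jQuery (the /timestamp endpoint is hypothetical; any cheap URL on your own server will do):
// Sketch: ping-style connectivity check with 4 retries and a 5-second timeout
function ping(attempt) {
  attempt = attempt || 1;
  $.ajax({
    url: "/timestamp",  // hypothetical endpoint on your own server
    cache: false,       // bust the cache on every attempt
    timeout: 5000,      // the 5-second timer described above
    success: function() { console.log("online"); },
    error: function() {
      if (attempt < 4) {
        ping(attempt + 1); // no response: try again
      } else {
        console.log("offline"); // 4 failed attempts: assume the internet is down
      }
    }
  });
}
setInterval(ping, 3 * 60 * 1000); // re-check at a regular interval (here, 3 minutes)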
You can try sending XHR requests a few times; if you get errors, it means there's a problem with the internet connection.
I wrote a jQuery plugin for doing this. By default it checks the current URL (because that's already been loaded once from the Web), or you can specify a URL to use as an argument. Always making the request to Google isn't the best idea because it's blocked in different countries at different times. Also, you might be at the mercy of what the connection across a particular ocean/weather front/political climate might be like that day.
http://tomriley.net/blog/archives/111
Here is a solution that works for me to check whether an internet connection exists:
$.ajax({
  url: "http://www.google.com",
  context: document.body,
  error: function(jqXHR, exception) {
    alert('Offline')
  },
  success: function() {
    alert('Online')
  }
})
Sending XHR requests is bad because they could fail if that particular server is down. Instead, use Google's API library to load their cached version(s) of jQuery.
You can use Google's loader to perform a callback after loading jQuery, and check there whether jQuery was loaded successfully. Something like the code below should work:
<script type="text/javascript">
google.load("jquery");

// Call this function when the page has been loaded
function test_connection() {
  if (window.jQuery) { // test the property to avoid a ReferenceError when jQuery is missing
    // jQuery WAS loaded.
  } else {
    // jQuery failed to load. Grab the local copy.
  }
}
google.setOnLoadCallback(test_connection);
</script>
The google API documentation can be found here.
A much simpler solution:
<script language="javascript" src="http://maps.google.com/maps/api/js?v=3.2&sensor=false"></script>
and later in the code:
var online;
// check whether this function works (online only)
try {
  var x = google.maps.MapTypeId.TERRAIN;
  online = true;
} catch (e) {
  online = false;
}
console.log(online);
When you are not online, the Google script will not have loaded, so the reference fails and an exception is thrown.

Executing a script via AJAX on Firefox OS device

My question regards the Apps CSP https://developer.mozilla.org/en-US/Apps/CSP
Here it says that remote scripts, inline scripts, javascript: URIs, and other such sources won't work in a Firefox OS app.
So I tried to download the scripts that are necessary for my app (Flurry and an ad service), but neither would work on the device. I made the call with AJAX so I would avoid the remote and inline scripting that both scripts involve. It works perfectly in the simulator, but on the device the ads never show and the Flurry session never starts.
Here is the part of my code where I make the AJAX call for Flurry:
$.ajax({
  url: 'https://cdn.flurry.com/js/flurry.js',
  dataType: "script",
  xhrFields: {
    mozSystem: true
  },
  success: function(msg) {
    console && console.log("Flurry script: after the AJAX download " + msg);
    flurryLibrary = true;
    FlurryAgent.startSession("7ZFX9Z4CVT66KJBVP7CF");
  },
  error: function(object, status, errortxt) {
    console && console.log("The script wasn't downloaded as text. The error: " + errortxt);
    flurryLibrary = false;
  },
  always: function(object, status, errortxt) {
    console && console.log("The script may or may not be downloaded or executed. The error could be: " + errortxt);
  }
});
In my app I use the systemXHR permission and make calls to other websites using this line:
request = new XMLHttpRequest({ mozSystem: true });
which is the same as using xhrFields: {mozSystem: true} in the AJAX call.
I believe it's not a cross-domain problem, because elsewhere in my app I request XML files that are not on my domain and those calls return successfully.
So, my question is: can a Firefox OS app execute scripts that are downloaded via AJAX? Is there a way to work around this problem?
Thank you for your time.
PS: I forgot to add that my app is privileged, just in case you ask.
I believe that is a security feature and the short answer to your question would be NO. To quote the CSP doc that you linked to yourself:
You cannot point a <script> tag at a remote JavaScript file. This means that all JS files that you reference must be included in your app's package.
If you load a JS file using AJAX from a remote server, that JS is not included in your app package, so you should be careful to obey the CSP restrictions. It is possible to get many things working in the simulator, or even on the phone while developing, without fully complying with the CSP, but that does not mean it is OK. When you later submit your app to any credible marketplace (such as the Firefox Marketplace), it will be reviewed carefully to make sure it does not violate the CSP restrictions. As a general rule of thumb, any attempt at dynamically evaluating JS code is a security risk and most likely banned by the CSP regulations.
First, I'll point out that your two examples are not equivalent.
$.ajax({
  xhrFields: {
    mozSystem: true
  }
});
Is the same as
request = new XMLHttpRequest();
request.mozSystem = true;
which is not the same as
request = new XMLHttpRequest({ mozSystem: true });
Instead, we can follow the advice in the linked bug report and run the following at application load time:
$.ajaxSetup({
  xhr: function() {
    return new window.XMLHttpRequest({
      mozSystem: true
    });
  }
});
This alone should fix your problem. If it doesn't, the next workaround is to fetch the script resource as plain text and then load that text content as a script.
Inline scripts and data: URLs are off-limits for privileged Firefox OS apps, but we might still accomplish this goal through a blob: URL:
window.URL = window.URL || window.webkitURL;
var request = new XMLHttpRequest({ mozSystem: true });
request.open("GET", "https://cdn.flurry.com/js/flurry.js");
// when the Ajax request resolves, load its content into a <script> tag
request.addEventListener("load", function() {
  // make a new blob whose content is the script text
  // (responseText, not textContent, holds an XHR's response body)
  var blob = new Blob([request.responseText], {type: 'text/javascript'});
  var script = document.createElement('script');
  script.src = window.URL.createObjectURL(blob);
  // after the script finishes loading, do something else
  script.addEventListener("load", function() {
    flurryLibrary = true;
    FlurryAgent.startSession("7ZFX9Z4CVT66KJBVP7CF");
  });
  document.body.appendChild(script);
});
// don't forget to actually issue the request
request.send();
However, if the script itself does something not allowed by the CSP, then you're definitely out of luck.
You must use the mozSystem and mozAnon properties, for example:
var xMLHttpRequest = new XMLHttpRequest({
  mozAnon: true,
  mozSystem: true
});
It's a shame this is a problem. I was hoping to get loadScript working, as Firefox OS is an environment and, in my app, all the application code is HTML5 and local. The current rule means all scripts need to be loaded into memory in one shot, unless you URL-load a full page, so you cannot have a persistent wrapper around the site and AJAX in the pages with associated scripts when needed. You would have thought Firefox would at least enable local lazy loading of scripts. It works in Chrome, but not in Firefox.

Search for injected script code on the Windows Server for my ASP.NET site

My site was probably hacked. I am finding script.js from bigcatsolutions.com in my page. It triggers a popup for an affiliate program. The script isn't on the page by default, and I want to know how I can find where it was injected. The script sometimes injects other ad sites.
The injected script code, as seen in Chrome's developer tools:
function addEvent(obj, eventName, func) {
  if (obj.attachEvent) {
    obj.attachEvent("on" + eventName, func);
  } else if (obj.addEventListener) {
    obj.addEventListener(eventName, func, true);
  } else {
    obj["on" + eventName] = func;
  }
}

addEvent(window, "load", function (e) {
  addEvent(document.body, "click", function (e) {
    if (document.cookie.indexOf("booknow") == -1) {
      params = 'width=800';
      params += ', height=600';
      params += ', top=50, left=50, scrollbars=yes';
      var w = window.open("http://booknowhalong.com/discount-news", 'window', params).blur();
      document.cookie = "booknow";
      window.focus();
    }
  });
})
My site was moved from my hosting company to an Amazon EC2 Windows 2013 server and it still has the issue, which means the code still resides somewhere on the server. My site was built using ASP.NET / C#.
Things I did:
tried to search the original aspx and aspx.cs code files
Have you checked the IIS logs to see if they are hitting a specific page and injecting it there?
Do you load any data from a database? You could check the tables and see if anything out of the ordinary appears there.
It is unlikely that the .aspx pages have actually been physically modified, and even more unlikely that the DLLs have been, as .aspx.cs files are compiled into your BIN folder as DLLs. The more likely scenario is that you have an insecure page that a malicious site is injecting its script into. The other possible attack vector is that malicious code was stored via SQL injection and is being loaded each time.
After deeper searching (I missed it in the first run), I found that the script was injected into the ASP.NET master page.
I ran a search for a specific string across all the files, and that's how I found it. It seems that the server itself was breached and the hacker put the code into several websites.
So for those of you who have this type of problem, I recommend running a text search and trying to find the URL that is tied to the running script.
Hope that helps, and thanks for your time.
