Possible reasons for different SRI results in different browsers? - javascript

A client would like to use SRI on all CSS and JS assets on their website, but they ran into a very strange issue with Firefox. Their server is an apache2 instance, serving HTML content. CORS is enabled for the whole virtual host, for any (*) origin. There is no cache or CDN in place.
The two files in question are
company.min.css and
company.min.js
To generate the SRI hashes, initially SRI Hash Generator was used. The output from this has multiple algorithms and looks like this:
<script src="https://example.com/static/company.min.js" integrity="sha256-aKuSpMxn15zqbFa0u0FVA7mAFOSVwIwU4gA3U7AZf5Y= sha384-WDAg+qGBjbEyE52SdQ5UHdTObTY+jYTL63m9Oy2IJcGZR8AFn0t9JNN7qdut6DBk sha512-bxmUFj1FVOVV74+zIYau/HSUcLq9cuneBKoMJyW9dAz//yKFi8eRhyyIezF++2vbGO7cR6Pzm1l9rvnPcrvIrg==" crossorigin="anonymous"></script>
The tag for the CSS file is similar. These were inserted into the HTML and the site was tested in a few different browsers, with these results:
works in Chrome (/Canary), Opera, Edge and even IE
doesn't work in Firefox (/Nightly).
Firefox only dislikes the CSS, reporting that the SHA-512 hash does not match the resource (when several hashes are supplied, the browser validates against the strongest algorithm it supports). It processes the JS file fine for whatever reason.
I confirmed (using OpenSSL) that the hash generated by the above tool is indeed correct, and the fact that it works in every browser except Firefox got me thinking.
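For reference, the same offline check can be reproduced with a few lines of Node.js; this is just a sketch, and the file path is a placeholder for a local copy of the asset.
// Compute SRI-style digests for a local copy of the asset, mirroring the
// OpenSSL check described above, e.g.:
//   openssl dgst -sha384 -binary company.min.css | openssl base64 -A
const crypto = require('crypto');
const fs = require('fs');

const file = fs.readFileSync('./company.min.css'); // placeholder path: the exact bytes the server sends

['sha256', 'sha384', 'sha512'].forEach(function (algo) {
  const digest = crypto.createHash(algo).update(file).digest('base64');
  console.log(algo + '-' + digest);
});
If these digests differ from what the browser reports, the bytes the browser received are not the bytes that were hashed.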
So I tried hashing the resources with Mozilla's own SRI tool, srihash.org, which Mozilla recommends in their blog post on SRI.
Now it gets a bit weird.
For the CSS file, srihash.org generates a completely different hash.
For the JS file, the hash is the same from both generators and matches my offline hashing with OpenSSL.
But, if I replace the CSS link with the Mozilla-generated one, this is the result:
works in Firefox
doesn't work anywhere else, since the hash no longer matches
Question
I suspect this is some problem within Firefox's SRI implementation. The relevant discussion for the implementation is here, but it doesn't give any reason why Firefox would see a different resource. I'm not strictly a web developer, though, so are there any likely (documented) reasons for getting different SRI hashes in different browsers?
I can't disclose the exact server/resources. This is a general question, so if you have any objective experience or references to authoritative sources documenting differences in SRI implementations, please answer.

Related

Is there any comprehensive documentation for what browsers do and don't allow for file: scheme URIs?

If you can avoid it, most people will not use file: URIs, because there are just so many quirky rules about what is and isn't allowed; but sometimes it's not up to you, and you just have to support loading an HTML file or app from a file: URI.
For those times, is there any comprehensive and up-to-date documentation or guide covering what browsers do and don't allow for file: URIs? There is information out there, but it is piecemeal. This question, for example, explains that you can't load an ES module over a file: URI in Chrome, but you can in Firefox. But what about fetch(), WASM, or other modern web technologies? Maybe they don't work over file:, but do support data:, if you can construct one with the right MIME type?
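For concreteness, a hedged sketch of the kind of per-feature probing the question is about is below; ./data.json is a hypothetical sibling file, the page would be opened via a file: URL, and the results will differ by browser (which is exactly the problem).
// Probe a couple of the features mentioned above from a page opened via file:
async function probeFileUriSupport() {
  const results = {};
  // fetch() of a sibling file: resource is rejected outright in some browsers
  try {
    await fetch('./data.json'); // hypothetical sibling file
    results.fetchFromFile = 'ok';
  } catch (e) {
    results.fetchFromFile = 'blocked: ' + e.message;
  }
  // data: URL fallback with an explicit MIME type
  try {
    const res = await fetch('data:application/json,' + encodeURIComponent('{"ok":true}'));
    results.fetchFromData = (await res.json()).ok ? 'ok' : 'unexpected body';
  } catch (e) {
    results.fetchFromData = 'blocked: ' + e.message;
  }
  console.table(results);
}
probeFileUriSupport();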
It would be very useful, for those rare times when you just have to support running a web app from a local file: URI, if there were a guide that laid out what works consistently across all browsers, so that if you stick with those few technologies your app will work. Does anyone know of anything like that? I've tried searching the web and MDN but haven't found anything, though that could be because most search engines seem to ignore the colon in file:.
I found an interesting article here. They basically said:
Chromium allows HTML pages served from file:// URIs to load images and scripts from the same path, but Legacy Edge (v18) and Internet Explorer are the only browsers that consider all local-PC file:// URIs to be same-origin, allowing such pages to refer to other HTML resources on the local computer. Other browsers treat file origins as unique, blocking DOM interactions between frames from different local files, etc.
Based on these references (whatwg1, whatwg2, w3):
the behaviour for file: URLs is UA-specific; it "is left as an exercise to the reader"
For Chrome:
file:// schemed URIs do not contain a host component; be sure that your UI accounts for this possibility.
For Mozilla:
file: Host-specific file names
So basically, it is a mess, because each browser has to balance this against its own security concerns.
Other resources:
A similar Stack Overflow question
Chromium discussion
More documentation

Scan and access local file directory in Firefox and IE?

I'm doing some research on whether or not it's possible for a web app (meant to be used and distributed internally) to scan and read files from a local directory (on the user's machine). I came across a few terms along the way:
NPAPI: no longer supported by the majority of web browsers
ActiveX: IE only
Sandboxing: Chrome uses this kind of technology, and it doesn't fit the requirement, so I have to look elsewhere
I feel like ActiveX might be the only option even though I haven't actually written any ActiveX control before (not sure if it's possible).
Also, the goal is to support more than one kind of web browser, so other than IE I thought Firefox might be capable of meeting the requirement, since no search result so far has said otherwise.
Could someone please give me some pointers? I just need to know whether it's at all possible to build an ActiveX control or Firefox extension to scan and read files from a local directory. If it is, what are the downsides other than the security vulnerability?
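For what it's worth, the ActiveX route looks roughly like the sketch below. It is IE-only, it only runs if the page is in a zone where ActiveX is permitted (e.g. a trusted intranet site), and the directory path is just a placeholder.
// Legacy, IE-only sketch using the Scripting.FileSystemObject ActiveX object.
function listLocalDirectory(path) {
  var fso;
  try {
    fso = new ActiveXObject('Scripting.FileSystemObject');
  } catch (e) {
    throw new Error('ActiveX / FileSystemObject is not available or not permitted');
  }
  if (!fso.FolderExists(path)) {
    throw new Error('Folder not found: ' + path);
  }
  var names = [];
  // FileSystemObject collections are walked with the JScript Enumerator helper
  for (var files = new Enumerator(fso.GetFolder(path).Files); !files.atEnd(); files.moveNext()) {
    names.push(files.item().Name);
  }
  return names;
}
// Example (placeholder path): alert(listLocalDirectory('C:\\temp').join('\n'));
A Firefox extension of the XUL/Add-on SDK era could do something similar through its privileged file APIs, but that code likewise would not run in a plain web page.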

Implementing Boomerang into a browser extension/plugin

I am currently developing a browser extension/plugin that one would install and would then report information such as page load times, number of objects on each page, etc. to a server so that the data could be analyzed.
I was curious whether something like Yahoo's Boomerang JavaScript code (http://hacks.bluesmoon.info/boomerang/doc/) would be able to do this. From what I have read, it seems like Boomerang was developed for developers to add to their own websites in order to gather data, but would I be able to gather the same kind of data by putting this code in a browser extension, so that it collects data from each website that is visited?
The link you're using for boomerang is very outdated (it was my first experimental page). Use http://lognormal.github.com/boomerang/doc/
Boomerang does already use these APIs, and much more, but as a browser extension, you could do much more in terms of removing code that supports other browsers, and also maintaining offline storage more efficiently than boomerang's cookies.
FWIW, yslow already does most of what you want, so maybe just use that (it was built by the same team)
I don't see why not from a technical perspective, at least in Firefox and Chrome. However, user privacy issues and policies of the browser extension stores might prevent you from tracking users in certain ways and/or without consent. So better check that first to avoid surprises later.
You'd need a way to gather information. Judging from your question text, the regular DOM APIs and the PerformanceTiming API might be sufficient. And that's probably what Boomerang uses already.
You'd just attach your code (or Boomerang) with e.g. Firefox Add-on SDK PageMod or Chrome extension Content Scripts.
You'll also need to transmit the data somewhere. Both Firefox (XUL¹, Add-on SDK) and Chrome extensions allow cross-origin XHR.
So there you are. ;)
¹ XUL overlay scripts are privileged, and not restricted by the same-origin policy.
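Putting those pieces together, a rough sketch of what the content-script side could look like (Chrome flavour) is below; the collector URL is a placeholder, and the extension's manifest would need permission for it.
// Content-script sketch: gather basic timings and page stats, then POST them.
window.addEventListener('load', function () {
  // Defer so loadEventEnd has been filled in after the load event finishes.
  setTimeout(function () {
    var t = window.performance && window.performance.timing;
    if (!t) {
      return; // PerformanceTiming not available in this browser
    }
    var payload = {
      url: location.href,
      pageLoadMs: t.loadEventEnd - t.navigationStart,
      domObjects: document.getElementsByTagName('*').length,
      resources: window.performance.getEntriesByType
        ? window.performance.getEntriesByType('resource').length
        : null
    };
    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'https://collector.example.com/beacon'); // placeholder endpoint
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify(payload));
  }, 0);
});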

In what case does IE8 block Javascript and how to avoid it?

I have a web site using jQuery, jQuery Tools and some handcrafted JS that performs graphical enhancements. While it runs smoothly in Firefox, Safari and Chrome, IE blocks the script execution with a warning.
There is nothing particularly more dangerous in this code than, say, on Netvibes.
Why is it even talking about ActiveX? I'm using JS.
And how can I prevent that? I don't want my users to have to click on "I allow this website" for it to work. It would be like putting up a big red absolute DIV reading "Live quick and never come back".
JS can't access the filesystem, so what's the point?
Actually JS can traditionally do some bad stuff running from the My Computer Zone, like install ActiveX objects. A lot of past IE exploits used this to leverage filesystem access into arbitrary-code access.
So faced with this problem Microsoft decided to solve the problem, not by simply removing the My Computer zone — oh no, that would be far too easy — but by adding an extra layer of complexity. So Internet Explorer gained an option, on by default, to lock down content from the filesystem, whilst allowing other applications that used embedded WebBrowser controls to continue as before, on the grounds that maybe some applications were relying on the loose settings in their internal HTML interfaces.
(They weren't, really, in the consumer space, but then we never know what shades of foulness may exist in the bespoke Enterprise app world.)
After the embarrassment of IE getting hacked all the time, MS overcompensated by making the lockdown settings for filesystem pages considerably more restrictive than even normal web pages from the Internet. So you can't run JavaScript from files off the filesystem, for no particularly good reason.
At this point web authors whinged, so MS responded not by removing the excessive lockdown — oh no, that would be far too easy — but by adding an extra layer of complexity. So now you can get out of the My Computer Zone simply by placing at the top of your file:
<!-- saved from url=(0014)about:internet -->
This cryptic incantation is known as the Mark of the Web. The newline at the end of it has to be a Windows CRLF, which nicely shafts you if you're using plain LF line-endings. Including this string puts you in the normal Internet Zone where JScript works but you get no other special privileges.
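For illustration, here is a small Node.js sketch (assuming Node 8+; the file name is a placeholder) that stamps the mark onto a file. The number in parentheses is simply the character count of the URL that follows, and the line ends with the required CRLF.
// Prepend the Mark of the Web to an HTML file, as described above.
const fs = require('fs');

function addMarkOfTheWeb(htmlPath, zoneUrl) {
  const count = String(zoneUrl.length).padStart(4, '0');       // "about:internet" -> "0014"
  const motw = '<!-- saved from url=(' + count + ')' + zoneUrl + ' -->\r\n'; // CRLF is required
  const original = fs.readFileSync(htmlPath, 'utf8');
  if (!original.startsWith('<!-- saved from url=')) {          // don't stamp twice
    fs.writeFileSync(htmlPath, motw + original);
  }
}

addMarkOfTheWeb('page.html', 'about:internet'); // placeholder file name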
The amusing thing is that since then, the normal security settings in the My Computer Zone have been tightened up so that it's pretty much the same as the default Internet Zone. So the net result is the same as if they'd just got rid of the bloody My Computer Zone in the first place, only with lots of extra complication for the user and annoyance to the web author.
Thank you so very much Microsoft.
As Ken Browning said in a comment on your question, this alert happens with JavaScript when the page is in a high-security zone, which is where local pages land.
If someone has the Internet zone set to High security, the warning will pop up there as well.
You can add localhost to the Trusted Sites Zone.

Microsoft CDN for jQuery or Google CDN? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
Does it actually matter which CDN you use to link to your jQuery file, or any JavaScript file for that matter? Is one potentially faster than the other? What other factors could play a role in which CDN you decide to use? I know that Microsoft, Yahoo, and Google all have CDNs now.
Update based on comments:
Short version: it doesn't matter much, but it may depend on what each one hosts. They all host different things: Google doesn't host jQuery.Validate; Microsoft did not originally host jQuery UI (since 2016 they do!); and Microsoft offers the scripts that would otherwise be served via ScriptResource.axd, plus easier integration (e.g. ScriptManager with ASP.NET 4.0).
Important note: if you're building an intranet application, stay away from the CDN approach. It doesn't matter who's hosting it: unless your internal server is very overloaded, no CDN will give you more performance than local 100 Mb/1 Gb Ethernet will. If you use a CDN for a strictly internal application, you're hurting performance. Set your cache expiration headers correctly and forget CDNs exist in the intranet-only scenario.
The chances of either being blocked seem to be about equal, almost zero. I have worked on contracts where this isn't true, but it seems to be the exception. Also, since the original posting of this answer, the context surrounding it has changed greatly: the Microsoft CDN has made a lot of progress.
The project I'm currently on uses both CDNs, which works best for our solution. Several factors play into this. Users with an older browser are still probably limited to 2 simultaneous requests per domain, as recommended by the HTTP specification. This isn't an issue for anyone running anything decently new that supports pipelining (every current browser), but based on another factor we're knocking out this limitation as well, at least as far as the JavaScript is concerned.
Google's CDN we're using for:
jquery.min.js
jquery-ui.min.js.
Microsoft's CDN we're using for:
MicrosoftAjax.js
MicrosoftAjaxWebForms.js (until 4.0 we're not completely removing all UpdatePanels)
jQuery.Validate.min.js
Our server:
Combined.js?v=2.2.0.6190 (Major.Minor.Iteration.Changeset)
Since part of our build process is combining and minifying all custom javascript, we do this via a custom script manager that includes the release or debug (non-minified) versions of these scripts depending on the build. Since Google doesn't host the jQuery validation package, this can be a down-side. MVC is including/using this in their 2.0 release, so you could rely completely on Microsoft's CDN for all your needs, and all of it automatic via the ScriptManager.
The only other argument to be made would be DNS lookup times; there is a cost to this in terms of page-load speed. On average, simply because it's used more (it's been around longer), ajax.googleapis.com is likely to be resolved by DNS sooner than ajax.microsoft.com, because the local DNS server is more likely to have already seen a request for it (this is a first-user-in-the-area penalty). This is a very minor thing and should only be considered if performance is extremely important, down to the millisecond.
(Yes: I realize this point is contrary to my using both CDNs, but in our case the DNS time is far overshadowed by the wait time on the javascript/blocking that occurs)
Last, if you haven't looked at it, one of the best tools out there is Firebug, along with some plug-ins for it: Page Speed and YSlow. If you use a CDN but your pages request images every time because of missing cache headers, you're missing the low-hanging fruit. Firebug's Net panel can give you a quick breakdown of your page load time, and Page Speed/YSlow can offer some good suggestions to help.
You should absolutely use the Google CDN for jQuery (and this is coming from a Microsoft-centric developer).
It's simple statistics. Those who would consider using the MS CDN for jQuery will always be a minority. There are too many non-MS developers using jQuery who will use Google's and wouldn't consider using Microsoft's. Since one of the big wins with a public CDN is improved caching, splitting usage among multiple CDNs decreases the potential for that benefit.
Google will send you a jQuery version minified with their own software; this version is 6 KB lighter than the standard minified version served by MS. Go for Google.
One minor thing to consider is that both companies offer slightly different "extra" libraries:
Microsoft is offering the jQuery validation library on their CDN, whereas Google is not (http://www.asp.net/ajaxlibrary/cdn.ashx)
Google is offering the jQuery UI library on their CDN, whereas Microsoft is not (http://code.google.com/apis/ajaxlibs/documentation/)
Depending on your needs, this may be relevant.
It should also be noted that, as ajax.microsoft.com is a subdomain of microsoft.com, requests send all microsoft.com cookies along, adding to the overall time it takes to get the file back.
Also, ajax.microsoft.com uses the default IIS7 compression, which is inferior to the standard compression that other web servers use.
http://ajax.microsoft.com/ajax/jquery/jquery-1.4.4.min.js - 33.4K
http://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js - 26.5K
Also, as others have mentioned, the Google CDN is far more popular, which greatly increases the chance of the file already being cached.
So I strongly recommend using Google.
It probably doesn't matter, but you could validate this with some A/B testing. Send half of your traffic to one CDN, and half to the other, and set up some profiling to measure the response. I would think it more important to be able to switch easily in case one or the other had some serious unavailability issues.
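As a rough sketch of that A/B idea (not production code): pick a CDN at random per page view, time how long the script takes to load, and report it somewhere. The reporting endpoint is a placeholder, and old-IE quirks around script onload are ignored here.
(function () {
  var cdns = {
    google: 'http://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js',
    microsoft: 'http://ajax.microsoft.com/ajax/jquery/jquery-1.4.4.min.js'
  };
  var choice = Math.random() < 0.5 ? 'google' : 'microsoft';
  var start = new Date().getTime();

  var script = document.createElement('script');
  script.src = cdns[choice];
  script.onload = function () {
    var elapsed = new Date().getTime() - start;
    // Report which CDN was used and how long it took (placeholder endpoint)
    var beacon = new Image();
    beacon.src = 'http://stats.example.com/cdn-timing?cdn=' + choice + '&ms=' + elapsed;
  };
  document.getElementsByTagName('head')[0].appendChild(script);
})();
Note that loading jQuery dynamically like this means code that depends on $ has to wait for the onload callback rather than assuming jQuery is present synchronously.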
I know I'm chiming in a little late here, but here is the code that I've been using in production. I've never had issues with it, but your mileage may vary. Make sure you test it in your own environment.
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js" type="text/javascript"></script>
<script type="text/javascript">
!window.jQuery && document.write('<script src="/scripts/jquery-1.4.2.min.js"><\/script>')
</script>
<script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.4/jquery-ui.min.js" type="text/javascript"></script>
<script type="text/javascript">
!window.jQuery.ui && document.write('<script src="/scripts/jquery-ui-1.8.2.min.js"><\/script>')
</script>
Is one potentially faster than the other?
I was actually curious about this myself, so I set up a jsbin test page using each of the following and then ran it through webpagetest.org's visual comparison tool. I tested:
ajax.googleapis.com
code.jquery.com
ajax.aspnetcdn.com
cdnjs.cloudflare.com
Who was fastest: code.jquery.com, by 0.1 seconds in both tests
Who was slowest: ajax.aspnetcdn.com, by 0.7 seconds in the first test, and ajax.googleapis.com, by 1 second in the second test
Here's the 1st test (each was tested 3 times):
Video: http://www.webpagetest.org/video/view.php?id=121019_16c5e25eff2937f63cc1714ed1eac814794e62b3
Reports: http://www.webpagetest.org/video/compare.php?tests=121019_D2_KF0,121019_9Q_KF1,121019_WW_KF2,121019_9K_KF3
Here's the 2nd test (another 3 each):
Video: http://www.webpagetest.org/video/view.php?id=121019_a7b351f706cad2c25664fee7ef349371f17c4e74
Reports: http://www.webpagetest.org/video/compare.php?tests=121019_MP_KJN,121019_S6_KJP,121019_V9_KJQ,121019_VY_KJR
As stated by Pingdom:
When someone visits your site, if they have already visited another site that uses the same jQuery file on the same CDN, the file will have been cached and doesn't need to be downloaded at all. It can't get any faster than that.
This means that the most widely used CDN will have the odds on its side, which can pay off for your site.
A few observations on performance:
Google’s CDN is consistently the slowest of the three both in North America and Europe. In Europe, Microsoft’s CDN is the fastest.
I think it depends on where your target audience is. You can use alertra.com to check both CDNs' speed from many locations around the world.
One additional consideration - if your site is SSL and you need to support Android 2.1 (or earlier), the SSL certificate on the HTTPS version of the Microsoft CDN will crash those versions of the Android browser, per this issue: http://code.google.com/p/android/issues/detail?id=5001. It's not Microsoft's "fault", as the SSL certificate is technically valid and the defect is in Android's SSL implementation... but it will crash your site, nonetheless.
The SSL cert on Google's CDN does not fall afoul of this particular issue (relating to the certificate's "Certificate Subject Alt Name").
So, for SSL + Android 2.1 support, use the Google CDN.
My answer is a bit different from the others: I would go with Microsoft if you need the jQuery validator, which almost everyone needs if they are using jQuery.
The Microsoft CDN's HTTP connections use keep-alive, which is a big plus when you are requesting multiple items.
So if you need jQuery validation, use the Microsoft CDN, and even if you need jQuery UI, get it from Microsoft too, because Google is not using keep-alive, so every request stands on its own; keeping everything on one host that way is a plus. If you use Microsoft only for the validator, you end up making a separate connection to Google's servers for each of the other requests.
In the summary it says that Microsoft is not offering jQuery UI; that is no longer correct. It can be downloaded from http://www.asp.net/ajaxlibrary/cdn.ashx.
Also consider, when using the Google CDN, that sometimes people make typos such as ajax.googelapis.com. This could potentially create a really nasty XSS (cross-site scripting) attack vector. I have actually tested this out by registering a googlapis.com typo domain and very quickly found myself serving requests for JavaScript, maps, CSS, etc.
I emailed Google and asked them to register similar CDN typo URLs, but have not heard back. This could be a real reason not to rely on CDNs, because there are potentially dangerous attackers waiting on the typo requests who can easily serve back jQuery etc. with an XSS payload.
Thank you
Depending on which industry the application targets, you may not want to use a CDN managed by other organisations. It often raises issues regarding compliance, privacy and confidentiality.
For example, when you include Google Analytics in a secure application, the browser still sends the current URL as the "Referer" header. Any identifiers, say a session ID or secret token, may appear in their logs. For example, if a client IP of 192.0.2.5 references https://healthsystem.example/condition/impotence, then, well, you can infer information which is considered to be rather private.
Other cases include information of consequence, such as an account number, social security number or session information in the URL. That sort of data should never be in the URL as it can be used outside of the application.
While you may trust Google, Microsoft or Yahoo, your users may not.
For industries like Finance, Legal and Health Care, you may want to establish your own CDN with the help of a vendor (e.g. Akamai) with which you can sign a BAA.
I would advise that you base your usage on the general location of the users you're targeting.
If your site is targeted for general public, then using Google's CDN would be a good choice.
If your site is also targeted at China, then using Microsoft's CDN would be a better choice.
I know this from experience: Google's servers kept getting blocked by the Chinese government, rendering websites that use them unloadable.
*Note that you can of course create region-specific sites, e.g. cn.mysite.com, to cater specifically for China, but if you're low on resources and time, this is worth considering.
The full list of what the Microsoft CDN hosts is here.
http://www.asp.net/ajaxlibrary/cdn.ashx
They have since renamed to ajax.aspnetcdn.com, which reduces the likelihood of blockage by firewall rules.
I would use both!
As Google's jQuery hosting has been around a lot longer, the chances are much higher that people will already have it cached compared to the Microsoft one, so I would check for it first.
Personally, I would use something like this:
if (typeof jQuery == 'undefined') {
    // jQuery is not loaded - fall back to the Google-hosted copy
    document.write("<scr" + "ipt type=\"text/javascript\" src=\"http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js\"></scr" + "ipt>");
} else {
    // jQuery is already loaded
}
(Not sure this works 100%, but I just wanted to sketch the idea rather than give a full example. This references the Google-hosted jQuery and not the Microsoft one, as I couldn't find the link.)
