Browsers allow extensions to inject code, manipulate the DOM, etc.
Over the years, I have noticed many and varied uncaught errors (reported via window.onerror) on a website (app) I am watching, generated by unknown browser extensions on Firefox, Chrome and Internet Explorer (all versions).
These errors didn't seem to be interrupting anything. Now I want to increase the security of this website, because it will start processing credit cards. I have seen with my own eyes malware/spyware infecting browsers with modified browser extensions (an innocent browser extension, modified to report to attackers/script kiddies) working as keyloggers (using trivial onkey* event handlers, or just input.value checks).
Is there a way (meta tag, etc.) to tell a browser to disallow code injection or reading the DOM, standard or non-standard? The webpage is already served over SSL, yet that doesn't seem to matter (as in, hint the browser to activate stricter security for extensions).
Possible workarounds (kind of a stretch vs. a simple meta tag) suggested by others or off the top of my head:
Virtual keyboard for entering numbers + non-textual inputs (i.e. images for the digits)
Remote desktop using Flash (someone suggested HTML5, yet that doesn't solve the browser extension listening on keyboard events; only Flash, Java, etc. can).
Very complex JavaScript-based protection (removes non-whitelisted event listeners, keeps input values in memory while the actual inputs show asterisk characters, etc.) (not feasible, unless it already exists)
Browser extension with the role of an antivirus or which could somehow protect a specific webpage (this is not feasible, maybe not even possible without creating a huge array of problems)
Edit: Google Chrome disables extensions in Incognito Mode; however, there is no standard way to detect or automatically enable Incognito Mode, so a permanent warning must be displayed.
Being able to disable someone's browser extension usually implies taking over the browser. I don't think it's possible; it would be a huge security risk. Your purpose may be legitimate, but consider the scenario of webmasters programmatically disabling ad blockers so that users view the advertisements.
In the end it's the user's responsibility to make sure they have a clean OS when making online banking transactions. It's not the website's fault that the user is compromised.
UPDATE
We should wrap things up.
Something like:
<meta name="disable-extension-feature" content="read-dom" />
or
<script type="text/javascript">
Browser.MakeExtension.MallwareLogger.to.not.read.that.user.types(true);
</script>
doesn't exist, and I'm sure it won't be implemented in the near future.
Use the current, up-to-date technologies as well as you can and design your app as securely as possible. Don't waste your energy trying to cover for users who shouldn't be making payments over the internet in the first place.
UPDATE (2019-10-16): This isn't a "real" solution - meaning you should not rely on this as a security policy. Truth is, there is no "real" solution because malicious addons can hijack/spoof JavaScript in a way which is not detectable. The technique below was more of an exercise for me to figure out how to prevent simple key logging. You could expand on this technique to make it more difficult for hackers... but Vlad Balmos said it best in his answer - Don't waste your energy trying to cover for users who shouldn't be making payments over the internet in the first place.
You can get around the key logging by using a javascript prompt. I wrote a little test case (which ended up getting a little out of hand). This test case does the following:
Uses a prompt() to ask for the credit card number on focus.
Provides a failsafe when users check "prevent additional dialogs" or if the user is somehow able to type in the CC field
Periodically checks to make sure event handlers haven't been removed or spoofed and rebinds/ warns the user when necessary.
http://jsfiddle.net/ryanwheale/wQTtf/
prompt('Please enter your credit card number');
Tested in IE7+, Chrome, FF 3.6+, Android 2.3.5, iPad 2 (iOS 6.0)
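For reference, a rough sketch of the idea (not the exact fiddle code; the element ids are made up, and it uses addEventListener, so old IE would need attachEvent instead):
var ccField = document.getElementById('cc-number');       // hypothetical credit card input

function askForNumber() {
  // prompt() is rendered by the browser chrome, so page-level key listeners never see the keystrokes
  var value = window.prompt('Please enter your credit card number');
  if (value !== null) {
    ccField.value = value;
  }
}

ccField.addEventListener('focus', askForNumber);

// Failsafe: if dialogs were suppressed or the user typed into the field anyway, ask again on submit
document.getElementById('payment-form').addEventListener('submit', function (e) {
  if (!ccField.value) {
    e.preventDefault();
    askForNumber();
  }
});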
Your question is interesting and thoughtful (+1'd); unfortunately, however, the proposed feature does not provide real security, so no browser will ever implement it.
One of the core principles of browser/web/network security is to resist the urge to implement a bogus security feature. The web would be less secure with the feature than without it!
Hear me out:
Everything executed on the client side can be manipulated. A browser is just another HTTP client talking to the server; the server should never, ever trust computation results or checks done in front-end JavaScript. If someone can bypass your "security" check code executed in a browser with an extension, they can surely fire the HTTP request directly at your server with curl instead. At the very least, in a browser, skilled users can turn to Firebug or the Web Inspector and bypass your script, just like you do when you debug your website.
The <meta> tag stopping extensions from injecting code would make the website more robust, but not more secure. There are a thousand better ways to write robust JavaScript than praying that no evil extension is present: hiding your global functions/objects is one of them, and performing environment sanity checks is another. Gmail checks for Firebug, for example, and many websites detect ad blockers.
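As a rough, hedged illustration of those two ideas (the check shown is easily spoofed and only illustrative):
// Keep your functions out of the global scope so other scripts cannot easily monkey-patch them
(function () {
  'use strict';

  function submitPayment(data) {
    // ...send to the server, which re-validates everything anyway
  }

  // Crude environment sanity check: native DOM methods normally stringify to "[native code]"
  var createElementLooksNative = /\[native code\]/.test(String(document.createElement));
  if (!createElementLooksNative) {
    // someone has replaced a core DOM method; warn the user or flag the session
  }
}());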
The <meta> tag does make sense in terms of privacy (again, not security). There should be a way to tell the browser that the information currently present in the DOM is sensitive (e.g. my bank balance) and should not be exposed to third parties. Yet if a user runs an OS from vendor A, a browser from vendor B and an extension from vendor C without reading through its source code to know exactly what it does, the user has already placed their trust in those vendors. Your website will not be at fault here. Users who really care about privacy will turn to their trusted OS and browser, and use another profile or the browser's private mode to check their sensitive information.
Conclusion: if you do all the input checks on the server side (again), your website is secure enough that no <meta> tag can make it more secure. Well done!
I have seen something similar done many times, although the protection was aimed the other way: quite a few sites, when they present sensitive information as text, use a Flash widget to display it (for example, e-mail addresses, which would otherwise be harvested by bots and spammed).
A Flash applet can be configured to reject any code that comes from the HTML page; in fact, unless you specifically allow this, it will not work out of the box. Flash also doesn't re-dispatch events to the browser, so if the keylogger works at the browser level, it won't be able to log the keys pressed. Certainly, Flash has its own disadvantages, but given all the other options this seems the most feasible one. So you don't need remote desktop via Flash; a simple embedded applet will be just as good. Also, Flash alone can't be used to build a fully functional remote desktop client; you'd be looking into NaCl or JavaFX, which would make this usable only by corporate users and only eventually by private users.
Other things to consider: write your own extension. Making a Firefox extension is really easy, and you could reuse a lot of your JavaScript code since extensions are also written in JavaScript. I never wrote a Google Chrome or MSIE extension, but I imagine it's not much more difficult. You don't need to turn it into an antivirus extension: with the tools available, you could make it so no other extension can eavesdrop on what's going on inside your own extension. I'm not sure how warmly your audience will greet that, but if you are targeting the corporate sector, that audience is, in a way, a very good one, as they don't get to choose their tools... so you can simply oblige them to use the extension.
Any more ideas? Well, this one is very straightforward and efficient: have users open a pop-up window / separate tab and disable JavaScript in it :) I mean, you could decline to accept credit card info if JavaScript is enabled in the browser - obviously, that is very easy to check. It would require some mental effort from users to find the setting where they can disable it, and they will be raging over a pop-up window... but almost certainly this will disable all code injection :)
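One rough way to sketch that check: render the card form only inside noscript, and have JavaScript (if it is running) flag the session so the server can refuse the submission (the ids, cookie name and URL below are made up):
<!-- The real card form is only rendered when scripting is off -->
<noscript>
  <form action="/pay" method="post">
    <input type="text" name="cc_number">
    <input type="submit" value="Pay">
  </form>
</noscript>

<!-- If JavaScript is running, show a hint instead and mark the session -->
<div id="js-warning" style="display:none">Please disable JavaScript to enter your card details.</div>
<script>
  document.getElementById('js-warning').style.display = 'block';
  // a cookie (or hidden field) like this lets the server reject submissions made with JS enabled
  document.cookie = 'js_enabled=1; path=/';
</script>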
This won't work, but I'd try something around document.createElement = function(){};
That should affect client-side scripts (Greasemonkey).
You can also try to submit the current DOM using a hidden input:
myform.onsubmit = function(){ myform.hiddeninput.value = document.body.innerHTML; } and check server-side for unwanted DOM elements. I guess using a server-side generated id/token on every element can help here (as an injected DOM node will surely be missing it)
=> page should look like
<html uniqueid="121234"> <body uniqueid="121234"><form uniqueid="121234"> ...
So finding untracked elements in the POST action should be easy (using XPath, for example)
<?php
simplexml_load_string($_POST['currentdom'])->xpath("//*[not(@uniqueid)]"); // or something in this style
Something around that for the DOM injection issue.
As for the keylogging part, I don't think you can do anything to prevent keyloggers from a client-side perspective (except using virtual keyboards and the like), as there is no way to distinguish them from the browser internals. If you are paranoid, you could try a 100% canvas-generated design (mimicking HTML elements and interaction), as this might protect you (no DOM element to bind to), but that would mean creating a browser in a browser.
And just so we all know that we cannot explicitly block extensions from our code,
another way can be to find the list of event listeners attached to key fields like password and SSN, and also events on the body like keypress, keyup and keydown, and verify whether each listener belongs to your code; if not, just throw a flash message asking the user to disable add-ons.
And you can attach mutation events to your page and see if there are some new nodes being created / generated by a third party apart from your code.
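The old mutation events are deprecated, but a MutationObserver sketch along these lines could flag nodes your own code didn't create (the data-uniqueid marker is hypothetical, borrowed from the tagging idea above):
// Watch for elements added to the page that don't carry our server-generated marker
var observer = new MutationObserver(function (mutations) {
  mutations.forEach(function (mutation) {
    Array.prototype.forEach.call(mutation.addedNodes, function (node) {
      if (node.nodeType === 1 && !node.hasAttribute('data-uniqueid')) {
        console.warn('Untracked element injected:', node);
        // e.g. show the "please disable add-ons" flash message here
      }
    });
  });
});

observer.observe(document.body, { childList: true, subtree: true });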
OK, it's obvious that you will get into performance issues, but that's a trade-off for your security.
Any takers?
Related
I'm investigating something and it led me to a website online.
I haven't yet visited the website because I have reason to believe that it may have malicious content.
I know that in Google Chrome, you can view the source code of a webpage by pressing Ctrl-U. Unfortunately, that requires having visited the page.
Then I discovered that you can get the source code of a webpage without visiting it by adding view-source: before the https:// part of the URL.
So I can view Wikipedia's home page source code with view-source:https://www.wikipedia.org.
I want to do the same with the potentially malicious website but I don't want anything to happen to my computer. The only person I could consult regarding the website said that it "tracks the hell out of your computer". While whomever they heard that from does have a background in network engineering, they themselves don't, so I don't have any detailed information about it.
I know that basically all websites "track" you, i.e., gather information about your computer, such as its IP address, window resolution, user login, etc. by installing cookies on the user's computer to be requested later upon the next visit, but I don't know much about how far those abilities can extend.
I also found out from somewhere (I may be wrong) that there is a difference between "view page source" and "inspect page source" because the first one gives you the raw source code before any JavaScript is applied and the second one is available once you're on the site and any applicable JavaScript has already been applied and you can see its results.
Based on that, I'm assuming that it's perfectly safe to use the view-source: technique if I don't care about the results of the scripts on the page.
So essentially, I need to know these things:
Is it really perfectly safe to use view-source:? I'm assuming not, so I'd like to know exactly what risks I'm taking and what risks I'm avoiding by doing this. EDIT: Forgot to mention. Does the website know that I'm viewing its source code, and does it by that fact know that my IP address requested its source code?
Assuming I can read the JavaScript scripts, can I get a general sense of what the scripts do by reading what I get from view-source: alone, or can a webpage access scripts from other webpages without them explicitly being written on that page? (I'm assuming they can do that since I see hyperlinks on other websites ending in .js that I can click on revealing more JavaScript scripts) Note: I don't really care what the content of the webpage is in terms of what an ordinary user sees, since my investigation already knows and/or doesn't care about what is on it, I just care about what the webpage does in terms of tracking users.
What can "tracking the hell out of your computer" entail exactly? In other words, what are some worst-case scenarios? No scenario is too outlandish; part of my investigation is to learn about this kind of stuff since it will help us down the line.
The general answer is to just disable javascript and cookies in your browser first.
Generally, yes, it's OK to view source, especially if JavaScript is disabled beforehand.
You can if their scripts are readable; many sites, however, will minify the code, which is generally not very readable.
If javascript is disabled it's likely that their tracking would not work or at best be incomplete.
I ignore the "how to ask" topic from above for now to answer your question.
What I am not sure about is whether Stack Overflow is the right Stack Exchange site for this.
The question is basically what threats you suspect from your "potentially malicious page".
If your concerns are mainly about privacy, it might be OK to take the risk.
Sometimes I even just use "incognito mode", although I know about its flaws, if the threat I suspect is limited.
If your concern is that the page code might try to elevate privileges out of the sandbox using security issues in the browser or beyond, you would basically be trusting the security implementation of the very software the page is trying to hack.
For the latter I at least use a read-only VM with minimal software and network access or, when it is about a serious threat, e.g. ransomware, really an old notebook, which gets installed beforehand and wiped afterwards, or even has its hard disk destroyed afterwards.
And even with the latter, I am taking the risk, that something might have modified the BIOS.
Let's say you have a virus on your computer that you designed yourself using JavaScript. If you save the source code of the virus as a .js file on your device, your device will not be harmed, because it has not become a virus yet; in other words, it has not been run. Now consider that you have browsed to a malicious site: you may not realize it, but by visiting the site the browser has parsed and executed its code, effectively running the virus. If instead you view the source code of the site via view-source:, the virus will not run even if it exists, because the browser has not interpreted the site yet; practically speaking, the page is still closed and you have never visited it - you are only shown its source code without going to the site. It's like an apk file that has not been installed yet. You get my point.
Is there any way to consistently detect PhantomJS/CasperJS? I've been dealing with a spate of malicious spambots built with it and have been able to mostly block them based on certain behaviours, but I'm curious if there's a rock-solid way to know if CasperJS is in use, as dealing with constant adaptations gets slightly annoying.
I don't believe in using Captchas. They are a negative user experience and ReCaptcha has never worked to block spam on my MediaWiki installations. As our site has no user registrations (anonymous discussion board), we'd need to have a Captcha entry for every post. We get several thousand legitimate posts a day and a Captcha would see that number divebomb.
I very much share your take on CAPTCHAs. I'll list what I have been able to detect so far for my own detection script, which has similar goals. It's only partial, as there are many more headless browsers.
It is fairly safe to use exposed window properties to detect/assume those particular headless browsers:
window._phantom (or window.callPhantom) //phantomjs
window.__phantomas //PhantomJS-based web perf metrics + monitoring tool
window.Buffer //nodejs
window.emit //couchjs
window.spawn //rhino
The above is gathered from the JSLint docs and from testing with PhantomJS.
Browser automation drivers (used by BrowserStack or other web capture services for snapshots):
window.webdriver //selenium
window.domAutomation (or window.domAutomationController) //chromium based automation driver
The properties are not always exposed and I am looking into other more robust ways to detect such bots, which I'll probably release as full blown script when done. But that mainly answers your question.
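Putting the properties listed above into one rough aggregate check (a hit is only a hint, not proof, and only catches tools that actually expose these properties):
function looksHeadless() {
  return !!(window._phantom || window.callPhantom ||                  // PhantomJS
            window.__phantomas ||                                     // phantomas
            window.Buffer ||                                          // Node-like environments
            window.emit ||                                            // couchjs
            window.spawn ||                                           // Rhino
            window.webdriver ||                                       // Selenium
            window.domAutomation || window.domAutomationController);  // Chromium automation
}

if (looksHeadless()) {
  // flag the session server-side, serve a CAPTCHA, etc.
}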
Here is another fairly sound method to detect JS capable headless browsers more broadly:
if (window.outerWidth === 0 && window.outerHeight === 0) { /* headless browser */ }
This should work well because the properties are 0 by default even if a virtual viewport size is set by headless browsers; by default a browser can't report the size of a window that doesn't exist. In particular, PhantomJS doesn't support outerWidth or outerHeight.
ADDENDUM: There is however a Chrome/Blink bug with outer/inner dimensions. Chromium does not report those dimensions when a page loads in a hidden tab, such as when restored from a previous session. Safari doesn't seem to have that issue.
Update: Turns out iOS Safari 8+ has a bug with outerWidth & outerHeight at 0, and a Sailfish webview can too. So while it's a signal, it can't be used alone without being mindful of these bugs. Hence, warning: Please don't use this raw snippet unless you really know what you are doing.
PS: If you know of other headless browser properties not listed here, please share in comments.
There is no rock-solid way: PhantomJS and Selenium are just software being used to control browser software, instead of a user controlling it.
With PhantomJS 1.x, in particular, I believe there is some JavaScript you can use to crash the browser that exploits a bug in the version of WebKit being used (it is equivalent to Chrome 13, so very few genuine users should be affected). (I remember this being mentioned on the Phantom mailing list a few months back, but I don't know if the exact JS to use was described.) More generally you could use a combination of user-agent matching and feature detection. E.g. if a browser claims to be "Chrome 23" but does not have a feature that Chrome 23 has (and that Chrome 13 did not have), then get suspicious.
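As a hedged sketch of that last idea (the Blob constructor is used here purely for illustration, on the assumption it existed in that era of Chrome; pick whatever feature probe suits your case):
// Compare what the UA string claims with what the engine can actually do
var match = navigator.userAgent.match(/Chrome\/(\d+)/);
var claimsChrome23Plus = !!match && parseInt(match[1], 10) >= 23;

var hasExpectedFeature = (function () {
  try { new Blob([]); return true; } catch (e) { return false; }
}());

if (claimsChrome23Plus && !hasExpectedFeature) {
  // UA claim and capabilities disagree - possibly an old engine (e.g. PhantomJS 1.x) with a spoofed UA
}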
As a user, I hate CAPTCHAs too. But they are quite effective in that they increase the cost for the spammer: he has to write more software or hire humans to read them. (That is why I think easy CAPTCHAs are good enough: the ones that annoy users are those where you have no idea what it says and have to keep pressing reload to get something you recognize.)
One approach (which I believe Google uses) is to show the CAPTCHA conditionally. E.g. users who are logged-in never get shown it. Users who have already done one post this session are not shown it again. Users from IP addresses in a whitelist (which could be built from previous legitimate posts) are not shown them. Or conversely just show them to users from a blacklist of IP ranges.
I know none of those approaches are perfect, sorry.
You could detect Phantom on the client side by checking the window.callPhantom property. The minimal client-side script is:
var isPhantom = !!window.callPhantom;
Here is a gist with proof of concept that this works.
A spammer could try to delete this property with page.evaluate and then it depends on who is faster. After you tried the detection you do a reload with the post form and a CAPTCHA or not depending on your detection result.
The problem is that you incur a redirect that might annoy your users. This will be necessary with every detection technique on the client, and it can be subverted and changed with onResourceRequested.
Generally, I don't think that this is possible, because you can only detect on the client and send the result to the server. Adding the CAPTCHA combined with the detection step with only one page load does not really add anything as it could be removed just as easily with phantomjs/casperjs. Defense based on user agent also doesn't make sense since it can be easily changed in phantomjs/casperjs.
I was looking into a script embedded in a webpage that creates an Outlook appointment and opens it. I tested a sample appointment shared by Brian White: http://www.winscripter.com/WSH/MSOffice/90.aspx
and embedded it in a sample web page, but here are two problems:
The script works only in IE and not in any other browser.
IE issues a security message about an ActiveX control and asks if to enable it.
Do you have any idea how to make it work in all browsers and not to scare users with the ActiveX warning?
Thank you in advance!
The script you've linked to works by creating an instance of the Outlook ActiveX control. As such, no, there's no way to make this work in browsers that don't support ActiveX, which is effectively all of them except Internet Explorer.
As for not scaring the users with the ActiveX dialog box, that's not in your hands. The warning message is a security feature, part of the browser itself, and can only be disabled by changing the browser's settings - which isn't something you can do through code, for obvious reasons!
If it's appropriate to your situation, rather than do this through client-side JavaScript you could instead use Exchange Web Services on the server side. This comes with its own set of limitations and things to be aware of, namely (a) it's obviously impossible to open Outlook with this method, and (b) on the server side you'd require access to the Exchange server and would need to know the username/password of an Exchange user with permission to write to the relevant calendar (which is only going to happen if we're talking about a corporate environment).
Although I realize it is an old post, I wanted to offer another approach.
I notice your question refers specifically to OUTLOOK appointments, but what about using "iCalendar"?
http://en.wikipedia.org/wiki/ICalendar
This could offer a wider solution. Also, a page could offer two alternative icons.
One for Outlook, another one using iCalendar, and let the user choose which one to use.
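For the iCalendar option, a small sketch of offering an .ics download from the page (the event details are placeholders; Outlook and most other calendar apps can open the resulting file):
// Build a minimal iCalendar file and offer it as a download (needs Blob/URL support)
var ics = [
  'BEGIN:VCALENDAR',
  'VERSION:2.0',
  'BEGIN:VEVENT',
  'DTSTART:20240101T100000Z',   // placeholder start time
  'DTEND:20240101T110000Z',     // placeholder end time
  'SUMMARY:Sample appointment',
  'END:VEVENT',
  'END:VCALENDAR'
].join('\r\n');

var link = document.createElement('a');
link.href = URL.createObjectURL(new Blob([ics], { type: 'text/calendar' }));
link.download = 'appointment.ics';
link.textContent = 'Add to calendar (iCalendar)';
document.body.appendChild(link);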
Hope this helps. Cheers.
Marcelo F.
I'm currently building a project and I would like to make use of some simple javascript - I know some people have it disabled to prevent XSS and other things. Should I...
a) Use the simple javascript, those users with it disabled are missing out
b) Don't use the simple javascript, users with it enabled have to click a little more
c) Code both javascript-enabled and javascript-disabled functionality
I'm not really sure as the web is always changing, what do you recommend?
Degrade gracefully - make sure the site works without JavaScript, then add bells and whistles for those with JavaScript enabled.
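For instance, a sketch of the pattern: the markup works as a plain link on its own, and the script, when present, upgrades it to load the content inline (the URL and ids are made up):
<!-- Without JavaScript this is just a normal link to a full page -->
<a id="show-details" href="/product/42/details">Show details</a>
<div id="details-panel"></div>

<script>
  document.getElementById('show-details').addEventListener('click', function (e) {
    e.preventDefault();   // with JavaScript, load the same content inline instead
    var panel = document.getElementById('details-panel');
    var xhr = new XMLHttpRequest();
    xhr.open('GET', this.href);
    xhr.onload = function () { panel.innerHTML = xhr.responseText; };
    xhr.send();
  });
</script>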
Everyone else has contributed good comments, but there are a few other considerations to make.
Sometimes the javascript will be hosted on a different domain, and be prone to timeout.
Sometimes that domain may become inaccessible while your site remains accessible. It's not good to have your site completely break in this scenario.
For this reason, "blocking" scripts (i.e. inline document.write) like the one in Google's tracker should be avoided, or at the very least should go as late in the page as possible, so the page renders whether or not the third-party domain is timing out requests.
If you happen to be serving JS from a broken/malicious server, by intent or by accident, page rendering can be halted simply by having the script that serves that JavaScript call "sleep(forever)" once it has sent all the headers.
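One way to keep a third-party script from blocking rendering is to inject it asynchronously instead of with an inline document.write, roughly like this (the URL is a placeholder):
// Load the tracker asynchronously so a slow or dead third-party host can't stall page rendering
(function () {
  var s = document.createElement('script');
  s.src = 'https://stats.example.com/tracker.js';   // placeholder third-party URL
  s.async = true;
  document.getElementsByTagName('head')[0].appendChild(s);
}());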
Some People Use NoScript
Like the above problem, sometimes the client's environment may block certain script sources, be it the user's choosing or other reasons (i.e. browser security settings, odd antivirus/anti-malware apps). The most popular and controllable instance of this is NoScript, and I myself paranoidly block some of the popular tracking/advertising services with it (some proxy servers will do this too).
However, if a site is not well designed, the failure of one script to load still executes code that was dependent on that script being present, which yields errors and stops everything working.
My recommendation is :
Use Firebug
Use NoScript and block out everything --> See site still works
Enable core site scripts that you can't do without --> See site still works and firebug doesn't whine.
Enable 3rd party stuff --> See site still works and firebug doesn't whine.
There are a lot of other complications that can crop up, but satisfying the above two should solve most of them. Just assume that, for whatever reason, one or more of the resources that comprise a page is liable to spontaneously disappear (they do, all the time), and you want the page to "survive" this problem as amicably as possible. For problems that persist for less than 10 seconds it's not so bad - refresh the page and it's fixed - but a problem that lingers can severely hamper usability for an hour or more at a time.
In essence, instead of thinking "oh, there's the edge case of users that don't have javascript", try thinking more along the lines of "it's really easy to have something go wrong and have ALL of our users with broken javascript. Ouch! Let's try to make it so we don't really hose ourselves when that does happen".
(I've seen IE updates get rolled out and hose javascript for that entire browser until the people who wrote the scripts found a workaround. Losing all your IE customers is not a good thing.)
:set sarcasm
:set ignoreSpelling
:set iq=76
Don't worry, its only a 5% Niché Market
Nobody cares about targeting Niché markets right? All those funny propeller heads running lynx in their geeky stupid linoox cpus, spending all their time on the intarwebs surfing because they have nothing better to do with their life or money? the crazy security paranoid nerds disabling javascript left and right because they don't like it?
Nobody wants them as your primary customer now do they?
Niché markets. Pfft. Who cares!
:set nosarcasm
Consider your audience
"Degrade gracefully" is generally the best answer. But lots of sites now depend on JS - especially AJAX.
Consider your audience. If your site is aimed at extremely tech-savvy people, the chances of them not having javascript are small, and you can notify them to turn it on if necessary.
If your audience may access your site with mobile devices, don't assume they have JavaScript, and don't even assume they support CSS properly. Aim to degrade gracefully all the way down to bare HTML.
I've learned a lot from my question: What's With Those Do-Not-Use Javascript People
Go with Ajax and Web 2.0. It's the way the web is going and it's wonderful. Isn't Stackoverflow great to be on? It's not quite as nice with your Javascript turned off.
Once you have your site ready, but before you let it go live, test it with Javascript off, and just add whatever you feel you need to make your site appear and function to them. You only need to add what you feel is essential.
Remember, except for visually impaired people using screen readers, the others have chosen to turn javascript off. They can also choose to trust your site and turn javascript on for your site if they want to use all the functionality you have. It really is their choice.
As others have said, it should "degrade gracefully".
In other words, it must work without Javascript (period). It doesn't have to work well. The folks who've disabled Javascript know the limitations that causes and have accepted them. But if you are trying to sell them something, it's important that they can still buy it.
On the site I'm designing, there's a javascript-based fly-out menu. With Javascript off, all the flyouts are always open. It doesn't look as cool as it would with JS, but it can still be used to navigate the site.
It depends on how much time you have to develop and maintain both solutions, and how much the non-javascript users are worth to you.
My e-commerce site relies heavily on javascript, and in over a year and a half, I've not received a single complaint.
In fact, I don't think I've seen a single visitor with javascript disabled in any of my logs since I started.
That doesn't mean they're not out there. It just means that either (a) they're a tiny percentage, (b) they're not interested in what I'm selling, or (c) both of the above.
Code your web site with support for the bare minimum kind of browser. Then more people can use your site without frustration even if they don't have all the bells and whistles--like Flash, Javascript, and Java--enabled. It may not be practical to continue support for ancient browsers, say Netscape Navigator 4, because a user can be reasonably expected to keep their computer up-to-date. However, features like Javascript, Flash, and Java can be security holes in old or modern browsers, as well as being an annoyance.
Neither of my parents keep Javascript or Flash enabled because they've had too many experiences with them slowing down their already slow connection, crashing their browsers, or being more of an annoyance on sites that use it stupidly (which is a lot of them...) than a useful feature. It's just bad design if, for example, your form requires an AJAX call be made and you can't actually hit a submit button to send the form when Javascript is disabled.
My mother was recently quite frustrated to discover that she is now unable to click through eBay results pages because each one requires Javascript. The only way she can see the next page of results is to turn on Javascript or to show more results per page. Now what reason would there be for page links to require Javascript while the 'results per page' links are just plain links? They should all be plain old HTML links. Maybe Javascript could be used to add some whiz-bang to the navigation, but a user should not be punished with a bad interface for having Javascript disabled. It's stupid on eBay's part, and it causes undue hassle for their users.
I am one of those who uses NoScript. And I can tell you that sites that use javascript and don't work without it enabled are extremely annoying, Stack Overflow included... No, we don't expect it to be very fancy; if I upvote, load a new page that says "Thank you."
We expect to be able to use the site with reasonable limitations. Don't ever display a page that says JS must be enabled, though, even if the site is crap without it. And yes, if your site convinces us to stay, we will enable it. A function that isn't in common use on the site can also require javascript.
Please note that your site should also look good with no JS or CSS; if nothing else, it is good for bots.
As others have pointed out, some phones don't have JS; this is changing, but it's another good reason to have a reasonable non-JS version. I suggest coding the non-JS version and adding JS after the former works; there are good ways in which JS can work with the non-JS layout.
It helps me in my implementations to think about it as "progressive enhancement" rather than graceful degradation. Degradation often leads you to figure out how to make it work w/o js after it is implemented, instead of making a baseline and enhancing with js.
It is essential to at least test your website is functional when JavaScript is turned off.
As orip says, degrading gracefully is very important. It should be vital that your page both looks nice and functions when JavaScript is disabled.
For a standard web site that is primarily intended for conveying information, degrade gracefully always.
For web applications:
When building a web application for a standard internet audience, I would keep the three following facts in mind:
95%-97% of potential users will have JavaScript enabled.
At times established users will need to access functionality when JavaScript is not available.
3%-5% of potential users will have JavaScript intentionally disabled.
Given fact one, if you believe that building a JavaScript reliant web application will deliver a superior user experience, then by all means do it. Doing so may help you accumulate users.
However, given fact two, you should always provide a means by which your users can access core functionality without JavaScript. Do you need to offer every single feature? Probably not. But a user should be able to get his or her work done. This will keep your users happy when they find themselves temporarily without JavaScript.
Given fact three, I would also provide an in depth tour as an attempt to entice these users to enable JavaScript.
As an aside, one of my most favorite web applications, Remember The Milk follows this approach. Also, Google's Calendar application is unusable without JavaScript. So JavaScript reliant web apps are on the rise and that trend is probably unstoppable. In my opinion this is a good thing.
(Do keep in mind that JavaScript makes accessibility a bigger problem than it already is. Please do make an effort to make your apps usable by those with disabilities.)
As said before, it depends on your target audience.
If I'm part of it, you want to make sure that your site works (if not ideally) on my phone, and that it gives me reason to turn Javascript on when I surf there with it off. Nobody expects full functionality with Javascript disabled, and anybody who uses their phone to access websites expects some issues, but you need to at least provide teasers. For a web store, make sure customers can see at least some merchandise anyway, even if they can't buy without Javascript.
I've never actually used greasemonkey, but I was considering using it.
Considering that GreaseMonkey allows you to let random people on the Internet change the behavior of your favorite websites, how safe can it be?
Can they steal my passwords? Look at my private data? Do things I didn't want to do?
How safe is Greasemonkey?
Thanks
Considering that GreaseMonkey allows you to let random people on the Internet change the behavior of your favorite websites, how safe can it be?
It's as safe as you allow it to be - but you aren't very clear, so let's look at it from a few perspectives:
Web Developer
Greasemonkey can't do anything to your website that a person with telnet can't already do to your website. It automates things a bit, but other than that if greasemonkey is a security hole, then your website design is flawed - not greasemonkey.
Internet user with Greasemonkey loaded
Like anything else you load on your system, greasemonkey can be used against you. Don't load scripts onto your system unless you trust the source (in both meanings of the term 'source'). It's fairly limited and sandboxed, but that doesn't mean it's safe, merely that it's harder for someone to do something nefarious.
Internet user without Greasemonkey
If you do not load greasemonkey or any of its scripts, it cannot affect you in any way. Greasemonkey does not alter the websites you visit unless you've loaded it on your system.
Greasemonkey developer
There's not much you can do beyond what can already be done with XUL and javascript, but it is possible to trash your mozilla and/or firefox profile, and possibly other parts of your system. Unlikely, difficult to do on purpose or maliciously, but it's not a bulletproof utility. Develop responsibly.
-Adam
Considering that GreaseMonkey allows you to let random people on the Internet change the behavior of your favorite websites
Random people whose UserScript you have installed. No one can force you to install a UserScript.
Can they steal my passwords?
Yes, a UserScript could modify a login page so it sent your password to an attacker.
No, it cannot look at your current passwords, or for websites the UserScript isn't enabled for
Look at my private data?
Yes, if your private data is viewable on a website that you've given a UserScript access to
Do things I didn't want to do?
Yes, a UserScript could do things to a webpage (you've given it access to) that are unwanted
How safe is GreaseMonkey?
As safe as the individual UserScripts you have installed
When used with discretion, Greasemonkey should be perfectly safe to install and use. While it is definitely possible to do all manners of mischief with carte-blanche Javascript access to pages, Greasemonkey scripts are restricted to specific URLs, and will not run on sites that are not specified by the URL patterns in their headers.
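For reference, the URL restriction lives in the script's metadata block; a typical header looks something like this (the names and patterns are just an example):
// ==UserScript==
// @name        Example tweak
// @namespace   http://example.com/userscripts
// @description Only runs where the @include/@match patterns allow it
// @include     https://www.example.com/*
// @exclude     https://www.example.com/admin/*
// @grant       none
// ==/UserScript==

// the script body below only runs on pages matched above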
That being said, a basic rule of thumb is to consider most information on pages with Greasemonkey scripts active to be accessible to those scripts. It is technically feasible to play games like replacing input boxes (in which you might enter passwords or personal info), read any data on the pages, and send data collected to a third party. Greasemonkey scripts do run in an effective sandbox within the browser, and shouldn't be able to affect your computer outside of Firefox.
That being said, in some respects, the risk is comparable to or less than that of installing any other small pieces of open source software. Since Greasemonkey scripts are simple open source Javascript files, it's relatively easy for a programmer to take a look inside and make sure it does what it says it does. As always, run strangers' code (of any form) with care, and take the time to skim the source code if the software is important to you.
In general though, Greasemonkey scripts should be pretty safe. Try to use scripts with a large number of reviews and users, since these are likely to be more thoroughly vetted and analyzed by the community.
Happy userscripting!
Yes, userscripts can steal your passwords. That's the bottom line. Don't use firefox addons or userscripts on work or government computers without referring to your bosses.
Unlike Firefox addons, userscripts are not formally vetted. (Firefox 'experimental' addons are also not vetted.) You can register and add a malicious script to userscripts.org in a moment.
Userscripts are very unsafe. The cross-site scripting ability means that it's no difficulty at all to send off your details/passwords to an evil server quite invisibly, and a script can do it for any site. Ignore the other answers that attempt to dismiss/minimise this issue. There are two issues: evil script writers putting their evil wares on userscripts.org, and scripts that break out of Greasemonkey's sandbox and so are vulnerable to being used by malicious code on a hacked site that would otherwise be restricted to the same domain.
In the case of evil script authors you can examine the scripts for code that sends your details; not much fun. At the very least you could restrict the script to particular sites by editing the 'include/exclude' clause. That doesn't solve the problem but at least it won't be sending off your banking credentials (unless you've used the same login details). It's a pity there isn't an 'includexss' clause to restrict xss requests, which would effectively solve the problem since, crucially, it would be easy to check even for non-developers. (the Firefox addon "RequestPolicy" doesn't block userscripts.)
Unsafe scripts: look for any use of unsafeWindow. There are other risky calls. Greasemonkey doesn't warn you of their use when the script is installed. Use of these calls doesn't mean the script is unsafe, just that the script writer had better be good at secure programming; it's difficult and most aren't. I avoid writing scripts that would need these calls. There are popular, high-download scripts that use them.
Firefox plugins/addons at Mozilla.org have similar problems to userscripts but at least they are formally vetted. The vetting/review includes the all-important code-review. Nevertheless there are clever techniques for avoiding the detection of evil code without the need of obfuscation. Also the addon may be hosted on an (unknown to anyone) hacked site. Unfortunately mozilla also lists 'experimental' addons which are not vetted and have had malicious code. You get a warning but how many know the real significance. I didn't until I picked up security knowledge. I never install such addons.
Userscripts are not formally vetted. Unless a script has a lot of installs I examine the code. Even so a high-install script could still have had the script-writer's account hijacked and script modified. Even if I examine a script the use of anti-detection programming means I may not see the evil. Perhaps the best bet is to examine outgoing requests with "Tamper Data" firefox addon, but a clever script will delay or infrequently send data. It's a tactical war, unfortunately. Ironically only microsoft's certificate based activeX objects really approach a real solution in developer traceability (but didn't go far enough).
It's true that a firefox addon gives an evil-doer greater exposure to potential victims, since firefox addons are generally more popular and so seem more likely to be targeted, but the firefox vetting process makes userscripts more attractive to the evil-doer since they are not vetted. Arguably a low-download userscript can still get a criminal plenty of valuable logins until it is spotted, while also giving the benefit of the relative obscurity and low community churn of userscripts, as well as a low chance of anyone code-reviewing it. You can't depend on firefox addons' popularity to protect you from evil userscripts.
As a non-developer you are dependent on other users spotting evil scripts/addons. How likely is that? Who knows. The truth is it's a crap security model.
Ultimately I use firefox for general browsing and Google Chrome (without greasemonkey/plugins) for admin purposes. Chrome also has a usable 'profiles' feature (totally separate browsing spaces) which is effectively like using different browsers. I've set up three chrome profiles to make myself even more safe: email/general-admin, banking, ebay/paypal. Firefox has unusable profiles (in my experience) but I prefer firefox as a browser which is why I still use it for uncritical browsing. Profiles also protect against old fashioned browser security holes and hacked sites, at least limiting their scope. But make sure you use different passwords. Another approach is a clean bootable ubuntu install on a USB stick for critical admin (see here http://www.geekconnection.org/remastersys/).
Jetpack's special trust model, rather like the PGP trust network, which underlines the seriousness of this issue, should hopefully mitigate it. Jetpack is Firefox's new kid on the block: a kind of super Greasemonkey.