JavaScript injected into URL

We have a relatively popular website, and recently we started seeing some strange URLs popping up in our logs. Our pages reference jQuery, and we started seeing pieces of those scripts inserted into URLs. So we have log entries like this:
/js/,data:c,complete:function(a,b,c){c=a.responseText,a.isResolved()&&(a.done(function(a){c=a}),i.html(g?d(
The User-Agent string of the request is Java/1.6.0_06, so I think we can safely assume it's a bot, probably written in Java. Also, I can find the appended piece of code in the jQuery file.
Now, my question is: why would a bot try to insert referenced JavaScript into the URL?

It may not be specifically targeted at your site -- it may be a shotgun attempt to find XSS-able sites, so that an attacker can later figure out what's stealable, craft an attack, and write a web page to deploy it against real users.
In that case, the attacker may use bots to collect HTML from sites, and then pass that HTML to instances of IE running on zombie machines to see what messages get out.
I don't see any active payload here, so I assume you've truncated some code, but it looks like minified (JSCompiler-processed) jQuery code that probably uses jQuery's postMessage, so it's probably an attempt to XSS your site to exfiltrate user data or credentials, install a JavaScript keylogger, etc.
I would grep through your JavaScript looking for code that does something like
eval(location.href.substring(...));
or anything that uses a regexp or substring call to grab part of the location and then uses eval or new Function to unpack it.
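To make the pattern concrete, here is a minimal sketch of the kind of code that grep should flag, next to a safer alternative. The handler names are invented for illustration, and the `location` object is mocked so the example runs outside a browser:

```javascript
// Mocked location object standing in for the browser's window.location.
var location = { hash: '#cb=2*21' };

// Vulnerable pattern: grabs everything after "cb=" in the fragment and
// eval()s it, so whoever controls the URL controls the code that runs.
function vulnerableHandler() {
  var m = /cb=(.*)$/.exec(location.hash);
  return m ? eval(m[1]) : null;
}

// Safer pattern: treat the fragment as data and map it onto a fixed
// set of known callbacks instead of executing it.
function safeHandler() {
  var m = /cb=([\w-]+)$/.exec(location.hash);
  var callbacks = { showHelp: function () { return 'help'; } };
  return m && callbacks[m[1]] ? callbacks[m[1]]() : null;
}
```

Any code resembling the first function is exactly the hole a URL-injection bot like the one in the logs is probing for.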

Checking for cross-site scripting (XSS) vulnerabilities, maybe.
If the bot detects a successful injection, it might inject dangerous code (e.g. stealing your users' passwords or redirecting them to malicious sites).

Related

How can I make an indexable website that uses a JavaScript router?

I have been working on a project that uses the Backbone.js router, and all data is loaded by JavaScript via RESTful requests. I know that there is no way to detect server-side whether JavaScript is enabled, but here are the scenarios I thought of to make this website indexable:
I can append a query string to each link in sitemap.xml, and I can put a <script> tag on the page to detect whether JavaScript is enabled. The server renders this page with indexable data, and when a user visits it I can manually initialize the Backbone.js router. However, the problem is that I need to execute an SQL query to render the indexable data server-side, which causes extra load when the visitor is not a bot. Also, when users share a URL of the website somewhere, it won't be an indexable page, so web crawlers may not identify the content at that URL. And an extra string in the web crawler's search results may be annoying for users.
I can detect popular web crawlers like Google, Yahoo, Bing, and Facebook server-side from their user agents, but I suspect there will be some web crawlers that I miss.
Which way seems more convenient, or do you have any ideas or experience with making this kind of website indexable?
As elias94xx suggested in his comment, one solid solution to this dilemma is to take advantage of Google's "AJAX crawling". In short, Google told the web community: "look, we're not going to actually render your JS code for you, but if you want to render it server-side for us, we'll do our best to make it easy on you." They do that with two basic concepts: pretty URL => ugly URL translation, and HTML snapshots.
1) Google implemented a syntax web developers can use to specify client-side URLs that can still be crawled. The syntax for these "pretty URLs", as Google calls them, is: www.example.com?myquery#!key1=value1&key2=value2.
When you use a URL with that format, Google won't try to crawl that exact URL. Instead, it will crawl the "ugly URL" equivalent: www.example.com?myquery&_escaped_fragment_=key1=value1%26key2=value2. Since that URL has a ? instead of a #, requesting it results in a call to your server. Your server can then use the "HTML snapshot" technique.
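The pretty-to-ugly translation is mechanical enough to sketch in a few lines. This is only an illustration of the mapping the crawler performs (the crawler does it for you; your server just receives the ugly form), and it escapes only `%` and `&` in the fragment so the example matches the URL shown above:

```javascript
// Sketch of the pretty URL -> ugly URL rewrite from Google's AJAX
// crawling scheme. Everything after "#!" is moved into the
// _escaped_fragment_ query parameter.
function prettyToUgly(url) {
  var i = url.indexOf('#!');
  if (i === -1) return url; // not an AJAX-crawlable URL
  var base = url.slice(0, i);
  var fragment = url.slice(i + 2);
  // '%' and '&' are percent-escaped so the whole fragment survives
  // as a single query-parameter value.
  var escaped = fragment.replace(/%/g, '%25').replace(/&/g, '%26');
  var sep = base.indexOf('?') === -1 ? '?' : '&';
  return base + sep + '_escaped_fragment_=' + escaped;
}
```

Running it on the pretty URL above yields exactly the ugly URL Google requests.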
2) The basic idea of that technique is to have your web server run a headless JS runner. When Google requests an "ugly URL" from your server, the server loads your Backbone router code in the headless runner, which generates (and then returns to Google) the same HTML that code would have generated had it been run client-side.
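Server-side, the branch on crawler vs. normal user might look like the following sketch. It assumes an Express-style middleware signature and a hypothetical `renderSnapshot` function that wraps whatever headless runner you use; both names are illustrative, not part of any real API:

```javascript
// Hypothetical middleware: if the request carries _escaped_fragment_,
// it came from a crawler following an "ugly URL", so serve a
// pre-rendered HTML snapshot; otherwise fall through to the JS app.
function snapshotMiddleware(renderSnapshot) {
  return function (req, res, next) {
    var fragment = req.query._escaped_fragment_;
    if (fragment === undefined) return next(); // normal user
    renderSnapshot(fragment, function (err, html) {
      if (err) return next(err);
      res.send(html); // crawler gets the snapshot HTML
    });
  };
}
```

The expensive snapshot rendering then only runs for crawler requests, which addresses the extra-load concern in the question.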
A full explanation of pretty=>ugly URLs can be found here:
https://developers.google.com/webmasters/ajax-crawling/docs/specification
A full explanation of HTML snapshots can be found here:
https://developers.google.com/webmasters/ajax-crawling/docs/html-snapshot
Oh, and while everything so far has been based on Google, Bing and Yahoo have also adopted this syntax, as indicated by Squidoo here:
http://www.squidoo.com/ajax-crawling

Is it safe to use Wymeditor on a live site? Could an automated script disable JavaScript?

I'm curious: the pages in question are password protected, but it got me thinking. I'm experimenting with the Wymeditor jQuery plugin; it works rather well and deals with XSS attempts by automatically converting angle brackets into HTML entities, etc.
Obviously, with direct access a user could disable JavaScript and then insert any manner of malicious tags into the DB. However, I am wondering whether it would be possible for an automated script to disable JavaScript if it managed to get past the password protection, and thus insert malicious scripts into the DB which would then be run when DB information is displayed on another page?
* UPDATE *
I should probably have expanded a bit. I would usually use strip_tags() and prepared statements, but for Wymeditor to be of any use to the client I can't use strip_tags(). I know I could write some code to remove malicious-looking content, but I'm not sure just how much malicious content I would be looking for; I'm assuming XSS attacks are a lot more varied than just the <script>do bad stuff</script> type of thing.
Rule #1: never trust user data.
Corollary: anything that comes from the client side is user data, no matter what in-browser measures your page takes (An automated script might not be running JS at all to begin with, or may inject form fields not present on your page).
So, while the JS editor doesn't make the site any less safe, it doesn't provide any additional security either: client-side measures (such as JS input filtering) are for user convenience only and provide exactly zero protection; you need to sanitize user input server-side, regardless of client-side.
You should check your data upon insert into the database ... no client-side script can protect you. Wymeditor can make input easier and can help a lot, but it will not protect you against the malicious pieces of code that some attackers will try to insert. So you have to do strong server-side checks.
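As a minimal sketch of that server-side principle (shown in Node-style JavaScript rather than the PHP implied by strip_tags(); a real application should prefer a maintained allow-list sanitizer library), escaping on output guarantees that stored markup is rendered as text, not executed:

```javascript
// Escape the five HTML-significant characters so user-supplied
// content is displayed literally instead of interpreted as markup.
// '&' must be replaced first, or the other entities get double-escaped.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

Since Wymeditor's whole point is producing HTML, full escaping defeats it; that is exactly why an allow-list sanitizer (keep known-safe tags, strip everything else) is the usual compromise.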

Identify the user from a cross-site POST request

In my app, I will provide my client a JavaScript plugin which will collect some HTML data and send it to my server. I wonder what's the best way to identify my client. Say someone copied the JavaScript and put it on his own website. A similar case is a live chat plugin.
Your question is not entirely clear to me. I have been following it from the beginning, so since no one else has answered, I can say the following:
1. If your JavaScript plugin is meant to be embedded in websites, like a jQuery plugin, then you can't be sure of anything, because the code can easily be modified to remove any security procedure.
2. If your plugin is meant to be installed in browsers, like a Firefox add-on, it can indeed be modified too, but in most cases you can track users simply with cookies or a login procedure.
That said, I think that in the first case (embedding in websites) you could identify the websites by requiring an authentication token stored on the client website's server (requested via AJAX) and added to the HTML data that is sent to your server.
Hopefully you can understand my English :) and I'm not talking pure garbage.

How do end users (hackers) change jQuery and HTML values?

I've been looking for better ways to secure my site. Many forums and Q&A sites say jQuery variables and HTML attributes may be changed by the end user. How do they do this? If they can alter data and elements on a site, can they insert scripts as well?
For instance, I have two jQuery scripts for a home page. The first is a "member only" script and the second is a "visitor only" script. Can the end user log into my site, copy the "member only" script, log off, and inject the script so it'll run as a visitor?
Yes, it is safe to assume that nothing on the client side is safe. Using tools like Firebug for Firefox or Developer Tools for Chrome, end users are able to manipulate (add, alter, delete):
Your HTML
Your CSS
Your JS
Your HTTP headers (data packets sent to your server)
Cookies
To answer your question directly: if you are solely relying on JavaScript (and most likely cookies) to track user session state and deliver different content to members and guests, then I can say with absolute certainty that other people will circumvent your security, and it would be trivial to do so.
Designing secure applications is not easy; it is a constant battle and takes years to fully master. Hacking applications is very easy, fun for the whole family, and can be learned on YouTube in 20 minutes.
Having said all that, hopefully the content you are protecting with JS is not mission-critical or sensitive data. If it is, I would seriously weigh the costs of hiring a third-party developer who is well versed in security to come in and help you out. Because, like I said earlier, creating a truly secure site is not easily done.
Short Answer: Yes.
Anything on the users computer can be viewed and changed by the user, and any user can write their own scripts to execute on the page.
For example, you will up vote this post automatically if you paste this in your address bar and hit enter from this page:
javascript: $('#answer-7061924 a.vote-up-off').click();
It's not really hacking, because you are the end user running the script yourself, only doing actions the end user can normally do. If you allow the end user on your site to perform actions that affect your server in a way they shouldn't be able to, then you have a problem. For example, if I had a way to make that JavaScript execute automatically instead of you having to run it yourself from your address bar, everyone who came to this page would automatically upvote this answer, which would be (obviously) undesired behavior.
Firebug and Greasemonkey can be used to replace any JavaScript: the nature of the browser as a client is such that the user can basically make it do anything they want. Your specific scenario is definitely possible.
Well, if your scripts are public and not protected server-side, then an attacker can run them in any browser, such as Firefox.
You should always keep your protected content behind server-side scripting and grant access via the session (or some other server-side method).
Yes, a user can edit scripts; however, all scripts run on the user's own machine, meaning that anything they alter will only affect their machine and not any of your other visitors.
However, if you have paid content which you feed using a "members-only" script then it's safest if you use technology on the server to distribute your members-only content rather than rely on the client scripts to secure your content.
Most security problems occur when the client is allowed to interact with the server and modify data on the server.
Here's a good bit of information you can read about XSS: http://en.wikipedia.org/wiki/Cross-site_scripting
To put it very simply:
The web page is just an interface for clients to use your server. It can be altered in all possible ways and anyone can send any kind of data to your server.
First, you have to check that the user sending that data to your server has the privileges to do so. This is usually done by checking against the server session.
Then you have to check on the server that you are only accepting the data you want, nothing more and nothing less, and that the data is valid, by validating it server-side.
For example, if there is a mandatory field in some form that the user has to fill out, you have to check that the data is actually sent to the server, because the user may just delete the field from the form and submit it without.
Another example: if you are dynamically adding data from a form to the database, a user may just add a new field, like "admin", set it to 1, and send the form. If you then have an admin field in the database, that user has just made himself an admin.
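The standard defense against that injected-field trick is to allow-list exactly which form fields the server will accept before anything touches the database. A minimal sketch, with illustrative field names:

```javascript
// Only these fields are ever copied from the submitted form body;
// anything else the client invents (e.g. "admin") is silently dropped.
var ALLOWED_FIELDS = ['username', 'email'];

function pickAllowed(body) {
  var clean = {};
  ALLOWED_FIELDS.forEach(function (key) {
    if (Object.prototype.hasOwnProperty.call(body, key)) {
      clean[key] = body[key];
    }
  });
  return clean;
}
```

The same pass is also a natural place to verify that the mandatory fields mentioned above are actually present.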
One of the most important things to remember is to avoid SQL injection.
There are many tools you can use. They are made for web developers to test whether their site is safe. Hackbar is one example.

What are the security considerations for a JavaScript password generator?

For the longest time I was considering using a JavaScript bookmarklet to generate the passwords for the different sites I visit, to avoid the problem of "similar passwords everywhere" yet still be portable. However, after reading this paper it became clear to me that using this method would mean that a single malicious page could compromise my whole security.
Now I'm pondering the following solution: a bookmarklet which would do only one thing: open a URL in a new page with the original URL appended (for example http://example.com/password_man.html?url=slashdot.org). The script located on the page from example.com would do the actual password generation.
Does anybody see any security problem with this approach? While it is less convenient than the original one, as far as I can see, even a malicious page could only see the password it gets and would not have access to sensitive information like the master password. Am I correct in assuming this?
More clarifications:
The generation of the password will be done entirely client-side. The "password_man.html" mentioned in the example above will contain JavaScript code similar to what is already contained in bookmarklets, and it will contain an entry field for you to specify the master password.
The interpretation of the "url" parameter will also be done client-side. I'm thinking of hosting this file as a particular revision in my Google Code account (i.e. v1234 of password_man.html), which would provide assurances that I'm not changing the page underneath the users.
Also, HTTP vs. HTTPS is not an issue, since all the processing is done by the client browser; no data is sent back to the server. You might argue that a MITM attacker could modify the page so that it sends back the generated password (or the master password, for that matter) if you are using a clear-text protocol like HTTP, but if you already have a MITM situation there are easier avenues of attack (for example, snooping the password from the request that submits it, or snooping the session ID, etc.).
Update: after searching around and thinking about the problem, I concluded that there is no way this can be done securely within the same page. Even if the bookmarklet were only to capture the domain and open up a new window (via window.open), a malicious site could always override window.open so that it opens a copy of the page which actually captures the master password (essentially performing a phishing attack).
supergenpass sounds very similar to what you're proposing to make.
If the requirement behind implementing it as a bookmarklet is portability, there are existing multi-platform password managers. For example, I'm using LastPass; it supports all major browsers, also works in Opera Mini, and also comes in bookmarklet form.
You may also want to pass in a passphrase along with the URL; that way there are two things that must be known in order to regenerate the password.
If you pass in just the URL and it always produces the same password, then only one person can use this application.
The odds that two people will use the same passphrase are low, and you can use the same passphrase for every site.
If you use an https connection then it would be more secure from snooping.
I believe you have some usability issues with your approach, and if you use an HTTP connection you will also be vulnerable to snooping. The fact that someone can get the password just by knowing the URL means that this is more vulnerable than using the same password on each site, IMO.
Update:
Due to the clarification my answer changes.
Basically, in JavaScript you can have private members, so that other code can't see the values, unless something like Firebug is used, but then the user is the one looking.
This link will help explain this more:
http://www.crockford.com/javascript/private.html
If you put the master passphrase and all related information in there and do the password generation inside, no other JavaScript code can get at that information, since you expose no getters.
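A minimal sketch of that closure-based privacy, in the style of the linked Crockford article. The inner hash is a toy stand-in (not suitable for real passwords), and all names are illustrative:

```javascript
// The master passphrase lives only in the closure created by
// makeGenerator; the returned object exposes a derivation method
// but no getter, so other scripts on the page cannot read it.
function makeGenerator(masterPass) {
  // Toy deterministic hash for demonstration only.
  function hashish(s) {
    var h = 0;
    for (var i = 0; i < s.length; i++) {
      h = (h * 31 + s.charCodeAt(i)) >>> 0;
    }
    return h.toString(36);
  }
  return {
    passwordFor: function (domain) {
      return hashish(masterPass + '|' + domain);
    }
  };
}
```

The passphrase is still visible to anyone debugging the page (e.g. with Firebug), but as noted above, that person is the user themselves.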
This should enable you to have a secure password generation page.
If you don't mind a Firefox-centric solution, take a look at Password Hasher. There is a "portable page" option that allows you to generate a page that can be used with other browsers, but I have only tried it with Chrome.
The source for it is available here if you want to adapt it for another browser.
JavaScript's PRNGs are usually not cryptographically strong: https://bugzilla.mozilla.org/attachment.cgi?id=349768 ; so if you use them to generate passwords, other sites might be able to guess the generated password or even influence the password chosen.
