I'm facing a problem with Chrome on both Ubuntu 15.04 and Windows 10. It's some sort of malware named xnxx-ads.js. This malware opens unwanted tabs and plays advertisement audio on all sorts of pages. For instance, I might have a Stack Overflow tab open with a speaker icon, playing an ad.
What matters to me (as a web application developer) is how this malware works. How can a script be loaded on a web page without being referenced in the source? Is it because of a security hole in Google Chrome?
BTW, my Chrome is: Version 46.0.2490.86 (64-bit) on both operating systems.
[UPDATE]
My Chrome was just updated to Version 47.0.2526.73 (64-bit) and the problem remains.
To get malware inserted into pages, you generally need one of these things:
1. If it is only on a specific site, it is possible that the site itself has been compromised and the content arrives already infected.
2. Something at your ISP is compromised and the content arrives from your ISP already infected.
3. Something in your own network (e.g. your router) is compromised and the content arrives on your PC already infected.
4. A malicious program got itself installed on your computer and is injecting things into web pages as they arrive (either by modifying the incoming TCP stream or by messing with the browser).
5. A malicious browser extension got itself installed on your computer and is injecting things into web pages as the browser loads them.
The most likely options are 4 and 5.
You can probably rule out 1, 2, and 3 by checking the site on your phone or tablet while attached to your home network's Wi-Fi. If the pages viewed on the phone or tablet show no infection, then it is unlikely to be 1, 2, or 3.
If you disable all browser extensions in Chrome and the problem still occurs, then you can probably rule out #5. If the problem goes away when you disable all browser extensions, then you probably have a bad browser extension.
In all cases, you should run a good malware detector. When something like this happened to my daughter's computer, Microsoft Defender did not detect it, but the free Malwarebytes scanner found the problem and removed it.
I've developed interactive content for a client (VR Objects) using JavaScript and Flash (where needed) that they now want to distribute to prospective customers via a flash drive. That makes it local content, causing security issues, especially with IE. Actually, there doesn't seem to be much of a problem with any browser except IE. True, IE displays the "allow blocked content" button, but they fear that is too complicated or scary. And on IE11 in Win 8.1 it still may not work.
The development environment I use has a way around that for testing, using an "embedded web server", although all that seems to do is produce a localhost address such as http://localhost:60331/wyj-01xn/output/surfacide_flash.html. Paste that in the URL bar of any browser on the same machine and you're good; try it on another machine and no go. So I gather the port and whatever the /wyj-01xn/ part is about are machine-specific. Another possible problem: it may not work easily with IE11 on Win 8.1, but I don't personally have that setup to test.
QUESTION: Is there a way I can produce this same functionality for my client, distributed along with the content on the flash drive, without needing to install special software (a local web server) on each customer's computer? The current workaround is to tell customers to use any browser except IE. The client isn't happy.
You could distribute your webpages along with a portable Nginx server, or wrapped inside a Node-webkit or AppJS package.
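For the Node-webkit route, the app is described by a package.json placed next to your HTML entry point. A minimal sketch might look like this (the name and entry file are illustrative; node-webkit reads this manifest and opens the given page in its own window, so nothing loads from file://):

```json
{
  "name": "vr-objects-demo",
  "main": "surfacide_flash.html",
  "window": {
    "title": "VR Objects Demo",
    "width": 1024,
    "height": 768,
    "toolbar": false
  }
}
```

You would then ship the node-webkit runtime alongside this on the flash drive, so the customer double-clicks one executable instead of opening an HTML file in IE.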
The Google Trusted Store badge is not showing across browsers and platforms.
I can get it to show in Safari Mac but not Chrome or Firefox Mac.
I can get it to show in IE Win and Firefox Win but not Chrome Win.
I went through Google's implementation tips.
Doctype checks out.
Google's Tag Assistant validates on the page.
The test, Test Drive, of the js implementation in Trusted Stores works fine.
robots.txt is also delivered under ssl.
Any ideas?
Google response:
We are writing to you because we noticed a posting your team made asking about the Trusted Stores badge visibility on your site.
I can confirm that your account, qxxxxxxxxxxxxxe.com, is in good standing. The badge is not displaying for half of users due to a few-week experiment we are running with all merchants in the program.
We run experiments from time to time, as we are always looking to improve the user experience with your site and the program. For example, we have made improvements to the badge design and behavior, such as only opening the flyover on click (instead of mouseover).
In Firefox 23, mixed content blocking was added. It means that Firefox blocks insecure content on the page you're visiting and shows a shield icon in the address bar, which blocks some uploads in my app. From the development side, how do I turn off this behavior? I'm developing in Ruby on Rails.
Can anybody guide me?
You cannot turn this off remotely! Except in your own browser, of course.
That is: Your rails application cannot turn off mixed-content blocking in the browser.
This is a preference only a (skilled) user may change in her browser... But shouldn't in the age of Firesheep, etc.
Instead, you should make all your active content available via https.
Or downgrade to insecure HTTP. Since you essentially want to allow man-in-the-middle attacks anyway (that's what mixed content means), the result of using HTTP in the first place wouldn't be much different. The only difference is that a MITM could stay passive with HTTP-only, instead of having to actively modify data in mixed-content HTTPS. But seriously, what percentage of your users would recognize an active MITM, who might even run only a small targeted attack?
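In a Rails app, the usual way to serve everything over HTTPS is a one-line production setting. A sketch, assuming Rails 4+ (older versions use YourApp::Application.configure instead):

```ruby
# config/environments/production.rb
Rails.application.configure do
  # Redirect all HTTP requests to HTTPS, flag cookies as secure, and send
  # an HSTS header, so the app itself never generates http:// subresources.
  config.force_ssl = true
end
```

With every asset and upload endpoint on HTTPS, the mixed-content blocker has nothing to block.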
I think you are using a Firefox version below 23.0. My suggestion is to upgrade first and then proceed:
1. Uninstall and reinstall Firefox using the Ubuntu Software Center (the new version of Firefox is available there).
2. Reboot the system.
Your Firefox will then be upgraded to version 23.0.
I'm developing a local JavaScript webapp for demo purposes. The webapp consists of a single HTML page and a few JS files that are included into the app using <script> tags in <head>.
When I run this file (from the local filesystem on windows) on FF or Chromium, everything is as it should be - the app works fine.
When I run it in IE9, I get an "Internet Explorer restricted this page from running scripts or ActiveX controls" warning and the app fails to load properly. Clicking "Allow content" does not help much because the app is already a train wreck.
However, when I host a local web server with
python -m http.server 8888
and point IE to it - everything works fine.
Because this is a corporate setting I am not interested in changing the security settings.
I've dealt with the problem by moving these files to a server, but the question remains: why does IE treat files from the filesystem (even within the same directory) as some sort of cross-site request or security risk?
PS. Bonus WTF: When opening the page with the developer tools on, everything is ok.
EDIT: In case you're wondering: I did add a closing script tag.
<script type="text/javascript" src="vendor/d3.v3.js"></script>
why does IE treat files from the filesystem [as a] security risk?
Historical Reasons.
When Microsoft came up with the idea of web security Zones, they originally decided that the My Computer Zone, containing the local filesystem, was more trusted than the Internet Zone.
This almost sounds like a sensible thing to do, except that (a) users expect web pages they download not to gain a load of privileges when run from the hard disc, and (b) lots of programs download files from the internet and put them in a predictable place... so if you can persuade them to download an HTML file, you are persuading them to inject privileged script into the My Computer Zone.
The original settings for the My Computer Zone were to allow ActiveX controls to install and run without prompting. This meant that if you could ever get some HTML onto the filesystem, you essentially had an execute-arbitrary-code security vulnerability. There were lots of web exploits that leveraged this as part of their infection mechanism to load malware.
Microsoft feared any change to My Computer Zone security settings would break applications that used the web browser control to render their own HTML content as part of their UI. So instead, the web browser control defaulted to existing settings, and browsers such as IE that used it were invited to enable "Local Machine Lockdown" mode, which would drop the extra privileges My Computer Zone pages got by default. IE turned this on by default.
Unfortunately in a classic over-reaction, "Local Machine Lockdown" was not just the same level of privilege as the Internet Zone would have been, but even more restrictive - blocking JavaScript as well as ActiveX. This broke pages that users had saved to the hard disc, so to work around that IE adds a marker to pages it downloads to allow them to escape the (formerly privileged, now restricted) My Computer Zone and be treated as normal Internet Zone pages.
This is the Mark of the Web and you can include it in your static files to make them behave normally too.
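The Mark of the Web is just an HTML comment near the top of the file declaring which zone the page should be treated as. For a generic page that should get ordinary Internet Zone treatment, it looks like this (the (0014) is the character count of the URL that follows; the page content here is a placeholder):

```html
<!-- saved from url=(0014)about:internet -->
<!DOCTYPE html>
<html>
  <head><title>Demo</title></head>
  <body>
    <!-- Scripts in this file run under Internet Zone rules,
         not Local Machine Lockdown, even when opened from disk. -->
  </body>
</html>
```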
Of course this makes the added restrictiveness of Local Machine Lockdown completely pointless, as any file can opt out.
But then the whole thing is now completely pointless, because since then the default settings of the Local Machine Zone have been changed and now resemble the Internet Zone more closely, not allowing arbitrary ActiveX. So that's a lot of confusing added complexity for no gain whatsoever.
Suppose there is an extension for Google+: when I'm visiting plus.google.com, it's running, but what happens if I close the Google+ tab? Is it still running and consuming my computer's resources?
PS: I ask because, if this is true, I could write an extension that enables or disables other extensions according to the website I'm visiting, so maybe my Chrome would be faster.
It depends.
The author of a Chrome extension can tell Chrome that the extension should only be active on particular websites. However, no matter the website you are visiting, the extension will always be running. To observe this phenomenon for yourself, hit Shift+Esc to display the task manager. Note the extension processes. You can see by trial and error that if Chrome is running, all of your enabled [background] extensions are also running.
The benefit of the Chrome extension developer specifying particular websites is that, even though the extension is always running, it will not receive event notifications for websites that don't apply to it - basically, it will be sleeping. So the effect is appreciable.
For more information about Chrome extension configuration options, see the Chrome extension manifest documentation.
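The "particular websites" restriction is declared in the extension's manifest.json. A minimal sketch (the name, script file, and match pattern are illustrative; manifest_version 2 was current at the time):

```json
{
  "name": "Google+ helper (illustrative)",
  "version": "1.0",
  "manifest_version": 2,
  "content_scripts": [
    {
      "matches": ["https://plus.google.com/*"],
      "js": ["content.js"]
    }
  ]
}
```

With this, content.js is only injected on matching pages; on every other site the extension sits idle.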
Edit: Please see Serg's answer re: modifying other extensions.
There are two types of extensions from a resource-consumption point of view: those that have a background page and those that don't. The permission warnings you see in the gallery don't give you any indication which kind an extension is.
Extensions without a background page consume resources only (well, probably mostly) when used. Those with one always consume memory, and might consume CPU depending on what they are doing there.
You can very easily write an extension that disables all others with the management API, and the benefit will be noticeable in performance (I actually wrote one for myself).
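A sketch of the per-site decision logic such an extension needs. The extension IDs and site list are made up; the real chrome.management.setEnabled and chrome.tabs.onUpdated APIs only exist inside Chrome (with the "management" permission), so they appear here only in comments:

```javascript
// Illustrative mapping of extension IDs to the hosts they are useful on.
const siteRules = {
  "aaaaextensionidforgoogleplus": ["plus.google.com"],
  "bbbbextensionidforgithub": ["github.com", "gist.github.com"],
};

// Pure decision logic: which extensions should be enabled for a hostname?
function extensionsToEnable(hostname) {
  return Object.keys(siteRules).filter((id) =>
    siteRules[id].includes(hostname)
  );
}

// Inside the real extension, a tab-change handler would then do roughly:
//   chrome.tabs.onUpdated.addListener((tabId, info, tab) => {
//     const enabled = extensionsToEnable(new URL(tab.url).hostname);
//     for (const id of Object.keys(siteRules)) {
//       chrome.management.setEnabled(id, enabled.includes(id));
//     }
//   });
```

Keeping the decision logic as a plain function makes it easy to test outside the browser.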