Issue with PCI Scan Failing Because of jQuery - javascript

Hoping someone can help me here. Long story short, I'm working with a client that uses Trustwave as their firewall provider and for their PCI scans. This particular location keeps failing its scans with the same two errors over and over for jQuery:
jQuery Core rquickExpr variable with Cross-Site Scripting Vulnerability, CVE-2012-6708
and
jQuery Cross-Domain Asynchronous JavaScript and Extensible Markup Language (AJAX) Request Cross-site Scripting Vulnerability, CVE-2015-9251
Now the issue here is that the network being scanned is a point-of-sale system. Why is this happening? Isn't jQuery a script library for web content developers? The scans indicate that there is TCP communication over port 80 related to jQuery, but I can't confirm that when I check from the command prompt. There is no program running that should be using jQuery or anything JavaScript-related. How can I find what is using this script library?
Another snippet from the scan indicated the following for the failures.
Evidence:
Match: '1.7.1' is less than '3.0.0'
Remediation:
Upgrade jQuery to version 3.0.0 or higher.
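The scanner's evidence line boils down to a numeric, part-by-part comparison of dotted version strings. A minimal sketch of that kind of check (a hypothetical helper, not Trustwave's actual code):

```javascript
// Compare dotted version strings numerically, part by part.
// Returns -1, 0, or 1, like a standard comparator.
function compareVersions(a, b) {
  var pa = a.split('.').map(Number);
  var pb = b.split('.').map(Number);
  for (var i = 0; i < Math.max(pa.length, pb.length); i++) {
    var na = pa[i] || 0;
    var nb = pb[i] || 0;
    if (na !== nb) return na < nb ? -1 : 1;
  }
  return 0;
}

// The scanner's evidence: '1.7.1' is less than '3.0.0'
console.log(compareVersions('1.7.1', '3.0.0') < 0); // true
```

As for finding what serves the library: if the point-of-sale device exposes any web UI on port 80, opening it in a browser and running `jQuery.fn.jquery` (or `$().jquery`) in the developer console reports the loaded jQuery version; "view source" on that page shows which script tag delivers it.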

Related

How to ensure that JavaScript page does not communicate

I created a small JavaScript application for which I reused some (quite large) JavaScript resources that I downloaded from the internet.
My application runs in the browser like other interactive web applications but works entirely offline.
However, I intend to enter some private information in the application which it shall visualize. Since I cannot ultimately trust the JavaScript pieces that I downloaded, I wonder if there is a JavaScript option to make sure that no data is downloaded and, in particular, uploaded to the web.
Note that I am aware that I can cut off the local internet connection, change browser settings, or use an application firewall, but that would not be a solution that suits my needs. You may assume that the isolation of a browser instance is safe, that is, no other, possibly malicious, web sites can access my offline JavaScript application or the user data I enter. If there is a secure way to (automatically) review the code of the downloaded resources (e.g. because communication is possible only via a few dedicated JavaScript calls that I can search for), that would be an acceptable solution too.
You should take a look at Content Security Policy (CSP) (see here and here). It basically blocks every connection from your browser to any other host unless explicitly allowed. Be aware that not all browsers support CSP, which leaves a potential security gap.
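For a purely offline page, a very strict policy can be declared directly in a meta tag (a minimal sketch; the directive set is illustrative, not the only valid choice):

```html
<!-- Block all network fetches; allow only same-origin scripts and inline styles. -->
<meta http-equiv="Content-Security-Policy"
      content="default-src 'none'; script-src 'self'; style-src 'self' 'unsafe-inline'">
```

Since connect-src falls back to default-src 'none', XHR, fetch, and WebSocket connections are all blocked, so the untrusted libraries cannot upload the data you enter.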
Reviewing the library code might be difficult because there are many ways to mask such code pieces.
Find it yourself by watching your browser's network activity while your application is in action.
There are plenty of tools for this. Also, if you know how to use the netstat command-line tool, it ships with Windows.
Here is one cool chrome extension which watches the traffic of the current tab.
https://chrome.google.com/webstore/detail/http-trace/idladlllljmbcnfninpljlkaoklggknp
And here is another extension which can modify selected traffic.
https://chrome.google.com/webstore/detail/tamper-chrome-extension/hifhgpdkfodlpnlmlnmhchnkepplebkb?hl=en
You can set the filters and modify all requests/responses happening in your page.
If you want to write an extension to block requests yourself, check this answer out.

Simulate XMLHttpRequest as from localhost or Remote Connection to a machine

I have a website hosted in IIS (could be another server) that loads when it's called on localhost but not from an external machine, like: http://:8081/Website.html.
The verification of whether the website is called from localhost happens on the client, in a JS script that I can't modify as it's encrypted.
So I was thinking of two options:
Develop an ASP application that has a remote desktop connection to the machine that hosts the website (not many examples on how to do this).
Maybe change the IIS configuration (didn't find how).
I'm out of ideas.
Do you have any other solution, or can you point out how I can do one of the above?
I have tried the WinForm solution from here: https://www.codeproject.com/kb/cs/remotedesktop_csharpnet.aspx and it doesn't work, and I'd prefer a website anyway.
Updates:
The only working solution that I have for now is to configure Remote Desktop Services (Web Access), as I host the application on Server 2008 R2. Then I shared only the browser that has the localhost page as its default page.
The JavaScript files are all minified and encrypted; if I search for localhost as a word in all the files, nothing shows up. So fixing the client will be hard.
Is it possible to create a new Site Binding in IIS and access the site using the binding hostname? This requires your network DNS to register the hostname to the IP address.
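A quick way to test such a binding without waiting for DNS registration (a sketch; the hostname and address below are placeholders) is a hosts-file entry on the client machine:

```
# C:\Windows\System32\drivers\etc\hosts on the client machine
192.168.1.10    mysite.local
```

After this, http://mysite.local:8081/Website.html resolves to the server, though the client-side localhost check may still reject the non-localhost hostname.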
I assume you are dealing with encrypted(???) JavaScript that is hardcoded to render the DOM only if it is loaded from localhost.
If by encrypted you mean minified, you should still be able to find a reference to "localhost" and modify the JavaScript in its minified form. If it is really encrypted by a wrapper from a third-party JavaScript library, then I would suggest you rewrite the JavaScript. I mean, how can there be any quality in code that is hardcoded to load only from localhost?
Fix the client and stop exploring other solutions like remote desktop connections. None of them are practical, sustainable solutions.
I think you need to use WebRTC, but it's only supported in Chrome and Firefox. It allows two users to communicate directly, browser to browser, using the RTCPeerConnection API.

how to inject a script tag manually in a browser instance in protractor?

I have the Dynatrace JS agent piece of code, and I am trying to inject that minified code into the browser instances that pop up when the Protractor tests are running on the Selenium grid.
The reason this is not injected automatically is that they are running in a Docker container. What would be the best way to do a manual injection of the code in this case?
I tried doing this:
var dtagent = require('./dtagent-test.js');
browser.driver.executeScript("dtagent");
dtagent contains the minified Dynatrace code that needs to be injected.
But that did not work; it complained that window is not defined.
Any idea how this can work?
Thanks!
I assume this is the Dynatrace JavaScript Agent for UEM (User Experience Management)? If so, you need to make sure that you have a Dynatrace Web Server or Java Agent installed on your web/app server. Why? Because this JavaScript file is delivered by the Dynatrace agent on the server. Also, the JavaScript file captures data in the browser and POSTs it back to your web/app server, which likewise requires the Dynatrace agent installed there.
So, whether you do manual or automatic injection, you have to have the Dynatrace agent installed on your server side.
Andi
To understand more about the issue, we need some information:
Are you using On-Premise (self-hosted Dynatrace Server) or the SaaS-portal-hosted Dynatrace?
Where is './dtagent-test.js' actually pointing:
a. If using the SaaS portal, is the proper URL given?
b. If using On-Premise, then, as Andi described, has the respective agent been configured and is the Dynatrace Collector connecting properly?
Let us know more so that we can drill down and help you. :)

Client side includes on local machine

I obviously can't use server side languages, this is just a page on my desktop.
I tried using AJAX with jquery, but I get the following error message
Sorry but there was an error: 0 [Exception... "Access to restricted URI denied" code: "1012" nsresult: "0x805303f4 (NS_ERROR_DOM_BAD_URI)" location: "https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js Line: 4"]
It has to do with the browser not loading scripts because it's hosted locally or something. So is there any way I can include files on a local machine without installing web server software?
This is a same-origin policy error: you can't make HTTP requests to third-party sites (URLs not on your origin). You would need to use a proxy to request the page (or have the owner allow cross-origin requests from your site via CORS).
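For context on what "have the owner allow" means: the remote server opts in by sending a CORS response header (a sketch; the origin value is a placeholder):

```
Access-Control-Allow-Origin: https://your-site.example
```

Without that header (or a proxy on your own origin), the browser blocks your script from reading the cross-origin response.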
Shameless plug of a library that I wrote to solve a similar problem. We wanted to be able to splice HTML files for backend implementations without the overhead of a local HTTP server implementing server-side includes. This library works over HTTP or on the local filesystem. But, as the repository README notes, you'll have to pass the --allow-file-access-from-files flag to your Chrome runtime. Other browsers work out of the box.
https://github.com/LexmarkWeb/csi.js
<div data-include="/path/to/include.html"></div>
The above will take the contents of /path/to/include.html and replace the div with it.

Firefox: Signed script shows scary certificate dialog

The context: I'm writing JavaScript to run an executable and tweak some registry entries on the client machine. I've signed the .JAR using SignTool and my company's Authenticode certificate, but running the script produces a dialog saying:
There is no mention of the root certificate authority (in this case Comodo, I believe), so I could just as well have generated a self-signed certificate to put the company name string in the dialog.
My question is: is this all the user is meant to see? This example page at jar:http://www.mozilla.org/projects/security/components/signed-script-demo.jar!/signed-script-demo.html shows the same dialog, but there's still a lack of any "examine this certificate" link or mention of a root CA.
Are there any recent resources for writing signed scripts? The Mozilla pages are mostly several years old, and many reference now-defunct documentation at developer.netscape.com.
-- Martin
The code that runs those signed jars and elevates privileges hasn't changed in years either, so that documentation should still be correct. The code hasn't really been touched because nobody on the web uses that stuff. Yes, I'm aware of the chicken-and-egg problem here with the crappy UI.
You could try filing a bug with Mozilla about this, but I'm not sure it'd get worked on (but patches would likely be welcomed).
We were forced to use signed scripts to access our Firefox addon from JavaScript. I wrote my experience about it here.
Shortly:
encapsulate your privileged logic in a separate HTML+JS page
make this page do the actual work on page load
sign it and put it on the server (you need packaging, a custom content-type, etc.)
on ordinary (unsigned) pages: load the signed page into a hidden IFRAME and interoperate with it using JavaScript callbacks
