Take Screenshot of Browser via JavaScript (or something else) - javascript

For support reasons I want users to be able to take a screenshot of the current browser window as easily as possible and send it over to the server.
Any (crazy) ideas?

That would be a pretty big security hole in JavaScript if you could do this. Imagine a malicious user injecting that code into your site via an XSS attack and then screenshotting all of your daily work. Imagine that happening with your online banking...
However, it is possible to do this sort of thing outside of JavaScript. I developed a Swing application that used screen capture code like the snippet below, and it did a great job of sending an email to the helpdesk with an attached screenshot whenever the user encountered a RuntimeException.
I suppose you could experiment with a signed Java applet (shock! horror! noooooo!) that hung around in the corner. If executed with the appropriate security privileges given at installation it might be coerced into executing that kind of screenshot code.
For convenience, here is the code from the site I linked to:
import java.awt.Dimension;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.io.File;
...
public void captureScreen(String fileName) throws Exception {
    // Grab the full primary screen with java.awt.Robot and save it as a PNG.
    Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
    Rectangle screenRectangle = new Rectangle(screenSize);
    Robot robot = new Robot();
    BufferedImage image = robot.createScreenCapture(screenRectangle);
    ImageIO.write(image, "png", new File(fileName));
}
...

Please see the answer shared here for a relatively successful implementation of this:
https://stackoverflow.com/a/6678156/291640
Utilizing:
https://github.com/niklasvh/html2canvas

You could try to render the whole page into a canvas and send that image back to the server. Have fun :)
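A minimal sketch of that idea, assuming the page has already been rendered into a canvas (for example by html2canvas) and assuming a hypothetical /upload endpoint on your server:
// 'canvas' is assumed to already hold a rendering of the page.
var dataUrl = canvas.toDataURL('image/png'); // base64-encoded PNG
var xhr = new XMLHttpRequest();
xhr.open('POST', '/upload'); // hypothetical server endpoint
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.send(JSON.stringify({ image: dataUrl }));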

A webpage can't do this (or at least, I would be very surprised if it could, in any browser), but a Firefox extension can. See https://developer.mozilla.org/en/Drawing_Graphics_with_Canvas#Rendering_Web_Content_Into_A_Canvas -- when that page says "Chrome privileges", it means an extension can do it, but a web page can't.
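For reference, a rough sketch of what such privileged extension code looks like, based on that MDN page (this will not run in an ordinary web page):
// Chrome-privileged code only (e.g. inside a Firefox extension).
var canvas = document.createElementNS('http://www.w3.org/1999/xhtml', 'canvas');
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
var ctx = canvas.getContext('2d');
// drawWindow is a Mozilla-only privileged API; it paints live web content into the canvas.
ctx.drawWindow(window, window.scrollX, window.scrollY,
               window.innerWidth, window.innerHeight, 'rgb(255,255,255)');
var dataUrl = canvas.toDataURL('image/png'); // could then be uploaded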

Seems to me that support needs (at least) answers to two questions:
What does the screen look like? and
Why does it look that way?
A screenshot -- a visual -- is very necessary and answers the first question, but it can't answer the second.
As a first attempt, I'd try to send the entire page up to support. The support tech could display that page in his browser (answering the first question) and could also see the current state of the customer's HTML (which helps answer the second).
I'd try to send as much of the page as is available to the client-side JS, by way of AJAX or as the payload of a form. I'd also send info not on the page: anything that affects the state of the page, like cookies or session IDs or whatever.
The customer might have a submit-like button to start the process.
I think that would work. Let's see: it needs some CGI somewhere on the server that catches the incoming user page and makes it available to support, maybe by writing a disk file. Then the support person can load (or have loaded automatically) that same page. All the other info (cookies and so on) can be put into the page that support sees.
PLUS: the client JS that handles the submit button's onclick() could also include any useful JS variable values!
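For example, a minimal sketch of that onclick handler (the /support/snapshot endpoint and the supportBtn id are hypothetical):
// Send the live DOM plus some client state to a hypothetical support endpoint.
document.getElementById('supportBtn').onclick = function () {
    var payload = {
        html: document.documentElement.outerHTML, // current state of the page
        url: location.href,
        cookies: document.cookie,                 // non-HttpOnly cookies only
        userAgent: navigator.userAgent
    };
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/support/snapshot');
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify(payload));
};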
Hey, this can work! I'm getting psyched :-)
HTH
-- pete

I've seen people do this with two approaches:
set up a separate server for screenshotting and run a bunch of Firefox instances on there; check out these two gems if you're doing it in Ruby: selenium-webdriver and headless
use a hosted solution like http://url2png.com (way easier)

You can also do this with the Fireshot plugin. I use the following code (extracted from the API code so I don't need to include the API JS) to make a direct call to the Fireshot object:
var element = document.createElement("FireShotDataElement");
element.setAttribute("Entire", true);
element.setAttribute("Action", 1);
element.setAttribute("Key", "");
element.setAttribute("BASE64Content", "");
element.setAttribute("Data", "C:/Users/jagilber/Downloads/whatev.jpg");
if (typeof(CapturedFrameId) != "undefined") {
    element.setAttribute("CapturedFrameId", CapturedFrameId);
}
document.documentElement.appendChild(element);

var evt = document.createEvent("Events");
evt.initEvent("capturePageEvt", true, false);
element.dispatchEvent(evt);
Note: I don't know if this functionality is only available for the paid version or not.

Perhaps http://html2canvas.hertzen.com/ could be used. Then you can capture the display and then process it.

You might try PhantomJS, a headless browsing toolkit.
http://phantomjs.org/
The following JavaScript example demonstrates basic screenshot functionality:
var page = require('webpage').create();
page.settings.userAgent = 'UltimateBrowser/100';
page.viewportSize = { width: 1200, height: 1200 };
page.clipRect = { top: 0, left: 0, width: 1200, height: 1200 };
page.open('https://google.com/', function () {
    page.render('output.png');
    phantom.exit();
});

I understand this post is 5 years old, but for the sake of future visitors I'll add my own solution here, which I think solves the original question without any third-party libraries apart from jQuery.
var pageClone = $('html').clone();
// Make sure that CSS and images load correctly when opening this clone
pageClone.find('head').append("<base href='" + location.href + "' />");
// OPTIONAL: Remove potentially interfering scripts so the page is totally static
pageClone.find('script').remove();
var htmlString = pageClone.html();
You could remove other parts of the DOM you think are unnecessary, such as the support form if it is in a modal window. Or you could choose not to remove scripts if you prefer to maintain some interaction with dynamic controls.
Send that string to the server, either in a hidden field or by AJAX, and then on the server side just attach the whole lot as an HTML file to the support email.
The benefits of this are that you'll get not just a screenshot but the entire scrollable page in its current form, plus you can even inspect and debug the DOM.
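For the AJAX route, a one-line sketch using jQuery (the /support-snapshot endpoint is hypothetical):
// POST the captured markup; the server would attach it to the support email.
$.post('/support-snapshot', { html: htmlString, url: location.href });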

Print Screen? Old school and a couple of keypresses, but it works!

This may not work for you, but on IE you can use the snapsie plugin. It doesn't seem to be in development anymore, but the last release is available from the linked site.

I think you need an ActiveX control; I can't imagine doing it without one. You can ask the user to install it first; after installation on the client side, the ActiveX control should work and you can capture the screen.

We are temporarily collecting Ajax states, data in form fields and session information. Then we re-render it at the support desk. Since we test and integrate for all browsers, there are hardly any support cases for display reasons.
Have a look at the red button at the bottom on holidaycheck.
Alternatively there is html2canvas (a reverse-engineered version of the technique Google uses for feedback screenshots). But it is only applicable to newer browsers and I've never tried it.
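For illustration, collecting form-field state like that might look roughly as follows (this is an assumption about the approach, not their actual code; the endpoint is hypothetical):
// Gather the current values of all form fields so support can re-render the page state.
var fields = {};
var inputs = document.querySelectorAll('input, select, textarea');
for (var i = 0; i < inputs.length; i++) {
    fields[inputs[i].name || inputs[i].id] = inputs[i].value;
}
// The collected state would then be POSTed to the support desk, e.g. to '/support/state'.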

In JavaScript? No. I do work for a security company (sort of NetNanny type stuff) and the only effective way we've found to do screen captures of the user is with a hidden application.

Related

Encryption of the link address <a href="http://www.mycompany.com"> so it does not appear in source view on IE toolbar

This is probably a simple question, but I can't seem to find what I am looking for on the web, so here goes. I have a link on my company intranet site whose actual web address senior management does not want the employees to see (via the Source option on the View menu of IE).
Please let me know how I can do this in HTML, ASP.NET or JS.
Thanks!
:)
You can't. Tell senior management to quit being so secretive.
Not sure if this is what you want, but here is a similar question:
php encrypt and decrypt
Does it help at all? There is another, but it is PHP code:
http://php.net/manual/en/function.mcrypt-encrypt.php
Also, what language are you looking to implement the code in?
Alternatively, you can use this site: http://www.iwebtool.com/html_encrypter and in the box you type your HTML, e.g.
This is your post link
Then use the "Encrypt" button. It will return the JavaScript you are looking for, e.g.:
"<"Script Language='Javascript'>
document.write(unescape('%3C%61%20%68%72%65%66%
3D%22%68%74%74%70%3A%2F%2F%73%74%61%63%6B%6F%76%65%72%66%6C%
6F%77%2E%63%6F%6D%2F%70%6F%73%74%73%2F%31%35%39%33%34%36%39%
36%22%3E%54%68%69%73%20%69%73%20%79%
6F%75%72%20%70%6F%73%74%3C%2F%61%3E'));
</Script>
No jsFiddle because that javascript isn't allowed.
First and foremost, it's impossible to hide the url from the browser. The browser has to request the webpage from the server, and even if the url was obscured somehow, it would have to be plaintext in the HTTP Request, which would open it up to a man-in-the-middle utility like Fiddler.
Second, this feels like security through obscurity. Resources that certain people shouldn't have access to should be locked down explicitly, not just hidden because the user doesn't know the url (yet).
However, purely as a thinking exercise... I suppose... you could write a handler that knows the real url, uses code to retrieve the content of the page, and then writes that to the response. So the users would see the handler url, but not where the handler is pulling its data from. However, you'd then have to go to great lengths to find all links and resources on the page and convert those references to also go through your handler.
Of course, practically speaking, I think this concept is silly. There's some problem your senior management is trying to solve, and hiding the url from the user is not the answer.
If upper management is this secretive, then it's a safe bet your IT department has browsers locked down as well, meaning Internet Explorer. It's possible that your IT team could force the address bar to hide for all browsers within your company. I don't think this can be done on a per-request basis, meaning the address bar would either be on or off all the time.
According to this post, your IT team might be able to update the registry to hide the address bar like so:
Run the following registry file:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Internet Explorer\ToolBars]
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Internet Explorer\ToolBars\Restrictions]
"NoNavBar"=dword:00000001
Here's a google search that might also offer additional information.
Rather than making it disappear, you can make it hard for others to see through, and even impossible for those who have no knowledge of Base64. Here is the code:
var a = document.querySelectorAll("*");
for (var b = 0; b < a.length; b++) {
    if (a[b].hasAttribute("data-href")) {
        a[b].href = atob(a[b].getAttribute("data-href"));
    }
}
Now you can write something like this:
<a data-href="aHR0cDovL3d3dy5teWNvbXBhbnkuY29t">Go</a>
By using btoa() I converted "http://www.mycompany.com" to "aHR0cDovL3d3dy5teWNvbXBhbnkuY29t" in Base64, and the loop above decodes the "data-href" attribute back into a real href. Behind all this it will look and act like:
Go

disable (Ctrl+U) keyboard script to prevent view source [duplicate]

This question already has answers here:
How to disable (View Source) and (Ctrl + C ) from my site
(10 answers)
Closed 9 years ago.
I'm looking for a script that disables the keyboard shortcut, to protect hidden content.
It is not possible. A user will always be able to view your source since he needs to download it in order to render the page.
There are more ways to view source than what you are trying to prevent:
Using firebug
Using wget
Right clicking on content choosing 'view source'
Using the menu option
Via man in the middle
probably more...
This is not very complicated, but it's totally unreliable. It's the same with all other JavaScript protections.
First the trick (IE incompatible):
function denyKey(event) {
    var code = event.keyCode;
    if (event.ctrlKey) {
        if (code == 85) { // 85 is the keyCode for 'U'
            return false;
        }
    }
}
window.addEventListener("keydown", denyKey);
My code is just a scratch; it is not cross-browser. This is where to get keyCodes. I did not put much effort into the code since I want to discourage you from using it.
Once you send data to the user, he can read it unless you encrypt it without giving him the key. This means any:
Javascript authentication
Secret loading pages
Javascript "Wait before download..."
Blocked mouse buttons
..can and will be bypassed by the user.
I have a bookmarklet to unblock mouse buttons for example.
That is not possible, and even if it were, it would be horrible protection. Even I could write a simple script that fetches the source of an arbitrary page. Everything the client sees is 'view source'-able. Only server-side code is safe. Even if it were only possible to view your page through a real browser (and you can't make it so), you'd probably overlook an accelerator key or other shortcut. If you don't want the client to see some code, don't give it to him! Keep it server-side (and not in a .txt file; that's accessible too) or don't keep it at all.

Capture Browser Web Page [duplicate]

Is it possible to take a screenshot of a webpage with JavaScript and then submit that back to the server?
I'm not so concerned with browser security issues, etc., as the implementation would be for an HTA. But is it possible?
Google is doing this in Google+, and a talented developer reverse engineered it and produced http://html2canvas.hertzen.com/. To work in IE you'll need a canvas support library such as http://excanvas.sourceforge.net/.
I have done this for an HTA by using an ActiveX control. It was pretty easy to build the control in VB6 to take the screenshot. I had to use the keybd_event API call because SendKeys can't do PrintScreen. Here's the code for that:
Declare Sub keybd_event Lib "user32" _
    (ByVal bVk As Byte, ByVal bScan As Byte, ByVal dwFlags As Long, ByVal dwExtraInfo As Long)

Public Const CaptWindow = 2

Public Sub ScreenGrab()
    keybd_event &H12, 0, 0, 0            ' press Alt
    keybd_event &H2C, CaptWindow, 0, 0   ' press PrintScreen (active window only)
    keybd_event &H2C, CaptWindow, &H2, 0 ' release PrintScreen
    keybd_event &H12, 0, &H2, 0          ' release Alt
End Sub
That only gets you as far as getting the window to the clipboard.
Another option, if the window you want a screenshot of is an HTA would be to just use an XMLHTTPRequest to send the DOM nodes to the server, then create the screenshots server-side.
Another possible solution that I've discovered is http://www.phantomjs.org/ which allows one to very easily take screenshots of pages and a whole lot more. Whilst my original requirements for this question aren't valid any more (different job), I will likely integrate PhantomJS into future projects.
Pondering if this is possible to do by rendering the whole body element into a canvas and then using canvas2image?
http://www.nihilogic.dk/labs/canvas2image/
A possible way to do this, if you're running on Windows and have .NET installed:
using System.Drawing;
using System.Windows.Forms;

public Bitmap GenerateScreenshot(string url)
{
    // This method gets a screenshot of the webpage
    // rendered at its full size (height and width).
    return GenerateScreenshot(url, -1, -1);
}

public Bitmap GenerateScreenshot(string url, int width, int height)
{
    // Load the webpage into a WebBrowser control.
    WebBrowser wb = new WebBrowser();
    wb.ScrollBarsEnabled = false;
    wb.ScriptErrorsSuppressed = true;
    wb.Navigate(url);
    while (wb.ReadyState != WebBrowserReadyState.Complete) { Application.DoEvents(); }

    // Set the size of the WebBrowser control.
    wb.Width = width;
    wb.Height = height;
    if (width == -1)
    {
        // Take the screenshot at the page's full width.
        wb.Width = wb.Document.Body.ScrollRectangle.Width;
    }
    if (height == -1)
    {
        // Take the screenshot at the page's full height.
        wb.Height = wb.Document.Body.ScrollRectangle.Height;
    }

    // Get a Bitmap representation of the webpage as it's rendered in the WebBrowser control.
    Bitmap bitmap = new Bitmap(wb.Width, wb.Height);
    wb.DrawToBitmap(bitmap, new Rectangle(0, 0, wb.Width, wb.Height));
    wb.Dispose();
    return bitmap;
}
And then via PHP you can do:
exec("CreateScreenShot.exe -url http://.... -save C:/shots domain_page.png");
Then you have the screenshot in the server side.
This might not be the ideal solution for you, but it might still be worth mentioning.
Snapsie is an open source ActiveX object that enables Internet Explorer screenshots to be captured and saved. Once the DLL is registered on the client, you should be able to capture the screenshot and upload the file to the server from within JavaScript. Drawbacks: it needs the DLL registered on the client and works only with Internet Explorer.
We had a similar requirement for reporting bugs. Since it was for an intranet scenario, we were able to use browser addons (like Fireshot for Firefox and IE Screenshot for Internet Explorer).
This question is old but maybe there's still someone interested in a state-of-the-art answer:
You can use getDisplayMedia:
https://github.com/ondras/browsershot
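A minimal sketch of the getDisplayMedia approach (the user must grant permission through the browser's screen picker, and a secure context is required; drawing a single video frame to a canvas is one common way to get a still image):
async function captureScreenshot() {
    // Ask the user to pick a screen, window or tab to capture.
    const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
    const video = document.createElement('video');
    video.srcObject = stream;
    await video.play();

    // Draw one frame onto a canvas, then stop the capture.
    const canvas = document.createElement('canvas');
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    canvas.getContext('2d').drawImage(video, 0, 0);
    stream.getTracks().forEach(function (track) { track.stop(); });

    // The data URL could then be sent to the server.
    return canvas.toDataURL('image/png');
}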
SnapEngage uses a Java applet (1.5+) to make a browser screenshot. AFAIK, java.awt.Robot should do the job; the user just has to permit the applet to do it (once).
And I have just found a post about it:
Stack Overflow question JavaScript code to take a screenshot of a website without using ActiveX
Blog post How SnapABug works – and what they should do
I found that dom-to-image did a good job (much better than html2canvas). See the following question & answer: https://stackoverflow.com/a/32776834/207981
This question asks about submitting this back to the server, which should be possible, but if you're looking to download the image(s) you'll want to combine it with FileSaver.js, and if you want to download a zip with multiple image files all generated client-side take a look at jszip.
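A minimal usage sketch for dom-to-image (the /api/screenshot endpoint is hypothetical):
// Render the page body to a PNG data URL, then post it to an assumed endpoint.
domtoimage.toPng(document.body)
    .then(function (dataUrl) {
        return fetch('/api/screenshot', { // hypothetical server endpoint
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ image: dataUrl })
        });
    })
    .catch(function (error) {
        console.error('Capture failed:', error);
    });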
You can achieve that using an HTA and VBScript: just call an external tool to do the screenshotting. I forgot what the name is, but Windows Vista ships with a screenshot tool; you don't even need an extra install for it.
As for making it automatic, it totally depends on the tool you use. If it has an API, I am sure you can trigger the screenshot and saving process through a couple of Visual Basic calls without the user knowing what you did.
Since you mentioned HTA, I am assuming you are on Windows and (probably) know your environment (e.g. OS and version) very well.
If you are willing to do it on the server side, there are options like PhantomJS, which is now deprecated. The best way to go would be headless Chrome with something like Puppeteer on Node.js. Capturing a web page using Puppeteer is as simple as follows:
const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('https://example.com');
    await page.screenshot({path: 'example.png'});
    await browser.close();
})();
However, it requires headless Chrome to be able to run on your servers, which has some dependencies and might not be suitable in restricted environments. (Also, if you are not using Node.js, you might need to handle installation / launching of browsers yourself.)
If you are willing to use a SaaS service, there are many options such as
Restpack
UrlBox
Screenshot Layer
A great solution for screenshot taking in JavaScript is the one by https://grabz.it.
They have a flexible and simple-to-use screenshot API which can be used by any type of JS application.
If you want to try it, first you should get an authorization app key + secret and the free SDK.
Then, in your app, the implementation steps would be:
// include the grabzit.min.js library in the web page you want the capture to appear
<script src="grabzit.min.js"></script>
//use the key and the secret to login, capture the url
<script>
GrabzIt("KEY", "SECRET").ConvertURL("http://www.google.com").Create();
</script>
The screenshot can be customized with different parameters. For example:
GrabzIt("KEY", "SECRET").ConvertURL("http://www.google.com",
{"width": 400, "height": 400, "format": "png", "delay", 10000}).Create();
</script>
That's all.
Then simply wait a short while and the image will automatically appear at the bottom of the page, without you needing to reload the page.
There are other functionalities to the screenshot mechanism which you can explore here.
It's also possible to save the screenshot locally. For that you will need to utilize GrabzIt server side API. For more info check the detailed guide here.
As of April 2020, the GitHub library html2canvas is going strong:
https://github.com/niklasvh/html2canvas
GitHub 20K stars | Azure pipelines: Succeeded | Downloads 1.3M/mo
Quote: "JavaScript HTML renderer. The script allows you to take 'screenshots' of webpages or parts of it, directly on the user's browser. The screenshot is based on the DOM and as such may not be 100% accurate to the real representation as it does not make an actual screenshot, but builds the screenshot based on the information available on the page."
I made a simple function that uses rasterizeHTML to build an SVG and/or an image with the page contents. Check it out:
https://github.com/orisha/tdg-screen-shooter-pure-js

How to manipulate Javascript websites in Perl

I have been asked to automate logging into a webapp (one that runs a lot of .aspx and .js scripts) that, currently, can only run in IE. I program in Perl and have tried to use Win32::IE::Mechanize to run the IE browser and log in. What I did was try to extract all the forms from the webapp and, given the user's information, fill out the required forms. But this is where the problem arises: when I run the subroutine, no forms appear.
So then I transitioned to WWW::Mechanize and used the post subroutine (from LWP::UserAgent), which solved the problem for the most part. Now I've run into a problem with the response from the server: I get this script as the content of the response and I don't know what to do with it.
So my question is: using Perl, how can I manipulate JavaScript functions on a website, so that I can fully log in to the webapp? Would that even be a valid solution to the problem?
I am open to writing this in other programming languages as well. Thanks in advance for the help!
Update: The content of the response:
var msgTimerID;
var strForceLogOff = "false";

function WindowOnLoad() {
    if ("false" == "true" && "false" == "false")
        MerlinSystemMsg("", 64);
    if ("false" == "true")
        msgTimerID = window.setInterval("MerlinSystemMsg(10095,64)", 300000, 'javascript');
}

function MyShowModal() {
    showModalDialog("", window, strFeatures);
}

function clearMsgInterval() {
    window.clearInterval(msgTimerID);
}

function WindowOnUnLoad() {
    if (top.frames(0).document.getElementById("OPMODE").value == "LOGOFF") {
        strFeatures = "width=1,height=1,left=1000,top=1000,toolbar=no,scrollbars=no,menubar=no,location=no,directories=no,status=yes,resizable=1";
        window.open("ForceLogOff.aspx", "forcelogout", strFeatures);
    }
}

window.onbeforeunload = WindowOnUnLoad;
window.onload = WindowOnLoad;
There is also this frame element:
<FRAME TITLE="Service Desk Express Navigator" SRC="options_nailogo.aspx" MARGINWIDTH=0 MARGINHEIGHT=0 NORESIZE scrolling=no>
Trying to emulate the browser with a fully functioning JS engine is going to be a mighty big task. Instead, I'd suggest that you just try to emulate the actual interaction with the web site and not care what HTML/JS is actually sent back. Your server-side code doesn't care how the HTTP submissions take place, only that they do. Admittedly this is more fragile if the forms change a lot, but at least you're not trying to implement a full browser.
So look at modules like LWP::UserAgent, HTTP::Request and HTTP::Response.
I'm copying and pasting my answer to your other duplicate question here
(You should consider deleting one of these?)
That content is the website source :)
How WWW::Mechanize deals with FRAME SRC as a link:
Note that <FRAME SRC="..."> tags are parsed out of the HTML and treated as links, so this method works with them.
You'll want to use follow_link on that link.
As far as dealing with JavaScript goes, there is a Firefox add-on called MozRepl that you can use in conjunction with WWW::Mechanize::Firefox; I have used it in the past to call JavaScript code while crawling a page.

Javascript debugging difficult as browser doesn't refresh the scripts!

I'm trying to debug a JavaScript written in the MooTools framework. Right now I am developing a web application on top of Rails, and my webserver is rails s, which boots WEBrick.
When I modify a particular tree.js file that's called within a MooTools init script,
require: {
    css: [MUI.path.plugins + 'tree/css/style.css'],
    js: [MUI.path.plugins + 'tree/scripts/tree.js'],
    onload: function(){
        if (buildTree) buildTree('tree1');
    }
},
the changes are not loaded, as the headers being sent to the client say Last-Modified: 10 July, 2010..., which is obviously not true since I just modified the file.
How do I get rid of this annoying caching? If I go directly to the script in my browser (Chrome), it doesn't show the changes until I hit refresh, but this doesn't fix my problem: when I go back to my application and hit refresh, it still loads the pre-modified script.
This has happened to me also in FF; I think it is a cache header sent by the server or the browser itself.
Anyway, a simple way to avoid this problem during development is adding a random param to the script's filename:
instead of calling 'tree/scripts/tree.js', use 'tree/scripts/tree.js?' + random. That should invalidate all caches.
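A minimal sketch of that idea, applied to the require block above:
require: {
    css: [MUI.path.plugins + 'tree/css/style.css'],
    // The throwaway query string makes the browser treat each load as a new URL
    js: [MUI.path.plugins + 'tree/scripts/tree.js?' + new Date().getTime()],
    onload: function(){
        if (buildTree) buildTree('tree1');
    }
},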
As frisco says, adding a random number in development does the trick, but you will likely find that the problem still affects you in production. You want to push new JavaScript changes to your users but can't until their browsers stop caching the file. In order to do this, just get the file's mtime and add that as the random string. This will only change when the file is modified, so the JavaScript will be loaded from cache if it has not been changed, or from the server if it has.
PHP has the function filemtime, but as I'm not familiar with Ruby, I'm afraid I can't help you further in that direction (sorry!). However, this answer seems to accomplish what you want.
Try the Ctrl+F5 trick to avoid hitting the browser cache.
More info here:
What requests do browsers' "F5" and "Ctrl + F5" refreshes generate?
