Case:
I have 2 iframes, and both have a lot of divs and other controls, so each iframe is like a medium-sized HTML website. I want to compare both and find the differences.
I have thought of the following options:
Solution 1: Take a full screenshot of both iframes and compare the two screenshots using the Pillow library in Python, which can draw a grid on the mismatched areas of a screenshot. The issue here is that I could not find any code on the internet that can take a full iframe screenshot (I have a long iframe with a scroll bar). I tried almost all the answers on SO, but they all work for a normal page, not for an iframe.
Reference: https://blog.rinatussenov.com/automating-manual-visual-regression-tests-with-python-and-selenium-be66be950196
Solution 2: Somehow get all the HTML code from both iframes and compare it, but the result won't be easy to analyze, because it will only tell me which HTML code differs between the 2 iframes. This is more like a text compare and not a good solution, I believe.
So I am looking for either code that can take a full screenshot of an iframe using Python or JavaScript, OR some better option that allows me to compare 2 iframes and find the differences.
I have tried almost all the answers that Google turns up.
A sample iframe is given here, where the whole HTML is within the iframe: https://grapesjs.com/demo.html. If some code can take a full screenshot of this iframe, it will be easy for me to compare.
As we discovered in our chat, the iframes under discussion are generated in JavaScript and not loaded from a URL.
This makes it difficult to automate grabbing a screenshot of the iframe; however, a manual process is possible:
In Firefox, right-click on the iframe, select "This Frame" in the popup menu, then select "Save Frame As...".
Once the frame is saved, some of the downloaded CSS will need to be adjusted so that the background URLs point to the correct place. Having done that, open the HTML file locally and you will be able to take a screenshot using the method you currently use for a normal web page.
Grabbing part of the screen
You can grab it either manually or automatically. If there are not many iframes to compare, doing it manually is an option: you just take a screenshot that contains the content and crop the image if necessary. The difficulty of this approach is that you need to be very precise while cropping.
You can do it automatically as well, for example by loading the part of the DOM into a canvas and making a picture of it, as described here: Using HTML5/Canvas/JavaScript to take in-browser screenshots
Also, you can temporarily modify your content so that the whole page contains only what you are interested in and then take a screenshot, as described here: https://www.cnet.com/how-to/how-to-take-a-screenshot-of-a-whole-web-page-in-chrome/
Comparing two images
You can compare two images by looping over all their pixels and comparing them one by one.
Algorithm to compare two images
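A minimal sketch of that pixel loop in Python with Pillow, assuming both screenshots have already been saved and have the same dimensions (the file names are placeholders):

from PIL import Image

# Placeholder file names; a plain pixel loop requires both images to be the same size.
img_a = Image.open("iframe_a.png").convert("RGB")
img_b = Image.open("iframe_b.png").convert("RGB")

diff_pixels = []
for y in range(img_a.height):
    for x in range(img_a.width):
        if img_a.getpixel((x, y)) != img_b.getpixel((x, y)):
            diff_pixels.append((x, y))

print(len(diff_pixels), "pixels differ")

Pillow's ImageChops module can do the same comparison much faster than a Python-level loop, but the loop makes the idea explicit.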
Showing the results
Your program should take two images as input and create a new image of similar size as output. I would suggest that the output image show the pixels of one of the input images and draw a red line around the border of each difference. For this you will need to group the regions of differences into rectangles. That way you can see both where the differences are and what content is different.
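A sketch of that output step with Pillow, again assuming same-size placeholder files; it uses ImageChops to get the bounding box of everything that differs and draws one red rectangle around it (splitting the differences into several rectangles is left out):

from PIL import Image, ImageChops, ImageDraw

img_a = Image.open("iframe_a.png").convert("RGB")
img_b = Image.open("iframe_b.png").convert("RGB")

# Bounding box of every pixel that differs between the two images (None if identical).
bbox = ImageChops.difference(img_a, img_b).getbbox()

result = img_a.copy()
if bbox:
    ImageDraw.Draw(result).rectangle(bbox, outline="red", width=3)
result.save("diff_result.png")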
You could use Pillow in combination with pyautogui, or maybe pyautogui alone.
Some pseudocode:
As long as the scrollbar doesn't touch the bottom:
- take a screenshot
- save screenshot name in list
- scroll down, continue this loop
Do the above loop again for the second iframe.
Compare all the screenshots from the two lists of screenshots you have generated.
Well, that's how I would do it. There are probably better ways though.
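A rough sketch of that loop in Python with pyautogui; the capture region, scroll step and number of iterations are assumptions you would tune to your own iframe (here a fixed step count stands in for detecting that the scrollbar has reached the bottom):

import pyautogui

# Assumed screen region of the visible iframe: (left, top, width, height).
IFRAME_REGION = (100, 200, 800, 600)
SCROLL_STEP = -600   # negative values scroll down
MAX_SCREENS = 20     # fixed limit instead of checking the scrollbar position

screenshots = []
for i in range(MAX_SCREENS):
    shot = pyautogui.screenshot(region=IFRAME_REGION)
    filename = "iframe_part_{}.png".format(i)
    shot.save(filename)
    screenshots.append(filename)
    pyautogui.scroll(SCROLL_STEP)  # scroll down with the mouse pointer over the iframe

# Repeat the same loop for the second iframe, then compare the two lists pairwise.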
Use html2canvas for taking the screenshot:
html2canvas(document.getElementById("img_dots")).then(
    function (canvas2) {
        // Convert the rendered canvas to a base64 PNG string.
        var img_data2 = canvas2.toDataURL('image/png');
        var im_data2 = img_data2.replace('data:image/png;base64,', '');
        // Post the base64 data to the back-end so it can be saved as a file.
        $.ajax({
            type: "POST",
            url: "send_image_to_backend",
            data: {
                "base64data": im_data2,
                // "filename": filename.split(".")[0]
                "filename": new_filename + ".png"
            },
            success: function () {
                // send second image and compare
            }
        });
    }
);
This will enable you to send the images to the back-end.
Use this thread to tweak html2canvas so that it captures the entire image.
Once you have both images, you can use OpenCV to find the differences between them.
You can refer to this: https://www.pyimagesearch.com/2017/06/19/image-difference-with-opencv-and-python/
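A simplified sketch of that comparison in Python with OpenCV 4; the linked tutorial uses SSIM from scikit-image, whereas this version uses a plain absolute difference and a threshold, and it assumes the two screenshots are the same size (file names and threshold value are placeholders):

import cv2

img_a = cv2.imread("iframe_a.png")
img_b = cv2.imread("iframe_b.png")

# Absolute per-pixel difference, collapsed to a single grayscale channel.
diff = cv2.absdiff(img_a, img_b)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

# Draw a red bounding box around each region that changed.
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    cv2.rectangle(img_b, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite("differences.png", img_b)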
I have 2 iframes, and both have a lot of divs and other controls, so each iframe is like a medium-sized HTML website. I want to compare both and find the differences.
What do you want to compare? Differences in CSS rules/properties? Differences in data/text? Or the visual rendering?
Compare visual render
You can extract the iframe URL and load the page with Selenium to take a screenshot (see example). There is also the Firefox extension Selenium IDE.
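A sketch of that in Python with Selenium, assuming you have already extracted the iframe's URL (the URL is a placeholder); it resizes the window to the full document height before saving, so content hidden behind the scroll bar is included:

from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://example.com/extracted-iframe-url")  # placeholder URL

# Grow the window to the full height of the document so nothing is cut off.
height = driver.execute_script("return document.body.scrollHeight")
driver.set_window_size(1200, height)

driver.save_screenshot("iframe_full.png")
driver.quit()

Recent Selenium releases also expose a full-page screenshot method on the Firefox driver (get_full_page_screenshot_as_file), which avoids the window-resizing trick.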
Related
I'm searching for a JavaScript (jQuery if possible) plugin that can generate an image representing the inner content of a DIV.
Example: this link shows an image containing a 3 x 3 box display.
What I would like is for these boxes to contain an automatically generated picture showing what a specific DIV's content looks like.
Is there such a thing?
If you don't have too much content on the screen, html2canvas seems like a simple option:
- it is well documented
- it is well tested
- but it will not work for all elements
- and it will not work with all attributes
Still, it is the solution if you only want to take a screenshot of your own page (where you know which elements and attributes can occur).
I don't think that JavaScript can create an image from scratch, but it is certainly possible to do that on the server and use JS to make an AJAX call to it.
Hope this helps.
You could use "webkit to image" wkhtmltoimage: https://code.google.com/p/wkhtmltopdf/ I've used it to generate images from javascript graphs and tables etc. Any html will work. Its not purely javascript, but you could send the html div (and relevant css) to the wkhtmltoimage and get the image back via ajax.
I am going to have a lot of images and am trying to find the most efficient way of storing these images to keep the page snappy.
So far I have thought of just two ways: load with JavaScript, e.g. picture = new Image(); picture.src = "file.jpg"; and append/remove from the page as necessary, or load into an <img> and set display:none.
Are there other options? What is considered the best way to do this?
The best way for a photo gallery (if that's what you are building) is usually to have several sizes of the images, at least two:
a smallish size that is highly compressed and thus has a small footprint: this is the image you load into grids and display on a page where there are multiple images
a larger image with lower compression and higher image quality - this is the one you show when people want to see details.
Since people most often come to the detailed image from a page where the small, fast-loading version has already been shown, and thus is already in the browser's cache, you can do a little trick and have instant photos, without preloading anything.
It goes like this:
On the details page you show the highly compressed small image in an image tag that has the dimensions of the larger detailed version. You then load the larger detailed version in the background using new Image(), with an onload event attached that swaps the source of the image tag from the small compressed version to the large detailed version.
It looks great, works fast and users will love you ;)
PS: the best place to store images is the browser's cache, not JS or the DOM. So if you truly wish to preload images, which is generally a bad practice (though it can be necessary sometimes), make the browser fetch them for you in the background by including a CSS file that references them in styles that aren't applied to visible areas of your site.
I'm not sure about "efficient", but the most logical way would be not use the JavaScript to load an image (useless if you have JavaScript disabled) or to set the image as hidden via the display property (likewise, and the browser will probably just load the image anyway).
As such, a sensible suggestion would be to use boring old paging and display 'n' images per page. However, to bring this up to date, you could use "lazy" (a.k.a. "deferred") loading and load additional page content via Ajax as the user scrolls. However, it's key that this gracefully degrades into the standard "paged" behaviour if JavaScript is disabled, etc.
The perfect example of this in operation is Google's image search, and if you search here on StackOverflow you see a discussion of possible implementations, etc.
It's better to use JavaScript the way you have it and then add each image to the DOM as you need it, as opposed to first adding it to the DOM and then hiding it, because DOM manipulation is much slower and you may never use some of the images.
I've seen similar questions asked and the answers were not quite what I'm after. Since this question is slightly different, I'm asking again - Hopefully you'll agree this isn't a duplicate.
What I want to do: Generate an image showing the contents of my own website as seen by the user (actually, each specific user).
Why I want to do it: I've got some code that identifies places on the page where the user's mouse hovers for a significant length of time (people tend to move the mouse to areas of interest). I also record click locations. These are recorded as X/Y coordinates relative to the top-left of the page.
NB: This is only done for users who are doing usability testing.
I'd ideally like to be able to capture a screenshot and then use something server-side to overlay the mouse data on the image (hotspots, mouse path, etc.)
The problem I have is that page content is very dynamic (not so much during display but during server-side generation) - depending on the type of user, assigned roles, etc... whole boxes can be missing - and the rest of the layout readjusts accordingly - consequently there's no single "right" screenshot for a page.
Option 1 (which feels a little nasty): walk the DOM, serialize it and send that back to the server. I'd then open up the appropriate browser and de-serialize the DOM. This should work but sounds difficult to automate. I suspect there'd also be some issues around relative URLs, etc.
Option 2: Once the page has finished loading, capture an image of the client area (I'd ideally like to capture the whole length of the page but suspect this will be even harder). Most pages don't require scrolling so this shouldn't be a major issue - something to improve for version 2. I'd then upload this image to the server via AJAX.
NB: I don't want to see anything outside the contents of my own page (chrome, address bar, anything)
I'd prefer to be able to do this without installing anything on the end-user pc (hence javascript). If the only possibility is a client-side app, we can do that but it will mean more hassle when getting random users to usability test (currently, we just email friends/family/guinea pigs a different URL)
One alternative solution would be to "record" the positions and dimensions of the main structural elements on the page:
(using jQuery)
var pageStructure = {};

$("#header, #navigation, #sidebar, #article, #ad, #footer").each(function() {
    var elem = $(this);
    var offset = elem.offset();
    var width = elem.outerWidth();
    var height = elem.outerHeight();
    pageStructure[this.id] = [offset.left, offset.top, width, height];
});
Then you send the serialized pageStructure along with the mouse data, and based on that data you can reconstruct the layout of the given page.
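A hedged sketch of what that server-side reconstruction could look like in Python with Pillow, drawing the recorded boxes and the click points onto a blank canvas (the page size, data shapes and colours are all assumptions):

from PIL import Image, ImageDraw

def render_layout(page_structure, clicks, page_size=(1280, 2000)):
    # page_structure: {element_id: [left, top, width, height]}
    # clicks: list of (x, y) coordinates relative to the top-left of the page
    canvas = Image.new("RGB", page_size, "white")
    draw = ImageDraw.Draw(canvas)

    # Outline each recorded structural element and label it with its id.
    for element_id, (left, top, width, height) in page_structure.items():
        draw.rectangle([left, top, left + width, top + height], outline="black")
        draw.text((left + 4, top + 4), element_id, fill="black")

    # Mark each recorded click with a small red dot.
    for x, y in clicks:
        draw.ellipse([x - 3, y - 3, x + 3, y + 3], fill="red")

    return canvas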
One thing we always talk about where I work is the value of ownership vs the cost required to make something from scratch. With the group I have, we could build just about anything...however, at a per-hour rate in the $100 range, it would need to be a pretty marketable tool or replace a very expensive product for it to be worth our time. So, when it comes to things like this that are already done, I'd consider looking elsewhere first. Think of what you could do with all that extra time....
A simple, quick Google search found this: http://www.trymyui.com/ It's likely not perfect, but it shows that solutions like this are out there and already working/tested. Alternatively, you could download a script such as this heatmap. Obviously, you'd need to add a bit to allow you to re-create what was on the screen while the map was created.
Good Luck.
IMO, it's not worth reinventing the wheel. Just buy an existing solution like ClickTale.
http://www.clicktale.com/
This is a long shot, but I've seen things which might make it possible.
I have a div which is filled with images (album covers, if you must know), and I want to allow users to download this as an image, so they could use it as something like a desktop background.
So is this possible? Can I get a visual representation of an element and present it as an image?
Basically you can't do that, at least not cross-browser. But if it is not critical, you can try <canvas>.
Check here: http://www.nihilogic.dk/labs/canvas2image/
Assuming I understand the question: if you know the positions of the images in the div, you could concatenate the images together server-side into a single image. Then just have a button users can click that calls the function to assemble and download this image.
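A minimal server-side sketch of that concatenation step with Pillow, assuming the cover files and their grid layout are already known (the file list, tile size and number of columns are placeholders):

from PIL import Image

COVER_SIZE = (200, 200)   # assumed size of each album cover
covers = ["cover1.jpg", "cover2.jpg", "cover3.jpg", "cover4.jpg"]
columns = 2

rows = (len(covers) + columns - 1) // columns
sheet = Image.new("RGB", (columns * COVER_SIZE[0], rows * COVER_SIZE[1]), "black")

for index, path in enumerate(covers):
    cover = Image.open(path).resize(COVER_SIZE)
    x = (index % columns) * COVER_SIZE[0]
    y = (index // columns) * COVER_SIZE[1]
    sheet.paste(cover, (x, y))

sheet.save("wallpaper.png")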
From what I understood from your question, you can use an img tag for this. The user can view the image in the browser and save it to their hard drive.
From your description, each of the images inside will be a different album cover, so combining these into a single image won't be a good idea.
You could possibly do the rendering server-side. By this I mean that you could generate the HTML and take a kind of "screenshot" of it on the server. The result will nearly always be at least slightly different from what the user sees, but depending on your requirements it might be enough.
There are various tools to do this, for example wkhtmltoimage, which is a sister project of wkhtmltopdf and can be found at https://code.google.com/p/wkhtmltopdf/
I have some HTML documents that are converted to PDF, using software that renders using QtWebkit (not sure which version).
Currently, the documents have specific tags to split them into columns and pages, so whenever the wording changes it is a manual, time-consuming process to move these tags so that the columns and pages fit.
Can anyone provide a way to have text auto-wrapped into the next column/page (as appropriate) when it reaches the bottom of the current container?
Any HTML, CSS or JS supported by QtWebkit is ok (assuming it works in the PDF converter).
(I have tested the CSS3 -webkit-column-* properties and it appears QtWebkit does not support them.)
To make things more exciting, it also needs to:
- put a header at the top of each page, with page X of Y numbering;
- if an odd number of pages, add a blank page at the end (with no header);
- have the ability to say "don't break inside this block" or "don't break after this header"
I have put some quick example initial markup and target markup to help explain what I'm trying to do.
(The actual documents are far more complicated than that, but I need a simple proof-of-concept before I attack the real ones.)
Any suggestions?
Update:
I've got a partially working solution using Aaron's "filling up" suggestion - I'll post more details in a bit.
Create a document with a single page and all the text in a single column. Use JavaScript to cut the text into parts.
Use pixel coordinates to locate the paragraph/element that doesn't fit anymore. Move it and everything below to the next col. If a "page" already has two "col" divs, start a new page.
After all pages have been created, count and number the pages. Fix even/odd stuff, etc.
Will take some time but it's automatic.
Another approach would be to add all the content to a "source" div and move items to the col div until it's full and repeat with the next col.
Have a look at Prototype or jQuery; they should give you lots of tools to move stuff around in the document.
[EDIT] Instead of relying only on jQuery functions, I suggest creating one or two objects which keep track of the current page, the current column, etc. These give you a stable foundation from which you can call the helper methods.