Take a huge screenshot of a webpage (over 13500x13500px size) - javascript

I have about 3000 images of roughly 256x256 px each (some are 50x256 or 256x100 or so) which I wish to merge into one big picture. The images are shown on a webpage, so the easiest way would be to take a screenshot of that webpage. The resulting image is about 13500x13500 px! In theory, I could download all of the images and merge them myself (with something like bitmap() or CopyImage()), but I'd really prefer an easier solution. Also, this 3000-images-into-1 job is only one of about 120 such cases I have to do.
I've tried several methods, none of which worked out properly:
phantomJS (the .exe crashes after a while)
CutyCapt (the .exe crashes after a while)
Firefox's "screenshot --fullpage bla.png" (crashes with an exception; a known issue with huge pages since the dawn of Firefox)
SeleniumSDK (FirefoxDriver, crashes after a while)
html2canvas (crashes or just won't start)
basic JavaScript and canvas work (failed miserably)
a dozen of the top extensions for Firefox (take forever, then crash or produce a broken image)
a dozen of the top extensions for Chrome (take forever, then crash or produce a broken image)
The only service that ever worked perfectly was http://web-capture.net, but I'd really prefer an offline/local method to generate the image, given how many times I'd have to visit a third-party website.
If anyone could point me to a better offline tool or solution, I would be very grateful. Any working code snippets (preferably in PHP/JavaScript/Java/Selenium/C#) are highly appreciated.
More on the project itself:
The source data is a photograph of a cell, shown with LeafletJS (similar to Google Maps). I made a web scraper of sorts, which pulls all the data from the viewer page and shows it as one huge picture formed out of thousands of smaller images (sometimes of different sizes). The histology course has about 120 different images that I'd like to have stored locally, hence this whole exercise. Also, to make things interesting, every big image has different dimensions, but let's say each has about 55-60 columns and 50-80 rows of small 256x256 tiles forming the big picture.
Here is a preview of what I'm talking about:
http://89.215.196.209/Anatomy%20%28Histology%29%20data%20parser.htm
Sorry for the long post, have a cookie.

I actually just installed and tried the 64-bit version of wkhtmltoimage, and it works perfectly.
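In case it helps anyone, here is a minimal sketch of driving it from Node.js (the file names, the --width value, and having wkhtmltoimage on the PATH are assumptions):

var execFile = require('child_process').execFile;

// Render the stitched page into one large PNG; adjust --width to the page's real width.
execFile('wkhtmltoimage', ['--width', '13500', 'cell_page.html', 'cell.png'], function (err) {
    if (err) throw err;
    console.log('Saved cell.png');
});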

Related

Adobe Animate HTML5 Canvas Animation Too Large for Web (need to optimize for load speed)

I have just created my second interactive Adobe Animate HTML5/Canvas project and I am very proud of it. The problem is, I know nothing about animating efficiently and conservatively when it comes to file formats, sizes, and excess data. I am wondering what steps I have to take to make this published HTML/JavaScript project load without staring at a white screen for over a minute (please be patient, it WILL load eventually). None of the Adobe Animate published code has been altered. I know I have to fix my images, but do you see anything else that might speed this up other than adding a preloader?
http://weatherphases.epizy.com
I have run my page through PageSpeed Insights and still have no idea where to start. Let me know if you need screenshots, code, images, or any other information since I only have a link posted.
The majority of the load delay in your animation seems to come from loading images, with SpriteSheetUtils.extractFrame causing an additional delay. Your image sizes could be reduced by using something like tinypng to optimise them.
extractFrame is an expensive operation because it creates a new HTML image object and copies data from an internal canvas into it. The creators suggest replacing it with gotoAndStop in most cases: SpriteSheetUtils.extractFrame
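A rough illustration of that swap (the spriteSheet variable and frame index are placeholders, not taken from your project):

// Instead of copying a frame out into a new image...
// var frameImg = createjs.SpriteSheetUtils.extractFrame(spriteSheet, 3);

// ...keep a Sprite parked on that frame and add it to the stage:
var still = new createjs.Sprite(spriteSheet);
still.gotoAndStop(3);   // shows frame 3 without creating a new Image or canvas copy
stage.addChild(still);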

Click through photo wall, with perspective

So I'm building a portfolio and sales website for a painter (my wife), based on WooCommerce (WordPress). This is a side project that I have plenty of time to finish. I want to build a live/moving photo wall, with perspective. The following photo gives you a (very rough) idea.
Basically, I want to start with 16 images (the number is actually arbitrary), apply perspective to them, and allow the visitor to click any of the pics and go to that image's associated page. Then, after a given time, I want new photos to show up.
I'm not particularly concerned whether I flip these pics randomly, like tiles, to introduce new ones, OR whether a column slides off and a new column is added (i.e. the adding of 17-20 in my picture). This is a semantic difference in the way I build this code and isn't part of my question (I don't think). All of the original pictures are going to be square and will be uploaded by the user, who we assume has novice/intermediate computer experience.
So my question is about the approach. Do I:
Make my wall script (likely using jQuery and HTML <map> and <area>) take care of the flipping and linking, while the perspective and scaling are done and cached on the backend.
Run every image uploaded to the server for the photo wall through an ImageMagick script that applies the perspective transform (at the largest size used on the wall) and then scales it down for the other columns, using a naming convention like originalthumbnail_marilyn.png, perspective0_marilyn.png, perspective1_marilyn.png, etc. (the number indicating the column and thus the scaled size). This will be harder on bandwidth (maybe not, if compressed correctly) and easiest on the user's hardware (assuming non-mobile).
Use JavaScript & CSS (and possibly HTML5) to do everything: load the images into <div>s and use CSS3 skew/perspective transforms on them, with JS to flip/move the tiles (or CSS, I suppose); see the rough sketch after this list. I feel this option looks the worst, because CSS clips horribly when using the transform property (on my browser, FF 30; I also made a quick demo at http://jsbin.com/febatohi/2/edit). It also requires the user's hardware to handle all of the transforms, which is not always appreciated online. Maybe there is a way to handle this with a JS library I'm not aware of.
Use Flash. This is my least desirable option. It requires me to either not build this myself, pay someone else (pfft!), or acquire and learn Flash from Adobe (I said time wasn't an issue, but patience can be). However, it can produce the best-looking result, as I have seen things similar to this done in Flash. It is also a middle ground of hardware and bandwidth, but to me the most time consuming, and it limits me to browsers and users that have Flash (though I feel this affects only a small percentage of users).
Other suggestions?
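For reference, here is the rough sketch of option 3 mentioned above (the .wall-tile class and data-href attribute are made-up placeholders): applying the perspective transform from JS and making each tile click through to its image's page.

var tiles = document.querySelectorAll('.wall-tile');
Array.prototype.forEach.call(tiles, function (tile, i) {
    // stronger rotation for tiles further from the centre column
    tile.style.transform = 'perspective(800px) rotateY(' + ((i % 4) - 1.5) * 10 + 'deg)';
    tile.addEventListener('click', function () {
        window.location.href = tile.getAttribute('data-href'); // the image's WooCommerce page
    });
});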

dynamically generating multiple thumbnails from a video src with javascript

Before you say it can't be done please take a look at my train of thought and entertain me.
I have read on Stack Overflow that it can't be done, and how to implement this with ffmpeg and other tools on the server side, which is great and simple enough to comprehend. I've even used an extension to Video.js I found on GitHub that makes this one step easier. But nonetheless, what if I don't have a copy of the <video src=...> and I really don't care to get one?
I do not want to use a server to do this. Okay, with that out of the way: I understand, thanks to a post from Paul Irish, that video playback is not a shared aspect of WebKit ports (the code which powers basically every browser... minus Chrome Canary, now using Blink, a WebKit fork). This kind of makes sense as to why certain browsers only support certain video containers.
So, for the sake of simplicity: I want to make this functionality available only in Chrome and only for MPEG-4 AVC video containers. Why can't this be done if I can somehow actually view each frame of the video while it is played back?
Additional note:
So generating video thumbnails is possible by drawing frames to a canvas, but this will only be part of a final solution to my problem. I'm looking to do this each and every time a video is viewed, not store images on my server after a user's first playback is completed. What I would eventually like to work up to is generating thumbnails as the video is downloaded, which can be viewed while a user drags a scrollbar to ff/rw to a point in the video. So this will need to be done as frames of video become available, not once they have been rendered by the browser for the user to view.
One can actually feed a video into a canvas, as seen here on HTML5Doctor. Basically, the line that does the magic is:
canvasContext.drawImage(videoElement,0,0,width,height);
Then you can run a timer that periodically retrieves the frames from the canvas. There are two options here:
get raw pixel data
get the base64 encoded data
As for saving, send the data to the server to reconstruct an image from it and save it to disk. I also suggest you size your canvas and video to the size you want your screenshots to be, since the video-to-canvas transfer automatically handles scaling.
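A rough sketch of that timer approach (the element ids and the 2-second interval are arbitrary assumptions):

var video = document.getElementById('myVideo');
var canvas = document.getElementById('grabber');
var ctx = canvas.getContext('2d');

setInterval(function () {
    if (video.paused || video.ended) return;
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    var dataUrl = canvas.toDataURL('image/jpeg');   // option 2: base64-encoded frame
    // var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height); // option 1: raw pixel data
    // e.g. POST dataUrl to the server, or use it directly as a thumbnail src
}, 2000);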
Of course, this is limited by the video formats that the browser supports, as well as by its support for canvas and video.
Generating thumbnails during first render? You'd run into problems with that since:
You can't generate all the frames unless the video has actually been rendered in the video element.
Suppose you have generated thumbnails during the first run and want to reuse them for later runs. Base64 data is very long, usually about 3 times the file size of the image, and a raw pixel data array is width × height × 4 bytes long (a single 640×360 frame is already 640 × 360 × 4 ≈ 0.9 MB). The most viable storage candidate is localStorage, which is only 5-10 MB depending on the browser.
There is no way to put the generated images into the browser cache (there could be a cache hack using data URLs that I don't know about).
I suggest you do it on the server instead. It's too much burden and hassle to do on the client side.

Unloading Resources on HTML with JavaScript

I'm working on an HTML5 game. It is already online, but it's currently small and everything is okay.
Thing is, as it grows, it's going to be loading many, many images, music, sound effects and more. After 15 minutes of playing the game, at least 100 different resources might have been loaded already. Since it's an HTML5 App, it never refreshes the page during the game, so they all stack in the background.
I've noticed that every resource I load - on WebKit at least, using the Web Inspector - remains there once I remove the <img>, the <link> to the CSS, and so on. I'm guessing it's still in memory, just not being used, right?
This would end up consuming a lot of RAM eventually and degrade performance, especially on iOS and Android mobiles (which I already notice slightly in the current version), whose resources are more limited than desktop computers'.
My question is: Is it possible to fully unload a Resource, freeing space in the RAM, through JavaScript? Without having to refresh the whole page to "clean it".
Worst scenario: would using frames help, by deleting a frame, to free that frame's resources?
Thank you!
Your description implies you have fully removed all references to the resources. The behavior you are seeing, then, is simply the garbage collector not having been invoked to clean up the space, which is common in JavaScript implementations until it becomes "necessary". Setting references to null or calling delete will usually do no better.
As a common case, you can typically call CollectGarbage() (a nonstandard hook, exposed for example by IE's JScript engine) during scene loads/unloads to force the collection process. This is typically the best solution when the data is loaded for game "stages", as that is a time that is not time-critical. You usually do not want the collector to run during gameplay unless the game is not very real-time.
Frames are usually a difficult solution if you want to keep certain resources around for common game controls. You need to consider whether you are refreshing entire resources or just certain resources.
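A minimal sketch of that stage-boundary cleanup (the assets array is a placeholder, and since CollectGarbage() is nonstandard the call is guarded):

function onStageUnload(assets) {
    assets.length = 0;                          // drop JS references to the stage's resources
    if (typeof CollectGarbage === 'function') {
        CollectGarbage();                       // force a collection where the engine exposes it
    }
}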
All you can do is rely on JavaScript's built-in garbage collection mechanism.
This kicks in whenever there is no longer any reference to your image.
So, assuming you hold a reference to each image, if you use:
img.destroy() (where your framework provides such a method)
or
img.parentNode.removeChild(img)
and drop your own references, the image becomes eligible for collection.
Worth checking out: http://www.ibm.com/developerworks/web/library/wa-memleak/
Also: Need help using this function to destroy an item on canvas using javascript
EDIT
Here is some code that allows you to load an image into a var.
<script type="text/javascript">
var heavyImage = new Image();
heavyImage.src = "heavyimagefile.jpg";
......
heavyImage = null; // removes reference and frees up memory
</script>
This is better than using jQuery's .load() because it gives you more control over image references, and they will be removed from memory once the reference is gone (set to null).
Taken from: http://www.techrepublic.com/article/preloading-and-the-javascript-image-object/5214317
Hope it helps!
There are 2 better ways to load images besides a normal <img> tag, which Google brilliantly discusses here:
http://www.youtube.com/watch?v=7pCh62wr6m0&list=UU_x5XG1OV2P6uZZ5FSM9Ttw&index=74
Loading the images through an HTML5 <canvas>, which is much faster. I would really watch that video and implement these methods for more speed. I would imagine garbage collection with canvas works better because it breaks away from the DOM.
Embedded data URLs, where the src attribute of an image tag is the actual base64-encoded data of the image (yes, it's a giant string). It starts like this: src="data:image/jpeg;base64,/9j/MASSIVE-STRING ... ". After using this, you would of course want to remove the node as discussed in the other answers. (I don't know how to generate this base64 string offhand; try Google or the video.)
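For completeness, one way to produce such a base64 data URL in the browser is via a canvas (a sketch with placeholder names; the image must be same-origin or the canvas becomes tainted):

var img = new Image();
img.onload = function () {
    var c = document.createElement('canvas');
    c.width = img.width;
    c.height = img.height;
    c.getContext('2d').drawImage(img, 0, 0);
    var dataUrl = c.toDataURL('image/jpeg');    // "data:image/jpeg;base64,..."
    // dataUrl can now be used as the src of another <img>
};
img.src = 'photo.jpg';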
You said: "Worst scenario: Would using frames help, by deleting a frame, to free those frames' resources?"
Yes, frames are a workable option here: deleting a frame does free the resources it had loaded.
All right, so I've done my tests by loading 3 different HTML pages into an <article> tag. Each HTML page had many huge images, roughly 15 huge images per "page".
So I used the jQuery .load() function to insert each one into the tag. I also had an extra HTML page that contained only an <h1>, to see what happened when a page with no images replaced the previous page.
Well, it turns out the RAM usage grows while you scroll, and shoots up when going through a particularly big image (big in dimensions as well as file size). But once you leave that behind and lighter images come on screen, the RAM consumption actually goes down. And whenever I replaced the content of the page using JS, the RAM consumption dropped considerably when it had grown too high. Virtual memory remained high throughout and rarely went down.
So I guess the browser is quite smart about handling resources. It does not seem to unload them if you just leave the page sitting there for a long while, but as soon as you start loading other pages or scrolling, it starts loading / freeing them up.
I guess I don't have anything to worry about after all...
Thanks everyone! =)

How do you show pictures as fast as Facebook?

Can any of you help me show pictures as fast as Facebook does?
Facebook is incredible for viewing pictures, because the pictures seem to be preloaded, I think.
Often, when you view galleries on other sites, it is a pain in the a**, because it is so slow every time you change picture.
I think you need JavaScript to do it!?
Depending on your implementation, you could do this with some AJAX and hidden DOM elements.
Suppose you have a gallery with a slideshow. You could insert a hidden DOM element containing the next picture of the slideshow on each load, which causes that image to be fetched. If you then use JS to insert that same image tag later, the browser relies on its cache rather than fetching it from the server, since it already has that photo.
This is kind of a broad question, but I think this approach would work. You would probably be better off not reinventing the wheel and seeing what image-prefetch libraries (based on jQuery or whatever) are available to you.
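A tiny sketch of that hidden-preload idea (the file names and element id are placeholders):

var nextSrc = 'gallery/photo-2.jpg';
var preload = new Image();   // never added to the DOM, so nothing is displayed
preload.src = nextSrc;       // the browser fetches and caches the image now

// later, when the user advances the slideshow:
document.getElementById('slide').src = nextSrc;  // served from the cache, so it appears instantly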
Facebook compresses images to extremes. Try it yourself: take an image you are having trouble with and upload it to Facebook, then check its size and you will see why. Once I did a small test by uploading a 17429-byte image, and it came back as 18757 bytes, a whole 7% increase over the original size!
At that compressed size, you can implement some sort of prefetching of the next image for display. Along with that, I think they have extremely good infrastructure.
Facebook uses BigPipe; there is an open implementation in the works called openpipe.
BigPipe pushes content to the browser as the server finishes processing each piece, so the user perceives the page as loading faster.
It basically loads "pagelets" as they become ready for the user. On the browser side the implementation is JavaScript-based, and you must push the info to the client with your preferred server-side language.
First of all, Facebook heavily compresses images. Many other websites don't. Facebook also has a faster network than most other websites.
With the small image size, the client can prefetch the next image.
Preloaded would mean loading when the page is loaded, which is what happens with an <img> tag. No, it's simply because the file size is smaller.
If you want images to be viewed more quickly on your site, first make sure they are decently compressed and aren't any bigger than they have to be. The number of times I have seen websites use an extremely large image scaled down to fit in an element five times smaller is just ridiculous.
You can check out these sites, which have many implementations and links on how to preload / prefetch images (CSS, JavaScript, AJAX):
http://perishablepress.com/press/2009/12/28/3-ways-preload-images-css-javascript-ajax/
Since your question was tagged with 'jquery', here is one just for that:
http://engineeredweb.com/blog/09/12/preloading-images-jquery-and-javascript
