I have a JavaScript image sequence object that uses one <canvas> tag in the DOM, calling clearRect and drawImage in quick succession to play the sequence. There are 3 different sequences of 1,440 images each; only one sequence needs to be loaded at a time, but having them all queued up would make the experience faster and smoother.
The images are pretty big in dimension, 8680x1920 each, about 1.5MB each as JPEG. I have buttons that load each set individually instead of all at once. Everything is fine loading the first sequence set, but the second one crashes (the "Aw, Snap!" page) in Chrome 51 on Windows 7 Business.
Dev is happening on my Mac Pro, where everything works perfectly and I can load all 3 sequences just fine. The specs of my Mac Pro are far lower than the PC's: the PC is a quad-core i7 with 32GB RAM and 2x Nvidia Quadro M5000 cards with a Sync card. My understanding is that Chrome isn't even utilizing most of that advanced hardware, but we need it for other parts.
I have tried setting the existing image objects to an empty source and then to null before loading the next sequence, and I have also tried removing the <canvas> tag from the DOM, but nothing seems to help. Watching Chrome's Network tab, the crashes always happen just after 1.5GB has been transferred. Chrome's Task Manager shows the tab hovering around 8GB of memory usage on both Windows and Mac with one sequence loaded.
This is an obscure, one-off installation that will be disconnected from the internet, so I'm not concerned about security or best practices, just getting it to work through any means necessary.
UPDATED to reflect that I recently changed the <img> tag to a <canvas> tag for performance reasons.
You should not be loading the entire sequence at once; you're most likely running out of RAM. Load only a few frames ahead in memory using JavaScript, then assign the current image to your image tag (or draw it to your canvas). Be sure to clear that look-ahead cache by overwriting the variables or using the delete operator.
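A minimal sketch of that look-ahead idea, assuming the frame URLs live in an array (FRAME_URLS, WINDOW, and cache are made-up names, not from the original post):

var FRAME_URLS = [ /* the 1,440 frame URLs of the current sequence */ ];
var WINDOW = 10;  // how many frames to keep decoded ahead of playback
var cache = {};   // frame index -> Image

function preload(current) {
  // Fetch the next WINDOW frames if they are not already cached
  for (var i = current; i < current + WINDOW && i < FRAME_URLS.length; i++) {
    if (!cache[i]) {
      var img = new Image();
      img.src = FRAME_URLS[i];
      cache[i] = img;
    }
  }
  // Drop frames that have fallen out of the window so the GC can reclaim them
  for (var key in cache) {
    var idx = Number(key);
    if (idx < current || idx >= current + WINDOW) {
      delete cache[key];
    }
  }
}
// Call preload(frameIndex) every time playback advances.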
Secondly, changing the src attribute will cause the entire DOM to redraw. This is because when the src attribute changes, the image is assumed to have possibly changed size, which means every element after it might have shifted and need redrawing.
It's a better idea to set the image as the background of a <div> and update the background-image style. You can also write the image to a <canvas>. In both cases only one element needs redrawing.
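For instance, hedged sketches of both approaches (the element ids are placeholders; nextFrame is assumed to be a preloaded Image, such as one pulled from the cache above):

// Option 1: swap the background-image of a fixed-size div
var nextFrame = cache[current]; // a preloaded Image from the look-ahead cache
document.getElementById('stage').style.backgroundImage =
    'url(' + nextFrame.src + ')';

// Option 2: blit the preloaded Image onto a canvas
var ctx = document.getElementById('player').getContext('2d');
ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
ctx.drawImage(nextFrame, 0, 0, ctx.canvas.width, ctx.canvas.height);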
Finally, a <video> tag would probably be your best option, since it's designed to handle frame sequences efficiently. To make it possible to scrub to individual frames without lag, you can either encode with a keyframe-on-every-frame setting, or simply encode the video in an uncompressed format that doesn't use keyframes. A keyframe is like a snapshot at a particular interval in a video; all subsequent frames only redraw the parts that have changed since the last keyframe. So if keyframes are far apart, seeking to a particular frame requires that the keyframe be rendered, and then all the frames in between be added to it to get the final image of the frame you're on. Putting a keyframe on every frame makes the video larger, since it can't use differential compression, but it will seek much faster.
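If you go the <video> route, scrubbing comes down to setting currentTime; here is a sketch that assumes a fixed, known frame rate (FPS is a placeholder value):

var FPS = 30; // assumed frame rate of the encoded video
var video = document.querySelector('video');

function seekToFrame(frameIndex) {
  // With a keyframe on every frame this seek resolves almost instantly;
  // with sparse keyframes the browser must first decode forward from the
  // nearest keyframe, which causes the lag described above.
  video.currentTime = frameIndex / FPS;
}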
Before you say it can't be done, please take a look at my train of thought and entertain me.
I have read on Stack Overflow that it can't be done, and how to implement this using ffmpeg and other tools on the server side, which is great and simple-ish enough to comprehend. I've even used an extension to Video.js I found on GitHub that makes this one step easier. But nonetheless, what if I don't have a copy of the <video src=...> file and I really don't care to get one?
I do not want to use a server to do this. Okay, with that out of the way: I understand, thanks to a post from Paul Irish, that video playback is not a shared aspect of WebKit ports (the code which powers basically every browser, minus Chrome Canary, which now uses Blink, a WebKit fork). This kinda makes sense of why certain browsers only support certain video containers.
So for the sake of simplicity: I want to make this functionality available only in Chrome and only for MPEG-4 AVC video containers. Why can't this be done if somehow I can actually view each frame of the video while it's played back?
Additional note
So generating video thumbnails is possible by drawing frames to a canvas, but this will only be part of a final solution to my problem. I'm looking to do this each and every time a video is viewed, not store images on my server after a user completes a first playback. What I would like to eventually work up to is generating a thumbnail as the video is downloaded, which can be viewed while a user drags a scrub bar to fast-forward/rewind to a point in the video. So this will need to be done as frames of video become available, not once they have been rendered by the browser for the user to view.
One can actually feed a video into a canvas, as seen here at HTML5Doctor. Basically, the line that does the magic is:
canvasContext.drawImage(videoElement, 0, 0, width, height);
Then you can run a timer that periodically retrieves the frames from the canvas. There are two options here, both sketched in code after the list:
get raw pixel data
get the base64 encoded data
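A hedged sketch of both options, assuming the canvas already holds the current video frame (drawn with the drawImage line above):

var canvas = document.querySelector('canvas');
var ctx = canvas.getContext('2d');

// Option 1: raw pixel data, a Uint8ClampedArray of RGBA values
var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data;

// Option 2: a base64-encoded data URL (JPEG keeps the string smaller)
var dataUrl = canvas.toDataURL('image/jpeg', 0.7);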
As for saving, send the data to the server, reconstruct an image from it there, and save that to disk. I also suggest you size your canvas and video to the size you want your screenshots to be, since the video-to-canvas transfer automatically manages scaling.
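A minimal sketch of that upload step, reusing the dataUrl from the sketch above (the /save-thumbnail endpoint is made up):

// POST the data URL so the server can decode it and write the image to disk
var xhr = new XMLHttpRequest();
xhr.open('POST', '/save-thumbnail');
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.send('image=' + encodeURIComponent(dataUrl));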
Of course, this is limited by the video formats supported by the browser, as well as by support for canvas and video.
Generating thumbnails during first render? You'd run into problems with that since:
You can't generate the frames unless they have actually been rendered by the video element.
Suppose you have generated thumbnails during the first run and want to use them for further runs. Base64 data is very long, usually about a third larger than the file size of the image, and a raw pixel data array is width × height × 4 bytes in length. The most viable storage candidate is localStorage, which is just 5-10MB depending on the browser.
There is no way to cache the generated images in the browser cache (though there could be a cache hack using data URLs that I don't know of).
I suggest you do it on the server instead. It's too much burden and hassle to do in the client side.
I have a project I'm planning which is based on a kind of 'interactive world' experience, where the browser's viewport moves around to show many different graphic environments. It must all be fluid, with no page-to-page breaks. The project is in JS/HTML5/CSS3.
The problem this poses is that the entire 'world' will be perhaps 8,000-15,000 px squared (it also rotates, and has various PNG alpha overlays on top of it).
I was going to run some tests but there are so many ways to approach this and I'm looking for the most fluid one. My knowledge of the internal workings of browser render engines isn't great so I thought I'd ask around.
I can't use the 'tiling' approach which Google Maps uses, as it's not fluid enough (too blocky); also, when rotating, it's going to create headaches doing the math transforms to work out which tiles to load at what angles. So here are the 2 choices I have boiled it down to:
(1) The "Huge" image approach
The benefit of this is that once it's loaded everything is easy; the downside is that it's going to be huge, and I cannot show an incremental preloader, as the image queue will essentially be 2 images (the overlay and the huge image).
(2) Image segments
The benefit is that I can show a preloader with an image queue at 10% increments (10x images)
Question:
Is the 2nd approach going to put more painful overhead on the browser's rendering engine because there are 9 separate sets of calculations being done, or do browser engines simply see them as one painted area once it's initially rendered, and then update it as a whole? Or, each time the DOM is changed (rotated etc.), does the browser have to run the same transform/repaint process 9 times?
Thanks very much.
LOTS of tests later. Result: use a big image; it seems to be less for the browser to deal with.
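For what it's worth, part of why the single image wins: if you ever do slice the world, wrapping all the segments in one transformed container lets the engine rotate them as a unit instead of transforming each tile separately. A hedged sketch (the element id is a placeholder):

// Rotate one wrapper that contains every segment; the engine can
// rasterize the children once and re-composite the layer as a whole.
var world = document.getElementById('world'); // container holding all segments
function rotateWorld(degrees) {
  world.style.transform = 'rotate(' + degrees + 'deg)';
}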
I have a very basic ajax slideshow on my website. On every scroll, the new images and response content continually increase the amount of memory used by the browser.
I've done my research and tried all suggestions to reset the XHR object on each new request, but this does absolutely nothing to help.
The slideshows are basic but may contain hundreds of slides. I want a user to be able to navigate the slideshow indefinitely without crashing their browser. Is this even possible?
Thanks, Brian
Increasing memory usage is normal. You are, after all, loading more data each time: the HTML from your AJAX response, as well as the images being displayed. Unless you're using Adobe PageMill-generated HTML, that's only going to be a few hundred bytes of HTML/text; it's the images that will take up the most space. Everything gets stuffed into the browser's cache.
Since you're not doing anything fancy with the DOM (building sub-trees and whatnot) directly, just replacing a chunk of HTML repetitively, eventually the browser will do a cleanup and chuck some of the unused/old/stale image data from memory/cache and reclaim some of that memory.
Now, if you were doing some highly complex DOM manipulations and generating lots of new nodes on the fly, and were leaking some nodes here and there, THEN you'd have a memory problem, as those leaked nodes will eventually bury the browser.
But, just increasing memory usage by loading images is nothing to worry about, it's just like a normal extended surfing session, except you're just loading some new pictures.
If it's a slideshow, are you only showing one image at a time? If you only show one at a time and never get rid of the last one shown, memory will always increase. Removing the slides not being shown should help.
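A minimal sketch of that cleanup, assuming each slide lives in a container div (the id is a placeholder):

var container = document.getElementById('slideshow');

function showSlide(url) {
  // Remove the previous slide so its image data can be reclaimed
  while (container.firstChild) {
    container.removeChild(container.firstChild);
  }
  var img = new Image();
  img.src = url;
  container.appendChild(img);
}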
I'm working on an HTML5 game; it is already online, but it's currently small and everything is okay.
Thing is, as it grows, it's going to be loading many, many images, music, sound effects and more. After 15 minutes of playing the game, at least 100 different resources might have been loaded already. Since it's an HTML5 App, it never refreshes the page during the game, so they all stack in the background.
I've noticed that every resource I load (on WebKit at least, using the Web Inspector) remains there once I remove the <img>, the <link> to the CSS, and the rest. I'm guessing it's still in memory, just not being used, right?
This would end up consuming a lot of RAM eventually and lead to degraded performance, especially on iOS and Android mobiles (which I already notice slightly in the current version), whose resources are more limited than desktop computers'.
My question is: Is it possible to fully unload a Resource, freeing space in the RAM, through JavaScript? Without having to refresh the whole page to "clean it".
Worst scenario: Would using frames help, by deleting a frame, to free those frames' resources?
Thank you!
Your description implies you have fully removed all references to the resources. The behavior you are seeing, then, is simply the garbage collector not yet having been invoked to clean up the space, which is common in JavaScript implementations until "necessary". Setting references to null or calling delete will usually do no better.
As a common case, you can typically call CollectGarbage() (a non-standard function, available in IE's JScript engine) during scene loads/unloads to force the collection process. This is typically the best solution when the data is loaded for game "stages", as that is a time that is not time-critical. You usually do not want the collector to run during gameplay unless the game is not very real-time.
Frames are usually a difficult solution if you want to keep certain resources around for common game controls. You need to consider whether you are refreshing entire resources or just certain resources.
All you can do is rely on JavaScript's built in garbage collection mechanism.
This kicks in whenever there is no reference to your image.
So, assuming you have a reference for each image, you can use:
img.parentNode.removeChild(img); // detach the node from the DOM
img = null; // drop your own reference
(Note that img.destroy() is not a standard DOM method; something like it only exists in certain libraries.)
Worth checking out: http://www.ibm.com/developerworks/web/library/wa-memleak/
Also: Need help using this function to destroy an item on canvas using javascript
EDIT
Here is some code that allows you to load an image into a var.
<script>
var heavyImage = new Image();
heavyImage.src = "heavyimagefile.jpg";
......
heavyImage = null; // removes reference and frees up memory
</script>
This is better than using jQuery's .load() because it gives you more control over image references, and they will be removed from memory once the reference is gone (null).
Taken from: http://www.techrepublic.com/article/preloading-and-the-javascript-image-object/5214317
Hope it helps!
There are 2 better ways to load images besides a normal <img> tag, which Google discusses brilliantly here:
http://www.youtube.com/watch?v=7pCh62wr6m0&list=UU_x5XG1OV2P6uZZ5FSM9Ttw&index=74
Loading the images through an HTML5 <canvas>, which is much faster. I would really watch that video and implement these methods for more speed. I would imagine garbage collection with canvas works better because it breaks away from the DOM.
Embedded data URLs, where the src attribute of an image tag is the actual binary data of the image (yeah, it's a giant string). It starts like this: src="data:image/jpeg;base64,/9j/MASSIVE-STRING ... " After using this, you would of course want to remove the node as discussed in the other answers. (One way to generate the base64 string is sketched below.)
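Hedged sketches of both techniques; the element id and file name are made up:

// 1. Draw a preloaded image onto a canvas instead of adding <img> nodes
var sprite = new Image();
sprite.onload = function () {
  var ctx = document.getElementById('game').getContext('2d');
  ctx.drawImage(sprite, 0, 0);
};
sprite.src = 'sprites.png';

// 2. One way to generate the base64 string: export a canvas as a data URL
var dataUrl = document.getElementById('game').toDataURL('image/jpeg');
// dataUrl starts with "data:image/jpeg;base64,..."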
You said: "Worst scenario: Would using frames help, by deleting a frame, to free those frames' resources?"
Frames can help here. Yes, deleting a frame can free up the resources it loaded.
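A minimal sketch of that idea, assuming a stage's heavy assets are loaded inside a throwaway iframe (the URL is a placeholder):

// Load a stage's heavy assets in an iframe...
var frame = document.createElement('iframe');
frame.src = 'stage1-assets.html'; // hypothetical page holding the resources
document.body.appendChild(frame);

// ...then tear the whole frame down when the stage ends, so the browser
// can release everything that document loaded
function unloadStage() {
  frame.parentNode.removeChild(frame);
  frame = null;
}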
All right, so I've run my tests by loading 3 different HTML pages into an <article> tag. Each page had many huge images, roughly 15 per "page".
I used the jQuery .load() function to insert each one into the tag. I also had an extra HTML page containing only an <h1>, to see what happened when a page with no images replaced the previous page.
Well, it turns out the RAM grows while you scroll, and shoots up when going through a particularly big image (big in dimensions as well as file size, not just file size). But once you leave that behind and lighter images come on screen, the RAM consumption actually goes down. And whenever I replaced the content of the page using JS, the RAM consumption dropped considerably whenever it had been occupying too much. Virtual memory remained high throughout and rarely went down.
So I guess the browser is quite smart about handling resources. It does not seem to unload them if you just leave the page sitting there, but as soon as you start loading other pages or scrolling, it starts loading and freeing them up.
I guess I don't have anything to worry about after all...
Thanks everyone! =)
The iPad stops loading large images after about 8 or 9 images for me, since the page runs into its allocated memory limits.
Since I'm showing these images one at a time, I'd like to remove the old ones from the browser cache so I don't hit the limit.
Any ideas on how to do this in javascript?
See this question for a good answer. In a nutshell, you should reuse the same Image objects and set their src instead of creating new images.
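A minimal sketch of that reuse pattern (IMAGE_URLS is a made-up array of the slideshow's image URLs):

var IMAGE_URLS = [ /* the slideshow's image URLs */ ];
var viewer = new Image();
document.body.appendChild(viewer);

function show(index) {
  // Re-pointing src lets the browser replace the previous decode
  // instead of accumulating a new Image object per slide
  viewer.src = IMAGE_URLS[index];
}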