Keeping a php script active - javascript

I was thinking of uploading some huge images to my webpage and figured I could optimize loading by using a special image viewer with zoom functionality. As I understood long ago, browsers have certain limits on image dimensions, and possibly on memory as well. For the simple task of just displaying the image on a page, the workaround would be to resize it server-side in PHP and display that.
However, with the idea of allowing zoom capabilities so one can look at all the little details (essentially seeing the picture at 1:1 scale), the problem I'm having is providing that "zoom" quickly. As I understand it, a PHP script runs once when called and is then cleared from memory when done. I made a test with a 40MB picture and imagecreatefrompng() took 5 seconds, meaning each time a client zoomed (and panned), they'd have to wait 5 seconds before the image updated.
The question becomes: is it possible to keep a PHP script running and have its memory preserved for as long as it's needed? For example, when a client clicks a thumbnail, a call goes out to the PHP script to load the image; it returns the resized image but remains active and keeps $img in memory, waiting for more instructions. When the client zooms, a new call goes out with the dimensions needed, and instead of having to reload the image, the script resizes and crops the original $img it still has in memory. When the client is done by closing the viewer, tab or browser, a call goes out saying it's done, which triggers imagedestroy(original);, clears memory, and stops the script.
(maybe have some kind of timeout for cases when the client unexpectedly shuts down and no "stop" signal is sent)

Since you are dealing with really big images, I would consider creating tiles once the image is uploaded and then using a JS library to handle the display and zoom functionality, like most map libraries do. OpenSeadragon is typically one of these.
For the tile generation, you could use Deepzoom or Zoomify, as they are made to work with OpenSeadragon.
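Once the tiles exist, wiring up the viewer is only a few lines. A minimal sketch, assuming a placeholder div id, a local copy of the OpenSeadragon button images, and a Deep Zoom descriptor produced by the tiler (all paths here are illustrative):

var viewer = OpenSeadragon({
    id: "image-viewer",                     // the div that will host the viewer
    prefixUrl: "/openseadragon/images/",    // location of the default UI button images
    tileSources: "/tiles/huge-picture.dzi"  // Deep Zoom descriptor generated at upload time
});

The viewer then only requests the tiles needed for the current zoom level and viewport, so the original full-size image is never sent to the client in one piece.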

Related

dynamically generating multiple thumbnails from a video src with javascript

Before you say it can't be done, please take a look at my train of thought and entertain me.
I have read on Stack Overflow that it can't be done, and how to implement this using ffmpeg and other stuff on the server side, which is great and simple-ish enough to comprehend. I've even used an extension to Video.js I found on GitHub that makes this one step easier. But nonetheless, what if I don't have a copy of the <video src=...> and I really don't care to get one?
I do not want to use a server to do this. Okay, with that out of the way: I understand, thanks to a post from Paul Irish, that video playback is not a shared aspect of WebKit ports (the code which powers basically every browser... minus Chrome Canary, now using Blink, a WebKit fork). This kind of makes sense as to why certain browsers only support certain video containers.
So for the sake of simplicity: I want to make this functionality available only in Chrome and only for MPEG-4 AVC video containers. Why can't this be done if somehow I can actually view each frame of the video while it's played back?
additional note
So generating video thumbnails is possible by drawing frames to a canvas, but this will only be part of a final solution to my problem. I'm looking to do this each and every time a video is viewed, not store images on my server after a first playback is completed by a user. What I would like to eventually work up to is generating a thumbnail as the video is downloaded that can be viewed while a user drags a scrollbar to ff/rw to a point in the video. So this will need to be done as frames of video become available, not once they have been rendered by the browser for the user to view.
One can actually feed a video into the canvas, as seen here at HTML5Doctor. Basically, the line that does the magic is:
canvasContext.drawImage(videoElement, 0, 0, width, height);
Then you can run a timer that periodically retrieves the frames from the canvas. There are two options here (see the sketch after this list):
get raw pixel data
get the base64 encoded data
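A minimal sketch of both options, assuming video is a playing <video> element and canvas has already been sized to the thumbnail dimensions you want:

var ctx = canvas.getContext('2d');
// Paint the current video frame onto the canvas.
ctx.drawImage(video, 0, 0, canvas.width, canvas.height);

// Option 1: raw pixel data (a Uint8ClampedArray of RGBA values).
var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data;

// Option 2: base64-encoded image data (a data: URL usable as an img src).
var dataUrl = canvas.toDataURL('image/jpeg');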
As for saving, send the data to the server, reconstruct an image from that data, and save it to disk. I also suggest you size your canvas and video to the size you want your screenshots to be, since the video-to-canvas transfer automatically manages scaling.
Of course, this is limited by the video formats that are supported by the browser, as well as by support for canvas and video.
Generating thumbnails during first render? You'd run into problems with that since:
You can't generate all frames unless it's rendered on the video element.
Suppose you have generated thumbnails during the first run and want to use them for further runs. Base64 data is very long, usually around 3 times the file size of the image. A raw pixel data array is width x height x 4 in length. The most viable storage candidate is localStorage, which is just 5-10MB depending on the browser.
There is no way to cache the generated images in the browser cache (there could be a cache hack using data URLs that I don't know of).
I suggest you do it on the server instead. It's too much burden and hassle to do on the client side.

How to free used memory after loading different pages using AJAX? [duplicate]

I have a very basic ajax slideshow on my website. On every scroll, the new images and response content continually increase the amount of memory used by the browser.
I've done my research and tried all suggestions to reset the XHR object on each new request, but this does absolutely nothing to help.
The slideshows are basic but may contain hundreds of slides. I want a user to be able to navigate the slideshow indefinitely without crashing their browser. Is this even possible?
Thanks, Brian
Increasing memory usage is normal. You are, after all, loading more data each time - the HTML from your AJAX response, as well as the images that are being displayed. Unless you're using Adobe Pagemill-generated HTML, that's only going to be a few hundred bytes of HTML/text. It's the images that will suck up the most space. Everything gets stuffed into the browser's cache.
Since you're not doing anything fancy with the DOM (building sub-trees and whatnot) directly, just replacing a chunk of HTML repetitively, eventually the browser will do a cleanup and chuck some of the unused/old/stale image data from memory/cache and reclaim some of that memory.
Now, if you were doing some highly complex DOM manipulations and generating lots of new nodes on the fly, and were leaking some nodes here and there, THEN you'd have a memory problem, as those leaked nodes will eventually bury the browser.
But increasing memory usage just by loading images is nothing to worry about; it's like a normal extended surfing session, except you're loading some new pictures.
If it's a slideshow, are you only showing one image at a time? If you only show one at a time and never get rid of the last one you showed, memory will always increase. If you remove the slides that aren't being shown, it should help.
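A minimal sketch of that idea, assuming the slides are appended to a hypothetical container element with the id "slideshow" (the names are illustrative):

var container = document.getElementById('slideshow');

function showSlide(url) {
    // Drop the previous slide(s) so the browser can reclaim that memory.
    while (container.firstChild) {
        container.removeChild(container.firstChild);
    }
    var img = document.createElement('img');
    img.src = url;
    container.appendChild(img);
}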

How to load image, create texture, render, and save image asynchronously with WebGL?

I am attempting to write an application that uses very large textures. The idea is that you work on a scaled version of the texture in realtime modifying shaders and when finished the application would apply your changes to the original unscaled (large) texture. The problem is that profiling shows something like the following:
img.src = filename (500ms)
texImage2d(...) (1500ms)
bind/rendering (100ms)
readPixels (300ms)
Put into canvas (1000ms)
Save canvas to file (300ms)
Essentially this means the browser locks up for almost four seconds when saving the larger unscaled texture, with the user unable to do anything. Is it possible to do this asynchronously so that the browser stays responsive? It needs to all be done in javascript and client-side, as I'm using local files (HTML5 file/filesystem).
Web workers sounded like a good idea, but they are unable to access the DOM, so I can't use the browser's image loading and saving functionality, and they have no access to the WebGL context so they can't call texImage2d, which takes the most time.
Due to the size and number of images I cannot load them all into memory as textures when the page initially loads.
Is there anything that can be done to make this more responsive to the user? I'd like them to be able to continue working on the next image while the previous one renders.
The image loading should happen in the background, and you'll get no idea of progress for it, but you could use texSubImage2D to incrementally upload the texture. That will probably take a little bit longer overall but you should be able to give the user some feedback and respond to other events.
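A minimal sketch of that incremental upload, assuming gl is the WebGL context, texture has already been created, and img is a fully loaded Image; the strip height and all names are illustrative. The idea is to allocate the full texture with no data, then copy one horizontal strip of the source image into a small scratch canvas and hand that to texSubImage2D, yielding to the event loop between strips:

var stripHeight = 256;
var strip = document.createElement('canvas');
strip.width = img.width;
var ctx = strip.getContext('2d');

gl.bindTexture(gl.TEXTURE_2D, texture);
// Allocate storage for the whole texture without uploading any pixels yet.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, img.width, img.height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);

var y = 0;
function uploadNextStrip() {
    if (y >= img.height) return;  // finished; the texture is now fully populated
    var rows = Math.min(stripHeight, img.height - y);
    strip.height = rows;
    // Copy one horizontal strip of the source image into the scratch canvas.
    ctx.drawImage(img, 0, y, img.width, rows, 0, 0, img.width, rows);
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, y, gl.RGBA, gl.UNSIGNED_BYTE, strip);
    y += rows;
    setTimeout(uploadNextStrip, 0);  // yield so the UI can respond between strips
}
uploadNextStrip();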
Also, you can draw the WebGL canvas directly into a 2D canvas: drawImage() takes image, video, and canvas (2D or WebGL) elements as arguments. That should happen almost instantaneously and save about 1300 ms.
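That part is a one-liner. A minimal sketch, assuming glCanvas is the WebGL canvas and ctx2d is the 2D context of a same-sized regular canvas:

// Copies the WebGL canvas's current contents into the 2D canvas.
ctx2d.drawImage(glCanvas, 0, 0);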
This question is old, but for others who find it, here's some updated info:
You can consider using OffscreenCanvas which has been shipping in Chrome since January 2019. Unfortunately it's not shipping anywhere else ATM
These 3 synchronous steps
readPixels (300ms)
Put into canvas (1000ms)
Save canvas to file (300ms)
can be turned into 1 asynchronous step
gl.canvas.toBlob((blob) => {
    const url = URL.createObjectURL(blob);
    // url is now something you can give the user to download
});
For rendering really large images (say 16k by 16k), you can render smaller portions and assemble them into a larger image.
There's a library for that here: https://greggman.github.io/dekapng/

How do you show pictures as fast as Facebook?

Can any of you help me show pictures as fast as Facebook does?
Facebook is an incredible place to look at pictures, because the pictures seem to be preloaded, I think.
Often when you view galleries on other sites, it is a pain in the a**, because it is so slow every time you change picture.
I think you need javascript to do it!?
Depending on your implementation, you could do this with some ajax and hidden dom elements.
Suppose you have a gallery with a slideshow. You could insert a hidden DOM element with the next picture of the slideshow on each load. This would cause the image to be loaded. If you then used JS to insert that same image tag later, the browser would rely on its cache rather than fetching it from the server, since it already has that photo.
This is kind of a broad question, but I think this approach would work. You would probably be better off not reinventing the wheel and seeing what image prefetch libraries based on jQuery or whatever are available to you.
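For reference, a minimal sketch of the hidden-element prefetch described above; nextUrl stands in for wherever your gallery keeps the upcoming picture's URL:

var prefetch = document.createElement('img');
prefetch.style.display = 'none';   // never shown to the user
prefetch.src = nextUrl;            // the browser fetches and caches the image now
document.body.appendChild(prefetch);
// Later, an <img> inserted with the same src is served from the cache.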
Facebook compresses images to extremes. Try it yourself: take an image you are having trouble with, upload it to Facebook, then check the size of the image and you will know why. Once I did a small test by uploading a 17429-byte image, and it was "compressed" to 18757 bytes, a complete 7% increase from the original size!
At that compressed size, you can implement some sort of prefetching of the next image for display. On top of that, I think they have extremely good infrastructure.
Facebook uses Bigpipe; there is an open implementation in the works called openpipe.
Bigpipe pushes content to the browser as soon as the server has finished processing it, so the user perceives it as faster.
It basically loads pagelets as they become ready for the user. On the browser side the implementation is JavaScript based, and you must push the info to the client with your preferred server language.
First of all, Facebook heavily compresses images. Many other websites don't. Facebook also has a faster network than most other websites.
With the small image size, the client can prefetch the next image.
Preloaded would mean loading when the page is loaded, which is what happens with an <img> tag. No, it's simply because the file size is smaller.
If you want images to be viewed more quickly on your site, first make sure the images are decently compressed and aren't any bigger than they have to be. The number of times I have seen websites use an extremely large image scaled down to fit in an element 5 times smaller is just ridiculous.
You can check out these sites, which have many implementations and links on how to pre-load / pre-fetch images (CSS, JavaScript, AJAX):
http://perishablepress.com/press/2009/12/28/3-ways-preload-images-css-javascript-ajax/
Since your question was tagged with 'jquery' here is one just for that.
http://engineeredweb.com/blog/09/12/preloading-images-jquery-and-javascript

I want to load multiple images very fast on a website, what's the best method?

UPDATE: This question is outdated, please disregard
So... my idea is to load a full manga/comic at once, with a progress bar included, and make a sort of stream, like:
My page loads the basic (HTML+CSS+JS) (of course)
Once that's done, I start loading the images (the URLs are stored in a JS var) from my server, one at a time (or some faster way) so I can make a sort of progress bar.
ALTERNATIVE: Is there a way to load a compressed file with all the images and uncompress it in the browser?
ALTERNATIVE: I was also thinking of saving them as strings and then decoding them; they are mostly .jpg.
The images don't have to show right away, I just need the callback when they are done.
XHTML and HTML5 are acceptable.
What is the fastest way to load a series of images for my website?
EDIT
Since @Oded's comment... the question is really what the best technique is for loading images so the user doesn't have to wait every time they turn the 'page'. Targeting an experience more like reading comics in real life.
EDIT2
As some people helped me realize, I'm looking for a pre-loader on steroids
EDIT3
No CSS techniques will do.
If you split large images into smaller parts, they'll load faster on modern browsers due to pipelining.
ALTERNATIVE: Is there a way to load a compressed file with all the images and uncompress it in the browser?
Image formats are already compressed. You would gain nothing by stitching and trying to further compress them.
You can just stick the images together and use background-position to display different parts of them: this is called ‘spriting’. But spriting's mostly useful for smaller images, to cut down the number of HTTP requests to the server and somewhat reduce latency; for larger images like manga pages the benefit is not so large, possibly outweighed by the need to fetch one giant image all at once even if the user is only going to read the first few pages.
ALTERNATIVE: I was also thinking of saving them as strings and then decoding them
What would that achieve? Transferring as a string would, in most cases, be considerably slower than raw binary. Then, to get them from JavaScript strings into images, you'd have to use data: URLs, which don't work in IE6-IE7 and are limited in how much data you can put in them. Again, this is meant primarily for small images.
I think all you really want is a bog-standard image preloader.
You could preload the images in javascript using:
var x = new Image();
x.src = "someurl";
This would work like the one you described as "saving the image in strings".
Spriting
Just have a look at how Facebook does it: http://b.static.ak.fbcdn.net/rsrc.php/z3JQK/hash/11cngjg0.png
One image loads FASTER than a series of small images. To display an icon you simply create a div with fixed dimensions and move the background inside it. Your div works as a viewport for the big image; you use background-position to move to the appropriate part of the image. Everything else is hidden.
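A minimal sketch of that viewport idea, done from JS so it's self-contained; the sprite URL, dimensions and offsets are made up for illustration:

var icon = document.createElement('div');
icon.style.width = '16px';
icon.style.height = '16px';
icon.style.backgroundImage = 'url(/img/sprite.png)';
// Slide the big sprite so only the wanted 16x16 icon shows through the div.
icon.style.backgroundPosition = '-32px -48px';
document.body.appendChild(icon);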
Different domains
Something you probably didn't know - Internet Explorer has a limit on connections per server. You can read about it here: http://support.microsoft.com/?scid=kb;en-us;183110&x=17&y=11 (the exact numbers are there).
What it means: if the user is using IE7, he will be able to load ONLY 4 (or 2) files at the same time from your server, regardless of his internet connection speed.
To speed things up, you could create a few subdomains: server1.mydomain.com, server2.mydomain.com, server3.mydomain.com, etc. - then the user can download many files a lot quicker, because you use different hosts to serve different files.
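A minimal sketch of spreading image requests across those subdomains; the host names and the imageNames array are placeholders:

var hosts = ['server1.mydomain.com', 'server2.mydomain.com', 'server3.mydomain.com'];
imageNames.forEach(function(name, i) {
    var img = new Image();
    // Round-robin over the hosts so no single host hits the per-server connection cap.
    img.src = 'http://' + hosts[i % hosts.length] + '/images/' + name;
});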
Once that's done, I start loading the images (the URLs are stored in a JS var) from my server, one at a time (or some faster way) so I can make a sort of progress bar.
Your browser already downloads the HTML first, that's how it knows to load any JS/images you reference. You are trying to invent something that already exists.
Just make sure your manga is made up of lots of images of a known size, which you specify in your img tags. Most browsers have some sort of progress bar to show that it's loading resources for you. You're not going to make loading large images faster unless you improve either the speed at which your server serves them, or your user's internet connection, or you compress them to make your image files smaller (likely at the cost of image quality).
Note: JPG and PNG are already compressed.
You can try using a "CSS sprites" technique. Basically, the idea is that you use your favorite image editing program to stitch all your images into a single image. It's faster to send this because you lose the per-file overhead of encoding and sending each image. On the client side you use CSS to select only the portion of the total image that is used in any one place.
http://www.alistapart.com/articles/sprites/
http://www.fiftyfoureleven.com/weblog/web-development/css/css-sprites-images-optimization
AND/OR
You can use lazy loading to only load images when they come into view.
http://www.appelsiini.net/projects/lazyload
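For reference, a library-free sketch of the same lazy-loading idea using the much newer IntersectionObserver API; it assumes each image carries its real URL in a data-src attribute:

var observer = new IntersectionObserver(function(entries) {
    entries.forEach(function(entry) {
        if (entry.isIntersecting) {
            var img = entry.target;
            img.src = img.dataset.src;   // start the real download only when visible
            observer.unobserve(img);     // each image only needs this once
        }
    });
});
document.querySelectorAll('img[data-src]').forEach(function(img) {
    observer.observe(img);
});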
Image preloaders have been around for ages. You really do not need to load them all at once; you can do it on demand (when the person loads the next page, you can fetch the image after it).
My page loads the basic (HTML+CSS+JS) (of course). Once that's done, I start loading the images (the URLs are stored in a JS var) from my server, one at a time (or some faster way) so I can make a sort of progress bar. The images don't have to show right away, I just need the callback when they are done.
If you want to load 10 images as fast as possible, place 10 <img> tags on the page, one for each image. Use JavaScript to hide all but the currently viewed image; add next/back links that use JS to hide the current image and show the next one. Many browsers already have some form of progress bar, and by doing things with regular old HTML, it will function correctly.
You're trying to re-invent all this functionality with Javascript for no good reason. You're not going to do it better than the browser.
All that said, this is probably a bad idea. You might dump 15MB of comic pages into the browser window only to have the user leave after reading the first page. Rather than trying to pre-load all images, you should use JS to always keep the next page (or two) pre-loaded, not the entire thing.
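A minimal sketch of the next/back navigation described above, assuming the pages are already in the document as img elements with the class "page" (all but the first hidden) and that #next/#back links exist; the names are illustrative:

var pages = document.querySelectorAll('img.page');
var current = 0;

function show(n) {
    if (n < 0 || n >= pages.length) return;   // stay inside the comic
    pages[current].style.display = 'none';
    pages[n].style.display = 'block';
    current = n;
}

document.getElementById('next').onclick = function() { show(current + 1); };
document.getElementById('back').onclick = function() { show(current - 1); };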
Here's something you can try, which by happenstance I just coded up:
(function() {
    var imgs = [ "image1.png", "image2.png" /* ... all your image names */ ],
        index = 0,
        img;
    function loader() {
        if (index >= imgs.length) return;
        (img = new Image()).onload = loader;
        setTimeout(function() { img.src = "/path/to/images/" + imgs[index++]; }, 1);
    }
    loader();
})();
Plop all your image names (or the ones you want to preload) into the array, and make sure this script starts up when your page(s) start loading. It'll work its way through the list of images, loading them, and then moving on to the next one when each image finishes. (The setTimeout call is to make sure that the "onload" handler doesn't get called while you're still inside a handler.)
You'd probably want to do this for lots of the "nuts and bolts" images for your whole site - in other words, each page would try to load images for everything. Once they're in the cache, of course, this won't take a significant amount of time. Alternatively, you could run this script only on a couple pages, like "login" screens and the main "home" page. Of course, if you've got a site like Flickr, then you probably wouldn't want to preload all your images :-)
