JavaScript FileSaver saves empty files after writing a large number of files

Background
I'm working on an internal project that basically can generate a video on the client side, but since there are no JavaScript video encoders I'm aware of, I'm just exporting each frame individually. I need to avoid uploading to the server; this is all happening on the client side.
Implementation
I'm using this FileSaver.js (more specifically, Chrome's webkit FileSystem API) to save a large number of PNGs generated by an HTML5 canvas. I set Chrome to automatically download to a specific folder, so when I hit 'Save' it just takes off and saves something like 20 images per second. This works perfectly for my purposes.
If I could use JSZip to compress all these frames into one file before offering it to the client to save, I would, but I haven't even tried because there's just no way the browser will have enough memory to generate ~8000 640x480 PNGs and then compress them.
Problem
The problem is that after a certain number of images, every file downloaded is empty. Chrome even starts telling me in the download bar that the file is 0 bytes. Repeated on the same project with the same export settings, the empty saves start at exactly the same point. For example, with one project, I can save the first 5494 frames before it chokes. (I know this is an insanely large number, but I can't help that.) I tried setting a 10ms delay between saves, but that didn't have any effect. I haven't tried a larger delay because exporting takes a very long time as it is.
I checked the blob.size and it's never zero. I suspect it's exceeding some quota, but there are no errors thrown; it just silently fails to either write to the sandbox or copy the file to the user-specified location.
Questions
How can I detect these empty saves? Prevent them? Is there a better way to do this? Am I just screwed?
EDIT: Actually, after debugging FileSaver.js, I realized that it's not even using webkitRequestFileSystem; it returns when it gets here:
if (can_use_save_link) {
    object_url = get_object_url(blob);
    save_link.href = object_url;
    save_link.download = name;
    if (click(save_link)) {
        filesaver.readyState = filesaver.DONE;
        dispatch_all();
        return;
    }
}
So, it looks like it's not even using the FileSystem API, and therefore I have no idea how to empty the storage before it's full.
EDIT 2: I tried moving the "if (can_use_save_link)" block to inside the "writer.onwriteend" function, and changing it to this:
if (can_use_save_link) {
    save_link.href = file.toURL();
    save_link.download = name;
    click(save_link);
} else {
    target_view.location.href = file.toURL();
}
The result is I'm able to save all 8260 files (about 1.5GB total) since it's now using storage with a quota. Before, the files didn't show up in the HTML5 FileSystem because I assume you didn't need to put them there if the anchor element supported the 'download' attribute.
I was also able to comment out the code that appends ".download" to the filename, and I had to provide an empty anonymous function as an argument to both instances of "file.remove()".
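For reference, the empty callback mentioned above looks like this (just illustrating the change described; file is the FileEntry that FileSaver.js already has in scope at that point):
// FileEntry.remove() requires a success callback, so pass a no-op:
file.remove(function () {});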

Use JSZip; it won't use too much memory if you disable compression (which is the default). To make sure compression stays disabled, pass compression: "STORE" when calling zip.generate().
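A minimal sketch of that approach (this assumes JSZip 2.x, where zip.generate() is synchronous, plus FileSaver.js's saveAs(); the frames array of canvas data URLs is hypothetical):
// Hypothetical: `frames` is an array of canvas.toDataURL("image/png") strings.
var zip = new JSZip();
for (var i = 0; i < frames.length; i++) {
    // Strip the "data:image/png;base64," prefix and add the raw base64 payload.
    zip.file("frame" + ("0000" + i).slice(-4) + ".png", frames[i].split(",")[1], { base64: true });
}
// "STORE" archives the already-compressed PNGs without recompressing them.
var archive = zip.generate({ type: "blob", compression: "STORE" });
saveAs(archive, "frames.zip"); // FileSaver.js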

I ended up modifying FileSaver.js (see "EDIT 2" in the original post).

Related

ESP8266 serving HTML+js

I'm trying to host an HTML file on an ESP8266 access point. I can show the .html file properly. Unfortunately, when accessing the HTML page, my browser cannot load the JavaScript content. Strangely, when I work locally on my machine it works perfectly fine. When I access the page on the ESP8266 I receive the error
"Not found: dygraph.min.js."
Obviously, the browser does not find the JavaScript source. I wonder why. I have tried out several ways of naming and referencing, but I've had no luck so far.
I upload the files with the ESP8266 Sketch Data Upload tool to the SPIFFS. In the html file I reference the js as <script type="text/javascript"
src="dygraph.min.js"></script>.
Did anybody experience anything like this before? The whole code can be found here:
https://github.com/JohnnyMoonlight/esp8266-AccessPoint-Logger-OfflineVisualisation
I am looking forward to your input!
Thanks and best!
Take a read through your code, and imagine the requests that will be made of your web server.
Your code is written to handle requests for two URLs: / and /temp.csv - that's it.
When /temp.csv is accessed, you serve the contents of index.html. When the browser interprets that file it will try to load /dygraph.min.js from your ESP. You don't have a handler for that file. So the load fails.
You need to add a handler for it and then serve the file. So you'll need to add a line like:
server.on("/dygraph.min.js", handleJS);
and define function void handleJS() that does what handleFile() does.
You'll need to do the same thing for the /dygraph.css; you don't have a handler for it either.
I would do it this way:
void handleHTML() {
    handleFile("index.html");
}

void handleJS() {
    handleFile("dygraph.min.js");
}

void handleCSS() {
    handleFile("dygraph.css");
}

void handleFile(const char *filename) {
    File f = SPIFFS.open(filename, "r");
    // the rest of your handleFile() code here
}
and in your setup():
server.on("/", handleRoot);
server.on("/temp.csv", handleHTML);
server.on("/dygraph.css", handleCSS);
server.on("/dygraph.min.js", handleJS);
Separately:
Your URL-to-file mappings are messed up. The code I shared above is consistent with what you have now, but normally you'd want / to serve index.html; you have it serving a fragment of HTML.
Normally /temp.csv would serve a comma-separated-value file. I see you have one in the repo, and you have code to add data to it; you're just not serving it. Right now you have that URL serving index.html. Once you start successfully loading the JavaScript you'll have problems with that.
You'll need to sort those out to get this working right.
Also, in loop() you should move server.handleClient(); to be the first thing in the loop. The way you have it written you're only checking to see if there's a web request if it's time to take another temperature reading. You should always check to see if there's a web request, otherwise you're unnecessarily slowing down web service.
One last thing, completely separate from the web server code, and I wouldn't worry about this till you get the rest of your code working: your code is writing to SPIFFS roughly every 5 seconds. SPIFFS is stored in flash memory on the ESP8266. ESP8266 boards use cheap flash memory that doesn't last a long time - it wears out after maybe 10,000 to 100,000 write cycles (this is a little complicated; it's broken into "pages" and the individual cells in the pages wear out, but you have to write the entire page at the same time).
It's hard to say for sure what its lifetime will be; it depends on the specific ESP8266 boards and flash chips involved. At one write every 5 seconds, 10,000 write cycles means the flash memory on your board might start failing after 50,000 seconds, and 100,000 write cycles would give you about 500,000 seconds -- if you keep writing to the same spot. It depends on how often the same place in flash is getting written to. If that's a problem for you, you might want to increase the delay between writes or do something else with your data.
You might not run into this because you're appending to a file - you'll still rewrite the same blocks of flash memory many times, but not 10,000 times - unless you often remove the CSV file and start over. So this might be a problem for you long term or might not.
You can read more about these problems at https://design.goeszen.com/mitigating-flash-wear-on-the-esp8266-or-any-other-microcontroller-with-flash.html
Good luck!

Unable to reload same gif image, if used twice in a page [duplicate]

I know there are many ways to prevent image caching (such as via META tags), as well as a few nice tricks to ensure that the current version of an image is shown with every page load (such as image.jpg?x=timestamp), but is there any way to actually clear or replace an image in the browsers cache so that neither of the methods above are necessary?
As an example, let's say there are 100 images on a page and that these images are named "01.jpg", "02.jpg", "03.jpg", etc. If image "42.jpg" is replaced, is there any way to replace it in the cache so that "42.jpg" will automatically display the new image on successive page loads? I can't use the META tag method, because I need everything that ISN'T replaced to remain cached, and I can't use the timestamp method, because I don't want ALL of the images to be reloaded every time the page loads.
I've racked my brain and scoured the Internet for a way to do this (preferably via JavaScript), but no luck. Any suggestions?
If you're writing the page dynamically, you can add the last-modified timestamp to the URL:
<img src="image.jpg?lastmod=12345678" ...
<meta> is absolutely irrelevant. In fact, you shouldn't try to use it for controlling caching at all (by the time anything reads the content of the document, it's already cached).
In HTTP each URL is independent. Whatever you do to the HTML document, it won't apply to images.
To control caching you could change URLs each time their content changes. If you update images from time to time, allow them to be cached forever and use a new filename (with a version, hash or a date) for the new image — it's the best solution for long-lived files.
If your image changes very often (every few minutes, or even on each request), then send Cache-control: no-cache or Cache-control: max-age=xx where xx is the number of seconds that image is "fresh".
A random URL for short-lived files is a bad idea. It pollutes caches with useless files and forces useful files to be purged sooner.
If you have Apache and mod_headers or mod_expires then create .htaccess file with appropriate rules.
<Files ~ "-nocache\.jpg">
Header set Cache-control "no-cache"
</Files>
The above will make *-nocache.jpg files non-cacheable.
You could also serve images via a PHP script (they have awful cacheability by default ;)
Contrary to what some of the other answers have said, there IS a way for client-side javascript to replace a cached image. The trick is to create a hidden <iframe>, set its src attribute to the image URL, wait for it to load, then forcibly reload it by calling location.reload(true). That will update the cached copy of the image. You may then replace the <img> elements on your page (or reload your page) to see the updated version of the image.
(Small caveat: if updating individual <img> elements, and if there are more than one having the image that was updated, you've got to clear or remove them ALL, and then replace or reset them. If you do it one-by-one, some browsers will copy the in-memory version of the image from other tags, and the result is you might not see your updated image, despite its being in the cache).
I posted some code to do this kind of update here.
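A rough sketch of that approach (the function name and the one-second cleanup delay are arbitrary assumptions; location.reload(true)'s force-reload argument is non-standard but is what the trick relies on):
// Load the stale image into a hidden iframe, force-reload it to refresh the cache,
// then let the caller swap or re-set the <img> elements on the page.
function refreshCachedImage(url, onDone) {
    var iframe = document.createElement("iframe");
    iframe.style.display = "none";
    iframe.src = url;
    document.body.appendChild(iframe);
    iframe.onload = function () {
        iframe.onload = null;                        // don't fire again after the forced reload
        iframe.contentWindow.location.reload(true);  // bypass the cache for this URL
        setTimeout(function () {                     // arbitrary delay before cleanup
            document.body.removeChild(iframe);
            if (onDone) onDone();
        }, 1000);
    };
}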
Change the image URL like this, adding a random string to the query string:
"image1.jpg?" + DateTime.Now.ToString("ddMMyyyyhhmmsstt");
I'm sure most browsers respect the Last-Modified HTTP header. Send those out and request a new image. It will be cached by the browser if the Last-Modified line doesn't change.
You can append a random number to the image URL, which is like giving it a new version. I have implemented similar logic and it's working perfectly.
<script>
    var num = Math.random();
    var imgSrc = "image.png?v=" + num;
    $(function() {
        $('#imgID').attr("src", imgSrc);
    });
</script>
I found this article on how to cache bust any file
There are many ways to force a cache bust in this article but this is the way I did it for my image:
fetch('/thing/stuck/in/cache', {method:'POST', credentials:'include'});
The reason the ?x=timestamp trick is used is that it's the only way to do it on a per-image basis. That, or dynamically generate image names and point to an application that outputs the image.
I suggest you figure out, server side, if the image has been changed/updated, and if so then output your tag with the ?x=timestamp trick to force the new image.
No, there is no way to force a file in a browser cache to be deleted, either by the web server or by anything that you can put into the files it sends. The browser cache is owned by the browser, and controlled by the user.
Hence, you should treat each file and each URL as a precious resource that should be managed carefully.
Therefore, porneL's suggestion of versioning the image files seems to be the best long-term answer. The ETAG is used under normal circumstances, but maybe your efforts have nullified it? Try changing the ETAG, as suggested.
Change the ETAG for the image.
See http://en.wikipedia.org/wiki/URI_scheme
Notice that you can provide a unique username:password@ combo as a prefix to the domain portion of the URI. In my experimentation, I've found that inclusion of this with a fake ID (or password, I assume) results in the treatment of the resource as unique - thus breaking the caching as you desire.
Simply use a timestamp as the username, and as far as I can tell the server ignores this portion of the URI as long as authentication is not turned on.
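For illustration (hypothetical host and timestamp; note that some modern browsers block embedded credentials in subresource URLs, so this trick may no longer work everywhere):
<img src="http://1404821045@www.example.com/images/42.jpg">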
Btw - I also couldn't use the tricks above with a Google Maps marker icon caching problem I was having, where the ?param=timestamp trick worked but caused issues with disappearing overlays. I never could figure out why that was happening, but so far so good using this method. What I'm unsure of is whether passing fake credentials has any adverse effects on server performance. If anyone knows, I'd be interested, as I'm not yet in high-volume production.
Please report back your results.
Since most, if not all, answers and comments here are copies of parts of the question, or close enough, I shall throw my 2 cents in.
I just want to point out that even if there is a way, it is going to be difficult to implement. The logic of it traps us. From a logical stance, telling the browser to replace its cached images for each changed image on a list since a certain date is ideal, BUT... when would you take the list down, and how would you know whether everyone who would visit again has the latest version?
So my 1st "suggestion", as the OP asked for, is this list theory.
How I see doing this is:
A.) Have a list where our dynamically and manually changed image URLs can be stored.
B.) Set a dead date when the cache will be reset and the list will be truncated regardless.
C.0) Check the list on site entrance against the browser via an iframe, which could be run in the background with a shorter cache header set, to re-cache them all against the farthest date on the list, or something of that nature.
C.1) Using the iframe or an AJAX/XHR request, I'm thinking you could loop through each image on the list, refreshing the page to show a different image and checking the cache against its own modified date. So on each image's onload, use the server side to work out whether it is the last image; if it isn't, go to the next image once it has loaded.
C.1a) This would mean that our list may need more information per image, and the obvious addition is some server-side script to adjust the headers as required by each image, to minimize the footprint of re-caching changed site images.
My 2nd "suggestion" would be to notify the user of changes and direct them to clear their cache. (Carefully, remove only images and files when possible or warn them of data removal due to the process)
P.S. This is just an educated ideation. A quick theory. If/when I make it I will post the final version. Probably not here, because it will require server-side scripting. This is at least a suggestion not mentioned in the OP's question that he says he already tried.
It sounds like the base of your question is how to get the old version of the image out of the cache. I've had success just making a new call and specifying in the header not to pull from cache. You're just throwing this away once you fetch it, but the browser's cache should have the updated image at that point.
var headers = new Headers();
headers.append('pragma', 'no-cache');
headers.append('cache-control', 'no-cache');

var init = {
    method: 'GET',
    headers: headers,
    mode: 'no-cors',
    cache: 'no-cache'
};

fetch(new Request('path/to.file'), init);
However, it's important to recognize that this only affects the browser this is called from. If you want a new version of the file for any browser once the image is replaced, that will need to be accomplished via server configuration.
Here is a solution using the PHP function filemtime():
<?php
    $addthis = filemtime('myimg.jpg');
?>
<img src="myimg.jpg?<?= $addthis; ?>">
Using the file modified time as a parameter will cause the browser to read from the cached version until the file has changed. This approach is better than using e.g. a random number, as caching will still work if the file has not changed.
In the event that an image is re-uploaded, is there a way to CLEAR or REPLACE the previously cached image client-side? In my example above, the goal is to make the browser forget what "42.jpg" is
You're running firefox right?
Find the Tools Menu
Select Clear Private Data
Untick all the checkboxes except make sure Cache is Checked
Press OK
:-)
In all seriousness, I've never heard of such a thing existing, and I doubt there is an API for it. I can't imagine it'd be a good idea on part of browser developers to let you go poking around in their cache, and there's no motivation that I can see for them to ever implement such a feature.
I CANNOT use the META tag method OR the timestamp method, because I want all of the images cached under normal circumstances.
Why can't you use a timestamp (or etag, which amounts to the same thing)? Remember you should be using the timestamp of the image file itself, not just Time.Now.
I hate to be the bearer of bad news, but you don't have any other options.
If the images don't change, neither will the timestamp, so everything will be cached "under normal circumstances". If the images do change, they'll get a new timestamp (which they'll need to for caching reasons), but then that timestamp will remain valid forever until someone replaces the image again.
When changing the image filename is not an option, use a server-side session variable and the JavaScript window.location.reload() function, as follows:
After Upload Complete:
Session("reload") = "yes"
On page_load:
If Session("reload") = "yes" Then
Session("reload") = Nothing
ClientScript.RegisterStartupScript(Me.GetType), "ReloadImages", "window.location.reload();", True)
End If
This allows the client browser to refresh only once, because the session variable is reset after one occurrence.
Hope this helps.
To replace the cache for a picture, you can store a version value on the server side and, when you load the picture, send this value instead of a timestamp. When your image changes, change its version.
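A minimal illustration of that idea (imageVersion is a hypothetical value that the server renders into the page and bumps whenever the image changes):
// Hypothetical: the server emits imageVersion and increments it when the image file is replaced.
var imageVersion = 3;
document.getElementById('logo').src = 'logo.png?v=' + imageVersion;
// Same version => the cached copy is reused; new version => new URL, so the browser refetches.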
Try this code snippet:
var url = imgUrl + "?" + Math.random();
This will make sure that each request is unique, so you will always get the latest image.
After much testing, I found a solution that works in the following way:
1- I create a temporary folder and copy the images into it, adding time() to each name (if the folder already exists, I delete its contents).
2- I load the images from that temporary local folder.
This way I make sure the browser never caches the images, and it works 100% correctly.
if (!is_dir(getcwd().'/articulostemp')) {
    $oldmask = umask(0);
    mkdir(getcwd().'/articulostemp', 0775);
    umask($oldmask);
} else {
    // rrmfiles() is the poster's helper that empties the folder
    rrmfiles(getcwd().'/articulostemp');
}

foreach ($images as $image) {
    $tmpname = time().'-'.$image;
    $srcimage = getcwd().'/articulos/'.$image;
    $tmpimage = getcwd().'/articulostemp/'.$tmpname;
    copy($srcimage, $tmpimage);

    $urlimage = 'articulostemp/'.$tmpname;
    echo ' <img loading="lazy" src="'.$urlimage.'"/> ';
}
Try the solution below:
myImg.src = "http://localhost/image.jpg?" + new Date().getTime();
The above solution works for me :)
I usually do the same as #Greg told us, and I have a function for that:
function addMagicRefresh(url)
{
    var symbol = url.indexOf('?') == -1 ? '?' : '&';
    var magic = Math.random() * 999999;
    return url + symbol + 'magic=' + magic;
}
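For example (the element id is hypothetical), you could use it like this:
// Re-request the image with a throwaway "magic" query parameter appended.
document.getElementById('photo').src = addMagicRefresh('image.jpg');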
This will work since your server accepts it and you don't use the "magic" parameter any other way.
I hope it helps.
I have tried something ridiculously simple:
Go to the FTP folder of the website and rename the IMG folder to IMG2. Refresh your website and you will see the images are missing. Then rename the folder IMG2 back to IMG and it's done; at least it worked for me in Safari.

Javascript proof-of-concept of GIF (re)compression

My program Precomp can be used to further compress already compressed file formats like GIF, PNG, PDF, ZIP and more. Roughly summarized, it does this by decompressing the compressed streams, recompressing them and storing the differences between the expected compressed stream and the actual compressed stream. As an example, this rotating earth picture from Wikipedia is compressed from 1429 KB to 755 KB. The process is lossless, so the original GIF file can be restored.
The algorithm for the GIF file format can be isolated and implemented relatively easily, so I was thinking about a proof-of-concept implementation in JavaScript. This would lead to the web server sending a compressed version of the GIF file (.pcf ending, essentially a bzip2 compressed file of the GIF image contents) and the client decompressing the data, recompressing it to GIF and displaying it. The following things would have to be done:
The web site author would have to compress his GIF files using the standard version of Precomp and serve these instead of the GIF files, together with a JavaScript for the client-side recompression.
The client would decompress the bzip2 compressed file; this could be done using one of the existing bzip2 JavaScript implementations.
The client would recompress the image content into the original GIF file.
The process trades bandwidth for CPU usage on the client side.
Now my questions are the following:
Are there any general problems with the process of loading a different file and "converting" it to GIF?
What would you recommend to display before the client side finishes (image placeholder)?
What do I have to do to make sure the .pcf file is cached? Bandwidth savings would be useless if it doesn't get cached.
Is there a way to display the original GIF if JavaScript is deactivated, but avoid loading the GIF if JavaScript is activated?
Can I give the users a way to configure the behaviour? E.g. on mobile devices, some might avoid bandwidth, but others might want less CPU usage.
Would it be possible to display interlaced GIFs as intended (going from a rough version to the final image)? This would require updating the image content multiple times at different stages of recompression.
Let's begin by answering your specific questions. Code example below.
Q&A
Are there any general problems with the process of loading a different file and "converting" it to GIF?
The main problem is complication. You are effectively writing a browser addon, like those for JPEG2000.
If you are writing real browser addons, each major browser does it differently, and they change addon formats occasionally, so you have to actively maintain them.
If you are writing a JS library, it will be easier to write and maintain, but it will be unprivileged and suffer from limitations such as cross-origin restrictions.
What would you recommend to display before the client side finishes (image placeholder)?
Depends on what your format can offer.
If you encode the image dimensions and a small thumbnail early, you can display an accurate placeholder pretty early.
It is your format, after all.
What do I have to do to make sure the .pcf file is cached? Bandwidth savings would be useless if it doesn't get cached.
Nothing different from other files.
Configure the Expires and Cache-Control header on server side and they will be cached.
Manifest and prefetch can also be used.
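For instance (reusing the demo.pcf name from the example further down), a prefetch hint could look like:
<link rel="prefetch" href="demo.pcf">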
Is there a way to display the original GIF if JavaScript is deactivated, but avoid loading the GIF if JavaScript is activated?
This is tricky. When JavaScript is disabled, you can only replace elements, not attributes.
This means you cannot create an image somewhere that points to the .pcf files, and ask browser to rewrite the src attribute when JS is unavailable.
I think the best solution to support no JS is outputting the images with document.write, using noscript as fall back:
<noscript>
<img src=demo.gif width=90>
</noscript><script>
loadPcf("demo.pcf","width=90")
</script>
(Some library or framework may make you consider <img src=demo.gif data-pcf=demo.pcf>.
This will not work for you, because browsers will preload 'demo.gif' before your script kicks in, causing additional data transfer.)
Alternatively, browser addons are unaffected by "disable JS" settings, so if you make addons instead then you don't need to worry about it.
Can I give the users a way to configure the behaviour? E.g. on mobile devices, some might avoid bandwidth, but others might want less CPU usage.
Perhaps. You can code a user interface and store the preference in a cookie or in localStorage.
Then you can detect the preference and switch the logic in server code (if using a cookie) or in client code.
If you are doing addons, all browsers provide reliable preference mechanism.
The problem is that, again, every browser does it differently.
Would it be possible to display interlaced GIFs as intended (going from a rough version to the final image)? This would require updating the image content multiple times at different stages of recompression.
If you hand browsers a partial image, they may think the image is corrupted and refuse to show it.
In this case you have to implement your own GIF decoder AND encoder so that you can hand the browser a complete placeholder image, just to be safe.
(new) Can I decode an image loaded from another site?
I must also repeat the warning that non-addon JS image decoding does not work with cross-origin images.
This means all .pcf files must be on the same server, same port, and same protocol as the site using it.
For example you cannot share images for multiple sites or do optimisations like domain sharding.
Code Example
Here is a minimal example that creates an <img>, loads a GIF, halves its width, and puts it back into the <img>.
To support placeholder or progressive loading, listen to onprogress instead of/in addition to onload.
<!DOCTYPE html><html><head><meta charset="UTF-8"><script>
function loadPcf( file, attr ) {
    var atr = attr || '', id = loadPcf.autoid = 1 + ~~loadPcf.autoid;
    document.write( '<img id=pcf'+id+' ' + atr + ' />' );
    var xhr = new XMLHttpRequest();
    xhr.responseType = 'arraybuffer'; // IE 10+ only, sorry.
    xhr.onload = function() { // I am loading gif for demo, you can load anything.
        var data = xhr.response, img = document.querySelector( '#pcf' + id );
        if ( ! img || ! ( data instanceof ArrayBuffer ) ) return;
        var buf = new DataView( data ), head = buf.getUint32(0), width = buf.getUint16(6,1);
        if ( head !== 1195984440 ) return console.log( 'Not a GIF: ' + file ); // 'GIF8' = 1195984440
        // Modify data, image width in this case, and push it to the <img> as gif.
        buf.setInt16( 6, ~~(width/2), 1 );
        img.src = URL.createObjectURL( new Blob( [ buf.buffer ], { type: "image/gif" } ) );
    };
    xhr.open( 'GET', file );
    xhr.send();
}
</script>
<h1>Foo<noscript><img src=a.gif width=90></noscript><script>loadPcf("a.gif","width=90")</script>Bar</h1>
If you don't need <noscript> compatibility (at the cost of preventing facebook/google+ from seeing the images when the page is shared), you can put the pcf file in the <img> src and use JS to handle them en masse, so that you don't need to call loadPcf for each image, and the HTML will be much simpler.
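A hypothetical sketch of that en-masse variant (recompressPcfToGif() is a placeholder for the bzip2-decompress-and-GIF-recompress step, not a real function):
// Hypothetical: process every <img> whose src ends in ".pcf" in one pass.
Array.prototype.forEach.call(document.querySelectorAll('img[src$=".pcf"]'), function (img) {
    var xhr = new XMLHttpRequest();
    xhr.responseType = 'arraybuffer';
    xhr.onload = function () {
        var gifBytes = recompressPcfToGif(xhr.response); // placeholder: returns GIF bytes as a Uint8Array
        img.src = URL.createObjectURL(new Blob([gifBytes], { type: 'image/gif' }));
    };
    xhr.open('GET', img.src);
    xhr.send();
});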
How about <video>?
What you envisioned is mostly doable, in theory, but perhaps you should reconsider.
Judging from the questions you ask, it will be quite difficult for you to define and pull off your vision smoothly.
It is perhaps better to encode your animation in WebM and use <video> instead.
Better browser support back to IE 9 - just add H.264 to make it a dual-format video (see the markup sketch after this list). You need IE 10+ to modify binary data.
Size: Movies have many, many options and tricks to minimise size, and what you learn can be reused in the future.
Progressive: <video> has had some techniques for adaptive video, and hopefully they will stabilise soon.
JavaScript: <video> does not depend on JavaScript.
Future-proof: Both WebM and H.264 will be supported by many programs, long after you have stopped working on your special format.
Cost-effective: Creating a low-bandwidth option using smaller or lower-quality media is easier and more reliable than a custom format. This is why Wikipedia and YouTube offer their media in different resolutions.
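A sketch of the dual-format markup mentioned above (file names are placeholders; autoplay/loop/muted mimic a GIF-style animation):
<video autoplay loop muted>
    <source src="earth.webm" type="video/webm">
    <source src="earth.mp4" type="video/mp4"> <!-- H.264 fallback -->
</video>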
For non-animations, PNG can also be colour indexed and 7z optimised (keeping the PNG format).
Icon-sized indexed PNGs are often smaller than the same GIF.
Or perhaps your vision (as described in the pcf website) is the capability to encode many different files, not only GIF.
This will be more like supporting a new network protocol, and is likely beyond the scope of humble JavaScript (e.g. how are you going to handle PDF download or streaming?).

Sitecore Pipeline Upload Processor

I'm using an UploadProcessor to block specific files from being uploaded into the Media Library.
Everything is working well and I can see Sitecore's alert message. But Sitecore's error message is not really user-friendly: "One or more files could not be uploaded. See the Log file for more details."
So, I'd like to add an extra alert box for users. Below is my code, but the JavaScript is not working.
Some people want me to use "SheerResponse", but the Sitecore documentation mentions that:
The uiUpload pipeline is run not as part of the Sheer event, but as part of the form loading process in response to a post back. This is because the uploaded files are only available during the “real” post back, and not during a Sheer UI event. In this sense, the uiUpload pipeline has not been designed to provide UI. In order to provide feedback to a User, the processor should resort to some trick which emits the JScript code.
http://sdn.sitecore.net/Articles/Media/Prevent%20Files%20from%20Uploading/Pipeline%20upload.aspx
Do you have any idea how to implement the alert box?
The Upload control in the Media Library uses flash to upload the files. As part of this upload process, the file sizes are checked using JavaScript and a client side validation is made before the upload.
There are a number of changes you need to make. I'm just going to list them here; you can find all the code in my GitHub Gists:
https://gist.github.com/jammykam/54d6af46593fa3b827b4
1) Extend and update the MediaFolder.js file to check the file size against the Image Size ONLY if the extension is one specified in config
if (file.size > this.uploadLimit() || this.uploadImageLimitReached(file)) {
...
}
2) Update the MediaFolder.xml page to include the above JS. Amend the codebeside, inheriting from Sitecore.Shell.Applications.Media.MediaFolder.MediaFolderForm and overriding OnLoad and OnFilesCancelled, to render the restricted extensions and max image size settings so these are passed to the JavaScript and a friendly alert is displayed.
settings.Add("uploadImageLimit", ((long)System.Math.Min(ImageSettings.MaxImageSizeInDatabase, Settings.Runtime.EffectiveMaxRequestLengthBytes)).ToString());
settings.Add("uploadImageRestrictedExtensions", ImageSettings.RestrictedImageExtensions);
3) Update the Attach.xaml.xml codebeside to check the image size, inheriting from Sitecore.Shell.Applications.FlashUpload.Attach.AttachPage and overriding the OnQueued method:
if (ImageSettings.IsRestrictedExtension(filename) && num > maximumImageUploadSize)
{
    string text = Translate.Text("The image \"{0}\" is too big to be uploaded.\n\nThe maximum image size that can be uploaded is {1}.", new object[] { filename, MainUtil.FormatSize(maximumImageUploadSize) });
    this.WarningMessage = text;
    SheerResponse.Alert(text, new string[0]);
}
else
{
    base.OnQueued(filename, lengthString);
}
4) Add a config include with the new settings.
<setting name="Media.MaxImageSizeInDatabase" value="1MB" />
<setting name="Media.RestrictedImageExtensions" value=".jpg|.jpeg|.png|.gif|.bmp|.tiff" />
You can still (and should) keep the pipelines in place, but note that the "Restricted Extension" config setting from my previous answer has now changed (into a single setting instead of passing it into the pipeline). The Gist contains the full code.
NOTE that I have tested this using Sitecore 7.2 rev 140526, so the base code is taken from there. If you are using a different version then you should check the base C#, JS and XML code matches what I have provided. The code is commented to show you what has changed.
The above works in the Content Editor; it does not work in the Page Editor, which in Sitecore 7.2+ uses SPEAK dialogs, and it looks like they use a different set of pipelines. That would need more investigation (raise another question, and specify which version of Sitecore you are using).

Get binary image data from input type "file" using JavaScript/jQuery for use with picture preview with AJAX in WebMatrix [duplicate]

I have had trouble when researching or otherwise trying to figure out how (if it's even possible) to get binary image data using JavaScript/jQuery from an html input element of type file.
I'm using WebMatrix (C#), but it may not be necessary to know that, if the purposes of this question can be answered using JavaScript/jQuery alone.
I can take the image, save it in the database (as binary data), then later show the pic on the page, from the binary data, after posting. This does, however, leave me without a pic preview, before uploading, for which I am almost certain I must use AJAX.
Again, this may not even be possible, but as long as I can get the binary image data, I believe I can push it to the server with AJAX and process the image the same way I would if I were taking it from a database (note that I don't save the image files themselves using GUID and all that, I just save the binary data).
If there is an easier way to show a pic preview using the input element, that would work fine, too, of course, as the whole idea behind me trying to do this is to show a pic preview before they hit the submit form button (or at least create that illusion).
**********UPDATE***********
I do not consider this a duplicate of another question because my real question is:
How can I get image data from an input type "file", with JavaScript/jQuery?
If I can just get the data (in the right format) back to the server, I should be able to work with it there, and then return it with AJAX (although, I am absolutely no AJAX expert).
There is, according to the research that I have done, NO WAY to get picture previews in all IE versions using only javascript (this is because getting the full file path is seen, by them, as a potential security risk). I could ask my users to add the site to the trusted sites, but you don't usually ask users to tamper with those kinds of settings (not to mention the quickest way to make your site seem suspicious to users is to ask them to directly add your site to the trusted sites list. That's like sending an email and asking for a password. "Just trust me! I'm soooo safe!" :)
Short answer: Use the jQuery form plugin; it supports AJAX-like form submits even for file uploads.
tl;dr
Thumbnail preview on popular websites is usually done in a number of steps; basically, the website does the following:
Upload the RAW image
Resize and optimise the image for data storage
Generate a temporary link to that file (usually stored in a server maintained HTTP session)
Send it back to the user, to enable a 'preview'
Actually store the image after user confirms the image
A few bad solutions are:
Most of the modern browsers have options to enable script access to local files, but usually you don't ask your users to tinker with those low-level settings.
Earlier Internet Explorer (ah... yes, it's a shame) and ancient versions of modern browsers will expose the full file path by reading the 'value' of the file input box, which you can use directly to generate an <img> tag. (Now it is replaced by some c:/fakepath/... thing.)
Use Adobe Flash to mimic the file selection panel; it can properly read local files. But passing the data into JavaScript is another topic...
Hope these help. ;)
UPDATE
I actually came across a situation that required a preview before uploading, so I'd like to also put it here. As far as I can recall, there were no transitional versions of modern browsers that masked the real file path but did not yet implement FileReader, but feel free to correct me if so. This solution should cater to most browsers, as long as they are supported by jQuery.
// 1. Listen to change event
$(':file').change(function() {
    // 2. Check if it has the FileReader class
    if (!this.files) {
        // 2.1. Old enough to assume a real path
        setPreview(this.value);
    }
    else {
        // 2.2. Read the file content.
        var reader = new FileReader();
        reader.onload = function() {
            setPreview(reader.result);
        };
        reader.readAsDataURL(this.files[0]);
    }
});

function setPreview(url) {
    // Do preview things.
    $('.preview').attr('src', url);
}
