I'm using an UploadProcessor to block specific files from being uploaded into the Media Library.
Everything works and I can see Sitecore's alert message, but Sitecore's error message is not very user-friendly: "One or more files could not be uploaded. See the Log file for more details."
So I'd like to add an extra alert box for users. Below is my code, but the JavaScript is not working.
Some people suggested I use SheerResponse, but the Sitecore documentation mentions that:
The uiUpload pipeline is run not as part of the Sheer event, but as part of the form loading process in response to a post back. This is because the uploaded files are only available during the “real” post back, and not during a Sheer UI event. In this sense, the uiUpload pipeline has not been designed to provide UI. In order to provide feedback to a User, the processor should resort to some trick which emits the JScript code.
http://sdn.sitecore.net/Articles/Media/Prevent%20Files%20from%20Uploading/Pipeline%20upload.aspx
Do you have any idea how to implement the alert box?
The Upload control in the Media Library uses Flash to upload the files. As part of this upload process, the file sizes are checked using JavaScript and a client-side validation is made before the upload.
There are a number of changes you need to make. I'm just going to list them here; you can find all the code in my GitHub Gist:
https://gist.github.com/jammykam/54d6af46593fa3b827b4
1) Extend and update the MediaFolder.js file to check the file size against the image size limit ONLY if the extension is one specified in config:
if (file.size > this.uploadLimit() || this.uploadImageLimitReached(file)) {
...
}
2) Update the MediaFolder.xml page to include the above JS. Amend the code-beside, inheriting from Sitecore.Shell.Applications.Media.MediaFolder.MediaFolderForm and overriding OnLoad and OnFilesCancelled, to render the restricted extensions and max image size settings, so these are passed to the JavaScript and a friendly alert can be displayed:
settings.Add("uploadImageLimit", ((long)System.Math.Min(ImageSettings.MaxImageSizeInDatabase, Settings.Runtime.EffectiveMaxRequestLengthBytes)).ToString());
settings.Add("uploadImageRestrictedExtensions", ImageSettings.RestrictedImageExtensions);
3) Update the Attach.xaml.xml code-beside to check the image size, inheriting from Sitecore.Shell.Applications.FlashUpload.Attach.AttachPage and overriding the OnQueued method:
if (ImageSettings.IsRestrictedExtension(filename) && num > maximumImageUploadSize)
{
    string text = Translate.Text("The image \"{0}\" is too big to be uploaded.\n\nThe maximum image size that can be uploaded is {1}.", new object[] { filename, MainUtil.FormatSize(maximumImageUploadSize) });
    this.WarningMessage = text;
    SheerResponse.Alert(text, new string[0]);
}
else
{
    base.OnQueued(filename, lengthString);
}
4) Add a config include with the new settings.
<setting name="Media.MaxImageSizeInDatabase" value="1MB" />
<setting name="Media.RestrictedImageExtensions" value=".jpg|.jpeg|.png|.gif|.bmp|.tiff" />
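For reference, here is a hedged sketch of what a helper like the ImageSettings class referenced above might look like, reading the two settings shown; the names are taken from the code snippets and the authoritative version is in the Gist:

using System.Linq;
using Sitecore.Configuration;

// Sketch only - the real ImageSettings helper lives in the Gist.
public static class ImageSettings
{
    // Parses the "1MB"-style setting into bytes (StringUtil.ParseSizeString
    // is Sitecore's helper for size strings).
    public static long MaxImageSizeInDatabase
    {
        get { return Sitecore.StringUtil.ParseSizeString(Settings.GetSetting("Media.MaxImageSizeInDatabase", "1MB")); }
    }

    public static string RestrictedImageExtensions
    {
        get { return Settings.GetSetting("Media.RestrictedImageExtensions", string.Empty); }
    }

    // True if the file's extension appears in the pipe-separated restricted list.
    public static bool IsRestrictedExtension(string filename)
    {
        string extension = (System.IO.Path.GetExtension(filename) ?? string.Empty).ToLowerInvariant();
        return RestrictedImageExtensions.ToLowerInvariant().Split('|').Contains(extension);
    }
}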
You can still (and should) keep the pipelines in place, but note that the "Restricted Extension" config setting from my previous answer has now changed (into a single setting instead of passing it into the pipeline). The Gist contains the updated code.
NOTE that I have tested this using Sitecore 7.2 rev 140526, so the base code is taken from there. If you are using a different version then you should check that the base C#, JS and XML code matches what I have provided. The code is commented to show you what has changed.
The above works in the Content Editor; it does not work in the Page Editor, which in Sitecore 7.2+ uses SPEAK dialogs, and it looks like those use a different set of pipelines. That would need more investigation (raise another question, and specify which version of Sitecore you are using).
I'm trying to add support for dragging and dropping images into the editor.
I wanted to update CKEditor anyway, so after some reading I created a new CKEditor download via the package builder, including the uploadimage plugin:
http://ckeditor.com/addon/uploadimage
When I try to drag and drop an image into it, I see a green bar saying the upload was successful, and for less than a second I see the image in the editor.
Then a red bar shows up, saying:
'HTTP error occurred during file upload (404: File not found).'
I have this in the ckeditor config.js:
config.uploadUrl = '/upload/';
I assumed this was the path where the images are uploaded to. The folder is created, and for testing I have set its permissions to 777.
As this is not working, I assume I did something wrong here, or that I'm missing something in the configuration, but from the documentation I don't see what it might be.
I hope someone can point me in the right direction.
On a side note, I do not need/want a file browser. A little context: this editor will be used by logged-in users. I do not want one user to be able to see images from another, and the input text is only used once, so there is no need to find earlier images; for this particular use the user will only use this editor once, for setup. This is why I thought the uploadimage plugin would fit my needs best.
Kind regards,
Martijn
Based on the documentation of CKEditor:
The uploadUrl setting contains the location of a script that handles file uploads of pasted and dragged images
This is not the folder on your server that you want to upload the files to. It should point to the script that handles the submitted file, and that script is what saves the file to the relevant folder on the server.
The plugin only covers the client side (what happens in the browser), not the server side (which you need to implement yourself).
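For illustration, here is a minimal sketch of such a handler, assuming a Node/Express backend with the multer middleware (your stack may well differ); the JSON shape is what CKEditor's upload plugins expect, and the file arrives in a form field named "upload":

var express = require('express');
var multer = require('multer');

var app = express();
var upload = multer({ dest: __dirname + '/upload/' });

// Serve the uploaded files back so the returned URLs resolve.
app.use('/upload', express.static(__dirname + '/upload'));

// config.uploadUrl should point here, not at the target folder.
app.post('/upload/', upload.single('upload'), function (req, res) {
    if (!req.file) {
        // Errors reuse the same JSON envelope with uploaded: 0.
        return res.json({ uploaded: 0, error: { message: 'No file received.' } });
    }
    // Tell the editor the upload succeeded and where to fetch the image from.
    res.json({
        uploaded: 1,
        fileName: req.file.originalname,
        url: '/upload/' + req.file.filename
    });
});

app.listen(3000);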
The problem:
I work on an internal tool that allows users to upload images - and then displays those images back to them and others.
It's a Java/Spring application. I have the benefit of only needing to worry about IE11 exactly and Firefox v38+ (Chrome v43+ would be a nice-to-have).
After first developing the feature, it seems that users can just create a text file like:
<script>alert("malicious code here!")</script>
and save it as "maliciousImage.jpg" and upload it.
Later, when that image is displayed inside image tags like:
<img src="blah?imgName=foobar" id="someImageID">
actualImage.jpg displays normally, and maliciousImage.jpg displays as a broken link - and most importantly no malicious content is interpreted!
However, if the user right-clicks on this broken link and clicks 'view image'... bad things happen.
The browser does 'content sniffing', a concept which was new to me: it detects that 'maliciousImage.jpg' is actually a text file and very kindly renders it as HTML without hesitation. Any script tags are passed to the JavaScript interpreter and, as you can imagine, we don't want this.
What I've tried so far
In short, every possible combination of response headers I can think of to prevent the browser from content sniffing. All the answers I've found here on Stack Overflow, and other docs, imply that setting the Content-Type header should prevent most browsers from content sniffing, and setting X-Content-Type-Options should prevent some versions of IE.
I'm setting X-Content-Type-Options to nosniff, and I'm setting the response content type. The docs I've read lead me to believe this should stop content sniffing:
response.setHeader("X-Content-Type-Options", "nosniff");
response.setContentType("image/jpeg"); // image/jpeg is the registered MIME type (image/jpg is not)
I'm intercepting the response and these headers are present, but seem to have no effect on how the malicious content is processed...
I've also tried detecting which images are and are not malicious at the point of upload, but I'm quickly realizing this is very much non-trivial...
End goal:
Naturally, any output at all for images that aren't really images (garbled nonsense, an unhandled exception, etc.) would be better than executing the text file as HTML/JavaScript in the clear, but displaying any malicious HTML as escaped/CDATA'd plain text would be ideal... though maybe a bit impractical.
So I ended up fixing this problem but forgot to answer my own question:
Step 1: blocking invalid images
To get a quick fix out, I simply added some fairly blunt code that checked whether an image was actually an image, during upload and before serving it, using the ImageIO lib:
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import javax.imageio.ImageIO;
//......
Image img = attBO.getImage(imgId); // domain object holding the uploaded bytes
InputStream x = new ByteArrayInputStream(img.getData());
BufferedImage s;
try {
    // ImageIO.read() returns null when no registered reader recognises the
    // bytes, so getWidth() below throws for anything that isn't a real image.
    s = ImageIO.read(x);
    s.getWidth();
} catch (Exception e) {
    throw new myCustomException("Invalid image");
}
Now, initially I'd hoped that would fix my problem, but in reality it wasn't that simple; it just made generating a payload more difficult.
While this would block:
<script>alert("malicious code here!")</script>
It's very possible to generate a valid image that's also an XSS payload - it just takes a little more effort...
Step 2: framework silliness
It turned out there was an entire post-processing workflow that I'd never touched, which did things such as append tokens to response bodies and use additional frameworks to decorate responses with CSS, headers, footers, etc.
This meant that, although the controller was explicitly returning image/png, the post-processing was taking that bytestream and wrapping it in a header and footer to form a fully-qualified 'view' - and this view would always have the content type text/html, so the image was never displayed correctly.
The crux of this problem was that my controller was directly returning an image, in a RESTful fashion, while the rest of the framework was built to handle controllers returning fully-fledged views.
So I had to step through this workflow and create exceptions for the controllers in my code that returned raw data rather than views.
For example, with SiteMesh it was just an exclude (as always, a simple fix once I understood the problem...):
<decorators defaultdir="/WEB-INF/decorators">
    <excludes>
        <pattern>*blah.ctl*</pattern>
    </excludes>
    <decorator name="foo" page="myDecorator.jsp">
        <pattern>*</pattern>
    </decorator>
</decorators>
...and then some other bespoke post-invocation interceptors.
Step 3: Content negotiation
Now I finally got to the stage where only the image bytes were being served and no view was being specified or explicitly generated.
A Spring feature called 'content negotiation' kicked in. It tries to reconcile the 'Accept' header of the request with the 'message converters' it has on hand to produce such responses.
Because Spring by default doesn't have a message converter that produces image/png responses, it was falling back to text/html - and I was still seeing problems.
Now, were I using a newer Spring (the produces attribute of @RequestMapping has been available since Spring 3.1), I could've simply added:
@RequestMapping(produces = "image/png")
to my controller - simple fix...
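A hedged sketch of what that would look like (the mapping and parameter names are assumptions, not the original code):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class ImageController {
    // 'produces' pins the response content type, so content negotiation no
    // longer falls back to text/html; Spring's default
    // ByteArrayHttpMessageConverter writes the byte[] body.
    @RequestMapping(value = "/image", produces = "image/png")
    @ResponseBody
    public byte[] serveImage(@RequestParam("imgName") String imgName) throws IOException {
        // Illustration only: load the bytes however your app stores them,
        // and validate imgName before touching the file system.
        return Files.readAllBytes(Paths.get("/images", imgName));
    }
}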
Step 4: Legacy dependencies
...but because I only had Spring 3.0.5 (and couldn't upgrade it) I had to try other things.
I tried registering new message converters, but that was a headache; I also tried adding a new post-method interceptor to simply change the content type back to image/png, but that was a hacky headache too.
In the end I just exposed the request/response in the controller and wrote my image directly to the response body, circumventing Spring's content negotiation altogether, as sketched below.
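A minimal sketch of that workaround (Spring 3-friendly; loadImageBytes() is a hypothetical, validated lookup, not the original code):

import java.io.IOException;
import javax.servlet.http.HttpServletResponse;

// Returning void with HttpServletResponse injected marks the response as
// handled, so no view resolution or content negotiation runs.
@RequestMapping("/image")
public void serveImage(@RequestParam("imgName") String imgName,
                       HttpServletResponse response) throws IOException {
    byte[] data = loadImageBytes(imgName); // hypothetical lookup
    response.setContentType("image/png");
    response.setHeader("X-Content-Type-Options", "nosniff");
    response.setContentLength(data.length);
    response.getOutputStream().write(data);
    response.getOutputStream().flush();
}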
....and finally my image was served as an image and displayed as an image - and no injected code was executed!
That sounds odd, because it works perfectly elsewhere. Are you sure the X-Content-Type-Options header is present in the responses?
Here is a demo I built a while back, where I have a file that is simultaneously valid HTML, GIF and JavaScript. As you can see, it first loads as HTML, but then loads itself as an image and as a script (which executes):
http://research.insecurelabs.org/content-sniffing/gifjs.html
However if you load it using the "X-Content-Type-Options: nosniff" header, the script no longer executes:
http://research.insecurelabs.org/content-sniffing/nosniff/gifjs.html
Btw, the image renders properly in FF/IE, but not in Chrome.
Here is a demo, where I attempted what you described:
http://research.insecurelabs.org/content-sniffing/stackexchange.html
The first image is without nosniff and the second is with it, and it seems to work as intended: the second one does not run the script when opened with "view image".
Edit:
Firefox doesn't seem to support X-Content-Type-Options: nosniff.
So you should also add a header like "Content-Disposition: attachment; filename=image.gif" to the images. The image will load normally when referenced through an image tag, but if the URL is opened directly you will force a download instead of showing the image in the browser.
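In the asker's Java code that would be something like:

response.setHeader("Content-Disposition", "attachment; filename=image.gif");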
Example: http://research.insecurelabs.org/content-sniffing/attachment/
adeneo is pretty much spot on. You should use whatever image library you want to check whether the uploaded file is a valid file for the type it claims to be. Anything the client sends can be manipulated.
I have had trouble researching or otherwise trying to figure out how (if it's even possible) to get binary image data using JavaScript/jQuery from an HTML input element of type file.
I'm using WebMatrix (C#), but it may not be necessary to know that, if this question can be answered using JavaScript/jQuery alone.
I can take the image, save it in the database (as binary data), then later show the pic on the page from the binary data, after posting. This does, however, leave me without a pic preview before uploading, for which I am almost certain I must use AJAX.
Again, this may not even be possible, but as long as I can get the binary image data, I believe I can push it to the server with AJAX and process the image the same way I would if I were taking it from a database (note that I don't save the image files themselves using a GUID and all that; I just save the binary data).
If there is an easier way to show a pic preview using the input element, that would work fine too, of course, as the whole idea behind me trying to do this is to show a pic preview before they hit the submit button (or at least create that illusion).
UPDATE
I do not consider this a duplicate of another question because my real question is:
How can I get image data from an input of type "file" with JavaScript/jQuery?
If I can just get the data (in the right format) back to the server, I should be able to work with it there, and then return it with AJAX (although I am absolutely no AJAX expert).
There is, according to the research I have done, NO WAY to get picture previews in all IE versions using only JavaScript (because exposing the full file path is seen by them as a potential security risk). I could ask my users to add the site to their trusted sites, but you don't usually ask users to tamper with those kinds of settings (not to mention the quickest way to make your site seem suspicious to users is to ask them to add it to the trusted sites list; that's like sending an email and asking for a password. "Just trust me! I'm soooo safe!" :)
Short answer: use the jQuery Form plugin; it supports AJAX-like form submits even for file uploads.
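For example, a minimal sketch (the form id and the response field are assumptions):

// jquery.form.js submits the form, file input included, via AJAX.
$('#uploadForm').ajaxForm({
    dataType: 'json',
    success: function (resp) {
        // e.g. the server sends back the temporary link from the steps below
        $('.preview').attr('src', resp.previewUrl);
    }
});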
tl;dr
Thumbnail previews on popular websites are usually done in a number of steps; basically the website does the following:
Upload the RAW image
Resize and optimise the image for data storage
Generate a temporary link to that file (usually stored in a server-maintained HTTP session)
Send it back to the user, to enable a 'preview'
Actually store the image after the user confirms it
A few bad solutions are:
Most modern browsers have options to enable script access to local files, but you don't usually ask your users to tinker with those low-level settings.
Early Internet Explorer (ah... yes, it's a shame) and ancient versions of modern browsers expose the full file path by reading the 'value' of the file input box, from which you can directly generate an <img> tag. (Nowadays it is replaced by some c:/fakepath/... thing.)
Use Adobe Flash to mimic the file selection panel; it can properly read local files. But passing the data into JavaScript is another topic...
Hope this helps. ;)
UPDATE
I actually came across a situation that requires a preview before uploading, so I'd like to also put it here. As far as I can recall, there were no transitional versions of modern browsers that masked the real file path but did not yet implement FileReader, but feel free to correct me if so. This solution should cater for most browsers, as long as they are supported by jQuery.
// 1. Listen to the change event.
$(':file').change(function() {
    // 2. Check for the File API (this.files is undefined in old browsers).
    if (!this.files) {
        // 2.1. Old enough to assume the value is a real path.
        setPreview(this.value);
    }
    else {
        // 2.2. Read the file content.
        var reader = new FileReader();
        reader.onload = function() {
            setPreview(reader.result);
        };
        reader.readAsDataURL(this.files[0]);
    }
});

function setPreview(url) {
    // Do preview things.
    $('.preview').attr('src', url);
}
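The snippet assumes markup along these lines (the class name is arbitrary):

<input type="file">
<img class="preview" alt="preview">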
Background
I'm working on an internal project that basically can generate a video on the client side, but since there are no JavaScript video encoders I'm aware of, I'm just exporting each frame individually. I need to avoid uploading to the server; this is all happening on the client side.
Implementation
I'm using FileSaver.js (more specifically, Chrome's webkit FileSystem API) to save a large number of PNGs generated by an HTML5 canvas. I set Chrome to automatically download to a specific folder, so when I hit 'Save' it just takes off and saves something like 20 images per second. This works perfectly for my purposes.
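For context, saving a single frame with FileSaver.js looks roughly like this (canvas.toBlob() may need a polyfill in older Chrome, so treat it as a sketch):

function saveFrame(canvas, name) {
    canvas.toBlob(function (blob) {
        saveAs(blob, name); // FileSaver.js entry point
    }, 'image/png');
}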
If I could use JSZip to compress all these frames into one file before offering it to the client to save, I would, but I haven't even tried because there's just no way the browser will have enough memory to generate ~8000 640x480 PNGs and then compress them.
Problem
The problem is that after a certain number of images, every file downloaded is empty. Chrome even starts telling me in the download bar that the file is 0 bytes. Repeated on the same project with the same export settings, the empty saves start at exactly the same point. For example, with one project I can save the first 5494 frames before it chokes. (I know this is an insanely large number, but I can't help that.) I tried setting a 10 ms delay between saves, but that didn't have any effect. I haven't tried a larger delay because exporting takes a very long time as it is.
I checked the blob.size and it's never zero. I suspect it's exceeding some quota, but there are no errors thrown; it just silently fails to either write to the sandbox or copy the file to the user-specified location.
Questions
How can I detect these empty saves? Prevent them? Is there a better way to do this? Am I just screwed?
EDIT: Actually, after debugging FileSaver.js, I realized that it's not even using webkitRequestFileSystem; it returns when it gets here:
if (can_use_save_link) {
    object_url = get_object_url(blob);
    save_link.href = object_url;
    save_link.download = name;
    if (click(save_link)) {
        filesaver.readyState = filesaver.DONE;
        dispatch_all();
        return;
    }
}
So, it looks like it's not even using the FileSystem API, and therefore I have no idea how to empty the storage before it's full.
EDIT 2: I tried moving the "if (can_use_save_link)" block to inside the "writer.onwriteend" function, and changing it to this:
if (can_use_save_link) {
    save_link.href = file.toURL();
    save_link.download = name;
    click(save_link);
} else {
    target_view.location.href = file.toURL();
}
The result is I'm able to save all 8260 files (about 1.5GB total) since it's now using storage with a quota. Before, the files didn't show up in the HTML5 FileSystem because I assume you didn't need to put them there if the anchor element supported the 'download' attribute.
I was also able to comment out the code that appends ".download" to the filename, and I had to provide an empty anonymous function as an argument to both instances of "file.remove()".
Use JSZip; it won't use too much memory if you disable compression (which is the default). To manually disable compression anyway, make sure to pass compression: "STORE" when calling zip.generate().
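A rough sketch of that, using the JSZip 2.x-era API the zip.generate() call refers to (renderFrame() is a hypothetical stand-in for however each canvas frame is produced):

var zip = new JSZip();
for (var i = 0; i < frameCount; i++) {
    // Canvas data URLs are base64-encoded PNGs; strip the prefix and
    // store the payload as base64.
    var dataUrl = renderFrame(i).toDataURL('image/png');
    zip.file('frame' + i + '.png', dataUrl.split(',')[1], { base64: true });
}
// compression: "STORE" keeps the frames uncompressed (the default anyway).
var blob = zip.generate({ type: 'blob', compression: 'STORE' });
saveAs(blob, 'frames.zip'); // FileSaver.js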
I ended up modifying FileSaver.js (see "EDIT 2" in the original post).
I've got a site where individual pages might require some JavaScript or CSS files hooked into their heads. I'm trying to keep everything client-side when it comes to managing this process, rather than getting on the FTP and sorting everything out in the code, so I need to be able to upload CSS and JS files.
I've got CCK FileField up and running, and it works with CSS files, but it refuses to upload .js files. Instead it seems to treat every .js file as ".js.txt", and the file appears on the server as thisismyfile.js.txt.
Not ideal...
Does anyone know how to work around this? Is it a MIME type problem with Drupal or the server, or is Drupal set up to avoid script uploads and n00b hack attacks?
Once the files are uploaded I intend to use PHP mode on the page or node to call drupal_add_css and drupal_add_js.
Looking at the field_file_save_file() function in field_file.inc from the FileField module, you can find the following snippet:
// Rename potentially executable files, to help prevent exploits.
if (preg_match('/\.(php|pl|py|cgi|asp|js)$/i', $file->filename) && (substr($file->filename, -4) != '.txt')) {
    $file->filemime = 'text/plain';
    $file->filepath .= '.txt';
    $file->filename .= '.txt';
}
So yes, it's a 'security thing', as Jeremy guessed.
You could patch that RegEx for an immediate 'fix', but that would remove this otherwise useful security check completely for all filefields used on the site.
So a more specific workaround might be a better approach. Since you want to add the files via drupal_add_js() calls from code anyway, you might as well do the renaming there, adding some kind of verification to make sure you can 'trust' the file (e.g. who uploaded it).
Edit: Concerning options to rename (and alternatives) when calling drupal_add_js():
For renaming the file, look into the file_move() function. A problem with this is that it won't update the corresponding entry in the files table, so you would have to do that as well if the move operation succeeded. (The filefield just stores the 'fid' of the corresponding entry in the files table, so you'd need to find it there by 'fid' and change the 'filename', 'filepath' and 'filemime' entries according to your rename/move.)
Alternatively, you could just load the content of the *.js.txt file and add that string with the 'inline' option of drupal_add_js(), as sketched below. This would be less 'elegant' and could be a performance hit, but if those are not important criteria in your specific case, it is less trouble.
Yet another option would be just passing the *.js.txt file as is to drupal_add_js(), ignoring the 'wrong' extension. A short local test showed that this works (at least in Firefox). This might be the 'least effort' solution, but it would need some additional testing concerning different browsers' handling of 'misnamed' JS files.
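For the 'inline' option, the call would be something along these lines (Drupal 6 API; $file is assumed to be the loaded {files} record for the upload):

// Read the misnamed *.js.txt upload and emit its content as an inline script.
drupal_add_js(file_get_contents($file->filepath), 'inline');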
Allowing Drupal to upload JavaScript files would be a security risk, which is also why it doesn't allow you to do it and instead appends the .txt extension. The reason is that .js files are treated as executable along with php, pl, py, cgi and asp. If Drupal uploaded those files to the server as-is, it would be possible for evildoers to upload a file and run it, doing all kinds of nasty things on your server; basically anything is possible. The best thing would be to find a different, secure way of uploading the files.
I had a similar need, and found a way to get around the security check by first changing the 'allow_insecure_uploads' variable value by running this line of code in your hook_install:
variable_set('allow_insecure_uploads', 1);
Then add this function in a module:
/**
 * Implementation of FileField's hook_file_insert().
 */
function MODULE_NAME_file_insert(&$file) {
    // Look for files with the extension .js.txt and rename them to just .js.
    if (substr($file->filename, -7) == '.js.txt') {
        $file_path = $file->filepath;
        $new_file_path = substr($file_path, 0, strlen($file_path) - 4);
        // file_move() updates $file_path by reference to the actual destination.
        file_move($file_path, $new_file_path);
        $file->filepath = $file_path;
        $file->filename = substr($file->filename, 0, strlen($file->filename) - 4);
        $file->filemime = file_get_mimetype($file->filename);
        $file->destination = $file->filepath;
        $file->status = FILE_STATUS_TEMPORARY;
        drupal_write_record('files', $file);
    }
}
What this does: in the hook_insert call it checks whether a file has the extension ".js.txt". If it does, it moves it to a new location and renames it. This runs after the security check, so it's OK. I don't think you need to worry about the cache clear deleting your JS files as long as you don't put them in the files/js directory. Create your own directory for your module and you should be OK.
I faced this situation when I wanted to allow .js files to be uploaded as-is (without .txt and with the 'application/javascript' MIME type) for a specific field. Also, I didn't want to alter Drupal core... of course.
So I needed to create a module implementing hook_file_presave(). This also works for the Multiupload File Widget, since its hook is called on file_save().
Note that you would have to replace MYMODULE_NAME and MYFIELD_NAME with your own values.
function MYMODULE_NAME_file_presave($file) {
    // Bypass the secure file extension handling of .js for the MYFIELD_NAME field only.
    if ((isset($file->source) && strpos($file->source, "MYFIELD_NAME") !== FALSE) && substr($file->filename, strlen($file->filename) - 7) == ".js.txt") {
        // Define the new URI and keep the previous one.
        $original_uri = $file->uri;
        $new_uri = substr($file->destination, 0, -4);
        // Alter the file object.
        $file->filemime = 'application/javascript';
        $file->filename = substr($file->filename, 0, -4);
        $file->destination = file_destination($new_uri, FILE_EXISTS_RENAME);
        $file->uri = $file->destination;
        // Move the file (to remove .txt).
        file_unmanaged_move($original_uri, $file->destination);
        // Display a message saying so.
        drupal_set_message(t('Security bypassed for .js for this specific field (%f).', array('%f' => $file->filename)));
    }
}
Drupal also "munges" JavaScript filenames. To prevent Drupal from automatically adding underscores to the filename, there is a hidden variable, allow_insecure_uploads, that is checked before the filename is "munged". Setting it to 1 solved the issue for me (along with altering the regex in includes/file.inc), e.g.:
variable_set('allow_insecure_uploads', 1);
I hate hacking core, but this seems like a poor design to me. JavaScript files are not server-side scripts like php, py, pl, cgi and asp.
You can use the allowed file extensions settings to prevent PHP and other server-side scripts from being uploaded.
See:
http://api.drupal.org/api/function/file_munge_filename/6
So uploading .js files to the files directory is pretty much impossible.
Even if you manage to get .js files uploaded cleanly, these files will get deleted when the cache is cleared.
Any js files that live inside the files directory will be deleted whenever the drupal_clear_js_cache() function is executed.
http://api.drupal.org/api/function/drupal_clear_js_cache/6
So Drupal sees .js files living in the file uploads directory as temporary.
Now I understand why they are appending ".txt": it is to prevent the files from being removed when the cache is cleared.
So as a compromise I guess I will just upload .js files manually (via FTP) to the /misc folder. :(