I'm creating an ASP.NET form with a FileUpload control which will then email the details of the form and the file to another admin. I want to ensure this is secure (for the server and the recipient). The attachment should be a CV, so I will restrict it to typical text documents.
From what I can tell, the best bet is to check that the file extension or MIME type is of that kind and check it against the "magic numbers" to verify that the extension hasn't been changed (I've sketched roughly what I mean after my code below). I'm not too concerned about how to go about doing that, but I want to know if that really is enough.
I'd also be happy to use a third party product that takes care of this and I've looked at a couple:
blueimp jQuery file upload
http://blueimp.github.io/jQuery-File-Upload/
and cutesoft ajaxuploader
http://ajaxuploader.com/Demo/
But the blueimp one still seems to require custom server validation (I guess, being jQuery, it only handles client-side validation), and the .NET one checks that the MIME type matches the extension, but I thought the MIME type followed the extension anyway.
So,
Do I need to worry about server security when the file is added as an attachment but not saved?
Is there a plugin or control that takes care of this well?
If I need to implement something for server validation myself is matching the MIME-type to the "magic numbers" good enough?
I'm sure nothing is 100% bulletproof, but file upload is pretty common stuff and I assume most implementations are "safe enough" - but how!?
If it's relevant, here is my basic code so far
<p>Please attach your CV here</p>
<asp:FileUpload ID="fileUploader" runat="server" />
and on submit
MailMessage message = new MailMessage();
if (fileUploader.HasFile)
{
    try
    {
        if (fileUploader.PostedFile.ContentType == "text")
        {
            // check magic numbers indicate same content type... if(){}
            if (fileUploader.PostedFile.ContentLength < 102400)
            {
                string fileName = System.IO.Path.GetFileName(fileUploader.PostedFile.FileName);
                message.Attachments.Add(new Attachment(fileUploader.PostedFile.InputStream, fileName));
            }
            else
            {
                // show a message saying the file is too large
            }
        }
        else
        {
            // show a message saying the file is not a text based document
        }
    }
    catch (Exception ex)
    {
        // display ex.Message;
    }
}
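For reference, this is roughly the magic-number check I have in mind (the usings would go at the top of the code-behind and the members inside the class); the signature bytes for PDF, legacy .doc (OLE2) and .docx (ZIP container) are values I looked up, so treat them as an assumption to double-check rather than an authoritative list:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Known signatures, keyed by extension (assumed values, verify before relying on them).
private static readonly Dictionary<string, byte[][]> Signatures = new Dictionary<string, byte[][]>
{
    { ".pdf",  new[] { new byte[] { 0x25, 0x50, 0x44, 0x46 } } },                         // "%PDF"
    { ".doc",  new[] { new byte[] { 0xD0, 0xCF, 0x11, 0xE0, 0xA1, 0xB1, 0x1A, 0xE1 } } }, // OLE2 compound file
    { ".docx", new[] { new byte[] { 0x50, 0x4B, 0x03, 0x04 } } }                          // DOCX is a ZIP container
};

private static bool MatchesSignature(Stream stream, string extension)
{
    byte[][] candidates;
    if (!Signatures.TryGetValue(extension.ToLowerInvariant(), out candidates))
        return false; // unknown extension: reject rather than guess

    byte[] header = new byte[8];
    int read = stream.Read(header, 0, header.Length);
    stream.Position = 0; // rewind so the attachment still gets the whole file

    foreach (byte[] sig in candidates)
    {
        if (read >= sig.Length && header.Take(sig.Length).SequenceEqual(sig))
            return true;
    }
    return false;
}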
A server can never be 100% secure, but we should do our best to minimize the risk of an incident. I should say at this point that I am not an expert, I am just a computer science student. So, here is the approach I would follow in such a case. Please comment with any additional tips you can give.
Generally speaking, to have a secure form, all client inputs must be checked and validated. Any information that does not originate from our system is not trusted.
Inputs from the client in our case:
the file's name (both the name itself and the extension)
the file's content
Extension
We don't really care about the MIME type; that is info for a web server. We care about the file extension, because that is the indicator for the OS on how to run/read/open a file. We should support only specific file extensions (whatever your admin's PC can handle); there is no point supporting unknown file types.
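A minimal sketch of that whitelist idea, in C# since the question is ASP.NET; the allowed list here is only an example, adjust it to whatever the admin's machine can actually open:

using System;
using System.Collections.Generic;
using System.IO;

// Whitelist of extensions we are willing to forward (example values).
private static readonly HashSet<string> AllowedExtensions =
    new HashSet<string>(StringComparer.OrdinalIgnoreCase) { ".pdf", ".doc", ".docx", ".txt", ".rtf" };

private static bool HasAllowedExtension(string fileName)
{
    string extension = Path.GetExtension(fileName);
    return !string.IsNullOrEmpty(extension) && AllowedExtensions.Contains(extension);
}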
Name (without the extension)
The name of the file is not always valuable info. When I deal with file uploading I usually rename it (set it) to an id (a username, a timestamp, a hash, etc.). If the name is important, always check/trim it; if you only expect letters or numbers, delete all other characters (I avoid leaving "/", "\" and "." because they can be used to inject paths).
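For illustration, a small sketch of that renaming idea; the timestamp-plus-GUID scheme and the character filter are my own choices, not the only way to do it:

using System;
using System.IO;
using System.Text.RegularExpressions;

private static string BuildSafeFileName(string originalName)
{
    // Keep only the extension from the client; generate the rest ourselves.
    string extension = Path.GetExtension(originalName) ?? string.Empty;

    // Strip anything that is not a letter, digit or dot from the extension.
    extension = Regex.Replace(extension, @"[^A-Za-z0-9.]", string.Empty);

    // Server-generated name: timestamp plus a random id, no client input at all.
    string generated = DateTime.UtcNow.ToString("yyyyMMddHHmmss") + "_" + Guid.NewGuid().ToString("N");

    return generated + extension;
}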
So now we suppose that the generated file name is safe.
Content
When you support unstructured files, you simply cannot validate the file's content. So let an expert program do this for you: scan the files with an antivirus. Call the antivirus from the console (carefully; use mechanisms that avoid injection). Many antivirus programs can scan the contents of zips too (a malicious file sitting in a folder on your server is not a good idea). Always keep the scan program updated.
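A rough sketch of shelling out to a scanner without going through a shell (so a crafted file name cannot inject commands); I use ClamAV's clamscan purely as an example, and its exit-code convention (0 = clean, 1 = infected) is something to verify for whatever scanner you actually install:

using System.Diagnostics;

private static bool FileLooksClean(string pathToScan)
{
    // pathToScan should be the server-generated path from the renaming step above,
    // never raw client input.
    var info = new ProcessStartInfo
    {
        FileName = "clamscan",                // example scanner; adjust for your environment
        Arguments = "\"" + pathToScan + "\"",
        UseShellExecute = false,              // no shell involved, just the scanner process
        RedirectStandardOutput = true,
        CreateNoWindow = true
    };

    using (var process = Process.Start(info))
    {
        process.StandardOutput.ReadToEnd();   // drain output so the process can exit
        process.WaitForExit();
        return process.ExitCode == 0;         // clamscan: 0 = clean, 1 = infected (verify for your scanner)
    }
}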
In the comments I suggested zipping the file, in order to avoid any automatic execution on the admin's machine and on the server. The admin's machine's antivirus can then handle it before unzipping.
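A sketch of that zipping idea with System.IO.Compression (available from .NET 4.5; on an older framework you would need a third-party zip library):

using System.IO;
using System.IO.Compression;

private static MemoryStream ZipForAttachment(Stream uploadedFile, string safeFileName)
{
    var zipped = new MemoryStream();
    // leaveOpen: true so we can rewind and attach the stream afterwards.
    using (var archive = new ZipArchive(zipped, ZipArchiveMode.Create, leaveOpen: true))
    {
        ZipArchiveEntry entry = archive.CreateEntry(safeFileName);
        using (Stream entryStream = entry.Open())
        {
            uploadedFile.CopyTo(entryStream);
        }
    }
    zipped.Position = 0; // rewind before handing it to new Attachment(zipped, safeFileName + ".zip")
    return zipped;
}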
Some more tips: don't give the client more information than they need... don't let the client know where the files are saved, and don't let the web server access them for distribution if there is no need to. Keep a log of weird actions (slashes in filenames, files that are too big, names that are too long, warning extensions like "sh", "exe", "bat") and report to the admins by email if anything weird happens (it is good to know if your protections work).
All this creates server workload (and more potential holes), so you may want to count the number of files being scanned/checked at any given moment before accepting a new file upload request (that is where I would launch a DDoS attack).
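One simple way to put a ceiling on that is a counting semaphore around the scan; the limit of 4 concurrent scans below is arbitrary:

using System.Threading;

// Allow at most 4 uploads to be scanned at the same time (arbitrary limit).
private static readonly SemaphoreSlim ScanSlots = new SemaphoreSlim(4, 4);

private static bool TryScanUpload(string pathToScan)
{
    // Refuse immediately instead of queueing, so a flood of uploads cannot pile up work.
    if (!ScanSlots.Wait(0))
        return false; // tell the user to try again later

    try
    {
        return FileLooksClean(pathToScan); // the scanner call sketched above
    }
    finally
    {
        ScanSlots.Release();
    }
}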
A quick Google search turns up "Avast! For Linux - Command Line Guide"; I do not promote Avast, I am just showing it as an existing example.
Last but not least, you are not being paranoid; I manage a custom translation system that I coded, and spam and hack attacks have occurred more than once.
Some more thoughts: JavaScript running on a web page is only secure for the client's computer (thanks to the browser's security). We can use it to prevent invalid posts to the server, but this does not ensure that such requests will not be made, as JavaScript can be bypassed/edited.
So, all JavaScript solutions are only for a first validation (usually just to help the user correct mistakes) and to correctly set the form data.
Related
I'm using an UploadProcessor to block specific files from being uploaded into the Media Library.
Everything is working well and I can see Sitecore's alert message. But Sitecore's error message is not really user-friendly: "One or more files could not be uploaded. See the Log file for more details".
So, I'd like to add an extra alert box for users. Below is my code, but the JavaScript is not working.
Some people have suggested I use "SheerResponse", but the Sitecore documentation mentions that:
The uiUpload pipeline is run not as part of the Sheer event, but as part of the form loading process in response to a post back. This is because the uploaded files are only available during the “real” post back, and not during a Sheer UI event. In this sense, the uiUpload pipeline has not been designed to provide UI. In order to provide feedback to a User, the processor should resort to some trick which emits the JScript code.
http://sdn.sitecore.net/Articles/Media/Prevent%20Files%20from%20Uploading/Pipeline%20upload.aspx
Do you have any idea how to implement the alert box?
The Upload control in the Media Library uses Flash to upload the files. As part of this upload process, the file sizes are checked using JavaScript and client-side validation is performed before the upload.
There are a number of changes you need to make. I'm just going to list them here, you can find all the code in my Github Gists:
https://gist.github.com/jammykam/54d6af46593fa3b827b4
1) Extend and update the MediaFolder.js file to check the file size against the Image Size ONLY if the extension is one specified in config
if (file.size > this.uploadLimit() || this.uploadImageLimitReached(file)) {
...
}
2) Update the MediaFolder.xml page to include the above JS. Amend the code-beside, inheriting from Sitecore.Shell.Applications.Media.MediaFolder.MediaFolderForm and overriding OnLoad and OnFilesCancelled, to render the restricted extensions and max image size settings so these are passed to the JavaScript and a friendly alert is displayed.
settings.Add("uploadImageLimit", ((long)System.Math.Min(ImageSettings.MaxImageSizeInDatabase, Settings.Runtime.EffectiveMaxRequestLengthBytes)).ToString());
settings.Add("uploadImageRestrictedExtensions", ImageSettings.RestrictedImageExtensions);
3) Update Attach.xaml.xml codebeside to check image size, inheriting from Sitecore.Shell.Applications.FlashUpload.Attach.AttachPage and overriding OnQueued method:
if (ImageSettings.IsRestrictedExtension(filename) && num > maximumImageUploadSize)
{
string text = Translate.Text("The image \"{0}\" is too big to be uploaded.\n\nThe maximum image size that can be uploaded is {1}.", new object[] { filename, MainUtil.FormatSize(maximumImageUploadSize) });
this.WarningMessage = text;
SheerResponse.Alert(text, new string[0]);
}
else
{
base.OnQueued(filename, lengthString);
}
4) Add a config include with the new settings.
<setting name="Media.MaxImageSizeInDatabase" value="1MB" />
<setting name="Media.RestrictedImageExtensions" value=".jpg|.jpeg|.png|.gif|.bmp|.tiff" />
You can still (and should) keep the pipelines in place, but note that since my previous answer the "Restricted Extension" config setting has changed (into a single setting instead of passing it into the pipeline). The Gist linked above contains all the code.
NOTE that I have tested this using Sitecore 7.2 rev 140526, so the base code is taken from there. If you are using a different version then you should check the base C#, JS and XML code matches what I have provided. The code is commented to show you what has changed.
The above works in the Content Editor; it does not work in the Page Editor, which in Sitecore 7.2+ uses SPEAK dialogs and appears to use a different set of pipelines. That would need more investigation (raise another question, and specify which version of Sitecore you are using).
Trying to write to a text file with Adobe Acrobat Reader using AcroJS.
As a concept I understand how to use trusted functions in Acrobat, but when I tried to run the following example to save the PDF form under a different name using this.saveAs(...) (a different problem than the original), I received an error.
My question is twofold:
1- Why do I get "Security settings prevent access to this property or method" error and how do I get rid of it?
The trusted function in the JavaScript folder is as follows (copied off the web):
var mySaveAs = app.trustedFunction(function(cFlName)
{
    app.beginPriv();
    try {
        this.saveAs(cFlName);
    }
    catch (e) {
        app.alert("Error During Save " + e.message);
    }
    app.endPriv();
});
I am calling the trusted function from the document as follows, expecting a file with the name sample.pdf to be generated inside "C:/test":
if (typeof(mySaveAs) == "function")
{
    mySaveAs("/C/test/sample.pdf");
}
else
{
    app.alert("Missing Save Function");
}
2- How do I write to a text file? Here I want to extract some field values from the PDF form and write those into a text file (or XML)!
As you might have guessed, it's a security measure to prevent malicious scripts from causing havoc. You'll need to turn down the security settings. To do this, press Ctrl+K to open Preferences, go to the Enhanced Security tab and disable it.
For additional information on Enhanced Security, refer to: http://www.adobe.com/devnet-docs/acrobatetk/tools/AppSec/enhanced.html
As far as I know, there aren't any functions that will allow you to write arbitrary data to a text file or XML file. However, you have a couple of options:
Use Doc.exportAsText (text) and Doc.exportAsFDF (XML) to export data from carefully crafted fields. This isn't very straightforward and is a little awkward, but it works.
Use Net.HTTP.request or Net.SOAP to send data to an ad-hoc local web server (e.g. something simple running Python or PHP) and let it handle the request. This allows you to do pretty much anything you want, but requires more work to set up the server.
See: Acrobat JS API Reference
I have had trouble when researching or otherwise trying to figure out how (if it's even possible) to get binary image data using JavaScript/jQuery from an HTML input element of type file.
I'm using WebMatrix (C#), but it may not be necessary to know that, if the purposes of this question can be answered using JavaScript/jQuery alone.
I can take the image, save it in the database (as binary data), then later show the pic on the page from that binary data after posting. This does, however, leave me without a pic preview before uploading, for which I am almost certain I must use AJAX.
Again, this may not even be possible, but as long as I can get the binary image data, I believe I can push it to the server with AJAX and process the image the same way I would if I were taking it from a database (note that I don't save the image files themselves using a GUID and all that, I just save the binary data).
If there is an easier way to show a pic preview using the input element, that would work fine, too, of course, as the whole idea behind me trying to do this is to show a pic preview before they hit the submit form button (or at least create that illusion).
UPDATE
I do not consider this a duplicate of another question because, my real question is:
How can I get image data from an input type "file", with JavaScript/jQuery?
If I can just get the data (in the right format) back to the server, I should be able to work with it there, and then return it with AJAX (although, I am absolutely no AJAX expert).
There is, according to the research I have done, NO WAY to get picture previews in all IE versions using only JavaScript (this is because getting the full file path is seen, by them, as a potential security risk). I could ask my users to add the site to their trusted sites, but you don't usually ask users to tamper with those kinds of settings (not to mention that the quickest way to make your site seem suspicious to users is to ask them to add it directly to the trusted sites list. That's like sending an email and asking for a password. "Just trust me! I'm soooo safe!" :)
Short answer: use the jQuery Form Plugin; it supports AJAX-like form submits even for file uploads.
tl;dr
Thumbnail preview on popular websites is usually done in a number of steps; basically the website does the following:
upload the raw image
resize and optimise the image for data storage
generate a temporary link to that file (usually stored in a server-maintained HTTP session)
send it back to the user, to enable a 'preview'
actually store the image after the user confirms it
A few bad solutions are:
Most modern browsers have options to enable script access to local files, but usually you don't ask your users to tinker with those low-level settings.
Earlier Internet Explorer (ah... yes, it's a shame) and ancient versions of modern browsers would expose the full file path when reading the 'value' of the file input box, so you could directly generate an img tag with that value. (Nowadays it is replaced by some c:/fakepath/... thing.)
Use Adobe Flash to mimic the file selection panel; it can properly read local files. But passing the data into JavaScript is another topic...
Hope this helps. ;)
UPDATE
I actually came across a situation that requires a preview before uploading, so I'd like to also put it here. As far as I can recall, there were no transitional versions of modern browsers that masked the real file path but did not yet implement FileReader, but feel free to correct me if there were. This solution should cater for most browsers, as long as they are supported by jQuery.
// 1. Listen to change event
$(':file').change(function() {
// 2. Check if it has the FileReader class
if (!this.files) {
// 2.1. Old enough to assume a real path
setPreview(this.value);
}
else {
// 2.2. Read the file content.
var reader = new FileReader();
reader.onload = function() {
setPreview(reader.result);
};
reader.readAsDataURL(this.files[0]);
}
});
function setPreview(url) {
// Do preview things.
$('.preview').attr('src', url);
}
I know it's impossible to hide source code but, for example, if I have to link a JavaScript file from my CDN to a web page and I don't want the people to know the location and/or content of this script, is this possible?
For example, to link a script from a website, we use:
<script type="text/javascript" src="http://somedomain.example/scriptxyz.js">
</script>
Now, is it possible to hide from the user where the script comes from, or hide the script content, and still use it on a web page?
For example, by saving it in my private CDN that needs a password to access files, would that work? If not, what would work to get what I want?
Good question with a simple answer: you can't!
JavaScript is a client-side programming language, therefore it works on the client's machine, so you can't actually hide anything from the client.
Obfuscating your code is a good solution, but it's not enough, because, although it is hard, someone could decipher your code and "steal" your script.
There are a few ways of making your code hard to be stolen, but as I said nothing is bullet-proof.
Off the top of my head, one idea is to restrict access to your external js files from outside the page you embed your code in. In that case, if you have
<script type="text/javascript" src="myJs.js"></script>
and someone tries to access the myJs.js file in browser, he shouldn't be granted any access to the script source.
For example, if your page is written in PHP, you can include the script via the include function and let the script decide if it's "safe" to return its source.
In this example, you'll need the external "js" (written in PHP) file myJs.php:
<?php
    $URL = $_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI'];
    if ($URL != "my-domain.example/my-page.php")
        die("/* sorry, no access rights */");
?>
// your obfuscated script goes here
that would be included in your main page my-page.php:
<script type="text/javascript">
<?php include "myJs.php"; ?>;
</script>
This way, only the browser could see the js file contents.
Another interesting idea is that at the end of your script you delete the contents of your DOM script element, so that after the browser evaluates your code, the code disappears:
<script id="erasable" type="text/javascript">
//your code goes here
document.getElementById('erasable').innerHTML = "";
</script>
These are all just simple hacks that cannot, and I can't stress this enough: cannot, fully protect your js code, but they can sure piss off someone who is trying to "steal" your code.
Update:
I recently came across a very interesting article written by Patrick Weid on how to hide your js code, and he reveals a different approach: you can encode your source code into an image! Sure, that's not bullet proof either, but it's another fence that you could build around your code.
The idea behind this approach is that most browsers can use the canvas element to do pixel manipulation on images. And since a canvas pixel is represented by 4 values (rgba), each pixel can hold a value in the range 0-255. That means you can store a character (actually its ASCII code) in every pixel. The rest of the encoding/decoding is trivial.
The only thing you can do is obfuscate your code to make it more difficult to read. No matter what you do, if you want the javascript to execute in their browser they'll have to have the code.
Just off the top of my head, you could do something like this (if you can create server-side scripts, which it sounds like you can):
Instead of loading the script like normal, send an AJAX request to a PHP page (it could be anything; I just use PHP myself). Have the PHP locate the file (maybe in a non-public part of the server), open it with file_get_contents, and return (read: echo) the contents as a string.
When this string returns to the JavaScript, have it create a new script tag, populate its innerHTML with the code you just received, and attach the tag to the page. (You might have trouble with this; innerHTML may not be what you need, but you can experiment.)
If you do this a lot, you might even want to set up a PHP page that accepts a GET variable with the script's name, so that you can dynamically grab different scripts using the same PHP. (Maybe you could use POST instead, to make it just a little harder for other people to see what you're doing. I don't know.)
EDIT: I thought you were only trying to hide the location of the script. This obviously wouldn't help much if you're trying to hide the script itself.
Google Closure Compiler, YUI Compressor, Minify, /packer/, etc., are options for compressing/obfuscating your JS code. But none of them can help you hide your code from the users.
Anyone with decent knowledge can easily decode/de-obfuscate your code using tools like JS Beautifier. You name it.
So the answer is, you can always make your code harder to read/decode, but for sure there is no way to hide.
Forget it, this is not doable.
No matter what you try it will not work. All a user needs to do to discover your code and its location is to look in the Net tab in Firebug or use Fiddler to see what requests are being made.
From my knowledge, this is not possible.
Your browser has to have access to JS files to be able to execute them. If the browser has access, then browser's user also has access.
If you password protect your JS files, then the browser won't be able to access them, defeating the purpose of having JS in the first place.
I think the only way is to put the required data on the server and allow only logged-in users to access the data as required (you can also make some calculations server side). This won't protect your JavaScript code, but it makes it inoperable without the server-side code.
I agree with everyone else here: With JS on the client, the cat is out of the bag and there is nothing completely foolproof that can be done.
Having said that; in some cases I do this to put some hurdles in the way of those who want to take a look at the code. This is how the algorithm works (roughly)
The server creates 3 hashed and salted values: one for the current timestamp, and the other two for each of the next 2 seconds. These values are sent over to the client from my PHP module via Ajax as a comma-delimited string. In some cases, I think you can hard-bake these values into a script section of the HTML when the page is formed, and delete that script tag once the use of the hashes is over. The server is CORS protected and does all the usual SERVER_NAME etc. checks (which is not much of a protection but at least provides some modicum of resistance to script kiddies).
It would also be nice if the server checked that the request was indeed coming from an authenticated user's client.
The client then sends the same 3 hashed values back to the server through an Ajax call to fetch the actual JS that I need. The server checks the hashes against the current timestamp there... The three values ensure that the data is sent within the 3-second window, to account for latency between the browser and the server.
The server needs to be convinced that one of the hashes is matched correctly; if so, it sends the crucial JS back to the client. This is a simple, crude "one-time-use password" without the need for any database at the back end.
This means that any hacker has only the 3-second window since the generation of the first set of hashes to get to the actual JS code.
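To make the time-window idea concrete (my module is PHP, but here is a rough sketch in C# purely for illustration; the salt handling, the hash choice and the 3-second window are all assumptions):

using System;
using System.Security.Cryptography;
using System.Text;

static string[] IssueHashes(string secretSalt)
{
    long now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
    var hashes = new string[3];
    for (int i = 0; i < 3; i++)
        hashes[i] = Hash(secretSalt, now + i);   // the current second and the next two
    return hashes;                               // sent to the client, e.g. comma delimited
}

static bool HashesAreValid(string[] returnedHashes, string secretSalt)
{
    // On the second request, at least one returned hash must match the *current* second,
    // which only happens if the round trip stayed inside the ~3 second window.
    long now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
    string expected = Hash(secretSalt, now);
    foreach (string h in returnedHashes)
        if (h == expected)
            return true;
    return false;
}

static string Hash(string salt, long timestamp)
{
    using (var sha = SHA256.Create())
    {
        byte[] bytes = sha.ComputeHash(Encoding.UTF8.GetBytes(salt + ":" + timestamp));
        return Convert.ToBase64String(bytes);
    }
}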
The entire client code can be inside an IIFE, so some of the variables inside the client are even harder to read from the Inspector console.
This is not any deep solution: A determined hacker can register, get an account and then ask the server to generate the first three hashes; by doing tricks to go around Ajax and CORS; and then make the client perform the second call to get to the actual code -- but it is a reasonable amount of work.
Moreover, if the salt used by the server is based on the login credentials, the server may be able to detect which user tried to retrieve the sensitive JS (the server needs to do some additional work regarding the behaviour of the user AFTER the sensitive JS was retrieved, and block the person if they, for example, did not do some other activity that was expected).
An old, crude version of this was done for a hackathon here: http://planwithin.com/demo/tadr.html That will not work if the server detects too much latency and the request goes beyond the 3-second window period.
As I said in the comment I left on gion_13's answer before (please read), you really can't. Not with JavaScript.
If you don't want the code to be available client-side (= stealable without great effort),
my suggestion would be to make use of PHP (or ASP, Python, Perl, Ruby, JSP + Java Servlets), which is processed server-side so that only the results of the computation/code execution are served to the user. Or, if you prefer, even Flash or a Java applet, which allow client-side computation/code execution but are compiled and thus harder to reverse-engineer (not impossible, though).
Just my 2 cents.
You can also set up a MIME type for application/javascript to run as PHP, .NET, Java, or whatever language you're using. I've done this for dynamic CSS files in the past.
I know this is the wrong time to be answering this question, but I just thought of something.
I know it might be stressful, but at least it might still work.
Now the trick is to create a lot of server-side encoding scripts. They have to be decodable (for example, a script that replaces all vowels with numbers and adds the letter 'a' to every consonant, so that the word 'bat' becomes 'ba1ta'). Then create a script that randomly picks between the encoding scripts and creates a cookie with the name of the encoding script being used (quick tip: try not to use the actual name of the encoding script as the cookie value; for example, if our cookie is named 'encoding_script_being_used' and the randomizing script chooses an encoding script named MD10, don't use MD10 as the value of the cookie but something like 'encoding_script4567656', just to prevent guessing). Then, after the cookie has been created, another script will check for the cookie named 'encoding_script_being_used', get its value, and determine which encoding script is being used.
The reason for randomizing between the encoding scripts is that the server-side language will randomize which script is used to encode your javascript.js and then create a session or cookie recording which encoding script was used.
Then the server-side language will also encode your javascript.js and put it in a cookie.
So now let me summarize with an example:
PHP randomly picks from a list of encoding scripts and encrypts javascript.js, then it creates a cookie telling the client-side code which encoding script was used; the client-side code then decodes the javascript.js cookie (which is obviously encoded),
so people can't steal your code.
But I would not advise this, because:
it is a long process
it is too stressful
Use NW.js; I think it is helpful, as it can compile your code to a binary, which you can then use to make Windows, Mac and Linux applications.
This method partially works if you do not want to expose the most sensitive part of your algorithm.
Create WebAssembly modules (.wasm), import them, and expose only your JS workflow, etc. In this way the algorithm is protected, since it is extremely difficult to revert assembly code into a more human-readable format.
After having produced the wasm module and imported it correctly, you can use your code as you normally do:
<body id="wasm-example">
<script type="module">
import init from "./pkg/glue_code.js";
init().then(() => {
console.log("WASM Loaded");
});
</script>
</body>
If you do script src="/path/to/nonexistent/file.js" in an HTML file and call that in a browser, and there are no dependencies or resources anywhere else in the HTML file that expect the file or code therein to actually exist, is there anything inherently bad-practice about doing this?
Yes, it is an odd question. The rationale is that the developer is dealing with a CMS that allows custom (self-contained) JavaScript files to be provided in certain circumstances. The problem is the CMS is not very flexible when it comes to creating conditional includes for JavaScript. Therefore it is easier to just make references to the self-contained JS files regardless of whether they are actually at the specified path.
Since no errors are displayed to the user, should this practice be considered a viable option?
Well, the major drawback is performance, since the browser will try (hard) to download the file and your server will look for it. Finally, the browser may download the 404 page instead, thus slowing down the page load.
If you have the script referred to in the <head> tag (not recommended for starters), it will slow down the initial page-render time somewhat too.
If instead of quickly returning a 404, your site just accepts the connection and then never responds, this can cause the page to take an indefinite amount of time to load, and in some cases, lock up the entire user interface.
( At least that was the case with one revision of FireFox, I hope they've fixed it since I saw that happen ~2 years ago.* )
You should at least put the tags as low in the page order as you can afford to, in order to remedy this problem.
Your best bet by far is to have one consistent no-op URL that is used as a fill-in for all "doesn't exist" JavaScript files, one that returns a 0-byte response with HTTP headers telling the UA to cache it till the cows come home. That should negate most of your server<->client load penalties beyond the first hit (and that should hardly hurt people even on ye olde dial-up).
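A sketch of that no-op URL as an ASP.NET handler (ASP.NET only because that stack comes up later in this thread; the one-year cache lifetime is arbitrary):

using System;
using System.Web;

public class NoOpScriptHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Empty body, but cacheable "till the cows come home" (a year here, arbitrarily).
        context.Response.ContentType = "application/javascript";
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetExpires(DateTime.UtcNow.AddYears(1));
        context.Response.Cache.SetMaxAge(TimeSpan.FromDays(365));
    }

    public bool IsReusable
    {
        get { return true; }
    }
}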
*Lesson learned: don't put script-src references in head, especially for 3rd-party scripts hosted outside your machine, because then you can have the joy of having clients be able to access your website, but run the risk of the page being inoperable because of a bit of advertising JS that was inaccessible due to some internet weirdness. Even if they're a reputable-ish 3rd-party.
If your web server is configured to do work on a 404 error ("you might be looking for this", etc) then you're also causing unnecessary load on the server.
You should ask yourself why you were too lazy to test this yourself :)
I tested 1000 randomized JavaScript filenames and it took several nanoseconds to load, so no, it doesn't make a difference. Example:
script src="/7701992spolsky.js"
This was on my local machine however, so it should take N * roundTripTime for the browser to figure out for remote servers, where N is the number of bad scripts.
If however, you have random domain names that don't exist, like
script src="http://www.randomsite7701992.com/spolsky.js"
then it will take a long f-in time.
If you choose to implement it this way, you could tune the web server so that if the referenced JS file is not found, instead of a 404 it returns a redirect (301) to an empty/default JS file.
If you are using ASP.NET you can look into using custom handlers (ASHX files).
Here's an example:
public class JavascriptHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        // Some code to check if the javascript code exists
        string js = "";
        if (JavascriptExists())    // placeholder check
        {
            js = GetJavascript();  // placeholder: load the script contents
        }
        context.Response.Write(js);
    }

    public bool IsReusable
    {
        get { return false; }
    }
}
Then in your HTML header you could declare a script pointing to the custom handler:
src="/js/javascripthandler.ashx"