First of all, my question: is it possible to pass file names from a running Flash application, whose only purpose is to enable multiple-file selection, to a JavaScript application which handles the upload of all files to the server?
I have examined various Flash upload solutions (like SWFUpload, Uploadify, etc.) and none of them meets my needs. I want an easy-to-implement solution (like Uploadify) which also lets me specify various parts of the HTTP request.
The reason I need this is that my upload form uses session cookies (for user authentication) and a CSRF token, both of which are passed to the server when uploading files.
Is it technically possible to pass filenames (+ paths) to a JavaScript application which then handles the upload?
Thank you,
FMD
I'm sorry, but no, it's not possible to pass the filenames from Flash to JavaScript; however, you could pass the session ID to Flash.
If you are using PHP (I'm not saying you are; your server-side language might have similar functions), you could re-establish the session:
session_id($_POST['ses']);
session_start();
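On the client side, the session ID (and the CSRF token mentioned in the question) can be handed to the Flash uploader as extra POST fields. A rough sketch using SWFUpload's post_params option (option names vary between libraries and versions, it assumes this script is emitted by a PHP page, and $csrfToken is a hypothetical server-side variable):

var uploader = new SWFUpload({
    flash_url: '/swfupload/swfupload.swf',
    upload_url: '/upload.php',
    // extra POST fields sent along with every uploaded file
    post_params: {
        ses: '<?php echo session_id(); ?>',       // picked up server-side by session_id($_POST['ses'])
        csrf_token: '<?php echo $csrfToken; ?>'   // hypothetical CSRF token printed into the page
    },
    file_post_name: 'Filedata'
});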
The reason why you can't pass the filenames to JavaScript (or set them by script in the first place) is that it would be a major security issue. Consider the following:
// if scripts were allowed to set the value of a file input, any page
// could pick a file from the visitor's disk and submit it:
var uploader = document.getElementById('id_of_input_type_file');
uploader.value = 'C:\\Users\\Administrator\\Documents\\commonBankKeyFile.ebjkeystore';
document.getElementById('formId').submit();
...And there you go, I just got your bank credentials simply by having you visit my page; no phishing needed.
I'm writing a new app and this time I want to completely separate the HTML/JS layer from the PHP layer. That's because I'll do a PhoneGap version in the future.
I have a question about authentication. This time I can't use session variables, so I must figure out a new way to authenticate. I'm going to try it this way:
The user fills in the login form and sends it via AJAX to a PHP file.
The PHP file checks whether the login and password are OK, then creates a key token for that user, saves it on the server side (e.g. in MySQL) and returns it to the client.
The browser receives the key token and saves it in sessionStorage.
Each AJAX request carries this token, which is then verified by PHP.
Is there a hole in that plan? Maybe there is a much easier/better solution. It's inspired by how PHP sessions work, but with a key token instead of a session ID. Please help me.
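Roughly, steps 3 and 4 would look like this on the client side (just a sketch; the endpoint names and the X-Auth-Token header are made up):

// steps 2-3: log in, then keep the returned token in sessionStorage
function login(user, pass, done) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/api/login.php', true); // endpoint name is made up
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = function () {
        var token = JSON.parse(xhr.responseText).token;
        sessionStorage.setItem('authToken', token);
        done();
    };
    xhr.send(JSON.stringify({ user: user, pass: pass }));
}

// step 4: attach the token to every later AJAX request so PHP can verify it
function apiPost(url, data, done) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', url, true);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.setRequestHeader('X-Auth-Token', sessionStorage.getItem('authToken'));
    xhr.onload = function () { done(JSON.parse(xhr.responseText)); };
    xhr.send(JSON.stringify(data));
}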
I can't use session variables
What you describe sounds exactly like a session, but you're going to implement it yourself rather than using the known, tested properties and flexibility of the standard PHP session handler. Hence even if you avoid the inherent design pitfalls, you run the risk of injecting defects into your implementation.
I would strongly urge you to use the standard PHP mechanism (although you might want to consider a more complex save handler, even if it's just enabling the multi-layer function).
Given that what you describe is no different from the PHP handler, then yes, it will work if implemented correctly. Is it secure? Not from the information you've provided.
Session storage does offer the possibility of carrying out more secure operations without resorting to SSL (although HTTPS is a must-have if security is important), since you can pre-share encryption keys (but the initial key negotiation is highly vulnerable).
OTOH what you describe is vulnerable to sniffing, injection and CSRF.
In our development environment we use Jetty; in production we use Tomcat.
For some functionality we use JavaScript, but there are some locations hardcoded differently for Jetty and Tomcat.
I know it's a bit weird to use two different servers, but that's just the way it is.
So now, when we build the application, people sometimes forget to change the server-specific values in the JavaScript file.
Is there a way to automatically check from JavaScript whether the server is Jetty or Tomcat?
I was thinking of placing a txt file in the root of Tomcat and checking whether or not it exists, but maybe there is a more native way to do it.
Assuming one or both of your servers send the Server HTTP header (Jetty usually does, and can easily be configured to do so), you could use an XMLHttpRequest and look at the response headers.
Read more here: Accessing the web page's HTTP Headers in JavaScript
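For example, something like this (a sketch; it assumes the header is actually sent and not stripped by a proxy, and that Tomcat identifies itself as Tomcat or Coyote):

var xhr = new XMLHttpRequest();
xhr.open('HEAD', '/', true); // same-origin request, so the header is readable
xhr.onload = function () {
    var server = xhr.getResponseHeader('Server') || '';
    var isJetty = /jetty/i.test(server);
    var isTomcat = /tomcat|coyote/i.test(server);
    // pick the right hardcoded locations here
};
xhr.send();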
However, I would suggest that you extract the pieces of code that change between servers into one JavaScript file, e.g.:
/* server_info.js */
var locations = {
    file1: "/some/path",
    file2: "/another/path"
};
And include that file as a <script> in all your pages.
Then you can have Jetty and Tomcat each use a different version of that file. It should be easy enough to have a servlet (or filter, or action, or whatever exists in your framework) that looks at the server type and serves up the right file.
If that's too much, then you could do the same thing, but simply have:
/* server_info.js */
var server_type = "tomcat";
And vary that file by server (you could easily generate that file in a JSP, or something similar).
Obligatory warning: as I'm sure you know, having different servers in dev and prod is not a fantastic idea, for exactly these sorts of reasons. Once you implement a solution to this problem, how are you going to know that the Tomcat code works?
Jetty is more than capable of being a production server, and Tomcat can do a good job in development. I suspect you (as a team) are making more work for yourselves than you really need to.
You could make an AJAX request to the server, and if each one responds in a unique way, you'll know which is which. Whether that's the existence of a different file or different content in a particular file, there are many ways the servers can differentiate themselves to the client.
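For instance, a minimal sketch that probes for a marker file which only exists on one of the servers (the filename is made up; you would drop a small static file into the Tomcat webapp root):

var xhr = new XMLHttpRequest();
xhr.open('HEAD', '/server-marker-tomcat.txt', true);
xhr.onload = function () {
    // 200 means the marker exists, so this is Tomcat; a 404 means Jetty
    var isTomcat = (xhr.status === 200);
    // branch on isTomcat here
};
xhr.send();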
I am working on a file upload system which will store individual parts of large files on more than one server. So the distribution of a 1GB file will look something like this:
Server 1: 0-128MB
Server 2: 128MB-256MB
Server 3: 256MB-384MB
... etc
The intention of this is to allow for redundancy (each part will exist on more than one server), security (no one server has access to the entire file), and cost (bandwidth expenses are distributed).
I am curious if anyone has an opinion on how I might be able to "trick" web browsers into downloading the various parts all in one link.
What I had in mind was something like:
Browser is linked to Server 1, which advertises the Content-Length of the full file
Once 128MB is served, Server 1 will intentionally close the connection
Hopefully, the browser will try to restart the download, requesting Server 1
Server 1 provides a 3XX redirect to Server 2
Browser continues downloading from Server 2
I don't know for certain that my example works, as I haven't tested it yet. I was curious whether anyone has other solutions.
I'd like to make the whole process as easy as possible (ideally requiring no work beyond a simple download). I don't want the users to have to use another program (ie: cat'ing the files together). I'd also like to not use a proxy server, since it would incur extra bandwidth costs.
As far as I'm aware, there is no JavaScript solution for writing a file; if there were one, that would be great.
AFAIK this is not possible by using the HTTP protocol. You can probably use a custom browser extension but it would depend on the browser. Another alternative is to create a Java applet that would download the file from different servers. The applet can accept the URLs to the different servers as parameters.
To save the generated file:
https://stackoverflow.com/a/4551467/329062
That solution stores the file in memory though, so it won't work with very large files.
You can download the partial files into a JS variable using JSONP. That will also let you get around the same-origin policy.
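A sketch of that idea, assuming each chunk server wraps a base64-encoded chunk in a JSONP callback (the URL and callback name are made up):

function receiveChunk(data) {
    // data.base64 is the chunk body, encoded as base64 by the server
    var binary = window.atob(data.base64);
    // stash the decoded bytes somewhere, keyed by data.index
}

var script = document.createElement('script');
script.src = 'http://server2.example.com/chunk?file=abc&index=1&callback=receiveChunk';
document.head.appendChild(script);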
JavaScript's security model will only allow you to access data from the same origin the JavaScript came from, i.e. not multiple servers.
If you are going to have the file bits on multiple servers, you will need the user to load the web page, fetch the bits and then finally stick them together in the correct order. If you can manage to get all your users to do this (correctly), you are a better man than I.
It's possible to do in modern browsers over standard HTTP.
You can use XHR2 with CORS to download the file chunks as ArrayBuffers, merge them using the Blob constructor, and use createObjectURL to hand the merged file to the user.
However, I suspect that browsers will store these objects in RAM, so it's probably a bad idea to use it for large files.
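A minimal sketch of that approach (the chunk URLs are placeholders; each server must send the appropriate CORS headers):

var chunkUrls = [
    'https://server1.example.com/file.part0',
    'https://server2.example.com/file.part1'
];

function getChunk(url) {
    return new Promise(function (resolve, reject) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url, true);
        xhr.responseType = 'arraybuffer';
        xhr.onload = function () { resolve(xhr.response); };
        xhr.onerror = reject;
        xhr.send();
    });
}

Promise.all(chunkUrls.map(getChunk)).then(function (buffers) {
    // merge the ArrayBuffers and offer the result as a download
    var blob = new Blob(buffers, { type: 'application/octet-stream' });
    var a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = 'file.bin'; // suggested filename
    a.click();
});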
What is the best practice for coordinating access to files in node.js?
I'm trying to write an HTTP-based file uploader for very large files (tens of GB) that is resumable. I'm trying to figure out what the best approach is to handle two people trying to upload the same file at the same time... I'm also trying to think ahead to the possibility that more than one copy of the node.js HTTP server is running behind a load balancer, which means catching duplicate uploads can't rely on just the code itself.
In Python, for example, you can create a file by passing the correct flags to the open() call to force an atomic create. I'm not sure whether node.js's default way of opening a new file is atomic.
Another option I thought of, but don't really want to pursue, is using a database with an async driver that supports atomic transactions to track this state...
In order to know if multiple users are uploading the same file, you will have to identify the files somehow. Hashing is best for this. First, hash the entire file on the client side to identify it. Tell the server the hash of the file, if there is already a file on the server with the same hash, then the file has already been uploaded or is currently being uploaded.
Since this is an HTTP file server, you will likely want users to upload files from a browser. You can get the contents of a file in the browser using the FileReader API. Unfortunately, as of now this isn't widely supported. You might have to use something like Flash to get it to work in other browsers.
As you stream the file into memory with the FileReader, you will want to break it into chunks and hash the chunks. Then send the server all of the file's chunk hashes. It's important that you break the file into chunks and hash those individual chunks instead of the contents of the entire file, because otherwise the client could send one hash and upload an entirely different file.
After the hashes are received and compared to other files' hashes and it turns out someone else is currently uploading the same file, the server then decides which user gets to upload which chunks of the file. The server then tells the uploading clients what chunks it wants from them, and the clients upload their corresponding chunks.
As each chunk is finished uploading, it is rehashed on the server and compared with the original array of hashes to verify that the user is uploading the correct file.
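A rough client-side sketch of the chunking and hashing, for modern browsers only (the chunk size is arbitrary, and crypto.subtle requires a secure context):

var CHUNK_SIZE = 1024 * 1024; // 1 MB per chunk

function readChunk(chunk) {
    return new Promise(function (resolve, reject) {
        var reader = new FileReader();
        reader.onload = function () { resolve(reader.result); }; // ArrayBuffer
        reader.onerror = reject;
        reader.readAsArrayBuffer(chunk);
    });
}

function hashChunk(chunk) {
    return readChunk(chunk)
        .then(function (buf) { return crypto.subtle.digest('SHA-256', buf); })
        .then(function (digest) {
            // hex-encode the digest
            return Array.from(new Uint8Array(digest))
                .map(function (b) { return b.toString(16).padStart(2, '0'); })
                .join('');
        });
}

function hashFile(file) {
    var jobs = [];
    for (var offset = 0; offset < file.size; offset += CHUNK_SIZE) {
        jobs.push(hashChunk(file.slice(offset, offset + CHUNK_SIZE)));
    }
    return Promise.all(jobs); // resolves to an array of per-chunk hashes
}

// usage: hashFile(fileInput.files[0]).then(function (hashes) { /* send hashes to the server */ });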
I found this on Hacker News in a response to someone complaining about some of the same things in node.js. I'll put it here for completeness. This allows me to at least lock some file writes in node.js like I wanted to.
IsaacSchlueter:
You can open a file with O_EXCL if you pass in the open flags as a
number. (You can find them on require("constants"), and they need to
be binary-OR'ed together.) This isn't documented. It should be. It
should probably also be exposed in a cleaner way. Most of the rest of
what you describe is APIs that need to be polished and refined a bit.
The boundaries are well defined at this point, though. We probably
won't add another builtin module at this point, or dramatically expand
what any of them can do. (I don't consider seek() dramatic, it's just
tricky to get right given JavaScript's annoying Number problems.)
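A sketch of what Isaac describes, in older-style node.js (the path is a placeholder; newer Node versions expose the same behaviour as the 'wx' flag string on fs.open):

var fs = require('fs');
var constants = require('constants'); // where the O_* flags live in older Node

// O_EXCL makes the create atomic: the open fails if the file already exists
var flags = constants.O_CREAT | constants.O_EXCL | constants.O_WRONLY;

fs.open('/tmp/upload-abc123.part', flags, function (err, fd) {
    if (err) {
        // EEXIST: another request (or another server process) created it first
        console.error('file already exists, not touching it:', err.code);
        return;
    }
    // we won the race; it is safe to start writing this upload
    fs.close(fd, function () {});
});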
I have built a VB.NET web application. I have tried to make it secure, with all users requiring a password to get in.
The only problem is that if anyone guesses (or discovers, using some kind of hacking tool) the URL of the JavaScript file, they can download and read it without even having to log in first.
Is there any way that this can be prevented?
If the JavaScript file is not required as part of the logon process, then you can secure the file on the server so your users need to be authenticated and authorized in order to access it. This will prevent unauthorized access. Approaches to securing this file include using file system Access Control Lists (ACLs, i.e. Windows file permissions), or using the "authorization" element in the ASP.NET web.config.
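A sketch of the web.config approach (the path is hypothetical, and it assumes the request for the .js file is actually routed through ASP.NET, e.g. via the IIS integrated pipeline or a wildcard handler mapping):

<location path="Scripts/secure.js">
  <system.web>
    <authorization>
      <deny users="?" />  <!-- "?" means anonymous users -->
    </authorization>
  </system.web>
</location>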
If the JavaScript file is required as part of the logon process, then you've got to give anonymous (unauthenticated) access to the file, in which case you cannot prevent people being able to download it.
Don't serve the JS file up to people who haven't authenticated.
I don't know ASP.NET well enough to say what the best approach would be, but worst case you stick it in a .aspx file, do the auth/authz stuff at the top, then set the right content type and serve it.