I am concerned with the feasibility of this:
On a pre-configured machine I will have a web application pre-installed alongside an Apache suite, so client and server are the same machine!
In this web application, users can drag and drop PDF files onto a USB icon.
The web app should then write the dropped PDF to an attached USB stick.
I have never done something like this (writing to USB), so I am fairly unsure of myself.
I am well aware of the browser restrictions concerning JavaScript and filesystem access, but after researching a bit I found that there might be some possible and relevant (I'm a web platform guy) solutions to this:
Make a "Chrome App" with USB permission (does this really work?)
Use PHP to find the USB stick and then write to it (how would that work under Windows?)
Use Flash as a middleman (not preferred)
Now I'd like to know:
Does anyone have good experience with the aforementioned possibilities?
Has anybody ever done something similar? Did it work? Which path did you choose?
How would I know which drive letter the USB stick is mounted as, and how could I be sure?
What other possible solutions to this problem are there?
You have a website ('client-side' user interface) and a back-end server ('server-side') running on the same machine. This gives you 2 options:
Client-side: Download a file through the browser via HTTP GET and let the user choose where they save it.
Server-side: Build your USB interactions into the back-end (Node.js) code, as #mcgraphix suggests.
Interacting with the USB on the server-side provides the most flexibility. Furthermore, there are a number of libraries that you can leverage. Head to npmjs.org and consider, among others, the following Node.js server-side packages:
usb-detection
usb
With the server-side approach, initiate a web service request when the user completes the drag & drop action on the client, and implement the USB interaction in the server-side (Express.js or similar) route that services the request.
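As a rough illustration of the server-side half, a minimal Express sketch might look like the following (assuming a recent Express with the built-in raw body parser). The mount path E:\ and the /save-pdf route are assumptions for this example; in practice you would detect the stick's mount point, e.g. with a package such as usb-detection.

// server.js: minimal sketch, assumes the stick is mounted as E:\ (adjust or detect at runtime)
const express = require('express');
const fs = require('fs');
const path = require('path');

const app = express();
const USB_MOUNT = 'E:\\';   // assumed mount point of the USB stick

// Accept the raw PDF bytes posted by the client after the drag & drop
app.post('/save-pdf', express.raw({ type: 'application/pdf', limit: '50mb' }), (req, res) => {
  const target = path.join(USB_MOUNT, 'dropped.pdf');
  fs.writeFile(target, req.body, (err) => {
    if (err) return res.status(500).send('Could not write to the USB stick: ' + err.message);
    res.send('Saved to ' + target);
  });
});

app.listen(3000);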
If the drive letter of the stick is known, then writing a file from PHP is simple:
file_put_contents( 'E:\\folder\\file.pdf', $data );
Update
You can read a list of available drives into a dropdown and allow the user to select a default drive to write to:
https://stackoverflow.com/a/8210132/696535
Your question is more an architecture question than a code specific question.
Your web app (if you insist on a web app) should have two major components, a server side component that can be given arbitrary commands, and a client side component (javascript using XMLHttpRequest) that can make requests to the server side component to execute said arbitrary commands.
So your server-side component, the component that serves your web page, should have code that can write your PDF to the file system; it should probably generate the PDF file as well, rather than doing that in the web browser.
Which technology you use is up to you, whether that's PHP, .NET, Node.js, etc.
The general gist is that you want a server-side framework that deals with HTTP requests, in your case probably a POST request from the client side containing the encoded PDF, and responds accordingly. Bind a particular HTTP route to trigger your save logic.
Your HTTP POST request to the server will contain your payload (the PDF file) and be sent to a particular path, e.g. http://localhost/savepdf, which whichever technology stack you choose listens on (you'll need to configure that).
Your server-side component should read the incoming data, decode it as appropriate, and then write the received payload to disk.
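On the client side, the drop handler could be as simple as the following sketch; the #usb-icon element and the /savepdf route are hypothetical names chosen to match the example path above, not a fixed API.

// Drag & drop handler that POSTs the dropped PDF to the local server (names are assumed)
const dropZone = document.querySelector('#usb-icon');

dropZone.addEventListener('dragover', (e) => e.preventDefault());

dropZone.addEventListener('drop', (e) => {
  e.preventDefault();
  const file = e.dataTransfer.files[0];          // the dropped PDF
  if (!file) return;

  const xhr = new XMLHttpRequest();
  xhr.open('POST', 'http://localhost/savepdf');  // assumed route on the local server
  xhr.setRequestHeader('Content-Type', 'application/pdf');
  xhr.onload = () => console.log(xhr.responseText);
  xhr.send(file);                                // send the raw file bytes
});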
Related
Maybe I'm misunderstanding how Node.js works, but I would like to use it just as a server backend for a web app, without it running as a service / listening on a port.
I'm open to ideas for better ways to solve this; the app will only be available on our intranet.
Example of what I'm thinking:
backend server.js:
function connectDb(usr, pwrd){
    // Some npm package code to connect to a db
    return console.log("Successfully connected")
}
frontend javascript.js:
require("server.js")
$(".connect.button").on("click", function(e){
connectDb($(".connect.user").text(), $(".connect.pwrd").text())
})
There are two different aspects of your question and code example that you could work on to get a better understanding of the ecosystem.
Client / Server
When a client wants to get some resource from a server, it connects to a specific port on that server, on which the back-end application is "listening". That means, to be able to serve resources coming from a database, you must have a Node process listening to a port, fetching the requested resources from the database, and returning them. The perfect format for that kind of data exchange is JSON.
To get a better understanding of this process, you may want to try writing a simple Node app that sends a piece of JSON over the network when it receives a request, and try to load it with an XHR in client code (for example with jQuery's AJAX method). Then, try to serve a dynamic piece of JSON coming from a database, with a query based on the request's content.
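A minimal sketch of that exercise might look like this; the port, the /users path, and the hard-coded data are made up for illustration, and in a real app the array would come from a database query.

// server.js: run with "node server.js"
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/users') {
    // In a real app this array would come from a database query
    const users = [{ name: 'alice' }, { name: 'bob' }];
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(users));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);   // the Node process must listen on a port

// In the browser (loaded via a <script> tag), e.g. with jQuery:
// $.getJSON('http://localhost:8080/users', (users) => console.log(users));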
Module loading
require("server.js") only works in Node, and can't be used in JavaScript that is running in a client's browser (Well, at least for now. Maybe some kind of module loading could be normalised for browsers, but that's another debate.).
To use a script in a client browser, you have to include it in the loaded page with a <script> tag.
In Node, you can load a script file with require. However, said script must declare what functions or variables are exposed to the scripts that require it. To achieve this, you export those variables or functions by setting module.exports.
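For example, a minimal sketch (the file names are arbitrary):

// db.js: declares what it exposes via module.exports
function connectDb(usr, pwrd) {
    // Some npm package code to connect to a db would go here
    console.log("Successfully connected");
}
module.exports = { connectDb };

// app.js: another Node script can now require it
const { connectDb } = require('./db.js');
connectDb('admin', 'secret');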
See this article to get some basic understanding, and this part of Node docs to master all the details of module loading. This is quite important, as this will help you structure your app and avoid strange bugs.
For one thing, node itself isn't a web server: it's a JS interpreter, which (among other things) can be used to write a web server. But node itself isn't a web server any more than Java is.
But if you want things to be able to connect to your node program, in order to do things like access a database, or serve a webpage, then, yeah, your program needs to be listening on some port on the machine it's running on.
Simply having your node program listening on a specific port on your machine doesn't mean that anyone else can access it; but that's really a networking question, not a programming question.
I'm new to Arduino and am trying to connect it to the internet using an Ethernet Shield. Before I buy the Ethernet Shield, I want to make sure I will be able to execute the necessary steps with it. Is it possible to use JavaScript to write to a text file stored on the server (containing binary data), connect to said server/file address with the Arduino, and then use TextFinder (Arduino's library) to read the file's binary data and perform the necessary commands? If so, what are the steps (if it diverges from this basic outline)?
It seems fairly straightforward, but through my own research I am unsure whether text files can be written and stored in that fashion, and whether the Arduino can read this file type. I'm also aware that the conventional way entails PHP and MySQL, both of which I am fairly unfamiliar with.
Thanks!
The Arduino can read a text file. I suggest you use XML or JSON instead of a plain text file.
I am sharing a link to the code for my final-year B.Tech project, "Controlling devices using internet".
(Of course this can be done easily by using the Arduino + Ethernet as the server, but the problem with that is you need to port-forward the router in order to access the server from outside the local network, and port forwarding is a bit of a risk from a security point of view.)
I used an Apache server (for testing I installed it on my laptop; later I used hosting sites) and the Arduino + Ethernet Shield as the client. The Arduino sends an HTTP request to the server for the XML file; after getting it, it parses the XML and controls the devices. I used PHP for creating the UI and updating the XML file.
I hope this is useful:
https://drive.google.com/folderview?id=0BxWdBbr_6RYkSXVwcGxOa3pxTDA&usp=sharing
I'm working on a Parse web app and have run into some problems using the backbone.js based client side javascript sdk. I noticed the way I have things set up, the client can view all of my source code by simply using the dev tools to view source files and can also run code against the database (within the limits of the ACL's I've set). I've started working on rebuilding the app in cloud code using the Express.js module Parse provides so that all of my code is stored server side, but I was wondering how those using client side frameworks get around this obvious problem.
That's the issue with client-side code. Assume any code you send to the client will be read, broken, and tampered with.
With JavaScript, your best bet is to either use Cloud Code and send AJAX or streaming data calls to the server, retrieve the data from the server at runtime (not super secure, but it would fool some people), or accept that your code is vulnerable.
I typically work with frameworks in the MVC format, so I only expose a limited subset of the actual model via a REST API. I use both a client-side framework and a server-side framework. Anything sensitive goes on the server.
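As a rough sketch of that split with Parse, the sensitive logic lives in Cloud Code and the browser only calls it by name; the function name and the Order class below are made up for illustration.

// cloud/main.js: Cloud Code runs on Parse's servers, so this code is never shipped to the browser
Parse.Cloud.define('getOrderTotal', function(request, response) {
  // Sensitive query/business logic stays server-side
  var query = new Parse.Query('Order');
  query.equalTo('user', request.user);
  query.find().then(function(orders) {
    var total = 0;
    orders.forEach(function(o) { total += o.get('amount'); });
    response.success(total);
  }, function(error) {
    response.error(error);
  });
});

// In the browser, the client only ever sees the function name:
// Parse.Cloud.run('getOrderTotal', {}, { success: function(total) { console.log(total); } });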
I recently asked another question: Handle Web Server with multiple clients.
I have gone through the basic techniques for implementing a comet server, like StreamHub, Maven/Jetty, etc.
I have the following questions about that:
I found that in the case of Maven/Jetty an internet connection is required for downloading certain files from the net. Is it possible to set this up if there is no internet connection on the machine where the web server is hosted?
Also, I am looking for open source tools/technologies to achieve what is described in the question above, and I think StreamHub does not have a free open source version. Please help if you know any tool which is free/open source to use.
Currently the web application is running on an Apache web server, so if I use a comet server, what changes do I need to make?
Please help...
Thanks in advance...
For comet, pick a server which can handle many open connections. For a chat app I implemented which currently handles 10k open connections, I used Mochiweb. You might want to give that a look.
Going along the Mochiweb path, I would also recommend Erlang for implementing your server. It will be a small piece of code: basically, you will listen on a path and hold the connection open till you have some data to respond with, or until you time out.
On the client side, you would write a simple JS function which makes an AJAX call and handles timeouts and data responses as and when they come. Nothing too different here. However, you may need JSONP instead (cross-domain/subdomain issues, because the web and long-poll servers are different), so ensure that your long-poll server replies accordingly.
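A rough sketch of such a long-poll loop on the client (the /poll endpoint is an assumption):

// Minimal long-polling loop: re-issues the request after each response or timeout
function poll() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/poll');          // assumed endpoint on the long-poll server
  xhr.timeout = 30000;               // give up after 30 seconds and reconnect
  xhr.onload = function() {
    if (xhr.status === 200) {
      handleMessage(JSON.parse(xhr.responseText));
    }
    poll();                          // immediately open the next long-poll request
  };
  xhr.ontimeout = xhr.onerror = function() {
    setTimeout(poll, 1000);          // back off briefly before retrying
  };
  xhr.send();
}

function handleMessage(msg) {
  console.log('New data from server:', msg);
}

poll();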
I currently have an idea where I want to save an image from a C++/OpenGL application on demand from a browser. So basically I would like to run the application itself on the server and have a simple communication layer like this:
JS -> tell application to do calculations (and maybe pass a string or some simple data)
application -> tell JS when finished and maybe send a link, text or something as simple as that.
I don't really have a lot of experience with web servers and as such don't know if that is possible at all (it's just my naive thinking). And note: I am not talking about a WebGL application; I just want simple communication between a C++ server-side application and the user.
Any ideas how to do that?
Thanks a lot!
Basically, no matter what language/framework you choose for your web server, you just need an interface that is callable from your browser JS, and you can do whatever you want on the server once it receives the call.
Most likely that means any web service interface exposed from the server.
Just make sure to safeguard your server against DoS, since it sounds like each call kicks off a heavy process.
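For example, if the web server happened to be Node.js, the endpoint could simply spawn the C++/OpenGL program and return a link to the image it produced; the binary name, arguments, and output path below are all made up for illustration.

// Minimal sketch: an HTTP endpoint that runs the native application and returns the result
const express = require('express');
const { execFile } = require('child_process');

const app = express();

app.get('/render', (req, res) => {
  // Hypothetical binary that writes public/output.png and then exits
  execFile('./opengl-renderer', ['--out', 'public/output.png'], (err) => {
    if (err) return res.status(500).send('Render failed: ' + err.message);
    res.json({ image: '/output.png' });   // the client JS can now load this URL
  });
});

app.use(express.static('public'));        // serve the generated image
app.listen(3000);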
As far as I know, JavaScript (at least when embedded in HTML) is executed on your local machine and not on the server, so there is IMHO no way to directly start your server application using JS alone.
PHP, for example, is executed on the server side, so you could use e.g. PHP's system function to call your C++/OpenGL application on the server, initiated on demand through a web browser.
When the call is finished you could then directly present the image.
Well, you could always use the CGI interface to invoke your application and have it save that image somewhere accessible to the web server. Then have your JS load that via AJAX.
Or make a CGI app that talks to the application and then serves a small page with the picture in it.
[EDIT]
Answering the comments:
CGI is not complex to learn; it is mostly a simple convention you can follow. I think it would give you the maximum flexibility. I don't know which PHP modules allow you to leave the cozy protection of the server application and interact with other things on your server.