HTTP methods differences - JavaScript

What is the difference between
HTTP POST
HTTP DELETE
HTTP PUT
HTTP GET
I normally use the POST and GET methods to submit forms and know them very well, but I want to know when and why the DELETE and PUT methods should be used, to improve my programming skills.

What the different methods do depends entirely on how the remote web server chooses to interpret them. There is no fixed meaning. A server does not care if it sees GET or POST; rather, the code that ends up being executed to service the request does (and can decide to do anything, since it's code).
The HTTP protocol gives an official guideline for what kind of action each verb is supposed to trigger, which is:
GET: retrieve a resource
PUT: replace a resource with another, or create it if it does not exist
DELETE: remove a resource if it exists
POST: might do anything; typically used to "add" to a resource
However, this mapping is ultimately governed by application code, and it is typically not respected by web applications (e.g. you will see logical deletions enacted with POST instead of DELETE).
The situation is better when talking about REST architectures over HTTP.
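To make the official guideline concrete, here is a minimal sketch issuing each verb from browser JavaScript with the fetch API (the /articles URLs are hypothetical):

// Minimal sketch; the /articles URLs are made up for illustration.
fetch('/articles/42');                       // GET: retrieve the resource
fetch('/articles/42', {                      // PUT: replace it (or create it)
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ title: 'New title' })
});
fetch('/articles/42', { method: 'DELETE' }); // DELETE: remove it
fetch('/articles', {                         // POST: typically "add" to the collection
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ title: 'Another article' })
});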

In a nutshell:
GET = fetch a resource.
POST = update a resource.
DELETE = delete a resource.
PUT = create/replace a resource.
In HTML, only GET and POST are allowed. A typical web-development HTTP server will do nothing unless you have code (or configuration) to specify what you want it to do with the different HTTP methods.
There's nothing stopping you from updating user data in response to a GET request, but it's not advisable. Browsers deal with GET and POST differently with respect to caching (a cached GET will automatically be reissued, but a cached POST will prompt the user to allow it to be resent), and many HTML elements can issue GETs, making them unsafe for updates. There are other HTTP methods too: http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol.
Many people who claim to be RESTful confuse HTTP POST and PUT with SQL UPDATE and INSERT. There isn't a direct correlation; it always depends on context. That is, what POST means depends entirely on the resource you're interacting with. For example, creating a new entry on a blog could be a POST to the blog itself, or a PUT to a subordinate resource. A PUT, however, must by definition always contain the entire resource.
Typically, you would not allow an HTTP client to determine the URI of a new resource, so a POST to /blog would be safer than a PUT to /blog/article-uri, although HTTP does cater for appropriate responses should the server be unable to honour the intended URI. (HTTP is just a specification; you have to write the code to support it, or find a framework.)
But since you can always achieve a PUT or DELETE use-case by POSTing to a parent resource responsible for its subordinates (i.e. POSTing a message to /mailbox instead of PUTting it at /mailbox/message-id), it isn't essential to expose PUT or DELETE methods publicly; both styles are sketched below.
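As a hedged sketch of that trade-off (reusing the /mailbox example; the message object is made up):

// POST to the parent resource; the server chooses the new message URI:
var message = { subject: 'Hi', body: 'Hello there' };
fetch('/mailbox', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(message)
});
// PUT to a client-chosen URI; by definition the body must be the entire resource:
fetch('/mailbox/message-id', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(message)
});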
Adopting REST principles improves the visibility of the interactions within a system, which is one way to improve your programming skills; a uniform interface, for example, can make it simpler to contextualise your interactions in REST terms.
REST is not HTTP though: http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm.

Related

How can I process POST data sent from one HTML to another? [duplicate]

I want to pass some textbox values strictly using POST from one HTML page to another.
How can this be done without using any server-side language like ASP.NET or PHP?
Can it be done using JavaScript?
You can't read POST data in any way from JavaScript, so this is not doable.
Here you can find similar questions:
http://forums.devshed.com/javascript-development-115/read-post-data-in-javascript-1172.html
http://www.sitepoint.com/forums/showthread.php?454963-Getting-GET-or-POST-variables-using-JavaScript
This reading can also be interesting: http://en.wikipedia.org/wiki/POST_%28HTTP%29
This excerpt in particular explains the reason for that answer (Wikipedia is the source):
GET
Requests a representation of the specified resource. Requests using GET should only retrieve data and should have no other effect. (This is also true of some other HTTP methods.)[1] The W3C has published guidance principles on this distinction, saying, "Web application design should be informed by the above principles, but also by the relevant limitations."[10] See safe methods below.
POST
Submits data to be processed (e.g., from an HTML form) to the identified resource. The data is included in the body of the request. This may result in the creation of a new resource or the updates of existing resources or both.
POST data is added to the request body. When you do a GET request the data is added to the URL, and that's why you can access it through JavaScript (and why it arrives unparsed and you have to parse it yourself).
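For example, here is a minimal sketch of that manual parsing for GET data (no equivalent exists for POST data in the page):

// Manually parse GET parameters out of the current URL.
function getQueryParams() {
  var params = {};
  var query = window.location.search.substring(1); // drop the leading '?'
  if (!query) return params;
  var pairs = query.split('&');
  for (var i = 0; i < pairs.length; i++) {
    var parts = pairs[i].split('=');
    params[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
  }
  return params;
}
// e.g. on page.html?name=foo, getQueryParams().name === 'foo'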
POST, instead, sends the data directly inside the HTTP request body, which is never seen by the HTML page (the page is just one part of what is sent through the HTTP exchange).
That said, only a server-side language receives the full HTTP request, and you definitely can't access it from JavaScript.
I'm sorry, but that is the real answer.

REST API, tracking changes in multiple resources, front-end synchronization

I have a system with quite complex business logic; so far I have around 10-15 database tables (resources), and this number is growing. The front-end for the user is an AngularJS single-page application. The problem is communication with the back-end and keeping the angular front-end synchronized with back-end data.
The back-end keeps all resources and the relationships between them; this is obvious. The front-end fetches those resources and keeps copies of them locally to make the interface much more responsive for the user and to avoid fetching data on every request. And this is awesome.
The server side has many operations which affect many resources at once, which means that adding/removing/editing one resource (via the REST API) can modify a lot of other resources.
I want the front-end app data to be always fully synchronized with the back-end data. This allows me to keep data integrity and keep my application bug-free. Any kind of desynchronization is a big "no-no"; it introduces hundreds of places where undefined behaviour could occur in my front-end app.
The question is: what is the best way to achieve that? My ideas/insights:
Business logic (modifying/editing/deleting resources, managing relationships, keeping data integrity) must be implemented only once. Duplicating the business logic implementation (one in the front-end and one in the back-end) introduces a lot of potential bugs and involves code duplication, which is obviously a bad thing. Even if business logic were implemented in the front-end, the back-end would still have to validate the data and maintain its integrity - a duplication of business logic. So, the business logic MUST be in the back-end, period.
I use a REST API. When my front-end updates one resource (or many resources via the PATCH method), a lot of side effects happen on the server side; other resources get modified too. I want my front-end Angular app to know WHICH resources got modified, so it can update them (to keep full synchronization). REST returns only the resource that was originally asked to be updated, without the other affected resources.
I know that I could use some form of resource linking, and send my original updated resource with links to the other affected resources. But what if there are 100 of them? Making 100 requests to the server is a total performance killer.
I am not very attached to REST, because my API is not public; it could be anything. I think the best solution would be the back-end sending back ALL modified resources. This would allow my front-end to always be in sync with the back-end, would be fast, and would be atomic (no invalid intermediate state between multiple requests to the server). I think this architecture would be awesome. The question is: is this a common approach? Are there any protocols / standards / libs allowing me to do this? We could write it from scratch, but we don't want to reinvent the wheel.
Actually, I think that having the business logic in both the front-end and the back-end would be good, but ONLY if it were implemented once. That means a JavaScript back-end application. Unfortunately, for the time being, this is not a possible solution for me.
Any insight will be welcome!
Added the backbone.js tag, because the question is much more about architecture than about any specific technology.
You're on the right track, and it is a common problem you're facing right now. As you said, in a REST world your API returns the requested / changed resource. A simple example of your problem:
You - as user X - want to follow another user Y. The front end displays your own following counter (X) and the follower counter of the other user (Y). The http call would be something like:
PUT /users/X/subscribe/Y
The API would return the user Y resource but X is missing, or the other way around.
To handle these cases I use an extended version of my standard API response structure, which is:
meta object - includes the HTTP status code and an explanation of why this code was used, which app server processed the response, and more
notification object - includes information about errors during processing (if any), special messages for developers, and more
resource - the resource which was requested / modified; the name of this attribute is the resource type in singular for single resources (e.g. user) or in plural for resource collections (e.g. users)
{
  "meta": {
    "status": 200,
    "message": "OK",
    "appServer": "app3"
  },
  "notification": {
    "errors": []
  },
  "user": {
    "id": 3123212,
    "subscribers": 123,
    "subscriptions": 3234
  }
}
In order to also return other affected resources while keeping the REST way + my static, standard response structure, I attach one more object to the response called 'affectedResources', which is an array of all other affected resources. In this very simple example the array would include just the user X resource object. The front end iterates over the array and takes care of all necessary changes on the front-end side.
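For the follow example above, the extended response might look like this (the user X values are purely illustrative):

{
  "meta": { "status": 200, "message": "OK", "appServer": "app3" },
  "notification": { "errors": [] },
  "user": { "id": 3123212, "subscribers": 123, "subscriptions": 3234 },
  "affectedResources": [
    { "user": { "id": 7654321, "subscribers": 99, "subscriptions": 12 } }
  ]
}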

Protecting my REST service, which I will use on the client side, from others to use

Let's assume that I have created my REST service smoothly and I am returning json results.
I also implemented API keys so my users can communicate with my service.
Then Company A started using my service and I gave them an API key.
Then they created an HttpHandler as a bridge (I am not sure what the term is here) in order not to expose the API key (I am also not sure this is the right way).
For example, let's assume that my service URL is as follows:
www.myservice.com/service?apikey={key_comes_here}
Company A is using this service from the client side like below:
www.companyA.com/services/service1.ashx
Then they start using it on the client side.
Company A has protected the API key here. That's fine.
But there is another problem: somebody else can still grab the www.companyA.com/services/service1.ashx URL and start using my service.
What is the way of preventing others from doing that?
For the record, I am using WCF Web API in order to create my REST services.
UPDATE:
Company A's HttpHandler (second link) only looks at the host header to see whether the request is coming from www.companyA.com or not, but that can easily be faked, I guess.
UPDATE 2:
Is there any known way of implementing a token for the URL? For example, let's say that www.companyA.com/services/service1.ashx will carry a querystring parameter representing a TOKEN so that the HttpHandler can check whether the request is the right one.
But there are many things here to think about I guess.
You could always require the client to authenticate, using HTTP Basic Auth or some custom scheme. If your client requires the user to log in, you can at least restrict the general public from obtaining the www.companyA.com/services/service1.ashx URL, since they will need to log in to find out about it.
It gets harder if you are also trying to protect the URL from unintended use by people who legitimately have access to the official client. You could try changing the service password at regular intervals, and updating the client along with it. That way a refresh of the client in-browser would pull the new password, but anyone who built custom code would be out of date. Of course, a really determined user could just write code to rip the password from the client JS programmatically when it changes, but you would at least protect against casual infringers.
With regard to the URL token idea you mentioned in update 2, it could work something like this. Imagine that every month the www.companyA.com/services/service1.ashx URL requires a new token to work, e.g. www.companyA.com/services/service1.ashx?token=January. Once it's February, 'January' will stop working. The server will have to know to accept only the current month, and the client will have to know to send a token (determined at the time the client web page loads from the server in the browser).
(All pseudo-code since I don't know C# and which JS framework you will use)
Server-side code:
if (request.urlVars.token == Date.now.month) then
render "This is the real data: [2,5,3,5,3]"
else
render "401 Unauthorized"
Client code (dynamic version served by your service)
www.companyA.com/client/myajaxcode.js.asp
var dataUrl = 'www.companyA.com/services/service1.ashx?token=' + <%= Date.now.month %>
// below is JS code that does ajax call using dataUrl
...
So now we have service code that will only accept the current month as a token, and client code that, when you refresh in the browser, gets the latest token (set dynamically as the current month). Since this scheme is really predictable and could be hacked, the remaining step is to hash the token with a salt so no one can guess what it is going to be.
if (request.urlVars.token == mySaltedHashMethod(Date.now.month)) then
and
var dataUrl = 'www.companyA.com/services/service1.ashx?token=' + <%= mySaltedHashMethod(Date.now.month) %>
Which would leave you with a URL like www.companyA.com/services/service1.ashx?token=gy4dc8dgf3f and would change tokens every month.
You would probably want tokens to expire faster than every month as well, which you could do by using the epoch hour instead of the month.
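A minimal Node.js sketch of what mySaltedHashMethod could look like (SECRET_SALT is a placeholder; the epoch-hour variant is shown):

// Hypothetical rendering of mySaltedHashMethod; SECRET_SALT must stay server-side.
var crypto = require('crypto');
var SECRET_SALT = 'replace-with-a-real-secret';

function mySaltedHashMethod(period) {
  return crypto.createHash('sha256')
    .update(SECRET_SALT + String(period))
    .digest('hex')
    .slice(0, 12); // short token for the query string
}

// Epoch hour instead of month, so tokens rotate hourly:
var token = mySaltedHashMethod(Math.floor(Date.now() / 3600000));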
I'd be interested to see if someone out there has solved this with some kind of encrypted client code!
What you're describing is generally referred to as a "proxy" -- companyA's public page is available to anyone, and behind the scenes, it makes the right calls to your system. It's not uncommon for applications to use proxies to get around security -- for example, the same-origin policy means that your javascript can't make Ajax calls to, say, Amazon -- but if you proxy it on your own system, you can get around this.
I can't really think of a technical way to prevent this; once they've pulled data from your service, they can use that data however they want. You have legal options, of course; you can make it a term of service that proxying isn't allowed, and pull their API key if they don't comply. But most likely, if you haven't already included that in the TOS, you'd have to wait for, say, a renewal of their subscription to your service.
Presumably if they're making server-side HTTP requests to your service, those requests are all coming from the same IP address, so you could block that address. You'd probably want to tell them first, and they could certainly get around that if they wanted to.
With the second link exposed by Company A, I don't think you can do much. As I understand it, you can only check whether an incoming request comes from Company A or not.
But individual requests issued to www.companyA.com/.. can't be distinguished from Company A's own requests: everyone they let in effectively uses Company A as a disguise.

request parameters ordering undefined [in multipart/form-data or in general] - what to do?

I am writing a web application that submits a form (encoded as multipart/form-data, since one of its fields is a file, so obviously POST must be used and not GET, because the files might be really big). One of the fields is a kind of transaction/upload_id and the other is obviously the file content. While uploading, a progress bar must be displayed.
It is a known fact that the order of parameters is undefined in general, meaning that either one (the file content or the upload_id) might come first.
Is there any acceptable / recommended way to cause the browser to send the upload_id before sending the file content?
Is it considered a correct implementation to expect the upload_id to come first, or is there a better / more common / more correct way to handle the problem? If so, it would be fantastic to hear some details.
Update: my server-side language is Java/Servlets 3.0
Well, the better answer (without utilizing filters) would be to publish the upload_id(s) as part of the URL (after the '?'), even when issuing a POST request. In that case they will always be processed ahead of the files' contents.
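A minimal browser-side sketch of that approach (the /upload URL and the fileInput and uploadId variables are hypothetical):

// Send upload_id in the query string so the server sees it before
// parsing the (potentially huge) multipart body.
var form = new FormData();
form.append('file', fileInput.files[0]); // the file content stays in the body
var xhr = new XMLHttpRequest();
xhr.open('POST', '/upload?upload_id=' + encodeURIComponent(uploadId));
xhr.send(form);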
I'm using servlets as well, and in my case I wanted to run my CSRF filter in my servlet before I started streaming the file: if the filter fails, I can kill the request before I've uploaded my 20 GB video file, as opposed to the default PHP implementation, where the server only hits your script AFTER it has parsed the entire request.
It's been a bit of a hack on my part, but in the couple of cases where I've had to do this I've cheated and put the non-file request parameters into the URL, and in every case (using pretty much every browser I've tested with) an iterator over the request parameters on the server (I'm using Commons FileUpload in streaming mode) has received the non-file request parameters before the file data. Somewhat fragile, but not unworkable.
I'm assuming that if you order your request parameters with the file <input> as the last item you'll get the same behavior.
You shouldn't have to worry about the order in which the parameters are sent. If you do, then your server-side code is very brittle.
A multipart request will contain the field name of every form field that is passed in. Use the name to reference each field, regardless of the order it was sent in.
If you are parsing the POST body by hand, I suggest you look at existing projects like Apache Commons FileUpload, which abstract that away.
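The same by-name principle in a hypothetical Node/Express + multer sketch (not the Java stack the question uses; multer parses the whole body before the handler runs, so order never matters):

// Assumed setup: Express with the multer middleware for multipart parsing.
var express = require('express');
var multer = require('multer');
var app = express();
var upload = multer({ dest: 'uploads/' }); // store incoming files on disk

app.post('/upload', upload.single('file'), function (req, res) {
  var uploadId = req.body.upload_id; // looked up by name, not by position
  res.send('Received upload ' + uploadId + ' (' + req.file.size + ' bytes)');
});

app.listen(3000);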

How to prevent direct access to my JSON service?

I have a JSON web service to return home markers to be displayed on my Google Map.
Essentially, http://example.com calls the web service to find out the location of all map markers to display like so:
http://example.com/json/?zipcode=12345
And it returns a JSON string such as:
{"address": "321 Main St, Mountain View, CA, USA", ...}
So on my index.html page, I take that JSON string and place the map markers.
However, what I don't want to happen is people calling my JSON web service directly.
I only want http://example.com/index.html to be able to call my http://example.com/json/ web service ... and not some random dude calling /json/ directly.
Question: how do I prevent direct calling/access to my http://example.com/json/ web service?
UPDATE:
To give more clarity: http://example.com/index.html calls http://example.com/json/?zipcode=12345 ... and the JSON service
- returns semi-sensitive data,
- returns a JSON array,
- responds to GET requests,
- the browser making the request has JavaScript enabled
Again, what I don't want to happen is people simply looking at my index.html source code and then calling the JSON service directly.
There are a few good ways to authenticate clients.
By IP address. In Apache, use the Allow / Deny directives.
By HTTP auth: basic or digest. This is nice and standardized, and uses usernames/passwords to authenticate.
By cookie. You'll have to come up with the cookie.
By a custom HTTP header that you invent.
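For instance, a minimal sketch of the custom-header option (the header name and secret are made up):

// The client sends a header the server checks before answering.
fetch('/json/?zipcode=12345', {
  headers: { 'X-My-Auth': 'some-shared-secret' }
});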
Edit:
I didn't catch at first that your web service is being called by client-side code. It is literally NOT POSSIBLE to prevent people from calling your web service directly, if you let client-side Javascript do it. Someone could just read the source code.
Some more specific answers here, but I'd like to make the following general point:
Anything done over AJAX is being loaded by the user's browser. You could make a hacker's life hard if you wanted to, but, ultimately, there is no way of stopping me from getting data that you already freely make available to me. Any service that is publicly available is publicly available, plain and simple.
If you are using Apache, you can set allow/deny on locations.
http://www.apachesecurity.net/
or here is a link to the apache docs on the Deny directive
http://httpd.apache.org/docs/2.0/mod/mod_access.html#deny
EDITS (responding to the new info).
The Deny directive also works with environment variables. You can restrict access based on the browser string (not really secure, but it discourages casual browsing), which would still allow XHR calls.
I would suggest that the best way to accomplish this is to have a token of some kind that validates that the request is a 'good' request. You can do that with a cookie, a session store of some kind, or a parameter (or some combination).
What I would suggest for something like this is to generate a unique URL for the service that expires after a short period of time. You could do something like this pretty easily with Memcache. This strategy could also be used to obfuscate the service URL (which would not provide any actual security, but would raise the bar for someone wanting to make direct calls).
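A rough sketch of the expiring-URL idea (an in-memory Map stands in for Memcache here; the names are made up):

var crypto = require('crypto');
var tokens = new Map(); // token -> expiry timestamp (use Memcache in practice)

function issueServiceUrl() {
  var token = crypto.randomBytes(8).toString('hex');
  tokens.set(token, Date.now() + 60 * 1000); // valid for one minute
  return '/json/' + token + '/?zipcode=12345';
}

function isValidToken(token) {
  var expiry = tokens.get(token);
  if (!expiry || expiry < Date.now()) {
    tokens.delete(token);
    return false;
  }
  return true;
}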
Lastly, you could also use public-key crypto to do this, but that would be very heavy. You would need to generate a new public/private key pair for each request and return the public key to the JS client (here is a link to an implementation in JavaScript: http://www.cs.pitt.edu/~kirk/cs1501/notes/rsademo/).
You can add a random number as a flag to determine whether the request is coming from the page you just served:
1) When generating index.html, add a random number to the JSON request URL:
Old: http://example.com/json/?zipcode=12345
New: http://example.com/json/?zipcode=12345&f=234234234234234234
Add this number to the session context as well.
2) The client browser renders index.html and requests the JSON data via the new URL.
3) Your server gets the JSON request and checks the flag number against the session context. If it matches, respond with the data. Otherwise, return an error message.
4) Clear the session context at the end of the response, or when a timeout is triggered.
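A rough Express-style sketch of steps 1-4 (express-session middleware and the route names are assumptions, not part of the original answer):

var crypto = require('crypto');

app.get('/index.html', function (req, res) {
  // 1) Generate the flag and remember it in the session context.
  req.session.flag = crypto.randomBytes(16).toString('hex');
  res.render('index', { flag: req.session.flag }); // template appends &f=<flag> to the JSON URL
});

app.get('/json/', function (req, res) {
  // 3) Check the flag against the session context.
  if (req.query.f && req.query.f === req.session.flag) {
    delete req.session.flag; // 4) clear it so the flag is single-use
    res.json([{ address: '321 Main St, Mountain View, CA, USA' }]);
  } else {
    res.status(403).json({ error: 'Invalid or missing flag' });
  }
});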
Accept only POST requests to the JSON-yielding URL. That won't prevent determined people from getting to it, but it will prevent casual browsing.
I know this is old, but for anyone getting here later, this is the easiest way to do this: you need to protect the AJAX subpage with a password that you set on the container page before calling the include.
The easiest way to do this is to require HTTPS on the AJAX call and pass a POST variable. HTTPS + POST ensures the password is always encrypted.
So on the AJAX/sub-page do something like
if ($_POST["access"] == "makeupapassword")
{
    // ... serve the protected JSON here
}
else
{
    echo "You can't access this directly";
}
When you make the AJAX call, be sure to include the access POST variable and password in your payload. Since it is sent in the POST body over HTTPS it will be encrypted, and since it is random (hopefully) nobody will be able to guess it.
If you want to include or require the PHP directly on another page, just set the POST variable to the password before including it.
$_POST["access"] = "makeupapassword";
require("path/to/the/ajax/file.php");
This is a lot better than maintaining a global variable, session variable, or cookie, because some of those are persistent across page loads, so you have to make sure to reset the state after checking so users can't gain accidental access.
Also, I think it is better than page headers because it can't be sniffed, since it is secured by HTTPS.
You'll probably have to have some kind of cookie-based authentication. In addition, Ignacio has a good point about using POST. This can help prevent JSON hijacking if you have untrusted scripts running on your domain. However, I don't think using POST is strictly necessary unless the outermost JSON type is an array. In your example it is an object.
