I have been working with AJAX for a few months now, and I see an AJAX request as follows:
Pass parameters to a background page (PHP/ASP/HTML/TXT/XML ... what else can it be?)
Do some processing on the server
Get the results back and show them to the client (HTML/XML/JSON ... what else can it be?)
If there is anything else to add to the request lifecycle, I would be glad to know.
Now I have some questions about AJAX, and I will try to frame them one by one.
How many concurrent AJAX requests can be made?
There is a timeout period for an AJAX request, but considering Web 2.0 scenarios and varying network conditions, what should the timeout period be? What is best practice?
Consider the scenario where the user invokes an AJAX request, it is still being processed on the server, and meanwhile the user leaves the page. Will the processing on the server be abandoned halfway? Or will all the execution on the server finish and the response be sent back to the browser? What happens?
Is it a strict requirement to have a server page (PHP/JSP/ASP) to handle the AJAX request? With this approach, given how widely AJAX is used nowadays, the server needs a page per request (or a few pages serving more than one request), which becomes difficult to maintain.
Can we have something other than a server-side page (PHP/ASP etc.), such as a web service or something else that can be requested directly from AJAX (JavaScript) via a URL? If so, how? This could reduce the need for additional server-side pages.
AJAX requests also support authentication. In what scenarios is this used? Is it mandatory?
Comet is something I have heard a lot about. My understanding is that it is just a pattern in which AJAX is used to get updated data via a polling mechanism. Is that right? Please share your views/insight.
What are the security risks of using AJAX? How can they be mitigated (encryption/decryption or something else)?
Thanks, all.
It depends on the browser. AJAX follows the same rules as concurrent HTTP requests everywhere else in the browser.
Ditto.
Pretty much the same as the user hitting the Stop button on a regular page.
An HTTP request must request a URI. How you handle that on the backend is up to you. The term "page" doesn't really fit: a page is an HTML document with associated resources (stylesheets, images, etc). Most systems don't have a 1:1 mapping between server-side programs and resources. In an MVC pattern, for example, it isn't uncommon to have a shared model and a controller that just switches between views to determine whether to return an HTML document or the same data expressed in JSON.
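As a hedged illustration of that controller idea (assuming a Node/Express backend, which the answer does not specify; the /users/:id route and loadUser() are made-up names), one URI can serve either representation from the same model:

```javascript
// Hypothetical Express controller: one URI, two representations of the same data.
const express = require('express');
const app = express();

function loadUser(id) {
  // Shared "model" step: in a real app this would hit a database.
  return { id, name: 'Example User' };
}

app.get('/users/:id', (req, res) => {
  const user = loadUser(req.params.id);
  res.format({
    // Browser navigation gets an HTML document...
    'text/html': () => res.send(`<h1>${user.name}</h1>`),
    // ...while an AJAX call asking for JSON gets the same data as JSON.
    'application/json': () => res.json(user),
  });
});

app.listen(3000);
```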
A web service is just a server-side program that responds in a particular way; lots of people write them using PHP, JSP or ASP, so the question doesn't really make sense.
No, it isn't mandatory. You use authentication when you need authentication. There is no special "AJAX authentication"; it usually just uses the same cookies that are used everywhere else on the site.
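To make the cookie point concrete: a same-origin XMLHttpRequest sends the site's session cookies automatically, and for cross-origin calls you opt in explicitly. A minimal sketch (the /api/profile URL is only an example):

```javascript
// Same-origin AJAX: the browser attaches the session cookie by itself,
// so the user's normal login session authenticates the request.
const xhr = new XMLHttpRequest();
xhr.open('GET', '/api/profile');   // hypothetical endpoint
xhr.onload = () => console.log(xhr.responseText);
xhr.send();

// Cross-origin AJAX only sends cookies if you ask for it
// (and the server allows it via CORS headers).
const xhr2 = new XMLHttpRequest();
xhr2.open('GET', 'https://other.example.com/api/profile');
xhr2.withCredentials = true;
xhr2.send();
```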
No, the point of Comet is to avoid polling. See http://en.wikipedia.org/wiki/Comet_%28programming%29
Requests containing data are sent to the server. Responses containing data are returned from the server. The security implications are no different to any other HTTP request you handle.
To use a web service, you just request its URI.
So let's say I have a typical REST server that serves some data in a very specific manner, like: GET accounts, GET prices, GET inventory, GET settings, GET user_history, etc...
A single view, let's say, needs to fetch N different specific resources like this. What's the best technique/library/pattern for combining N HTTP requests into one without too much hassle?
Maintaining the "REST" idea would require writing new server code for every view because no two views would need the same set of resources. Doing this would become unnecessarily cumbersome in my opinion. I guess the only way that makes sense is to roll your own DSL that presents your data requirements to the server.
What's the easiest alternative to writing new response code for every possible combination of a given view's resource requirements?
You say this is a REST service and all you need to do is get information, so why not issue a JSONP request?
Issue a JSONP request for every GET that you need, instead of writing new response code for each and every one. It will save you a lot of code and improve performance.
In conclusion, I would send JSONP requests to the server (given that it's an external server, of course) in order to get all the data I need from my AJAX calls.
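For illustration, here is a minimal hand-rolled JSONP call; the URL, the showPrices callback name, and the callback query parameter are assumptions, since the actual API is not described in the question:

```javascript
// Minimal JSONP sketch: the server is expected to wrap its JSON response
// in a call to the function named in the "callback" query parameter,
// e.g. showPrices({"prices": [...]}).
function showPrices(data) {          // hypothetical callback name
  console.log('prices received', data);
}

const script = document.createElement('script');
script.src = 'https://api.example.com/prices?callback=showPrices'; // example URL
document.head.appendChild(script);   // the browser fetches and executes the response
```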
Related: issuing a JSONP request to the same domain using .NET
I want to create a web application that displays data from a public API. I will use d3 (a JavaScript data-visualization library). I want to retrieve data from the API every ten minutes and update my page (say it is traffic, or something). I have not built many web applications; how do I get the updates?
Should the JS on the client side use a timer to request updates from the server side of my application (perhaps the application is written in Rails or Node.js)? The server then makes the API call and sends a response asynchronously? Is this called a socket? I have read that HTML5 provides sockets.
Or, perhaps an AJAX request?
Or does the server side of my application create a timer, make the API call, and then "push" updates to the view? This seems wrong to me; there could be other views in this application, and the server shouldn't have to keep track of which view is active.
Is there a standard pattern for this type of web application? Any examples or tutorials greatly appreciated.
An AJAX request (XMLHttpRequest) is probably the way to go.
I have a very simple example of an XMLHttpRequest (with Java as the backend) here: https://stackoverflow.com/a/18028943/1468130
You could recreate a backend to receive HTTP GET requests in any other server-side language. Just echo back whatever data you retrieved, and xmlhttp.onload() will catch it.
Depending on how complex your data is, you may want to find a JSON library for your server-side language of choice, and serialize your data to JSON before echoing it back to your JS. Then you can use JavaScript's JSON.parse() method to convert your server data to an object that can easily be used by the client script.
If you are using jQuery, it handles AJAX very smoothly, and using $.ajax() would probably be easier than plain-old XMLHttpRequest.
http://api.jquery.com/jQuery.ajax/
(There are examples throughout that page, mostly concentrated at the bottom.)
It really annoys me how complicated so many of the AJAX tutorials are. At least with jQuery, it's pretty easy.
Basically, you just need to ask a script for something (initiate the request, send URL parameters), and wait for the script to give you something back (trigger your onload() or jqxhr.done() functions, supplying those functions with a data parameter).
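As a small sketch of the jQuery approach (the /traffic URL and updateChart() are assumed example names, not from the question):

```javascript
// Hypothetical endpoint returning JSON; jQuery parses it for you
// when dataType is 'json'.
$.ajax({
  url: '/traffic',                 // example URL
  dataType: 'json',
  data: { region: 'downtown' }     // sent as URL parameters on a GET
}).done(function (data) {
  updateChart(data);               // placeholder: feed the parsed object to your d3 code
}).fail(function (jqXHR, textStatus) {
  console.error('request failed:', textStatus);
});
```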
For your other questions:
Use JavaScript's setTimeout() or setInterval() to initiate an AJAX request every 600,000 milliseconds (ten minutes). In the request's onload callback, handle your data and update the page appropriately (see the sketch after this list).
The response will be asynchronous.
This isn't a socket.
"Pushing" probably isn't the way to go in this case.
If I understand correctly and this API is external, then your problem can be divided into two separate sub-problems:
1) Updating data at the server. The server should download data once per N minutes, so it should not be tied to customers' AJAX calls. If two customers come to the website at the same time, your server would make two API calls, which is not correct.
Actually, you should create a cron job on the server that calls the API and stores its result on the server. In this case your server will always make one call at a time and will have reasonably fresh information cached.
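As a hedged sketch of what such a cron-driven job might do (assuming a Node script run by cron; the API URL and cache path are made-up names):

```javascript
// fetch-and-cache.js: run by cron, e.g. every 10 minutes:
//   */10 * * * * node /path/to/fetch-and-cache.js
// Downloads the external API once and caches the result for the web app to serve.
const https = require('https');
const fs = require('fs');

https.get('https://api.example.com/traffic', (res) => {   // hypothetical API URL
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    // The web server later reads this file instead of hitting the API per visitor.
    fs.writeFileSync('/var/cache/myapp/traffic.json', body);
  });
}).on('error', (err) => console.error('API call failed:', err));
```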
2) Updating data at the clients. If the data in customers' browsers should be updated without refreshing the page, then you should use some sort of AJAX. It can make a request to your server once per X minutes to get fresh data, or use so-called long polling.
I think the most effective way to implement a real-time web application is to use WebSockets to push changes from the server rather than polling from the client side. This way users can see changes instantaneously once the server notifies them that new data is available. You can read more in the similar post.
I have tried using the Node.js package socket.io to make a real-time virtual classroom. It works quite well for me.
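A minimal socket.io sketch of that push model (assuming socket.io is installed; the 'update' event name and the data source are illustrative only):

```javascript
// server.js: push new data to every connected browser when it changes.
const http = require('http').createServer();
const io = require('socket.io')(http);   // older socket.io style; newer versions use new Server(http)

io.on('connection', (socket) => {
  console.log('client connected');
});

// Wherever the server learns of new data (e.g. the cron/cache job above),
// broadcast it instead of waiting for clients to poll.
function publishUpdate(data) {
  io.emit('update', data);   // 'update' is an arbitrary event name
}

http.listen(3000);

// --- client side ---
// <script src="/socket.io/socket.io.js"></script>
// const socket = io();
// socket.on('update', (data) => renderChart(data));  // renderChart is a placeholder
```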
I fear I may be trying to do something that certain security policies are specifically designed to forbid.
So there's a certain site with a certain AJAX-based chat application. It periodically polls the server and receives HTML fragments in return. I am looking to write an alternate mobile frontend that directly queries the existing backend using JS (i.e. does not use my server as a reflector).
Two main issues here that make this different from most such questions:
The server owner wouldn't mind me doing this, but he's not going to go out of his way to help me, so the format for talking with the server is not something I can change. That is, the server doesn't talk JSON, let alone JSONP. It's HTML fragments, but for my purposes that's essentially text.
I need to have the return value available to parse manually. It should not be automatically parsed/inserted/what-have-you through inclusion in the DOM or some other such mechanism.
If anyone has some advice on this matter, I would really appreciate it.
You could use a server-side script to proxy it through your server.
You could use YQL as the middle man and use JSONP or CORS.
Tell the person on the other server to set up CORS for your server (tell them to add a header to each response, e.g. Access-Control-Allow-Origin: example.com).
You could create a PHP proxy:
AJAX sends the URL to fetch to a local PHP script (or other server-side script).
PHP uses cURL to fetch that page and returns the result.
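The answer describes a PHP/cURL proxy; the same idea sketched in Node instead (a substitution on my part, purely illustrative, with made-up URLs and paths):

```javascript
// proxy.js: a same-origin endpoint that fetches the chat backend's HTML
// fragment and hands the raw text back to your own page's AJAX call.
const http = require('http');
const https = require('https');

const REMOTE = 'https://chat.example.com/poll';   // hypothetical backend URL

http.createServer((req, res) => {
  if (req.url === '/proxy') {
    https.get(REMOTE, (remote) => {
      let body = '';
      remote.on('data', (chunk) => { body += chunk; });
      remote.on('end', () => {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end(body);   // your front end parses this text however it likes
      });
    }).on('error', () => { res.writeHead(502); res.end(); });
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);
```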
I want to have a dynamic web page that automatically updates some information; this information should be received from my C/C++ application over HTTP. I have set up a socket and can send HTML and JavaScript files to the browser.
I don't know how to move on. How do I encapsulate my data in XMLHttpRequest objects? Or maybe this isn't the way to go? The problem is that my C/C++ application will run on an embedded system that can't really support PHP or anything like that.
I can't really understand how XMLHttpRequest works; I only find a lot of client examples on the web and not much about how a server should handle it.
A server should handle it like any other request. From the server's point of view, it's a normal HTTP request. Return the data that the client asks for! This is usually an HTML fragment, some XML, or some JSON.
AJAX just sends a normal HTTP GET/POST/... request; you should make sure your response headers are correct, such as Content-Type.
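As a sketch of both sides under those assumptions (the /data path is made up; the raw response shown in the comment is the kind of thing the C/C++ program would write back on the socket):

```javascript
// Client side: an ordinary XMLHttpRequest; the embedded server just has to
// answer it like any other HTTP request.
const xhr = new XMLHttpRequest();
xhr.open('GET', '/data');                     // hypothetical path served by the C/C++ app
xhr.onload = function () {
  const data = JSON.parse(xhr.responseText);  // assumes the app returns JSON
  document.getElementById('value').textContent = data.temperature;
};
xhr.send();

// The C/C++ application would write something like this back on the socket:
//   HTTP/1.1 200 OK
//   Content-Type: application/json
//   Content-Length: 21
//
//   {"temperature": 21.5}
```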
How do you send information to the browser? The browser is the client side: to get information, it has to query the server (which you say is written in C++). If you want your client to receive requests instead, you should probably emulate server-like behavior, for example using Node.js.
We have a heavily AJAX-dependent application. What are good ways of making sure that requests to server-side scripts are not coming from standalone programs, but from an actual user sitting at a browser?
There aren't any really.
Any request sent through a browser can be faked by standalone programs.
At the end of the day, does it really matter? If you're worried, then make sure requests are authenticated and authorised and your authentication process is good (remember AJAX sends browser cookies, so your "normal" authentication will work just fine). Just remember that, of course, standalone programs can authenticate too.
"What are good ways of making sure that requests to server-side scripts are not coming from standalone programs, but from an actual user sitting at a browser?"
There are no ways. A browser is indistinguishable from a standalone program; a browser can be automated.
You can't trust any input from the client side. If you are relying on client-side co-operation for any security purpose, you're doomed.
There isn't a way to automatically block "non-browser" requests from hitting your server-side scripts, but there are ways to identify which requests have been triggered by your application and which haven't.
This is usually done using something called "crumbs". The basic idea is that the page making the AJAX request should generate (server-side) a unique token (typically a hash of a Unix timestamp + salt + secret). This token and timestamp should be passed as parameters to the AJAX request. The AJAX handler script will first check this token (and the validity of the Unix timestamp, e.g. whether it falls within 5 minutes of the token timestamp). If the token checks out, you can then proceed to fulfill the request. Usually, this token generation and checking can be coded up as an Apache module so that it is triggered automatically and is separate from the application logic.
Fraudulent scripts won't be able to generate valid tokens (unless they obtain your secret), so you can safely ignore them.
Keep in mind that storing a token in the session is another option, but that won't buy you any more security than your site's authentication system already provides.
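A sketch of that crumb scheme (using Node's crypto module here, though the answer describes it language-agnostically; the salt/secret values, helper names, and 5-minute window are placeholders):

```javascript
// Crumb generation and verification, as described above:
// token = hash(timestamp + salt + secret), accepted within a time window.
const crypto = require('crypto');

const SALT = 'per-deployment-salt';    // placeholder
const SECRET = 'server-side-secret';   // placeholder; keep this off the client
const WINDOW_MS = 5 * 60 * 1000;       // accept tokens up to 5 minutes old

function makeCrumb(timestamp) {
  return crypto.createHash('sha256')
               .update(timestamp + SALT + SECRET)
               .digest('hex');
}

// Embed { ts, crumb } in the page when it is rendered; send both with each AJAX call.
function issueCrumb() {
  const ts = Date.now().toString();
  return { ts, crumb: makeCrumb(ts) };
}

// In the AJAX handler: recompute and compare before doing any work.
function checkCrumb(ts, crumb) {
  const fresh = Date.now() - Number(ts) < WINDOW_MS;
  return fresh && crumb === makeCrumb(ts);
}
```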
I'm not sure what you are worried about. From where I sit I can see three things your question can be related to:
First, you may want to prevent unauthorized users from making a valid request. This is resolved by using a browser cookie to store a session ID. The session ID needs to be tied to the user, be regenerated every time the user goes through the login process, and have an inactivity timeout. Any request coming in without a valid session ID you simply reject.
Second, you may want to prevent a third party from mounting replay attacks against your site (i.e. sniffing an innocent user's traffic and then sending the same calls again). The easy solution is to go over HTTPS. The SSL/TLS layer will prevent somebody from replaying any part of the traffic. This comes at a cost on the server side, so you want to make sure that you really cannot take that risk.
Third, you may want to prevent somebody from using your API (that's what AJAX calls are, in the end) to implement their own client to your site. For this there is very little you can do. You can always look for the appropriate User-Agent, but that's easy to fake and is probably the first thing somebody trying to use your API will think of. You can also implement some statistics, for example looking at the average AJAX requests per minute on a per-user basis and seeing whether some users are way above your average. That's hard to implement, and it's only useful if you are trying to prevent automated clients reacting faster than a human can.
Is Safari a web browser for you?
If it is, the same engine appears in many applications, for example those using the Qt QWebKit libraries. So I would say there is no way to recognize it.
A user can forge any request they want, faking headers like User-Agent however they like...
One question: why would you want to do what you're asking for? What difference does it make to you whether they request from a browser or from anything else?
I can't think of one reason you'd call this "security".
If you still want to do this, for whatever reason, think about making your own application with an embedded browser. It could somehow authenticate itself on every request; then you'd only send valid responses to your application's browser.
Users would still be able to reverse-engineer the application, though.
Interesting question.
What about browsers embedded in applications? Would you mind those?
You can probably think of a way of "proving" that a request comes from a browser, but it will ultimately be heuristic. The line between browser and application is blurry (e.g. embedded browser) and you'd always run the risk of rejecting users from unexpected browsers (or unexpected versions thereof).
As has been mentioned before, there is no way of accomplishing this... But there is one thing to note that is useful for preventing CSRF attacks targeting the AJAX functionality specifically: setting a custom header with the help of the AJAX object, and verifying that header on the server side.
And if, in the value of that header, you set a random (one-time-use) token, you can prevent automated attacks.
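A minimal sketch of that custom-header idea on the client (X-CSRF-Token is an assumed header name, and the token is assumed to have been embedded in the page by the server):

```javascript
// The server embeds a one-time token in the page, e.g. in a meta tag:
// <meta name="csrf-token" content="...">
const token = document.querySelector('meta[name="csrf-token"]').content;

const xhr = new XMLHttpRequest();
xhr.open('POST', '/api/action');                 // hypothetical endpoint
xhr.setRequestHeader('X-CSRF-Token', token);     // custom header checked server-side
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.send(JSON.stringify({ action: 'example' }));

// Server side: reject any request whose X-CSRF-Token header is missing,
// already used, or does not match the token issued with the page.
```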