I want to implement AJAX like Facebook, so my sites can be really fast too. I've spent weeks researching this, and I already know about BigPipe (which is not AJAX).
So the only thing left was how they pull in other requests, like navigating to a page or profile. I opened up Firebug and checked what I get when I click on different profiles. But the problem is that Firebug doesn't record any such request, yet the page still gets loaded with AJAX and the HTML changes; Firebug does show the change in the HTML.
So I'm wondering: are they using an iframe to keep Firebug from seeing the request, or what? I want to know how much data they pull on each request. Is it the complete page or only a part of the page? The page layout changes as well, depending on what kind of page it is (for example: groups, pages, profiles, ...).
I would be really grateful if a pro could give some feedback on this, because I haven't been able to find it anywhere for weeks.
The reason they use an iframe is usually security. Iframes are like new tabs: there is no communication between your page and the framed Facebook page. The iframe has its own cookies and session, so you really need to think of it as another window rather than as part of your own page (except for the obvious fact that its output is shown within your page).
That said, the developer tools in Chrome do show you the communication to and from the iframe.
When I click on a user's profile on Facebook, in Firebug I can clearly see the request for data happening and the div's content changing.
So, what is the question about?
After a click on a user profile, Facebook makes the following GET request:
http://www.facebook.com/ajax/hovercard/user.php?id=100000655044XXX&__a=1
This request's response is complex JS data containing all the information necessary to build the new page. There is an array of the profile's friends (with names, avatar thumbnail links, etc.) and an array of the profile's latest entries (again with thumbnail URLs, annotations, etc.).
There is no magic, nothing like code hiding or obfuscation. =)
Looking at Facebook through Google Chrome's inspector: they use AJAX to request files that give back JavaScript, which is then used to make any changes to the page.
I don't know why (or whether) Facebook uses IFRAMEs to asynchronously load data, but I guess there is no special reason behind it. We used IFRAMEs too but have since switched to XMLHttpRequest for our projects because it's more flexible. Perhaps the IFRAME method works better in (much) older browsers, but even IE6 supports XMLHttpRequest fine.
Anyway, I'm certain there is no performance advantage to using IFRAMEs. If you need fast asynchronous data loading to dynamically update your page, go with XMLHttpRequest, since any modern browser supports it and it's as fast as HTTP can be.
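For illustration, a minimal XMLHttpRequest sketch; the URL and element id here are placeholders, not anything Facebook actually uses:
// Fetch a fragment asynchronously and inject it into the page.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/profile-data?id=123', true); // hypothetical endpoint
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        document.getElementById('content').innerHTML = xhr.responseText;
    }
};
xhr.send();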
If you know about BigPipe, then you will be able to understand this.
As you have read about BigPipe, its responses look like this:
<script type="text/javascript"> bigpipe.onPageArrive({ 'css' : '', '__html' : ' ' }); </script>
So if they used plain AJAX, they would not be able to use BigPipe: even if the server flushes its buffer, there is no effect on the client, because the AJAX oncomplete callback only fires once the complete response has been received and the connection closed. In other words, they would be unable to use one of their best page-speed techniques there.
But what if they use an iframe for the "AJAX"? That makes the point: they can use BigPipe inside the iframe, and the server will send data like this:
<script type="text/javascript"> parent.bigpipe.onPageArrive({ 'some' : 'some' }); </script>
So the server can flush its buffer, and as soon as the buffer is flushed, the browser gets that chunk. That is not possible in the plain AJAX case.
Important:
They use the iframe only when the page URL changes, i.e. when a new page that contains the pagelets needs to be downloaded. For other requests, like popup boxes or notifications, they simply send AJAX requests.
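To make the idea concrete, here is a rough sketch of such an iframe transport; the function bodies and the URL are illustrative guesses, not Facebook's actual code:
// Parent page defines the callback that streamed <script> chunks will invoke.
var bigpipe = {
    onPageArrive: function (pagelet) {
        // Append each flushed chunk to the page as soon as it arrives.
        document.getElementById('content').innerHTML += pagelet.__html || '';
    }
};

// Navigate by loading the next page into a hidden iframe instead of via XHR.
var frame = document.createElement('iframe');
frame.style.display = 'none';
frame.src = '/next-page?transport=iframe'; // hypothetical URL
document.body.appendChild(frame);

// The server can now flush chunks like the following, and the browser
// executes each one the moment it arrives:
// <script>parent.bigpipe.onPageArrive({ '__html' : '...' });</script>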
All this information is unofficial; it is simply what I found while researching this myself.
(I'm not a native English speaker, sorry for any spelling and grammar mistakes!)
When you click on a different profile, Facebook doesn't use AJAX to load the profile;
you simply open a new link, plain old HTML... but maybe I misunderstood you.
I am working on a pure HTML website; all pages are HTML with no relation to any server-side code.
Basically every request to the server is made using AJAX: I send data from forms, I process this data in handlers, then I return a JSON string that will be processed back on the client side.
Let's say the page is loaded with parameters in the URL, something like question.html?id=1. Earlier, I used to read this query string in the Page Load method, then read data from the database and so on...
Now, since these are pure HTML pages, I'm trying to think of an approach that will let me do the same. I have an idea, but it's 99% a bad idea.
The idea is to read the URL parameters using JS (after the page has loaded), then make an AJAX request, and then fetch the data and show it on the page. I know that instead of having one request to the server (Web Forms), we now have two requests: the first request to get the page, and the second is the AJAX request. And of course this introduces delays, since the page will initially load without the actual data that I need inside it.
Is my goal impossible, or is there a mature approach out there?
Is my goal impossible, or is there a mature approach out there?
Lately there are a good handful of JavaScript frameworks designed around this very concept ("single page app") of having a page load up without any data pre-loaded in it, and accessing all of the data over AJAX. Some examples of such frameworks are AngularJS, Backbone.js, Ember.js, and Knockout. So no, this is not at all impossible. I recommend learning about these frameworks and others to find one that seems right for the site you are making.
The idea is to read the URL parameters using JS (after the page has loaded), then make an AJAX request, and then fetch the data and show it on the page.
This sounds like a fine idea.
Here is an example of how you can use JavaScript to extract the query parameters from the current page's URL.
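A minimal sketch of that extraction (written here from scratch, so treat it as illustrative), assuming a URL like question.html?id=1:
// Parse the current page's query string into a simple object.
function getQueryParams() {
    var params = {};
    var query = window.location.search.substring(1); // drop the leading "?"
    var pairs = query ? query.split('&') : [];
    for (var i = 0; i < pairs.length; i++) {
        var parts = pairs[i].split('=');
        params[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
    }
    return params;
}

var id = getQueryParams().id; // "1" for question.html?id=1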
I know that instead of having one request to the server (Web Forms), we now have two requests: the first request to get the page, and the second is the AJAX request. And of course this introduces delays, since the page will initially load without the actual data that I need inside it.
Here is why you should not worry about this:
A user's browser will generally cache the HTML file and associated JavaScript files, so the second time they visit your site, the browser will send requests to check whether the files have been modified. If not, the server will send back a short message simply saying that they have not been modified and the files will not need to be transmitted again.
The AJAX response will only contain the data that the page needs and none of the markup. So retrieving a page generated on the server would involve more data transfer than an approach that combines a cacheable .html file and an AJAX request.
So the total load time should be less, even if you make two requests instead of one. If you are concerned that the user will see a page with no content while the AJAX data is loading, you can (a) leave the page completely blank while the data loads (as long as it's not too slow, this should not be a problem), or (b) throw up a splash screen to tell the user that the page is loading. Either way, users generally don't have a problem with a small amount of load time at the beginning if the page is speedy after that.
I think you are overthinking it. I'd bet that the two calls you are worried about, combined, will run in roughly the same amount of time as the single Web Forms Page_Load would if you coded it otherwise; the only difference now is that the initial page load is going to be really fast (because you are only loading a lightweight HTML/CSS/images page, with no slowdown from running any server code).
The common solution is then to have a 'spinner' of some sort (an animated GIF) that gives the user a visual indication that the page isn't done loading while your AJAX calls wait to complete.
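A minimal jQuery sketch of that pattern; the endpoint, spinner element, and rendering function are placeholders:
// Show a spinner while the AJAX call is in flight, hide it when done.
$('#spinner').show();
$.getJSON('/api/question', { id: 1 }, function (data) { // hypothetical endpoint
    renderQuestion(data); // your own rendering function
}).always(function () {
    $('#spinner').hide();
});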
Watch a typical page load from almost any major website in any language: you are going to see many, many requests making up that single page load, whether it be pulling CSS/images from a CDN, JS from a CDN, loading Google Analytics, or fetching advertisements from ad networks. Trying to get 100% of your page to load in a single call is not really a goal you should be worried about.
I don't think the two requests are a "bad idea" at all. In fact there is no other solution if you want to use only static HTML + AJAX (and that is the modern approach to web development, since it allows you to reuse the AJAX requests for other, non-HTML clients like Android or iOS native apps). Also, performance is very relative. If your client can cache the first static HTML, it will be much faster than the server-generated approach, even if two requests are needed. Just use a network profiler to convince yourself.
What you can do, if you don't want the user to notice any lag in the GUI, is to use a generic script that shows a popup hiding/blocking the full window (maybe with a "please wait") until the second, AJAX request is received and a "data-received" (or similar) event is triggered in the AJAX callback.
EDIT:
I think what you probably need is to convert your website into a webapp, using a manifest to list the "cacheable" static content. Then query your server only for dynamic (AJAX) data:
http://diveintohtml5.info/offline.html
(IE 10+ also supports webapp manifests.)
Modern browsers will read the manifest to know whether they need to reload the static content or not. Using a webapp manifest will also let you integrate your web site with the OS. For example, on Android it will be listed in the recent-tasks list (otherwise only your browser, not your app, is shown), and the user can add a shortcut to the desktop.
So, you have static HTMLs and use server-side code only in handlers? Why can't you have one ASP.NET page (generated on the server side) to load the initial data, with all other data processed using AJAX requests?
If it's possible to use any backend logic to determine what to load on the server side, that makes it easy to get the data.
Say, for example, you want to load the JSON a in the page cc.php by calling cc.php?json=a. You can have the PHP code put that JSON into the page itself and use it as an object in your HTML page.
If you are reading the query string on the client to determine what to load, you have to make two calls.
The primary thing you appear to want is what is known as a router.
Since you seem to want to keep things fairly bare metal, the traditional answer would be Backbone.js. If you want even faster and leaner, the optimised Backbone fork Exoskeleton might be just the ticket, but it doesn't have the following that Backbone proper has. Either is certainly better than cooking up your own thing.
There are some fine frameworks around, like Ember and Angular which have large user bases. I've been using Ember recently for a fairly complex application as it has a very sophisticated router, but based on my experiences I'm more aligned with the architecture available today in React/Flux (not just React but the architectural pattern of Flux).
React/Flux with one of the add-on router components will take you very far (Facebook/Instagram), and in my view it offers a superior architecture for web applications than traditional MVC; it is currently the fastest framework for updating the DOM and also allows isomorphic applications (running on both client and server). This represents the so-called "holy grail" of web apps, as it sends the initially rendered page from the server and avoids any delays due to framework loading; subsequent interactions then use AJAX.
Above all, check out some of the frameworks and find what works best for you. You may find some value in comparing framework implementations over at TodoMVC, but in my view the Todo app is far too simple and contrived to really show how the different frameworks shine.
My own evolution has been jQuery -> Backbone -> Backbone + Marionette -> Ember -> React/Flux so don't expect to get a good handle on what matters most to you until you have used a few frameworks in anger.
The main issue is from a UX / UI point of view.
Once you get your data from the server (via AJAX) after the page has been loaded, you'll get a "flickering" behavior once the data is injected into the page.
You can solve this by presenting the page only after the data has arrived, or by using a pre-loader of some kind to let the user know that the page is still getting its data, but then you'll have the performance issue you already mentioned.
The ideal solution in this case is to get the "basic" data that the page needs (in the first request to the server) and render it via the client, easing the "flickering" behavior; a sketch follows below.
It's a trade-off between performance and "flickering" / pre-loading indication.
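One hedged sketch of that approach, with illustrative names: the server embeds the basic data in the initial HTML, and the client renders it immediately with no extra round trip.
// Assume the server embedded the page's basic data in an inline script:
//   <script>window.initialData = { "title": "My question", "votes": 3 };</script>
// The client can then render immediately, before any AJAX completes:
document.getElementById('title').textContent = window.initialData.title;
document.getElementById('votes').textContent = window.initialData.votes;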
The most popular library for this kind of SPA (Single Page Application) is AngularJS.
If I understand your inquiry correctly, you might want to look more into:
1) window.location.hash
Instead of using the "?", you can make use of the "#" to manipulate your page based on query string.
Reference: How to change the querystring on the same page without postback
2) The hashchange event
This event fires whenever there is a change in the fragment/hash ("#") of the URL. You might also want to track the hash, so you can compare the previous hash value with the current one.
e.g.
var prevHash = location.hash; // for tracking the previous hash

$(window).on('hashchange', function () {
    // your manipulation for the query string goes here...
    prevHash = location.hash;
});
Reference: On - window.location.hash - Change?
3) For a client-side entry point, similar to the server-side PageLoad, you can make use of this:
e.g.
/* Appends a method - to be called after the page (from the server) has been loaded. */
function addLoadEvent(func) {
    var oldonload = window.onload;
    if (typeof window.onload != 'function') {
        window.onload = func;
    } else {
        window.onload = function () {
            if (oldonload) {
                oldonload();
            }
            func();
        };
    }
}

function YourPage_PageLoad() {
    // your code goes here...
}

// Client entry point
addLoadEvent(YourPage_PageLoad);
Since you're doing pure AJAX, the benefit of this technique is that you can easily handle the browser's back/forward button clicks and present the proper data/page to the user.
I would prefer AngularJS. It is a good technology and you can do pagination with a single HTML page, so I think this framework will be good for you since you're using static content.
AngularJS uses the MVC concept; please read the AngularJS tutorial. This framework will be worth it for your new project. Happy coding!
Say I have page1.html, and this is a queue list of people waiting to see me.
I have access to page2.html, which I use to see the list, and call specific people. When I click on a name on page2.html, page1.html is supposed to append a class to that person's name (which uses css3 animations to make it blink).
The example is lame, but you get what I'm trying to do here... I have read a little bit about XMLHttpRequest and the 'onreadystatechange' event, but I'm not sure how this works...
Ideas...
I suppose you could do it like this: have page2.html update a database when you click on a person's name, flagging that person in the database.
Now, on the other end, have page1.html continuously query the database in the background in a JavaScript loop; on receiving the information from the database you can update page1.html accordingly (see the sketch below).
I hope you get the idea. Do let me know if there are any loopholes.
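A minimal sketch of that loop, assuming a hypothetical check_flags.php endpoint that returns the flagged ids as JSON:
// page1.html: poll the server every few seconds for newly flagged people.
function checkForCalls() {
    $.getJSON('check_flags.php', function (flaggedIds) { // hypothetical endpoint
        $.each(flaggedIds, function (i, id) {
            $('#person-' + id).addClass('blink'); // triggers the CSS3 animation
        });
    });
}
setInterval(checkForCalls, 3000); // every 3 seconds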
This link might be of help: PHP long polling, without excessive database access
What you can do is use XMLHttpRequest to load data from your page1.html and display it on your page2.html accordingly. You would make the request on page2 and return the data from page1.
You can think of page2 as your frontend or client, where you interpret the information and display it (in your example, display the list and change the class on the person), while page1 is your backend, which provides the information for page2.
For further information about XMLHttpRequests or the concept that it's used for I suggest you have a look at the examples here: https://developer.mozilla.org/en/AJAX
As far as I understand, you want to update page1 when you click on a person's name on page2. I think there are several cases here:
page2 is a dialog window. When you click a name, page1 can be manipulated directly. No XMLHttpRequest is needed here.
page2 is a pop-up window. Almost the same. This article may help you with this one.
page2 is a standalone page, in a different tab/window. Here you need AJAX (XMLHttpRequest). When you click on a name, this action should be written down somewhere (server-side, in SQL or similar), and page1 should check for changes at some time interval.
I did get a fair number of responses, and while the majority of them required constant JavaScript pinging or somewhat roundabout approaches (such as Reverse AJAX / Comet, long polling, etc.), they each solved the problem.
Nonetheless, a little extra digging, and I found a nugget of tech that fits exactly:
HTML5's Server-Sent Events
(http://dev.w3.org/html5/eventsource/)
This allows the client to declare itself as a 'recipient' of communication from the server, effectively reversing the 'Request first, serve later' approach that is dominant in web technology. The server can initiate communication with the client, without an initial request, allowing me to push updates to clients as and when they become available (such as DB changes).
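For illustration, the client side can be as small as this; the /updates endpoint is hypothetical, and the server must respond with Content-Type: text/event-stream:
// page1.html: subscribe to server-sent events and react to pushed updates.
var source = new EventSource('/updates'); // hypothetical endpoint
source.onmessage = function (event) {
    var update = JSON.parse(event.data); // e.g. {"id": 7}
    document.getElementById('person-' + update.id).className += ' blink';
};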
On Facebook, when you add a link to your wall, it gets the title, pictures, and part of the text. I've seen this behavior on other websites where you can add links; how does it work? Does it have a name? Is there any JavaScript/jQuery extension that implements it?
And how is it possible that Facebook goes to another website and gets the HTML when it's supposedly forbidden to make cross-site AJAX calls?
Thanks.
Basic Methodology
When the fetch event is triggered (for example, on Facebook, pasting a URL in), you can use AJAX to request the URL*, then parse the returned data as you wish.
Parsing the data is the tricky bit, because so many websites have varying standards. Taking the text between the title tags is a good start, along with possibly searching for a META description (but these are being used less and less as search engines evolve into more sophisticated content based searches).
Failing that, you need some way of finding the most important text on the page and taking the first 100 chars or so as well as finding the most prominent picture on the page.
This is not a trivial task, it is extremely complicated trying to derive semantics from such a liquid and contrasting set of data (a generic returned web page). For example, you might find the biggest image on the page, that's a good start, but how do you know it's not a background image? How do you know that's the image that best describes that page?
Good luck!
*If you can't directly AJAX third-party URLs, this can be done by requesting a page on your local server which fetches the remote page server-side with some sort of HTTP request.
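As a sketch of the client side, assuming you have written such a proxy (here a hypothetical fetch.php) on your own server:
// Ask your own server to fetch the remote page, then pull the title out of it.
$.get('fetch.php', { url: 'http://example.com/article' }, function (html) {
    // Naive title extraction; real pages need more robust parsing.
    var match = html.match(/<title[^>]*>([^<]*)<\/title>/i);
    $('#preview-title').text(match ? match[1] : 'No title found');
});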
Some Extra Thoughts
If you grab an image from a remote server and 'hotlink' it on your site, many sites serve 'anti-hotlinking' replacement images when you try to display it, so it might be worth comparing the image requested via your server page with the actual fetched image, so you don't show anything nasty by accident.
A lot of title tags in the head will be generic and non-descriptive; it would be better to fetch the title of the article (assuming an article-type site) if there is one available, as it will be more descriptive. Finding this is difficult, though!
If you are really smart, you might be able to piggyback off Google, for example (check their T&Cs, though). If a user requests a certain URL, you can Google-search it behind the scenes and use the returned descriptive text as your return text. If Google changes their markup significantly, though, this could break very quickly!
You can use a PHP server-side script to fetch the contents of any web page (look up web scraping). What Facebook does is throw out a call via AJAX to a PHP server-side script that uses a PHP function such as
file_get_contents('http://somesite.com.au');
Now, once the file or webpage has been pulled into your server-side script, you can filter the contents for anything in particular. E.g. Facebook's link fetcher will look for the title, img, and meta property="description" parts of the file or webpage via regular expressions,
e.g. with PHP's preg_match() function.
These can then be collected and returned to your webpage.
You may also want to consider adding extra functions to return only the data you want, as scraping some pages may take longer than expected to return your desired information. E.g. filter out irrelevant stuff like JavaScript, CSS, irrelevant tags, huge images, etc. to make it run faster.
If you get this down pat, you could potentially be on your way to building a web search engine, or better yet, collecting data off sites like the Yellow Pages, e.g. phone numbers, mailing addresses, etc.
Also you may want to look further into:
get_meta_tags('http://somesite.com.au');
:-)
There are several APIs that can provide this functionality; for example, PageMunch lets you pass in a URL and a callback, so that you can do this from the client side or feed it through your own server:
http://www.pagemunch.com
An example response for the BBC website looks like:
{
    "inLanguage": "en",
    "schema": "http:\/\/schema.org\/WebPage",
    "type": "WebPage",
    "url": "http:\/\/www.bbc.co.uk\/",
    "name": "BBC - Homepage",
    "description": "Breaking news, sport, TV, radio and a whole lot more. The BBC informs, educates and entertains - wherever you are, whatever your age.",
    "image": "http:\/\/static.bbci.co.uk\/wwhomepage-3.5\/1.0.64\/img\/iphone.png",
    "keywords": [
        "BBC",
        "bbc.co.uk",
        "bbc.com",
        "Search",
        "British Broadcasting Corporation",
        "BBC iPlayer",
        "BBCi"
    ],
    "dateAccessed": "2013-02-11T23:25:40+00:00"
}
You can always just look at what is in the <title> tag. If you need this in JavaScript, it shouldn't be that hard. Once you have the data you can do:
var title = $(data).find('title').html();
The problem will be getting the data, since I think most browsers will block you from making cross-site AJAX requests. You can get around this by having a service on your site act as a proxy and make the request for you. However, at that point you might as well parse out the title on the server. Since you didn't specify what your back-end language is, I won't bother to guess now.
It's not possible with pure JavaScript, due to the cross-domain policy: a client-side script can't read the contents of pages on other domains unless that other domain explicitly exposes a JSON service.
The trick is to send a server-side request (each server-side language has its own tools for this), parse the results using regular expressions or other string-parsing techniques, and then use this server-side code as a "proxy" for an AJAX call made on the fly when the link is posted.
So I have two documents dA and dB hosted on two different servers sA and sB respectively.
Document dA has some JS which opens up an iframe src'ing document dB, which has a form on it. When the form in document dB is submitted to a form handler on server sB, I want the iframe on page dA to close.
I hope that was clear enough. Is there a way to do this?
Thanks!
-Mala
UPDATE: I have no control over dA or sA except via inserted javascript
This isn't supposed to be possible due to the browser/JavaScript security sandbox policy. That being said, it is possible to step outside those limitations with a bit of hackery; there are a variety of methods, some involving Flash.
I would recommend against doing this if possible, but if you must, I'd recommend the DNS approach referred to here:
http://www.alexpooley.com/2007/08/07/how-to-cross-domain-javascript/
Key Excerpt:
Say domain D wants to connect to domain E. In a nutshell, the trick is to use DNS to point a sub-domain of D, D_s, to E's server. In doing so, D_s takes on the characteristics of E, while also being accessible to D.
Assume that I create page A, which contains a frame that covers the entire page.
Let A link to yourbank.com, and say you click on that link. Now, if I could use JavaScript to modify the content of the frame (the banking site), I would quite easily be able to read the password you are using and store it in a cookie, send it to my server, etc.
That is the reason you cannot modify the content in another frame, whose content is NOT from the same domain. However, if they ARE from the same domain, you should be able to modify it as you see fit (both pages must be on your server).
You should be able to access the iframe with this code:
window["iframe_name"].document.body
If you just want the top-level to close, you can just call something like this:
window.top.location = "http://www.example.com/dC.html";
This will close out dA and send the user to dC.html instead. dC.html can contain the JS you want to run (for example, to close the window) in its onload handler.
Other people have explained the security implications. But the question is legitimate: there are use cases for this, and in some scenarios it is possible to do what you want.
W3C defines a property on document called domain, which is used to check security permissions. This property can be manipulated cooperatively by both documents, so they can access each other in some cases.
The governing document is the DOM Level 1 spec; look at the description of document. As you can see, this property is defined there and … it is read-only. In reality, all browsers allow you to modify it:
Mozilla's document.domain description.
Microsoft's domain property description.
Modifications cannot be arbitrary; usually only super-domains are allowed. This means you can make two documents served by different servers access each other, as long as they have a common super-domain.
So if you want two pages to communicate, you need to add a small one-liner which should be run on page load. Something like this should do the trick:
document.domain = "yourdomain.com";
Now you can serve them from different subdomains without losing their accessibility.
Obviously you should watch out for timing issues. These can be avoided if you establish a notification protocol of some sort: for example, one page (the master) sets its domain and loads the other page; when that page is operational, it changes its own domain and calls into the master, triggering some function.
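As a sketch, with two hypothetical hosts a.yourdomain.com and b.yourdomain.com:
// On http://a.yourdomain.com/master.html:
document.domain = 'yourdomain.com';
function onChildReady() {
    // The iframe's document is accessible now that the domains match.
    var frameDoc = window.frames['child'].document;
    frameDoc.body.style.background = '#eee';
}

// On http://b.yourdomain.com/child.html, loaded in the iframe named "child":
document.domain = 'yourdomain.com';
parent.onChildReady(); // notify the master that this page is ready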
A mechanism to do so would be capable of a cross-site scripting attack (since you could do more than just remove a benign bit of page content).
A safe approach would limit to just the iframe document emptying/hiding itself, but if the iframe containing it is fixed size, you will just end up with a blank spot on the page.
If you don't have control over dA or sA, this isn't possible because of browser security restrictions. Even the Flash methods require access to both servers.
This is a bit convoluted but may be more legitimate than a straight XSS solution:
You have no control over server A other than writing JavaScript to document A. But you are opening an iframe within document A, which suggests that you only have write access to document A. This is a bit confusing: are you writing the JS into document A, or injecting it somehow?
Either way, here is what I dreamed up. It won't work if you have no access to the server which hosts the page which has the iframe.
The user hits submit on the form within the iframe. The form, once completed, most likely changes something on the server hosting it. So you have an AJAX function on document A which asks a server-side script to check whether the form has been submitted yet. If it has, the script returns a "submitted" value to the AJAX function, which triggers another JS function to close the iframe (see the sketch after the list below).
The above requires a few things:
The iframe needs to be on a page hosted on a server where you can write an additional server-side script (this avoids the cross-domain issue, since the AJAX is pointing to the same directory, in theory).
The server within the iframe must have some url that can be requested which will return some kind of confirmation that the form has been submitted.
The "check-for-submitted" script needs to know both the above-mentioned URL and what to look for upon loading said URL.
If you have all of the above, the AJAX function calls the server script, the server script uses cURL to go to the URL that reflects whether the form is done, looks for the "form has been submitted" indicators, and, depending on what it finds, returns an answer of "not submitted" or "submitted" to the AJAX function.
For example, maybe the form is for user registration. If your outer document knows what username will be entered into the form, the server-side script can go to http://example.org/username and if it comes up with "user not found" you know the form has yet to be submitted.
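A sketch of the document A side, with hypothetical names (check_submitted.php is the extra server-side script described above; enteredUsername and the iframe id are likewise placeholders):
// Document A: periodically ask our own server whether the form was submitted.
var timer = setInterval(function () {
    $.get('check_submitted.php', { username: enteredUsername }, function (status) {
        // The script cURLs the remote URL and answers "submitted" or "not submitted".
        if (status === 'submitted') {
            clearInterval(timer);
            $('#form-iframe').remove(); // close the iframe
        }
    });
}, 2000);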
Anything that goes beyond what is possible in the above example is probably outside of what is safe and secure anyway. While it would be very convenient to have the iframe close automatically when the user has submitted it, consider the possibility that I have sent you an email saying your bank account needs looking at. The email has a link to a page I have made which has an iframe of your bank's site set to fill the entire viewable part of my page. You log in as normal, because you are very trusting. If I had access to the fact that you hit submit on the page, that would imply I also had access to what you submitted, or at the very least to the URL that the iframe redirected to (which could have a session ID in it, or all sorts of other data the bank shouldn't include in a URL).
I don't mean to sound preachy at all. You should just consider that in order to know about one event, you often are given access to other data that you ought not have.
I think a slightly less elegant solution to your problem would be to have a link above the iframe that says "Finished" or "Close" that kills the iframe when the user is done with the form. This would not only close the iframe when the user has submitted the form, but also give them a chance to say "oops! I don't want to fill out this form anyway. Never mind!" Right now, with your desired automatic solution, there is no way to get rid of the iframe unless the user hits submit.
Thank you everybody for your answers. I found a solution that works:
On my server, I tell the form to redirect to the URL that created the iframe.
On the site containing the iframe, I add a setInterval function to poll for the current location of the iframe.
Due to JS sandboxing, this poll does not work while the URL is foreign (i.e. before my form is submitted). However, once the URL is local (i.e. identical to that of the calling page), the URL becomes readable, and the function closes the iframe. This works as soon as the iframe is redirected; I don't even need to wait for the additional page load.
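In code, that polling looks roughly like this; the element id and the exact handling are my own illustration, not the literal solution:
// Poll the iframe's URL; it only becomes readable once it is same-origin again.
var poll = setInterval(function () {
    var frame = document.getElementById('form-frame');
    try {
        // This throws while the iframe still shows the foreign form.
        var href = frame.contentWindow.location.href;
        clearInterval(poll);
        frame.parentNode.removeChild(frame); // close the iframe
    } catch (e) {
        // Still on the foreign domain; keep polling.
    }
}, 500);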
Thank you very much Greg for helping me :)
Which of the following pieces of code is better for building a delete action that removes a question?
1. My code
<a href='index.php?delete_post=777'>delete</a>
2. Stack Overflow's code
<a id="delete_post_777">delete</a>
I do not understand completely how Stack Overflow's delete button works, since it points to no URL.
The id apparently can only be used by CSS and JavaScript.
Stack Overflow apparently uses JavaScript for the action.
How can you start the delete action by JavaScript, based on the content of the CSS file?
How can you start an SQL delete command from JavaScript? I know how to do that with PHP, but not with JavaScript.
Your method is not safe as a user agent could inadvertently crawl the link and delete the post without user intervention. Googlebot might do that, for instance, or the user's browser might pre-fetch pages to speed up response time.
From RFC 2616: Hypertext Transfer Protocol -- HTTP/1.1
9.1.1 Safe Methods
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.
In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
The right way to do this is to either submit a form via POST using a button, or use JavaScript to do the deletion. The JavaScript could submit a hidden form, causing the entire page to be reloaded, or it could use Ajax to do the deletion without reloading the page. Either way, the important point is to avoid having bare links in your page that might inadvertently be triggered by an unaware user agent.
Bind a click event on the anchors whose id starts with "delete_post_" and use it to start an Ajax request.
$("a[id^='delete_post_']").click(function(e){
e.preventDefault(); // to prevent the browser from following the link when clicked
var id = parseInt($(this).attr("id").replace("delete_post_", ""));
// this executes delete.php?questionID=5342, when id contains 5342
$.post("delete.php", { questionID: id },
function(data){
alert("Output of the delete.php page: " + data);
});
});
UPDATE:
With the above $.post(), the JavaScript code calls delete.php in the background, sending questionID=3425 as POST data. If delete.php produces any output, it will be available to you in the data variable.
This is using jQuery. Read all about it at http://docs.jquery.com/How_jQuery_Works.
The URL you are looking for is in the JS code. Personally, I would have an id that identifies each <a> tag with a specific post, comment... or whatever, and then have a class like "delete_post" on each one; the JavaScript then posts to the correct place.
Like so:
<a href="#" class="delete_post" id="777">Delete</a>
<script type="text/javascript">
jQuery('a.delete_post').live('click', function(){
jQuery.post('delete.php', {id: jQuery(this).attr('id')}, function(data){
//do something with the data returned
})
});
</script>
You're quite correct that absent an href="..." attribute, the link would not work without JavaScript.
Generally, what that JavaScript does is use AJAX to contact the server: that's Asynchronous JavaScript and XML. It contacts a server, just as you would by visiting a page directly, but does so in the background, without changing what page the browser is showing.
That server-side page can then do whatever processing you require. In either case, it's PHP doing the work, not JavaScript.
The primary difference when talking about efficiency is that in a traditional model, where you POST a form to a PHP page, after finishing the request you must render an entire page as the "result," complete with the <head>, and with all the visible page content.
However, when you're doing a background request with AJAX, the visitor never sees the result. In fact, it's usually not even a human-readable result. In this model, you only need to transfer the new information that JavaScript can use to change the page.
This is why AJAX is usually seen as being "more efficient" than the traditional model: less data needs to travel back and forth, and the browser (typically) needs to do less work in order to show the data as part of the page. In your "delete" example, the only communication is "delete=777" and then perhaps "success=true" (to simplify only slightly) — a tiny amount of information to communicate for such a big effect!
It all depends on how your application is built; what happens at Stack Overflow is that the delete link's click is caught by JavaScript and an Ajax request is made to delete the post.
You can use a JavaScript library to easily catch clicks on all elements that match your selector rule(s).
Then you can use Ajax to send a request to the PHP script to do the SQL work.
On a side note, ideally you would not use GET for deleting entries, but rather POST, but that's another story.