What purpose is of "&rnd=" parameter in http requests? - javascript

Why do some web applications use the HTTP GET parameter rnd? What is it for, and what problems does it solve?

This could be to make sure the page/image/whatever isn't taken from the user's cache. If the link is different every time, the browser will fetch it from the server rather than from the cache, ensuring it's the latest version.
It could also be to track people's progress through the site. Best explained with a little story:
A user visits example.com. All the links are given the same random number (let's say 4).
The user opens a link in a new window/tab, and the link is page2.php?rnd=4. All the links in this page are given the random number 7.
The user can click the link to page3.php from the original tab or the new one, and the analytics software on the server can tell which one by whether it has rnd=4 or rnd=7.
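For illustration only, here is a minimal client-side sketch of that kind of link tagging; in practice the server usually stamps the links while rendering the HTML, and the parameter name rnd and the token range are just assumptions:
// Hypothetical sketch: give every link on this pageview the same token
// so the server can tell which pageview a click came from.
// (Uses NodeList.forEach and the URL API, i.e. modern browsers.)
var pageToken = Math.floor(Math.random() * 1000); // e.g. 4 on one view, 7 on another
document.querySelectorAll('a[href]').forEach(function (link) {
    var url = new URL(link.href, location.href);
    url.searchParams.set('rnd', pageToken); // page2.php becomes page2.php?rnd=4
    link.href = url.toString();
});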
All we can do is suggest possibilities though. There's no one standard reason to put rnd= in a URL, and we can't know the website designer's motives without seeing the server software.

Internet Explorer and other browsers will read an image URL, download the image, and store it in a cache.
If your application updates the image regularly and you don't want your users to see a stale cached copy, the URL needs to be unique each time.
Adding a random string to the query string makes every URL unique, so the browser downloads a fresh copy on each request.
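A minimal sketch of that technique for an image, where the element id, image path and refresh interval are made up for illustration:
// Re-fetch a frequently updated image by making its URL unique each time.
function refreshImage() {
    var img = document.getElementById('liveImage'); // hypothetical element
    img.src = 'webcam.jpg?rnd=' + new Date().getTime(); // new URL => cache miss
}
setInterval(refreshImage, 5000); // update every 5 seconds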

It's almost always for cache-busting.

As others have suggested, this kind of behaviour is usually used to avoid caching issues when you are calling a page that returns dynamic data.
For example, say you have a page that returns some current user information, such as "mysite.com/CurrentUserData". The first call to this page will return the user data as expected, but depending on the timing and caching settings, a second call may return the same data even though the underlying data has been updated.
The main reason for caching is, of course, to speed up frequent requests. But where that is not wanted, adding a random value as a query string parameter is a widely used workaround.
There are, however, other ways to get around this issue. For example, if you are doing an Ajax request with JavaScript/jQuery, you can disable caching in your call...
$.ajax({url: 'page.html', cache: false});
you could also change it for all page calls on document load with...
$.ajaxSetup({cache: false});
If you are writing an ASP.NET MVC application, you can even disable caching on controller action methods with an attribute, like so...
[OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")]
public ActionResult NonCacheableData()
{
    return View();
}
(thanks to a quick copy and paste from here)
I dare say there are also settings in IIS you could apply to get the same effect - though I have not gone that far with this yet.

Related

AJAX calls made during JavaScript initialization follow the browser's cache rules, but later calls do not

I've recently started using jQuery AJAX calls to fetch some content within a document-ready function. I set cache-control headers in the AJAX call, and these get overridden when a forced reload of the page is done (Chrome), which is exactly what I want.
Unfortunately, later AJAX calls triggered by user interaction, after the page and content have completely materialized, do not follow these cache rules.
For instance, if I control-reload a page that initially accesses /dostuff/ during initialization with a cache-control header set to an obscenely high max-age, the browser overrides the header and sets the max-age to 0, which is nice: it gives the user a lot of control to refresh content.
Is this proper? Should I always expect AJAX calls that are part of initialization to override request headers the way I'm beginning to expect them to? It seems like there is a lot of room for inconsistency.
If I call the same URL later on, it does what I want, and the browser automagically adds an If-Modified-Since header that helps me return properly from the server.
If I call a URL that wasn't part of the initialization, however, like /dootherstuff/, the browser won't set the max-age to 0 even if the page was initialized through a forced reload.
I don't expect to be able to fix this problem, since it appears to be working as it should. I would, however, like to know how to reliably detect whether the page was force-reloaded so that I can handle the cache-control headers properly.
Resolving this issue using version keys on the URL that are fudged to deal with reloads, rather than tied to actual content versions, would cause me a lot of grief and extra network traffic and processing time.

Problems with a cached result when sending an XMLHttpRequest?

I'm new to the idea of AJAX as well as caching.
On the AJAX - Send a Request To a Server page from W3Schools, it says you should add "?t=" + Math.random() to the end of the URL of the script to be run, to prevent caching.
On Wikipedia, the simple definition of "cache" is:
In computer science, a cache is a component that transparently stores data so that future requests for that data can be served faster. The data that is stored within a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere.
But, shouldn't this be better? The script will run faster if the computer already has some duplicate data stored. Also, the first example on the tutorial page, without the addition to the URL, worked fine.
Can somebody please tell me the reason behind using "?t=" + Math.random()?
But, shouldn't this be better?
Yes, it's better to have a caching system for performance reasons: your application's pages will load quickly, because elements that have been loaded once can be retrieved without making an HTTP request to the server each time.
Can somebody please tell me the reason behind using "?t=" + Math.random()?
Adding this "?t=" + Math.random() is like renaming the URL of the script each time you reload it. The caching system will see it as a new element and not as an old one he as already stored even if nothing as really changed. So it's forcing to reload the element from the server.
Generally, we may want to do that on elements (like images, scripts) that are often updated. For example, it's the case for a profile picture in a website that a user could change, if the old picture file is in cache, the user will not see the new picture appear immediatly if we don't use that trick of the random number. The user could think his upload didn't work. He would have to empty the cache manually in his browser, which is not always very intuitive.
A second reason could be that it's good to do it while we are developping because we don't need to empty the cache every minutes that our code changes are taken into account...
However don't use this trick on elements you are sure will don't change or very rarely.
The reason behind adding some random element to the end of a web service request like that is that in many cases you want the data to always be fresh. If it is cached, it is possible the data won't be fresh.
For example, say you have an AJAX request which gives you the current high score of a game. You call it with http://example.com/get_high_score.php. Say it returns 100. Now, say you wait 5 seconds and call this again (or the user refreshes their page). If that request was cached, it may return 100 again. However, in that time, the score may actually now be 125.
If you call http://example.com/get_high_score.php?t=12345786, the score would be the latest value, because it wasn't cached.
url + "?t=" + Math.random() is just one means of doing this. I actually prefer to use a timestamp instead, as that is guaranteed to always be unique.
url + "?t=" + (new Date()).getTime()
On the flip side, if you don't need the data to always be fresh (e.g., you are just sending a list of menu item options which almost never change), then caching is okay and you'd want to leave off the extra bit.
An alternative is to use a timestamp, or to derive one that only changes every few seconds, so that short bursts of identical requests can still be served from the cache. The best method, if you can, is to add headers to your server response telling the browser not to cache the result.
var t = new Date().getTime();
var t2 = Math.floor(t / 10000); // changes every 10 seconds
url = target_url + "?t=" + t2;
Although it's unlikely in this case, be aware that if your site continually generates links to random internal URLs, say through server-side code, it becomes a "spider trap": crawlers such as search engines get stuck in a loop following these random links around, causing spikes in your server load.

(Temp) Storage of JSON Search Results in Web App

I'm working on a search function for my Web app (HTML, JS & CSS only). I'm using jQuery's .getJSON() method to retrieve data from a feed and display the results on a page. Inside an .each() statement I add HTML markup to the results, making some of the elements links to outside sources.
The issue is that when a visitor initiates a search on my Web app, clicks a link from the results to an outside page, and then uses the Back button on the browser to go back to the results page, all of the search results are cleared and another search needs to be initiated.
I'd like to temporarily save the search results so that if a user clicks a link from the results and then presses the Back button to come back to the app, all of the results are still available without the need for another search.
Taking this one step further, it would also be cool if the results of past searches also persisted, so that as the visitor continues to press the Back button, they can see all of their previous searches (up to a given limit, of course).
HTML5 sessionStorage seems ideal for this, but the information I found points to a tedious coding solution. Can't I just save all of the JSON results as a JS object and have them re-rendered by my .each() statement when the visitor presses the Back button? I'm definitely open to using a code library or plugin for this problem.
http://brian.io/lawnchair/ is a good little library for persistence. You can use the same syntax as an abstraction over different storage options: http://brian.io/lawnchair/adapters/
You have two ways to approach this issue: one is caching the results on your server and populating the view on demand, and the other is, as you mentioned, sessionStorage. sessionStorage (IMO) has a very straightforward API. You can either use sessionStorage.setItem(key, value) or sessionStorage.getItem(key); other methods are available as well, such as sessionStorage.key(index), sessionStorage.removeItem(key) and sessionStorage.clear(). It would probably be useful to include a cross-browser polyfill for sessionStorage; check out the "Web Storage" section of the polyfill list at Modernizr: https://github.com/Modernizr/Modernizr/wiki/HTML5-Cross-Browser-Polyfills -- Have fun :-/
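A minimal sketch of that API for this use case, where the key prefix and the result shape are made up for illustration:
// Cache search results for the lifetime of the tab, so a Back-button
// return can re-render them without a new request.
function saveResults(query, results) {
    // sessionStorage stores strings only, so serialize the JSON first
    sessionStorage.setItem('search:' + query, JSON.stringify(results));
}
function loadResults(query) {
    var raw = sessionStorage.getItem('search:' + query);
    return raw ? JSON.parse(raw) : null; // null => nothing cached yet
}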
Off the top of my head:
Every time the user searches, change the hash in the URL to a unique string (e.g. 'search-{userInput}'; you could of course just drop the 'search-' prefix, but I like my URLs pretty). This should give you Back-button support. Then:
Alternative A:
Listen for the hashchange event, parse window.location.hash, and resend the request to your search URL. Unless you add a timestamp to the URL or do similarly crazy stuff, the caching mechanism of your browser should theoretically kick in here. If not, it means an additional request, but that should be OK, shouldn't it?
Alternative B:
Extend your existing search query mechanism by caching the results to localStorage (just don't forget to JSON.stringify them beforehand, and use a something-{timestamp} key). Then listen for the hashchange event and pull the results from localStorage. Personally, I wouldn't recommend this solution, as you're clogging up localStorage (AFAIK there's a limit of about 2.5 MB in some browsers).
You're probably going to have to work around missing browser support for at least the hashchange event, JSON stringify/parse and localStorage, but I'm optimistic that there are enough libs/plugins out there by now.
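A rough sketch of Alternative A, assuming jQuery, where searchFor() is a hypothetical stand-in for the existing $.getJSON()-based search function:
// Re-run the search whenever the hash changes, e.g. after a Back press.
$(window).on('hashchange', function () {
    var hash = window.location.hash; // e.g. "#search-kittens"
    if (hash.indexOf('#search-') === 0) {
        searchFor(decodeURIComponent(hash.slice('#search-'.length)));
    }
});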
You're overcomplicating this: your search form most likely does not change the URL! Use GET instead of POST and you have the desired result. Right now the browser has no way of knowing which state of the website you want to show, and by default shows the first one: the empty search form.
Caching could be added as suggested, but that really is not the problem here.
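To make the GET suggestion concrete, a minimal sketch (the action URL and field name are illustrative assumptions):
<!-- A plain GET form: each search becomes its own URL, e.g.
     results.html?q=kittens, so the Back button restores earlier searches. -->
<form method="get" action="results.html">
    <input type="text" name="q">
    <button type="submit">Search</button>
</form>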

Disable browser cache

I implemented a REST service and I'm using a web page as the client.
My page has some JavaScript functions that perform the same HTTP GET request to the REST server several times and process the replies.
My problem is that the browser caches the first reply and doesn't actually send the subsequent requests.
Is there some way to force the browser to execute all the requests without caching?
I'm using Internet Explorer 8.0.
Thanks
Not sure if it can help you, but sometimes I add a random parameter to the URL of my request in order to avoid it being cached.
So instead of having:
http://my-server:8080/myApp/foo?bar=baz
I will use:
http://my-server:8080/myApp/foo?bar=baz&random=123456789
Of course, the value of the random parameter must be different for every request. You can use the current time in milliseconds for that.
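A small helper keeps this tidy; the function name is made up for illustration, and new Date().getTime() plays the role of the random value (it also works in IE 8, which lacks Date.now()):
// Append a unique query-string parameter so each request URL is distinct
// and the browser cannot serve a cached reply.
function noCacheUrl(url) {
    var sep = url.indexOf('?') === -1 ? '?' : '&';
    return url + sep + 'random=' + new Date().getTime();
}
// noCacheUrl('http://my-server:8080/myApp/foo?bar=baz')
//   => "http://my-server:8080/myApp/foo?bar=baz&random=1361234567890"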
Not really. This is a known issue with IE; the classic solution is to append a random parameter to the end of the query string for every request. Most JS libraries do this natively if you ask them to (jQuery's cache: false AJAX option, for instance).
Well, of course you don't actually want to disable the browser cache entirely; correct caching is a key part of REST and the fact that it can (if properly followed by both client and server) allow for a high degree of caching while also giving fine control over the cache expiry and revalidation is one of the key advantages.
There is though an issue, as you have spotted, with subsequent GETs to the same URI from the same document (as in DOM document lifetime: reload the page and you'll get another go at that XMLHttpRequest). Pretty much, IE seems to treat it as it would a request for more than one copy of the same image or other related resource in a web page; it uses the cached version even if the entity isn't cacheable.
Firefox has the opposite problem, and will send a subsequent request even when caching information says that it shouldn't!
We could add a random or time-stamped bogus parameter at the end of a query string for each request. However, this is a bit like screaming "THIS IS SPARTA!" and kicking our hard-won download into a deep pit that no Health & Safety inspector considered putting a safety rail around. We obviously don't want to repeat a full unconditional request when we don't need to.
However, this behaviour has a time component. If we delay the subsequent request by a second, IE will re-request when appropriate, while Firefox will honour the max-age and expires headers and not re-request needlessly.
Hence, if two requests could come within a second of each other (either we know they are called from the same function, or there's a chance of two events triggering them in close succession), using setTimeout to delay the second request until a second after the first has completed will make both browsers use the cache correctly, rather than exhibit their two different sorts of incorrect behaviour.
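As a sketch, assuming jQuery, with pollStatus(), render() and the /status URL as hypothetical stand-ins for the real polling and display code:
// Space repeated GETs to the same URI at least a second apart, so both
// IE and Firefox apply the cache headers sensibly.
function pollStatus() {
    $.get('/status').done(function (data) {
        render(data);
        setTimeout(pollStatus, 1000); // wait a second after completion
    });
}
pollStatus();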
Of course, a second's delay is a second's delay. This could be a big deal or not, depending primarily on the size of the downloaded entity.
Another possibility is that something that changes so rapidly shouldn't be modelled as GETting the state of a resource at all, but as POSTing a request for a current status to a resource. This does smell heavily of abusing REST and POSTing what should really be a GET though.
Which can mean that on balance the THIS IS SPARTA approach of appending random stuff to query strings is the way to go. It depends, really.

To build a `Delete` button efficiently with JavaScript / PHP

Which of the following code is better for building a delete action to remove a question?
1. My code
<a href='index.php?delete_post=777'>delete</a>
2. Stack Overflow's code
<a id="delete_post_777">delete</a>
I do not completely understand how Stack Overflow's delete button works, since it points to no URL.
The id apparently can only be used by CSS and JavaScript.
Stack Overflow apparently uses JavaScript for the action.
How can you start the delete action by JavaScript, based on something defined for CSS?
How can you run an SQL DELETE command from JavaScript? I know how to do that with PHP, but not with JavaScript.
Your method is not safe, as a user agent could inadvertently crawl the link and delete the post without user intervention. Googlebot might do that, for instance, or the user's browser might prefetch pages to speed up response time.
From RFC 2616: Hypertext Transfer Protocol -- HTTP/1.1
9.1.1 Safe Methods
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.
In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
The right way to do this is to either submit a form via POST using a button, or use JavaScript to do the deletion. The JavaScript could submit a hidden form, causing the entire page to be reloaded, or it could use Ajax to do the deletion without reloading the page. Either way, the important point is to avoid having bare links in your page that might inadvertently be triggered by an unaware user agent.
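The form-based variant needs no JavaScript at all; a minimal sketch, with the action URL and field name as illustrative assumptions:
<!-- POST form: the delete happens only on an explicit submit, so crawlers
     and prefetchers that follow links cannot trigger it. -->
<form method="post" action="delete.php">
    <input type="hidden" name="delete_post" value="777">
    <button type="submit">delete</button>
</form>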
Bind a click event on the anchors whose id starts with "delete_post_" and use it to start an Ajax request.
$("a[id^='delete_post_']").click(function(e){
e.preventDefault(); // to prevent the browser from following the link when clicked
var id = parseInt($(this).attr("id").replace("delete_post_", ""));
// this executes delete.php?questionID=5342, when id contains 5342
$.post("delete.php", { questionID: id },
function(data){
alert("Output of the delete.php page: " + data);
});
});
UPDATE:
With the above $.post(), the JavaScript calls delete.php in the background, sending questionID as POST data rather than in the query string. If delete.php produces any output, it will be available to you in the data variable.
This is using jQuery. Read all about it at http://docs.jquery.com/How_jQuery_Works.
The URL you are looking for is in the JS code. Personally, I would have an id that identifies each <a> tag with a specific post, comment, or whatever, and then have a class="delete_something" on each one; this then posts to the correct place using JavaScript.
Like so:
<a class="delete_post" id="777">Delete</a>
<script type="text/javascript">
jQuery('a.delete_post').live('click', function(){
    jQuery.post('delete.php', { id: jQuery(this).attr('id') }, function(data){
        // do something with the data returned
    });
});
</script>
You're quite correct that absent an href="..." attribute, the link would not work without JavaScript.
Generally, what that JavaScript does is use AJAX to contact the server: that's Asynchronous JavaScript and XML. It contacts a server, just as you would by visiting a page directly, but does so in the background, without changing what page the browser is showing.
That server-side page can then do whatever processing you require. In either case, it's PHP doing the work, not JavaScript.
The primary difference when talking about efficiency is that in a traditional model, where you POST a form to a PHP page, after finishing the request you must render an entire page as the "result," complete with the <head>, and with all the visible page content.
However, when you're doing a background request with AJAX, the visitor never sees the result. In fact, it's usually not even a human-readable result. In this model, you only need to transfer the new information that JavaScript can use to change the page.
This is why AJAX is usually seen as being "more efficient" than the traditional model: less data needs to travel back and forth, and the browser (typically) needs to do less work in order to show the data as part of the page. In your "delete" example, the only communication is "delete=777" and then perhaps "success=true" (to simplify only slightly) — a tiny amount of information to communicate for such a big effect!
It all depends on how your application is built. What happens at Stack Overflow is that the delete link click is caught by JavaScript, and an Ajax request is made to delete the post.
You can use a JavaScript library to easily catch clicks on all elements that match your selector rule(s).
Then you can use Ajax to send a request to the PHP script to do the SQL work.
On a side note, ideally you would not use GET for deleting entries, but rather POST, but that's another story.
