The situation is as follows:
I need a button on my site that links to a subpage with a video.
There are two subpages: one with a high-quality video and one with a low-quality video.
The first time somebody clicks the button, it redirects them to the high-quality subpage. From there they can switch to the second subpage (with the low-quality video).
The problem:
I want to remember in a cookie which video page the client visited last (low or high quality), so that when the client returns to my website, the button leads to the video page they were last on.
I use ASP.NET MVC 2, but I suspect the solution to this problem is probably some JavaScript.
Any help here is much appreciated!
Cookies are passed to the server with each HTTP request.
Assuming your button is generated dynamically on the server, you can inspect the incoming cookies to see if the user has the parameter in question set to low quality and update the button URL accordingly.
ASP docs
From experience with ASP.NET WebForms, it's pretty straightforward to access cookies, and I'm pretty sure things are set up similarly with MVC.
// Reads the "bandwidth" cookie sent with the request, or null if it is not set.
String GetBandwidthSetting()
{
    HttpCookie bandwidth = Context.Request.Cookies["bandwidth"];
    return (bandwidth != null) ? bandwidth.Value : null;
}

// Writes the "bandwidth" cookie so the setting is remembered for a year.
void SetBandwidthSetting(String value)
{
    HttpCookie bandwidth = new HttpCookie("bandwidth", value);
    bandwidth.Expires = DateTime.Now.AddYears(1);
    Context.Response.Cookies.Add(bandwidth);
}
You can check this script:
http://javascript.internet.com/cookies/cookie-redirect.html
It is similar to what you need.
In the .js you have to change the last if statement to something similar to this:
if (favorite != null) {
  switch (favorite) {
    case 'videohq': url = 'url_of_HQ_Video'; // change these!
      break;
    case 'videolq': url = 'url_of_LQ_Video';
      break;
  }
}
Then add this to the button/link on the page from which you redirect to those videos:
onclick="window.location.href = url"
Remember also to add code that sets the cookie. You can add an action similar to this:
onclick="SetCookie('video', 'videohq', exp);"
I'm using this JavaScript for a click counter in my Blogger blog:
function clickCounter() {
if(typeof(Storage) !== "undefined") {
if (sessionStorage.clickcount) {
sessionStorage.clickcount = Number(sessionStorage.clickcount)+1;
} else {
sessionStorage.clickcount = 1;
}
document.getElementById("result").innerHTML = "Correct! " + sessionStorage.clickcount + " Smart answers 'til now.";
} else {
document.getElementById("result").innerHTML = "Sorry, your browser does not support this quiz...";
}
}
<button onclick="clickCounter()" type="button">Suspension</button>
Is there any way to create something similar through a non-JavaScript method?
Can you help me trigger an event (an extra text message, via popup or within the page) every 5, 10, 20, or 100 clicks?
Thank you very much
HTML, and the Web in general, was designed to be stateless.
When you pull up a page, it should be like the first time -- and every time -- you pull up the page.
Since then, people have come up with a number of techniques to add state -- to save data -- but they all involve one of two methods, or sometimes both.
Method 1: Store state on the server.
This method uses HTML forms or cookies to slip information to the server when you load and reload a page.
Method 2: Store state in the client
While some older versions of Internet Explorer can be scripted with VBScript, we are going to ignore that. The only "real" way to run any kind of code on the client, to store any data, is to use JavaScript.
Method 3: Use the client to talk to the server
Using Ajax, you can let your client talk to the server, but without doing a page reload. This still uses JavaScript.
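As a rough sketch of method 3 (the /api/clicks endpoint and the shape of its JSON response are assumptions, not a real API), the client can report each click without a page reload:

// Tell the server about a click and show the running total it keeps.
function reportClick() {
  fetch('/api/clicks', { method: 'POST' })
    .then(function (response) { return response.json(); })
    .then(function (data) {
      // data.count is assumed to be the total stored on the server
      document.getElementById('result').innerHTML = data.count + ' clicks so far.';
    });
}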
So, to answer your question:
Without a server
Without JavaScript
No, you cannot save or store anything.
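That said, if keeping JavaScript is acceptable, the milestone part of the question is a small extension of the existing counter. A sketch reusing the same sessionStorage counter and result element:

// Show an extra message every 5, 10, 20 and 100 clicks.
var milestones = [5, 10, 20, 100];

function clickCounter() {
  if (typeof(Storage) === "undefined") {
    document.getElementById("result").innerHTML = "Sorry, your browser does not support this quiz...";
    return;
  }
  sessionStorage.clickcount = Number(sessionStorage.clickcount || 0) + 1;
  var count = Number(sessionStorage.clickcount);
  document.getElementById("result").innerHTML = "Correct! " + count + " Smart answers 'til now.";
  if (milestones.indexOf(count) !== -1) {
    alert("Milestone reached: " + count + " smart answers!"); // or show it inside the page instead
  }
}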
I have not tried this but...
What if you put multiple buttons positioned on top of each other? As each one is clicked, it can be made to vanish with something like:
a:visited { display: none; }
The ones that need to display a message (5th, 10th, etc.) have different behavior attached.
See on click hide this (button link) pure css
I have a web app that I would like to restrict to a single browser tab or window. The idea is that a user logs in, and if they open a link in a new tab/window or open a new browser tab/window, it kills their session. I know many are against this, but that's how the app needs to be.
The controller checks if the user is logged in via:
if (!isset($_SESSION['user_logged_in'])) {
Session::destroy();
header('location: '.URL.'login');
}
I have tried setting $_SESSION['user_logged_in'] to false if it's true, but then obviously you don't get any further than one page.
Is there a way to destroy the session when a new browser tab or window is opened? I'm guessing probably jQuery/JavaScript, but I'm not across that side of things.
It's very complex to achieve, unfortunately.
And almost impossible to do it true cross-browser and supported by every browser.
Technically, from the server's point of view, a new browser tab doesn't differ from the previous one. Tabs share cookies, and the session too.
The only thing that differs is the JavaScript state. An example: a site that is fully AJAX-based, where the first page is always the login page and everything after that changes via AJAX. If you open another tab on that site, it loads the first page, which logs you out by default. That approach can make it possible, but it's very complex.
Newer technologies like localStorage might make this possible, since you can communicate between tabs by sending messages through localStorage. But this isn't fully cross-browser and isn't supported by all browser versions.
So if you are OK with supporting only a limited set of recent browsers, then dig into localStorage and postMessage.
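A rough sketch of the localStorage route (the key name and the logout URL are assumptions): every new tab announces itself when it loads, and tabs that are already open receive that announcement through the storage event and can end their session.

// Announce this tab when the page loads.
localStorage.setItem('tabOpened', Date.now().toString());

// The 'storage' event fires in every *other* open tab of the same origin.
window.addEventListener('storage', function (event) {
  if (event.key === 'tabOpened') {
    // A second tab was opened: end this tab's session (hypothetical logout URL).
    window.location.href = '/logout';
  }
});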
Just to piggyback on what Oleg said, it would be incredibly difficult, since HTTP is stateless and browser tabs share data. One potential way of doing it could be on the front end, but a very specific set of circumstances would need to be present, and they could easily be bypassed. If the application is a SPA and the primary body is only loaded once, you could potentially generate a key on the body load and send that with each request. Then, if the body is reloaded (say, in a new tab or new window), you could generate a new key, which would start a new session.
However, the real question is why you would want to do this. Your user experience will suffer and there are no real security gains.
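For completeness, a sketch of that idea (the X-Tab-Key header name and the server-side check are assumptions): generate a key once per full page load and send it with every Ajax request, so a reload in another tab or window produces a different key that the server can reject or treat as a new session.

// Generated once per full page load; a new tab or window gets a different key.
var tabKey = Math.random().toString(36).slice(2) + '-' + Date.now().toString(36);

// Wrap fetch so every request carries the key for the server to compare
// with the key it issued for the current session.
function apiFetch(url, options) {
  options = options || {};
  options.headers = Object.assign({ 'X-Tab-Key': tabKey }, options.headers || {});
  return fetch(url, options);
}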
I have a solution and I want to share it with you.
To restrict the user to only one tab per session, you may use a cookie. I describe here how you may build your web app in order to achieve that goal.
Each time the web module needs to render the auth/login page, create and store a cookie with a given name. Let's call it browserName. The value of the cookie must be a generated value. You may use java.util.UUID if your programming language is Java.
When the browser has finished loading your auth/login page, set the browser window's name to the generated cookie value. You have to know how to read a cookie using JavaScript.
Each time the user loads a page other than the auth/login page, check whether the current browser window's name matches the one stored in the cookie. If they are different, prompt the user, and then you can run a snippet that resets the session and redirects to the auth/login page.
The following is an example of implementing what I've said.
Snippet to be added in the method that runs before your login page is shown:
Map<String, Object> v$params = new TreeMap<>();
v$params.put("path", "/");
FacesContext.getCurrentInstance()
    .getExternalContext()
    .addResponseCookie("browserName", UUID.randomUUID().toString(), v$params);
A mini JavaScript library that helps you with cookies and other things. Add it globally in your web app.
/**
* http://stackoverflow.com/questions/5639346/shortest-function-for-reading-a-cookie-in-javascript
*/
(function() {
  var cookies;

  function readCookie(name, c, C, i) {
    if (cookies) {
      return cookies[name];
    }
    c = document.cookie.split('; ');
    cookies = {};
    for (i = c.length - 1; i >= 0; i--) {
      C = c[i].split('=');
      cookies[C[0]] = C[1];
    }
    return cookies[name];
  }

  window.readCookie = readCookie; // or expose it however you want
})();
// function read_cookie(k,r){return(r=RegExp('(^|; )'+encodeURIComponent(k)+'=([^;]*)').exec(document.cookie))?r[2]:null;}
function read_cookie(k) {
return (document.cookie.match('(^|; )' + k + '=([^;]*)') || 0)[2];
}
/**
* To be called in login page only
*/
function setupWebPage(){
window.name = read_cookie("browserName");
}
/**
* To be called in another pages
*/
function checkWebPageSettings(){
  var curWinName = window.name;
  var setWinName = read_cookie("browserName");
  if (curWinName != setWinName) {
    /**
     * You may redirect the user to a proper page telling them that
     * your application doesn't support multiple tabs/windows. From that page,
     * the user may decide to go back to the previous page or log out in
     * order to get a new session in the current browser tab or window.
     */
    alert('Please go back to your previous page !');
  }
}
Add this to your login page:
<script type="text/javascript">
setupWebPage();
</script>
Add this to your other page templates:
<script type="text/javascript">
checkWebPageSettings();
</script>
I have an ASP.NET MVC web application which consists of several pages. The requirement is like this:
While users are using the application, suppose a user is on page 7 and suddenly navigates away from the application by typing an external URL, say google.com.
Now when the user presses the browser's back button, instead of bringing them back to page 7, we need to redirect them to page 0, which is the landing page of the application.
Is there any way to achieve this? We have a base controller which gets executed every time a page loads, as well as a master page (aspx). Can we do something there so that this behavior is implemented on all pages?
I think the best solution is to use an iframe and switch between your steps inside the iframe. It would be quite easy to do, because you don't need to redesign your application. Any time the user navigates to another URL and comes back, the iframe will be loaded again from the first step.
Be sure to disable caching on every step of your application. You can do this by applying NoCache attribute to your controller's actions:
public class NoCache : ActionFilterAttribute
{
public override void OnResultExecuting(ResultExecutingContext filterContext)
{
filterContext.HttpContext.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(-1));
filterContext.HttpContext.Response.Cache.SetValidUntilExpires(false);
filterContext.HttpContext.Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches);
filterContext.HttpContext.Response.Cache.SetCacheability(HttpCacheability.NoCache);
filterContext.HttpContext.Response.Cache.SetNoStore();
base.OnResultExecuting(filterContext);
}
}
There are two cases here.
First, the browser is online. In this case you have to store the last GET request for a page in the session; if the user hits the back button it will re-initiate the GET request for that page, and you can trap it and send them to the landing page. You have to make sure the GET request for a page happens only once; every other action must be a POST.
Second, the browser is offline. In this case you have to make sure your responses do not leave any cache footprint in the browser; there are many code examples for this purpose on the net.
I can offer the following idea:
When the user presses <a href='external url' onclick='clearHistory()'>link</a>
you can push the desired return URL into the browser history:
<script>
function clearHistory()
{
var reternUrl = getReternUrl();
History.pushState({}, null, reternUrl);
}
</script>
more about history.js
Edit: ok, then handle beforeunload event:
$(window).on('beforeunload', function () {
var reternUrl = getReternUrl();
History.pushState({}, null, reternUrl);
});
EDIT: Shortened and slightly changed code to better answer exact question (based on first comment to this answer)
This is an addition to the answer above about editing the browser history, for the case where the user types the external URL into the browser address bar.
You could try to detect url change as posted in How to detect URL change in JavaScript.
Example of this using jQuery (taken and edited slightly from the post linked above):
For newer browsers:
$(window).bind('hashchange', function() {
/* edit browser history */
});
For older browsers:
function callback(){
/* edit browser history */
}
function hashHandler(callback){
this.oldHash = window.location.hash;
this.Check;
var that = this;
var detect = function(){
if(that.oldHash!=window.location.hash){
callback("HASH CHANGED - new hash" + window.location.hash);
that.oldHash = window.location.hash;
}
};
this.Check = setInterval(function(){ detect() }, 100);
}
new hashHandler(callback); // start detecting (callback will be called when a change is detected)
I'll get back to you on bookmarks (still need to check that out).
We use the JS lib retina.js, which swaps low-quality images with "retina" images (twice the size). The problem is that retina.js throws a 404 for every "retina" image that can't be found.
We run a site where users can upload their own pictures, which are most likely not in a retina resolution.
Is there no way to prevent the JS from throwing 404s?
If you don't know the lib, here is the code throwing the 404:
http = new XMLHttpRequest;
http.open('HEAD', this.at_2x_path);
http.onreadystatechange = function() {
if (http.readyState != 4) {
return callback(false);
}
if (http.status >= 200 && http.status <= 399) {
if (config.check_mime_type) {
var type = http.getResponseHeader('Content-Type');
if (type == null || !type.match(/^image/i)) {
return callback(false);
}
}
RetinaImagePath.confirmed_paths.push(that.at_2x_path);
return callback(true);
} else {
return callback(false);
}
}
http.send();
There are a few options that I see, to mitigate this.
Enhance and persist retina.js' HTTP call results caching
For any given '2x' image that is set to swap out a '1x' version, retina.js first verifies the availability of the image via an XMLHttpRequest request. Paths with successful responses are cached in an array and the image is downloaded.
The following changes may improve efficiency:
Failed XMLHttpRequest verification attempts can be cached: presently, a '2x' path verification attempt is skipped only if it has previously succeeded, so failed attempts can recur. In practice this doesn't matter much, because the verification process happens when the page is initially loaded. But if the results are persisted, keeping track of failures will prevent recurring 404 errors.
Persist '2x' path verification results in localStorage: during initialization, retina.js can check localStorage for a results cache. If one is found, the verification process for '2x' images that have already been encountered can be bypassed, and the '2x' image can either be downloaded or skipped. Newly encountered '2x' image paths can be verified and the results added to the cache. Theoretically, while localStorage is available, a 404 will occur only once per image on a per-browser basis, and this applies to any page on the domain.
Here is a quick workup. Expiration functionality would probably need to be added.
https://gist.github.com/4343101/revisions
Employ an HTTP redirect header
I must note that my grasp of "server-side" matters is spotty, at best. Please take this FWIW
Another option is for the server to respond with a redirect code for image requests that have the @2x characters and do not exist. See this related answer.
In particular:
If you redirect images and they're cacheable, you'd ideally set an HTTP Expires header (and the appropriate Cache-Control header) for a date in the distant future, so at least on subsequent visits to the page users won't have to go through the redirect again.
Employing the redirect response would get rid of the 404s and cause the browser to skip subsequent attempts to access '2x' image paths that do not exist.
retina.js can be made more selective
retinajs can be modified to exclude some images from consideration.
A pull request related to this: https://github.com/imulus/retinajs/commit/e7930be
Per the pull request, instead of finding <img> elements by tag name, a CSS selector can be used and this can be one of the retina.js' configurable options. A CSS selector can be created that will filter out user uploaded images (and other images for which a '2x' variant is expected not to exist).
Another possibility is to add a filter function to the configurable options. The function can be called on each matched <img> element; a return true would cause a '2x' variant to be downloaded and anything else would cause the <img> to be skipped.
The basic, default configuration would change from the current version to something like:
var config = {
check_mime_type: true,
retinaImgTagSelector: 'img',
retinaImgFilterFunc: undefined
};
The Retina.init() function would change from the current version to something like:
Retina.init = function(context) {
if (context == null) context = root;
var existing_onload = context.onload || new Function;
context.onload = function() {
// uses new query selector
var images = document.querySelectorAll(config.retinaImgTagSelector),
retinaImages = [], i, image, filter;
// if there is a filter, check each image
if (typeof config.retinaImgFilterFunc === 'function') {
filter = config.retinaImgFilterFunc;
for (i = 0; i < images.length; i++) {
image = images[i];
if (filter(image)) {
retinaImages.push(new RetinaImage(image));
}
}
} else {
for (i = 0; i < images.length; i++) {
image = images[i];
retinaImages.push(new RetinaImage(image));
}
}
existing_onload();
}
};
To put it into practice, before window.onload fires, call:
window.Retina.configure({
// use a class 'no-retina' to prevent retinajs
// from checking for a retina version
retinaImgTagSelector : 'img:not(.no-retina)',
// or, assuming there is a data-owner attribute
// which indicates the user that uploaded the image:
// retinaImgTagSelector : 'img:not([data-owner])',
// or set a filter function that will exclude images that have
// the current user's id in their path, (assuming there is a
// variable userId in the global scope)
retinaImgFilterFunc: function(img) {
return img.src.indexOf(window.userId) < 0;
}
});
Update: Cleaned up and reorganized. Added the localStorage enhancement.
Short answer: it's not possible using client-side JavaScript only.
After browsing the code and a little research, it appears to me that retina.js isn't really throwing the 404 errors.
What retina.js is actually doing is requesting a file and simply checking whether or not it exists based on the status code, which means it is asking the browser to check if the file exists. The browser is what gives you the 404, and there is no cross-browser way to prevent that (I say "cross-browser" because I only checked WebKit).
However, what you could do if this really is an issue is do something on the server side to prevent 404s altogether.
Essentially this would be, for example, a request to /retina.php?image=YOUR_URLENCODED_IMAGE_PATH, which could return this when a retina image exists...
{"isRetina": true, "path": "YOUR_RETINA_IMAGE_PATH"}
and this if it doesn't...
{"isRetina": false, "path": "YOUR_REGULAR_IMAGE_PATH"}
You could then have some JavaScript call this script and parse the response as necessary. I'm not claiming that is the only or the best solution, just one that would work.
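A rough sketch of that client-side call (using the hypothetical /retina.php endpoint and JSON shape described above):

// Ask the server whether a retina variant exists and use whichever path it returns.
function swapToRetina(img) {
  var url = '/retina.php?image=' + encodeURIComponent(img.src);
  fetch(url)
    .then(function (response) { return response.json(); })
    .then(function (data) {
      if (data.isRetina) {
        img.src = data.path; // the server confirmed the retina file exists
      }
    });
}

// Example: run it over every image on the page.
Array.prototype.forEach.call(document.querySelectorAll('img'), swapToRetina);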
Retina JS supports the attribute data-no-retina on the image tag.
This way it won't try to find the retina image.
Helpful for other people looking for a simple solution.
<img src="/path/to/image" data-no-retina />
I prefer a little more control over which images are replaced.
For all images that I've created a @2x version for, I changed the original image name to include @1x. (* See note below.) I changed retina.js slightly, so that it only looks at [name]@1x.[ext] images.
I replaced the following line in retina-1.1.0.js:
retinaImages.push(new RetinaImage(image));
With the following lines:
if (image.src.match(/@1x\.\w{3}$/)) {
  image.src = image.src.replace(/@1x(\.\w{3})$/, "$1");
  retinaImages.push(new RetinaImage(image));
}
This makes it so that retina.js only replaces @1x-named images with @2x-named images.
(* Note: in exploring this, it seems that Safari and Chrome automatically replace @1x images with @2x images, even without retina.js installed. I'm too lazy to track this down, but I'd imagine it's a feature of the latest WebKit browsers. As it is, retina.js and the above changes to it are necessary for cross-browser support.)
One of the solutions is to use PHP.
Replace the code from the first post with:
http = new XMLHttpRequest;
http.open('HEAD', "/image.php?p="+this.at_2x_path);
http.onreadystatechange = function() {
if (http.readyState != 4) {
return callback(false);
}
if (http.status >= 200 && http.status <= 399) {
if (config.check_mime_type) {
var type = http.getResponseHeader('Content-Type');
if (type == null || !type.match(/^image/i)) {
return callback(false);
}
}
RetinaImagePath.confirmed_paths.push(that.at_2x_path);
return callback(true);
} else {
return callback(false);
}
}
http.send();
and in your site root add a file named "image.php":
<?php
// Serve the requested image with an image/* content type if it exists;
// otherwise return an empty 200 response instead of a 404.
if (file_exists($_GET['p'])) {
    $ext = explode('.', $_GET['p']);
    $ext = end($ext);
    if ($ext == "jpg") $ext = "jpeg";
    header("Content-Type: image/".$ext);
    echo file_get_contents($_GET['p']);
}
?>
retina.js is a nice tool for fixed images on static web pages, but if you are serving user-uploaded images, the right tool is a server-side one. I imagine PHP here, but the same logic may be applied to any server-side language.
A good security habit for uploaded images is to not let users reach them by direct URL: if a user succeeds in uploading a malicious script to your server, they should not be able to launch it via a URL (www.yoursite.com/uploaded/mymaliciousscript.php). So it is usually good practice to retrieve uploaded images via some script, <img src="get_image.php?id=123456" />, if you can (and even better, keep the upload folder out of the document root).
Now the get_image.php script can serve the appropriate image, 123456.jpg or 123456@2x.jpg, depending on some conditions.
The approach of http://retina-images.complexcompulsions.com/#setupserver seems perfect for your situation.
First you set a cookie in your header by loading a file via JS or CSS:
Inside HEAD:
<script>(function(w){var dpr=((w.devicePixelRatio===undefined)?1:w.devicePixelRatio);if(!!w.navigator.standalone){var r=new XMLHttpRequest();r.open('GET','/retinaimages.php?devicePixelRatio='+dpr,false);r.send()}else{document.cookie='devicePixelRatio='+dpr+'; path=/'}})(window)</script>
At beginning of BODY:
<noscript><style id="devicePixelRatio" media="only screen and (-moz-min-device-pixel-ratio: 2), only screen and (-o-min-device-pixel-ratio: 2/1), only screen and (-webkit-min-device-pixel-ratio: 2), only screen and (min-device-pixel-ratio: 2)">#devicePixelRatio{background-image:url("/retinaimages.php?devicePixelRatio=2")}</style></noscript>
Now every time your script that retrieves uploaded images is called, it will have a cookie set asking for retina images (or not).
Of course you may use the provided retinaimages.php script to output the images, but you may also modify it to accommodate your needs, depending on how you produce and retrieve images from a database or hide the upload directory from users.
So not only may it load the appropriate image, but if GD2 is installed and you keep the original uploaded image on the server, it may even resize and crop it accordingly and save the two cached image sizes on the server. Inside the retinaimages.php source you can see (and copy) how it works:
<?php
$source_file = ...
$retina_file = ....
$cookie_value = false;
if (isset($_COOKIE['devicePixelRatio'])) {
    $cookie_value = intval($_COOKIE['devicePixelRatio']);
}
if ($cookie_value !== false && $cookie_value > 1) {
// Check if retina image exists
if (file_exists($retina_file)) {
$source_file = $retina_file;
}
}
....
header('Content-Length: '.filesize($source_file), true);
readfile($source_file); // or read from db, or create right size.. etc..
?>
Pros: the image is loaded only once (retina users on 3G at least won't load both 1x and 2x images), it works even without JS if cookies are enabled, it can be switched on and off easily, and there is no need to use Apple naming conventions. You load image 12345 and you get the correct DPI for your device.
With URL rewriting you may even make it totally transparent, by rewriting /get_image/1234.jpg to /get_image.php?id=1234.jpg.
My suggestion is that you recognize the 404 errors to be true errors, and fix them the way that you are supposed to, which is to provide Retina graphics. You made your scripts Retina-compatible, but you did not complete the circle by making your graphics workflow Retina-compatible. Therefore, the Retina graphics are actually missing. Whatever comes in at the start of your graphics workflow, the output of the workflow has to be 2 image files, a low-res and Retina 2x.
If a user uploads a photo that is 3000x2400, you should consider that to be the Retina version of the photo, mark it 2x, and then use a server-side script to generate a smaller 1500x1200 non-Retina version, without the 2x. The 2 files together then constitute one 1500x1200 Retina-compatible image that can be displayed in a Web context at 1500x1200 whether the display is Retina or not. You don’t have to care because you have a Retina-compatible image and Retina-compatible website. The RetinaJS script is the only one that has to care whether a client is using Retina or not. So if you are collecting photos from users, your task is not complete unless you generate 2 files, both low-res and high-res.
The typical smartphone captures a photo that is more than 10x the size of the smartphone’s display. So you should always have enough pixels. But if you are getting really small images, like 500px, then you can set a breakpoint in your server-side image-reducing script so that below that, the uploaded photo is used for the low-res version and the script makes a 2x copy that is going to be no better than the non-Retina image but it is going to be Retina-compatible.
With this solution, your whole problem of “is the 2x image there or not?” goes away, because it is always there. The Retina-compatible website will just happily use your Retina-compatible database of photos without any complaints.
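As a rough sketch of that workflow in Node.js (the sharp library and the photo.jpg / photo@2x.jpg naming are assumptions; the same logic applies to GD2 in PHP or any other image library), the upload handler treats the incoming photo as the 2x version and derives the 1x version from it:

const sharp = require('sharp');

// Treat the uploaded photo as the @2x version and generate the 1x version from it.
async function makeRetinaPair(uploadedPath, baseName) {
  const meta = await sharp(uploadedPath).metadata();

  // Keep the upload itself as the retina file.
  await sharp(uploadedPath).toFile(baseName + '@2x.jpg');

  // The non-retina version is simply half the width (height scales automatically).
  await sharp(uploadedPath)
    .resize(Math.round(meta.width / 2))
    .toFile(baseName + '.jpg');
}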
I have a "new items" badge on a page that I want to update immediately the page is loaded from the cache (i.e. when hitting "Back" or "Forward" to return to this page). What is the best way to accomplish this?
The setup is pretty simple. The layout for the app looks for new items every 8 seconds, and updates the badge + list of items accordingly.
$(function() {
setInterval( App.pollForNewItems, 8000 );
});
When someone navigates away from this page to look at the details of an item, a lot can happen. Things are "new" until any user has viewed them, and the app will likely have several users using it simultaneously (the kind of workflow used for a call center or support tickets).
To make sure that the badges are always up to date, I have:
$(window).bind('focus load', function ( event ) {
App.pollForNewItems();
});
..And though this works, polling for new items on 'load' is only useful when the page is loaded from the cache. Is there a reliable cross-browser way to tell if a page is being loaded from the cache?
Navigation Timing is available in most browsers now (IE9+):
http://www.w3.org/TR/navigation-timing/#sec-navigation-info-interface
if (!!window.performance && window.performance.navigation.type === 2) {
// page has been hit using back or forward buttons
} else {
// regular page hit
}
You can ask the web browser to not cache the page. Try these HTTP headers:
Cache-control: no-cache
Cache-control: no-store
Pragma: no-cache
Expires: 0
In particular, Cache-Control: no-store is interesting because it tells the browser not to store the page in memory at all, which prevents a stale page from being loaded when you hit the back/forward button.
If you do this instead, you don't have to poll for data on page load.
A partial, hacky solution is to have a var with the current time set on the server, and a var set to the current client time at the top of the page. If they differ by more than a certain threshold (1 minute?), then you could assume it's a cached page load.
Example JS (using ASP.Net syntax for the server side):
var serverTime = new Date('<%= DateTime.Now.ToUniversalTime().ToString() %>');
var pageStartTime = new Date(); // client time when the page starts running
var isCached = serverTime < pageStartTime &&
               pageStartTime.getTime() - serverTime.getTime() > 60000;
Alternatively, using cookies on the client side (assuming cookies are enabled), you can check for a cookie with a unique key for the current version of the page. If none exists, you write a cookie for it, and on any other page access, the existence of the cookie shows you that it's being loaded from the cache.
E.g. (assumes some cookie helper functions are available)
var uniqueKey = '<%= SomeUniqueValueGenerator() %>';
var currentCookie = getCookie(uniqueKey);
var isCached = currentCookie !== null;
setCookie(uniqueKey); //cookies should be set to expire
//in some reasonable timeframe
Personally, I would set a data attribute containing the item id on each element.
I.e.
<ul>
<li data-item-id="123">Some item.</li>
<li data-item-id="122">Some other item.</li>
<li data-item-id="121">Another one..</li>
</ul>
Your App.pollForNewItems function would grab the data-item-id attribute of the first element (if newest are first) and send it to the server with your original request.
The server would then only return the items WHERE id > ..., which you can then prepend to the list.
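A minimal sketch of that polling call (the /items/new endpoint, its after parameter, and the returned item shape are assumptions):

// Ask the server only for items newer than the newest one already in the list.
App.pollForNewItems = function () {
  var first = document.querySelector('ul li[data-item-id]');
  var lastSeenId = first ? first.getAttribute('data-item-id') : 0;

  $.getJSON('/items/new', { after: lastSeenId }, function (items) {
    // items is assumed to be an array of { id: ..., html: ... }, oldest first
    items.forEach(function (item) {
      $('ul').prepend('<li data-item-id="' + item.id + '">' + item.html + '</li>');
    });
    // update the "new items" badge count here as well
  });
};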
I'm still confused as to why you want to know if the browser has a cached version of the page.
Also, is there a reason for binding to load instead of ready?
Christian
good answer: https://stackoverflow.com/a/9870920/466363
You could also use Navigation Timing to measure the network latency in great detail.
Here is a good article: http://www.html5rocks.com/en/tutorials/webperformance/basics/
If the time difference between fetchStart and responseStart is very low, the page was loaded from cache, for example.
(The answer linked above is by stewe.)
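For example, a rough check along those lines (the 50 ms threshold is an arbitrary assumption to tune):

// If the response started almost immediately after the fetch began,
// the page very likely came out of the browser cache.
var timing = window.performance && window.performance.timing;
if (timing) {
  var latency = timing.responseStart - timing.fetchStart;
  var probablyFromCache = latency < 50; // milliseconds
}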