I want to update a webpage periodically, and I want the browser to reload only if the page is available. Right now I'm using this:
<script type="text/javascript">
$(document).ready(function() {
    setInterval("location.reload()", 66000);
});
</script>
Is it possible not to update if the page for some reason isn't available?
You could use an ajax check for a tiny resource ahead of time, something like:
$(document).ready(function() {
    setInterval(maybeRefresh, 66000);

    function maybeRefresh() {
        $.ajax({
            url: "/path/to/tiny/resource",
            type: "GET",
            success: function() {
                // That worked, do the full refresh
                location.reload();
            }
        });
    }
});
...but I would tend to think you'd be better off just loading the updated content instead. Put all of the primary content in an element with the id "main", then:
$(document).ready(function() {
    setInterval(function() {
        $("#main").load("/path/to/new/content");
    }, 66000);
});
That replaces the content of the #main element with the content received from the GET to /path/to/new/content. If the GET fails, no update.
I would probably also use a chained setTimeout instead of a setInterval and handle errors by trying to refresh more aggressively. Something like:
$(document).ready(function() {
    setTimeout(refresh, 66000);

    function refresh() {
        $.ajax({
            url: "/path/to/new/content",
            type: "GET",
            success: function(html) {
                // Success, update the content
                $("#main").html(html);
                // Reload in 66 seconds
                setTimeout(refresh, 66000);
            },
            error: function() {
                // Failed, try again in five seconds
                setTimeout(refresh, 5000);
            }
        });
    }
});
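If the server might stay down for a while, retrying every five seconds can hammer it. A variant of the chained-setTimeout approach (a sketch, not part of the original answer) backs off exponentially on repeated failures, capped at the normal 66-second interval:

```javascript
// Exponential backoff for the retry delay: 5s, 10s, 20s, ... capped at 66s.
function nextDelay(failures) {
    return Math.min(5000 * Math.pow(2, failures), 66000);
}

// Wire it up only when jQuery is present (i.e. in the browser).
if (typeof jQuery !== "undefined") {
    $(document).ready(function() {
        var failures = 0;
        setTimeout(refresh, 66000);

        function refresh() {
            $.ajax({
                url: "/path/to/new/content",
                type: "GET",
                success: function(html) {
                    failures = 0;           // reset the backoff on success
                    $("#main").html(html);
                    setTimeout(refresh, 66000);
                },
                error: function() {
                    setTimeout(refresh, nextDelay(failures));
                    failures++;             // double the delay next time
                }
            });
        }
    });
}
```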
You may want to use the HTML5 cache manifest feature (AppCache), although note that it has since been deprecated and removed from modern browsers.
In order to do that, specify a manifest file path in your html element's manifest attribute:
<!DOCTYPE HTML>
<html manifest="cache.manifest">
Then add all the website's resources to the manifest file, e.g.:
CACHE MANIFEST
# v11
/index.html
/main/features.js
/main/settings/index.css
http://example.com/images/scene.jpg
http://example.com/images/world.jpg
Now, if you do a GET request for index.html, the browser will only make an actual request if the cache manifest file is available and has changed (to force an update, you can store a version number in a cache manifest comment and bump it when needed). Otherwise, the page is reloaded from the browser's cache.
Related
I'm using billboard.js to display this type of chart somewhere below the fold of my page.
Adding the billboard JS code is causing my mobile page speed to decrease from 90+ to 75-80.
My code is structured like this:
<!-- chart area -->
<div class="chart_holder">
<div id="chart_area"></div>
</div>
...
...
...
<script src="/js/d3.v5.min.js"></script>
<script src="/js/billboard.min.js"></script>
<script defer src="/js/moment.min.js"></script>
<script type="text/javascript">function drawChart(t){$(".chart_holder").show(),.........);</script>
Is there any way to make the chart load later, in order to solve google page speed issues like:
Reduce JavaScript execution time
Keep request counts low and transfer sizes small (Extra JS files are pretty big)
Minimize main-thread work (Script Evaluation)
While at the same time, allow Search engine crawlers to understand that I'm providing added value to my visitors by displaying them data charts? It might be good for SEO so I don't want to hide the chart in a way that google can't see it for example.
EDIT:
This is how the chart is called:
$(document).ready(function() {
    $.ajax({
        type: "GET",
        url: "chart/data.php",
        dataType: "json",
        data: {
            item_id: 'item'
        },
        success: function(t) {
            void 0 !== t.criteria ? t.criteria[0].length <= 2 ? $(".chart_holder").hide() : drawChart(t) : $(".chart_holder").hide()
        },
        error: function() {
            console.log("Data extraction failed")
        }
    });
});
CAUTION: this is based on the assumption that Google's page speed calculation does not count the script if its loading is delayed by one second; I haven't verified this myself. The page speed calculation script may change in the future, so be sure to test with and without the delay.
NOTE: of course, this can hurt the user experience if your entire page display relies on the script. For the OP's problem, it seems acceptable.
The trick here is to use a 1000 ms timeout after page load to fetch the script, then display the chart once it is loaded:
$(document).ready(function() {
    setTimeout(function() {
        // you may need to change the URL to a full URL here (with 'http://domain.etc')
        $.getScript('/js/billboard.min.js', function() {
            // you can probably keep this in your <script> tag; put it here if you get
            // 'X is undefined' errors coming from calls to billboard's functions
            function drawChart(t){$(".chart_holder").show(),.........);

            // your ajax call and the chart display
            $.ajax({
                type: "GET",
                url: "chart/data.php",
                dataType: "json",
                data: {
                    item_id: 'item'
                },
                success: function(t) {
                    void 0 !== t.criteria ? t.criteria[0].length <= 2 ? $(".chart_holder").hide() : drawChart(t) : $(".chart_holder").hide()
                },
                error: function() {
                    console.log("Data extraction failed")
                }
            });
        });
    }, 1000);
});
ALSO: it's not clear from the question whether the chart needs all three scripts or only billboard. If you want to delay all three, chain the getScript calls like this:
$.getScript('/js/d3.v5.min.js', function() {
    $.getScript('/js/billboard.min.js', function() {
        $.getScript('/js/moment.min.js', function() {
            //the rest
        });
    });
});
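To keep the list of scripts in one place, the nesting can also be expressed with a small recursive helper (a sketch; it relies only on $.getScript's standard url + callback signature, and the order still matters because billboard.js depends on d3 being loaded first):

```javascript
// Load scripts strictly one after another, then call done().
// getScript is passed in as a parameter so the helper is not tied to jQuery.
function loadScriptsInOrder(urls, getScript, done) {
    if (urls.length === 0) {
        done();
        return;
    }
    getScript(urls[0], function() {
        loadScriptsInOrder(urls.slice(1), getScript, done);
    });
}

// Wire it up only when jQuery is present (i.e. in the browser).
if (typeof jQuery !== "undefined") {
    loadScriptsInOrder(
        ['/js/d3.v5.min.js', '/js/billboard.min.js', '/js/moment.min.js'],
        function(url, cb) { $.getScript(url, cb); },
        function() {
            //the rest (drawChart, the ajax call, ...)
        }
    );
}
```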
Minimizing HTTP requests
billboard.js provides a packaged version of d3 + billboard.js.
https://cdn.jsdelivr.net/npm/billboard.js/dist/billboard.pkgd.min.js
So, if you need to reduce the file count, use the packaged version.
Lazy rendering
The 1.11.0 release added a new option to delay chart rendering (internally it holds the rendering process):
https://naver.github.io/billboard.js/demo/#ChartOptions.LazyRender
var chart = bb.generate({
    ...,
    render: {
        lazy: true,
        observe: false
    }
});

setTimeout(function() {
    // call '.flush()' at the point you want it to be rendered
    chart.flush();
}, 1000);
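Since the chart sits below the fold, a fixed 1000 ms timeout is a guess. An alternative sketch (using the chart variable from the snippet above and the #chart_area id from the question) flushes only when the chart scrolls into view, via IntersectionObserver:

```javascript
// Returns true when any observed entry has become visible.
function anyVisible(entries) {
    return entries.some(function(entry) { return entry.isIntersecting; });
}

// Wire it up only in browsers that support IntersectionObserver.
if (typeof IntersectionObserver !== "undefined") {
    var observer = new IntersectionObserver(function(entries) {
        if (anyVisible(entries)) {
            chart.flush();         // render the lazily-generated chart
            observer.disconnect(); // only needs to happen once
        }
    });
    observer.observe(document.getElementById("chart_area"));
}
```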
This is my code. Basically, I have a Reload.gif that I want to start playing on my page before I send an ajax request to reload my data. However, what ends up happening is that it only starts playing after my success function is called. This is what the timeline looks like:
0:00 seconds: 1 logged to console
0:00 seconds: 2 logged to console
0:10 seconds: 3 logged to console
0:10 seconds: GIF starts playing.
This doesn't make sense, is setting the src of an img async or something?
Image code:
<img id="reloadIcon" src="/img/Reload.png" style="width: 25px; height: 25px"/>
jQuery.ajax({
    url: url,
    type: 'GET',
    timeout: 20000,
    beforeSend: function() {
        console.log(1);
        document.getElementById('reloadIcon').src = "/img/Reload.gif";
        console.log(2);
    },
    success: function (result) {
        console.log(3);
    }
});
Load the image before the $.ajax() call, and toggle the CSS display property of the <img> element in the beforeSend function instead of changing the .src of the <img> element.
jQuery.ajax({
    url: url,
    type: 'GET',
    timeout: 20000,
    beforeSend: function() {
        console.log(1);
        document.getElementById('reloadIcon').style.display = "block";
        console.log(2);
    },
    success: function (result) {
        document.getElementById('reloadIcon').style.display = "none";
        console.log(3);
    },
    error: function() {
        document.getElementById('reloadIcon').style.display = "none";
    }
});
Fetching the image by changing src is asynchronous. You can simply set the src when the page loads but keep the image element invisible, then make it visible when the ajax call begins.
If you change the src attribute, the browser has to complete a number of steps: look for the image in its cache, possibly re-request the image, parse it, and so on.
Second, your JS code shares one thread with many other tasks. Browsers tend to batch style changes internally and keep executing the following JS code first, as long as the pending changes cannot affect the behaviour of your code. If you want to force the assignment in such a case, read a layout-dependent CSS property, and the browser will flush the pending changes first. I suspect (without having tested it) that attribute changes behave similarly.
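For completeness, a small sketch of the "read a property to force the assignment" idea described above (the element id is taken from the question):

```javascript
// Setting display and then reading a layout property (offsetHeight) forces
// the browser to flush the pending style change before the code continues.
function showAndFlush(el) {
    el.style.display = "block";
    return el.offsetHeight; // the read triggers a synchronous style/layout flush
}

// Use it only where a DOM is available (i.e. in the browser).
if (typeof document !== "undefined") {
    showAndFlush(document.getElementById("reloadIcon"));
    // ...now start the ajax call; the icon's style change has been applied
}
```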
The best practice in this case is the post from guest271314; hiding and showing the outer container is much faster and more reliable.
I am developing a local site for a company (internal use only: offline, without a server). I have a main page with a main div that contains 3 different divs. Each div is linked to a page, and the onclick event of each div loads the linked page into the main div. So I have to check, in the document-ready function, whether each page exists and, if not, delete the div linked to that page. How can I check if a page exists locally? I've found many answers that check whether a page exists via the connection status, but my HTML will only work offline and locally, so I can't use that method.
EDIT - SOLVED
I've solved this using the script from @che-azeh:
function checkIfFileLoaded(fileName) {
    $.get(fileName, function(data, textStatus) {
        if (textStatus == "success") {
            // execute a success code
            console.log("file loaded!");
        }
    });
}
If the file loads successfully, I change the content of a hidden div, which tells another script whether to remove each of the three divs.
This function checks if a file can load successfully. You can use it to try loading your local files:
function checkIfFileLoaded(fileName) {
    $.get(fileName, function(data, textStatus) {
        if (textStatus == "success") {
            // execute a success code
            console.log("file loaded!");
        }
    });
}
checkIfFileLoaded("test.html");
I suggest you run a local web server on the client's computer (see also the edit below on local XHR access).
With a local web server, they can start it as if it were an application. You could, for example, use node's http-server. You could even install it as a node/npm package, which also makes deployment easier.
By using a proper http server (locally in your case) you can use xhr requests:
$(function() {
    $.ajax({
        type: "HEAD",
        async: true,
        url: "http://localhost:7171/myapp/somefile.html"
    }).done(function() {
        console.log("found");
    }).fail(function() {
        console.log("not found");
    });
});
EDIT:
Firefox
Another post (@che-azeh) has brought to my attention that Firefox does allow XHR on the file "protocol". At the time of this writing, the above works in Firefox using a URL of just somefile.html and the file scheme.
Chrome
Chrome has an option, allow-file-access-from-files (http://www.chrome-allow-file-access-from-file.com/), which also allows local XHR requests.
This flag is intended for testing purposes:
you should be able to run your tests in Google Chrome with no hassles
I would still suggest the local web server, as it makes you independent of these browser flags and protects you from regressions once Firefox/Chrome decide to drop support for this.
You can attempt to load the page within a try-catch construct. If the page exists, it will be loaded; if it doesn't, you can set the related div to hidden within the catch.
Try to access the page using $.ajax, and use the error: option to run a callback function that removes the DIV linked to the page.
$.ajax({
    url: "page1.html",
    error: function() {
        $("#page1_div").remove();
    }
});
You can loop this code over all the DIVs.
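The loop over the three divs could look like this (a sketch; the pageN.html / #pageN_div naming is an assumption based on the question):

```javascript
// For each page name, run the check and remove the linked div when the
// page cannot be loaded. The check function is injected so any loading
// strategy ($.ajax, $.get, ...) can be plugged in.
function checkPages(names, check) {
    names.forEach(function(name) {
        check(name + ".html", function() {
            $("#" + name + "_div").remove(); // page is missing: drop its div
        });
    });
}

// Wire it up only when jQuery is present (i.e. in the browser).
if (typeof jQuery !== "undefined") {
    $(document).ready(function() {
        checkPages(["page1", "page2", "page3"], function(url, onMissing) {
            $.ajax({ url: url, error: onMissing });
        });
    });
}
```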
You can use jQuery's load() function:
$("div").load("/test.html", function(response, status, xhr) {
    if (status == "error") {
        var msg = "Sorry but there was an error: ";
        $(this).html(msg + xhr.status + " " + xhr.statusText);
    }
});
I am using the following code on our dashboard to refresh it constantly without flicker (from How can I refresh a page with jQuery?):
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
<script>
setTimeout(function() {
    $.ajax({
        url: "",
        context: document.body,
        success: function(s, x) {
            $(this).html(s);
        }
    });
}, 4000);
</script>
However, this is also causing the javascript to reload each time, due to some cache breakers.
Google is sending the following headers:
In the interest of not getting myself and my clients blocked by Google (might as well become a Mennonite at that point), is there a way to use the Google CDN without causing these extra requests?
Warning, untested:
$.ajax({
    url: "",
    dataType: "text", // don't parse the HTML; we'll do it manually
    success: function(html) {
        var $newDoc = $($.parseHTML(html, document, false)); // false prevents scripts from being parsed
        $('body').replaceWith($newDoc.find("body")); // only replace the body
    }
});
A better solution would be to template your body.
My site structure is sequential (as in page1.html leads to page2.html, page2.html leads to page3.html, etc.). I want to preload some images from the third page on the second page. I've found this wonderful bit of code here on SO:
$.ajax({
    url: 'somePage.html',
    dataType: "html",
    success: function(data) {
        $(data).hide().appendTo('#someDiv');
        var imagesCount = $('#someDiv').find('img').length;
        var imagesLoaded = 0;
        $('#someDiv').find('img').load(function() {
            ++imagesLoaded;
            if (imagesLoaded >= imagesCount) {
                $('#someDiv').children().show();
            }
        });
        var timeout = setTimeout(function() {
            $('#someDiv').children().show();
        }, 5000);
    }
});
It works beautifully at dumping the entire contents of page3.html onto page2.html. The problem is, I don't want the entire contents; I just want the images, and I want them hidden and ready for when the user actually loads page3.html. The above snippet brings audio and, well, everything else along with it. So my question is: will this hacked-up version below work for my purposes?
$.ajax({
    url: 'page3.html',
    dataType: "html",
    success: function(data) {
        var imagesCount = $(data).find('img').length;
        var imagesLoaded = 0;
        $(data).find('img').load(function() {
            ++imagesLoaded;
            if (imagesLoaded >= imagesCount) {
                //halt? do something?
            }
        });
    }
});
Again, all I want is for page3.html's images to be preloaded on page2.html. Will this do the trick? And how can I test to verify?
I believe the simplest way, in your case, is just to use jQuery.get and specify the images (or any other objects) you want to preload.
For example,
$.get('images/image1.jpg');
$.get('images/image2.jpg');
// etc
This way, you can specify which images from the next page you want to preload in the browser.
The $.get function is just an abbreviated version of the $.ajax function. In your case, you just want to "get" the images so that they end up in the browser's cache; when the user reaches the next html page, the images are already loaded.
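An equivalent preload without jQuery (using the same example paths as above) simply assigns the URLs to Image objects; setting src starts the request, and the browser caches the response just as with $.get:

```javascript
// Create one Image per URL; assigning src kicks off the download.
function preloadImages(srcs) {
    return srcs.map(function(src) {
        // fall back to a bare object where Image is unavailable (outside a browser)
        var img = typeof Image !== "undefined" ? new Image() : {};
        img.src = src;
        return img;
    });
}

preloadImages(['images/image1.jpg', 'images/image2.jpg']);
```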
How to verify
If you add the sample code above to page2 and then visit that page with the Network tab open in Firebug or Chrome dev tools, you'll see GET requests sent for the images, which are then loaded into the browser's cache.