Loading multiple Google Charts takes too long - JavaScript

I have an HTML file with JavaScript like this:
function drawCharts() {
    var graph;
    var data;
    var options;

    options = { title: 'Title1' };
    data = google.visualization.arrayToDataTable([['A','B'],['0',7.2],['2',7.2],['4',7.3],['6',6.4],['8',6.4],['10',6.3],['12',6.2],['14',6.2],['16',6],['18',6],['20',6.5],['22',7.2]]);
    graph = new google.visualization.LineChart(document.getElementById('chart_div1'));
    graph.draw(data, options);
    ...
    data = google.visualization.arrayToDataTable([['A','B'],['0',14.5],['2',14.5],['4',14.5],['6',13.2],['8',13.1],['10',12.9],['12',12.8],['14',12.7],['16',12.5],['18',12.4],['20',12.2],['22',14.5]]);
    graph = new google.visualization.LineChart(document.getElementById('chart_div50'));
    graph.draw(data, options);
}
google.setOnLoadCallback(drawCharts);
(This file was generated previously.) The problem is that loading 50 charts takes quite a long time. Ideally the charts would be drawn independently, without waiting for the others; I would like to ask how that could be done.
Edit: Inspired by PhonicUK's answer, I've found an Element 'in view' plugin with which I call a function when a given div scrolls into view and is still empty.

You can do a few different things here:
The first is that with 50 charts, I'm assuming not all of them are visible at the same time. So you can trap the window.onscroll event to know when the user is scrolling and only create the graphs when they're just below the bottom of the visible area. This means that only the currently visible and soon-to-be visible graphs are created, and the rest don't exist until they're needed.
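A minimal sketch of that lazy approach, assuming the generated file is reworked so the per-chart drawing code lives in a hypothetical drawChart(i) function that draws into the matching chart_divN:

var drawn = {};  // chart numbers that have already been drawn

function drawVisibleCharts() {
    for (var i = 1; i <= 50; i++) {
        if (drawn[i]) continue;
        var div = document.getElementById('chart_div' + i);
        var rect = div.getBoundingClientRect();
        // Draw once the div is less than one viewport height below the fold.
        if (rect.top < window.innerHeight * 2) {
            drawn[i] = true;
            drawChart(i);  // hypothetical: draws chart i into its div
        }
    }
}

window.onscroll = drawVisibleCharts;
google.setOnLoadCallback(drawVisibleCharts);  // draw the initially visible charts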
The second is to generate the graphs server side and send them across as images. This could also be combined with the above to save on load time.

Related

Can't center an element generated from XMLHttpRequest

I am building a fairly complex web app. The main page loads and then all menu items (300+), form submissions, etc. are loaded via XMLHttpRequest. I have a basic "panel" template that allows the panel to look and act (drag, resize, etc.) like a child window of the app. I load all pages requested via XMLHttpRequest into the content section of the "panel" template.
The problem I am running into is that when I try to center the new "panel", the code does not seem to pick up the new panel's size. My code is set up so that when a menu item is clicked, it runs a function that calls the XMLHttpRequest function, passing it a callback. The callback clones the panel template (so I can change several element attributes), appends the response to the cloned "panel" template inside a document fragment, and appends all of that to the displayed HTML, after which I read the new panel's size and try to center it, but it always fails.
As each function has a lot more going on than what I spelled out above, what follows is hopefully an accurate stripped-down version of the relevant parts of the code.
The XMLHttpRequest function is nothing unusual, and once it has a successful response the callback runs the "OpenPanel" function (see below).
Callback function:
function OpenPanel(e, response)
{
    var rescontent = response.querySelector('.content');
    var newid = rescontent.getAttribute('data-id');
    var titlebar = rescontent.getAttribute('data-title');
    var frag = document.createDocumentFragment();
    var clonepanel = document.getElementById('paneltemplate').cloneNode(true);
    clonepanel.id = newid;
    frag.appendChild(clonepanel);
    frag.querySelector('.titlebar').innerHTML = titlebar;
    var replacelem = frag.querySelector('.content');
    replacelem.parentNode.replaceChild(rescontent, replacelem);
    document.getElementById('mainbody').appendChild(frag);
    var newpanel = document.getElementById(newid);
    newpanel.addEventListener('mousedown', PanelSelect, true);
    newpanel.style.cssText = PanelPosition(newpanel);
}
PanelPosition function:
function PanelPosition(panel)
{
    var lh = panel.clientHeight;
    var lw = panel.clientWidth;
    var wh = panel.parentNode.clientHeight;
    var ww = panel.parentNode.clientWidth;
    var paneltoppos = (wh - lh) / 2;
    var panelleftpos = (ww - lw) / 2;
    return 'top: ' + paneltoppos + 'px; left: ' + panelleftpos + 'px;';
}
I tried using setTimeout with a delay of 1 ms, but that causes the panel to flash on the screen in the wrong position before it's moved, which from my perspective makes the app feel cheap, like only a second-best effort was given. And even if it didn't flash, setTimeout seems more like a hack than a solution.
I have tried this code with a few different "pages" (XHR requests), and I almost get the sense that the XMLHttpRequest hasn't finished loading when the callback function is run (which I doubt is possible). For example, I put
console.log('top: '+wh+' - '+lh+'(wh - lh) left: '+ww+' - '+lw+'(ww - lw)');
in the "PanelPosition" function, and without the setTimeout the panel height (lh) and width (lw) are between 100 and 200 pixels. But with setTimeout the panels usually are over 500 pixels in height and width. And of course that severely effects where centered is.
I have tried several searches over the last few days but nothing has turned up. So if there is a good post or article describing the problem and the solution, feel free point me to it.
Should note that as I am running the web app exclusively in node-webkit/nw.js (chromium/webkit browser) there is no need for a cross-browser solution.
Well, I guess I am going to answer my own question.
While looking for something completely unrelated, I found this SO post. The accepted answer gives a clue that explains the issue.
Unfortunately, it seems that you have to hand the controls back to the browser (using setTimeout() as you did) before the final dimensions can be observed; luckily, the timeout can be very short.
Basically, the browser does not lay out the appended element until the end of the function call, so setTimeout is one solution to my problem. But there is a second solution: if I just have to wait till the end of the function, then let's make that one heck of a small (focused) function. So I moved all the code needed to create the appended "panel" into a totally separate function, and in doing so I solved another pending issue.
Edit:
Or not. Apparently a separate function doesn't work now, but it did before I posted. Who knows maybe I didn't save the change before reloading the page.
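For reference, a minimal sketch of the setTimeout workaround that also avoids the flash mentioned above; the assumption is the same OpenPanel flow, with the panel kept invisible (but still laid out) until it has been measured and centered:

// Inside OpenPanel, after building the fragment:
clonepanel.style.visibility = 'hidden';  // invisible, but still participates in layout
document.getElementById('mainbody').appendChild(frag);
var newpanel = document.getElementById(newid);
setTimeout(function () {
    // Control has returned to the browser once, so a layout pass has run
    // and PanelPosition now sees the panel's final dimensions.
    newpanel.style.cssText = PanelPosition(newpanel) + ' visibility: visible;';
}, 0);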

Popout Map for Google Maps application

I've written a single-page application using Google Maps in conjunction with AngularJS.
In short I have three columns - one filled with tools, one with a table of data, and the final is the geographical representation of the table's data.
The data is updated every minute.
The users all have multiple monitors, and I've been asked to add the ability to have the map in one window and the table in another, so that the map can have more real estate and we can add more columns to the table.
The solution that I really want is to create a new window using JS and move the map's DIV to this window.
This works in Chrome, and exactly as I expected: moving the div doesn't break Google Maps' or Angular's ties to the DOM, since the DIV is never deleted nor cloned, just moved within the DOM.
However, IE11 treats the new window as an entirely separate document and disallows the use of appendChild (throwing hierarchy exceptions).
I have tried using adoptNode to move the node to the new window's document, but IE throws a 'No such interface supported' error on the adoptNode call. IE does, however, allow you to use adoptNode to move a node between iframes; see here: http://jsfiddle.net/nx9u4y9w/9/
This works in FF/Chrome.
var f1 = window.f1,
    f2 = window.f2,
    pop = window.open(f2.location.href, 'pop'),
    x = 0;

window.fx = function (o) {
    var source = f1,
        target = f2,
        map;
    if (x++ == 1) {
        source = f2;
        target = pop;
        o.disabled = true;
    }
    map = source.document.getElementById('map-canvas');
    var adopted = target.document.adoptNode(map);
    target.document.body.appendChild(adopted);
}
Is there a simple way to get this working in IE11?
I wanted to avoid creating a new HTML page to load into the window, as it would cause a serious break in the UX while the map reloads, plus there's the code overhead of having to set up inter-window messaging using postMessage, etc.
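For context, a minimal sketch of the approach that works in Chrome, assuming #map-canvas is the map's container in the main document; the commented lines mark where IE11 fails:

var popup = window.open('', 'mapWindow', 'width=800,height=600');
var mapDiv = document.getElementById('map-canvas');

// Moving (not cloning) the node keeps Google Maps' and Angular's bindings
// intact, because the element itself is never destroyed.
popup.document.body.appendChild(mapDiv);  // IE11: hierarchy exception here
// popup.document.adoptNode(mapDiv);      // IE11: 'No such interface supported'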

External Ajax: All at Once, As Needed, or Something Else?

I created a Magic: The Gathering site for my friends and me to use. On this site we upload our decks of cards, and on the page where you can view all the cards in a deck, each card name is a link to the card on http://gatherer.wizards.com/. For ease of use, though, I made it so that when you hover over any of the card names, the card image gets Ajax'd in from Gatherer, letting you see the card without having to click the link.
The question is: should I load all of the ~40 or so card images all at once when the page loads, or should I continuously load the images as they are hovered over, or is there some other way I should be doing it?
As it stands, I load each card as it is hovered over. My concern is that, as people mouse up and down the list, that is a LOT of requests to Gatherer. It would probably save requests to load them all up at the start, but I'm not sure if Gatherer would be upset with me for a sudden flurry of requests every time someone loads one of the decks on my site.
A solution I thought of was to load cards as they are hovered over, but save the image in a hidden container and just reload it when they mouse over it AGAIN. Thus if they load the page and don't look at anything, no needless requests were sent, but if they stay on the page for 30 minutes looking at every card over and over again, we don't inundate Gatherer with requests.
I just don't know if the method I'm using is wasteful - from a bandwidth standpoint for me or for gatherer, or from any other standpoint that I'm not familiar with. Are there any golden rules of external Ajax that I should know, for instance?
The method I'm currently using, which I assume is probably the worst implementation possible, but it was a proof of concept:
$(document).ready(function () {
    var container = $('#cardImageHolder');

    $('.bumpin a').mouseenter(function () {
        doAjax($(this).attr('href'));
        return false;
    });

    function doAjax(url) {
        // if it is an external URI
        if (url.match('^http')) {
            // call YQL
            $.getJSON("http://query.yahooapis.com/v1/public/yql?" +
                "q=select%20*%20from%20html%20where%20url%3D%22" +
                encodeURIComponent(url) +
                "%22&format=xml&callback=?",
                // this function gets the data from the successful JSON-P call
                function (data) {
                    // if there is data, filter it and render it out
                    if (data.results[0]) {
                        var filtered = filterData(data.results[0]);
                        var src = $(filtered).find('.leftCol img').first().attr('src');
                        var fixedImageSrc = src.replace("../../", "http://gatherer.wizards.com/");
                        var image = $(filtered).find('.leftCol img').first().attr('src', fixedImageSrc);
                        container.html(image);
                    // otherwise tell the world that something went wrong
                    } else {
                        var errormsg = "<p>Error: can't load the page.</p>";
                        container.html(errormsg);
                    }
                }
            );
        // if it is not an external URI, use Ajax load()
        } else {
            $('#target').load(url);
        }
    }

    // filter out some nasties
    function filterData(data) {
        data = data.replace(/<\/?body[^>]*>/g, '');
        data = data.replace(/[\r\n]+/g, '');
        data = data.replace(/<!--[\S\s]*?-->/g, '');
        data = data.replace(/<noscript[^>]*>[\S\s]*?<\/noscript>/g, '');
        data = data.replace(/<script[^>]*>[\S\s]*?<\/script>/g, '');
        data = data.replace(/<script.*\/>/, '');
        return data;
    }
});
No, there are no Golden Rules of Ajax. Loading 40 images up front would minimize load time upon hover, but would greatly increase how much bandwidth is used when the page is first loaded.
You will always have these types of balance questions. It's up to you to decide what is best, and tweak it based on empirical data.
"A solution I thought of was to load cards as they are hovered over,
but save the image in a hidden container and just reload it when they
mouse over it AGAIN. Thus if they load the page and don't look at
anything, no needless requests were sent, but if they stay on the page
for 30 minutes looking at every card over and over again, we don't
inundate Gatherer with requests."
This sounds reasonable.
If I were you, though, I would load every picture when the user first loads the page. Let the browser cache the images and you don't have to worry about it. Plus, this is likely the easiest method. Don't overcomplicate things when you don't have to :)
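A minimal sketch of that preloading suggestion, assuming jQuery as in the question and a hypothetical cardUrls array holding the ~40 image URLs for the current deck:

$(document).ready(function () {
    var cache = [];
    $.each(cardUrls, function (i, url) {
        var img = new Image();
        img.src = url;    // the browser fetches the image and keeps it in its cache
        cache.push(img);  // hold a reference so the images aren't garbage-collected
    });
});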

Is it possible to define a layer which will retry to load, e.g. with exponential back-off?

I am using OpenLayers to connect to a home-grown server, and unlike professional-grade servers such as Google's or CloudMade's, that box actually takes a while to calculate the result for a specific tile. And as it is a mathematical function I am plotting, there is little chance of accelerating the server or even pre-rendering the tiles.
My initial trials with Leaflet quickly led to the conclusion that Leaflet leaves all of the reloading and load-error handling to the browser, while OpenLayers at least has an event that fires when the tile server returns an error code.
The idea I am pursuing is basically to start rendering a tile when it is requested and return an HTTP 503 immediately, relying on the client to try again.
To try again, I implemented a simple layer like this:
var myLayer = new OpenLayers.Layer.OSM.MYLayer("mine", {
    'transparent': "true",
    'format': "image/png",
    'isBaseLayer': false
});

myLayer.events.register("tileerror", myLayer, function (param) {
    // Try again, doubling the wait on each failure (exponential back-off):
    var targetURL = param.tile.layer.getURL(param.tile.bounds);
    var tile = param.tile;
    tile.timeout = tile.hasOwnProperty("timeout") ? tile.timeout * 2 : 1000;
    setTimeout(function (tileToLoad, url) {
        // Only reload if the tile still points at the same URL, i.e. it
        // has not been reused for a different area in the meantime.
        if (tileToLoad.url === url) {
            tileToLoad.clear();
            tileToLoad.url = url;
            tileToLoad.initImage();
        }
    }.bind(undefined, tile, targetURL), tile.timeout);
});
I figured out the code required to reload a tile from the OpenLayers source, but maybe there is a cleaner way to accomplish this.
My problem is: the tiles themselves are reused, as are the divs in the DOM, so the reload procedure might actually try to reload a tile into a DIV that has long since been successfully reused, e.g. because the user scrolled somewhere else where the server was able to provide data quickly.
The question, I guess, boils down to this: is there an official way to use the tileerror event to simply retry loading, or at least a simpler way in the API to trigger a reload? I spent quite a while in the OpenLayers source itself but couldn't shed light on why it still goes wrong (the test for tileToLoad.url == url didn't really do it).
Thanks for your help!
OK, after some more trial and error I found that I could add an eventListener to my Layer class that does what I want: try to reload the tile again after a certain wait. The trick was to call setImgSrc() for cleanup and then draw() with the true parameter, which is effectively an (undocumented) force flag. Thanks go to the source code!
OpenLayers.Layer.OSM.MyLayer = OpenLayers.Class(OpenLayers.Layer.OSM, {
    initialize: function (name, options) {
        var url = [
            "xxxx"
        ];
        options = OpenLayers.Util.extend({
            "tileOptions": {
                eventListeners: {
                    'loaderror': function (evt) {
                        // Reload later
                        window.setTimeout(function () {
                            console.log("Drawing ", this);
                            this.setImgSrc();  // clear the failed image
                            this.draw(true);   // true forces the redraw
                        }.bind(this), 3000);   // e.g. after 3 seconds
                    }
                }
            }
        }, options);
        var newArguments = [name, url, options];
        OpenLayers.Layer.OSM.prototype.initialize.apply(this, newArguments);
    },

    CLASS_NAME: "OpenLayers.Layer.OSM.MyLayer"
});
You should have a look at the following resources:
http://dev.openlayers.org/docs/files/OpenLayers/Util-js.html#Util.IMAGE_RELOAD_ATTEMPTS
http://dev.openlayers.org/apidocs/files/OpenLayers/Tile-js.html
http://dev.openlayers.org/docs/files/OpenLayers/Tile/Image-js.html

Delaying execution of Javascript function relative to Google Maps / geoxml3 parser?

I'm working on implementing a Google map on a website with our own tile overlays and KML elements. I was previously asked to write code so that, for instance, when the page is loaded from a specific URL, it initializes with one of the tile overlays already enabled. Recently, I've been asked to do the same for the buildings outlined by KML elements, so that arriving at the page with a specific URL automatically zooms, centers, and displays information on the building.
However, while starting with the tile overlays works, the building KML does not. After some testing, I've determined that when the code which checks the URL executes, the page is still loading the KML elements, which therefore do not yet exist for the code to compare against or use:
Code for evaluating URL (placed at the end of onLoad="initialize()")
function urlClick() {
    var currentURL = window.location.href;  // Retrieve page URL
    var URLpiece = currentURL.slice(-6);    // Pull the last 6 characters (for testing)
    if (URLpiece === "access") {            // If the resulting string is "access":
        access_click();                     // Display the accessibility overlay
    } else if (URLpiece === "middle") {     // Else if the string is "middle":
        facetClick('Middle College');       // Click on building "Middle College"
    }
}
facetClick function:
function facetClick(name) {  // Convert building name to building ID.
    for (var i = 0; i < active.placemarks.length; i++) {
        if (active.placemarks[i].name === name) {
            sideClick(i);  // Click the building whose id matches "Middle College"
        }
    }
}
Firebug Console Error
active is null
for (var i = 0; i < active.placemarks.length; i++) {
active.placemarks holds whichever KML elements are loaded on the page, and it being null means no KML has been loaded yet. In short, I have a timing problem, and I can't seem to find a suitable place for the URL code so that it executes after the KML has loaded. As noted above, I placed it at the end of onLoad="initialize()", but it appears that, instead of waiting for the KML earlier in the function to finish loading, the remainder of the function is executed:
onLoad="initialize()"
information();    // Use the button variables' initial state to set up the description
buttons();        // and button state
button_hover(0);  // and button description to neutral.

// Create and arrange the Google Map.
// Create basic tile overlays.

// Set up the parser to work with KML elements.
myParser = new geoXML3.parser({  // Parser: takes KML and converts it to JS.
    map: map,                    // Applies parsed KML to the map
    singleInfoWindow: true,
    afterParse: useTheData       // Allows us to use the parsed KML in a function
});
myParser.parse(['/maps/kml/shapes.kml', '/maps/kml/shapes_hidden.kml']);

google.maps.event.addListener(map, 'maptypeid_changed', function () {
    autoOverlay();
});

// Create other tile overlays to appear over the KML elements.
urlClick();
I suspect one of my issues lies in using the geoxml3 parser (http://code.google.com/p/geoxml3/), which converts our KML files to JavaScript. While the page has finished loading all of its elements, the map on the page is still loading, including the KML elements. I have also tried placing urlClick() in various places in the parser itself which appear to execute after all the shapes have been parsed, but I've had no success there either.
While I've been intending to strip out the parser, I would like to know if there is any way of executing "urlClick" after the parser has returned the KML shapes. Ideally, I don't want an arbitrary wait such as "wait 3 seconds, then go", as my various browsers all load the page in different amounts of time; rather, I'm looking for some way to say "when the parser is done, execute", or "when the Google map is completely loaded, execute", or perhaps even "hold until the parser is complete before advancing to urlClick".
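For reference, a minimal sketch of chaining onto the parser's own afterParse hook, which the initialize code above already uses; the asker reports trying placements inside the parser without success, so this is shown only as the parser's documented completion signal, not as a confirmed fix (assumption: useTheData is the existing handler from the initialize snippet):

myParser = new geoXML3.parser({
    map: map,
    singleInfoWindow: true,
    afterParse: function (docs) {  // fires once the KML has been fetched and parsed
        useTheData(docs);          // run the existing handler first
        urlClick();                // active.placemarks should exist by this point
    }
});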
Edit: Here are links to the map with the basic form of the issue found above. Since I've been developing the next update to the map on a test server, facetClick() is not part of this live version and I instead use its output function sideClick(); however the error is still the same in this arrangement:
active is null
google.maps.event.trigger(active.gpolygons[poly],'click');
Map: http://www.beloit.edu/maps/
Map w/Accessibility: http://www.beloit.edu/maps/?access
Map w/Building Click: http://www.beloit.edu/maps/?middle
EDIT: Spent most of my day working on rebuilding the functionality of the parser in JavaScript and, lo and behold, without the parser it works just fine. I figure that is obvious, as I have to define each shape individually before the code runs, rather than waiting for it to be passed along by the parser. It would seem the answer is "if you want unique URLs, drop the parser". >_<
I've come across a similar problem when waiting for markers and infoWindows to load before executing a function. I found a solution here (How can I check whether Google Maps is fully loaded?, see Veseliq's answer): using the Google Maps event listener to check when the map is 'idle' does the trick. I assume this solution would work for KML layers as well. Essentially, you will have to include the following at the end of your initialize function:
google.maps.event.addListenerOnce(map, 'idle', function () {
    // do something only the first time the map is loaded
});
In the API reference (https://developers.google.com/maps/documentation/javascript/reference) it states that the 'idle' event "is fired when the map becomes idle after panning or zooming". However, it also seems to fire on the initial page load, once everything in the map_canvas has loaded. And by using the addListenerOnce call, you ensure it never executes again after the initial page load (meaning it won't fire after a zoom or pan action).
Second option:
As I mentioned, you can take the callback approach; I believe this will only call your urlClick function after the parsing completes. Here's how you should probably arrange your code to make it work:
function someFunction(callback) {
    myParser.parse(['/maps/kml/shapes.kml', '/maps/kml/shapes_hidden.kml']);
    // Caveat: this assumes parse() returns only after the KML has been
    // processed; if it fetches asynchronously, the afterParse hook used
    // in the question is the reliable completion signal.
    callback();
}
and then in your initialize you will have:
someFunction(function () {
    urlClick();
});
You will have to make your map and myParser variables global.
Resources: this link has an excellent and detailed brief on how callback functions work in JavaScript: http://www.impressivewebs.com/callback-functions-javascript/
