OpenLayers 3 Reload Layer(s) - javascript

I am working on a project using OL3 in which I need to be able to manually (by button press) or automatically (time based) reload vector layers IF they have been updated since the last load using HTTP conditional GETs (304 headers and such).
I found this very old post (https://gis.stackexchange.com/questions/333/how-to-dynamically-refresh-reload-a-kml-layer-in-openlayers) for KML layers, but it appears to use variables no longer found in OL3 and I am not sure that it would allow for only loading files that have been modified since the last load. At first glance it appears that a full reload is forced, even if the file has not been modified.
There does not seem to be anything in the API that resembles a reload function for the map or layer objects in OL3. Is there a way to do this?
Update 1:
I found a possible way to do this as an answer in this question: https://gis.stackexchange.com/questions/125074/openlayers3-how-to-reload-a-layer-from-geoserver-when-underlying-data-change using the code:
layer.getSource().updateParams({"time": Date.now()});
however when I run this code I get the error:
TypeError: selectedLayer.getSource(...).updateParams is not a function
Upon checking the API reference for OL3, it appears that no such function exists. The closest are setProperties() and setAttributions(), neither of which works. It also appears that not all layer types implement getSource().
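For what it's worth, updateParams() appears to exist only on the WMS sources (ol.source.TileWMS and ol.source.ImageWMS), which is presumably what the linked answer assumed; a vector source has no such method, hence the TypeError. A rough sketch of how that call would look against a hypothetical WMS-backed layer named wmsLayer:
// Sketch only: 'wmsLayer' is a hypothetical layer backed by a WMS source.
var source = wmsLayer.getSource();
if (source instanceof ol.source.TileWMS || source instanceof ol.source.ImageWMS) {
  // A changing parameter forces fresh requests to the WMS server
  source.updateParams({'TIME': Date.now()});
}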
Update 2:
Calling refresh() reloads the tiles, but it does not appear to request them from the server; rather, they seem to be loaded from a cache (though not the HTTP cache). No requests are made, and no HTTP 304s or anything like that appear. I will be trying a variant of the KML approach and posting the results soon.
Update 3:
After trying LOTS of different solutions I accidentally stumbled upon something that works for vector layers. By calling the layer source's clear() function and then calling Map.updateSize(), the layer is automagically reloaded from its source URL. An XHR GET request is issued and, if the source file has changed, the source is reloaded from the file. If the source file has not changed, a 304 is returned and the source is reloaded from the cache.
Below is a function that uses this method to reload a given layer:
function refreshLayer(selectedLayer)
{
  var selectedLayerSource = selectedLayer.getSource();
  if(selectedLayerSource instanceof ol.source.Vector)
  {
    //do vector reload
    selectedLayerSource.clear();
    map.updateSize();
  }
  else
  {
    //reload the entire page
    window.location.reload();
  }
}
However, it appears that on the first few tries (depending on the browser) the request is sent and a 200 comes back, but the layer does not reflect any changes. After a few tries (and reloading the page a few times) it works. Once it starts working for a layer it continues to work as often as the source file is changed. Does anyone have any idea what is going on?
Update 4:
Using an adaptation of Jonatas' answer I am getting better results. New features pop up instantly on a reload. However, old features are not removed from the map and many features that have moved locations are shown on the map twice. Below is my code:
function refreshSelectedLayer()
{
  console.log("This feature is still in the process of being implemented. Refresh may not actually occur.");
  var selectedLayerSource = selectedLayer.getSource();
  if(selectedLayerSource instanceof ol.source.Vector)
  {
    var now = Date.now();
    var format = selectedLayerSource.getFormat();
    var url = selectedLayerSource.getUrl();
    url = url + '?t=' + now;
    var loader = ol.featureloader.xhr(url, format);
    selectedLayerSource.clear();
    loader.call(selectedLayerSource, [], 1, 'EPSG:3857');
    map.updateSize();
  }
  else if(selectedLayerSource instanceof ol.source.Tile)
  {
    selectedLayerSource.changed();
    selectedLayerSource.refresh();
  }
}
Note that the var selectedLayer is set elsewhere in the code. Any ideas why these very odd results are occurring?
Update 5:
I noticed that if I remove all other code besides the
source.clear();
call, an XHR GET request is made but the features do not disappear. Why is clearing the source not removing all of the features?
Update 6:
After discovering that ol.source.clear() was not actually removing features from a given data source/layer I replaced it using the following code:
selectedLayerSource.forEachFeature(function(feature){
  selectedLayerSource.removeFeature(feature);
});
By outputting the features in the layer before and after each step, I got this:
var now = Date.now();
var format = selectedLayerSource.getFormat();
var url = selectedLayerSource.getUrl();
url = url + '?t=' + now;
console.log("time: "+now+" format: "+format+" url: "+url);
var loader = ol.featureloader.xhr(url, format);
console.log(selectedLayerSource.getFeatures());
console.log("Deleting features...");
/*
Try adding code here to manually remove all features from source
*/
selectedLayerSource.forEachFeature(function(feature){
  selectedLayerSource.removeFeature(feature);
});
console.log(selectedLayerSource.getFeatures());
console.log("Loading features from file...");
loader.call(selectedLayerSource, [], 1, 'EPSG:3857');
window.setTimeout(function(){
  console.log(selectedLayerSource.getFeatures());
  map.updateSize();
}, 500);
Which outputs into the console:
"time: 1471462410554 format: [object Object] url: http://server/file.ext?t=1471462410554" file.php:484:3
Array [ Object, Object, Object, Object, Object, Object, Object, Object, Object, Object, 1 more… ] file.php:491:3
Deleting features... file.php:492:3
Array [ ] file.php:501:3
Loading features from file... file.php:503:3
GET XHR http://server/file.ext [HTTP/1.1 200 OK 34ms]
Array [ Object, Object, Object, Object, Object, Object, Object, Object, Object, Object, 1 more… ]
After several tests with GeoJSON and KML layers I confirmed that this method works!!!
However, because the loader makes its request asynchronously, I am left with the problem of how to execute code after the loader function has been called. Obviously using setTimeout() is a horrible way to do this and was only implemented for testing purposes. A success/failure callback would be perfect, and looking at the source of featureloader.js it appears that such callbacks are offered as parameters to ol.featureloader.loadFeaturesXhr. See the code block from featureloader.js below:
/**
 * @param {string|ol.FeatureUrlFunction} url Feature URL service.
 * @param {ol.format.Feature} format Feature format.
 * @param {function(this:ol.VectorTile, Array.<ol.Feature>, ol.proj.Projection)|function(this:ol.source.Vector, Array.<ol.Feature>)} success
 *     Function called with the loaded features and optionally with the data
 *     projection. Called with the vector tile or source as `this`.
 * @param {function(this:ol.VectorTile)|function(this:ol.source.Vector)} failure
 *     Function called when loading failed. Called with the vector tile or
 *     source as `this`.
 * @return {ol.FeatureLoader} The feature loader.
 */
ol.featureloader.loadFeaturesXhr = function(url, format, success, failure)
I attempted to implement these functions like so when creating the loader:
loader = ol.featureloader.xhr(url, format,
  function(){
    console.log(selectedLayerSource.getFeatures());
    map.updateSize();
    console.log("Successful load!");
  },
  function(){
    console.log("Could not load "+selectedLayerName+" layer data from "+url);
  }
);
but neither function is being called. Any suggestions? I feel like I am missing something really simple here...
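One likely explanation, judging from the source quoted above: the public ol.featureloader.xhr() wrapper only takes (url, format) and supplies its own internal success callback that calls addFeatures(), so any extra arguments are silently ignored; the success/failure parameters belong to ol.featureloader.loadFeaturesXhr, which is not exported in the standard build. A hedged workaround sketch that relies only on ol.source.Vector events (it signals the arrival of the first loaded feature, and will not fire at all if the refreshed file is empty):
// Listen once for the next 'addfeature' event, which the loader's internal
// success callback triggers when it calls addFeatures() on the source.
selectedLayerSource.once('addfeature', function() {
  console.log(selectedLayerSource.getFeatures());
  map.updateSize();
  console.log("Successful load!");
});
loader.call(selectedLayerSource, [], 1, 'EPSG:3857');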
Update 7:
Using the solution provided by @Jonatas Walker, I adapted it to use jQuery:
var now = Date.now();
var format = selectedLayerSource.getFormat();
var url = selectedLayerSource.getUrl();
url = url + '?t=' + now;
//make AJAX request to source url
$.ajax({
  url: url,
  success: function(result){
    //manually remove features from the source
    selectedLayerSource.forEachFeature(function(feature){
      selectedLayerSource.removeFeature(feature);
    });
    //create features from AJAX results
    var features = format.readFeatures(result, {
      featureProjection: 'EPSG:3857'
    });
    //add features to the source
    selectedLayerSource.addFeatures(features);
  },
  error: function(err){
    alert("Could not load features from "+selectedLayerName+" at "+url+" error code: "+err.status);
  }
});
After extensive testing with GeoJSON and KML sources this has proved an extremely reliable refresh method!
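To round out the original requirement of both manual (button press) and automatic (time-based) refreshes, below is a minimal wiring sketch, assuming the snippet above is wrapped in the refreshSelectedLayer() function from Update 4; the button id and the 60-second interval are illustrative assumptions, not part of my actual code:
// Hypothetical wiring: refresh on demand via a button, and periodically via a timer.
document.getElementById('refreshButton').addEventListener('click', refreshSelectedLayer);
window.setInterval(refreshSelectedLayer, 60 * 1000);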

Well, there is another option! Have your own loader.
Load this script - just in case someone is still using an old browser:
<script src="//cdn.polyfill.io/v2/polyfill.min.js?features=fetch"></script>
Then load your JSON file and know when it's ready/loaded:
function refreshSelectedLayer(layer) {
  var now = Date.now();
  var source = layer.getSource();
  var format = new ol.format.GeoJSON();
  var url = '//your_server.net/tmp/points.json?t=' + now;
  fetch(url)
    .then(function(response) {
      return response.json();
    }).then(function(json) {
      console.log('parsed json', json);
      source.clear(); // if this is not enough try yours
      var features = format.readFeatures(json, {
        featureProjection: 'EPSG:3857'
      });
      source.addFeatures(features);
    }).catch(function(ex) {
      console.log('parsing failed', ex);
    });
}

Try an adaptation of this:
function refreshSource() {
  var now = Date.now();
  var source = vectorLayer.getSource();
  var format = new ol.format.GeoJSON();
  var url = '//your_server.net/tmp/points.json?t=' + now;
  var loader = ol.featureloader.xhr(url, format);
  source.clear();
  loader.call(source, [], 1, 'EPSG:3857');
}
The trick is to tell the browser that this is a new load by changing the URL.

Related

document.documentElement.outerHTML.length does not equal the document size in the Chrome network tab

I want to get the document size when the page is ready (just after the server request).
document.addEventListener("DOMContentLoaded", function (event) {
  var pageSizelenght = document.documentElement.outerHTML.length;
});
This does not give me the same result as the document entry shown in the Chrome dev-tools network section.
For example, the document size is shown as 1.5 MB, but in the code it returns 1.8 MB with
document.documentElement.outerHTML.length
If this is not the proper way, how can I get the document size listed in the network section?
Any help would be much appreciated.
As has been said in the comments, the outerHTML is a serialization of the DOM representation after parsing of the original markup. This may be completely different from the original markup and will most likely not match it in size at all:
const markup = "<!doctype html><div><p><div>foo";
// DOMParser helps parsing markup into a DOM tree, like loading an HTML page does
const doc = (new DOMParser()).parseFromString(markup, "text/html");
const serialized = doc.documentElement.outerHTML;
console.log({ serialized });
console.log({ markup: markup.length, serialized: serialized.length });
To get the size of the original markup you can call the performance.getEntriesByType("navigation") method, which returns an array of PerformanceNavigationTiming objects, in which you'll find the one for the initial document (generally at the first position).
These PerformanceNavigationTiming objects have fields that let you know the decoded size of the resource, its encoded size (as compressed over the wire), and its transfer size (which can vary if the resource was cached).
Unfortunately, this API doesn't work well for iframes (especially ones that are fetched through POST requests, like Stack Snippets), so I have to outsource the live demo to this Glitch project.
The main source is:
const entry = performance.getEntriesByType("navigation")
  // That will probably be the first here,
  // but it might be better to check for the actual 'name' of the entry
  .find(({name}) => name === location.href);
const {
  decodedBodySize,
  encodedBodySize,
  transferSize
} = entry;
log({
  decodedBodySize: bToKB(decodedBodySize),
  encodedBodySize: bToKB(encodedBodySize),
  transferSize: bToKB(transferSize)
});
function bToKB(b) { return (Math.round(b / 1024 * 100) / 100) + " KB"; }

Display generated Google Map image on a web page

I am using Google Apps Script to create a page, on which I would like to embed maps. The maps themselves would be static, but the map could be different depending on other parameters (it’s a genealogy page, and I’d like to display a map of birth and death locations, and maybe some other map points, based on a selected individual).
Using Google’s Maps service, I know that I can create a map, with a couple points built in.
function getMapImage() {
  var map = Maps.newStaticMap()
    .setSize(600,400)
    .addMarker('Chicago, Illinois') // markers would be based on a passed parm; this is just test data
    .addMarker('Pocatello, Idaho');
  // *** This is where I am looking for some guidance
  return(); // obviously, I'm not returning a blank for real
}
Within the map class, there are a number of things I can do with it at this point.
I could create a URL, and pass that back. That appears to require an API account, which at this point, I do not have (and ideally, would like to avoid, but maybe I’ll have to do that). It also appears that I will run into CORB issues with that, which I think is beyond my knowledge (so if that’s the solution, I’ll be back for more guidance).
I could create a blob as an image, and pass that back to my page. I have tried this using a few different examples I have found while researching this.
Server Side
function getMapImage() {
  var map = Maps.newStaticMap()
    .setSize(600,400)
    .addMarker('Chicago, Illinois')
    .addMarker('Pocatello, Idaho');
  var mapImage = map.getAs("image/png");
  // OR
  // var mapImage = map.getBlob();
  return(mapImage);
}
Page side
<div id=”mapDiv”></div>
<script>
  $(function() {
    google.script.run.withSuccessHandler(displayMap).getMapImage();
  }
  function displayMap(mapImage) {
    var binaryData = [];
    binaryData.push(mapImage);
    var mapURL = window.URL.createObjectURL(new Blob(binaryData, {type: "image/png"}))
    var mapIMG = "<img src=\'" + mapURL + "\'>"
    $('#mapDiv').html(mapIMG);
  }
</script>
The page calls getMapImage() on the server, and the return data is sent as a parm to displayMap().
var mapIMG ends up resolving to <img src='blob:https://n-a4slffdg23u3pai7jxk7xfeg4t7dfweecjbruoa-0lu-script.googleusercontent.com/51b3d383-0eef-41c1-9a50-3397cbe83e0d'> This version doesn't create any errors in the console, which other options I tried did. But on the page, I'm just getting the standard 16x16 image not found icon.
I’ve tried a few other things based on what I’ve come across in researching this, but don’t want to litter this post with all sorts of different code snippets. I’ve tried a lot of things, but clearly not the right thing yet.
What’s the best / correct (dare I ask, simplest) way to build a map with Google’s Map class, and then serve it to a web page?
EDIT: I added a little more detail on how the server and page interact, in response to Tanaike's question.
Modification points:
I think that in your script, the Blob is returned from Google Apps Script to JavaScript using google.script.run. Unfortunately, at the current stage, Blob data cannot be directly sent from Google Apps Script to JavaScript. I think that this might be the reason for your issue.
In this case, I would like to propose directly creating the data URL on the Google Apps Script side. When your script is modified, it becomes as follows.
Modified script:
Google Apps Script side:
function getMapImage() {
  var map = Maps.newStaticMap()
    .setSize(600, 400)
    .addMarker('Chicago, Illinois')
    .addMarker('Pocatello, Idaho');
  var blob = map.getAs("image/png"); // or map.getBlob()
  var dataUrl = `data:image/png;base64,${Utilities.base64Encode(blob.getBytes())}`;
  return dataUrl;
}
Javascript side:
$(function() {
  google.script.run.withSuccessHandler(displayMap).getMapImage();
});

function displayMap(mapURL) {
  var mapIMG = "<img src=\'" + mapURL + "\'>"
  $('#mapDiv').html(mapIMG);
}
On your JavaScript side, $(function() {google.script.run.withSuccessHandler(displayMap).getMapImage();} is not closed with ). Please be careful about this.
Note:
In my environment, the double quote ” in <div id=”mapDiv”></div> couldn't be used. So if an error occurs in your environment because of <div id=”mapDiv”></div>, please modify ” to " like <div id="mapDiv"></div>.
Reference:
base64Encode(data)

MarkLogic JavaScript scheduled task

I am trying to schedule a script using the 'Scheduled Tasks' in ML8. The documentation explains this a bit, but only for XQuery.
Now I have a JavaScript file I'd like to schedule.
The error in the log file:
2015-06-23 19:11:00.416 Notice: TaskServer: XDMP-NOEXECUTE: Document is not of executable mimetype. URI: /scheduled/cleanData.js
2015-06-23 19:11:00.416 Notice: TaskServer: in /scheduled/cleanData.js [1.0-ml]
My script:
/* Scheduled script to delete old data */
var now = new Date();
var yearBack = now.setDate(now.getDate() - 65);
var date = new Date(yearBack);
var b = cts.jsonPropertyRangeQuery("Dtm", "<", date);
var c = fn.subsequence(cts.uris("", [], b), 1, 10);
while (true) {
  var uri = c.next();
  if (uri.done == true){
    break;
  }
  xdmp.log(uri.value, "info"); // log for testing
}
Try the *.sjs extension (Server-side JavaScript).
The *.js extension can be used for static JavaScript resources that are returned to the client instead of being executed on the server.
Hoping that helps,
I believe that ehennum found the issue for you (the extension), which is what the mime-type error is complaining about.
However, on the same subject, not all items in ML work quite as you would expect with server-side JavaScript. For example, using .sjs as the target of a trigger did not (or at least until recently did not) work. So for things like that, it is also possible to wrap the .sjs call inside .xqy using xdmp:invoke.
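For reference, a rough sketch of what the scheduled module could look like once saved as /scheduled/cleanData.sjs, assuming the eventual goal is actually deleting the matched documents rather than only logging them; the deletion step is an assumption based on the script's comment, and declareUpdate()/xdmp.documentDelete() are the Server-Side JavaScript calls for update operations:
/* /scheduled/cleanData.sjs -- sketch only; mirrors the query from the original script */
declareUpdate(); // required before update operations such as xdmp.documentDelete
var cutoff = new Date();
cutoff.setDate(cutoff.getDate() - 65);
var query = cts.jsonPropertyRangeQuery("Dtm", "<", cutoff);
for (var uri of fn.subsequence(cts.uris("", [], query), 1, 10)) {
  xdmp.log("Deleting " + uri, "info");
  xdmp.documentDelete(uri);
}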

jquery.get(url) synchronization

OS X 10.6.8, Chrome 15.0.874.121
I'm experiencing an issue with JavaScript/jQuery: I want to download a header file from the base URL, then add some more text to it and then spit it out to the client. I'm using the following code:
var bb = new window.WebKitBlobBuilder;
$.get('js/header.txt', function(data) {
  bb.append(data);
  console.log("finished reading file");
});
console.log("just before getting the blob");
var blob = bb.getBlob('text/plain');
// append some more
saveAs(blob,"name.dxf");
But that fails because getting the file is only finished way after the saveAs(blob) is executed. I know I can fix it with:
var bb = new window.WebKitBlobBuilder;
$.get('js/header.txt', function(data) {
  bb.append(data);
  //append some more
  var blob = bb.getBlob('text/plain');
  saveAs(blob,"name.dxf");
});
But that does not really look attractive: I only want to use the get statement to append the header to the blob, and if I want to read a footer from the file system, I have to do a get inside a get and spit out the blob in the inner get.
Are there alternative ways to keep the code after the get statement from executing until the whole file has been successfully loaded?
No.*
But, if you want it to look more attractive, try to describe semantically what you are trying to achieve and then write functions accordingly. Maybe:
function loadBlob (loadHeader, loadBody) {
  loadHeader(loadBody);
}

loadBlob(function (oncomplete) {
  $.get("js/header.txt", function(data) {
    bb.append(data);
    oncomplete();
  });
}, function () {
  var blob = bb.getBlob('text/plain');
  // append some more
  saveAs(blob,"name.dxf");
});
I don't know, is that more attractive? Personally, I find the original just fine, so maybe mine isn't any better, but the point is to use semantics.
* You could use setTimeout to poll and see if the response has been received. That's technically an alternative, but certainly not more attractive, is it?
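As a footnote, $.get() returns a jqXHR that implements the Promise interface (jQuery 1.5+), so the header/footer sequencing can also be flattened with $.when() instead of nesting callbacks. A rough sketch, where js/footer.txt is an assumed second file, not something from the original question:
var bb = new window.WebKitBlobBuilder;
// Fire both requests in parallel and continue once both have resolved.
// With multiple deferreds, each callback argument is a [data, statusText, jqXHR] array.
$.when($.get('js/header.txt'), $.get('js/footer.txt')).done(function(headerArgs, footerArgs) {
  bb.append(headerArgs[0]);
  // append the generated body here
  bb.append(footerArgs[0]);
  saveAs(bb.getBlob('text/plain'), "name.dxf");
});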

'Uncaught Error: DATA_CLONE_ERR: DOM Exception 25' thrown by web worker

So I'm creating a web worker:
var arrayit = function(obj) {
  return Array.prototype.slice.call(obj);
};
work = arrayit(images);
console.log(work);
//work = images.push.apply( images, array );
// Method : "load+scroll"
var worker = new Worker('jail_worker.js');
worker.postMessage(work)
worker.onmessage = function(event) {
  console.log("Worker said:" + event.data);
};
Here's what images is:
$.jail.initialStack = this;
// Store the selector into 'triggerEl' data for the images selected
this.data('triggerEl', (options.selector) ? $(options.selector) : $window);
var images = this;
I think my problem has something to do with this:
http://dev.w3.org/html5/spec/Overview.html#safe-passing-of-structured-data
How can I get around this? As you can see, I tried slicing the host object into a real array, but that didn't work.
Here's a link to the file I'm hacking on:
https://github.com/jtmkrueger/JAIL
UPDATE--------------------------------------------------
This is what I had to do, based on the accepted answer from @davin:
var arrayit = function(obj) {
  return Array.prototype.slice.call(obj);
};
imgArray = arrayit(images);
work = _.map(images, function(i){ return i.attributes[0].ownerElement.outerHTML; });
var worker = new Worker('jail_worker.js');
worker.postMessage(work)
worker.onmessage = function(event) {
  console.log("Worker said:" + event.data);
};
NOTE: I used underscore.js to ensure compatibility.
The original exception was most likely thrown because you tried passing a host object (most likely a DOM element) to the web worker. Your subsequent attempts don't throw the same error. Remember two key points: there is no shared memory between the different threads, and web workers can't manipulate the DOM.
postMessage supports passing structured data to threads and will internally serialise the data (or in some other way copy its value recursively). Serialising DOM elements often results in circular reference errors, so your best bet is to map the object you want serialised, extract the relevant data, and rebuild it in the web worker.
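A small sketch of that pattern for illustration; the extracted src attribute and the worker-side processing are assumptions, not part of JAIL itself:
// Main thread: pull plain, cloneable strings out of the DOM elements before posting.
var srcs = Array.prototype.map.call(images, function(img) {
  return img.getAttribute('data-src') || img.src;
});
var worker = new Worker('jail_worker.js');
worker.postMessage(srcs);

// jail_worker.js: only plain data arrives here; the worker cannot touch the DOM,
// so it does its work on the strings and posts a result back for the main thread to apply.
self.onmessage = function(event) {
  var result = event.data.filter(function(src) { return src.length > 0; }); // placeholder processing
  self.postMessage(result);
};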
The error 'Uncaught DataCloneError: An object could not be cloned' was also reproduced when trying to save a function as an object's key in IndexedDB. Double-check that the object being saved is serializable.
