There must be something simple I am missing, but alas, I do not know what I do not know. Below is the code I have thus far for trying to get current streamflow conditions from the USGS.
// create site object
function Site(siteCode) {
    this.timeSeriesList = [];
    this.siteCode = siteCode;
    this.downloadData = downloadData;
    this.getCfs = getCfs;

    // keep a reference to this instance for use inside the jQuery ajax callback below
    var self = this;

    // create timeSeries object
    function TimeSeries(siteCode, variableCode) {
        this.variableCode = variableCode;
        this.observations = [];
    }

    // create observation object
    function TimeSeriesObservation(stage, timeDate) {
        this.stage = stage;
        this.timeDate = timeDate;
    }

    // include the capability to download data automatically
    function downloadData() {
        // construct the url to get data
        // TODO: include the capability to change the date range, currently one week (P1W)
        var url = "http://waterservices.usgs.gov/nwis/iv/?format=json&sites=" + this.siteCode + "&period=P1W&parameterCd=00060,00065";

        // use jquery getJSON to download the data
        $.getJSON(url, function (data) {
            // timeSeries is a two item list, one for cfs and the other for feet
            // iterate these and create an object for each
            $(data.value.timeSeries).each(function () {
                // create a timeSeries object
                var thisTimeSeries = new TimeSeries(
                    self.siteCode,
                    // get the variable code, 65 for ft and 60 for cfs
                    this.variable.variableCode[0].value
                );

                // for every observation of the type at this site
                $(this.values[0].value).each(function () {
                    // add the observation to the list
                    thisTimeSeries.observations.push(new TimeSeriesObservation(
                        // observation stage or level
                        this.value,
                        // observation time
                        this.dateTime
                    ));
                });

                // add the timeSeries instance to the object list
                self.timeSeriesList.push(thisTimeSeries);
            });
        });
    }

    // return serialized array of cfs stage values
    function getCfs() {
        // iterate timeseries objects
        $(self.timeSeriesList).each(function () {
            // if the variable code is 00060 - cfs
            if (this.variableCode === '00060') {
                // return serialized array of stages
                return JSON.stringify(this.observations);
            }
        });
    }
}
When I simply access the object directly using the command line, I can access individual observations using:
> var watauga = new Site('03479000')
> watauga.downloadData()
> watauga.timeSeriesList[0].observations[0]
I can even access all the reported values with the timestamps using:
> JSON.stringify(watauga.timeSeriesList[0].observations)
Now I am trying to wrap this logic into the getCfs function, with little success. What am I missing?
I don't see anything in the code above that guarantees the data has finished downloading. Maybe in whatever execution path you're using to call getCfs() you have a wait or a loop that checks for the download to complete prior to calling getCfs(), but if you're simply calling
site.downloadData();
site.getCfs()
you're almost certainly not finished loading when you call site.getCfs().
You'd need to invoke a callback from within your success handler to notify the caller that the data is downloaded. For example, change the signature of Site.downloadData to
function downloadData(downloadCallback) {
// ...
Add a call to the downloadCallback after you're finished processing the data:
// After the `each` that populates 'thisTimeSeries', but before you exit
// the 'success' handler
if (typeof downloadCallback === 'function') {
    downloadCallback();
}
And then your invocation would be something like:
var watauga = new Site('03479000');
var downloadCallback = function() {
    watauga.timeSeriesList[0].observations[0];
};
watauga.downloadData(downloadCallback);
That way, you're guaranteed that the data is finished processing before you attempt to access it.
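For reference, the same guarantee can be had by returning a promise instead of accepting a callback. This is only a sketch, not the poster's code: `fakeFetch` below is a stand-in for `$.getJSON` (which in real jQuery 1.5+ already returns a then-able jqXHR), and the processing body is elided.

```javascript
// fakeFetch stands in for $.getJSON: it resolves with parsed JSON data.
function fakeFetch(url) {
    return Promise.resolve({ value: { timeSeries: [] } });
}

function Site(siteCode) {
    var self = this;
    this.siteCode = siteCode;
    this.timeSeriesList = [];

    this.downloadData = function () {
        var url = "http://waterservices.usgs.gov/nwis/iv/?format=json&sites=" + siteCode;
        return fakeFetch(url).then(function (data) {
            // process the response exactly as in the original success handler
            data.value.timeSeries.forEach(function (ts) {
                self.timeSeriesList.push(ts);
            });
            return self; // the promise resolves only after processing is done
        });
    };
}

new Site('03479000').downloadData().then(function (site) {
    // safe to read site.timeSeriesList here
    console.log(site.timeSeriesList.length); // logs 0 with the empty stand-in data
});
```

Callers then chain `.then(...)` instead of passing a callback, which composes better once multiple async steps are involved.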
If you're getting an undefined in some other part of your code, of course, then there may be something else wrong. Throw a debugger on it and step through the execution. Just bear in mind that interactive debugging has many of the same problems as interactively calling the script: the script has time to complete its download in the background before you start inspecting the variables, which makes it look like everything's hunky-dory, when in fact a non-interactive execution would have different timing.
The real issue, I discovered by starting over from scratch on this function, was something wrong with my use of jQuery's .each(). On my second stab at the issue, I successfully used a standard for...in loop. Here is the working code.
function getCfs() {
    for (var index in this.timeSeriesList) {
        if (this.timeSeriesList[index].variableCode === '00060') {
            return JSON.stringify(this.timeSeriesList[index].observations);
        }
    }
}
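For what it's worth, the same lookup can also be written without an explicit loop using `Array.prototype.filter`; a sketch, assuming `timeSeriesList` holds objects shaped like the `TimeSeries` above:

```javascript
function getCfs(timeSeriesList) {
    // keep only the series whose variable code is 00060 (cfs)
    var matches = timeSeriesList.filter(function (ts) {
        return ts.variableCode === '00060';
    });
    return matches.length > 0 ? JSON.stringify(matches[0].observations) : undefined;
}

// Example with stand-in data:
var list = [
    { variableCode: '00065', observations: [1.2] },
    { variableCode: '00060', observations: [310, 305] }
];
console.log(getCfs(list)); // logs [310,305]
```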
Also, some of the stuff you are talking about, @Palpatim, I definitely will have to look into. Thank you for pointing out these considerations. This looks like a good time to further investigate promises.
I have a function that gets input constantly but then only processes it every minute with a cron job.
The most recent output should be stored in a variable and retrieved from the outside at random times.
Here in a very simplified form:
let input = 'something';
let data = '';
data += input;
require('node-schedule').scheduleJob('* * * * *', somethingMore);
function somethingMore() {
    let output = data += 'More';
    // return output;
}
console.log(output);
Initializing the variable outside the function like above doesn't seem to work in this case.
Calling the function directly or assigning it to a variable doesn't help, as it would run it before it's due.
I also tried with buffers, but they don't seem to work either, unless I missed something.
The only thing that does work is writing a file to disk with fs and then reading from there, but I guess it's not the best of solutions.
It seems like you could just let your cron function run as scheduled and save the latest result in a module-scoped variable. Then, create another exported function that anyone else can call to get the latest result.
You're only showing pseudo-code (not your real code) so it is not clear exactly what you want to save for future inquiries to return. You will have to implement that part yourself.
So, if you just wanted to save the most recent value:
// module-scoped variable to save recent data
// you may want to call your function to initialize it when
// the module loads, otherwise it may be undefined for a little bit
// of time
let lastData;
require('node-schedule').scheduleJob('* * * * * *', () => {
    // do something that gets someData
    lastData = someData;
});

// let outside caller get the most recent data
module.exports.getLastData = function() {
    return lastData;
};
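Stripped of node-schedule, the pattern is just a closure over a module-scoped variable plus a getter. A minimal sketch, with `runJob` standing in for the scheduled callback:

```javascript
// module-scoped variable holding the most recent result
let lastData;

function runJob() {
    // stand-in for the work node-schedule would trigger every minute
    lastData = 'processed:' + new Date().toISOString();
}

function getLastData() {
    return lastData;
}

runJob(); // the scheduler would call this on its own schedule
console.log(getLastData()); // logs the latest processed value
```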
I am extending the mxGraph delete control example to add delete-like controls to nodes which are generated dynamically in my graph. The source code for the example is available here
The problem is in this part of the code -
// Overridden to add an additional control to the state at creation time
mxCellRendererCreateControl = mxCellRenderer.prototype.createControl;
mxCellRenderer.prototype.createControl = function(state)
{
    mxCellRendererCreateControl.apply(this, arguments);
    var graph = state.view.graph;
    if (graph.getModel().isVertex(state.cell))
    {
        if (state.deleteControl == null)
        {
            // ...
        }
    }
};
mxCellRendererCreateControl.apply inside the overridden callback of createControl seems to work as intended (it calls the original function before creating additional controls) with the initial state of the graph on load. But once I add nodes dynamically to the graph and the callback is invoked by mxGraph's validate/redraw, the control goes into an infinite loop, where the apply basically keeps calling the overridden function itself (i.e., the callback).
I am a bit clueless because when I debug, the context(this) looks fine, but I can't figure out why instead of invoking the prototype method, it just keeps invoking the overridden function in a loop. What am I doing wrong?
It looks like you are not cloning your original function the right way; please try the following:
Function.prototype.clone = function() {
    var that = this;
    return function theClone() {
        return that.apply(this, arguments);
    };
};
Add that new method somewhere in your main code so it will be available in the whole application. Now you can change your code to:
// Overridden to add an additional control to the state at creation time
let mxCellRendererCreateControl = mxCellRenderer.prototype.createControl.clone();
mxCellRenderer.prototype.createControl = function(state) {
    mxCellRendererCreateControl.call(this, state); // keep the renderer as `this`
    var graph = state.view.graph;
    if (graph.getModel().isVertex(state.cell)) {
        if (state.deleteControl == null) {
            // ...
        }
    }
    // ...
};
This should work if I understood your problem correctly; if it does not, change the old function call back to the apply. Otherwise, let me know if something different happens after the Function prototype change.
It seems that your overriding code is being called multiple times (adding a simple console.log before your overriding code should be enough to test this).
Try to ensure that the code that overrides the function only runs once, or check whether the prototype function is the original or yours.
Here is an example of how you can check if the function is yours or not
if (!mxCellRenderer.prototype.createControl.isOverridenByMe) {
    let mxCellRendererCreateControl = mxCellRenderer.prototype.createControl;
    mxCellRenderer.prototype.createControl = function(state) { /* ... */ };
    mxCellRenderer.prototype.createControl.isOverridenByMe = true;
}
There are other ways, like using a global variable to check whether you have overridden the method or not.
If this doesn't fix your issue, please post more of the rest of your code (knowing how this code is being loaded/called would help a lot).
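To illustrate the guard with something runnable, here is the same idea against a stand-in `Renderer` prototype (hypothetical names, not mxGraph itself). Running the setup twice without the flag would capture the already-wrapped function and recurse; with the flag, the second run is a no-op:

```javascript
function Renderer() {}
Renderer.prototype.createControl = function (state) {
    return 'original:' + state;
};

function installOverride() {
    // the flag makes this setup idempotent
    if (!Renderer.prototype.createControl.isOverridenByMe) {
        var original = Renderer.prototype.createControl;
        Renderer.prototype.createControl = function (state) {
            // call the captured original, then add extra behavior
            return 'wrapped:' + original.call(this, state);
        };
        Renderer.prototype.createControl.isOverridenByMe = true;
    }
}

installOverride();
installOverride(); // second call does nothing thanks to the flag

console.log(new Renderer().createControl('x')); // logs wrapped:original:x
```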
I'm working on a geoprocessing web application. My application will provide users with a specific set of options, the user will provide some data, and then I will process the data on the server and finally return the results. If it matters, I am using the CMV http://docs.cmv.io/en/1.3.3/ as a framework and trying to build my own plugin, but I suspect my problems are more general JS problems. Here is a pseudocode sample (note that this is pseudocode and not my actual code, which is a mess at the moment):
initializeTool: function() {
    // here I am able to access my map object through this.map
    // and I need it for my output
    on(dom.byId("mybutton"), "click", processInput);
}

processInput: function() {
    // pull user data from webpage
    var userData, queries;
    // launch query for all data
    for (var i in userData) {
        queries[i] = query(userData[i]);
    }
    // DeferredList is from Dojo, doc here: http://dojotoolkit.org/api/?qs=1.10/dojo/DeferredList
    new DeferredList(queries).then(function (results) {
        // iterate over query responses and perform work
        for (var i in queries) {
            // perform some synchronous operations
        }
        // and now we're done! but how do I get to my output?
    });
}
The desired output in this case is a group of objects that have had various operations done on them, but those are only accessible inside the then() block and its inline function, while the map object I need for displaying the output is only in scope in the initialize function. I'm not sure of the best way to get my processed data to where I want it to be. This is a problem because the processed data is geometry information: it isn't very human-readable as text, so it needs to be displayed on a map.
I've been poring over JS scoping and looking at references to try and figure out what my issue is, but I seriously cannot figure it out.
One of the main points of promises is that then returns a promise for whatever is eventually returned inside its onFulfill handler. This is what enables you to get the outcome out of your processInput() function and into the world outside it.
So you can (and should) do this:
function processInput() {
    // pull user data from webpage
    var userData;
    // launch query for all data
    return Promise.all(userData.map(query))
        .then(function (results) {
            var theResult;
            // iterate over query responses and perform work
            results.forEach(function (result) {
                // perform some synchronous operations and determine theResult
            });
            return theResult;
        });
}

processInput().then(function (theResult) {
    // do something with theResult
});
processInput().then(function (theResult) {
// do something with theResult
});
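Here is a concrete, runnable version of that shape, with a stand-in `query` that resolves asynchronously (the doubling and the sum are placeholders for the real per-item work and result):

```javascript
// stand-in for an async lookup; resolves with the item doubled
function query(item) {
    return Promise.resolve(item * 2);
}

function processInput(userData) {
    return Promise.all(userData.map(query)).then(function (results) {
        // synchronous post-processing; the sum plays the role of theResult
        return results.reduce(function (acc, r) { return acc + r; }, 0);
    });
}

processInput([1, 2, 3]).then(function (theResult) {
    console.log(theResult); // logs 12
});
```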
I'm reading the Google Drive Realtime API documentation on Building a Collaborative Data Model.
I really like the way gapi.drive.realtime.databinding.bindString behaves. It doesn't mess up your cursor placement when multiple people are typing in the same text box. But it requires that you pass it a CollaborativeString.
But if you register a custom type, you have to use gapi.drive.realtime.custom.collaborativeField no matter what type of field you are defining, and you can't pass one of these to bindString. In fact, the collaborativeField type does not appear to be documented anywhere, and inspecting it in the console shows that it has no methods. That means there's no registerReference method, which CollaborativeString uses to keep track of cursor positions.
How frustrating. So I guess I have to work around it. I see a few options:
1. Ignore the fact that the cursor gets messed up during collaboration
2. Use a CollaborativeMap instead of a custom type, and wrap it with my custom type at runtime
Probably going to do option 2.
I think you misunderstand how this site works; the onus is not on other people to show you how to do something. You're asking other people to take time from their day to help you.
That being said, taking a quick look at the page that you linked shows that what you want to do is not only possible but quite straightforward and compatible with bindString. Stealing from the example code from that page:
// Call this function before calling gapi.drive.realtime.load
function registerCustomTypes()
{
    var Book = function () { };

    function initializeBook()
    {
        var model = gapi.drive.realtime.custom.getModel(this);
        this.reviews = model.createList();
        this.content = model.createString();
    }

    gapi.drive.realtime.custom.registerType(Book, 'Book');
    Book.prototype.title = gapi.drive.realtime.custom.collaborativeField('title');
    Book.prototype.author = gapi.drive.realtime.custom.collaborativeField('author');
    Book.prototype.isbn = gapi.drive.realtime.custom.collaborativeField('isbn');
    Book.prototype.isCheckedOut = gapi.drive.realtime.custom.collaborativeField('isCheckedOut');
    Book.prototype.reviews = gapi.drive.realtime.custom.collaborativeField('reviews');
    Book.prototype.content = gapi.drive.realtime.custom.collaborativeField('content');
    gapi.drive.realtime.custom.setInitializer(Book, initializeBook);
}
and
// Pass this as the 2nd param to your gapi.drive.realtime.load call
function onDocLoaded(doc)
{
    var docModel = doc.getModel();
    var docRoot = docModel.getRoot();
    setTimeout(function ()
    {
        var book = docModel.create('Book');
        book.title = 'Moby Dick';
        book.author = 'Melville, Herman';
        book.isbn = '978-1470178192';
        book.isCheckedOut = false;
        book.content.setText("Call me Ishmael. Some years ago - never mind how long precisely - having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world.");
        docRoot.set('tbook', book);
        debugger;
    }, 0);
}
Good luck and have fun with the Realtime API - it's a lot of fun to play with.
I know this question and answer are getting old, but for reference's sake, the last part of Grant Watters' very good answer, the onDocLoaded routine, is rather misleading. That function, as written, is better suited as the 3rd parameter to the gapi.drive.realtime.load call, the onInitializeModel callback.
The 2nd parameter is called every time the Doc is loaded. You wouldn't normally add the same object over and over as the above routine would. Instead, you would normally set up your event handling, your dataBinds, etc. This version might clarify somewhat:
// Pass this as the 2nd param to your gapi.drive.realtime.load call
function onDocLoaded(doc)
{
    var docModel = doc.getModel();
    var docRoot = docModel.getRoot();
    var text = docRoot.get("text");

    // Add an event listener...
    text.addEventListener(gapi.drive.realtime.EventType.TEXT_INSERTED, onStringChanged);

    // ...and/or bind to collaborative objects:
    var textArea = document.getElementById('textArea1');
    textBinding = gapi.drive.realtime.databinding.bindString(text, textArea);

    // etc...
}
Not incidentally, bindString returns the binding object, which is needed to "unbind" later, preventing an AlreadyBound error or other unexpected behavior when the next Doc is loaded. Do something like this:
function onDocLoaded(doc)
{
    // Clear any previous bindings etc:
    if (textBinding) { textBinding.unbind(); }
    textBinding = null;

    // etc...
}
I am using IndexedDB, Web SQL or Web Storage to store some data on the client (or fall back to AJAX in the event the client doesn't support any storage). When the page loads I want to display some data from the store. But I can't display the data when the DOM is ready because the store may not be ready, and I can't display the data when the store is ready because the DOM might not be ready.
Obviously I could implement some conditional that checks flags set by the dom and store or I could use a timeout but that seems sloppy (and wouldn't scale well if more than 2 conditions needed to be met). Is there a generally "good" way to handle this situation? I would prefer a cross-browser solution (E.g. watch won't work).
Example of the situation:
// FooServiceFactory decides which storage method to use based on
// what the browser allows, returns a new instance of the
// implementation, and starts initializing resources.
var fooService = FooServiceFactory.getInstance();

// DOM is ready
window.onload = function() {
    // fooService may not be ready yet depending on whether
    // storage has already been set up or if resources need to
    // be retrieved from the server. But I don't want this calling
    // JS to know about that.
    fooService.getAllFoo(request, function(response, status) {
        // do something with response
    });
};
Note: I accepted my own answer for now but am still open to better ways of handling this.
I usually go for a counter when doing asynchronous stuff that relies on each other.
var running = 0;

function fire() {
    running++;
    // fire ajax, bind callback
}

function callback(data) {
    // do some stuff for this request
    if (--running == 0) {
        // do special stuff that relies on all requests
    }
}
Since JavaScript callbacks run to completion one at a time on a single thread, two requests can't evaluate the if-clause at the same time, so there is no race between them.
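A runnable sketch of the counter pattern, with `setTimeout` standing in for the ajax requests:

```javascript
var running = 0;
var results = [];

function fire(payload, delayMs) {
    running++;
    // setTimeout stands in for firing an ajax request with `callback` bound
    setTimeout(function () { callback(payload); }, delayMs);
}

function callback(data) {
    // do some per-request work
    results.push(data);
    if (--running === 0) {
        // all outstanding requests are done
        console.log('all done, got ' + results.length + ' results');
    }
}

fire('a', 10);
fire('b', 5);
```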
Since some storage (Web Storage and the IndexedDB Synchronous API) are not asynchronous, there won't always be a need to keep track of this. It would be best to have the service implementation handle it by itself.
One way would be to have the implementation queue up calls and execute them when the store is ready. This would be especially important with some clients which will make the user "allow" the store before it is ready, which could take an indefinite amount of time. Here's an example of how the IndexedDB implementation could be handled.
var FooServiceIndexedDB = function() {
    var db = null;
    var queue = [];
    var dbRequest = window.indexedDB.open("footle", "All kinds of foo");

    var initStore = function() {
        // Misc housekeeping goes here...
        if (queue.length > 0) {
            // Run any functions queued while the store was opening
            for (var i = 0; i < queue.length; i++) {
                queue[i]();
            }
        }
    };

    dbRequest.onsuccess = function(dbRequestEvent) {
        db = dbRequestEvent.target.result;
        if (db.getVersion() != "1.0") {
            db.setVersion("1.0").onsuccess = function(versionEvent) {
                // Create stores/indexes
                initStore();
            };
        } else {
            initStore();
        }
    };

    // Public accessor
    this.getAllFoo = function(request, callback) {
        _getAllFoo(request, callback);
    };

    // Private accessor
    var _getAllFoo = function(request, callback) {
        if (db == null) {
            // This method was called before the store was ready
            queue.push(function() { _getAllFoo(request, callback); });
            return;
        }
        // Proceed getting foo
    };
};
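The queuing idea works independently of IndexedDB. Here is the pattern in isolation (a sketch with hypothetical names): calls made before the service is ready are parked in a queue and replayed once initialization completes, which stands in for the database open callback above.

```javascript
function Service() {
    var self = this;
    var ready = false;
    var queue = [];

    this.getAll = function (callback) {
        if (!ready) {
            // store the call and replay it once init completes
            queue.push(function () { self.getAll(callback); });
            return;
        }
        callback(['foo', 'bar']); // stand-in for the real query result
    };

    this.init = function () {
        // stand-in for the store's onsuccess handler
        ready = true;
        queue.forEach(function (fn) { fn(); });
        queue = [];
    };
}

var s = new Service();
s.getAll(function (r) { console.log('got', r); }); // parked until init
s.init(); // flushes the queue; the callback fires now
```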