In SAPUI5/OpenUI5, I have a JSONModel that I populate from a file on the server:
var oModel = new JSONModel();
oModel.loadData("http://127.0.0.1/data/config.json");
console.log(JSON.stringify(oModel.getData()));
The console logs undefined since the request is asynchronous.
How can I make this synchronous, so that console.log() is only called after the data has been loaded?
Using synchronous ajax requests is not recommended as it blocks the UI and will probably result in a warning in the console.
You can attach to the Model.requestCompleted event to access the asynchronously loaded data:
oModel.attachRequestCompleted(function() {
console.log(oModel.getData());
});
The keyword you are looking for is the "Deferred" object: it enables you to wait for an AJAX request in SAPUI5.
Check this for SAPUI5 context: SAPUI5 Wait for a Deferred-Object // wait for .done() function
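For illustration, here is a minimal sketch of that idea using a jQuery Deferred (this loadModel wrapper is only an illustration, not a UI5 API):
// Wrap loadData in a jQuery Deferred so callers can wait for it
function loadModel(sUrl) {
    var oDeferred = jQuery.Deferred();
    var oModel = new JSONModel();
    oModel.attachRequestCompleted(function () {
        oDeferred.resolve(oModel); // data is available now
    });
    oModel.attachRequestFailed(function (oEvent) {
        oDeferred.reject(oEvent);
    });
    oModel.loadData(sUrl);
    return oDeferred.promise();
}

loadModel("http://127.0.0.1/data/config.json").done(function (oModel) {
    console.log(oModel.getData());
});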
Since UI5 version 1.64.0, the API loadData returns a Promise instance:
logLoadedData: async function () {
const jsonModel = new JSONModel();
await jsonModel.loadData("<host>/data/config.json");
console.log(jsonModel.getData()); // after the loadData promise is resolved
},
Alternatively, there is also the API dataLoaded, which returns a promise as well. It resolves once all requests sent by loadData have finished:
doSomethingWith: async function (jsonModel) {
// Not sure if the model has all data loaded? Just use dataLoaded:
await jsonModel.dataLoaded();
console.log(jsonModel.getData());
},
The API loadData is also called internally when the JSONModel constructor is called with a string (URL) as an argument. In that case, dataLoaded might come in handy as well.
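As a small sketch of that case (the URL is the same assumed path as above):
// Passing a URL to the constructor triggers loadData internally
const jsonModel = new JSONModel("<host>/data/config.json");
jsonModel.dataLoaded().then(() => {
    console.log(jsonModel.getData()); // all requests triggered so far have finished
});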
You can use the attachRequestCompleted-listener from the Model [1]
model.attachRequestCompleted(function(){
console.log(this.getData()); //"this" is the model
});
Another function to use is
$.get(url, function(response){
console.log(response);
model.setData(response);
});
// or
$.ajax(url, {
success: function(response){
console.log(response);
model.setData(response);
}
});
This has the advantage that you can configure the request with every setting that jQuery.ajax accepts [2]
Another way to achieve this is to use the attachEventOnce method from EventProvider.
oModel.attachEventOnce("requestCompleted", function(oEvent) {
console.log(JSON.parse(oEvent.getParameter("response").responseText));
}, this);
It's best to use this approach when you only need to react to one request, and not all. Otherwise, if you use oModel.attachRequestCompleted(...), all requests will go through the same handler function.
You can also use method chaining to make this a little easier.
oModel.attachEventOnce(...) returns the object that called the method, so you can load your data and handle the callback all in one statement.
oModel.attachEventOnce("requestCompleted", function(oEvent) {
console.log(JSON.parse(oEvent.getParameter("response").responseText));
}, this).loadData("http://127.0.0.1/data/config.json");
This will first execute the loadData() request, and then console the response when the request has been completed. It will only use the callback function the first time a request is made. Subsequent requests will not go through the callback function.
If you want ALL requests to go through the SAME callback function, you can do the same thing but using oModel.attachRequestCompleted(...)
oModel.attachRequestCompleted(function(oEvent) {
console.log(JSON.parse(oEvent.getParameter("response").responseText));
}, this).loadData("http://127.0.0.1/data/config.json");
This will execute the loadData() request, console the response, and also console the response of all subsequent requests.
NOTE: Be careful using this in the callback functions. If you don't pass this as a parameter of the attachRequestCompleted(...) or attachEventOnce(...) methods, then this will lose its original context as the controller and inherit the context of the object calling the function. herrlock's answer demonstrates how the context of this changes.
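For example, a quick sketch of the difference (assuming this code runs inside a controller method, so this at call time is the controller):
// Without an explicit listener, "this" inside the handler is the model itself:
oModel.attachRequestCompleted(function () {
    console.log(this.getData()); // "this" === oModel
});

// Passing the controller as the listener keeps "this" bound to the controller:
oModel.attachRequestCompleted(function () {
    console.log(this.getView().getId()); // "this" === the controller
}, this);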
Event Provider API Reference
It turned out there is a parameter in the .loadData() function to make a synchronous call:
oModel.loadData("http://127.0.0.1/data/config.json", "", false);
See API-Reference as well.
Related
I am trying to create a test for an AJAX function in my code base that I can't change, which is pretty much like this:
function myFetch (src) {
return $.ajax({
url: src
})
}
I call it like this and get back an array of JSON objects (as an example):
myFetch('https://jsonplaceholder.typicode.com/posts').then(i => console.log(i))
How would I create a fake XHR or fake server (or other ideas that come to mind) for this? I've only ever seen examples, including the sinon docs that show how to do it with a callback, so if my function was myFetch(src, callback) I could use:
sinon.replace(jQuery, 'ajax', sinon.fake());
myFetch('fake.com', sinon.fake());
Or something like this, I assume, since I am not totally sure if this would work.
But how do I do this since my function does not have a callback, or even a 'success' or anything to stub?
myFetch is not a callback-based function but returns a Promise; that's why you use then when calling it. For testing Promises, sinon has the handy methods resolves and rejects. So, based on your code, you can do it like this:
const fakeResponse = [{ todo_id: 1, todo_name: 'get wake up' }];
sinon.stub(jQuery, 'ajax').resolves(fakeResponse);
myFetch('https://jsonplaceholder.typicode.com/posts').then(i => console.log(i)); // logs fakeResponse
Reference: http://sinonjs.org/releases/v4.1.2/stubs/
Hope it helps
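A slightly fuller sketch of such a test (the assertions and the restore call are my additions, not part of the original answer):
const fakeResponse = [{ todo_id: 1, todo_name: 'get wake up' }];
const stub = sinon.stub(jQuery, 'ajax').resolves(fakeResponse);

// myFetch returns the stubbed promise, so we can assert on its resolved value
myFetch('https://jsonplaceholder.typicode.com/posts').then(result => {
    console.log(result);           // logs fakeResponse
    sinon.assert.calledOnce(stub); // $.ajax was called exactly once
    stub.restore();                // put the real $.ajax back
});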
Say you have the following three functions and variable
var someList = [];
function makeObject() {
// loops through someList here to create an object
// then calls sendObject function
sendObject()
}
function sendObject() {
// sends object to database using HTTP call
}
function resetList() {
// resets the list to be empty
// e.g. someList = []
}
Then you call them like so
makeObject()
resetList()
Is there any possibility or any situation in which the list will be reset before the makeObject function has a chance to loop through it?
There are plenty of things you can do in JavaScript which are asynchronous and non-blocking (XMLHttpRequest and setTimeout are classic examples). If you use any of those inside makeObject then resetList will run before the asynchronous parts get called.
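A tiny illustration of that, using setTimeout as a stand-in for any asynchronous API (this is a hypothetical makeObject, not your real one):
function makeObject() {
    setTimeout(function () {
        console.log("async part of makeObject"); // logged second
    }, 0);
}

function resetList() {
    console.log("resetList"); // logged first
}

makeObject();
resetList();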
resetList() will be called directly after the HTTP call is made. Unless you do other async work before the HTTP call, the order will always be:
makeObject()
loop inside makeObject()
sendObject() is called from inside makeObject()
sendObject() does the HTTP call
resetList() gets triggered right after the HTTP call since that HTTP call is async.
The HTTP returns and any handlers attached to it are triggered.
But make sure that you don't do any other async work, else this will not apply.
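A minimal sketch of that ordering, using XMLHttpRequest for the assumed HTTP call (the /save endpoint is made up); the numbered log messages show the sequence described above:
var someList = [1, 2, 3];

function sendObject(obj) {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/save"); // hypothetical endpoint
    xhr.onload = function () {
        console.log("4. response handler runs last, when the server answers");
    };
    xhr.send(JSON.stringify(obj));
    console.log("2. HTTP call has been sent");
}

function makeObject() {
    console.log("1. looping over someList");
    sendObject({ items: someList.slice() });
}

function resetList() {
    someList = [];
    console.log("3. list reset, before the response arrives");
}

makeObject();
resetList();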
I have the following:
if(typeof searchDOM === "undefined"){
dojo.xhrPut({
url: addrPath + "/ContServlet?mod=1&act=23",
handleAs: "xml",
timeout: xhrTimeout(TIMEOUT_LRG),
load: function(dom, ioArgs){
if(dom instanceof Error){
console.error(dom);
} else{
cacheDOM = dom;
}
},
error: function(response, ioArgs){
xhrError(ioArgs, methodName);
}
});
}
The variable cacheDOM is a global variable declared (but not initialised) elsewhere in another script. It is an XML document containing the entire DOM, and it is passed into the fetchXml function.
The problem is, cacheDOM is undefined when it gets to fetchXml, and this is causing problems for methods like selectNode further down the function.
I haven't had much exposure to XHR calls, or to things such as deferreds or promises, but I think they may be able to help with this. How do I code this so that the rest of the method this block is in will only execute once cacheDOM has been assigned the value of dom? Or, if deferreds are the answer, how would I incorporate them into this code? The version of Dojo I am using is 1.7.8.
Well, the problem is indeed that you're using an XHR request which is asynchronous. So, the fetchXml function has to wait until that request is completed.
There are several ways to do this. You could call the fetchXml function from within the load function of dojo.xhrPut, but this is not really a good solution as your project grows, because it creates a lot of dependencies between the two functions.
So, some smart people created an API for resolving asynchronous requests, called promises/deferreds.
So, what you have to do is assign a new deferred to cacheDOM, for example:
require(["dojo/_base/Deferred"], function(Deferred) {
cacheDOM = new Deferred();
});
Then, in the fetchXml() code you have to change your code a bit to do this:
function fetchXml() {
cacheDOM.then(function(realCache) {
console.log(realCache);
});
}
So instead of directly using cacheDOM, you have to wait for it using cacheDOM.then(). It will fire a callback when it's resolved, and the data will be available in realCache.
An alternative would be to call the entire fetchXml function when the XHR request has fired:
cacheDOM.then(fetchXml);
function fetchXml(cacheDOM) {
// Work with cacheDOM
}
This might take less work and less alteration to the fetchXml function depending on how much it relies on cacheDOM.
Then finally, inside the load function of your dojo.xhrPut, you will have to do the following:
cacheDOM.resolve("My data");
Where "My data" would be the actual data which you would put inside cacheDOM.
DEMO: http://jsfiddle.net/rf20s9hb/1/
We are trying to implement QUnit JavaScript tests for a JS-heavy web app. We are struggling to find a way to successfully test methods that involve jQuery AJAX requests. For example, we have the following constructor function (obviously this is a very simplistic example):
var X = function() {
this.fire = function() {
$.ajax("someURL.php", {
data: {
userId: "james"
},
dataType: "json",
success: function(data) {
//Do stuff
}
});
};
};
var myX = new X();
myX.fire();
We are trying to find a way to test the fire method, preferably with a stubbed URL instead of the real someURL.php.
The only obvious solution to me at the moment is to add the URL, and the success callback, as arguments to the constructor function. That way, in the test, we can create a new instance of X and pass in the stub URL and a callback to run when the stub returns a response. For example:
test("Test AJAX function", function() {
stop();
var myX = new X();
//Call the AJAX function, passing in the stub URL and success callback
myX.fire("stub.php", function(data) {
console.log(data);
start();
});
});
However, this doesn't seem like a very nice solution. Is there a better way?
With jQuery, you can use the xhr object that .ajax() returns as a promise, so you can add more handlers (see below) than just the single success, complete and error ones you define in the options. So if your async function can return the xhr object, you can add test-specific handlers.
As for the URL, that's a little trickier. I've sometimes set up a very simple Node server on localhost, which just serves canned responses that were copied from the real server. If you run your test suite off that same server, your URLs just need to be absolute paths to hit the test server instead of the production server. And you also get a record of the requests themselves, as a server sees them. Or you can have the test server send back errors or bad responses on purpose, if you want to see how the code handles it.
But that's of course a pretty complex solution. The easier one would be to define your URLs in a place where you can redefine them from the test suite. For instance:
/* in your code */
var X = function () {
this.fire = function () {
return $.ajax({ url: this.constructor.url, ... });
};
};
X.url = "someURL.php"; // the production url
/* in your tests */
X.url = "stub.php"; // redefine to the test url
Also, QUnit has an asyncTest function, which calls stop() for you. Add a tiny helper to keep track of when to start again, and you've got a pretty good solution.
Here's what I've done before
// create a function that counts down to `start()`
function createAsyncCounter(count) {
count = count || 1; // count defaults to 1
return function () { --count || start(); };
}
// ....
// an async test that expects 2 assertions
asyncTest("testing something asynchronous", 2, function() {
var countDown = createAsyncCounter(1), // the number of async calls in this test
x = new X;
// A `done` callback is the same as adding a `success` handler
// in the ajax options. It's called after the "real" success handler.
// I'm assuming here, that `fire()` returns the xhr object
x.fire().done(function(data, status, jqXHR) {
ok(data.ok);
equal(data.value, "foobar");
}).always(countDown); // call `countDown` regardless of success/error
});
Basically countDown is a function that counts down to zero from whatever you specify, and then calls start(). In this case, there's 1 async call, so countDown will count down from that. And it'll do so when the ajax call finishes, regardless of how it went, since it's set up as an always callback.
And because the asyncTest is told to expect 2 assertions, it'll report an error if the .done() callback is never called, since no assertions will be run. So if the call completely fails, you'll know that too. If you want to log something on error, you can add a .fail() callback to the promise chain.
If it's a unit test that can (and should) be run in isolation from the server side, you can simply "replace" $.ajax to simulate whatever behavior.
One easy example:
test("Test AJAX function", function() {
// keep the real $.ajax
var _real_ajax = $.ajax;
// Simulate a successful response
$.ajax = function(url, opts) {
opts.success({expected: 'response'});
}
var myX = new X();
// Call your ajax function
myX.fire();
// ... and perform your tests
// Don't forget to restore $.ajax!
$.ajax = _real_ajax;
});
Obviously you can also perform a real ajax call with stubbed url/data:
// Simulate a successful response
$.ajax = function(url, opts) {
opts.success = function(data) {
console.log(data);
start();
}
_real_ajax('stub.php', opts)
}
If you don't have a complex response, I prefer the first approach because it is faster and easier to comprehend.
However, you can also take another route and put the Ajax logic in its own method, so you can easily stub it during tests.
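For example, a rough sketch of that refactoring (the request method name is my choice, not something from the question):
var X = function () {
    // All HTTP traffic funnels through one method that tests can override
    this.request = function (opts) {
        return $.ajax("someURL.php", opts);
    };

    this.fire = function () {
        this.request({
            data: { userId: "james" },
            dataType: "json",
            success: function (data) {
                // Do stuff
            }
        });
    };
};

/* in the test: stub only the request method, $.ajax stays untouched */
var myX = new X();
myX.request = function (opts) {
    opts.success({ expected: 'response' }); // simulate a successful response
};
myX.fire();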
Because of the complexity of this application, I have a need to wrap Facebook API calls, like so.
//In main file, read is always undefined
var read = fb_connect.readStream();
// In fb_wrapper.js
function readStream () {
var stream;
FB.api('/me/feed', {limit:10000}, function (response) {
stream = response.data;
});
return stream;
}
I know that, due to the asynchronous nature of the call, readStream() returns stream before the callback has run (so it has no value yet). I am having trouble finding a way of getting the data out of the callback function's scope and back up to a higher scope. The FB API call is returning fine (I have debugged it a hundred times), but getting that response data out has been the battle thus far.
If anyone has any suggestions, it would be much appreciated. I searched for Facebook jQuery plug-ins (as a pre-made wrapper, perhaps) with little luck.
Judging from your question, it seems that you are looking for a synchronous call, which means that you'd want to use the data returned from the API call right after calling it. In that case, you'll need to check whether FB.api supports synchronous calls (it mostly doesn't).
Otherwise, you'll need to understand that you are making an async call here, which means that you should put your handling code INSIDE the callback function that you pass to FB.api. This is called the "continuation" style of writing code and is the standard way to use async calls.
FB.api('/me/feed', {limit:10000}, function (response) {
var stream = response.data;
// Do your processing here, not outside!!!
});
Or:
function handlerFunction(response) {
// Do your processing here
}
FB.api('/me/feed', {limit:10000}, handlerFunction);
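Applied to the wrapper from the question, a rough sketch: let readStream hand the data on via a callback, or return a promise instead of trying to return the data directly (the Promise variant assumes a Promise-capable environment; fb_connect is the question's wrapper object):
// In fb_wrapper.js: pass the data on via a callback...
function readStream(onData) {
    FB.api('/me/feed', { limit: 10000 }, function (response) {
        onData(response.data);
    });
}

// ...or return a Promise instead
function readStreamAsync() {
    return new Promise(function (resolve) {
        FB.api('/me/feed', { limit: 10000 }, function (response) {
            resolve(response.data);
        });
    });
}

// In the main file
fb_connect.readStream(function (stream) {
    console.log(stream); // the feed data is available here, not as a return value
});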