I have a question: I don't understand how this code works.
ans.map((val, indx) => {
    const options = {
        host: 'www.xxxx.com',
        path: '/path',
        port: 80,
        method: 'GET',
    };
    console.log(val);
    send.getJSON(options, (code, result) => {
        console.log("oke");
    });
});
For [1,2,3], the output I get is:
1
2
3
oke
oke
oke
Why is the output not the following instead?
1
oke
2
oke
3
oke
The issue is that your .getJSON is asynchronous but is being called from synchronous code. That isn't bad in itself; it's just handled slightly differently.
Node.js uses an event loop to provide asynchronicity in a single-threaded language like JavaScript.
So your callback won't actually run until .getJSON() has completed, by which point the whole loop has already finished.
https://jsfiddle.net/uva5o10d/
Have a look here; I've made you an example to demonstrate what I mean. I simply fill an array with values (all 1s for this example) and set up a callback function using setTimeout (which delays by 1s). Notice, however, that the program continues to run in the meantime; that's the event loop at work.
At the bottom of the file, notice test(): this calls a version very similar to what you currently have, including a call to a longer-running job, like retrieving data from an API.
Comment out test() and uncomment working() and you'll see the difference. You may want to use console.log(val + " oke") inside your .getJSON to produce the results you're looking for.
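To see the same thing without any network calls, here is a minimal, framework-free sketch. setTimeout stands in for .getJSON, since both hand their callbacks to the event loop rather than running them immediately:

```javascript
// The loop body runs synchronously for every value first;
// the callbacks only run after the current call stack is empty.
var order = [];

[1, 2, 3].forEach(function (val) {
    order.push(val);                 // runs now, synchronously
    setTimeout(function () {
        order.push('oke');           // deferred to the event loop
    }, 0);
});

console.log(order.join(' '));        // "1 2 3" (no 'oke' yet)

setTimeout(function () {
    console.log(order.join(' '));    // "1 2 3 oke oke oke"
}, 10);
```

Even with a delay of 0 ms, the deferred callbacks cannot run until the synchronous loop (and everything after it on the stack) has finished, which is exactly why all the values print before any "oke".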
https://codeforgeek.com/asynchronous-programming-in-node-js/
Also, as a side note: forEach would probably be a better method for iterating over the array, unless you want a transformed array back (which is what map is for).
I've attached some resources I think you may find helpful:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array
https://www.techopedia.com/definition/3821/iteration (you're already on track, but it's always worth a read if you're not overly familiar).
Related
One of my angularjs files had a code construct similar to the below:
this.examplecall = function() {
var a = apiconfig.getconfig();
....
//some lines of code
....
var b = apiconfig.getconfig();
}
And I started to write unit tests for it using Angular's Jasmine specs, and ended up with the below stub of code.
describe("test examplecall")...
it("should test cofig call in examplecall", function() {
$httpBackend.expectGET(GET_CONFIG).respond(200);
});
The above code throws an exception saying "unexpected GET..".
When I added an extra expectGET, things worked out fine. See below:
describe("test examplecall")...
it("should test cofig call in examplecall", function() {
$httpBackend.expectGET(GET_CONFIG).respond(200);
$httpBackend.expectGET(GET_CONFIG).respond(200);
});
From this, I infer that if there are two API calls in a particular function, then I have to expect them twice.
Does that mean that if there are n identical API calls in a particular code stub, I have to expect them n times?
Are there any similar constructs like,
$httpBackend.WheneverexpectGET(GET_CONFIG).respond(200);
so that whenever we call the API it just returns a 200 status, like above?
Thank you for your comments on this...
EDIT: (read the accepted answer before going through this.)
Thanks to #kamituel for the wonderful answer.
To summarise with the information provided in his answer:
Use of expect ($httpBackend.expect):
Asserts the order of the API calls: it expects the code to call the APIs in the exact order you specify.
If there are 3 API calls, then you should expect them 3 times.
Use of when ($httpBackend.when):
Doesn't assert whether an API call is made or not, and doesn't throw any error.
$httpBackend.when behaves like a mini mock database: whenever your code expects some response from an API, it supplies it. That's it.
Yes, .expectGET is used to assert that a given request has been made by the application, so you need to call it n times if you expect the application to make n requests.
If you don't need to assert on that, but only want the application logic to work through whatever requests it makes, you might want to use .whenGET instead. The difference between .expectXXX and .whenXXX has already been described in another answer.
Edit: I'm not sure which Angular version you are using, but you will find this in the implementation of .expectGET:
expectations.shift();
This is invoked once a request is made and matches what was expected. Which means the same expectation is only asserted on once.
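To illustrate that shift() behavior, here is a toy model; it is not Angular's real implementation (MockBackend and its request method are invented names), but it contrasts a one-shot expectation queue with persistent when-style definitions:

```javascript
// Toy contrast between expectGET (ordered queue, consumed one per request)
// and whenGET (persistent definitions that answer any number of requests).
function MockBackend() {
    this.expectations = []; // consumed in order, one per matching request
    this.whens = [];        // reusable, never consumed
}

MockBackend.prototype.expectGET = function (url) {
    this.expectations.push(url);
};

MockBackend.prototype.whenGET = function (url) {
    this.whens.push(url);
};

MockBackend.prototype.request = function (url) {
    if (this.expectations.length > 0) {
        var expected = this.expectations.shift(); // the same shift() idea as above
        if (expected !== url) throw new Error('Unexpected GET ' + url);
        return 200;
    }
    if (this.whens.indexOf(url) !== -1) return 200;
    throw new Error('Unexpected GET ' + url);
};
```

With this model, a single expectGET satisfies exactly one request and a second identical request fails, while a single whenGET keeps answering, which mirrors why n expected requests need n expectGET calls.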
It's usually also a good idea to call .verifyNoOutstandingExpectation() after your test is done, to ensure that each request you declared as expected using .expectXXX() has indeed been made by the application.
While reading node.js tutorial, I came across this page where they have explained the scenario using "Restaurant service" as an example.
In Blocking IO they have a code:
// requesting drinks for table 1 and waiting...
var drinksForTable1 = requestDrinksBlocking(['Coke', 'Tea', 'Water']);
// once drinks are ready, then server takes order back to table.
serveOrder(drinksForTable1);
// once order is delivered, server moves on to another table.
In Non-blocking IO, they have changed it to:
// requesting drinks for table 1 and moving on...
requestDrinksNonBlocking(['Coke', 'Tea', 'Water'], function(drinks){
return serveOrder(drinks);
});
From what I understood, the second code will also take the same time to execute and then move to next line of code. How to differentiate ?
Also, how to write 'function requestDrinksNonBlocking()' which can process array ['Coke', 'Tea', 'Water'] and then execute serveOrder using anonymous function.
Please help me understand the scenario.
In the first example, requestDrinksBlocking executes and you use its return value to call serveOrder.
In the second example, requestDrinksNonBlocking takes a callback (which wraps serveOrder) and calls it when the request is done.
Example of a requestDrinksNonBlocking implementation:
function requestDrinksNonBlocking(drinks, callback) {
    // handle the request for the {drinks} array here,
    // then hand the result back to the caller
    callback(drinks);
}
From what I understood, the second code will also take the same time to execute and then move to next line of code. How to differentiate ?
How much time this code takes in isolation is not what matters. In the second case, something else can happen before the callback is invoked; for example, other requests can be handled instead of waiting.
Also, how to write 'function requestDrinksNonBlocking()' which can process array ['Coke', 'Tea', 'Water'] and then execute serveOrder using anonymous function.
There are infinitely many ways to write such a function without knowing exactly what it should do.
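That said, here is one possible sketch, using setTimeout to simulate the waiting time at the bar. The log array and the 0 ms delay are illustrative choices, not part of the tutorial's code:

```javascript
// Non-blocking version: the "server" hands off the drinks request
// and immediately moves on to the next table.
var log = [];

function requestDrinksNonBlocking(drinks, callback) {
    setTimeout(function () {      // simulated preparation time at the bar
        callback(drinks);
    }, 0);
}

function serveOrder(drinks) {
    log.push('served: ' + drinks.join(', '));
}

requestDrinksNonBlocking(['Coke', 'Tea', 'Water'], serveOrder);
log.push('taking order at table 2'); // runs while the drinks are "being prepared"

setTimeout(function () {
    console.log(log.join('\n'));     // table 2 was handled before the drinks arrived
}, 10);
```

The key difference from the blocking version is visible in the log: the next table's order is taken before serveOrder runs, instead of the server standing idle until the drinks are ready.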
I'm a novice with reactive streams and am now trying to understand them. The idea looks pretty clear and simple, but in practice I can't understand what's really going on.
For now I'm playing with most.js, trying to implement a simple dispatcher. The scan method seems to be exactly what I need for this.
My code:
var dispatch;
// expose method for pushing events to stream:
var events = require("most").create(add => dispatch = add);
// initialize stream, so callback in `create` above is actually called
events.drain();
events.observe(v => console.log("new event", v));
dispatch(1);
var scaner = events.scan(
(state, patch) => {
console.log("scaner", patch);
// update state here
return state;
},
{ foo: 0 }
);
scaner.observe(v => console.log("scaner state", v));
dispatch(2);
As I understand, the first observer should be called twice (once per event), and scaner callback and second observer – once each (because they were added after triggering first event).
On practice, however, console shows this:
new event 1
new event 2
scaner state { foo: 0 }
The scaner is never called, no matter how many events I push into the stream.
But if I remove the first dispatch call (before creating scaner), everything works just as I expected.
Why is this? I've been reading docs and articles, but so far I haven't found anything even similar to this problem. Where am I wrong in my assumptions?
Most probably, you have studied examples like this from the API:
most.from(['a', 'b', 'c', 'd'])
.scan(function(string, letter) {
return string + letter;
}, '')
.forEach(console.log.bind(console));
They are suggesting a step-by-step execution like this:
Get an array ['a', 'b', 'c', 'd'] and feed its values into the stream.
The values fed are transformed by scan().
... and consumed by forEach().
But this is not entirely true. This is why your code doesn't work.
Here in the most.js source code, you see at line 1340 ff.:
exports.from = from;
function from(a) {
if(Array.isArray(a) || isArrayLike(a)) {
return fromArray(a);
}
...
So from() is forwarding to some fromArray(). Then, fromArray() (below in the code) is creating a new Stream:
...
function fromArray (a) {
return new Stream(new ArraySource(a));
}
...
If you follow through, you will get from Stream to sink.event(0, array[i]);, with 0 for the timeout millis. There is no setTimeout in the code, but if you search the code further for .event = function, you will find a lot of additional code that uncovers more. Specifically, around line 4692 there is the Scheduler with delay() and timestamps.
To sum it up: the array in the example above is fed into the stream asynchronously, after some time, even if the time seems to be 0 millis.
Which means you have to assume that, somehow, the stream is first built and then used, even if the program code doesn't look that way. But hey, isn't it always the goal to hide complexity :-) ?
Now you can check this with your own code. Here is a fiddle based on your snippet:
https://jsfiddle.net/aak18y0m/1/
Look at your dispatch() calls in the fiddle. I have wrapped them with setTimeout():
setTimeout( function() { dispatch( 1 /* or 2 */); }, 0);
By doing so, I force them also to be asynchronous calls, like the array values in the example actually are.
In order to run the fiddle, you need to open the browser debugger (to see the console) and then press the run button above. The console output shows that your scanner is now called three times:
doc ready
Most loaded: [object Object]
scanner state Object {foo: 0}
scanner! 1
scanner state Object {foo: 0}
scanner! 2
scanner state Object {foo: 0}
First for drain(), then for each event.
You can also reach a valid result (but it's not the same behind scenes) if you use dispatch() synchronously, having them added at the end, after JavaScript was able to build the whole stream. Just uncomment the lines after // Alternative solution, run again and watch the result.
Well, my question appears to be not as general as it sounds; it's just a lib-specific one.
First, the approach from the question is not valid for most.js. They argue for taking 'a declarative, rather than imperative, approach'.
Second, I tried the Kefir.js lib, and with it the code from the question works perfectly. It just works. Even more, the same approach that is not supported in most.js is explicitly recommended for Kefir.js.
So the problem is in a particular lib's implementation, not in my head.
I am using protractor to test a series of webpages (actually mostly one webpage of Angular design, but others as well). I have created a series of page objects to handle this. To minimize code maintenance I have created an object with key value pairs like so:
var object = {
pageLogo: element(by.id('logo')),
searchBar: element.all(by.className('searchThing')),
...
};
The assumption being that I will only need to add something to the object to make it usable everywhere in the page object file. Of course, the file has functions (in case you are not familiar with the page object pattern), as such:
var pageNamePageObject = function () {
    var object = {...}; // list of elements
    this.get = function () {
        browser.get('#/someWebTag');
    };
    this.getElementText = function (someName) {
        if (typeof someName == 'number')
            ... (convert or handle exception, whatever)
        return object[someName].getText();
    };
    ...
};
*Note: these are just examples; the promises can be handled in a variety of ways in the page object or in the main tests.
The issue comes from attempting to "cycle" through the object. The particular test is attempting to verify, among other things, that all the elements are present on the web page, so I am attempting to loop through these objects using the isPresent() function. I have made many attempts, and for brevity's sake I will not list them all here. They include creating a wrapper promise (using Q, which I must admit I don't really understand) and running the function inside an expect, hoping that the Jasmine core would wait for all the looping promises to resolve and then read the output (that was more of a last-ditch effort, really).
You should loop, as you did before, over all of the elements. If you want them handled in a particular order, create a recursive function that calls itself with the next element in the object.
Now, to handle the Jasmine specs finishing early and all that:
This function needs to be added to Protractor's control flow for it to wait before continuing; read more about it here. Also, don't use Q in Protractor; use Protractor's implementation of WebDriverJS promises.
Also, consider using isDisplayed instead, assuming you actually want the element to be displayed on the page.
So basically, your code skeleton will look like this:
it('.....', function () {
    var flow = webdriver.promise.controlFlow();
    return flow.execute(function () {
        // your checks on the page here;
        // if needed, extract them to an external function as described in my first paragraph
    });
});
Well, I think that should provide you with enough info on how to handle waiting for promises in Protractor. Hope I helped.
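The looping idea itself can be sketched in plain JavaScript without Protractor at all: collect one promise per element and wait on all of them with Promise.all. The makeElement helper below is a made-up stand-in for element(...) and its .isPresent(), used only so the sketch is self-contained:

```javascript
// Fake "element" whose isPresent() returns a promise, like Protractor's does.
function makeElement(present) {
    return { isPresent: function () { return Promise.resolve(present); } };
}

// Stand-in for the page object's key/value map of elements.
var pageObject = {
    pageLogo: makeElement(true),
    searchBar: makeElement(true)
};

// Collect every element's presence check and wait for all of them,
// instead of asserting in the middle of the loop.
function allPresent(obj) {
    var checks = Object.keys(obj).map(function (key) {
        return obj[key].isPresent(); // one promise per element
    });
    return Promise.all(checks).then(function (results) {
        return results.every(Boolean); // true only if every check passed
    });
}

allPresent(pageObject).then(function (ok) {
    console.log('all elements present?', ok);
});
```

In a real spec you would return the allPresent(...) promise (or expect on it) so the test framework waits for it to resolve before finishing.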
I'm learning Node.js through the learnyounode project. I have completed the first few assignments and they all seemed reasonably straightforward.
Then, I got to the 'Async Juggling' one, and the assignment's description went completely over my head in terms of what I need to do.
The gist of it is that I need to write a JavaScript program that accepts 3 URLs as arguments and associates each response with the correct server. The assignment itself notes that you cannot naively assume that things will be properly associated with the correct URL.
The (incorrect) code I came up with proved that restriction true:
var http = require('http');
var bl = require('bl');
var httpCallback = function(response) {
var pipeHandler = function (err, data) {
if(err)
return console.error(err);
console.log(data.toString());
};
response.pipe(bl(pipeHandler));
};
var juggleAsyncConnections = function(connA, connB, connC) {
http.get(connA, httpCallback);
http.get(connB, httpCallback);
http.get(connC, httpCallback);
};
juggleAsyncConnections(process.argv[2], process.argv[3], process.argv[4]);
The problem, and thus my question, is, what is the correct way to handle asynchronous connection juggling, and what are the underlying concepts I need to understand to do it correctly?
Note: I've seen other questions like "OMG why doesn't my solution work?" I'm not asking that; I deliberately set out to see the 'naive' solution fail for myself. I don't understand the underlying principles of why it doesn't work, or what principles actually do work. Additionally, I'm not asking for someone to solve the problem for me. If the general algorithm can be explained, I can probably implement it on my own.
Counting callbacks is one of the fundamental ways of managing async in Node. [...]
That's an important piece.
You know how many inputs there are (3), and, because of that, you know how many outputs there should be. Keep a running tally as responses come back, then check if you received the full set before printing to the screen. You also want to keep the original order in mind (now if there were only a datatype that had numeric indexes... :grin:).
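To make that concrete, here is one possible sketch of the counting pattern. The juggle function and the manual pending queue are invented for illustration; the learnyounode solution would use http.get and bl in place of these hand-driven tasks:

```javascript
// Counting-callbacks pattern: slot each result by its original index,
// keep a running tally, and only emit once the full set has arrived.
function juggle(tasks, done) {
    var results = new Array(tasks.length);
    var remaining = tasks.length;
    tasks.forEach(function (task, index) {
        task(function (err, data) {
            if (err) return done(err);
            results[index] = data;                    // preserve original order
            remaining -= 1;                           // count completions
            if (remaining === 0) done(null, results); // full set received
        });
    });
}

// Simulate three "requests" whose completion order we control by hand:
var pending = [];
var finalResults = null;
juggle(
    [
        function (cb) { pending.push(function () { cb(null, 'response A'); }); },
        function (cb) { pending.push(function () { cb(null, 'response B'); }); },
        function (cb) { pending.push(function () { cb(null, 'response C'); }); }
    ],
    function (err, results) { finalResults = results; }
);

// Complete them out of order; the output order is still preserved:
pending[2](); pending[0](); pending[1]();
console.log(finalResults.join('\n')); // response A, response B, response C
```

The point is that nothing is printed as responses arrive; each one lands in its numbered slot, and printing happens exactly once, when the tally reaches zero.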
Good luck!