Inconsistent results from elementLocated vs findElements - javascript

I am writing Webdriver automation for a web app. I have a test that looks like this:
it('has five items', async function(done) {
  try {
    await driver.wait(until.elementLocated(By.className('item-class')), 5000);
    const items = await driver.findElements(By.className('item-class'));
    expect(items.length).toBe(5);
    done();
  }
  catch(err) {
    console.log(err)
  }
})
This test will pass about 2/3 of the time, but will sometimes fail with:
Expected 0 to be 5.
I would think that there should be no way to get this response, since the first line is supposed to make it wait until some of these items exist. I could understand a result of "Expected 1 to equal 5.", in the case that one item was added to the page, and the rest of the test completed before they were all there, but reaching the expect() call with 0 items on the page does not make sense to me.
The questions, then, are:
1) What am I missing / not understanding, such that this result is in fact possible?
2) Is there a different construct / method I should be using to make it wait until the expected items are on the page?

I checked the source code and elementLocated uses findElements, see here. And findElements can return an empty array of elements after the timeout, hence the 0 is expected (learnt something new today).
You can write something custom or use some ready-made method from here that doesn't use findElements
driver.wait(async function() {
  const items = await driver.findElements(By.className('item-class'))
  return items.length > 0;
}, 5000);

Well, I think a good way to solve this issue would be:
try {
  const items = await driver.wait(until.elementsLocated(By.className('item-class')));
  return items.length > 0;
}
catch(err) {
  console.log(err)
}
This way it will always wait for ALL elementS (note it's elementSLocated) to be located, and it will return an array of items (remember that without await you get back a promise of the array, not the array itself).
It has no timeout here, so it will wait until they are all ready (or you can put a limit so that if something weird is happening you can see it).
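If you need the count itself to be part of the wait rather than asserting after the fact, a custom condition works too. A minimal sketch combining both answers, assuming Jasmine-style expect and selenium-webdriver 4.x: keep polling findElements until exactly five items are present, with an explicit timeout message so a hang is reported instead of swallowed.

it('has five items', async function() {
  // driver.wait retries the condition until it returns a truthy value;
  // returning null keeps it polling, returning the array ends the wait
  // (an empty array would be truthy, so check the length explicitly).
  const items = await driver.wait(async () => {
    const found = await driver.findElements(By.className('item-class'));
    return found.length === 5 ? found : null;
  }, 5000, 'expected 5 .item-class elements');
  expect(items.length).toBe(5);
});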

Related

Is there a way to run particular Protractor test depending on the result of the other test?

This kind of question has been asked before, but most of those questions have a pretty complicated background.
The scenario is simple. Let's say we are testing our favorite TODO app.
The test cases are:
TC00 - 'User should be able to add a TODO item to the TODO list'
TC01 - 'User should be able to rename TODO item'
TC02 - 'User should be able to remove TODO item'
I don't want to run TC01 and TC02 if TC00 fails (the TODO item is not added, so I have nothing to remove or rename).
So I've been researching this question for the past 3 days, and the most common answers to it are:
• Your tests should not depend on each other
• Protractor/Jasmine does not have a feature to dynamically turn on/off tests ('it' blocks)
The reason why I'm asking this question here is that it looks like a very widespread case, and there is still no clear suggestion for handling it (I mean, I could not find any).
My javascript skills are poor, but I understand that I need to play around with, let's say, passing 'done' or adding an if with the test inside...
it('should add a todo', () => {
  todoInput.sendKeys('test')
  addButton.click();
  let item = element(by.cssContainingText('.list-item', 'test'))
  expect(item.isPresent()).toBe(true)
})
In my case there are like 15 tests ('it' blocks) after adding the item to the list. And I want to skip SOME OF THE tests if the 'parent' test failed.
PLEASE NOTE:
There is a solution out there which allows skipping ALL remaining tests if one fails. This does not suit my needs.
Man, I spent a good couple of weeks researching this, and yes, there were NO clear answers, until I realized how Protractor works in detail. If you understand this too, you'll figure out the best option for you.
SOLUTION IS BELOW AFTER SHORT THEORY
1) If you try to pass an async function to describe, you'll see it fail, because describe only accepts a synchronous function.
What this means for you is that whatever condition you want to wrap an it block in, it can't be Promise-based (a Promise resolves at some later point, not immediately). And what you're trying to do essentially IS a Promise (open a page, do something, and wait to see if the condition satisfies your criteria).
if (conditionIsTrue) { // can't be Promise
  it('name', () => {
  })
}
That's the first thing to consider...
2) When you run Protractor, it picks up the spec files specified in the config and builds the queue of describe/it AND beforeAll/afterAll blocks. THE IMPORTANT DETAIL HERE IS THAT THIS HAPPENS BEFORE THE BROWSER HAS EVEN STARTED.
Look at this example
let conditionIsTrue; // undefined
it('name', () => {
  conditionIsTrue = true;
})
if (conditionIsTrue) { // still undefined
  it('name', () => {
  })
}
By the time Protractor reaches the if() statement, the value of conditionIsTrue is still undefined. It may be overwritten inside the it block later on, when the browser starts, but not while the queue is being built. So Protractor skips it.
In other words, Protractor knows which describe blocks it'll run before it even opens the browser, and this queue can NOT be modified during execution.
POSSIBLE SOLUTION
1.1 Define a global variable outside of describe
let conditionIsTrue; // undefined
describe("describe", () => {
  it('name1', async () => {
    conditionIsTrue = await element.isPresent(); // NOW IT'S TRUE if element is present
  })
  it('name2', async () => {
    if (conditionIsTrue) {
      //do whatever you want if the element is present
    } else {
      console.log("Skipping 'name2' test")
    }
  })
})
So you won't skip the it block itself; however, you can skip anything inside of it.
1.2 The same approach can be used for skipping it blocks across different specs, using an environment variable. Example:
spec_1.js
describe(`Suite: 1`, () => {
  it("element is present", async () => {
    if (await element.isPresent()) {
      process.env.FLAG = true // note: env var values are stored as strings, so this becomes 'true'
    } else {
      process.env.FLAG = false // and this becomes the string 'false'
    }
  });
});
spec_2.js
describe(`Suite: 2`, () => {
  it("element is present", async () => {
    // Compare against the string: process.env values are always strings,
    // so a bare if (process.env.FLAG) would also be truthy for 'false'.
    if (process.env.FLAG === 'true') {
      // do element specific actions
    }
  });
});
Another possibility I found out about, but never had a chance to check, is to use the Grunt task runner, which may help you implement the following scenario:
Run protractor to execute one spec
Check a desired condition
Export this condition to an environment variable
Exit protractor
In your Grunt task, implement conditional logic that executes the rest of the conditional specs by starting protractor again
But honestly, I don't see why you'd want to go this time-consuming route, which requires a lot of code... But just as an FYI.
There is one way provided by Protractor which might achieve what you want.
In the Protractor config file you can have an onPrepare function. It is a callback function called once Protractor is ready and available, and before the specs are executed. If multiple capabilities are being run, this will run once per capability.
Now, as I understand it, you need to run a parent test (or, we can say, execute a parent function) and then, based on its output, run some tests and skip others.
The onPrepare function in the Protractor config file will look like this:
onPrepare: async () => {
  await browser.manage().window().maximize();
  await browser.driver.get('url')
  // Continue your parent test steps for adding an item here. At the end of
  // the function, assign a global variable, say global.itemAdded = true/false,
  // based on the result of the steps above. Note that you need the 'global.'
  // prefix to make it a global variable that is then available in all specs.
}
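For concreteness, a hedged sketch of what those parent test steps might look like, reusing the TODO example from the question; the locators here (.todo-input, .add-button, .list-item) are hypothetical:

onPrepare: async () => {
  await browser.manage().window().maximize();
  await browser.driver.get('url')
  // Hypothetical parent test: try to add the item, then record the outcome.
  await element(by.css('.todo-input')).sendKeys('test');
  await element(by.css('.add-button')).click();
  // true only if the new item actually showed up in the list
  global.itemAdded = await element(by.cssContainingText('.list-item', 'test')).isPresent();
}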
Now in your spec files you can run tests (it()) based on the global.itemAdded variable value:
if (global.itemAdded === true) {
  it('This test should be running', () => {
  })
}
if (global.itemAdded === false) {
  it('This test should not be running', () => {
  })
}

Promise inside promise doesn't wait for Axios to finish

I have a problem. I receive many objects, map over them, and for each one I make an external query using axios and save the result. Here's the code:
let savedClients = Object.entries(documents).map(personDocument => {
  let [person, document] = personDocument
  documentFormated = document
  documentNumbers = document.replace(/\D/g, '')
  return ConsultDocuments.getResponse(documentNumbers).then(resultScore => { // Calling the axios
    const info = { ...resultScore }
    return Save.saveClient(info)
  })
})
Promise.all(savedClients).then(results => {
  console.log(results) // Only one document comes back, repeated once per document passed to map
})
The problem is that the whole map runs first, and then the queries are made with only the last document, repeated many times (once per document passed in).
This code is legacy, and using async/await doesn't work here (seriously, or I wouldn't be asking).
I've tried N ways to make this work; with the Q() library the map runs in the correct order, but it doesn't wait for axios, and all the results come back as "pending".
Thanks!
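No answer is recorded here, but one common cause of this exact symptom is that documentFormated and documentNumbers are assigned without let/const, making them implicit globals shared by every iteration: any code that reads them inside a later .then sees only the last value. A minimal sketch of that fix (an assumption, since the bodies of getResponse and saveClient aren't shown):

let savedClients = Object.entries(documents).map(([person, document]) => {
  // const scopes each iteration's values instead of overwriting shared globals
  const documentFormated = document
  const documentNumbers = document.replace(/\D/g, '')
  return ConsultDocuments.getResponse(documentNumbers)
    .then(resultScore => Save.saveClient({ ...resultScore }))
})

Promise.all(savedClients).then(results => console.log(results))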

How to wait for all dynamic number of forks to complete with redux-saga?

I'm trying to use redux-saga to query a number of REST endpoints to get a human-readable name associated with each discovered network, to fill a dropdown list with selectable networks.
I'm having trouble doing this, my forking is not working. I keep getting the error message:
TypeError: __webpack_require__.i(..) is not a function(...)
Every example using all() I've found online uses call and knows ahead of time every request being made. However, judging from the API, I tried something like this:
const pendingQueries = [];
for (const networkId in discoveredNetworks) {
  pendingQueries.push(fork(getApplicationName, networkId));
}
const queryResults = yield all(pendingQueries);
This failed. I've tried a number of other permutations since. From testing I'm able to verify that I can do this:
const results = [];
for (const networkId in discoveredNetworks) {
  results.push(yield fork(getApplicationName, networkId));
}
and if there is a long enough delay the method will run and complete, though this approach obviously doesn't guarantee that the forked methods will complete before I use the results as I want. Still, it seems to confirm that the problem is in my use of all.
What is wrong with my all command?
Why don’t you wrap each request in a promise and call them like so:
var promises = []
for (const networkId in discoveredNetworks) {
  promises.push(new Promise((res, rej) => {
    // code from getApplicationName goes here
    // call res(result) on success and rej(error) on failure
  }));
}
// Yielding a promise suspends the saga until it settles; note that
// call() expects a function, so call(Promise.all(promises)) would not work.
const results = yield Promise.all(promises)
I got this to work by giving up on the all() method, which I never got to work as advertised, but which wasn't really the right method for the job anyway.
For forks I should have been using join() instead. So, something along these lines:
const pendingQueries = [];
for (const networkId in discoveredNetworks) {
  pendingQueries.push(yield fork(getApplicationName, networkId));
}
const results = yield join(...pendingQueries);
results.forEach((result) => {
  // my logic for handling the response and generating the action
});
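For reference, the all() form from the question can also work if the array holds call effects rather than forks: all() waits for call effects to resolve, whereas fork returns a task immediately. A hedged sketch, reusing the question's getApplicationName and discoveredNetworks names:

import { all, call } from 'redux-saga/effects';

function* fetchApplicationNames(discoveredNetworks) {
  // all() resumes the saga only after every call effect has resolved
  const queryResults = yield all(
    Object.keys(discoveredNetworks).map(networkId =>
      call(getApplicationName, networkId)
    )
  );
  return queryResults;
}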

RXJS expand operator

var offset = 1;
var limit = 500;
var list = new Promise(function (resolve, reject) {
  rets.getAutoLogoutClient(config.clientSettings, (client) => {
    var results = client.search.query(SearchType, Class, Query, {
      limit: limit,
      offset: offset
    });
    resolve(results);
  });
});
var source = Rx.Observable.fromPromise(list);
source.subscribe(results => console.log(results.count));
I am building a real estate site using RETS.
Since my queries are limited by the RETS server, what I am trying to do is run this in a loop, increasing my offset until I have all my data. I don't know what the count is until I run the query and read the count value.
I have tried to use expand, but I have no clue how exactly it works. I've tried this multiple ways, even using an old-fashioned while loop, which doesn't work with the .then method. So I have turned to RxJS, since I've been using it in Angular 4.
This is done in Express. I eventually need to run cron jobs to fetch updated properties, but my problem is fetching all the data, increasing the offset each time while the count is higher than my offset. So for example, run a query with an offset of 1 and a limit of 500. The total here is 1690, so the next go around my offset would be:
offset += limit
Once I have my data, I need to save it to MongoDB, which I have already done successfully. It's just finding a way to get all the data without having to manually set my offset.
Note the server limit is 2500. Yes, I could fetch all of this in one shot, but there is also other data, such as media, which could well exceed 2500 rows.
Any suggestions?
This is actually a fairly common use case for RxJS, as there are a lot of paginated data sources, or sources that are otherwise limited in what you can request at one time.
My two cents
To my mind expand is probably the best operator for this, given that you are paginating against an unknown data source and you require at least one query in order to determine the final count. If you knew how much data you were going to be querying, an easier option would be something like mergeScan, but I digress.
Proposed Solution
This may take a little effort to wrap your head around so I have added annotations wherever possible to break down how this all works. Note I haven't actually tested this, so forgive me any syntax errors.
// Your constant limit for any one query
const limit = 500;
// RxJS helper method that wraps the async call into an Observable
// I am basing this on what I saw of your sample which leads me to believe
// that this should work.
const clientish = Rx.Observable.bindCallback(rets.getAutoLogoutClient);
// A method wrapper around your query call that wraps the resulting promise
// into a defer.
const queryish = (client, params) =>
  // Note the use of defer here is deliberate: since the query returns
  // a promise that begins executing immediately, this prevents that behavior
  // and forces execution on subscription.
  Rx.Observable.defer(() => client.search.query(SearchType, Class, Query, params));
// This does the actual expansion function
// Note this is a higher order function because the client and the parameters
// are available at different times
const expander = (client) => ({limit, offset}) =>
  // Invoke the query method with the current offset
  queryish(client, {limit, offset})
    // Remap the results, update offset and count, and forward the whole
    // package downstream
    .map(results => ({
      limit,
      count: results.count,
      offset: offset + limit,
      results
    }));
// Start the stream by constructing the client
clientish(config.clientSettings)
  .switchMap(client =>
    // These are the arguments for the initial call
    Rx.Observable.of({limit, offset: 0})
      // Call the expander function with the client
      // The second argument is the max concurrency, you can change that if needed
      .expand(expander(client), 1)
      // Expand will keep recursing unless you tell it to stop
      // This will halt the execution once offset exceeds count, i.e. you have
      // all the data
      .takeWhile(({count, offset}) => offset < count)
      // Further downstream you only care about the results
      // So extract them from the message body and only forward them
      .pluck('results')
  )
  .subscribe(results => { /* Do stuff with results */ });
const retsConnect = Rx.Observable.create(function(observer) {
  rets.getAutoLogoutClient(config.clientSettings, client => {
    return searchQuery(client, 500, 1, observer);
  });
});

function searchQuery(client, limit, offset, observer) {
  let currentOffset = offset === undefined || offset === 0 ? 1 : offset;
  return client.search.query(SearchType, Class, Query, {limit: limit, offset: currentOffset})
    .then(results => {
      offset += limit;
      observer.next(results.maxRowsExceeded);
      if (results.maxRowsExceeded) {
        console.log(offset);
        return searchQuery(client, limit, offset, observer);
      } else {
        console.log('Completed');
        observer.complete();
      }
    });
}
retsConnect.subscribe(val => console.log(val));
So this is getting somewhere with what I have tried here. I am still in the process of tweaking it. What I am looking to do next is break searchQuery down more. I'm not sure if I should be passing observer.next there, so I am going to figure out where to map and takeUntil instead of returning searchQuery again. I'm not sure takeUntil will take a true or false, though. All I need is for this data to be saved into MongoDB, so I guess I could leave it like this and put my save method in there, but I would still like to figure this out.
Note: results.maxRowsExceeded returns true while there is still more data. So once maxRowsExceeded returns false, the recursion stops and all the data has been fetched.
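For comparison, here is a hedged sketch of that same recursion expressed with expand, using maxRowsExceeded as the stop signal instead of comparing offset to count. The field names follow the question's code and are assumptions about the RETS client API:

const pageSize = 500;

Rx.Observable.of({ maxRowsExceeded: true, offset: 1, results: null }) // synthetic seed
  .expand(page =>
    page.maxRowsExceeded
      // defer so each query runs only when expand subscribes to it
      ? Rx.Observable.defer(() =>
          client.search.query(SearchType, Class, Query, {
            limit: pageSize,
            offset: page.offset
          })
        ).map(results => ({
          maxRowsExceeded: results.maxRowsExceeded,
          offset: page.offset + pageSize,
          results
        }))
      // an empty observable ends the recursion
      : Rx.Observable.empty()
  )
  .filter(page => page.results !== null) // drop the synthetic seed
  .subscribe(page => { /* save page.results to MongoDB */ });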

Why doesn't array.push work?

I have this code:
function imagenesIn(errU, errores)
{
  if (errU) throw errU;
  var directorios = new Array();
  var origenDir = '';
  var destinoDir = '';
  if (errores == '')
  {
    if (campos.img instanceof Array)
    {
      for (file in campos.img)
      {
        origenDir = '';
        destinoDir = '';
        origenDir = campos.img[file].path;
        destinoDir = '/uploads/publish/alquiler/' + req.session.passport.user + campos.img[file].name;
        fs.rename(origenDir, process.cwd() + '/public' + destinoDir, function(err)
        {
          if (err) throw err;
          directorios.push(destinoDir);
          console.dir(directorios)
        })
      }
    }
  } else {
    res.send(errores)
  }
  return directorios;
},
I want to get in directorios an array of the destination paths of all the files contained in req.files.img (which are in campos.img), but when I print it to the console this happens:
"img": [
"/uploads/publish/alquiler/andres#hotmail.comTulips.jpg",
"/uploads/publish/alquiler/andres#hotmail.comTulips.jpg"
],
I'm trying to get this result:
"img": [
"/uploads/publish/alquiler/andres#hotmail.comTulips.jpg", //first img
"/uploads/publish/alquiler/andres#hotmail.flowers.jpg"//second img
],
Why does the .push() method put only the first image's directory in, and not the second? Am I missing something? Thanks.
Your problem is that in
fs.rename(origenDir, process.cwd() + '/public' + destinoDir, function(err)
{
  if (err) throw err;
  directorios.push(destinoDir);
  console.dir(directorios)
})
your push() won't have actually run by the time you do
return directorios;
You need to make sure that the call to fs.rename(...) that finishes last (which is not, I repeat not, necessarily going to be the same call that starts last) handles the case where all the calls have finished. Using asynchronous calls, you cannot just fall through after firing up a bunch of them and do a return; you will have to put the code that you want to run after all the work is done in a callback that addresses what I called "handles" earlier.
Control-flow libraries like async.js could simplify your code, but you'll need to get your head around the notion that once your function goes async everything that follows it has to be async as well.
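A hedged sketch of that async.js route, assuming the same campos and req variables are in scope: async.map runs the renames in parallel, collects each destination path in order, and invokes the final callback once every rename has finished (or on the first error).

var async = require('async');
var fs = require('fs');

function imagenesIn(callback) {
  async.map(campos.img, function(file, done) {
    var destinoDir = '/uploads/publish/alquiler/' + req.session.passport.user + file.name;
    fs.rename(file.path, process.cwd() + '/public' + destinoDir, function(err) {
      done(err, destinoDir); // each path is collected into the results array
    });
  }, callback); // called as callback(err, directorios)
}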
ebohlman pretty much called it. Right now your for loop is setting up the anonymous functions that the rename function will call once it finishes.
As soon as these are set up, imagenesIn returns directorios. It may contain some or none of the directories, depending on whether or not rename finished before your return.
The power of node is that it is asynchronous. You could use fs.renameSync, yes, and it would follow what you expect. Node is not like an Apache PHP server. The PHP server gets a request and reserves a small slice of memory for it; that's why other requests can still be processed, because they all get their own memory. Node doesn't do this. It runs on a single thread, and if you do anything that is blocking (like synchronous IO), other requests have to wait until it's finished before they can be processed.
Ideally, your imagenesIn should also be asynchronous as well, taking a function as the final parameter. The standard for the function usually follows function(error, data). Error should be null if there was none. fs.rename follows this pattern.
Also the function that calls imagenesIn should ideally handle the server response. This allows the function to be used in other types of cases. What if you don't want to send that specific response on error? What if you don't want to send a response at all? Right now this is a good recipe for accidentally sending headers twice (and getting an error).
If it were me, this is how I would write your function (I didn't test but should give you some direction).
function imagenesIn(callback) {
  var directorios = new Array();
  var origenDir = '';
  var destinoDir = '';
  if (campos.img instanceof Array) {
    function recursion(index) {
      //the case that ends the recursion and returns directories
      if (index >= campos.img.length) {
        callback(null, directorios);
        return;
      }
      origenDir = campos.img[index].path;
      destinoDir = '/uploads/publish/alquiler/' + req.session.passport.user + campos.img[index].name;
      fs.rename(origenDir, process.cwd() + '/public' + destinoDir, function(err) {
        //the case that ends recursion and sends an error
        if (err) {
          callback(err);
          return;
        }
        directorios.push(destinoDir);
        console.dir(directorios);
        recursion(index + 1); // note: index++ would pass the old value and recurse forever
      })
    }
    recursion(0);
  }
  else {
    callback('Campos.img was not an array.');
  }
}
And your code that calls it might look something like this:
imagenesIn(function(err, directories) {
  if (err) {
    res.send(err);
  }
  else {
    //do cool stuff with directories.
  }
});
Also, I wanted to make sure you understood the difference between for( ; ; ) and for(key in object). "For in" iterates through the keys of an object. This works on an array because an array is essentially an object with numeric keys. I could, however, do this:
var array = ['data', 'data'];
array.nonNumericKey = 'otherdata';
If you did for (var i = 0; i < array.length; i++), you would only iterate through the array data. If you used for (key in array), you would also iterate through nonNumericKey. This is why, personally, I only use "for in" on objects that are not arrays.
