I'm trying to write a test that verifies a download works, which requires checking whether the XHR response has the status READY. I created a client function in TestCafe using promises, but it fails when it recurses.
How should I fix my code to handle this situation?
P.S. Many apologies for the newbie questions, I have just started my journey in automation testing.
fixture`Download report works`;

test
    .requestHooks(logger) // connected a request hook; we will wait for the logger's request
    ('I should be able to download PDF report from header of the page', async t => {
        // recursively check if the response status is READY, and then go to assertions
        const waitForDownloadResponseStatus = ClientFunction((log) => {
            return new Promise((resolve, reject) => {
                const waitForStatus = () => {
                    const arrayFromResponse = JSON.parse(log.response.body);
                    const responseStatus = arrayFromResponse.status;
                    if (responseStatus === 'READY') {
                        resolve(responseStatus);
                    } else {
                        waitForStatus();
                    }
                };
                waitForStatus();
            });
        });

        // page objects
        const reportTableRaw = Selector('div.contentcontainer').find('a').withText('April 2019').nth(0);
        const downloadPdfButton = Selector('a.sr-button.sr-methodbutton.btn-export').withText('PDF');

        // actions
        await t
            .navigateTo(url)
            .useRole(admin)
            .click(reportTableRaw) // went to "customise your report layout"
            .click(downloadPdfButton)
            .expect(logger.contains(record => record.response.statusCode === 200))
            .ok(); // checked that there is something in the logger

        const logResponse = logger.requests[0];
        // const arrayFromResponse = JSON.parse(logResponse.response.body);
        // const responseStatus = arrayFromResponse.status;
        console.log(logger.requests);

        const resp = await waitForDownloadResponseStatus(logResponse);
        console.log(resp);
        await t.expect(resp).eql('READY');
    });
When you pass an object as an argument or a dependency to a client function, the function receives a copy of that object. It therefore cannot detect any changes made to the original by external code. In this particular case, the waitForStatus function never reaches its termination condition because it can't see the changes the request hook makes to the log object. The function recurses indefinitely until it consumes all the available stack memory, and then fails with a stack-overflow error.
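The copy semantics can be seen in plain Node, without TestCafe; the JSON round-trip below is only an approximation of how client-function arguments are serialized:

```javascript
// Simulating how a client function receives a deep copy of its argument,
// so mutations made later by external code are invisible to it.
const log = { response: { body: '{"status":"PENDING"}' } };

// TestCafe serializes arguments; a JSON round-trip approximates that copy.
const copyForClientFn = JSON.parse(JSON.stringify(log));

// External code (e.g. the request hook) updates the original object...
log.response.body = '{"status":"READY"}';

// ...but the copy still holds the old value, so a loop polling it never ends.
console.log(JSON.parse(copyForClientFn.response.body).status); // → PENDING
console.log(JSON.parse(log.response.body).status);             // → READY
```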
To avoid this situation, you can check that the response has the READY status by changing the predicate argument of the contains function.
Take a look at the following code:
.expect(logger.contains(record => record.response.statusCode === 200 &&
JSON.parse(record.response.body).status === 'READY'))
.ok({ timeout: 5000 });
Also, you can use the timeout option. It's the time (in milliseconds) an assertion can take to pass before the test fails.
I have a component that I want to cover with some e2e tests. The component takes a URL provided by the user in an input, calls an API after a button click, and returns a shortened version of that URL. The shortened URL is then added to a list below the input, and some data is written to localStorage. I want Cypress to wait for the API response and only then check whether the list item was added. I got this working, but I hardcoded the wait time in the wait() method. How can I achieve this programmatically?
describe("Shortener component", () => {
    it("Should add the list item and data to localStorage", () => {
        cy.visit("http://127.0.0.1:5500"); // Live Server extension address
        cy.get("#url-input").type("https://facebook.com");
        cy.get("#form-submit-button").click();
        // wait for the api response and make sure that the value has been added to localStorage
        cy.wait(40000); // todo - wait for the api response instead of hardcoding the wait time
        const localStorageData = localStorage.getItem("linksData");
        if (JSON.parse(localStorageData)) {
            expect(JSON.parse(localStorageData)[0].inputValue).to.eq(
                "https://facebook.com"
            );
        }
        // check if the new list item with the correct value has been added
        cy.get(".shortener-component__list-item")
            .contains("https://facebook.com")
            .should("be.visible");
        // validation message should not be visible
        cy.get("#validationMesage")
            .contains("Please add a valid link")
            .should("not.be.visible");
    });
});
I tried with intercept(), but I failed and I'm not sure how to make it work. I also saw some similar SE topics on this, but they did not help me.
Any ideas / examples appreciated :)
Thanks!
From the order of events you've given:
- short URL returned
- added to localStorage
- added to list
just change the order of feature testing:
- test the list first - it is the last event, but it has retriable commands (you can increase the timeout)
- then test localStorage - if the UI has the short URL, so will localStorage
cy.contains('.shortener-component__list-item', 'https://facebook.com', { timeout: 40000 })
    .then(() => {
        // nested inside .then() so that the above passes first
        const localStorageData = localStorage.getItem("linksData");
        const linksData = JSON.parse(localStorageData);
        expect(linksData).not.to.eq(undefined);
        expect(linksData[0].inputValue).to.eq("https://facebook.com");
    })
Alternatively, to make use of retry and timeout on the localStorage check,
cy.wrap(localStorage.getItem("linksData"), { timeout: 40000 })
    .should('not.eq', null) // will retry the above until true
    .then(data => JSON.parse(data))
    .should(parsedData => {
        expect(parsedData[0].inputValue).to.eq("https://facebook.com");
    });
I guess you should also start the test with
cy.clearLocalStorage("linksData")
There are examples in the documentation; it only takes some reading and experimentation.
In general, you need three commands: cy.intercept(), .as(), and cy.wait():
cy.intercept(your_url).as('getShortenedUrl');
cy.wait('@getShortenedUrl');
You can also use .then() to access the interception object, e.g. the response:
cy.wait('@getShortenedUrl').then(interception => { });
or you can check something in the response using .its():
cy.wait('@getShortenedUrl').its('response.statusCode').should('eq', 200);
The point is that after cy.wait('@getShortenedUrl'), the response has been received.
Please bear with me, as the question is long and detailed, and I am fairly new to RxJS.
I am attempting to create Amazon S3 browser in Angular which looks like Windows Explorer.
Something like this...
The left list will contain all the root folders (and it will not be a tree view), and when clicked on any root folder, the subfolders and files inside it will be shown in the right-side details view.
I need a new S3 access token for each of the root folders in the left list. I have a backend service that provides one. The token is valid for a certain duration. So the cases in which the current token becomes invalid are:
If the user clicks on some other root folder in left list.
If the token expiry is reached.
This is what I have written to manage the token-expiry condition:
private accessTokenSource: BehaviorSubject<AccessToken | null> = new BehaviorSubject(null);
accessToken$ = this.accessTokenSource.asObservable();

getAccessToken() {
    return this.http.get(`${this.accessTokenEndpoint}`).pipe(
        tap((accessToken) => {
            // set access token in a subject
            this.accessTokenSource.next(accessToken);
        }),
        switchMapTo(timer(55 * 60 * 1000).pipe(
            tap(() => {
                // reset access token in subject since the token is now invalid - expiry case
                this.accessTokenSource.next(null);
            })
        ))
    );
}
// Whoever subscribes to this will fetch the token and start the expiration timer
Since I don't want to expose the access-token fetching logic in the view layer, each of my left-list and details components calls a method getDetails(currentPrefix: string) on the s3Service. This method first checks that the token is valid for calling the S3 API, then calls the listObjects operation and returns the result. Here's what I have so far:
// Checks the validity of the token for the current prefix
checkAccessTokenValidity(currentPrefix: string) {
    let isTokenValid: boolean = true;
    // This uses the access token set in the subject
    // According to me, it will be reset by the timer's tap operation (when it expires)
    const sub = this.accessToken$.subscribe((token) => {
        // Check the token's expiry or usability for the current folder and update isTokenValid accordingly
        if (!token || !currentPrefix.includes(token.rootFolder)) {
            isTokenValid = false;
        }
    });
    sub.unsubscribe();
    return isTokenValid;
}
// Public method to call from the list and details components
getDetails(currentPrefix: string) {
    const isTokenValid = this.checkAccessTokenValidity(currentPrefix);
    if (!isTokenValid) {
        // This will fetch the token and start the TIMER
        this.getAccessToken().subscribe(() => {});
    }
    // I think that this will not work: if getAccessToken takes time,
    // accessToken$ will still be invalid!
    const objectList$ = this.accessToken$.pipe(
        map(token => {
            // S3 listObjects call here, with the current token
        })
    );
}
How do I solve the problem of checking token validity and then waiting for my service to return new valid token in order to call the S3 API? Any help would be really appreciated. This approach may be dead wrong as well, so please feel free to correct me as well.
I'd say that there is no need to create another subscription just to get the current value of a BehaviorSubject.
This means that these lines:
const sub = this.accessToken$.subscribe((token) => {
if(!token || !currentPrefix.includes(token.rootFolder)) {
isTokenValid = false;
}
});
sub.unsubscribe();
could be replaced with
const isTokenValid = !!this.accessTokenSource.value;
As you already mentioned, getAccessToken takes some time, meaning you can't get its result synchronously.
A quick fix would be this:
const tokenValid$ = of(this.accessTokenSource.value);

const tokenInvalid$ = merge(
    // Not interested in the emitted values, as the side effects are produced in `tap()`.
    // With this, we're just subscribing; this way, an HTTP call will be made
    this.getAccessToken().pipe(ignoreElements()),
    // `accessTokenSource` is a `BehaviorSubject` and we don't want its current value,
    // that's why we're skipping it. The next time it emits, it will carry the value returned by `getAccessToken`
    this.accessTokenSource.pipe(skip(1))
);

const objectList$ = iif(() => isTokenValid, tokenValid$, tokenInvalid$);
iif() is used to decide, at subscription time, which observable to subscribe to:
iif(() => booleanValue, subscribeToThisIfTrue, subscribeToThisIfFalse)
This way, when you subscribe to objectList$, it will pick up the proper observable, depending on whether the token is valid or not.
I want to swap a user's profile picture. For this, I have to check the database to see whether a picture has already been saved; if so, it should be deleted. Then the new one should be saved and entered into the database.
Here is a simplified (pseudo) code of that:
async function changePic(user, file) {
    // remove old pic
    if (await database.hasPic(user)) {
        let oldPath = await database.getPicOfUser(user);
        filesystem.remove(oldPath);
    }
    // save new pic
    let path = "some/new/generated/path.png";
    file = await Image.modify(file);
    await Promise.all([
        filesystem.save(path, file),
        database.saveThatUserHasNewPic(user, path)
    ]);
    return "I'm done!";
}
I ran into the following problem with it: if the user calls the API twice in a short time, serious errors occur. The database queries and the functions in between are asynchronous, so the changes from the first API call haven't been applied yet when the second call checks for a profile picture to delete. I'm left with a filesystem.remove request for a file that no longer exists, and an unremoved image in the filesystem.
I would like to handle this safely by synchronizing this critical section of code. I don't want to reject requests just because the server hasn't finished the previous one, and I also want to synchronize per user, so users aren't affected by the actions of other users.
Is there a clean way to achieve this in JavaScript? Some sort of monitor like you know it from Java would be nice.
You could use a library like p-limit to control your concurrency. Use a map to track the active/pending requests for each user. Use their ID (which I assume exists) as the key and the limit instance as the value:
const pLimit = require('p-limit');
const limits = new Map();

function changePic(user, file) {
    async function impl(user, file) {
        // your implementation from above
    }
    const { id } = user; // or similar to distinguish them
    if (!limits.has(id)) {
        limits.set(id, pLimit(1)); // only one active request per user
    }
    const limit = limits.get(id);
    return limit(impl, user, file); // schedule impl for execution
}
// TODO clean up limits to prevent memory leak?
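If you'd rather avoid a dependency, the same per-user serialization can be hand-rolled by chaining promises per user ID. This is a sketch; `withUserLock` and `chains` are names invented here, not part of any library:

```javascript
// Per-user mutex via promise chaining: each user's tasks run strictly in order.
const chains = new Map();

function withUserLock(userId, task) {
  const prev = chains.get(userId) || Promise.resolve();
  // Chain the new task after the previous one, swallowing prior errors
  // so one failed request doesn't block the queue forever.
  const next = prev.catch(() => {}).then(task);
  chains.set(userId, next);
  return next;
}

// Usage sketch: two rapid calls for the same user run sequentially.
async function demo() {
  const order = [];
  await Promise.all([
    withUserLock('u1', async () => {
      order.push('first start');
      await new Promise(r => setTimeout(r, 50));
      order.push('first end');
    }),
    withUserLock('u1', async () => {
      order.push('second start');
    }),
  ]);
  return order;
}

demo().then(o => console.log(o.join(', ')));
// → first start, first end, second start
```

Like the p-limit version, this still needs the map cleaned up eventually to avoid a leak.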
Hello, I am trying to access an HTTP request triggered by a React component from my JavaScript code, in order to test the URL. Can anybody help, please?
Screenshot of the HTTP request I want to access
Here is an example of the unit test I'm running. I want to add another unit test that checks whether the HTTP request is called correctly.
.add('Search with "Occasion" keyword', () => {
    const result = search('Iphone Occasion');
    specs(() =>
        describe('SEO Navigation Links', () => {
            it('Should not contain "Occasion" keyword', () => {
                const searchValue = result.find(Search).node.state.value.toLowerCase();
                const contains = searchValue.includes('occasion');
                expect(contains).toBeTruthy();
            });
        }),
    );
    return result;
});
The best I can recommend is to "monkey patch" the fetch function (if the component uses it):
const realFetch = fetch;
fetch = (...args) => realFetch(...args).then(doStuff);
This creates a "middleware": when the website tries to call the fetch function, it will call yours instead.
Make sure you keep a reference to the original function to avoid infinite recursion.
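A runnable sketch of the pattern, using a local `fetchImpl` stand-in (invented here) instead of the browser's global fetch so it runs anywhere:

```javascript
// Plain Node sketch of the monkey-patch idea, using a fake fetch so it runs anywhere.
let fetchImpl = async (url) => ({ url, status: 200 }); // stand-in for the real fetch

const calls = [];
const realFetch = fetchImpl;
// Wrap the original: record the URL, then delegate to the saved reference
// (saving it first is what prevents infinite recursion).
fetchImpl = (...args) => {
  calls.push(args[0]);
  return realFetch(...args);
};

fetchImpl('https://example.com/search').then(res => {
  console.log(calls[0], res.status); // → https://example.com/search 200
});
```

A test can then assert on the recorded URLs in `calls` instead of reaching into the network layer.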
If you install a Service worker, you can run some code client-side on all the requests your page makes. I'm not sure what you need to do in order to test the code you are talking about, but a Service Worker could report the request back to your own test code on the page, or respond with whatever content you want, or modify the server's response.
Goal: the front end of the application allows users to select files from their local machines and send the file names to a server. The server then matches those file names against files located on the server and returns a list of all matching files.
Issue: this works great if a user selects fewer than a few hundred files; otherwise it can cause long response times. I do not want to limit the number of files a user can select, and I don't want to worry about the HTTP requests timing out on the front end.
Sample code so far:
//html on front-end to collect file information
<div>
    <input (change)="add_files($event)" type="file" multiple>
</div>

//function called from the front-end, which then calls the profile_service add_files function
//it passes along the $event object
add_files($event){
    this.profile_service.add_files($event).subscribe(
        data => console.log('request returned'),
        err => console.error(err),
        () => {} //update view function
    );
}
//The following two functions are in my profile_service, which is dependency-injected into my component
//formats the event object for the eventual query
add_files(event_obj){
    let file_arr = [];
    let file_obj = event_obj.target.files;
    for(let key in file_obj){
        if (file_obj.hasOwnProperty(key)){
            file_arr.push(file_obj[key]['name']);
        }
    }
    let query_obj = {files: file_arr};
    return this.save_files(query_obj);
}
//here is where the actual request to the back-end is made
save_files(query_obj){
    let payload = JSON.stringify(query_obj);
    let headers = new Headers();
    headers.append('Content-Type', 'application/json');
    return this.http.post('https://some_url/api/1.0/collection', payload, {headers: headers})
        .map((res: Response) => res.json());
}
Possible Solutions:
Process requests in batches: rewrite the code so that the profile service is only called with 25 files at a time, and upon each response call it again with the next 25 files. If this is the best solution, is there an elegant way to do this with observables? If not, I will use recursive callbacks, which should work fine.
Have the endpoint return a generic response immediately, like "file matches are being uploaded and saved to your profile". Since all the matching files are persisted to a DB on the backend, this would work, and the front end could query the DB every so often to get the current list of matching files. This seems ugly, but I figured I'd throw it out there.
Any other solutions are welcome. It would be great to get a best practice for handling this type of long-running query with Angular 2/observables in an elegant way.
I would recommend that you break up the number of files you search for into manageable batches and then process more as results are returned, i.e. solution #1. The following is untested, but I think it's a rather elegant way of accomplishing this:
add_files(event_obj){
    let file_arr = [];
    let file_obj = event_obj.target.files;
    for(let key in file_obj){
        if (file_obj.hasOwnProperty(key)){
            file_arr.push(file_obj[key]['name']);
        }
    }
    let self = this;
    let bufferedFiles = Observable.from(file_arr)
        .bufferCount(25); //Nice round number that you could play with
    return bufferedFiles
        //concatMap will make sure that each of your requests is not executed
        //until the previous one completes. Then all the data is merged into a single output
        .concatMap((arr) => {
            let payload = JSON.stringify({files: arr});
            let headers = new Headers();
            headers.append('Content-Type', 'application/json');
            //Use defer because http.post is eager;
            //this makes it only execute after subscription
            return Observable.defer(() =>
                self.http.post('https://some_url/api/1.0/collection', payload, {headers: headers}));
        }, resp => resp.json());
}
concatMap will keep your server from executing more than the size of your buffer at a time, by preventing new requests until the previous one has returned. You could also use mergeMap if you wanted them all to execute in parallel, but it seems the server is the resource limitation in this case, if I am not mistaken.
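The same batching idea can be expressed without RxJS as a plain promise loop, which may help clarify what bufferCount plus concatMap are doing; `sendBatch` below is a stand-in for the real HTTP call:

```javascript
// Split an array into fixed-size chunks and process them sequentially,
// mirroring bufferCount(25) + concatMap.
function chunk(arr, size) {
  const out = [];
  for (let i = 0; i < arr.length; i += size) out.push(arr.slice(i, i + size));
  return out;
}

async function processInBatches(files, size, sendBatch) {
  const results = [];
  for (const batch of chunk(files, size)) {
    // Each batch waits for the previous one to finish, like concatMap.
    results.push(...await sendBatch(batch));
  }
  return results;
}

// Usage sketch with a fake sendBatch that echoes its input.
processInBatches(['a', 'b', 'c'], 2, async (b) => b)
  .then(r => console.log(r)); // → [ 'a', 'b', 'c' ]
```

Swapping the sequential loop for `Promise.all` over the chunks would give the mergeMap-style parallel behavior instead.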
I'd suggest using websocket connections instead, because they don't time out.
See also
- https://www.npmjs.com/package/angular2-websocket
- http://mmrath.com/post/websockets-with-angular2-and-spring-boot/
- http://www.html5rocks.com/de/tutorials/websockets/basics/
An alternative approach would be polling, where the client makes repeated requests in a defined interval to get the current processing state from the server.
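A minimal sketch of the polling alternative, assuming the server exposes some way to report processing state; `getState` below stands in for that request:

```javascript
// Polling sketch: ask the server for the processing state every `intervalMs`
// until it reports done, or give up after `maxAttempts` checks.
async function pollUntilDone(getState, intervalMs, maxAttempts) {
  for (let i = 0; i < maxAttempts; i++) {
    const state = await getState();
    if (state === 'done') return state;
    await new Promise(r => setTimeout(r, intervalMs));
  }
  throw new Error('timed out');
}

// Usage: a fake server that finishes on the third check.
let checks = 0;
pollUntilDone(async () => (++checks >= 3 ? 'done' : 'processing'), 10, 10)
  .then(s => console.log(s, checks)); // → done 3
```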
To send multiple requests and wait for all of them to complete:
getAll(urls: any[]): Observable<any[]> {
    let observables = [];
    for (var i = 0; i < urls.length; i++) {
        observables.push(this.http.get(urls[i]));
    }
    return Observable.forkJoin(observables);
}

someMethod(server: string) {
    let urls = [
        `${server}/fileService?somedata=a`,
        `${server}/fileService?somedata=b`,
        `${server}/fileService?somedata=c`];
    this.getAll(urls).subscribe(
        (value) => processValue(value),
        (err) => processError(err),
        () => onDone());
}