I'm writing an automation program for a web application. I access the application through a JavaScript API and have wrapper functions with custom assertions that currently just write their output to a table in an HTML page.
Now I need to get that output into my Hudson (https://hudson.dev.java.net/) automation, where I have a lot of flexibility when it comes to arranging, sharing and presenting the results.
When I wrote NUnit tests, the Hudson integration was impeccable. I saw there is a thing called JsUnit, but it no longer seems to be actively maintained, so maybe I shouldn't spend too much time learning it?
I have seen that tools like Firebug can output JavaScript results to a console, though I don't know where to go from there. The console output seems to stay inside Firefox and go no further.
Any help or tips are most welcome.
Thanks!
/ Jakob
If I understand correctly, you want your Hudson build to run a test of your Web Application which is set up and running somewhere else. (This gets a little harder if you're also building your Web Application and want to set it up for a test run all inside Hudson.)
The easy option: As one of your build steps, retrieve the HTML page with your output and tell Hudson that the page is a build artifact. That way you can look at the test output manually.
Somewhat harder: change your test output (or pass a parameter to specify the format) to match the XML format used by NUnit -- see example XML output. This is a direct link to an XML file and may not display well in your browser; try viewing source or saving as text.
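To make that concrete, here's a rough Python sketch of the general idea (not the exact NUnit schema - the element names below follow the JUnit convention, which Hudson can also publish natively via its JUnit test report option). How you collect the results from your HTML report page is up to you; the tuples below are just placeholders:

import xml.etree.ElementTree as ET

def write_test_report(results, out_path="js-test-results.xml"):
    # results: a list of (test_name, passed, message) tuples collected from
    # your wrapper functions (or parsed back out of the HTML report page)
    failures = sum(1 for _, passed, _ in results if not passed)
    suite = ET.Element("testsuite", name="webapp-js-tests",
                       tests=str(len(results)), failures=str(failures))
    for name, passed, message in results:
        case = ET.SubElement(suite, "testcase", classname="webapp", name=name)
        if not passed:
            ET.SubElement(case, "failure", message=message).text = message
    ET.ElementTree(suite).write(out_path, encoding="utf-8", xml_declaration=True)

# Example usage:
# write_test_report([("login form validates", True, ""),
#                    ("totals add up", False, "expected 10, got NaN")])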
Update: On re-reading your question, it wasn't clear to me whether you were interested solely in Hudson integration (which my original answer assumed), or in other possibilities for testing frameworks.
Depending on what you want to test:
- You might look at testing your web application with Selenium. I know there's a Hudson plugin for Selenium, but I've also noticed several questions here recently describing problems with Selenium + Hudson. I don't have any experience with the combination myself.
- There are lots of JavaScript testing frameworks with different capabilities.
I am a beginner in Python 3.6, using BeautifulSoup to perform web scraping.
Once I have run requests.get() and prettified the output, I notice that the webpage does not return the values I see in the browser; it seems to contain only code related to those values.
Here is the link to the webpage in specific:
http://www.tennisabstract.com/cgi-bin/wplayer.cgi?p=AngeliqueKerber&f=r1
I am trying to extract the hand which the player uses in tennis (it was highlighted in yellow in a screenshot of the page).
I would also appreciate feedback on how this question is laid out; if it is confusing (or non-standard), such feedback will help me ask questions more appropriately in the future.
There are two options (mostly).
The first one is easier but slower - browser emulation. You simply try to use the site as a normal user would - with a browser. There is a Python module for this task - selenium. It uses a specific webdriver to drive the browser, and there are plenty of webdrivers available (for example chromedriver for Chrome). There are also headless solutions (PhantomJS, for example).
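To give a rough idea, here's a minimal sketch of that route (the "Plays:" pattern is a guess - inspect the rendered page and adjust it):

import re
import time
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome()   # needs chromedriver on your PATH
try:
    driver.get("http://www.tennisabstract.com/cgi-bin/wplayer.cgi?p=AngeliqueKerber&f=r1")
    time.sleep(3)   # crude wait for the page's JS to fill in the values
    text = BeautifulSoup(driver.page_source, "html.parser").get_text(" ", strip=True)
    hand = re.search(r"Plays:\s*([A-Za-z-]+)", text)   # guessed label, adjust as needed
    print(hand.group(1) if hand else "pattern not found - inspect the rendered page")
finally:
    driver.quit()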
The other way is smarter and faster - XMLHttpRequests (XHRs). Basically, the site uses some hidden API to fetch its info via JS, and you try to find out exactly how. In most cases you can use your browser's Inspect Element tools: switch to the Network tab, clear it, trigger the data load again, and then filter to show only XHRs. Such a request usually returns JSON that converts easily into a Python dictionary via the json() method of the Response object.
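A sketch of what that usually ends up looking like - the endpoint below is a placeholder for whatever request you actually see carrying the data in the Network tab (it may also turn out to be a JS file rather than JSON):

import requests

endpoint = "http://www.tennisabstract.com/placeholder/endpoint"   # placeholder
resp = requests.get(endpoint, headers={"User-Agent": "Mozilla/5.0"})
resp.raise_for_status()
data = resp.json()   # works when the endpoint really does return JSON
print(data)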
Here's a really good GitHub repository that someone made for this website - practically an API. You can fork it, change/edit a few things, and then use it the way you want to.
HERE
It uses Selenium webdriver but it's high quality.
I'm writing a UI for my R script, which asks the user for the names of some organisms and the location of a folder, using JavaScript/HTML that will always run locally (never hosted).
At the moment, I have just that: a couple of text boxes that take input and pass it to an executable R script. Originally this UI was meant to be a very user-friendly option, but slowly I've realized that some nifty tricks can be added, such as a text box that completes the word for the user (so if the user misspells the name of an organism, the UI corrects the input based on the files uploaded; this would come from a list-of-organisms text file that R generates as soon as the files have been added).
Is there a way to make this more efficient? For example, retrieving plots from R (as .pngs) and updating my local webpage, or sharing a log file between R and the UI (mind you, I am aware of the potential file I/O errors) - but this is just for the sake of brainstorming.
I'm aware of Shiny, but what I would like is a simple local UI, as I will be dealing with big data (average ~ 1 gigabyte worth of files that my script will process).
Another way to ask my question that is more to the point:
Here's an example of integrating PHP and R: http://www.r-bloggers.com/integrating-php-and-r/
I am looking to create something similar with javascript/css/html/jquery etc.
Thanks
You could definitely use Node.js (nodejs.org) for that. Take a look at https://github.com/elijah/r-node and r-node - confusingly enough, these are two different projects with the same name. More info on the latter here: squirelove.net/r-node/doku.php
In recent years JavaScript has become one of the fastest programming languages. In one case I know of, JavaScript is faster than C++. See: benchmarksgame.alioth.debian.org/u32/performance.php?test=regexdna
Bear in mind, though, that memory is very difficult to manage in JavaScript, so you should run some sort of memory leak detection program on your code, if you plan to create long running processes.
E.g. memwatch (npmjs.org/package/memwatch) or nodeheap.
Good luck with your endeavors!
PS. sorry for the lack of real links. I'm apparently not allowed to post more than 2 links.
Why wouldn't you be able to use Shiny locally? You design your app on your computer and run it locally with runApp('myapp') from an R prompt. Unless you are experienced with JavaScript, I would give Shiny another look: http://www.rstudio.com/shiny/
The example you linked to can be very easily implemented using Shiny. See link below for a tutorial on how to write the app:
http://rstudio.github.com/shiny/tutorial/#hello-shiny
To run that example locally:
install.packages('shiny')
shiny::runExample('01_hello')
I have a similar case, and Shiny looked like a good idea to me. However, after taking the first few steps, I am no longer sure about it. Note that most of the examples use Shiny to display results; when you get into editing fields and using a database, things can become messy - the reactiveness gets in the way once fields can be changed both by the program and by the user.
As an example, see https://gist.github.com/dmenne/4721235/edit. The main problem with the current state of Shiny is that you must use the dynamic UI for this type of work, which kills any separation of ui and server, because you have to create the UI elements in the server.
Shiny is a great idea, but for anything larger with interaction it is too early right now. Knowing that the amazing RStudio team is behind it, I am sure the stress should be on 'now'.
What else is there around to make user interfaces for R? TclTk makes me shudder. I work in C# a lot, and I had been using R(D)COM for interfacing some years ago, but gave up after installation and licensing problems. There is R.DOTNet, which works better now; it is the most hassle-free installation-wise, but it is not a very active project and tends to crash. Interfacing via RServe/RServeCLI is stable, but too difficult to install on Windows, for example on hospital computers with their strict security policies.
And there is Qt. With the active RInside community, it would be a good choice, and the interface is great. I wish, however, that my programming skills were at the level of the RStudio guys. The fact that even Dirk is only at the proof-of-concept level (using RInside with Qt on Windows) is not encouraging.
So, you are using a bunch of JavaScript libraries on a website. Your JavaScript code calls several of their APIs, but every once in a while, after an upgrade, one of the APIs changes and your code breaks without you knowing it.
How do you prevent this from happening?
I'm mostly interested in javascript, but any answer regarding dynamically typed languages would be valuable.
I don't think there's much you can do. You always run a risk when updating any piece of software. The best advice is to:
1. Read and understand the documentation about upgrading.
2. Upgrade in your test environment.
3. TEST.
4. Roll out live when you are happy there are no regressions.
You should consider building unit tests using tools such as JsUnit and Selenium. As long as your code passes the tests, you're good to go. If some tests fail, you would quickly identify what needs to be fixed.
As an example of a suite of Selenium tests, you can check the Google Maps API Tests, which you can download and run locally in your browser.
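For illustration, here is a rough sketch (using the Selenium WebDriver Python bindings) of the kind of smoke test you could run after every library upgrade; the URL, the element id and the jQuery check are placeholders for your own page and libraries:

import unittest
from selenium import webdriver
from selenium.webdriver.common.by import By

class LibraryUpgradeSmokeTest(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()   # needs geckodriver on your PATH

    def test_page_still_works_after_upgrade(self):
        self.driver.get("http://localhost:8000/index.html")   # placeholder URL
        # The library API we depend on should still be there...
        has_jquery = self.driver.execute_script("return typeof window.jQuery === 'function';")
        self.assertTrue(has_jquery, "jQuery is missing or was renamed")
        # ...and the widget it builds should still appear on the page.
        self.assertTrue(self.driver.find_elements(By.ID, "search-box"),   # placeholder id
                        "the search box was not rendered")

    def tearDown(self):
        self.driver.quit()

if __name__ == "__main__":
    unittest.main()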
Well there are two options:
- Don't upgrade.
- Retest everything after you upgrade.
There is no way to guarantee that an upgrade won't break something. Even if you have something that could check the underlying API and make sure it still all lines up, you can't be certain that the underlying functionality is the same.
I'm looking for a way to write a non-GUI bot using the Mozilla framework. The bot should be able to work like a normal browser (automatically download relevant JS files, make XMLHttpRequests, run JS operations, modify the DOM), except that no GUI is needed.
I wonder if it is possible to build XULRunner without X and GTK/KDE (without any GUI dependencies), as I will run the bot on a FreeBSD 6.4 server.
It may sound a bit weird, but I need a bot with the capacity to operate like a browser - run JS, modify the DOM, submit forms - in a non-GUI environment.
I've looked into other browsers such as Lynx, Links, Hulahop, Chrome's V8 engine and WebKit's JavaScriptCore, but have yet to find the desired result.
It's part of a school project - a thesis. We will use it to observe price changes of budget airlines, and after a year of data collection we need to deduce pricing strategy and customer behavior. It is a serious Final Year Project.
Any hint or help is greatly appreciated! Thank you in advance!
Regards.
You should be able to make progress with selenium. It's a record/test/play tool but its core is manipulating the DOM.
Update from Grundlefleck's comment: As for launching the actual tests there is selenium remote-control, which allows you to write your tests in Java, Ruby, plain HTML and other possible drivers.
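To give a rough idea, this is roughly what driving a page through the old Selenium RC Python client looks like (the site and locator are placeholders). Note that RC still launches a real browser, so on a GUI-less server it is usually run under a virtual framebuffer such as Xvfb:

from selenium import selenium   # the old Selenium RC Python client

sel = selenium("localhost", 4444, "*firefox", "http://www.example-airline.com/")   # placeholder site
sel.start()
sel.open("/fares?from=AAA&to=BBB")        # placeholder path
sel.wait_for_page_to_load("30000")
print(sel.get_text("css=span.price"))     # placeholder locator
sel.stop()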
Yes, it is possible (but it might very well require LOTS of code changes).
No, I do not know any of the details.
I would not recommend this approach for your purposes. From your comment, it sounds like you are trying to scrape webpages. If you really need to use JavaScript, you can use a stand-alone JavaScript-engine (Mozilla's is available here). Otherwise, I would use Beautiful Soup with Python or Twill. You might also want to read this question.
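If the prices turn out to be present in the plain HTML (no JavaScript needed), the Beautiful Soup route is only a few lines; the URL and selector below are placeholders:

import requests
from bs4 import BeautifulSoup

url = "http://www.example-airline.com/fares"   # placeholder URL
html = requests.get(url, timeout=30).text
soup = BeautifulSoup(html, "html.parser")
for tag in soup.select("span.price"):          # placeholder selector
    print(tag.get_text(strip=True))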
My application does a lot of calculation in JavaScript, depending on how and when the user acts on it. The project prints out valuable information (through console calls) about how these calculations are going, so we can easily spot any NaNs creeping in.
We are planning to integrate Selenium (RC with Python) to test our project, and if we could get the console output messages into the Python test case, we could identify any NaNs or even any miscalculations.
So, is there a way that Selenium can absorb these outputs (preferably in a console-less environment)?
If not, I would like to know whether I can divert the console calls, maybe by rebinding the console variable to something else, so that Selenium can get that output and notify the Python side. Or, if not via the console, is there any other way I can achieve this?
I know Selenium has commands like waitForElementPresent etc., but I don't want to show these intermediate calculations in the application - or is that the only way?
Any help appreciated.
Thank you.
There is a getEval() call (get_eval in the Python client) that returns the result of a JavaScript expression evaluated against the page. If you have the JavaScript function on the page then you can do something like
self.assertEqual(selenium.get_eval("isNaN(this.browserbot.getUserWindow().functionUnderTest())"), "false", "There was a NaN detected")
The browserbot access allows you to call the JavaScript functions on the page and get the result. isNaN() will return false if the function returns a proper number.
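Building on that, here is a rough sketch of the console-rebinding idea from the question, using the old Selenium RC Python client (the application URL is a placeholder, and you would drive the UI in the test body so the calculations actually run):

import unittest
from selenium import selenium   # old Selenium RC Python client

class CalculationLogTest(unittest.TestCase):
    def setUp(self):
        self.sel = selenium("localhost", 4444, "*firefox", "http://localhost:8080/")
        self.sel.start()
        self.sel.open("/app")   # placeholder path to the application
        # Rebind console.log in the application window so messages pile up
        # in a buffer we can read back from Python later.
        self.sel.get_eval(
            "var w = this.browserbot.getUserWindow();"
            "w.__logBuffer = [];"
            "w.console = w.console || {};"
            "w.console.log = function (msg) { w.__logBuffer.push(String(msg)); };")

    def test_no_nan_in_console_output(self):
        # ... drive the UI here so the calculations actually run ...
        log = self.sel.get_eval(
            "this.browserbot.getUserWindow().__logBuffer.join('\\n')")
        self.assertTrue("NaN" not in log, "A NaN showed up in the console output")

    def tearDown(self):
        self.sel.stop()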
If you are purely testing that the JavaScript functions are performing the correct calculations with the given inputs, I would suggest separating your JavaScript from your page and using a JavaScript testing framework to test the functionality. Testing low-level code using Selenium is a lot of unnecessary overhead. If you're going against the fully rendered page, your application would have to be running on a server, which should not be a dependency of testing raw JavaScript.
We recently converted our application from jsUnit to YUI Test and it has been promising so far. We run about 150 tests in both Firefox and IE in less than three minutes. Our testing still isn't ideal - we still test a lot of JavaScript the hard way using Selenium. However, moving some of the UI tests to YUI Test has saved us a lot of time in our Continuous Integration environment.