I've got a page with some AJAX code that I'm trying to build a spec for using Capybara. This test works:
context 'from the JobSpec#index page' do
  scenario 'clicks the Run now link' do
    visit job_specs_path
    within(find('tr', text: @job_spec.name)) { click_link 'run_now' }
    visit current_path
    expect(find('tr', text: @job_spec.name)).to have_content(/(running|success)/)
  end
end
After clicking the 'run_now' link, a job is launched and the user is redirected to the launched_jobs_path, which has a Datatable that uses some AJAX to pull currently running jobs. This test passes. However, I wanted to extend the test to check that the job wasn't already running before clicking the 'run_now' link (which would give me a false positive). I started playing around with it, but then I noticed that even simply putting visit launched_jobs_path before visit job_specs_path causes the test to fail with this error:
Failure/Error: expect(find('tr', text: @job_spec.name)).to have_content(/(running|success)/)
Capybara::ElementNotFound:
  Unable to find css "tr" with text "job_spec_collection-1"
I'm fairly certain that this is an issue with the JavaScript not running at the right time, but I don't quite know how to resolve it. Any ideas? This is Rails 4.1 with Capybara 2.4 and capybara-webkit 1.3.
Thanks!
Update
In the parent context of the spec, I do have :js => true
feature 'User runs a JobSpec now', :js => true do
  before do
    Capybara.current_driver = :webkit
    @job_spec = FactoryGirl.create(:job_spec, :with_client_spaces)
  end
  after { Capybara.use_default_driver }

  context 'from the JobSpec#index page' do
    ...
  end
end
I think it might be as simple as adding :js => true to your scenario header so that Capybara knows to launch webkit/selenium and exercise the JS/AJAX:
scenario 'clicks the Run now link', :js => true do
Hope that helps!
UPDATE
Well, here's one more idea. You could try adding this to your rails_helper or spec_helper file to set the :js driver:
Capybara.javascript_driver = :webkit
Do you have other :js tests that are working properly with :webkit? Have you tried using save_and_open_page to see if the element you are looking for is actually on the page and Capybara just isn't finding it?
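If you do try save_and_open_page, a minimal debugging sketch (it assumes the launchy gem is installed so the saved page opens automatically):
scenario 'clicks the Run now link', :js => true do
  visit job_specs_path
  save_and_open_page   # dumps the rendered HTML to disk and opens it for inspection
end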
Ok, pretty sure I've got something that will at least work for now. If anyone has more elegant solutions I'd love to hear them.
I think the issue is in the complexity of what happens in the 'run_now' controller. It adds a job to a Rufus scheduler process, which then has to start the job (it's supposed to start immediately). Only after the launched job is running via Rufus does an entry in the launched_jobs table get created.
While developing, I would see the job appear as soon as I clicked 'Run now'. However, I'm guessing that WebKit is so fast that the job hasn't been launched by the time the test gets to the launched_jobs#index page. The extra visit current_path seemed to help at first because it refreshed the page, but it only worked occasionally. Wrapping the last two steps in a synchronize block seems to work reliably (I've run it about 20 times now without any failures):
context 'from the JobSpec#index page' do
  scenario 'clicks the Run now link' do
    # First make sure that the job isn't already running
    visit launched_jobs_path
    expect(page).to have_no_selector('tr', text: @job_spec.name)

    # Now launch the job
    visit job_specs_path
    within(find('tr', text: @job_spec.name)) { click_link 'run_now' }
    page.document.synchronize do
      visit current_path
      expect(find('tr', text: @job_spec.name)).to have_content(/(running|success)/)
    end
  end
end
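If the launched_jobs Datatable keeps polling via AJAX, a possibly simpler variant is to lean on Capybara's retrying matchers and give them a longer per-call wait (a sketch, assuming your Capybara version supports the :wait option):
# Sketch: let Capybara retry the matcher instead of wrapping visits in synchronize.
visit launched_jobs_path
expect(page).to have_selector('tr', text: @job_spec.name, wait: 30)
expect(find('tr', text: @job_spec.name)).to have_content(/(running|success)/)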
Update:
Please see the answer noted below, as the problem ultimately had nothing to do with jQuery.
=============
Issue:
I pass an object to jQuery to serialize into a query string that becomes part of a POST request to a server, and on many occasions the serialized data is different from the data I sent.
An example:
The JavaScript code that implements the server POST request:
function send_data(gpg_data) {
    var query_string;
    query_string = '?' + $.param(gpg_data, traditional = true);
    console.log('gpg_data =', gpg_data);
    console.log('query_string =', query_string);
    $.post(server_address + query_string);
    return;
}
This is the structure sent to the jQuery param() function (copied from the browser console in developer mode):
gpg_data =
{controller_status: 'Connected', motion_state: 'Stopped', angle_dir: 'Stopped', time_stamp: 21442, x_axis: 0, …}
angle_dir: "Stopped"
controller_status: "Connected"
force: 0
head_enable: 0
head_x_axis: 0
head_y_axis: 0
motion_state: "Stopped"
time_stamp: 21490
trigger_1: 0
trigger_2: 0
x_axis: 0
y_axis: "0.00"
. . . and the returned "query string" was:
query_string = ?controller_status=Connected&motion_state=Stopped&angle_dir=Stopped&time_stamp=21282&x_axis=0&y_axis=0.00&head_x_axis=0&head_y_axis=0&force=0&trigger_1=1&trigger_2=1&head_enable=0
The data received by the server is:
ImmutableMultiDict([('controller_status', 'Connected'), ('motion_state', 'Stopped'), ('angle_dir', 'Stopped'), ('time_stamp', '21282'), ('x_axis', '0'), ('y_axis', '0.00'), ('head_x_axis', '0'), ('head_y_axis', '0'), ('force', '0'), ('trigger_1', '1'), ('trigger_2', '1'), ('head_enable', '0')])
For example, note that "trigger_1" comes through as 1 even though the value being sent was 0.
I have tried passing "traditional = true" to revert to an earlier style of query handling, as some articles suggested, but that did not work. I tried this with jQuery 3.2 and 3.6.
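As an aside, jQuery.param takes the traditional flag as a positional second argument, so the call can be written without creating an accidental global named traditional:
// Equivalent serialization call; true enables the traditional (shallow) encoding.
var query_string = '?' + $.param(gpg_data, true);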
I am not sure exactly how jQuery manages to munge the data, so I have no idea where to look.
I have looked at my script and at the unpacked jQuery source, and I can make no sense of why or how it does what it does.
Any help understanding this would be appreciated.
P.S.
Web searches on "troubleshooting jQuery" returned very complex replies that had more to do with editing e-commerce web pages with fancy buttons and logins than with simply serializing data.
P.P.S.
I am tempted to just chuck jQuery and write my own serialization routine. (grrrr!)
===================
Update:
As requested, a link to the browser-side context.
To run: unpack the zip file in a folder somewhere and attach an analog joystick/gamepad to any USB port, then launch index.html in a local browser. Note that a purely digital gamepad - with buttons only or with a joystick that acts like four buttons - won't work.
You will want to try moving joystick axes 1 and 2 (programmatically, axes 0 and 1) and use the first (0th) trigger button.
You will get a zillion CORS errors and it will complain bitterly that it cannot reach the server, but the server-side context requires a GoPiGo-3 robot running GoPiGo O/S 3.0.1, so I did not include it.
Note: This does not work in Firefox, as Firefox absolutely requires a "secure context" to use the Gamepad API. It does work in the current version of Chrome (Version 97.0.4692.99 (Official Build) (64-bit)), but throws warnings about requiring a secure context.
Please also note that I have made every attempt I know of to troubleshoot the offending JavaScript, but I have not figured out how to debug code that depends on real-time event handling in a browser, despite continuous searching and effort. Any advice on how to do this would be appreciated!
======================
Update:
Researching how to debug JavaScript in Chrome turned up an interesting tidbit:
Including the line // @ts-check as the first line of the JavaScript file turns on additional type checking. Most of the resulting warnings were simply a matter of adding "var" to the beginning of variable declarations.
However. . . .
There was one warning it raised about these lines:
gopigo3_joystick.x_axis = Number.parseFloat((jsdata.axes[0]).toFixed(2));
gopigo3_joystick.y_axis = Number.parseFloat(jsdata.axes[1]).toFixed(2);
It said I could not assign a string to gopigo3_joystick.y_axis (or something like that), and I was scratching my head: that was one of the pesky problems I was trying to solve!
If you look closely at that second line, you will notice I forgot a pair of parentheses; the second line should look like this:
gopigo3_joystick.y_axis = Number.parseFloat((jsdata.axes[1]).toFixed(2));
Problem solved - at least with respect to that problem.
I figured it out and it had nothing to do with jquery.
Apparently two things are true:
The state of the gpg_data object's structure is "computed" (a snapshot is taken) the first time the JavaScript engine sees the structure, and that is the state that gets saved, even though the values may change later on. In other words, the value displayed is likely totally bogus.
Note: This may only be true for Chrome. Previous experiments with Firefox showed that these structures were updated each time they were encountered and the values seen in the console were valid. Since Firefox now absolutely requires a secure context to use the gamepad API, I could not use Firefox for debugging.
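One workaround is to log a deep copy, so the console shows the values exactly as they were at the moment of logging (this assumes gpg_data is plain data with no functions or cyclic references):
// Log a point-in-time copy instead of a live reference.
console.log('gpg_data snapshot =', JSON.parse(JSON.stringify(gpg_data)));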
I was trying to be "too clever". Given the following code snippet:
function is_something_happening(old_time, gopigo3_joystick) {
    if (gopigo3_joystick.trigger_1 == 1 || gopigo3_joystick.head_enable == 1) {
        if (old_time != Number.parseFloat((gopigo3_joystick.time_stamp).toFixed(0))) {
            send_data(gopigo3_joystick);
            old_time = gopigo3_joystick.time_stamp;
        }
    }
    return;
}
The idea behind this particular construction was to determine whether "something interesting" is happening, where "something interesting" is defined as:
A keypress (handled separately)
A joystick movement while either the primary trigger or the pinky trigger is pressed.
Movement without any trigger pressed is ignored, so that if the user accidentally bumps the joystick, the robot doesn't go running around.
Therefore the joystick data only gets updated while the trigger is pressed. In other words, trigger "release" events - where the trigger goes back to 0 - are never recorded.
The combination of these two things - Chrome taking a "snapshot" of object values once and once only (or at least not keeping them current) - and the trigger value persisting led me to believe that jQuery was the problem, since the values appeared to be different on each side of the jQuery call.
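For what it's worth, one possible way to also catch the release is to remember the previous trigger state and treat a 1 -> 0 transition as "something interesting" too; a rough sketch (prev_trigger_1 and trigger_released are illustrative names, not part of the original code):
// Sketch only: detect trigger-release transitions.
var prev_trigger_1 = 0;  // hypothetical module-level state

function trigger_released(gopigo3_joystick) {
    var released = (prev_trigger_1 == 1 && gopigo3_joystick.trigger_1 == 0);
    prev_trigger_1 = gopigo3_joystick.trigger_1;
    return released;
}

// Inside the polling logic, send an update on release as well:
// if (trigger_released(gopigo3_joystick)) { send_data(gopigo3_joystick); }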
I'm using Capybara, the selenium-webdriver gem, and chromedriver to drive my JavaScript-enabled tests.
The problem is that about 50% of our builds fail due to a Net::ReadTimeout error. At first this was manifesting as a 'could not find element' error, but after I upped Capybara's default max wait time to 30 seconds, I started seeing the timeout.
Examining the screenshots taken when the timeout happens, the test is stuck on a 'Successfully logged in' modal that we show briefly before calling the JavaScript function location.reload() to reload the page.
I've run the test locally and can sometimes reproduce it, also seemingly at random. Sometimes it zips by this modal and does the reload so fast you can barely see it, and other times it just hangs forever.
I don't think it's an asset compilation issue, since the site has already loaded by that point in order for the user to reach the login form.
Wondering if anyone has seen this before and knows a solution.
The specific code:
visit login_path

page.within '#sign-in-pane__body' do
  fill_in 'Email', with: user.email
  click_button 'Submit'
end

expect(page).to have_content 'Enter Password'

page.within '#sign-in-pane__body' do
  fill_in 'Password', with: user.password
  click_button 'Submit'
end

expect(page).to have_text 'Home page landing text'
The hang-up happens between the second click_button 'Submit' and the expectation of the home page text.
The flow of the logic causing the timeout is: the user submits the login form, and we wait for the server to render a .js.erb template that fires a JS event on successful login. When that event fires, we show a modal saying that login was successful and then execute a location.reload().
It turned out this wasn't exclusive to doing a location.reload() in JS. It sometimes happened when just visiting a page.
The solution for me was to create an HTTP client for the selenium driver and specify a longer timeout:
Capybara.register_driver :chrome do |app|
  client = Selenium::WebDriver::Remote::Http::Default.new
  client.read_timeout = 120
  Capybara::Selenium::Driver.new(app, {browser: :chrome, http_client: client})
end
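And, if it isn't already configured elsewhere, point Capybara at the newly registered driver (assuming :chrome is the name you registered it under):
Capybara.javascript_driver = :chrome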
I solved a similar problem by using my own version of the visit method:
def safe_visit(url)
  max_retries = 3
  times_retried = 0
  begin
    visit url
  rescue Net::ReadTimeout => error
    if times_retried < max_retries
      times_retried += 1
      puts "Failed to visit #{url}, retry #{times_retried}/#{max_retries}"
      retry
    else
      puts error.message
      puts error.backtrace.inspect
      exit(1)
    end
  end
end
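Then call it wherever you would normally call visit, for example:
safe_visit login_path   # retries up to 3 times on Net::ReadTimeout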
Here is what you need to do if you need to configure it for headless Chrome:
Capybara.register_driver :headless_chrome do |app|
  client = Selenium::WebDriver::Remote::Http::Default.new
  client.timeout = 120 # instead of the default 60

  options = Selenium::WebDriver::Chrome::Options.new
  options.headless!

  Capybara::Selenium::Driver.new(app, {
    browser: :chrome,
    http_client: client,
    options: options
  })
end

Capybara.default_driver = :headless_chrome
Capybara.javascript_driver = :headless_chrome
Passing the headless argument in capabilities was not working for me:
capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
  chromeOptions: { args: %w[headless disable-gpu] }
)
Here are more details about why headless in capabilities was not working.
Here is the issue that has been nagging me for weeks; none of the solutions found online seem to work (e.g. wait-for-AJAX helpers, etc.).
Here are the versions of the gems:
capybara (2.10.1, 2.7.1)
selenium-webdriver (3.0.1, 3.0.0)
rspec (3.5.0)
running ruby 2.2.5p319 (2016-04-26 revision 54774) [x64-mingw32]
In env.rb:
Capybara.register_driver :selenium do |app|
  browser = (ENV['browser'] || 'firefox').to_sym
  Capybara::Driver::Selenium.new(app, :browser => browser.to_sym, :resynchronize => true)
  Capybara.default_max_wait_time = 5
end
Here is my dynamicpage.feature
Given I visit page X
Then placeholder text appears
And the placeholder text is replaced by the content provided by the json service
and the step.rb
When(/^I visit page X$/) do
  visit('mysite.com/productx/')
end

When(/^placeholder text appears$/) do
  expect(page).to have_css(".text-replacer-pending")
end

Then(/^the placeholder text is replaced by the content provided by the json service$/) do
  expect(page).to have_css(".text-replacer-done")
end
The web page in question, which I cannot link here as it is not publicly accessible, contains the following on page load:
1- <span class="text-replacer-pending">Placeholder Text</span>
After a call to an external service (which provides the JSON data), the same span gets refreshed/updated to the following:
2- <span class="text-replacer-done">Correct Data</span>
The problem I have with the visit method in Capybara + Selenium is that as soon as it visits the page, it seems to decide that everything has loaded and freezes the browser, so the external service is never called to dynamically update the content.
I tried the following solutions but without success:
Capybara.default_max_wait_time = 5
Capybara::Driver::Selenium.new(app, :browser => browser.to_sym, :resynchronize => true)
add sleep 5 after the visit method
wait for ajax solution from several websites, etc...
adding after hooks
etc...
I am at a complete loss as to why visit can't wait, or at least provide a simple solution to an issue I am sure is very common.
I am aware of the Capybara methods that wait and those that don't (such as visit), but the issue is:
there is no content that goes from hidden to displayed
there is no user interaction either; just the content getting updated.
I am also unsure whether this is a Capybara issue, a Selenium issue, or both.
Anyhow, does anyone have insight into a solution? I am fairly new to Ruby and Cucumber, so being specific about what code goes in which file/folder would be much appreciated.
Mel
Restore the wait_until method (add it to your spec_helpers.rb):
def wait_until(timeout = DEFAULT_WAIT_TIME)
  # DEFAULT_WAIT_TIME is assumed to be defined elsewhere,
  # e.g. DEFAULT_WAIT_TIME = Capybara.default_max_wait_time
  Timeout.timeout(timeout) do
    sleep(0.1) until value = yield
    value
  end
end
And then:
# time in seconds
wait_until(20) { has_no_css?('.text-replacer-pending') }
expect(page).to have_css(".text-replacer-done")
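Alternatively, Capybara's matchers already retry internally, so you could give the slow step a longer per-call wait instead of a hand-rolled polling loop (a sketch, assuming your Capybara version supports the :wait option):
# Capybara retries these matchers up to the given wait time.
expect(page).to have_no_css('.text-replacer-pending', wait: 20)
expect(page).to have_css('.text-replacer-done', wait: 20)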
@maxple and @nattf0dd
Just to close the loop on our issue here...
After looking at this problem from a different angle, we finally found out that Cucumber/Capybara is not the problem at all :-)
The issue we are having lies with the Firefox browser driver (it is SSL related), since we have no issues when running the same test with the Chrome driver.
I do appreciate the replies and suggestions and will keep them in mind for the future.
Thanks again!
I am working on a program for downloading bills that will automatically navigate to a web page using the WebBrowser control, log in, navigate to the Bills page, and download the most recent bill.
It all works well up until the point where I actually download the bill: using InvokeMember("click") fires up IE and asks me to log in again, which is not what I want. The DownloadFile method does not work either; it downloads a file that says the object has been moved to the login URL.
Right-clicking and selecting "Save target as" in the web browser works, but I'm not sure how to go about automating it.
Edit: Here's the code for the downloading part; parts of it were borrowed from another question on here, but I can't remember whose or where.
Dim IsRightElement As Boolean = False
For Each curElement As HtmlElement In Browser.Document.Links()
    If curElement.GetAttribute("InnerText") = "Download Call Charges" Then
        IsRightElement = True
    End If
    If IsRightElement Then
        Dim Link As String = curElement.DomElement.href.ToString()
        'This is where I'm stuck
        'My.Computer.Network.DownloadFile(Link, "C:\Users\user\Desktop\PhoneBill.csv", "<username>", "<password>")
        'The above does not work
        'curElement.InvokeMember("contextmenu")
        'Not sure what to do here
        IsRightElement = False
        Exit For
    End If
Next
Me.Close()
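One approach worth trying (a sketch only, assuming the site's session is cookie-based and the session cookie is not HttpOnly): make the download request with a WebClient that carries the WebBrowser control's cookies, so the server still sees the logged-in session.
' Sketch only: reuse the WebBrowser session cookies for the download request.
' Browser and Link are the variables from the snippet above.
Dim wc As New System.Net.WebClient()
wc.Headers.Add(System.Net.HttpRequestHeader.Cookie, Browser.Document.Cookie)
wc.DownloadFile(Link, "C:\Users\user\Desktop\PhoneBill.csv")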
I'm new to Capybara and testing on Rails in general, so please forgive me if this is a simple answer.
I've got this test
it "should be able to edit an assignment" do
visit dashboard_path
select(#project.client + " - " + #project.name, :from => "assignment_project_id")
select(#team_member.first_name + " " + #team_member.last_name, :from => "assignment_person_id")
click_button "Create assignment"
page.should have_content(#team_member.first_name)
end
It passes as is, but if I add :js => true it fails with:
cannot select option, no option with text 'Test client - Test project' in select box 'assignment_project_id'
I'm using FactoryGirl to create the data, and as the test passes without JS, I know that part is working.
I've tried with the default JS driver and with the :webkit driver (with capybara-webkit installed).
I guess I don't understand enough what turning on JS for Capybara is doing.
Why would the test fail with JS on?
I've read the Capybara readme at https://github.com/jnicklas/capybara and it solved my issue.
Transactional fixtures only work in the default Rack::Test driver, but
not for other drivers like Selenium. Cucumber takes care of this
automatically, but with Test::Unit or RSpec, you may have to use the
database_cleaner gem. See this explanation (and code for solution 2
and solution 3) for details.
But basically it's a threading issue: when running a non-Rack::Test driver, Capybara runs the app in its own thread, which makes the transactional fixtures feature use a second connection in another context. So the driver thread is never in the same transaction as the running RSpec example.
Luckily this can be easily solved (at least it was for me) by dynamically switching the DatabaseCleaner strategy:
RSpec.configure do |config|
  config.use_transactional_fixtures = false

  config.before :each do
    if Capybara.current_driver == :rack_test
      DatabaseCleaner.strategy = :transaction
    else
      DatabaseCleaner.strategy = :truncation
    end
    DatabaseCleaner.start
  end

  config.after do
    DatabaseCleaner.clean
  end
end
A variation of brutuscat's answer that fixed our feature specs (which all use Capybara):
config.before(:suite) do
  DatabaseCleaner.clean_with(:truncation)
end

config.before(:each) do
  # set the default
  DatabaseCleaner.strategy = :transaction
end

config.before(:each, type: :feature) do
  DatabaseCleaner.strategy = :truncation
end

config.before(:each) do
  DatabaseCleaner.start
end

config.append_after(:each) do
  DatabaseCleaner.clean
end
There is another way to deal with this problem now described and discussed here: Why not use shared ActiveRecord connections for Rspec + Selenium?
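For reference, a sketch of that shared-connection idea (the widely circulated monkeypatch; use it with caution, since it forces every thread onto a single connection):
# Sketch: make Capybara's app server thread share the test thread's connection,
# so transactional fixtures remain visible to both.
class ActiveRecord::Base
  mattr_accessor :shared_connection
  @@shared_connection = nil

  def self.connection
    @@shared_connection || retrieve_connection
  end
end
ActiveRecord::Base.shared_connection = ActiveRecord::Base.connection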