Can storeEval process Ajax requests in Selenium IDE? - javascript

I am trying to create a dynamic tester which will get the input from a database and compare it to what comes out of the page. Can Selenium IDE do something like that using storeEval?

Selenium IDE is built on JavaScript technology, and it will not let you do what you want.
You will need to move up to something more capable. Consider Selenium WebDriver together with Java, which lets you do (almost) anything, including database access via JDBC.
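The same idea can be sketched in Python just as well as Java: pull the expected value from a database and compare it to what WebDriver scrapes off the page. This is a minimal sketch using an in-memory sqlite3 database; the table and column names are hypothetical, and the scraped value is stubbed in place of a real WebDriver call.

```python
import sqlite3

def expected_value(conn, case_key):
    # Look up the expected output for one test case.
    # The `cases` table and its columns are hypothetical placeholders.
    row = conn.execute(
        "SELECT expected FROM cases WHERE key = ?", (case_key,)
    ).fetchone()
    return row[0] if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cases (key TEXT, expected TEXT)")
conn.execute("INSERT INTO cases VALUES ('greeting', 'Hello')")

# In a real test, `actual` would come from WebDriver, e.g.
#   actual = driver.find_element(By.ID, "greeting").text
actual = "Hello"  # stand-in for the value scraped from the page
assert expected_value(conn, "greeting") == actual
```

The structure is the same whether the language is Python or Java: one query per test case, one comparison per page element.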

Related

Electron GUI with C# backend

Use case
I've got an existing project developed in C# using WinForms with custom controls for the GUI. We are amazed by the approach of writing GUIs using HTML/CSS/JS, and we are looking for the best way to write a GUI for our desktop application using those languages. We only need to support Windows devices.
My worries:
It doesn't take long to come across recommendations for electron-edge. While I am not so worried about getting everything working, I am worried about:
Debugging my C# code (I still want to be able to start my whole application right from VS and debug it like I am used to). I read that I would need to attach to the node.js application in order to debug my C# code afterwards. Since the whole program logic is written in C#, that sounds like a pain?
As far as I understand, edge runs everything as just one process. Can I consider the Electron application its own thread, which would still run while my C# code is stuck somewhere?
My option:
I am still positive I want to write my desktop GUI with HTML/CSS/JS. What I considered instead of using electron-edge is writing my own Electron application which communicates with my C# backend using named pipes. I wonder if there are larger roadblocks that should make me avoid this and use electron-edge instead?
My question:
I would like to get feedback on my two concerns mentioned above, and I would also like input on my option of creating the GUI as its own Electron process, so that there are two processes (GUI + backend) when someone runs my application.
Electron.NET may be an option for you. It lets you write C# code with Electron.
You can do it in many ways:
1) COM. Create a C# COM DLL, then create wrapper functions for the DLL using N-API (a native Node module) or use FFI. You can then access the functions from JS.
2) Create a .NET web server and expose your functions as REST endpoints. From the UI, make HTTP requests to communicate (a clear separation of UI and backend).
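To illustrate the shape of approach 2: the backend exposes a small HTTP endpoint returning JSON, and the Electron renderer calls it with `fetch`. The sketch below uses Python's stdlib `http.server` purely for brevity; in this setup the real server would be written in .NET, and the `/api/status` route is a hypothetical example.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ApiHandler(BaseHTTPRequestHandler):
    """Hypothetical /api/status endpoint. Shown in Python for brevity;
    the actual backend in this scenario would be a .NET web server."""

    def do_GET(self):
        if self.path == "/api/status":
            body = json.dumps({"ok": True}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

# Serve on an ephemeral port and hit the endpoint once, the way the
# Electron renderer would with fetch("http://localhost:<port>/api/status").
server = HTTPServer(("127.0.0.1", 0), ApiHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/api/status" % server.server_address[1]
status = json.loads(urllib.request.urlopen(url).read())
server.shutdown()
```

The key design point is that the GUI and backend stay in separate processes that only share an HTTP contract, so either side can be debugged or replaced independently.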
You can check out my GitHub repo for a few alternatives to Electron.
I think the most important question is how your frontend interacts with the backend. Do any notifications need to be pushed to the frontend?
WebSocket could be a good option for two-way communication.

Why do I need to have a Selenium Server instead of calling the WebDriver implementation directly

My Situation
I am trying to run automated headless browser tests with PhantomJS and the provided GhostDriver. Of course, I need some kind of library that wraps the WebDriver implementation, because I don't want to call the API myself. During my investigation I stumbled across things like WebDriverIO. Reading the documentation, it says that I need to install a standalone Selenium server in order to make it work.
My Question
Why do I need a dedicated Selenium server for that?
Isn't there a library that calls the HTTP API of the GhostDriver directly?
Selenium is the wrapper for talking to the HTTP APIs of many browsers.
You can talk directly to GhostDriver or Chrome, but then you have to talk to their individual APIs. Selenium lets you easily match your preferred language binding (Python, Java, JS, Ruby, C#, whatever) to the desired browser and drive it.
http://www.seleniumhq.org/projects/webdriver/
Otherwise, you'd want to connect to the GhostDriver and drive it yourself.
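Driving it yourself means speaking the wire protocol directly. GhostDriver implements the (old) JSON Wire Protocol, so "a session" is just a POST of JSON to the driver's HTTP port. The sketch below only builds the request with the stdlib to show what a Selenium client does under the hood; nothing is actually sent, and the port (8910, PhantomJS's default) assumes a locally running driver.

```python
import json
import urllib.request

# GhostDriver listens on port 8910 by default when started via
# `phantomjs --webdriver=8910`. Nothing is sent in this sketch.
base = "http://127.0.0.1:8910"

# Step 1: create a session (JSON Wire Protocol style).
new_session = urllib.request.Request(
    base + "/session",
    data=json.dumps({"desiredCapabilities": {"browserName": "phantomjs"}}).encode(),
    headers={"Content-Type": "application/json"},
)

# Step 2 (after the driver replies with a sessionId) is another POST:
#   POST {base}/session/<sessionId>/url   with body {"url": "http://example.com"}
# Every click, find-element, etc. is a similar small HTTP call.
```

A library like Selenium or WebdriverIO is essentially this, plus session bookkeeping, error handling, and a nicer API for every browser's driver.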

Evaluate javascript on a local html file (without browser)

This is part of a project I am working on for work.
I want to automate a Sharepoint site, specifically to pull data out of a database that I and my coworkers only have front-end access to.
I FINALLY managed to get mechanize (in Python) to accomplish this using Python-NTLM, and by patching part of its source code to fix a recurring error.
Now I am at what I hope is my final roadblock: part of the form I need to submit seems to be the output of a JavaScript function :| and, lo and behold... mechanize does not support JavaScript. I don't want to emulate the JavaScript functionality myself in Python, because I would ideally like a reusable solution...
So, does anyone know how I could evaluate the javascript on the local html I download from sharepoint? I just want to run the javascript somehow (to complete the loading of the page), but without a browser.
I have already looked into selenium, but it's pretty slow for the amount of work I need to get done... I am currently looking into PyV8 to try and evaluate the javascript myself... but surely there must be an app or library (or anything) that can do this??
Well, in the end I came down to the following possible solutions:
Run Chrome headless and collect the html output (thanks to koenp for the link!)
Run PhantomJS, a headless browser with a javascript api
Run HTMLUnit; same thing but for Java
Use Ghost.py, a Python-based headless browser (which I haven't seen suggested anywhere, for some reason!)
Write a DOM-based javascript interpreter based on Pyv8 (Google v8 javascript engine) and add this to my current "half-solution" with mechanize.
For now, I have decided to use either Ghost.py or my own modification of the PySide/PyQt WebKit bindings (which is how Ghost works) to evaluate the JavaScript, as apparently they can run quite fast if you configure them not to download images and to disable the GUI.
Hopefully others will find this list useful!
Well, you will need something that understands both the DOM and JavaScript, so that comes down to a headless browser of some sort. Maybe you can take a look at Selenium WebDriver, but I guess you already did that. I don't think there is an easy way of doing this without running the page in an actual browser engine.

Using python to login a site using javascript and https

I would like to use python to login to a site that uses both javascript and https encrypted comm.
Specifically - it's this site:
https://registration.orange.co.il/he-il/login/login/?TYPE=100663297&REALMOID=06-73b4ebbc-5fd9-4b19-b3f4-42671c0df793&GUID=&SMAUTHREASON=0&METHOD=GET&SMAGENTNAME=vmwebadmin3&TARGET=-SM-http%3a%2f%2fwww1%2eorange%2eco%2eil%2fSendSMS%2f
All I wish to do is write a Python script that successfully logs in, and later port the algorithm to Java.
Every solution I've tried so far just got me back to the same login form.
Thank you all!
Is using a browser automation tool too far off from what you're trying to do? Without knowing the exact goal this could be way off base, but what about using something like Selenium? You can use Selenium from Java, Python, C#, and Ruby.
I've used Selenium to automatically log in to a private wiki and retrieve, edit, and submit changes to articles. If that's similar to what you're trying to do, it could work.
It's a pretty heavyweight approach, though, since you actually have to run a real browser to do the work.
You should check out the spynner module; it can process JavaScript and HTTPS.
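If you do stay with plain Python before reaching for a browser, one common reason for being bounced back to the login form is not carrying cookies across the login redirects; SiteMinder-style logins like the one above set session cookies on intermediate responses. A minimal stdlib sketch of a cookie-aware session (the form field names are hypothetical; copy the real ones from the actual login form):

```python
import http.cookiejar
import urllib.parse
import urllib.request

# One opener with one cookie jar, reused for every request in the session,
# so cookies set during redirects are sent back on the next request.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# Field names below are hypothetical -- read the real ones off the form.
payload = urllib.parse.urlencode({"USER": "myuser", "PASSWORD": "secret"}).encode()
# opener.open("https://registration.orange.co.il/he-il/login/login/", payload)
```

If the login still fails after cookies are handled, the form is probably being completed by JavaScript, and a browser-driving tool (Selenium, spynner) is the pragmatic route.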

Python BeautifulSoup on javascript tables with multiple pages

I used to have a Python script that pulled data from the table below properly using mechanize and BeautifulSoup. However, this site recently changed the table to be rendered with JavaScript, and I'm having trouble working with it because the table spans multiple pages.
http://www.fangraphs.com/leaders.aspx?pos=all&stats=bat&lg=all&qual=0&type=8&season=2011&month=0&season1=&ind=0&team=25&players=0
For example, in the link above, how could I grab the data from both page 1 and page 2 of the table? FWIW, the URL doesn't change.
Your best bet is to run a headless browser, e.g. PhantomJS, which understands all the intricacies of JavaScript, the DOM, etc., but you will have to write your code in JavaScript. The benefit is that you can do whatever you want; parsing HTML with BeautifulSoup is cool for a while but becomes a headache in the long term. So why scrape when you can access the DOM?
Mechanize doesn't handle JavaScript.
You could observe what requests are made when you click the button (using Firebug in Firefox or Developer Tools in Chrome). Then try to reverse-engineer the JavaScript running behind the page and do the same thing from your Python code; for that, take a look at Spidermonkey.
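In this particular case, the .aspx page posting back to itself suggests an ASP.NET WebForms grid: each pager click is likely a form POST carrying `__EVENTTARGET` / `__VIEWSTATE` fields, which you can replay without a browser once you copy those values from the current page. A stdlib sketch of building such a request (the pager's control name below is hypothetical; read the real one off the request in Firebug/DevTools):

```python
import urllib.parse

# __VIEWSTATE / __EVENTVALIDATION must be scraped from the current page's
# hidden inputs; the __EVENTTARGET control name here is a hypothetical
# placeholder for whatever the "page 2" pager link actually posts.
form = {
    "__EVENTTARGET": "LeaderBoard1$dg1$pager$page2",  # hypothetical name
    "__EVENTARGUMENT": "",
    "__VIEWSTATE": "<copied from the page>",
    "__EVENTVALIDATION": "<copied from the page>",
}
body = urllib.parse.urlencode(form).encode()
# POST `body` back to the same leaders.aspx URL (reusing the session's
# cookies) to receive page 2's HTML, then parse it with BeautifulSoup
# exactly as before.
```

That explains why the URL never changes: the state lives in the POSTed hidden fields, not in the query string.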
Try using Selenium.
Selenium is a functional testing framework which automates the browser to perform certain operations, which in turn test the underlying actions of your code.
