Python with JavaScript

I am working on parsing an HTML page.
I tried spynner, Selenium, and mechanize, but none of them could solve the JavaScript issue in this case.
Can anyone let me know how I can work around this issue to fetch the data on the next page?
With Selenium, on this URL you first have to get the data into another select box and then proceed, but after clicking through to the next page I still only get the same URL;
the same happens with spynner.

From what I can tell, mechanize doesn't support JavaScript. So if you're doing any automation with JavaScript-heavy sites, mechanize is probably not the way to go. Rather, you probably need Python to script a fully functional web browser. You can do this with Mozilla via PyXPCOM, with Ruby and WATIR, or with spynner. Of these options, I'd probably try spynner first, as spynner is well integrated with Python.
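For example, a minimal spynner session might look like the sketch below. The method names follow spynner's documented examples, but the API has varied between versions, so treat this as illustrative rather than definitive:

import spynner

# Minimal sketch: drive a WebKit-based browser from Python with spynner.
browser = spynner.Browser()
browser.load("http://example.com/page-with-js")  # hypothetical URL; loading also runs the page's JavaScript
html = browser.html  # the rendered HTML after JavaScript execution
browser.close()
print(html[:200])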
Good luck with your project, and happy coding!

Related

How to bypass JavaScript detection when using requests in Python?

There is a problem with requests (in Python): it does not execute JavaScript when requesting a webpage.
The website I'm working with (https://access.paylocity.com/) requires JavaScript and, without it, changes the content of the page to just a line of text at the top saying, "Please enable JavaScript to view the page content."
(I could be wrong here, but) I think one solution is to use Selenium, though that would replace requests. I'm fine with that, as long as there are no other ways of fixing/bypassing this JavaScript detection.
(For those wondering: this Python project of mine is supposed to automatically fetch the events on the Paylocity calendar, then port those events to another calendar that I use every day. It's also just intended for myself.)
Edit: Here is the code I have, if that helps: https://pastecode.io/s/GXTUO1BgtR (I didn't know where to paste my code, so I decided on that website. If I should change it, please comment or say something about it.)
Since the website you're working with loads its content dynamically with JavaScript, as far as I can tell, I think you have no choice but to use Selenium. I had a project of my own a couple of weeks ago and ran into a similar problem, which I was also able to solve using Selenium. But I'm no expert; I'm just giving my thoughts on this.
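A minimal sketch of that approach, assuming a recent Selenium (4.x) and Chrome are installed (the login flow itself is not handled here):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")  # run without opening a browser window
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://access.paylocity.com/")
    html = driver.page_source  # the page after its JavaScript has executed
    print(html[:200])
finally:
    driver.quit()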

Evaluate javascript on a local html file (without browser)

This is part of a project I am working on for work.
I want to automate a SharePoint site, specifically to pull data out of a database that I and my coworkers only have front-end access to.
I FINALLY managed to get mechanize (in Python) to accomplish this using Python-NTLM, and by patching part of its source code to fix a recurring error.
Now I am at what I hope is my final roadblock: part of the form I need to submit seems to be the output of a JavaScript function :| and, lo and behold, mechanize does not support JavaScript. I don't want to emulate the JavaScript functionality myself in Python, because I would ideally like a reusable solution...
So, does anyone know how I could evaluate the JavaScript in the local HTML I download from SharePoint? I just want to run the JavaScript somehow (to complete the loading of the page), but without a browser.
I have already looked into Selenium, but it's pretty slow for the amount of work I need to get done... I am currently looking into PyV8 to try to evaluate the JavaScript myself... but surely there must be an app or library (or anything) that can do this?
Well, in the end I came down to the following possible solutions:
Run Chrome headless and collect the HTML output (thanks to koenp for the link; see the sketch after this list)
Run PhantomJS, a headless browser with a JavaScript API
Run HTMLUnit; the same thing, but for Java
Use Ghost.py, a Python-based headless browser (which I haven't seen suggested anywhere, for some reason!)
Write a DOM-based JavaScript interpreter on top of PyV8 (Google's V8 JavaScript engine) and add this to my current "half-solution" with mechanize
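The first option can be tried from Python with nothing more than a subprocess call; a sketch, assuming a chromium binary on PATH and a placeholder URL:

import subprocess

# --dump-dom prints the DOM after the page's scripts have run.
html = subprocess.check_output(
    ["chromium", "--headless", "--disable-gpu", "--dump-dom", "http://example.com"],
    text=True,
)
print(html[:200])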
For now, I have decided to use either Ghost.py or my own modification of the PySide/PyQt WebKit approach (which is how Ghost works) to evaluate the JavaScript, as apparently they can run quite fast if you optimize them not to download images and disable the GUI.
Hopefully others will find this list useful!
Well, you will need something that both understands the DOM and understands JavaScript, so that comes down to a headless browser of some sort. Maybe you can take a look at Selenium WebDriver, but I guess you already did that. I don't think there is an easy way of doing this without running the stuff in an actual browser engine.

Using python to login a site using javascript and https

I would like to use Python to log in to a site that uses both JavaScript and HTTPS-encrypted communication.
Specifically, it's this site:
https://registration.orange.co.il/he-il/login/login/?TYPE=100663297&REALMOID=06-73b4ebbc-5fd9-4b19-b3f4-42671c0df793&GUID=&SMAUTHREASON=0&METHOD=GET&SMAGENTNAME=vmwebadmin3&TARGET=-SM-http%3a%2f%2fwww1%2eorange%2eco%2eil%2fSendSMS%2f
All I want to do is write a Python script that logs in successfully, and later on transfer the algorithm to Java.
Every solution I've tried so far just got me back to the same login form.
Thank you all!
Is using a browser automation tool too far off from what you're trying to do? Without knowing the exact goal this could be way off base, but what about using something like Selenium? You can use Selenium from Java, Python, C#, and Ruby.
I've used Selenium to automatically log in to a private wiki and retrieve, edit, and submit changes to articles. If that's similar to what you're trying to do, it could work.
It's a pretty heavyweight approach though, since you actually have to run a real browser to do the work.
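If that direction fits, a rough Selenium login sketch looks like the following. The element names "username", "password", and "submit" are hypothetical placeholders; inspect the real form for the actual ids:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://registration.orange.co.il/he-il/login/login/")  # query string from the question omitted here
    # Hypothetical element names; inspect the real login form to find them.
    driver.find_element(By.NAME, "username").send_keys("my-user")
    driver.find_element(By.NAME, "password").send_keys("my-pass")
    driver.find_element(By.NAME, "submit").click()
    print(driver.current_url)  # should change if the login succeeded
finally:
    driver.quit()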
You should check out the spynner module; it can process JavaScript and HTTPS.

Scraping websites with Javascript enabled?

I'm trying to scrape and submit information to websites that rely heavily on JavaScript for most of their actions. They won't even work when I disable JavaScript in my browser.
I've searched for solutions on Google and SO, and someone suggested I should reverse engineer the JavaScript, but I have no idea how to do that.
So far I've been using mechanize, and it works on websites that don't require JavaScript.
Is there any way to access websites that use JavaScript by using urllib2 or something similar?
I'm also willing to learn JavaScript, if that's what it takes.
I wrote a small tutorial on this subject; this might help:
http://koaning.io.s3-website.eu-west-2.amazonaws.com/dynamic-scraping-with-python.html
Basically, what you do is have the Selenium library pretend to be a Firefox browser; the browser will wait until all the JavaScript has loaded before passing you the HTML string. Once you have this string, you can parse it with BeautifulSoup.
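A condensed sketch of that flow, assuming selenium, beautifulsoup4, and Firefox with geckodriver are installed:

from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Firefox()
try:
    driver.get("http://example.com/js-heavy-page")  # hypothetical URL
    html = driver.page_source  # the HTML string after JavaScript has run
finally:
    driver.quit()

soup = BeautifulSoup(html, "html.parser")
print(soup.title)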
I've had exactly the same problem. It is not simple at all, but I finally found a great solution using PyQt4.QtWebKit.
You will find the explanation on this webpage: http://blog.motane.lu/2009/07/07/downloading-a-pages-content-with-python-and-webkit/
I've tested it, I currently use it, and it's great!
Its great advantage is that it can run on a server, using only X, without a graphical environment.
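The core of that approach is a small QWebPage subclass along the lines of the widely circulated recipe; a sketch, assuming PyQt4 with QtWebKit is installed:

import sys
from PyQt4.QtGui import QApplication
from PyQt4.QtCore import QUrl
from PyQt4.QtWebKit import QWebPage

class Render(QWebPage):
    # Load a URL in WebKit, let its JavaScript run, then keep the final frame.
    def __init__(self, url):
        self.app = QApplication(sys.argv)
        QWebPage.__init__(self)
        self.loadFinished.connect(self._load_finished)
        self.mainFrame().load(QUrl(url))
        self.app.exec_()  # block until loadFinished fires

    def _load_finished(self, result):
        self.frame = self.mainFrame()
        self.app.quit()

r = Render("http://example.com")  # hypothetical URL
html = r.frame.toHtml()  # the document after WebKit executed its JavaScript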
You should look into using Ghost, a Python library that wraps the PyQt4 + WebKit hack.
This makes g the WebKit client:
import ghost
g = ghost.Ghost()
You can grab a page with g.open(url) and then g.content will evaluate to the document in its current state.
Ghost has other cool features, like injecting JS and some form filling methods, and you can pass the resulting document to BeautifulSoup and so on: soup = bs4.BeautifulSoup(g.content).
So far, Ghost is the only thing I've found that makes this kind of thing easy in Python. The only limitation I've come across is that you can't easily create more than one instance of the client object, ghost.Ghost, but you could work around that.
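Putting those pieces together, a sketch against the older Ghost API described above (newer Ghost versions route calls through a session object instead, so check your installed version):

import bs4
import ghost

g = ghost.Ghost()
page, resources = g.open("http://example.com")  # hypothetical URL; opening executes the page's JavaScript
soup = bs4.BeautifulSoup(g.content)  # g.content is the rendered document
print(soup.title)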
Check out crowbar. I haven't had any experience with it, but I was curious about the answer to your question, so I started googling around. I'd like to know if this works out for you.
http://grep.codeconsult.ch/2007/02/24/crowbar-scrape-javascript-generated-pages-via-gecko-and-rest/
Maybe you could use Selenium WebDriver, which has Python bindings, I believe. I think it's mainly used as a tool for testing websites, but I guess it should be usable for scraping too.
I would actually suggest using Selenium. It's mainly designed for testing web applications from a "user perspective", however it is basically a "Firefox" driver. I've actually used it for this purpose, although I was scraping a dynamic AJAX webpage. As long as the JavaScript form has a recognizable "anchor text" that Selenium can "click", everything should sort itself out (see the sketch below).
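In Selenium terms, clicking such an anchor is a one-liner; a sketch, where "Next" stands in for whatever the real link text is:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("http://example.com")  # hypothetical URL
driver.find_element(By.LINK_TEXT, "Next").click()  # click the anchor by its visible link text
driver.quit()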
Hope that helps

How do I use Mechanize to process JavaScript?

I'm connecting to a website and logging in.
The website redirects me to new pages, and mechanize deals with all the cookie and redirection jobs, but I can't get the last page. I used Firebug and did the same job again, and saw that there are two more pages I have to pass through with mechanize.
I took a quick look at the pages and saw that there is some JavaScript and HTML code, but couldn't understand it, because it doesn't look like normal page code. What are those pages for? How can they redirect to other pages? What should I do to pass them?
If you need to handle pages with JavaScript, try WATIR or Selenium; those drive a real web browser and can thus handle any JavaScript. WATIR Classic requires either IE or Firefox with a certain extension installed, and you will see the pages flash on the screen as it works.
Your other option would be understanding what the Javascript on the offending page does and bypassing it manually, but that seems onerous.
At present, Mechanize doesn't handle JavaScript. There's talk of eventually merging Johnson's capabilities into Mechanize, but until that happens, you have two options:
Figure out the JavaScript well enough to understand how to traverse those pages.
Automate an actual browser that does understand JavaScript using Watir.
What are those pages for? How can they redirect to other pages? What should I do to pass them?
Sometimes work is done on those pages. Sometimes the JavaScript is there to prevent automated access like what you're trying to do :). A lot of websites have unnecessary checks to make sure you have a "good" browser, so make sure your user_agent is set to something common, like IE. Sometimes setting the user_agent to look like an old browser will let you get past without JavaScript (see the sketch below).
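For instance, with Python's mechanize the User-Agent header can be spoofed like this (a sketch; the URL is a placeholder):

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)  # optional: skip robots.txt while testing
br.addheaders = [("User-Agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)")]
response = br.open("http://example.com")  # hypothetical URL
print(response.read()[:200])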
Website automation is fun because you have to outsmart the website and its software developers, using multiple strategies. Like the others said, Watir is the best tool for getting past JavaScript at the moment.
