Scraping a JavaScript-rendered page that requires login using Python

My issue is that I can't scrape a website that requires login and renders its pages with JavaScript.
I can log in easily enough with this code:
import requests
from lxml import html

payload = {
    "username": "username",
    "password": "password"
}

session_requests = requests.session()
result = session_requests.get(login_url)
tree = html.fromstring(result.text)
result = session_requests.post(
    login_url,
    data=payload,
    headers=dict(referer=login_url)
)
Then I can get some values using this code:
result = session_requests.get(agent_url, headers=dict(referer=agent_url))
tree = html.fromstring(result.content)
needed_info = tree.xpath("//div[@class='col-md-6']/div[@class='table-responsive']/table/tbody/tr[22]/td[2]")[0].text
However, not everything is rendered.
I've also tried dryscrape; however, it does not work on Windows.
Selenium is just too heavy for my needs, and I'm having trouble installing Spynner (probably because it does not support Python 3.6?).
What would you recommend?

In the end I just did it with Selenium. Everything else was too much of a hassle for this little project.
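Roughly, the Selenium version looks like the sketch below. The field names and the WebDriver choice are assumptions based on the payload and XPath from the question, so adapt them to the actual page:

from selenium import webdriver
from selenium.webdriver.common.by import By
from lxml import html

driver = webdriver.Chrome()  # any WebDriver works here; Chrome is just an example

# log in through the rendered form (field names assumed to match the payload above)
driver.get(login_url)
driver.find_element(By.NAME, "username").send_keys("username")
driver.find_element(By.NAME, "password").send_keys("password")
driver.find_element(By.NAME, "password").submit()

# page_source now holds the JavaScript-rendered DOM, so the same XPath applies
driver.get(agent_url)
tree = html.fromstring(driver.page_source)
needed_info = tree.xpath("//div[@class='col-md-6']/div[@class='table-responsive']/table/tbody/tr[22]/td[2]")[0].text
driver.quit()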

Related

Cheerio selector doesn't select some elements

I am trying to build a module that does some basic scraping on an official NBA box score page (e.g. https://stats.nba.com/game/0021800083) using request-promise and cheerio. I wrote the following test code:
const rp = require("request-promise");
const co = require("cheerio");

// the object to be exported
var stats = {};

const test = (gameId) => {
  rp(`http://stats.nba.com/game/${gameId}`)
    .then(response => {
      const $ = co.load(response);
      $('td.player').each((i, element) => {
        console.log(element);
      });
    });
};

// TESTING
test("0021800083");

module.exports = stats;
When I inspect the test webpage, there are multiple instances of td tags with class="player", but for some reason selecting them with cheerio doesn't work.
But cheerio does successfully select some elements, including a, script and div tags.
Help would be appreciated!
Using a scraper like request-promise will not work for a site built with AngularJS. The response does not contain the rendered HTML you are probably expecting, which you can confirm by console-logging it. To scrape this site properly you could use PhantomJS, Selenium WebDriver, or the like.
An easier approach is to identify the AJAX call that provides the data you are after and scrape that instead. To do this, go to the site, open the Network tab in the developer tools, look at the list of requests and identify which one contains the data you want.
Assuming you are after the player stats in the tables, the request I believe you are looking for is "0021800083_gamedetail.json".
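Once you have found that request, you can skip the HTML entirely and consume the JSON. A sketch of the idea, shown in Python for brevity (the same request works with request-promise); the URL below is only a placeholder, copy the real one from the Network tab:

import requests

# placeholder: paste the full request URL for 0021800083_gamedetail.json from DevTools
gamedetail_url = "https://example.com/path/to/0021800083_gamedetail.json"

resp = requests.get(gamedetail_url, headers={"User-Agent": "Mozilla/5.0"})
game = resp.json()
# inspect the top-level keys to find where the player rows live
print(list(game.keys()))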
Further reading:
Can you scrape a Angular JS website
Scraping Data from AngularJS loaded page
Best of luck!

How to yield fragment URLs in scrapy using Selenium?

With my limited knowledge of web scraping I've run into an issue that is quite complex for me, and I will try to explain it as best I can (so I'm open to suggestions or edits to my post).
I started using the web crawling framework Scrapy a long time ago for my scraping, and it's still the one I use today. Recently I came across this website and found that Scrapy was not able to iterate over its pages, since the site uses fragment URLs (#) to load the data for the next pages. I made a post about that problem (having no idea of the underlying cause yet): my post
After that, I realized that my framework can't handle this without a JavaScript interpreter or a browser emulation, so the Selenium library was suggested. I read as much as I could about it (e.g. example1, example2, example3 and example4). I also found this Stack Overflow post that gives some hints about my issue.
So, finally, my main questions are:
1 - Is there any way to iterate/yield over the pages of the website shown above, using Selenium along with Scrapy?
So far, this is the code I'm using, but it doesn't work...
EDIT:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# The required imports...

def getBrowser():
    path_to_phantomjs = "/some_path/phantomjs-2.1.1-macosx/bin/phantomjs"
    dcap = dict(DesiredCapabilities.PHANTOMJS)
    dcap["phantomjs.page.settings.userAgent"] = (
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/53 "
        "(KHTML, like Gecko) Chrome/15.0.87")
    browser = webdriver.PhantomJS(executable_path=path_to_phantomjs, desired_capabilities=dcap)
    return browser

class MySpider(Spider):
    name = "myspider"
    browser = getBrowser()

    def start_requests(self):
        the_url = "http://www.atraveo.com/es_es/islas_canarias#eyJkYXRhIjp7ImNvdW50cnlJZCI6IkVTIiwicmVnaW9uSWQiOiI5MjAiLCJkdXJhdGlvbiI6NywibWluUGVyc29ucyI6MX0sImNvbmZpZyI6eyJwYWdlIjoiMCJ9fQ=="
        yield scrapy.Request(url=the_url, callback=self.parse, dont_filter=True)

    def parse(self, response):
        self.get_page_links()

    def get_page_links(self):
        """ This first part goes through all available pages """
        for i in xrange(1, 3):  # 210
            new_data = {"data": {"countryId": "ES", "regionId": "920", "duration": 7, "minPersons": 1},
                        "config": {"page": str(i)}}
            json_data = json.dumps(new_data)
            new_url = "http://www.atraveo.com/es_es/islas_canarias#" + base64.b64encode(json_data)
            self.browser.get(new_url)
            print "\nThe new URL is -> ", new_url, "\n"
            content = self.browser.page_source
            self.get_item_links(content)

    def get_item_links(self, body=""):
        if body:
            """ This second part goes through all available items """
            raw_links = re.findall(r'listclickable.+?>', body)
            links = []
            if raw_links:
                for raw_link in raw_links:
                    new_link = re.findall(r'data-link=\".+?\"', raw_link)[0].replace("data-link=\"", "").replace("\"", "")
                    links.append(str(new_link))
                if links:
                    ids = self.get_ids(links)
                    for link in links:
                        current_id = self.get_single_id(link)
                        print "\nThe Link -> ", link
                        # If the line below is commented out the code works; otherwise it doesn't
                        yield scrapy.Request(url=link, callback=self.parse_room, dont_filter=True)

    def get_ids(self, list1=[]):
        if list1:
            ids = []
            for elem in list1:
                raw_id = re.findall(r'/[0-9]+', elem)[0].replace("/", "")
                ids.append(raw_id)
            return ids
        else:
            return []

    def get_single_id(self, text=""):
        if text:
            raw_id = re.findall(r'/[0-9]+', text)[0].replace("/", "")
            return raw_id
        else:
            return ""

    def parse_room(self, response):
        # More scraping code...
So this is essentially my problem. I'm almost sure that what I'm doing isn't the best way, hence my second question. And to avoid having to deal with these kinds of issues in the future, my third question.
2 - If the answer to the first question is no, how could I tackle this issue? I'm open to other approaches.
3 - Can anyone point me to pages where I can learn how to combine web scraping with JavaScript and Ajax? More and more websites nowadays use JavaScript and Ajax to load their content.
Many thanks in advance!
Selenium is one of the best tools for scraping dynamic data. You can use Selenium with any web browser to fetch data that is loaded by scripts, and it works exactly like browser click operations. But it is not my preferred option.
For dynamic data you can use the Scrapy + Splash combination: Scrapy gets you all the static data, and Splash handles the dynamically rendered content.
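A minimal scrapy-splash sketch of that combination, assuming a Splash instance is running locally on port 8050 and the scrapy-splash middlewares from its README are enabled in settings.py:

# settings.py (abridged):
# SPLASH_URL = 'http://localhost:8050'
# DOWNLOADER_MIDDLEWARES = {
#     'scrapy_splash.SplashCookiesMiddleware': 723,
#     'scrapy_splash.SplashMiddleware': 725,
#     'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
# }
# SPIDER_MIDDLEWARES = {'scrapy_splash.SplashDeduplicateArgsMiddleware': 100}
# DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'

import scrapy
from scrapy_splash import SplashRequest

class SplashExampleSpider(scrapy.Spider):
    name = "splash_example"

    def start_requests(self):
        # 'wait' gives the page time to run its JavaScript before the HTML comes back
        yield SplashRequest("http://www.atraveo.com/es_es/islas_canarias",
                            callback=self.parse, args={"wait": 2})

    def parse(self, response):
        # response.text is now the JavaScript-rendered markup
        for href in response.css("a::attr(href)").getall():
            self.logger.info(href)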
Have you looked into BeautifulSoup? It's a very popular web scraping library for Python. As for JavaScript, I would recommend something like Cheerio (if you're asking for a scraping library in JavaScript).
If you mean that the website uses HTTP requests to load content, you could always try to replicate those requests manually with something like the requests library.
Hope this helps.
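If the data does turn out to be reachable with a plain HTTP request, a small requests + BeautifulSoup sketch (using the URL from the question; note that content loaded via the # fragment will not appear, since fragments are never sent to the server):

import requests
from bs4 import BeautifulSoup

# only the base page is fetched; the fragment part of the URL never reaches the server
resp = requests.get("http://www.atraveo.com/es_es/islas_canarias")
soup = BeautifulSoup(resp.text, "html.parser")

for a in soup.find_all("a", href=True):
    print(a["href"])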
You can definitely use Selenium as a standalone tool to scrape webpages with dynamic content (like AJAX loading).
Selenium simply relies on a WebDriver (basically a web browser) to fetch content over the Internet.
Here are a few of them (the most commonly used):
ChromeDriver
PhantomJS (my favorite)
Firefox
Once you're set up, you can start your bot and parse the HTML content of the webpage.
I have included a minimal working example below using Python and ChromeDriver:
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome(executable_path='chromedriver')
driver.get('https://www.google.com')
# Then you can search for any element you want on the webpage
search_bar = driver.find_element(By.CLASS_NAME, 'tsf-p')
search_bar.click()
driver.close()
See the documentation for more details!

Python web drivers

I'm trying to scrape products from a web store, similar to how Dropified scrapes items from AliExpress.
Current solution (the way it's set up, it will only try to access the first item):
from bs4 import BeautifulSoup
import requests
import time
import re
# Get search inputs from user
search_term = raw_input('Search Term:')
# Build the AliExpress search URL
aliURL = 'https://www.aliexpress.com/wholesale?SearchText=%(q)s'
payload = {'q': search_term, }
# Get resulting webpage
r = requests.get(aliURL % payload)
# Build 'soup' from webpage and filter down to the results of search
soup = BeautifulSoup(r.text, "html5lib")
titles = soup.findAll('a', attrs = {'class': 'product'})
itemURL = titles[0]["href"]
seperatemarker = '?'
seperatedURL = itemURL.split(seperatemarker, 1)[0]
seperatedURL = "http:" + seperatedURL
print seperatedURL
IR = requests.get(seperatedURL)
Isoup = BeautifulSoup(IR.text, "html5lib")
productname = Isoup.findAll('h1')
print productname
This solution works as long as the items on the page don't require JavaScript; if an item does, it will only retrieve the initial page, before the document is ready.
I realise I can use a Python web driver, but I was wondering whether there is any other solution to this problem that would allow for easy automation of a web scraping tool.
Check out Selenium with PhantomJS. Together they handle most of the issues related to JS-generated content on the page, so you don't even need to think about these things anymore.
If you're scraping many pages and want to speed things up, you might want to do things asynchronously. For a small to mid-sized setup you can use RQ; for larger projects you can use Celery. What these tools let you do is scrape multiple pages at the same time in separate worker processes (parallel workers, not in-process concurrency).
Note that the tools I've mentioned so far have nothing to do with asyncio or other async frameworks.
I tried scraping some e-commerce pages and noticed that the program was spending 80% of its time waiting for HTTP calls to return something. Using the above tools you can reduce that 80% to 10% or less.
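A minimal RQ sketch of that setup, assuming a Redis server is running locally and that scrape_page is your own job function living in an importable module; workers are started separately with the rq worker command:

# tasks.py -- the job function must be importable by the worker processes
import requests

def scrape_page(url):
    return requests.get(url).text

# enqueue.py -- push one job per page; rq workers pick them up in parallel
from redis import Redis
from rq import Queue
from tasks import scrape_page

q = Queue(connection=Redis())
urls = ["https://example.com/page/1", "https://example.com/page/2"]  # placeholders
jobs = [q.enqueue(scrape_page, url) for url in urls]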

Can I scrape this site using just Node?

I'm very new to JavaScript, so please be patient.
I've been trying to scrape a site and get all the product URLs into a list that I will use later in another function, like this:
url='https://www.fromuthtennis.com/frm/c-10-mens-tops.aspx'
var http = require('http-get');
var request = require("request");
var cheerio = require("cheerio");
function getURLS(url) {
  request(url, function(err, resp, body){
    var linklist = [];
    $ = cheerio.load(body);
    var links = $('#productResults a');
    for(valor in links) {
      if(links[valor].attribs && links[valor].attribs.href && linklist.indexOf(links[valor].attribs.href) == -1){
        linklist.push(links[valor].attribs.href);
      }
    }
    var extended_links = [];
    linklist.forEach(function(link){
      extended_link = 'https://www.fromuthtennis.com/frm/' + link;
      extended_links.push(extended_link);
    })
    console.log(extended_links);
  })
};
This does work unless you go to the second page of items like this:
url='https://www.fromuthtennis.com/frm/c-10-mens-tops.aspx#Filter=[pagenum=2*ava=1]'
var http = require('http-get');
var request = require("request");
var cheerio = require("cheerio"); //etc...
As far as I know, this happens because the content on the page is loaded dynamically.
To get the contents of the page I believe I need to use PhantomJS, because that would allow me to get the HTML after the page has fully loaded, so I installed the phantomjs-node module. I want to use NodeJS to get the URL list because the rest of my code is written in it.
I've been reading a lot about PhantomJS, but using phantomjs-node is tricky and I still don't understand how I could get the URL list with it, because I'm very new to JavaScript and coding in general.
If someone could guide me a little, I'd appreciate it a lot.
Yes, you can. That page looks like it implements Google's AJAX crawling scheme.
Basically it allows websites to generate crawler-friendly content for Google. Whenever you see a URL like this:
https://www.fromuthtennis.com/frm/c-10-mens-tops.aspx#Filter=[pagenum=2*ava=1]
You need to convert it to this:
https://www.fromuthtennis.com/frm/c-10-mens-tops.aspx?_escaped_fragment_=Filter%3D%5Bpagenum%3D2*ava%3D1%5D
The conversion is simple: take the base path https://www.fromuthtennis.com/frm/c-10-mens-tops.aspx and add a query parameter _escaped_fragment_ whose value is the URL fragment Filter=[pagenum=2*ava=1] encoded as Filter%3D%5Bpagenum%3D2*ava%3D1%5D using standard URI encoding.
You can read the full specification here: https://developers.google.com/webmasters/ajax-crawling/docs/specification
Note: This does not apply to all websites, only to websites that implement Google's AJAX crawling scheme. But you're in luck in this case.
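Since the conversion is mechanical, you can script it. A small sketch follows, shown in Python; in Node, encodeURIComponent gives the same result, as it also leaves * unescaped:

from urllib.parse import quote

url = "https://www.fromuthtennis.com/frm/c-10-mens-tops.aspx#Filter=[pagenum=2*ava=1]"
base, fragment = url.split("#", 1)

# keep '*' literal so the output matches the form shown above
crawler_url = base + "?_escaped_fragment_=" + quote(fragment, safe="*")
print(crawler_url)
# https://www.fromuthtennis.com/frm/c-10-mens-tops.aspx?_escaped_fragment_=Filter%3D%5Bpagenum%3D2*ava%3D1%5D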
You can see any product you want without relying on dynamic content by using this URL:
https://www.fromuthtennis.com/frm/showproduct.aspx?ProductID={product_id}
For example to see product 37023:
https://www.fromuthtennis.com/frm/showproduct.aspx?ProductID=37023
All you have to do is loop over the IDs: for (var productid = 0; productid < 40000; productid++) { request... }.
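Sketched in Python here for brevity (the question is in Node, but the idea carries over); keep the ID range small and throttle the requests, since blindly walking 40,000 IDs will hammer the server:

import time
import requests
from bs4 import BeautifulSoup

for product_id in range(37023, 37026):  # a small, arbitrary range for illustration
    resp = requests.get(
        "https://www.fromuthtennis.com/frm/showproduct.aspx",
        params={"ProductID": product_id},
    )
    soup = BeautifulSoup(resp.text, "html.parser")
    h1 = soup.find("h1")
    print(product_id, h1.get_text(strip=True) if h1 else "no product found")
    time.sleep(1)  # be polite to the server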
Another approach is to use the phantom module (https://www.npmjs.com/package/phantom). It lets you run PhantomJS commands directly from your NodeJS app.

Scraping authenticated website in node.js

I want to scrape my college website (Moodle) with node.js, but I haven't found a headless browser able to do it. I have done it in Python in just 10 lines of code using RoboBrowser:
from robobrowser import RoboBrowser
url = "https://cas.upc.edu/login?service=https%3A%2F%2Fatenea.upc.edu%2Fmoodle%2Flogin%2Findex.php%3FauthCAS%3DCAS"
browser = RoboBrowser()
browser.open(url)
form = browser.get_form()
form['username'] = 'myUserName'
form['password'] = 'myPassword'
browser.submit_form(form)
browser.open("http://atenea.upc.edu/moodle/")
print browser.parsed
The problem is that the website requires authentication. Can you help me? Thanks!
PS: I think https://www.npmjs.com/package/form-scraper could be useful, but I can't get it working.
Assuming you want to read a 3rd party website, and 'scrape' particular pieces of information, you could use a library such as cheerio to achieve this in Node.
Cheerio is a "lean implementation of core jQuery designed specifically for the server". This means that given a String representation of a DOM (or part thereof), cheerio can traverse it in much the same way as jQuery can.
An example from Max Ogden shows how you can use the request module to grab HTML from a remote server and then pass it to cheerio:
var $ = require('cheerio')
var request = require('request')

function gotHTML(err, resp, html) {
  if (err) return console.error(err)
  var parsedHTML = $.load(html)

  // get all anchor tags and collect the links that point to .png images
  var imageURLs = []
  parsedHTML('a').map(function(i, link) {
    var href = $(link).attr('href')
    if (!href.match('.png')) return
    imageURLs.push(domain + href)
  })
}

var domain = 'http://substack.net/images/'
request(domain, gotHTML)
Selenium supports multiple languages, multiple platforms and multiple browsers.
