Python Web Scraping with Pagination in a Single Page Application - javascript

I am currently researching how to scrape web content using Python when pagination is driven by JavaScript in a single page application (SPA).
For example,
https://angular-8-pagination-example.stackblitz.io/
I googled and found that Scrapy alone cannot scrape JavaScript / SPA driven content.
It needs to be used with Splash. I am new to both Scrapy and Splash.
Is this correct?
Also, how do I call the JavaScript pagination method? When I inspect the element, it is just an anchor without an href or a JavaScript event.
Please advise.
Thank you,
Hatjhie

You need to use a SplashRequest to render the JS. You then need to get the pagination text. Generally I use re.search with an appropriate regex pattern to extract the relevant numbers. You can then assign them to a current-page variable and a total-pages variable.
Typically a website will move to the next page by incrementing ?page=x or ?p=x at the end of the URL. You can then increment this value to scrape all the relevant pages.
The overall pattern looks like this:
import re

import scrapy
from scrapy_splash import SplashRequest

from ..items import Item

proxy = 'http://your.proxy.com:PORT'
current_page_xpath = '//div[your x path selector]/text()'
last_page_xpath = '//div[your other x path selector]/text()'


class MySpider(scrapy.Spider):
    name = 'my_spider'
    allowed_domains = ['domain.com']
    start_urls = ['https://www.domaintoscrape.com/page=1']

    def start_requests(self):
        for url in self.start_urls:
            # SplashRequest so the JavaScript-driven page is rendered before parsing
            yield SplashRequest(url=url, callback=self.parse, meta={'proxy': proxy})

    def get_page_nbr(self, value):
        # you may need a more complex regex to get page numbers.
        # most of the time they are in the form "page X of Y"
        # google is your friend
        match = re.search(r'\d+', value)
        return match.group(0) if match else None

    def parse(self, response):
        # get the last and current page from the response
        last_page = self.get_page_nbr(response.xpath(last_page_xpath).get())
        current_page = self.get_page_nbr(response.xpath(current_page_xpath).get())
        # do something with your response
        # if the current page is less than the last page, make another request by incrementing the page in the URL
        if current_page and last_page and int(current_page) < int(last_page):
            ajax_url = response.url.replace(f'page={int(current_page)}', f'page={int(current_page) + 1}')
            yield SplashRequest(url=ajax_url, callback=self.parse, meta={'proxy': proxy})
        # optional
        if current_page == last_page:
            print(f'processed {last_page} items for {response.url}')
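As a side note, scrapy-splash only renders pages once it is wired into the project settings. A minimal sketch of settings.py, assuming a Splash instance is running locally on the default port (adjust SPLASH_URL to your setup):

# settings.py - scrapy-splash wiring (SPLASH_URL assumes a local Splash instance)
SPLASH_URL = 'http://localhost:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'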
Finally, it's worth having a look on YouTube, as there are a number of tutorials on scrapy_splash and pagination.

Related

Scrapy, scraping elements only visible during runtime

I am new to Scrapy, HTML and Javascript. I am trying to source a list of all branches and agents for our agency from the website. Most of the information I need can be extracted from an AJAX result: www.tysonprop.co.za/ajax/agents/?branch_id=[id]
The challenge is two fold:
The branch names displayed on the website (https://www.tysonprop.co.za/agents/) are contained within span elements that are not visible when viewing the page source, which means Scrapy cannot find the information. For example, "Tyson Properties Fourways Office" should in theory be located at xpath(//div[@id="select2-result-label-76"]/span[@class="select2-match"]/text()) (see the inspect-element screenshot: https://i.stack.imgur.com/1kjk8.png).
The AJAX call requires the branch id. I can't figure out how the page translates the branch name selected in the drop-down to a branch id, so I can't intercept that logic. I.e. how do I extract a list of branch names with their corresponding IDs?
I have done an extensive web search without much success. Any help would be appreciated.
import json

import scrapy

from ..items import Agent  # assumption: Agent is the item class defined in the project's items.py


class TysonSpider(scrapy.Spider):
    name = 'tyson_spider'

    def start_requests(self):
        url = 'https://www.tysonprop.co.za/ajax/agents/?branch_id=25'
        yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        json_data = json.loads(response.text)
        branch_id = json_data['branch']['id']
        branch_name = json_data['branch']['branch_name']
        branch_tel = json_data['branch']['get_dialable_telephone_number']
        # Loop through all of the agents
        agent_list = json_data['agents']
        for key in range(len(agent_list)):
            agent = Agent()
            agent['id'] = agent_list[key]['id']
            agent['branch_id'] = branch_id
            agent['branch_name'] = branch_name
            agent['branch_tel'] = branch_tel
            agent['privy_seal_url'] = agent_list[key]['privy_seal_url']
            yield agent
Related question: Scrapy xpath not extracting div containing special characters <%=
If you look into the page source, you can see the branch IDs and names are present in the HTML, located under the element with name="agent_search".
With the logic below you go through the different branches and get their id and name:
branches_xpath = response.xpath('//*[@name="agent_search"]//option')
for branch_xpath in branches_xpath[1:]:  # skip the first option as that one is empty
    branch_id = branch_xpath.xpath('./@value').get()
    branch_name = branch_xpath.xpath('./text()').get()
    print(f"branch_id: {branch_id}, branch_name: {branch_name}")
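From there, a possible follow-up (a sketch only, assuming the spider starts from the agents page https://www.tysonprop.co.za/agents/; parse_branch is a hypothetical callback name) is to feed each extracted branch_id into the AJAX endpoint mentioned in the question:

# sketch: for every branch option found above, request the AJAX endpoint from the question
def parse(self, response):
    for option in response.xpath('//*[@name="agent_search"]//option')[1:]:
        branch_id = option.xpath('./@value').get()
        url = f'https://www.tysonprop.co.za/ajax/agents/?branch_id={branch_id}'
        yield scrapy.Request(url=url, callback=self.parse_branch)

def parse_branch(self, response):
    # hypothetical callback: handle the JSON exactly as in the question's parse() method
    json_data = json.loads(response.text)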

Python Flask data feed from Pandas Dataframe, dynamically define with unique endpoint

Hi, I am building a web app with Flask (Python). I have a problem here:
@app.route('/analytics/signals/<ticker_url>')
def analytics_signals_com_page(ticker_url):
    all_ticker = full_list
    ticker_name = com_name
    ticker = ticker_url.upper()
    pricerec = sp500[ticker_url.upper()].tolist()
    timerec = sp500[ticker_url.upper()].index.tolist()
    return render_template('company.html', all_ticker=all_ticker, ticker_name=ticker_name,
                           ticker=ticker, pricerec=pricerec, timerec=timerec)
Here I am defining company pages based on the ticker_url, so each page will contain different content. Everything is fine up to ticker = ticker_url.upper(); it works perfectly. But pricerec and timerec cause problems.
sp500 is a pandas DataFrame whose columns are companies like "AAPL", "GOOG", "MSFT", and so forth (505 companies), whose index is timestamps, and whose values are the prices at each time.
So for pricerec, I take the ticker_url and use it to get the specific company's prices as a list. For timerec, I take the index (timestamps) as a list. I then pass these two variables into the company.html page.
But it causes an internal server error, and I do not know why.
My expectation was that when a user clicks a button whose href points to "~/analytics/signals/aapl", the company.html page would contain pricerec and timerec for me to draw a graph. But it didn't work like that; it causes an internal server error. I defined those two variables in the JavaScript as well, like I did for the other variables (all_ticker, ticker_name, and ticker).
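For reference, a minimal sketch of the lookup this route performs, with an illustrative guard added for tickers that are not columns of sp500 (a missing column raises a KeyError, which surfaces as an internal server error); the guard is an addition for illustration, not part of the original route:

from flask import abort

@app.route('/analytics/signals/<ticker_url>')
def analytics_signals_com_page(ticker_url):
    ticker = ticker_url.upper()
    if ticker not in sp500.columns:  # guard: an unknown ticker would otherwise raise KeyError
        abort(404)
    pricerec = sp500[ticker].tolist()        # the company's price column as a plain list
    timerec = sp500[ticker].index.tolist()   # the shared timestamp index as a plain list
    return render_template('company.html', ticker=ticker, pricerec=pricerec, timerec=timerec)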
Can anyone help me with this issue?
Thanks!

scrapy-splash usage for rendering javascript

This is a follow-up to my previous question.
I installed splash and scrapy-splash.
And also followed the instructions for scrapy-splash.
I edited my code as follows:
import scrapy
from scrapy_splash import SplashRequest


class CityDataSpider(scrapy.Spider):
    name = "citydata"

    def start_requests(self):
        urls = [
            'http://www.city-data.com/advanced/search.php#body?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=0',
            'http://www.city-data.com/advanced/search.php#body?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=1',
        ]
        for url in urls:
            yield SplashRequest(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'citydata-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
But I still get the same output: only one HTML file is generated, and the result is only for http://www.city-data.com/advanced/search.php
Is there anything wrong in the code, or do you have any other suggestions, please?
First off, I wanted to clear up some possible points of confusion from your last question, about which @paul trmbrth wrote:
The URL fragment (i.e. everything including and after #body) is not sent to the server and only http://www.city-data.com/advanced/search.php is fetched
So for Scrapy, the requests to [...] and [...] are the same resource, so it's only fetched once. They differ only in their URL fragments.
URI standards dictate that the number sign (#) be used to indicate the start of the fragment, which is the last part of the URL. In most (if not all) browsers, nothing beyond the "#" is transmitted. However, it's fairly common for AJAX sites to use JavaScript's window.location.hash to grab the URL fragment and use it to execute additional AJAX calls. I bring this up because city-data.com does exactly this, which may confuse you, as it does in fact bring back two different pages for each of those URLs in a browser.
Scrapy by default drops the URL fragment, so it will report both URLs as being just "http://www.city-data.com/advanced/search.php", and filter out the second one as a duplicate.
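You can verify that duplicate filtering directly; a quick check using Scrapy's request fingerprinting, which ignores fragments (the query strings are shortened here for readability):

# both fragment URLs reduce to the same fingerprint, so the dupefilter drops the second request
from scrapy import Request
from scrapy.utils.request import request_fingerprint

a = Request('http://www.city-data.com/advanced/search.php#body?ps=20&p=0')
b = Request('http://www.city-data.com/advanced/search.php#body?ps=20&p=1')
print(request_fingerprint(a) == request_fingerprint(b))  # True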
With all of that out of the way, there will still be a problem after you remove "#body" from the URLs, caused by the combination of page = response.url.split("/")[-2] and filename = 'citydata-%s.html' % page. Neither of your URLs redirects, so the URL provided is what will populate the response.url string.
Isolating that, we get the following:
>>> urls = [
...     'http://www.city-data.com/advanced/search.php?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=0',
...     'http://www.city-data.com/advanced/search.php?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=1',
... ]
>>> for url in urls:
...     print(url.split("/")[-2])
...
advanced
advanced
So, for both URLs, you're extracting the same piece of information, which means when you use filename = 'citydata-%s.html' % page you're going to get the same filename, which I assume would be 'citydata-advanced.html'. The second time it's called, you're overwriting the first file.
Depending on what you're doing with the data, you could either change this to append to the file, or modify your filename variable to something unique such as:
from urllib.parse import urlparse, parse_qs  # on Python 2 this was: from urlparse import urlparse, parse_qs

import scrapy
from scrapy_splash import SplashRequest


class CityDataSpider(scrapy.Spider):
    [...]

    def parse(self, response):
        # parse_qs returns a list per parameter, e.g. {'p': ['0']}, so take the first element
        page = parse_qs(urlparse(response.url).query).get('p', [''])[0]
        filename = 'citydata-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
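To illustrate what that parse_qs line produces (query string shortened for readability):

>>> from urllib.parse import urlparse, parse_qs
>>> url = 'http://www.city-data.com/advanced/search.php?ps=20&p=1'
>>> parse_qs(urlparse(url).query)
{'ps': ['20'], 'p': ['1']}
>>> parse_qs(urlparse(url).query).get('p', [''])[0]
'1'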

Scrapy - making selections from dropdown(e.g.date) on webpage

I'm new to Scrapy and Python, and I am trying to scrape data off the following start URL.
After login, this is my start URL:
start_urls = ["http://www.flightstats.com/go/HistoricalFlightStatus/flightStatusByFlight.do?"]
(a) From there I need to interact with the webpage to select ---by-airport---
and then make the ---airport, date, time period--- selection.
How can I do that? I would like to loop over all time periods and past dates.
I have used Firebug to see the source; I cannot show it here as I do not have enough points to post images.
I read a post mentioning the use of Splinter.
(b) After the selections it will lead me to a page where there are links to the eventual page with the information I want. How do I populate the links and make Scrapy look into every one to extract the information?
- Using rules? Where should I insert the rules / LinkExtractor function?
I am willing to try this myself; I hope someone can point me to posts that can guide me. I am a student and I have spent more than a week on this. I have done the Scrapy tutorial and the Python tutorial, read the Scrapy documentation, and searched previous posts on Stack Overflow, but I did not manage to find anything that covers this.
A million thanks.
My code so far to log in, plus the items to scrape via XPath from the eventual target site:
import scrapy
from scrapy.http import Request, FormRequest
from tutorial.items import FlightItem


class flightSpider(scrapy.Spider):
    name = "flight"
    allowed_domains = ["flightstats.com"]
    login_page = 'https://www.flightstats.com/go/Login/login_input.do;jsessionid=0DD6083A334AADE3FD6923ACB8DDCAA2.web1:8009?'
    start_urls = [
        "http://www.flightstats.com/go/HistoricalFlightStatus/flightStatusByFlight.do?"]

    def init_request(self):
        # """This function is called before crawling starts."""
        return Request(url=self.login_page, callback=self.login)

    def login(self, response):
        # """Generate a login request."""
        return FormRequest.from_response(
            response,
            formdata={'loginForm_email': 'marvxxxxxx@hotmail.com', 'password': 'xxxxxxxx'},
            callback=self.check_login_response)

    def check_login_response(self, response):
        # """Check the response returned by a login request to see if we are successfully logged in."""
        if "Sign Out" in response.body:
            self.log("\n\n\nSuccessfully logged in. Let's start crawling!\n\n\n")
            # Now the crawling can begin..
            return self.initialized()  # ****THIS LINE FIXED THE LAST PROBLEM*****
        else:
            self.log("\n\n\nFailed, Bad times :(\n\n\n")
            # Something went wrong, we couldn't log in, so nothing happens.

    def parse(self, response):
        for sel in response.xpath('/html/body/div[2]/div[2]/div'):
            item = FlightItem()
            item['flight_number'] = sel.xpath('/div[1]/div[1]/h2').extract()
            item['aircraft_make'] = sel.xpath('/div[4]/div[2]/div[2]/div[2]').extract()
            item['dep_date'] = sel.xpath('/div[2]/div[1]/div').extract()
            item['dep_airport'] = sel.xpath('/div[1]/div[2]/div[2]/div[1]').extract()
            item['arr_airport'] = sel.xpath('/div[1]/div[2]/div[2]/div[2]').extract()
            item['dep_gate_scheduled'] = sel.xpath('/div[2]/div[2]/div[1]/div[2]/div[2]').extract()
            item['dep_gate_actual'] = sel.xpath('/div[2]/div[2]/div[1]/div[3]/div[2]').extract()
            item['dep_runway_actual'] = sel.xpath('/div[2]/div[2]/div[2]/div[3]/div[2]').extract()
            item['dep_terminal'] = sel.xpath('/div[2]/div[2]/div[3]/div[2]/div[1]').extract()
            item['dep_gate'] = sel.xpath('/div[2]/div[2]/div[3]/div[2]/div[2]').extract()
            item['arr_gate_scheduled'] = sel.xpath('/div[3]/div[2]/div[1]/div[2]/div[2]').extract()
            item['arr_gate_actual'] = sel.xpath('/div[3]/div[2]/div[1]/div[3]/div[2]').extract()
            item['arr_terminal'] = sel.xpath('/div[3]/div[2]/div[3]/div[2]/div[1]').extract()
            item['arr_gate'] = sel.xpath('/div[3]/div[2]/div[3]/div[2]/div[2]').extract()
            yield item

Scrapy: Modify rules for scraping web page

I've started to use Scrapy for a project of mine to scrape data off a tennis website. Here is an example page that I want to scrape data from. As you can see, I want to scrape data for a tennis player. I need to recursively go through the entire page and gather 'Match Stats' (there's a link titled 'Match Stats' next to every match) for a player's matches. I've already written code to parse data from the opened match stats popup. All I need to do now is open these match stats pages through the initial spider.
In all the examples I've read up on, we can write rules to navigate Scrapy to the different URLs that need scraping. In my case, I just want to write a rule for the different match stats links. However, on the page I want to scrape, the 'Match Stats' links are in the following format: javascript:makePopup('match_stats_popup.php?matchID=183704502'). As I've read online (I might be wrong!), Scrapy can't deal with JavaScript and hence can't 'click' on that link. However, since the links are JavaScript popups, it's possible to add the match_stats_popup.php?matchID=183704502 part of the link to the main URL to get a standard HTML page:
http://www.tennisinsight.com/match_stats_popup.php?matchID=183704502
I am hoping I can modify the rules before scraping. In summary, I just want to find the links that are of the form javascript:makePopup('match_stats_popup.php?matchID=183704502') and modify them so that they become http://www.tennisinsight.com/match_stats_popup.php?matchID=183704502
This is what I've written in the rules so far, which doesn't open any pages:
rules = (
    Rule(SgmlLinkExtractor(allow='/match_stats_popup.php?matchID=\d+'),
         'parse_match', follow=True,
    ),
)
parse_match is the method which parses data from the opened match stats popup.
Hope my problem is clear enough!
Using BaseSgmlLinkExtractor or SgmlLinkExtractor you can specify both the tag(s) from which to extract links and a process_value function used for transforming the extracted link. There is a nice example in the official documentation. Here is the code for your example:
import re

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector

from ..items import TennisItem  # assumption: TennisItem is the item class in the project's items.py


def getPopLink(value):
    # pull the relative popup URL out of javascript:makePopup('...')
    m = re.search(r"javascript:makePopup\('(.+?)'\)", value)
    if m:
        return m.group(1)


class GetStatsSpider(CrawlSpider):
    name = 'GetStats'
    allowed_domains = ['tennisinsight.com']
    start_urls = ['http://www.tennisinsight.com/player_activity.php?player_id=1']

    rules = (
        Rule(SgmlLinkExtractor(allow=r"match_stats_popup.php\?matchID=\d+",
                               restrict_xpaths='//td[@class="matchStyle"]',
                               tags='a', attrs='href', process_value=getPopLink),
             callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        sel = Selector(response)
        i = TennisItem()
        i['url_stats'] = response.url
        return i
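To make the process_value step concrete, here is what getPopLink returns for one of the links from the question; the link extractor then resolves that relative URL against the page it was found on, giving http://www.tennisinsight.com/match_stats_popup.php?matchID=183704502:

>>> getPopLink("javascript:makePopup('match_stats_popup.php?matchID=183704502')")
'match_stats_popup.php?matchID=183704502'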
