This is a follow-up to my previous question.
I installed Splash and scrapy-splash, and also followed the setup instructions for scrapy-splash.
I edited my code as follows:
import scrapy
from scrapy_splash import SplashRequest

class CityDataSpider(scrapy.Spider):
    name = "citydata"

    def start_requests(self):
        urls = [
            'http://www.city-data.com/advanced/search.php#body?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=0',
            'http://www.city-data.com/advanced/search.php#body?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=1',
        ]
        for url in urls:
            yield SplashRequest(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'citydata-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
But I still get the same output: only one HTML file is generated, and the result covers only http://www.city-data.com/advanced/search.php
Is there anything wrong in the code, or do you have any other suggestions?
First off, I wanted to clear up some possible points of confusion about what @paul trmbrth wrote on your last question:
The URL fragment (i.e. everything including and after #body) is not sent to the server and only http://www.city-data.com/advanced/search.php is fetched
So for Scrapy, the requests to [...] and [...] are the same resource, so it's only fetched once. They differ only in their URL fragments.
URI standards dictate that the number sign (#) be used to indicate the start of the fragment, which is the last part of the URL. In most, if not all, browsers, nothing beyond the "#" is transmitted to the server. However, it's fairly common for AJAX sites to use JavaScript's window.location.hash to grab the URL fragment and use it to execute additional AJAX calls. I bring this up because city-data.com does exactly this, which may be confusing, since in a browser those two URLs really do bring back two different pages.
By default, Scrapy drops the URL fragment, so it will report both URLs as being just "http://www.city-data.com/advanced/search.php", and its duplicate filter will discard the second one.
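To see why the two requests look identical to Scrapy, you can strip the fragments yourself with the standard library. This is only an illustration of the fragment rules, not Scrapy's actual duplicate-filter code:

from urllib.parse import urldefrag

# shortened versions of your two URLs
urls = [
    'http://www.city-data.com/advanced/search.php#body?ps=20&p=0',
    'http://www.city-data.com/advanced/search.php#body?ps=20&p=1',
]

# everything from the first "#" onwards is the fragment, so both URLs
# collapse to the same resource once the fragment is removed
for url in urls:
    print(urldefrag(url).url)

# http://www.city-data.com/advanced/search.php
# http://www.city-data.com/advanced/search.php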
With all of that out of the way, there will still be a problem after you remove "#body" from the URLs, caused by the combination of page = response.url.split("/")[-2] and filename = 'citydata-%s.html' % page. Neither of your URLs redirects, so the URL you provided is what will populate the response.url string.
Isolating that, we get the following:
>>> urls = [
...     'http://www.city-data.com/advanced/search.php?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=0',
...     'http://www.city-data.com/advanced/search.php?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=1',
... ]
>>> for url in urls:
...     print(url.split("/")[-2])
...
advanced
advanced
So, for both URLs you're extracting the same piece of information, which means that when you use filename = 'citydata-%s.html' % page you get the same filename both times, which I assume would be 'citydata-advanced.html'. The second time parse is called, you overwrite the first file.
Depending on what you're doing with the data, you could either change this to append to the file, or modify your filename variable to something unique such as:
from urllib.parse import urlparse, parse_qs  # on Python 2: from urlparse import urlparse, parse_qs

import scrapy
from scrapy_splash import SplashRequest

class CityDataSpider(scrapy.Spider):
    [...]

    def parse(self, response):
        # parse_qs returns a list per parameter (e.g. {'p': ['0']}),
        # so take the first value to build a unique filename per page
        page = parse_qs(urlparse(response.url).query).get('p', [''])[0]
        filename = 'citydata-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
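For reference, this is what parse_qs gives you for these URLs (each value comes back as a list, which is why the code above takes the first element); the URL here is a shortened version of yours:

>>> from urllib.parse import urlparse, parse_qs
>>> url = 'http://www.city-data.com/advanced/search.php?ps=20&p=1'
>>> parse_qs(urlparse(url).query)
{'ps': ['20'], 'p': ['1']}
>>> parse_qs(urlparse(url).query).get('p', [''])[0]
'1'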
I am currently researching how to scrape, using Python, web content whose pagination is driven by JavaScript in a single-page application (SPA).
For example,
https://angular-8-pagination-example.stackblitz.io/
I googled and found that Scrapy on its own cannot scrape JavaScript/SPA-driven content; it needs to be used with Splash. I am new to both Scrapy and Splash.
Is this correct?
Also, how do I call the JavaScript pagination method? When I inspect the element, it's just an anchor with no href, only a JavaScript event.
Please advise.
Thank you,
Hatjhie
You need to use a SplashRequest to render the JS. You then need to get the pagination text. Generally I use re.search with an appropriate regex pattern to extract the relevant numbers. You can then assign them to a current-page variable and a total-pages variable.
Typically a website will move to the next page by incrementing ?page=x or ?p=x at the end of the URL. You can then increment this value to scrape all the relevant pages.
The overall pattern looks like this:
import scrapy
from scrapy_splash import SplashRequest
import re
from ..items import Item

proxy = 'http://your.proxy.com:PORT'
current_page_xpath = '//div[your x path selector]/text()'
last_page_xpath = '//div[your other x path selector]/text()'

class spider(scrapy.Spider):
    name = 'my_spider'
    allowed_domains = ['domain.com']
    start_urls = ['https://www.domaintoscrape.com/page=1']

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url=url, callback=self.parse, meta={'proxy': proxy})

    def get_page_nbr(self, value):
        # you may need a more complex regex to get the page numbers;
        # most of the time they are in the form "page X of Y" -
        # Google is your friend
        match = re.search(r'\d+', value)
        return match[0] if match else None

    def parse(self, response):
        # get the last and current page numbers from the response
        last_page = self.get_page_nbr(response.xpath(last_page_xpath).get())
        current_page = self.get_page_nbr(response.xpath(current_page_xpath).get())

        # do something with your response

        # if the current page is less than the last page, make another request
        # by incrementing the page number in the URL
        if current_page and last_page and int(current_page) < int(last_page):
            ajax_url = response.url.replace(f'page={int(current_page)}', f'page={int(current_page) + 1}')
            yield scrapy.Request(url=ajax_url, callback=self.parse, meta={'proxy': proxy})

        # optional
        if current_page == last_page:
            print(f'processed {last_page} items for {response.url}')
Finally, it's worth having a look on YouTube, as there are a number of tutorials on scrapy_splash and pagination.
I'm referencing this URL: https://tracker.icon.foundation/block/29562412
If you scroll down to "Transactions", it shows 2 transactions with separate links; those are essentially what I'm trying to grab. A simple pd.read_csv(url) call clearly omits the data I'm looking for, so I thought it might be JavaScript-based and tried the following code instead:
from requests_html import HTMLSession
session = HTMLSession()
r = session.get('https://tracker.icon.foundation/block/29562412')
r.html.links
r.html.absolute_links
and I get the result "set()"
even though I was expecting the following:
['https://tracker.icon.foundation/transaction/0x9e5927c83efaa654008667d15b0a223f806c25d4c31688c5fdf34936a075d632', 'https://tracker.icon.foundation/transaction/0xd64f88fe865e756ac805ca87129bc287e450bb156af4a256fa54426b0e0e6a3e']
Is JavaScript even the right approach? I tried BeautifulSoup instead and had no luck on that end either.
You're right. This page is populated asynchronously using JavaScript, so BeautifulSoup and similar tools won't be able to see the specific content you're trying to scrape.
However, if you log your browser's network traffic, you can see some (XHR) HTTP GET requests being made to a REST API, which serves its results in JSON. This JSON happens to contain the information you're looking for. It actually makes several such requests to various API endpoints, but the one we're interested in is called txList (short for "transaction list" I'm guessing):
def main():

    import requests

    url = "https://tracker.icon.foundation/v3/block/txList"

    params = {
        "height": "29562412",
        "page": "1",
        "count": "10"
    }

    response = requests.get(url, params=params)
    response.raise_for_status()

    base_url = "https://tracker.icon.foundation/transaction/"
    for transaction in response.json()["data"]:
        print(base_url + transaction["txHash"])

    return 0


if __name__ == "__main__":
    import sys
    sys.exit(main())
Output:
https://tracker.icon.foundation/transaction/0x9e5927c83efaa654008667d15b0a223f806c25d4c31688c5fdf34936a075d632
https://tracker.icon.foundation/transaction/0xd64f88fe865e756ac805ca87129bc287e450bb156af4a256fa54426b0e0e6a3e
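This block only has two transactions, so a single request is enough. As a rough sketch, if a block had more transactions than fit in one response, you could presumably either raise the count parameter or keep incrementing page until the API stops returning data (that paging behaviour is an assumption about the endpoint, not something I've confirmed):

import requests

url = "https://tracker.icon.foundation/v3/block/txList"
base_url = "https://tracker.icon.foundation/transaction/"

page = 1
while True:
    params = {"height": "29562412", "page": str(page), "count": "10"}
    response = requests.get(url, params=params)
    response.raise_for_status()
    transactions = response.json().get("data") or []
    if not transactions:
        # assume an empty "data" list means we've run out of pages
        break
    for transaction in transactions:
        print(base_url + transaction["txHash"])
    page += 1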
I want to parse the information in the following URL: the name of the trade, the strategy description, and the transactions under "Trading History" and "Open Positions". When I parse the page, I do not get this data.
I am new to parsing JavaScript-rendered web pages, so I would appreciate some explanation of why my code below isn't working.
import bs4 as bs
import urllib
import dryscrape
import sys
import time
url = 'https://www.zulutrade.com/trader/314062/trading'
sess = dryscrape.Session()
sess.visit(url)
time.sleep(10)
sauce = sess.body()
soup = bs.BeautifulSoup(sauce, 'lxml')
Thanks!
The link in your code doesn't get you anything, because the actual URL you should be working with is the one I'm pasting below. The one you tried to use automatically gets redirected to it:
https://www.zulutrade.com/zulutrade-client/traders/api/providers/314062/tradeHistory?
Scraping the JSON data behind the table on that page goes like this:
import requests

r = requests.get('https://www.zulutrade.com/zulutrade-client/traders/api/providers/314062/tradeHistory?')
j = r.json()
items = j['content']
for item in items:
    print(item['currency'], item['pips'], item['tradeType'], item['transactionCurrency'], item['id'])
I'm trying to scrape the historical NAVPS tables found on this page:
http://www.philequity.net/pefi_historicalnavps.php
All the code here is from my minimal working script. It starts with:
import urllib
import urllib2
from BeautifulSoup import BeautifulSoup
opener = urllib2.build_opener()
urllib2.install_opener(opener)
After studying the web page using Chrome's Inspect Element, I find that the Form Data sent are the following:
form_data = {}
form_data['mutualFund'] = '1'
form_data['year'] = '1995'
form_data['dmonth'] = 'Month'
form_data['dday'] = 'Day'
form_data['dyear'] = 'Year'
So I continue building up the request:
url = "http://www.philequity.net/pefi_historicalnavps.php"
params = urllib.urlencode(form_data)
request = urllib2.Request(url, params)
I expect this to be the equivalent of clicking "Get NAVPS" after filling in the form:
page = urllib2.urlopen(request)
Then I read it with BeautifulSoup:
soup = BeautifulSoup(page.read())
print soup.prettify()
But alas! I only get the web page as though I didn't click "Get NAVPS" :( Am I missing something? Is the server sending the table in a separate stream? How do I get to it?
When I look at the POST request in Firebug, I see one more parameter that you aren't passing: "type", set to "Year". I don't know if this alone will get you the data; there are any number of other reasons the server might not serve it to you.
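For what it's worth, here is a minimal sketch of your script with that one parameter added (untested; whether the server then returns the table is not something I can promise):

import urllib
import urllib2
from BeautifulSoup import BeautifulSoup

url = "http://www.philequity.net/pefi_historicalnavps.php"

form_data = {
    'mutualFund': '1',
    'year': '1995',
    'dmonth': 'Month',
    'dday': 'Day',
    'dyear': 'Year',
    'type': 'Year',  # the extra parameter seen in the Firebug POST
}

request = urllib2.Request(url, urllib.urlencode(form_data))
page = urllib2.urlopen(request)

soup = BeautifulSoup(page.read())
print soup.prettify()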
I'm trying to get more than just the stock information for the current time period, and I can't figure out whether Google Finance allows retrieving information for more than one date. For example, if I wanted to find the Google stock value over the last 30 days and return that data as a list... how would I go about doing this?
Using the code below only gets me a single value:
import urllib2
import json

class GoogleFinanceAPI:
    def __init__(self):
        self.prefix = "http://finance.google.com/finance/info?client=ig&q="

    def get(self, symbol, exchange):
        url = self.prefix + "%s:%s" % (exchange, symbol)
        u = urllib2.urlopen(url)
        content = u.read()
        obj = json.loads(content[3:])
        return obj[0]
c = GoogleFinanceAPI()
quote = c.get("MSFT","NASDAQ")
print quote
Here is a recipe to get historical values from Google Finance:
http://code.activestate.com/recipes/576495-get-a-stock-historical-value-from-google-finance/
It looks like it returns the data in .csv format.
Edit: Here is your script modified to get the .csv. It works for me.
import urllib2
import csv

class GoogleFinanceAPI:
    def __init__(self):
        self.url = "http://finance.google.com/finance/historical?client=ig&q={0}:{1}&output=csv"

    def get(self, symbol, exchange):
        page = urllib2.urlopen(self.url.format(exchange, symbol))
        content = page.readlines()
        page.close()
        reader = csv.reader(content)
        for row in reader:
            print row

c = GoogleFinanceAPI()
c.get("MSFT", "NASDAQ")
The best way forward is to use the APIs provided by Google. Specifically, look for the returns parameter, where you specify how long a period you want.
Alternatively, if you want to do it via Python, work out the query pattern, i.e. where the date entry goes in the URL, substitute your dates into it, do a GET, parse the result, and add it to your result list.
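As a rough sketch of that second approach, assuming the historical CSV endpoint from the answer above accepts startdate and enddate query parameters with human-readable dates (both of which are assumptions about the old Google Finance URL scheme, not verified here):

import urllib
import urllib2
import csv

# the CSV endpoint shown above, with assumed date parameters appended
BASE = "http://finance.google.com/finance/historical?client=ig&output=csv"

def get_history(symbol, exchange, startdate, enddate):
    query = urllib.urlencode({
        'q': '%s:%s' % (exchange, symbol),
        'startdate': startdate,  # assumed parameter name
        'enddate': enddate,      # assumed parameter name
    })
    page = urllib2.urlopen(BASE + '&' + query)
    rows = list(csv.reader(page.readlines()))
    page.close()
    return rows  # first row is the CSV header, the rest are daily values

for row in get_history("MSFT", "NASDAQ", "Jan 1, 2013", "Jan 31, 2013"):
    print row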