Python: JavaScript-rendered webpage not parsing

I want to parse the information at the following URL: the name of the trade, the strategy description, and the transactions under "Trading History" and "Open Positions". When I parse the page, I do not get this data.
I am new to parsing JavaScript-rendered webpages, so I would appreciate an explanation of why my code below isn't working.
import bs4 as bs
import urllib
import dryscrape
import sys
import time
url = 'https://www.zulutrade.com/trader/314062/trading'
sess = dryscrape.Session()
sess.visit(url)
time.sleep(10)  # give the page's JavaScript time to render
sauce = sess.body()
soup = bs.BeautifulSoup(sauce, 'lxml')
Thanks!

The link in your code won't get you anything, because the real URL you should be working with is the one I'm pasting below. The one you tried automatically gets redirected to it:
https://www.zulutrade.com/zulutrade-client/traders/api/providers/314062/tradeHistory?
You can scrape the JSON data out of that page's table like this:
import requests
r = requests.get('https://www.zulutrade.com/zulutrade-client/traders/api/providers/314062/tradeHistory?')
j = r.json()
items = j['content']
for item in items:
    print(item['currency'], item['pips'], item['tradeType'], item['transactionCurrency'], item['id'])
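If you need more than what one response returns, the top-level 'content' key suggests the API pages its results. Here is a minimal sketch assuming the endpoint accepts 'page' and 'size' query parameters; the parameter names are a guess, so confirm them against the XHR calls in your browser's network tab:
import requests
url = 'https://www.zulutrade.com/zulutrade-client/traders/api/providers/314062/tradeHistory'
page = 0
while True:
    # assumed paging parameters -- verify against the real XHR requests
    r = requests.get(url, params={'page': page, 'size': 100})
    r.raise_for_status()
    items = r.json().get('content', [])
    if not items:  # assumed: an empty page marks the end
        break
    for item in items:
        print(item['id'], item['currency'], item['pips'])
    page += 1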

Related

Webscraping Blockchain data seemingly embedded in Javascript through Python, is this even the right approach?

I'm referencing this url: https://tracker.icon.foundation/block/29562412
If you scroll down to "Transactions", it shows two transactions with separate links; those links are essentially what I'm trying to grab. If I try a simple pd.read_csv(url) command, it clearly omits the data I'm looking for, so I thought the content might be JavaScript-rendered and tried the following code instead:
from requests_html import HTMLSession
session = HTMLSession()
r = session.get('https://tracker.icon.foundation/block/29562412')
r.html.links
r.html.absolute_links
and I get the result "set()"
even though I was expecting the following:
['https://tracker.icon.foundation/transaction/0x9e5927c83efaa654008667d15b0a223f806c25d4c31688c5fdf34936a075d632', 'https://tracker.icon.foundation/transaction/0xd64f88fe865e756ac805ca87129bc287e450bb156af4a256fa54426b0e0e6a3e']
Is JavaScript even the right angle here? I tried BeautifulSoup as well and had no luck on that end either.
You're right. This page is populated asynchronously using JavaScript, so BeautifulSoup and similar tools won't be able to see the specific content you're trying to scrape.
However, if you log your browser's network traffic, you can see some (XHR) HTTP GET requests being made to a REST API, which serves its results in JSON. This JSON happens to contain the information you're looking for. It actually makes several such requests to various API endpoints, but the one we're interested in is called txList (short for "transaction list" I'm guessing):
def main():
    import requests

    url = "https://tracker.icon.foundation/v3/block/txList"
    params = {
        "height": "29562412",
        "page": "1",
        "count": "10"
    }

    response = requests.get(url, params=params)
    response.raise_for_status()

    base_url = "https://tracker.icon.foundation/transaction/"
    for transaction in response.json()["data"]:
        print(base_url + transaction["txHash"])

    return 0

if __name__ == "__main__":
    import sys
    sys.exit(main())
Output:
https://tracker.icon.foundation/transaction/0x9e5927c83efaa654008667d15b0a223f806c25d4c31688c5fdf34936a075d632
https://tracker.icon.foundation/transaction/0xd64f88fe865e756ac805ca87129bc287e450bb156af4a256fa54426b0e0e6a3e
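Since the endpoint already takes page and count parameters, a block with more transactions than one page holds could be walked page by page. A minimal sketch, assuming the API signals the last page by returning an empty data array (an assumption I haven't verified):
import requests

def fetch_all_transactions(height, count=10):
    # Yield transaction hashes, walking pages until the API returns no data
    url = "https://tracker.icon.foundation/v3/block/txList"
    page = 1
    while True:
        response = requests.get(url, params={"height": height, "page": page, "count": count})
        response.raise_for_status()
        data = response.json().get("data") or []  # assumed: empty/missing data marks the end
        if not data:
            break
        for transaction in data:
            yield transaction["txHash"]
        page += 1

for tx_hash in fetch_all_transactions("29562412"):
    print("https://tracker.icon.foundation/transaction/" + tx_hash)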

Unable to export particular data from a .json file from a website

I'm using the following to parse data from a website:
import requests
import pandas as pd
resp = requests.get("https://thisiscriminal.com/wp-json/criminal/v1/episodes?posts=1000000&page=1").json()
df = pd.DataFrame(resp['posts'], columns=['episodeNumber','slug','image','excerpt','audioSource'])
df.to_csv("output9.csv", encoding='utf-8', index='false')
data = pd.read_csv("output9.csv")
As you can see, I've had to pull the entire 'excerpt' column, which pulls all three variants instead of just one. How would I go about pulling, say, just the 'short' one? And what is the proper name for such a nested heading, if not 'column'? Also, 'title' doesn't seem to sit under any header at all; how would I pull that too?
A quick visual of the .json is here if it helps:
https://www.dropbox.com/s/v9l81ber6i4nbgw/11111111.jpg?dl=0
Any help would be greatly appreciated.
The workaround I can think of is to normalize the resp['posts'] JSON and not specify the columns. Below is the code to generate that dataframe:
import requests
import pandas as pd
from pandas.io.json import json_normalize  # in newer pandas: pd.json_normalize
resp = requests.get("https://thisiscriminal.com/wp-json/criminal/v1/episodes?posts=1000000&page=1").json()
# print(resp['posts'][0])
df = pd.DataFrame(json_normalize(resp['posts']))
df.to_csv("output2_9.csv", encoding='utf-8', index='false')
Now, once you have this dataframe, you can filter whichever columns you want. It has every field of the JSON, with column names such as:
audioSource, content, date, episodeNumber, excerpt.full, excerpt.long, excerpt.short, id, image.full, image.large, image.medium, image.thumb, musicCredits, next, next.slug, next.title, permalink, prev, prev.slug, prev.title, slug, title
The title header is also present in this dataframe.
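For example, to keep just the fields the original question asked about once the frame is flattened (column names taken from the list above):
# pick the flattened 'excerpt.short' and the top-level 'title' alongside the rest
subset = df[['episodeNumber', 'slug', 'title', 'excerpt.short', 'audioSource']]
subset.to_csv("output_short.csv", encoding='utf-8', index=False)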
I took the excerpt series, called apply on it, and selected the 'short' series that apply created. You might have to handle some extra double quotes; consider the following code:
import requests
import pandas as pd
resp = requests.get("https://thisiscriminal.com/wp-json/criminal/v1/episodes?posts=1000000&page=1").json()
df = pd.DataFrame(resp['posts'], columns=['episodeNumber','slug','image','excerpt','audioSource'])
df['excerpt'] = df['excerpt'].apply(pd.Series)['short']#.replace({'"': '\'','""': '\'','"""': '\'' }, regex=True)
df.to_csv("output9.csv", encoding='utf-8', index='false')
data = pd.read_csv("output9.csv")

scrapy-splash usage for rendering javascript

This is a follow-up to my previous question.
I installed splash and scrapy-splash, and also followed the instructions for scrapy-splash.
I edited my code as follows:
import scrapy
from scrapy_splash import SplashRequest

class CityDataSpider(scrapy.Spider):
    name = "citydata"

    def start_requests(self):
        urls = [
            'http://www.city-data.com/advanced/search.php#body?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=0',
            'http://www.city-data.com/advanced/search.php#body?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=1',
        ]
        for url in urls:
            yield SplashRequest(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'citydata-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
But I still get the same output: only one HTML file is generated, and the result is only for http://www.city-data.com/advanced/search.php.
Is there anything wrong in the code, or do you have any other suggestions?
First off, I wanted to clear up some possible points of confusion from your last question, where @paul trmbrth wrote:
The URL fragment (i.e. everything including and after #body) is not sent to the server and only http://www.city-data.com/advanced/search.php is fetched
So for Scrapy, the requests to [...] and [...] are the same resource, so it's only fetched once. They differ only in their URL fragments.
URI standards dictate that the number sign (#) indicates the start of the fragment, which is the last part of the URL. In most, if not all, browsers, nothing beyond the "#" is transmitted. However, it's fairly common for AJAX sites to use JavaScript's window.location.hash to grab the URL fragment and use it to execute additional AJAX calls. I bring this up because city-data.com does exactly this, which may confuse you, since in a browser those two URLs do in fact bring back two different pages.
Scrapy drops the URL fragment by default, so it will see both URLs as just "http://www.city-data.com/advanced/search.php" and filter out the second one as a duplicate.
With all of that out of the way, there will still be a problem after you remove "#body" from the URLs, caused by the combination of page = response.url.split("/")[-2] and filename = 'citydata-%s.html' % page. Neither of your URLs redirects, so the URL you provide is what will populate the response.url string.
Isolating that, we get the following:
>>> urls = [
...     'http://www.city-data.com/advanced/search.php?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=0',
...     'http://www.city-data.com/advanced/search.php?fips=0&csize=a&sc=2&sd=0&states=ALL&near=&nam_crit1=6914&b6914=MIN&e6914=MAX&i6914=1&nam_crit2=6819&b6819=15500&e6819=MAX&i6819=1&ps=20&p=1',
... ]
>>> for url in urls:
...     print(url.split("/")[-2])
...
advanced
advanced
So for both URLs you're extracting the same piece of information, which means filename = 'citydata-%s.html' % page produces the same filename both times, presumably 'citydata-advanced.html'. The second time it's written, you overwrite the first file.
Depending on what you're doing with the data, you could either change this to append to the file, or modify your filename variable to something unique, such as:
from urlparse import urlparse, parse_qs  # on Python 3: from urllib.parse import urlparse, parse_qs
import scrapy
from scrapy_splash import SplashRequest

class CityDataSpider(scrapy.Spider):

    [...]

    def parse(self, response):
        # parse_qs maps each parameter to a list of values, so take the first one
        page = parse_qs(urlparse(response.url).query).get('p', [''])[0]
        filename = 'citydata-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
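With the two URLs above, p parses out as 0 and 1, so this writes citydata-0.html and citydata-1.html instead of overwriting a single citydata-advanced.html.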

Accepting form data from HTML in Python

I am trying to write a simple web app as a school assignment. Basically, the app uses the Twitter streaming API to pull tweets, and then I'm going to graph the results using Google Charts.
I have written a Python script that can connect to Twitter and pull tweets in real time, but I want to define the search criteria based on user input from an HTML page. (I don't have that part written yet, but I plan to offer a list of radio buttons the user can select from to choose the search option.)
My question is: how do I get data from an HTML form into my Python script, and then return the results back to my HTML page? A sketch of one common approach follows the code below.
I am very new to Python and web programming in general, so oversimplification would be nice!
This is the code in my Python file:
from tweepy import Stream
from tweepy import OAuthHandler
from tweepy.streaming import StreamListener
import time

ckey = ''
csecret = ''
atoken = ''
asecret = ''

class listener(StreamListener):

    def on_data(self, data):
        try:
            tweet = data.split(',"text":"')[1].split('","source')[0]
            print tweet
            saveThis = str(time.time()) + '::' + tweet
            print saveThis
            saveFile = open('twitDB.csv', 'a')
            saveFile.write(saveThis)
            saveFile.write('\n')
            saveFile.close()
            return True
        except BaseException, e:
            print 'failed ondata, ', str(e)
            time.sleep(5)

    def on_error(self, status):
        print status

auth = OAuthHandler(ckey, csecret)
auth.set_access_token(atoken, asecret)
twitterStream = Stream(auth, listener())
twitterStream.filter(track=["car"])
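Since no framework is specified, here is a minimal sketch of the usual pattern using Flask: the HTML form POSTs the user's choice to a route in your Python code, which can start the stream and render the results back. The route, the 'track' field name, and the inline template are all invented for illustration, and the tweepy call is stubbed out as a comment:
from flask import Flask, request, render_template_string

app = Flask(__name__)

# Hypothetical inline page: one radio group named 'track' for the search term
PAGE = '''
<form method="post">
  <label><input type="radio" name="track" value="car" checked> car</label>
  <label><input type="radio" name="track" value="bike"> bike</label>
  <input type="submit" value="Search">
</form>
<p>{{ result }}</p>
'''

@app.route('/', methods=['GET', 'POST'])
def index():
    result = ''
    if request.method == 'POST':
        # the value of the radio button the user picked in the HTML form
        track = request.form['track']
        # here you would start the tweepy stream, e.g. twitterStream.filter(track=[track])
        result = 'Streaming tweets about: %s' % track
    return render_template_string(PAGE, result=result)

if __name__ == '__main__':
    app.run(debug=True)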

Cannot get entire web page after query

I'm trying to scrape the historical NAVPS tables found on this page:
http://www.philequity.net/pefi_historicalnavps.php
All the code here comes from my minimal working script. It starts with:
import urllib
import urllib2
from BeautifulSoup import BeautifulSoup
opener = urllib2.build_opener()
urllib2.install_opener(opener)
After studying the web page using Chrome's Inspect Element, I find that the form data sent is the following:
form_data = {}
form_data['mutualFund'] = '1'
form_data['year'] = '1995'
form_data['dmonth'] = 'Month'
form_data['dday'] = 'Day'
form_data['dyear'] = 'Year'
So I continue building up the request:
url = "http://www.philequity.net/pefi_historicalnavps.php"
params = urllib.urlencode(form_data)
request = urllib2.Request(url, params)
I expect this to be the equivalent of clicking "Get NAVPS" after filling in the form:
page = urllib2.urlopen(request)
Then I read it with BeautifulSoup:
soup = BeautifulSoup(page.read())
print soup.prettify()
But alas! I only get the web page as though I didn't click "Get NAVPS" :( Am I missing something? Is the server sending the table in a separate stream? How do I get to it?
When I look at the POST request in Firebug, I see one more parameter that you aren't passing: "type", with the value "Year". I don't know whether adding it will get you the data; there are any number of other reasons the server might not serve it to you.
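In other words, try adding that parameter to your form data before building the request, keeping the rest of the script the same:
# add the missing parameter observed in the browser's POST request
form_data['type'] = 'Year'
params = urllib.urlencode(form_data)
request = urllib2.Request(url, params)
page = urllib2.urlopen(request)
soup = BeautifulSoup(page.read())
print soup.prettify()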
