Google Finance API not working from 6 September 2017 - javascript

I was using the Google Finance API to get stock quotes and display them on my site. All of a sudden, from 6 September 2017, this stopped working. The URL I used to get the stock quotes is https://finance.google.com/finance/info?client=ig&q=SYMBOL&callback=?.
Previously I was using the Yahoo Finance API, but it was inconsistent, so I switched over to the Google Finance API.
Could you please help me with this?
Thanks,
Ram

This URL works. I think just the host changed from www.google.com to finance.google.com:
https://finance.google.com/finance/getprices?q=ACC&x=NSE&p=15&i=300&f=d,c,o,h,l,v

In the end I went back to Yahoo Finance. The data is not live; there is a 20-minute delay. I thought this would be helpful to people facing the same issue.
The Yahoo API URL is https://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.quotes%20where%20symbol%3D%22MSFT%22&env=store://datatables.org/alltableswithkeys
This returns the stock data in XML format. You can parse the XML to get your desired fields.
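For what it's worth, the XML can be parsed with Python's standard library. A minimal sketch, assuming the response keeps the <query><results><quote> shape; the sample document and its field names below are illustrative, not captured from the live service:

```python
import xml.etree.ElementTree as ET

# Illustrative sample of the YQL response shape (field names may differ
# from what the live yahoo.finance.quotes table actually returns):
sample = ('<query><results>'
          '<quote symbol="MSFT">'
          '<Symbol>MSFT</Symbol>'
          '<LastTradePriceOnly>74.26</LastTradePriceOnly>'
          '</quote>'
          '</results></query>')

def parse_quotes(xml_text):
    """Collect each <quote> element's child fields into a dict."""
    root = ET.fromstring(xml_text)
    return [{field.tag: field.text for field in quote}
            for quote in root.iter('quote')]

for q in parse_quotes(sample):
    print(q['Symbol'], q['LastTradePriceOnly'])
```

The same loop works however many quote elements the query returns.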
Thanks,
Ram

We had the same issue, and we found the alternative below: the Microsoft Bing API for stock markets. This API returns the stock data in JSON format.
https://finance.services.appex.bing.com/Market.svc/ChartAndQuotes?symbols=139.1.500209.BOM&chartType=1d&isETF=false&iseod=False&lang=en-IN&isCS=false&isVol=true
Thanks, Shyamal

I was dying to find a thread like this yesterday when I faced the issue!
Like Salketer said, the Google Finance API was officially "closed" in 2012. However, for some reason it kept working until September 5, 2017. I built a program to manage my portfolio that uses the GF API to get live quotes for US stocks. It stopped working on September 6, 2017, so I assume the engineers who were quietly keeping the API alive have now actually stopped the service.
I found an alternative, https://www.alphavantage.co/documentation/ , and it seems like the best option for free live US equity quotes. They just require your email, nothing else. It's a bit slow because it doesn't support multi-symbol queries yet, but beggars can't be choosers.
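In case it helps anyone, here is a minimal sketch of building a request against their documented URL scheme. TIME_SERIES_INTRADAY and the 'demo' key come from their documentation page; treat the exact parameter set as an assumption and check the docs:

```python
from urllib.parse import urlencode

params = {
    'function': 'TIME_SERIES_INTRADAY',  # endpoint name from the Alpha Vantage docs
    'symbol': 'MSFT',
    'interval': '1min',
    'apikey': 'demo',  # placeholder; register with your email for a real key
}
url = 'https://www.alphavantage.co/query?' + urlencode(params)
print(url)
# The response is JSON, so fetching it is e.g.:
#   import requests
#   data = requests.get(url).json()
```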

I had to switch to Google Finance after using Yahoo Finance for a long time, after Verizon bought Yahoo this May and ended the free API service. I went back and re-researched this issue, and someone has created a new Yahoo Finance API call that works with the new Yahoo API: https://stackoverflow.com/a/44092983/8316350
The Python source and installer can be found here: https://github.com/c0redumb/yahoo_quote_download
The arguments are (ticker, start_date, end_date), where dates are in yyyymmdd format, and it returns a list of strings. The following test downloads a couple of weeks' worth of data and then extracts only the adjusted close price into a list called adj_close:
from yahoo_quote_download import yqd

quote = yqd.load_yahoo_quote('AAPL', '20170515', '20170530')
print(quote[0])  # print the column headers
print(quote[1])  # print a couple of rows of data
print(quote[2])  # just to make sure it looks right
quote.pop()  # get rid of the blank string at the end of the data
quote = [row.split(',') for row in quote]  # split each CSV row into a list of fields
adj_close = [row[5] for row in quote]  # grab only the 'Adj Close' column into a new list
print(adj_close)
Returns:
Date,Open,High,Low,Close,Adj Close,Volume
2017-05-15,156.009995,156.649994,155.050003,155.699997,155.090958,26009700
2017-05-16,155.940002,156.059998,154.720001,155.470001,154.861862,20048500
['Adj Close', '155.090958', '154.861862', '149.662277', '151.943314', '152.461288', '153.387650', '153.198395', '152.740189', '153.268112', '153.009140', '153.068893']

I was manually reading the Google Finance page for each stock before I got the ?info link. As that is not working anymore, I am going back to the webpage.
Here is my python snippet:
import re
import urllib2
from bs4 import BeautifulSoup

def get_market_price(symbol):
    print "Getting market price: " + symbol
    base_url = 'http://finance.google.com/finance?q='
    retries = 2
    while True:
        try:
            response = urllib2.urlopen(base_url + symbol)
            html = response.read()
        except Exception, msg:
            if retries > 0:
                retries -= 1
                continue  # retry the request
            else:
                raise Exception("Error getting market price!")
        soup = BeautifulSoup(html, 'lxml')
        try:
            price_change = soup.find("div", {"class": "id-price-change"})
            price_change = price_change.find("span").find_all("span")
            price_change = [x.string for x in price_change]
            price = soup.find_all("span", id=re.compile('^ref_.*_l$'))[0].string
            price = str(unicode(price).encode('ascii', 'ignore')).strip().replace(",", "")
            return (price, price_change)
        except Exception as e:
            if retries > 0:
                retries -= 1
            else:
                raise Exception("Can't get current rate for scrip: " + symbol)
Example:
Getting market price: NSE:CIPLA
('558.55', [u'+4.70', u'(0.85%)'])

You can simply parse the result of this request:
https://finance.google.com/finance/getprices?q=GOOG&x=NASD&p=1d&i=60&f=d,c,o,h,l,v
(GOOG at NASDAQ, one day, frequency 60 seconds, DATE,CLOSE,HIGH,LOW,OPEN,VOLUME)
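If it helps, here is a sketch of parsing that response in Python. It assumes the format I have seen from this endpoint: header lines like COLUMNS=... and INTERVAL=..., then data rows whose first column is either an absolute Unix timestamp prefixed with 'a' or an interval offset from the last such row. The sample text is illustrative, not a captured response:

```python
def parse_getprices(body, interval=60):
    """Turn a getprices response into (timestamp, close, high, low, open, volume) rows."""
    rows, base = [], None
    for line in body.splitlines():
        cols = line.split(',')
        first = cols[0]
        if first.startswith('a'):                   # absolute Unix timestamp row
            base = int(first[1:])
            ts = base
        elif first.isdigit() and base is not None:  # offset (in intervals) from the last 'a' row
            ts = base + int(first) * interval
        else:                                       # header line (COLUMNS=, INTERVAL=, ...)
            continue
        close, high, low, open_, volume = cols[1:6]
        rows.append((ts, float(close), float(high), float(low),
                     float(open_), int(volume)))
    return rows

# Illustrative sample in the shape described above:
sample = """EXCHANGE%3DNASDAQ
INTERVAL=60
COLUMNS=DATE,CLOSE,HIGH,LOW,OPEN,VOLUME
DATA=
a1504705800,929.06,929.30,928.50,929.00,1300
1,928.90,929.10,928.50,929.00,2400"""

print(parse_getprices(sample))
```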

I had the same problem in PHP.
I replaced the URL https://www.google.com/finance/converter?a=$amount&from=$from_Currency&to=$to_Currency
with
https://finance.google.com/finance/converter?a=1&from=$from_Currency&to=$to_Currency
and it works fine for me.

Related

How to use Google Sheets as read-only database for Angular, without making the sheet public

I'm trying to use a Google Sheets document as a read-only database for my Angular application.
I tried some methods to do that, but the problem with all of them is that they require the sheet to be shared publicly (anyone with the link can access it). What I want is to share it with a specific user using a Service Account through credentials.
I'm using Angular 14.
There is no reference to Angular in Google Sheets for Developers.
If you know any solution or have come across an article about this topic, please share it with me.
Thank you.
Here are the steps you'll need to take in order to read from Google Sheets into Angular:
Step 1: Prepare your Google Sheet
1.) Make sure ALL the cells in your sheet are formatted as "Plain text". To do this, click the upper-left corner of the sheet where the row and column headers intersect to select all cells, then select Format > Number > Plain text from the top menu.
2.) Go to Share, then under "General Access" select "Anyone with the link", then click Done. I believe in your case this step is optional, since you do not want the sheet to be public.
3.) Go to File > Share > Publish to web in the top menu. Set the scope of what you want to publish, then click Publish. Unfortunately, in your case this step is NOT optional!
Step 2: Fetch the Google Sheet Data
Use the following code example to fetch the raw data from your Google Sheet as plain text:
const docId = '1vLjJqvLGdaS39ccsvoU58kEWXngzV_VXtto07Ki6qVo';
const sheetId = ''; // to get a specific sheet by ID, use '&gid=###'
const url = `https://docs.google.com/spreadsheets/d/${docId}/gviz/tq?tqx=out:json${sheetId}`;
this.http.get(url, {
  responseType: 'text',
}).subscribe((response: string): void => {
  console.log(response);
});
Step 3: Parse the Raw Text as JSON
Use the following example to parse the raw text to JSON:
const rawJSONText = response.match(/google\.visualization\.Query\.setResponse\(([\s\S\w]+)\)/); // strip the header response
const json = JSON.parse(rawJSONText[1]);
console.log(json);
Hope this helps. Cheers!

What is the optimal way to show custom data from an MS Access DB on a WordPress site?

I need advice from skilled WordPress developers. My organization has an internal MS Access database which contains numerous tables, reports and input forms. The structure of the DB is not too complicated (person information, events, third-party info and various relations between them). We would like to show some portion of this info on our WordPress site, which currently has only a news section.
Because the information in our DB is updated each day, we would also like to set up a simple synchronization between the MS Access DB and WordPress (MySQL DB). Now I am trying to find the best way to connect MS Access and WordPress.
At present I see only these ways to do this:
Make export requests and save them to XML files.
Import them into the MySQL DB of WordPress.
Show the content on the WordPress site using the Custom Fields feature (or develop our own plugin).
-OR-
Build our own informational system on some very light PHP framework (for example CodeIgniter) on the same domain as the WordPress site, which will actually show the imported content.
These variants need a manual transfer of info between the DBs each day. And I don't know WordPress's capabilities for showing custom data from a DB. Which ways would you prefer in my case?
P.S. The MS Access version used is 2007+ (.accdb file). The names of fields, DBs and content are in Russian. In the future we plan to add 2 new languages (English, Ukrainian). The MS Access DB also contains persons' photos.
---Updated info---
I was able to make semi-automatic import/export operations using the following technique:
the JavaScript library ACCESSdb (slightly modified for the new DB format)
Internet Explorer 11 (for running the ADODB ActiveX)
a small VBS script for extracting attached files from MS Access tables
the latest jQuery
WordPress plugins for custom data (Advanced Custom Fields, Custom Post Type UI)
the WordPress REST API enabled (with the plugins JSON Basic Authentication, ACF to REST API)
First I constructed a data scheme on the WordPress site using the custom post and custom fields technique. Then I run JS queries locally against the MS Access DB and send the received info via jQuery to the WP REST API endpoints. The whole transfer operation can be done in 1 click.
But I can't upload files automatically via JS due to security limitations. This can be done in +1 click per file.
Your question is too broad.
It consists of two parts: 1. export from Access and 2. import to WordPress. Since I'm not familiar with WordPress, I can only give you advice on part 1. At least Google shows that there are some plugins able to import from CSV, like this one:
https://ru.wordpress.org/plugins/wp-ultimate-csv-importer/
You can create a scheduled task that runs Access, which runs a macro, which runs a VBA function, as described here:
Running Microsoft Access as a Scheduled Task
In that VBA function you can use the ADODB.Stream object to create a UTF-8 CSV file with your data and upload it to the FTP of your site.
OR
Personally, I use a Python script to do something similar. I prefer this way because it is more straightforward and reliable. Here is my code. Notice that I have two FTP servers: one of them is for testing only.
# -*- coding: utf-8 -*-
# 2018-10-31
# 2018-11-28
import os
import csv
from time import sleep
from ftplib import FTP_TLS
from datetime import datetime as dt
import msaccess

FTP_REAL = {'FTP_SERVER': r'your.site.com',
            'FTP_USER': r'username',
            'FTP_PW': r'Pa$$word'}

FTP_WIP = {'FTP_SERVER': r'192.168.0.1',
           'FTP_USER': r'just_test',
           'FTP_PW': r'just_test'}

def ftp_upload(fullpath: str, ftp_folder: str, real: bool):
    ''' Upload a file to FTP '''
    try:
        if real:
            ftp_set = FTP_REAL
        else:
            ftp_set = FTP_WIP
        with FTP_TLS(ftp_set['FTP_SERVER']) as ftp:
            ftp.login(user=ftp_set['FTP_USER'], passwd=ftp_set['FTP_PW'])
            ftp.prot_p()
            # Passive mode off, otherwise there will be a problem
            # with the next upload attempt
            # (my site doesn't allow active mode)
            ftp.set_pasv(ftp_set['FTP_SERVER'].find('selcdn') > 0)
            ftp.cwd(ftp_folder)
            i = 0
            while i < 3:
                sleep(i * 5)
                i += 1
                try:
                    with open(fullpath, 'br') as f:
                        ftp.storbinary(cmd='STOR ' + os.path.basename(fullpath),
                                       fp=f)
                except OSError as e:
                    if e.errno != 0:
                        print(f'ftp.storbinary error:\n\t{repr(e)}')
                except Exception as e:
                    print(f'ftp.storbinary exception:\n\t{repr(e)}')
                filename = os.path.basename(fullpath)
                # Check if the uploaded file size matches the local file.
                # IDK why, but a single ftp.size command sometimes returns None,
                # so run it once first:
                ftp.size(filename)
                ftp_size = ftp.size(os.path.basename(fullpath))
                if ftp_size != None:
                    if ftp_size == os.stat(fullpath).st_size:
                        print(f'File \'{filename}\' successfully uploaded')
                        break
                    else:
                        print('Transfer failed')
    except OSError as e:
        if e.errno != 0:
            return False, repr(e)
    except Exception as e:
        return False, repr(e)
    return True, None

def make_file(content: str):
    ''' Make a CSV file in the temp directory and return True and its fullpath '''
    fullpath = os.environ['tmp'] + f'\\{dt.now():%Y%m%d%H%M}.csv'
    try:
        with open(fullpath, 'wt', newline='', encoding='utf-8') as f:
            try:
                w = csv.writer(f, delimiter=';')
                w.writerows(content)
            except Exception as e:
                return False, f'csv.writer fail:\n{repr(e)}'
    except Exception as e:
        return False, repr(e)
    return True, fullpath

def query_upload(sql: str, real: bool, ftp_folder: str, no_del: bool = False):
    ''' Run a query and upload the result to FTP '''
    print(f'Real DB: {real}')
    status, data = msaccess.run_query(sql, real=real, headers=False)
    rec_num = len(data)
    if not status:
        print(f'run_query error:\n\t{data}')
        return False, data
    status, data = make_file(data)
    if not status:
        print(f'make_file error:\n\t{data}')
        return False, data
    fi = data
    status, data = ftp_upload(fi, ftp_folder, real)
    if not status:
        print(f'ftp_upload error:\n\t{data}')
        return False, data
    print(f'Done: {rec_num} records')
    if no_del:
        input('\n\nPress Enter to exit and delete the file')
    os.remove(fi)
    return True, rec_num

How to fix a Yandex API code leak in a Twitter bot?

I'm a beginner in programming and I'm developing a bot with Node-RED in the IBM Cloud, and I've had problems with the response from the Yandex translation API. It returns part of the API response markup in the tweets, which is not pleasant at all.
The Yandex API can return JSON or XML; I tried both and could not solve the problem. The bot in question uses other APIs and I was able to configure them normally, something that does not happen with this one, whose result would be the final text of the tweet to be posted.
To send the text for translation I use the following request in a Node-RED function node:
var translate = msg.method ='GET';
msg.url = "https://translate.yandex.net/api/v1.5/tr/translate?key= *API KEY* &text=" + recipe + "&lang=pt"
return [msg,null];]
In the next block, the last one before sending the message, I'm using something like:
var yandex= msg.payload;
yandex = 'a' + msg.payload.text;
return msg;
and this returns something like this in the public tweet:
"<?xml version="1.0" encoding="utf-8"?>
< Translation code="200" lang="en-pt"><text>é uma receita com Estilo grego Desfrutar de sua comida!</text>< / T"
I hope to remove all this markup from the output and send only the translation to the tweet, which is what is inside the <text> element.
Forgive my code redundancies, but I do not know JavaScript fully and my college teaches somewhat old languages, like Pascal.
You can perform the translation without writing a single line of code. This is the power of Node-RED. All you need is to send a properly formatted payload to an http request node. In the configuration dialog of this node tick the option Append msg.payload as query string parameters. You will get the translation by extracting msg.payload.text.
The payload to send to the http request has to be structured as follows:
{
    "key": "your key",
    "lang": "en-pt",
    "format": "plain",
    "text": "Life is like a game"
}
The code you posted has syntax errors and will not produce the output you want.
I recommend you study Node-RED a little more and ask questions in their forum in case you did not understand what is explained above.

Python Flask data feed from Pandas Dataframe, dynamically define with unique endpoint

Hi, I am building a web app with Flask (Python). I have a problem here:
@app.route('/analytics/signals/<ticker_url>')
def analytics_signals_com_page(ticker_url):
    all_ticker = full_list
    ticker_name = com_name
    ticker = ticker_url.upper()
    pricerec = sp500[ticker_url.upper()].tolist()
    timerec = sp500[ticker_url.upper()].index.tolist()
    return render_template('company.html', all_ticker=all_ticker, ticker_name=ticker_name, ticker=ticker, pricerec=pricerec, timerec=timerec)
Here I am defining company pages based on the <ticker_url>; each page will contain different content. The problem is that everything is fine up to ticker = ticker_url.upper(); that works perfectly. But pricerec and timerec cause problems.
sp500 is a pandas DataFrame whose columns are companies like "AAPL", "GOOG", "MSFT", and so forth (505 companies); the index is timestamps, and the values are the prices at each time.
So for pricerec I take the ticker_url and use it to select the specific company's prices as a list, and timerec takes the index (timestamps) as a list. I pass these two variables into the company.html page.
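To illustrate the shape described above, here is a minimal toy stand-in for the sp500 frame (the tickers and prices are made up):

```python
import pandas as pd

# Toy stand-in for the sp500 DataFrame: columns are tickers,
# the index is timestamps, values are prices (all made up).
sp500 = pd.DataFrame(
    {'AAPL': [155.70, 155.47], 'MSFT': [68.14, 67.68]},
    index=pd.to_datetime(['2017-05-15', '2017-05-16']))

pricerec = sp500['AAPL'].tolist()       # list of floats
timerec = sp500['AAPL'].index.tolist()  # list of Timestamp objects
print(pricerec)
print(timerec)
```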
But it causes an internal server error, and I do not know why.
My expectation was that when a user clicks a button whose href points to "~/analytics/signals/aapl", the company.html page would contain pricerec and timerec for me to draw a graph. But it didn't work like that; it causes an internal server error. I defined those two variables in the JavaScript as well, like I did for the other variables (all_ticker, ticker_name, and ticker).
Can anyone help me with this issue?
Thanks!

Values Hidden From Source Code - Web Scraping Python

I'm trying to web-scrape a website, and when I go through the source code, what I'm looking for is not there. The website is http://www.providentmetals.com/2016-1-oz-canadian-silver-cougar.html and what I'm looking for is the price in the table at the top right. It says "1+" followed by prices; it's around $18.04 right now.
When I "inspect element" with web developer tools, I can see the price though.
Using BeautifulSoup I've tried to get the value, but it doesn't show up. Here's roughly the code; it doesn't return a value at all:
import requests, bs4
url = 'http://www.providentmetals.com/2016-1-oz-canadian-silver-cougar.html'
res = requests.get(url)
soup = bs4.BeautifulSoup(res.text, 'lxml')
elems = soup.find_all('table', {'class': 'table table-striped border-light pricing data-table'})
# class name found from the "inspect element" web dev tool
Questions: How do I find the hidden data? Are there any ways that you know of to use bs4/requests to find the data? I'm not great at coding and web scraping, so any help would be appreciated.
The values are not hidden, they are obtained with an Ajax request and then inserted into the page's DOM. That's why you see them in your browser, but not in the page's HTML.
You can directly access the Ajax request that obtains the data you require. The response is in JSON format, so it's really easy to use. You need to know the SKU which, for your example, is BBFS-04253.
The URL is:
http://www.providentmetals.com/services/products.php?type=product&sku=BBFS-04253
Using the requests module:
import requests
url = 'http://www.providentmetals.com/services/products.php'
params = {'type': 'product', 'sku': 'BBFS-04253'}
response = requests.get(url, params)
data = response.json()
>>> from pprint import pprint
>>> pprint(data)
[{u'as_low_as': {u'crypto_price': u'$18.22',
                 u'list_price': u'$18.78',
                 u'price': u'$18.03',
                 u'qty': 1,
                 u'to_tier': u' + '},
  u'crypto_price': u'$18.22',
  u'crypto_special_price': u'$0.00',
  u'id': u'6034',
  u'inStock': None,
  u'list_price': u'$18.78',
  u'list_special_price': u'$0.00',
  u'name': u'2016 1 oz Canadian Silver Cougar | Predator Series',
  u'price': u'$18.03',
  u'sell_to_us': u'$16.69',
  u'sku': u'BBFS-04253',
  u'special_price': None,
  u'status_allows_price': True,
  u'stock_status_code': u'pre-sale',
  u'tier_price': [{u'crypto_price': u'$18.22',
                   u'list_price': u'$18.78',
                   u'price': u'$18.03',
                   u'qty': 1,
                   u'to_tier': u' + '}]}]
So you can access the price and other details directly (note the response is a list, so index into it first):
>>> data[0]['price']
u'$18.03'
>>> data[0]['name']
u'2016 1 oz Canadian Silver Cougar | Predator Series'
