Scrape post-generated DOM elements from Stockflare - javascript

Wondering if anyone can point me in the right direction on how to scrape the data from this website. I understand the data is filled in after the page has fully loaded, and I have seen JS libraries that can request the data be loaded, but I can't remember their names. I'd prefer to code in Python, though, if possible.
https://stockflare.com/stocks/BBY
I think this would work. I found it in another answer about React webpages. Can anyone confirm?
python-casperjs
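As I understand it, python-casperjs just drives CasperJS from Python, so the scraping logic itself still ends up in a JavaScript script. A minimal sketch of such a script (the selector is a placeholder; I haven't inspected Stockflare's markup):

```javascript
// casper_scrape.js - run with: casperjs casper_scrape.js
var casper = require('casper').create();

casper.start('https://stockflare.com/stocks/BBY');

// Wait for the JS-rendered element to appear before reading it.
// '.stock-price' is a hypothetical selector, not taken from the site.
casper.waitForSelector('.stock-price', function () {
    this.echo(this.fetchText('.stock-price'));
});

casper.run();
```

Headless-browser tools (CasperJS/PhantomJS, or Selenium driven from Python) all work the same way: load the page, let its JavaScript run, then read the resulting DOM.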

It's not legal to scrape our data. :( Please drop me a message if you'd like to work with us. Shane at Stockflare dot com.
Shane

Related

Data transfer from one HTML page to another

While creating a technical site, I ran into the need to transfer data between pages and save it in a file. I am building the site in Django. The input data is defined in the models and rendered into the HTML; the user changes it if necessary, and JS performs the required calculations, whose results are displayed on the same page. A lot of analytical work can be done on the resulting data (building graphs, calculating a loan, estimating various project risks), so I prefer to split it across several pages rather than crowd everything onto one. All the pages have already been built. The problem is this: on the main page, JS does its job, but I do not know how to access that data on the other pages. Is there a way to write the JSON string to a file, or to send it to a Django model (update the stored data)? I have read a lot and searched for solutions, but I could not figure it out in the end. Maybe someone knows the solution.
Thank you in advance.
If you need more information, please let me know.
This is my first request for help, so please don’t be strict if I cannot ask correctly.
Writing to a temporary or holding table in a database might be a solution. You would then be able to fetch the data as needed when a new page loads.
There is probably a better solution, though.
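To sketch the client side of that approach: the JS results can be POSTed to a Django view that writes them to the holding table, then fetched back when another page loads. The endpoint path and CSRF handling below are assumptions for illustration, not from the thread:

```javascript
// Django sets a 'csrftoken' cookie by default; POSTs must echo it back.
function getCsrfToken() {
    var match = document.cookie.match(/(?:^|;\s*)csrftoken=([^;]+)/);
    return match ? match[1] : '';
}

// Send the calculated results to a hypothetical Django view that
// stores them in a holding table.
function saveResults(results) {
    return fetch('/api/results/', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'X-CSRFToken': getCsrfToken()
        },
        body: JSON.stringify(results)
    });
}

// On another page, fetch the stored results back after it loads.
function loadResults() {
    return fetch('/api/results/').then(function (response) {
        return response.json();
    });
}
```

If the data only needs to survive navigation within a single visit, sessionStorage (or localStorage for longer) avoids the server round trip entirely: JSON.stringify the results into storage on one page and JSON.parse them out on the next.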

Alternative to Ajax content loading for a One Page Website for proper meta tags

I have meanwhile spent over two months observing the SERP for my website Arda Maps. It is still the case that Google does not list all of the subpages, and it shows incorrect meta tags.
I got a tip to use Dublin Core, but Google themselves say to use it only "as a last resort". To me this means they do not completely ignore DC, do they? But in any case, DC does not work for me.
I searched a lot, but I have not found even a single one-page website whose linked subpages show different meta tags in the SERP. And where such sites exist, they are usually PHP scripts or they load content via Ajax.
So my question is: is there anything out there that makes it possible to get indexed properly without using Ajax? I mean, loading subpages via Ajax is fine... but you have to mirror so much in that case. So do you know any alternative to Ajax for loading proper meta tags?
I searched around a lot, but there does not seem to be a working solution without PHP or Node.js. That's really bad, but since single-page sites are not that common on the web, I think it's acceptable. Single-page sites simply have some huge disadvantages in that regard.

Unable to see complete scraped web page in Google Apps Script logs

A few weeks ago I started learning JavaScript and the Google Apps Script API, specifically in regard to spreadsheets. I have been trying to make a spreadsheet that fetches web pages and pulls stats about my friends for the game League of Legends. However, I have been running into a problem with the site I want to use, which is basically the only free LoL stats site that updates frequently. I'm not familiar at all with web development, but it seems that when I try to access a page on lolking.net, for example http://www.lolking.net/summoner/na/60783, with Google's UrlFetchApp.fetch(), it does not load the dynamic page. So instead of the final source, I get this, which doesn't help me. Is there an easy way around this, or would I simply have to use another website?
Thanks for the info! Although it turns out I was mistaken: UrlFetchApp was indeed returning the full source code, but I was using GAS's Logger to view the text. It seems the Logger has a length limit, so when I searched for the stats I wanted, they weren't there simply because the logged source got truncated. So, due to an oversight on my part, I never had a problem in the first place. For other people reading this question: in the end I have no idea how UrlFetchApp behaves with dynamic pages that use client-side JS (you'd probably want to talk to the poster below or post a new question).
You are getting the raw HTML page with the client-side JS included. That won't work from any system, not just GAS. You need to debug the page's JS and find where it makes an Ajax call to fetch the data you want.
Then do the same from your GAS. It might not work if the call is authenticated, etc.
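Once the underlying Ajax endpoint is known (from the browser's network tab), calling it from GAS is straightforward. A sketch, assuming the endpoint returns JSON; the URL below is a placeholder, not a real lolking.net endpoint:

```javascript
// Call the site's data endpoint directly instead of the HTML page.
function fetchSummonerStats() {
  // Placeholder URL - substitute the real Ajax endpoint found by
  // watching the page's network traffic in the browser dev tools.
  var url = 'http://www.lolking.net/some/ajax/endpoint?summoner=60783';
  var response = UrlFetchApp.fetch(url);
  var stats = JSON.parse(response.getContentText());
  Logger.log(stats);
  return stats;
}
```

If the endpoint requires headers or cookies, UrlFetchApp.fetch accepts an options object as a second argument where those can be set.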

Display Comic Book files on a webpage?

I'm thinking about creating a webpage for comic book files and I'm trying to brainstorm some ways to display them on the page.
If I wanted to get my hands dirty and create everything myself, I think I could do it with HTML5, CSS3, and JavaScript/jQuery: just make some kind of page buttons with an image tag and maybe get into more detailed stuff as it comes up (I don't know how I would do zooming and multiple pages).
But what I really want to know is whether there is already some way to do this. I've looked around for a bit and can't seem to find any sort of plugin that would read a CBZ file or display a set of images with 'e-reader' type tools in mind. Just wondering if anyone knows of anything?
Thanks
I used an online reader for a long time, so a while back I started an experiment to build one myself: netcomix
It's open source, so you can see if you find anything appealing in what I did. I figured I'd do all the real UI work client-side with HTML, CSS, and JavaScript, with the server strictly responsible for acting as a service (for example, to supply a list of comics or a list of all the pages in a particular issue) and for serving up the individual JPG/PNG/GIF files. That compartmentalized things nicely, and I was very pleased with how jQuery BBQ gave me a history I could step back through even though I stayed on one page the whole time.
Now, if I were to do the same experiment again, I'd use Backbone.js to give some structure to the client side, and obviously it needs a lot of love because the server side really does nothing at the moment. Early versions were strictly hard-coded, although I started putting some simple SQL in there in the latest version. It's nothing more than an experiment, though, and should be treated as such. It's there for ideas and little else. If you find it interesting and want more ideas, contact me and I'll be happy to tell you all my wacky ideas for such a program.
I know this is an old question, but web technologies have gotten better in the last few years. There are now several comic book readers that work in the browser using pure HTML and JavaScript. I wrote one: http://comic-book-reader.com .
If you want to see a very simple example of how to read CBR and CBZ files in the browser, check out http://workhorsy.github.io/uncompress.js/examples/simple/index.html which uses the JavaScript library https://github.com/workhorsy/uncompress.js
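Since a CBZ is just a ZIP archive of images, any JavaScript zip library can open one. A minimal sketch using JSZip (my choice for illustration, not the library from the answer above) that shows the first page of a user-selected file:

```javascript
// Display the first page of a CBZ chosen via <input type="file" id="cbz">.
document.getElementById('cbz').addEventListener('change', function (event) {
    var file = event.target.files[0];
    file.arrayBuffer()
        .then(function (buffer) { return JSZip.loadAsync(buffer); })
        .then(function (zip) {
            // Pages in a CBZ are ordered by file name, so sort the image entries.
            var pages = Object.keys(zip.files)
                .filter(function (name) { return /\.(jpe?g|png|gif)$/i.test(name); })
                .sort();
            return zip.files[pages[0]].async('blob'); // first page only, for brevity
        })
        .then(function (blob) {
            var img = document.createElement('img');
            img.src = URL.createObjectURL(blob);
            document.body.appendChild(img);
        });
});
```

CBR files are RAR archives, which plain zip libraries can't read; that is where something like uncompress.js comes in.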

Getting most recent youtube video links for a user using API

So, I've been reading the YouTube API docs--I'm interested in showing the three most recent videos uploaded by a user. But as I've never navigated an API or done this kind of work before, I'm a bit confused by what exactly the API is trying to tell me. What I DO understand is that if I enter a URL like the following:
https://gdata.youtube.com/feeds/api/users/aosjeff/uploads
Then I'll get a ton of information in a kind of list. What I don't understand is how to navigate that list in HTML and make it return a link to the most recent video (or second most recent, etc.) so that I can embed that video in the page. Can anyone explain this to me? I really appreciate the help!
Note: I'm working within site building software that will not allow me to use PHP or reference .php files.
Simon
You should look at parsing the XML data with PHP. This is probably the easiest way of doing things. Beginner's tutorial here:
http://www.kirupa.com/web/xml_php_parse_beginner.htm
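Since the site builder rules out PHP, the feed can also be consumed directly in the browser: the gdata feeds offered a JSON variant via the alt parameter, and alt=json-in-script with a callback works around cross-origin restrictions. A sketch assuming that format (the field names follow the gdata v2 JSON layout as I recall it, so verify them against an actual response):

```javascript
// JSONP callback: the gdata feed invokes this with the feed object.
function showUploads(data) {
    var container = document.getElementById('videos'); // assumes <div id="videos"></div>
    var entries = data.feed.entry || [];
    for (var i = 0; i < entries.length; i++) {
        // title.$t is the video title; link[0].href is the watch URL.
        var a = document.createElement('a');
        a.href = entries[i].link[0].href;
        a.textContent = entries[i].title.$t;
        container.appendChild(a);
        container.appendChild(document.createElement('br'));
    }
}

// Inject a script tag so the browser loads the feed cross-origin (JSONP).
var script = document.createElement('script');
script.src = 'https://gdata.youtube.com/feeds/api/users/aosjeff/uploads' +
             '?alt=json-in-script&callback=showUploads&max-results=3';
document.body.appendChild(script);
```

(Note that the gdata v2 API has since been retired, so a current site would need the YouTube Data API v3 instead.)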
