Get number of online users from a public website page - JavaScript

Say there is a website homepage that publicly displays stats like online users, page generation time, etc.
Is it possible to retrieve those specific values and display them in a piece of software, for example a simple charting extension written in JavaScript? Or does it depend on the software used to get them?
Just to be clear, I'm asking about a generic public webpage, not about my own website - i.e. I do not have access to the internal PHP scripts, code, variables, etc. of the website/domain.

The technique you are looking for is web scraping: programmatically extracting data that is already shown in the webpage. You can find plenty of tutorials on this.
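For example, a minimal sketch of the idea, assuming the stat is exposed in the page's HTML (the URL and the CSS selector below are hypothetical, and in a browser this only works if the target site allows cross-origin requests - otherwise you need a small server-side proxy):

```javascript
// Fetch the public page, parse the HTML, and pull out the stat.
async function getOnlineUsers() {
  const response = await fetch('https://example.com/'); // hypothetical page
  const html = await response.text();

  const doc = new DOMParser().parseFromString(html, 'text/html');
  const statElement = doc.querySelector('#online-users'); // hypothetical selector
  return statElement ? parseInt(statElement.textContent, 10) : null;
}

getOnlineUsers().then(count => console.log('Online users:', count));
```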

The server is the only thing that knows how many people are browsing. This is not obtainable from the front end unless the back end is providing it to you.

Related

How to get CandidateID from SuccessFactors Career (recruitment) site?

I'm building a Fiori (SAPUI5) application; it is deployed and reachable through a link. My problem is that I have to put it on a recruitment site, the Career Opportunities site (which is connected to SAP SuccessFactors), and I need to know the person's CandidateID because my app is based on that.
So when somebody applies for a job on the Career Opportunities site and clicks "Next" (after filling in the fields), they reach a page with some information and my link. When they click the link, their candidateId should be passed to it.
How could I get that? Is there any way?
The information for all candidates can be retrieved via the OData API.
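As a rough sketch only - the API host, credentials, and the exact entity and field names below are assumptions, so check your instance's OData API reference (the Recruiting module exposes candidate data through the OData v2 API):

```javascript
// Retrieve one candidate's details from the SuccessFactors OData v2 API.
async function getCandidate(candidateId) {
  const host = 'https://api.example.successfactors.com'; // hypothetical API host
  const url = `${host}/odata/v2/Candidate(${candidateId})?$format=json`;

  const response = await fetch(url, {
    headers: {
      // Placeholder credentials; SFSF supports Basic auth (user@companyId) and OAuth.
      'Authorization': 'Basic ' + btoa('apiuser@COMPANY_ID:password'),
      'Accept': 'application/json',
    },
  });
  if (!response.ok) throw new Error(`OData request failed: ${response.status}`);
  return (await response.json()).d;
}
```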
The site you are referring to is probably a Career Site Builder page. Within SFSF these pages can be assembled from standard elements that define how the page should look.
One of those elements is the custom JavaScript plugin, where you are fairly free in what you do.
Unfortunately, I doubt you can compose a link containing the candidate ID without resorting to things you should NOT do, like reading the URL and extracting the candidate ID from it.
Long story short: it might be technically possible, but you really should avoid it, also in terms of security - it also sounds like you do not really know the system where your link magically appears. So: don't do it :)

Take information from a web page and paste it into Excel - C# or JavaScript

I work in a huge virtual environment where we build dozens of servers a day. Right now we take a ticket from ServiceDesk, copy the important information into an Excel sheet, then use that sheet to automate the build process. I want to automate the first part.
Given an array of tickets, the tool:
Opens the default service desk webpage
Searches the ticket number (which opens it in another web page automatically)
Copies the information
Loops
I have experience with C# and JavaScript. I am looking for the best way to do the above task - what language, what extensions, etc.
My main question is how do I get the information from this web page.
Thank you!
Try the Excel parser/builder for Node.js - node-xlsx.
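A minimal sketch of the node-xlsx side, assuming you have already scraped the ticket fields (the ticket data below is made up):

```javascript
const fs = require('fs');
const xlsx = require('node-xlsx');

// Each sheet is an array of rows, each row an array of cell values.
const tickets = [
  ['Ticket', 'Server name', 'Environment'],   // header row
  ['SD-1001', 'app-server-01', 'Production'], // hypothetical data
  ['SD-1002', 'db-server-07', 'Staging'],
];

// Build a workbook and write it to disk.
const buffer = xlsx.build([{ name: 'Builds', data: tickets }]);
fs.writeFileSync('builds.xlsx', buffer);

// Reading an existing workbook works the other way round.
const sheets = xlsx.parse('builds.xlsx'); // [{ name, data }, ...]
console.log(sheets[0].data);
```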
Or you could use the native C# library from Microsoft. I think it has more functionality and is more productive than a solution based on Node.js. Just take a look at this article.

Fetching data from third-party websites

I work in a small healthcare-related office and we often have to look up license numbers and other related official numbers of physicians. We use websites that are free and available to the public to do so. I've been tasked with figuring out a way to enter the physician's name and then return the results from all of the websites in one place, to reduce the time spent going through each site. I'm familiar with JavaScript, PHP and Ruby, but by no means an expert. My question is: where should I start? I don't need anyone to write the code for me, but I can't seem to form the right question to google for answers. I'm fairly sure this is possible, just not sure where to start developing the idea. Any help would be appreciated.
It sounds like you need to do some screen scraping, which may or may not be allowed by the terms and conditions of the sites you're using - you should check that first.
If there aren't any restrictions on automatic retrieval and querying, you'll want to read up on PHP's cURL module, and simulate the form actions that are performed when you manually query the sites. You can use your browser's developer console to see what scripts and pages are called when you run queries - it's quicker than trying to work it out from the page source.
You'll get back the HTML from the pages, which you'll need to parse. Depending on the format on the page, a few simple regexes might do the trick, but you'll likely need to tailor them for each site you query.
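The same fetch-and-parse flow, sketched here in Node.js rather than PHP's cURL (the URL, form field name, and regex are all hypothetical - lift the real ones from the developer console as described above):

```javascript
// Simulate the search form POST one of the license-lookup sites performs,
// then pull the license number out of the returned HTML.
async function lookupLicense(physicianName) {
  const response = await fetch('https://example-board.org/search', { // hypothetical site
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ name: physicianName }), // hypothetical field name
  });
  const html = await response.text();

  // A simple regex tailored to this one site's result markup.
  const match = html.match(/License\s*#?:\s*([A-Z0-9-]+)/i);
  return match ? match[1] : null;
}
```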
Again, please double check that the sites you're using allow you to run scripted queries - if you're in any doubt, you should email them and explain what you plan to do, and ask if they're ok with it.

How to extend HTML of an existing site via JavaScript or similar

I want to add a bit of extra HTML to an existing site based on a REST API call response.
Specifically, www.arbookfind.com lets you search for kids' school books with an "AR" test. (My son has to read a certain number of books at a level.) It has a link to amazon.com if you find a book you want to buy. However, I would like to know whether a book is available for Kindle (most are not). Right now I have to click the Amazon link, check the page, go back and try the next one - it can take 10 tries to find one available on the Kindle. Painful!
I'm after ideas on the easiest way to do this. That is, without touching the arbookfind.com web site, can I add some JavaScript (jQuery) to the returned HTML pages? The JavaScript would look in the returned page for each book, fire off an Amazon ItemSearch query (?) to see if it is available on Kindle, then inject an HTML link to the Kindle edition on Amazon. I can learn how to write the JavaScript - I am just after some pointers on the easiest way to augment the current site.
That way I can use the current arbookfind.com site to find a book, but it is faster for me to identify which books are available on Kindle without manually trying each link by hand.
E.g. a web browser plugin that runs some JavaScript on each returned page? A Varnish proxy with some smart logic to fiddle with pages on the way through? A PHP app acting as a proxy server? Thanks!
Maybe you want something like the Chrome extension Tampermonkey.
It lets you add and manage userscripts for websites - that is, JavaScript "snippets" which are injected into pages matching specific URL patterns.
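A minimal userscript sketch along those lines - the result selector is an assumption about arbookfind.com's markup, and the Kindle check itself is stubbed out, since it still needs an Amazon API call or a small server-side helper:

```javascript
// ==UserScript==
// @name         AR BookFind Kindle hint (sketch)
// @match        http://www.arbookfind.com/*
// @grant        none
// ==/UserScript==

(function () {
  'use strict';

  // Find the Amazon links in the search results and annotate each one.
  document.querySelectorAll('a[href*="amazon.com"]').forEach(async (link) => {
    const hint = document.createElement('span');
    hint.textContent = ' [checking Kindle...]';
    link.after(hint);

    const available = await checkKindleAvailability(link.href); // hypothetical helper
    hint.textContent = available ? ' [Kindle available]' : ' [no Kindle edition]';
  });

  // Stub: replace with a real Amazon Product Advertising API call or a
  // request to your own small proxy service.
  async function checkKindleAvailability(amazonUrl) {
    return false;
  }
})();
```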

Print HTML page to PDF on a schedule

I have an HTML page that uses JavaScript to generate dynamic images via a graph handler on a different server. The images will contain the same data for 1 week but will change when the 1-week window expires.
I am trying to come up with a way to automatically save the contents of the page to either a local file on the server or write to a PDF file.
I tried to use a 'web downloader' like HTTrack, but it does not get the dynamic images...
I am running the html page off IIS.
I have no experience with IIS or ASP.
Thanks!
I'm not sure that I see any way to do this directly off the front end in an automatic manner. The challenge is that any "screen scraper" you send out to grab the site would need to be running JavaScript to render the dynamic images, which isn't how most such systems operate. It's partly why you see strangeness on Archive.org when a site is heavily augmented with JavaScript or Flash.
An untested concept you might attempt was posted in this Stack Overflow question.
I could see some sort of system that you rig together with another computer, one that schedules a browser load and then prints to .pdf in some fashion. I've been unable to find any specific software that would automate that process, so you'd be left cobbling such a system together on your own.
Clearly you have the data available to make your dynamic images. The most feature-rich approach I can think of would be to use a system like Jasper Reports or Crystal Reports, which you could feed your data to, replicate the report in, and easily output as PDF - a built-in export in both systems.
Perhaps it's worth questioning your end purpose. To me, creating a "snapshot" of the relevant data in another table and using another system to render your graphs from that snapshot seems far more valuable than just a print of the screen. You can then go back and adjust the data as needed, or use it for other reporting purposes, exporting with any number of tools, even ones as simple as Access. Heck, 10 years down the road you may want the data to look better than the graph system you're currently using, and you'd have the data to render it any way you want. When the VP of marketing comes looking for his numbers, a simple click would output figures that could be manipulated as needed from there.
I was able to accomplish what I wanted using wkhtmltopdf to convert my HTML page with JavaScript to PDF. I ran the job via a task scheduler, supplying my website URL and output file name as parameters.
I then used a Windows batch file to check whether the file was created and then rename and email it to interested parties.
This of course requires that you have the ability to install wkhtmltopdf on your server.
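A rough sketch of the scheduled batch job, assuming wkhtmltopdf is on the PATH; the URL and paths are placeholders, and the date-stamp rename depends on your locale's %date% format:

```bat
@echo off
set URL=http://myserver/graphs.html
set OUT=C:\reports\graphs.pdf

:: --javascript-delay gives the page's scripts time to render the dynamic
:: images before wkhtmltopdf captures the PDF.
wkhtmltopdf --javascript-delay 5000 "%URL%" "%OUT%"

:: Check that the file was actually produced, then rename it with a date
:: stamp (emailing it out is handled by whatever mailer you already use).
if exist "%OUT%" (
    ren "%OUT%" graphs_%date:~-4%-%date:~4,2%-%date:~7,2%.pdf
) else (
    echo PDF was not created on %date% %time% >> C:\reports\error.log
)
```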
