Querying FB Graph API feed with fields parameter? - javascript

What happens when a Facebook page is queried for its feed with the Facebook Graph API (JS SDK) and a fields URL parameter (like for the post object) is added?
I would like to load only the post IDs, shares, likes and comments from the feed, because the page is pretty active and the JSON text data for 2 days is about 2 MB (25 items)...
I was hoping I could do something like FB.api('/SomePage?fields=id,shares,likes'), but I suppose the only fields you can access are the direct children (for the feed, that is data & paging)? If that is unfortunately the case, is there any other way to retrieve all posts from date x to date y without downloading the entire feed?

That's not correct. The feed edge is basically an array of Post objects, so you can use Field Expansion together with time-based pagination as follows:
GET /{page_id}/feed?fields=id,shares,likes.summary(true)&since={start_unix_timestamp}&until={end_unix_timestamp}
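With the JS SDK the same request looks roughly like this (a sketch only; it assumes the SDK is already loaded and initialized, and the page ID and timestamps are placeholders you substitute yourself):
FB.api(
  '/{page_id}/feed',
  'GET',
  {
    fields: 'id,shares,likes.summary(true)',
    since: startUnixTimestamp, // placeholder: start of the time window
    until: endUnixTimestamp    // placeholder: end of the time window
  },
  function (response) {
    if (response && !response.error) {
      // response.data is the array of post objects for that window,
      // trimmed down to the requested fields
      console.log(response.data);
    }
  }
);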
See
https://developers.facebook.com/docs/graph-api/reference/v2.2/page/feed/#read
https://developers.facebook.com/docs/graph-api/using-graph-api/v2.2#fieldexpansion
https://developers.facebook.com/docs/graph-api/using-graph-api/v2.2#paging

Related

Can't get one query parameter to pre-filter returned content in a query string URL

My apologies for asking what may be a dumb question.
I've spent a couple days searching for an answer.
I'm trying to create query string URLs that can be used as hyperlinks to take viewers to pre-filtered content on this site: https://permits.ops.usace.army.mil/orm-public#
I want to pre-filter for view mode, content type and district.
I'm able to create URLs that prefilter for view mode and content type but not district.
Example: https://permits.ops.usace.army.mil/orm-public/?mode=table&type=jds
The network request (from DevTools) with all three parameters filtered gives me this URL: https://permits.ops.usace.army.mil/ormapi/permits/search?da=false&max=25&start=0&q=vtype:jds%20AND%20*:*%20AND%20org:NWP
This returns the raw content, but not in the formatted UI.
If I plug in the last parameter (org=variable) to the URL like this, it doesn't prefilter for district as I hoped:
https://permits.ops.usace.army.mil/orm-public/?mode=table&type=jds&org=NWP
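For reference, the backend request captured in DevTools can be reproduced directly like this (a sketch only; it assumes the endpoint returns JSON and that CORS allows the call from wherever you run it, and it simply rebuilds the q parameter from the captured URL above):
var base = 'https://permits.ops.usace.army.mil/ormapi/permits/search';
var q = 'vtype:jds AND *:* AND org:NWP'; // the combined filter from the captured URL
var url = base + '?da=false&max=25&start=0&q=' + encodeURIComponent(q);

fetch(url)
  .then(function (res) { return res.json(); })
  .then(function (data) {
    console.log(data); // raw results only, not the formatted UI the site renders
  });
This only reproduces the raw search call; it doesn't explain why the public page ignores an org parameter in its own query string.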

Force Instagram profile page's source to load remotely with JavaScript

I'm creating a web-based live total like count for Instagram users. Since Instagram does not offer a way to get the total number of likes on a profile via their API, I'm scraping like counts off the target user's profile page (https://instagram.com/USERNAME) by retrieving the HTML source code and extracting the data I need from it. This has all worked fine, however only 12 posts are loaded in the source, since you have to scroll down for more posts to be loaded (you can see what I mean by going to https://instagram.com/selenagomez and scrolling down; you'll see it load briefly before displaying more posts). My goal is to be able to load all of the posts and then extract the data I need from that source file.
The number of posts that are loaded is pretty unpredictable. It seems that for verified users it loads 24 posts, while for unverified ones it loads 12, which doesn't make much sense to me. I've looked around in Instagram's HTML source files, but there doesn't seem to be any easy way to load additional posts without actually doing it yourself in a browser (which won't work, because I'm looking to accomplish all of this remotely via code).
To load the source file I'm using the following code:
var name = "selenagomez";
var url = "http://instagram.com/" + name;
$.get(url, function(response) {
  // ... regex ...
});
In the source, Instagram has like counts attached to posts in the following form:
"edge_liked_by":{"count":1234}
After the source is retrieved, I'm using a regex to strip out everything except the numbers from these "edge_liked_by":{"count":1234} entries. The numbers are then put into an array like the following:
[1, 2, 3, 4, 5 etc, etc]
After that the array is added together to get the total number of likes and displayed on the web page. All this code is working fine.
Ultimately I'm just looking to see how I can force the Instagram profile page to load all posts remotely so I can extract the like counts from the source.
Thanks in advance for any help with this.
I found another way of going about doing this by utilizing the END_CURSOR value provided by https://instagram.com/graphql/query for pagination.
For anyone wondering the link for retrieving post's JSON is as follows:
https://www.instagram.com/graphql/query/?query_hash=42323d64886122307be10013ad2dcc44&variables={"id":"PROFILE ID","first":"INT","after":"END_CURSOR"}
Where PROFILE ID is the profile's numeric ID, which can be retrieved from another JSON link: https://www.instagram.com/USERNAME?__a=1
and INT is the number of posts to fetch as JSON. It can be anywhere between 1 and 50 per request.
The trick to moving past 50 is to add the provided END_CURSOR string to the next link, which will progress to the next page of posts where you can get another 50.
Notes:
You don't have to provide an END_CURSOR value in the link if you're only getting the most recent 1-50 posts from a user. The end cursor is really only useful if you're looking to fetch beyond the 50 most recent posts.
As of now the query_hash is static and can be left at 42323d64886122307be10013ad2dcc44
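A rough sketch of the whole pagination loop (the query_hash and variables are exactly as described above; the response shape, data.user.edge_owner_to_timeline_media with page_info and edges, is an assumption based on how the endpoint responded at the time and may change):
var QUERY_HASH = '42323d64886122307be10013ad2dcc44';

function fetchPage(profileId, after) {
  var variables = { id: profileId, first: 50 };
  if (after) variables.after = after; // END_CURSOR from the previous page
  var url = 'https://www.instagram.com/graphql/query/?query_hash=' + QUERY_HASH +
    '&variables=' + encodeURIComponent(JSON.stringify(variables));
  return $.getJSON(url);
}

function sumLikes(profileId, after, total) {
  total = total || 0;
  return fetchPage(profileId, after).then(function (response) {
    // assumed response shape at the time of writing
    var media = response.data.user.edge_owner_to_timeline_media;
    media.edges.forEach(function (edge) {
      total += edge.node.edge_liked_by.count;
    });
    if (media.page_info.has_next_page) {
      return sumLikes(profileId, media.page_info.end_cursor, total);
    }
    return total;
  });
}

sumLikes('PROFILE ID').then(function (total) {
  console.log('Total likes:', total);
});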

AJAX Pagination how to handle new data which is updated in the background(Database)

I have a page with user data and AJAX pagination using PHP. User names are listed in descending order (the last inserted rows are shown on the first page). Suppose there are 3 pages with 2 entries (user names) per page. Imagine I visit the second page and it shows two names (Alan and Arun). At the same time a couple of new rows are inserted into the database, and then I request the 3rd page. But it shows the same data as the previous page, because the newly inserted rows pushed everything down. My question is: how can I handle this? It creates confusion for the user. Hope you get my point.
One easy solution I can think of would be to record a timestamp when the page is first requested. You keep this timestamp while navigating through the pages and only display entries older than it. This way, no new entries would pollute the result and mess with the pagination.
But you'd need to think of a mechanism to let the user include newer entries, e.g. a box displayed at the top saying "New entries available. Click here to refresh." A click on this box would refresh the timestamp and navigate back to page 1.
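On the client side that could look roughly like this (a sketch with made-up names: list_users.php, the before parameter and renderRows stand in for your own endpoint, parameter and rendering code; the PHP side would add a WHERE created_at <= :before condition):
var snapshot = Math.floor(Date.now() / 1000); // frozen when the list is first loaded

function loadPage(page) {
  $.getJSON('list_users.php', { page: page, before: snapshot }, function (rows) {
    renderRows(rows); // your existing rendering code
  });
}

// "New entries available. Click here to refresh." box:
$('#refresh-box').on('click', function () {
  snapshot = Math.floor(Date.now() / 1000); // include the newer entries now
  loadPage(1);                              // and go back to page 1
});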
You can also create some kind of ping service in JS to check whether there is any new data on the server. If there is, fetch it and update your app. You can also rely on a unique identifier to avoid showing duplicates.
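A minimal polling sketch for that (check_latest.php and initialLatestId are hypothetical; the endpoint would just return the newest row's id as JSON):
var latestSeenId = initialLatestId; // e.g. printed into the page when it was rendered

setInterval(function () {
  $.getJSON('check_latest.php', function (data) {
    if (data.latest_id > latestSeenId) {
      $('#refresh-box').show(); // the "New entries available" box from above
    }
  });
}, 30000); // poll every 30 seconds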

Populating multiple webpage output with one global database query?

How can one make one global database query (with PHP) and then use all the output (multiple rows) on various places on a webpage?
I read that JavaScript calls can be made for each specific field on a webpage that needs data, but this is inefficient with regard to performance.
The webpage would contain a sort of table of contents with version numbers next to each entry. Those version numbers are stored inside the database, and calling the database 20 times for the 20 different fields would be inefficient.
Any suggestions on how to run, say, a PHP query globally when the page loads and then use the different output at different locations later in the page?
QUESTION UPDATE WITH EXAMPLE OF DESIRED OUTPUT:
The webpage should show the following output:
Document Name Document Version
DEPT A DOCS:
Doc ABC 1.2
- Description of doc
Doc another doc 2.3
- Description of doc
DEPT B DOCS:
Yet another doc 0.9
- Description of doc
Doc XYZ 3.0
- Description of doc
Each document has its own version associated with it. Each document also has its own table inside the database with its associated version, and this can be queried from a Postgres function or view. I wish to query this function or view only once and then display the results in a sort of table-of-contents style (or table-like view) on the webpage.
Thank you
P.S. This is my first post here.
Make the query in a separate PHP page that is included in all the pages you want to use the information on.
At the beginning of your page, make one database query to get the data for all versions.
Using PHP, split it into an associative array with the version number as the key.
Then, in the different sections of your page, just index into that array with the version number and output the data the way you need; it will be the data for that version.
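The answer above describes doing this in PHP; purely to illustrate the "query once, key the rows, reuse everywhere" idea, here is the same pattern on the client side (get_doc_versions.php and the data-doc attribute are made-up names for this sketch):
// The hypothetical endpoint returns rows like
// [{ "name": "Doc ABC", "version": "1.2", "description": "..." }, ...]
$.getJSON('get_doc_versions.php', function (rows) {
  var byName = {};
  rows.forEach(function (row) { byName[row.name] = row; });

  // Each placeholder names the document it should show, e.g.
  // <span class="doc-version" data-doc="Doc ABC"></span>
  $('.doc-version').each(function () {
    var doc = byName[$(this).data('doc')];
    if (doc) $(this).text(doc.version);
  });
});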

What's the best way to store whether or not an rss item has been read

As a non-professional programmer, I'm trying to teach myself a little HTML and JavaScript. My learning project is a desktop gadget that will retrieve RSS items from an RSS feed.
I would like an option to toggle so the user of the gadget can decide to display all items or only new items (unread items). It's displaying only the new items that I have a question about.
I realize I have to locally store some kind of data that I can use to compare to the most recent fetch results to see if something is new or not.
What is the typical data that is used in this comparison and is it typically stored in a xml file, or some other kind of file?
Thanks.
In the RSS specification, the guid element should contain a unique identifier for each item, but not all RSS feeds respect that, so you may want to combine it with a date check.
Suggested Simple Storage:
http://example.com/link/to/file.rss guid abcd-ef-12345678
http://example.org/some/other.rss date 1283647074
This file contains info about the last item of each RSS feed in the gadget, space separated (you could comma-separate the fields as in .csv files as well): the first field is the RSS URL, the second is the method used to check the last item (either via guid or via pubDate), and the last is the value to check against. In the sample file I stored a unix timestamp instead of the pubDate string as it arrives, for storage purposes.
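A minimal sketch of how the gadget could use one of those lines (the storage format is the space-separated one above; latestItem is assumed to be the newest entry parsed from the freshly fetched feed, with guid and pubDate properties):
function isNewItem(storageLine, latestItem) {
  var parts = storageLine.split(' '); // [feed URL, method, value]
  var method = parts[1];
  var value = parts[2];

  if (method === 'guid') {
    return latestItem.guid !== value; // a different guid means an unseen item
  }
  // 'date': the stored value is the unix timestamp of the last seen item
  return Math.floor(new Date(latestItem.pubDate).getTime() / 1000) > Number(value);
}

isNewItem('http://example.org/some/other.rss date 1283647074', {
  guid: 'abcd-ef-99999999',
  pubDate: 'Mon, 06 Sep 2010 12:00:00 GMT'
}); // true, because the item is newer than the stored timestamp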
