Way to asynchronously load SharePoint profile images - javascript

I'm working on a SharePoint farm that has User Profiles enabled. We're creating a community feature which has a profile wall of all members of that community. I need to retrieve profile pictures from a search-based source and display the results, as they are returned, in an appealing, efficient way.
We have two avenues:
1: FAST search indexes the profiles of every user every 6 hours. We can run a membership query and return all members of [x] community.
2: We can use the profile API to do a search. This is slower but does not rely on the 6 hour index and therefore gives us up to date information.
We need to make this call via JavaScript, as server-side code is locked down and not an option. I'd like to write a function that calls these profiles and loads the images into a wall one at a time as they are retrieved, possibly in a timed loop so an image loads every 100 milliseconds.
I believe profile photos are stored as a text property containing the photo URL, so the URL can be set as an image's source.
How would I go about quickly loading a set of images asynchronously to provide a good user experience?

Since server-side code is not an option, I would suggest a jQuery script that renders these images. The JavaScript itself can be loaded asynchronously as suggested in this article:
https://wiki.base22.com/display/btg/How+to+load+JavaScript+dynamically+with+jQuery
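As a rough illustration, here is a minimal sketch of the timed loading the question describes. The container id and the way you obtain the photo URLs are assumptions; plug in whichever search or profile API call you end up using.

// Hypothetical sketch: given an array of profile photo URLs, append one
// image to the wall every 100 ms, adding each only once it has loaded.
function loadProfileWall(photoUrls) {
    var wall = document.getElementById("profileWall"); // assumed container element
    var index = 0;
    var timer = setInterval(function () {
        if (index >= photoUrls.length) {
            clearInterval(timer); // all images queued
            return;
        }
        var img = new Image();
        img.onload = function () {
            wall.appendChild(img); // only show the image once the browser has it
        };
        img.src = photoUrls[index]; // the profile photo property is a URL, so it can be used directly
        index++;
    }, 100);
}

Because each image is appended in its onload handler, slow images simply appear a little later instead of blocking the wall.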

Related

How can I avoid getting the same posts twice

I'm writing the code for a web application using PHP, MySQL, and JavaScript.
It's a very simple social network where users can create posts and see, like, and comment on other users' posts. On the main page it loads posts and orders them by a score based on the number of likes, the number of comments, and when the post was created.
Since I can't load every post at once (because ideally there could be millions of them), I load the top N posts, and then when I scroll down it loads more posts (with an AJAX request) and adds them at the bottom of the page. The problem is that since the posts are ordered dynamically, if I just limit the number of posts and then offset them in later requests, I sometimes get the same post twice, and some posts never get shown. How can I solve this?
Right now the JavaScript checks the id of every new post and discards the ones that are already on the page (by checking the ids of the posts on the page), but I don't like it because every time it loads more posts it has to check whether every single post is already there, and as the number of posts grows it will get very slow.
If you have plenty of processing power on the server and you're able to compute the scoring for any time in the past (e.g. taking into account only likes prior to that time), you can make your page send: (1) the timestamp of the first load, and (2) the number of posts already there (or, better, the score of the worst post it received, because posts could have been deleted since then).
Let the server compute the list as it stood at that time and send the next posts.
If you have more memory on the server than processing power, you could instead save a copy of the scored list for some time (1h, 24h…) whenever someone requests it.
And if you want the server to send the new best posts of the moment: if you know the current listing and you know what the user already got (from their timestamp and the last score they received), you can make the server remove what they already have and send them the rest.
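A minimal client-side sketch of the first approach, assuming a hypothetical /posts/more.php endpoint and an appendPostToPage rendering helper; the point is only which two values travel with each "load more" request.

// Hypothetical sketch: remember when the page was first loaded and the score
// of the worst post received so far, then send both with each request so the
// server can page through the ranking as it stood at that moment.
var firstLoadTimestamp = Date.now();
var lowestScoreSeen = null; // updated from each response

function loadMorePosts() {
    $.getJSON("/posts/more.php", {        // endpoint name is an assumption
        since: firstLoadTimestamp,
        below_score: lowestScoreSeen
    }, function (posts) {
        posts.forEach(function (post) {
            lowestScoreSeen = post.score; // responses are ordered, so the last one is the worst
            appendPostToPage(post);       // hypothetical rendering helper
        });
    });
}

Because the server only ever returns posts scoring strictly below the last score the client has seen (for the frozen timestamp), no post can appear twice and none is skipped.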

Force Instagram profile page's source to load remotely with JavaScript

I'm creating a web-based live total like count for Instagram users. Since Instagram's API does not offer a way to get the total number of likes on a profile, I'm scraping like counts off the target user's profile page by retrieving the HTML source code and extracting the data I need from it (https://instagram.com/USERNAME). This has all worked fine, however only 12 posts are loaded in the source, since you have to scroll down for more posts to be loaded (you can see what I mean by going to https://instagram.com/selenagomez and scrolling down; you'll see it pause briefly before displaying more posts). My goal is to be able to load all of the posts and then extract the data I need from that source.
The number of posts that are loaded is pretty unpredictable. It seems that for verified users it loads 24 posts, while for unverified users it loads 12, which doesn't make much sense to me. I've looked around in Instagram's HTML source but there doesn't seem to be any easy way to load additional posts without actually doing it yourself in a browser (which won't work, because I'm looking to accomplish this all remotely via code).
To load the source file I'm using the following code:
var name = "selenagomez";
var url = "http://instagram.com/" + name;
// Fetch the profile page's HTML, then extract the like counts from it
$.get(url, function (response) {
    // ... regex ...
});
In the source, Instagram has like counts attached to posts in the following form:
edge_liked_by':{'count':1234}
After the source is retrieved I'm using a regex to strip everything except the numbers from these edge_liked_by':{'count':1234} entries. The numbers are then put into an array like the following:
[1, 2, 3, 4, 5 etc, etc]
After that the array is added together to get the total number of likes and displayed on the web page. All this code is working fine.
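For reference, a minimal sketch of that extraction-and-sum step; the exact quoting in the page source may differ, so the pattern is an assumption.

// Hypothetical sketch: pull every edge_liked_by count out of the returned
// source and add them up.
function sumLikeCounts(source) {
    var pattern = /edge_liked_by\W+count\W+(\d+)/g; // tolerant of single or double quotes
    var total = 0;
    var match;
    while ((match = pattern.exec(source)) !== null) {
        total += parseInt(match[1], 10);
    }
    return total;
}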
Ultimately I'm just looking to see how I can force the Instagram profile page to load all posts remotely so I can extract the like counts from the source.
Thanks in advance for any help with this.
I found another way of going about doing this by utilizing the END_CURSOR value provided by https://instagram.com/graphql/query for pagination.
For anyone wondering, the URL for retrieving the posts' JSON is as follows:
https://www.instagram.com/graphql/query/?query_hash=42323d64886122307be10013ad2dcc44&variables={"id":"PROFILE ID","first":"INT","after":"END_CURSOR"}
Where PROFILE ID is the profile's numeric id, which can be retrieved from another JSON link: https://www.instagram.com/USERNAME?__a=1
and INT is the number of posts' JSON to fetch. It can be anywhere between 1 and 50 per request.
The trick to move past 50 is to add the provided END_CURSOR string to the next link, which will progress to the next page of posts where you can get another 50.
Notes:
You don't have to provide an END_CURSOR value in the link if you're only getting the most recent 1-50 posts from a user. The end cursor is really only useful if you're looking to fetch beyond the 50 most recent posts.
As of now the query_hash is static and can be left at 42323d64886122307be10013ad2dcc44
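A minimal sketch of paging through posts with the end_cursor. The response field names (data.user.edge_owner_to_timeline_media, page_info.end_cursor, has_next_page, edge_liked_by.count) reflect Instagram's GraphQL responses at the time of writing and are an assumption; they may change.

// Hypothetical sketch: fetch up to 50 posts, then recurse with the
// end_cursor until there are no more pages.
function fetchPosts(profileId, after) {
    var variables = { id: profileId, first: 50, after: after || null };
    var url = "https://www.instagram.com/graphql/query/" +
        "?query_hash=42323d64886122307be10013ad2dcc44" +
        "&variables=" + encodeURIComponent(JSON.stringify(variables));

    $.getJSON(url, function (data) {
        var media = data.data.user.edge_owner_to_timeline_media;
        media.edges.forEach(function (edge) {
            console.log(edge.node.edge_liked_by.count); // like count per post
        });
        if (media.page_info.has_next_page) {
            fetchPosts(profileId, media.page_info.end_cursor); // next 50 posts
        }
    });
}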

Homepage Ajax/Google Maps - Server overhead

Adding a Google Map plugin to our homepage, which updates a single marker dynamically whenever there is a new product search on our site (which we read from our database). So, "has there been a new search via our site? If yes, reposition the marker based on the new search's coords".
Currently, every n seconds (we haven't settled on a value yet) an Ajax call is made (using setInterval) to determine whether there has been a new search, and if there has, it returns a small JSON response. The script run via the Ajax call is a PHP script which queries the database for the last row in our searches table (order by desc limit 1).
So, my question is (not being a sysadmin): could this setup put an undesirable strain on our server? Should I incorporate a timeout, or something, which turns off the Ajax call after 100 goes, or after 15 minutes (I mean, who sits for 15 minutes watching markers dynamically appear on a Google map?!).
Our homepage only receives roughly 200 visits a day.
Given your statistics that the website gets 200 visits per day and that the server is just returning a small JSON response that you extract and display in the UI, a setup like this one is normal practice. You could even poll the server with AJAX every 5 seconds to get more up-to-date data; it won't cause any performance issue at this level.
Just be sure that you don't have servers that are separated geographically; otherwise you will need some other synchronization mechanism to track users' locations based on their searches.
For jQuery AJAX implementation details, please see the following page.
For project implementation, please see this tutorial.
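To address the timeout part of the question, a minimal sketch of polling every 5 seconds and stopping after 15 minutes; the endpoint name is an assumption, and the map/marker are assumed to exist already.

// Hypothetical sketch: poll for the latest search and stop after 15 minutes,
// since nobody watches the marker that long.
var pollCount = 0;
var maxPolls = (15 * 60) / 5; // 15 minutes at one request every 5 seconds

var poller = setInterval(function () {
    if (++pollCount > maxPolls) {
        clearInterval(poller); // stop polling after 15 minutes
        return;
    }
    $.getJSON("/latest-search.php", function (search) { // endpoint name is an assumption
        if (search && search.lat && search.lng) {
            marker.setPosition(new google.maps.LatLng(search.lat, search.lng));
        }
    });
}, 5000);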

Twitter Streaming API Multiple Stream vs Custom Filter

I'm building a node.js application that opens up a connection to the Twitter Streaming API (v1.1)
I would like to filter multiple keywords (hashtags & words) as separate queries. My original idea was to have multiple public streams.
However, I understand that I can only have one open connection to the Twitter streaming api per application and per IP address and that Twitter encourages us to come up with creative solutions to get what we want.
So my question is this:
If I stream with no filters, such as using statuses/sample (which I believe is 1%), and use custom JavaScript to filter the output, would I get the same tweets as if I used the API method of filtering (i.e. track='twitter')?
Edit: I have created a diagram explaining this:
As you can see, I want to know if the two outputs will be the same. I suspect that they won't be, because although both outputs are effectively the same filter, one source is a 1% sample, and maybe the other source is a 100% sample but only delivering 1% of the tweets from it.
So can someone please clarify if both outputs are the same?
Thank you.
According to the Twitter streaming API rules, if the keywords that you track don't exceed 1% of the whole global traffic, you will receive all the data (some tweets might be lost due to network issues etc., but that is not significant). This is called the garden hose (the firehose is a special stream which gives you all the data, but it is offered as a paid service through third parties such as http://datasift.com/).
So if a tweet passes through the public filtered stream, it would be part of your custom filter too, unless your keyword set is too broad.
By using custom filters you can track multiple search keywords, and if you miss some data because your keyword set is too broad, Twitter sends a track limitation notice indicating how much data you are missing.
My suggestion would be to use a custom filter and compare what you get from the stream with what you get as a result for the same keywords from Twitter. When you start getting track limitation notices from Twitter, it is time to split your keyword set into chunks and stream them through different streamers running on different machines.
The details of filter streaming are below (taken from the official website https://dev.twitter.com/docs/api/1.1/post/statuses/filter):
Returns public statuses that match one or more filter predicates. Multiple parameters may be specified which allows most clients to use a single connection to the Streaming API. Both GET and POST requests are supported, but GET requests with too many parameters may cause the request to be rejected for excessive URL length. Use a POST request to avoid long URLs.
The default access level allows up to 400 track keywords, 5,000 follow userids and 25 0.1-360 degree location boxes. If you need elevated access to the Streaming API, you should explore our partner providers of Twitter data here.
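As a concrete starting point, a minimal node.js sketch using the twit npm package (the package choice is an assumption; any streaming client that handles OAuth will do): one statuses/filter connection tracking several keywords, with a listener for limit notices.

// Hypothetical sketch: open a single filtered stream and watch for
// track limitation notices, which signal the keyword set is too broad.
var Twit = require('twit');

var T = new Twit({
    consumer_key: '...',            // credentials elided
    consumer_secret: '...',
    access_token: '...',
    access_token_secret: '...'
});

var stream = T.stream('statuses/filter', { track: ['twitter', 'obama'] });

stream.on('tweet', function (tweet) {
    console.log(tweet.text);        // each matching tweet
});

stream.on('limit', function (limitMessage) {
    // Twitter reports how many matching tweets were withheld
    console.log('limit notice:', limitMessage);
});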
I would like to answer my question with the results of my findings.
I tested both side by side in the same time frame and concluded that the custom filter method, whilst it supports multiple filters, does not provide enough tweets to create an interesting enough visualisation.
I think the only way to get something more interesting with concurrent filters is to look at other methods, but I am wondering if it's even possible. Maybe with a third party.
I have attached a screenshot of the visualisation tracking 'barackobama'. The left is the custom filter, the right is statuses/filter.
The statuses/filter API operates on all tweets, not just those returned by statuses/sample. You can tell by looking at their tweet IDs: sampled tweets all come from a specific time window within each second. So from the millisecond-resolution creation time, you can definitely tell that filter returns tweets outside of sample.
For more details about getting creation time from tweet id and the time window on sample tweets, consult this post: http://blog.falcondai.com/2013/06/666-and-how-twitter-samples-tweets-in.html
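For reference, a minimal sketch of recovering the creation time from a tweet ID, assuming the standard snowflake layout (top bits are milliseconds since Twitter's custom epoch); BigInt is used because tweet IDs overflow JavaScript numbers.

// Hypothetical sketch: decode the millisecond timestamp embedded in a snowflake ID.
function tweetIdToDate(idStr) {
    var TWITTER_EPOCH = 1288834974657n;              // 2010-11-04, in ms
    var ms = (BigInt(idStr) >> 22n) + TWITTER_EPOCH; // shift off worker/sequence bits
    return new Date(Number(ms));
}

console.log(tweetIdToDate("1212092628029698048"));   // example snowflake ID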

How to check user entrance and exit URL

I've looked at a couple different analytics programs (like Google Analytics) that will tell me what URL my users have entered my site from, and which URL they are going to when they exit.
It certainly must be possible to gather this data somehow, I just can't find any code examples of how to do it. I would imagine that it involves the JavaScript function onBeforeLoad, I just don't know how to get the URL from that point on. This is a pretty important feature, as it will help me tailor my website more towards my users' specific needs.
I appreciate the help,
Sorry, I think I was unclear originally.
One of my other sites uses a service called StatCounter, and it has a section called "Came From". This shows where users were directly before they visited your page. So, for instance, if someone googled "Inside Out Ministry" and followed the link to my site www.insideoutministry.com, my stats page would show that the user Came From www.google.com.
What would be the code to do this?
A simple approach would be to have a DB table with ip, time, lasturl and firsturl fields. Every time someone requests a page, check whether their IP is already in the DB; if not, write a new entry with firsturl set to the current URL along with their IP. Then, every time they load another page on your site, update the lasturl field. Exactly how to determine that they have left the site is less clear-cut; e.g. you could assume they have left if they haven't accessed any page on your site within 10 minutes.
To track the first/last page your users visit, you just track all pages the user visits, and the one with the earliest timestamp is the first, and the one with the latest timestamp is the last.
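For the "Came From" part specifically, the browser already exposes the previous page via document.referrer. A minimal sketch of logging it on each page view; the endpoint name is an assumption.

// Hypothetical sketch: on every page view, send the referrer and the
// current URL to your own logging endpoint.
$.post("/log-visit.php", {
    cameFrom: document.referrer,       // empty string if the URL was typed directly
    currentPage: window.location.href,
    timestamp: Date.now()
});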
