DevTools Google Chrome:
On this site (https://booyah.live/users/41874362/followers), loading the complete list of followers requires scrolling down the page continuously so that more profiles load, but eventually the page becomes so heavy that the browser crashes and has to be closed.
Is there any way to be able to collect the follow buttons without this happening?
The current script I use is:
setInterval(function(){
document.getElementById("layout-content").scrollTo(0, 50000000000000000000000000000000000000);
document.querySelectorAll('.components-button.components-button-size-mini.components-button-type-orange.desktop.components-button-inline').forEach(btn => btn.click());
}, 10)
I use setInterval to create a loop of:
1 - Scrolling the page
2 - Loading more profiles
3 - Clicking the follow buttons
My need:
For a study I'm doing to learn, the idea is for my profile to follow every follower of a single very popular profile, so that I can analyze how many people follow back on this social network.
Additional:
In this answer provided by Leftium, it is possible to follow only one profile:
https://stackoverflow.com/a/67882688/11462274
In this answer given by KCGD, it is possible to collect the entire list of followers, but the profiles are not followed during the collection; it can build a list and save the data, but it does not follow the profiles:
https://stackoverflow.com/a/67865968/11462274
I tried to contact both of them, but they haven't replied yet. Their answers point in the right direction, but I couldn't combine them: my idea was to collect the profiles using KCGD's loop and then also follow them, not just the first one as in Leftium's answer.
Would it be possible to take advantage of the loop created by KCGD's answer and, for each batch it returns, follow all of the profiles instead of just the first one as in Leftium's answer?
I tried to build this myself but was unsuccessful.
The browser crashes because too much memory is used. As you scroll down the page, the HTML DOM tree is extended and more avatar images are downloaded. These HTML and image resources are not necessary for your goal.
It is possible to avoid crashing by calling the (internal) Booyah API directly. This will be much faster and consume less resources since only the text is transferred. There are two API endpoints of interest:
GET /api/v3/users/[USERID]/followers?cursor=0&count=100
Gets list of followers following a certain user:
[USERID] is the ID of the user being studied (WEEDZAO's id).
cursor is where in the list of followers to start listing. When the page first loads, this is 0. As you scroll down, the following API calls increment this (101, 201, 301...)
count is how many results to return.
Since this is a GET call, you can open this URL in your browser.
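For example, a quick check from the dev console (while on booyah.live) could look like this; a minimal sketch, assuming the follower_list and uid fields that the scripts below also rely on:
// Sketch: fetch the first batch of followers and list their uids (run in the console on booyah.live)
const res = await fetch('/api/v3/users/41874362/followers?cursor=0&count=100');
const data = await res.json();
console.log(data.follower_list.map(f => f.uid));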
POST /api/v3/users/[USERID]/followings
Follows a user (same as clicking their 'Follow' button).
Here [USERID] is the ID of the user whose followings list will be updated (your own ID).
A payload must be sent that looks like this: {followee_uid: ID, source: 43}. I'm not sure what source is.
Also a CSRF header must be included.
Because this is a POST call, it is not possible to open this URL directly in your browser.
DELETE /api/v3/users/[USERID]/followings
There is also an API to unfollow a user. (Just for reference).
If you call these APIs from outside the browser, you will probably need to send session cookies as well.
This script will list WEEDZAO's first 10 followers, then follow the first one from the list:
You must replace USERID and CSRF_TOKEN with your own values.
You can copy/paste this code into the browser dev console.
Alternatively, you can use this code from a web scraping framework like Puppeteer.
// Find these values in dev console "Network" tab:
var CSRF_TOKEN, USERID, USERID_TARGET, main;
USERID_TARGET = '41874362';
USERID = '12345678';
CSRF_TOKEN = 'MTYy...p8wg';
main = async function() {
var body, followers, json, options, payload, response, url;
// Get list of first 10 followers
console.log(url = `/api/v3/users/${USERID_TARGET}/followers?cursor=0&count=10`);
response = (await fetch(url));
json = (await response.json());
followers = json.follower_list;
console.table(followers);
// Follow first member from list above
console.log(url = `/api/v3/users/${USERID}/followings`);
payload = JSON.stringify({
followee_uid: followers[0].uid,
source: 43
});
response = (await fetch(url, options = {
method: 'POST',
body: payload,
headers: {
'X-CSRF-Token': CSRF_TOKEN
}
}));
body = (await response.text());
return console.log(body);
};
main();
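To follow every account in followers instead of only the first, the same POST call can be repeated in a loop with a short pause between requests. Here is a minimal sketch that could be appended to the end of main(); the 500 ms delay is an arbitrary value I chose, not something the API documents:
// Sketch: follow each fetched follower, pausing briefly between POSTs
for (const follower of followers) {
  await fetch(`/api/v3/users/${USERID}/followings`, {
    method: 'POST',
    headers: { 'X-CSRF-Token': CSRF_TOKEN },
    body: JSON.stringify({ followee_uid: follower.uid, source: 43 })
  });
  await new Promise(resolve => setTimeout(resolve, 500)); // arbitrary pause to avoid hammering the API
}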
It crashes because the interval is too fast
setInterval(function(){}, 10)
You are trying to call a scroll-and-click function every 10 milliseconds (that's 100 function calls every second), which also puts pressure on the server, since it fetches new users while you scroll.
Your script could work if you adjust the interval to at least 1000 milliseconds (1 second). Of course, it may take a while, but it will work. You should also expect the page to become laggy, especially once it has already loaded tons of users, because virtual scrolling is not implemented on this page.
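For example, the same loop throttled to once per second might look like this; a sketch that assumes the container id and button selectors from the question are still correct:
// Sketch: the original scroll-and-click loop, slowed to one iteration per second
setInterval(function () {
  var container = document.getElementById("layout-content");
  container.scrollTo(0, container.scrollHeight); // scroll to the current bottom of the list
  document.querySelectorAll('.components-button.components-button-size-mini.components-button-type-orange.desktop.components-button-inline')
    .forEach(function (btn) { btn.click(); }); // click any Follow buttons currently in the DOM
}, 1000); // 1000 ms between iterations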
Even with a slower scroll rate, this still really bogs down the browser. The solution may lie in the API the page contacts. To get a user's followers, it calls the site's v3 API:
https://booyah.live/api/v3/users/41874362/followers?cursor=[LAST USER IN API RETURN]&count=100
to get all the users that would show up on the page. I wrote a script that contacts the API over and over again to collect all the follower data. Just run it in the page's console and use print() when you want to export the data,
then copy/paste it into a .json file.
//WARNING: THIS SCRIPT USES RECURSION. I have no clue how long the followers list goes, so use at your own risk.
var followers = []; //data collected from the API
function getFollowers(cursor){
  httpGet(`https://booyah.live/api/v3/users/41874362/followers?cursor=${cursor}&count=100`, function (data) { //returns data from the API for the given cursor (user at the end of the last follower chunk)
    console.log("got cursor: " + cursor);
    var _followChunk = JSON.parse(String(data));
    console.log(_followChunk);
    if (!_followChunk.follower_list || _followChunk.follower_list.length === 0) { //no more followers: stop recursing
      console.log("done, collected " + followers.length + " followers");
      return;
    }
    followers.push(..._followChunk.follower_list); //saves the followers from this chunk (flattened into one array)
    var last_user = _followChunk.follower_list[_followChunk.follower_list.length - 1]; //last user of the chunk (cursor for the next chunk)
    setTimeout(function(){ //1 second timeout so that the API doesn't return "too many requests"; not necessary, but you should probably leave this on
      getFollowers(last_user.uid); //get the next chunk
    }, 1000);
  });
}
var print = function(){ console.log(JSON.stringify(followers)); };
getFollowers(0); //get the initial set of followers (cursor 0)
function httpGet(theUrl, callback) {
  var xmlHttp = new XMLHttpRequest();
  xmlHttp.open("GET", theUrl, true); // asynchronous request; the callback runs when the response arrives
  xmlHttp.setRequestHeader("Cache-Control", "no-store");
  xmlHttp.onload = function () { callback(xmlHttp.responseText); };
  xmlHttp.send(null);
}
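If copy/pasting a huge string out of the console is awkward, a small helper along these lines can download the collected array as a file instead; a sketch, and the function name is mine:
// Sketch: download the collected `followers` array as a .json file
function downloadFollowers(filename) {
  var blob = new Blob([JSON.stringify(followers, null, 2)], { type: 'application/json' });
  var a = document.createElement('a');
  a.href = URL.createObjectURL(blob);
  a.download = filename || 'followers.json';
  a.click();
  URL.revokeObjectURL(a.href);
}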
If you really only need the button elements, then the only way is to scroll all the way down each time new followers load, because the page only creates the elements as you scroll down.
This is a fully working solution that I have tested in my own Chrome browser with a fresh account, successfully following all the follower accounts of the account you are targeting.
UPDATE (2021-06-18)
I've updated my solution to a drastically improved and faster function, rewritten with async/await. This new function reduces the estimated runtime from ~45min to ~10min. 10min is still a long while, but that's to be expected considering the large number of followers the user you are targeting has.
After a few iterations, the latest function not only improves speed, performance, and error reporting, it also extends what the function can do. I provide several examples of how to use it below my solution.
For the sake of de-cluttering my answer, I am removing my older function from this solution altogether, but you can still reference it in my solution's edit history if you like.
TL;DR
Here is the final, fastest, working solution. Make sure to replace PUT_YOUR_CSRF_TOKEN_HERE with your own CSRF token value. Detailed instructions on how to find your CSRF token are below.
You must run this in your console on the Booyah website in order to avoid CORS issues.
const csrf = 'PUT_YOUR_CSRF_TOKEN_HERE';
async function booyahGetAccounts(uid, type = 'followers', follow = 1) {
if (typeof uid !== 'undefined' && !isNaN(uid)) {
const loggedInUserID = window.localStorage?.loggedUID;
if (uid === 0) uid = loggedInUserID;
const unfollow = follow === -1;
if (unfollow) follow = 1;
if (loggedInUserID) {
if (csrf) {
async function getUserData(uid) {
const response = await fetch(`https://booyah.live/api/v3/users/${uid}`),
data = await response.json();
return data.user;
}
const loggedInUserData = await getUserData(loggedInUserID),
targetUserData = await getUserData(uid),
followUser = uid => fetch(`https://booyah.live/api/v3/users/${loggedInUserID}/followings`, { method: (unfollow ? 'DELETE' : 'POST'), headers: { 'X-CSRF-Token': csrf }, body: JSON.stringify({ followee_uid: uid, source: 43 }) }),
logSep = (data = '', usePad = 0) => typeof data === 'string' && usePad ? console.log((data ? data + ' ' : '').padEnd(50, '=')) : console.log('='.repeat(50),data,'='.repeat(50));
async function getList(uid, type, follow) {
const isLoggedInUser = uid === loggedInUserID;
if (isLoggedInUser && follow && !unfollow && type === 'followings') {
follow = 0;
console.warn('You already follow your followings. `follow` mode switched to `false`. Followings will be retrieved instead of followed.');
}
const userData = await getUserData(uid),
totalCount = userData[type.slice(0,-1)+'_count'] || 0,
totalCountStrLength = totalCount.toString().length;
if (totalCount) {
let userIDsLength = 0;
const userIDs = [],
nickname = userData.nickname,
nicknameStr = `${nickname ? ` of ${nickname}'s ${type}` : ''}`,
alreadyFollowedStr = uid => `User ID ${uid} already followed by ${loggedInUserData.nickname} (Account #${loggedInUserID})`;
async function followerFetch(cursor = 0) {
const fetched = [];
await fetch(`https://booyah.live/api/v3/users/${uid}/${type}?cursor=${cursor}&count=100`).then(res => res.json()).then(data => {
const list = data[type.slice(0,-1)+'_list'];
if (list?.length) fetched.push(...list.map(e => e.uid));
if (fetched.length) {
userIDs.push(...fetched);
userIDsLength += fetched.length;
if (follow) fetched.forEach(u => followUser(u)); // follow every account fetched in this batch
console.log(`${userIDsLength.toString().padStart(totalCountStrLength)} (${(userIDsLength / totalCount * 100).toFixed(4)}%)${nicknameStr} ${follow ? 'followed' : 'retrieved'}`);
if (fetched.length === 100) {
followerFetch(data.cursor);
} else {
console.log(`END REACHED. ${userIDsLength} accounts ${follow ? 'followed' : 'retrieved'}.`);
if (!follow) logSep(targetList);
}
}
});
}
await followerFetch();
return userIDs;
} else {
console.log(`This account has no ${type}.`);
}
}
logSep(`${follow ? 'Following' : 'Retrieving'} ${targetUserData.nickname}'s ${type}`, 1);
const targetList = await getList(uid, type, follow);
} else {
console.error('Missing CSRF token. Retrieve your CSRF token from the Network tab in your inspector by clicking into the Network tab item named "bug-report-claims" and then scrolling down in the associated details window to where you see "x-csrf-token". Copy its value and store it into a variable named "csrf" which this function will reference when you execute it.');
}
} else {
console.error('You do not appear to be logged in. Please log in and try again.');
}
} else {
console.error('UID not passed. Pass the UID of the profile you are targeting to this function.');
}
}
booyahGetAccounts(41874362);
Detailed explanation of the process
As the function runs, it logs the progress to the console, both how many users have been followed so far, and how much progress has been made percentage-wise, based on the total number of followers the profile you are targeting has.
Retrieving your CSRF token
The only manual portion of this process is retrieving your CSRF token, which is quite simple. Once you log into Booyah, open the Network tab of your Chrome DevTools, click on the item named bug-report-claims, then scroll all the way down the details pane that appears on the right. There you should see x-csrf-token. Store this value as a string variable named csrf in your console, which my function will reference when it runs. This is necessary in order to use the POST method to follow users.
The solution
The function loops through the followers (or followings, depending on the type parameter) of the account you are targeting in batches of 100 (the maximum allowed per GET request) and follows them all. When the end of each batch is reached, the next batch is triggered automatically and recursively.
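Stripped of logging and edge cases, the core batch-plus-recursion pattern looks roughly like this; a sketch only, assuming csrf from above and a loggedInUserID variable holding your own user ID (for example window.localStorage.loggedUID):
// Sketch: fetch a batch of 100 followers, follow each uid, then recurse with the cursor from the response
async function followBatch(targetUid, cursor = 0) {
  const res = await fetch(`https://booyah.live/api/v3/users/${targetUid}/followers?cursor=${cursor}&count=100`);
  const data = await res.json();
  const uids = (data.follower_list || []).map(u => u.uid);
  for (const uid of uids) {
    await fetch(`https://booyah.live/api/v3/users/${loggedInUserID}/followings`, {
      method: 'POST',
      headers: { 'X-CSRF-Token': csrf },
      body: JSON.stringify({ followee_uid: uid, source: 43 })
    });
  }
  if (uids.length === 100) await followBatch(targetUid, data.cursor); // a full batch means there may be more
}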
Version 3 (Fastest and most flexible, using async/await and fetch())
My previous two solution versions can be referenced in this answer's edit history. The full Version 3 code is the block shown in the TL;DR section above; the usage notes below apply to it.
Usage
To run the function, just call it with the desired User ID as an argument; in your example case, 41874362. The function call would look like this:
booyahGetAccounts(41874362);
The function is quite flexible in its abilities though. booyahGetAccounts() accepts three parameters, but only the first is required.
booyahGetAccounts(
uid, // required, no default
type = 'followers', // optional, must be 'followers' or 'followings' -> default: 'followers'
follow = 1 // optional, must be 0, 1, or -1, -> default: 1 (boolean true)
)
The second parameter, type, allows you to choose whether you would like to process the targeted user's followers or followings (the users which that user follows).
The third parameter allows you to choose whether you would like to follow/unfollow the returned users or only retrieve their User IDs. This defaults to 1 (boolean true) which will follow the users returned, but if you only want to test the function and not actually follow the returned users, set this to a falsy value such as 0 or false. Using -1 will unfollow the users returned.
This function intelligently retrieves your own User ID for you from the window.localStorage object, so you don't need to retrieve that yourself. If you would like to process your own followers or followings, simply pass 0 as the main uid parameter value, and the function will default the uid to your own User ID.
Because you can't re-follow users you already follow, if you try to follow your followings, the function will produce the warning You already follow your followings. 'follow' mode switched to 'false'. Followings will be retrieved instead of followed. and instead return them as if you had set the follow parameter to false.
However, it can be very useful to process your own list. For example, if you want to follow all of your own followers back, you could do so like this:
booyahGetAccounts(0); // `type` and `follow` parameters already default to the correct values here
On the other hand, if you were strategically using a follow/unfollow technique in order to increase your number of followers and needed to unfollow all of your followers, you could do so like this:
booyahGetAccounts(0, 'followers', -1);
By setting the follow parameter value to -1, you instruct the function to run its followUser function on all returned User IDs using the DELETE method instead of the POST method, thereby unfollowing those users returned instead of following them.
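For a one-off unfollow of a single account, the underlying call the function makes boils down to this; a sketch, where 12345 is a placeholder uid and csrf is your token from above:
// Sketch: unfollow a single user via the same endpoint, using the DELETE method
fetch(`https://booyah.live/api/v3/users/${window.localStorage.loggedUID}/followings`, {
  method: 'DELETE',
  headers: { 'X-CSRF-Token': csrf },
  body: JSON.stringify({ followee_uid: 12345, source: 43 }) // 12345 is a placeholder uid
});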
Desired outcome / Function call
Follow all your own followers: booyahGetAccounts(0, 'followers');
Unfollow all your own followers: booyahGetAccounts(0, 'followers', -1);
Unfollow all your own followings: booyahGetAccounts(0, 'followings', -1);
Follow users that follow User ID #12345: booyahGetAccounts(12345, 'followers');
Follow users followed by User ID #12345: booyahGetAccounts(12345, 'followings');
Retrieve User IDs of accounts following User ID #12345: booyahGetAccounts(12345, 'followers', 0);
Retrieve User IDs of accounts followed by User ID #12345: booyahGetAccounts(12345, 'followings', 0);
Other notes
To improve the performance of this function, as it is very heavy, I've replaced all calls to userIDs.length with a dedicated userIDsLength variable, which I increment with += on each iteration rather than calling length each time. Similarly, I store the length of the stringified totalCount in the variable totalCountStrLength rather than calling totalCount.toString().length on each iteration. Because this is a rather heavy function, it is possible for your browser window to crash; however, it should eventually complete.
If the page appears to crash by flickering and closing the console on its own, FIRST try to re-open the console without refreshing the page. In my case, the inspector occasionally closed on its own, likely due to the load from the function, but when I opened the console again, the function was still running.
I'm trying to return a boolean from a function indicating whether a user is blocked or not, but I can't: I save a variable inside the chrome storage sync callback function, but the value doesn't stick. Even though my local storage actually has entries in that list, it shows up as empty, because the variable never receives the values from data.blocked_users.
//Return true if the user by url is blocked, false otherwise
function isblocked(blocked_id) {
var blocked_users = [];//The list of blocked users
chrome.storage.sync.get(['blocked_users'], function(data){
blocked_users = data.blocked_users;
});
console.log("U" + blocked_users + "U");
//Goes through blocked users
for(var current_blocked of blocked_users) {
console.log("B"+current_blocked+"B");
if(current_blocked == blocked_id) return true;//If the user is blocked
}
console.log(blocked_users+"QQ");
return false;
}
It says "UU" and "QQ" despite my actual list being not empty.
I've tried many things but can't find a solution.
chrome.storage.sync.get does not mean the call is synchronous. The "sync" means the data is synchronized across Chrome via your Google account. As far as I can see, there is no way to read data from this storage synchronously; you have to use the callback.
The code would look like this:
function getIsBlocked(blocked_id, callback) {
  chrome.storage.sync.get(['blocked_users'], function (data) {
    var blocked_users = data.blocked_users || []; //the list of blocked users (empty if nothing is stored yet)
    console.log("U" + blocked_users + "U");
    //go through the blocked users
    for (var current_blocked of blocked_users) {
      console.log("B" + current_blocked + "B");
      if (current_blocked == blocked_id) {
        callback(true); //the user is blocked
        return;
      }
    }
    console.log(blocked_users + "QQ");
    callback(false);
  });
}
getIsBlocked(blocked_id, function (isblocked) {
  //do sth
});
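If you prefer async/await over callbacks, you can also wrap the lookup in a Promise; a sketch assuming the same 'blocked_users' key (note that in Manifest V3, chrome.storage.sync.get itself returns a Promise when called without a callback):
// Sketch: Promise-based variant of the same lookup
function isBlockedAsync(blocked_id) {
  return new Promise(function (resolve) {
    chrome.storage.sync.get(['blocked_users'], function (data) {
      var blocked_users = data.blocked_users || [];
      resolve(blocked_users.some(function (id) { return id == blocked_id; }));
    });
  });
}
// usage: isBlockedAsync(blocked_id).then(function (blocked) { /* do sth */ });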
I am working on an app to store data offline. My problem is that when I try to retrieve the data from local storage for update/edit, it always returns the id of the first item rather than the id of the record currently in view.
Please, what am I doing wrong?
Here is my code for loading employees:
// load cases from localStorage
var employees;
if (localStorage.getItem('employees')) {
employees = JSON.parse(localStorage.getItem('employees'));
} else {
// If no cases, create and save them
employees = [];
// offling storing of our cases
localStorage.setItem('employees', JSON.stringify(employees));
}
// show case listing in list view page
var showEmployees = function () {
//erase existing content
$('#employee_list').html('');
//insert each employee
for (var i = 0; i<employees.length; i++) {
addEmployees(employees[i]);
}
};
Here is my code to add an employee to list view:
//add an eliment to list view
var addEmployees = function (empData) {
//HTML content of one list element
var listElementHTML = '<li><a class="employee_list" ui-btn ui-btn-e ui-btn-icon-right ui-icon-carat-r" data-transition="fade" data-split-icon="delete" href="#item'+empData.id+'">' + empData.employeename + '<br> ' + empData.dateofbirth + '</br></a></li>';
//appending the HTML code to list view
$('#employee_list').append(listElementHTML);
};
Here is my code for Edit function:
//User input to edit form
$('#edit_employee_page').on('click' , function () {
var editEmployee = JSON.stringify({
id: employees.length+1,
employeeno: $('#employeeno').val(),
employeename:$('#employeename').val(),
stateoforigine:$('#stateoforigine').val(),
employeephone: $('#employeephone').val(),
dateofbirth:$('#dateofbirth').val()
});
//Alter the slected data
localStorage.setItem("employees", JSON.stringify(employees));
return true;
});
for (var i in employees) {
var id = JSON.parse(localStorage.getItem(employees[i]));
}
Here is my code for the Edit button:
//register Edit button
$('.edit_button').live('click', function (e) {
alert('I was Cliked!');
e.stopPropagation();
$.each(employees, function(a, b) {
//if(b.id == employees[i]){
$('#id').val(b.id);
$('#employeeno').val(b.employeeno);
$('#employeename').val(b.employeename);
$("#stateoforigine").val(i.stateoforigine);
$('#employeephone').val(b.employeephone);
$('#dateofbirth').val(b.dateofbirth);
$("#id").attr("readonly","readonly");
$('#employeeno').focus();
$.mobile.changePage('#edit_employee_page');
return false;
//}
});
});
Here is my local Storage:
[
{"id":1,
"employeeno":"DEF/234/20014",
"employeename":"Bill Gates",
"stateoforigine":"Osun",
"employeephone":"080765432",
"dateofbirth":"12/11/1965"},
{"id":2,
"employeeno":"DEF/234/20014",
"employeename":"Bill Gates",
"stateoforigine":"Osun",
"employeephone":"080765432",
"dateofbirth":"12/11/1966"},
{"id":3,
"employeeno":"DEF/234/20014",
"employeename":"Bill Gates",
"stateoforigine":"Osun",
"employeephone":"080765432",
"dateofbirth":"12/11/1966"},
{"id":4,
"employeeno":"DAST/003/2003",
"employeename":"Gold Base",
"stateoforigine":"",
"employeephone":"",
"dateofbirth":"12/03/1986"}
]
Thanks for helping me out
The way you are storing your employees into localStorage is correct, but the way you are getting them out is incorrect. You stored your employees by stating:
localStorage.setItem("employees", JSON.stringify(employees));
So, in order to retrieve them, you must use:
var employees = JSON.parse(localStorage.getItem("employees"));
You see, you stored the data as a string with a key of "employees"; therefore, you can only retrieve it by that key. Since all data stored in localStorage is saved as a string, you must use JSON.parse() to convert the data back into an object - an array in this case. Then you can iterate over your employees.
Update:
You should be running this code as soon as the page is rendered (see below). I'm not sure how you're doing that - if you're using an IIFE or jQuery's document.ready() function. I don't think it's necessary to store an empty array into localStorage if none were loaded initially, so, I took your else clause out.
var employees = [];
if (localStorage.getItem('employees') !== null) {
employees = JSON.parse(localStorage.getItem('employees'));
}
Debug this line-by-line when it runs and make positive your employees variable contains data. If it doesn't contain data, well then, there's nothing to edit.
If, however, there is data, then execute your showEmployees() function. Oddly, I'm not seeing in your code where you actually call this. Is it bound to a button or action in your UI? Also, what is that for loop doing after your $('#edit_employee_page') click event function? It's trying to read data from localStorage improperly and it does nothing.
I think if you simply stepped through your code one line at a time using breakpoints and desk-checking your inputs/outputs you'd find out where you're going wrong.
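For example, one way to wire that up is to call showEmployees() once the page is ready; a sketch using a jQuery ready handler, since the rest of your code already uses jQuery:
// Sketch: render the employee list as soon as the page is ready
$(function () {
  // `employees` is assumed to have been loaded from localStorage as shown above
  showEmployees();
});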
It also appears that there's a disconnect in your code. Maybe you left out some lines: you define a string editEmployee, but then out of the blue you store JSON.stringify(employees), while editEmployee is never used:
$('#edit_employee_page').on('click' , function(){
var editEmployee = JSON.stringify({
id: employees.length+1,
//........
});
//Alter the slected data
localStorage.setItem("employees", JSON.stringify(employees));
return true;
});
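What that click handler probably needs to do instead is update the matching entry in the employees array and then save the whole array back; a sketch based on your field names, assuming #id holds the id of the employee being edited:
// Sketch: apply the edits to the matching employee, then persist the whole array
$('#edit_employee_page').on('click', function () {
  var editedId = Number($('#id').val());
  for (var i = 0; i < employees.length; i++) {
    if (employees[i].id === editedId) {
      employees[i].employeeno = $('#employeeno').val();
      employees[i].employeename = $('#employeename').val();
      employees[i].stateoforigine = $('#stateoforigine').val();
      employees[i].employeephone = $('#employeephone').val();
      employees[i].dateofbirth = $('#dateofbirth').val();
      break;
    }
  }
  localStorage.setItem('employees', JSON.stringify(employees));
  return true;
});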
I had a similar task to do, and I did it this way: I passed the dynamic id through as an id attribute on the element:
id="'+empData.id+'"
and then, inside the click handler, read it back:
$('.edit_button').live('click', function (e) {
alert('I was Clicked!');
var empId = $(this).attr('id');
// ...
The rest of the code is the same.
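Putting the two pieces together, the edit handler could look roughly like this; a sketch based on the question's field names, assuming addEmployees renders the employee id into an id attribute as described above:
// Sketch: find the clicked employee by the id stored on the element, then fill the edit form
$('.edit_button').live('click', function (e) {
  e.stopPropagation();
  var empId = Number($(this).attr('id'));
  $.each(employees, function (a, b) {
    if (b.id === empId) {
      $('#id').val(b.id).attr('readonly', 'readonly');
      $('#employeeno').val(b.employeeno);
      $('#employeename').val(b.employeename);
      $('#stateoforigine').val(b.stateoforigine);
      $('#employeephone').val(b.employeephone);
      $('#dateofbirth').val(b.dateofbirth);
      $('#employeeno').focus();
      $.mobile.changePage('#edit_employee_page');
      return false; // stop iterating once the match is found
    }
  });
});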
In my Win 8 app, based on a blank template, I have successfully added the Search contract, and it seems to work despite the fact that I have not linked it to any data yet. For now, when I search any term in my app it simply takes me to the searchResults page with the message "No Results Found", which is what I was expecting initially.
Now what I wish to do is link my database into the searchResults.js file so that I can query my database. Outside of the search contract I have already tested and connected my DB, and it works; I did this using WinJS.xhr to call my web service, which in turn queries my database and returns a JSON object.
In my test I only hardcoded the URL, however I now need to do two things: move the test WinJS.xhr code for connecting to my DB into the search contract code, and change the hardcoded URL to a dynamic URL that accepts the user's search term.
From what I understand of Win 8 search so far, the actual data-querying part of the search contract is as follows:
// This function populates a WinJS.Binding.List with search results for the provided query.
_searchData: function (queryText) {
var originalResults;
// TODO: Perform the appropriate search on your data.
if (window.Data) {
originalResults = Data.items.createFiltered(function (item) {
return (item.termName.indexOf(queryText) >= 0 || item.termID.indexOf(queryText) >= 0 || item.definition.indexOf(queryText) >= 0);
});
} else {
originalResults = new WinJS.Binding.List();
}
return originalResults;
}
});
The code that I need to transfer into this section is below. I have to admit I do not currently understand the code block above and have not found a good resource for breaking it down line by line, so if someone can help with that it will be truly awesome! Here is my code; I basically want to integrate it and then make searchString equal to the user's search term.
var testTerm = document.getElementById("definition");
var testDef = document.getElementById("description");
var searchString = 2;
var searchFormat = 'JSON';
var searchurl = 'http://www.xxx.com/web-service.php?termID=' + searchString +'&format='+searchFormat;
WinJS.xhr({url: searchurl})
.done(function fulfilled(result)
{
//Show Terms
var searchTerm = JSON.parse(result.responseText);
// terms is the key of the object; on each iteration of the loop, terms is assigned the name of the object key
// and the if statement is evaluated
for (terms in searchTerm) {
//terms will find key "terms"
var termName = searchTerm.terms[0].term.termName;
var termdefinition = searchTerm.terms[0].term.definition;
//WinJS.Binding.processAll(termDef, termdefinition);
testTerm.innerText = termName;
testDef.innerText = termdefinition;
}
},
function error(result) {
testDef.innerHTML = "Got Error: " + result.statusText;
},
function progress(result) {
testDef.innerText = "Ready state is " + result.readyState;
});
I will try to provide some explanation for the snippet that you didn't quite understand. I believe the code you have above comes from the default code added by Visual Studio. Please see the explanations in the inline comments.
/**
* This function populates a WinJS.Binding.List with search results
* for the provided query by applying a filter on the data source
* @param {String} queryText - the search query acquired from the Search Charm
* @return {WinJS.Binding.List} the filtered result of your search query.
*/
_searchData: function (queryText) {
var originalResults;
// window.Data is the data source of the List View
// window.Data is an object defined in YourProject/js/data.js
// at line 16: WinJS.Namespace.define("Data", ...
// Data.items is an array that's being grouped by functions in data.js
if (window.Data) {
// apply a filter to filter the data source
// if you have your own search algorithm,
// you should replace below code with your code
originalResults = Data.items.createFiltered(function (item) {
return (item.termName.indexOf(queryText) >= 0 ||
item.termID.indexOf(queryText) >= 0 ||
item.definition.indexOf(queryText) >= 0);
});
} else {
// if there is no data source, then we return an empty WinJS.Binding.List
// such that the view can be populated with 0 result
originalResults = new WinJS.Binding.List();
}
return originalResults;
}
Since you are thinking about doing the search through your own web service, you can always make your _searchData function asynchronous and have your view wait on the search result returned from the service.
_searchData: function (queryText) {
  var dfd = new $.Deferred();
  // make an xhr call to your service with queryText
  WinJS.xhr({
    url: your_service_url,
    data: queryText.toLowerCase()
  }).done(function (response) {
    var result = parseResultArrayFromResponse(response);
    var resultBindingList = new WinJS.Binding.List(result);
    dfd.resolve(resultBindingList);
  }).fail(function (response) {
    var error = parseErrorFromResponse(response);
    var emptyResult = new WinJS.Binding.List();
    dfd.reject(emptyResult, error);
  });
  return dfd.promise();
}
...
// whoever calls searchData would need to asynchronously deal with the service response.
_searchData(queryText).done(function (resultBindingList) {
//TODO: Display the result with resultBindingList by binding the data to view
}).fail(function (resultBindingList, error) {
//TODO: proper error handling
});
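As for making the URL dynamic, you can build it from queryText with the same parameters as your test call; a sketch that assumes your web service accepts the search term in place of the hard-coded termID value:
// Sketch: build the service URL from the user's search term
var searchFormat = 'JSON';
var searchurl = 'http://www.xxx.com/web-service.php?termID=' +
  encodeURIComponent(queryText) + '&format=' + searchFormat;
WinJS.xhr({ url: searchurl }).done(function (result) {
  var searchTerm = JSON.parse(result.responseText);
  // ...build and return a WinJS.Binding.List from searchTerm.terms here, as in _searchData above
});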
So I'm using node.js and the instagram-node-lib module to download metadata for Instagram posts. I have a couple of hashtags I want to search for, and I want to download all existing posts (handling request failures during pagination) as well as monitor all new posts.
I have managed to crack the first part: downloading all existing posts and handling failure. (I noticed that sometimes the Instagram API would just fail on me, so I've added redundancy to remember the last successfully downloaded page and try again from that point.) For anyone interested, here is my code (note that I use Postgres to save the posts, and I've abbreviated/obfuscated some of the code for readability and for commercial reasons). Apologies for the length, but I think it will come in useful to someone:
var db = new (require('./postgres'))
,api = require("instagram-node-lib")
;
var HASHTAGS = ["fluffy", "kittens"] //this is just an example!
,CLIENT_ID = "YOUR_CLIENT_ID"
,CLIENT_SECRET = "YOUR_CLIENT_SECRET"
,HOST = "https://api.instagram.com"
,PORT = 443
,PATH = "/v1/media/popular?client_id=" + CLIENT_ID
;
var hashtagIndex = 0
,settings
;
/**
* Initialise the module for use
*/
exports.initialise = function(){
api.set("client_id", CLIENT_ID);
api.set("client_secret", CLIENT_SECRET);
if( !settings){
settings = {
hashtags: []
}
for( var i in HASHTAGS){
settings.hashtags[i] = {
name: HASHTAGS[i],
maxTagId: null,
minTagId: null,
nextMaxTagId: null,
}
}
}
// console.log(settings);
db.initialiseSettings(); //I haven't included the code for this - basically just loads settings from the database, overwriting the defaults above if they exist, otherwise it creates them using the above object. I store the settings as a JSON object in the DB and parse them on load
execute();
}
function execute(){
var params = {
name: HASHTAGS[hashtagIndex],
complete: function(data, pagination){
var hashtag = settings.hashtags[hashtagIndex];
//from scratch
if( !hashtag.maxTagId){
console.log('Downloading old posts from scratch');
getOldPosts();
}
//still loading old (previously failed)
else if( hashtag.nextMaxTagId){
console.log('Downloading old posts from last saved position');
getOldPosts(hashtag.nextMaxTagId);
}
//new posts only
else {
console.log('Downloading new posts only');
getNewPosts(hashtag.minTagId);
}
},
error: function(msg, obj, caller){
apiError(msg, obj, caller);
}
}
api.tags.info(params);
}
function getOldPosts(maxTagId){
console.log();
var params = {
name: HASHTAGS[hashtagIndex],
count: 100,
max_tag_id: maxTagId || undefined,
complete: function(data, pagination){
console.log(pagination);
var hashtag = settings.hashtags[hashtagIndex];
//reached the end
if( pagination.next_max_tag_id == hashtag.maxTagId){
console.log('Downloaded all posts for #' + HASHTAGS[hashtagIndex]);
hashtag.nextMaxTagId = null; //reset nextMaxTagId - that way next time we execute the script we know to just look for new posts
saveSettings(function(){
next();
}); //Another function I haven't include - just saves the settings object, overwriting what is in the database. Once saved, executes the next() function
}
else {
//from scratch
if( !hashtag.maxTagId){
//these values will be saved once all posts in this batch have been saved. We set these only once, meaning that we have a baseline to compare to - enabling us to determine if we have reached the end of pagination
hashtag.maxTagId = pagination.next_max_tag_id;
hashtag.minTagId = pagination.min_tag_id;
}
//if there is a failure then we know where to start from - this is only saved to the database once the posts are successfully saved to database
hashtag.nextMaxTagId = pagination.next_max_tag_id;
//again, another function not included. saves the posts to database, then updates the settings. Once they have completed we get the next page of data
db.savePosts(data, function(){
saveSettings(function(){
getOldPosts(hashtag.nextMaxTagId);
});
});
}
},
error: function(msg, obj, caller){
apiError(msg, obj, caller);
//keep calm and try again - this is our failure redundancy
execute();
}
}
var posts = api.tags.recent(params);
}
/**
* Still to be completed!
*/
function getNewPosts(minTagId){
}
function next(){
if( hashtagIndex < HASHTAGS.length - 1){
console.log("Moving onto the next hashtag...");
hashtagIndex++;
execute();
}
else {
console.log("All hashtags processed...");
}
}
OK, so here is my dilemma about solving the next piece of the puzzle: downloading new posts (in other words, only those posts that have appeared since I last downloaded everything). Should I use Instagram subscriptions, or is there a way to implement paging similar to what I've already used? I'm worried that if I use the former solution and there is a problem with my server that takes it down for a period of time, I will miss some posts. I'm worried that if I use the latter solution, it might not be possible to page through the records, because the Instagram API seems to be set up for forward paging rather than backward paging.
I've attempted to post questions in the Google Instagram API Developers Group a couple of times, and none of my messages seem to be appearing in the forum, so I thought I'd resort to trusty Stack Overflow.