I want to get all of my Facebook friends every day, and I was searching for the most efficient way to do it.
My first approach was to create a Selenium WebDriver script. It opens a web browser, visits /me/friends, scrolls down automatically until the list finishes, and then parses the names. It works pretty well, but it takes some time (approximately 4-5 minutes).
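For reference, a minimal sketch of that scroll-and-parse idea, here in JavaScript with the selenium-webdriver package (my actual script is the Ruby gist linked in the update below; NAME_SELECTOR is a placeholder, since Facebook's markup changes often):

```js
// Sketch of the scroll-until-done approach, assuming you are already
// logged in. NAME_SELECTOR is a placeholder for whatever element
// currently holds a friend's name in Facebook's markup.
const { Builder, By } = require('selenium-webdriver');

async function fetchFriendNames() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://www.facebook.com/me/friends');

    // Keep scrolling until the page height stops growing, i.e. the
    // infinite-scroll list has finished loading.
    let lastHeight = 0;
    while (true) {
      const height = await driver.executeScript('return document.body.scrollHeight;');
      if (height === lastHeight) break;
      lastHeight = height;
      await driver.executeScript('window.scrollTo(0, document.body.scrollHeight);');
      await driver.sleep(2000); // give the next batch time to load
    }

    // Parse the names out of the fully loaded list.
    const elements = await driver.findElements(By.css('NAME_SELECTOR'));
    return Promise.all(elements.map(el => el.getText()));
  } finally {
    await driver.quit();
  }
}
```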
After some research on the Graph API, it appears that it isn't possible to get your complete friends list that way.
Another approach is to request a download of all your data from Facebook, but Facebook sends you an email, you need to wait for the archive, download it, etc., so again you have to wait.
The final approach was to use a Chrome extension (the name is Who Deleted Me), which worked and was faster than my approach. I am wondering how this extension works; one thing I noticed is that it didn't find a friend of mine who has passed away. I don't like a third-party extension having my data and I prefer to do it on my own. So I am wondering: is there an endpoint which returns your public friends? Or what approach does this Chrome extension take?
Is there any other programmatic way of fetching your friends? Sure, you could use a headless browser and make the same requests to /me/friends that a web browser makes when it scrolls down, but it is pretty difficult to work out which AJAX calls are the correct ones.
Update:
My scraping approach: https://gist.github.com/johndel/cd01a854e8bf36d9d30b44758607cf3d#file-check_friends-rb
It isn't the best code I can write, just a hack to see it done; replace the sendkeys values with your email / password and it will do the trick. My approach gets all of my friends (the Chrome extension approach didn't find a friend of mine, which is why I think it hits another endpoint and does it differently).
/me/friends is a Graph API endpoint that will only get you a list of friends who authorized your App. It is not possible at all to get friends who did not authorize your App. Everything that WOULD make it possible is not allowed on Facebook, because it involves scraping.
Scraping Terms: https://www.facebook.com/apps/site_scraping_tos_terms.php
More information: Facebook Graph API v2.0+ - /me/friends returns empty, or only friends who also use my app
I found another URL where I can fetch the users, and it seems there are a bunch of them and more than one solution. So, a headless and faster approach is this gist:
https://gist.github.com/johndel/29afec4b159203baf7521cd5a50dbb60 and I guess it can be optimized even further with threads (maybe Typhoeus and Hydra).
Related
First off, sorry if this question is a 'bad' one; I am very new to the world of web apps, APIs, and JavaScript.
As the title says, I am trying to get a user's SteamID using the Steam API from JavaScript.
This is for a web app that needs to get information about the games a user plays (which, from my understanding, is only obtainable using this special ID).
My initial thought on how to do this was to use OpenID, so that the user gives Steam their info and the ID is returned to my app.
I have seen that there are lots of examples of this using PHP; however, because of project requirements, everything must run in the browser. There is no backend server to run PHP on, so it is not an option whatsoever.
I have spent the better part of a day trying to figure this out and have made no real progress; everything seems to lead back to using PHP. (Maybe what I need is not possible under my requirements?) So an example, or anything really, would be much appreciated.
Welcome to Stack Overflow! I can see you are confused as to where to start or how to progress, so here is a basic roadmap:
You will need to contact Steam to get an API key first.
The next part is more tricky for a beginner. You will need Node.js, a JavaScript runtime, and npm, its package manager. npm is effectively a large library of JavaScript packages which you can import into your own program. One of these packages is openid-client, an implementation of OpenID that you can add to your site. Users click the OpenID button on your site, it redirects them to Steam's servers, they log in, and then your site gets their info without their username or password ever being leaked to you.
Now that you have both the SteamID and your API key, your program can plug these two into the URL Steam provides for getting owned games. You can pass this completed URL to fetch or axios (another node package that can request data from servers) and it will respond with the user's owned games in JSON format.
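A rough sketch of that last step (YOUR_KEY and STEAM_ID are placeholders for the values obtained in the earlier steps; the endpoint and parameters are documented on the Valve wiki linked below):

```js
// Minimal sketch: request a user's owned games from the Steam Web API.
const fetch = require('node-fetch');

const url = 'https://api.steampowered.com/IPlayerService/GetOwnedGames/v0001/'
  + '?key=YOUR_KEY&steamid=STEAM_ID&include_appinfo=1&format=json';

fetch(url)
  .then(res => res.json())
  .then(data => {
    // The games live under data.response.games (absent if the profile is private).
    for (const game of data.response.games || []) {
      console.log(game.name, game.playtime_forever);
    }
  })
  .catch(console.error);
```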
Here is a list of resources to get you started:
https://steamcommunity.com/dev
https://nodejs.org/en/
https://www.npmjs.com/package/openid-client
https://www.npmjs.com/package/node-fetch OR https://www.npmjs.com/package/axios
https://developer.valvesoftware.com/wiki/Steam_Web_API#GetOwnedGames_.28v0001.29
I suggest reading the documentation at each of these links and seeing whether there are any YouTube tutorials that do what you are doing with these technologies, to help you implement them as a newbie. Overall there is a lot to learn in each of these technologies, but as long as you orient yourself using guides or tutorials, you will succeed. Good luck.
Check out this page: How to retrieve Steam username using SteamWorks API?.
However, it seems like an issue you might be having is connecting to the actual API. What exactly are you using for testing?
I know from experience, for example, that you can connect to various APIs using Postman and have it format a request for them in a chosen language, including JS.
I would like to implement notifications through the WhatsApp API in my app. I've done lots of research but couldn't find anything official.
Officially, the WhatsApp Business API exists, but it is a beta version and only for companies that send massive volumes of messages (1 million+). There is also Twilio, but it requires business approval, and I got denied because (again) my volume isn't in the millions of messages per month.
Unofficial libraries exist that could potentially get the job done. I looked into them, and the one I was contemplating seemed unreliable. But is this really the only way?
Since the start of the pandemic I've been receiving all sorts of ads for apps that offer WhatsApp notifications for orders and customer service... how are they doing it? I know they are small businesses, so there must be a way.
My app was built using JavaScript/React; any information is appreciated.
I think what you need is this-
These provide APIs that you can use in your App.
Moreover, WhatsApp will terminate your account if you use tools other than the official ones.
I haven't tried this, but I have used Selenium for other web-related tasks. Since WhatsApp Web runs in the browser on a PC, a VERY hacky way of doing this (using a normal account) might be as follows (a rough code sketch appears after the steps):
Web WhatsApp on the browser.
Selenium plug-in for the browser.
Java/C# or other programming language with Selenium libraries.
Then:
Record a macro in Selenium of your typical WhatsApp message (Search for contact, select contact, type message, send).
Manipulate the macro in C#/Java.
For anyone with time, it's worth a try.
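The steps above mention Java/C#; a rough equivalent in JavaScript with the selenium-webdriver package might look like this (every selector is a hypothetical placeholder, since WhatsApp Web's markup changes constantly):

```js
// Very hacky sketch: drive WhatsApp Web with Selenium to send a message.
// SEARCH_BOX_SELECTOR and MESSAGE_INPUT_SELECTOR are placeholders you
// would have to discover yourself, and (see the warning above) this kind
// of automation can get an account terminated.
const { Builder, By, Key, until } = require('selenium-webdriver');

async function sendWhatsAppMessage(contactName, message) {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://web.whatsapp.com/');

    // Wait while you scan the QR code and the chat list appears.
    const searchBox = await driver.wait(
      until.elementLocated(By.css('SEARCH_BOX_SELECTOR')), 60000);

    // Search for the contact and open the chat.
    await searchBox.sendKeys(contactName, Key.RETURN);

    // Type the message and send it.
    const input = await driver.wait(
      until.elementLocated(By.css('MESSAGE_INPUT_SELECTOR')), 10000);
    await input.sendKeys(message, Key.RETURN);
  } finally {
    await driver.quit();
  }
}
```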
I have a few single-page web apps on multiple domains that rely heavily on JavaScript/AJAX to fetch and show content. Based on logs and search results, I can tell that Googlebot runs JavaScript on some of the domains but not on others. On some it indexes everything that's only available with JS; on others it doesn't even seem to run JS at all.
Can anybody tell me how Googlebot decides what JS to run, and whether I can do anything to get it to run JS on my other domains?
PS: I know that normally I should use something like server-side rendering for this, but I'm not at all dependent on search results and rankings, so it's not really worth the effort. I'm just curious how Googlebot decides whether it should run JS or not, and whether there's anything easy I can do to change that on my other domains.
You can learn more about how Google renders AJAX-based websites, along with a list of best practices, directly from the Google developer website here:
https://webmasters.googleblog.com/2014/10/updating-our-technical-webmaster.html
https://developers.google.com/webmasters/ajax-crawling/
Regarding your specific problem: as a first step, I suggest you analyse each domain using Google Webmaster Tools' "Fetch as Google" functionality and go through every technical aspect mentioned in Google's guide:
https://support.google.com/webmasters/answer/158587?hl=en
I think Google has updated its research on the subject:
http://searchengineland.com/tested-googlebot-crawls-javascript-heres-learned-220157
The functionality to fetch your page as Googlebot and see the results has now moved into Google Search Console.
You can use the URL Inspection tool to analyze your live URL.
I've tested it on an AngularJS app, and Googlebot was able to crawl page content with data fetched from an AJAX request.
One very important restriction is that Googlebot does not allow AJAX requests once the page has loaded.
In my blog post I explain how to adapt a single-page application so that it becomes crawlable, without the need to render HTML snapshots on the server.
A lot of services send your browser to another URL when you click a link on their sites.
Google search results and links on Facebook would be two well-known examples.
The slower the connection, the more noticeable it is: when browsing on a mobile network, you can be looking at a blank page for seconds. During this time you can see the query string in the address bar, showing some of what is being logged.
I understand the reason for the redirects is so that user behaviour can be analysed, for anything from service quality to targeted marketing, and that's fine. But why do we have to be sent to a different page? Is that not sub-optimal from the user's perspective?
Could it not be handled by a JavaScript call instead? If not, why not?
Edit: by using JavaScript I mean something like the following (sketched in code after this list):
Bind a function to the link.
On click, have the function delay the redirect until the tracking query has been sent using AJAX.
Continue to the page.
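Sketched in code, with a hypothetical /track endpoint, that would be something like:

```js
// The idea from the list above: hold the navigation until the tracking
// request has been sent, then continue to the page. /track is hypothetical.
document.querySelectorAll('a.tracked').forEach(link => {
  link.addEventListener('click', event => {
    event.preventDefault(); // delay the redirect
    fetch('/track?target=' + encodeURIComponent(link.href))
      .finally(() => { window.location.href = link.href; }); // continue to the page
  });
});
```

Holding the navigation like this costs the same wait as the redirect does; firing the request without waiting runs into the problem the answer below describes.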
Is that not sub-optimal from the user's perspective?
Detailed user profiles are, apparently, more valuable than happy users.
Could it not be handled by a JavaScript call instead? If not, why not?
With JavaScript, browsers would leave the page before the request was sent, and the data would be lost.
I am trying to prevent fraud in a web project I am building.
The project is a game which includes multiple websites.
Each website does an AJAX check, with each pageview, to a page on my server for a status update of the game.
The response page, let's say www.domain.com/response.cfm (it is ColdFusion), normally returns nothing, but at a certain point in time within the game's timeframe it will return a JSON string with information.
This information is then used by the script that is included on the websites.
So if website A has been viewed 100 times (across all of its pages), that will generate 100 AJAX calls.
The problem I have is that a robot could check the AJAX destination too, and much faster. Now, I can detect a robot, or make it difficult for one by using a session or checking for cookies, BUT...
the biggest issue is that I found out you can do a lot in the Firebug script console, or the Safari console, and probably Chrome's too.
With such a console, they can even evade the cross-domain restriction. I created a simple script that makes a couple of calls to the AJAX page, and when I go to the same domain first and then use the console... there is no cross-domain limitation. You can execute all kinds of JavaScript, so in essence someone like me could commit fraud in the game by using the JavaScript console, which makes him look like a regular browser user.
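For illustration, a few lines pasted into the console on the game's own domain (so the same-origin policy doesn't get in the way) are enough to poll the response page described above:

```js
// Poll the status page every second and log anything it returns.
setInterval(() => {
  fetch('/response.cfm')
    .then(res => res.text())
    .then(body => { if (body.trim()) console.log('Game update:', body); });
}, 1000);
```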
My question now is: does anyone know how to prevent this? I tried to disable the usage of the console, but I don't think I can. It may be possible to detect whether the console is active and then disable MY scripts so the game doesn't work, but I think they could load the script source in the console manually, and then the game would still work.
The console looks like a beautiful thing, but right now it is a nightmare for me when it comes to preventing people from cheating in the game I am creating.
I hope someone has suggestions.
PS: of course I am trying to implement some server-side checks to detect cheating, but most of the time they are not realtime.
UPDATE 19/3/2012
The fraud I am trying to prevent is cheating in the game by polling the page that generates the logic for the next step of the game. This is a server-script page which generates JSON that will trigger a change on the website the game is played on. For your information, the websites that are involved have a script in their header, like Google Analytics, so they communicate with my server on every pageview.
Polling that server page can reveal information which would give cheaters knowledge or a head start.
So I have to prevent people from gaining knowledge ahead of other honest players by monitoring the server page, which will reveal information at a certain time. I don't want them auto-polling it and, when the info is revealed, sending themselves a notification and checking the website.
So what I will do is make sure that people who generate too many pageviews per second are blocked. Plus, you need a cookie to be able to join in, and you only get a cookie by logging in. Hopefully this will give me enough tools to make it as robust as possible.
Thanks for all your knowledge, people.
It would be very, very difficult to disable web consoles across the majority of browsers, and anyone who managed to do it would probably be exploiting a browser bug. But read on...
First rule of web programming: you can never trust anything you receive from the web client. Anything that gets sent to your server might have been forged or altered, intentionally or unintentionally, and even if you did manage to block a web console, what's to stop me from opening the page in a different browser where your blocking doesn't work? So that's out. As @DCoder mentions in the comments, there are other methods as well, including browser extensions, which allow user-defined JavaScript to be executed.
So any checking you do has to be server-side. I know you're trying to do some checking already, and it's hard to give advice without more specifics. That said, one way to do this, as far as I can see right now, is to issue each client an ID and store it in a database somewhere. The IDs can't be sequential, and you should make sure they're not trivially forgeable even if someone collects a bunch of different IDs (for example, you might want to salt the username and then hash it). Each time a request is made to the server, only issue a response if that client's last request was more than 500 ms ago, and update the database accordingly. Expire the ID on logoff or after some time.
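A minimal sketch of that throttle, in Node/Express rather than the asker's ColdFusion, with an in-memory Map standing in for the database:

```js
// Sketch of the per-client throttle described above. In production the
// Map would be a database table, and the ID would be issued at login
// (e.g. a salted hash of the username, so it is not trivially forgeable).
const express = require('express');
const app = express();

const lastSeen = new Map();  // clientId -> timestamp of last answered request
const MIN_INTERVAL_MS = 500; // only answer if the last request was > 500 ms ago

app.get('/response', (req, res) => {
  const clientId = req.query.id;
  if (!clientId) return res.status(403).end(); // no ID, no answer

  const now = Date.now();
  if (now - (lastSeen.get(clientId) || 0) < MIN_INTERVAL_MS) {
    return res.status(429).end(); // polling too fast
  }
  lastSeen.set(clientId, now);
  res.json({}); // the game-status payload goes here
});

app.listen(3000);
```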
The first thing you should think about is securing your server, not the client. It's impossible to hide client code from the client. While trying might arguably deter a few would-be cheaters, it can't be your primary defence. You have to do this from the server side, which means validating requests on the server to ensure that they conform to your expectations to some degree.
Game companies will:
Require user authentication of some kind so they can identify users
Create some rules about what is possible. For example, the laws of physics should apply, so you know when someone has cheated: something you can validate as human activity (see the sketch after this list).
Ban people who cheat
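For example, a "laws of physics" validation on a reported move might look like this sketch (the speed limit and position fields are illustrative, not from any real game):

```js
// Server-side plausibility check: reject a reported move if it implies a
// speed the game does not allow. All names here are illustrative.
const MAX_SPEED = 10; // maximum allowed speed, in game units per second

function isPlausibleMove(prev, next) {
  const dtSeconds = (next.timestamp - prev.timestamp) / 1000;
  if (dtSeconds <= 0) return false; // timestamps must move forward

  const distance = Math.hypot(next.x - prev.x, next.y - prev.y);
  return distance / dtSeconds <= MAX_SPEED; // anything faster is flagged
}
```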
If you are not sending data continuously over the network, then you have an issue which is unsolvable unless you are willing to make checks on the server, securely and continuously, over the course of the game. This will increase server load, but that's the unfortunate cost of preventing cheats.