Can you re-"init" a Facebook pixel with different dataProcessing options? - javascript

With the advent of the California Consumer Privacy Act (CCPA), it's been necessary for some of our clients to implement Limited Data Use (LDU) policies for Facebook. Our accepted practice has been to explicitly disable LDU with fbq('dataProcessingOptions', []) until a user opts out (via a consent plugin). Here's the crux of my problem: once a user opts out, I'd like to re-initialize the Facebook pixel with LDU enabled, fbq('dataProcessingOptions', ['LDU'], 0, 0), so that future events on the page are processed under the LDU policies. Is it possible to simply call fbq('init', '{pixel_id}') a second time and have this "flag" set?

The Facebook Pixel Helper extension for Google Chrome will show what is sent for each event.
I was hoping that sending something like fbq('trackCustom', 'optOut') might trigger it to re-send updated data processing options, but it doesn't seem to.
Facebook is shooting everyone in the foot by not making this process clearer; it should absolutely be possible to wipe out data collected for the session, and that's clearly the best way to do it.
I've spent all weekend trying to do this correctly from both a technical and a legal standpoint, and it's just a nightmare. CCPA is supposed to be opt-out!
This doesn't work:
// CCPA Notice. We allow California users to opt-out from Facebook's data collection by means
// of our 'Do not sell my information' link at the bottom of our website. Please use this link
// to trigger an opt-out via Facebook's API. Questions: privacy at example.com
fbq('dataProcessingOptions', []);
fbq('init', account_id);
fbq('track', 'PageView');
function optOut() {
  fbq('dataProcessingOptions', ['LDU'], 1, 1000); // 1 = US, 1000 = California
  fbq('trackCustom', 'registerOptOut');
}
I'd recommend putting some text like the comment above in your code, because people are out there scanning for vulnerable websites, and a visible notice at least makes it look like you know what you're doing.
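For reference, here is a minimal sketch of the flow I'm aiming for, assuming the consent plugin exposes a userHasOptedOut() check (a hypothetical name) and that a second init call re-applies the flag, which is exactly the part I can't confirm:
// Declare data processing options BEFORE init, based on current consent state.
// userHasOptedOut() is a hypothetical hook into the consent plugin.
if (userHasOptedOut()) {
  fbq('dataProcessingOptions', ['LDU'], 1, 1000); // 1 = US, 1000 = California
} else {
  fbq('dataProcessingOptions', []);
}
fbq('init', account_id);
fbq('track', 'PageView');

function optOut() {
  // Re-declare LDU, then attempt a second init so later events pick it up.
  fbq('dataProcessingOptions', ['LDU'], 1, 1000);
  fbq('init', account_id); // unverified whether Facebook honors this re-init
  fbq('trackCustom', 'registerOptOut');
}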

Related

Massive differences between Google Analytics and own data collection

The usage of a web app is to be evaluated statistically. It has been publicly available since spring of this year.
The web app is linked to Google Analytics. For our own user data collection, the following is done:
A unique user ID is created when the web app is loaded for the first time. It is stored in localStorage and read again on every subsequent page load.
// Persist a per-browser ID across visits.
if (localStorage.getItem("uuid") === null) {
  localStorage.setItem("uuid", get_uuid());
}

// Generate an RFC 4122 version 4 UUID from cryptographically random bytes.
function get_uuid() {
  return ([1e7] + -1e3 + -4e3 + -8e3 + -1e11).replace(/[018]/g, c =>
    (c ^ crypto.getRandomValues(new Uint8Array(1))[0] & 15 >> c / 4).toString(16)
  );
}
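(As an aside, current browsers expose crypto.randomUUID(), which does the same job without the bit-twiddling; it requires a secure context and was not yet widely available when the question was asked:)
// Modern alternative, assuming a secure (HTTPS) context:
if (localStorage.getItem("uuid") === null) {
  localStorage.setItem("uuid", crypto.randomUUID());
}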
This data is written to a database together with other information (concrete page, time, device type, etc.). Users without JavaScript or localStorage will not be included; however, they would probably not be able to use the web app correctly anyway.
If I now compare the data from Google Analytics with my own variant, the discrepancy is considerable.
Different users according to Google: about 900
Different users due to UUID: about 400
Additionally about 100 visits (or interactions) without UUID were registered.
Now my question is why these big differences exist. In my opinion, my data collection should be fairly accurate. But maybe there is a flaw in my reasoning with the UUID approach? Or could it be that Google counts quite differently, for example including robots that don't leave a UUID behind?
Thank you very much for your answers and considerations.
I'm quite sure you have encountered Google Analytics (GA) referral spam.
This happens because GA is JavaScript-based and your tracking ID is listed in the HTML source, so anyone who wants to spam your data can use your ID.
Why, you ask? When you notice it, you see web pages you don't recognize listed in your GA data; you (the admin) open them and get a virus or worse.
Don't open those web pages.
As far as I know there are two ways to fix it. A regex filter is the common way:
you block every page that has referrals from domains you don't "know".
This takes time and is not a good approach.
My method is to pass a custom dimension from the HTML to GA.
If that dimension is missing, the data is not real.
Your JavaScript probably looks something like:
.....
ga('require', 'linkid', 'linkid.js');
ga('require', 'displayfeatures');
ga('send', 'pageview');
</script>
If we add a dimension, which we then pick up in the GA admin tools:
.....
ga('require', 'linkid', 'linkid.js');
ga('require', 'displayfeatures');
ga('send', 'pageview', {
'dimension1': 'FooBar'
});
</script>
Go to Admin -> Property (the middle column); at the bottom you have Custom Definitions.
Open Custom Dimensions and add the dimension you added to the html.
Now you can set up a filter in the view tab of GA admin to only show data with your custom dimension "FooBar".
Any data that does not have this "FooBar" is spam that is not generated from your webpage.
Just remember you need to change all GA JavaScript codes and add the dimension.
You can see this spam (if I'm correct) in the Acquisition -> All Traffic -> Referrals report.
If you see sources that you don't recognize and that look odd, they are most likely the spam.
Before I used this method, my Referrals report contained about 50 of these fake referrals.

Password protection for a page with a simple function - what are the downsides?

I am doing work on an e-commerce platform, and I was asked to come up with a solution so that a certain group of customers could enter a password-protected page on the site. The platform doesn't offer this functionality, so according to customer support it's something you would have to build from scratch with a custom template. It doesn't need to be fancy or hacker-proof, just secure enough. So instead of doing that, I dropped the script below into the body of the page.
My first version: I use a prompt to ask for an input (password). If you click "prevent this page from creating additional dialogs", it creates a sort of infinite reload loop for that tab (not ideal, but is it a problem?). Are there other serious problems? Easy hacks for your average person?
$("body").hide();
var passwordCheckFunction = function() {
var testPassword = window.prompt("YOU SHALL NOT PASS");
if (testPassword === "thisPredefinedPassword") {
$("body").show();
} else {
location.reload();
}
};
passwordCheckFunction();
Any advice would be much appreciated, and thank you for your time.
Create your secret page as a category.
Customize it to your heart's desire by choosing a custom template file for it.
Finally, restrict it to only the authorized customer group by removing it from view for guests and every group except the authorized one.
Using this method, the customer only has to sign into his/her own customer account. BigCommerce will prevent access to the page by reading the assigned customer group of the customer.
I realize this isn't your desired method, but you might consider instead just making your page inactive in the admin area of your BC store, and then, instead of a password, providing the direct URL to users who are able to see that page.
I'm not sure about the implications for Google indexing with an inactive page, but I would assume it is set not to be indexed, and if not, you could set that in robots.txt.
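For what it's worth, if the client-side prompt is kept as a stopgap, one small hardening step is to compare a hash of the input rather than the plaintext, so the password itself isn't visible in the page source. A minimal sketch using the Web Crypto API (requires HTTPS; the digest value below is a hypothetical placeholder):
$("body").hide();

// SHA-256 hex digest of the real password (hypothetical placeholder value).
var expectedHash = "0000000000000000000000000000000000000000000000000000000000000000";

async function sha256Hex(text) {
  var data = new TextEncoder().encode(text);
  var digest = await crypto.subtle.digest("SHA-256", data);
  return Array.from(new Uint8Array(digest))
    .map(function (b) { return b.toString(16).padStart(2, "0"); })
    .join("");
}

async function passwordCheckFunction() {
  var testPassword = window.prompt("YOU SHALL NOT PASS");
  if (testPassword !== null && await sha256Hex(testPassword) === expectedHash) {
    $("body").show();
  } else {
    location.reload();
  }
}
passwordCheckFunction();
Note this is still not real security: the page content is delivered to the browser regardless, so anyone reading the source sees it.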

How to block the link from malicious bot visitors?

I'm producing an event registration website. When someone clicks a link:
Reserve id=10 event
the system puts a "lock" on that event for ten minutes for this visitor, so no one else can reserve the event during that time. If the payment is completed in time, everything is OK; otherwise the event is unlocked again. I hope the idea is clear.
PROBLEM: When a bot (Google bot, malicious bot, or an angry customer's script :P) visits this page, it sees this link. Then it enters the page. Then the lock is placed...
Also, someone can visit recursively: /reserve/1, /reserve/2, /reserve/3, ... and lock all the events.
I thought about creating a random MD5 string for each event. That way, every event has (next to its id) a unique code, for example: 1987fjskdfh938hfsdvpowefjosidjf8243
Next, I can adapt the link generation to work like this:
<a href="/reserve/1987fjskdfh938hfsdvpowefjosidjf8243" rel="nofollow">
Reserve
</a>
That way I can prevent the "brute-force" lock. But the link is still visible to bots.
Then I thought about adding a captcha. That would be a solution, but captchas are... not so great in terms of usability and user experience.
I've seen a few websites with reservation engines working like this. Are they protected? Maybe there is a simple AJAX/JavaScript solution to prevent bots from reading this as plain text? I thought about:
<a href="#" id="reserve">Reserve</a>
<script type="text/javascript">
$('#reserve').click(function(e) {
  e.preventDefault();
  var address = ...;
  // something not so obvious to follow?
  // for example: md5(ajaxget(some_php_file.php?salt=1029301))
  window.location.href = '/reserve/' + address;
});
</script>
But I'm not sure what I should do there to prevent bots from calculating it. I mean, stupid bots won't even be able to follow JavaScript or jQuery stuff, but sometimes someone wants to destroy something, and if the source is obvious it can be broken in a few lines of code. And then the whole database of events would be locked down, with no reservation option for anyone.
CSRF + AJAX POST + EVENT TOKEN generated on each load.
Summary: don't rely on GET requests, especially through a elements.
Better still, add some event-lock rate limits (by IP, for instance).
EDIT: this is a basic sketch; a code example follows the list.
replace all the href="..." with data-reservation-id=ID
delegate click on the parent element for a[data-reservation-id]
in the callback, simply make a POST ajax call to the API
in the API's endpoint check rate limits using IP for instance
if OK, block the event and return OK, if not return error.
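A minimal front-end sketch of those steps, assuming jQuery, a hypothetical /api/reserve endpoint, and a per-load CSRF token exposed in a meta tag:
// Delegate clicks from a parent container to reservation links.
$('#event-list').on('click', 'a[data-reservation-id]', function (e) {
  e.preventDefault();
  $.post('/api/reserve', {
    id: $(this).data('reservation-id'),
    token: $('meta[name="csrf-token"]').attr('content') // token generated per page load
  })
  .done(function () {
    window.location.href = '/payment'; // hypothetical next step
  })
  .fail(function () {
    alert('This event is currently locked or you are rate-limited.');
  });
});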
IP-Specific maximum simultaneous reservations
Summary: Depend on the fact that many simple bots operate from one host. Limit the number of simultaneous reservations for a host.
Basic sketch:
Store the requesting IP alongside the reservation.
On each reservation request, count the IPs which have a non-completed reservation:
SELECT COUNT(ip) FROM reservations WHERE ip = :request_ip AND status = 'open';
If the number is above a certain threshold, block the reservation.
(this is mostly an expansion of point 4 given in avetist's excellent answer)
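A rough server-side sketch of that check, assuming a Node/Express handler and a promise-based db.query helper (both hypothetical stand-ins for whatever stack is actually in use):
const express = require('express');
const app = express();
app.use(express.json());

const MAX_OPEN_PER_IP = 3; // threshold; tune as needed

// db.query is a hypothetical promise-based helper for your database.
app.post('/api/reserve', async (req, res) => {
  // Count this IP's open (non-completed) reservations.
  const rows = await db.query(
    "SELECT COUNT(ip) AS open_count FROM reservations WHERE ip = ? AND status = 'open'",
    [req.ip]
  );
  if (rows[0].open_count >= MAX_OPEN_PER_IP) {
    return res.status(429).send('Too many open reservations from this address');
  }
  // Lock the event for ten minutes by recording an open reservation.
  await db.query(
    "INSERT INTO reservations (event_id, ip, status, expires_at) VALUES (?, ?, 'open', NOW() + INTERVAL 10 MINUTE)",
    [req.body.id, req.ip]
  );
  res.send('OK');
});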

Browser addon to automatically opt-in for cookies - EU cookie law

In the EU we have a law that requires web pages to request permission to store cookies. Most of us know about cookies and agree to them, but we are still forced to accept them explicitly everywhere. So I plan to write an add-on (FF & Chrome) that will automatically add a session cookie with a standard name that signals agreement. I'm just wondering about a few things:
1) What should be the name of the cookie? What should be the value? Should I cover only the user-agreement option? My proposal is
_cookieok=1
The benefit is that it is short, yet descriptive.
2) Should I add only a single cookie, the one suggested above? Many pages already do this in different ways: they use different cookie names and check for different values. I thought about using names and values from popular scripts like http://cookiecuttr.com/ but I don't want to increase upload traffic with a number of mostly unneeded cookies.
3) Should I differentiate between types of cookies? I have seen at http://demo.cookieconsent.silktide.com/ that there are multiple cookie types you can opt in to or out of.
4) Does this have a chance to become popular, or is it better to do something like point 2 and add multiple values manually?
5) I could probably also remove those cookies after some event (e.g. after all JS onload handlers have finished), but I could not find a proper hook in Firefox add-ons. Plus, some people might want to do the filtering on the server side, so maybe it is better to keep sending the cookie.
Is there something I have not thought about? My suggested code in the case of FF is:
var pageMod = require("sdk/page-mod");
pageMod.PageMod({
  include: "*",
  contentScriptWhen: 'start',
  contentScript: 'document.cookie="_cookieok=1;path=/";'
});
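For the Chrome side, a rough equivalent would be a content script declared in the extension manifest (sketched here against the Manifest V2 format that was current at the time; the file name optin.js is a hypothetical choice):
// manifest.json
{
  "manifest_version": 2,
  "name": "Cookie auto-opt-in",
  "version": "0.1",
  "content_scripts": [{
    "matches": ["<all_urls>"],
    "run_at": "document_start",
    "js": ["optin.js"]
  }]
}

// optin.js
document.cookie = "_cookieok=1;path=/";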
Update
To explain how it works:
1) Most sites that already comply with the cookie law do something like this:
if ($.cookie('_cookieok') == null) {
  $('#cookie-close').on('click', function (evt) {
    $.cookie('_cookieok', 1, { expires: 300, path: '/' });
  });
  $('.cookie-prompt').show();
}
So if we agree on the same name, such an FF plugin would be possible. If someone does not have the plugin, the site will prompt them; if they have it, the site will recognize the addon-added cookie as its own.
Not too sure what the point of this really is, to be honest.
If you're looking to build something that would be used across a portfolio of sites you manage, then you're probably pushing your luck to force a user to install an extension simply to show they accept your cookies. If it's aimed at a wider audience, i.e. potentially anyone using any website, then the other issue you'll have is getting users to see the benefit of installing yet another extension, and getting website operators to write the code necessary to detect your cookie and act accordingly.
Most sites seem to be striving to make cookies and the associated obligations under the legislation as unobtrusive as possible; requiring installation of an extension and changes to website code seems to be heading in the opposite direction.

How to use pure JavaScript to determine whether Facebook/Twitter is blocked?

Some countries, like China, block Facebook/Twitter. How can I use JavaScript to check whether a website is inaccessible?
Update:
I am adding a "Share to Facebook" button on a web page. 50% of the visitors are from China and 50% are from outside of China.
Visitors from China would never see that Facebook button, because Facebook is blocked. I want to use $.hide() or $.empty() to remove the related HTML if I detect that Facebook is blocked. How can I do that?
You can check whether loading the Facebook SDK (//connect.facebook.net/en_UK/all.js) is blocked in China.
If it is, you could do something like this:
$.getScript('//connect.facebook.net/en_UK/all.js')
  .done(function () {
    // do something if facebook is available
  });
Take care: you need to define a timeout if you want a callback for the failure case. I need to check the correct settings later, but currently I don't have time to.
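A sketch of what that could look like using $.ajax with an explicit timeout (the 5-second value is an arbitrary assumption, and #fb-share is a hypothetical element id):
$.ajax({
  url: '//connect.facebook.net/en_UK/all.js',
  dataType: 'script',
  cache: true,
  timeout: 5000 // give up after 5s; a blocked request may otherwise hang
})
.done(function () {
  // facebook reachable: leave the share button in place
})
.fail(function () {
  // blocked, unreachable, or timed out: remove the facebook UI
  $('#fb-share').hide();
});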
EDIT
Based on the comment from funkybro, it would be better to do a JSONP request. Loading the SDK would inject a bunch of code you probably don't need.
So just request e.g.:
$.getJSON('https://graph.facebook.com/feed?callback=?')
  .done(function () {
    // do something if facebook is available
  });
The request will return a failure code because you don't provide a graph node, but getting an error message back from Facebook means that it is reachable for the client.
Use jQuery.get like this:
$.get("http://facebook.com").fail(function() {
$(...).hide()
}).done(function() {
$(...).show()
})
Note that this is a cross-site request that will fail for security reasons (the same-origin policy) unless you disable that browser protection.
If that's not possible for you, I suggest you use GeoIP or similar technologies to determine the users origin.
