I'm working with Rails 3.2 and I'm trying to set up a tracking pixel at the end of a website's signup process.
The issue, though, is that the signup process is done via JS/AJAX, and once users complete it, I send them to one of a few different pages, i.e. some get A, some get B, some get C.
I don't want to put the same pixel on all 3 because I may change the pages or swap them out in the future.
So is it possible to simulate the same request that the Facebook tracking pixel makes, but server-side?
When you create a conversion pixel for Facebook you get a code snippet you can add to your HTML. This code snippet is just JavaScript that fires the pixel. If you debug the page you'll see that the only thing it does is make a GET request to a URL. You can copy that URL and make a cURL call (GET) from your server any time you want.
This is an example of the URL:
https://www.facebook.com/tr/?id=null&ev=CONVERSION_PIXEL_ID&dl=http%3A%2F%2Fmyurl.com%2F&rl=&if=false&cd[value]=0.01&cd[currency]=USD
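If you do decide to experiment with this, here is a minimal sketch of firing that same URL from your own server, shown in Python with requests purely for illustration (the question itself is Rails, where the same GET could be made from the controller). Every parameter value below is just the placeholder from the example URL above:

import requests

# All parameter values are placeholders copied from the example URL above;
# substitute your real pixel ID, page URL and conversion values.
params = {
    "id": "null",
    "ev": "CONVERSION_PIXEL_ID",
    "dl": "http://myurl.com/",
    "rl": "",
    "if": "false",
    "cd[value]": "0.01",
    "cd[currency]": "USD",
}
response = requests.get("https://www.facebook.com/tr/", params=params, timeout=5)
print(response.status_code)  # check that the request went through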
However, I do not recommend doing this. It's better if you let Facebook build the URL and fire it through the JavaScript snippet. The manual approach is not reliable and your data might be altered. If you do not use the code snippet you won't be able to complain to Facebook if your data doesn't match, as you're not following the standard process.
What I really recommend is creating a different pixel ID for each page.
I hope it makes sense.
Javier.
I have a problem.
I have a piece of code that I send to websites to be placed inside the <head></head>, right at the very beginning of the header. My code runs, collects some data, and passes the result to the rest of the page (ad servers, forms, etc., for example).
My problem is that I need to limit the number of times this code can make calls back to my server, and that needs to be based on user location (the city, more specifically). I cannot use the ad server and benefit from its targeting capabilities, because I need to run this code before everything else.
I was wondering if there is any sort of script that would allow me to identify the user's location and decide whether or not to call my server, so that I am only using my resources when it is actually needed.
I thought about using the HTML5 Geolocation API, but it requires user interaction. Then I was thinking about using a service like IpRegistry.co, to:
query for the location;
if location = desired location, then load the JavaScript code;
otherwise do nothing.
I am not sure if this is possible, or how to do it. Any help would be greatly appreciated; I am ready to explore any options here.
In case I didn't stress it enough: this needs to happen inside the header, not in the body.
Thanks in advance.
PC
I have a web page A created by a PHP script which would need to use a service only available on another page B – and for various reasons, A and B can't be merged. In this particular instance, page A is a non-WordPress page and page B is WordPress-generated. And the service in question is sending emails in a specific format which is supplied by a WP plugin.
My idea is to use page A to generate the email content and then send that content to page B which then, aided by the plugin, sends the email in the appropriate format and transfers control back to page A. This would be perfectly doable – but what I would like in addition is for page B never to be displayed. The visitor should have the impression that they are dealing only with page A all the time. Can that be done and if so, how?
I do not intend this to be a WordPress question (although maybe it is), rather more generally about using another page's script in passing without displaying that other page.
If you do have source access, it would be most reliable to use the add-on directly... But if you cannot, the second easiest option would be to use cURL to mimic the form POST on page B. This happens server-side, so the user wouldn't see it happening.
To figure out what you need to send in your POST request, open your browser's developer tools and watch the Network tab when you submit the form manually; note the URL and all of the POST data. Then you'll be able to mimic it.
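As a rough sketch of that replay (shown in Python with requests for brevity; since page A is PHP, the same POST could be made with PHP's cURL functions), where every URL and field name below is a made-up placeholder for whatever the Network tab actually shows:

import requests

# Content generated by page A (placeholder).
email_html = "<p>Order confirmation generated by page A</p>"

# URL and field names are hypothetical; replace them with what you captured
# in the browser's Network tab when submitting the form on page B manually.
post_url = "https://siteB.example.com/wp-admin/admin-post.php"
payload = {
    "action": "send_formatted_email",
    "recipient": "customer@example.com",
    "body": email_html,
}

response = requests.post(post_url, data=payload, timeout=10)
print(response.status_code)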
You may proxy https://SITEA.com/siteB/whatever to http://SITEB.com/whatever - or the other way around... I didn't fully understand the process :P
In case you just want the site B service call, you may also send the requests via cURL or an HTTP library of your choice - which might be better, as you will have to get a nonce first and stuff like that.
So I'm using a Raspberry Pi 2 with an RFID scanner and wrote a script in Python that logs people in and out of our attendance system, connects to our PostgreSQL database and returns some data, like how much overtime they have and whether their action was a login or a logout.
This data is meant to be displayed on a very basic webpage (that is not even on a server or anything) that just serves as a graphical interface to display said data.
My problem is that I cannot figure out how to dynamically display the data my Python script returns on the webpage without having to refresh it. I'd like it to simply fade in the information, keep it there for a few seconds and then have it fade out again (at which point the system becomes available again for someone else to log in or out).
Currently I'm using BeautifulSoup4 to edit the HTML file, and Chrome with the extension "LivePage" to then automatically update the page, which is obviously a horrible solution.
I'm hoping someone here can point me in the right direction as to how I can accomplish this in a comprehensible and reasonably elegant way.
TL;DR: I want to display the results of my python script on my web page without having to refresh it.
You can have the HTML page send an XHR request to your primary file every x seconds:
setInterval(function () {
  $.get("yourPrimaryFile.xyz", function (data) {
    // append the freshly fetched content to the page
    $("body").append(data);
  });
}, 3000);
I assume a more or less obvious solution like building a REST API with e.g. Flask and using some JavaScript framework (e.g. Angular or React) on the frontend is out of scope / too much?
Besides that, I can only think of using 'plain' jQuery or similar frameworks, which is more or less what you are doing currently.
I would recommend trying the Flask/Angular combination. A simple app (a few API endpoints for login and logout and a few checks) and a basic website with dynamic content can be set up pretty quickly.
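For illustration only, a minimal Flask sketch of what such an API could look like (endpoint names and fields are made up); the RFID script would POST each event to it and the page's JavaScript would poll the GET endpoint:

from flask import Flask, jsonify, request

app = Flask(__name__)
latest_event = {}  # most recent event, e.g. {"name": ..., "action": ..., "overtime": ...}

@app.route("/event", methods=["POST"])
def record_event():
    # called by the RFID script after each tag read
    global latest_event
    latest_event = request.get_json(force=True)
    return jsonify(status="ok")

@app.route("/event", methods=["GET"])
def get_event():
    # polled by the page's JavaScript to fetch the latest data
    return jsonify(latest_event)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)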
To update page data without delay you need to use WebSockets.
There is no need to use heavy frameworks.
Once the page is loaded for the first time, you open a WebSocket with JS and listen on it.
Every time you read a tag, you post all the necessary data to this open socket and it instantly appears on the client side.
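A minimal sketch of the server half in Python, using the third-party websockets package (the host, port and message fields are assumptions); the page would then open new WebSocket('ws://<pi-address>:8765') in its JS, listen for messages, and fade the data in and out:

import asyncio
import json
import websockets

connected = set()  # every page currently listening

async def handler(websocket):
    # register each connecting page and forget it when it disconnects
    # (older versions of the websockets package also pass a 'path' argument here)
    connected.add(websocket)
    try:
        await websocket.wait_closed()
    finally:
        connected.discard(websocket)

async def broadcast(event):
    # call this from the tag-reading code after each login or logout,
    # e.g. await broadcast({"name": "Alice", "action": "login", "overtime": "2:30"})
    message = json.dumps(event)
    for ws in list(connected):
        await ws.send(message)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())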
I've got a problem. I'm working with a food supplier and I need to save the content of each order as HTML. Orders are listed on a single page as links, but this has two difficulties:
The page uses authentication (I need to log in in advance).
This is the real problem: the page uses a lot of JavaScript. Actually, everything works without changing the web address, so I can't use wget or the rio gem (the URL is not like www.fooddoe.com/order, www.fooddoe.com/order/1, etc., but always www.fooddoe.com/suplierx).
I think FireWatir would be a good option, but the problem is that I need to save the page in a format similar to HTML (including images). Is that possible using FireWatir? Are there other options in Clojure or JavaScript?
Thanks so much!!
I had to read your question twice to understand what you mean.
From the web address in your example I assume this is your supplier's web page. So IMHO the easiest way is:
Look into the source of the web page to get an idea of how it gets the data (99% it's some kind of AJAX request).
The request goes to the server, which responds to it.
Now there are two ways:
Get an idea of how the request is made and write an app that makes such a request and generates a web page from it (more difficult, more general; see the sketch after this list)
Contact your supplier and get the original database (simpler, but a one-time solution)
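For the first option, a rough Python sketch of replaying such an AJAX call with a logged-in session (the login URL, order endpoint, field names and parameters are all assumptions to be replaced with whatever the page source and developer tools reveal):

import requests

session = requests.Session()

# Log in first so the session cookie is stored (URL and field names are placeholders).
session.post("https://www.fooddoe.com/login",
             data={"user": "me", "password": "secret"})

# Replay the AJAX call the order page makes (endpoint and parameters are placeholders).
response = session.get("https://www.fooddoe.com/ajax/order",
                       params={"order_id": 1})

# Save the returned content so it can later be turned into an HTML page.
with open("order_1.html", "w", encoding="utf-8") as f:
    f.write(response.text)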
And I think this question is not specific to any language.
Using Python, I built a scraper for an ASP.NET site (specifically a Jenzabar course searching portlet) that would create a new session, load the first search page, then simulate a search by posting back the required fields. However, something changed, and I can't figure out what, and now I get HTTP 500 responses to everything. There are no new fields in the browser's POST data that I can see.
I would ideally like to figure out how to fix my own scraper, but that is probably difficult to ask about on StackOverflow without including a ton of specific context, so I was wondering if there was a way to treat the page as a black box and just fire click events on the postback links I want, then get the HTML of the result.
I saw some answers on here about scraping with JavaScript, but they mostly seem to focus on waiting for the JavaScript to load and then returning a normalized representation of the page. I want to simulate the browser actually clicking on the links and following the same path to execute the request.
Without knowing any specifics, my hunch is that you are using a hardcoded session ID, and the web server's app domain was recycled and created new encryption/decryption keys, rendering your hardcoded session ID (which was encrypted with the old keys) useless.
You could try using Firebug's Net tab to monitor all requests, browse around manually, and then diff the requests that you generate against the ones your screen scraper is generating.
If you are just trying to simulate load, you might want to check out something like Selenium, which runs through a browser and handles postbacks like a browser does.
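For example, a rough sketch with Selenium's Python bindings (the URL and link text are placeholders), which clicks the postback link in a real browser and then grabs the resulting HTML:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # requires a local Chrome / ChromeDriver install
try:
    driver.get("https://portal.example.edu/course-search")  # placeholder URL
    driver.find_element(By.LINK_TEXT, "Search").click()     # fires the ASP.NET postback
    html = driver.page_source                                # HTML after the postback
finally:
    driver.quit()

In practice you would add an explicit wait for the results element before reading page_source, but the idea is the same: let the browser handle the postback machinery and just read the outcome.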