Prebid Example Does Not Work When Running on Local Dev Server - javascript

TL;DR: Is it possible to test Prebid header bidding with Prebid.js v1.6.0 with a locally running web server?
I have created a library for integrating Prebid header bidding into web applications built with React. It works fine using Prebid 0.34.6 and I use it successfully in production.
I'm now migrating my library to use the latest version of Prebid, 1.6.0. I've followed the migration guide carefully and implemented all the changes outlined there.
To test my code, I've set up a demo application that runs on a local dev server.
In the debug output of the application, I can see that bids are received (log says INFO: Bids Received for Auction with id: aa5d34f4-3eb7-4cb0-a756-6f7cc4a18568).
However, no creatives are shown in the ad slots. My bidsBackHandler callback function receives an empty object as argument. When I call pbjs.getAdserverTargeting() on the browser's developer console, I also get an empty object.
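(For reference, here is a simplified sketch of the kind of request involved; the ad unit code and bidder params below are placeholders, not my actual configuration.)
var adUnits = [{
    code: 'div-gpt-ad-example',                                       // placeholder slot id
    mediaTypes: { banner: { sizes: [[300, 250]] } },
    bids: [{ bidder: 'appnexus', params: { placementId: 13144370 } }] // placeholder bidder/params
}];

pbjs.que.push(function () {
    pbjs.addAdUnits(adUnits);
    pbjs.requestBids({
        bidsBackHandler: function (bidResponses) {
            console.log(bidResponses);                 // {} on localhost
            console.log(pbjs.getAdserverTargeting());  // also {}
        }
    });
});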
On the Prebid example pages, there is a basic Prebid.js example shown for integrating Prebid into a web page, along with a JSFiddle.
I use the exact same ad units and GPT configuration from the fiddle in my demo application, to no avail – no creative in the ad slots, only the “house ad” fallback, an empty response to the bids back handler, empty ad server targeting.
Then I discovered that if I copy the code from the basic Prebid.js example fiddle to an HTML page on my local dev server, it also fails in the same way – no creative in the ad slots, only the “house ad” fallback, empty response to the bids back handler, empty ad server targeting.
I then created a sandbox with my demo (→ https://codesandbox.io/s/k5w8mr9o23), and there, I do get the desired result, the demo creative is shown.
It seems that with Prebid 1.x, you cannot get ad slots filled when running on localhost.
Can anyone confirm this? Is there a way to make this work?

What's happening
The Prebid library needs cookies to work. It fails on your local dev server because cookies are problematic when running the web server on localhost.
You can verify this by typing document.cookie in your JavaScript console on the codesandbox demo and on your local server: a __gads cookie will be set where the demo works, but not on localhost.
__gads is a cookie set on your domain by Google ads. This is probably an issue on their side, not Prebid's.
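A quick way to check this from the console (just a sketch):
document.cookie.split('; ').some(function (c) {
    return c.indexOf('__gads=') === 0;   // true where the demo works, false on localhost
});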
How to fix it
The easiest way to fix this is to avoid pointing your browser at localhost (or at 127.0.0.1 or other local IP addresses).
Instead, map a different hostname to 127.0.0.1 (preferably one containing at least two dots) and use that.
You can do this by adding an entry to your hosts file (there are instructions for Windows, Mac, and Linux).
In my case (Linux system; should also work on Mac) I added the following line to /etc/hosts:
127.0.0.1 www.localhost.localdomain
Then I pointed my browser at www.localhost.localdomain:3000 and everything worked.
I can see that the __gads cookie is set at this address, while it's not set on localhost:3000.

Related

Etherpad disconnects when collaborating with multiple users

Inside a Vue component, I run multiple instances of the Etherpad editor, embedded in iframes. Below is an example of that component:
Etherpad.vue (A page component)
<div>
  <iframe src="https://mydomain.in/p/1505-g4Q?token=uuid&userName=jo&userColor=%23ffc7c7&rtl=false&lang=en"></iframe>
  <iframe src="https://mydomain.in/p/1506-g4Q?token=uuid&userName=jo&userColor=%23ffc7c7&rtl=false&lang=en"></iframe>
  <iframe src="https://mydomain.in/p/1507-g4Q?token=uuid&userName=jo&userColor=%23ffc7c7&rtl=false&lang=en"></iframe>
</div>
Now, I prepare a URL for this page, say https://mydomain.in/etherpad/collaborate, and share it with others to collaborate on this page. Each user who has access to this URL can type inside these three editors.
The issue
If only a single user is collaborating on this page, things are fine. As soon as other users start working on it, each user gets the following disconnection popup:
And once a user clicks "Force reconnect", the popup appears on the other user's side, with the result that only one user can type at a time.
Environment
vue: 2.6.11
node: 13.14.0
Etherpad version: 1.8.16
Web server: Nginx
Browser: Chrome
Authentication: Custom JWT verification (can see in iframe URLs)
Etherpad database type: PostgreSQL (the database is hosted on a separate machine)
The Etherpad client and the Vue app are served on the same domain, to avoid cross-domain cookie and cross-domain embedding issues.
I am serving Etherpad behind an Nginx reverse proxy. The root path ('/') serves the Vue application, and requests to the /p/ path are passed to the Etherpad node server.
Things I tried to debug
Tested locally with one user in Firefox and another in Chrome: works fine.
Tested with two users on different IP addresses: the disconnection popup appears.
Some answers suggest altering the connection limit in the settings, but they don't say which settings.
For some users this issue occurs when pasting long text, but for me it happens even when typing a single sentence.
Any suggestions on where to debug, or which settings to change in code or in the server configuration, would be great.

Not a valid origin

I'm trying to port a prior project written in pure JS to a React project. This project makes extensive use of the Google API JavaScript Client to access the YouTube Data API, and I'm encountering problems I did not encounter in the original project.
The error I encounter is odd, and I shall explain. I have added the following origin to my API key / OAuth client credentials: http://localhost:8000. Actually, this is how it was on the original project; it is unchanged. What is odd is the error I now get:
"Not a valid origin for the client: http://Localhost:8000 has not been whitelisted for client ID {id}. Please go to https://console.developers.google.com/ and whitelist this origin for your project's client ID."
I double-checked and it is in fact present, but I noticed that the first letter in "local" is uppercase, so I added that variant specifically. The whitelisted URLs are now as follows:
http://localhost:8000
http://Localhost:8000
After adding that, I get the same error again, but this time with a second capital L: http://LocaLhost:8000.
A few prior searches on Stack Overflow mentioned clearing the browser cache and doing a hard reload, but that has not solved the issue. Does anyone have any suggestions as to what the error may actually be?
EDIT: I've narrowed this down to having something to do with React. If I pull up my old project on localhost, it works on Live Server's default port, 5500. But if I try to run the React app on localhost:5500 after closing Live Server, it fails. Any ideas?
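For reference, the client is initialised along these lines (a sketch only; the API key and client ID are placeholders, not my real values), and the "Not a valid origin" error surfaces in the init promise's rejection handler:
gapi.load('client:auth2', function () {
    gapi.client.init({
        apiKey: 'YOUR_API_KEY',                                   // placeholder
        clientId: 'YOUR_CLIENT_ID.apps.googleusercontent.com',    // placeholder
        discoveryDocs: ['https://www.googleapis.com/discovery/v1/apis/youtube/v3/rest'],
        scope: 'https://www.googleapis.com/auth/youtube.readonly'
    }).then(function () {
        console.log('gapi client initialised');
    }, function (error) {
        console.error(error);   // "Not a valid origin for the client: ..."
    });
});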

Simulate XMLHttpRequest as from localhost or Remote Connection to a machine

I have a website hosted in IIS (it could be another server) that loads when it's called on localhost but not from an external address :) like: http://:8081/Website.html.
The check for whether the website is called from localhost is done on the client, in a JS script that I can't modify, as it's encrypted.
So I was thinking of two options:
Develop an ASP application that has a remote desktop connection to the machine that hosts the website (there are not many examples on how to do this).
Maybe adjust the IIS configuration (I didn't find out how).
I'm out of ideas.
Do you have any other solution, or can you point me to how I could do one of the above?
I have tried the WinForms solution from here: https://www.codeproject.com/kb/cs/remotedesktop_csharpnet.aspx and it doesn't work. And I would prefer a website.
Updates:
The only working solution I have for now is to configure Remote Desktop Services (Web Access), since I hosted the application on Server 2008 R2. Then I shared only the browser, which has the localhost page as its default page.
The JavaScript files are all minified and encrypted, meaning that if I search for "localhost" as a word in all the files, nothing shows up. So fixing the client will be hard.
Is it possible to create a new Site Binding on IIS and access the site using the binding hostname? This requires your network DNS to register the hostname to the IP Address.
I assume you are dealing with encrypted (???) JavaScript that is hardcoded to render the DOM only if it is loaded from localhost.
If by encrypted you mean minified, you should still be able to find the reference to "localhost" and modify the JavaScript in the minified version. If it is really encrypted by a wrapper of some third-party JavaScript library, then I would suggest rewriting the JavaScript. I mean, how can there be any quality code in JavaScript that is hardcoded to load only from localhost?
Fix the client and stop exploring other solutions like remote desktop connections. None of them are practical or sustainable.
I think you need to use WebRTC, but it's only supported in Chrome and Firefox. It allows two users to communicate directly, browser to browser, using the RTCPeerConnection API.

Spiderable package working very sporadically due to fonts from typography.com [UPDATE]

Update
OK, I've tracked down the error! I'm using fonts from http://www.typography.com/ and if I remove the link to the fonts from the <head> (or even put it in the body instead) the site is fetched correctly every time!
Summary: If you're using webfonts which are loaded from a remote domain (with some kind of license approval process taking place as well) then the spiderable package will break!
The original question:
So I got this simple site built using meteor.js. It's on Digital Ocean, deployed using meteor up (with phantomjs enabled) and it's using the spiderable package.
Here's the site, it's a simple portfolio.
Now when I, for example, do curl http://portfolio.new-doc.com/?_escaped_fragment_=, it will first return an empty body (classic meteor-without-spiderable behaviour), but if I do the same curl again within a few seconds it returns the correct result. (The same is true if I curl localhost:3000 on my machine.)
So first the spiderable package does not do its thing, and then it does. It kind of feels like the first curl returns the empty site (but loads up all the publications/subscriptions on the server) and the second curl uses the now-loaded subscriptions and returns the correct result.
This is also true for Google Webmaster Tools. My first fetch as Googlebot returns an empty body, and the second one (if made quickly after the first) returns the correct page.
The site only has one publication and one subscription. The publish function either returns one or more pages from a collection or runs this.stop(). The subscription is set up in a waitOn function in the app's only iron-router route. No complicated stuff here.
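Roughly, the setup looks like this (a simplified sketch; the collection, publication, and template names are placeholders, not the real ones):
Meteor.publish('pages', function () {
    var cursor = Pages.find();            // placeholder collection
    if (cursor.count() > 0) {
        return cursor;
    }
    this.stop();
});

Router.route('/', {
    waitOn: function () {
        return Meteor.subscribe('pages');
    },
    action: function () {
        this.render('portfolio');          // placeholder template
    }
});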
Since the curl command sometimes returns the correct result, I don't think the error is in the publish/sub code?
I've gotten the spiderable package to work in the past, but I've also spent a lot of time battling it!
Quite frustrating.
Any ideas? Thanks!

How to check the authenticity of a Chrome extension?

The Context:
You have a web server which has to provide exclusive content only if your client has your specific Chrome extension installed.
You have two possibilities to provide the Chrome extension package:
From the Chrome Web Store
From your own server
The problem:
There are plenty of ways to detect that a Chrome extension is installed:
Inserting an element when a web page is loaded, by using Content Scripts (a sketch of this approach follows below).
Sending specific headers to the server by using Web Requests.
Etc.
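For instance, the content-script approach boils down to something like this (just a sketch; the marker id is arbitrary):
// content script: mark the page so the site's own JS can detect the extension
var marker = document.createElement('div');
marker.id = 'my-extension-marker';        // arbitrary id, purely illustrative
marker.style.display = 'none';
document.documentElement.appendChild(marker);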
But there seems to be no way to check whether the Chrome extension interacting with your web page is genuine.
Indeed, since the source code of a Chrome extension can be viewed and copied by anyone who wants to, there seems to be no way to know whether the Chrome extension currently interacting with your web page is the one you have published, or a cloned (and maybe somewhat altered) version published by someone else.
It seems you can only know that some Chrome extension is interacting with your web page in an "expected way", but you cannot verify its authenticity.
The solution?
One solution may consist in using information contained in the Chrome extension package which cannot be altered or copied by anyone else:
Sending the Chrome extension's ID to the server? But how?
The ID has to be sent by your own JavaScript code, and there seems to be no way to do it with an "internal" Chrome function.
So if someone else just sends the same ID to your server (some kind of Chrome extension ID spoofing), your server will consider their Chrome extension genuine!
Using the private key which was used when you packaged the extension? But how?
There seems to be no way to access or use this key programmatically!
Another solution may consist in using NPAPI Plugins and embedding authentication methods like GPG, etc. But this solution is not desirable, mostly because of the big "Warning" section in its API docs.
Is there any other solution?
Notes
This question attempts to raise a real security problem in the Chrome extension API: how to check the authenticity of your Chrome extension when it interacts with your services.
If there are any missing possibilities or any misunderstandings, please feel free to ask me in the comments.
I'm sorry to say, but this problem as posed is in essence unsolvable, because of one simple fact: you can't trust the client. And since the client can see the code, you can't solve the problem.
Any information coming from the client side can be replicated by other means. It is essentially the same problem as trying to prove that when a user logs into their account it is actually the user, and not somebody else who found out or was given their username and password.
Internet security models are built around two parties trying to communicate without a third party being able to imitate one of them, or to modify or listen in on the conversation. Without hiding the source code of the extension, the client becomes indistinguishable from the third party (a file among copies: no way to determine which is which).
If the source code were hidden, it would be a whole other story. Then the user or a malicious party would not have access to the secrets the real client knows, and all the regular security models would apply. However, it is doubtful that Chrome will allow hidden source code in extensions, because it would create other security issues.
Some source code can be hidden using NPAPI Plugins, as you stated, but that comes at a price, as you already know.
Coming back to the current state of things:
Now it becomes a question of what is meant by interaction.
If interaction means that, while the user is on the page, you want to know whether it is your extension or some other one, then the closest you can get is to list your page in the extension's manifest under the app section, as documented here.
This will allow you to ask on the page whether the app is installed by using
chrome.app.isInstalled
This returns a boolean indicating whether your app is installed or not. The property is documented here.
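For example (purely client-side, so easily spoofed, as noted below):
if (chrome.app && chrome.app.isInstalled) {
    // the app/extension that lists this page in its manifest is installed
} else {
    // not installed (or the check itself has been tampered with)
}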
However, this does not really solve the problem, since the extension may be installed but not enabled, while another extension mocks the communication with your site.
Furthermore, the validation happens on the client side, so any function that uses it can be overwritten to ignore the result of this check.
If, however, interaction means making XMLHttpRequests, then you are out of luck. It can't be done with current methods, because of the visibility of the source code, as discussed above.
However, if the goal is limiting your site's usability to authorized entities, I suggest using regular means of authentication: having the user log in lets you create a session. This session will be propagated to all requests made by the extension, so you are down to the regular client-login trust issues, such as account sharing. These can in turn be managed by making the user log in via their Google account, which most people are reluctant to share, and further mitigated by blocking accounts that appear to be misused.
I would suggest doing something similar to what Git utilises (have a look at http://git-scm.com/book/en/Git-Internals-Git-Objects to understand how Git implements it), i.e.:
Create SHA1 values of the content of every file in your Chrome extension, then create another SHA1 value of the concatenated SHA1 values obtained earlier.
In this way you can share the SHA1 value with your server and authenticate your extension, as the SHA1 value will change whenever anyone changes any of your files.
Explaining it in more detail with some pseudo code:
// Client side (pseudo code): get_all_files_in_extension and get_file_content are placeholders
function get_authentication_key() {
    var files = get_all_files_in_extension,   // pseudo: list of every file in the extension
        concatenated_sha_values = '',
        authentication_key;

    for (var file in files) {
        // pseudo (Ruby-style digest): SHA1 of each file's content, concatenated together
        concatenated_sha_values += Digest::SHA1.hexdigest(get_file_content(file));
    }

    $.ajax({
        url: 'http://example.com/getauthkey',
        type: 'post',
        data: { string: concatenated_sha_values },   // send the fingerprint to the server
        async: false,
        success: function (data) {
            authentication_key = data;
        }
    });

    // You may either return the SHA value of the concatenated values
    // or return the concatenated SHA values themselves
    return authentication_key;
}

// Server side code (pseudo code)
get('/getauthkey') do
    // One can apply several types of encryption/hash algorithms to the string passed, to make it unbreakable
    authentication_key = Digest::<encryption>.hexdigest($_GET['string'])
    return authentication_key
end
This method allows you to check whether any kind of file has been changed, be it an image file, a video file, or any other file. I would be glad to know if this approach can be broken as well.
