I'm implementing a plug-in that's embeddable in different sites (a la Meebo, Wibiya), and I want to use Facebook Connect. The plug-in is supposed to be embeddable in sites with different domain names. Problem is, Facebook connect allows only one domain per application you register.
The question is, how can I have multiple domains for a single Facebook application, assuming:
When users "Allow" the application on one site, they won't have to "Allow" it on other sites as well.
Preferably, after the initial log-in, users won't see a pop-up opening on every site they log-in to (i.e. - I'd rather not open a link to my domain and do the log-in process from there).
Is there any way of doing that?
If not, is my only option to manage all the log-ins from a single domain and pass the cookies back to the original domains?
And if I pass the cookies between domains, how can I be sure that Facebook won't block this kind of behavior in the future?
I'd appreciate any suggestions, though I'd prefer an official solution over hacks, if at all possible.
I'm assuming you are using facebook.php by Naitik Shah? Your widget would, of course, need to be on every page and include the async connect-js script.
I am currently developing a facebook login based application myself.
I would say the best solution is to log in through your own domain and pass the cookie. Your app/widget will be the only one users allow to share information with, and nothing should operate differently from a single-page solution. I envisage a PHP plugin which executes a login from an outside domain and passes the cookie through to the site via the widget. Return the cookie securely however you wish (but avoid anything dodgy like storing it in a div and retrieving it, or anything else a hacker could try to spoof). The site will then use the cookie for account and user-ID purposes, and the widget will control all login actions and session discovery using the async script (routed through a different domain).
Sorry I can't be more help, but this is the only solution I can think of, and it seems you had arrived at it already anyway.
In terms of keeping session control across different domains, you only need the 3rd-party cookie to be active. Once your page is activated for your domain, you will already have the cookie for that domain, provided you haven't logged out and it hasn't expired. That's a benefit of using an outside management domain.
This would also seem to be the most reliable way compared to any hack for multiple domains, because I would expect Facebook and OAuth 2.0 to be OK with an approved party sharing info (cookies) with another party approved by the approved party. But it could be problematic if they think users will have privacy issues, because you could potentially share the cookie on any site without the user's permission. So you have to be careful to notify users about all the sites they will be auto-logged into, and to treat them with respect.
Good luck with it, hope you let us know how it goes.
There is an easy and clean technique for this: Single Sign-On (SSO). You can search for more information about it.
I'm working on a hobby project. The project is basically an integrable live-support service. To describe my questions easily, I will call my service service.com and call the website that uses my service website.com. I'm thinking of implementing session management to restore disconnected visitors' chats, and to do that I'm planning to use cookie-based session management. If the owner of website.com wants to use my service, I will provide them a JavaScript file which will inject some HTML into the body, style tags into the head, and implement the interaction. All website.com has to do is import that JS file and call a function defined by it. To set 3rd-party cookies on website.com from my service.com, I will use this request/response: when website.com requests my JS file from service.com, my service will respond with the JS file along with a cookie to manage the visitor's session. This way service.com will set a 3rd-party cookie on website.com's visitors.
1st Question: Could this stage of setting a cookie on website.com's visitors be done on the front end with that requested JS file, or with a locally (from website.com's web server) requested JS file? Would it still be a 3rd-party cookie if it were set on the front end of website.com?
2nd Question: My other question is about cookie consents. Can a website that sets 3rd-party cookies (e.g. service.com) on some other website (e.g. website.com) ask to allow its cookies on that website.com? In other words, can I ask website.com's visitors to allow only the 3rd-party cookies that are set by service.com with the JS file I serve/give to website.com? Would that be legal?
3rd Question: How do cookie consent banners work behind the scenes? What happens when you accept/deny all of the 3rd-party cookies used on a website? Or what happens when you filter and accept only a few of them? How does the process of allowing/disallowing work? Is there some kind of JavaScript that is triggered when you click that "Accept" button or "Decline" button? You can provide me any resources on this topic.
Thanks!
http vs javascript cookies
All website.com has to do is import that JS file and call a function defined by it. To set 3rd-party cookies on website.com from my service.com, I will use this request/response: when website.com requests my JS file from service.com, my service will respond with the JS file along with a cookie to manage the visitor's session. This way service.com will set a 3rd-party cookie on website.com's visitors.
If by "request/response" you mean an http request to service.com which will reply with cookies to be stored under website.com (the customer domain)... that doesn't work with http cookies, because you are limited to reading/setting cookies within your own domain namespace. I.e. a response to a request to api.foo.example.com can receive and set cookies at:
api.foo.example.com
foo.example.com
example.com
but NOT cookies at www.example.com.
So for that request from website.com to service.com, service.com can only set cookies under service.com. These are called "third-party cookies" in this scenario, as the "first party" is website.com and your service.com is a third party (the visitor is interacting with website.com). Many browsers (Safari, Firefox) block third-party cookies by default.
To work around this problem and have a more reliable cookie (even if you are only using it for a session and not across multiple visits to website.com), you have two options:
customer whitelabel DNS. The customer creates a DNS record livechat.website.com and CNAMEs it to api.service.com. api.service.com then handles traffic via the livechat.website.com domain and can read/set cookies there. However, this requires a more technical integration on the customer's side, as it involves adding a DNS record in addition to adding your script tag.
javascript cookies. Instead of setting the cookie in the http response from service.com, the JavaScript returned from service.com runs in the website.com domain and so can set JavaScript cookies, as well as read cookies under that domain (as long as they weren't set with the httponly option). Take a look at the js-cookie library if you don't want to worry about cross-browser issues when coding against the native browser document.cookie API.
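The second option can be sketched without any library. A minimal example, assuming a made-up cookie name `lc_session`: because the script served by service.com executes inside the website.com page, `document.cookie` writes land under website.com and become first-party cookies.

```javascript
// Pure helper that builds the cookie string, so it can be inspected/tested
function buildCookie(name, value, days) {
  var expires = new Date(Date.now() + days * 864e5).toUTCString();
  return name + '=' + encodeURIComponent(value) +
    '; expires=' + expires + '; path=/; SameSite=Lax';
}

// Runs inside the embedding page, so this writes a first-party cookie
function setSessionCookie(sessionId) {
  document.cookie = buildCookie('lc_session', sessionId, 1);
}

// Read the session cookie back from the embedding page's cookie jar
function getSessionCookie() {
  var match = document.cookie.match(/(?:^|; )lc_session=([^;]*)/);
  return match ? decodeURIComponent(match[1]) : null;
}
```

The js-cookie library wraps exactly this kind of string handling, plus escaping edge cases, which is why it's worth using in production.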
If you don't do one of the above, your cookie set on a response to a request to service.com will be a third party cookie and may not work consistently.
http cookies
...are cookies set via the http response header Set-Cookie, and can only be set for the domain namespace of the host that was requested. If this host (the full domain name, with subdomains) is different from the domain in the user's address bar, it is considered a third-party cookie and subject to some limitations.
You can set first-party http cookies as a third-party if the customer will point a DNS record under their domain at your service.
javascript cookies
...are cookies set by JavaScript within the page. JavaScript can set cookies within the domain/namespace of the frame it is running in, and can read cookies from that domain/namespace as well, as long as they weren't set with the httponly option (often done to prevent third-party JavaScript from hijacking session cookies).
You can use JavaScript cookies as a third party by being loaded into a frame of the appropriate domain.
You may also want to read up on Content Security Policy (CSP), which can prevent your third-party JavaScript from running; cover it in your customer deployment documentation in case the customer is using CSP to lock down their site.
1st Question
Could this stage of setting cookie on website.com's visitor...
done on the front-end with that requested JS file
Yes, this is a JavaScript cookie. See above.
or locally (from the website.com's web server) requested JS file
Not sure exactly what you mean. The website.com web server could host/proxy your JS file, but that is just static file serving, so it doesn't really help you with the session-cookie logic.
The customer could host a proxy to your API that rewrites the cookie headers on your response to make them first-party. Though technically possible, this is way over-complicated and I don't recommend it; just showing that many things are possible.
You can contrive many solutions, of course. For instance, your customers could host a very simple webapp that handles reading/setting cookies on demand in response to a JavaScript request, i.e. the customer hosts a little app you built under their domain in order to read/set certain http cookies and provide this info in response to API calls from your JavaScript. However, I would argue this requires more technical integration on the customer's end than the custom-DNS option above.
I suggest you stick with one of the following...
third party cookies directly set on the http response from service.com
first party cookies set by your javascript client-side after being loaded/run within website.com frame.
first party cookies directly set on the http response from livechat.website.com which has been pointed at service.com via DNS.
2nd Question
Can a website that sets 3rd party cookies (e.g service.com) on some other website (e.g website.com) ask to allow their cookies on that website.com?
There are two relevant pieces of regulation where all these cookie consents come from. GDPR in the EU and CCPA in California.
Most cookie popups you see are GDPR related and follow a standard called the Transparency and Consent Framework (TCF), managed by the Interactive Advertising Bureau (IAB). The technical party that provides the cookie popup functionality is called a Consent Management Platform (CMP) within the TCF spec. CMPs sit between the website (aka "publisher") and the various third-party vendors that might want to do something with visitor data on that website (cookies or otherwise). Vendors/cookies are grouped into "purposes", which allow visitors to consent to one type of data use but not another. There are required cookies (required for the website to work, like a login cookie), plus analytics, marketing, and other types of purposes. Feel free to read the spec if you want to know all the technical details of how these parties (publishers, CMPs, vendors) work together.
But long story short, you don't request anything from the cookie popup, your company is registered to participate in the spec as a vendor and then the CMP can include you in the list of third party vendors on a website that a visitor can consent to. As a personal hobby-project, forming a company and joining the TCF framework is probably beyond what you want to do at this point.
However!
this is only required in the EU; if you don't have customers/users in the EU, you probably don't need to worry about this.
And!
your live chat would fall under the required/functional cookies that make the live-chat function of the website work. So as long as you are careful about data collection, storage location, and processing, you can probably also operate in the EU without problems and don't require any special additional cookie consent, since you fall under the required/functional cookie umbrella for the website. Leave data-processing and privacy responsibility in website.com's hands.
ideally use a session cookie with the DNS option (under the website.com domain). Don't track the user beyond session restoration or put any sensitive data in the cookie (or local storage) that will persist across sessions.
if you are going to store chat logs on your own servers, then there is a high risk you get personal data as the user provides it to a support agent (phone number, name, address, etc.). This gets hairy fast in terms of legal requirements and disclosures. If you aren't a company, no legitimate company doing business in Europe will use your live chat, because of the lack of data-security/privacy accountability.
So you may need to make the chats ephemeral, e.g. using client-side storage under the customer's domain (website.com) so that the chat logs are never stored/persisted on your own servers. Your servers just connect visitors to agents and pipe data back and forth without storing it.
If a customer wants to save chat logs, offer them some option where they are streamed to files on their own servers so you don't touch them. Or offer a self-hosted add-on they can host themselves that gives them data retention and reporting (under their control, not yours so easier sale). If it gets big enough and you form a company around this...you can always do the compliance things to provide a SaaS hosted app that has the data in it.
ideally use servers in the EU for EU visitors to avoid inadvertently transferring data abroad without consent (even if it is ephemeral).
Don't log any personally identifiable info, user IDs, etc. on your service.com servers. Just log a chat ID, start/end time, agent ID, topic, and other stats you need for billing, but nothing about the visitor. If you want to record the IP address, truncate the last octet (or set it to 0) to semi-anonymize it.
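The IP truncation mentioned above is a one-liner. A hypothetical helper (the function name is made up for illustration):

```javascript
// Zero the last octet of an IPv4 address before logging, so the stored
// value can no longer identify a single host on the customer's network.
function anonymizeIp(ip) {
  var parts = ip.split('.');
  if (parts.length !== 4) return null; // not a plain IPv4 address
  parts[3] = '0';
  return parts.join('.');
}
// anonymizeIp('203.0.113.42') → '203.0.113.0'
```

This mirrors what Google Analytics does with its own IP-anonymization option, which zeroes the last octet before storage.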
Make a privacy explainer "one sheet" that explains technically how you avoid ever touching (or persisting) any potentially sensitive data ("private by design") and include this with your marketing materials as it will help short-circuit any inquiries from prospective customer legal teams.
3rd Question
How do cookie consent banners work behind the scenes?
Most large companies are using legit cookie consent banners that implement the TCF framework (policy and technical specs) from IAB Europe. All the tech specs are public on that website (for CMPs for Vendors, etc).
You can't just integrate with a callback; it doesn't work that way. You need to be registered to participate in the framework as a vendor. Then you can call a specific API function provided by the CMP to check whether the visitor has provided consent yet, and whether you (as a third-party vendor) have received consent, and for which specific purposes.
However as I mentioned in the answer to question 2 you probably don't need to worry about this if you are careful.
Some websites have rolled their own cookie consent widgets, because they are too small to deal with the complicated licensing of a full CMP and because they often have very few third-party vendors to disclose (maybe just Google Analytics, Google Ads, and a Facebook remarketing pixel). These widgets, if well built, should prevent any of those third-party scripts (or other http calls) from being loaded until consent has been given (or rejected).
One I built years ago uses Google Tag Manager (loaded post-consent) to manage what gets loaded, using GTM triggers. We don't load GTM until we have received a signal from the user. Before we fire GTM, we add consent signals to the data layer indicating which purposes (functional/analytics/marketing) the user has consented to (or not). If the user has visited the site before, the previous consent is loaded from a cookie so the widget doesn't pop up again. If the consent disclosure details (vendors, purposes) have changed, all users get the popup again; they also get it after a year has passed. In GTM we set up triggers so that tags only fire when the appropriate consent has been given. Functional/required cookies are always loaded outside GTM. If we have no analytics or marketing consent, we never load GTM at all, speeding up the site for "no" visitors. GTM has since added consent-specific features of its own.
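The gating described above can be sketched roughly as follows. The `dataLayer` keys and the `loadGtm` helper are made up for this example (they are not a GTM standard); only the gtm.js URL shape is the real one:

```javascript
// Record the user's consent choices, push signals to the data layer,
// and only then decide whether to load GTM at all.
function onConsent(purposes) {
  // purposes e.g. { functional: true, analytics: true, marketing: false }
  window.dataLayer = window.dataLayer || [];
  window.dataLayer.push({
    event: 'consent_ready',
    consent_analytics: !!purposes.analytics,
    consent_marketing: !!purposes.marketing
  });
  // Skip GTM entirely when nothing beyond functional cookies was allowed
  if (purposes.analytics || purposes.marketing) {
    loadGtm('GTM-XXXXXX'); // placeholder container id
  }
}

function loadGtm(containerId) {
  var s = document.createElement('script');
  s.async = true;
  s.src = 'https://www.googletagmanager.com/gtm.js?id=' + containerId;
  document.head.appendChild(s);
}
```

Inside GTM, triggers would then fire tags only when the matching `consent_*` data-layer value is true.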
TCF works the opposite way: most vendors will always load, but they are supposed to "self-govern" and check the signal from the CMP as to whether they have consent or not. That means their code has to be modified extensively to handle requests when they don't have consent to set/read cookies (for instance). A vendor may get consent for one purpose but not another, so their code has to respect that; it gets complicated fast if a vendor has many different cookies and purposes. Following the policy is part of what vendors agree to when joining the TCF framework. TCF is also facing some big challenges at the moment due to the Belgian Data Protection Authority's ruling on the validity of the TCF for implementing the privacy legislation, but that's another can of worms. Point is: clicking "no" to cookies doesn't necessarily mean fewer network requests or less JavaScript running in the TCF world.
And you probably don't need to worry about cookie popups as a functional cookie if you are careful about what data you store (don't) and keeping things you do store under the customer's domain.
If you decide to build a business model based on the chat data (e.g. disqus style) then you have a lot more you will need to do to be legally compliant as well as to reassure your larger customers' legal/privacy teams.
Some other cookie popups are pure optics. Old sites with lots of manually added script tags and no tag management face a technical nightmare getting compliant, so they add a widget that makes it look like they are compliant while nothing changes behind the scenes. These are usually small websites with little to no revenue, so they figure the European DPAs will never bother to come after them. However, it is just a matter of time until specialty law firms have bots and letter generation to automate the mass harassment of long-tail sites. The main problem at the moment is how those law firms get paid, but if they manage to negotiate a percentage of the DPA fines for providing enforcement as a service, then it will become a big thing.
1st Question
It depends on how the cookie is created and stored. If the cookie is storing a user-specific, website-specific session ID and will only ever be used on that website, it can be stored using a 1st party cookie set by the JavaScript you serve to the front-end. If it's to be used on other websites (such as a unique user ID for adtech firms) then that would be 3rd party.
2nd Question
That's not your responsibility. It is the responsibility of the website provider as a "data controller" (the website owner) to declare their "data providers" (you) to their users and give them a choice whether or not they would like to have their data stored and (potentially) processed.
You can however respect the DoNotTrack setting the browser provides and you can also implement a workflow which allows your code to await permission of some sort. By that I mean, you can ensure your code doesn't execute until a function such as cookiePermissionProvided() is called. That would allow the developer of the site to implement your code into their site's cookie consent callback effectively.
3rd Question
You may or may not be surprised to hear this, but some of them do absolutely diddly squat.
However, the ones that actually work usually use some kind of promise or callback functionality such as ...
const cookieConsentGiven = new Promise((resolve, reject) => {
  // Add HTML to the page with 2 buttons:
  // one triggering resolve (accepted),
  // one triggering reject (not accepted)
});

cookieConsentGiven.then(
  // resolved
  (val) => {
    // Handle cookie approval, run code
  },
  // rejected
  (val) => {
    // Handle cookie disapproval;
    // only run code which doesn't control/process personal data
  }
);
Again, the responsibility of which code to run when filtering particular cookies is placed upon the website owner, not you. Your responsibility is to ensure your code respects that it must wait to be told to run/store user-specific data.
Hopefully this has come in useful.
I had very similar questions when implementing this for our ecommerce platform, which runs on hundreds of retailers' websites. Ultimately we chose a promise-based system which awaits permission before running any code that stores user-sensitive data. Some cookies can't be avoided, such as ASP.NET session cookies (these are accounted for in the legislation).
In summary, I don't believe you have to worry about half as much as you think you do. Just ensure your code doesn't execute until it is told to. If you can, provide an alternative callback so your functionality can run without storing personal data; e.g. the chat functionality won't work across browser sessions or page reloads, so you should account for this in your UI by letting the user know before they start chatting (explain why this is the case, and even allow them to opt in after the fact [you must explain what is stored and why]; this is also allowed).
I have a website developed in PHP and JavaScript language and I am using cookies on my website. Also there are many third party scripts like google analytics, mouseflow, third party chat script etc on my website. These scripts are also storing cookies.
To get my website GDPR compliant, before storing marketing cookies (like analytics) I need to make sure that the visitor has given his/her consent for storing it.
We can show the visitors a pop-up stating the cookie policy and once they accept, we will start storing cookies.
So, how can we prevent any cookie from being stored before the user has given consent?
Well, what I always do is check if the consent cookie exists; if not, I don't allow access to the features of the website that require it. If it exists, the user has given consent.
To do it cleanly, you need to check server-side that there is a "cookie consent" cookie/session var present. If so, serve the normal website; if not, render a barebones page with just the cookie consent form and zero tracking scripts.
It's not that easy, but I guess getting rid of the tracking scripts is not an option for you.
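A complementary client-side sketch of the same idea: gate the injection of each tracking script on a consent cookie set by your banner. The cookie name `cookie_consent` and the `=yes` value are assumptions for this example; only the analytics.js URL is the real one:

```javascript
// True once the consent banner has set its cookie
function hasConsentCookie() {
  return /(?:^|; )cookie_consent=yes(?:;|$)/.test(document.cookie);
}

// Inject the analytics script only after consent; returns whether it loaded
function loadAnalyticsIfAllowed() {
  if (!hasConsentCookie()) return false; // no consent yet: load nothing
  var s = document.createElement('script');
  s.async = true;
  s.src = 'https://www.google-analytics.com/analytics.js';
  document.head.appendChild(s);
  return true;
}
```

The same gate would wrap the Mouseflow and chat-widget snippets: none of their script tags go into the page markup directly; they are all injected from a function like this.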
SITUATION
I have a main public Liferay website, that is therefore accessible both by intranet and not-intranet (i.e. public) users.
I also have a Liferay intranet website, which is accessible only to intranet users because is protected via a login page.
The login page to the intranet website is public.
After you successfully login, the intranet website is loaded.
EXPECTED:
In my Google Analytics account for the main website, I want to differentiate intranet users from public users (e.g. in order to understand how the 2 categories behave).
Questions
Can I use a custom dimension to solve this problem, or is there a better way?
Custom dimension data has to be sent via hits (UPDATE: by "hits" I meant either pageview or event hits, I am not referring to the dimension scope, cf. https://developers.google.com/analytics/devguides/collection/analyticsjs/custom-dims-mets), therefore I should:
load the Google Analytics tracking code of the main website on the intranet website (the site displayed after successfully logging in)
send a pageview hit from this Intranet website to the main website together with a custom dimension, e.g.
ga('send', 'pageview', {
  'dimension1': 'I am an intranet user'
});
Is this correct?
Does the above mentioned solution have any impact on my Analytics data in the main website (e.g. more pageviews due to the tracking code added to the intranet website, or strange behaviours in counting user sessions, etc.)?
Thanks a lot.
UPDATE:
Actually, the solutions proposed below would not work because the 2 websites (intranet and not-intranet) are considered different domains.
So, even if I had the following domains
intranet website: http://intranet.mycompany.com
company website: http://www.mycompany.com
and I sent data to the same UA account (i.e. the company website UA account), they would be counted as different visits.
Quoting Google (see https://developers.google.com/analytics/devguides/collection/gajs/gaTrackingSite#profilesKey)
If a user independently visits two sites that are tracking in the same
view (profile), such as through a bookmark, these visits will still be
counted under separate sessions. In this scenario, the linking methods
are not invoked, and thus there is no way to determine the initiating
session for a given user.
So, how could I solve my problem?
Would it be possible to solve it by implementing cross-domain tracking (https://support.google.com/analytics/answer/1034342?hl=en), and how?
Thanks a lot.
Can I use a custom dimension to solve this problem, or is there a better way?
Yes, a custom dimension is perfect for this.
Custom dimension data has to be sent via hits
The user-level scope is more appropriate than the hit-level one for what you want to achieve. The linked document explains in detail why, and gives an example similar to your use case.
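With a user-scoped dimension (configured as such in the GA admin), you only need to set it once on the tracker and it is associated with all of that user's hits. A sketch using classic analytics.js, wrapped in a hypothetical helper; the dimension index `dimension1` and the value strings are assumptions:

```javascript
// Call once on the intranet pages after the tracker is created.
// dimension1 must be configured as user-scoped in the GA admin UI.
function tagUserType(userType) {
  ga('set', 'dimension1', userType); // e.g. 'intranet' or 'public'
  ga('send', 'pageview');
}
```

Public pages would either call `tagUserType('public')` or simply never set the dimension, and you then segment reports on dimension1.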
Does the above mentioned solution have any impact on my Analytics data in the main website
Yes, impact is mainly that you will have extra data corresponding to the visits to the intranet.
A custom dimension works well for your purpose. You will get additional hits for visits on your intranet site, but you can segment them out via the custom dimension to separate internet from intranet traffic.
Since the intranet requires a login there is one other way you could try, which would have the additional benefit of allowing for cross-device tracking (if that is beneficial to you).
Google calls this the "User ID", despite the fact that it must not be used to identify individual users. On login you pass in a unique value per user that is set by your backend system (UUID format is suggested, but any unique string would work). Since it is not assigned by the tracking code but set by your system, it will be the same ID on every device. It is used to de-duplicate users, i.e. persons who log in from multiple devices will be recognized as single users (also useful if people delete their cookies: the User ID can be used to aggregate sessions into unique visitors).
To make this work you need to set up a special view that contains only data from visits where the User ID is set (so you would have a view for your public site and a view only for your logged-in users). You also get a few special reports, for example one that tells you how many users log in from different device categories.
What the User ID should not do, and in fact must not do according to Google's terms of service, is identify individuals. The User ID is not exposed in the interface, and you must not store it as a custom dimension. If you store it client-side in a cookie, you must unset it once the user logs out. It is merely there to allow continuous tracking of users independently of cookies (plus you need to amend your privacy policy if you want to use it).
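Wiring the User ID in with classic analytics.js is a one-line addition to tracker creation. A sketch in a hypothetical helper; `'UA-XXXXX-Y'` is a placeholder property ID and `uid` is whatever opaque value your backend assigns:

```javascript
// Called on logged-in (intranet) pages; uid comes from your backend,
// e.g. rendered into the page at login. Must NOT identify the person.
function initTrackerWithUserId(uid) {
  ga('create', 'UA-XXXXX-Y', 'auto', { userId: uid });
  ga('send', 'pageview');
}
```

On public pages you would create the tracker without the `userId` field, which is what keeps the special logged-in view empty of anonymous traffic.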
Of course you can combine both approaches to get even more insights.
I am making a personal (resume type) website. I was hoping to retrieve all of the data from my facebook page and display it on the about page using the Graph API.
The issue is, it seems like a user always has to give credentials to get an authorization token. I don't want to require people to log into facebook just to view my page. I also don't want to login everyone using my credentials (which would mean they would be stored in JavaScript). Does anyone see a way around this?
I looked into the creating a "page" and using the "page access token" instead. Then I could get the page access token using my userid stored in JavaScript (in my opinion much better than username and password). Is there a problem doing it this way?
I would prefer to retrieve this data directly from my account and not have to make a separate "page." Any and all information is appreciated. Thanks for your time.
This appears to be banned in Facebook's Terms of Service:
Safety
We do our best to keep Facebook safe, but we cannot guarantee it. We need your help to keep Facebook safe, which includes the following commitments by you:
You will not post unauthorized commercial communications (such as spam) on Facebook.
You will not collect users' content or information, or otherwise access Facebook, using automated means (such as harvesting bots, robots, spiders, or scrapers) without our prior permission.
You will not engage in unlawful multi-level marketing, such as a pyramid scheme, on Facebook.
You will not upload viruses or other malicious code.
You will not solicit login information or access an account belonging to someone else.
You will not bully, intimidate, or harass any user.
You will not post content that: is hate speech, threatening, or pornographic; incites violence; or contains nudity or graphic or gratuitous violence.
You will not develop or operate a third-party application containing alcohol-related, dating or other mature content (including advertisements) without appropriate age-based restrictions.
You will follow our Promotions Guidelines and all applicable laws if you publicize or offer any contest, giveaway, or sweepstakes (“promotion”) on Facebook.
You will not use Facebook to do anything unlawful, misleading, malicious, or discriminatory.
You will not do anything that could disable, overburden, or impair the proper working or appearance of Facebook, such as a denial of service attack or interference with page rendering or other Facebook functionality.
You will not facilitate or encourage any violations of this Statement or our policies.
Sorry to be a downer, but I don't think that page scraping is the best way to go.
I want to create a web widget that will display information from my site.
The widget will be included in the client's website HTML using JavaScript, and should only be usable for my clients -- web sites that were registered at my site.
The information in the widget should be specific to the user who is currently visiting the client's site.
So, I need to authenticate both the client (website owner) and the resource owner (website visitor). This seems to map nicely to OAuth 2.0, but I couldn't find a complete example or explanation for such an implementation.
Any resources or pointers to such information will be appreciated.
Update: I've stumbled upon this article, which provides an outline for an approach that uses OAuth. However, it is not detailed enough for me to really understand how to use this with OAuth 2.
There are many large organizations that have done this, and I'm sad to see no other answers for this question, since it's such an important web pattern.
I'm going to presume that you are not rolling your own OAuth 2.0 provider from scratch (if you are: well done); otherwise you should be using something kickass like Doorkeeper to do this for you.
Now, in OAuth 2.0 you have the following entities:
Users registered on your website
Applications registered on your website (who subscribe to your oauth2)
User Permissions which is a list of Applications that a user has 'allowed'
Developer (who is consuming your auth API / widgets and building an Application)
The first thing to note is you must have a domain name associated with each Application. So if a developer registers for a API token / secret on your website, the Application he creates is mapped to a unique domain.
Now, I presume that the flow for an application to authenticate users via your website is already clear. That being said, you don't need to do much for this to work.
When an Application sends the user to your website (in order to sign in), you place a session cookie on the user's computer. Let's call this "Cookie-X".
Now the user is authenticated by your website and goes back to the Application. There we want to show a custom widget with information pertaining to that user.
The developer will need to copy-paste some code into his app.
The flow is like this:
The code will contain a url to your website with his Application ID (not secret) which he got when registering his application on your website.
When that code runs, it will ping your website with his AppID. You need to check that AppID against your database, and additionally check that the referrer URL is from the same domain as the one registered on your website for that AppID. Edit: alternatively or additionally, the code can check document.domain and include it in the ping to your website, allowing you to verify that the request has come from the domain registered with the given AppID.
If that is correct, you reply back with some JS code.
Your JS code looks for the session cookie your website had set when the user had signed in. If that cookie is found, it pings back to your website with the session and your website responds with the custom view content.
Edit: as rightfully mentioned in a comment, the cookie should be HttpOnly to safeguard against common XSS attacks.
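Steps 1 and 2 of the flow can be sketched as the snippet the developer pastes. Everything here is illustrative (the provider URL, parameter names, and helpers are made up); the point is that only the public AppID plus the page's domain travel to your server, which then verifies the pair before replying with the widget JS:

```javascript
// Pure helper building the ping URL, so the shape can be inspected/tested
function buildPingUrl(appId, pageDomain) {
  return 'https://your-widget-provider.example/widget.js' +
    '?appId=' + encodeURIComponent(appId) +
    '&domain=' + encodeURIComponent(pageDomain);
}

// The pasted snippet: injects a script tag that pings your server with
// the AppID and the embedding page's domain (server re-checks both).
function loadWidget(appId) {
  var s = document.createElement('script');
  s.async = true;
  s.src = buildPingUrl(appId, document.domain);
  document.head.appendChild(s);
}
```

The returned JS then runs in the embedding page and performs step 4: looking for your session cookie and fetching the user-specific view content.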
Additional Notes
The reasons this is a secure approach:
The AppID and domain name are a good enough combination to verify that other people are not fetching this information. Even though the AppID is visible in the application's HTML source, the domain name would have to be spoofed by anyone attempting to use someone else's AppID.
Presuming someone takes an AppID that is not his and writes code to spoof the referrer's domain name when requesting your widget, he still won't be able to see any information. Since you are showing user-specific information, the widget will only render if your website can find the session cookie it placed on the user's browser, which can't really be spoofed. There are ways around this, like session hijacking, etc., but I think that's beyond the scope of this question.
Other Methods
Just by looking at Facebook's Social Plugins, you can tell that there are other options.
For example, one might be to use an iframe. If you ask the developer to add an iframe to his application, you can even skip a few of the steps mentioned above. But you will have to add JS along with it (outside the iframe) to grab the correct domain, etc. And of course, from an accessibility and interface standpoint, I'm not very fond of iframes.