Pass user request (headers, etc.) to external URI - javascript

Preface:
We have written some software that allows our users to embed the content of many of their webpage building tools directly into their websites. For instance, our clients use a medley of Landing Page or "Funnel" building tools. Many of these tools are pretty antiquated and don't have a native way to bring that content elsewhere, and our clients don't like to send users to our-client.some-landing-page-builder.com/landing-page.
We have developed some software that fetches the content at our-client.some-landing-page-builder.com/landing-page, does some work on the action, src, and similar attributes (some of these tools use relative URLs and wouldn't change them to absolute for us), and then embeds the result directly into the client's website, our-client.com/landing-page/. It uses an optional cache or microcache to keep the content fresh, and it has been working excellently for years.
The Problem:
When a user visits our-client.com/landing-page/ and the software requests the content from the landing page builder, the request that is sent belongs to our-client.com and not to the user who visited the site. For most of our clients, this isn't an issue.
The Question:
We have one client in particular who is asking: is there a way to send that user's request headers from our-client.com along to some-landing-page-builder.com?
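A minimal sketch of one way to do that, assuming the fetching software runs on Node/Express (the question doesn't say what it is written in; the header whitelist and route names are illustrative): copy a set of the visitor's headers onto the server-side fetch, and add X-Forwarded-For so the builder can see the visitor's IP.

```javascript
// Minimal sketch, assuming a Node 18+ / Express proxy (not the actual software
// from the question). Forward a whitelist of the visitor's headers when
// fetching the landing-page builder's content.
const express = require('express');
const app = express();

const FORWARDED_HEADERS = ['user-agent', 'accept-language', 'cookie', 'referer'];

app.get('/landing-page/', async (req, res) => {
  const upstreamHeaders = {};
  for (const name of FORWARDED_HEADERS) {
    if (req.headers[name]) upstreamHeaders[name] = req.headers[name];
  }
  // X-Forwarded-For lets the builder see the visitor's IP even though the
  // TCP connection originates from our-client.com's server.
  upstreamHeaders['x-forwarded-for'] = req.ip;

  const upstream = await fetch(
    'https://our-client.some-landing-page-builder.com/landing-page',
    { headers: upstreamHeaders }
  );
  const html = await upstream.text();
  // ...rewrite relative action/src attributes and apply the microcache here...
  res.send(html);
});

app.listen(3000);
```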

How to tell if a website is static or dynamic?

My prof said that dynamic pages get created by the computer, while static pages are created by the user.
Thank you so much!
The difference between static pages and dynamic pages:
A static page typically has a generic URL suffix such as .htm, .html, or .shtml, and its URL does not contain "?".
Websites built with dynamic page techniques can offer more functionality, such as user registration, login, online surveys, user management, and order management.
Application and web languages:
Static web pages: HTML, JavaScript, CSS, etc.
Dynamic web pages: PHP, CGI, AJAX, ASP, ASP.NET, etc.
Dynamic web pages are used where information changes frequently, such as stock prices, weather information, news and sports news.
Static web pages have fixed content, while dynamic web pages can have changing content.
Static web pages must be modified manually, while changes to a dynamic page can be loaded through an application whose resources are stored in a database.
Static web pages only use a web server, while dynamic web pages use a web server, an application server, and a database.
Regarding: "How to tell if a website is static or dynamic?"
Static websites are simple web pages (typically written in languages like JavaScript, HTML, CSS, etc.) and stored in a web server. In the case of static web pages, as soon as a server receives a request for a page, it immediately sends a response to the client with no additional processing. Users will always view the same content regardless of their location, device type, and web browser.
In static websites, the displayed content remains the same unless someone manually edits the HTML source code of every page that's part of the website. These pages do not change based on any user input; hence the name, static web pages. You don't necessarily need any prior experience with database design or web programming to create and maintain a static website. The code of a static page stays the same until someone manually updates it.
On the other hand, dynamic web pages are more complex than static ones because they can display different content for each user while retaining the same layout and design. A dynamic website generates web pages in real time. The flexible nature of the content allows it to be customized based on the user's requests or the browser they use. Such pages are usually built with technologies like CGI, AJAX, ASP, or ASP.NET, and they usually take more time to load than static web pages. They are frequently used to show information that changes often, e.g., weather updates, stock prices, etc.
Server-side code used to construct a dynamic web page can generate real-time HTML pages for each request from an individual user. While static websites are mostly informational, dynamic websites contain interactive, continually changing elements. In order to provide an interactive website experience for visitors, web developers usually combine both client-side and server-side programming techniques.
Dynamic web pages usually contain application programs for various services and require server-side resources like databases. A dynamic website accesses content from a CMS (Content Management System), which means that the website reflects any changes made in the database content. These sites use client-side scripting, server-side scripting, or both for generating content. Separating the site’s design from its content makes it easier for web designers to create pages without having to worry about formatting issues. After uploading content into the database, websites retrieve their content from there when responding to user requests.
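To make the distinction concrete, here is a minimal sketch using Node's built-in modules (the file name and routes are purely illustrative, not from any site mentioned above): the static route returns the same stored file to every visitor, while the dynamic route generates fresh HTML for every request.

```javascript
// Minimal sketch, not production code: contrast a static file response
// with a per-request generated (dynamic) response.
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  if (req.url === '/about.html') {
    // Static: the same stored file bytes are returned to every visitor.
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end(fs.readFileSync('./about.html'));
  } else {
    // Dynamic: the HTML is generated at request time, so it can differ
    // per user, per request, or per database state.
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end(`<p>Generated at ${new Date().toISOString()}</p>`);
  }
}).listen(8080);
```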
Now, regarding "Would www.tagpro.gg (the homepage) be static or dynamic?"
I visited the homepage, and it is indeed a dynamic web page, as you mentioned.
My prof said that dynamic pages get created by the computer, while static pages are created by the user.
Well, static pages can also be generated by a computer, since there are plenty of static site generators out there. Take, for example, https://astro.build or https://gohugo.io
Would www.tagpro.gg be static or dynamic?
You are right, it is dynamic, since there is a login/sign-up feature on the page. That's not something you can achieve with a 100% static site.
It's very simple... only two major factors matter:
A static website has no logic of its own, meaning it cannot add or change anything automatically; the developer has to edit the code by hand for every change, whereas a dynamic website can generate changes on its own.
A static website cannot store information, meaning it is frontend only, with no backend such as PHP or Node.js. In simpler words, if a user signs in to your website, you have no way to store their username and password.

Embed legacy application DOM inside angular application

I intend to embed a legacy website inside my ng4 application as a temporary transition solution, until the same functionality is implemented in ng4 and the old site is decommissioned.
Until then, we'd like at least to onboard the legacy site's clients onto the new ng4 app and let them access their legacy service through the ng4 app.
The legacy website is based on 20+ years old technology where page is making POST requests to the server, and server is responding with already rendered HTML page, for the browser just to display it. There are no AJAX calls, and there are no cookies. Authentication is done through a login page.
Once authenticated, security is based on encrypted tokens that are included in the URL of every POST request.
My idea was to build a shell ng4 application that makes calls to the server with AJAX, receives the HTML, and injects it into ng4's DOM. This gives me control over what is shown and what isn't.
Optionally, I'd like to select some HTML elements in the received DOM (menus that I'd like to hide from user) and remove them, before I inject the received DOM.
In a different approach, my ng4 application wouldn't make an AJAX call; instead, it would have an iframe that loads the legacy site, and in the iframe's onLoad event I would manipulate the DOM.
Could you please tell me what you think about these approaches, whether it would be better to do this differently, and whether you know of any open source projects/libraries that do something similar.
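For reference, a minimal plain-JavaScript sketch of the fetch-and-inject idea (the Angular shell would wrap this; the #legacy-host and .legacy-menu selectors are made up for illustration, and it assumes the legacy pages are served same-origin or through a CORS-friendly proxy, since cross-origin responses and cross-origin iframes cannot be read from script):

```javascript
// Minimal sketch of the AJAX-and-inject approach; selectors are hypothetical.
async function loadLegacyPage(url, formData) {
  const resp = await fetch(url, { method: 'POST', body: formData });
  const html = await resp.text();

  // Parse the returned page so unwanted elements can be removed
  // before anything is injected into the shell's DOM.
  const doc = new DOMParser().parseFromString(html, 'text/html');
  doc.querySelectorAll('.legacy-menu').forEach(el => el.remove());

  document.querySelector('#legacy-host').innerHTML = doc.body.innerHTML;
}
```

The iframe variant would do the same element removal inside the iframe's load handler instead, which likewise only works if the iframe content is same-origin.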

Have facebook scrape a different URL than what was shared

I have a Single Page Application built in Ember.js, hosted on AWS S3, and I'm trying to come up with a solution so that when someone shares a URL from our site to Facebook, Facebook can properly scrape the content of that page.
Obviously this won't work right now, because Facebook does not index JavaScript the way the Google search engine does. One solution I've seen is to use an Apache .htaccess rule to redirect requests from Facebook to a server-side file that produces a bare-bones HTML page with the necessary Open Graph tags, as in this post:
https://rck.ms/angular-handlebars-open-graph-facebook-share/
However, since we're on S3 I can't use an Apache .htaccess, and from what I've been able to gather from the sparse docs on how S3 redirect rules work and what they can do, I'm not sure this approach is possible there.
So my question is: do Facebook, Open Graph, or even plain meta tags offer a way to let a user share a URL, have Facebook follow a link to a server-generated file for its scrape data, and then, if someone clicks the shared link, send that person to the real single-page-application page instead of the server file Facebook used for scraping?
Facebook supports “pointers” to request the meta data from a different URL – but that likely won’t help you here, because the reference to the URL that serves the meta data would again have to be part of the HTML code of your original URL that you want to share.
You might do better the other way around: Let your users share the URL to your server-generated document that contains the correct meta data – and redirect human visitors that follow that link to the real target URL within your application. You can either do that via JS (location.href='…'), or server-side (but in that case you need to implement an exception from that redirect for the FB scraper; it can be recognized by its User Agent, see https://developers.facebook.com/docs/plugins/faqs#scraperinfo)
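A minimal sketch of the server-side variant, assuming a small Node/Express service sits in front of the shared URLs (the /share/:id route, the example.com domain, and the renderOpenGraphPage helper are made-up names for illustration, not part of Facebook's API):

```javascript
// Minimal sketch: serve Open Graph HTML to the Facebook scraper,
// redirect human visitors into the real single page application.
const express = require('express');
const app = express();

// Hypothetical helper: a bare-bones HTML page carrying the Open Graph tags.
function renderOpenGraphPage(id) {
  return `<!doctype html><html><head>
    <meta property="og:title" content="Item ${id}">
    <meta property="og:url" content="https://example.com/share/${id}">
  </head><body></body></html>`;
}

app.get('/share/:id', (req, res) => {
  const ua = req.headers['user-agent'] || '';
  // Facebook's scraper identifies itself in its User-Agent string
  // (facebookexternalhit / Facebot, per the scraper docs linked above).
  if (ua.includes('facebookexternalhit') || ua.includes('Facebot')) {
    res.send(renderOpenGraphPage(req.params.id));
  } else {
    // Human visitors get sent into the real SPA route.
    res.redirect(`https://example.com/#/items/${req.params.id}`);
  }
});

app.listen(3000);
```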

Security in embedded iframe/javascript widget

I'm building a website that is functionally similar to Google Analytics. I'm not doing analytics, but I am trying to provide either a single line of javascript or a single line iframe that will add functionality to other websites.
Specifically, the embedded content will be a button that will popup a new window and allow the user to perform some actions. Eventually the user will finish and the window will close, at which point the button will update to a new element reflecting that the user completed the flow.
The popup window will load content from my site, but my question pertains to the embedded line of JavaScript (or the iframe). What's the best-practice way of doing this? Google Analytics and Optimizely use JavaScript to modify the host page. Obviously an iframe would work too.
The security concern I have is that someone will copy the embed code from one site and put it on another. Each page/site combination that implements my script/iframe is going to have a unique ID that the site's developers will generate from an authenticated account on my site. I then supply them with the appropriate embed code.
My first thought was to just use an iframe that loads a page off my site with url parameters specific to the page/site combo. If I go that route, is there a way to determine that the page is only loaded from an iframe embedded on a particular domain or url prefix? Could something similar be accomplished with javascript?
I read this post which was very helpful, but my use case is a bit different since I'm actually going to pop up content for users to interact with. The concern is that an enemy of the site hosting my embed will deceptively lure their own users to use the widget. These users will believe they are interacting with my site on behalf of the enemy site but actually be interacting on behalf of the friendly site.
If you want to keep it as a simple, client-side only widget, the simple answer is you can't do it exactly like you describe.
The two solutions that come to mind for this are as follows, the first being a compromise but simple and the second being a bit more involved (for both you and users of your widget).
Referer Check
You could validate the referer HTTP header to check that the domain matches the one expected for the particular Site ID. Keep in mind, though, that not all browsers will send this header (and most will not if the referring page is HTTPS), and that some browser privacy plugins can be configured to withhold it; in those cases your widget would either not work or would need an extra, clunky step in the user experience.
Website www.foo.com embeds your widget using, say, an embedded script <script src="//example.com/widget.js?siteId=1234&pageId=456"></script>
Your widget uses server side code to generate the .js file dynamically (e.g. the request for the .js file could follow a rewrite rule on your server to map to a PHP / ASPX).
The server side code checks the referer HTTP header to see if it matches the expected value in your database.
On match the widget runs as normal.
On mismatch, or if the referer is blank or missing, the widget will still run, but with an extra step that asks the user to confirm they accessed the widget from www.foo.com.
In order for the confirmation to be safe from clickjacking, you must open the confirmation step in a popup window (a rough sketch of this referer check follows below).
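A minimal sketch of that referer check, assuming the dynamic widget.js is generated by a Node/Express endpoint (the original steps mention PHP/ASPX; the in-memory lookup table stands in for your database and window.__WIDGET_CONFIG is a made-up name):

```javascript
// Minimal sketch of the Referer Check approach described above.
const express = require('express');
const app = express();

// Stand-in for the database: expected host per Site ID.
const sitesById = { '1234': 'www.foo.com' };

app.get('/widget.js', (req, res) => {
  const { siteId, pageId } = req.query;
  const referer = req.headers.referer || '';

  let refererHost = '';
  try { refererHost = new URL(referer).host; } catch (e) { /* blank or malformed */ }

  const refererOk = refererHost === sitesById[siteId];

  // Generate the widget code; on mismatch (or missing referer) the widget
  // shows the extra "did you really come from www.foo.com?" confirmation
  // step in a popup before doing anything else.
  res.type('application/javascript');
  res.send(`window.__WIDGET_CONFIG = ${JSON.stringify({ siteId, pageId, refererOk })};
// ...widget bootstrap code goes here...`);
});

app.listen(3000);
```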
Server Check
This could be a bit over-engineered for your purposes and runs the risk of becoming too complicated for clients who wish to embed your widget - you decide.
Website www.foo.com wants to embed your widget for the current page request it is receiving from a user.
The www.foo.com server makes an API request (passing a secret key) to an API you host, requesting a one time key for Page ID 456.
Your API validates the secret key, generates a secure one time key and passes back a value whilst recording the request in the database.
www.foo.com embeds the script as follows <script src="//example.com/widget.js?siteId=1234&oneTimeKey=231231232132197"></script>
Your widget uses server side code to generate the js file dynamically (e.g. the .js could follow a rewrite rule on your server to map to a PHP / ASPX).
The server side code checks the oneTimeKey and siteId combination to check it is valid, and if so generates the widget code and deletes the database record.
If the user reloads the page, the above steps are repeated and a new one-time key is generated. This prevents evil.com from scraping the embed code and its parameters off the page (a rough sketch of this flow follows below).
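A minimal sketch of that one-time-key flow, again assuming Node/Express (the in-memory structures stand in for the database, and the key format and secret are illustrative):

```javascript
// Minimal sketch of the Server Check approach described above.
const crypto = require('crypto');
const express = require('express');
const app = express();

const siteSecrets = { '1234': 'shared-secret-for-foo' }; // stand-in for the DB
const oneTimeKeys = new Map();                           // key -> siteId

// Called server-to-server by www.foo.com for each page view it serves.
app.post('/api/one-time-key', (req, res) => {
  const { siteId, secret } = req.query;
  if (siteSecrets[siteId] !== secret) return res.sendStatus(403);

  const key = crypto.randomBytes(16).toString('hex');
  oneTimeKeys.set(key, siteId);
  res.json({ oneTimeKey: key });
});

// Requested by the visitor's browser via the embedded <script> tag.
app.get('/widget.js', (req, res) => {
  const { siteId, oneTimeKey } = req.query;
  const valid = oneTimeKeys.get(oneTimeKey) === siteId;
  oneTimeKeys.delete(oneTimeKey); // single use, whether valid or not

  res.type('application/javascript');
  res.send(valid
    ? '/* ...generated widget code... */'
    : '/* invalid or expired one-time key */');
});

app.listen(3000);
```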
The response here is very thorough and provides lots of great information and ideas. I solved this problem by validating X-Frame-Options headers on the server side, though browser support for those is incomplete and they are possibly spoofable.

Can the Google +1 Javascript API be used in a way that requests are sent via visitor's PC/IP, and not my web server?

Google +1 API reference: http://code.google.com/apis/+1button/
What I want to do is use the Google+1 API on my website that contains pages with links to other websites. When a visitor clicks the +1 button next to a link they like, I want the request to come from the user's computer, not from my web server.
My concern is that Google may think the +1s are spammy or whatnot if they all come from my web server, so I want them to appear natural, coming from IPs all over the world.
Hoping that someone who REALLY understands HTTP requests and Javascript can help answer this.
Thanks in advance!
EDIT:
It turns out the JSON request that's sent when the +1 button is clicked contains a field called "container" that holds the source page URL, not the URL that's actually being +1'd. Also, when the .js files are fetched (GET) by a visitor's machine, the Referer is, of course, set to the source page URL.
I'm looking for a way to prevent the Referer and the "container" field from containing the source page URL.
A Google +1 request from a web page already comes from the user's computer. The user views your web page in their browser, and when a +1 button is clicked, the user's own browser makes the +1 request to Google's servers. Your web site provides the code in the page, but the user's own computer makes the actual +1 request, so I don't think you need to worry about this: your web server is not making the Google +1 requests.
