I intend to embed a legacy website inside my ng4 application as a temporary transition solution, until the same functionality is implemented in ng4 and the old site is decommissioned.
Until then, we'd like at least to onboard the legacy site's clients onto the new ng4 app and give them access to their legacy services through it.
The legacy website is built on 20-year-old technology: the page makes POST requests to the server, and the server responds with a fully rendered HTML page for the browser to display. There are no AJAX calls and no cookies. Authentication is done through a login page.
Once authenticated, the security is done based on the encrypted tokens which are in URL of every POST request.
My idea was to build a shell ng4 application that calls the server with AJAX, receives the HTML, and injects it into the ng4 DOM. This gives me control over what is shown and what is not.
Optionally, I'd like to select some elements in the received DOM (menus I'd like to hide from the user) and remove them before injecting it.
In a different approach, my ng4 application wouldn't make an AJAX call; instead it would have an iframe that loads the legacy site, and in the iframe's onload event I would manipulate its DOM.
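The first approach could be sketched roughly like this. All names here are assumptions, and `parseHtml` is passed in so the same remove-then-inject logic is testable outside a browser (in the browser it would be `DOMParser`):

```javascript
// Sketch: take the legacy page's HTML, strip unwanted elements, then hand the
// remaining markup to a host element in the shell app.
function injectCleanedHtml(html, hideSelectors, host, parseHtml) {
  const doc = parseHtml(html);
  for (const selector of hideSelectors) {
    // strip e.g. legacy menus before anything is shown to the user
    doc.querySelectorAll(selector).forEach((el) => el.remove());
  }
  host.innerHTML = doc.body.innerHTML;
}

// Browser usage (sketch):
//   const html = await fetch(legacyUrl, { method: 'POST' }).then((r) => r.text());
//   injectCleanedHtml(html, ['.legacy-menu'], document.querySelector('#legacy-host'),
//                     (h) => new DOMParser().parseFromString(h, 'text/html'));
```

Note that the AJAX variant assumes the legacy server permits cross-origin requests (or sits behind the same origin via a proxy); the iframe variant would run the same logic inside the iframe's onload handler, which requires the iframe to be same-origin.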
Could you please tell me what you think about these approaches, whether it would be better to do it differently, and whether you know of any open source projects/libs that do something similar.
I was going through the page source of famous websites like Twitter, Instagram, Snapchat, etc. One thing I found common to all of them is that there are no input tags in the page source of the login/signup page. In fact, there is no form tag at all. I wonder if it's a security measure or something else. Can anyone explain this?
It is a side effect of overreliance on client-side programming without server-side rendering as a fallback.
It has nothing to do with security whatsoever.
In the early days of the web, everything was present in the HTML code; there was little dynamic behavior on those pages.
Through history the pages became more and more dynamic, with more JavaScript. Some document elements were created through JavaScript, in response to some user events (like adding rows to a table). This became a pattern that extended also to the initial build up of the page: some popular libraries are completely built around that idea. In the extreme, the HTML only contains one script tag (besides the html and body elements), and the JavaScript does the whole job of generating the page, based on some other configuration (in a database, configuration files, JSON, ...). It takes away the HTML design, and moves this responsibility to JavaScript.
This is not a security concern. Just one of the ways some frameworks work.
The browser's page source only displays what was rendered on the server; we call this Server-Side Rendering.
These days, with Client-Side Rendering, most of the webpage is generated on the client side with JavaScript frameworks like React.js. The server just generates a basic skeleton of HTML.
So, to answer your question of why there are no input tags in the page source of the login/signup page: these input tags are generated on the client side using JavaScript. This does not mitigate any security risks, as the input tags are still accessible via the DOM inspector.
It's not bad; it's just not the way those sites work. They likely use a JavaScript front-end app and POST data to a back-end API of some sort to do the login.
Input tags hark back to an older variation of the web where the server prepared a page and sent it to the browser, the user filled in the form fields and sent the data back as an encoded form, and the server prepared a new block of HTML and sent it to the browser. You'd see the page refresh with different content, and huge amounts of HTML were flying around all the time. Browsers employed some tricks to make it look like a more seamless experience, but the underlying method of working was still that the whole page was replaced.
The modern web tends towards single-page applications: a page loads with a JavaScript application, and the script manipulates the browser document to draw the UI. There is never again a situation where the server wholesale ships entire pages to the browser for display. It's all "script sends some data, probably JSON, to the server, gets a response, and programmatically updates the local document, making the browser change what it displays".
To this end the browser has somewhat become an operating system or development environment, with programming features and access to local resources like webcams and files; it forms a front-end UI while the back-end server holds and processes the data that makes the application valuable (worth purchasing or using). It's not about security, but about making a modern, dynamic, and usable UI for a web-based application. There's no reason why they couldn't use an input tag; the JavaScript app running in the browser could create an input box, the user could type into it, and the app could pull the value out, send it to the server (not in a form), and get the response. It's just that they don't need to work that way; there's a lot more freedom to gather user input in various ways, send it to the server in various ways, and act on the response. We are no longer tied to that older "post a form to the server" way of having dynamic content, and we haven't been for a long time. It's relatively recently, though, that really good frameworks and libraries for creating these single-page apps have come along, so increasingly we see sites using them for the benefits they provide, and they may bring with them ways of gathering user input without an input tag - ways that avoid some limitation (probably styling) that using the tag incurs.
The modern web is mainly centred on end-user experience: a richer and more fluid UI, less data transferred for faster response times, and so on. All this chat between front end and back end should happen over HTTPS in the modern web (the CPU cost of encryption being relatively low in these days of high-powered servers and clients), so it's at least as secure as it ever was.
TL;DR: input tags are used less often because they're needed less often and may bring some problems and blockers to modern ways of developing. They aren't inherently insecure.
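The "script sends some data, probably JSON" login flow described above can be sketched like this; the endpoint name and field names are assumptions for illustration:

```javascript
// Build the request a script-driven login page might send. No <form> is
// involved: the values can come from any element the script created.
function buildLoginRequest(username, password) {
  return {
    url: '/api/login', // hypothetical endpoint
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ username, password }),
    },
  };
}
```

In the browser, the script would then do `fetch(url, options)` and update the document based on the JSON response, never reloading the page.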
I'm currently designing a MEAN.js web application, and whenever I refresh the page on a route (or reload via window.location.reload), it does not re-render the page but only returns the JSON found at that route.
For example, when I'm at localhost:8080/people:
If I click through from the main page, I get the rendered page as expected, but if I hit refresh or reload the page for whatever reason, I get the raw JSON response instead.
Does anyone have any idea why this is happening and how to fix it?
Presumably you are using what Angular calls html5Mode routing.
This uses pushState and friends: browser features designed to let you build a web app that has what appear to be separate pages with unique, real URLs, but where changing the page actually modifies the DOM of the current page (to, say, State B) instead of loading a new one from scratch.
The design intention for pushState and friends is that if you request the unique, real URL that maps onto State B then the server will provide the browser with the HTML for State B directly.
This means that:
- if you arrive on the site without going to the homepage first, you load the content you are trying to see directly (which is faster than loading the homepage and then modifying it with JavaScript);
- if you arrive on the site without JavaScript working (which could happen for many reasons), everything still works. See also Progressive Enhancement and Unobtrusive JavaScript.
What you've done wrong is that your URLs map onto your JSON-based API instead of onto server-side processes that generate the pages.
You need to write the server side processes. You could consider using the Accept header to allow them to share URLs with the API (so the server returns either JSON or HTML depending on what the client says it accepts).
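The Accept-header idea can be sketched as a minimal negotiation function. This is only an illustration, not a full Accept parser (no quality values, no wildcards):

```javascript
// Decide whether a request for e.g. /people wants the rendered page or the
// API data, based on the Accept header.
function preferredType(acceptHeader) {
  const accept = (acceptHeader || '').toLowerCase();
  if (accept.includes('text/html')) return 'html'; // browsers list text/html
  if (accept.includes('application/json')) return 'json'; // API clients
  return 'html'; // safe default: render the page
}
```

If you are serving this from Express, `res.format()` offers similar content-negotiation dispatch out of the box.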
I'd like to create a site with Angular (I'm new to it), but I also want different "views" to be cacheable by search engines and have their own URL routes. How would I achieve this with Angular, or is it best not to use it?
Enable pushState in Angular with $locationProvider.html5Mode(true); so that you have real URLs and make sure that, when the URL is requested by the client, you deliver the complete page for that URL from the server (and not a set of empty templates that you populate with JS).
When a link is followed, you'll go through an Angular view and update the existing DOM (while changing the URL with pushState) but the initial load should be a complete page.
This does mean duplicating effort (you need client and server side versions of the code for building each page). Isomorphic JS is popular for dealing with that issue.
If you want to expose Angular views to search engines and other bots, I suggest using an open source framework that we developed at Say Media. It uses node.js to render the pages on the server when it detects a bot vs a real user. You can find it here:
https://github.com/saymedia/angularjs-server
I would suggest not using different routes, however, as most search engines will penalize you for having duplicate content on multiple URLs. And while you might think they would just hit the bot version of your site, they are getting more sophisticated about crawling single-page-app-like sites. I would be cautious about duplicate routes for the same content.
Good Luck!
I'm building a website that is functionally similar to Google Analytics. I'm not doing analytics, but I am trying to provide either a single line of javascript or a single line iframe that will add functionality to other websites.
Specifically, the embedded content will be a button that will popup a new window and allow the user to perform some actions. Eventually the user will finish and the window will close, at which point the button will update to a new element reflecting that the user completed the flow.
The popup window will load content from my site, but my question pertains to the embedded line of javascript (or the iframe). What's the best practice way of doing this? Google analytics and optimizely use javascript to modify the host page. Obviously an iFrame would work too.
The security concern I have is that someone will copy the embed code from one site and put it on another. Each page/site combination that implements my script/iframe is going to have a unique ID that the site's developers will generate from an authenticated account on my site. I then supply them with the appropriate embed code.
My first thought was to just use an iframe that loads a page off my site with url parameters specific to the page/site combo. If I go that route, is there a way to determine that the page is only loaded from an iframe embedded on a particular domain or url prefix? Could something similar be accomplished with javascript?
I read this post which was very helpful, but my use case is a bit different since I'm actually going to pop up content for users to interact with. The concern is that an enemy of the site hosting my embed will deceptively lure their own users to use the widget. These users will believe they are interacting with my site on behalf of the enemy site but actually be interacting on behalf of the friendly site.
If you want to keep it as a simple, client-side only widget, the simple answer is you can't do it exactly like you describe.
The two solutions that come to mind for this are as follows, the first being a compromise but simple and the second being a bit more involved (for both you and users of your widget).
Referer Check
You could validate the Referer HTTP header to check that the domain matches the one expected for the particular site ID. Keep in mind, though, that not all browsers send it (and most will not if the referring page is HTTPS), and some browser privacy plugins can be configured to withhold it; in those cases your widget would either not work or need an extra, clunky step in the user experience.
Website www.foo.com embeds your widget using say an embedded script <script src="//example.com/widget.js?siteId=1234&pageId=456"></script>
Your widget uses server side code to generate the .js file dynamically (e.g. the request for the .js file could follow a rewrite rule on your server to map to a PHP / ASPX).
The server side code checks the referer HTTP header to see if it matches the expected value in your database.
On match the widget runs as normal.
On mismatch, or if the referer is blank or missing, the widget will still run, but there will be an extra step asking the user to confirm that they accessed the widget from www.foo.com.
In order for the confirmation to be safe from clickjacking, you must open the confirmation step in a popup window.
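Steps 3-5 of the referer check could look roughly like this; the site-ID table and the return values are assumptions for illustration:

```javascript
// Map of site IDs to the domain registered for them (illustrative data).
const expectedDomains = new Map([['1234', 'www.foo.com']]);

// Returns 'run' (serve the widget), 'confirm' (ask the user which site they
// came from), or 'reject' (unknown site ID).
function checkReferer(siteId, refererHeader) {
  const expected = expectedDomains.get(siteId);
  if (!expected) return 'reject';
  if (!refererHeader) return 'confirm'; // header stripped or missing
  try {
    return new URL(refererHeader).hostname === expected ? 'run' : 'confirm';
  } catch {
    return 'confirm'; // unparsable referer
  }
}
```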
Server Check
Could be a bit over engineered for your purposes and runs the risk of becoming too complicated for clients who wish to embed your widget - you decide.
Website www.foo.com wants to embed your widget for the current page request it is receiving from a user.
The www.foo.com server makes an API request (passing a secret key) to an API you host, requesting a one time key for Page ID 456.
Your API validates the secret key, generates a secure one time key and passes back a value whilst recording the request in the database.
www.foo.com embeds the script as follows <script src="//example.com/widget.js?siteId=1234&oneTimeKey=231231232132197"></script>
Your widget uses server side code to generate the js file dynamically (e.g. the .js could follow a rewrite rule on your server to map to a PHP / ASPX).
The server side code checks the oneTimeKey and siteId combination to check it is valid, and if so generates the widget code and deletes the database record.
If the user reloads the page, the above steps are repeated and a new one-time key is generated. This guards against evil.com scraping the embed code and parameters off the page.
The response here is very thorough and provides lots of great information and ideas. I solved this problem by validating X-Frame-Options headers on the server side, though browser support for those is incomplete and they are possibly spoofable.
This is more of a discussion, rather than a real question...
I'm building a site and I am struggling with how to force a user to log in to the site to access certain areas and take certain actions. I'm using Spring Security and have it integrated fairly well; however, I also have some AJAX calls that need to be secured, and when Spring Security intercepts those calls, it sends the HTML for the login page back to my AJAX callback, which doesn't do me any good.
In past applications, when I was using Struts, I was able to override the html:link tag, check the login state in the tag, and rewrite the href to point to my login page (instead of my AJAX script). However, I'm using Spring MVC, and I don't have that luxury (if you can call it one).
I'm playing with some ideas such as:
Iterating through all my links on the page and rewriting the href of the links that have a certain class if the user is not logged in
Create a custom tag from scratch
Ditch the fancy AJAX stuff
It looks like other sites, such as DZone and Digg, do something similar, so I know it's possible.
I'm looking for any ideas at this point, just something fresh to try, other than the three options above. Of the three, I think I'm leaning more towards #1.
If I understand your question correctly, you are asking how you can get around your secured AJAX requests returning redirects to your login page (or the login page itself) to your AJAX handler?
I would organise your application so that calls to secured AJAX endpoints are only made from pages that are themselves secured. That way all requests that you initiate (from your own pages) are guaranteed to work, and any requests that come from somewhere else (potentially malicious ones), whether AJAX or normal page requests, will be redirected to the login page.
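On the client side, one common complementary pattern (an assumption here, not something the answer above prescribes) is to detect that a secured AJAX call came back as the login page rather than data, and then navigate the whole browser to the login page instead of handing HTML to the AJAX callback:

```javascript
// Classify an AJAX response: 'ok' means usable data, 'login' means the
// session expired and the browser should navigate to the login page.
function classifyAjaxResponse({ status, contentType }) {
  if (status === 401 || status === 403) return 'login';
  if ((contentType || '').includes('text/html')) return 'login'; // got a page, not data
  return 'ok';
}
```

Server-side, Spring Security can be configured to answer AJAX requests (for example, those carrying an `X-Requested-With: XMLHttpRequest` header) with a 401 status instead of a redirect, which makes a client-side check like this reliable.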