Questions on Security in React.js

I am building my first application in React and I am wondering how to implement some simple security on the client side, because I see some vulnerabilities. For example, if you view the page source, the script tags show you all the components you made, with all their logic and rendered pages. Also, if there is a basic method to stop XSS from happening that I can build on, I would like to see that as well.
In short, I am concerned that anyone can view the page source of a React app and see the components in the script tags.

You're not going to be able to prevent people from looking at the source code in the browser, since the browser has to receive it in order to render it. You can make it a little harder for people to open the developer tools, but there is always a way to get to them.
As for XSS, all you can do on the client side is validate and sanitize input, but an attacker can get around that by watching the network traffic and submitting bad data directly through their own HTTP requests.
Client side is just that: served to the client.
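For illustration, the most basic form of client-side sanitizing is escaping HTML special characters before untrusted text ever reaches the markup. This is a sketch of the idea, not a complete defense, and as noted above the same data must be validated again on the server:

```js
// Minimal sketch: escape HTML special characters in untrusted input.
// This helps if the string is later inserted into markup, but it is
// no substitute for server-side validation of the same data.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<img src=x onerror=alert(1)>'));
// -> &lt;img src=x onerror=alert(1)&gt;
```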

What do you mean by sources? It's still JavaScript, and everybody can see the sources, but they will be uglified and minified by Webpack.
Regarding XSS: don't worry about it when using React. Your code is already protected thanks to JSX; string variables in views are escaped automatically.
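To illustrate that point: interpolated strings in JSX render as text, and the only way to opt out of the escaping is the explicitly named dangerouslySetInnerHTML prop. A small sketch:

```jsx
// React escapes interpolated strings, so this renders the payload as plain text:
const userInput = '<img src=x onerror=alert(1)>';
const Safe = () => <div>{userInput}</div>; // displays the literal string

// The escape hatch is explicit and deliberately scary-looking.
// Never use it with untrusted input:
const Unsafe = () => <div dangerouslySetInnerHTML={{ __html: userInput }} />;
```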

I suggest you secure your service on the back end, because anyone can send requests to the server through Postman, or record and replay your requests in Burp Suite.
So apply security measures in your back-end code first, and after that use an object schema validator in your React app to guard against XSS from user input.
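As a sketch of the schema-validation idea (the answer doesn't name a library; yup is used here purely as an example):

```js
// Example only: validating user input against an object schema with yup.
import * as yup from 'yup';

const commentSchema = yup.object({
  author: yup.string().max(50).required(),
  body: yup
    .string()
    .max(2000)
    // hypothetical rule: reject anything that looks like an HTML tag
    .test('no-markup', 'Markup is not allowed', (v) => !/<[a-z][\s\S]*>/i.test(v ?? '')),
});

commentSchema
  .validate({ author: 'Ann', body: '<script>alert(1)</script>' })
  .catch((err) => console.log(err.errors)); // -> [ 'Markup is not allowed' ]
```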

Related

I have trouble entering URLs in the address bar when creating routes with vanilla JavaScript

I am trying to create routes with vanilla JavaScript, but every time I type a URL in the address bar I get an error saying 'Cannot GET /about'. I would appreciate a link to a tutorial or an answer to this kind of problem, since it is my first time doing this with vanilla JavaScript and I have no clue.
Taking "Vanilla JavaScript" to mean "JavaScript, running in the browser, without the use of third-party libraries":
What you want is not (reasonably) possible.
When you type a URL into the address bar, the browser makes an HTTP request to that URL, and the HTTP server for the origin of the URL (i.e. the scheme + hostname + port) is responsible for delivering something (typically a webpage) back to the client.
You can't substitute client-side JavaScript for that initial request to the HTTP server.
There is an edge case: I think a progressive web app can use a service worker to intercept the request and generate a response internally. This is no good for handling the initial request, though, since the PWA wouldn't be installed at that point.
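For completeness, a sketch of that edge case, assuming the service worker was installed on an earlier visit:

```js
// sw.js - once installed, a service worker can answer navigations itself,
// without the request ever reaching an HTTP server.
self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (event.request.mode === 'navigate' && url.pathname === '/about') {
    event.respondWith(
      new Response('<h1>About</h1><p>Generated inside the service worker.</p>', {
        headers: { 'Content-Type': 'text/html' },
      })
    );
  }
});
```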
Generally, when you are writing a single page application you will need two parts for your URL handling.
The first part is the History API. This allows you to write JavaScript which tells the browser:
In response to the click the user just performed, I am going to update the DOM. If you were to visit this URL then you would get the same result as the changes I am making to the DOM, so go ahead and update the address bar to represent that.
It also lets you hook into the browser's back navigation so you can undo those changes if the user clicks Back.
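A minimal sketch of that flow using pushState and the popstate event (the route table is invented for the example):

```js
// Hypothetical route table: each path maps to a function that updates the DOM.
const routes = {
  '/': () => { document.body.innerHTML = '<h1>Home</h1>'; },
  '/about': () => { document.body.innerHTML = '<h1>About</h1>'; },
};

function navigate(path) {
  routes[path]();                         // update the DOM
  history.pushState({ path }, '', path);  // tell the browser the URL now matches
}

// Redo the right DOM changes when the user clicks Back or Forward.
window.addEventListener('popstate', (event) => {
  const path = (event.state && event.state.path) || location.pathname;
  (routes[path] || routes['/'])();
});

// Intercept in-page link clicks instead of letting the browser request the URL.
document.addEventListener('click', (event) => {
  const link = event.target.closest('a[data-route]');
  if (link) {
    event.preventDefault();
    navigate(link.getAttribute('href'));
  }
});
```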
The second part is where you make sure that the server really does deliver the same content for that other URL.
There are three strategies for achieving this:
Have the server return a more-or-less empty HTML document that checks the URL as it loads and then populates itself entirely with JavaScript. This is a poor approach; you might as well just use hashbangs.
Generate all the HTML documents in advance. This is a strategy employed by Gatsby and Next.js. This is very efficient, but doesn't work for frequently updated content.
Generate the HTML documents on demand with server side code. Next.js can do this too.
You can do this when you write vanilla JavaScript (kind of), but it takes a lot of work, since you need to write all the code that runs on Node.js (where you might not count it as vanilla any more) to generate the HTML documents. I strongly recommend using a framework.
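To make the third strategy concrete, here is a deliberately minimal sketch using nothing but Node's built-in http module (the routes are invented for the example):

```js
// Serve real HTML for each route, so typing the URL into the address bar works.
const http = require('http');

const pages = {
  '/': 'Home',
  '/about': 'About us',
};

http.createServer((req, res) => {
  const title = pages[req.url];
  if (!title) {
    res.writeHead(404, { 'Content-Type': 'text/html' });
    return res.end('<h1>Not found</h1>');
  }
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(`<!doctype html><title>${title}</title><h1>${title}</h1>`);
}).listen(3000);
```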

Issue with Adblockers on server-side GTM

I am using server-side GTM, but I am facing ad-blocking issues with the request below, which retrieves the gtm.js file:
https://example.gtmdomain.com/gtm.js?id=GTM-MY_GTM_ID
The request works fine when I don't use adblockers.
Is there a way to rename the endpoint to something else, such as https://example.gtmdomain.com/secret_file_name.js?id=GTM-MY_GTM_ID in order to not be blocked by adblockers?
So: server-side GTM is exactly what it says. It's executed on the server. It listens for network requests. It doesn't have any exposure to what happens on the front end, and the front end has no clue there is a server-side GTM. Well, unless there are explicit calls to its endpoint, which you can proxy through your backend mirrors when needed.
What you're experiencing is ad blockers blocking your front-end GTM container. Even though it's theoretically possible to track everything you need, including front-end events, with server-side GTM alone, it's considered best practice to use both GTMs and stream front-end events to the back-end GTM through the front-end GTM.
This, of course, leaves you at the mercy of ad blockers, since they will block your front-end GTM. A way to avoid that is... well, not to use the front-end GTM, and instead have all your tracking implemented either in a tag manager that isn't blocked (I doubt there is one) or in your own custom JavaScript library that does all the front-end tracking and sends it to the back-end GTM to be properly processed and distributed.
Generally, it's too expensive to implement tracking with no TMS, since now you really have to know your JS, so only the cool kids can afford to do this. A good example would be Amazon.
Basically, it would cost about two to five times more (depending on particulars) to implement tracking with no TMS, but ad blockers typically cut only about 10% of traffic. 10% is not vital for reporting, measuring the effectiveness of funnels, and so on. The critically important data isn't reported through analytics anyway; the backend is the real source of it.
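As a sketch of the "custom JavaScript library" route: the endpoint and payload shape below are invented, since a real server-side GTM client defines its own contract:

```js
// Hypothetical first-party tracker posting events to a server-side GTM endpoint.
function track(eventName, params = {}) {
  const payload = JSON.stringify({ event: eventName, ...params });
  // sendBeacon is fire-and-forget and survives page unloads.
  navigator.sendBeacon('https://example.gtmdomain.com/collect', payload);
}

track('page_view', { page_location: location.href, page_title: document.title });
```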
You can easily do this if you use sGTM hosting from https://stape.io
There is a feature called Custom Loader. With it, you can download web GTM from a different path, and all other related scripts (for example gtag.js for GA4) will also be downloaded from different URLs.
More info https://stape.io/blog/avoiding-google-tag-manager-blocking-by-adblockers
You can also create your own custom loader client for web GTM. However, there will be problems with related scripts: UA/GA4 will still be blocked then, but GTM itself will not.
So, I finally implemented a great solution using GTM client templates. It works like a charm.
To summarize, the steps are:
1. Create a client template in your server container. You can import the template from https://raw.githubusercontent.com/gtm-templates-simo-ahava/gtm-loader/main/template.tpl
2. Create a new client from that client template.
3. Name the path whatever you want.
This article explains perfectly the required steps: https://www.simoahava.com/analytics/custom-gtm-loader-server-side-tagging/
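The end result is essentially the standard GTM snippet with its script source pointed at your own domain and custom path. Using the placeholder names from the question:

```js
// Standard GTM loader, with j.src rewritten to a first-party domain and the
// renamed path served by the server-side container's custom loader client.
(function (w, d, s, l, i) {
  w[l] = w[l] || [];
  w[l].push({ 'gtm.start': new Date().getTime(), event: 'gtm.js' });
  var f = d.getElementsByTagName(s)[0];
  var j = d.createElement(s);
  j.async = true;
  j.src = 'https://example.gtmdomain.com/secret_file_name.js?id=' + i;
  f.parentNode.insertBefore(j, f);
})(window, document, 'script', 'dataLayer', 'GTM-MY_GTM_ID');
```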

How do I correctly deal with 404 HTTP errors when using an SPA and no server-side computation?

I am currently using Vue.js on a website project of mine.
The server that returns this Vue.js SPA will not be capable of computation or checks, and will only serve the static resources required to run the SPA in the browser, where all computation will then take place.
For SEO purposes, I want to ensure error pages are handled correctly. Currently, every URL returns a 200 OK and serves the SPA, which can confuse search engines about pages that are actually supposed to be invalid. What would be the correct way of telling both users and search engines that a page is invalid if I cannot change the response the server provides?
For context, the SPA will get data from an API on another domain.
I have looked at other questions that are similar, but they normally recommend server-side checks, a dedicated 404-page redirect or a soft-404 page. Server-side checks and a dedicated 404 page will not be possible, and I have read that soft-404 pages are disliked by search engines.
Is there any proper way to go about this?
I have seen this post, but it is quite old and only suggests a no-index tag, which still makes the page valid in the eyes of search engines, just not indexable.
You can't return a 404 error without any server-side computation/rendering, because the fact that the resource/page wasn't found relies on some logic that only gets executed on the client-side in your case. Hence, your only options are the following:
If the resource wasn't found, redirect the user to a pre-defined 404 page (that returns the correct HTTP status); see the sketch below
Blacklist paths that are invalid inside your proxy, or whitelist those that are valid, and proxy to a 404 page on all other paths
Manually create a sitemap with your valid pages/routes
None of these options are optimal if your project grows or you have dynamic routes, but those are the limitations of client-side rendering. Hence I don't think there's a good answer to your question, since this is a well-known limitation of client-side rendering, and one of the main reasons why projects that care about SEO prefer server-side rendering.
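For the first option, a minimal Vue Router sketch (Home and NotFound are placeholder components) that funnels every unmatched path into a dedicated 404 view:

```js
// Minimal sketch with Vue Router 4; the components are stand-ins.
import { createRouter, createWebHistory } from 'vue-router';
import { h } from 'vue';

const Home = { render: () => h('h1', 'Home') };
const NotFound = { render: () => h('h1', '404 - Page not found') };

const router = createRouter({
  history: createWebHistory(),
  routes: [
    { path: '/', component: Home },
    // Catch-all: anything unmatched renders the 404 view client-side.
    { path: '/:pathMatch(.*)*', name: 'NotFound', component: NotFound },
  ],
});

export default router;
```

Note that this view can only signal the error client-side (for example with a robots noindex meta tag), which is exactly the limitation described above.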
From an SEO perspective: As long as you add all your valid pages to a sitemap, and don't link (anchor tag) to the invalid ones on any of your other pages, no search engine crawler will ever crawl or index these pages.
But if you really care about SEO and have a dynamic app with hundreds or thousands of dynamic links that cannot be added to a sitemap manually, I'd advise you to switch to a production framework like Nuxt.js or Next.js, because they offer what you're looking for, plus many other SEO features, out of the box.

How to make sure a request is sent from original software?

I'm currently making an open-source browser extension that will send requests to my site. This can easily be done with Ajax; a request will be sent to the page action.php.
My site will use PHP. Now the question is: how can I make sure action.php receives the request from the original extension? Griefers could easily send false information to the server, or a fork could be used to send incorrect data. I thought of generating a token of some sort, but anyone could recreate it, I guess.
How can I prevent this situation?
I have some experience with this myself. I've been building an extension with a login, and I eventually had to accept that security in an extension is inherently difficult.
The issue is that an extension is just a bundle of JS and HTML that anyone can inspect the values of. This means that anyone determined enough to dig through your code can potentially find out how to bypass anything you have built in.
The solution I eventually came to is that the extension itself cannot hold any long-lasting secrets. A session with a timeout is the only safe thing to store. The actual login for my extension is done via a website over HTTPS.
If you are trying to do this without any such login, your only recourse is to make it as difficult as possible to determine what needs to be sent, by using an algorithm that can generate server-verifiable tokens, and then only publishing minified code to the web store.
EDIT: Reread the question and noticed that you said you are doing this open source. Without some sort of authentication on the webserver via HTTPS, there is little you can do to stop those determined to bypass your protections because they will be on display in your public repository.
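To make "server-verifiable tokens" concrete, here is one hedged sketch in Node: the server issues a short-lived HMAC-signed token tied to a session and verifies it on each request, so the signing secret never ships with the extension. (The answer above doesn't prescribe this exact scheme.)

```js
// Sketch: short-lived HMAC tokens, issued and verified server-side only.
const crypto = require('crypto');
const SECRET = process.env.TOKEN_SECRET; // lives only on the server

function issueToken(sessionId) {
  // assumes sessionId contains no '.' characters
  const expires = Date.now() + 15 * 60 * 1000; // 15-minute lifetime
  const payload = `${sessionId}.${expires}`;
  const sig = crypto.createHmac('sha256', SECRET).update(payload).digest('hex');
  return `${payload}.${sig}`;
}

function verifyToken(token) {
  const [sessionId, expires, sig] = token.split('.');
  const expected = crypto.createHmac('sha256', SECRET)
    .update(`${sessionId}.${expires}`).digest('hex');
  if (!sig || sig.length !== expected.length) return null;
  const valid = crypto.timingSafeEqual(Buffer.from(sig, 'hex'), Buffer.from(expected, 'hex'));
  return valid && Date.now() < Number(expires) ? sessionId : null;
}
```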
For sensitive endpoints like this, it would make sense to do the data processing server-side. The client would only have to query the server to process the data.

Protecting "back-end" Angular source files

I have an Angular system that talks solely to my Go back end, and with Gorilla I take care of my sessions for login.
I started working on my admin environment, but I wondered what best practice would be for protecting the Angular code for it. It's not really a security problem, because even the admin code will just contain logic, not dangerous data; still, I prefer not to have it open to just anyone in the world.
I was thinking of doing the following;
I have a mux router that catches all my resource calls (deployed with Yeoman), and I was thinking of making three exceptions there, for images/admin, scripts/admin and styles/admin. These paths would then only be served if a valid session is active; otherwise the server would return a 401 status.
Would this be a good solution or is there a more efficient way to achieve this?
If you need a valid (and preferably authorized) session to get some static assets (be they JS code, stylesheets, images...), the requests need to pass through the application; the stack you use is not relevant at all.
What I'd do is point the resources at something controlled by your application, and then return either a 401 or an empty response with an X-Sendfile or X-Accel-Redirect header, so the actual serving is offloaded to whatever reverse proxy you have in place.
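The question concerns a Go/Gorilla stack, but the shape of the idea is stack-agnostic. Here it is sketched in Node/Express instead; the paths, the session check, and the /protected prefix are all assumptions:

```js
// Sketch: gate images/admin, scripts/admin and styles/admin behind a session,
// then let nginx stream the actual file via X-Accel-Redirect.
// Assumes a session middleware (e.g. express-session) has populated req.session.
const express = require('express');
const app = express();

app.get(/^\/(images|scripts|styles)\/admin\//, (req, res) => {
  if (!req.session || !req.session.user) {
    return res.sendStatus(401); // no valid session: deny the asset
  }
  // nginx maps /protected/ to an `internal` location and serves the file itself.
  res.set('X-Accel-Redirect', '/protected' + req.path);
  res.status(200).end();
});

app.listen(8080);
```

In Go the equivalent would be a small handler in front of those three mux routes that either writes the 401 or sets the header.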
