Is SSL adequate enough to keep a website safe? [closed] - javascript

I’ve been looking into programming a website for a company (no data or information other than forms are on it). I’ve been looking at BlueHost for hosting and its SSL feature for security. I don’t know any other ways of securing or encrypting a site myself; will BlueHost’s SSL be enough?

No. SSL only protects data in transit from interception by a third party, and there are many factors in how SSL is implemented that determine whether it even does that satisfactorily.
There are countless facets of security to consider beyond making sure that data can't be read by someone else while it is being transmitted between a browser and a server over SSL; too many to elaborate on in a Stack Overflow answer. To put it in perspective, when I deploy a system that contains any sensitive data (I work for a payment processing company, but "sensitive" covers a lot more than just credit card numbers), I have to answer around 800 questions on a security audit. Only about 30 of those questions relate to SSL and making sure SSL is implemented properly. Then a team of experts has to review the implementation of such a system, deliberate, and vote unanimously that it meets requirements. Even after all that, routine security audits find potential vulnerabilities that were overlooked and must be mitigated.
No. SSL by itself is not enough to consider a system "secure". If it has data that needs SSL, it almost certainly has more needs than just SSL.

First of all, you should have SSL. With that said, no, that is not enough.
So, what do you need? While it is true that there is a lot to cover, you clearly need to be pointed in the right direction...
As far as hosting provider goes, there are other features that you could be interested in:
Backups (although you can probably roll a solution yourself)
Antimalware (although, if your site does not allow file uploads, it is less relevant)
DDoS mitigation.
I want to remind you that information security is not only confidentiality, it is also availability and integrity (and traceability).
Please set up a test environment on a local server and make sure it works. You can run security audits on it before changes go to the production environment. Remember that the sooner you discover bugs, the cheaper they are to fix... thus, you do not want to discover them in production.
And yes, it is OK not to have HTTPS on the test environment; there is plenty to do besides that. You should review security in production too, though.
Ideally there will be a team doing tests, and among those tests they might look for potential vulnerabilities. There are also security scanners that can help with that.
However, you should be writing secure code to begin with. Right?
I have to tell you, the server side is more relevant than the tagged HTML and jQuery. The golden rule is: do not trust the client. Remember that a request might not be coming from an actual browser (despite what the user agent might say). You must do validation on the server. Even though it is also a good idea to validate on the client (for a better user experience and to save bandwidth), client-side validations are virtually irrelevant for security.
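To make that concrete, here is a minimal sketch of server-side validation, assuming a Node.js back end with Express (the route and field names are hypothetical):

const express = require('express');
const app = express();
app.use(express.json());

// The server re-validates everything, regardless of any client-side checks.
app.post('/comments', (req, res) => {
  const text = typeof req.body.text === 'string' ? req.body.text.trim() : '';

  // Never assume the client enforced these rules; re-check them here.
  if (text.length === 0 || text.length > 500) {
    return res.status(400).json({ error: 'Comment must be 1-500 characters.' });
  }

  // ...persist the comment with a parameterized query, then respond.
  res.status(201).json({ ok: true });
});

app.listen(3000);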
That is not to say that there is nothing client-side that can improve security. For example, fingerprinting a client can be useful to detect when a request comes from an unusual source for a given user, raising a red flag (a partial fingerprint is possible server-side). You can also add mitigations for screen recorders, keyloggers, and shoulder surfing.
There are also very specific cases where doing cryptography on the client makes sense. That is not the usual case. You probably do not need to do that. And in the odd case you do, please hire an expert.
Anyway, these are some things that developers often overlook (this is by no means a complete list):
Do not trust the client.
Validate all input.
Sanitize all output (considering where it is going to).
Use prepared statements for database access (see the sketch after this list).
Specify character encodings explicitly (for HTML, server-side string manipulation, and both database storage and connections).
Do proper authentication and access control.
Store credentials properly.
Stay up to date with browser security features.
Use Cross-Origin Resource Sharing correctly.
Use HttpOnly cookies when possible.
Use service workers and web caching properly.
Log errors instead of showing them to the client.
Have traceability for all modifications.
Also, consider two-factor authentication.
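To make two of the items above concrete (prepared statements and HttpOnly cookies), here is a minimal sketch, assuming Node.js with the express and pg packages; the table, column, and cookie names are hypothetical:

const { Pool } = require('pg');
const pool = new Pool();

// Prepared/parameterized statement: user input is passed as a value,
// so it can never become SQL syntax.
async function findUserByEmail(email) {
  const result = await pool.query(
    'SELECT id, name FROM users WHERE email = $1', [email]);
  return result.rows[0];
}

// HttpOnly cookie (Express): not readable from client-side scripts.
function setSessionCookie(res, sessionId) {
  res.cookie('session', sessionId, {
    httpOnly: true,  // inaccessible to document.cookie
    secure: true,    // only sent over HTTPS
    sameSite: 'lax', // basic CSRF mitigation
  });
}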
You might also be interested in the OWASP Top Ten project and the OWASP Cheat Sheet Series. Note: these are not a security checklist, and they are not a replacement for a security audit. They aren't gospel either; however, if you choose not to follow them, let it not be because you were unaware.
Finally, let me point you to Information Security Stack Exchange, a Q&A site dedicated to information security (hence the name), sister to this one.
Addendum: If you are not developing a web application, but setting up a content management system instead, then you must keep it up to date. Also, research and apply security hardening for whatever it is you are using.

Related

Method of low-level encryption of section of website

I'm looking for a simple method of encrypting a small portion of my personal developer website. I'd like to display my resume directly on the site, but would prefer to protect it with a password so as to prevent those who are not potential employers from viewing it. What is a safe way of doing so while imposing a limited strain on potential employers (e.g. not requiring them to create an account)?
Notably, I will not be including information like my SSN or anything particularly sensitive, just regular resume info. For this reason, would it be okay to provide all potential employers with the key and rotate it every month or so?
I'm using Lit as a web component tool, but otherwise the site is vanilla JS + html.
Thanks for any guidance!
Some shared hosting providers offer password protection as part of their package. You could contact your host and see if that's an option.
Otherwise the simplest password protection solution which doesn't require any third party tools would be to update your .htaccess file to require a password. See this question for examples on setting it up.
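For reference, a minimal sketch of what that .htaccess entry might look like on an Apache host (the file paths and realm name here are hypothetical; your host's layout may differ):

AuthType Basic
AuthName "Resume"
AuthUserFile /home/username/.htpasswd
Require valid-user

The password file itself is created with Apache's htpasswd utility, e.g. htpasswd -c /home/username/.htpasswd viewer, and should live outside the web root.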
Please note that this should not be considered a completely secure solution, because it is only basic authentication, which can be vulnerable to brute-force attacks. However, it should satisfy your requirement of adding some protection to your personal information.

Is it possible to hack with javascript? [closed]

I'm familiar with the term hacking. Although I will never become one of those hackers who scam people or make trouble, I believe it's an essential skill any person who calls themselves a programmer must have, as it is mainly about problem solving. But how does it work? When people hack games or websites, or the guy who hacked Sony, do they use a programming language like ANSI C, C++, or assembly? Assuming they use a programming language, would it be possible to use JavaScript to hack in the same way you'd use any other language to hack? Furthermore, what do you have to do to be able to hack? I just want to know how it works, and the theory behind it all.
Hacking is unique every time. Exploiting some types of vulnerability doesn't require any particular language at all. Sometimes you hack whatever's there, in whatever language or form you find it. Sometimes it's necessary to automate part of the process, for example password cracking, for which you can use any language you like.
Cracking a commercial game commonly involves studying its disassembled machine code, figuring out which part does the CD or license check, and surgically replacing a few bytes of the code such that the check is skipped.
Hacking a website commonly involves discovery of some small clumsiness on the part of its developers, which allows viewing of should-be private data, or permits execution of custom code if it does not sanitize data properly. SQL injection is a common flaw when values sent to a database are not properly protected within quotes ("...") and so you can give values which break out of the quotes to execute your own commands. Cross-site scripting is a type of hack that uses JavaScript: If a website blindly takes parameters from the URL or submitted form data and displays them on the page without safely encoding them as text (worryingly common), you can provide a <script> tag for execution on that page. Causing someone to visit such a URL allows execution of scripted actions on their behalf. Code injection is also possible on the server side with PHP (or Perl, or similar), if sloppy code gives access to an eval-like function, or if server misconfiguration allows user-uploaded files to be interpreted by the server as scripts.
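To illustrate the reflected XSS case in the question's own language, a minimal sketch assuming a hypothetical Express app; the vulnerable and safer handlers differ only in output encoding:

const express = require('express');
const app = express();

// Vulnerable: reflects a query parameter into HTML unencoded, so
// ?q=<script>...</script> executes in the visitor's browser.
app.get('/search', (req, res) => {
  res.send('<p>Results for: ' + req.query.q + '</p>');
});

// Safer: encode user input before placing it into HTML.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

app.get('/search-safe', (req, res) => {
  res.send('<p>Results for: ' + escapeHtml(req.query.q) + '</p>');
});

app.listen(3000);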
Hacking into an operating system or server program remotely may exploit bugs in its handling of network commands. Improper handling of malformed network commands can cause it to bungle user authentication checks or even to directly execute code provided in the network packet, such as via a buffer overflow.
Bugs in web browsers, plugins, and document viewers are similar. Files of what should be safe types can be crafted with non-standard or broken values; if the programmer forgot to handle these cases safely, they can often be used to escape the normal limits of the file type.
Viruses can also be slipped onto a machine via physical exchange of a USB stick or CD, or by convincing someone to install virus-laden software. Such viruses are often written anew for each purpose (to avoid anti-virus software), but there are some common ones available.
Weak or badly implemented encryption can permit brute-force decoding of encrypted data or passwords. Non-existent encryption can permit wiretapping of data directly. A lack of encryption can also permit unauthenticated commands to be sent to a user or server.
Very obvious passwords, or unchanged default passwords, may allow simple guesswork to get into a system. Some people use the same password everywhere. This gives websites the power to simply walk into a user's email account and then take control of everything associated with it. It also means a password leak on one insecure website may be used to access accounts on many other websites.
And sometimes, "hacking" is social engineering. For example, imagine phoning up a junior employee and pretending to be someone in charge, to trick them into revealing internal information or resetting a password. Phishing emails are a common, semi-automated form of social engineering.
Breaking into a system is rarely just one of these steps. It's often a long process of analyzing the system, identifying minor flaws and leveraging them to see if a useful attack vector is exposed, then branching out from that to gain more access.
I've never done it, of course.
There is a sort of "hacking" possible with JavaScript. You can run JavaScript from the address bar. Try typing javascript: alert("hello"); in your address bar while on this website.
This way it is possible to hijack local variables, or cookies for instance.
But since JavaScript runs on the client side, people would have to use your workstation in order to gain access to your cookies. It is a technique that can be used to alter login data and pretend to be somebody else (if the site has been badly built).
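For instance, extending the alert example above (only cookies not flagged HttpOnly are readable this way):

// Typed into the address bar on a site, this shows every cookie
// that is accessible to client-side script:
javascript: alert(document.cookie);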
If you really want to learn more about this there are some 'javascript hacking lessons' that can be found here: http://www.hackthissite.org/pages/index/index.php
Side note, there is a difference between hacking and cracking. Reference: http://en.wikipedia.org/wiki/Hacker_(programmer_subculture)
There are many exploits that can use javascript, probably the most well-known is going to be cross-site scripting (XSS).
http://en.wikipedia.org/wiki/Cross-site_scripting
To follow up on Michael's answer: it is good to know the vulnerabilities in software and how people exploit them in order to protect against them; however, hacking is not something you want to be known for.
For a start, you are actually referring to what is known as cracking, not hacking. This question will surely be closed; however, some brief observations ^_^
Cracking comes from a base-level understanding of how computer systems are built; hence you don't learn how to crack/hack, you learn about computer engineering in order to reverse-engineer.
A popular platform for hacking is Linux over Windows, for example, as it's open source, so experienced programmers can write their own programs; although experienced hackers can accomplish their goal on any platform.
There are many levels of hacking, however; simple website security is worlds apart from hacking into Sony and facing jail ^_^
You may have some fun on http://www.hackthis.co.uk though
It depends on many things, for example how the HTML code is structured. I don't know if you could hack an iframe, but a normal web page would be relatively easy to hack. Another common security pitfall is passing sensitive information via the URL query string; you could grab those pieces of data and submit them wherever they are used in the HTML code.
There could be other details, but I can't think of them right now.

Security Concerns on clientside (Javascript)

We are going to design and implement a UI for a big website. The owner of the site is really cautious about security issues. I wonder if there is a checklist of client-side security recommendations to follow when designing and coding in JavaScript.
You may use the OWASP guide as a start. It offers a suite of tests that you can systematically use to check your application for common vulnerabilities.
Web application pen testing is a buzzword for what you are trying to achieve. Scan the net for automated tools and background information.
Edit:
You mentioned that not only the client side is your concern, but the overall security of the entire application, including the server. My advice would be: if you have never done a security assessment of an application before, your boss/the owner of the site should probably consider hiring an external company/consultant for the job. They will do the job for less than it would probably cost if you and your team had to learn the details first. Plus, they have the advantage of having done this over and over again, so they are much less likely to overlook important details.
JavaScript can easily be tricked. You need to build a system where the server side enforces all the security and the client side acts only as an interface, much like a browser.
Encrypting traffic with a strong security certificate is also an option you may consider.

What would be the most ethical way to consume content from a site that is not providing an API? [closed]

I was wondering what would be the most ethical way to consume some bytes (386, precisely) of content from a given Site A, with an application (e.g. on Google App Engine) on some Site B, but doing it right, no scraping intended. I really just need to check the status of a public service, and they're currently not providing any API. The markup on Site A has a JavaScript array with the info I need, and being able to access that, let's say, once every five minutes would suffice.
Any advice will be much appreciated.
UPDATE:
First of all, thanks very much for the feedback. Site A is basically the website of the company that currently runs our public subway network, so I'm planning to develop a tiny free Android app for anyone to have not only a map with the whole network and its stations but also updated information about the availability of the service (those are the bytes I will eventually be consuming), etcetera.
There will be some very different points of view, but hopefully here is some food for thought:
1. Ask the site owner first; if they know ahead of time, they are less likely to be annoyed.
2. Is the content on Site A accessible on a public part of the site, e.g. without the need to log in?
3. If the answer to #2 is that it is public content, then I wouldn't see an issue, as scraping the site for that information is really no different than pointing your browser at the site and reading it for yourself.
4. Of course, the answer to #3 depends on how the site is monetised. If Site A serves advertisements to generate revenue, then it might not be a good idea to start scraping content, as you would be bypassing how the site makes money.
I think the most important thing to do is talk to the site owner first, and determine straight from them:
1. Is it OK for me to be scraping content from their site?
2. Do they have an API in the pipeline (simply highlighting the desire may prompt them to consider it)?
Just my point of view...
Update (4 years later): The question specifically embraces the ethical side of the problem. That's why this old answer is written in this way.
Typically, in such a situation, you contact them.
If they don't like it, then ethically you can't do it (legally is another story, depending on the license provided on the site, what login/anonymity or other restrictions they have for access, whether you have to use test/fake data, etc.).
If they allow it, they may provide an API (which might involve costs; it will be up to you to determine how much the feature is worth to your app), or promise some sort of expected behavior for you, which might itself be scraping, or whatever other option they decide.
If they allow it but are not ready to help make it easier, then scraping (with its other downsides still applicable) will be all right, at least "ethically".
I would not touch it save for emailing the site admin, then getting their written permission.
That being said: if you're consuming the content yet not extracting value beyond the value a single user gets when observing the data you need from them, it's arguable that any TOU they have wouldn't find you in violation. If, however, you get noteworthy value beyond what a single user would get from the data you need from their site (i.e., let's say your results end up providing value to 100x of your own site's users), I'd say you need express permission to do that, to sleep well at night.
All that's off, however, if the info is already in the public domain (and you can prove it), or the data you need from them is under some type of "open license" such as from GNU.
Then again, the web is nothing without links to others' content. We all capture then re-post stuff on various forums, say: we read an article on CNN, then comment on it in an online forum, maybe quote the article, and provide a link back to it. It just depends, I guess, on how flexible and open-minded the site's admin and owner are. But really, to avoid being sued (if push comes to shove), I'd get permission.
Use a user-agent header which identifies your service (a sketch combining these points follows the list).
Check their robots.txt (and re-check it at regular intervals, e.g. daily).
Respect any Disallow in a record that matches your user agent (be liberal in interpreting the name). If there is no record for your user-agent, use the record for User-agent: *.
Respect the (non-standard) Crawl-delay, which tells you how many seconds you should wait before requesting a resource from that host again.
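Putting those points together, a minimal polite-fetch sketch (JavaScript on Node.js 18+ for the global fetch; the URLs and user-agent string are hypothetical, and the robots.txt handling is deliberately simplified):

const USER_AGENT = 'SubwayStatusBot/1.0 (contact@example.com)';

async function fetchStatus() {
  // Re-fetch robots.txt at regular intervals in a real deployment.
  const robotsRes = await fetch('https://site-a.example/robots.txt',
    { headers: { 'User-Agent': USER_AGENT } });
  const robots = robotsRes.ok ? await robotsRes.text() : '';

  // Grossly simplified: a real parser should match records against the
  // user-agent name and honor any Crawl-delay directive.
  if (/^Disallow:\s*\/status/im.test(robots)) {
    throw new Error('Fetching /status is disallowed by robots.txt');
  }

  const page = await fetch('https://site-a.example/status',
    { headers: { 'User-Agent': USER_AGENT } });
  return page.text();
}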
"no scraping intended" - You are intending to scrape. =)
The only reasonable ethics-based reasons one should not take it from their website are:
They may wish to display advertisements or important security notices to users
This may make their statistics inaccurate
In terms of hammering their site, it is probably not an issue. But if it is:
You probably wish to scrape the minimal amount necessary (e.g. make the minimal number of HTTP requests), and not hammer the server too often.
You probably do not wish to have all your apps query the website; you could have your own website query them via a cronjob. This will allow you better control in case they change their formatting, or let you throw "service currently unavailable" errors to your users, just by changing your website; it introduces another point of failure, but it's probably worth it. This way if there's a bug, people don't need to update their apps.
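A minimal sketch of that single-point-of-query idea, assuming Node.js 18+ with Express (the upstream URL and five-minute interval are hypothetical):

const express = require('express');
const app = express();

let cached = { status: 'unknown', fetchedAt: null };

// One server polls Site A every five minutes; every app user hits
// the cached endpoint below instead of hammering Site A directly.
async function refresh() {
  try {
    const res = await fetch('https://site-a.example/status');
    cached = { status: await res.text(), fetchedAt: new Date().toISOString() };
  } catch (err) {
    // Keep the last good value: clients see stale data, not an error storm.
    console.error('refresh failed:', err.message);
  }
}
setInterval(refresh, 5 * 60 * 1000);
refresh();

app.get('/api/status', (req, res) => res.json(cached));
app.listen(3000);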
But the best thing you can do is to talk to the website, asking them what is best. They may have a hidden API they would allow you to use, and perhaps have allowed others to use as well.

Client side javascript driven websites [closed]

Is it possible to build dynamic web applications using client side javascript as the pivotal point? I'm not talking about server side javascript (like node), I'm talking about handling most of the site with javascript: templating, form handling etc.
Of course, the short answer is "yes, it is possible". But my main concern is about database data manipulation and security when the database is traditionally located on a server. A client-side JavaScript-driven application would ideally talk almost directly to the database. I know CouchDB allows this, but how do you prevent users from submitting queries meant to see data they should not be allowed to see? How do you handle input validation, considering that the main validation would also be client-side and so easily forged?
This seems very interesting to me but not really doable; maybe there are solutions I'm not aware of, or thin security layers to wrap around some DB, or projects I don't know about, etc.
I'm aware of CouchDB standalone apps (couchapp); it's a technology close to what I'm after, but it enforces an open approach that makes it not ideal for every scenario I can think of.
Any suggestion on this topic is welcome.
EDIT: As examples are required, think of the simplest blog. I want to show the last five posts on the front page. If someone "hacks" the page in a very simple way, they could retrieve older posts; that's fine. But what about when I want to insert a new post? If JavaScript has open access to the database, anyone can also write posts on my blog, and I don't want that. Also, anyone could delete my posts or other users' comments, a privilege that I want reserved. And what if I want to reject comments longer than 500 characters or containing bad words? Again, with validation on the client side, users can bypass it.
First, running code on your clients to directly access the database sounds like trouble; it seems to go against the very idea of information hiding that has been so instrumental in designing more complex systems.
So, I assume most of this is an academic exercise. :)
Most centralized database systems have multiple users or roles themselves; typically, we use a single "user account" per application, and allow the applications to define users in their own way. (This has always bothered me, it always felt like admin vs user roles should also be separated in the database. But I appear to be alone in my opinion. :)
However, you could define an admin role and a user role in your database, GRANT SELECT privileges to your user role, and GRANT SELECT, UPDATE, INSERT privileges to your admin role.
PostgreSQL at least has a predefined PUBLIC role that can take the place of a user; the following example is copied from http://www.postgresql.org/docs/9.0/static/sql-grant.html :
GRANT SELECT ON mytable TO PUBLIC;
GRANT SELECT, UPDATE, INSERT ON mytable TO admin;
Correct configuration of the pg_hba.conf file could allow admin users to log in only from specific IPs, and user-role users to log in from everywhere else, so you could restrict UPDATE and INSERT to just specific IPs.
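A hypothetical pg_hba.conf fragment along those lines (the database name and addresses are made up; pg_hba.conf rules match first-to-last):

# TYPE  DATABASE  USER   ADDRESS         METHOD
host    mydb      admin  192.0.2.10/32   md5
host    mydb      admin  0.0.0.0/0       reject
host    mydb      all    0.0.0.0/0       md5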
Now the difficulty is in writing a PostgreSQL client library in JavaScript. :) I honestly haven't got a clue whether browser-based JavaScript virtual machines are flexible enough to allow arbitrary socket communication with a remote host. (Given how WebSockets have been so heartily embraced, I'm guessing JavaScript is very much stuck in the HTTP world.)
Of course, you have to serve your HTML pages to the web browsers somehow, and that means having an HTTP server. You might as well ask that server to sit between the clients and the database. And you might as well do some processing tasks on the server to avoid sending redundant / unnecessary data to clients as well... which is exactly how we have the situation we have today. :)
Yes, it's possible to have a web application which depends heavily on JavaScript; however, most of the time this layer is just additional, not a replacement. Most developers out there think of JavaScript as just a convenience layer which makes transactions "easier" for the end user, and you could say that that's the best approach security-wise. JavaScript as a "definitive" tool to sanitize malleable data for a trusted database is just not efficient, unless you want to treat all your data as insecure and sanitize it every single time you display it or deal with it from JavaScript itself.
Fancy animations, AJAX, validations, and calculations are usually there only for convenience, under the thinking that sometimes it's better to use the client's processor rather than the server's. And of course, there's the fact that making things "faster" is something everyone has wanted to achieve since the days of 56k internet connections.
Security-wise, having the security logic available to the end user without any extra layer of protection is just insane. Think of JavaScript as an extra hand: what JavaScript can read, the user can read. You wouldn't store database credentials or password hashing keys in there, would you?
Especially since JavaScript can be modified on the fly with a debugger, and pretty much any obfuscated code can be reduced to something readable by a human.
Leaving insertion and "secure" data aside, you shouldn't have much trouble fetching publicly available information, if your source is secure.
For the javascript front-end I would recommend Backbone.js, which will give you a solid base on the organization and interaction side:
Backbone supplies structure to JavaScript-heavy applications by providing models with key-value binding and custom events, collections with a rich API of enumerable functions, views with declarative event handling, and connects it all to your existing application over a RESTful JSON interface.
The only thing that would be left is a thin layer resting on top of your DB (or even the DB itself) to sanitize the data upon insertion and to store/compute sensitive information that can't be exposed to the client under ANY circumstances.
UPDATE
The Blog Example
The 3 real requirements to do what you want:
Authentication (backend): this would be needed to access the DB, to erase comments, and so on.
Validation & Sanitization (backend): limit the number of characters, escape malicious code, ban words.
Sensitive Data Handling (backend): using a hash for passwords, for example (see the sketch below).
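A minimal sketch of the third point, assuming the Node.js bcrypt package:

const bcrypt = require('bcrypt');

// Store only the hash; the plaintext password is never persisted.
async function hashPassword(password) {
  return bcrypt.hash(password, 12); // 12 = cost factor (salt rounds)
}

// Compare a login attempt against the stored hash.
async function checkPassword(password, storedHash) {
  return bcrypt.compare(password, storedHash);
}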
Note: You can pretty much bypass "Validation & Sanitization" by treating all your data as insecure when you display it; as I mentioned, that's highly inefficient, since you would need the client to parse it somehow to make it secure, and it's still highly probable that it'll be bypassed. The other two really require you to have a secure way to interact with your valuable DB.
Usually, if your back end fails, your front end fails, and vice versa. An XSS attack can do a heavy amount of damage on a JavaScript-ruled site (like Facebook, for instance).
It is not possible to talk to the database safely from the client side. For security's sake, any validation you do must be done twice: once on the client side, once on the server side.
Anything that you can do on the client-side can be forged by a malicious user. Client-side validation is mainly done as a convenience to the user. It's the server-side validation that provides database security.
