I am creating a website with ASP.NET MVC 4. The application consists of two pages, with a workflow similar to Google Maps. On the first page, the user types in a patient's name, date of birth, and some basic data about that patient. Then the user submits the form and is brought to the second page in the application. The second page is just a print preview that the user can print. I want the user to be able to navigate between the two pages using the browser's back and forward buttons (for example, to change inputs on the first page after seeing the second page).
Actually calculating the data that appears on the printout is very complicated, and I really want all of that code to execute server-side, where I can use C#. So I need to send the patient's data to the server. The problem is that I don't have an SSL certificate, and I don't want to send a patient's name along with their data over HTTP (as this is a violation of privacy). I am willing to send the patient's data over HTTP, as long as it remains detached from the patient's identity (except at the client). The name and date of birth are simply displayed in the corner of the printout, and do not affect the server-side calculations in the least.
I can think of two possible ways to accomplish this task. The first, and preferred, solution would be a way to send only some of the form data over HTTP, yet still somehow get the name and date of birth from the first page into client-side jQuery running on the second page. Maybe I can make a cookie and somehow specify that it not be sent as part of the HTTP request?
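A minimal sketch of that client-only idea, assuming jQuery is loaded; the element ids are illustrative. Unlike a cookie, sessionStorage is never transmitted with any HTTP request, so the identity never leaves the machine:

    // Page 1: stash the identity client-side just before the form posts.
    // Keep the name/DOB inputs disabled or outside the <form> so they are never sent.
    $('#patientForm').on('submit', function () {
      sessionStorage.setItem('patientName', $('#patientName').val());
      sessionStorage.setItem('patientDob', $('#patientDob').val());
    });

    // Page 2 (print preview): read the identity back and place it on the printout.
    $(function () {
      $('#printName').text(sessionStorage.getItem('patientName') || '');
      $('#printDob').text(sessionStorage.getItem('patientDob') || '');
    });

One caveat: sessionStorage is per-tab and clears when the tab closes, which is arguably a feature for patient data.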
The other way to accomplish this is to make the entire application a single page and change its contents dynamically via client-side jQuery. In this solution, when the user submits the form, I can fire off an AJAX request that returns JSON. I can then populate the print preview with data returned from the server (i.e. the JSON) as well as from the form (i.e. the patient's name and date of birth). Is there a way to accomplish this while still allowing the user to use the browser's back and forward commands to navigate between the data-input page and the print-preview page, if they are in fact the same page?
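For what it's worth, the History API makes the back/forward behavior of the single-page variant workable. A hedged sketch, where showPreview/showForm and the /Calculate URL are made up for illustration:

    var lastResult = null; // cache the server's JSON so back/forward can re-render

    $('#patientForm').on('submit', function (e) {
      e.preventDefault();
      // post only the non-identifying inputs (the .identity class is illustrative)
      $.post('/Calculate', $(this).find(':input').not('.identity').serialize())
        .done(function (json) {
          lastResult = json;
          showPreview(json); // hypothetical: renders the print-preview view
          history.pushState({ view: 'preview' }, '', '#preview');
        });
    });

    // Back/forward fire popstate; swap views accordingly.
    window.addEventListener('popstate', function (e) {
      if (e.state && e.state.view === 'preview' && lastResult) {
        showPreview(lastResult);
      } else {
        showForm(); // hypothetical: shows the data-entry view again
      }
    });

Since the result is cached client-side, back and forward never re-post anything.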
I don't believe what you described is possible without severe drawbacks. Sure, you could roll the data up into a cookie or local storage and avoid the POST, but that puts a lot of logic in your view, and it's a pretty nasty hack.
The options I would advise are:
Get an SSL cert. If that's the driving force behind your approach, then spend the $6 to get one. Seriously.
Keep the print view in the same page as the form; use CSS @media rules to specify the print styles.
I have a web page A, created by a PHP script, which needs to use a service that is only available on another page B; for various reasons, A and B can't be merged. In this particular instance, page A is a non-WordPress page and page B is WordPress-generated, and the service in question is sending emails in a specific format supplied by a WP plugin.
My idea is to use page A to generate the email content and then send that content to page B, which then, aided by the plugin, sends the email in the appropriate format and transfers control back to page A. This would be perfectly doable, but what I would like in addition is for page B never to be displayed. The visitor should have the impression that they are dealing only with page A the whole time. Can that be done, and if so, how?
I do not intend this to be a WordPress question (although maybe it is); it is more generally about using another page's script in passing without displaying that other page.
If you do have source access, the most reliable option is to use the plugin directly. But if you can't, the next easiest is to use curl to mimic the form post on page B. This happens server-side, so the user won't see it happening.
To figure out what you need to send in your POST request, open your browser's developer tools and watch the Network tab while you submit the form manually; note the URL and all of the POST data. Then you'll be able to mimic it.
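A rough sketch of replaying that post server-side. It's shown with JavaScript's fetch for concreteness (PHP's curl_* functions follow the same shape); the URL and field names are hypothetical, so copy the real ones from the Network tab:

    // Replay page B's form post from the server so the visitor never leaves page A.
    async function sendViaPageB(subject, body, nonce) {
      const res = await fetch('https://siteB.example/wp-admin/admin-post.php', {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: new URLSearchParams({
          email_subject: subject, // hypothetical field names: use the exact
          email_body: body,       // ones captured from the real form
          _wpnonce: nonce,        // WordPress forms usually require a fresh nonce
        }),
      });
      if (!res.ok) throw new Error('page B answered ' + res.status);
      return res.text(); // page B's response; never shown to the visitor
    }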
You may proxy https://SITEA.com/siteB/whatever to http://SITEB.com/whatever - or the other way around... I didn't fully understand the process :P
If you just want the site B service call, you could also send the requests via curl or an HTTP library of your choice, which might be better, as you will probably have to get a nonce first and things like that.
I need to understand single-page apps, and maybe get some ideas about them.
I want to create a project, and I'll do it with MVC. I also want to use AngularJS for the client-side programming.
I know that AngularJS is good for single-page applications, and that when working with SPAs you send your data to an API for processing. But data sent from Angular is visible to the user and open to manipulation.
I don't want users to be able to see any of that data or to access the API from the internet. Which approach should I follow?
I'm thinking about keeping sensitive user data in the MVC controller. For example, let's say the user id is very sensitive in my project. If I keep the user id in a JavaScript variable, then when I send it to the API with some command, the user will be able to change the id and manipulate the system. But if I keep the user id in the MVC controller, behind user authentication, and send requests to my MVC controller, then the user won't be able to change it. I know this is not the best way of doing things, though; there must be a more clever way.
I'll be glad if someone can explain how these things work in SPAs, or when you use Angular and MVC together.
This won't work: you can't prevent the user from tampering with the data, crafting custom requests, and doing whatever she wants on her side.
What you should do is never trust incoming data, which means handling every id twice: once when you produce it and again when it comes back. Either it travels in plain form and you verify that it's legal when it returns, or you sign or encrypt it on the way out and check it on the way back.
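A minimal sketch of the sign-and-verify variant, using Node's built-in crypto module; the secret and function names are illustrative:

    const crypto = require('crypto');
    const SECRET = 'server-side-secret-key'; // hypothetical; keep out of source control

    function signId(id) {
      const mac = crypto.createHmac('sha256', SECRET).update(String(id)).digest('hex');
      return id + '.' + mac; // what the client sees and sends back
    }

    function verifyId(token) {
      const parts = String(token).split('.');
      if (parts.length !== 2) return null;
      const expected = crypto.createHmac('sha256', SECRET).update(parts[0]).digest('hex');
      const ok = parts[1].length === expected.length &&
        crypto.timingSafeEqual(Buffer.from(parts[1]), Buffer.from(expected));
      return ok ? parts[0] : null; // null means the value was tampered with
    }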
Some data can be stored on the server side; the id you mention is one such example. That way the user never sees the data: what you pass around is the session id, a long random value that is practically impossible to forge. This approach comes at the cost of server-side resources; the more users you have, the more state the server holds between requests.
I've been looking for better ways to secure my site. Many forums and Q/A sites say that jQuery variables and HTML attributes can be changed by the end user. How do they do this? And if they can alter data and elements on a site, can they insert scripts as well?
For instance, I have two jQuery scripts for a home page. The first is a "member only" script and the second is a "visitor only" script. Can the end user log into my site, copy the "member only" script, log off, and inject the script so it'll run as a visitor?
Yes, it is safe to assume that nothing on the client side is safe. Using tools like Firebug for Firefox or Developer Tools for Chrome, end users are able to manipulate (add, alter, delete):
Your HTML
Your CSS
Your JS
Your HTTP headers (the request data sent to your server)
Cookies
To answer your question directly: if you are solely relying on JavaScript (and most likely cookies) to track user session state and deliver different content to members and guests, then I can say with absolute certainty that other people will circumvent your security, and it would be trivial to do so.
Designing secure applications is not easy, a constant battle, and takes years to fully master. Hacking applications is very easy, fun for the whole family, and can be learned on YouTube in 20 minutes.
Having said all that, hopefully the content you are keeping in the JS is not mission-critical or sensitive data. If it is, I would seriously weigh the costs of hiring a third-party developer who is well versed in security to come in and help you out. Because, like I said earlier, creating a truly secure site is not something easily done.
Short Answer: Yes.
Anything on the user's computer can be viewed and changed by the user, and any user can write their own scripts to execute on the page.
For example, you will upvote this post automatically if you paste this in your address bar and hit Enter while on this page:
javascript: $('#answer-7061924 a.vote-up-off').click();
It's not really hacking, because you are the end user running the script yourself, performing only actions the end user can normally perform. If you allow the end user on your site to perform actions that affect your server in a way they shouldn't be able to, then you have a problem. For example, suppose I had a way to make that JavaScript execute automatically instead of you having to run it yourself from your address bar: everyone who came to this page would automatically upvote this answer, which would (obviously) be undesired behavior.
Firebug and Greasemonkey can be used to replace any JavaScript: the nature of the browser as a client is such that the user can make it do basically anything they want. Your specific scenario is definitely possible.
Well, if your scripts are public and not protected by the server side, then a hacker can run them in a browser like Mozilla.
You should always keep your protected content behind server-side scripting and allow access via the session (or some other server-side method).
Yes, a user can edit scripts; however, all scripts are executed on the user's machine, meaning that anything they alter will only affect their machine and not any of your other visitors.
However, if you have paid content which you feed using a "members-only" script then it's safest if you use technology on the server to distribute your members-only content rather than rely on the client scripts to secure your content.
Most security problems occur when the client is allowed to interact with the server and modify data on the server.
Here's a good bit of information you can read about XSS: http://en.wikipedia.org/wiki/Cross-site_scripting
To put it very simply:
The web page is just an interface for clients to use your server. It can be altered in all possible ways and anyone can send any kind of data to your server.
First, you have to check that the user sending that data to your server has the privileges to do so. This is usually done by checking against the server session.
Then you have to check, at your server end, that you are only taking the data you want, nothing more and nothing less, and that the data is valid, by validating it on your server.
For example, if there is a mandatory field in some form that the user has to fill out, you have to check that the data was actually sent to the server, because the user may simply delete the field from the form and submit it without.
Another example: if you are dynamically adding data from the form to the database, the user may just add a new field, like "admin", set it to 1, and send the form. If you then have an admin field in the database, the user is set as an admin.
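A minimal sketch of that whitelist idea in plain JavaScript; the field names are made up:

    // Whitelist the fields you expect; anything else (e.g. a smuggled
    // "admin": 1) is dropped before the data ever touches the database.
    const ALLOWED_FIELDS = ['name', 'email', 'message'];

    function sanitizeForm(body) {
      const clean = {};
      for (const field of ALLOWED_FIELDS) {
        if (typeof body[field] !== 'string' || body[field].trim() === '') {
          throw new Error('missing or invalid required field: ' + field);
        }
        clean[field] = body[field].trim();
      }
      return clean; // only ever insert `clean`, never the raw body
    }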
One of the most important things to remember is to avoid SQL injection.
There are many tools you can use; they are made for web developers to test whether their site is safe. Hackbar is one example.
We send follow up emails for inquiries on our products and I wanted to track how effective they are.
This is my plan:
Update the URL in the hyperlink of the email to include a query string like:
href="http://www.somepage.htm?source=fromEmail"
And then track how many visits I get with the query string = fromEmail
My problem is that the page is a .htm and I didn't really want to rewrite it, so I'm looking for a JavaScript counter that can accommodate the query string. Ideally I would like to be able to track the total page hits as well as the hits that come specifically from these emails. Even more ideally, I would like to be able to record various information in SQL Server so that the person who requested this could do some reporting on it.
Am I going about this the right way or should I just rewrite it in .net (as we are a .net shop)?
While it is definitely possible to put some javascript on your .htm page that fires an AJAX request that increments a SQL counter table if the source=fromEmail, I would say that it is more reliable to have the server increment this counter when serving up the page.
Having the server do the work when the hit originally comes in will also allow you to track more specific information about the request for the report.
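If you do go the client-side route anyway, the page-side half is tiny. A sketch, where /track-hit is a hypothetical endpoint whose handler writes the hit to SQL Server:

    // Drop-in for the existing .htm page: report the hit, tagging visits
    // that arrived from the email link (?source=fromEmail).
    (function () {
      var source = new URLSearchParams(window.location.search).get('source') || 'direct';
      navigator.sendBeacon('/track-hit', JSON.stringify({
        page: window.location.pathname,
        source: source // "fromEmail" for email-driven visits
      }));
    })();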
JavaScript in emails is a no-no. Outlook blocks JavaScript by default, so there goes 50% of your users. Other email systems are not keen on running JavaScript either. Remember, when you're doing HTML emails, you need to think 1995-vintage HTML. Thanks, Microsoft.
You've got a few (ok, but not great) options:
Include an image file in it; when the image gets loaded, count it as a hit (see the sketch after this list). This is how all the major services handle email tracking, with a 1px x 1px white image file that they most often place at the bottom of the message. The obvious problem with doing this is that if recipients use Outlook's preview pane with images enabled, it counts as a hit even though they may not have read the email. If they read it in Gmail without unblocking images (hidden by default), you've got a real hit that doesn't get recorded. So, either way, your numbers are wrong.
Track link clicks by routing links through your server: you use your server to rewrite the URLs the browser follows. Again, it works well enough, but it won't capture the real numbers, because only a small percentage of people who get an email actually click a link in it. Here's an example using link tagging with Google Analytics.
A combination of the two above. It covers both cases, yes, but could result in double counting one user. You could also hybridize the two by setting a variable on each image that could track back to the source email, then store hits in a DB to eliminate dupes. That's a LOT of work, though.
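For the image-file option above, the pixel is just an endpoint that records the hit and returns a one-pixel image. A hedged Node sketch, with the route and logging purely illustrative:

    // Hypothetical tracking-pixel endpoint: log the hit, return a 1x1 GIF.
    const http = require('http');

    // smallest valid transparent GIF, base64-encoded
    const PIXEL = Buffer.from('R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7', 'base64');

    http.createServer((req, res) => {
      if (req.url.startsWith('/pixel.gif')) {
        // a real setup would write campaign/source info to a database here
        console.log('email opened:', new Date().toISOString(), req.url);
        res.writeHead(200, { 'Content-Type': 'image/gif', 'Content-Length': PIXEL.length });
        return res.end(PIXEL);
      }
      res.writeHead(404);
      res.end();
    }).listen(8080);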
My company sends (and tracks) thousands of emails daily as part of its core business, and we always encourage clients to do emails with "teasers" that draw recipients into other websites for the main content. Why? The closer we get a user to the main site, the closer we are to a sale; nobody has ever done an ecommerce transaction solely over email yet (that I know of). Also, it's one heck of a lot easier, and offers far more options, to do tracking via Google Analytics on a site than it is to track emails. Since you can't reliably embed Analytics in emails, your best bet is to get 'em to a website that can.
Using Python, I built a scraper for an ASP.NET site (specifically a Jenzabar course searching portlet) that would create a new session, load the first search page, then simulate a search by posting back the required fields. However, something changed, and I can't figure out what, and now I get HTTP 500 responses to everything. There are no new fields in the browser's POST data that I can see.
I would ideally like to figure out how to fix my own scraper, but that is probably difficult to ask about on StackOverflow without including a ton of specific context, so I was wondering if there was a way to treat the page as a black box and just fire click events on the postback links I want, then get the HTML of the result.
I saw some answers on here about scraping with JavaScript, but they mostly seem to focus on waiting for JavaScript to load and then returning a normalized representation of the page. I want to simulate the browser actually clicking on the links and following the same path to execute the request.
Without knowing any specifics, my hunch is that you are using a hardcoded session id and the web server's app domain recycled and created new encryption/decryption keys, rendering your hardcoded session id (which was encrypted by the old keys) useless.
You could try using Firebug's Net tab to monitor all requests, browse around manually, and then diff the requests you generate against the ones your screen scraper is generating.
If you are just trying to simulate load, you might want to check out something like Selenium, which runs through a browser and handles postbacks the way a browser does.
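Selenium has bindings for several languages, including Python. As a hedged sketch of the black-box approach (shown here with the JavaScript selenium-webdriver package; the URL and selectors are made up), the flow is: load the page, click the postback link like a real user, then read the resulting HTML:

    // Requires the selenium-webdriver npm package and a chromedriver on PATH.
    const { Builder, By, until } = require('selenium-webdriver');

    (async function scrape() {
      const driver = await new Builder().forBrowser('chrome').build();
      try {
        await driver.get('https://portal.example.edu/course-search');
        // Clicking fires ASP.NET's __doPostBack exactly as a user would.
        await driver.findElement(By.linkText('Search')).click();
        await driver.wait(until.elementLocated(By.id('searchResults')), 10000);
        console.log(await driver.getPageSource()); // HTML after the postback
      } finally {
        await driver.quit();
      }
    })();

Python's binding follows the same shape, so the same few calls would slot into the existing scraper.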