I am receiving an encrypted ID from the backend, and I can see in the response that it arrives without any question marks. But when this value is passed along in a request, I see some characters being changed. We simply take the value and pass it on. Why would this be?
Example:
Getting from backend... ID = 7$ĄrÂŬÛ,ŕ4Ŀ+
While passing to another service... ID = 7$?rÂ?Û,?4?+
Edit: a few things to note
this is all happening within an iframe
on page load, the initial value is held in Redux as an empty string ('')
the issue does not happen when no extended-ASCII characters are present; only those extended characters are changed to ?
Well, without any code to see what's really happening, I have a few guesses on what could be going on.
If the text is still encrypted, or only partially decrypted, then there won't actually be a character to display, because it's in an encrypted binary format.
Make sure the encrypted text is being fully decrypted.
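As a quick sanity check, this can be sketched with a hypothetical helper: raw (undecoded) ciphertext bytes coerced into a string typically contain control characters or U+FFFD replacement characters, while a fully decoded or re-encoded ID should not.

```javascript
// Hypothetical helper: raw ciphertext bytes forced into a string usually
// contain control characters or the U+FFFD replacement character;
// a properly decoded/encoded ID should not.
function looksLikeRawBinary(s) {
  return /[\u0000-\u0008\u000E-\u001F\uFFFD]/.test(s);
}

console.log(looksLikeRawBinary('7$abc+'));        // false: printable text
console.log(looksLikeRawBinary('\u0001\uFFFD'));  // true: binary residue
```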
Character Support
If you are viewing the encrypted text in a terminal, it may not be able to display the characters correctly (those specific characters are only supported in Unicode, not in ASCII). Try outputting the text to a file.
If you are sending the request in a URL, note that URLs only accept certain characters without special formatting: just a 64-character subset of the ASCII character set can be used as-is.
Make sure it's being encoded (and decoded) with base64 for URLs and forms.
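The advice above can be sketched as follows, assuming the backend can hand over the encrypted ID as raw bytes. URL-safe base64 keeps every byte intact in a query string. (Buffer is Node-specific; in a browser, btoa/atob play the same role.)

```javascript
// Sketch: URL-safe base64 round trip for binary IDs.
// '+' and '/' are swapped for '-' and '_', and padding is dropped,
// so the result survives a URL without further escaping.
function toBase64Url(bytes) {
  return Buffer.from(bytes).toString('base64')
    .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

function fromBase64Url(s) {
  const b64 = s.replace(/-/g, '+').replace(/_/g, '/');
  return new Uint8Array(Buffer.from(b64, 'base64'));
}

const id = new Uint8Array([0x37, 0x24, 0xC4, 0xFF, 0x00, 0x3E]);
const token = toBase64Url(id);          // safe to place in a URL
const restored = fromBase64Url(token);  // byte-for-byte identical
```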
To protect a web application from XSS and other attacks, we would like to decode all input coming from the client (browser).
To bypass standard validation, attackers encode the data. Example:
&lt;IMG SRC=javascript:alert('XSS')&gt;
That gets translated to
<IMG SRC=javascript:alert('XSS')>
In C#, we can use HttpUtility.HtmlDecode and HttpUtility.UrlDecode to decode client input. But they do not cover every type of encoding. For example, the following encoded value is not translated by the methods above, yet all browsers decode and execute it properly. One can verify this at https://mothereff.in/html-entities as well.
<img src=x onerror="&#106avascript:alert('XSS')">
It gets decoded to <img src=x onerror="javascript:alert('XSS')">
There is more encoded text that the HtmlDecode method does not decode. In Java, https://github.com/unbescape/unbescape handles all such varieties.
Do we have a similar library in .NET, or how do we handle such scenarios?
Generally, you should not allow users to enter code into a text box.
Client side
Judging from the comments on your post, I'd simply add some client-side validation to prevent users from submitting malicious input (such as verifying that email fields contain emails), and then apply the same validation on your server.
Server side
As soon as you read a user's input into a model, you should validate and sanitise it before doing any further processing. Have a generic AntiXSS() class that removes malicious characters such as the <> symbols, for example by checking myString.Contains("<") or myString.Contains(">") and stripping that character if found. Validate your types: if you're checking the userEmail field, make sure it conforms to email syntax.
The general idea is that you can pass data to the client, but never trust any of the data that comes back from the client without first sanitising and cleansing everything.
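The validate-then-sanitise idea above can be sketched like this (hypothetical helper names, shown in JavaScript for illustration; production code should lean on a vetted validation library rather than hand-rolled checks):

```javascript
// Minimal sketch of validate-then-sanitise -- an illustration only.
function isValidEmail(input) {
  // Very loose shape check: something@something.something, no whitespace.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(input);
}

function stripAngleBrackets(input) {
  // Remove the characters that open/close HTML tags.
  return input.replace(/[<>]/g, '');
}

console.log(isValidEmail('user@example.com'));          // true
console.log(isValidEmail('<script>alert(1)</script>')); // false
console.log(stripAngleBrackets('<b>hi</b>'));           // "bhi/b"
```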
I found the solution. HttpUtility.HtmlDecode decodes the characters between an ampersand '&' and a semicolon ';'. Browsers, however, do not insist on the trailing ';'.
In my case, the semicolon ';' was missing. I wrote simple code to insert a semicolon before calling the HtmlDecode method, and now it decodes properly, as expected.
I have been racking my brain for the last couple of days trying to save a screenshot from ThreeJS to the server using a .NET Web API.
I have gone through all the possible questions on this specific topic and related ones on StackOverflow and tried the suggestions as well.
The issue details are specified below:
I can successfully get the base64-encoded string from renderer.domElement.toDataURL(), which contains a valid ThreeJS image.
When I pass this string as-is to my .NET Web API, the Convert.FromBase64String() method fails, complaining about an invalid length or invalid padding characters. I made sure to strip the "data:image/png;base64," prefix before passing it.
I tried a number of things to resolve this, such as adding padding characters so the length is a multiple of 4, and using a regular expression to extract the right data. I was able to get through Convert.FromBase64String() and save the resulting byte array as a PNG on my server, but it resulted in a blank image.
I also discovered that when I used the chrome extension Advanced Rest Client to hit my WebAPI and used the Encode Payload feature before posting the string, I was able to get the image saved on my server successfully and got back the desired image as well.
Seeing this, I used the encodeURIComponent() function in JavaScript to pass my base64 string to the Web API from my web app, but it failed with the same behavior as my earlier attempts.
One important observation was that whitespace was eliminated by Encode Payload but not by encodeURIComponent.
I compared the strings produced by encodeURIComponent() and by Advanced Rest Client's Encode Payload. Although at a high level they do the same thing, replacing special characters with their escape sequences, there is still a significant difference between them.
I'd appreciate help with this issue.
I would like to know if there is any other way of getting the threejs base64 string passed to .NET successfully.
What might be the difference between the encoding of encodeURIComponent and Advanced Rest Client Encode Payload feature?
Thanks in advance!
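One hypothesis consistent with the disappearing-whitespace observation: '+' is a legal base64 character, but in application/x-www-form-urlencoded bodies '+' means "space", so a server decoding the field naively sees spaces wherever the base64 string contained '+'. A small sketch (Buffer is Node-specific; the principle is the same in a browser):

```javascript
// '+' is valid base64 output...
const b64 = Buffer.from([0xfb, 0xef]).toString('base64');  // "++8="
// ...but form-urlencoded decoding turns each '+' into a space:
const asFormField = b64.replace(/\+/g, ' ');               // "  8=" (corrupted)
// Percent-encoding first keeps the '+' intact end to end:
const safe = encodeURIComponent(b64);                      // "%2B%2B8%3D"
console.log(decodeURIComponent(safe) === b64);             // true
```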
I am posting a large amount of data. I need this to be as small as possible for performance reasons.
My data starts out as JS objects. I then stringify it using json. I then send it in a post.
The thing is that I have a lot of objects, lists [] and dicts {}, as well as short texts, which JSON places in quotes "".
These are then URI-encoded before being posted. I do not do this; the browser does it. I can see the result when I look at the request body.
So every [, {, and "" is now URI-encoded, meaning that my string becomes much longer. In fact, if I compare
alert(JSON_local.stringify(myStuff).length);
alert(encodeURI(JSON_local.stringify(myStuff)).length);
the uri encoded string is 50% larger. That's a lot bigger when the string starts out big.
Am I missing something here? JSON is standard, but it seems to have a negative side effect for me. Is there an alternative to using JSON, or am I doing something wrong? Data always has to be sent URI-encoded, no?
Data always has to be send as uri encoded, no?
Not true. It depends on the content type you're sending it with.
If you use the x-www-form-urlencoded content type when sending, you need to encode the data. If you use multipart/form-data, for example, you do not. This has been discussed at more length here. For a considerable amount of data, I don't see any real reason to use x-www-form-urlencoded.
Of course, there is more to it than just changing the content type; you then need to supply the MIME boundaries. It does sound to me, however, that it would be more efficient for you. From http://www.w3.org/TR/html401/interact/forms.html#h-17.13.4:
The content type "application/x-www-form-urlencoded" is inefficient
for sending large quantities of binary data or text containing
non-ASCII characters. The content type "multipart/form-data" should be
used for submitting forms that contain files, non-ASCII data, and
binary data.
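The size difference the spec warns about is easy to measure: percent-encoding triples every '{', '[' and '"' in a JSON payload. A short sketch:

```javascript
// Measure the url-encoding inflation on a JSON payload.
const json = JSON.stringify({ list: [1, 2, 3], text: 'some "short" text' });
const urlEncoded = encodeURIComponent(json);
console.log(json.length < urlEncoded.length);  // true: encoded form is larger

// Sending the same payload as multipart/form-data avoids the inflation
// (assumes a browser, or Node 18+, where FormData and fetch are globals):
// const fd = new FormData();
// fd.append('payload', json);
// fetch('/endpoint', { method: 'POST', body: fd });
```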
I am creating a string variable in JavaScript, and the length of that string could be anything.
I am sending this string by jquery post method to a servlet. This servlet writes the string to a file.
I can alert the string anywhere in my javascript and can see the complete string.
But whenever the string length exceeds 5345 characters, I get an "aborted" message in Firebug (I assume the data is not sent) and no error message is displayed in the server's console.
(For Chrome, the length limit is a little higher, i.e. 5389.)
I guess there is a problem with the length of the data being sent to the servlet. But to my knowledge there is no limit to the amount of data that can be sent by POST.
I am using jquery's $.post method as below
$.post('servlet', function(data) {
});
I want to print the error that has occurred while sending data to the servlet. Can I do that?
If you are using the GET method, you are limited to a maximum of 2,048 characters, minus the number of characters in the actual path.
However, the POST method is not limited by the size of the URL for submitting name/value pairs. These pairs are transferred in the request body and not in the URL.
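The distinction above can be sketched with a hypothetical helper: keep GET while the full URL fits a conservative 2,048-character budget, otherwise switch to POST, whose name/value pairs travel in the request body.

```javascript
// Hypothetical helper: pick GET while path + '?' + query fits the budget.
function chooseMethod(path, params, limit = 2048) {
  const query = new URLSearchParams(params).toString();
  return path.length + 1 + query.length <= limit ? 'GET' : 'POST';
}

console.log(chooseMethod('/servlet', { data: 'x'.repeat(10) }));    // "GET"
console.log(chooseMethod('/servlet', { data: 'x'.repeat(6000) }));  // "POST"
```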
I'm writing a JavaScript function that needs to uphold three properties:
be very small and lightweight - no external libraries
encode a string in such a way that it can be passed as a GET parameter
this string must be decoded again at its destination
Effectively, it authenticates the user by sending his username and password to a PHP page which then verifies it. This is done via GET because I haven't yet found a way of doing a background cross-domain POST request. The trouble is that if the user has a character such as '#' or similar in his password, it doesn't get sent properly.
Currently, to avoid this, I encode() the password string before sending it, which allows it to be received without problems. However, I read that PHP's urldecode() is not a perfect inverse of it, as there are corner cases that are treated differently (i.e. ' ', '+', etc.). Sadly, I cannot find that document anymore, so I cannot quote it, but the gist was that one of them converts spaces into '+' signs, which the other treats as actual plus signs, or something like that...
As such, I'm looking for a Javascript function that can take a string and make it URL-safe, and which has a perfect reversal function in PHP so that the original string can be recovered.
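The corner cases in question can be seen directly: the legacy escape() leaves '+' bare, which PHP's urldecode() then turns into a space, while encodeURIComponent() escapes it and round-trips cleanly through PHP's $_GET decoding.

```javascript
// escape() does not touch '+', so PHP's urldecode() later eats it as a space;
// encodeURIComponent() escapes it, so the original character survives.
console.log(escape('a+b c'));              // "a+b%20c"   -- '+' left as-is
console.log(encodeURIComponent('a+b c'));  // "a%2Bb%20c" -- '+' escaped
// On the PHP side: urldecode('a+b%20c') === 'a b c' -- the '+' is lost.
```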
The arguably awful code I currently use to achieve this:
login.onsubmit = function(){
    loginFailMsg.style.display = 'none';
    var inputs = login.getElementsByTagName('input');
    var formdata =
        'username=' + inputs[0].value + '&password=' + encode(inputs[1].value);
    submit.src = formtarget + '/auth/bklt?' + formdata;
    userinfo = undefined;
    setTimeout(getUserinfo, 300);
    return false;
};
Use encodeURIComponent; PHP will decode it automatically when populating $_POST or $_GET.
'&password='+encode(inputs[1].value)
Where's that encode function coming from? It seems to me the quick answer to your question is to use encodeURIComponent() instead, available since JavaScript 1.5. See also Comparing escape(), encodeURI(), and encodeURIComponent(); it does not encode everything either, but it does encode everything the server expects it to.
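Applied to the form-data line from the question, that looks like this (sample values stand in for inputs[0].value and inputs[1].value):

```javascript
// Both fields run through encodeURIComponent in place of the mystery encode().
var username = 'alice';
var password = 'p#ss word+';
var formdata = 'username=' + encodeURIComponent(username) +
               '&password=' + encodeURIComponent(password);
console.log(formdata);  // "username=alice&password=p%23ss%20word%2B"
```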
(As for cross-domain AJAX POST calls, I'd really have a look at "JSON with Padding". See JSONP with jQuery, which I mentioned in the comments earlier. This will also prevent issues with the timeout you've chosen at random, and jQuery will also help you, a lot, to get rid of inputs[0].value and the like. And, as you apparently already have an MD5 hash on the server, I'd really hash the password client-side as well (see Karl's answer) and compare the hashes instead. Respect your users' passwords and your own time: drop that "no external libraries" requirement!)
I don't think there's such a thing as a reversible hash function. There are plenty of JavaScript MD5 libraries available, however.