Unable to parse bulk JSON POST request to Kinvey back-end - javascript

When I try to parse this JSON:
[
{"name":"name1","id":12},
{"name":"name2","id":11},
{"name":"name3","id":111},
{"name":"name4","id":1115}
]
in a POST request to Kinvey's BAAS, I get the error:
{
"error": "Unable to parse the JSON in the request"
}
Here is a screenshot of my back-end (Kinvey).
Here is a screenshot of my request (Postman).
When I send the single entity {"name":"name1","id":12}, it doesn't throw an error and the entity is placed in the back-end as it should be. Picture here: Kinvey worked

As a security measure, some frameworks won't parse top-level arrays as JSON. Doing so enabled exploits in some older browsers.
The exploit goes something like this:
Write some JavaScript that replaces Array with a function that stores its contents in some other variable.
In your malicious site, include a request to some privileged (JSON Array) resource on another server using a <script> tag.
Trick a user with privileges on that server into visiting your site.
The requested resource will be pulled from the benign server, loaded in the user's browser as a script, and evaluated, but the array gets handled by your malicious substitute function, which you can use however you like. It's a form of cross-site request forgery.
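A minimal sketch of those steps (all names are illustrative, and the simulated script evaluation stands in for a `<script>` tag; note this only leaked data in old JavaScript engines):

```javascript
// Sketch of the historical JSON-array hijack (illustrative only).
// Step 1: shadow the global Array constructor to capture its arguments.
var stolen = [];
var RealArray = globalThis.Array;
globalThis.Array = function () {
  for (var i = 0; i < arguments.length; i++) stolen.push(arguments[i]);
};

// Steps 2-3: the attacker's page would load the victim's JSON endpoint as
// <script src="https://victim.example.com/private.json"></script>, riding on
// the victim's cookies. Evaluating the response as script is simulated here:
eval('[{"name":"name1","id":12}]');

// Modern engines no longer route array literals through the (overridden)
// global Array binding, which is why this exploit only worked in old browsers:
console.log(stolen.length); // 0 in any modern engine

globalThis.Array = RealArray; // restore the real constructor
```

In a vulnerable browser, `stolen` would end up holding the contents of the privileged JSON array.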
Update
Regarding the question, "how do I upload multiple entities to a Kinvey collection?", the answer is in the Kinvey documentation:
"For bulk upload, see the CSV/JSON import feature on the Kinvey console (navigate to the collection, click Settings, then click Import Data)."

You can only POST one entity at a time with the POST function in Kinvey, so despite the error message, this is not a problem with your JSON syntax.
Also, you should look into calling Kinvey through the official Kinvey SDK for the mobile platform you're developing for, rather than using the REST API. That way, you can take advantage of many other features such as caching, offline sync, implicit authentication, etc.

Related

How can I protect an HTTP POST request from only being received by an approved javascript client?

I have a URL that will accept POSTed data and store a record of it (this is for analytics). I need to secure it against anyone in the world POSTing to it, but what possible authentication mechanism can I use whose details are safe to expose? I don't think the JavaScript can access any secret data without making it public.
I could base it on any HTTP header, but these can all be spoofed, right?
If it helps, both client and server are https.
Edit: I think I need to state the problem more explicitly; sorry, I thought I could explain it concisely, but it's clearly not coming across! Imagine the following:
A static page at https://example.com/index.html includes a script, https://example.com/script.js.
The script makes a request to another remote URL, e.g.
ajax_call('https://stats.example.com/stats.php', 'some data');
The stats.php script simply writes 'some data' to a file.
Now, the flaw is that anyone can simply POST anything to stats.php and it will write to the file. I want to restrict that to just my 'client', i.e. https://example.com/index.html.
If I could, I would do something like this in stats.php:
if ($_SERVER["HTTP_REFERER"] == 'https://example.com') {
    do_stuff();
} else {
    die('not allowed');
}
but I've always been under the impression that HTTP_REFERER (and other similar headers) can simply be spoofed, so that would be pointless.
I need to secure that against anyone in the world from POSTing to it, but what possible authentication mechanism can I use that it is safe to expose the details of?
If the endpoint needs to be accessed by the browser of everybody who visits a public website, then you can't.
The browser is completely under the user's control. They can inspect everything the browser does. They can then recreate that using some other HTTP client.
If you trust the browser enough to let it make the request to your analytics API then you must trust the owner of the browser too. It is impossible to separate them.
The best you can do is analyse the data sent to the analytics API for unusual or impossible patterns of behaviour.
You can use CORS so that your back end accepts cross-origin requests only from a specific host/domain.
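As a sketch of that idea (Node-flavoured; the allowed origin is an assumption): the server sends CORS headers only for the approved origin, so browsers will block cross-origin reads from anywhere else. Note that CORS only constrains browsers; it does not stop a non-browser HTTP client from calling the endpoint directly, which is the previous answer's point.

```javascript
// Minimal CORS gate sketch; ALLOWED_ORIGIN is an assumption for illustration.
const ALLOWED_ORIGIN = 'https://example.com';

function corsHeaders(requestOrigin) {
  if (requestOrigin === ALLOWED_ORIGIN) {
    return {
      'Access-Control-Allow-Origin': ALLOWED_ORIGIN,
      'Vary': 'Origin', // caches must key on the Origin header
    };
  }
  return {}; // no CORS headers: the browser blocks the cross-origin read
}

console.log(corsHeaders('https://example.com'));
console.log(corsHeaders('https://evil.example'));
```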

Scraping a webpage that is using a firebase database

DISCLAIMER: I'm just learning by doing, I have no bad intentions
So, I would like to fetch the list of the applications listed on this website: http://roaringapps.com/apps
I've done similar things in the past, but with simpler websites; this time I'm having problems getting my hands on the data behind this webpage.
The scrolling from page to page is blazing fast so, to understand how the webpage works, I've fired up a packet sniffer and analyzed the traffic. I've noticed that, after the initial loading, no traffic is exchanged between the server and my client, even if I scroll over 2500 records in the browser. How is that possible?
Anyhow. My understanding is that the website loads the data from a stream of some sort and renders it via JavaScript. Am I correct?
So, I fired up Chromium's devtools, looked at the "network" tab, and saw that a WebSocket request is made to the following address: wss://s-usc1c-nss-123.firebaseio.com
At this point, after googling a bit, I've tried to query the very same server, using the "v=5&ns=roaringapps" query I saw on the devtools window:
import json
from websocket import create_connection

ws = create_connection('wss://s-usc1c-nss-123.firebaseio.com')
ws.send('v=5&ns=roaringapps')
print json.loads(ws.recv())
And got this reply:
{u't': u'c', u'd': {u't': u'h', u'd': {u'h': u's-usc1c-nss-123.firebaseio.com', u's': u'JUL5t1nC2SXfGaIjwecB6G13j1OsmMVv', u'ts': 1476799051047L, u'v': u'5'}}}
I was expecting to see a JSON response with the raw data about applications and so on. What am I doing wrong?
Thanks a lot!
UPDATE
Actually, I just found out that the website is using JSON to load its data. I was not seeing it in repeated requests, probably because of caching; disabling the cache in Chromium did the trick.
While the Firebase Database allows you to read and write JSON data, its SDKs don't simply transfer the raw JSON; they do many tricks on top of that to ensure an efficient and smooth experience. What you're getting there is Firebase's wire protocol. The protocol is not publicly documented, and (if you're new to it) trying to unravel it is going to give you an unpleasant time.
To retrieve the actual JSON at a location, it's easiest to use Firebase's REST API. You can get that by simply appending .json to the URL and firing an HTTP GET request at it.
So if the initial data is being loaded from:
https://mynamespace.firebaseio.com/path/to/data
You'd get the raw JSON by firing a HTTP GET against:
https://mynamespace.firebaseio.com/path/to/data.json
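A small sketch of that URL construction (the namespace and path are taken from the question; the helper name is invented):

```javascript
// Build the Firebase REST URL for a database path by appending ".json".
function firebaseRestUrl(namespace, path) {
  return 'https://' + namespace + '.firebaseio.com/' + path + '.json';
}

var url = firebaseRestUrl('roaringapps', 'apps');
console.log(url); // https://roaringapps.firebaseio.com/apps.json
// An HTTP GET against `url` (e.g. fetch(url).then(r => r.json()))
// would return the raw JSON stored at that location.
```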

text/html output when requesting JSONP

I have been playing around with the jQuery library the last week or two.
Very handy! I am now playing with the AJAX requests to retrieve things such as the weather, current downloads and more, which have been going well so far!
I have now tried to connect up to my ISP to get my current data usage (peak, off peak etc).
When I use Chrome, I can manually type the variables into the URL and have the required JSON code show in the browser. The issue is that it seems to return text/html instead of application/json.
When you go into developer tools, it shows text/html. This makes it difficult for me to retrieve the data from my home server using AJAX and JSONP. See here for a failed query (but you can still see the text/html output, which is in JSON format): Failed JSON Query on ISP
My question is, how could I get this data from the server URL, then make it into JSON that jQuery can read?
When I try the .load , $.get functions I run into Cross Origin Issues...
EDIT: Here is the PDF documentation for the API (download at the bottom of the page).
Notice that I need to append certain values (user / pass / token). My ultimate aim is to have my JS read these values and store them.
The issue is, that it seems to return text/html instead of application/json.
That's a serverside issue. Go and file a bug report.
This make it difficult for me to retrieve the data
Not by itself. You should be able to override how responses are parsed, e.g. in jQuery by using the dataType parameter.
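For instance (the ISP endpoint URL is made up; outside jQuery the same idea is simply parsing the text yourself):

```javascript
// With jQuery, force JSON parsing despite the wrong Content-Type:
// $.ajax({ url: 'https://isp.example/usage', dataType: 'json' })
//   .done(function (data) { /* `data` is a parsed object */ });

// Without jQuery, the equivalent is parsing the raw text manually:
var body = '{"peak": 10.5, "offpeak": 42.0}'; // pretend text/html response
var usage = JSON.parse(body);
console.log(usage.peak); // 10.5
```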
using AJAX and JSONP
Notice that you cannot use JSONP, as it is not supported by that API (judging from the docs and a simple ?callback=test try). If you want support for that, file a bug report against the service provider.
When I try the .load, $.get functions I run into Cross Origin Issues...
Yes. They don't send CORS headers either. I suspect that this API is only used internally, and by devices that are not subject to a same-origin policy.
how could I get this data from the server URL, then make it into JSON that jQuery can read?
Use a proxy on your own server (that runs in the same domain as your app). It can also fix that content-type header.
For more details see also Ways to circumvent the same-origin policy, though most of the methods require cooperation of the service provider (to implement serverside features).
If I understand you correctly, you ask for a certain value and it gives you a string. Most APIs in the world send a string that you have to parse into JSON or native language objects. I would suggest looking at the Parsing JSON Strings link. It explains how to take well-formatted strings and parse them into JSON-readable objects.
var obj = jQuery.parseJSON( '{ "name": "John" }' );
alert( obj.name === "John" );
If you go further and start using PHP, take a look at Parsing JSON Strings with PHP.
EDIT:
Use the .done() method to grab the text from the other page after the AJAX call.
$.ajax(...).done(function (html) {
    // do what you want with the html from the other page
    var object = $.parseJSON(html);
});

Javascript hijacking, when and how much should I worry?

Ok, so I'm developing a web app that has begun to be more ajaxified. I then read a blog post that talked about JavaScript hijacking, and I'm a little confused about when it's actually a problem. I'd like some clarification.
Question 1:
Is this the problem/vulnerability?
If my site returns json data with a 'GET' request that has sensitive
information then that information can get into the wrong hands.
I use ASP.NET MVC and the method that returns JSON requires you to explicitly allow json get requests. I'm guessing that they are trying to save the uninitiated from this security vulnerability.
Question 2:
Does the hijacking occur by sniffing/reading the response as it's being sent through the internet? Does SSL mitigate that attack?
Question 3:
This led me to ask this question to myself. If I'm storing page state in local javascript object(s) of the page, can someone hijack that data(other than the logged in user)?
Question 4:
Can I safely mitigate against THIS vulnerability by only returning JSON with a 'POST' request?
The post you linked to is talking about CSRF & XSS (see my comment on the question), so in that context:
Is this the problem/vulnerabiliy ("If my site returns json data with a 'GET' request that has sensitive information then that information can get into the wrong hands.")?
No.
Does the hijacking occur by sniffing/reading the response as it's being sent through the internet?
No.
If I'm storing page state in local javascript object(s) of the page, can someone hijack that data(other than the logged in user)?
It depends. It depends on whether you're storing the data in cookies and haven't set the right domain, or path. It depends on whether there's a security vulnerability on the client browser that would allow a script to gain access to data that typically is restricted. There are numerous other vectors of attack, and new ones are discovered all the time. The long and the short of it is: don't trust the browser with any confidential or secure data.
Can I safely mitigate against THIS vulnerability by only returning JSON with a 'POST' request?
No (it's not a single vulnerability, it's a set of classes of vulnerabilities).
Well, you can check whether the request was a GET and whether it came from the correct referrer.
You are not really much safer getting it from a POST, because that is just as easy to simulate.
In general there are a lot of things you can do to prevent cross-site request forgery and manipulation.
The actual vulnerability is being able to overwrite Array.
If one overwrites the native Array, then one gets access to JSON data that's constructed as an array.
This vulnerability has been patched in all major browsers.
You should only worry about this if your clients are using insecure browsers.
Example:
window.Array = function() {
    console.log(arguments);
    // send to secret server
};
...
$.get(url, function(data) { ... });
When the data is constructed, if there are any arrays in the returned JSON, the browser will call window.Array, and the data in those arrays gets sent to the secret server.

How to prevent direct access to my JSON service?

I have a JSON web service to return home markers to be displayed on my Google Map.
Essentially, http://example.com calls the web service to find out the location of all map markers to display like so:
http://example.com/json/?zipcode=12345
And it returns a JSON string such as:
{"address": "321 Main St, Mountain View, CA, USA", ...}
So on my index.html page, I take that JSON string and place the map markers.
However, what I don't want to have happen is people calling out to my JSON web service directly.
I only want http://example.com/index.html to be able to call my http://example.com/json/ web service ... and not some random dude calling the /json/ directly.
Question: how do I prevent direct calling/access to my http://example.com/json/ web service?
UPDATE:
To give more clarity: http://example.com/index.html calls http://example.com/json/?zipcode=12345 ... and the JSON service
- returns semi-sensitive data,
- returns a JSON array,
- responds to GET requests,
- the browser making the request has JavaScript enabled
Again, what I don't want to have happen is people simply looking at my index.html source code and then calling the JSON service directly.
There are a few good ways to authenticate clients.
By IP address. In Apache, use the Allow / Deny directives.
By HTTP auth: basic or digest. This is nice and standardized, and uses usernames/passwords to authenticate.
By cookie. You'll have to come up with the cookie.
By a custom HTTP header that you invent.
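A sketch of the last option, a custom header check (the header name and token are invented for illustration; as the edit that follows notes, anything shipped in client-side JavaScript is visible to users):

```javascript
// Server-side check for an invented custom header (illustrative only).
const EXPECTED_TOKEN = 'makeup-a-token';

function isAuthorized(headers) {
  return headers['x-app-token'] === EXPECTED_TOKEN;
}

console.log(isAuthorized({ 'x-app-token': 'makeup-a-token' })); // true
console.log(isAuthorized({})); // false
```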
Edit:
I didn't catch at first that your web service is being called by client-side code. It is literally NOT POSSIBLE to prevent people from calling your web service directly, if you let client-side Javascript do it. Someone could just read the source code.
Some more specific answers here, but I'd like to make the following general point:
Anything done over AJAX is being loaded by the user's browser. You could make a hacker's life hard if you wanted to, but, ultimately, there is no way of stopping me from getting data that you already freely make available to me. Any service that is publicly available is publicly available, plain and simple.
If you are using Apache you can set allow/deny on locations.
http://www.apachesecurity.net/
or here is a link to the apache docs on the Deny directive
http://httpd.apache.org/docs/2.0/mod/mod_access.html#deny
EDITS (responding to the new info).
The Deny directive also works with environment variables. You can restrict access based on browser string (not really secure, but discourages casual browsing) which would still allow XHR calls.
I would suggest the best way to accomplish this is to have a token of some kind that validates the request is a 'good' request. You can do that with a cookie, a session store of some kind, or a parameter (or some combination).
What I would suggest for something like this is to generate a unique url for the service that expires after a short period of time. You could do something like this pretty easily with Memcache. This strategy could also be used to obfuscate the service url (which would not provide any actual security, but would raise the bar for someone wanting to make direct calls).
Lastly, you could also use public-key crypto to do this, but that would be very heavyweight. You would need to generate a new pub/priv key pair for each request and return the public key to the JS client (here is a link to an implementation in JavaScript): http://www.cs.pitt.edu/~kirk/cs1501/notes/rsademo/
You can add a random number as a flag to determine whether the request is coming from the page just sent:
1) When generates index.html, add a random number to the JSON request URL:
Old: http://example.com/json/?zipcode=12345
New: http://example.com/json/?zipcode=12345&f=234234234234234234
Add this number to the Session Context as well.
2) The client browser renders index.html and requests the JSON data via the new URL.
3) Your server gets the JSON request and checks the flag number against the Session Context. If it matches, respond with the data. Otherwise, return an error message.
4) Clear the Session Context at the end of the response, or when a timeout is triggered.
Accept only POST requests to the JSON-yielding URL. That won't prevent determined people from getting to it, but it will prevent casual browsing.
I know this is old, but for anyone getting here later, this is the easiest way to do it. You need to protect the AJAX subpage with a password that you set on the container page before calling the include.
The easiest way to do this is to require HTTPS on the AJAX call and pass a POST variable. HTTPS + POST ensures the password is always encrypted.
So on the AJAX/sub-page do something like
if ($_POST["access"] == "makeupapassword")
{
    ...
}
else
{
    echo "You can't access this directly";
}
When you call the AJAX, make sure to include the POST variable and password in your payload. Since it is sent in a POST body over HTTPS, it will be encrypted, and since it is random (hopefully) nobody will be able to guess it.
If you want to include or require the PHP directly on another page, just set the POST variable to the password before including it.
$_POST["access"] = "makeupapassword";
require("path/to/the/ajax/file.php");
This is a lot better than maintaining a global variable, session variable, or cookie because some of those are persistent across page loads so you have to make sure to reset the state after checking so users can't get accidental access.
Also, I think it is better than page headers because it can't be sniffed, since it is secured by HTTPS.
You'll probably have to have some kind of cookie-based authentication. In addition, Ignacio has a good point about using POST. This can help prevent JSON hijacking if you have untrusted scripts running on your domain. However, I don't think using POST is strictly necessary unless the outermost JSON type is an array. In your example it is an object.
