Calling HTTP method when something in DB changes, Meteor.js - javascript

I am stuck at this. I have an OracleDB where there is a table with some locations. I am calling an HTTP method via RESTful webservice to get my data. Now I want to make this smooth and use this method to get my data on the server only when something in OracleDB changes. I call it something like this:
HTTP.call("GET", "my_url", {data: "json"}, function (error, result) {
    if (error) {
        console.log(error);
    } else {
        console.log("Webservice success - data");
        // parseJson(result);
    }
});
This is the server code. I put the data in a collection and then use it on the client. I want this method to be called only when something changes in the DB. I checked the Tracker.autorun function, which I think can help me with this. But how can I make sure it is called once on the server and not every time? (Something like a bodyOnLoad function, but on the server.) If I am missing something really obvious, please give me a link where I can read about the life cycle, because I can't really find a proper one.

You might want to check this article
REST organizes these requests in predictable ways, using HTTP operation types, or verbs, to construct appropriate responses. Requests originate from the client, and the common HTTP verbs include GET, POST, PUT, and DELETE.
And consider using a WebSocket for such a task.

You can refer to the link above to understand the requirements of a real-time application. What you want is a real-time solution, where the user connects to the server only once and never has to ask for data; instead, data is pushed automatically when it changes.
You may also like to look here.
The link above explains how to establish a robust and responsive relationship between client and server.

Related

How to call MongoDB from Browser JavaScript?

I want to call MongoDB directly from browser JavaScript, without wasting time writing a server API with Express.js.
Are there libraries that can do that?
How it may work:
It's trivial to execute query from the Browser, but the problem is the security. I see one possible way how it could be made safe. You write the Browser JavaScript and mark such server calls with special tags like in code below:
...
// SERVER_QUERY_START
async function getPosts(query_params) {
    return db.collection("posts")
        .find({ user_id: query_params.user_id })
        .toArray()
}
// SERVER_QUERY_END
...
Then during the Client Build process Client JavaScript source code is scanned for such queries and they are extracted and stored somewhere on the server, as allowed queries.
Then when the Browser sends the query function as string to Server for execution, the server validates if this query function is in the list of allowed queries, and if so it executes it. Also, when the Server executes query, it overrides some parameters in the query_params like user_id.
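The validation and override step described above can be sketched in a few lines; the Set contents, function names, and session handling here are illustrative, not a real library:

```javascript
// Build step: extracted query sources are stored as the allow-list.
const allowedQueries = new Set([
  'db.collection("posts").find({ user_id: query_params.user_id }).toArray()'
]);

// Runtime: only accept a query string that is on the allow-list, and
// override the client-supplied user_id with the trusted session value.
function validateQuery(querySource, queryParams, sessionUserId) {
  if (!allowedQueries.has(querySource)) {
    throw new Error('query not allowed');
  }
  return Object.assign({}, queryParams, { user_id: sessionUserId });
}
```

Any query string not produced by the build scan is rejected outright, and the client can never act as another user, because `user_id` always comes from the session.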
Are there libraries that would do that? (I know about Meteor.js, but it does something different and is too complicated. GraphQL is also too complicated and bloated. I want a simple way I can use with React.js and Svelte.)
You need to use an intermediate service, since there is no common protocol between browsers and MongoDB (browsers speak HTTP and WebSocket, while MongoDB uses its own binary wire protocol).
https://www.mongodb.com/cloud/stitch/faq is a MongoDB-provided intermediate service that should fit the bill.

Dojo dstore: both server-side queries and client-side filtering

I'm a little confused with how to support both server-side queries and client-side filtering with dstore, and am hoping for some guidance. My scenario:
I am communicating with an archive server, so I only have get and query requests, nothing that updates the data.
I want to perform both server-side queries and client-side filtering.
I'd like to cache the results so I'm not accessing the server for every fetch().
If I use a Request, filter() will pass its query parameters to the server, but the data isn't cached and I can't tell how to filter on the client side.
If I use a RequestMemory, filter() is applied to the local cache, and I can't tell how to specify parameters for the server.
All the pieces seem to be there with dstore, I just haven't figured out how to put them all together yet. Thanks for any help.
Looks like I figured it out. There were a couple of issues with how I was using RequestMemory. The first was that I didn't realize RequestMemory invokes fetch() automatically. The second was that I used an object as the queryParams value when it should have been an array.
To meet my requirements I created a new store that extended from Request and Cache, just like RequestMemory, but I did not call fetch() in the postscript() function. Then I could pass parameters to the server:
store.fetch({queryParams: ['key=value']}).then(function (data) {
    console.log("fetch", data);
});
I could then 'freeze' the store by setting store.isValidFetchCache = true and subsequently perform client-side filters:
store.filter({type: 'xyz'}).fetch().then(function (data) {
    console.log("filter", data);
});

POST manipulation, Tamper Data and AJAX security issues

Frequently when I work on AJAX applications, I'll pass around parameters via POST. Certain parts of the application might send the same number of parameters or the same set of data, but depending on a custom parameter I pass, it may do something completely different (such as delete instead of insert or update). When sending data, I'll usually do something like this:
$.post("somepage.php", {action: "complete", somedata: data, moredata: anotherdata}, function (data, status) {
    if (status == "success") {
        //do something
    }
});
On another part of the application, I might have similar code but instead setting the action property to deny or something application specific that will instead trigger code to delete or move data on the server side.
I've heard about tools that let you modify POST requests and their data, but I've only used one such tool, Tamper Data for Firefox. I know the chances are slim that someone will modify the data of a POST request, and slimmer still that they will change a key property to make the application do something different on the backend (such as changing action: "complete" to action: "deny"), but I'm sure it happens in day-to-day attacks on web applications. Can anyone suggest good ways to avoid this kind of tampering? I've thought of a few, mostly consisting of checking whether the action is valid for the event being triggered and validating that along with everything else, but I can see that being an extra 100 lines of code for each part of the application that needs this kind of protection.
You need to authorize clients making the AJAX call just like you would with normal requests. As long as the user has the rights to do what he is trying to do, there should be no problem.
You should also pass along an authentication token that you store in the users session, to protect against CSRF.
Your server can't trust anything it receives from the client. You can start establishing trust using sessions and authentication (make sure the user is who she says she is), SSL/TLS (prevent tampering from the network) and XSRF protection (make sure the action was carried out from html that you generated) as well as care to prevent XSS injection (make sure you control the way your html is generated). All these things can be handled by a server-side framework of good quality, but there are still many ways to mess up. So you should probably take steps to make sure the user can't do anything overly destructive for either party.
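The core point of both answers, that the server must authorize the action regardless of which `action` value the client sends, can be sketched like this (the permission table and names are made up for illustration):

```javascript
// Server-side authorization: never trust the client's "action" field;
// check the logged-in user's rights for that action instead.
const permissions = {
  alice: ['complete'],
  admin: ['complete', 'deny']
};

function isAuthorized(user, action) {
  return (permissions[user] || []).includes(action);
}
```

With a check like this in front of every handler, changing `action: "complete"` to `action: "deny"` in Tamper Data accomplishes nothing unless the user already had that right.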

Are there any Backbone.js tutorials that teach ".sync" with the server?

I read many Backbone.js tutorials, but most of them deal with static objects.
Of course, I have data on the server. I want a tutorial that shows how backbone.js can communicate with the server to fetch data, post data, etc.
This is .sync, right? I read the Backbone.js documentation, but I'm still fuzzy on how to use this feature.
Or can someone show me an example?
According to: http://documentcloud.github.com/backbone/#Sync
Backbone.sync is the function that Backbone calls every time it
attempts to read or save a model to the server.
But when? Where do I put the function? I don't know how to use it, and the documentation doesn't give any examples. When does the data get loaded into my models? I get to define when...right?
You never really have to look at .sync unless you plan to overwrite it. For normal use, you can simply call model.save() whenever you want, and that will execute a POST or PUT (depending on whether the record already exists). If you want to get data from your backend, use collection.fetch().
You'll of course also need to specify a URL; do so through your collection's url attribute.
You can override Backbone's native sync functionality by overriding Backbone.sync:
Backbone.sync = function () {
    //Your custom implementation here
};
After that this function is called whenever you call a backbone function like .save() on models or .fetch() on collections. You do not have to care about data transport anymore.
I would suggest taking a look at Backbone's source to see how the default sync function is implemented. Then create your own, or adapt your server to support the native function.
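As a reading aid for that source, the default sync essentially maps Backbone's CRUD method names onto HTTP verbs, roughly like the sketch below (a simplification, not Backbone's actual code):

```javascript
// The verb mapping Backbone.sync applies: model.save() uses 'create' or
// 'update' depending on isNew(), and collection.fetch() uses 'read'.
const methodMap = {
  create: 'POST',
  update: 'PUT',
  read: 'GET',
  delete: 'DELETE'
};

function verbFor(method) {
  return methodMap[method];
}
```

If your server already handles these verbs RESTfully at the collection's url, the native sync works without modification.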
They are not free, but the following screencasts both have a piece on backend work and how to send data to and get data from Backbone.
Tekpub has a 9-part screencast about ASP.NET MVC3, with the whole 6th part about using Backbone to write an admin module to manage productions. It shows all about handling routing in MVC3 and sending and receiving data.
Peepcode
http://peepcode.com/products/backbone-js about basic backbone stuff
http://peepcode.com/products/backbone-ii about interactivity
http://peepcode.com/products/backbone-iii about persistence (it's this third one you will need for server connection information).

How to prevent direct access to my JSON service?

I have a JSON web service to return home markers to be displayed on my Google Map.
Essentially, http://example.com calls the web service to find out the location of all map markers to display like so:
http://example.com/json/?zipcode=12345
And it returns a JSON string such as:
{"address": "321 Main St, Mountain View, CA, USA", ...}
So on my index.html page, I take that JSON string and place the map markers.
However, what I don't want to have happen is people calling out to my JSON web service directly.
I only want http://example.com/index.html to be able to call my http://example.com/json/ web service ... and not some random dude calling the /json/ directly.
Question: how do I prevent direct calling/access to my http://example.com/json/ web service?
UPDATE:
To give more clarity: http://example.com/index.html calls http://example.com/json/?zipcode=12345 ... and the JSON service
- returns semi-sensitive data,
- returns a JSON array,
- responds to GET requests,
- the browser making the request has JavaScript enabled
Again, what I don't want to happen is people simply looking at my index.html source code and then calling the JSON service directly.
There are a few good ways to authenticate clients.
By IP address. In Apache, use the Allow / Deny directives.
By HTTP auth: basic or digest. This is nice and standardized, and uses usernames/passwords to authenticate.
By cookie. You'll have to come up with the cookie.
By a custom HTTP header that you invent.
Edit:
I didn't catch at first that your web service is being called by client-side code. It is literally NOT POSSIBLE to prevent people from calling your web service directly, if you let client-side Javascript do it. Someone could just read the source code.
Some more specific answers here, but I'd like to make the following general point:
Anything done over AJAX is being loaded by the user's browser. You could make a hacker's life hard if you wanted to, but, ultimately, there is no way of stopping me from getting data that you already freely make available to me. Any service that is publicly available is publicly available, plain and simple.
If you are using Apache you can set allow/deny on locations.
http://www.apachesecurity.net/
or here is a link to the apache docs on the Deny directive
http://httpd.apache.org/docs/2.0/mod/mod_access.html#deny
EDITS (responding to the new info).
The Deny directive also works with environment variables. You can restrict access based on browser string (not really secure, but discourages casual browsing) which would still allow XHR calls.
I would suggest the best way to accomplish this is to have a token of some kind that validates the request is a 'good' request. You can do that with a cookie, a session store of some kind, or a parameter (or some combination).
What I would suggest for something like this is to generate a unique url for the service that expires after a short period of time. You could do something like this pretty easily with Memcache. This strategy could also be used to obfuscate the service url (which would not provide any actual security, but would raise the bar for someone wanting to make direct calls).
Lastly, you could also use public key crypto to do this, but that would be very heavy. You would need to generate a new pub/priv key pair for each request and return the pubkey to the js client (here is a link to an implementation in javascript) http://www.cs.pitt.edu/~kirk/cs1501/notes/rsademo/
You can add a random number as a flag to determine whether the request is coming from the page you just sent:
1) When generating index.html, add a random number to the JSON request URL:
Old: http://example.com/json/?zipcode=12345
New: http://example.com/json/?zipcode=12345&f=234234234234234234
Add this number to the Session Context as well.
2) The client browser renders index.html and requests the JSON data via the new URL.
3) Your server gets the JSON request and checks the flag number against the Session Context. If it matches, respond with data. Otherwise, return an error message.
4) Clear the Session Context at the end of the response, or when a timeout is triggered.
Accept only POST requests to the JSON-yielding URL. That won't prevent determined people from getting to it, but it will prevent casual browsing.
I know this is old but for anyone getting here later this is the easiest way to do this. You need to protect the AJAX subpage with a password that you can set on the container page before calling the include.
The easiest way to do this is to require HTTPS on the AJAX call and pass a POST variable. HTTPS + POST ensures the password is always encrypted.
So on the AJAX/sub-page do something like
if ($_POST["access"] == "makeupapassword") {
    ...
} else {
    echo "You can't access this directly";
}
When you call the AJAX make sure to include the POST variable and password in your payload. Since it is in POST it will be encrypted, and since it is random (hopefully) nobody will be able to guess it.
If you want to include or require the PHP directly on another page, just set the POST variable to the password before including it.
$_POST["access"] = "makeupapassword";
require("path/to/the/ajax/file.php");
This is a lot better than maintaining a global variable, session variable, or cookie, because some of those persist across page loads, so you would have to reset the state after checking to prevent users from getting accidental access.
Also, I think it is better than page headers because it can't be sniffed, since it is secured by HTTPS.
You'll probably have to have some kind of cookie-based authentication. In addition, Ignacio has a good point about using POST. This can help prevent JSON hijacking if you have untrusted scripts running on your domain. However, I don't think using POST is strictly necessary unless the outermost JSON type is an array. In your example it is an object.
