I want to call MongoDB directly from browser JavaScript, without wasting time writing a server API with Express.js.
Are there libraries that can do that?
How it could work:
It's trivial to execute a query from the browser; the problem is security. I see one possible way it could be made safe. You write the browser JavaScript and mark such server calls with special tags, as in the code below:
...
// SERVER_QUERY_START
async function getPosts(query_params) {
return db.collection("posts")
.find({ user_id: query_params.user_id })
.toArray()
}
// SERVER_QUERY_END
...
Then, during the client build process, the client JavaScript source code is scanned for such queries, and they are extracted and stored on the server as allowed queries.
When the browser later sends the query function as a string to the server for execution, the server checks whether that function is in the list of allowed queries and, if so, executes it. When executing a query, the server also overrides some parameters in query_params, such as user_id.
Are there libraries that do that? (I know about Meteor.js, but it does something different and is too complicated. GraphQL is also too complicated and bloated. I want a simple approach I can use with React and Svelte.)
You need to use an intermediate service, since browsers and MongoDB have no protocol in common (browsers speak HTTP(S) and WebSocket, while MongoDB uses its own binary wire protocol).
https://www.mongodb.com/cloud/stitch/faq is a MongoDB-provided intermediate service that should fit the bill.
Related
When I've coded in Ruby or Python, I've been able to use libraries like VCR that intercept HTTP requests and record them, so when I'm hitting a third-party API in tests, I can save the response as a fixture instead of manually building huge mock objects to check behaviour against.
It's not perfect, but it has saved a load of time when I've been exploring which requests to make against a third-party API (often wrapping a third-party library) and then writing tests to check that behaviour.
What's the closest thing in JS these days to this?
I'm looking for an open source tool I can require in my test files, so when I run tests where I might call methods on third party APIs, I don't make expensive, slow HTTP requests. I imagine the code might look a bit like:
it('does something I expect it to', () => {
// set up some state I care about
let someVar = someSetupCode()
let library = thirdPartyLib({creds: 'somecreds'})
library.someMethod()
// check state has changed
expect(someVar.value).toBe('what I Expect after calling someMethod')
})
Where here, when I call library.someMethod(), instead of hitting actual servers, I'm checking against the values the server would return, which I've saved previously.
Monkey patching an existing library or function
I see things like fetch-vcr or axios-vcr, but these seem to rely on explicitly reaching into a library to replace, say, a call to fetch with an http-intercepting version that reads a 'cassette' file containing the canned response.
I'm looking for a way to avoid patching third-party code if I can help it, as this is how I understand VCR works in other languages.
Presumably, if there's an HTTP client built into Node somewhere, that would be the place to patch a function; I haven't come across a specific library that does this.
Running an entire HTTP server
Alternatively, I can see libraries like vcr.js or yakbak, which essentially set up an HTTP server that serves JSON blobs you define at various URLs, like serving a saved users.json file at http://localhost:8100/users/.
This is okay, but if I don't need to spin up a whole HTTP server and make actual HTTP requests, that would be wonderful.
Oh, hang on: it looks like sepia from LinkedIn works well, for Node.js at least.
I haven't looked into it too much, but I'd welcome comments if you have been using it.
SoapUI will probably work for you. Despite its name, it also works with REST APIs.
I am stuck on this. I have an Oracle DB with a table containing some locations. I call an HTTP method via a RESTful web service to get my data. Now I want to make this smooth and fetch the data on the server only when something in the Oracle DB changes. I call it like this:
HTTP.call("GET", "my_url", {data: "json"}, function (error, result) {
if (error) {
console.log(error);
} else {
console.log("Webservice success - data");
// parseJson(result);
}
});
This is the server code. I put the data in a collection and then use it on the client. I want this method to be called only when something changes in the DB. I looked at Tracker.autorun, which I think could help with this. But how can I ensure it is called once on the server and not every time? (Something like a body onload function, but on the server.) If I am missing something really obvious, please give me a link where I can read about the life cycle, because I can't find a proper one.
You might want to check this article
REST organizes these requests in predictable ways, using HTTP operation types, or verbs, to construct appropriate responses. Requests originate from the client, and the common HTTP verbs include GET, POST, PUT, and DELETE.
And consider using a WebSocket for such a task.
You can refer to the link above to understand the requirements of a real-time application. What you want is a real-time solution, where the user connects to the server only once and never asks for data; instead, data is pushed automatically when needed.
You may also like to see here.
The link above gives you information on how to establish a robust and responsive relationship between client and server.
I'm a little confused about how to support both server-side queries and client-side filtering with dstore, and am hoping for some guidance. My scenario:
I am communicating with an archive server, so I only have get and query requests, nothing that updates the data.
I want to perform both server-side queries and client-side filtering.
I'd like to cache the results so I'm not accessing the server for every fetch().
If I use a Request, filter() will pass its query parameters to the server, but the data isn't cached and I can't tell how to filter on the client side.
If I use a RequestMemory, filter() is applied to the local cache, and I can't tell how to specify parameters for the server.
All the pieces seem to be there with dstore, I just haven't figured out how to put them all together yet. Thanks for any help.
Looks like I figured it out. There were a couple of issues with how I was using RequestMemory. The first was that I didn't realize RequestMemory invokes fetch() automatically. The second was that I used an object as the queryParams value when it should have been an array.
To meet my requirements I created a new store that extended from Request and Cache, just like RequestMemory, but I did not call fetch() in the postscript() function. Then I could pass parameters to the server:
store.fetch({queryParams: ['key=value']}).then(function(data) {
console.log("fetch", data);
});
I could then 'freeze' the store by setting store.isValidFetchCache = true and subsequently perform client-side filters:
store.filter({type: 'xyz'}).fetch().then(function(data) {
console.log("filter", data);
});
My objective is to have a domain model for writing (so no writable entities exposed to the client) and queries for reading (i.e. a specialized entity model for reading only).
While watching Pluralsight trainings related to Breeze, I could not find any example of the use I need. What was shown was placing queries to entities inside client JavaScript. But I can't really see any reason for doing so, apart from three cases:
1. Frequent: Paging in response to user click
2. Frequent: Sorting in response to user sort selection
3. Rare: filtering based on a dynamic filter built by the end user (rare because these days you'd rather use a single search box that matches any column; no one would normally bother clicking through to create complex boolean expressions).
These three cases cry out for queryability at the client end. Querying with any other set of conditions (e.g. "orders over $100") I consider business logic, and I don't want to put it into JavaScript, as I want to keep the client as thin and as dumb as possible (do you really want to decide whether $100 is a net or gross amount at the client end?). I prefer strongly-typed LINQ queries against the entity model on the server for this. And there is the limitation: controller methods that return IQueryable will not accept arguments. So there is no way to write a parameterized query where the parameter (like an "order value threshold") is handled internally by the server, and only in a way known to the server.
The question: is there a JavaScript library that would give me this functionality? What I really need is a fluent JavaScript querying API which, like Breeze's, is automatically converted to a lambda on the server and applied to my IQueryable, or, even better, one that I can inject into my LINQ query on the server myself.
I'm not entirely sure I understand your question but I do think you've missed out on some of Breeze's capabilities.
Breeze will allow you to expose any client-side query that will be applied on top of an existing arbitrary IQueryable on the server, and you can also pass in parameters that operate in concert with the merged query. For example:
On the server you can have an endpoint that looks like this:
[HttpGet]
public IQueryable<Employee> EmployeesFilteredByCountryAndBirthdate(DateTime birthDate, string country) {
return ContextProvider.Context.Employees.Where(emp => emp.BirthDate >= birthDate && emp.Country == country);
}
That can be queried from the client like this:
var q = EntityQuery.from("EmployeesFilteredByCountryAndBirthdate")
.withParameters({ birthDate: "1/1/1960", country: "USA" });
myEntityManager.executeQuery(q).then(function (data) {
    // ... process the results
});
You can even add a further filter to the query so that it represents the application of both the server-side logic and the client-side filter. Something like:
var q = EntityQuery.from("EmployeesFilteredByCountryAndBirthdate")
.withParameters({ birthDate: "1/1/1960", country: "USA" })
.where("lastName", "startsWith", "S");
In general, I like to think of the client-side query as simply a way of "further" restricting the data returned by whatever endpoint you have defined on the server. The server query can be as precise and focused as you want, whether for security or for business-logic consolidation. The client-side query simply lets you restrict the payload to whatever subset is useful for your use case.
The Breeze DocCode samples in the Breeze zip contain a number of useful examples like this.
I have a JSON web service to return home markers to be displayed on my Google Map.
Essentially, http://example.com calls the web service to find out the locations of all map markers to display, like so:
http://example.com/json/?zipcode=12345
And it returns a JSON string such as:
{"address": "321 Main St, Mountain View, CA, USA", ...}
So on my index.html page, I take that JSON string and place the map markers.
However, what I don't want to have happen is people calling out to my JSON web service directly.
I only want http://example.com/index.html to be able to call my http://example.com/json/ web service ... and not some random dude calling the /json/ directly.
Question: how do I prevent direct calling/access to my http://example.com/json/ web service?
UPDATE:
To give more clarity: http://example.com/index.html calls http://example.com/json/?zipcode=12345, and the JSON service
- returns semi-sensitive data,
- returns a JSON array,
- responds to GET requests,
- the browser making the request has JavaScript enabled
Again, what I don't want is people simply looking at my index.html source code and then calling the JSON service directly.
There are a few good ways to authenticate clients.
By IP address. In Apache, use the Allow / Deny directives.
By HTTP auth: basic or digest. This is nice and standardized, and uses usernames/passwords to authenticate.
By cookie. You'll have to come up with the cookie.
By a custom HTTP header that you invent.
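For the last option, a hypothetical connect/Express-style middleware sketch (the header name and value here are invented, and this is obscurity rather than real security, since any client can read the header name from your page source):

```javascript
// Hypothetical middleware: reject requests that lack the invented
// X-My-App header. Node lower-cases header names in req.headers.
function requireAppHeader(req, res, next) {
  if (req.headers['x-my-app'] === 'expected-value') return next();
  res.statusCode = 403;
  res.end('Forbidden');
}
```

Your index.html's XHR code would then set the header on every request to /json/, while naive direct calls without it get a 403.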
Edit:
I didn't catch at first that your web service is being called by client-side code. It is literally NOT POSSIBLE to prevent people from calling your web service directly if you let client-side JavaScript do it; someone can just read the source code.
Some more specific answers here, but I'd like to make the following general point:
Anything done over AJAX is being loaded by the user's browser. You could make a hacker's life hard if you wanted to, but, ultimately, there is no way of stopping me from getting data that you already freely make available to me. Any service that is publicly available is publicly available, plain and simple.
If you are using Apache you can set allow/deny on locations.
http://www.apachesecurity.net/
or here is a link to the apache docs on the Deny directive
http://httpd.apache.org/docs/2.0/mod/mod_access.html#deny
EDITS (responding to the new info).
The Deny directive also works with environment variables. You can restrict access based on the browser string (not really secure, but it discourages casual browsing), which would still allow XHR calls.
I would suggest the best way to accomplish this is to have a token of some kind that validates the request is a 'good' request. You can do that with a cookie, a session store of some kind, or a parameter (or some combination).
What I would suggest for something like this is to generate a unique url for the service that expires after a short period of time. You could do something like this pretty easily with Memcache. This strategy could also be used to obfuscate the service url (which would not provide any actual security, but would raise the bar for someone wanting to make direct calls).
Lastly, you could also use public-key crypto for this, but that would be very heavy: you would need to generate a new public/private key pair for each request and return the public key to the JS client (here is a link to a JavaScript implementation: http://www.cs.pitt.edu/~kirk/cs1501/notes/rsademo/).
You can add a random number as a flag to determine whether the request is coming from the page you just sent:
1) When generates index.html, add a random number to the JSON request URL:
Old: http://example.com/json/?zipcode=12345
New: http://example.com/json/?zipcode=12345&f=234234234234234234
Add this number to the Session Context as well.
2) The client browser renders index.html and requests the JSON data via the new URL.
3) Your server gets the JSON request and checks the flag number against the session context. If it matches, respond with the data; otherwise, return an error message.
4) Clear the session context at the end of the response, or when a timeout triggers.
Accept only POST requests to the JSON-yielding URL. That won't prevent determined people from getting to it, but it will prevent casual browsing.
I know this is old, but for anyone getting here later, this is the easiest way to do it. You need to protect the AJAX subpage with a password that you set on the container page before calling the include.
The easiest way to do this is to require HTTPS on the AJAX call and pass a POST variable. HTTPS ensures the password is always encrypted in transit.
So on the AJAX sub-page, do something like:
if ($_POST["access"] == "makeupapassword")
{
...
}
else
{
echo "You can't access this directly";
}
When you call the AJAX endpoint, make sure to include the POST variable and password in your payload. Since it is sent over HTTPS it will be encrypted, and since it is random (hopefully) nobody will be able to guess it.
If you want to include or require the PHP directly on another page, just set the POST variable to the password before including it.
$_POST["access"] = "makeupapassword";
require("path/to/the/ajax/file.php");
This is a lot better than maintaining a global variable, session variable, or cookie, because some of those persist across page loads, so you have to remember to reset the state after checking so users can't get accidental access.
Also, I think it is better than page headers because it can't be sniffed, since it is secured by HTTPS.
You'll probably have to have some kind of cookie-based authentication. In addition, Ignacio has a good point about using POST: this can help prevent JSON hijacking if you have untrusted scripts running on your domain. However, I don't think using POST is strictly necessary unless the outermost JSON type is an array; in your example it is an object.