I have created a MySQL database at a host IP, but now wish to use GraphQL to make queries easier from the front-end. I know how to set up a GraphQL server from scratch, but I'm unsure how to access my pre-existing tables at the back-end, and where to define the schema to use them. How would I connect to the database from a GraphQL server?
Prisma was an option I considered, but the service doesn't allow connections to MySQL databases that have pre-existing data.
Thanks for the help!
Prisma and other ORMs are a good option if you want a relatively easy/cheap way to expose CRUD operations for your entire database.
If you only need to expose specific aspects of your data or just need to start iterating quickly, you can define your GraphQL API schema at the server level and write resolvers that connect to your database as needed. Your schema does not need to reflect your entire database, but only the data you'd like to expose to clients.
In my experience with GraphQL APIs, I've found that manually writing query schemas and creating resolvers as needed for servicing the client is both faster and easier to maintain for smaller applications.
You can use a SQL client like https://github.com/mysqljs/mysql to interface with your database. The resolvers you write for your schema would then query your database for any data needed to serve the client's request, even if it spans multiple tables.
The GraphQL learning site graphql.org has a good description of this process: https://graphql.org/learn/execution/#root-fields-resolvers
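To make this concrete, here is a minimal sketch of that approach, assuming Express with express-graphql and the mysqljs/mysql client; the users table, its columns, and the connection details are hypothetical:

const express = require('express');
const { graphqlHTTP } = require('express-graphql');
const { buildSchema } = require('graphql');
const mysql = require('mysql');

// Connection pool to the pre-existing database.
const pool = mysql.createPool({
  host: 'your-host-ip',
  user: 'app_user',
  password: 'secret',
  database: 'mydb',
});

// Expose only the data clients need, not the whole database.
const schema = buildSchema(`
  type User { id: ID! name: String }
  type Query { user(id: ID!): User }
`);

// Resolvers query the existing tables on demand.
const root = {
  user: ({ id }) =>
    new Promise((resolve, reject) => {
      pool.query('SELECT id, name FROM users WHERE id = ?', [id],
        (err, rows) => (err ? reject(err) : resolve(rows[0])));
    }),
};

express()
  .use('/graphql', graphqlHTTP({ schema, rootValue: root, graphiql: true }))
  .listen(4000);

Each resolver is free to run whatever SQL it needs, including joins across several tables, as long as it returns objects matching the schema.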
You can try a new open source tool called SwitchQL (github.com/SwitchQL/SwitchQL). I've been working on the project for a while.
You pass it your connection string, and it returns everything you need to run a GraphQL server on top of an existing database. It also generates Apollo-compliant client mutations and queries.
We only support Postgres at the moment. If you end up trying it out, please let me know what you think!
I have a .csv that I want to use as a database and run SQL queries on from the browser. (Ideally I want to upload the .csv first, but it could also be pre-stored.) I thought this could be done with Django and a Postgres database. Are there simpler ways of accomplishing this?
Is WebSQL an option? Is there something else I haven't thought of?
Ideally I would want to avoid SQL injection. I tried searching on Stack Overflow and found this (Display SQL query results in php), but it's not what I'm looking for.
Basically, the desired functionality is: when someone comes to the webpage, they can run SQL queries on the data in the .csv. They type queries into an HTML form, submit it, and the results are shown on the same page along with the actual query.
Use an in-browser library such as Papa Parse to load the data from the .csv file. Then, again with an in-browser library, but this time one for SQLite, create an empty in-memory database, populate it with the rows parsed from the .csv, and run the user's queries against that database with the same library.
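A minimal sketch of this idea, assuming the Papa Parse and sql.js (SQLite compiled to WebAssembly) libraries are loaded in the page; fileInput and queryInput are hypothetical DOM elements, and the csv_data columns are made up:

initSqlJs({ locateFile: f => `https://sql.js.org/dist/${f}` }).then(SQL => {
  const db = new SQL.Database(); // empty in-memory SQLite database

  // Parse the uploaded .csv (e.g. from an <input type="file">).
  Papa.parse(fileInput.files[0], {
    header: true,
    skipEmptyLines: true,
    complete: ({ data }) => {
      db.run('CREATE TABLE csv_data (name TEXT, age INTEGER)');
      for (const row of data) {
        // Bound parameters, so no string concatenation is needed.
        db.run('INSERT INTO csv_data VALUES (?, ?)', [row.name, row.age]);
      }
      // Run whatever the visitor typed into the form; results come back
      // as [{ columns: [...], values: [[...], ...] }] for rendering.
      const results = db.exec(queryInput.value);
      console.log(results);
    },
  });
});

Since everything runs in the visitor's own browser against their own copy of the data, SQL injection is not a server-side risk in this setup.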
It appears that you are asking whether you can trigger/run SQL queries against some SQL database directly from a UI. While this is theoretically possible, in practice it is a very bad idea: to do so you would have to open one or more database ports to the outside world, which would expose the database to denial-of-service (DoS) and other types of malicious attacks.
The proper way to proceed is to place your database behind the backend of your web application, then expose one or more endpoints in that backend which in turn talk privately to the database. Finally, let your UI hit the backend endpoints to run whatever SQL logic you want.
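A hedged sketch of such an endpoint, using Express and node-postgres (pg); the table, endpoint, and connection details are placeholders:

const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

app.get('/api/rows', async (req, res) => {
  // Parameterized query: user input is passed as a bound value, never
  // concatenated into the SQL string, which prevents injection.
  const { rows } = await pool.query(
    'SELECT * FROM csv_data WHERE name = $1',
    [req.query.name]
  );
  res.json(rows);
});

app.listen(3000);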
I'm building a simple task application to learn how to use PouchDB / CouchDB: it would have authentication, and each user would create their own tasks.
My question is about how to store each user's information in the database. Should I create a database for each user with their tasks? Or is there a way to put the tasks of all users into a single database called "Tasks" and somehow filter the synchronization so that PouchDB does not sync the whole server database (including other users' tasks)?
(I have read the PouchDB documentation a few times and have not been able to settle this; if it is documented, please point me to where.)
You can use both approaches to fulfill your use case:
Database per user
A database per user is the db-per-user pattern in CouchDB. CouchDB can handle the database creation/deletion automatically each time a user is created/deleted. In this case each PouchDB client replicates the complete database of its user.
You can enable it in the server config.
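For reference, a sketch of the relevant settings in CouchDB's local.ini (the couch_peruser options are available in CouchDB 2.1+):

[couch_peruser]
; automatically create userdb-<hex(username)> when a user is added to _users
enable = true
; also drop the database when the user is deleted
delete_dbs = true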
This is the proper approach if each user's data is isolated and you don't need to share information between users. Be aware that you can run into scalability issues if you need to sync many user databases into another central one in CouchDB. See this post.
Single database shared by all users
You need to use the filtered-replication feature in CouchDB/PouchDB. This post explains how to use it.
With this approach you can replicate a subset of the CouchDB database to PouchDB.
Since there is a single database, it is easier to share info between users.
But this approach has a performance problem: the filtering process is very inefficient, as it has to scan the whole dataset, including deleted documents, to determine the set of documents to include in the replication. This filtering is done in an external process on the CouchDB server, which adds even more cost.
If you need the filtering approach, it is better to use a Mango selector, as it is evaluated in the main CouchDB process and can be indexed. See options.selector in the PouchDB replication filtering options.
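A small sketch of selector-based replication in PouchDB (the owner field is a hypothetical attribute you would store on each task document):

// Assumes PouchDB is loaded; selector-based replication needs CouchDB 2.x.
const local = new PouchDB('tasks');
const remote = new PouchDB('https://couch.example.com/tasks');

// Pull only this user's documents; the selector is evaluated natively
// by CouchDB, avoiding the external-process cost of JS filter functions.
local.replicate.from(remote, {
  live: true,
  retry: true,
  selector: { owner: 'alice' },
});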
Conclusion
Which is better? It depends on your use case. In any case, you should consider the scalability limits of both:
In the case of filtered replication, you will face issues as the number of documents grows, since the complete dataset has to be filtered; Mango selectors are reported to be about 10x faster than JS filter functions.
In the case of db-per-user, you will have issues if you need to consolidate the different user databases into a single one as the number of users grows.
Both patterns are valid. The only difference is that in order to use filtered replication, you need to give clients access to the main database.
Since PouchDB runs as JavaScript in the browser, it is easy to extract the credentials and then access the main database directly. This would give users the ability to see everyone's data.
A more secure approach is the database-per-user pattern, where each database is protected by its owner's own credentials.
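For illustration, syncing against a per-user database might look like this (the database name follows the couch_peruser convention of userdb-<hex of the username>; the URL and credentials are placeholders):

// Assumes PouchDB is loaded in the page.
const local = new PouchDB('tasks');
const remote = new PouchDB(
  'https://couch.example.com/userdb-616c696365', // "alice", hex-encoded
  { auth: { username: 'alice', password: 'secret' } }
);

// Two-way, continuous sync of this user's data only.
local.sync(remote, { live: true, retry: true });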
I'm using passport.js and MongoDB for user login and API authentication. However, whenever I deploy my Node server to another AWS instance, I need to go through the signup process again, log in, and get a new token.
I know I can see the saved users and their JWT tokens in MongoDB. Is there any way I can copy the tokens and, when initializing the new database, save the same username/JWT pairs by default, so I can use the same token strings (rather than the passwords, though that would be easier) to pass the passport authentication test?
Thanks!
It sounds like your deployment process involves tearing everything (application and MongoDB) down and rebuilding from zero, possibly with some seed data but without any of the "live" data from the old AWS instance. Here are a couple of ideas:
copy all the data from the old MongoDB instance to the new one as part of your deployment process. This will ensure that the users are present on the new instance and (should) ensure that users don't have to go through the signup process again. Maybe you can copy the data files from one to the other, or use MongoDB's own dump/restore tools (see the sketch after this answer).
set up your environment with two servers: a MongoDB server and an application server. This way you can tear down your application and create a new AWS instance just for the application without touching your MongoDB server. Just update the MongoDB connection configuration in your new application instance to point to the same MongoDB server you've been using.
The first option is more suitable if you have a very small application without too much data. If your database gets too large, you're going to experience long periods of downtime during deployment as you take the application down, copy the data out of the old Mongo instance, copy the data into the new Mongo instance, and bring the application back up.
The second option is probably the better one, although it does require some knowledge of networking and securing MongoDB so that only your application has access to your data.
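If you go with the first option, MongoDB ships dump/restore tools that can do the copy; a sketch (host names and the database name are placeholders):

# Dump the old instance's database to a local directory...
mongodump --uri="mongodb://old-instance:27017/myapp" --out=./dump
# ...then load it into the new instance.
mongorestore --uri="mongodb://new-instance:27017" ./dump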
I want to create a website that queries and inserts data to and from my configured SQL database. I have not been able to write any code yet because I can't find any reference or documentation for doing this in JavaScript.
If you are looking for a way to query SQL Database from the browser in JavaScript, I strongly recommend against it. Everyone browsing your website could find your SQL Database's connection info, and your database would be exposed to the public.
I recommend building a backend application that queries the data from your SQL Database and provides it to your front-end website.
For more info about how to use the backend languages to connect to SQL Database, please refer to https://azure.microsoft.com/en-us/documentation/articles/sql-database-develop-nodejs-simple/.
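As a starting point, a minimal backend query with the mssql package for Node.js might look like this (server, credentials, and table names are placeholders):

const sql = require('mssql');

async function getProducts(category) {
  const pool = await sql.connect({
    server: 'yourserver.database.windows.net',
    database: 'mydb',
    user: 'app_user',
    password: 'secret',
    options: { encrypt: true }, // required for Azure SQL Database
  });
  // input() binds the value as a typed parameter -- no injection risk.
  const result = await pool.request()
    .input('category', sql.NVarChar, category)
    .query('SELECT * FROM Products WHERE Category = @category');
  return result.recordset;
}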
I'm trying to create a simple in-browser web app to display the contents of a given RethinkDB table with some nice formatting. I'm having trouble finding a way to actually connect to RethinkDB without having to use Node.js. All I want to do is get the data out and then run it through some styling/layout code. Node + dependencies feel like overkill for a tiny browser-only app.
Unfortunately, you're going to need a server. It might be node.js or it might be another language, but you'll need a server.
RethinkDB is not Firebase; it can't be queried from your browser. If you absolutely need browser-side querying and can't have a server, you should use Firebase.
If you want to use RethinkDB, you can just have a very thin server that forwards your queries to RethinkDB. This can be done over HTTP or over WebSockets.
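A sketch of such a thin server over HTTP, using Express and the official rethinkdb driver (the tasks table is a placeholder); the point is that the server decides which queries are allowed, and the browser never talks to RethinkDB directly:

const express = require('express');
const r = require('rethinkdb');

const app = express();

app.get('/tasks', async (req, res) => {
  const conn = await r.connect({ host: 'localhost', port: 28015 });
  const cursor = await r.table('tasks').run(conn);
  res.json(await cursor.toArray());
  conn.close();
});

app.listen(3000);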
Why
Ultimately, the reason you don't want to query your database from the browser is security. RethinkDB has no users or read-only accounts, which means that if your database is accessible from the browser, anyone can come and delete all your databases (including your system tables) with a simple query.
For example:
// Drops every table in the system database, destroying the cluster's metadata.
r.db('rethinkdb').tableList().forEach(function (tableName) {
  return r.db('rethinkdb').tableDrop(tableName);
});
And now your whole database is gone :).
Keep in mind that this is something the RethinkDB team is aware of and working on.
https://github.com/rethinkdb/rethinkdb/issues/218