How to add a database system to a WebGL application - javascript

I'm currently working on a WebGL sketch-drawing project where users can draw arbitrary objects on an HTML canvas. The JavaScript libraries and files are all stored on a Node.js server, which is currently started up locally every time the software has to be run. The functionality for saving all of the drawn objects on the page has essentially been implemented: the drawings are written out as JSON objects. The next step is to persist these objects to a database where they can be mapped to a user id. I will also need to implement a login system where users can log in and select previously drawn objects from the database to edit.
If this were just a normal website, I would probably just use express.js or something similar, but as the views are rendered entirely in WebGL, I don't think such frameworks would work well with this construct.
Given that I currently just need to create a login system and implement a feature for persisting the JSON objects to the DB, are there any frameworks or existing software that accommodate these needs?

With regard to authentication, I would recommend taking a look at OAuth and using existing identity providers (e.g. Google, Facebook, etc.). You can still retain profiles for your users, but you don't have to deal with all of the intricacies of authentication, authorization, security, etc.
There are a ton of JavaScript libraries out there for handling OAuth/OAuth2 interactions. Some even have built-in identity providers. Here are a couple of searches that return all sorts of potentially useful libraries:
https://www.npmjs.com/search?q=oauth2
https://www.google.com/search?q=javascript%20oauth2%20library
As for a database, you have a lot of options for storing raw JSON. Some that I've used recently for my JavaScript projects are PostgreSQL, MongoDB, and ArangoDB. You can find well written JS libraries for interacting with any of those.
Another thing to think about is whether you want to install the database on your own server or use a hosted solution such as RDS or DynamoDB (available from Amazon).
Regardless of the exact authentication and persistence options you choose, you will likely use a pattern similar to this:
1. Your Node.js server is deployed somewhere accessible on the internet, where it exposes the endpoints for your WebGL application, authentication, and saving/loading sketches.
2. When the user navigates to the WebGL application endpoint on your Node.js server, they are required to authenticate (which will utilize the authentication endpoints on the Node.js server).
3. When the user requests a "save" in your WebGL application, you submit the JSON representation of their sketch (and their authorization token) to the "save" endpoint of your Node.js server.
4. This "save" endpoint validates the user's authorization token and inserts the JSON into whatever database you've chosen.
5. The "load" endpoint works just like the "save" endpoint but in reverse. The user asks for a specific sketch. The identity of the sketch (id, name, etc.) is sent along with their authorization token. The "load" endpoint on your Node.js server validates the authorization token and then queries the database for the sketch.
The key pattern to notice here is that users never send requests to your database directly. Your WebGL application should communicate with your Node.js server, and the server alone should communicate with your database. This is essential for controlling security, performance, and future updates.
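To make that pattern concrete, here is a minimal sketch of the "save" and "load" endpoints using Express and pg-promise (the PostgreSQL library linked below). The table layout, connection string, and the authenticate helper are assumptions for illustration, not a definitive implementation:

const express = require('express');
const pgp = require('pg-promise')();

// Hypothetical connection string and schema:
//   CREATE TABLE sketches (id serial PRIMARY KEY, user_id text, data jsonb);
const db = pgp('postgres://user:pass@localhost:5432/sketchdb');

const app = express();
app.use(express.json());

// Hypothetical helper: validate the bearer token from the Authorization
// header and return the user's id, or null if the token is invalid.
async function authenticate(req) { /* ... */ }

// "save": validate the token, then insert the raw JSON sketch.
app.post('/sketches', async (req, res) => {
  const userId = await authenticate(req);
  if (!userId) return res.status(401).end();
  const { id } = await db.one(
    'INSERT INTO sketches(user_id, data) VALUES($1, $2) RETURNING id',
    [userId, req.body]
  );
  res.json({ id });
});

// "load": same validation, then query the sketch back out.
app.get('/sketches/:id', async (req, res) => {
  const userId = await authenticate(req);
  if (!userId) return res.status(401).end();
  const row = await db.oneOrNone(
    'SELECT data FROM sketches WHERE id = $1 AND user_id = $2',
    [req.params.id, userId]
  );
  if (!row) return res.status(404).end();
  res.json(row.data);
});

app.listen(3000);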
Hopefully this gives you an idea of where to go next.
EDIT based on comments:
I searched around for a Node.js + PostgreSQL guide but didn't find anything I would feel comfortable recommending. I did find this JS library though, which I would check out:
https://github.com/vitaly-t/pg-promise
For MongoDB I would just use their official guide:
https://docs.mongodb.org/getting-started/node/introduction/

Related

What is the best way to connect two applications using APIs? (An e-commerce and a chatbot)

I have two applications set up. One is an e-commerce platform (TrayCommerce) that has its own API (OAuth), from which I can get order, client, and product information. The other is a chatbot (Take Blip).
My goal is to make the chatbot retrieve information from the e-commerce API so I can send it to the final user.
I thought of two ways of doing it:
Hosting JavaScript code inside the bot, so I can call the API whenever the user requests data. However, I don't know how to implement the authentication flow with this approach, nor how I would, in the future, set up a system to receive notifications from the API each time the data is updated, since I can only host one JS file per action.
Creating a Node.js API, hosted by a third party, that returns the information I want, in a formatted way, to the chatbot. I don't know if this is over-engineering, because I already have an API from the e-commerce platform.
I am sorry if this is a dumb question; I am new to web development, but any information would be valuable in helping me choose a workflow for this integration.
To be able to answer, the right question to ask yourself concerns the sensitivity of the data inside the e-commerce platform, and the power granted to the token generated by the auth implementation.
Typically, a chatbot (assuming a web one) is a piece of JavaScript held in the client (browser). This code is perfectly readable by the user, so you have to assume the generated token could be used to perform requests you never intended the user to perform.
So, as a simple answer:
If, and only if, the implemented OAuth mechanism lets you limit the scope of authorization to the customer, then you can have the customer authenticate directly with TrayCommerce using the appropriate scopes (and use their token to call the API). Said differently: if TrayCommerce lets you register your chatbot as a "client" (an OAuth keyword) and run the appropriate three-party authorization flows, granting customers only something like "orders:view:self", it's OK.
If the TrayCommerce API is more like a "management API", with auth implemented in such a way that you (yourself, not the customer) have to authenticate against it, then this auth mechanism is not suitable for your use case. You would then have to build an API like the one you described, acting as a proxy to TrayCommerce. With considerations (see below).
In the case of you making a "proxy API" in front of TrayCommerce, you are basically hiding the TrayCommerce authentication on your server side and shifting that responsibility from TrayCommerce to yourself. In such a case, you have to implement your own authentication (and authorization) mechanism on this API, so as not to expose TrayCommerce data to the world.
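For illustration, here is a minimal sketch of such a proxy in Express. The TrayCommerce base URL, the endpoint path, and the requireUser middleware are assumptions (Node 18+ provides a global fetch):

const express = require('express');
const app = express();

// The TrayCommerce token is obtained server-side and never leaves the server.
const TRAY_TOKEN = process.env.TRAY_TOKEN;

// Hypothetical middleware: authenticate the chatbot user with your own
// mechanism, responding with 401 on failure.
function requireUser(req, res, next) { /* ... */ next(); }

// Expose only the narrow slice of data the chatbot actually needs.
app.get('/orders/:id', requireUser, async (req, res) => {
  const upstream = await fetch(
    `https://api.traycommerce.example/orders/${req.params.id}`, // hypothetical URL
    { headers: { Authorization: `Bearer ${TRAY_TOKEN}` } }
  );
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000);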

Use a separate server for a centralized users database

I am using Meteor 1.10 + mongodb.
I have multiple mobile chat & information applications.
These mobile applications are natively developed using Meteor DDP libraries.
But I have the same user base for all the apps.
Now I want to create a separate Meteor instance on a separate individual server to keep the user base centralized.
I need suggestions on how I can achieve this architecture with Meteor, keeping reactivity and performance in mind.
For a centralized user base with full reactive functionality you need an Authorization Server which will be used by your apps (= Resource Servers) in order to allow authenticated/authorized requests. This is basically the OAuth2 three-tier workflow.
See:
https://www.rfc-editor.org/rfc/rfc6749
https://www.oauth.com/
Login Service
You will also have to write your own login handler (Meteor.loginWithMyCustomAuthServer) in order to avoid DDP.connect, because otherwise you would have to manage two user bases (one for the app itself and one for the Authorization Server), and this will get really messy.
This login handler then retrieves the user account data after the OAuth2 authorization request has succeeded, which makes the Authorization Server's user base the single point of truth for any of your registered apps (read up on clientId and secret in the OAuth2 workflow).
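A minimal sketch of such a handler on an app's server side (the myCustomAuthToken option name and the verifyWithAuthServer call are assumptions about your Auth Server's API):

import { Meteor } from 'meteor/meteor';
import { Accounts } from 'meteor/accounts-base';

Accounts.registerLoginHandler('myCustomAuthServer', (options) => {
  // Not our login attempt; let other handlers try.
  if (!options.myCustomAuthToken) return undefined;

  // Hypothetical: validate the token against the Authorization Server
  // and receive the canonical user document back.
  const remoteUser = verifyWithAuthServer(options.myCustomAuthToken);

  // Keep a synced local copy; the Auth Server stays the source of truth.
  const existing = Meteor.users.findOne({ 'services.authServer.id': remoteUser.id });
  const userId = existing
    ? existing._id
    : Meteor.users.insert({
        services: { authServer: { id: remoteUser.id } },
        profile: remoteUser.profile,
      });

  return { userId };
});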
Subscribing to users
The Auth Server is the single point of truth where you create, update, or delete your users. On a successful login, your local app will always get the latest user data synced from this accounts Auth Server (this is how Meteor does it with loginWith<Service>, too).
You then subscribe to the users on the app itself, without any DDP remote connection. This of course works only if the user data you want is actually for online users.
If you want to subscribe to any user (whose data might not have been synced yet) you still need a remote subscription to a publication on the Authorization Server.
Note that in order to authenticate users with this remote subscription you need an authenticated DDP request (which is also supported by the packages below).
Implementation
Warning - the following is an implementation of my own. I faced the same issue and found no other implementation before mine.
There is a fully working accounts server (though constantly a work in progress):
https://github.com/leaonline/leaonline-accounts
It uses an OAuth2 Node.js implementation, which has been wrapped inside a Meteor package:
https://github.com/leaonline/oauth2-server
and the respective login handler has also been created:
https://github.com/leaonline/meteor-accounts-lea
So finally I found a workaround. It might not be the perfect way to handle this, but to my knowledge it has worked well for me; I am still open to suggestions, though.
Currently I have 4 connecting applications which depend on the same user base, so I decided to build an SSO (a centralized server managing the users database). All 4 connecting applications ping the SSO for user authentication and user-related data.
Those 4 connecting applications are developed using Meteor, and the main challenge was to keep things reactive/realtime: chat/messaging, group creation, showing the users list, and listeners for newly registered users.
In this scenario the users database lives on another remote server (the SSO), so on a connecting application I couldn't just do:
Meteor.publish("getUsers")
So on the connecting applications I decided to create a temporary collection called UserReactiveCollection, with the following document structure:
{
  _id: 1,
  userId: '2',
  createdAt: new Date()
}
And I added a publication for it:
Meteor.publish("subscribeNewUserSso", function () {
  return UserReactiveCollection.find({});
});
To update UserReactiveCollection I exposed REST APIs on each connecting application. Those APIs receive data from the SSO and update UserReactiveCollection.
Whenever a new user registers on the SSO side, I ping those APIs (on the connecting applications) with the inserted userId in the payload.
The connecting applications then receive the onDataChanged ping from the subscription and get the userId.
Using that userId, each connecting application pings the SSO back, gets the user details for that specific userId, and prepends them to the users list.
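As a hedged sketch, such an endpoint on a connecting application could look like this (the route path and payload shape are illustrative; UserReactiveCollection is the temporary collection defined above):

import { Meteor } from 'meteor/meteor';
import { WebApp } from 'meteor/webapp';

// The SSO POSTs { userId } here whenever a new user registers; inserting
// into UserReactiveCollection notifies all "subscribeNewUserSso" subscribers.
WebApp.connectHandlers.use('/api/sso/new-user', (req, res) => {
  let body = '';
  req.on('data', (chunk) => { body += chunk; });
  req.on('end', Meteor.bindEnvironment(() => {
    const { userId } = JSON.parse(body);
    UserReactiveCollection.insert({ userId, createdAt: new Date() });
    res.writeHead(200);
    res.end();
  }));
});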
That's how I got it all working, so for now I am marking my answer as accepted. But as I mentioned above: "It might not be the perfect way to handle this, but to my knowledge it has worked well for me; I am still open to suggestions."
And special thanks to @Jankapunkt for helping me out.

Secure way to store sensitive API details of users (localStorage or database?)

I am playing around with the idea of creating a website for cryptocurrencies, where a user can sign up on my website, enter his API details for one of the exchange markets that I will support, which allows him to trade on that exchange, but using my “more user friendly” web interface.
My main goal is to create a more user friendly interface than what most exchange websites offer. I am not hooking directly into any cryptocurrencies or wallets, all I do is use the API of existing exchange markets, relay the information to my website, where I have a more user friendly interface.
Since this is a very sensitive subject with regard to security, I am trying to figure out what the best way would be to store the users' API details.
In general I don’t like the idea of storing the API details on my database server, nor on my server in general. The thought of having my website hacked and all the API details being exposed is terrifying. Of course each exchange website that supports APIs has their own security built in, such as API sessions with 2FA, IP restrictions, weekly generations of new API secret keys, daily trading limits via API, and not allowing withdrawals of wallets via API. But damage can still be done if those API details get stolen.
I would prefer if there would be a way where I would not need to store the API details on my server at all, but rather have the user save them locally on his PC. That way he is in charge of keeping the API details secure.
This thought then brought me to the idea of creating a desktop app using Electron (https://electron.atom.io/). That way I can still build the website the way I want, but wrapped in Electron so it always runs locally. Before I pursue this idea, though, I would like to keep investigating my previous idea of a regular website, as I prefer to have my website cloud-based, SaaS, to prevent piracy.
So I wonder, storing API details of a user, without saving them on the server, what other options would I have?
Cookies? Probably not secure.
What about localStorage? https://developer.mozilla.org/en-US/docs/Web/API/Web_Storage_API
Are there other options or am I too paranoid about this? Is it generally accepted to store sensitive API details on a database server along with the rest of the users details?
I think saving the data on the users' computers is the wrong way to go. When you save users' personal data on your own server, you control the security of that server; when it is saved on the user's computer, the security of your application depends on your users. Today there are many methods to deceive users, and I think programmers must take care of their users. If you save the data in a server database you can add many safeguards, like email verification or phone verification (sending a message with a verification code), enabling SSL, and avoiding SQL injection by using a modern framework like Laravel or Yii 2. In any case, if you save user data on your server, the security of your application depends on you.
If you save user data on the local computer, consider that hackers today use many methods to steal users' cookies or to gain control of the PC. For example, you can read this post:
https://krebsonsecurity.com/2011/09/right-to-left-override-aids-email-attacks/
Using this method, hackers create an exe file whose extension at first glance looks like .docx or some other extension, for example .pdf. In reality it is a runnable exe file, which the user can download and run. I think you understand what a hacker can do to a user's computer this way; there are so many viruses today that even very experienced users can't recognize them.

Securing JS client-side SDKs

I'm working on a React-Redux web-app which integrates with AWS Cognito for user authentication/data storage and with the Shopify API so users can buy items through our site.
With both SDKs (Cognito, Shopify), I've run into an issue: Their core functionality attaches data behind the scenes to localStorage, requiring both SDKs to be run client-side.
But running this code entirely client-side means that the API tokens which both APIs require are completely insecure, such that someone could just grab them from my bundle and then authenticate/fill a cart/see inventory/whatever from anywhere (right?).
I wrote issues on both repos to point this out. Here's the more recent one, on Shopify. I've looked at similar questions on SO, but nothing I found addresses these custom SDKs/ingrained localStorage usage directly, and I'm starting to wonder if I'm missing/misunderstanding something about client-side security, so I figured I should just ask people who know more about this.
What I'm interested in is whether, abstractly, there's a good way to secure a client-side SDK like this. Some thoughts:
Originally, I tried to proxy all requests through the server, but then the localStorage functionality didn't work, and I had to fake it out post-request and add a whole bunch of code that the SDK is designed to take care of. This proved prohibitively difficult/messy, especially with Cognito.
I'm also considering creating a server-side endpoint that simply returns the credentials and blocks requests from outside the domain. In that case, the creds wouldn't be in the bundle, but wouldn't they be eventually scannable by someone on the site once that request for credentials has been made?
Is the idea that these secret keys don't actually need to be secure, because adding to a Shopify cart or registering a user with an application don't need to be secure actions? I'm just worried that I obviously don't know the full scope of actions that a user could take with these credentials, and it feels like an obvious best practice to keep them secret.
Thanks!
Can't you just put the keys and such in a .env file? That way nobody can see which keys you've got stored in there. You can then access your keys through process.env.YOUR_VAR.
For Cognito you could store stuff like user pool id, app client id, identity pool id in a .env file.
NPM package for dotenv can be found here: NPM dotenv
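A minimal sketch of that setup; the variable names are illustrative, and this applies to code that runs server-side or at build time:

// .env (kept out of version control)
//   COGNITO_USER_POOL_ID=us-east-1_XXXXXXXXX
//   COGNITO_APP_CLIENT_ID=xxxxxxxxxxxxxxxxx

require('dotenv').config(); // loads .env into process.env

const poolId = process.env.COGNITO_USER_POOL_ID;
const appClientId = process.env.COGNITO_APP_CLIENT_ID;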
Furthermore, what super-secret stuff are you currently storing that you're worried about? By "API tokens", do you mean the OpenID token which you get after authenticating with Cognito?
I can respond to the Cognito portion of this. Your AWS Secret Key and Access Key are not stored in the client. For your React.js app, you only need the Cognito User Pool Id and the App Client Id; those are the only identifiers exposed to the user.
I cover this in detail in a comprehensive tutorial here - http://serverless-stack.com/chapters/login-with-aws-cognito.html

Get access to DocumentDB with JS

I'm developing an app which should connect to an external DocumentDB database (not mine). The app is built with Cordova/Ionic.
I found a JavaScript library from Microsoft Azure for establishing a DocumentDB database connection, but it is asking for some weird stuff like collection_rid and tokens.
I've got the following from the guys of the external DocumentDB database:
Endpoint: https://uiuiui.documents.azure.com:443/
Live DocumentDB API ReadOnly Key: P8riQBgFUH...VqFRaRA==
.Net Connection String: AccountEndpoint=https://uiuiui.documents.azure.com:443/;AccountKey=jl23...lk23==;
But how am I supposed to retrieve the collection_rid and token from this information?
Without row-level authorization, DocumentDB is designed to be accessed from a server-side app, not directly from JavaScript in the browser. When you give it the master token, you get full access, which is generally not what you want for your end-user clients. Even the read-only key is usually not what you want to hand out to your clients. The Azure-provided JavaScript library is designed to be run from Node.js as your server-side app.
That said, if you really want to access it from the browser without a proxy app running on a server, you can definitely do so using normal REST calls directly hitting the DocumentDB REST API. I do not think the Azure-provided SDK will run directly in the browser, but with help from Browserify and some manual tweaking (it's open source) you may be able to get it to run.
You can get the collection name from the same folks who provided you the connection string information and use name-based routing to access the collection. I'm not sure exactly what you mean by token but I'm guessing that you are referring to the session token (needed for session-level consistency). Look at the REST API specs if you want to know the details about how that token gets passed back and forth (in HTTP headers) but it's automatically taken care of by the SDKs if you go that route.
Please note that DocumentDB also provides support equivalent to row-level authorization by enabling you to create specific permissions on the desired entities. Once you have such a permission, you can retrieve the corresponding token, which is scoped to be valid for a certain time period. You would need to set up a mid-tier that can fetch these tokens and distribute to your user application. The user application can then use these tokens as bearer-tokens instead of using the master key.
You can find more details at https://msdn.microsoft.com/en-us/library/azure/dn783368.aspx
https://msdn.microsoft.com/en-us/library/azure/7298025b-bcf1-4fc7-9b54-6e7ca8c64f49
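For illustration, here is a sketch of that mid-tier pattern using the documentdb Node SDK. The database, collection, and user names are assumptions; the endpoint is the one from the question:

const { DocumentClient } = require('documentdb');

// Server side only: the master key never reaches the client.
const admin = new DocumentClient('https://uiuiui.documents.azure.com:443/', {
  masterKey: process.env.DOCDB_MASTER_KEY,
});

// Hypothetical links; the database, collection, and user must already exist.
const userLink = 'dbs/mydb/users/alice';
admin.createPermission(userLink, {
  id: 'readMyCollection',
  permissionMode: 'Read',              // read-only scope
  resource: 'dbs/mydb/colls/mycoll',   // scoped to a single collection
}, (err, permission) => {
  // permission._token is a time-limited resource token you can hand
  // to the user application via your mid-tier.
});

// Client side: authenticate with the resource token instead of any key.
const client = new DocumentClient('https://uiuiui.documents.azure.com:443/', {
  resourceTokens: { 'dbs/mydb/colls/mycoll': '<token fetched from your mid-tier>' },
});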
