Will this create any security exploit? - javascript

I have a mongodb collection "users" in which I want to add a new field "wallet_amount" when a user adds money to their wallet. Right now, at the time of user registration, I'm trying to insert a document like this:
db.users.insert( { email: "test@test.com", wallet_amount: 0 } )
Is this the correct way of doing it, or is there a chance this will create a security exploit, since I'm passing a wallet_amount default value of 0?
Or should wallet_amount be inserted only at the time the user adds money to the wallet?

In theory there are no security implications as to whether you set the initial amount on user creation or at a later stage.
However, what you face as a more general security concern is that every time you have any query against the users table, you need to triple check it to make sure there is no way it can alter the wallet_amount incorrectly. Any developer who is coding against this table is touching potentially very sensitive data.
To mitigate this, if you are dealing with a sensitive field like this:
Actually store the wallet amount in a separate table or database
Have a very limited set of APIs to adjust the wallet amount, test them extensively and only ever use those APIs when working with the wallet amount
This decouples the sensitive data from your user table and lets you isolate the part of your domain which needs extra care and attention.
If you want to take this a step further, consider not storing a wallet amount at all. A common approach for very secure financial systems is to actually store a ledger, which is an immutable record of every transaction. In your case it might look like:
Day 1: I add $100 to my wallet
Day 2: I spend $10
Day 3: I spend $13
etc. You can then actually set up your database so you never mutate any data, only ever add more lines to the ledger. A cache can be used to keep track of the current balances, but this can always be recreated by running over the ledger items. This might be overkill for your scenario, but can provide an extra layer of protection, because you essentially forbid anyone from arbitrarily changing what is in the wallet, they can only add transactions (which makes it easier to spot suspicious behaviour or patterns, and trace where money moves).
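The ledger idea can be sketched in plain JavaScript (all names here are illustrative; a real system would persist entries in a database collection rather than an in-memory array):

```javascript
// Append-only ledger sketch. Entries are only ever added, never mutated;
// the balance is derived from the entries, not stored.
const ledger = [];

function addTransaction(userId, amount, description) {
  ledger.push({ userId, amount, description, at: Date.now() });
}

function balanceOf(userId) {
  // The current balance can always be recreated by running over the ledger.
  return ledger
    .filter(tx => tx.userId === userId)
    .reduce((sum, tx) => sum + tx.amount, 0);
}
```

A cache of current balances can sit in front of balanceOf for performance, but the ledger stays the source of truth.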

Related

Confusion regarding bounded contexts and interaction between them

I'm trying to implement my first domain-driven application, after going through Eric Evans's book on Domain-Driven Design. I'm a bit confused about how to go about this.
In my application, a user can purchase a service for getting them a certain number of views on a video they post on YouTube, which is fulfilled by the other users of my app who watch those videos (basically a replica of the many YouTube promoter apps already available, for learning).
Say the service is represented in the app as an aggregate called WatchTime. The WatchTime entity contains some information like the id of the user who purchased this service, the max number of views purchased, the number of views already fulfilled, and the points earned by someone who views the video once.
I decided to go with 3 bounded contexts: one for authentication, one for handling the WatchTimes, like adding or removing them, and one for managing users and their data. Now the user has his personal info and some points that he collected while using the application.
At first I was thinking that all the user data and related actions would be in the 3rd context, like adding points to a user or reducing his points, but then while making the model, I realized that if the watch-time purchasing service is going to be in the second one, then it's going to have to communicate with the third one every time a WatchTime is purchased, to tell a service there to reduce points for that purchase. It wouldn't make sense to keep them in two different ones.
So instead what I'm thinking of is having a model of the user in the 2nd bounded context, but with only the points and the WatchTimes that this user purchased, so now it doesn't have to call something on the 3rd context.
My question is how to properly separate things into contexts. Is it based on the models, or should it be based on the functionality, with all models related to that functionality in the same context?
And another thing: how do I ensure that all the objects of the same entity have the same value and are properly persisted in the database? Should only one object representing a particular entity be present at a time, to be persisted and disposed of by the end of a function? Because I was thinking that if two objects representing the same entity are present at the same time, there's a possibility of both having different values or changing to different values.
If I sound like I'm rambling, please let me know if I have to be clearer. Thanks.
Bounded contexts basically define areas of functionality where the ubiquitous language (and thus the model) are the same. In different bounded contexts, "user" can mean different things: in a "user profile" context, you might have their email address but in the "viewing time" context, you'd just have the points granted and viewership purchased.
Re "another thing", in general you need to keep an aggregate strongly consistent and only allow an update to succeed if the update is cognizant of every prior update which succeeded, including any updates which succeeded after a read from the datastore. This is the single-writer principle.
There are a couple of ways to accomplish this. First, you can use optimistic concurrency control and store a version number with each aggregate. You then update the aggregate in the DB only if the version hasn't changed; otherwise you attempt the operation (performing all the validations etc.) against the new version of the aggregate. This requires some support in the DB for an atomic check of the version and update (e.g. a transaction).
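A minimal sketch of the optimistic-concurrency idea, using an in-memory map in place of a real database (names are illustrative; a real system must make the version check and the write atomic, e.g. with a conditional update or transaction):

```javascript
// Each aggregate is stored with a version number; a save only succeeds if
// the caller's version matches the stored one.
const store = new Map(); // id -> { version, state }

function save(id, expectedVersion, newState) {
  const current = store.get(id);
  const currentVersion = current ? current.version : 0;
  if (currentVersion !== expectedVersion) {
    // The aggregate changed since we read it: report the new version so the
    // caller can re-run its validations against the fresh state and retry.
    return { ok: false, version: currentVersion };
  }
  store.set(id, { version: currentVersion + 1, state: newState });
  return { ok: true, version: currentVersion + 1 };
}
```

A stale writer gets `ok: false` and the current version back, and must re-read before retrying.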
An alternative approach (my personal preference) is to recognize that a DDD aggregate has a high level of mechanical sympathy to the actor model of computation (e.g. both are units of strong consistency). There are implementations of the actor model (e.g. Microsoft Orleans, Akka Cluster Sharding) which allow an aggregate to be represented by at most one actor at a given time (even if there is a cluster of many servers).

Many small or one bigger POST request?

I need to gather information when a user sees an article. A user will browse through 1-30 articles in a minute (maybe even more if they just scroll through everything looking for something specific). I was wondering which way keeps my server costs at a minimum:
In client-side JavaScript I push article IDs into an array and send it to the server when there are 30-60 IDs. On the server I loop through all the IDs and insert them into the database.
Every single time a user sees an article, I send one article ID to the server. In some cases this can cause over 60 requests in a minute. On the server I insert the ID into the database.
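Option 1 might look something like this on the client (a sketch; the batch size is illustrative and `sent` stands in for the POST requests that would actually be made):

```javascript
// Buffer article IDs client-side and flush them to the server in batches.
const BATCH_SIZE = 30;
const sent = [];   // each entry represents one request to the server
let seen = [];     // buffer of article IDs not yet sent

function sendToServer(ids) {
  // In a real app this would be one POST carrying many IDs, e.g.
  // fetch('/seen', { method: 'POST', body: JSON.stringify(ids) })
  sent.push(ids);
}

function flush() {
  if (seen.length === 0) return;
  sendToServer(seen);
  seen = [];
}

function articleSeen(id) {
  seen.push(id);
  if (seen.length >= BATCH_SIZE) flush();
}
```

You would also call flush() on a timer and on page unload, so trailing IDs aren't lost when the user leaves before filling a batch.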
In most cases there is a trade-off, and a lot of the time the optimal solution lies somewhere in the middle. I feel you should support both and use them interchangeably depending on the situation. Consider the following scenarios:
Will your end user have bandwidth issues? If yes, it may make sense to go with option 2, or to reduce the number of articles to a number that can be fetched easily at lower bandwidth.
Assuming the user does not have bandwidth issues, such that loading 30-60 articles won't take a lot of time, you can go with option 1 and keep using it for subsequent fetches as well.
A lot of the time it will make sense to go with option 1 for the initial fetch and then fetch a lower number of articles after that.
Regarding server cost, it will make sense to send 30-60 articles together, provided the user reads them all. If you feel they won't read them all, find an optimal number using your app's analytics and send that number of articles in one go, provided bandwidth isn't an issue for the user.
tl;dr: In data you should trust. Use your intuition, existing app usage patterns, and the user's available bandwidth to make an informed decision. Also, server cost is not the only thing; experience matters more, I think.

Caching query results, to do or not to do, overkill or performance energizer?

Good evening,
my project uses the MEAN Stack and has a few collections and a single database from which the data is retrieved.
Thinking about how the user would interact with the webapp I am going to build, I figured that my idea of the application is quite wasteful.
Now, the application is hosted on a private server on the LAN, making it very fast on requests and it's running an express server.
The application is made around employee management, services and places where the services can take place. Just describing, so to have an idea.
The "ring to rule them all" is pretty much the first collection, services, which forms the core of the application. There's a page that lets you add rows, one for each service you intend to perform, and within that row you choose an employee to "run the service", based on characteristics that employee has: if the service is about teaching piano, the employee must know how to play piano. The same logic applies to the rest of the "columns" that build my row up into a full service recognized by the app as such.
Now, what I said above is pretty much information retrieval from a database and logic to make the application model the information retrieved and build something with it.
My question, or rather my doubt, comes from how I imagined the querying to work for each field that is part of the service row. Right now I'm thinking about querying the database (mongodb) each time I have to pick a value for a field, but if you consider that I might want to add 100 rows, each of which would have 10 fields, that would add up to a lot of requests to the database. That doesn't seem elegant or intelligent to me, but I can't come up with a better solution or idea.
Any suggestions or rule of thumbs for a MEAN newb?
Thanks in advance!
EDIT: Answer to a comment question which was needed.
No, the database is pretty static (unless the user willingly inserts a new value, say a new employee who can perform a service). That wouldn't happen very often. Considering the query that would return all the employees for a given service, those employees would (ideally) be inside an associative array, with the possibility of being popped from it if chosen for a service, making them unavailable for further services (because one person can't do two services at the same time). Hope I was clear; I'm surely not the best at explaining myself.
It would query the database on who is available when a user looks at that page and another query if the user assigns an employee to do a service.
In general 1 query on page load and another when data is submitted is standard.
You would only want to use an in-memory cache for:
frequent queries, though most databases will do this automatically
values that change frequently, like:
how many users are connected
the last query sent
something that happens on almost every query (>95%)
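If you do end up caching on the application side, a minimal sketch looks like this (`runQuery` stands in for the MongoDB query and the TTL is illustrative):

```javascript
// Wrap a query function in a small in-memory cache with a time-to-live.
function cachedQuery(runQuery, ttlMs) {
  let value;
  let expiresAt = 0;
  return () => {
    const now = Date.now();
    if (now >= expiresAt) {
      value = runQuery();        // go to the database
      expiresAt = now + ttlMs;
    }
    return value;                // otherwise serve from memory
  };
}
```

For a mostly static dataset like yours, one cached query on page load covers all 100 rows' dropdowns without 1000 round trips.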

Timestamp-based conflict resolution without reliable time synchronization

Let's take as an example a js "app", that basically does CRUD, so it creates, updates and deletes (not really) some "records".
In the most basic case, one does not need to resolve conflicts in such an application, because the ACID properties of the DBMS are used to eliminate concurrent updates (I'm skimming over a ton of details here, I know). When there's no way to emulate serial execution of updates, one can use timestamps to determine which update "wins". Even then the client need not worry about timestamps, because they can be generated at request time on the server.
But what if we take it one step further and allow the updates to queue up on the client for some unspecified amount of time (say, to allow the app to work when there's no network connectivity) before being pushed to the server? Then the timestamp cannot be generated on the server, since the time when the update was pushed to the server and the actual time when the update was performed may differ greatly.
In an ideal world where all clocks are synchronized this is not a problem - just generate a timestamp on the client at the time the update is performed. But in reality, time often drifts from the "server" time (which is assumed to be perfect; after all, it's us configuring the server, what could ever go wrong with it?) or is just plain wrong by hours (possible when you don't set the time zone, but instead update the system's time/date to match). What would one do to account for reality in such a case?
Perhaps there's some other way of conflict resolution that may be used in such a case?
Your question has two aspects:
Synchronizing/serializing at the server using timestamps, via the ACID properties of the database.
Queues kept on the client (delays the server is not aware of).
If you maintain queues on the client which push to the server whenever they see fit, synchronization is anything but trivial, because queueing defeats the purpose of the timestamps the server relies on.
The scope of ACID is limited here: if client updates are not real-time, the server cannot serialize based on the timestamp of request creation or request arrival. It creates a scenario where a request R2, created later than request R1, arrives before R1.
Time is a relative concept; using local time on either the client or the server will cause drift for the other. It also does not scale (it's inefficient if you have several peer nodes, i.e. distributed), and it introduces a single point of failure.
To resolve this, vector clocks were designed. They are logical clocks that atomically increment when an event occurs on a machine. BASE databases (Basically Available, Soft state, Eventual consistency) use them.
The result is that 1 and 2 will never work together. You should never queue requests which use timestamps for conflict resolution.
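As a rough illustration of the vector clocks mentioned above, here is a minimal sketch (node IDs and the plain-object representation are illustrative):

```javascript
// A vector clock maps node IDs to event counters. A node increments its own
// slot on each local event and merges clocks when it receives a message.
function increment(clock, nodeId) {
  return { ...clock, [nodeId]: (clock[nodeId] || 0) + 1 };
}

function merge(a, b) {
  const out = { ...a };
  for (const k of Object.keys(b)) out[k] = Math.max(out[k] || 0, b[k]);
  return out;
}

function happenedBefore(a, b) {
  // a -> b iff every component of a <= b and at least one is strictly less.
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  let strictlyLess = false;
  for (const k of keys) {
    const av = a[k] || 0;
    const bv = b[k] || 0;
    if (av > bv) return false;
    if (av < bv) strictlyLess = true;
  }
  return strictlyLess;
}
```

When neither clock happened before the other, the updates are concurrent and need an explicit conflict-resolution policy.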
Nice challenge. While I appreciated user568109's answer, this is how I handled a similar situation in a CQRS/DDD application.
While in a DDD application I had many different kinds of commands and queries, in a CRUD application we have, for each type of "record", CREATE, UPDATE and DELETE commands and a READ query.
In my system, the client kept track of the previous sync in a tuple containing the UTC time on the server and the time on the client (let's call this LastSync).
READ
The read query won't participate in synchronization. Still, in some situations you may have to send the server a LogRead command to keep track of the information that was used to make decisions. Such commands contained the entity's type, the entity's identifier and LastSync.ServerTime.
CREATE
Create commands are idempotent by definition: they either succeed or fail (when a record with the same identity already exists). At sync time you will have to either notify the user of the conflict (so that they can handle the situation, e.g. by changing the identifier) or fix the timestamp as explained later.
UPDATE
Update commands are a bit trickier, since you should probably handle them differently for different types of records. To keep it simple, you should be able to impose on your users that the last update always wins, and design the command to carry only the properties that should be updated (just like a SQL UPDATE statement). Otherwise you'll have to handle automatic/manual merging (but believe me, it's a nightmare: most users won't ever understand it!). Initially my customer required this feature for most entities, but after a while they accepted that the last update wins, to avoid such complexity. Moreover, in the case of an Update on a Deleted object, you should notify the user of the situation and, according to the type of entity updated, apply the update or not.
DELETE
Delete commands should be simple, unless you have to notify the user that an update occurred that could have led them to keep the record instead of deleting it.
You should carefully analyze how to handle each of these commands for each of your entity types (and in the case of UPDATEs you could be forced to handle them differently for different sets of properties to update).
SYNC PROCESS
The sync session should start by sending to the server a message with:
Current Time on the Client
LastSync
This way the server can calculate the offset between its time and the client's time and apply that offset to every command it receives. Moreover, it can check whether the offset changed since LastSync and choose a strategy to handle that change. Note that, this way, the server won't know when the client's clock was adjusted.
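The offset calculation described above could be sketched as follows (field names are illustrative; network latency is ignored here, though a real sync would account for round-trip time):

```javascript
// The server compares the client-reported time with its own clock, then
// shifts every queued command's client timestamp into server time.
function computeOffsetMs(serverNowMs, clientNowMs) {
  // Positive when the client's clock is behind the server's.
  return serverNowMs - clientNowMs;
}

function adjustCommands(commands, offsetMs) {
  return commands.map(cmd => ({
    ...cmd,
    timestamp: cmd.timestamp + offsetMs, // client time mapped to server time
  }));
}
```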
At the end of a successful sync (it's up to you to decide what successful means here), the client would update the LastSync tuple.
A final note
This is a quite complex solution. You should carefully weigh with the customer whether such complexity gives you enough value before starting to implement it.

What is the best way to filter spam with JavaScript?

I have recently been inspired to write spam filters in JavaScript, Greasemonkey-style, for several websites I use that are prone to spam (especially in comments). When considering my options about how to go about this, I realize I have several options, each with pros/cons. My goal for this question is to expand on this list I have created, and hopefully determine the best way of client-side spam filtering with JavaScript.
As for what makes a spam filter the "best", I would say these are the criteria:
Most accurate
Least vulnerable to attacks
Fastest
Most transparent
Also, please note that I am trying to filter content that already exists on websites that aren't mine, using Greasemonkey Userscripts. In other words, I can't prevent spam; I can only filter it.
Here is my attempt, so far, to compile a list of the various methods along with their shortcomings and benefits:
Rule-based filters:
What it does: "Grades" a message by assigning a point value to different criteria (e.g. all uppercase, all non-alphanumeric, etc.). Depending on the score, the message is discarded or kept.
Benefits:
Easy to implement
Mostly transparent
Shortcomings:
Transparent: it's usually easy to reverse engineer the code to discover the rules, and thereby craft messages which won't be picked up
Hard to balance point values (false positives)
Can be slow; multiple rules have to be executed on each message, often using regular expressions
In a client-side environment, server interaction or user interaction is required to update the rules
Bayesian filtering:
What it does: Analyzes word frequency (or trigram frequency) and compares it against the data it has been trained with.
Benefits:
No need to craft rules
Fast (relatively)
Tougher to reverse engineer
Shortcomings:
Requires training to be effective
Trained data must still be accessible to JavaScript; usually in the form of human-readable JSON, XML, or flat file
Data set can get pretty large
Poorly designed filters are easy to confuse with a good helping of common words to lower the spamacity rating
Words that haven't been seen before can't be accurately classified; sometimes resulting in incorrect classification of entire message
In a client-side environment, server interaction or user interaction is required to update the rules
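For a sense of scale, the word-frequency analysis itself is small; a toy sketch (the training data is illustrative, and the add-one smoothing here is simplified compared to a production filter):

```javascript
// Tiny naive-Bayes-style word-frequency scorer.
const counts = { spam: {}, ham: {} };
const totals = { spam: 0, ham: 0 };

function tokenize(text) {
  return text.toLowerCase().split(/\W+/).filter(Boolean);
}

function train(label, text) {
  for (const word of tokenize(text)) {
    counts[label][word] = (counts[label][word] || 0) + 1;
    totals[label] += 1;
  }
}

function spamScore(text) {
  // Sum of log-probability ratios, with add-one smoothing for unseen words.
  let score = 0;
  for (const word of tokenize(text)) {
    const pSpam = ((counts.spam[word] || 0) + 1) / (totals.spam + 2);
    const pHam = ((counts.ham[word] || 0) + 1) / (totals.ham + 2);
    score += Math.log(pSpam / pHam);
  }
  return score; // > 0 leans spam, < 0 leans ham
}
```

The hard parts are exactly the shortcomings listed above: shipping and updating the trained counts, and handling words the filter has never seen.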
Bayesian filtering- server-side:
What it does: Applies Bayesian filtering server side by submitting each message to a remote server for analysis.
Benefits:
All the benefits of regular Bayesian filtering
Training data is not revealed to users/reverse engineers
Shortcomings:
Heavy traffic
Still vulnerable to uncommon words
Still vulnerable to adding common words to decrease spamacity
The service itself may be abused
To train the classifier, it may be desirable to allow users to submit spam samples for training. Attackers may abuse this service
Blacklisting:
What it does: Applies a set of criteria to a message or some attribute of it. If one or more (or a specific number of) criteria match, the message is rejected. A lot like rule-based filtering, so see its description for details.
CAPTCHAs, and the like:
Not feasible for this type of application. I am trying to apply these methods to sites that already exist. Greasemonkey will be used to do this; I can't start requiring CAPTCHAs in places that they weren't before someone installed my script.
Can anyone help me fill in the blanks? Thank you.
There is no "best" way, especially for all users or all situations.
Keep it simple:
Have the GM script initially hide all comments that contain links and maybe universally bad words (F*ck, Presbyterian, etc.). ;)
Then the script contacts your server and lets the server judge each comment by X criteria (more on that, below).
Show or hide comments based on the server response. In the event of a timeout, show or hide based on a user preference setting ("What to do when the filter server is down? Show/hide comments with links").
That's it for the GM script; the rest is handled by the server.
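The initial client-side step could be sketched like this (the word list is illustrative; in the actual userscript you would run the check over each comment node):

```javascript
// Decide whether a comment should be hidden before the server has judged it:
// anything containing a link or a locally blacklisted word gets hidden.
const badWords = ["viagra", "casino"]; // illustrative placeholder list

function shouldHide(commentText) {
  const hasLink = /https?:\/\/|www\./i.test(commentText);
  const text = commentText.toLowerCase();
  const hasBadWord = badWords.some(w => text.includes(w));
  return hasLink || hasBadWord;
}

// In the userscript itself, something along these lines:
// document.querySelectorAll(".comment").forEach(node => {
//   if (shouldHide(node.textContent)) node.style.display = "none";
// });
```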
As for the actual server/filtering criteria...
Most important: do not dare to assume that you can guess what a user will want filtered! This will vary wildly from person to person, or even mood to mood.
Set up the server to use a combination of bad words, bad link destinations (.ru and .cn domains, for example) and public spam-filtering services.
The most important thing is to offer users some way to choose, and ideally adjust, what is applied for them.
