From event-driven OO to Redux architecture - javascript

I have a structure that is pretty much OO, and I am about to migrate to React/Redux because of the event mess.
I am curious what to do with the current modules. I have objects with a schema like:
User {
  getName()
  getSurname()
  etc...
}
There are lots of these; they were used as facades/factories for raw JSON data, since I used to pass JSON around and manipulate it (mutable data).
Now how do I solve this in Redux?
I get to the part where I have an async action call and receive raw data from the API, and then what?
Should I pass 'complex' objects with their getters/setters into the state? State is said to be immutable, so that doesn't seem to fit the Redux recommendations.
Or maybe convert the class-like elements to accessors like:
function getName(rawJson) {
  return rawJson.name
}
function setName(rawJson, name) {
  return Object.assign({}, rawJson, {name})
}
parse it in an action and return a raw JSON chunk from the action to the reducer, and then stick it into the new state?
EDIT:
A simple pseudocode module for a user:
function User(raw) {
  return {
    getName: function() {
      return raw.name
    },
    setName: function(name) {
      raw.name = name
      return this
    }
  }
}
My point is about moving all the data into the store and flattening/normalizing it there. Would it be fine to have an array of, e.g., User objects in the store, or should they all be plain JSON? I want to be sure that turning all those objects into basic values really is the only correct way, because it is going to be a lot of work.

Not sure if this will be totally relevant, but if I'm understanding your question correctly, it boils down to: where should business logic and other validations/manipulations live?
https://github.com/reactjs/redux/issues/1165
I personally follow this approach as well, in that I do all of my async action manipulation in my action creators before storing the result, in a format of my choosing, in the reducer.
Here, I choose to convert whatever objects I get back from the API to plain JSON. Similarly, you can do whatever logic you need at this point before dispatching a success action to store the result in your reducers.
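For example, a minimal sketch of that idea (api.getUser and the action type are illustrative placeholders, not from the question):
// an async action creator that parses the API response and
// dispatches only plain, serializable data to the store
function fetchUser(id) {
  return async dispatch => {
    const raw = await api.getUser(id) // placeholder for your API call
    // do any parsing/validation here, then dispatch plain JSON
    dispatch({
      type: 'FETCH_USER_SUCCESS',
      payload: { id: raw.id, name: raw.name, surname: raw.surname }
    })
  }
}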
Hope that helps/is relevant to whatever you were asking...

Related

How to get component instance in data section in vuejs template?

I have a component that has complex rendering logic.
I am trying to move this logic into helper classes to simplify the component.
To do this, I create class references in the data section (for reactivity), as follows:
export default {
  data: () => ({
    state: new InitialState(this),
    query: new QueryController(this)
  })
}
As I understand it, at this point the context of this is not yet defined.
So, I have two questions.
1) Is there a way to get the component's this context in the data section (without using lifecycle hooks)?
2) Does the approach of referencing external classes run contrary to the Vue.js philosophy?
The component instance is already available when the data function runs; this is one of the reasons why data is required to be a function.
Due to how lexical this works with arrow functions, it's incorrect to use one to access the dynamic this. It should be:
data() {
  return {
    state: new InitialState(this),
    query: new QueryController(this)
  };
}
The problem with InitialState(this) is that the entire component instance is passed instead of just the relevant data; this breaks the principle of least privilege.
Although Vue isn't focused on OOP, there's nothing wrong with using classes. One possible pitfall is that classes may not play well with Vue reactivity, because reactivity puts restrictions on the implementation. Another pitfall is that class instances cannot be serialized to JSON and back without additional measures, which limits how application state can be handled.
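For instance, a hypothetical sketch that hands each helper only the data it needs (the constructor parameters here are made up for illustration):
data() {
  return {
    // pass relevant values instead of the whole component instance
    state: new InitialState({ initialCount: this.initialCount }),
    query: new QueryController({ endpoint: this.endpoint })
  };
}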
As I understand it, at this point the context of this is not yet defined.
Only because of the way you've written the code. The component instance does exist and is available. It is sometimes used to access the values of props when determining the initial values of data properties.
For example, here is a snippet from the documentation:
https://v2.vuejs.org/v2/guide/components-props.html#One-Way-Data-Flow
export default {
  props: ['initialCounter'],
  data: function () {
    return {
      counter: this.initialCounter
    }
  }
}
Your code doesn't work because you are using an arrow function. If you change it to the following, then this will be available:
export default {
  data () {
    return {
      state: new InitialState(this),
      query: new QueryController(this)
    }
  }
}
See also the note here:
https://v2.vuejs.org/v2/api/#data
Note that if you use an arrow function with the data property, this won’t be the component’s instance, but you can still access the instance as the function’s first argument
As to your other question about whether using classes like this is contrary to Vue...
I don't think the use of classes like this is encouraged but they can be made to work so long as you understand the limitations. If you have a clear understanding of how Vue reactivity works, especially the rewriting of properties, then it is possible to write classes like this and for them to work fine. The key is to ensure that any properties you want to be reactive are exposed as properties of the object so Vue can rewrite them.
If you don't need reactivity on these objects then don't put them in data. You'd be better off just creating properties within the created hook instead so the reactivity system doesn't waste time trying to add reactivity to them. So long as they are properties of the instance they will still be accessible in your templates, there's nothing special about using data from that perspective.
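A minimal sketch of that created-hook alternative, assuming the objects don't need reactivity:
export default {
  created() {
    // plain instance properties: the reactivity system skips them,
    // but they are still accessible from the template via the instance
    this.state = new InitialState(this)
    this.query = new QueryController(this)
  }
}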
I think computed is a better way to do what you want:
export default {
  computed: {
    state() {
      return new InitialState(this);
    },
    query() {
      return new QueryController(this);
    }
  }
}
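One caveat with the computed approach: computed properties are cached based on their reactive dependencies, so a new InitialState is only constructed again when a reactive property read during its construction changes.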

Functional component props/state/store not updated in function - what's a viable alternative?

First off, some description of what I need to achieve. I show information in the front-end (React) that mostly corresponds to database rows, and the user can do regular CRUD operations on those objects. However, I also add some dummy rows to the JSON that I send to the front-end, because some objects are "defaults" and should not be inserted into the database unless the user actually wants to modify them. So once a user wants to modify a "default" object in the front-end, I first have to POST that object to a specific endpoint that copies it from constants into the database, and only then follow up with the request that modifies the row.
Secondly, the architecture around this. For storing the state of the rows in the front-end I'm using Redux via easy-peasy, and I have a thunk that does the initial save before modifying. Once a user wants to edit a "default" object anywhere in the UI (there are about 20 different ways of editing an object), the flow of the program looks something like this:
1. User edits something and presses "save"
2. The thunk is called in the save function and awaited
3. The thunk POSTs to the backend to insert the object into the database and return the corresponding row
4. The backend responds with the IDs of the rows
5. The thunk calls an action and updates these objects in the store with the correct IDs
6. The thunk returns and control moves back to the modifying function
7. The modifying function makes another request with the correct IDs
8. The modifying function updates the store with the modified values
Now, the problem I run into is between steps 5 and 7, because the component looks basically like this:
const Foo = () => {
  const insertToDatabaseIfNecessary = useStoreActions((actions) => actions.baz.insertIfNecessary)
  const items = useStoreState((state) => state.baz.items);
  const onSave = async () => {
    await insertToDatabaseIfNecessary();
    // do the actual modifying thing here
    axios.post(...items);
  }
  return (
    <button onClick={onSave}>Save!</button>
  );
}
If you know functional components better than I do, then you know that in onSave(), insertToDatabaseIfNecessary() will update the values in the Redux store, but by the time we get to the actual modification and post(...items), the POSTed values are stale: items was captured by the closure on the previous render and will only be updated the next time the component renders. They would be up to date in a class-based component, but easy-peasy has no support for class-based components. I guess one way would be to use class-based components and Redux directly, but I have a feeling there might be a different pattern that could solve my issue without resorting to class-based components.
The question: Is there a sane way of doing this with functional components?
Thunks in easy-peasy can handle asynchronous events, so you should put your axios post in there, e.g.:
insertToDatabaseIfNecessary: thunk(async (actions, payload) => {
  // First update the data on the server
  await axios.post(payload.items);
  // Assuming that the post succeeds, now dispatch an action to update your store.
  // (You'd want to check your post succeeded before doing this...)
  actions.updateYourStoreData(payload);
})
This easy-peasy thunk will wait for the async post to finish, so you can use the action as follows in your Foo component:
insertToDatabaseIfNecessary();
You will not need to await it or use the onSave function in your Foo component.
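If the follow-up modify request needs the freshly updated items, note that easy-peasy thunks also receive a helpers object as a third argument, so the thunk can read the current state after dispatching. A sketch, assuming the thunk lives on the same model as items, with hypothetical endpoints:
insertAndModify: thunk(async (actions, payload, helpers) => {
  // insert first, then update the store with the returned IDs
  await axios.post('/insert', payload.items) // hypothetical endpoint
  actions.updateYourStoreData(payload)
  // read the just-updated local state instead of a stale closure value
  const freshItems = helpers.getState().items
  await axios.post('/modify', freshItems) // hypothetical endpoint
})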

Why does my sequelize model instance lose its id?

I've got a node-based microservice built on top of postgres, using sequelize to perform queries. I've got a table of Pets, each with an id (uuid) and a name (string). And, I've got a function for fetching Pets from the database by name, which wraps the nasty-looking sequelize call:
async function getPetByName( petName ) {
  // findOne returns a promise, so it needs to be awaited
  const sqlzPetInstance = await Database.Pet.findOne({
    where: { name: { [Sequelize.Op.iLike]: petName } }
  })
  if(!sqlzPetInstance) return undefined
  return sqlzPetInstance
}
It works great.
Later, to improve performance, I added some very short-lived caching to that function, like so:
async function getPetByName( petName ) {
  if( ramCache.get(petName) ) return ramCache.get(petName)
  const sqlzPetInstance = await Database.Pet.findOne({ ... })
  if(!sqlzPetInstance) return undefined
  return ramCache.set(petName, sqlzPetInstance) // persists for 5 seconds
}
Now I've noticed that items served from the cache sometimes have their id prop removed! WTF?!
I've added logging, and discovered that the ramCache entry is still being located reliably, and the value is still an instance of the sqlz Pet model. All the other attributes on the model are still present, but dataValues.id is undefined. I also noticed that _previousDataValues.id has the correct value, which suggests to me this really is the model instance I want it to be, but modified for some reason.
What can explain this? Is this what I would see if callers who obtain the model mutate it by assigning to id? What can cause _previousDataValues and dataValues to diverge? Are there cool sqlz techniques I can use to catch the culprit (perhaps by defining custom setters that log or throw)?
EDIT: experimentation shows that I can't overwrite the id by assigning to it. That's cool, but now I'm pretty much out of ideas. If it's not some kind of irresponsible mutation (which I could protect against), then I can't think of any sqlz instance methods that would result in removing the id.
I don't have a smoking gun, but I can describe the fix I wrote and the hypothesis that shaped it.
As I said, I was storing sequelize model instances in RAM:
ramCache[ cacheKey ] = sqlzModelInstance
My hypothesis is that, by providing the same instance to every caller, I created a situation in which naughty callers could mutate the shared instance.
I never figured out how that mutation was happening. I proved through experimentation that I could not modify the id attribute by overwriting it:
// this does not work
sqlzModelInstance.id = 'some-fake-id'
// unchanged
However, I read a few things in the sqlz documentation that suggested that every instance retains some kind of invisible link to a central authority, and so there's the possibility of "spooky action at a distance."
So, to sever that link, I modified my caching system to store the raw data, rather than sqlz model instances, and to automatically re-hydrate that raw data upon retrieval.
Crudely:
function saveInCache( cacheKey, sqlzModelInstance ) {
  // store the raw data, not the live sequelize instance
  cache[ cacheKey ] = sqlzModelInstance.get({ plain: true })
}
function getFromCache( cacheKey ) {
  let data = cache[ cacheKey ]
  if(!data) return undefined
  // re-hydrate a fresh, unshared instance from the raw data
  return MySqlzClass.build( data, { isNewRecord: false, raw: true } )
}
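Wired into the original function, the lookup then looks something like this (the same logic as before, just routed through the helpers above):
async function getPetByName( petName ) {
  const cached = getFromCache(petName)
  if (cached) return cached
  const sqlzPetInstance = await Database.Pet.findOne({
    where: { name: { [Sequelize.Op.iLike]: petName } }
  })
  if (!sqlzPetInstance) return undefined
  saveInCache(petName, sqlzPetInstance)
  return sqlzPetInstance
}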
I never located the naughty caller -- and my general practice is to avoid mutating arguments, so it's unlikely any straightforward mutation is happening -- but the change I describe has fixed the easily-reproducible bug I was encountering. So, I think my hypothesis, vague as it is, is accurate.
I will refrain for a while from marking my answer as correct, in the hopes that someone can shed some more light on the problem.

Passing down arguments using Facebook's DataLoader

I'm using DataLoader for batching the requests/queries together.
In my loader function I need to know the requested fields, so that instead of a SELECT * FROM query I can issue a SELECT field1, field2, ... FROM query...
What would be the best approach using DataLoader to pass down the resolveInfo needed for it? (I use resolveInfo.fieldNodes to get the requested fields)
At the moment, I'm doing something like this:
await someDataLoader.load({ ids, args, context, info });
and then in the actual loaderFn:
const loadFn = async options => {
  const ids = [];
  let args;
  let context;
  let info;
  options.forEach(a => {
    ids.push(a.ids);
    if (!args && !context && !info) {
      args = a.args;
      context = a.context;
      info = a.info;
    }
  });
  return new DataProvider().get({ ...args, ids }, context, info);
};
but as you can see, it's hacky and doesn't really feel good...
Does anyone have an idea how I could achieve this?
I am not sure there is a good answer to this question, simply because Dataloader is not made for this use case, but I have worked extensively with Dataloader, written similar implementations, and explored similar concepts in other programming languages.
Let's understand why Dataloader is not made for this usecase and how we could still make it work (roughly like in your example).
Dataloader is not made for fetching a subset of fields
Dataloader is made for simple key-value lookups. That means that, given a key like an ID, it will load the value behind it. For that, it assumes that the object behind the ID will always be the same until it is invalidated. This is the single assumption that enables the power of Dataloader. Without it, the three key features of Dataloader won't work anymore (a minimal usage sketch follows the list):
Batching requests (multiple requests are done together in one query)
Deduplication (requests to the same key twice result in one query)
Caching (consecutive requests of the same key don't result in multiple queries)
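For reference, here is a minimal sketch of that plain key-value usage (db.query is an illustrative placeholder):
const userLoader = new Dataloader(async ids => {
  const rows = await db.query(`SELECT * FROM user WHERE id IN (${ids.join()})`);
  // a batch function must return one result per key, in the same order
  return ids.map(id => rows.find(row => row.id === id));
});
// inside an async function: three loads in the same tick become one query,
// and the duplicate key is served from the cache
const [a, b, c] = await Promise.all([
  userLoader.load(1),
  userLoader.load(2),
  userLoader.load(1)
]);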
This leads us to the following two important rules if we want to maximise the power of Dataloader:
Two different entities cannot share the same key, otherwise we might return the wrong entity. This sounds trivial, but it is not in your example. Let's say we want to load the user with ID 1 and the fields id and name. A little bit later (or at the same time) we want to load the user with ID 1 and the fields id and email. These are technically two different entities, and they need to have different keys.
The same entity should have the same key all the time. Again, this sounds trivial, but really is not in the example. The user with ID 1 and fields id and name should be the same as the user with ID 1 and fields name and id (notice the order).
In short, a key needs to have all the information needed to uniquely identify an entity, but not more than that.
So how do we pass down fields to Dataloader
await someDataLoader.load({ ids, args, context, info });
In your question you have provided a few more things to your Dataloader as a key. First, I would not put args and context into the key. Does your entity change when the context changes (e.g. you are querying a different database now)? Probably yes, but do you want to account for that in your Dataloader implementation? I would instead suggest creating new Dataloaders for each request, as described in the docs.
Should the whole request info be in the key? No, but we do need the fields that are requested. Apart from that, your provided implementation is wrong and would break when the loader is called with two different resolve infos: you only set the resolve info from the first call, but it might really be different on each object (think about the first user example above). Ultimately we could arrive at the following implementation of a Dataloader:
// This function creates unique cache keys for different selected fields
function cacheKeyFn({ id, fields }) {
  const sortedFields = [...(new Set(fields))].sort().join(';');
  return `${id}[${sortedFields}]`;
}
function createLoaders(db) {
  const userLoader = new Dataloader(async keys => {
    // Create a set with all requested fields
    const fields = keys.reduce((acc, key) => {
      key.fields.forEach(field => acc.add(field));
      return acc;
    }, new Set());
    // Get all our ids for the DB query
    const ids = keys.map(key => key.id);
    // Please be aware of possible SQL injection, don't copy + paste
    const result = await db.query(`
      SELECT
        ${[...fields].join(', ')}
      FROM
        user
      WHERE
        id IN (${ids.join()})
    `);
    // A batch function must return one result per key, in the same order
    return keys.map(key => result.find(row => row.id === key.id));
  }, { cacheKeyFn });
  return { userLoader };
}
// now in a resolver
resolve(parent, args, ctx, info) {
  // https://www.npmjs.com/package/graphql-fields
  return ctx.userLoader.load({ id: args.id, fields: Object.keys(graphqlFields(info)) });
}
This is a solid implementation, but it has a few weaknesses. First, we are overfetching a lot of fields if we have different field requirements in the same batch request. Second, if we have fetched an entity under the key 1[id,name], we could also answer (at least in JavaScript) the keys 1[id] and 1[name] with that object. Here we could build a custom map implementation to supply to Dataloader, one smart enough to know these things about our cache.
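A rough sketch of such a custom map (Dataloader accepts a cacheMap option implementing get/set/delete/clear; this is an untested idea, and the linear scan is deliberately naive):
class SupersetCacheMap {
  constructor() {
    this.map = new Map();
  }
  // keys look like "<id>[f1;f2;...]", as produced by cacheKeyFn above
  parse(key) {
    const [, id, fields] = key.match(/^(.*)\[(.*)\]$/);
    return { id, fields: fields ? fields.split(';') : [] };
  }
  get(key) {
    if (this.map.has(key)) return this.map.get(key);
    const { id, fields } = this.parse(key);
    // answer from any cached entry on the same id whose fields are a superset
    for (const [otherKey, value] of this.map) {
      const other = this.parse(otherKey);
      if (other.id === id && fields.every(f => other.fields.includes(f))) {
        return value;
      }
    }
    return undefined;
  }
  set(key, value) { this.map.set(key, value); }
  delete(key) { return this.map.delete(key); }
  clear() { this.map.clear(); }
}
// supplied alongside cacheKeyFn:
// new Dataloader(batchFn, { cacheKeyFn, cacheMap: new SupersetCacheMap() })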
Conclusion
We see that this is really a complicated matter. I know it is often listed as a benefit of GraphQL that you don't have to fetch all fields from a database for every query, but the truth is that in practice this is seldom worth the hassle. Don't optimise what is not slow. And even if it is slow, is it a bottleneck?
My suggestion is: write trivial Dataloaders that simply fetch all (needed) fields. If you have one client, it is very likely that for most entities the client fetches all fields anyway; otherwise they would not be part of your API, right? Then use something like query introspection to measure slow queries, and find out exactly which field is slow. Then you optimise only the slow thing (see, for example, my answer here that optimises a single use case). And if you are a big e-commerce platform, please don't use Dataloader for this. Build something smarter, and don't use JavaScript.

Updating Data between two components in React

I am new to React and I don't know the best way to do this.
I have a list of cars, and clicking each row should slide to a full page of details for that car.
My code structure is:
I have App, which renders two components: CarList and CarDetails. CarDetails is hidden initially. The reason I render CarDetails in App is that it's a massive fixed template, so I would like to render it once when the app loads and only update its data when a row is clicked.
CarList also renders the CarRow component, which is fine.
Now my problem is that I have a getDetails function on the CarRow component which makes a call to get the details based on the car id. How do I get the CarDetails component's data updated? I used
this.setState({itemDetails:data});
but it seems that the state of CarRow is not the same reference as the state in CarDetails.
Any help?
This is a fundamental issue that lots of thought and man-hours have gone into trying to solve. It probably can't be answered, except on a surface level, in a StackOverflow post. It's not React-centric, either: this is an issue across most applications, regardless of the framework you're using.
Since you asked in the context of React, you might consider reading into flux, which is the de-facto implementation of this one-way data-flow idea in concert with React. However, that architecture is by no means "the best". There are simply advantages and disadvantages to it like everything else.
Some people don't like the idea of the global "event bus" that flux proposes. If that's the case, you can simply implement your own intermediate data layer API that collects query callbacks and A) invokes the callbacks on any calls to save data and B) refreshes any appropriate queries to the server. For now, though, I'd stick with flux as it will give you an idea of the general principles involved in having the things that most people consider to be "good", like a single source of truth for your data, one way flow, etc.
To give a concrete example of the callback idea:
// data layer
const listeners = [];
const data = {
  save: save,
  query: query
};

function save(someData) {
  // save the data to the server (saveToServer is a placeholder), and then
  // notify every registered listener so their queries are refreshed
  saveToServer(someData).then(data => {
    listeners.forEach(listener => listener(data));
  });
}

function query(params, callback) {
  // query the server with the params (queryServer is a placeholder), then
  // deliver the result and register the callback for future refreshes
  queryServer(params).then(callback);
  listeners.push(callback);
}

// component
componentWillMount() {
  data.query(params, data => this.setState({ myData: data }));
},
save() {
  // when the save operation is complete, it will "refresh" the query above
  data.save(someData);
}
This is a very distilled example (saveToServer and queryServer stand in for your actual server calls) and it doesn't address optimization, such as the potential for memory leaks when moving to different views and invoking "stale" callbacks; however, it should give you a general idea of another approach.
The two approaches have the same policy (a single source of truth for data and one way data flow) but different implementations (global "event bus" which necessitates keeping track of events, or the simple callback method, which can necessitate a form of memory management).
