How can I adapt my Ext JS models to hold data containing nested objects?
Data example:
fruit: {
    apples: {
        quantity: '10',
        color: 'red'
    },
    pears: {
        color: 'yellow',
        taste: 'very bad'
    }
}
What would my model look like then? I only know how to put data on one level:
Ext.define('app.model.Fruits', {
    extend: 'Ext.data.Model',
    fields: [
        { name: 'apples' },
        { name: 'pears' }
    ]
});
Where can I put the other properties? I am working in Sencha Architect.
Take a read through the Ext.data.Field API docs. You'll see a variety of ways in which to configure data on your model.
I think what you're specifically asking for is the mapping config.
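For example, here is a minimal sketch of a model that uses mapping to flatten the nested data from the question; the flattened field names are made up for illustration.
// A minimal sketch, assuming the nested "fruit" data shown in the question;
// the flattened field names are invented for illustration.
Ext.define('app.model.Fruits', {
    extend: 'Ext.data.Model',
    fields: [
        { name: 'appleQuantity', mapping: 'apples.quantity' },
        { name: 'appleColor',    mapping: 'apples.color' },
        { name: 'pearColor',     mapping: 'pears.color' },
        { name: 'pearTaste',     mapping: 'pears.taste' }
    ]
});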
Related
I'm creating an Angular application (computer online store) with a Node/Express backend. I have a products page to display all the products in my DB. A product has this model (TypeScript):
interface Product {
name: string
properties: {name: string, value: string | number}[]
}
I have a section within the page where you can filter products by properties. For instance, a user can filter all the CPUs that have 4 or 8 cores. Right now this is implemented like this:
In the Angular application I query ALL THE PRODUCTS of the requested category,
loop through all of them, collect their properties and all the possible values, and filter like this...
const products = [
{
name: 'intel cpu 1',
properties: [
{name: 'cores', value: 8},
{name: 'clock speed', value: 2.6}
]
},
{
name: 'intel cpu 2',
properties: [
{name: 'cores', value: 4},
{name: 'clock speed', value: 1.2}
]
}
]
collectPropertiesFromProducts(products)
// RESULT:
[
{property: 'cores', possibleValues: [4,8]},
{property: 'clock speed', possibleValues: [1.2,2.6]}
]
For now it works great: I can filter products easily by the result and it is all dynamic (I can just add a property to a product and that's it).
The problem is that it scales VERY BADLY, because:
I have to query all of the products just to know their properties
More products/properties = more CPU time = blocking the main thread
My question is: how can I do better? I have a Node server, but moving all the logic there is pretty useless; I could also move the "property collecting" function to a worker thread, but again, I'll have to query all the products...
Instead of dealing with this in the client or in the service itself, you can let MongoDB do the calculations for you. E.g. you could write the following aggregation:
db.getCollection('products').aggregate([
    {
        $unwind: "$properties"
    },
    {
        $project: {
            name: "$properties.name",
            total: {
                $add: ["$properties.value"]
            }
        }
    },
    {
        $group: {
            _id: "$name",
            possibleValues: {
                $addToSet: "$total"
            }
        }
    }
])
You could then expose this query through a custom endpoint (e.g. GET /product-properties) on your node-server and consume the response on the client.
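A minimal sketch of such an endpoint (the route name comes from the suggestion above; the Express setup and the db handle are assumptions):
const express = require('express');
const router = express.Router();

// `db` is assumed to be an already-connected mongodb Db instance
router.get('/product-properties', async (req, res) => {
  const properties = await db.collection('products').aggregate([
    { $unwind: '$properties' },
    { $project: { name: '$properties.name', total: { $add: ['$properties.value'] } } },
    { $group: { _id: '$name', possibleValues: { $addToSet: '$total' } } }
  ]).toArray();
  res.json(properties);
});

module.exports = router;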
You should consider doing multiple requests to the backend (a rough sketch of this flow follows below):
First:
getQueryParams, a new endpoint which returns your RESULT
Second:
A non-filtered request to receive the initial set of products
Third:
When a filter is selected (based on the first request), you do a new request with the selected filter
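A rough sketch of that flow (endpoint names and query parameters are assumptions, not from the answer):
async function loadProductsPage() {
  // 1) filter options, precomputed on the server (the RESULT from above)
  const filters = await fetch('/product-properties').then(r => r.json());
  // 2) the initial, unfiltered set of products
  let products = await fetch('/products').then(r => r.json());
  // 3) when the user picks a filter, re-query with that filter applied
  products = await fetch('/products?cores=8').then(r => r.json());
  return { filters, products };
}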
I'm using react-apollo's Query component in React Native to get data from my backend.
The result looks something like this:
[
{
id: 1,
name: 'US Election',
parties: [
{
name: 'democrats',
id: 4,
pivot: {
id: 3,
answers: [
{
id: 13,
question_id: 3,
reason: 'Lorem ipsum',
__typename: "Answer"
},
{
id: 14,
question_id: 5,
reason: 'Lorem ipsum',
__typename: "Answer"
},
],
__typename: "ElectionPartyPivot"
},
__typename: "Party"
},
],
__typename: "Election"
},
{
id: 2,
name: 'Another one',
parties: [
{
name: 'democrats',
id: 4,
pivot: {
id: 7,
answers: [
{
id: 15,
question_id: 7,
reason: 'Lorem ipsum',
__typename: "Answer"
},
{
id: 18,
question_id: 9,
reason: 'Lorem ipsum',
__typename: "Answer"
},
],
__typename: "ElectionPartyPivot"
},
__typename: "Party"
},
],
__typename: "Election"
}
]
Now, when I console.log the result, the second election "Another one" has the pivot from the first entry, "US Election".
I think this is because of the normalization that goes on within Apollo (because the IDs of the parties are the same in both), but I'm unsure how to fix it so that it does not normalize this, or normalizes it correctly.
EDIT
I came up with this solution, but it looks hacky. I now fetch the election_id together with the party and create a different identifier within the cache. I wonder if this is good practice?
// assuming Apollo Client 2.x, where both come from 'apollo-cache-inmemory'
import { InMemoryCache, defaultDataIdFromObject } from 'apollo-cache-inmemory';

const cache = new InMemoryCache({
dataIdFromObject: object => {
switch (object.__typename) {
case 'Party': return `${object.election_id}:${object.id}`;
default: return defaultDataIdFromObject(object);
}
}
});
const client = new ApolloClient({
uri: config.apiUrl,
cache
});
Yes, providing a custom dataIdFromObject would be necessary in this case. You should consider using Party:${object.election_id}:${object.id} as the key in case there are other Election fields in the future that will require the same treatment.
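A sketch of that key format, assuming election_id is queried alongside the other Party fields:
const cache = new InMemoryCache({
  dataIdFromObject: object => {
    switch (object.__typename) {
      case 'Party':
        // prefix with the typename so the key cannot collide with other types
        return `Party:${object.election_id}:${object.id}`;
      default:
        return defaultDataIdFromObject(object);
    }
  }
});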
This is, at the root, an issue with the way the schema is designed. There's an underlying assumption in GraphQL that while the nodes in your GraphQL may have relationships with one another, they are fully independent of each other as well. That is to say, within the same context, the same node should not represent different data based on the presence or absence of other nodes in the response.
Unfortunately, that's exactly how this response is structured -- we have a node that represents a Party, but its fields are different depending on its relationship to another node -- the Election.
There are two ways to remedy this sort of issue. One way would be to maintain different instances of each Party, with different ids, for each Election. Rather than representing a political party over the course of its life, the underlying data model behind the Party type would represent a political party only in the context of one election.
The other way would be to restructure your schema to more accurately represent the relationships between the nodes. For example, a schema that supported this kind of query:
{
elections {
id
name
parties {
id
name
# omit pivot field on Party type
}
# instead because pivots are related directly to elections, add this field
pivots {
id
answers
# because we may still care what party the pivot is associated with as well
# we can add a party field to pivot to show that relationship
party {
id
name
}
}
}
}
I need to develop a model using mongoose with a field that will hold my object's attributes. My problem is that these attributes are totally changeable, something like:
StockItem1 : {
sku: 23492349,
class: 'computer',
subclass: 'printer',
name: 'Hp Laserjet XXX',
qty: 120,
attr: {
laser: true,
speed: 1200,
color: 'white'
}
}
StockItem2 : {
sku: 22342349,
class: 'homeappliance',
subclass: 'refrigerator',
name: 'GE Refrigerator',
qty: 23,
attr: {
stainlesssteel: true,
doors: 2,
frostfree: true
}
}
The attr fields are totally different depending on what class/subclass the item belongs to.
What type should be given to the attr field in mongoose? I need to filter on them in the future, for example getting all items where attr.doors == 2.
Thanks for helping.
Use a Mixed Schema Type. Here are the docs. Mixed SchemaTypes are sort of an 'anything goes' type of deal. You have flexibility when it comes to defining data but it makes your collection harder to maintain.
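A minimal sketch using the field names from the question (the schema details beyond attr are assumptions):
const mongoose = require('mongoose');

const stockItemSchema = new mongoose.Schema({
  sku: Number,
  class: String,
  subclass: String,
  name: String,
  qty: Number,
  attr: mongoose.Schema.Types.Mixed // free-form, per-class attributes
});

const StockItem = mongoose.model('StockItem', stockItemSchema);

// Filtering on a nested attribute still works with dot notation:
StockItem.find({ 'attr.doors': 2 }).then(items => console.log(items));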
I need to import the following into a store, but I am confused about the correct model or models I need to create.
Here is an example of the JSON that is returned from my server. Basically it's an array with 2 items, with an array in each. The field names are different in each.
I suspect I need more than one model and a relationship between them, but I am unsure where to start. Any ideas? Thanks
[
firstItems: [
{
name : "ProductA",
key: "XYXZ",
closed: true
},
{
name : "ProductB",
key: "AAA",
closed: false
}
],
secondItems : [
{
desc : "test2",
misc: "3333",
},
{
desc : "test1",
misc: "123"
}
]
]
What you have is not valid JSON; your opening and closing [] can become JSON by changing them to {}. Then you can model it as:
// Abbreviated model definitions; the associations config changed starting with Ext 5
Ext.define('FirstItem', { extend: 'Ext.data.Model', fields: ['name', 'key', 'closed'] });
Ext.define('SecondItem', { extend: 'Ext.data.Model', fields: ['desc', 'misc'] });
Ext.define('TopLevel', {
    extend: 'Ext.data.Model',
    hasMany: [
        { model: 'FirstItem', name: 'firstItems' },
        { model: 'SecondItem', name: 'secondItems' }
    ]
});
Use a reader on the store's proxy; it will create the appropriate model data on load.
If you need to load already-fetched JSON into the store, use loadRawData, but you will need the reader in either case.
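A rough sketch of such a store for the firstItems array (the proxy URL is an assumption; rootProperty is the Ext 5+ name for the config, called root in Ext 4):
Ext.define('FirstItemStore', {
    extend: 'Ext.data.Store',
    model: 'FirstItem',
    proxy: {
        type: 'ajax',
        url: '/items.json', // hypothetical endpoint returning the corrected {} JSON
        reader: {
            type: 'json',
            rootProperty: 'firstItems' // picks the nested array out of the response
        }
    }
});

// If the JSON is already in hand, feed it through the same reader:
// store.loadRawData(alreadyLoadedJson);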
I would like to perform fairly complex filtering on Marionette collections.
Is there a way to search for models with DB-like queries, similar to the MongoDB API?
Example:
MarionetteCollection.find({
    type: 'product',
    $or: [{ qty: { $gt: 100 } }, { price: { $lt: 9.95 } }],
    $and: [{ active: true }],
    $sortby: 'name',
    $order: 'asc'
});
Maybe an extension to Marionette.js?
There is nothing in Marionette to help you here, and Marionette doesn't make any changes or additions to the regular Backbone.Collection.
You could take a look at backbone-query. It appears to do what you are wanting.
Backbone has a simple implementation of what you are asking for: Collection.where() and Collection.findWhere() can take an object and will find the models that match it. But it doesn't do more complex matching like greater than, less than, etc.
MarionetteCollection.where({
    type: 'product',
    qty: 55,
    active: true
});