I've searched a lot of the internet but could not find the answer to a simple question. I am very new to Mithril (I don't know why people chose Mithril for this project :( ). I want to iterate through a list of strings and use each value in a dropdown with a checkbox.
const DefaultListView = {
view(ctrl, args) {
const parentCtrl = args.parentCtrl;
const attr = args.attr;
const cssClass = args.cssClass;
const filterOptions = ['Pending', 'Paid', 'Rejected'];
// as of now, all are isMultipleSelection filter
const selectedValue =
parentCtrl.filterModel.getSelectedFilterValue(attr);
function isOptionSelected(value) {
return selectedValue.indexOf(value) > -1;
}
return m('.filter-dialog__default-attr-listing', {
class: cssClass
}, [
m('.attributes', {
onscroll: () => {
m.redraw(true);
}
}, [
filterOptions.list.map(filter => [
m('.dropdown-item', {
onclick() {
// Todo: Add click metrics.
// To be done at time of backend integration.
document.body.click();
}
}, [
m('input.form-check-input', {
type: 'checkbox',
checked: isOptionSelected(filter)
}),
m('.dropdown-text', 'Pending')
])
])
])
]);
}
};
I'm not sure how to do it. This is what I have tried so far, but no luck. Can someone help me with this?
At the beginning of the view function you define an array:
const filterOptions = ['Pending', 'Paid', 'Rejected'];
But later on in the view code where you perform the list iteration, filterOptions is expected to be an object with a list property:
filterOptions.list.map(filter =>
That should be filterOptions.map(filter =>.
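For reference, a rough sketch of the corrected iteration (an assumption about your intent: it also renders the current filter value instead of the hard-coded 'Pending'):

filterOptions.map(filter =>
  m('.dropdown-item', [
    m('input.form-check-input', {
      type: 'checkbox',
      checked: isOptionSelected(filter)
    }),
    // render the current option rather than always 'Pending'
    m('.dropdown-text', filter)
  ])
)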
There may be other issues with your code, but it's impossible to tell without seeing the containing component which passes down the args. You might find it more helpful to ask in the Mithril chatroom, where I and plenty of others are available to discuss and assist with tricky situations.
I have a list of objects displayed, each with a checkbox to select it. I also have a checkbox at the top to select the checkbox for every object, kinda like this (assume [ ] is a checkbox):
[ ] Select all
[ ] Object 1
[ ] Object 2
[ ] Object 3
The problem I have is that when I have about 100 objects and click "Select all", the web page freezes for a good few seconds. There is also a search bar to filter the objects, but I tested removing it and the performance is just as slow. Each object has a selected property so we know which objects are selected. Below are some snippets of my code:
HTML:
<!-- @update:modelValue calls the Vuex code below -->
<checkbox-input
  id="documentSelectAll"
  :model-value="operatingRows.length === 0 ? false : allSelected"
  @update:modelValue="allSelectPressed"
/>
---
<tr
  v-for="(row, index) in operatingRows"
  :key="index"
>
  <!-- @toggleSelectedOnRow calls Vuex to select an individual row -->
  <document-table-row
    :row-idx="index"
    :row-data="row"
    :fields="row.fields"
    :hidden-column-indices="hiddenColumnIndices"
    @toggleSelectedOnRow="toggleSelectedOnRow(row.id)"
  />
</tr>
Computed properties:
operatingRows() {
const showRow = (r) => {
// Some search code, irrelevant here
};
return this.sorted.filter(showRow); // 'sorted' comes from vuex
},
selectedRows() {
return this.operatingRows.filter((r) => r.selected);
},
numSelected() {
return this.selectedRows.reduce((prev, cur) => (cur.selected ? prev + 1 : prev), 0);
},
allSelected() {
return this.numSelected === this.operatingRows.length;
},
Vuex store:
getters: {
...storeStateGetters,
sorted: (state) => state.sorted,
},
---
mutations: {
...storeStateMutations,
SET_ALL_SELECTED_ON_SORTED(state, isSelected) {
state.sorted.map((r) => {
const rn = r;
rn.selected = isSelected;
return rn;
});
},
},
I think it might be because there are too many computed properties? I tried removing them individually (along with the associated code), but the performance still seems bad, so I am not able to pinpoint the issue to any particular piece of code; I think it's the architecture as a whole.
Any help appreciated.
It turns out it was because of the mutation. To fix it, the mapping code was moved to an action, which commits a mutation to set the sorted array in state.
selectOnSorted: ({ commit, rootGetters }, isSelected) => {
const selectedSorted = rootGetters['documents/sorted'].map((doc) => ({
...doc,
selected: isSelected,
}));
commit('SET_SORTED', selectedSorted);
},
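For completeness, the SET_SORTED mutation referenced above can be as simple as replacing the array in one assignment (a sketch; your actual mutation may differ):

mutations: {
  ...storeStateMutations,
  // assumption: replacing the whole array in a single assignment keeps the reactive work cheap
  SET_SORTED(state, sorted) {
    state.sorted = sorted;
  },
},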
I feel like I'm missing something obvious. I have IDs stored as [String] that I want to be able to resolve to the full objects they represent.
Background
This is what I want to enable. The missing ingredient is the resolvers:
const bookstore = `
type Author {
id: ID!
books: [Book]
}
type Book {
id: ID!
title: String
}
type Query {
getAuthor(id: ID!): Author
}
`;
const my_query = `
query {
getAuthor(id: 1) {
books { # <-- should resolve bookIds to actual books I can query
title
}
}
}
`;
const REAL_AUTHOR_DATA = [
{
id: 1,
books: ['a', 'b'],
},
];
const REAL_BOOK_DATA = [
{
id: 'a',
title: 'First Book',
},
{
id: 'b',
title: 'Second Book',
},
];
Desired result
I want to be able to drop a [Book] in the SCHEMA anywhere a [String] exists in the DATA and have Books load themselves from those Strings. Something like this:
const resolve = {
Book: id => fetchToJson(`/some/external/api/${id}`),
};
What I've Tried
This resolver does nothing, the console.log doesn't even get called
const resolve = {
Book(...args) {
console.log(args);
}
}
HOWEVER, this does get some results...
const resolve = {
Book: {
id(id) {
console.log(id)
return id;
}
}
}
Here the console.log does emit 'a' and 'b', but I obviously can't scale that up to X number of fields; that would be ridiculous.
What my team currently does is tackle it from the parent:
const resolve = {
Author: {
books: ({ books }) => books.map(id => fetchBookById(id)),
}
}
This isn't ideal because maybe I have a type Publisher { books: [Book]} or a type User { favoriteBooks: [Book] } or a type Bookstore { newBooks: [Book] }. In each of these cases, the data under the hood is actually [String] and I do not want to have to repeat this code:
const resolve = {
X: {
books: ({ books }) => books.map(id => fetchBookById(id)),
}
};
The fact that defining the Book.id resolver led to the console.log actually firing makes me think this should be possible, but I'm not finding my answer anywhere online. This seems like it would be a pretty common use case, yet I can't find implementation details anywhere.
What I've Investigated
Schema Directives seem like overkill to get what I want; I just want to be able to plug [Book] anywhere a [String] actually exists in the data without having to write [Book] @rest('/external/api') in every single place.
Schema Delegation. In my use case, making Books publicly queryable isn't really appropriate and just clutters my Public schema with unused Queries.
Thanks for reading this far. Hopefully there's a simple solution I'm overlooking. If not, then GQL why are you like this...
If it helps, you can think of it this way: types describe the kind of data returned in the response, while fields describe the actual value of the data. With this in mind, only a field can have a resolver (i.e. a function to tell it what value to resolve to). A resolver for a type doesn't make sense in GraphQL.
So, you can either:
1. Deal with the repetition. Even if you have ten different types that all have a books field that needs to be resolved the same way, it doesn't have to be a big deal. Obviously in a production app, you wouldn't be storing your data in a variable and your code would be potentially more complex. However, the common logic can easily be extracted into a function that can be reused across multiple resolvers:
const mapIdsToBooks = ({ books }) => books.map(id => fetchBookById(id))
const resolvers = {
Author: {
books: mapIdsToBooks,
},
Library: {
books: mapIdsToBooks,
}
}
2. Fetch all the data at the root level instead. Rather than writing a separate resolver for the books field, you can return the author along with their books inside the getAuthor resolver:
function resolve(root, args) {
const author = REAL_AUTHOR_DATA.find(row => row.id === args.id)
if (!author) {
return null
}
return {
...author,
books: author.books.map(id => fetchBookById(id)),
}
}
When dealing with databases, this is often the better approach anyway because it reduces the number of requests you make to the database. However, if you're wrapping an existing API (which is what it sounds like you're doing), you won't really gain anything by going this route.
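Either resolver map gets wired into a schema the same way; here's a minimal sketch assuming graphql-tools' makeExecutableSchema (adjust to whatever server library you're actually using):

const { makeExecutableSchema } = require('graphql-tools');

const schema = makeExecutableSchema({
  typeDefs: bookstore,
  resolvers: {
    Query: {
      // ID arguments arrive as strings, so compare as strings
      getAuthor: (root, { id }) => REAL_AUTHOR_DATA.find(a => String(a.id) === String(id)) || null,
    },
    Author: {
      books: ({ books }) => books.map(id => REAL_BOOK_DATA.find(b => b.id === id)),
    },
  },
});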
I'm calling an API and receiving an array of results. I check for pagination, and if more pages exist I call the next page; I repeat this until there are no more pages.
For each array of results, I call another endpoint and do exactly the same thing: receive an array of results, check for another page, and call the endpoint again. Wash, rinse, repeat.
For instance:
I want to grab a list of countries that might be a paginated response, then for each country I want to grab a list of cities, which might also be paginated. And for each city I execute a set of transformations and then store in a database.
I already tried this, but got stuck:
const grabCountries = Observable.create(async (observer) => {
const url = 'http://api.com/countries'
let cursor = url
do {
const results = await fetch(cursor).then(res => res.json())
// results = {
// data: [ 'Canada', 'France', 'Spain' ],
// next: '47asd8f76358df8f4058898fd8fab'
// }
results.data.forEach(country => { observer.next(country) })
cursor = results.next ? `${url}/${results.next}` : undefined
} while (cursor)
observer.complete()
})
const getCities = {
next: async (country) => {
const url = 'http://api.com/cities'
let cursor = url
do {
const results = await fetch(cursor).then(res => res.json())
// results = {
// data: [
// 'Montreal', 'Toronto',
// 'Paris', 'Marseilles',
// 'Barcelona', 'Madrid'
// ],
// next: '89ghjg98nd8g8sdfg98gs9h868hfoig'
// }
results.data.forEach(city => {
`**** What do I do here?? ****`
})
cursor = results.next ? `${url}/${results.next}` : undefined
} while(cursor)
}
}
I tried a few approaches:
Making a subject (sometimes I'll need to do parallel processing based on the results of 'grabCountries'; for example, I may want to store the countries in a DB in parallel with grabbing the cities.)
const intermediateSubject = new Subject()
intermediateSubject.subscribe(storeCountriesInDatabase)
intermediateSubject.subscribe(getCities)
I also tried piping and mapping, but it seems like it's basically the same thing.
As I was writing this I thought of this solution, and it seems to be working fine; I would just like to know if I'm making this too complicated. There might be cases where I need to make more than just a few API calls in a row. (Imagine: Countries => States => Cities => Bakeries => Reviews => Comments => Replies.) So this weird pattern of mapping over another observer callback might get nasty.
So this is what I have now basically:
// grabCountries stays the same as above, but the rest is as follows:
const grabCities = (country) =>
Observable.create(async (observer) => {
const url = `http://api.com/${country}/cities`
let cursor = url
do {
const results = await fetch(cursor).then(res => res.json())
// results = {
// data: [
// 'Montreal', 'Toronto',
// 'Paris', 'Marseilles',
// 'Barcelona', 'Madrid'
// ],
// next: '89ghjg98nd8g8sdfg98gs9h868hfoig'
// }
results.data.forEach(city => {
observer.next(city)
})
cursor = results.next ? `${url}/${results.next}` : undefined
} while (cursor)
observer.complete()
})
const multiCaster = new Subject()
grabCountries.subscribe(multiCaster)
multiCaster.pipe(map((country) => {
grabCities(country).pipe(map(saveCityToDB)).subscribe()
})).subscribe()
multiCaster.pipe(map(saveCountryToDB)).subscribe()
tl;dr - I call an API that returns a paginated set of results in an array, and I need to map over each item and call another API that returns another paginated set of results, each set also in an array.
Is nesting one observable inside another and mapping through the results via 'callApiForCountries.pipe(map(forEachCountryCallApiForCities))' the best method or do you have any other recommendations?
Here's code that should work, crawling the next URL sequentially: you start with a { next: url } seed and keep expanding until res.next is no longer available.
// a sketch assuming fetch resolves to the parsed page and `next` is a usable URL;
// adapt the URL building (e.g. `${url}/${res.next}`) if your API returns a cursor token
of({ next: 'http://api.com/cities' }).pipe(
  expand(res => res.next ? from(fetch(res.next).then(r => r.json())) : EMPTY),
  takeWhile(res => res.next !== undefined, true)
).subscribe()
OK, so I have spent a lot of brain power on this and have come up with two solutions that seem to be working.
const nestedFlow = () => {
fetchAccountIDs.pipe(map(accountIds => {
getAccountPostIDs(accountIds) // Has the do loop for paging inside
.pipe(
map(fetchPostDetails),
map(mapToDBFormat),
map(storeInDB)
).subscribe()
})).subscribe()
}
const expandedFlow = () => {
fetchAccountIDs.subscribe((accountId) => {
// accountId { accountId: '345367geg55sy'}
getAccountPostIDs(accountId).pipe(
expand((results) => {
/*
results : {
postIDs: [
131424234,
247345345,
],
cursor: '374fg8v0ggfgt94',
}
*/
const { postIDs, cursor } = results
if (cursor) return getAccountPostIDs({...accountId, cursor})
return { postIDs, cursor }
}),
takeWhile(hasCursor, true), // recurs until cursor is undefined
concatMap(data => data.postIDs),
map(data => ({ post_id: data })),
map(fetchPostDetails),
map(mapToDBFormat),
map(storeInDB)
).subscribe()
})
}
Both seem to work with similar performance. I read somewhere that leaving the data flow is bad practice and that you should pipe everything, but I don't know how to eliminate the first exit in 'expandedFlow' because 'expand' needs to return an observable; maybe it can be done.
Now I just have to solve the race condition between the moment 'complete' is called in getAccountPostIDs and the moment the last record is stored in the DB. Currently in my test, observer.complete fires before 3 of the upsert actions finish.
Any comments are appreciated and I hope this helps someone out in the future.
What you need is the expand operator. It behaves recursively so it fits the idea of having paginated results.
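As a rough illustration (fetchPage here is a hypothetical helper that resolves to an object shaped like { data, next }, matching the responses in the question):

const fetchPage = (cursor) =>
  fetch(cursor).then(res => res.json()) // hypothetical: resolves to { data, next }

from(fetchPage('http://api.com/countries')).pipe(
  // feed each page back into expand while it still carries a `next` cursor
  expand(page => page.next
    ? from(fetchPage(`http://api.com/countries/${page.next}`))
    : EMPTY),
  // flatten each page's array so subscribers receive one country at a time
  concatMap(page => page.data)
).subscribe(country => console.log(country))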
In my Vue application, I have Vuex store modules with large arrays of resource objects in their state. To easily access individual resources in those arrays, I make Vuex getter functions that map resources or lists of resources to various keys (e.g. 'id' or 'tags'). This leads to sluggish performance and a huge memory footprint. How do I get the same functionality and reactivity without so much duplicated data?
Store Module Example
export default {
state: () => ({
all: [
{ id: 1, tags: ['tag1', 'tag2'] },
...
],
...
}),
...
getters: {
byId: (state) => {
return state.all.reduce((map, item) => {
map[item.id] = item
return map
}, {})
},
byTag: (state) => {
return state.all.reduce((map, item, index) => {
for (let i = 0; i < item.tags.length; i++) {
map[item.tags[i]] = map[item.tags[i]] || []
map[item.tags[i]].push(item)
}
return map
}, {})
},
}
}
Component Example
export default {
...,
data () {
return {
itemId: 1
}
},
computed: {
item () {
return this.$store.getters['path/to/byId'][this.itemId]
},
relatedItems () {
return this.item && this.item.tags.length
? this.$store.getters['path/to/byTag'][this.item.tags[0]]
: []
}
}
}
To fix this problem, look to an old, standard practice in programming: indexing. Instead of storing a map with the full item values duplicated in the getter, you can store a map to the index of the item in state.all. Then, you can create a new getter that returns a function to access a single item. In my experience, the indexing getter functions always run faster than the old getter functions, and their output takes up a lot less space in memory (on average 80% less in my app).
New Store Module Example
export default {
state: () => ({
all: [
{ id: 1, tags: ['tag1', 'tag2'] },
...
],
...
}),
...
getters: {
indexById: (state) => {
return state.all.reduce((map, item, index) => {
// Store the `index` instead of the `item`
map[item.id] = index
return map
}, {})
},
byId: (state, getters) => (id) => {
return state.all[getters.indexById[id]]
},
indexByTags: (state) => {
return state.all.reduce((map, item, index) => {
for (let i = 0; i < item.tags.length; i++) {
map[item.tags[i]] = map[item.tags[i]] || []
// Again, store the `index` not the `item`
map[item.tags[i]].push(index)
}
return map
}, {})
},
byTag: (state, getters) => (tag) => {
return (getters.indexByTags[tag] || []).map(index => state.all[index])
}
}
}
New Component Example
export default {
...,
data () {
return {
itemId: 1
}
},
computed: {
item () {
return this.$store.getters['path/to/byId'](this.itemId)
},
relatedItems () {
return this.item && this.item.tags.length
? this.$store.getters['path/to/byTag'](this.item.tags[0])
: []
}
}
}
The change seems small, but it makes a huge difference in terms of performance and memory efficiency. It is still fully reactive, just as before, but you're no longer duplicating all of the resource objects in memory. In my implementation, I abstracted out the various indexing methodologies and index expansion methodologies to make the code very maintainable.
You can check out a full proof of concept on github, here: https://github.com/aidangarza/vuex-indexed-getters
While I agree with @aidangarza, I think your biggest issue is the reactivity, specifically the computed properties. They add a lot of bloated logic and slow code that listens for everything - something you don't need.
Finding the related items will always mean looping through the whole list - there's no easy way around it. But it will be much faster if you call it yourself.
What I mean is that computed properties are for things that need to be computed; you are really just filtering your results. Put a watcher on your variables, and then call the getters yourself. Something along these lines (semi-code):
watch: {
itemId() {
this.item = this.$store.getters['path/to/byId'][this.itemId]
}
}
You can test with item first, and if it works better (which I believe it will), add a watcher for the more complex tags.
Good luck!
While only storing select fields is a good intermediate option (per @aidangarza), it's still not viable when you end up with really huge sets of data. E.g. actively working with 2 million records of "just 2 fields" will still eat your memory and ruin browser performance.
In general, when working with large (or unpredictable) data sets in Vue (using Vuex), simply skip the getter and commit mechanisms altogether. Keep using Vuex to centralize your CRUD operations (via actions), but do not try to "cache" the results; rather, let each component cache what it needs for as long as it's using it (e.g. the current working page of the results, some projection thereof, etc.).
In my experience, Vuex caching is intended for reasonably bounded data, or bounded subsets of data in the current usage context (i.e. for the currently logged-in user). When you have data whose scale you can't predict, keep access to it on an "as needed" basis in your Vue components via actions only; no getters or mutations for those potentially huge data sets.
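A rough sketch of that pattern (the module, action, and helper names here are hypothetical):

// store module: expose an action only; don't cache the large result set in state
actions: {
  fetchPage(context, { page, pageSize }) {
    // fetchRecords is a hypothetical API helper returning a promise of rows
    return fetchRecords({ page, pageSize })
  },
},

// component: keep just the current working page in local data
export default {
  data () {
    return { rows: [] }
  },
  async created () {
    this.rows = await this.$store.dispatch('records/fetchPage', { page: 1, pageSize: 50 })
  },
}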
I often find myself struggling with manipulating a specific item in an array in a React component's state. For example:
state={
menus:[
{
id:1,
title: 'something',
'subtitle': 'another something',
switchOn: false
},
{
id:2,
title: 'something else',
'subtitle': 'another something else',
switchOn: false
},
]
}
This array is filled with objects that have various properties. One of those properties is of course a unique ID. This is what I have done recently to edit the switchOn property of an item, according to its ID:
handleSwitchChange = (id) => {
const newMenusArray = this.state.menus.map((menu) => {
if (menu.id === id) {
return {
...menu,
switchOn: !menu.switchOn
};
} else {
return menu;
};
})
this.setState(()=>{
return{
menus: newMenusArray
}
})
}
As you can see, a lot of trouble just to change one value. In AngularJS (1.x), I would just use the fact that objects are passed by reference and mutate it directly, without any ES6 hassle.
Is it possible I'm missing something, and there is a much more straightforward approach to dealing with this? Any example would be greatly appreciated.
A good way is to build yourself an indexed map. Like you might know from databases, they do not iterate over all entries but use indexes. An index is just a way of saying "ID A points to the object whose ID is A".
So what I do is build an indexed map, e.g. with a reducer:
const map = data.reduce((map, item) => {
map[item.id] = item;
return map;
}, {})
Now you can access your item by ID simply by writing
map[myId]
If you want to change it, you can then use Object.assign or the spread (...) syntax:
return {
  // copy the whole map if you want it to stay immutable
  ...map,
  // override your object
  [id]: {
    // copy it first
    ...map[id],
    // override what you want
    switchOn: !map[id].switchOn
  }
}
As a helper library, I could suggest Immutable.js, where you just change the value as if it were a reference.
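For example, a quick sketch with Immutable.js (assuming the same menus state as in the question):

import { fromJS } from 'immutable'

// convert once (ideally you would keep the state as Immutable structures throughout)
const menus = fromJS(this.state.menus)
const index = menus.findIndex(menu => menu.get('id') === id)
// updateIn returns a new structure; the original is left untouched
const updated = menus.updateIn([index, 'switchOn'], switchOn => !switchOn)
this.setState({ menus: updated.toJS() })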
I usually use findIndex
handleSwitchChange = (id) => {
var index = this.state.menu.findIndex((item) => {
return item.id === id;
});
if (index === -1) {
return;
}
let newMenu = this.state.menu.slice();
// copy the object before toggling so the item in state isn't mutated directly
newMenu[index] = { ...newMenu[index], switchOn: !newMenu[index].switchOn };
this.setState({
menu: newMenu
});
}