sorting two associative arrays/stacks - javascript

I am implementing an algorithm I designed and am exploring different approaches.
This isn't a homework problem, but I am going to explain it like one: let's say a merchant has bought inventory of apples on different days, and also sold some on different days. I want the weighted average timestamp of their current purchases.
I am storing this data as an object keyed by the timestamp string in epoch time, with the quantity of apples as the value. My dataset actually has the purchases and the sales in separate data sets, like so:
//buys
var incomingArray = {
    "1518744389": 10,
    "1318744389": 30
};
//sells
var outgoingArray = {
    "1518744480": 3,
    "1418744389": 5,
    "1408744389": 8
};
and I would like the outcome to show only the remaining incomingArray timestamp/quantity pairs.
var incomingArrayRemaining = {
    "1518744389": 7,
    "1318744389": 17
};
Here you see there was one outgoing transaction for 3 apples at a later timestamp, therefore subtracting from the 10. And there were outgoing transactions totaling 13 apples with timestamps before the buy of 10 but after the buy of 30, so they only subtract from the 30.
Note that if more than 10 had been transferred out after the buy of 10, the excess would subtract from both the 10 and the 30. The number of apples can never be less than 0.
First, to accomplish my goals it seems that I need to know how many apples are actually still owned from the lot they were purchased in.
Instead of doing stack subtraction in the LIFO method, it seems like this has to be more like tax lot accounting, where the lots themselves are treated independently.
Therefore I would have to take the timestamp of the first sell in the outgoing array and find the nearest older buy timestamp in the incoming array.
Here is what I tried:
for (var ink in incomingArray) {
    var inInt = parseInt(ink);
    for (var outk in outgoingArray) {
        if (inInt >= 0) {
            var outInt = parseInt(outk);
            if (outInt >= inInt) {
                inInt = inInt - outInt;
                if (inInt < 0) {
                    outInt = inInt * -1; // remainder
                    inInt = 0;
                } // end if
            } // end if
        } // end if
    } // end inner for
} // end outer for
It is incomplete, and the nested for loop solution will already have poor computational time.
That function merely tries to sort out the transactions so that only the remaining balances are left, by subtracting an outgoing quantity from the nearest incoming balance and carrying any remainder to the next incoming balance.
I feel like a recursive solution would be better, or maybe something more elegant that I hadn't thought of (a nested Object forEach accessor in JavaScript).
After I get them sorted then I need to actually do the weighted average method, which I have some ideas for already.
First sorting, then weighted average of the remaining quantities.
Anyway, I know the JavaScript community on Stack Overflow is particularly harsh about asking for help, but I'm at an impasse because I want not only a solution but a computationally efficient one, so I will probably put a bounty on it.

You could convert the objects into an array of timestamp-value pairs; outgoing ones become negative. Then you can easily sort them by timestamp and accumulate however you like:
const purchases = Object.entries(incomingArray)
    .concat(Object.entries(outgoingArray).map(([ts, val]) => [ts, -val]));
purchases.sort(([ts1], [ts2]) => ts1 - ts2);
Now you could iterate over the sorted pairs and store the delta in a new array whenever the value is positive (a new incoming):
const result = [];
let delta = 0, lastIngoing = purchases[0][0];
for (const [time, value] of purchases) {
    if (value > 0) {
        // Store the old
        result.push([lastIngoing, delta]);
        // Set up new
        delta = 0;
        lastIngoing = time;
    } else {
        delta += value;
    }
}
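Separately from the accumulation above, the lot-by-lot subtraction the question describes (each sale consumes the most recent purchase made at or before it, spilling any remainder into older lots) together with the weighted average can be sketched roughly as follows. This reuses incomingArray and outgoingArray from the question and is only an illustration, not a drop-in answer:
// A minimal sketch: consume each sale from the most recent purchase lot made
// at or before the sale's timestamp, carrying any remainder into older lots,
// then take the quantity-weighted average of the timestamps still holding apples.
function remainingLots(buys, sells) {
    // Purchase lots, newest first, as mutable [timestamp, quantity] pairs.
    const lots = Object.entries(buys)
        .map(([ts, qty]) => [Number(ts), qty])
        .sort((a, b) => b[0] - a[0]);

    for (const [ts, qty] of Object.entries(sells)) {
        let remaining = qty;
        const sellTime = Number(ts);
        for (const lot of lots) {
            if (remaining <= 0) break;
            if (lot[0] > sellTime) continue; // lot purchased after the sale
            const used = Math.min(lot[1], remaining);
            lot[1] -= used;
            remaining -= used;
        }
    }
    return lots.filter(([, qty]) => qty > 0);
}

function weightedAverageTimestamp(lots) {
    const total = lots.reduce((sum, [, qty]) => sum + qty, 0);
    const weighted = lots.reduce((sum, [ts, qty]) => sum + ts * qty, 0);
    return total ? weighted / total : 0;
}

const lots = remainingLots(incomingArray, outgoingArray);
// lots -> [[1518744389, 7], [1318744389, 17]]
console.log(weightedAverageTimestamp(lots));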

Related

Vue computed property overwriting global state without vuex

I have a list of people who have scores. In state I have them listed in an array, one of the items in the array is 'scoreHistory' which is an array of objects containing their scores at different points in time. I want to filter this set for different time periods i.e. -5 days, -30 days so instead of just seeing the overall score I can see the scores if everyone started at 0 say 30 days ago.
I have it (kind of) working. See my code below:
filteredScores () {
    if (!this.people) {
        return
    }
    // Here I was trying to ensure there was a copy of the array created in order to not change the original array. I thought that might have been the problem.
    let allPeople = this.people.slice(0) // this.people comes from another computed property with a simple getter. Returns an array.
    let timeWindow = 30 // days
    const windowStart = moment().subtract(timeWindow, 'days').toDate()
    for (const p of allPeople) {
        let filteredScores = p.scoreHistory.filter(score => moment(score.date.toDate()).isSameOrAfter(windowStart, 'day'))
        p.scoreHistory = filteredScores
        // calculate new score
        p.score = p.scoreHistory.reduce(function (sum, item) {
            return sum + item.voteScore
        }, 0)
    }
    return allPeople
}
I expected it to return to me a new array where each person's score is summed up over the designated time period. It seems to do that OK. The problem is that it is altering the state that this.people reads from which is the overall data set. So once it filters all that data is gone. I don't know how I am altering global state without using vuex??
Any help is greatly appreciated.
Your problem isn't that you're modifying the array, but that you're modifying the objects within the array. You change the scoreHistory and score properties of each item in the array. What you want to do instead is create a new array (I recommend using map) where each item is a copy of the existing item plus a new score property.
filteredScores () {
    if (!this.people) {
        return
    }
    let timeWindow = 30 // days
    const windowStart = moment().subtract(timeWindow, 'days').toDate()
    return this.people.map(p => {
        let filteredScores = p.scoreHistory.filter(score => moment(score.date.toDate()).isSameOrAfter(windowStart, 'day'))
        // calculate new score
        let score = filteredScores.reduce(function (sum, item) {
            return sum + item.voteScore
        }, 0)
        // Create a new object containing all the properties of p plus the new score
        return {
            ...p,
            score
        }
    })
}
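One thing worth noting: the spread only makes a shallow copy, so scoreHistory on the returned object still points at the original array, which is fine here because it is never mutated. If you also want the filtered history on the result, add it as a new property instead of reassigning p.scoreHistory; a small, purely illustrative variant of the return statement:
// Variant: also expose the filtered history without touching p
return {
    ...p,
    scoreHistory: filteredScores,
    score
}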

I want to get the average of my Firebase data

I want to average the related values when the data in Firebase is updated.
I am using Cloud Functions for Firebase and cannot load the data.
I can change the data I want when the event occurs, but I cannot calculate the average of the data.
exports.taverage = functions.database.ref('/User/tsetUser/monthQuit/{pushId}')
    .onCreate((snapshot, context) => {
        const promiseRoomUserList = admin.database().ref('/User/tsetUser/monthQuit/{pushId}').once('value');
        var sum = 0;
        const arrayTime = [];
        snapshot.forEach(snapshot => {
            arrayTime.push('/User/tsetUser/monthQuit/{pushId}'.val());
        })
        for (let i = 0; i < arrayTime.length; i++) {
            sum += arrayTime[i];
        }
        return admin.database().ref('/User/tsetUser/inform/standardQuit').set(sum);
    });
// I want the 'standardQuit' value set to the average.
I'm not sure why you can't calculate the average, but a simpler version of your code would be:
exports.taverage = functions.database.ref('/User/tsetUser/monthQuit/{pushId}')
    .onCreate((snapshot, context) => {
        // Read all existing values under monthQuit; the {pushId} wildcard only
        // applies to the trigger path, so the query uses the parent node.
        return admin.database().ref('/User/tsetUser/monthQuit').once('value')
            .then(function(snapshot) {
                let sum = 0;
                snapshot.forEach(child => {
                    sum = sum + child.val();
                })
                let avg = sum / snapshot.numChildren();
                return admin.database().ref('/User/tsetUser/inform/standardQuit').set(avg);
            });
    });
The biggest differences:
This code returns promises from both the top-level, and the nested then(). This is needed so Cloud Functions knows when your code is done, and it can thus stop billing you (and potentially shut down the container).
We simply add the value of each child to the sum, since you weren't using the array in any other way. Note that the child.val() depends on your data structure, which you didn't share. So if it fails there, you'll need to update how you get the exact value (or share your data structure with us).
The code actually calculates the average by dividing the sum by the number of child nodes.
Consider using a moving average
One thing to keep in mind is that you're now reading all nodes every time one node gets added. This operation will get more and more expensive as nodes are added. Consider whether you can use a moving average, which wouldn't require all child nodes, but merely the current average and the new child node. The value will be an approximate average where more recent values typically have more weight, and it is much cheaper to calculate:
exports.taverage = functions.database.ref('/User/tsetUser/monthQuit/{pushId}')
    .onCreate((snapshot, context) => {
        return admin.database().ref('/User/tsetUser/inform/standardQuit').transaction(function(avg) {
            if (!avg) avg = 0;
            return (15.0 * avg + snapshot.val()) / 16.0;
        });
    });
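For reference, that transaction maintains an exponentially weighted moving average: the previous average keeps a weight of 15/16 and the new sample gets 1/16, so older values decay geometrically. A small sketch of the same update rule with the smoothing factor pulled out into a constant (the name ALPHA is only for illustration; a larger ALPHA reacts faster to new values):
// Same update rule as above; ALPHA = 1/16 reproduces (15 * avg + sample) / 16.
const ALPHA = 1 / 16;
function updateAverage(avg, sample) {
    if (!avg) avg = 0;
    return (1 - ALPHA) * avg + ALPHA * sample;
}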

Creating a constraint check in Angular

I have a list of questions that can be put into a random order by the system, but at the same time there are constraints that certain questions need to come before others, for example:
q1 before q4, q3 before q2
I have to also make sure that there isn't something like this in there though:
q1 before q4, q3 before q2, q2 before q1, q4 before q2
I'm not entirely sure how this is going to be done.
A way of storing this I was given as an example would be an array of number pairs: let n: [number, number][]
If you're lazy, which I am, since there are only a few questions, I would just randomly order the questions and check the result against the rules; if it violates them, just run it again (a quick sketch follows below).
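A rough sketch of that shuffle-and-retry idea, assuming the rules are consistent (no cycles), with questions as an array of ids and rules as an array of [before, after] pairs (both names are just for illustration):
// Shuffle, then retry until no rule is violated.
function shuffle(arr) {
    const copy = arr.slice();
    for (let i = copy.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [copy[i], copy[j]] = [copy[j], copy[i]];
    }
    return copy;
}

function satisfies(order, rules) {
    return rules.every(([before, after]) => order.indexOf(before) < order.indexOf(after));
}

function randomOrder(questions, rules) {
    let order;
    do {
        order = shuffle(questions);
    } while (!satisfies(order, rules));
    return order;
}

// randomOrder(['q1', 'q2', 'q3', 'q4'], [['q1', 'q4'], ['q3', 'q2']])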
If this is a homework assignment and you'll be marked down for being lazy, you can write a function that randomly assigns questions to slots in an array, starting with the questions that must come after others, then restrict the range of the random number generator for the ones that must come earlier so that their maximum position is the position the later question was given.
For example:
Let's say you have an array of questions of length 5, and q1 has to appear before q3 and q4 has to appear before q2. First place q3 in a random position in an array of length 5; let's say that position is 2. Then randomly assign q1 to a position less than 2. Do that for each rule, removing each assigned position from the RNG's pool; once all the rules are handled, place the remaining questions.
You can represent a constraint as a tuple where the first string is the question which must come first and the second is the question which must come second. rules: string[][] = [['q1', 'q3'], ['q4', 'q2']] would be your rules, and you would have all questions in a questions array questions: string[] = ['q1', 'q2', ...]. Each time a question is assigned from the tuples, remove it from a working copy of the questions array using splice with the index of the string. Here's some code which should work with those two data types; notice I use an array of available indices to exclude previously chosen indices:
questions: string[]; // put all questions here.
rules: string[][];   // put all "must come before" rules as tuples here,
                     // i.e. q1 must come before q2 written as [['q1', 'q2'], ...]

randomizer(): string[] {
    const arr = new Array<string>(this.questions.length);
    const remaining = this.questions.slice(); // questions not yet placed
    const availableIndices: number[] = [];
    let rand: number;
    for (let i = 0; i < arr.length; i++) {
        availableIndices.push(i);
    }
    for (const tuple of this.rules) {
        // Place the "must come second" question first, anywhere except the
        // earliest free slot, so there is always room to its left.
        rand = 1 + Math.floor(Math.random() * (availableIndices.length - 1));
        const laterPos = availableIndices[rand];
        arr[laterPos] = tuple[1];
        remaining.splice(remaining.indexOf(tuple[1]), 1);
        availableIndices.splice(rand, 1);
        // Now place the "must come first" question in a random free slot to the
        // left of it. availableIndices stays sorted, so those slots are the first entries.
        const smaller = availableIndices.filter(i => i < laterPos).length;
        rand = Math.floor(Math.random() * smaller);
        arr[availableIndices[rand]] = tuple[0];
        remaining.splice(remaining.indexOf(tuple[0]), 1);
        availableIndices.splice(rand, 1);
    }
    for (const q of remaining) {
        rand = Math.floor(Math.random() * availableIndices.length);
        arr[availableIndices[rand]] = q;
        availableIndices.splice(rand, 1);
    }
    return arr;
}
This algorithm is O(n) so your teacher can't complain.

Create Groups Without Repeating Previous Groupings

I have created a random group creator, but random doesn't really guarantee that you work with people you haven't worked with before. If someone was able to generate a "Random Group Generator With History" that tracked previous groups and avoided putting people in groups with the same people over and over, I would definitely use it! Does anyone know how to do this?
For clarity: Given an array of strings
["Jason", "Kim", "Callie", "Luke"]
and an array of previous pairings (also arrays)
[[["Jason", "Kim"], ["Callie", "Luke"]], [["Jason", "Luke"], ["Callie", "Kim"]]]
return groupings with the fewest number of repeat group members
[["Jason", "Callie"], ["Luke", "Kim"]]
I'm imagining that the number I am trying to minimize is the number of repeat partners. So for each pair of two people, every time they have already been on a team together, if the result puts them on the same team again, that adds one to the result's score. For the example, the "scoring" to arrive at the return value could look like this:
["Jason", "Kim"] have a score of 1, they have been paired together before
["Callie", "Luke"] have a score of 1, they have been paired together before
["Jason", "Luke"] have a score of 1, they have been paired together before
["Callie", "Kim"] have a score of 1, they have been paired together before
["Jason", "Callie"] have a score of 0, they have not been paired together before
["Luke", "Kim"] have a score of 0, they have not been paired together before
Choose the sets that cover the entire list while generating the smallest score. In this case, the pairings ["Jason", "Callie"] and ["Luke", "Kim"] cover the entire set, and have a score of 0 (no repeated groupings) and therefore it is an optimal solution (0 being the best possible outcome).
This is probably the wrong way to do this (since I'm imagining it would take n squared time), but hopefully it gives a sense of what I'm trying to optimize for. This would not need to be a perfect optimization, just a "decent answer" that doesn't put the same groups together every single time.
Ideally, it would be able to handle any size group, and also be able to handle the fact that someone might be out that day (not all people will be in all of the arrays). I would love a javascript answer, but I should be able to translate if someone can come up with the logic.
You could collect all pairings in an object and count them. Then take the ones with the smallest counts first.
function getKey(array) {
    return array.slice().sort().join('|');
}

var strings = ["Jason", "Kim", "Callie", "Luke"],
    data = [[["Jason", "Kim"], ["Callie", "Luke"]], [["Jason", "Luke"], ["Callie", "Kim"]]],
    object = {},
    i, j,
    keys;

for (i = 0; i < strings.length - 1; i++) {
    for (j = i + 1; j < strings.length; j++) {
        object[getKey([strings[i], strings[j]])] = 0;
    }
}

data.forEach(function (a) {
    a.forEach(function (b) {
        object[getKey(b)]++;
    });
});

// least-used pairs first
keys = Object.keys(object).sort(function (a, b) {
    return object[a] - object[b];
});

keys.forEach(function (k) {
    console.log(k, object[k]);
});
console.log(object);
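To go from those counts to actual groups, one possible follow-up (a rough greedy sketch, not part of the answer above, and not guaranteed optimal) is to walk the pair keys from least-used to most-used and take any pair whose members are still unassigned; it reuses object and strings from the snippet:
// Greedy pairing: least-used pairs first, skip pairs with an already-used member.
function pickPairs(counts, people) {
    const used = new Set();
    const groups = [];
    const keys = Object.keys(counts).sort((a, b) => counts[a] - counts[b]);
    for (const key of keys) {
        const [a, b] = key.split('|');
        if (!used.has(a) && !used.has(b)) {
            groups.push([a, b]);
            used.add(a);
            used.add(b);
        }
    }
    const leftover = people.filter(p => !used.has(p)); // odd person out, if any
    return { groups, leftover };
}

console.log(pickPairs(object, strings));
// sample data -> groups [["Callie", "Jason"], ["Kim", "Luke"]], leftover []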

immutable.js filter and mutate (remove) found entries

I have two loops: one for each day of the month, the other over all events for this month. Let's say I have 100 000 events.
I'm looking for a way to remove events from the main events List once they were "consumed".
The code is something like:
const calendarRange = [{initialDate}, {initialDate}, {initialDate}, {initialDate}, ...] // say we have 30 dates, one for each day
const events = fromJS([{initialDate}, {initialDate}, {initialDate}, ...]) // let's say we have 100 000

calendarRange.map((day) => {
    const dayEvents = events.filter((event) => day.get('initialDate').isSame(event.get('initialDate'), 'day')) // we get all events for each day
    doSomeThingWithDays(dayEvents)
    // how could I subtract `dayEvents` from `events` in a way that
    // the next calendarRange iteration has fewer events to filter?
    // the order of the first loop must be preserved (because it runs from day 1 to day 30/31)
})
With lodash I could just do something like:
calendarRange.map((day) => {
    const dayEvents = events.filter((event) => day.get('initialDate').isSame(event.get('initialDate'), 'day')) // we get all events for each day
    doSomeThingWithDays(dayEvents)
    pullAllWith(events, dayEvents, (a, b) => a === b)
})
How can I accomplish the same optimization with Immutable.js? I'm not really expecting a solution for my way of iterating the list, but rather a smart way of reducing the events List so that it gets smaller and smaller.
You can try a Map with the events split into bins (based on your example, you would bin by date): you can look up a bin, process it as a batch and remove it in O(1). Immutable maps are fairly inexpensive, and fare much better than iterating over lists. You incur the cost of a one-time binning, but amortize it over O(1) lookups.
Something like this perhaps:
const eventbins = OrderedMap(events.groupBy(evt => evt.get('initialDate').dayOfYear() /* or whatever selector */))

function iter(list, bins) {
    if (list.isEmpty())
        return
    const day = list.first()
    const dayEvents = bins.get(day.dayOfYear())
    doSomeThingWithDays(dayEvents)
    iter(list.shift(), bins.delete(day.dayOfYear()))
}

iter(rangeOfDays, eventbins)
By removing already processed elements you are not going to make anything faster. The cost of all the filter operations will be halved on average, but constructing the new list in every iteration will cost you some CPU cycles, so it is not going to be significantly faster (in a big-O sense). Instead, you could build an index, for example an immutable Map keyed by the initialDate-s, making all the filter calls unnecessary.
const calendarRange = Immutable.Range(0, 10, 2).map(i => Immutable.fromJS({initialDate: i}));
const events = Immutable.Range(0, 20).map(i => Immutable.fromJS({initialDate: i % 10, i: i}));
const index = events.groupBy(event => event.get('initialDate'));

calendarRange.forEach(day => {
    const dayEvents = index.get(day.get('initialDate'));
    doSomeThingWithDays(dayEvents);
});

function doSomeThingWithDays(data) {
    console.log(data);
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/immutable/3.8.1/immutable.js"></script>
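If you do still want the event collection to shrink as each day is consumed, as the question asks, the grouped index makes that cheap as well: look up the day's bin, then delete it. A small sketch assuming the index built above (wrapping it in Immutable.Map just to guarantee a persistent delete()):
// Hypothetical variant: consume each day's bin so the map keeps shrinking.
let remaining = Immutable.Map(index);
calendarRange.forEach(day => {
    const key = day.get('initialDate');
    doSomeThingWithDays(remaining.get(key));
    remaining = remaining.delete(key); // returns a new, smaller Map
});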
