I found this code online (it was not written by me), and was hoping to get an answer about the mathematical formula or concept that makes this function work. I am curious how this person designed it. First I will explain the requirements that the function must satisfy, then I will supply the code and a link to a working CodePen for further hacking. P.S. The problem uses the word "vector", but since this is JavaScript, vector just means array.
Function Requirements
Given a vector of vectors of words, ex.
[['quick', 'lazy'], ['brown', 'black', 'grey'], ['fox', 'dog']].
Write a function that prints all combinations of one word from the first vector, one word from the second vector, etc.
The solution may not use recursion. The number of vectors and number of elements within each vector may vary.
Example output: 'quick brown dog', 'lazy black fox', etc.
My Current Level Of Understanding
I am already aware that, by the multiplication principle, the number of possible combinations in this scenario is just the product of the lengths of the inner vectors. For this specific example, we get a total of 12 (2 × 3 × 2) different possible combinations. Where I fall off, however, is inside the section of the program with the 4 nested for loops.
Whoever wrote the code clearly understands some concept or formula that I do not. Two examples are the "previous" variable used inside the loops, and the strategic placement of where they decide to increment the j variable. It seems to me that they might be aware of some mathematical formula.
Code
Below is the code without comments. If you go to this CodePen, however, I have included the same code with plenty of comments that explain how the program works, so you don't have to trace everything out from scratch. You can also test the output in the built-in console.
function comboMaker(vector) {
  var length = vector.length;
  var solutions = 1;
  for (var i = 0; i < length; i++) {
    solutions *= vector[i].length;
  }
  var combinations = [];
  for (var i = 0; i < solutions; i++) {
    combinations[i] = [];
  }
  var previous = 1;
  for (var i = 0; i < length; i++) {
    for (var j = 0; j < solutions;) {
      var wordCount = vector[i].length;
      previous *= vector[i].length;
      for (var l = 0; l < wordCount; l++) {
        for (var k = 0; k < (solutions / previous); k++) {
          combinations[j][i] = vector[i][l];
          j++;
        }
      }
    }
  }
  for (var i = 0; i < solutions; i++) {
    console.log(combinations[i].join(" "));
  }
}
comboMaker([['quick', 'lazy'], ['brown', 'black', 'grey'], ['fox', 'dog']]);
You can consider a combination of items as a number in a mixed-radix numeral system.
The radix for every position is equal to the number of items in the corresponding array (here {2, 3, 2}). The overall number of combinations, M, is the product of all the radixes.
You can generate the combinations either
by making a for-loop with a counter in the range 0..M-1, separating every digit from this counter, and getting the corresponding item. Pseudocode:
M = ProductOfLengthsOfArrays
for c = 0..M-1
    t = c
    combination = {}
    for i = 0..NumOfArrays-1
        d = t % Array[i].Length    // modulo operation
        t = t / Array[i].Length    // integer division
        combination.add(Array[i][d])
    output combination
or by counting in mixed radix from 0 to M-1:
if the item is the last one in its array, go back to the first item and increment the position in the next array
otherwise, get the next item in the same array
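For concreteness, here is a JavaScript sketch of the first approach (my own translation of the pseudocode above, using the same example data; it is not part of the original answer):

// Treat the combination index c as a mixed-radix number and peel off
// one "digit" per array; each digit selects one word.
function mixedRadixCombos(arrays) {
  var M = arrays.reduce(function (p, a) { return p * a.length; }, 1);
  var result = [];
  for (var c = 0; c < M; c++) {
    var t = c;
    var combination = [];
    for (var i = 0; i < arrays.length; i++) {
      var d = t % arrays[i].length;         // digit for position i
      t = Math.floor(t / arrays[i].length); // integer division
      combination.push(arrays[i][d]);
    }
    result.push(combination.join(' '));
  }
  return result;
}

console.log(mixedRadixCombos([['quick', 'lazy'], ['brown', 'black', 'grey'], ['fox', 'dog']]));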
Combining .reduce() and .map() allows us to come up with a very concise one-liner answer for this question.
var data = [['quick', 'lazy'], ['brown', 'black', 'grey'], ['fox', 'dog'], ['jumps', 'runs']],
    result = data.reduce((p, c) => p.reduce((r, fw) => r.concat(c.map(sw => fw + " " + sw)), []));
console.log(result);
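To see what the one-liner is doing (my own breakdown, not part of the original answer), the outer reduce repeatedly crosses the accumulated prefixes with the next array:

// The same fold, written out one step at a time:
var cross = (prefixes, words) =>
  prefixes.reduce((r, fw) => r.concat(words.map(sw => fw + " " + sw)), []);

var step1 = cross(['quick', 'lazy'], ['brown', 'black', 'grey']); // 6 phrases
var step2 = cross(step1, ['fox', 'dog']);                         // 12 phrases
var step3 = cross(step2, ['jumps', 'runs']);                      // 24 phrases
console.log(step3.length); // 24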
Related
Background: I came across this issue whilst dealing with arrays of large and small numbers using the find/findIndex functions. I have given a minimal working example below (using find). I can avoid the issue, but I just don't understand why the issue exists in the first place.
In node.js (v12.16.3), why does getting rid of the for loop around the find function in this example cause the performance to increase dramatically? (5600 ms reduced to 250 ms)
The issue does not appear if I change the value in the second array from 1e10 to 1e9 or less, or if I change the value in the first array from 1 to 1e10 or more.
const nSims = 1e8
const arrays = [];
arrays[0] = [1];
arrays[1] = [1e10];

console.time('a')
for (var i = 0; i < nSims; i++) {
  for (var j = 0; j < 2; j++) {
    arrays[j].find((value) => value > 0);
  }
}
console.timeEnd('a') // 5600 ms

console.time('b')
for (var i = 0; i < nSims; i++) {
  arrays[0].find((value) => value > 0);
  arrays[1].find((value) => value > 0);
}
console.timeEnd('b') // 250 ms
V8 developer here.
The "slow case" is the true cost of calling Array.find with a callback: for every element, the built-in implementation of Array.find performs a call to the provided callback. Aside from doing this basic work that you asked it to do, both the Array.find built-in and the supplied callback are actually pretty well optimized.
The fast case benefits from certain additional optimizations in V8: if a call to Array.find has only ever seen arrays of the same type (including internal representation, see below), then there's some special handling in the type feedback collection system and the optimizing compiler to emit a special inlined version of it, which in particular has the follow-on benefit that it can also inline the provided callback, specialized for this type of array. As you can see here, this optimization provides a massive speed boost when it is applicable.
The reason that [1e9] and [1e10] are different types of arrays under the hood is because 1e9 is a 30-bit integer, so V8 internally chooses "small integer" (aka "smi", 31-bit signed int) representation for the array's elements. 1e10, however, would require 34 bits, so V8 chooses double (64-bit floating point) representation for the array elements. So if the same occurrence of Array.find encounters both [1e9] (or [1] for that matter) and [1e10], it decides "I've seen more than one type of array here, inlining more than one special case probably costs more than it's worth, let's use the generic version". You could say that this decision is a bit overly pessimistic in this case, but such is the nature of heuristics: engines need rules to decide what to do, and since they can't predict what your code will do in the future, they just have to make some guess -- which could turn out to be a good guess, or a not so good guess :-)
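A quick way to sanity-check that boundary yourself (my own snippet, not from the answer): the largest value that fits in a 31-bit signed integer is 2**30 - 1.

const SMI_MAX = 2 ** 30 - 1;   // largest 31-bit signed integer: 1073741823
console.log(1e9  <= SMI_MAX);  // true  -> [1e9] can use "smi" elements
console.log(1e10 <= SMI_MAX);  // false -> [1e10] falls back to double elements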
It's not related to having a loop per se; looping over the list of arrays is just one way of making the same Array.find encounter several array types. You could trigger the fallback to the generic path without a loop by using a function that's called with different inputs; or you could have a loop (that loops over something else) while still staying on the fast path.
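To illustrate that last point with a sketch (my own example; the array contents here are assumptions): a loop is fine as long as the one .find call site only ever sees one kind of array.

// This still loops, but this .find call site only ever sees small-integer
// ("smi") arrays, so it can stay specialized for that one elements kind.
const smiArrays = [[1], [2], [3]];
for (let i = 0; i < smiArrays.length; i++) {
  smiArrays[i].find((value) => value > 0);
}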
@Anton wrote:
It seems, that find method has some problems.
I wouldn't put it that way. It's not easy for an engine to optimize Array.find to the same degree as a hand-written for-loop -- for instance, because an engine generally can't inline user-provided callbacks into built-in functions. As explained above, V8 knows enough tricks to be able to pull off such inlining in some situations, but not always.
This is far from the only case where a hand-written replacement for a built-in function can achieve faster performance; in many cases this is because the built-in functions are more general (i.e.: support more weird corner cases) than the hand-written replacement. It's also the case that outside of targeted microbenchmarks it is fairly rare (though certainly not impossible) to find a case where these differences actually matter.
Note: this may not be a proper answer; it is really just a very big comment (I need code snippets to illustrate).
This is the example from the question (it takes more than 5 seconds for a, and less than a second for b):
const nSims = 1e8
const arrays = [];
arrays[0] = [1];
arrays[1] = [1e10];

console.time('a')
for (var i = 0; i < nSims; i++) {
  for (var j = 0; j < 2; j++) {
    arrays[j].find((value) => value > 0);
  }
}
console.timeEnd('a') // 5600 ms

console.time('b')
for (var i = 0; i < nSims; i++) {
  arrays[0].find((value) => value > 0);
  arrays[1].find((value) => value > 0);
}
console.timeEnd('b') // 250 ms
Here is what happens if we change 1e10 to 1e9 (this is where the "magic" is): both a and b now finish quickly:
const nSims = 1e8
const arrays = [];
arrays[0] = [1];
arrays[1] = [1e9];

console.time('a')
for (var i = 0; i < nSims; i++) {
  for (var j = 0; j < 2; j++) {
    arrays[j].find((value) => value > 0);
  }
}
console.timeEnd('a')

console.time('b')
for (var i = 0; i < nSims; i++) {
  arrays[0].find((value) => value > 0);
  arrays[1].find((value) => value > 0);
}
console.timeEnd('b')
It seems that the find method has some problems. Here is what happens if we replace it with a for loop (a and b become close, and both take less than 1 second):
const nSims = 1e8
const arrays = [];
arrays[0] = [1];
arrays[1] = [1e10];

function find(arr) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] > 0) return arr[i];
  }
}

console.time('a')
for (var i = 0; i < nSims; i++) {
  for (var j = 0; j < 2; j++) {
    find(arrays[j]);
  }
}
console.timeEnd('a')

console.time('b')
for (var i = 0; i < nSims; i++) {
  find(arrays[0]);
  find(arrays[1]);
}
console.timeEnd('b')
This problem is more conceptual than code-specific, so the fact that this is written in JS shouldn't matter very much.
So I'm trying to make a Neural Network and I'm testing it by trying to train it to do a simple task - an OR gate (or, really, just any logic gate). I'm using Gradient Descent without any batches for the sake of simplicity (batches seem unnecessary for this task, and the less unnecessary code I have the easier it is to debug).
However, after many iterations the output always converges to the average of the outputs. For example, given this training set:
[0,0] = 0
[0,1] = 1
[1,0] = 1
[1,1] = 0
The outputs, no matter the inputs, always converge around 0.5. If the training set is:
[0,0] = 0,
[0,1] = 1,
[1,0] = 1,
[1,1] = 1
The outputs always converge around 0.75 - the average of all the training outputs. This appears to be true for all combinations of outputs.
It seems like this is happening because whenever it's given something with an output of 0, it changes the weights to get closer to that, and whenever it's given something with an output of 1, it changes the weights to get closer to that, meaning that over time it will converge around the average.
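That intuition can be checked directly (my own snippet, added for illustration): if the network ends up predicting roughly the same value c for every input, the summed squared error over the targets is smallest when c is their mean.

// Squared error of a constant prediction c over the targets [0, 1, 1, 1]:
const targets = [0, 1, 1, 1];
const sse = c => targets.reduce((s, y) => s + (c - y) ** 2, 0);
console.log(sse(0.75), sse(0.5), sse(1)); // 0.75, 1, 1 -> minimal at the mean, 0.75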
Here's the backpropagation code (written in Javascript):
this.backpropigate = function(data){
  // Sets the inputs
  for(var i = 0; i < this.layers[0].length; i++){
    if(i < data[0].length){
      this.layers[0][i].output = data[0][i];
    }
    else{
      this.layers[0][i].output = 0;
    }
  }
  this.feedForward(); // Rerun through the NN with the new set outputs
  for(var i = this.layers.length-1; i >= 1; i--){
    for(var j = 0; j < this.layers[i].length; j++){
      var ref = this.layers[i][j];
      // Calculate the gradients for each Neuron
      if(i == this.layers.length-1){ // Output layer neurons
        var error = ref.output - data[1][j]; // Error
        ref.gradient = error * ref.output * (1 - ref.output);
      }
      else{ // Hidden layer neurons
        var gradSum = 0; // Find sum from the next layer
        for(var m = 0; m < this.layers[i+1].length; m++){
          var ref2 = this.layers[i+1][m];
          gradSum += (ref2.gradient * ref2.weights[j]);
        }
        ref.gradient = gradSum * ref.output * (1 - ref.output);
      }
      // Update each of the weights based off of the gradient
      for(var m = 0; m < ref.weights.length; m++){
        // Find the corresponding neuron in the previous layer
        var ref2 = this.layers[i-1][m];
        ref.weights[m] -= LEARNING_RATE * ref2.output * ref.gradient;
      }
    }
  }
  this.feedForward();
};
Here, the NN is structured so that each Neuron is an object with inputs, weights, and an output that is calculated from those inputs and weights. The Neurons are stored in a 2D 'layers' array, where the x dimension is the layer (so the first layer is the inputs, the second is hidden, etc.) and the y dimension is the list of Neuron objects inside that layer. The 'data' passed in is given in the form [data, correct-output], e.g. [[0,1],[1]].
Also my LEARNING_RATE is 1 and my hidden layer has 2 Neurons.
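For readers, a minimal sketch of what that structure might look like for this 2-input, 2-hidden-neuron, 1-output setup (field names taken from the description above; the weight values are made up):

var layers = [
  [ { output: 0, weights: [] },                        // input layer: outputs are set directly
    { output: 0, weights: [] } ],
  [ { output: 0, gradient: 0, weights: [0.1, -0.2] },  // hidden layer: one weight per input neuron
    { output: 0, gradient: 0, weights: [0.3,  0.4] } ],
  [ { output: 0, gradient: 0, weights: [0.5, -0.6] } ] // output layer: one weight per hidden neuron
];
var data = [[0, 1], [1]];                              // [inputs, correct output]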
I feel like there must be some conceptual issue with my backpropagation method, as I've tested the other parts of my code (like the feedForward part) and they work fine. I tried to use various sources, though I mostly relied on the Wikipedia article on backpropagation and the equations it gives.
I know my code may be confusing to read, though I tried to make it as simple to understand as possible; any help would be greatly appreciated.
https://www.hackerrank.com/contests/projecteuler/challenges/euler001
Here is the problem. I'm confused about the parseInt(readLine()) statement, and mainly about the var n statement as well.
When I run my code it seems to count up to ten twice. It's probably a simple problem I'm just not seeing, and I was hoping I could get it explained so I can keep working on Project Euler problems.
Thanks
function main() {
  var t = parseInt(readLine());
  var sum = 0;
  var arr = [];
  for (var a0 = 0; a0 < t; a0++) {
    var n = parseInt(readLine());
    for (var i = 0; i < n; i++) {
      if (i % 3 === 0 || i % 5 === 0) {
        arr.push(i);
        sum += i;
      }
    }
    console.log(arr);
  }
}
Maybe I'm not following exactly what your question is.
parseInt is a built-in JavaScript function.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseInt
The readLine() function is defined for you; it gives you "the next line" that was captured from standard input.
All (or most) of the HackerRank problems provide the input through standard input and expect the result on standard output. So for this problem, HackerRank has created this boilerplate code for reading that input:
process.stdin.on('end', function () {
  input_stdin_array = input_stdin.split("\n");
  main();
});
This fills the input_stdin_array array that the readLine() function reads from.
And regarding this part:
when i run my code it seems to count up to ten twice
The problem mentions:
First line contains T that denotes the number of test cases. This is followed by T lines, each containing an integer, N.
So you are printing the array T times (for the default test case, T is 2), which is why you are probably seeing it count "up to ten" twice.
I hope this helped. You could probably start with a couple of the https://www.hackerrank.com/domains/tutorials/30-days-of-code challenges so you get a better grasp of how to work on the problems.
Regards
Declare the array inside the outer for loop, as shown below. You are using the same array for every test case, even though it still contains the numbers from the previous test cases. The same goes for the sum.
for (var a0 = 0; a0 < t; a0++) {
  var arr = [];
  var sum = 0;
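Putting the whole thing together, the corrected main might look something like this (a sketch; it assumes HackerRank's usual readLine() boilerplate, and prints the sum, which is what the challenge asks for, instead of the array):

function main() {
  var t = parseInt(readLine());      // number of test cases
  for (var a0 = 0; a0 < t; a0++) {
    var n = parseInt(readLine());    // upper bound for this test case
    var arr = [];                    // fresh array for every test case
    var sum = 0;                     // fresh sum for every test case
    for (var i = 0; i < n; i++) {
      if (i % 3 === 0 || i % 5 === 0) {
        arr.push(i);
        sum += i;
      }
    }
    console.log(sum);                // the challenge expects the sum of the multiples
  }
}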
I am trying to create a Google Form which is used to register students' practice agreements. Every agreement that is registered gets an agreement number whose format is: the last two digits of the current year, "-T-", the number of the agreement within that year, and "/M". For example, right now it is 17-T-11/M. The agreement number is currently written in by the person responsible for the practice.
Here is the code of the script:
function onChange(e) {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheets()[1];
  var range = sheet.getDataRange();
  var values = range.getValues();
  var comboValues = ['16-T-105/M'];
  // removing titles from 0 column and 1 line (titles)
  for (var i = 1; i <= values.length; i++) {
    var v = values[i] && values[i][0];
    v && comboValues.push(v);
  }
  // Sort the values
  comboValues.sort(
    function(a, b) {
      if (b.toLowerCase() < a.toLowerCase()) return -1;
      if (b.toLowerCase() > a.toLowerCase()) return 1;
      return 0;
    }
  );
  Logger.log(comboValues);
  // google-form modification
  var form = FormApp.openById('1SHgVIosoE34m9cny9EQySljvgnRpzffdFEZe-kzNOzA');
  var items = form.getItems();
  for (i = 4; i < items.length; i++) {
    Logger.log("ID: " + items[i].getId(), ': ' + items[i].getType());
  }
  form.getItemById('2087613006').asListItem().setChoiceValues(comboValues);
}
I ran into an issue related to lexicographical order. The person who registers an agreement chooses the last registered agreement number from the list, so I tried to make the last registered agreement number always appear at the top of the list. When I started this, everything was fine (it started with number 16-T-105/M), but the new year came, and soon after agreement 17-T-10/M was registered I noticed that 17-T-10/M was not at the top of the list. I soon realised that this happens because the script sorts lexicographically and "thinks" that 2 is greater than 10. So I understood that I somehow have to change that order so that 2 is less than 10, 11 is less than 101, and so on.
My question is: how do I do that? I guess I need to sort the array elements in natural order, but I have no idea how to do this.
I tried to google how to do it, but the results were not satisfactory; maybe my knowledge of coding is just too limited (I am a PhD student of Psychology, not Informatics) :)
Maybe someone can help me solve this problem.
Updates:
Link to spreadsheet: https://docs.google.com/spreadsheets/d/1FH5qYTrLUNI2SCrcaqlwgu8lzAylaTkZsiALg0zIpCM/edit#gid=1620956794
Link to google-form (Copy of actual form): https://docs.google.com/forms/d/e/1FAIpQLSerJfkv1dgHexUwxppXNyhb46twOZgvEMOIVXSOJoED3SLmyQ/viewform
You should adjust the sorting method to account for the peculiarities of the data. Here is one way to do this: the function splitConvert processes each string, splitting it on non-word characters and then converting whatever can be converted to integers (and lowercasing the rest). The comparison then goes through the resulting arrays element by element.
comboValues.sort(
  function(a, b) {
    var as = splitConvert(a);
    var bs = splitConvert(b);
    for (var i = 0; i < as.length; i++) {
      if (bs[i] < as[i]) return -1;
      if (bs[i] > as[i]) return 1;
    }
    return 0;
  }
);

function splitConvert(str) {
  return str.split(/\W/).map(function(part) {
    var x = parseInt(part, 10);
    return isNaN(x) ? part.toLowerCase() : x;
  });
}
This is not the most performance-oriented solution: the split-parse function will be repeatedly called on the same strings as they are being sorted. If this becomes an issue (I don't really think so), one can optimize by having one run of conversion, creating an array of arrays, and then sorting that.
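If that ever did become an issue, one possible sketch of the optimization (my addition, reusing the splitConvert function above) converts each string once, sorts by the converted keys, and then maps back:

function naturalSortDescending(values) {
  return values
    .map(function(v) { return { original: v, key: splitConvert(v) }; }) // convert once
    .sort(function(a, b) {                                              // compare converted keys
      for (var i = 0; i < a.key.length; i++) {
        if (b.key[i] < a.key[i]) return -1;
        if (b.key[i] > a.key[i]) return 1;
      }
      return 0;
    })
    .map(function(pair) { return pair.original; });                     // restore original strings
}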
I'm having trouble dealing with my for loops now. I'm trying to compare two sets of data: basically the function compares the items, then writes the matches and the mismatches on the webpage.
I managed to write the matches on the webpage, and it was working well. But there's a bug in my mismatch comparison.
It wrote all the data on the webpage X times; here's my JS code:
function testItems(i1, i2) {
  var newArray = [];
  var newArray2 = [];
  var count = 0;
  var count2 = 0;
  for (var i = 0; i < i1.length; i++) {
    for (var j = 0; j < i2.length; j++) {
      if (i1[i] == i2[j]) {
        newArray.push(i1[i]);
        count++;
      } if (i1[i] !== i2[j]) {
        newArray2.push(i1[i]);
        count2++;
      }
    }
  }
  count -= 2;
  count2 -= 2;
  writeHTML(count, count2, newArray, newArray2);
}
The result was horrible for the mismatches:
(Screenshot of the output: http://www.picamatic.com/show/2009/03/01/07/44/2523028_672x48.jpg)
I was expecting it to show only the mismatches, not all the strings.
The issue you're seeing is because of the nested for loop. You are essentially doing a cross-compare: for every item in i1, you are comparing it to every item in i2 (remember that j starts again at 0 every time i advances... the two loops don't run in parallel).
Since I understand from the comments below that you want to be able to compare one array to the other, even if the items in each are in a different order, I've edited my original suggestion. Note that the snippet below does not normalize differences in case between the two arrays... don't know if that's a concern. Also note that it only compares i1 against i2... not both i1 to i2 and i2 to i1, which would make the task a little more challenging.
function testItems(i1, i2) {
  var newArray = [];
  var newArray2 = [];
  for (var i = 0; i < i1.length; i++) {
    var found = false;
    for (var j = 0; j < i2.length; j++) {
      if (i1[i] == i2[j]) found = true;
    }
    if (found) {
      newArray.push(i1[i]);
    } else {
      newArray2.push(i1[i]);
    }
  }
}
As an alternative, you could consider using a hash table to index i1/i2, but since the example strings in your comment include spaces and I don't know whether you're using any JavaScript helper libraries, it's probably best to stick with the nested for loops. The snippet also makes no attempt to weed out duplicates.
Another optimization you might consider is that your newArray and newArray2 arrays contain their own length property, so you don't need to pass the count to your HTML writer. When the writer receives the arrays, it can ask each one for the .length property to know how large each one is.
Not directly related to the question but you should see this:
Google techtalks about javascript
Maybe it will enlighten you :)
A couple of things about your question. First, you should use '!=' instead of '!==' to check inequality here. Second, I am not sure why you are decreasing the counts by 2; it suggests to me that there may be duplicates in the array?! In any case, your logic was wrong, which Jarrett corrected later, but that was not a totally correct/complete answer either. Read ahead.
Your task sounds like: given two arrays i1 and i2, find i1 ∩ i2 and i1′ ∪ i2′ (set-theory notation, where i1′ denotes the elements of i1 that are not in the intersection). That is, you want to list the common elements in newArray and the uncommon elements in newArray2.
You need to do this.
1) Remove duplicates from both arrays (to improve the program's efficiency later on; this is not a MUST to get the desired result, and you can skip it).
i1 = removeDuplicate(i1);
i2 = removeDuplicate(i2);
(An implementation for removeDuplicate is not given here; see the sketch after step 3.)
2) Pass through i1 and find i1′ and i1 ∩ i2.
var newArray = [];
var newArray2 = [];
var count = 0;
var count2 = 0;
for (var i = 0; i < i1.length; i++)
{
  var found = false;
  for (var j = 0; j < i2.length; j++)
  {
    if (i1[i] == i2[j])
    {
      found = true;
      newArray.push(i1[i]); // add to i1 ∩ i2
      count++;
      break; // once found, don't check the remaining items
    }
  }
  if (!found)
  {
    newArray2.push(i1[i]); // add i1′ to i1′ ∪ i2′
    count2++;
  }
}
3) Pass through i2 and append i2′ to i1′.
for (var x = 0; x < i2.length; x++)
{
  var found = false;
  // check in the intersection array, as it'd be faster than checking through i1
  for (var y = 0; y < newArray.length; y++)
  {
    if (i2[x] == newArray[y])
    {
      found = true;
      break;
    }
  }
  if (!found)
  {
    newArray2.push(i2[x]); // append (union) i2′ to i1′
    count2++;
  }
}

writeHTML(count, count2, newArray, newArray2);
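As for step 1, removeDuplicate was left as an exercise; one possible sketch (my addition, in the same old-style JavaScript) is:

// Keeps the first occurrence of each value; O(n^2), fine for small arrays.
function removeDuplicate(arr) {
  var result = [];
  for (var i = 0; i < arr.length; i++) {
    var seen = false;
    for (var j = 0; j < result.length; j++) {
      if (result[j] == arr[i]) { seen = true; break; }
    }
    if (!seen) result.push(arr[i]);
  }
  return result;
}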
I have a feeling that this has to do with your second comparison using "!==" instead of "!="
"!==" is the inverse of "===", not "==". !== is a more strict comparison which does not do any type casting.
For instance, (5 != '5') is false, whereas (5 !== '5') is true. This means it's possible that you could be pushing to both arrays in the nested loop, since if(i1[i] == i2[j]) and if(i1[i] !== i2[j]) could both be true at the same time.
The fundamental problem here is that a pair of nested loops is NOT the right approach.
You need to walk a pointer through each dataset. ONE loop that advances both as needed.
Note that figuring out which pointer to advance in case of a mismatch is a much bigger problem than simply walking them through. Finding the first mismatch isn't a problem; getting back on track after finding it is quite difficult.
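To make that concrete, here is a rough sketch of the single-loop walk (my own example, not the answerer's code; it assumes both arrays have already been sorted, which sidesteps the "getting back on track" difficulty mentioned above):

function compareSorted(a, b) {
  var matches = [], mismatches = [];
  var i = 0, j = 0;
  while (i < a.length && j < b.length) {
    if (a[i] == b[j])     { matches.push(a[i]); i++; j++; }
    else if (a[i] < b[j]) { mismatches.push(a[i]); i++; } // a[i] is missing from b
    else                  { mismatches.push(b[j]); j++; } // b[j] is missing from a
  }
  while (i < a.length) mismatches.push(a[i++]);           // leftovers exist only in a
  while (j < b.length) mismatches.push(b[j++]);           // leftovers exist only in b
  return { matches: matches, mismatches: mismatches };
}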