I have a finite list of around 30-40 elements, but on average only 4-7 of them are visible on screen at a time. I am using ng-repeat to render this list. Rendering each element is very expensive, so I want to render only the elements currently in the viewport.
Virtual scrolling is not a solution for me, as the elements can have different heights, but I don't have infinite elements either. Is there a way to do this?
As you haven't added any code, I'll try to imagine what you want.
In your case I'd use something like ngInfiniteScroll.
As you can see in the demo link:
https://sroze.github.io/ngInfiniteScroll/demo_basic.html
$scope.images = [1, 2, 3, 4, 5, 6, 7, 8];

$scope.loadMore = function() {
  var last = $scope.images[$scope.images.length - 1];
  for (var i = 1; i <= 8; i++) {
    $scope.images.push(last + i);
  }
};
As you can see, this example works with a finite array of items, and therefore it's easy for you to set the checkpoints.
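Since your list is finite, the "checkpoint" can simply be a guard that stops loading once everything has been handed to ng-repeat. A minimal sketch of that idea (the allItems array holding your 30-40 source elements and the batch size of 5 are my assumptions):

$scope.items = [];

$scope.loadMore = function() {
  // checkpoint: the source list is finite, so stop once it has all been rendered
  if ($scope.items.length >= allItems.length) {
    return;
  }
  // hand the next small batch to ng-repeat (batch size of 5 is arbitrary)
  var next = allItems.slice($scope.items.length, $scope.items.length + 5);
  $scope.items = $scope.items.concat(next);
};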
Now I hope this solves your problem. Otherwise please share your code base.
Cheers
I am trying to do a depth-first search (DFS) of a generic tree. The goal for each node is to know its level AND the maximum number of levels beneath it. The example tree (matching the groupIDInfo object below) looks like:

1
├── 2
│   ├── 3
│   │   └── 5
│   │       ├── 6
│   │       └── 7
│   └── 4
├── 8
└── 9
    ├── 10
    └── 11
The DFS order should (I think) be: 1,2,3,5,6,7,4,8,9,10,11.
What I am trying to achieve is:
Node 1: Level 1, max levels beneath=4
Node 2: Level 2, max levels beneath=3
Node 3: Level 3, max levels beneath=2
...
Node 9: Level 2, max levels beneath=1
I am, so far, able to properly count the levels and max levels, but whenever I try to save them to a new object, what I ultimately get is the last level/max-level combination of numbers (in this example, it would be level=3, max levels beneath=0). I think it is not closing over the variables properly, but I must admit I can't figure out how to change it to make it work. I assume it must be some sort of closure issue, but I haven't been able to adapt the other Stack Overflow answers I've found on closures.
var groupIDInfo = {
  BASE: [1], 1: [2, 8, 9], 2: [3, 4], 3: [5], 4: [], 5: [6, 7], 6: [], 7: [], 8: [],
  9: [10, 11], 10: [], 11: []
};
var levelInfo = {};
var level = 0;
var longestPath = 0;
var levelAndPath = [];

function detLevels(groupIDInfo, parent) {
  if (!(parent in groupIDInfo)) {
    console.log("parent not in array");
    return;
  }
  groupIDInfo[parent].forEach(function (child) {
    level++;
    if (level > longestPath) {
      longestPath = level;
    }
    levelAndPath[0] = level;
    levelAndPath[1] = longestPath;
    levelInfo[child] = levelAndPath;
    detLevels(groupIDInfo, child);
    level--;
    // set parent longest path
    longestPath = level;
    levelInfo[parent] = levelAndPath;
  });
}

detLevels(groupIDInfo, "BASE");
You're using a single array levelAndPath and storing references to it in levelInfo, as opposed to storing different arrays. (I haven't checked whether there are any other errors beyond that, but this one is easily fixable by moving var levelAndPath = []; inside the forEach callback.)
It is not about closures. It is the fact that levelInfo[parent]=levelAndPath; doesn't copy levelAndPath - it just sticks in a reference. Here's a snazzy demo, thanks to the advances in Stack Overflow snippets:
let a = [1, 2, 3];
let b = [a, a, a];
a[2] = 4;
console.log(JSON.stringify(b)); // Huh? [[1,2,4],[1,2,4],[1,2,4]]?!?
console.log(b); // Here's what _really_ happened...
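For contrast, here is a minimal sketch of the same demo with copies instead of a shared reference (the slice() calls are my addition, not part of the original snippet):

let a = [1, 2, 3];
let b = [a.slice(), a.slice(), a.slice()]; // each slice() creates an independent copy
a[2] = 4;                                  // only the original array changes
console.log(JSON.stringify(b));            // [[1,2,3],[1,2,3],[1,2,3]] - the copies are unaffected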
I am trying to use SortableJS with Vue. Everything is working fine, but Vue is not able to detect the changes made within SortableJS's scope, since SortableJS changes the DOM directly.
I have passed 'this' from the Vue instance into Sortable's onEnd callback. An event is triggered when I drag/drop an element on screen, and I get the newIndex as well as the oldIndex. When I swap the two indexes myself, the change made to the DOM is now somehow detected as well, and the array elements end up swapped twice.
I want a way to tell Vue to detect a change made in the DOM - or any other solution that works.
onEnd: function(evt) {
  // self === the Vue instance, captured outside this callback
  self.showIcons = true;
  document.body.style.cursor = "unset";

  // changing order of items
  const oldIndex = evt.oldIndex;
  const newIndex = evt.newIndex;
  const tempTodos = JSON.parse(JSON.stringify(self.todos));
  const tempItem = tempTodos.slice(oldIndex, oldIndex + 1);
  tempTodos.splice(oldIndex, 1);
  tempTodos.splice(newIndex, 0, ...tempItem);
  self.todos = tempTodos;
}
Actual result -
Original array - [1, 2, 3]
If I drag 3 and swap it with 1, the array gets swapped twice (once by onEnd and once by the DOM detection), so the new array is [2, 3, 1].
Expected result -
Original array - [1, 2, 3]
If I drag 3 and swap it with 1, the new array should be [3, 2, 1].
I am implementing a simple application, where I have a bunch of lineHeights stored like so:
lineHeights = [2, 4, 5, 7, 2, 4, 5, /* ... */]
Now, given two heights a and b, I need to find the range of lines that lie between a and b.
For example, if the heights are [2, 2, 4, 5, 2], then the range of lines between heights 3 and 7 would be [1, 2], as lines 1 to 2 are the ones contained by the given range of heights.
A naive implementation would be to store the lines as an array and walk up the array to see which lines fall between the given heights, as shown in the following pseudocode (which I haven't tested, but I hope it conveys the idea):
get_line_range_between_heights(lines, start_height, end_height)
    index = 0
    height = 0
    # advance until the running height reaches start_height
    while true
        height = height + lines[index]
        if height >= start_height
            break
        index = index + 1
    result_ranges = [index]
    # keep collecting lines until the running height reaches end_height
    while true
        if height >= end_height
            break
        index = index + 1
        height = height + lines[index]
        push index into result_ranges
    return result_ranges
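For reference, here is a direct JavaScript transcription of the pseudocode above - just my own sketch of the same naive approach, assuming end_height does not exceed the total height of all lines:

// Naive scan: advance the running height to startHeight, then keep
// collecting line indices until it reaches endHeight.
function getLineRangeBetweenHeights(lines, startHeight, endHeight) {
  var index = 0;
  var height = 0;
  while (true) {
    height += lines[index];
    if (height >= startHeight) break;
    index++;
  }
  var resultRanges = [index];
  while (height < endHeight) {
    index++;
    height += lines[index];
    resultRanges.push(index);
  }
  return resultRanges;
}

console.log(getLineRangeBetweenHeights([2, 2, 4, 5, 2], 3, 7)); // [1, 2]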
However, the simple implementation comes at a cost: while inserting and removing heights is fast, querying becomes an O(n) operation, where n is the number of lines.
So my question is: is there a data structure (for example something like a binary search tree) which supports insert, delete and search (for the lines between two heights) in sub-linear time, and which is specialised for this sort of problem (ideally with an implementation, or a link to one)?
I really need a master of algorithms here! So the thing is, I have an array like this, for example:
[
[870, 23]
[970, 78]
[110, 50]
]
and I want to split it up, so that it looks like this:
// first array
[
[970, 78]
]
// second array
[
[870, 23]
[110, 50]
]
So now, why do I want it to look like this?
Because I want to keep the sums of the sub-values as equal as possible. So 970 is about 870 + 110, and 78 is about 23 + 50.
In this case it's very easy, because if you just split it and only look at the first sub-value it is already correct. But I want to check both sub-values and keep them as equal as possible, so that it also works with an array that has 100 sub-arrays. If anyone can tell me an algorithm with which I can program this, that would be really great!
Scales:
~1000 elements (sublists) in the array
Elements are integers up to 10^9
I am looking for a "close enough solution" - it does not have to be the exact optimal solution.
First, as already established, the problem is NP-hard, by reduction from the Partition Problem.
Reduction:
Given an instance of the partition problem, create lists of size 1 each. The result is exactly this problem.
Conclusion from the above:
This problem is NP-hard, and there is no known polynomial solution.
Second, any exponential or pseudo-polynomial solution will simply take too long, due to the scale of the problem.
Third, that leaves us with heuristics and approximation algorithms.
I suggest the following approach (a rough sketch follows below):
Normalize the scales of the sublists, so all the elements are on the same scale (say, all normalized to the range [-1, 1], or all normalized to a standard normal distribution).
Create a new list in which each element is the sum of the matching sublist in the normalized list.
Use some approximation or heuristic solution that was developed for the subset-sum / partition problem.
The result will not be optimal, but optimal is really unattainable here.
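A minimal sketch of that recipe, assuming min-max normalization to [0, 1] and the classic greedy "add the next-largest item to the lighter side" heuristic as the partition step (both concrete choices are my assumptions - the answer leaves them open):

// 1) normalize each column to [0, 1], 2) collapse each pair to a single score,
// 3) greedily assign pairs, largest score first, to the currently lighter group.
function splitPairs(pairs) {
  var mins = [Infinity, Infinity], maxs = [-Infinity, -Infinity];
  pairs.forEach(function (p) {
    for (var j = 0; j < 2; j++) {
      mins[j] = Math.min(mins[j], p[j]);
      maxs[j] = Math.max(maxs[j], p[j]);
    }
  });

  var scored = pairs.map(function (p) {
    var score = 0;
    for (var j = 0; j < 2; j++) {
      var range = maxs[j] - mins[j];
      score += range === 0 ? 0 : (p[j] - mins[j]) / range;
    }
    return { pair: p, score: score };
  });
  scored.sort(function (a, b) { return b.score - a.score; });

  var groups = [[], []], sums = [0, 0];
  scored.forEach(function (s) {
    var g = sums[0] <= sums[1] ? 0 : 1; // put the pair into the lighter group
    groups[g].push(s.pair);
    sums[g] += s.score;
  });
  return groups;
}

console.log(splitPairs([[870, 23], [970, 78], [110, 50]]));
// [ [ [970, 78] ], [ [870, 23], [110, 50] ] ]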
From what I gather from the discussion under the original post, you're not searching for a single splitting point, but rather you want to distribute all pairs among two sets, such that the sums in each of the two sets are approximately equal.
Since a close enough solution is acceptable, maybe you could try an approach based on simulated annealing?
(see http://en.wikipedia.org/wiki/Simulated_annealing)
In short, the idea is that you start out by randomly assigning each pair to either the Left or the Right set.
Next, you generate a new state by either (a) moving a randomly selected pair from the Left to the Right set, (b) moving a randomly selected pair from the Right to the Left set, or (c) doing both.
Next, determine whether this new state is better or worse than the current state. If it is better, use it. If it is worse, take it only if it is accepted by the acceptance probability function, which initially allows worse states to be used but favours them less and less as time moves on (or as the "temperature decreases", in SA terms).
After a large number of iterations (say 100,000), you should have a pretty good result.
Optionally, rerun this algorithm multiple times because it may get stuck in local optima (although the acceptance probability function attempts to counter this).
Advantages of this approach are that it's simple to implement, and you can decide for yourself how long you want it to continue searching for a better solution.
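Here is a rough sketch of that annealing loop. The cost function (sum of the absolute differences of the two column sums), the flip of a single randomly chosen pair as the move (variants (a)/(b) only, not (c)), the exponential acceptance probability and the linear cooling schedule are all my assumptions - the answer leaves these details open:

// Randomly toggle pairs between the Left (true) and Right (false) sets,
// keeping changes that reduce the imbalance and occasionally accepting worse ones.
function anneal(pairs, iterations) {
  // cost = |difference of column-0 sums| + |difference of column-1 sums|
  function cost(assignment) {
    var diff = [0, 0];
    pairs.forEach(function (p, i) {
      var sign = assignment[i] ? 1 : -1;
      diff[0] += sign * p[0];
      diff[1] += sign * p[1];
    });
    return Math.abs(diff[0]) + Math.abs(diff[1]);
  }

  // start from a random assignment
  var current = pairs.map(function () { return Math.random() < 0.5; });
  var currentCost = cost(current);

  for (var k = 0; k < iterations; k++) {
    var temperature = 1 - k / iterations;  // simple linear cooling, never quite zero
    var candidate = current.slice();
    var i = Math.floor(Math.random() * pairs.length);
    candidate[i] = !candidate[i];          // move one random pair to the other set
    var candidateCost = cost(candidate);

    // always accept improvements; accept worse states with a probability
    // that shrinks as the temperature drops
    if (candidateCost < currentCost ||
        Math.random() < Math.exp((currentCost - candidateCost) / temperature)) {
      current = candidate;
      currentCost = candidateCost;
    }
  }
  return current; // current[i] === true means pair i goes into the Left set
}

In practice the temperature would have to be scaled to the magnitude of the sums (or the values normalized first, as the other answer suggests), otherwise worse moves are almost never accepted with numbers up to 10^9.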
I'm assuming that we're just looking for a place in the middle of the array to split it into its first and second part.
It seems like a linear algorithm could do this. Something like this in JavaScript:
var arrayLength = 2; // length of each sub-array (each element is a pair)
var tolerance = 10;

// "arrays" is the input list of pairs, e.g. [[870, 23], [970, 78], [110, 50]].
// Initialize the two sums; initially everything is in the second part.
var firstSum = [];
var secondSum = [];
for (var j = 0; j < arrayLength; j++)
{
    firstSum[j] = 0;
    secondSum[j] = 0;
    for (var i = 0; i < arrays.length; i++)
    {
        secondSum[j] += arrays[i][j];
    }
}

// Try splitting at every place in "arrays".
// Try to get the sums as close as possible.
for (var i = 0; i < arrays.length; i++)
{
    var goodEnough = true;
    for (var j = 0; j < arrayLength; j++)
    {
        if (Math.abs(firstSum[j] - secondSum[j]) > tolerance)
            goodEnough = false;
    }
    if (goodEnough)
    {
        alert("split before index " + i);
        break;
    }

    // Update the sums for the new position.
    for (var j = 0; j < arrayLength; j++)
    {
        firstSum[j] += arrays[i][j];
        secondSum[j] -= arrays[i][j];
    }
}
Thanks for all the answers - the brute-force idea was good, and NP-hardness is related to this too, but it turns out that this is a multiple knapsack problem and can be solved using this PDF document.
I have built a grid of divs as a playground for some visual experiments. In order to use that grid, I need to know the x and y coordinates of each div. That's why I want to create a table with the X and Y position of each div.
X:0 & Y:0 = div:eq(0), X:0 Y:1 = div:eq(1), X:0 Y:2 = div:eq(2), X:0 Y:3 = div:eq(3), X:1 Y:0 = div:eq(4) etc..
What is the best way to build a table like that? Creating an object like this:
{
00: 0,
01: 1,
02: 2,
etc..
}
or is it better to create an array?
position[0][0] = 0
The thing is, I need to use the table in multiple ways. For example: the user clicked div number 13 - what are the coordinates of this div? Or: what is the eq of the div at x: 12, y: 5?
That's how I do it right now:
var row = 0;
var col = 0;
var eq = 0;

c.find('div').each(function(i){ // c = $('div#stage')
    if (i != 0 && $(this).offset().top != $(this).prev().offset().top){
        row++;
        col = 0;
    }
    $(this).attr({ 'row': row, 'col': col });
    col++;
});
I think it would be faster to build a table with the coordinates instead of adding them as attr or data to the DOM, but I can't figure out how to do this technically.
How would you solve this problem with JS / jQuery?
A few questions:
Will the grid stay the same size or will it grow / shrink?
Will the divs stay in the same position or will they move around?
Will the divs be reused or will they be dynamically added / removed?
If everything is static (fixed grid size, fixed div positions, no dynamic divs), I suggest building two indices that map divs to coordinates and coordinates to divs. Give each div an id derived from its position (e.g. "x0y0", "x0y1") and then build something like:
var gridwidth = 20, gridheight = 10,
    cells = [], // coordinates -> div
    pos = {},   // div -> coordinates
    id, i, j;   // temp variables

for (i = 0; i < gridwidth; i++) {
    cells[i] = [];
    for (j = 0; j < gridheight; j++) {
        id = 'x' + i + 'y' + j;
        cells[i][j] = $('#' + id);
        pos[id] = { x: i, y: j };
    }
}
Given a set of coordinates (x, y) you can get the corresponding div with:
cells[x][y] // jQuery object of the div at (x, y)
and given a div you can get its coordinates with:
pos[div.attr('id')] // an object with x and y properties
Unless you have very stringent performance requirements, simply using the "row" and "col" attributes will work just fine (although setting them through .data() will be faster). To find the div with the right row/col, just do c.find("div[row=5][col=12]"). You don't really need the lookup.
Let me elaborate on that a little bit.
If you were to build a lookup table that would allow you to get the row/col for a given div node, you would have to identify that node somehow. Using direct node references is a very bad practice that usually leads to memory leaks, so you'd have to use a node id or some attribute as a key. That is basically what jQuery.data() does - it uses a custom attribute on the DOM node as a key into its internal lookup table, so there is no sense in copying that code. If you go the jQuery.data() route, you can use one of the plugins that allow you to use that data as part of the selector query. One example I found is http://plugins.jquery.com/project/dataSelector.
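A small sketch of the .data() variant (the offset-based row/col detection is copied from the question; the filter-based lookup below is my own illustration, not the dataSelector plugin's API):

var row = 0, col = 0;

// same loop as in the question, but storing the coordinates with .data()
c.find('div').each(function(i){ // c = $('div#stage')
    if (i != 0 && $(this).offset().top != $(this).prev().offset().top){
        row++;
        col = 0;
    }
    $(this).data({ row: row, col: col });
    col++;
});

// find the div at row 5, col 12 by filtering on the stored data
var target = c.find('div').filter(function(){
    return $(this).data('row') === 5 && $(this).data('col') === 12;
});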
Now that I know what it's for...
It might not seem efficient at first, but I think it would be best to do something like this:
Generate the divs once (server side), give them ids like id="X_Y" (X and Y are obviously numbers), position them with CSS and never ever move them. (Changing a position takes a lot of time compared to, e.g., a background change, and you would have to rebuild the array described below.)
On DOM ready, create a 2D array and store jQuery objects pointing to the divs in it, so that gridfields[0][12] is a jQuery object like $('#0_12'). You build the array once and never use selectors any more, so it's fast. To fill it, select all those divs in a container, call .each() on them and put them into the proper array fields by splitting their id attributes.
To move elements you just swap their CSS attributes (or classes if you can - it's faster), or simply set them if your data carries that information.
Another superfast trick (I put it into practice in one of my projects some time ago) is to bind the click event to the main container only and work out the coordinates by splitting $(e.target).attr('id').
If you bind a click handler to every cell of a 100x100 grid, the browser will probably die. Been there, did that ;)
It may not be intuitive (not changing a div's position, but swapping contents etc.), but in my experience it's the fastest it can get (most of the work is done on DOM ready).
Hope you use it ;) Good luck.
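To make that recipe concrete, here is a minimal sketch of the DOM-ready indexing and the container-level click handler described above (the #stage container comes from the question's own code and the "X_Y" id format from this answer; everything else is an assumption):

var gridfields = []; // gridfields[x][y] -> jQuery object for the div with id "x_y"

$(function(){
    // build the lookup table once, on DOM ready, by splitting the ids
    $('#stage div').each(function(){
        var parts = this.id.split('_');            // e.g. "0_12" -> ["0", "12"]
        var x = +parts[0], y = +parts[1];
        (gridfields[x] = gridfields[x] || [])[y] = $(this);
    });

    // one click handler on the container instead of one per cell
    $('#stage').on('click', function(e){
        var parts = $(e.target).attr('id').split('_'); // assumes the click landed on a cell
        console.log('clicked cell x=' + parts[0] + ', y=' + parts[1]);
    });
});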
I'm not 100% sure that I understand what you want, but I'd suggest avoiding a library such as jQuery if you are concerned about performance. While jQuery has become faster recently, it still has more overhead than "pure" JS/DOM operations.
Secondly - depending on which browsers you want to support - it may even be better to consider using canvas or SVG scripting.