I have an application written in ExtJS 4.1.1 which uses one store heavily. I get data samples from the server and, after some validation, add them to the store using its "add" method. I do this periodically, and I also remove records I no longer need from the store.
The problem is that my application consumes more and more RAM over time. I think I have found the source of the problem, but I do not know how to handle it.
Here is my store definition:
this.store = Ext.create('Ext.data.Store', {
    fields: ['when', 'data1', 'data2', 'data3', 'data4', 'data5', 'data6', 'data7', 'data8', 'data9'],
    proxy: {
        type: 'memory',
        reader: {
            type: 'json',
            root: 'users'
        }
    },
    sorters: [{
        property: 'when',
        direction: 'ASC'
    }]
});
And this is how I delete records from it:
var record = self.store.getAt(j);
if ((record.get('when') <= newMinDate) && (record.get('data' + id) !== ' ')) {
    self.store.remove(record);
    record.destroy();
    record = null;
    j--;
    ln--;
}
But when I checked the console while debugging this issue, I could see that the records are in fact deleted from the store, but not from memory.
EDIT/UPDATE:
I tried to fix the issue using the advice in your answers, but none of it fixed the problem. To be sure that I had identified the source correctly, I extracted my store code so I could examine it more closely and see whether it really is the cause. You can see the whole code below:
Ext.define('TestApp.App', {
    extend: 'Ext.app.Application'
});
Ext.application({
    extend: 'MyApp.app.Application',
    store: null,
    launch: function() {
        var self = this;
        self.store = Ext.create('Ext.data.Store', {
            fields: ['when', 'data1', 'data2', 'data3', 'data4', 'data5', 'data6', 'data7', 'data8', 'data9'],
            proxy: {
                type: 'memory'
            },
            sorters: [{
                property: 'when',
                direction: 'ASC'
            }]
        });
        self.beginTask();
    },
    beginTask: function() {
        var self = this;
        Ext.TaskManager.start({
            run: function() {
                var jsonRaw = *very large json*; // about 650 samples
                var json = Ext.JSON.decode(jsonRaw, true);
                // self.store.add(json.data.samples);
                // var ln = self.store.getCount();
                // for (var j = 0; j < ln; j++) {
                //     var record = self.store.getAt(j);
                //     self.store.remove(self.store.getAt(j));
                //     j--;
                //     ln--;
                //     record.destroy();
                //     delete record;
                // }
                json = null;
                jsonRaw = null;
            },
            interval: 1000
        });
    }
});
Now the strange part: the memory leak is present even when the store code is commented out, as in the listing above. Did I make some mistake with the task management?
One of the many gotchas I've discovered using Ext JS is that Ext.data.Model#destroy doesn't actually clean up the record locally. The destroy method uses the store's proxy to send a destroy request for that record (if you needed to remove the corresponding record from a database, for instance). If that's your intended behavior, then no worries.
When you remove a record from a store, that store keeps a reference to that record in an array called removed. You can see it towards the bottom of the Ext.data.Store#remove method. I recommend using a JavaScript debugger and inspecting your store object after a few removes to see if your records are being cached. If they are, it's simple enough to call store.removed.length = 0; to clear it out.
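As a minimal debugging sketch of that (the variable names follow the question's code; clearing removed assumes you never need to sync those deletions to a server):

// ExtJS 4.x keeps removed records in store.removed until the next sync(),
// so they are never garbage collected if you only ever call remove().
console.log('cached removed records:', self.store.removed.length);
// If you are sure you will never sync these deletions, drop the references:
self.store.removed.length = 0;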
What happens if you do a store.sync() after the remove? I don't know whether that helps for the memory proxy, but it should remove references to removed records, I think. Just doing a remove(record) on a record doesn't really remove it; it just marks it for removal and stops exposing it as available in the store. At least that's true for other proxy types. The actual removal is only performed after the store's modified records (added, removed, updated) have been synced through store.sync(). That's when the store holds the records in their new state in its internal list.
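A sketch of that suggestion using the names from the question (whether the memory proxy actually flushes its removed list on sync is exactly the open assumption here):

self.store.remove(record);
self.store.sync(); // commit pending add/remove/update operations so the store can drop its internal references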
Perhaps sra is right anyway and you yourself hold a reference to the record in a closure somewhere else?
Related
My Rally custom data store will not update. I'm having the problem described in [this][1] post.
My scenario: I add rows to a grid which has a custom data store. Then I sort a grid column, and all the new rows I added get deleted. There is nothing fancy about my custom store, and I've tried autoSync: true, but that does nothing.
Are custom stores Read-only, in the sense that any changes made to the original data are transient and will get deleted with a reload()?
This is my store that I add to the rallygrid
me.customStore = Ext.create('Rally.data.custom.Store', {
    data: customData,
    listeners: {
        load: function(customStore) {
            //do some stuff
        }
    }
});
I looked at the source code for the memory proxy, and it makes sense why nothing was getting added, removed, or updated correctly with the Rally.data.custom.Store. You have to override the create and destroy methods of the memory proxy.
CURRENT MEMORY PROXY FUNCTIONS
These are the functions the memory proxy uses to create and destroy records. As you can see, they don't actually create or destroy any records...
updateOperation: function(operation, callback, scope) {
    var i = 0,
        recs = operation.getRecords(),
        len = recs.length;
    for (i; i < len; i++) {
        recs[i].commit();
    }
    operation.setCompleted();
    operation.setSuccessful();
    Ext.callback(callback, scope || this, [operation]);
},

create: function() {
    this.updateOperation.apply(this, arguments);
},

destroy: function() {
    this.updateOperation.apply(this, arguments);
},
CORRECT MEMORY PROXY SETUP
Below is how to instantiate a custom store that will actually add and remove records.
me.customStore = Ext.create('Rally.data.custom.Store', {
    data: customData,  // your custom data
    model: modelType,  // your model type
    autoSync: true,
    proxy: {
        type: 'memory',
        create: function(operation) {
            var me = this;
            operation.getRecords().forEach(function(record) {
                console.log('adding record', record);
                me.data.push(record);
            });
            this.updateOperation.apply(this, arguments);
        },
        destroy: function(operation) {
            var me = this;
            operation.getRecords().forEach(function(record) {
                console.log(record);
                for (var i = 0; i < me.data.length; ++i) {
                    if (me.data[i] === record) {
                        me.data.splice(i, 1);
                        return;
                    }
                }
            });
            this.updateOperation.apply(this, arguments);
        }
    },
    listeners: { /* listener stuff here */ }
});
Can anyone see what may be wrong in this code? Basically, I want to check whether a post has been shared by the currently logged-in user AND add a temporary field to the client-side collection: isCurrentUserShared.
This works the first time after loading a new page and populating from existing Shares, and when adding or removing a record in the Shares collection, but only the very first time after the page is loaded.
1) isSharedByMe only changes state once. The callbacks still get called afterwards, as shown by console.log, but isSharedByMe doesn't get updated in the Posts collection after the first time I add or remove a record. It only works the first time.
2) Why do the callbacks get called twice in a row? Adding one record to the Shares collection triggers two calls, as shown by console.log.
Meteor.publish('posts', function() {
    var self = this;
    var mySharedHandle;

    function checkSharedBy(IN_postId) {
        mySharedHandle = Shares.find({ postId: IN_postId, userId: self.userId }).observeChanges({
            added: function(id) {
                console.log(" ...INSIDE checkSharedBy(); ADDED: IN_postId = " + IN_postId);
                self.added('posts', IN_postId, { isSharedByMe: true });
            },
            removed: function(id) {
                console.log(" ...INSIDE checkSharedBy(); REMOVED: IN_postId = " + IN_postId);
                self.changed('posts', IN_postId, { isSharedByMe: false });
            }
        });
    }

    var handle = Posts.find().observeChanges({
        added: function(id, fields) {
            checkSharedBy(id);
            self.added('posts', id, fields);
        },
        // This callback never gets run, even when checkSharedBy() changes field isSharedByMe.
        changed: function(id, fields) {
            self.changed('posts', id, fields);
        },
        removed: function(id) {
            self.removed('posts', id);
        }
    });

    // Stop observing cursors when the client unsubscribes
    self.onStop(function() {
        handle.stop();
        mySharedHandle.stop();
    });

    self.ready();
});
Personally, I'd go about this a very different way, by using the $in operator, and keeping an array of postIds or shareIds in the records.
http://docs.mongodb.org/manual/reference/operator/query/in/
I find publish functions work best when they're kept simple, like the following.
Meteor.publish('posts', function() {
    return Posts.find();
});

Meteor.publish('sharedPosts', function(postId) {
    var postRecord = Posts.findOne({ _id: postId });
    return Shares.find({ _id: { $in: postRecord.shares_array } });
});
I am not sure how far this gets you towards solving your actual problems but I will start with a few oddities in your code and the questions you ask.
1) You ask about a Phrases collection, but the publish function would never publish anything to that collection, as all the added calls send to the minimongo collection named 'posts'.
2) You ask about a 'Reposts' collection, but none of the code uses that name either, so it is not clear what you are referring to. Each element added to the 'Posts' collection, though, will create a new observer on the 'Shares' collection since it calls checkSharedBy(). Each observer will try to add and change docs in the client's 'posts' collection.
3) Related to point 2, mySharedHandle.stop() will only stop the last observer created by checkSharedBy(), because the handle is overwritten every time checkSharedBy() runs (see the sketch after this list).
4) If your observer of 'Shares' finds a doc with IN_postId, it tries to send a doc with that _id to the minimongo 'posts' collection. IN_postId is passed from your find on the 'Posts' collection, whose observer is also trying to send a different doc to the client's 'posts' collection. Which doc do you want on the client with that _id? Some of the errors you are seeing may be caused by Meteor's attempts to ignore duplicate added requests.
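A minimal sketch of a fix for point 3, reusing the names from the question's code; it simply collects every Shares observer handle so that all of them can be stopped:

var mySharedHandles = [];

function checkSharedBy(IN_postId) {
    // push every observer handle instead of overwriting a single variable
    mySharedHandles.push(
        Shares.find({ postId: IN_postId, userId: self.userId }).observeChanges({
            added: function(id) {
                self.added('posts', IN_postId, { isSharedByMe: true });
            },
            removed: function(id) {
                self.changed('posts', IN_postId, { isSharedByMe: false });
            }
        })
    );
}

self.onStop(function() {
    handle.stop();
    mySharedHandles.forEach(function(h) { h.stop(); });
});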
From all this I think you might be better off breaking this into two publish functions, one for 'Posts' and one for 'Shares', to take advantage of Meteor's default behaviour when publishing cursors. Any join can then be done on the client when necessary. For example:
//on server
Meteor.publish('posts', function() {
    return Posts.find();
});

Meteor.publish('shares', function() {
    return Shares.find({ userId: this.userId }, { fields: { postId: 1 } });
});

//on client - uses _.pluck from underscore package
Meteor.subscribe('posts');
Meteor.subscribe('shares');

Template.post.isSharedByMe = function() { //create the field isSharedByMe for a template to use
    var share = Shares.findOne({ postId: this._id });
    return share && true;
};
An alternate method is joining in the publish function with observeChanges. This is untested code, and it is not clear to me that it has much advantage over the simpler method above, so until the above breaks or becomes a performance bottleneck I would do it as above.
Meteor.publish("posts", function(){
var self = this;
var sharesHandle;
var publishedPosts = [];
var initialising = true; //avoid starting and stopping Shares observer during initial publish
//observer to watch published posts for changes in the Shares userId field
var startSharesObserver = function(){
var handle = Shares.find( {postId: {$in: publishedPosts}, userId === self.userId }).observeChanges({
//other observer should have correctly set the initial value of isSharedByMe just before this observer starts.
//removing this will send changes to all posts found every time a new posts is added or removed in the Posts collection
//underscore in the name means this is undocumented and likely to break or be removed at some point
_suppress_initial: true,
//other observer manages which posts are on client so this observer is only managing changes in the isSharedByMe field
added: function( id ){
self.changed( "posts", id, {isSharedByMe: true} );
},
removed: function( id ){
self.changed( "posts", id, {isSharedByMe: false} );
}
});
return handle;
};
//observer to send initial data and always initiate new published post with the correct isSharedByMe field.
//observer also maintains publishedPosts array so Shares observer is always watching the correct set of posts.
//Shares observer starts and stops each time the publishedPosts array changes
var postsHandle = Posts.find({}).observeChanges({
added: function(id, doc){
if ( sharesHandle )
sharesHandle.stop();
var shared = Shares.findOne( {postId: id});
doc.isSharedByMe = shared && shared.userId === self.userId;
self.added( "posts", id, doc);
publishedPosts.push( id );
if (! initialising)
sharesHandle = startSharesObserver();
},
removed: function(id){
if ( sharesHandle )
sharesHandle.stop();
publishedPosts.splice( publishedPosts.indexOf( id ), 1);
self.removed( "posts", id );
if (! initialising)
sharesHandle = startSharesObserver();
},
changed: function(id, doc){
self.changed( "posts", id, doc);
}
});
if ( initialising )
sharesHandle = startSharesObserver();
initialising = false;
self.ready();
self.onStop( function(){
postsHandle.stop();
sharesHandle.stop();
});
});
myPosts is a cursor, so when you invoke forEach on it, it cycles through the results, adding the field that you want but ending up at the end of the results list. Thus, when you return myPosts, there's nothing left to cycle through, so fetch() would yield an empty array.
You should be able to correct this by just adding myPosts.cursor_pos = 0; before you return, thereby returning the cursor to the beginning of the results.
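If poking at the cursor's internal position feels fragile, an alternative sketch (assuming the caller can work with a plain array rather than a live cursor) is to fetch the results once and return the array:

var posts = myPosts.fetch(); // materialise the cursor into an array
posts.forEach(function(post) {
    post.isSharedByMe = true; // whatever per-document field you are adding
});
return posts; // an array can be iterated again without rewinding anything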
I'm trying to make a Chrome extension. For that extension I need some info that is created dynamically, but I also want to be able to add to that data later on (on a different page).
This is the sort of data I want to have always accessible (while the extension runs):
var villages = [];
villages[0] = ["village1", "1325848"];
villages[1] = ["village2", "8744351"];
villages[2] = ["village3", "8952187"];
As you can see, the array is multi-dimensional. This is because I want to store the names ([0]) and the village ids ([1]) together.
Does anybody know a good way of handling this problem?
I've looked at this: jQuery Cookie
But I don't know if that is a proper way of handling the problem.
Alternatively, do I have to create some kind of XML file that will contain all the values?
UPDATE:
This is a skeleton example; if you want to store just village.id and village.name, just change the default data and it still works.
I have changed all the code for you; below it you will see how to iterate the array and get the villages' data.
First I should say that it's really bad practice to save data in a multidimensional array.
You should use objects instead: they keep your data tidy so you can manipulate it easily.
Here is an example object for your situation,
var village = {
    id: "1325848",
    name: "village1"
};

console.log(village.id);   //1325848
console.log(village.name); //village1
That was a basic getting-started example.
Here is the solution for your problem using local storage and JavaScript objects.
var ExtensionDataName = "myfirstextensiondata";
var ExtensionData = {
dataVersion: 3, //if you want to set a new default data, you must update "dataVersion".
villages: []
};
//default data
ExtensionData.villages.push(
{id: "1325848", name: "village1"},
{id: "8744351", name: "village2"},
{id: "8952187", name: "village3"}
);
function DB_setValue(name, value, callback) {
var obj = {};
obj[name] = value;
console.log("Data Saved!");
chrome.storage.local.set(obj, function() {
if(callback) callback();
});
}
function DB_load(callback) {
chrome.storage.local.get(ExtensionDataName, function(r) {
if (isEmpty(r[ExtensionDataName])) {
DB_setValue(ExtensionDataName, ExtensionData, callback);
} else if (r[ExtensionDataName].dataVersion != ExtensionData.dataVersion) {
DB_setValue(ExtensionDataName, ExtensionData, callback);
} else {
ExtensionData = r[ExtensionDataName];
callback();
}
});
}
function DB_save(callback) {
DB_setValue(ExtensionDataName, ExtensionData, function() {
if(callback) callback();
});
}
function DB_clear(callback) {
chrome.storage.local.remove(ExtensionDataName, function() {
if(callback) callback();
});
}
function isEmpty(obj) {
for(var prop in obj) {
if(obj.hasOwnProperty(prop))
return false;
}
return true;
}
DB_load(function() {
//YOUR MAIN CODE WILL BE HERE
console.log(ExtensionData);
console.log(ExtensionData.villages); //array of villages
console.log(ExtensionData.villages[0]); //first village object
console.log(ExtensionData.villages[0].id); //first village id
console.log(ExtensionData.villages[0].name); //first village name
//HOW TO ITERATE VILLAGES
for (var i = 0; i < ExtensionData.villages.length; i++) {
console.log(ExtensionData.villages[i].id); //village id
console.log(ExtensionData.villages[i].name); //village name
}
});
QUESTIONS:
Does ExtensionDataName have to stay the same, or can I change that?
ExtensionDataName is used as the key when your data is saved to local storage; it's just the name of your data collection. So of course you can change it; it's up to you.
What do you achieve by changing the number in this line:
dataVersion: 3, //if you want to set a new default data, you must update "dataVersion"?
The first time the user runs this extension there is no data in local storage, so the default village list is used.
In my example the default village list is:
[
    {id: "1325848", name: "village1"},
    {id: "8744351", name: "village2"},
    {id: "8952187", name: "village3"}
]
This default list is saved to local storage. After that, when the extension runs again (not the first time), the default list is not important anymore, because there is already a village list stored in local storage, so the village list is loaded from there.
For example, if you want to add a new village while the extension is running, you can do it like this:
ExtensionData.villages.push({id: "1215555", name: "village4"});
DB_save();
So what is the goal of dataVersion?
If you look at the DB_load() function, it's used there. It checks whether dataVersion is still the same; if it isn't, it decides:
"There is updated default data, so I should clear local storage and load the new data into it."
So if you don't change these lines,
ExtensionData.villages.push(
    {id: "1325848", name: "village1"},
    {id: "8744351", name: "village2"},
    {id: "8952187", name: "village3"}
);
then you don't need to change dataVersion.
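Conversely, a minimal sketch of what changing the defaults looks like (the extra village here is made up for illustration): edit the list and bump dataVersion so that DB_load() overwrites whatever is already stored:

var ExtensionData = {
    dataVersion: 4, // bumped from 3 because the default data below changed
    villages: []
};
ExtensionData.villages.push(
    {id: "1325848", name: "village1"},
    {id: "8744351", name: "village2"},
    {id: "8952187", name: "village3"},
    {id: "9999999", name: "village4"} // hypothetical new default village
);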
I'm new to Backbone.js, and as someone who comes from the 'standard' model of JS development I'm a little unsure of how (or when) to work with models.
Views seem pretty obvious, as they emulate the typical 'listen for an event and do something' approach that most JS devs are familiar with.
I built a simple Todo list app and so far haven't seen a need for the model aspect, so I'm curious whether someone can give me some insight into how I might apply it to this application, or whether it's something that only comes into play when working with more complex data.
Here's the JS:
Todos = (function(){
    var TodoModel = Backbone.Model.extend({
        defaults: {
            content: null
        }
    });

    var TodoView = Backbone.View.extend({
        el: $('#todos'),
        newitem: $('#new-item input'),
        noitems: $('#no-items'),
        initialize: function(){
            this.el = $(this.el);
        },
        events: {
            'submit #new-item': 'addItem',
            'click .remove-item': 'removeItem'
        },
        template: $('#item-template').html(),
        addItem: function(e) {
            e.preventDefault();
            this.noitems.remove();
            var templ = _.template(this.template);
            this.el.append(templ({content: this.newitem.val()}));
            this.newitem.val('').focus();
            return this;
        },
        removeItem: function(e){
            $(e.target).parent('.item-wrap').remove();
        }
    });

    self = {};
    self.start = function(){
        new TodoView();
    };
    return self;
});

$(function(){
    new Todos(jQuery).start();
});
Which is running here: http://sandbox.fluidbyte.org/bb-todo
Model and Collection are needed when you have to persist the changes to the server.
Example:
var todo = new TodoModel();
creates a new model. When you have to save the changes, call
todo.save();
You can also pass success and error callbacks to save. Save is a wrapper around the ajax function provided by jQuery.
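As a quick sketch of what those callbacks look like (the handler bodies are just placeholders):

todo.save(null, {
    success: function(model, response) {
        console.log('saved todo', model.id);
    },
    error: function(model, response) {
        console.log('save failed', response);
    }
});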
How to use a model in your app.
Add a url field to your model
var TodoModel = Backbone.Model.extend({
    defaults: {
        content: null
    },
    url: "http://localhost"
});
Create model and save it.
addItem: function(e) {
    e.preventDefault();
    this.noitems.remove();
    var templ = _.template(this.template);
    this.el.append(templ({content: this.newitem.val()}));
    // create and save the model before the input is cleared
    var todo = new TodoModel({'content': this.newitem.val()});
    todo.save();
    this.newitem.val('').focus();
    return this;
},
Make sure your server is running and the url is set correctly.
Learning Resources:
Check out the annotated source code of Backbone for an excellent
explanation of how things fall into place behind the scenes.
This Quora question has links to many good resources and sample apps.
The model is going to be useful if you ever want to save anything on the server side. Backbone's model is built around a RESTful endpoint. So if, for example, you set the URL root to lists and then store the list information in the model, the model's save and fetch methods will let you send/receive JSON describing the model to/from the server at the lists/<id> endpoint, e.g.:
var ToDoListModel = Backbone.Model.extend({
    urlRoot: "lists/"
});

// Once saved, lives at lists/5
var list = new ToDoListModel({id: 5, list: ["Take out trash", "Feed Dog"]});
list.save();
So you can use this to interact with data that persists on the server via a RESTful interface. See this tutorial for more.
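To round the example off, a hedged sketch of reading the same list back later (assuming the lists/5 resource created above exists on the server):

var list = new ToDoListModel({id: 5});
list.fetch({
    success: function(model) {
        console.log(model.get('list')); // e.g. ["Take out trash", "Feed Dog"]
    }
});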
I disagree with the idea that the model is needed only to persist changes (and I am including LocalStorage here, not only the server).
It is nice to have a representation of models and collections so that you have objects to work with and not only Views. In your example you are only adding and removing divs (HTML) from the page, which is something you can do with plain jQuery. Having a Model created and added to a Collection every time you "add", and maybe removed when you clear it, will allow you some nice things, for example sorting (alphabetically) or filtering (if you want to implement the concept of a "complete" to-do).
In your code, for example:
var TodoModel = Backbone.Model.extend({
    defaults: {
        content: null,
        complete: false
    }
});

var Todos = Backbone.Collection.extend({
    model: TodoModel
});
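For the alphabetical sorting mentioned above, a minimal sketch (an addition to this answer's code, not something the question requires) is to give the collection a comparator:

var Todos = Backbone.Collection.extend({
    model: TodoModel,
    // keeps the collection ordered by content whenever models are added
    comparator: function(todo) {
        return todo.get('content');
    }
});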
In the View (irrelevant code is skipped):
// snip....
addItem: function(e) {
    e.preventDefault();
    this.noitems.remove();
    var templ = _.template(this.template);
    var newTodo = new TodoModel({ content: this.newitem.val() });
    this.collection.add(newTodo); // you get the collection property for free from the initializer in Backbone
    this.el.append(templ({model: newTodo})); // change the template here of course to use the model
    this.newitem.val('').focus();
    return this;
},
Initialize like this:
self.start = function(){
    new TodoView({ collection: new Todos() });
};
Now you have a backing Collection and you can do all sorts of stuff, like filtering. Let's say you have a button for filtering done todos; you hook up this handler:
_filterDone: function (ev) {
    var filtered = this.collection.where({ complete: true });
    this.el.html(''); // empty the collection container; I used "el" but you know where you are rendering your todos
    _.each(filtered, function (todo) {
        this.el.append(templ({model: todo})); // it's as easy as that! :)
    }, this); // pass the view as context so "this" works inside the callback
}
Beware that emptying the container is probably not the best approach if you have events hooked to the inner views, but as a starter this works OK.
You may need a hook for setting a todo done. Create a button or checkbox and maybe a function like this:
_setDone: function (ev) {
    // you will need to scope properly or "this" here will refer to the element clicked!
    var todo = this.collection.get($(ev.currentTarget).attr('todo_id')); // if you had the accuracy to put the id of the todo somewhere within the template
    todo.set('complete', true);
    // some code here to re-render the list,
    // or remove the todo's single view and re-render it;
    // in the simplest form just redraw everything
    this.el.html('');
    this.collection.each(function (todo) {
        this.el.append(templ({model: todo}));
    }, this);
}
The code above would not have been so easy without Models and Collections, and as you can see it does not interact with the server in any way.
I'm not experienced in JavaScript; I've read a ton of articles about Meteor reactivity, but I still can't figure out why it is not working in my case.
When a new product is added, I want the total cost to be recalculated and used in the totalCost helper, so it's visible in the browser almost in real time.
Can someone please take a look at my code and try to find the logic error? Everything except the reactivity works on my computer.
I have got this method in /models/Product.js:
Meteor.methods({
    totalProductCost: function() {
        var pipeline = [
            {$match: {owner: Meteor.userId()}},
            {$group: {_id: null, cost: {$sum: "$cost"}}}
        ];
        var data = Products.aggregate(pipeline)["0"].cost;
        return (data === undefined) ? 0 : data;
    }
});
Then I've got layout.js in the client folder:
if (Meteor.isClient) {
    var handle = Meteor.subscribe("Products", Meteor.userId());

    ProductManager = {
        _productItems: null,
        _dep: new Tracker.Dependency(),
        getProducts: function () {
            this._dep.depend();
            return this._productItems;
        },
        setProducts: function (value) {
            if (value !== this._productItems) {
                this._productItems = value;
                this._dep.changed();
            }
        },
        getTotalCost: function () {
            return ReactiveMethod.call('totalProductCost');
        }
    };

    // TRACKER
    Tracker.autorun(function () {
        if (handle.ready()) {
            ProductManager.setProducts(Products.find().fetch());
        }
    });

    // HELPERS
    Template.boxOverview.helpers({
        "totalCost": function () {
            return ProductManager.getTotalCost();
        },
    });
}
It seems that you used a collection.aggregate in a method. If you need reactivity, you need to use a publication rather than a method (or you need to call the method each time you want to refresh). However, if you use your aggregation inside your publication (I assume you use a package for it), you will lose reactivity as well.
What I would advise is to use a publication without the aggregate function. You calculate your product cost by creating a new field and adding it to your cursor. Once you do that, if you want to keep reactivity, it is necessary to use cursor.observeChanges() or just cursor.observe().
Have a look at this example:
var self = this;

// Modify the document we are sending to the client.
function filter(doc) {
    var length = doc.item.length;
    // White list the fields you want to publish.
    var docToPublish = _.pick(doc, [
        'someOtherField'
    ]);
    // Add your custom fields.
    docToPublish.itemLength = length;
    return docToPublish;
}

var handle = myCollection.find({}, {fields: {item: 1, someOtherField: 1}})
    // Use observe since it gives us the old and new document when something is changing.
    // If this becomes a performance issue then consider using observeChanges,
    // but it's usually a lot simpler to use observe in cases like this.
    .observe({
        added: function(doc) {
            self.added("myCollection", doc._id, filter(doc));
        },
        changed: function(newDocument, oldDocument) {
            // When the item count is changing, send an update to the client.
            if (newDocument.item.length !== oldDocument.item.length)
                self.changed("myCollection", newDocument._id, filter(newDocument));
        },
        removed: function(doc) {
            self.removed("myCollection", doc._id);
        }
    });

self.ready();

self.onStop(function () {
    handle.stop();
});
This is taken from here.