In Backbone.js, when I fetch a collection, should I be fetching the entire collection or a small portion of it?
For example, I have a news feed collection in MongoDB that could potentially have thousands of items. When the user hits the page I only want to show them the latest 10 items, with the option to 'Load More'. But if they visit a specific item via a URL like http://site.com/#/feed/:itemID, I want to be able to pull up that item's record.
1. How many documents should I be fetching initially?
2. How would I go about fetching any item by ID?
I ended up using the {add: true} option when calling fetch on my collection. This prevents the collection from being replaced by the result of the fetch and instead appends the result to the collection. I also passed the 'skip' amount using {data: {skip: amountOfItemsInCollectionAlready}}; this is used on the server side to get the correct batch of items from the database.
My final fetch method looks like this:
loadMore: function(e){
    this.collection.fetch({
        add: true, // this adds to the collection instead of replacing it
        data: { // optional params to be sent with the request
            skip: this.collection.length // skip the number of items already in the collection
        }
    });
}
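On the server side, the skip value can be read from the query string and handed to the database query. A minimal sketch, assuming an Express route and the Node MongoDB driver (the route path, collection name, sort field, and page size of 10 are all illustrative):

app.get('/feed', function (req, res) {
    // `skip` is sent by the client as {data: {skip: ...}}
    var skip = parseInt(req.query.skip, 10) || 0;
    db.collection('feed')
        .find({})
        .sort({ createdAt: -1 }) // newest first; field name is an assumption
        .skip(skip)
        .limit(10)
        .toArray(function (err, items) {
            if (err) return res.status(500).send(err);
            res.json(items);
        });
});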
You probably don't want to just use Collection.fetch(), because you won't get the benefit of client-side caching - it'll drop the items you've already loaded from the server and reset the collection. You will probably need to extend Backbone.Collection with a custom function to retrieve more items. I used the following code in a recent project:
Backbone.Collection.extend({
    // fetch list without overwriting existing objects (copied from fetch())
    fetchNew: function(options) {
        options = options || {};
        var collection = this,
            success = options.success;
        options.success = function(resp, status, xhr) {
            _(collection.parse(resp, xhr)).each(function(item) {
                if (!collection.get(item.id)) {
                    collection.add(item, {silent: true});
                }
            });
            if (!options.silent) collection.trigger('reset', collection, options);
            if (success) success(collection, resp);
        };
        return (this.sync || Backbone.sync).call(this, 'read', this, options);
    }
});
This is mostly copied from the default fetch() code, but instead of dropping existing items it will add new ones. You'd probably want to implement something server-side, using the options object, as Julien suggests, to pass in the parameters of which items you want to load: probably either a page number (if you want to control page size on the server) or a start/stop pair (if you want to control it on the client).
1 - You should be fetching 10
Add a page argument to your collection and have the backend return the matching page (10 per page): /my_objects?page=2 to get records 11-20, etc.
You do this like this (untested):
collection.fetch({data: {page:2}})
Or you alter the URL directly
2 - To fetch an item by ID you create the model
object = new Model({id: 1})
and fetch it
object.fetch()
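For the fetch to hit the right endpoint, the model needs to know its URL. A minimal sketch, assuming the server exposes GET /feed/:id (the urlRoot and model name are illustrative):

var FeedItem = Backbone.Model.extend({
    urlRoot: '/feed' // assumption: fetch() will then request GET /feed/<id>
});

var item = new FeedItem({ id: itemID });
item.fetch({
    success: function (model) {
        // render the single item view here
    }
});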
I am pretty new to Ionic 1 and I am working on an application (with Ionic 1 and AngularJS) with multiple URLs, where each URL brings up a list of categories, followed by a list of items for each category, and each item has a document URL. How do I preload all these URLs on launch in the background without displaying them? Is there any way this can be achieved? A good code sample or tutorial would help greatly.
Also, please let me know whether pre-loading and pre-caching all content upon launch is the best approach, or whether it should be done category by category or some other way.
Thanks in advance!
You can make multiple asynchronous service calls in the background using $q.
Make a list of URLs in an array and request them all at once using $q.all(listOfURLs).
Using promises, retrieve each response.
By making this asynchronous you can save a lot of time.
After getting the responses you can store them either in $rootScope or in localStorage/sessionStorage.
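A minimal sketch of that approach (the endpoint URLs and storage keys are illustrative assumptions):

// Hypothetical preload: fire all requests in parallel, then cache each response.
var urls = ['/api/categories', '/api/items', '/api/documents']; // assumed endpoints

$q.all(urls.map(function (url) {
    return $http.get(url);
})).then(function (responses) {
    responses.forEach(function (response, i) {
        // cache under the URL so later screens can read it back
        window.localStorage.setItem(urls[i], JSON.stringify(response.data));
    });
});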
Update - since the OP is already aware of and using localStorage, some additional suggestions:
In that case, you could either call all of your service methods for fetching data at startup, or use a headless browser such as PhantomJS to visit these URLs at startup and fetch the data.
Your code would then look something like this:
var webPage = require('webpage');
var page = webPage.create();
page.open('http://www.google.com/', function(status) {
console.log('Status: ' + status);
// Do other things here...
});
For more information regarding PhantomJS, please refer to the following links:
http://phantomjs.org/
http://phantomjs.org/api/webpage/method/open.html
Earlier Suggestions
Make an HTTP request in your service to fetch the data and store it in localStorage, as shown below:
$http.get('url').then(function (response) {
    var obj = response.data;
    window.localStorage.setItem('key', JSON.stringify(obj)); // store data to localStorage for later use
});
For fetching the data:
var cachedData = JSON.parse(window.localStorage.getItem('key')); // Load cached data stored earlier
Please refer to the following link for detailed information regarding localStorage:
https://www.w3schools.com/html/html5_webstorage.asp
Hope this helps!
The best way to share data between different views in Angular is to use a service, as it is a singleton and can be used in other controllers.
In your main controller you can prefetch your lists of categories asynchronously through a service which can be shared with the next views. Below is a small demo you can refer to:
angular.module("test").service("testservice",function('$http',$q){
var lists = undefined;
// fetch all lists in deferred technique
this.getLists = function() {
// if lists object is not defined then start the new process for fetch it
if (!lists) {
// create deferred object using $q
var deferred = $q.defer();
// get lists form backend
$http.get(URL)
.then(function(result) {
// save fetched posts to the local variable
lists = result.data;
// resolve the deferred
deferred.resolve(lists);
}, function(error) {
//handle error
deferred.reject(error);
});
// set the posts object to be a promise until result comeback
lists = deferred.promise;
}
// in any way wrap the lists object with $q.when which means:
// local posts object could be:
// a promise
// a real lists data
// both cases will be handled as promise because $q.when on real data will resolve it immediately
return $q.when(lists);
};
this.getLists2=function(){
//do it similarly as above
};
}).controller("mainController",function(testservice,$scope){
$scope.lists1=testervice.getLists()
.then(function(lists) {
//do something
});
};
$scope.lists2=testervice.getLists2()
.then(function(lists) {
//do something
});
};
$scope.lists1();
$scope.lists2();
}).controller("demoController1",function(testservice,$scope){
$scope.lists1=testervice.getLists()
.then(function(lists) {
//do something
});
};
$scope.lists2=testervice.getLists2()
.then(function(lists) {
//do something
});
};
$scope.lists1();
$scope.lists2();
});
I am assuming you don't want to load data on the next screens, in order to deliver the user a flawless experience.
Yes, you can start loading URLs on your very first page, since you want them to fetch the data you will use on future screens.
In terms of storage:
In AngularJS, if you want something to persist throughout the application scope, you should use $rootScope (beware: keeping a lot of data there may lead to memory issues, so you need to clear it regularly).
Another option is to store it in localStorage and fetch it as needed.
If you want, you can share those arrays between the controllers of different screens.
While loading (getting the responses from the server) you can do one of two things:
1. Get a single JSON response containing all the data.
2. Have multiple URLs and load them serially.
As for your requirement of loading the 5th screen's (page's) data in advance: it's not good practice, and it even stops the user from seeing updates, but since it's your requirement, we have a couple of approaches:
Add all the categories and their respective details, as per your pastebin: cardiac then its details, kidney then its details, and so on.
You can do this by managing hierarchies (categories), with the parent main group and its child sub-groups in a JSON array and the details in a JSON object. (This change would be on the sender side - the server.)
You then need to load only one URL to get all the data, so you don't need to load different URLs like you are doing now. But beware: this would be a big JSON response. When you store it, separate the categories and required data (screen-wise requirements) and put them in local storage for easy access. A hypothetical shape is sketched below.
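A sketch of what such a single combined response might look like (all names and fields are illustrative, since the actual pastebin structure isn't shown here):

// One response carrying every category, its sub-groups, and their details.
var combinedResponse = {
    categories: [
        {
            name: 'cardiac',
            subGroups: [
                { name: 'sub-group-1', details: { documentUrl: '...' } }
            ]
        },
        {
            name: 'kidney',
            subGroups: [
                { name: 'sub-group-1', details: { documentUrl: '...' } }
            ]
        }
    ]
};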
Another approach would be to provide your (category) sub-group names to load, so the loading would be firing the same URL with different category names to get the data and store it in local storage.
This may lead to firing around 10-15 URLs (depending on your categories), which may affect the UI thread's responsiveness.
This won't need any changes to your server-side response.
**Programmatic approach to load URLs sequentially:**
URL loading: this method gets the details of a particular category (by id or anything that works for you). It fires an HTTP request and returns the result.
getCategoryDetails(category) {
    url = url + category;
    return $http({
        method: 'GET',
        url: url
        // headers: set as needed
    }).then(function onSuccess(response) { // `.then` transforms the promise here
        // you can either store the response in local storage or return it
        return response;
    }, function onError(response) {
        throw customExceptionHandler.getErrorMsg(response.status, response.data);
    });
}
Parallel: this method loads everything in parallel. We just load the categories (ids), as we already have all of them, and then use $q.all to wait for all the URL loads to finish.
function loadUrlsParallel(urls) {
    var loadUrls = [];
    for (var i = 0; i < urls.length; i++) {
        loadUrls.push(getCategoryDetails(urls[i]));
    }
    return $q.all(loadUrls);
}
First API: this method loads the first URL; then load the remaining URLs in parallel by calling the method above.
getListOfCategories() {
    return $http({
        method: 'GET',
        url: url
        // headers: set as needed
    }).then(function onSuccess(response) { // `.then` transforms the promise here
        // you can either store the response in local storage or directly return it
        return response;
    }, function onError(response) {
        throw customExceptionHandler.getErrorMsg(response.status, response.data);
    });
}
urls: you have to prepare the list of URLs by appending each category, to load after the first URL has loaded (expecting that it returns all the categories you will require in your app beforehand), and pass it to the loadUrlsParallel method.
You can write the loadUrl methods as per your convenience; what is given here is for example purposes, so it may not run as-is.
You can read the API responses everywhere from the local storage where you stored them after the API calls, so you won't have to execute API calls on every loading of a page (screen).
Hope this helps you and solves your problem.
I was searching for an easy and simple database for a little highscore system for some games I'm developing in JavaScript.
I saw Orchestrate.io in GitHub's Student Developer Pack. I found a suitable driver module, nodejs orchestrate, and have integrated it.
The problem comes with querying Orchestrate for my data. I have managed to save scores and query them with db.list('collection'), but this does not seem to respond with all the data. It appeared to me that some values were not returned.
I read about the db.search('collection','query') function, but I don't really understand how I could return all the data, because I don't want to query in a specific way.
My objects are as simple as follows:
{"name":"Jack","score":1337}
As I understand it, one has to send a key when putting such values into an Orchestrate collection. But I'd like to query the whole collection and get the values in descending order with regard to the score.
For now I end up sorting the result on the client side.
I hope you guys can give me some hints for a query that can sort by specific values!
You have the option to use a SearchBuilder:
db.newSearchBuilder() // build a search object
  .collection('collection') // set the collection to be searched
  .sort('score', 'desc') // set the order of the results
  .query('*') // empty search matching everything
  .then(function (res) { // callback function for the results
      // do something with the results
  })
Source
By default, .list uses a pagination limit of 10. You can either increase that, e.g.:
db.list('collection', { limit: 100 })
Or use .links, .links.next (from the docs):
db.list('collection', { limit: 10 })
  .then(function (page1) {
      // got first page
      if (page1.links && page1.links.next) {
          page1.links.next.get().then(function (page2) {
              // got second page
          });
      }
  });
I am developing a site using the JavaScript framework Backbone.js. On my site there is one category selection drop-down. Using a Backbone collection fetch, I have rendered my category drop-down successfully. In my header I have three horizontal menu items [image shown below]. When the user clicks a menu item (page navigation is done using Backbone routers), my main content of the day changes. The user can filter the content based on the category. My category filter drop-down options will not change frequently.
ALL = http://www.Site1.com
MOBILE = http://www.Site1.com/#all/mobile
DESKTOP = http://www.Site1.com/#all/desktop
My Router:
dealapp.AppRouter = Backbone.Router.extend({
    routes: {
        "": "home",
        "all/mobile": "mobile",
        "all/desktop": "desktop"
    },
    home: function () {},
    mobile: function () {},
    desktop: function () {}
});
Success case
I load my site using "http://www.Site1.com/". The home function gets called and does the listed actions. If I then navigate to another tab (mobile/desktop), my category drop-down displays. [Note: I fetch my categories from the server in the home function.]
Scenario
I load my site using "http://www.Site1.com/#all/deal" directly. In this case my category drop-down is not rendered; I get an empty drop-down. I know that I haven't added the category fetch in the other two functions, mobile and desktop. But if I include the category fetch in the mobile and desktop functions, then each time the category fetch goes to the server and fetches the data from the server again.
My doubt
How do I know whether my collection already has data? I want to reuse the already downloaded data. If the data is not available locally, then I need to fetch it from the server.
You can override fetch on the collection. fetch returns a deferred object, which you can store on the collection itself. If the deferred is null you call the prototype's fetch. The advantage is that in your code you always call fetch, and if the collection has data you get the cached data back.
fetch: function(options) {
    if (this.deferred) {
        return this.deferred;
    }
    this.deferred = Backbone.Collection.prototype.fetch.call(this, options);
    return this.deferred;
}
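A hypothetical usage sketch (the collection and handler names are illustrative): both calls below resolve from the same stored deferred, so the server is hit only once.

var categories = new CategoryCollection();

categories.fetch().done(renderDropdown); // first call: goes to the server

// later, e.g. in the mobile/desktop route handlers:
categories.fetch().done(renderDropdown); // returns the stored deferred, no new request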
This specific problem was dealt with by others and a few plugins can be found.
The one I am currently using with success is the Thorax framework, which adds a few things on top of Backbone.
For Model and Collection they added the isPopulated() and isEmpty() methods, as can be seen here in their documentation.
They will tell you whether there is data in the collection or not. If you don't want to use the entire framework, just copying the code from their Git repository here would do.
In a few words, they solve the problem by overriding fetch to set a property called _fetched to true when the data are fetched.
Another way would be to cache the data. Most of the time this is a good idea, but it depends. In your scenario it could be a good idea to cache it in localStorage.
A plugin I found that seems to do its job is Backbone fetch cache.
Description:
This plugin intercepts calls to fetch and stores the results in a cache object (Backbone.fetchCache._cache). If fetch is called with { cache: true } in the options and the URL has already been cached, the AJAX call will be skipped.
Yet another version is mentioned in this answer: Caching collections in backbone.js?
As the answerer there said, you could do it similarly to this:
var manager = (function(){
    var constructors = {
        'example': ExampleCollection
    };
    var collections = {};
    return {
        getCollection: function(name) {
            if (!collections[name]) {
                var collection = new constructors[name]();
                collection.fetch();
                collections[name] = collection;
            }
            return collections[name];
        }
    };
})();
Here the manager is responsible for instantiating collections and fetching them. When you call:
var exampleCollection = manager.getCollection('example');
you get an instance of the example collection with data already fetched. Whenever you need this collection again you can call the method again. You will then get the exact same instance, with no need to fetch it again.
I have a Store configured with a proxy to POST data to the server. I add records to this store dynamically. After calling the sync() method on the store, the data gets sent to the server. But looking at the network traffic I see that each record's whole data is sent. How can I configure the store to send only partial data (like only the IDs of the records)?
I have tried setting the writeAllFields property to false on the JSON writer connected to the proxy, but this did not help.
I have also tried this approach: Ext.JS Prevent Proxy from sending extra fields, but the request was not even performed.
var documentStore = Ext.getStore('Document');
var trashStore = Ext.getStore('TrashDocuments');

documentStore.each(function (record) {
    console.debug(record);
    //record.phantom = true;
    //record.setDirty();
    trashStore.add(record);
    documentStore.remove(record);
});

var newWriter = Ext.create('Ext.data.writer.Json', {
    getRecordData: function (record) {
        return { 'id': record.data.id };
    }
});
trashStore.getProxy().setWriter(newWriter);

trashStore.sync({
    success: function () {
        console.debug("success!!");
    },
    failure: function () {
        console.debug("failed...");
    },
    callback: function () {
        console.debug("calling callback");
    },
    scope: this
});
console.debug("END");
The writeAllFields config is not working because it is only meaningful for non-phantom records; all fields of a phantom (new) record are viewed as "changed" and therefore included in the request packet.
To exclude specific fields from the request, add persist: false to the fields you don't want to include, as sketched below.
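A minimal sketch of such a model definition (the model and field names are illustrative assumptions):

// Hypothetical model: fields marked persist: false are omitted from write requests.
Ext.define('MyApp.model.Document', {
    extend: 'Ext.data.Model',
    fields: [
        { name: 'id' },
        { name: 'title', persist: false },  // excluded from the request payload
        { name: 'content', persist: false } // excluded from the request payload
    ]
});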
That being said, I don't quite understand why you'd only want to write the ids of newly added records. Unless you are explicitly having Ext JS create those ids for you, what are you actually going to send to the server? I don't know your use case, but typically it is desirable to have your persistence layer (e.g., a database) assign identifiers to your records.
I've been trying to wrap my head around best RESTful practices while using BackboneJS. I feel like I've written myself into a bit of a knot and could use some guidance.
My scenario is this: a user wants to create a new Playlist with N items in it. The data for the N items is coming from a third-party API in bursts of 50 items. As such, I want to add a new, empty Playlist and, as the bursts of 50 come in, save the items and add them to my Playlist.
This results in my Playlist model having a method, addItems, which looks like:
addItems: function (videos, callback) {
    var itemsToSave = new PlaylistItems();
    var self = this;

    // Create a new PlaylistItem for each Video.
    videos.each(function (video) {
        var playlistItem = new PlaylistItem({
            playlistId: self.get('id'),
            video: video
        });
        itemsToSave.push(playlistItem);
    });

    itemsToSave.save({}, {
        success: function () {
            // OOF TERRIBLE.
            self.fetch({
                success: function () {
                    // TODO: For some reason when I call self.trigger then allPlaylists
                    // triggers fine, but if I go through fetch it doesn't trigger?
                    self.trigger('reset', self);
                    if (callback) {
                        callback();
                    }
                }
            });
        },
        error: function (error) {
            console.error("There was an issue saving " + self.get('title'), error);
        }
    });
}
ItemsToSave is generally a Collection with 50 items in it. Since BackboneJS does not provide a Save for Collections, I wrote my own. I didn't care much for creating a Model wrapper for my Collection.
So, when I call Save, none of my items have IDs. The database assigns the IDs, but that information isn't implicitly updated by Backbone because I'm saving a Collection and not a Model. As such, once the save is successful, I call fetch on my Playlist to retrieve the updated information. This is terrible because a Playlist could have thousands of items in it -- I don't want to be fetching thousands of items every time I save multiple.
So, I'm thinking maybe I need to override the Collection's parse method and manually map the server's response back to the Collection.
This all seems... overkill/wrong. Am I doing something architecturally incorrect? How does a RESTful architecture handle such a scenario?
My opinion is: do what works and feels clean enough, and disregard what the RESTafarian credo might be. Bulk create, bulk update, and bulk delete are real-world use cases that the REST folks just close their eyes to and pretend don't exist. Something along these lines sounds like a reasonable first attempt to me:
create a bulkAdd method, or override add carefully if you are feeling confident
don't make models or add them to the collection yet, though
do your bulk POST or whatever to get them into the database and get the assigned IDs back
then add them as models to the collection (see the sketch below)
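A hypothetical sketch of that flow on the Playlist model, assuming the server exposes POST /playlists/:id/items and responds with the saved items including their database-assigned ids (the endpoint and the 'items' attribute are illustrative):

bulkAdd: function (videos) {
    var self = this;
    // plain attribute objects, no models yet
    var payload = videos.map(function (video) {
        return { playlistId: self.get('id'), video: video };
    });
    return Backbone.ajax({
        url: '/playlists/' + this.get('id') + '/items', // assumed endpoint
        type: 'POST',
        contentType: 'application/json',
        data: JSON.stringify(payload)
    }).then(function (savedItems) {
        // the response carries the ids assigned by the database,
        // so no follow-up fetch of the whole playlist is needed
        self.get('items').add(savedItems);
    });
}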