I have been experimenting with the localStorage module for Backbone.js (https://github.com/jeromegn/Backbone.localStorage). As I understand it, this overrides Backbone.sync and therefore stops Backbone from pushing to the server(?). Ideally, I would like to pass my data back to the server as well, persisting it locally when online and just using localStorage when offline (you know, the perfect app). I haven't found any documentation on this yet.
Is Backbone.localStorage a part of this?
Has anyone been able to build this scenario?
How is this done? (Please tell me I don't have to roll my own.)
Thanks.
Backbone.localStorage is an external file you can include which overwrites Backbone.sync.
You can use simple feature detection for whether the user is offline or online and then asynchronously load Backbone.localStorage.js if they are offline.
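A minimal sketch of that detection-and-load idea (the script path is an assumption; adjust it to wherever you host the plugin):

if (!window.navigator.onLine) {
  // offline: load the localStorage plugin, which replaces Backbone.sync
  var script = document.createElement('script');
  script.src = '/js/backbone.localStorage.js';
  document.head.appendChild(script);
}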
If necessary you can also pass a specific version of Backbone.sync to your models and collections.
If you want to do both at the same time you'll have to write your own version of Backbone.sync that both calls the server and calls localStorage.
The easiest way to do this is to just define
// originalSync and localStorageSync must be captured beforehand, e.g.
// var originalSync = Backbone.sync; // before the plugin overwrites it
Backbone.sync = function () {
  originalSync.apply(this, arguments);
  return localStorageSync.apply(this, arguments);
};
Edit:
As mentioned in the comments, if you use the latest Backbone.localStorage plugin then you can do the following:
Backbone.sync = function Sync() {
  Backbone.ajaxSync.apply(this, arguments);
  return Backbone.localSync.apply(this, arguments);
};
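For what it's worth, a hedged usage sketch: with the dual sync in place, a model needs both a server URL and the plugin's localStorage store (the names here are illustrative):

// This model persists to both targets through the combined sync above.
var Note = Backbone.Model.extend({
  urlRoot: '/api/notes', // used by Backbone.ajaxSync
  localStorage: new Backbone.LocalStorage('notes') // used by Backbone.localSync
});
new Note({ title: 'hello' }).save(); // writes to the server and to localStorage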
Related
I'm building an app that needs to detect when a user loses internet connectivity or cannot reach the server. Multiple controllers and services need to be able to check and set this. I have achieved all of this with no problem using an Angular service and
window.addEventListener('offline', function() { OfflineService.checkIfOnline(); });
then in the service with something like
online = window.navigator.onLine;
The tricky part comes in when I need to update the view when the offline event occurs. I can't seem to find a way to update the scope property or a controller property when the service property gets updated by the event.
When I use $scope.$watch, the function fires 10 times (noted by console.log) and then never again.
I tried to replicate the problem in a jsfiddle, but this is my first time using that tool, and I'm not sure if I did it right:
https://jsfiddle.net/m3nx5yLm/1/
Thank you for your help.
Thank you everyone for your help.
I ended up going with a solution suggested by a buddy of mine: using $rootScope.$emit('offlineEvent', true); in the service and listening for it in the controller with $rootScope.$on('offlineEvent', this.setControllerProperty);.
https://jsfiddle.net/m3nx5yLm/3/
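In sketch form (the exact wiring is my assumption of what the fiddle does; names come from the question):

// Service: broadcast connectivity changes on the root scope.
app.factory('OfflineService', ['$rootScope', function ($rootScope) {
  function checkIfOnline() {
    $rootScope.$emit('offlineEvent', window.navigator.onLine);
  }
  // DOM events fire outside Angular, so re-enter a digest with $apply.
  window.addEventListener('offline', function () { $rootScope.$apply(checkIfOnline); });
  window.addEventListener('online', function () { $rootScope.$apply(checkIfOnline); });
  return { checkIfOnline: checkIfOnline };
}]);

// Controller: listen for the event and update a view property.
app.controller('StatusController', ['$rootScope', '$scope', function ($rootScope, $scope) {
  var unbind = $rootScope.$on('offlineEvent', function (event, isOnline) {
    $scope.online = isOnline;
  });
  $scope.$on('$destroy', unbind); // clean up the root-scope listener
}]);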
constructor($scope, OfflineNotificationService){
Looks like you were referencing the class from the scope, not the instance created by the injector (it needed to be passed in along with $scope). I also used the watch syntax where the first argument is a function, just to be explicit about making that call; the string syntax is typically used only to reference properties on the scope. A few other notes: you can just return window.navigator.onLine, and you can store the value on the service instance and reference it directly from the view. You can then call checkOnline periodically with a $timeout loop, or listen for the browser's online/offline events, instead of using the watch to fire the function.
https://jsfiddle.net/m3nx5yLm/4/
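A condensed sketch of that advice (the service and property names are assumed):

// Watch a function (an explicit getter on the service) rather than a string
// expression, since string expressions only resolve properties on the scope.
$scope.$watch(
  function () { return OfflineNotificationService.online; },
  function (isOnline) { $scope.online = isOnline; }
);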
I have a Torii adapter that posts my authorization tokens (e.g. Facebook and Twitter) back to my API to establish sessions. In the open() method of my adapter, I'd like to know the name of the provider, to write some logic around how to handle the different types of providers. For example:
// app/torii-adapters/application.js
import Ember from 'ember';

export default Ember.Object.extend({
  open(authorization) {
    var provider, data;
    if (this.provider.name === 'facebook-connect') {
      provider = 'facebook';
      // Facebook-specific logic
      data = { ... };
    } else if (this.provider.name === 'twitter-oauth2') {
      provider = 'twitter';
      // Twitter-specific logic
      data = { ... };
    } else {
      throw new Error(`Unable to handle unknown provider: ${this.provider.name}`);
    }
    return POST(`/api/auth/${provider}`, data);
  }
});
But, of course, this.provider.name is not correct. Is there a way to get the name of the provider used from inside an adapter method? Thanks in advance.
UPDATE: I think there are a couple ways to do it. The first way would be to set the provider name in localStorage (or sessionStorage) before calling open(), and then use that value in the above logic. For example:
localStorage.setItem('providerName', 'facebook-connect');
this.get('session').open('facebook-connect');

// later ...
const providerName = localStorage.getItem('providerName');
if (providerName === 'facebook-connect') {
  // ...
}
Another way is to create separate adapters for the different providers. There is code in Torii to look for e.g. app-name/torii-adapters/facebook-connect.js before falling back on app-name/torii-adapters/application.js. I'll put my provider-specific logic in separate files and that will do the trick. However, I have common logic for storing, fetching, and closing the session, so I'm not sure where to put that now.
UPDATE 2: Torii has trouble finding the different adapters under torii-adapters (e.g. facebook-connect.js, twitter-oauth2.js). I was attempting to create a parent class for all my adapters that would contain the common functionality. Back to the drawing board...
UPDATE 3: As @brou points out, and as I learned talking to the Torii team, fetching and closing the session can be done, regardless of the provider, in a common application adapter (app-name/torii-adapters/application.js). If you need provider-specific session-opening logic, you can have multiple additional adapters (e.g. app-name/torii-adapters/facebook-oauth2.js) that may subclass the application adapter (or not); a sketch follows the links below.
Regarding the session lifecycle in Torii: https://github.com/Vestorly/torii/issues/219
Regarding the multiple adapters pattern: https://github.com/Vestorly/torii/issues/221
Regarding the new authenticatedRoute() DSL and auto-session-fetching in Torii 0.6.0: https://github.com/Vestorly/torii/issues/222
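A hedged sketch of that multiple-adapters layout (file names follow Torii's lookup convention; the method bodies are placeholders):

// app/torii-adapters/application.js: common session logic for all providers
import Ember from 'ember';

export default Ember.Object.extend({
  fetch(data) { /* common session-fetching logic */ },
  close(data) { /* common session-closing logic */ }
});

// app/torii-adapters/facebook-connect.js: provider-specific session opening
import ApplicationAdapter from './application';

export default ApplicationAdapter.extend({
  open(authorization) { /* Facebook-specific logic, then POST to the API */ }
});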
UPDATE 4: I've written up my findings and solution on my personal web site. It encapsulates some of the ideas from my original post, from #brou, and other sources. Please let me know in the comments if you have any questions. Thank you.
I'm not an expert, but I've studied simple-auth and torii twice in recent weeks. At first, I realized that I needed to level up on too many things at the same time, and ended up delaying my login feature. Today, I'm back on this work for a week.
My question is: What is your specific logic about?
I am also implementing provider-agnostic processing AND later common processing.
Here is the process I've started implementing:
1. User authentication: basically, calling the torii default providers to get that OAuth2 token.
2. User info retrieval: getting canonical information from the FB/GG/LI APIs, in order to create as few sessions as possible for a single user across different providers. The result is thus API-agnostic.
➜ I'd then do: custom sub-providers calling this._super(), then doing this retrieval.
3. User session fetching or session updates via my API: using the previous canonical user info. This should then be the same for any provider.
➜ I'd then do: a single (application.js) torii adapter.
4. User session persistence against page refresh: theoretically, using simple-auth's session implementation is enough.
Maybe the only difference between our work is that I don't need an authorizer for the moment, as my back-end is not yet secured (I still run locally).
We can keep in touch about our respective progress: this is my week task, so don't hesitate!
I'm working with ember 1.13.
Hope it helped,
Enjoy coding! 8-)
I very much like Meteor's pub/sub. I wonder if there is a way to get a similar workflow, using sails.js or just a socket library in general.
In particular, what I would like to be able to do is something along the lines of:
// Server-side:
App.publish('myCollection', -> collection.find({}))
// Client-side:
let myCollection = App.subscribe('myCollection')
let bob = myCollection.find({name: 'Bob'})
myCollection.insert({name: 'Amelie'}, callback)
All interaction with the server should happen in the background.
I very much like Meteor's pub/sub. I wonder if there is a way to get a similar workflow, using sails.js or just a socket library in general
Basically yes, at least for realtime sync between backend and frontend. Let's review what Meteor has and answer point by point.
Pub/sub
The pub/sub concept, as stated by Sabbir, is also supported by sails.js, though the basics are slightly different:
In Meteor, the client can subscribe to whatever it wants, and the server controls what each client receives by publishing only to the clients it chooses;
whereas in sails.js, the server both subscribes client sockets and publishes to all bound sockets.
Note that, by default:
Meteor ships with the autopublish package, which simply notifies every client without any kind of filtering. To achieve some filtering, you have to meteor remove autopublish; then you can control what each client receives by attaching a mongo query, as explained here.
Sails, by default, on its automatic "select" blueprint actions, auto-subscribes the calling socket to events on the objects returned by the "select".
As a server-side conclusion:
Subscribe: just call the find or findOne blueprint default action through a socket (attaching some where filters or not) and your socket will automatically be subscribed to every event concerning the returned objects => you don't have to code anything on the server, in most cases, for the subscribe logic.
Publish: every blueprint default action (create, update, destroy, add, remove) auto-publishes to subscribed sockets => you don't have to code anything on the server, in most cases, for the publish logic.
(Though, if you find yourself implementing some manual controller actions, the sails API helps you publish and subscribe easily, as sketched below.)
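For those manual actions, a hedged sketch using the resourceful pubsub methods of Sails ~0.10 (model, action and field names are illustrative; the exact subscribe signature varies between Sails versions):

// api/controllers/MyCollectionController.js
module.exports = {
  // Custom "subscribe" action: return records and bind the calling socket to them.
  watch: function (req, res) {
    if (!req.isSocket) return res.badRequest('Socket requests only');
    MyCollection.find().exec(function (err, records) {
      if (err) return res.serverError(err);
      MyCollection.subscribe(req.socket, records); // the subscribe side
      return res.json(records);
    });
  },

  // Custom mutating action: notify subscribed sockets after an update.
  rename: function (req, res) {
    MyCollection.update(req.param('id'), { name: req.param('name') })
      .exec(function (err, updated) {
        if (err) return res.serverError(err);
        MyCollection.publishUpdate(updated[0].id, { name: updated[0].name }); // the publish side
        return res.json(updated[0]);
      });
  }
};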
Client handling
Therefore, with both meteor and sails, clients only receive what they're supposed to receive. Time for front-end now.
Philosophy
Meteor, on the one hand, with its isomorphic dimension, provides a front-end connector by nature, exposing its data-bound collections.
Sails, on the other hand, is front-end agnostic and can be consumed by any HTTP REST client (JavaScript or not), such as $http, $resource, or more advanced ones like Restangular.
Though, being aware of the complexity of using raw sockets with their API (when it comes to sessions, CORS, CSRF and such), they developed a JavaScript socket.io wrapper called sails.io.js, designed to be REST-like-over-socket, and it just works like a charm.
Basically, the main difference is that Meteor is one step higher-level than Sails, because it provides the logic of syncing collections and objects.
All interaction with the server should happen in the background.
sails.io.js, the official front-end component, is just not that high-level, especially when it comes to Angular.js.
Though, you can find some community connectors that aim to provide more or less the same features as Mongo's data-bound collections and objects. There are sails-resource, spinnaker and angular-resource-sails. I tried a couple of them, and I should say that I was disappointed. The abstraction level is so high that it just becomes annoying, IMHO. For example, with not-very-RESTful-friendly custom actions, like a login, it becomes very hard to adapt to your needs.
==> I would advise using a low-level connector, such as angularSails or (my preferred) https://github.com/janpantel/angular-sails, or even raw sails.io.js if you're not using Angular.
Edit: just found a Backbone version, by Sails' creator.
It just works great, and believe me, the "keep my collection in sync with that socket" code is so ridiculously simple that finding a module for it is just not worth it.
Some code please, stop talking
In particular, what I would like to be able to do is something along the lines of:
Server
Meteor
# Server-side:
App.publish('myCollection', -> collection.find({}))
Sails
//Nothing to do, just sails generate api myCollection
Client
Meteor
# Client-side:
myCollection = App.subscribe('myCollection')
Sails, with sails.io.js
(Here using lodash for convenience)
var myCollection;

sails.io.get('/myCollection').then(
  function (res) {
    myCollection = res.data;
  },
  function (err) {
    // Handle error
  }
);
sails.io.on('myCollection', function (msg) {
  switch (msg.verb) {
    case 'created':
      myCollection.push(msg.data);
      break;
    case 'updated':
      _.extend(_.find(myCollection, 'id', msg.id), msg.data);
      break;
    case 'destroyed':
      _.remove(myCollection, 'id', msg.id);
      break;
  }
});
(I leave the find, where and create calls to your imagination with the docs.)
All interaction with the server should happen in the background.
Well, with Sails, only for Angular, with sails resources. I'm not really used to that process, so I leave you to read here or here, but once again I'd choose the manual .on() method.
Since I asked this question, I've learned a few things and some new projects have popped up. I decided against sails.io, because when developing with React.js, most of the community's weight is behind webpack, but sails.io uses gulp. I realize these can be used together and there is even an npm package for this, but I wasn't too keen on making my stack bigger than it had to be, so I went with a simple express.js server that I could tailor to my needs.
In order to sync my data, I'm using rethinkdb which allows me to asynchronously watch the database for changes and then publish the changes to the clients through websockets.
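The watching part looks roughly like this; a sketch under assumptions (the official rethinkdb driver plus the ws package; table name and ports are made up):

var r = require('rethinkdb');
var WebSocket = require('ws');

var wss = new WebSocket.Server({ port: 8080 });

r.connect({ host: 'localhost', port: 28015 }, function (err, conn) {
  if (err) throw err;
  // changes() returns a changefeed cursor that yields every write to the table
  r.table('myCollection').changes().run(conn, function (err, cursor) {
    if (err) throw err;
    cursor.each(function (err, change) {
      if (err) return;
      // change.old_val / change.new_val describe the modification
      var payload = JSON.stringify(change);
      wss.clients.forEach(function (client) {
        if (client.readyState === WebSocket.OPEN) client.send(payload);
      });
    });
  });
});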
I've set up a simple script where I keep an instance of a baobab tree on both the client and the server (see the sketch below):
When the tree gets modified on the server, it sends the transaction data to the appropriate clients through the websocket.
The client merges the transaction into its tree.
This method does not make use of local storage and keeps the data in memory in the node.js process. The data in the transaction is also quite redundant.
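In sketch form, heavily hedged (Baobab v2's 'update' event payload and the operation shape are assumptions from memory; socket is an already-connected websocket, and only 'set'-style operations are replayed):

var Baobab = require('baobab');

// Server side: relay every local transaction over the websocket.
var serverTree = new Baobab({ items: [] });
serverTree.on('update', function (e) {
  // e.data.transaction is the list of operations just applied to the tree
  socket.send(JSON.stringify(e.data.transaction));
});

// Client side: replay incoming operations onto the local tree.
var clientTree = new Baobab({ items: [] });
socket.onmessage = function (msg) {
  JSON.parse(msg.data).forEach(function (op) {
    clientTree.select(op.path).set(op.value); // handles 'set'-style ops only
  });
};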
The future plan has always been to set something up using redis and local storage ...
... until yesterday when I found deepstream.io!
This is a tool that does exactly what I want and need! Nothing more, nothing less.
Another project worth mentioning is meatier: "like meteor, but meatier". It is composed of many other well-supported open source projects, so you could even pick and choose.
I'm looking for either a reference or an answer to what I think is a very common problem that people currently implementing JavaScript MVC frameworks (such as Angular, Ember or Backbone) would come across.
I am looking for a way, or common pattern, to externalize application properties that are accessible in the JS realm: something that would allow the JavaScript to load server-side properties, such as endpoints, salts, etc., that live outside the application root. The issue I'm running into is that browsers typically do not have access to the file system, because that is a security concern.
Therefore, what is the recommended approach for loading properties that are configurable outside of a deployable artifact if such a thing exists?
If not, what is currently in practice and considered the recommended approach for this type of problem?
I am looking for a cross-browser-compatible answer (Google Chrome is awesome, I agree).
Data Driven Local Storage Pattern
Just came up with that!!
The idea is to load the configuration properties based on a convention-over-configuration naming scheme where all properties are derived from the targeted hostname. That is, the hostname derives a trusted endpoint, and that endpoint loads the corresponding properties into the application. These application properties contain information that is relevant at runtime. The runtime information is supplied to the integration parts, which then communicate via property iteration during the bootstrapping at start up.
To keep it simple, we'll just use two properties here:
This implementation is Ember JS specific, but the general idea should be portable.
I am narrowing the scope of this question to a specific technological perspective, namely Ember JS, with the following remedy that is working properly for me; I hope it helps any of you dealing with the same issue.
Ember.Application.initializer implementation at start up:
initialize: function (container, application) {
  var origin = window.location.origin;
  var host = window.location.hostname;
  var port = window.location.port;
  var configurationEndPoint = '';
  // local mode
  if (host === 'localhost') {
    if (port === '8000') {
      // standalone using the API stub on Node.js
      configurationEndPoint = '/api/local';
    } else {
      // standalone UI app integrating with a back-end application
      // on the same machine, different port
      configurationEndPoint = '/services/env';
    }
    origin += configurationEndPoint;
  } else {
    throw new Error('Unsupported Environment!!');
  }
  // load the configuration from a trusted resource and store it in
  // localStorage on start up
  $.get(origin, function (data) {
    // store all configuration key/value pairs in localStorage for access
    var configuration = data.configuration;
    for (var config in configuration) {
      localStorage.setItem(config, configuration[config]);
    }
  });
}
Configurable Adapter
import DS from 'ember-data';

export default DS.RESTAdapter.extend({
  host: localStorage.host,
  namespace: localStorage.namespace
});
Just yesterday morning I was tackling the same issue.
Basically, you have two options:
Use localStorage/indexedDB or any other client-side persistent storage. (But you have to put config there somehow).
Render your main template (the one that always gets rendered) with a hidden element in which you put the config JSON.
Then in your app init code you get this config and use it. Plain and simple in theory, but let's get down to nasty practice (for the second option).
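For the second option, a minimal sketch (the element id and config keys are illustrative):

// Server-rendered into the main template:
//   <script type="application/json" id="app-config">{"apiHost":"https://api.example.com"}</script>

// App init code: read and parse the embedded config.
var config = JSON.parse(document.getElementById('app-config').textContent);
// e.g. register it with Angular before anything injects it
angular.module('app').constant('CONFIG', config);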
First, the client should get the config before the application loads. That is not always easy; e.g. the user may need to be logged in to see the config. In my case I check whether I can provide the config on the first request, and if not, redirect the user to the login page. This leads us to the second limitation: once you are ready to provide the config, you have to reboot the app completely so that the configuration code runs again (at least in Angular this is necessary, as you cannot access providers after the app bootstraps).
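The usual way around that reboot problem is to fetch the config first and only then bootstrap Angular manually; a sketch assuming a /services/env endpoint and an app module defined elsewhere:

// No ng-app attribute in the markup; bootstrap by hand once the config is known.
fetch('/services/env')
  .then(function (res) { return res.json(); })
  .then(function (config) {
    angular.module('app').constant('CONFIG', config);
    angular.bootstrap(document, ['app']);
  });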
Another constraint: the second option is useless if you serve static HTML and cannot modify it on the server before sending it to the client.
Maybe a better option would be to combine both variants. This should solve some problems for returning users, but the first interaction will not be very pleasant anyway. I have not tried this yet.
I'm quite new to JayData, so this may sound like a stupid question.
I've read the OData server tutorial here: http://jaydata.org/blog/install-your-own-odata-server-with-nodejs-and-mongodb - it is very impressive that one can set up an OData provider just like that. However the tutorial did not go into details about how to customize the provider.
I'd be interested in seeing how I can set it up with a custom database and how I can add a layer of authentication/authorization to the OData server. What I mean is, not every user may have permissions to every entity and not every user has the permission to add new entities.
How would I handle such use cases with JayData?
Thanks in advance for your answers!
UPDATE:
Here are two posts that will get you started:
How to use the odata-server npm module
How to set up authentication/authorization
The $data.createODataServer method frequently used in the posts is a convenience method that hides the connect/express pipeline from you. To interact with the pipeline, examine the method body of the $data.createODataServer function found in the node_modules/odata-server folder.
Disregard text below
Authentication must be solved with the connect pipeline; there are plenty of middlewares for that.
For authorization, the EntityContext constructor accepts an authorization function that must be promise-aware.
The allow-all authorizer looks like this:
function checkPerm(access, user, entitysets, callback) {
  var pHandler = new $data.PromiseHandler();
  var clbWrapper = pHandler.createCallback(callback);
  var pHandlerResult = pHandler.getPromise();
  clbWrapper.success(true); // this grants a joker rw permission to everyone
  // consult user, entitySet and access to decide on success/error
  // since you return a promise you can call async stuff (will not be fast though)
  return pHandlerResult;
}
I have to consult with one of the team members on the syntax that lets you pass this into the build-up process, but I can confirm this is doable and is supported. I'll get back with the answer ASAP.
Having authenticated the user, you can also use EntityContext-level events to intercept read/update/create/delete operations:
$data.EntityContext.extend({
  MySet: {
    type: $data.EntitySet,
    elementType: Foobar,
    beforeDelete: function (items) {
      // if delete was in batch you'll get multiple items
      // check items here, access this.request.user
      return false; // deny access
    }
  }
});
And there is a declarative way: you can annotate role names with permissions on entity sets. This requires that your user object actually has a roles field with an array of role names.
I too have been researching OData recently, and as we develop our platform in both Node and C#, we naturally looked at JayStorm. From my understanding of the technical details of JayStorm, the whole capability of Connect and Express is available to make this possible. We use Restify to provide the private API of our platform, and there we have written numerous middleware modules for exactly this case.
We are using JayData for our OData service layer also, and I have implemented very simple basic authentication with it.
Since JayData uses Express, we can leverage Express's features. For basic auth, the simplest way is:
var c = require('connect'); // assuming `c` is the connect module
app.use(c.session({ secret: 'session key' }));
// Authenticator
app.use(c.basicAuth('admin', 'admin'));
app.use("/odata.svc", $data.JayService.OData.Utils.simpleBodyReader());
You can also refer to this article for more detail on authentication with Express: http://blog.modulus.io/nodejs-and-express-basic-authentication
Thanks.
I wrote that blogpost, I work for JayData.
What do you mean by custom database?
We have written a middleware for authentication and authorization but it is not open source. We might release it later.
We have a service called JayStorm; it has a free version, maybe that is good for you.
We probably will release an appliance version of it.