SailsJS override model methods - javascript

I am adding caching (Redis) to my project and would prefer to put it in the model logic rather than in the controllers. To do that I would need to override the model methods and add the caching logic there.
I know I can override certain methods like find and findOne, but I'm not sure what to return.
Example (pseudo):
findOne: function (criteria, cb) {
    var key = /* derive a cache key from criteria */;
    cache.get(key, function (err, data) {
        if (data === null) {
            // No cached data: fetch it from the database,
            // cache it, and return it
        } else {
            // Return the cached data
        }
    });
}
The problem is that these model methods don't just return the data; they return an instance of the model itself (for chaining).
I'm not really sure how to return the data, or how to fetch it if it isn't already cached. Has anyone done anything like this?
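For context, the generic cache-aside pattern being described, independent of Waterline's internals, looks roughly like this (a sketch only; the node_redis-style client and the fetchFromDb callback are assumptions, not part of the question):
// Generic cache-aside helper (sketch). Assumes a node_redis-style client
// whose get/set take callbacks; key derivation and fetchFromDb are hypothetical.
function cachedFindOne(redisClient, key, fetchFromDb, cb) {
    redisClient.get(key, function (err, cached) {
        if (err) return cb(err);
        if (cached !== null) return cb(null, JSON.parse(cached)); // cache hit
        fetchFromDb(function (err, record) {                      // cache miss
            if (err) return cb(err);
            redisClient.set(key, JSON.stringify(record), function () {
                cb(null, record);
            });
        });
    });
}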

Caching is something we'd love to have in Waterline, but at the moment the only way to really get what you want is to create your own adapter. Overriding find and findOne is not really feasible at this point, as there's no good way to access the underlying "parent" methods in case your cache turns up empty and you want to proceed with the query.
In your case, forking one of the existing adapters (like sails-mysql) to add Redis caching would probably be more constructive than starting from scratch. If one could add the caching layer as a separate, installable module (i.e. a dependency) of the forked adapter, it would be easier to replicate the functionality across other adapters, and eventually roll it into the adapter spec itself. If anyone felt like tackling this it would be a great contribution! You might also ask in the Sails IRC channel (irc://irc.freenode.net/sailsjs) to see if anyone's already working on something similar.

Related

Return from Model.update()

I was checking Sequelize's examples and documentation the other day, and I came across:
Albums.update(myAlbumDataObject, {
    where: { id: req.params.albumId },
    returning: true // or: returning: ['*'], depending on version
});
I was very excited when I saw this: a good way to avoid quite a few lines of code. What I was doing before was getting the object with Model.findOne(), re-setting every field to its new value, and invoking the instance method .save(), instead of using a static method like that.
Needless to say, I was quite happy and satisfied to see that such a static method existed. What was disappointing, however, was learning that the method only returns the updated instances if you're running Sequelize with PostgreSQL.
Very sad to learn that, as I'm using MySQL.
The method does issue a SQL statement containing the proper UPDATE string, but that's it. I don't know whether it hit anything, and I don't have a copy of the updated data to return.
It turns out I need a Model.findOne() first, to know whether an object exists with that id (and/or other filtering parameters), then Model.update() to issue the update, and finally Model.findByPk() to return the updated model to the layer above (all of it inside a transaction, naturally). That's too much code!
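Sketched out, that multi-step workaround looks roughly like this (illustrative only; it reuses the Albums example above and assumes a sequelize instance is in scope):
const result = await sequelize.transaction(async (t) => {
    const album = await Albums.findOne({
        where: { id: req.params.albumId },
        transaction: t
    });
    if (!album) throw new Error('Album not found'); // update() alone won't tell you this on MySQL
    await Albums.update(myAlbumDataObject, {
        where: { id: req.params.albumId },
        transaction: t
    });
    // Re-read to hand the updated row to the layer above
    return Albums.findByPk(req.params.albumId, { transaction: t });
});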
Also, during the update, if a UniqueConstraintError exception is thrown (which can be quite common), its errors[] array carries no valid model instance; it's just undefined, which complicates matters if you want details about what happened and/or want to throw custom error messages defined inside the models.
My questions are: Are there workarounds out there better than those I'm already implementing? Any Sequelize plugins that may give me that with MySQL? Any Sequelize beta code that can give me that? Is there any effort on the part of the Sequelize dev team to give us that? I'd appreciate any help.
I'm running Sequelize version 6.7.0, with Node.js v14.17.5.
P.S.: I even realized just now that the static Model.update() under MySQL will happily "update" something that doesn't exist without complaining about it.

Using ES6 Proxy to lazily load resources

I am building something like an ActiveRecord class for documents stored in MongoDB (akin to Mongoose). I have two goals:
Intercept all property setters on a document using a Proxy, and automatically create an update query to be sent to Mongo. I've already found a solution for this problem on SO.
Prevent unnecessary reads from the database. That is, if a function operates on a document and only ever sets properties, never reading an existing property of the document, then I don't need to read the document from the database; I can update it directly. However, if the function uses any of the document's properties, I'd have to read it from the database first, and only then continue with the code. Example:
// Don't load the document yet, wait for a property 'read'.
const order = new Order({ id: '123abc' });
// Set property.
order.destination = 'USA';
// No property 'read', Order class can just directly send a update query to Mongo ({ $set: { destination: 'USA' } }).
await order.save();
// Don't load the document yet, wait for a property 'read'.
const order = new Order({ id: '123abc' });
// Read 'weight' from the order object/document and then set 'shipmentCost'.
// Now that a 'get' operation is performed, Proxy needs to step in and load the document '123abc' from Mongo.
// 'weight' will be read from the newly-loaded document.
order.shipmentCost = order.weight * 4.5;
await order.save();
How would I go about this? It seems pretty trivial: set a 'get' trap on the document object. If it's the first-ever property 'get', load the document from Mongo and cache it. But how do I fit an async operation into a getter?
arithmetic cannot be async
You can probably initiate an async read from within a getter (I haven't tried it, but it seems legit), but the getter can't wait for the result. So, unless your DB library provides some blocking access calls, this line, where order.weight is fetched just in time and the value used in multiplication, will always be pure fantasy in any lazy-read regime:
order.shipmentCost = order.weight * 4.5
(If your DB library does have blocking reads, I think it will be straightforward to build what you want by using only blocking reads. Try it. I think this is part of what Sequelize's dataLoader does.)
There's no way for multiplication to operate on Promises, and there's no way to await an async value from code that isn't itself async. Even Events, which are not strictly async/await, would require some async facade or a callback pattern, neither of which is blocking, and so neither of which could make that statement work.
This could work, but it forces every caller to manage lazy-loading:
order.shipmentCost = (await order.weight) * 4.5
That approach will deform your whole ecosystem. It would be much better for callers to simply invoke read & save when needed.
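For illustration, here is a rough sketch of that explicit read-and-save design; the Order class, its read/save methods, and the orders collection handle are assumptions for the example, not an existing API:
// `orders` is assumed to be a MongoDB collection handle
// (e.g. db.collection('orders') from the official mongodb driver).
class Order {
    constructor(id) {
        this._id = id;
        this._doc = null;      // cached document, loaded only by an explicit read()
        this._updates = {};    // pending fields for a $set update
        return new Proxy(this, {
            set(target, prop, value) {
                if (typeof prop !== 'string' || prop.startsWith('_')) {
                    target[prop] = value;                      // internals
                    return true;
                }
                target._updates[prop] = value;                 // record the write
                if (target._doc) target._doc[prop] = value;    // keep cache in sync
                return true;
            },
            get(target, prop) {
                if (typeof prop !== 'string' || prop in target) {
                    return target[prop];                       // internals, methods, symbols
                }
                if (!target._doc) {
                    throw new Error('Document not loaded; call "await order.read()" first');
                }
                return target._doc[prop];
            }
        });
    }
    async read() {             // explicit, awaitable load
        if (!this._doc) this._doc = await orders.findOne({ _id: this._id });
        return this;
    }
    async save() {             // flush pending writes; no prior read required
        if (Object.keys(this._updates).length > 0) {
            await orders.updateOne({ _id: this._id }, { $set: this._updates });
            this._updates = {};
        }
    }
}
Callers then write await order.read() before any code path that reads a property, which keeps the async boundary visible instead of trying to hide it behind a getter.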
Or you might be able to create a generator that works inside getters, but you'd still need to explicitly "prime the pump" for every property's first access, which would make the "fantasy" statement work at the cost of spawning a horrific pre-statement that awaits instead. Again, better to just use read and save.
I think what you're hoping for is impossible within JavaScript, because blocking and non-blocking behavior is not transparent and cannot be made to be. Any async mechanism will ultimately manifest as not-a-scalar.
You would need to create your own precompiler, like JSX, that could transform fantasy code into async/aware muck.
Serious advice: use an off-the-shelf persistence library instead of growing your own.
The problem space of data persistence is filled with many very hard problems and edge-cases. You'll have to solve more of them than you think.
Unless your entire project is "build better persistence tech," you're not going to build something better than what is out there, which means building your own is just the slowest way to get an inferior solution.
More code you write is more bugs to fix. (And you're writing tests for this magic persistence library, right?)
If you're trying to build a real app and you just need to interface with Mongo, spend 15 minutes shopping on npm and move on. Life is too short. Nobody will care how "cool" your hand-rolled database layer is when it's almost like ActiveRecord (except for some opinionated customizations, bugs, and missing features -- all of which will act as a barrier to others and even to yourself).

JavaScript Publish / Subscribe Pattern: Showing Chain of Events?

Say I am using a pub/sub pattern with highly modularized code. When a function in one module sends out a 'publish', how do I make clear which functions in other modules are subscribing to that trigger?
For example:
// In module 1
function foo(data) {
    publish("trigger", data);
}

// In module 2
function bar(data) {}
subscribe("trigger", bar);

// In module 3
function baz(data) {}
subscribe("trigger", baz);
After reading module 1 and seeing that a 'publish' is being sent out, how would someone know where to look in my code for the subscribed callbacks?
An obvious solution might be to comment what modules contain functions that subscribe to the trigger, but that seems an impractical solution when dealing with a large number of publishes / subscribers.
I feel like I'm not fully understanding how to use the pub/sub pattern, since to me, the pattern seems to have no transparency whatsoever regarding function chains.
EDIT
My question pertains to making my code clear and easy to understand for someone reading the source. I understand that at runtime I could programmatically find the list of subscribers by accessing the array of stored callbacks, but that does nothing to make the raw source code easier to understand.
For example, I currently use a pattern like this:
// Main Controller Module
function foo(data) {
    module2.bar(data);
    module3.baz(data);
}

// In module 2
function bar(data) {}

// In module 3
function baz(data) {}
For starters, what is the proper term for this pattern? I thought it was the 'mediator' pattern, but looking here, it seems a mediator is more like what I thought pub/sub was?
With this pattern I feel the flow of my code is completely transparent. The reader doesn't need to dig around to find out what functions in other modules foo() might call.
But with the pub/sub pattern, once I send out the publish from foo(), it's like the reader has to somehow find the modules where the subscribed functions are.
But of course the downside of the above pattern is heavy dependency: module 1 needs both module 2 and 3 injected before it can call bar() and baz().
So I want to adopt the loose coupling of the pub/sub pattern, but I also want to keep the function flow transparency that the above pattern gives me. Is this possible? Or is this just the inherent trade-off of a pub/sub pattern?
I thought the whole idea of publish/subscribe (or mediator) is to loosely couple objects. Object1 doesn't need to know who does what; it is only concerned with doing its own thing and notifying whoever is interested that it's done.
I register listeners only in a controller class, not all over the code. When the controller needs to add or remove listeners, break your process up into steps that inform the controller first (create appropriate events for it).
For example:
We fetch data with XHR.
Based on the data we create processors; processors are created with a factory.
Processors process data.
Data is displayed.
Process is finished.
In your controller you could have:
var Controller = {
    // fetch information and display it
    fetch : function (paramObj) {
        var subscribeIds = [];
        // Having xhr listen to "fetch" could be done in an init function instead;
        // no need to add and remove it every time, but make a note here
        // that it's registered in init and is part of this process.
        subscribeIds.push(Mediator.subscribe(xhr.fetch, "fetch"));
        // xhr will trigger "dataFetched"
        subscribeIds.push(Mediator.subscribe(Controller.initProcessor, "dataFetched"));
        // Controller will trigger "displayFetched"
        subscribeIds.push(Mediator.subscribe(dom.displayFetched, "displayFetched"));
        subscribeIds.push(Mediator.subscribe(Controller.displayedFetched, "displayedFetched"));
        paramObj.subscribeIds = subscribeIds;
        Mediator.trigger("fetch", paramObj);
    },
    initProcessor : function (paramObj) {
        var processor = Processor.make(paramObj.data.type);
        paramObj.html = processor.process(paramObj.data);
        Mediator.trigger("displayFetched", paramObj);
    },
    displayedFetched : function (paramObj) {
        // You can decide the process is done here, or take other steps
        // based on paramObj.
        // You can unsubscribe listeners or leave them; if you leave them,
        // they should not be registered in the fetch function but rather
        // in an init function of Controller, with a comment saying
        // "basic fetch procedure".
        Controller.cleanupListeners(paramObj.subscribeIds);
    },
    cleanupListeners : function (listenerIds) {
        Mediator.unSubscribe(listenerIds);
    }
};
The code looks more complicated than it needs to be. Someone looking at it may wonder: why not let XHR make a Processor instance and tell it to process? The reason is that the Controller literally controls the flow of the application; if you want other things to happen in between, you can add them. As your application grows you'll add more and more processes, and sometimes refactor functions to do less specific things so they can be better reused. Instead of possibly having to change your code in several files, you now only redefine the process(es) in the Controller.
So to answer your question as to where to find the listeners and where events are registered: in the controller.
If you have a single Mediator object you can have it dump its listeners at any time; just write a dump method that will console.log the event names and each callback's toString().
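For illustration only, a minimal Mediator with the interface used above (subscribe returns an id, unSubscribe takes an array of ids, dump logs the registered listeners) could look roughly like this:
var Mediator = (function () {
    var listeners = {};   // event name -> { id: callback }
    var nextId = 0;
    return {
        subscribe: function (callback, eventName) {
            var id = ++nextId;
            (listeners[eventName] = listeners[eventName] || {})[id] = callback;
            return id;
        },
        trigger: function (eventName, paramObj) {
            var subs = listeners[eventName] || {};
            Object.keys(subs).forEach(function (id) { subs[id](paramObj); });
        },
        unSubscribe: function (ids) {
            ids.forEach(function (id) {
                Object.keys(listeners).forEach(function (eventName) {
                    delete listeners[eventName][id];
                });
            });
        },
        dump: function () {
            Object.keys(listeners).forEach(function (eventName) {
                Object.keys(listeners[eventName]).forEach(function (id) {
                    console.log(eventName, listeners[eventName][id].toString());
                });
            });
        }
    };
})();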

Durandal (knockout) app with multilanguage support

I am building multilingual support for the app I'm working on. After doing some research and reading SO (internationalization best practice) I am trying to integrate that in a 'framework-friendly' way.
What I have done at the moment is following:
Created .resource modules formatted like so:
resources.en-US.js
define(function () {
    return {
        helloWorldLabelText: "Hello world!"
    };
});
On app start I fetch the resource module with RequireJS and assign all of its data to app.resources. Inside each module, specific resources are assigned to observables and bound with the text binding to labels and other text-related things. Like so:
define(function (require) {
    var app = require('durandal/app'),
        router = require('durandal/plugins/router'),
        ko = require('knockout');
    return {
        helloWorldLabelText: ko.observable(app.resources.helloWorldLabelText),
        canDeactivate: function () {
        }
    };
});
On the view:
<label for="hello-world" data-bind="text: helloWorldLabelText"></label>
The resources are swapped just by assigning a new module to app.resources.
Now the problem is that when the language is changed and some of the views have already been rendered, the values from the previous language are still there. So I ended up reassigning the observables inside the activate method. I also tried wrapping app.resources in an observable, but that didn't work either.
I don't think I ended up with the cleanest approach; maybe somebody else has another way they could share. Thanks.
For those who are still confused about best practices, those who feel that something is lacking, or those who are simply curious about how to implement things in a better way with regard to Durandal, Knockout, RequireJS, and client-side web applications in general, here is an attempt at a more useful overview of what's possible.
This is certainly not complete, but hopefully this can expand some minds a little bit.
First, Nov 2014 update
I see this answer keeps being upvoted regularly even a year later. I hesitated to update it multiple times as I further developed our particular solution (integrating i18next to Durandal/AMD/Knockout). However, we eventually dropped the dependent project because of internal difficulties and "concerns" regarding the future of Durandal and other parts of our stack. Hence, this little integration work was canceled as well.
That being said, I hopefully distinguished the generally applicable remarks from the specific ones below well enough, so I think they still offer useful (perhaps even much-needed) perspectives on the matter.
If you're still looking to play with Durandal, Knockout, AMD and an arbitrary localization library (there are some new players to evaluate, by the way), I've added a couple of notes from my later experiences at the end.
On the singleton pattern
One problem with the singleton pattern here is that it's hard to configure per-view; indeed there are other parameters to the translations than their locale (counts for plural forms, context, variables, gender) and these may themselves be specific to certain contexts (e.g. views/view models).
By the way it's important that you don't do this yourself and instead rely on a localization library/framework (it can get really complex). There are many questions on SO regarding these projects.
You can still use a singleton, but either way you're only halfway there.
On knockout binding handlers
One solution, explored by zewa666 in another answer, is to create a KO binding handler. One could imagine this handler taking these parameters from the view, then using any localization library as backend. More often than not, you need to change these parameters programmatically in the viewmodel (or elsewhere), which means you still need to expose a JS API.
If you're exposing such an API anyway, then you may use it to populate your view model and skip the binding handlers altogether. However, they're still a nice shortcut for those strings that can be configured from the view directly. Providing both methods is a good thing, but you probably can't do without the JS API.
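As a rough sketch (not from any existing plugin), such a binding handler could look like this, assuming i18next as the backend; the handler name and parameter shape are made up for the example:
ko.bindingHandlers.i18n = {
    update: function (element, valueAccessor) {
        // Accept either a plain key ('home.label') or an object { key: ..., options: ... }
        var value = ko.unwrap(valueAccessor());
        var key = typeof value === 'string' ? value : ko.unwrap(value.key);
        var opts = (value && value.options) ? ko.toJS(value.options) : undefined;
        element.textContent = i18next.t(key, opts); // i18next assumed to be required/loaded elsewhere
    }
};
// Usage in a view: <span data-bind="i18n: 'home.label'"></span>
Because the update callback re-runs whenever an observable it reads changes, the element is retranslated automatically when the bound key or options change.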
Current Javascript APIs, document reloading
Most localization libraries and frameworks are pretty old-school, and many of them expect you to reload the entire page whenever the user changes the locale, sometimes even when translation parameters change, for various reasons. Don't do it, it goes against everything a client-side web application stands for. (SPA seems to be the cool term for it these days.)
The main reason is that otherwise you would need to track each DOM element that you need to retranslate every time the locale changes, and which elements to retranslate every time any of their parameters change. This is very tedious to do manually.
Fortunately, that's exactly what data binders like knockout make very easy to do. Indeed, the problem I just stated should remind you of what KO computed observables and KO data-bind attributes attempt to solve.
On the RequireJS i18n plugin
The plugin both uses the singleton pattern and expects you to reload the document. That's a no-go for use with Durandal.
You could still make it work, but it's not efficient, and you may needlessly run into problems depending on how complex your application state is.
Integration of knockout in localization libraries
Ideally, localization libraries would support knockout observables so that whenever you pass them an observable string to translate with observable parameters, the library gives you an observable translation back. Intuitively, every time the locale, the string, or the parameters change, the library modifies the observable translation, and should they be bound to a view (or anything else), the view (or whatever else) is dynamically updated without requiring you to do anything explicitly.
If your localization library is extensible enough, you may write a plugin for it, or ask the developers to implement this feature, or wait for more modern libraries to appear.
I don't know of any right now, but my knowledge of the JS ecosystem is pretty limited. Please do contribute to this answer if you can.
Real world solutions for today's software
Most current APIs are pretty straightforward; take i18next for example. Its t (translate) method takes a key for the string and an object containing the parameters. With a tiny bit of cleverness, you can get away with it without extending it, using only glue code.
translate module
define(function (require) {
    var ko = require('knockout');
    var i18next = require('i18next');
    var locale = require('locale');
    return function (key, opts) {
        return ko.computed(function () {
            locale();
            var unwrapped = {};
            if (opts) {
                for (var optName in opts) {
                    if (opts.hasOwnProperty(optName)) {
                        var opt = opts[optName];
                        unwrapped[optName] = ko.isObservable(opt) ? opt() : opt;
                    }
                }
            }
            return i18next.t(key, unwrapped);
        });
    };
});
locale module
define(function (require) { return require('knockout').observable('en'); });
The translate module is a translation function that supports observable arguments and returns an observable (as per our requirements), and essentially wraps the i18next.t call.
The locale module is an observable object containing the current locale used globally throughout the application. We define the default value (English) here; you may of course retrieve it from the browser API, local storage, cookies, the URI, or any other mechanism.
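For example, a variant of the locale module that seeds the default from local storage or the browser could look like this (a sketch; the storage key is arbitrary):
define(function (require) {
    var ko = require('knockout');
    // Prefer a previously chosen locale, then the browser language, then 'en'.
    var initial = window.localStorage.getItem('locale')
        || (navigator.language || 'en').split('-')[0];
    return ko.observable(initial);
});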
i18next-specific note: AFAIK, the i18next.t API doesn't have the ability to take a specific locale per translation: it always uses the globally configured locale. Because of this, we must change this global setting by other means (see below) and place a dummy read to the locale observable in order to force knockout to add it as a dependency to the computed observable. Without it, the strings wouldn't be retranslated if we change the locale observable.
It would be better to be able to explicitly define dependencies for knockout computed observables by other means, but I don't know that knockout currently provides such an API either; see the relevant documentation. I also tried using an explicit subscription mechanism, but that wasn't satisfactory since I don't think it's currently possible to trigger a computed to re-run explicitly without changing one of its dependencies. If you drop the computed and use only manual subscription, you end up rewriting knockout itself (try it!), so I prefer to compromise with a computed observable and a dummy read. However bizarre that looks, it might just be the most elegant solution here. Don't forget to warn about the dragons in a comment.
The function is somewhat basic in that it only scans the first-level properties of the options object to determine if they are observable and if so unwraps them (no support for nested objects or arrays). Depending on the localization library you're using, it will make sense to unwrap certain options and not others. Hence, doing it properly would require you to mimic the underlying API in your wrapper.
I'm including this as a side note only because I haven't tested it, but you may want to use the knockout mapping plugin and its toJS method to unwrap your object, which looks like it might be a one-liner.
Here is how you can initialize i18next (most other libraries have a similar setup procedure), for example from your RequireJS data-main script (usually main.js) or your shell view model if you have one:
var ko = require('knockout');
var i18next = require('i18next');
var locale = require('locale');

i18next.init({
    lng: locale(),
    getAsync: false,
    resGetPath: 'app/locale/__ns__-__lng__.json',
});

locale.subscribe(function (value) {
    i18next.setLng(value, function () {});
});
This is where we change the global locale setting of the library when our locale observable changes. Usually, you'll bind the observable to a language selector; see the relevant documentation.
i18next-specific note: If you want to load the resources asynchronously, you will run in a little bit of trouble due to the asynchronous aspect of Durandal applications; indeed I don't see an obvious way to wrap the rest of the view models setup code in a callback to init, as it's outside of our control. Hence, translations will be called before initialization is finished. You can fix this by manually tracking whether the library is initialized, for example by setting a variable in the init callback (argument omitted here). I tested this and it works fine. For simplicity here though, resources are loaded synchronously.
i18next-specific note: The empty callback to setLng is an artifact from its old-school nature; the library expects you to always start retranslating strings after changing the language (most likely by scanning the DOM with jQuery) and hence the argument is required. In our case, everything is updated automatically, we don't have to do anything.
Finally, here's an example of how to use the translate function:
var _ = require('translate');
var n_foo = ko.observable(42);
var greeting = _('greeting');
var foo = _('foo', { count: n_foo });
You can expose these variables in your view models, they are simple knockout computed observables. Now, every time you change the locale or the parameters of a translation, the string will be retranslated. Since it's observable, all observers (e.g. your views) will be notified and updated.
var locale = require('locale');
locale('en_US');
n_foo(1);
...
No document reload necessary. No need to explicitly call the translate function anywhere. It just works.
Integration of localization libraries in knockout
You may attempt to make knockout plugins and extenders to add support for localization libraries (besides custom binding handlers), however I haven't explored the idea, so the value of this design is unknown to me. Again, feel free to contribute to this answer.
On Ecmascript 5 accessors
Since these accessors are carried with the objects properties everywhere they go, I suspect something like the knockout-es5 plugin or the Durandal observable plugin may be used to transparently pass observables to APIs that don't support knockout. However, you'd still need to wrap the call in a computed observable, so I'm not sure how much farther that gets us.
Yet again, this is not something I looked at a lot, contributions welcome.
On Knockout extenders
You can potentially leverage KO extenders to augment normal observables to translate them on the fly. While this sounds good in theory, I don't think it would actually serve any kind of purpose; you would still need to track every option you pass to the extender, most likely by manually subscribing to each of them and updating the target by calling the wrapped translation function.
If anything, that's merely an alternative syntax, not an alternative approach.
Conclusion
It feels like there is still a lot lacking, but with a 21-line module I was able to add support for an arbitrary localization library to a standard Durandal application. For an initial time investment, I guess it could be worse. The most difficult part is figuring it out, and I hope I've done a decent job of accelerating that process for you.
In fact, while doing it right may sound a little complicated (well, what I believe is the right way anyway), I'm pretty confident that techniques like these make things globally simpler, at least in comparison to all the trouble you'd get from trying to rebuild state consistently after a document reload, or from manually tracking all translated strings without Knockout. Also, it is definitely more efficient (UX can't be smoother): only the strings that need to be retranslated are retranslated, and only when necessary.
Nov 2014 notes
After writing this post, we merged the i18next initialization code and the code from the translate module into a single AMD module. This module had an interface intended to mimic the rest of the interface of the stock i18next AMD module (though we never got past the translate function), so that the "KO-ification" of the library would be transparent to applications (except for the fact that it now recognized KO observables and took the locale observable singleton in its configuration, of course). We even managed to reuse the same "i18next" AMD module name with some require.js paths trickery.
So, if you still want to do this integration work, you may rest assured that this is possible, and eventually it seemed like the most sensible solution to us. Keeping the locale observable in a singleton module also turned out to be a good decision.
As for the translation function itself, unwrapping observables using the stock ko.toJS function was indeed far easier.
i18next.js (Knockout integration wrapper)
define(function (require) {
    'use strict';
    var ko = require('knockout');
    var i18next = require('i18next-actual');
    var locale = require('locale');
    var namespaces = require('tran-namespaces');
    var Mutex = require('komutex');
    var mutex = new Mutex();
    mutex.lock(function (unlock) {
        i18next.init({
            lng: locale(),
            getAsync: true,
            fallbackLng: 'en',
            resGetPath: 'app/locale/__lng__/__ns__.json',
            ns: {
                namespaces: namespaces,
                defaultNs: namespaces && namespaces[0],
            },
        }, unlock);
    });
    locale.subscribe(function (value) {
        mutex.lock(function (unlock) {
            i18next.setLng(value, unlock);
        });
    });
    var origFn = i18next.t;
    i18next.t = i18next.translate = function (key, opts) {
        return ko.computed(function () {
            return mutex.tryLockAuto(function () {
                locale();
                return origFn(key, opts && ko.toJS(opts));
            });
        });
    };
    return i18next;
});
require.js path trickery (OK, not that tricky)
requirejs.config({
    paths: {
        'i18next-actual': 'path/to/real/i18next.amd-x.y.z',
        'i18next': 'path/to/wrapper/above',
    }
});
The locale module is the same singleton presented above; the tran-namespaces module is another singleton that contains the list of i18next namespaces. These singletons are extremely handy not only because they provide a very declarative way of configuring these things, but also because they allow the i18next wrapper (this module) to be entirely self-initialized. In other words, user modules that require it never have to call init.
Now, initialization takes time (might need to fetch some translation files), and as I already mentioned a year ago, we actually used the async interface (getAsync: true). This means that a user module that calls translate might in fact not get the translation directly (if it asks for a translation before initialization is finished, or when switching locales). Remember, in our implementation user modules can just start calling i18next.t immediately without waiting for a signal from the init callback explicitly; they don't have to call it, and thus we don't even provide a wrapper for this function in our module.
How is this possible? Well, to keep track of all this, we use a "Mutex" object that merely holds a boolean observable. Whenever that mutex is "locked", it means we're initializing or changing locales, and translations shouldn't go through. The state of that mutex is automatically tracked in the translate function by the KO computed observable function that represents the (future) translation and will thus be re-executed automatically (thanks to the magic of KO) when it changes to "unlocked", whereupon the real translate function can retry and do its work.
It's probably more difficult to explain than it is to actually understand (as you can see, the code above is not overly long), feel free to ask for clarifications.
Usage is very easy though; just var i18next = require('i18next') in any module of your application, then call i18next.t at any time. Just like the initial translate function, you may pass observables as arguments (which has the effect of retranslating that particular string automatically every time such an argument changes) and it will return an observable string. In fact, the function doesn't use this, so you may safely assign it to a convenient variable: var _ = i18next.t.
By now you might be looking up komutex on your favorite search engine. Well, unless somebody had the same idea, you won't find anything, and I don't intend to publish that code as it is (I couldn't do that without losing all my credibility ;)). The explanation above should contain all you need to implement the same kind of thing without this module, though it clutters the code with concerns I'm personally inclined to extract into dedicated components, as I did here. Toward the end, we weren't even 100% sure that the mutex abstraction was the right one, so even though it might look neat and simple, I advise that you put some thought into how to extract that code (or simply into whether to extract it at all).
More generally, I'd also advise you to seek out other accounts of such integration work, as it's unclear whether these ideas will age well (a year later, I still believe this "reactive" approach to localization/translation is absolutely the right one, but that's just me). Maybe you'll even find more modern libraries that do what you need out of the box.
In any case, it's highly unlikely that I'll revisit this post again. Again, I hope this little(!) update is as useful as the initial post seems to be.
Have fun!
I was quite inspired by the answers on SO regarding this topic, so I came up with my own implementation of an i18n module + binding for Knockout/Durandal.
Take a look at my github repo
The reason for yet another i18n module is that I prefer storing translations in databases (whichever type is required per project) instead of files. With that implementation you simply need a backend that replies with a JSON object containing all your translations in a key-value manner.
#RainerAtSpirit
The tip about the singleton was very helpful for the module.
You might consider having one i18n module that returns a singleton with all the required observables, plus an init function that takes an i18n object to initialize/update them.
define(function (require) {
    var app = require('durandal/app'),
        i18n = require('i18n'),
        router = require('durandal/plugins/router');
    return {
        i18n: i18n, // expose the singleton so the view can bind to it
        canDeactivate: function () {
        }
    };
});
On the view:
<label for="hello-world" data-bind="text: i18n.helloWorldLabelText"></label>
Here is an example repo made using i18next, Knockout.Punches, and Knockout 3 with Durandal:
https://github.com/bestguy/knockout3-durandal-i18n
This allows for Handlebars/Angular-style embeds of localized text via an i18n text filter backed by i18next:
<p>
{{ 'home.label' | i18n }}
</p>
also supports attribute embeds:
<h2 title="{{ 'home.title' | i18n }}">
{{ 'home.label' | i18n }}
</h2>
And also lets you pass parameters:
<h2>
{{ 'home.welcome' | i18n:name }}
<!-- Passing the 'name' observable, will be embedded in text string -->
</h2>
JSON example:
English (en):
{
    "home": {
        "label": "Home Page",
        "title": "Type your name…",
        "welcome": "Hello {{0}}!"
    }
}
Chinese (zh):
{
    "home": {
        "label": "家",
        "title": "输入你的名字……",
        "welcome": "{{0}}您好!"
    }
}

Can I make Rails' CookieStore use JSON under the hood?

I feel like this should be obvious from reading the documentation, but maybe somebody can save me some time. We are using Rails' CookieStore, and we want to share the cookie with another server that is part of our website and uses WCF. We're already b64-decoding the cookie and we are able to validate the signature (by sharing the secret token), all of that is great... but of course the session object is marshalled as a Ruby object, and it's not clear what the best way to proceed is. We could probably have the WCF application call out to Ruby and have it unmarshal the object and write it out as JSON, but that seems like it would add an unnecessary layer of complexity to the WCF server.
What I'd really like to do is maybe subclass CookieStore, so that instead of just b64 encoding the session object, it writes the object to JSON and then b64's it. (And does the reverse on the way back in, of course) That way, the session token is completely portable, I don't have to worry about Ruby version mismatches, etc. But I'm having trouble figuring out where to do that. I thought it would be obvious if I pulled up the source for cookie_store.rb, but it's not (at least not to me). Anybody want to point me in the right direction?
(Anticipating a related objection: Why the hell do we have two separate servers that need to be so intimately coordinated that they share the session cookie? The short answer: Deadlines.)
Update: So from reading the code, I found that when the MessageVerifier class gets initialized, it looks to see if there is an option for :serializer, and if not it uses Marshal by default. There is already a class called JSON that fulfills the same contract, so if I could just pass that in, I'd be golden.
Unfortunately, the initialize function for CookieStore very specifically only grabs the :digest option to pass along as the options to MessageVerifier. I don't see an easy way around this... If I could get it to just pass along that :serializer option to the verifier_for call, then achieving what I want would literally be as simple as adding :serializer => JSON to my session_store.rb.
Update 2: A co-worker found this, which appears to be exactly what I want. I haven't gotten it to work yet, though... getting a (bah-dump) stack overflow. Will update once again if I find anything worthy of note, but I think that link solves my problem.
