Lazy loading pattern with TypeScript - javascript

With C# and other languages where you have get and set accessors on properties, it's quite simple to build a lazy loading pattern.
I only recently started playing with TypeScript and I'm trying to achieve something similar. I'm loading a POCO with most of its properties via an Ajax call. The problem can be described as below:
export interface IDeferredObject<T> {
    HasLoaded: boolean;
    DeferredURI: string;
}

export class Library {
    LibraryName: string;
    Books: IDeferredObject<Book[]>;
}

export class Book {
    Title: string;
    UniqueNumber: number;
}
window.onload = () => {
    // Instantiate a new Library
    var lib = new Library();
    // Set some properties
    lib.LibraryName = "Some Library";
    // Set the deferred URI REST endpoint
    lib.Books.DeferredURI = "http://somerestendpoint";
    lib.Books.HasLoaded;
    // Something will trigger a GET to lib.Books, which then needs to load using the deferred URI
};
A couple of questions:
Is an interface even the right thing to use here?
I'd like to trigger a load action when something accesses the Library's Books property. Is this even possible?
I know it's quite an open question; I'm just looking for some guidance on how to build that pattern.
Thanks

Books could be declared as a property (see get and set in TypeScript), and in its getter you can start an Ajax call. The only issue is that you cannot know when the Ajax call will return, and you don't want to block code execution while the value for Books is loading. You can use a promise to solve this. The following code uses the jQuery Deferred object that is returned by a $.ajax call (since version 1.5).
One change: HasLoaded and DeferredURI should reside in the "loader" class, i.e. Library, because in this example the Library class is responsible for data retrieval.
export class Library {
    public HasLoaded: boolean = false;
    public DeferredURI: string;
    private _loadedBooks: Book[];

    public get Books(): any {
        var result: any = null;
        // Check first if we've already loaded the values from the remote source;
        // if so, immediately resolve a deferred object with the cached value
        if (this.HasLoaded) {
            result = $.Deferred();
            result.resolve(this._loadedBooks);
        }
        else {
            // Initiate the Ajax call and, when it finishes, cache the value in a private field
            result = $.ajax(this.DeferredURI)
                .done((values) => {
                    this.HasLoaded = true;
                    this._loadedBooks = values;
                });
        }
        return result;
    }
}
Consuming code would then use Books as follows:
var library = new Library();
library.Books.done((books: Book[]) => {
    // do whatever you need with the books collection
});
To me this design is counter-intuitive: a user of this code would not expect the Books field/property to return a deferred object. So I would suggest changing it to a method, something like loadBooksAsync(). That name indicates that the code inside the method is asynchronous, and hints that the return value is a deferred object whose value becomes available later.
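A minimal sketch of that variant (the same body as the getter above, just exposed as a method; loadBooksAsync is the suggested name, not an existing API):
export class Library {
    public HasLoaded: boolean = false;
    public DeferredURI: string;
    private _loadedBooks: Book[];

    // the "Async" suffix signals that a deferred object is returned
    public loadBooksAsync(): any {
        if (this.HasLoaded) {
            // resolve() returns the deferred itself, already resolved
            return $.Deferred().resolve(this._loadedBooks);
        }
        return $.ajax(this.DeferredURI)
            .done((values) => {
                this.HasLoaded = true;
                this._loadedBooks = values;
            });
    }
}
Consumers would then call library.loadBooksAsync().done((books: Book[]) => { ... }) instead of reading a property.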

Is an Interface even the right thing to use here?
Yes. Using interfaces helps keep code loosely coupled in TypeScript, just as it does in other languages.
See also: Programmers: Why are interfaces useful?
I'd like to trigger a Load action when something accesses the Library Books Property. Is this even possible?
Yes, it is possible to implement any loading pattern you like. TypeScript is just JavaScript when it comes to code loading.
But for small to medium-sized applications, loading is usually not addressed at the TypeScript level; it is handled by the module loader and script bundler instead (see the dynamic-import sketch after the links below).
Supporting statement:
https://github.com/SlexAxton/yepnope.js#deprecation-notice
Deprecation Notice
...The authors of yepnope feel that the front-end community now offers better software for loading scripts, as well as conditionally loading scripts. None of the APIs for these new tools are quite as easy as yepnope, but we assure you that it's probably worth your while. We don't officially endorse any replacement, however we strongly suggest you follow an approach using modules, and then package up your application using require.js, webpack, browserify, or one of the many other excellent dependency managed build tools...
See also:
Stack Overflow: How do AMD loaders work under the hood?
Stack Overflow: When to use Requirejs and when to use bundled javascript?
Programmers: Practices for organizing JavaScript AMD imports
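For completeness, here is the sketch referenced above: a minimal illustration of deferring a load to the bundler with a dynamic import, assuming an ES-module setup. The "./booksApi" path and the fetchBooks export are hypothetical:
// With a bundler, "./booksApi" becomes a separate chunk that is only
// fetched over the network the first time loadBooks() is called
async function loadBooks(): Promise<Book[]> {
    const api = await import("./booksApi"); // hypothetical module
    return api.fetchBooks();                // hypothetical export
}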

Related

When should we load the function, if we want to add a function to a prototype using React?

I am trying to add a toCamelCase function to the String prototype in React with TypeScript.
Here is what I did, in a toCamelCase.d.ts file:
interface String {
    toCamelCase(): string;
}

String.prototype.toCamelCase = function (): string {
    return this[0].toUpperCase() + this.slice(1).toLowerCase();
};
Now I am just wondering when, and in which file, I should load this script so that I can get access to it.
I am aware that I can simply define a function and use that.
However, I am curious how to do it with the prototype, and what the downsides of doing things like this would be, if there are any.
Where to do it?
In main.js or index.js, or any other file that bootstraps your application.
What are the downsides?
You can never know whether one of the libraries you use relies on String.prototype being unaltered. Modifying it can cause major issues that you will have a hard time tracking down.
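If you do keep the prototype approach, here is a minimal defensive sketch (the guard only avoids clobbering an existing implementation; it does not remove the risk described above):
// toCamelCase.ts - a module, so `declare global` is allowed
declare global {
    interface String {
        toCamelCase(): string;
    }
}

// only install the method if nothing else has defined it already
if (!String.prototype.toCamelCase) {
    String.prototype.toCamelCase = function (): string {
        return this[0].toUpperCase() + this.slice(1).toLowerCase();
    };
}

export {};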

How to migrate legacy JS app to modules

I have a large (~15k LoC) JS app (namely a NetSuite app) written in the old-style, all-globals way. The app consists of 26 files, and the dependencies between them are totally unclear.
The goal is to gracefully refactor the app into smaller modules. By gracefully I mean not breaking/locking the app for a long time, but doing the refactoring in smaller chunks, with the app remaining usable after each chunk is completed.
An idea I have is to concat all the JS files we have now into a single-file bundle. After that, some code could be extracted into modules, and the legacy code could start importing them. The modules and imports would be transpiled with webpack or whatever, while the legacy code remains all-globals style. Finally, all of this is packed into a single JS file and deployed.
My questions are:
Is there maybe a better approach? This sounds like a typical problem.
Are there any tools available to support my approach?
I gave webpack a try and haven't managed to get what I want out of it. The exports-loader and resolve-loader are not options because of the number of methods/vars that would need to be imported/exported.
Examples
Now the code looks like this:
function someGlobalFunction() {
    ...
}
var myVar = 'something';
// and another 15k lines in 26 files like this
What I would ideally like to achieve is:
function define(...) { /* function to define a module */ }
function require(moduleName) { /* function to import a module */ }

// block with my refactored-out module definitions
define('module1', function () {
    // extracted modularised code goes here
});
define('module2', function () {
    // extracted modularised code goes here
});

// further down goes legacy code, which can import the new modules
var myModule = require('myNewModule');
function myGlobalLegacyFunction() {
    // use myModule
}
I'm following an approach similar to that outlined here: https://zirho.github.io/2016/08/13/webpack-to-legacy/
In brief:
Assume that you can configure webpack to turn something like
export function myFunction() { ... }
into a file bundle.js that a browser understands. In webpack's entry point, you can import everything from your module and assign it to the window object:
// using a namespace import to get everything exported from the file
import * as Utils from './utils'
// injecting every function exported from utils.js into the global scope (window)
Object.assign(window, Utils)
Then, in your HTML, make sure to include the webpack output before the existing code:
<script type="text/javascript" src="bundle.js"></script>
<script type="text/javascript" src="legacy.js"></script>
Your IDE should be able to help you identify the callers of a method as you bring it into a module. As you move a function from legacy.js to myNiceModule.js, check whether it still has callers that reach it globally; if it doesn't, it no longer needs to be globally available.
There is no good answer here so far, and it would be great if the person asking the question came back. I will pose a challenging answer: it cannot be done.
All module techniques end up breaking the sequential execution of scripts in the document header.
All dynamically added scripts are loaded in parallel, and they do not wait for one another. Since you can be certain that almost all such horrible legacy JavaScript code depends on sequential execution, where the second script can depend on the previous one, loading these scripts dynamically can break things.
If you use some module approach (ES modules, require.js, or your own), you need to execute the code that depends on the loading having occurred in a callback or Promise/then block. This destroys the implicit global context, so all those spaghetti coils of global functions and vars we find in legacy JavaScript files will no longer be defined in the global scope.
I have determined that only two tricks could allow a smooth transition:
Either some way to pause continuation of a script block until the import Promise is resolved.
const promise = require("dep1.js", "dep2.js", "dep3.js");
await promise;
// legacy stuff follows
or some way to explicitly revert the scope of a block inside a function to the global scope.
with (window) {
    function foo() { return 123; }
    var bar = 543;
}
But neither wish was granted by the JavaScript fairy.
In fact, I read that even the await keyword essentially just packs the rest of the statements into a function to call when the promise is resolved:
async function() {
    ... aaa makes promise ...
    await promise;
    ... bbb ...
}
is, I suppose, no different from
async function() {
    ... aaa makes promise ...
    promise.then(r => {
        ... bbb ...
    });
}
So this means the only way to fix this is to keep putting the legacy JavaScript statically into head/script elements and slowly move things into modules, while continuing to load them statically.
I am tinkering with my own module style:
(function(scope = {}) {
    var v1 = ...;
    function fn1() { ... }
    var v2 = ...;
    function fn2() { ... }
    return ['v1', 'fn1', 'v2', 'fn2']
        .reduce((r, n) => {
            r[n] = eval(n);
            return r;
        }, scope);
})(window)
By calling this "module" function with the window object, the exported items are put on it, just as legacy code would do.
I gleaned a lot of this from using knockout.js and working with its readable source file, which has everything together in module function calls like this; ultimately all features end up on the "ko" object.
I hate using frameworks and "compilation". As for generating the sequence of HTML script tags that loads everything in the correct order from a topologically sorted dependency tree: I could write such a thing quickly, but I won't, because I do not want any "compilation" step, not even my own.
UPDATE: https://stackoverflow.com/a/33670019/7666635 gives the idea that we can just Object.assign(window, module) which is somewhat similar to my trick passing the window object into the "module" function.

Dojo module loading in general

Coming from Java/C#, I struggle to understand what that actually means for me as a developer (I'm thinking too much in object-oriented terms).
Suppose there are two HTML files that use the same Dojo module via require() in a script tag, like so:
<script type="text/javascript">
    require(["some/module"],
        function(someModule) {
            ...
        }
    );
</script>
I understand that require() loads all the given modules before calling the callback. But is the module loaded for each and every require() in which it appears? So in the sample above, is some/module loaded once and then shared between the two HTML files, or is it loaded twice (i.e. loaded for every require() that lists it in its requirements)?
If the module is loaded only once, can I then share information between the two callbacks? If it is loaded twice, how can I share information between those two callbacks?
The official documentation says "The global function define allows you to register a module with the loader. ". Does that mean that defines are something like static classes/methods?
If you load the module twice in the same window, it will only be loaded once, and the same object will be returned when you request it the second time.
So if you have two separate pages, you have two windows, which means the module will be loaded twice. If you want to share information between them, you will have to store it somewhere (the web is stateless): you could use a back-end service plus a database, or the HTML5 localStorage API or IndexedDB, for example.
If you don't want that, you can always build a single-page application. This means you load multiple pages in one window using JavaScript (asynchronous pages).
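A minimal localStorage sketch of that idea (the "sharedState" key is made up):
// Page A: persist a value so other pages on the same origin can read it
localStorage.setItem("sharedState", JSON.stringify({ count: 42 }));

// Page B: read it back (getItem returns null if nothing was stored yet)
var raw = localStorage.getItem("sharedState");
var shared = raw ? JSON.parse(raw) : { count: 0 };
console.log(shared.count); // 42 if page A ran first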
About your last question: with define() you define modules. A module can be a simple object (which would be similar to a static class, since you don't have to instantiate it), but a module can also be a prototype, which means you will be able to create instances. For example:
define([], function() {
    return {
        "foo": function() {
            console.log("bar");
        }
    };
});
This will return the same single object every time you require it. You can see it as a static class or a singleton: if you require it twice, it returns the same object.
However, you could also write something like this:
define([], function() {
    return function() {
        this.foo = function() {
            console.log("bar");
        };
    };
});
Which means that you're returning a prototype. Using it requires you to instantiate it, for example:
require(["my/Module"], function(Module) {
new Module().foo();
});
Prototyping is a basic feature of JavaScript, but in Dojo there's a module that does that for you, called dojo/_base/declare. You will often see things like this:
define(["dojo/_base/declare"], function(declare) {
return declare([], {
foo: function() {
console.log("bar");
}
});
});
In this case, you will have to load it similarly to a prototype (using the new keyword).
You can find a demo of all this on Plunker.
You might ask how you can tell the difference between a singleton/static-class module and a prototypal module. Well, there's a common naming convention: when a module name starts with a capital letter (for example dijit/layout/ContentPane, dojo/dnd/Moveable, ...) it usually means the module requires instances. When the module name starts with a lowercase letter, it's a static class/singleton (for example dojo/_base/declare, dijit/registry).
1) Dojo's require() loads the module once; if you call require() again for a package that is already loaded, it simply returns it. So the request is made once, and any dependencies are also loaded once.
All of that holds while you stay on the same HTML page. If you leave the page and request the same module from a different page, it will be fetched from the server again. You can also control browser caching in your config settings with the cacheBust flag: set it to true if you want a fresh copy, or false if you want files to be cached (see the dojoConfig sketch at the end of this answer).
2) If you are on the same HTML page and domain and the module hasn't changed, the module is the same, so you can share values; any change you make is visible from anywhere, unless you create a new instance. That is not possible between different HTML pages, though.
3) No, it is not like static classes or methods. As I understand it, a static class is a convenient container for sets of methods that just operate on input parameters and do not need to get or set any internal instance fields.
Dojo works differently: the module is a reference to an object. If you write it this way:
define(function(){
    var privateValue = 0;
    return {
        increment: function(){
            privateValue++;
        },
        decrement: function(){
            privateValue--;
        },
        getValue: function(){
            return privateValue;
        }
    };
});
then every bit of code that loads this module references the same object in memory, so the value is shared throughout every use of the module.
Of course, that is my understanding; please feel free to tell me where I am wrong.
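As a minimal illustration of the cacheBust flag mentioned in point 1 (a sketch; dojoConfig must be declared before dojo.js is loaded):
// must be set before the <script> tag that loads dojo.js
var dojoConfig = {
    // when truthy, a cache-busting query string is appended to every
    // module URL so the browser fetches fresh copies
    cacheBust: true
};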

RequireJs extend module initialize begin and end

I have created a JavaScript library which can be used for logging purposes.
I also want to support the logging of requirejs.
Which functions/events of requirejs can I prototype/wrap so that I can log when a module starts initializing and when it finishes initializing and returns the initialized object?
For instance, if I call require(["obj1", "obj2", "obj3"], function(obj1, obj2, obj3){}),
I would like to know when requirejs begins initializing each of the objects, and when each object is completely initialized.
I looked into the documentation/code but could not find any useful functions I can access on the requirejs object or the require object.
Note: I do not want to change the existing code of requirejs; I wish to append functionality from the outside, by either prototyping or wrapping.
What I have tried (the problem is that this only captures the begin and end of the entire batch of modules):
var oldrequire = require;
require = function (deps, callback, errback, optional) {
    console.log("start");
    var callbackWrapper = function () {
        console.log("end");
        var args = [];
        for (var i = 0; i < arguments.length; i++) {
            args.push(arguments[i]);
        }
        callback.apply(this, args);
    };
    oldrequire.call(this, deps, callbackWrapper, errback, optional);
};
This is a "better than nothing answer", not a definitive answer, but it might help you look in another direction. Not sure if that's good or bad, certainly it's brainstorming.
I've looked into this recently for a single particular module I had to wrap. I ended up writing a second module ("module-wrapper") for which I added a path entry with the name of the original module ("module"). I then added a second entry ("module-actual") that references the actual module which I require() as a dependency in the wrapper.
I can then add code before and after initialization, and finally return the actual module. This is transparent to user modules as well as the actual module, and very clean and straightforward from a design standpoint.
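A minimal sketch of that setup, with made-up module names:
// require.config: point the public module ID at the wrapper and
// give the real file a second, internal ID
require.config({
    paths: {
        "some/module": "wrappers/some-module-wrapper",
        "some/module-actual": "libs/some/module"
    }
});

// wrappers/some-module-wrapper.js
define(["some/module-actual"], function (actual) {
    // this factory runs once the actual module has initialized
    console.log("some/module initialized");
    return actual; // callers receive the unchanged module object
});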
However, it is obviously not practical in your case to create a wrapper per module manually, but you might be able to generate them dynamically with some trickery. Or somehow figure out, from within the (unique) wrapper module, which name was used to import it, so that it can in turn dynamically import the associated actual module (with an async require, which wouldn't be transparent to user code).
Of course, it would be best if requirejs provided official hooks. I've never seen such hooks in the docs, but you might want to go through them again if you're not more certain than I am.

how to unit-test private methods in jquery plugins?

Perhaps this is a bit of a novice jQuery question, but:
proper jQuery plugins are written inside a closure,
thus only the methods defining the plugin's interface are accessible from the outside;
sometimes (or many times) one may need helper methods that it doesn't make sense to expose as part of the plugin interface (for example because they alter internal state);
how do those get unit-tested?
For example, looking at the blockUI plugin, how can the methods install, remove and reset get unit-tested?
To draw a parallel, in Java I would:
create a BlockUI interface containing public methods only (by definition);
create a BlockUIImpl class implementing the above interface; this class would contain the install(), remove(), reset() methods, which could be public or (package-)protected.
So I would unit-test the Impl, but client programmers would interact with the plugin via the BlockUI interface.
The same applies here as in any other language when testing privates: exercise them via the public interface. In other words, by calling your public methods, the private methods get tested in the process, because the public methods rely on them.
Generally, private methods are not tested separately from the public interface; the entire point is that they are implementation details, and tests should not know too much about the specifics of the implementation.
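As an illustration, a minimal sketch with hypothetical names (not taken from blockUI): the private helper is exercised only through the public plugin method that uses it.
(function ($) {
    // private helper: invisible outside the closure
    function normalize(text) {
        return text.trim().toLowerCase();
    }

    // public plugin method: the only route tests have to normalize()
    $.fn.setLabel = function (text) {
        return this.attr("data-label", normalize(text));
    };
}(jQuery));

// asserting on the public behaviour implicitly covers normalize()
QUnit.test("setLabel normalizes its input", function (assert) {
    var el = $("<div/>").setLabel("  Hello ");
    assert.equal(el.attr("data-label"), "hello");
});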
Code written inside a function in JavaScript, or a closure as you called it, is not necessarily isolated from the outside of that function.
It is useful to know that functions have visibility of the scope in which they are defined. Any closure you create carries the scope, and therefore the functions, of the code that contains it.
This simple example with a jQuery plugin and an artificial "namespace" might serve to demonstrate this:
// Initialise this only when running tests
my_public_test_namespace = function(){};

jQuery.fn.makeItBlue = function() {
    makeItBlue(this);

    function makeItBlue(object) {
        object.css('color', 'blue');
    }

    if (typeof my_public_test_namespace != "undefined") {
        my_public_test_namespace.testHarness = function() {
            return {
                _makeItBluePrivateFn: makeItBlue
            };
        };
    }
};
$("#myElement").makeItBlue(); // make something blue, initialise plugin
console.debug(my_public_test_namespace.testHarness()._makeItBluePrivateFn);
But don't forget you shouldn't really test privates. ;)
I came up with the same question, and after searching and finding answers that didn't really apply, here's what I ended up with to solve a similar problem.
Problem: "I have a widget with behaviour I want to test to ensure it's working as expected. Some methods are called internally because they handle internal behaviour; exposing them as public makes no sense because they won't be called from outside. Testing only the public methods means you won't test the internals of the widget. So what can I do?"
Solution: "Create a test widget that exposes the methods you are interested in testing, and use them in QUnit. Here is the example:"
// Namespace to avoid conflicts with other things defined similarly
var formeditortest = formeditortest || {};

// Widget that inherits from the container I want to test,
// exposing the internal _executeDrop method for the tests
$.widget("app.testcontainer", $.app.container, {
    executeDrop: function(drop, helper) {
        var self = this;
        self._executeDrop(drop, helper);
    }
});

// Test cases
formeditortest.testDropSimple = function(assert) {
    var container = $("<div />");
    container.testcontainer();
    container.testcontainer("executeDrop", 0, 3);
    assert.equal(true, $(container.children()[0]).hasClass("droparea"));
};

QUnit.test("drop behaviour", function(assert) {
    formeditortest.testDropSimple(assert);
    formeditortest.testDropBottom(assert);
});
Using this approach, the inherited testcontainer has the preparation required to test the elements, and QUnit handles the test. This solved my problem; I hope it works for someone else who is having trouble approaching this kind of test.
Critiques? Feel free to comment. I want to improve this if I'm doing something silly!
