Development process using BoilerplateJS - javascript

We're impressed with the integration and best practices that BoilerplateJS provides, but the documentation is definitely lacking, especially for new RequireJS users.
We're a team of five, each with different skill sets, and one of the attractive points of BoilerplateJS is the ability to isolate UI components.
From the sample scaffolding, it's clear how we can unit-test each component separately. However, we're unclear how we can do this during development:
1) Developer A creates the component structure and view model (tested) and passes it to Developer B
2) Developer B develops the CSS and possibly animation for the component
3) Developer A and/or B integrate the component into the rest of the website and further test the integration
How is it possible to achieve step 2, i.e. allow designers and developers to work on an isolated component? What is the recommended way to load the component so it can be developed/debugged/tested?

About CSS
A UI component has roughly three parts: structure (HTML), presentation (CSS) and behavior (JS). A common way of dividing the work is to have developers focus on the structure and behavior while designers work on the presentation.
This is how we developed the sample application of BoilerplateJS. For example, the Menu, Theme and Localization components were built by developers as simple 'unordered lists', which is exactly what they looked like when the developers finished them (just delete the theme CSS link via Chrome Developer Tools and you will see the same).
Then the designers took that ugly UI and created a theme that positions and renders these lists in a professional manner (we developed two themes, stored at src/modules/baseModule/theme). It is of course hard for developers to deliver something that ugly, but they need to trust the designers' ability to do their job. I'm sure you use a source control tool that allows different team members to work on the same component, even simultaneously.
If you want theming to be a prominent feature, I recommend minimizing component-specific CSS files. Otherwise you might not be able to create different themes that completely change the layout and look-and-feel of your components. The downside of not having component-local CSS is that components are not really self-contained without their 'presentation'. I'm still struggling to answer this question properly; any ideas/help are appreciated! See my related question on this below:
global CSS theming combination with component specific local stylesheets
Anyway, there are several ways you can add CSS to your components; have a look at this question where the different options are discussed:
Adding external CSS file to a BoilerplateJS project
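As a generic illustration (not a BoilerplateJS-specific API), one simple option is to have a component attach its own stylesheet link at load time; the path used here is made up:
//minimal sketch: dynamically attach a component-specific stylesheet (path is hypothetical)
function loadComponentCss(href) {
    //avoid adding the same stylesheet twice
    if (document.querySelector('link[href="' + href + '"]')) {
        return;
    }
    var link = document.createElement('link');
    link.rel = 'stylesheet';
    link.href = href;
    document.getElementsByTagName('head')[0].appendChild(link);
}
loadComponentCss('modules/sampleModule1/departments/departments.css');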
Now about embedding components...
If you want the components embedded into some other web page, you can use the DomController of BoilerplateJS for that. For example, let's say you need to embed the 'departments' component (src/modules/sampleModule1/departments) in some other website. You will have to add a DomController, in addition to the already existing UrlController (the UrlController responds to browser URL changes), to the module (src/modules/sampleModule1/module.js).
//create a DOM controller that searches within the whole document body
var domController = new Boiler.DomController($("body"));
domController.addRoutes({
    //look for elements with id 'department_comp' and embed the department component
    '#department_comp' : new DepartmentComponent(context)
});
domController.start();
Now, on your web page or on the external site, place a div or section element for the DomController to embed the department component in:
<section id="department_comp"></section>
Of course there are two things you need to take care of:
1) Your web page needs to have the BoilerplateJS runtime in it. This means all your third-party JS libraries and the theme CSS file should be statically added to the web page. (We are working on this; with v0.2-stable we expect to release a bootstrapper that can do all of that with a single script declaration.)
2) If your component uses JSON services from a different domain, you will have to address cross-domain HTTP requests with either JSONP or CORS. But if your REST services are hosted on the same domain, you don't have to worry about this.
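For the cross-domain case, a minimal JSONP sketch with jQuery might look like the following (the service URL is made up, and the remote API must actually support JSONP by wrapping its response in the supplied callback):
//minimal JSONP sketch; jQuery appends a callback=? parameter automatically
$.ajax({
    url: 'http://api.example.com/departments',
    dataType: 'jsonp',
    success: function (data) {
        console.log('departments received', data);
    },
    error: function () {
        console.log('JSONP request failed');
    }
});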

Related

How to architect an extendable component-based Web application?

I am trying to architect an extendable Web application that allows users to build and export static websites. Similar to this site and this site, except that developers can build their own custom components using HTML, CSS and JavaScript and add them to the public library.
The architecture must:
Ensure that user-added components cannot interfere with each other (encapsulation):
HTML attributes have unique values (<div id="a"> remains unique to this component)
CSS is namespaced
JavaScript objects are scoped and can only modify DOM elements within the component
Enable developers to build components in plain HTML, CSS and JavaScript without needing to build React/Vue/Angular etc. components
Let developers build components without worrying about the encapsulation issues above, i.e. the application transforms the unsafe HTML, CSS and JavaScript into a safe form
Ensure cross-browser compatibility
iFrames provide excellent encapsulation but have performance and device-scaling issues. I have been reading about Shadow DOM and Web Components but haven't been able to figure out the correct approach to building such a Web application.
I also came across this SO question, but it seems to be more geared toward the back end. Please correct me if I am wrong. Are there any existing frameworks/build tools/libraries that I can use? If not, how should I go about building such a tool?

Using React (with Redux) as a component in a website

I have a large, globalised web site (not a web app), with 50k+ pages of content which is rendered on a cluster of servers using quite straightforward NodeJS + Nunjucks to generate HTML. For 90% of the site, this is perfectly acceptable, and necessary to achieve SEO visibility, particularly in non-Google search engines which don't index JS well (Yandex, Baidu, etc)
The site has become a bit clunky as complexity has increased over time, and I'd like to re-architect some of the functional components, which are currently built mostly with progressively enhanced jQuery. I've been looking at React for this, with the Redux implementation of the Flux pattern.
Now my question is simply the following: nearly 100% of the tutorials assume I'm building some sort of SPA, which I'm not. I just want to build a set of containerised, reusable components that I can plug in to replace the jQuery components. Oh, and they have to be WCAG AA/508 accessible as well.
Does React play well with being retrofitted into websites and are there any specific considerations around SEO, bootstrapping, accessibility? Examples of implementations or tutorials would be appreciated.
You can mount a React component onto any DOM node on your page, which makes it easy to insert components into statically generated content.
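For example, a minimal sketch for a pre-React-18 setup (the CommentsWidget component and the placeholder div are hypothetical):
//the server-rendered page contains: <div id="comments-widget"></div>
var React = require('react');
var ReactDOM = require('react-dom');
var CommentsWidget = require('./CommentsWidget'); //hypothetical component

//mount the component onto the existing placeholder node; the rest of the static page is untouched
ReactDOM.render(
    React.createElement(CommentsWidget, { threadId: 42 }),
    document.getElementById('comments-widget')
);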
Most search engines, like Google, will wait for JS files to load before they index the page, so a page with a React component will be indexed perfectly fine. However, if you want to be 100% sure that your page is rendered correctly for all crawling bots, you have to take a look at React server rendering. If you already use NodeJS for the backend, it should not be a big problem.
I've never encountered that kind of problem, but my best guess would be to use ReactDOMServer.renderToString to render the component on the server and then replace a node in your static HTML layout. The implementation would depend on the template language you use. You could use something like Handlebars to dynamically create helpers from React components, so that in your static HTML page you would be able to use them as {{my-component}}. But these are only my speculations on the subject; maybe there is a more elegant solution.
Here is the article that could help.
You'll be happy to know that this is all possible through something called isomorphic JavaScript. Basically, you use React and JSX to render HTML on the server, which is then sent to the browser as a fully built web page. This does not assume your app is an SPA; rather, you'll have multiple endpoints for rendering different pages, much like you probably already have.
The benefit here is that you can use the React/Redux architecture but still allow your site to be indexable by crawlers, as requests to your app will yield static pages, not a single page with lots of JS to make it work. You're also free to refactor gradually by converting your Nunjucks-rendered endpoints to React one at a time, instead of making one big jump to SPA land.
Here's a good tutorial I found on making isomorphic React apps with node:
https://strongloop.com/strongblog/node-js-react-isomorphic-javascript-why-it-matters/
EDIT: I may have misread your actual desire, which is to inject React components into your existing web pages. This is also possible: you'll probably want to use ReactDOMServer to render your components to static markup, and then you can inject that markup string into your Nunjucks templates.
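A rough sketch of that approach, assuming a hypothetical PricingTable component and page.njk template:
var React = require('react');
var ReactDOMServer = require('react-dom/server');
var nunjucks = require('nunjucks');
var PricingTable = require('./PricingTable'); //hypothetical component

nunjucks.configure('views', { autoescape: true });

//render the component to plain HTML with no React-specific attributes
var widgetHtml = ReactDOMServer.renderToStaticMarkup(
    React.createElement(PricingTable, { currency: 'EUR' })
);

//drop it into an existing Nunjucks template; page.njk would contain {{ widget | safe }}
var page = nunjucks.render('page.njk', { widget: widgetHtml });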

SAPUI5 / OpenUI5: More than one app in a portal

I have developed some SAPUI5 mobile apps and I'd like to merge them into a portal (with tiles) so I can switch between them as a "reputation".
Now I would like to know, what would be the "best" way to implement this case?
At the moment the apps each have a controller and views. My first idea was to build a 'portal app' which includes all the views of the other apps with its own controller, but then I noticed that performance decreased (because all resources, OData models etc., are loaded when the portal app starts).
I also tried linking them (each with its own index.html), but that doesn't seem to be the right approach either.
So is there a way to dynamically load the views, or even a whole app, and how can I do that?
First of all, SAP's official solution for this problem is called SAP Fiori Launchpad. However, it's much more complex to set up (you need an underlying application server which hosts SAP Fiori, and you need to handle user roles and assign applications to roles). Still, it's great for inspiration. (Here you can check it.)
You can create a separate component which holds the references to other applications. Your applications can be referenced from Tiles.
I don't know the current implementation of your applications, but it's recommended to implement them as components (UI components if they have a visual representation).
With components, you will be able to use routing (navigating between views, or even components, using URL hashes), which helps you manage resources and services properly. With this you can prevent unwanted OData requests as well.
It can be a big step forward from a simple application architecture, but it's worth it.
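A rough sketch of what such a UI component with routing might look like; all the names (portal.app, the view names, the routes) are placeholders:
//Component.js: a UI component that owns its own router
sap.ui.define([
    "sap/ui/core/UIComponent"
], function (UIComponent) {
    "use strict";

    return UIComponent.extend("portal.app.Component", {
        metadata: {
            rootView: "portal.app.view.App",
            routing: {
                config: {
                    routerClass: "sap.m.routing.Router",
                    viewType: "XML",
                    viewPath: "portal.app.view",
                    controlId: "app",
                    controlAggregation: "pages"
                },
                routes: [
                    { pattern: "", name: "home", target: "home" },
                    { pattern: "calendar", name: "calendar", target: "calendar" }
                ],
                targets: {
                    home: { viewName: "Home" },
                    calendar: { viewName: "Calendar" }
                }
            }
        },

        init: function () {
            UIComponent.prototype.init.apply(this, arguments);
            //views are only created when their route is hit, so each app's models and requests load on demand
            this.getRouter().initialize();
        }
    });
});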
Of course, you can implement one simple application without components. In that case you may experience the mentioned performance issues. Consider moving data-intensive operations into event handlers and performing these tasks asynchronously.

Third Party Polymer Elements

I'm trying to understand whether Polymer is built for a specific use case: third-party web components.
What I need to accomplish is to create a web component that takes an image URL as input from the caller's page (attributes on an element are OK) and, inside the Polymer component, renders the image in a special way using an HTML5 canvas.
To me, it seems like Polymer isn't currently built for third-party usage. Reasons why:
one must have enough control over the caller's page to add platform.js to it, specifically to the <head>
my version of platform.js could potentially be different from the caller page's platform.js (or, at bare minimum, I'm polluting the page with Polymer's JS objects, right?)
in non-Chrome browsers, style and other tags are injected into <head>, possibly conflicting with the source page
one must have control over the caller's <body> tag if one wants to set options to avoid FOUC
Traditionally all my web components have been built via iframes and i'd like to modernize my approach with a view towards a "shadow-dom future."
Is there a way to use Polymer in a third-party-safe way? Perhaps a mashup with lightningjs?
Polymer and Web Components are entirely structured around 3rd party usage, this is a central design pillar.
The broadest notion IMO is that developers will be able to go to the web and find numerous Web Components to choose from. This is not unlike being able to choose from an enormous set of JQuery plugins, but with a much greater degree of interoperability and composition because each instance can be treated as a traditional Element.
platform.js
Platform.js models future browser capabilities called Web Components. There are practical realities to making this work right now, so yes, in order for a third party to use Web Components at all, they will need to opt in to platform.js (and all that entails). It's true that this fact makes it difficult (today) to inject Web Components into somebody's page without their assent.
my version of platform.js could potentially be different than the caller page's
As above, platform.js is required upfront to use Web Components. This is why it's named the way it is. Unless the main page owner includes this capability, he's not providing a platform to which you can supply Web Components.
This is not dissimilar to modern libraries, e.g. JQuery. You can load numerous copies and/or versions of JQuery in one document if you aren't careful, but it's wasteful. Coordination is preferred.
With the exception of platform.js, Web Components are geared around N modules using M dependencies, all working together optimally. This is another way in which sharing is a pillar of the design.
in non-Chrome browsers, style and other tags are injected into <head>, possibly conflicting with the source page
This is all the price of polyfilling. If you need purity of environment, you will have to wait until Web Components are widely implemented natively. As a practical matter, the style tags are very specialized and are unlikely to conflict with anything.
one must have control over the caller's <body> tag if one wants to set options to avoid FOUC
This is not strictly true; you can build Web Components that control their own FOUC up to a point. But it's a lot of extra work, and as a third party you really can't know what kind of loading mechanisms or idioms a developer is going to employ, so trying to orchestrate too much without their cooperation is going to be difficult.
Traditionally all my web components have been built via iframes
An IFRAME is quite a bit different from a Web Component. An IFRAME is a fresh context, so you have a much bigger safety net, but it's heavyweight and there are coordination costs.
Although platform.js, by its very nature, changes the shared platform, Custom Elements themselves need not mess with the user's global namespace or CSS (although they can). Code can be restricted to the element's prototype, and CSS and DOM can be stashed inside the Shadow DOM. The overall intent is that none of that needs to leak out of the element, unless somebody wants it to.
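To illustrate the idea with today's native Custom Elements and Shadow DOM APIs (rather than the platform.js-era Polymer syntax), a sketch of the image-to-canvas component described in the question could look like this; the element name fancy-image and the grayscale effect are just placeholders:
//a self-contained custom element: DOM, CSS and behaviour stay inside its shadow root
class FancyImage extends HTMLElement {
    constructor() {
        super();
        var shadow = this.attachShadow({ mode: 'open' });
        shadow.innerHTML =
            '<style>canvas { display: block; border-radius: 8px; }</style>' +
            '<canvas width="300" height="200"></canvas>';
    }

    connectedCallback() {
        var canvas = this.shadowRoot.querySelector('canvas');
        var ctx = canvas.getContext('2d');
        var img = new Image();
        img.crossOrigin = 'anonymous';
        img.onload = function () {
            ctx.filter = 'grayscale(100%)'; //stand-in for the 'special' rendering
            ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
        };
        img.src = this.getAttribute('src'); //image URL passed in as an attribute
    }
}
customElements.define('fancy-image', FancyImage);
//usage on the caller's page: <fancy-image src="photo.jpg"></fancy-image>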

Creating a loosely-coupled & multi-page JS application using Core(Mediator) / Sandbox(Facade) / Module pattern -- advice?

I'm building a multi-page JavaScript application. I've read a lot about design patterns and about creating applications using a Core/Facade/Module approach with loose coupling (publishing/subscribing to events).
I have a pretty good system worked out that minifies and combines all of my module files and related dependencies into a single external JavaScript file at deployment. Minimizing extra HTTP requests is a design goal for my application, so I'm not too interested in AMD (asynchronous module definition).
I'm using the guidelines delineated in Nicholas Zakas's presentation,
Scalable JavaScript Application Architecture
http://www.youtube.com/watch?v=vXjVFPosQHw
&&
Addy Osmani's Patterns For Large-Scale JavaScript Application Architecture
http://addyosmani.com/largescalejavascript/
&&
This premium tutorial, by Andrew Burgess from Nettuts, Writing Modular JavaScript
http://marketplace.tutsplus.com/item/writing-modular-javascript/128103/?ref=addyosmani&ref=addyosmani&clickthrough_id=90060087&redirect_back=true
My question is about how to manage the different pages of this application and their associated modules. I'm also using Backbone.js's Router class with balupton's History.js library to manipulate the HTML5 history/state API and dynamically load pages without a refresh, while maintaining backwards compatibility for older browsers that don't support the history/state API. All of my pages share a common code base (a single minified and compressed JS file).
Here's an outline of the structure I'm thinking of using in my application:
It's essentially a hybrid approach. The top half consists of a Core/Facade/Module pattern with discrete modules that don't interact directly with each other and publish/subscribe to notifications via the facade.
The bottom half consists of my proposed application structure, which notifies a 'main controller' when the state/URL changes. The main controller performs any global operations (such as initializing the header and sidebar menu of my UI if not already initialized) and then instructs the relevant sub-controller to run its init() method (as well as calling destroy() on any controller that was previously loaded). Each sub-controller (corresponding to e.g. the home page, calendar page, reservations page, etc.) cherry-picks modules from the pool of available modules and initializes them.
Is this a good approach or am I on a bad track? I can see the modules are still independent of each other, which is good for scalability.
I've also considered just treating the Router and Controllers as discrete modules, having them publish/subscribe to the Core, with each controller somehow initializing the modules it needs for its page.
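A rough sketch of the structure described above; the Facade, MainController and page controllers are all hypothetical names:
//modules talk only to the facade; the main controller swaps page controllers in and out
var Facade = {
    channels: {},
    subscribe: function (topic, fn) {
        (this.channels[topic] = this.channels[topic] || []).push(fn);
    },
    publish: function (topic, data) {
        $.each(this.channels[topic] || [], function (i, fn) { fn(data); });
    }
};

var controllers = {
    calendar: {
        init: function () { /* cherry-pick and init calendar-related modules */ },
        destroy: function () { /* unsubscribe and remove DOM created by this page */ }
    }
    //home, reservations, ...
};

var MainController = {
    current: null,
    start: function () {
        //one-time global UI setup (header, sidebar, ...) would happen here
        Facade.subscribe('route:changed', $.proxy(this.onRouteChanged, this));
    },
    onRouteChanged: function (route) {
        if (this.current) { this.current.destroy(); } //tear down the previous page
        this.current = controllers[route.page];
        this.current.init();
    }
};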
One thing we did to keep history working smoothly was to change the URL first. On a URL change, an event gets triggered; the router parses the URL and then figures out what to do. This event gets triggered automatically on page load as well. If you click a link, it simply changes the URL, which is fairly simple and completely decouples links/buttons from the application logic. This seemed to work well for our application. We used jQuery Mobile, but we dropped most of its router since we read most of our instructions from an XML file and didn't have a bunch of HTML pages to load into the main viewport area.
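A rough sketch of that "change the URL first, let the router react" idea with History.js (router.dispatch and the data-internal attribute are made-up names):
//links only change the URL; they know nothing about the application logic
$(document).on('click', 'a[data-internal]', function (e) {
    e.preventDefault();
    History.pushState(null, null, $(this).attr('href'));
});

//History.js fires 'statechange' for pushState calls and for back/forward navigation
History.Adapter.bind(window, 'statechange', function () {
    var url = History.getState().url;
    router.dispatch(url); //the router parses the URL and tells the main controller what to do
});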
I've often seen Backbone apps use the router as the core/mediator. This is a good idea: you can simply listen for URL change events and then change the page appropriately. This mediator should probably be a singleton, although singletons are harder to unit test.
The thing I didn't necessarily agree with Backbone on was its definition of 'views'. The view sort of seems like an action in a controller (from some perspectives, anyway). We added one more level of separation in our application at that point: our views made Ajax requests for template files, which were filled in with some JSON and handlebars.js. I'd say your header/sidebar should just be templates. If you need to refresh them, see whether you can do that very simply; otherwise you are looking at creating four new modules: a collection for the list, a model for each item, a collection view and a model view. I'd couple templates more tightly with some higher-level view until they need to be broken down further (e.g. some 'Application/Main View').
Having this template layer also allows you to make superficial changes without recompiling, which is nice. Any time you can put things into 'meta' it is a win (well, unless it requires you to read XML (ha)). As a bonus, you can then cache the template separately as well (or cache-bust it separately, for that matter).
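A minimal sketch of such a template layer with Handlebars (the URLs and data shape are made up):
//fetch a Handlebars template once, cache the compiled function, render it with JSON
var templateCache = {};

function renderTemplate(url, data, $target) {
    if (templateCache[url]) {
        $target.html(templateCache[url](data));
        return;
    }
    $.get(url, function (source) {
        templateCache[url] = Handlebars.compile(source); //compile once, reuse afterwards
        $target.html(templateCache[url](data));
    });
}

//e.g. the sidebar is 'just a template' filled with JSON from the server
$.getJSON('/api/sidebar', function (json) {
    renderTemplate('/templates/sidebar.handlebars', json, $('#sidebar'));
});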
Your architecture does seem fine, though, and is a valid approach to your problem. One tip I'd give: don't over-design up front. Iterating is best. You will need to refactor. It is impossible to foresee what will make your application flow more smoothly 3-6 months in advance.
Update on Dec 18th, 2013
Nowadays we are using Marionette and more Addy Osmani tricks. On top of the above items, we are using AMD's alternate format:
define(function (require) {
    var myTemplate = require('hb!mytemplate.handlebars'),
        view = require('myview');
    // ...
});
We are also using the Marionette Application class in combination with Wreqr, which provides a request/response layer. This allows us to set application-wide objects cleanly. It also allows us to reference classes without explicitly stating the class name, which is a pretty good way to sandbox. E.g.:
this.app.setHandler('CanvasClass', function () {
    return RaphaelCanvasView;
});

// elsewhere
this.app.request('CanvasClass').text('123', { x: 1, y: 2 });
This all seems to work out pretty well.
You should also check out Aura.js and Web Components. Our directory structure sort of mimics/anticipates those concepts without investing in them yet.
I think it's a good approach.
I've been developing something similar in two huge commercial web apps (minus Backbone, and with a custom history manager) and it works great. I'm also not using AMD, and all interactions are handled by pub/sub.
One of my best inspirations (which I'm sure you already know) is: https://github.com/aurajs/aura
