What's the best way to manage JavaScript with Ruby on Rails 6?

I'm giving RoR 6 a try (I'm coming from MEAN and haven't touched RoR since version 3) and I'm having trouble finding the best way to manage JavaScript code, maybe because of my background.
I've read a lot about the topic (including the official guides https://guides.rubyonrails.org/working_with_javascript_in_rails.html) but it seems the official documentation is out of date.
According to the documentation, when you generate a controller from the CLI it should create a .js file for that controller, but that doesn't happen. Besides, Webpack has now been added to RoR 6 and JavaScript is no longer managed by the Asset Pipeline (am I right?), yet there's no reference to this change.
I want to find a way to write JS code for every view and keep that code isolated from the rest.
Where should I put all the JS code?
How can I get isolation for the JS code of every view?
I've added jQuery to the project because of Bootstrap (using Yarn) and configured Webpack this way, but $ and jQuery are undefined:
const { environment } = require('@rails/webpacker')
const webpack = require("webpack")

environment.plugins.append("Provide", new webpack.ProvidePlugin({
  $: 'jquery',
  jQuery: 'jquery',
  Popper: ['popper.js', 'default']
}))

module.exports = environment
I'd appreciate some help.
Thanks!
SOLUTION:
I found what I was looking for --> https://stimulusjs.org
Stimulus, created by Basecamp, adds a JS layer to every HTML view and lets you keep order and clarity when writing JS code. It connects a JS file to the DOM and nothing more, which is enough to add some JS to improve functionality.
It pairs perfectly with Turbolinks and is ready to be used with Webpack. Besides, it can be learned in about 10 minutes, and installation is absurdly easy.
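For a taste of what that looks like, here is a minimal sketch of a Stimulus controller and the markup that activates it. The controller name and behaviour are hypothetical, the data-target syntax is the one documented on stimulusjs.org at the time, and it assumes the standard install that auto-loads controllers from app/javascript/controllers:
// app/javascript/controllers/hello_controller.js (hypothetical example)
import { Controller } from "stimulus"

export default class extends Controller {
  static targets = [ "output" ]

  greet() {
    // runs only on pages whose markup declares data-controller="hello"
    this.outputTarget.textContent = "Hello from Stimulus!"
  }
}

// In the view:
// <div data-controller="hello">
//   <button data-action="click->hello#greet">Greet</button>
//   <span data-target="hello.output"></span>
// </div>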
Anyway, if you need to get some knowledge about RoR and Webpack/Webpacker, you can visit these links:
https://webpack.js.org/guides/getting-started/
https://github.com/rails/webpacker
https://medium.com/statuscode/introducing-webpacker-7136d66cddfb
And finally, if you don't want to use a JS framework like Stimulus for managing JS code in RoR, you can always try these gems for page-specific JS:
Paloma gem: https://github.com/gnclmorais/paloma (not checked)
Pagescript gem: https://github.com/maxcal/pagescript (not checked)

The change to Webpack is very new and the documentation has not quite caught up.
Generating asset files when running the generator was only done with the old asset pipeline, and even then it was not really a good idea. It relied on Sprockets' special require_tree directive, which would slurp up all the files in the directory and add them to the manifest in alphabetical order, so you had no control over the order of execution.
It also fooled beginners into thinking that the JS they put into users.js was only executed in their users controller, when in fact it was all just slurped up into a single manifest.
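For context, the old Sprockets manifest looked roughly like this (a sketch; the exact directives vary by Rails version):
// app/assets/javascripts/application.js (old Sprockets manifest)
//= require rails-ujs
//= require_tree .
// require_tree pulled in every file in this directory (users.js included), in alphabetical order.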
With Webpack you explicitly import assets.
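For example, with Webpacker the entry point is a "pack", and anything you want in the bundle has to be imported there or in a file it imports. A minimal sketch, assuming jquery and bootstrap were added with Yarn as in the question:
// app/javascript/packs/application.js
import $ from 'jquery'
import 'bootstrap'

// Optional: expose jQuery globally only if legacy inline scripts still need it;
// code bundled by Webpack can rely on the ProvidePlugin configuration instead.
window.$ = window.jQuery = $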
Where should I put all the JS code?
You're encouraged to place your actual application logic in a relevant structure within app/javascript.
How can I get isolation for the JS code of every view?
While you can use javascript_pack_tag in the view itself to require specific files, this is not really a good idiom, as it creates unnecessary HTTP requests and logic that is hard to follow.
If you want to ensure that code is executed when a particular view loads, you can add data attributes to the body tag and trigger custom events:
# app/views/layouts/application.html.erb
<body data-action="<%= action_name %>" data-controller="<%= controller_name %>">

// fired when Turbolinks changes pages
$(document).on('turbolinks:load', () => {
  let data = $('body').data();
  // replace myns with whatever namespace you want
  $(document).trigger(`myns:${data.controller}`, data)
    .trigger(`myns:${data.controller}#${data.action}`, data)
    .trigger(`myns:#${data.action}`, data);
});
Then you can wrap functionality that should happen when a particular page loads by listening for your custom events:
$(document).on('myns:users#show', () => {
  console.log("We are on users#show");
});

Related

Including an external JavaScript library in a Pebble JS file?

Is there any way I can include an external JS library in my pebble code?
Conventionally on a webpage I would do this in my head tags:
<script type='text/javascript' src='https://cdn.firebase.com/js/client/1.0.11/firebase.js'></script>
But in Pebble, I am unable to do that since I am only using JS. So how can I include an external library in a JavaScript file?
At present, you cannot include external JS files.
If you're using CloudPebble, then the only way to do this is to copy and paste the content of the JS library files into your JS file.
If you're doing native development, you can modify the wscript file to combine multiple JS files into one at build time.
I think there's some confusion over Pebble.js vs PebbleKit JS (v3.8.1). Pebble.js is a fledgling SDK where eventually the programmer will be able to write pure JavaScript. It's still cooking so there's some functionality missing like the ability to draw a line or obtain the screen dimensions. The repo is a mix of C and JS sources where you can add C code to augment missing functionality but otherwise all your code lives in src/js/app.js or src/js/app/. Anyway, Pebble.js does support require.
I don't use CloudPebble but I got the impression that it either supports Pebble.js (and hence require) or is planning to. I think all this SDK boilerplate code would be hidden.
PebbleKit JS does not support require out of the box AFAIK. I've made a demo that ports require support from Pebble.js to PKJS. The summary of changes is:
Move your project's src/js/pebble-js-app.js to src/js/app/index.js.
Remove any ready event listener from src/js/app/index.js. index.js will be loaded when the ready event is emitted.
Add src/js/loader.js from Pebble.js.
Add a src/js/main.js that calls require('src/js/app') on the ready event.
Update your wscript with the following deltas.
When adding new modules, place them under src/js/app/ and require('./name') will work.
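For instance (module name hypothetical, assuming the ported loader supports CommonJS-style module.exports the way Pebble.js modules do):
// src/js/app/greeter.js
module.exports.hello = function (name) {
  console.log('Hello, ' + name + '!');
};

// src/js/app/index.js
var greeter = require('./greeter');
greeter.hello('Pebble');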
I've tried to cover this all in the demo's readme.
BTW, here's the official breakdown of all the different SDKs but it's a little confusing.
I am not sure if there have been changes since the above answer, but it looks like there is in fact a way to include additional resources while keeping things tidy. On the PebbleJS page, there is the following section with some information on the subject.
GLOBAL NAMESPACE - require(path)
Loads another JavaScript file allowing you to write a multi-file project. Package loading loosely follows the CommonJS format. path is the path to the dependency.
You can then use the following code to "require" a JS library in your Pebble project. This should be usable in both CloudPebble and native development.
// src/js/dependency.js
var dep = require('dependency');
You can then use this as shown below:
dep.myFunction(); // run a function from the dependency
dep.myVar = 2; // access or change variables
Update:
After some digging into this on my own, I have successfully gotten CloudPebble to work with this functionality. It is a bit convoluted, but it loosely follows the CommonJS module format. Below I am posting a simple example of getting things set up. Additionally, I would suggest looking at this code from PebbleJS for a better reference for a complex setup.
myApp.js
var resource = require('myExtraFile'); // require additional library
console.log(resource.value); // logs 42
resource.functionA(); // logs "This is working now"
myExtraFile.js
var myExtraFile = {            // create an object to export
  "value": 42,                 // a variable
  functionA: function() {      // a function
    console.log("This is working now!");
  }
};

this.exports = myExtraFile;    // export this object for later use

Using require.js for client-side dependencies in Adobe CQ5

I was wondering if anyone had experience using require.js with the Adobe CQ5 platform. I'm writing a Chaplin.js (Backbone-based) single-page app that will be integrated into the rest of the CQ5-based site we're working on. Chaplin requires the use of a module system like AMD/CommonJS and I want to make sure my compiled JavaScript file will be usable within CQ5's clientlibs. Is it as simple as adding require.js as a dependency in clientlibs prior to loading my JavaScript modules? Insight from someone who has experience doing this would be greatly appreciated.
I've implemented this as a solution for organizing all the modules in a better way, such as:
//public/js/modules/myModule.js
define('myModule', [ /* dependencies */ ], function () { /* stuff here */ });
and in the components such:
<% //components/component.jsp %>
<div>
<!-- stuff here -->
</div>
componentJS:
//components/component/clientlibs/js/component.js
require(['modules/myModule']);
and finally I've configured require (config.js) and stored the JS modules in a separate folder in the design. The compiled JS is placed right before the closing </body> tag, to guarantee the JS always comes after the HTML:
<!-- page/body.jsp -->
...
<cq:includeClientLib js="specialclientlibs.footer"/>
</body>
This solves the issue of having all the content "ready" before the JS is executed. I had some problems with the async behaviour of the clientlibs compilation tool; in production the problem was different, but in development the order in which CQ compiled the JS caused some gaps in the execution order. The problem was really a bit more complex than this explanation, because the number of JS files was really big and so was the team, but in simple terms this was the best way I've found so far.
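For reference, the config.js mentioned above is just a standard RequireJS configuration; a minimal sketch with hypothetical paths:
// config.js (paths are hypothetical)
require.config({
  baseUrl: '/etc/designs/myproject/js', // the design folder holding the JS modules
  paths: {
    jquery: 'libs/jquery'               // example library mapping
  }
});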
The Idea
I think you can compile your Chaplin.js with one of the AMD shims to make it self-contained, so it wraps every dependency it needs inside a function scope and exposes an entry point as a global variable (which is bad practice, but CQ does it all the time...), and then use the compiled .js inside a normal clientlib:
AMD Shims
https://github.com/jrburke/requirejs/wiki/AMD-API-Shims
Example
Here is an example of how we make one of our libs self-contained:
https://github.com/normanzb/chuanr/blob/master/gruntfile.js
Basically, at the source code level the lib "require"s the other modules as usual. However, after compilation the generated chuanr.js file contains everything it needs, even wrapping in a lightweight AMD-compatible implementation.
Check out the compiled file here: https://github.com/normanzb/chuanr/blob/master/Chuanr.js
and the source code: https://github.com/normanzb/chuanr/tree/master/src
Alternative
Alternatively, rather than compiling every lib you use to be independent/self-contained, what we do in our project is simply use the AMD shim below instead of the real require.js, so at the CQ component level you can call the require() function as usual:
require(['foo/bar'], function(){});
The AMD shim will not send the HTTP request for the module immediately; instead it will wait until someone else loads the module.
Then, at the bottom of the page, we collect all the dependencies and send the requirements to a server-side handler (scriptmananger) for dynamic merging (by internally calling into r.js):
var paths = [];
for (var path in amdShim.waiting) {
  paths.push(path);
}
document.write('/scriptmananger?' + paths.join(','));
The amdShim we are using: https://github.com/normanzb/amdshim/tree/exp/defer

Using multiple pre-compiled templates with Handlebars.js (multiple HTTP requests)?

I think this question will give mine a little more context:
Using pre-compiled templates with Handlebars.js (jQuery Mobile environment)
Basically I'm trying to learn the precompiling stuff so I can save load time and keep my HTML documents neat. I haven't started yet, but based on the above link, every template needs to have its own file. Isn't that going to be a lot of links to load? I don't want to be making multiple HTTP requests if I don't have to.
So could someone shed some light, or perhaps offer an alternative where I can get the templates out of my HTML but not have to load up 100 different template files?
Tools like Grunt.js allow you to have your templates and consume them too. For example, this file compiles the templates and then concatenates them into a single file:
module.exports = function(grunt) {
  grunt.loadNpmTasks("grunt-contrib-handlebars");

  // Project configuration.
  grunt.initConfig({
    // Project metadata, used by the <banner> directive.
    meta: {},
    handlebars: {
      dist: {
        options: {
          namespace: "JST",
          wrapped: "true"
        },
        files: {
          "templates.js": [
            "./fileA.tmpl",
            "./fileB.tmpl"
          ]
        }
      }
    }
  });

  // Default task.
  grunt.registerTask("default", "handlebars");
};
What I've yet to work out, since I'm just getting started with pre-compiled templates, is the workflow. I want to have compiled templates when we're running a deployed version of the app, but when doing development and debugging I'd much rather have my original individual files in uncompiled form so I can just edit them and reload the page.
Follow Up:
I wanted to come back to this after having worked out how to both use pre-compiled templates when they are available and fall back to individual templates that can be edited on the fly, so that people doing development and debugging work can have quick edit-reload-test cycles without doing Grunt builds.
The answer I came up with was to check for the existence of the JST[] data structure, and then further test whether a particular pre-compiled template is present in that structure. If it is, nothing further needs to be done. If it's not there, the template is loaded (we use RequireJS to do that), compiled, and put into the same JST[] array under the same name it would have had if pre-compiled templates had been loaded.
That way when it comes time to actually use the template, the code only looks for it in one place and it's always the same.
In the near future I think we'll likely have RequireJS plugins to perform the test and load/compile code while keeping it simple for developers.
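For what it's worth, a rough sketch of that fallback (names hypothetical; the Grunt task above produces the JST namespace, and in our case the uncompiled source would be fetched with RequireJS):
// Sketch of the pre-compiled / on-the-fly fallback described above.
window.JST = window.JST || {};

function getTemplate(name, rawSource) {
  if (!JST[name]) {
    // not pre-compiled by the Grunt build, so compile the raw source now
    JST[name] = Handlebars.compile(rawSource);
  }
  return JST[name];
}

// Usage is identical whether or not templates were pre-compiled:
var html = getTemplate("fileA", rawFileASource)({ title: "Hello" });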

Django-pipeline and javascript dependencies

I'm working on a Django project that uses django-pipeline for assets, and I keep having issues where I define something in one JavaScript file that is required by another file, but the second file gets loaded before the first and thus fails to load properly. I can mess with the order things get included in PIPELINE_JS, but this is pretty awkward to deal with. In most languages you can do things like require foo to make sure that foo is defined, but it seems that with JavaScript and django-pipeline this isn't possible. I've looked into RequireJS a little but I'm not sure whether I can use it with django-pipeline. What should I do in this case? What do others who use django-pipeline, or Django in general, do for JavaScript dependency management?
As a side note, I'm actually using CoffeeScript, not straight JavaScript, but that doesn't seem to help things any. In Rails I could do #= require 'foo' to require another CoffeeScript file, but that seems to be tied to the Rails asset pipeline.
The only way to do this is to order the 'source_filenames' list accordingly; also remember that those files will be concatenated in this order when running collectstatic.
Pipeline will respect this order, and it will also avoid duplicates, so you are safe doing this:
'base.coffee',
'*.coffee',
There is no "require" syntax for now in django-pipeline.
Hope this helps.

Best practice for using JavaScript in Django

I want to push my Django project with some JavaScript/jQuery. To get it right from the beginning, I'd like to know which way of organizing the .js files is the optimal one.
Since loading one big file involves less overhead than loading many small ones, and because it looks cleaner in the code, I considered making one global .js file and including it in base.html (from which every template inherits). However, the result would be that JavaScript would try to assign all the event bindings, even if the elements the events should be bound to aren't in the current document. With all the jQuery selectors that would then have to do their work, that can't be too efficient. From earlier web development experience I know that one can do something like if (location.href == '/some/url/') { (JavaScript code) ... }. That doesn't seem practicable to me in this case, because with changing URLs I'd have to change both the URLconf and the .js file (while using reverse() and {% url %} to prevent that elsewhere). I guess there is no way to make use of the named URLs here?
Does anyone have an idea how to organize the JavaScript without having a file for every single template on the one hand and without killing performance unnecessarily on the other?
I don't know that this question is specific to Django - similar issues come up managing Javascript in all sorts of systems.
That said, I usually try to tier my JavaScript files, so that truly global scripts and libraries are included in one file, scripts specific to a section of the site are included in a set of section-specific files, and scripts specific to a single page are included in yet another set of page-specific files (or in inline code, depending on the context).
Django has good support for this approach, because you can tier your templates as well. Include the global script in your base.html template, then create a mysection-base.html template that inherits from base.html and just adds the Javascript (and CSS) files specific to that section. Then subpages within that section can inherit from mysection-base.html instead of base.html, and they'll all have access to the section-specific scripts.
I find django-compressor invaluable as it automatically compresses and minifies your JavaScript and CSS pre-deployment. It even automatically handles SASS, LESS and CoffeeScript if they float your boat.
Apps from http://djangopackages.com/grids/g/asset-managers/ may help.
Use modular JavaScript.
Choose your packager of choice (mine is Browserify) to package all your modules into one bundle that you minify and gzip. You send this file to the client and it is cached.
This means you have all your code cached, minimize HTTP requests and stay lean and efficient.
And since you have modular code you just load your code as you would normally.
Personally I would use some form of feature detection to load modules. You can choose to feature detect on almost any feature (some CSS selector, routes, URL segments).
Feature detection would look like this:
var Features = {
  "class": "name",
  "class2": "name2",
  "dynamic-scroll": "dynamic-scroll",
  "tabstrip": "tabstrip",
  ...
}

for (var key in Features) {
  require(Features[key]);
}
Whereas routing with Davis would look like:
Davis(function() {
  this.get("blog", function(req) {
    require("blog")(req);
  });
  this.get("blog/:post", function(req) {
    require("blog-post")(req);
  });
  this.get("shop", function(req) {
    require("shop")(req);
  });
  ...
});
Alternatively, you can try an event-driven architecture. This means each module binds to events:
// some-module
mediator.on("blog-loaded", function() {
  // load in some libraries
  // construct some widgets
  mediator.emit("blog-ui-build", widgets);
});
And you would need some bootstrapping in place to kick off the event loop. Feel free to look at an EDA demo
