I recently started exploring Browserify for bundling Node modules and using them in the browser. It's neat and works great, but I want to improve the workflow. In my use case, I have a script.js file that requires Node modules such as Cylon.
For brevity, script.js looks something like:
"use strict";
var Cylon = require('cylon');
Cylon.robot({
name: "BrowserBot",
connections: {
arduino: { adaptor: 'firmata', port: '/dev/tty.usbmodem1411' }
},
devices: {
led: { driver: 'led', pin: 8 }
},
work: function(my) {
Cylon.Logger.info("Hi, my name is " + my.name)
every((2).seconds(), function() {
Cylon.Logger.info("Toggling the LED");
my.led.toggle();
});
}
});
Cylon.start();
I was looking at the bundle.js file that Browserify generates, and I could find the exact code block above, so I think a Node process is started with this code and some bindings. I want the script.js file to be dynamic, so the user can use a different pin for the LED or make any other small change. Since I am not changing any dependencies of this file, I should be able to just replace that block in bundle.js with the new contents of script.js, as the other modules are already loaded and bundled in bundle.js, right?
I want to know if this is possible in a browser setting. Chrome Apps allow file storage, so is it possible for me to regenerate bundle.js dynamically after its initial creation, where I just plug in the contents of script.js, and then load bundle.js in an HTML file? How do I go about this?
While the question is not specific to Cylon, I am still adding it as a tag for my specific use case.
All the .js files have to be specified in the app's manifest.json. I don't think you can edit items in the app's folder (even when accessing it through file storage).
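That said, you may be able to sidestep regenerating bundle.js entirely by bundling only the dependencies and keeping the user-editable script out of the bundle. Browserify's -r/--require flag builds a bundle that exposes a require() function to the page, so a plain script tag can pull in the bundled modules. A minimal sketch (the file names are my own):

// Build a dependency-only bundle that exposes require() to the page:
//   browserify -r cylon > deps.js
//
// index.html then loads the dependencies first, followed by the
// user-editable script:
//   <script src="deps.js"></script>
//   <script src="script.js"></script>
//
// script.js can keep calling require('cylon') unchanged, so changing
// the LED pin only means replacing script.js, not rebuilding the bundle.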
I'm building a Chrome extension with Vue CLI 3. I've got the basics working well, but I was hoping to also run my content and background JavaScript through the build process instead of just putting them into my public folder and copying them into dist. This is mostly so I can use import/export to clean up my file structure.
I was able to add them as new "pages" in the vue config, and even without the HTML template file they get built properly and moved into dist.
The problem is, they then get the cache-busting string appended to their filename, so I'm unable to reference them in the extension manifest. For example, background.js becomes background.d8f9c902.js.
Is it possible to tell the vue config that certain "pages" should not get the cache busting? The pages documentation does not seem to expose that as a parameter.
Thanks in advance!
Filename hashing can be disabled for all files:
https://cli.vuejs.org/config/#filenamehashing
It works in my case using the vue.config.js below:
// vue.config.js
module.exports = {
  lintOnSave: true,
  filenameHashing: false
}
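If you also want the extension scripts built as separate entries, this combines with the pages option mentioned in the question. A sketch, where the entry paths are assumptions about your project layout:

// vue.config.js
module.exports = {
  filenameHashing: false, // keeps names like background.js stable
  pages: {
    // string shorthand for { entry: ... }; adjust paths to your project
    popup: 'src/popup/main.js',
    background: { entry: 'src/background.js' }
  }
}

With hashing disabled, the built background.js can be referenced directly from manifest.json.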
I use a node-watch script to watch for changes in files and rebuild the project files with concat.
The files are built correctly, but they are not uploaded to the server until I double-click, both in the app and in PhpStorm. The problem is (probably) that the watch function is asynchronous. I want to see the changes in the app immediately.
How to solve this issue?
watch('myFolderToWatch/js', {
  recursive: true,
  delay: 100
}, function(evt, name) {
  console.log('%s changed.', name);
  concat(filesToConcat, '../path_to_concat/').then(function (value) {
    console.log('test');
  });
});
As @LazyOne has mentioned, the changes made by your script are external to PhpStorm; it doesn't see the files generated by concat until you synchronize the IDE VFS, either manually via File | Synchronize or by moving focus away from the IDE and back, so deployment doesn't work.
As a workaround, I'd suggest using File Watchers instead:
create a concat.js script
add a new scope myFolderToWatch in Settings | Appearance & Behavior | Scopes and add the files from your myFolderToWatch/js folder to it
in Settings | Tools | File Watchers, add a new file watcher that runs your concat.js script whenever a file in that scope changes
If everything is set up correctly, PhpStorm will see the changes made to the generated files and auto-upload them once you change the source files.
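For reference, concat.js could be as simple as this (a sketch using the concat npm package; the file paths are placeholders):

// concat.js – run by the PhpStorm file watcher on every change
var concat = require('concat');

// Hypothetical source list; adjust to your project.
var filesToConcat = [
  'myFolderToWatch/js/a.js',
  'myFolderToWatch/js/b.js'
];

concat(filesToConcat, '../path_to_concat/bundle.js')
  .then(function () {
    console.log('Rebuilt bundle.js');
  });

Because PhpStorm itself launches the script, it can refresh its VFS for the output files and trigger deployment when they change.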
I'm unable to use a Node.js module on my website with Browserify. The example only shows how to run a JavaScript file that is separate from the .html file, not how to use the Node.js module within the .html file. It seems like a trivial problem, but I'm unable to make it work.
Here's what I've done:
Initialized node.js & installed a package, npm i webtorrent-health as an example
Created require_stuff.js which consists of a single line: var WebtorrentHealth = require('webtorrent-health')
Run browserify: browserify require_stuff.js > bundle.js
Include package in my html document, e.g. <script src='bundle.js'></script>
Use the package somewhere in my document, e.g. like this: <script>webtorrentHealth(magnet).then(foobazbar())</script>
Despite bundle.js executing and seemingly defining webtorrentHealth, the script within the .html document fails with WebtorrentHealth is not defined. What am I doing wrong? How do I make it work? Thank you!
You're very close to what you want to achieve. The variable in bundle.js is inaccessible from outside (in your case, from scripts in the browser page) because Browserify wraps everything in a closure, but you can expose your module by writing this at the end of require_stuff.js:
window.WebtorrentHealth = WebtorrentHealth;
Now you can use WebtorrentHealth in your document.
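Putting it together, require_stuff.js becomes two lines (a sketch; magnet and foobazbar are the placeholders from the question):

// require_stuff.js
var WebtorrentHealth = require('webtorrent-health');
window.WebtorrentHealth = WebtorrentHealth; // expose it on the global object

After re-running browserify require_stuff.js > bundle.js, the page can call it:

<script src='bundle.js'></script>
<script>WebtorrentHealth(magnet).then(foobazbar)</script>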
I'm currently using the Aurelia framework and want to load a big JSON file into my application.
The problem is, I can't figure out how to get the JSON file to appear in the "dist" folder shown in my Chrome browser so that the script is able to find it.
In short, I want to do this:
var request = new XMLHttpRequest();
var jsonString = request.open("GET", "file://../core/data/5e-SRD-Monsters.json", false);
...and yes, the path is correct but the folder "data" and its content won't appear in Chrome's debug sources.
Do I have to include the json via gulp somehow?
Your main question is "how to get the json file to appear in the 'dist' folder". As also mentioned in the comments, that is a matter of simply including it in your build configuration.
For a skeleton jspm project do the following:
Open the ~/build/export.js file
Include the file, or folder, containing the .json file in the first 'list' section
This looks something like:
module.exports = {
  'list': [
    'index.html',
    'config.js',
    'favicon.ico',
    'LICENSE',
    'jspm_packages/npm/bluebird@3.4.1/js/browser/bluebird.min.js',
    'jspm_packages/system.js',
    'jspm_packages/system-polyfills.js',
    'jspm_packages/system-csp-production.js',
    'styles/styles.css',
    'core/data/5e-SRD-Monsters.json'
  ],
  // ... (rest of the export config)
};
Here is an example of where to put it.
Important: Considering you're talking about a "dist" folder, I am assuming you use the skeleton with jspm. The process is totally different when you're building an Aurelia app with the CLI, the skeleton webpack, or the starter kit.
Now you've got the .json file in the dist folder. But using XMLHttpRequest to load a file:// URL isn't exactly the recommended approach. As also mentioned in the comments, you should ideally load it with an http request, not a file request.
Let's take that advice. All you need to do is add the aurelia-fetch-client to your libraries, and then you can simply do something like this:
import { HttpClient } from 'aurelia-fetch-client';

export class App {
  get_stuff() {
    let http = new HttpClient();
    // assuming the file is in /dist/core/data/,
    // simply read it through a promise + a relative path:
    http.fetch('./core/data/5e-SRD-Monsters.json')
      .then(response => response.json()) // parse the response body as JSON
      .then(data => console.log(data));
  }
}
Now this uses http rather than file://, which will eliminate any permission issues that might occur, such as lack of access to the filesystem directly.
So, I have an app that is using requireJS. Quite happily. For the most part.
This app makes use of Socket.IO. Socket.IO is being provided by nodejs, and does not run on the same port as the main webserver.
To deal with this, in our main js file, we do something like this:
var hostname = window.location.hostname;
var socketIoPath = "http://" + hostname + ":3000/socket.io/socket.io";

requirejs.config({
  baseUrl: "/",
  paths: {
    app: "scripts/appapp",
    "socket.io": socketIoPath
  }
});
More complicated than this, but you get the gist.
Now, in interactive mode, this works swimmingly.
The ugliness starts when we try to use r.js to compile this (technically we're using grunt to run r.js, but that's beside the point).
In the config for r.js, we set an empty path for socket.io (to avoid it failing to pull in), and we set our main file as the mainConfigFile.
The compiler yells about this, saying:
Running "requirejs:dist" (requirejs) task
>> Error: Error: The config in mainConfigFile /…/client.js cannot be used because it cannot be evaluated correctly while running in the optimizer. Try only using a config that is also valid JSON, or do not use mainConfigFile and instead copy the config values needed into a build file or command line arguments given to the optimizer.
>> at Function.build.createConfig (/…/r.js:23636:23)
Now, as near as I can figure, this is due to the fact that I'm using a variable to set the path for "socket.io". If I take it out, require runs great, but I can't run the unbuilt code against a server. If I leave it in, my debug server is happy, but the build breaks.
Is there a way I can lazily assign the path of "socket.io" at runtime so that it doesn't have to go into the requirejs.config() method at that point?
Edit: Did some extensive research on this. Here are the results.
Loading from CDN with RequireJS is possible with a build. However, if you're using the smaller Almond loader, it's not possible.
This leaves you with three options:
Use almond along with a local copy of the file in your build.
Use the full require.js loader and try to use a CDN.
Use a <script> tag just for that resource.
I say try for #2 because there are some caveats. You'll need to include require.js in your HTML with the data-main attribute pointing at your built file. But if you do this, require and define will be global functions, allowing users to require any of your internal modules and mess around with them. If you're okay with this, you'll need to use the "empty:" scheme in your build config (but not in your main config).
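For reference, the "empty:" scheme looks something like this in a build config (a sketch; the file names are assumptions based on the error output above):

// build.js – r.js build config
({
  baseUrl: '.',
  mainConfigFile: 'client.js',
  name: 'scripts/appapp',
  out: 'dist/app.built.js',
  paths: {
    // Don't inline this module; it will be loaded from the
    // CDN at runtime by the full require.js loader.
    'socket.io': 'empty:'
  }
})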
But the fact remains that you now have another HTTP request. If you only want one built file, which includes the require.js loader, you'll need to optimize for only one file.
Now, if you want to avoid users being able to require your modules, you'll have to do something like wrap:true in your build. But as far as I can tell, once your module comes down from CDN, if it's AMD, it's going to look for a global define function to register itself with, and that won't exist because it's now wrapped in a closure.
The lesson I took away from all this: inline your resources to your build. It makes sense. You reduce HTTP requests, minify it all and get gzip compression. You don't expose your modules to the world and everything is a lot simpler. If you cache your resources properly you won't even need to worry about it.
But since new versions of socket.io don't like AMD, here's how I did it. Make sure to include the socket.io <script> tag before requirejs. Then create a requirejs module named socket.io with the following contents:
define([], function () {
  var io = window.io;
  window.io = null;
  return io;
});
Set the path like so: 'socket.io': 'core/socket.io' or wherever you want.
And require it as normal! The build works fine this way.
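Usage then looks like any other module (a sketch; the port matches the question's setup):

require(['socket.io'], function (io) {
  // io is the global that the socket.io <script> tag created,
  // handed over (and cleaned up) by the shim above.
  var socket = io('http://' + window.location.hostname + ':3000');
  socket.on('connect', function () {
    console.log('socket connected');
  });
});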
Original answer
Is it possible that you could make use of the path config fallbacks specified in the RequireJS API? Maybe you could save the file locally as a fallback so your build will work.
The socket.io GitHub repository specifies that you can serve the client with the files in the socket.io-client package's dist/ directory.
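So, to set up the <script> tag described above, you could copy the standalone client somewhere your web server serves it (a sketch; the paths are assumptions):

// After npm-installing socket.io-client:
//   cp node_modules/socket.io-client/dist/socket.io.js public/socket.io.js
// Then load it before require.js, as described above:
//   <script src="/socket.io.js"></script>
//   <script data-main="client" src="require.js"></script>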