In a Node.js application I would like to load configuration data (reports to be generated) from external files dynamically. I can load them statically using require('path/config');
But parts of the configuration need to be refreshed on a regular schedule and, to complicate matters, these configuration files contain a function that must be executable.
One such report looks as follows:
const report = {
  name: 'Report 3',
  description: 'Very simple report.',
  // Some properties
  preprocessor: function () {
  },
  // Some more properties
};
module.exports = report;
When I use require() to re-load the report, it is not actually reloaded: even if I change the file, the old values stay in place. (Reason: require() caches modules, and rightfully so.)
What is a good way (maybe an external library) to re-load external configuration files that contain executable functions?
I would use fs. If you have complete control over the configuration files (otherwise this is dangerous) you can use eval.
var fs = require('fs');
var file = fs.readFileSync(filename, 'utf8'); // read as a string, not a Buffer, or eval() will not execute it
var module = {}; // local object that shadows the real module, so the file's `module.exports = report` lands here
eval(file);
// You can now access the report via module.exports
If you don't want to block your application (usually recommended), you should use the async version and provide a callback.
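The asynchronous variant is a minimal change (a sketch, reusing filename from above):
var fs = require('fs');

fs.readFile(filename, 'utf8', function (err, file) {
  if (err) throw err;
  var module = {}; // same shadowing trick as in the synchronous version
  eval(file);
  // module.exports now holds the freshly loaded report
});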
To circumvent the caching problem, I now use the library require-without-cache. It seems to do the job.
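For reference, the same effect can be achieved without a library by removing the module's entry from require.cache before requiring it again; requireFresh below is just an illustrative helper name:
function requireFresh(modulePath) {
  // drop the cached copy so the next require() re-reads the file
  delete require.cache[require.resolve(modulePath)];
  return require(modulePath);
}

var report = requireFresh('./path/config');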
Related
I'm creating an Electron app and in the process am trying to use some existing JavaScript and other files (CSS and other resources). My code is spread out across a number of packages, each containing several of these files. Outside of Electron these files would be served from a server which provides a mapping from a flat list of files to the paths of each of these files, and I am trying to implement similar "server-side" functionality in Electron's "back end", if there is such a thing.
Since Electron is fetching these files via the file:// protocol, it fails to find most of them: everything resolves relative to the path of the current JavaScript file, and these files do not know about each other and thus cannot specify hard-coded paths.
Is there some mechanism in Electron to hook requests for files so that I can provide the path for it to look at? In my server-side code I do something where I have an object which maps file names to the paths they are in, and I feel like the solution would similarly be to intercept requests and tell Electron where to look for each file.
I found this question but the solution offered there won't work for me because my web app is a bit too dynamic and the requests are coming from deep in the code, not some user interface element I could trap.
You can accomplish this by intercepting the file protocol handler. If you have set up your file mappings into an object like so:
const files = {
  "file1.js": "/path/to/file1.js",
  "file2.js": "/path/to/file2.js",
  // etc.
}
Then in the createWindow function you will insert this code immediately after you instantiate the new BrowserWindow:
protocol.interceptFileProtocol("file", (req, cb) => {
  // use only the last segment of the requested URL as the lookup key
  var file = req.url.split("/")
  file = file[file.length - 1]
  if (files[file]) {
    console.log(`intercepting ${file} => ${files[file]}`)
    cb({ path: files[file] })
  }
})
Note: protocol refers to a const that you get from requiring electron, e.g. something like this:
const {app, BrowserWindow, protocol} = require("electron")
This code assumes that the file names are unique and are the only part of the path that matters. So, for instance, no matter what path the code thinks file1.js is in, in the above example the request will be redirected to /path/to/file1.js. If the requested file doesn't exist in the mapping, the behavior is undefined and probably nothing will load.
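If you need unmapped requests to keep working, one option (an untested sketch) is to fall back to the path from the original URL:
protocol.interceptFileProtocol("file", (req, cb) => {
  var file = req.url.split("/")
  file = file[file.length - 1]
  if (files[file]) {
    cb({ path: files[file] })
  } else {
    // strip the file:// scheme and serve the originally requested path
    cb({ path: decodeURIComponent(new URL(req.url).pathname) })
  }
})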
Pretty much what the title suggests. I want to create a metadata file alongside every JavaScript file in my project. So I need a webpack loader that doesn't change the content at all, and just extracts some metadata and writes it to a file. What's the best way to do this? fs.writeFileSync seems to work fine, but I don't know if I should use it, since some webpack guides recommend using something called memory-fs, webpack's in-memory filesystem.
So this took a while to find and seems rarely mentioned, but webpack loaders actually have a method this.emitFile specifically for this purpose. Here is some example usage:
function extractMeta(content) {
  // extract and return metadata for the file
}

module.exports = function (content) {
  console.log('extracting metadata and writing to file');
  // get the source filename, strip the file extension and append ".meta"
  const filename = this.resourcePath.split('/').pop().split('.')[0] + '.meta';
  this.emitFile(filename, extractMeta(content));
  return content; // return the source file unchanged
};
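For completeness, here is one way such a loader might be wired into a webpack config; the loader path and test pattern are assumptions:
// webpack.config.js
const path = require('path');

module.exports = {
  // ... entry, output, etc.
  module: {
    rules: [
      {
        test: /\.js$/,
        use: [path.resolve(__dirname, 'loaders/meta-loader.js')]
      }
    ]
  }
};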
Is it possible to add a file into the website's directory?
In other words, let's say I have a directory – as an example I'll call it "myWeb" – containing index.html and main.js.
In main.js, I want to add a new page to myWeb when a button is clicked – let's say secondindex.html (for now, I won't worry about overwriting the file when clicked again).
Is this possible? If not, are there other techniques, or is this too broad?
For security reasons, client-side code cannot directly write to the server's filesystem; the two are not connected in that way. If writing a file to the server is what you want, you'll need a backend server and some kind of API/script that you interact with to create files.
There are other ways that don't involve creating files, such as using a database or a cloud solution like Firebase. However, there are perfectly valid reasons to keep your site basic and write static files, speed being one of them, and not wanting to set up and maintain a database being another.
This can be done using the fs module in Node.js for example:
var fs = require('fs');

function generateHtml() {
  return '<!DOCTYPE html><html><head><title>My Page</title></head><body><p>A test page</p></body></html>';
}

var filename = '/path/to/secondindex.html';
var stream = fs.createWriteStream(filename);

stream.once('open', function (fd) {
  var html = generateHtml();
  stream.write(html);
  stream.end();
});
The above would live in a JavaScript file server-side; you would then have some kind of server checking for a request (preferably protected using a token or something) and creating the needed file.
The fs module in particular is flexible in that you can create files in many different ways. The above uses streams, which are great when you're dealing with potentially massive files.
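As a rough sketch of what that server-side handling could look like (Express and the route name are assumptions here; a real implementation should verify a token before writing anything):
var express = require('express');
var fs = require('fs');
var app = express();

app.post('/create-page', function (req, res) {
  // authenticate the request here before touching the filesystem
  fs.writeFile('/path/to/secondindex.html', generateHtml(), function (err) {
    if (err) return res.status(500).send('Could not write file');
    res.send('secondindex.html created');
  });
});

app.listen(3000);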
I have a knockout/require app and am struggling with the caching of one particular file. Sadly it is the file that busts the cache for all other javascript files. The setup may be slightly odd:
Each view simply binds a knockout view model. It loads the require library and the main script for the particular area of the system:
<script data-main="scripts/user" src="~/Scripts/lib/require.js"></script>
The scripts/user.js file required above requires the common file (containing the require setup) and the main viewmodel script:
require(['./Common'], function (common) {
  require(['userMain']);
});
The scripts/user/userMain.js file binds the viewmodel and requires anything needed at the view level (such as custom binding handlers).
define(function (require) {
  require(['ko', 'userViewModel'], function (ko, userViewModel) {
    var userVm = new userViewModel(false);
    userVm.initialise();
    // bound to the wrapper to stop jquery dialog bindings being applied twice
    ko.applyBindings(userVm, document.getElementById('pageWrapper'));
  });
});
Then we have common.js:
require.config({
  baseUrl: './',
  paths: {
    'userMain': './Scripts/user/Main',
    'userAjax': './Scripts/user/userAjax',
    'userBasicDetails': './Scripts/user/userBasicDetails',
    'userExchangesModel': './Scripts/user/userExchangesModel',
    'userModel': './Scripts/user/userModel',
    'userReportAccessModel': './Scripts/user/userReportAccessModel',
    'usersModel': './Scripts/user/usersModel',
    'userViewModel': './scripts/user/userViewModel'
    // ... etc
  },
  urlArgs: "bust=" + (new Date()).getTime()
});
Each script within the folder then requires anything it needs within its own module definition.
The script structure is then setup as so:
scripts\common.js
scripts\user.js
scripts\user\main.js
scripts\user\userAjax
scripts\user\etc...
This setup allows me to reference scripts from other folders without specifying where the file is anywhere other than in common.js. The downside is that every js file needs an entry in common.js, but I can live with that.
As an example, there are 4 or 5 folders at the same level as 'user' ('scripts\report', 'scripts\client', etc.), and if I want to create a user model from any of the scripts within those folders I can simply write define(['userModel'], function (userModel) {...}) – as in the snippet below – and common.js will tell require where to find that file. This system works well, allowing me to move files around at will while only changing their path in one place.
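For example, a script under scripts\report\ can pull in the user model using only its alias:
define(['userModel'], function (userModel) {
  var model = new userModel();
  // common.js resolves 'userModel' to ./Scripts/user/userModel
});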
The problem comes when I add new scripts or change paths in common.js. Whilst all other files are busted on every request thanks to the setup in common.js, the common file itself gets cached, so I have to bust users' Chrome caches before the new common.js is picked up. This is obviously a big issue at delivery time: pages fail because they cannot find a new script – it doesn't live in the expected folder and the stale common.js has been served from cache.
Can anyone suggest a way of automatically busting common.js or moving the path config into a separate required file so that the urlArgs bust will do it for me?
Many thanks.
Before the script element that loads RequireJS, add the following code:
<script>
  var require = {
    urlArgs: "bust=" + (new Date()).getTime()
  };
</script>
RequireJS will pick this up as its initial configuration and any module it loads, through data-main or any other way, will be required with a bust parameter.
It is probably best to remove urlArgs from your subsequent call to require.config. It would override the earlier option, so the bust value would change mid-page. Usually modules are loaded once and only once by RequireJS, so the same module should not be loaded by the same page with two different bust values, but there are scenarios I'm unsure about (for instance, using require.undef to undefine a module). Removing the later urlArgs avoids bad surprises.
I need to get the uri of the .js file presently executing. I need this to create a web worker, passing in the full uri of the web worker js file.
This answer doesn't work because my file is loaded by requireJS and therefore the script executing is require.js.
What I need is a way to give the full uri to a specific .js file. The uri must use the same root that the .js file executing is in so the browser does not see it as a cross domain request. If there is a better way to accomplish this than working from the uri of the executing .js file, that's fine.
RequireJS has some magic arguments that you can request – among them is module:
define(["module", "other", "dependencies"], function (module, other, deps) {
  var whereAmI = module.uri;
});
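From there, creating the worker could look like this sketch, where worker.js is a hypothetical file sitting next to the current module:
define(["module"], function (module) {
  // derive the directory of the current module from its URI
  var dir = module.uri.slice(0, module.uri.lastIndexOf("/") + 1);
  // worker.js is a hypothetical sibling script on the same origin
  var worker = new Worker(dir + "worker.js");
  return worker;
});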