I'm in the process of converting a Grunt file to a Gulp file. My Grunt file contains the following line
var config = grunt.file.readJSON('json/config.json');
This line reads a set of variables which Grunt then injects into the HTML it generates, specifically relating to languages.
I tried converting the file automatically with grunt2gulp.js but it always fails with config being undefined. How would I write grunt.file.readJSON using gulp?
The easiest way to load a JSON file in node/io.js is to use require directly:
var config = require('./json/config.json');
This can replace any readJSON calls you have and works in general: Node/io.js can synchronously require JSON files out of the box.
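For instance, in a gulpfile the same line drops in directly (a minimal sketch; the 'html' task name is illustrative):

// gulpfile.js -- sketch only
var gulp = require('gulp');
var config = require('./json/config.json'); // parsed synchronously, like grunt.file.readJSON

gulp.task('html', function (done) {
    // use config here, e.g. config.languages, when generating your HTML
    console.log(config);
    done();
});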
Since this is a .json file, Benjamin's answer works just fine (just require() it in).
If you have any configs that are valid JSON but not stored in files that end in a .json extension, you can use the jsonfile module to load them in, or use the slightly more verbose
JSON.parse(require('fs').readFileSync("...your file path here...", "utf8"))
(if you have fs already loaded, this tends to be the path of least resistance)
The one big difference between require (which uses essentially this code to load JSON files) and this code is that require uses Node's caching mechanism, so multiple requires will only ever import the file once and then return a reference to the same parsed data, effectively making everything share the same data object. Sometimes that's great, sometimes it's absolutely disastrous, so keep that in mind.
(If you absolutely need unique data but you like the convenience of require, you can always do a quick var data = require("..."); var copied = JSON.parse(JSON.stringify(data));)
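A quick sketch of the caching difference (the file path is assumed for illustration):

// require caches: both variables point at the same parsed object
var a = require('./json/config.json');
var b = require('./json/config.json');
a.language = 'de';     // b.language is now 'de' too

// readFileSync + JSON.parse: a fresh object on every call
var fs = require('fs');
var fresh = JSON.parse(fs.readFileSync('./json/config.json', 'utf8'));
fresh.language = 'fr'; // leaves a and b untouched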
Related
I'm working on writing a loader for filename.xyz.json files.
Now, since version 2, Webpack supports loading JSON files out of the box.
So I've managed to get my loader to work when using a completely custom file extension like .xyz.jayson.
But because I'm using .json, the other, already existing loader gets triggered after my loader has done its magic, which causes an error because at that point it's not JSON anymore. How can I prevent that?
If I understand the Webpack docs correctly, the !! prefix with inline usage would do just that. But I would like to disable post/pre loaders in the config. Is this possible?
Also, I was thinking of actually using that given JSON loader instead of dodging it, because why parse the JSON myself when there is already a loader for it? But I'm not quite sure if that is possible, since the source returned by the JSON loader is already wrapped in module.exports. Would I need to strip the module.exports and then run JSON.parse on it to work with it as an actual JS object instead of a string?
So as a quick summary:
I'd like either to not trigger the JSON loader at all and parse the JSON myself so I can manipulate it, or to use the built-in JSON loader first and then manipulate the resulting data myself.
I found the solution:
Setting the type of my rule to javascript/auto gave me the expected result.
More information here
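For reference, the relevant rule might look like this (a sketch based on the webpack docs; 'my-xyz-json-loader' is a placeholder, not a real package):

// webpack.config.js -- sketch only
module.exports = {
    module: {
        rules: [
            {
                test: /\.xyz\.json$/,
                // opt out of webpack's built-in JSON handling so only
                // the custom loader below runs for these files
                type: 'javascript/auto',
                use: ['my-xyz-json-loader']
            }
        ]
    }
};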
I am coding a plugin that, for specific modules, will try to execute the module generated at build time in order to save the result to a json file.
For that, I am tapping into compilation.hooks.succeedModule, which receives a NormalModule object already built. Then I am trying to eval the source replacing webpack variables like __webpack_public_path__.
While it kind of works, this approach feels terribly wrong. Like I am missing something.
Is there a nice way to execute modules at build time from a NormalModule object with basic access to vars like __webpack_public_path__? Maybe Webpack offers a better way to do this kind of thing?
Ok, yeah, sounds like you can solve this another way; I've done similar stuff where I needed to change what a module outputs, write stuff to disk, trigger side effects, etc. It sounds like you want loaders rather than a plugin. The run-loader (https://www.npmjs.com/package/webpack-run-loader) executes the module it loads and exports or returns the result.
You can write a custom loader that you chain to run after responsive-loader, and run-loader, and which receives the JSON from run-loader and writes it to disk where you want it (as a side effect), and then returns an empty string so that nothing is added to the build. The end result would be that requiring this module in your app gets your image files created (by responsive-loader), and the JSON written out to disk where you need it (by your custom loader). Alternately you could skip run-loader and in your custom loader use regex to just grab the JSON from the output of responsive-loader. Using regex on code generated by a project dependency seems fragile, but as long as you have your dependency versions locked down it can work just fine in practice, and it's a bit simpler conceptually than adding run-loader to the pipeline.
If you're writing webpack plugins I imagine you're comfortable writing loaders as well, but if not they're pretty straightforward -- just a function that accepts source code from the loader that came before it and returns code, and does whatever you want in between. The docs aren't bad for the API, but looking at the source of a few published loaders is helpful, too. It might look roughly (just spitballing from memory) like:
// img-info-logging-loader.js
// regex version, expects the source arg to be the output of responsive-loader
const fs = require('fs');

// webpack loaders need to be exported via module.exports
module.exports = function imgInfoLoggingLoader(source) {
    const jsonFinderRegex = /someregexto(match)onsource/;
    const matchArr = jsonFinderRegex.exec(source);
    // exec returns null when nothing matches, so guard both cases
    if (!matchArr || !matchArr[1]) {
        throw new ReferenceError('json output not found in loader source.');
    }
    const imgConfigJsonString = matchArr[1];
    // you would write a fn to generate a filename based on the
    // source, or based on the module's filename, which is available
    // via the webpack loader api
    const fileNameToWrite = getFileNameToWrite();
    try {
        // async might be preferable depending on your webpack
        // performance needs
        fs.writeFileSync(fileNameToWrite, imgConfigJsonString);
    } catch (err) {
        throw new Error(`error writing ${fileNameToWrite}`);
    }
    // what the loader inserts into your JS asset: empty string
    return '';
};
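And the rule chaining the three loaders might look like this (a sketch; remember webpack applies loaders right to left, so responsive-loader runs first and the custom loader last):

// webpack.config.js -- sketch only
module.exports = {
    module: {
        rules: [
            {
                test: /\.(png|jpe?g)$/,
                use: [
                    'img-info-logging-loader', // writes the JSON to disk, returns ''
                    'webpack-run-loader',      // executes responsive-loader's output
                    'responsive-loader'
                ]
            }
        ]
    }
};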
EDIT:
Since per your comment you are looking to output a single JSON object with all of the image info in it, you would want a slightly different approach that does use a plugin (this is the most elegant way I know to do it, there may be others). As far as I know a plugin is the only way to 'do something' when webpack is done loading modules.
You still want a custom loader that is extracting the JSON from the output of the responsive-loader, as described above. It won't write each to disk, though. Instead your loader will call a method on the following module:
You also write a json-collector.js that is just a little node module that you will use to hold on to the JSON object you're building. This bit is awkward because it's separate from the loader but the loader needs it. This collector module is simple, though, and if you wanted to be cleaner you could turn it into a more generic module and treat it as a proper, separate node dependency. All it is is an object with a method for adding JSON data, which appends it to an internal JSON object, and one for reading out the collected data, which returns the JSON.
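A minimal sketch of what that collector could look like (the names here are my assumptions, not a published API):

// json-collector.js -- minimal sketch
const collected = {};

module.exports = {
    add(moduleId, data) {
        // append one module's extracted JSON under its own key
        collected[moduleId] = data;
    },
    read() {
        // return everything gathered so far
        return collected;
    }
};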
And then you have a plugin that hooks into the end of the build (I think there's one for 'build sealed' that I've used). When that hook is reached, you know webpack has no more modules to load, so the plugin calls the 'read' method on the json-collector, gets the JSON object from it, and writes that to disk.
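Sketched out, the plugin half could look something like this (I'm tapping compiler.hooks.done here, which assumes the webpack 4+ hook API; the exact hook and file name are up to you):

// img-info-plugin.js -- sketch only
const fs = require('fs');
const collector = require('./json-collector');

class ImgInfoPlugin {
    apply(compiler) {
        // 'done' fires after the build is sealed, so every module
        // (and therefore every loader) has run by this point
        compiler.hooks.done.tap('ImgInfoPlugin', () => {
            fs.writeFileSync('img-info.json', JSON.stringify(collector.read(), null, 2));
        });
    }
}

module.exports = ImgInfoPlugin;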
This solution doesn't fit the standalone plugin/standalone loader convention in webpack but if that doesn't bother you it's actually pretty straightforward, each of the three pieces has a simple job to do. I've used this pattern multiple times and it's worked for me.
We are using the Keystone framework in one of our projects, and I am trying to use a .env file variable in one of my .js files to connect to an HTTP site. I have used dotenv and called process.env.xxyz, where xxyz is the variable we are using. Please let me know if there is any other method to read a variable from a .env file.
Reading process.env is the standard way to retrieve environment variables. See the docs.
The dotenv package you mentioned takes what is in your .env file and puts it in process.env. However, no validation is performed, so it is easy to make mistakes.
Instead, try out my envy module which prevents all of the common mistakes you might be making. It checks for missing variables, amongst other things.
If .env files are still giving you trouble:
Make sure you are using the correct variable names.
Make sure there are no typos.
Make sure you are using the proper syntax.
Consider using command line arguments instead. I recommend meow for this. Also see an example of using envy and meow together.
Regarding syntax: depending on which loader you are using (e.g. dotenv vs envy), the syntax in the .env file can have a big impact on how it is parsed. For example, in Bash and other shells (whose syntax .env files are based on), each of the following examples behaves entirely differently...
MY_VAR=foo $BAR
MY_VAR='foo $BAR'
MY_VAR="foo $BAR"
Also, environment variable names are case sensitive and by convention are all uppercase. This can lead to mistakes in languages where that is uncommon. In the Node.js program reading process.env, sometimes you might forget that the naming conventions for environment variables are different than the rest of the program.
const myVar = process.env.myVar; // wrong
const myVar = process.env.MY_VAR; // correct
Casing is not an issue if you use envy, as it fixes this by normalizing variable names to camelcase before returning them.
const { myVar } = envy(); // correct, no matter how it is in `.env`
Regardless of which loader you use, you will of course need to call its load function before the .env file will be read. This is pretty much impossible to forget with envy() because you use only its direct return value. But if you are using dotenv, you can easily access process.env at the wrong time because it is already available and populated (but not with all the desired properties) before calling dotenv.config().
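For example, the ordering pitfall with dotenv looks like this (MY_VAR is a placeholder name):

// process.env exists before dotenv runs, so this is silently undefined
console.log(process.env.MY_VAR); // undefined -- .env not loaded yet

require('dotenv').config();
console.log(process.env.MY_VAR); // now populated from .env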
Another trick that will help with debugging and save time and effort is to make a dedicated module for configuration. Thanks to the require cache, we avoid doing the work multiple times and also avoid relying upon the loader being idempotent.
Put this in a file called env.js.
const envy = require('envy');
module.exports = envy();
Then import it elsewhere.
const env = require('./env');
Now you have something very simple to debug and it should work the same no matter where you import it.
Just add xxyz=HTTPSiteAddress to your .env file. Then you can call the variable anywhere by using process.env.xxyz.
For example:
var request = require("request");
request(process.env.xxyz, function(error, response, body) {
console.log(body);
});
This will work as long as your keystone.js file contains at the top:
require('dotenv').config();
I am making a browser game (client side only). I am trying to make it smaller (in terms of file sizes), which is the first step toward a mobile version. I have minified CSS using LESS, JS using uglify, and also Angular templates using grunt-angular-templates. So at this moment I am loading a very small number of files:
index.html
app.js
app.css
images.png (one file with all images)
But the remaining problem is the JSON data files. There are (or will be) many levels, and each level has its own JSON data file. There are also some rule definitions, etc. The problem is that these JSON files are loaded dynamically when needed.
I am now trying to find a way to get these files (at build time, probably with some grunt task) into one file, or even better, directly into app.js. I have no problem writing a PHP script + JS class that would do this, but I first tried to find an existing solution.
Does anybody know about something like that, or is there any other solution that I am not thinking about? Thanks for any help.
====
EDIT:
1) The point of this is getting rid of X requests and making one request (or zero) for JSON files.
2) The compiled thing does not have to be JSON at all. Part of my idea:
JsonManager.add('path/to/json/file.json', '{"json":"content of file"}');
Making all these lines manually is a bad idea; I was asking whether there is anything that could do this job for me.
3) Ideally I am looking for a solution similar to what the grunt-angular-templates task does with HTML templates (minifies them and adds them to app.js using Angular's $templateCache)
Say you have two JSONs: {"a":1} and {"b":2}.
You cannot simply concatenate them into one chunk, as together they will not be valid JSON; e.g. {"a":1}{"b":2} is not valid JSON. You can do this with JS and CSS, but not JSON.
The only option is to include them in a larger structure:
[
    {"a":1},
    {"b":2}
]
If your code structure allows this, then you can use any existing JS compressor/uglifier to compress the result.
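An object keyed by filename works too, and keeps each piece addressable:

{
    "fileA.json": {"a": 1},
    "fileB.json": {"b": 2}
}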
For anybody who has the same problem as me:
I gave up on finding an existing solution and made my own:
The solution
I have written a PHP script that iterates over files in the data directory and lists all JSON files. It also minifies their contents and creates one big associative array, with relative file names as keys and the JSON content of the files as values. It then creates a .js file in which this big array is encoded as JSON again and assigned to a JavaScript variable (a module constant in my case - Angular).
I created a wrapper class, which serves this data as files (see the sketch after this list), e.g.:
var data = dataStorage.getData('levels/level01.json'); // returns JSON content of file located at path/to/data/files/levels/level01.json but without any AJAX call or something
I used grunt-shell to automate running this PHP script.
I added the resulting .js file to the list of files to be minified by uglify (and concatenated).
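A sketch of the generated file plus the wrapper (module and constant names here are my assumptions; the real script's output will differ):

// data.generated.js -- sketch of what the PHP script emits
angular.module('app')
    .constant('JSON_DATA', {
        'levels/level01.json': { enemies: 3, timeLimit: 60 },
        'levels/level02.json': { enemies: 5, timeLimit: 45 }
    });

// dataStorage -- serves the inlined content as if it were files
angular.module('app')
    .factory('dataStorage', ['JSON_DATA', function (JSON_DATA) {
        return {
            getData: function (path) {
                // no AJAX: the content was embedded at build time
                return JSON_DATA[path];
            }
        };
    }]);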
The result:
I can create any number of JSON files in any structure and link to them from JS code using that wrapper class, but no AJAX calls are fired.
I decreased the number of files loaded at startup (while increasing app.js size a bit, which is better than a second request).
Thanks for your ideas and help. Hope this also helps someone
This is more a question of curiosity. In JavaScript and HTML, does the dot-slash ./ (current working directory) prefix ever elicit different behavior than omitting it?
I'm assuming it does, otherwise it would never be used. But I've never run into such a case.
For instance, in javascript:
var config = require('./config.json');
vs
var config = require('config.json');
are both relative and refer to the same file. Is there any case in which they don't?
Yes, it may, depending on what environment you're running in and what is handling the URL/file path.
In your example, require is being used to look up a file path. When a path without ./ is given to require in node.js, the script treats it like a module name and searches down a chain of directories (node_modules and the built-in module locations) until it comes to a conclusive determination that the file doesn't exist in any of those locations. See more here: http://nodejs.org/api/modules.html#modules_file_modules
In this case, making the location explicit with ./ means node.js require is given explicit instructions on where to find the file and will not look anywhere else; it will return an error right away if the file is not in the current directory.
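A quick illustration of the difference in Node.js:

// resolved relative to the current file: loads ./config.json or throws
var a = require('./config.json');

// treated as a package name: Node searches node_modules/ up the
// directory tree and throws MODULE_NOT_FOUND if nothing matches
var b = require('config.json');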
However, with HTML, the browser will typically only look in the same URL path as the HTML file making the request, so <script src="file.js"></script> will generally yield the same results as <script src="./file.js"></script>. I can't think of an example where it wouldn't.
I can't say the same for client side javascript libraries, as it also depends on how these libraries will search for files. Using require.js on the client side, you can set up a fallback location to search for files if the current working directory doesn't have it: http://requirejs.org/docs/api.html#config-paths
So to answer, it depends on what functions/methods are interpreting your file path!