Is JSHint a Node.js syntax validator? - javascript

I have come across JSHint and was hoping it would validate Node.js syntax, but it doesn't. For example, if I do:
var server = http.createServerfunctionThatDoesNotExists(function(request, response) {....
The test passes even though there isn't a function called createServerfunctionThatDoesNotExists.
What am I missing with JSHint and the node.js option?

I think it doesn't dig into CommonJS modules, as they can contain virtually anything.
So even if you are require'ing http, it could be your own module with any interface.
The 'Assume Node' option just means JSHint tolerates Node globals like module or exports.

JSHint is basically JSLint and is aimed more at syntax/style/JS no-nos than at being a deep-scan product that executes your JS. Fully vetting JS code (or any dynamic language, really, depending on your definition of dynamic) pretty much requires running it, since things like methods can be defined more or less anywhere, including at runtime.
JSHint does go a bit further and allows the definition of high-level constructs used by some JS libraries.
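To make that concrete, here is a minimal sketch (using the nonexistent method name from the question) showing why JSHint stays quiet: with node:true and undef:true it only checks that identifiers are defined and the syntax is valid; it never executes the code or inspects what require() actually returns.

/*jshint node:true, undef:true */
var http = require('http');

// JSHint sees a defined variable `http` and a syntactically valid call,
// so it reports nothing; the missing method only fails at runtime with
// "TypeError: http.createServerfunctionThatDoesNotExists is not a function".
var server = http.createServerfunctionThatDoesNotExists(function (request, response) {
    response.end('hello');
});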

Related

NodeJS - Dynamically import built in modules

I'd like to get a built-in module (for example Math, path or fs), whether from the global object or from require. I thought about doing something like this:
function getModuleByName(name) {
    return global[name] || require(name);
}
Is there a way to check that it is indeed a module and not something else? Would this create a security problem?
Is there a way to check that it is indeed a module and not something else?
There are other methods, but here's an example:
function getModuleByName(name)
{
    let module = null;
    try {
        module = require(name);
    } catch (e) {
        // Recommend logging e somewhere
    }
    return module;
}
This will gracefully fail as null where the module does not exist, or return the module otherwise.
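If the goal is specifically to allow only Node's built-in modules (and not arbitrary files or third-party packages), one option is to check the name against the list Node exposes via require('module').builtinModules before calling require. A minimal sketch, assuming a reasonably recent Node version (the function name is just illustrative):

const { builtinModules } = require('module');

function getBuiltinModuleByName(name) {
    // Reject anything that is not one of Node's own built-ins (e.g. 'fs', 'path').
    if (!builtinModules.includes(name)) {
        return null;
    }
    return require(name);
}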
Would this create a security problem?
Quite possibly; it depends on how it's used. I'd argue it is more of a general design issue, however, and would say as a blanket rule: avoid doing it (without any context, you may have a very good reason).
You, like anyone, obviously have a finite number of modules you could be loading. These modules have all been selected by yourself for your application for specific reasons, or are bundled into your Node version natively and are expected parts of your environment.
What you are doing by introducing this functionality is adding unexpected elements to your environment. If you are using getModuleByName to access a third-party library, you should know outright that the library is available, and as such there's no reason why you can't just require it directly.
--
If you do think your use case warrants this, please let me know what it is as I may never have encountered it before. I have used dynamic imports like the following:
https://javascript.info/modules-dynamic-imports
But that hasn't been for global packages/libraries, but for dynamic references to modules built internally to the application (e.g. routing to different views, invocation of internal scripts).
These I have protected by whitelisting the target directories so file paths can't be altered, making sure each script follows a strict interface per use case, and failing gracefully where a module doesn't exist (error output "this script does not exist" for the script usage and a 404 view for the routing example).
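A minimal sketch of that kind of directory guard (the directory name, function name and error handling here are illustrative, not from the original setup):

const path = require('path');

// Only modules that resolve inside this directory may be loaded dynamically.
const SCRIPTS_DIR = path.resolve(__dirname, 'scripts');

function loadInternalScript(name) {
    const target = path.resolve(SCRIPTS_DIR, name);
    // Reject anything that escapes the whitelisted directory (e.g. '../../etc/passwd').
    if (!target.startsWith(SCRIPTS_DIR + path.sep)) {
        return null;
    }
    try {
        return require(target);
    } catch (e) {
        return null; // "this script does not exist"
    }
}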

Can't set values on `process.env` in client-side Javascript

I have a system (it happens to be Gatsby, but I don't believe that's relevant to this question) which is using webpack's DefinePlugin to attach some environment variables to the global variable process.env.
I can read this just fine.
Unfortunately, due to weirdnesses in the app startup process, I have chosen to briefly overwrite some of those environment variables after the site loads. (Not interested in discussing whether that's the best option in the context of this question. I know there are other options; I want to know whether this is possible.)
But it doesn't work :(
If I try to do it explicitly:
process.env.myVar = 'foo';
Then I get ReferenceError: invalid assignment left-hand side.
If I do it by indexer (which appears to be what dotenv does) then it doesn't error, but also doesn't work:
console.log(process.env.myVar);
process.env['myVar'] = 'foo';
console.log(process.env.myVar);
will log undefined twice.
What am I doing wrong, and how do I fix this?
The premise behind this attempted solution was flawed.
I was under the impression that webpack "made process.env.* available as an object in the browser".
It doesn't!
What it actually does is transpile your code down into literals wherever you reference process.env. So what looks like fetch(process.env.MY_URL_VAR); isn't in fact referencing a variable; it's actually being transpiled down into fetch("http://theActualValue.com") at compile time.
That means it's conceptually impossible to modify the values on the "process.env object", because there is not in fact an actual object in the transpiled JavaScript.
This explains why the direct assignment gives a ReferenceError (you tried to execute "someString" = "someOtherString";) but the indexer doesn't. (I assume that process.env gets compiled into some different literal, which technically supports an indexed setter.)
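For illustration, this is roughly how that compile-time substitution is configured with DefinePlugin (a sketch only; the variable name and value are the hypothetical ones from above, not an actual Gatsby config):

// webpack.config.js (sketch)
const webpack = require('webpack');

module.exports = {
    plugins: [
        new webpack.DefinePlugin({
            // Every occurrence of process.env.MY_URL_VAR in the source is
            // textually replaced with this string literal at build time.
            'process.env.MY_URL_VAR': JSON.stringify('http://theActualValue.com')
        })
    ]
};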
The only solutions available would be to modify the webpack build process (not an option, though I will shortly raise a PR to make it possible :) ), use a different process for getting the environment variables into the front end (sub-optimal for various other reasons), or to hack around with various bits of environment control that Gatsby provides to make it all kinda-sorta work (distasteful for yet other reasons).

qx.log.appender Syntax

When declaring qx.log.appender.Native or qx.log.appender.Console, my IDE (PyCharm) complains about the syntax:
// Enable logging in debug variant
if (qx.core.Environment.get("qx.debug"))
{
    qx.log.appender.Native;
    qx.log.appender.Console;
}
(as documented here)
The warning I get is
Expression statement is not assignment or call
Is this preprocessor magic, or a feature of JavaScript syntax I'm not aware of yet?
Clarification as my question is ambiguous:
I know that this is perfectly fine JavaScript syntax. From the comments I conclude that there's no magic JS behavior that causes the log appenders to be attached, but rather some preprocessor feature?!
But how does this work? Is it hardcoded handling, or is this syntax available for all classes which follow a specific convention?
The hints on how to turn off linter warnings are useful, but I rather wanted to know how this "magic" works.
Although what's there by default is legal code, I find it to be somewhat ugly since it's a "useless statement" (result is ignored), aside from the fact that my editor complains about it too. In my code I always change it to something like this:
var appender;
appender = qx.log.appender.Native;
appender = qx.log.appender.Console;
Derrell
The generator reads your code to determine what classes are required by your application, so that it can produce an optimised application with only the minimum classes.
Those two lines are valid Javascript syntax, and exist in order to create a reference to the two classes so that the generator knows to include them - without them, you wouldn't have any logging in your application.
Another way to create the references is to use the #use compiler hint in a class comment, e.g.:
/**
 * #use(qx.log.appender.Native)
 * #use(qx.log.appender.Console)
 */
qx.Class.define("mypackage.Application", {
    extend: qx.application.Standalone,

    members: {
        main: function() {
            this.base(arguments);
            this.debug("Hello world");
        }
    }
});
This works just as well and there is no unusual syntax; however, in this version your app will always refer to those log appenders, whereas in the skeleton you are using, the references to qx.log.appender.Native/Console were surrounded by if (qx.core.Environment.get("qx.debug")) {...}, which means that in the non-debug, ./generate.py build version of your app the log appenders would normally be excluded.
Whether you think this is a good thing or not is up to you; personally, these days I ship all applications with the log appenders enabled and working, so that if someone has a problem I can look at the logs (you can write your own appender that sends the logs to the server, or just remote-control the user's computer).
EDIT: One other detail is that when a class is created, it can have a defer function that does extra initialisation. In this case, the generator detects that qx.log.appender.Console is needed, so it makes sure the class is loaded; the class's defer method then registers the class as an appender with the Qooxdoo logging system.
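As a rough illustration of that defer mechanism (a sketch only: the class name is made up and the appender interface is simplified, so check the Qooxdoo docs for the exact API), the pattern looks something like this:

qx.Class.define("myapp.log.ServerAppender", {
    statics: {
        process: function(entry) {
            // Forward the log entry somewhere useful, e.g. POST it to a server.
        }
    },

    defer: function(statics) {
        // Runs once, as soon as the class is loaded. Registering here is why
        // merely referencing the class is enough to activate the appender.
        qx.log.Logger.register(statics);
    }
});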
This is valid JS syntax, so most likely it's a linter's/preprocessor's warning (it looks like something similar to ESLint's no-unused-expressions).
Edit:
For the other part of the question - this syntax most likely uses getters or (rather unlikely as it is a new feature) Proxies. MDN provides simple examples of how this works under the hood.
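To illustrate the mechanism this answer speculates about (not how Qooxdoo actually does it, as the accepted answer explains; this is just a toy example showing that a bare property access can run code when a getter is defined):

const appenders = {};

Object.defineProperty(appenders, "Console", {
    get: function() {
        // Side effect triggered purely by reading the property.
        console.log("Console appender attached");
        return {};
    }
});

appenders.Console; // the "useless statement" alone invokes the getter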
Btw: there is no such thing as a "native" JS preprocessor. There are compilers like Babel or TypeScript's compiler, but they are separate projects, not related to vanilla JavaScript.

Why can I not use a variable as parameter in the require() function of node.js (browserify)?

I tried something like:
var path = '../right/here';
var module = require(path);
but it can't find the module anymore this way, while:
var module = require('../right/here');
works like a charm. I would like to load modules with a generated list of strings, but I can't wrap my head around this problem atm. Any ideas?
You can use a template literal to get the file dynamically:
var myModule = 'Module1';
var Modules = require(`../path/${myModule}`)
This is due to how Browserify does its bundling: it can only do static string analysis for requirement rebinding. So, if you want to use Browserify bundling, you'll need to hardcode your requirements.
For code that has to go into production deployment (as opposed to quick prototypes, which you rarely ever bother to add bundling for), it's always advisable to stick with static requirements, in part because of the bundling but also because using dynamic strings to give you your requirements means you're writing code that isn't predictable, and can thus potentially be full of bugs you rarely run into and that are extremely hard to debug.
If you need different requirements based on different runs (say, dev vs. stage testing vs. production) then it's usually a good idea to use process.env or a config object, so that when it comes time to decide which library to require for a specific purpose, you can use something like
var knox = config.offline ? require("./util/mocks3") : require("knox");
That way your code also stays immediately traversable for others who need to track down where something's going wrong, in case a bug does get found.
require('#/path/'.concat(fileName))
You can use Browserify's .require() to add the files that you want to access, computing their paths instead of keeping them static at build time; this way these modules will be included in the bundle and will be found when require() is called later.
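A minimal sketch of that approach using Browserify's programmatic API (the file names and the exposed name here are made up; the point is that b.require() lets you add a computed path and expose it under a stable name):

// build.js (sketch)
const fs = require('fs');
const browserify = require('browserify');

const viewName = process.argv[2] || 'home';

const b = browserify('./app.js');
// Explicitly add the dynamically chosen module to the bundle and expose it,
// so require('currentView') works inside the bundled code.
b.require('./views/' + viewName + '.js', { expose: 'currentView' });

b.bundle().pipe(fs.createWriteStream('bundle.js'));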

Defining Durandal ViewModel with TypeScript

How can I do this?
Some of the examples I have seen look horrible, for example the following, which does not read like OO code at all; so what's the point of TypeScript if it's going to be a hack? I can't exactly get IntelliSense on the following at all, since there's no class definition. So I have compiled code with no IntelliSense, without being able to enforce encapsulation etc., so why bother wasting time?
/// <reference path="../durandal/durandal.d.ts" />
/// <reference path="../../scripts/knockout.d.ts" />
import app = require("durandal/app");
import http = require("durandal/http");
export function activate() {
.
.
.
}
Other examples are even more funky, by exporting a variable declaration.
The resulting code is not much better: it's DI-ing this variable called exports, and the code just keeps adding properties to it, which does not make sense.
If I were to write this all in JavaScript, I'd return a new object, maybe in JSON notation; that I can understand: a proper factory method/class. A lot less work, cleaner, and no time wasted compiling.
So can someone explain what's going on?
Why is the code creating properties on a DI-ed exports object? It's like a mutant pass by reference.
Is there a more OO way of doing this? I can see myself exporting a class, but this is just too weird and goes against everything I believe to be right and just. Ok that was an exaggeration, but sure feels that way.
The resulting code is not much better: it's DI-ing this variable called exports, and the code just keeps adding properties to it, which does not make sense.
This is the way the web (AMD) works. It's dependent on RequireJS: http://requirejs.org/ and even jQuery (pick any file from https://github.com/jquery/jquery/tree/master/src) uses a similar pattern, e.g.: https://github.com/jquery/jquery/blob/master/src/deferred.js#L1-L5
If I were to write this all in JavaScript, I'd return a new object, maybe in JSON notation; that I can understand: a proper factory method/class. A lot less work, cleaner, and no time wasted compiling.
You can do this with TypeScript as well by not using external modules and compiling with the --out flag.
Why is the code creating properties on a DI-ed exports object? It's like a mutant pass by reference.
Is there a more OO way of doing this? I can see myself exporting a class, but this is just too weird and goes against everything I believe to be right and just. Ok that was an exaggeration, but sure feels that way.
You need to learn about external / internal modules. In a nutshell, external modules depend upon a module system (AMD for the browser, provided by e.g. RequireJS; CommonJS for the server, e.g. Node.js). If you've never heard of AMD/CommonJS you probably shouldn't care, EXCEPT that the library you are trying to use (Durandal) needs you to use it. This means your JavaScript code would not be as simple as you think it would be.
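On the "more OO way" question: you can export a class from the external module instead of exporting loose functions. A minimal sketch (the class and member names are made up, and whether Durandal will happily construct a class-based viewmodel should be checked against its docs for your version):

/// <reference path="../durandal/durandal.d.ts" />

class WelcomeViewModel {
    title = "Hello world";

    activate() {
        // Durandal calls activate() during composition of the view.
        return true;
    }
}

// The module's export is the class itself; this still compiles to an AMD module.
export = WelcomeViewModel;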
PS: I have a video explaining typescript module systems : http://www.youtube.com/watch?hd=1&v=KDrWLMUY0R0
