Getting the entire tree-sitter parse tree inside an Atom package - javascript

I am attempting to write an Atom package which would require access to the parse tree generated internally for Atom by tree-sitter. I am unfamiliar with tree-sitter and not overly familiar with Atom packages.
While reading some tutorials for Atom packages in general, I came across a promising page which discusses getting the "Scope descriptor for the given position in buffer coordinates", and on the same page a way to get the current grammar, but nothing for getting the entire parse tree. Is this possible using the API provided by Atom's environment? If not, how would I go about doing it otherwise? Maybe reparsing the tree from inside the package (though that sounds like both more work than it's worth and inefficient)?

For anyone else who might need to do this in the future: Atom exposes atom.workspace, which can be used to get various things such as the TextEditors, Grammars, etc. from the active environment. While tracing Atom's source, I found that the GrammarRegistry object accessible in the workspace (atom.workspace.grammarRegistry) has a method called languageModeForGrammarAndBuffer, which returns a TreeSitterLanguageMode object, which in turn has a getter for the tree (.tree). This method needs a grammar and a textBuffer, both of which can also be obtained from the workspace (the buffer via the TextEditorRegistry). Putting that all together, to get the root node of the tree from an Atom package, you can use something like this:
const grammarRegistry = atom.workspace.grammarRegistry;
const grammar = grammarRegistry.treeSitterGrammarsById["source.js"]; // "source.js" is the id (scope name) of the tree-sitter grammar you want
const buffer = atom.workspace.textEditorRegistry.editors.values().next().value.buffer; // assuming you want the first buffer
const root = grammarRegistry.languageModeForGrammarAndBuffer(grammar, buffer).tree.rootNode;

In current versions you can just do editor.getBuffer().getLanguageMode().tree. Note that this could be either a TextMateLanguageMode or a TreeSitterLanguageMode, depending on Tree-sitter support for the language and the value of the core.useTreeSitterParsers setting. Also, none of this appears to be documented (i.e. stable) API yet.
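A minimal sketch of that newer approach (again undocumented API, so it may change between Atom versions; checking for .tree is one way to tell the two language modes apart):
const editor = atom.workspace.getActiveTextEditor();
const languageMode = editor.getBuffer().getLanguageMode();
// Only a TreeSitterLanguageMode exposes a parse tree; a TextMateLanguageMode does not.
if (languageMode.tree) {
  console.log(languageMode.tree.rootNode.toString());
}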

Related

Can't set values on `process.env` in client-side Javascript

I have a system (it happens to be Gatsby, but I don't believe that's relevant to this question) which is using webpack's DefinePlugin to attach some environment variables to the global variable process.env.
I can read this just fine.
Unfortunately, due to weirdnesses in the app startup process, I have chosen to do some brief overwriting of those environment variables after the site loads. (Not interested in discussing whether that's the best option in the context of this question. I know there are other options; I want to know whether this is possible.)
But it doesn't work :(
If I try to do it explicitly:
process.env.myVar = 'foo';
Then I get ReferenceError: invalid assignment left-hand side.
If I do it by indexer (which appears to be what dotenv does) then it doesn't error, but also doesn't work:
console.log(process.env.myVar);
process.env['myVar'] = 'foo';
console.log(process.env.myVar);
will log undefined twice.
What am I doing wrong, and how do I fix this?
The premise behind this attempted solution was flawed.
I was under the impression that webpack "made process.env.* available as an object in the browser".
It doesn't!
What it actually does is transpile your code down into literals wherever you reference process.env. So what looks like fetch(process.env.MY_URL_VAR); isn't in fact referencing a variable; it's actually being transpiled down into fetch("http://theActualValue.com") at compile time.
That means it's conceptually impossible to modify the values on the "process.env object", because there is in fact no actual object in the transpiled JavaScript.
This explains why the direct assignment gives a ReferenceError (you tried to execute "someString" = "someOtherString";) but the indexer doesn't. (I assume that process.env gets compiled into some different literal, which technically supports an indexed setter.)
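To make that concrete, here is a sketch of the substitution (the literal "http://theActualValue.com" stands in for whatever value DefinePlugin was configured with):
// What you write:
fetch(process.env.MY_URL_VAR);
process.env.MY_URL_VAR = 'other';
// Roughly what gets emitted after the compile-time substitution:
fetch("http://theActualValue.com");
"http://theActualValue.com" = 'other'; // ReferenceError: invalid assignment left-hand side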
The only solutions available would be to modify the webpack build process (not an option, though I will shortly raise a PR to make it possible :) ), use a different process for getting the Env.Vars into the frontEnd (sub-optimal for various other reasons), or to hack around with various bits of environment control that Gatsby provides to make it all kinda-sorta work (distasteful for yet other reasons).

dynamically load modules in Meteor node.js

I'm trying to load multiple modules on the fly via chokidar (watchdog) using Meteor 1.6 beta; however, after doing extensive research on the matter, I just can't seem to get it to work.
From what I gather, require by design will not take in anything other than static strings, i.e.
require("test/string/here")
Since if I try:
var path = "test/string/here"
require(path)
I just get Error: Cannot find module, even though the strings are identical.
Now the thing is I'm uncertain how to go about this: am I really forced to use either import or static strings when using Meteor, or is there some workaround?
watchdog(cmddir, (dir) => {
  match = "." + regex_cmd.exec(dir);
  match = dir;
  loader.emit("loadcommand", match);
});

loader.on('loadcommand', (file) => {
  require(file);
});
There is something intrinsically weird in what you describe.
chokidar is used to watch actual files and folders.
But Meteor compiles and bundles your code, resulting in an app folder after build that is totally different from your project structure.
Although Meteor now supports dynamic imports, the mechanism is internal to Meteor and does not rely on your actual project files, but on Meteor built ones.
If you want to dynamically require files as in Node, including with dynamically generated module paths, you should avoid import and require statements, which are automatically replaced by Meteor's built-in import mechanism. Instead you would have to write your own loading function, taking care of the fact that your app's built folder is different from your project folder.
That may work for example if your server is watching files and/or folders in a static location, different from where your app will be running.
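For instance, a hand-rolled server-side loader along those lines might look like this (a rough sketch only; Npm.require is Meteor's way to reach Node's built-in modules without going through its import mechanism, and loadCommandFile plus the idea of a fixed external directory are assumptions, not Meteor API):
const fs = Npm.require('fs');
const vm = Npm.require('vm');
// Read and evaluate a file from a static location outside the built app folder,
// bypassing Meteor's import/require rewriting entirely.
function loadCommandFile(absolutePath) {
  const code = fs.readFileSync(absolutePath, 'utf8');
  return vm.runInThisContext(code, { filename: absolutePath });
}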
In the end, I feel this is a sort of XY problem: you have not described your actual objective, and the issue above comes from an attempted solution that does not seem to fit how Meteor works, and hence may not be the most appropriate one for your implicit objective.
Sashko does a great job of explaining Meteor's dynamic imports here; there are also the docs.
A dynamic import is a function that returns a promise instead of just importing statically at build time. Example:
import('./component').then(({ default: MyComponent }) => {
  // The promise resolves with the module object, so grab its default export.
  render(MyComponent);
});
The promise resolves once the module has been loaded. If you try to load the module repeatedly, it only gets loaded once and is immediately available on subsequent requests.
AFAICT you can use a variable for the string to import.
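For example (a sketch; Meteor still needs to be able to see the module somewhere at build time for it to end up in a dynamically loadable bundle):
const which = './component';
import(which).then(({ default: MyComponent }) => {
  render(MyComponent);
});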

How to use webpack/babel to preparse JS template strings?

Let's say for example, I want to create a rot13 template tag. You could use it like this:
let secret = rot13`This is a secret.`;
Now I could implement this tag in JavaScript, but I want to pre-parse it such that my compiled bundle would actually contain:
let secret = "Guvf vf n frperg.";
How can I do this? Do I have to create a Babel plugin to hook into their parser? What would that look like?
Now what if I want Webpack to also spit out a file called rotated_strings.txt which contains a list of all these strings that have been transformed? How do I collect them up? i.e., how do I get Babel and Webpack to communicate such that Babel can do the inline transform but somehow notify Webpack to generate this extra file?
Try out the following.
https://astexplorer.net/#/gist/89a6bdce0165d2661385828d9d85a7e0/4d745f3e8b5bfd25ba919cff567f27055d9e3a75
I have built on what Joe Clay created in the comments.
Right now this console-logs all the things it has changed, once at the end.
In the comments I have noted what you can replace that with once you move it into your project's build environment (assuming it to be Node).
PS: I have used the sync APIs in the comments to demonstrate quickly; you should probably switch over to the async APIs.
Update: when you write this as a Babel plugin, be sure not to set the quasi and cooked attributes, but to use path.replaceWith(t.stringLiteral(cooked)) instead.
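For reference, a minimal sketch of what such a plugin could look like under the assumptions above (the tag is named rot13, and templates containing ${...} interpolations are skipped; collecting the transformed strings for Webpack to write rotated_strings.txt would still need a separate hook):
function rot13(s) {
  return s.replace(/[a-zA-Z]/g, (ch) => {
    const base = ch <= 'Z' ? 65 : 97;
    return String.fromCharCode(((ch.charCodeAt(0) - base + 13) % 26) + base);
  });
}

module.exports = function ({ types: t }) {
  return {
    visitor: {
      TaggedTemplateExpression(path) {
        if (!path.get('tag').isIdentifier({ name: 'rot13' })) return;
        const { quasi } = path.node;
        if (quasi.expressions.length > 0) return; // skip interpolations for simplicity
        const cooked = quasi.quasis[0].value.cooked;
        // As noted above: replace the whole tagged template with a plain string literal.
        path.replaceWith(t.stringLiteral(rot13(cooked)));
      },
    },
  };
};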

Why can I not use a variable as parameter in the require() function of node.js (browserify)?

I tried something like:
var path = '../right/here';
var module = require(path);
but it can't find the module anymore this way, while:
var module = require('../right/here');
works like a charm. Would like to load modules with a generated list of strings, but I can't wrap my head around this problem atm. Any ideas?
You can use a template literal to build the file path dynamically:
var myModule = 'Module1';
var Modules = require(`../path/${myModule}`);
This is due to how Browserify does its bundling: it can only do static string analysis for requirement rebinding. So, if you want Browserify bundling, you'll need to hardcode your requirements.
For code that has to go into production deployment (as opposed to quick prototypes, which you rarely ever bother to add bundling for), it's always advisable to stick with static requirements, partly because of the bundling, but also because using dynamic strings for your requirements means you're writing code that isn't predictable, and can thus potentially be full of bugs you rarely run into and that are extremely hard to debug.
If you need different requirements based on different runs (say, dev vs. stage testing vs. production), then it's usually a good idea to use process.env or a config object, so that when it comes time to decide which library to require for a specific purpose, you can use something like
var knox = config.offline ? require("./util/mocks3") : require("knox");
That way your code also stays immediately traversable for others who need to track down where something's going wrong, in case a bug does get found.
require('#/path/'.concat(fileName))
You can use .require() to add the files you want to access, computing their paths instead of having them be static at build time; this way these modules will be included in the bundle, and when require() is called later they will be found.
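A sketch of that idea with Browserify's programmatic API (the entry file, module path, and expose name here are assumptions):
const browserify = require('browserify');
const fs = require('fs');

const b = browserify('./app.js');
// Force-include a module under a fixed name, so a later require('commands/hello')
// inside the bundle can resolve it at runtime even though the path was computed.
b.require('./commands/hello.js', { expose: 'commands/hello' });
b.bundle().pipe(fs.createWriteStream('./bundle.js'));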

Make source maps refer to original files on remote machine

Using Google Closure Compiler to minify a bunch of JavaScript files. Now I'd like to also add source maps to those, to debug out in the wild.
Thing is, I want to keep the originals (and preferably also the map files) in a completely different place, like another server. I've been looking for a solution to this, and found out about the sourceRoot parameter. But it seems it's not supported?
I also found this --source_map_location_mapping parameter, but no documentation whatsoever. It seems to want a pipe-delimited argument (filesystem-path|webserver-path). I tried a couple of different approaches to this, like local filename|remote url, but without success. That just gives me No such file or directory and java.lang.ArrayIndexOutOfBoundsException.
Has anyone succeeded to place the minified/mapped source files on a remote machine?
Or does anyone know of any documentation for --source_map_location_mapping?
Luckily, Google Closure Compiler's source code is available publicly:
https://gist.github.com/lydonchandra/b97b38e3ff56ba8e0ba5
REM --source_map_location_mapping is case SENSITIVE !
REM need extra escaped double quote --source_map_location_mapping="\"C:/tools/closure/^|httpsa://bla/\"" as per http://stackoverflow.com/a/29542669
java -jar compiler.jar --compilation_level=SIMPLE_OPTIMIZATIONS --create_source_map=C:\tools\closure\latest\maplayer.js.map --output_wrapper "%output%//# sourceMappingURL=maplayer.js.map" --js=C:\tools\closure\mapslayer.js --js_output_file=maplayer.min.js --source_map_location_mapping="\"C:/tools/closure/^|httpsa://bla/\""
The flag should be formatted like so:
--source_map_location_mapping=foo/|http://bar
The flag should be repeated if you need multiple locations:
--source_map_location_mapping=foo/|http://bar --source_map_location_mapping=xxx/|http://yyy
But what I expect you are running into is that the "|" might be interpreted by your command shell. For example:
echo --source_map_location_mapping=foo/|http://bar
-bash: http://bar: No such file or directory
(The choice to use "|" was unfortunate.) Make sure it is escaped appropriately, like:
--source_map_location_mapping="foo/|http://bar"
I submitted a pull request to report an error for badly formatted flag values:
https://github.com/google/closure-compiler/pull/620
which will at least let you know that your flag value is incorrect (so you won't see the out-of-bounds exception).
John is correct functionality-wise, but I think I can clear it up a bit (as this was super confusing for me to get working).
I suspect many people have the same issue as I did:
- source map URLs are generated relative to your current directory
- they don't necessarily match up to relative URLs on your website/server
Even if they did match up directly, the strangely-defined pseudo-spec found here means that Chrome/Firefox will try to load your paths relative to your sourcemap. I.e. the browser loads /assets/sourcemaps/main.map, sees assets/js/main.js, and loads /assets/sourcemaps/assets/js/main.js (yay). (Or it might actually be relative to the original js file; I just happened to have them in the same directory.)
Let's use the above example. Say we have assets/js/main.js in our sourcemap, and want to make sure that loads mywebsite.com/assets/js/main.js. To do this, you'd pass the option:
--source_map_location_mapping="assets|/assets"
Like John mentioned, quotes are important, and repeat the arg multiple times for multiple options. The prefixed / will let Firefox/Chrome know you want it relative to your website root. (If you're doing this in something like grunt-closure-tools, you'll need to escape even more, like this:)
config: {
  source_map_location_mapping: "\"assets|/assets\"",
}
This way, we can essentially map any given sourcemap path to any given website path. It's not really a perfect replacement for some sort of closure source root, but it does let you map each section of your sources individually to their own roots, so it's not that bad a compromise, and it does give some additional flexibility (i.e. you could specify CDN paths for some of your assets but not for others).
An additional thing you might find helpful: you can automatically add the sourceMappingURL via an output_wrapper. (Though if you want the ability to debug in production, you should probably prefer having the server return X-SourceMap: blah.js.map headers instead, inaccessible to the public.)
--output_wrapper="(function(){%output%}).call(this); //# sourceMappingURL=/assets/js/my_main_file.js.map"
