SWIG - C++ to JavaScript

I'm trying to build a simple JavaScript module from my C++ files using SWIG. I ran all the right commands, but it seems like nothing is working.
This is my .h file:
#pragma once
class Die
{
public:
Die();
Die(int a);
~Die();
int foo(int a);
Die* getDie(int a);
int myVar;
};
My .cpp file:
#include <iostream>
#include "example.h"
int Die::foo(int a) {
std::cout << "foo: running fact from simple_ex" <<std::endl;
return 1;
}
Die::Die(){}
Die::Die(int a){myVar = a;}
Die::~Die(){}
Die* Die::getDie(int a) {
return new Die (a);
}
My .i file:
%module example
%{
#include "example.h"
%}
%include "example.h"
My binding.gyp file:
{
"targets": [
{
"target_name": "example",
"sources": ["example.cpp", "example_wrap.cxx" ]
}
]
}
I followed all the commands from the SWIG docs.
I ran:
sudo apt-get install libv8-dev
sudo apt-get install libjavascriptcoregtk-1.0-dev
swig -c++ -javascript -node example.i
node-gyp configure build
After I run the last command, I get all sorts of errors:
error: ‘NewSymbol’ is not a member of ‘v8::String’
and many, many more.
Any help will do.
Thanks!

I tried that example to learn this interface myself.
To help others who may stumble upon this, here is an example of how to work with SWIG and JS.
First we write the C++ class and its logic, using the object-based approach SWIG teaches us.
#pragma once
class Die
{
public:
Die(int a);
~Die();
int foo(int a);
int myVar;
};
extern "C"
{
Die* getDie(int a);
}
The interesting thing here is that we don't always create a new instance ourselves; instead an external function lends us a pointer to the class, which can then be used from our JavaScript. This is essentially what SWIG is all about.
Here is the implementation:
#include <iostream>
#include "example.h"
int Die::foo(int a)
{
std::cout << "foo: running fact from simple_ex" << std::endl;
return 1;
}
Die::Die(int a)
{
myVar = a;
}
Die::~Die()
{
}
extern "C"
{
Die* getDie(int a)
{
return new Die(a);
}
}
Also note that the function returning said pointer is wrapped in extern "C", which separates it from the rest of the class implementation and also helps the compiler. The SWIG interface file is the same as in the question. It is used to generate the wrapper file that SWIG produces to give us an implemented interface between JavaScript and our C++ library:
%module example
%{
#include "example.h"
%}
%include "example.h"
This creates the wrapper file for us, using the following command in the terminal:
swig -c++ -javascript -node example.i
Now we need some tools on the JavaScript side to build this:
you need Node.js and npm installed for the following steps.
First we need a package.json file:
{
"name": "SwigJS",
"version": "0.0.1",
"scripts": {
"start": "node index.js",
"install": "node-gyp clean configure build"
},
"dependencies": {
"nan": "^2.16.0",
"node-gyp": "^9.0.0"
}
}
This is important to let the build program know some information about the package and its dependencies.
After that we create a file called "binding.gyp":
{
"targets": [
{
"target_name": "SwigJS",
"sources": [ "example_wrap.cxx", "example.cpp" ],
"include_dirs" : [ "<!(node -e \"require('nan')\")" ]
}
]
}
This holds the information for our build target and also points at nan.
To get this working we now need to create the .node file.
This is done by either using:
node-gyp configure
node-gyp build
or using:
npm i
Both do nearly the same thing, as far as I can tell; the install script in our package.json runs node-gyp clean configure build anyway (correct me if I am wrong).
At last we implement our JavaScript and use the library there.
There are some more tricks to make the path at the top disappear so that you could write just require("modulename"), but that's too much for this example.
const Swigjs = require("./build/Release/SwigJS.node");
console.log("exports :", Swigjs); //show exports to see if we have a working library
const die = Swigjs.getDie(5); //get the class pointer
console.log("foo:" + die.foo(5)); //call a function from the class
I hope this helps to give a clear picture of how SWIG and JS work together.

Related

Parcel Bundler beautify, lint, and create .min.js

I'm new to the world of automating/testing/bundling with JS. I've got Parcel set up for the most part, but I noticed that when it builds files, it does not actually save them with the .min.js part in the file name. I'm wondering if there's a way to do this without having to rename the build file manually.
I'm also trying to find a way to have Parcel go through the original source files (the ones that you work on) and lint and beautify them for me.
Here's what my package.json looks like:
{
"name": "lpac",
"version": "1.3.1",
"description": "",
"dependencies": {},
"devDependencies": {
"parcel": "^2.0.0-rc.0"
},
"scripts": {
"watch": "parcel watch --no-hmr",
"build": "parcel build"
},
"targets": {
"lite-maps": {
"source": ["./path/file1.js", "./path/file2.js", "./path/file3.js"],
"distDir": "./path/build/"
}
},
"browserslist": "> 0.5%, last 2 versions, not dead",
"outputFormat" : "global",
}
I checked out the docs, but I couldn't find anything on linting or beautifying with Parcel. How can I go about doing that? If you have tutorial links, please share those too, because resources/tutorials seem scarce for anything other than basic watching and building of files.
Unfortunately, there is no out-of-the-box setting that makes Parcel's JavaScript output look like [fileName].[hash].min.js instead of [fileName].[hash].js. The .min.js extension is just a convention to keep output files distinct from source files, though; it has no effect at runtime, and Parcel's automatic content hashing already makes the two easy to tell apart. And even though they don't have a .min.js extension, these output files are definitely still minified and optimized by default.
However, if you really, really want this anyway, it's relatively simple to write a Namer plugin for Parcel that adds .min.js to all JavaScript output.
Here's the code:
import { Namer } from "#parcel/plugin";
import path from "path";
export default new Namer({
name({ bundle }) {
if (bundle.type === "js") {
const filePath = bundle.getMainEntry()?.filePath;
if (filePath) {
let baseNameWithoutExtension = path.basename(filePath, path.extname(filePath));
// See: https://parceljs.org/plugin-system/namer/#content-hashing
if (!bundle.needsStableName) {
baseNameWithoutExtension += "." + bundle.hashReference;
}
return `${baseNameWithoutExtension}.min.js`;
}
}
// Returning null means parcel will keep the name of non-js bundles the same.
return null;
},
});
Then, supposing the above code was published in a package called parcel-namer-js-min, you would add it to your parcel pipeline with this .parcelrc:
{
"extends": "#parcel/config-default",
"namers": ["parcel-namer-js-min", "..."]
}
Here is an example repo where this is working.
The answer to your second question (is there "a way to have parcel go through the original source files (the ones that you work on) and lint and beautify them for me") is, unfortunately, no.
However, Parcel works well side-by-side with other command-line tools that do this. For example, I have most of my projects set up with a format command in the package.json that looks like this:
{
...
"scripts": {
...
"format": "prettier --write src/**/* -u --no-error-on-unmatched-pattern"
}
...
}
You can easily make that command automatically run for git commits and pushes with husky.
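For example, a minimal sketch assuming husky v7/v8 (the hook command is my choice, not part of the original answer):
npx husky install
npx husky add .husky/pre-commit "npm run format"
After this, every git commit runs the format script first.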

How do I use a C library in a Rust library compiled to WebAssembly?

I'm experimenting with Rust, WebAssembly and C interoperability to eventually use the Rust (with static C dependency) library in the browser or Node.js. I'm using wasm-bindgen for the JavaScript glue code.
#![feature(libc, use_extern_macros)]
extern crate wasm_bindgen;
use wasm_bindgen::prelude::*;
use std::os::raw::c_char;
use std::ffi::CStr;
extern "C" {
fn hello() -> *const c_char; // returns "hello from C"
}
#[wasm_bindgen]
pub fn greet() -> String {
let c_msg = unsafe { CStr::from_ptr(hello()) };
format!("{} and Rust!", c_msg.to_str().unwrap())
}
My first naive approach was to have a build.rs script that uses the gcc crate to generate a static library from the C code. Before introducing the WASM bits, I could compile the Rust program and see the hello from C output in the console; now I get an error from the compiler saying
rust-lld: error: unknown file type: hello.o
build.rs
extern crate gcc;
fn main() {
gcc::Build::new()
.file("src/hello.c")
.compile("libhello.a");
}
This makes sense, now that I think about it, since the hello.o file was compiled for my laptop's architecture, not WebAssembly.
Ideally I'd like this to work out of the box, with some magic in my build.rs that would, for example, compile the C library to a static WebAssembly library that Rust can use.
What I think could work, but which I'd like to avoid since it sounds more problematic, is using Emscripten to create a WASM library for the C code, then compiling the Rust library separately and gluing them together in JavaScript.
TL;DR: Jump to "New week, new adventures" in order to get "Hello from C and Rust!"
The nice way would be creating a WASM library and passing it to the linker. rustc has an option for that (and there seem to be source-code directives too):
rustc <yourcode.rs> --target wasm32-unknown-unknown --crate-type=cdylib -C link-arg=<library.wasm>
The trick is that the library has to be a library, so it needs to contain reloc (and in practice linking) sections. Emscripten seems to have a symbol for that, RELOCATABLE:
emcc <something.c> -s WASM=1 -s SIDE_MODULE=1 -s RELOCATABLE=1 -s EMULATED_FUNCTION_POINTERS=1 -s ONLY_MY_CODE=1 -o <something.wasm>
(EMULATED_FUNCTION_POINTERS is included with RELOCATABLE, so it is not really necessary, ONLY_MY_CODE strips some extras, but it does not matter here either)
The thing is, emcc never generated a relocatable wasm file for me, at least not the version I downloaded this week, for Windows (I played this on hard difficulty, which retrospectively might not have been the best idea). So the sections are missing, and rustc keeps complaining that <something.wasm> is not a relocatable wasm file.
Then comes clang, which can generate a relocatable wasm module with a very simple one-liner:
clang -c <something.c> -o <something.wasm> --target=wasm32-unknown-unknown
Then rustc says "Linking sub-section ended prematurely". Aw, yes (by the way, my Rust setup was brand new too). Then I read that there are two clang wasm targets: wasm32-unknown-unknown-wasm and wasm32-unknown-unknown-elf, and maybe the latter one should be used here. As my also brand new llvm+clang build runs into an internal error with this target, asking me to send an error report to the developers, it might be something to test on easy or medium, like on some *nix or Mac box.
Minimal success story: sum of three numbers
At this point I just added lld to llvm and succeeded with linking a test code manually from bitcode files:
clang cadd.c --target=wasm32-unknown-unknown -emit-llvm -c
rustc rsum.rs --target wasm32-unknown-unknown --crate-type=cdylib --emit llvm-bc
lld -flavor wasm rsum.bc cadd.bc -o msum.wasm --no-entry
Aw yes, it sums numbers, 2 in C and 1+2 in Rust:
cadd.c
int cadd(int x,int y){
return x+y;
}
msum.rs
extern "C" {
fn cadd(x: i32, y: i32) -> i32;
}
#[no_mangle]
pub fn rsum(x: i32, y: i32, z: i32) -> i32 {
x + unsafe { cadd(y, z) }
}
test.html
<script>
fetch('msum.wasm')
.then(response => response.arrayBuffer())
.then(bytes => WebAssembly.compile(bytes))
.then(module => {
console.log(WebAssembly.Module.exports(module));
console.log(WebAssembly.Module.imports(module));
return WebAssembly.instantiate(module, {
env:{
_ZN4core9panicking5panic17hfbb77505dc622acdE:alert
}
});
})
.then(instance => {
alert(instance.exports.rsum(13,14,15));
});
</script>
That _ZN4core9panicking5panic17hfbb77505dc622acdE feels very natural (the module is compiled and instantiated in two steps in order to log the exports and imports; that is one way such missing pieces can be found), and it forecasts the demise of this attempt: the entire thing works only because there is no other reference to the runtime library, and this particular method could be mocked/provided manually.
Side story: string
As alloc and its Layout thing scared me a little, I went with the vector-based approach described/used from time to time, for example here or on Hello, Rust!.
Here is an example, getting the "Hello from ..." string from the outside...
rhello.rs
use std::ffi::CStr;
use std::mem;
use std::os::raw::{c_char, c_void};
use std::ptr;
extern "C" {
fn chello() -> *mut c_char;
}
#[no_mangle]
pub fn alloc(size: usize) -> *mut c_void {
let mut buf = Vec::with_capacity(size);
let p = buf.as_mut_ptr();
mem::forget(buf);
p as *mut c_void
}
#[no_mangle]
pub fn dealloc(p: *mut c_void, size: usize) {
unsafe {
let _ = Vec::from_raw_parts(p, 0, size);
}
}
#[no_mangle]
pub fn hello() -> *mut c_char {
let phello = unsafe { chello() };
let c_msg = unsafe { CStr::from_ptr(phello) };
let message = format!("{} and Rust!", c_msg.to_str().unwrap());
dealloc(phello as *mut c_void, c_msg.to_bytes().len() + 1);
let bytes = message.as_bytes();
let len = message.len();
let p = alloc(len + 1) as *mut u8;
unsafe {
for i in 0..len as isize {
ptr::write(p.offset(i), bytes[i as usize]);
}
ptr::write(p.offset(len as isize), 0);
}
p as *mut c_char
}
Built as rustc rhello.rs --target wasm32-unknown-unknown --crate-type=cdylib
... and actually working with JavaScript:
jhello.html
<script>
var e;
fetch('rhello.wasm')
.then(response => response.arrayBuffer())
.then(bytes => WebAssembly.compile(bytes))
.then(module => {
console.log(WebAssembly.Module.exports(module));
console.log(WebAssembly.Module.imports(module));
return WebAssembly.instantiate(module, {
env:{
chello:function(){
var s="Hello from JavaScript";
var p=e.alloc(s.length+1);
var m=new Uint8Array(e.memory.buffer);
for(var i=0;i<s.length;i++)
m[p+i]=s.charCodeAt(i);
m[p+s.length]=0;
return p;
}
}
});
})
.then(instance => {
/*var*/ e=instance.exports;
var ptr=e.hello();
var optr=ptr;
var m=new Uint8Array(e.memory.buffer);
var s="";
while(m[ptr]!=0)
s+=String.fromCharCode(m[ptr++]);
e.dealloc(optr,s.length+1);
console.log(s);
});
</script>
It is not particularly beautiful (actually I have no clue about Rust), but it does what I expect from it, and even the dealloc might work (at least invoking it twice throws a panic).
There was an important lesson along the way: when the module manages its own memory, its size may change, which invalidates the backing ArrayBuffer object and its views. That is why memory.buffer is read repeatedly, and read again after calling into wasm code.
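A tiny helper makes this pattern harder to get wrong. A sketch, where e is the exports object from the code above:
// Always build a fresh view: after memory growth the old buffer is detached.
function mem() {
  return new Uint8Array(e.memory.buffer);
}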
And this is where I am stuck, because this code would refer to runtime libraries, and .rlib-s. The closest I could get to a manual build is the following:
rustc rhello.rs --target wasm32-unknown-unknown --crate-type=cdylib --emit obj
lld -flavor wasm rhello.o -o rhello.wasm --no-entry --allow-undefined
liballoc-5235bf36189564a3.rlib liballoc_system-f0b9538845741d3e.rlib
libcompiler_builtins-874d313336916306.rlib libcore-5725e7f9b84bd931.rlib
libdlmalloc-fffd4efad67b62a4.rlib liblibc-453d825a151d7dec.rlib
libpanic_abort-43290913ef2070ae.rlib libstd-dcc98be97614a8b6.rlib
libunwind-8cd3b0417a81fb26.rlib
Where I had to use the lld sitting in the depths of the Rust toolchain as .rlib-s are said to be interpreted, so they are bound to the Rust toolchain
--crate-type=rlib, #[crate_type = "rlib"] - A "Rust library" file will be produced. This is used as an intermediate artifact and can be thought of as a "static Rust library". These rlib files, unlike staticlib files, are interpreted by the Rust compiler in future linkage. This essentially means that rustc will look for metadata in rlib files like it looks for metadata in dynamic libraries. This form of output is used to produce statically linked executables as well as staticlib outputs.
Of course this lld does not eat the .wasm/.o files generated with clang or llc ("Linking sub-section ended prematurely"), perhaps the Rust-part also should be rebuilt with the custom llvm.
Also, this build seems to be missing the actual allocators: besides chello, there will be 4 more entries in the import table: __rust_alloc, __rust_alloc_zeroed, __rust_dealloc and __rust_realloc. These could in fact be provided from JavaScript after all, but that defeats the idea of letting Rust handle its own memory, plus an allocator was present in the single-pass rustc build... Oh yes, this is where I gave up for this week (Aug 11, 2018, at 21:56)
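For illustration only, a hypothetical sketch of what providing those imports from JavaScript could look like; the __rust_* signatures here are my assumption (following Rust's GlobalAlloc convention), and as said, doing this defeats the point of letting Rust manage its own memory:
// Hypothetical: a naive bump allocator standing in for Rust's allocator imports.
let top = 65536;   // arbitrary start offset inside the wasm memory
let mem = null;    // set this to instance.exports.memory after instantiation
const allocStubs = {
  __rust_alloc(size, align) {
    const p = (top + align - 1) & ~(align - 1); // align up
    top = p + size;
    return p;
  },
  __rust_alloc_zeroed(size, align) {
    return allocStubs.__rust_alloc(size, align); // fresh wasm memory is zeroed
  },
  __rust_dealloc(ptr, size, align) {}, // a bump allocator never frees
  __rust_realloc(ptr, oldSize, align, newSize) {
    const p = allocStubs.__rust_alloc(newSize, align);
    new Uint8Array(mem.buffer).copyWithin(p, ptr, ptr + Math.min(oldSize, newSize));
    return p;
  }
};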
New week, new adventures, with Binaryen, wasm-dis/merge
The idea was to modify the ready-made Rust code (having allocators and everything in place). And this one works. As long as your C code has no data.
Proof of concept code:
chello.c
void *alloc(int len); // allocator comes from Rust
char *chello(){
char *hell=alloc(13);
hell[0]='H';
hell[1]='e';
hell[2]='l';
hell[3]='l';
hell[4]='o';
hell[5]=' ';
hell[6]='f';
hell[7]='r';
hell[8]='o';
hell[9]='m';
hell[10]=' ';
hell[11]='C';
hell[12]=0;
return hell;
}
Not extremely usual, but it is C code.
rustc rhello.rs --target wasm32-unknown-unknown --crate-type=cdylib
wasm-dis rhello.wasm -o rhello.wast
clang chello.c --target=wasm32-unknown-unknown -nostdlib -Wl,--no-entry,--export=chello,--allow-undefined
wasm-dis a.out -o chello.wast
wasm-merge rhello.wast chello.wast -o mhello.wasm -O
(rhello.rs is the same one presented in "Side story: string")
And the result works as
mhello.html
<script>
fetch('mhello.wasm')
.then(response => response.arrayBuffer())
.then(bytes => WebAssembly.compile(bytes))
.then(module => {
console.log(WebAssembly.Module.exports(module));
console.log(WebAssembly.Module.imports(module));
return WebAssembly.instantiate(module, {
env:{
memoryBase: 0,
tableBase: 0
}
});
})
.then(instance => {
var e=instance.exports;
var ptr=e.hello();
console.log(ptr);
var optr=ptr;
var m=new Uint8Array(e.memory.buffer);
var s="";
while(m[ptr]!=0)
s+=String.fromCharCode(m[ptr++]);
e.dealloc(optr,s.length+1);
console.log(s);
});
</script>
Even the allocators seem to do something (ptr readings from repeated blocks with/without dealloc show how memory does not leak/leaks accordingly).
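To see this yourself, a loop like the following (a sketch reusing e and the string-reading logic from mhello.html above) can be run; with the dealloc call the logged pointer stays the same across iterations, and without it the address keeps climbing:
for (let i = 0; i < 3; i++) {
  const ptr = e.hello();
  console.log(ptr); // constant if dealloc reclaims, growing if we leak
  let len = 0;
  const m = new Uint8Array(e.memory.buffer);
  while (m[ptr + len] != 0) len++;
  e.dealloc(ptr, len + 1);
}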
Of course this is super-fragile and has mysterious parts too:
if the final merge is run with the -S switch (generating source code instead of .wasm), and the resulting assembly file is compiled separately (using wasm-as), the result will be a couple of bytes shorter (and those bytes are somewhere in the very middle of the running code, not in the export/import/data sections)
the order of merge matters, file with "Rust-origin" has to come first. wasm-merge chello.wast rhello.wast [...] dies with an entertaining message
[wasm-validator error in module] unexpected false: segment offset should be reasonable, on
[i32] (i32.const 1)
Fatal: error in validating output
probably my fault, but I had to build a complete chello.wasm module (so, with linking). Compiling only (clang -c [...]) resulted in the relocatable module which was missed so much at the very beginning of this story, but decompiling that one (to .wast) lost the named export (chello()):
(export "chello" (func $chello)) disappears completely
(func $chello ... becomes (func $0 ..., an internal function (wasm-dis loses reloc and linking sections, putting only a remark about them and their size into the assembly source)
related to the previous one: this way (building a complete module) data from the secondary module can not be relocated by wasm-merge: while there is a chance for catching references to the string itself (const char *HELLO="Hello from C"; becomes a constant at offset 1024 in particular, and later referred as (i32.const 1024) if it is local constant, inside a function), it does not happen. And if it is a global constant, its address becomes a global constant too, number 1024 stored at offset 1040, and the string is going to be referred as (i32.load offset=1040 [...], which starts being difficult to catch.
For laughs, this code compiles and works too...
void *alloc(int len);
int my_strlen(const char *ptr){
int ret=0;
while(*ptr++)ret++;
return ret;
}
char *my_strcpy(char *dst,const char *src){
char *ret=dst;
while(*src)*dst++=*src++;
*dst=0;
return ret;
}
char *chello(){
const char *HELLO="Hello from C";
char *hell=alloc(my_strlen(HELLO)+1);
return my_strcpy(hell,HELLO);
}
... except that it writes "Hello from C" into the middle of Rust's message pool, resulting in the printout
Hello from Clt::unwrap()` on an `Err`an value and Rust!
(Explanation: 0-initializers are not present in the recompiled code because of the optimization flag, -O)
And it also brings up the question of locating a libc (though when they are defined without my_, clang recognizes strlen and strcpy as built-ins, even reporting their correct signatures, it does not emit code for them and they become imports for the resulting module).

How to include tesseract library in node-gyp build process

I'm trying to create a simple Node addon with the tesseract library as a dependency, but I'm a C++ beginner.
Whole code at: https://github.com/q-nick/node-tesseract
binding.cc:
#include <node.h>
#include <v8.h>
// #include <tesseract/baseapi.h>
// #include <leptonica/allheaders.h>
void Method(const v8::FunctionCallbackInfo<v8::Value>& args) {
v8::Isolate* isolate = args.GetIsolate();
args.GetReturnValue().Set(v8::String::NewFromUtf8(isolate, "world"));
}
void init(v8::Local<v8::Object> exports) {
NODE_SET_METHOD(exports, "hello", Method);
}
NODE_MODULE(NODE_GYP_MODULE_NAME, init)
binding.gyp:
{
"targets": [
{
"target_name": "binding",
"sources": [
"src/binding.cc"
],
'defines': [ 'V8_DEPRECATION_WARNINGS=1' ],
'include_dirs': [
],
'libraries': [
# '-lpvt.cppan.demo.google.tesseract.libtesseract',
# '-lleptonica'
]
}
]
}
I found a project which could help me compile dependencies like tesseract and leptonica: https://cppan.org/
Unfortunately, I can't figure out how to connect this with the node-gyp build process. CPPAN has one config file named cppan.yml (something like package.json in npm).
cppan.yml:
dependencies:
pvt.cppan.demo.google.tesseract.libtesseract: master
pvt.cppan.demo.danbloomberg.leptonica: 1
I want to build my Node addon and all dependencies (like tesseract) with one command, and I don't know how to link C++ dependencies in the node-gyp build.
I want to use the latest tesseract version, so I can't use pre-compiled libraries. Currently I'm working in a Windows environment, but I want the process to be cross-platform.
My example GitHub project (https://github.com/q-nick/node-tesseract) must compile successfully after uncommenting the tesseract include.
If there is some other easy way to accomplish this, please share.
I want this too!
The solution is to build all the C++ tesseract code (and leptonica) as dependencies, so the first step is to work out how to build tesseract itself (which arguments, variables, defines ...).
Check this example: https://github.com/istex/popplonode/blob/master/binding.gyp
There is a dependencies file for poppler in the lib folder.
It would be great to work together on this!
I will answer my own question.
I found the project https://github.com/cmake-js/cmake-js, which has a detailed explanation of why to move away from gyp:
...First of all, Google, the creator of the gyp platform is moving towards its new build system called gn, which means gyp's days of support are counted...
I also found: https://github.com/nodejs/nan/
...The goal of this project is to store all logic necessary to develop native Node.js addons without having to inspect NODE_MODULE_VERSION and get yourself into a macro-tangle...
So I gave it a try.
binding.cc:
#include <nan.h>
#include <baseapi.h>
#include <allheaders.h>
using namespace Nan;
using v8::FunctionTemplate;
using v8::String;
// Define the method before the init block that references it.
NAN_METHOD(MyMethod) {
info.GetReturnValue().Set(Nan::New<v8::String>("world").ToLocalChecked());
}
NAN_MODULE_INIT(InitAll) {
Set(target, New<String>("myMethod").ToLocalChecked(),
GetFunction(New<FunctionTemplate>(MyMethod)).ToLocalChecked());
}
NODE_MODULE(addon, InitAll)
The next thing is to create a CMakeLists.txt file with a few modifications. I want to use cppan as the dependency installer, so I have to add some extra lines to the default CMakeLists.txt file:
add_subdirectory(.cppan)
...
target_link_libraries(${PROJECT_NAME} ${CMAKE_JS_LIB}
pvt.cppan.demo.google.tesseract.libtesseract
pvt.cppan.demo.danbloomberg.leptonica
)
CMakeLists.txt:
project(addon)
file(GLOB SOURCE_FILES "src/**/*.cc" "src/**/*.h")
add_library(${PROJECT_NAME} SHARED ${SOURCE_FILES})
add_subdirectory(.cppan)
set_target_properties(${PROJECT_NAME} PROPERTIES PREFIX "" SUFFIX ".node")
target_include_directories(${PROJECT_NAME} PRIVATE ${CMAKE_JS_INC})
target_link_libraries(${PROJECT_NAME} ${CMAKE_JS_LIB}
pvt.cppan.demo.google.tesseract.libtesseract
pvt.cppan.demo.danbloomberg.leptonica
)
cppan.yml
dependencies:
pvt.cppan.demo.google.tesseract.libtesseract: master
pvt.cppan.demo.danbloomberg.leptonica: 1
Now everything is set up and we can run the install and build commands:
cppan
and
cmake-js build
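To get the single-command build asked for in the question, both steps could be wired into package.json (a sketch; this script wiring is my suggestion, not part of the original answer):
{
"scripts": {
"install": "cppan && cmake-js build"
}
}
Then a plain npm install fetches the C++ dependencies via cppan and compiles the addon in one go.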
Good luck!

How do I get Babel to output a file's AST?

Is there a way I can get Babel to output the AST of a file, as a JSON or similar, rather than condense it back into JS?
The reason is that I want to be able to do some simple static analysis / code gen, and while I aim to eventually do it within a plugin for Babel (or similar), I feel it would simplify things significantly if I can start with a static model.
There's babylon, babel's own parser:
npm install -g babylon
babylon your_file.js > ast.json
Node API example and source:
https://github.com/babel/babel/tree/master/packages/babylon
Also the babel plugin handbook might come in handy for AST reference, and to get started with plugin development.
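For the JSON output specifically, the Node API route is only a few lines. A sketch (the sourceType option is an assumption for typical ES module sources):
// dump-ast.js
const babylon = require("babylon");
const fs = require("fs");
const code = fs.readFileSync(process.argv[2], "utf8");
const ast = babylon.parse(code, { sourceType: "module" });
fs.writeFileSync("ast.json", JSON.stringify(ast, null, 2));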
You should check out ast-source; it can take Babel as a parser when it builds the tree.
Example from their npmjs page:
import ASTSource from "ast-source"
import estraverse from "estraverse"
import fs from "fs"
function transform(AST) {
var replaced = {
"type": "babel",
"value": 42,
"raw": "42"
};
return estraverse.replace(AST, {
enter: function (node) {
if (node.type === estraverse.Syntax.Literal) {
return replaced;
}
}
});
}
var source = new ASTSource(fs.readFileSync("./input.js", "utf-8"), {
filePath: "./input.js"
});
var output = source.transform(transform).output();
console.log(output.code);// => "var a = 42;"
console.dir(output.map.toString()); // => source map
fs.writeFileSync("./output.js", output.codeWithMap, "utf-8");

execute some code and then go into interactive node

Is there a way to execute some code (in a file or from a string, doesn't really matter) before dropping into interactive mode in node.js?
For example, if I create a script __preamble__.js which contains:
console.log("preamble executed! poor guy!");
and a user types node __preamble__.js, they get this output:
preamble executed! poor guy!
> [interactive mode]
Really old question but...
I was looking for something similar, I believe, and found this.
You can open the REPL (typing node on your terminal) and then load a file.
Like this: .load ./script.js.
Press enter and the file content will be executed. Now everything created (object, variable, function) in your script will be available.
For example:
// script.js
var y = {
name: 'obj',
status: true
};
var x = setInterval(function () {
console.log('As time goes by...');
}, 5000);
On the REPL:
//REPL
.load ./script.js
Now you can type on the REPL and interact with the "living code".
You can console.log(y) or clearInterval(x).
It will be a bit odd, because "As time goes by..." keeps showing up every five seconds (or so).
But it will work!
You can start a new repl in your Node software pretty easily:
var repl = require("repl");
var r = repl.start("node> ");
r.context.pause = pauseHTTP;
r.context.resume = resumeHTTP;
From within the REPL you can then call pause() or resume() and execute the functions pauseHTTP() and resumeHTTP() directly. Just assign whatever you want to expose to the REPL's context member.
This can be achieved with the current version of NodeJS (5.9.1):
$ node -i -e "console.log('A message')"
The -e flag evaluates the string and the -i flag begins the interactive mode.
You can read more in the referenced pull request
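For example, seeding the session with a variable and then using it interactively (a sketch; prompt output abridged):
$ node -i -e "var x = 21"
> x * 2
42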
node -r allows you to require a module when REPL starts up. NODE_PATH sets the module search path. So you can run something like this on your command line:
NODE_PATH=. node -r myscript.js
This should put you in a REPL with your script loaded.
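A sketch of a script meant to be loaded this way; note the global assignment, since a required module's local variables are not visible in the REPL:
// myscript.js
global.greet = name => "Hello, " + name;
Afterwards, greet('world') is available at the prompt.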
I've recently started a project to create an advanced interactive shell for Node and associated languages like CoffeeScript. One of the features is loading a file or string in the context of the interpreter at startup which takes into account the loaded language.
http://danielgtaylor.github.com/nesh/
Examples:
# Load a string (Javascript)
nesh -e 'var hello = function (name) { return "Hello, " + name; };'
# Load a string (CoffeeScript)
nesh -c -e 'hello = (name) -> "Hello, #{name}"'
# Load a file (Javascript)
nesh -e hello.js
# Load a file (CoffeeScript)
nesh -c -e hello.coffee
Then in the interpreter you can access the hello function.
Edit: Ignore this. @jaywalking101's answer is much better. Do that instead.
If you're running from inside a Bash shell (Linux, OS X, Cygwin), then
cat __preamble__.js - | node -i
will work. This also spews lots of noise from evaluating each line of __preamble__.js, but afterwards you land in an interactive shell in the context you want.
(The '-' to 'cat' just specifies "use standard input".)
Similar answer to @slacktracer, but if you are fine using global in your script, you can simply require it instead of (learning and) using .load.
Example lib.js:
global.x = 123;
Example node session:
$ node
> require('./lib')
{}
> x
123
As a nice side-effect, you don't even have to do the var x = require('x'); 0 dance, as module.exports remains an empty object and thus the require result will not fill up your screen with the module's content.
Vorpal.js was built to do just this. It provides an API for building an interactive CLI in the context of your application.
It includes plugins, and one of these is Vorpal-REPL. This lets you type repl to drop into a REPL within the context of your application.
Example to implement:
var vorpal = require('vorpal')();
var repl = require('vorpal-repl');
vorpal.use(repl).show();
// Now you do your custom code...
// If you want to automatically jump
// into REPl mode, just do this:
vorpal.exec('repl');
That's all!
Disclaimer: I wrote Vorpal.
There isn't a way to do this natively. You can either enter the Node interactive shell with node, or run a script you have with node myScript.js. @sarnold is right: if you want that for your app, you will need to build it yourself, and the repl toolkit is helpful for that kind of thing.
nit-tool lets you load a Node module into the interactive REPL and gives you access to the inner module environment (joined context) for development purposes:
npm install nit-tool -g
First I tried
$ node --interactive foo.js
but it just runs foo.js, with no REPL.
If you're using export and import in your JS, run npm init -y, then tell Node that you're using modules with the "type": "module" line:
{
"name": "neomem",
"version": "1.0.0",
"description": "",
"type": "module",
"main": "home.js",
"keywords": [],
"author": "",
"license": "ISC"
}
Then you can run node and import a file with dynamic import -
$ node
Welcome to Node.js v18.1.0.
Type ".help" for more information.
> home = await import('./home.js')
[Module: null prototype] {
get: [AsyncFunction: get],
start: [AsyncFunction: start]
}
> home.get('hello')
Kind of a roundabout way of doing it - having a command line switch would be nice...
