Understanding Node.js documentation - JavaScript

I am probably overthinking this, but I am having trouble digesting the Node.js documentation. I am new to JavaScript and come from a Java background.
My question is not about any specific Node.js function, just overall understanding. Below is an example of what I am trying to understand...
When working with a statically typed language like Java, it is very clear what types are needed for method calls. As a trivial example, if I want to sort an array of ints I can just look at Arrays.sort and see that it takes an int[] (the same holds for other types). I can also see that it returns void.
public static void sort(int[] a)
However, JavaScript is a dynamically typed language, so the API signatures don't declare types. Take this example from the crypto module:
crypto.pbkdf2(password, salt, iterations, keylen, callback)
Asynchronous PBKDF2 applies pseudorandom function HMAC-SHA1 to derive a
key of given length from the given password, salt and iterations.
The callback gets two arguments (err, derivedKey).
So without going out and finding example code, or looking at the Node.js source, how do I know the argument types of the function? I realize that it is possible to derive some types from the names (i.e. callback is a function), but is there any other way?
For example, the documentation says that the callback gets two arguments, err and derivedKey. What is the type of derivedKey, and what is the type of err? Am I missing something about the documentation? How do you know if you are passing in the right types?
Note: I already know what the types of derivedKey and err are, so I don't need answers like "derivedKey is ....". My question is about overall understanding of the Node.js documentation for someone coming from a statically typed language; it is not specific to crypto.pbkdf2.

Well, you are pretty much overthinking it. You'll have to guess most of the types if they are not explained explicitly; for example, you can guess that iterations and keylen are numbers rather than strings. The Node.js docs explain parameters explicitly when they think you can't guess, or when you have to know something additional about them. For instance, for crypto.createCredentials(details) they explain that details is a dictionary and which keys you need to use.
In the case of err and derivedKey, since there is no explicit info, I would have assumed both are strings. If it turns out they are not, I would console.log them in the callback function to see what they are.
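A minimal sketch of that approach (assuming password, salt, iterations and keylen are already defined in scope):
var crypto = require('crypto');

crypto.pbkdf2(password, salt, iterations, keylen, function(err, derivedKey) {
    // Inspect the runtime types the docs leave implicit
    console.log('err:', typeof err, err);
    console.log('derivedKey:', typeof derivedKey, derivedKey);
});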
The documentation could be a lot clearer if it listed the types of all parameters, but I don't know if it's worth the effort.

I have some experience with C# and Java, and have been programming JavaScript for about a year, so I might be able to frame this.
objects
One aspect of JavaScript is that you can make objects like this, on the spot:
var options = {
    name: "something",
    age: 9,
    what: function() {
        return 8;
    }
};
Everyone takes advantage of this, so it's a big key to understanding JavaScript libraries.
You can also take the above options object and then go like this:
options.mood = "ok";
In other words, objects are just bundles of properties, and the structure isn't set. You can use language constructs like the for ... in loop to iterate through them. That is to say, the "type" of things like err is basically an associative array.
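For instance, a quick sketch of walking whatever properties an object happens to have, using the options object above:
for (var key in options) {
    if (options.hasOwnProperty(key)) {
        // key is the property name, options[key] its value (string, number, function, ...)
        console.log(key, '=', options[key]);
    }
}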
callbacks
Callbacks are basically everywhere, so the question becomes how to deal with them. A common pattern is function (err, maybeSomething). Most of the time, you only care if err is "something." That is, you'll go like this a lot:
if (err) {
    ...
}
Frankly, I do a lot of console.log(err) to see what I'm getting back during development.
After that it's really up to the documentation. Some of it is better than others. You're not really missing anything. About the only "trick" is that sometimes a doc will explain everything at the top.
You'll find yourself going into the source sometimes to find out what exactly a library is doing, but 97% of the time a few quick guesses and checks will get you moving.

Related

Nodejs Couchbase deserialize date property from document

I'm saving a document in Couchbase that has JavaScript Date values, and I wish to get them back exactly the same, not as the string '2016-01-02T12:13:14Z'.
I found a way to achieve this in plain JavaScript by using the second (reviver) parameter of JSON.parse, but Couchbase does the deserialization internally, so I can't really use this.
Is there any way to disable the Couchbase deserialization, and to avoid doing JSON.stringify + JSON.parse and neither deep-walking the object?
bucket.get(key, (err, result) => {
    if (err) {
        // deal with error here
    } else {
        // here "result.value" is already deserialized
        done(result.value);
    }
});
As you probably know, JSON and Date object handling can be a bit specific depending on what you're trying to do. We tend to stick to the defaults. What you're looking for is a fairly advanced use case. At the moment, we don't have direct support for changing the way we do that parsing.
However, there is an interface for this. It's called a "transcoder" and it lets you be very specific about how you want to handle converting incoming data to what is stored. I don't have an example that shows quite what you want to do, but a good place to start looking for this at a lower layer is shown in the tests.
It might be easier, however, to just treat whatever you're storing differently at the application level. From my read of the reviver parameter you pointed to, that would only apply at retrieval time. There's nothing stopping you from wrapping that get() and mutating the object before passing it along to the next layer, at very low cost I do believe.
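A rough sketch of that wrapping approach, assuming you know which fields hold dates (the field name createdAt is just an example):
function getWithDates(bucket, key, callback) {
    bucket.get(key, (err, result) => {
        if (err) { return callback(err); }
        var doc = result.value;
        // Revive only the fields you know are dates, so there's no deep walk
        if (typeof doc.createdAt === 'string') {
            doc.createdAt = new Date(doc.createdAt);
        }
        callback(null, doc);
    });
}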

Native options for obscuring/encrypting a string?

As an exercise, I've been working on replicating this game. In case it becomes inaccessible, the premise of the game is to take a quote that's been scrambled by swapping pairs of letters (e.g. replace A with M and vice versa), and unscramble it to its original arrangement.
As I'm studying this game, I realize it's almost trivial to extract the solution from the source - there are any number of breakpoints you can place to access it. I've been trying to come up with a way to obscure the string in a way that it isn't immediately accessible, and the only thing I can think of is some kind of native obscuring function before the quote even has a chance to land in a variable. Something like this:
var litmus, quotes = [
    "String One",
    "String Two",
    ....
    "String n",
];
litmus = obscureString(quotes[Math.floor(Math.random() * quotes.length)]);
This way the user can't summon up the raw quote, or even the random integer that was used - they're gone by the time the breakpoint hits.
My question is this: is there any kind of native function that would fit the role of obscureString() in the above example, even loosely? I'm aware JavaScript doesn't have any native encryption/hash methods, and any libraries that provide that functionality just provide a chance to drop a breakpoint. Thus, I'm hoping someone here can come up with a creative way to natively obscure a string, if it's even possible in JS.
Been crunching on it for a while, and I found a very makeshift solution.
The only native (read: non-user-corruptible) transformation/hash function I was able to find was window.btoa. It does exactly what I need, in letting me obscure a string before the user ever has a chance to get their hands on it. The problem, however, is that it has a counterpart window.atob, whose only purpose is to reverse the process.
To solve that, I was able to neutralize window.atob with the following line of code, essentially making window.btoa a one-way trip:
window.atob = function(f){ return f; };
Don't make a habit of this.
This is horrific practice, and I feel dirty for writing it. It's passable in this case because my application is small, self-contained, and won't ever need to rely on that function elsewhere - but I can't in good conscience recommend this as a general solution. Many browsers won't even let you override native functions in the first place.
Just wanted to post the answer in case someone found themselves in a similar situation needing a similar answer - this may be the closest we can get to a one-way native hash function for now.
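For reference, a minimal sketch of how the pieces fit together; obscureString here is just window.btoa, and the guess-checking part is only one way it might be used:
// Neutralize the built-in decoder first, so btoa becomes a one-way trip
window.atob = function(f) { return f; };

var quotes = ["String One", "String Two", "String n"];

// Encode the chosen quote before it ever sits in a plain variable
var litmus = window.btoa(quotes[Math.floor(Math.random() * quotes.length)]);

// A guess can then be checked by encoding it and comparing, without ever decoding
function checkGuess(guess) {
    return window.btoa(guess) === litmus;
}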

How to query NodeJS stream 'meta data'?

I have a program with several long pipes with several transforms.
e.g.
socket.pipe(ta).pipe(tb).pipe(tc);
...
tc.pipe(other_socket);
What is the best way of adding/reading meta data to/from the pipe?
For example: ta accumulates and breaks packets into lines. tb needs to prefix each line with data based on the originating IP address (if any).
How can tb get the remoteAddress from its input?
There seem to be some similarities with prototypical inheritance here. i.e. tb should ask ta (which lacks the property) then ta should ask socket (which has the property).
I'm looking for a general approach to adding and reading metadata from pipes, as I have other more complex, but analogous issues.
I'm currently solving this issue by using 'Object Streams' consisting of objects with meta and payload properties. Each transform has to do its stuff to payload and most leave meta alone. This solution is ugly, especially as I've had to create a new xnet module which looks like net but produces these augmented objects, rather than plain buffers or strings.
(Haskellers might recognise this solution as a Monad, where I'm lifting most of the stream transforms I use into a "meta" Monad. I'm still learning Haskell, so this observation may be incorrect.)
You can use the pipe event:
tb.on('pipe', function(ta) {
    console.log('getting data from', ta.remoteAddress);
});
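One way to make that work across the whole chain (a sketch, not tested against your setup) is to copy the property forward whenever something pipes in, so tb sees what ta picked up from the socket:
// Each transform copies remoteAddress from whatever pipes into it,
// so the property trickles down socket -> ta -> tb -> tc.
[ta, tb, tc].forEach(function(stream) {
    stream.on('pipe', function(src) {
        if (src.remoteAddress) {
            stream.remoteAddress = src.remoteAddress;
        }
    });
});

socket.pipe(ta).pipe(tb).pipe(tc);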
If the metadata is read-only data for a particular instance of pipeline execution, then why not pass this data while creating the individual pipes? Something like: socket.pipe(new Ta(address))
In terms of Haskell, it sounds like a Reader monad, where the pipeline execution function takes a Reader which fulfills all the metadata requirements of the individual pipes in the pipeline.
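A sketch of what that constructor injection might look like (the class name and the line-prefixing behaviour are just illustrative):
var { Transform } = require('stream');

class Ta extends Transform {
    constructor(remoteAddress) {
        super();
        this.remoteAddress = remoteAddress;
    }
    _transform(chunk, encoding, callback) {
        // Prefix each chunk with the address handed in at construction time
        this.push('[' + this.remoteAddress + '] ' + chunk.toString());
        callback();
    }
}

// socket.pipe(new Ta(socket.remoteAddress)).pipe(tb).pipe(tc);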

Custom JSON.stringify fails to Stringify object as whole, but works when iterated one level deep

Hoping someone can spot the error, because I'm having trouble.
Alright, I built my own JSON.stringify just for custom large objects. It may not be exactly to specification for some edge cases, but it's only meant for stringifying large objects that I'm building myself.
Well, it works, and works well for most objects, but I have an Object I'm trying to stringify and it's failing and printing this before exiting:
node.js:134
throw e; // process.nextTick error, or 'error' event on first tick
^
undefined
Not very helpful. The object is fine because the regular call to JSON.stringify(object) works fine, and when I iterate over the object with for (var x in obj) if (obj.hasOwnProperty(x)) { myStringify(obj[x]); } that works fine, but if I call it on the top level of the object, it goes to hell... It doesn't really make sense to me, and the only thing I can think of is that the level of recursion is somehow breaking something...
The Parser : https://gist.github.com/958776 - The stringify function I'm calling
ObjectIterator.js : https://gist.github.com/958777 - Mostly to provide the asynchronous iteration
Edit So, I iterated over the object one level deep and compared the resulting string to the string of JSON.stringify(sameLevelDeep), and they're equal. Since the output is equal, I'm not sure it's how I'm parsing something; possibly it's that the object is so large or the amount of recursion is so high?
Edit 2 So, I "fixed" the problem, I guess. Instead of every 25th iteration being pushed to the next event loop, I push every fifth. I'm not sure why this would make a difference but it does... I guess the question is now "Why does that make a difference"?
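For context, a rough sketch of that kind of chunking (the real code is in the gists above; this just shows yielding to the event loop every N keys):
// Process an object's keys, handing control back to the event loop every perChunk keys
function iterateAsync(obj, perChunk, each, done) {
    var keys = Object.keys(obj);
    var i = 0;
    (function next() {
        var end = Math.min(i + perChunk, keys.length);
        for (; i < end; i++) {
            each(keys[i], obj[keys[i]]);
        }
        if (i < keys.length) {
            setImmediate(next); // push the rest to the next turn of the event loop
        } else {
            done();
        }
    })();
}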
Okay, well, beyond this being a very specific question helping a very specific person, I would like to take it in a different direction that might also remove your problem and maybe help others.
Since you are not specifying why you are going through this process, I will have to break it down and guess -- and provide a solution for each guessed idea.
1. (Browser) You are trying to use JavaScript to crunch data, and provide the user with a result
Downloading at least several megabytes of raw data ("some of these objects are 5-10million characters") on a webpage to process and display a result is far from optimal, you should probably be doing this operation on the server side and download the pre-calculated result.
Besides, no matter what you are doing, JavaScript does not support threads.
setTimeout(function() { JSON.stringify(data); }, 1); shouldn't be much different from what you are doing.
2. (Browser) You are trying to display the downloaded content
You should attempt downloading smaller chunks instead of the whole 10+ million character content using the built-in JSON.stringify method.
3. (Non-browser) You are trying to use JavaScript for an application that requires threading
You should consider using a different programming language for this application.
In summary
I think you are climbing the wrong mountain, you can achieve the same thing walking around it without breaking sweat. If you want to climb a mountain for kicks, there are mountains out there that need it -- but it's not this one.
Translation: Work on the architecture to obsolete the obstacle instead of trying to solve it, if you want to solve a problem there are problems that need a solving -- but it's not this one.

What functions does a lexer need to provide?

I am making a lexer; don't tell me not to, because I've already done most of it.
Currently it makes an array of tokens and that's it.
I would like to know what functions the lexer needs to provide, with a brief explanation of what each function needs to do.
I'll accept the most complete list.
An example function would be:
next: Consume the current token and return it
Also, should the lexer have the expect function or should the interpreter implement it?
By the way, the lexer constructor accepts a string as an argument, performs the lexical analysis, and stores all the tokens in the "tokens" variable.
The language is JavaScript, so I can't overload operators.
In my experience, you need:
nextToken — move forward in the input and get the next token.
curToken — return the current token; don't move
curValue — tokens like STRING and NUMBER have values; tokens like SEMICOLON don't
sourcePos — return the source position (line number, character position) of the first character of the current token
edit — oh also:
prefetch — initialize the lexer by getting the first token.
Additionally, for some languages you might want 2 or more tokens of lookahead. Then you'd want a variation on plain curToken so that you can look at a bigger "window" on the token stream. For most languages that's not really necessary however.
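Put together, the interface might look roughly like this (a sketch only; the token structure and the tokenize() helper are assumptions):
// Assumes tokenize(source) returns an array of { value, line, column } token objects
function Lexer(source) {
    this.tokens = tokenize(source);
    this.pos = 0;
}
Lexer.prototype.prefetch = function() { this.pos = 0; };
Lexer.prototype.curToken = function() { return this.tokens[this.pos]; };
Lexer.prototype.curValue = function() { return this.tokens[this.pos].value; };
Lexer.prototype.sourcePos = function() {
    var t = this.tokens[this.pos];
    return { line: t.line, column: t.column };
};
Lexer.prototype.nextToken = function() { return this.tokens[++this.pos]; };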
edit again — also I won't tell you not to write one because they're basically the funnest things ever. In javascript you can't get too crazy, but in a language like Erlang you can have your lexer act like a "token pump" by making it generate a stream of tokens it sends to a separate parser process.
You should be able to compile a comprehensive list by writing a program that uses your lexer, and implementing the functions you end up needing.
Think a second time about what you're asking: "what functions the lexer needs to provide"
What it "needs" depends, of course, on what you need, not what it needs. We will probably be able to give you better help if you explain your own needs. But well, here's a shot anyway:
A minimal one would consist of a single function that takes a string as an argument and returns a list of strings (or an iterator over strings if you want to be fancy and deferred). That's enough for many use-cases and hence is what a lexer "needs".
A more descriptive one could return more complex objects than strings, containing further information about each token (such as its position in the original string, for example, so that you'll be able to tell the poor programmer with syntax errors in his code where he should look). You can probably come up with lots of metadata to add in there besides line numbers, but once again, it all depends on your needs.
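For instance, a minimal sketch of both shapes (the token pattern is made up; it just splits words and punctuation):
// Minimal version: a string in, an array of strings out
function tokenizeSimple(source) {
    return source.match(/\w+|[^\s\w]/g) || [];
}

// More descriptive version: token objects that also carry their position
function tokenizeRich(source) {
    var tokens = [], re = /\w+|[^\s\w]/g, m;
    while ((m = re.exec(source)) !== null) {
        tokens.push({ text: m[0], offset: m.index });
    }
    return tokens;
}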
