Linq for JavaScript vs standard methods - javascript

I am an architect for a software development team. I have built a sizeable warchest of web controls and tools for us using ASP.NET and JavaScript/jQuery.
Part of the toolkit is a functional equivalent to .NET's IEnumerable LINQ methods (where, select, etc.) for JavaScript arrays. I was surprised how simple these were to implement using the js prototype feature. My hope was that our devs could leverage their knowledge of LINQ seamlessly on the client side, and the results have been great so far.
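For illustration, here is a minimal sketch of the kind of LINQ-style prototype extension I mean (details simplified for this question; this is not our actual toolkit):
// Hypothetical sketch only - not the real implementation.
Array.prototype.where = function (predicate) {
  var result = [];
  for (var i = 0; i < this.length; i++) {
    if (predicate(this[i], i)) result.push(this[i]);
  }
  return result;
};
Array.prototype.select = function (selector) {
  var result = [];
  for (var i = 0; i < this.length; i++) {
    result.push(selector(this[i], i));
  }
  return result;
};
// Usage: [1, 2, 3, 4].where(function (n) { return n % 2 === 0; })
//                    .select(function (n) { return n * 10; }); // [20, 40]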
There is just one snag, as I discovered today: there are already a handful of functionally identical methods as of JavaScript 1.6. They are filter, map, some, and every, corresponding to LINQ's where, select, any, and all methods, respectively.
They aren't supported by IE8 or earlier (which might explain why I had not heard about them), but it is trivial to provide an implementation so that they work cross-browser. Note that there are dozens of LINQ methods that do not have a native equivalent, such as sum, max, avg, groupBy, etc.
My question is this: how should my development team address this discrepancy? I think we have three options:
1 - Ignore the native JavaScript methods (or consume them internally in a pass-through method) and use only the LINQ methods. Forbid the use of the native methods so that our codebase is self-consistent.
2 - Use the native JavaScript methods whenever applicable, and the LINQ methods when there is no equivalent.
3 - Allow either to be used.
I suspect that most of the community will side with Option 2, as it is arguably more standards-compliant, but I feel it will be disorienting for devs to have to know that some functions are identical in JavaScript, while others have a different, arbitrary name. It really jacks up the cross-platform consistency we have achieved so far.
Which of these would you choose, and why? Is there another alternative I am not considering?
Bonus Question: jQuery has neither native functions nor LINQ functions. What method names should my jQuery extensions use?

I would opt for #1 out of the three you provided, because I value consistency, but also because it allows you to provide a fallback if the method is not natively available.
That's also similar to what Underscore.js does for some of its methods: it uses the native implementation if available, and otherwise resorts to its own fallback implementation.
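As a rough sketch of that pattern (assuming a hypothetical where helper that prefers the native filter and falls back to a manual loop on older browsers such as IE8):
function where(array, predicate) {
  // Prefer the native implementation when it exists...
  if (typeof Array.prototype.filter === 'function') {
    return array.filter(predicate);
  }
  // ...otherwise fall back to a hand-rolled loop.
  var result = [];
  for (var i = 0; i < array.length; i++) {
    if (predicate(array[i], i)) result.push(array[i]);
  }
  return result;
}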

You can try the manipula package, which implements all of C#'s LINQ methods and preserves their syntax:
https://github.com/litichevskiydv/manipula
https://www.npmjs.com/package/manipula

Related

node.js utility library for working with objects and arrays?

Is there a good utility library for working with objects and arrays?
For example, functions like extend, forEach, copying objects/arrays, etc.
What is common in Node.js environments? Are there decent alternatives to underscore.js?
underscore.js is a pretty good default for this kind of stuff. Here's a thread on compatibility issues that may be handy.
Edit, upon your request for something beyond underscore:
As far as I know, underscore has become the de facto standard when you're looking for additional array operations (much like jQuery for DOM manipulation). Joyent maintains a pretty thorough manifest of Node.js-compatible modules, and the only comparable utility appears to be an experimental library called fjs with an emphasis on currying (and judging from the source, most of the functionality comes from extending underscore functions anyway). There might be something else out there, but as far as I know nothing with the penetration and maturity of underscore.
Yet another edit - here are a handful of older libraries if you're so curious, but their maintenance has fallen off a bit - valentine, wu.js, Functional, and Sugar. Functional and valentine may be a bit thinner; wu.js looks to be about the same and sugar is even fatter.
lodash is a "drop-in replacement" for underscore.js that you may also want to consider.
Lo-Dash v0.7.0 has been tested in at least Chrome 5-21, Firefox 1-15, IE 6-9, Opera 9.25-12, Safari 3-6, Node.js 0.4.8-0.8.8, Narwhal 0.3.2, RingoJS 0.8, and Rhino 1.7RC5
For extend specifically, you can use Node's built-in util._extend() function.
var extend = require('util')._extend,
    x = { a: 1 },
    y = extend({}, x); // y is a shallow copy of x
Source code of Node's _extend function: https://github.com/joyent/node/blob/master/lib/util.js#L563
Have a look at Ramdajs: http://ramdajs.com/0.22.1/index.html
The primary distinguishing features of Ramda are:
Ramda emphasizes a purer functional style. Immutability and side-effect free functions are at the heart of its design philosophy. This can help you get the job done with simple, elegant code.
Ramda functions are automatically curried. This allows you to easily build up new functions from old ones simply by not supplying the final parameters.
The parameters to Ramda functions are arranged to make it convenient for currying. The data to be operated on is generally supplied last.
The last two points together make it very easy to build functions as sequences of simpler functions, each of which transforms the data and passes it along to the next. Ramda is designed to support this style of coding.
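To illustrate those points, here is a small sketch assuming Ramda is installed and required as R (R.filter, R.map and R.pipe are real Ramda functions; the data is made up):
var R = require('ramda');

// R.filter and R.map are curried and take the data last,
// so partially applying them yields reusable functions.
var evens = R.filter(function (n) { return n % 2 === 0; });
var doubled = R.map(function (n) { return n * 2; });

// Compose the steps into a pipeline; the data flows left to right.
var doubleEvens = R.pipe(evens, doubled);

console.log(doubleEvens([1, 2, 3, 4])); // [4, 8]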

object.prototype===object.fn history

Calling all JS historians.
A common pattern is to alias a class' prototype to fn, which is less verbose.
Where did the object.prototype===object.fn convention originate in JavaScript?
I see many libraries using it.
Just curious.
I believe this practice comes from the original jQuery code, where the author wanted to shield developers from the complexity (or natural beauty) of a native JavaScript feature: prototypes.
Another potential reason is the need to avoid any connotation with the existing PrototypeJS library when referring to jQuery's own internal prototype.
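For context, a minimal sketch of what the convention looks like in practice (the Library name and highlight plugin are made up; jQuery itself does the equivalent with jQuery.fn = jQuery.prototype):
function Library(selector) {
  if (!(this instanceof Library)) return new Library(selector);
  this.selector = selector;
}

// The alias in question: fn is just a shorter name for prototype.
Library.fn = Library.prototype;

// Plugin authors can now extend instances via the shorter alias.
Library.fn.highlight = function () {
  console.log('highlighting ' + this.selector);
  return this; // keep it chainable
};

Library('.title').highlight(); // "highlighting .title"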

Is "monkey patching" really that bad? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Some languages like Ruby and JavaScript have open classes which allow you to modify interfaces of even core classes like numbers, strings, arrays, etc. Obviously doing so could confuse others who are familiar with the API but is there a good reason to avoid it otherwise, assuming that you are adding to the interface and not changing existing behavior?
For example, it might be nice to add an Array.map implementation to web browsers which don't implement ECMAScript 5th edition (and if you don't need all of jQuery). Or your Ruby arrays might benefit from a "sum" convenience method which uses "inject". As long as the changes are isolated to your systems (e.g. not part of a software package you release for distribution) is there a good reason not to take advantage of this language feature?
Monkey-patching, like many tools in the programming toolbox, can be used both for good and for evil. The question is where, on balance, such tools tend to be most used. In my experience with Ruby the balance weighs heavily on the "evil" side.
So what's an "evil" use of monkey-patching? Well, monkey-patching in general leaves you wide open to major, potentially undiagnosable clashes. I have a class A. I have some kind of monkey-patching module MB that patches A to include method1, method2 and method3. I have another monkey-patching module MC that also patches A to include a method2, method3 and method4. Now I'm in a bind. I call instance_of_A.method2: whose method gets called? The answer to that can depend on a lot of factors:
1. In which order did I bring in the patching modules?
2. Are the patches applied right off or in some kind of conditional circumstance?
3. AAAAAAARGH! THE SPIDERS ARE EATING MY EYEBALLS OUT FROM THE INSIDE!
OK, so #3 is perhaps a tad over-melodramatic....
Anyway, that's the problem with monkey-patching: horrible clashing problems. Given the highly-dynamic nature of the languages that typically support it you're already faced with a lot of potential "spooky action at a distance" problems; monkey-patching just adds to these.
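A contrived JavaScript illustration of that kind of clash (library names and the sum method are invented for the example):
// "Library MB" defines sum() to add numbers...
Array.prototype.sum = function () {
  return this.reduce(function (a, b) { return a + b; }, 0);
};

// ..."library MC" later defines sum() to concatenate as strings.
Array.prototype.sum = function () {
  return this.join('');
};

// Whichever library loaded last silently wins:
console.log([1, 2, 3].sum()); // "123", not 6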
Having monkey-patching available is nice if you're a responsible developer. Unfortunately, IME, what tends to happen is that someone sees monkey-patching and says, "Sweet! I'll just monkey-patch this in instead of checking to see if other mechanisms might not be more appropriate." This is a situation roughly analogous to Lisp code bases created by people who reach for macros before they think of just doing it as a function.
Wikipedia has a short summary of the pitfalls of monkey-patching:
http://en.wikipedia.org/wiki/Monkey_patch#Pitfalls
There's a time and place for everything, also for monkey-patching. Experienced developers have many techniques up their sleeves and learn when to use them. It's seldom a technique per se that's "evil", just inconsiderate use of it.
With regards to Javascript:
is there a good reason to avoid it otherwise, assuming that you are adding to the interface and not changing existing behavior?
Yes. In the worst case, even if you don't alter existing behavior, you could constrain the future evolution of the language.
This is exactly what happened with Array.prototype.flatten and Array.prototype.contains. In short, the specification was written up for those methods, their proposals got to stage 3, and then browsers started shipping it. But, in both cases, it was found that there were ancient libraries which patched the built-in Array object with their own methods with the same name as the new methods, and had different behavior; as a result, websites broke, the browsers had to back out of their implementations of the new methods, and the specification had to be edited. (The methods were renamed.)
If you mutate a built-in object like Array on your own browser, on your own computer, that's fine. (This is a very useful technique for userscripts.) If you mutate a built-in object on your public-facing site, that's less fine - it may eventually result in problems like the above. If you happen to control a big site (like stackoverflow.com) and you mutate a built-in object, you can almost guarantee that browsers will refuse to implement new features/methods which break your site (because then users of that browser will not be able to use your site, and they will be more likely to migrate to a different browser). (see here for an explanation of these sorts of interactions between the specification writers and browser makers)
All that said, with regards to the specific example in your question:
For example, it might be nice to add an Array.map implementation to web browsers which don't implement ECMAScript 5th edition
This is a very common and trustworthy technique, called a polyfill.
A polyfill is code that implements a feature on web browsers that do not support the feature. Most often, it refers to a JavaScript library that implements an HTML5 web standard, either an established standard (supported by some browsers) on older browsers, or a proposed standard (not supported by any browsers) on existing browsers.
For example, if you wrote a polyfill for Array.prototype.map (or, to take a newer example, for Array.prototype.flatMap) which was perfectly in line with the official Stage 4 specification, and then ran code that defined Array.prototype.flatMap on browsers which didn't have it already:
if (!Array.prototype.flatMap) {
  Array.prototype.flatMap = function (callback, thisArg) {
    // Simplified sketch, not fully spec-compliant:
    // map each element, then flatten the result one level deep.
    return Array.prototype.concat.apply([], this.map(callback, thisArg));
  };
}
If your implementation is correct, this is perfectly fine, and is very commonly done all over the web so that obsolete browsers can understand newer methods. polyfill.io is a common service for this sort of thing.
As long as the changes are isolated to your systems (e.g. not part of a software package you release for distribution) is there a good reason not to take advantage of this language feature?
As a lone developer on an isolated problem, there are no issues with extending or altering native objects. On larger projects, this is a choice the team should make together.
Personally I dislike having JavaScript's native objects altered, but it's a common practice and a valid choice to make. If you're going to write a library or code that is meant to be used by others, I would strongly avoid it.
It is, however, a valid design choice to let the user set a config flag that says "please overwrite native objects with your convenience methods", because they're so convenient.
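A minimal sketch of what such an opt-in flag might look like (the names are invented for illustration):
var MyLib = {
  extendNatives: false, // off by default
  install: function () {
    if (!this.extendNatives) return;
    // Only touch Array.prototype when the user explicitly asked for it.
    Array.prototype.sum = Array.prototype.sum || function () {
      return this.reduce(function (a, b) { return a + b; }, 0);
    };
  }
};

MyLib.extendNatives = true; // the user opts in
MyLib.install();
console.log([1, 2, 3].sum()); // 6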
To illustrate a JavaScript-specific pitfall:
Array.prototype.map = function map() { /* ... */ };
var a = [2];
for (var k in a) {
  console.log(a[k]);
}
// logs 2, then the patched function map() { /* ... */ }
This issue can be avoided by using ES5 which allows you to inject non-enumerable properties into an object.
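For example, a sketch of a non-enumerable polyfill using Object.defineProperty (simplified, not fully spec-compliant):
if (!Array.prototype.map) {
  Object.defineProperty(Array.prototype, 'map', {
    value: function (callback, thisArg) {
      var result = [];
      for (var i = 0; i < this.length; i++) {
        result.push(callback.call(thisArg, this[i], i, this));
      }
      return result;
    },
    enumerable: false, // the key difference: hidden from for..in
    writable: true,
    configurable: true
  });
}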
This is mainly a high-level design choice, and everyone needs to be aware of and agree on it.
It's perfectly reasonable to use "monkey patching" to correct a specific, known problem where the alternative would be to wait for a patch to fix it. That means temporarily taking on responsibility for fixing something until there's a "proper", formally released fix that you can deploy.
A considered opinion by Gilad Bracha on Monkey Patching: http://gbracha.blogspot.com/2008/03/monkey-patching.html
The conditions you describe (adding new behavior rather than changing existing behavior, and not releasing your code to the outside world) seem relatively safe. Problems could come up, however, if the next version of Ruby or JavaScript or Rails changes its API. For example, what if some future version of jQuery checks to see if Array.map is already defined, and assumes it's the ECMAScript 5 version of map when in actuality it's your monkey-patch?
Similarly, what if you define "sum" in Ruby, and one day you decide to use that Ruby code in Rails or add the Active Support gem to your project? Active Support also defines a sum method (on Enumerable), so there's a clash.

Why is it frowned upon to modify JavaScript object's prototypes?

I've come across a few comments here and there about how it's frowned upon to modify a JavaScript object's prototype. I personally don't see how it could be a problem. For instance, extending the Array object to have map and include methods, or creating more robust Date methods?
The problem is that a prototype can be modified in several places. For example, one library will add a map method to Array's prototype, and your own code will add the same method but with a different purpose. So one of the implementations will be broken.
Mostly because of namespace collisions. I know the Prototype framework has had many problems with keeping their names different from the ones included natively.
There are two major methods of providing utilities to people:
Prototyping
Adding a function to an Object's prototype. MooTools and Prototype do this.
Advantages:
Super easy access.
Disadvantages:
Can use a lot of system memory. While modern browsers just fetch an instance of the property from the constructor, some older browsers store a separate instance of each property for each instance of the constructor.
Not necessarily always available.
What I mean by "not available" is this:
Imagine you have a NodeList from document.getElementsByTagName and you want to iterate through them. You can't do..
document.getElementsByTagName('p').map(function () { ... });
..because it's a NodeList, not an Array. The above will give you an error something like: Uncaught TypeError: [object NodeList] doesn't have method 'map'.
I should note that there are very simple ways to convert NodeLists and other array-like objects into real arrays.
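For instance (either of these is a common approach; Array.from requires ES2015 or a polyfill):
var paragraphs = document.getElementsByTagName('p');

// ES5-era conversion of an array-like NodeList to a real array:
var asArray = Array.prototype.slice.call(paragraphs);

// Modern alternative:
var alsoArray = Array.from(paragraphs);

asArray.map(function (p) { return p.textContent; });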
Collecting
Creating a brand new global variable and stockpiling utilities on it. jQuery and Dojo do this.
Advantages:
Always there.
Low memory usage.
Disadvantages:
Not placed quite as nicely.
Can feel awkward to use at times.
With this method you still couldn't do..
document.getElementsByTagName('p').map(function () { ... });
..but you could do..
jQuery.map(document.getElementsByTagName('p'), function () { ... });
..but as pointed out by Matt, in usual use, you would do the above with..
jQuery('p').map(function () { ... });
Which is better?
Ultimately, it's up to you. If you're OK with the risk of being overwritten/overwriting, then I would highly recommend prototyping. It's the style I prefer and I feel that the risks are worth the results. If you're not as sure about it as me, then collecting is a fine style too. They both have advantages and disadvantages, but all in all, they usually produce the same end result.
As bjornd pointed out, monkey-patching is a problem only when there are multiple libraries involved. Therefore it's not a good practice if you are writing reusable libraries. However, it still remains the best technique out there for ironing out cross-browser compatibility issues when using host objects in JavaScript.
See this blog post from 2009 (or the Wayback Machine original) for a real incident when prototype.js and json2.js are used together.
There is an excellent article by Nicholas C. Zakas explaining why this practice is not something any programmer should reach for on a team or customer project (maybe you can do some tweaks for educational purposes, but not for general project use).
Maintainable JavaScript: Don’t modify objects you don’t own:
https://www.nczonline.net/blog/2010/03/02/maintainable-javascript-dont-modify-objects-you-down-own/
In addition to the other answers, an even more permanent problem that can arise from modifying built-in objects is that if the non-standard change gets used on enough sites, future versions of ECMAScript will be unable to define prototype methods using the same name. See here:
This is exactly what happened with Array.prototype.flatten and Array.prototype.contains. In short, the specification was written up for those methods, their proposals got to stage 3, and then browsers started shipping it. But, in both cases, it was found that there were ancient libraries which patched the built-in Array object with their own methods with the same name as the new methods, and had different behavior; as a result, websites broke, the browsers had to back out of their implementations of the new methods, and the specification had to be edited. (The methods were renamed.)
For example, there is currently a proposal for String.prototype.replaceAll. If you ship a library which gets widely used, and that library monkeypatches a custom non-standard method onto String.prototype.replaceAll, the replaceAll name will no longer be usable by the specification-writers; it will have to be changed before browsers can implement it.

Javascript wrapper that gives us Rubyish Javascript?

Are there any frameworks/wrappers out there that give us Ruby-ish JavaScript?
Instead of the usual for () {} loop, something that gives us an object.each {} loop like in Ruby?
Since JavaScript has to be used in web browsers anyway, I want to use it on the server side too, but I like Ruby's syntax far more.
The Prototype library, having been developed by guys very close to Ruby on Rails, has a very Ruby-ish feel. It uses Ruby lingo (like mixins); for instance, the Enumerable mixin (which Prototype mixes in to arrays by default) adds the each method to an array, so you can do this:
["sample", "array"].each(function (item) {
console.log(item);
});
Look up jQuery. It has:
$('.css-selector').each(function (i) {
  // do stuff
});
Ref: http://api.jquery.com/jQuery.each/
You might want to checkout JS.Class - Ruby-style JavaScript. From the docs,
JS.Class is a set of tools designed to make it easy to build robust object-oriented programs in JavaScript. It’s based on Ruby, and gives you access to Ruby’s object, module and class systems, some of its reflection and metaprogramming facilities, and a few of the packages from its standard library. It also provides a powerful package manager to help load your applications as efficiently as possible.
It comes with a well packaged standard library including modules and classes such as
Enumerable
Hash
Set
Observable
Command
The Enumerable module, for instance, is comparable to that in Ruby, and includes methods like
all
any
collect
drop
findAll
forEach
grep
partition
reject
select
zip
Here's a post by Ken Egozi which discusses adding .forEach and other helpers to the array prototype.
