I am attempting to make a JavaScript library which I would prefer to be compatible with both browsers and Node. However, there is some functionality offered in the Node API that isn't offered in browsers (such as compression). I know it would be possible to implement this functionality in plain JavaScript so it would be cross-compatible, but Node's native compression will probably perform much better, as it is much lower level.
How should I split between browser-compatible code and code that uses the Node API?
The way I see it, I could do one of the following:
make 2 separate scripts, one for node and one for browsers
make my code figure out the environment it is in and act accordingly (a minimal sketch of this follows the question)
make all my code the same, but lose some performance improvements I would have had in node
What should I do to solve this?
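(For what it's worth, the "figure out the environment" option usually comes down to a small runtime check. A minimal sketch, where the browser fallback function is a hypothetical placeholder rather than a real API:)

```javascript
// Hedged sketch: use Node's native zlib when available, otherwise fall back
// to a pure-JavaScript implementation.
// browserFallbackCompress is a hypothetical placeholder, not a real function.
var isNode = typeof process !== 'undefined' &&
             process.versions != null &&
             process.versions.node != null;

function compress(buf, callback) {
  if (isNode) {
    require('zlib').gzip(buf, callback);     // native, lower-level compression
  } else {
    browserFallbackCompress(buf, callback);  // cross-compatible JS fallback
  }
}
```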
I know this is an old question; however, today this is easy to do with Browserify. Browserify lets you write Node.js modules with require() syntax and have them converted to browser-compatible code.
They even ported zlib, which you mention, to work with it, so that dependency is covered.
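To make that concrete, here is a minimal sketch (assuming the zlib and Buffer ports that Browserify ships): the same require() call works in Node and, after bundling, in the browser.

```javascript
// Hedged sketch: require the Node module as usual; when bundling for the
// browser, Browserify substitutes its port of zlib (and Buffer).
var zlib = require('zlib');

zlib.gzip(Buffer.from('hello hello hello world'), function (err, compressed) {
  if (err) throw err;
  console.log('compressed to ' + compressed.length + ' bytes');
});
```

Bundling with something like browserify main.js -o bundle.js then produces a single browser-compatible file.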
I hope this helps future readers; Browserify helped me :)
Related
We need to move a large application from Silverlight over to HTML5.
The application will have a client and server part.
Because of the size of the application, I thought it might be worth dividing some of the functionality into npm modules.
That way, if I need to use it on the server side, I can, and if I want to use it on the client (using Aurelia), I can do that through jspm.
From a reusability standpoint for modularized JS, would you reckon npm is the best approach to maintain a versioned, reusable stack, or are there other ways of dealing with this?
Just want to do a sanity check to make sure I am on the right track.
Modularized code is definitely the way to go, and I don't see any issue with using npm as a versioned repository to deal with this, especially as the code grows and is used by more and more people. Another approach might be using GitHub's version tags, which might be a simpler solution (or at least keep everything in one place).
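A hedged sketch of what either option looks like from a consuming project's package.json (the module and organisation names are made up): a semver range resolved through npm, or a dependency pinned to a GitHub version tag.

```json
{
  "dependencies": {
    "our-shared-core": "^1.2.0",
    "our-client-widgets": "our-org/our-client-widgets#v1.2.0"
  }
}
```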
I'm fairly new to JS development and trying to figure out some best practices for developing and using AMD libraries. Suppose I'm developing a JavaScript library which depends on jQuery, Underscore, and whatever else. Furthermore, I want to make this library an AMD module and optimize it into a big monolithic file. But the question is, how monolithic? Should it also pull in jQuery and Underscore so that it's completely self-contained? It seems like the pros and cons of this approach are:
Pro: it's easy to use
as an app developer using this library, you can just get it and add a dependency on it, without needing to know that you need jQuery, Underscore, etc.
no having to configure RequireJS with the paths to those things
no worrying about the case where one library needs jQuery 1.x while another library needs 2.x
Con: it's bloated
if the main application or another library also needs to use jQuery, which seems likely, it will essentially get downloaded twice (or n times)
Anything I'm missing here? So which is the right way to do this, or is the answer "it depends", or "make versions of both"? It seems like in general you'd like to share code where possible, but it puts the onus on the consumer of libraries which have non-included dependencies, and necessitates a tool which solves the constraints to find a version of a given library which is compatible with all dependent components. Is there something out there that does something like this?
I can't think of any good reason to include a third party library (such as jQuery or Underscore) with your own library. It's rare to see this technique employed anywhere, if at all, as it restricts the consumer of your code too much.
Not only would it add bloat as you say, but what if I wanted to use Zepto or Lo-Dash, or a different version of jQuery? If your library simply lists jQuery and Underscore as dependencies, then I could easily map those to load alternate versions or libraries.
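For example, the library itself would only declare the dependencies by module ID and leave the choice of implementation to the consumer. A minimal sketch (the module body is made up for illustration):

```javascript
// Hedged sketch: "jquery" and "underscore" are declared by ID only; which
// files actually back those IDs is decided by the consumer's configuration.
define(['jquery', 'underscore'], function ($, _) {
  'use strict';

  // hypothetical library code
  return {
    activeItems: function (selector) {
      return _.filter($(selector).toArray(), function (el) {
        return $(el).hasClass('active');
      });
    }
  };
});
```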
Users of AMD (and RequireJS) are typically very comfortable with configuring paths, maps and shims, as it is necessary in nearly all instances, so I wouldn't worry about that.
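A hedged sketch of that kind of consumer-side configuration (all module names and file paths here are illustrative):

```javascript
requirejs.config({
  paths: {
    jquery: 'vendor/jquery-2.1.4.min',
    'jquery-legacy': 'vendor/jquery-1.11.3.min',
    underscore: 'vendor/lodash.underscore'      // e.g. point "underscore" at Lo-Dash
  },
  map: {
    // one stubborn module keeps jQuery 1.x while everything else gets 2.x
    'legacy-widget': { jquery: 'jquery-legacy' }
  },
  shim: {
    // a non-AMD script that expects globals
    'legacy-plugin': { deps: ['jquery'], exports: 'LegacyPlugin' }
  }
});
```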
Keeping everything separate will also allow flexibility when it comes to optimising the JS for production. For example, I often like to build jQuery into a main module that is loaded on all pages and set other modules to exclude it.
An example of what I mean with that can be seen here:
https://github.com/simonsmith/modular-html-requirejs
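In the spirit of that repository, an r.js build profile might look roughly like this (a hedged sketch; the module names and paths are made up): common dependencies are baked into a main module, and each page module excludes what main already contains.

```javascript
({
  baseUrl: 'js',
  dir: 'dist',
  modules: [
    { name: 'main', include: ['jquery', 'underscore'] },
    { name: 'pages/home', exclude: ['main'] },
    { name: 'pages/profile', exclude: ['main'] }
  ]
})
```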
I would say you should provide an unoptimized version and an optimized one. If for some reason you can't do both then just provide the unoptimized version. Why the unoptimized version?
Your code is probably not bug-free. It is easier for someone using your library to track down a bug and maybe contribute a patch if there is a 1-for-1 mapping between the original source and what they observed in their debugging environment. (I've tried source maps. They were useful but produced funky results.)
You've said it yourself: libraries like jQuery could end up being loaded n times in the final application. If you provide a way to let developers use just your library, they can run tests to determine whether they can replace different versions of jQuery with just one. (In the majority of cases where a library mentions or ships a specific version of jQuery, it's just because it happened to be the version that was current when the library was made, and not because of a hard dependency.)
Provide the optimized version so that someone who just wants to try your library can do it fast.
You ask:
It seems like in general you'd like to share code where possible, but it puts the onus on the consumer of libraries which have non-included dependencies, and necessitates a tool which solves the constraints to find a version of a given library which is compatible with all dependent components. Is there something out there that does something like this?
Yes. A test suite. As a consumer of libraries, I'm unlikely to use one that does not have a substantial test suite. As a producer of software, I don't produce software without a substantial test suite. As I've said above, most often a library that depends on something like jQuery or underscore or what-have-you will list whatever version that happened to be around when the library was developed.
I'm trying to get a more powerful regex library into JavaScript. The only solution I have found is to compile the Oniguruma regex library to JavaScript using Emscripten.
I've installed Emscripten and tested it with their small test scripts, and I have also downloaded the Oniguruma source code, but I still don't know what should be done next.
Is anyone familiar with Emscripten?
When you use Emscripten, the general way of building/compiling from C/C++ stays much the same. What changes is that you don't use e.g. the gcc compiler but the Emscripten compiler (emcc).
That said, there is the general question of whether you are familiar with C/C++ and, more specifically, with autotools (which seems to be the build tool Oniguruma uses). If you are not, you will probably have a very hard time understanding what needs to be done and how.
Last I checked, Emscripten did not have support for libtool, so a build that relies on autotools will probably fail. Feel free to ask in the Emscripten IRC channel whether this is indeed still the case.
Another way I can think of is using autotools to generate Makefiles and then writing custom targets for Emscripten programs. Beware that this is for advanced users, familiar with the make cruft.
If these steps are too taxing for you, perhaps you should see whether a JavaScript library is sufficient for your needs.
A more realistic approach is to use http://xregexp.com. It adds many more features to regular expressions and compiles them down to JavaScript's more limited RegExp dialect, so you get the best of both features and performance. A regex library compiled with Emscripten is very unlikely to be performant enough to use in production. For some uses Emscripten is excellent, but in this case the overhead seems unlikely to be worth it.
The author of XRegExp even has an article on lookbehinds http://blog.stevenlevithan.com/archives/javascript-regex-lookbehind
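To give a flavour of what XRegExp adds, here is a small hedged sketch using named capture groups and the free-spacing 'x' flag (it assumes XRegExp has been loaded via a script tag or require('xregexp')):

```javascript
var date = XRegExp('(?<year>  \\d{4}) -?  # year  \n' +
                   '(?<month> \\d{2}) -?  # month \n' +
                   '(?<day>   \\d{2})     # day     ', 'x');

var match = XRegExp.exec('2021-06-15', date);

// Depending on the XRegExp version, named captures are exposed directly on
// the match object or under match.groups.
console.log(match.year || match.groups.year); // "2021"
```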
So, you are using a bunch of JavaScript libraries on a website. Your JavaScript code calls their APIs, but every once in a while, after an upgrade, one of the APIs changes and your code breaks without you knowing it.
How do you prevent this from happening?
I'm mostly interested in JavaScript, but any answer regarding dynamically typed languages would be valuable.
I don't think there's much you can do. You always run a risk when updating any piece of software. The best advice is to:
Read and understand documentation about upgrading
Upgrade in your test environment
TEST
Roll out live when you are happy there are no regressions
You should consider building unit tests using tools such as JsUnit and Selenium. As long as your code passes the tests, you're good to go. If some tests fail, you would quickly identify what needs to be fixed.
As an example of a suite of Selenium tests, you can check the Google Maps API Tests, which you can download and run locally in your browser.
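Even without a full Selenium setup, the core idea can be reduced to a tiny smoke test that fails loudly when an upgraded library no longer exposes the API surface you rely on. A framework-agnostic hedged sketch (the checked names, jQuery.ajax and _.template, are only examples):

```javascript
// Hedged sketch: assert that the third-party APIs your code depends on still
// exist after an upgrade. Run it with your test runner of choice.
function assertIsFunction(obj, name) {
  if (!obj || typeof obj[name] !== 'function') {
    throw new Error('Expected "' + name + '" to still be a function after the upgrade');
  }
}

assertIsFunction(window.jQuery, 'ajax');
assertIsFunction(window.jQuery && window.jQuery.fn, 'fadeIn');
assertIsFunction(window._, 'template');
console.log('third-party API smoke test passed');
```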
Well there are two options:
Don't upgrade
Retest everything after you upgrade.
There is no way to guarantee that an upgrade won't break something. Even if you have something that could check the underlying API and make sure it still all lines up, you can't be certain that the underlying functionality is the same.
I'm in the process of selecting an API for building a GWT application. The answers to the following questions will help me choose among a set of libraries.
Does third-party code rewritten in GWT run faster than code using a wrapped JavaScript library?
Will code using a wrapped library have the same performance as pure GWT code if the underlying JavaScript framework is well written and tuned?
While JavaScript libraries get a lot of programming eyeballs and attention, GWT has the advantage of being able to do some hideously non-human-readable things to the generated JavaScript code per browser for the sake of performance.
In theory, anything the GWT compiler does, the JavaScript writers should be able to do. But in practice the JS library writers have to maintain their code. Look at the jQuery code. It's obviously not optimized per browser. With some effort, I could take jQuery and target it for Safari only, saving a lot of code and speeding up what remains.
It's an ongoing battle. The JavaScript libraries compete against each other, getting faster all the time. GWT gets better and better, and has the advantage of being able to write ugly unmaintainable JavaScript per browser.
For any given task, you'll have to test to see where the arms race currently places us, and it'll likely vary between browsers.
In some cases you don't have another option: you cannot rewrite everything when moving to GWT.
As a first step you could just wrap your existing code, and if it turns out to be a performance bottleneck you can still move the code to Java/GWT.
The code optimisation in GWT will certainly be better than what the majority of JS developers can write. And when the browsers change, it is just a matter of updating the GWT optimizer, and your code will be better tuned for the latest advances in JS technology.
Depends on how well the code is written.
I would think so.
Generally, look at the community around a third-party library before using it, unless it is open source (so you can fix bugs yourself), and specifically look for posts concerning bugs: how quickly do the maintainers respond to issues? How long is the release cycle? Etc.