Sorry if I'm a bit ambiguous, but I'm trying to understand the real advantages of using Node.js instead of other server-side languages.
I'm a JavaScript enthusiast, so I'm probably going to play with Node.js, but I want to know if I should use it in my projects.
It's evented, asynchronous, non-blocking I/O built on top of V8.
So we get all the performance gains of V8, Google's JavaScript engine. Since the JavaScript performance race hasn't ended yet, you can expect Google to keep improving V8's performance (for free).
We have non-blocking I/O which is simply the correct way to do I/O. This is based on an event loop and using asynchronous callbacks for your I/O.
It gives you useful tools like creating an HTTP server, creating a TCP server, and handling file I/O.
It's a low-level, high-performance platform for doing any kind of I/O without having to write the entire thing in C from scratch. And it scales very well thanks to the non-blocking I/O.
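As a minimal sketch of what that looks like in practice, here is a non-blocking HTTP server using only the built-in http module (the port number is arbitrary):

```js
// minimal non-blocking HTTP server using Node's built-in http module
const http = require('http');

const server = http.createServer((req, res) => {
  // this callback runs on the event loop for every incoming request;
  // nothing here blocks while other connections are being handled
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Node.js\n');
});

server.listen(8080, () => {
  console.log('Listening on http://localhost:8080');
});
```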
So you want to use Node.js if you want to write highly scalable and efficient applications using non-blocking I/O whilst still having a high-level scripting language available. If needed, you can hand-optimise parts of your code by writing extensions in C.
There are plenty of open-source libraries for Node.js that will give you abstractions, like Express.js and now.
You don't want to use Node.js if you want (slow) high-level abstractions to do everything for you. You don't want to use Node.js if you want RAD. And you don't want to use Node.js if you can't afford to trust a young platform, whether because you'd have to write large pieces of code yourself to do things that are built into other frameworks, or because the API isn't stable yet and it's still a sub-1.0 release.
The two most oft-quoted advantages are:
JavaScript is both server-side and client-side. There are fewer things to learn, less context switching, and the ability to reuse code across the two sides.
Uses non-blocking I/O, and Chrome's V8 engine, to provide fast, highly scalable servers.
For me though, the most interesting part is the amount of activity happening in this area. There are a lot of very interesting ideas under development for node - be sure to check out the list of Node.js modules.
Whether or not you're a JavaScript enthusiast, you can/should use Node.js for a number of reasons:
It's a low-level, lightweight and standalone framework which brings the power of JavaScript to the server-side environment.
If you prefer higher-level abstractions, there is a large number of modules and the npm package manager, where you can find a wide range of ready-to-use packages.
Fast, unencumbered development process - for example, you don't need tons of additional tools in order to start writing serious stuff.
A big open-source community full of enthusiasts and very talented people.
Made for creating real-time, web-oriented applications - that's where the (near) future is.
Personally, I'd most likely use Node.js when:
I want to write a server that doesn't use the HTTP protocol (see the TCP sketch after this list).
I'm prototyping a server implementation.
I'm writing a server that isn't expecting a ton of traffic (although I've never profiled a Node.js implementation next to, say, a matching C++ implementation).
I want to get active in the community (which is apparently growing quite rapidly).
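For the first point, a minimal sketch of a non-HTTP (plain TCP) server using the built-in net module; the port number is arbitrary:

```js
// simple TCP echo server using Node's built-in net module
const net = require('net');

const server = net.createServer((socket) => {
  // called once per client connection; data arrives asynchronously
  socket.on('data', (chunk) => {
    socket.write(chunk); // echo the bytes back to the client
  });
  socket.on('error', (err) => console.error('socket error:', err.message));
});

server.listen(9000, () => {
  console.log('TCP echo server listening on port 9000');
});
```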
Node.js multithreading.
Is it possible to use multithreading in Node.js? If yes:
What are the advantages and disadvantages of using multithreading in Node.js? Which modules can be used to achieve multithreading in Node.js? I am a newbie to Node.js; I've read many blogs saying that Node.js is single-threaded.
I know Java multithreading, but I need to know whether it is possible in Node.js or not.
Yes and no. Let's start from the beginning. Why Node.js is single-threaded is explained here: Why is Node.js single threaded?
While Node.js itself is multithreaded -- I/O and other such operations run from a thread pool -- JavaScript code executed by Node.js runs, for all practical purposes, in a single thread. This isn't a limitation of Node.js itself, but of the V8 JavaScript engine and of JavaScript implementations generally.
Node.js includes a native mechanism for clustering multiple Node.js processes, where each process runs on a separate core. But that clustering mechanism doesn't include any native routing logic or shared state between workers.
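A minimal sketch of that built-in cluster module, forking one worker process per CPU core (the port is arbitrary):

```js
// one Node.js process per CPU core using the built-in cluster module
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // master process: fork one worker per core
  os.cpus().forEach(() => cluster.fork());
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} exited`);
  });
} else {
  // each worker runs its own single-threaded event loop
  http.createServer((req, res) => {
    res.end(`handled by pid ${process.pid}\n`);
  }).listen(8080);
}
```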
Put more precisely, the statement is that each Node.js process is single-threaded: if you want multiple threads, you have to have multiple processes as well.
For instance, you can use child processes for this, which are described here: http://nodejs.org/api/child_process.html. And just for your info, also check out this article -- it is very instructive and well written, and will likely help you if you want to work with child_process: https://blog.scottfrees.com/automating-a-c-program-from-a-node-js-web-app
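A small sketch of the child_process approach: forking a second Node.js process and exchanging messages with it. The worker file name and message format are just illustrative:

```js
// parent.js: fork a second Node.js process and exchange messages with it
const { fork } = require('child_process');

const child = fork('./worker.js'); // worker.js is a hypothetical script
child.on('message', (msg) => console.log('result from child:', msg.sum));
child.send({ numbers: [1, 2, 3, 4] });

// worker.js would contain something like:
// process.on('message', (msg) => {
//   const sum = msg.numbers.reduce((a, b) => a + b, 0);
//   process.send({ sum });
// });
```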
Apart from all of the above, you can achieve a kind of multithreading with C++ and native Node.js C++ addon development.
First of all, check out these answers; they will probably help you:
How to create threads in nodejs
Node.js C++ addon: Multiple callbacks from different thread
Node.js C++ Addon: Threading
https://bravenewmethod.com/2011/03/30/callbacks-from-threaded-node-js-c-extension/
Of course, you can find and leverage a lot of Node plugins that provide "multi"-threading capability: https://www.npmjs.com/search?q=thread
In addition, you can check JXCore: https://github.com/jxcore/jxcore
JXCore is a fork of Node.js and allows Node.js apps to run on multiple threads housed within the same process. So JXCore is quite possibly a solution for you.
"What are the advantages and disadvantages of using multi-threading in Node.js ?"
It depends on what you want to do. There are no disadvantages if you leverage and use Node.js correctly, and your "multi"-threaded plugins or processes (or whatever) do not "hack" or misuse anything in the core of V8 or Node.js!
As in every answer, the correct answer is "use the right tools for the job".
Of course, since Node is by design single-threaded, there can be better approaches to multithreading than modifying Node itself.
A technique that a lot of people use is to write their multi-threaded application in C++, Java, Python etc. and then run it via automation and Node.js child_process: the third-party application runs asynchronously alongside Node, you get better performance (e.g. a C++ app), and you can send it input and get output back from your Node.js application.
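A rough sketch of that pattern using child_process.spawn; the external binary name and its input/output format here are purely illustrative:

```js
// drive an external (possibly multi-threaded) program from Node.js
const { spawn } = require('child_process');

// './my-native-tool' is a placeholder for your C++/Java/Python executable
const tool = spawn('./my-native-tool', ['--mode', 'batch']);

tool.stdout.on('data', (chunk) => {
  console.log('output from native tool:', chunk.toString());
});
tool.on('close', (code) => console.log('tool exited with code', code));

// send input over stdin without blocking the Node.js event loop
tool.stdin.write('some input data\n');
tool.stdin.end();
```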
Disadvantages of multithreading in Node.js:
Check this: https://softwareengineering.stackexchange.com/questions/315454/what-are-the-drawbacks-of-making-a-multi-threaded-javascript-runtime-implementat
Keep in mind that if you want to create a pure multithreaded environment in Node.js by modifying it, I suppose that would be difficult and risky due to the complexity; moreover, you would have to stay up to date with each new V8 or Node release, which would probably affect your changes.
No, you can't use threads in node.js. It uses an asynchronous model of code execution. Behind the asynchronous model, Node itself uses threads, but as far as I know they can't be accessed in your app without additional libraries.
With the asynchronous model you don't actually need threads. Here is a simple example. Normally, in multi-threaded environments, you would run network requests in separate threads so they don't block the execution of code in the main thread. With the async model, those requests do not block the main thread; the underlying I/O is handled for you in the background, and this is hidden from you to make the development process straightforward.
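A small example of what that looks like: the request below is handed off to the runtime, and the last line runs immediately, long before the response arrives:

```js
// a network request issued without blocking the main thread
const https = require('https');

https.get('https://example.com', (res) => {
  // this callback fires later, when the response starts arriving
  console.log('status:', res.statusCode);
  res.resume(); // drain the response so the socket is released
});

console.log('request sent, main thread keeps running');
```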
Also check this comment by bazza.
Matlab, R, and Python are powerful but either costly or slow for some data mining work I'd like to do. I'm considering using JavaScript for speed, good visualization libraries, and the ability to use the browser as an interface.
The first question I faced is the obvious one for scientific programming: how do I do I/O to data files? The second is client-side or server-side? The last question: can I make something that is truly portable, i.e. put it all on a USB stick and run it from that?
I've spent a couple of weeks looking for answers. Server2go seems to address the client/server needs, which I think means I can get data to and from the programs on the client side. Server2go also allows running from a USB stick. The data files I work with are usually XML, and there seem to be several JavaScript XML-to-JSON converters.
However, after all the looking around, I'm not sure if my approach makes sense. So before I commit further, any advice/thoughts/guidance on Javascript as a portable tool for scientific data processing?
I have to agree with the comments that JavaScript is not a good fit for scientific processing. However, you know your requirements best; maybe you already found useful libraries that do what you need. Just be aware that you'll have to implement all the logic yourself. There is no built-in handling of complex numbers, or matrices, or integrals, or ... Usually programmer time is far more valuable than machine time. Personally, I'd only look into compiled languages after a first version, written in whatever language I like the most, turned out not to be fast enough.
Assuming that JavaScript is the way to go:
Data I/O
I can think of three options:
Sending and receiving data with ajax to a server
Seems to be the solution you've found with Server2go. It requires you to write a server back end, but that can be kept quite simple. All it really needs to do is read and write files in response to requests from your client-side application.
Using a non-browser implementation of V8 that includes file I/O
For instance Node.js. You could then avoid the need for a server and simply use a command-line interface, and all your code will be JavaScript (a minimal sketch of this appears after these options). Other than that it is roughly equivalent to the first option.
Creating a file object using the file API which you ask the user to save or load
It is the worst option in my opinion, as user interaction is required. It would avoid the need for a server; your application could be a simple html file that loads all data files with ajax requests. You'd have to start Chrome with a special switch to allow ajax requests with the file:// protocol, as described here
These options are only concerned with file I/O, and that's because you can't do file I/O in browser JavaScript. Browsers cannot allow arbitrary web code to do arbitrary file I/O; the security implications would be horrendous. Each option describes one way to work around that.
The first communicates with a server that does the file I/O for the client.
The second uses "special" versions of JavaScript, with conditions other than that of the browser so the security implications are not important. But that means you'll have to look up how file I/O is done in the actual implementation you use, it's not common to JavaScript.
The third requires the user to control the file I/O.
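As an illustration of the second option, here is a minimal command-line Node.js script that reads a data file, applies a placeholder transformation, and writes the result back out; the file names and the filtering rule are just examples:

```js
// read a data file, transform it, and write the result -- no browser, no server
const fs = require('fs');

fs.readFile('input.json', 'utf8', (err, text) => {
  if (err) throw err;
  const records = JSON.parse(text);
  // placeholder "processing": keep only records above a threshold
  const filtered = records.filter((r) => r.value > 10);
  fs.writeFile('output.json', JSON.stringify(filtered, null, 2), (err2) => {
    if (err2) throw err2;
    console.log(`wrote ${filtered.length} records to output.json`);
  });
});
```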
Interface
Even if you don't use JavaScript to do the actual processing, which so far is the consensus, there is nothing stopping you from using a browser as the interface or JavaScript libraries for visualisation. That is something JavaScript is good at.
If you want to interactively control your data mining tool, you will need a server that can control the tool. Server2go should work, or the built-in server in Node.js if you use that, or... If you don't need interactive control of the data tool -- that is, you first generate the processed data and then look at it -- a server can be avoided by using the file:// protocol and JSONP. But really, avoiding a server shouldn't be a goal.
I won't go into detail about interface issues, as there is nothing specific to say and very nearly everything that has been written about javascript is about interface.
One thing, do use a declarative data binding library like Angular.js or Knockout.js.
JavaScript speed is heavily overrated. This is a Web 2.0 myth.
Let me explain this claim a bit (and don't just downvote me for saying something you do not want to hear!)
Sure, JavaScript V8 is a quite highly optimized VM. It does beat many other scripting languages in naive benchmarks.
However, it is a language of very limited scope. It is meant for the "ADHD world" of the web. It is best-effort: it may just fail, and you have few guarantees about things completing, or completing on time.
Consider for example MongoDB. At first it seems to be good and fast and to offer a lot, until you see, for example, that its MapReduce is single-threaded only and thus really slow. All that glitters is not gold!
Now look at data mining relevant libraries such as BLAS. Basic linear algebra, math operations and such. All CPU manufacturers like Intel and AMD offer optimized versions for their CPUs. This is an optimization that requires detailed understanding of the individual CPUs, way beyond the capabilities of our current compilers. The libraries contain optimized codepaths for various CPUs all essentially doing the same thing.
And for these operations, using an optimized library such as BLAS can easily yield a 5-20x speedup; at the same time matrix operations that are often in O(n^2) or O(n^3) will dominate your overall runtime.
So a good language for data mining will let you go all the way to machine code!
Python's SciPy and R are good choices here. They have the optimized libraries inside and easily accessible, while letting you do the wrapper/glue work in a simpler language.
Have a look at this programming language benchmark:
http://benchmarksgame.alioth.debian.org/u32/which-programs-are-fastest.html
Pure JavaScript has a high variance, indicating that it can do some things fast (mostly regular expressions!) and others much slower. It can clearly beat PHP, but it will just as clearly be beaten by C and Java.
Multithreading is also important for modern data mining. Few large systems today have a single core, and you do want to make use of all cores. So you need libraries and a programming language that has a powerful set of multithreading operations. This is actually why Fortran and C are losing popularity here. Other languages such as Java are much better here.
Although this discussion is a bit old and I am not a JavaScript guru by any stretch of the imagination, I find the above arguments about JavaScript lacking the processing speed or the capabilities for advanced math operations doubtful. WebGL is a JavaScript API for rendering advanced 2D and 3D graphics which relies heavily on advanced math operations. I believe the capabilities are there from a technical point of view; however, what is lacking is good libraries for handling statistical analysis, natural language processing and the other predictive analytics included in data mining.
WebGL is based on OpenGL, which in turn uses libraries like BLAS (library info here).
Advances like Node.js and V8 make it technically possible. What is lacking are libraries like those we can find in R and Scilab to do the same operations.
Disclaimer: This question will be a bit open-ended. I also expect replies to be partly based off of developer preference.
I've been doing some research lately on Express.js (coupled with Node.js) and I'm struggling to see how I would fit either of these technologies into my current workflow for developing websites. Lately I've been working in either Wordpress or Ruby on Rails; the former runs on Apache, the latter on its own server (I assume).
Now perhaps I'm just not understanding something, but I fail to see the advantages of enlisting the support of a JavaScript-based framework/server. If there are clear-cut advantages to making this part of my workflow, what would they be? I haven't been able to find any way to fit this into (per se) a Rails application or a Wordpress site. Could someone point me in the direction of some better help on implementing these technologies on top of the ones I already use?
One last question, what happens if someone has Javascript disabled in their browser? How would a Javascript-based server react (if at all)?
There are two big differences:
Event loop
Node.js is a bit different from the usual Apache concept because of the way it handles connections. Instead of having synchronous connections, Node uses an event loop to get non-blocking operations. Note that this is not unique to JavaScript, and there are C- and Python-based frameworks that enable a similar event-loop approach; however, in JavaScript it probably feels the most natural, since this is how JS has worked since it was introduced.
Supposedly, this should enable you to handle more concurrent clients. However, it hasn't had as much real world exposure as the regular blocking solutions so this approach isn't as mature as most current implementations. The actual performance difference is questionable as it depends on the exact requirements for the application.
Code Sharing
This point is much less controversial than the previous difference, but in essence, if you have the same language on both the client and the server, you can reuse a lot of the code instead of having to rewrite your data structures etc. in multiple languages, saving you a lot of development time. However, you have to understand that the concepts of server-side JS are different from what you know in the browser: you don't have dynamic DOM manipulation with jQuery or Prototype, and its results and use cases are more similar to what PHP is widely used for.
The primary advantage of having Javascript as your server-side language is that you're writing your whole system in a single language.
This has advantages in terms of:
Learning curve and mental context switching for the developer
and it also provides some possibility for sharing code between the two environments.
However, this last point is less helpful than it sounds, for a number of reasons, and this is where the disadvantages come in:
Not much code can actually be shared, because of the differences in what the two environments are actually doing. Maybe a bit of validation code, so you know that you're doing the same checks on both client and server, and maybe a few utility functions, but that's about it (a small sketch follows below).
The libraries you use will be completely different between the two as well: jQuery only works on the client, and Node has libraries that are server-specific.
So as a developer, you still need to mentally context switch between environments, because the two environments are different. They may share a language, but their modes of operation are different, and what they do is different. In fact, sharing the language between the two can actually make it harder to context switch, and this can lead to errors.
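As a tiny example of the kind of code that can realistically be shared, here is a validation function written once and loaded in both environments; the module name and the rule are purely illustrative:

```js
// validation.js -- shared between the browser and Node.js
(function (exports) {
  // returns true for a syntactically plausible email address
  exports.isValidEmail = function (value) {
    return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(String(value));
  };
})(typeof module !== 'undefined' ? module.exports : (window.validation = {}));

// on the server: const { isValidEmail } = require('./validation');
// in the browser: <script src="validation.js"></script>, then validation.isValidEmail(...)
```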
Finally, it's worth bearing in mind that while Node is getting lots of attention from the developer community, it is still new and evolving quickly: if you're a business considering it as a development platform, it's probably not quite stable enough yet to base a major project on.
Is NodeJS a good framework/codebase for a large server-side application? What I am looking to develop is a large application that will require HTTP transactions (states) and large amounts of concurrent users.
From what I've read online, NodeJS is not the best tool to use when it comes to large programs. What I've come across is as follows:
NodeJS runs on JavaScript which runs on event loops which are not very efficient when used in bulk.
NodeJS may be non-blocking, but all the requests are handled within a single thread so this can cause a bit of a bottleneck when many requests are handled.
NodeJS is built atop its own HTTP server so future maintenance will require its own sysadmin/developer hybrid to take care of the application.
There isn't as much well-tested and diverse software available for NodeJS that helps you build a bigger application.
Is there something I'm missing? Is NodeJS really as powerful as it can be?
Is NodeJS a good framework/codebase for a large server-side application?
That question is a bit subjective but I'm including actual objective points which solve real problems when working with node in a large project.
Update after working on the project for a while:
It is best as a front-end / API server, which is I/O bound (most front-end/API servers are). If you have back-end compute needs (processing etc...), it can be paired with other technologies (C# .NET Core, Go, Java etc... worker nodes).
I created this project as a sample illustrating most points - Sane Node Development:
https://github.com/bryanmacfarlane/sanenode
NodeJS is not built atop its own HTTP server. It's built atop the V8 Chrome JavaScript engine and doesn't assume an HTTP server. There is a built-in http module as well as the popular Express web server, but there are also socket modules (as well as socket.io). It's not just an HTTP server.
The single thread does not cause a bottleneck because all I/O is evented and asynchronous. This link explains it well: http://blog.mixu.net/2011/02/01/understanding-the-node-js-event-loop/
As far as software modules go, you can search the npm registry. Always look at how many other folks use a module (downloads) and visit its GitHub repo to see whether it's actively maintained (or whether there's a bunch of issues never getting attention).
Regarding "large project" what I've found critical for sane development is:
Compile time support (and intellisense): Find issues when you compile. If you didn't think you needed this like I did when I started, you will change your mind after that first big refactor.
Eliminate Callback Hell: Keep the performance, which is critical (noted above), but eliminate callback code. Use async / await to write linear code and keep async perf. It integrates with promises but is much better than solely using promises (a short sketch follows this list).
Tooling: Lots of options but I've found best is Typescript (ES6/7 today), VS Code (intellisense), Mocha (unit testing).
Instrumentation/Logging: Getting insights into your app with tracing and instrumentation is critical.
Build on well vetted frameworks: I use express as an example but that's a preference and there's others.
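A short sketch of the async/await point above: the I/O stays non-blocking, but the code reads linearly instead of nesting callbacks. File names here are placeholders:

```js
// linear-looking async code over non-blocking file I/O
const fs = require('fs').promises;

async function loadConfigAndData() {
  // each await yields to the event loop; nothing here blocks other requests
  const config = JSON.parse(await fs.readFile('config.json', 'utf8'));
  const data = await fs.readFile(config.dataFile, 'utf8');
  return { config, lineCount: data.split('\n').length };
}

loadConfigAndData()
  .then((result) => console.log('loaded', result.lineCount, 'lines'))
  .catch((err) => console.error('failed:', err.message));
```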
Node.js is a very good tool for building distributed network services. What your large-scale application design should be is more than a question of which tools to use. A lot of people use Node.js in a very heterogeneous way, together with Ruby, PHP, Erlang, Apache & nginx & HAProxy. If you don't know why you need Node, you probably don't need it. Possible reasons to consider Node:
you want to share common Javascript code between server and client
you expect highly concurrent load (thousands to hundreds of thousands simultaneous connections per server)
you (or your team) are much more proficient with JavaScript than with any other available language/framework
one of the 7600+ modules implements a large portion of the required functionality
One "issue" with NodeJS, is that building a large application requires discipline on the part of the developer / team.
This is particularly true with multiple teams within the same company. Existing frameworks are a bit loose, and different teams will come up with different approaches to solving an issue.
KrakenJS is a framework, built on top of express. It adds a layer of convention and configuration that should make it easy(er) to build large projects, involving multiple teams.
NodeJS really is powerful in its own way. Some more information:
You can run multiple instances of your app behind a load balancer to handle massive numbers of requests.
Choose NodeJS for reading 2000 files rather than for calculating the 20th prime number.
Keep NodeJS busy with reading from or writing to files and ports.
It is very useful when you need to broadcast your response to multiple clients.
Don't worry about deadlock in NodeJS, but do care about how frequently you perform the same operation.
Most importantly, values live in the V8 engine until the process is terminated, so be aware of how much code and data you are going to feed into NodeJS.
I find the most important thing is to use as little CPU time as possible. If your application needs to use the CPU intensively, event loop latency will increase and the application will fail to respond to other requests.
As far as I have learned, it's the raw speed and async/await that make the biggest difference.
For those who are moderately skilled in JavaScript and who particularly understand REST as well as the above-mentioned aspects, node.js is one of the best solutions for big enterprise applications.
Correct me if I am wrong, but even using raw Express (without the frameworks built on top of it) is actually good enough if teams agree on certain conventions and standards to follow.
This is a follow up question to Is there a point to multithreading?
If JavaScript were multithreaded, would it fare better than the existing system? Would multithreading with one UI thread and other tasks in different threads (background) bring in greater responsiveness and efficient usage of resources?
Why did the designers of the language decide to stick to the single-threaded model despite the growth in the number of CPUs per machine and the need to simultaneously pull different content and data through different mechanisms? Why are they still OK with the way JavaScript timers work when multithreading could offer far greater accuracy?
I am not trying to pin down JavaScript as inefficient, but rather to learn how much value multithreading brings compared to the complexity it introduces to programming.
Brendan Eich (Mozilla's chief technical officer) talked about the complexity this would create in "Threads suck".
His conclusion:
"So my default answer to questions such as the one I got at last May's Ajax Experience, "When will you add threads to JavaScript?" is: "over your dead body!"
Using web workers is the alternative.
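A minimal sketch of that Web Worker approach: heavy work runs off the UI thread, and results come back as messages. The worker file name is illustrative:

```js
// main.js -- runs on the UI thread
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
  console.log('sum computed off the UI thread:', e.data);
};
worker.postMessage([1, 2, 3, 4, 5]);

// worker.js -- runs in a separate background thread, with no DOM access
// onmessage = function (e) {
//   var sum = e.data.reduce(function (a, b) { return a + b; }, 0);
//   postMessage(sum);
// };
```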
If JavaScript were multithreaded, would it fare better than the existing system?
What are you exactly referring to when you say "the existing system"? The existing JavaScript embedded in your browser? Or standalone JavaScript implementations?
I am not trying to pin down JavaScript as inefficient
A language by itself isn't efficient or inefficient; its implementation is. :-)
Would multithreading with one UI thread and other tasks in different threads (background) bring in greater responsiveness and efficient usage of resources?
Rhino, the JavaScript implementation on the JVM, can access and use threads and any other Java library, so yes, you can script an entire Swing application in JavaScript and it will work like any other Swing application in Java out there.
Anyway, the question of how multithreading brings value in comparison to the complexity it introduces is not specific to any particular language. Some languages out there provide a pretty good wrapper around the multithreading capabilities offered by the runtime, and some don't. Multithreading is difficult, but all the more important these days given that we have multicore processors powering our everyday desktop machines. :-)
Multithreading would not have made much difference to JavaScript; even now, with Web Workers, we don't need it much for most JS applications. Only now that JS is moving server-side and bigger applications are being written are Web Workers becoming useful, and Web Workers are infinitely better and easier than multithreading.
Besides Web Workers, JS -- server-side JS in particular -- is also leaning towards asynchronous processing, further reducing the need for linear (blocking) processing.
In the past and even to this day, a majority of client side applications written in JS simply do not need multithreading as JS engines are also getting better and faster with HTML/CSS rendering being boosted by hardware acceleration as well.
And there's another nice thing, with JS (server to client) we can easily move a lot of processing to the client side, basically attaining a huge network of computers to do all the computing without clients needing to install fancy programs (just the browser).