Browser-side node.js or non-blocking javascript? - javascript

I am fascinated with non-blocking architectures. While I haven't used Node.js, I have a grasp of it conceptually. Also, I have been developing an event-driven web app so I have a fundamental understanding of event programming.
How do you write non-blocking javascript in the browser? I imagine this must differ in some ways from how Node does it. My app, for example, allows users to load huge amounts of data (serialized to JSON). This data is parsed to reconstitute the application state. This is a heavy operation that can cause the browser to lock for a time.
I believe using Web Workers is one way. (They seemed like the obvious choice; however, Node achieves a non-blocking, event-driven architecture without Web Workers, so I assume there must be another way.) I believe timers can also play a role. I have read about TameJS and some other libraries that extend the JavaScript language, but I am interested in libraries that use native JavaScript without introducing new language syntax.
Links to resources, libraries and practical examples are most appreciated.
EDIT:
Learned more and I realize that what I am talking about falls under the term "Futures".
jQuery implements this, but it always uses XHR to call a server, and the server does the processing before returning the result. What I am after is doing the same thing without calling the server: the client does the processing, but in a non-blocking manner.
http://www.erichynds.com/jquery/using-deferreds-in-jquery/
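For illustration, here is a rough sketch (not from the question) that combines the jQuery Deferred idea from that article with a setTimeout so the parsing happens client-side without blocking the current call stack; it assumes jQuery is loaded and parseStateDeferred is a hypothetical name:

// Wrap client-side parsing in a Deferred ("future") and resolve it later.
function parseStateDeferred(json) {
  var dfd = $.Deferred();
  setTimeout(function () {
    try {
      dfd.resolve(JSON.parse(json)); // heavy work done off the current stack
    } catch (e) {
      dfd.reject(e);                 // propagate parse errors to .fail()
    }
  }, 0);
  return dfd.promise();
}

parseStateDeferred('{"items": []}').done(function (state) {
  console.log('state ready', state);
});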

There are two methods of doing non-blocking work in the browser:
Web Workers. Web Workers create a new, isolated thread for you to do computation in; however, browser support tells you that IE<10 hates you.
Not doing it on the client. Expensive, blocking work should not be done on the client; send an AJAX request to a server to do it, then have the server return the results.
Poor man's threads:
There are a few hacks you can use:
Emulate time slicing by using setTimeout. This basically means that after every "lump" of work you give the browser some room to stay responsive by calling setTimeout(doMore, 10). This is basically writing your own process scheduler in a really poor, non-optimized manner; use Web Workers instead. (A sketch of this pattern follows the list below.)
creating a "new process" by creating a iframe with it's own html document. In this iframe you can do computation without blocking your own html document from being responsive.

What do you mean by non-blocking specifically?
The longest operations, Ajax calls, are already non-blocking (async).
If you need some long-running function to run "somewhere" and then do something, you can call
setTimeout(function, 0)
and call the callback from the function.
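A small sketch of that pattern: defer the heavy function with setTimeout(fn, 0) and invoke a callback when it is done. parseStateAsync is a hypothetical name standing in for whatever long-running work you have:

function parseStateAsync(json, callback) {
  setTimeout(function () {
    var state = JSON.parse(json);  // the long-running work
    callback(state);               // call the callback from the function
  }, 0);
}

parseStateAsync('{"items": []}', function (state) {
  console.log('state ready', state);
});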
You can also read up on promises.

Related

How does node js do it better?

How and when is the single-threaded, asynchronous processing model of Node.js a better approach than the multithreaded approach of established server-side languages like PHP, Java and C#? Can someone please explain this to me simply and clearly?
My question is: how, technically, is the single-threaded asynchronous processing model a better approach?
Grasping the Node JS alternative to multithreading
Node.js was created explicitly as an experiment in async processing. The theory was that doing async processing on a single thread could provide more performance and scalability under typical web loads than the typical thread-based implementation.
The single threaded, async nature does make things complicated. But do you honestly think it's more complicated than threading? One race condition can ruin your entire month! Or empty out your thread pool due to some setting somewhere and watch your response time slow to a crawl! Not to mention deadlocks, priority inversions, and all the other gyrations that go with multithreading.
But is it really single-threaded? Read this article: https://softwareengineeringdaily.com/2015/08/02/how-does-node-js-work-asynchronously-without-multithreading/
Node.js is built on top of Google's V8 engine, which in turn compiles JavaScript. As many of you already know, JavaScript is asynchronous in nature. Asynchronous is a programming pattern which provides the feature of non-blocking code, i.e. it does not stop, or depend on another function/process, to execute a particular line of code. Asynchronous execution is great in terms of performance, resource utilization and system throughput. But there are some drawbacks:
Very difficult for a legacy programmer to proceed with Async.
Handling control flow is really painful.
Callbacks are dirty.
NodeJS is single threaded and it is not a deterrent or a performance block really. The single threaded event loop is super efficient and is much less complicated than deploying effective multithreading. Multi-threading does not always mean better performance.
Having said that, if you do need to handle heavy concurrency, then you can employ the services of the cluster module which splits multiple NodeJS processes across available CPU cores, all the while maintaining a link with a master process which can be used to control/offload processing tasks.
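A minimal sketch of the cluster module mentioned above, based on the standard Node.js pattern: fork one worker per CPU core, each running the same HTTP server (the port number 8000 is an arbitrary choice):

const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();                         // spawn one worker process per core
  }
  cluster.on('exit', (worker) => {
    console.log('worker ' + worker.process.pid + ' exited');
  });
} else {
  http.createServer((req, res) => {
    res.end('handled by pid ' + process.pid); // each worker serves requests independently
  }).listen(8000);
}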
Node was built from the ground up with asynchronicity in mind, leveraging the event loop of JavaScript. It can handle a lot of requests quickly by not waiting around for the request when there are certain kinds of work being done for the request, such as database requests.
Imagine you have a database operation that takes 10 seconds to complete, represented here by a setTimeout:
var express = require('express');   // assumes an Express router, as in the original snippet
var router = express.Router();

router.route('/api/delayed')
  .get(function(req, res) {
    setTimeout(function() {
      res.end('foo');                // responds after the simulated 10-second operation
    }, 10000);
  });

router.route('/api/immediate')
  .get(function(req, res) {
    res.end('bar');                  // responds right away, even while /api/delayed is pending
  });
For a back-end framework that does not support asynchronous execution, this situation is an anti-pattern: the server will hang as it waits for the database operation to complete before fulfilling the request. In Node, it fires off the operation and then returns, ready to field the next incoming request. Once the operation finishes, it is handled in an upcoming cycle of the event loop and the request is fulfilled.
As long as we only write non-blocking code, our Node server will perform better than servers written in other back-end languages.
After reading about it in the book Web Development with MongoDB and Node.js (2nd Edition) by Mithun Satheesh, Jason Krol and Bruno Joseph D'mello, I finally came across a clear advantage:
To understand this, we should understand the problem that Node.js
tries to resolve. It tries to do asynchronous processing on a single
thread to provide more performance and scalability for applications
that are supposed to handle too much web traffic. Imagine web
applications that handle millions of concurrent requests; if the
server makes a new thread for handling each request that comes in, it
will consume a lot of resources and we would end up trying to add
more and more servers to increase the scalability of the application.
The single threaded asynchronous processing model has its advantage
in the previous context, and you can process much more concurrent
requests with less number of server-side resources.
And I notice that one can process many more concurrent requests with fewer server-side resources.
My 2 pence worth... I am not sure "if the single-threaded approach of nodejs is better": simply put, Node.js does not support multi-threading. That translates loosely to "everything runs in a single thread". Now, I am not quite sure how it can "compare" to a multi-threaded system, as a multi-threaded system can run both as a single thread (like Node.js) and as multiple threads. It's all in your application design and the platform capabilities that are available to you.
What is more important, in my opinion, is the ability to support multi-tasking in an asynchronous way. Node.js does provide support for multi-tasking, in a simplified and easy-to-use package, though it has limitations due to the lack of native support for multi-threading. To take advantage of the multi-tasking (and not worry much about multi-threading), design your server-side application to perform little chunks of work over a long period of time, with each chunk invoked by, and consuming, events generated from the client side. Think an event-driven design/architecture (simple switch/case loops, callbacks, and data checkpointing to files or a database can do the trick). And I will bet my tiny dollar that if you get your application to work in this fashion, sans multi-threading, it will be a much better and more robust design, and if you later migrate it (and adapt it for multi-threading) it will run like a SpaceX booster!
While multi-threading is always a plus for server-side implementations, it is also a powerful beast that requires a lot of experience and respect to tame and harness (something that Node.js shields/protects you from).
Another way to look at it is this: multi-tasking is a perspective (of running several tasks) at the application level, while multi-threading is a perspective at a lower level; multi-tasking can be mapped onto different implementations, with multi-threading being one of them.
Multi-threading capability
Truth: Node.js (currently) does not provide native support for multi-threading in the sense of low-level execution/processing threads. Java and its implementations/frameworks provide native support for multi-threading, and extensively too (pre-emption, multi-tenancy, synchronous multi-threading, multi-tasking, thread pools, etc.).
Pants on Fire(ish): lack of multi-threading in Node.js is a show stopper. Node.js is built around an event-driven architecture, where events are produced and consumed as quickly as possible. There is native support for functional callbacks. Depending on the application design, this high-level functionality can support what could otherwise be done by threads.
For server-side applications, at an application level, what is important is the ability to perform multiple tasks concurrently, i.e. multi-tasking. There are several ways to implement multi-tasking. Multi-threading is one of them, and is a natural fit for the task. That said, the concept of "multi-threading" is a low-level platform aspect. For instance, a multi-threaded platform such as Java, hosted/running on a single-core server (a server with one CPU core), still supports multi-tasking at the application level, mapped to multi-threading at the low level, but in reality only a single thread can execute at any one time. On a multi-core machine with, say, 4 cores, the same multi-tasking at the application level is supported, with up to 4 threads executing simultaneously at any given time. The point is, in most cases, what really matters is the support for multi-tasking, which is not always synonymous with multi-threading.
Back to Node.js: the real discussion should be on application design and architecture, and more specifically, support for MULTI-TASKING. In general, there is a whole paradigm shift between server-side applications and client-side or standalone applications, more so in terms of design and process flow. Among other things, server-side applications need to run alongside other applications (on the server), need to be resilient and self-contained (not affect the rest of the server when the application fails or crashes), need to perform robust exception handling (i.e. recover from errors, even critical ones), and need to perform multiple tasks.
Just the ability to support multi-tasking is a critical capability for any server-side technology, and Node.js has this capability, presented in a very easy-to-use package. This all means that the design of server-side applications needs to focus more on multi-tasking and less on just multi-threading. Yes, granted, working on a server-side platform that supports multi-threading has its obvious benefits (enhanced functionality, performance), but that alone does not resolve the need to support multi-tasking at the application level. Any solid design for a server-side application, Node.js included, must be based on multi-tasking through event generation and consumption (event processing). In Node.js, the use of function callbacks and small event processors (as functions), with data checkpointing (saving processing data in files or databases) across event-processing instances, is key.
What else for Node.js vs Java
A whole lot more! Think scalability, code management, feature integration, backward/forward compatibility, return on investment, agility, productivity...
... to cut down on the "verbosity" of this article, pun intended :), we will leave it at this for now :)
Whether you agree or not, please shoot the messenger (Quora) and not the opinions!

What exactly are web workers and when to use them

I was reading up something about XMLHttpRequest (Is there any reason to use a synchronous XMLHttpRequest?) here on SO where I read on a thread from 2010 that, with the introduction of 'threads' in HTML5, developers might start to use synchronous APIs. Searching a bit on google, I found the MDN page on web workers.
I have been writing JavaScript and Node for about a year now (assume a beginner), and I have yet to encounter something that makes use of these web workers. Maybe I need to read more code.
Now my question is, even though they seem to be very useful, why isn't it seen much in the wild? Also, what are the general use cases and guidelines when using them? Is it possible to reap the multithreaded processing benefits in Nodejs environment? If so, why are all Nodejs APIs still asynchronous?
Thank you.
A web-worker is strictly a clientside thing, so it has nothing to do with Node.js (EDIT: actually, see this module).
You might have heard that JavaScript is strictly single-threaded: if a function is doing some heavy calculation, nothing else is getting done, including animating icons, repainting the window, nothing. Thus, clientside JS should always avoid heavy computation, large loops and anything else that might usurp the thread for more than a fraction of a second.
Web-workers are the solution for that. Each web-worker is running in its own thread, and it can block as much as it wants - it won't affect the normal operation of the web page. The tradeoff is that it cannot have any access to the DOM: the fact that it doesn't affect the rendering means you cannot affect rendering with it. :) If a web-worker wants to render something, it would have to send a message to the main thread to do it.
Implementation-wise, each web-worker needs to be in a separate JS file. The reason why you don't see more of them is probably twofold: the average Joe probably doesn't know how to use them, and they are only needed when you need serious computation and don't want it to block your main thread - which is not that common in the first place, and when it is, the computation is commonly offloaded to the server (on clientside) or to separate processes (in Node.js).
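A minimal sketch of the separate-file, message-passing arrangement just described; the file name parser.js and the message contents are hypothetical:

// main.js (UI thread)
var hugeJsonString = JSON.stringify({ items: [] }); // stand-in for the real data
var worker = new Worker('parser.js');
worker.onmessage = function (e) {
  console.log('reconstituted state:', e.data);       // result sent back by the worker
};
worker.postMessage(hugeJsonString);                  // hand the heavy parsing to the worker

// parser.js (worker thread, no DOM access)
self.onmessage = function (e) {
  var state = JSON.parse(e.data);                    // blocking here does not freeze the page
  self.postMessage(state);
};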
Read more on HTML5 Rocks.

JavaScript Execution Engine Unspecified?

I started to learn JavaScript recently. I've been working in the creation of applications with Node.js and Angular for a few months now.
One of the main aspects that was puzzling me was how it is possible to write asynchronous code in JavaScript in which I do not have to worry about things like thread synchronization, race conditions, etc.
So, I found a couple of interesting articles ([1], [2]) that explained how I can be guaranteed that any piece of code I write will always be executed by a single thread at a time. Bottom line, all my asynchronous code is simply scheduled to be executed at some point within an event loop. This sounds pretty much like how an OS scheduler works on a machine with a single processor, where every process is scheduled to use the processor for a limited amount of time, giving us a false sense of parallelism. And the callbacks would be like interrupts.
The articles do not provide any particular references, so I thought that the best source on how the JavaScript execution engine work should certainly be the language specification, and so I got me the latest copy of EcmaScript 5.1.
To my great surprise I discovered that this execution behavior is not specified there. How come? This looks like a fundamental design choice done in all JavaScript execution engines in browsers and in node. Interestingly, I have not been able to find a place where this is specified for any specific engine. In fact, I have no clue how people find out this is the way things work to the point that is so categorically affirmed in books and blogs like the ones cited above.
So, I have a set of what I consider interesting questions. I would appreciate any answers providing insights, remarks or simply references pointing me in the right direction to understand the following:
Since EcmaScript does not specify that the JavaScript execution engine should work with an event loop, how come many implementations of JavaScript seem to work this way, not only in browsers but also in Node.js?
Does that mean I could implement a new, EcmaScript-compatible JavaScript engine that in fact provides true multithreading capabilities with features like synchronization locks, conditions, etc.?
Does this execution model using an event loop precludes me from taking advantage of multicores if I want to execute an intense CPU-bound task? I mean, I can surely divide the task in chunks (as explained in one of the articles), but this is still executed serially, not in parallel. So, how could a JavaScript engine take advantage of multicores to run my code?
Do you know of any other reputable sources where this behavior for any particular JavaScript engine implementation is formally specified?
How could the code be portable between libraries and engines if we cannot assume a few things about the execution environments?
It looks like too many questions, perhaps making this post too broad to be answered. If it gets closed I will try to ask them in different threads. But they all revolve around the fact that I want to understand better why JavaScript and Node were designed with an event loop, and whether this is specified somewhere (besides the browsers' source code) that I could read to gain a deeper understanding of the designs and decisions taken here and, more importantly, to know exactly what the source of information is for people writing books and posts about it.
There are certain assumptions/weak references you make which lead you to this conclusion. Some of them are:
ECMAScript ECMA-XXX vs JavaScript vs JavaScriptEngine:
ECMAScript is a language specification, published by ECMA International. JavaScript is the most widely used web language that conforms to ECMAScript. For the most part, ECMAScript and JavaScript are synonymous (remember there is also ActionScript). A JavaScript engine is the implementation (interpreter) of the JavaScript language: a program in flesh and bones built from the ground up, unlike ECMAScript, which only describes JavaScript's end goals and behaviour, and JavaScript, which is the code that uses the ECMAScript standard. You will find that an engine does more than just conform to the ECMAScript standard; they sit at opposite ends of the specification/implementation spectrum. An example of this chain is ECMA-262/JavaScript/V8.
Event loop in browser vs Event loop in node.JS (JSEngine vs JSEnvironment):
This looks like a fundamental design choice done in all JavaScript execution engines in browsers and in node.
If you are using Node.js you may have used the core libraries fs/net/http. These use event emitters, which are hooked into the event loop provided by libuv. This is an extension to the V8 JavaScript engine, and together they form the Node.js platform. The event loop here involves objects like threads, sockets, files or abstract requests. But the event loop did not originate here; it was first used in browsers. A browser implements a DOM, which requires events for working with HTML elements. See the DOM specification and the one implemented by Mozilla. They use events and require an event loop built on top of the JS engine for browser use. Chrome adds the DOM interface to the V8 engine it embeds.
Yes, you will feel this is common, because of the DOM API required in all browsers. Node developers brought this evented processing to the server with the help of libuv, which provides a non-blocking, asynchronous abstraction for the low-level operations required on a server. As pointed out already, not all server frameworks use an event loop. Take the example of Rhino, which literally uses Java classes for files, sockets (everything). If you actually use core Java IO, file operations are synchronous.
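As a small illustration of those non-blocking core APIs, here is a sketch using fs.readFile: the read is handed to libuv and the callback runs later from the event loop (reading the script's own file just to keep the example self-contained):

var fs = require('fs');

fs.readFile(__filename, 'utf8', function (err, data) {
  if (err) throw err;
  console.log('read finished:', data.length, 'characters'); // runs from a later event-loop cycle
});
console.log('readFile requested; the script keeps running meanwhile');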
Now answering your questions in order:
1. Explained in point 2 above.
2. Yes, you can. Take a look at Rhino; there are many others. It may be possible in Node, but Node is geared to be a high-performance web server and that might be against its zen.
3. Like I said, the event loop sits on top of the JS engine. It is a design pattern that works best with IO. A multi-threaded design works better with high CPU loads. If you want to use multiple cores in Node.js, take a look at the cluster module. For browsers you have Web Workers.
4. That varies from engine to engine, and with how it is embedded. Browsers will have the DOM and therefore an event loop. Servers can vary; check their specifications.
5. For browsers it is possible to make code portable between them to a good extent. No promises for servers.
The event loop doesn't have anything to do with JavaScript itself; it's part of the environment, not the JS engine. Since JavaScript was designed primarily to manipulate user interfaces, it has been used heavily with event loops. But the event loop is part of the UI implementation, not just in JavaScript but in any language.
Yes, you can. But it will not be just an engine, more like an environment/platform. I think (but am not quite sure) that you can use threads and related constructs in Rhino.
Yes, it does. In Node this is usually solved by spawning more processes, and in the browser you can use Web Workers.
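A minimal sketch of the "spawn more processes" idea using child_process.fork (the message shapes and the fork-this-same-file trick are illustrative assumptions, not the only way to do it):

const { fork } = require('child_process');

if (!process.send) {
  // Parent process: fork this same file and hand it the heavy work.
  const child = fork(__filename);
  child.on('message', (sum) => {
    console.log('result from child process:', sum);
  });
  child.send({ upTo: 1e8 });
} else {
  // Child process: the CPU-bound loop runs here, off the parent's event loop.
  process.on('message', ({ upTo }) => {
    let sum = 0;
    for (let i = 0; i < upTo; i++) sum += i;
    process.send(sum);
    process.disconnect(); // close the IPC channel so both processes can exit
  });
}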
I can't imagine a better source than the specification. If something isn't there, it's just not a part of JavaScript (aka EcmaScript).
I have spent a good amount of time today trying to find the answers to my own questions, guided by some of the comments and other answers left for me here. I share my findings here in case others may consider them useful.
Event-Driven Design in JavaScript for Browsers
The decision to design JavaScript this way seems mostly related to the requirements of the DOM Event Architecture. In this specification we can find explicit requirements related to the implementation of events order and the event loop. The HTML5 specification goes even further, and define the terms explicitly and state specific requirements for the event loop implementation.
This must have certainly driven the design of the JavaScript execution engines in browsers. In this article Timing and Synchronization in JavaScript published by Opera we can clearly see that these requirements are the driving force behind the design of the Opera browser. Also in this another article from Mozilla, named Concurrency Model and Event Loop, we can find a clear explanation of the same event-driven design concepts as implemented by Mozilla (although the document seems outdated).
The use of an event loop to deal with this kind of applications is not new.
Handling user input is the most complex aspect of interactive
programming. An application may be sensitive to multiple input
devices, such as mouse and keyboard, and may multiplex these among
multiple input devices (e.g. different windows). Managing this
many-to-many mapping is usually in the province of User Interface
Management Systems (UIMS) toolkits. Since most UIMS are implemented
in sequential languages they must resort to various techniques to
emulate the necessary concurrency. Typically these toolkits use an
event-loop that monitors the stream of input events and maps the events to call-back functions (or event handlers) provided by the
application programmer.
- John H. Reppy, Concurrent Programming in ML
The use of event loops is present in other famous UI toolkits like Java Swing and WinForms. In Java all UI work must be done within the EventDispatchThread, whereas in WinForms all UI work must be done within the thread that created the Window object. So, even when these languages support true multithreading, they still require all UI code to be run in a single thread of execution.
Douglas Crockford explains the history of the event loop in JavaScript in this great video called Loopage (worth watching).
Event-Driven Design in JavaScript for Node
Now, the decision of using an event-driven design for Node.js is a bit less evident. Crockford gives a good explanation in the video shared above. But also, in the book, The Past, Present and Future of JavaScript, its author Axel Rauschmayer says:
2009—Node.js, JavaScript on the server. Node.js lets you implement
servers that perform well under load. To do so, it uses event-driven
non-blocking I/O and JavaScript (via V8). Node.js creator Ryan Dahl
mentions the following reasons for choosing JavaScript:
“Because it’s bare and does not come with I/O APIs.” [Node.js can thus introduce its own non-blocking APIs.]
“Web developers use it already.” [JavaScript is a widely known language, especially in a web context.]
“DOM API is event-based. Everyone is already used to running without threads and on an event loop.” [Web developers are not scared of
callbacks.]
So, it looks like Ryan Dahl, creator of Node.js, took into account the current design of JavaScript in browsers to decide which should be the implementation of his non-blocking, event-driven solution for Node.js.
The latest implementation of Node.js seems to use a library called libuv, designed for the implementation of this kind of applications. This library is a core part of the design of node. We can find the definition of event loops in its documentation. Evidently this plays an important role in the current implementation of Node.js.
About Other EcmaScript Compatible Engines
The EcmaScript specification does not provide requirements about how the concurrency needs to be handled in JavaScript. Therefore, this is decided by the implementation of the language. Other models of concurrency could easily be used without making the implementation incompatible with the standard.
The two best examples I found were the new Nashorn JavaScript engine created by Oracle for JDK 8, and the Rhino JavaScript engine created by Mozilla. They are both EcmaScript compatible, and they both allow the use of Java classes. Nothing in these engines requires the use of event-driven programming to deal with concurrency. These engines have access to the Java class library, and since they run on top of the JVM they probably have access to the other concurrency models offered on that platform.
Consider the following example, taken from JavaScript: The Definitive Guide, illustrating how to use Rhino JavaScript.
print(x);                 // Global print function prints to the console
version(170);             // Tell Rhino we want JS 1.7 language features
load(filename,...);       // Load and execute one or more files of JavaScript code
readFile(file);           // Read a text file and return its contents as a string
readUrl(url);             // Read the textual contents of a URL and return as a string
spawn(f);                 // Run f() or load and execute file f in a new thread
runCommand(cmd,           // Run a system command with zero or more command-line args
           [args...]);
quit();                   // Make Rhino exit
You can see a new thread can be spawned to run a JavaScript file in an independent thread of execution.
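A tiny sketch of that, assuming the Rhino shell functions shown in the listing above (spawn and print); the loop is just a stand-in for long-running work:

spawn(function () {
  var sum = 0;
  for (var i = 0; i < 1e7; i++) sum += i;   // long-running work in its own JVM thread
  print('worker finished: ' + sum);
});
print('main script continues immediately');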
About Event-Driven Design, Multicores and True Concurrency
The best explanation I found on this subject comes from the book JavaScript The Definitive Guide. In this book, David Flanagan explains:
One of the fundamental features of client-side JavaScript is that it
is single-threaded: a browser will never run two event handlers at the
same time, and it will never trigger a timer while an event handler is
running, for example. Concurrent updates to application state or to
the document are simply not possible, and client-side programmers do
not need to think about, or even understand, concurrent programming. A
corollary is that client-side JavaScript functions must not run too
long: otherwise they will tie up the event loop and the web browser
will become unresponsive to user input. This is the reason that Ajax
APIs are always asynchronous and the reason that client-side
JavaScript cannot have a simple, synchronous load() or require()
function for loading JavaScript libraries.
The Web Workers specification very carefully relaxes the
single-threaded requirement for client-side JavaScript. The “workers”
it defines are effectively parallel threads of execution. Web workers
live in a self-contained execution environment, however, with no
access to the Window or Document object and can communicate with the
main thread only through asynchronous message passing. This means that
concurrent modifications of the DOM are still not possible, but it
also means that there is now a way to use synchronous APIs and write
long-running functions that do not stall the event loop and hang the
browser. Creating a new worker is not a heavyweight operation like
opening a new browser window, but workers are not flyweight threads
either, and it does not make sense to create new workers to perform
trivial operations. Complex web applications may find it useful to
create tens of workers, but it is unlikely that an application with
hundreds or thousands of workers would be practical.
What About Node.js True Parallelism?
Node.js is a fast-evolving technology, and perhaps that's why it is difficult to find opinions that are up to date. But basically, since it follows the same event-driven model as the browsers do, it is impossible to simply write a piece of code and expect it to take advantage of our multiple cores on the server. Since Node.js is implemented using non-blocking technologies, we could assume that every time we do some form of I/O (i.e. read a file, send something through a socket, write to a database, etc.), under the hood, the Node engine could be spawning multiple threads and maybe taking advantage of the cores, but our code would still be run serially.
These days, it looks like node.js clustering is the solution for this problem. There are also some libraries like Node Worker that seem to implement the Web Worker concept in node. These libraries basically let us spawn new independent processes within node.js. (Although I have not experimented with this yet).
What About Portability?
It looks like there is no way that, in terms of the concurrency models, we can guarantee that all these libraries will play nice in all environments.
Although in the realm of browsers they all seem to work similarly, and since Node.js runs in an event loop, many things may still work, there are no guarantees that this will work in other engines. I guess this is probably one of the disadvantages of EcmaScript compared to other, more extensive specifications like those defining the Java Virtual Machine or the CLR.
Perhaps something will get standardized later. In the future of EcmaScript, more concurrency ideas are being discussed today. See the EcmaScript Wiki: Strawman Proposals Communicating Event-Loop Concurrency and Distribution.

Advantages of Web Workers and how they were achieved before?

I have read about Web Workers on http://www.whatwg.org/specs/web-apps/current-work/multipage/workers.html and I think I understand their purpose, but I am wondering if one of the main purposes of web workers, namely "allows long tasks to be executed without yielding to keep the page responsive", could already be achieved without web workers?
Registering callbacks also allows long tasks to be executed, interrupting only when they are ready, without blocking. Isn't that the same?
Callbacks allow you to manage concurrency, that is, the handling of tasks. Not always in an easy way.
Not only do web workers allow you to do concurrency in an easier way, they also give you parallelism, that is, tasks really running in parallel: they don't necessarily block each other and they don't block the UI.
In order to run a long JavaScript task in your browser before web workers, you had to micro-manage it and cut it into small parts so the UI could keep responsive. And of course having more than one long-running task was even more complex.
We know web browsers have improved a lot over the past few years, primarily because of the work done on their engines, e.g. V8 (Google) and Chakra (Microsoft). JavaScript so far runs in a single thread. The problem with a single-threaded architecture is that the code blocks and the UI becomes unresponsive when running a complex script. There are various ways to solve this problem:
Offload work to the server, but to make apps faster a fat client is preferred
Use asynchronous calls, but a complex ecosystem of async calls and promises can lead to callback hell
Leverage multi-threading. Interesting!
Web Workers solve this issue by providing the capability of multi-threading in JavaScript.

what is the main advantage of server-side javascript?

I just want to know if there is an advantage of using server-side JS? Also, how can it work with PHP?
i just want to know that what is advantage of server-side js?
It lets you use JS on the server, which lets you reuse existing JS skills and code, and has all the usual benefits of JS (event-driven programming, powerful lambdas, etc.).
And how it works with php?
Generally speaking, it is used instead of PHP.
JavaScript has an excellent event programming model thanks to its callback functionality. This makes it great for server-side coding.
First, the event-driven model is great for handling large numbers of requests. In a typical Apache server, every client request spawns a new thread, so your server generates a large number of threads EVEN if some of them just sit idle waiting for some task. This is surely not ideal.
With event-driven programming, you can register callbacks, and once the results return from the database, those callbacks are invoked. So idle time is lower and the thread footprint is minimal. (Note: it's not an alternative to asynchronous programming, which has its own advantages.)
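A small sketch of that style (not from the answer itself): register a callback for a "database" query and let the single thread keep serving requests while the result is pending. queryDatabase is a hypothetical stand-in that simulates a slow query with setTimeout:

var http = require('http');

function queryDatabase(sql, callback) {
  setTimeout(function () {
    callback(null, [{ id: 1, name: 'example' }]);   // pretend result rows
  }, 100);
}

http.createServer(function (req, res) {
  queryDatabase('SELECT * FROM users', function (err, rows) {
    res.end(JSON.stringify(rows));                  // runs when the result comes back
  });
  // nothing blocks here; the server can accept the next request immediately
}).listen(3000);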
And yes, it is used INSTEAD of PHP.
I would say one main advantage of using server-side JavaScript (and this applies not just to PHP but to any other server-side language, e.g. Java) is that it allows you to customize certain aspects of your execution. So you can have your normal execution flow but provide some "hooks" in the code where you allow JavaScript code to be executed and change certain values/conditions, which might trigger different execution paths. This way you can have, for instance, non-technical people customize certain aspects of your applications without actually having to write server-side code for it, instead just using a "simple" language like JavaScript.
You can use Apache 2.4 event mpm and TeaJS for a setup similar to your Apache/mod_php setup. See http://qteajs.org
Two of the advantages I don't see mentioned here are enhanced performance (V8 compiles the code) and maintainability (you are using the same language on the client and server side)
