Multi-threading using an iframe - JavaScript

I am trying to simulate multithreading using an iframe, but I have come across a situation where I do not know whether it actually uses the iframe's process (thread) on its own.
For instance, if I call a method that lies inside an iframe, will it run on the thread created by the iframe, or will it run on the main parent window's thread?
If it is the latter, then is it possible to change the scope so that the iframe calls the method (so that the program uses a different thread from that of the parent window)?
EDIT:
Maybe I should have been clearer on this, but I do not want to use Web Workers simply because they do not give me access to the DOM elements.

If you want to run some background tasks, just use Web Workers.
Generally you don't need to multi-thread JS code; you should use the event loop instead.
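Note that calling into a same-origin iframe does not give you a separate thread: the parent page and its iframes share a single UI thread, so the call still blocks the parent window. A minimal sketch (the iframe id and the heavyCalculation function are hypothetical):

// parent page — assumes a same-origin iframe with id="worker-frame"
// whose script defines a global function heavyCalculation()
const frame = document.getElementById('worker-frame');

frame.addEventListener('load', () => {
  console.time('iframe call');
  // This runs synchronously on the SAME thread as the parent window,
  // so the parent UI is blocked until it returns.
  const result = frame.contentWindow.heavyCalculation(10000000);
  console.timeEnd('iframe call');
  console.log('result:', result);
});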

Take a look at Using web workers from the MDN docs.
The Worker interface spawns real OS-level threads, and concurrency can
cause interesting effects in your code if you aren't careful. However,
in the case of web workers, the carefully controlled communication
points with other threads means that it's actually very hard to cause
concurrency problems. There's no access to non-thread safe components
or the DOM and you have to pass specific data in and out of a thread
through serialized objects. So you have to work really hard to cause
problems in your code.
John Resig wrote Computing with JavaScript Web Workers back in 2009 on this topic. However, according to Can I Use, there is no IE support until IE10, so it may not fit your needs.
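A minimal sketch of the message-passing model described above (file names are illustrative):

// main.js — spawn a worker and exchange messages with it
const worker = new Worker('worker.js');

worker.addEventListener('message', (event) => {
  console.log('Result from worker:', event.data);
});

worker.postMessage({ numbers: [1, 2, 3, 4, 5] });

// worker.js — runs on its own thread, no DOM access here
self.addEventListener('message', (event) => {
  const sum = event.data.numbers.reduce((a, b) => a + b, 0);
  self.postMessage(sum);
});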

Related

What exactly are web workers and when to use them

I was reading up on XMLHttpRequest (Is there any reason to use a synchronous XMLHttpRequest?) here on SO, where I read in a thread from 2010 that, with the introduction of 'threads' in HTML5, developers might start to use synchronous APIs. Searching a bit on Google, I found the MDN page on web workers.
I have been writing JavaScript and Node for about a year now (assume a beginner), and I have yet to encounter something that makes use of these web workers. Maybe I need to read more code.
Now my question is, even though they seem to be very useful, why aren't they seen much in the wild? Also, what are the general use cases and guidelines for using them? Is it possible to reap the benefits of multithreaded processing in a Node.js environment? If so, why are all Node.js APIs still asynchronous?
Thank you.
A web-worker is strictly a clientside thing, so it has nothing to do with Node.js (EDIT: actually, see this module).
You might have heard that JavaScript is strictly single-threaded: if a function is doing some heavy calculation, nothing else is getting done, including animating icons, repainting the window, nothing. Thus, clientside JS should always avoid heavy computation, large loops and anything else that might usurp the thread for more than a fraction of a second.
Web-workers are the solution for that. Each web-worker is running in its own thread, and it can block as much as it wants - it won't affect the normal operation of the web page. The tradeoff is that it cannot have any access to the DOM: the fact that it doesn't affect the rendering means you cannot affect rendering with it. :) If a web-worker wants to render something, it would have to send a message to the main thread to do it.
Implementation-wise, each web-worker needs to be in a separate JS file. The reason why you don't see more of them is probably twofold: the average Joe probably doesn't know how to use them, and they are only needed when you need serious computation and don't want it to block your main thread - which is not that common in the first place, and when it is, the computation is commonly offloaded to the server (on clientside) or to separate processes (in Node.js).
Read more on HTML5 Rocks.

Why don't web workers give access to the DOM object?

I have been using web workers in JavaScript. It would have been the icing on the cake if the window and document references were present in the web worker context.
I would like to know for what reason it was chosen not to make these references accessible in workers.
Also, is there any workaround for using these references?
Standard JavaScript and the DOM API have absolutely no exclusion mechanisms allowing several threads to safely access the same objects.
The solution that is most often chosen to allow multi-tasking in JavaScript is to isolate threads and only let them exchange through messages (or events). Giving access to the DOM to webworkers would break that isolation.
Note that this isn't totally specific to JavaScript: almost all GUI frameworks, whatever the language, restrict the modifications of the GUI to one dedicated thread. JavaScript is more restrictive as most often (always in the browser) you can't share objects at all.
The simple workaround is to let your main thread in the browser do the modifications you need to do, when the background thread gives it the instruction through messages. Or rather: do only the CPU extensive tasks in the webworker, letting the main thread obtain the input data, and update the DOM when the webworker sends the output data.
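A minimal sketch of that pattern (element IDs and file names are illustrative): the worker does the CPU-heavy counting, and the main thread is the only one that touches the DOM.

// main.js — owns the DOM, delegates the heavy work
const worker = new Worker('prime-worker.js');

worker.onmessage = (event) => {
  // only the main thread touches the document
  document.getElementById('output').textContent =
    'Primes found: ' + event.data.count;
};

document.getElementById('start').onclick = () => {
  worker.postMessage({ upTo: 5000000 });
};

// prime-worker.js — CPU-heavy loop, no DOM access, only messages
self.onmessage = (event) => {
  let count = 0;
  for (let n = 2; n <= event.data.upTo; n++) {
    let isPrime = true;
    for (let d = 2; d * d <= n; d++) {
      if (n % d === 0) { isPrime = false; break; }
    }
    if (isPrime) count++;
  }
  self.postMessage({ count: count });
};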

Is Javascript the only choice for DOM interaction when embedding a web browser?

I've looked at the various ways to embed a web browser into an application (like IE or Safari via OS-specific means, or Firefox/Mozilla via XULRunner, or Chrome via the Chromium Embedded Framework) and I've managed to integrate CEF with my app up to a point where I'm convinced that it'll all work as expected. Now, it seems to me that whenever I want to modify the DOM (e.g. to add or remove elements), I'll have to do this via Javascript, i.e. my application calls out to Javascript where the actual work is done.
I wonder why this is so. My (naive?) belief is that if for example I call appendChild in Javascript, the actual "work" of appending a child will eventually be performed by a C/C++ function as the browser itself is written in C/C++ and not in Javascript. So, I'm wondering why in an embedded web browser I can't call this C/C++ function directly instead of going through Javascript. I understand that for general scripting you don't want other languages than Javascript for security reasons, but if the browser is embedded into an application I can control anyway this shouldn't be the reason, should it?
What am I missing?
CEF is implemented as a layer between Chromium's content API and your application. When using CEF, Chromium is a library inside CEF, and you only have access to CEF's public API, which is more or less restricted to whatever the Chromium content API offers (keep in mind no browser was created as an embeddable plugin and then evolved into an application; it was always the other way around). The content API was the way Google engineers formalized some forms of introspection, but it isn't complete, simply because the browser itself isn't completely modular. There's work in progress in the Chromium code to separate specific "do-it-all" components into more general ones that you may pick at will.
Therefore you can't simply hook into Chromium's implementation details when using CEF: you'd need to patch it to implement something it doesn't expose by itself. CEF implements a class for DOM traversal (see here), but you can only inspect the DOM, not change it.
That said, on the C++ side you can do some arbitrary stuff such as inspecting/mangling HTTP requests (which allows you to inject JavaScript into pages, for instance), and running arbitrary JavaScript code straight from C++, which can, in turn, asynchronously call back to C++ code by diverse paths (AJAX -> HTTP handling in C++, or V8 extensions which you can write directly in C++).
See https://bitbucket.org/chromiumembedded/cef/wiki/JavaScriptIntegration for more details.
One could customize CEF or go straight to the Chromium source code, but that thing is huge. Other solutions I have heard of are more or less alike in terms of API limitations, e.g. Awesomium, Mozilla's Gecko, etc.

JavaScript Execution Engine Unspecified?

I started to learn JavaScript recently. I've been working in the creation of applications with Node.js and Angular for a few months now.
One of the main aspects that was puzzling me was how it is possible to write asynchronous code in JavaScript in which I do not have to worry about things like thread synchronization, race conditions, etc.
So, I found a couple of interesting articles ([1], [2]) that explained how I can be guaranteed that any piece of code I write will always be executed by a single thread at a time. Bottom line, all my asynchronous code is simply scheduled to be executed at some point within an event loop. This sounds pretty much like how an OS scheduler would work on a machine with a single processor, where every process is scheduled to use the processor for a limited amount of time, giving us a false sense of parallelism. And the callbacks would be like interrupts.
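(For illustration, here is what that scheduling looks like in practice: a callback passed to setTimeout never interrupts the code that is currently running; it only gets its turn once the call stack is empty.)

console.log('start');

setTimeout(() => {
  // scheduled on the event loop: runs only after the current
  // synchronous code has finished, never in parallel with it
  console.log('timer callback');
}, 0);

// block the single thread for a moment with a busy loop
const end = Date.now() + 50;
while (Date.now() < end) { /* spin */ }

console.log('end');
// the output order is always: start, end, timer callback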
The articles do not provide any particular references, so I thought that the best source on how the JavaScript execution engine works should certainly be the language specification, and so I got myself the latest copy of EcmaScript 5.1.
To my great surprise I discovered that this execution behavior is not specified there. How come? This looks like a fundamental design choice made in all JavaScript execution engines in browsers and in Node. Interestingly, I have not been able to find a place where this is specified for any specific engine. In fact, I have no clue how people find out this is the way things work, to the point that it is so categorically affirmed in books and blogs like the ones cited above.
So, I have a set of what I consider interesting questions. I would appreciate any answers providing insights, remarks or simply references pointing me in the right direction to understand the following:
Since EcmaScript does not specify that the JavaScript execution engine should work with an event loop, how come many implementations of JavaScript seem to work this way, not only in browsers, but also in Node.js?
Does that mean I could implement a new JavaScript engine that is EcmaScript-compatible and in fact provides true multithreading capabilities with features like synchronization locks, conditions, etc.?
Does this execution model using an event loop preclude me from taking advantage of multicores if I want to execute an intense CPU-bound task? I mean, I can surely divide the task in chunks (as explained in one of the articles), but this is still executed serially, not in parallel. So, how could a JavaScript engine take advantage of multicores to run my code?
Do you know of any other reputable sources where this behavior for any particular JavaScript engine implementation is formally specified?
How could the code be portable between libraries and engines if we cannot assume a few things about the execution environments?
It looks like too many questions, perhaps making this post too broad to be answered. If it gets closed I will try to ask them in different threads. But they all revolve around the fact that I want to understand better why JavaScript and Node were designed with an event loop, and whether this is specified somewhere (besides the browsers' source code) that I could read to gain a deeper understanding of the designs and decisions taken here and, more importantly, to know exactly what the source of information is for people writing books and posts about it.
There are certain assumptions/weak references you make which lead you to this conclusion. Some of them are:
ECMAScript ECMA-XXX vs JavaScript vs JavaScriptEngine:
ECMAScript is a language specification, given by ECMA International. JavaScript is the most widely used web language that conforms to ECMAScript. For the most part, ECMAScript and JavaScript are synonymous (remember there is also ActionScript). A JavaScript engine is the implementation (interpreter) of JavaScript language code. It is a program in flesh and bone, built from the ground up, unlike ECMAScript, which only describes JavaScript's end goals and behaviour, and unlike JavaScript, the code that uses the ECMAScript standard. You will find that an engine will do more than just conform to the ECMAScript standard. They are at opposite ends of the specification/implementation spectrum. An example of this is ECMA-262/JavaScript/V8.
Event loop in the browser vs event loop in Node.js (JSEngine vs JSEnvironment):
This looks like a fundamental design choice done in all JavaScript execution engines in browsers and in node.
If you are using Node.js you may have used the core libraries fs/net/http. These use event emitters, which are hooked into the event loop provided by libuv. This is an extension to the V8 JavaScript engine, forming the Node.js platform. The event loop here involves objects like threads, sockets, files or abstract requests. But the event loop did not originate here. It was first used in browsers. A browser implements a DOM, which requires events for working with HTML elements. See the DOM specification and the one implemented by Mozilla. They use events and require an event loop built on top of the JS engine for browser use. Chrome adds the DOM interface to the V8 engine it embeds.
Yes, you will feel this is common, because of the DOM API needed in all browsers. Node developers brought this evented processing to the server with the help of libuv, which provides a non-blocking, asynchronous abstraction for the low-level operations required on a server. As already pointed out, not all server frameworks use an event loop. Take the example of Rhino, which literally uses Java classes for files, sockets (everything). If you actually use core Java IO, file operations are synchronous.
Now answering your questions in order:
explained in point 2 above
Yes, you can. Take a look at Rhino; there are many others. It may be possible in Node, but Node is geared towards being a high-performance web server, and that might be against its zen.
As I said, the event loop sits on top of the JS engine. It is a design pattern that works best with IO. A multi-threaded design works better with heavy CPU loads. If you want to use multiple cores in Node.js, take a look at the cluster module (see the sketch after this list). For browsers you have web workers.
That varies from engine to engine, and how it is embedded. Browsers will have a DOM and therefore an event loop. Servers can vary; check their specifications.
For browsers it is possible to make code portable between them to a good extent. No promises for the server.
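A minimal sketch of the cluster approach mentioned in the list above (assuming a trivial HTTP workload):

// cluster-sketch.js — fork one worker process per CPU core
const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // master: spawn workers, each a full Node.js process with its own event loop
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker) => {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  // workers share the same server port; connections are distributed among them
  http.createServer((req, res) => {
    res.end('handled by pid ' + process.pid + '\n');
  }).listen(8000);
}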
The event loop doesn't have anything to do with JavaScript itself; it's part of the environment, not the JS engine. Since JavaScript was designed primarily to manipulate the user interface, it has been used heavily with an event loop. But the event loop is part of the UI implementation, not just in JavaScript but in any language.
Yes, you can. But it would not be just an engine, more like an environment/platform. I think (but am not quite sure) that you can use threads and related stuff in Rhino.
Yes, it does. In Node this is usually solved by spawning more processes, and in the browser you can use Web Workers.
I can't imagine a better source than the specification. If something isn't there, it's just not part of JavaScript (aka EcmaScript).
I have spent a good amount of time today trying to find the answers to my own questions, guided by some of the comments and other answers left for me here. I share my findings here in case others may consider them useful.
Event-Driven Design in JavaScript for Browsers
The decision to design JavaScript this way seems mostly related to the requirements of the DOM Event Architecture. In this specification we can find explicit requirements related to the implementation of event ordering and the event loop. The HTML5 specification goes even further, defines the terms explicitly, and states specific requirements for the event loop implementation.
This must certainly have driven the design of the JavaScript execution engines in browsers. In the article Timing and Synchronization in JavaScript, published by Opera, we can clearly see that these requirements are the driving force behind the design of the Opera browser. Also, in another article from Mozilla, named Concurrency Model and Event Loop, we can find a clear explanation of the same event-driven design concepts as implemented by Mozilla (although the document seems outdated).
The use of an event loop to deal with this kind of application is not new.
Handling user input is the most complex aspect of interactive
programming. An application may be sensitive to multiple input
devices, such as mouse and keyboard, and may multiplex these among
multiple input devices (e.g. different windows). Managing this
many-to-many mapping is usually in the province of User Interface
Management Systems (UIMS) toolkits. Since most UIMS are implemented
in sequential languages they must resort to various techniques to
emulate the necessary concurrency. Typically these toolkits use an
event-loop that monitors the stream of input events and maps the events to call-back functions (or event handlers) provided by the
application programmer.
- John H. Reppy - Concurrent Programming in ML
The use of event loops is present in other famous UI toolkits like Java Swing and WinForms. In Java all UI work must be done within the EventDispatchThread, whereas in WinForms all UI work must be done within the thread that created the Window object. So, even when these languages support true multithreading, they still require all UI code to be run in a single thread of execution.
Douglas Crockford explains the history of the event loop in JavaScript in this great video called Loopage (worth watching).
Event-Driven Design in JavaScript for Node
Now, the decision of using an event-driven design for Node.js is a bit less evident. Crockford gives a good explanation in the video shared above. But also, in the book, The Past, Present and Future of JavaScript, its author Axel Rauschmayer says:
2009—Node.js, JavaScript on the server. Node.js lets you implement
servers that perform well under load. To do so, it uses event-driven
non-blocking I/O and JavaScript (via V8). Node.js creator Ryan Dahl
mentions the following reasons for choosing JavaScript:
“Because it’s bare and does not come with I/O APIs.” [Node.js can thus introduce its own non-blocking APIs.]
“Web developers use it already.” [JavaScript is a widely known language, especially in a web context.]
“DOM API is event-based. Everyone is already used to running without threads and on an event loop.” [Web developers are not scared of
callbacks.]
So, it looks like Ryan Dahl, creator of Node.js, took into account the existing design of JavaScript in browsers when deciding how to implement his non-blocking, event-driven solution for Node.js.
The latest implementation of Node.js seems to use a library called libuv, designed for this kind of application. This library is a core part of the design of Node. We can find the definition of event loops in its documentation. Evidently it plays an important role in the current implementation of Node.js.
About Other EcmaScript Compatible Engines
The EcmaScript specification does not provide requirements about how the concurrency needs to be handled in JavaScript. Therefore, this is decided by the implementation of the language. Other models of concurrency could easily be used without making the implementation incompatible with the standard.
The two best examples I found were the new Nashorn JavaScript engine created by Oracle for JDK 8, and the Rhino JavaScript engine created by Mozilla. They are both EcmaScript compatible, and they both allow the use of Java classes. Nothing in these engines requires the use of event-driven programming to deal with concurrency. These engines have access to the Java class library, and since they run on top of the JVM they probably have access to the other concurrency models offered on that platform.
Consider the following example, taken from JavaScript: The Definitive Guide, which illustrates how to use the Rhino shell.
print(x);            // Global print function prints to the console
version(170);        // Tell Rhino we want JS 1.7 language features
load(filename,...);  // Load and execute one or more files of JavaScript code
readFile(file);      // Read a text file and return its contents as a string
readUrl(url);        // Read the textual contents of a URL and return as a string
spawn(f);            // Run f() or load and execute file f in a new thread
runCommand(cmd,      // Run a system command with zero or more command-line args
           [args...]);
quit()               // Make Rhino exit
You can see that a new thread can be spawned to run a function or a JavaScript file in an independent thread of execution.
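For instance, in the Rhino shell, something like this rough sketch would run a function on a separate thread:

// Rhino shell sketch: run a function on a separate thread with spawn()
spawn(function () {
    var total = 0;
    for (var i = 0; i < 100000000; i++) total += i;  // heavy work off the main thread
    print("background total: " + total);
});
print("main thread keeps going");  // prints without waiting for the loop above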
About Event-Driven Design, Multicores and True Concurrency
The best explanation I found on this subject comes from the book JavaScript The Definitive Guide. In this book, David Flanagan explains:
One of the fundamental features of client-side JavaScript is that it
is single-threaded: a browser will never run two event handlers at the
same time, and it will never trigger a timer while an event handler is
running, for example. Concurrent updates to application state or to
the document are simply not possible, and client-side programmers do
not need to think about, or even understand, concurrent programming. A
corollary is that client-side JavaScript functions must not run too
long: otherwise they will tie up the event loop and the web browser
will become unresponsive to user input. This is the reason that Ajax
APIs are always asynchronous and the reason that client-side
JavaScript cannot have a simple, synchronous load() or require()
function for loading JavaScript libraries.
The Web Workers specification very carefully relaxes the
single-threaded requirement for client-side JavaScript. The “workers”
it defines are effectively parallel threads of execution. Web workers
live in a self-contained execution environment, however, with no
access to the Window or Document object and can communicate with the
main thread only through asynchronous message passing. This means that
concurrent modifications of the DOM are still not possible, but it
also means that there is now a way to use synchronous APIs and write
long-running functions that do not stall the event loop and hang the
browser. Creating a new worker is not a heavyweight operation like
opening a new browser window, but workers are not flyweight threads
either, and it does not make sense to create new workers to perform
trivial operations. Complex web applications may find it useful to
create tens of workers, but it is unlikely that an application with
hundreds or thousands of workers would be practical.
What About Node.js True Parallelism?
Node.js is a fast-evolving technology, and perhaps that's why it is difficult to find opinions that are up to date. But basically, since it follows the same event-driven model as the browsers do, it is impossible to simply write a piece of code and expect it to take advantage of our multiple cores on the server. Since Node.js is implemented using non-blocking technologies, we could assume that every time we do some form of I/O (e.g. read a file, send something through a socket, write to a database, etc.), under the hood the Node engine could be spawning multiple threads and maybe taking advantage of the cores, but our code would still be run serially.
These days, it looks like Node.js clustering is the solution to this problem. There are also some libraries, like Node Worker, that seem to implement the Web Worker concept in Node. These libraries basically let us spawn new independent processes within Node.js. (Although I have not experimented with this yet.)
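A rough sketch of that spawn-an-independent-process idea, using the built-in child_process module rather than the Node Worker library itself (file names are made up):

// parent.js — fork an independent Node.js process and talk to it via messages
var fork = require('child_process').fork;
var child = fork('./cpu-task.js');

child.on('message', function (msg) {
  console.log('result from child process:', msg.result);
});

child.send({ n: 40 });  // ask the child to do the heavy work

// cpu-task.js — runs in its own process, on another core if one is available
process.on('message', function (msg) {
  function fib(n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }
  process.send({ result: fib(msg.n) });
});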
What About Portability?
It looks like there is no way, in terms of the concurrency models, to guarantee that all these libraries will play nice in all environments.
Although in the realm of browsers they all seem to work similarly, and since Node.js runs on an event loop many things may still work, there are no guarantees that any of this will work in other engines. I guess this is probably one of the disadvantages of EcmaScript compared to other, more extensive specifications like those defining the Java Virtual Machine or the CLR.
Perhaps something will get standardized later. For the future of EcmaScript, more concurrency ideas are being discussed today. See the EcmaScript Wiki: Strawman Proposals Communicating Event-Loop Concurrency and Distribution

Why doesn't Node.js have a native DOM?

When I discovered that Node.js was built using the V8 JavaScript engine, I thought:
Great, web scraping will be easier as the page
will be rendered like in the browser, with a
"native" DOM supporting XPath and any AJAX calls on
the page executed.
Why doesn't it have a native DOM when it uses the same JavaScript engine as Chrome?
Why doesn't it have a mode to run JavaScript in retrieved pages?
What am I not understanding about JavaScript engines vs the engine in a web browser?
Many thanks!
The DOM is the DOM, and the JavaScript implementation is simply a separate entity. The DOM represents a set of facilities that a web browser exposes to the JavaScript environment. There's no requirement however that any particular JavaScript runtime will have any facilities exposed via the global object.
What Node.js is is a stand-alone JavaScript environment completely independent of a web browser. There's no intrinsic link between web browsers and JavaScript; the DOM is not part of the JavaScript language or specification or anything.
I use the old Rhino Java-based JavaScript implementation in my Java-based web server. That environment also has nothing at all to do with any DOM. It's my own application that's responsible for populating the global object with facilities to do what I need it to be able to do, and it's not a DOM.
Note that there are projects like jsdom if you want a virtual DOM in your Node project. Because of its very nature as a server-side platform, a DOM is a facility that Node can do without and still make perfect sense for a wide variety of server applications. That's not to say that a DOM might not be useful to some people, but it's just not in the same category of services as things like process control, I/O, networking, database interop, and so on.
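For example, a minimal jsdom sketch (assuming the jsdom package is installed; the HTML snippet is made up):

// jsdom sketch: a virtual DOM inside Node, no browser involved
const { JSDOM } = require('jsdom');

const dom = new JSDOM('<!DOCTYPE html><p id="greeting">Hello</p>');
const document = dom.window.document;

document.getElementById('greeting').textContent = 'Hello from Node';
console.log(dom.serialize());  // prints the updated HTML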
There may be some "official" answer to the question "why?" out there, but it's basically just the business of those who maintain Node (the Node Foundation now). If some intrepid developer out there decides that Node should ship by default with a set of modules to support a virtual DOM, and successfully works to make that happen, then Node will have a DOM.
P.S.: When reading this question I was also wondering whether V8 (which Node.js is built on top of) had a DOM.
Why when it uses the same JS engine as Chrome doesn't it have a native
DOM?
But I searched Google and found Google's V8 page, which states the following:
JavaScript is most commonly used for client-side scripting in a
browser, being used to manipulate Document Object Model (DOM) objects
for example. The DOM is not, however, typically provided by the
JavaScript engine but instead by a browser. The same is true of
V8—Google Chrome provides the DOM. V8 does however provide all the
data types, operators, objects and functions specified in the ECMA
standard.
node.js uses V8 and not Google Chrome.
Likewise, why doesn't it have a mode to run JS in retrieved pages?
I also think we don't really need it that bad. Ryan Dahl created node.js as one man (single programmer). Maybe now he (his team) will develop this, but I was already extremely amazed by the amount of code he produced (crazy). He wanted to make a non-blocking easy/efficient library, which I think he did a mighty good job at.
But then again, another developer created a module which is pretty good and actively developed (today) at https://github.com/tmpvar/jsdom.
What am I not understanding about Javascript engines vs the engine in
a web browser? :)
Those are different things as is hopefully clear from the quote above.
The Document Object Model (DOM in short) is a programming interface for HTML and XML documents and it represents the page so that programs can change the document structure, style, and content. More on this subject.
The necessary distinction between client-side (browser) and server-side (Node.js) and their main goals:
Client-side: accessing and displaying information of the web
Server-side: providing stable and reliable ways to deliver web information
Why is there no DOM in Node.js by default?
By default, Node.js doesn't have access to, nor any knowledge about, the actual DOM in your own browser. Node.js just delivers the data that will be used by your own browser to process and render the whole website, the DOM included. The server provides the data for your browser to use and process. That is the intended way.
Why wouldn't you want to access the DOM in Node.js?
Accessing your browser's actual DOM using Node.js would simply be outside the goal of the server. Your own browser's role is to display the data coming from the server. However, it is certainly possible, and there are multiple solutions, at different levels of depth and variety, to pre-render, manipulate or change the DOM using AJAX calls. We'll see what future trends will bring.
Why would you want to access the DOM in Node.js?
By default, you shouldn't access your own, actual DOM (or at least some data of it) using Node.js. Client-side and server-side are separated in terms of role, functionality, and responsibility, based on years of experience and knowledge. Although there are several situations where there are solid reasons to do so:
Gathering usage data (A/B testing, UI/UX efficiency and feedback)
Headless testing (Development, automation, web-scraping)
How can you access the DOM in Node.js?
jsdom: pure-JavaScript implementation, good for testing your own DOM/browser-related project
cheerio: great solution if you like/often use jQuery
puppeteer: Google's own way to provide headless testing using Google Chrome
own solution (your possible future project link here)
Although these solutions do not provide a way to access your browser's own, actual DOM by default, you can create a project to send some form of data about your DOM to the server, then use/render/manipulate that data based on your needs.
...and yes, web-scraping and web development in terms of tools and utilities became more sophisticated and certainly easier in several fields.
Node.js chose not to include it in its standard library. For any functionality, there is an inevitable tradeoff between comprehensiveness, scalability, and maintainability.
That doesn't mean it's not potentially useful. There is at least one JavaScript DOM implementation intended for NodeJS (among other CommonJS implementations).
You seem to have the flawed assumption that V8 and the DOM are inextricably related; that's not the case. The DOM is actually handled by WebKit; V8 doesn't handle the DOM, it handles JavaScript calls to the DOM. Don't let this discourage you. Node.js has carved out a significant niche in the realtime server market, but don't let anybody tell you it's just for servers. Node makes it possible to build almost anything with JavaScript.
It is possible to do what you're talking about. For example there is the very good jsdom library if you really need access to the DOM, and node-htmlparser, there are also some really good scraping libraries that take advantage of these like apricot.
2018 answer: mainly for historical reasons, but this may change in future.
Historically, very little DOM manipulation was done on the server. Additionally, as other answers allude, the JS stdlib and the DOM are separate libraries - if you're using Node for, say, Unix scripting, then HTMLElement and NodeList etc. aren't really relevant to that.
However: server-side DOM manipulation is now a very common part of delivering web apps. Web servers need to understand the structure of pages, and, if asked to render a resource as HTML, deliver HTML content that reflects the initial state of a web application. This means web apps load much faster than if the server simply delivers a stub page and has the browsers then do the work of filling in the real content. Currently this is done with JSDom and similar, but in the same way node has Request and Response objects built in, having DOM functions maintained as part of the stdlib would help with this task.
Javascript != browser. Javascript as a language is not tied to browsers; node.js is simply an implementation of Javascript that is intended for servers, not browsers. Hence no DOM.
If you read DOM as 'linked objects immediately accessible from my script', then the answer is 'it does, but it's very different from the set of objects available to a web document script'. The main reason is that Node is 'evented I/O for V8', not 'HTML tree objects for V8'.
Node is a runtime environment, it does not render a DOM like a browser.
Because there isn't a DOM. DOM stands for Document Object Model. There is no document in Node, so no DOM to manipulate. That is definitely a browser thing.
You can use a library like cheerio though which gives you some simple DOM manipulation.
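A small cheerio sketch (assuming the cheerio package is installed; the HTML string is made up):

// cheerio sketch: jQuery-like DOM manipulation over an HTML string
const cheerio = require('cheerio');

const $ = cheerio.load('<ul id="fruits"><li>Apple</li><li>Pear</li></ul>');
$('#fruits').append('<li>Orange</li>');

console.log($('#fruits li').length);  // 3
console.log($.html());                // serialized HTML including the new item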
Node is server-level JavaScript. It's just the language applied to a basic system API, more like C++ or Java.
It seems people have answered 'why' but not 'how'. A quick answer to the 'how' is that in a web browser, a document object is exposed (hence DOM, Document Object Model). On Windows this object is called the document object. You can refer to this page and look at the methods it exposes, which are for handling HTML documents, like createElement. I don't use Node.js and haven't done COM programming in a while, but I'd imagine you could use the DOM in Node.js by simply calling the COM object IHTMLDocument3. Of course, for other platforms like Mac OS X or Linux you would probably have to use something from their OS APIs. This should allow you to easily build a webpage server-side using the DOM, or to scrape incoming web pages.
Node.js is for server-side programming. There is no DOM to be rendered on the server.
1) What does it mean for it to have a Document Object Model? There's no document to represent.
2) Most of the time you're not retrieving pages. You can, but most Node apps probably won't be.
3) Without a document and a browser, JavaScript is just another programming language. So you may ask why there isn't a DOM in C# or Java.
