I'm a WebAssembly newb just looking to get started, but I have a question I can't seem to find a reasonable answer to. I have an idea of how I would like to design this software, but I don't know if I'm asking the wrong thing of WebAssembly.
Do external JavaScript calls interrupt WebAssembly? Say I'm running a game loop for some visualization, and I want an HTML button bound to JavaScript to change some value within the currently executing WebAssembly context. Is that possible? Would it be more ideal to update values between each frame and only use WebAssembly for per-frame rendering?
The thing is, I really like the targetability of WebAssembly from other languages like C++, as I could never really get into either JavaScript or TypeScript, and I'm also looking to build a more performant game-type application. However, since I was planning on making this a web game anyway, I wanted to see if I could use HTML + CSS for my UI elements instead of replicating a UI in WebAssembly, essentially turning it into a terrible canned Java applet.
Yes and no. The short answer is that the browser cannot interrupt long-running WebAssembly code.
However, the browser also cannot interrupt long-running JS code or deliver any events while long-running code of any kind is executing. WebAssembly and JS execute on the same execution stack, and in the web model you need to return to the event loop at reasonable intervals. The recommended approach here (for both JS and WebAssembly code) is to break up your game loop and have it run one frame at a time on a callback.
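In the browser that usually means scheduling each frame with requestAnimationFrame rather than looping. Here's a minimal sketch of that idea in JavaScript; Module and _update_frame are placeholder names for whatever your compiled WebAssembly module exports, not a real API:

// `Module._update_frame` is a hypothetical exported WebAssembly function.
let speed = 1.0;

// Event handlers (e.g. an HTML button) run between frames, because each
// frame returns to the event loop before the next one is scheduled.
document.getElementById("speed-button").addEventListener("click", () => {
    speed *= 2;
});

function frame() {
    Module._update_frame(speed);   // run one frame of WebAssembly work...
    requestAnimationFrame(frame);  // ...then yield back to the event loop
}
requestAnimationFrame(frame);

With this structure, the value changed by the button is simply picked up by the next frame; nothing needs to interrupt the WebAssembly code mid-execution.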
(There are some ways around this, such as using workers to run your long-running code, or compiler tricks such as Binaryen's Asyncify to break up your long-running code.)
Related
Since Node runs a single-threaded model with an event loop, I wonder how Node prevents the entire application from failing if you write code like:
while(true){ doSomething()}
where doSomething is a synchronous function (a blocking piece of code)
Note that it doesn't make any sense to write a function like doSomething, but nothing prevents you from making a mistake
The problem here is that, since it's single-threaded, it won't allow any other part of the application to run (for instance, a web server would stop accepting new connections) because this function would never end. In a multi-threaded environment you would lose only this thread.
Is there anything that node can do for you to prevent this kind of problem?
I wonder how node prevents the entire application from failing if you write an infinite loop
nodejs does not prevent such an infinite loop. It will just run that loop forever or until some resource is exhausted (if the loop is consuming some resource like memory).
If node can't prevent these kinds of situations, is this a design fault, or is there no way to prevent these kinds of problems?
I don't think most people consider it a design fault - though that's purely an opinion and different people may see it differently. It is a consequence of the way nodejs was designed, which has many other benefits.
The only way to prevent such problems is to not write faulty code that does this. Honestly, it's not too hard to avoid writing this type of code once you're aware that it's an issue to avoid.
The problem here is that, since it's single-threaded, it won't allow any other part of the application to run (for instance, a web server would stop accepting new connections) because this function would never end. In a multi-threaded environment you would lose only this thread
Correct. This is something you learn when coding in nodejs, and I've never found it a hard thing to avoid. nodejs is a single-threaded, event-driven system, not a multi-threaded system. As such, you program with events, not long-running loops that poll or check conditions (see the sketch below). It is a rather straightforward concept to learn and use once you understand this is how nodejs works. It is different from some other environments, but using asynchronous operations is just something you have to learn in order to program in nodejs. It's not avoidable and is just part of the character of nodejs. There is no way that nodejs could have the type of architecture it has without requiring you to learn this. If you want a different architecture (for whatever personal reason), then pick a different environment, not nodejs.
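As a rough sketch of what "program with events, not polling loops" means in practice (the "work" event and the payload here are made-up names, not a Node API):

const EventEmitter = require("events");

// `jobs` stands in for whatever source produces work in your application.
const jobs = new EventEmitter();

// Register a handler instead of spinning in a loop checking for new work.
jobs.on("work", (item) => {
    console.log("handling", item);   // do the actual work here
});

// Elsewhere in the program (e.g. when a request or timer produces work):
jobs.emit("work", { id: 1 });

The key point is that between events the thread is idle and free to service anything else in the event loop, instead of being tied up in a while loop.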
The single-threadedness massively simplifies many other things (far, far fewer opportunities for race conditions) and improves scalability in some circumstances (with asynchronous I/O) vs. threaded environments. For situations where you want multiple CPUs applied to your problem, it is generally straightforward in node.js to either use the built-in cluster module (a rough sketch follows below) or to fire up worker processes and feed them work. Data is often shared among multiple processes via some sort of database (either file-based or RAM-based) that handles much of the multi-process synchronization for you.
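A minimal sketch of the cluster approach, assuming a plain HTTP server as the workload:

const cluster = require("cluster");
const http = require("http");
const os = require("os");

if (cluster.isMaster) {
    // Fork one worker process per CPU core.
    for (let i = 0; i < os.cpus().length; i++) {
        cluster.fork();
    }
} else {
    // Each worker runs its own single-threaded event loop.
    http.createServer((req, res) => {
        res.end("handled by worker " + process.pid);
    }).listen(8000);
}

Each worker is still single-threaded internally, so a blocking loop in one worker only stalls that worker, not the whole group.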
It doesn't. This seems like less of a question and more of an open statement. Node will loop infinitely and all of your other pending code will stop running.
It's not possible to detect such an issue from within the node.js program itself. However, a node.js script with an infinite loop will lead to 100% CPU usage, so this can be monitored, and you can use tools to restart the program. I don't recommend doing this - you should fix your infinite loop first - but it's sometimes hard to find the issue in a large codebase. The last time it happened to me, I used a remote debugger to find the infinite loop.
I've been doing a lot of research into the core of Node.js lately, and I have some questions about the inner workings of the Node platform. As I understand it, Node.js works like this:
Node has an API, written in JavaScript, that allows the programmer to interact with things like the filesystem and network. However, all of that functionality is actually done by C/C++ code, also a part of Node. Here is where things get a little fuzzy. It's the Chrome V8 engine's job to essentially "compile" (interpret?) JavaScript down into machine code. V8 is written in C++, and the JavaScript language itself is specified by ECMA, so things such as keywords and features of the language are defined by them. This leads me to my first few questions:
How is the Node Standard Library able to interact with the Node Bindings, since the Node Bindings are written in C++?
How does the Chrome V8 engine interpret Javascript in the context of Node? I know it uses a technique called JIT, which was mentioned in a similar question: (https://softwareengineering.stackexchange.com/questions/291230/how-does-chrome-v8-work-and-why-was-javascript-not-jit-compiled-in-the-first-pl)
But this doesn't explain how Javascript is interpreted in the context of Node. Is the Chrome V8 engine that ships with Node the exact same engine that runs on the Chrome browser, or has it been modified to work with Node?
That brings me to my next question. So Node features event-driven, non-blocking IO. It accomplishes this via the event loop which, although often referred to as the "Node Event Loop", is actually part of the libuv library, a C library designed to provide asynchronous IO. At a high level, the event loop is essentially accessed via callbacks, which are a native JavaScript feature and one of the reasons JavaScript was chosen as the language for the Node project. Below is an illustration of how the event loop works:
This can also be demonstrated live by this neat little site: http://latentflip.com/loupe/
Let's say our Node application needs to make a call to an external API. So we write this:
request(..., function eyeOfTheTiger() {
console.log("Rising up to the challenge of our rival");
});
Our call to request gets pushed onto the call stack, and our callback is passed somewhere, where it is kept until the request operation finishes. When it does, the callback is passed onto the callback queue. Every time the call stack is cleared, the event loop pushes the item at the top of the callback queue onto the call stack, where it is executed. This event loop runs on a single thread. Where problems arise is when someone writes 'blocking' code, or code that never leaves the call stack and effectively ties up the thread. If there is always code executing on the call stack, then the event loop will never push items from the callback queue onto the call stack and they will never get executed, essentially freezing the application. This leads me to my next question:
If the JavaScript is interpreted by the Chrome V8 engine, then what "controls" the pushing of code onto the callback queue? How is JavaScript code handled by the libuv event loop?
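To make the blocking behavior described above concrete, here is a small self-contained demonstration that works the same in Node or a browser:

setTimeout(() => {
    console.log("callback from the queue");   // can only run once the stack is empty
}, 0);

const start = Date.now();
while (Date.now() - start < 3000) {
    // Busy-wait for 3 seconds: the call stack never clears,
    // so the queued callback cannot run during this time.
}
console.log("blocking loop finished");
// Output order: "blocking loop finished", then "callback from the queue".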
I've found this image as a demonstration of the process:
This is where I'm uncertain about how exactly the Chrome V8 engine and libuv interact. I'm inclined to believe that the Node Bindings facilitate this interaction, but I'm not quite sure how. In the image above, it appears as though the NodeJS bindings are only interacting with machine code that has been compiled down from JavaScript by V8. If so, then I am confused as to how the V8 engine interprets the JavaScript in such a way that the Node Bindings can differentiate between the callback and the actual code to immediately execute.
I know this is a very deep and complicated series of questions, but I believe this will help to clear up a lot of confusion for people trying to understand Node.js, and also help programmers to understand the advantages and disadvantages of event-driven, non-blocking IO at a more fundamental level.
Status Update: I just watched a fantastic talk from a Sencha conference (Link here). In this talk, the presenter mentions the V8 Embedder's Guide (Link here) and talks about how C++ functions can be exposed to JavaScript and vice versa. Essentially, C++ code can expose functions and objects to V8 and specify how it wants them to appear to JavaScript, and the V8 interpreter will be able to recognize your embedded C++ functions and execute them when it finds matching JavaScript. For example, you can expose variables and functions to V8 that are actually written in C++. This is essentially what Node.js does; it is able to add functions like require into JavaScript that actually execute C++ code when they are called. This clears up question number 1 a little, but it doesn't exactly show how the Node standard library works in conjunction with V8. It is still also unclear how libuv interacts with any of this.
Basically, what you are looking for is V8 Templates. They expose your C++ code as JavaScript functions that you can call from within the V8 virtual machine. You can associate C++ callbacks with function invocations or with access to specific object properties (read up on Accessors and Interceptors).
I found a very good article which explains all of this - How does NodeJS work?. It also explains how libuv works in conjunction with Node to achieve asynchronicity.
Hope this helps!
It says on a lot of websites that JavaScript is single-threaded. When they say this, do they mean the JavaScript runtime?
I might be misunderstanding something, but isn't JavaScript just a programming language, and shouldn't the programs you create with it be the ones labeled as single-threaded? Maybe I'm not understanding something, so can someone please explain what I'm not getting?
JavaScript, the language, is nearly silent on the topic of threading. Whether it's single- or multi-threaded is a matter of the environment in which it runs. There are single-threaded JavaScript environments, and multi-threaded JavaScript environments. The only real requirement the specification makes is that a thread have a job queue and that once a job (a unit of code to run, like a call to an event handler) is started, it runs to completion on the thread before another job from that queue is started. That is, JavaScript has run-to-completion semantics.
JavaScript on browsers isn't single-threaded and hasn't been for years. There is one main thread (the thread the UI is handled on), and any number of web worker threads. The web workers don't have direct access to the UI; they send messages to the UI thread, and it does the UI updates (a minimal sketch follows below). The threads don't share data directly; they share data explicitly via messaging. This separation makes programming multi-threaded code dramatically simpler and less error-prone than if multiple threads had access to the UI and the same common data area. (Writing correct multi-threaded code where any thread can access anything at any time is hard.)
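A minimal sketch of that messaging model; the file name worker.js is just an example:

// main.js - runs on the UI thread
const worker = new Worker("worker.js");
worker.onmessage = (e) => {
    document.title = "result: " + e.data;   // only the UI thread touches the DOM
};
worker.postMessage(40);

// worker.js - runs on a separate thread, with no DOM access
onmessage = (e) => {
    let n = e.data, result = 0;
    for (let i = 0; i < n * 1e6; i++) result += i;  // long-running work off the UI thread
    postMessage(result);
};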
Outside the browser, JavaScript in NodeJS is run on a single thread. There was a fork of Node to add multi-threading, but I don't think it ever went anywhere.
JavaScript on the JVM (Rhino, Nashorn) has always been multi-threaded, supported by the threading facilities of the JVM.
While TJC's answer is of course correct, I don't think it addresses what people actually mean when they say that "JavaScript is single-threaded". What they're actually summarising (inaccurately) is that the runtime must behave as if it has a single thread of execution, which cannot be pre-empted, and which must run to completion. The actual runtime can do anything it likes as long as the end result behaves in this way.
This means that while a JavaScript program may appear to be massively parallel, with lots of threads interacting with each other, it's actually nothing of the sort. A kernel controls everything using the queue, event loop, and run-to-completion semantics (briefly) described here.
This is exactly the same problem that Hardware Description Languages (VHDL, Verilog, SystemC (though not actually a language), and so on) face. They give the illusion of massive parallelism by having a runtime kernel cycle between 'processes' which are not pre-emptible, and which must run until defined suspend points. The point of this is to ensure that models are executed in a determinate, repeatable fashion.
The difference between the HDLs and JS is that this is very well defined and fundamental for HDLs, while it's glossed over for JS. This is an extract from the SystemC LRM, which covers it briefly - it's much better defined in the VHDL LRM, for example.
Since process instances execute without interruption, only a single process instance can be running at any one time, and no other process instance can execute until the currently executing process instance has yielded control to the kernel. A process shall not pre-empt or interrupt the execution of another process. This is known as co-routine semantics or co-operative multitasking.
I am working on a web app in which I am using JavaScript on the client side to handle validation, calling backend CGI scripts, and similar things. Now my problem is that the web GUI has become somewhat slower than it was earlier. Everything is actually working as expected apart from this issue.
I know that there may be many issues which directly affect the speed of the application.
After all, the whole application depends on the CGI script responses, but is it still possible to make the app faster by taking care of certain JavaScript functions?
So can you please suggest what steps I should take to make JavaScript execution somewhat faster? (i.e. fewer lines of code)
Thanks in advance...
As long as you do not post some JS code, it will be kind of difficult to help you.
Still, speaking in general: keep your DOM manipulations to a minimum, especially when dealing with lots of data. Do not use jQuery functions that affect the DOM (append, insertBefore, insertAfter) inside a loop; instead, do all your preparation in the loop and then call the DOM-affecting function once, with all the changes together (see the sketch below).
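A rough illustration of that idea, assuming jQuery is available; the items array and the #list element are made up for the example:

// `items` is just an example array; #list is an existing <ul> in the page.
var items = ["one", "two", "three"];

// Slow: touches the DOM on every iteration
// items.forEach(function (item) { $("#list").append("<li>" + item + "</li>"); });

// Faster: build the markup in the loop, touch the DOM only once
var html = "";
items.forEach(function (item) {
    html += "<li>" + item + "</li>";
});
$("#list").append(html);   // single DOM-affecting call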
But of course, I do not know whether this is the case anywhere in your application's code.
Am I correct that if someone wrote a Ruby plugin for a web browser and a user installed that plugin, then it would be possible to replace JavaScript with Ruby on the frontend?
Aren't there any plugins for this? Or even for using languages other than JavaScript on the browser side?
You could use http://ironruby.net/ in a Silverlight plugin, but I don't have a clue about how easy DOM interaction is that way.
But I BEG YOU don't do it! Please, use the Open Web Stack to solve your problems.
If you don't leave your Ruby world of comfort, you will not only hurt your users' experience ("WTF? Why do I need Silverlight for this page?"), but you will also get stuck in your small little Ruby world without learning anything new and exciting.
It would be better for both of you, if you'd just go ahead and learn JavaScript.
Because remember: "Learning is a good thing!"
One thing is A FACT: as of 2010 JavaScript does not have a thread-stopping "sleep" function (other than the one that just burns CPU cycles).
I had been working with JavaScript for at least a year before posting this comment, and I have come to the conclusion that the lack of a thread-stopping sleep function is a real show-stopper for threading-related code.
A consequence of the lack of the sleep function is that it's not possible to simulate a Ruby/C#/C++/etc.-like threading model in JavaScript, which in turn means that it's not possible to translate any of the threading-enabled languages to JavaScript, no matter what one does, unless the JavaScript is supplemented with a (preferably non-CPU-cycle-burning) sleep function.
If one surfs around, one can find many comments stating that the sleep function is not even necessary, that setTimeout is sufficient, etc., but I guess that the people who state that have not tried to implement a threading framework in JavaScript. (Think of mutexes and critical sections. I refuse to go into a discussion about whether critical sections/synchronization are necessary for cases where widget content consists of multiple data components that form an "atomic whole".)
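For reference, the setTimeout-based approach people usually suggest instead of sleep looks roughly like this; note that it schedules the continuation as a callback rather than stopping the thread, which is exactly the limitation being complained about here (the function names are made up for illustration):

function doFirstHalf() {
    console.log("first half");
    setTimeout(doSecondHalf, 1000);   // continue about 1 second later; the thread stays free
}
function doSecondHalf() {
    console.log("second half");       // the "rest" of the work runs as a separate job
}
doFirstHalf();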
The second show-stopper for the whole DOM-model is the implementation that renders DOM elements IN THE BACKGROUND THREAD.
Here's what happens:
In JavaScript:
create_my_awsome_widget_in_DOM();
edit_my_awsome_widget_by_editing_DOM_inside_it();
if_we_are_lucky_we_reach_here_without_crashing_the_app();
As the DOM is rendered in the background (read: in a separate thread), there will be a race condition between the thread that initiated the DOM editing, by making a call to create_my_awsome_widget_in_DOM(), and the DOM rendering. If the rendering thread is "quick enough" to render the DOM before the JavaScript thread calls edit_my_awsome_widget_by_editing_DOM_inside_it(), everything works fine, but if it's the other way around, then the JavaScript starts to modify a region of the DOM that does not (yet) exist.
Essentially it means that, due to the background DOM rendering, create_my_awsome_widget_in_DOM() and edit_my_awsome_widget_by_editing_DOM_inside_it() are executed in a random order, and obviously the application crashes if edit_my_awsome_widget_by_editing_DOM_inside_it() is called before create_my_awsome_widget_in_DOM().
There might be a way to do it indirectly. Here is the original presentation at RubyConf 2008. The topic:
This talk is about the many paths towards getting ruby running in your web browser. I'll first talk about why this is even a good idea. I'll then talk briefly about each approach I've investigated and the differing amounts of FAIL I encountered with each. Next I'll focus on the most promising contender, rubyjs, a ruby compiler which outputs javascript.
The project rubyjs still exists, but it appears to be dead. The idea probably was a little too crazy.
mruby seems like an interesting option for running ruby in a web browser:
http://qiezi.me/projects/mruby-web-irb/mruby.html
It's not a typical plugin, as it does not require installation; it's JavaScript (compiled from C) running Ruby code.
Technically that would be correct, assuming the browser/plugin also provided an extensive API to deal with the DOM and such. I am not aware of any plugins that make this possible, but it's an interesting idea.