How to do compute-intensive tasks in an AngularJS application?

I am writing an application using JavaScript, HTML5 and AngularJS. It only has to work on pretty recent browsers (for instance, IE10 but not IE9).
At several places in the application, there will be compute-intensive tasks, such as XML parsing or base64 decoding; these could involve fairly big data (a few MB is certainly a possibility).
If I just call things like atob() or DOMParser.parseFromString(), I will get an unresponsive browser for seconds or even minutes. This is clearly not acceptable to a user.
I've used Angular's $q service to make things like accessing an external Web service asynchronous, and hence avoid hanging the browser while awaiting a response. But such operations already have an asynchronous API.
What about these compute-intensive tasks, which don't have an asynchronous API of their own?
I can split some of these tasks a bit, chaining promises. Does this help at all? Does the browser message queue get a spin at the end of each task?
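For instance, here is a rough sketch of what I mean, using $q and $timeout (splitIntoChunks and processChunk are hypothetical stand-ins for my real parsing/decoding work):

function processInChunks($q, $timeout, data, splitIntoChunks, processChunk) {
  var deferred = $q.defer();
  var chunks = splitIntoChunks(data);
  var index = 0;

  function step() {
    if (index >= chunks.length) {
      deferred.resolve();
      return;
    }
    processChunk(chunks[index]);
    index += 1;
    // Scheduling the next chunk with $timeout should give the browser
    // a chance to repaint and handle input in between... should it?
    $timeout(step, 0);
  }

  step();
  return deferred.promise;
}

Does something like this actually let the browser breathe between chunks?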
I see the existence of "Web Workers", which seem to offer proper multi-threading. But they seem to have rather poor abilities to transfer objects to/from the worker threads. Certainly, it seems that way for someone like me coming from C#.Net! For instance, I'd like to inject Angular services (built-in and my own) into the tasks on the threads. And I don't want to copy massive data between threads either.
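(I did find that an ArrayBuffer can apparently be handed to a worker as a "transferable" rather than copied, browser support permitting, along these lines; but that still doesn't get my Angular services into the worker:)

var worker = new Worker('decode-worker.js'); // hypothetical worker script
var buffer = new ArrayBuffer(8 * 1024 * 1024); // a few MB of binary data
// The second argument transfers ownership instead of copying; after
// this call, `buffer` is no longer usable in the main thread.
worker.postMessage(buffer, [buffer]);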
Are other people achieving responsive client-side Web apps that include serious computation? If so, what are they using to achieve this?

It sounds like you are looking for the Parallel.js library.
Here is a quick description of the library from their website:
"Parallel.js is a tiny library for multi-core processing in Javascript. It was created to take full advantage of the ever-maturing web-workers API."
I'm not currently aware of any examples specific to usage of Parallel.js in Angular, but I'm sure it wouldn't be too hard to integrate the library as an Angular service.
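As a very rough sketch of the basic usage, based on the examples on the Parallel.js site (the data and the doubling function are just placeholders):

// Parallel.js distributes the mapped function across web workers.
// Note: the function is serialized and shipped to the worker, so it
// must be self-contained (no closing over outside variables).
var p = new Parallel([1, 2, 3, 4, 5]);
p.map(function (n) { return n * 2; })
 .then(function (results) {
   console.log(results); // [2, 4, 6, 8, 10]
 });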

Related

How to implement multi threading in Angular?

https://www.npmjs.com/package/threads
It seems to me we can use this package in Angular for running threads.
But I am having difficulty implementing it.
Is there any way to use threading in Angular?
How can I use threads in Angular?
Angular does not have "threads", which by the way can mean many different things, in different contexts, environments, platforms, CPUs, and operating systems. Threads can be a way to accomplish parallelism; or they can be a way to organize your code as a set of concurrent processes; or they can be a way to manage access to shared resources; or any or all of the above.
Angular works in a browser. Browsers run JavaScript. The closest thing we have to threads in our browser world is web workers. To greatly oversimplify, web workers are not light-weight threads; in other words, you wouldn't want to create 100,000 of them. But if you are looking for a simple way to offload some computation away from the main browser task, so that it does not lock up the browser while you are computing, then you are probably interested in web workers.
Web workers do not really need any special library, or wrapping, or scaffolding. They're easy enough to just write directly. However, if you're interested in some ways to facilitate the process of using web workers within an Angular context, then google for "angular web workers".
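For instance, a bare-bones dedicated worker needs only something like this (the worker.js file name and the summing work are placeholders):

// main.js (runs in the page)
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
  console.log('result from worker:', e.data);
};
worker.postMessage({ numbers: [1, 2, 3] });

// worker.js (runs in its own thread)
self.onmessage = function (e) {
  var sum = e.data.numbers.reduce(function (a, b) { return a + b; }, 0);
  self.postMessage(sum);
};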
I have no special knowledge of the library you mention. At first glance, it appears to be a way to abstract concurrent algorithms over the different threading implementations of the Node.js platform vs. the browser. If you're planning on working in Angular, then the Node.js part is most likely irrelevant, so this library is probably not anything you should be interested in.

Anything recent in concurrency in the JS ecosystem?

Is there anything like Actors in JavaScript and its ecosystem (Node, CoffeeScript, Backbone, etc.)?
With the widespread use of AJAX, it seems perfect for async message-passing.
If you are using Javascript in the browser, take a look at Web workers:
https://developer.mozilla.org/en-US/docs/Web/Guide/Performance/Using_web_workers
From the page:
Dedicated Web Workers provide a simple means for web content to run scripts in background threads
You communicate with web workers using message passing.
Because JavaScript is traditionally single-threaded, it would be difficult to make Actors or a similar async message-passing technique without exposing some of the internals to the users of the library. If I understand correctly, Actors wait synchronously for messages, and it's just sending which is happening asynchronously. It's much more idiomatic in JavaScript to both read and write asynchronously and use callbacks to deal with the results of the communication.
Of course, there are ways around this, so this other question, the presentation linked in its top answer, and this list of node.js modules for dealing with control flow are decent starting points for how you might go about implementing your own.
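To illustrate how little machinery is needed, here is a toy actor sketch: a mailbox drained asynchronously with setTimeout (makeActor and the behaviour function are names invented for this example):

function makeActor(behaviour) {
  var mailbox = [];
  var scheduled = false;

  function drain() {
    scheduled = false;
    // Process one message per turn so we never block for long.
    if (mailbox.length > 0) {
      behaviour(mailbox.shift());
      schedule();
    }
  }

  function schedule() {
    if (!scheduled && mailbox.length > 0) {
      scheduled = true;
      setTimeout(drain, 0);
    }
  }

  return {
    send: function (message) {
      mailbox.push(message);
      schedule();
    }
  };
}

var logger = makeActor(function (msg) { console.log('got', msg); });
logger.send('hello'); // returns immediately; processing happens async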

Javascript and Scientific Processing? [closed]

Matlab, R, and Python are powerful but either costly or slow for some data mining work I'd like to do. I'm considering JavaScript for its speed, its good visualization libraries, and the ability to use the browser as an interface.
The first question I faced is the obvious one for scientific programming: how do I do I/O to data files? The second: client-side or server-side? The last: can I make something that is truly portable, i.e. put it all on a USB stick and run it from that?
I've spent a couple of weeks looking for answers. Server2go seems to address the client/server needs, which I think means I can get data to and from the programs on the client side. Server2go also allows running from a USB stick. The data files I work with are usually XML, and there seem to be several JavaScript converters to JSON.
However, after all the looking around, I'm not sure if my approach makes sense. So before I commit further, any advice/thoughts/guidance on Javascript as a portable tool for scientific data processing?
I have to agree with the comments that JavaScript is not a good fit for scientific processing. However, you know your requirements best; maybe you have already found useful libraries that do what you need. Just be aware that you'll have to implement all the logic yourself: there is no built-in handling of complex numbers, or matrices, or integrals, or ... Usually programmer time is far more valuable than machine time, so personally I'd only look into a compiled language after creating a first version, in whatever language I like the most, that turns out not to be fast enough.
Assuming that JavaScript is the way to go:
Data I/O
I can think of three options:
1. Sending and receiving data with ajax to a server
This seems to be the solution you've found with Server2go. It requires you to write a server back end, but that can be kept quite simple. All it really needs to do is read and write files in response to your client-side application.
2. Using a non-browser implementation of V8 that includes file I/O
For instance Node.js. You could then avoid the need for a server and simply use a command-line interface, and all your code would be JavaScript. Other than that, it is roughly equivalent to the first option.
3. Creating a file object using the File API and asking the user to save or load it
This is the worst option in my opinion, as user interaction is required. It would avoid the need for a server; your application could be a simple HTML file that loads all data files with ajax requests. You'd have to start Chrome with a special switch (--allow-file-access-from-files) to allow ajax requests with the file:// protocol, as described here.
Note that none of these options does file I/O in browser JavaScript itself, because browsers cannot allow arbitrary web code to do arbitrary file I/O; the security implications would be horrendous. Each option is really a way around that restriction:
The first communicates with a server that does the file I/O for the client.
The second uses a "special" version of JavaScript that runs outside the browser sandbox, so those security concerns do not apply. That means you'll have to look up how file I/O is done in the particular implementation you use; it is not part of JavaScript itself.
The third leaves the file I/O under the user's control.
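For the second option, the I/O itself is only a few lines. A minimal Node.js sketch (data.xml, results.json, and the analyse function are hypothetical placeholders for your own files and processing code):

// Node.js: read a data file, process it, write the results back out.
var fs = require('fs');

fs.readFile('data.xml', 'utf8', function (err, text) {
  if (err) throw err;
  var results = analyse(text); // hypothetical: your data-mining code
  fs.writeFile('results.json', JSON.stringify(results), function (err) {
    if (err) throw err;
  });
});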
Interface
Even if you don't use JavaScript to do the actual processing, which so far is the consensus, there is nothing stopping you from using a browser as the interface or JavaScript libraries for visualisation. That is something JavaScript is good at.
If you want to interactively control your data mining tool, you will need a server that can control the tool; Server2go should work, as would the built-in server in Node.js if you use that. If you don't need interactive control (that is, you first generate the processed data and then look at it), a server can be avoided by using the file:// protocol and JSONP. But really, avoiding a server shouldn't be a goal.
I won't go into detail about interface issues, as there is nothing specific to say; very nearly everything that has been written about JavaScript is about the interface. One thing: do use a declarative data-binding library like Angular.js or Knockout.js.
JavaScript speed is heavily overrated. This is a Web 2.0 myth.
Let me explain this claim a bit (and don't just downvote me for saying something you do not want to hear!)
Sure, JavaScript V8 is a quite highly optimized VM. It does beat many other scripting languages in naive benchmarks.
However, it is a very limited-scope language, meant for the "ADHD world" of the web. It is best-effort: it may just fail, and you have few guarantees about things completing, or completing on time.
Consider for example MongoDB. At first it seems to be good and fast and offer a lot. Until you see for example that the MapReduce is single-threaded only and thus really slow. It's not all gold that shines!
Now look at data mining relevant libraries such as BLAS. Basic linear algebra, math operations and such. All CPU manufacturers like Intel and AMD offer optimized versions for their CPUs. This is an optimization that requires detailed understanding of the individual CPUs, way beyond the capabilities of our current compilers. The libraries contain optimized codepaths for various CPUs all essentially doing the same thing.
And for these operations, using an optimized library such as BLAS can easily yield a 5-20x speedup; at the same time matrix operations that are often in O(n^2) or O(n^3) will dominate your overall runtime.
So a good language for data mining will let you go all the way to machine code!
Python's SciPy and R are good choices here. They have the optimized libraries inside and easily accessible, but at the same time allow you to do the wrapper stuff in a simpler language.
Have a look at this programming language benchmark:
http://benchmarksgame.alioth.debian.org/u32/which-programs-are-fastest.html
Pure JavaScript has a high variance, indicating that it can do some things fast (mostly regular expressions!) and others much slower. It can clearly beat PHP, but it will just as clearly be beaten by C and Java.
Multithreading is also important for modern data mining. Few systems today have only a single core, and you do want to make use of all of them. So you need libraries and a programming language with a powerful set of multithreading operations. This is actually one reason why Fortran and C are losing popularity for this kind of work; other languages such as Java do much better.
Although this discussion is a bit old and I am not a JavaScript guru by any stretch of the imagination, I find the above arguments doubtful as to JavaScript lacking the processing speed or the capabilities for advanced math operations. WebGL is a JavaScript API for rendering advanced 2D and 3D graphics which relies heavily on advanced math operations. I believe the capabilities are there from a technical point of view; what is lacking is good libraries for statistical analysis, natural language processing, and the other predictive analytics used in data mining.
WebGL is based on OpenGL, which in turn uses libraries like BLAS (library info here).
Advances like Node.js and V8 make it technically possible. What is lacking is libraries like those we can find in R and Scilab to do the same operations.

Browser-side node.js or non-blocking javascript?

I am fascinated with non-blocking architectures. While I haven't used Node.js, I have a grasp of it conceptually. Also, I have been developing an event-driven web app so I have a fundamental understanding of event programming.
How do you write non-blocking javascript in the browser? I imagine this must differ in some ways from how Node does it. My app, for example, allows users to load huge amounts of data (serialized to JSON). This data is parsed to reconstitute the application state. This is a heavy operation that can cause the browser to lock for a time.
I believe using web workers is one way. (This seemed the obvious choice; however, Node accomplishes a non-blocking, event-driven architecture without, I believe, using web workers, so I guess there must be another way.) I believe timers can also play a role. I read about TameJS and some other libraries that extend the JavaScript language, but I am interested in libraries that use native JavaScript without introducing a new language syntax.
Links to resources, libraries and practical examples are most appreciated.
EDIT:
Learned more and I realize that what I am talking about falls under the term "Futures".
jQuery implements this; however, it always uses XHR to call a server, and the server does the processing before returning the result. What I am after is doing the same thing without calling the server: the client does the processing, but in a non-blocking manner.
http://www.erichynds.com/jquery/using-deferreds-in-jquery/
There are two methods of doing non-blocking work in the browser:
Web workers. Web workers create a new isolated thread for you to do computation in; however, browser support tells you that IE<10 hates you.
Not doing it. Expensive, blocking work should not be done on the client; send an ajax request to a server to do the work, then have the server return the results.
Poor man's threads:
There are a few hacks you can use:
Emulate time slicing by using setTimeout. This basically means that after every "lump" of work you give the browser some room to stay responsive by calling setTimeout(doMore, 10). This amounts to writing your own process scheduler in a really poor, unoptimized manner; use web workers instead. (A sketch follows this list.)
Create a "new process" by creating an iframe with its own HTML document. In this iframe you can do computation without blocking your own HTML document from being responsive.
What do you mean by non-blocking specifically?
The longest operations, Ajax calls, are already non-blocking (async).
If you need some long-running function to run "somewhere" and then do something, you can call
setTimeout(function, 0)
and call the callback from the function.
And you can also read up on promises.

Performance considerations with Facebook C# SDK versus Javascript SDK

I'm starting a new Facebook canvas application so I can pick the technology I'm going to use. I've always been a fan of the .NET platform so I'm strongly considering it for this app. I think the work done in:
facebooksdk.codeplex.com
looks very promising. But my question is the following:
It's my understanding that when using an app framework like this (or PHP for that matter) with Facebook, whenever we have a call into the API to do some action (say post to the stream), the flow would be the following:
- User initiates a request, which is directed to the ASP.NET server
- ASP.NET server makes the Facebook API call
so a total of three machines are involved (the user's browser, my server, and Facebook's servers).
Why wouldn't one use the Javascript SDK instead?
http://developers.facebook.com/docs/reference/javascript/FB.api
"Server-side calls are available via the JavaScript SDK that allow you to build rich applications that can make API calls against the Facebook servers directly from the user's browser. This can improve performance in many scenarios, as compared to making all calls from your server. It can also help reduce, or eliminate the need to proxy the requests thru your own servers, freeing them to do other things."
As I see it, then, I'd be taking my ASP.NET server out of the equation, reducing the number of machines involved from three to two. My server is under less load and the user (likely) gets faster performance.
Am I correct that using the Facebook C# SDK, we have this three machine scenario instead of the two machine scenario of the JS API?
Now I do understand that a web server framework like ASP.NET offers great benefits like great development tools, infrastructure for postbacks, etc., but do I have an incomplete picture here? Would it make sense to use the C# framework but still rely on the JavaScript SDK for most of the FB API calls? When should one use each?
Best,
-Ben
You should absolutely use the JavaScript SDK when you can. You are going to get a lot better performance, and your app will be more scalable. However, performance isn't always the only consideration. Some things are just easier on the server. Also, a lot of apps do offline (or delayed) processing of user data that doesn't involve direct interaction.
I don't think there is a right or wrong place to use each SDK; they definitely both have their place in a well-built Facebook app. My advice would just be to use whichever is easier for each task. As your app grows you are going to learn where the bottlenecks are and where you really need to squeeze out that extra bit of performance, either by moving work to the client (JavaScript SDK) or by moving it into background processing (Facebook C# SDK).
Generally, we use the JavaScript SDK for some authentication stuff and for most of the user-interface work. The one exception to the UI stuff is when we are really concerned about handling errors. It is a lot easier to handle errors on the server than with the JavaScript SDK. The errors I am talking about are things like errors from Facebook, or just general Facebook downtime.
Like I said, in the beginning just use both and do whatever is easier for each task.
