canvas data to web worker - javascript

I'm trying to process image data from a canvas in a web worker. The solution I have right now works reasonably well, but there is still visible lag during processing (besides the processing itself I have to keep drawing video from the webcam to the canvas, and that starts to stutter).
So I moved the work into a web worker and made everything asynchronous. The only problem is that the JSON.stringify call takes longer than the actual processing.
My question: is there any other way to pass a lot of data through worker.postMessage quickly? Is there some kind of workaround that I don't know about?
Small subquestion: what are web workers actually for? If all they can pass around is strings, they seem fairly limited to me.
EDIT:
possible duplicate: Pass large amounts of data between web worker and main thread

Everything is copied to a webworker, so unless your computation is very intensive, I doubt you'll see much gain there.
WebWorkers are meant for long-running, computationally intensive algorithms. The obvious use cases are:
AI in web games
Raytracers
Compression/decompression on large data sets
AJAX responses that need a lot of processing before the results are displayed
Other complex algorithms
Since data is copied both ways, you have to be careful about what you're doing. WebWorkers don't have access to the DOM, so they're likely useless for what you're trying to do. I don't know what your app does, but it doesn't sound very computationally intensive.
There are also SharedWorkers, which can be shared by multiple tabs/windows; they're a really nice way to pass data between tabs.
Edit:
Also look into the structured clone algorithm. It seems to be more efficient than JSON for many things, and can even duplicate ImageData (so it doesn't just support strings anymore).
For browsers that don't support the clone algorithm, I would urge you to consider base64. It's decent at storing binary data, and I think it's faster than JSON.stringify. You may have to write some code to handle it though.
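As a rough illustration of the structured-clone route, here is a minimal sketch that passes ImageData to a worker directly instead of serializing it with JSON.stringify. The canvas id "frame" and the worker file name worker.js are just placeholders, and the color inversion stands in for whatever processing you actually do:

    // main.js: grab pixels from the canvas and hand them to the worker
    var canvas = document.getElementById('frame');         // assumed canvas id
    var ctx = canvas.getContext('2d');
    var worker = new Worker('worker.js');                  // assumed script name

    function sendFrame() {
        var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
        worker.postMessage(pixels);                        // structured clone, no JSON involved
    }

    worker.onmessage = function (event) {
        ctx.putImageData(event.data, 0, 0);                // processed ImageData coming back
    };

    // worker.js: invert the colors as a stand-in for real processing
    onmessage = function (event) {
        var data = event.data.data;
        for (var i = 0; i < data.length; i += 4) {
            data[i]     = 255 - data[i];
            data[i + 1] = 255 - data[i + 1];
            data[i + 2] = 255 - data[i + 2];
        }
        postMessage(event.data);
    };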

Related

How much JavaScript can actually be loaded into memory by a browser?

I'm working on a BIG project, written in RoR with a jQuery frontend. I'm adding AngularJS, which has intelligent dependency injection, but what I want to know is how much JavaScript I can put on a page before it becomes noticeably slow. What are the specific limits of each browser?
Assuming that my code is well factored and all operations run in constant time, how many functions, objects, and other things can I allocate in JavaScript before the browser hits its limit? There must be one, because any computer has a finite amount of RAM and disk space (although disk space would be an ambitious limit to hit with JavaScript).
I've looked online, but I've only seen questions about how many assets can be loaded, i.e. how many megabytes can be fetched, etc. I want to know whether there is an actual computation limit set by browsers and how those limits differ.
-- EDIT --
For the highly critical, I guess a better question is
How does a modern web browser determine the limit for the resources it allocates to a page? How much memory is a web page allowed to use? How much disk space can a page use?
Obviously I use AJAX, and I know a decent amount about render optimization. It's not a question of how I can make my page faster, but rather of what my resource limitations are.
Although it technically sounds like a monumental task to reach the limits of a client machine, it's actually very easy to hit them with an accidental loop. Everyone has done it at least once!
It's easy enough to test: write a JS loop that uses huge amounts of memory and you'll find that your PC's memory usage pegs out, and even your virtual memory gets consumed, before the browser falls over.
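For instance, a throwaway test along these lines (the sizes are arbitrary) will make memory usage climb very quickly and eventually make the tab unresponsive:

    // Deliberately wasteful: keep appending ~1 KB strings to an array and watch
    // the browser's memory usage climb in the task manager or dev tools.
    var hog = [];
    for (var i = 0; i < 1000000; i++) {
        hog.push(i + new Array(1000).join('x'));   // a unique ~1 KB string per iteration
    }
    console.log(hog.length);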
I'd say, from experience, that even if you don't get anywhere near the technological limits you're talking about, your visitors' patience will run out before the resources do.
Maybe it's worth looking at AJAX solutions in order to load relevant parts of the page at a time if loading times turn out to be an issue.
Generally, you want to minify and package your JavaScript to reduce initial page requests as much as possible. Your web application should mainly consist of one JavaScript file when you're all done, but that's not always possible, as certain plugins might not be compatible with your dependency management framework.
I would argue that a single page application that starts to exceed 3 MB or 60 requests on an initial page load (with cache turned off) is getting too big and unruly. You'll want to start looking for ways of distilling copy-and-pasted code down into extendable, reusable objects, and possibly dividing the one big application into a collection of smaller apps that all share the same library of models, collections, and views. If you're using RequireJS (which I use), you'll end up with separate builds that need to be recompiled before launching any code whenever a dependency contained in that build has changed.
Now, as for the 'speed' of your application, look at render optimization tutorials for your chosen framework. A few things that help:
Appending a model's view one by one as models are added to the collection renders faster than trying to attach one huge blob of HTML all at once.
Be careful of memory leaks: make sure you close references to your views when switching between the pages of your single page application. Create an 'onClose' method in your views that destroys all subviews and attached data references when the view itself is closed, and garbage collection will do the rest (see the sketch below this list).
Use a global variable for storing your collections and models, something like window.app.data = {}.
Use a global view controller for navigating between the main sections of your application; it will help you close out view chains effectively.
Use lazy loading wherever possible.
Use 'base' models, collections, and views and extend them. Doing this will give you more options later for controlling their global behavior.
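As a rough sketch of the 'onClose' idea, assuming a Backbone-style view (the method and property names here are illustrative, not part of any library):

    // A close() helper that tears down a view and everything it owns.
    Backbone.View.prototype.close = function () {
        if (this.onClose) {
            this.onClose();            // view-specific cleanup hook
        }
        this.remove();                 // detach the view's element from the DOM
        this.off();                    // drop event callbacks bound to this view object
    };

    var MessageListView = Backbone.View.extend({
        initialize: function () {
            this.childViews = [];      // track every subview this view creates
        },
        onClose: function () {
            // Close each subview and drop the references so GC can reclaim them.
            this.childViews.forEach(function (child) { child.close(); });
            this.childViews = [];
        }
    });

A global view controller can then call close() on the current view chain before swapping in the next section of the app.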
This is all stuff you sort of learn from experience over time, but if enough care is taken, it's possible to create a well-running single page application on your first try. You're more likely to discover problems with the application design as you go though, so be prepared to refactor your code as these problems come up.
It depends much more on the computer than the browser - a computer with a slow CPU and limited amount of RAM will slow down much sooner than a beefy desktop.
A good proxy for this might be to test the site on a few different smartphones.
Also, slower devices sometimes run outdated and/or less feature-rich browsers, so you could do some basic user-agent sniffing or feature detection on the client and fall back to plain server-rendered HTML.
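A minimal sketch of that kind of fallback (the features tested and the fallback URL are placeholders, not a recommendation of which features to check):

    // Send low-capability browsers to a server-rendered version of the page.
    var canRunRichClient =
        typeof JSON !== 'undefined' &&
        typeof document.querySelector === 'function' &&
        typeof window.history.pushState === 'function';

    if (!canRunRichClient) {
        window.location.href = '/basic';    // hypothetical plain HTML fallback
    }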

HTTP data streaming

I've got a backend to be implemented in Python that should stream data to a web browser where the JavaScript is creating the representation (e.g. continuously updating a variable or drawing to a <canvas>).
That data will update at a rate of up to 100 Hz (and might, in the worst case, even reach 1000 Hz...) with perhaps 10-20 bytes per update.
So my first thought, the COMET pattern, would produce far too much overhead, I guess.
My next guess was WebSockets. They would be a perfect fit - but being disabled in Firefox makes them unusable for me.
So what is your recommendation to use in this case?
(Requirement: running in a few modern browsers on pure JavaScript, no Flash or Java allowed. Back end in Python. Already used lib is jQuery. Implementation should be easy, preferably using lightweight libs)
The solution I went with is the COMET pattern, transporting all the data that has queued up in the backend since the last request. That way I'm not polling during periods of slow data generation (that's the COMET part), and I'll never have more connections than the frontend (i.e. the browser) can handle, since it's the one creating them.
And the overhead is reduced, as each request carries several data points. (You could even say the overhead scales dynamically with the data rate: the higher the data rate, the lower the relative overhead.)
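A rough long-polling sketch of that approach, using jQuery since it's already in the stack (the /stream endpoint and the shape of the response are assumptions):

    // Each response carries every data point queued since the previous request,
    // so slow periods naturally produce fewer, smaller requests.
    function poll() {
        $.getJSON('/stream', function (points) {      // assumed endpoint returning an array
            points.forEach(drawPoint);                // assumed per-point handler
            poll();                                   // immediately ask for the next batch
        }).fail(function () {
            setTimeout(poll, 1000);                   // back off briefly on errors
        });
    }

    function drawPoint(p) {
        // e.g. plot p on the <canvas> or update a variable
    }

    poll();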
As an update to this question: nowadays you should be able to use Server-Sent Events. I didn't use XHR because it keeps the entire response in memory, and I didn't use WebSockets, since I didn't need duplex communication. I had pretty much the same question and answered it here:
How to process streaming HTTP GET data?
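For reference, the client side of Server-Sent Events is only a few lines; the /events URL is a placeholder, and the backend has to respond with the text/event-stream content type:

    // Subscribe to a server-sent event stream and handle each message.
    var source = new EventSource('/events');            // assumed endpoint

    source.onmessage = function (event) {
        var point = JSON.parse(event.data);              // assuming the server sends JSON per event
        // update the variable or draw to the <canvas> here
    };

    source.onerror = function () {
        // The browser reconnects automatically; log or show a status indicator if needed.
    };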

Which is better: HTML rendering on server or on client in JS?

I have a best practices/performance question. I am creating an ASP.NET MVC 2 project, and I have several parts of the page that are accessed dynamically either at load time or on user interaction.
My question is this: is it better to have the sections rendered in HTML on the server and then just replace the sections of HTML or is it better to just retrieve the information as JSON objects and then use JS to create and insert the HTML?
It should be noted that the objects of concern are very simple in nature. An example would be a 'message' object that has an ID field, a to field, a from field, a subject field, and a body field, all of which are strings.
Are there some serious advantages or disadvantages to either approach? Or is this a case of preference to how to construct your application?
Consider the following questions:
Will there be any advantage to having the raw data on the client? In some cases other parts of the page use the data; there it may make more sense to send the data over the wire.
Are there potential performance differences? Consider the total pipeline. Sending HTML can be verbose, but is rendering faster on the server? Can the rendered HTML be cached on the server?
If neither of these push you in one direction or another, then I choose the more maintainable code base. This will depend not only on the specific problem, but also on the skillset of the team.
Bob
I don't think either is better; it's going to depend on your requirements. The question is borderline unanswerable. Are you using the data on the client for further computation or manipulation, or are you just plopping something out to be displayed?
In both cases you're outputting textual data; it just happens to be easier to represent data structures directly as JSON than to convert them to HTML, and easier to display HTML directly than JSON.
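To make the trade-off concrete, here's a rough sketch of both routes for the 'message' example from the question, using jQuery for brevity (the endpoints and markup are assumptions, and real code should escape the values):

    // Option 1: the server returns a rendered HTML fragment; the client just inserts it.
    $.get('/messages/42/fragment', function (html) {    // assumed endpoint
        $('#message-pane').html(html);
    });

    // Option 2: the server returns JSON; the client builds the markup itself.
    $.getJSON('/messages/42', function (msg) {          // assumed endpoint
        $('#message-pane').html(
            '<h2>' + msg.subject + '</h2>' +
            '<p>From: ' + msg.from + ', To: ' + msg.to + '</p>' +
            '<p>' + msg.body + '</p>'
        );
    });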
Many frameworks have relatively slow render libraries (the View portion of the Model-View-Controller architecture). The reason is that the render library needs to parse/execute a View domain-specific language to substitute variables, etc.
Depending on your app's scale, it can be much faster to have the client's browser execute the render. But moving the View computation to the client can be tricky to do in a consistent way.
Google's Closure tools include a template library (Closure Templates). Another option is Liquid, which has JavaScript, .NET, and Ruby implementations.
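Client-side templating can be as simple as substituting named placeholders. This hand-rolled sketch is just to show the idea; it is not the API of Closure Templates or Liquid:

    // Replace {{name}} placeholders in a template string with values from an object.
    function render(template, data) {
        return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
            return data[key] != null ? data[key] : '';
        });
    }

    var html = render('<h2>{{subject}}</h2><p>{{body}}</p>',
                      { subject: 'Hello', body: 'World' });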
As Jonathon has said already, I don't think there is a simple yes/no answer to your question.
The only factor that hasn't been mentioned already is that server-side execution is more predictable, whereas client-side execution is out of your control and may vary depending on the browser. This may not be a factor in practice on an intranet site, but it can become important if the audience is diverse. Modern JavaScript libraries usually (though not always) shield us from browser quirks, but older browsers can also have specific performance issues (performance really shouldn't be your primary criterion, though, unless you try it out and it's horrendous).
Picking the solution you feel the most comfortable implementing might very well be the way to go.

How much external data is too much? (XML or JSON)

I have written pure JavaScript front ends before and started noticing performance decreases when working with large stores of data. I have tried using XML and JSON, but in both cases it was a lot for the browser to handle.
That poses my question, which is how much is too much?
You can't know, not exactly and not always. You can make a good guess.
It depends on the browser, OS, RAM, CPU, what else is running at that moment, how fast their connection is, what else they're transferring, etc.
Figure out several situations you expect for your average user, and test those. Add tests for various best, worst, and interesting (e.g. mobile, tablet) cases.
You can, of course, apply experience and extrapolate from your specific cases, and the answer will change for the future.
But don't fall into the trap of "it works for me!"
I commonly see this with screen resolutions: as those have increased, it's much more popular to have multiple windows visible at the same time. In 1995 it was rare for me to not have something maximized; now fifteen years later, it's exactly the opposite.
Yet sometimes people will design some software or a website, use lower contrast[1], maximize it, and connect to a server on localhost—and that's the only evaluation they do.
[1] Because they know what the text says and don't need to read it themselves, so lower contrast looks aesthetically better.
In my opinion, if you need to stop and think about this issue, then the data is too much. In general you should design your applications so that users with low-end netbooks and/or slow internet connections are still able to run them. Also keep in mind that more often than not your application isn't the only page your users have open at the same time.
My recommendation is to use Firefox with Firebug to do some measurements. See how long a request takes to complete in a modest configuration. If it takes noticeable time for the browser to render the data, then you'd be better off doing a redesign.
A good guiding principle should be that instead of worrying about whether the browser can handle the volume of data you're sending it, worry about whether your user can handle it. It all depends on the presentation of course (i.e., a lot of data bound for a visualization tool that'll render a complex graph in a canvas is different than a lot of raw numbers bound for a gigantic table), but in my experience a user's brain reaches data overload before the browser/network/client computer.
It really depends on the form that your external data is going to take in your Javascript. If you want to load all your data at once and keep it in memory as a large object with lots of properties (associative array), then you will find that most current desktops can only handle about 100k entries (with small key-value pairs) before performance really degrades.
If possible, see whether there are ways to load only the data the user needs for a given request/interaction. You can use AJAX to request the data that's needed and prefetch data you think the user may need next.
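A small sketch of that load-on-demand idea (the URL and the cache shape are assumptions):

    // Fetch a page of records only when the user asks for it, and cache the result.
    var pageCache = {};

    function loadPage(pageNumber, onReady) {
        if (pageCache[pageNumber]) {
            onReady(pageCache[pageNumber]);              // already fetched earlier
            return;
        }
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/data?page=' + pageNumber);     // assumed endpoint
        xhr.onload = function () {
            pageCache[pageNumber] = JSON.parse(xhr.responseText);
            onReady(pageCache[pageNumber]);
        };
        xhr.send();
    }

    // Prefetch the next page in the background while the user reads the current one.
    function prefetchNext(pageNumber) {
        loadPage(pageNumber + 1, function () { /* now cached for later */ });
    }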

How much is too much JSON to send over to a web client?

So my question is sort of experience-based: for those of you who have tried loading large datasets, what's a reasonable amount of data to load? My users have relatively big pipes, so I don't have to worry about modem users, but I do concern myself with processing times. I'm guessing my limit is somewhere in the 300-1024 KB range, but does anyone have a method, or know of a website, that has pinned this down a little more definitively?
I've run across this resource. It's from 2005, so I'd consider it out of date even though the general lesson seems to be pretty sound:
http://blogs.nitobi.com/dave/2005/09/29/javascript-benchmarking-iv-json-revisited/
I also came across this:
http://www.jamesward.com/census/
Is there anything else out there worth checking into?
A typical JSON packet can (and should) be compressed by the web server using gzip, down to roughly 10% of its initial size, so you're really looking at 30-100 KB on the wire. If those responses can be cached, then it's even less of a problem.
The size of the transmission should not be the deciding factor in whether a packet is "too much". Instead, look at how long it will take the browser to process this packet (update the UI, etc).
Actually parsing the JSON should be very fast, up to many megabytes of data. Turning that into something new in the UI will largely depend on how complicated the HTML you're producing is.
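If you want to see where the time actually goes, a quick measurement like the one below separates parse time from render time. The jsonText variable, the #results element, and the record fields are all placeholders, and performance.now() can be swapped for Date.now() on older browsers:

    // Compare how long parsing the JSON takes versus building the UI from it.
    var t0 = performance.now();
    var records = JSON.parse(jsonText);                  // jsonText: the raw response body
    var t1 = performance.now();

    var rows = records.map(function (r) {
        return '<tr><td>' + r.id + '</td><td>' + r.name + '</td></tr>';
    }).join('');
    document.getElementById('results').innerHTML = '<table>' + rows + '</table>';
    var t2 = performance.now();

    console.log('parse: ' + (t1 - t0).toFixed(1) + ' ms, render: ' + (t2 - t1).toFixed(1) + ' ms');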
