Is a Web Worker faster than running a script? - javascript

I was tasked with creating a dedicated webworker instance on each page to send a server request on a given interval, regardless of which page the user is on. Since this must work in any browser, a shared webworker was not an option (which is why one must be loaded on each page).
I wrote a script thinking I was creating a worker, but I was recently informed that no workers were actually being created, even though the script was doing the intended job of the webworker.
The basic function of the webworker was this:
window.addEventListener('load', () => {
  // Send a single heartbeat request to the server.
  function sendHeartbeat() {
    sendRequest(URL);
  }

  function startHeartbeat() {
    if (timeToSendHeartbeat) {
      sendHeartbeat();
    } else {
      // setInterval takes the callback first, then the delay in milliseconds;
      // passing sendHeartbeat() with parentheses would invoke it immediately instead.
      setInterval(sendHeartbeat, timeRemaining);
    }
  }

  startHeartbeat();
});
This got me to thinking about whether or not using a webworker was even the best choice. Is there some inherent advantage to using a webworker that I am missing? Is using a webworker no more efficient than attaching a script to each page and running it as is? Or is this application just not suited for a webworker to begin with?

WebWorkers just run scripts, so they won't be faster than other methods. They shine by running in a different thread and not blocking the UI or any other code that wants to run in the main thread.
The real deciding factor is whether the code to be worker-ized runs for long enough to cause problems with the rest of the application. If you have intervals that need to fire on time or a very long-running math operation, you may want to start up a worker and let it go for a bit, then grab the results at the end.
So far as the main thread is concerned, workers and API calls are not entirely different in principle: you're sending someone else off to do the work and collecting the results when they finish. Whether that happens on a server or on another thread is less important; the part to focus on is that the main thread is not doing the work.
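To make that hand-off concrete, here is a minimal sketch of offloading a long-running computation to a dedicated worker so the main thread stays free; the file names and the Fibonacci loop are just placeholders for whatever heavy work you actually have:

// worker.js -- runs on its own thread, so this loop never blocks the UI
self.onmessage = (event) => {
  const n = event.data;
  let a = 0, b = 1;
  for (let i = 0; i < n; i++) {
    [a, b] = [b, a + b]; // deliberately CPU-bound stand-in for "a very long-running math operation"
  }
  self.postMessage(a);
};

// main.js -- the main thread just dispatches the job and collects the result
const worker = new Worker('worker.js');
worker.onmessage = (event) => console.log('worker finished:', event.data);
worker.postMessage(100000); // the page keeps handling clicks, intervals, and rendering meanwhile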

Related

Working of Web Worker

I was reading about web workers, and I understood that they run on a separate thread. One doubt I have is whether the web worker spawns a new thread for every request sent to it. For example, say I have two JS files that share a webworker between them. When I postMessage from both files to the web worker, will two threads be created, or a single one?
No, each Worker is a single thread, and they still use the same event loop mechanism as the main execution context; meaning, for example, if your Worker runs into an infinite loop, it will lock up completely and not react to any further messages.
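As a rough sketch of that single-threaded behaviour, assuming the shared-worker setup from the question (the file name is a placeholder): messages from both pages land in the same event loop and are handled one at a time.

// shared-worker.js -- one thread and one event loop, no matter how many pages connect
self.onconnect = (event) => {
  const port = event.ports[0];
  port.onmessage = (msg) => {
    // Messages from every connected page are queued here and processed sequentially.
    // A `while (true) {}` in this handler would freeze the worker for all pages at once.
    port.postMessage('handled: ' + msg.data);
  };
};

// In each of the two JS files (main thread):
const worker = new SharedWorker('shared-worker.js');
worker.port.onmessage = (event) => console.log(event.data);
worker.port.postMessage('hello from this page');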

is it conflicting for multiple users on one backend server websockets

I'm planning on building some backend logic on a server for personal use. It's connected to a websocket from another server, and I've set up code to handle data from that socket. I'm still fairly new to using websockets, so the whole concept is still a little foreign to me.
If I allowed more users to use that backend and the websocket has specific logic running, wouldn't it be conflicted by multiple users? Or would each user have their own instance of the script running?
Does what I'm trying to ask make any sense?
If I allowed more users to use that backend and the websocket has specific logic running, wouldn't it be conflicted by multiple users? Or would each user have their own instance of the script running?
In node.js, there is only one copy of the script running (unless you use something like clustering to run a copy of the script for each core, which it does not sound like you are asking about). So, if you have multiple webSocket connections to the same server, they will all be running in the same server code with the same variables, etc... This is how node.js works. One running Javascript engine and one code base serves many connections.
node.js is an event-driven system so it will serve an incoming event from one webSocket, then return control back to the Javascript system and serve the next event in the event queue and so on. Whenever a request handler calls some asynchronous operation and waits for a response, that is an opportunity for another event to be pulled from the incoming event queue and another request handler can run. In this way, multiple requests handlers can be interleaved with all making progress toward completion, even though there is only one single thread of Javascript running.
What this architecture generally means is that you never want to put request-specific state in the global or module scope because those scopes are shared by all request handlers. Instead, the state should be in the request-specific scope or in a session that is bound to that particular user.
Is it conflicting for multiple users on one backend server websockets
No, it will not conflict if you write your server code properly. Yes, it will conflict if you write it incorrectly.
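A minimal sketch of what "properly" means in practice, assuming a node.js server using the ws package (the port and field names are made up for illustration): shared state goes in module scope on purpose, while per-user state hangs off each connection.

const { WebSocketServer } = require('ws'); // ws v8+ export; older versions expose WebSocket.Server

const wss = new WebSocketServer({ port: 8080 });

let totalConnections = 0; // module scope: intentionally shared by every client

wss.on('connection', (socket) => {
  totalConnections++;
  const userState = { messagesSeen: 0 }; // per-connection: one of these per user, so no conflicts

  socket.on('message', (data) => {
    userState.messagesSeen++;
    socket.send(`you have sent ${userState.messagesSeen} messages`);
  });
});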

Service workers and page performance

I'm stuck at a wedding reception that I really don't want to be at, and I'm driving, so obviously I'm reading about service workers. I'm on my phone, so I can't play about with anything, but I was wondering whether they're a viable option for improving page performance.
Images are the biggest killer on my site, and I'm half thinking we could use a service worker to cache them to help get page load times down. From what I can tell, the browser still makes the HTTP request; it's just that the response comes from the SW cache rather than the file location. Am I missing something here? Is there therefore any actual benefit to doing this?
While the regular http cache has a lot of overlap with ServiceWorker cache, one thing that the former can't handle very well is the dynamically generated html used in many client-side javascript applications.
Even when all the resources of the app are cache hits, there is still the delay as the javascript is compiled and executed before the app is usable.
Addy Osmani has demonstrated how ServiceWorker can be used to cache the Shell of an app. When the DOM is modified on the client, it is updated in the cache. The next time that URL is requested, the ServiceWorker replies with html that is ready for use before the app has booted.
The other advantage regards lie-fi: when it seems the network is available, but not enough packets are getting through. ServiceWorkers can afford to have a near-imperceptible timeout, because they can serve immediately from cache and wait for the response to load (if ever).
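For reference, a bare-bones cache-first fetch handler looks roughly like this; the cache name and the precached file list are placeholders, not a recommendation for any particular app:

// sw.js
const CACHE = 'app-shell-v1';

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.addAll(['/', '/app.css', '/app.js']))
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request)) // cache first, network on a miss
  );
});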
Your consideration is invalid.
A service worker is designed to work like a proxy server and is especially suited to off-page concerns like offline capability, push notifications, background synchronization, etc. So in your case, you will gain no performance benefit from caching images with a service worker over the traditional browser cache.

javascript setInterval() load

I am trying to figure out what kind of load the window function setInterval() places on a user's computer. I would like to place a setInterval() on a page, viewable only by my company's employees, that checks a basic text file every 5 seconds or so; if there is something to display, it will throw some HTML onto the screen dynamically.
Any thoughts? Any better, less intrusive way to do this?
It appears it should not cause a problem, provided that the function setInterval() fires is not heavy. Since I will only be reading a text file, which should never be too large (the text file will be overwritten about every minute by a completely separate job or bash script), the load should be minimal: the file will be read in as a string, analyzed, and, if necessary, a small amount of HTML will be written out to the page.
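For concreteness, a rough sketch of that polling loop might look like this; the file path, element id, and 5-second interval are placeholders:

setInterval(async () => {
  const res = await fetch('/status.txt', { cache: 'no-store' }); // skip the HTTP cache so we see fresh contents
  const text = await res.text();
  if (text.trim()) {
    document.getElementById('status').innerHTML = text; // inject the small chunk of HTML
  }
}, 5000);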
I agree with all the comments that a single, polling setInterval() is trivial.
However, if you want alternatives:
Long Polling
The browser makes an Ajax-style request to the server, which is kept open until the server has new data to send to the browser, which is sent to the browser in a complete response.
Also see:
PHP example and explanation
SignalR
Web Sockets
WebSockets is an advanced technology that makes it possible to open an interactive communication session between the user's browser and a server. With this API, you can send messages to a server and receive event-driven responses without having to poll the server for a reply.
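On the client side, the WebSocket alternative is roughly this (the URL is a placeholder); the server pushes data only when the text file actually changes, so there is no polling interval at all:

const socket = new WebSocket('wss://example.com/updates');
socket.onmessage = (event) => {
  document.getElementById('status').innerHTML = event.data; // update the page as soon as the server pushes
};
socket.onclose = () => {
  // in practice you would reconnect here, perhaps with a backoff
};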

How to measure HTTP cache hit rates?

Is it possible to detect HTTP cache hits in order to calculate a cache hit rate?
I'd like to add a snippet of JavaScript to an HTML page that reports (via AJAX) whether a resource was available from the client's local cache or fetched from the server. I'd then compile some stats to get insight into the effects of my cache tuning. I'm particularly interested in hit rates for the very first page of a user's visit.
I thought about using access logs, but that seems imprecise (bots) and cumbersome. Additionally, it wouldn't work with resources from different servers (especially Google's AJAX Libraries API, e.g. jquery.min.js).
Any non-JavaScript solution would be appreciated too, though.
There might be an easier way, but you could build a test where JavaScript loads the element and records the time; then, when the onload event fires, compare the times. You would have to test to see what the exact difference between loading from cache and loading from the server is. Alternatively, for a whole lot of items, have the JavaScript record the time first, then record the onload events of everything else as it loads onto the page. This may not be as accurate, though.
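A sketch of that timing idea; the 50 ms cutoff, the image URL, and the /stats endpoint are assumptions you would calibrate and replace for your own site:

function probeCacheHit(url) {
  const img = new Image();
  const start = performance.now();
  img.onload = () => {
    const elapsed = performance.now() - start;
    const fromCache = elapsed < 50; // threshold found by testing, as suggested above
    navigator.sendBeacon('/stats', JSON.stringify({ url, elapsed, fromCache })); // report back to the server
  };
  img.src = url;
}

probeCacheHit('/img/hero.jpg');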
