Sending a large number of requests to a server in node.js - javascript

I have the following:
var tempServer=require("./myHttp");
tempServer.startServer(8008, {'/fun1': fun1, '/fun2': fun2, '/fun3': fun3, '/fun4': fun4}, "www");
which creates a server on localhost:8008.
If I then type the following into my browser's URL bar:
http://localhost:8008/fun2
it will call the fun2 function and perform what is needed.
Having said that, how can I write a script function (fun2) that simulates a large number of requests (say 10,000) to tempServer?
Any help will be much appreciated!

You could try this on your laptop, but you would find that your computer will not really be able to create 10,000 requests to a temp server (or anyplace) at the same time. Most modern machines top out at around 1,000 or so simultaneous connections.
You can certainly write code that makes these requests one after another, but this is not the same as a true load test, where you ramp the number of requests per second up and down. You will quickly find that this is I/O- and connection-bound and not a true test for your application.
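If you do want to drive a batch of requests from Node itself, a minimal sketch might look like the following; the URL, total, and concurrency figures are placeholders, and as noted above this mostly exercises connection limits rather than your application:

var http = require('http');

// Hypothetical helper: fire `total` GET requests, keeping at most
// `limit` of them in flight at any one time.
function hammer(url, total, limit, done) {
  var started = 0;
  var finished = 0;

  function next() {
    if (started >= total) return;
    started++;
    http.get(url, function (res) {
      res.resume(); // drain the body so the socket is released
      onDone();
    }).on('error', onDone);
  }

  function onDone() {
    finished++;
    if (finished === total) return done();
    next();
  }

  // Prime the pump with `limit` concurrent requests.
  for (var i = 0; i < limit && i < total; i++) next();
}

hammer('http://localhost:8008/fun2', 10000, 100, function () {
  console.log('all requests finished');
});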
Your best bet, if you are trying to benchmark/stress your server, is to put your server on a box or service that is internet accessible like nodejitsu or nodester (both free services) and then use another service (free or otherwise) to hit your server's URL. You don't say if this is what you're doing, but this is the usual way to do load and stress testing.
Two companies that I have used in the past to get a number of simultaneous user requests to a server are Load Impact and Blitz.io. For 10,000 users you will need to pay a bit, but it's not too large a fee.
You may also want to look into the libraries from New Relic to help you monitor the server/service itself and how it behaves under stress. They have some great capabilities in helping find bottlenecks in applications.
This may be a bit more than what you were looking for, but I hope that it's helpful. If I am off the mark, let me know and I'll be happy to amend this to be closer to what it is you are looking for.

Are you interested in scripting 10,000 calls to fun2 over HTTP, or in issuing 10,000 requests from within fun2 in Node?
If it is within Node: as you probably know, all the code is written sequentially, but the events are executed asynchronously.
Check the EventEmitter for dispatching function calls on events:
http://nodejs.org/docs/v0.4.7/api/all.html#events.EventEmitter
and see this example as inspiration for issuing calls in parallel:
http://ricochen.wordpress.com/2011/10/15/node-js-example-2-parallel-processing/
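As a rough sketch of the EventEmitter pattern those links describe (the 'job' event name and the payload are made up for this example):

var EventEmitter = require('events').EventEmitter;
var emitter = new EventEmitter();

emitter.on('job', function (i) {
  // Defer the actual work so the dispatch loop below isn't blocked per item.
  process.nextTick(function () {
    console.log('processing job', i);
  });
});

// emit() itself calls listeners synchronously; the deferred work above
// is what interleaves with other events on the loop.
for (var i = 0; i < 10; i++) {
  emitter.emit('job', i);
}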
Hope this helps.
Edmon

Related

What is the proper algo/method to track changes between two async calls/requests

I have a frontend that sends requests to a backend to keep the data as "live" as possible. The problem is that if the frontend sends two different (in terms of data) requests in a short period of time, the server may receive the second request before the first one (due to delay, bad network, async, etc.).
My question is: is there any type of algorithm/method to deal with such events, to track changes in an ordered way for async requests?
I thought of timestamps, but in that case I would need to "trust" the client, which doesn't look like a good solution.
I can't share my code (not sure it would help anyway) or tell you exactly what I'm making, but it's a kind of live text editor, where two (or more) parties can see the editor and make changes to it.
Also, even though I would prefer not to use WebSockets, perhaps they could solve this problem (I've never used them before)? I don't know how WebSockets work, especially with asynchronous calls.
I use the MERN stack, in case it's relevant.
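One common approach, building on the timestamp idea but without trusting the clock: have the client attach a per-document sequence number that it increments with every request, so the server can detect and drop out-of-order updates. A minimal Express-style sketch (the route, field names, and in-memory store are all made up for illustration):

var express = require('express');
var app = express();
app.use(express.json());

var lastSeqByDoc = {}; // in production this would live in your data store

app.post('/update', function (req, res) {
  var docId = req.body.docId;
  var seq = req.body.seq; // the client increments this with each request it sends

  if (lastSeqByDoc[docId] !== undefined && seq <= lastSeqByDoc[docId]) {
    // A newer update already arrived; report this one as stale.
    return res.status(409).json({ stale: true, latest: lastSeqByDoc[docId] });
  }

  lastSeqByDoc[docId] = seq;
  // ...apply the update to the document here...
  res.json({ ok: true });
});

app.listen(3000);

Note that a sequence number only orders requests from the same client; for a multi-party editor like the one described, you would still need a merge strategy on top, which is what WebSocket-based editors typically implement.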

Is node.js event loop like an ajax call?

I am confused about node.js' advantages over other tech. I've been reading this article: http://www.toptal.com/nodejs/why-the-hell-would-i-use-node-js and this: How to decide when to use Node.js? to familiarize myself with it, and they have left me confused.
I am familiar with CPU-intensive tasks like computing the Fibonacci series, but that's where my understanding ends.
For example, if I have a REST API that does all the computation or recommendation and is housed on a different server from the machine running node, then node.js won't have any trouble dealing with CPU-intensive tasks: it just calls the API, then tells the client that the request is acknowledged.
I can't shake the thought of comparing node.js with a simple ajax call that sends a request from a form to the server, displays a ticker, then shows the result. I am guessing that node.js is a web server doing lots of "ajax"-type calls and handling concurrent connections.
Are my assumptions correct?
Is it also correct to assume that retrieving data from a database is an IO operation, but creating a complex report from that data is a CPU-intensive one?
You are right about handling many ajax requests; however, that's true in a worker-based model as well (PHP/Python worker threads).
The main difference is that in an event-based system there is only one worker doing all of the computational part of the code (filtering data, processing, etc.). When it calls IO operations like reading from a file or a database, node doesn't have control over those; instead of waiting for them to finish, it puts a callback in the queue and moves on with the next item in the queue (if any).
For an analogy, think of a pizza outlet where only one person takes the order, hands it over to the kitchen, and, once the pizza is ready, cuts it, packs it, and gives it to the customer. Wherever there is a wait, he just moves on to the next task. This is what node does: that person won't stand next to the kitchen until the pizza gets cooked.
For the worker-based approach, think of bank tellers: there are a few of them (maybe 5 or so), they take every kind of request, but they don't switch between customers/requests.
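To make the pizza analogy concrete, here is a minimal node sketch of handing IO off and moving on (the file name is a placeholder):

var fs = require('fs');

// Hand the read off to the operating system; node does not wait here.
fs.readFile('orders.txt', 'utf8', function (err, data) {
  if (err) throw err;
  console.log('order ready:', data.length, 'bytes'); // runs later, when the IO completes
});

// Meanwhile the single thread moves straight on to the next task in the queue.
console.log('taking the next order...');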
Refer to these resources for a deeper understanding of how the JavaScript event loop works:
https://www.youtube.com/watch?v=8aGhZQkoFbQ
http://latentflip.com/loupe/
I can't answer all your doubts, but I would like to give you some clarity about AJAX.
AJAX (Asynchronous JavaScript + XML) is a technique for making requests to a server. A Nodejs server knows how to handle such requests, but saying that is the only thing it can do is absolutely wrong. Nodejs is single threaded, hence asynchronous. As for whether it is good for CPU-intensive tasks, I would say why not, unless you would rather solve such problems in a multithreaded fashion.

Best Server API and Client Side Javascript Interaction Methods?

Currently, I'm using setTimeout() to pause a for loop over a huge list so that I can add some styling to the page. For instance:
Eg: http://imdbnator.com/process?id=wtf&redirect=false
What I use setTimeout for:
I use setTimeout() to add images, text, and a CSS progress bar (Why doesn't the progress bar change dynamically, unlike the text?).
Clearly, as you can see, it is quite painful for a user to just browse through the page and hover over a few images. It gets extremely laggy. Is there any workaround for this?
My FOR Loop:
Each iteration of the for loop makes an ajax request in the background to a PHP API. It definitely costs me some efficiency, but how do all the other websites pull it off with such elegance? I mean, I've seen websites show a nice loading image, with no user interference, while making an API request, whereas I have to set a timeout every time.
Is it that they use better server-client interaction technologies, like the node.js I've heard about?
Also, I've thought of a few alternatives, but I run into other complications. I would greatly appreciate it if you could help me with each of these possible alternatives.
Method 1:
Instead of making an AJAX call to my PHP API through jQuery, I could do a complete server-side script altogether. But then the problem I run into is that I cannot make a good client-side page (as in my current page) that updates the progress bar and adds dynamic images after each item of the list is processed. Or is this possible?
Method 2: (Edited)
Like one of the useful answers below, I think the biggest problem is the server API and client interaction. WebSockets, as suggested there, look promising to me. Will they necessarily be a better fix than setTimeout? Is there any significant time difference if, let's say, I replace my current 1000 AJAX requests with a websocket?
Also, I would appreciate hearing about anything other than websockets that is better than an AJAX call.
How do professional websites achieve such fluid server and client side interactions?
Edit 1: Please explain how professional websites (such as http://www.cleartrip.com when you request flight details) provide a smooth client side while processing the server side.
Edit 2: As @Syd suggested, that is something I'm looking for. I think there is a lot of delay in my current client and server interaction. WebSockets seem to be a fix for that. What are the other/best ways of improving server-client interaction apart from standard AJAX?
Your first link doesn't work for me, but I'll try to explain a couple of things that might help you, if I understand your overall problem.
First of all, it is bad to have synchronous calls with large amounts of data that require processing in your main UI thread, because the user experience might suffer a lot. For reference you might want to take a look at "Is it feasible to do an AJAX request from a Web Worker?"
If I understand correctly you want to load some data on demand based on an event.
Here you might want to sit back and think about which event best fits your need; firing an ajax request on every raw event is quite different from making one every once in a while, especially when you have a lot of traffic. Also you might want to check that your previous request has completed before you initiate the next one (this might not be needed in some cases though). Have a look at async.js if you want to create chained asynchronous code execution without facing the javascript "pyramid of doom" effect and messy code.
Moreover, you might want to "validate and hold" the event before making the actual request. For example, let's assume a user triggers a "mouseenter": you should not just fire an ajax call. Hold your breath, use setTimeout, and check that the user didn't fire any other "mouseenter" event for the next 250 ms; this will allow your server to breathe. The same applies to implementations that load content based on scroll: you should not fire a request if the user scrolls like a maniac. So validate the events.
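A minimal debounce sketch of that "hold your breath" idea (element and loadPreview are hypothetical names; 250 ms is the figure from the paragraph above):

// Wrap a handler so it only fires after `delay` ms without a newer event.
function debounce(fn, delay) {
  var timer = null;
  return function () {
    var self = this, args = arguments;
    clearTimeout(timer); // a newer event cancels the pending call
    timer = setTimeout(function () {
      fn.apply(self, args);
    }, delay);
  };
}

element.addEventListener('mouseenter', debounce(function () {
  loadPreview(); // safe to fire the ajax call -- the user has settled for 250 ms
}, 250));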
Also, loops and iterations: we all know that if a loop is too long and does heavy lifting, you might experience unwanted results. To overcome this you might want to look into timed loops (take a look at the snippet below): basically, loops that break after x amount of time and continue after a while. Here are some references that helped me with a three.js project: "optimizing-three-dot-js-performance-simulating-tens-of-thousands-of-independent-moving-objects" and "Timed array processing in JavaScript".
//Copyright 2009 Nicholas C. Zakas. All rights reserved.
//MIT Licensed
function timedChunk(items, process, context, callback) {
    var todo = items.concat(); // clone the original array so it isn't mutated

    setTimeout(function chunk() {
        var start = +new Date();

        // Process items until ~50 ms have elapsed, then yield to the UI.
        do {
            process.call(context, todo.shift());
        } while (todo.length > 0 && (+new Date() - start < 50));

        if (todo.length > 0) {
            setTimeout(chunk, 25); // schedule the next chunk (avoids deprecated arguments.callee)
        } else {
            callback(items);
        }
    }, 25);
}
cleartrip.com probably uses some of these techniques: from what I've seen, it fetches a chunk of data when you visit the page and then fetches further chunks as you scroll. The trick here is to fire the request a little before the user reaches the bottom of the page, in order to provide a smooth experience. The left-side filters only filter data that is already in the browser; no further requests are made. So you fetch and keep something like a cache (in other scenarios, though, caching might be unwanted, e.g. for live data feeds).
Finally If you are interested for further reading and smaller overhead in data transactions you might want to take a look into "WebSockets".
You must use async AJAX calls. Right now, user interaction is blocked while the HTTP ajax request is being made.
Q: "how professional websites (such as cleartrip.com) provide a smooth client side while processing the server side."
A: By using async AJAX calls
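For illustration, a minimal jQuery sketch of the difference (the URL and renderItems are placeholders; note that async: true is already jQuery's default):

// Synchronous: freezes the whole UI until the server answers (avoid this).
$.ajax({ url: '/api/items', async: false });

// Asynchronous: the page stays responsive and the callback runs when data arrives.
$.ajax({
  url: '/api/items',
  success: function (data) {
    renderItems(data);
  }
});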

How to cache results for autosuggest component?

I have a UI autosuggest component that performs an AJAX request as the user types. For example, if the user types mel, the response could be:
{
    suggestions: [{
        id: 18,
        suggestion: 'Melbourne'
    }, {
        id: 7,
        suggestion: 'East Melbourne'
    }, {
        id: 123,
        suggestion: 'North Melbourne'
    }]
}
The UI component implements client-side caching. So, if the user now types b (results for melb are retrieved) and then hits Backspace, the browser already has the results for mel in memory, so they are immediately available. In other words, every client makes at most one AJAX call for any given input.
Now, I'd like to add server-side caching on top of this. So, if one client performs an AJAX call for mel, and let's say there is some heavy computation going on to prepare the response, other clients would get the results without this heavy computation being executed again.
I could simply keep a hash of queries and results, but I'm not sure that is the most optimal way to achieve this (memory concerns). There are ~20,000 suggestions in the data set.
What would be the best way to implement the server side caching?
You could implement a simple cache with an LRU (least recently used) discard algorithm. Basically, set a few thresholds (for example: 100,000 items, 1 GB) and then discard the least recently used item, i.e. the item in the cache that was last accessed longer ago than any of the others. This actually works pretty well, and I'm sure you can use an existing Node.js package for it.
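A minimal sketch of the idea, using a Map's insertion order to track recency (a published package will handle edge cases and memory accounting better; the capacity figure is arbitrary):

function LRUCache(capacity) {
  this.capacity = capacity;
  this.map = new Map(); // iteration order doubles as recency order
}

LRUCache.prototype.get = function (key) {
  if (!this.map.has(key)) return undefined;
  var value = this.map.get(key);
  // Re-insert so this key becomes the most recently used.
  this.map.delete(key);
  this.map.set(key, value);
  return value;
};

LRUCache.prototype.set = function (key, value) {
  if (this.map.has(key)) this.map.delete(key);
  this.map.set(key, value);
  if (this.map.size > this.capacity) {
    // The first key in iteration order is the least recently used one.
    this.map.delete(this.map.keys().next().value);
  }
};

var cache = new LRUCache(10000);
cache.set('mel', ['Melbourne', 'East Melbourne', 'North Melbourne']);
cache.get('mel'); // hit: 'mel' is now the most recently used entry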
If you're going to build a service that has multiple frontend servers, it might be easier and simpler to set up memcached on a server (or even put it on a frontend server if you have a relatively low load). It has an extremely simple TCP/IP protocol, and there are memcached clients available for Node.js.
Memcached is easy to set up and will scale for a very long time. Keeping the cache on separate servers also has the potential benefit of speeding up requests for all frontend instances, even the ones that have not received a particular request before.
No matter what you choose to do, I would recommend keeping the caching out of the process that serves the requests. That makes it easy to just kill the cache if you have caching issues or need to free up memory for some reason.
(memory concerns). There are ~20000 suggestions in the data set.
20,000 results? Have you thought about how much memory that will actually take? My response assumes you're talking about 20,000 short strings as presented in the example. I feel like you're optimizing for a problem you don't have yet.
If you're talking about a reasonably static piece of data, just keep it in memory. Even if you want to store it in a database, just keep it in memory. Refresh it periodically if you must.
If it's not static, just try and read it from the database on every request first. Databases have query caches and will chew through a 100KB table for breakfast.
Once you're actually getting enough hits for this to become a real issue, don't cache it yourself. I have found that when you actually have a real need for a cache, other people have written it better than you would have. But if you really do need one, go for an external one like Memcached or even something like Redis. Keeping that stuff external can make testing and scalability a heap easier.
But you'll know when you actually need a cache.

Backbone - How to limit ajax requests per second or create queue?

In my case, I load a collection with two requests:
get me
get friends
and each one makes further request(s) to get the photos for the model:
1 request
10 requests (one per friend)
The bottom line is that I have 13 requests, but the server can service only 3 requests per second. What should I do?
UPD
The remote server is not mine. Maybe my approach is wrong, but so far it has been:
collection.add(collection.getMe()) -> model.init -> model.getphotos->
view.render() (1 time)
next
collection.add(collection.getFriends()) ->... next for each friend ... model.init -> model.getphotos->view.render() (10 times)
I'm a total noob at backbone. I'm trying to program in the backbone style, but I cannot understand how to limit the ajax requests.
I would look at this on two different levels: the first being design, and the second being performance optimization.
Design:
Personally, I would approach the problem differently, by creating an API endpoint that returns all of the required data in a single request. This could involve:
an endpoint that returns a single object and its direct dependencies (get me), and then
an endpoint that returns all friends of a specific object and their dependencies (get friends + photos, etc.)
Performance:
If the reason for the question is that it's already really slow, then you might revisit the design; but if you are only worried that it will be slow, then I would move forward and measure the real-world performance. Spending time optimizing what you think might be slow could prove to be the wrong strategy.
3 requests per second could be enough and the N requests will most likely queue up and be served in succession. You might be fine.
If you are not fine, then you can load the photos (or whatever is loaded afterwards) on demand (when the user interacts or paginates, etc.); this ensures that you use what you pull and pull only what you use. If you still need to cap the request rate yourself, a small queue like the sketch below can do it.
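Since the question asks specifically about capping requests per second, here is a minimal queue sketch that spaces out jQuery ajax calls (3 per second is the figure from the question; how you wire this into Backbone.sync is up to you):

// Queue ajax calls and release at most `perSecond` of them each second.
function RateLimitedQueue(perSecond) {
  this.interval = 1000 / perSecond;
  this.queue = [];
  this.timer = null;
}

RateLimitedQueue.prototype.push = function (options) {
  var self = this;
  var deferred = $.Deferred();
  this.queue.push(function () {
    $.ajax(options).done(deferred.resolve).fail(deferred.reject);
  });
  if (!this.timer) {
    // Start the drain loop; each tick releases one queued request.
    this.timer = setInterval(function () {
      var job = self.queue.shift();
      if (job) {
        job();
      } else {
        clearInterval(self.timer); // queue drained; stop ticking
        self.timer = null;
      }
    }, this.interval);
  }
  return deferred.promise();
};

var queue = new RateLimitedQueue(3);
queue.push({ url: '/friends/1/photos' }).done(function (photos) {
  // render the photos for this friend
});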
Hope this helps! Good luck!
