Load balancing using setTimeout - javascript

I know it's a clickbait title, but this is a serious question. Recently, in Sri Lanka, the government deployed the https://fuelpass.gov.lk/ application, where a Sri Lankan can register and check the fuel available for their registered vehicle. (The system was designed to control the amount of fuel available to an individual, as fuel imports have slowed down due to the economic crisis.)
When I checked the system yesterday, I realized that they had deployed the dev build to production. As I was going through the code, I noticed that in the Login component, the form's submit handler shows a loading animation and wraps the login request in a setTimeout.
The login component looked something like this:
export default () => {
  const [waiting, setWaiting] = useState(false)
  return (
    <Loading showLoading={waiting}>
      <form onSubmit={() => {
        setWaiting(true)
        setTimeout(async () => {
          await loginRequestCall()
          setWaiting(false)
        }, Math.floor(Math.random() * 5000))
      }}>
        <input />
        <button>Send OTP</button>
      </form>
    </Loading>
  )
}
The important part is that when the user clicks Send OTP, it shows the loading screen and only sends the request after waiting a random amount of time.
I created a video about this and it kinda went viral in Sri Lanka. According to the comments, some of the justifications (for using setTimeout with a random amount of time to wait) were the following:
It is a mechanism to prevent DDos attacks
Way to prevent users spamming Send OTP button causing multiple requests
It is a cheap way of load balancing (way to reduce the number of OTP requests)
I don't think anyone who wanted to DDoS would be dumb enough to use UI automation, and setTimeout isn't needed to prevent button spam either.
Which leaves me with the last point.
So my questions are,
Is it possible to load balance using setTimeout?
If so, how exactly would it distribute the load?
Is there a valid reason to add a setTimeout?

Is it possible to load balance using setTimeout?
Load balancing is the process of distributing network traffic across multiple servers.
Since load balancing means distributing calls across servers rather than reducing or merely delaying them on the client, a client-side setTimeout does not load balance anything.
If so, how exactly would it distribute the load?
As mentioned above, setTimeout cannot do this; at best it smears requests over time on the client, which is both hacky and inefficient.
Is there a valid reason to add a setTimeout?
In my opinion, this setTimeout might have been intended as a crude debounce, to stop the user from clicking the OTP button repeatedly when feedback isn't immediate (but it doesn't even achieve that, since every queued call still fires eventually).
So the answer is no: in my opinion, there is no valid reason for this setTimeout.
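For comparison, here is a minimal sketch (reusing loginRequestCall and the Loading component from the snippet above; everything else is illustrative) of how double submissions could be prevented without any artificial delay: fire the request immediately and disable the button while it is in flight.
export default () => {
  const [waiting, setWaiting] = useState(false)
  const handleSubmit = async (e) => {
    e.preventDefault()
    if (waiting) return // ignore repeated clicks while a request is in flight
    setWaiting(true)
    try {
      await loginRequestCall() // send the OTP request right away
    } finally {
      setWaiting(false)
    }
  }
  return (
    <Loading showLoading={waiting}>
      <form onSubmit={handleSubmit}>
        <input />
        <button disabled={waiting}>Send OTP</button>
      </form>
    </Loading>
  )
}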

Related

How can you guarantee that a PWA is fully up-to-date using only built-in functionality, without reloading the page? [duplicate]

I'm working on adding a requested feature to my SPA. We have some users who leave their tabs open to our application for long periods of time. We also push out frequent updates (sometimes 5x a day, as we're pre-revenue). I'm wondering if it's possible to modify the serviceWorker that comes installed with Create-React-App to run a polling loop (maybe every 10 minutes) to poll for new updates to the application, instead of only on initial page load.
This way, a user who leaves their tab open could receive update notifications without having to refresh.
Has anyone achieved something like this before, and know how I might implement that into the CRA serviceWorker?
Figured it out! In the registerServiceWorker.js file, I added a simple setInterval inside the callback for the navigator.serviceWorker.register() function:
// poll for live updates to the serviceWorker
pollingLoopInterval = setInterval(async () => {
await registration.update();
}, POLLING_LOOP_MS);
Easy!
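For context, a rough sketch of where that call could sit (swUrl and POLLING_LOOP_MS are illustrative names here, not something CRA defines for you):
const POLLING_LOOP_MS = 10 * 60 * 1000; // poll every 10 minutes

navigator.serviceWorker.register(swUrl).then((registration) => {
  // re-fetch the service worker script periodically; if its bytes changed,
  // the browser runs the normal update/install flow for the new worker
  setInterval(() => {
    registration.update();
  }, POLLING_LOOP_MS);
});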

Firestore suddenly has a huge trigger delay

We are running an application on Firestore and have a simple trigger: when an order's details are created or updated, some of that information should be rewritten on the parent order document.
The function for this has the following code:
export const updateOrderDetails = functions
.region(FUNCTION_REGION)
.firestore.document("orders/{orderId}/details/pickupAndDropoff")
.onWrite(async (change, context) => {
return await admin
.firestore()
.collection("orders")
.doc(context.params.orderId)
.set({ pickupAndDropoff: change.after.data() }, { merge: true });
});
It was working fine before, but now roughly every third execution is delayed at random, sometimes by a few minutes. In the Cloud Functions logs we see normal execution times (<200 ms), so it seems the trigger runs after a long pause.
What's worse, from time to time change.after.data() is undefined, even though we never delete anything - it's only updates and creates.
It was working fine and we haven't changed anything since last week, but now it has these unexpected delays. We've also checked the Firebase status dashboard, and there are no reported malfunctions in the Cloud Functions service. What can be the cause of this?
The problem may be due to a monotonically increasing orderId being used as the document ID here:
...
.collection("orders")
.doc(context.params.orderId)
...
Can you check whether the orderId passed here increases monotonically with each request? That can lead to hotspots, which impacts latency.
To explain: the write rate presumably varies by day and time - as user traffic or load-testing requests change - which would create this seemingly random behaviour. At a low write rate the requests work as expected most of the time; at a high write rate they hit the hotspotting situation described in the Firestore documentation, resulting in latency delays.
Here is the relevant link to the Firestore best-practices documentation.
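If it does turn out to be monotonically increasing, one common mitigation (a sketch, assuming the orders are created by your own application code; orderData is just a placeholder for your order fields) is to let Firestore generate its scattered document IDs instead of using an incrementing counter:
// .add() uses an auto-generated ID, which is effectively random, so writes
// spread across the key range instead of piling up at its end
const orderRef = await admin.firestore().collection("orders").add(orderData);
console.log("created order", orderRef.id);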
Thanks to Frank van Puffelen's suggestion, we sent this question directly to Firebase support, and after their internal investigation we got a reply from the engineering team that it was in fact an infrastructure malfunction.
The reply I got from them was:
I escalated the issue to recover more information. So far it appears that there was an issue with pub/sub delivering and creating the event. The Firestore team is also communicating with the pub/sub team to investigate the issue and prevent future incidents.
It seems the best way to deal with such problems is to write directly to the Firebase support team, because as they mention in the automatic reply you get after sending a support ticket:
For Firebase outages not listed on the status dashboard, we'll respond within 4 hours.

How to run asynchronous tasks for multiple users

So I have this school project that is about "triggers" for social networks.
Let me explain:
- A user can register for our application and log in
- He can sign in to multiple services like Facebook, Twitter, etc.
- Then there are what we call triggers: once he has signed in to our application and registered his services, every time he posts something on Twitter, for example, my server will see it and post it on Facebook as well.
I knew nothing about Node.js a month ago, so I'm kinda new to all this async stuff, but I took some courses to help myself. So far so good: I can now manage users etc. (I still have some research to do on OAuth).
My biggest problem is this "real-time" update on our server.
I mean, I searched on the internet and saw what's called polling(?): the idea of making requests to a server frequently, every X seconds.
So with a bit of pseudo code, this is what I thought it would look like:
For each User
    asynchronously watch for every update on Facebook and Twitter for this User
So I did some research about performing a request every X seconds and found out about setInterval and setTimeout:
const watchSocialMedia = setInterval(function () {
  Users.forEach(user => {
    user.watchAndPostAnyNewPost()
  })
}, 60000);
So I put some dummy data to illustrate.
The problem is, I don't think it'll be done asynchronously?
I mean, ideally I could set up a 'watcher' once for each user, like saying:
For each User
User.watchAndPostAnyNewPost()
where watchAndPostAnyNewPost() looks like this:
class User {
  // ...
  watchAndPostAnyNewPost() {
    this.watcher = setInterval(function () {
      fetchFacebook();
      fetchTwitter();
    }, 60000);
  }
}
So each user has his own setInterval function running to check whether he posted anything.
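So concretely (a rough sketch; users, fetchFacebook and fetchTwitter are placeholders like in the snippets above), what I imagine is starting one watcher per user, with the two fetches running concurrently inside each tick:
users.forEach((user) => {
  // one independent timer per user; an error for one user
  // should not stop the other users' timers
  user.watcher = setInterval(async () => {
    try {
      await Promise.all([fetchFacebook(user), fetchTwitter(user)]);
    } catch (err) {
      console.error("polling failed for user", user.id, err);
    }
  }, 60000);
});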
Can anyone tell me if it's even possible? :-)
Thanks a lot for reading me !!!

How to block the link from malicious bot visitors?

I'm building an event registration website. When someone clicks on a link:
Reserve id=10 event
the system puts a "lock" on this event for ten minutes for this visitor, so no one else can reserve the event during that time. If the payment is completed within those ten minutes, everything is OK; otherwise the event is unlocked again. I hope the idea is clear.
PROBLEM: When a bot (Googlebot, a malicious bot, or an angry customer's script :P) visits this page, it sees this link. Then it follows it. Then the lock is placed...
Also, if someone visits /reserve/1, /reserve/2, /reserve/3, ... recursively, they can lock all the events.
I thought about creating a random MD5 string for each event. In that case, every event has (alongside its id) a unique code, for example: 1987fjskdfh938hfsdvpowefjosidjf8243
Next, I can adapt the link generation to work like this:
<a href="/reserve/1987fjskdfh938hfsdvpowefjosidjf8243" rel="nofollow">
Reserve
</a>
That way I can prevent the "brute force" locking. But the link is still visible to bots.
Then I thought about adding a captcha. That would work, but captchas are... not so great in terms of usability and user experience.
I've seen a few websites with reservation engines working like this. Are they protected? Maybe there is a simple AJAX/JavaScript solution to prevent bots from reading this as plain text? I thought about:
<a href="#" id="reserve">Reserve</a>
<script type="text/javascript">
$('#reserve').click(function(e) {
    e.preventDefault();
    var address = ...;
    // something not so obvious to follow?
    // for example: md5(ajaxget(some_php_file.php?salt=1029301))
    window.location.href = '/reserve/' + address;
});
</script>
But I'm not sure what I should do there to prevent bots from calculating it. I mean, dumb bots won't even be able to follow the JavaScript/jQuery stuff, but sometimes someone wants to break something, and if the source is obvious it can be defeated in a few lines of code. And then the whole database of events would be locked down, with no reservation option for anyone.
CSRF + AJAX POST + EVENT TOKEN generated on each load.
Summary: don't rely on GET requests, especially through <a> elements.
Even better if you add some rate limits on event locking (by IP, for instance).
EDIT: (this is a basic sketch)
1. replace all the href="..." with data-reservation-id=ID
2. delegate click on the parent element for a[data-reservation-id]
3. in the callback, simply make a POST ajax call to the API
4. in the API's endpoint, check rate limits (using the IP, for instance)
5. if OK, lock the event and return OK; if not, return an error.
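A minimal client-side sketch of points 1-3 (the /api/reserve endpoint, the /payment redirect, and the CSRF meta tag are illustrative, not part of the question's code):
// markup: <a data-reservation-id="10">Reserve</a>
$(document).on('click', 'a[data-reservation-id]', function (e) {
  e.preventDefault();
  $.post('/api/reserve', {
    id: $(this).data('reservation-id'),
    token: $('meta[name="csrf-token"]').attr('content') // CSRF token rendered into the page
  }).done(function () {
    window.location.href = '/payment'; // continue to payment once the lock succeeds
  }).fail(function () {
    alert('This event cannot be reserved right now.');
  });
});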
IP-Specific maximum simultaneous reservations
Summary: Rely on the fact that many simple bots operate from a single host, and limit the number of simultaneous reservations per host.
Basic sketch:
Store the requesting IP alongside the reservation
On each reservation request, count the requesting IP's non-completed reservations:
SELECT COUNT(ip) FROM reservations WHERE ip = :request_ip AND status = 'open';
If the number is above a certain threshold, block the reservation.
(this is mostly an expansion of point 4 given in avetist's excellent answer)
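A rough server-side sketch of that check (Node/Express style purely for illustration; countOpenReservations stands in for the SQL query above and the threshold is arbitrary):
const MAX_OPEN_RESERVATIONS_PER_IP = 3; // illustrative threshold

app.post('/api/reserve', async (req, res) => {
  // count this host's reservations that are still open (not paid, not expired)
  const openCount = await countOpenReservations(req.ip);
  if (openCount >= MAX_OPEN_RESERVATIONS_PER_IP) {
    return res.status(429).send('Too many open reservations from this address');
  }
  // below the threshold: lock the event for ten minutes as usual
  // ...
});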

Meteor.js, realtime latency with complex publish/subscribe rules?

I'm playing with realtime whiteboards in Meteor. My first attempt worked very well: if you open 2 browsers and draw in one of them, the other one updates within a few milliseconds ( http://pen.meteor.com/stackoverflow ).
Now, my second project is to make an infinite realtime whiteboard. The main change is that all lines are grouped by zones, and the viewer only subscribes to the lines in the visible zones. And now there is a delay of 5 seconds (!) between doing something in one browser and seeing it happen in the other ( http://carve.meteor.com/love ).
I've tried to add indexes in the mongo database for the fields determining the zones.
I've tried updating the Collection only once per full line (and not each time I push a new point, like in my first project).
I've tried adding a timeout so I don't re-subscribe too often when scrolling or zooming the board.
Nothing changes; there is always a 5-second delay.
I don't have this delay when working locally.
Here is the piece of code responsible for subscribing to the lines in the visible area:
subscribeTimeout = false;

// re-run whenever the visible tiles or the board key change,
// debouncing the actual subscription by 500 ms
Deps.autorun(function () {
  var vT = Session.get("visible_tiles");
  var board_key = Session.get("board_key");
  if (subscribeTimeout) Meteor.clearTimeout(subscribeTimeout);
  subscribeTimeout = Meteor.setTimeout(subscribeLines, 500);
});

function subscribeLines() {
  subscribeTimeout = false;
  var vT = Session.get("visible_tiles");
  console.log("SUBSCRIBE");
  Meteor.subscribe("board_lines", Session.get("board_key"), vT.left, vT.right, vT.top, vT.bottom, function () {
    console.log("subscribe board_lines " + Session.get("board_key"));
  });
}
I've been a sysadmin for 15 years. Without running the code, it sounds like an imposed limitation of the meteor.com servers: they probably throttle resources so everyone gets a fair share. I'd deploy to another host like Heroku for an easy deploy, or manually to a server such as Linode or my favorite, Joyent. Alternatively, you could contact meteor.com directly and ask if/how they limit resource usage.
Since the code runs fast/instantly locally, you should see sub-second response times from a good server over a good network.
