I am creating a user management system. There's an option for moderators to block users for a specific number of days. After a block request is received, the user's disabled attribute is set to true in the database; after the specified duration passes, it is set back to false. The duration may range from 1 to 10 days. Is setTimeout() the right choice? Will it be CPU-intensive if it runs for many users simultaneously?
Here's the function from my program. The complete program uses Express as the server and Mongoose to interact with the MongoDB database.
function disableUser(req, res) {
  let username = req.body.username;
  let app = req.body.app;
  let duration = req.body.duration;
  let reason = req.body.reason;
  User.find({ username: username }).then((result) => {
    let user = result[0];
    user.disabled = true;
    user.save().then((user) => {
      disableMail(app, user.email, duration, reason); // send mail to user
      // schedule a task to set user.disabled to false after the duration
      res.sendStatus(200); // without a response the request would hang
    }, (err) => {
      res.status(500).send(err);
    });
  }, (err) => {
    res.status(500).send(err);
  });
}
EDIT:
Seeing all the comments and answers, I think I'll implement something like this: store the end date of the ban (calculated with something like Moment.js) along with the disabled attribute. A separate setInterval will run every 12 hours and check the date for all users with disabled set to true. If the end_date has been reached, the disabled attribute will be set back to false.
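Something like this rough sketch (using $lte rather than an exact-date match so a missed run doesn't leave anyone banned; ban_end_date stands for the stored end date):
setInterval(async () => {
  // re-enable every user whose ban has expired
  await User.updateMany(
    { disabled: true, ban_end_date: { $lte: new Date() } },
    { $set: { disabled: false } }
  );
}, 12 * 60 * 60 * 1000); // runs every 12 hours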
A setTimeout() set for several days from now will work, and it's no hardship on the server (even if you have a lot of them), but it's still probably not the right design choice, for the following reasons:
setTimeout() only lasts as long as your server process is running. As soon as your server restarts for any reason (crash, planned maintenance, etc.), the setTimeout() is gone. So even if you did use a setTimeout(), you'd need a more durable means of storage anyway so you could restore all the timers on a server restart.
You don't need ms timing. A multi-day ban can be lifted hourly or even daily.
My suggestion is that instead of a timer, you run some periodic process once an hour that queries for any banned users whose ban has expired. You will want to store the ban and its expiration in the database in a way that makes this query efficient (perhaps in a separate collection of banned users). Then, once an hour, you can run the query, remove each expired entry from the banned collection, and clear the "disabled" attribute for those users.
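A minimal sketch of that hourly sweep, assuming Mongoose models and a separate Ban collection with userId and expiresAt fields (the names are placeholders):
setInterval(async () => {
  const now = new Date();
  const expired = await Ban.find({ expiresAt: { $lte: now } });
  for (const ban of expired) {
    // re-enable the user, then drop the ban record
    await User.updateOne({ _id: ban.userId }, { $set: { disabled: false } });
  }
  await Ban.deleteMany({ expiresAt: { $lte: now } });
}, 60 * 60 * 1000); // once an hour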
This type of design should meet all your objectives, is a durable way of storing the ban and can be implemented efficiently.
Using setTimeout() is probably impractical for a few reasons. First, you are correct that it could get expensive, especially if you have a lot of blocked users. Second, it wouldn't survive a server restart, which may happen simply because you are pushing new code.
If you go with some automated background task, then consider:
You probably want to check whether the user is blocked on all API calls and form submissions on the server side; this could be part of your general security/permission checks on write calls (see the middleware sketch after this list).
You'll need some error handling in the UI for whenever a banned user's API call is rejected.
You may want a background task that runs, say, once a minute, checks for blocked users, and lifts the ban when appropriate. A cron-type job, or Quartz (if on Java), or something similar on Node can run the query and perform the updates.
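A sketch of the per-request check from the first bullet, assuming Express and that your auth layer has already attached the user record to req.user:
// reject write requests from banned users before they reach route handlers
function rejectBannedUsers(req, res, next) {
  if (req.user && req.user.disabled) {
    return res.status(403).send({ error: "Your account is temporarily banned." });
  }
  next();
}

app.use(rejectBannedUsers); // register before the write routes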
What do we have?
An API built in Node.js (using Moleculer.js for micro-services and PostgreSQL for storing data) which keeps track of users and user groups. We have on average 3k users per group, and a user can be part of multiple groups.
What do we want to achieve?
We want to create a special service which will send text messages. The admins will select multiple groups, the code will remove the duplicate users, and each remaining user will be sent an SMS.
After a selection we can have around 1 million users. How can we send them text messages in an efficient way?
What have we tried?
Paginate the users and, for each page, send a request to the SMS service:
const users = db.getPage(1); // [{ id: 1, phone: '+123456789' }, ...]
smsClient.sendBulk(users);
PROBLEM: The user list in the database can change during the process, which can affect the pagination, giving us duplicates or skipping some users.
Load all the users into memory and send them all to the SMS service:
const users = db.getAll(); // [..., { id: 988123, phone: '+987654321' }]
smsClient.sendBulk(users);
PROBLEM: We think it's a bad idea, resource-wise, to run this kind of query against the database and keep the whole result in memory. At the same time, we don't want to send 1 million entities in a single HTTP request to the SMS service.
How can we select 1 million users and send them an SMS message without worrying about duplicates, skipped users, or any other alteration to the admin's selection? We were thinking about queues as a necessary step, but only after we find a solution for the cases mentioned above. Or is the queue part of the solution?
How can we select 1 million users and send them an SMS message without worrying about duplicates, skipped data, or any other alteration to the admin's selection?
For managing duplicates, you could use an additional DB to store a hash set of the users that have already been handled. This is a bit more expensive because you will need to check the user before each SMS send.
Making sure nobody is skipped is trickier, because you will need to add recipients to an ongoing SMS transaction. You will need the ability to detect (hook) when a user is added to a group and add them as a recipient to any ongoing transaction accordingly.
You will need a fast DB that can store the users in a hash set with O(1) set and get.
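A minimal sketch of that dedupe check, assuming the "redis" npm package (v4 API); the key naming is made up:
const { createClient } = require("redis");
const redis = createClient();

async function shouldSend(campaignId, userId) {
  if (!redis.isOpen) await redis.connect();
  // sAdd returns 1 if the member is new, 0 if it was already in the set: O(1)
  const added = await redis.sAdd(`sms:${campaignId}:handled`, String(userId));
  return added === 1; // true => first time we see this user, safe to send
}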
We were thinking about queues as a necessary step but only after we find a solution for the cases mentioned above. Or is the queue part of the solution?
Definitely. A queue is the correct way to go for this scenario (queueing many small tasks). Some queues come with a re-queue feature that will re-queue any task that didn't get acknowledged.
You should check out RabbitMQ and message-driven microservices.
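A hedged sketch of a RabbitMQ producer and consumer using the amqplib package; the queue name, message shape, and the smsClient.send() call are assumptions:
const amqp = require("amqplib");

async function enqueueSmsJobs(users) {
  const conn = await amqp.connect("amqp://localhost");
  const channel = await conn.createChannel();
  await channel.assertQueue("sms-jobs", { durable: true });
  for (const user of users) {
    // one small task per user, persisted so a broker restart doesn't lose it
    channel.sendToQueue("sms-jobs", Buffer.from(JSON.stringify(user)), { persistent: true });
  }
  await channel.close();
  await conn.close();
}

async function consumeSmsJobs(smsClient) {
  const conn = await amqp.connect("amqp://localhost");
  const channel = await conn.createChannel();
  await channel.assertQueue("sms-jobs", { durable: true });
  channel.consume("sms-jobs", async (msg) => {
    const user = JSON.parse(msg.content.toString());
    try {
      await smsClient.send(user.phone, "Your message here"); // provider call (assumed API)
      channel.ack(msg); // acknowledged: removed from the queue
    } catch (err) {
      channel.nack(msg, false, true); // failed: re-queue for another attempt
    }
  });
}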
Have you considered creating an intermediate entity between the user and the sent SMS? Something like an SmsRequest / SmsTask / however you'd call it.
It'd consist of the necessary user data, the message content, the status of the request (to-send, sending, sent, failed, ...) and some additional metadata depending on your needs.
Then the first step would be to prepare these requests and store them in the DB, effectively making a queue out of a table. You can add constraints on user and message type to prevent duplicates, and then start a second asynchronous process that simply fetches requests in the to-send state, sets their state to sending, and then saves the outcome.
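A sketch of such a worker against PostgreSQL using the pg package; the sms_request table, its columns, and smsClient.send() are assumptions based on the description above:
const { Pool } = require("pg");
const pool = new Pool();

async function processBatch(smsClient, batchSize = 100) {
  // atomically claim a batch, so concurrent workers never pick the same rows
  const { rows } = await pool.query(
    `UPDATE sms_request SET status = 'sending'
     WHERE id IN (
       SELECT id FROM sms_request
       WHERE status = 'to-send'
       LIMIT $1
       FOR UPDATE SKIP LOCKED)
     RETURNING id, phone, content`,
    [batchSize]
  );
  for (const req of rows) {
    try {
      await smsClient.send(req.phone, req.content);
      await pool.query("UPDATE sms_request SET status = 'sent' WHERE id = $1", [req.id]);
    } catch (err) {
      await pool.query("UPDATE sms_request SET status = 'failed' WHERE id = $1", [req.id]);
    }
  }
  return rows.length; // 0 => queue drained
}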
This also gives you the benefit of an audit trail, and you can batch the outgoing messages.
Of course it'd increase your data volume significantly, but I guess storage is cheap nowadays anyway.
This is not a post about HOW to change a channel's name (I know how to do that).
I have an international server using several bots, and we all depend on UTC time (to coordinate across the world). So the idea was born to make a time-bot which shows the current UTC time in a dedicated channel nobody can visit. And yes, precision is necessary, even seconds.
I created a voice channel with permissions set so that #everyone cannot join. Everything worked fine; it updated every 1000 ms. Then (after several months of working well) something broke and it started updating incorrectly. I increased the update interval to 5000 ms and it started working fine... until yesterday.
Now it doesn't work anymore, even if I increase the interval much more. It works only sometimes; I don't really know what the effective interval is, but it's huge and unpredictable. The time-bot is broken for now and cannot be used for this purpose anymore.
Are there any restrictions on updating a channel's name? I can't find any information about this in the available documentation.
Client.setInterval(() => {
  const { h, m, s } = getTime();
  channel.edit({ name: `${getClockEmoji({ h, m })} UTC: ${h}-${m}-${s}` }).catch((err) => console.log(err));
}, updateInterval);
The data I provide is correct: I log it to the console and it updates as often as I need at whatever interval I set. But the channel name doesn't update that often.
Does Discord filter requests that come too often?
discord.js version is v.12.2.0
Discord has set the rate limit for things like channel renames to 2 requests per 10 minutes.
"Normal" requests like sending a message are limited to 10,000 per 10 minutes.
This seems to likely be an issue directly related to rate limiting:
https://discord.com/developers/docs/topics/rate-limits
IP addresses that make too many invalid HTTP requests are automatically and temporarily restricted from accessing the Discord API. Currently, this limit is 10,000 per 10 minutes. An invalid request is one that results in 401, 403, or 429 statuses.
For every API request made, we return optional HTTP response headers containing the rate limit encountered during your request.
You should probably increase the interval by a considerable amount, so the bot renames the channel far less often, to reduce the risk of the IP being restricted.
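For example, under a 2-renames-per-10-minutes limit, second-level precision is simply not achievable; a sketch reusing the question's helpers with roughly the fastest safe schedule:
// 5 minutes keeps you at 2 renames per 10 minutes; drop the seconds entirely
const updateInterval = 5 * 60 * 1000;
Client.setInterval(() => {
  const { h, m } = getTime();
  channel.setName(`${getClockEmoji({ h, m })} UTC: ${h}-${m}`).catch(console.error);
}, updateInterval);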
I want my Discord bot to send a message with an attached file and some text. Then the bot has to edit this text a couple of times. The problem is that the bot edits the message 5 times, then waits some time, then edits 5 more times, and so on. How can I make it edit messages without stopping?
if (msg.content.includes("letter")) {
  msg.channel.send("alphabet", { files: ["/Users/48602/Videos/discordbot/aaa.png"] });
}
if (msg.content === 'alphabet') {
  msg.edit("**a**")
  msg.edit("**b**")
  msg.edit("**c**")
  msg.edit("**d**") // here the bot stops for ~2 seconds and I don't know why
  msg.edit("**e**")
  msg.edit("**f**")
  msg.edit("**g**")
  msg.edit("**h**")
  msg.edit("**i**")
  msg.edit("**j**") // here the bot stops for ~2 seconds and I don't know why
  msg.edit("**k**")
  msg.edit("**l**")
  msg.edit("**m**")
  msg.edit("**n**")
  msg.edit("**o**") // here the bot stops for ~2 seconds and I don't know why
  msg.delete()
}
Discord has a rate limit of 5 requests of each kind per 5 seconds per server. Trying to bypass this would be considered API abuse (the solutions below are not API abuse).
Exceeding this limit pauses further requests until a certain number of seconds has passed. During my research, I came across this simple explanation:
5 of anything per 5 seconds per server (if you did not understand what I said above).
On Discord's developer guide on rate limits, it tells you this:
There is currently a single exception to the above rule [rate limits] regarding different HTTP methods sharing the same rate limit, and that is for the deletion of messages. Deleting messages falls under a separate, higher rate limit so that bots are able to more quickly delete content from channels (which is useful for moderation bots).
One workaround, without abusing the API, would be to send new messages and delete the previous ones, since there is a higher limit for deleting messages.
Another workaround would be to add intermediate timeouts to your animation.
A simple helper such as:
const wait = require("util").promisify(setTimeout);
// syntax: await wait(1000); to "pause" for 1 second
You will need to play around with the timings so it fits your intended animation speed, and without pausing due to the rate limit.
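For example, a hedged sketch that paces the edits rather than bursting them:
const wait = require("util").promisify(setTimeout);

async function animateAlphabet(msg) {
  for (const letter of ["a", "b", "c", "d", "e", "f", "g", "h"]) {
    await msg.edit(`**${letter}**`);
    await wait(1100); // ~1 edit per second stays near the 5-per-5-seconds budget
  }
  await msg.delete();
}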
The database stores some data about each user which almost never changes. Occasionally the information might change, for example if the user edits his name.
The data consists of each user's name, username, and his company data.
The first two are shown in his navigation bar at all times using EJS (like "User_1 is logged in"); his company profile data is needed when he creates an invoice.
My current approach is to fetch the user data through middleware using router.use, so the extracted information is always available in all routes/views, for example:
router.use(function(req, res, next) { // this block runs as middleware on every route
  req.getConnection(function(err, conn) {
    if (err) {
      console.log(err);
      return next("Mysql error, check your query");
    }
    var uid = req.user.id;
    conn.query('SELECT * FROM user_profile WHERE uid = ?', uid, function(err, rows) {
      if (err) {
        console.log(err);
        return next(err);
      }
      res.locals.userData = rows; // expose the profile to all views
      return next();
    });
  });
});
I understand that this is not an optimal way of passing user profile data to every route/view, since it makes a new DB query every time the user navigates through the application.
What would be a better way of having this data available without repeating the same query in each route, while still re-fetching it once the user changes a portion of this data, like his full name?
You've just stumbled into the world of "caching", welcome! Caching is a very popular choice for use cases like this, as well as many others. A cache is essentially somewhere to store data that you can get back much quicker than making a full DB query, or a file read, etc.
Before we go any further, it's worth considering your use case. If you're serving only a few users and have a low load on your service, caching might be over-engineering, and making a DB request might in fact be the simplest idea. Adding caching can add a lot of complexity to your code as things move forward; not enough to scare you, but enough to cause hard-to-trace bugs. So consider your service load for a moment: if it's not very high (say an internal application for somewhere you work, with only a few requests every few minutes), then just reading from the DB is probably not going to slow down a request too much, and it is the simplest and probably best solution. However, if you're noticing that this DB request is slowing down your application or making it harder to scale up, then caching might be for you.
A really popular approach for this would be to get something like Redis, which is a key-value database that holds everything in memory (RAM). Redis can sit as a service like MySQL and has a very basic query language. It is blindingly fast and can scale to enormous loads. If you're using Express, there are a number of npm modules that help you access a Redis instance. Simply pass in your credentials and you can then make GET and SET requests (to get data or to set data).
In your example, you may wish to store a user's profile in JSON format against their user ID or username in Redis. Then create a function called getUserProfile which takes the ID or username and looks it up in Redis. If it finds the record, it returns it to your main controller logic. If it does not, it looks it up in your MySQL database, saves it in Redis, and then returns it to the controller logic (so it'll be available from cache next time).
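A minimal read-through sketch along those lines, using the "redis" npm package (v4 API); fetchProfileFromDb is a hypothetical stand-in for your existing MySQL query:
const { createClient } = require("redis");
const cache = createClient();

async function getUserProfile(uid) {
  if (!cache.isOpen) await cache.connect();
  const key = `user_profile:${uid}`;
  const hit = await cache.get(key);
  if (hit) return JSON.parse(hit); // cache hit: no DB round trip
  const profile = await fetchProfileFromDb(uid); // SELECT * FROM user_profile WHERE uid = ?
  await cache.set(key, JSON.stringify(profile), { EX: 3600 }); // hour-long TTL as a safety net
  return profile;
}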
Your next problem is notorious in computer science: "cache invalidation". If the user profile updates, you want to invalidate the cached copy. One way of doing this is to update the cached version whenever the user updates their profile (or any other cached data). Alternatively, you can simply delete the cached version from Redis; the next time it's requested through getUserProfile, it will be fetched fresh from the DB and put back into Redis for next time.
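The delete-on-write variant could look like this (updateProfileInDb is again a hypothetical helper):
// after a successful update, drop the cached copy so the next
// getUserProfile() call re-fetches and re-caches fresh data
async function updateUserFullname(uid, fullname) {
  await updateProfileInDb(uid, { fullname });
  await cache.del(`user_profile:${uid}`);
}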
There are many other ways to approach this, but this will most likely solve your problem in the simplest way without too much overhead. It will also be easy to expand in the future!
I have multiple Heroku dynos and a chat app. When a user logs in, their status is set to "online" in MongoDB. However, if a server crashes, their status will still be set to online. How can I update the user's status to "offline" when a server crashes?
If I only had one dyno, this would be easy: I'd just mark every user "offline" when the server starts. Unfortunately, this is not possible with multiple servers.
As per our chat and comments.
The best option is to check against last activity: look at when the user's last message was sent, and if it happened within, let's say, the last 5 minutes, treat them as online; if there was no activity, mark them as offline.
As I mentioned in the comments, even if you are not storing a date_created on the message documents, you will not have to change anything, because the _id already stores the timestamp:
ObjectId("507f191e810c19729de860ea").getTimestamp()
that returns this Date object
ISODate("2012-10-17T20:46:22Z")
This answer offers another option (if you want to keep users marked online even when they are not sending messages):
If you would like to know they are still active even when they're not jumping from page to page, include a bit of javascript to ping your server every 60 seconds or so to let you know they are still alive. It'll work the same way as my original suggestion, but it will update your records without requiring them to be frantically browsing your site at least once every five minutes.
var stillAlive = setInterval(function () {
/* XHR back to server
Example uses jQuery */
$.get("stillAlive.php");
}, 60000);