Control Time and Send Query Every Second with Node.js? - javascript

I am writing a web application with Node.js and an MSSQL database. I have a table with a datetime value and a bit value. The bit value stays 0 until the real time equals the datetime value; when they match, the bit value becomes 1.
Okay, the question is: "How can I continuously compare the real time against the datetime value in the table?"
I used setInterval, and the server shut itself down after 3 or 4 iterations. The weird thing is that I can't see any error code or anything like that after the server closes.
I need help. If you have any idea how to solve this, please share. I heard something about socket.io but I don't know how to adapt it.

Querying a database every second is probably too frequent: one query may not finish before the next one starts.
To work around this, use a new database connection each time your setInterval handler runs. That way the new query still has a chance of working even if the previous one is not done. Getting a new connection each time is reasonable if you use a connection pool in your program and request your connections from that pool.
Something like this (not debugged):
const sql = require('mssql')

const pool = new sql.ConnectionPool(config)
const poolConnect = pool.connect()
pool.on('error', err => {
  // ... error handler
})

function intervalTimer () {
  return poolConnect.then(pool => {
    pool
      .request()
      .query('SELECT whatever', (err, result) => {
        // ... error checks
        console.dir(result)
      })
  }).catch(err => {
    // ... error handler
  })
}

setInterval(intervalTimer, 1000)
Or, you can even skip the query if the previous one isn't done. This version does that with an intervalTimerInProgress boolean: it is set right before the query starts and cleared when the query finishes (including when it fails).
const sql = require('mssql')

const pool = new sql.ConnectionPool(config)
const poolConnect = pool.connect()
pool.on('error', err => {
  // ... error handler
})

let intervalTimerInProgress = false

function intervalTimer () {
  if (!intervalTimerInProgress) {
    intervalTimerInProgress = true
    return poolConnect.then(pool => {
      pool
        .request()
        .query('SELECT whatever', (err, result) => {
          // ... error checks
          console.dir(result)
          intervalTimerInProgress = false
        })
    }).catch(err => {
      // ... error handler
      intervalTimerInProgress = false
    })
  }
}

setInterval(intervalTimer, 1000)
Again, polling a DBMS once a second is a design choice you should reconsider. The DBMS is often the scarcest resource and the bottleneck in a web app, and polling can put a bigger load on it than you want.
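Since the datetime values are already stored in the table, one lighter alternative (a sketch only; the `jobs` table, `due_at` and `done` columns are hypothetical names) is to query once for the next due row and schedule a single timer for that moment instead of polling every second:

```javascript
// Sketch: compute how long to wait until the stored datetime and schedule
// one setTimeout instead of polling. Clamped to 0 so a past datetime
// fires immediately.
function msUntil(target, now = Date.now()) {
  return Math.max(0, new Date(target).getTime() - now);
}

// Hypothetical usage with the mssql pool from above (untested):
// poolConnect.then(pool =>
//   pool.request().query('SELECT TOP 1 due_at FROM jobs WHERE done = 0 ORDER BY due_at')
// ).then(result => {
//   const dueAt = result.recordset[0].due_at;
//   setTimeout(() => { /* UPDATE jobs SET done = 1 WHERE ... */ }, msUntil(dueAt));
// });
```

You would re-run the scheduling query whenever a new row is inserted, but the database is only touched when something actually changes.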

Related

Express server stops after 5 GET requests

This code works as it should, but after the fifth GET request it still does what it should do on the backend (stores the data in the db), yet it no longer logs anything on the server and there are no changes on the frontend (React).
const express = require('express');
const router = express.Router();
const mongoose = require('mongoose');
const User = require('./login').User;

mongoose.connect('mongodb://localhost:27017/animationsdb');

router.get('/', async (req, res) => {
  await User.findOne({ username: req.query.username }, (err, result) => {
    if (result) {
      // when user goes to his profile we send him the list of animations he liked
      // list is stored in array at db, field likedAnimations
      res.send({ animationList: result.likedAnimations });
      console.log("Liked animations:", result.likedAnimations);
    } else {
      console.log("no result found");
      res.sendStatus(404);
    }
  });
});

router.put('/', async (req, res) => {
  console.log("username:", req.body.username);
  console.log("link:", req.body.link);
  // if animation is already liked, then dislike it
  // if it's not liked, then store it in db
  const user = await User.findOne({ username: req.body.username });
  if (user.likedAnimations.indexOf(req.body.link) === -1) {
    user.likedAnimations.push(req.body.link);
  } else {
    user.likedAnimations = arrayRemove(user.likedAnimations, user.likedAnimations[user.likedAnimations.indexOf(req.body.link)]);
  }
  user.save();
});

function arrayRemove(arr, value) {
  return arr.filter((item) => {
    return item != value;
  });
}

module.exports = router;
For the first five requests I get output like this:
Liked animations: ["/animations/animated-button.html"]
GET /animation-list/?username=marko 200 5.152 ms - 54
Liked animations: ["/animations/animated-button.html"]
GET /animation-list/?username=marko 304 3.915 ms - -
After that I don't get any output on the server console and no changes on the front end until I refresh the page, even though the db operations still work and the data is saved.
It appears you have a couple of issues going on. First, this request handler is not coded to handle errors properly, so it leaves requests pending without sending a response, and the connection stays pending until the client eventually times it out. Second, you likely have some sort of database concurrency usage error that is the root issue here. Third, you're not using await properly with your database: you either use await or you pass a callback to your database, not both. You need to fix all three.
To address the first and third issues:
router.get('/', async (req, res) => {
  try {
    let result = await User.findOne({ username: req.query.username });
    if (result) {
      console.log("Liked animations:", result.likedAnimations);
      res.send({ animationList: result.likedAnimations });
    } else {
      console.log("no database result found");
      res.sendStatus(404);
    }
  } catch (e) {
    console.log(e);
    res.sendStatus(500);
  }
});
For the second issue, the particular database error you mention appears to be some sort of concurrency/locking issue internal to the database, triggered by the sequence of database operations your code executes. You can read more about that error in the discussion here. Since the code you show contains only a single read operation, we would need to see a much larger context of relevant code, including the code that writes to the database, in order to offer any ideas on how to fix the root cause.
We can't see the whole flow here, but you need to use atomic update operations in your database. The PUT handler you show is an immediate race condition. In a multi-client database, you don't read a value, modify it, and then write it back: another client could modify the value while you're sitting there holding it, and when you write your held value back you overwrite the change the other client just made. That's a race condition. Instead, use an atomic operation that updates the value directly in one database call, or use transactions to make a multi-step operation safe.
I'd suggest you read an article on atomic operations in MongoDB. You probably want something like .findAndModify() so you can find and change an item in the database in one atomic operation. If you search for "atomic operations in mongodb", there are many other articles on the topic.
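As a concrete (untested) sketch of how the PUT handler could avoid the read-modify-write race using atomic operators: try a $pull first, and if nothing was removed, $addToSet the link. `User` is assumed to expose `updateOne()` like a Mongoose model; each call is a single atomic operation on the server:

```javascript
// Hedged sketch: toggle a liked animation with atomic operators instead of
// read-modify-write. Each updateOne is one atomic server-side operation.
async function toggleLike(User, username, link) {
  // Try to remove the link; the filter only matches if the link is present.
  const pulled = await User.updateOne(
    { username: username, likedAnimations: link },
    { $pull: { likedAnimations: link } }
  );
  if (pulled.modifiedCount === 0) {
    // Nothing was removed, so the link wasn't there: add it ($addToSet
    // guarantees no duplicates).
    await User.updateOne(
      { username: username },
      { $addToSet: { likedAnimations: link } }
    );
  }
}
```

A tiny toggle race remains between the two calls, but each operation is idempotent, so the array can never be corrupted the way concurrent save() calls can corrupt it.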

How to sync my saved data on apple devices with service worker?

I know that the Background Sync API is not supported in the Apple ecosystem, so how would you get around that and make a solution that works on Apple platforms as well as others? Right now I have a solution that uses the Background Sync API, and for some reason it does literally nothing on iOS: it saves the failed requests and then never syncs. Could I just access the sync queue somehow with an IndexedDB wrapper and then sync at an arbitrary time?
I tried that once and it broke everything. Do you have any idea how?
const bgSyncPlugin = new workbox.backgroundSync.Plugin('uploadQueue', {
  maxRetentionTime: 60 * 24 * 60,
  onSync: async ({ queue }) => {
    return getAccessToken().then((token) => {
      replayQueue(queue, token).then(() => {
        return showNotification();
      });
    });
  },
});
This is the code I have, and every part has a purpose: since my token has a timeout, I have to check whether it is expired, proceed after that, and replace the token in the headers if it is expired; I also have to change data in the request bodies when I sync. It all works fine on anything other than Apple devices, which never trigger onSync. I tried listening to fetch events and triggering the sync with:
self.registration.sync.register('uploadQueue');
But to no avail. I tried to register the sync on service worker registration; nothing seems to help.
If sync registration is not viable on iOS, can I access the upload queue table somehow?
P.S.: I'm using dexie.js as an IndexedDB wrapper. It is a Vue.js app with a Laravel API, and the sync process is quite complex, but it works; I just have to figure out how to do it on iOS!
I found an answer to this after about two weeks of it being on my mind and on my to-do list.
Now get some popcorn and strap yourself the heck in, because this is quite a chonker.
In my case the sync process was pretty complex: my users could be away from any connection for so long that my access tokens would expire, so I had to check for token expiration as well and re-fetch the token.
Furthermore, my users could add new people to the database of people, each with their own unique server-side id, so I had to order my requests so that the person registrations are sent first, then the tasks and campaigns completed for them, so I can receive the respective ids from the API.
Now for the fun part:
Firstly, you can't use a bgSyncPlugin, because you can't access the replayQueue; you have to use a plain queue, like this:
var bgSyncQueue = new workbox.backgroundSync.Queue('uploadQueue', {
  maxRetentionTime: 60 * 24 * 60,
  onSync: () => syncData(),
});
And push the failed requests to the queue inside the fetch listener:
this.onfetch = (event) => {
  let requestClone = event.request.clone();
  if (requestClone.method === 'POST' && 'condition to match the requests you need to replay') {
    event.respondWith(
      (() => {
        const promiseChain = fetch(requestClone).catch(() => {
          return bgSyncQueue.pushRequest({ request: event.request });
        });
        event.waitUntil(promiseChain);
        return promiseChain;
      })()
    );
  } else {
    event.respondWith(fetch(event.request));
  }
};
When the user has a connection, we trigger the syncData() function. On iOS this is a bit complicated (more on this later); on Android it happens automatically, as the service worker sees it has a connection. Now let's look at what syncData does:
async function syncData() {
  if (bgSyncQueue) { // is there data to sync?
    return getAccessToken() // get the access token, refreshing it if expired
      .then((token) => replayQueue(bgSyncQueue, token).then(() => showNotification({ body: 'Successful sync', title: 'Data synced to server' })))
      .catch(() => showNotification({ title: 'Sync unsuccessful', body: 'Please find an area with better coverage' })); // replay the requests and show a notification
  }
  return Promise.resolve('empty'); // if there are no requests to replay, return 'empty'
}
For the Android/desktop side of things we are finished; you can be happy with your modified data being synced. On iOS, though, we can't just have the users' data uploaded only when they restart the PWA, that's bad user experience, but since we are playing with JavaScript, everything is possible one way or another.
There is a message event that can be fired every time the client code sees that it has internet, which looks like this:
if (this.$online && this.isIOSDevice) {
  if (window.MessageChannel) {
    var messageChannel = new MessageChannel();
    messageChannel.port1.onmessage = (event) => {
      this.onMessageSuccess(event);
    };
  } else {
    navigator.serviceWorker.onmessage = (event) => {
      this.onMessageSuccess(event);
    };
  }
  navigator.serviceWorker.ready.then((reg) => {
    try {
      reg.active.postMessage(
        {
          text: 'sync',
          port: messageChannel && messageChannel.port2,
        },
        [messageChannel && messageChannel.port2]
      );
    } catch (e) {
      // Firefox support
      reg.active.postMessage({
        text: 'sync',
      });
    }
  });
}
This is inside a Vue.js watch function that watches whether we have a connection; if we do, it also checks whether this is a device from the Apple ecosystem, like so:
isIosDevice() {
  return !!navigator.platform && /iPad|iPhone|MacIntel|iPod/.test(navigator.platform) && /^((?!chrome|android).)*safari/i.test(navigator.userAgent);
}
And so it tells the service worker that it has internet and has to sync; in that case this bit of code gets activated:
this.onmessage = (event) => {
  if (event.data.text === 'sync') {
    event.waitUntil(
      syncData().then((res) => {
        if (res !== 'empty') {
          if (event.source) {
            // tell the client code to show a notification (I have a built-in
            // notification system in the app that does not use push notifications;
            // it just shows a little pill at the bottom of the app with the message)
            event.source.postMessage('doNotification');
          } else if (event.data.port) {
            event.data.port.postMessage('doNotification'); // same thing
          }
          return res;
        }
      })
    );
  }
};
Now the most useful part, in my opinion: the replayQueue function. This guy gets the queue and the token from getAccessToken, and then it does its thing like clockwork:
const replayQueue = async (queue, token) => {
  let entry;
  while ((entry = await queue.shiftRequest())) { // while we have requests to replay
    let data = await entry.request.clone().json();
    try {
      // replay the person registrations first and store them into IndexedDB
      if (isPersonRequest) {
        // if new person
        await fetchPerson(entry, data, token);
        // then replay the campaign and task submissions
      } else if (isTaskOrCampaignRequest) {
        // if task
        await fetchCampaigns(entry, data, token);
      }
    } catch (error) {
      showNotification({ title: 'no success', body: 'go for better internet plox' });
      await queue.unshiftRequest(entry); // put the failed request back into the queue, to try again later
    }
  }
  return Promise.resolve();
};
Now this is the big picture of how to use this on iOS devices and make Apple mad as heck :) I am open to any related questions; by this time I think I have become pretty good with service worker related stuff, as this was not the only difficult part of this project, but I digress, that's a story for another day.
(You may see that the error handling is not perfect and maybe this thing is not the most secure of them all, but this project has a pretty small, fixed number of users who know how to use it and what it does, so I'm not really worried about security in this case; you may want to improve on things if you use it in a more serious project.)
Hope I could help, and have a great day, all of you.

How to have only one mongodb instance?

I'm writing a Node app to log some information in a Mongo database.
Below is the snippet that is called each time I need to store a log entry in the Mongo database.
const mongo = {}
const mongo_cli = require('mongodb').MongoClient

module.exports = {
  log (l) {
    mongo_cli.connect(the_mongo_url, (error, client) => {
      if (error) throw error;
      mongo.cli = client;
      mongo.db = client.db(the_database);
      // insert and update operations
    });
  }
}
The code above works for now. I mean, I can insert and update logs already inserted, at the price of one (or more) connections that I never close, due to my lack of control over the callback functions.
So how can I structure it better so that there is only one mongo_cli connect call and I don't consume too many resources?
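One common pattern (a sketch only, not tested against your app; `the_mongo_url` and `the_database` are the placeholders from your snippet) is to cache the connection promise in module scope, so the connect call runs at most once and every log() call awaits the same client:

```javascript
// Sketch: cache a single connection promise so the connect function runs
// at most once, however many times the consumer asks for the client.
function memoizeConnection(connectFn) {
  let promise = null;
  return () => {
    if (promise === null) promise = connectFn();
    return promise;
  };
}

// Hypothetical usage with the names from the question:
// const getClient = memoizeConnection(() =>
//   require('mongodb').MongoClient.connect(the_mongo_url)
// );
// module.exports = {
//   async log(l) {
//     const client = await getClient();
//     await client.db(the_database).collection('logs').insertOne(l);
//   }
// };
```

Because the promise (not the client) is cached, concurrent log() calls made before the connection is ready all wait on the same in-flight connect instead of opening their own.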

Cron http request to cloud function - look into database

I'm trying to use cron to trigger my cloud function to look at my database every couple of hours. I can trigger the function automatically, but the output is not what I was expecting. I'm a tad confused about why I'm not able to retrieve anything from the database, meaning I can't log my console.log("refunded"). I currently have 1 document in the request collection whose replied field satisfies replied == false, so I'm confused about how to go about this correctly, since I believe it should have logged that once.
exports.daily_job = functions.https.onRequest((req, res) => {
  const key = req.query.key;
  // Exit if the keys don't match.
  if (!secureCompare(key, functions.config().cron.key)) {
    console.log('The key provided in the request does not match the key set in the environment. Check that', key,
      'matches the cron.key attribute in `firebase env:get`');
    res.status(403).send('Security key does not match. Make sure your "key" URL query parameter matches the ' +
      'cron.key environment variable.');
    return null;
  }
  let db = admin.firestore()
  let request = db.collection('request')
    .where('replied', '==', false)
    .get().then(function (querySnapshot) {
      querySnapshot.forEach(function (doc) {
        console.log("refunded")
      })
    })
    .catch(function (error) {
      console.log('Error getting documents: ', error)
      res.send('error');
    })
  res.send('finished refund');
  return null;
});
You're not waiting on the promise returned by get(), which is asynchronous. As your code stands, the function sends the "finished refund" response immediately after the query is made, before it has had time to finish; once you send a response, the function is terminated.
You need to send the client response only after all the async work in your function is complete, which means in BOTH your then() and catch() callbacks.
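A sketch of what that could look like (hedged: `handleDailyJob` is a made-up helper name, and the key check from the question is elided), awaiting the query and sending exactly one response after it completes:

```javascript
// Sketch: await the Firestore query and respond only when it has finished.
// `db` is a Firestore instance, `res` an Express-style response object.
async function handleDailyJob(db, res) {
  try {
    const querySnapshot = await db.collection('request')
      .where('replied', '==', false)
      .get();
    querySnapshot.forEach(() => console.log('refunded'));
    res.send('finished refund'); // sent only after the query completes
  } catch (error) {
    console.log('Error getting documents:', error);
    res.status(500).send('error');
  }
}

// exports.daily_job = functions.https.onRequest((req, res) => {
//   // ... key check from the question ...
//   return handleDailyJob(admin.firestore(), res);
// });
```

Returning the promise from the request handler also tells Cloud Functions not to terminate the instance until the work is done.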

RxJs avoid external state but still access previous values

I'm using RxJS to listen to an AMQP queue (not really relevant).
I have a function createConnection that returns an Observable that emits the new connection object. Once I have a connection, I want to send messages through it every 1000 ms, and after 10 messages I want to close the connection.
I'm trying to avoid external state, but if I don't store the connection in an external variable, how can I close it? See, I begin with the connection, then flatMap and push messages, so after a few chained operators I no longer have the connection object.
This is not my actual flow, but imagine something like this:
createConnection()
  .flatMap(connection => connection.createChannel())
  .flatMap(channel => channel.send(message))
  .do(console.log)
  .subscribe(connection => connection.close()) // <--- obviously connection isn't here
Now I understand that it's silly to do that, but then how do I access the connection? I could of course begin with var connection = createConnection() and later somehow join that in. But how do I do this? I don't even know how to ask this question properly. Bottom line: what I have is an observable that emits a connection; after the connection is opened, I want an observable that emits messages every 1000 ms (with a take(10)), and then to close the connection.
The direct answer to your question is "you can carry it through each step". For example, you can replace this line
.flatMap(connection => connection.createChannel())
with this one:
.flatMap(connection => ({ connection: connection, channel: connection.createChannel() }))
and retain access to the connection all the way down.
But there's another way to do what you want to do. Let's assume your createConnection and createChannel functions look something like this:
function createConnection() {
  return Rx.Observable.create(observer => {
    console.log('creating connection');
    const connection = {
      createChannel: () => createChannel(),
      close: () => console.log('disposing connection')
    };
    observer.onNext(connection);
    return Rx.Disposable.create(() => connection.close());
  });
}

function createChannel() {
  return Rx.Observable.create(observer => {
    const channel = {
      send: x => console.log('sending message: ' + x)
    };
    observer.onNext(channel);
    // assuming no cleanup here, don't need to return a disposable
  });
}
createConnection (and createChannel, but we'll focus on the former) returns a cold observable; each subscriber will get their own connection stream containing a single connection, and when that subscription expires, the dispose logic will be called automatically.
This allows you to do something like this:
const subscription = createConnection()
  .flatMap(connection => connection.createChannel())
  .flatMap(channel => Rx.Observable.interval(1000).map(i => ({ channel: channel, data: i })))
  .take(10)
  .subscribe(x => x.channel.send(x.data));
You don't actually have to dispose the subscription for cleanup to occur; after take(10) is satisfied, the whole chain completes and cleanup is triggered automatically. The only reason to call dispose on the subscription explicitly is if you want to tear things down before the ten 1000 ms intervals are up.
Note that this solution also contains an instance of the direct answer to your question: we cart the channel down the line so we can use it in the onNext lambda passed to the subscribe call (which is customarily where such code would appear).
Here's the whole thing working: https://jsbin.com/korihe/3/edit?js,console,output
This code gave me an error, because flatMap expects an Observable&lt;T&gt; and ({ connection: connection, channel: connection.createChannel() }) is a plain object:
.flatMap(connection => ({ connection: connection, channel: connection.createChannel() }))
Instead, you can use the combineLatest operator:
.flatMap(connection => Observable.combineLatest(
  Observable.of(connection),
  connection.createChannel(),
  (connection, channel) => {
    // ... code ...
  }
))
