Node JS Auto Reload - javascript

I want to make a program with JavaScript or Node.js. What I want to achieve is this: when there is a new item in the RSS feed I consume, the program logs it to the terminal. In the future I will put the code on Firebase hosting, so the code needs to run by itself; the log output may later be written to a text file or stored in a database.
So, like this:
I run the program and get all the items in the RSS feed,
but when there is a new item I shouldn't have to run node app.js again; every time a new item appears in the feed, it should be logged automatically.
So far I have built it with Node.js using rss-parser, and the code I use looks like this:
let Parser = require('rss-parser');
let parser = new Parser();

(async () => {
  let feed = await parser.parseURL('https://rss-checker.blogspot.com/feeds/posts/default?alt=rss');
  feed.items.forEach(item => {
    console.log(item);
  });
})();

There are three common ways to achieve this:
Polling
Stream push
Webhook
Based on your code sample I assume the RSS feed is plain request/response. That lends itself well to polling.
A poll-based program makes a request to a resource on an interval. The interval should be informed by resource limits and the performance the end user expects. Ideally the API accepts an offset or a page, so you could request only items newer than some ID; that makes the program stateful.
setInterval can be used to drive the polling loop. Below is an example of the poll loop with no state management; it polls at 5-second intervals:
let Parser = require('rss-parser');
let parser = new Parser();

setInterval(async () => {
  let feed = await parser.parseURL('https://rss-checker.blogspot.com/feeds/posts/default?alt=rss');
  feed.items.forEach(item => {
    console.log(item);
  });
}, 5000);
This is incomplete because it needs to keep track of already seen posts. Creating a poll loop means you have a stateful process that needs to stay running.
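As a hedged sketch of that state management (assuming each rss-parser item exposes a guid field, with link as a fallback), you could remember which items have already been logged:

let Parser = require('rss-parser');
let parser = new Parser();
const seen = new Set();

setInterval(async () => {
  let feed = await parser.parseURL('https://rss-checker.blogspot.com/feeds/posts/default?alt=rss');
  feed.items
    .filter(item => !seen.has(item.guid || item.link))
    .forEach(item => {
      seen.add(item.guid || item.link);
      console.log(item); // only items we haven't seen before reach this point
    });
}, 5000);

The Set lives in memory, so a restart logs everything again; persisting the seen ids (in the text file or database mentioned in the question) would survive restarts.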

Related

Express JS: how does it handle simultaneous requests and avoid collision?

I am new to nodejs/Express.js development.
I have built my backend service with Express.js / Typescript and I have multiple routes / api endpoints defined. One is like this:
app.post('/api/issues/new', createNewIssue);
where the browser sends a POST request when a user submits a new photo (also called an issue in my app).
The user can send an issue to another user, and the backend will first query the database to find the number of issues that matches the conditions of "source user" and "destination user", and then give the new issue an identifying ID in the form srcUser-dstUser-[number], where number is the auto-incremented count.
The createNewIssue function is like this:
export const createNewIssue = catchErrors(async (req, res) => {
  const srcUser = req.header('src_username');
  const dstUser = req.header('dst_username');
  // query database for number of issues matching "srcUser" and "dstUser"
  ...
  const lastIssues = await Issue.find({ where: { "srcUser": srcUser, "dstUser": dstUser }, order: { id: 'DESC' } });
  const count = lastIssues.length;
  // create a new issue entity with the ID `srcUser-dstUser-[count+1]`
  const newIssue = await createEntity(Issue, {
    ...
    id: `${srcUser}-${dstUser}-${count + 1}`,
    ...
  });
  res.respond({ newIssue: newIssue });
});
Say the backend receives multiple requests with the same srcUser and dstUser attributes at the same time, will there be collisions where multiple new issues are created with the same id?
I have read some documentation about Node.js being single-threaded, but I'm not sure what that means concretely for this specific scenario.
Besides business logic in this scenario, I have some confusions in general about Express JS / Node JS:
When there is only one CPU core, Express.js processes multiple concurrent requests asynchronously: it starts processing one and does not wait for it to finish, instead continuing with the next one. Is this understanding accurate?
When there are multiple CPU cores, does Express.js / Node.js utilize them all in the same manner?
Node.js will not solve this problem for you automatically.
While it will only deal with one thing at a time, it is entirely possible that Request 2 will request the latest ID in the database while Request 1 has hit the await statement at the same point and gone to sleep. This would mean they get the same answer and would each try to create a new entry with the same ID.
You need to write your JavaScript to make sure that this doesn't happen.
The usual ways to handle this would be to either:
Let the database (and not your JavaScript) handle the ID generation (usually by using a sequence).
Use transactions so that the request for the latest ID and the insertion of the new row are treated as one operation by the database (so it won't start the same operation for Request 2 until the select and insert for Request 1 are both done).
Test to make sure createEntity is successful (and doesn't throw a 'duplicate id' error) and try again if it fails (with a limit in case it keeps failing in which case it should return an error message to the client).
The specifics depend on which database you use; PostgreSQL's documentation on sequences is a good example.
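For illustration, here is a minimal sketch of letting the database hand out the number atomically (in the spirit of options 1 and 2), using node-postgres; the issue_counters table, its UNIQUE (src_user, dst_user) constraint, and all column names are assumptions, not part of the question's schema:

const { Pool } = require('pg');
const pool = new Pool();

// Atomically claim the next number for this (srcUser, dstUser) pair.
// The upsert runs as a single statement, so two concurrent requests
// cannot read the same value: the second one waits on the row lock.
async function nextIssueId(srcUser, dstUser) {
  const { rows } = await pool.query(
    `INSERT INTO issue_counters (src_user, dst_user, n)
     VALUES ($1, $2, 1)
     ON CONFLICT (src_user, dst_user)
     DO UPDATE SET n = issue_counters.n + 1
     RETURNING n`,
    [srcUser, dstUser]
  );
  return `${srcUser}-${dstUser}-${rows[0].n}`;
}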

How can I collect data on all BrowserWindows synchronously in Electron?

I have an app where I spawn several BrowserWindows containing HTML forms, and I'd like to collect all their data at the press of a button (in order to save it, so the windows can be respawned in the same state after a restart).
At the moment, the only solution I have found is to have each BrowserWindow call ipcRenderer.send every single time any variable changes (not too hard to do with Vue.js watchers), but this seems demanding and inefficient.
I also thought of calling executeJavascript on each window, but that does not allow capturing the return value AFAIK.
I'd just like to be able to send a message from main when a request for saving is made, and wait for the windows to respond before saving all.
EDIT
I found a slightly better way, it looks like this
app.js
// wait for update responses
ipc.on('update-response', (evt, args) => {
  updates[evt.sender.id] = args;
  if (Object.keys(updates).length == BrowserWindow.getAllWindows().length) {
    // here I do what I need to save my settings, using what is stored in 'updates'
    // ...
    // and now reset updates for next time
    updates = {};
  }
});
// now send the requests for updates
BrowserWindow.getAllWindows().map(w => w.webContents.send('update'));
renderer.js
ipcRenderer.on('update', () => {
  // collect the data
  // var data = ...
  ipcRenderer.send('update-response', data);
});
and obviously on the renderer side I am listening for these 'update' messages and sending the data back with 'update-response'.
But it seems a bit complicated, so I am sure there is a simpler way to achieve this using the framework.
EDIT 2
I realized that the above does not always work because, for some reason, evt.sender.id does not match the ids obtained from BrowserWindow.getAllWindows(). I worked around that by sending an id with each request and having the responder include it in the reply. But this is a lot of machinery for so very little...
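A sketch of that id-echo workaround in the main process might look like this (the channel names are the ones above; the renderer is assumed to reply on 'update-response' with an { id, data } object):

const { BrowserWindow, ipcMain } = require('electron');

function collectAllWindowData() {
  const pending = BrowserWindow.getAllWindows().map(win =>
    new Promise(resolve => {
      const requestId = `${win.id}-${Date.now()}`;
      const handler = (evt, { id, data }) => {
        if (id === requestId) {
          ipcMain.removeListener('update-response', handler);
          resolve(data);
        }
      };
      ipcMain.on('update-response', handler);
      win.webContents.send('update', requestId); // renderer echoes requestId back
    })
  );
  return Promise.all(pending); // resolves once every window has answered
}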

Synchronize critical section in API for each user in JavaScript

I want to swap a user's profile picture. To do this, I have to check the database to see whether a picture has already been saved; if so, it should be deleted. Then the new one should be saved and entered into the database.
Here is a simplified (pseudo) code of that:
async function changePic(user, file) {
  // remove old pic
  if (await database.hasPic(user)) {
    let oldPath = await database.getPicOfUser(user);
    filesystem.remove(oldPath);
  }
  // save new pic
  let path = "some/new/generated/path.png";
  file = await Image.modify(file);
  await Promise.all([
    filesystem.save(path, file),
    database.saveThatUserHasNewPic(user, path)
  ]);
  return "I'm done!";
}
I ran into the following problem with it:
If the user calls the API twice in a short time, serious errors occur. The database queries and the functions in between are asynchronous, so the changes from the first API call haven't been applied yet when the second call checks for a profile picture to delete. I'm then left with a filesystem.remove request for a file that no longer exists, and an orphaned image left in the filesystem.
I would like to safely handle that situation by synchronizing this critical section of code. I don't want to reject requests only because the server hasn't finished the previous one and I also want to synchronize it for each user, so users aren't bothered by the actions of other users.
Is there a clean way to achieve this in JavaScript? Some sort of monitor like you know it from Java would be nice.
You could use a library like p-limit to control your concurrency. Use a map to track the active/pending requests for each user. Use their ID (which I assume exists) as the key and the limit instance as the value:
const pLimit = require('p-limit');

const limits = new Map();

function changePic(user, file) {
  async function impl(user, file) {
    // your implementation from above
  }

  const { id } = user; // or similar, to distinguish users
  if (!limits.has(id)) {
    limits.set(id, pLimit(1)); // only one active request per user
  }
  const limit = limits.get(id);
  return limit(impl, user, file); // schedule impl for execution
}
// TODO clean up limits to prevent memory leak?
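One possible answer to that TODO, as a sketch relying on p-limit's documented activeCount and pendingCount properties: replace the final return in changePic with a version that forgets a user's limiter once its queue drains (whether the counters have already been decremented when finally fires is worth verifying against your p-limit version):

return limit(impl, user, file).finally(() => {
  // nothing active or queued for this user any more: drop the limiter
  if (limit.activeCount === 0 && limit.pendingCount === 0) {
    limits.delete(id);
  }
});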

Handling large data sets on client side

I'm trying to build an application that uses Server-Sent Events in order to fetch and show some tweets (the latest 50-100) in the UI.
URL for SSE:
https://tweet-service.herokuapp.com/stream
Problem(s):
My UI is becoming unresponsive because a huge amount of data is coming in!
How do I make sure my UI stays responsive? What strategies should I adopt for handling the data?
Current Setup: (For better clarity on what I'm trying to achieve)
Currently I have a max-heap with a custom comparator to show the latest 50 tweets.
Every time there's a change, I re-render the page with the new max-heap data.
We should not keep the EventSource open indefinitely, since that will block the main thread if too many messages arrive in a short amount of time. Instead, we should only keep the event source open for as long as it takes to collect 50-100 tweets. For example:
function getLatestTweets(limit) {
  return new Promise((resolve) => {
    let items = [];
    let source = new EventSource('https://tweet-service.herokuapp.com/stream');
    source.onmessage = ({ data }) => {
      items.push(JSON.parse(data));
      if (items.length >= limit) {
        // close the stream and resolve once we have reached the specified limit
        source.close();
        resolve(items);
      }
    };
  });
}
getLatestTweets(100).then(e => console.log(e))
You can then compare these tweets to previously fetched tweets to figure out which ones are new, and then update the UI accordingly. You can use setInterval to call this function periodically to fetch the latest tweets.
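For instance, a hedged sketch of that periodic refresh; the tweet id field used for de-duplication is an assumption about the payload shape:

const seen = new Set();

setInterval(() => {
  getLatestTweets(100).then(tweets => {
    const fresh = tweets.filter(t => !seen.has(t.id));
    fresh.forEach(t => seen.add(t.id));
    // re-render with `fresh` only, instead of rebuilding the whole list
  });
}, 30000); // 30s is an arbitrary choice; tune it to your rate limits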

Angular2/RXJS - Handling Potentially Long Queries

Goal: Front-end of application allows users to select files from their local machines, and send the file names to a server. The server then matches those file names to files located on the server. The server will then return a list of all matching files.
Issue: This works great if a user selects fewer than a few hundred files; otherwise it can cause long response times. I do not want to limit the number of files a user can select, and I don't want to have to worry about the HTTP requests timing out on the front-end.
Sample code so far:
//html on front-end to collect file information
<div>
  <input (change)="add_files($event)" type="file" multiple>
</div>

//function called from the front-end, which then calls the profile_service add_files function
//it passes along the $event object
add_files($event){
  this.profile_service.add_files($event).subscribe(
    data => console.log('request returned'),
    err => console.error(err),
    () => //update view function
  );
}

//The following two functions are in my profile_service, which is dependency-injected into my component
//formats the event object for the eventual query
add_files(event_obj){
  let file_arr = [];
  let file_obj = event_obj.target.files;
  for(let key in file_obj){
    if (file_obj.hasOwnProperty(key)){
      file_arr.push(file_obj[key]['name']);
    }
  }
  let query_obj = {files: file_arr};
  return this.save_files(query_obj);
}

//here is where the actual request to the back-end is made
save_files(query_obj){
  let payload = JSON.stringify(query_obj);
  let headers = new Headers();
  headers.append('Content-Type', 'application/json');
  return this.http.post('https://some_url/api/1.0/collection', payload, {headers: headers})
    .map((res:Response) => res.json());
}
Possible Solutions:
Process requests in batches. Re-write the code so that the profile-service is only called with 25 files at a time, and upon each response call the profile-service again with the next 25 files. If this is the best solution, is there an elegant way to do this with observables? If not, I will use recursive callbacks, which should work fine.
Have the endpoint return a generic response immediately, like "file matches are being uploaded and saved to your profile". Since all the matching files are persisted to a DB on the backend, this would work, and the front-end could then query the DB every so often to get the current list of matching files. This seems ugly, but I figured I'd throw it out there.
Any other solutions are welcome. Would be great to get a best-practice for handling this type of long-lasting query with angular2/observables in an elegant way.
I would recommend that you break up the number of files that you search for into manageable batches and then process more as results are returned, i.e. solution #1. The following is untested, but I think it's a rather elegant way of accomplishing this:
add_files(event_obj){
  let file_arr = [];
  let file_obj = event_obj.target.files;
  for(let key in file_obj){
    if (file_obj.hasOwnProperty(key)){
      file_arr.push(file_obj[key]['name']);
    }
  }
  let self = this;
  let bufferedFiles = Observable.from(file_arr)
    .bufferCount(25); //nice round number that you could play with
  return bufferedFiles
    //concatMap will make sure that each of your requests is not executed
    //until the previous completes; then all the data is merged into a single output
    .concatMap((arr) => {
      let payload = JSON.stringify({files: arr});
      let headers = new Headers();
      headers.append('Content-Type', 'application/json');
      //use defer because http.post is eager:
      //this makes it only execute after subscription
      return Observable.defer(() =>
        self.http.post('https://some_url/api/1.0/collection', payload, {headers: headers}));
    }, (arr, resp) => resp.json());
}
concatMap will keep your server from having to handle more than one batch (of whatever size your buffer is) at a time, by preventing new requests until the previous one has returned. You could also use mergeMap if you wanted the batches to execute in parallel, but it seems the server is the limiting resource in this case, if I am not mistaken.
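If parallel batches were acceptable, a minimal variant would be the following (assuming the RxJS 5 overload where mergeMap takes a concurrency argument; makeRequest here is a hypothetical stand-in for the same defer/post body as above):

return bufferedFiles
  .mergeMap(arr => makeRequest(arr), 4); // at most 4 batches in flight at once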
I'd suggest using websocket connections instead, because they don't time out.
See also
- https://www.npmjs.com/package/angular2-websocket
- http://mmrath.com/post/websockets-with-angular2-and-spring-boot/
- http://www.html5rocks.com/de/tutorials/websockets/basics/
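For illustration, a bare-bones sketch with the browser's native WebSocket API (the wss:// endpoint is an assumption, and the backend would need a matching socket server; file_arr is the array of names built in the question's code):

const socket = new WebSocket('wss://some_url/api/1.0/collection');

socket.onopen = () => {
  // send all file names at once; no HTTP timeout applies to the open socket
  socket.send(JSON.stringify({ files: file_arr }));
};
socket.onmessage = (msg) => {
  const matches = JSON.parse(msg.data);
  // update the view incrementally as the server streams back matches
};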
An alternative approach would be polling, where the client makes repeated requests at a defined interval to get the current processing state from the server.
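Sketched, such a poll could look like this (the /status route and the jobId are hypothetical; the server would hand back a job id when it accepts the file list, and report done plus the matches when processing finishes):

function pollStatus(jobId, intervalMs) {
  return new Promise((resolve, reject) => {
    const timer = setInterval(() => {
      fetch(`/api/1.0/collection/${jobId}/status`)
        .then(res => res.json())
        .then(status => {
          if (status.done) {       // server reports the batch is processed
            clearInterval(timer);
            resolve(status.matches);
          }
        })
        .catch(err => { clearInterval(timer); reject(err); });
    }, intervalMs);
  });
}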
To send multiple requests and wait for all of them to complete:
getAll(urls: any[]): Observable<any[]> {
  let observables = [];
  for(var i = 0; i < urls.length; i++) {
    observables.push(this.http.get(urls[i]));
  }
  return Observable.forkJoin(observables);
}

someMethod(server: string) {
  let urls = [
    `${server}/fileService?somedata=a`,
    `${server}/fileService?somedata=b`,
    `${server}/fileService?somedata=c`];
  this.getAll(urls).subscribe(
    (value) => processValue(value),
    (err) => processError(err),
    () => onDone());
}
