I want to change a user's profile picture. To do this, I have to check the database to see whether a picture has already been saved; if so, it should be deleted. Then the new one should be saved and recorded in the database.
Here is a simplified (pseudo) code of that:
async function changePic(user, file) {
    // remove old pic
    if (await database.hasPic(user)) {
        let oldPath = await database.getPicOfUser(user);
        filesystem.remove(oldPath);
    }

    // save new pic
    let path = "some/new/generated/path.png";
    file = await Image.modify(file);
    await Promise.all([
        filesystem.save(path, file),
        database.saveThatUserHasNewPic(user, path)
    ]);

    return "I'm done!";
}
I ran into the following problem with it:
If the user calls the API twice in a short time, serious errors occur. The database queries and the functions in between are asynchronous, so the changes from the first API call haven't been applied yet when the second call checks for a profile picture to delete. I'm then left with a filesystem.remove request for a file that no longer exists and an image in the filesystem that never gets removed.
I would like to handle that situation safely by synchronizing this critical section of code. I don't want to reject requests just because the server hasn't finished the previous one, and I also want to synchronize per user, so users aren't affected by the actions of other users.
Is there a clean way to achieve this in JavaScript? Some sort of monitor, as you know it from Java, would be nice.
You could use a library like p-limit to control your concurrency. Use a map to track the active/pending requests for each user. Use their ID (which I assume exists) as the key and the limit instance as the value:
const pLimit = require('p-limit');

const limits = new Map();

function changePic(user, file) {
    async function impl(user, file) {
        // your implementation from above
    }

    const { id } = user; // or similar to distinguish them
    if (!limits.has(id)) {
        limits.set(id, pLimit(1)); // only one active request per user
    }
    const limit = limits.get(id);

    return limit(impl, user, file); // schedule impl for execution
}

// TODO clean up limits to prevent memory leak?
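One way to handle that TODO is to count in-flight calls per user and drop the map entry once the count reaches zero; a fresh limiter is simply created if the same user comes back later. An untested sketch (withUserLimit is just an illustrative name, and the map now stores a small wrapper object instead of the bare limit instance):

const pLimit = require('p-limit');

const limits = new Map();

function withUserLimit(id, fn, ...args) {
    if (!limits.has(id)) {
        limits.set(id, { limit: pLimit(1), inFlight: 0 });
    }
    const entry = limits.get(id);
    entry.inFlight++;
    return entry.limit(fn, ...args).finally(() => {
        // once no calls for this user are running or queued, free the map entry
        entry.inFlight--;
        if (entry.inFlight === 0) {
            limits.delete(id);
        }
    });
}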
I want to check if a container exists and if not, initialize it. I was hoping for something like the following:
const { endpoint, key, databaseId } = config;
const containerName = "container1";

const client = new CosmosClient({ endpoint, key });
const containerDefinition = getContainerDefinition(containerName);
const db = await createDatabase(client, databaseId);

if (!db.containers.contains(containerName)) {
    // Do something
}
The reason I'm not using "createIfNotExists" is because I would need to make a 2nd call to check if the container returned is populated with items or not. The container I'm creating is going to hold settings data which will be static once the container is initially created. This settings check is going to happen per request so I'd like to minimize the database calls and operations if possible.
I tried doing something like:
try {
    await db.container(containerName).read();
}
catch (err) {
    if (err.message.includes("Resource Not Found")) {
        // do something
    }
}
But that doesn't seem like the right way to do it.
Any help would be appreciated!
I'm not quite clear on why you would need to do this since typically you only need to do this sort of thing once for the life of your application instance. But I would not recommend doing it this way.
When you query Cosmos to test the existence of a database, container, etc., this hits the master partition for the account. The master partition is kind of like a tiny Cosmos database with all of your account meta data in it.
This master partition is allocated a small amount of RU/s to manage the metadata operations. So if your app is designed to make these types of calls for every single request, it's quite likely you will get rate limited in your application.
If there is some way you can design this such that it doesn't have to query for the existence of a container then I would pursue that instead.
Interesting question. I think you have a few options:
1. Just call const { container } = await database.containers.createIfNotExists({ id: "Container" });. It will be fast, probably a few milliseconds. I went through the SDK code and it looks like it will always do a read against Cosmos, though :( If you still want to check whether a container exists, the SDK has methods for it (but again, no real benefit):
const iterator = database.containers.readAll();
const { resources: containersList } = await iterator.fetchAll();
2. Create a singleton and initialise all your containers the first time, so later requests don't call Cosmos again; of course, if you scale out, each instance will do the same (see the sketch after this list).
3. My favourite: use Terraform/ARM templates/Bicep to spin up the infrastructure, so your code won't need to handle that at all.
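A minimal, untested sketch of the singleton idea from option 2 (endpoint, key and databaseId come from your config as in the question, and the container id is a placeholder): the promise is created once when the module first loads, and every request just awaits the cached result.

// containers.js - initialise the container once, reuse the cached promise on every request
const { CosmosClient } = require("@azure/cosmos");
const { endpoint, key, databaseId } = config;

const client = new CosmosClient({ endpoint, key });
let containerPromise; // cached across requests in this instance

function getSettingsContainer() {
    if (!containerPromise) {
        containerPromise = (async () => {
            const { database } = await client.databases.createIfNotExists({ id: databaseId });
            const { container } = await database.containers.createIfNotExists({ id: "container1" });
            return container;
        })();
    }
    return containerPromise;
}

module.exports = { getSettingsContainer };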
You can try this code:
async function check_container_exist(databaseId, containerId) {
    let exist = false;
    const querySpec = {
        query: "SELECT * FROM root r WHERE r.id = @container",
        parameters: [
            { name: "@container", value: containerId }
        ]
    };
    const response = await client.database(databaseId).containers.query(querySpec).fetchNext();
    if (response.resources[0]) {
        exist = true;
    }
    return exist;
}
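Usage would then be something along these lines (assuming client is the CosmosClient instance from your snippet):

const exist = await check_container_exist(databaseId, containerName);
if (!exist) {
    // create the container and seed the static settings data here
}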
I have an app where I spawn several BrowserWindows containing HTML forms, and I'd like to collect all the data (in order to save it, so the windows can be re-spawned in the same state after a restart) at the press of a button.
At the moment, the only solution I've found is to have each BrowserWindow do ipcRenderer.send every single time any variable changes (not too hard to do with Vue.js watchers), but this seems demanding and inefficient.
I also thought of calling executeJavaScript on each window, but that doesn't let me capture the return value, AFAIK.
I'd just like to be able to send a message from main when a request for saving is made, and wait for the windows to respond before saving all.
EDIT
I found a slightly better way, it looks like this
app.js
const { ipcMain, BrowserWindow } = require('electron');

let updates = {};

// wait for update responses
ipcMain.on('update-response', (evt, args) => {
    updates[evt.sender.id] = args;
    if (Object.keys(updates).length === BrowserWindow.getAllWindows().length) {
        // here I do what I need to save my settings, using what is stored in 'updates'
        // ...
        // and now reset updates for next time
        updates = {};
    }
});

// now send the requests for updates
BrowserWindow.getAllWindows().forEach(w => w.webContents.send('update'));
renderer.js
const { ipcRenderer } = require('electron');

ipcRenderer.on('update', () => {
    // collect the data
    // var data = ...
    ipcRenderer.send('update-response', data);
});
and obviously on the renderer side I am listening to these 'update' messages and sending the data back with 'update-response'.
But it seems a bit complicated and so I am sure there is a simpler way to achieve this using the framework.
EDIT 2
I realized that the above does not always work because, for some reason, the evt.sender.id values do not match the ids obtained from BrowserWindow.getAllWindows() (evt.sender is the WebContents, which has its own id, distinct from the window id). I worked around that by sending ids in the request and having the responder include them. But this is all so much work for so very little...
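For reference, a sketch of an alternative to shipping ids around manually: Electron's BrowserWindow.fromWebContents maps the responding WebContents back to its owning window, so the mismatch between WebContents ids and window ids stops mattering. It would only change the handler in app.js above (untested):

ipcMain.on('update-response', (evt, args) => {
    // evt.sender is a WebContents; look up the BrowserWindow that owns it
    const win = BrowserWindow.fromWebContents(evt.sender);
    updates[win.id] = args;
    if (Object.keys(updates).length === BrowserWindow.getAllWindows().length) {
        // save settings from 'updates', then reset for next time
        updates = {};
    }
});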
I want to make a program with JavaScript or Node.js. What I want to achieve is that when there is a new item in the RSS feed I'm following, it gets logged to the terminal. In the future I will put the code on Firebase hosting, so it needs to be able to run by itself; the log output may later be turned into a text file or stored in a database.
So, like this:
I run the program and get all the items in the RSS feed,
but when there is a new item I shouldn't have to run node app.js again; every time a new item appears in the feed, it should be logged automatically.
So far I've built it with Node.js using rss-parser, and the code I use looks like this:
let Parser = require('rss-parser');
let parser = new Parser();

(async () => {
    let feed = await parser.parseURL('https://rss-checker.blogspot.com/feeds/posts/default?alt=rss');
    feed.items.forEach(items => {
        console.log(items);
    });
})();
There are three common ways to achieve this:
- Polling
- Stream push
- Webhook
Based on your code sample I assume that the RSS feeder is request/response. This lends itself well to polling.
A poll-based program will make a request to a resource on an interval. This interval should be informed by resource limits and the performance expected by the end user. Ideally the API will accept an offset or a page, so you could request all feeds above some ID. This makes the program stateful.
setInterval can be used to drive the polling loop. Below is an example of the poller loop with no state management; it polls at 5-second intervals:
let Parser = require('rss-parser');
let parser = new Parser();

setInterval(async () => {
    let feed = await parser.parseURL('https://rss-checker.blogspot.com/feeds/posts/default?alt=rss');
    feed.items.forEach(items => {
        console.log(items);
    });
}, 5000);
This is incomplete because it needs to keep track of already seen posts. Creating a poll loop means you have a stateful process that needs to stay running.
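To sketch the missing state management, one simple option is a Set of guids (falling back to links) for items that have already been logged; note this is in-memory only and resets when the process restarts:

let Parser = require('rss-parser');
let parser = new Parser();

const seen = new Set(); // ids of items that have already been logged

setInterval(async () => {
    let feed = await parser.parseURL('https://rss-checker.blogspot.com/feeds/posts/default?alt=rss');
    feed.items.forEach(item => {
        const id = item.guid || item.link; // fall back to the link if there is no guid
        if (!seen.has(id)) {
            seen.add(id);
            console.log(item); // only new items reach this point
        }
    });
}, 5000);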
Goal: Front-end of application allows users to select files from their local machines, and send the file names to a server. The server then matches those file names to files located on the server. The server will then return a list of all matching files.
Issue: This works great if a user selects fewer than a few hundred files; otherwise it can cause long response times. I do not want to limit the number of files a user can select, and I don't want to have to worry about the HTTP requests timing out on the front-end.
Sample code so far:
//html on front-end to collect file information
<div>
<input (change)="add_files($event)" type="file" multiple>
</div>
//function called from the front-end, which then calls the profile_service add_files function
//it passes along the $event object
add_files($event){
    this.profile_service.add_files($event).subscribe(
        data => console.log('request returned'),
        err => console.error(err),
        () => {} // update view function
    );
}
//The following two functions are in my profile_service which is dependency injected into my component
//formats the event object for the eventual query
add_files(event_obj){
    let file_arr = [];
    let file_obj = event_obj.target.files;
    for(let key in file_obj){
        if (file_obj.hasOwnProperty(key)){
            file_arr.push(file_obj[key]['name']);
        }
    }
    let query_obj = {files: file_arr};
    return this.save_files(query_obj);
}
//here is where the actual request to the back-end is made
save_files(query_obj){
    let payload = JSON.stringify(query_obj);
    let headers = new Headers();
    headers.append('Content-Type', 'application/json');
    return this.http.post('https://some_url/api/1.0/collection', payload, {headers: headers})
        .map((res: Response) => res.json());
}
Possible Solutions:
1. Process requests in batches. Rewrite the code so that the profile service is only called with 25 files at a time, and upon each response call it again with the next 25 files. If this is the best solution, is there an elegant way to do this with observables? If not, I will use recursive callbacks, which should work fine.
2. Have the endpoint return a generic response immediately, like "file matches being uploaded and saved to your profile". Since all the matching files are persisted to a DB on the back-end, this would work, and then I could have the front-end query the DB every so often to get the current list of matching files. This seems ugly, but I figured I'd throw it out there.
Any other solutions are welcome. Would be great to get a best-practice for handling this type of long-lasting query with angular2/observables in an elegant way.
I would recommend that you break up the number of files you search for into manageable batches and then process more as results are returned, i.e. solution #1. The following is untested, but I think it's a rather elegant way of accomplishing this:
add_files(event_obj){
    let file_arr = [];
    let file_obj = event_obj.target.files;
    for(let key in file_obj){
        if (file_obj.hasOwnProperty(key)){
            file_arr.push(file_obj[key]['name']);
        }
    }

    let self = this;
    let bufferedFiles = Observable.from(file_arr)
        .bufferCount(25); //Nice round number that you could play with

    return bufferedFiles
        //concatMap will make sure that each of your requests is not executed
        //until the previous completes. Then all the data is merged into a single output
        .concatMap((arr) => {
            let payload = JSON.stringify({files: arr});
            let headers = new Headers();
            headers.append('Content-Type', 'application/json');
            //Use defer because http.post is eager;
            //this makes it only execute after subscription
            return Observable.defer(() =>
                self.http.post('https://some_url/api/1.0/collection', payload, {headers: headers}))
                .map((resp: Response) => resp.json());
        });
}
concatMap will keep your server from executing more than whatever the size of your buffer is, by preventing new requests until the previous one has returned. You could also use mergeMap if you wanted them all to be executed in parallel, but it seems the server is the resource limitation in this case if I am not mistaken.
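If you did want some parallelism while still keeping a cap, mergeMap in RxJS 5 accepts a concurrency limit as its second argument; the only change to the sketch above would be something like this (the limit of 2 is arbitrary):

return bufferedFiles
    .mergeMap((arr) => {
        let payload = JSON.stringify({files: arr});
        let headers = new Headers();
        headers.append('Content-Type', 'application/json');
        return Observable.defer(() =>
            self.http.post('https://some_url/api/1.0/collection', payload, {headers: headers}))
            .map((resp: Response) => resp.json());
    }, 2); // at most 2 batches in flight at any time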
I'd suggest using websocket connections instead, because they don't time out.
See also
- https://www.npmjs.com/package/angular2-websocket
- http://mmrath.com/post/websockets-with-angular2-and-spring-boot/
- http://www.html5rocks.com/de/tutorials/websockets/basics/
An alternative approach would be polling, where the client makes repeated requests in a defined interval to get the current processing state from the server.
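A bare-bones sketch of that polling idea with RxJS (the status endpoint, its response shape, and the 5-second interval are all assumptions for illustration):

// Poll a hypothetical status endpoint until the server reports the matching is done.
pollStatus(jobId: string) {
    return Observable.interval(5000)
        .switchMap(() => this.http.get(`https://some_url/api/1.0/status/${jobId}`))
        .map((res: Response) => res.json())
        .filter(status => status.done) // assumes the server returns something like { done, matches }
        .take(1); // emit the final status once, then complete
}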
To send multiple requests and wait for all of them to complete:
getAll(urls: any[]): Observable<any[]> {
    let observables = [];
    for(var i = 0; i < urls.length; i++) {
        observables.push(this.http.get(urls[i]));
    }
    return Observable.forkJoin(observables);
}
someMethod(server: string) {
    let urls = [
        `${server}/fileService?somedata=a`,
        `${server}/fileService?somedata=b`,
        `${server}/fileService?somedata=c`];
    this.getAll(urls).subscribe(
        (value) => processValue(value),
        (err) => processError(err),
        () => onDone());
}
I have the following query:
fire = new Firebase 'ME.firebaseio.com'
users = fire.child 'venues/ID/users'
users.once 'value', (snapshot) ->
  # do things with snapshot.val()
  ...
I am loading 10+ mb of data, and the request takes around 1sec/mb. Is it possible to give the user a progress indicator as content streams in? Ideally I'd like to process the data as it comes in as well (not just notify).
I tried using the on "child_added" event instead, but it doesn't work as expected - instead of children streaming in at a consistent rate, they all come at once after the entire dataset is loaded (which takes 10-15 sec), so in practice it seems to be a less performant version of on "value".
You should be able to optimize your download time from 10-20secs to a few milliseconds by starting with some denormalization.
For example, we could move the images and any other peripherals comprising the majority of the payload to their own path, keep only the meta data (name, email, etc) in the user records, and grab the extras separately:
/users/user_id/name, email, etc...
/images/user_id/...
The number of event listeners you attach, or paths you connect to, doesn't add any significant overhead locally or for network bandwidth (only the payload matters), so you can do something like this to "normalize" after grabbing the meta data:
var firebaseRef = new Firebase(URL);
firebaseRef.child('users').on('child_added', function(snap) {
    console.log('got user ', snap.name());

    // I chose once() here to snag the image, assuming they don't change much
    // but on() would work just as well
    firebaseRef.child('images/' + snap.name()).once('value', function(imageSnap) {
        console.log('got image for user ', imageSnap.name());
    });
});
You'll notice right away that when you move the bulk of the data out and keep only the meta info for users locally, they will be lightning-fast to grab (all of the "got user" logs will appear right away). Then the images will trickle in one at a time after this, allowing you to create progress bars or process them as they show up.
If you aren't willing to denormalize the data, there are a couple ways you could break up the loading process. Here's a simple pagination approach to grab the users in segments:
var firebaseRef = new Firebase(URL);
grabNextTen(firebaseRef, null);

function grabNextTen(ref, startAt) {
    ref.limit(startAt ? 11 : 10).startAt(startAt).once('value', function(snap) {
        var lastEntry;
        snap.forEach(function(userSnap) {
            // skip the startAt() entry, which we've already processed
            if (userSnap.name() === startAt) { return; }
            processUser(userSnap);
            lastEntry = userSnap.name();
        });
        // setTimeout closes the call stack, allowing us to recurse
        // infinitely without a maximum call stack error
        setTimeout(grabNextTen.bind(null, ref, lastEntry));
    });
}

function processUser(snap) {
    console.log('got user', snap.name());
}

function didTenUsers(lastEntry) {
    console.log('finished up to ', lastEntry);
}
A third popular approach would be to store the images in a static cloud asset like Amazon S3 and simply store the URLs in Firebase. For large data sets in the hundreds of thousands this is very economical, since those solutions are a bit cheaper than Firebase storage.
But I'd highly suggest you both read the article on denormalization and invest in that approach first.