This is the function that I have:
let counter = 0;
let dbConnected = false;

async function notASingleton(params) {
  if (!dbConnected) {
    await new Promise(resolve => {
      if (Math.random() > 0.75) throw new Error();
      setTimeout((params) => {
        dbConnected = true; // assume we use params to connect to DB
        resolve();
      }, 1000);
    });
    return counter++;
  }
}
// in another module that imports notASingleton
Promise.all([notASingleton(params), notASingleton(params), notASingleton(params), notASingleton(params)]);
or
// in another module that imports notASingleton
notASingleton(params);
notASingleton(params);
notASingleton(params);
notASingleton(params);
The problem is that apparently the notASingleton promises might be executed concurrently, and assuming they run in parallel, the execution context for all of them will be dbConnected = false.
Note: I'm aware that we could introduce a new variable, e.g. initiatingDbConnection, and check for !initiatingDbConnection instead of !dbConnected; however, as long as "concurrently" means that the promises inside Promise.all share the same context, that will not change anything.
The pattern can be properly implemented in e.g. Java by utilizing the JVM's contracts for class initialization: https://stackoverflow.com/a/16106598/12144949
However, even that Java implementation cannot be used for my use case where I need to pass a variable: "The client application can’t pass any argument, so we can’t reuse it. For example, having a generic singleton class for database connection where client application supplies database server properties."
https://www.journaldev.com/171/thread-safety-in-java-singleton-classes-with-example-code
Note 2: Another possibly related issue: https://eslint.org/docs/rules/require-atomic-updates#rule-details
"On some computers, they may be executed in parallel, or in some sense concurrently, while on others they may be executed serially."
That MDN description is rubbish. I'll remove it. Promise.all is not responsible for executing anything, and promises cannot be "executed" anyway. All it does is wait for its arguments, and you're the one creating those promises in the array. They would run (and concurrently open multiple connections) even if you omitted Promise.all and simply called notASingleton() multiple times.
the execution context for all of them will be dbConnected = false
Yes, but only because your dbConnected = true; is in a timeout and you are calling notASingleton() again before that happens. Not because Promise.all does anything special.
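A common fix here, not spelled out above, is to memoize the in-flight promise rather than a boolean flag, so every caller shares the same connection attempt. A minimal sketch, assuming a hypothetical connectToDb(params) that actually opens the connection:

let counter = 0;
let connectionPromise = null;

function getConnection(params) {
  // The first caller creates the promise; every later caller reuses it,
  // even while the connection is still being established. There is no
  // race window because this assignment happens synchronously, before
  // control ever returns to the event loop.
  if (!connectionPromise) {
    connectionPromise = connectToDb(params); // hypothetical connector
  }
  return connectionPromise;
}

async function notASingleton(params) {
  await getConnection(params); // params from later calls are ignored
  return counter++;
}

Note the same caveat as the Java holder idiom: only the first caller's params are actually used.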
Related
Is using async and await the crude person's threads?
Many moons ago I learned how to do multithreaded Java code on Android. I recall I had to create threads, start threads, etc.
Now I'm learning Javascript, and I just learned about async and await.
For example:
async function isThisLikeTwoThreads() {
  const a = slowFunction();
  const b = fastFunction();
  console.log(await a, await b);
}
This looks way simpler than what I used to do and is a lot more intuitive.
slowFunction() would start first, then fastFunction() would start, and console.log() would wait until both functions resolved before logging, with slowFunction() and fastFunction() potentially running at the same time. I expect it's ultimately up to the browser whether or not these are separate threads. But it looks like it walks and talks like crude multithreading. Is it?
Is using async and await the crude person's threads?
No, not at all. It's just syntax sugar (really, really useful sugar) over using promises, which in turn is just a (really, really useful) formalized way to use callbacks. It's useful because you can wait asynchronously (without blocking the JavaScript main thread) for things that are, by nature, asynchronous (like HTTP requests).
If you need to use threads, use web workers, Node.js worker threads, or whatever multi-threading your environment provides. Per specification (nowadays), only a single thread at a time is allowed to work within a given JavaScript "realm" (very loosely: the global environment your code is running in and its associated objects, etc.) and so only a single thread at a time has access to the variables and such within that realm, but threads can cooperate via messaging (including transferring objects between them without making copies) and shared memory.
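For instance, a minimal sketch of cooperating threads in a browser, assuming a separate worker.js file:

// main.js
const worker = new Worker("worker.js");
worker.onmessage = (e) => console.log("result from worker:", e.data);
worker.postMessage({ n: 42 });

// worker.js (runs on its own thread, in its own realm)
onmessage = (e) => {
  // Heavy synchronous work here does not block the main thread.
  postMessage(e.data.n * 2);
};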
For example:
async function isThisLikeTwoThreads() {
  const a = slowFunction();
  const b = fastFunction();
  console.log(await a, await b);
}
Here's what that code does when isThisLikeTwoThreads is called:
1. slowFunction is called synchronously and its return value is assigned to a.
2. fastFunction is called synchronously and its return value is assigned to b.
3. When isThisLikeTwoThreads reaches await a, it wraps a in a promise (as though you did Promise.resolve(a)) and returns a new promise (not that same one). Let's call the promise wrapped around a "aPromise" and the promise returned by the function "functionPromise".
4. Later, when aPromise settles, if it was rejected, functionPromise is rejected with the same rejection reason and the following steps are skipped; if it was fulfilled, the next step is done.
5. The code in isThisLikeTwoThreads continues by wrapping b in a promise (bPromise) and waiting for that to settle.
6. When bPromise settles, if it was rejected, functionPromise is rejected with the same rejection reason; if it was fulfilled, the code in isThisLikeTwoThreads continues by logging the fulfillment values of aPromise and bPromise and then fulfilling functionPromise with the value undefined.
All of the work above was done on the JavaScript thread where the call to isThisLikeTwoThreads was done, but it was spread out across multiple "jobs" (JavaScript terminology; the HTML spec calls them "tasks" and specifies a fair bit of detail for how they're handled on browsers). If slowFunction or fastFunction started an asynchronous process and returned a promise for that, that asynchronous process (for instance, an HTTP call the browser does) may have continued in parallel with the JavaScript thread while the JavaScript thread was doing other stuff or (if it was also JavaScript code on the main thread) may have competed for other work on the JavaScript thread (competed by adding jobs to the job queue and the thread processing them in a loop).
But using promises doesn't add threading. :-)
I suggest you read this to understand that the answer is no: https://developer.mozilla.org/en-US/docs/Web/JavaScript/EventLoop
To summarize, the runtime uses multiple threads for internal operations (network, disk... for a Node.js environment, rendering, indexedDB, network... for a browser environment) but the JavaScript code you wrote and the one you imported from different libraries will always execute in a single thread. Asynchronous operations will trigger callbacks which will be queued and executed one by one.
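A small demonstration of that single-threaded, queued execution:

console.log("start");
setTimeout(() => console.log("timeout callback"), 0);
Promise.resolve().then(() => console.log("promise callback"));
console.log("end");
// Logs: start, end, promise callback, timeout callback.
// The synchronous code runs to completion first; then the queued
// callbacks run one by one (promise jobs drain before timer tasks).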
Basically what happens when executing this function:
async function isThisLikeTwoThreads() {
  const a = slowFunction();
  const b = fastFunction();
  console.log(await a, await b);
}
1. Execute slowFunction.
2. Execute fastFunction.
3. Enqueue the rest of the code (console.log(await a, await b)) once the promises a and b have resolved. The console.log runs in the same thread after isThisLikeTwoThreads has returned, and after any previously enqueued callbacks have run. Assuming slowFunction and fastFunction both return promises, this is an equivalent of:
function isThisLikeTwoThreads() {
  const a = slowFunction();
  const b = fastFunction();
  a.then(aResult => b.then(bResult => console.log(aResult, bResult)));
}
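For what it's worth, the same thing can be written more flatly with Promise.all, though the rejection behavior differs slightly (Promise.all fails fast if either promise rejects):

function isThisLikeTwoThreads() {
  const a = slowFunction();
  const b = fastFunction();
  return Promise.all([a, b]).then(
    ([aResult, bResult]) => console.log(aResult, bResult)
  );
}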
Similar but not the same. JavaScript will only ever be doing 'one' thing at a time. It is fundamentally single-threaded. That said, it can appear odd, as results may arrive in different orders, something the async and await keywords can exacerbate.
For example, even if you're rendering a background process at 70fps, there are small gaps in the rendering logic available to complete promises or receive event notifications; it's during these moments that promises are completed, giving the illusion of multi-threading.
If you REALLY want to lock up a browser try this:
let a = 1;
while (a == 1) {
  console.log("doing a thing");
}
You will never get JavaScript to work again and Chrome or whatever will murder your script. The reason is that when it enters that loop, nothing will touch it again: no events will be processed and no promises will be delivered. Hence, single threading.
If this was a true multithreaded environment you could break that loop from the outside by changing the variable to a new value.
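As an aside, with real threads you can do exactly that. A rough sketch using a worker plus shared memory (note that browsers only expose SharedArrayBuffer on cross-origin-isolated pages, and worker.js is a hypothetical file):

// main.js
const sab = new SharedArrayBuffer(4);
const flag = new Int32Array(sab);
Atomics.store(flag, 0, 1);
const worker = new Worker("worker.js");
worker.onmessage = (e) => console.log(e.data);
worker.postMessage(sab);
setTimeout(() => Atomics.store(flag, 0, 0), 1000); // break the loop from outside

// worker.js
onmessage = (e) => {
  const flag = new Int32Array(e.data);
  while (Atomics.load(flag, 0) === 1) {
    // spinning on the worker thread; the main thread stays responsive
  }
  postMessage("loop broken from the outside");
};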
Using nodejs 8.12 on Gnu/Linux CentOS 7. Using the built-in web server, require('https'), for a simple application.
I understand that nodejs is single threaded (single process) and there is no actual parallel execution of code. Based on my understanding, I think the http/https server will process one http request, run the handler through all synchronous statements, and set up asynchronous statements to be executed later, before it returns to process a subsequent request. However, with the http/https libraries, you have asynchronous code that is used to assemble the body of the request. So we already have one callback which is executed when the body is ready (the 'end' event). This fact makes me think it might be possible to be in the middle of processing two or more requests simultaneously.
As part of handling the requests, I need to execute a string of shell commands and I use the shelljs.exec library to do that. It runs synchronously, waiting until complete before returning. So, example code would look like:
const shelljs_exec = require('shelljs.exec');

function process() {
  // bunch of shell commands in string
  var command_str = 'command1; command2; command3';
  var exec_results = shelljs_exec(command_str);
  console.log('just executed shelljs_exec command');
  var proc_results = process_results(exec_results);
  console.log(proc_results);
  // and return the response...
}
So node.js runs the shelljs_exec() and waits for completion. While it's waiting, can another request be worked on, such that there is a slight risk of two or more shelljs.exec invocations running at the same time? Since that could be a problem, I need to ensure only one shelljs.exec statement can be in progress at a given time.
If that is not a correct understanding, then I was thinking I need to do something with mutex locks. Like this:
const shelljs_exec = require('shelljs.exec');
const locks = require('locks');

// Need this in global scope - so we are all dealing with the same one.
var mutex = locks.createMutex();

function await_lock(shell_commands) {
  var commands = shell_commands;
  return new Promise(getting_lock => {
    mutex.lock(got_lock_and_execute);
  });

  function got_lock_and_execute() {
    var exec_results = shelljs_exec(commands);
    console.log('just executed shelljs_exec command');
    mutex.unlock();
    return exec_results;
  }
}

async function process() {
  // bunch of shell commands in string
  var command_str = 'command1; command2; command3';
  var exec_results = await await_lock(command_str);
  var proc_results = process_results(exec_results);
  console.log(proc_results);
}
If shelljs_exec is synchronous, there is no need for a mutex: nothing else can run on the JavaScript thread while it is executing.
If it is not synchronous and it takes a callback, wrap it in a Promise constructor so that it can be awaited. I would suggest properly wrapping the mutex.lock in a promise that gets resolved when the lock is acquired. The try/finally is needed to ensure that the mutex is unlocked if shelljs_exec throws an exception.
async function await_lock(shell_commands) {
  await new Promise(function(resolve, reject) {
    mutex.lock(resolve);
  });
  try {
    let exec_results = await shelljs_exec(shell_commands);
    return exec_results;
  } finally {
    mutex.unlock();
  }
}
Untested. But it looks like it should work.
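As an aside, if you would rather not depend on the locks package, a promise chain can serve as a simple mutex. A minimal sketch:

let chain = Promise.resolve();

function withLock(fn) {
  // Each caller queues behind the previous one; the "lock" is released
  // when the previous caller's promise settles (even on failure).
  const result = chain.then(() => fn());
  chain = result.catch(() => {});
  return result;
}

// usage, inside an async function:
// const exec_results = await withLock(() => shelljs_exec(command_str));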
I am attempting to make a simple text game that operates in a socket.io chat room on a node server. The program works as follows:
Currently I have three main modules
Rogue : basic home of rogue game functions
rogueParser : module responsible for extracting workable commands from command strings
Verb_library: module containing a list of commands that can be invoked from the client terminal.
The client types a command like 'say hello world'. This triggers the following socket.io listener:
socket.on('rg_command', function(command){
  // execute the verb
  let verb = rogueParser(command);
  rogue.executeVerb(verb, command, function(result){
    console.log(result);
  });
});
Which in turn invokes the executeVerb function from rogue:
executeVerb: function(verb, data, callback){
  verb_library[verb](data, callback);
},
Each verb in verb_library should be responsible for manipulating the database -if required- and then returning an echo string sent to the appropriate targets representing the completion of the action.
EDIT: I chose 'say' when I posted this but it was pointed out afterward that it was a poor example. 'say' is not currently async but eventually will be as will be the vast majority of 'verbs' as they will need to make calls to the database.
...
say: function(data, callback){
  var response = {};
  console.log('USR:'+data.user);
  var message = data.message.replace('say','');
  message = ('you say '+'"'+message.trim()+'"');
  response.target = data.user;
  response.type = 'echo';
  response.message = message;
  callback(response);
},
...
My problem is that
1) I am having issues passing callbacks through so many modules. Should I be able to pass a callback through multiple layers of modules? I'm worried that I'm blind to some scope magic that is making me lose track of what should happen when I pass a callback function into a module, which then passes the same callback to another module, which then calls the callback. Currently it seems I either end up without access to the callback at the end, or the first function tries to execute without waiting on the final callback, returning a null value.
2) I'm not sure if I'm making this harder than it needs to be by not using promises, or if this is totally achievable with callbacks, in which case I want to learn how to do it that way before I summon extra code.
Sorry if this is a vague question, I'm in a position of design pattern doubt and looking for advice on this general setup as well as specific information regarding how these callbacks should be passed around. Thanks!
1) Passing a callback through multiple layers doesn't sound like a good idea. Usually I'm asking myself: what will happen if I continue doing this for a year? Will it be flexible enough so that when I need to change the architecture (let's say the customer has a new idea), my code will let me do it without rewriting the whole app? What you're experiencing is called callback hell. http://callbackhell.com/
What we are trying to do, is to keep our code as shallow as possible.
2) Promises are just syntax sugar for callbacks. But it's much easier to think in Promises than in callbacks. So personally, I would advise you to take your time and grasp as much of the programming language's features as you can during your project. The latest way we write asynchronous code is the async/await syntax, which lets us get rid of explicit callback and Promise calls altogether. But along the way, you will have to work with both for sure.
You can try to finish your code this way and, when you're done, look at what was the biggest pain and how you could write it again to avoid that in the future. I promise you that it will be much more educational than getting an explicit answer here :)
Asynchronousness and JavaScript go back a long way. How we deal with it has evolved over time and there are numerous applied patterns that attempt to make async easier. I would say that there are 3 concrete and popular patterns. However, each one is very related to the other:
Callbacks
Promises
async/await
Callbacks are probably the most backwards compatible and just involve providing a function to some asynchronous task in order to have your provided function be called whenever the task is complete.
For example:
/**
 * Some dummy asynchronous task that waits 2 seconds to complete
 */
function asynchronousTask(cb) {
  setTimeout(() => {
    console.log("Async task is done");
    cb();
  }, 2000);
}

asynchronousTask(() => {
  console.log("My function to be called after async task");
});
Promises are a primitive that encapsulates the callback pattern so that instead of providing a function to the task function, you call the then method on the Promise that the task returns:
/**
 * Some dummy asynchronous task that waits 2 seconds to complete
 * BUT the difference is that it returns a Promise
 */
function asynchronousTask() {
  return new Promise(resolve => {
    setTimeout(() => {
      console.log("Async task is done");
      resolve();
    }, 2000);
  });
}

asynchronousTask()
  .then(() => {
    console.log("My function to be called after async task");
  });
The last pattern is the async/await pattern which also deals in Promises which are an encapsulation of callbacks. They are unique because they provide lexical support for using Promises so that you don't have to use .then() directly and also don't have to explicitly return a Promise from your task:
/*
 * We still need some Promise oriented bootstrap
 * function to demonstrate the async/await;
 * this will just wait a duration and resolve
 */
function $timeout(duration) {
  return new Promise(resolve => setTimeout(resolve, duration));
}

/**
 * Task runner that waits 2 seconds and then prints a message
 */
(async function() {
  await $timeout(2000);
  console.log("My function to be called after async task");
}());
Now that our vocabulary is cleared up, we need to consider one other thing: These patterns are all API dependent. The library that you are using uses callbacks. It is alright to mix these patterns, but I would say that the code that you write should be consistent. Pick one of the patterns and wrap or interface with the library that you need to.
If the library deals in callbacks, see if there is a wrapping library or a mechanism to have it deal in Promises instead. async/await consumes Promises, but not callbacks.
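In Node.js, util.promisify does exactly that kind of wrapping for callback APIs that follow the (err, result) convention. For example:

const { promisify } = require("util");
const fs = require("fs");

const readFile = promisify(fs.readFile);

(async () => {
  // readFile now returns a promise instead of taking a callback.
  const text = await readFile("config.json", "utf8");
  console.log(text);
})();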
Callbacks are fine, but I would only use them if a function is dependent on some asynchronous result. If however the result is immediately available, then the function should be designed to return that value.
In the example you have given, say does not have to wait for any asynchronous API call to come back with a result, so I would change its signature to the following:
say: function(data){ // <--- no callback argument
  var response = {};
  console.log('USR:'+data.user);
  var message = data.message.replace('say','');
  message = ('you say '+'"'+message.trim()+'"');
  response.target = data.user;
  response.type = 'echo';
  response.message = message;
  return response; // <--- return it
}
Then going backwards, you would also change the signature of the functions that use say:
executeVerb: function(verb, data){ // <--- no callback argument
  return verb_library[verb](data); // <--- no callback argument, and return the returned value
}
And further up the call stack:
socket.on('rg_command', function(command){
  // execute the verb
  let verb = rogueParser(command);
  let result = rogue.executeVerb(verb, command); // <--- no callback, just get the returned value
  console.log(result);
});
Of course, this can only work if all verb methods can return the expected result synchronously.
Promises
If say would depend on some asynchronous API, then you could use promises. Let's assume this API provides a callback system, then your say function could return a promise like this:
say: async function(data){ // <--- still no callback argument, but async!
  var response = {};
  console.log('USR:'+data.user);
  var message = data.message.replace('say','');
  response.target = data.user;
  response.type = 'echo';
  // Convert the API callback system to a promise, and use await
  response.message = await new Promise(resolve => someAsyncAPIWithCallBackAsLastArg(message, resolve));
  return response; // <--- return it
}
Again going backwards, you would also change the signature of the functions that use say:
executeVerb: function(verb, data){ // <--- still no callback argument
  return verb_library[verb](data); // <--- no callback argument, and return the returned promise(!)
}
And finally:
socket.on('rg_command', async function(command){ // <--- add async
  // execute the verb
  let verb = rogueParser(command);
  let result = await rogue.executeVerb(verb, command); // <--- await the fulfillment of the returned promise
  console.log(result);
});
Hi, my question above is a little vague, so I'll try to make it clearer.
I have code of the form:
function main() {
  async function recursive() {
    var a = "Hello World";
    var b = "Goodbye World";
    recursive();
  }
  recursive();
}
I am running into an issue where I am running out of heap memory.
Assuming what I showed above is how my program behaves, with a and b being declared within the recursive function, my question is whether the variables are destroyed upon the call to recursive within the recursive function, or whether they persist until there are no more recursive calls and the main function reaches its endpoint (assuming I keep the main function running long enough for that to happen).
I'm worried they stay alive in the heap, and since my real program stores large strings in those variables, I'm worried that that is the reason I'm running out of heap.
JavaScript does not (yet?) have tail call optimization for recursion, so for sure you'll eventually fill up your call stack. The reason your heap is filling up is because recursive is defined as an async function and never waits for the allocated Promise objects to resolve. This will fill up your heap because the garbage collector doesn't have the chance to collect them.
In short, recursion doesn't use up the heap unless you're allocating things in the heap within your functions.
Expansion of what's happening:
main
  new Promise(() => {
    new Promise(() => {
      new Promise(() => {
        new Promise(() => {
          new Promise(() => { // etc.
You say you're not awaiting Promises because you want things to run in parallel. This is a dangerous approach for a few reasons. Promises should always be awaited in case they throw, and of course you want to make sure that you're allocating predictable amounts of memory usage. If this recursive function means you're getting an unpredictable number of needed tasks, you should consider a task queue pattern:
const tasks = [];

async function recursive() {
  // ...
  tasks.push(workItem);
  // ...
}

async function processTasks() {
  while (tasks.length) {
    const workItem = tasks.shift(); // take the next item from the front of the queue
    // Process work item. May provoke calls to `recursive`
    // ...do awaits here for the async parts
  }
}

async function processWork(maxParallelism) {
  const workers = [];
  for (let i = 0; i < maxParallelism; i++) {
    workers.push(processTasks());
  }
  await Promise.all(workers);
}
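A hypothetical usage of that sketch, seeding the queue and then draining it with at most four cooperative consumers (all still on the one JavaScript thread):

(async () => {
  tasks.push({ kind: "start" }); // the work item shape is up to you
  await processWork(4);
})();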
Finally, in case it needs to be said, async functions don't let you do parallel processing; they just provide a way to more conveniently write Promises and do sequential and/or parallel waits for async events. JavaScript remains a single-threaded execution engine unless you use things like web workers. So using async functions to try to parallelize synchronous tasks won't do anything for you.
In my classes, I've been learning indexeddb and how to handle asynchronicity. In the interest of stretching myself and learning more, I've been trying to use the functional paradigm in my code.
I'd been using cursors before but I've realized that my code isn't completely immutable or stateless, which bothers me. I was wondering if there was a way to use cursors without having to resort to pushing elements to an array.
Currently, I'm using something like:
async function getTable(){
  return new Promise(function(resolve, reject){
    const db = await connect();
    const transaction = await db.transaction(["objectStore"], "readonly");
    const store = await transaction.objectStore("objectStore");
    var myArray = [];
    store.openCursor().onsuccess = function(evt) {
      var cursor = evt.target.result;
      if (cursor) {
        myArray.push(cursor.value);
        // I don't want to use push, because it's impure. See link:
        cursor.continue();
      } else {
        resolve(myArray);
      }
    }
  }
}
//link: https://en.wikipedia.org/wiki/Purely_functional_programming
And it works fine. But it's not pure, and it uses push. I'd like to learn another way to do it, if there is one.
Thank you!
You could do several things in the spirit of functional programming, but it is probably not worth it in JavaScript.
For example, to implement immutable arrays, at least in spirit, you simply create and return a new array every time you want to add an element. I think, if I recall my Scheme correctly, the function was called cons.
function push(array, newValue) {
  const copy = copyArray(array);
  copy.push(newValue);
  return copy;
}

function copyArray(array) {
  const copy = [];
  for (const oldValue of array) {
    copy.push(oldValue);
  }
  return copy;
}

// Fancy spread operator syntax implementation if you are so inclined
function copyArray2(inputArray) {
  return [...inputArray];
}
Now instead of mutating the input array, you are creating a modified copy of it. Keep in mind this has absolutely horrific performance, and you probably never ever ever want to do this in a real app.
You could take this further and use some stack-based approach. Again, this is hilariously bad, but it would be something that basically creates a push function that returns a function; the stack increases in size as you append items, and when you unwind it, it unwinds into an array of values.
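For what it's worth, here is a sketch of that spirit applied to the cursor: each step rebinds the onsuccess handler with a fresh accumulator array, so no array is ever mutated (the handler slot still is, which is why this is mostly a curiosity):

function collect(store) {
  return new Promise((resolve, reject) => {
    const request = store.openCursor();
    const step = (acc) => (evt) => {
      const cursor = evt.target.result;
      if (cursor) {
        // Each step builds a brand-new array; nothing is pushed in place.
        request.onsuccess = step([...acc, cursor.value]);
        cursor.continue();
      } else {
        resolve(acc);
      }
    };
    request.onsuccess = step([]);
    request.onerror = () => reject(request.error);
  });
}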
My second point is that you can avoid this array building entirely by using newer, less-well-documented features of indexedDB. Specifically, IDBObjectStore.prototype.getAll. This function would create the array for you, opaquely, and provided that it is opaque, you will never know about any FP anti-patterns hidden within its abstraction, and therefore are not breaking the rules.
function getTable(){
  return connect().then(function(db){
    const transaction = db.transaction(["objectStore"], "readonly");
    const store = transaction.objectStore("objectStore");
    return new Promise(function(resolve, reject){
      const request = store.getAll();
      request.onsuccess = () => resolve(request.result);
      request.onerror = () => reject(request.error);
    });
  });
}
My third point is simple, use db.transaction("objectStore", "readonly"); instead of db.transaction(["objectStore"], "readonly");. The array parameter is optional, and it is preferable to keep it simple.
My fourth point is simple, use db.transaction("objectStore"); instead of db.transaction(["objectStore"], "readonly");. "readonly" is the default mode of a transaction, so there is no need to specify it. The purpose of your code is sufficiently clearly communicated by not specifying the parameter, and omitting the parameter is less verbose.
My fifth point is your use of the async specifier in the function definition. You don't need it here. You have a synchronous function returning an instance of a Promise. If anything, specifying async leads to increased confusion about what your code is doing. Instead, you probably want to be using the await qualifier when calling the function.
My sixth point is that you are violating some FP principles in your call to connect(). What is connect connecting to? Implied global state. This is quite the violation of the spirit of functional programming. Therefore, your connect parameters must be parameters to the function itself, so that you do not rely on the implied knowledge of which database is used.
My seventh point is that you are using a database. Functional programmers have so many problems with databases, or any I/O or interaction with the outside world, that they seem to like to pretend there is no such thing. Therefore, you probably shouldn't be using a database at all if you want to use a functional approach.
My eighth point is that connecting within the promise (calling and awaiting connect) is definitely an anti-pattern. The goal is to chain promises, so that one starts after the other. Either the caller has to call connect and then call getTable, or getTable has to call connect and then do the rest of the promise work.
My ninth point is that I am not even sure how this executes. The executor function that you pass to the Promise constructor is not qualified as async, so you are using the await modifier in a non-qualified function. This should be throwing an error. Technically, the exception is swallowed by the promise, meaning that this promise should always be rejecting.
My tenth point is your use of async everywhere. I have no idea what is going on, unless your connect function is returning some kind of wrapper library, but calls to IDBDatabase.prototype.transaction and IDBTransaction.prototype.objectStore are synchronous. It makes no sense to await them. They do not return promises.
My eleventh point is that you are not watching for errors. request.onsuccess is not called back when there is an error. This could lead to your promise never settling. You need to also consider the failure case.
My twelfth point is that you seem to be missing a closing parenthesis for your onsuccess handler function. I am not sure how this code is ever interpreted successfully.
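For completeness, pulling those points together, a corrected cursor-based sketch might look like this (still assuming a connect helper, now taking the database name as a parameter per my sixth point; push is back, per my first point that strict immutability is not worth it in JavaScript):

function getTable(dbName) {
  return connect(dbName).then((db) => new Promise((resolve, reject) => {
    const store = db.transaction("objectStore").objectStore("objectStore");
    const request = store.openCursor();
    const values = [];
    request.onsuccess = (evt) => {
      const cursor = evt.target.result;
      if (cursor) {
        values.push(cursor.value);
        cursor.continue();
      } else {
        resolve(values);
      }
    };
    request.onerror = () => reject(request.error);
  }));
}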