I made a chatbot which talks to an external API and calls another intent using the data provided by the API. The API provides the followup event's name. I have to make use of the inline editor only and not the webhook for certain reasons.
function setFollow(agent) {
agent.add("hello");
axios.post(url, body, {headers})
.then (({ data }) => {
agent.add("setting followup event");
agent.setFollowupEvent({ "name": data.followupEventInput_name, "parameters" : { "name": data.name}});
})
}
I am successfully calling the API and getting an appropriate response from it (the event name and the parameter value), but the event, and thus the next intent, is still not being triggered. I even tried using hardcoded values to call the event, but it did not work. Execution does reach the then callback (checked with console logs), yet the event is not being set, even with hardcoded values.
What is the reason behind this and how can I solve it?
I have already checked the contexts: there are 2 active contexts (A and B) and the intent which should be called based on the event has 1 input context (A). Right now, no intent is being triggered.
It was a JS issue. I was not returning the promise.
return axios.post(url, body, {headers}).then(...)
solved the issue.
This was happening because JavaScript does not wait for asynchronous work to finish before moving on: the handler returned while the axios call was still in flight. Returning the promise lets the caller (here, the Dialogflow fulfillment library) wait for it to resolve or reject before sending the response.
Without the return statement, the handler had already completed before the code inside then (the second agent.add() and setFollowupEvent()) ever ran, and hence the Dialogflow bot was not working as expected.
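For reference, a minimal sketch of the corrected handler, reusing the url, body, and headers from the snippet above:

function setFollow(agent) {
  agent.add("hello");
  // Returning the promise chain tells the fulfillment library to wait
  // for the API call to finish before it sends the response.
  return axios.post(url, body, { headers })
    .then(({ data }) => {
      agent.add("setting followup event");
      agent.setFollowupEvent({
        name: data.followupEventInput_name,
        parameters: { name: data.name }
      });
    });
}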
Related
I have a cloud function in Firebase that, among a chain of promise invocations, ends with a call to this function:
function sendEmail() {
return new Promise((accept) => {
const Email = require('email-templates');
const email = new Email({...});
email.send({...}).then(() => {
console.log('Email sent');
}).catch((e) => {
console.error(e);
});
accept();
});
}
I am well aware that email.send() returns a promise. There is a problem with that approach, however: if I were to change the function to be:
function sendEmail() {
const Email = require('email-templates');
const email = new Email({...});
return email.send({...});
}
It usually results in the UI hanging for a significant amount of time (10+ seconds), because the time it takes for the promise to resolve equals the time it takes for the email to send.
That's why I figured the first approach would be better: just fire off email.send() without waiting for it, let it send the email eventually, and return a response to the client whether or not the email has finished its round trip.
The first approach is giving me problems, though. The cloud function finishes execution much faster, which makes for a better user experience; however, the email doesn't send for another 15+ minutes.
I am considering another approach where we have a separate cloud function hook that handles the email sending, but I wanted to ask StackOverflow first.
I think there are two aspects being mixed here.
One side of the question deals with promises in the context of Cloud Functions. Promises in Cloud Functions need to be resolved before you call res.send(), because right after this call the function will be shut down and there's no guarantee that unresolved promises will complete before the function instance is terminated; see this question. Alternatively (in background-triggered functions) you never call res.send() at all and instead return a promise, as shown in the Firebase documentation; the key here is to make sure the promise is actually returned and resolved, for example with an idiom like return myPromise().then(console.log);, which forces the function to wait for the promise to settle.
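As a rough sketch, assuming an HTTPS-triggered function and the sendEmail() from the question (the export name here is illustrative), waiting for the email before responding would look like this:

const functions = require('firebase-functions');

exports.sendEmailHttp = functions.https.onRequest((req, res) => {
  // Resolve all pending work before calling res.send(); once the response
  // is sent, the instance may be shut down at any time.
  return sendEmail()
    .then(() => res.send({ status: 'ok' }))
    .catch((err) => {
      console.error(err);
      res.status(500).send({ status: 'error' });
    });
});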
Separately, as Bergi pointed out in the comments, the first snippet uses a promise anti-pattern (wrapping an existing promise in a new Promise), while the second one is far more concise and clear. If you're experiencing a delay in the UI, it's likely that the UI is blocked waiting for the Function's response, and you might consider whether that wait can be avoided in your particular use case.
All that said, your last idea of creating a separate function to deal with the email sending would also likely reduce the response time and could even make more sense from a separation-of-concerns point of view. To go this route I would suggest publishing a Pub/Sub message from the main function so that a second, Pub/Sub-triggered function sends the email. Moreover, Pub/Sub-triggered functions allow you to configure retry policies, which may be useful to ensure the mail is eventually sent even in the face of transient errors. This approach is also suggested in the question linked above.
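A minimal sketch of that split, assuming a recent @google-cloud/pubsub client, a topic named send-email, and the same email-templates setup as in the question (all names are illustrative):

const functions = require('firebase-functions');
const { PubSub } = require('@google-cloud/pubsub');
const pubsub = new PubSub();

// Main function: enqueue the work and respond right away.
exports.requestEmail = functions.https.onRequest((req, res) => {
  return pubsub
    .topic('send-email')                              // assumed topic name
    .publishMessage({ json: { to: req.body.to } })    // payload shape is up to you
    .then(() => res.send({ queued: true }));
});

// Worker function: actually sends the email; retry policies apply to this one.
exports.deliverEmail = functions.pubsub.topic('send-email').onPublish((message) => {
  const Email = require('email-templates');
  const email = new Email({ /* ...same config as in the question... */ });
  return email.send({ /* ...built from message.json... */ });
});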
Context: I'm trying to implement a rudimentary socket pool in TypeScript. My current implementation is just a list of sockets that have an "AVAILABLE/OCCUPIED" enum attached to them (it could be a boolean admittedly) that allows me to have a mutex-like mechanism to ensure each socket only sends/receives a single message at once.
What I understand: I got that Node.js's way of handling "parallel" operations is "single-threaded asynchrony".
What I infer: to me, this means that there is only a single "control pointer"/"code read-head"/"control flow position" at once, since there is a single thread. It seems to me that the readhead only ever jumps to somewhere else in the code when "await" is called, and the Promise I am "awaiting" cannot yet be resolved. But I am not sure that this is indeed the case.
What I am wondering: does "single-threaded asynchrony" ensure that there is indeed no jump of the control flow position at any other time than when "await" is called? Or is there some underlying scheduler that may indeed cause jumps between tasks at random moments, like normal multithreading?
My question: All of this to ask, do I need a pure mutex/compare-and-swap mechanism to ensure that my mutex-like AVAILABLE/OCCUPIED field is set appropriately?
Consider the following code:
export enum TaskSocketStatus
{
AVAILABLE, //Alive and available
OCCUPIED, //Alive and running a task
}
export interface TaskSocket
{
status:TaskSocketStatus;
socket:CustomSocket;
}
export class Server //A gateway that acts like a client manager for an app needing to connect to another secure server
{
private sockets:TaskSocket[];
[...]
private async Borrow_Socket():Promise<TaskSocket|null>
{
for (const socket of this.sockets)
{
if (!socket.socket.Is_Connected())
{
await this.Socket_Close(socket);
continue;
}
if (socket.status === TaskSocketStatus.AVAILABLE)
{
//This line is where things could go wrong if the control flow jumped to another task;
//ie, where I'd need a mutex or compare-and-swap before setting the status
socket.status = TaskSocketStatus.OCCUPIED;
return (socket);
}
}
if (this.sockets.length < this.max_sockets)
{
const maybe_socket = await this.Socket_Create();
if (maybe_socket.isError())
{
return null;
}
//Probably here as well
maybe_socket.value.status = TaskSocketStatus.OCCUPIED;
return maybe_socket.value;
}
return null;
}
[...]
}
The issue I'm looking to avoid is two different "SendMessage" tasks borrowing the same socket because of race conditions. Maybe this is needless worry, but I'd like to make sure, as this is a potential issue that I would really prefer not to have to confront when the server is already in production...
Thanks for your help!
So, control does not flow to another operation at the moment await is called. It flows when the currently running piece of Javascript returns back to the event loop and the event loop can then service the next waiting event. Resolved promises work via the event loop too (a special queue, but still part of the event loop).
So, when you hit await, that doesn't immediately jump control somewhere else. It suspends further execution of that function and then causes the function to immediately return a promise and control continues with a promise being returned to the caller of the function. The caller's code continues to execute after receiving that promise. Only when the caller or the caller of the caller or the caller of the caller of the caller (depending upon how deep the call stack is) returns back to the event loop from whatever event started this whole chain of execution does the event loop get a chance to serve the next event and start a new chain of execution.
Some time later, when the underlying asynchronous operation connected to that original await finishes, it will insert an event into the event queue. When other Javascript execution returns control back to the event loop and this event gets to the front of the event queue, it will get executed and will resolve the promise that the await was waiting for. Only then does the code within the function after the await get a chance to run. When the async function that contained the await finally finishes its internal execution, the promise that was originally returned from that async function (when the first await was hit) will resolve, and the caller will be notified that the promise it got back has been resolved (assuming it used either await or .then() on that promise).
So, there's no jumping of flow from one place to another. The current thread of Javascript execution returns control back to the event loop (by returning and unwinding its call stack) and the event loop can then serve the next waiting event and start a new chain of execution. Only when that chain of execution finishes and returns can the event loop go get the next event and start another chain of execution. In this way, there's just the one call stack going at a time.
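A tiny runnable illustration of that ordering (the function name and values are just for the demo):

async function f() {
  console.log('A');   // runs synchronously, as part of the call to f()
  await null;         // suspends f and immediately returns a pending promise to the caller
  console.log('C');   // resumes only after the caller's synchronous code has unwound
}

f();
console.log('B');     // the caller keeps running before f resumes
// Output order: A, B, C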
In your code, I don't quite follow what you're concerned about. There is no pre-emptive switching in Javascript. If your function does an await, then its execution will be suspended at that point and other code can run before the promise gets resolved and execution continues after the await. But there's no pre-emptive switching that could change the context and run other code in this thread without your code calling some asynchronous operation and then continuing in the completion callback or after the await.
So, from a pure Javascript point of view, there's no worry between pure local Javascript statements that don't involve asynchronous operations. Those are guaranteed to be sequential and uninterrupted (we're assuming there's none of your code involved that uses shared memory and worker threads - which there is no sign of in the code you posted).
What I am wondering: does "single-threaded asynchrony" ensure that there is indeed no jump of the control flow position at any other time than when "await" is called?
It ensures that there is no jump of the control flow position at any time except when you return back to the event loop (unwind the call stack). It does not occur at await itself. await may lead to your function returning, and may lead to the caller then returning back to the event loop while it waits for the returned promise to resolve, but it's important to understand that the control flow change only happens when the stack unwinds and returns control back to the event loop, so the next event can be pulled from the event queue and processed.
Or is there some underlying scheduler that may indeed cause jumps between tasks at random moments, like normal multithreading?
Assuming we're not talking about Worker Threads, there is no pre-emptive Javascript thread switching in nodejs. Execution only switches to another piece of Javascript when the current chain of Javascript execution returns back to the event loop.
My question: All of this to ask, do I need a pure mutex/compare-and-swap mechanism to ensure that my mutex-like AVAILABLE/OCCUPIED field is set appropriately?
No, you do not need a mutex for that. There is no return back to the event loop between the test and the set, so they are guaranteed not to be interrupted by any other code.
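A small self-contained illustration of why the pattern is safe (simplified names, plain strings instead of the enum): two tasks race for one slot, but because the check and the assignment have no await between them, exactly one of them wins.

const slot = { status: 'AVAILABLE' };

async function borrow(name) {
  await Promise.resolve();               // yield point: other tasks may run here
  if (slot.status === 'AVAILABLE') {     // test ...
    slot.status = 'OCCUPIED';            // ... and set, with no yield point in between
    console.log(name, 'got the slot');
    return true;
  }
  console.log(name, 'found it occupied');
  return false;
}

Promise.all([borrow('task1'), borrow('task2')]);
// Exactly one of the two tasks logs "got the slot"; no mutex is needed.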
I am having a bit of trouble understanding why my code is not working. I am trying to read data from firebase inside a react-native project. I can read it just fine, but I cannot set the data to any variables.
This is my code here.
let tmp;
let userRef = firebase.firestore().collection("Users");
userRef.doc(this.state.FirstName).get().then((document) => {
tmp = document.data().FirstName;
alert(tmp);
})
.catch((errorMsg) => {
alert(errorMsg);
})
alert("tmp Data: " + tmp);
};
The problem is that if I alert tmp inside of the function it shows the FirstName variable as expected. But when I alert tmp outside of the function, it shows undefined. I just cannot seem to wrap my head around why this is not working, if anyone could tell me what I'm doing wrong here, I would much appreciate it.
This is totally normal. The alert outside the block gets executed before the code inside the block, while tmp is still uninitialized. The call that reads FirstName from the database (the get() function) does its work asynchronously, in the background, and your calling code continues without waiting for it to finish. Only when that background work finishes does the code inside the then block get executed. You can verify this behavior by adding alerts before, inside, and after the block and observing the order of execution. To learn more, read about asynchronous operations and promises.
Why all of this? Why does get() do its work asynchronously? Briefly, because it uses the network to access the Firestore database, and it may take some time before a response comes back. If get() did the networking work on the calling thread, then calling it from the main thread (the UI thread) would make your UI unresponsive until the response returned. Instead, get() hands the networking work off to the background and returns a promise object immediately, even before the networking work finishes. You use that promise object to specify what you want to do with the result whenever it arrives. This way the calling code continues executing and does not need to wait, and since the calling thread is usually the UI thread, your UI stays responsive.
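A minimal sketch of how the original snippet can be restructured so the value is only used after it has arrived (the loadFirstName helper and its docId parameter are illustrative; the .then form from the question works just as well):

// Option 1: keep every use of the value inside the then callback.
let userRef = firebase.firestore().collection("Users");
userRef.doc(this.state.FirstName).get()
  .then((document) => {
    const tmp = document.data().FirstName;
    alert("tmp Data: " + tmp);   // runs only once the data has arrived
  })
  .catch((errorMsg) => alert(errorMsg));

// Option 2: an async function lets you write the same thing in a sequential style.
async function loadFirstName(docId) {
  const document = await firebase.firestore().collection("Users").doc(docId).get();
  return document.data().FirstName;   // callers must await the returned promise
}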
So, I wrote the following function:
function getData() {
var data;
$(function () {
$.getJSON('https://ipinfo.io', function (ipinfo) {
data = ipinfo;
console.log(data);
})
})
console.log(data);
}
The problem with the above is that the second console.log doesn't see the value from the assignment inside the jQuery callback and logs undefined. I'm not exactly sure what is wrong, but I believe it to be something quite minor. However, as much as I've searched online, I haven't found an answer for this particular problem.
One line: Javascript is Asynchronous.
While many struggle to figure out what that exactly means, a simple example can explain it:
1. You request some data from a URL.
2. When the data is received, you set a variable with the received data.
3. You use this variable outside the request function's callback (after making the request).
For a conventional programmer, it is very hard to grasp that the order of execution in JavaScript will not be 1, 2 and then 3, but rather 1, 3, 2.
This happens because of JavaScript's event-loop mechanism, where each asynchronous action is tied to an event and callbacks are called only when that event occurs. Meanwhile, the code outside the callback function keeps executing without waiting for the event to actually occur.
In your case:
var data;
$(function () {
$.getJSON('https://ipinfo.io', function (ipinfo) {//async function's callback
data = ipinfo;
console.log(data);//first console output
})
})
console.log(data);//second console output
While the async callback is executed only when the data is received by $.getJSON, JavaScript proceeds past it without waiting for the callback to assign a value to the data variable, causing you to log undefined in the console (the value of the data variable at the moment console.log runs).
I hope I was able to explain that!
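One way to restructure getData so the result is actually usable is to hand it on via a callback, or to return the jqXHR that $.getJSON gives back (it is promise-compatible). A sketch, with the function names chosen for the example:

// Pass the result on via a callback ...
function getData(onData) {
  $.getJSON('https://ipinfo.io', function (ipinfo) {
    onData(ipinfo);               // runs once the response has arrived
  });
}

getData(function (data) {
  console.log(data);              // safe: data is available here
});

// ... or return the jqXHR and chain on it like a promise.
function getDataPromise() {
  return $.getJSON('https://ipinfo.io');
}

getDataPromise().then(function (data) {
  console.log(data);
});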
My question is about readTextAsync and writeTextAsync in the context of Windows Store applications. I have searched StackOverflow and MSDN and otherwise Googled extensively.
My code is given below:
Windows.Storage.ApplicationData.current.roamingFolder.getFileAsync("sample.txt")
.then(
function(samplefile){
return Windows.Storage.FileIO.readTextAsync(samplefile)
},
function(e){},
function(samplefile){
Windows.Storage.FileIO.readTextAsync(samplefile)
}
)
.done(
function(something){ data = something; },
function(){},
function(something){ data = something; }
);
My problem is that most of the time the file does not get read. When I debug, it gets read intermittently.
It appears to be an issue of not allowing enough time for the async call to complete.
I am totally new to Windows app programming and JavaScript.
I would appreciate any help. Thanks. ravi
When you chain several promises you should have a single error handler at the end, where you put your done().
That way you will be able to see whether an error occurred while the file was being read.
The way you should write it is:
Windows.Storage.ApplicationData.current.roamingFolder.getFileAsync("sample.txt")
.then(
function(samplefile){
return Windows.Storage.FileIO.readTextAsync(samplefile)
}
)
.done(
function(data){ /* do something with your data, like assign it to a list */ },
function(error){ /* do something with the error */ },
function(data){ /* progress handler; not sure what you want to do with this */ }
);
But this call may not be your actual problem: if you put this code in a normal function and then call it, you will not be able to see the data object afterwards, because it is loaded asynchronously.
You have to process your data inside the done handler, because if you assign it to an external variable (your data object) like you did, that variable will still be empty when you try to use it; most likely the done handler hasn't run yet.
In the progress handler for then, I am just trying to repeat the call, to ensure completion.
That makes no sense. It might even lead to race conditions, since you try to read the file while getting it is still in progress. Also, the repeated call does not return anything, and the task will not be chained into / synchronized with the flow of the rest. Remove that handler.
The reason for the empty error handler is that I haven't decided what to do in case of an error.
You can just omit it then. Errors would then just "bubble" up, and can be caught in other promises that depend on the failed one.
I want the text that is read to be stored in data for processing later.
But when is "later"? You would need to ensure that the processing does not start before the file is completely read - for which you will need to hook a done handler on your promise. Do the processing in the promise chain as well, never use global/higher-scoped variables with promises. If you need data multiple times, you can simple store the promise in a variable, and attach multiple done handlers (which will work even when the promise is already resolved).