Call cloud functions without waiting for response - javascript

Trying a little hack here with cloud functions but can't seem to figure out what the issue is.
I'm currently using now.sh to host serverless functions and would like to call one function from another. Let's say I have two functions declared, fetchData and setData. When the setData function is called, it processes some data and then calls the fetchData function.
export const setData = async (req: Request, res: Response) => {
  await axios.post(fetchDataEndpointUrl, {
    params: {
      key,
    },
  });
  return res.json({
    payload: true,
  });
};
The above code works fine, but the time for the entire operation to complete is the time for the setData call itself plus the time for the nested fetchData call to finish. What I'm trying to do is make the call to fetchData without waiting for it to complete, essentially removing the await on the axios call. I've tried removing the await, but the call just ends abruptly when setData returns. Is there a way to decouple this action so that setData doesn't have to wait for the fetchData call?

The summary of your question appears to be that when you call a Cloud Function, you want it to be able to return a value to its caller while simultaneously performing background work such as calling another service and waiting for a response.
When you return a value to the caller from a Cloud Function, that is the end of the Cloud Function's life span. You cannot count on any code in the Cloud Function running after the return.
This is documented here.
The text (in part) reads:
A function has access to the resources requested (CPU and memory) only for the duration of function execution. Code run outside of the execution period is not guaranteed to execute, and it can be stopped at any time. Therefore, you should always signal the end of your function execution correctly and avoid running any code beyond it.
Another part of the question is what if we want to have our Cloud Function make an asynchronous REST call where we don't care about waiting for the response. Can we return from the Cloud Function without waiting for the nested REST call to complete?
The answer is maybe, but it will depend on the nature of the service being called and how we are calling it. To appreciate the nuances here, remember that JavaScript is a single-threaded language: there aren't multiple threads of execution, so in our JavaScript app only one thing ever happens at a time. If we make an asynchronous REST call and don't care about the reply but (obviously) do care that the request has been sent, then we need to be sure the request has actually gone out before we terminate the Cloud Function. This can get tricky if we start using library packages without delving into their internals. For example, a REST call might include:
A socket connection to the partner
A transmission of the outbound request
A callback when the response is received
We need to be absolutely sure that the transmission has happened before we end the top-level Cloud Function call. In addition, ending the top-level call may very well tear down the socket used for the response. This could result in an exception in the called REST service, which is now unable to return its 200 response.
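The hazard is easy to reproduce locally. In this sketch, fakePost is a made-up stand-in for the real axios call; the un-awaited request is still pending at the moment the function returns, which is exactly when a Cloud Function runtime may freeze or kill the instance:

```javascript
const events = [];

// Stand-in for an HTTP POST that takes ~20ms (hypothetical, not a real API).
function fakePost() {
  return new Promise((resolve) => {
    setTimeout(() => {
      events.push('request completed');
      resolve();
    }, 20);
  });
}

async function setData() {
  fakePost(); // no await: fire and forget
  events.push('function returned'); // the Cloud Function ends here
}

setData();
// 'function returned' is recorded before 'request completed' -- in a real
// Cloud Function the runtime may stop the instance at the return, so the
// pending request is never guaranteed to finish.
```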
To run work where we neither need nor want to wait for a response, GCP provides an architected solution. It is called "Cloud Tasks" (see Cloud Tasks). Within your Cloud Function, you would define a request to call your nested service asynchronously and hand that request off to Cloud Tasks to execute. Cloud Tasks acknowledges receipt of the request, so you can be assured the work will run and can safely return from your top-level function.
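A minimal sketch of that handoff, assuming the @google-cloud/tasks client library, an HTTP-triggered fetchData endpoint, and a queue named background-work (the project, region, queue name, and URL below are all made-up placeholders):

```javascript
// Requires: npm install @google-cloud/tasks

// Pure helper: build the HTTP task payload that Cloud Tasks expects.
function buildHttpTask(url, payload) {
  return {
    httpRequest: {
      httpMethod: 'POST',
      url,
      headers: { 'Content-Type': 'application/json' },
      body: Buffer.from(JSON.stringify(payload)).toString('base64'),
    },
  };
}

async function setData(req, res) {
  const { CloudTasksClient } = require('@google-cloud/tasks');
  const client = new CloudTasksClient();
  const parent = client.queuePath('my-project', 'us-central1', 'background-work');
  const task = buildHttpTask('https://example.com/api/fetchData', { key: req.query.key });
  // createTask resolves as soon as the task is enqueued, not when it runs,
  // so it is safe to return to the caller immediately afterwards.
  await client.createTask({ parent, task });
  return res.json({ payload: true });
}
```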

Related

what exactly pushes async messages to the queue in javascript?

Recently I've been struggling to understand how asynchronous actions are handled in Javascript. This is an example which I can't understand:
let's say we have a node js server and we make a request to another server like this
async function fetchInfo() {
  fetch("https://api.github.com/users/defunkt")
    .then((response) => response.json())
    .then((data) => console.log(data.login));
}
console.log(1);
fetchInfo();
here is my thought process about this function's execution:
when the fetchInfo function is called, there is a fetch request, so the OS makes that request in the "background", as many refer to this process. While the function waits for the response to come back, we can do other things such as logging to the console, and that logging will not be blocked.
Now, let's say the response comes back. What happens at that time? I know that the functions in .then are somehow pushed onto the queue, and the event loop sees them and pushes them onto the stack, but how are those functions pushed onto the queue? How does node know that the response has arrived? If there are some sort of event listeners (at the OS level) which listen for when the response returns, wouldn't each of those listeners require a thread to constantly monitor for the response? In that case node would not be single-threaded at all. I could understand the case where a new thread is created for every request, but node somehow magically manages to do it without creating new threads in the mysterious "background" black box.
Would be really glad if someone cleared up my doubts about this. Feel free to recommend any books/resources; I'm willing to go down the rabbit hole.
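The queueing being asked about can be observed with a timer standing in for the network response (fakeFetch below is a made-up stand-in, not the real fetch):

```javascript
const order = [];

// Stand-in for fetch: the OS/libuv completes the work in the background,
// then queues the completion callback for the event loop to pick up.
function fakeFetch() {
  return new Promise((resolve) => setTimeout(() => resolve('data'), 0));
}

order.push('sync start');
fakeFetch().then(() => order.push('response callback'));
order.push('sync end');

// The synchronous code runs to completion first; only then does the event
// loop pick the completion event off the queue and run the .then callback,
// so order ends up ['sync start', 'sync end', 'response callback'].
```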

Blocking vs Non-Blocking in NodeJS

We all know that NodeJS is single-threaded, which means that if we have an async/await operation in our code, node will wait for it to be done before executing the rest of the code. So if a user makes an async request, should other users have to wait for it to be done before making requests too?
Here I created a simple example, the first route uses an async function and it takes 10 sec before sending a response and the second route sends a response immediately.
When I sent a request to the first route and while waiting for a response I sent another request to the second route and I got a response even though the first route didn't finish executing the code yet.
Why is it non-blocking on this example?
function sleep() {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      resolve(true)
    }, 10000)
  }).then(val => val)
}
router.get('/route1', async (req, res) => {
  const test = await sleep()
  res.send('HELLO WORLD')
})
router.get('/route2', (req, res) => {
  res.send("HELLO WORLD")
})
await only blocks/suspends execution of the current function, not the whole interpreter. In fact, at the point a function hits the first await inside the function, the function immediately returns a promise and other processing after that function (or other events that occur) are free to run.
So, in your example, when it hits the await sleep(), that function execution is suspended until the await resolves/rejects and the containing async function immediately returns an unfulfilled promise. Since Express with router.get() is not doing anything with that returned promise, it just ignores it and returns control back to the event loop. Sometime later, your second request arrives at the server, an event gets put into the nodejs event queue and Express gets called with that event and it serves your second route handler.
So if a user makes an async request, other users should wait for it to be done before making requests too?
No. Only that one instance of that one request handler that contains the await is suspended. Other execution in the interpreter and other event handler through the event loop (such as other incoming requests) can still happen, so other requests can still be processed, even though one request handler is sitting at an await. This illustrates how await does not block or suspend the whole interpreter, only the execution of one function.
When I sent a request to the first route and, while waiting for a response, sent another request to the second route, I got a response even though the first route hadn't finished executing yet. Why is it non-blocking in this example?
Only the first route was suspended by the await. Other events and other incoming requests can still be processed just fine.
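A self-contained way to see this, with no Express needed (handler below is a made-up stand-in for a route handler):

```javascript
const log = [];

function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function handler(name, ms) {
  log.push(`${name} start`);
  await sleep(ms); // suspends only this function, not the interpreter
  log.push(`${name} end`);
}

handler('route1', 30); // hits the await, immediately returns a pending promise
handler('route2', 0);  // runs right away; not blocked by route1

// route2 finishes before route1 even though it was called second:
// ['route1 start', 'route2 start', 'route2 end', 'route1 end']
```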
NodeJs is not Single-Threaded.
You can check it with these steps:
Create this js file, then run it:
while(true)
On the terminal, execute this command to get the number of threads used to run this js file:
NUM=$(ps M <pid> | wc -l) && echo "number of threads is: $((NUM-1))"
And you can see that while(true) uses more than 1 thread.
Let's move back to your code,
The reason you get the result of the request to /route2 immediately, while the request to /route1 has not finished yet, is that NodeJs uses the event loop so that asynchronous functions do not block the main thread.
When you call the sleep function, NodeJs starts a timer and takes your callback off the call stack (that's why the request to /route2 isn't blocked by /route1). When the timer expires, the resolve(true) is put into the event queue, and with the help of the event loop, your callback on route1 is executed.

How to send API requests from an array sequentially

The situation:
I have an API which is connected to a DynamoDB database via Lambda. Each request performs changes on several records in the database. Hence it is required that each request is fully completed before the next request can be sent to the API. Otherwise I will get inconsistent data across the database.
Where I need help: Given I have an array of elements to request in the database...
const requestArray = [123, 456, 567]
... and my code works with an await of the API post method promise ...
async function databaseRequestExample(requestItem) {
  const result = await postAPIFunction(requestItem)
}
... I would like to know how to work through the array one request after the other. So concurrent invocation via array.map is not an option.
Thank you
You can use the async.eachSeries function of the async library.
Basically it'll run the forEach loop for you, but serially, so you can fire your requests one after the other.
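Since the code already uses async/await, a plain for...of loop gives the same one-at-a-time behavior without an extra library. postAPIFunction here is mocked purely for illustration:

```javascript
// Mock of the real API call, for illustration only.
async function postAPIFunction(id) {
  return `ok:${id}`;
}

async function processSequentially(requestArray) {
  const results = [];
  for (const id of requestArray) {
    // Each await completes fully before the next iteration starts,
    // so requests are never in flight concurrently.
    results.push(await postAPIFunction(id));
  }
  return results;
}

processSequentially([123, 456, 567]).then((results) => {
  console.log(results); // ['ok:123', 'ok:456', 'ok:567']
});
```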
One option is to save all the incoming requests, in order, to a FIFO SQS queue. Then you can let the Lambda function pull from the queue one request at a time and set the reserved concurrency of the Lambda function to 1. This ensures that at any given time at most one Lambda function is querying/making changes to DynamoDB, and requests are processed in order.

Does node js execute multiple commands in parallel?

Does node js execute multiple commands in parallel or execute one command (and finish it!) and then execute the second command?
For example, if multiple async functions use the same stack and push & pop "together", can I get strange behaviour?
Node.js runs your main Javascript (excluding manually created Worker Threads for now) as a single thread. So, it's only ever executing one piece of your Javascript at a time.
But, when a request handler contains asynchronous operations, what happens is that it starts the asynchronous operation and then returns control back to the interpreter. The asynchronous operation runs on its own (usually in native code). While all that is happening, the JS interpreter is free to go back to the event loop and pick up the next event waiting to be run. If that's another incoming request for your server, it will grab that request and start running it. When it hits an asynchronous operation and returns back to the interpreter, the interpreter then goes back to the event loop for the next event waiting to run. That could either be another incoming request or it could be one of the previous asynchronous operations that is now ready to run its callback.
In this way, node.js makes forward progress on multiple requests at a time that involve asynchronous operations (such as networking, database requests, file system operations, etc...) while only ever running one piece of your Javascript at a time.
Starting with node v10.5, nodejs has Worker Threads. These are not yet automatically used by the system in normal servicing of networking requests, but you can create your own Worker Threads and run some amount of Javascript in a truly parallel thread. This probably isn't needed for code that is primarily I/O bound because the asynchronous nature of I/O in Javascript already gives it plenty of parallelism. But, if you have CPU-intensive operations (heavy crypto, image analysis, video compression, etc. done in Javascript), Worker Threads may definitely be worth adding for those particular tasks.
To show you an example, let's look at two request handlers, one that reads a file from disk and one that fetches some data from a network endpoint.
app.get("/getFileData", (req, res) => {
  fs.readFile("myFile.html", function(err, html) {
    if (err) {
      console.log(err);
      res.sendStatus(500);
    } else {
      res.type('html').send(html);
    }
  })
});
app.get("/getNetworkData", (req, res) => {
  got("http://somesite.com/somepath").then(result => {
    res.send(result.body);
  }).catch(err => {
    console.log(err);
    res.sendStatus(500);
  });
});
In the /getFileData request, here's the sequence of events:
Client sends request for http://somesite.com/getFileData
Incoming network event is processed by the OS
When node.js gets to the event loop, it sees an event for an incoming TCP connection on the port its http server is listening on and calls a callback to process that request
The http library in node.js gets that request, parses it, and notifies the observers of that request, one of which is the Express framework
The Express framework matches up that request with the above request handler and calls the request handler
That request handler starts to execute and calls fs.readFile("myfile.html", ...). Because that is asynchronous, calling the function just initiates the process (carrying out the first steps), registers its completion callback and then it immediately returns.
At this point, you can see from that /getFileData request handler that after it calls fs.readFile(), the request handler just returns. Until the callback is called, it has nothing else to do.
This returns control back to the nodejs event loop where nodejs can pick out the next event waiting to run and execute it.
In the /getNetworkData request, here's the sequence of events
Steps 1-5 are the same as above.
6. The request handler starts to execute and calls got("http://somesite.com/somepath"). That initiates a request to that endpoint and then immediately returns a promise. Then, the .then() and .catch() handlers are registered to monitor that promise.
7. At this point, you can see from that /getNetworkData request handler that after it calls got().then().catch(), the request handler just returns. Until the promise is resolved or rejected, it has nothing else to do.
8. This returns control back to the nodejs event loop where nodejs can pick out the next event waiting to run and execute it.
Now, sometime in the future, fs.readFile("myFile.html", ...) completes. At this point, some internal sub-system (that may use other native code threads) inserts a completion event in the node.js event loop.
When node.js gets back to the event loop, it will see that event and run the completion callback associated with the fs.readFile() operation. That will trigger the rest of the logic in that request handler to run.
Then, sometime in the future the network request from got("http://somesite.com/somepath") will complete and that will trigger an event in the event loop to call the completion callback for that network operation. That callback will resolve or reject the promise which will trigger the .then() or .catch() callbacks to be called and the second request will execute the rest of its logic.
Hopefully, you can see from these examples how request handlers initiate an asynchronous operation, then return control back to the interpreter, where the interpreter can pull the next event from the event loop and run it. Then, as asynchronous operations complete, other events are inserted into the event loop, causing further progress on each request handler until eventually they are done with their work. So, multiple sections of code can be making progress without more than one piece of code ever running at the same time. It's essentially cooperative multi-tasking where the time slicing between operations occurs at the boundaries of asynchronous operations, rather than the automatic pre-emptive time slicing of a fully threaded system.
Nodejs gets a number of advantages from this type of multi-tasking as it's a lot, lot lower overhead (cooperative task switching is a lot more efficient than time-sliced automatic task switching) and it also doesn't have most of the usual thread synchronization issues that true multi-threaded systems do which can make them a lot more complicated to code and/or more prone to difficult bugs.

How to write nodejs service, running all time

I am new to Node.js (and JS), so can you explain (or give a link showing) how to write a simple Node.js service that runs permanently?
I want to write a service that sends a request to a foreign API every second and stores the results in a DB.
So, does Node.js have some simple module to run a js method (or methods) over and over again?
Or do I just have to write a while loop and do it there?
setInterval() is what you would use to run something every second in node.js. This will call a callback every NN milliseconds where you pass the value of NN.
var interval = setInterval(function() {
  // execute your request here
}, 1000);
If you want this to run forever, you will need to also handle the situation where the remote server you are contacting is off-line or having issues and is not as responsive as normal (for example, it may take more than a second for a troublesome request to timeout or fail). It might be safer to repeatedly use setTimeout() to schedule the next request 1 second after one finishes processing.
function runRequest() {
  issueRequest(..., function(err, data) {
    // process request results here

    // schedule next request
    setTimeout(runRequest, 1000);
  })
}
// start the repeated requests
runRequest();
In this code block, issueRequest() is just a placeholder for whatever networking operation you are doing (I personally would probably use the request() module and its request.get()).
Because your request processing is asynchronous, this will not actually be recursion and will not cause a stack buildup.
FYI, a node.js process with an active server, an active timer, or an in-flight networking operation will not exit (it will keep running), so as long as you always have a timer or a network request running, your node.js process will keep running.
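That keep-alive behavior is easy to verify: schedule a chain of timers and the process stays up until the last one has fired (the tick count and interval here are arbitrary):

```javascript
let ticks = 0;

function poll() {
  ticks++;
  // As long as another timer is scheduled, the node.js process stays alive.
  if (ticks < 3) setTimeout(poll, 10);
}

poll();

process.on('exit', () => {
  // Runs only once the last timer has fired and the event loop has drained.
  console.log('exiting after', ticks, 'ticks'); // exiting after 3 ticks
});
```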
