AWS Lambda halts halfway while executing a function - javascript

I tried to execute two functions imported from two different files on aws lambda:
const tag_test = require("./tag.js");
const login_logout = require("./login_logout.js");

exports.handler = async function (event, context) {
  await tag_test.tag();
  await login_logout.login();
  console.log("all tests done.");
};
The first function executed fine, but while running the second one, Lambda halted at some point and waited until the whole invocation timed out. I suspect it stops right before let browser, because in the console log I can see "opening up browser" but not "got browser".
module.exports.tag = async () => {
  console.log("starting test 2");
  const puppeteer = require('puppeteer-lambda');
  console.log("opening up browser");
  let browser = await puppeteer.getBrowser(
    '--no-sandbox',
    '--disable-gpu',
    '--single-process'
  );
  console.log("got browser");
  let page = await browser.newPage();
  console.log("got page");
  // my test
  // ...
}
Does anyone have any insight on what went wrong?

Increase the Lambda timeout; the default is 3 seconds.
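For reference, the timeout can be raised from the AWS CLI as well as from the console. A minimal sketch (the function name below is a placeholder for your own):

```shell
# Raise the timeout of an existing Lambda function to 60 seconds.
aws lambda update-function-configuration \
  --function-name my-test-function \
  --timeout 60
```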

Resolved.
It's actually a puppeteer-lambda problem. Apparently you cannot open a browser, close it, and then open a new one within the same AWS Lambda invocation with puppeteer-lambda. I opened and later closed a browser in my first test, and that's what made my second test get stuck at let browser.
To solve this, I had the first test return the browser and passed it to the second test for reuse.
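The fix described above can be sketched as follows. The puppeteer-lambda calls are stubbed out here so the wiring itself is runnable anywhere; in the real handler you would create the browser once with puppeteer.getBrowser(...) and never close it between tests:

```javascript
// Sketch of the fix: create the browser once in the handler and pass it to
// each test, instead of letting every test open (and close) its own.

// tag.js -- accepts a browser instead of creating one, and hands it back
async function tag(browser) {
  const page = await browser.newPage();
  // ... test steps ...
  return browser; // return the same browser for reuse
}

// login_logout.js -- same idea: reuse the browser it is given
async function login(browser) {
  const page = await browser.newPage();
  // ... test steps ...
}

// handler wiring
async function handler() {
  const browser = fakeBrowser(); // stand-in for: await puppeteer.getBrowser(...)
  const sameBrowser = await tag(browser);
  await login(sameBrowser); // no second getBrowser() call, so no hang
  return 'all tests done.';
}

// Minimal stub so this sketch runs without a real Chromium
function fakeBrowser() {
  return { newPage: async () => ({}) };
}
```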

Related

firebase httpsCallable is not called / does not respond

I'm facing a strange issue developing a react-native application with Firebase as the backend.
I'm trying to call a Firebase Cloud Function via httpsCallable.
In debug mode everything works fine and the saveImage() function returns a value.
But if I disable debug, roughly 50% of the time the function just hangs at await functions().httpsCallable('directUpload').
I already tried showing the result in an alert (since I can't use console.log without debug), but it never appears; the same goes for the error. It seems to wait at the await forever.
Even on the server side, I can see in the log that the function is never called.
saveImage = async item => {
  try {
    let result = await functions().httpsCallable('directUpload')({ // it's "frozen" here
      uid: this.state.user.uId,
      mimeType: item.mimeType,
      ext: item.ext,
    });
    helper.showAlert(result); // never gets called
    return {success: true};
  } catch (error) {
    helper.showAlert(error); // never gets called
    return {success: false};
  }
};
Does anyone have any idea where the problem is coming from or what it could be?

Puppeteer custom error messages when failure

I am trying to create custom error messages for when Puppeteer fails to do a task; in my case, it cannot find the field that it has to click.
describe('simple test for Linkedin Login functionality', () => {
  let page;

  before(async () => { // before hook for mocha testing
    page = await browser.newPage();
    await page.goto("https://www.linkedin.com/login");
    await page.setViewport({ width: 1920, height: 1040 });
  });

  after(async function () { // after hook for mocha testing
    await page.close();
  });

  it('should login to home page', async () => { // simple test case
    const emailInput = "#username";
    const passwordInput = "#assword"; // deliberately wrong, to force a failure
    const submitSelector = ".login__form_action_container ";
    const linkEmail = await page.$(emailInput);
    const linkPassword = await page.$(passwordInput);
    const linkSubmit = await page.$(submitSelector);
    await linkEmail.click({ clickCount: 3 });
    await linkEmail.type('testemail#example.com'); // add the email address for linkedin
    await linkPassword.click({ clickCount: 3 }).catch(error => {
      console.log('The following error occurred: ' + error);
    });
    await linkPassword.type('testpassword'); // add password for linkedin account
    await linkSubmit.click();
    await page.waitFor(3000);
  });
});
I have deliberately put a wrong passwordInput name in order to force puppeteer to fail. However, the console.log message is never printed.
This is my error output which is the default mocha error:
simple test for Linkedin Login functionality
1) should login to home page
0 passing (4s)
1 failing
1) simple test for Linkedin Login functionality
should login to home page:
TypeError: Cannot read property 'click' of null
at Context.<anonymous> (test/sample.spec.js:29:28)
Line 29 is the await linkPassword.click({ clickCount: 3 })
Anyone has an idea how I can make it print a custom error message when an error like this occurs?
The problem is that the exception is thrown not during the execution of await linkPassword.click(), but by the attempt to call it in the first place. With .catch() you can only handle an exception thrown while a promise is running. page.$() works in such a way that it returns null if the selector isn't found, so in your case you are effectively executing null.click({ clickCount: 3 }).catch(), which can never work.
To quickly solve your problem you could check whether linkPassword is null before using it. However, I think using page.$() to grab elements to interact with is a mistake here: you lose some of Puppeteer's conveniences, because page.click() will wait for the element, scroll to it, and click it for you.
Instead, you should make sure that the element exists and is visible and then use the puppeteer's API to play with the element. Like this:
const emailInput = "#username";
await page.waitForSelector(emailInput);
await page.click(emailInput, { clickCount: 3 });
await page.type(emailInput, 'testemail#example.com')
Thanks to that, your script makes sure the element exists before interacting with it; Puppeteer then scrolls to it, performs the clicks, and types the text.
Then you can handle a case when the element isn't found this way:
page.waitForSelector(emailInput).catch(() => {})
or just by using try/catch.
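Putting the pieces together, one way to get the custom error message the question asks for is a small helper that wraps waitForSelector in try/catch. This is a sketch, not part of the original answer; clickOrFail and the timeout value are illustrative names, and `page` is assumed to be a Puppeteer page (a stub is enough to show the behavior):

```javascript
// Wrap the selector lookup so a missing element produces your own message
// instead of "Cannot read property 'click' of null".
async function clickOrFail(page, selector, message) {
  try {
    await page.waitForSelector(selector, { timeout: 5000 });
  } catch (err) {
    throw new Error(message + ' (selector: ' + selector + ')');
  }
  await page.click(selector, { clickCount: 3 });
}

// usage inside the test:
// await clickOrFail(page, passwordInput, 'Could not find the password field');
```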

Keeping same Puppeteer page open nodeJS

I'm working on an API built around running some JS on a page opened in Puppeteer, but I don't want to keep opening and closing the page and waiting for it to load, since it's a content-heavy page.
Is it possible to run a forever start on a node script that initiates the page and keeps it open indefinitely, and then call a separate node script whenever needed to run some JavaScript on that page?
I've attempted the following but appears the page doesn't remain open:
keepopen.js
'use strict';
const puppeteer = require('puppeteer');
(async () => {
  const start = +new Date();
  const browser = await puppeteer.launch({args: ['--no-sandbox']});
  const page = await browser.newPage();
  await page.goto('https://www.bigwebsite.com/', {"waitUntil": "networkidle0"});
  const end = +new Date();
  console.log(end - start);
  //await browser.close();
})();
runjs.js
'use strict';
const puppeteer = require('puppeteer');
(async () => {
  const start = +new Date();
  const browser = await puppeteer.launch({args: ['--no-sandbox']});
  const page = await browser.targets()[browser.targets().length - 1].page();
  const hash = await page.evaluate(() => {
    return runFunction();
  });
  const end = +new Date();
  console.log(hash);
  console.log(end - start);
  //await browser.close();
})();
I run the following: forever start keepopen.js and then runjs.js but I'm getting the error:
(node:1642) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'evaluate' of null
It is not possible to share a resource between two Node.js scripts like that. You need a server that keeps the browser open.
Code Sample
Below is an example, using the library express to start a server. Calling /start-browser launches the browser and stores the browser and page object outside of the current function. That way a second function (called when /run is accessed) can use the page object to run code inside of it.
const express = require('express');
const puppeteer = require('puppeteer'); // this require was missing from the snippet

const app = express();
let browser, page; // shared across requests

app.get('/start-browser', async function (req, res) {
  browser = await puppeteer.launch({args: ['--no-sandbox']});
  page = await browser.newPage();
  res.end('Browser started');
});

app.get('/run', async function (req, res) {
  await page.evaluate(() => {
    // ....
  });
  res.end('Done.'); // You could also return results here
});

app.listen(3000);
Keep in mind, that this a minimal example to get you started. In a real world scenario, you would need to catch errors and maybe also restart the browser from time to time.
You could run an HTTP server in Node where the Puppeteer page object is created once on startup, and then place your current script's code inside a so-called "routing" function (which is just a function that serves a web request) of that server.
As long as the page object is created outside the scope of the routing function containing your code, the routing function keeps access to that same page object across any number of web requests.
You'll then be able to reuse the same page object over and over instead of reloading it for each call, as you're doing now; the key point is that you need a long-running service to persist the page object between requests/calls.
You can either create your own http server (using node's built-in http package), or use express (and there are many other http based packages besides express you could use).

How can I get Chrome's remote debug URL when using the "remote-debugging-port" in Electron?

I've set the remote-debugging-port option for Chrome in my Electron main process:
app.commandLine.appendSwitch('remote-debugging-port', '8315')
Now, how can I get the ws:// URL that I can use to connect to Chrome?
I see that the output while I'm running Electron shows
DevTools listening on ws://127.0.0.1:8315/devtools/browser/52ba17be-0c0d-4db6-b6f9-a30dc10df13c
but I would like to get this URL from inside the main process. The URL is different every time. How can I get it from inside the Electron main process?
Can I somehow read my Electron's main process output, from within my main process JavaScript code?
Here's how to connect Puppeteer to your Electron window from your Electron main process code (the snippet assumes a fetch implementation is available in the main process, e.g. const fetch = require('node-fetch') on Node versions without a global fetch, and const puppeteer = require('puppeteer-core'), which is enough when connecting to an existing browser):
app.commandLine.appendSwitch('remote-debugging-port', '8315')

async function test() {
  const response = await fetch(`http://localhost:8315/json/list?t=${Math.random()}`)
  const debugEndpoints = await response.json()

  let webSocketDebuggerUrl = ''
  for (const debugEndpoint of debugEndpoints) {
    if (debugEndpoint.title === 'Saffron') { // match your own window's title here
      webSocketDebuggerUrl = debugEndpoint.webSocketDebuggerUrl
      break
    }
  }

  const browser = await puppeteer.connect({
    browserWSEndpoint: webSocketDebuggerUrl
  })

  // use puppeteer APIs now!
}
// ... make your window, etc, the usual, and then: ...
// wait for the window to open/load, then connect Puppeteer to it:
mainWindow.webContents.on("did-finish-load", () => {
  test()
})

Why doesn't this async function throw undefined?

I have a pretty simple async function that calls the fetch api, and brings me back some data. I am using the await keyword 2 times in this function, and then getting that data and pushing it into my component state.
Here is my pseudo-code in regards to how this function is executing (please tell me if I'm right or wrong here):
1. Call the fetch API with await: this allows the rest of the code outside the function to continue.
2. Once you get the fetch response stream, put it into the data variable. Again, other code can continue while we wait for this to happen.
3. Log the data to the console.
Step 3 is where I have some questions. Let's say I'm on a really terrible network, and my fetch request doesn't give me my data for 5 full seconds. At that point, shouldn't my console.log(data) line log undefined and execute the catch block, since (as I understand it) the async function allows console.log(data) to run BEFORE I get my fetch data back?
I tested this by going into the Chrome Web dev console, and selected the "slow 3g" connection. Even with that network, I was able to log my data to the console without throwing undefined.
What I want to do is to make sure there is data to push into state right after I get my data object back.
getMovieDetails = async id => {
  const API_KEY = process.env.REACT_APP_API_KEY;
  const movieId = id;
  const url = `https://api.themoviedb.org/3/movie/${movieId}?api_key=${API_KEY}&language=en-US`;
  try {
    const response = await fetch(url);
    const data = await response.json();
    console.log(data);
    this.setState({
      title: data.title,
      poster: data.poster_path
    });
  } catch (err) {
    console.log(`Couldn't fetch the endpoint!`);
    console.log(err);
  }
};
You are wrong on the first point of your pseudo-code:

"Call the fetch api with await: this allows the rest of your code to continue to the next line."

Actually, no. Inside the async function, await blocks execution of the lines that follow until the awaited promise settles (code outside the function can still run in the meantime).
So,
yourAsyncFunction = async () => {
  await doSomething();
  console.log('done something'); // will not run until doSomething() completes
}
Which is why you are always getting the fetched data in your console.log(data) statement.
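This ordering can be demonstrated with a short runnable example; slowFetch here is a stand-in for a slow fetch() call:

```javascript
// The line after `await` does not run until the awaited promise resolves,
// even when the promise takes its time.
const order = [];

function slowFetch() {
  // stands in for fetch(): resolves after 50 ms
  return new Promise(resolve => setTimeout(() => resolve('data'), 50));
}

async function run() {
  order.push('before await');
  const data = await slowFetch(); // run() pauses here
  order.push('got ' + data);      // only runs once the promise resolved
}

run().then(() => console.log(order.join(' -> ')));
// -> "before await -> got data"
```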
