What is causing this RichApi error when trying to use insertInlinePictureFromBase64()?

I am trying to replace a token in a Word document with an image I have in base64, but I get an unhelpful error.
[Screenshot: RichApi error code]
Here is the function that isn't working:
async function insertImage(token, image) {
  console.log("adding image");
  await Word.run(async (context) => {
    const results = context.document.body.search(token);
    await context.sync();

    results.load();
    await context.sync();

    console.log(results.items);
    for (let i = 0; i < results.items.length; i++) {
      const pasteMeHere = results.items[i];
      pasteMeHere.insertInlinePictureFromBase64(image, "Replace");
      pasteMeHere.delete();
    }
    await context.sync();
  });
}
I have tried different images, one of which came from the MS Script Lab repo and which I confirmed works. I have also tried different insert locations, including using the paragraphs.getLast() function.
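For anyone debugging the same thing: one frequent cause of an opaque RichApi error with this method is passing a full data URL instead of the bare base64 payload, since insertInlinePictureFromBase64 expects only the part after the comma. Below is a minimal sketch, not a confirmed fix, that strips any such prefix and inserts at the last paragraph (the getLast() variant mentioned above), assuming the rest of the add-in setup is in place:

  // Sketch: isolate the picture insert from the search/replace logic.
  // Assumes "image" may arrive as a data URL ("data:image/png;base64,...").
  async function insertImageAtEnd(image) {
    // Strip a data URL prefix if present (a common cause of opaque RichApi errors)
    const base64 = image.replace(/^data:image\/\w+;base64,/, "");
    await Word.run(async (context) => {
      const last = context.document.body.paragraphs.getLast();
      last.insertInlinePictureFromBase64(base64, "End");
      await context.sync();
    });
  }

If this variant succeeds with the same image string, the problem is in the search/replace path rather than the payload.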

Related

Read a CAR file using js-car

I have a CAR file object in JavaScript and want to read it using js-car (GitHub). But I keep getting an "unexpected end of file" error. Here is the code I am trying:
let arrayBuffer = await files[0].arrayBuffer();
let bytes = new Uint8Array(arrayBuffer);
const reader = await CarReader.fromBytes(bytes);   // throws the error here
const indexer = await CarIndexer.fromBytes(bytes); // throws the error here
I also tried this:
let str = await files[0].stream();
const reader = await CarReader.fromIterable(str); // throws the error here
None of them work. However, with the same file, this code works:
const inStream = fs.createReadStream('test.car')
const reader = await CarReader.fromIterable(inStream)
I checked and I know that CarReader.fromBytes needs a Uint8Array, and I am sure files[0] is not null. Does anyone know what I am missing here?
For people who might face a similar issue in the future, this is my solution: I used res.body directly, converted it to an async iterable, and read it with fromIterable:
async function* streamAsyncIterator(stream) {
  // Get a lock on the stream
  const reader = stream.getReader();
  try {
    while (true) {
      // Read from the stream
      const { done, value } = await reader.read();
      // Exit if we're done
      if (done) return;
      // Else yield the chunk
      yield value;
    }
  } finally {
    reader.releaseLock();
  }
}
const info = await w3StorageClient.status(response);
if (info) {
  // Fetch and verify files from web3.storage
  const res = await w3StorageClient.get(response);
  const reader = await CarReader.fromIterable(streamAsyncIterator(res.body));
  // Read the list of roots from the header
  const roots = await reader.getRoots();
  // Retrieve a block, as a { cid: CID, bytes: Uint8Array } pair, from the archive
  const got = await reader.get(roots[0]);
  // Also possible: for await (const { cid, bytes } of CarIterator.fromIterable(inStream)) { ... }
  let decoded = cbor.decode(got.bytes);
  console.log('Retrieved [%s] from example.car with CID [%s]',
    decoded,
    roots[0].toString());
}

How can I click on all links matching a selector with Playwright?

I'm using Playwright to scrape some data. How do I click on all links on the page matching a selector?
const { firefox } = require('playwright');

(async () => {
  const browser = await firefox.launch({ headless: false, slowMo: 50 });
  const page = await browser.newPage();
  await page.goto('https://www.google.com');
  page.pause(); // allow user to manually search for something
  const wut = await page.$$eval('a', links => {
    links.forEach(async (link) => {
      link.click(); // maybe works?
      console.log('whoopee'); // doesn't print anything
      page.goBack(); // crashes
    });
    return links;
  });
  console.log(`wut? ${wut}`); // prints 'wut? undefined'
  await browser.close();
})();
Some issues:
- console.log inside the $$eval doesn't do anything.
- page.goBack() and page.pause() inside the eval cause a crash.
- The return value of $$eval is undefined (if I comment out page.goBack() so I get a return value at all). If I return links.length instead of links, it's correct (i.e. it's a positive integer). Huh?
I get similar results with:
const links = await page.locator('a');
await links.evaluateAll(...)
Clearly I don't know what I'm doing. What's the correct code to achieve something like this?
(X-Y problem alert: I don't actually care whether I do this with $$eval, Playwright, or frankly even JavaScript; all I really want is to make this work in any language or tool.)
const { context } = await launch({ slowMo: 250 });
const page = await context.newPage();
await page.goto('https://stackoverflow.com/questions/70702820/how-can-i-click-on-all-links-matching-a-selector-with-playwright');

const links = page.locator('a:visible');
const linksCount = await links.count();
for (let i = 0; i < linksCount; i++) {
  await page.bringToFront();
  try {
    const [newPage] = await Promise.all([
      context.waitForEvent('page', { timeout: 5000 }),
      links.nth(i).click({ modifiers: ['Control', 'Shift'] })
    ]);
    await newPage.waitForLoadState();
    console.log('Title:', await newPage.title());
    console.log('URL:  ', newPage.url());
    await newPage.close();
  } catch {
    continue;
  }
}
There are a number of ways you could do this, but I like this approach the most. Clicking a link, waiting for the page to load, and then going back to the previous page has a lot of problems; most importantly, for many pages the links might change every time the page loads. Ctrl+Shift+clicking opens the link in a new tab, which you can access using the Promise.all pattern and catching the 'page' event.
I only tried this on this page, so I'm sure there are tons of other problems that may arise. But for this page in particular, using 'a:visible' was necessary to avoid getting stuck on hidden links. The whole clicking operation is wrapped in a try/catch because some of the links aren't real links and don't open a new page.
Depending on your use case, it may be easiest just to grab all the hrefs from each link:
const links = page.locator('a:visible');
const linksCount = await links.count();
const hrefs = [];
for (let i = 0; i < linksCount; i++) {
  hrefs.push(await links.nth(i).getAttribute('href'));
}
console.log(hrefs);
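From there, one possible follow-up (a sketch, not part of the original answer) is to visit each href directly instead of clicking, which sidesteps the stale-links problem entirely:

const base = page.url(); // capture the base before navigating away
for (const href of hrefs) {
  if (!href) continue; // skip anchors without an href attribute
  await page.goto(new URL(href, base).toString()); // resolve relative links
  console.log('Title:', await page.title());
}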
Try this approach. I will use TypeScript.
await page.waitForSelector(selector, { timeout: 10000 });
const links = await page.$$(selector);
for (const link of links) {
  await link.click({ timeout: 8000 });
  // your additional code
}
See more at https://youtu.be/54OwsiRa_eE?t=488

Inner async code not executing at all despite using await block

I've got a pretty simple class that I'm trying to use Puppeteer within, but no matter what I do, the async code just doesn't seem to execute when I put a breakpoint on it.
The let data = await page.$$eval line will execute, and then literally nothing happens after that. The code doesn't even step into the inner function block.
Surely the await on that line should force the inner async block to execute before it moves on to the console.log at the bottom?
let url = "https://www.ikea.com/gb/en/p/godmorgon-high-cabinet-brown-stained-ash-effect-40457851/";
let scraper = new Scraper();
scraper.launch(url);

export class Scraper {
  constructor() {}

  async launch(url: string) {
    let browser = await puppeteer.launch({});
    let page = await browser.newPage();
    await page.goto(url);

    let data = await page.$$eval('body *', elements => {
      console.log("Elements: ", elements);
      elements.forEach(element => {
        console.log("Element: ", element.className);
      });
      return "done";
    });
    console.log("Data: ", data);
  }
}
I'm trying to follow this tutorial. I even copied this block of code directly from the tutorial, but it still doesn't work:
await page.goto(this.url);
// Wait for the required DOM to be rendered
await page.waitForSelector('.page_inner');
// Get the link to all the required books
let urls = await page.$$eval('section ol > li', links => {
  // Make sure the book to be scraped is in stock
  links = links.filter(link => link.querySelector('.instock.availability > i').textContent !== "In stock");
  // Extract the links from the data
  links = links.map(el => el.querySelector('h3 > a').href);
  return links;
});
console.log(urls);
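A likely explanation, for what it's worth: the callback passed to page.$$eval is serialized and run inside the browser page, so its console.log output goes to the page's own devtools console rather than the Node terminal, and Node-side breakpoints never fire inside it. A minimal sketch (with a placeholder URL) that forwards page console messages back to Node:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Forward browser-side console messages to the Node terminal; without this,
  // console.log inside page.$$eval only shows up in the page's devtools.
  page.on('console', msg => console.log('PAGE LOG:', msg.text()));
  await page.goto('https://example.com'); // placeholder URL
  const data = await page.$$eval('body *', elements => {
    console.log('Elements:', elements.length); // now surfaces via the 'console' event
    return 'done';
  });
  console.log('Data:', data);
  await browser.close();
})();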

Making a fetch call within a nested for loop

I'm currently converting a local function into a Lambda function and am running into a few hardships. A previous method gives me an array of URL fragments that I have to append, one by one, to a set of URL prefixes to get the data back.
Logically, I figured the best way of doing this is a nested loop: go through the series of prefixes, append each URL fragment, and make the fetch call.
It works fine locally, but Lambda throws errors.
function getVariantData(data, cb) {
  for (var i = 0; i < chapters.length; i++) {
    let source = chapters[i]; // chapters = url prefixes
    data.forEach(async element => {
      let res = await fetch(chapters[i] + element); // element = url fragment
      let body = await res.text();
      createVariantsFile(element, source, body, cb);
    });
  }
}
This code runs fine locally, but I've learned that Lambdas are a bit stricter with forEach and async/await, so I've changed my code to the following and have been dealing with a mess of issues. I haven't gone past writing console.log because... well, I haven't gotten past the error.
async function getVariantData(data, cb) {
  for (var i = 0; i < chapters.length; i++) {
    let source = chapters[i];
    const promises = data.map((datum, index) => fetch(source + datum));
    const chapterData = await Promise.all(promises);
    console.log(chapterData);
    // await data.map(async element => {
    //   return await (chapters[i] + element);
    //   createVariantsFile(element, source, body, cb);
    // });
  }
}
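One hedged guess at the underlying problem: forEach never awaits the promises its async callback creates, so in the first version the handler can return (and the Lambda be frozen or torn down) before any fetch completes. Collecting the promises and awaiting them all keeps the work inside the handler's lifetime. A sketch, assuming chapters and createVariantsFile behave as in the question:

async function getVariantData(data, cb) {
  for (const source of chapters) {
    // Kick off every fetch for this prefix, then wait for all of them
    // before moving on, so nothing outlives the handler.
    await Promise.all(data.map(async element => {
      const res = await fetch(source + element);
      const body = await res.text();
      createVariantsFile(element, source, body, cb);
    }));
  }
}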

What is the source of my 'Execution Context Destroyed' error?

I am creating a program to scrape forum responses for the online university I work for. I managed to successfully navigate to the appropriate pages, but when I tried to include scraping for the list of names of the learners who have responded, I receive an 'Execution context was destroyed' error.
So far I have tried moving the page.waitFor() calls around with varying timeouts.
const nameLinkList = await page.$$eval(
  '.coursename',
  courseLinks => courseLinks.map(link => {
    const a = link.querySelector('.coursename > a');
    return {
      name: a.innerText,
      link: a.href
    };
  })
);

for (const { name, link } of nameLinkList) {
  await Promise.all([
    page.waitForNavigation(),
    page.goto(link),
    page.waitFor(2000),
  ]);

  let [button] = await page.$x("//a[contains(., 'Self')]");
  if (button) {
    await button.click();
  } else {
    console.log(name);
    console.log('Didnt find link');
  }

  fs.appendFile('out.csv', name + '\n');
  await page.waitFor(1000);

  var elementExists = await page.$$('.author .media-body');
  if (elementExists) {
    await console.log(name);
    await page.waitFor(500);
    for (let z of elementExists) {
      const studentName = await z.$eval('a', a => a.innerText);
      await page.waitFor(2000);
      await console.log(studentName);
    }
  }

  await page.goto('www.urlwiththelistofcourses.com');
}
I expected it to iterate through each page, logging first the name of the course, followed by the names of any students who posted on the course's particular forum. The thing that confuses me is that, unlike previous errors which got stuck on a particular iteration, this one is variable: it usually fails in the same area, around the 12th to 17th iteration, but sometimes even earlier.
It seems that a combination of adjusting the waitFor here:
fs.appendFile('out.csv', name + '\n');
await page.waitFor(1000);
var elementExists = await page.$$('.author .media-body');
to 2000, combined with disabling the rendering of CSS and images, solved the problem. The program must have navigated away before entering the loop when the page loaded too slowly.
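For reference, here is a sketch (not the answerer's exact code) of how CSS and image rendering can be disabled with Puppeteer's request interception, which makes pages settle faster so the fixed waits are less likely to be outrun by a slow load:

// Sketch: drop heavy resource requests before navigating, assuming a
// standard Puppeteer page object.
async function disableHeavyResources(page) {
  await page.setRequestInterception(true);
  page.on('request', request => {
    // Abort stylesheets, images, and fonts; let everything else through
    if (['stylesheet', 'image', 'font'].includes(request.resourceType())) {
      request.abort();
    } else {
      request.continue();
    }
  });
}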
