How to scrape Google Images with unirest and cheerio? - javascript

I am trying to scrape Google Images using unirest and cheerio, but I got stuck because the parsing is not working correctly.
This is my code currently:
const unirest = require("unirest");
const cheerio = require("cheerio");

const getData = async () => {
  let count = [], page_url = [];
  let url =
    "https://www.google.com/search?q=india&oq=india&tbm=isch&asearch=ichunk&async=_id:rg_s,_pms:s,_fmt:pc&sourceid=chrome&ie=UTF-8";
  const response = await unirest
    .get(url)
    .headers({
      "User-Agent":
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.54 Safari/537.36",
    })
    .proxy("proxy");
  const $ = cheerio.load(response.body);
  console.log(response.body); // HTML file returned successfully
  let title = [], link = [];
  $(".vbC6V").each((i, el) => {
    title[i] = $(el).find(".iKjWAf .mVDMnf").text(); // not parsing
    link[i] = $(el).find(".rg_l .rg_ic").attr("src"); // not parsing
  });
  console.log(title); // returned empty
  console.log(link); // returned empty
};

getData();

Selectors like ".rg_bx" and ".rg_l .rg_ic" aren't stable and change often. I've made a small change to your code (which I think is more convenient for later use) and recommend using more stable selectors:
const $ = cheerio.load(response.body);
const results = Array.from($(".PNCib.MSM1fd")).map((el) => ({
  title: $(el).find(".VFACy").attr("title"),
  link: $(el).find(".VFACy").attr("href"),
}));
console.log(results);
Output:
[
  {
    "title": "India - Wikipedia",
    "link": "https://en.wikipedia.org/wiki/India"
  },
  {
    "title": "India | History, Map, Population, Economy, & Facts | Britannica",
    "link": "https://www.britannica.com/place/India"
  },
  {
    "title": "India - Know all about India including its History, Geography, Culture, etc",
    "link": "https://www.mapsofindia.com/india/"
  },
  {
    "title": "India | History, Map, Population, Economy, & Facts | Britannica",
    "link": "https://www.britannica.com/place/India"
  },
  ...and other results
]
But even "more stable" selectors change from time to time, so you will always need to maintain your code. To make it more reliable, using regular expressions to extract the inline JSON data is the way to go. Although the position of the inline JSON in the HTML can change, that happens less frequently, or not at all.
You can read more about scraping Google Images with regular expressions in my blog post Web Scraping Google Images with Nodejs.
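As a minimal sketch of that approach (the regex below assumes the image data appears as ["<url>", height, width] arrays in inline scripts; the exact shape varies by page version, so treat it as a starting point to verify against the live HTML):

const extractImageUrls = (html) => {
  // Match ["<url>", <height>, <width>] arrays embedded in inline JSON
  const matches = html.matchAll(/\["(https?:\/\/[^"]+?)",\d+,\d+\]/g);
  return [...matches].map((m) => m[1]);
};

console.log(extractImageUrls(response.body));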

So yeah, I found out that the parent class for parsing should be rg_bx and not vbC6V. So the updated code will be:

$(".rg_bx").each((i, el) => {
  title[i] = $(el).find(".iKjWAf .mVDMnf").text();
  link[i] = $(el).find(".rg_l .rg_ic").attr("src");
});

Related

Evaporate.js upload file with x-amz-security-token: SignatureDoesNotMatch

I want to upload a file with evaporate.js and crypto-js using x-amz-security-token:
import * as Evaporate from 'evaporate';
import * as crypto from "crypto-js";

Evaporate.create({
  aws_key: <ACCESS_KEY>,
  bucket: 'my-bucket',
  awsRegion: 'eu-west',
  computeContentMd5: true,
  cryptoMd5Method: data => crypto.algo.MD5.create().update(String.fromCharCode.apply(null, new Uint32Array(data))).finalize().toString(crypto.enc.Base64),
  cryptoHexEncodedHash256: (data) => crypto.algo.SHA256.create().update(data as string).finalize().toString(crypto.enc.Hex),
  logging: true,
  maxConcurrentParts: 5,
  customAuthMethod: (signParams: object, signHeaders: object, stringToSign: string, signatureDateTime: string, canonicalRequest: string): Promise<string> => {
    const stringToSignDecoded = decodeURIComponent(stringToSign);
    const requestScope = stringToSignDecoded.split("\n")[2];
    const [date, region, service, signatureType] = requestScope.split("/");
    const round1 = crypto.HmacSHA256(`AWS4${signParams['secret_key']}`, date);
    const round2 = crypto.HmacSHA256(round1, region);
    const round3 = crypto.HmacSHA256(round2, service);
    const round4 = crypto.HmacSHA256(round3, signatureType);
    const final = crypto.HmacSHA256(round4, stringToSignDecoded);
    return Promise.resolve(final.toString(crypto.enc.Hex));
  },
  signParams: { secretKey: <SECRET_KEY> },
  partSize: 1024 * 1024 * 6
}).then((evaporate) => {
  evaporate.add({
    name: 'my-key',
    file: file, // file from <input type="file" />
    xAmzHeadersCommon: { 'x-amz-security-token': <SECURITY_TOKEN> },
    xAmzHeadersAtInitiate: { 'x-amz-security-token': <SECURITY_TOKEN> },
  }).then(() => console.log('complete'));
},
(error) => console.error(error)
);
but it produces this output:
AWS Code: SignatureDoesNotMatch, Message: The request signature we calculated does not match the signature you provided. Check your key and signing method. Status: 403
What am I doing wrong?
SIDE NOTE
These are the versions I'm using on the browser side:
{
  "crypto-js": "^4.1.1",
  "evaporate": "^2.1.4"
}
You have your crypto.HmacSHA256 parameters reversed. They should all be the other way around. I've been bashing my head against a wall trying to get evaporate 2.x to work for the last week; it's been very frustrating.
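In crypto-js the message comes first and the key second, i.e. crypto.HmacSHA256(message, key), so the signing chain from your customAuthMethod should read:

const round1 = crypto.HmacSHA256(date, `AWS4${signParams['secret_key']}`);
const round2 = crypto.HmacSHA256(region, round1);
const round3 = crypto.HmacSHA256(service, round2);
const round4 = crypto.HmacSHA256(signatureType, round3);
const final = crypto.HmacSHA256(stringToSignDecoded, round4);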
I tried your code above and looked over all the docs and forum posts related to this, and I think using Cognito for this auth just doesn't work, or at least it isn't obvious how it's supposed to work, even though the AWS docs suggest it's possible.
In the end I went with Lambda authentication and finally got it working, after seeing much misinformation about how to use various crypto libraries to sign this stuff. It took rigorously examining every bit of what was going on. Reading the docs also helped get me on the right path as to how the crypto needs to work; they give example inputs and outputs so you can test for sure that your crypto methods work as AWS expects:
https://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html
Tasks 1, 2 and 3 are especially important to read and understand.
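For example, you can sanity-check your HMAC helper against the worked signing-key example in those docs (the secret key, date, region, and service below are the documented test values; compare your output against the expected hex published on the linked page):

const crypto = require("crypto-js");

// Derive the SigV4 signing key as in the "calculate signature" task of the docs
const kDate = crypto.HmacSHA256("20150830", "AWS4wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY");
const kRegion = crypto.HmacSHA256("us-east-1", kDate);
const kService = crypto.HmacSHA256("iam", kRegion);
const kSigning = crypto.HmacSHA256("aws4_request", kService);

console.log(kSigning.toString(crypto.enc.Hex));
// Should match the signing-key hex the docs list for this example
// (c4afb1cc5771d871763a393e44b703571b55cc28424d1a5e86da6ed3c154a4b9, if I recall correctly)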

When adding a Job to scheduler: Value cannot be null, Job class cannot be null?

My question is very similar to:
Quartz.net - "Job's key cannot be null"
However, it's a different setup, as I am using a REST API.
I am able to run a job when adding it through Startup.cs; however, when I call the API to add a job using JavaScript, it fails with the error below:
ERROR:
System.ArgumentNullException: Value cannot be null. (Parameter 'typeName')
at System.RuntimeType.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase, StackCrawlMark& stackMark)
at System.Type.GetType(String typeName)
at Quartz.Web.Api.JobsController.AddJob(String schedulerName, String jobGroup, String jobName, String jobType, Boolean durable, Boolean requestsRecovery, Boolean replace) in E:\Amit\DotNet\QuartzApi\QuartzApi\Controllers\JobsController.cs:line 108
at lambda_method14(Closure , Object )
at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.AwaitableResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeActionMethodAsync>g__Logged|12_1(ControllerActionInvoker invoker)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeNextActionFilterAsync>g__Awaited|10_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeInnerFilterAsync()
--- End of stack trace from previous location ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeFilterPipelineAsync>g__Awaited|19_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Logged|17_1(ResourceInvoker invoker)
at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger)
at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.Invoke(HttpContext context)
HEADERS
=======
Accept: application/json
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
Connection: close
Content-Length: 83
Content-Type: application/json
Host: localhost:44379
Referer: https://localhost:44379/jobs.html
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36
sec-ch-ua: "Chromium";v="92", " Not A;Brand";v="99", "Google Chrome";v="92"
sec-ch-ua-mobile: ?0
origin: https://localhost:44379
sec-fetch-site: same-origin
sec-fetch-mode: cors
sec-fetch-dest: empty
SETUP:
In VS, I created the Quartz REST API and front end in a single project. Running the project loads the webpage with Jobs, with the API running in the background.
All controller endpoints work except AddJob (i.e. get jobs, view job details, pause, resume, trigger, and delete all work).
Dependency:
Quartz.Extensions.Hosting 3.3.3
JobsController.cs
quartznet/JobsController.cs at main · quartznet/quartznet · GitHub
[HttpPut]
[Route("{jobGroup}/{jobName}")]
public async Task AddJob(string schedulerName, string jobGroup, string jobName, string jobType, bool durable, bool requestsRecovery, bool replace = false)
{
    var scheduler = await GetScheduler(schedulerName).ConfigureAwait(false);
    var jobDetail = new JobDetailImpl(jobName, jobGroup, Type.GetType(jobType), durable, requestsRecovery);
    await scheduler.AddJob(jobDetail, replace).ConfigureAwait(false);
}
HelloWorldJob.cs:
https://andrewlock.net/using-quartz-net-with-asp-net-core-and-worker-services/
Startup.cs: (adds a job without the API and runs it using a trigger at startup)
void ConfigureHostQuartz(IServiceCollection services)
{
    services.AddQuartz(q =>
    {
        q.UseMicrosoftDependencyInjectionScopedJobFactory();
        var jobKey = new JobKey("HelloWorldJob");
        q.AddJob<HelloWorldJob>(opts => opts.WithIdentity(jobKey));
        q.AddTrigger(opts => opts
            .ForJob(jobKey)
            .WithIdentity("HelloWorldJob-trigger")
            .WithCronSchedule("0/5 * * * * ?"));
    });
    services.AddQuartzHostedService(
        q => q.WaitForJobsToComplete = true);
}
Html/Javascript front end:
Following this example:
Tutorial: Call an ASP.NET Core web API with JavaScript | Microsoft Docs
<form action="javascript:void(0);" method="POST" onsubmit="addJob()">
  <input type="text" id="add-name" placeholder="New job">
  <input type="submit" value="Add">
</form>

<script>
  function addJob() {
    const addNameTextbox = document.getElementById('add-name').value.trim();
    const item = {
      jobType: "HelloWorldJob",
      durable: true,
      requestsRecovery: false,
      replace: false
    };
    fetch(`${uri}/DEFAULT/${addNameTextbox}`, {
      method: 'PUT',
      headers: {
        'Accept': 'application/json',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(item)
    })
      .then(response => console.log(response))
      .then(() => {
        getJobs();
        addNameTextbox.value = '';
      })
      .catch(error => console.error('Unable to add job.', error));
  }
</script>
I have tried updating the API to include jobType in the URL, but then it gives a different error:
Job class cannot be null
at Quartz.Impl.JobDetailImpl.set_JobType(Type value)
You need to supply an assembly-qualified name as the job type. The problem is here:
jobType: "HelloWorldJob",
jobType should be something like "MyNameSpace.JobType, MyAssembly" - you can probably get this written to the console with Console.WriteLine(typeof(HelloWorldJob).AssemblyQualifiedName) - you can ignore the version etc.; only the type name with namespace and the assembly name are needed.
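On the JavaScript side that means changing only the jobType value; for example (the "QuartzApi.Jobs" namespace and "QuartzApi" assembly name are assumed here for illustration - use whatever AssemblyQualifiedName prints for your project):

const item = {
  jobType: "QuartzApi.Jobs.HelloWorldJob, QuartzApi", // assembly-qualified name
  durable: true,
  requestsRecovery: false,
  replace: false
};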
Please also note that your setup has security implications as you allow CLR types to be passed from the UI.
API controller changes:
As mentioned by Marko above, jobType needs a fully qualified name; the assembly reference is not necessary in my case, however, as my jobs are in the same assembly.
[HttpPut]
[Route("{jobGroup}/{jobName}/{jobType}/{replace}/new")]
public async Task NewJob(string schedulerName, string jobGroup,
    string jobName, string jobType, bool replace = false)
{
    // Note: a job added without a trigger must be durable.
    var scheduler = await GetScheduler(schedulerName).ConfigureAwait(false);
    var jobDetail = new JobDetailImpl(jobName, jobGroup,
        Type.GetType("QuartzApi.Jobs." + jobType), true, false);
    await scheduler.AddJob(jobDetail, replace).ConfigureAwait(false);
}
JavaScript fetch query changes:
Removed the JSON body and added the extra parameters to the URL. Note it's a job without a trigger. At a later stage jobType can be a variable; for now it's hard-coded in the fetch string.
function addJob() {
  const addNameTextbox = document.getElementById('add-name').value.trim();
  fetch(`${uri}/DEFAULT/${addNameTextbox}/HelloWorldJob/false/new`, {
    method: 'PUT',
    headers: {
      'Accept': 'application/json',
      'Content-Type': 'application/json'
    }
  })
    .then(response => console.log(response))
    .then(() => {
      getJobs();
      addNameTextbox.value = '';
    })
    .catch(error => console.error('Unable to add job.', error));
}
Running the UI request to add a job now adds it without a trigger (to be worked on in a separate section). To confirm, I then ran an API request in the browser to fetch all jobs for the running scheduler using:
[https://localhost:44379/api/schedulers/QuartzScheduler/jobs]
resulting in:
[{"name":"HelloWorldJob","group":"DEFAULT"},{"name":"TestJob","group":"DEFAULT"}]
That implies a few things:
Passing a JSON object in the body does not associate it with the API function parameters. I need to add all parameters to the URL string to use them. Maybe there is a way to use body parameters.
Now that the class is correctly referenced in the API, I can continue passing just the class name through the UI, without the namespace and assembly, to keep it secure, as the class is defined in the project at build time.
Adding Console.WriteLine in the API function did not return any output at runtime.

How is this site forming the headers on a POST request?

I am trying to learn how the headers are constructed when a zipcode is entered by the user and a "POST" command is issued (by clicking on the "Shop Now" button) from the following website:
I believe the interesting part of this "POST" request is how the site forms the following headers, but I can't figure out how it does it (my suspicion is that there is some JavaScript/Angular code responsible):
x-ccwfdfx7-a
x-ccwfdfx7-b
x-ccwfdfx7-c
x-ccwfdfx7-d
x-ccwfdfx7-f
x-ccwfdfx7-z
So I have tried to use the requests module to log in as a guest to learn more about how this flow works:
with requests.Session()
with cloudscraper.create_scraper()
So far all my attempts have FAILED. Here is my code:
import requests
from requests_toolbelt.utils import dump  # pip install requests_toolbelt
import cloudscraper  # pip install cloudscraper

# with requests.Session() as session:
with cloudscraper.create_scraper(
    browser={
        'custom': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36'
    }
) as session:
    CITY = XXXXX
    ZIPCODE = XXXXX

    # get cookies
    url = 'http://www.peapod.com'
    res1 = session.get(url)
    session.headers['Referer'] = 'https://www.peapod.com/'

    # get more cookies
    url = 'http://www.peapod.com/login'
    res2 = session.get(url)

    # get more cookies
    url = 'https://www.peapod.com/ppd/bundles/js/ppdBundle.js'
    res3 = session.get(url)

    # get all the service locations
    response = session.get('https://www.peapod.com/api/v4.0/serviceLocations',
        params={
            'customerType': 'C',
            'zip': ZIPCODE
        }
    )
    try:
        loc_id = list(
            filter(
                lambda x: x.get('location', {}).get('city') == CITY, response.json()['response']['locations']
            )
        )[0]['location']['id']
    except IndexError:
        raise ValueError("Can't find City '{}' -> Zip {}".format(CITY, ZIPCODE))

    # login as guest
    response = session.post('https://www.peapod.com/api/v4.0/user/guest',
        json={
            'customerType': 'C',
            'cities': None,
            'email': None,
            'serviceLocationId': loc_id,
            'zip': ZIPCODE
        },
        params={
            'serviceLocationId': loc_id,
            'zip': ZIPCODE
        }
    )
This produces some sort of error message saying I'm blocked, which I believe is because I can't figure out how the browser constructs the ccwfdfx7 headers in the "POST" request (my suspicion is that there is some JavaScript/Angular code responsible for constructing these headers, but I can't find it, and I'm hoping someone could help...)
On the same computer, the Chrome browser is able to log in just fine.

Javascript: Cheerio is returning inconsistent results for anchor tag

I'm trying to scrape websites and grab their mailto links:
const axios = require("axios");
const cheerio = require("cheerio");

const url = "https://www.cverification.com/";

axios.get(url).then(({ data }) => {
  const $_ = cheerio.load(data);
  const mailToLink = $_('a[href^="mailto:"]');
  console.log("maillllllllll: ", mailToLink);
  if (!mailToLink || !mailToLink.length) {
    console.log("NO EMAILLLL: ", url); // <------------ this prints
    return;
  }
  const email = mailToLink.attr("href").replace("mailto:", "");
  console.log("SUCCEEDEDDD", url, email);
});
However, Cheerio is returning a weird object for some of the links:

maillllllllll: initialize {
  options:
   { withDomLvl1: true,
     normalizeWhitespace: false,
     xml: false,
     decodeEntities: true },
  _root:
   initialize {
     '0':
      { type: 'root',
        name: 'root',
        namespace: 'http://www.w3.org/1999/xhtml',
        attribs: {},
        ...

This script works for some websites and not for others. When I visit https://www.cverification.com/ and run the code above line by line (just using jQuery), it works. What am I doing wrong?
As others in the comments discovered, the site uses React, so the link is only inserted after React renders its components.
I fixed this by updating the user-agent of my request:
const instance = axios.create({
  headers: {
    "User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/600.1.3 (KHTML, like Gecko) Version/8.0 Mobile/12A4345d Safari/600.1.4"
  }
});
This fixed it!
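For completeness, the instance then replaces the bare axios call from the question, with the parsing logic unchanged; a minimal sketch:

instance.get(url).then(({ data }) => {
  const $_ = cheerio.load(data);
  const mailToLink = $_('a[href^="mailto:"]');
  // ...same length check and .attr("href").replace("mailto:", "") as before
});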

Get Android app Icon by package name from javascript code

I'm trying to get an application icon from my web JavaScript code using only the package name.
How can I fetch it from the Google Play store?
Is scraping the only way? Is it dangerous?
Thanks.
If you want to get the application icon from the Google Play app info page, you just need to extract it from the correct selectors, but you need an application ID. To get one, you can search for what you want on the Google Play main page, take the first result (the most relevant to your search), and parse the app ID from it.
I'll show how you can do this in the code below (also check it on the online IDE):
const cheerio = require("cheerio");
const axios = require("axios");

const searchQuery = "asphalt 9"; // what you want to search for

const mainOptions = {
  headers: {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.64 Safari/537.36",
  }, // adding the User-Agent header as one way to prevent the request from being blocked
  hl: "en", // parameter defines the language to use for the Google search
  gl: "us", // parameter defines the country to use for the Google search
};

function getAppId() {
  const AXIOS_OPTIONS = {
    headers: mainOptions.headers,
    params: {
      q: searchQuery,
      c: "apps", // parameter defines the category of the search; "apps" means the apps & games category
      hl: mainOptions.hl,
      gl: mainOptions.gl,
    },
  };
  return axios.get(`https://play.google.com/store/search`, AXIOS_OPTIONS).then(function ({ data }) {
    let $ = cheerio.load(data);
    const link = `https://play.google.com${$('[jscontroller="gKWqec"] .ULeU3b .Si6A0c')?.attr("href")}`;
    const appId = link.slice(link.indexOf("?id=") + 4);
    return appId;
  });
}

function getAppInfo(id) {
  const AXIOS_OPTIONS = {
    headers: mainOptions.headers,
    params: {
      id, // parameter defines the ID of a product you want to get the results for
      hl: mainOptions.hl,
      gl: mainOptions.gl,
    },
  };
  return axios.get(`https://play.google.com/store/apps/details`, AXIOS_OPTIONS).then(function ({ data }) {
    let $ = cheerio.load(data);
    return {
      thumbnail: $(".l8YSdd > img")?.attr("srcset")?.slice(0, -3),
    };
  });
}

getAppId().then(getAppInfo).then(console.log);
Output:
{
  "thumbnail": "https://play-lh.googleusercontent.com/PJo-zZiPokt4vUPri7-md-S-adydt9HPf9yfAcuKift7tYTC1cyrhpxmqFPQbuDRrDU=w240-h480-rw"
}
You can read more about scraping Google Play app info in my blog post Web Scraping Google Play App Info with Nodejs.
