Crop image to variable in React - JavaScript

I'm creating an app that detects faces and classifies each face's ethnicity using two Clarifai APIs. The issue is that the ethnicity classifier can only handle one face per photo. My solution is to crop out the faces found by face detection and then classify each face's ethnicity separately.
Then I got stuck: I want to crop each face using the coordinates provided by the face detection and save the faces to variables, so I can run the ethnicity classifier on each one.
I would love to receive some help from you guys. Thank you.
Here is the relevant code:
this.setState({ imageURL: this.state.input });

app1.models
  .predict(
    {
      id: 'ethnicity-demographics-recognition',
      name: 'appearance-multicultural',
      version: 'b2897edbda314615856039fb0c489796',
      type: 'visual-classifier',
    },
    this.state.input
  )
  .then((response) => console.log('ethnic', response))
  .catch((err) => {
    console.log('Clarifai Error:', err);
  });

app2.models
  .predict(
    {
      id: 'face-detection',
      name: 'face-detection',
      version: '6dc7e46bc9124c5c8824be4822abe105',
      type: 'visual-detector',
    },
    this.state.input
  )
  .then((response) => {
    const regions = response.outputs[0].data.regions;
    const box = [];
    regions.forEach((region) => {
      box.push(this.calculateFaceLocation(region));
    });
    this.displayFaceBox(box);
  })
  .catch((err) => {
    console.log('Clarifai Error:', err);
  });
I looked up solutions like HTML canvas and react-image-crop, but I am still unable to save the faces as variables.
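One way to do this in the browser is to draw each detected region onto an offscreen canvas and read it back as a data URL, which you can keep in state like any other variable. Below is a minimal sketch, assuming the boxes from calculateFaceLocation are pixel coordinates of the form { leftCol, topRow, rightCol, bottomRow } (adjust the property names to whatever your helper actually returns):

// Sketch: crop each face to a base64 data URL with an offscreen canvas.
// Assumes boxes hold pixel coordinates { leftCol, topRow, rightCol, bottomRow }.
const cropFaces = (imageURL, boxes) =>
  new Promise((resolve, reject) => {
    const img = new Image();
    img.crossOrigin = 'anonymous'; // required, or the canvas becomes tainted
    img.onload = () => {
      const faces = boxes.map((b) => {
        const w = b.rightCol - b.leftCol;
        const h = b.bottomRow - b.topRow;
        const canvas = document.createElement('canvas');
        canvas.width = w;
        canvas.height = h;
        // Copy just the face region of the image into the canvas.
        canvas.getContext('2d').drawImage(img, b.leftCol, b.topRow, w, h, 0, 0, w, h);
        return canvas.toDataURL('image/jpeg'); // the face, saved as a variable
      });
      resolve(faces);
    };
    img.onerror = reject;
    img.src = imageURL;
  });

Each entry of the resolved array is a base64 data URL you can store in state and classify one at a time; if your Clarifai client accepts base64 input (the older JS SDK took { base64: ... }), you could call app1.models.predict(model, { base64: face.split(',')[1] }) for each face. Note the image must be served with CORS headers, otherwise toDataURL will throw a security error.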

Related

Detect a bad connection in hls.js

I want to display a warning message when the user's connection is not good enough to play the current stream smoothly, but I can't find a way to detect that reliably.
The solution I found is to time the loading of each fragment and use that to decide whether to show the warning, but this approach hurts the performance of the application.
let localLowConnection = false;
const ids: { timerId: ReturnType<typeof setTimeout>; eventId: number }[] = [];

localHlsInstance.on(Hls.Events.FRAG_LOADING, (event: any, data: any) => {
  // Flag a slow connection if this fragment takes more than 1s to load.
  const id = setTimeout(() => {
    if (!localLowConnection) {
      isLowConnection(true);
      localLowConnection = true;
    }
  }, 1000);
  ids.push({ timerId: id, eventId: data.frag.sn });
});

localHlsInstance.on(Hls.Events.FRAG_LOADED, (event: any, data: any) => {
  // `each` is lodash's forEach; cancel the pending timer for this fragment.
  each(ids, (item, index) => {
    if (item.eventId === data.frag.sn) {
      clearTimeout(item.timerId);
    }
  });
  if (localLowConnection) {
    isLowConnection(false);
    localLowConnection = false;
  }
});
This code seems to work, but it shows and then removes the overlay almost instantly.
I found no posts on this topic and nothing explicit in the hls.js documentation for this case. I should also mention that adaptive bitrate is not enabled, so I only have one quality level.
Thank you for your help :)
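One alternative to timing every fragment yourself is to lean on hls.js's own bandwidth estimator: the Hls instance exposes a bandwidthEstimate property, in bits per second. A rough sketch, where the 2 Mbps threshold and the 3-second debounce are assumptions you would tune against your stream's actual bitrate:

// Sketch: poll hls.js's built-in bandwidth estimate instead of timing
// fragments by hand. Threshold and debounce values are assumptions.
const LOW_BANDWIDTH_BPS = 2_000_000;
let lowSince: number | null = null;

const pollId = setInterval(() => {
  const bw = localHlsInstance.bandwidthEstimate; // bits per second
  if (bw && bw < LOW_BANDWIDTH_BPS) {
    lowSince = lowSince ?? Date.now();
    // Only warn after the connection has been bad for a few seconds,
    // so the overlay doesn't flicker on one slow fragment.
    if (Date.now() - lowSince > 3000) isLowConnection(true);
  } else {
    lowSince = null;
    isLowConnection(false);
  }
}, 1000);

// Remember to clearInterval(pollId) when destroying the player.

Debouncing this way should also address the flicker you are seeing: the overlay only toggles after the estimate has stayed on one side of the threshold for a while, instead of reacting to each fragment individually.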

Estimating gas cost programmatically while minting NFT in web3.js

I am building an NFT minting dapp. I used a template, and the gas limit is fixed in its configuration file; however, I want to be able to estimate the fee on the fly. I found .estimateGas(), but I am not sure how to incorporate it into my function. Here is my mint function:
const claimNFTs = () => {
  let cost = CONFIG.WEI_COST;
  let gasLimit = CONFIG.GAS_LIMIT;
  let totalCostWei = String(cost * mintAmount);
  let totalGasLimit = String(gasLimit * mintAmount);
  console.log('Cost: ', totalCostWei);
  console.log('Gas limit: ', totalGasLimit);
  setFeedback(`Minting your ${CONFIG.NFT_NAME}...`);
  setClaimingNft(true);
  blockchain.smartContract.methods
    .mint(blockchain.account, mintAmount)
    .send({
      gasLimit: String(totalGasLimit),
      to: CONFIG.CONTRACT_ADDRESS,
      from: blockchain.account,
      value: totalCostWei,
    })
    .once('error', (err) => {
      console.log(err);
      setFeedback('Sorry, something went wrong please try again later.');
      setClaimingNft(false);
    })
    .then((receipt) => {
      console.log(receipt);
      setFeedback(
        `WOW, the ${CONFIG.NFT_NAME} is yours! go visit Opensea.io to view it.`
      );
      setClaimingNft(false);
      dispatch(fetchData(blockchain.account));
    });
};
With this code the gas fee seems very high, and I want to make sure I am not missing anything that is driving it up.
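In web3.js you can ask the contract method itself for an estimate before sending: contract.methods.myMethod(args).estimateGas({ from, value }) simulates the transaction and resolves with a gas amount. A minimal sketch of how that could replace the fixed CONFIG.GAS_LIMIT (the 1.2 safety margin is an assumption, added because chain state can drift between the estimate and the real send):

// Sketch: estimate gas for this exact mint call, then send with a padded limit.
const mintTx = blockchain.smartContract.methods.mint(blockchain.account, mintAmount);
mintTx
  .estimateGas({ from: blockchain.account, value: totalCostWei })
  .then((gas) => {
    const paddedGas = Math.floor(gas * 1.2); // headroom over the simulation
    return mintTx.send({
      gasLimit: String(paddedGas),
      to: CONFIG.CONTRACT_ADDRESS,
      from: blockchain.account,
      value: totalCostWei,
    });
  });

Also note that the gas limit only caps how much gas the transaction may consume: you pay gasUsed × gas price, and unused gas is refunded. An oversized limit does not by itself make the mint more expensive, though wallets may display an alarming upper bound.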

Getting a TypeError when fetching from Google Vision API

So I'm trying to take a picture and populate the data in a list on a new screen. I'm getting a TypeError after I snap the picture.
Error [TypeError: undefined is not an object (evaluating 'jsonRes.responses[0]')]
Here is my code:
detectText(base64) {
  fetch('https://vision.googleapis.com/v1/images:annotate?key=' + GOOGLE_CLOUD_KEYFILE, {
    method: 'POST',
    body: JSON.stringify({
      requests: [
        {
          image: { content: base64 },
          features: [{ type: 'TEXT_DETECTION' }],
        },
      ],
    }),
  })
    .then((response) => response.json())
    .then((jsonRes) => {
      let text = jsonRes.responses[0].fullTextAnnotation.text;
      this.props.navigation.navigate('ContactScreen', { text: text });
    })
    .catch((err) => {
      console.log('Error', err);
    });
}
When I snap the picture, the data should populate a picker select list.
This has been super frustrating; I've been trying for over two hours now. Any help would be greatly appreciated.
Turns out Google Vision API was changed recently. Issue resolved.
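Independently of the API change, this particular TypeError means jsonRes.responses was undefined, which usually happens when the API returns an error body (bad key, malformed base64, quota exceeded) instead of a results array. A small defensive version of the response handler, slotted into the existing fetch chain, that logs what actually came back before indexing into it:

.then((response) => response.json())
.then((jsonRes) => {
  // A rejected request typically carries an `error` object instead of
  // a `responses` array, so surface it rather than crash.
  if (!jsonRes.responses || !jsonRes.responses[0]) {
    console.log('Vision API error:', JSON.stringify(jsonRes));
    return;
  }
  const annotation = jsonRes.responses[0].fullTextAnnotation;
  if (!annotation) {
    console.log('No text detected in this image');
    return;
  }
  this.props.navigation.navigate('ContactScreen', { text: annotation.text });
})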

React Native: detect faces from a list of images

I want to detect faces in a list of images taken from the photo albums.
Here is my code:
// get all images from a particular album
await CameraRoll.getPhotos({
  first: count,
  after,
  assetType: 'Photos',
  groupTypes: 'All',
  groupName: this.props.navigation.state.params.album,
}).then((r) => {
  this.setState({ data: r.edges, isLoading: false });
  this.state.data.map((p, index) => {
    this.getFaces(p.node.image.uri);
  });
});
// detect faces in each image
async getFaces(uri) {
  await FaceDetector.detectFacesAsync(uri).then((res) => {
    if (res.faces.length > 0) {
      console.log('Has faces: ' + uri);
      this.state.faceImage.push(uri); // array of face images, shown in a FlatList
    } else {
      console.log('No faces: ' + uri);
    }
  });
  this.setState({
    faceImage: this.state.faceImage,
    isLoading: false,
  });
}
Everything works correctly, but when the image array is large the app freezes and then closes, and only on Android devices.
Try splitting the work into smaller batches when the array gets too big.
Have you tried saving the results to a file and loading them from there?
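Building on the batching suggestion above, here is a sketch of how the detection could be throttled: process the URIs in small sequential batches instead of firing detectFacesAsync for every image at once, and call setState once at the end rather than pushing into this.state. The batch size of 5 is an assumption to tune on a real device:

// Sketch: detect faces in small sequential batches to limit memory pressure.
async getFacesInBatches(uris, batchSize = 5) {
  const faceImages = [];
  for (let i = 0; i < uris.length; i += batchSize) {
    const batch = uris.slice(i, i + batchSize);
    // Run one small batch in parallel, then wait before starting the next.
    const results = await Promise.all(
      batch.map((uri) => FaceDetector.detectFacesAsync(uri))
    );
    results.forEach((res, j) => {
      if (res.faces.length > 0) faceImages.push(batch[j]);
    });
  }
  // One state update for the whole run instead of a mutation per image.
  this.setState({ faceImage: faceImages, isLoading: false });
}

Calling this once with all the URIs (instead of calling getFaces inside map) keeps at most batchSize detections in flight, which should avoid the Android crash if it is memory-related.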

How to crop image in React Native

I'm using react-native-camera to take photos. The photos it takes have a 16:9 ratio, but I need them in 4:3.
So what I want is to crop the image to, for example, 1920×1440.
I use ImageEditor from React Native. The code of ImageEditor can be found here.
My code is as follows:
this.camera.capture(target)
  .then((data) => {
    let cropData = {
      offset: { x: 0, y: 0 },
      size: { width: 1920, height: 1440 },
    };
    ImageEditor.cropImage(
      data.path,
      cropData,
      (uri) => {
        this.props.captured(this.props.request, {
          path: uri,
          data: data.data,
          longitude: position.coords.longitude,
          latitude: position.coords.latitude,
        });
        Actions.pop();
      },
      (error) => {}
    );
  })
  .catch((err) => {
    console.error(err);
  });
But the code above doesn't work: the saved photo isn't cropped, and it is 1440×2560.
Any advice?
It's not entirely clear what happens with your code in captured(), but I think the problem is that you're passing the original data to this.props.captured. My understanding from the docs is that:
If the cropping process is successful, the resultant cropped image will be stored in the ImageStore, and the URI returned in the success callback will point to the image in the store.
So instead of reusing data, you should read the cropped image back from uri.
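A sketch of what that could look like, assuming the classic React Native ImageStore API (iOS-only, and since deprecated together with ImageEditor in favor of @react-native-community/image-editor): read the cropped bytes back out of the store via the success-callback URI instead of reusing the original capture's data.

import { ImageEditor, ImageStore } from 'react-native';

ImageEditor.cropImage(
  data.path,
  cropData,
  (croppedUri) => {
    // The cropped image now lives in the ImageStore; fetch its base64
    // data instead of passing along the uncropped data.data.
    ImageStore.getBase64ForTag(
      croppedUri,
      (base64) => {
        this.props.captured(this.props.request, {
          path: croppedUri,
          data: base64, // cropped image data
          longitude: position.coords.longitude,
          latitude: position.coords.latitude,
        });
        Actions.pop();
      },
      (err) => console.error(err)
    );
  },
  (error) => console.error(error)
);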
