Extract frames from a gif using a light browser JS library (like omggif) - javascript

I'd like to extract frames from a gif file in the browser. More specifically, given the url of a gif (gifUrl: string), I'd like to download it and obtain it as an array of frames (imageList: ImageData[]). I'll be using putImageData on them at various coordinates of a canvas. I'd also like the solution to be lightweight.
On BundlePhobia, omggif takes about 50 ms to download over emerging 3G; all the alternatives I've seen so far are closer to 700 ms. However, omggif only offers basic low-level interactions, and common recipes like getting the gif as an array of ImageData are missing.
The best documentation I've found for omggif so far is omggif's types in the DefinitelyTyped project.
There's also movableink's example (awaiting review in a PR since January 2019).
I use TypeScript and am thus interested in typed recipes if possible.
Related questions:
How to extract frames from an animated gif using javascript? [closed]
GIF animation on canvas with frame control

Here's how you do it:
import { GifReader } from 'omggif';

export const loadGifFrameList = async (
  gifUrl: string,
): Promise<ImageData[]> => {
  const response = await fetch(gifUrl);
  const blob = await response.blob();
  const arrayBuffer = await blob.arrayBuffer();
  const intArray = new Uint8Array(arrayBuffer);

  // omggif's typings expect a Buffer, but a Uint8Array works at runtime
  const reader = new GifReader(intArray as Buffer);

  // decodeAndBlitFrameRGBA blits onto the gif's full logical screen,
  // so size each ImageData from the reader, not from a single frame
  return new Array(reader.numFrames()).fill(0).map((_, k) => {
    const image = new ImageData(reader.width, reader.height);
    reader.decodeAndBlitFrameRGBA(k, image.data as any);
    return image;
  });
};
If you need transparency, you might want to use canvases instead, as they can then be drawn with ctx.drawImage(canvas, x, y):
import { GifReader } from 'omggif';

export const loadGifFrameList = async (
  gifUrl: string,
): Promise<HTMLCanvasElement[]> => {
  const response = await fetch(gifUrl);
  const blob = await response.blob();
  const arrayBuffer = await blob.arrayBuffer();
  const intArray = new Uint8Array(arrayBuffer);

  const reader = new GifReader(intArray as Buffer);

  return new Array(reader.numFrames()).fill(0).map((_, k) => {
    const image = new ImageData(reader.width, reader.height);
    reader.decodeAndBlitFrameRGBA(k, image.data as any);

    // wrap each frame in its own canvas so it can later be composited
    // with transparency via ctx.drawImage
    const canvas = document.createElement('canvas');
    canvas.width = reader.width;
    canvas.height = reader.height;
    canvas.getContext('2d')!.putImageData(image, 0, 0);
    return canvas;
  });
};
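A minimal usage sketch for the canvas variant (the 'viewport' element id and the coordinates are made up for illustration; run it inside an async function):

const frames = await loadGifFrameList('https://example.com/animation.gif');
const target = document.getElementById('viewport') as HTMLCanvasElement;
const ctx = target.getContext('2d')!;
frames.forEach((frame, i) => {
  // place each frame at its own coordinates; with the ImageData variant,
  // use ctx.putImageData(frame, x, y) instead
  ctx.drawImage(frame, i * 10, 0);
});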


loadLayersModel() or loadGraphModel() for TensorflowJS

So I followed a tutorial on converting a TensorFlow model (downloaded from TensorFlow Hub) to a tfjs model with binary and JSON files. When I try to use loadLayersModel(), it throws an error. When I try to use loadGraphModel(), it loads and runs, but the predictions don't seem to work or be meaningful in any way. How can I tell which method to use to load a model? I need some direction on how to troubleshoot, or a recommended workflow, particularly in TensorFlow.js. I am using this in a React Native project with Expo; not sure if that matters.
TensorFlow Hub: https://tfhub.dev/google/aiy/vision/classifier/food_V1/1
My code:
import * as tf from '@tensorflow/tfjs'
import { bundleResourceIO, decodeJpeg } from '@tensorflow/tfjs-react-native'
import * as FileSystem from 'expo-file-system';
const modelJSON = require('./TFJS/model.json')
const modelWeights = require('./TFJS/group1-shard1of1.bin')
export const loadModel = async () => {
const model = await tf.loadLayersModel(
bundleResourceIO(modelJSON, modelWeights)
).catch((e) => {
console.log("[LOADING ERROR] info:", e)
})
return model
}
export const transformImageToTensor = async (uri)=>{
//read the image as base64
const img64 = await FileSystem.readAsStringAsync(uri, {encoding:FileSystem.EncodingType.Base64})
const imgBuffer = tf.util.encodeString(img64, 'base64').buffer
const raw = new Uint8Array(imgBuffer)
let imgTensor = decodeJpeg(raw)
const scalar = tf.scalar(255)
//resize the image
imgTensor = tf.image.resizeNearestNeighbor(imgTensor, [192, 192])
//normalize; if a normalization layer is in the model, this step can be skipped
const tensorScaled = imgTensor.div(scalar)
//final shape of the tensor
const img = tf.reshape(tensorScaled, [1,192,192,3])
return img
}
export const getPredictions = async (image)=>{
await tf.ready()
const model = await loadModel()
const tensor_image = await transformImageToTensor(image)
// const predictions = await makePredictions(1, model, tensor_image)
const prediction = await model.predict(tensor_image)
console.log(prediction)
console.log(prediction.dataSync()[0])
return prediction
}
If it's a layers model, you use loadLayersModel, and if it's a graph model, you use loadGraphModel. They are NOT interchangeable, and you CANNOT load a model using the other method.
So if it loads using loadGraphModel, it is a graph model, as simple as that.
"the predictions don't seem to work or be meaningful in any way"
This does not help. What do you expect, and what do you actually get?
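For reference, a minimal sketch of loading and running the converted graph model (inputTensor stands for the output of transformImageToTensor; the top-1 decoding is an assumption, not something from the question):

// sketch only: graph models are also run through predict()
const model = await tf.loadGraphModel(bundleResourceIO(modelJSON, modelWeights));
const output = model.predict(inputTensor) as tf.Tensor;
const probabilities = await output.data();
// index of the highest-scoring class; map it to the model's label file yourself
const top = probabilities.indexOf(Math.max(...probabilities));
console.log('top class index:', top);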

PDFtron latest version: imported base64 images are low quality

I have a project in Laravel 8 with Vue 2 where I made an implementation of PDFtron: I save the signatures a user creates into a database as base64 images. So far all good; the problem comes when I load those images from the database. Although I can see them in the signature tool, they are extremely low quality, and I can no longer change the color of the image, which doesn't happen on the demo sites. I think that when they get exported they become something that PDFtron can no longer manipulate as a newly created signature. So I would like to know if someone else has this problem and a possible solution for it; I'll put the code I'm using and the images for you here.
(Screenshots omitted: above, the saved signature; below, a new signature.)
let that = this;
const viewerElement = document.getElementById('webviewer');
WebViewer({
path: '/js/WebViewer/lib',
initialDoc: this.initialDoc,
extension: 'pdf',
}, viewerElement)
.then((instance) => {
const { documentViewer, annotationManager } = instance.Core;
const signatureTool = documentViewer.getTool('AnnotationCreateSignature');
documentViewer.addEventListener('documentLoaded', () => {
instance.UI.setLanguage(that.locale);
let signatures = JSON.parse(that.savedSignatures);
signatures = signatures.map(a => a.base64_signature);
signatureTool.importSignatures(signatures.map(s => 'data:image/png;base64,' + s)); // one data URL per saved signature
document.getElementById('app').setAttribute('style', 'padding: 0');
document.getElementById('loader-container').classList.add('d-none');
document.getElementById('all-pages-content').removeAttribute('style')
document.getElementById('downloadButton').setAttribute('style', 'visibility: visible')
document.getElementById('pdf-ui').setAttribute('style', 'visibility: visible')
});
documentViewer.addEventListener('annotationsLoaded', async () => {
annotationManager.addEventListener('annotationDrawn', async (annotationList) => {
console.log('1')
annotationList.forEach(annotation => {
if (annotation.Subject === "Signature")
that.extractAnnotationSignature(annotation, documentViewer);
})
})
});
let saveSignedPdf = document.getElementById('downloadButton');
saveSignedPdf.addEventListener('click', async () => {
const doc = documentViewer.getDocument();
const xfdfString = await annotationManager.exportAnnotations();
const data = await doc.getFileData({
// saves the document with annotations in it
xfdfString
});
const arr = new Uint8Array(data);
const blob = new Blob([arr], {type: 'application/pdf'});
await that.processDocument(blob)
// Add code for handling Blob here
})
// instance.disableElements(['downloadButton', 'printButton']);
// instance.disableElements(['toolbarGroup-Insert']);
return instance
});
And here is the code I use to export the images, taken from the official docs:
async extractAnnotationSignature(annotation, docViewer) {
let that = this;
// Create a new Canvas to draw the Annotation on
const canvas = document.createElement('canvas');
// Reference the annotation from the Document
const pageMatrix = docViewer.getDocument().getPageMatrix(annotation.PageNumber);
// Set the height & width of the canvas to match the annotation
canvas.height = annotation.Height;
canvas.width = annotation.Width;
const ctx = canvas.getContext('2d');
// Translate the Annotation to the Top Left Corner of the Canvas, i.e. (0, 0)
ctx.translate(-annotation.X, -annotation.Y);
// Draw the Annotation onto the Canvas
annotation.draw(ctx, pageMatrix);
// Convert the Canvas to a Blob Object for Upload
canvas.toBlob((blob) => {
let formData = new FormData();
formData.append('signature', blob);
formData.append('customer_id', that.customerId);
const config = {
headers: {
'content-type': 'multipart/form-data',
'X-CSRF-TOKEN': document.querySelector('meta[name="csrf-token"]').content,
}
}
axios.post(that.saveSignatureUrl, formData, config)
.then(function (response) {
if (!response.data.success) {
console.log("could not save signature for future use")
}
else {
console.log("saved signature for future use")
}
})
.catch(function (error) {
that.output = error;
console.log("could not reach backend")
});
});
}
I know you will see several flaws in this code; I'm a rookie, so please bear with me. Thank you for your time, and any help is appreciated.
Test base64 string generated with this code:
iVBORw0KGgoAAAANSUhEUgAAAMgAAAB/CAYAAACql41TAAAS0UlEQVR4Xu1dC5BU1Zk+59yenlGUpz3dvAxgVkAeJUYcEzdoBaOLy3Qju8Nu0GgeCoJAYcwCZqbHm+kZCFJmV0yMsFsxtUnAMAnQ3UMsTCCmjEkQJSwPWZIViDow3S3ER9AZpu85+/UwYwSGmenu29333v5vVVe/7vnP/3//+e55/4czuggBQuCiCHDChhAgBC6OABGESgch8DEEAsGWCsnYC5zxWyMh70tEECoehEAnAnfocZ/LUL9VjNdFQ+U/SP1MBKHiQQh0IuAPtjSBHK9GQ95Hu0AhglDxIASAgL82vpIpNQ7NqtkfB4QIQsWj6BHwV5+oYkKsKtXKbmjUB5wighR9kSAAuhAIVJ8cqURyDzobcyN13l+cjwzVIFRWihqBVL+DKb4rUu8NdQcEEaSoi0dxG++viQUZVxWRkG/mxZAgghR3GSla6ytrWj6HZlWjkCXXhhuGvEkEKdqiQIafj8C8daqkpTm+VxmqPtrg29gTQlSDUPkpOgTQ71iHmfJkOOR9sDfjiSC9IUT/OwoBfzB2Nwxadp1Wfq2uc6wq6fkigvSGEP3vGARmVx//RFJoexWTs6Khob/ui2FEkL6gRPc4AoHUkC5X4nfh+vKGvhpEBOkrUnSfrRHw17Q8wjj/LJaS3JGOIUSQdNCie22JgL86Po0JFTU0NmWb7j2SjhFEkHTQontth0DVJqW17ovt5ZyvRu3xo3QNIIKkixjdbysEKmtiT3POFMixIBPFiSCZoEZpbIEAhnTnMcXmX+cqn9qXId3ujCKC2MLVpGS6CFQGT0zFiNUuwY2pW0PDXk03fdf9RJBMkaN0lkXgFv1Xrv7Ja3Zzrr4bDvn+KxtFiSDZoEdpLYkAFiJ+X3D+YV+WkvRmABGkN4Tof1shgKgkKxQXMyJ15TeboTgRxAwUSYYlEAjUxP9ZcfWES7o/s7lh0J/NUIoIYgaKJKPgCPiD8SlMyZekYJVNdb4dZilEBDELSZJTMAQq9eNXckPDfnLxeCTkWW+mIkQQM9EkWXlHoGr5qQFt7uQvUHtsidT7VpmtABHEbERJXl4RqAzGfon4h7sRCfGRXGRMBMkFqiQzLwig3/EzBF2IIVzPwlxlSATJFbIkN6cIBIKxZxRnJSBHaodgzi4iSM6gJcG5QgBrrFYzpq5BuJ7KXOXRJZcIkmuESb6pCARqY/OlZIuSLtdNz+lD3jNVeDfCiCC5Rpjkm4YAlq7fhqXrm4XB/37ryvK9pgnuQRARJB8oUx5ZI+DX459khnoJBHkgXOfdkrXAPgoggvQRKLqtsAgg4MILiKG7DTF01+RTEyJIPtGmvDJCAOR4EltmB6DmuCcjAVkkIoJkAR4lzT0CHbsCOVtQKt6uaNQnnMl9jufmQATJN+KUX58RmBmMfVow/iJeFVtDnox3BfY5QxrFygYqSptPBKp05T4j469KxR/vOlAzn/l35UU1SCFQpzx7RaCyNv59LtUH6JQv6vXmHN5ABMkhuCQ6MwRwsM0CxdSXovW+iswkmJeKCGIeliTJBAT81cevZ1x7GWFCPxUJlf/BBJFZiSCCZAUfJTYbAYxavQKZ6xHozdSNT5nqSQTJFDlKZzoCIMf3FOMudMrvN114hgKJIBkCR8nMRQChehZzLr54nea5MdMoiOZqdFYaESQXqJLMtBBIHaiJNVap6OsV23TfgbQS5/hmIkiOASbxPSNwhx73lRjq95LxZWhabbIaXkQQq3mkyPRBv+PnijHsKfc+akXTiSBW9EqR6IRts2tAjlEYsaqyqslEEKt6xuF64Ui0+zHXsZS1iRsjj3net6q5RBCresbBelVWx/6JC4ao6/xzVpgM7AlqIoiDC6IVTZulJ26RSblTCPX5rSaGCM2VrUSQXCFLci9AwK/HJmPb7E7GxQJEX2+0A0REEDt4yQE6ztZPjkgayZ2YDHw8XOdZZxeTiCB28ZSN9URw6Uu5IXZixCoSDflW2skUIoidvGVTXQPB+Dal1CHs7fi63UwggtjNYzbTtzIYj+AU5hOY65hvM9U71CWC2NFrNtEZ0UiaUMTezPSMciuYSQSxghccpoOuK7HHiDehcB014yDNQsJDBCkk+g7Mu+MIZmNClCn1eqH3k5sBLxHEDBRJRgcC89apkthb8SaQ43C43rfECbAQQZzgRQvYMGOtKi1JgBxcHYzU+ZZaQCVTVCCCmAJjcQu5U28pNwyxCUcw74nWeb/mJDSIIE7yZgFsqdTfHseSyUbB+WZ0yC25pyMbWIgg2aBX5GkDeuwzymCN6HM8hhNmn3AiHEQQJ3o1DzbNrG2pFBLk4Pw+zHP8KA9ZFiQLIkhBYLd3pv7q+L1MyKe4UlXh+qE/t7c1PWtPBHGyd3NgW6C25SGlxEOciapw6IpdOcjCUiKJIJZyh7WVqaxtqeeKz0TE9aqm+vI/WVtbc7QjgpiDo+OlVAZb/pMzPqZUK6tq1AeccrzBnQYSQYrF0xnaOUM/2d9ltG8EOU6hM/7FDMXYNhkRxLauy73i2Og0jie1jVjz/TzIsTz3OVovByKI9XxiCY3OhgPlGzHHsdKpcxx9AZoI0heUiuweRDu8G/s4nkHNMdcuwRVy5SIiSK6QzUKuf1niclkqx3DFRnMhPNiu6uFKlqPQXoEgz/2wt7sfCm8/JtlleC9FVhpjynX2XaQ+iyyy74e0l+KEp3fR7+juVNlWNLdGZyHfVkmJIAV0V2qRXzLJJiLSxzUo1BMwK30NmjRjodJlIMVRkOEI5hviirMElzKhOL8avz0vJTuN99Ngwl+TmtamktLgzJ1krg8M1W4YblUmMzGrXVN1grMpTMkHNUN7ozsZm1d6TmQi265piCB58Nwt+tGyy42ySYJrE6WUeOcTUAtMBPhlKPypcP8HUShfQ21wkAnX4YjuOZ4HtT7KYt48VdLijW9An4O7hWduo95tzZFPlSyTFxHEZFdU1rwzmonWyVyJSagVJqOZMhHNlatR+Pcz1UGG/ZhsOyBdyQNRfVi3T2mTVepR3Cz9xChliA2ouV5xyiYnM/EjgmSIpl9/Y5iU7vFCifEAcTxqgokoZJPRT3hfMQky8P2Ci33KkAciK8v343dUGta60Bm/CcTdoKR8Klo/dLW1tLOGNkSQi/jhFl25Bre/M7xdJEdxrkYxqfDOR6E2uApJxuOFdr46lHoprr3GpDyYNNT+51YNTVjDtT1rgXA8cxCOZyPuutfJq3Gz9YXjCFK1/NSADy5pH6IxMRgd2yFSqSFC8cvRAS5VzCjDAE+pErwUh9SX4aGO31gZ+gSlUrHLAYanY6SIKQ9+H4jPzSDAMTz7j2Fc6BhqiGNIc5QlxWuRld5YtuAXKn2gNoEFh3IZ4xLDuEN/VSg97JCvrQji/0bMq9xqNDfYKDRhhiOE/nCFdxTi4WgqDEfbfjie8B/iO9YK8ZMw7iS+pz6/h6XZbZLzNvQJWvHwb0Nhb1NMa+WCt2IItU0p9lfUEAmpiYSrzZ3Ysqr/STs4MF0dMQH
4bQwdf1ZrP3PXllUj/phu+mK733IECVSfHCl5Ek0YDHcKPlpgLgCFdwwIgLF31YZCfARDoMcw5NnMJWvGE70ZQ6HNSU01y8Hlzc8t4W3F5sS+2JtqMvY3EhtQO5a4B5XPbfwax4OErt4QKBhBAtXNI5mmXYsa4Gwnlym8i3F4P43vh9DAP4ymzREYcJTx5NHSfpccaVwx+N3eDKL/L0QAtcZoYLoBNcfucF25I8Lx5MvPeSHI7fqbg91J9/XoB1yPJ9j1aPLgpdwwcg9eh/DbIRwgf0hLtv8vqn1HNm3y5dDz8/FXH5+GyfUf4/cn0Rl/rFB62DXfnBCkqmqT1jr25umY7Z0OYFKvsWgWvcKFtjv1LjX1yjbdm6od6MohAmfXVKkfYsDhLhw7sCGHWTlWtGkEmaX/ZaA0zvwrkPLDKdNRW7yMzzu4MnaEQ8NedCyCFjUsUNNSLTm7H8PPd0cbhv3GompaXq2sCRKojd2JWgHE4HPwehbBw36qCfeOrfqgdyxvvUMVRM2xDoMa47iRvDvcMPxNh5qZF7MyJkjnMb7/htriLSwifbb0jPaTxtXUic6L1y6SSecxZ8+gaXsiXOe9p5C6OCXvtAnS2a5dlppEQ82xJlrv2+kUMOxsh7829nms9XoGD6ynIyFfvZ1tsZLufSbIjMV/KnUPHPB0an5CcdmAGdhtVjKkmHVBQIUlmACtx3qwLyM27s+KGQuzbe8TQe4Mnvy0wZI4mZT/Gge/LzZbCZKXOQKp/gaaVNdimfyXw/qQ1zKXRCm7Q6BXgnRW3dsxT/GVaKj8BwSjNRCo1BPXYT3ZOiyZ2WPX8/+sgWTPWvRIkFn62zfIpPFLrHm6F52+LXYwqBh0DAQT92Gx4Xr4ZQH8Ypszx+3om4sS5I5H3rra5XLvwCRTkGoO67gWTarvoalboZiYHw1dsds6mjlTk4sSBI7YjI7fy+FQ+becabq9rJoVTHwKgyNPoam7v/SQZ35jI9Y005VzBLolSOdQ7hIMF96Qcw0og14RCNTGH8SQ+lqsal4YracmVa+AmXjDBQRBNL1LueE6LLn8UlOdb4eJeZGoNBFIhf9hpfIpzG+Mkbx9YVNoxP+kKYJuzxKBCwiCw99rUvsvMAH4lSxlU/IsEOgYPWQM5JCRSGjow1mIoqRZIHAOQap05W4z4s3MYNOxpXRfFnIpaRYI+GtiQaylWoKFhgsjDUMbsxBFSbNE4ByCoO+BJSRsUjFG8c4SR1OS/6MeG6MZ7DtYrWBwaSykhYamwJqVkHMIEgjGjhlSzmlqGJpaqk5XHhEA9vegaZsixyoEi16Vx6wpqx4Q+Igg6HugzyFmYynJTEIsfwhU6QfdbckhIAavwH77RRhWp70z+YO/15z+RpDa2IuI/bQGT69Ir6noBlMQ6DhiQPAnsff+BewVX4RNZpYLLmeKoTYW0kGQQLBlEiagwuh7jLGxLbZS3V+TCHIul0ouFkXrPKkAbnRZEIGzBKlt+ReMta8Oh3yjLKijo1SaWRP/O42r76CqaOXStSjcMIR2/FnYwx0ESe31KBnY/22cLnFTRKfh3Vz5C02qr6KfsRYPIz1S712Tq3xIrnkIfKyTHnsAIyiLtWRyGoXeMQ/glKRZ+tGBKnnpWgS7G6c0uSSq+35vbg4kLVcInDPMmzoHG3FsbxSamrtF98VzlWkxyQWmAYREXQugN4VDXuzhp8tOCFy4FisY+yaaAXMQrPn+qF5O4WKy8CYCW/wHZsQDSmmLo/WepixEUdICIdD9at7q+L1MqCexW40mrTJwzMzalumoif+9IzZYK38o8pjn/QzEUBILIHDR/SBdyx4w/HuJYHIFRrh2WUBfy6uAWmMliPFVxcVSGr61vLt6VbD3PenB2APYOJXaNPXdKZonqOuIqU7XBQhUBhM3cyYfR637R83FllIfzhmFpFeCpMyc/Y3EUEOTKzF2f7OS7NFog/eHzjDfHCsQdmcVgLwPD5KH0RH/b3OkkhQrINAngnQpmtqjgMUQ30Sz610QpSHaUNydeH9N/B8wNL4aHfF9mqYeplrDCkXaXB3SIkhX1me3gMqvY4HdH5Tk3y42ovhXY6ffabkaE34zpFQrmhp8PzHXLSTNKghkRJAu5TEzvBhDwg/h+14s1X4a+6Wft4phudKjM+ROPU6/evY9cXrFC/poHOlGl1MRyIogHzW9gvFFOPfvHsbFAJwF+GPVfmZD5Fsj/89JoPn1+DRuqFrJuIbzbYPFVms6yZfp2GIKQT5qegVbKtDsuAuHZd4Fwbs5F9uZENvtHBKzc6VzaqflNLwasOJ5fToA0732RsBUgnwcCuyrno3If7eh6XU7+io4UZZtx4Gc24VRtsvqJ8h2BOoeNPA2JY0vYE4DwRP4E5E6TwPt17B3Yc9E+5wR5ByyBONTkNHtkslbMRQ6Ff+dQMDll9HB3S3w7h3h3bN+Pm/PxIBs01Tp8ctamZzIDT4ZRJiEUamUfhWpQN14/6kv5lm3fn1hdMvWNkqfPQJ5Icj5auK888lKqKlC8KmoYVLB6aagYB7Elt/9qGn249zyA4aLHzDrHMMUCdrOqDFYXzZGCDFaMXkVmoJjMTx7NQKylYO0B/G+D92LfYKpV91aYlejPuFM9vCSBLsjUBCCnA9a1Salte9NTMLhnhPxX2p340TMSuMzH4KZ6Y4TcLE19RDI9LrG+Os8WXrsbDNN8arlf+n/YUmynLuYlxtyBJpBw5FmBNJeic9X4vMnILMf7j2KeLZHUHMdQTidoyDjYRDlMOJ/4TNdhED3CFiCIBdzzq3LXx9wiavfeKGJ8bhnrFTqk5iovApP/pH4Pggvgdd7eKWW5sdAApx6xZox9PwWlpi/ge2sf+aG8QZifOE/ugiB9BGwNEF6MidV6zTOoQDO6bucUqSDwP8DoZsT2sPJFDkAAAAASUVORK5CYII=
I also tried exportSignatures, and I got the same base64 string for the already saved signature that looks blurry, but for new ones I got this, which I don't quite understand, nor how to use it to reconstruct the images:
[[[{"x":181.15942028985506,"y":1.4492753623188406},{"x":178.2608695652174,"y":1.4492753623188406},{"x":168.1159420289855,"y":7.246376811594203},{"x":150.7246376811594,"y":23.18840579710145},{"x":128.9855072463768,"y":40.57971014492754},{"x":85.5072463768116,"y":66.66666666666667},{"x":56.52173913043478,"y":82.6086956521739},{"x":17.391304347826086,"y":98.55072463768116},{"x":2.898550724637681,"y":104.34782608695652},{"x":1.4492753623188406,"y":104.34782608695652},{"x":1.4492753623188406,"y":102.89855072463769},{"x":4.3478260869565215,"y":89.85507246376811},{"x":17.391304347826086,"y":65.21739130434783},{"x":24.63768115942029,"y":57.971014492753625},{"x":44.927536231884055,"y":43.47826086956522},{"x":50.72463768115942,"y":43.47826086956522},{"x":59.42028985507246,"y":43.47826086956522},{"x":73.91304347826087,"y":50.72463768115942},{"x":79.71014492753623,"y":57.971014492753625},{"x":95.65217391304348,"y":73.91304347826087},{"x":123.18840579710145,"y":97.10144927536231},{"x":149.2753623188406,"y":115.94202898550725},{"x":176.81159420289856,"y":128.9855072463768},{"x":182.6086956521739,"y":130.43478260869566},{"x":194.20289855072463,"y":130.43478260869566},{"x":198.55072463768116,"y":130.43478260869566}]]]
Hello David Gabriel Lopez Duarte,
We were able to reproduce this issue; it is likely because our canvas isn't being adjusted to match the zoom level. This has been reported before and is in our backlog!
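Until that is fixed, a possible workaround is to rasterize the export canvas at a higher resolution. A sketch, assuming docViewer.getZoomLevel() is available in your WebViewer version; the devicePixelRatio factor is an extra assumption:

// sketch: scale the export canvas so the signature is not captured
// at the low on-screen resolution
const scale = docViewer.getZoomLevel() * (window.devicePixelRatio || 1);
canvas.height = annotation.Height * scale;
canvas.width = annotation.Width * scale;
const ctx = canvas.getContext('2d');
ctx.scale(scale, scale);
ctx.translate(-annotation.X, -annotation.Y);
annotation.draw(ctx, pageMatrix);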

I am implementing the Screen Capture API and I want to combine both the primary and extended monitors for streaming and taking screenshots. How can I do that?

I am implementing the Screen Capture API and I want to combine both the primary and the extended monitor for streaming and taking screenshots. Here is how I capture one screen and save the screenshot; I want to enable the same for the extended display as well.
screenshot = async() => {
const mediaDevices = navigator.mediaDevices as any;
let displayMediaOptions = {
video: {
mediaSource: 'screen'
}
}
const stream = await mediaDevices.getDisplayMedia(displayMediaOptions);
console.log('stream', stream.getTracks())
const track = stream.getVideoTracks()[0]
// init Image Capture and not Video stream
console.log(track)
const imageCapture = new ImageCapture(track)
const bitmap = await imageCapture.grabFrame()
// destroy the video track to prevent more recording / a memory leak
track.stop()
const canvas = document.getElementById('fake')
// this could be a document.createElement('canvas') if you want
// draw weird image type to canvas so we can get a useful image
canvas.width = bitmap.width
canvas.height = bitmap.height
const context = canvas.getContext('2d')
context.drawImage(bitmap, 0, 0, bitmap.width, bitmap.height)
const image = canvas.toDataURL('image/jpeg') // toDataURL defaults to PNG; request JPEG to match the File type below
// this turns the base 64 string to a [File] object
const res = await fetch(image)
const buff = await res.arrayBuffer()
// clone so we can rename, and put into array for easy processing
const file = [
new File([buff], `photo_${new Date()}.jpg`, {
type: 'image/jpeg',
}),
]
return file
}
I can get the screenshot from a single screen and attach it to a canvas. How can I do the same for the extended display as well?
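getDisplayMedia only captures one surface per call, so one approach is to prompt the user twice, picking a different monitor each time, and composite both frames onto a single canvas. A sketch under those assumptions (ImageCapture is Chromium-only; run this inside an async function):

// sketch: grab one frame per user-selected display
const grabDisplayFrame = async () => {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const track = stream.getVideoTracks()[0];
  const bitmap = await new ImageCapture(track).grabFrame();
  track.stop();
  return bitmap;
};
const primary = await grabDisplayFrame();  // user picks the primary monitor
const extended = await grabDisplayFrame(); // user picks the extended monitor
// draw both frames side by side on one canvas
const combined = document.createElement('canvas');
combined.width = primary.width + extended.width;
combined.height = Math.max(primary.height, extended.height);
const ctx = combined.getContext('2d');
ctx.drawImage(primary, 0, 0);
ctx.drawImage(extended, primary.width, 0);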

Transfer Learning Tensorflow.js size/shape error

I am trying to apply transfer learning by using a knnClassifier and the MobileNet image recognition model in TensorFlow.js. I am, however, receiving the following error:
Size(28672) must match the product of shape 28,3072
I don't know how to tackle this issue; I've tried creating a tensor3d and resizing using bilinear and nearest-neighbor interpolation, but to no avail. I was wondering if someone here could check this out.
Note that my idea here is to train on images from certain folders and assign them to their classes using the addExample method of the knnClassifier. I have a function that reads an image from a path, and an async function that trains the model and makes a prediction from an image.
const tf = require('@tensorflow/tfjs');
//MobileNet: pre-trained model for TensorFlow.js
const mobilenet = require('@tensorflow-models/mobilenet');
//This module provides native TensorFlow execution
//in backend JavaScript applications under the Node.js runtime.
const tfnode = require('@tensorflow/tfjs-node');
const knnClassifier = require('@tensorflow-models/knn-classifier');
var glob = require('glob')
//The fs module provides an API for interacting with the file system.
const fs = require('fs');
const readImage = path => {
//reads the entire contents of a file.
//readFileSync() is synchronous and blocks execution until finished.
const imageBuffer = fs.readFileSync(path);
//Given the encoded bytes of an image,
//it returns a 3D or 4D tensor of the decoded image. Supports BMP, GIF, JPEG and PNG formats.
var tfimage = tfnode.node.decodeImage(imageBuffer);
// const t3d = tf.tensor3d(Array.from(tfimage.dataSync()),[tfimage.shape[0], tfimage.shape[1], 1])
const smalImg = tf.image.resizeNearestNeighbor(tfimage, [32, 32]);
const resized = tf.cast(smalImg, 'float32');
// t3d.reshape([32,32,3])
// var smalImg = tf.image.resizeBilinear(tfimage, [368, 432]);
// const resized = tf.cast(smalImg, 'float32');
return resized;
}
var mainDirectory = "./img_samples/";
const imageClassification = async path => {
const classifier = await knnClassifier.create();
const image = await readImage(path);
// Load the model.
const model = await mobilenet.load();
// Classify the image.
const predictions = await model.classify(image);
// print results on terminal
console.log('Classification Results:', predictions);
var folders = fs.readdirSync(mainDirectory);
var filesPerClass = [];
for (var i = 0; i < folders.length; i++) {
  const files = fs.readdirSync(mainDirectory + folders[i]);
  var files_complete = [];
  for (var j = 0; j < files.length; j++) {
    files_complete.push(mainDirectory + folders[i] + "/" + files[j]);
  }
  filesPerClass.push(files_complete);
}
for (var i = 0; i < filesPerClass.length; i++) {
  for (var j = 0; j < filesPerClass[i].length; j++) {
    const imageSample = readImage(filesPerClass[i][j]);
    console.log(imageSample);
    // infer() returns the activation of an intermediate node, which is what the classifier trains on
    const activation = await model.infer(imageSample, 'conv_preds');
    classifier.addExample(activation, i);
  }
}
console.log(readImage('./hospitalTest.jpg'))
const predictionsTest = await classifier.predictClass(readImage('./hospitalTest.jpg'));
console.log('classficationTest:',predictionsTest);
}
if (process.argv.length !== 3) throw new Error('Incorrect arguments: node classify.js <IMAGE_FILE>');
imageClassification(process.argv[2]);
Since the knn classifier is trained using the output of an intermediate node of mobilenet, the prediction needs to be done likewise:
const outputMobilenet = await model.infer(readImage('./hospitalTest.jpg'), 'conv_preds')
const predicted = await classifier.predictClass(outputMobilenet)
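Put together, a sketch of the corrected test prediction (classIndex and confidences are the fields predictClass resolves with):

// sketch: infer the embedding first, then classify it
const testActivation = await model.infer(readImage('./hospitalTest.jpg'), 'conv_preds');
const predictionsTest = await classifier.predictClass(testActivation);
console.log('class index:', predictionsTest.classIndex, 'confidences:', predictionsTest.confidences);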

How to download, store and load tiles using OpenLayers?

I'm creating a PWA using OpenLayers. The user must have an option to download the tiles over Wi-Fi so they can be loaded offline. I read the OpenLayers documentation, but I couldn't find the answer to my problem; the Tile Cache section is empty.
You'll need three things for this to work:
An IndexedDB to store tiles
A custom tileLoadFunction for your tile source
A component that downloads tiles for a given extent
For (1), you'll want to set up a store, e.g. tiles. The snippet below uses the idb package (https://npmjs.com/package/idb):
import idb from 'idb';

let indexedDb;
// 'this.name' comes from the surrounding component; any database name will do
idb.open(this.name, 1, upgradeDb => {
  if (!upgradeDb.objectStoreNames.contains('tiles')) {
    upgradeDb.createObjectStore('tiles');
  }
}).then(db => {
  indexedDb = db;
});
For (2), a starting point could look something like this:
source.setTileLoadFunction(function(tile, url) {
  const tx = indexedDb.transaction('tiles', 'readonly');
  const tiles = tx.objectStore('tiles');
  const image = tile.getImage();
  tiles.get(url).then(blob => {
    if (!blob) {
      // tile not cached yet - use the online url
      image.src = url;
      return;
    }
    const objUrl = URL.createObjectURL(blob);
    image.onload = function() {
      URL.revokeObjectURL(objUrl);
    };
    image.src = objUrl;
  }).catch(() => {
    // lookup failed - use the online url
    image.src = url;
  });
});
For (3), you'll probably want to limit downloading to a small extent. Then, for the chosen extent (in map units) and each zoom level you want to cache, do something like this:
const tilegrid = source.getTileGrid();
const projection = map.getView().getProjection();
const getUrl = source.getTileUrlFunction();
tilegrid.forEachTileCoord(extent, zoom, tilecoord => {
  const url = getUrl(tilecoord, devicePixelRatio, projection);
  fetch(url).then(response => {
    if (response.ok) {
      response.blob().then(blob => {
        const tx = indexedDb.transaction('tiles', 'readwrite');
        const tiles = tx.objectStore('tiles');
        tiles.put(blob, url); // value first, then key - the key must be the url used in get()
      });
    }
  });
});
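To cache a whole area, wrap that snippet in a loop over the zoom levels you care about. A sketch, assuming the current view extent as the download area and an arbitrary zoom range (downloadTilesForExtent is a hypothetical wrapper around the forEachTileCoord snippet above):

// sketch: cache everything currently visible, for zoom levels 10 through 14
const extent = map.getView().calculateExtent(map.getSize());
for (let z = 10; z <= 14; z++) {
  downloadTilesForExtent(extent, z);
}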
