I have a webpage where the user will drop a video file; the page will upload the video and generate a thumbnail based on a timestamp that the user provides.
For the moment, I am just trying to generate the thumbnail from the FIRST frame of the video.
Here is a quick example of my current progress:
(Please use Chrome, as Firefox will complain about the https link; also, sorry if it auto-downloads an image.)
https://stackblitz.com/edit/rxjs-qc8iag
import { Observable, throwError } from 'rxjs'
const VIDEO = {
imageFromFrame(videoSrc: any): Observable<any> {
return new Observable<any>((obs) => {
const canvas = document.createElement('canvas')
const video = document.createElement('video')
const context = canvas.getContext('2d')
const source = document.createElement('source');
source.setAttribute('src', videoSrc);
video.appendChild(source);
document.body.appendChild(canvas)
document.body.appendChild(video)
if (!context) {
throwError(`Couldn't retrieve context 2d`)
obs.complete()
return
}
video.load()
video.addEventListener('loadedmetadata', function () {
console.log('loadedmetadata')
// Set canvas dimensions same as video dimensions
canvas.width = video.videoWidth
canvas.height = video.videoHeight
})
video.addEventListener('canplay', function () {
console.log('canplay')
canvas.style.display = 'inline'
context.drawImage(video, 0, 0, video.videoWidth, video.videoHeight)
// Convert canvas image to Base64
const img = canvas.toDataURL("image/png")
// Convert Base64 image to binary
obs.next(VIDEO.dataURItoBlob(img))
obs.complete()
})
})
},
dataURItoBlob(dataURI: string): Blob {
// convert base64/URLEncoded data component to raw binary data held in a string
var byteString
if (dataURI.split(',')[0].indexOf('base64') >= 0) byteString = atob(dataURI.split(',')[1])
else byteString = unescape(dataURI.split(',')[1])
// separate out the mime component
var mimeString = dataURI.split(',')[0].split(':')[1].split(';')[0]
// write the bytes of the string to a typed array
var ia = new Uint8Array(byteString.length)
for (var i = 0; i < byteString.length; i++) {
ia[i] = byteString.charCodeAt(i)
}
return new Blob([ia], { type: mimeString })
},
}
VIDEO.imageFromFrame('https://www.learningcontainer.com/wp-content/uploads/2020/05/sample-mp4-file.mp4?_=1').subscribe((r) => {
var a = document.createElement('a')
document.body.appendChild(a)
const url = window.URL.createObjectURL(r)
a.href = url
a.download = 'sdf'
a.click()
window.URL.revokeObjectURL(url)
})
The problem is, the image it downloads is empty and does not represent the first frame of the video, even though the video should have been loaded and drawn into the canvas.
I am trying to solve it, but if someone could help me find the issue, thanks.
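As an aside, the dataURItoBlob logic can be sanity-checked on its own, outside the video pipeline. A minimal sketch (my own condensed re-statement for illustration, assuming a runtime that provides atob, e.g. a browser or Node 18+):

```javascript
// Decode a data URI into its MIME type and raw bytes.
// Condensed re-statement of the dataURItoBlob logic above.
function parseDataURI(dataURI) {
  const [header, payload] = dataURI.split(',');
  const mime = header.split(':')[1].split(';')[0];
  const byteString = header.includes('base64')
    ? atob(payload)
    : decodeURIComponent(payload);
  const bytes = new Uint8Array(byteString.length);
  for (let i = 0; i < byteString.length; i++) {
    bytes[i] = byteString.charCodeAt(i);
  }
  return { mime, bytes };
}

// "SGk=" is base64 for "Hi", i.e. bytes [72, 105]
const parsed = parseDataURI('data:text/plain;base64,SGk=');
```

If the decoded bytes come out right here, the empty download points at the canvas being drawn before a frame is available, rather than at the blob conversion.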
For anyone looking, I made this, which works correctly (it needs improvement, but the idea is there). I use an observable because the flow of my app uses observables, but you can change it to a promise or whatever =>
imageFromFrame(
videoFile: File,
options: { frameTimeInSeconds: number; filename?: string; fileType?: string } = {
frameTimeInSeconds: 0.1,
}
): Observable<File> {
return new Observable<any>((obs) => {
const canvas = document.createElement('canvas')
const video = document.createElement('video')
const source = document.createElement('source')
const context = canvas.getContext('2d')
const urlRef = URL.createObjectURL(videoFile)
video.style.display = 'none'
canvas.style.display = 'none'
source.setAttribute('src', urlRef)
video.setAttribute('crossorigin', 'anonymous')
video.appendChild(source)
document.body.appendChild(canvas)
document.body.appendChild(video)
if (!context) {
// note: rxjs throwError() only returns an observable; to fail this
// stream we must signal the error on the subscriber directly
obs.error(new Error(`Couldn't retrieve context 2d`))
return
}
video.currentTime = options.frameTimeInSeconds
video.load()
video.addEventListener('loadedmetadata', function () {
canvas.width = video.videoWidth
canvas.height = video.videoHeight
})
video.addEventListener('loadeddata', function () {
context.drawImage(video, 0, 0, video.videoWidth, video.videoHeight)
canvas.toBlob((blob) => {
if (!blob) {
return
}
obs.next(
new File([blob], options.filename || FILES.getName(videoFile.name), {
type: options.fileType || 'image/png',
})
)
obs.complete()
URL.revokeObjectURL(urlRef)
video.remove()
canvas.remove()
}, options.fileType || 'image/png')
})
})
},
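One of the improvements hinted at above could be to guard the seek target: if a caller passes a frameTimeInSeconds beyond the clip's length, currentTime would seek past the end. A small sketch of that guard (clampFrameTime is a hypothetical helper, not part of the original code):

```javascript
// Clamp a requested frame time into [0, duration].
// `durationSeconds` would come from video.duration after 'loadedmetadata'.
function clampFrameTime(requestedSeconds, durationSeconds) {
  if (!Number.isFinite(durationSeconds) || durationSeconds <= 0) {
    return 0; // unknown duration (e.g. live streams): fall back to the start
  }
  return Math.min(Math.max(requestedSeconds, 0), durationSeconds);
}

// e.g. inside the 'loadedmetadata' listener:
// video.currentTime = clampFrameTime(options.frameTimeInSeconds, video.duration);
```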
Related
How to encode canvas stream to h264?
I want to encode a canvas stream and download it as an h264 file.
I tried to run the following code, but the file cannot play.
<body>
<canvas id="canvas" height="480" width="720"></canvas>
<video id="video" src="./video.mp4" controls></video>
<script type="module">
const canvas = document.getElementById("canvas")
const video = document.getElementById("video")
const ctx = canvas.getContext("2d")
video.onloadedmetadata = () => {
video.height = video.videoHeight
video.width = video.videoWidth
}
let fn = () => {
ctx.drawImage(video, 0, 0)
requestAnimationFrame(() => {
fn()
})
}
fn()
let result = new Uint8Array(0)
const encoder = new VideoEncoder({
output: (chunk, metadata) => {
let data = new Uint8Array(chunk.byteLength)
chunk.copyTo(data)
result = concatTypedArrays(result, data)
},
error: (e) => {
console.log("[error]", e)
}
})
const config = {
codec: "avc1.64001f",
width: 640,
height: 480,
bitrate: 2_000_000,
framerate: 30,
};
encoder.configure(config);
const stream = canvas.captureStream(30)
const track = stream.getVideoTracks()[0];
setTimeout(() => {
track.stop()
}, 3000)
const trackProcessor = new MediaStreamTrackProcessor(track);
const reader = trackProcessor.readable.getReader();
while (true) {
// destructure to avoid shadowing the outer `result` accumulator
// that the encoder's output callback appends to
const { done, value: frame } = await reader.read();
if (done) break;
encoder.encode(frame, { keyframe: true });
frame.close();
}
function concatTypedArrays(a, b) {
var c = new (a.constructor)(a.length + b.length);
c.set(a, 0);
c.set(b, a.length);
return c;
}
await encoder.flush()
const blob = new Blob([result]);
const videoURL = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = videoURL;
a.download = "video.264";
a.click();
</script>
</body>
The file cannot play, and running ffprobe video.h264 returns the error Invalid data found when processing input.
I want to encode the canvas stream to an h264 file and download it.
I am new to encoding/decoding and I am not sure what I missed.
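One thing worth checking (an assumption based on the symptoms, not verified against this exact setup): by default a WebCodecs VideoEncoder may emit AVC chunks in the length-prefixed "avc" format, whose decoder configuration lives out-of-band, so concatenating the chunks does not produce a valid raw .h264 byte stream. Requesting Annex B output makes each chunk carry its own start codes:

```javascript
// Encoder config requesting an Annex B byte stream, so the
// concatenated chunks form a raw .h264 file ffprobe can parse.
const config = {
  codec: "avc1.64001f",
  width: 640,
  height: 480,
  bitrate: 2_000_000,
  framerate: 30,
  avc: { format: "annexb" },
};
```

Note also that the configured width/height may need to match the frames actually captured from the canvas (720×480 in the markup above), or the encoder can reject them.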
I am building a plugin for FIGMA, where the user selects multiple images, which I then save in an array that I send to be interpreted.
I have two issues with my code:
img.src = bin; does not trigger img.onload, but if I set img.src = "literal string", the onload method works.
the imageData array sent at the end is undefined, I assume because of my poor understanding of async functions.
I would appreciate your help figuring this out. Thank you.
P.S. This is pure JavaScript, and you don't need to know FIGMA to follow the code.
<input type="file" id="images" accept="image/png" multiple />
<button id="button">Create image</button>
<script>
const button = document.getElementById('button');
button.onclick = () => {
const files = document.getElementById('images').files;
function readFile(index) {
console.log(1);
if (index >= files.length) {return};
console.log(2);
const file = files[index];
const imageCaption = file.name;
var reader = new FileReader();
reader.readAsArrayBuffer(file);
reader.onload = function (e) {
console.log(4);
// get file content
const bin = e.target.result;
const imageBytes = new Uint8Array(bin);
//Get Image width and height
var img = new Image();
img.src = bin;
img.onload = function () {
console.log(6);
const width = img.width;
const height = img.height;
console.log("imageCaption: " + imageCaption);
console.log("width: " + width);
console.log("height: " + height);
console.log("imageBytes: " + imageBytes);
var data = {imageBytes, imageCaption, width, height};
//Read Next File
nextData = readFile(index + 1);
if( nextData ) {
data.concat(nextData)
}
return data;
}
}
}
//Call function to Read and Send Images
const imageData = readFile(0);
//Send Data
parent.postMessage({
pluginMessage: {
type: 'send-image',
imageData,
}
}, '*');
};
</script>
A friend ended up helping me with it. Thank you Hiba!
const button = document.getElementById('button');
const input = document.getElementById('input');
button.addEventListener('click', async () => {
const files = input.files ? [...input.files] : [];
const data = await Promise.all(
files.map(async (file) => await getImageData(file))
);
console.log(data);
});
async function getImageData(file) {
// get binary data from file
const bin = await file.arrayBuffer();
// translate bin data into bytes
const imageBytes = new Uint8Array(bin)
// create data blob from bytes
const blob = new Blob([imageBytes], { type: "image/png" });
// create html image element and assign blob as src
const img = new Image();
img.src = URL.createObjectURL(blob);
// get width and height from rendered image
await img.decode();
const { width, height } = img;
const data = { image: blob, caption: file.name, width, height };
return data;
}
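As a side note, since the input is restricted to PNG files, the dimensions could also be read straight from the file's bytes without rendering an image element at all: the IHDR chunk stores width and height as big-endian 32-bit integers at fixed offsets. A sketch (assuming well-formed PNG input; pngDimensions is an illustrative helper, not part of the answer above):

```javascript
// Read width/height from a PNG's IHDR chunk.
// The 8-byte signature is followed by the IHDR chunk, whose
// width lives at bytes 16-19 and height at bytes 20-23 (big-endian).
function pngDimensions(bytes) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  if (view.getUint32(0) !== 0x89504e47) { // 0x89 'P' 'N' 'G'
    throw new Error('not a PNG');
  }
  return { width: view.getUint32(16), height: view.getUint32(20) };
}
```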
I'm trying to capture webcam video on the client side and send the frames to the server to process them. I'm a newbie in JS and I'm having some problems.
I tried to use OpenCV.js to get the data, but I didn't understand how to get it. In Python we can do
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
and the frame is a 2d array containing the image, but how can I get each frame (as a 2d array) to send it using OpenCV.js?
I have this code on the client side:
<script type="text/javascript">
function onOpenCvReady() {
cv['onRuntimeInitialized'] = () => {
var socket = io('http://localhost:5000');
socket.on('connect', function () {
console.log("Connected...!", socket.connected)
});
const video = document.querySelector("#videoElement");
video.width = 500;
video.height = 400;
if (navigator.mediaDevices.getUserMedia) {
navigator.mediaDevices.getUserMedia({ video: true })
.then(function (stream) {
video.srcObject = stream;
video.play();
})
.catch(function (error) {
console.log(error)
console.log("Something went wrong!");
});
}
let src = new cv.Mat(video.height, video.width, cv.CV_8UC4);
let cap = new cv.VideoCapture(video);
const FPS = 15;
function processVideo() {
let begin = Date.now();
cap.read(src);
handle_socket(src['data']);
// schedule next one.
let delay = 1000 / FPS - (Date.now() - begin);
setTimeout(processVideo, delay);
}
// schedule first one.
setTimeout(processVideo, 0);
function handle_socket(src) {
socket.emit('event', { info: 'I\'m connected!', data: src });
}
}
}
</script>
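A small caveat about the scheduling in the snippet above (an observation, not something tested against the original app): when processing a frame takes longer than the frame budget, 1000 / FPS - elapsed goes negative, which setTimeout treats as 0, so the loop quietly runs flat out. Clamping makes that explicit; nextFrameDelay is an illustrative helper:

```javascript
// Delay (ms) until the next frame, never below zero.
function nextFrameDelay(fps, elapsedMs) {
  return Math.max(0, 1000 / fps - elapsedMs);
}

// e.g. setTimeout(processVideo, nextFrameDelay(FPS, Date.now() - begin));
```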
My solution was:
// Select HTML video element where the webcam data is
const video = document.querySelector("#videoElement");
// returns a frame encoded in base64
const getFrame = () => {
const canvas = document.createElement('canvas');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
canvas.getContext('2d').drawImage(video, 0, 0);
const data = canvas.toDataURL('image/jpeg');
return data;
}
// Send data over socket
function handle_socket(data) {
socket.emit('event', data, function (res) {
if (res !== 0) {
results = res;
}
});
}
const frame64 = getFrame();
handle_socket(frame64);
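Since each frame goes over the socket as a base64 JPEG data URL, the payload is roughly 4/3 the size of the raw JPEG bytes. If you want to monitor bandwidth, the decoded size can be estimated from the string itself (a sketch; base64PayloadBytes is an illustrative helper, not part of the solution above):

```javascript
// Estimate the decoded byte size of a base64 data URL payload.
function base64PayloadBytes(dataUrl) {
  const b64 = dataUrl.slice(dataUrl.indexOf(',') + 1);
  const padding = (b64.match(/=+$/) || [''])[0].length;
  return (b64.length * 3) / 4 - padding;
}

// "SGVsbG8=" is base64 for "Hello": 5 bytes
const size = base64PayloadBytes('data:image/jpeg;base64,SGVsbG8=');
```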
I want to store an image file in local storage in order to reuse it on a different page. I'm getting a picture or a file when on desktop. That should be transferred to a file, which I then want to store in the onImagePicked function. I have connected that function to a button, so the image should be transformed into a file and then stored. But somehow the function is not called, because I don't get my console.log. I suppose it could have something to do with the fact that I'm just testing with pictures that come directly from my desktop, not with pictures from a device, but I don't know.
Here is my code (the rest, with getting and displaying the picture, works):
import { Storage } from '#ionic/storage';
// convert base64 image into a file
function base64toBlob(base64Data, contentType) {
contentType = contentType || '';
const sliceSize = 1024;
const byteCharacters = atob(base64Data);
const bytesLength = byteCharacters.length;
const slicesCount = Math.ceil(bytesLength / sliceSize);
const byteArrays = new Array(slicesCount);
for (var sliceIndex = 0; sliceIndex < slicesCount; ++sliceIndex) {
const begin = sliceIndex * sliceSize;
const end = Math.min(begin + sliceSize, bytesLength);
const bytes = new Array(end - begin);
for (let offset = begin, i = 0; offset < end; ++i, ++offset) {
bytes[i] = byteCharacters[offset].charCodeAt(0);
}
byteArrays[sliceIndex] = new Uint8Array(bytes);
}
return new Blob(byteArrays, { type: contentType });
}
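As a quick sanity check, the slicing logic above can be exercised outside the app with a small payload (assuming a runtime that provides atob, btoa, and Blob, e.g. a browser or Node 18+); the resulting blob's size must equal the decoded byte count however the slices fall. A condensed, self-contained re-statement for testing:

```javascript
// Same slicing idea as base64toBlob above, condensed for a unit test.
function base64toBlobSketch(base64Data, contentType = '') {
  const sliceSize = 1024;
  const byteCharacters = atob(base64Data);
  const byteArrays = [];
  for (let begin = 0; begin < byteCharacters.length; begin += sliceSize) {
    const end = Math.min(begin + sliceSize, byteCharacters.length);
    const bytes = new Uint8Array(end - begin);
    for (let i = 0; i < bytes.length; i++) {
      bytes[i] = byteCharacters.charCodeAt(begin + i);
    }
    byteArrays.push(bytes);
  }
  return new Blob(byteArrays, { type: contentType });
}

// A 3000-character payload forces three slices (1024 + 1024 + 952).
const blob = base64toBlobSketch(btoa('x'.repeat(3000)), 'image/jpeg');
// blob.size is 3000 and blob.type is 'image/jpeg'
```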
export const IMAGE_DATA = 'image_data';
...
#ViewChild('filePicker') filePickerRef: ElementRef<HTMLInputElement>;
#Output() imagePick = new EventEmitter<string | File>();
selectedImage: string;
usePicker = false;
...
takePicture() {
if (this.usePicker) {
this.filePickerRef.nativeElement.click();
}
Plugins.Camera.getPhoto({
quality: 80,
source: CameraSource.Prompt,
correctOrientation: true,
saveToGallery: true,
allowEditing: true,
resultType: CameraResultType.Base64,
direction: CameraDirection.Front
}).then(image => {
this.selectedImage = image.base64String;
this.imagePick.emit(image.base64String);
}).catch(error => {
console.log(error);
return false;
});
}
onImagePicked(imageData: string | File) {
let imageFile;
if (typeof imageData === 'string') {
try {
imageFile = base64toBlob(imageData.replace('data:image/jpeg;base64,', ''), 'image/jpeg');
this.storage.set('image_data', imageFile);
console.log('stored');
} catch (error) {
console.log(error);
}
} else {
imageFile = imageData;
}
}
onFileChosen(event: Event) {
const pickedFile = (event.target as HTMLInputElement).files[0];
const fr = new FileReader();
fr.onload = () => {
const dataUrl = fr.result.toString();
this.selectedImage = dataUrl;
this.imagePick.emit(pickedFile);
};
fr.readAsDataURL(pickedFile);
}
HTML (where I try to call the function):
<app-make-photo (imagePick)="onImagePicked($event)"></app-make-photo>
I have written this function to capture each frame for the GIF, but the output is very laggy and crashes when the data increases. Any suggestions?
Code:
function createGifFromPng(list, framerate, fileName, gifScale) {
gifshot.createGIF({
'images': list,
'gifWidth': wWidth * gifScale,
'gifHeight': wHeight * gifScale,
'interval': 1 / framerate,
}, function(obj) {
if (!obj.error) {
var image = obj.image;
var a = document.createElement('a');
document.body.append(a);
a.download = fileName;
a.href = image;
a.click();
a.remove();
}
});
}
/////////////////////////////////////////////////////////////////////////
function getGifFromCanvas(renderer, sprite, fileName, gifScale, framesCount, framerate) {
var listImgs = [];
var saving = false;
var interval = setInterval(function() {
renderer.extract.canvas(sprite).toBlob(function(b) {
if (listImgs.length >= framesCount) {
clearInterval(interval);
if (!saving) {
createGifFromPng(listImgs, framerate, fileName,gifScale);
saving = true;
}
}
else {
listImgs.push(URL.createObjectURL(b));
}
}, 'image/gif');
}, 1000 / framerate);
}
In modern browsers you can use a combination of the MediaRecorder API and the HTMLCanvasElement.captureStream method.
The MediaRecorder API can encode a MediaStream into a video or audio file on the fly, resulting in far less memory use than grabbing still images.
const ctx = canvas.getContext('2d');
var x = 0;
anim();
startRecording();
function startRecording() {
const chunks = []; // here we will store our recorded media chunks (Blobs)
const stream = canvas.captureStream(); // grab our canvas MediaStream
const rec = new MediaRecorder(stream); // init the recorder
// every time the recorder has new data, we will store it in our array
rec.ondataavailable = e => chunks.push(e.data);
// only when the recorder stops, we construct a complete Blob from all the chunks
rec.onstop = e => exportVid(new Blob(chunks, {type: 'video/webm'}));
rec.start();
setTimeout(()=>rec.stop(), 3000); // stop recording in 3s
}
function exportVid(blob) {
const vid = document.createElement('video');
vid.src = URL.createObjectURL(blob);
vid.controls = true;
document.body.appendChild(vid);
const a = document.createElement('a');
a.download = 'myvid.webm';
a.href = vid.src;
a.textContent = 'download the video';
document.body.appendChild(a);
}
function anim(){
x = (x + 1) % canvas.width;
ctx.fillStyle = 'white';
ctx.fillRect(0,0,canvas.width,canvas.height);
ctx.fillStyle = 'black';
ctx.fillRect(x - 20, 0, 40, 40);
requestAnimationFrame(anim);
}
<canvas id="canvas"></canvas>
You can also use https://github.com/spite/ccapture.js/ to capture to gif or video.