I am trying to convert the code in the ML5 image classification example (link) to my React component, which is as follows:
class App extends Component {
  video = document.getElementById('video');

  state = {
    result: null
  }

  loop = (classifier) => {
    classifier.predict()
      .then(results => {
        this.setState({ result: results[0].className });
        this.loop(classifier) // Call again to create a loop
      })
  }

  componentDidMount() {
    ml5.imageClassifier('MobileNet', this.video)
      .then(classifier => this.loop(classifier))
  }

  render() {
    navigator.mediaDevices.getUserMedia({ video: true })
      .then((stream) => {
        this.video.srcObject = stream;
        this.video.play();
      })

    return (
      <div className="App">
        <video id="video" width="640" height="480" autoplay></video>
      </div>
    );
  }
}

export default App;
However, this does not work. The error message says: Unhandled Rejection (TypeError): Cannot set property 'srcObject' of null.
I imagine video = document.getElementById('video'); is probably not able to grab the element by its id. So I tried:
class App extends Component {
  video_element = <video id="video" width="640" height="480" autoplay></video>;
  ...
  render() {
    ...
    return (
      <div className="App">
        {video_element}
      </div>
    );
  }
}
That did not work either. I'm confused about what the correct way to implement this would be.
Any help appreciated, thanks!
I am answering again with a highlight on a slightly different problem rather than the ref one.
There is a huge problem that is causing the awful blinking and the constantly failing promise: getting the user media in the render method!
Consider this: every time you set the state, the component is re-rendered. You have a loop that constantly updates the state of the component, and that promise keeps failing.
You need to get the user media when the component gets mounted:
componentDidMount() {
  navigator.mediaDevices.getUserMedia({ video: true }).then(stream => {
    if (this.video.current) {
      this.video.current.srcObject = stream;
      this.video.current.play();
    }
    ml5.imageClassifier("MobileNet", this.video.current)
      .then(classifier => this.loop(classifier));
  });
}
Your render method is then a lot shorter:
render() {
  return (
    <div className="App">
      <video ref={this.video} id="video" width="640" height="480" autoPlay />
    </div>
  )
}
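Note that this snippet assumes this.video is a React ref rather than the result of document.getElementById. A minimal sketch of that class field, matching the ref-based answer below:

class App extends Component {
  // Create the ref once; this.video.current points at the <video> element
  // after the first render, because of ref={this.video} in the JSX above.
  video = React.createRef();

  // ... componentDidMount and render as shown above
}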
When App is instantiated, the video element doesn't exist yet, but document.getElementById still runs, returning null. That's why you get:
Cannot set property 'srcObject' of null
Because here:
this.video.srcObject = stream
this.video is null.
This is not the proper way of doing this. You should create a ref, assign it to the element through the ref prop, and then access the DOM element from there. Something like:
class App extends Component {
  video = React.createRef()

  ...

  render() {
    navigator.mediaDevices.getUserMedia({ video: true })
      .then((stream) => {
        if (this.video.current) {
          this.video.current.srcObject = stream;
          this.video.current.play();
        }
      })

    return (
      ...
      <video
        ref={this.video}
        id="video"
        width="640"
        height="480"
        autoPlay
      />
On localhost these videos work as expected using onMouseOver and onMouseOut; however, when I deploy the site, the videos don't show up and I get an error saying: "Uncaught (in promise) DOMException: The element has no supported sources." The videos are uploaded to my GitHub, but I suspect that isn't the issue. Can anyone guide me as to how to fix this?
Here's my code:
interface ICanvasProps {
  canvas: {
    name: string;
    src: any;
  }[]
}

export function Canvas(props: ICanvasProps) {
  const { canvas } = props
  return (
    <div>
      {canvas.map((canva) => {
        return (
          <div key={canva.name} className='card'>
            <video
              preload={'auto'}
              autoPlay
              loop
              onLoad={() => {"I am loaded"}}
              // @ts-ignore
              onMouseOver={(event) => event.target.play()}
              // @ts-ignore
              onMouseOut={(event) => event.target.pause()}
              style={{
                width: '100%',
                borderRadius: "10px"
              }}
              src={canva.src}
            ></video>
          </div>
        )
      })}
    </div>
  )
}
I tried looking for issues with event.target and suspect it could be because the browser cannot load the videos before displaying them, but I am unsure how to load the videos since I'm mapping through an array of sources.
At the moment, I am working on a project that requires me to add three videos to the homepage, but loading them all at once will slow the page load down considerably.
Also, I want to use the <video/> tag instead of <iframe/> because I want the autoplay functionality.
What's the best way to do this in React? I'm using NextJS and Chakra UI.
You can use IntersectionObserver and do it as below. For React, all you have to do is add the code below inside a useEffect with an empty dependency array; a React version is sketched after the plain JavaScript snippet.
const video = document.querySelector("video");

function handleIntersection(entries) {
  entries.map(async (entry) => {
    if (entry.isIntersecting) {
      const res = await fetch("/video.mp4");
      const data = await res.blob();
      video.src = URL.createObjectURL(data);
    }
  });
}

const observer = new IntersectionObserver(handleIntersection);
observer.observe(video);

<video autoplay muted loop playsinline></video>
Also I used a video with a relative path to avoid possible CORS issues.
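For reference, here is a minimal React sketch of the same idea inside a useEffect with an empty dependency array. The LazyVideo name is just a placeholder, and it assumes the same relative /video.mp4 path used above.

import { useEffect, useRef } from "react";

function LazyVideo() {
  const videoRef = useRef(null);

  useEffect(() => {
    const video = videoRef.current;
    if (!video) return;

    // Only fetch the file once the element scrolls into view
    const observer = new IntersectionObserver(async (entries) => {
      if (!entries[0].isIntersecting) return;
      observer.disconnect(); // load the video only once
      const res = await fetch("/video.mp4"); // same relative path as above
      const data = await res.blob();
      video.src = URL.createObjectURL(data);
    });

    observer.observe(video);
    return () => observer.disconnect(); // clean up on unmount
  }, []); // empty dependency array: run once after mount

  return <video autoPlay muted loop playsInline ref={videoRef} />;
}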
I found a way to do it using '@react-hook/intersection-observer':
import useIntersectionObserver from '@react-hook/intersection-observer'
import { useRef } from 'react'

const LazyIframe = () => {
  const containerRef = useRef()
  const lockRef = useRef(false)
  const { isIntersecting } = useIntersectionObserver(containerRef)
  if (isIntersecting) {
    lockRef.current = true
  }
  return (
    <div ref={containerRef}>
      {lockRef.current && (
        <video
          src={"add video source here"}
          type="video/mp4"
        ></video>
      )}
    </div>
  )
}
I'm using the Twilio JS API in my project to display video output from multiple sources. This API generates an enumeration of DOM video/audio elements that can be attached to the page as follows:
let tracks = TwillioVideo.createLocalTracks({
  video: { deviceId: this.state.selectedVideoInput.deviceId },
  audio: { deviceId: this.state.selectedAudioInput.deviceId }
})

// Find DOM element to attach tracks to
let previewContainer = document.getElementById('local-media')

// Attach all tracks
this.setState({ localTracks: tracks })
tracks.forEach(track => previewContainer.appendChild(track.attach()))
track.attach() generates a DOM element that can be appended, but it's not something I can put in React state so that it can be rendered like so:
<div id="local-media">{this.state.localTracks.map(track => track.attach())}</div>
If I in fact try to do it, I get:
Unhandled Rejection (Invariant Violation): Objects are not valid as a
React child (found: [object HTMLAudioElement]). If you meant to render
a collection of children, use an array instead.
EDIT 1:
I was able to get rid of the error by doing this:
{this.state.localTracks.map(track => track.attach().Element)}
but it's not returning renderable HTML; it returns undefined instead.
Twilio developer evangelist here.
The attach method in Twilio Video can take an argument, which is an HTMLMediaElement, and will attach the media to that element.
I would recommend that you create a component you can use to render the media for each media track and then use React refs to get a pointer to the DOM element.
Something like this:
import React, { Component, createRef } from 'react';

class Participant extends Component {
  constructor(props) {
    super(props);
    this.video = createRef();
    this.audio = createRef();
    this.trackAdded = this.trackAdded.bind(this);
  }

  trackAdded(track) {
    if (track.kind === 'video') {
      track.attach(this.video.current);
    } else if (track.kind === 'audio') {
      track.attach(this.audio.current);
    }
  }

  componentDidMount() {
    const videoTrack = Array.from(
      this.props.participant.videoTracks.values()
    )[0];
    if (videoTrack) {
      videoTrack.attach(this.video.current);
    }
    const audioTrack = Array.from(
      this.props.participant.audioTracks.values()
    )[0];
    if (audioTrack) {
      audioTrack.attach(this.audio.current);
    }
    this.props.participant.on('trackAdded', this.trackAdded);
  }

  render() {
    return (
      <div className="participant">
        <h3>{this.props.participant.identity}</h3>
        <video ref={this.video} autoPlay={true} muted={true} />
        <audio ref={this.audio} autoPlay={true} muted={true} />
      </div>
    );
  }
}

export default Participant;
Then, for every participant in your chat, you can render one of these components.
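For example, a minimal sketch of how the parent might do that, assuming a hypothetical participants array of Twilio participant objects kept in the parent's state:

// Hypothetical parent render(); `this.state.participants` is assumed to be
// an array of Twilio participant objects collected from the room.
render() {
  return (
    <div className="participants">
      {this.state.participants.map(participant => (
        <Participant key={participant.identity} participant={participant} />
      ))}
    </div>
  );
}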
Let me know if this helps at all.
I am in the process of moving my project from Ionic 3 to Ionic 4, but I noticed that my autoplay video functionality, which was working in Ionic 3, is no longer working in Ionic 4.
From what I can tell my ContentChildren isn't being populated, despite the videos being present in the view. I have tried using forwardRef as well, but that didn't do anything.
I checked other questions similar to this one, and none of the provided solutions worked for me; most were low-quality answers, so I don't believe this is a duplicate, considering the time that has passed since they were asked.
The important packages that I'm using are:
Angular: 7.2.15, @ionic/angular 4.4.0, zone.js 0.8.29, intersection-observer 0.7.0.
Here is the autoplay component
import { Component, ContentChildren, ElementRef, forwardRef, NgZone, OnDestroy, OnInit, QueryList } from '@angular/core';
import { AutoplayVideoDirective } from '../../../directives/autoplay-video.directive';

@Component({
  selector: 'autoplay',
  template: `<ng-content></ng-content>`
})
export class AutoplayContentComponent implements OnInit, OnDestroy {

  @ContentChildren(forwardRef(() => AutoplayVideoDirective),
    {
      read: ElementRef,
      descendants: true,
    },
  ) autoPlayVideoRefs: QueryList<any>;

  private intersectionObserver: IntersectionObserver;
  private mutationObserver: MutationObserver;
  private play: Promise<any>;

  constructor(private element: ElementRef,
              public ngZone: NgZone) {}

  public ngOnInit() {
    // we can run this outside the ngZone, no need to trigger change detection
    this.ngZone.runOutsideAngular(() => {
      this.intersectionObserver = this.getIntersectionObserver();
      this.mutationObserver = this.getMutationObserver(this.element.nativeElement);
    });
  }
  // clean things up
  public ngOnDestroy() {
    if (this.intersectionObserver) {
      this.intersectionObserver.disconnect();
    }
    if (this.mutationObserver) {
      this.mutationObserver.disconnect();
    }
  }

  // construct the IntersectionObserver and return it
  private getIntersectionObserver() {
    // execute the onIntersection on the threshold intersection of 0 and 70%
    return new IntersectionObserver(entries => this.onIntersection(entries), {
      threshold: [0, 0.70],
    });
  }

  // construct the MutationObserver and return it
  private getMutationObserver(containerElement: HTMLElement) {
    console.log(containerElement);
    // execute the onDomChange
    let mutationObserver = new MutationObserver(() => this.onDomChange());
    // at the very least, childList, attributes, or characterData
    // must be set to true
    const config = { attributes: true, characterData: true, childList: true };
    // attach the mutation observer to the container element
    // and start observing it
    mutationObserver.observe(containerElement, config);
    return mutationObserver;
  }
  private onDomChange() {
    // when the DOM changes, loop over each element
    // we want to observe for its intersection,
    // and observe it
    console.log(this.autoPlayVideoRefs);
    this.autoPlayVideoRefs.forEach((video: ElementRef) => {
      this.checkIfVideosCanLoad(video.nativeElement).then((canPlay) => {
        console.log('Video can play: ', canPlay);
      });
      this.intersectionObserver.observe(video.nativeElement);
    });
  }

  /*
   * In low-power mode, videos do not load.
   * So this quickly checks to see if videos have the capability of loading.
   */
  private async checkIfVideosCanLoad(video: any) {
    let canPlay: boolean;
    return new Promise((resolve) => {
      // A safe timeout of 3 seconds, before we declare that the phone is in low power mode.
      let timeout = setTimeout(() => {
        canPlay = false;
        resolve(canPlay);
      }, 3000);
      // Loads meta data about the video, but not the whole video itself.
      video.onloadeddata = () => {
        canPlay = true;
        clearTimeout(timeout);
        resolve(canPlay);
      };
    });
  }
  private onIntersection(entries: IntersectionObserverEntry[]) {
    entries.forEach((entry: any) => {
      // get the video element
      let video = entry.target;

      // are we intersecting?
      if (!entry.isIntersecting) return;

      // play the video if we passed the threshold
      // of 0.7 and store the promise so we can safely
      // pause it again
      if (entry.intersectionRatio >= 0.70) {
        if (this.play === undefined) this.play = video.play();
      } else if (entry.intersectionRatio < 0.70) {
        // no need to pause something if it didn't start playing yet.
        if (this.play !== undefined) {
          // wait for the promise to resolve, then pause the video
          this.play.then(() => {
            video.pause();
            this.play = undefined;
          }).catch(() => {});
        }
      }
    });
  }
}
The autoplay directive is pretty simple.
import { Directive } from '@angular/core';

/*
 * To be used with the autoplay-content.ts component.
 */
@Directive({
  selector: 'video'
})
export class AutoplayVideoDirective {}
As is the component that the autoplay component is wrapped around. This is called inline-video:
<autoplay>
  <div tappable (tap)="changeVideoAudio(video?.id)">
    <video video playsinline loop [muted]="'muted'" [autoplay]="true" preload="auto" muted="muted"
           [poster]="(video?.src | AspectRatio) | videoPoster" [id]="'media-' + video?.id" class="video-media">
      <source [src]="video.src | AspectRatio" type="video/mp4" src="">
    </video>
  </div>
</autoplay>
I'm unsure why this wouldn't work in Ionic 4 when it was working in Ionic 3... so any help would be much appreciated.
The hierarchy of components is as follows:
Feed Page > Feed Item > Autoplay > Inline Video, where the autoplay component is wrapped around the inline video component.
Edit: I have the AllowInlineMediaPlayback preference set as well in the config.xml:
<preference name="AllowInlineMediaPlayback" value="true" />
I am trying to get a webcam feed to display in my app using React hooks. I also need to be able to capture the latest image from the feed.
I believe I have the foundations but am missing something.
import React, { useState, useEffect } from "react"

export function VideoFeed() {
  const [constraints] = useState({ width: 300, height: 300 })

  useEffect(() => {
    navigator.mediaDevices.getUserMedia({ video: true })
      .then(stream => {
        let video = document.querySelector('video')
        video.source = stream;
        video.play();
      })
      .catch(e => {
        console.log(e)
      })
  })

  return (
    <video autoPlay={true} id="video"></video>
  )
}
See How to access a DOM element in React? instead of document.querySelector.
When applied with the useRef hook, and after fixing how often useEffect needs to execute, it would look something like this:
import { useEffect, useRef } from "react"

export function VideoFeed() {
  const videoEl = useRef(null)

  useEffect(() => {
    if (!videoEl.current) {
      return
    }
    navigator.mediaDevices.getUserMedia({ video: true })
      .then(stream => {
        let video = videoEl.current
        video.srcObject = stream
        video.play()
      })
  }, [videoEl])

  return <video ref={videoEl} />
}
Found the issue.
Change
video.source = stream;
To:
video.srcObject = stream;
Voilà!