Detect 360 degree video on YouTube - javascript

Is there any method to find out whether a YouTube video is spherical (360 degree) or not?
For a 360-degree video, the YouTube video player shows an arrow-key control in the upper-left corner. Is it possible to detect that by looking at the HTML code?
I've looked carefully at the HTML code of a 360 video, but I can't find any sign of it.

I've checked, and there is no difference in the link from the usual: both have the watch?v= format, and both use an eleven-character code as the video's unique ID. The only three ways of knowing are watching the actual video and testing whether it is 360 yourself, reading the comments, or looking at the title (most 360-degree videos say so in the title).

There are two ways to recognize whether the player is playing a 360 or a plain video.
1. Query the YouTube API directly.
You can simply query this URL:
https://www.googleapis.com/youtube/v3/videos?part=snippet,contentDetails,status&id=<YOUR_VIDEO_ID>&key=<YOUR_API_KEY>&alt=json
If you don't have an API key yet, follow the guide at https://developers.google.com/youtube/v3/getting-started. In the response, read the value of contentDetails.projection, as defined at https://developers.google.com/youtube/v3/docs/videos#contentDetails.projection.
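A minimal sketch of that query in JavaScript (the video ID and API key are placeholders; contentDetails.projection is the documented field, with the values "rectangular" and "360"):

const videoId = 'YOUR_VIDEO_ID'; // placeholder
const apiKey = 'YOUR_API_KEY';   // placeholder

fetch(`https://www.googleapis.com/youtube/v3/videos?part=contentDetails&id=${videoId}&key=${apiKey}`)
  .then((res) => res.json())
  .then((data) => {
    // "rectangular" for a plain video, "360" for a spherical one.
    const projection = data.items[0].contentDetails.projection;
    console.log(projection === '360' ? '360 video' : 'plain video');
  });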
2. Guess the video format indirectly.
The first way requires an additional HTTP call, which may hurt performance, so we can use this approach instead. The IFrame API player has .getSphericalProperties(), which returns an empty object when a rectangular (plain) video is presented, but returns something like {yaw: 0, pitch: 0, roll: 0, fov: 100.00004285756798} for a 360 video.
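A sketch of that check (assumes the IFrame API script is loaded and a <div id="player"> exists; the video ID is a placeholder):

const player = new YT.Player('player', {
  videoId: 'YOUR_VIDEO_ID', // placeholder
  events: {
    onStateChange: (event) => {
      if (event.data === YT.PlayerState.PLAYING) {
        // Empty object for a plain video; {yaw, pitch, roll, fov} for a 360 video.
        const props = player.getSphericalProperties();
        console.log(Object.keys(props).length > 0 ? '360 video' : 'plain video');
      }
    },
  },
});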

Related

Generate a dynamic "video" in real time, with the data entered?

On this page I enter the data for a "home loan simulation", and a "video" is generated in response to the data I entered; it is something like a dynamic "video".
This "video" shows the data I entered.
I inspected the code and I don't see a video tag or anything like one. It looks like a video, from the control bar to the full-screen option and the closed captions. It also has audio, although the voice-over of the "video" does not mention any of the dynamic data.
Does anyone know how this was done, or have an example of how to do it? Is there any way to do the same using JavaScript, CSS, and HTML?
Thank you.
This is the link:
https://www.grupobancolombia.com/personas/creditos/vivienda/simulador-credito-vivienda##sim-results
The video from your question is actually a very big SVG, as @Kaiido mentioned.
The animation script is very hard to understand: it has more than 320,000 lines of code, and we have no clue what all these numbers mean. Of course, some of them are time codes and some are coordinates, but reverse engineering would be needed to make sense of them.
Your original question, "is there any way to do the same using JavaScript, CSS, and HTML?", of course has the answer: yes. Almost any animation is possible.
But we need examples. OK, there are two possible ways to build the animation: use an existing library or create your own. If you are interested in writing your own, just ask in the comments.
Use library
Google suggests the anime.js library.
Here is an example of using controls (play/pause/resume/reset/set time) as in a real video player: click.
Here are three examples of using SVG: move along a path, morph into another shape, and change line properties.
More examples using anime.js are here.
Write your own library
I use a self-written library of sorts in one of my projects. The idea is:
- Have an array of keyframes, the points where the animation changes. Each keyframe has a start time, a duration (equivalent to having an end time), and a list of changes (objects or their properties).
- I update the animation in a requestAnimationFrame() loop (because my animation only moves forward in time, I have no controls).
- When the current time becomes greater than the next keyframe's start time, I drop (remove) the previous keyframe from the array and apply the new objects/values.
- If the current time is greater than the keyframe's start but less than its end, I use lerp (linear interpolation) to calculate the in-between values of objects.
This description is just to convey the idea, so that you can create something that suits your needs; a minimal sketch follows.
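Here is that sketch in plain JavaScript (the keyframes, element ID, and animated property are illustrative):

// Each keyframe: start time (ms), duration (ms), and the value change it applies.
const keyframes = [
  { start: 0,    duration: 1000, from: 0,   to: 200 },
  { start: 1000, duration: 500,  from: 200, to: 50  },
];

const box = document.getElementById('box'); // assumes an absolutely positioned element
const lerp = (a, b, t) => a + (b - a) * t;
const startTime = performance.now();

function tick(now) {
  const elapsed = now - startTime;

  // When the next keyframe's start time is reached, drop the previous one.
  while (keyframes.length > 1 && elapsed >= keyframes[1].start) {
    keyframes.shift();
  }

  // Between start and end, lerp the in-between value and apply it.
  const kf = keyframes[0];
  const t = Math.min(Math.max((elapsed - kf.start) / kf.duration, 0), 1);
  box.style.left = lerp(kf.from, kf.to, t) + 'px';

  // Keep looping until the last keyframe has finished.
  if (keyframes.length > 1 || t < 1) requestAnimationFrame(tick);
}

requestAnimationFrame(tick);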
Audio
I think the audio is just a normal HTML audio tag:
<audio id="a">
<source src="horse.ogg" type="audio/ogg">
<source src="horse.mp3" type="audio/mpeg">
Your browser does not support the audio element.
</audio>
It can be controlled with methods and properties (look here). Example:
const a = document.getElementById('a');
a.currentTime = 0.8; // seek to 0.8 seconds from the start
a.play();
From what I can understand, they are using an API provided by a company. What they are doing is basically taking your inputs, processing them, and sending the info to the API as a POST request; the API then responds with a custom URL that they show to you in an iframe.
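A sketch of that pattern (the endpoint, field names, and element ID here are hypothetical, not IndiVideo's actual API):

async function showPersonalizedVideo(inputs) {
  // Send the user's inputs to the rendering API as a POST request.
  const res = await fetch('https://api.example.com/render-video', { // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(inputs),
  });
  const { videoUrl } = await res.json(); // hypothetical response field

  // Show the personalized "video" by pointing an iframe at the returned URL.
  document.getElementById('video-frame').src = videoUrl;
}

showPersonalizedVideo({ amount: 200000000, termYears: 15 });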
If you want to learn more, you should check out the company that provides the API: IndiVideo.
Also, I don't think this is the right place to ask for something like this.
It appears to be an SVG video that takes in your data and renders it out in real time. It's definitely something anyone can build, but it would take a little less effort if you used an API.

JavaScript: Detecting shape in canvas

I found an app called https://skinmotion.com/ and, for learning purposes, I would like to create my own web version of it.
The web application works as follows: it asks the user for permission to access the camera, and after that, video is captured. Once every second, an image is taken from the stream and processed. During this process, I look for a soundwave pattern in the image.
If the pattern is found, video recording stops and some action is executed.
Example of pattern - https://www.shutterstock.com/cs/image-vector/panorama-mini-earthquake-wave-on-white-788490724.
Ideally, it should work like QR codes: even a small QR code is detected, and detection should not depend on rotation or scaling.
I am no computer vision expert; this field is fairly new to me, and I need some help. What is the best way to do this?
Should I train my own TensorFlow model and use tensorflow.js? Or is there an easier and more lightweight option?
My problem is that I could not find or come up with an algorithm for processing the captured image to make it as "comparable" as possible: scale it up, rotate it, threshold it to black and white, and so on.
I hope that after this transformation, resemble.js could be used to compare the "original" and "captured" images.
Thank you in advance.
With deep learning
If there are certain wave patterns to be recognized, a classification model can be written using tensorflow.js.
However, if the model is to identify wave patterns in general, it can be more complex; an object detection model would be needed.
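A minimal sketch of such a classifier (assumes TensorFlow.js is loaded as the global tf; the input size and layer sizes are illustrative, not tuned):

// Binary classifier: does a 64x64 grayscale frame contain the wave pattern?
const model = tf.sequential();
model.add(tf.layers.conv2d({ inputShape: [64, 64, 1], filters: 8, kernelSize: 3, activation: 'relu' }));
model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
model.add(tf.layers.flatten());
model.add(tf.layers.dense({ units: 1, activation: 'sigmoid' })); // 1 = pattern present
model.compile({ optimizer: 'adam', loss: 'binaryCrossentropy', metrics: ['accuracy'] });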
Without deep learning
Adding to the complexity would be detecting the waveform and playing audio from it. In this latter case, the image can be read byte by byte. The wave graph is drawn in a color that differs from the background of the image, so the area of interest can be identified and an array representing the waveform can be generated, as sketched below.
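A sketch of that byte-by-byte scan (assumes the captured frame is on a canvas, with a dark wave on a light background; the element ID and brightness threshold are illustrative):

const canvas = document.getElementById('capture'); // canvas holding the captured frame
const ctx = canvas.getContext('2d');
const { data, width, height } = ctx.getImageData(0, 0, canvas.width, canvas.height);

const waveform = [];
for (let x = 0; x < width; x++) {
  let top = -1;
  let bottom = -1;
  for (let y = 0; y < height; y++) {
    const i = (y * width + x) * 4; // RGBA: 4 bytes per pixel
    const brightness = (data[i] + data[i + 1] + data[i + 2]) / 3;
    if (brightness < 128) { // dark pixel, i.e. part of the wave
      if (top === -1) top = y;
      bottom = y;
    }
  }
  // Approximate the amplitude in this column by the height of the dark span.
  waveform.push(top === -1 ? 0 : (bottom - top) / height);
}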
Then, playing the audio from the array can be done as shown here.

Changing HTML video stream source/quality on the fly

Background:
I'm working on a video project with 50+ short videos (10 min, 720p) that I want to present online. My current architecture places 16 video tags in a 4x4 grid, randomly sets their sources on load using JavaScript, and on click zooms a video to cover the full screen until it is clicked again.
The problem:
Each video in 720p WebM is around 80 MB. With 16 videos that is 1.3 GB in total, or 130 MB per minute, or about 2 MB per second, which I think is a ridiculous amount of data (maybe I'm wrong). The reason each video is so big (80 MB) is to support the zoom-to-full-screen feature.
My idea for a solution:
Encode each video in two resolutions, use the low resolution for the grid layout, and switch to the higher resolution on click-to-zoom.
My question: how do I make this smooth? Can I preload the high-resolution video in the background on click, at the position of the low-resolution video, and make the switch with a CSS transform? Or is there a better way to do this?
Secondary question: how do I host this online? Could I put the videos on Vimeo, maybe? Right now I'm using wordpress.com hosting.
The normal way to achieve something like this is to encode the video in an adaptive bitrate format. The two primary formats for that are HLS and MPEG-DASH. Most online encoding platforms can provide those as outputs. Normally you would encode 5-6 different qualities (this helps with users on wifi, where bandwidth might constantly be changing), but you could easily encode just two different qualities.
Normally the players are able to select the right quality automatically, but you can manage that yourself if you want.
If you are going to use HLS, you can use hls.js and its quality-switch API (see the sketch below). For MPEG-DASH, a good player to use would be Shaka Player, configured like this:
player.configure({enableAdaptation: false});
player.selectVideoTrack(trackId);
If you want to switch specifically on fullscreen, just listen for the fullscreen events on the players.
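For hls.js, a minimal sketch of switching quality on fullscreen (assumes hls.js is loaded as the global Hls; the manifest URL and element ID are placeholders):

const video = document.getElementById('video');
const hls = new Hls();
hls.loadSource('https://example.com/stream.m3u8'); // placeholder manifest URL
hls.attachMedia(video);

document.addEventListener('fullscreenchange', () => {
  if (document.fullscreenElement === video) {
    hls.currentLevel = hls.levels.length - 1; // pin to the highest quality level
  } else {
    hls.currentLevel = -1; // -1 re-enables automatic level selection
  }
});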

How to compare the video/image quality using protractor?

As shown in this image, I'm trying to compare Video1 with Video4, or Video2 with Video3.
In a peer-to-peer WebRTC connection, I'm trying to compare the video input of peer1 with the output on peer2. Basically, I'm testing the quality of the video, and theoretically I can do that by:
Checking the video itself frame by frame.
Taking a screenshot on both sides and checking the image resolution.
I've seen this Google video where they mention a complex but clever idea of feeding in a barcode-stitched video and comparing the unique frame IDs with each other. But it's written in C, and I'm using Protractor.
Has anyone tried to calculate the resolution of an image or analyze video frames? Any help would be appreciated, thanks.
The WebRTC testing tool from testrtc has some code related to that, in particular the cam resolution test, which extracts the video from a canvas.
If you want to feed a special video stream from a file, that is doable with Chrome's use-file-for-fake-video-capture flag.
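A sketch of a Protractor config wiring that up (the spec and video file paths are placeholders; Chrome expects a .y4m or .mjpeg file):

exports.config = {
  specs: ['spec.js'], // placeholder
  capabilities: {
    browserName: 'chrome',
    chromeOptions: {
      args: [
        '--use-fake-ui-for-media-stream',     // auto-accept the getUserMedia prompt
        '--use-fake-device-for-media-stream', // replace the camera with a fake device
        '--use-file-for-fake-video-capture=/path/to/test.y4m', // placeholder path
      ],
    },
  },
};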

Youtube Javascript API to create audio bar

For people who have experience with the YouTube JavaScript API: would it be possible to create a completely customized audio player using the API? My objective is simply to use the API to play the sound of an embedded YouTube video, pause it, and change the volume from JavaScript buttons, without ever displaying the flash object itself. Does the API allow this?
Yes. Take a look at this: http://code.google.com/apis/youtube/js_example_1.html
You'll see that you have a full range of control. If you do not want to display the flash object, set the width and height to zero.
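For illustration, a sketch of the same idea using the newer IFrame Player API, which exposes the same controls (the element IDs and video ID are placeholders; playVideo, pauseVideo, and setVolume are documented player methods):

// Create the player with zero size so nothing is displayed.
const player = new YT.Player('hidden-player', {
  width: 0,
  height: 0,
  videoId: 'YOUR_VIDEO_ID', // placeholder
});

// Wire custom buttons to the player's control methods.
document.getElementById('play').onclick = () => player.playVideo();
document.getElementById('pause').onclick = () => player.pauseVideo();
document.getElementById('volume').oninput = (e) => player.setVolume(Number(e.target.value)); // 0-100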
