I have an MP4 file, sample.mp4. I used MP4Box to convert it into segments plus an MPD, following "how to create a mpd file using MP4Box".
My player code is taken from this source.
After creating the segments, the video plays in the browser, but there is no audio.
The files produced are as follows:
I have two questions here.
Firstly, how can I add audio to this video using JavaScript and MSE (Media Source Extensions)? The video currently plays muted.
Secondly, the new files are named sample_dash_track1_init.mp4 and sample_dash_track2_init.mp4. The video plays from the first file and its segments; what are the second file and its segments used for?
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>MSE Demo</title>
</head>
<body>
  <h1>MSE Demo</h1>
  <div>
    <video controls width="80%"></video>
  </div>
  <script type="text/javascript">
    (function() {
      var baseUrl = 'http://beta.insidesoftwares.com/development/video/sam/';
      var initUrl = baseUrl + 'sample_dash_track1_init.mp4';
      var templateUrl = baseUrl + 'sample_dash_track1_$Number$.m4s';
      var sourceBuffer;
      var index = 1;
      var numberOfChunks = 13;
      var video = document.querySelector('video');

      if (!window.MediaSource) {
        console.error('No Media Source API available');
        return;
      }

      var ms = new MediaSource();
      video.src = window.URL.createObjectURL(ms);
      ms.addEventListener('sourceopen', onMediaSourceOpen);

      function onMediaSourceOpen() {
        sourceBuffer = ms.addSourceBuffer('video/mp4; codecs="avc1.4d401f"');
        sourceBuffer.addEventListener('updateend', nextSegment);
        GET(initUrl, appendToBuffer);
        video.play();
      }

      function nextSegment() {
        var url = templateUrl.replace('$Number$', index);
        GET(url, appendToBuffer);
        index++;
        if (index > numberOfChunks) {
          sourceBuffer.removeEventListener('updateend', nextSegment);
        }
      }

      function appendToBuffer(videoChunk) {
        if (videoChunk) {
          sourceBuffer.appendBuffer(new Uint8Array(videoChunk));
        }
      }

      function GET(url, callback) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url);
        xhr.responseType = 'arraybuffer';
        xhr.onload = function(e) {
          if (xhr.status != 200) {
            console.warn('Unexpected status code ' + xhr.status + ' for ' + url);
            return false;
          }
          callback(xhr.response);
        };
        xhr.send();
      }
    })();
  </script>
</body>
</html>
I am guessing that sample_dash_track1* contains the video and sample_dash_track2* contains the audio.
So your hand-written JavaScript player plays only video, because there is only video in sample_dash_track1*.
Although I like your JavaScript player, you may want to take a look at https://github.com/Dash-Industry-Forum/dash.js.
Otherwise you would have to parse the MPD file and dynamically remux, which is a significant amount of work that has been done before, for example in the dash.js library.
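For the first question: MSE will play the audio track if you create a second SourceBuffer on the same MediaSource and feed it the track2 init segment and media segments in parallel with the video. Below is a minimal sketch that extends the player above (it reuses the GET helper, ms, video, numberOfChunks, and the video-side functions); the audio codec string mp4a.40.2 is an assumption, so check the codecs attribute in your MPD for the real value:

var audioInitUrl = baseUrl + 'sample_dash_track2_init.mp4';
var audioTemplateUrl = baseUrl + 'sample_dash_track2_$Number$.m4s';
var audioBuffer;
var audioIndex = 1;

function onMediaSourceOpen() {
  sourceBuffer = ms.addSourceBuffer('video/mp4; codecs="avc1.4d401f"');
  audioBuffer = ms.addSourceBuffer('audio/mp4; codecs="mp4a.40.2"'); // assumed codec string
  sourceBuffer.addEventListener('updateend', nextSegment);
  audioBuffer.addEventListener('updateend', nextAudioSegment);
  GET(initUrl, appendToBuffer);           // video init segment
  GET(audioInitUrl, appendToAudioBuffer); // audio init segment
  video.play();
}

function nextAudioSegment() {
  var url = audioTemplateUrl.replace('$Number$', audioIndex);
  GET(url, appendToAudioBuffer);
  audioIndex++;
  if (audioIndex > numberOfChunks) {
    audioBuffer.removeEventListener('updateend', nextAudioSegment);
  }
}

function appendToAudioBuffer(audioChunk) {
  if (audioChunk) {
    audioBuffer.appendBuffer(new Uint8Array(audioChunk));
  }
}

Alternatively, dash.js (linked above) reads the MPD and manages both SourceBuffers for you.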
I am trying to upload audio data from a web page to a server, and I am finding it more difficult than it should be.
Here is my test page: it has a button, and when it is clicked, a five-second voice recording starts; the recording is then played back, and finally the sound data should be uploaded to the server.
The voice recording and playback parts are working fine.
The upload to the server is not.
The code is below in full. I put the whole file (called "GetAudio.php") here on purpose, so anyone can easily copy-paste it to try.
Here is what goes wrong: the file created on the server, called "Audio.data", contains exactly four characters:
blob
This is not what I want. I want the file to contain the actual sound data that was recorded locally. Can someone tell me where my code is missing something important?
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>Record&Upload Trial</title>
</head>
<body>
  <div>
    <h2>Five seconds voice record and upload.</h2>
    <p>
      <button id=startRecord><h3>Start the voice recording</h3></button><br/>
      <audio id="player" controls></audio>
      <p id="XMLH"></p>
    </p>
  </div>
  <?php
  // Server side code:
  if ($_POST['AudioData']) {
    $myfile = fopen("Audio.data", "w");
    fwrite($myfile, $_POST['AudioData']);
    fclose($myfile);
  }
  ?>
  <script>
    startRecord.onclick = e => {
      startRecord.disabled = true;
      audioChunks = [];
      rec.start();
      setTimeout(() => {
        rec.stop();
        startRecord.disabled = false;
      }, 5000);
    }

    var player = document.getElementById('player');

    var handleSuccess = function(stream) {
      rec = new MediaRecorder(stream);
      rec.ondataavailable = e => {
        audioChunks.push(e.data);
        if (rec.state == "inactive") {
          let blob = new Blob(audioChunks, {type: 'audio/x-mpeg-3'});
          player.src = URL.createObjectURL(blob);
          player.controls = true;
          player.autoplay = true;
          // The code below intends to upload the sound file to the server.
          // But it is partly (or completely) wrong and does not work.
          var xhr = new XMLHttpRequest();
          var params = 'AudioData=blob';
          xhr.open('POST', 'GetAudio.php', true);
          xhr.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
          xhr.send(params);
        }
      }
    };

    navigator.mediaDevices.getUserMedia({audio: true})
      .then(handleSuccess);
  </script>
</body>
</html>
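For what it is worth, the reason Audio.data contains the four characters blob is that params is the literal string 'AudioData=blob'; the recorded Blob is never serialized into the request. One way to send the real bytes is a FormData upload, sketched below with the field name and GetAudio.php endpoint carried over from the page above; on the PHP side the upload would then arrive in $_FILES['AudioData'] rather than $_POST (readable via move_uploaded_file, for example):

// inside rec.ondataavailable, after `blob` is built:
var form = new FormData();
form.append('AudioData', blob, 'recording.webm'); // the filename is arbitrary
var xhr = new XMLHttpRequest();
xhr.open('POST', 'GetAudio.php', true);
// no Content-Type header: the browser sets multipart/form-data with a boundary
xhr.send(form);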
I am building my own broadcasting architecture. In this system I am using WebSockets to transfer the data, since I know they are suitable for continuous data transfer.
In my system there is a host who initiates a live webcam broadcast. I use MediaStreamRecorder.js, which records a 5-second chunk of video at a time and sends it to the server through the WebSocket as a Blob.
The server simply receives each chunk and sends it to all clients connected to that session.
A connected client thus receives a 5-second chunk of video as a Blob over the WebSocket every 5 seconds.
My main problem is on the client side: how can I set this sequence of video Blobs as the HTML video source dynamically, every 5 seconds, so that each chunk plays as it arrives?
I am using GlassFish 4.0 as the server and JavaScript on the host and client sides. Browser: Chrome.
Source code:
ServerBroadCast.java
package websocket1;

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Collections;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint(value = "/liveStreamMulticast")
public class LiveStreamMultiCast {
    private static final Set<Session> sessions = Collections.synchronizedSet(new HashSet<Session>());

    @OnOpen
    public void whenOpening(Session session) {
        // session.setMaxBinaryMessageBufferSize(1024*512); // 512 KB
        sessions.add(session);
        System.out.println("You are Connected!");
        System.out.println("Total Connection are connected: " + sessions.size());
    }

    @OnMessage
    public void handleVideo(byte[] videoData, Session hostSession) {
        // System.out.println("Inside process video");
        try {
            if (videoData != null) {
                sendVideo(videoData, hostSession);
            }
        } catch (Throwable e) {
            System.out.println("Error sending message " + e.getMessage());
        }
    }

    @OnClose
    public void onClosing(Session session) {
        System.out.println("Goodbye!");
        sessions.remove(session);
    }

    private void sendVideo(byte[] videoData, Session hostSession) throws IOException {
        Iterator<Session> iterator = sessions.iterator();
        Session tempSession = null;
        while (iterator.hasNext()) {
            tempSession = iterator.next();
            // System.out.println("Server send data to " + tempSession);
            if (!tempSession.equals(hostSession))
                tempSession.getBasicRemote().sendBinary(ByteBuffer.wrap(videoData));
        }
    }
}
host.html
<html>
<head>
  <title>Demo</title>
  <script type="text/javascript" src="js/required/mediastream.js"></script>
</head>
<body>
  <video id="video" autoplay=""></video>
  <button id="stopButton" onclick="stop()">Stop</button>
  <script type="text/javascript">
    var url = "ws://localhost:8080/LiveTraining3Demo/liveStreamMulticast"; // 8080/application_name/value_given_in_annotation
    var socket = new WebSocket(url);
    var video = document.querySelector('video');

    socket.onopen = function() {
      console.log("Connected to Server!!");
    }

    socket.onmessage = function(msg) {
      console.log("Message come from server");
    }

    var wholeVideo = [];
    var chunks = [];
    var mediaRecorder;

    function gotMedia(stream) {
      video.srcObject = stream;
      mediaRecorder = new MediaStreamRecorder(stream);
      console.log("mediaRecorder called");
      mediaRecorder.mimeType = 'video/webm';
      mediaRecorder.start(5000); // one 5-second chunk at a time
      console.log("recorder started");
      mediaRecorder.ondataavailable = (event) => {
        chunks.push(event.data);
        console.log("push B");
        wholeVideo.push(event.data);
        console.log("WholeVideo Size:");
        setTimeout(sendData, 5010); // pass the function itself, not its result
      };
    }

    function sendData() {
      //var byteArray = new Uint8Array(recordedTemp);
      const superBuffer = new Blob(chunks, {
        type: 'video/webm'
      });
      socket.send(superBuffer);
      console.log("Send Data");
      console.table(superBuffer);
      chunks = [];
    }

    navigator.getUserMedia = navigator.getUserMedia ||
      navigator.webkitGetUserMedia ||
      navigator.mozGetUserMedia ||
      navigator.msGetUserMedia;

    navigator.mediaDevices.getUserMedia({video: true, audio: true})
      .then(gotMedia)
      .catch(e => { console.error('getUserMedia() failed: ' + e); });
  </script>
</body>
</html>
client.html
<html>
<head>
  <title>Recieve Video</title>
</head>
<body>
  <video id="video" autoplay controls loop
         style="width: 700; height: 500; margin: auto">
    <source src="" type="video/webm">
  </video>
  <script>
    var url = "ws://localhost:8080/LiveTraining3Demo/liveStreamMulticast"; // 8080/application_name/value_given_in_annotation
    var check = true;
    var socket = new WebSocket(url);
    var videoData = [];
    var superBuffer = null;
    //var videoUrl;
    //socket.binaryType = 'arraybuffer';

    socket.onopen = function() {
      console.log("Connected!!");
    }

    var check = true;
    socket.onmessage = function(videoStream) {
      var video = document.querySelector('video');
      var videoUrl = window.URL.createObjectURL(videoStream.data);
      video.src = videoUrl;
      video.load();
      video.onloadeddata = function() {
        URL.revokeObjectURL(video.src);
        video.play();
      }
      //video.srcObject
      //video.play();
      console.table(videoStream);
    }

    socket.onerror = function(err) {
      console.log("Error: " + err);
    }
  </script>
</body>
</html>
When I run it, everything else looks fine, but in client.html only the video tag is displayed and no video plays.
I have been working on this for a week.
Maybe some part of my implementation is wrong. I also know about WebRTC and the Mauz WebRTC broadcast project, but I would rather not go through something that complex if there is a simpler way to do it. And I would rather not use a Node.js server, since I have to build this web application with Spring.
Any idea would be appreciated.
Thanks in advance!
On the client side you will receive the data as an ArrayBuffer, so you need to collect the chunks into an array of Blobs. Note that the binaryType line (commented out in your client.html above) has to be enabled, and on a plain WebSocket the handler is assigned to onmessage:

let video = document.querySelector('video');
let blobArray = [];

socket.binaryType = 'arraybuffer'; // receive binary messages as ArrayBuffers
socket.onmessage = (msg) => {
  blobArray.push(new Blob([new Uint8Array(msg.data)], {'type': 'video/webm'}));
  let currentTime = video.currentTime;
  let blob = new Blob(blobArray, {'type': 'video/webm'}); // everything received so far
  video.src = window.URL.createObjectURL(blob);
  video.currentTime = currentTime; // resume where the previous blob stopped
  video.play();
};
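Rebuilding the full Blob and resetting video.src on every message gets heavier as the stream grows. If each chunk after the first continues a single WebM stream (one header, increasing timecodes), which depends on the recorder library and is an assumption here, the chunks can instead be appended to a MediaSource SourceBuffer as they arrive. A sketch:

let video = document.querySelector('video');
let ms = new MediaSource();
let queue = []; // chunks that arrive while the SourceBuffer is busy
let sb = null;

video.src = URL.createObjectURL(ms);
ms.addEventListener('sourceopen', () => {
  sb = ms.addSourceBuffer('video/webm; codecs="vp8,vorbis"'); // assumed codecs
  sb.addEventListener('updateend', () => {
    if (queue.length && !sb.updating) sb.appendBuffer(queue.shift());
  });
});

socket.binaryType = 'arraybuffer';
socket.onmessage = (msg) => {
  let chunk = new Uint8Array(msg.data);
  if (sb && !sb.updating) sb.appendBuffer(chunk);
  else queue.push(chunk);
  video.play();
};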
I'm new to Python. I built an application in Python, and in it I want to capture images from my webcam using HTML and JavaScript (AJAX) and save them on the server side with Python. I have completed capturing images on the client side in HTML, but I don't know how to pass the data from the HTML client to the server-side Python and save it. If anybody has done this, can you please help me?
Thank you in advance.
My.html:
<!doctype html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Get User Media - Photo</title>
</head>
<body>
  <button id="take">Take a photo</button><br />
  <video id="v"></video>
  <canvas id="canvas" style="display:none;"></canvas>
  <img src="D:/VoteTest/img.jpg" id="photo" alt="photo">
  <script>
    ;(function(){
      function userMedia(){
        return navigator.getUserMedia = navigator.getUserMedia ||
          navigator.webkitGetUserMedia ||
          navigator.mozGetUserMedia ||
          navigator.msGetUserMedia || null;
      }

      // Now we can use it
      if (userMedia()) {
        var videoPlaying = false;
        var constraints = {
          video: true,
          audio: false
        };
        var video = document.getElementById('v');
        var media = navigator.getUserMedia(constraints, function(stream){
          // URL Object is different in WebKit
          var url = window.URL || window.webkitURL;
          // create the url and set the source of the video element
          video.src = url ? url.createObjectURL(stream) : stream;
          // Start the video
          video.play();
          videoPlaying = true;
        }, function(error){
          console.log("ERROR");
          console.log(error);
        });

        // Listen for user click on the "take a photo" button
        document.getElementById('take').addEventListener('click', function(){
          if (videoPlaying){
            var canvas = document.getElementById('canvas');
            canvas.width = video.videoWidth;
            canvas.height = video.videoHeight;
            canvas.getContext('2d').drawImage(video, 0, 0);
            var data = canvas.toDataURL('image/webp');
            document.getElementById('photo').setAttribute('src', data);
          }
        }, false);
      } else {
        console.log("KO");
      }
    })();
  </script>
</body>
</html>
I just did this recently for a project. You can use XHR to send the image inside form data:
let formdata = new FormData();
formdata.append("image", data);

let xhr = new XMLHttpRequest();
xhr.open('POST', 'http://yourserver/image', true);
xhr.onload = function () {
  if (this.status === 200)
    console.log(this.response);
  else
    console.error(xhr);
};
xhr.send(formdata);
I had trouble using toDataURL to convert the canvas, so I used toBlob for an easier conversion:
canvas.toBlob(callBackToMyPostFunctionAbove, 'image/jpeg');
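Putting the two snippets together, the toBlob callback can be the upload function itself. A sketch, with callBackToMyPostFunctionAbove filled in (server URL as in the snippet above):

function callBackToMyPostFunctionAbove(blob) {
  let formdata = new FormData();
  formdata.append("image", blob, "capture.jpg"); // a real file part, not a string
  let xhr = new XMLHttpRequest();
  xhr.open('POST', 'http://yourserver/image', true);
  xhr.send(formdata);
}

canvas.toBlob(callBackToMyPostFunctionAbove, 'image/jpeg');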
Here is a sample HTML file with embedded JavaScript and my Python server.
HTML & Embedded JavaScript
The JavaScript uses:
getUserMedia to start a local video stream
a mouse click on the image to initiate the image capture
a canvas object to save an image from the getUserMedia stream
XHR to send the file as form data
The code:
<!DOCTYPE html>
<html>
<head>
  <title>Post an Image test</title>
  <script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
</head>
<style>
  /* mirror the image */
  video, canvas {
    transform: scale(-1, 1);         /* for Firefox (& IE) */
    -webkit-transform: scale(-1, 1); /* for Chrome & Opera (& Safari) */
  }
</style>
<body>
  <video id="myVideo" autoplay></video>
  <script>
    let v = document.getElementById("myVideo");

    // create a canvas to grab an image for upload
    let imageCanvas = document.createElement('canvas');
    let imageCtx = imageCanvas.getContext("2d");

    // Add file blob to a form and post
    function postFile(file) {
      let formdata = new FormData();
      formdata.append("image", file);
      let xhr = new XMLHttpRequest();
      xhr.open('POST', 'http://localhost:5000/image', true);
      xhr.onload = function () {
        if (this.status === 200)
          console.log(this.response);
        else
          console.error(xhr);
      };
      xhr.send(formdata);
    }

    // Get the image from the canvas
    function sendImagefromCanvas() {
      // Make sure the canvas is set to the current video size
      imageCanvas.width = v.videoWidth;
      imageCanvas.height = v.videoHeight;
      imageCtx.drawImage(v, 0, 0, v.videoWidth, v.videoHeight);
      // Convert the canvas to blob and post the file
      imageCanvas.toBlob(postFile, 'image/jpeg');
    }

    // Take a picture on click
    v.onclick = function() {
      console.log('click');
      sendImagefromCanvas();
    };

    window.onload = function () {
      // Get camera video
      navigator.mediaDevices.getUserMedia({video: {width: 1280, height: 720}, audio: false})
        .then(stream => {
          v.srcObject = stream;
        })
        .catch(err => {
          console.log('navigator.getUserMedia error: ', err)
        });
    };
  </script>
</body>
</html>
This uses adapter.js to polyfill getUserMedia on different browsers without any error checks.
Python Server
And here is a sample in Python using Flask as a web server:
from flask import Flask, request, Response
import time

PATH_TO_TEST_IMAGES_DIR = './images'

app = Flask(__name__)

@app.route('/')
def index():
    return Response(open('./static/getImage.html').read(), mimetype="text/html")

# save the image as a picture
@app.route('/image', methods=['POST'])
def image():
    i = request.files['image']  # get the image
    f = ('%s.jpeg' % time.strftime("%Y%m%d-%H%M%S"))
    i.save('%s/%s' % (PATH_TO_TEST_IMAGES_DIR, f))
    return Response("%s saved" % f)

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
If you are looking for PHP on the server side, here is how I did it.
Post the image data to the PHP script using jQuery:

var imgData = canvas.toDataURL('image/png');
$.post("https://path-to-your-script/capture.php", {image: imgData},
  function(data) {
    console.log('posted');
  });
The PHP script will look like this:
capture.php

<?php
$data = $_POST['image'];

// remove "data:image/png;base64," from the image data.
$data = str_replace("data:image/png;base64,", "", $data);

// save to file
file_put_contents("/tmp/image.png", base64_decode($data));
I just found a working docx-to-HTML converter on GitHub that uses only JavaScript. The main code, which converts the docx to HTML, is below. The issue is that the page just has a file input: on clicking it (or on drag-and-drop) and choosing a Word document, it opens the document as HTML. I want to specify a file location in the code instead, so I can load particular documents from the machine without going through the file picker.
The code which converts docx to HTML and renders it:
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>DocxJS Example</title>
  <script type="text/javascript" src="https://code.jquery.com/jquery-2.2.4.min.js"></script>
  <script type="text/javascript" src="https://www.docxjs.com/js/build/latest.docxjs.min.js"></script>
</head>
<body>
  <input id="inputFiles" type="file" name="files[]" multiple="false">
  <div id="loaded-layout" style="width:100%;height:800px;"></div>
  <script>
    $(document).ready(function(){
      var $inputFiles = $('#inputFiles');
      $inputFiles.on('change', function (e) {
        var files = e.target.files;
        var docxJS = new DocxJS();
        docxJS.parse(
          files[0],
          function () {
            docxJS.render($('#loaded-layout')[0], function (result) {
              if (result.isError) {
                console.log(result.msg);
              } else {
                console.log("Success Render");
              }
            });
          }, function (e) {
            console.log("Error!", e);
          }
        );
      });
    });
  </script>
</body>
</html>
I tried changing var files = e.target.files; to var files = "C:/sda/path/to/docx"; but that didn't help.
I also tried changing
var files = e.target.files;
to
var files = new Array(new File([""], "sample.docx"));
but that gives an OOXML parse error, which makes sense: that File constructor creates an empty file named sample.docx rather than reading the real document.
Update:
Let's say I have a file location variable in PHP and I wish to use that in the JavaScript code instead. How do I do it?
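Since PHP runs on the server before the page reaches the browser, the PHP variable can be printed into the JavaScript as a URL and the file fetched from there. A sketch, assuming $fileLocation is the hypothetical PHP variable, that it resolves to a URL the page can reach over HTTP, and reusing the getFileObject helper from the last update further down; note that passing its Blob to DocxJS.parse is also an assumption, since the documented input is the File from an input element:

var fileUrl = <?php echo json_encode($fileLocation); ?>;

getFileObject(fileUrl, function (fileObject) {
  var docxJS = new DocxJS();
  docxJS.parse(fileObject, function () {
    docxJS.render($('#loaded-layout')[0], function (result) {
      console.log(result.isError ? result.msg : "Success Render");
    });
  }, function (e) {
    console.log("Error!", e);
  });
});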
I also checked the docx2html JavaScript code, and here is the code for it:
<!DOCTYPE html>
<html>
<head>
  <script src="index.js"></script>
  <script>
    function test(input){
      require("docx2html")(input.files[0]).then(function(converted){
        text.value = converted.toString();
      })
    }
  </script>
</head>
<body>
  <input type="file" style="position:absolute;top:0" onchange="test(this)">
  <br/>
  <br/>
  <textarea id="text"></textarea>
</body>
</html>
The same issue: it needs input.files[0] here as well.
Update:
I am trying to use the method mentioned in the comments, but I encounter some errors:
var fil;

var getFileBlob = function (url, cb) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url);
  xhr.responseType = "blob";
  xhr.addEventListener('load', function() {
    cb(xhr.response);
  });
  xhr.send();
};

var blobToFile = function (blob, name) {
  blob.lastModifiedDate = new Date();
  blob.name = name;
  return blob;
};

var getFileObject = function(filePathOrUrl, cb) {
  getFileBlob(filePathOrUrl, function (blob) {
    cb(blobToFile(blob, 'test.docx'));
  });
};

getFileObject('demo.docx', function (fileObject) {
  console.log(fileObject);
  fil = fileObject;
});
The error at first was "Cross origin requests are only supported for HTTP." Then I used https://calibre-ebook.com/downloads/demos/demo.docx instead of just demo.docx as the file path. This, however, gives another error:
Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https, chrome-extension-resource.
This means Chrome cannot load it while the page is opened from the local filesystem; it needs to be running on a server. If someone can help provide a fix to make it work offline, let me know. Note that this last method is an asynchronous call.
In the browser there is a sandbox policy: scripts cannot access files directly via a filesystem path.
Please access the file through a drag-and-drop event or a file input's change event.
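To illustrate the drag-and-drop route, here is a sketch against the same DocxJS page as in the question, using its #loaded-layout div as both the drop target and the render target:

var dropZone = document.getElementById('loaded-layout');

dropZone.addEventListener('dragover', function (e) {
  e.preventDefault(); // required, or the drop event never fires
});

dropZone.addEventListener('drop', function (e) {
  e.preventDefault();
  var file = e.dataTransfer.files[0]; // the dropped .docx as a File
  var docxJS = new DocxJS();
  docxJS.parse(file, function () {
    docxJS.render(dropZone, function (result) {
      console.log(result.isError ? result.msg : "Success Render");
    });
  }, function (err) {
    console.log("Error!", err);
  });
});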
I am making an application in which I need to pick up .doc or .docx files directly from the file system and load them on the page. Can you help me with the code?
There is a problem with using a normal FileReader to open these files; can anyone clarify why that happens?
<!DOCTYPE HTML>
<html>
<head>
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <script src="resources/sap-ui-core.js" id="sap-ui-bootstrap"
          data-sap-ui-libs="sap.ui.commons"
          data-sap-ui-theme="sap_goldreflection">
  </script>
  <!-- add sap.ui.table, sap.ui.ux3 and/or other libraries to 'data-sap-ui-libs' if required -->
</head>
<body>
  <input type="file" id="files" name="file" />
  <div id="byte_content"></div>
  <script>
    function readBlob() {
      var files = document.getElementById('files').files;
      if (!files.length) {
        alert('Please select a file!');
        return;
      }
      var file = files[0];
      var start = 0;
      var stop = file.size - 1;
      var reader = new FileReader();

      // If we use onloadend, we need to check the readyState.
      reader.onloadend = function (evt) {
        if (evt.target.readyState == FileReader.DONE) { // DONE == 2
          document.getElementById('byte_content').textContent = evt.target.result;
        }
      };

      var blob = file.slice(start, stop + 1);
      reader.readAsBinaryString(blob);
    }

    $("document").ready(function () {
      $("#files").change(function () {
        readBlob();
      });
    });
  </script>
</body>
</html>
You could take a look at the DocumentCloud project, which has a bunch of components, including an HTML5 open-source document viewer (the NYTimes document viewer), hosted on GitHub (Apache license).
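As for why a normal FileReader seems to fail on these files: the reader itself works. A .docx (like .xlsx and .pptx) is a ZIP archive of XML parts, so readAsBinaryString faithfully returns compressed binary that renders as gibberish on the page; displaying the document needs a converter such as the ones discussed above. A small sketch that makes the container format visible (illustrative only; file is the File taken from the input):

function inspectDocx(file) {
  var reader = new FileReader();
  reader.onload = function (evt) {
    var bytes = new Uint8Array(evt.target.result);
    // ZIP archives start with the two bytes "PK" (0x50 0x4B)
    var isZip = bytes[0] === 0x50 && bytes[1] === 0x4B;
    console.log(isZip ? "ZIP container: a converter is needed to display it"
                      : "not a ZIP container");
  };
  reader.readAsArrayBuffer(file);
}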