WebRTC: Is it possible to control the microphone and volume levels?

I am working on a demo site which includes a slide-out widget that allows a user to place a call.
I am using the SIPml5 tool along with the webrtc2sip back end for handling the call. That part is all set up and working properly. So now I am looking at whether I can control the microphone and volume levels using sliders in the widget. Is this even possible? I've looked everywhere online and haven't had much luck.
I did find a couple sites that showed me how I can control the volume of the audio tag within the jQuery slider code. So I tried setting it up like the code below:
$(function() {
    $("#slider-spkr").slider({
        orientation: "vertical",
        range: "min",
        min: 0,
        max: 100,
        value: 60,
        slide: function(event, ui) {
            var value = $("#slider-spkr").slider("value");
            document.getElementById("audio_remote").volume = (value / 100);
        },
        change: function() {
            var value = $("#slider-spkr").slider("value");
            document.getElementById("audio_remote").volume = (value / 100);
        }
    });
});
Unfortunately, this isn't working either. So I'm not sure if I am allowed to do this when using SIPml5, or if my jQuery code needs to be adjusted.
Has anyone else had any luck with adding microphone/volume controls? Thanks for your help.

AFAIK it's impossible to adjust the microphone volume. But you can switch it on/off using the stream API:
function toggleMic(stream) { // stream is your local WebRTC stream
    var audioTracks = stream.getAudioTracks();
    for (var i = 0, l = audioTracks.length; i < l; i++) {
        audioTracks[i].enabled = !audioTracks[i].enabled;
    }
}
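For example, a minimal usage sketch that wires the toggle to a button (the #mic-toggle id is an assumption, and the prefixed callback-style webkitGetUserMedia matches the other snippets here):
navigator.webkitGetUserMedia({ audio: true },
    function(localStream) {
        // Keep a reference to the local stream and toggle it on click.
        document.getElementById("mic-toggle").onclick = function() {
            toggleMic(localStream);
        };
    },
    function(err) {
        console.log("getUserMedia error: " + err);
    }
);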
This code is for the native WebRTC API, not SIPml5; it seems they haven't implemented it yet. There is a (not so clear) recipe for it here.

Well it is possible, but currently only in Chrome and with some assumptions.
I am not the author; you can find the inspiration for this code in this open-source library (SimpleWebRtc).
navigator.webkitGetUserMedia(constraints,
    function(webRTCStream) {
        var context = new window.AudioContext();

        // Route the microphone through a gain node so its level can be adjusted.
        var microphone = context.createMediaStreamSource(webRTCStream);
        var gainFilter = context.createGain();
        var destination = context.createMediaStreamDestination();
        var outputStream = destination.stream;

        microphone.connect(gainFilter);
        gainFilter.connect(destination);

        // Swap the tracks: add the gain-filtered track, then remove the original.
        var filteredTrack = outputStream.getAudioTracks()[0];
        webRTCStream.addTrack(filteredTrack);
        var originalTrack = webRTCStream.getAudioTracks()[0];
        webRTCStream.removeTrack(originalTrack);
    },
    function(err) {
        console.log("The following error occurred: " + err);
    }
);
The trick is to modify the stream and then replace the audio track of the current stream with the audio track of the modified stream (taken from the MediaStreamDestination's stream).
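With that graph in place, adjusting the microphone volume is just a matter of setting the gain node's value. A minimal sketch wired to a jQuery UI slider, in the spirit of the question's speaker slider (the #slider-mic id is an assumption, and gainFilter must be kept in a scope the slider can reach):
$("#slider-mic").slider({
    orientation: "vertical",
    range: "min",
    min: 0,
    max: 100,
    value: 100,
    slide: function(event, ui) {
        // 0.0 silences the outgoing track, 1.0 is pass-through.
        gainFilter.gain.value = ui.value / 100;
    }
});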
DISCLAIMER:
This doesn't work in Firefox as of version 35, since they simply haven't implemented MediaStream.addTrack/removeTrack. I currently use this check:
this.micVolumeIsSupported = function() {
    var MediaStream = window.webkitMediaStream || window.MediaStream;
    return !!MediaStream.prototype.addTrack && !!MediaStream.prototype.removeTrack;
};
_gainSupported = this.micVolumeIsSupported();
This has a limitation in Chrome due to a bug with stopping a stream whose tracks have been swapped. You might wish to restore the original tracks before closing the connection or on connection interruption:
this.restoreTracks = function() {
    if (_gainSupported && _tracksSubstituted) {
        webRTCStream.addTrack(originalTrack);
        webRTCStream.removeTrack(filteredTrack);
        _tracksSubstituted = false;
    }
};
This works for me.

Related

Intercept calls to HTML5 canvas element

I have a web application that renders its entire user interface in an HTML5 canvas.
Note that I can't change the current application.
Currently, this application is being tested using Selenium.
This is done by simulating a click event at a given location in the browser window.
After the click has been executed, a sleep of 2 seconds is being performed to ensure that the entire UI is ready before moving to the next step.
Due to all the 'wait' statements, testing the application is very slow.
Therefore, I thought it might be an idea to intercept all calls to the HTML5 canvas.
That way I can rely on the triggered events to know if the UI is ready to move to the next step.
Assume that I have the following code in my application that renders the canvas.
var canvas = document.getElementById("canvasElement");
var ctx = canvas.getContext("2d");
ctx.fillStyle = "green";
ctx.fillRect(10, 10, 100, 100);
Is there a way to intercept the 'fillRect' event?
I thought of something along these lines:
var canvasProxy = document.getElementById("canvasElement");
canvasProxy.addEventListener("getContext", function(event) {
    console.log("Hello");
});
var canvas = document.getElementById("canvasElement");
var ctx = canvas.getContext("2d");
ctx.fillStyle = "green";
ctx.fillRect(10, 10, 100, 100);
Unfortunately, this is not working.
I've created a JSFiddle to play with the example.
https://jsfiddle.net/5cknym74/4/
Any thoughts?
I played around a bit with the JS API and it seems that the following might work:
// SECTION: Store a reference to all the HTML5 'canvas' element methods.
HTMLCanvasElement.prototype._captureStream = HTMLCanvasElement.prototype.captureStream;
HTMLCanvasElement.prototype._getContext = HTMLCanvasElement.prototype.getContext;
HTMLCanvasElement.prototype._toDataURL = HTMLCanvasElement.prototype.toDataURL;
HTMLCanvasElement.prototype._toBlob = HTMLCanvasElement.prototype.toBlob;
HTMLCanvasElement.prototype._transferControlToOffscreen = HTMLCanvasElement.prototype.transferControlToOffscreen;
HTMLCanvasElement.prototype._mozGetAsFile = HTMLCanvasElement.prototype.mozGetAsFile;

// SECTION: Patch the HTML5 'canvas' element methods.
HTMLCanvasElement.prototype.captureStream = function(frameRate) {
    console.log('INTERCEPTING: HTMLCanvasElement.prototype.captureStream');
    return this._captureStream(frameRate);
};

HTMLCanvasElement.prototype.getContext = function(contextType, contextAttributes) {
    console.log('INTERCEPTING: HTMLCanvasElement.prototype.getContext');
    console.log('PROPERTIES:');
    console.log('    contextType: ' + contextType);
    return this._getContext(contextType, contextAttributes);
};

HTMLCanvasElement.prototype.toDataURL = function(type, encoderOptions) {
    console.log('INTERCEPTING: HTMLCanvasElement.prototype.toDataURL');
    return this._toDataURL(type, encoderOptions);
};

HTMLCanvasElement.prototype.toBlob = function(callback, mimeType, qualityArgument) {
    console.log('INTERCEPTING: HTMLCanvasElement.prototype.toBlob');
    return this._toBlob(callback, mimeType, qualityArgument);
};

HTMLCanvasElement.prototype.transferControlToOffscreen = function() {
    console.log('INTERCEPTING: HTMLCanvasElement.prototype.transferControlToOffscreen');
    return this._transferControlToOffscreen();
};

HTMLCanvasElement.prototype.mozGetAsFile = function(name, type) {
    console.log('INTERCEPTING: HTMLCanvasElement.prototype.mozGetAsFile');
    return this._mozGetAsFile(name, type);
};
Now that I can intercept the calls, I can find out which calls are responsible for drawing a button and react accordingly.
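The same trick should also work one level down, on the 2D context itself, which is what lets you observe individual drawing calls such as fillRect. A sketch (the 'canvas-draw' event name is my own invention for this example):
// Patch the 2D context the same way, so drawing calls can be observed.
CanvasRenderingContext2D.prototype._fillRect = CanvasRenderingContext2D.prototype.fillRect;
CanvasRenderingContext2D.prototype.fillRect = function(x, y, w, h) {
    console.log('INTERCEPTING: CanvasRenderingContext2D.prototype.fillRect');
    // Re-emit the call as a DOM event that a test harness could wait for.
    this.canvas.dispatchEvent(new CustomEvent('canvas-draw', {
        detail: { method: 'fillRect', args: [x, y, w, h] }
    }));
    return this._fillRect(x, y, w, h);
};
A Selenium test could then wait for 'canvas-draw' events instead of sleeping for a fixed two seconds.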

How to Avoid continuous triggering of HTML5 DeviceOrientationEvent

I am trying to tilt an image based on the HTML5 DeviceOrientation event. However, I am seeing that the event is continuously fired even when the device is stable, i.e. not rotating or moving. In the following code snippet, the console log is printed continuously. What is the possible reason, and how can I stop it?
I tried both capturing and bubbling:
if (window.DeviceOrientationEvent) {
    window.addEventListener('deviceorientation', function(eventData) {
        var tiltLR = eventData.gamma;
        console.log("tiltLR..........", tiltLR);
    }, false);
}
I haven't needed to use this type of event listener before, so I am not familiar with the output.
However, I believe you would need to compare the old tilt with the new tilt. If the new tilt is substantially greater or smaller, then execute the code.
if (window.DeviceOrientationEvent) {
    var originalTilt,
        tolerance = 5;
    window.addEventListener('deviceorientation', function(eventData) {
        // First event: just record the starting tilt. (Comparing against
        // undefined is always false, so nothing would ever fire otherwise.)
        if (originalTilt === undefined) {
            originalTilt = eventData.gamma;
            return;
        }
        if (eventData.gamma > originalTilt + tolerance ||
            eventData.gamma < originalTilt - tolerance) {
            var tiltLR = eventData.gamma;
            console.log("tiltLR..........", tiltLR);
            originalTilt = tiltLR;
        }
    }, false);
}
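Note that the sensor itself reports continuously, so you can't stop the event from firing; you can only decide how often to react to it. Besides the threshold above, a simple time-based throttle also works (the 100 ms interval is an arbitrary choice):
var lastHandled = 0;
window.addEventListener('deviceorientation', function(eventData) {
    var now = Date.now();
    if (now - lastHandled < 100) return; // ignore events arriving within 100 ms
    lastHandled = now;
    console.log("tiltLR..........", eventData.gamma);
}, false);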

Media Source Extensions Not Working

I am trying to use the MediaSource API to append separate WebM videos to a single source.
I found a Github project that was attempting the same thing, where a playlist of WebMs is loaded, and each one is appended as a SourceBuffer. But it was last committed a year ago, and thus out-of-sync with the current spec. So I forked it and updated to the latest API properties/methods, plus some restructuring. Much of the existing code was taken directly from the spec’s examples and Eric Bidelman’s test page.
However, I can not get it to work as expected. I am testing in two browsers, both on Mac OS X 10.9.2: Chrome 35 stable (latest at the time of this writing), and Firefox 30 beta with the flag media.mediasource.enabled set to true in about:config (this feature will not be introduced until FF 25, and current stable is 24).
Here are the problems I’m running into.
Both browsers
I want the video to be, in the end, one long video composed of the 11 WebMs (00.webm, 01.webm, …, 10.webm). Right now, each browser only plays 1 segment of the video.
Chrome
Wildly inconsistent behavior. Seems impossible to reproduce any of these bugs reliably.
Sometimes the video is blank, or has a tall black bar in the middle of it, and is unplayable.
Sometimes the video will load and pause on the first frame of 01.webm.
Sometimes, the video will play a couple of frames of the 02.webm and pause, having only loaded the first three segments.
The Play button is initially grayed out.
Pressing the grayed out Play button produces wildly inconsistent behaviors. Sometimes, it loads a black, unplayable video. Other times, it will play the first segment, then, when you get to the end, it stops, and when you press Play/Pause again, it will load the next segment. Even then, it will sometimes skip over segments and gets stuck on 04.webm. Regardless, it never plays the final segment, even though the console will report going through all of the buffers.
It is honestly different every time. I can’t list them all here.
Known caveats: Chrome does not currently implement sourceBuffer.mode, though I do not know what effect this might have.
Firefox
Only plays 00.webm. Total running time is 0:08, the length of that video.
Video seeking does not work. (This may be expected behavior, as there is nothing actually happening in the onSeeking event handler.)
Video can not be restarted once finished.
My initial theory was that this had to do with mediaSource.sourceBuffers[0].timestampOffset = duration and duration = mediaSource.duration. But I can’t seem to get anything back from mediaSource.duration except for NaN, even though I’m appending new segments.
Completely lost here. Guidance very much appreciated.
EDIT: I uncommented the duration parts of the code, and ran mse_webm_remuxer from Aaron Colwell's Media Source Extension Tools (thanks Adam Hart for the tips) on all of the videos. Voila, no more unpredictable glitches in Chrome! But alas, it still pauses once a media segment ends, and even when you press play, it sometimes gets stuck on one frame.
In Firefox Beta, it doesn’t play past the first segment, responding with:
TypeError: Value being assigned to SourceBuffer.timestampOffset is not a finite floating-point value.
Logging the value of duration returns NaN (but only in FF).
The main problem is with the video files. If you open chrome://media-internals/ you can see the error "Media segment did not begin with keyframe". Using properly formatted videos, like the one from Eric Bidelman's example (I hope he doesn't get mad that I keep linking directly to that video, but it's the only example video I've found that works), your code does work with the following change in appendNextMediaSegment():
duration = mediaSource.duration;
mediaSource.sourceBuffers[0].timestampOffset = duration;
mediaSource.sourceBuffers[0].appendBuffer(mediaSegment);
You can try Aaron Colwell's Media Source Extension Tools to try to get your videos working, but I've had limited success.
It also seems a little weird that you're looking at the onProgress event before appending segments, but I guess that could work if you only want to append if the video is actually playing. It could make the seekbar act odd since the video length is unknown, but that can be a problem in any case.
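One more thing worth checking, since appendBuffer() is asynchronous: calling it again while the SourceBuffer is still updating throws an InvalidStateError. A common pattern is to queue segments and drain the queue on 'updateend'; a sketch (sourceBuffer is assumed to come from your existing setup code):
var pendingSegments = [];

function appendSegment(segment) {
    if (sourceBuffer.updating || pendingSegments.length > 0) {
        pendingSegments.push(segment); // an append is still in flight, defer this one
    } else {
        sourceBuffer.appendBuffer(segment);
    }
}

sourceBuffer.addEventListener('updateend', function() {
    if (pendingSegments.length > 0) {
        sourceBuffer.appendBuffer(pendingSegments.shift());
    }
});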
I agree with Adam Hart's opinion. With a webm file, I tried to implement an example like http://html5-demos.appspot.com/static/media-source.html and concluded that the problem was caused by the source file I used.
If you still have options left, how about trying to use "samplemuxer", introduced at https://developer.mozilla.org/en-US/docs/Web/HTML/DASH_Adaptive_Streaming_for_HTML_5_Video.
In my opinion, samplemuxer is an encoder like FFmpeg.
I found that the converted file works with the MediaSource API. If it also works for you, please let me know.
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>MediaSource API Demo</title>
</head>
<body>
    <h3>Appending .webm video chunks using the Media Source API</h3>
    <section>
        <video controls autoplay width="320" height="240"></video>
        <pre id="log"></pre>
    </section>

    <script>
        // ORIGINAL CODE http://html5-demos.appspot.com/static/media-source.html
        var FILE = 'IU_output2.webm';
        var NUM_CHUNKS = 5;

        var video = document.querySelector('video');
        var mediaSource = new MediaSource();
        video.src = window.URL.createObjectURL(mediaSource);

        function callback(e) {
            var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vorbis,vp8"');
            logger.log('mediaSource readyState: ' + this.readyState);

            GET(FILE, function(uInt8Array) {
                var file = new Blob([uInt8Array], {type: 'video/webm'});
                var chunkSize = Math.ceil(file.size / NUM_CHUNKS);
                logger.log('num chunks:' + NUM_CHUNKS);
                logger.log('chunkSize:' + chunkSize + ', totalSize:' + file.size);

                // Slice the video into NUM_CHUNKS and append each to the media element.
                var i = 0;
                (function readChunk_(i) {
                    var reader = new FileReader();
                    // Reads aren't guaranteed to finish in the same order they're started in,
                    // so we need to read + append the next chunk after the previous reader
                    // is done (onload is fired).
                    reader.onload = function(e) {
                        try {
                            sourceBuffer.appendBuffer(new Uint8Array(e.target.result));
                            logger.log('appending chunk:' + i);
                        } catch (e) {
                            console.log(e);
                        }
                        if (i == NUM_CHUNKS - 1) {
                            if (!sourceBuffer.updating)
                                mediaSource.endOfStream();
                        } else {
                            if (video.paused) {
                                video.play(); // Start playing after 1st chunk is appended.
                            }
                            sourceBuffer.addEventListener('updateend', function(e) {
                                if (i < NUM_CHUNKS - 1)
                                    readChunk_(++i);
                            });
                        } // end if
                    };
                    var startByte = chunkSize * i;
                    var chunk = file.slice(startByte, startByte + chunkSize);
                    reader.readAsArrayBuffer(chunk);
                })(i); // Start the recursive call by self calling.
            });
        }

        mediaSource.addEventListener('sourceopen', callback, false);
        // mediaSource.addEventListener('webkitsourceopen', callback, false);
        //
        // mediaSource.addEventListener('webkitsourceended', function(e) {
        //     logger.log('mediaSource readyState: ' + this.readyState);
        // }, false);

        function GET(url, callback) {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', url, true);
            xhr.responseType = 'arraybuffer';
            xhr.send();
            xhr.onload = function(e) {
                if (xhr.status != 200) {
                    alert("Unexpected status code " + xhr.status + " for " + url);
                    return false;
                }
                callback(new Uint8Array(xhr.response));
            };
        }
    </script>
    <script>
        function Logger(id) {
            this.el = document.getElementById('log');
        }
        Logger.prototype.log = function(msg) {
            var fragment = document.createDocumentFragment();
            fragment.appendChild(document.createTextNode(msg));
            fragment.appendChild(document.createElement('br'));
            this.el.appendChild(fragment);
        };
        Logger.prototype.clear = function() {
            this.el.textContent = '';
        };
        var logger = new Logger('log');
    </script>
</body>
</html>
Another test page:
<!DOCTYPE html>
<html>
<head>
    <title>MediaSource API Demo</title>
</head>
<body>
    <h3>Appending .webm video chunks using the Media Source API</h3>
    <section>
        <video controls autoplay width="320" height="240"></video>
        <pre id="log"></pre>
    </section>

    <script>
        // ORIGINAL CODE http://html5-demos.appspot.com/static/media-source.html
        var FILE = 'IU_output2.webm';
        // var FILE = 'test_movie_output.webm';
        var NUM_CHUNKS = 10;

        var video = document.querySelector('video');
        var mediaSource = new MediaSource();
        video.src = window.URL.createObjectURL(mediaSource);

        function callback(e) {
            var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vorbis,vp8"');
            logger.log('mediaSource readyState: ' + this.readyState);
            GET(FILE, function(uInt8Array) {
                logger.log('byteLength:' + uInt8Array.byteLength);
                sourceBuffer.appendBuffer(uInt8Array);
            });
        }

        mediaSource.addEventListener('sourceopen', callback, false);
        // mediaSource.addEventListener('webkitsourceopen', callback, false);
        //
        // mediaSource.addEventListener('webkitsourceended', function(e) {
        //     logger.log('mediaSource readyState: ' + this.readyState);
        // }, false);

        function GET(url, callback) {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', url, true);
            xhr.responseType = 'arraybuffer';
            xhr.send();
            xhr.onload = function(e) {
                if (xhr.status != 200) {
                    alert("Unexpected status code " + xhr.status + " for " + url);
                    return false;
                }
                callback(new Uint8Array(xhr.response));
            };
        }
    </script>
    <script>
        function Logger(id) {
            this.el = document.getElementById('log');
        }
        Logger.prototype.log = function(msg) {
            var fragment = document.createDocumentFragment();
            fragment.appendChild(document.createTextNode(msg));
            fragment.appendChild(document.createElement('br'));
            this.el.appendChild(fragment);
        };
        Logger.prototype.clear = function() {
            this.el.textContent = '';
        };
        var logger = new Logger('log');
    </script>
</body>
</html>
Thanks.

Ask for microphone on onclick event

The other day I stumbled upon this example of a JavaScript audio recorder:
http://webaudiodemos.appspot.com/AudioRecorder/index.html
I ended up using it to implement my own. The problem I'm having is that in this file:
var audioContext = new webkitAudioContext();
var audioInput = null,
    realAudioInput = null,
    inputPoint = null,
    audioRecorder = null;
var rafID = null;
var analyserContext = null;
var canvasWidth, canvasHeight;
var recIndex = 0;

/* TODO:
   - offer mono option
   - "Monitor input" switch
*/

function saveAudio() {
    audioRecorder.exportWAV( doneEncoding );
}

function drawWave( buffers ) {
    var canvas = document.getElementById( "wavedisplay" );
    drawBuffer( canvas.width, canvas.height, canvas.getContext('2d'), buffers[0] );
}

function doneEncoding( blob ) {
    Recorder.forceDownload( blob, "myRecording" + ((recIndex<10)?"0":"") + recIndex + ".wav" );
    recIndex++;
}

function toggleRecording( e ) {
    if (e.classList.contains("recording")) {
        // stop recording
        audioRecorder.stop();
        e.classList.remove("recording");
        audioRecorder.getBuffers( drawWave );
    } else {
        // start recording
        if (!audioRecorder)
            return;
        e.classList.add("recording");
        audioRecorder.clear();
        audioRecorder.record();
    }
}

// this is a helper function to force mono for some interfaces that return a stereo channel for a mono source.
// it's not currently used, but probably will be in the future.
function convertToMono( input ) {
    var splitter = audioContext.createChannelSplitter(2);
    var merger = audioContext.createChannelMerger(2);
    input.connect( splitter );
    splitter.connect( merger, 0, 0 );
    splitter.connect( merger, 0, 1 );
    return merger;
}

function toggleMono() {
    if (audioInput != realAudioInput) {
        audioInput.disconnect();
        realAudioInput.disconnect();
        audioInput = realAudioInput;
    } else {
        realAudioInput.disconnect();
        audioInput = convertToMono( realAudioInput );
    }
    audioInput.connect(inputPoint);
}

function cancelAnalyserUpdates() {
    window.webkitCancelAnimationFrame( rafID );
    rafID = null;
}

function updateAnalysers(time) {
    if (!analyserContext) {
        var canvas = document.getElementById("analyser");
        canvasWidth = canvas.width;
        canvasHeight = canvas.height;
        analyserContext = canvas.getContext('2d');
    }

    // analyzer draw code here
    {
        var SPACING = 3;
        var BAR_WIDTH = 1;
        var numBars = Math.round(canvasWidth / SPACING);
        var freqByteData = new Uint8Array(analyserNode.frequencyBinCount);

        analyserNode.getByteFrequencyData(freqByteData);

        analyserContext.clearRect(0, 0, canvasWidth, canvasHeight);
        analyserContext.fillStyle = '#F6D565';
        analyserContext.lineCap = 'round';
        var multiplier = analyserNode.frequencyBinCount / numBars;

        // Draw rectangle for each frequency bin.
        for (var i = 0; i < numBars; ++i) {
            var magnitude = 0;
            var offset = Math.floor( i * multiplier );
            // gotta sum/average the block, or we miss narrow-bandwidth spikes
            for (var j = 0; j < multiplier; j++)
                magnitude += freqByteData[offset + j];
            magnitude = magnitude / multiplier;
            var magnitude2 = freqByteData[i * multiplier];
            analyserContext.fillStyle = "hsl( " + Math.round((i*360)/numBars) + ", 100%, 50%)";
            analyserContext.fillRect(i * SPACING, canvasHeight, BAR_WIDTH, -magnitude);
        }
    }

    rafID = window.webkitRequestAnimationFrame( updateAnalysers );
}

function gotStream(stream) {
    // "inputPoint" is the node to connect your output recording to.
    inputPoint = audioContext.createGainNode();

    // Create an AudioNode from the stream.
    realAudioInput = audioContext.createMediaStreamSource(stream);
    audioInput = realAudioInput;
    audioInput.connect(inputPoint);

    // audioInput = convertToMono( input );

    analyserNode = audioContext.createAnalyser();
    analyserNode.fftSize = 2048;
    inputPoint.connect( analyserNode );

    audioRecorder = new Recorder( inputPoint );

    zeroGain = audioContext.createGainNode();
    zeroGain.gain.value = 0.0;
    inputPoint.connect( zeroGain );
    zeroGain.connect( audioContext.destination );
    updateAnalysers();
}

function initAudio() {
    if (!navigator.webkitGetUserMedia)
        return(alert("Error: getUserMedia not supported!"));
    navigator.webkitGetUserMedia({audio:true}, gotStream, function(e) {
        alert('Error getting audio');
        console.log(e);
    });
}

window.addEventListener('load', initAudio );
As you might be able to see, the initAudio() function (the one which asks the user for permission to use his/her microphone) is called immediately when the page is loaded (see the last line) with this method:
window.addEventListener('load', initAudio );
Now, I have this code in the HTML:
<script type="text/javascript">
$(function() {
    $("#recbutton").on("click", function() {
        $("#entrance").hide();
        $("#live").fadeIn("slow");
        toggleRecording(this);
        $(this).toggle();
        return $("#stopbutton").toggle();
    });
    return $("#stopbutton").on("click", function() {
        audioRecorder.stop();
        $(this).toggle();
        $("#recbutton").toggle();
        $("#live").hide();
        return $("#entrance").fadeIn("slow");
    });
});
</script>
And as you can see, I call the toggleRecording(this) function (the one which starts the recording process) only after #recbutton is pressed. Now, everything works fine with this code, BUT the user gets prompted for microphone permission as soon as the page is loaded, and I want to ask them for permission to use the microphone ONLY AFTER they click #recbutton. Do you understand me? I thought that if I removed the last line of the first file:
window.addEventListener('load', initAudio );
and modify my embedded script like this:
<script type="text/javascript">
$(function() {
    $("#recbutton").on("click", function() {
        $("#entrance").hide();
        $("#live").fadeIn("slow");
        initAudio();
        toggleRecording(this);
        $(this).toggle();
        return $("#stopbutton").toggle();
    });
    return $("#stopbutton").on("click", function() {
        audioRecorder.stop();
        $(this).toggle();
        $("#recbutton").toggle();
        $("#live").hide();
        return $("#entrance").fadeIn("slow");
    });
});
</script>
I might be able to achieve what I wanted, and actually I am: the user doesn't get prompted for his/her microphone until they click #recbutton. The problem is, the audio never gets recorded; when you try to download it, the resulting WAV is empty.
How can I fix this?
My project's code is at: https://github.com/Jmlevick/html-recorder
No, your problem is that getUserMedia() has an asynchronous callback (gotStream()); you need to have the rest of the code logic in the start-button handler (the toggleRecording bit, in particular) inside that callback, because right now it's getting executed before getUserMedia returns (and sets up the audio nodes).
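A sketch of what that reordering could look like (the onReady callback parameter is my own addition, not part of the original demo code):
// Let initAudio() report back once the stream is actually ready,
// and only start recording at that point.
function initAudio(onReady) {
    navigator.webkitGetUserMedia({audio: true}, function(stream) {
        gotStream(stream); // sets up audioRecorder and the audio nodes
        if (onReady) onReady();
    }, function(e) {
        alert('Error getting audio');
        console.log(e);
    });
}

$("#recbutton").on("click", function() {
    var button = this;
    initAudio(function() {
        toggleRecording(button); // safe now: audioRecorder exists
    });
});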
I found an elegant & easy solution for this (or at least I see it that way):
What I did was toss "main.js" and "recorder.js" inside a getScript call that is executed only when a certain button (#button1) is clicked by the user... These scripts do not get loaded with the webpage itself until the button is pressed, but we need some more nifty tricks to make it work the way I described and wanted above:
in main.js, I changed:
window.addEventListener('load', initAudio );
to:
window.addEventListener('click', initAudio );
So when the scripts are loaded into the page with getScript, the "main.js" file now listens for a click event in the webpage to ask the user for the microphone. Next, I had to create a hidden button (#button2) on the page which is synthetically clicked by jQuery right after the scripts are loaded, so it triggers the "ask for microphone permission" event. Then, just below the line of code which generates the fake click, I added:
window.removeEventListener("click", initAudio, false);
so the "workflow" for this trick ends up as follows:
The user presses a button which loads the necessary JS files into the page with getScript; it's worth mentioning that the "main.js" file now listens for a click event on the window instead of a load one.
We have a hidden button which is "fakely clicked" by jQuery at the exact moment you click the first one, so it triggers the permission prompt for the user.
Once this event is triggered, the click event listener is removed from the window, so it never fires the "ask for permission" event again when the user clicks anywhere on the page.
And basically that's all, folks! :) Now when the user visits the page, he/she never gets asked for microphone permission until they click the "Rec" button, just as I wanted. With one click from the user we do three things in jQuery, but to the user it seems like nothing happened other than the "microphone permission message" appearing on the screen instantly after they click the "Rec" button.
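A rough sketch of the whole trick, using the #button1/#button2 ids from the description (the script paths are assumptions):
$("#button1").one("click", function() {
    $.getScript("recorder.js", function() {
        $.getScript("main.js", function() {
            // main.js now listens for a window click instead of load,
            // so fake one via the hidden button to trigger the prompt...
            $("#button2").click();
            // ...after which main.js removes its own listener with
            // window.removeEventListener("click", initAudio, false);
        });
    });
});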

Custom CSS / meta tag for iPad/iPhone

I am working on a web app that uses Extjs components, PHP, and MySQL.
I want to correctly display my apps on iPad. Are there special CSS rules or meta tags?
Your question is fairly vague. Here are some tips for developing a web application on iOS:
For fixed-width sites, use a <meta> tag to tell mobile Safari what the width of your site should be, similar to:
<meta name="viewport" content="width=320, initial-scale=2.3, user-scalable=no">
You can get a list of other <meta> tags supported by mobile Safari here.
Mobile Safari adds new events to the JavaScript DOM in order to support touch and orientation change. Here is the Apple reference to them.
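For instance, a quick illustrative sketch of listening for those events (see Apple's reference for the full APIs):
// Orientation changes report 0, 90, -90 or 180 via window.orientation on iOS.
window.addEventListener('orientationchange', function() {
    console.log('orientation is now: ' + window.orientation);
}, false);

// Touch events carry a list of active touches instead of a single point.
document.addEventListener('touchstart', function(e) {
    var touch = e.touches[0];
    console.log('touch started at ' + touch.pageX + ', ' + touch.pageY);
}, false);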
Here's a useful overview of how to make a web app suitable for use on iPad.
Finally, try a Google search.
I haven't gotten a chance to test it yet, but I wrote this script to fire the contextmenu event on an element after a long press of 1.5 seconds or more. Try it out.
UPDATE: I finally got a chance to test it, and it works as intended. I lowered the delay from 1500 ms to 1200 ms since the longer delay seemed too long for my taste.
(function() {
    var EM = Ext.EventManager,
        body = document.body,
        activeTouches = {},
        onTouchStart = function(e, t) {
            var be = e.browserEvent;
            Ext.id(t);
            if (be.touches.length === 1) {
                activeTouches[t.id] = fireContextMenu.defer(1200, null, [e, t]);
            } else {
                cancelContextMenu(e, t);
            }
        },
        fireContextMenu = function(e, t) {
            var touch = e.browserEvent.touches[0];
            var me = document.createEvent("MouseEvents");
            me.initMouseEvent("contextmenu", true, true, window,
                1, // detail
                touch.screenX,
                touch.screenY,
                touch.clientX,
                touch.clientY,
                false, false, false, false, // key modifiers
                2, // button
                null // relatedTarget
            );
            t.dispatchEvent(me);
        },
        cancelContextMenu = function(e, t) {
            clearTimeout(activeTouches[t.id]);
        };

    if (navigator.userAgent.match(/iPad/i) != null) {
        Ext.onReady(function() {
            EM.on(body, "touchstart", onTouchStart);
            EM.on(body, "touchmove", cancelContextMenu);
            EM.on(body, "touchend", cancelContextMenu);
        });
    }
})();
