Intercept calls to HTML5 canvas element - javascript

I have a web application that renders its entire user interface in an HTML5 canvas.
Note that I can't change the current application.
Currently, this application is being tested using Selenium.
This is done by simulating a click event at a given location in the browser window.
After the click has been executed, a sleep of 2 seconds is being performed to ensure that the entire UI is ready before moving to the next step.
Due to all the 'wait' statements, testing the application is very slow.
Therefore, I thought it might be a good idea to intercept all calls to the HTML5 canvas.
That way I can rely on the triggered events to know if the UI is ready to move to the next step.
Assume that I have the following code in my application that renders the canvas.
var canvas = document.getElementById("canvasElement");
var ctx = canvas.getContext("2d");
ctx.fillStyle = "green";
ctx.fillRect(10, 10, 100, 100);
Is there a way to intercept the 'fillRect' call?
I thought of something along these lines:
var canvasProxy = document.getElementById("canvasElement");
canvasProxy.addEventListener("getContext", function(event) {
    console.log("Hello");
});
var canvas = document.getElementById("canvasElement");
var ctx = canvas.getContext("2d");
ctx.fillStyle = "green";
ctx.fillRect(10, 10, 100, 100);
Unfortunately, this is not working.
I've created a JSFiddle to play with the example:
https://jsfiddle.net/5cknym74/4/
Any thoughts?

I played around a bit with the JS API and it seems that the following might work:
// SECTION: Store a reference to all the HTML5 'canvas' element methods.
HTMLCanvasElement.prototype._captureStream = HTMLCanvasElement.prototype.captureStream;
HTMLCanvasElement.prototype._getContext = HTMLCanvasElement.prototype.getContext;
HTMLCanvasElement.prototype._toDataURL = HTMLCanvasElement.prototype.toDataURL;
HTMLCanvasElement.prototype._toBlob = HTMLCanvasElement.prototype.toBlob;
HTMLCanvasElement.prototype._transferControlToOffscreen = HTMLCanvasElement.prototype.transferControlToOffscreen;
HTMLCanvasElement.prototype._mozGetAsFile = HTMLCanvasElement.prototype.mozGetAsFile;
// SECTION: Patch the HTML5 'canvas' element methods.
HTMLCanvasElement.prototype.captureStream = function(frameRate) {
    console.log('INTERCEPTING: HTMLCanvasElement.prototype.captureStream');
    return this._captureStream(frameRate);
};

HTMLCanvasElement.prototype.getContext = function(contextType, contextAttributes) {
    console.log('INTERCEPTING: HTMLCanvasElement.prototype.getContext');
    console.log('PROPERTIES:');
    console.log('  contextType: ' + contextType);
    return this._getContext(contextType, contextAttributes);
};

HTMLCanvasElement.prototype.toDataURL = function(type, encoderOptions) {
    console.log('INTERCEPTING: HTMLCanvasElement.prototype.toDataURL');
    return this._toDataURL(type, encoderOptions);
};

HTMLCanvasElement.prototype.toBlob = function(callback, mimeType, qualityArgument) {
    console.log('INTERCEPTING: HTMLCanvasElement.prototype.toBlob');
    return this._toBlob(callback, mimeType, qualityArgument);
};

HTMLCanvasElement.prototype.transferControlToOffscreen = function() {
    console.log('INTERCEPTING: HTMLCanvasElement.prototype.transferControlToOffscreen');
    return this._transferControlToOffscreen();
};

HTMLCanvasElement.prototype.mozGetAsFile = function(name, type) {
    console.log('INTERCEPTING: HTMLCanvasElement.prototype.mozGetAsFile');
    return this._mozGetAsFile(name, type);
};
Now that I can intercept the calls, I can find out which calls are responsible for drawing a button and react accordingly.
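For example, here is a minimal sketch of that idea (my own illustration, not the app's actual draw calls): patch a few CanvasRenderingContext2D drawing methods once, record a timestamp per draw, and let the test poll window.__lastCanvasDraw (e.g. via Selenium's execute_script) until the canvas has been quiet for a while, instead of sleeping a fixed 2 seconds.
// A sketch, assuming it is injected before the application code runs:
// wrap a few CanvasRenderingContext2D drawing methods and record a
// timestamp on every draw call.
(function () {
    window.__lastCanvasDraw = 0;
    ['fillRect', 'strokeRect', 'drawImage', 'fillText'].forEach(function (name) {
        var original = CanvasRenderingContext2D.prototype[name];
        CanvasRenderingContext2D.prototype[name] = function () {
            window.__lastCanvasDraw = Date.now();
            return original.apply(this, arguments);
        };
    });
})();
// A test can then treat "no draw calls for, say, 200 ms" as "UI idle"
// by polling window.__lastCanvasDraw instead of sleeping.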

Related

Aframe Dynamic Canvas as Texture

I was trying to use a canvas as a texture in my A-Frame project. I found some instructions here. They mentioned:
The texture will automatically refresh itself as the canvas changes.
However, when I gave it a try today, the canvas could only be changed/updated in the init function; later updates to the canvas were not reflected. Here is my implementation:
module.exports = {
    'canvas_component': {
        schema: {
            canvasId: { type: 'string' }
        },
        init: function () {
            this.canvas = document.getElementById(this.data.canvasId);
            this.ctx = this.canvas.getContext('2d');
            this.ctx.fillStyle = "#FF0000";
            this.ctx.fillRect(20, 20, 150, 100);
            setTimeout(() => {
                this.ctx.fillStyle = "#FFFF00";
                this.ctx.fillRect(20, 20, 150, 100);
            }, 2000);
        }
    }
};
The color of the texture never changed. Is there anything I missed? Thank you so much for any advice.
I could never get it to work with those instructions (though I never checked whether that was a bug or improper use), but you can achieve the same thing with Three.js:
// assuming this is inside an aframe component
init: function() {
    // we'll update this manually
    this.texture = null;
    let canvas = document.getElementById("source-canvas");
    // wait until the element is ready
    this.el.addEventListener('loaded', e => {
        // create the texture
        this.texture = new THREE.CanvasTexture(canvas);
        // get the references necessary to swap the texture
        let mesh = this.el.getObject3D('mesh');
        mesh.material.map = this.texture;
        // if there was a map before, you should dispose of it
    });
},
tick: function() {
    // if the texture is created - update it
    if (this.texture) this.texture.needsUpdate = true;
}
Check it out in this glitch.
Instead of using the tick function, you could update the texture whenever you get a callback from changing the canvas (mouse events, source changes).
The docs are out of date; I've made a pull request to update them. Here is the code that shows how to do it now:
src: https://github.com/aframevr/aframe/issues/4878
which points to: https://github.com/aframevr/aframe/blob/b164623dfa0d2548158f4b7da06157497cd4ea29/examples/test/canvas-texture/components/canvas-updater.js
We can quickly turn that into a component like this, for example:
/* global AFRAME */
AFRAME.registerComponent('live-canvas', {
    dependencies: ['geometry', 'material'],
    schema: {
        src: { type: "string", default: "#id" }
    },
    init() {
        if (!document.querySelector(this.data.src)) {
            console.error("no such canvas");
            return;
        }
        this.el.setAttribute('material', { src: this.data.src });
    },
    tick() {
        var el = this.el;
        var material;

        material = el.getObject3D('mesh').material;
        if (!material.map) {
            console.error("no material map");
            this.el.removeAttribute('live-canvas');
            return;
        }
        material.map.needsUpdate = true;
    }
});
(remember to declare your components before your scene...)
Usage:
<body>
    <canvas id="example-canvas"></canvas>
    <a-scene>
        <a-box live-canvas="src:#example-canvas;"></a-box>
    </a-scene>
</body>
live glitch code demo here:
https://glitch.com/edit/#!/live-canvas-demo?path=index.html%3A58%3A43
You can of course be more efficient than a tick handler by running the equivalent code manually whenever you update the canvas yourself, if that makes more sense or the drawing isn't happening frame-by-frame.
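For example, a minimal sketch of that manual approach (drawOnCanvas is a placeholder name of mine, reusing the material.map reference from the component above):
// A sketch: flag the texture once per actual change instead of every tick.
function drawOnCanvas(el, ctx) {
    ctx.fillStyle = "#FFFF00";
    ctx.fillRect(20, 20, 150, 100);
    var material = el.getObject3D('mesh').material;
    if (material.map) material.map.needsUpdate = true; // refresh on demand
}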

Firefox drawing blank Canvas when restoring from saved base64

I am working with a single canvas that allows the user to click on a window pane in a window image. The idea is to show where the user has clicked. The image will then be modified (by drawing a grill on the window) and saved as a JPEG. I save the canvas image prior to the click handler because I don't want the selection box to show in the final image. However, Firefox often displays a blank canvas when restoring the canvas, where IE and Chrome do not; it works perfectly in those browsers. Any suggestions? Does Firefox have a problem with toDataURL()? Maybe some async issue going on here? I am also aware that saving a canvas in this fashion is memory intensive and there may be a better way to do this, but I'm working with what I have.
Code:
/**
 * Restores canvas from drawingView.canvasRestorePoint if there are any restores saved
 */
restoreCanvas: function()
{
    var inverseScale = (1 / drawingView.scaleFactor);
    var canvas = document.getElementById("drawPop.canvasOne");
    var c = canvas.getContext("2d");
    if (drawingView.canvasRestorePoint[0] != null)
    {
        c.clearRect(0, 0, canvas.width, canvas.height);
        var img = new Image();
        img.src = drawingView.canvasRestorePoint.pop();
        c.scale(inverseScale, inverseScale);
        c.drawImage(img, 0, 0);
        c.scale(drawingView.scaleFactor, drawingView.scaleFactor);
    }
},

/**
 * Pushes canvas into drawingView.canvasRestorePoint
 */
saveCanvas: function()
{
    var canvas = document.getElementById("drawPop.canvasOne");
    var urlData = canvas.toDataURL();
    drawingView.canvasRestorePoint.push(urlData);
},

EXAMPLE OF USE:

readGrillInputs: function()
{
    var glassNum = ir.get("drawPop.grillGlassNum").value;
    var panelNum = ir.get("drawPop.grillPanelNum").value;
    drawingView.restoreCanvas();
    drawEngine.drawGrill(glassNum, panelNum, null);
    drawingView.saveCanvas();
},

sortClick: function(event)
{
    ..... // Sorts where user has clicked and generates panel/glass num
    .....
    drawingView.showClick(panelNum, glassNum);
},

showClick: function(panelNum, glassNum)
{
    var glass = item.panels[panelNum].glasses[glassNum];
    var c = drawEngine.context;
    drawingView.restoreCanvas();
    drawingView.saveCanvas();
    c.strokeStyle = "red";
    c.strokeRect(glass.x, glass.y, glass.w, glass.h);
},
Just looking at the code: setting img.src starts an asynchronous retrieval of the image, so when you try to draw it to the canvas two lines later, it probably hasn't loaded yet (if it's in cache, it may return fast enough that it appears to work).
You should instead use an img.onload handler to draw the image once it has loaded.
restoreCanvas: function()
{
    var inverseScale = (1 / drawingView.scaleFactor);
    var canvas = document.getElementById("drawPop.canvasOne");
    var c = canvas.getContext("2d");
    if (drawingView.canvasRestorePoint[0] != null)
    {
        c.clearRect(0, 0, canvas.width, canvas.height);
        var img = new Image();
        img.onload = function() {
            c.scale(inverseScale, inverseScale);
            c.drawImage(img, 0, 0);
            c.scale(drawingView.scaleFactor, drawingView.scaleFactor);
        };
        img.src = drawingView.canvasRestorePoint.pop();
    }
},
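As a side note (my own suggestion, not part of the original answer): since the data-URL round trip is asynchronous and memory hungry, you could snapshot the raw pixels with getImageData instead; putImageData restores synchronously, so the onload race disappears entirely.
// A sketch of an alternative, assuming the same drawingView structure:
saveCanvas: function()
{
    var canvas = document.getElementById("drawPop.canvasOne");
    var c = canvas.getContext("2d");
    // store raw pixels instead of a base64 data URL
    drawingView.canvasRestorePoint.push(c.getImageData(0, 0, canvas.width, canvas.height));
},

restoreCanvas: function()
{
    var canvas = document.getElementById("drawPop.canvasOne");
    var c = canvas.getContext("2d");
    if (drawingView.canvasRestorePoint[0] != null)
    {
        // putImageData is synchronous and unaffected by the current
        // transform, so no scaling dance or onload handler is needed
        c.putImageData(drawingView.canvasRestorePoint.pop(), 0, 0);
    }
},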

Using TogetherJS Extensibility to Sync Changes to a Div

My goal is to add some code around TogetherJS to enable the synchronization (between TogetherJS users) of changes that are being made to a contenteditable div.
My question is how I could do this for a div, which seems like much easier functionality to implement, but I can't currently wrap my head around it.
TogetherJS developers provided an example of how to do this for drawing on a canvas:
<canvas id="sketch"
style="height: 400px; width: 400px; border: 1px solid #000">
</canvas>
// get the canvas element and its context
var canvas = document.querySelector('#sketch');
var context = canvas.getContext('2d');
// brush settings
context.lineWidth = 2;
context.lineJoin = 'round';
context.lineCap = 'round';
context.strokeStyle = '#000';
We’ll use mousedown and mouseup events on the canvas to register our move() handler for the mousemove event:
var lastMouse = {
    x: 0,
    y: 0
};

// attach the mousedown, mousemove, mouseup event listeners.
canvas.addEventListener('mousedown', function (e) {
    lastMouse = {
        x: e.pageX - this.offsetLeft,
        y: e.pageY - this.offsetTop
    };
    canvas.addEventListener('mousemove', move, false);
}, false);

canvas.addEventListener('mouseup', function () {
    canvas.removeEventListener('mousemove', move, false);
}, false);
And then the move() function will figure out the line that needs to be drawn:
function move(e) {
    var mouse = {
        x: e.pageX - this.offsetLeft,
        y: e.pageY - this.offsetTop
    };
    draw(lastMouse, mouse);
    lastMouse = mouse;
}
And lastly a function to draw lines:
function draw(start, end) {
    context.beginPath();
    context.moveTo(start.x, start.y);
    context.lineTo(end.x, end.y);
    context.closePath();
    context.stroke();
}
This is enough code to give us a very simple drawing application. TogetherJS has a "hub" that echoes messages between everyone in the session. It doesn't interpret messages, and everyone's messages travel back and forth, including messages from a person who might be on another page. TogetherJS also lets the application send its own messages like:
TogetherJS.send({
    type: "message-type",
    ...any other attributes you want to send...
})
to send a message (every message must have a type), and to listen:
TogetherJS.hub.on("message-type", function (msg) {
    if (! msg.sameUrl) {
        // Usually you'll test for this to discard messages that came
        // from a user at a different page
        return;
    }
});
The message types are namespaced so that your application messages won’t accidentally overlap with TogetherJS’s own messages.
To synchronize drawing we’d want to watch for any lines being drawn and send those to the other peers:
function move(e) {
    var mouse = {
        x: e.pageX - this.offsetLeft,
        y: e.pageY - this.offsetTop
    };
    draw(lastMouse, mouse);
    if (TogetherJS.running) {
        TogetherJS.send({type: "draw", start: lastMouse, end: mouse});
    }
    lastMouse = mouse;
}
Before we send we check that TogetherJS is actually running (TogetherJS.running). The message we send should be self-explanatory.
Next we have to listen for the messages:
TogetherJS.hub.on("draw", function (msg) {
    if (! msg.sameUrl) {
        return;
    }
    draw(msg.start, msg.end);
});
We don't have to worry about whether TogetherJS is running when we register this listener; it can only be called when TogetherJS is running.
This is enough to make our drawing live and collaborative. But there's one thing we're missing: if I start drawing an image and you join me, you'll only see the new lines I draw; you won't see the image I've already drawn.
To handle this we’ll listen for the togetherjs.hello message, which is the message each client sends when it first arrives at a new page. When we see that message we’ll send the other person an image of our canvas:
TogetherJS.hub.on("togetherjs.hello", function (msg) {
    if (! msg.sameUrl) {
        return;
    }
    var image = canvas.toDataURL("image/png");
    TogetherJS.send({
        type: "init",
        image: image
    });
});
Now we just have to listen for this new init message:
TogetherJS.hub.on("init", function (msg) {
    if (! msg.sameUrl) {
        return;
    }
    var image = new Image();
    image.onload = function () {
        // draw only once the data URL has actually loaded
        context.drawImage(image, 0, 0);
    };
    image.src = msg.image;
});
This worked surprisingly well for me - awesome performance (although this is for an intranet site).
For beginners (like me) extending TogetherJS to their own apps: the "type" can be set to anything. It helps distinguish the function of this particular message/action pair from others, and it is required because it is basically the title of the message. For "output", you can also use any name (or have more than one); it stores the data to be sent with the message.
The first section of code sends the message.
The second section of code listens for the message from other TogetherJS users on the same shared URL. The naming conventions between the "send" and "listen" events/functions must match (e.g., text-send).
Here is my solution:
$('#SourceText').keyup(function (event) {
    // grab text for sending as a message to collaborate
    var sharedtext = $('#SourceText').html();
    //alert(sharedtext)
    if (TogetherJS.running) {
        TogetherJS.send({
            type: "text-send",
            output: sharedtext
        });
        console.log(sharedtext);
    }
});

TogetherJS.hub.on("text-send", function (msg) {
    if (! msg.sameUrl) {
        return;
    }
    $('#SourceText').html(msg.output);
    console.log(msg.output);
});
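One refinement you might consider (my own addition, not part of the original solution): keyup fires on every keystroke, so fast typing floods the hub with messages; a small debounce coalesces them.
// A sketch assuming the same #SourceText element as above.
var sendTimer = null;
$('#SourceText').keyup(function () {
    clearTimeout(sendTimer);
    sendTimer = setTimeout(function () {
        if (TogetherJS.running) {
            TogetherJS.send({
                type: "text-send",
                output: $('#SourceText').html()
            });
        }
    }, 200); // send once the user pauses for 200 ms
});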

Ask for microphone on onclick event

The other day I stumbled upon this example of a JavaScript audio recorder:
http://webaudiodemos.appspot.com/AudioRecorder/index.html
I ended up using it to implement my own. The problem I'm having is that in this file:
var audioContext = new webkitAudioContext();
var audioInput = null,
    realAudioInput = null,
    inputPoint = null,
    audioRecorder = null;
var rafID = null;
var analyserContext = null;
var canvasWidth, canvasHeight;
var recIndex = 0;

/* TODO:
- offer mono option
- "Monitor input" switch
*/

function saveAudio() {
    audioRecorder.exportWAV( doneEncoding );
}

function drawWave( buffers ) {
    var canvas = document.getElementById( "wavedisplay" );
    drawBuffer( canvas.width, canvas.height, canvas.getContext('2d'), buffers[0] );
}

function doneEncoding( blob ) {
    Recorder.forceDownload( blob, "myRecording" + ((recIndex<10)?"0":"") + recIndex + ".wav" );
    recIndex++;
}

function toggleRecording( e ) {
    if (e.classList.contains("recording")) {
        // stop recording
        audioRecorder.stop();
        e.classList.remove("recording");
        audioRecorder.getBuffers( drawWave );
    } else {
        // start recording
        if (!audioRecorder)
            return;
        e.classList.add("recording");
        audioRecorder.clear();
        audioRecorder.record();
    }
}

// this is a helper function to force mono for some interfaces that return a stereo channel for a mono source.
// it's not currently used, but probably will be in the future.
function convertToMono( input ) {
    var splitter = audioContext.createChannelSplitter(2);
    var merger = audioContext.createChannelMerger(2);

    input.connect( splitter );
    splitter.connect( merger, 0, 0 );
    splitter.connect( merger, 0, 1 );
    return merger;
}

function toggleMono() {
    if (audioInput != realAudioInput) {
        audioInput.disconnect();
        realAudioInput.disconnect();
        audioInput = realAudioInput;
    } else {
        realAudioInput.disconnect();
        audioInput = convertToMono( realAudioInput );
    }
    audioInput.connect(inputPoint);
}

function cancelAnalyserUpdates() {
    window.webkitCancelAnimationFrame( rafID );
    rafID = null;
}

function updateAnalysers(time) {
    if (!analyserContext) {
        var canvas = document.getElementById("analyser");
        canvasWidth = canvas.width;
        canvasHeight = canvas.height;
        analyserContext = canvas.getContext('2d');
    }

    // analyzer draw code here
    {
        var SPACING = 3;
        var BAR_WIDTH = 1;
        var numBars = Math.round(canvasWidth / SPACING);
        var freqByteData = new Uint8Array(analyserNode.frequencyBinCount);

        analyserNode.getByteFrequencyData(freqByteData);

        analyserContext.clearRect(0, 0, canvasWidth, canvasHeight);
        analyserContext.fillStyle = '#F6D565';
        analyserContext.lineCap = 'round';
        var multiplier = analyserNode.frequencyBinCount / numBars;

        // Draw rectangle for each frequency bin.
        for (var i = 0; i < numBars; ++i) {
            var magnitude = 0;
            var offset = Math.floor( i * multiplier );
            // gotta sum/average the block, or we miss narrow-bandwidth spikes
            for (var j = 0; j < multiplier; j++)
                magnitude += freqByteData[offset + j];
            magnitude = magnitude / multiplier;
            var magnitude2 = freqByteData[i * multiplier];
            analyserContext.fillStyle = "hsl( " + Math.round((i*360)/numBars) + ", 100%, 50%)";
            analyserContext.fillRect(i * SPACING, canvasHeight, BAR_WIDTH, -magnitude);
        }
    }

    rafID = window.webkitRequestAnimationFrame( updateAnalysers );
}

function gotStream(stream) {
    // "inputPoint" is the node to connect your output recording to.
    inputPoint = audioContext.createGainNode();

    // Create an AudioNode from the stream.
    realAudioInput = audioContext.createMediaStreamSource(stream);
    audioInput = realAudioInput;
    audioInput.connect(inputPoint);

    // audioInput = convertToMono( input );

    analyserNode = audioContext.createAnalyser();
    analyserNode.fftSize = 2048;
    inputPoint.connect( analyserNode );

    audioRecorder = new Recorder( inputPoint );

    zeroGain = audioContext.createGainNode();
    zeroGain.gain.value = 0.0;
    inputPoint.connect( zeroGain );
    zeroGain.connect( audioContext.destination );
    updateAnalysers();
}

function initAudio() {
    if (!navigator.webkitGetUserMedia)
        return(alert("Error: getUserMedia not supported!"));
    navigator.webkitGetUserMedia({audio:true}, gotStream, function(e) {
        alert('Error getting audio');
        console.log(e);
    });
}

window.addEventListener('load', initAudio );
As you can see, the initAudio() function (the one which asks the user for permission to use the microphone) is called immediately when the page loads (see the last line):
window.addEventListener('load', initAudio );
Now, I have this code in the HTML:
<script type="text/javascript">
$(function() {
    $("#recbutton").on("click", function() {
        $("#entrance").hide();
        $("#live").fadeIn("slow");
        toggleRecording(this);
        $(this).toggle();
        return $("#stopbutton").toggle();
    });
    return $("#stopbutton").on("click", function() {
        audioRecorder.stop();
        $(this).toggle();
        $("#recbutton").toggle();
        $("#live").hide();
        return $("#entrance").fadeIn("slow");
    });
});
</script>
And as you can see, I call the toggleRecording(this) function (the one which starts the recording process) only after the #recbutton is pressed. Now, everything works fine with this code, BUT the user gets prompted for microphone permission as soon as the page is loaded, and I want to ask them for permission to use the microphone ONLY AFTER they click the #recbutton. Do you understand me? I thought that if I removed the last line of the first file:
window.addEventListener('load', initAudio );
and modify my embedded script like this:
<script type="text/javascript">
$(function() {
    $("#recbutton").on("click", function() {
        $("#entrance").hide();
        $("#live").fadeIn("slow");
        initAudio();
        toggleRecording(this);
        $(this).toggle();
        return $("#stopbutton").toggle();
    });
    return $("#stopbutton").on("click", function() {
        audioRecorder.stop();
        $(this).toggle();
        $("#recbutton").toggle();
        $("#live").hide();
        return $("#entrance").fadeIn("slow");
    });
});
</script>
I might be able to achieve what I wanted, and in part I did: the user doesn't get prompted for the microphone until they click the #recbutton. The problem is, the audio never gets recorded; when you try to download it, the resulting WAV is empty.
How can I fix this?
My project's code is at: https://github.com/Jmlevick/html-recorder
No, your problem is that getUserMedia() has an asynchronous callback (gotStream()); you need to put the rest of the logic from the #recbutton handler (the toggleRecording bit, in particular) inside that callback, because right now it executes before getUserMedia returns (and sets up the audio nodes).
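A minimal sketch of that fix (the onReady parameter is my own addition, not part of the original demo): let initAudio accept a callback, invoke it once gotStream has built the audio graph, and start recording from there.
// A sketch, based on the demo code above: pass a callback through
// initAudio so recording only starts once the recorder actually exists.
function initAudio(onReady) {
    if (!navigator.webkitGetUserMedia)
        return alert("Error: getUserMedia not supported!");
    navigator.webkitGetUserMedia({audio: true}, function (stream) {
        gotStream(stream);      // builds inputPoint, audioRecorder, etc.
        if (onReady) onReady(); // now it is safe to record
    }, function (e) {
        alert('Error getting audio');
        console.log(e);
    });
}

// In the click handler:
$("#recbutton").on("click", function () {
    var btn = this;
    initAudio(function () {
        toggleRecording(btn); // runs only after the recorder is ready
    });
});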
I found an elegant & easy solution for this (or at least I see it that way):
What I did was toss "main.js" and "recorder.js" inside a getScript call that is executed only when a certain button (#button1) is clicked by the user... These scripts are not loaded with the webpage itself until the button is pressed, but we need a few more tricks to make it work the way I described above:
In main.js, I changed:
window.addEventListener('load', initAudio );
to:
window.addEventListener('click', initAudio );
So when the scripts are loaded into the page with getScript, "main.js" now listens for a click event on the page to ask the user for the microphone. Next, I had to create a hidden button (#button2) on the page which jQuery clicks programmatically right after the scripts are loaded, so it triggers the "ask for microphone permission" event; then, just below the line of code which generates the fake click, I added:
window.removeEventListener("click", initAudio, false);
so the "workflow" for this trick ends up as follows:
The user presses a button which loads the necessary JS files into the page with getScript; it's worth mentioning that "main.js" now listens for a click event on the window instead of a load event.
A hidden button is "fake-clicked" by jQuery the moment you click the first one, which triggers the permission prompt for the user.
Once this event is triggered, the click event listener is removed from the window, so it never fires the "ask for permission" event again when the user clicks anywhere on the page.
And basically that's all, folks! :) Now the user is never asked for microphone permission until they click the "Rec" button, just as I wanted. With one click we do three things in jQuery, but to the user it looks like nothing happened other than the "microphone permission" message appearing instantly after they click the "Rec" button.
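Put together, the trick might look roughly like this (a sketch of the workflow described above; #button1 is the visible "Rec" button, #button2 the hidden helper, and the script paths are placeholders):
// Load the scripts on demand, then fire the fake click once.
$("#button1").on("click", function () {
    $.getScript("recorder.js", function () {
        $.getScript("main.js", function () {
            // main.js has now run: window.addEventListener('click', initAudio)
            $("#button2").trigger("click"); // triggers the permission prompt
            window.removeEventListener("click", initAudio, false);
        });
    });
});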

touchmove drawing two lines instead of one on canvas

I am making a PhoneGap application with the jQuery Mobile UI framework. I need a page where users will be able to draw stuff on the screen. I used this for reference and it works great in the Ripple Emulator. However, on my actual device, a Nexus 4, instead of one line per touchmove, I get two lines. Is there something wrong with what I am doing?
EDIT: I found a similar problem reported on GitHub. It seems to be a problem with Android's browser. The two lines were due to overlapping canvas elements. The only solution is to keep the canvas size under 256px. Here's the link:
https://github.com/jquery/jquery-mobile/issues/5107
Here's my code
// start canvas code
var canvas = null;    // canvas object
var context = null;   // canvas's context object
var clearBtn = null;  // clear button object
var buttonDown = false;

function captureDraw() {
    canvas = document.getElementById('canvas');
    clearBtn = document.getElementById('clearBtn');
    setCanvasDimension();
    initializeEvents();
    context = canvas.getContext('2d');
}

function setCanvasDimension() {
    //canvas.width = 300;
    // canvas.width = window.innerWidth;
    // canvas.height = window.innerHeight; //setting the height of the canvas
}

function initializeEvents() {
    canvas.addEventListener('touchstart', startPaint, false);
    canvas.addEventListener('touchmove', continuePaint, false);
    canvas.addEventListener('touchend', stopPaint, false);
    clearBtn.addEventListener('touchend', clearCanvas, false);
}

function clearCanvas() {
    context.clearRect(0, 0, canvas.width, canvas.height);
}

function startPaint(evt) {
    if (!buttonDown)
    {
        context.beginPath();
        context.moveTo(evt.touches[0].pageX, evt.touches[0].pageY);
        buttonDown = true;
    }
    evt.preventDefault();
}

function continuePaint(evt) {
    if (buttonDown)
    {
        context.lineTo(evt.touches[0].pageX, evt.touches[0].pageY);
        context.stroke();
    }
}

function stopPaint() {
    buttonDown = false;
}
// end canvas code
Thanks!
Not an actual answer, but I found out that this is a known bug since Android 4.1.1. There have been many suggested fixes, like applying overflow-x: visible to the parent div of the canvas element, but that didn't work for me. See https://code.google.com/p/android/issues/detail?id=35474 for more information.
The other solution is keeping your canvas size below 256px. This is certainly a weird bug!
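If you go the size-capping route, the empty setCanvasDimension() stub from the question could clamp the canvas below that threshold, for example (a sketch; 255 is simply the largest size under the reported 256px limit):
// A sketch of the 256px workaround, filling in the setCanvasDimension
// stub; "canvas" is the global assigned in captureDraw().
function setCanvasDimension() {
    canvas.width = Math.min(window.innerWidth, 255);
    canvas.height = Math.min(window.innerHeight, 255);
}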
