Send video stream from iOS7 camera to JavaScript Canvas in web view - javascript

First a word of caution: this question is not suitable for the faint of heart. It is an interesting challenge that I have encountered recently. See if you can solve it or help in any way to get closer to an answer, at your own risk.
Here is the problem: Create an iOS application with a standard UIWebView inside. Obtain the camera stream from either camera. Send each frame in a format that can be rendered into an HTML5 canvas. Make this happen efficiently so that the video stream can be displayed at 720p 30fps or higher on an iOS 7 device.
So far I have not found any solution that looks promising. In fact, I started with the solution that looked most ridiculous, which is encoding each frame as a base64 image string and passing it to the web view via stringByEvaluatingJavaScriptFromString. Here is the method that does the JPEG encoding:
- (NSString *)encodeToBase64StringJPEG:(UIImage *)image {
return [UIImageJPEGRepresentation(image, 0.7) base64EncodedStringWithOptions:NSDataBase64Encoding64CharacterLineLength];
}
Inside the viewDidLoad I create and configure the capture session
_output = [[AVCaptureVideoDataOutput alloc] init];
// create a queue to run the capture on
dispatch_queue_t captureQueue=dispatch_queue_create("captureQueue", NULL);
// setup output delegate
[_output setSampleBufferDelegate:self queue:captureQueue];
// configure the pixel format (could this be the problem? Is this suitable for JPEG?)
_output.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil];
[_session addOutput:_output];
[_session startRunning];
The frames are captured and converted to UIImage first.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    _image = imageFromSampleBuffer(sampleBuffer);
    // here comes the ridiculous part. Attempt to encode the whole image and send it to JS land:
    _base64imgCmd = [NSString stringWithFormat:@"draw('data:image/jpeg;base64,%@');", [self encodeToBase64StringJPEG:_image]];
    [self.webView stringByEvaluatingJavaScriptFromString:_base64imgCmd];
}
Guess what, it did not work. Xcode is showing me this error:
DemoNativeToJS[3983:1803] bool _WebTryThreadLock(bool), 0x15e9d460: Tried to obtain the web lock from a thread other than the main thread or the web thread. This may be a result of calling to UIKit from a secondary thread. Crashing now...
1 0x355435d3 WebThreadLock
2 0x3013e2f9 <redacted>
3 0x7c3a1 -[igViewController captureOutput:didOutputSampleBuffer:fromConnection:]
4 0x2c5bbe79 <redacted>
Is this error because the web view is running out of its memory quota? I should note that there is a big spike in the app's memory usage just before the crash. It crashes anywhere between 14 MB and 20+ MB, depending on what quality level is set for the JPEG encoding.
I do not want to render the camera stream natively -- that won't be an interesting problem at all. I want to pass the video feed to JavaScript land and draw it inside the canvas.
For your convenience I have a minimal demo project (Xcode) on GitHub that you can use to get a quick head start:
git clone https://github.com/arasbm/DemoNativeToJS.git
Please let me know if you have saner ideas for passing the data through instead of using stringByEvaluatingJavaScriptFromString. Other ideas or suggestions are welcome in the comments, but I would expect an answer to demonstrate with some code which path will work.

The crash you are experiencing is due to UIWebView (a UIKit class) being called from a background thread. The easiest way to prevent this from happening is to force stringByEvaluatingJavaScriptFromString, which is the call into UIKit, to run on the main thread. You can do this by changing
[self.webView stringByEvaluatingJavaScriptFromString:_base64imgCmd];
to this:
[self.webView performSelectorOnMainThread:@selector(stringByEvaluatingJavaScriptFromString:) withObject:_base64imgCmd waitUntilDone:NO];
This makes the call to UIKit from the main thread, which is safe.
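For reference, the draw() function that the native code calls could look roughly like this on the JavaScript side (just a sketch; the canvas id and sizing are assumptions, not taken from the demo project):
var canvas = document.getElementById('camera-canvas');
var ctx = canvas.getContext('2d');

function draw(dataUri) {
    // decode the base64 data URI into an Image and blit it onto the canvas
    var img = new Image();
    img.onload = function () {
        ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
    };
    img.src = dataUri; // e.g. 'data:image/jpeg;base64,...'
}
Keep in mind that decoding a full base64 JPEG on every frame is also expensive on the JavaScript side, which is part of why this route is unlikely to reach 720p at 30fps.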

Related

Babylon JS - SceneLoader from Local File

New Babylon JS user, looking to get up to speed with this fantastic framework. I have had a play with the Sandbox and online Editor, and worked up my own coded model from scratch using the standard components - Box, Sphere, etc. My question relates to how to get more complex custom geometry loaded. I'm very comfortable with 3D CAD - STL/OBJ files - and got some exports going from Blender to the .babylon format, which import great into Babylon's online Sandbox & Editor. However, I can't seem to get the SceneLoader going to read a file from the local C:/ drive. Code extract below:
// Create new Babylon Scene
var scene = new BABYLON.Scene(engine);
// Change scene background color
scene.clearColor = new BABYLON.Color3(1, 1, 1);
// Create and positions a free camera
var camera = new BABYLON.FreeCamera("camera1", new BABYLON.Vector3(0, 10, 0), scene);
// Target the camera to scene origin
camera.setTarget(BABYLON.Vector3.Zero());
// Attach camera to the canvas
camera.attachControl(canvas, true);
// Define built-in 'box' shape.
var box = BABYLON.Mesh.CreateBox("sphere1", 1, scene);
// Define 'ground' plane
var ground = BABYLON.Mesh.CreateGround("ground1", 100, 100, 100, scene);
ground.position.y = 0;
//Load local .babylon file from root Dir
BABYLON.SceneLoader.Load("", "Test.babylon", engine, scene);
My model has a standard box for geometry with a ground plane. All renders great in Babylon - until I add the SceneLoader line. When I add this I get stuck on the Babylon loading intro splash screen (rotating Babylon logo). If I comment out the last line of code above, the model renders fine with the box.
I have had a look at various forum pages on this and racked my brain to the point of being stuck, e.g.: http://www.html5gamedevs.com/topic/20924-stlobj-file-loader/ & https://www.eternalcoding.com/?p=313
I believe Google Chrome may be locking out local file links for security. I have tried running in -Allow-Local-File-Access mode, but I'm still stuck on the loading page. Do I need a web server (I wouldn't know where to start!) or can I run Babylon scenes locally?
First issue posed by OP: Browser is not loading mesh from file system.
Solution: Use a web server such as Simple HTTP Server (Python). The way to do this is slightly different depending on your Python version. To check Python version on Windows, open command prompt and type python --version. Remember the version number for later :)
Setting up simple web server with Python with command prompt:
Navigate to the directory with your index.html file in File Explorer
Left-click a blank space inside the path box (where it says This PC > Documents, etc.)
Type cmd and it will open Command Prompt in the current directory
Enter the appropriate command...
python -m SimpleHTTPServer [optional port number] if you are using Python 2
python -m http.server [optional port number] if you are using Python 3
I usually leave out the port number and simply type python -m http.server.
Now open your preferred browser and enter localhost:8000 into your address bar. (8000 is the default port number. If you specified a port, use the number which you specified.) It should load your mesh if the code has no errors.
Second issue posed by OP: SceneLoader.Load method overrides previously loaded meshes.
Solution:
If you only need to import a few meshes, use either BABYLON.SceneLoader.Append(...) or BABYLON.SceneLoader.ImportMesh(...). However, this method is inconvenient for managing many assets.
Alternatively, use BABYLON.AssetsManager(...). Since Babylon.js loads models asynchronously, the assets manager allows ease of use through callback functions. In other words, you can find your assets by name using scene.getMeshByName("yourMesh") inside the callback function. Here is a simple demo.
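For illustration, a rough sketch of both approaches (the file and task names are placeholders, not from the OP's project):
// import into the existing scene instead of replacing it
BABYLON.SceneLoader.ImportMesh("", "./", "Test.babylon", scene, function (meshes) {
    // the imported meshes are now part of the original scene
    meshes[0].position.y = 1;
});

// or, with the assets manager and a per-task callback
var assetsManager = new BABYLON.AssetsManager(scene);
var meshTask = assetsManager.addMeshTask("testTask", "", "./", "Test.babylon");
meshTask.onSuccess = function (task) {
    task.loadedMeshes[0].position = BABYLON.Vector3.Zero();
};
assetsManager.load();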
I know this question is a few years old, but in case anyone still has issues with this I hope this answer helps.
So I’m not 100% sure about this answer, but hopefully it will help. I followed this tutorial (Skip down to the section where the scene gets loaded). One issue is definitely the cross origin thing, the other how you call the SceneLoader.Load method.
When I try the code from the tutorial with regular Chrome I see three messages in my web console: two errors about Test.babylon.manifest (using your example file naming) and one about Test.babylon. You can ignore the ones regarding manifests, AFAIK. The important one is the error about Test.babylon itself. So, by default, cross-origin requests are not allowed and the .babylon file does not load (as expected).
Now, when I close Chrome and reopen it by running open -a "Google Chrome" --args --allow-file-access-from-files in the terminal (I’m on OSX Yosemite), and then load the page, the object loads fine. I still see two errors about manifests in the web console, but they can be ignored.
Note how the BABYLON.SceneLoader.Load function is being called. The import process is asynchronous, and the last parameter looks to be a callback function for what to do once the object has successfully loaded, so I don't think you can just pass scene as in your original code. Check out the function docs.
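For example, something along these lines (a sketch only, assuming the .babylon file defines its own camera and lights):
BABYLON.SceneLoader.Load("", "Test.babylon", engine, function (loadedScene) {
    // the loader builds a brand new scene from the file; render that one
    loadedScene.executeWhenReady(function () {
        engine.runRenderLoop(function () {
            loadedScene.render();
        });
    });
});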
OK - progress.
I got it going using SceneLoader.ImportMesh but I had to setup a simple HTTP Server using Python (v3). This link helped a lot: http://www.linuxjournal.com/content/tech-tip-really-simple-http-server-python
So you run the Python HTTP server from the directory that the Babylon index.html is based in, and the page is served over HTTP, bypassing the local file access constraints in Chrome.
So my problem is all but answered. I now have my mesh geometry from the Test.babylon file in my main scene. I'm still having issues using SceneLoader.Load, as the new scene coming in supersedes my original scene and the original geometry disappears. David - I think you're right on the callback function being needed, although I thought this was optional. As I said, the tutorial example creates a newScene and renders within the function; in my case I don't know what to do in the function... maybe just 'return'?

MediaRecorder changes size without provocation

I'm using the MediaRecorder API along with the Canvas captureStream method to encode a VP8 video stream of a canvas in the browser. This data is sent to FFmpeg via a binary web socket.
var outputCaptureStream = $('canvas')[0].captureStream(30);
var mediaRecorder = new MediaRecorder(outputCaptureStream, {
    mimeType: 'video/webm'
});
mediaRecorder.ondataavailable = function (e) {
    ffmpegStdin.write(e.data);
};
mediaRecorder.start(1000);
For some reason, the stream seems to be randomly switching to a lower resolution mid-stream. FFmpeg isn't happy about this:
Input stream #0:0 frame changed from size:1280x720 fmt:yuv420p to size:1024x576 fmt:yuv420p
[vp8 @ 0x2a02c00] Upscaling is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
[vp8 @ 0x2a02c00] If you want to help, upload a sample of this file to ftp://upload.ffmpeg.org/incoming/ and contact the ffmpeg-devel mailing list. (ffmpeg-devel@ffmpeg.org)
I suspect that it has something to do with excessive CPU usage and that Firefox is trying to be helpful by scaling down the video. My questions:
Does Firefox scale down the video on the fly?
If so, what conditions cause this to happen? (CPU load? Stream backpressure?)
Is it possible to prevent Firefox from doing this?
Is there a different explanation for this behavior that I'm missing?
Firefox will rescale (downscale) WebRTC/getUserMedia video if it detects the system's CPU is being overloaded. There are a few prefs in about:config that control this behavior, but it's not controllable via JS.
You can disable the feature by setting
media.navigator.load_adapt=false
You can look at the other media.navigator.load_adapt.* flags for some control over the behavior. By default you will get downscaling if the CPU gets pegged more than 90% for 3 seconds.

DOM Exception 12 when trying to stream MP3 through websocket

I am currently working on a small project where I want to split an mp3 into frames, send them to a client (browser) through a websocket and then play them back using WebAudio (webkitAudioContext). My server is running nodejs and to transfer the data as binary, I use binaryJS. The browser I am testing with is Chrome 25.0.1354.0 dev, running on Ubuntu 12.04.
I have gotten as far as successfully splitting the mp3 into frames, or, at least, based on my tests, it seems to work. If I write the frames back into a file, mplayer has no problem playing back the file and also parses the header correctly. Each frame is stored in a nodejs Buffer of the correct size and the last byte of the buffer is always the first byte before the next sync word.
As an initial test, I am only sending the first MP3 frame. The client receives the frame successfully (stored in an ArrayBuffer), and the buffer contains the correct data. However, when I call decode, I get the following message:
Uncaught Error: SyntaxError: DOM Exception 12
My function, where I call decodeAudioData, looks like this:
streamDone = ->
  bArray = new Uint8Array(arr[0].byteLength)
  console.log "Stream is done, bytes", bArray.length
  context.decodeAudioData bArray, playAudio, err
The initial frame that I am trying to decode can be found here.
I have been banging my head against the wall for a couple of days now trying to solve this. Has anyone managed to successfully decode mp3 frames and can see what I am doing wrong? I have found two related questions on StackOverflow, but the answers did not help me solve my problem. However, according to the accepted answer here, my frame should qualify as a valid mp3 chunk and, thus, be decoded.
Thanks in advance for any help!
Turns out that a break and some fresh eyes can work wonders; a general code cleanup solved the issue. If anyone is interested in the code, I published it here.
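One likely culprit in the snippet above is that decodeAudioData expects the raw ArrayBuffer received over the socket, not a freshly allocated (and empty) Uint8Array view. A hedged sketch in plain JavaScript rather than the CoffeeScript above:
var context = new (window.AudioContext || window.webkitAudioContext)();

function playAudio(decodedBuffer) {
    var source = context.createBufferSource();
    source.buffer = decodedBuffer;
    source.connect(context.destination);
    source.start(0); // older WebKit builds use noteOn(0) instead
}

function onFrameReceived(arrayBuffer) {
    // pass the ArrayBuffer itself, not new Uint8Array(arrayBuffer)
    context.decodeAudioData(arrayBuffer, playAudio, function (e) {
        console.error('decode failed', e);
    });
}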

Capture Browser Web Page [duplicate]

Is it possible to to take a screenshot of a webpage with JavaScript and then submit that back to the server?
I'm not so concerned with browser security issues, etc., as the implementation would be for an HTA. But is it possible?
Google is doing this in Google+ and a talented developer reverse engineered it and produced http://html2canvas.hertzen.com/ . To work in IE you'll need a canvas support library such as http://excanvas.sourceforge.net/
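A rough sketch of what that looks like with a recent html2canvas build (older versions used an {onrendered: ...} option instead of a promise), posting the result to a hypothetical /upload endpoint:
html2canvas(document.body).then(function (canvas) {
    var dataUrl = canvas.toDataURL('image/png');
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/upload');
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.send('image=' + encodeURIComponent(dataUrl));
});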
I have done this for an HTA by using an ActiveX control. It was pretty easy to build the control in VB6 to take the screenshot. I had to use the keybd_event API call because SendKeys can't do PrintScreen. Here's the code for that:
Declare Sub keybd_event Lib "user32" _
(ByVal bVk As Byte, ByVal bScan As Byte, ByVal dwFlags As Long, ByVal dwExtraInfo As Long)
Public Const CaptWindow = 2
Public Sub ScreenGrab()
keybd_event &H12, 0, 0, 0
keybd_event &H2C, CaptWindow, 0, 0
keybd_event &H2C, CaptWindow, &H2, 0
keybd_event &H12, 0, &H2, 0
End Sub
That only gets you as far as getting the window to the clipboard.
Another option, if the window you want a screenshot of is an HTA would be to just use an XMLHTTPRequest to send the DOM nodes to the server, then create the screenshots server-side.
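Roughly, something like this (the /render endpoint is hypothetical; the server would rebuild the page and rasterize it itself):
var xhr = new XMLHttpRequest();
xhr.open('POST', '/render');
xhr.setRequestHeader('Content-Type', 'text/html');
xhr.send(document.documentElement.outerHTML);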
Another possible solution that I've discovered is http://www.phantomjs.org/ which allows one to very easily take screenshots of pages and a whole lot more. Whilst my original requirements for this question aren't valid any more (different job), I will likely integrate PhantomJS into future projects.
Pondering if this is possible to do by rendering the whole body element into a canvas and then using canvas2image?
http://www.nihilogic.dk/labs/canvas2image/
A possible way to do this: if you're running on Windows and have .NET installed, you can do:
public Bitmap GenerateScreenshot(string url)
{
    // This method gets a screenshot of the webpage
    // rendered at its full size (height and width)
    return GenerateScreenshot(url, -1, -1);
}

public Bitmap GenerateScreenshot(string url, int width, int height)
{
    // Load the webpage into a WebBrowser control
    WebBrowser wb = new WebBrowser();
    wb.ScrollBarsEnabled = false;
    wb.ScriptErrorsSuppressed = true;
    wb.Navigate(url);
    while (wb.ReadyState != WebBrowserReadyState.Complete) { Application.DoEvents(); }

    // Set the size of the WebBrowser control
    wb.Width = width;
    wb.Height = height;

    if (width == -1)
    {
        // Take a screenshot of the web page's full width
        wb.Width = wb.Document.Body.ScrollRectangle.Width;
    }

    if (height == -1)
    {
        // Take a screenshot of the web page's full height
        wb.Height = wb.Document.Body.ScrollRectangle.Height;
    }

    // Get a Bitmap representation of the webpage as it's rendered in the WebBrowser control
    Bitmap bitmap = new Bitmap(wb.Width, wb.Height);
    wb.DrawToBitmap(bitmap, new Rectangle(0, 0, wb.Width, wb.Height));
    wb.Dispose();

    return bitmap;
}
And then via PHP you can do:
exec("CreateScreenShot.exe -url http://.... -save C:/shots domain_page.png");
Then you have the screenshot on the server side.
This might not be the ideal solution for you, but it might still be worth mentioning.
Snapsie is an open-source ActiveX object that enables Internet Explorer screenshots to be captured and saved. Once the DLL file is registered on the client, you should be able to capture the screenshot and upload the file to the server within JavaScript. Drawbacks: it requires registering the DLL file on the client, and it works only with Internet Explorer.
We had a similar requirement for reporting bugs. Since it was for an intranet scenario, we were able to use browser addons (like Fireshot for Firefox and IE Screenshot for Internet Explorer).
This question is old but maybe there's still someone interested in a state-of-the-art answer:
You can use getDisplayMedia:
https://github.com/ondras/browsershot
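For the curious, a minimal sketch of the getDisplayMedia route (it needs a user gesture and the user's consent, so it is more of a "share your screen" flow than a silent capture):
async function captureFrame() {
    const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
    const video = document.createElement('video');
    video.srcObject = stream;
    await video.play();

    const canvas = document.createElement('canvas');
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    canvas.getContext('2d').drawImage(video, 0, 0);

    stream.getTracks().forEach(function (t) { t.stop(); }); // stop sharing
    return canvas.toDataURL('image/png'); // can then be POSTed to the server
}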
SnapEngage uses a Java applet (1.5+) to take a browser screenshot. AFAIK, java.awt.Robot should do the job - the user just has to permit the applet to do it (once).
And I have just found a post about it:
Stack Overflow question JavaScript code to take a screenshot of a website without using ActiveX
Blog post How SnapABug works – and what they should do
I found that dom-to-image did a good job (much better than html2canvas). See the following question & answer: https://stackoverflow.com/a/32776834/207981
This question asks about submitting this back to the server, which should be possible, but if you're looking to download the image(s) you'll want to combine it with FileSaver.js, and if you want to download a zip with multiple image files all generated client-side take a look at jszip.
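A short sketch of that combination (the element id and the /upload endpoint are placeholders):
domtoimage.toPng(document.getElementById('capture-me'))
    .then(function (dataUrl) {
        // send it back to the server, or hand it to FileSaver.js / jszip instead
        return fetch('/upload', { method: 'POST', body: dataUrl });
    })
    .catch(function (error) {
        console.error('screenshot failed', error);
    });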
You can achieve that using HTA and VBScript. Just call an external tool to do the screenshotting. I forgot what the name is, but on Windows Vista there is a tool to do screenshots. You don't even need an extra install for it.
As for automation - it totally depends on the tool you use. If it has an API, I am sure you can trigger the screenshot and saving process through a couple of Visual Basic calls without the user knowing that you did what you did.
Since you mentioned HTA, I am assuming you are on Windows and (probably) know your environment (e.g. OS and version) very well.
If you are willing to do it on the server side, there are options like PhantomJS, which is now deprecated. The best way to go would be Headless Chrome with something like Puppeteer on Node.js. Capturing a web page using Puppeteer would be as simple as follows:
const puppeteer = require('puppeteer');
(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://example.com');
await page.screenshot({path: 'example.png'});
await browser.close();
})();
However, it requires Headless Chrome to be able to run on your servers, which has some dependencies and might not be suitable in restricted environments. (Also, if you are not using Node.js, you might need to handle installation / launching of browsers yourself.)
If you are willing to use a SaaS service, there are many options such as
Restpack
UrlBox
Screenshot Layer
A great solution for screenshot taking in JavaScript is the one by https://grabz.it.
They have a flexible and simple-to-use screenshot API which can be used by any type of JS application.
If you want to try it, first you should get the authorization app key + secret and the free SDK.
Then, in your app, the implementation steps would be:
// include the grabzit.min.js library in the web page you want the capture to appear
<script src="grabzit.min.js"></script>
// use the key and the secret to log in, then capture the URL
<script>
GrabzIt("KEY", "SECRET").ConvertURL("http://www.google.com").Create();
</script>
The screenshot can be customized with different parameters. For example:
GrabzIt("KEY", "SECRET").ConvertURL("http://www.google.com",
    {"width": 400, "height": 400, "format": "png", "delay": 10000}).Create();
That's all.
Then simply wait a short while and the image will automatically appear at the bottom of the page, without you needing to reload the page.
There are other functionalities to the screenshot mechanism which you can explore here.
It's also possible to save the screenshot locally. For that you will need to utilize GrabzIt server side API. For more info check the detailed guide here.
As of today, April 2020, the GitHub library html2canvas is worth a look:
https://github.com/niklasvh/html2canvas
GitHub 20K stars | Azure Pipelines: Succeeded | Downloads 1.3M/mo
Quote: "JavaScript HTML renderer. The script allows you to take 'screenshots' of webpages or parts of it, directly on the users browser. The screenshot is based on the DOM and as such may not be 100% accurate to the real representation as it does not make an actual screenshot, but builds the screenshot based on the information available on the page."
I made a simple function that uses rasterizeHTML to build an SVG and/or an image with the page contents.
Check it out :
https://github.com/orisha/tdg-screen-shooter-pure-js

Html5 Audio plays only once in my Javascript code

I have a dashboard web-app that I want to play an alert sound if its having problems connecting. The site's ajax code will poll for data and throttle down its refresh rate if it can't connect. Once the server comes back up, the site will continue working.
In the meantime I would like a sound to play each time it can't connect (so I know to check the server). Here is that code. This code works.
var error_audio = new Audio("audio/"+settings.refresh.error_audio);
error_audio.load();
//this gets called when there is a connection error.
function onConnectionError() {
error_audio.play();
}
However the 2nd time through the function the audio doesn't play. Digging around in Chrome's debugger the 'played' attribute in the audio element gets set to true. Setting it to false has no results. Any ideas?
I encountered this just today; after more searching I found that you must set the source property on the audio element again to get it to restart. Don't worry, no network activity occurs, and the operation is heavily optimized.
var error_audio = new Audio("audio/"+settings.refresh.error_audio);
error_audio.load();
//this gets called when there is a connection error.
function onConnectionError() {
error_audio.src = "audio/"+settings.refresh.error_audio;
error_audio.play();
}
This behavior appears in Chrome 21. FF doesn't seem to mind setting the src twice either!
Try setting error_audio.currentTime to 0 before playing it. Maybe it doesn't automatically go back to the beginning.
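Something along these lines (a sketch reusing the OP's variable name):
function onConnectionError() {
    error_audio.currentTime = 0; // rewind to the start
    error_audio.play();
}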
You need to implement the Content-Range response headers, since Chrome requests the file in multiple parts via the Range HTTP header.
See here: HTML5 <audio> Safari live broadcast vs not
Once that has been implemented, both the play() function and setting the currentTime property should work.
Q: I’VE GOT AN AUDIOBUFFERSOURCENODE, THAT I JUST PLAYED BACK WITH NOTEON(), AND I WANT TO PLAY IT AGAIN, BUT NOTEON() DOESN’T DO ANYTHING! HELP!
A: Once a source node has finished playing back, it can’t play back more. To play back the underlying buffer again, you should create a new AudioBufferSourceNode and call noteOn().
Though re-creating the source node may feel inefficient, source nodes are heavily optimized for this pattern. Plus, if you keep a handle to the AudioBuffer, you don't need to make another request to the asset to play the same sound again. If you find yourself needing to repeat this pattern, encapsulate playback with a simple helper function like playSound(buffer).
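A sketch of such a helper, assuming an AudioContext named context is already in scope (newer implementations use start(0); the older prefixed API used noteOn(0)):
function playSound(buffer) {
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    source.start(0); // or source.noteOn(0) on older WebKit builds
}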
Q: WHEN PLAYING BACK A SOUND, WHY DO YOU NEED TO MAKE A NEW SOURCE NODE EVERY TIME?
A: The idea of this architecture is to decouple audio asset from playback state. Taking a record player analogy, buffers are analogous to records and sources to play-heads. Because many applications involve multiple versions of the same buffer playing simultaneously, this pattern is essential.
source:
http://updates.html5rocks.com/2012/01/Web-Audio-FAQ
You need to pause the audio just before its end and change the current playing time to zero, then play it.
JavaScript/jQuery to control HTML5 audio elements - check this link - it explains how to handle/control HTML5 audio elements. It may help you!
Chrome/Safari have fixed this issue in newer versions of the browser and the above code now works as expected. I am not sure the precise version it was fixed in.
