Take desktop screenshot with Electron - javascript

I am using Electron to create a Windows application that creates a fullscreen transparent overlay window. The purpose of this overlay is to:
1. take a screenshot of the entire screen (not the overlay itself, which is transparent, but the screen 'underneath'),
2. process this image by sending it as a byte stream to my Python server, and
3. draw some things on the overlay.
I am getting stuck on the first step, which is the screenshot capturing process.
I tried option 1, which is to use capturePage():
this.electronService.remote.getCurrentWindow().webContents.capturePage()
  .then((img: Electron.NativeImage) => { ... });
but this captures my overlay window only (and not the desktop screen). This will be a blank image which is useless to me.
Option 2 is to use desktopCapturer:
this.electronService.remote.desktopCapturer.getSources({ types: ['screen'] }).then(sources => {
  for (const source of sources) {
    if (source.name === 'Screen 1') {
      try {
        const mediaDevices = navigator.mediaDevices as any;
        mediaDevices.getUserMedia({
          audio: false,
          video: { // this specification is only available for Chrome -> Electron runs on the Chromium browser
            mandatory: {
              chromeMediaSource: 'desktop',
              chromeMediaSourceId: source.id,
              minWidth: 1280,
              maxWidth: 1280,
              minHeight: 720,
              maxHeight: 720
            }
          }
        }).then((stream: MediaStream) => { // stream.getVideoTracks()[0] contains the video track I need
          this.handleStream(stream);
        });
      } catch (e) {
      }
    }
  }
});
The next step is where it becomes fuzzy for me. What do I do with the acquired MediaStream to get a byte stream of the screenshot out of it? I see plenty of examples of how to display this stream on a webpage, but I wish to send it to my backend. This StackOverflow post mentions how to do it, but I am not getting it to work properly. This is how I implemented handleStream():
import * as MediaStreamRecorder from 'msr';

private handleStream(stream: MediaStream): void {
  const recorder = new MediaStreamRecorder(stream);
  recorder.ondataavailable = (blob: Blob) => { // I immediately get a blob, while the linked SO page got an event and had to get a blob through event.data
    this.http.post<Result>('http://localhost:5050', blob);
  };
  // make the dataavailable event fire every second
  recorder.start(1000);
}
The blob is not being accepted by the Python server. Upon inspecting the contents of the Blob, it's a video, as I suspected. I verified this with the following code:
let url = URL.createObjectURL(blob);
window.open(url, '_blank');
which opens the blob in a new window. It displays a video of maybe half a second, but I want a static image. So how do I get a specific snapshot out of it? I'm also not sure whether simply sending the JavaScript Blob in the POST body will let Python interpret it correctly. In Java it works by simply sending a byte[] of the image, so I have verified that the Python server implementation works as expected.
Any suggestions other than using the desktopCapturer are also fine. This implementation captures my mouse as well, which I'd rather not have. I must admit that I did not expect this feature to be so difficult to implement.

Here's how you take a desktop screenshot:
const { desktopCapturer } = require('electron')

document.getElementById('screenshot-button').addEventListener('click', () => { // The button which takes the screenshot
  desktopCapturer.getSources({ types: ['screen'] })
    .then(sources => {
      document.getElementById('screenshot-image').src = sources[0].thumbnail.toDataURL() // The image to display the screenshot
    })
})
Using 'screen' will take a screenshot of the entire desktop.
Using 'window' will take a screenshot of only the application window.
Also refer to these docs: https://www.electronjs.org/docs/api/desktop-capturer
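If, as in the question, you need raw bytes for a backend rather than a data URL, a minimal sketch built on the same thumbnail approach could look like the following. The 1920x1080 thumbnailSize and the fetch call are assumptions, and desktopCapturer is assumed to be reachable from this process (the question's setup would go through remote); the http://localhost:5050 endpoint is the one from the question.

const { desktopCapturer } = require('electron')

// Grab one full-screen still as PNG bytes and POST it to the Python server.
// Request your actual display resolution as thumbnailSize, otherwise you only
// get the small default thumbnail.
async function sendScreenshotToBackend() {
  const sources = await desktopCapturer.getSources({
    types: ['screen'],
    thumbnailSize: { width: 1920, height: 1080 }
  })
  const pngBuffer = sources[0].thumbnail.toPNG() // Node Buffer with the PNG bytes
  await fetch('http://localhost:5050', {
    method: 'POST',
    headers: { 'Content-Type': 'image/png' },
    body: pngBuffer
  })
}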

desktopCapturer only captures video, so you need to get a single frame from it. You can use an HTML5 canvas for that. Here is an example:
https://ourcodeworld.com/articles/read/280/creating-screenshots-of-your-app-or-the-screen-in-electron-framework
Or, use some third-party screenshot library available on npm. The one I found needs ImageMagick installed on Linux, but maybe there are more, or you don't need to support Linux. You'll need to do that in the main Electron process, where you can do anything that you can do in Node.
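As a rough sketch of that canvas technique applied to the MediaStream from the question's option 2 (the PNG mime type is an assumption):

// Draw a single frame of the desktop MediaStream onto a canvas and
// turn it into a Blob of PNG bytes that can be POSTed to a backend.
function grabFrame(stream) {
  return new Promise((resolve, reject) => {
    const video = document.createElement('video')
    video.srcObject = stream
    // wait until the first frame is actually decoded before drawing
    video.onloadeddata = () => {
      const canvas = document.createElement('canvas')
      canvas.width = video.videoWidth
      canvas.height = video.videoHeight
      canvas.getContext('2d').drawImage(video, 0, 0)
      stream.getTracks().forEach(track => track.stop()) // stop capturing once we have the frame
      canvas.toBlob(blob => blob ? resolve(blob) : reject(new Error('toBlob failed')), 'image/png')
    }
    video.play()
  })
}

handleStream() from the question could then POST the resolved Blob instead of the MediaStreamRecorder video chunks.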

You can get a still frame (the thumbnail) from each captured source like this:
desktopCapturer.getSources({
  types: ['window'],
  thumbnailSize: {
    height: 768,
    width: 1366
  }
}).then(sources => {
  for (const source of sources) {
    const content = source.thumbnail.toPNG()
    console.log(content)
  }
})

Related

Bad Resolution Image taken with getuserMedia() Javascript

I wanted to take screenshots from a mobile phone camera using the JavaScript getUserMedia function, but the resolution is very bad.
if (navigator.mediaDevices) {
  // access the web cam
  navigator.mediaDevices.getUserMedia({
    video: {
      width: {
        min: 1280,
      },
      height: {
        min: 720,
      },
      facingMode: {
        exact: 'environment'
      }
    }
  }).then(function(stream) {
    video.srcObject = stream;
    video.addEventListener('click', takeSnapshot);
  })
  .catch(function(error) {
    document.body.textContent = 'Could not access the camera. Error: ' + error.name;
  });
}

var video = document.querySelector('video'), canvas;

function takeSnapshot() {
  var img = document.createElement('img');
  var context;
  var width = video.offsetWidth, height = video.offsetHeight;
  var canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  context = canvas.getContext('2d');
  context.webkitImageSmoothingEnabled = false;
  context.mozImageSmoothingEnabled = false;
  context.imageSmoothingEnabled = false;
  context.drawImage(video, 0, 0, width, height);
  img.src = canvas.toDataURL('image/jpeg');
}
There are no errors in the code, but the resolution is not good; I cannot read the text in the photo.
Is there a method to get the real image quality from the camera?
MediaCapture
This is what you are using via getUserMedia.
If you have a camera which allows a 1920x1080, 1280x720, and 640x480 resolutions only, the browser implementation of Media Capture can emulate a 480x640 feed from the 1280x720 (see MediaStream). From testing (primarily Chrome) the browser typically scales 720 down to 640 and then crops the center. Sometimes when I have used virtual camera software I see Chrome has added artificial black padding around a non supported resolution. The client sees a success message and a feed of the right dimensions but a person would see a qualitative degradation. Because of this emulation you cannot guarantee the feed is correct or not scaled. However it will typically have the correct dimensions requested.
You can read about constraints here. It basically boils down to: "give me a resolution as close to x as possible". The browser then decides, by its own implementation, whether to reject the constraints and throw an error, deliver the requested resolution, or emulate the resolution.
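As a hedged illustration of that difference (the 1920x1080 values are only an example), exact constraints make the browser fail loudly instead of silently scaling or emulating:

// With 'ideal' the browser may scale/emulate; with 'exact' it must either
// deliver that resolution or reject with an OverconstrainedError.
navigator.mediaDevices.getUserMedia({
  video: {
    width: { exact: 1920 },
    height: { exact: 1080 }
  }
}).then(function(stream) {
  const settings = stream.getVideoTracks()[0].getSettings();
  console.log('actual resolution:', settings.width, 'x', settings.height);
}).catch(function(error) {
  // error.name === 'OverconstrainedError' when the camera cannot satisfy the exact values
  console.warn('Constraints could not be satisfied:', error.name);
});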
More information of this design is detailed in the mediacapture specification. Especially:
The RTCPeerConnection is an interesting object because it acts simultaneously as both a sink and a source for over-the-network streams. As a sink, it has source transformational capabilities (e.g., lowering bit-rates, scaling-up / down resolutions, and adjusting frame-rates), and as a source it could have its own settings changed by a track source.
The main reason for this is to allow n clients to access the same media source while requiring different resolutions, bit rates, etc.; emulation/scaling/transforming attempts to solve this problem. A downside is that you never truly know what the source resolution is.
ImageCapture
This is potentially your solution.
If 60 FPS video isn't a hard requirement and you have leeway on compatibility, you can poll ImageCapture to emulate a camera and receive a much clearer image from the camera.
You would have to check for client-side support and then potentially fall back on MediaCapture.
The API enables control over camera features such as zoom, brightness, contrast, ISO and white balance. Best of all, Image Capture allows you to access the full resolution capabilities of any available device camera or webcam. Previous techniques for taking photos on the Web have used video snapshots (MediaCapture rendered to a Canvas), which are lower resolution than that available for still images.
https://developers.google.com/web/updates/2016/12/imagecapture
and its polyfill:
https://github.com/GoogleChromeLabs/imagecapture-polyfill
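A minimal sketch of that support check with a fallback (the canvas fallback shown here is an assumption, not the linked polyfill):

// Prefer ImageCapture.takePhoto() for full-resolution stills,
// otherwise fall back to snapshotting the video track via a canvas.
async function takeStill(stream) {
  const track = stream.getVideoTracks()[0];
  if ('ImageCapture' in window) {
    const imageCapture = new ImageCapture(track);
    return imageCapture.takePhoto(); // resolves with a Blob at the camera's still resolution
  }
  // MediaCapture fallback: snapshot of the (possibly lower-resolution) video feed
  const video = document.createElement('video');
  video.srcObject = stream;
  await video.play();
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  return new Promise(resolve => canvas.toBlob(resolve, 'image/jpeg'));
}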
I just want to mention that when using the imagecapture-polyfill for taking photos in Safari or Chrome on iOS, I was getting really bad image quality until I added a 2 sec. delay in-between getting the MediaStreamTrack and calling the ImageCapture constructor. This is not really an answer because I don't know why this delay is necessary, but maybe it can help someone anyway.
// Media constraints
const constraints = {
  audio: false,
  video: {
    facingMode: { exact: 'environment' }, // Use the back camera (otherwise the front camera will be used by default)
    width: { ideal: 99999 },
    height: { ideal: 99999 }
  }
};

// MediaStream
navigator.mediaDevices.getUserMedia(constraints).then(async mediaStream => { // The user will get a notification on the mobile device that this interface is being used
  // MediaStreamTrack
  const mediaStreamTrack = mediaStream.getVideoTracks()[0];

  // ImageCapture
  await new Promise(resolve => setTimeout(resolve, 2000)); // For an unknown reason, adding this delay greatly increases image quality on iOS
  this.imageCapture = new ImageCapture(mediaStreamTrack); // Note that iOS doesn't support ImageCapture [as of 2022-11-08] and instead uses the libs/imagecapture-polyfill.js to obtain an image by using a <canvas> instead

  // Take picture
  return this.imageCapture.takePhoto().then(blob => {
    // Upload photo right away
    if (blob) {
      // ... upload the blob here
    }
  });
});

I also edited the imagecapture-polyfill.js code to get a JPEG instead of a PNG, greatly reducing the file size:
self.canvasElement.toBlob(resolve, 'image/jpeg'); // [YB 2022-11-02: Output jpeg instead of png]

Node Webshot Screenshot of Angular JS Page does not wait for page load

I am using the node-webshot utility to capture a screenshot of an AngularJS-based website.
I have the following code, which is partially working.
const webshot = require('webshot');

var options = {
  streamType: 'png',
  windowSize: {
    width: 2048,
    height: 2048
  },
  shotSize: {
    width: 'all',
    height: 'all'
  }
};

webshot('URL here', 'image.png', options, function(err) {
  if (err) {
    console.log("An error ocurred ", err);
  }
  else {
    console.log("Job done mate :)")
  }
});
It works fine for simple websites like google.com etc., but the problem is that the AngularJS website I am trying to fetch information from loads the default page first, then loads some additional bits (forms, UI stuff, etc.) a few seconds later, since it makes some HTTP calls in the background.
I have tried using the renderDelay property and set it to a large value. It does wait for that time, but the image is still just a blank image.
As per their documentation, set this:
takeShotOnCallback (default: false) — Wait for the web page to signal to webshot when to take the photo using window.callPhantom('takeShot');
Now I can put this takeShotOnCallback in my options above, but in the Angular app, when I try to call window.callPhantom, it gives me the error that the method doesn't exist.
So how should I use this?
Another, broader question: is node-webshot the right tool to capture a screenshot of an Angular 6 website, or should I look for some other solution?
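For reference, a hedged sketch of how that option is usually wired up; the guard around window.callPhantom is what avoids the "method doesn't exist" error when the app runs in a normal browser (where exactly the app triggers the shot is an assumption):

// webshot side: let the page itself decide when the shot is taken
const webshot = require('webshot');

var options = {
  streamType: 'png',
  takeShotOnCallback: true
};

webshot('URL here', 'image.png', options, function(err) {
  if (err) console.log("An error occurred ", err);
});

// Angular app side: only call callPhantom when it actually exists,
// e.g. after the background HTTP calls have finished and the view is rendered
if (typeof window.callPhantom === 'function') {
  window.callPhantom('takeShot');
}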

Audio recording is empty on safari ios

I've used RecordRTC in order to record audio and send it to a speech-to-text API.
Somehow, it all works perfectly fine except when using Safari on iOS.
In Safari on iOS, the recording, which I'm retrieving as a base64 string, is somehow returned empty by the recorder object.
Previous questions about this were answered with "use another library", yet the RecordRTC docs specifically say it fully supports Safari on iOS.
Could you please help me figure out the problem and find a workaround?
My code:
async initMic() {
  let stream = await navigator.mediaDevices.getUserMedia({video: false, audio: true});
  mic = new RecordRTCPromisesHandler(stream, {
    type: 'audio',
    mimeType: 'audio/wav',
    recorderType: RecordRTC.StereoAudioRecorder,
    sampleRate: 48000,
    numberOfAudioChannels: 1,
  });
},

async sendRecording() {
  let vm = this;
  mic.stopRecording(function() {
    mic.getDataURL(function(dataURL) {
      vm.$store.dispatch('UpdateAudioBase64', dataURL.replace('data:audio/wav;base64,', ''));
      mic.reset();
      vm.$emit('send-recording');
    });
  });
},
** The string replace() call removes the base64 header before the data is sent to the speech-to-text API (the API requires this).
Thank You!
If I'm not mistaken, Apple messed up again with their dumb policy: you can't do a lot of things (like setting up a recorder) without the user triggering them, so you should wrap your recorder setup in a click event listener. The user clicks a button, then your mic = new RecordRTCPromisesHandler(stream, {...}) fires and recording starts.
Check this example: https://github.com/muaz-khan/RecordRTC/blob/master/simple-demos/audio-recording.html — this trick works there.
By the way, does your code work in Safari on macOS?
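A rough sketch of that pattern applied to the question's recorder setup (the button id is an assumption):

let mic; // same module-level recorder handle as in the question

document.getElementById('record-button').addEventListener('click', async () => {
  // getUserMedia and the recorder setup happen inside the user gesture,
  // which is what Safari on iOS requires
  const stream = await navigator.mediaDevices.getUserMedia({ video: false, audio: true });
  mic = new RecordRTCPromisesHandler(stream, {
    type: 'audio',
    mimeType: 'audio/wav',
    recorderType: RecordRTC.StereoAudioRecorder,
    numberOfAudioChannels: 1,
  });
  await mic.startRecording();
});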

How to fix images not loading in phaser.js

Whenever I try to load images into Phaser, they always appear as a green box on my screen.
I've tried not only the root file path like normal, but also every file path imaginable, and it still never works.
I'm assuming you've already set up a web server. Did you open the developer tools and check for errors? In Google Chrome, right-click your web page and hit Inspect. Navigate to the Console tab and look for errors. Be sure to also log XMLHttpRequests, as these will indicate when the request for the image has been fulfilled (just tick the Log XMLHttpRequests checkbox).
Did any of the above solutions work for you? In my case, I used a simple HTTP server with Python. I was referencing my images correctly and had my web server hosted correctly. When using the developer tools, there were no error messages. I was banging my head against the wall for hours. For some reason, the Phaser API was not able to tell what the local directory (http://192.168.0.2:8080/) was from the webpage.
I fixed my issue by using the following path for my images:
'http://192.168.0.2:8080/assets/sky.png'
where 192.168.0.2 was my local IP address hosting the server and 8080 was the port I was using for communications.
As an alternative to adding that lengthy file path for each of your images, you can call this.load.setBaseURL('http://192.168.0.2:8080') in combination with what you already have.
For example, see the following code:
var config = {
  type: Phaser.AUTO,
  width: 800,
  height: 600,
  physics: {
    default: 'arcade',
    arcade: {
      gravity: { y: 200 }
    }
  },
  scene: {
    preload: preload,
    create: create
  }
};

var game = new Phaser.Game(config);

function preload ()
{
  console.log('preload');
  this.load.setBaseURL('http://192.168.0.2:8080');
  this.load.image('bombP', 'bowmb.png');
  this.load.image('einMary', '/bowmb.png');
}

function create ()
{
  console.log('create');
  this.add.image(126, 119, 'bombP');
  this.add.image(222, 269, 'einMary');
  //var s = game.add.sprite(80, 0, 'einMary');
  //s.rotation = 0.219;
}
I would start by checking the two points below:
Check that the path to the image file is right
Check that the 'key' you are using is the right one
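As a minimal sketch of those two checks (the 'sky' key and the assets/sky.png path are assumptions): the path must resolve against the web server (plus any setBaseURL), and the key used in create() must be exactly the key registered in preload().

function preload ()
{
  // key 'sky' <-> file assets/sky.png served by the web server
  this.load.image('sky', 'assets/sky.png');
}

function create ()
{
  // must use the same key that was registered in preload
  this.add.image(400, 300, 'sky');
}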

Getting "ScreenCaptureError" in Chrome using Kurento Media Server

I'm trying to share my screen with a Kurento WebRTC server, but I'm getting this error:
NavigatorUserMediaError {name: "ScreenCaptureError", message: "", constraintName: ""}
There are no errors in Firefox with the same code.
Constraints used for WebRTC:
var constraints = {
  audio: true,
  video: {
    mandatory: {
      chromeMediaSource: 'screen',
      maxWidth: 1920,
      maxHeight: 1080,
      maxFrameRate: 30,
      minFrameRate: 15,
      minAspectRatio: 1.6
    },
    optional: []
  }
}

var options = {
  localVideo: video,
  onicecandidate: onIceCandidate,
  mediaConstraints: constraints
}

webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerSendonly(options, function(error) {
  if (error) {
    return console.error(error);
  }
  webRtcPeer.generateOffer(onOfferPresenter);
});
How do I share my screen using chrome and kurento?
Sharing a screen with Kurento through WebRTC is exactly the same as sharing the webcam: get the stream from the client and negotiate the endpoint. The tricky part when doing screen share is getting the stream. The kurento-utils-js library will give you a little help with that, as you can create the WebRtcPeer object in the client indicating that you want to share your screen or a window. You just need to make sure that you:
1. have an extension installed to do screen sharing in Chrome (in Firefox, it's enough to add the domain to the whitelist). Check this extension.
2. pass a valid sendSource value ('screen' or 'window') in the options bag when creating the kurentoUtils.WebRtcPeer object
3. have a getScreenConstraints method in your window object, as it will be used here. getScreenConstraints should return a valid set of constraints, depending on the browser. You can check an implementation of that function here.
I think that should be enough. We are doing screen sharing with the library, using our own getScreenConstraints and extension, and it works fine. Once you have that, doing screen sharing with the kurento-utils-js library is quite easy. You just need to pass the sendSource value when creating the peer, like so:
var constraints = {
  audio: false,
  video: true
}

var options = {
  localVideo: videoInput, // if you want to see what you are sharing
  onicecandidate: onIceCandidate,
  mediaConstraints: constraints,
  sendSource: 'screen'
}

webRtcPeerScreencast = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function(error) {
  if (error) return onError(error) // You'll need to use whatever you use for handling errors
  this.generateOffer(onOffer)
});
The value of sendSource is a string, and it depends on what you want to share
'screen': will let you share the whole screen. If you have more than one, you can choose which one to share
'window': lets you choose between all open windows
[ 'screen', 'window' ]: WARNING! Only accepted by Chrome, this will let the user choose between full screens or windows.
'webcam': this is the default value if you don't specify anything here. Guess what'll happen ;-)
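For reference, a rough sketch of the kind of Chrome constraints such a getScreenConstraints implementation needs to produce (the sourceId is a placeholder for whatever id the screen-sharing extension resolves; it is assumed here):

// 'sourceId' stands in for the id the screen-sharing extension resolves
// for the screen or window the user picked.
var sourceId = null; // placeholder

var chromeScreenConstraints = {
  mandatory: {
    chromeMediaSource: 'desktop',
    chromeMediaSourceId: sourceId,
    maxWidth: window.screen.width,
    maxHeight: window.screen.height
  },
  optional: []
};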
