Increase image DPI using Html2Canvas library - javascript

We are using the Html2Canvas library to capture a div element as an image, but the captured image quality is not good. Please let us know how we can improve the quality (up to 300 DPI).
We have added the JavaScript code below:
$('#divId').html2canvas({
    onrendered: function (canvas) {
        var imageData = { "imageData": canvas.toDataURL("image/png", 1.0) };
        $('#ImageData').val(imageData);
        var sourceUrl = configuration.baseURL + "/ScreenShot";
        $.ajax({
            type: "POST",
            url: sourceUrl,
            data: imageData,
            success: function () {
            },
            error: function (a, errorStatus, errorThrown) {
                alert(errorStatus + ": " + errorThrown);
            }
        });
    }
});
C# code:
public void ScreenShot(string imageData)
{
    string trimmedData = imageData.Replace("data:image/png;base64,", string.Empty);
    byte[] uploadedImage = Convert.FromBase64String(trimmedData);
    byte[] tiffBytes;
    using (MemoryStream inStream = new MemoryStream(uploadedImage))
    using (MemoryStream outStream = new MemoryStream())
    {
        Bitmap.FromStream(inStream).Save(outStream, ImageFormat.Tiff);
        tiffBytes = outStream.ToArray();
    }
    string fileName = Guid.NewGuid() + ".tiff";
    string path = Server.MapPath(fileName);
    System.IO.File.WriteAllBytes(path, tiffBytes);
}

Usually the screen resolution is near 100 DPI, often a bit lower, so you cannot capture a higher resolution, because it's not available. The capture is a pixel-by-pixel copy of what's on the screen.
I don't know if you can do so, but the only way to get a higher resolution is to cheat by using your browser's zoom and capturing the zoomed-in div content. If you need 300 DPI you have to zoom in at least 3x.
The only other solution is to take the original capture and use a library that increases the resolution by interpolation, but you're not going to get good quality.
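A hedged sketch of the zoom idea: newer html2canvas releases (1.x) return a promise and accept a scale option, which rasterizes the element at a multiple of its CSS pixel size, giving you the extra pixels without touching the browser zoom. The snippet below assumes that newer API and reuses the configuration.baseURL endpoint from the question:
    // Assumes html2canvas 1.x (promise-based API with a `scale` option).
    // scale: 3 renders at 3x the CSS size, i.e. roughly 96 dpi * 3 ≈ 288 DPI
    // worth of pixels; use scale: 4 for a safe margin above 300.
    html2canvas(document.getElementById('divId'), { scale: 3 }).then(function (canvas) {
        $.ajax({
            type: 'POST',
            url: configuration.baseURL + '/ScreenShot',
            data: { imageData: canvas.toDataURL('image/png') }
        });
    });
Note that the PNG itself carries no DPI tag; for the saved TIFF to actually report 300 DPI, the C# side would still need to set the resolution metadata on the bitmap (e.g. Bitmap.SetResolution(300, 300)) before saving.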

Related

BodyPix (Tensorflow.js) error: Unknown input type: [object ImageData]

I am trying to load images serverside (locally) using node.js so that I can run the segmentation functions on them.
In the BodyPix README it states that the segmentPerson function accepts ImageData object images:
"Params in segmentPerson()
image - ImageData|HTMLImageElement|HTMLCanvasElement|HTMLVideoElement The input image to feed through the network."
https://github.com/tensorflow/tfjs-models/tree/master/body-pix
Here is my code:
var bodyPix = require("#tensorflow-models/body-pix");
var tfjs = require("#tensorflow/tfjs")
var inkjet = require('inkjet');
var createCanvas = require('canvas');
var fs = require('fs');
async function loadAndPredict(data) {
    const net = await bodyPix.load({
        architecture: 'MobileNetV1',
        outputStride: 16,
        multiplier: 0.75,
        quantBytes: 2
    });
    const imgD = createCanvas.createImageData(new Uint8ClampedArray(data.data), data.width, data.height);
    const segmentation = await net.segmentPerson(imgD, {
        flipHorizontal: false,
        internalResolution: 'medium',
        segmentationThreshold: 0.7
    });
    const maskImage = bodyPix.toMask(segmentation, false);
}
inkjet.decode(fs.readFileSync('./person.jpg'), function(err, data) {
    if (err) throw err;
    console.log('OK: Image');
    loadAndPredict(data);
});
It loads an image from my current directory, converts it to the specified ImageData format, then feeds it into the segmentPerson function. The half of the code that doesn't relate to loading and formatting the image is taken straight from the GitHub README and has worked for me when using HTML image elements.
However, it returns Unknown input type: [object ImageData] on the call to net.segmentPerson(imgD, ...).
I haven't been able to find a solution to this issue, so any help or guidance would be GREATLY appreciated. Thank you.
Check this repo: https://github.com/ajaichemmanam/Posenet-NodeServer
PoseNet (which BodyPix builds on) is used there in a Node.js server script. You can see how the image is loaded and then fed into the PoseNet estimation.
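As a hedged alternative (my own sketch, not taken from the linked repo): this error typically appears because tfjs running in Node does not recognize node-canvas's ImageData as a browser ImageData. Decoding the file into a tensor with @tensorflow/tfjs-node sidesteps the issue, assuming a body-pix version whose segmentPerson also accepts a Tensor3D:
    // Sketch assuming @tensorflow/tfjs-node is installed; tf.node.decodeImage
    // turns a raw JPEG/PNG buffer into a Tensor3D, avoiding ImageData entirely.
    const tf = require('@tensorflow/tfjs-node');
    const bodyPix = require('@tensorflow-models/body-pix');
    const fs = require('fs');

    async function segmentFile(path) {
        const net = await bodyPix.load();
        const imageTensor = tf.node.decodeImage(fs.readFileSync(path), 3); // force 3 channels (RGB)
        const segmentation = await net.segmentPerson(imageTensor, {
            internalResolution: 'medium',
            segmentationThreshold: 0.7
        });
        imageTensor.dispose(); // release the tensor's memory
        return segmentation;
    }

    segmentFile('./person.jpg').then(function (s) { console.log(s.width, s.height); });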

Saving current work from multiple canvas elements

Here is my problem: I have created an image collage function in JavaScript. (I started off with some code from this post, by the way: dragging and resizing an image on html5 canvas)
I have 10 canvas elements stacked on top of each other, and all parameters for each canvas, including the 2D context, image data, positions, etc., are held in instances of the function 'collage'.
This is working fine; I can manipulate each canvas separately (drag, resize, add frames, etc.). But now I want the user to be able to save the current work.
So I figure that maybe it would be possible to create a blob, that contains all the object instances, and then save the blob as a file on disk.
This is the function collage (I also push each instance to the array collage.instances, to be able to have numbered indexes)
function collage() {
    this.canvas_board = '';
    this.canvas = '';
    this.ctx = '';
    this.canvasOffset = '';
    this.offsetX = '';
    this.offsetY = '';
    this.startX = '';
    this.startY = '';
    this.imageX = '';
    this.imageY = '';
    this.mouseX = '';
    this.mouseY = '';
    this.imageWidth = '';
    this.imageHeight = '';
    this.imageRight = '';
    this.imageBottom = '';
    this.imgframe = '';
    this.frame = 'noframe';
    this.img = '';
    collage.instances.push(this);
}
collage.instances = [];
I tried with something like this:
var oMyBlob = new Blob(collage.instances, {type: 'multipart/form-data'});
But that doesn't work (only contains about 300 bits of data).
Anyone who can help? Or maybe suggest an alternative way to save the current collage work? It must of course be possible to open the blob and repopulate the object instances.
Or maybe I am making this a bit more complicated than it has to be... but I am stuck right now, so I would appreciate any hints.
You can extract each layer's image data to DataURLs and save the result as a json object.
Here's a quick demo: http://codepen.io/gunderson/pen/PqWZwW
The process literally takes each canvas and saves out its data for later import.
The use of jQuery here is for convenience:
$(".save-button").click(function() {
var imgData = JSON.stringify({
layers: getLayerData()
});
save(imgData, "myfile.json");
});
function save(filecontents, filename) {
try {
var $a = $("<a>").attr({
href: "data:application/json;," + filecontents,
download: filename
})[0].click();
return filecontents;
} catch (err) {
console.error(err);
return null;
}
}
function getLayerData() {
var imgData = [];
$(".layer").each(function(i, el) {
imgData.push(el.toDataURL("image/png"));
});
return imgData;
}
To restore, you can use a FileReader to read the contents of the JSON back into the browser, then make an <img> for each layer, set each img.src to the corresponding dataURL in your JSON, and draw each <img> into its canvas from the image's onload handler.
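A hedged sketch of that restore path, assuming the JSON was chosen through a hypothetical fileInput element and that the ".layer" canvases from the demo still exist:
    // Read the saved JSON back, then paint each stored dataURL onto its layer
    // canvas once the corresponding <img> has finished loading.
    var reader = new FileReader();
    reader.onload = function () {
        var layers = JSON.parse(reader.result).layers;
        $(".layer").each(function (i, canvas) {
            var img = new Image();
            img.onload = function () {
                canvas.getContext("2d").drawImage(img, 0, 0);
            };
            img.src = layers[i];
        });
    };
    reader.readAsText(fileInput.files[0]); // `fileInput` is a hypothetical <input type="file">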
Add a reference (src URL) for the image to each instance, then serialize the instance array as JSON and use, for example, localStorage.
localStorage.setItem("currentwork", JSON.stringify(collage.instances));
Then to restore you would need to do:
var tmp = localStorage.getItem("currentwork");
collage.instances = tmp ? JSON.parse(tmp) : [];
You then need to iterate through the array and reload the images with proper onload handling. Finally, re-render everything.
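A hedged sketch of that restore loop, assuming each serialized instance kept its image URL in a src property; the 2D context does not survive JSON.stringify, so it has to be re-acquired (the "layer" id scheme below is hypothetical):
    var tmp = localStorage.getItem("currentwork");
    collage.instances = tmp ? JSON.parse(tmp) : [];
    collage.instances.forEach(function (inst, i) {
        var canvas = document.getElementById("layer" + i); // hypothetical id scheme
        inst.ctx = canvas.getContext("2d");                // contexts can't be serialized
        var img = new Image();
        img.onload = function () {
            inst.ctx.drawImage(img, inst.imageX, inst.imageY, inst.imageWidth, inst.imageHeight);
        };
        img.src = inst.src; // the URL saved with the instance
    });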
Can you store image data on the client? Yes, but it's not recommended. It takes a lot of space, and if there is too much you will not be able to save all the data, the user may refuse to allow more storage space, etc.
Keeping a link to the image on a server is a better approach for these things, in my opinion. But if you disagree, look into IndexedDB (or WebSQL, although it's deprecated) for local storage whose available space can be expanded. localStorage can only hold between 2.5 and 5 MB, i.e. no image data, and only strings. Each char takes two bytes, and data-URIs add 33% on top, so it will run empty pretty fast...

Load file into IMAGE object using Phantom.js

I'm trying to load an image and put its data into an HTML Image element, but without success.
var fs = require("fs");
var content = fs.read('logo.png');
After reading the content of the file, I have to convert it somehow to an Image or just paint it to a canvas. I was trying to convert the binary data to a Base64 data URL with code I found on Stack Overflow.
function base64encode(binary) {
    return btoa(unescape(encodeURIComponent(binary)));
}
var base64Data = 'data:image/png;base64,' + base64encode(content);
console.log(base64Data);
The returned Base64 is not a valid data URL. I tried a few more approaches, but without success. Do you know the best (shortest) way to achieve this?
This is a rather ridiculous workaround, but it works. Keep in mind that PhantomJS' (1.x?) canvas is a bit broken, so the canvas.toDataURL function returns largely inflated encodings. The smallest that I found was, ironically, image/bmp.
var fs = require("fs");

function decodeImage(imagePath, type, callback) {
    var page = require('webpage').create();
    var htmlFile = imagePath + "_temp.html";
    fs.write(htmlFile, '<html><body><img src="' + imagePath + '"></body></html>');
    var possibleCallback = type;
    type = callback ? type : "image/bmp";
    callback = callback || possibleCallback;
    page.open(htmlFile, function(){
        page.evaluate(function(imagePath, type){
            var img = document.querySelector("img");
            // the following is copied from http://stackoverflow.com/a/934925
            var canvas = document.createElement("canvas");
            canvas.width = img.width;
            canvas.height = img.height;
            // Copy the image contents to the canvas
            var ctx = canvas.getContext("2d");
            ctx.drawImage(img, 0, 0);
            // Get the data-URL formatted image
            // Firefox supports PNG and JPEG. You could check img.src to
            // guess the original format, but be aware that using "image/jpg"
            // will re-encode the image.
            window.dataURL = canvas.toDataURL(type);
        }, imagePath, type);
        fs.remove(htmlFile);
        var dataUrl = page.evaluate(function(){
            return window.dataURL;
        });
        page.close();
        callback(dataUrl, type);
    });
}
You can call it like this:
    decodeImage('logo.png', 'image/png', function(imgB64Data, type){
        //console.log(imgB64Data);
        console.log(imgB64Data.length);
        phantom.exit();
    });
or like this:
    decodeImage('logo.png', function(imgB64Data, type){
        //console.log(imgB64Data);
        console.log(imgB64Data.length);
        phantom.exit();
    });
I tried several things. I couldn't figure out the encoding of the file as returned by fs.read. I also tried to dynamically load the file into the about:blank DOM through file:// URLs, but that didn't work. I therefore opted to write a local HTML file to disk and open it immediately.

How do I detect a disconnect from an MJPEG stream in javascript

I am showing an MJPEG stream from an IP camera on a web page. The stream is shown using an image element, which is set by jQuery:
view = $('<img>');
view.load(function() {
    console.log('loaded');
});
view.error(function() {
    console.log('error');
});
view.attr('src', 'http://camera_ip/videostream.mjpeg');
Both events fire neatly when their respective situation occurs. Until I disconnect the camera. The image freezes (of course). I want to detect this disconnection, to show the user an error message. I thought up a solution, which was copying frames several seconds apart from the image to a canvas, and comparing the contents.
Is there an easier option?
The only way I can think of doing this on the front end is to create an AJAX request exactly when you set the src attribute of the image. The AJAX request should call the "complete" callback when the MJPEG stream ends.
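A minimal sketch of that idea, assuming the camera accepts a second connection and same-origin/CORS rules allow it; for an endless MJPEG stream the whole response buffers in memory, so this illustrates the principle rather than production code:
    view.attr('src', 'http://camera_ip/videostream.mjpeg');
    $.ajax({ url: 'http://camera_ip/videostream.mjpeg' })
        .always(function () {
            // For a never-ending stream this only fires when the connection
            // closes, i.e. on disconnect (or on an error response).
            console.log('camera disconnected');
        });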
If you are comfortable with node.js and/or websockets, you could alternatively set up an mjpeg proxy back end that serves up the mjpeg stream and emits a 'close' event to that client over a websocket when the stream ends. So it would look something like this (keep in mind, I still haven't figured out exactly how bufferToJPEG would parse out the single jpeg frame from the stream):
http.get('http://camera_ip/videostream.mjpeg', function(response) {
    var buffer = "";
    response.on('data', function(chunk) {
        buffer += chunk;
        // clientSocket: an existing websocket connection to the browser
        clientSocket.emit('imageFrame', bufferToJPEG(buffer));
    });
    response.on('end', function() {
        clientSocket.emit('imageEnd');
    });
});
The problem with this (which I am trying to deal with in my own project right now) is that you then have to either associate a websocket with each image request, or emit the raw jpegs from the mjpeg stream as they come in over websockets (you can render those images with data uris on the front end).
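A hedged sketch of that front-end half, assuming a socket.io client and that the proxy emits base64-encoded JPEG frames on 'imageFrame':
    var img = document.querySelector('#stream'); // the <img> showing the stream
    socket.on('imageFrame', function (b64jpeg) {
        img.src = 'data:image/jpeg;base64,' + b64jpeg;
    });
    socket.on('imageEnd', function () {
        console.log('stream ended'); // show the user an error message here
    });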
Hope that helped a little -- sorry you had to wait so long for a response.
edit: https://github.com/wilhelmbot/Paparazzo.js looks like a good way of proxying that image in the way that I described above.
Here is what I came up with after reading zigzackattack's answer. I use the "datauri" package for simplicity, but for more fine-grained control over the final image I also successfully tested the "node-canvas" package instead.
var mjpeg2jpegs = require('mjpeg2jpegs')
const Datauri = require('datauri')

var camURL = '/videostream.cgi?user=admin&pwd=password'
var camPort = 81
var camTimeout = 10000
var FPS_DIVIDER = 1

var options = {
  hostname: '192.168.1.241',
  port: camPort,
  path: camURL,
  timeout: camTimeout
}

function startCamStream (camName, options) {
  var http = require('http')
  var req = http.request(options, mjpeg2jpegs(function (res) {
    var data
    var pos = 0
    var count = 0
    res.on('imageHeader', function (header) {
      // console.log('Image header: ', header)
      data = Buffer.alloc(parseInt(header['content-length'], 10)) // Buffer.alloc replaces the deprecated new Buffer()
      pos = 0
    })
    res.on('imageData', function (chunk) {
      // console.log('Image data: ', data.length)
      chunk.copy(data, pos)
      pos += chunk.length
    })
    res.on('imageEnd', function () {
      // console.log('Image end')
      if (count++ % FPS_DIVIDER === 0) {
        const datauri = new Datauri()
        datauri.format('.jpeg', data)
        socket.emit(camName, datauri.content) // Send the image uri via websockets (`socket` is an existing socket.io connection).
      }
    })
  })).on('timeout', function () {
    console.log('timeout')
    startCamStream(camName, options) // reconnect on timeout
  }).end()
}

startCamStream('ipcam1', options)
Using vue.js (optional) I simply embed the image uri with an img tag.
<img :src="ipcam1" alt="ipcam1" />
Increasing the FPS_DIVIDER variable will reduce the fps output. If you want to change the image when there is a timeout, you can send an "offline" image when the "timeout" callback is reached.

Crop and upload image on client-side without server-side code involve

As the title says, the requirement is to be able to crop an image before uploading the cropped image to the server. All the work should be done on the client side.
I have heard of the method of cropping the image on the server and saving it there.
But as I use the Parse.com service, there is no support for image manipulation on the server side, so I need to process the image locally and upload the finished image directly to Parse.com.
Example code would be very helpful.
Thanks.
The solution I used:
First I use a 3rd-party JavaScript library such as jCrop to select the crop area.
Once I have the coordinates (x1, x2, y1, y2), I draw a copy of the image to a canvas:
var canvas = document.getElementById('drawcanvas');
var context = canvas.getContext('2d');
canvas.width = canvas.width; // clear canvas
var imageObj = new Image();
imageObj.onload = function() {
    // draw cropped image
    context.drawImage(imageObj, sourceX, sourceY, sourceWidth, sourceHeight, destX, destY, sourceWidth, sourceHeight);
    var dataURL = canvas.toDataURL();
};
imageObj.src = imageUrl; // hypothetical variable holding the image URL
After drawing to the canvas, I convert it to a DataURL, which is in base64 format.
Once I've got the DataURL, I use this function I found on the internet, which converts the DataURL to raw binary data:
DataURLConverter: function(data) {
    // convert base64 to raw binary data held in a string
    // doesn't handle URLEncoded DataURIs
    var byteString = atob(data.split(',')[1]);
    // separate out the mime component
    var mimeString = data.split(',')[0].split(':')[1].split(';')[0];
    // write the bytes of the string to an ArrayBuffer
    var ab = new ArrayBuffer(byteString.length);
    var ia = new Uint8Array(ab);
    for (var i = 0; i < byteString.length; i++) {
        ia[i] = byteString.charCodeAt(i);
    }
    return ia;
}
Once we have the binary data, we upload it directly to Parse.com, passing 'ia' as the data:
var serverUrl = 'https://api.parse.com/1/files/' + fileName;
$.ajax({
    type: "POST",
    beforeSend: function(request) {
        request.setRequestHeader("X-Parse-Application-Id", "App id");
        request.setRequestHeader("X-Parse-REST-API-Key", "API Key");
        request.setRequestHeader("Content-Type", "File type");
    },
    url: serverUrl,
    data: ia,
    processData: false,
    contentType: false,
    success: function(data) {
    },
    error: function(data) {
    }
});
OK, I finally made it, after searching for a whole day! Even now that Parse offers server-side cropping, client-side resizing is still interesting.
Check this:
HTML5 Pre-resize images before uploading
Justin Levene's correction works really well!
But to work with Parse.com, you need to use
new Parse.File(name, {base64: somebase64string});
This code works for me (for example, I uploaded a 2 MB photo; the resized photo came out around 150 kB):
var dataurl = canvas.toDataURL("image/jpeg");
var name = "image.jpg";
var parseFile = new Parse.File(name, {base64: dataurl.substring(23)});
parseFile.save().then(function() { ....
the "23" is all the letters before the real base64 string.
the result of dataurl is "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2......", we just need the part begin by "/9j/"
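A slightly safer variant (my own sketch) is to split on the comma that ends the data-URL header instead of hard-coding the offset, which works for any MIME type:
    var base64String = dataurl.split(',')[1]; // everything after "data:image/jpeg;base64,"
    var parseFile = new Parse.File(name, { base64: base64String });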
Good luck!
This might be an old post, but if you found this answer (like me), you might want to know that Parse now allows cropping images server-side.
For the latest code you should refer to their documentation: https://www.parse.com/docs/cloud_modules_guide#images
Parse Image Object (from Parse documentation):
var Image = require("parse-image");
Parse.Cloud.httpRequest({
url: object.get("profilePhoto").url(),
success: function(response) {
// The file contents are in response.buffer.
var image = new Image();
return image.setData(response.buffer, {
success: function() {
console.log("Image is " + image.width() + "x" + image.height() + ".");
},
error: function(error) {
// The image data was invalid.
}
})
},
error: function(error) {
// The networking request failed.
}
});
Crop Image (from Parse documentation):
// Crop the image to the rectangle from (10, 10) to (30, 20).
image.crop({
    left: 10,
    top: 10,
    right: 30,
    bottom: 20,
    success: function(image) {
        // The image was cropped.
    },
    error: function(error) {
        // The image could not be cropped.
    }
});
You can also scale, change image format, and create thumbnails.
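For completeness, a hedged sketch of the scaling step using the same parse-image module, following the same success/error callback pattern as crop() (the exact option names should be checked against the linked documentation):
    // Scale the image to half size; scale() follows the crop() callback style.
    image.scale({
        ratio: 0.5,
        success: function(scaledImage) {
            // scaledImage.data() returns the resized image bytes for a new Parse.File.
        },
        error: function(error) {
            // The image could not be resized.
        }
    });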
