Adding a base64-encoded PNG to a PDF is not working - JavaScript

I'm trying to embed an image into a PDF with jsPDF. For some reason this is not working. This is the code where I create my PDF:
exportPDF () {
  const doc = new jsPDF('p', 'pt')
  var footer = 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAC...'
  var header = 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgA...'
  /* Some other stuff */
  doc.addImage(this.output, 'PNG', 400, 400, 500, 500) // This does not work
  doc.addImage(header, 'PNG', 0, 0, 600, 139.86) // For some reason these images work
  doc.addImage(footer, 'PNG', 0, 800, 600, 54.68)
  /* Some other stuff */
  doc.save('Containertreppen Teileliste.pdf')
}
The variable output is passed in from another page and therefore not created in this method (I'm using Vue.js). It holds a screenshot of a Babylon.js scene, encoded in base64.
This is the code where I take the screenshot:
printDiv() {
  this.scene.render();
  const self = this
  const options = {
    type: "dataURL",
  };
  this.$html2canvas(document.getElementById("renderCanvas"), options).then(function(canvas){
    self.output = canvas
  })
},
I'm able to display the screenshot in an HTML element and I can download it as a normal PNG, but for some reason, when I try to add it to my PDF, the following error occurs in my console:
Error in v-on handler: "Error: Supplied Data is not a valid base64-String jsPDF.convertStringToImageData"
I also tried setting the string as the src of an HTML image element, but that changed nothing.
This is the beginning of the variable output:
"data:image/png;base64,iVBORw..."
Can someone help me with this?
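For reference, a minimal sketch (not part of the original post) of how the html2canvas promise could be awaited, so that output is guaranteed to hold the finished data-URL string before addImage runs. The method names match the question; everything else is illustrative:
// Sketch only: return the promise from printDiv() so callers can wait for it.
printDiv() {
  this.scene.render();
  const options = { type: "dataURL" };
  return this.$html2canvas(document.getElementById("renderCanvas"), options)
    .then((dataUrl) => {
      this.output = dataUrl;        // expected to start with "data:image/png;base64,"
      return dataUrl;
    });
},
// Hypothetical caller: only export once the screenshot is ready.
async exportWhenReady() {
  await this.printDiv();
  console.log(typeof this.output);  // should log "string"
  this.exportPDF();
}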

Related

Can we convert a docx file into a PDF using JavaScript without JS libraries?

I want to create an LWC component with a file selector that uploads '.pdf', '.png', '.jpg', '.jpeg', '.docx', and '.doc' files in Salesforce, but on a button click I want that file to be converted into a PDF and downloaded immediately.
Is it possible to convert a file using JS in LWC?
I found this code, but it only works on a string containing an HTML element. I want it to work on the whole file.
window.jsPDF = window.jspdf.jsPDF;
var doc = new jsPDF();
// Source HTMLElement or a string containing HTML.
var elementHTML = document.querySelector("#content");
doc.html(elementHTML, {
  callback: function(doc) {
    // Save the PDF
    doc.save('sample-document.pdf');
  },
  margin: [10, 10, 10, 10],
  autoPaging: 'text',
  x: 0,
  y: 0,
  width: 190,      // target width in the PDF document
  windowWidth: 675 // window width in CSS pixels
});

Can't load more than one image while using jsPDF on Next.js

I'm trying to create a PDF based on an HTML page that has 3 images. The HTML renders fine, and there is a button that downloads it as a PDF file, but only the first image is shown in the downloaded PDF. This is the function that downloads the file (I'm using the jsPDF library):
const savePDF = () => {
  const doc = new jsPDF('p', 'pt', 'a4');
  const margin = 10;
  const certificate = document.querySelector('#doc');
  const scale = (doc.internal.pageSize.width - margin * 2) / certificate.offsetWidth;
  doc.internal.write(0, 'Tw');
  doc.html(certificate, {
    x: margin,
    y: margin,
    html2canvas: {
      scale: scale
    },
    callback: function(doc) {
      doc.output('save', { filename: 'pdfFile.pdf' });
    }
  });
}
Everything else is all right: the text, the tables and the first image of the doc. But for some reason the second and third images aren't shown in the downloaded PDF file. I get the console message: "Error loading image". All these images are in the public folder of the project, locally. I'm working on Next.js.
By the way, all images are set the same way:
<Image src='/image1.png' alt='' width={500} height={500}/>

Trying to modify the JS function so that it gives the same output as when the other JS function is called

I am exporting the content of a webpage to a PDF file. For this I have used the jsPDF API and was able to get it working, but now I want to use html2pdf, as it resolves a few issues I faced when using the jsPDF API.
I have written the function $scope.exportUsingJSPDF, which is called when the Export Using JSPDF button is clicked. Similarly, I want to implement the function $scope.exportUsingHTML2PDF, which uses the html2pdf API, but I could not get it to work. Any input on how to modify $scope.exportUsingHTML2PDF so that it iterates over the divs and shows their content the same way $scope.exportUsingJSPDF does when the Export using JSPDF button is clicked?
Complete online example: https://plnkr.co/edit/454HUFF3rmLlkXLCQkbx?p=preview
js code:
//Trying to implement the function below the same way as $scope.exportUsingJSPDF, so
//that when the user clicks the Export using HTML2PDF button, it exports the content and generates the PDF.
$scope.exportUsingHTML2PDF = function(){
  var pdf = new jsPDF('l', 'pt', 'a4');
  var pdfName = 'test.pdf';
  pdf.canvas.height = 72 * 11;
  pdf.canvas.width = 72 * 8.5;
  html2pdf(document.getElementByClassName("myDivClass"), pdf, function(pdf){
    pdf.save(pdfName);
  });
}
$scope.exportUsingJSPDF = function() {
  var pdf = new jsPDF('p', 'pt', 'a4');
  var pdfName = 'test.pdf';
  var options = { pagesplit: true };
  var $divs = $('.myDivClass'); //jQuery object of all the myDivClass divs
  var numRecursionsNeeded = $divs.length - 1; //the number of times we need to call addHtml (once per div)
  var currentRecursion = 0;
  //Found a trick for using addHtml more than once per pdf: call addHtml recursively in the callback function of addHtml.
  function recursiveAddHtmlAndSave(currentRecursion, totalRecursions){
    //Once we have done all the divs, save the pdf
    if(currentRecursion == totalRecursions){
      pdf.save(pdfName);
    } else {
      currentRecursion++;
      pdf.addPage();
      //$('.myDivClass')[currentRecursion] selects one of the divs out of the jQuery collection as an HTML element.
      //addHtml requires an HTML element, not a string like fromHTML.
      pdf.fromHTML($('.myDivClass')[currentRecursion], 15, 20, options, function(){
        console.log(currentRecursion);
        recursiveAddHtmlAndSave(currentRecursion, totalRecursions);
      });
    }
  }
  pdf.fromHTML($('.myDivClass')[currentRecursion], 15, 20, options, function(){
    recursiveAddHtmlAndSave(currentRecursion, numRecursionsNeeded);
  });
}
PS: I was trying to modify $scope.exportUsingHTML2PDF so that it gives the same output as is generated when the "Export using JSPDF" button is clicked, which calls the function $scope.exportUsingJSPDF.
The problem lies in your exportUsingHTML2PDF function: you need to pass the HTML element to the html2pdf function. Manage the page CSS based on your needs.
EDIT: You have the wrong library. Please check the html2pdf.js library within the plunker.
Working plunker: html2pdf
$scope.exportUsingHTML2PDF = function() {
  var element = document.getElementById('element-to-print');
  html2pdf(element, {
    margin: 1,
    filename: 'myfile.pdf',
    image: {
      type: 'jpeg',
      quality: 0.98
    },
    html2canvas: {
      dpi: 192,
      letterRendering: true
    },
    jsPDF: {
      unit: 'in',
      format: 'letter',
      orientation: 'portrait'
    }
  });
}
With JSPDF and HTML2PDF, you have to get used to two fundamentally different coding styles:
JSPDF: imperative (javascript statements)
HTML2PDF: declarative (directives embedded in HTML)
So for page breaks:
JSPDF: pdf.addPage();
HTML2PDF: <div class="html2pdf__page-break"></div>
That should work; however, HTML2PDF is buggy and gives a "Supplied data is not a JPEG" error when <div class="html2pdf__page-break"></div> is included (at least it does for me, in Plunker), despite this being exactly what the documentation tells us to do.
I haven't got time to debug it. You'll need to do some research. Someone will have posted a solution somewhere on the web.
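For reference, the declarative setup described above would look roughly like this (same options as the earlier html2pdf call; the markup is purely illustrative):
// Markup being exported (illustrative):
//   <div id="element-to-print">
//     <div>First page content</div>
//     <div class="html2pdf__page-break"></div>
//     <div>Second page content</div>
//   </div>
var element = document.getElementById('element-to-print');
html2pdf(element, {
  margin: 1,
  filename: 'myfile.pdf',
  image: { type: 'jpeg', quality: 0.98 },
  html2canvas: { dpi: 192, letterRendering: true },
  jsPDF: { unit: 'in', format: 'letter', orientation: 'portrait' }
});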

Convert a web page to a PDF with styles

I want to convert a web page to a PDF, including all the styles I've added.
I've used jspdf.js and html2canvas, but I got only a blurry picture in the PDF.
I've searched through many JavaScript examples but didn't find a correct one.
The script I have written is:
function getPDF() {
  html2canvas(document.body, {
    onrendered: function(canvas) {
      var img = canvas.toDataURL("image/jpeg");
      //window.open(img);
      var doc = new jsPDF({
        unit: 'px',
        format: 'a4'
      });
      doc.addImage(img, 'JPEG', 0, 0, 440, 627);
      doc.save("download");
    }
  });
}
Thank You!
There may be a few things contributing.
Your encoder options for canvas.toDataURL are missing, so you're getting the defaults, which may produce a low-quality image. Try this for the highest-quality JPEG image:
var img = canvas.toDataURL("image/jpeg", 1.0);
You might also try creating your jsPDF object with measurements, rather than pixels:
var pdf = new jsPDF("p", "mm", "a4"); // jsPDF(orientation, units, format)
And when you add the image, scale it to the dimensions of the page:
pdf.addImage(imgData, 'JPEG', 10, 10, 190, 277); // 190x277 mm @ (10,10) mm
See if that gives you a better image quality.
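Putting the three suggestions together, a rough combined sketch (using the same values as the snippets above, not tuned for any particular page):
function getPDF() {
  html2canvas(document.body, {
    onrendered: function(canvas) {
      // Highest-quality JPEG instead of the encoder defaults
      var imgData = canvas.toDataURL("image/jpeg", 1.0);
      // Millimetre-based A4 document: jsPDF(orientation, units, format)
      var pdf = new jsPDF("p", "mm", "a4");
      // Scale the capture to the page: 190x277 mm at a (10,10) mm offset
      pdf.addImage(imgData, 'JPEG', 10, 10, 190, 277);
      pdf.save("download.pdf");
    }
  });
}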

Cropping a profile picture in Node JS

I want the user on my site to be able to crop an image that they will then use as a profile picture. I then want to store this image in an uploads folder on my server.
I've done this using PHP and the JCrop plugin, but I've recently started to change the framework of my site to use Node.js.
This is how I allowed the user to crop an image before, using JCrop:
$("#previewSub").Jcrop({
onChange: showPreview,
onSelect: showPreview,
aspectRatio: 1,
setSelect: [0,imgwidth+180,0,0],
minSize: [90,90],
addClass: 'jcrop-light'
}, function () {
JcropAPI = this;
});
and I would use PHP to store it in a folder:
<?php
$targ_w = $targ_h = 300;
$jpeg_quality = 90;
$img_r = imagecreatefromjpeg($_FILES['afile']['tmp_name']);
$dst_r = ImageCreateTrueColor( $targ_w, $targ_h );
imagecopyresampled($dst_r,$img_r,0,0,$_POST['x'],$_POST['y'],
$targ_w,$targ_h,$_POST['w'],$_POST['h']);
header("Content-type: image/jpeg");
imagejpeg($dst_r,'uploads/sample3.jpg', $jpeg_quality);
?>
Is there an equivalent plugin to JCrop, as shown above, for Node.js? There are probably multiple; if there are, which ones would you recommend? Any simple examples are appreciated too.
EDIT
Because the question is not getting any answers, perhaps it is possible to keep the JCrop code above and just change my PHP code to Node.js. If this is possible, could someone show me how to translate my PHP code? What would be the equivalent of the PHP above?
I am very new to Node, so I'm having a difficult time finding the equivalent functions and whatnot.
You could send the raw image (the original user input file) and the cropping parameters produced by JCrop.
Send the image encoded in base64 (a string); when it is received by the server, store it as a Buffer:
var img = Buffer.from(img_string, 'base64');
ImageMagick docs (inline: working with base64): http://www.imagemagick.org/Usage/files/#inline
Then the server has the image in a buffer and the cropping parameters.
From there you could use something like https://github.com/aheckmann/gm or https://github.com/rsms/node-imagemagick to apply the modifications to the buffered image and then store the result in the file system; a rough sketch with gm follows below.
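For example, a sketch with the gm package (assuming ImageMagick or GraphicsMagick is installed on the server, and that x, y, w and h are the cropping values JCrop sent along, as in the PHP version):
var gm = require('gm');

// img_string is the base64 payload received from the client
var img = Buffer.from(img_string, 'base64');

gm(img, 'upload.jpg')
  .crop(w, h, x, y)          // width, height, x offset, y offset from JCrop
  .resize(300, 300)          // optional: normalise to 300x300 like the PHP code
  .write('uploads/sample3.jpg', function (err) {
    if (err) { /* error cropping/saving image */ }
  });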
You have other options, like manipulating the image client-side and sending the encoded result of the cropping.
EDIT: First, read and encode the image when the user uses the input:
$('body').on("change", "input#selectImage", function(){readImage(this);});
function readImage(input) {
if ( input.files && input.files[0] ) {
var FR = new FileReader();
FR.onload = function(e) {
console.log(e.target.result);
// Display in <img> using the b64 string as src
$('#uploadPreview').attr( "src", e.target.result );
// Send the encoded image to the server
socket.emit('upload_pic', e.target.result);
};
FR.readAsDataURL( input.files[0] );
}
}
Then, when it is received at the server, use the Buffer as mentioned above:
var matches = img.match(/^data:([A-Za-z-+\/]+);base64,(.+)$/), response = {};
if (!matches || matches.length !== 3) { /* invalid string! */ }
else {
  var filename = 'filename';
  var file_ext = '.png';
  response.type = matches[1];
  response.data = Buffer.from(matches[2], 'base64');
  var saveFileAs = 'storage-directory/' + filename + file_ext;
  fs.unlink(saveFileAs, function() {
    fs.writeFile(saveFileAs, response.data, function(err) {
      if (err) { /* error saving image */ }
    });
  });
}
I would personally send the encoded image once it has been edited client-side.
The server simply validates and saves the file; let the client do the extra work.
As promised, here is how I used darkroom.js with one of my Express projects.
//Location to store the image
var multerUploads = multer({ dest: './uploads/' });
I upload the image first and then allow the user to crop it. This is because I would like to keep the original image; hence the upload Jade below:
form(method='post', action='/user/image/submit', enctype='multipart/form-data')
  input(type='hidden', name='refNumber', value='#{refNumber}')
  input(type='file', name='photograph' accept='image/jpeg,image/png')
  br
  input(type='submit', value='Upload image', data-role='button')
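The route handler for this initial upload is not shown here; a hypothetical sketch (assuming multer 1.x, where the multer instance exposes .single(), and a redirect target of my own invention) might look like this:
// Hypothetical: receive the original upload from the form above.
router.post('/user/image/submit', multerUploads.single('photograph'), function(req, res) {
  // req.file.path points at the stored original under ./uploads/
  res.redirect('/user/image/crop?refNumber=' + req.body.refNumber);
});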
Here is the form I use to crop the image:
//- figure needed for darkroom.js
figure(class='image-container', id='imageContainer')
  //- specify the source from where it should load the uploaded image
  img(class='targetImg img-responsive', src='/user/image/view/#{photoName}', id='target')
form(name='croppedImageForm', method='post', enctype='multipart/form-data', id='croppedImageForm')
  input(type='hidden', name='refNumber', id='refNumber', value='#{refNumber}')
  input(type='hidden', id='encodedImageValue', name='croppedImage')
  br
  input(type='submit', value='Upload Cropped Image', id='submitCroppedImage' data-role='button')
Darkroom.js is attached to the figure element using this piece of JavaScript:
new Darkroom('#target', {
  // Canvas initialization size
  minWidth: 160,
  minHeight: 160,
  maxWidth: 900,
  maxHeight: 900,
});
Once you follow STEP 1, STEP 2 and finally STEP 3, the base64 value of the cropped region is stored under the figure element; see the console log shown in the screenshot below.
I then have a piece of JavaScript that is triggered when Upload Cropped Image is clicked; it copies the base64 value of the img inside the figure into the input element with id encodedImageValue and then submits it to the server. The JavaScript function is as follows:
$("#submitCroppedImage").click(function() {
var img = $('#imageContainer img');
var imgSrc = img.attr('src');
if(imgSrc !== undefined && imgSrc.indexOf('base64') > 0) {
$('#encodedImageValue').val(img.attr('src'));
$.ajax({
type: "POST",
url: "/user/image/cropped/submit",
data: $('#croppedImageForm').serialize(),
success: function(res, status, xhr) {
alert('The CROPPED image is UPLOADED.');
},
error: function(xhr, err) {
console.log('There was some error.');
}
});
} else {
alert('Please follow the steps correctly.');
}
});
Here is a screenshot of the POST request with the base64 field as its body.
The POST request is mapped to the following route handler in the Express app:
router.post('/user/image/cropped/submit',
  multerUploads,
  function(req, res) {
    var photoName = null;
    var refNumber = req.body.refNumber;
    var base64Data = req.body.croppedImage.replace(/^data:image\/png;base64,/, "");
    fs.writeFile("./uploads/cropped-" + 'profile_image.png', base64Data, 'base64',
      function(err) {
        logger.info("Saving image to disk ...");
        res.status(200).send();
      });
  });
I have the following .js files relating to the awesome Fabric.js and darkroom.js:
script(src='/static/js/jquery.min.js')
script(src='/static/js/bootstrap.min.js')
//get the js files from darkroom.js github
script(src='/static/js/fabric.js')
script(src='/static/js/darkroom.js')
script(src='/static/js/plugins/darkroom.crop.js')
script(src='/static/js/plugins/darkroom.history.js')
script(src='/static/js/plugins/darkroom.save.js')
link(href='/static/css/darkroom.min.css', rel='stylesheet')
link(href='https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.2/css/bootstrap.css', rel='stylesheet')
//get this from darkroom.js github
link(href='/static/css/page.css', rel='stylesheet')
Lastly, also copy the SVG icons for selecting, cropping, saving, etc. (from the darkroom.js GitHub page).
