I am trying to create an Electron JS app whose purpose is to print letter-size PDFs.
This is my snippet of code for printing:
win = new BrowserWindow({
  width: 378,
  height: 566,
  show: true,
  webPreferences: {
    webSecurity: false,
    plugins: true
  }
});

// load PDF
win.loadURL('file://' + __dirname + '/header1_X_BTR.pdf');

// if the PDF is loaded, start printing
win.webContents.on('did-finish-load', () => {
  win.webContents.print({ silent: true, printBackground: true });
});
My issues are: if I have print({silent: true}), my printer prints an empty page. If I have print({silent: false}), the printer prints the page as it appears in the screenshot, with headers, controls, etc.
I need a silent print of the PDF content, and I haven't been able to get it working for days. Has anyone run into the same thing with Electron?
If you already have the PDF file, or you save the PDF before printing (I assume you do), then you can grab the file location and use an external process to do the printing via child_process.
You can use the lp command on macOS/Linux, or PDFtoPrinter on Windows.
const ch = require('child_process');

switch (process.platform) {
  case 'darwin':
  case 'linux':
    // print via the lp command on macOS/Linux
    ch.exec('lp ' + pdf.filename, (e) => {
      if (e) {
        throw e;
      }
    });
    break;
  case 'win32':
    // print via PDFtoPrinter (aliased here as "ptp") on Windows
    ch.exec('ptp ' + pdf.filename, {
      windowsHide: true
    }, (e) => {
      if (e) {
        throw e;
      }
    });
    break;
  default:
    throw new Error('Platform not supported.');
}
I hope it helps.
Edit:
You can also use SumatraPDF for Windows: https://github.com/sumatrapdfreader/sumatrapdf
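For the SumatraPDF route, here is a minimal sketch, assuming SumatraPDF.exe is on the PATH (otherwise use its full install path) and that its -print-to-default and -silent switches behave as its documentation describes:

// Hedged sketch: silent printing on Windows via SumatraPDF.
// pdfPath is whatever file you want to print.
const { exec } = require('child_process');

function printWithSumatra(pdfPath) {
  exec('SumatraPDF.exe -print-to-default -silent "' + pdfPath + '"', {
    windowsHide: true
  }, (err) => {
    if (err) {
      console.error('Silent print failed:', err);
    }
  });
}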
The easiest way to do this is to render the PDF pages to individual canvas elements on a page using PDF.js and then call print.
I fixed this gist to use the PDF.js version (v1) it was designed for, and it's probably a good starting point.
This is essentially what the Electron/Chrome PDF viewer is doing, but now you have full control over the layout!
<html>
  <body>
    <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/pdf.js/1.10.90/pdf.js"></script>
    <script type="text/javascript">
      function renderPDF(url, canvasContainer, options) {
        var options = options || { scale: 1 };

        function renderPage(page) {
          var viewport = page.getViewport(options.scale);
          var canvas = document.createElement('canvas');
          var ctx = canvas.getContext('2d');
          var renderContext = {
            canvasContext: ctx,
            viewport: viewport
          };
          canvas.height = viewport.height;
          canvas.width = viewport.width;
          canvasContainer.appendChild(canvas);
          page.render(renderContext);
        }

        function renderPages(pdfDoc) {
          for (var num = 1; num <= pdfDoc.numPages; num++)
            pdfDoc.getPage(num).then(renderPage);
        }

        PDFJS.disableWorker = true;
        PDFJS.getDocument(url).then(renderPages);
      }
    </script>
    <div id="holder"></div>
    <script type="text/javascript">
      renderPDF('//cdn.mozilla.net/pdfjs/helloworld.pdf', document.getElementById('holder'));
    </script>
  </body>
</html>
I'm facing the same issue. It appears that printing a PDF to a printer is just not implemented in Electron, even though it has been requested since 2017. Here are a related question on SO and the feature request on GitHub:
Silent printing in electron
Support printing in native PDF rendering
One possible solution might be to use Google PDFium and a wrapping Node.js library, which appears to allow conversion from a PDF to a set of EMFs, so the EMFs can be printed to a local/network printer, at least on Windows.
As another viable option, this answer provides a simple C# solution for PDF printing using PdfiumViewer, which is a PDFium wrapper library for .NET.
I'm still looking at other options. Utilizing a locally installed instance of Acrobat Reader for printing is not an acceptable solution for us.
UPDATED. For now, PDF.js solves the problem of rendering/previewing individual pages, but as to the printing itself, Electron (at the time of this posting) just lacks the proper printing APIs. E.g., you can't set paper size, landscape/portrait mode, etc. Moreover, when printing, PDF.js produces rasterized printouts (thanks to how the HTML5 canvas works), unlike the Chrome PDF Viewer. Here is a discussion of some other PDF.js shortcomings.
So for now I think we might go with a combination of PDF.js (for the UI in Electron's Renderer process) and PDFium (for the actual printing from the Main process).
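Not a final design, just a hedged sketch of how that split might be wired up: the renderer shows the PDF.js preview and asks the main process to print, and the main process hands the file off to whatever PDFium-based tool you settle on (the tool name below is a hypothetical placeholder):

// main process -- hedged sketch; the PDFium print tool is a placeholder
const { ipcMain } = require('electron');
const { execFile } = require('child_process');

ipcMain.on('print-pdf', (event, pdfPath) => {
  // Replace 'your-pdfium-print-tool' with whatever PDFium-based
  // CLI or binding you end up choosing (hypothetical here).
  execFile('your-pdfium-print-tool', [pdfPath], (err) => {
    event.reply('print-pdf-done', err ? err.message : null);
  });
});

// renderer process -- after PDF.js has rendered the preview
const { ipcRenderer } = require('electron');
ipcRenderer.send('print-pdf', '/path/to/file.pdf');
ipcRenderer.on('print-pdf-done', (event, error) => {
  console.log(error ? 'print failed: ' + error : 'print job handed off');
});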
Based on Tim's answer, here's a version of the PDF.js renderer using ES8 async/await (supported as of the current version of Electron):
async function renderPDF(url, canvasContainer, options) {
  options = options || { scale: 1 };

  async function renderPage(page) {
    let viewport = page.getViewport(options.scale);
    let canvas = document.createElement('canvas');
    let ctx = canvas.getContext('2d');
    let renderContext = {
      canvasContext: ctx,
      viewport: viewport
    };
    canvas.height = viewport.height;
    canvas.width = viewport.width;
    canvasContainer.appendChild(canvas);
    await page.render(renderContext);
  }

  let pdfDoc = await pdfjsLib.getDocument(url);
  for (let num = 1; num <= pdfDoc.numPages; num++) {
    if (num > 1) {
      // page separator
      canvasContainer.appendChild(document.createElement('hr'));
    }
    let page = await pdfDoc.getPage(num);
    await renderPage(page);
  }
}
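A possible usage sketch (my addition, not part of the original answer): render into a container and then trigger printing once everything is on the page. The element id and file path are just examples:

// Hedged usage sketch for the renderPDF helper above.
renderPDF('file:///path/to/file.pdf', document.getElementById('holder'))
  .then(() => {
    // At this point every page is a <canvas> in #holder, so printing the
    // window prints the rendered PDF. In Electron you could instead ask
    // the main process to call webContents.print({ silent: true }).
    window.print();
  });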
Since you are using contents.print([options], [callback]), I will assume that you want to print on paper and not to disk.
The answer to your issue is simple: it is the event you are listening on that is causing the error. So if you simply do this:
winObject.webContents.on('did-frame-finish-load', () => {
  setTimeout(() => { winObject.webContents.print({ silent: true, printBackground: true }); }, 3000);
});
everything will work fine if the default printer is the right one. I did test this and it will do its job, more or less. You can change my event to whatever event you like; the important part is the waiting with setTimeout. The PDF you are trying to print is simply not yet available in the frame when using silent: true.
However, let me go into a bit of detail here to make things clear:
Electron will load files or URLs into a created window (BrowserWindow), which is bound to events. The problem is that every event "can" behave differently on different systems.
We have to live with that and cannot change this easily. But knowing this will help improve the development of custom apps.
If you load URLs or HTML, everything will work without setting any custom options. Using PDFs as a source, we have to use this:
import electron, { BrowserWindow } from 'electron';

const win = new BrowserWindow({
  // #NOTE I kept the standard options out of this.
  webPreferences: { // You need these options to load PDFs
    plugins: true   // this enables PDFs to be used as a source instead of just being downloaded.
  }
});
Hint: without webPreferences: { plugins: true }, your source PDF will be downloaded instead of loaded into the window.
That said, you will load your PDF into the webContents of your window. So we have to listen for events compatible with BrowserWindow. You did everything right; the only part you missed was that printing is another interface.
Printing will capture your webContents as it is the moment you press "print". This is very important to know when working with printers, because if something takes slightly longer to load on a different system, for example if the PDF viewer is still dark grey without the letters, then your printout will show the dark grey background or even the buttons.
That little issue is easily fixed with setTimeout().
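If you also want feedback on whether the silent print actually went through, contents.print accepts a callback. A hedged sketch (the 3000 ms delay is still just a guess that the viewer has finished painting, and the (success, failureReason) callback signature is only available in recent Electron versions):

// Hedged sketch: keep the delay, but also check the print callback.
winObject.webContents.on('did-frame-finish-load', () => {
  setTimeout(() => {
    winObject.webContents.print({ silent: true, printBackground: true }, (success, failureReason) => {
      if (!success) {
        console.error('Silent print failed:', failureReason);
      }
    });
  }, 3000);
});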
Useful Q&A for printing with electron:
Silent printing in electron
Print: How to stick footer on every page to the bottom?
However, there are a lot more possible issues with printing, since most of the code is behind closed doors without public APIs to use. Just keep in mind that every printer can behave differently, so testing on more machines will help.
This is 2021 and here is the simplest way ever.
Let's begin
First of all, install pdf-to-printer:
npm install --save pdf-to-printer
Import the library into your file:
const ptp = require('pdf-to-printer'); // something like this
Then call the method in your function:
ptp.print('specify your route/url');
It should work!
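A slightly fuller sketch, assuming the package's promise-based print(filePath, options) API; the file path and printer name below are placeholders:

const ptp = require('pdf-to-printer');

// Hedged sketch: print a PDF, optionally to a specific printer.
// The 'printer' option follows pdf-to-printer's documented options.
ptp.print('C:/docs/letter.pdf', { printer: 'HP LaserJet' })
  .then(() => console.log('sent to printer'))
  .catch((err) => console.error('print failed:', err));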
So it seems like you're trying to download the PDF file rather than print a PDF of the current screen, which is what print tries to do. As such, you have a couple of options.
1) Disable the native pdf viewer in electron:
If you don't care about the electron window displaying the pdf, disabling the native pdf viewer in electron should instead cause it to treat the file as a download and attempt to download it.
new BrowserWindow({
  webPreferences: {
    plugins: false
  }
})
You may also want to check out Electron's DownloadItem API to do some manipulation on where the file will be saved.
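For instance, a hedged sketch using the session's will-download event (the target folder below is a placeholder):

const { session } = require('electron');
const path = require('path');

// Hedged sketch: intercept downloads and pick the save location yourself,
// so no "Save As" dialog is shown.
session.defaultSession.on('will-download', (event, item) => {
  item.setSavePath(path.join(__dirname, 'downloads', item.getFilename()));
  item.once('done', (e, state) => {
    console.log(state === 'completed' ? 'saved' : 'download failed');
  });
});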
2) Download the PDF through some other API
I'm not gonna give any specifics for this one because you should be able to find some information on this yourself, but basically if you want to download the file from somewhere, you can use some other download API, like an AJAX library, to download the file and save it somewhere. This would potentially allow you to render the document in an Electron window as well, since once you initiate the download you can probably redirect the window to the PDF URL and have the native viewer handle it.
Long story short, it sounds to me like you don't really want to print from Electron; you just want to save the PDF file that you're displaying. Printing from Electron will render what you see on the screen, not the PDF document itself, so I think you just misunderstood what the goal of print was. Hopefully this helps you, good luck!
=== EDIT ===
Unfortunately, I don't believe there is a way to print the file directly from Electron, since Electron printing is for printing the contents of Electron's display. But you should be able to download the file via a simple request for the file (see above).
My recommendation would be to create a page for previewing the file. This would be an independent page, not the built-in PDF viewer. You can then insert a button somewhere on the page to download the PDF via some means and skip any save-location prompts (this should be easy enough to find documentation for).
Then, in order to have your preview, on the same page you can have a webview tag, which will display the native PDF viewer. For the native PDF viewer to work in the webview tag, you must include the plugins attribute in the tag. It's a boolean attribute, so its mere presence is all that is needed, such as <webview ... plugins>. This turns on plugin support for that webview's renderer, which is required for the PDF viewer.
You can modify the size and styling of this tag on the page as you wish to suit your needs. A trick to get rid of the download and print options, so that a user cannot press them, is to append #toolbar=0 to the end of the PDF URL; this prevents the native PDF viewer from displaying the top toolbar with those buttons.
So, this way you can have your preview, ensure that the user can't use the built-in download or print from the PDF viewer's extra UI, and you can add another button to download it so it can be printed later.
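A hedged sketch of what that preview markup might look like (the PDF path and sizes are placeholders):

<!-- Hedged sketch: native PDF preview inside a <webview>.
     "plugins" enables the built-in viewer; #toolbar=0 hides its toolbar. -->
<webview src="file:///path/to/file.pdf#toolbar=0"
         style="width: 100%; height: 600px;"
         plugins></webview>

<!-- Your own download button; wire it to whatever download flow you chose. -->
<button id="download-pdf">Download PDF</button>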
I'm using the exif-js library to extract the orientation from images uploaded to my web app.
I need the EXIF orientation to rotate incorrectly rotated Android images.
The problem is that images uploaded from an Android device always return 0 as their orientation.
I've tried transferring an image taken on the same Android device to a desktop and uploading it from there; everything works fine in that case and I get orientation 6.
localforage.getItem('photo').then(function(value) {
  alert('inside forage');
  image = value;
  alert(image); // i get the image object
  url = window.URL.createObjectURL(image);
  alert(url); // url is correct
  let preview = document.getElementById('camera-feed');
  preview.src = url;
  // const tags = ExifReader.load(image);
  // console.log(tags); // commented out along with the line above
  EXIF.getData(image, function() {
    myData = this;
    if (myData.exifdata.Orientation) {
      orientation = parseInt(EXIF.getTag(this, "Orientation"));
      alert(orientation); // on desktop 6, on android always 0
    }
  });
  ....
I'm using the Chrome browser on Android.
After a lot of changes in my project, I used this library to handle image rotation on the frontend.
I know you already solved this, but I would still like to recommend the exifr library if you need more than orientation, or if performance is important to you. Exifr is fast and can handle hundreds of photos without crashing the browser or taking a long time :). Plus there's a neat, simple API, and you can feed it pretty much anything: element, URL, Buffer, ArrayBuffer, even a base64 URL or string, and more.
exifr.orientation(file).then(val => {
  console.log('orientation:', val)
})
I want to redirect the user to a different webpage after they click a hyperlink which allows them to download a file. However, since they need to make a choice in the open/save file dialog, I don't want to redirect them until they accept the download.
How can I detect that they performed this action?
As I've found from years of maintaining download.js, there simply is no way to tell from JS (or likely in general, see below) what a user chooses to do with the download Open/Save dialog. It's a common feature request, and I've looked into it repeatedly over the years. I can say with confidence that it's impossible; I'll joyfully pay 10 times this bounty if someone can demo a mechanical way to determine the post-prompt user action on any file!
Further, it's not really just a matter of JS rules; the problem is complicated by the way browsers download and prompt for such files. This means that even servers can't always tell what happened. There might be some specific work-arounds for a few specific cases, but they are not pretty or simple.
You could force your users to "re-upload" a downloaded file with an <input type=file> to validate it, but that's cumbersome at best, and the local file browse dialog could be alarming to some. It's the only sure-fire method to ensure a download, but for non-sensitive applications it's very draconian, and it won't work on some "mobile" platforms that lack file support.
You might also try watching from the server side, pushing a message to the client that the file was hit on the server. The problem here is that downloads start downloading as soon as the Open/Save dialog appears, though invisibly in the background. That's why if you wait a few moments to "accept" a large file, it seems to download quickly at first. From the server's perspective, the activity is the same regardless of what the user does.
For a huge file, you could probably detect that the whole file was not transferred, which implies the user clicked "cancel", but it's a complicated syncing procedure pushing the status from the backend to the client. It would require a lot of custom programming with sockets, PHP message passing, EventSource, etc., for little gain. It's also a race against time, and an uncertain amount of time at that; and slowing down the download is not recommended for user satisfaction.
If it's a small file, it will physically download before the user even sees the dialog, so the server will be useless. Also consider that some download manager extensions take over the job, and they are not guaranteed to behave the same as a vanilla browser. Forcing a wait can be treacherous to someone with a slow hard drive that takes "forever" to "finish" a download; we've all experienced this, and not being able to continue while the "spinny" winds down would lower user satisfaction, to put it mildly.
In short, there's no simple way, and really no way in general, except for huge files you know will take a long time to download. I've spent a lot of blood, sweat, and tears trying to provide my download.js users the ability, but there are simply no good options. Ryan Dahl initially wrote Node.js so he could provide his users an upload progress bar; maybe someone will make a server/client package to make it easy to do the same for downloads.
Here is a hacky solution.
My StreamSaver lib doesn't use blobs to download a file with a[download]. It uses a service worker to stream something to the disk by emulating how the server handles downloads with a content-disposition attachment header.
evt.respondWith(
  new Response(
    new ReadableStream({...})
  )
)
Now you don't have any exact way of knowing what the user pressed in the dialog, but you have some information about the stream. If the user presses cancel in the dialog or aborts the ongoing download, then the stream gets aborted too.
The save button is trickier. But let's begin with what a stream bucket's highWaterMark can tell us.
In my torrent example I log writer.desiredSize. It correlates to how much data the stream is willing to receive. When you write something to the stream, it lowers the desired size (whether it uses a count or a byte strategy). If it never increases, then the user has probably paused the download. When it goes below 0, you are writing more data than the user is asking for.
And every chunk write you do returns a promise
writer.getWriter().write(uint8).then(() => {
  // The chunk has been sent to the destination bucket
  // and desiredSize increases again
})
That promise will resolve when the bucket isn't full. But it does not mean that the chunk has been written to disk yet; it only means that the chunk has been passed from one stream to another (from write -> readable -> respondWith), and it will often do so at the beginning of the stream and whenever an earlier chunk has been written to disk.
It's possible that the write stream finishes even before the user makes a choice, if the whole data can fit within the bucket (memory).
Tweaking the bucket size to be lower than the data size can help.
So you can make assumptions about when the download starts, finishes, and pauses, but you won't know for sure, since you don't get any events (apart from the abort that closes the stream).
Note that the torrent example doesn't show the correct size if you don't have support for Transferable streams, but you could get around this if you do everything inside a service worker (instead of doing it in the main thread).
Detecting when the stream finishes is as easy as
readableStream.pipeTo(fileStream).then(done)
And for future reference, WICG/native-file-system might give you access to write files to disk, but it has to resolve a prompt dialog promise before you can continue, and that might be just what the user is asking for.
There are examples of saving a blob as a stream, and even multiple blobs as a zip, too, if you are interested.
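For completeness, a hedged sketch of the blob-saving case with StreamSaver.js; the API names follow the library's README as I recall them, and the blob contents are just an example:

// Hedged sketch: stream a Blob to disk with StreamSaver.js.
const blob = new Blob(['hello world'], { type: 'text/plain' });
const fileStream = streamSaver.createWriteStream('hello.txt', { size: blob.size });

// In modern browsers Blob#stream() returns a ReadableStream we can pipe.
blob.stream().pipeTo(fileStream)
  .then(() => console.log('stream finished (file saved or buffered)'))
  .catch(() => console.log('stream aborted (user cancelled or an error occurred)'));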
Given that the user is, or should be, aware that the file should be downloaded before the next step in the process, the user should expect some form of confirmation that the file has been downloaded.
You can create a unique identifier or timestamp to include within the downloaded file name by utilizing an <a> element with the download attribute set to the modified file name.
At the click event of a <button> element, call .click() on the <a> element with href set to a Blob URL of the file. In the <a> element's click handler, call .click() on an <input type="file"> element, where at the attached change event the user should select the same file which was downloaded at the user action that started the download of the file.
Note the chaining of calls to .click() beginning with the user action. See Trigger click on input=file on asynchronous ajax done().
If the file selected from the user's filesystem is equal to the modified downloaded file name, call a function; else notify the user that the file download has not been confirmed.
window.addEventListener("load", function() {
let id, filename, url, file;
let confirmed = false;
const a = document.querySelector("a");
const button = document.querySelector("button");
const confirm = document.querySelector("input[type=file]");
const label = document.querySelector("label");
function confirmDownload(filename) {
if (confirmed) {
filename = filename.replace(/(-\d+)/, "");
label.innerHTML = "download of " + filename + " confirmed";
} else {
confirmed = false;
label.innerHTML = "download not confirmed";
}
URL.revokeObjectURL(url);
id = url = filename = void 0;
if (!file.isClosed) {
file.close()
}
}
function handleAnchor(event) {
confirm.click();
label.innerHTML = "";
confirm.value = "";
window.addEventListener("focus", handleCancelledDownloadConfirmation);
}
function handleFile(event) {
if (confirm.files.length && confirm.files[0].name === filename) {
confirmed = true;
} else {
confirmed = false;
}
confirmDownload(filename);
}
function handleDownload(event) {
// file
file = new File(["abc"], "file.txt", {
type: "text/plain",
lastModified: new Date().getTime()
});
id = new Date().getTime();
filename = file.name.match(/[^.]+/g);
filename = filename.slice(0, filename.length - 1).join("")
.concat("-", id, ".", filename[filename.length - 1]);
file = new File([file], filename, {
type: file.type,
lastModified: id
});
a.download = filename;
url = URL.createObjectURL(file);
a.href = url;
alert("confirm download after saving file");
a.click();
}
function handleCancelledDownloadConfirmation(event) {
if (confirmed === false && !confirm.files.length) {
confirmDownload(filename);
}
window.removeEventListener("focus", handleCancelledDownloadConfirmation);
}
a.addEventListener("click", handleAnchor);
confirm.addEventListener("change", handleFile);
button.addEventListener("click", handleDownload);
});
<button>download file</button>
<a hidden>download file</a>
<input type="file" hidden/>
<label></label>
plnkr http://plnkr.co/edit/9NmyiiQu2xthIva7IA3v?p=preview
jquery.fileDownload allows you to do this:
$(document).on("click", "a.fileDownloadPromise", function () {
$.fileDownload($(this).prop('href'))
.done(function () { alert('File download a success!'); })
.fail(function () { alert('File download failed!'); });
return false;
});
Take a look at Github:
https://github.com/johnculviner/jquery.fileDownload
I had a project I dabbled in recently that required me to specify whether a user could upload a particular kind of file, i.e. a user can upload a PNG but not a PDF.
I may not have used the most efficient method, but ultimately what I did was code a small, built-in "webapp" that functioned as a file browser, for upload or download.
I suppose the closest example without releasing my "secret project" would be https://encodable.com/filechucker/
Maybe you could write a simple integrated file browser such as the ones cloud services sometimes use (e.g. Dropbox) and have some functions that detect input with custom boxes and stuff.
Just a few thoughts.
window.showSaveFilePicker from the File System Access API does what you want, but unfortunately it's currently supported only by Chrome and Edge. It returns a promise: if the user chooses to download the file, the promise is resolved; if they cancel, an AbortError is raised.
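A minimal sketch of that flow (the suggested file name and the blob passed in are just examples):

// Hedged sketch: detect the user's choice with the File System Access API
// (Chrome/Edge only at the time of writing).
async function saveWithConfirmation(blob) {
  try {
    const handle = await window.showSaveFilePicker({ suggestedName: 'report.pdf' });
    const writable = await handle.createWritable();
    await writable.write(blob);
    await writable.close();
    console.log('user saved the file'); // safe to redirect here
  } catch (err) {
    if (err.name === 'AbortError') {
      console.log('user cancelled the save dialog');
    } else {
      throw err;
    }
  }
}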
try this:
<html>
  <head>
    <script type="text/javascript">
      function Confirmation(pg) {
        var res = confirm("Do you want to download?");
        if (res) {
          window.open(pg, "_blank");
        }
        return res;
      }
    </script>
  </head>
  <body>
    <!-- placeholder link: point Confirmation() at your actual file URL -->
    <a href="#" onclick="return Confirmation('your-file.pdf');">download</a>
  </body>
</html>
I have a website which uses the Facebook comments plugin. I'm looking for a way to include those comments in a screenshot. If I use plain html2canvas I get a blank box instead of them. So I tried using html2canvasproxy, but now it prints some JavaScript console log output instead of the Facebook comments.
It should include the comments, but it doesn't. I noticed that html2canvasproxy.php saves the Facebook plugin HTML correctly.
I can't find any JavaScript errors in the console log.
I'm using the following code to take the screenshot:
html2canvas(document.body, {
  "logging": true, // Enable log (use the Web Console to get errors and warnings)
  "proxy": "js/html2canvasproxy.php",
  "onrendered": function(canvas) {
    var img = new Image();
    img.onload = function() {
      img.onload = null;
      document.body.appendChild(img);
    };
    img.onerror = function() {
      img.onerror = null;
      if (window.console.log) {
        window.console.log("Not loaded image from canvas.toDataURL");
      } else {
        alert("Not loaded image from canvas.toDataURL");
      }
    };
    img.src = canvas.toDataURL("image/png");
  }
});
And I have these settings in html2canvasproxy.php:
//Turn off errors because the script itself already uses "error_get_last"
error_reporting(0);
//setup
define('JSLOG', 'console.log'); //Configure alternative log function, e.g. console.log, alert, custom_function
define('PATH', '../screenshots');//relative folder where the images are saved
define('CCACHE', 60 * 5 * 1000);//Limit access-control and cache; set 0/false/null/-1 to not use "http header cache"
define('TIMEOUT', 30);//Timeout for loading the socket
define('MAX_LOOP', 10);//Configure loop limit for redirects (location header)
define('CROSS_DOMAIN', 0);//Enable use of "data URI scheme"
//constants
define('EOL', chr(10));
define('WOL', chr(13));
define('GMDATECACHE', gmdate('D, d M Y H:i:s'));
The first idea I got while reading is to include some timeout, waiting a bit longer (say 200 ms), so that there is a better chance that things have loaded.
But after reading this on the plugin site, it may not help: "The script allows you to take "screenshots" of webpages or parts of it, directly on the users browser. The screenshot is based on the DOM and as such may not be 100% accurate to the real representation as it does not make an actual screenshot, but builds the screenshot based on the information available on the page."
Personally I would investigate using another solution, for example PhantomJS:
"PhantomJS is a headless WebKit scriptable with a JavaScript API. It has fast and native support for various web standards: DOM handling, CSS selector, JSON, Canvas, and SVG."
It's as easy as this:
var page = require('webpage').create();
page.open('http://github.com/', function() {
  page.render('github.png');
  phantom.exit();
});
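If the comments load asynchronously, you might combine the two ideas: open the page, wait a bit, then render. A hedged sketch (the URL is a placeholder and the 3000 ms delay is a guess):

// Hedged sketch: give the Facebook comments iframe time to load
// before rendering the screenshot.
var page = require('webpage').create();
page.open('http://example.com/page-with-comments', function(status) {
  if (status !== 'success') {
    phantom.exit(1);
  }
  window.setTimeout(function() {
    page.render('page-with-comments.png');
    phantom.exit();
  }, 3000);
});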
Hi, I am using jsPDF to make a PDF of HTML content. It works fine for short content and creates the PDF, but when I try to use it on large content with html2canvas.js (for rendering CSS), it does not create the PDF.
Any suggestions or sample code for this would be helpful. Thank you.
It is possible to create a PDF for large files. There are primarily two ways to do this:
1. html -> canvas -> image -> pdf (I assume you are trying this approach)
2. html -> pdf (does not work if html contains svg)
I would suggest you go for (2) unless you have a good reason to go for (1) (for example, if you have SVG content); (1) is quite an expensive operation for the browser, and there is a possibility of the browser crashing too.
1. html -> canvas -> image -> pdf
This is very neatly described here - https://github.com/MrRio/jsPDF/issues/339#issuecomment-53327389
My experience when using this method: it crashes when the generated PDF contains more than 2-3 pages (tested on the latest Chrome and Firefox).
2. html -> pdf
var pdf = new jsPDF('l', 'pt', 'a4');
var options = {
  pagesplit: true
};
pdf.addHTML($('body'), 0, 0, options, function() {
  pdf.save("test.pdf");
});
This is way faster compared to the above approach. It generates a PDF containing 5-6 pages in 1-2 seconds!
Hope this helps!
PDFPY
https://www.npmjs.com/package/pdfpy
var pdfpy = require('pdfpy');

var data = {
  // the keys have to be named exactly as below
  input: "./test.html",
  output: "./output.pdf"
};

pdfpy.file(data, function(err, res) {
  if (err) throw err;
  if (res) console.log("success");
});
I am working on a web page for uploading photos from a mobile device, using the <input type="file" accept="image/*"/> tag. This works beautifully on iPhone and in Chrome on Android, but where we are running into issues is with the stock Android browser.
The issue arises when you select a file from your gallery (it works fine when you use the camera to take a photo). We have narrowed it down even further: the data MIME type isn't available when the image is taken from the gallery in the stock browser (the screenshots referenced below showed the first 100 characters of the data URL being loaded). The goal was to force JPEG, but without the MIME type we cannot know for sure how to fix this. See the code below for how the images are being rendered.
How can an image be rendered without the type? Better yet, does anybody know why the type is not available in the stock Android browser?
EDIT
Firstly, these are not the same image; they were taken around the same time, which is why the data is different, and that's not the issue. (The MIME type doesn't appear on any images in the stock browser, so that's not the problem.)
Update
I confirmed that the MIME type is the issue by inserting image/jpeg in the stock browser where it appears on Chrome. Unfortunately, we have no way of guaranteeing that it's going to be JPEG, so again we really can't do it that way.
_readInputFile: function (file, index) {
  var w = this, o = this.options;
  try {
    var fileReader = new FileReader();
    fileReader.onerror = function (event) {
      alert(w._translate("There was a problem opening the selected file. For mobile devices, some files created by third-party applications (those that did not ship with the device) may not be standard and cannot be used."));
      $('#loadingDots').remove();
      return false;
    };
    fileReader.onload = function (event) {
      var data = event.target.result;
      //alert(data.substring(0,100));
      //var mimeType = data.split(":")[1].split(";")[0];
      alert("Load Image"); //I get to this point
      $('#' + w.disp.idPrefix + 'hiddenImages').append($('<img />', {
        src: data,
        id: "dummyImg" + index,
        load: function() {
          var width = dummy.width();
          var height = dummy.height();
          $('#dummyImg' + index).remove();
          alert("Render"); // I don't get here
          var resized = w._resizeAndRenderImage(data, null, null, biOSBugFixRequired, skewRatio, width, height);
          alert("Image Rendered"); // I don't get here
        }
      }));
    };
    fileReader.readAsDataURL(file);
  }
  catch (e) {
  }
}
Screenshots (not shown): the data URL prefix in Chrome vs. the stock browser.
Since the issue is probably browser-related, and you can't really fix the browser (you could report a bug to Google, though), I'd suggest taking a different path.
Have a look here:
In Node.js, given a URL, how do I check whether its a jpg/png/gif?
See the comments on the accepted answer, which suggest a method to check the file type using the file stream. I'm pretty sure this would work in browser-implemented JavaScript and not only in Node.js.
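A hedged sketch of that idea in browser JavaScript: read the first few bytes of the File and compare them against well-known magic numbers (only a few signatures are checked here, as an illustration):

// Hedged sketch: sniff the image type from the file's magic numbers
// instead of trusting the (missing) MIME type.
function sniffImageType(file, callback) {
  var reader = new FileReader();
  reader.onload = function () {
    var b = new Uint8Array(reader.result);
    if (b[0] === 0xFF && b[1] === 0xD8 && b[2] === 0xFF) return callback('image/jpeg');
    if (b[0] === 0x89 && b[1] === 0x50 && b[2] === 0x4E && b[3] === 0x47) return callback('image/png');
    if (b[0] === 0x47 && b[1] === 0x49 && b[2] === 0x46) return callback('image/gif');
    callback(null); // unknown type
  };
  reader.readAsArrayBuffer(file.slice(0, 4));
}

// usage: sniffImageType(input.files[0], function (type) { console.log(type); });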