For a Blazor WASM project I have been working on, I used a great open-source project called Blazor Diagrams. The client wants me to export a screenshot of the diagram, with the catch that the screenshot should be at the full resolution of the diagram, which will almost always be higher than the browser window's. For example, imagine the div containing the diagram is 900px wide to fit in the browser, but the entire diagram is 2500px wide. I would like to capture an image that is 2500px wide.
I have looked into various options like
html2canvas
getDisplayMedia
html2canvas does not play nicely with Blazor, and if I understand it correctly, getDisplayMedia would be a pixel-for-pixel capture of what is on screen.
I don't think we would be able to use third-party APIs due to confidentiality, so I am wondering what my options are.
Most, if not all, of the JavaScript screen-to-image / DOM-to-image libraries lack full SVG support and have quirks with inline CSS, etc. Your best bet is to get creative with either Playwright or bUnit to grab the output HTML in a staged environment. We hit all of the issues mentioned but resolved them in other ways, with no or minimal JavaScript dependencies.
This was with Blazor Server; WebAssembly might be a different case.
In case anyone comes across this and finds it useful, here is what I did. My project is a Blazor WASM app hosted on an IIS server.
Created a console application that handles the Playwright logic.
Created an API endpoint that, when hit, fires up the console application.
Created a special viewing page for my diagram which does some maths to set the size of the container div, making sure the entire diagram is visible.
Used Playwright to take a screenshot of that particular div.
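The sizing maths for that viewing page boils down to taking the diagram's bounding box plus some padding. A rough sketch (the function name and node shape here are made up for illustration, not the Blazor Diagrams API):

```javascript
// Given the diagram nodes' positions and sizes (in diagram coordinates),
// compute the container div size needed to show the whole diagram at 1:1,
// so a pixel-for-pixel screenshot captures the full resolution.
function containerSizeForExport(nodes, padding = 20) {
  const maxX = Math.max(...nodes.map(n => n.x + n.width));
  const maxY = Math.max(...nodes.map(n => n.y + n.height));
  return { width: maxX + padding, height: maxY + padding };
}
```

With a node ending at x = 2480, this yields the 2500px container width from the example above.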
Some IIS tips
In our scenario, I ended up placing the console app exe in the root Websites folder, as Node needed read/write access and was throwing errors when inside a subfolder.
I used the following code to install only the Chromium browser on first run. This way all the Playwright files are located in my Diagram Export app folder.
string imagesFolderPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "images");
string browserFolderPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "browser");

if (!Directory.Exists(imagesFolderPath))
{
    Directory.CreateDirectory(imagesFolderPath);
    Console.WriteLine("Created images folder '{0}'", imagesFolderPath);
}

if (!Directory.Exists(browserFolderPath))
{
    Directory.CreateDirectory(browserFolderPath);
    Console.WriteLine("Created browser folder '{0}'", browserFolderPath);
}

Console.WriteLine("Checking dependencies");

// Keep the browser and the bundled Node runtime inside the app folder
Environment.SetEnvironmentVariable("PLAYWRIGHT_BROWSERS_PATH", browserFolderPath);
Environment.SetEnvironmentVariable("PLAYWRIGHT_NODEJS_PATH", Path.Combine(AppDomain.CurrentDomain.BaseDirectory, ".playwright\\node\\win32_x64\\node.exe"));

// Downloads Chromium into browserFolderPath on first run; no-op on later runs
Microsoft.Playwright.Program.Main(new[] { "install", "chromium" });
I had to play around with Playwright's auto-waiting features to make sure the page was completely rendered before taking the screenshot. In my case I used the HoverAsync API on the last element type to render in my diagram.
Some background context:
I'm working on a game written in Babylon.js, which renders 3D graphics inside an HTML5 canvas using WebGL. That is to say, this project is not a typical web UI where I need to test DOM elements like button clicks or form submits. Babylon.js has its own way of simulating pointer events in the context of a 3D scene: you can pass it a pointerInfo object that mocks whether a mesh was hit, etc., and that's what I would like to use.
My project came bootstrapped with esbuild. I love it because it's fast at transpiling TypeScript, bundles everything into a single artifact, and doesn't produce JavaScript artifacts next to my TypeScript files, so my directories stay clean.
I started testing using Jest. This was fine until I started running into issues where
window is not defined
would crop up. It was because the code being tested pulled in other code that was inspecting window.navigator... attributes to see whether this was a mobile device. I could try to mock the window object, but that is a pain. Also, when trying to simulate a click, PointerEvent was not defined. I tried adding jsdom, but that didn't seem to help and I wasn't able to get unblocked. It just seemed like I was trying to use a tool built for Node when I should just test in a real browser.
But Googling "browser based testing" usually finds results I don't want. I'm not looking for full end-to-end user interaction testing. I don't want selenium/chrome driver style of testing because:
it's slow
my project is not a traditional website; I don't have many HTML elements for a user to interact with
I don't want to test the whole stack of logging in, dealing with authentication etc.
I just want to test classes and functions at the small, unit-test level, but I need access to window and PointerEvent and all the goodies that come with a browser for free.
Next I looked at Jasmine. Jasmine standalone has a browser based SpecRunner.html. It's a single HTML page that you modify. It includes its own jasmine boot scripts that are loaded with script tags, and then your source code files and your test files are also imported as JS files with script tags.
This seemed promising in that the tests run in a browser, so presumably they have access to the window object. However, both my specs and my source code are written in TypeScript, not JavaScript, so how do I get my code and tests into the SpecRunner.html?
esbuild bundles only a single output artifact. If, instead of using esbuild, I used the tsc command and polluted my directories with JavaScript artifacts, then yes, the Jasmine SpecRunner.html would have access to the JavaScript files, but tsc is slower and JS files everywhere are messy.
But before I get too far on this... I think the downside to this approach is that for every test file I write, I need to manually modify SpecRunner.html to include all the source code to be tested and all the test files, which will be annoying to maintain whenever file paths or file names change.
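One way to ease that maintenance burden would be to generate the script includes from a file list instead of editing SpecRunner.html by hand. A minimal sketch (this helper is hypothetical, not part of Jasmine; in practice the list would come from globbing the build output):

```javascript
// Build the <script> include block for SpecRunner.html from a list of
// bundled source and spec files, so renames only change the file list.
function scriptTags(files) {
  return files.map(f => `<script src="${f}"></script>`).join('\n');
}
```

A small build step could then splice the generated tags into a SpecRunner template whenever the bundles change.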
TL;DR:
Any advice on what is a good solution to run unit tests with a real browser (not selenium style) when using typescript and esbuild? I don't have a real preference for any particular test framework.
As part of an email signature tool we're developing, we're building a feature to not only export the signature but also take a screenshot of it.
We tried working with the JavaScript library html2canvas (https://html2canvas.hertzen.com/), but with this solution, none of the images inside the HTML that are hosted elsewhere show up in the screenshot.
This issue has existed for a long time, hence the question: is there any other solution to render a screenshot from HTML that includes external images?
You can use a headless browser. This article provides all the necessary information for taking screenshots: https://bitsofco.de/using-a-headless-browser-to-capture-page-screenshots/
I need to add some infographics to an Angular 5 app. I've chosen d3.js for that. I also need to be able to export the graphs, i.e. make SVGs with Node and wrap them inside a PDF.
Fortunately it's rather simple to make code that builds a d3 graph in the browser work on Node.js. The following lines do that...
const jsdom = require('jsdom');
const { JSDOM } = jsdom;
const { window } = new JSDOM('');
// expose the jsdom document globally so d3 can use it
global.document = window.document;
After that only minor changes to code that works in browser are required.
Obviously I don't want to have two copies of almost the same code, so I need a way to organize the functions that create the SVG (I'd prefer TypeScript over JavaScript) for use on both the Angular app side and the Node app side. Unfortunately I don't have much experience with Node and don't see an easy solution for that.
Here are my questions...
How can I simply organize the functions that create SVG using d3 so they are usable from both the Angular 5 app and the Node.js app?
Maybe rendering d3.js with Node isn't the best solution and there's another one that is simpler?
Thank you in advance!
I would like to suggest the following solution.
First of all, it doesn't matter which front-end framework you actually use.
If I understood your idea correctly, you need a picture/screenshot of the d3.js chart in order to use it in a PDF later. Is that correct?
You need to write a utility that opens the real web page with your chart component and takes a screenshot (at whatever resolution you want, of course). It might be a combination of Protractor with the Chrome browser, for example. (There are a lot of solutions; you could even use PhantomJS. In my experience, Protractor is simpler and easier to implement.) Also, Protractor has a built-in feature to take screenshots of the page and save them to a particular folder.
The benefits we get by following that solution:
only one place with source code related to chart rendering
100% certainty that the chart looks the same as on the real web page (alongside the other Angular components)
we don't need to find a way to render SVG on the Node.js side, etc.
The job might look like this:
Launch some NPM/Gulp/Grunt (whatever) task to open the particular page of your web app using Protractor and the Chrome browser.
Open a dummy page with only the chart component + data layer.
Take a screenshot and save it to a particular folder.
Use the screenshot of the chart inside the PDF (manually or with another tool).
If you want to do it on the server side, you can have an API which generates the graphic and returns the element. You can plug it directly into the UI and also use the same function for your PDF generation.
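To share that same chart function between the Angular app and Node, one pattern (a sketch with made-up names, not d3 API) is to pass the document in as a parameter instead of relying on globals, so the identical code runs against the browser DOM and against jsdom:

```javascript
// The chart builder receives its DOM entry point explicitly: in the
// browser you pass window.document, on Node you pass jsdom's document.
function renderChart(document, data) {
  const svg = document.createElement('svg');
  svg.setAttribute('width', String(data.length * 10));
  svg.setAttribute('height', '100');
  return svg;
}
```

The real implementation would hand the element to d3 via d3.select(svg); the point is only that no global document is touched.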
I have a problem. I've tried some libraries that convert HTML to PDF, but they don't import the CSS, so my PDF comes out wrong.
I've tried "html2pdf", "pdfmake", "jspdf"...
pdfmake does not help me because it needs a JSON document definition generated from the HTML data...
The structure of file that I would like to convert to PDF is:
html: www/templates/phase_one_report.html
css: www/css/phase_one_report.css
Any ideas? I am using Node.js with Sails.js on the backend and JavaScript with Ionic on the frontend.
Sorry about my English.
This is a difficult problem. I have also found that existing HTML to PDF libraries usually don't handle the HTML & CSS that I throw at them.
The best solution I have found is not Javascript at all: wkhtmltopdf. This is essentially a program that wraps up the webkit rendering engine so that you can give it any HTML + CSS that webkit can render and it will return a PDF document. It does an outstanding job, since it's actually rendering the document just like a browser would.
You mention that you're using Node.js, but it's not clear exactly what your environment is, so I'm going to assume that your report is available at a URL like http://my.domain/phase_one_report.html. The simplest way to get this working would be to install the wkhtmltopdf application on your server, then use child_process.exec to execute it.
For example:
import { exec } from 'child_process';

// generate the report, then execute the wkhtmltopdf command
exec(
  'wkhtmltopdf http://my.domain/phase_one_report.html output_file.pdf',
  (error) => {
    if (error) {
      // handle the error
      return;
    }
    // send the PDF file to the client
  }
);
There are a lot of different command-line options for wkhtmltopdf - you'll need to look into all the different ways to configure it.
If your report is not accessible at a URL, then this becomes a little more complicated - you'll need to inline the CSS and send everything to wkhtmltopdf at once.
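Inlining the CSS can be as simple as injecting a <style> tag before </head> so the markup is self-contained before handing it to wkhtmltopdf. A rough sketch (the helper name is made up):

```javascript
// Inject a CSS string into an HTML string as a <style> tag, falling
// back to prepending it when the document has no </head>.
function inlineCss(html, css) {
  const styleTag = `<style>${css}</style>`;
  return html.includes('</head>')
    ? html.replace('</head>', `${styleTag}</head>`)
    : styleTag + html;
}
```

You would read www/templates/phase_one_report.html and www/css/phase_one_report.css with fs, combine them with this function, and pass the result to wkhtmltopdf.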
There are a number of options available right now:
Edit 09/2018: Use Puppeteer, the JS headless Chrome driver. Firefox now also has a headless mode, but I'm not sure which library corresponds to Puppeteer.
wkhtmltopdf, as mentioned before, does the job but is slightly outdated.
You will have to keep an eye on the latest Chrome releases, which will have a --headless option to enable HTML+CSS+JS to PDF conversion.
Then there are PhantomJS and SlimerJS. Both can be used with Node and JavaScript. Nightmare.js is also an option, but sits on top of them.
However, PhantomJS is currently the only solution that is truly headless and JavaScript-based. SlimerJS works with Firefox but requires you to have a window manager, or at least xvfb, a virtual framebuffer.
If you want the latest browser features, you will have to go with SlimerJS or, another option, one of the Electron-based solutions that keep popping up. Electron is based on Chrome and is scriptable too. A fine solution that also ships with Docker containers is currently https://github.com/msokk/electron-render-service
This list is possibly incomplete and will change a lot in the near future.
I'm building an HTA application in which I need to display a list of files with their associated system icons.
I'm using FileSystemObject to list the files, but there seems to be no way to get the icons...
I've found a VBScript that can save the icon of a file into a .ico file.
It reads the file (a PE resource file, .exe or .dll) and parses the icon data.
I modified that script to return the icon's bytes, convert them to base64, and embed base64 images in the HTML.
Here's the original script: http://gilpin.us/IconSiphon/
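The embedding step boils down to wrapping the base64 bytes in a data URI. A sketch (the function name is made up; HTAs run JScript, so this sticks to string concatenation):

```javascript
// Build an <img> tag whose src is a data URI carrying the icon's
// base64-encoded bytes, so no .ico file needs to exist on disk.
function iconImgTag(base64, size) {
  size = size || 16;
  return '<img src="data:image/x-icon;base64,' + base64 +
         '" width="' + size + '" height="' + size + '">';
}
```

Note that the width/height attributes only scale the rendered image; they don't select which size variant inside the .ico is used, which is exactly the first issue below.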
Issue
In most cases the .ico contains multiple icons (many sizes and color depths), but there's no way I can specify which one to use (I need 16x16 icons).
Not all icons are displayed.
It could be slow with many files, as it reads exes and dlls (but I'm OK with that; I can cache already-fetched icons).
I've also tried some ActiveX controls, but none seem to work properly. Even those provided by Microsoft (ShellFolderView or ListView) are very buggy.
Requirements
Must display 16x16 icons
Must allow multiple file selection
Everything must be embedded in the HTA (if possible); no external .exe
Does anyone know a way to achieve that?
Thanks!
Use SHGetFileInfo() with the SHGFI_ICON flag.
http://msdn.microsoft.com/en-us/library/windows/desktop/bb762179(v=vs.85).aspx
The FileSystemObject will provide you the necessary functions for enumerating files on the local filesystem. However, to get the icon image you will need to use the Win32 API per @seanchase's response, or an external exe.
However, you can access the Win32 API via JavaScript in the HTA using the wshApiToolkit ActiveX object - http://www.google.com/search?q=wshAPIToolkit.ucATO%2F&rls=com.microsoft:en-us&ie=UTF-8&oe=UTF-8&startIndex=&startPage=1
Find a copy of that and you're close to being done. It does require distributing the ActiveX object with your code and shell-executing the registration process from within the HTA, so that might violate your third constraint. Though I believe you can base64-encode the exe into the HTA in a data URL and write that back out to the file system, so it would at least be bundled into a single file. If you support that option, then maybe embedding an exe that does the same would meet your requirements.
Definitely some hacky stuff that may be unstable on future OS versions - heck, I'm not even sure the wshApiToolkit works on Windows 7, and 8 is just around the corner. Good luck!
You indicated you're open to installing ActiveX components and using them in your HTA.
If I had the time, I would approach this by creating ActiveX components in Visual Studio that call FindResource, LoadResource and LockResource. These would enable access to the Group Icon resource, for which I would then provide rich interfaces to iterate through the icons, offering the ability to extract BMPs (or PNGs).
This is "how" I would go about achieving this, short of actually going off and doing it.
I once built a similar HTA interface and faced the same problem. I solved it by creating a custom icon gallery and converting the images using base64. You may achieve the same by either converting the icons or using a sprite. Many UIs do it; even java.swing ships with its own embedded collection. As you noticed, reading from *.dll files can slow down the application.