Best Way to Save Highcharts Graph to Webpage? - javascript

I have a web app that I'm writing with Flask and JavaScript which lets the user drag and drop a CSV file. The CSV file is then parsed and rendered via Highcharts, all client-side with JavaScript. Trouble is, the chart obviously won't be saved with the form.
My question is: how would I go about storing the chart I just rendered? I'm kind of doing the logic in reverse here instead of feeding the data to Highcharts from a database, so maybe that would be a better option?
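One common pattern is to persist the parsed CSV data (rather than the rendered chart) and re-render on load. A minimal sketch, assuming a hypothetical Flask route `/api/charts` and a helper `buildChartPayload` (both are illustrative names, not part of the question):

```javascript
// Sketch: save the parsed CSV rows so the chart can be re-rendered later.
// buildChartPayload is a hypothetical helper; "/api/charts" is an assumed
// Flask endpoint, not something from the original question.
function buildChartPayload(title, csvRows) {
  return JSON.stringify({
    title: title,
    rows: csvRows,               // the raw parsed data, not the SVG
    savedAt: new Date().toISOString()
  });
}

// In the browser you would then POST it to the Flask backend:
// fetch('/api/charts', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: buildChartPayload('My chart', parsedRows)
// });
```

On reload, the Flask route would return the saved rows and the same Highcharts rendering code would rebuild the chart from them.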

Related

Exporting multiple graphs (SVG) to PDF

I'm trying to export multiple graphs (FusionCharts XT, rendered as SVG in the web page) to PDF. I learned that FusionCharts doesn't provide this feature for JavaScript graphs. I searched through various DLLs and finally tried working with jsPDF.js as well as svg_to_pdf.js, but when it comes to rendering multiple graphs I'm not sure how to proceed. Is there any example online for downloading multiple SVG images in a web page to PDF? Please suggest.
Do you have multiple graphs on the same page and want to export them all into one PDF?
If yes, then you could consider converting those SVGs into multiple base64-encoded image streams using canvas, passing them to the server side, and having the server generate a PDF file containing those images.
We have used this method in our own application and I hope it can help you, too.
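A rough sketch of that canvas step (the drawing part is browser-only; `stripDataUrlPrefix` is the small pure helper that yields the base64 string the server side would receive):

```javascript
// Sketch of the SVG -> canvas -> base64 approach described above.
// svgToPngDataUrl assumes the SVG element is already in the DOM and
// only runs in a browser; both function names are illustrative.
function stripDataUrlPrefix(dataUrl) {
  // "data:image/png;base64,AAAA..." -> "AAAA..."
  return dataUrl.slice(dataUrl.indexOf(',') + 1);
}

function svgToPngDataUrl(svgElement, callback) {
  const xml = new XMLSerializer().serializeToString(svgElement);
  const img = new Image();
  img.onload = function () {
    const canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    canvas.getContext('2d').drawImage(img, 0, 0);
    callback(canvas.toDataURL('image/png')); // base64-encoded PNG
  };
  img.src = 'data:image/svg+xml;base64,' + btoa(xml);
}
```

You would run `svgToPngDataUrl` once per chart, strip the prefix, and POST the base64 strings to the server for PDF assembly.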

Generate PDF from web app

I need to generate a PDF from the current screen in my webapp. Some kind of screenshot, but I'm facing serious difficulties.
The main problem is that the view contains a grid made with jQuery Gridster; and some "widgets" contain complex elements like tables, highcharts, etc.
So plugins like jsPDF or html2canvas can't render my page into a proper PDF. They always generate it blank.
This is what the page looks like. You can move/resize each element:
(Sorry for the CIA style, but there's business data in there)
Some ideas I came across that don't work are:
Using the browser's print-to-PDF feature programmatically (can't).
Using PhantomJS (but the page state matters, so...).
I believe a solution to this problem would be widely adopted by anyone trying to generate a PDF or image from the current screen in a web app. It's quite an unresolved problem.
It's OK if it only works in Google Chrome.
Many thanks.
EDIT:
One possible solution might be to find a way to represent the current layout state as an object and save it with an id.
Then retrieve that object via a URL parameter with the id and apply the stored layout to the initial page.
That way I might be able to take a screenshot with PhantomJS, but it seems quite complex to me. Any alternative?
Based on the fact that you're struggling to capture dynamic content, I think at this point you need to take a step back and consider altering your approach. The reason these plugins are failing is that they only work with the HTML as it was before any interaction, right?
Why not convert the HTML to PDF on the server side? The key part here is to send the current HTML back: you send the updated, now-static HTML to the server to be rendered into a PDF. I've done HTML-to-PDF on the server side before and it works fine, so I can't see why it wouldn't be appropriate here.
See this answer for details about HTML to PDF server side.
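The "send the current HTML back" step can be sketched like this; `/render-pdf` is an assumed route, and the server behind it would hand the HTML to whatever HTML-to-PDF tool you choose:

```javascript
// Sketch: snapshot the live DOM (including all post-load interactions)
// and POST it to a server-side HTML-to-PDF endpoint. "/render-pdf" is
// a hypothetical route name.
function snapshotHtml(doc) {
  // doctype + the *current* DOM, not the HTML as originally served
  return '<!DOCTYPE html>\n' + doc.documentElement.outerHTML;
}

// Browser usage:
// fetch('/render-pdf', {
//   method: 'POST',
//   headers: { 'Content-Type': 'text/html' },
//   body: snapshotHtml(document)
// });
```

One caveat: styles loaded from external stylesheets and absolute asset URLs need to resolve from the server's side too, so you may have to inline or rewrite them before rendering.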

Apache POI vs JS XLS - What's the best for an Excel Sheet driven application?

I'm developing an Excel Sheet driven application.
All the data must be retrieved from a specific, default Excel sheet that will contain new rows and alterations to existing rows. I need to merge specific columns with the existing data while preserving the other columns.
The basic functions are:
Import table and merge with existing data (data that is already in database).
Edit certain columns of this existing data in a grid/table view.
I need to import an .xlsx file (not .csv). My frontend is in AngularJS and backend with Java.
With AngularJS (JavaScript), I can use the JS XLSX library.
With Java, I can use Apache POI.
What's the best approach? And what good materials/examples can you recommend?
Thanks everyone!
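Whichever parser you pick (JS XLSX client-side or Apache POI server-side), the merge step described above is the same: update only the listed columns for rows that already exist, preserve every other column, and append new rows. A minimal sketch in plain JavaScript, with rows keyed by one column (`mergeRows` and the parameter names are illustrative):

```javascript
// Sketch of the merge: match rows by keyColumn, overwrite only
// columnsToMerge on existing rows, keep their other columns intact,
// and add incoming rows that have no match.
function mergeRows(existing, incoming, keyColumn, columnsToMerge) {
  const byKey = new Map(existing.map(row => [row[keyColumn], { ...row }]));
  for (const row of incoming) {
    const current = byKey.get(row[keyColumn]);
    if (current) {
      for (const col of columnsToMerge) current[col] = row[col];
    } else {
      byKey.set(row[keyColumn], { ...row });
    }
  }
  return [...byKey.values()];
}
```

The same logic ports directly to Java if the parsing ends up on the POI side.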

Is it possible to get D3js charts into xml?

I have created a simple area chart from a D3.js example. Now I want to export the complete code of the area chart to XML so that any end user can manually change the XML data, save it, and use it.
Is there a way to achieve this?
If you are generating SVG via D3.js, it is naturally XML - which is handy for you.
You could grab the SVG from the HTML DOM and use the FileSaver API to generate a file to save.
Here is a FileSaver polyfill, which also has a nice introduction.
Alternatively you could send the SVG (from the DOM) via Ajax and do it server-side.
UPDATE:
Here is an example that grabs the svg element on the client side but generates the download via the server side. Combined with FileSaver, you could do the whole thing client-side.
http://d3export.housegordon.org/
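Putting those two pieces together, the client-side route looks roughly like this (the browser-only part is commented out; `wrapSvgAsXml` just prepends the XML declaration so the saved file is a standalone, hand-editable XML document):

```javascript
// Sketch: the SVG that D3 builds is already XML, so "export to XML"
// is just serializing the node and saving it. wrapSvgAsXml is an
// illustrative helper name.
function wrapSvgAsXml(svgMarkup) {
  return '<?xml version="1.0" encoding="UTF-8"?>\n' + svgMarkup;
}

// Browser-only part, using XMLSerializer plus FileSaver's saveAs:
// const svgNode = document.querySelector('svg');
// const markup = new XMLSerializer().serializeToString(svgNode);
// saveAs(new Blob([wrapSvgAsXml(markup)], { type: 'image/svg+xml' }),
//        'area-chart.svg');
```

Note this exports the rendered SVG, not the D3 code itself; end users edit the chart's geometry and text, not the data-binding logic.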
See this question. I don't think you can export the chart itself, but you can try exporting the chart data.

What is the best way to convert HTML into Excel

I have an HTML page which has a Flash chart (FusionCharts) and an HTML table. I need to convert this whole thing into Excel. The HTML table should be displayed in the cells of the Excel sheet; the Flash chart can be displayed as an image.
Is there any open source API we could use to achieve this? Could you let me know the possible options?
Can this be done using JavaScript alone?
The HTML table is relatively easy. You can download the page, parse the HTML (there are various HTML parsing libraries available), extract the table, and convert it into CSV (which Excel can load), or directly create an Excel file, e.g. using Apache POI.
The Flash part is significantly harder. There are quite a few tools available to capture flash to an image, you'd need to use one of them. This can be tricky, as Flash might be interactive, so you'd possibly have to remote-control the Flash part so it shows the right image before capturing. Hard to tell without more info.
That said, screen-scraping (which is what you're doing) is always labour-intensive and fragile. You should really push for a better interface to get your data from, it will save loads of hassle in the long run.
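The "extract the table and convert to CSV" step can be sketched as below. This is a deliberately minimal version that only handles flat `<tr>`/`<td>`/`<th>` markup with no nested tables; a real HTML parsing library is safer for production use, as the answer above says:

```javascript
// Naive sketch: pull rows and cells out of table markup with regexes,
// strip inner tags, and emit CSV with quoting for commas/quotes/newlines.
// Not robust against nested tables or malformed HTML.
function tableToCsv(tableHtml) {
  const rows = tableHtml.match(/<tr[^>]*>[\s\S]*?<\/tr>/gi) || [];
  return rows.map(row => {
    const cells = row.match(/<t[dh][^>]*>([\s\S]*?)<\/t[dh]>/gi) || [];
    return cells
      .map(c => c.replace(/<[^>]+>/g, '').trim())
      .map(v => /[",\n]/.test(v) ? '"' + v.replace(/"/g, '""') + '"' : v)
      .join(',');
  }).join('\n');
}
```

The resulting CSV opens directly in Excel, which covers the table half of the problem; the Flash chart still needs a separate capture step.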
Just set the content type of the page to "application/vnd.ms-excel". If the html page is just a table it will open with excel and look perfect. You can even add background colors and font styles.
Try some of these content types
application/excel
application/vnd.ms-excel
application/x-excel
application/x-msexcel
Excel can convert HTML tables by default. The easiest way to force it to do this is to save the HTML file with an XLS extension. Excel will then open the XLS as if it were its native workbook.
There's a very good Java API, Apache POI, that would let you do that, but it's Java.
http://poi.apache.org/
If you're on Win32 you can also use Excel's COM api, there are quite a few tutorials on the net.
I cannot offer any advice on the Flash part, but I have done HTML table to Excel many times. Yes, Excel can open HTML tables but most HTML tables out there have extraneous crap in them that can make it fragile to consistently parse the tables.
CPAN module HTML::TableExtract is a wonderful module that allows you to focus on the non-presentation specific aspects of the table you are trying to extract. Just specify the column headings you are interested in and maybe specify the title or class of the table and you are mostly set. You might have to post process the rows returned a little, but that is considerably easier than dealing with the underlying tag soup in all its glory.
Further, for output to Excel format, stick with Spreadsheet::WriteExcel rather than the OLE interface. That way, you do not depend on having Excel installed for your program to work and things go a little faster.
Make sure you specify the data type of cells if you do not want content to be changed automatically by Excel upon opening the files (another reason I do not like sending around CSV files). Use a configuration file for formatting information so that you can change how the spreadsheet looks without having to change the program.
You can always use Excel's built-in charting facilities to replace the web site graphs.
This combination has enabled me to generate pretty good looking documents comprising several hundreds of megabytes of scraped data (with logos and image links etc) using just a few hundred lines of Perl and a couple of days' work.
What you're trying to do is fragile and difficult to maintain. You should attempt to create a csv feed to fetch the data. All it takes is for someone to come along and modify the HTML and your scraper will throw up on it (probably years after anyone remembers how your program works).
Try to get CSV and image data from the original source (ie, database or whatever) and build the Excel file from that.
I will add to SpliFF's answer that once you have your data as a CSV file, you can set the MIME type of the page to application/vnd.ms-excel, which will open the page in Excel.
