Provide a downloadable file (perl-template, javascript)

I am working with Perl/CGI, MySQL, and the Perl Template Toolkit.
I have a database set up and some Perl modules to query the database.
From the Perl modules I pass a data structure (a hash of hashes) to Template Toolkit (.tt), which renders the results on the web page.
I would now like to add an option to download the search results as a tab-delimited file, i.e. provide a "download file" option. I have a subroutine in my Perl module to do the conversion into tab-separated format, and I want to be able to call that subroutine to convert the search results. Can I call a subroutine from a Perl module in Template Toolkit?
I am trying to figure out how to generate the downloadable file without querying the database again and without storing the results in a cache.
Is there a way to pass the data structure (hash of hashes) that Template Toolkit is rendering to JavaScript (which could then call a subroutine) to generate a downloadable file?
Please suggest a correct approach.
Thanks for your time

Can I call a subroutine from a perl module in Template toolkit?
You can, but it doesn't make sense for this problem.
You don't need any templating capabilities, and you do need a different Content-Type header. Don't use TT when the tab-separated file is being created.
I am trying to figure out how to generate a downloadable file without again querying the database or without storing the results in CACHE.
There is no reasonable way to do that. The closest you could get would be to parse the data out of the HTML document generated by TT with JavaScript (not using the Perl you wrote to generate the tab-separated file), build the tab-separated file on the client, and make it available for download.
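For illustration, a minimal sketch of that client-side route, assuming the results are rendered into a table with id "results" (the id, function name, and filename are all invented for this sketch):
function downloadResultsAsTsv() {
  // Scrape the TT-rendered results table back out of the page.
  var rows = document.querySelectorAll('#results tr');
  var lines = [];
  rows.forEach(function (row) {
    var cells = row.querySelectorAll('th, td');
    var fields = Array.prototype.map.call(cells, function (cell) {
      // Tabs or newlines inside a cell would corrupt the format.
      return cell.textContent.replace(/[\t\n\r]+/g, ' ').trim();
    });
    lines.push(fields.join('\t'));
  });
  // Offer the result as a download without another server round trip.
  var blob = new Blob([lines.join('\n')], { type: 'text/tab-separated-values' });
  var link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = 'results.tsv';
  link.click();
  URL.revokeObjectURL(link.href);
}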
It would be simpler, easier and more reliable to just hit the database again.

Related

Automatic creation of PDF file from Template

For a project at university we are working on an application that is supposed to automatically create a file for the user after querying several pieces of information from the user. The general idea is to use Decision Model and Notation (DMN) to perform the query and collect the information needed. The file's content depends on the answers provided by the user. The application is further intended to be web-based.
My question is therefore: how can we put the strings that result from the DMN query into a PDF template that is ready to print/send? The template is currently set up as a text document (.docx) that has several input fields that need to be filled.
Thanks!
You can use Kogito for the DMN execution side; it is JVM-based, but it exposes automatically generated REST (JSON) endpoints to evaluate the DMN model. Based on the requirements you listed, this should be an easy way to achieve the DMN evaluation part: with Kogito you drop the .dmn model file into the src/main/resources directory, and it automatically provides a cloud-native application exposing the REST endpoint.
Further, the resulting JSON payload (the DMN evaluation results) can be fed into a template engine in order to generate the final PDF from the result JSON, converting from a friendlier intermediate format. For instance, this could be done with the Apache FreeMarker/Velocity template engines: you could target HTML or ODF, and finally perform the conversion to PDF.
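Since the application is web-based, evaluating the model could then be a plain JSON call from the browser; a hypothetical sketch, where the endpoint path /MyDecisionModel and the shape of the answers object are invented for illustration:
async function evaluateDmn(answers) {
  // POST the user's answers as the DMN input context to the
  // REST endpoint Kogito generates for the model.
  const response = await fetch('/MyDecisionModel', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(answers),
  });
  if (!response.ok) {
    throw new Error('DMN evaluation failed: ' + response.status);
  }
  // The decision results come back as JSON, ready to feed
  // into the template step that produces the document.
  return response.json();
}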

Alternative to passing Data to JavaScript from PHP?

I have a fairly large application and I'm currently trying to find a way around having to pass data from PHP (user tokens for 3rd-party APIs and such) through the DOM. Currently I use data-* attributes on a single element and parse the data from that, but it's pretty messy.
I've considered just making the contents of the element encoded JSON with all the config in it, which would greatly improve the structure and effectiveness, but at the same time storing sensitive information in the DOM isn't ideal or secure whatsoever.
Getting the data via AJAX is also not so feasible, as the application requires this information all the time, on any page, so running an AJAX request on every page load before allowing user input or control would be a pain for users and add load to my server.
Something I've considered is having an initial request for information, storing it in the Cache/localStorage along with a checksum of the data, and include the checksum for the up-to-date data in the DOM. So on every page load it'll compare the checksums and if they are different (JavaScript has out-of-date data stored in Cache/localStorage), it'll send another request.
I'd rather not have to go down this route, and I'd like to know if there are any better methods that you can think of. I can't find any alternative methods in other questions/Google, so any help is appreciated.
You could also create a PHP file and set its response header to a JavaScript content type, then request this file as a normal JavaScript file: <script src="config.js.php"></script> (assuming the filename is config.js.php). You can structure your JavaScript code and simply assign values dynamically.
For security, especially if login is required, this file can be returned with real content only once the user is logged in; otherwise you simply return a blank file.
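By the time it reaches the browser, that file is plain JavaScript with the values already substituted server-side; hypothetically it might look like this (all names and values invented for illustration):
// What the browser receives from config.js.php: ordinary JavaScript,
// with PHP having filled in the values on the server,
// e.g. via <?= json_encode($token) ?>.
var AppConfig = {
  userToken: "abc123",
  apiBase: "https://api.example.com/v2",
  locale: "en-GB"
};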
You could also just emit the json you need in your template and assign it to a javascript global.
This would be especially easy if you were using a templating system that supports inheritance like twig. You could then do something like this in the base template for your application:
<script>
MyApp = {};
MyApp.cfg = {{cfg | tojson | safe}};
</script>
where cfg is a PHP dictionary in the templating context. Those filters aren't Twig-specific; they're just there to give you the idea.
It wouldn't be safe if you were storing sensitive information, but it would be easier than storing the info in local storage.
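Any script loaded after that base template can then read the injected config straight from the global; for example (the key name is invented for illustration):
// Read the server-provided config from the global set in the base template.
console.log('API base URL:', MyApp.cfg.apiBase);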

Is fetching remote data server-side and processing it on server faster than passing data to client to handle?

I am developing a web app which functions in a similar way to a search engine (except it's very specific and on a much smaller scale). When the user gives a query, I parse that query, and depending on what it is, proceed to carry out one of the following:
Grab data from an XML file located on another domain (ie: from www.example.com/rss/) which is essentially an RSS feed
Grab the HTML from an external web page, and proceed to parse it to locate text found in a certain div on that page
All the data is plain text, save for a couple of specific queries which will return images. This data will be displayed without requiring a page refresh/redirect.
I understand that the same-origin policy prevents me from using JavaScript/Ajax to grab this data. An option is to use PHP to do this, but my main concern is the server load.
So my concerns are:
Are there any workarounds to obtain this data client-side instead of server-side?
If there are none, is the optimum solution in my case to: obtain the data via my server, pass it on to the client for parsing (with Javascript/Ajax) and then proceed to display it in the appropriate form?
If the above is my solution, all my server is doing with PHP is obtaining the data from the external domains. In the worst (best?) case scenario, let's say a thousand or so requests are being executed in a minute, is it efficient for my web server to be handling all those requests?
Once I have a clear idea of the flow of events it's much easier to begin.
Thanks.
I just finished a project with the same requirement as yours.
My suggestion is:
Use two files: [1] the frontend, which makes the Ajax call and sends the URL; [2] a server-side script, which receives the Ajax call, gets the file content from the URL, and then parses the XML/HTML.
That way you avoid your PHP script dying in some situations.
For the PHP side, please look into the DOMDocument class for parsing XML/HTML; you will also need DOMXPath.
Please read: http://www.php.net/manual/en/class.domdocument.php
No matter what you do, I suggest you always archive the data on your local server.
So the process becomes: search your local archive first; if the data doesn't exist, grab it from the remote source and archive it for, say, 24 hours.
By the way, regarding your client-side parsing idea, I suggest you do so. jQuery can handle both HTML and XML; for HTML you just need to filter out all the JavaScript code before parsing it.
So the flow becomes:
ajax call to the local service
local PHP grabs the XML/HTML (but does no parsing)
archives it locally
sends the filtered HTML/XML to the frontend and lets jQuery parse it (see the sketch below)
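A rough sketch of that last step, assuming the local proxy lives at /proxy.php and returns the RSS XML unmodified (the path and the selectors are invented for illustration):
// Fetch the proxied feed from the local PHP service and let jQuery parse it.
$.ajax({
  url: '/proxy.php',
  data: { url: 'http://www.example.com/rss/' },
  dataType: 'xml', // jQuery parses the response as an XML document
  success: function (doc) {
    // Pull each item's title and link out of the parsed feed.
    $(doc).find('item').each(function () {
      var title = $(this).find('title').text();
      var link = $(this).find('link').text();
      console.log(title, link);
    });
  }
});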
HTML is similar to XML. I would suggest grabbing the page as HTML and traversing it with an XML reader, as XML.

Accessing File contents based on Key using AJAX or jQuery

My web application uses a file which is updated by another process. My application reads the content of the file using Ajax:
var xmlhttp = new XMLHttpRequest();
xmlhttp.open("GET", "/config/myfile", false); // false = synchronous request
xmlhttp.send();
Once the response is received, the app parses it and shows the values on the web UI. The file contains 50 fields, and whenever I want to read any single field I need to fetch the whole file.
Is there any way to get the value of a single field based on its key, instead of reading the whole file?
As I understand it, we need to fetch the file and then parse the response text, but I would like to know whether there is any way to reduce the file calls with some other method.
I want to achieve this to reduce the file I/O operations, since other processes are writing to the file at the same time as my web app is reading the latest value.
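For context, the current flow looks roughly like this, assuming a simple key=value line format for the file (the format and function name are invented, since the question doesn't specify them); a plain GET always returns the entire resource:
// Fetch the whole file and pick out one field by key.
function readField(key) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/config/myfile', false); // synchronous, as in the question
  xhr.send();
  var lines = xhr.responseText.split('\n');
  for (var i = 0; i < lines.length; i++) {
    var parts = lines[i].split('=');
    if (parts[0] === key) {
      return parts.slice(1).join('='); // value may itself contain '='
    }
  }
  return null; // key not found
}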
Any other option would be appreciated.
Note: using a server-side scripting language is not an option for me.
Regards

Javascript Decryption during Download

I'm building an ASPX website that should allow the user to download a CSV/Excel file (including the 'Save To' dialog). The CSV contains encrypted data; the decryption key is available on the user's side and should be kept secret from the web service.
So decryption actually has to be performed within the browser; a JavaScript implementation (sjcl) has proven to work fine.
But how can the incoming data stream during a file download be influenced? Something like a browser-hosted proxy performing the JavaScript decryption?
@closure: thanks a lot! Ajax is no problem, and the idea
<a href='data:application/csv;base64,aGVsbG87d29ybGQNCg=='>click</a>
is really cool, but it has two problems: it doesn't seem to work with IE, and it is not the right approach for really huge tables. The solution should be able to handle many thousands of records, so we need some sort of download stream encoder/decrypter.
Here are the steps to achieve this (a rough sketch follows the list):
Instead of downloading the CSV directly to the client machine, fetch it via Ajax.
Once the data is received via Ajax, parse the CSV with one of the many functions available on the internet (let me know if you need help with this); such a function will convert the CSV to native JavaScript arrays.
Walk through the array and convert the encrypted data to unencrypted data, natively in the same array.
Convert the array back to CSV (again, there are functions in the public domain).
Make a link (an a element) and set the href to local data, like data:text/csv;charset=utf-8, + encodeURIComponent(csv).
Present this link to the user and ask them to click on it to save the file locally.
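A condensed sketch of those steps, assuming sjcl's high-level convenience API (sjcl.decrypt(key, ciphertext)) and a two-column id;secret CSV layout invented for illustration:
// Decrypt an Ajax-fetched CSV in the browser with sjcl and offer the
// plaintext back as a data: link. Layout and names are illustrative only.
function decryptCsvAndOfferDownload(csvText, key) {
  var rows = csvText.trim().split('\n').map(function (line) {
    return line.split(';');
  });
  rows.forEach(function (row) {
    // Column 1 holds the sjcl-encrypted payload in this made-up layout.
    row[1] = sjcl.decrypt(key, row[1]);
  });
  var csv = rows.map(function (row) { return row.join(';'); }).join('\n');
  var link = document.createElement('a');
  link.href = 'data:text/csv;charset=utf-8,' + encodeURIComponent(csv);
  link.download = 'decrypted.csv'; // note: ignored by old IE, as discussed above
  link.textContent = 'Save decrypted CSV';
  document.body.appendChild(link);
}
As noted in the question, a data: URI keeps the whole file in memory, so for many thousands of records a streaming approach (or splitting into multiple files) would still be needed.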
