I'm new to Excel Web Add-Ins and want to figure out if it's possible to make an add-in that can export a custom file.
I've looked around and all I can find are Excel-specific commands like Workbook.SaveAs(), but nothing on writing custom export functions. I need to convert the file into XML, but with a specific XML structure, so I would have to massage the data before saving it as XML. Again, I can't find anything to suggest that this is supported.
How would I go about writing a file to disk from Excel that isn't just the Workbook?
There's no API that supports exporting a custom file to disk. There does seem to be a workaround, but it only works for Excel Online.
Please see this link:
How to create a file in memory for user to download, but not through server?
The closest thing there is for what you want to do is:
Office.context.document.getFileAsync(Office.FileType.Compressed, (result) => {
  if (result.status === Office.AsyncResultStatus.Succeeded) {
    const file = result.value; // a handle to the whole document
    // do whatever ...
  }
});
The file variable in this case represents the entire document in Office Open XML (OOXML) format; its contents come back as a byte array.
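Note that file is a handle rather than the raw bytes: the document comes back in slices. Here is a minimal sketch of collecting them, using documented Office.js calls (the slice bookkeeping and sliceSize value are illustrative):

Office.context.document.getFileAsync(Office.FileType.Compressed, { sliceSize: 65536 }, (result) => {
  const file = result.value;
  const slices = [];
  let received = 0;
  for (let i = 0; i < file.sliceCount; i++) {
    file.getSliceAsync(i, (sliceResult) => {
      slices[sliceResult.value.index] = sliceResult.value.data; // byte array for this slice
      if (++received === file.sliceCount) {
        file.closeAsync(); // release the handle when done
        // concatenate `slices` into the full OOXML byte array here,
        // then transform/export it however you need
      }
    });
  }
});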
I need to create a PDF from HTML inside a React app.
Many packages I have found prompt a download in the browser (like jsPDF), but I actually need the PDF as a binary string. I need to send this string to a private API that stores the PDF (the binary string) in S3 as a PDF file. This private API call already exists, and I cannot change anything in that code.
I am struggling to understand why this is so hard. How would you go about converting HTML to a PDF binary string? Thanks for any suggestions, packages, etc. Plain JavaScript is fine, as long as I can use it inside my React app.
Bonus points if the solution can accept HTML tags, since the input comes from a WYSIWYG editor.
This server-side solution works with any HTML framework.
https://github.com/PDFTron/web-to-pdf
This is from the company I work for, but it is AGPL-3.0, so you should be able to use it without a problem.
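As a client-side alternative: jsPDF (which the question mentions) does not have to trigger a download; its output() method returns the document itself. A minimal sketch with placeholder text content (rendering arbitrary HTML into the document is a separate problem; jsPDF's html support relies on html2canvas):

import { jsPDF } from 'jspdf';

const doc = new jsPDF();
doc.text('Hello world', 10, 10); // placeholder content
const binaryString = doc.output();         // the PDF as a binary string
const buffer = doc.output('arraybuffer');  // or as an ArrayBuffer
// binaryString is what you would send to the private API that stores the PDF in S3.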
I want to parse an Excel file using JavaScript in HTML. I successfully parsed it by reading a user-selected file, but I want the page to read the Excel file from the current directory automatically on page load. Is that possible? I want to run the HTML as a local file, without any server.
Yes you can, using SheetJS.
As the docs show, you simply specify the file you want to open when you initialize, and parsing it is then straightforward.
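A minimal sketch, assuming the XLSX global from SheetJS is loaded and a workbook named data.xlsx sits next to the page. Note that most browsers block fetch/XHR on file:// URLs, so fully serverless local use may still be restricted:

fetch('data.xlsx')
  .then((res) => res.arrayBuffer())
  .then((buf) => {
    const workbook = XLSX.read(buf, { type: 'array' }); // parse the binary workbook
    const firstSheet = workbook.Sheets[workbook.SheetNames[0]];
    console.log(XLSX.utils.sheet_to_json(firstSheet)); // rows as an array of objects
  });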
What's the correct way to export an HTML table as an Excel file so that the user can click a button and download the Excel file (ideally using Angular and without using server)?
I've seen many answers like this:
Export to xls using angularjs, but doing this gives an error similar to the following:
"The file format and extension don't match... The file could be corrupted..."
and I believe the file is actually in HTML or XML format, not actual Excel.
The warning does not present a good image to the user.
What's the right way to actually export a file as Excel without using the server?
Or is the server required to create the file?
If you are just exporting tabular data, then I would argue that the best solution is to build a CSV file. Excel can open it natively and convert it into an XLS file if necessary. You can do this by packaging your data in a data URI; the application/octet-stream MIME type forces a file download rather than opening in the browser.
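Here is a sketch of the approach; the rows and file name are placeholders:

var rows = [
  ['Name', 'Score'],
  ['Alice', '97'],
  ['Bob', '82']
];
var csv = rows.map(function(r) { return r.join(','); }).join('\n');
var uri = 'data:application/octet-stream;charset=utf-8,' + encodeURIComponent(csv);

var link = document.createElement('a');
link.href = uri;
link.download = 'export.csv'; // suggested file name for the download
document.body.appendChild(link);
link.click();
document.body.removeChild(link);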
I'm trying to figure out the best way to accomplish the following:
Download a large XML file (1 GB) on a daily basis from a third-party website
Convert that XML file into a relational database on my server
Add functionality to search the database
For the first part, is this something that would need to be done manually, or could it be accomplished with a cron?
Most of the questions and answers related to XML and relational databases refer to Python or PHP. Could this be done with javascript/nodejs as well?
If this question is better suited for a different StackExchange forum, please let me know and I will move it there instead.
Below is a sample of the XML:
<case-file>
    <serial-number>123456789</serial-number>
    <transaction-date>20150101</transaction-date>
    <case-file-header>
        <filing-date>20140101</filing-date>
    </case-file-header>
    <case-file-statements>
        <case-file-statement>
            <code>AQ123</code>
            <text>Case file statement text</text>
        </case-file-statement>
        <case-file-statement>
            <code>BC345</code>
            <text>Case file statement text</text>
        </case-file-statement>
    </case-file-statements>
    <classifications>
        <classification>
            <international-code-total-no>1</international-code-total-no>
            <primary-code>025</primary-code>
        </classification>
    </classifications>
</case-file>
Here's some more information about how these files will be used:
All XML files will be in the same format. There are probably a few dozen elements within each record. The files are updated by a third party on a daily basis (and are available as zipped files on the third-party website). Each day's file represents new case files as well as updated case files.
The goal is to allow a user to search for information and organize those search results on the page (or in a generated pdf/excel file). For example, a user might want to see all case files that include a particular word within the <text> element. Or a user might want to see all case files that include primary code 025 (<primary-code> element) and that were filed after a particular date (<filing-date> element).
The only data entered into the database will be from the XML files--users won't be adding any of their own information to the database.
All steps could certainly be accomplished using Node.js. There are modules available that will help you with each of these tasks:
node-cron: lets you easily set up cron tasks in your Node program (see the sketch after this list). Another option would be to set up a cron task in your operating system (lots of resources available for your favourite OS).
download: a module to easily download files from a URL.
xml-stream: lets you stream a file and register events that fire when the parser encounters certain XML elements. I have successfully used this module to parse KML files (granted, they were significantly smaller than your files).
node-postgres: a Node client for PostgreSQL (I am sure there are clients for many other common RDBMSs; PG is the only one I have used so far).
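For instance, a rough sketch of the daily fetch using node-cron and download; the URL, schedule, and file name are placeholders:

var cron = require('node-cron');
var download = require('download');
var fs = require('fs');

// Run every day at 03:00; the URL stands in for the third-party feed.
cron.schedule('0 3 * * *', function() {
  download('https://example.com/daily-case-files.zip').then(function(data) {
    fs.writeFileSync('daily-case-files.zip', data); // save the zipped XML locally
    // unzip, then hand the XML to the streaming parser shown below
  });
});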
Most of these modules have pretty great examples that will get you started. Here's how you would probably set up the XML streaming part:
var fs = require('fs');
var XmlStream = require('xml-stream');

var xml = fs.createReadStream('path/to/file/on/disk'); // or stream directly from your online source
var xmlStream = new XmlStream(xml);

// Note the colon: xml-stream names its element events 'endElement: <element>'.
xmlStream.on('endElement: case-file', function(element) {
  // create and execute SQL query/queries here for this element
});

xmlStream.on('end', function() {
  // done reading elements
  // do further processing / query database, etc.
});
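The insert step might then look roughly like this with node-postgres; the table and column names are invented to match the sample XML above:

var pg = require('pg');
var pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });

function insertCaseFile(element) {
  // element['serial-number'] etc. are the child elements xml-stream collected
  return pool.query(
    'INSERT INTO case_files (serial_number, transaction_date) VALUES ($1, $2)',
    [element['serial-number'], element['transaction-date']]
  );
}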
Are you sure you need to put the data in a relational database, or do you just want to search it in general?
There don't seem to be any actual relations in the data, so it might be simpler to put it in a document search index such as ElasticSearch.
Any automatic XML to JSON converter would probably produce suitable output. The large file size is an issue. This library, despite its summary saying "not streaming", is actually streaming if you inspect the source code, so it would work for you.
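Indexing one converted record might look roughly like this with the elasticsearch Node client; the index name and the caseFile object are illustrative:

var elasticsearch = require('elasticsearch');
var client = new elasticsearch.Client({ host: 'localhost:9200' });

// caseFile is assumed to be the JSON produced by the XML-to-JSON conversion.
client.index({
  index: 'case-files',
  type: 'case-file',
  id: caseFile['serial-number'],
  body: caseFile
}, function(err, response) {
  if (err) console.error(err);
});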
I had a task with XML files like this. These are the principles I used:
All incoming files were stored as-is in the DB (XMLTYPE), because I needed the source file info;
All incoming files were parsed with an XSL transformation. For example, I see three entities here: fileInfo, fileCases, fileClassification. You can write an XSL transformation that splits the source file into those three entity types (under the tags FileInfo, FileCases, FileClassification);
Once you have the transformed output XML, you can write three procedures that insert the data into the DB (one per entity area).
I need to store a file pairing colours and images for use in my JavaScript. I would have liked to use a simple CSV file and Papa Parse, but Papa Parse requires either text or a File object as input, and I can find no way of opening a File object, nor of reading the text from the CSV file. Surely my code should be allowed to read files that reside under the web site, just not arbitrary files elsewhere in the file system?
My alternative is to have the end user, who is non-technical, edit a JSON file that my code parses.
Am I wrong, or is this really the case? If so, maybe I should build an editor for the JSON file that simplifies the data editing for the end user.
All the file has to store is colour/image name pairs.
Have you tried using something like this? Are you getting an error? Sorry, I cannot leave comments yet.
StreamWriter _testData = new StreamWriter(Server.MapPath("~/data.txt"), true);
_testData.WriteLine(TextBox1.Text); // Write the file.
_testData.Flush();
_testData.Close(); // Close the instance of StreamWriter.
_testData.Dispose(); // Dispose from memory.
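Getting back to the original question: for what it's worth, Papa Parse can also fetch a hosted CSV by itself via its download option; a minimal sketch, assuming a colours.csv file served alongside the page:

Papa.parse('colours.csv', {
  download: true, // Papa fetches the URL itself, no File object needed
  header: true,   // first row holds the column names, e.g. colour,image
  complete: function(results) {
    console.log(results.data); // array of { colour, image } pairs
  }
});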