Append a table object to an existing Excel file - javascript

I am trying to build an app where data entered in the UI should be appended to an existing Excel file. I am using Tauri: my front end is React based (JSX) and my back end uses Rust. When I click a button called "Append", the program needs to append the data to the specified spreadsheet as rows.
An example is shown below. I have a sample table in the UI like this:
Name | Surname | Age
John | Doe | 32
Tom | Brown | 12
(Append)
There is an existing file at C:\Test.xlsx whose content is as follows:
Name | Surname | Age
Susan | Ford | 45
Mike | Ferry | 59
When I click the "Append" button in my browser, the spreadsheet needs to be updated like this:
Name | Surname | Age
John | Doe | 32
Tom | Brown | 12
Susan | Ford | 45
Mike | Ferry | 59
I'd be thrilled to hear your creative ideas for this problem, and I'm looking for a solution since I couldn't find any resource about appending tables to Excel files with a button click using JS. Thanks in advance.
I tried searching for how to append tables to Excel files on various platforms but couldn't find any solution; it may just be my ignorance.
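One possible approach, sketched below: since the file lives on disk, the front end can read it, append rows, and write it back using the SheetJS xlsx package together with Tauri's fs API. This is a minimal untested sketch, not a definitive solution; it assumes Tauri v1's fs module (the path must be allowed by the fs scope in tauri.conf.json), the SheetJS "xlsx" package, and that the data sits on the first worksheet.

import { readBinaryFile, writeBinaryFile } from "@tauri-apps/api/fs";
import * as XLSX from "xlsx";

// Append rows to the first worksheet of an existing .xlsx file.
async function appendRows(path: string, rows: (string | number)[][]): Promise<void> {
  const bytes = await readBinaryFile(path);              // e.g. "C:\\Test.xlsx"
  const book = XLSX.read(bytes, { type: "array" });      // parse the workbook
  const sheet = book.Sheets[book.SheetNames[0]];         // first worksheet
  XLSX.utils.sheet_add_aoa(sheet, rows, { origin: -1 }); // origin -1 = after the last row
  const out = XLSX.write(book, { bookType: "xlsx", type: "array" });
  await writeBinaryFile(path, new Uint8Array(out));      // overwrite the file on disk
}

// Wired to the button in the React component (illustrative values):
// <button onClick={() => appendRows("C:\\Test.xlsx", [["John", "Doe", 32], ["Tom", "Brown", 12]])}>Append</button>

Note that this appends the new rows below the existing ones; placing them directly under the header, as in your example, would additionally require shifting the existing rows down. The same job could also be done on the Rust side in a Tauri command with a crate that can read and rewrite .xlsx files.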

Related

Using Sheets to search through the entire sheet and pull up results in a column

I have a bunch of sheets I use for personal work. They hold different car parts under different tabs in each sheet.
I created a master sheet that uses IMPORTRANGE on all of them and shows links in a master tab to jump to each tab separately (doors, hoods, lightbulbs, door trims, roof racks, a box of Crown Vic parts; it's all over the place).
Is there a way for a user to search some text in a cell and have the column next to it populate with matching results and, ultimately, link to the tab and row where the item exists?
For example: I have a sheet called "Search" and I type "crown vic" in A2. Then B2:B100 is populated with any items found in the entire sheet containing the words "crown vic", and C2:C100 has a link to the tabbed sheet each item is in.
Link to a test page to get my idea across:
https://docs.google.com/spreadsheets/d/1WrImPYHhhMOOZbf-AE2sNs82xL-u8wWOW4IFel6RGcY/edit#gid=999756632
I believe it would be better for me to use JavaScript and HTML to create a web database for all this info instead of using Sheets, since Sheets is limited in some of the ways I want to use it. Ultimately I want it to be easier to find all the data by bringing things up with search.
I think I have a basic answer for you. However, your sample sheet is not very much like your final sheet, with all of the tabs you've mentioned, so I can only demonstrate the working concept here. With a really representative sample sheet, I could flesh out more details on how the links to multiple possible tabs would need to be built. See my sample tab, GK-Help Search, added to your sample sheet.
First, we do a query, in column B, to return the list of matching car parts.
=QUERY('Car Parts'!A2:A,"select A where upper(A) contains '"&UPPER(A2)&"' ",0)
For your production sheet, this would require all of the data tabs to be concatenated in a vertically stacked array, e.g.:
=QUERY({ 'doors'!A2:A;
'hoods'!A2:A;
'lights'!A2:A },"select...")
Then the main formula is this, in C2:
=HYPERLINK("https://docs.google.com/spreadsheets/d/1WrImPYHhhMOOZbf-AE2sNs82xL-u8wWOW4IFel6RGcY/edit#gid=" &
"0" & "&range=" &
SUBSTITUTE(REGEXEXTRACT(CELL("address",INDIRECT("'Car Parts'!A" & MATCH(B2,'Car Parts'!A$2:A,0)+1)),"(\$.*)"),"$",""),
CELL("address",INDIRECT("'Car Parts'!A" & MATCH(B2,'Car Parts'!A$2:A,0)+1)))
This does a lookup of each car part to get the address of its cell. Then a dynamic HYPERLINK is built, using the URL of the spreadsheet and the address of the cell. The element that is not fleshed out in my demo is how to build the "gid" part of the URL, since you did not provide multiple sample tabs. But this is very possible.
Here is a previous answer on doing that last part.
how-to-insert-hyperlink-to-a-cell-in-google-sheet-using-formula
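As a complement, here is a rough Apps Script sketch (written in TypeScript, which Apps Script tooling such as clasp compiles) of how that gid could be built in script rather than by formula, using getSheetId(). The function name and structure are mine, not part of the formula approach above:

// Search every tab for a term; return [matched value, link] pairs.
function searchAllTabs(term: string): string[][] {
  const ss = SpreadsheetApp.getActive();
  const results: string[][] = [];
  for (const sheet of ss.getSheets()) {
    // TextFinder locates every cell on this tab containing the search term.
    const matches = sheet.createTextFinder(term).matchCase(false).findAll();
    for (const cell of matches) {
      // getSheetId() supplies the gid that the formula version leaves open.
      const url = ss.getUrl() + '#gid=' + sheet.getSheetId() +
                  '&range=' + cell.getA1Notation();
      results.push([String(cell.getValue()), url]);
    }
  }
  return results; // e.g. write back into B2:C with setValues() from a custom menu
}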
My sample sheet looks like the following:

Adding html codes to SharePoint Online

Is it possible to add HTML code to SharePoint Online?
I would like to add an HTML snippet from Statista such as the following:
<img src="https://www.statista.com/graphic/1/262861/uk-brent-crude-oil-monthly-price-development.jpg" alt="Statistic: Average monthly Brent crude oil price from July 2018 to July 2019 (in U.S. dollars per barrel)* | Statista" style="width: 100%; height: auto !important; max-width:1000px;-ms-interpolation-mode: bicubic;"/><br />Find more statistics at Statista
As far as I know, only iframe-based embed codes can be entered into SP Online. However, Statista only provides the version above.
Is there any way to add this code to SP Online?
Please note that I am really not familiar with such code.
Thanks in advance
Markus
The only way you can do this out of the box is by saving your HTML code as a .aspx page, uploading it into a document library, and then using the 'Embed' web part.
It's terribly annoying, as you have to edit your code separately, but this will allow any completely custom code (HTML/CSS/JS) to be inserted into the page.
Adding raw HTML to a SharePoint modern page isn't supported. However, you can achieve this on a classic page.
Go to the Pages library and create a classic page. Click Edit Source in the ribbon and add the code.
You can deploy the React Script Editor web part to your site, then add the HTML code to that web part on a modern site page. On a classic site page, you can use the OOTB Script Editor web part to achieve the same.
If you have a standard SharePoint page, click the add button to add a new element and then select "Code Snippet". You can then paste the code there. Depending on your setup, however, you may run into some permission issues.

How to create a table of content for pdf viewer in angular 2

I'm working on an online learning platform. I've used ng2-pdf-viewer to display my lessons, which are in PDF format. I'm looking for a way to create a sidebar menu where the user can find all the different chapters of the lesson (like this).
I've thought about using page jumps to move from one chapter to another as you click its name in the side menu, but that would be very inconvenient for me because there will be quite a lot of lessons and PDF files. I was looking into how the table-of-contents panel that you find in any PDF reader is made, but I didn't get anywhere. Any suggestions? (A sketch of one possible direction follows.)
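One possible direction: ng2-pdf-viewer's (after-load-complete) event hands back the underlying pdf.js document, and its getOutline() method returns the bookmark tree that PDF readers use to draw exactly that kind of table of contents. The following is an untested sketch; the component, template, and lesson path are invented, and it assumes your PDF files actually embed an outline (not every file does).

import { Component } from '@angular/core';
// PDFDocumentProxy is pdf.js's document handle, re-exported by ng2-pdf-viewer.
import { PDFDocumentProxy } from 'ng2-pdf-viewer';

interface TocEntry { title: string; page: number; }

@Component({
  selector: 'app-lesson',
  template: `
    <ul class="toc">
      <li *ngFor="let entry of toc" (click)="page = entry.page">{{ entry.title }}</li>
    </ul>
    <pdf-viewer [src]="lessonUrl" [(page)]="page"
                (after-load-complete)="buildToc($event)"></pdf-viewer>
  `,
})
export class LessonComponent {
  lessonUrl = 'assets/lesson1.pdf'; // hypothetical path
  page = 1;
  toc: TocEntry[] = [];

  async buildToc(pdf: PDFDocumentProxy): Promise<void> {
    const outline = await pdf.getOutline(); // null when the PDF has no bookmarks
    if (!outline) { return; }
    for (const item of outline) {
      // A destination is either a named destination (string) or an explicit array.
      const dest = typeof item.dest === 'string' ? await pdf.getDestination(item.dest) : item.dest;
      if (!dest) { continue; }
      const pageIndex = await pdf.getPageIndex(dest[0]); // dest[0] is a page reference
      this.toc.push({ title: item.title, page: pageIndex + 1 });
    }
  }
}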

Dynamic Website Update Script

I have a website with various applets/widgets/RSS feeds. How would I go about creating a cron script that can figure out when one of these applets/RSS feeds was last updated and store that in a database?
I'd need to be able to differentiate between an update in one feed or another. Some widgets have only pictures, and one of them is the Twitter widget, so the content is all different.
You could look for an element that has a date; for example, an RSS feed typically has a pubDate element such as <pubDate>Mon, 28 Jul 2014 14:03:00 -0400</pubDate>. Or you could have a database row that records the last article parsed from each RSS feed source. There are a lot of ways to do this; it might be worth looking on GitHub for an open-source RSS reader and seeing if they have solved this problem.
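To make the pubDate approach concrete, here is a minimal TypeScript sketch (Node 18+ for the global fetch); the feed URL and the in-memory lastSeen map are placeholders for your database table:

// Maps feed URL -> the newest pubDate seen so far; stand-in for a DB table.
const lastSeen: Record<string, string> = {};

async function feedHasUpdated(feedUrl: string): Promise<boolean> {
  const xml = await (await fetch(feedUrl)).text();
  // Grab the first <pubDate>, which in most feeds belongs to the newest item.
  // A real implementation should use a proper XML/RSS parser, not a regex.
  const match = xml.match(/<pubDate>([^<]+)<\/pubDate>/);
  if (!match) return false;
  const latest = match[1];
  if (lastSeen[feedUrl] === latest) return false; // nothing new since last run
  lastSeen[feedUrl] = latest;                     // a real script would UPDATE the DB here
  return true;
}

// Run from cron, once per tracked feed:
feedHasUpdated('https://example.com/feed.xml')
  .then(updated => console.log(updated ? 'feed updated' : 'no change'));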

Scraping javascript-generated tables in R with -relenium-

Recently, starting from this very useful question (Scraping html tables into R data frames using the XML package) I successfully used the XML package for scraping HTML tables.
Now I am trying to extract Javascript-generated tables from here:
Tables 2013 (then click on "Sortare alfabetică").
I am interested in exporting the first 9 columns of the, say, pag.1-pag.10 data.
I went through different related questions on the forum, including some where it was suggested not to use R for such a task, and a similar question that did not prove directly useful for my problem. As suggested, I have been reading the information about the Relenium package (see the developers' toy example here).
According to the structure of the website where the tables of interest are located, I have to click a first button to access the tables sorted by name and then click a second button to navigate through each of the following tables I want to export. In practice I have to:
1. Click the "Sortare alfabetică" button
2. Copy the first 9 columns of the 10-row table
3. Click the next-page button (called "Pagina urmatoare")
and repeat steps 2-3 ten times.
By using the Chrome inspector (Tools > Developer tools) I found the following paths for the two buttons:
/html/body/table/tbody/tr[1]/td/table[2]/tbody/tr[2]/td/table/tbody/tr/td[2]/a
/html/body/table/tbody/tr[1]/td/table[3]/tbody/tr/td[4]/table
I started with this code in order to accomplish step 1:
library(relenium)
firefox <- firefoxClass$new()
firefox$get("http://bacalaureat.edu.ro/2013/rapoarte/rezultate/index.html")
buttonElement <- firefox$findElementByXPath("/html/body/table/tbody/tr[1]/td/table[2]/tbody/tr[2]/td/table/tbody/tr/td[2]/a")
buttonElement$click()
But I get the following error:
[1] "Error: NoSuchElementException"
[1] "Thrown by Firefox$findElement(By by) and webElement$findElement(By by)."
I don't know whether there is an easier way to proceed, but an alternative to step 3 for navigating through pag.1-pag.10 could be to work with the dropdown menu of the webpage.
The paths for pag.1 and pag.2 are:
//*[#id="PageNavigator"]/option[1]
//*[#id="PageNavigator"]/option[2]
Focusing on scraping data from a single table
Clearly, even before navigating the 10 tables through the buttons or the dropdown menu, the crucial problem is to extract the data contained in each table.
With this code I tried to focus on extracting the first 9 columns of the first table only (the code could then be iterated over "http://bacalaureat.edu.ro/.../page_2.html", "http://bacalaureat.edu.ro/.../page_3.html", etc.):
library(XML)
library(relenium)
firefox <- firefoxClass$new()
firefox$get("http://bacalaureat.edu.ro/2013/rapoarte/rezultate/alfabetic/page_1.html")
doc <- htmlParse(firefox$getPageSource())
tables <- readHTMLTable(doc, stringsAsFactors=FALSE)
But the output is extremely messy. I don't know if this makes sense, and I am only guessing, but it could be necessary to go deeper into the JavaScript code and extract the information in the table cell by cell.
For instance, for the first individual, the 9 variable values of interest are characterized by the following XPaths:
//*[#id="mainTable"]/tbody/tr[3]/td[1]
//*[#id="mainTable"]/tbody/tr[3]/td[2]
//*[#id="mainTable"]/tbody/tr[3]/td[3]/a
//*[#id="mainTable"]/tbody/tr[3]/td[4]/a
//*[#id="mainTable"]/tbody/tr[3]/td[5]/a
//*[#id="mainTable"]/tbody/tr[3]/td[6]/a
//*[#id="mainTable"]/tbody/tr[3]/td[7]
//*[#id="mainTable"]/tbody/tr[3]/td[8]
//*[#id="mainTable"]/tbody/tr[3]/td[9]
Using these paths, the entries of each cell could be saved into an R vector and the procedure could be repeated for all the other individual-specific rows of data. Is it sensible to proceed like this? If so, how would you proceed with -relenium-?
