Convert a javascript object to CSV file - javascript

I have a script which reads a file line by line and generates an object with some fields from certain lines, and now I want to put that generated object into a CSV file.
How can I do the following:
1. From the script itself, generate a CSV file
2. Give initial fields (headers) to the file
3. Update that file line by line (add to the file one line at a time)
One clarification: I don't know the size of the CSV in advance, so the file must be changed dynamically.
Thanks in advance.

Looking at what you have said:
1. From the script itself generate a CSV file
Have a look at node-csv-generate, which lets you generate CSV strings easily.
2. Give initial fields (headers) to the file & 3. Update that file line by line (add to the file one line at a time)
Check out the node-csv-generate stream functionality to write line by line (i.e. initial headers first).
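If you would rather not pull in a dependency, here is a minimal sketch using only Node's built-in fs module; the file name and field names are just assumptions for illustration:
// Write the header once, then append one row per generated object.
// 'report.csv' and the fields name/count/date are example choices.
var fs = require('fs');

var outFile = 'report.csv';
fs.writeFileSync(outFile, 'name,count,date\n'); // initial headers

function appendRow(obj) {
  // Naive CSV escaping: quote every value and double embedded quotes.
  var row = [obj.name, obj.count, obj.date].map(function (v) {
    return '"' + String(v).replace(/"/g, '""') + '"';
  }).join(',');
  fs.appendFileSync(outFile, row + '\n'); // one line at a time
}

appendRow({ name: 'foo', count: 3, date: '2016-01-01' });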
Now, since you said you need to run it locally, I would recommend Rhino if you are just using JS, but if Node.js is required then check out Rhinodo. These basically let you run the program locally on the JVM (you could call the JS from within Java if you wanted to).
To export the CSV file there are plenty of examples online, this SO thread being one, i.e.:
var encodedUri = encodeURI(csvContent);
window.open(encodedUri);
where csvContent is the complete string of your CSV. I am not sure how well supported this is on Rhinodo, but I'm pretty sure it will all work on Rhino.
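Note that for window.open to offer the text as a download, csvContent usually needs to start with a data URI prefix, roughly like this (the rows are example data; newer browsers may instead require an anchor with a download attribute):
var csvContent = 'data:text/csv;charset=utf-8,' +
    'name,count\nfoo,3\nbar,7\n'; // example rows
var encodedUri = encodeURI(csvContent);
window.open(encodedUri); // the browser offers the CSV for download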
If this is intended to be a purely desktop-based application, I would look at using Java (or your preferred language; Python or C# might be nicer depending on what you are used to :-) ) rather than JS if everything needs to be local and it is intended to be widely used. That way you have a much cleaner interaction with the OS and a lot more control.
I hope this helps!

Related

Is it possible to pass a string variable as a file in command line argument?

I'm invoking a command-line child process that converts a local file to another file format. It goes like this:
>myFileConversion localfile.txt convertedfile.bin
That would convert the localfile.txt to the needed format in a file named convertedfile.bin.
It also has an option to put the contents in stdout.
I'm running this in Node.js on the server and need to create localfile.txt on the fly.
The contents of localfile.txt is just a string I generate dynamically. If possible, I would like to pass the string instead of writing it to a file, to be more efficient. How could I do this? Is it possible? Would it be faster than just writing to the local file?
As Chris mentioned in the comments, it may be possible to pipe the data, but since I only need to save the file once, it's easier to just save the file locally and pass the name of the file.
Please post other possible answers as well!
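For reference, if the conversion tool can read its input from stdin (that is an assumption; many CLI tools accept - as the input file name, but yours may not), the piping approach would look roughly like this:
// Sketch only: assumes myFileConversion accepts '-' meaning "read from stdin".
var spawn = require('child_process').spawn;

var child = spawn('myFileConversion', ['-', 'convertedfile.bin']);
child.stdin.write(generatedString); // the string you would otherwise write to localfile.txt
child.stdin.end();

child.on('close', function (code) {
  console.log('conversion finished with exit code ' + code);
});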

Removing part of a file in node.js

Currently, I'm looking at trying to remove part of what is basically a proprietary archive format; in order to support the ability to remove a file, I'm trying to figure out how to remove a segment of the file (given an offset and a length). I see there's plenty of append logic when it comes to the fs module of node, but nothing that seems to "splice" parts of a file.
Is this going to be even possible? Will I have to resort to the less preferred option of writing to an entirely new file instead?
The operating system handles appending to a file very quickly; there is no need to rewrite the whole file when you open it for appending.
But if you wish to slice (cut) out the middle of a file, it doesn't matter which programming language you use: you have to read the whole file and save it again.
What you can do is create a new file and write two slices of the input buffer to it:
var fs = require('fs');

// Read the whole file, then write out only the ranges you want to keep.
var buffer = fs.readFileSync('input_file');
fs.writeFileSync('output', buffer.slice(0, 20));    // bytes 0-19
fs.appendFileSync('output', buffer.slice(50, 100)); // bytes 50-99 (20-49 are dropped)
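For files too large to read into memory at once, the same idea can be sketched with streams (the byte offsets below mirror the example above; note that createReadStream's start/end options are inclusive):
// Copy bytes 0-19 and 50-99 of input_file into output, skipping 20-49.
var fs = require('fs');

var out = fs.createWriteStream('output');
var first = fs.createReadStream('input_file', { start: 0, end: 19 });
first.pipe(out, { end: false }); // keep the output stream open
first.on('end', function () {
  var second = fs.createReadStream('input_file', { start: 50, end: 99 });
  second.pipe(out); // closes output when this range is done
});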

How to scrape javascript table in R?

I want to scrape a table from the Citi Bike trip data page: https://s3.amazonaws.com/tripdata/index.html
My goal is to get the URLs of the zip files all at once, instead of manually typing all the dates and downloading one at a time. Since the webpage is updated monthly, every time I run the function I want to be able to get all the up-to-date data files.
I first tried to use the rvest and XML packages and then realized that the webpage contains both the HTML and a table that's generated by a JavaScript function. That's where the problem was.
Really appreciate any help and please let me know if I could provide further information.
If I go to https://s3.amazonaws.com/tripdata/ (just the root, no index.html) I get a simple XML file. The relevant element is Key (uppercase K, lowercase e, y). You can parse the XML if you want, but I would just search the plain text: ignore the XML, treat it like a simple text file, take every string between <Key> and </Key>, treat that as the filename it is, and prefix https://s3.amazonaws.com/tripdata/ to get the full URL.
The first entry appears to be everything together (170 MB), so you might be fine with that alone.

How can I save a very large in-memory object to file?

I have a very large array with thousands of items.
I tried this solution:
Create a file in memory for user to download, not through server, i.e. creating an anchor that points at the data as a text file.
~~JSON.stringify on the array caused the tab to freeze~~ Correction: trying to log out the result caused the tab to freeze; stringify by itself works fine.
The data was originally in string form, but creating an anchor with that data resulted in a no-op. I'm assuming this is also because the data was too big, since using dummy data successfully triggered a file download.
How can I get this item onto my filesystem?
edit/clarification:
There is a very large array that I can only access via the browser inspector/console. I can't access it via any other language.
JavaScript does not allow you to read or write files, except for cookies, and I think the amount of data you are using exceeds the size limit for cookies. This is for security reasons.
However, languages such as PHP, Python and Ruby do allow reading and writing files. It appears you are using binary data, so use binary file write functions.
As to the choice of language: if you already know one, use that, or whichever you can get help with. Writing a file is a very basic operation and all three languages are equally good. If you don't know any of these languages, you can literally copy and paste the code from their websites.
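For reference, the in-memory download approach the question links to (an anchor pointing at the data) would look roughly like this in a current browser; whether it copes with an array this large is not guaranteed, and myLargeArray below is just a stand-in for the variable visible in the console:
// Sketch: turn the in-memory data into a Blob and trigger a download via an anchor.
var text = JSON.stringify(myLargeArray);          // myLargeArray is the in-console variable
var blob = new Blob([text], { type: 'text/plain' });
var a = document.createElement('a');
a.href = URL.createObjectURL(blob);
a.download = 'data.txt';                          // suggested file name
document.body.appendChild(a);
a.click();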

How to save a files contents in Javascript/jQuery

Basically, I want to upload ONLY a CSV file via JavaScript or jQuery.
I want to try and do this without any PHP involved.
I need to use an HTML upload form, and then save only its contents to a multidimensional array or a string.
I do not need to save the uploaded file to the server; I just need to save its contents to a string, as stated.
I have looked far and wide online, yet everything involves PHP.
Is this possible with just JavaScript or jQuery?
Thanks in advance
This uses a library I wrote and released under the GPLv3 License: html5csv
The example below uploads a CSV file into the browser, where it is available as an array of arrays.
The library supports various block operations, such as making a table, editing, plotting, fitting, calling a function, and saving in browser session storage or local storage.
JSFIDDLE
html
Choose a CSV file to load into the application:
<input id='foo' type='file'>
<hr />
js (requires jQuery and html5csv.js)
CSV.begin('#foo').
table('output', {header:1, caption:'Uploaded CSV Data'}).
go();
Here, go() can take a function callback (e, D), where e will contain an error string or null, and D is an object that may contain D.rows[0][0], ..., D.rows[n-1][m-1] for an n x m matrix of data. Row 0 may be a header row.
Asynchronicity is used, and in fact enforced in places. So beware that, like AJAX, this code returns immediately to the subsequent line, and is best read as setting up a workflow of what to do when the previous step becomes ready.
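For example, a sketch based only on the callback shape described above (the error handling is an assumption about typical usage):
CSV.begin('#foo').
  table('output', {header: 1, caption: 'Uploaded CSV Data'}).
  go(function (e, D) {
    if (e) { console.log('CSV error: ' + e); return; }
    // D.rows is an array of arrays; row 0 may be the header row
    console.log(D.rows.length + ' rows, ' + D.rows[0].length + ' columns');
  });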
Saving/Restoring
You can save data into the user's browser localStorage object with .save('local/someKey') somewhere in the workflow, and the data existing in the array at that point will be stored in HTML5 local storage (perhaps even compressed, if you include the LZString library as documented) until the browser user deletes it.
Then in the same page or another page on the same web site you can get the data back out with CSV.begin('local/someKey')...
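A rough sketch of that round trip, based on the calls described above (the key name local/mydata is just an example):
// Parse the uploaded file and stash the rows in localStorage.
CSV.begin('#foo').
  save('local/mydata').
  go();

// Later, on this page or another page of the same site, read the data back out.
CSV.begin('local/mydata').
  table('output', {header: 1, caption: 'Restored CSV Data'}).
  go();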
Using the data
You should put any code that uses the data into a function matching the callbacks expected by html5csv's call or go, as documented on the html5csv site.
The jQuery CSV plugin can use client-side file handling (no need for server-side script like PHP):
https://code.google.com/p/jquery-csv/#Client-Side_File_Handling
You can use a plugin which allows you to parse CSV into an array.
http://code.google.com/p/jquery-csv/
Features
Convert a CSV String to an array
Convert a multi-line CSV string to a 2D array
Convert a multi-line CSV string to an array of objects (ie header:value pairs)
Convert an array of values to CSV (under development)
Convert an array of objects to CSV (under development)
Hooks/Callbacks to extend the default parsing process
Customizable delimiter (default: ") and separator (default: ,) characters
Node.js support (ie CommonJS importing and async callback support)
To do the upload, you need to be able to read the file off the disk. You can do this with the HTML5 File API. I'm sure there are jQuery libraries to simplify this, but that's the underlying tech.
Someone else posted a question (and solution) on how to do that with jQuery: html5's file api example with jquery?
Once you've got access to the file in the browser, use a CSV library to work with it.
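Putting the two together, here is a sketch that reads the chosen file with the File API and parses it with jquery-csv; the input id csvFile is an assumption, and $.csv.toArrays is the plugin's string-to-2D-array call, inferred from the feature list above:
// Read the selected file as text, then parse it into an array of arrays.
$('#csvFile').on('change', function () {
  var file = this.files[0];
  if (!file) { return; }
  var reader = new FileReader();
  reader.onload = function (e) {
    var rows = $.csv.toArrays(e.target.result); // 2D array: rows[rowIndex][colIndex]
    console.log(rows.length + ' rows read; first row:', rows[0]);
  };
  reader.readAsText(file);
});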
