Filename change with timestamp - javascript

Dynamic file generation with timestamp
Hi
I have two files like FILE_123.csv and FILE_456.csv, but in real time the file names will be dynamic.
As the two files come in, I want to rename each one to FILE_'YYYYMMDDHH24MISS', with a one-second difference between the files. I think we can use a wait time for each file generation by comparing each file in JavaScript.
Can someone suggest the code for this?

Related

Can I give a file a unique ID?

I'm working with Node.js.
I have a webpage that accepts an Excel upload, reads the Excel file, and creates products based on the info it contains. My reading algorithm only works with a certain format; the Excel file can't be written in just any random way. So I offer an Excel file written in that format for the user to download, fill in (only certain cells can be modified), and then upload.
The problem is distinguishing the right file (the one I offer) from any other random file without wasting time reading it. Is there any way to give a file an ID or something that certifies it is actually the file that is supposed to be uploaded? At first I thought about the name, but anyone can upload a random file with the same name, so I don't see any way. Can you help me? Thank you

Is it possible to pass a string variable as a file in command line argument?

I'm invoking a command-line child process that converts a local file to another file format. It goes like this:
>myFileConversion localfile.txt convertedfile.bin
That would convert the localfile.txt to the needed format in a file named convertedfile.bin.
It also has an option to put the contents in stdout.
I'm running this in Node.js on the server and need to create localfile.txt on the fly.
The contents of localfile.txt are just a string I dynamically generate. If possible, I would like to pass the string instead of writing it to a file, to be more efficient. How could I do this? Is it possible? Would it be faster than just writing to the local file?
As Chris mentioned in the comments, it may be possible to pipe the data, but since I only need to save the file once, it's easier to just save the file locally and pass the name of the file.
Please post other possible answers as well!

Convert a javascript object to CSV file

I have a script which reads a file line by line and generates an object with some fields from certain lines, and now I want to put that generated object into a CSV file.
How can I do the following:
1. From the script itself, generate a CSV file
2. Give initial fields (headers) to the file
3. Update that file line by line (add one line to the file at a time)
One clarification: I don't know the size of the CSV in advance, so the file must be dynamically extended.
Thanks in advance.
Looking at what you have said:
1. From the script itself generate a CSV file
Have a look at node-csv-generate, which lets you generate CSV strings easily.
2. Give initial fields (headers) to the file & 3. Update that file line by line (add one line to the file at a time)
Check out the node-csv-generate stream functionality to write line by line (i.e. initial headers first).
Now, since you said you need to run it locally, I would recommend Rhino if using just JS, but if Node.js is required then check out Rhinodo. These will let you run the program locally on the JVM (you could call the JS from within Java if you wanted to).
To export the CSV file there are plenty of examples online, this SO thread being one, i.e.:
var encodedUri = encodeURI(csvContent);
window.open(encodedUri);
Where csvContent is the complete string of your CSV. I am not sure how well supported this is on Rhinodo, but I'm pretty sure it will all work on Rhino.
If this is intended to be a purely desktop-based application, I would look at using Java (or your preferred language; Python or C# might be nicer depending on what you are used to :-) ) rather than JS if everything needs to be local and the tool is intended to be widely used. That way you have a much cleaner interaction with the OS and a lot more control.
I hope this helps!

How to scrape javascript table in R?

I want to scrape a table from the citibike : https://s3.amazonaws.com/tripdata/index.html
My goal is to get the URLs of the zip files all at once, instead of manually typing all the dates and downloading one file at a time. Since the webpage is updated monthly, every time I run the function I want to be able to get all the up-to-date data files.
I first tried the rvest and XML packages and then realized that the webpage contains both the HTML and a table that's generated by a JavaScript function. That's where the problem was.
Really appreciate any help and please let me know if I could provide further information.
If I go to https://s3.amazonaws.com/tripdata/ (just the root, no index.html), I get a simple XML file. The relevant element is Key (uppercase K, lowercase e, y) if you want to parse the XML, but I would just search the plain text. That is: ignore the XML, treat it like a simple text file, get every string between <Key> and </Key>, treat that as the filename it is, and prefix https://s3.amazonaws.com/tripdata/ to get the URL.
The first entry seems to be everything together (170 MB), so you might be OK with that alone.
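The question asks for R, but the plain-text extraction described above is language-agnostic; here it is sketched in JavaScript (the language used elsewhere on this page), assuming key names never contain a `<` character:

```javascript
// Sketch: pull every <Key>...</Key> value out of the S3 listing text and
// turn each one into a full download URL, as described above.
const BASE = 'https://s3.amazonaws.com/tripdata/';

function extractZipUrls(listingText) {
  const urls = [];
  const re = /<Key>([^<]+)<\/Key>/g; // treat the XML as plain text
  let match;
  while ((match = re.exec(listingText)) !== null) {
    urls.push(BASE + match[1]);
  }
  return urls;
}
```

The same two steps in R would be a `readLines`/`regmatches` pair over the fetched listing; only the pattern and the prefix matter.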

Using the Node.js file system, is it smarter to run through 80 files looking for a property than to run through conditions in one file?

Let me explain: I am building a node.js project that needs to check if dates match or fall within a range. If there is a match, I need to store a reference to the path of a file on the server. There are about 80 of these files. They are configurations.
I can write a giant condition in a function that runs through and checks the dates. It will be fast, I'm sure. The real question is: is it smarter to let each config file store its own date (the date is a calculation based on a date that will have to be passed in to the config file), then loop through the files, requiring each one, finding the property holding the date, checking it, and either storing the file's path or not?
The requiring approach will be much less code and it will be cleaner, but I'm wondering if I will take a huge performance hit. Is it better to just write a giant list of conditions?
Sorry if this is not clear. Let me know if I need to include anything to help clarify the question.
the date is a calculation based on a date that will have to be passed in to the config file
Then don't put the calculation result in the config file as well, but store it in memory.
On startup, run through all 80 files (in parallel?), collect the dates and do the respective calculations. Store the results in an array or so.
On each request, run a loop (not a giant hand-written condition!) to find the date, and use the file path associated with it.
