Reading data line by line from a txt file using JavaScript (Rhino) - javascript

I have created a txt file using Python code; its contents are as shown in the image:
I am writing JavaScript code in the No Magic Cameo Systems Modeler. I can read the file, but I want to read all of its lines one by one into a variable and then use that variable to open the listed csv files in a loop and do some processing.
So far I have written the following code, but I am not sure how to correct it to meet this requirement.
importPackage(org.w3c.dom);
importClass(java.io.File);
importClass(java.util.Scanner);
importClass(javax.xml.parsers.DocumentBuilderFactory);
importClass(javax.xml.transform.OutputKeys);
importPackage(java.sql);
importClass(com.nomagic.magicdraw.automaton.AutomatonMacroAPI);
importClass(com.nomagic.magicdraw.openapi.uml.SessionManager);

java.lang.Class.forName("com.mysql.jdbc.Driver");
var conn = DriverManager.getConnection("jdbc:mysql://" + HostName + ":" + Port, Username, Password);
var stat = conn.createStatement();
var resultSet = null;
var logger = com.nomagic.magicdraw.core.Application.getInstance().getGUILog();

// fileR is already a java.io.File, so pass it to Scanner directly
// (the original wrapped it in another File constructor, which fails).
var fileR = new File(FilePath);
var scanner = new Scanner(fileR);
while (scanner.hasNextLine()) {
    var line = scanner.nextLine();
    print(line);
    // process the line
}
scanner.close();
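Since the question's goal is to collect the lines and then open each listed CSV in a loop, here is a minimal sketch of that step. It assumes each line of the txt file holds the path of one CSV file, and the commented processRow call is a hypothetical placeholder for your own processing:
var csvPaths = [];
var listScanner = new Scanner(new File(FilePath));
while (listScanner.hasNextLine()) {
    var path = String(listScanner.nextLine()).trim();
    if (path.length > 0) {
        csvPaths.push(path); // collect every CSV path listed in the txt file
    }
}
listScanner.close();

// Open each CSV in turn and process it line by line.
for (var i = 0; i < csvPaths.length; i++) {
    var csvScanner = new Scanner(new File(csvPaths[i]));
    while (csvScanner.hasNextLine()) {
        var row = String(csvScanner.nextLine()).split(",");
        // processRow(row); // hypothetical placeholder for your processing
        print(csvPaths[i] + ": " + row.length + " columns");
    }
    csvScanner.close();
}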

Related

Google Apps Script: trying to read a text file from Google Drive with .getAs(MimeType.PLAIN_TEXT)

I've run into trouble trying to read an HTML file from Google Drive.
I tried:
to fetch the text with UrlFetchApp.fetch("https://googledrive.com/host/{folderID}/{filename}.html"), but it fetches some Google css file instead of mine;
to convert the file from a blob to a text string with file.getAs(MimeType.PLAIN_TEXT), but it just outputs "Blob" without any file content.
How can I extract the file's text without any specific libraries?
function getLabsFile() {
  var dApp = DriveApp;
  var folderIter = dApp.getFoldersByName("Лаборатории ФББ");
  var folder = folderIter.next();
  var filesIter = dApp.getFilesByName("Labs.html");
  var filelist = [];
  var propfiledate = 0;
  var propfilename;
  // Keep the most recently created file among all files with this name.
  while (filesIter.hasNext()) {
    var file = filesIter.next();
    var filename = file.getName();
    var fileurl = file.getUrl();
    var filedate = file.getDateCreated();
    if (filedate >= propfiledate) {
      propfiledate = filedate;
      var propfileurl = fileurl;
      propfilename = filename;
      var propfile = file;
    }
  }
  Logger.log(propfile);
  // 1st try: var myHtmlFile = UrlFetchApp.fetch(propfileurl);
  // 2nd try: var myHtmlFile = propfile.getAs(MimeType.PLAIN_TEXT);
  // 3rd try: var myHtmlFile = propfile.getBlob().text();
  var ss = SpreadsheetApp.create("test");
  SpreadsheetApp.setActiveSpreadsheet(ss);
  var sheet = ss.getActiveSheet();
  sheet.appendRow(myHtmlFile.toString().split("\n"));
  Logger.log(propfiledate);
  Logger.log(propfilename);
  Logger.log(propfileurl);
}
Using Apps Script on a dummy HTML file, you can get the HTML data that is inside of it.
Using the DriveApp getFilesByName(name) method, you retrieve the file by name.
This returns a FileIterator, since there can be many files with similar names.
Then you can get the file blob with getBlob() and the blob data as a string with getDataAsString().
I have managed to get the dummyHTML.html file data by using the previously mentioned methods:
function myFunction() {
  var files = DriveApp.getFilesByName("dummyHTML.html");
  while (files.hasNext()) {
    var file = files.next();
    Logger.log(file.getBlob().getDataAsString());
  }
}

Google Script: How to script an automatic import from a txt in my drive into spreadsheet?

I've never used JavaScript before, and I've been trying for ages to do this with no luck; I can't find anyone who has tried this before.
I want to copy the text data straight from this txt document in my Drive. It is possible to do this manually, but I want it to be done automatically every day instead.
The text document;
Boxes Made,3
Target Percentage,34
Hourly Rate,2
If I import this into a spreadsheet with these settings, it's perfect:
[screenshot: Import Settings]
And it imports like this:
[screenshot: After Import]
Now I need to automate this so that a script does the import.
The script I have so far doesn't work; please help.
Current script:
function AutoImporter (Source)
{
  // Note: getFilesByName() returns a FileIterator, not a single file,
  // and neither FileIterator.copyText() nor Sheet.appendText() exist,
  // which is why this script fails.
  var Source = DriveApp.getFilesByName('DailyData.txt');
  var TextContents = Source.copyText();
  var Target = SpreadsheetApp.getActiveSheet();
  Target.appendText(TextContents[1]);
}
--edit
Some guy just sent me a script that seems closer, but it still didn't work:
function autoCSV() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var s = ss.getActiveSheet();
  var r = s.getActiveCell();
  var id = "DailyData.txt"; // <<<<< enter the ID of the text file
  var f3 = DriveApp.getFileById(id);
  // Split the file into rows, then each row into comma-separated cells.
  var lst1 = f3.getBlob().getDataAsString().split('\n').map(function(x) { return x.split(','); });
  var ncols = 1, i, lst2 = [];
  // Find the widest row, then pad every row to that width with ''.
  for (i in lst1) { if (lst1[i].length > ncols) ncols = lst1[i].length; }
  for (i = 0; i < ncols; i++) lst2.push('');
  for (i in lst1) lst1[i] = lst1[i].concat(lst2.slice(0, lst2.length - lst1[i].length));
  s.getRange(r.getRow(), r.getColumn(), lst1.length, ncols).setValues(lst1);
}
You may read a text file from Google Drive this way:
'use strict'; // <- Always use strict mode.

function foo() {
  var fileName = 'DailyData.txt';
  var files = DriveApp.getFilesByName(fileName);
  if (!files.hasNext()) {
    throw new Error('No file with name: ' + fileName);
  }
  // We take only the first file among all files with such a name.
  var file = files.next();
  var text = file.getBlob().getDataAsString('utf8');
  Logger.log(text);
  // Now you have to parse the file.
}
Documentation:
DriveApp.getFilesByName returns a FileIterator over the matching Files.
File.getBlob returns a Blob.
Blob.getDataAsString returns a String.
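For the parsing step mentioned in the comment above, here is a minimal sketch; it assumes every non-empty line of DailyData.txt has the same number of comma-separated fields (as in the three-line example) and writes them starting at A1 of the active sheet:
function importDailyData() {
  var files = DriveApp.getFilesByName('DailyData.txt');
  if (!files.hasNext()) {
    throw new Error('No file with name: DailyData.txt');
  }
  var text = files.next().getBlob().getDataAsString('utf8');
  // Turn "Boxes Made,3" style lines into [["Boxes Made", "3"], ...].
  var rows = text.split('\n')
      .filter(function (line) { return line.trim() !== ''; })
      .map(function (line) { return line.split(','); });
  // Write the parsed rows starting at cell A1 of the active sheet.
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
  sheet.getRange(1, 1, rows.length, rows[0].length).setValues(rows);
}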

Convert binary file to JavaScript string and then to Uint8Array

I'm trying to create a web application that can be used via a file:// URI. This means that I can't use AJAX to load binary files (without turning off security features in the browser, which I don't want to do as a matter of principle).
The application uses a SQLite database. I want to provide the database to a sql.js constructor, which requires it in Uint8Array format.
Since I can't use AJAX to load the database file, I could instead load it with <input type="file"> and FileReader.prototype.readAsArrayBuffer and convert the ArrayBuffer to a Uint8Array. And that's working with the following code:
input.addEventListener('change', function (changeEvent) {
var file = changeEvent.currentTarget.files[0];
var reader = new FileReader();
reader.addEventListener('load', function (loadEvent) {
var buffer = loadEvent.target.result;
var uint8Array = new Uint8Array(buffer);
var db = new sql.Database(uint8Array);
});
reader.readAsArrayBuffer(file);
});
However, <input type="file"> requires user interaction, which is tedious.
I thought I might be able to work around the no-AJAX limitation by using a build tool to convert my database file to a JavaScript object / string and generate a ".js" file providing the file contents, and then convert the file contents to a Uint8Array, somehow.
Pseudocode:
// In Node.js:
var fs = require('fs');
var sqliteDb = fs.readFileSync('path/to/sqlite.db');
var string = convertBufferToJsStringSomehow(sqliteDb);
fs.writeFileSync('build/db.js', 'var dbString = "' + string + '";');
// In the browser (assume "build/db.js" has been loaded via a <script> tag):
var uint8Array = convertStringToUint8ArraySomehow(dbString);
var db = new sql.Database(uint8Array);
In Node.js, I've tried the following:
var fs = require('fs');
var TextEncoder = require('text-encoding').TextEncoder;
var TextDecoder = require('text-encoding').TextDecoder;
var sql = require('sql.js');
var string = new TextDecoder('utf-8').decode(fs.readFileSync('path/to/sqlite.db'));
// At this point, I would write `string` to a ".js" file, but for
// the sake of determining if converting back to a Uint8Array
// would work, I'll continue in Node.js...
var uint8array = new TextEncoder().encode(string);
var db = new sql.Database(uint8array);
db.exec('SELECT * FROM tablename');
But when I do that, I get the error "Error: database disk image is malformed".
What am I doing wrong? Is this even possible? The database disk image isn't "malformed" when I load the same file via FileReader.
Using the following code, I was able to transfer the database file's contents to the browser. (Decoding arbitrary binary data as UTF-8 is lossy, since invalid byte sequences get replaced, which is what corrupted the database image; base64 round-trips every byte exactly.)
// In Node.js:
var fs = require('fs');
var base64 = fs.readFileSync('path/to/sqlite.db', 'base64');
fs.writeFileSync('build/db.js', 'var dbString = "' + base64 + '";');
// In the browser (assume "build/db.js" has been loaded via a <script> tag):
function base64ToUint8Array (string) {
  var raw = atob(string);
  var rawLength = raw.length;
  var array = new Uint8Array(new ArrayBuffer(rawLength));
  for (var i = 0; i < rawLength; i += 1) {
    array[i] = raw.charCodeAt(i);
  }
  return array;
}
var db = new sql.Database(base64ToUint8Array(dbString));
console.log(db.exec('SELECT * FROM tablename'));
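As a side note (not part of the original answer, and assuming ES6 support in the target browser), the same conversion can be written as a one-liner with Uint8Array.from:
// Equivalent conversion on ES6-capable browsers.
var db = new sql.Database(Uint8Array.from(atob(dbString), function (c) {
  return c.charCodeAt(0);
}));
console.log(db.exec('SELECT * FROM tablename'));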
However, <input type="file"> requires user interaction, which is tedious.
Using the current working approach would be less tedious than attempting to create workarounds. If the user intends to use the application, they can select the file from their filesystem to run it.

Apps script write to Big Query unknown error

This is supposed to read in a CSV and then write it to BigQuery. When it runs, however, nothing is written and no errors are logged. I read that I need to write a CSV and then turn it into an octet stream, but I am not sure whether that is compatible with Google BigQuery.
function test() {
  try {
    var tableReference = BigQuery.newTableReference();
    tableReference.setProjectId(PROJECT_ID);
    tableReference.setDatasetId(datasetId);
    tableReference.setTableId(tableId);
    var schema = "CUSTOMER:string, CLASSNUM:integer, CLASSDESC:string, CSR:string, CSR2:string, INSURANCE:string, REFERRALGENERAL:string, REFERRALSPECIFIC:string, NOTES:string, INMIN:integer, INHR:integer, OUTMIN:integer, OUTHR:integer, WAITMIN:integer, WAITHR:integer, DATETIMESTAMP:float, DATEYR:integer, DATEMONTH:integer, DATEDAY:integer";
    var load = BigQuery.newJobConfigurationLoad();
    load.setDestinationTable(tableReference);
    load.setSourceUris(URIs);
    load.setSourceFormat('NEWLINE_DELIMITED_JSON');
    load.setSchema(schema);
    load.setMaxBadRecords(0);
    load.setWriteDisposition('WRITE_TRUNCATE');
    var configuration = BigQuery.newJobConfiguration();
    configuration.setLoad(load);
    var newJob = BigQuery.newJob();
    newJob.setConfiguration(configuration);
    var loadr = DriveApp.getFilesByName("test.csv");
    var x = loadr.next().getBlob();
    Logger.log(x.getDataAsString());
    var d = DriveApp.getFilesByName("test.csv");
    var id = d.next().getId();
    Logger.log(id);
    var data = DocsList.getFileById(id).getBlob().getDataAsString();
    var mediaData = Utilities.newBlob(data, 'application/octet-stream');
    BigQuery.Jobs.insert(newJob, PROJECT_ID, mediaData);
  } catch (error) {
    Logger.log("A" + error.message);
  }
}
Your sourceFormat is wrong for CSV files:
The format of the data files. For CSV files, specify "CSV". For
datastore backups, specify "DATASTORE_BACKUP". For newline-delimited
JSON, specify "NEWLINE_DELIMITED_JSON". The default value is CSV.
https://developers.google.com/bigquery/docs/reference/v2/jobs#configuration.load.sourceUris
On the other hand, I think you don't need load.setSourceUris(URIs) at all, since you are loading from a local file rather than from Google Cloud Storage. Check this Python example: https://developers.google.com/bigquery/loading-data-into-bigquery
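Putting both fixes together, here is a minimal sketch of the corrected job, reusing the question's own calls; PROJECT_ID, datasetId, tableId, and the schema string are assumed to be defined as in the question:
var tableReference = BigQuery.newTableReference();
tableReference.setProjectId(PROJECT_ID);
tableReference.setDatasetId(datasetId);
tableReference.setTableId(tableId);

var load = BigQuery.newJobConfigurationLoad();
load.setDestinationTable(tableReference);
load.setSourceFormat('CSV'); // was 'NEWLINE_DELIMITED_JSON'
load.setSchema(schema);
load.setMaxBadRecords(0);
load.setWriteDisposition('WRITE_TRUNCATE');
// No setSourceUris(): the data is supplied as the media blob below,
// not read from Google Cloud Storage.

var configuration = BigQuery.newJobConfiguration();
configuration.setLoad(load);
var newJob = BigQuery.newJob();
newJob.setConfiguration(configuration);

var csv = DriveApp.getFilesByName("test.csv").next().getBlob().getDataAsString();
var mediaData = Utilities.newBlob(csv, 'application/octet-stream');
BigQuery.Jobs.insert(newJob, PROJECT_ID, mediaData);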

Jscript ReadLine() related

Can anyone tell me how to read the entire content of a (.txt) file? I know we use ReadLine() to read a particular line from the file, but now I want to read the file's whole content, not only the first line. Which method do I need to use for that? I googled a lot but couldn't find the solution.
My Code is given below:
var ForReading = 1;
var TristateUseDefault = -2;
var fso = new ActiveXObject("Scripting.FileSystemObject");
var newFile = fso.OpenTextFile(sFileName, ForReading, true, TristateUseDefault);
var importTXT = newFile.ReadLine();
This returns the first line of the .txt file in the importTXT variable. Now I want to get the whole file content in importTXT.
Any suggestion would be very helpful.
You use the ReadAll method:
var importTXT = newFile.ReadAll();
(Don't forget to close the stream when you are done with it.)
Here: ReadAll (msdn)
I found the example given there very poor; for example, it did not close the file, so I added this to the MSDN page:
function ReadAllTextFile(filename) {
  var ForReading = 1;
  var fso = new ActiveXObject("Scripting.FileSystemObject");
  // Open the file for input.
  var f = fso.OpenTextFile(filename, ForReading);
  // Read from the file (guard with AtEndOfStream so an empty file returns "").
  var text = (f.AtEndOfStream) ? "" : f.ReadAll();
  f.Close();
  return text;
}

var importTXT = ReadAllTextFile(sFileName);
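And for completeness, since the question started from ReadLine(): a minimal sketch of reading the same file line by line instead, using only the FileSystemObject methods already shown above:
function ReadTextFileLines(filename) {
  var ForReading = 1;
  var fso = new ActiveXObject("Scripting.FileSystemObject");
  var f = fso.OpenTextFile(filename, ForReading);
  var lines = [];
  // ReadLine() advances through the stream until AtEndOfStream is true.
  while (!f.AtEndOfStream) {
    lines.push(f.ReadLine());
  }
  f.Close();
  return lines;
}
var allLines = ReadTextFileLines(sFileName);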
