A .frd file is a type of multi-column numeric data table used for storing information about the frequency response of speakers. A .frd file looks something like this when opened in a text editor:
2210.4492 89.1 -157.7
2216.3086 88.99 -157.7
2222.168 88.88 -157.6
2228.0273 88.77 -157.4
Using JavaScript, is there a way I can parse this data in order to return each column separately?
For example, from the .frd file above, I would need to return the values like so:
var column1 = [2210.4492, 2216.3086, 2222.168, 2228.0273];
var column2 = [89.1, 88.99, 88.88, 88.77];
var column3 = [-157.7, -157.7, -157.6, -157.4];
I'm not exactly sure where to begin in trying to achieve this, so any step in the right direction would be helpful!
I found a description of the FRD file format, and I will follow it.
Let's assume that the content of your .frd file is in the variable called content (the following example is for Node.js):
const fs = require('fs');
const content = fs.readFileSync('./input.frd').toString();
Now if content has your FRD data, it's a set of lines, and each line contains exactly three numbers: a frequency (Hz), a level (dB), and a phase (degrees). To split content into lines, we can literally split it:
const lines = content.split(/\r?\n/);
(Normally, splitting just by '\n' would have worked, but let's explicitly support Windows-style line breaks \r\n just in case. The /\r?\n/ is a regular expression that says "maybe \r, then \n".)
To parse each line into three numbers, we can do this:
const values = line.split(/\s+/);
If the file can contain empty lines, it may make sense to double check that the line has exactly three values:
if (values.length !== 3) {
// skip this line
}
Given that we have three values in values, as strings, we can assign the corresponding variables:
const [frequency, level, phase] = values.map(value => Number(value));
(.map converts all the values in values from strings to numbers - let's do this to make sure we store the correct type.)
Now putting all those pieces together:
const fs = require('fs');
const content = fs.readFileSync('./input.frd').toString();
const frequencies = [];
const levels = [];
const phases = [];
const lines = content.split(/\r?\n/);
for (const line of lines) {
const values = line.split(/\s+/);
if (values.length !== 3) {
continue;
}
const [frequency, level, phase] = values.map(value => Number(value));
frequencies.push(frequency);
levels.push(level);
phases.push(phase);
}
console.log(frequencies);
console.log(levels);
console.log(phases);
The main code (the part that works with content) will also work in the browser, not just in Node.js, if you need that.
This code can be written in tons of different ways, but I tried to make it easy to explain, so I did something very straightforward.
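If you want the browser-friendly variant, one option is to extract the parsing into a plain function that works in both environments. A minimal sketch (the parseFrd name and the fetch URL are illustrative, not part of the original code):

```javascript
// Parse FRD text into three parallel arrays; works in Node.js and the browser.
function parseFrd(content) {
  const frequencies = [], levels = [], phases = [];
  for (const line of content.split(/\r?\n/)) {
    const values = line.trim().split(/\s+/);
    if (values.length !== 3) continue; // skip empty or malformed lines
    const [frequency, level, phase] = values.map(Number);
    frequencies.push(frequency);
    levels.push(level);
    phases.push(phase);
  }
  return { frequencies, levels, phases };
}

// In a browser you could load the file with fetch (URL is a placeholder):
// fetch('input.frd').then(r => r.text()).then(parseFrd);
```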
To use it in Node.js (if your JavaScript file is called index.js):
$ cat input.frd
2210.4492 89.1 -157.7
2216.3086 88.99 -157.7
2222.168 88.88 -157.6
2228.0273 88.77 -157.4
$ node index.js
[ 2210.4492, 2216.3086, 2222.168, 2228.0273 ]
[ 89.1, 88.99, 88.88, 88.77 ]
[ -157.7, -157.7, -157.6, -157.4 ]
Suppose the name list is as below:
Rose : 35621548
Jack : 32658495
Lita : 63259547
Seth : 27956431
Cathy: 75821456
Given you have a variable StudentCode that contains the list above (I think const will do! Like:
const StudentCode = {
[Jack]: [32658495],
[Rose]: [35621548],
[Lita]: [63259547],
[Seth]: [27956431],
[Cathy]:[75821456],
};
)
So here are the questions:
1st: How can I define them in the URL below:
https://www.mylist.com/student=?StudentCode
So the link for example for Jack will be:
https://www.mylist.com/student=?32658495
The URL is imaginary. Don't click on it please.
2nd: By the way, the overall list is above 800 people, and I'm planning to save an external .js file to be called within the current code. So tell me about that too. Thanks a million!
Given
const StudentCode = {
"Jack": "32658495",
"Rose": "35621548",
"Lita": "63259547",
"Seth": "27956431",
"Cathy": "75821456",
};
You can construct URLs like:
const urls = Object.values(StudentCode).map((c) => `https://www.mylist.com?student=${c}`)
// urls: ['https://www.mylist.com?student=32658495', 'https://www.mylist.com?student=35621548', 'https://www.mylist.com?student=63259547', 'https://www.mylist.com?student=27956431', 'https://www.mylist.com?student=75821456']
To get the url for a specific student simply do:
const url = `https://www.mylist.com?student=${StudentCode["Jack"]}`
// url: 'https://www.mylist.com?student=32658495'
Not sure I understand your second question - 800 is a rather low number, so there will not be any performance issues with it, if that is what you are asking.
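As for keeping the list in an external .js file, one common approach is to put only the data in that file and load it before your main script. A sketch (studentCodes.js is a hypothetical file name, and only the first two entries are shown):

```javascript
// Contents of a hypothetical studentCodes.js - only the data lives here.
// In a browser, include it before your main script:
//   <script src="studentCodes.js"></script>
//   <script src="main.js"></script>
// In Node.js, you would export it instead: module.exports = StudentCode;
const StudentCode = {
  Jack: "32658495",
  Rose: "35621548",
  // ...the remaining ~800 entries
};

// Your main code then uses StudentCode exactly as before:
const url = `https://www.mylist.com?student=${StudentCode["Jack"]}`;
```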
The properties of the object can be looped through using a for...in loop (the trailing comma in your object literal is legal and can stay); see: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...in
This gives a reference to each key of the object, and the value held under that key can be referenced using objectName[key]. Thus you will loop through your object using something like:
for (const key in StudentCode) {
  const keyString = key; // e.g. "Jack"
  const keyValue = StudentCode[key]; // e.g. 32658495
// build the urls and links
}
To build the URLs, string template literals will simplify the process (see: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals), allowing you to substitute values into your string, e.g.:
url = `https://www.mylist.com/student=?${StudentCode[key]}`;
Note the use of back ticks and ${} for the substitutions.
Lastly, to build active links, create an element and set its innerHTML property to markup built using further string template literals:
let link = `<a href="${url}">${keyValue}</a>`
These steps are combined in the working snippet here:
const StudentCode = {
Jack: 32658495,
Rose: 35621548,
Lita: 63259547,
Seth: 27956431,
Cathy: 75821456,
};
const studentLinks = [];
for (const key in StudentCode) {
let url = `https://www.mylist.com/student=?${StudentCode[key]}`;
console.log(url);
studentLinks.push(`<a href="${url}">${key}</a>`)
}
let output= document.createElement('div');
output.innerHTML = studentLinks.join("<br>");
document.body.appendChild(output);
I am trying to add all the numbers from a column to a variable. The problem is that my code is adding the strings, which results in NaN.
var csvData=[];
let test = 0;
var parser = parse({delimiter: ','}, function(err, data){
});
fs.createReadStream(__dirname+'/test2.csv','utf16le').pipe(parser)
.on('data', function(csvrow) {
csvData.push(csvrow);
test = test + (csvrow[2]);
})
.on('end',function() {
console.log(test)
});
This gives me "0Daily Device Installs00001000101100", and if I use parseInt(csvrow[2]) I get NaN for test.
My goal is to add all numbers after Daily Device Installs, what am I missing?
I did a bit of research on the Node.js CSV package.
Use the header
If your CSV file contains a header row, as supposed in the comment by GrafiCode, like in this example:
"Day","Daily Device Installs"
"2021-09-15",1
"2021-09-16",1
Then CSV Parser has a feature to use the header row with column-names.
See the columns option.
Benefit:
log the header
map the column-names (for simple use in code)
use it to make your code clean & expressive
defend against changes of column-order inside the input CSV
var csvData=[];
let test = 0;
// options: use default delimiter comma and map header
let parser = parse({
  columns: header =>
    header.map(column => {
      console.log(column);
      // could also map differently (e.g. to Snake_Case)
      return column.replace(/ /g, "_");
    })
});
function addToCounter(value) {
  if (isNaN(value)) {
    console.log("WARN: not a number: ", value);
    return;
  }
  test += Number(value);
}
// read from file
fs.createReadStream(__dirname+'/test2.csv','utf16le').pipe(parser)
.on('data', function(csvrow) {
csvData.push(csvrow);
addToCounter(csvrow.Daily_Device_Installs); // the column name as mapped with underscore
})
.on('end',function() {
console.log(test)
});
Note:
I extracted the counter-increment to a function.
Your csvData array now contains an object for each row (with column-names as keys) instead of an array of columns.
Try:
if (!isNaN(csvrow[2])) test += +csvrow[2];
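The unary + coerces the string cell to a number before adding; without it, += on a string concatenates. A quick illustration (the row contents are made up):

```javascript
// Hypothetical CSV row: [date, label, installs-as-string]
const csvrow = ["2021-09-15", "Daily Device Installs", "5"];

let test = 0;
// Without conversion, 0 + "5" would concatenate into the string "05".
if (!isNaN(csvrow[2])) test += +csvrow[2]; // +"5" -> the number 5
```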
With the method I use, I need to put +13 and -1 inside the calculation when searching for the position of each part of the text (const Before and const After). Is there a more reliable and correct way?
const PositionBefore = TextScript.indexOf(Before)+13;
const PositionAfter = TextScript.indexOf(After)-1;
My fear is that for some reason the search text changes, I forget to change the numbers in the calculation, and this causes an error in the retrieved text.
The part of the text I'm returning is a date and hour:
2021-08-31 19:12:08
function Clock() {
var sheet = SpreadsheetApp.getActive().getSheetByName('Clock');
var url = 'https://int.soccerway.com/';
const contentText = UrlFetchApp.fetch(url).getContentText();
const $ = Cheerio.load(contentText);
const Before = '"timestamp":"';
const After = '});\n block.registerForCallbacks();';
var ElementSelect = $('script:contains(' + Before + ')');
var TextScript = ElementSelect.html().replace("\n","");
const PositionBefore = TextScript.indexOf(Before)+13;
const PositionAfter = TextScript.indexOf(After)-1;
sheet.getRange(1, 1).setValue(TextScript.substring(PositionBefore, PositionAfter));
}
Example of the full text collected in var TextScript:
(function() {
var block = new HomeMatchesBlock('block_home_matches_31', 'block_home_matches', {"block_service_id":"home_index_block_homematches","date":"2021-08-31","display":"all","timestamp":"2021-08-31 19:12:08"});
block.registerForCallbacks();
$('block_home_matches_31_1_1').observe('click', function() { block.filterContent({"display":"all"}); }.bind(block));
$('block_home_matches_31_1_2').observe('click', function() { block.filterContent({"display":"now_playing"}); }.bind(block));
block.setAttribute('colspan_left', 2);
block.setAttribute('colspan_right', 2);
TimestampFormatter.format('block_home_matches_31');
})();
There is no way to eliminate the risk of structural changes to the source content.
You can take some steps to minimize the likelihood that you forget to change your code - for example, by removing the need for hard-coded +13 and -1. But there can be other reasons for your code to fail, beyond that.
It's probably more important to make it extremely obvious when your code does fail.
Consider the following sample (which does not use Cheerio, for simplicity):
function demoHandler() {
var url = 'https://int.soccerway.com/';
const contentText = UrlFetchApp.fetch(url).getContentText();
var matched = contentText.match(/{.*?"timestamp".*?}/); // null if no match, so check before indexing
if ( matched ) {
var matchedJsonString = matched[0];
try {
var json = JSON.parse(matchedJsonString);
console.log(json.timestamp)
} catch(err) {
console.log( err ); // "SyntaxError..."
}
} else {
console.log( 'Something went terribly wrong...' )
}
}
When you run the above function it prints the following to the console:
2021-08-31 23:18:46
It does this by assuming the key value of "timestamp" is part of a JSON string, starting with { and ending with }.
You can therefore extract this JSON string and convert it to a JavaScript object and then access the timestamp value directly, without needing to handle substrings.
If the JSON is not valid you will get an explicit error similar to this:
[SyntaxError: Unexpected token c in JSON at position 0]
Scraping web page data almost always carries this type of risk: your code can be brittle and break easily if the source structure changes without warning. Just try to make such changes as noticeable as possible. In your case, write the errors to your spreadsheet and make them really obvious (red, bold, etc.).
And make good use of try...catch statements. See: try...catch
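For instance, the parse-and-extract step can be wrapped so a failure produces a loud, visible value instead of a silent crash. A minimal sketch (parseTimestamp and its error format are made up for illustration):

```javascript
// Sketch: fail loudly instead of silently when the scraped string stops being valid JSON.
function parseTimestamp(matchedJsonString) {
  try {
    return JSON.parse(matchedJsonString).timestamp;
  } catch (err) {
    // In Apps Script you could write this string to the sheet in red/bold instead of just logging it.
    return `ERROR: ${err.message}`;
  }
}
```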
I am trying to replicate this example by Nadieh Bremer, which is a radar chart made in D3.js.
The data structure which is used in Nadieh's example is:
var data = [
[//iPhone
{axis:"Battery Life",value:20},
{axis:"Brand",value:28},
{axis:"Contract Cost",value:29},
{axis:"Design And Quality",value:17},
{axis:"Have Internet Connectivity",value:22},
{axis:"Large Screen",value:02},
{axis:"Price Of Device",value:21},
{axis:"To Be A Smartphone",value:50}
],[//Samsung
{axis:"Battery Life",value:27},
{axis:"Brand",value:16},
{axis:"Contract Cost",value:35},
{axis:"Design And Quality",value:13},
{axis:"Have Internet Connectivity",value:20},
{axis:"Large Screen",value:13},
{axis:"Price Of Device",value:35},
{axis:"To Be A Smartphone",value:38}
],etc.
];
I want to bring in the data from a CSV file which is set-up like this:
axis,value,type
Battery Life,20,iPhone
Brand,28,iPhone
Contract Cost,29,iPhone
Design And Quality,17,iPhone
Have Internet Connectivity,22,iPhone
Large Screen,02,iPhone
Price Of Device,21,iPhone
To Be A Smartphone,50,iPhone
Battery Life,27,SAmsung
Brand,16,SAmsung
...etc.
In my script, I have:
var tmpArr;
d3.csv("data1.csv", (d) => {
tmp = d;
tmpArr = Array.from(d3.group(tmp, d => d.group));
console.log(tmpArr);
});
This returns an array for each group, with the title of the group as the first element and the contents of the group as the second element.
I would be grateful for help to remove that first element so that I'm simply returned an array of the values I need. I'm pretty sure I need to use d3.map() but I just can't quite work out what to do with it. Thanks in advance.
You can just use a standard .map and, for every pair returned from Array.from(d3.group(...)), return the second item of that pair. Use index [1] for the second item because arrays are zero-indexed.
The working example aligns with the data structure in Nadieh Bremer's block:
const csv = `axis,value,type
Battery Life,20,iPhone
Brand,28,iPhone
Contract Cost,29,iPhone
Design And Quality,17,iPhone
Have Internet Connectivity,22,iPhone
Large Screen,02,iPhone
Price Of Device,21,iPhone
To Be A Smartphone,50,iPhone
Battery Life,27,SAmsung
Brand,16,SAmsung
Contract Cost,24,SAmsung
Design And Quality,14,SAmsung
Have Internet Connectivity,22,SAmsung
Large Screen,05,SAmsung
Price Of Device,15,SAmsung
To Be A Smartphone,48,SAmsung
`;
const data = d3.csvParse(csv);
const grouped = Array.from(d3.group(data, d => d.type));
const arrForViz = grouped.map(g => g[1]);
console.log(arrForViz);
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/6.7.0/d3.min.js"></script>
I am making a tool that analyzes words and tries to identify when they were most used. I am using Google's Ngram datasets to do so. In my code, I am streaming this data (which is about 2 gigabytes). I am turning the stream data into an array, with each line of data as one entry. What I want to do is to search for a certain word in the data, and store all the array entries containing the word in a variable. I can find if the word is in the dataset, and print that word (or the position of it in the dataset) to the console. I am still learning to program, so please keep that in mind if my code is messy.
// imports fs (filesystem) package duh
const fs = require('fs');
// the data stream
const stream = fs.createReadStream("/Users/user/Desktop/authortest_nodejs/testdata/testdata - p");
// gonna use this to keep track of whether ive found the search term or not
let found = false;
// this is the term the program looks for in the data
var search = "proceeded";
// lovely beautiful unclean way of turning my search term into regular expression
var searchThing = `\\b${search}`
var searchRegExp = new RegExp(searchThing, "g");
// starts streaming the test data file
stream.on('data', function(data) {
// if found is false (my search term isn't found in this data chunk), set the found variable to true or false depending on whether it found anything
if (!found) found = !!('' + data).match(searchRegExp);
// turns raw data to a string and tries to find the location of the search term within it
var dataLoc = data.toString().search(searchRegExp);
var dataStr = data.toString().match(searchRegExp);
// if the data search is null, continue streaming (gotta do this cuz if .match() turns up with no results it throws an error smh)
if (!dataStr) return;
// removes the null spots and line breaks, pretty up the displayed stuff
var dataDisplay = dataStr.toString().replace("null", " ");
var dataLocDisplay = dataLoc.toString().replace(/(\r\n|\n|\r)/gm,"");
// turns each line of raw data into array
var dataArray = data.toString().split("\n");
// log found instances of search term (dunno why the hell id wanna do that, should fix to something useful) edit: commented it out cuz its too annoying
//console.log(dataDisplay);
// log location of word in string (there, more useful now?)
console.log(dataDisplay);
});
// what happens when the stream thing returns an error
stream.on('error', function(err) {
console.log(err, found);
});
// what happens when the stream thing finishes streaming
stream.on('close', function(err) {
console.log(err, found, searchRegExp);
});
This currently outputs every instance of the search term in the data (basically one word repeated a hundred times or so), but I need an output of each entire line that contains the search term, not just the term. ("Proceeded 2006 5 3", not just "proceeded")
From what I understood, you're looking for something like this:
const fs = require('fs');
function grep(path, word) {
return new Promise((resolve) => {
let
stream = fs.createReadStream(path, {encoding: 'utf8'}),
buf = '',
out = [],
search = new RegExp(`\\b${word}\\b`, 'i');
function process(line) {
if (search.test(line))
out.push(line);
}
stream.on('data', (data) => {
let lines = data.split('\n');
lines[0] = buf + lines[0];
buf = lines.pop();
lines.forEach(process);
});
stream.on('end', () => {
process(buf);
resolve(out);
});
});
}
// works?
grep(__filename, 'stream').then(lines => console.log(lines))
I guess this is pretty straightforward; the buf stuff is needed to emulate line-by-line reading (you can also use readline or a dedicated module for the same purpose).