I am trying to loop through some files, but I run into a problem when I add an if and try to pull some data out.
I have a folder named Channels with 654 txt files; each contains one JSON object with 100 JSON objects nested in an array.
I also have a list of phone numbers that I read from a CSV (2,500 numbers).
What I need to do is check whether a phone number from the list matches a channel in any JSON of the 654 text files.
I wrote the code below. The problem is: if I put in just one phone number, it correctly matches 8 JSON objects, but if I check that same phone number with 3 more numbers added to the list, the result is completely different and it matches only one JSON object.
const fs = require('fs');

// The numbers come one below the other, like '+549XXXXXXXXX\n+549XXXXXXXXX\n+549XXXXXXXXX'
let csvFormatPhonesList = fs.readFileSync(`${__dirname}\\PhonesNumbersFile.csv`, 'utf8');
let phonesNumbersSplited = csvFormatPhonesList.split('\n');

const channelsFound = [];
const numbersOfFilesChannels = 654; // 654 txt files; each holds 100 JSON objects

for (let i = 0; i < numbersOfFilesChannels; i++) {
  let getOneSingleFile = JSON.parse(fs.readFileSync(`${__dirname}\\Channels\\channels${i}.txt`, 'utf8'));
  phonesNumbersSplited.forEach((phone) => {
    getOneSingleFile.channels.forEach((channel) => {
      if (channel.friendly_name.includes(phone)) {
        let parseAttributes = JSON.parse(channel.attributes);
        if (parseAttributes.long_lived == true) {
          channelsFound.push(channel);
        }
      }
    });
  });
}
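One thing worth checking: if PhonesNumbersFile.csv was saved with Windows (CRLF) line endings, splitting on '\n' alone leaves a trailing '\r' on every number except the last, and includes() then silently fails for those entries — which would explain why a single number matches fine but adding more numbers to the file breaks the earlier ones. A minimal sketch of the normalization (the CRLF content here is an assumption):

```javascript
// Simulated CSV content with Windows (CRLF) line endings.
const csvFormatPhonesList = '+549111111111\r\n+549222222222\r\n+549333333333';

// Splitting on '\n' alone keeps the '\r' on all but the last entry...
const raw = csvFormatPhonesList.split('\n');

// ...so trim each entry and drop empty lines before matching.
const phones = csvFormatPhonesList
  .split('\n')
  .map((p) => p.trim())
  .filter((p) => p.length > 0);

const friendlyName = 'whatsapp_+549111111111';
console.log(friendlyName.includes(raw[0]));    // false: raw[0] is '+549111111111\r'
console.log(friendlyName.includes(phones[0])); // true after trimming
```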
//****Example of one file's JSON****//
//The "channels" array has 100 JSON objects inside; one is shown here
"channels": [
{
"unique_name": null,
"members_count": 1,
"date_updated": "2021-09-28T18:54:59Z",
"friendly_name": "whatsapp_+549XXXXXXXXX",
"created_by": "system",
"account_sid": "ACXXXXXXXXXXX",
"url": "https://chat.twilio.com/v2/Services/ISXXXXX/Channels/CHXXXXXXXX",
"date_created": "2021-09-28T13:06:16Z",
"sid": "CHXXXXXXXX",
"attributes": "{\"serviceNumber\":\"whatsapp_+549XXXXXXXXX\",\"task_sid\":\"WTXXXXX\",\"from\":\"whatsapp:+549XXXXXXXXX\",\"forwarding\":true,\"proxySession\":\"KCXXXXX\",\"twilioNumber\":\"whatsapp:+549XXXXXXXXX\",\"channel_type\":\"whatsapp\",\"status\":\"INACTIVE\",\"long_lived\":true}",
"service_sid": "ISXXXXX",
"type": "private",
"messages_count": 1,
"links": {
"webhooks": "https://chat.twilio.com/v2/Services/ISXXXXX/Channels/CHXXXXXXXX/Webhooks",
"messages": "https://chat.twilio.com/v2/Services/ISXXXXX/Channels/CHXXXXXXXX/Messages",
"invites": "https://chat.twilio.com/v2/Services/ISXXXXX/Channels/CHXXXXXXXX/Invites",
"members": "https://chat.twilio.com/v2/Services/ISXXXXX/Channels/CHXXXXXXXX/Members",
"last_message": "https://chat.twilio.com/v2/Services/ISXXXXX/Channels/CHXXXXXXXX/Messages/IMXXXXXXXXXX"
}
}
],
"meta": {
"page": 0,
"page_size": 100,
"first_page_url": "https://chat.twilio.com/v2/Services/ISXXXXX/Channels?PageSize=100&Page=0",
"previous_page_url": null,
"url": "https://chat.twilio.com/v2/Services/ISXXXXX/Channels?PageSize=100&Page=0",
"next_page_url": "https://chat.twilio.com/v2/Services/ISXXXXX/Channels?PageSize=100&Page=1",
"key": "channels"
}
I am an old-school C++ programmer trying to get to grips with Postman, JSON, REST APIs, etc., and struggling.
I am trying to write a Postman test to visualize some JSON response data, which I would like to show in table format with some merged key names as column headings.
The problem is that the number of data items in a response can vary, and the key names can also vary, depending on the input parameters.
So, say, for the following JSON, which has two data items:
{
"data": [
{
"input": [
{
"value": "ABC",
"identifierType": "a1"
}
],
"output": [
{
"value": "BT",
"identifierType": "b1",
"name": "BT GROUP",
"status": "Active",
"classification": "Ordinary"
}
]
},
{
"input": [
{
"value": "BCD",
"identifierType": "a1"
}
],
"output": [
{
"value": "EFG",
"identifierType": "b1",
"name": "SIEMENS",
"status": "Active",
"classification": "Ordinary"
}
]
}
]
}
I want to end up with a collection containing column headings that looks something like this:
["Input value", "Input identifierType", "Output value", "Output identifierType", "Output name", "Output status", "Output classification"]
I can get part of the way with something like the following:
function parseData(response, host) {
const results = response.map(elem => (
[
elem.input[0].value,
elem.input[0].identifierType,
elem.output[0].value,
elem.output[0].identifierType,
elem.output[0].name,
elem.output[0].status,
elem.output[0].classification,
]
));
const headers = [];
for (const key in response[0].input[0]){
headers.push(`Input ${key}`)
}
for (const key in response[0].output[0]){
headers.push(`Output ${key}`)
}
return [results, headers]
}
This gives me the desired headers: ["Input value", "Input identifierType", "Output value", "Output identifierType", "Output name", "Output status", "Output classification"]
However, I want to make this more general i.e. not have to specify input[0] and output[0] in the for loops - as these key names could differ for different query responses.
I did ask this question on the Postman forums and someone helpfully provided a code snippet that allows me to extract the 'input' and 'output' names.
for (const _outer in response[0]) {
for (const _inner in _outer) {
for (const key in _inner) {
headers.push(`${_outer} ${key}`)
}
}
}
But that only gives me: ["input 0", "input 0", "input 0", "input 0", "input 0", "output 0", …] for the headers. For some reason I cannot access the inner keys such as value, name, identifierType, status, etc.
Can someone please suggest where the above is going wrong / how to get what I am after?
Thanks.
OK, I managed to get an answer on the Postman Forums. The solution is:
for (const outerkey in response[0]){
for (const innerkey in response[0][`${outerkey}`][0]){
headers.push(`${outerkey} ${innerkey}`)
}
}
and gave me:
["input value", "input identifierType", "output value", "output identifierType", "output name", …]
UPDATE: The data-item extraction was still hardcoded with key names (e.g. elem.input[0].identifierType and so on), so I replaced that as well, giving the following solution:
function parseData(response, host) {
  const results = response.map(function (val) {
    // Collect the inner values in the same order as the headers below.
    const temp = [];
    for (const outerkey in val) {
      for (const innerkey in val[outerkey][0]) {
        temp.push(val[outerkey][0][innerkey]);
      }
    }
    return temp;
  });
  const headers = [];
  for (const outerkey in response[0]) {
    for (const innerkey in response[0][outerkey][0]) {
      headers.push(`${outerkey} ${innerkey}`);
    }
  }
  return [results, headers];
}
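For what it's worth, the same traversal can be written more compactly with Object.entries and flatMap — a sketch against the sample response shape above, with nothing Postman-specific assumed:

```javascript
// Sample element matching the response shape in the question.
const response = [
  {
    input: [{ value: 'ABC', identifierType: 'a1' }],
    output: [{ value: 'BT', identifierType: 'b1', name: 'BT GROUP' }],
  },
];

// Headers: "<outerKey> <innerKey>" for each key of the first inner object.
const headers = Object.entries(response[0]).flatMap(([outer, arr]) =>
  Object.keys(arr[0]).map((inner) => `${outer} ${inner}`)
);

// Rows: the corresponding values, in the same key order.
const results = response.map((elem) =>
  Object.values(elem).flatMap((arr) => Object.values(arr[0]))
);

console.log(headers);
// ['input value', 'input identifierType', 'output value', 'output identifierType', 'output name']
```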
Given an array of data objects
const data = [{
"id": "CT20",
"type": "a11y-unknown",
"urls": ["https://www.example.com/test/"]
},
{
"id": "BC192",
"type": "a11y-true",
"urls": [
"https://www.example.com/something/",
"https://www.example.com/another-thing/"
]
}
]
I'm trying to convert the objects to a CSV file that can be imported into Excel so that it shows as:
id | type | urls
CT20 | a11y-unknown | https://www.example.com/test/
I'm using the following to get the keys:
const keys = Object.keys(data[0]);
then map over the data like so:
const commaSeparatedString = [keys.join(","),data.map(row => keys.map(key => row[key]).join(",")).join("\n")].join("\n");
However, this returns the following:
'id,type,urls\nCT20,a11y-unknown,https://www.example.com/test/\nBC192,a11y-true,https://www.example.com/something/,https://www.example.com/another-thing/'
When imported into Excel as a CSV file and delimited with \, it does not come out as intended.
How can I correctly map the objects so that they are delimited correctly, with a line break after each set of urls?
const data = [{
"id": "CT20",
"type": "a11y-unknown",
"urls": ["https://www.example.com/test/"]
},
{
"id": "BC192",
"type": "a11y-true",
"urls": [
"https://www.example.com/something/",
"https://www.example.com/another-thing/"
]
}
]
const keys = Object.keys(data[0]);
const commaSeparatedString = [keys.join(","),data.map(row => keys.map(key => row[key]).join(",")).join("\n")].join("\n");
console.log(commaSeparatedString)
You need to have a fixed number of columns. So either JSON.stringify the urls array, or designate columns such as url1, url2, url3...
EDIT: naturally, if you don't escape commas by enclosing fields in quotes, the CSV will break. Generally speaking, you should use a library for generating/parsing CSV, such as Papa Parse.
const data = [{
"id": "CT20",
"type": "a11y-unknown",
"urls": ["https://www.example.com/test/"]
},
{
"id": "BC192",
"type": "a11y-true",
"urls": [
"https://www.example.com/something/",
"https://www.example.com/another-thing/"
]
}
]
var keys = Object.keys(data[0]);
var arr = [keys, ...data.map(row => keys.map(key => {
return typeof row[key] === "string" ? row[key] : JSON.stringify(row[key])
}))];
// don't!
// .join(",")).join("\n")].join("\n");
// instead
var csv = Papa.unparse(arr);
console.log(csv)
<script src="https://cdnjs.cloudflare.com/ajax/libs/PapaParse/5.1.0/papaparse.min.js"></script>
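If pulling in a library is not an option, a minimal RFC 4180-style escaper sketches what Papa Parse handles for you: quote a field when it contains a comma, quote, or newline, and double any embedded quotes (the field contents below are made up):

```javascript
// Quote a field when needed; double embedded quotes per RFC 4180.
function escapeCsvField(value) {
  const str = String(value);
  if (/[",\n]/.test(str)) {
    return `"${str.replace(/"/g, '""')}"`;
  }
  return str;
}

const row = ['BC192', 'a11y-true', 'https://a.example/,https://b.example/'];
console.log(row.map(escapeCsvField).join(','));
// BC192,a11y-true,"https://a.example/,https://b.example/"
```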
As a general rule, the CSV must be delimited with the same delimiter used when importing it. Also, for reliability, all fields should be enclosed in quotes. So, since in your case the delimiter is \, your attempt can be rewritten as below:
const data = [{
"id": "CT20",
"type": "a11y-unknown",
"urls": ["https://www.example.com/test/"]
},
{
"id": "BC192",
"type": "a11y-true",
"urls": [
"https://www.example.com/something/",
"https://www.example.com/another-thing/"
]
}
]
const keys = Object.keys(data[0]);
const commaSeparatedString = [keys.join("\\"),data.map(row => keys.map(key => `"${[row[key]].flat().join()}"`).join("\\")).join("\n")].join("\n");
console.log(commaSeparatedString)
Given the following data, I want to loop a specified number of times (for example, 6 times) and, on the nth iteration, print the contents of number to console.log if there is an entry with number = n, or "no data" if there is not. In this case, should I use a for statement or map?
In Python I used pandas to fill in the data, but how can I implement this in JavaScript?
I would appreciate it if you could tell me how to do this.
item = [{
"id": 1,
"number": 2
},
{
"id": 2,
"number": 3
}
]
In this case, the expected output is:
no data
2
3
no data
no data
no data
item = [{
"id": 1,
"number": 2
},
{
"id": 2,
"number": 3
}
]
n = 6
const obj = {};
item.forEach((data) => {
  obj[data.number] = true;
});
for (let i = 1; i <= n; i++) {
  if (obj[i])
    console.log(i);
  else
    console.log("no data");
}
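The lookup object can just as well be a Set, which makes the membership test read directly — a sketch using the same sample data (the results are collected into an array only so they can be printed at once):

```javascript
const item = [
  { id: 1, number: 2 },
  { id: 2, number: 3 },
];
const n = 6;

// Collect the existing numbers once, then test membership per index.
const numbers = new Set(item.map((data) => data.number));

const lines = [];
for (let i = 1; i <= n; i++) {
  lines.push(numbers.has(i) ? String(i) : 'no data');
}
console.log(lines.join('\n'));
// no data, 2, 3, no data, no data, no data
```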
I have the output of a REST API in the following JSON format, and I need to convert it to a flat format so it can be passed as input to another API call.
{
"result": {
"data": [
{
"data": 2.824315071105957,
"dateTime": "2019-09-10T11:32:05.220Z",
"device": { "id": "b3" },
"diagnostic": { "id": "DiagnosticAccelerationForwardBrakingId" },
"controller": "ControllerNoneId",
"version": "00000000000363b0",
"id": "a5UyPzhknSC-N2wtLBph3BA"
},
{
"data": 0,
"dateTime": "2019-09-10T11:32:05.220Z",
"device": { "id": "b3" },
"diagnostic": { "id": "DiagnosticAccelerationSideToSideId" },
"controller": "ControllerNoneId",
"version": "00000000000363b1",
"id": "a5UyPzhknSC-N2wtLBph3BQ"
},
// ... 1000's of rows like this
]
}
}
I need to convert it into the below format using JavaScript.
Desired format:
{"result":{"data":[{"id":"b3","dateTime":"2019-09-10T11:32:05.220Z","DiagnosticAccelerationSideToSideId":0,"DiagnosticAccelerationForwardBrakingId":2.824315071105957},...
The rows need to be merged with a primary key formed by the combination of the id and dateTime attributes. Please note that the diagnostic id value becomes a key in the required format, and the data value is that key's value.
Is there any way to convert this JSON to the above flat format?
I need to convert JSON that has many rows per single data entry into a single-row format: one JavaScript function that accepts a string in the rows format and returns the merged string in the desired format.
function mergeRows(flatDataJSONString) {
    ...
}
If the items are ordered (meaning items i and i+1 are the ones merged), then iterate with jumps of i += 2.
If they are not ordered, or the number of items to be merged can be more than 2, use an object with a unique key composed of the id and date, and override its data whenever a record matches that key:
function merger (jsonStr) {
// convert str to obj
const jsonObj = JSON.parse(jsonStr);
const dataObj = {};
for (let i = 0; i < jsonObj.result.length; i++) {
const item = jsonObj.result[i];
// use unique key to merge by
const itemUniqueKey = item.device.id + item.dateTime;
// take last value or create empty object if not exists
const existingItem = dataObj[itemUniqueKey] || {};
// add some logic to merge item with existingItem as you need
...
// set the result back to dataObj to be used on next merges
dataObj[itemUniqueKey] = [merge result of item and existing item];
}
// take dataObj values, you don't need the keys any more
const dataArr = Object.values(dataObj);
const finalResult = {
result: {
data: dataArr
}
}
// convert back to json
return JSON.stringify(finalResult);
}
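To make the elided merge step concrete: a sketch of mergeRows for the exact shape in the question, where device id plus dateTime forms the composite key and each diagnostic id becomes a key holding its data value (the field names are taken from the sample payload; everything else is an assumption):

```javascript
function mergeRows(flatDataJSONString) {
  const { result } = JSON.parse(flatDataJSONString);
  const merged = {};

  for (const item of result.data) {
    // Composite key: device id + timestamp identifies one output row.
    const key = item.device.id + item.dateTime;

    // Start the row on first sight, then fold each diagnostic into it.
    if (!merged[key]) {
      merged[key] = { id: item.device.id, dateTime: item.dateTime };
    }
    merged[key][item.diagnostic.id] = item.data;
  }

  return JSON.stringify({ result: { data: Object.values(merged) } });
}
```

Running it on the two sample rows above produces a single row carrying both diagnostic keys.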
As stated in the comment, you first want a clean JSON definition in order to stringify it. Please get to the following definition of your JSON first:
const json = {
"result": [
{
"data": 2.824315071105957,
"dateTime": "2019-09-10T11:32:05.220Z",
"device": { "id": "b3" },
"diagnostic": { "id": "DiagnosticAccelerationForwardBrakingId" },
"controller": "ControllerNoneId",
"version": "00000000000363b0",
"id": "a5UyPzhknSC-N2wtLBph3BA"
},
{
"data": 0,
"dateTime": "2019-09-10T11:32:05.220Z",
"device": { "id": "b3" },
"diagnostic": { "id": "DiagnosticAccelerationSideToSideId" },
"controller": "ControllerNoneId",
"version": "00000000000363b1",
"id": "a5UyPzhknSC-N2wtLBph3BQ"
}]
};
and then you will be able to stringify it as follows:
JSON.stringify(json)
Hope this helps !
I am currently trying to merge two JSON files: one that is nested and one that is flat.
"ampdata": [
{
"nr": "303",
"code": "JGJGh4958GH",
"Anr": "AVAILABLE",
"ability": [ "" ],
"type": "wheeled",
"conns": [
{
"nr": "447",
"status": "",
"version": "3",
"format": "sckt",
"amp": "32",
"vol": "400",
"vpower": 22
}
]
}
]
[ {
"nr" : 91643421,
"Anr" : "Real",
"Title" : null,
"Comp" : null,
"Name" : "Smith",
"CompanyName" : "WhiteC"
}]
My current approach is:
var flowFile = session.get();
if (flowFile != null) {
    var StreamCallback = Java.type("org.apache.nifi.processor.io.StreamCallback")
    var IOUtils = Java.type("org.apache.commons.io.IOUtils")
    var StandardCharsets = Java.type("java.nio.charset.StandardCharsets")
    flowFile = session.write(flowFile,
        new StreamCallback(function (inputStream, outputStream) {
            var text = IOUtils.buffer(inputStream)
            var obj = JSON.parse(text)
            var neu = [];
            var neuesObjekt = {};
            for (var i = 0; i < obj.ampdata.length; i++) {
                var entry = obj.ampdata[i];
                if (obj.ampdata[i].nr != obj2.nr) {
                    obj2.nr = obj.ampdata[i].nr
                }
            }
            outputStream.write(JSON.stringify(neuesObjekt, null, '\t').getBytes(StandardCharsets.UTF_8))
        }))
    flowFile = session.putAttribute(flowFile, "filename", flowFile.getAttribute('filename').split('.')[0] + '_translated.json')
    session.transfer(flowFile, REL_SUCCESS)
}
How do I parse two flowfiles that arrive at the same time? I would like to work with both at once, as I have to compare them at several positions, but I cannot figure out how to avoid overwriting the first flowfile.
I had another approach using the MergeContent processor, but the result was just a concatenation of the two JSONs in a way that was no longer valid JSON. In any case, I prefer the JavaScript attempt; I just need your help figuring out how to do it properly.
I think you can use MergeContent with these parameters:
merge format: binary
header: [
footer: ]
demarcator: ,
This way, merging two JSON files into one will produce valid JSON (an array).
Then, if you need to reformat the JSON, you can still use an ExecuteScript processor...
And you don't need to implement the join-files logic yourself.
PS: to get two files from the input queue, use this type of code:
var flowFiles = session.get(2);
if (!flowFiles) return;
if (flowFiles.size() != 2) {
    session.transfer(flowFiles); // return the files back to the input queue
    return;
}
// we have exactly two files. let's process them...
var flowFile1 = flowFiles[0];
var flowFile2 = flowFiles[1];
// read each, parse, apply logic, write the result
...