AngularJS/JavaScript - How to rename an array key name - javascript

I have an application that reads an excel file, and displays its contents in an html table.
These elements are in an array called "AgendaItems".
There are 2 files the user can read, each is formatted with some differences.
Both have 3 columns.
File 1: AgendaItem, LegistarId, Title
File 2: Agenda #, File #, Title
I am able to read the file and populate the array $scope.AgendaItems with the contents of either file, depending on what the user selects.
My problem is that before making these modifications to accept a second file with a different format, I used:
<td scope="row">{{ai.AgendaItem}}</td>
<td>{{ai.LegistarID}}</td>
When processing the new file, the array now contains Agenda # and File #, which are equivalent to AgendaItem and LegistarID respectively.
Is there a way to choose, in the HTML, which value to display? For example, if the array has AgendaItem, display AgendaItem; if it has Agenda #, display Agenda #?
Or, is it possible to rename a key, from Agenda # to AgendaItem and from File # to LegistarID?
Please let me know if I need to post more details or code in order to get help.
Thank you in advance,
Erasmo

You can use JavaScript's array.map. It may make the code easier to read later.
Source data
var source = [{"Agenda #":"1","File #":"file1.xls"}]
Map it for cleaner usage
var mapped = source.map(function(p) {
    return {"AgendaItem": p["Agenda #"], "LegistarID": p["File #"]};
});
Otherwise you have to use the object["field name"] syntax in the HTML (not recommended), for example with ng-if="expression":
< ... ng-if='data["Agenda #"]' > {{data["Agenda #"]}}
...
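Building on that, here is a minimal sketch of how the mapping could live in the controller so the template keeps its original bindings; normalizeRow and rawRows are hypothetical names for the rows parsed from whichever spreadsheet the user picked:
// Normalize rows from either file format to a single shape.
// "Agenda #" and "File #" are the column names from the second file.
function normalizeRow(row) {
    return {
        AgendaItem: row.AgendaItem !== undefined ? row.AgendaItem : row["Agenda #"],
        LegistarID: row.LegistarID !== undefined ? row.LegistarID : row["File #"],
        Title: row.Title
    };
}
$scope.AgendaItems = rawRows.map(normalizeRow);
The template can then keep using {{ai.AgendaItem}} and {{ai.LegistarID}} for both files.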

Related

Replacing items skips values

I have a file that includes Database Name and Label - these labels correspond to sectors. My script goes as follows:
I read an Excel file that has sector names on it, and I then use this to get an allocation of calcrt_field and sector:
es_score_fields['esg_es_score_peers']= es_score_fields.iloc[:,10:100].apply(lambda x: '|'.join(x.dropna().astype(str)), axis=1)
Once I have each calcrt_field aligned to the relevant peers (sectors), I read another file that has 2 columns: Database Name and Label. The end goal is to map the score peer sectors to each of these Database Names. Examples:
Database Name1: Chemicals (123456)
Label1: Chemicals
Database Name 2: Cement (654321)
Label2: Cement
Once I read the file, I use the following (repeated over multiple rows) to remove any symbol, space, or comma:
score_peers_mapping.Label = BECS_mapping.Label.str.replace('&', '')
This gives me a list with both Database Name (unchanged) and Label (all words combined into a single string).
I then map these based on string length as follows:
score_peers_mapping['length'] = score_peers_mapping.Label.str.len()
score_peers_mapping = score_peers_mapping.sort_values('length', ascending=False)
score_peers_mapping
peers_d = score_peers_mapping.to_dict('split')
peers_d = peers_d['data']
peers_d
Finally, I do the following:
for item in peers_d:
    esg_es_score_peers[['esg_es_score_peers']] = esg_es_score_peers[['esg_es_score_peers']].replace(item[1], item[0], regex=True)
I exported to CSV at this stage to see if the mapping was being done correctly, but I can see that only some of the fields are being mapped correctly. I think the problem is this replace step.
Things I have checked (that might be useless but I thought were a good start):
All Labels are already the same as the esg_es_score_peers values - no need to substitute labels like I did to remove "&" and so on.
Multiple rows have the same string length, but the error does not necessarily apply to those ones (my initial thought was that maybe something was going wrong when sorting by string length whenever there were multiple rows with the same length).
Any help will be welcome.
Thanks so much.
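For reference, a minimal sketch of how that replace step could be expressed with a single pattern dictionary and escaped labels; the 'Database Name' column name and the use of re.escape are assumptions rather than something taken from the question:
import re
# Build one {label_pattern: database_name} mapping from the (already
# length-sorted) mapping frame, escaping each label so any regex
# metacharacters in it are treated literally by replace(..., regex=True).
mapping = {
    re.escape(label): db_name
    for db_name, label in zip(score_peers_mapping['Database Name'],
                              score_peers_mapping['Label'])
}
esg_es_score_peers['esg_es_score_peers'] = (
    esg_es_score_peers['esg_es_score_peers'].replace(mapping, regex=True)
)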

need to export individual features to shapefiles using geopandas

I'm using geopandas to read a GeoJSON file and output shapefiles. The issue is I cannot figure out how to export single features within that shapefile - only the entire shapefile. Just for reference, I'm using Google Colab.
Here's what I have so far:
os.makedirs('/content/drive/MyDrive/shapes')
gdf = gpd.read_file('/content/sample_data/countries.geojson')
for num, row in gdf.iterrows():
    key = row.city
    fileName = key + ".shp"
    path = '/content/drive/MyDrive/shapes/' + fileName
    os.makedirs('/content/drive/MyDrive/shapes/' + fileName)
    os.chdir('/content/drive/MyDrive/shapes/' + fileName)
    gdf.to_file(fileName)  # need to do something like row to file here
This code will export a bunch of shapefiles of the original GeoJSON file and name them by a certain key. I can't figure out how to loop through the individual features and make a shapefile for each.
Since I don't have your data, I am going to answer based on what you have described in your question. First of all, you should not save the whole gdf dataframe every time, because you want to save only the current row into a file (I think each row represents a city, and you want each city in a separate shapefile named after the city). So, what I suggest is:
os.makedirs('/content/drive/MyDrive/shapes')
gdf = gpd.read_file('/content/sample_data/countries.geojson')
for num, row in gdf.iterrows():
    key = row.city
    fileName = key + ".shp"
    path = '/content/drive/MyDrive/shapes/' + fileName
    os.makedirs(path)
    os.chdir('/content/drive/MyDrive/shapes/' + fileName)
    gdf.iloc[num:num+1, :].to_file(path)
Note that, instead of saving to fileName, I saved the file to path; otherwise, the last few lines of your code would serve no purpose.
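As a side note, here is a slightly tidier sketch of the same idea, with the imports spelled out and without the os.chdir call; the paths and the 'city' column come from the question, and a default RangeIndex after read_file is assumed:
import os
import geopandas as gpd

out_root = '/content/drive/MyDrive/shapes'
gdf = gpd.read_file('/content/sample_data/countries.geojson')
for num, row in gdf.iterrows():
    # One directory per feature, named after its 'city' attribute.
    out_dir = os.path.join(out_root, str(row.city))
    os.makedirs(out_dir, exist_ok=True)
    # Write just this one feature; iloc[[num]] keeps it a one-row GeoDataFrame.
    gdf.iloc[[num]].to_file(os.path.join(out_dir, str(row.city) + '.shp'))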

recursively parse json in javascript to retrieve certain keys' values

I am new to JavaScript, and every time I try learning it I just end up immensely frustrated. No offense, this is just my opinion, or perhaps I simply don't understand how it works at all.
I have a simple requirement. I have a pretty deeply nested dictionary (which is what it is called in many backend languages) at hand. To be more specific, it is the raw text of a Postman collection. The collection itself can contain multiple nested directories.
Now all I want to do is to be able to parse this dictionary and do something with it recursively.
For example, if I had to do the same in Python, I would do it as simply as:
def createRequests(self, dic):
    total_reqs = 0
    headers = {}
    print type(dic)
    keys = dic.keys()
    if 'item' in keys:
        print "Folder Found. Checking for individual request inside current folder . . \n"
        self.item_list = dic.get('item')
        for each_item in self.item_list:
            self.folder.append(self.createRequests(each_item))
    else:
        print "Found individual request. Appending. . . \n"
        temp_list = []
        temp_list.append(dic)
        self.requestList.append(temp_list)
    return self.requestList
where dic would be my dictionary that I want to parse.
Is there any simple and straight forward way to do the same in Javascript?
Let's just say all I want is this: I have a text file containing properly formed JSON data, whose contents have been read into dataReadFromFile and then parsed as:
var obj = JSON.parse(dataReadFromFile);
Is there any simple and easy way to treat this parsed object (or dataReadFromFile directly) as a dictionary, so that I can say something like dictionary.keys() if I want a list of the keys in it?
Note that the content of the file is not fixed. It may have multiple levels of nesting, which may not be known beforehand.
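For comparison, here is a minimal JavaScript sketch of the same recursion over the parsed object; the collectRequests helper and the requests array are hypothetical names, and obj is the result of JSON.parse above:
// Object.keys(node) plays the role of dic.keys() in the Python version.
function collectRequests(node, requestList) {
    if (Object.keys(node).indexOf('item') !== -1) {
        // A folder: recurse into each entry of its "item" array.
        node.item.forEach(function (child) {
            collectRequests(child, requestList);
        });
    } else {
        // An individual request: collect it.
        requestList.push(node);
    }
    return requestList;
}
var requests = collectRequests(obj, []);
JSON.parse already gives you a plain JavaScript object, so Object.keys(obj) returns a list of its keys, much like dic.keys() does in Python.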

How to lowercase field names in PDI (Pentaho)?

I'm new to PDI and I need to do some extraction from CSV files; however, sometimes the field names are in lowercase or uppercase.
I know how to modify values in rows but don't know how to do it for field names.
Does a step exist to do this?
I tried ${fieldName}.lower() and lower(${fieldName}) in the Select values step and in a JavaScript step, but without success.
Thanks in advance.
The quick fix is to right-click the list of columns provided by the CSV file input step and copy/paste it back and forth into Excel (or whatever).
If you also have 150 input files, the step which dynamically changes the column names (and other metadata, like type) is called Metadata Injection; see the Kettle doc. The official doc gives details and examples.
Your specific case is covered in a BizCubed sample. Download the sample near the end of the web page, unzip it, and load the ktr in PDI. You'll need to adapt the Fields step in the MetaDataInjection transformation. It is currently a Data Grid that you may replace with a JavaScript lowercase (or, better, a String operations step), after having kept only the first line of your CSV (read with the header NOT present, include the row number, and filter on rownumber = 1).
If you want to change a column name you can use the 'Select values' step.
There is a 'Rename to' option in the 'Select & Alter' tab as well as the 'Meta-data' tab that you can use to change a column name to whatever you want.

Issue with picking multiple JSON

I have recorded a script that performs a "search" for an ID.
I have done the following things:
Parametrized the "searchId" so that it can be picked up from the CSV Data Set Config.
Extracted the "key" from the URL through a Regular Expression Extractor and provided it to the URL that requires the "key", so that it is dynamic.
Now the issue is that, since the script was recorded for one search ID, the request "/build-4.4.10.0/SECChecker/Search/Html?_dc=0.5557150364018139&Grid-Ajax" at the end of my script has a body containing the one recorded "searchId".
The script runs and returns, for each thread, this same JSON result (the one present in the last request I mentioned). I want this to be dynamic too; how can I do that? Please guide me.
If you want to parametrize the last request, you should use the ${var_name} notation together with a CSV Data Set Config.
For instance, if you want to parametrize the first parameter of the body, you should have something like this:
{"SortField":"${var_name}",....
One thing: the _dc param, part of the path, looks like a random value used to avoid caching, so I use this to simulate the requests during the test:
.../Html?_dc=${__RandomString(15,0123456789)}&Grid-Ajax
This function returns a string whose length is 15 (1st parameter), built from the given set of digits (2nd parameter).
Hope it helps.
