I'm performing a search on customer payments within a given date range, and for each payment I need to fetch the reference number of the invoice that has been paid. The invoice reference number lives in the apply sublist, on the line where the apply field is set to true.
Here's my search code:
search.create({
    type: search.Type.CUSTOMER_PAYMENT,
    filters: [['lastmodifieddate', 'within', context.from_datetime, context.to_datetime]],
    columns: [
        'entity',
        'status',
    ]
}).run().forEach(function(result) {
    // Do stuff
});
And this is a shortened version of a customer payment payload:
{
    "id": "103",
    "type": "customerpayment",
    "isDynamic": false,
    "fields": {
        // payment main fields
    },
    "sublists": {
        // Other sublists
        "apply": {
            "line 1": {
                "apply": "T",
                "currency": "GBP",
                "refnum": "TEST-00002",
                // Other fields
            },
            "line 2": {
                "apply": "F",
                "currency": "GBP",
                "refnum": "TEST-00001",
                // Other fields
            }
        }
    }
}
So, in the search columns array, I want to grab the refnum field from the line where the apply field is T (in this case it should return TEST-00002).
It would also be good enough to grab the whole apply sublist; I can then work out the refnum by looping through the object.
What I want to avoid is loading the payment record every time, as that would slow down the search.
Is this possible? Can anyone help?
Thanks a lot!
I believe what you are looking for are the Applied To Transaction fields. You can access these as a join in the UI at the bottom of the list or via SuiteScript like below. In my account, the refnum field is the same as the document number of the Applied To transaction, so I can get the number with the following:
var customerpaymentSearchObj = search.create({
    type: "customerpayment",
    filters: [
        ["type", "anyof", "CustPymt"]
    ],
    columns: [
        "tranid",
        "entity",
        "amount",
        "appliedtotransaction",
        "appliedtolinkamount",
        "appliedtolinktype",
        search.createColumn({
            name: "tranid",
            join: "appliedToTransaction"
        }) // <--- This is the one
    ]
});
customerpaymentSearchObj.run().each(function(result) {
    // .run().each has a limit of 4,000 results
    var refnum = result.getValue({ name: 'tranid', join: 'appliedToTransaction' });
    return true;
});
I am new to KTDatatable in Metronic.
I am trying to use server-side pagination in the Metronic dashboard: I am parsing the data into a KTDatatable, but I can't find a way to parse the data returned from the API, nor to show the number of pages and each page's URL.
The code I have been able to write so far is:
data: {
    type: 'remote',
    source: {
        read: {
            url: dataURL,
            method: 'GET',
            contentType: 'application/json',
            map: function(data) {
                var cards = data.cards.data;
                var currentPage = data.cards.current_page;
                var lastPage = data.cards.last_page;
                return cards;
            }
        },
    },
    pageSize: 10,
    serverPaging: true,
},
In this code I was able to get the first ten records, but:
1- I wasn't able to parse them the way I want in the table.
2- I wasn't able to show the page numbers, nor to call the API for the second page or whichever page (x) I want.
These are the things I want to do.
Thanks in advance.
You can go back to the end of the KT-Datatable documentation to find most of the answers you need, but I am going to explain further, hoping it will be clearer.
The JSON returned from the API should have two main objects, meta and data, and it looks something like this:
{
    "meta": {
        "sort": "desc",
        "field": "IssueName",
        "page": 1,
        "pages": 2,
        "perpage": "10",
        "total": 11
    },
    "data": [
        {
            "IssueName": "******",
            "CardNumber": "******"
        },
        {
            "IssueName": "******",
            "CardNumber": "******"
        }
    ]
}
After getting the response from the API, you should return only the data object to be parsed by the datatable, so the map function should look something like this:
map: function(data) {
    return data.data;
}
and it will process the meta data itself.
To parse the data into the columns, use the same key names from the data in the column definitions array; in my example I used it like this:
columns: [
    {
        field: 'IssueName',
        title: columnsTitles.issue,
        type: 'string',
    },
    {
        field: 'CardNumber',
        title: columnsTitles.card_number,
        type: 'string',
    },
]
And every time the datatable calls the API, it sends extra data to help you build the right response. This data comes in the shape of an array (the field names should match these keys):
[
    "pagination" => array:4 [
        "page" => "1"
        "pages" => "2"
        "perpage" => "10"
        "total" => "11"
    ],
    "sort" => array:2 [
        "field" => "IssueName"
        "sort" => "desc"
    ],
]
The data sent relates to the pagination and sorting you have to apply in the API. You can also add filters; they will be stored in the array under the "query" field, and you can handle them in the backend.
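To make the backend side concrete, here is a minimal sketch of building that { meta, data } response shape in plain JavaScript, assuming your records are already in an array. The function name and variables are hypothetical, not part of KT-Datatable itself:

```javascript
// Hypothetical helper: builds the { meta, data } response KT-Datatable expects
// from a plain array, using the pagination/sort params the datatable sends.
function buildDatatableResponse(allRows, pagination, sort) {
    // Sort a copy of the rows by the requested field and direction.
    var sorted = allRows.slice().sort(function (a, b) {
        var x = a[sort.field];
        var y = b[sort.field];
        var cmp = x < y ? -1 : (x > y ? 1 : 0);
        return sort.sort === 'desc' ? -cmp : cmp;
    });
    var page = Number(pagination.page) || 1;
    var perpage = Number(pagination.perpage) || 10;
    var start = (page - 1) * perpage;
    return {
        meta: {
            sort: sort.sort,
            field: sort.field,
            page: page,
            pages: Math.ceil(sorted.length / perpage),
            perpage: perpage,
            total: sorted.length
        },
        // Only the current page of rows goes into "data".
        data: sorted.slice(start, start + perpage)
    };
}
```

In a real backend you would run the equivalent sorting/limiting in your database query rather than in memory; the point is only the response shape.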
I have dynamic child input fields that need to be rendered in a function, but when they are, they are not included in inputData properly, i.e. not under the parent input field's key. When the children are included directly in inputFields it works as expected, but with Zapier I can't use a function within the children array.
Here is the inputData currently, when the line items are rendered in a function (the LI_ prefix denotes a child input key):
"inputData": {
"supplier": "1",
"LI_budget": 1,
"LI_tax": 1,
"company": "1",
"currency": "1",
"LI_price": "1",
"LI_description": "1"
}
I'm expecting ("parent" is the inputField parent key here):
"inputData": {
"supplier": "1",
"parent": [{
"LI_budget": 1,
"LI_tax": 1,
"LI_price": "1",
"LI_description": "1"
}],
"company": "1",
"currency": "1",
}
This is the function I'm using to pull in the parent and children input fields:
const getLineItems = async (z, bundle) => {
    let lineItem = {
        key: 'parent',
        children: [
            {
                key: 'LI_description',
                label: 'Description',
                required: true
            },
            {
                key: 'LI_budget',
                required: true,
                label: 'Budget',
                dynamic: 'budget.id'
            },
            {
                key: 'LI_price',
                required: true,
                type: 'number',
                label: 'Unit price',
                helpText: 'Example: 50.25'
            },
            {
                key: 'LI_tax',
                required: true,
                label: 'Tax Rate',
                dynamic: 'tax_rate.id'
            }
        ]
    };
    return [lineItem];
};
There are dynamic fields generated in the getLineItems function that I took out to simplify. TIA
Caleb here from Zapier Platform Support. This is a tough one! We have a pretty long-standing issue report on our platform for supporting custom fields with parent keys (it boils down to a chicken vs the egg problem that really makes my head spin when I read the discussion on the issue). Your inputFields function is spot-on, it's just a matter of properly storing it in the bundle on our part.
I think we could cobble together a workaround to unflatten it. Before I do that though, could you give this a test in the editor and submit actual line items from a previous step to this step? I'm not sure what the inputData looks like (e.g. if multiple items are split like 1,2,3 or in some other fashion). If you want to iterate on this, it might be better to switch over to our public developer Slack (http://zpr.io/ttvdr); then we can post the results here for the next person to run into this. 😁
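For reference, the unflattening itself is simple when a single line item is submitted; a sketch in plain JavaScript (the LI_ prefix and parent key come from the question, the helper name is made up, and as noted above multiple submitted items may arrive in a different shape that this does not handle):

```javascript
// Hypothetical workaround: regroup flat LI_-prefixed keys under the parent key,
// producing the inputData shape the question expects.
function unflattenLineItems(inputData, parentKey, prefix) {
    var result = {};
    var lineItem = {};
    Object.keys(inputData).forEach(function (key) {
        if (key.indexOf(prefix) === 0) {
            // Child field: move it into the line item object.
            lineItem[key] = inputData[key];
        } else {
            // Top-level field: keep it where it is.
            result[key] = inputData[key];
        }
    });
    result[parentKey] = [lineItem];
    return result;
}

// e.g. inside perform: const data = unflattenLineItems(bundle.inputData, 'parent', 'LI_');
```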
I've got a Lambda function which acts as a trigger on a table of users' best scores, to maintain a leaderboard table.
In my leaderboard table the sort key is the score, and the players' names are stored in a separate list attribute, because more than one player can have the same score.
So when adding a player I do:
var paramsNewEntry = {
    "TableName": leaderboardTable,
    "Key": {
        "trackId": trackId,
        "time": newValue
    },
    "UpdateExpression": "SET players = list_append(if_not_exists(players, :emptyList), :playersList)",
    "ExpressionAttributeValues": {
        ":playersList": [userId],
        ":emptyList": []
    },
    "ReturnValues": "NONE"
};
And this works fine. I wanted to remove a player this way:
var paramsOldEntry = {
    "TableName": myTable,
    "Key": {
        "trackId": trackId,
        "time": oldValue
    },
    "UpdateExpression": "DELETE players :playerToRemove",
    "ExpressionAttributeValues": {
        ":playerToRemove": [userId]
    },
    "ReturnValues": "ALL_NEW"
};
But I get: Invalid UpdateExpression: Incorrect operand type for operator or function; operator: DELETE, operand type: LIST error.
The players attribute is a list, query response example:
{
    "Items": [
        {
            "time": {
                "N": "99994"
            },
            "players": {
                "L": [
                    {
                        "S": "krystianPostman2"
                    }
                ]
            },
            "trackId": {
                "S": "betaTrack001"
            }
        }
    ],
    "Count": 1,
    "ScannedCount": 1,
    "LastEvaluatedKey": {
        "time": {
            "N": "99994"
        },
        "trackId": {
            "S": "betaTrack001"
        }
    }
}
I've not seen any question on SO which provides any details on this in JavaScript when using the DynamoDB Document API.
The DynamoDB API doesn't have an option to delete a value from a LIST attribute based on its value; the DELETE action only supports SET data types. However, if you know the index of the value to be deleted, you can use REMOVE to delete that entry from the list:
UpdateExpression: 'REMOVE players[0]'
If the list is only ever going to hold the player names, it is better to store the attribute as a DynamoDB SET rather than a LIST.
Creating a set:
var docClient = new AWS.DynamoDB.DocumentClient();
docClient.createSet(["v1", "v2"]);
Deleting values from the SET using DELETE:
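A sketch of the delete-from-set update, assuming the AWS SDK v2 DocumentClient (the table and attribute names come from the question; the helper function name is made up). Note that the expression value must be a set built with docClient.createSet — passing a plain array is exactly what reproduces the LIST operand error:

```javascript
// Hypothetical helper: builds UpdateItem params that DELETE a player from the
// "players" string set. `playerSet` must come from docClient.createSet([...]).
function buildRemovePlayerParams(playerSet, trackId, time) {
    return {
        TableName: 'leaderboardTable',
        Key: { trackId: trackId, time: time },
        // DELETE removes the given values from a SET attribute.
        UpdateExpression: 'DELETE players :playerToRemove',
        ExpressionAttributeValues: { ':playerToRemove': playerSet },
        ReturnValues: 'NONE'
    };
}

// In a real Lambda:
// var docClient = new AWS.DynamoDB.DocumentClient();
// var params = buildRemovePlayerParams(docClient.createSet([userId]), trackId, oldValue);
// docClient.update(params, callback);
```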
I have a dataset of records stored in MongoDB, and I have been trying to extract a complex set of data from the records.
Sample records are as follows:
{
    bookId: '135wfkjdbv',
    type: 'a',
    store: 'crossword',
    shelf: 'A1'
}
{
    bookId: '13erjfn',
    type: 'b',
    store: 'crossword',
    shelf: 'A2'
}
I have been trying to extract the data such that for each bookId, I get a count (of records) for each shelf, per store holding the book identified by bookId, where the type of the book is 'a'.
I understand that the aggregation framework allows a pipeline with grouping, matching etc., but I have not been able to reach a solution.
The desired output is of the form:
{
    bookId: '135wfkjdbv',
    stores: [
        {
            name: 'crossword',
            shelves: [
                {
                    name: 'A1',
                    count: 12
                }
            ]
        },
        {
            name: 'granth',
            shelves: [
                {
                    name: 'C2',
                    count: 12
                },
                {
                    name: 'C4',
                    count: 12
                }
            ]
        }
    ]
}
The process isn't really that difficult when you look at it. The aggregation "pipeline" is exactly that: each "stage" feeds its result into the next for processing, just like a unix "pipe":
ps -ef | grep mongo | tee out.txt
So it's just a matter of adding stages: in fact three $group stages, where the first does the basic aggregation and the remaining two simply "roll up" the arrays required in the output.
db.collection.aggregate([
    { "$group": {
        "_id": {
            "bookId": "$bookId",
            "store": "$store",
            "shelf": "$shelf"
        },
        "count": { "$sum": 1 }
    }},
    { "$group": {
        "_id": {
            "bookId": "$_id.bookId",
            "store": "$_id.store"
        },
        "shelves": {
            "$push": {
                "name": "$_id.shelf",
                "count": "$count"
            }
        }
    }},
    { "$group": {
        "_id": "$_id.bookId",
        "stores": {
            "$push": {
                "name": "$_id.store",
                "shelves": "$shelves"
            }
        }
    }}
])
You could possibly $project at the end to change _id to bookId, but you should already know what it is, and it's best to get used to treating _id as the primary key. There is a cost to such operations, so it is a habit you should not get into; learn to do things correctly from the start.
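If you did want that rename anyway, the extra stage would look like this (illustrative only, written as the JavaScript object you would append to the pipeline above):

```javascript
// Optional final stage: suppress _id and expose it as bookId instead,
// keeping the stores array produced by the last $group.
var projectStage = { "$project": { "_id": 0, "bookId": "$_id", "stores": 1 } };
```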
So all that really happens here is that all the fields making up the grouping detail become the primary key of $group, with the other field produced as count, counting the shelves within that grouping. Think of the SQL equivalent:
GROUP BY bookId, store, shelf
All each subsequent stage does is transpose each grouping level into array entries: first the shelf within the store, then the store within the bookId. At each step, the fields in the primary grouping key are reduced as their content moves into the produced array.
When you start thinking in terms of "pipeline" processing, it becomes clear: you construct one form, take that output, and move it to the next form, and so on. This is basically how you fold the results into the two arrays.
I would like to have a JSON file through which I feed my database with new entries. If an entry already exists, it should be checked whether it was modified; if so, the entry should be updated. If it's new, it should be inserted.
I managed to insert new entries, but I got stuck on the updating and the code won't run:
Fruits = new Mongo.Collection('fruits');

var fruitSeeds = [
    {
        "nameId": "passionFruit",
        "name": "Passion Fruit"
    },
    {
        "nameId": "banana",
        "name": "Banana"
    },
    {
        "nameId": "pineapple",
        "name": "Pineapple"
    },
    {
        "nameId": "orange",
        "name": "Orange"
    }
];
_.each(fruitSeeds, function (fruit) {
    if (fruit.nameId === Fruits.findOne({
        name: fruit.nameId
    }).nameId) {
        Fruits.update(fruit);
        console.log("updated", fruit.name);
    } else {
        Fruits.insert(fruit);
        console.log("inserted", fruit.name);
    }
});
Thanks for your help!
Vin
Use upserts. See http://docs.meteor.com/#/full/upsert.
Example for your case:
_.each(fruitSeeds, function (fruit) {
    Fruits.upsert({nameId: fruit.nameId}, fruit);
});
The problem with your implementation is that the update function requires a selector parameter: the first parameter selects which documents in the database to update, and the second parameter is the modifier.
Thus this would work fine as well:
Fruits.update({nameId: fruit.nameId}, fruit);
However, I recommend using upsert in this case: it will create a document if the selector doesn't match any existing documents, and update the matched document if it does.