I've been developing a diagramming tool using jsPlumb.
I made the shapes using CSS, and the connections are made through jsPlumb.
I need to save the diagram in JSON or XML format, but I am having a hard time.
For example, this is my function for saving the diagram:
$(function save() {
    //$("#editor").resizable("destroy");
    var Objs = [];
    $('#editor').each(function () {
        Objs.push({
            id: $(this).attr('id'),
            html: $(this).html(),
            left: $(this).css('left'),
            top: $(this).css('top'),
            width: $(this).css('width'),
            height: $(this).css('height')
        });
    });
    console.log(Objs);
});
I've also been trying JSON.stringify to get the data and JSON.parse to load it, but I still can't figure it out.
Is there a way to save a jsPlumb diagram to JSON or XML?
Whenever a connection is established, the "connection" event is triggered. You need to store the connection's endpoint details in that event handler so you can retrieve them later.
First, make sure you have set a proper id for each endpoint. You can set them manually at endpoint creation:
var e0 = jsPlumb.addEndpoint("div1", { uuid: "div1_ep1" }), // you can also base the uuid on the element it is placed on
    e1 = jsPlumb.addEndpoint("div2", { uuid: "div2_ep1" });
Now bind the connection event, in which you will store the established connection info:
var uuid = [], index = 0; // array to store the endpoint pairs

jsPlumb.bind("connection", function (ci) {
    var eps = ci.connection.endpoints;
    console.log(eps[0].getUuid() + " -> " + eps[1].getUuid()); // store this information in a 2d array or any other format you wish
    uuid[index] = [eps[0].getUuid(), eps[1].getUuid()]; // [source endpoint id, target endpoint id]
    index++;
});
You can convert the array to JSON format and store it. On restoring, connect the endpoints based on their uuids:
jsPlumb.connect({ uuids:["div1_ep1","div2_ep1"] });
Here is the jsFiddle for making connections based on endpoints.
NOTE: The above code only restores the connection and endpoint information after you have restored the divs' CSS. You can store the CSS properties of all divs using the same method you wrote in your question.
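Putting the pieces together, here is a minimal save/restore sketch. It assumes the uuid array built in the "connection" handler above, and that the divs and endpoints have already been re-created before restoring; localStorage is just one possible place to keep the JSON:

// Save: serialize the endpoint pairs to JSON (localStorage is an arbitrary choice)
function saveConnections(uuid) {
    localStorage.setItem('connections', JSON.stringify(uuid));
}

// Restore: reconnect each stored source/target endpoint pair by uuid
function restoreConnections() {
    var pairs = JSON.parse(localStorage.getItem('connections') || '[]');
    pairs.forEach(function (pair) {
        jsPlumb.connect({ uuids: [pair[0], pair[1]] });
    });
}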
I just recently tried this and it's working:
function createJSON() {
    var data = {};
    $("input.process").each(function () {
        data[$(this).attr("name")] = $(this).val();
    });
    var jsonString = JSON.stringify(data); // stringify once, after the loop
    console.log(jsonString);
}
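For the loading side, a small sketch of the reverse (JSON.parse plus reapplying the saved values; the input selector mirrors the one above):

function loadJSON(jsonString) {
    var data = JSON.parse(jsonString);
    $("input.process").each(function () {
        var name = $(this).attr("name");
        if (data.hasOwnProperty(name)) {
            $(this).val(data[name]); // restore the saved value
        }
    });
}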
I'm trying some basic API Connect tutorials on IBM's platform (running locally using loopback) and have got completely stuck at an early point.
I've built a basic API service with some in-memory data and setter / getter functions. I've then built a separate API which takes two GET parameters and uses one of my getter functions to perform a search based on two criteria. When I run it, I successfully get a response with the following JSON object:
[{"itemId":1,"charge":9,"itemSize":2,"id":2}]
I've then tried to add a piece of server logic that modifies the response data; at this point, I'm just trying to add an extra field. I've added a JavaScript component in the Assemble view and included the following code (taken from a tutorial), which I thought would modify the message body returned by the API while still passing it through:
//APIC: get the payload
var json = apim.getvariable('message.body');
//console.error("json %s", JSON.stringify(json));
//same: code to inject new attribute
json.platform = 'Powered by IBM API Connect';
//APIC: set the payload
//message.body = json;
apim.setvariable('message.body', json);
Instead of getting an extra JSON parameter ("platform"), all I get is a 500 error when I call the service. I'm guessing that I'm doing something fundamentally wrong, but all the docs suggest these are the right variable names to use.
You can't access json.platform because at that point the json variable is of a JSON (string) type. Are you sure you can add a property to a json-type variable when the object lacks that property? What if you first parse the json variable into a normal object, then add the new property, and finally stringify it back to a json type for assigning to the body?
var json = JSON.parse(apim.getvariable('message.body')); //convert to normal object
json.platform = 'Powered by IBM API Connect'; //add new property
apim.setvariable('message.body', JSON.stringify(json)); //convert to json again before setting as body value
You need to read the context in a determined format and do your logic in that callback. For example, if your message is JSON, you need to do:
apim.readInputAsJSON(function (error, json) {
    if (error) {
        // handle error
        apim.error('MyError', 500, 'Internal Error', 'Some error message');
    } else if (json) {
        // APIC: inject the new attribute into the payload
        json.platform = 'Powered by IBM API Connect';
        // APIC: set the payload
        apim.setvariable('message.body', json);
    }
});
Reference:
IBM Reference
Your message.body may be empty; put an invoke/proxy policy before your gateway/JavaScript policy, for example.
I am trying to figure out a way to fetch only the filtered values from a table if a filter is active in Office-JS API.
Right now the only way I have found to fetch all the table data is through the table range's values property:
var table = tables.getItemAt(0);
var tableRange = table.getRange();
tableRange.load("values");
ctx.sync().then(function () {
    // This returns all the values from the table, not only the visible data
    var values = tableRange.values;
});
Any ideas on how I can proceed to fetch only the visible values from the table if a filter is active?
From previous experience with Office Interop I have achieved the same by looping through the different Areas of the table range, but I am unable to find the equivalent to Areas in Office-JS.
The upcoming wave of features in the Excel JS APIs 1.3 will include a new object, "RangeView", that allows you to read only the visible values off the Range object.
Here's a link to the open spec on GitHub - https://github.com/OfficeDev/office-js-docs/tree/ExcelJs_1.3_OpenSpec/excel.
Note that this isn't available just yet, but will be in the near future.
Usage for your case off a table would look like this:
var table = tables.getItemAt(0);
var visibleView = table.getRange().getVisibleView();
ctx.load(visibleView);
ctx.sync().then(function () {
    var values = visibleView.values;
});
One way to get only filtered data is through the Binding.getDataAsync method, which takes a filterType parameter.
Office.select("bindings#myTableBinding1").getDataAsync({
    coercionType: "table",
    filterType: "onlyVisible"
}, function (asyncResult) {
    var values = asyncResult.value.rows;
});
This code assumes you have already created a binding to the table. If not, you can run the following code first, which uses the table name to call Bindings.addFromNamedItemAsync:
Office.context.document.bindings.addFromNamedItemAsync("Table1", "table", {
    id: "myTableBinding1"
}, function (asyncResult) {
    // handle errors and call code sample #1
});
Note that the solution above is supported as far back as Excel 2013 because it uses the shared APIs. The Excel-specific API set doesn't yet have the capability to return only unfiltered data.
-Michael Saunders, PM for Office add-ins
Currently, I have a table named Appointments; on Appointments, I have a Relation of Clients.
In searching the Parse documentation, I haven't found much help on how to eagerly fetch the whole child collection of Clients when retrieving Appointments. I have attempted a standard query, which looked like this:
var Appointment = Parse.Object.extend("Appointment");
var query = new Parse.Query(Appointment);
query.equalTo("User", Parse.User.current());
query.include('Rate'); // a pointer object
query.find().then(function (appointments) {
    let appointmentItems = [];
    for (var i = 0; i < appointments.length; i++) {
        var appt = appointments[i];
        var clientRelation = appt.relation('Client');
        clientRelation.query().find().then(function (clients) {
            appointmentItems.push({
                objectId: appt.id,
                startDate: appt.get("Start"),
                endDate: appt.get("End"),
                clients: clients, // should be a Parse object collection
                rate: appt.get("Rate"),
                type: appt.get("Type"),
                notes: appt.get("Notes"),
                scheduledDate: appt.get("ScheduledDate"),
                confirmed: appt.get("Confirmed"),
                parseAppointment: appt
            }); // add to appointmentItems
        }); // query.find
    }
});
This does not return a correct Clients collection.
I then switched over to attempting this in Cloud Code. Assuming the issue was on my side, I thought I'd create a function that did the same thing, only on their server, to reduce the number of network calls.
Here is how that function was defined:
Parse.Cloud.define("GetAllAppointmentsWithClients", function (request, response) {
    var Appointment = Parse.Object.extend("Appointment");
    var query = new Parse.Query(Appointment);
    query.equalTo("User", request.user);
    query.include('Rate');
    query.find().then(function (appointments) {
        // for each appointment, get all client items
        var apptItems = appointments.map(function (appointment) {
            var ClientRelation = appointment.get("Clients");
            console.log(ClientRelation);
            return {
                objectId: appointment.id,
                startDate: appointment.get("Start"),
                endDate: appointment.get("End"),
                clients: ClientRelation.query().find(),
                rate: appointment.get("Rate"),
                type: appointment.get("Type"),
                notes: appointment.get("Notes"),
                scheduledDate: appointment.get("ScheduledDate"),
                confirmed: appointment.get("Confirmed"),
                parseAppointment: appointment
            };
        });
        console.log('apptItems Count is ' + apptItems.length);
        response.success(apptItems);
    });
});
and the resulting "Clients" value looks nothing like the actual object class:
clients: {_rejected: false, _rejectedCallbacks: [], _resolved: false, _resolvedCallbacks: []}
When I browse the data, I see the related objects just fine. The fact that Parse cannot eagerly fetch relational queries within the same call seems a bit odd coming from other data providers, but at this point I'd take the overhead of additional calls if the data was retrieved properly.
Any help would be beneficial, thank you.
Well, in your Cloud Code example, ClientRelation.query().find() will return a Parse.Promise, so the output clients: {_rejected: false, _rejectedCallbacks: [], _resolved: false, _resolvedCallbacks: []} makes sense: that's what a promise looks like in the console. ClientRelation.query().find() is an async call, so your response.success(apptItems) is going to happen before you're done anyway.
Your first example looks good as far as I can tell, though. What do you see as your clients response if you just output it like the following? Are you sure you're getting an array of Parse.Objects? Are you getting an empty []? (Meaning, do the appointment objects you're querying actually have clients added?)
clientRelation.query().find().then(function (clients) {
    console.log(clients); // check what you're actually getting here
});
Also, one more helpful thing: are you going to have more than 100 clients in any given appointment object? Parse.Relation is really meant for very large related collections of other objects. If you know your appointments won't have more than 100 (rule of thumb) related objects, a much easier way of doing this is to store the client objects in an Array within your Appointment objects.
With a Parse.Relation, you can't get around making a second query to fetch the related collection (client-side or in Cloud Code). But with an Array column you could do the following:
var query = new Parse.Query(Appointment);
query.equalTo("User", request.user);
query.include('Rate');
query.include('Clients'); // assumes the Clients column is now an Array of Client Parse.Objects
query.find().then(function (appointments) {
    // the Client Parse.Objects come already nested in the returned appointments
    console.log(appointments[0].get('Clients'));
});
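For completeness, a sketch of writing that Array column (client1 and client2 are hypothetical Client Parse.Objects, and Appointment is the subclass from above):

var appointment = new Appointment();
appointment.set('Clients', [client1, client2]); // an Array column instead of a Parse.Relation
appointment.save().then(function (saved) {
    console.log('Saved appointment ' + saved.id + ' with nested clients');
});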
I ended up solving this using "promises in series".
The final code looked something like this:
var Appointment = Parse.Object.extend("Appointment");
var query = new Parse.Query(Appointment);
query.equalTo("User", Parse.User.current());
query.include('Rate');
var appointmentItems = [];
query.find().then(function (appointments) {
    var promise = Parse.Promise.as();
    _.each(appointments, function (appointment) {
        promise = promise.then(function () {
            var clientRelation = appointment.relation('Clients');
            return clientRelation.query().find().then(function (clients) {
                appointmentItems.push({
                    //...object details
                });
            });
        });
    });
    return promise;
}).then(function (result) {
    // return/use appointmentItems with the sub-collection of clients fetched in the subquery
});
You can apparently do this in parallel, but that wasn't really needed for me, as the query I'm using seems to return instantaneously. I got rid of the Cloud Code, as it didn't seem to provide any performance boost. I will say, the fact that you cannot debug Cloud Code is truly limiting, and I wasted a bit of time waiting for console.log statements to show up in the Cloud Code panel's log. Overall, the Parse.Promise object was the key to getting this to work properly.
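For reference, a parallel variant is possible with Parse.Promise.when. This is only a sketch; depending on the SDK version, the combined results may arrive as a single array or as separate arguments:

query.find().then(function (appointments) {
    // kick off all the relation queries at once
    var promises = appointments.map(function (appointment) {
        return appointment.relation('Clients').query().find();
    });
    return Parse.Promise.when(promises);
}).then(function () {
    // older SDKs pass one result per argument; newer ones may pass an array
    var clientLists = Array.prototype.slice.call(arguments);
    console.log(clientLists.length + ' client lists fetched in parallel');
});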
We are trying to create an entity that has date attributes via an OData service. The backend is an SAP system. The entity has only 3 key attributes plus a bunch of other attributes. We have identified that the dates in the keys are the root cause of the problem.
Keys:
Pernr type string,
Begda type datetime,
Endda type datetime.
The code below (which does not work) has been severely simplified while troubleshooting the issue. At the moment, it reads an entity from an entity set and immediately tries to create one with exactly the same data.
Code:
var oODataModel = new sap.ui.model.odata.ODataModel("/sap/opu/odata/sap/Z_PERSONAL_DATA_SRV/");

// Test entity to be saved
var entity = null;

// Handler for read error
var handleReadE = function (oEvent) {
    alert("error");
};

// Handler for read success
var handleRead = function (oEvent) {
    // Get the data read from the backend
    entity = oEvent.results[0];
    // Try to create a new entity with the same data
    oODataModel.create('/PersDataSet', entity, null, function () {
        alert("Create successful");
    }, function (oError) {
        alert("Create failed", oError);
    });
};

oODataModel.read("/PersDataSet", null, [], true, handleRead, handleReadE);
In the gateway error log, an XML parsing error appears. In this log we can see the request data, and the dates are transported as String types. These dates are defined in the service as DateTimes, so the request is rejected.
Example:
<m:properties>
<d:Pernr m:type="Edm.String">00000001</d:Pernr>
<d:Endda m:type="Edm.String">9999-12-31T00:00:00</d:Endda>
<d:Begda m:type="Edm.String">1979-05-23T00:00:00</d:Begda>
When the entity is read, the backend does not send any type information. It sends something like the following example:
<m:properties>
<d:Pernr>72010459</d:Pernr>
<d:Endda>9999-12-31T00:00:00</d:Endda>
<d:Begda>1876-07-21T00:00:00</d:Begda>
And indeed, if we try to save the same info without the type="..." attributes, it works. So the problem is the incorrect types that ODataModel.create adds to the XML.
My questions are:
Can I tell ODataModel.create not to add this type info? It is not doing a good job of inferring the types.
Can anyone share an example of reading and writing dates through OData?
Thank you very much in advance.
The data returned from oODataModel.read is raw; before you post it back, you need to parse it:
var handleRead = function (oEvent) {
    // Get the data read from the backend
    entity = oEvent.results[0];
    var newEntity = jQuery.extend({}, entity);
    delete newEntity.__metadata;
    newEntity.Begda = new Date(entity.Begda);
    newEntity.Endda = new Date(entity.Endda);
    // Try to create a new entity with the parsed data
    oODataModel.create('/PersDataSet', newEntity, null, function () {
        alert("Create successful");
    }, function (oError) {
        alert("Create failed");
    });
};
Why not use JSON instead of XML?
Thanks all for the help.
We got this working by accounting for the following:
The problem of the wrong types appended to the attributes comes from the read itself. The object returned by read has a __metadata attribute which describes the values. In this object the dates are set with type=Edm.String, even when the service says they are DateTime. To me this is a bug in the .read function.
When the same object is used to save, create sees the __metadata on the entry and uses those values, producing type Edm.String for the dates. This caused the request to be rejected. Manually changing these __metadata properties' type to Edm.DateTime makes it work.
In the end, we did the following:
Dates are parsed manually from the OData response, creating a JS Date object from the strings in the format "yyyy-MM-ddT00:00:00", so they work with control bindings. When we want to save, the reverse is done.
The object to be created is a new object with only the attributes we care about (no __metadata).
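A minimal sketch of that manual conversion (the string format comes from the question; the helper names are our own):

// "1979-05-23T00:00:00" -> JS Date (local time, date part only)
function parseODataDate(value) {
    var p = value.split(/[-T:]/);
    return new Date(Number(p[0]), Number(p[1]) - 1, Number(p[2]));
}

// JS Date -> "1979-05-23T00:00:00" for the create payload
function formatODataDate(date) {
    function pad(n) { return (n < 10 ? '0' : '') + n; }
    return date.getFullYear() + '-' + pad(date.getMonth() + 1) + '-' +
        pad(date.getDate()) + 'T00:00:00';
}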
I've got an update to my question.
What I really wanted to know was this:
How do I get CSV data into NetSuite?
Well, it seems I use the CSV import tool to create a mapping and use the call nlapiSubmitCSVImport(nlobjCSVImport) to import the CSV.
Now my question is: how do I iterate through the object?!
That gets me halfway: I get the CSV data, but I can't seem to find out how to iterate through it in order to manipulate the data. This is, of course, the whole point of a scheduled script.
This is really driving me mad.
@Robert H:
I can think of a million reasons why you'd want to import data from a CSV. Billing, for instance, or the various reports on data any company keeps. I wouldn't want to keep this in the file cabinet, nor would I really want to keep the file at all. I just want the data: I want to manipulate it and I want to enter it.
Solution Steps:
To upload a CSV file we have to use a Suitelet script.
(Note: file - This field type is available only for Suitelets and will appear on the main tab of the Suitelet page. Setting the field type to file adds a file upload widget to the page.)
var fileField = form.addField('custpage_file', 'file', 'Select CSV File');
var id = nlapiSubmitFile(file);
Let's prepare to call a Restlet script and pass the file id to it.
var recordObj = new Object();
recordObj.fileId = fileId;
// Format input for Restlets for the JSON content type
var recordText = JSON.stringify(recordObj);//stringifying JSON
// Setting up the URL of the Restlet
var url = 'https://rest.na1.netsuite.com/app/site/hosting/restlet.nl?script=108&deploy=1';
// Setting up the headers for passing the credentials
var headers = new Array();
headers['Content-Type'] = 'application/json';
headers['Authorization'] = 'NLAuth nlauth_email=amit.kumar2@mindfiresolutions.com, nlauth_signature=*password*, nlauth_account=TSTDRV****, nlauth_role=3';
(Note: nlapiCreateCSVImport: This API is only supported for bundle installation scripts, scheduled scripts, and RESTlets)
Let's call the Restlet using nlapiRequestURL:
// Calling Restlet
var output = nlapiRequestURL(url, recordText, headers, null, "POST");
Create a mapping using Import CSV records available at Setup > Import/Export > Import CSV records.
Inside the Restlet script, fetch the file id from the Restlet parameter. Use the nlapiCreateCSVImport() API and set its mapping to the mapping id created in step 3. Set the CSV file using the setPrimaryFile() function.
var primaryFile = nlapiLoadFile(datain.fileId);
var job = nlapiCreateCSVImport();
job.setMapping(mappingFileId); // Set the mapping
// Set File
job.setPrimaryFile(primaryFile.getValue()); // Fetches the content of the file and sets it.
Submit using nlapiSubmitCSVImport().
nlapiSubmitCSVImport(job); // We are done
There is another way we can get around this, although it is neither preferable nor something I would suggest, as it consumes a lot of API usage if you have a large number of records in your CSV file.
Let's say we don't want to use the nlapiCreateCSVImport API, and continue from step 4.
Just fetch the file Id as we did earlier, load the file, and get its contents.
var fileContent = primaryFile.getValue();
Split the lines of the file, then subsequently split the words and store the values into separate arrays.
var splitLine = fileContent.split("\n"); // split the file into lines
for (var line = 1; line < splitLine.length; line++) { // start at 1 to skip the header row
    var words = splitLine[line].split(","); // words holds all the values on a line
    for (var word = 0; word < words.length; word++) {
        nlapiLogExecution("DEBUG", "Words:", words[word]);
    }
}
Note: Make sure you don't have an additional blank line in your CSV file.
Finally create the record and set field values from the array that we created above.
var myRec = nlapiCreateRecord('cashsale'); // Here you create the record of your choice
myRec.setFieldValue('entity', arrCustomerId[i]); // For example, arrCustomerId is an array of customer ID.
var submitRec = nlapiSubmitRecord(myRec); // and we are done
Fellow NetSuite user here. I've been using SuiteScripts for a while now but have never seen the nlobjCSVImport object or nlapiSubmitCSVImport. I looked in the documentation; they show up, but there is no page describing the details. Care to share where you got the doc from?
With the doc for the CSVImport object I might be able to provide some more help.
P.S. I tried posting this message as a comment, but the "Add comment" link didn't show up for some reason. Still new to SOF.
CSV to JSON: to convert a CSV file to a JSON object or data table, see jquery-csv:
https://code.google.com/p/jquery-csv/
If you know the structure of the CSV file, just do a for loop and map the fields to the corresponding nlapiSetValue.
Should be pretty straightforward.
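As a rough sketch of that loop (a hypothetical example: the cashsale record type and entity field come from the earlier answer, and the column order is an assumption, not a known file layout):

// Assumes splitLine from the earlier answer, where line 0 is the header row
for (var i = 1; i < splitLine.length; i++) {
    var fields = splitLine[i].split(",");
    var rec = nlapiCreateRecord('cashsale'); // record type borrowed from the earlier example
    rec.setFieldValue('entity', fields[0]);  // assumption: column 0 holds the customer internal id
    rec.setFieldValue('memo', fields[1]);    // assumption: column 1 holds a memo
    nlapiSubmitRecord(rec);                  // consumes governance units per record
}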