How to access Xively datapoint history with JavaScript?

I am new to Xively, and I am trying to access the datapoint history of a feed.
From the xively-js documentation (http://xively.github.io/xively-js/docs/) it seems I can use the method xively.datapoint.history(feedID, datastreamID, options{}, callback(data)), but I don't know how to use it.
I know the feedID and datastreamID parameters, but I am not sure about the options.
From the Xively API reference (https://xively.com/dev/docs/api/quick_reference/historical_data/) I think I should pass start and end parameters. I used feed ID 40053 and datastream ID "airpressure". You can enter the feed ID here to get more information about it: http://xively.github.io/xively-js/demo/
I tried the code below, but it's not working. Am I doing something wrong, or is the datapoint history itself restricted so that it can't be accessed?
// Make sure the document is ready to be handled
$(document).ready(function($) {
    // Set the Xively API key (https://xively.com/users/YOUR_USERNAME/keys)
    xively.setKey("yWYxyi3HpdqFCBtKHueTvOGoGROSAKxGRFAyQWk5d3JNdz0g");

    // Replace with your own values
    var feedID = 40053;
    var datastreamID = "airpressure"; // Datastream ID

    // Get datastream data from Xively
    xively.datapoint.history(feedID, datastreamID,
        {
            start: "2013-09-10T00:00:00.703576Z",
            end: "2013-10-10T00:00:00.703576Z"
        },
        function(data) {
            //data.forEach(function(datapoints){document.write(JSON.stringify(datapoints["value"], null, 4));});
            document.write(JSON.stringify(data, null, 4));
        });
});

I didn't read the documentation properly.
The maximum duration for each query is six hours, so changing the end time to "2013-09-10T06:00:00.703576Z" solved my problem.

You can use the duration and interval parameters:
xively.datapoint.history(feedID, datastreamID1,
    { duration: "14days", interval: "1000" },
    function(data) {
        document.write(JSON.stringify(data, null, 4));
    }
);

Alvinadi,
That's correct. The other thing you could do is set the interval parameter to something greater than 0. This reduces the density of the datapoints, returning only one datapoint per the number of seconds specified in the interval, which can be useful when trying to retrieve an average over large amounts of data.
Here is the API documentation explaining the available intervals: https://xively.com/dev/docs/api/quick_reference/historical_data/
Pro tip: set the parameter limit=1000 to return the maximum number of results per request and avoid having to paginate through the data.
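Putting the answers above together, a history call that thins the data and maximizes the page size might look like this (a sketch only; the parameter spellings are exactly the ones used in this thread):
xively.datapoint.history(feedID, datastreamID,
    {
        duration: "14days",  // instead of explicit start/end
        interval: "1000",    // one datapoint per 1000 seconds
        limit: 1000          // maximum results per request, per the tip above
    },
    function(data) {
        document.write(JSON.stringify(data, null, 4));
    }
);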

Related

How to solve the NetSuite Restlet execution time limit issue?

I am working with NetSuite Restlets for the first time.
I have the following data retrieved from a saved search:
{
    "recordType": "receipt",
    "id": "sample-id",
    "values": {
        "customer.customerid": "sample-id",
        "customer.customercompany": "sample-customercompany",
        "customer.addressone": "sample-addressone",
        "customer.addresstwo": "sample-addresstwo",
        "customer.addresscity": "sample-addresscity",
        "customer.addressstate": "sample-addressstate",
        "country": "Australia",
        "transacitionrecordid": "sample-id",
        "unit": "Dollar",
        "total": "120"
    }
}
I have to loop over the result set, push each record into an array, and return the array at the end.
There are no fields that I can drop; all the fields have to be included.
The problem is that there are roughly 31,000 records, so when I run my script the execution goes over 5 minutes, which is the Restlet execution time limit.
Here is my script.
define(['N/search'], function(search) {
    function get(event) {
        var saved = search.load({ id: "search-id" });
        var searchResultSet = saved.run();
        var results = [];
        var searchRecords = [];
        var start = 0;
        do {
            // getRange returns at most 1000 results per call
            searchRecords = searchResultSet.getRange({ start: start, end: start + 1000 });
            start = start + 1000;
            results = results.concat(searchRecords); // concat returns a new array
        } while (searchRecords.length > 0);
        return JSON.stringify(results); // return as string for now to see the output in the browser
    }
    return {
        get: get
    };
});
This is what my script looks like. Ideally, I would call this script once and return all 31,000 records.
However, due to the execution limit, I am thinking of passing a parameter (acting as a cursor or start index) into the getRange function.
I have tested that I can fetch 10,000 records in one call, so I could call this script three times, passing 0, 10000 and 20000 as the parameter, along the lines of the sketch below.
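A rough sketch of that idea (the start parameter name is hypothetical, and Restlet GET parameters arrive as strings; "search-id" is the placeholder from above):
define(['N/search'], function(search) {
    function get(requestParams) {
        // e.g. call the Restlet three times with start=0, start=10000, start=20000
        var startIndex = parseInt(requestParams.start, 10) || 0;
        var chunkSize = 10000;
        var resultSet = search.load({ id: "search-id" }).run();
        var results = [];
        var page;
        do {
            // getRange returns at most 1000 results per call
            page = resultSet.getRange({ start: startIndex, end: startIndex + 1000 });
            results = results.concat(page);
            startIndex += 1000;
        } while (page.length === 1000 && results.length < chunkSize);
        return JSON.stringify(results);
    }
    return { get: get };
});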
But is there a better way to solve this? What I am really looking for is to call this script only once and return all 31,000 records without hitting the timeout.
Can I have any suggestions, please? Thank you very much in advance.
It sounds like you need a Map/Reduce script type. I am not sure what overall result you are trying to achieve, but in general Map/Reduce scripts are made for processing large amounts of data.
The script can be scheduled, or you can trigger it using N/task (which seems to be what you need if you want to trigger it from the Restlet).
Map/Reduce scripts have four native entry points, and each has its own usage limit every time it is triggered, which makes this script type ideal for processing large amounts of data like this.
The first is used for generating the dataset (you can return a search: return search.create({...})).
The second is for grouping data; it runs once per search result.
The third is for executing code on the data; it runs once per unique key passed from the previous function.
The fourth is for summarizing the script run; you can catch errors here, among other things.
It is possible that 31k results will be too large for the first function to query, in which case you can split the work into chunks: for example, fetch up to 5k results, then in the summarize step check whether there are still more results to process, and if so trigger the script again. (To do this you will need multiple deployments, plus either a marker on the transaction telling you it was processed or a global field holding the last chunk of data that was processed.)
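A minimal skeleton of those four entry points might look like this (a sketch against SuiteScript 2.0; "search-id" is the placeholder from the question):
/**
 * @NApiVersion 2.x
 * @NScriptType MapReduceScript
 */
define(['N/search'], function(search) {
    // 1) generate the dataset - returning a search object is enough
    function getInputData() {
        return search.load({ id: "search-id" });
    }
    // 2) runs once per search result - pass each one on, keyed by record id
    function map(context) {
        var result = JSON.parse(context.value);
        context.write({ key: result.id, value: result.values });
    }
    // 3) runs once per unique key written by map
    function reduce(context) {
        context.write({ key: context.key, value: context.values });
    }
    // 4) summarize - catch errors here, or re-trigger the script for the next chunk
    function summarize(summary) {
        summary.mapSummary.errors.iterator().each(function(key, error) {
            log.error('map error for key ' + key, error);
            return true; // keep iterating
        });
    }
    return {
        getInputData: getInputData,
        map: map,
        reduce: reduce,
        summarize: summarize
    };
});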

DynamoDB: Query only every 10th value

I am querying data between two specific unixtime values, for example all data between 1516338730 (today, 6:12) and 1516358930 (today, 11:48).
My database receives a new record every minute, so when I query the data of the last 24 hours it is far too dense; every 10th minute would be perfect.
My question is: how can I read only every 10th database record using DynamoDB?
As far as I know, there is no possibility to use modulo or anything similar that fits my needs.
This is my AWS Lambda code so far:
var read = {
    TableName: "user",
    ProjectionExpression: "#time, #val",
    KeyConditionExpression: "Id = :id and TIME between :time_1 and :time_2",
    ExpressionAttributeNames: {
        "#time": "TIME",
        "#val": "user_data"
    },
    ExpressionAttributeValues: {
        ":id": event, // primary key
        ":time_1": 1516338730,
        ":time_2": 1516358930
    },
    ScanIndexForward: true
};

docClient.query(read, function(err, data) {
    if (err) {
        callback(err, null);
    } else {
        callback(null, data.Items);
    }
});
You say that you insert 1 record every minute?
The following might be an option:
At the time of insertion, set another field on the record, let's call it MinuteBucket, which is calculated as the timestamp's minute value mod 10.
If you do this via a stream function, you can handle new records, and then write something to touch old records to force a calculation.
Your query would change to this:
/*...snip...*/
KeyConditionExpression: "Id = :id and TIME between :time_1 and :time_2 and MinuteBucket = :bucket_id",
/*...snip...*/
ExpressionAttributeValues: {
    ":id": event, // primary key
    ":time_1": 1516338730,
    ":time_2": 1516358930,
    ":bucket_id": 0 // can be 0-9; if you want the first record to be closer to time_1, set this to time_1's minute value mod 10
},
/*...snip...*/
Just as a follow-up thought: if you want to speed up your queries, perhaps investigate using the MinuteBucket in an index, though that might come at a higher price.
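For the stream idea mentioned above, a hypothetical DynamoDB Streams handler that back-fills MinuteBucket could look roughly like this (an untested sketch; table and key names are taken from the question, and Id is assumed to be a string):
var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient();

exports.handler = function(event, context, callback) {
    var updates = event.Records
        .filter(function(record) { return record.eventName === 'INSERT'; })
        .map(function(record) {
            var image = record.dynamodb.NewImage;
            var time = Number(image.TIME.N);
            // minute value of the unix timestamp, mod 10 -> a value from 0 to 9
            var bucket = Math.floor(time / 60) % 10;
            return docClient.update({
                TableName: "user",
                Key: { Id: image.Id.S, TIME: time },
                UpdateExpression: "SET MinuteBucket = :b",
                ExpressionAttributeValues: { ":b": bucket }
            }).promise();
        });
    Promise.all(updates)
        .then(function() { callback(null); })
        .catch(callback);
};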
I don't think that is possible with the DynamoDB API.
There is FilterExpression, which holds conditions that DynamoDB applies after the Query operation but before the data is returned to you. But AFAIK it isn't possible to use a custom function there, and the built-in functions are limited.
As a workaround, you could mark every 10th item on the client side when writing, and then query with an attribute_exists check (or an attribute-value check) to filter on that mark.
BTW, it would be a good idea to create an index on the Id attribute with sort key TIME to improve query performance.
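Alternatively, the simplest client-side version is to thin the result set after the query returns, e.g. in the Lambda from the question:
docClient.query(read, function(err, data) {
    if (err) {
        return callback(err, null);
    }
    // keep only items 0, 10, 20, ... of the minute-by-minute records
    var thinned = data.Items.filter(function(item, index) {
        return index % 10 === 0;
    });
    callback(null, thinned);
});
Note that this still reads (and pays for) every item; it only reduces what is handed back to the caller.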

Querying a parse table and eagerly fetching Relations for matching

Currently, I have a table named Appointments; on Appointments, I have a Relation of Clients.
Searching the Parse documentation, I haven't found much help on how to eagerly fetch the child collection of Clients when retrieving Appointments. I attempted a standard query, which looked like this:
var Appointment = Parse.Object.extend("Appointment");
var query = new Parse.Query(Appointment);
query.equalTo("User", Parse.User.current());
query.include('Rate'); // a pointer object
query.find().then(function(appointments) {
    let appointmentItems = [];
    for (var i = 0; i < appointments.length; i++) {
        var appt = appointments[i];
        var clientRelation = appt.relation('Client');
        clientRelation.query().find().then(function(clients) {
            appointmentItems.push(
                {
                    objectId: appt.id,
                    startDate: appt.get("Start"),
                    endDate: appt.get("End"),
                    clients: clients, // should be a Parse object collection
                    rate: appt.get("Rate"),
                    type: appt.get("Type"),
                    notes: appt.get("Notes"),
                    scheduledDate: appt.get("ScheduledDate"),
                    confirmed: appt.get("Confirmed"),
                    parseAppointment: appt
                }
            ); // add to appointmentItems
        }); // query.find
    }
});
This does not return a correct Clients collection.
I then switched over to attempt this in Cloud Code. Assuming the issue was on my side, I thought I'd create a function that did the same thing on the server, to reduce the number of network calls.
Here is how that function was defined:
Parse.Cloud.define("GetAllAppointmentsWithClients", function(request, response) {
    var Appointment = Parse.Object.extend("Appointment");
    var query = new Parse.Query(Appointment);
    query.equalTo("User", request.user);
    query.include('Rate');
    query.find().then(function(appointments) {
        // for each appointment, get all client items
        var apptItems = appointments.map(function(appointment) {
            var ClientRelation = appointment.get("Clients");
            console.log(ClientRelation);
            return {
                objectId: appointment.id,
                startDate: appointment.get("Start"),
                endDate: appointment.get("End"),
                clients: ClientRelation.query().find(),
                rate: appointment.get("Rate"),
                type: appointment.get("Type"),
                notes: appointment.get("Notes"),
                scheduledDate: appointment.get("ScheduledDate"),
                confirmed: appointment.get("Confirmed"),
                parseAppointment: appointment
            };
        });
        console.log('apptItems Count is ' + apptItems.length);
        response.success(apptItems);
    });
});
The resulting "Clients" look nothing like the actual object class:
clients: {_rejected: false, _rejectedCallbacks: [], _resolved: false, _resolvedCallbacks: []}
When I browse the data, I see the related objects just fine. The fact that Parse cannot eagerly fetch relational queries within the same call seems a bit odd coming from other data providers, but at this point I'd accept the overhead of additional calls if the data were retrieved properly.
Any help would be appreciated, thank you.
Well, in your Cloud Code example, ClientRelation.query().find() returns a Parse.Promise, so the output clients: {_rejected: false, _rejectedCallbacks: [], _resolved: false, _resolvedCallbacks: []} makes sense: that's what a promise looks like in the console. ClientRelation.query().find() is an async call, so your response.success(apptItems) is going to happen before it finishes anyway.
Your first example looks good as far as I can see, though. What do you see in your clients response if you just output it as below? Are you sure you're getting an array of Parse.Objects? Are you getting an empty []? (Meaning: do the objects with client relations you're querying actually have clients added?)
clientRelation.query().find().then(function(clients) {
    console.log(clients); // Check what you're actually getting here.
});
Also, one more helpful thing: are you going to have more than 100 clients in any given Appointment object? Parse.Relation is really meant for very large related collections of objects. If you know your appointments won't have more than about 100 related objects (rule of thumb), a much easier way of doing this is to store your Client objects in an Array column within your Appointment objects.
With a Parse.Relation, you can't get around making that second query to fetch the related collection (client or cloud). But with an Array datatype you could do the following:
var query = new Parse.Query(Appointment);
query.equalTo("User", request.user);
query.include('Rate');
query.include('Clients'); // Assumes Clients is now an Array column of Client Parse.Objects
query.find().then(function(appointments) {
    // You'll find Client Parse.Objects already nested and provided for you in the appointments.
    console.log(appointments[0].get('Clients'));
});
I ended up solving this using "Promises in Series".
The final code looked something like this:
var Appointment = Parse.Object.extend("Appointment");
var query = new Parse.Query(Appointment);
query.equalTo("User", Parse.User.current());
query.include('Rate');
var appointmentItems = [];
query.find().then(function(appointments) {
    var promise = Parse.Promise.as();
    _.each(appointments, function(appointment) {
        promise = promise.then(function() {
            var clientRelation = appointment.relation('Clients');
            return clientRelation.query().find().then(function(clients) {
                appointmentItems.push(
                    {
                        //...object details
                    }
                );
            });
        });
    });
    return promise;
}).then(function(result) {
    // return/use appointmentItems with the sub-collection of clients fetched within the subquery.
});
You can apparently do this in parallel as well, but that was not needed in my case, as the query seems to return almost instantaneously. I got rid of the Cloud Code, since it didn't seem to provide any performance boost. I will say that the fact that you cannot debug Cloud Code is truly limiting, and I wasted a fair amount of time waiting for console.log statements to show up in the Cloud Code log panel. Overall, the Parse.Promise object was the key to getting this to work properly.
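For reference, the parallel variant mentioned above would replace the serial chaining with Parse.Promise.when; a sketch (untested, against the old Parse JS SDK):
query.find().then(function(appointments) {
    var promises = _.map(appointments, function(appointment) {
        return appointment.relation('Clients').query().find().then(function(clients) {
            appointmentItems.push({
                //...object details, as above
            });
        });
    });
    // resolves once every relation query has finished
    return Parse.Promise.when(promises);
}).then(function() {
    // appointmentItems is now fully populated
});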

Filtering while an HTTP request is still in progress (Angular)

I've got a problem and I can't wrap my head around it.
I send an HTTP request that retrieves data from an API. This may take up to 15 seconds, but I implemented a cache so that the user can already see the parts of the data that have arrived; every 1500 ms I update this data. Once data.length > 0, I show filters to the user so that he can filter the results by price.
The problem: after each update of the data, any filters he has applied are reset to their original state.
$scope.$watch('trainRequest.getRequest()', function(newVal, oldVal) {
    $scope.data = trainRequest.getRequest();
    $scope.trainArray = resultService.trainArray;
    if ($scope.data.error !== 1) {
        // here I generate the minimum and maximum price in the data that I use to show the filter.
        $scope.maxValue = (function() {
            var tempArray = [];
            for (var i = 0; i < $scope.data.length; i++) {
                tempArray.push($scope.data[i].price); // jshint ignore:line
            }
            return tempArray.length > 0 ? Math.round(Math.max.apply(Math, tempArray)) : 5000;
        })();
        $scope.minValue = (function() {
            var tempArray = [];
            for (var i = 0; i < $scope.data.length; i++) {
                tempArray.push($scope.data[i].price); // jshint ignore:line
            }
            return tempArray.length > 0 ? Math.round(Math.min.apply(Math, tempArray)) : 0;
        })();
    }
});
Here is my issue. Let's say the data gives 100$ and 1000$ as the minimum and maximum of the price array; my filter (a slider) then moves in this interval. Now say the user only accepts 800$ as the maximum price, so he moves the slider and the data shown updates.
Then the data updates again because I receive new data from the server. Say the actual maximum is now 1400$; the slider range becomes 100 to 1400, but the slider handle is also reset to that state, while I want it to remain at the user's 800$ maximum.
My problem is that every time $scope.data updates (because it is in the watch function, where maxValue is also computed), I am not able to preserve the state chosen by the user.
How can I save the state of the filter only when it is changed by the user, and not by an update of $scope.data?
You need another property in your scope, e.g. $scope.selectedMaxValue, which holds the max value selected by the user. On each update: if it differs from the $scope.maxValue from before the update, do not touch $scope.selectedMaxValue; if it equals the old $scope.maxValue, update it along with the new maximum.
You can use this approach for the minValue as well.
EDIT:
Here is an example of how you can do this.
You can write the following function
function updateMaxValue() {
    var tempArray = [];
    for (var i = 0; i < $scope.data.length; i++) {
        tempArray.push($scope.data[i].price); // jshint ignore:line
    }
    var newValue = tempArray.length > 0 ? Math.round(Math.max.apply(Math, tempArray)) : 5000;
    if ($scope.selectedMaxValue === $scope.maxValue) {
        $scope.selectedMaxValue = newValue;
    }
    $scope.maxValue = newValue;
}
And use it like this
$scope.$watch('trainRequest.getRequest()', function(newVal, oldVal) {
    $scope.data = trainRequest.getRequest();
    $scope.trainArray = resultService.trainArray;
    if ($scope.data.error !== 1) {
        // here I generate the minimum and maximum price in the data that I use to show the filter.
        updateMaxValue();
        updateMinValue();
    }
});
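For completeness, the mirror function for the lower bound might look like this (a sketch following the same pattern; names assumed from the snippets above):
function updateMinValue() {
    var tempArray = [];
    for (var i = 0; i < $scope.data.length; i++) {
        tempArray.push($scope.data[i].price); // jshint ignore:line
    }
    var newValue = tempArray.length > 0 ? Math.round(Math.min.apply(Math, tempArray)) : 0;
    // only follow the data if the user hasn't moved the slider off the old bound
    if ($scope.selectedMinValue === $scope.minValue) {
        $scope.selectedMinValue = newValue;
    }
    $scope.minValue = newValue;
}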

Making a step-by-step ajax request

I'm thinking about how to change the content of a div dynamically. So, here is the ajax request:
$.ajax({
    url: '/foos',
    cache: false,
    type: 'get'
}).done(function(foo_array) {
    // iterate the array values (for..in would yield the indexes instead)
    foo_array.forEach(function(foo) {
        $('#foo-container').append('<div class="foo-var">' + foo + '</div>');
    });
});
So basically, this ajax call appends all the foo-var divs coming from the server, but if foo_array is very long there is a problem, because it takes more and more time depending on foo_array's length.
How can I append them one by one? That is, how can I query one by one and append into #foo-container, instead of querying all foos and iterating?
I want to do something like this:
if (foos.hasNext()) { $.ajax... append(foo)... }
foos is an array made from many documents in a MongoDB database, so I can't know the length of the array in advance, because it depends on the query's find() arguments.
I'm using Node.js, MongoDB, Express and jQuery for ajax.
Sorry for my bad English, and thank you all!
EDIT 2
This is an example of the data in MongoDB:
{category:1, name:'robert',personal:true,option:'class'}
{category:1, name:'alfredo',personal:false,option:'class'}
{category:4, name:'ricardo',personal:true,option:'class'}
{category:1, name:'genaro',personal:true,option:'class'}
{category:2, name:'andres',personal:false,option:'class'}
{category:1, name:'jose',personal:true,option:'class'}
db.collection.find({personal:true}) // gives me 4 documents
db.collection.find({option:'class'}) // gives me 6 documents
db.collection.find({category:4}) // gives me 1 document
I don't know how many documents I will get from the cursor, and I need to load them one by one, because there are 5,097,841 documents in the database, so the ajax call can take a long time to return all the information. I need to query one by one while hasNext() on the MongoDB cursor is true.
You can use skip and limit and make multiple requests; it's like paging. The following syntax may help you:
db.collection.find().skip(200).limit(100);
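To wire that up end to end, one hypothetical approach is a paged endpoint on the Express side plus a client loop that requests the next page only after the previous one has been appended (the route, field and page-size names below are assumptions, not from your code):
// server (Express + the Node MongoDB driver)
app.get('/foos', function(req, res) {
    var page = parseInt(req.query.page, 10) || 0;
    var pageSize = 100;
    db.collection('foos')
        .find({ personal: true })
        .skip(page * pageSize)
        .limit(pageSize)
        .toArray(function(err, docs) {
            if (err) { return res.status(500).end(); }
            res.json(docs);
        });
});

// client (jQuery) - append one page, then fetch the next
var pageSize = 100;
function loadPage(page) {
    $.ajax({ url: '/foos?page=' + page, cache: false, type: 'get' })
        .done(function(foo_array) {
            foo_array.forEach(function(foo) {
                $('#foo-container').append('<div class="foo-var">' + foo.name + '</div>');
            });
            if (foo_array.length === pageSize) {
                loadPage(page + 1); // more documents may remain
            }
        });
}
loadPage(0);
One caveat: skip gets slower as the offset grows, so with millions of documents a range query on an indexed field (e.g. _id greater than the last one seen) scales better than large skip values.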
