Is there a way to read all the cards that I have in Jira for a period?
My goal is to create a report with the number of cards I have in REVIEW, the number of cards I have in DONE, and so on. I want to read the columns and know how many cards I have in each.
It could be a script in Java or JavaScript.
Check out this post from the JIRA/Atlassian community: https://community.atlassian.com/t5/Jira-questions/REST-Client-get-all-issues-of-project/qaq-p/494826
Basically, you can use the REST endpoint /rest/api/2/search?jql=project="Your Project Key". This will return issues for whichever project you specify.
There is a wealth of documentation and knowledge on JIRA's API here.
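For instance, here is a minimal node.js sketch that tallies your cards per status via that search endpoint. The host, credentials, and JQL below are placeholders, so treat it as a starting point rather than a drop-in solution:

var https = require('https');

// Placeholder JQL: all issues in your project assigned to you
var jql = encodeURIComponent('project = "MYPROJ" AND assignee = currentUser()');

https.get({
  hostname: 'yourcompany.atlassian.net', // placeholder host
  path: '/rest/api/2/search?jql=' + jql + '&maxResults=1000&fields=status',
  headers: {
    'Authorization': 'Basic ' + Buffer.from('user:apitoken').toString('base64')
  }
}, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    // Tally issues by the name of their status (REVIEW, DONE, ...)
    var counts = {};
    JSON.parse(body).issues.forEach(function (issue) {
      var status = issue.fields.status.name;
      counts[status] = (counts[status] || 0) + 1;
    });
    console.log(counts);
  });
});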
If the only information you need is the number of issues per status, you can simply create a pie chart gadget for this. Because you want to display issues for a specific timeframe, you will first have to create a JQL filter, then open an existing dashboard or create a new one (you need to be a dashboard administrator to add gadgets; you become one by creating a new dashboard), then add a pie chart gadget and choose Status as the statistic type.
To find only the issues you are assigned to and that were updated within the specific time frame you need to create a search filter like:
assignee = currentUser() and updated > "2018/08/01"
this will find only the issues that are assigned to you and were updated after 2018/08/01.
The resulting pie chart will show the number of issues in each status.
I'm currently building a Google Data Studio connector where I need to make use of the stepped configuration to achieve dynamic user input.
For example, connecting to Google Analytics will have a configuration of account -> properties -> view.
I want to achieve something exactly of this sort, but when a user picks the first answer from a dropdown, the connector should make an API request to pull data into the dropdown of the next config step.
Please, how do I go about this?
You should be able to get the parameters the user selects from "request.configParams" (in the getConfig function). Using your chosen parameter, you can do an API call to get the data you need so that you can set up your options for the stepped configuration.
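Here is a rough sketch of what that can look like inside getConfig, assuming a hypothetical fetchPropertiesFor() helper that wraps your API call (everything else uses the Community Connector config API):

function getConfig(request) {
  var cc = DataStudioApp.createCommunityConnector();
  var config = cc.getConfig();
  var configParams = request.configParams;

  // First step: a dynamic dropdown; changing it re-invokes getConfig
  var account = config.newSelectSingle()
    .setId('account')
    .setName('Account')
    .setIsDynamic(true);
  account.addOption(config.newOptionBuilder().setLabel('Account A').setValue('a'));
  account.addOption(config.newOptionBuilder().setLabel('Account B').setValue('b'));

  if (configParams === undefined || configParams.account === undefined) {
    // The user hasn't picked an account yet, so stop after the first step
    config.setIsSteppedConfig(true);
  } else {
    // Second step: populate options from your API using the chosen account.
    // fetchPropertiesFor() is a hypothetical helper around UrlFetchApp.
    var property = config.newSelectSingle().setId('property').setName('Property');
    fetchPropertiesFor(configParams.account).forEach(function (p) {
      property.addOption(config.newOptionBuilder().setLabel(p.name).setValue(p.id));
    });
    config.setIsSteppedConfig(false);
  }
  return config.build();
}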
I'm trying to connect my Google Sheet with a Slack Channel. The use case is as follow:
I have a Google Sheet which looks like this:
There are around 50 so-called topics; each of them has internal and external experts listed, and also some related files. For now, only the internal experts column of each topic is of interest. Whenever a new person is listed as an internal expert (a new cell at the bottom of the column), I want to send out a message to a related Slack channel which looks like this:
The message should contain the name or content of the added cell and the name of the Topic he/she was added in as an internal expert (in this case "Topic 1"). The rest is a predefined string. As you can see, I tried to do this with Zapier, and it works. The problem is with Zapier, there seem to be too many limitations when working with several columns. As far as I know, I would have to create a separate Zapier Action for each column, which isn't a really nice solution.
Can anybody give me a hint on how to build this with Google Apps Script (+ a Slack webhook)? Or whether it even makes sense to do so? I'm not an expert in JavaScript + datasets, so I'm struggling a bit with how to start this.
I would suggest taking a look at triggers for Google Apps Script. You can set a trigger to run every time your Google Sheet is edited - see onEdit(). Your trigger can then check if the edit is relevant (e.g. a new cell at the bottom of a specific column) and send a request to Slack using incoming webhooks.
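A minimal sketch of that idea, assuming an installable "on edit" trigger (a simple onEdit() trigger cannot call UrlFetchApp); the webhook URL and sheet layout are placeholders:

var SLACK_WEBHOOK_URL = 'https://hooks.slack.com/services/XXX/YYY/ZZZ'; // placeholder

function onSheetEdit(e) {
  var range = e.range;
  // Only react to single-cell edits that add a value
  if (!e.value || range.getNumRows() > 1 || range.getNumColumns() > 1) return;
  var sheet = range.getSheet();
  // Assumption: row 1 holds the topic name for each column
  var topic = sheet.getRange(1, range.getColumn()).getValue();
  var payload = {
    text: e.value + ' was added as an internal expert for "' + topic + '".'
  };
  UrlFetchApp.fetch(SLACK_WEBHOOK_URL, {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify(payload)
  });
}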
I am trying to develop a Dojo DataGrid that returns a user's documents from the categorized BidsByDriver view and allows them to edit the Priority field in the grid. After getting past the hurdle of using the keys property to filter over the categoryFilter, this was easy to set up using an xe:viewFileItemService read/write service. However the problem with xe:viewFileItemService as a data source is it will display empty lines for each entry in the view after showing the user's documents in the grid.
To get around the blank lines I went down the path of creating an xe:customRestService that returned the JSON data for just the current user's documents. This fixes my blank-lines problem, but my data source is not in the correct read/write format to support the in-grid editing.
Here is the resulting JSON data returned from the xe:customRestService ...
[{"Driver":"ddd","BidID":"123","Priority":"1","Trip":"644"},
{"Driver":"ddd","BidID":"123","Priority":"2","Trip":"444"},
{"Driver":"ddd","BidID":"123","Priority":"4","Trip":"344"},
{"Driver":"ddd","BidID":"123","Priority":"4","Trip":"643"}
]
Here are the Dojo modules I am loading:
<xp:this.resources>
<xp:dojoModule name="dojo.store.JsonRest"></xp:dojoModule>
<xp:dojoModule name="dojo.data.ObjectStore"></xp:dojoModule>
</xp:this.resources>
And here is the script to develop the data store for the grid:
<xp:scriptBlock id="scriptBlock2">
<xp:this.value><![CDATA[
// JsonRest store pointing at the custom REST service; ObjectStore wraps it
// so the grid can use it as a dojo.data store
var jsonStore = new dojo.store.JsonRest({target: "InGridCustom.xsp/pathinfo"});
var dataStore = new dojo.data.ObjectStore({objectStore: jsonStore});
]]></xp:this.value>
</xp:scriptBlock>
All of this works very nicely except for the bit on providing the in-grid editing support. Any ideas appreciated.
How are you trying to save the changes? With a custom REST service, I would not expect that saving the data store would make any changes to the back-end data, which is why a refresh would revert it to the original value.
I would expect that you'd need to write a doPost method in your custom REST service to process the change on the server side, along with client-side code to call the post method and pass in the updates to process (along with the document ID).
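On the client side, something like this could call the post method; the payload field names and variables here are assumptions based on the JSON shown above, not a fixed contract:

dojo.xhrPost({
  url: "InGridCustom.xsp/pathinfo", // same target as the JsonRest store
  handleAs: "json",
  headers: { "Content-Type": "application/json" },
  // docUnid and newPriority are hypothetical variables holding the
  // document ID and the edited cell value
  postData: dojo.toJson({ "@unid": docUnid, "Priority": newPriority }),
  load: function (response) { /* optionally refresh the grid here */ },
  error: function (err) { console.log("Save failed: " + err); }
});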
UPDATED ANSWER:
I would try one or both of these approaches to fix your issue.
1) Have a category in your view, use a categoryFilter, and apply the hack to make the service return only the correct values, as outlined in this question: XPages Dojo Grid editable cell does not save value when REST Service save() method is called
2) Change the REST service type to viewJsonService in combination with #1. If you get an error, double-check the configuration document that Per mentioned. Also heed Per's comments in the linked question about configuration and about using Firebug to make sure the correct method is used. The update must be a PUT; a POST will not work with the viewJsonService.
Original Answer (for context of comments)
Paul,
I believe that you need to have a button with code to save the changes back. Maybe you do, but you don't mention it and it isn't in your screenshots. The step that Per mentioned is necessary, so it is good that you have it taken care of. The button is needed to 'commit' the changes back; the act of inline editing doesn't trigger the PUT call. If you think about it, you wouldn't want an update after each change, but one update when the user is finished editing.
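As a rough sketch (assuming a dijit button with id "saveButton" and the dataStore from your script block), the handler could simply be:

// ObjectStore buffers in-grid edits; save() is what issues the PUT requests
dojo.connect(dijit.byId("saveButton"), "onClick", function () {
  dataStore.save({
    onComplete: function () { console.log("Changes committed"); },
    onError: function (err) { console.log("Save failed: " + err); }
  });
});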
If you don't figure it out by this evening, I have working code that I can send you, but I don't have access to it at work.
I'm building a node.js application that opens up a connection to the Twitter Streaming API (v1.1)
I would like to filter multiple keywords (hashtags & words) as separate queries. My original idea was to have multiple public streams.
However, I understand that I can only have one open connection to the Twitter streaming api per application and per IP address and that Twitter encourages us to come up with creative solutions to get what we want.
So my question is this:
If I stream with no filters, such as using statuses/sample (which I believe is a 1% sample), and use custom JavaScript to filter the output, would I get the same tweets as if I used the API's method of filtering (i.e. track='twitter')?
Edit: I have created a diagram explaining this:
As you can see, I want to know if the two outputs will be the same. I suspect that they won't be, because although both outputs are effectively the same filter, one source is a 1% sample, and maybe the other source is a 100% sample that only delivers 1% of the tweets from it.
So can someone please clarify if both outputs are the same?
Thank you.
According to the Twitter streaming API rules, if the keywords that you track don't exceed 1% of the whole global traffic, you will receive all of the matching data (some tweets might be lost due to network issues etc., but it is not significant). This is called the garden hose (the firehose is a special stream which gives you all the data, but it is provided as a paid service through third parties such as http://datasift.com/).
So if a tweet is filtered through public stream then it would be part of your custom filter too unless your keyword set is too broad.
By using custom filters you can track multiple search keywords, and if you miss some data because your keyword set is too broad, Twitter sends a track limitation notice indicating how much data you are missing.
My suggestion would be to use a custom filter and compare what you get from the stream with what you get as a result for the same keywords from Twitter. When you start getting track limitation notices from Twitter, it is time to split your keyword set into chunks and stream through different streamers by running them from different machines; a rough sketch of such a streamer is below.
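As an illustration only, here is a minimal node.js sketch using the community twit module (the credentials are placeholders); it also surfaces the track limitation notices as 'limit' events:

var Twit = require('twit');

var T = new Twit({
  consumer_key: 'xxx',          // placeholders: your app credentials
  consumer_secret: 'xxx',
  access_token: 'xxx',
  access_token_secret: 'xxx'
});

// One connection filtering on several keywords at once
var stream = T.stream('statuses/filter', { track: ['keyword1', 'keyword2'] });

stream.on('tweet', function (tweet) {
  console.log(tweet.text);
});

// Fired when your keyword set is too broad and tweets are being dropped
stream.on('limit', function (limitMessage) {
  console.log('Missed tweets so far:', limitMessage.limit.track);
});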
The details of filter streaming are below (taken from the official website https://dev.twitter.com/docs/api/1.1/post/statuses/filter):
Returns public statuses that match one or more filter predicates. Multiple parameters may be specified which allows most clients to use a single connection to the Streaming API. Both GET and POST requests are supported, but GET requests with too many parameters may cause the request to be rejected for excessive URL length. Use a POST request to avoid long URLs.
The default access level allows up to 400 track keywords, 5,000 follow userids and 25 0.1-360 degree location boxes. If you need elevated access to the Streaming API, you should explore our partner providers of Twitter data here.
I would like to answer my question with the results of my findings.
I tested both side by side in the same time frame and concluded that the custom filter method, while it supports multiple filters, does not provide enough tweets to create an interesting enough visualisation.
I think the only way to get something more interesting with concurrent filters is to look at other methods, but I am wondering if it is possible at all. Maybe with a third party.
I have attached a screenshot of the visualisation tracking 'barackobama'. The left is the custom filter; the right is statuses/filter.
The statuses/filter API operates on all tweets, not just those returned by statuses/sample. You can tell by looking at their tweet IDs: sample tweets all come from a specific time window, so from the millisecond-resolution creation time you can definitely tell that filter returns tweets outside of sample.
For more details about getting creation time from tweet id and the time window on sample tweets, consult this post: http://blog.falcondai.com/2013/06/666-and-how-twitter-samples-tweets-in.html
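For instance, a small sketch of that id-to-time conversion (1288834974657 is Twitter's published snowflake epoch; the tweet id is made up, and pre-November-2010 ids predate snowflake):

// Recover the millisecond creation time from a snowflake tweet id:
// the top bits of the id are the offset from the snowflake epoch
function tweetIdToTimestamp(idStr) {
  return Number((BigInt(idStr) >> 22n) + 1288834974657n);
}

console.log(new Date(tweetIdToTimestamp('210462857140252672')));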
I've been assigned a research project to enhance social-networking-based adaptive e-learning, and to do so I need to be able to extract several (hundreds, maybe thousands) of status updates or tweets in order to perform factor analysis on key words. Apparently this can be done with JavaScript, but I have never used JavaScript before, so I'm a bit lost. I know I need a Twitter API but am not sure even how to use one. Does anybody have any idea how I can do this?
Use statuses/followers to get all followers of a user and statuses/friends_timeline to get tweets by your friends. The response will be in JSON or XML format, which can be parsed and used very easily; a rough sketch follows the links below.
http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-statuses%C2%A0followers
http://apiwiki.twitter.com/Twitter-REST-API-Method%3A-statuses-friends_timeline
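A node.js sketch of calling one of these endpoints and parsing the JSON, assuming the v1-era style described above (the screen name is a placeholder):

var https = require('https');

// Placeholder user; the endpoint style follows the links above
var url = 'https://api.twitter.com/1/statuses/friends_timeline.json?screen_name=someuser';

https.get(url, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var tweets = JSON.parse(body); // an array of status objects
    tweets.forEach(function (t) {
      console.log(t.text); // the tweet text to feed into your keyword analysis
    });
  });
});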