SQL Query Examples

This page provides practical examples of SQL queries you can use with the Advanced Reports app, based on the database schema generated from your Jira data.

Refer to the SQL Queries page for a detailed description of the available tables and views, such as work_items, work_items_display, histories, statuses, and users.
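
If you're not sure which columns ended up in a given table or view, you can preview a few rows before building a full report. This is a minimal sketch using the work_items_display view mentioned above:

-- Preview a few rows to see which columns are available
SELECT *
FROM work_items_display
LIMIT 5;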

Basic Queries

Count Work Items by Status

This query uses the work_items_display view to count how many work items exist for each status name.

Example Result:

status         work_item_count
In Progress    45
To Do          32
Done           128
In Review      18
-- Note: Ensure the 'status' field is selected in the Database Configuration
SELECT
    status,
    COUNT(*) AS work_item_count
FROM work_items_display
GROUP BY status
ORDER BY work_item_count DESC;

List Work Items Assigned to a Specific User

Replace 'Display Name of User' with the actual display name.

Example Result:

key         issuetype    status
PROJ-123    Story        In Progress
PROJ-456    Bug          To Do
PROJ-789    Task         In Review
-- Note: Ensure the 'issuetype' and 'status' fields are selected in the Database Configuration
SELECT
    "key",
    issuetype,
    status
FROM work_items_display
WHERE assignee = 'Display Name of User';

Find Work Items Created in the Last 30 Days

This query uses the created field (assuming it is selected in the Database Configuration and therefore available in work_items) to find work items created in the last 30 days.

Example Result:

key         issuetype    status         created
PROJ-901    Story        To Do          2025-05-25 09:15:00
PROJ-902    Bug          In Progress    2025-05-28 14:30:00
PROJ-903    Task         Done           2025-06-01 11:45:00
-- Note: Ensure the 'created', 'issuetype', and 'status' fields are selected in the Database Configuration
SELECT
    "key",
    issuetype,
    status,
    created
FROM work_items
WHERE created >= current_date - INTERVAL '30' DAY;

History-Based Queries (Time in Status)

These queries require the "Include historical data (changelog)" option to be enabled in the Database Configuration.

Calculate Average Time Spent in 'In Progress' Status (in Days)

This query uses the histories table and joins with statuses to filter by status name. It calculates the average duration work items spent in the 'In Progress' status.

Example Result:

avg_days_in_progress
3.45
-- Note: Ensure the 'status' field is selected in the Database Configuration
SELECT
    AVG(h.seconds_in_state) / (60*60*24.0) AS avg_days_in_progress
FROM histories h
JOIN statuses s ON h.field_id = 'status' AND h.value = s.id
WHERE s.name = 'In Progress';

Total Time Spent by Each Work Item in Each Status (in Hours)

This query groups by work item key and status name to show the total time each work item spent in every status it transitioned through.

This query uses the histories table, so make sure "Include work item histories for selected fields" is enabled in the Database Configuration.

Example Result:

work_item_key    status_name    total_hours_in_status
PROJ-123         In Progress    72.5
PROJ-123         In Review      24.0
PROJ-123         To Do          8.5
PROJ-456         Done           96.0
-- Note: Ensure the 'status' field is selected in the Database Configuration
SELECT
    i."key" AS work_item_key,
    s.name AS status_name,
    SUM(h.seconds_in_state) / (60*60.0) AS total_hours_in_status
FROM histories h
JOIN work_items i ON h.work_item_id = i.id
JOIN statuses s ON h.field_id = 'status' AND h.value = s.id
GROUP BY i."key", s.name
ORDER BY i."key", total_hours_in_status DESC;

Joining Tables

List Work Items with Assignee Email

This query joins work_items_display with the users table to retrieve the email address associated with the assignee, matching on the assignee's display name. It reads the email from the emailAddress column of the users table; if the email is only available inside the data JSON field of the users table, see the alternative below.

Example Result:

key         status         assignee_display_name    assignee_email
PROJ-123    In Progress    John Smith               [email protected]
PROJ-456    To Do          Jane Doe                 [email protected]
PROJ-789    Done           Mike Johnson             [email protected]
-- Note: Ensure the 'status' and 'assignee' fields are selected in the Database Configuration
SELECT
    i."key",
    i.status,
    i.assignee AS assignee_display_name,
    u.emailAddress AS assignee_email
FROM work_items_display i
LEFT JOIN users u ON i.assignee = u.displayName
LIMIT 100;
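
If your users table only exposes the raw data JSON field rather than a dedicated emailAddress column, the address can be pulled out with json_extract_string (the same function used in the bottleneck example further down). This is a sketch and assumes the email is stored under an emailAddress key:

-- Alternative: extract the email from the users.data JSON field
-- (assumes the key is 'emailAddress'; adjust to match your data)
SELECT
    i."key",
    i.status,
    i.assignee AS assignee_display_name,
    json_extract_string(u."data", 'emailAddress') AS assignee_email
FROM work_items_display i
LEFT JOIN users u ON i.assignee = u.displayName
LIMIT 100;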

Time-based Analysis

Cycle Time Analysis by Team and Work Item Type

Shows the average cycle time (in days, from created to resolutiondate) for work items by assignee and work item type, along with the median cycle time and total item count.

Required fields: assignee, issuetype, resolutiondate, created.

Example Result:

assignee        work_item_type    avg_cycle_time    median_cycle_time    total_items
John Smith      Bug               8.5               6.0                  12
Jane Doe        Story             12.3              10.5                 25
Mike Johnson    Task              4.2               3.5                  18
SELECT 
    users.displayName as assignee,
    issuetypes.name as work_item_type,
    AVG(cycle_time_days) as avg_cycle_time,
    PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY cycle_time_days) as median_cycle_time,
    COUNT(*) as total_items
FROM (
    SELECT 
        wi.assignee,
        wi.issuetype,
        (EXTRACT(EPOCH FROM wi.resolutiondate - wi.created) / 86400) as cycle_time_days
    FROM work_items wi 
    WHERE wi.resolutiondate IS NOT NULL
) cycle_data
LEFT JOIN users ON users.id = cycle_data.assignee
LEFT JOIN issuetypes ON issuetypes.id = cycle_data.issuetype
GROUP BY users.displayName, work_item_type
ORDER BY avg_cycle_time DESC;

Historical Trend Analysis

Sprint Velocity Trends with Predictive Insights

This query helps you understand how much work your team completes in each sprint. For every sprint it calculates the velocity: the total Story Points of work items that were resolved on or before the sprint's end date.

It also compares each sprint's velocity with the previous sprint's, showing how much the team's throughput has changed, and adds a rolling average over the last three sprints to give a smoother picture of typical performance than any single sprint provides.

Required fields: resolutiondate.

Required custom fields: {{sprint_customfield}} (Sprint), {{story_points_customfield}} (Story Points). Include Sprint and Story Points in the Database Configuration, and substitute the placeholders with the actual custom field IDs from your Jira instance (e.g. customfield_10004 for Sprint and customfield_10009 for Story Points); a substituted example follows the query below.

Example Result:

sprint name    velocity    prev velocity    velocity change percent    rolling avg velocity
Sprint 15      42          38               10.53                      40.0
Sprint 16      35          42               -16.67                     38.3
Sprint 17      48          35               37.14                      41.7
Sprint 18      44          48               -8.33                      42.3
WITH completed_sprint_items AS (
    SELECT
        s.id AS sprint_id,
        s.name AS sprint_name,
        s.startDate AS sprint_start_date,
        s.endDate AS sprint_end_date,
        wi.{{story_points_customfield}} AS story_points
    FROM
        work_items AS wi
    JOIN
        sprints AS s ON s.id = wi.{{sprint_customfield}}[1]
    WHERE
        wi.resolutiondate IS NOT NULL
        AND wi.resolutiondate <= s.endDate
        AND wi.{{story_points_customfield}} IS NOT NULL
        AND wi.{{story_points_customfield}} > 0
),
sprint_velocity AS (
    SELECT
        sprint_id,
        sprint_name,
        sprint_start_date,
        SUM(story_points) AS velocity,
        COUNT(*) AS stories_completed,
        LAG(SUM(story_points), 1) OVER (ORDER BY sprint_start_date) AS prev_velocity
    FROM
        completed_sprint_items
    GROUP BY
        sprint_id, sprint_name, sprint_start_date
)
SELECT
    sprint_name as "sprint name",
    velocity,
    prev_velocity as "prev velocity",
    CASE
        WHEN prev_velocity IS NOT NULL AND prev_velocity > 0 THEN
            ROUND(((velocity - prev_velocity) * 100.0) / prev_velocity, 2)
        ELSE NULL
    END AS "velocity change percent",
    AVG(velocity) OVER (ORDER BY sprint_start_date ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS "rolling avg velocity"
FROM
    sprint_velocity
ORDER BY
    sprint_start_date;
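
For reference, here is a pared-down version of the velocity calculation with the placeholders already substituted (assuming customfield_10004 is the Sprint field and customfield_10009 is Story Points, as in the example IDs above; use the IDs from your own instance). It returns only the per-sprint velocity, without the change and rolling-average columns:

-- Per-sprint velocity with concrete custom field IDs substituted for the placeholders
-- (this sketch assumes customfield_10004 = Sprint and customfield_10009 = Story Points)
SELECT
    s.name AS sprint_name,
    SUM(wi.customfield_10009) AS velocity
FROM work_items AS wi
JOIN sprints AS s ON s.id = wi.customfield_10004[1]
WHERE wi.resolutiondate IS NOT NULL
    AND wi.resolutiondate <= s.endDate
    AND wi.customfield_10009 IS NOT NULL
    AND wi.customfield_10009 > 0
GROUP BY s.id, s.name, s.startDate
ORDER BY s.startDate;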

Cross-Project Dependencies

Blocked Items Impact Analysis

Coming soon

This query is not yet possible because the issue_links table is not yet supported.

This SQL query analyzes blocking relationships between work items across different projects. It identifies instances where a work item entered a "blocked" status and then determines which work item (and its associated project) was causing the block, and which work item (and its project) was blocked.

For each blocking incident, it calculates how long the blocked work item remained in the "blocked" status. If an item is still blocked, the current time is used to calculate the duration.

The query then groups these incidents by the blocking project and the blocked project. For each pair of projects, it reports:

  • The total number of times items from the first project blocked items from the second project (blocked_items).
  • The average duration (in days) that these items were blocked (avg_days_blocked).
  • The count of items that were blocked for more than 7 days (items_blocked_over_week).

Finally, it filters the results to only show project pairs that have had more than 5 blocking incidents and orders them by the average blocking duration in descending order, showing the longest average blocks first.

Example Result:

blocker_project    blocked_project    blocked_items    avg_days_blocked    items_blocked_over_week
Infrastructure     Frontend           12               8.5                 7
Backend            Mobile             8                6.2                 3
Platform           Analytics          6                4.8                 1
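-- Note: Replace 'blocked_status_id' with the ID of the status used for blocked work items in your instance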
SELECT 
    blocker_project,
    blocked_project,
    COUNT(*) as blocked_items,
    AVG(days_blocked) as avg_days_blocked,
    SUM(CASE WHEN days_blocked > 7 THEN 1 ELSE 0 END) as items_blocked_over_week
FROM (
    SELECT 
        blocker.project as blocker_project,
        blocked.project as blocked_project,
        EXTRACT(EPOCH FROM COALESCE(h2.date, NOW()) - h1.date) / 86400 as days_blocked
    FROM histories h1
    JOIN work_items blocked ON h1.work_item_id = blocked.id
    JOIN issue_links il ON il.inward_issue = blocked.id
    JOIN work_items blocker ON il.outward_issue = blocker.id
    LEFT JOIN histories h2 ON h2.work_item_id = h1.work_item_id 
        AND h2.field_id = h1.field_id 
        AND h2.date > h1.date
    WHERE h1.field_id = 'status' 
        AND h1.value = 'blocked_status_id'
) blocking_analysis
GROUP BY blocker_project, blocked_project
HAVING COUNT(*) > 5
ORDER BY avg_days_blocked DESC;

Resource Allocation & Capacity Planning

Team Capacity vs. Workload Analysis

Note

This query demonstrates the use of custom tables. To run it, you'll need to create a custom table by uploading it from a CSV file. An example file is provided here. For information on how to create custom tables, refer to the Custom Tables page.
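
The query below expects the uploaded table to be named custom_team_capacity and to contain team_name and capacity_points columns. A quick way to check the upload (the column names follow from the query below; the sample CSV values are illustrative):

-- Verify the uploaded custom table exposes the expected columns
-- (the CSV would contain rows along the lines of:
--    team_name,capacity_points
--    Frontend Team,40
--    Backend Team,50)
SELECT team_name, capacity_points
FROM custom_team_capacity
ORDER BY capacity_points DESC;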

Required fields: assignee, status.

Required custom fields: {{story_points_customfield}} (Story Points). Include Story Points in the Database Configuration, and substitute the placeholder with the actual custom field ID from your Jira instance (e.g. customfield_10009).

This query calculates the workload, capacity, and utilization percentage for each team. It uses predefined team capacities from the custom_team_capacity table and sums up "Story Points" for work items currently "In Progress" as the current workload. The final output shows each team's total capacity, current workload, remaining available capacity, and a utilization percentage, ordered from highest to lowest utilization.

This helps in understanding how busy each team is relative to its capacity.

Example Result:

team_name        capacity_points    current_workload    available_capacity    utilization_percent
Frontend Team    40                 38                  2                     95.00
Backend Team     50                 42                  8                     84.00
Mobile Team      30                 18                  12                    60.00
QA Team          25                 12                  13                    48.00
WITH team_capacity AS (
    SELECT team_name, capacity_points FROM custom_team_capacity
),
current_workload AS (
    SELECT 
        users.displayName as team_name,
        SUM({{story_points_customfield}}) as current_points,
        COUNT(*) as active_items
    FROM work_items wi
    LEFT JOIN users ON users.id = wi.assignee
    JOIN statuses s ON s.id = wi.status
    JOIN status_categories sc ON sc.id = s.statusCategoryId
    WHERE sc.name IN ('In Progress')
    GROUP BY team_name
)
SELECT 
    tc.team_name,
    tc.capacity_points,
    COALESCE(cw.current_points, 0) as current_workload,
    tc.capacity_points - COALESCE(cw.current_points, 0) as available_capacity,
    ROUND((COALESCE(cw.current_points, 0) / tc.capacity_points::FLOAT) * 100, 2) as utilization_percent
FROM team_capacity tc
LEFT JOIN current_workload cw ON tc.team_name = cw.team_name
ORDER BY utilization_percent DESC;

Quality & Technical Debt Analysis

Bug Introduction Rate by Component

Coming soon

This query is not yet possible because the issue_links table is not yet supported.

Required fields: created, component, issuelinks, issuetype, resolutiondate.

This SQL query analyzes the rate at which bugs are reported after their associated parent stories are completed, broken down by software component.

It first identifies all 'Bug' work items that are linked to a 'parent story' (a larger piece of work) that has a resolution date (meaning it's completed). For each such bug, it captures its creation date and the parent story's completion date.

Then, for each software component, it calculates:

  • Total Bugs: The overall count of bugs linked to completed parent stories.
  • Immediate Bugs: The count of bugs that were created within 7 days of their parent story being completed.
  • Month Bugs: The count of bugs created within 30 days of their parent story being completed.
  • Immediate Bug Rate: The percentage of 'immediate bugs' out of the 'total bugs'.

The final result set includes components with at least 10 bugs, ordered by their immediate bug rate, from highest to lowest.

Example Result:

component              total_bugs    immediate_bugs    month_bugs    immediate_bug_rate_percent
Payment System         15            8                 12            53.33
User Authentication    22            9                 18            40.91
Search Engine          18            6                 14            33.33
Report Generator       12            3                 8             25.00
WITH bug_timeline AS (
    SELECT 
        wi.id,
        wi.component,
        wi.created as bug_created,
        parent_story.resolutiondate as story_completed
    FROM work_items wi
    LEFT JOIN issue_links il ON il.inward_issue = wi.id
    LEFT JOIN work_items parent_story ON il.outward_issue = parent_story.id
    JOIN issuetypes it ON it.id = wi.issuetype
    WHERE it.name = 'Bug'
        AND parent_story.resolutiondate IS NOT NULL
)
SELECT 
    component,
    COUNT(*) as total_bugs,
    COUNT(CASE WHEN bug_created <= story_completed + INTERVAL '7 days' THEN 1 END) as immediate_bugs,
    COUNT(CASE WHEN bug_created <= story_completed + INTERVAL '30 days' THEN 1 END) as month_bugs,
    ROUND(
        COUNT(CASE WHEN bug_created <= story_completed + INTERVAL '7 days' THEN 1 END) * 100.0 / 
        COUNT(*), 2
    ) as immediate_bug_rate_percent
FROM bug_timeline
WHERE component IS NOT NULL
GROUP BY component
HAVING COUNT(*) >= 10
ORDER BY immediate_bug_rate_percent DESC;

Advanced Workflow Analysis

Bottleneck Detection with Statistical Significance

Required fields: status.

This query uses the histories table. To generate it, enable the "Include historical data (changelog)" option in the Database Configuration.

This query analyzes how much time work items spend in different statuses within your workflow.

It calculates the average time, the 95th percentile time (meaning 95% of items complete that status within this time), and the variability for each status. Based on these metrics, it categorizes statuses as 'High Variability Bottleneck', 'Consistent Bottleneck', or 'Normal Flow' to help identify stages where work items might be getting stuck or experiencing inconsistent delays. It only considers statuses with at least 5 historical entries for reliable analysis.

Example Result:

status_name             avg_days    p95_days    bottleneck_type                coefficient_of_variation
Waiting for Approval    8.5         28.2        High Variability Bottleneck    1.85
Code Review             6.2         12.1        Consistent Bottleneck          0.95
In Progress             3.8         9.5         Normal Flow                    1.12
Testing                 2.1         5.8         Normal Flow                    0.76
WITH status_times AS (
    SELECT 
        work_item_id,
        field_id,
        value as status_id,
        seconds_in_state / 86400.0 as days_in_status
    FROM histories 
    WHERE field_id = 'status' AND seconds_in_state > 0
),
status_stats AS (
    SELECT 
        json_extract_string(statuses."data", 'name') as status_name,
        AVG(days_in_status) as avg_days,
        STDDEV(days_in_status) as stddev_days,
        PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY days_in_status) as p95_days,
        COUNT(*) as sample_size
    FROM status_times st
    LEFT JOIN statuses ON statuses.id = st.status_id
    GROUP BY status_name
    HAVING COUNT(*) >= 5
)
SELECT 
    status_name,
    avg_days,
    p95_days,
    CASE 
        WHEN p95_days > avg_days * 3 THEN 'High Variability Bottleneck'
        WHEN avg_days > 5 THEN 'Consistent Bottleneck'
        ELSE 'Normal Flow'
    END as bottleneck_type,
    ROUND(stddev_days / avg_days, 2) as coefficient_of_variation
FROM status_stats
ORDER BY avg_days DESC;
Note

These examples provide a starting point. You can adapt and combine them to perform more complex analyses tailored to your specific needs.