
Integrate Power BI with Azure DevOps for QA Reporting

How to integrate Power BI with Azure DevOps to build advanced QA reports and dashboards. Covers the Azure DevOps Power BI connector, OData queries for test results and work items, and building executive-ready quality dashboards.

InnovateBits · 4 min read

Azure DevOps dashboards are functional but limited. Power BI unlocks advanced analytics: cross-sprint trend analysis, custom calculations, drill-through reports, and scheduled email delivery to stakeholders. This guide connects Azure DevOps data to Power BI for QA reporting.


Connecting Power BI to Azure DevOps

Azure DevOps exposes an Analytics OData endpoint that Power BI can query directly.

Method 1: OData feed

In Power BI Desktop:

  1. Get Data → OData Feed
  2. URL: https://analytics.dev.azure.com/{org}/{project}/_odata/v4.0-preview/
  3. Authentication: Basic auth — leave the username blank and use a Personal Access Token (PAT) as the password
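The same connection can be sketched outside Power BI — useful for verifying a PAT before troubleshooting the connector. This is a minimal Python sketch; the organisation, project, and token values are placeholders:

```python
import base64

def odata_feed_url(org: str, project: str) -> str:
    """Build the Analytics OData endpoint URL for a project."""
    return f"https://analytics.dev.azure.com/{org}/{project}/_odata/v4.0-preview/"

def pat_auth_header(pat: str) -> dict:
    """Basic auth for Azure DevOps: blank username, PAT as the password."""
    token = base64.b64encode(f":{pat}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Placeholder values for illustration only
url = odata_feed_url("contoso", "webshop")
headers = pat_auth_header("my-pat")
```

Passing `url` and `headers` to any HTTP client (e.g. `requests.get(url, headers=headers)`) should return the entity list as JSON if the PAT has the Analytics read scope.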

Method 2: Azure DevOps Power BI connector

  1. Get Data → Online Services → Azure DevOps
  2. Sign in with your Azure AD account
  3. Select your organisation and project

Key OData entities for QA

| Entity | What it contains |
| --- | --- |
| TestRuns | All test runs with pass/fail counts, dates |
| TestResultsDaily | Daily aggregated test results (faster to query) |
| WorkItems | All work items, including bugs and test cases |
| WorkItemSnapshot | Historical work item states |
| PipelineRuns | Pipeline execution history |

Querying test results via OData

// In Power BI OData query — test pass rate by sprint
TestResultsDaily
  ?$filter=Pipeline/PipelineName eq 'Regression'
    and CompletedOn ge 2025-07-01T00:00:00Z
  &$select=CompletedOn,ResultCount,ResultPassCount,ResultFailCount
  &$orderby=CompletedOn asc
// Bug count by sprint and severity
WorkItems
  ?$filter=WorkItemType eq 'Bug'
    and State ne 'Closed'
  &$select=WorkItemId,Title,State,Priority,Severity,
           IterationSK,CreatedDate,ClosedDate
  &$expand=Iteration($select=IterationName,StartDate,EndDate)
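These queries can also be issued directly against the Analytics endpoint, which is handy for debugging a filter before pasting it into Power BI. The Python sketch below only builds and percent-encodes the first request URL; the organisation, project, and pipeline name are placeholders:

```python
from urllib.parse import quote, urlencode

# Placeholder org/project, matching the endpoint format shown earlier
base = "https://analytics.dev.azure.com/contoso/webshop/_odata/v4.0-preview/"

params = {
    "$filter": ("Pipeline/PipelineName eq 'Regression' "
                "and CompletedOn ge 2025-07-01T00:00:00Z"),
    "$select": "CompletedOn,ResultCount,ResultPassCount,ResultFailCount",
    "$orderby": "CompletedOn asc",
}

# quote_via=quote encodes spaces as %20 rather than '+', which OData expects
url = base + "TestResultsDaily?" + urlencode(params, quote_via=quote)
```

The resulting `url` can be fetched with the Basic-auth PAT header from the connection step.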

Building the QA executive dashboard

Page 1: Sprint Quality Summary

Key visuals:

  • KPI card: Sprint pass rate (target: 95%)
  • KPI card: Open P1/P2 bugs (target: 0)
  • Gauge: Requirement coverage %
  • Line chart: Pass rate trend — last 6 sprints
  • Table: Current sprint bugs by severity with assignee

Page 2: Test Execution Trends

Key visuals:

  • Area chart: Tests executed vs passed over time
  • Waterfall: Defects opened vs closed per sprint
  • Scatter plot: Test flakiness (failure rate) vs frequency
  • Bar chart: Test execution time by suite (identify slow suites)

Page 3: Defect Analysis

Key visuals:

  • Stacked bar: Bugs by component, coloured by severity
  • Line chart: Escape rate trend (bugs in production vs QA)
  • Funnel: Bug lifecycle — New → Active → Resolved → Closed
  • Matrix: Bug count heatmap (sprint × component)

DAX measures for QA metrics

// Pass rate
Pass Rate = 
DIVIDE(
    SUM(TestResultsDaily[ResultPassCount]),
    SUM(TestResultsDaily[ResultCount]),
    0
) * 100
 
// Defect escape rate
Escape Rate = 
DIVIDE(
    COUNTROWS(FILTER(Bugs, Bugs[Environment] = "Production")),
    COUNTROWS(Bugs),
    0
) * 100
 
// Defect density (bugs per user story)
Defect Density = 
DIVIDE(
    COUNTROWS(Bugs),
    COUNTROWS(UserStories),
    0
)
 
// Sprint quality score (composite)
Quality Score = 
[Pass Rate] * 0.4 +
(100 - [Escape Rate]) * 0.3 +
MIN([Coverage %], 100) * 0.3
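As a sanity check, the same measures reduce to plain arithmetic. This Python sketch mirrors the DAX above (including DIVIDE's zero-denominator fallback); the sample counts are made up for illustration:

```python
def pass_rate(passed: int, total: int) -> float:
    """Pass Rate: passed tests as a percentage of all results."""
    return 100 * passed / total if total else 0.0

def escape_rate(prod_bugs: int, total_bugs: int) -> float:
    """Escape Rate: production bugs as a percentage of all bugs."""
    return 100 * prod_bugs / total_bugs if total_bugs else 0.0

def quality_score(pass_pct: float, escape_pct: float, coverage_pct: float) -> float:
    """Composite score: 40% pass rate, 30% non-escape, 30% coverage (capped at 100)."""
    return pass_pct * 0.4 + (100 - escape_pct) * 0.3 + min(coverage_pct, 100) * 0.3

# Example sprint: 190/200 tests passing, 2 of 25 bugs escaped, 92% coverage
score = quality_score(pass_rate(190, 200), escape_rate(2, 25), 92)
```

With those sample numbers the composite works out to 95 × 0.4 + 92 × 0.3 + 92 × 0.3 ≈ 93.2 — a quick way to validate the DAX weights before wiring them into visuals.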

Scheduling and sharing reports

  1. Publish the Power BI report to Power BI Service (powerbi.com)
  2. Set data refresh: daily (or hourly for pipeline metrics)
  3. Create a workspace for the QA team
  4. Configure Subscriptions: send email snapshots to stakeholders every Monday morning

Common errors and fixes

Error: OData query returns 403 Forbidden
Fix: The PAT must have the Analytics (read) scope. Go to User Settings → Personal Access Tokens → create a new token with Analytics read permission.

Error: Power BI shows "Query timeout" for large data sets
Fix: Add date filters to the OData query: ?$filter=CompletedOn ge 2025-01-01. Loading all historical data causes timeouts; after an initial filtered load, Power BI incremental refresh can handle large data sets.

Error: Sprint names appear as integers (IterationSK) instead of names
Fix: Expand the Iteration relationship: &$expand=Iteration($select=IterationName). IterationSK is a surrogate key; the readable name lives in the Iteration navigation property.
