
How QA Engineers Use Azure DevOps in Real Projects

A practical walkthrough of how QA engineers use Azure DevOps day-to-day in real software projects. Covers sprint planning, test case creation, bug tracking, pipeline integration, and reporting — with real examples.

InnovateBits · 7 min read

Reading documentation about Azure DevOps is one thing. Understanding how QA engineers actually use it inside a real sprint — juggling test case management, bug tracking, pipeline failures, and reporting deadlines — is another.

This article walks through a realistic two-week sprint from a QA engineer's perspective, showing exactly which Azure DevOps features come into play, when, and why.


The sprint setup (Day 1)

At sprint planning, the team commits to 8 user stories. The QA engineer's first job is to review each story's acceptance criteria and create a test plan for the sprint.

Creating the sprint test plan

Project: ShopFlow
Sprint: Sprint 18 (2025-07-01 to 2025-07-14)

Test Plan: Sprint 18 — Regression + New Features
  ├── Suite: Checkout Flow (new feature)
  ├── Suite: User Authentication (regression)
  ├── Suite: Product Search (regression)
  └── Suite: Order History (new feature)

Each suite maps to a functional area. The new feature suites will contain new test cases written this sprint. The regression suites link to existing test cases that need re-running.

Linking test cases to user stories

This is the step most QA engineers skip — and it costs them later. Linking test cases to user stories in Azure DevOps creates a requirements traceability matrix automatically.

  1. Open a test case
  2. Click Links → Add link → Related Work
  3. Search for the user story by ID or title
  4. Set link type to Tests

Now when someone asks "which tests cover user story #847?", the answer is one click away. And when a user story is marked Done, you can see whether all its linked test cases have passed.
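Conceptually, those links behave like a simple mapping from story to test cases. A minimal sketch of that mapping — the story IDs, test names, and outcomes below are invented for illustration, not pulled from Azure DevOps:

```python
# Sketch: a requirements traceability matrix built from story <-> test links.
# Story IDs, test names, and outcomes are hypothetical examples.
from collections import defaultdict

links = [
    (847, "Valid discount code applies correct percentage", "Passed"),
    (847, "Invalid discount code shows descriptive error", "Failed"),
    (851, "Order history loads within two seconds", "Passed"),
]

matrix = defaultdict(list)
for story_id, test_name, outcome in links:
    matrix[story_id].append((test_name, outcome))

def story_is_covered_and_green(story_id):
    """A story is safe to close only if it has linked tests and all passed."""
    tests = matrix.get(story_id, [])
    return bool(tests) and all(outcome == "Passed" for _, outcome in tests)
```

This is exactly the check the Done-state review above relies on: coverage exists, and every covered test is green.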


Writing test cases (Days 1–3)

For a user story: "As a customer, I want to apply a discount code at checkout so that I get the advertised price reduction", a QA engineer would write test cases covering:

| Test Case                                      | Type          | Priority |
|------------------------------------------------|---------------|----------|
| Valid discount code applies correct percentage | Functional    | 1        |
| Invalid discount code shows descriptive error  | Negative      | 1        |
| Expired discount code shows expiry message     | Negative      | 2        |
| Discount code only usable once per account     | Business rule | 2        |
| Discount applied before tax calculation        | Calculation   | 1        |
| Discount field visible on mobile viewport      | UI            | 3        |

Each test case gets detailed steps. For the first test case:

Step 1 → Navigate to /checkout with one item in cart
         Expected: Checkout page displays "Discount code" input field

Step 2 → Enter "SAVE20" in the discount field and click Apply
         Expected: Green confirmation message "20% discount applied"

Step 3 → Verify the order summary shows:
         Subtotal: £50.00
         Discount (20%): -£10.00
         Total: £40.00 + tax
         Expected: All values correct
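The arithmetic in step 3 is also easy to encode as an automated check. A small sketch using the figures from the example above — the helper name and return shape are mine, not part of any Azure DevOps API:

```python
def apply_discount(subtotal: float, percent: float) -> dict:
    """Return the order-summary lines for a percentage discount,
    applied before tax as the test case requires."""
    discount = round(subtotal * percent / 100, 2)
    return {
        "subtotal": subtotal,
        "discount": -discount,        # shown as a negative line item
        "total": subtotal - discount,  # tax is added after this
    }

summary = apply_discount(50.00, 20)
```

Running this against the expected values in step 3 (£50.00 subtotal, -£10.00 discount, £40.00 total before tax) is a natural first candidate for automation.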

Executing tests during the sprint (Days 3–10)

Once development is complete for a story, QA picks it up. The typical execution workflow:

Manual testing

  1. Open the sprint test plan in Test Plans
  2. Select the suite for the completed story
  3. Click Run → Run for web application
  4. The Test Runner opens in a side panel
  5. Work through each step, marking Pass/Fail
  6. For any failure: click Create Bug, add screenshot, assign to the dev

Tracking test execution status

The test plan shows a live summary:

Checkout Flow suite:
  Passed:      4 / 6  ████████░░  67%
  Failed:      1 / 6
  Not Run:     1 / 6

Sprint overall:
  Passed:     18 / 32
  Failed:      3 / 32
  Not Run:    11 / 32
  Blocked:     0 / 32

This is the QA engineer's working view throughout the sprint — it shows what's been tested, what's failing, and what's still pending.
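The summary above is just an aggregation over per-test outcomes. A minimal sketch of that aggregation, assuming outcomes use the same labels Test Plans displays (the function and outcome list are illustrative):

```python
from collections import Counter

def suite_summary(outcomes):
    """Aggregate per-test outcomes into the counts a test plan displays."""
    counts = Counter(outcomes)
    total = len(outcomes)
    pass_rate = round(100 * counts["Passed"] / total) if total else 0
    return {
        "Passed": counts["Passed"],
        "Failed": counts["Failed"],
        "Not Run": counts["Not Run"],
        "Blocked": counts["Blocked"],
        "pass_rate": pass_rate,
    }

# The Checkout Flow suite from the summary above: 4 passed, 1 failed, 1 not run
checkout = suite_summary(["Passed"] * 4 + ["Failed"] + ["Not Run"])
```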


Bug management workflow

When a bug is raised from Test Runner, Azure DevOps pre-populates the bug with the test case name and a link to the test run. The QA engineer adds:

  • Reproduction steps (copied from the failed test step)
  • Screenshot (attached directly in the work item)
  • Environment (browser, device, test environment URL)
  • Priority and Severity

The bug appears in the sprint backlog. The developer fixes it, marks it Resolved. The QA engineer re-runs the specific test case, marks it Passed, and closes the bug.

Tracking bugs by sprint

A useful Azure Boards query for QA:

Work Item Type = Bug
AND Iteration = @CurrentIteration
AND State <> Closed
ORDER BY Priority ASC

Save this as a chart (donut chart by Severity) and pin it to your team dashboard.
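The same query can also be run programmatically through the Azure DevOps WIQL REST endpoint (POST to dev.azure.com/{org}/{project}/_apis/wit/wiql with an api-version query parameter). A sketch of the request body only — the field names are the standard Azure DevOps reference names, but treat the exact endpoint details as something to verify against the REST API docs, and note that @CurrentIteration resolves against a team context:

```python
import json

# Sketch: the saved query above expressed as a WIQL request body.
wiql = {
    "query": (
        "SELECT [System.Id], [System.Title], [Microsoft.VSTS.Common.Priority] "
        "FROM WorkItems "
        "WHERE [System.WorkItemType] = 'Bug' "
        "AND [System.IterationPath] = @CurrentIteration "
        "AND [System.State] <> 'Closed' "
        "ORDER BY [Microsoft.VSTS.Common.Priority] ASC"
    )
}
body = json.dumps(wiql)  # send as the JSON body of the POST request
```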


Integrating automated tests (Days 1–14, ongoing)

While manual testing continues, automated regression tests run on every pull request via Azure Pipelines. The QA engineer's role here is to:

  1. Ensure the pipeline publishes test results to Azure Test Plans
  2. Review failed test runs in the pipeline and determine if they're product bugs or test flakiness
  3. Investigate flaky tests — tests that sometimes pass and sometimes fail without code changes
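Step 3 can be partly automated. A hedged sketch of a flakiness heuristic — the run histories are invented, and in practice they would come from the pipeline's published test results rather than a hardcoded dict:

```python
# Sketch: flag tests whose recent outcomes flip without code changes.
def is_flaky(history, window=5):
    """A test is suspect if its last `window` runs contain both outcomes."""
    recent = history[-window:]
    return "Passed" in recent and "Failed" in recent

histories = {
    "test_checkout_discount": ["Passed", "Failed", "Passed", "Passed", "Failed"],
    "test_login_redirect": ["Passed"] * 5,
}
flaky = sorted(name for name, runs in histories.items() if is_flaky(runs))
```

A window of five runs is an arbitrary starting point; tune it to how often your pipeline runs.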

A pipeline that publishes test results looks like this:

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '**/test-results.xml'
    mergeTestResults: true
    testRunTitle: 'Sprint 18 Automated Regression'
  condition: always()

The condition: always() is important — it ensures results are published even when tests fail. Without it, a failing test prevents results from being uploaded.


End-of-sprint activities (Days 12–14)

Sign-off report

Before the sprint review, the QA engineer exports a sign-off summary. In Azure Test Plans:

  1. Go to Test Plans → Progress Report
  2. Select the sprint test plan
  3. The report shows pass rates, pending tests, open bugs

A simple sign-off checklist:

Sprint 18 QA Sign-Off Checklist:
☑ All P1/P2 test cases executed
☑ Pass rate: 94% (47/50 test cases passed)
☑ 3 failing test cases: all have open bugs assigned
☑ All P1 bugs resolved and verified
☑ 2 P2 bugs deferred to Sprint 19 (approved by PO)
☑ Regression suite fully executed — no new failures
☑ Automated pipeline: green for 3 consecutive days
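The numeric parts of that checklist can be expressed as a simple gate. A sketch under stated assumptions — the 90% threshold and function shape are illustrative choices, not an Azure DevOps feature:

```python
def qa_sign_off(passed, total, open_p1_bugs, pending_p1_p2_tests):
    """Sketch of the sign-off checklist as an automated gate.
    Thresholds here are illustrative, not official."""
    pass_rate = 100 * passed / total if total else 0
    return (
        pass_rate >= 90          # e.g. the 94% above clears this bar
        and open_p1_bugs == 0    # all P1 bugs resolved and verified
        and pending_p1_p2_tests == 0  # all P1/P2 test cases executed
    )

ok = qa_sign_off(passed=47, total=50, open_p1_bugs=0, pending_p1_p2_tests=0)
```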

Carrying over work items

Any test cases not executed get moved to the next sprint by updating their Iteration Path in Azure Boards.
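For bulk carry-overs, the work item update REST API accepts a JSON Patch body (content type application/json-patch+json) that sets the iteration field. A sketch of just that body — the project and sprint names are placeholders from this article's example:

```python
import json

# Sketch: JSON Patch body for moving a work item to the next sprint
# via the work item update endpoint. Path values are placeholders.
def move_to_iteration(iteration_path):
    return [{
        "op": "add",
        "path": "/fields/System.IterationPath",
        "value": iteration_path,
    }]

body = json.dumps(move_to_iteration("ShopFlow\\Sprint 19"))
```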


Common errors and fixes

Problem: Test cases executed but the count doesn't update in the test plan
Fix: Ensure you're running test cases through Test Runner (not marking them directly). Only Test Runner execution updates the test plan statistics.

Problem: Bugs linked to test cases don't show in the sprint board
Fix: Check the Area Path on the bug matches your team's area path. Bugs in a different area won't appear on your board.

Problem: Automated test results not showing in the test plan
Fix: The pipeline must use the PublishTestResults task AND the test run title must match an existing test plan. Set testRunTitle to match your Azure Test Plans name, or link the pipeline to the test plan manually.

Problem: Team dashboards are empty
Fix: Dashboards are team-scoped. You may be viewing a different team's dashboard. Check the team selector in the top navigation.


The QA engineer's daily Azure DevOps routine

A focused daily routine keeps the work manageable:

  • Morning standup (10 min): Review sprint board for blockers. Check pipeline status — any overnight failures?
  • Test execution (3–4 hrs): Work through queued test cases in priority order
  • Bug management (30 min): Follow up on resolved bugs, re-test and close
  • Pipeline review (20 min): Investigate any new automated test failures
  • End of day (10 min): Update test execution count, raise any blockers in the sprint board

This rhythm — testing, tracking, communicating — is the core of what QA engineers do in Azure DevOps every day.
