Azure DevOps Workflow for Testing Teams: End-to-End Example
A complete end-to-end Azure DevOps workflow for QA teams — from requirements to release. Covers how Boards, Test Plans, Pipelines, and Repos work together across a full sprint with real examples and YAML snippets.
Understanding individual Azure DevOps features is useful. Understanding how they connect into a complete workflow is what makes you productive. This article traces a single user story from creation through verification, showing exactly where each Azure DevOps service comes into play.
The scenario
User Story #312: "As a registered user, I want to save items to a wishlist so that I can purchase them later."
Acceptance Criteria:
- User can add any product to wishlist from product detail page
- Wishlist persists between sessions
- User can remove items from wishlist
- Wishlist is limited to 50 items (show error at limit)
- Wishlist page shows current price (updates if price changes)
Sprint: Sprint 19 (2025-07-15 to 2025-07-28)
Phase 1: Requirements analysis (Day 1)
The QA engineer reviews the story in Azure Boards:
- Open story #312 in Azure Boards
- Read acceptance criteria carefully
- Add a comment: "Clarification needed: does wishlist sync across devices for the same account?"
- Tag the story: needs-clarification
The product owner clarifies: yes, wishlists sync across devices. The QA engineer updates the AC and removes the tag.
Test planning output: 12 test cases identified (8 functional, 2 negative/boundary, 1 performance, 1 UI).
Phase 2: Test case creation (Days 1–2)
In Azure Test Plans, the QA engineer creates a requirement-based suite linked to #312:
```
Test Plan: Sprint 19
└── Suite: Wishlist Feature (#312)
    ├── TC-201: Add product to wishlist
    ├── TC-202: Wishlist persists after logout/login
    ├── TC-203: Remove item from wishlist
    ├── TC-204: Wishlist limit (50 items) — add at limit
    ├── TC-205: Wishlist limit — error message at 51st item
    ├── TC-206: Wishlist price updates when product price changes
    ├── TC-207: Wishlist syncs across two browser sessions
    ├── TC-208: Add already-wishlisted item (idempotent)
    ├── TC-209: Wishlist renders correctly on mobile viewport
    ├── TC-210: Wishlist accessible without JavaScript
    ├── TC-211: Anonymous user wishlist redirects to login
    └── TC-212: Wishlist page load time < 2 seconds with 50 items
```
Each test case is linked to #312 using the Tests link type. The requirement-based suite now shows #312 with 12 associated test cases.
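Linking can also be scripted against the work item tracking REST API: adding a Tests link is a JSON Patch PATCH on the test case work item. The sketch below only builds the patch body; the org URL and IDs are placeholders, and Microsoft.VSTS.Common.TestedBy-Reverse is the reverse ("Tests") side of the Tested By link type.

```typescript
// Sketch: JSON Patch body for
// PATCH {orgUrl}/{project}/_apis/wit/workitems/{testCaseId}?api-version=7.1
// that adds a "Tests" link from a test case to a user story.
// All names and IDs below are placeholders.
interface JsonPatchOp {
  op: 'add'
  path: string
  value: unknown
}

function buildTestsLinkPatch(orgUrl: string, storyId: number): JsonPatchOp[] {
  return [
    {
      op: 'add',
      path: '/relations/-', // append to the work item's relations array
      value: {
        // reverse side of Microsoft.VSTS.Common.TestedBy = the "Tests" link
        rel: 'Microsoft.VSTS.Common.TestedBy-Reverse',
        url: `${orgUrl}/_apis/wit/workItems/${storyId}`,
      },
    },
  ]
}

const patch = buildTestsLinkPatch('https://dev.azure.com/contoso/shop', 312)
console.log(JSON.stringify(patch, null, 2))
```

Looping this over TC-201 through TC-212 links the whole suite in one script run instead of twelve manual link dialogs.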
Phase 3: Automated test creation (Days 1–3, parallel)
In Azure Repos, the QA engineer creates a feature branch:
```bash
git checkout -b feature/wishlist-e2e-tests
```

The engineer then writes Playwright tests covering the automatable test cases (TC-201 through TC-208):
```typescript
// tests/e2e/wishlist.spec.ts
import { test, expect } from '@playwright/test'
import { getAuthToken } from './helpers' // shared auth helper; path is illustrative

test.describe('Wishlist feature', () => {
  test('TC-201: Add product to wishlist', async ({ page }) => {
    await page.goto('/products/laptop-pro-x')
    await page.click('[data-testid="add-to-wishlist"]')
    await expect(page.locator('[data-testid="wishlist-count"]')).toContainText('1')
  })

  test('TC-204: Wishlist enforces 50-item limit', async ({ page, request }) => {
    // Seed the wishlist to 49 items via the API
    const token = await getAuthToken(request)
    for (let i = 0; i < 49; i++) {
      await request.post('/api/wishlist', {
        data: { productId: `prod_${i}` },
        headers: { Authorization: `Bearer ${token}` },
      })
    }
    await page.goto('/products/new-item')
    await page.click('[data-testid="add-to-wishlist"]')
    // 50th item should succeed
    await expect(page.locator('[data-testid="wishlist-count"]')).toContainText('50')
    // 51st item should show an error
    await page.goto('/products/another-item')
    await page.click('[data-testid="add-to-wishlist"]')
    await expect(page.locator('[data-testid="error-toast"]')).toContainText('Wishlist is full')
  })
})
```

The engineer opens a pull request in Azure Repos. Branch policy requires:
- CI pipeline to pass
- QA lead approval
Phase 4: CI pipeline runs (automated, on PR)
The PR triggers the CI pipeline in Azure Pipelines:
```yaml
trigger: none
pr:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Test
    jobs:
      - job: E2E
        steps:
          - script: npm ci
          - script: npx playwright install --with-deps chromium
          - script: npx playwright test tests/e2e/wishlist.spec.ts
            env:
              BASE_URL: $(STAGING_URL)
              TEST_USER_EMAIL: $(TEST_USER_EMAIL)
              TEST_USER_PASSWORD: $(TEST_USER_PASSWORD)
          - task: PublishTestResults@2
            inputs:
              testResultsFormat: JUnit
              testResultsFiles: playwright-report/results.xml
            condition: always()
          - task: PublishPipelineArtifact@1
            inputs:
              targetPath: playwright-report
              artifact: playwright-report
            condition: always()
```

The pipeline runs and all tests pass. The QA lead reviews and approves the PR, and the code is merged.
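One detail the pipeline depends on: the PublishTestResults step looks for playwright-report/results.xml, which Playwright only writes when a JUnit reporter is configured. A minimal playwright.config.ts along these lines (the localhost fallback for baseURL is an assumption) would produce that file:

```typescript
// playwright.config.ts — minimal sketch. The key part is the JUnit reporter,
// which writes the results.xml that PublishTestResults@2 picks up.
const config = {
  use: {
    // BASE_URL is injected by the pipeline's env block
    baseURL: process.env.BASE_URL ?? 'http://localhost:3000',
  },
  reporter: [
    ['list'], // console output for local runs
    ['junit', { outputFile: 'playwright-report/results.xml' }],
  ],
} as const

export default config
```

A real project would typically wrap this in defineConfig from @playwright/test; the plain object behaves the same and keeps the sketch dependency-free.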
Phase 5: Manual testing (Days 5–8)
Development completes the wishlist feature. QA executes manual tests:
- Open Test Plans → Sprint 19 → Wishlist Feature suite
- Click Run → Run for web application
- Test Runner opens side-by-side with the staging environment
- Work through TC-201 to TC-212
TC-210 (wishlist without JavaScript) fails — the wishlist silently does nothing when JS is disabled instead of showing a graceful degradation message.
Create a bug directly from Test Runner:

```
Title: Wishlist add button does nothing when JavaScript disabled
Steps to reproduce: [auto-populated from test case]
Expected: Message "Please enable JavaScript to use wishlist feature"
Actual: Button click has no visible effect, no error shown
Priority: P3, Severity: 3
Linked to: User Story #312, Test Case TC-210
Sprint: Sprint 19
```
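Filing the same bug programmatically (for example, from a script that triages automated failures) goes through the work items REST API as a JSON Patch document. The sketch below shows the shape, with System.IterationPath called out because Test Runner sometimes defaults it to the project root; the project name in the iteration path is a placeholder, and the relation links to #312 and TC-210 are omitted for brevity.

```typescript
// Sketch: JSON Patch body for creating a Bug work item via
// POST {orgUrl}/{project}/_apis/wit/workitems/$Bug?api-version=7.1
// Project name ("Shop") and field values are illustrative.
interface PatchOp {
  op: 'add'
  path: string
  value: string | number
}

function buildBugPatch(title: string, iterationPath: string): PatchOp[] {
  return [
    { op: 'add', path: '/fields/System.Title', value: title },
    { op: 'add', path: '/fields/Microsoft.VSTS.Common.Priority', value: 3 },
    { op: 'add', path: '/fields/Microsoft.VSTS.Common.Severity', value: '3 - Medium' },
    // If omitted, the bug can land at the project root iteration
    // instead of the current sprint
    { op: 'add', path: '/fields/System.IterationPath', value: iterationPath },
  ]
}

const bug = buildBugPatch(
  'Wishlist add button does nothing when JavaScript disabled',
  'Shop\\Sprint 19',
)
console.log(bug.map((o) => o.path).join('\n'))
```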
Phase 6: Bug resolution and verification (Days 8–12)
The developer fixes the bug and resolves it in Azure Boards. QA receives notification (via email or Teams integration). QA re-runs TC-210:
- Open the bug work item, note the fix commit
- Open Test Plans, navigate to TC-210
- Click Run → execute the test case
- Test passes → mark as Passed in Test Runner
- Open the bug work item → set state to Closed
Phase 7: Regression and sign-off (Days 12–14)
Before sprint sign-off, QA runs the automated regression suite against the current staging build:
Pipeline triggered manually (or on schedule):
```yaml
schedules:
  - cron: "0 22 * * 1-5"   # 10 PM UTC on weekdays
    displayName: Nightly regression
    branches:
      include:
        - main
```

Results are published to Azure Test Plans. Sprint 19 sign-off view:
```
Test Plan: Sprint 19
  Wishlist Feature:   12/12 passed ✓
  Regression Suite:   47/47 passed ✓
  Total pass rate:    100%
  Open P1/P2 bugs:    0
  Open P3 bugs:       0 (1 closed this sprint)
```
Common errors and fixes
Error: Test run shows "Not executed" for automated tests
Fix: The pipeline must publish results AND be linked to the test plan. Add testPlanId to the PublishTestResults task or set testRunTitle to match the plan name.
Error: Bug created from Test Runner appears in wrong sprint
Fix: Check the Iteration Path on the bug. Test Runner may default to the project root. Manually update it to the current sprint iteration.
Error: PR pipeline fails with "Test user credentials not found"
Fix: Add the test credentials as secret variables in Pipeline Library → Variable Groups and reference them with $(VARIABLE_NAME) in the YAML.
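For that credentials fix, the consuming pipeline references the variable group by name and maps each secret into the step's environment explicitly, since Azure Pipelines never exposes secret variables to scripts automatically. A sketch (the group name qa-test-secrets is a placeholder):

```yaml
variables:
  - group: qa-test-secrets   # placeholder name; holds the secrets below

steps:
  - script: npx playwright test
    env:
      # secret variables must be mapped explicitly; $(VAR) alone is not
      # enough to make them visible inside the script
      TEST_USER_EMAIL: $(TEST_USER_EMAIL)
      TEST_USER_PASSWORD: $(TEST_USER_PASSWORD)
```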
Error: Regression pipeline picks up test files from feature branches
Fix: Set trigger: none on the regression pipeline and trigger it only from main via a scheduled or manual trigger.