Azure DevOps Workflow for Testing Teams
A complete end-to-end Azure DevOps workflow for QA teams — from requirements to release. Covers how Boards, Test Plans, Pipelines, and Repos work together.
Understanding individual Azure DevOps features is useful. Understanding how they connect into a complete workflow is what makes you productive. This article traces a single user story from creation through verification, showing exactly where each Azure DevOps service comes into play.
The scenario
User Story #312: "As a registered user, I want to save items to a wishlist so that I can purchase them later."
Acceptance Criteria:
- User can add any product to wishlist from product detail page
- Wishlist persists between sessions
- User can remove items from wishlist
- Wishlist is limited to 50 items (show error at limit)
- Wishlist page shows current price (updates if price changes)
Sprint: Sprint 19 (2025-07-15 to 2025-07-28)
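The limit and idempotency criteria pin down concrete behavior before any tooling is involved. As a rough sketch of the rules they imply (names and error text are assumptions, not the actual implementation):

```typescript
// Hypothetical sketch of the add-to-wishlist rules implied by the acceptance
// criteria. The real service, storage, and error message may differ.
const WISHLIST_LIMIT = 50

interface AddResult {
  ok: boolean
  error?: string
}

function addToWishlist(wishlist: string[], productId: string): AddResult {
  // Re-adding an already-wishlisted item is a no-op, not an error (TC-208)
  if (wishlist.includes(productId)) {
    return { ok: true }
  }
  // The 51st item is rejected with a user-visible error (TC-204/TC-205)
  if (wishlist.length >= WISHLIST_LIMIT) {
    return { ok: false, error: 'Wishlist is full' }
  }
  wishlist.push(productId)
  return { ok: true }
}
```

Spelling the rules out like this makes the boundary test cases in the next phase almost mechanical to derive: test at the limit, one past the limit, and the duplicate-add case.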
Phase 1: Requirements analysis (Day 1)
The QA engineer reviews the story in Azure Boards:
- Open story #312 in Azure Boards
- Read acceptance criteria carefully
- Add a comment: "Clarification needed: does wishlist sync across devices for the same account?"
- Tag the story needs-clarification
The product owner clarifies: yes, wishlists sync across devices. The QA engineer updates the AC and removes the tag.
Test planning output: 12 test cases identified (8 functional, 2 negative/boundary, 1 performance, 1 UI).
Phase 2: Test case creation (Days 1–2)
In Azure Test Plans, the QA engineer creates a requirement-based suite linked to #312:
Test Plan: Sprint 19
└── Suite: Wishlist Feature (#312)
    ├── TC-201: Add product to wishlist
    ├── TC-202: Wishlist persists after logout/login
    ├── TC-203: Remove item from wishlist
    ├── TC-204: Wishlist limit (50 items) — add at limit
    ├── TC-205: Wishlist limit — error message at 51st item
    ├── TC-206: Wishlist price updates when product price changes
    ├── TC-207: Wishlist syncs across two browser sessions
    ├── TC-208: Add already-wishlisted item (idempotent)
    ├── TC-209: Wishlist renders correctly on mobile viewport
    ├── TC-210: Wishlist accessible without JavaScript
    ├── TC-211: Anonymous user wishlist redirects to login
    └── TC-212: Wishlist page load time < 2 seconds with 50 items
Each test case is linked to #312 using the Tests link type. The requirement-based suite now shows #312 with 12 associated test cases.
Phase 3: Automated test creation (Days 1–3, parallel)
In Azure Repos, the QA engineer creates a feature branch:
```bash
git checkout -b feature/wishlist-e2e-tests
```
Writes Playwright tests covering the automatable test cases (TC-201 through TC-208):
```typescript
// tests/e2e/wishlist.spec.ts
import { test, expect } from '@playwright/test'

test.describe('Wishlist feature', () => {
  test('TC-201: Add product to wishlist', async ({ page }) => {
    await page.goto('/products/laptop-pro-x')
    await page.click('[data-testid="add-to-wishlist"]')
    await expect(page.locator('[data-testid="wishlist-count"]')).toContainText('1')
  })

  test('TC-204: Wishlist enforces 50-item limit', async ({ page, request }) => {
    // Seed wishlist to 49 items via API.
    // getAuthToken is a project helper that logs in the seeded test user
    // and returns a bearer token.
    const token = await getAuthToken(request)
    for (let i = 0; i < 49; i++) {
      await request.post('/api/wishlist', {
        data: { productId: `prod_${i}` },
        headers: { Authorization: `Bearer ${token}` }
      })
    }
    await page.goto('/products/new-item')
    await page.click('[data-testid="add-to-wishlist"]')
    // 50th item should succeed
    await expect(page.locator('[data-testid="wishlist-count"]')).toContainText('50')

    // 51st item should show error
    await page.goto('/products/another-item')
    await page.click('[data-testid="add-to-wishlist"]')
    await expect(page.locator('[data-testid="error-toast"]')).toContainText('Wishlist is full')
  })
})
```
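The spec calls a getAuthToken helper that is not shown. One plausible shape, assuming a hypothetical /api/auth/login endpoint that returns a token field; a minimal structural interface stands in for Playwright's APIRequestContext so the sketch is self-contained:

```typescript
// Hypothetical auth helper for the spec above. RequestLike mirrors only the
// subset of Playwright's APIRequestContext the helper needs; the endpoint
// and response shape are assumptions, not the project's real API.
interface RequestLike {
  post(url: string, options: { data: unknown }): Promise<{ json(): Promise<{ token: string }> }>
}

async function getAuthToken(request: RequestLike): Promise<string> {
  const response = await request.post('/api/auth/login', {
    data: {
      email: process.env.TEST_USER_EMAIL,
      password: process.env.TEST_USER_PASSWORD,
    },
  })
  const body = await response.json()
  return body.token
}
```

In the real suite, the TEST_USER_EMAIL and TEST_USER_PASSWORD values come from the pipeline's secret variables, as shown in the CI configuration below.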
Opens a pull request in Azure Repos. Branch policy requires:
- CI pipeline to pass
- QA lead approval
Phase 4: CI pipeline runs (automated, on PR)
The PR triggers the CI pipeline in Azure Pipelines:
```yaml
trigger: none
pr:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Test
    jobs:
      - job: E2E
        steps:
          - script: npm ci
          - script: npx playwright install --with-deps chromium
          - script: npx playwright test tests/e2e/wishlist.spec.ts
            env:
              BASE_URL: $(STAGING_URL)
              TEST_USER_EMAIL: $(TEST_USER_EMAIL)
              TEST_USER_PASSWORD: $(TEST_USER_PASSWORD)

          - task: PublishTestResults@2
            inputs:
              testResultsFormat: JUnit
              testResultsFiles: playwright-report/results.xml
            condition: always()

          - task: PublishPipelineArtifact@1
            inputs:
              targetPath: playwright-report
              artifact: playwright-report
            condition: always()
```
The pipeline runs and all tests pass. The QA lead reviews and approves the PR, and the code is merged.
Phase 5: Manual testing (Days 5–8)
Development completes the wishlist feature. QA executes manual tests:
- Open Test Plans → Sprint 19 → Wishlist Feature suite
- Click Run → Run for web application
- Test Runner opens side-by-side with the staging environment
- Work through TC-201 to TC-212
TC-210 (wishlist without JavaScript) fails — the wishlist silently does nothing when JS is disabled instead of showing a graceful degradation message.
Create bug from Test Runner:
Title: Wishlist add button does nothing when JavaScript disabled
Steps to reproduce: [auto-populated from test case]
Expected: Message "Please enable JavaScript to use wishlist feature"
Actual: Button click has no visible effect, no error shown
Priority: P3, Severity: 3
Linked to: User Story #312, Test Case TC-210
Sprint: Sprint 19
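Bugs filed from the Test Runner can also be created programmatically via the Work Item Tracking REST API, which takes a JSON Patch document (POST to .../_apis/wit/workitems/$Bug with Content-Type application/json-patch+json). A sketch of the patch body for this bug, using standard field reference names; "MyProject" is a placeholder for the real project name:

```typescript
// Sketch: JSON Patch body for creating the bug above via the Work Item
// Tracking REST API. "MyProject" is a placeholder; the System.* and
// Microsoft.VSTS.* paths are the standard field reference names.
interface PatchOp {
  op: 'add'
  path: string
  value: string | number
}

function buildBugPatch(): PatchOp[] {
  return [
    { op: 'add', path: '/fields/System.Title', value: 'Wishlist add button does nothing when JavaScript disabled' },
    { op: 'add', path: '/fields/System.IterationPath', value: 'MyProject\\Sprint 19' },
    { op: 'add', path: '/fields/Microsoft.VSTS.Common.Priority', value: 3 },
    { op: 'add', path: '/fields/Microsoft.VSTS.Common.Severity', value: '3 - Medium' },
  ]
}
```

Setting System.IterationPath explicitly is exactly what prevents the "bug lands in the wrong sprint" problem covered in the errors section at the end.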
Phase 6: Bug resolution and verification (Days 8–12)
The developer fixes the bug and resolves it in Azure Boards. QA receives notification (via email or Teams integration). QA re-runs TC-210:
- Open the bug work item, note the fix commit
- Open Test Plans, navigate to TC-210
- Click Run → execute the test case
- Test passes → mark as Passed in Test Runner
- Open the bug work item → set state to Closed
Phase 7: Regression and sign-off (Days 12–14)
Before sprint sign-off, QA runs the automated regression suite against the current staging build:
Pipeline triggered manually (or on schedule):
```yaml
schedules:
  - cron: "0 22 * * 1-5"  # 10 PM weekdays
    displayName: Nightly regression
    branches:
      include:
        - main
```
Results published to Azure Test Plans. Sprint 19 sign-off view:
Test Plan: Sprint 19
Wishlist Feature: 12/12 passed ✓
Regression Suite: 47/47 passed ✓
Total pass rate: 100%
Open P1/P2 bugs: 0
Open P3 bugs: 0 (1 closed this sprint)
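The pass rate in that view is just aggregated suite outcomes. A sketch of the arithmetic, with an illustrative result shape:

```typescript
// Sketch: aggregating per-suite results into the overall sign-off pass rate.
// The SuiteResult shape is illustrative, not an Azure DevOps API type.
interface SuiteResult {
  name: string
  passed: number
  total: number
}

function signOffPassRate(suites: SuiteResult[]): number {
  const passed = suites.reduce((sum, s) => sum + s.passed, 0)
  const total = suites.reduce((sum, s) => sum + s.total, 0)
  return total === 0 ? 0 : (passed / total) * 100
}

const rate = signOffPassRate([
  { name: 'Wishlist Feature', passed: 12, total: 12 },
  { name: 'Regression Suite', passed: 47, total: 47 },
])
// rate === 100
```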
Common errors and fixes
Error: Test run shows "Not executed" for automated tests
Fix: Publishing results from the pipeline creates a standalone test run; it is not linked to a plan automatically, and PublishTestResults@2 has no test-plan input. To see outcomes in Test Plans, associate the automated tests with their test cases (Associated Automation) and run them from the test plan, or at minimum set testRunTitle so runs are easy to correlate with the plan.
Error: Bug created from Test Runner appears in wrong sprint
Fix: Check the Iteration Path on the bug. Test Runner may default to the project root. Manually update it to the current sprint iteration.
Error: PR pipeline fails with "Test user credentials not found"
Fix: Add the test credentials as secret variables in Pipeline Library → Variable Groups and reference them with $(VARIABLE_NAME) in the YAML.
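Secrets mapped this way surface in the job as plain environment variables, so a missing mapping shows up as an undefined variable at runtime. A small guard in the test setup (a hypothetical helper, not part of Playwright) turns that into an explicit failure instead of a cryptic login error:

```typescript
// Sketch: fail fast with a clear message when a pipeline secret was not
// mapped into the job's environment.
function requireEnv(name: string): string {
  const value = process.env[name]
  if (value === undefined || value === '') {
    throw new Error(
      `Missing environment variable ${name}. ` +
        'Check the variable group linked to this pipeline.'
    )
  }
  return value
}
```

Call it once at the top of the test setup, e.g. `const email = requireEnv('TEST_USER_EMAIL')`, so the run aborts immediately with the variable name in the error.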
Error: Regression pipeline picks up test files from feature branches
Fix: Set trigger: none on the regression pipeline and trigger it only from main via a scheduled or manual trigger.
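A minimal sketch of the trigger section for that fix; the always flag (which runs the schedule even when main has no new commits) is an optional addition, not from the pipeline shown earlier:

```yaml
# Regression pipeline triggers: no CI or PR triggers, nightly schedule only.
trigger: none
pr: none

schedules:
  - cron: "0 22 * * 1-5"   # 10 PM weekdays, matching the nightly schedule above
    displayName: Nightly regression
    branches:
      include:
        - main
    always: true           # run even if main has no new changes
```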