#QE #test-cases #quality-engineering #qa #software-testing #test-design

How to Write Effective QA Test Cases: A Practical Guide

Learn how to write clear, comprehensive, and maintainable QA test cases — covering structure, naming conventions, boundary value analysis, equivalence partitioning, and common mistakes to avoid.

InnovateBits · 7 min read

Test cases are the foundation of any organised QA effort. A well-written test case is unambiguous, reproducible, and specific enough that any team member can execute it and reach the same conclusion. A poorly written test case is vague, misses edge cases, and gives false confidence when it passes.

This guide covers how to write test cases that are genuinely useful — not just busy work.


What Makes a Good Test Case

A good test case has six properties:

Clear preconditions — exactly what state the system must be in before the test begins. Vague preconditions like "user is logged in" leave room for interpretation. Specific preconditions like "user is logged in as a standard user with no existing orders" eliminate it.

Unambiguous steps — each step is a single, specific action. "Navigate to the checkout page and fill in payment details" is two steps, not one.

Expected result — what should happen if the system behaves correctly. Not "the order is placed" but "the user is redirected to /order-confirmation and sees 'Order #12345 confirmed'."

Test data — any specific data required to execute the test. Including test data in the test case makes it reproducible and prevents the "works on my machine" problem.

Traceability — a link to the requirement, user story, or acceptance criterion this test validates. This makes it possible to identify which tests to run when a requirement changes.

Atomic — each test case tests one thing. Tests that validate multiple features are hard to triage when they fail.


Test Case Structure

A standard test case template:

Test Case ID: TC-LOGIN-001
Title: Valid credentials redirect user to dashboard
Module: Authentication
Priority: High
Preconditions:
  - User account exists with email: test@example.com, password: Test1234!
  - User is not currently logged in
  - Browser cache is cleared

Test Steps:
  1. Navigate to https://yourapp.com/login
  2. Enter "test@example.com" in the Email field
  3. Enter "Test1234!" in the Password field
  4. Click the "Log In" button

Expected Result:
  - User is redirected to /dashboard
  - Header shows "Welcome, Test User"
  - Session cookie is set

Actual Result: [filled during execution]
Status: Pass / Fail / Blocked
Notes: [filled during execution]
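For teams that manage test cases in code or tooling, the template above can be modelled as a data structure. This is a minimal sketch in TypeScript; the type and field names are illustrative, not a standard schema.

```typescript
// Hypothetical type modelling the template above; field names are illustrative.
interface TestCase {
  id: string; // e.g. "TC-LOGIN-001"
  title: string;
  module: string;
  priority: 'High' | 'Medium' | 'Low';
  preconditions: string[];
  steps: string[];
  expectedResults: string[];
  actualResult?: string; // filled during execution
  status?: 'Pass' | 'Fail' | 'Blocked'; // filled during execution
  notes?: string;
}

// The example from the template, expressed as data.
const tcLogin001: TestCase = {
  id: 'TC-LOGIN-001',
  title: 'Valid credentials redirect user to dashboard',
  module: 'Authentication',
  priority: 'High',
  preconditions: [
    'User account exists with email: test@example.com, password: Test1234!',
    'User is not currently logged in',
    'Browser cache is cleared',
  ],
  steps: [
    'Navigate to https://yourapp.com/login',
    'Enter "test@example.com" in the Email field',
    'Enter "Test1234!" in the Password field',
    'Click the "Log In" button',
  ],
  expectedResults: [
    'User is redirected to /dashboard',
    'Header shows "Welcome, Test User"',
    'Session cookie is set',
  ],
};
```

Storing test cases as structured data like this makes it straightforward to generate reports, enforce required fields, or drive data-driven automation later.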

Test Design Techniques

Writing effective test cases is not just about documenting steps — it's about thinking systematically about what could go wrong. Two techniques that dramatically improve coverage:

Equivalence Partitioning

Instead of testing every possible input (impossible), divide inputs into groups (partitions) where all values in a group should produce the same result. Test one representative value from each partition.

For a password field with requirements "8–20 characters, must include one uppercase, one number":

Partition               | Example                | Expected
------------------------|------------------------|---------
Valid password          | SecurePass1            | Accept
Too short (< 8 chars)   | Pass1                  | Reject
Too long (> 20 chars)   | VeryLongPassword12345! | Reject
No uppercase            | securepass1            | Reject
No number               | SecurePassX            | Reject
Empty                   | (empty string)         | Reject

Six test cases cover the entire input space more thoroughly than testing 100 random valid passwords.
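The partitions above translate directly into a data-driven check. This is a sketch assuming the stated rules (8–20 characters, at least one uppercase letter, at least one digit); the function name is illustrative.

```typescript
// Hypothetical validator for the stated rules: 8–20 characters,
// at least one uppercase letter and at least one digit.
function isValidPassword(pw: string): boolean {
  if (pw.length < 8 || pw.length > 20) return false;
  if (!/[A-Z]/.test(pw)) return false;
  if (!/[0-9]/.test(pw)) return false;
  return true;
}

// One representative value per equivalence partition.
const partitions: Array<[string, string, boolean]> = [
  ['Valid password',        'SecurePass1',            true],
  ['Too short (< 8 chars)', 'Pass1',                  false],
  ['Too long (> 20 chars)', 'VeryLongPassword12345!', false],
  ['No uppercase',          'securepass1',            false],
  ['No number',             'SecurePassX',            false],
  ['Empty',                 '',                       false],
];

for (const [name, input, expected] of partitions) {
  if (isValidPassword(input) !== expected) {
    throw new Error(`Partition "${name}" failed for input "${input}"`);
  }
}
```

Because each row represents a whole class of inputs, a failure pinpoints which rule is broken rather than just reporting "some password was rejected".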

Boundary Value Analysis

Bugs cluster at boundaries. If a field accepts 8–20 characters, test at 7 (just below minimum), 8 (minimum), 20 (maximum), and 21 (just above maximum).

Password length tests:
- 7 characters: Reject ← just below boundary
- 8 characters: Accept ← minimum valid
- 14 characters: Accept ← mid-range (one representative)
- 20 characters: Accept ← maximum valid
- 21 characters: Reject ← just above boundary

This is more effective than testing 10, 11, 12, 13... — those all belong to the same equivalence class.
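The boundary values above can be checked mechanically by generating strings of each target length. This sketch isolates only the length rule; a full validator would also enforce the uppercase and digit requirements.

```typescript
// Hypothetical length check for the 8–20 character rule only.
const lengthOk = (pw: string): boolean => pw.length >= 8 && pw.length <= 20;

// Boundary values: just below minimum, minimum, mid-range, maximum, just above maximum.
const boundaries: Array<[number, boolean]> = [
  [7,  false], // just below boundary
  [8,  true],  // minimum valid
  [14, true],  // mid-range representative
  [20, true],  // maximum valid
  [21, false], // just above boundary
];

for (const [len, expected] of boundaries) {
  const pw = 'Aa1'.padEnd(len, 'x'); // any string of exactly the target length
  if (lengthOk(pw) !== expected) {
    throw new Error(`Boundary length ${len} failed`);
  }
}
```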


Positive vs Negative Test Cases

Most teams write too many positive tests and too few negative ones. As a rough rule of thumb, a healthy test suite is around 40–60% negative cases.

Positive test cases validate that the system works correctly under normal, expected conditions.

Negative test cases validate that the system handles incorrect or unexpected inputs gracefully — returning meaningful errors, not crashing, not corrupting data.

For a login form, negative cases to always cover:

TC-LOGIN-002: Invalid password shows error message
TC-LOGIN-003: Non-existent email shows generic error (not "email not found" — security)
TC-LOGIN-004: Empty email field shows validation error
TC-LOGIN-005: Empty password field shows validation error
TC-LOGIN-006: Account locked after 5 failed attempts
TC-LOGIN-007: SQL injection in email field is handled safely
TC-LOGIN-008: XSS attempt in password field does not execute
TC-LOGIN-009: Extremely long email (500+ chars) handled without crash
TC-LOGIN-010: Leading/trailing whitespace in email is trimmed or flagged
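Several of the negative cases above boil down to input validation that can be expressed as a small function. This is a sketch covering TC-LOGIN-004, 005, 009, and 010 only; the function name, error messages, and the 254-character email limit are illustrative assumptions, and cases like SQL injection (TC-LOGIN-007) must still be verified against the real backend.

```typescript
// Hypothetical client-side validation covering a subset of the negative
// cases above; names and limits are illustrative.
function validateLoginInput(email: string, password: string): string[] {
  const errors: string[] = [];
  const trimmed = email.trim(); // TC-LOGIN-010: trim leading/trailing whitespace
  if (trimmed === '') {
    errors.push('Email is required'); // TC-LOGIN-004
  }
  if (password === '') {
    errors.push('Password is required'); // TC-LOGIN-005
  }
  if (trimmed.length > 254) {
    errors.push('Email is too long'); // TC-LOGIN-009: no crash on huge input
  }
  return errors;
}
```

Each negative test case then asserts on the specific error returned, which keeps failures easy to triage.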

Teams that only write TC-LOGIN-001 miss nine ways the login feature can go wrong.


Naming Conventions

Consistent naming makes test cases scannable and helps you quickly find what you need.

Format: [Module]_[Scenario]_[Condition]

Good names:

  • Login_ValidCredentials_RedirectsToDashboard
  • Login_InvalidPassword_ShowsErrorMessage
  • Login_AccountLocked_After5FailedAttempts
  • Checkout_EmptyCart_DisablesSubmitButton

Bad names:

  • Test1
  • Login test
  • Check that the login works

The goal is that you can read the test name and immediately understand what is being tested and what the expected outcome is.
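A naming convention is easiest to keep when it is enforced automatically. This is a sketch of a lint-style check for the Module_Scenario_Condition format; the regular expression is illustrative, not a standard.

```typescript
// Hypothetical lint check for the Module_Scenario_Condition convention:
// three PascalCase/alphanumeric segments separated by underscores.
const namePattern = /^[A-Z][A-Za-z0-9]*_[A-Z][A-Za-z0-9]*_[A-Z0-9][A-Za-z0-9]*$/;

const goodNames = [
  'Login_ValidCredentials_RedirectsToDashboard',
  'Login_InvalidPassword_ShowsErrorMessage',
  'Login_AccountLocked_After5FailedAttempts',
  'Checkout_EmptyCart_DisablesSubmitButton',
];
const badNames = ['Test1', 'Login test', 'Check that the login works'];

for (const name of goodNames) {
  if (!namePattern.test(name)) throw new Error(`Should match convention: ${name}`);
}
for (const name of badNames) {
  if (namePattern.test(name)) throw new Error(`Should violate convention: ${name}`);
}
```

Running a check like this in CI turns the convention from a guideline into a guarantee.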


Exploratory vs Scripted Testing

Scripted test cases cover known scenarios. But software has an infinite number of unexpected states — which is where exploratory testing complements scripted cases.

In exploratory testing, you investigate the system without a predefined script, using curiosity and domain knowledge to find defects that scripted tests miss. You record what you did (session-based test management), so the exploration is reproducible.

A mature QE strategy uses both: scripted test cases for regression testing and compliance, exploratory testing for discovery and complex user journeys.


Test Case Review

Test cases should go through a review process before being added to the suite, just like code:

  • Developer review — ensures the test reflects how the feature actually works
  • Product review — ensures the test matches the intended behaviour, not just the implementation
  • QE peer review — checks for missing edge cases, vague steps, and ambiguous expected results

A 15-minute review catches more gaps than hours spent debugging a poorly specified test case later.


From Test Cases to Automation

Well-written test cases are easy to automate because the ambiguity has already been removed. Each step maps directly to a Playwright or Selenium action:

Manual step: "Enter 'test@example.com' in the Email field"
↓
Playwright: await page.getByLabel('Email').fill('test@example.com');

Manual step: "Click the 'Log In' button"
↓  
Playwright: await page.getByRole('button', { name: 'Log In' }).click();

Manual expected result: "User is redirected to /dashboard"
↓
Playwright: await expect(page).toHaveURL('/dashboard');

If your test case has a step like "complete the checkout process," it's not specific enough to automate reliably. Rewrite it before automating it.


Common Mistakes

Combining multiple tests into one. "Verify that login, product search, and checkout all work" is three test cases, not one.

Testing the UI implementation instead of the behaviour. "Verify the button turns blue on hover" tests CSS. "Verify the submit button is disabled when the form is empty" tests behaviour. Focus on behaviour.

Not including test data. "Enter a valid email" leaves too much interpretation. "Enter test@example.com" is specific and reproducible.

Writing test cases after implementation. Test cases should be written from requirements, before coding, so they can inform development. Test cases written after the fact often just document what was built, missing the edge cases the implementation didn't consider.

For more on building a complete QE strategy, see our Quality Engineering Strategy Roadmap. For test automation, see our guides on Playwright and API testing.