It may seem like a minor element of software testing, but consistent and clear naming conventions play a key role in organizing complex test elements, easing collaboration, and simplifying traceability.
However, we recognize that developing that structure can often be easier said than done. This Academy Article shares best practices your team can use for the various elements in TestMonitor: requirements, test cases, milestones, test runs, and issues.
Here’s how:
Having consistent and clear naming conventions is essential for:

- Organizing complex test elements
- Easing collaboration across your team
- Simplifying traceability
While every team can develop their own structure, a strong requirement name pairs a short, unique identifier with a clear description of the requirement's focus. An example that reflects this could be: "RQ1: User Authentication."
Note: TestMonitor creates Requirement, Test Case, and Issue naming codes for you when these items are added.
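To make the pattern concrete, here is a minimal sketch of how such a code-plus-title name could be built. This is illustrative only: TestMonitor generates these codes for you automatically, and the `NameGenerator` class and `RQ` prefix below are hypothetical examples, not part of the product.

```python
from itertools import count

class NameGenerator:
    """Builds sequential names like "RQ1: User Authentication" from a prefix."""

    def __init__(self, prefix):
        self.prefix = prefix
        self._counter = count(1)  # sequential identifier, starting at 1

    def next_name(self, title):
        # Combine prefix, running number, and a short descriptive title.
        return f"{self.prefix}{next(self._counter)}: {title}"

requirements = NameGenerator("RQ")
print(requirements.next_name("User Authentication"))  # RQ1: User Authentication
print(requirements.next_name("Password Reset"))       # RQ2: Password Reset
```

The same idea extends to any element type by swapping the prefix.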
Similar to naming requirements, test case names should clearly reflect their purpose and scope.
Related: How to Write Effective Test Case Names with Examples >>
To start, implement a system of prefixes or keywords to group related test cases. You can even go a step further by using folders to group related test cases.
From there, ensure the naming convention is consistent across all test cases.
Examples following this pattern could include: "TC1: Login - Valid Credentials" and "TC2: Login - Invalid Password."
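One payoff of a consistent prefix-and-keyword scheme is that test cases can be grouped mechanically. The sketch below assumes hypothetical names of the form "TC<number>: <focus area> - <description>"; the specific names and the `TC` prefix are illustrative, not TestMonitor defaults.

```python
import re
from collections import defaultdict

# Hypothetical test case names following a consistent convention.
test_cases = [
    "TC1: Login - Valid Credentials",
    "TC2: Login - Invalid Password",
    "TC3: Checkout - Payment Declined",
]

# Pull out the focus-area keyword between the code and the hyphen.
groups = defaultdict(list)
for name in test_cases:
    match = re.match(r"TC\d+:\s*([^-]+?)\s*-", name)
    if match:
        groups[match.group(1)].append(name)

print(dict(groups))  # test cases bucketed by focus area, e.g. "Login"
```

Because the convention is uniform, the same one-line pattern works for every case; inconsistent names are exactly the ones this kind of grouping would silently miss.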
Milestone names should reflect key project phases or key deliverables.
For example: “Milestone 1: Initial Feature Set”.
Similarly, for Test Runs, create names that indicate the purpose, scope, and associated milestones. This could look like: "Sprint 1 Test Run - Regression Testing".
By now, you’re probably noticing a pattern. For issues, we recommend combining a unique identifier as a prefix with the focus area of the issue, followed by a brief description of the bug, such as: "I29: Login - Error on Password Reset".
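A name built this way is also easy to take apart again, which helps with traceability. The sketch below parses the example issue name into its three parts; the regular expression and field names are assumptions for illustration, not a TestMonitor feature.

```python
import re

# identifier prefix, focus area, and brief description, separated as
# "I<number>: <area> - <description>"
ISSUE_PATTERN = re.compile(
    r"^(?P<id>I\d+):\s*(?P<area>[^-]+?)\s*-\s*(?P<description>.+)$"
)

parts = ISSUE_PATTERN.match("I29: Login - Error on Password Reset")
print(parts["id"])           # I29
print(parts["area"])         # Login
print(parts["description"])  # Error on Password Reset
```

Being able to recover the identifier and focus area from the name alone is what makes such issues straightforward to sort, filter, and link back to requirements and test cases.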
Related: How to Write A Bug Report That Resolves Issues Effectively >>
Although it can take some time to integrate naming conventions like these, the efficiency benefits are well worth the effort. Once they are in place, we recommend: