The Quality Effectiveness dashboard provides prescriptive recommendations to improve testing coverage, both for the organization as a whole and for a specific Project or Engineering Manager. The dashboard provides measures to assess the effectiveness of your test coverage and views to help identify gaps in testing.
As a business stakeholder, you can use the dashboard to find answers to key business scenarios.
The Quality Effectiveness dashboard displays data in five separate tabs (called chapters). Each chapter provides detailed information about test coverage and defects identified by the system. The five chapters are Overview, Code Quality, Rejected Defects, Test Execution Coverage, and Avg Test Case.
You can use the dashboard filter to analyze information for specific Projects or Engineering Managers. When you filter by Engineering Manager, the dashboard supports the reporting hierarchy (up to three levels by default).
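As an illustration of the three-level hierarchy, the following sketch shows how an Engineering Manager filter could be expanded to include everyone up to three levels below the selected manager. The org structure, names, and function are all hypothetical; the dashboard resolves the hierarchy internally.

```python
from typing import Dict, List

# Hypothetical org tree: manager -> direct reports (who may also be managers).
ORG: Dict[str, List[str]] = {
    "ravi": ["meera", "john"],
    "meera": ["ana"],
    "john": [],
    "ana": [],
}

def reports_in_scope(manager: str, max_depth: int = 3) -> List[str]:
    """Collect the manager and everyone below them, up to max_depth levels."""
    scope = [manager]
    if max_depth > 0:
        for direct in ORG.get(manager, []):
            scope.extend(reports_in_scope(direct, max_depth - 1))
    return scope

print(reports_in_scope("ravi"))  # ['ravi', 'meera', 'ana', 'john']
```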
The Overview chapter, as the name suggests, gives you an overall picture of the quality metrics. The KPI (%) visible on the tab is an aggregate percentage of the remaining metrics (Code Quality %, Rejected Defects %, Test Execution Coverage %, and the average number of test cases); a sketch after the panel descriptions below illustrates one way such an aggregate and the opportunity ranks could be derived. The Overview chapter has the following distinct panels:
Recommendations: This panel on top of the dashboard gives recommendations on actions that can be taken by the stakeholders for defect prevention.
Test Effectiveness:
By Project: This panel displays a list of projects with their defect projection metrics. The metrics displayed are Code Quality%, Rejected Defect%, Test Execution Coverage%, Avg Test Case, and Opportunity Rank.
By Engineering Manager: You can tab across to the Engineering Manager tab within this table to view the same metrics for Engineering Managers. While the By Project list surfaces the projects with the highest opportunity rank, the Engineering Manager list shows which Engineering Manager has the greatest opportunity to work towards reducing defects.
The lower the opportunity rank, the greater the opportunity to increase test coverage.
The metrics are color coded for easy understanding.
Impact of Test Execution Quality: This panel displays a bubble graph that plots the number of defects created against Test Execution Coverage % for each project. A larger bubble indicates a larger number of defects identified for that project. The closer a bubble is to the Y-axis, the lower the test coverage %; focus on increasing test coverage % for such projects.
Time Spent: This panel displays a simple bar graph showing the time spent on requirements. Time spent (in days) is plotted for the last seven months. There are two bars: one indicates the time spent on defects and the other the time spent on stories (refer to the legend on the user interface for details).
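The exact aggregation behind the Overview KPI and Opportunity Rank is not documented, so the following is a minimal sketch, assuming the KPI averages the percentage-based chapter metrics (inverting Rejected Defects %, where lower is better) and that rank 1 goes to the project with the most room to improve. All project names and numbers are made up.

```python
# Illustrative only: how an aggregate Overview KPI and Opportunity Rank
# could be derived. The dashboard's real aggregation and weighting are
# internal; Avg Test Case is a count rather than a percentage, so it is
# omitted from this simple average.

projects = [
    {"name": "Atlas",  "code_quality": 82.0, "rejected_defects": 12.0,
     "test_exec_coverage": 64.0},
    {"name": "Beacon", "code_quality": 91.0, "rejected_defects": 5.0,
     "test_exec_coverage": 88.0},
]

def overview_kpi(p: dict) -> float:
    # Rejected Defects % is inverted because a lower rejection rate is better.
    return (p["code_quality"] + (100 - p["rejected_defects"])
            + p["test_exec_coverage"]) / 3

# Rank ascending: the lowest KPI gets rank 1, i.e. the most opportunity.
for rank, p in enumerate(sorted(projects, key=overview_kpi), start=1):
    print(f"{rank}. {p['name']}: KPI {overview_kpi(p):.1f}%")
```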
The Code Quality chapter gives you an overall picture of code quality. The KPI (%) visible on the tab is an aggregate percentage of the Code Quality % of all projects put together. The Code Quality chapter has the following distinct panels:
Recommendations: This panel on top of the dashboard gives recommendations on actions that can be taken by the stakeholders for defect prevention.
Test Effectiveness by Project: This panel displays a list of projects with their test-related metrics. The metrics displayed are Code Quality%, # of Test Cases Executed, # of Valid Defects, and Opportunity Rank.
You can tab across to the Engineering Manager tab within this table to view these metrics for Engineering Managers.
Here, a lower opportunity rank indicates that the project has a greater number of valid defects and should be looked at on priority (a sketch of one plausible Code Quality % calculation follows the panel descriptions below).
The Code Quality% is color coded for easy understanding.
Defect Type: This panel displays a heat map to help you understand the nature of the defects. The heat map categorizes defect types into 'New' and 'Obsolete', allowing you to allocate resources to valid defects and ensure no time is wasted on obsolete ones.
Defect Analysis: This panel displays specific details about defects such as defect ID, defect type, a short description of the defect, and a flag indicating whether the defect is a production defect.
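The metrics table at the end of this page defines Code Quality % as the ratio of defects to test cases executed. The exact normalization is not documented, so the sketch below assumes that fewer valid defects per executed test case yields a higher percentage; the numbers are illustrative.

```python
# A minimal sketch of Code Quality %, under the assumption that the
# percentage falls as the ratio of valid defects to executed test cases rises.

def code_quality_pct(valid_defects: int, test_cases_executed: int) -> float:
    if test_cases_executed == 0:
        return 0.0  # no executions, no signal
    return max(0.0, 1.0 - valid_defects / test_cases_executed) * 100

print(code_quality_pct(valid_defects=12, test_cases_executed=150))  # 92.0
```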
The Rejected Defects chapter gives you an overall picture of rejected defects. This chapter helps you identify rejected defects so you can plan to reduce the amount of time spent identifying and categorizing them. The KPI (%) visible on the tab is an aggregate percentage of Rejected Defects % across all projects. The Rejected Defects chapter has the following distinct panels:
Recommendations: This panel on top of the dashboard gives recommendations on actions that can be taken by the stakeholders for defect prevention.
Defect Effectiveness:
By Project: This panel displays a list of projects with their defect effectiveness metrics. The metrics displayed are Rejected Defects%, # Defects Rejected, # Defects Reported, and Opportunity Rank (a sketch of the rejection percentage calculation follows the panel descriptions below).
By Engineering Manager: You can tab across to the Engineering Manager tab within this table to view these metrics for Engineering Managers.
The # Defects Rejected metric is color coded for easy understanding.
Defect Tracking: This panel displays a heat map to help you understand the average priority of defects identified for primary requirement work items (for example, a story, feature, or epic). The heat map categorizes features by the number of defects identified and the average priority of those defects, allowing you to allocate resources to the features with the most high-priority defects.
Breakdown By Defect Type: This panel displays a bar graph showing the number of defects fixed and the number of defects not fixed, categorized by severity.
Time Spent on Reported Defects (Days): This panel displays the total time spent on all defects, in days.
Time Spent on Rejected Defects (Days): This panel displays the time spent, in days, analyzing invalid defects that arose from a lack of understanding or ambiguity in the requirements.
Defect Details: This panel displays specific details about a defect such as Requirement Work Item, Defect ID, Defect Status, Created By, and the Root Cause of the defect.
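Per the metrics table, Defect Rejection % is the ratio of invalid bugs to the total number of defects created, and its opportunity rank sorts projects in descending order of rejection percentage. A minimal sketch, with made-up project names and numbers:

```python
# Illustrative only: Rejected Defects % per project, ranked so that the
# highest rejection rate gets rank 1 (descending order, per the metrics table).

projects = [
    ("Atlas", 9, 60),   # (name, # defects rejected, # defects reported)
    ("Beacon", 2, 45),
]

def rejection_pct(rejected: int, reported: int) -> float:
    return 0.0 if reported == 0 else rejected / reported * 100

ranked = sorted(projects, key=lambda p: rejection_pct(p[1], p[2]), reverse=True)
for rank, (name, rej, rep) in enumerate(ranked, start=1):
    print(rank, name, f"{rejection_pct(rej, rep):.1f}%")
```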
The Test Execution Coverage chapter gives you an overall picture of test execution coverage metrics for projects. The KPI (%) visible on the tab is an aggregate percentage of Test Execution Coverage across all projects. The Test Execution Coverage chapter has the following distinct panels:
Recommendations: This panel on top of the dashboard gives recommendations on actions that can be taken by the stakeholders for quality effectiveness.
Test Execution Effectiveness:
By Project: This panel displays a list of projects with their test effectiveness metrics. The metrics displayed are %Test Execution Coverage, %Executed Test Cases, and Opportunity Rank (a sketch of these percentages follows the panel descriptions below).
By Engineering Manager: You can tab across to the Engineering Manager tab within this table to view these metrics for Engineering Managers.
Test Coverage: This panel displays a bubble graph that plots the percentage of test cases executed against Test Execution Coverage % for each project. A larger bubble indicates that the % of Executed Test Cases is sufficient, whereas a smaller bubble indicates scope for improving test coverage.
Test Execution Improvement: This panel displays a combo graph indicating the number of passed and failed test executions, sorted by month. The data is displayed for the current month and the previous six completed months. This panel gives you an overall view of whether test execution has improved over time.
Work Item Details: This panel displays specific details about a work item such as Work Item ID, In Progress Date, Due Date, Test Cases Linked, # Test Execution Skipped, and # Test Execution Passed.
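The two percentages in this chapter follow from the counts defined in the metrics table: % Executed Test Cases compares executed test cases to all test cases, and % Test Execution Coverage compares the work items tested to the work items planned. A minimal sketch with illustrative values:

```python
# Illustrative only: the two coverage percentages, computed from raw counts.

def pct(part: int, whole: int) -> float:
    return 0.0 if whole == 0 else part / whole * 100

executed_test_cases, all_test_cases = 140, 200
work_items_tested, work_items_planned = 18, 25

print(f"% Executed Test Cases:     {pct(executed_test_cases, all_test_cases):.1f}")    # 70.0
print(f"% Test Execution Coverage: {pct(work_items_tested, work_items_planned):.1f}")  # 72.0
```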
The Avg Test Case chapter gives you an overall picture of the test cases created for specific projects. The KPI (%) visible on the tab is an aggregate of Avg Test Cases across all projects. The Avg Test Case chapter has the following distinct panels:
Recommendations: This panel on top of the dashboard gives recommendations on actions that can be taken by the stakeholders for defect prevention.
Test Execution Effectiveness by Project: This panel displays a list of projects with their test case metrics. The metrics displayed are Avg Test Case, # Test Cases Created, Work Item Count, and Opportunity Rank (a sketch of the average calculation follows the panel descriptions below).
You can tab across to the Engineering Manager tab within this table to view these metrics for Engineering Managers.
The Avg Test Case metric is color coded for easy understanding.
Test Coverage: This panel displays a bubble graph to help you understand test coverage. Test Execution Coverage % is plotted against % Executed Test Cases; the bubble size indicates the Test Execution Coverage %.
Test Execution Improvement: This panel displays a graph showing the improvement in test executions. You can view the number of passed and failed test executions per month. Work items are sorted on the basis of current status. Data is displayed for the past five months.
Work Item Details: This panel displays specific details about a work item such as Work Item ID, In Progress Date, Due Date, Status, Test Cases Linked, and Task Linked.
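The metrics table defines the Avg Test Case figure as the average number of test cases per in-progress primary work item, which suggests a simple division of test cases created by work item count. A minimal sketch (the exact formula is an assumption, and the numbers are made up):

```python
# Illustrative only: Avg Test Case as test cases created per work item.

def avg_test_case(test_cases_created: int, work_item_count: int) -> float:
    return 0.0 if work_item_count == 0 else test_cases_created / work_item_count

print(avg_test_case(test_cases_created=96, work_item_count=24))  # 4.0
```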
Here is a list of all the metrics used in this dashboard:
| Metric | Description |
| --- | --- |
| % Executed Test Cases | Percentage of test cases executed compared to all test cases |
| % Test Execution Coverage | Percentage of features for which test cases are executed |
| % Test Execution Coverage Opportunity Rank | Rank indicating which project or value stream needs the most attention in terms of Test Execution Coverage % |
| Avg Test Case for In Progress Primary Work Items | Average number of test cases needed to test the capability |
| Avg Test Case Opportunity Rank | Rank indicating which project or value stream needs the most attention in terms of average test cases |
| Code Quality % | Ratio of defects to test cases executed |
| Code Quality % Opportunity Rank | Rank used to sort projects by Code Quality % |
| Defect Rejection % | Ratio of the number of invalid bugs to the total number of defects created |
| Defect Rejection % Opportunity Rank | Rank of Defect Rejection % in descending order |
| No of In Progress Primary Work Items | Count of work items for which test cases are created |
| No of Tasks Linked to In Progress Primary Work Items | Count of all child work items linked to a story or defect |
| No of Test Case Executions | Count of all test case executions across all projects |
| No of Test Cases Linked to In Progress Primary Work Items | Count of test cases needed to test the capability |
| No of Work Items Planned in Release (Primary WI) | Count of features that are planned |
| No of Work Items Tested (Primary WI) | Count of planned features for which testing is in progress |
| No. of Defects Created | Count of defects reported |
| No. of Defects Valid | Count of all valid defects |
| No. of Invalid Bugs | Count of all invalid bugs |
| No. of Unique Test Cases Executed DQI | Count of all test cases executed |
| No. of Unique Test Cases Executed DQI (Passed/Failed) | Count of unique test cases passed or failed |
| Quality Effectiveness Overview - Opportunity Rank | Rank used to sort projects by Code Quality % |
| Quality Effectiveness Overview KPI | Ratio of defects to test cases executed |
| Rejected Defects Threshold Best DQI {Project Avg} | Best threshold value for Rejected Defects based on the project average |
| Rejected Defects Threshold Worst DQI {Project Avg} | Worst threshold value for Rejected Defects based on the project average |
| Time Spent on Primary Defects | Total time spent on primary defects (in days) |
| Defect Keyword Occurrence | Count of keyword occurrences in the defect description |