Test Management - Closing


1. Criteria For Test Completion

1.1. All the Planned Tests Executed and Passed

1.1.1. Weakest criterion

1.1.2. Risky when used in isolation as the sole stop-test criterion

1.2. All Coverage Goals Met

1.2.1. Stop testing when the goals specified in Test Plan are met

1.3. The Detection of a Specific Number of Defects Has Been Accomplished.

1.3.1. Requires defect data from past releases or similar projects

1.3.2. High risk: assumes the current project is built, tested, and behaves like past projects

1.3.3. Could be disastrous if that assumption does not hold

1.4. The Rates of Defect Detection Have Fallen Below a Specified Level.

1.4.1. Uses a defect-detection-rate threshold

1.4.2. Defect detection rate threshold can be based on data from past projects
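The rate-based criterion above can be sketched as a simple check. This is a minimal illustration, not a standard algorithm: the weekly counts, the averaging window, and the threshold value are all hypothetical, with the threshold assumed to come from past-project data as the node suggests.

```python
# Sketch of a defect-detection-rate stop-test check.
# All counts and the threshold are hypothetical illustrative values.

def rate_below_threshold(defects_per_week, threshold, window=3):
    """Signal "stop testing" when the average detection rate over the
    last `window` weeks falls below the threshold."""
    if len(defects_per_week) < window:
        return False  # not enough data to judge yet
    recent = defects_per_week[-window:]
    return sum(recent) / window < threshold

weekly_defects = [42, 35, 20, 9, 4, 2]  # defects found per week
print(rate_below_threshold(weekly_defects, threshold=6))  # True: last 3 weeks average 5
```

A falling curve like this is only meaningful if test effort stayed constant; a rate can also drop because people stopped testing.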

1.5. Fault Seeding Ratios Are Favorable.

1.5.1. Intentionally inserting a known set of defects into the program to support the stop-test decision

2. Documentation - Test Log

2.1. Have A Unique Test Log Identifier

2.2. Description

2.2.1. Items being tested

2.2.2. Version/Revision number

2.2.3. Testing environment

2.2.4. Hardware and operating system details

2.3. Activity and Event Entries

2.3.1. Execution description

2.3.2. Procedure results

2.3.3. Environmental information

2.3.4. Anomalous events

2.3.5. Incident report identifiers
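The test-log fields listed above can be mirrored in a small record type. This is an illustrative sketch only; the field names are mine, not from IEEE 829 or any library, and the sample values are invented.

```python
# Sketch of a test-log record mirroring the outline's fields.
# Field names and sample values are illustrative, not from any standard.
from dataclasses import dataclass, field

@dataclass
class TestLogEntry:
    log_id: str                  # unique test log identifier
    item_under_test: str         # item being tested
    version: str                 # version/revision number
    environment: str             # testing environment: hardware and OS details
    execution_description: str   # what was run
    procedure_results: str       # outcome of the procedures
    anomalous_events: list = field(default_factory=list)
    incident_report_ids: list = field(default_factory=list)

entry = TestLogEntry(
    log_id="TL-2024-001",
    item_under_test="login-service",
    version="2.3.1",
    environment="Ubuntu 22.04, x86_64",
    execution_description="Ran smoke suite S1",
    procedure_results="14/15 passed",
    anomalous_events=["timeout on TC-07"],
    incident_report_ids=["IR-102"],
)
print(entry.log_id)
```

Keeping incident-report identifiers on the log entry is what links section 2 to the incident reports in section 3.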

3. Documentation - Test Incident Report

3.1. Have A Unique Test Incident Report Identifier

3.2. Summary

3.2.1. Test Item Involved

3.2.2. Test Procedures

3.2.3. Test Case

3.2.4. Associated Test Log

3.3. Incident Description

3.3.1. Information as detailed as possible to help the developer repair the code

3.4. Impact

3.4.1. On testing effort

3.4.2. On test plan

3.4.3. On test procedures

3.4.4. On test case

3.4.5. Include severity rating

4. Documentation - Test Summary Report

4.1. Have A Unique Test Summary Report Identifier

4.2. Summary

4.3. Variances

4.3.1. Descriptions of any variances of the test items from their original design

4.4. Comprehensiveness Assessment

4.5. Summary of Results

4.5.1. All resolved incidents and their solutions should be described

4.5.2. Unresolved incidents should be recorded.

4.6. Evaluation

4.6.1. Does the tested object pass or fail?

4.6.2. If it failed, what was the level of severity of the failure?

4.7. Summary of Activities

4.7.1. Resource Consumption

4.7.2. Actual Task Duration

4.7.3. Hardware and software tool usage

4.8. Approvals

4.8.1. Name

4.8.2. Signature

4.8.3. Date

5. Testing Without Documentation

5.1. Use competing products as a reference when testing a commercial product

5.2. Consider asking the sales and marketing departments what the product should do

5.3. Ask customer support for information on user experience with, and feedback on, the product

5.4. Unless the product is highly unusual, inductively work out what constitutes reasonable expectations and correct behavior in most cases

5.5. Treat any suspect behavior as a bug

6. Layoffs and Liquidation

6.1. Testers bear a disproportionate burden in layoffs, especially in economic downturns.

6.2. Worrisome Warning Signs

6.2.1. Being asked to participate in an employee ranking exercise

6.2.2. Decline in company's revenue

6.2.3. Sudden and unexplainable invasion of accountants or consultants

6.2.4. HR staff working long hours when no typical HR events are underway

6.2.5. Hearing rumors of a layoff list

6.2.6. Signs that the independent test team is being reorganized into smaller units

7. What to Report, To Whom, and How?

7.1. Information for testing team

7.1.1. workload carried out thus far and what is remaining

7.1.2. service quality level

7.1.3. coverage of requirements, of specifications, and of test conditions

7.1.4. number of identified, fixed, or to-be-retested defects and their impact in regression testing

7.1.5. components with more defects than others

7.1.6. the number of executed test cases

7.2. Information for development team

7.2.1. identified defects, their impact, and the modules affected

7.2.2. test delivery dates

7.2.3. quality level of delivered software per version

7.3. Information for testing management

7.3.1. software delivery dates

7.3.2. functionalities, specifications, or requirement changes compared to initial requirements

7.3.3. number of defects detected per period of time

7.3.4. number of duplicated or rejected defects

7.3.5. effort spent compared to the effort planned

7.3.6. average test design, test implementation, and test execution effort

7.4. Information for higher level hierarchy

7.4.1. identified quality level

7.4.2. use of the planned effort and comparison between planned effort and actual effort

7.4.3. planned delivery dates

7.4.4. improvement of the application maturity

7.4.5. evolution of the costs and efforts (planned, carried out, to be done)

7.4.6. evaluation of the effectiveness and maturity of the processes

7.4.7. process improvement actions that need to be considered

7.5. Information for customer, user, or marketing representative

7.5.1. system testing dates

7.5.2. provisional date for the beginning of acceptance testing

7.5.3. planned delivery date

7.5.4. requirements or functionalities postponed due to limitations

8. Presenting the Results

8.1. Presenting bad results

8.1.1. maintain a sense of perspective about how a bug will actually affect a customer or user

8.2. Use dashboard

8.2.1. using numbers and metrics

8.2.2. manage and fine-tune the dashboard
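The "numbers and metrics" a dashboard shows can be derived from a handful of raw counts. A minimal sketch, assuming hypothetical counts and my own metric names; real dashboards would add trends, coverage, and per-component breakdowns:

```python
# Sketch of computing a few dashboard metrics from raw counts.
# All inputs and metric names are hypothetical illustrative choices.

def dashboard_metrics(executed, planned, passed, defects_open, defects_total):
    """Return percentage metrics a test dashboard might display."""
    return {
        "execution_progress_pct": round(100 * executed / planned, 1),
        "pass_rate_pct": round(100 * passed / executed, 1),
        "defect_fix_rate_pct": round(100 * (defects_total - defects_open) / defects_total, 1),
    }

print(dashboard_metrics(executed=180, planned=240, passed=171,
                        defects_open=9, defects_total=60))
# {'execution_progress_pct': 75.0, 'pass_rate_pct': 95.0, 'defect_fix_rate_pct': 85.0}
```

Fine-tuning the dashboard then means choosing which of these numbers each audience in section 7 actually needs.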

8.3. Reports and target audience