Test Management - Closing

1. Documentation - Test Log

1.1. Have A Unique Test Log Identifier

1.2. Description

1.2.1. Items being tested

1.2.2. Version/Revision number

1.2.3. Testing environment

1.2.4. Hardware and operating system details

1.3. Activity and Event Entries

1.3.1. Execution description

1.3.2. Procedure results

1.3.3. Environmental information

1.3.4. Anomalous events

1.3.5. Incident report identifiers
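
A minimal sketch of how the log fields above could be captured as a single record, assuming a Python dataclass; the class name, field names, and example values are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TestLogEntry:
    """One activity/event entry in a test log (illustrative field names)."""
    log_id: str                   # unique test log identifier
    item_under_test: str          # item being tested
    version: str                  # version/revision number
    environment: str              # testing environment, hardware, and OS details
    execution_description: str    # what was executed and how
    procedure_results: str        # results of the procedure
    anomalous_events: list[str] = field(default_factory=list)
    incident_report_ids: list[str] = field(default_factory=list)
    timestamp: datetime = field(default_factory=datetime.now)

# Example entry (hypothetical values)
entry = TestLogEntry(
    log_id="TL-2024-001",
    item_under_test="Login module",
    version="1.4.2",
    environment="Windows 11, staging server",
    execution_description="Executed procedure TP-07, steps 1-12",
    procedure_results="Step 9 failed: session timeout not enforced",
    anomalous_events=["Database connection dropped during step 5"],
    incident_report_ids=["TIR-042"],
)
```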

2. Documentation - Test Summary Report

2.1. Have A Unique Test Summary Report Identifier

2.2. Summary

2.3. Variances

2.3.1. Descriptions of any variances of the test items from their original design

2.4. Comprehensiveness Assessment

2.5. Summary of Results

2.5.1. All resolved incidents and their solutions should be described

2.5.2. Unresolved incidents should be recorded.

2.6. Evaluation

2.6.1. Does the tested object pass or fail?

2.6.2. If it failed, what was the level of severity of the failure?

2.7. Summary of Activities

2.7.1. Resource Consumption

2.7.2. Actual Task Duration

2.7.3. Hardware and software tool usage

2.8. Approvals

2.8.1. Name

2.8.2. Signature

2.8.3. Date

3. Layoffs and Liquidation

3.1. Testers bear a disproportionate burden in layoffs, especially in economic downturns.

3.2. Worrisome Warning Signs

3.2.1. Being asked to participate in an employee ranking exercise

3.2.2. Decline in company's revenue

3.2.3. Sudden, unexplained influx of accountants or consultants

3.2.4. HR department working long hours even though no typical HR events are underway

3.2.5. Hearing rumors of a layoff list

3.2.6. Seeing signs that the independent test team is being reorganized into smaller units

4. Presenting the Results

4.1. Presenting bad results

4.1.1. maintain a sense of perspective about how a bug will actually affect a customer or user

4.2. Use a dashboard

4.2.1. using numbers and metrics

4.2.2. manage and fine-tune the dashboard

4.3. Reports and target audience
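
As a hedged illustration of the "numbers and metrics" point above, the sketch below computes a few figures a dashboard might show (pass rate, open defects by severity); the metric names and the assumed input shape are illustrative, not a fixed dashboard definition.

```python
def dashboard_metrics(executed, passed, defects):
    """Compute a few illustrative dashboard figures from raw counts.

    executed/passed: number of test cases executed and passed
    defects: list of dicts with 'status' and 'severity' keys (assumed shape)
    """
    pass_rate = passed / executed if executed else 0.0
    open_defects = [d for d in defects if d["status"] != "closed"]
    return {
        "pass_rate_pct": round(pass_rate * 100, 1),
        "open_defects": len(open_defects),
        "open_critical": sum(1 for d in open_defects if d["severity"] == "critical"),
    }

# Hypothetical numbers for illustration
print(dashboard_metrics(
    executed=120,
    passed=104,
    defects=[{"status": "open", "severity": "critical"},
             {"status": "closed", "severity": "minor"}],
))
# -> {'pass_rate_pct': 86.7, 'open_defects': 1, 'open_critical': 1}
```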

5. Criteria For Test Completion

5.1. All the Planned Tests Executed and Passed

5.1.1. Weakest criterion

5.1.2. Should not be relied on in isolation as a stop-test criterion

5.2. All Coverage Goals Met

5.2.1. Stop testing when the coverage goals specified in the test plan are met

5.3. The Detection of a Specific Number of Defects Has Been Accomplished.

5.3.1. Requires defect data from past releases or similar projects

5.3.2. High risk: assumes the current project is built, tested, and behaves like past projects

5.3.3. Could be disastrous if that assumption does not hold

5.4. The Rates of Defect Detection Have Fallen Below a Specified Level.

5.4.1. Uses a defect detection rate threshold

5.4.2. Defect detection rate threshold can be based on data from past projects

5.5. Fault Seeding Ratios Are Favorable.

5.5.1. Intentionally inserting a known set of defects into the program to support the stop-test decision (a minimal estimate is sketched below)
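
A minimal sketch of the classic fault-seeding estimate, plus the detection-rate threshold check from 5.4, assuming the counts are available from the test records; the function names and numbers are illustrative only.

```python
def seeding_estimate(seeded_total, seeded_found, real_found):
    """Estimate the total number of real defects from fault-seeding ratios.

    If testing found seeded_found of seeded_total planted defects, assume the
    same detection ratio applies to the real defects.
    """
    if seeded_found == 0:
        raise ValueError("no seeded defects found; estimate undefined")
    return real_found * seeded_total / seeded_found

def below_detection_threshold(defects_found_this_period, threshold):
    """Stop-test check: has the defect detection rate fallen below the threshold?"""
    return defects_found_this_period < threshold

# Hypothetical numbers: 40 of 50 seeded defects found, 32 real defects found
print(seeding_estimate(seeded_total=50, seeded_found=40, real_found=32))    # -> 40.0
print(below_detection_threshold(defects_found_this_period=2, threshold=5))  # -> True
```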

6. Documentation - Test Incident Report

6.1. Have A Unique Test Incident Report Identifier

6.2. Summary

6.2.1. Test Item Involved

6.2.2. Test Procedures

6.2.3. Test Case

6.2.4. Associated Test Log

6.3. Incident Description

6.3.1. Information that is as detailed as possible, so the developer can repair the code

6.4. Impact

6.4.1. On testing effort

6.4.2. On test plan

6.4.3. On test procedures

6.4.4. On test case

6.4.5. Include a severity rating (see the record sketch below)
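
A minimal sketch of an incident record covering the summary, description, severity, and impact items above, again assuming a Python dataclass; field names and example values are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class TestIncidentReport:
    """Illustrative shape of a test incident report; field names are assumptions."""
    report_id: str        # unique test incident report identifier
    test_item: str        # test item involved
    test_procedure: str   # test procedure being executed
    test_case: str        # test case that exposed the incident
    test_log_id: str      # associated test log
    description: str      # detail the developer needs to repair the code
    severity: str         # e.g. "critical", "major", "minor"
    impact: list[str] = field(default_factory=list)  # impact on effort, plan, procedures, cases

tir = TestIncidentReport(
    report_id="TIR-042",
    test_item="Login module",
    test_procedure="TP-07",
    test_case="TC-115",
    test_log_id="TL-2024-001",
    description="Session timeout not enforced after 30 minutes of inactivity",
    severity="major",
    impact=["Blocks remaining TP-07 steps", "Test case TC-115 must be re-run after the fix"],
)
```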

7. Testing Without Documentation

7.1. Use competitors' products as a reference when testing a commercial product

7.2. Consider asking the sales and marketing departments what the product should do

7.3. Ask customer support for information about customer experience and feedback on the product

7.4. Unless the product is highly unusual, you can inductively figure out what constitutes reasonable expectations and correct behavior in many cases

7.5. Treat any suspect behavior as potentially buggy

8. What to Report, to Whom, and How?

8.1. Information for testing team

8.1.1. work carried out thus far and work remaining

8.1.2. service quality level

8.1.3. coverage of requirements, of specifications, and of test conditions

8.1.4. number of identified, fixed, or to-be-retested defects and their impact in regression testing

8.1.5. components with more defects than others

8.1.6. the number of executed test cases

8.2. Information for development team

8.2.1. identified defects, their impact, and the affected modules

8.2.2. test delivery dates

8.2.3. quality level of delivered software per version

8.3. Information for testing management

8.3.1. software delivery dates

8.3.2. functionalities, specifications, or requirement changes compared to initial requirements

8.3.3. number of defects detected per period of time

8.3.4. number of duplicated or rejected defects

8.3.5. effort spent compared to the effort planned

8.3.6. average test design, test implementation, and test execution effort

8.4. Information for higher level hierarchy

8.4.1. identified quality level

8.4.2. use of the planned effort and comparison between planned effort and actual effort

8.4.3. planned delivery dates

8.4.4. improvement of the application maturity

8.4.5. evolution of the costs and efforts (planned, carried out, to be done)

8.4.6. evaluation of the effectiveness and maturity of the processes

8.4.7. process improvement actions that need to be considered

8.5. Information for customer, user, or marketing representative

8.5.1. system testing dates

8.5.2. provisional date for the beginning of acceptance testing

8.5.3. planned delivery date

8.5.4. requirements or functionalities postponed due to limitations
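
A hedged sketch of the "what to report to whom" idea as a simple lookup: each audience maps to a shortened selection of the items listed above; the dictionary structure and key names are assumptions for illustration.

```python
REPORT_CONTENTS = {
    "testing team": [
        "work carried out and work remaining",
        "coverage of requirements, specifications, and test conditions",
        "defects identified, fixed, or to be retested",
    ],
    "development team": [
        "identified defects, their impact, and the affected modules",
        "test delivery dates",
        "quality level of each delivered version",
    ],
    "testing management": [
        "defects detected per period of time",
        "effort spent compared to effort planned",
    ],
    "higher management": [
        "identified quality level",
        "evolution of costs and effort",
        "process improvement actions to consider",
    ],
    "customer / marketing": [
        "system and acceptance testing dates",
        "planned delivery date",
        "postponed requirements or functionalities",
    ],
}

def report_items(audience):
    """Return the report items for a given audience (illustrative lookup)."""
    return REPORT_CONTENTS.get(audience, [])

print(report_items("development team"))
```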