ISTQB FL(v4)_CH3

A mind map summarizing ISTQB CTFL 4.0 Chapter 3 (Static Testing)

1. (2) Static Testing vs Dynamic Testing

1.1. Static Testing

1.1.1. finds defects directly

1.1.2. applicable to non-executable work products

1.1.3. measures quality characteristics that aren't dependent on executing code, e.g., maintainability

1.1.4. defects that are easier and/or cheaper to find

1.1.4.1. Defects in requirements

1.1.4.2. Design defects

1.1.4.3. Certain types of coding defects

1.1.4.4. Deviations from standards

1.1.4.5. Incorrect interface specifications

1.1.4.6. Specific types of security vulnerabilities

1.1.4.7. Gaps or inaccuracies in test basis coverage

1.2. Dynamic Testing

1.2.1. uncovers failures, from which the associated defects can then be identified

1.2.2. applicable to executable work products

1.2.3. measures quality characteristics that are dependent on executing code, e.g., performance efficiency
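
To make the contrast concrete, here is a minimal, hypothetical Python sketch (illustrative only, not from the syllabus; the average() function is made up): a reviewer reading the source finds the defect directly, while dynamic testing only observes the resulting failure at runtime and the defect is then traced back from it.

    # Hypothetical example: the same defect, seen statically vs dynamically.

    def average(values):
        # Defect: dividing by len(values) breaks when the list is empty.
        # Static testing (e.g., a code review) finds this defect directly,
        # just by reading the source, without executing anything.
        return sum(values) / len(values)

    if __name__ == "__main__":
        # Dynamic testing executes the code; the defect only shows up as a
        # failure when a triggering input is used, and is then traced back.
        try:
            average([])
        except ZeroDivisionError:
            print("Failure observed at runtime -> defect located in average()")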

2. (1) Static Testing Basics

2.1. can be applied for verification and validation

2.2. work product examined

2.2.1. Static testing can be applied to almost any work product, such as:

2.2.1.1. requirement specification documents

2.2.1.2. source code

2.2.1.3. test plans, test cases, product backlog items and test charters

2.2.1.4. project documentation, contracts and models

2.2.2. 3rd party executable code can't be statically tested

2.3. Value of Static Testing

2.3.1. detect defects in the earliest phases of the SDLC (early testing)

2.3.2. identify defects which can't be detected by dynamic testing, such as:

2.3.2.1. unreachable code, design patterns not implemented as desired, defects in non-executable work products (see the code sketch at the end of this section)

2.3.3. evaluate the quality of work products

2.3.4. ensure that requirements describe stakeholders' actual needs

2.3.5. less time and effort for fixing defects later

2.3.6. detect code defects more efficiently
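
As a concrete illustration of the unreachable-code point above, here is a minimal, hypothetical Python sketch (the classify() function is made up for illustration): no test input can ever execute the final line, so dynamic testing cannot reveal the problem, while a reviewer or a static analysis tool spots it by examining the code without running it.

    # Hypothetical example: a defect only static testing can detect.

    def classify(score: int) -> str:
        if score >= 50:
            return "pass"
        else:
            return "fail"
        # Unreachable code: both branches above return, so this line never
        # executes. No dynamic test can trigger a failure here; a review or
        # a static analysis tool (linter) finds it without running the code.
        return "unknown"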

3. (3) Review Process

3.1. Activities

3.1.1. Planning

3.1.1.1. define the:

3.1.1.1.1. scope of the review

3.1.1.1.2. work product to be reviewed

3.1.1.1.3. quality characteristics to be evaluated

3.1.1.1.4. areas to focus on

3.1.1.1.5. exit criteria

3.1.1.1.6. effort and the timeframes for the review

3.1.2. Review initiation

3.1.2.1. make sure that everyone and everything involved is prepared to start the review

3.1.3. Individual review

3.1.3.1. assess the quality of the work product under review

3.1.3.2. identify anomalies, recommendations, and questions

3.1.4. Communication and analysis

3.1.4.1. review meeting to analyze and discuss the anomalies

3.1.5. Fixing and reporting

3.1.5.1. A defect report is created for every defect

3.1.5.2. Accept the work product when the exit criteria are reached

3.2. Roles

3.2.1. Manager

3.2.1.1. decides what is to be reviewed and provides staff and time for the review

3.2.2. Author

3.2.2.1. creates and fixes the work product under review

3.2.3. Moderator (facilitator)

3.2.3.1. ensures the effective running of review meetings

3.2.4. Scribe (recorder)

3.2.4.1. collates anomalies and records review information

3.2.5. Reviewer

3.2.5.1. performs reviews

3.2.6. Review leader

3.2.6.1. takes overall responsibility for the review

3.3. Types

3.3.1. Informal review

3.3.1.1. doesn't follow a defined process

3.3.1.2. doesn't require a formal documented output

3.3.2. Walkthrough

3.3.2.1. led by the author and can serve many objectives

3.3.3. Technical Review

3.3.3.1. performed by technically qualified reviewers and led by a moderator

3.3.4. Inspection

3.3.4.1. the most formal type of review, following the complete generic review process

3.4. Success Factors

3.4.1. Defining clear objectives

3.4.2. Choosing the appropriate review type

3.4.3. Conducting reviews on small chunks

3.4.4. Providing feedback

3.4.5. Providing time to prepare for the review

3.4.6. Support from management

3.4.7. Facilitating meetings