1. Test design techniques
1.1. THE TEST DEVELOPMENT PROCESS
1.1.1. test case specification
1.1.2. test design specification
1.1.3. test design technique
1.1.4. test execution schedule
1.1.5. horizontal traceability
1.1.6. vertical traceability
1.1.7. traceability
1.2. CATEGORIES OF TEST DESIGN TECHNIQUES
1.2.1. experience-based test design technique
1.2.2. specification-based test design technique
1.2.3. structure-based test design technique
1.2.4. test design technique
1.3. SPECIFICATION-BASED OR BLACK-BOX TECHNIQUES
1.3.1. boundary value analysis
1.3.1.1. boundary value
1.3.1.2. equivalence partition
1.3.1.3. equivalence partitioning
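Equivalence partitioning and boundary value analysis work together: partitions group inputs the system should treat alike, and boundary values sit at the edges of those partitions. A minimal sketch, assuming a hypothetical input field that accepts values 18–65 (the range and names are invented for illustration):

```python
# Boundary value analysis for a hypothetical field accepting 18..65.
# Equivalence partitions: below range (invalid), in range (valid), above range (invalid).

def boundary_values(low, high):
    """Two-value boundary set: each edge plus its nearest neighbour outside the range."""
    return [low - 1, low, high, high + 1]

def partition_of(value, low, high):
    """Classify a value into its equivalence partition."""
    if value < low:
        return "invalid-low"
    if value > high:
        return "invalid-high"
    return "valid"

tests = boundary_values(18, 65)                    # candidate boundary test inputs
labels = [partition_of(v, 18, 65) for v in tests]  # partition each boundary value hits
```

Each boundary test input doubles as a representative of the partition it falls into, so the four values above cover all three partitions and both edges.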
1.3.2. decision table testing
1.3.2.1. decision table
1.3.3. state transition testing
1.3.3.1. state diagram
1.3.3.2. state table
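State transition testing exercises the transitions recorded in a state table, including invalid (state, event) pairs. A minimal sketch, assuming a hypothetical document workflow (states and events are invented for illustration):

```python
# State table for a hypothetical document workflow:
# (current state, event) -> next state; any pair not listed is invalid.
STATE_TABLE = {
    ("draft", "submit"): "review",
    ("review", "approve"): "published",
    ("review", "reject"): "draft",
}

def transition(state, event):
    """Apply an event; invalid (state, event) pairs raise, and tests target those too."""
    try:
        return STATE_TABLE[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event!r} in state {state!r}")
```

The state diagram shows only the valid arrows; writing the model as a table makes the empty cells, and hence the negative test cases, explicit.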
1.3.4. use case testing
1.4. STRUCTURE-BASED OR WHITE-BOX TECHNIQUES
1.4.1. branch coverage
1.4.2. code coverage
1.4.3. coverage
1.4.4. decision coverage
1.4.5. statement coverage
1.4.6. test coverage
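The difference between statement coverage and branch (decision) coverage already shows up in a function with a single `if` and no `else`. A minimal sketch (the `apply_discount` function is invented for illustration):

```python
def apply_discount(price, is_member):
    # One test with is_member=True executes every statement here
    # (100% statement coverage) but takes only the True outcome of the
    # decision (50% branch coverage); a second test with is_member=False
    # is needed to cover the untaken branch.
    if is_member:
        price = price * 0.9
    return price

# Statement coverage reached with one test case:
assert apply_discount(100, True) == 90.0
# Branch coverage additionally requires the False outcome:
assert apply_discount(100, False) == 100
```

This is why 100% branch coverage implies 100% statement coverage but not the other way round.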
1.5. EXPERIENCE-BASED TECHNIQUES
1.5.1. attack
1.5.2. exploratory testing
1.5.3. fault attack
2. Test management
2.1. TEST ORGANIZATION
2.1.1. tester
2.1.2. test leader
2.1.3. test manager
2.1.4. test management
2.2. TEST PLANNING AND ESTIMATION
2.2.1. reasons
2.2.2. relate to projects, test levels, …
2.3. TEST PROGRESS MONITORING AND CONTROL
2.3.1. defect density
2.3.2. failure rate
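Both monitoring metrics are simple ratios. A minimal sketch, assuming commonly used units (defects per thousand lines of code, failures per hour of operation; the units vary by organisation):

```python
def defect_density(defects_found, size_kloc):
    """Defects per KLOC (thousand lines of code) -- a common normalisation."""
    return defects_found / size_kloc

def failure_rate(failures, operating_hours):
    """Failures per hour of operation."""
    return failures / operating_hours

density = defect_density(30, 12.5)   # 30 defects found in a 12.5 KLOC component
rate = failure_rate(3, 100)          # 3 failures over 100 hours of operation
```

Normalising by size (or operating time) is what makes the numbers comparable across components of different sizes.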
2.4. CONFIGURATION MANAGEMENT
2.4.1. configuration control
2.4.2. configuration management
2.4.3. version control
2.5. RISK AND TESTING
2.5.1. product risk
2.5.2. project risk
2.5.3. risk-based testing
2.6. INCIDENT MANAGEMENT
2.6.1. defect detection percentage
2.6.2. defect report
2.6.3. incident logging
2.6.4. incident management
2.6.5. incident report
2.6.6. priority
2.6.7. root cause
2.6.8. severity
3. Fundamentals of testing
3.1. WHY IS TESTING NECESSARY
3.1.1. Software systems context
3.1.1.1. Error (Mistake)
3.1.1.2. Defect (bug, fault)
3.1.1.3. Failure
3.2. WHAT IS TESTING
3.2.1. testing is not debugging
3.2.2. testing does not fix defects
3.3. CODE OF ETHICS
3.4. FUNDAMENTAL TEST PROCESS
3.4.1. Test approach
3.4.2. Test execution
3.5. THE PSYCHOLOGY OF TESTING
3.5.1. test policy
3.5.2. independence of testing
3.5.3. error guessing
3.6. SEVEN TESTING PRINCIPLES
3.6.1. Exhaustive testing is impossible
3.6.2. Testing is context dependent
3.6.3. Defect clustering
3.6.4. Early testing
3.6.5. Absence-of-errors fallacy
3.6.6. Testing shows the presence of defects
3.6.7. Pesticide paradox
4. Testing throughout the software life cycle
4.1. V-model
4.1.1. performance
4.1.2. Test level
4.1.3. Integration
4.1.4. commercial off-the-shelf (COTS) software
4.2. TEST LEVELS
4.2.1. Component testing
4.2.2. Integration testing
4.2.3. System testing
4.2.4. Acceptance testing
4.3. TEST TYPES
4.3.1. Testing of function (functional testing)
4.3.2. Testing of software product characteristics (non-functional testing)
4.3.3. Testing of software structure/architecture (structural testing)
4.3.4. Testing related to changes (confirmation and regression testing)
4.4. MAINTENANCE TESTING
4.4.1. Impact analysis and regression testing
4.4.2. Triggers for maintenance testing
5. Static techniques
5.1. Dynamic
5.1.1. Behavioural
5.1.1.1. Non-functional
5.1.1.1.1. Usability
5.1.1.1.2. Performance
5.1.1.2. Functional
5.1.1.2.1. Random
5.1.1.2.2. Boundary Value Analysis
5.1.1.2.3. State Transition
5.1.1.2.4. Equivalence Partitioning
5.1.1.2.5. Cause-Effect Graphing
5.1.2. Structural
5.1.2.1. Control Flow
5.1.2.2. Data Flow
5.1.2.2.1. Symbolic Execution
5.1.2.2.2. Definition-Use
5.2. Static
5.2.1. Static Analysis
5.2.2. Walkthroughs
5.2.3. Reviews
5.2.4. Inspection
5.2.5. Desk-checking