Software Testing Version 5.0

ISTQB


1. Free Quiz

2. Fundamentals of Testing

2.1. Testing

2.1.1. Planning - Analyzing

2.1.2. Designing - Implementing

2.1.3. Executing - Reporting

2.1.4. Verification - Validation

2.1.5. Ensure Quality - Prevent defects

2.1.6. Types

2.1.6.1. Dynamic

2.1.6.2. Static

2.2. Why is Testing Necessary?

2.2.1. Chain of effects

2.2.1.1. ERROR (mistake) -> made by a human

2.2.1.2. DEFECT (bug, fault) -> introduced into the work product / code

2.2.1.3. FAILURE -> observed during execution

2.2.2. Measures the quality of the software

2.2.3. Gives confidence in the quality

2.2.4. Reduces the overall level of risk

2.2.5. How much testing? Depends on risk, safety & project constraints

2.2.6. Quality management

2.2.6.1. Quality assurance

2.2.6.2. Quality Control

2.2.6.2.1. Static

2.2.6.2.2. Dynamic

2.2.6.3. Quality

2.2.6.3.1. Functional Suitability - Performance Efficiency

2.2.6.3.2. Compatibility - Usability

2.2.6.3.3. Reliability - Security

2.2.6.3.4. Maintainability - Portability

2.3. Testing Objectives

2.3.1. Finding Defects

2.3.2. Providing information for decision-making

2.3.3. Preventing defects

2.3.4. Gaining confidence about the level of quality

2.4. Seven Testing Principles

2.4.1. Testing shows presence of defects

2.4.2. Exhaustive testing is impossible

2.4.3. Early testing

2.4.4. Defect clustering

2.4.5. Pesticide paradox

2.4.6. Testing is context dependent

2.4.7. Absence-of-error fallacy

2.5. Fundamental Test Process

2.5.1. Planning

2.5.1.1. Objectives

2.5.1.2. Approach

2.5.2. Monitoring & Control

2.5.3. Analysis

2.5.3.1. Identify Test Conditions

2.5.3.2. What to test

2.5.4. Design

2.5.4.1. How to test

2.5.4.2. Priority

2.5.4.3. Environment

2.5.4.4. Preconditions

2.5.5. Implementation

2.5.5.1. Automation

2.5.5.2. Test Suites

2.5.5.3. Schedules

2.5.6. Execution

2.5.6.1. Recording

2.5.6.2. Logs

2.5.6.3. Reporting

2.5.7. Completion

2.5.7.1. Collecting data

2.5.7.2. Experiences

2.5.7.3. Milestones

2.5.7.4. Closing

2.5.8. Work Products

2.5.9. Traceability

2.6. The Psychology of Testing

2.6.1. Mindset of Developer & Tester

2.6.2. Communication in a constructive manner

2.6.3. Test Independence

2.7. Code of Ethics

2.7.1. A code of ethics is necessary, among other reasons, to ensure that information accessed by testers is not put to inappropriate use

2.7.1.1. Public

2.7.1.2. Client and Employer

2.7.1.3. Product

2.7.1.4. Judgement

2.7.1.5. Management

2.7.1.6. Profession

2.7.1.7. Colleagues

2.7.1.8. Self

3. Testing Throughout the Software Life Cycle

3.1. Software Development Models

3.1.1. Sequential

3.1.1.1. Waterfall

3.1.1.2. V-model

3.1.2. Iterative-Incremental

3.1.2.1. Agile

3.1.2.2. Scrum

3.1.2.3. Rational Unified Process

3.1.2.4. Kanban

3.2. Test Levels

3.2.1. Component Testing

3.2.2. Integration Testing

3.2.2.1. Component (Developers)

3.2.2.2. System (Testers)

3.2.3. System Testing

3.2.4. Acceptance Testing

3.3. Test Types

3.3.1. Black Box

3.3.1.1. Functional Testing

3.3.1.2. Non-Functional Testing (Software Characteristics)

3.3.1.2.1. Tests how well the system works

3.3.2. White Box

3.3.2.1. Structural Testing

3.3.3. Testing Related to Change

3.3.3.1. Re-Testing

3.3.3.2. Regression

3.4. Maintenance Testing

3.4.1. Change to a deployed software system or its environment

3.4.2. Triggered by

3.4.2.1. Modification

3.4.2.2. Migration

3.4.2.3. Retirement

3.4.3. Extensive regression testing required

3.4.4. Impact Analysis for Maintenance

4. Static Techniques

4.1. Review Types

4.1.1. ad hoc

4.1.2. Checklist-based

4.1.3. Formal

4.1.3.1. Inspection

4.1.3.2. Technical

4.1.3.3. Walkthrough

4.1.4. Informal

4.1.5. Perspective-based

4.1.6. Role-based

4.1.7. Scenario-based

4.2. Basics

4.2.1. Dynamic Testing

4.2.2. Static Testing

4.2.3. Work Product

4.3. Review Process

4.3.1. Work Product

4.3.1.1. Planning

4.3.1.2. Kick-off

4.3.1.3. Individual Preparation

4.3.1.4. Review Meeting

4.3.1.5. Fixing and Reporting

4.3.1.6. Rework

4.3.1.7. Follow-Up

4.3.2. Roles and Responsibilities

4.3.2.1. Author

4.3.2.2. Management

4.3.2.3. Facilitator

4.3.2.4. Review leader

4.3.2.5. Reviewers

4.3.2.6. Scribe

5. Test Design Techniques

5.1. Purpose: Identification

5.1.1. Test Conditions

5.1.2. Test Data

5.1.3. Test Cases

5.2. Choosing Test Techniques

5.2.1. Risk & Objectives

5.2.2. Type of System & Dev Cycle

5.2.3. Regulatory Standards

5.2.4. Time & Budget

5.2.5. Knowledge & Experience

5.3. Test Levels

5.3.1. Use Cases - Acceptance Criteria

5.3.2. Decision Table - System Test level

5.3.3. Interface Coverage - Integration test level

5.3.4. Statement Coverage - Component test level

5.4. White-box Technique

5.4.1. Statement Testing & Coverage (weakest)

5.4.2. Decision Testing & Coverage (stronger)

5.4.3. Other Structure-based Techniques

5.4.3.1. Condition Testing

5.4.3.2. Multiple Condition testing

5.4.3.3. All Path Testing (strongest)
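
The difference between these coverage levels is easiest to see on a small piece of code. Below is a minimal sketch in Python, assuming a hypothetical shipping_fee rule that is not part of the syllabus:

    def shipping_fee(order_total: int) -> int:
        # Hypothetical rule: free shipping for orders of 50 or more.
        fee = 5
        if order_total >= 50:      # one decision with two outcomes
            fee = 0
        return fee

    # Statement coverage: a single test taking the True branch executes every
    # statement, so statement coverage is already 100% ...
    assert shipping_fee(60) == 0

    # ... but decision coverage also requires the False outcome, so a second
    # test is needed; condition, multiple condition and path testing demand
    # progressively more combinations on larger decisions.
    assert shipping_fee(40) == 5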

5.5. Experience-based Techniques

5.5.1. Exploratory Testing

5.5.1.1. Mistakes

5.5.1.2. Defects

5.5.1.3. Failures

5.5.2. Error Guessing

5.5.3. Checklist-based Testing

5.6. Black-box Techniques

5.6.1. Equivalence Partitioning

5.6.1.1. Valid values

5.6.1.2. Invalid Values
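
A minimal sketch of equivalence partitioning, assuming a hypothetical is_valid_age rule that accepts ages 18 to 65; one representative value per partition is considered sufficient:

    def is_valid_age(age: int) -> bool:
        # Hypothetical validator: ages 18 to 65 inclusive are accepted.
        return 18 <= age <= 65

    # One test case per partition.
    assert is_valid_age(30) is True     # valid partition: 18..65
    assert is_valid_age(10) is False    # invalid partition: below 18
    assert is_valid_age(70) is False    # invalid partition: above 65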

5.6.2. Boundary Value Analysis

5.6.2.1. Displaced

5.6.2.2. Omitted

5.6.2.3. Supplemented
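
Boundary value analysis tightens the same hypothetical 18-to-65 rule by testing at and next to each boundary, which is where displaced, omitted or supplemented boundaries tend to surface:

    def is_valid_age(age: int) -> bool:
        # Same hypothetical rule as in the partitioning sketch above.
        return 18 <= age <= 65

    # Test each boundary and its nearest neighbour.
    for age, expected in [(17, False), (18, True), (65, True), (66, False)]:
        assert is_valid_age(age) is expected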

5.6.3. Decision Tables

5.6.3.1. True

5.6.3.2. False
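
A decision table can be sketched directly as a lookup from condition combinations to the expected outcome. The membership/coupon table below is purely hypothetical; each rule becomes one test case:

    # Hypothetical decision table: two True/False conditions, one action.
    DISCOUNT_RULES = {
        # (is_member, has_coupon): discount percent
        (True, True): 20,
        (True, False): 10,
        (False, True): 5,
        (False, False): 0,
    }

    def discount(is_member: bool, has_coupon: bool) -> int:
        return DISCOUNT_RULES[(is_member, has_coupon)]

    # One test case per rule gives full decision-table coverage.
    assert discount(True, True) == 20
    assert discount(False, False) == 0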

5.6.4. State Transition Diagrams / Tables
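
State transition testing can likewise be sketched as a transition table; the login/lockout states and events below are assumptions made up for the example:

    # Hypothetical state transition table: (current state, event) -> next state.
    TRANSITIONS = {
        ("logged_out", "login_ok"): "logged_in",
        ("logged_out", "login_failed"): "locked",
        ("logged_in", "logout"): "logged_out",
    }

    def next_state(state: str, event: str) -> str:
        if (state, event) not in TRANSITIONS:
            raise ValueError(f"invalid transition: {state} + {event}")
        return TRANSITIONS[(state, event)]

    # Cover the valid transitions; undefined ones must be rejected.
    assert next_state("logged_out", "login_ok") == "logged_in"
    assert next_state("logged_in", "logout") == "logged_out"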

5.6.5. Use Case Testing

5.6.5.1. Preconditions

5.6.5.2. Main Scenario

5.6.5.3. Post-Conditions

5.6.5.4. Behavior
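
A use case maps naturally onto a test: the preconditions become setup, the main scenario drives the steps, and the postconditions become assertions. A minimal sketch with a hypothetical place_order function:

    def place_order(cart: list) -> str:
        # Hypothetical system under test, used only to illustrate the mapping.
        if not cart:
            raise ValueError("cart is empty")   # behaviour on the failure path
        cart.clear()
        return "confirmed"

    cart = ["book"]                 # precondition: the cart holds an item
    status = place_order(cart)      # main scenario: the user checks out
    assert status == "confirmed"    # postcondition: the order is confirmed
    assert cart == []               # postcondition: the cart has been emptied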

6. Test Management

6.1. Test Organisation

6.1.1. Independent Testing

6.1.2. Tasks of Test Manager & Tester

6.2. Test Planning & Estimation

6.2.1. Test Plan

6.2.1.1. Test Planning

6.2.1.1.1. Test Policy - Criticality - Testability

6.2.1.1.2. Resources - Constraints - Development Lifecycle

6.2.1.1.3. Method - Strategy - Objectives

6.2.1.1.4. Scope of Testing - Risk

6.2.1.2. IEEE 829 Test Plan

6.2.1.2.1. 1-Test plan identifier 2-Introduction 3-Test items 4-Features to be tested

6.2.1.2.2. 5-Features not to be tested 6-Approach 7-Item pass/fail criteria 8-Suspension and resumption requirements

6.2.1.2.3. 9-Test deliverables 10-Testing tasks 11-Environmental needs 12-Responsibilities

6.2.1.2.4. 13-Staffing & Training 14-Schedule 15-Risks & Contingencies 16-Approvals

6.2.1.3. Level Test Plan

6.2.1.3.1. Unit Test

6.2.1.3.2. Integration Test

6.2.1.3.3. System Test

6.2.1.3.4. Acceptance Test

6.2.1.3.5. Usability

6.2.2. Test Strategy & Approach

6.2.2.1. Analytical

6.2.2.2. Model-based

6.2.2.3. Methodical

6.2.2.4. Process-Compliant or Standard-Compliant

6.2.2.5. Consultative

6.2.2.6. Regression-Averse

6.2.2.7. Reactive

6.2.3. Entry & Exit Criteria

6.2.3.1. Entry - READY - when to start testing

6.2.3.2. Exit - DONE - when testing is complete

6.2.4. Test Execution Schedule

6.2.4.1. Priority - Dependencies - Confirmation

6.2.4.2. Regression - Efficiency - Risk Level

6.2.5. Factors Influencing Test Effort

6.2.5.1. Product Characteristics

6.2.5.2. Development process Characteristics

6.2.5.3. People Characteristics

6.2.5.4. Test Results

6.2.6. Estimating Techniques

6.2.6.1. Metric-based

6.2.6.2. Expert-based
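
A worked sketch of the metric-based approach; every number below is an assumption standing in for historical metrics from a comparable past project:

    test_cases = 120           # planned test cases (assumed)
    avg_hours_per_case = 0.5   # design + execution time, from past metrics (assumed)
    rework_factor = 1.3        # historical overhead for confirmation and regression (assumed)

    estimated_hours = test_cases * avg_hours_per_case * rework_factor
    print(f"Estimated test effort: {estimated_hours:.0f} hours")  # 78 hours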

6.3. Test Progress Monitoring & Control

6.3.1. Test Monitoring

6.3.2. Test Reporting & Control

6.3.3. Test Summary Report

6.4. Configuration Management

6.4.1. Establish and maintain the integrity of the products, and ensure that all items of testware are identified, version controlled, tracked for changes, and related to each other.

6.5. Risks and Testing

6.5.1. Risk: Probability / Likelihood & Impact

6.5.2. Project & Product Risks

6.5.3. Risk-based testing approach
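
A worked sketch of the risk scoring behind a risk-based approach, where risk is likelihood multiplied by impact; the features and ratings below are made up:

    # Hypothetical product risks, scored as likelihood x impact on 1..5 scales;
    # the highest-scoring items are analysed and tested first.
    product_risks = {
        "payment processing": (4, 5),
        "user login": (3, 4),
        "report layout": (2, 2),
    }

    ranked = sorted(product_risks.items(),
                    key=lambda kv: kv[1][0] * kv[1][1],
                    reverse=True)

    for feature, (likelihood, impact) in ranked:
        print(f"{feature}: risk score {likelihood * impact}")
    # payment processing (20), user login (12), report layout (4)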

6.6. Incident Management

6.6.1. Incident Management

6.6.2. Incident Logging

6.6.3. Test Incident Report

7. Tool Support for Testing

7.1. Types of Classification

7.1.1. Per License

7.1.2. Purpose

7.1.3. Degree of Intrusion

7.1.4. User Group

7.2. Test Tools Classification

7.2.1. Management of testing & testware

7.2.2. Static Testing

7.2.3. Test design & implementation

7.2.4. Test Execution & Logging

7.2.5. Performance measurement / Dynamic analysis

7.2.6. Specialized testing needs

7.2.7. Overview

7.2.7.1. Test management tools

7.2.7.2. Test specification tools

7.2.7.3. Tools for test execution

7.3. Effective Use of Tools

7.3.1. Potential Benefits & Risks

7.3.2. Special consideration for Test Execution, Static Analysis & Test Management tools

7.4. Introducing a Tool into an Organisation

7.4.1. Main Considerations

7.4.2. Start with a Pilot project

7.4.3. Success factors for deployment