ISTQB Mind Map

1. Common Concepts

1.1. Software that does not work as expected can lead to many problems, including loss of money, time or business reputation, and even injury or death

1.2. Causes of defects: human mistakes, time pressure, complex code, complexity of infrastructure, changing technologies, system interactions, or environmental conditions (radiation, magnetism, etc.)

1.3. Role of testing: reduce the risk of problems, contribute to the quality of the software system, and help meet contractual or legal requirements and industry-specific standards

1.4. Testing and quality: testing measures the quality of software. Lessons learned from previous projects improve quality. Testing is part of quality assurance (alongside development standards, training and defect analysis)

2. How much testing is enough

2.1. Take into account risk (technical, safety, business) and project constraints (time, budget). Testing should provide sufficient information for stakeholders to make informed decisions.

3. SW Development Models

3.1. V-model

3.1.1. Component (Unit) testing

3.1.2. Integration Testing

3.1.3. System Testing

3.1.4. Acceptance Testing

3.1.5. Software work products, such as business scenarios or use cases, requirements and specifications, are often the basis of testing and are used in one or more test levels.

3.2. Iterative-incremental Development

3.2.1. New increment

3.2.2. Regression testing, increasingly important on all iterations

3.2.3. Verification and validation can be carried out on each increment

3.3. Whatever the model, testing should be involved as early as possible, and every development activity should have a corresponding testing activity.

4. Test Levels

4.1. Component testing

4.1.1. Test Basis: Component requirement, Detailed design, Code.

4.1.2. Test Objects: Components, Programs, Data conversion, DB modules

4.1.3. May follow a test-first approach (test-driven development), where tests are written before the code; see the sketch below.
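
A minimal test-first sketch, assuming pytest as the unit test framework; the leap_year function and its rules are hypothetical, purely for illustration. In TDD the three tests are written (and fail) before the function body exists.

```
def leap_year(year: int) -> bool:
    """Production code, written only after the tests below were failing."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def test_divisible_by_4_is_leap():
    assert leap_year(2024) is True

def test_century_is_not_leap():
    assert leap_year(1900) is False

def test_divisible_by_400_is_leap():
    assert leap_year(2000) is True
```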

4.2. Integration Testing

4.2.1. Test Basis: SW and System design, Architecture, Workflows, Use cases

4.2.2. Test Objects: Subsystems, DB implementation, Infrastructure, Interfaces, System and DB configuration

4.2.3. There may be multiple levels of integration testing

4.2.3.1. Component integration testing, after component testing

4.2.3.2. System integration testing, testing the interactions between different systems, or between hardware and software.

4.2.3.2.1. Normally the developing organisation controls only one side of the interface; the other sides can be a risk.

4.2.3.2.2. There may be a chain of systems, in which case cross-platform issues can be significant.

4.2.3.2.3. May include both functional testing and testing of non-functional characteristics

4.2.3.2.4. At each stage of integration, testers concentrate on the integration itself rather than on the individual modules.

4.3. System Testing

4.3.1. Test basis: System and SW requirement specification, Use cases, Functional specification, Risk analysis reports, etc.

4.3.1.1. These may include high-level descriptions or models of system behaviour, interactions with the operating system, and system resources

4.3.2. Test Objects: System, Manuals, System and data configuration

4.3.3. Test environment should correspond to the final target to minimise the risk of environment-specific failures not found in testing.

4.3.4. It may involve both black-box and white-box techniques, and is usually performed by an independent test team.

4.4. Acceptance Testing

4.4.1. Test basis: System and SW requirement specification, Use cases, Risk analysis reports, Business processes

4.4.2. Test Objects: Business processes on fully integrated system, Operational and maintenance processes, User procedures, Forms, Reports, Configuration Data.

4.4.3. It is usually performed by customers or users of the system; other stakeholders may be involved as well. Its goal is to establish confidence in the system, not to find defects.

4.4.3.1. It is not necessarily the final level of testing; for a large-scale system, a system integration test may follow the acceptance test of an individual system.

4.4.3.2. It can occur at various times in the life cycle.

4.4.4. Acceptance Testing Forms

4.4.4.1. User Acceptance Testing, by business users

4.4.4.1.1. Fitness for use of the system by business users

4.4.4.2. Operational (acceptance) testing, by system administrator

4.4.4.2.1. Backup, restore

4.4.4.2.2. Disaster recovery

4.4.4.2.3. User Management

4.4.4.2.4. Maintenance tasks

4.4.4.2.5. Data load

4.4.4.2.6. Periodic checks of security vulnerabilities

4.4.4.3. Contract and Regulation acceptance testing

4.4.4.4. Alpha and beta testing

5. Test Types

5.1. Functional Testing (Black-box testing)

5.1.1. Based on specified functions (specification-based) or on undocumented functionality

5.1.2. Security testing, e.g. detecting threats such as viruses from malicious outsiders

5.1.3. Interoperability testing, evaluating how the software interacts with other components or systems

5.2. Non-functional Testing (maybe at any level)

5.2.1. Performance Testing (measure response times), Load Testing, Stress Testing, Usability Testing, Maintainability Test, Reliability Test, Portability Test (Compatibility)

5.2.2. It is to test characteristics that can be quantified on a varying scale.

5.2.3. In most cases, black-box techniques are used

5.3. Structural Testing (White-box)

5.3.1. It can be performed at all test levels, but especially in component testing and component integration testing

5.4. Re-testing and Regression testing

5.4.1. Re-testing (confirmation testing) re-runs failed test cases to confirm that a defect has been fixed

5.4.2. Regression testing is done to discover defects introduced or uncovered as a result of changes, whether to the software or to its environment.

5.4.3. The extent of regression testing is based on the risk of not finding defects in software that was working previously

5.4.4. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation; see the sketch below.
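
A minimal sketch of an automated regression check using pytest; the discount function and its expected values are hypothetical. Parametrising previously passing cases keeps the suite cheap to re-run after every change.

```
import pytest

def discount(order_total: float) -> float:
    """Hypothetical function under regression test: 10% off orders of 100 or more."""
    return 0.10 * order_total if order_total >= 100 else 0.0

# Each tuple is a case that passed in an earlier release; re-running the whole
# set after a change helps reveal defects introduced or uncovered by it.
@pytest.mark.parametrize("total, expected", [
    (50.0, 0.0),
    (100.0, 10.0),
    (250.0, 25.0),
])
def test_discount_regression(total, expected):
    assert discount(total) == pytest.approx(expected)
```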

6. Maintenance Testing

6.1. Triggers include planned enhancement changes, corrective and emergency changes, and changes of environment (OS or DB upgrades, planned upgrades of commercial-off-the-shelf software)

6.2. Test scope: the change itself plus regression testing of parts of the system that were not changed; how much is needed depends on:

6.2.1. Risk of change

6.2.2. Size of existing system

6.2.3. Size of the change

6.2.4. Maintenance testing is hard if specifications are out of date or missing, or if testers with domain knowledge are not available.

7. Static Testing

7.1. Introduction

7.1.1. It finds causes of failures (defects) rather than the failures themselves

7.1.2. It can find omissions, e.g. in requirements or code, which are hard to detect with dynamic testing

7.1.3. Typical defects found: deviations from standards, requirement defects, design defects, insufficient maintainability, incorrect interface specifications

7.1.4. It is good practice to use checklists in review meetings

7.2. Types of reviews

7.2.1. Informal Review

7.2.1.1. E.g. pair programming, or a technical lead reviewing designs and code

7.2.1.2. Vary in usefulness depending on the reviewers

7.2.2. Walkthrough

7.2.2.1. Peer group participation

7.2.2.2. Meeting led by the author

7.2.3. Technical Reviews

7.2.3.1. Includes peers and technical experts, optional management participation

7.2.3.2. Led by a trained moderator (Not Author)

7.2.4. Inspection

7.2.4.1. Main purpose is to find defects

7.3. Static Analysis

7.3.1. Control Flow

7.3.2. Data Flow

7.3.3. It is typically done by developers. It can find many types of defects, such as dead code or referencing a variable with an undefined value; note that compilers also perform a form of static analysis. See the sketch below.
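
A minimal, hypothetical snippet showing the kinds of defects a static analysis tool (or a strict compiler) typically reports without executing the code: an unreachable statement and use of a possibly undefined variable.

```
def shipping_cost(weight):
    if weight < 0:
        return 0.0
        raise ValueError("negative weight")   # dead code: unreachable after return

    if weight > 30:
        rate = 12.0
    # For weights between 0 and 30, 'rate' is never assigned, so the next line
    # references a variable with a possibly undefined value.
    return weight * rate
```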

8. Test Design Techniques

8.1. Test Development Process

8.1.1. Test Design Specification, Test Case Specification, Test Procedure Specification

8.1.1.1. The test design specification defines the test conditions; the test case specification defines the test cases, each with input values, preconditions, expected results and postconditions

8.1.1.2. The test procedure specification specifies the sequence of actions for executing a test; see the sketch below.
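
A minimal sketch of what a test case specification and a test procedure specification might record for one case; the field names and the login scenario are illustrative only, not a prescribed template.

```
# Test case specification (illustrative fields)
test_case = {
    "id": "TC-LOGIN-001",
    "test_condition": "Login with valid credentials",
    "precondition": "User account 'alice' exists and is active",
    "input": {"username": "alice", "password": "correct-horse"},
    "expected_result": "User is redirected to the dashboard",
    "postcondition": "An audit-log entry for the login exists",
}

# Test procedure specification: the sequence of actions for execution
test_procedure = [
    "Open the login page",
    "Enter the username and password from 'input'",
    "Click 'Sign in'",
    "Check the expected result and the postcondition",
]
```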

8.1.2. Test condition, test case, test procedure

8.1.2.1. Test condition: an item or event that could be verified by one or more test cases, e.g. a function, transaction, structural element, etc.

8.1.3. The quality of test cases is judged by, among other things, clear traceability to requirements and the presence of expected results

8.2. Categories

8.2.1. Black-box

8.2.1.1. Equivalence partitioning

8.2.1.1.1. Partitions include valid and invalid values, and can be identified for inputs, outputs, internal values, time-related values, interface parameters, etc.; see the sketch below.
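
A minimal sketch of equivalence partitioning with pytest; the adult_ticket_required rule (adult fare for ages 18 to 64) is hypothetical. One representative value is chosen from each partition.

```
import pytest

def adult_ticket_required(age: int) -> bool:
    """Hypothetical rule: ages 18-64 pay the adult fare."""
    return 18 <= age <= 64

# One representative value per partition: child (0-17), adult (18-64), senior (65+).
@pytest.mark.parametrize("age, expected", [
    (9, False),    # child partition
    (40, True),    # adult partition
    (70, False),   # senior partition
])
def test_adult_fare_partitions(age, expected):
    assert adult_ticket_required(age) == expected
```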

8.2.1.2. Boundary value analysis
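
A minimal sketch of boundary value analysis for the same hypothetical adult-fare rule as above: values on and immediately beside each boundary of the valid partition (18 to 64) are tested.

```
import pytest

def adult_ticket_required(age: int) -> bool:
    """Hypothetical rule, as in the previous sketch."""
    return 18 <= age <= 64

@pytest.mark.parametrize("age, expected", [
    (17, False),   # just below the lower boundary
    (18, True),    # on the lower boundary
    (64, True),    # on the upper boundary
    (65, False),   # just above the upper boundary
])
def test_adult_fare_boundaries(age, expected):
    assert adult_ticket_required(age) == expected
```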

8.2.1.3. Decision tables

8.2.1.3.1. It is a good way to capture requirements that contain logical conditions; each column of the table becomes a test case (see the sketch below)
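
A minimal sketch of decision table testing with pytest; the loan_approved rule and its two conditions are hypothetical. Each row corresponds to one column of the decision table and becomes one test case.

```
import pytest

def loan_approved(has_income: bool, good_credit: bool) -> bool:
    """Hypothetical business rule under test."""
    return has_income and good_credit

# Decision table: every combination of the two conditions plus the expected action.
DECISION_TABLE = [
    # has_income, good_credit, approved
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
]

@pytest.mark.parametrize("has_income, good_credit, approved", DECISION_TABLE)
def test_loan_decision_table(has_income, good_credit, approved):
    assert loan_approved(has_income, good_credit) == approved
```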

8.2.1.4. State transition Testing

8.2.1.4.1. States, transitions, and the inputs or events that trigger transitions

8.2.1.4.2. Tests can be designed to cover a typical sequence of states, to cover every state, or every transition, or specific sequences of transitions, or invalid transitions
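
A minimal sketch of state transition testing; the order workflow (created, paid, shipped, delivered) and its events are hypothetical. The first test covers every valid transition once; the second exercises an invalid transition.

```
import pytest

# Hypothetical workflow: (current state, event) -> next state
TRANSITIONS = {
    ("created", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("shipped", "deliver"): "delivered",
}

def next_state(state: str, event: str) -> str:
    """Return the next state, or raise for an invalid transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: '{event}' in state '{state}'")

@pytest.mark.parametrize("state, event, expected",
                         [(s, e, t) for (s, e), t in TRANSITIONS.items()])
def test_every_valid_transition(state, event, expected):
    assert next_state(state, event) == expected

def test_invalid_transition_is_rejected():
    with pytest.raises(ValueError):
        next_state("created", "ship")
```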

8.2.1.5. Use case

8.2.1.5.1. Use cases can be defined at an abstract level or at system level, and are very useful for designing acceptance tests.

8.2.2. White-box

8.2.2.1. Statement testing

8.2.2.1.1. Typically done at component level; the aim is to increase statement coverage

8.2.2.2. Decision coverage

8.2.2.2.1. Related to branch testing. 100% decision coverage guarantees 100% statement coverage, but not vice versa; see the sketch below.
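
A minimal sketch (hypothetical apply_bonus function) showing why statement coverage is weaker than decision coverage: the first test alone already executes every statement, but only the second test exercises the False outcome of the decision.

```
def apply_bonus(salary: float, is_manager: bool) -> float:
    bonus = 0.0
    if is_manager:
        bonus = 0.25 * salary    # the only statement inside the decision
    return salary + bonus

def test_manager_gets_bonus():
    # Executes every statement: 100% statement coverage on its own,
    # but only the True outcome of the decision (50% decision coverage).
    assert apply_bonus(1000.0, True) == 1250.0

def test_non_manager_gets_no_bonus():
    # Adds the False outcome, bringing decision coverage to 100%.
    assert apply_bonus(1000.0, False) == 1000.0
```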

8.2.2.3. Other structural coverage

8.2.2.3.1. Condition coverage and multiple condition coverage are stronger criteria beyond decision coverage

8.2.3. Experience-based Techniques

8.2.3.1. They augment systematic techniques, and are especially useful when applied after more formal approaches.

8.2.3.2. Examples are error guessing and exploratory testing

9. Test Management

9.1. Typical testers: component and integration testing by developers; acceptance testing by business experts and users; operational acceptance testing by operators or system administrators.

9.2. Test plan and estimation

9.2.1. Test planning is a continuous activity performed throughout the life cycle; feedback from test activities may lead to adjustments of the plan.

9.2.2. The test plan includes scope, risks, objectives, the test levels with their entry and exit criteria, resources, workload and schedule.

9.2.3. Entry criteria

9.2.3.1. Test environment availability and readiness

9.2.3.2. Test tool readiness in the test environment

9.2.3.3. Testable code availability

9.2.3.4. Test data availability

9.2.4. Exit Criteria

9.2.4.1. Thoroughness measures, such as coverage of code, functionality or risk

9.2.4.2. Estimates of defect density or reliability measures

9.2.4.3. Cost

9.2.4.4. Residual risks, such as defects not fixed or lack of test coverage in certain areas

9.2.4.5. Schedules such as those based on time to market

9.2.5. Estimation of test effort

9.2.5.1. Metrics-based approach: based on metrics of former or similar projects, or on typical values (see the sketch after this list)

9.2.5.2. Expert-based approach, by the owner of tasks or by experts

9.2.5.3. Test effort depends on:

9.2.5.3.1. Characteristics of the product: quality of the specification and other documents, size and complexity of the product, requirements for reliability, security and documentation

9.2.5.3.2. Characteristics of the development process: stability of the organisation, tools used, test process, skills of the people, time pressure

9.2.5.3.3. The outcome of testing: the number of defects found and the amount of rework required
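
A minimal sketch of a metrics-based estimate, using made-up numbers from a hypothetical previous project of similar size and scope.

```
# Hypothetical figures from the previous, comparable release
previous_cases = 400          # test cases executed
previous_effort_hours = 600   # total test effort spent on them

new_cases = 520               # expected number of cases this release

hours_per_case = previous_effort_hours / previous_cases    # 1.5 hours per case
estimated_effort = new_cases * hours_per_case              # 780 hours
print(f"Estimated test effort: {estimated_effort:.0f} hours")
```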

9.2.6. Test approach

9.2.6.1. Analytical approaches, such as risk-based testing

9.2.6.2. Model-based approaches, such as reliability growth models or operational profiles

9.2.6.3. Methodical approaches, such as error guessing, fault attacks, experienced-based, checklist-based, quality characteristic-based

9.2.6.4. Process or standard approaches

9.2.6.5. Dynamic and heuristic approaches, such as exploratory testing

9.2.6.6. Consultative approaches

9.2.6.7. Regression-averse approaches

9.3. Progress Monitor and Control

9.3.1. Test Progress monitor

9.3.1.1. test case preparation

9.3.1.2. test environment preparation

9.3.1.3. Case execution

9.3.1.4. Defect info (density, found/fixed, failure rate, re-test result)

9.3.1.5. Coverage of requirements, risks or code

9.3.1.6. Dates of test milestones

9.3.1.7. Testing costs (e.g. the cost of finding the next defect or of running the next test)

9.3.2. Test control

9.3.2.1. Make decisions based on information from test monitoring

9.3.2.2. Re-prioritising tests

9.3.2.3. Change schedule due to availability or unavailability of test environment.

9.4. Configuration management

9.4.1. All items of testware are identified, version controlled and tracked for changes, so that traceability can be maintained throughout the test process

9.4.2. All documents and software items are referenced unambiguously in the test documentation.

9.5. Risk management

9.5.1. Project risks

9.5.1.1. Organisational factors

9.5.1.2. Technical issues

9.5.1.3. Supplier issues

9.5.2. Product risks: used to decide which test techniques to apply and the extent of testing, and to find critical defects at an early stage; testing also identifies new risks, helps determine which risks to reduce, and lowers uncertainty about risks

9.5.2.1. Failure-prone software delivered

9.5.2.2. Potential harm to individual or company

9.5.2.3. Poor software characteristics, data integrity and quality

9.5.2.4. Software not perform its intended functions

9.6. Incident management: discrepancies between actual and expected results are logged as incidents and tracked, typically with a defect management tool.

10. Tool support for testing

10.1. Types of test tools

10.1.1. Management

10.1.1.1. Test Management Tools, e.g. Quality Center (QC)

10.1.1.2. Requirements Management Tools, which can identify requirements that are missing test coverage

10.1.1.3. Incident Management tool

10.1.1.4. Configuration Management Tools, like version control software

10.1.2. Static Testing

10.1.2.1. Provide a cost-effective way of finding defects at an early stage of the life cycle.

10.1.2.2. Review tools, online review for geographically dispersed teams

10.1.2.3. Static Analysis Tools, used by developers; enforce coding standards and analyse structures and dependencies

10.1.2.4. Modelling Tools, used by developers

10.1.3. Test Specification

10.1.3.1. Test Design Tools

10.1.3.2. Test Data Preparation Tools

10.1.4. Test Execution and Logging

10.1.4.1. Test Execution Tools, automation

10.1.4.2. Test Comparators

10.1.4.3. Security Testing Tools

10.1.4.4. Unit Test Framework Tools (D = more likely to be used by developers)

10.1.4.5. Coverage Measurement Tools (D)

10.1.5. Performance and monitoring

10.1.5.1. Dynamic Analysis Tools (D)

10.1.5.1.1. Time dependencies

10.1.5.1.2. Memory leaks

10.1.5.2. Performance testing / load testing / stress testing tools

10.1.5.3. Monitoring Tools

10.1.6. Specific Testing Needs

10.1.6.1. Data Quality Assessment