1. How Much Testing Is Enough?
1.1. Take account of risk (technical, safety, business) and project constraints (time, budget). Testing should provide sufficient information to stakeholders to make informed decisions.
2. Test Types
2.1. Functional Testing (Black-box testing)
2.1.1. Functions may be described in specifications or may be undocumented
2.1.2. Security testing, to detect threats (e.g. viruses) from malicious outsiders
2.1.3. Interoperability Testing
2.2. Non-functional Testing (maybe at any level)
2.2.1. Performance Testing (measuring response times), Load Testing, Stress Testing, Usability Testing, Maintainability Testing, Reliability Testing, Portability (Compatibility) Testing
2.2.2. It tests characteristics that can be quantified on a varying scale.
2.2.3. In most cases, black-box test techniques are used
2.3. Structural Testing (White-box)
2.3.1. It can be performed at all test levels, but especially in component testing and component integration testing
2.4. Re-testing and Regression testing
2.4.1. Re-testing confirms that a reported defect has been fixed
2.4.2. Regression testing is to discover any defects introduced or uncovered as a result of a change, which can be a software change or an environment change
2.4.3. The extent of regression testing is based on the risk of not finding defects in software that was working previously
2.4.4. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation
3. Maintenance Testing
3.1. It covers planned enhancement changes, corrective and emergency changes, and changes of environment (OS or DB upgrades, planned upgrades of Commercial-Off-The-Shelf software)
3.2. The scope of maintenance testing depends on the factors below; besides testing the change itself, it includes regression testing of parts of the system that have not been changed.
3.2.1. Risk of change
3.2.2. Size of existing system
3.2.3. Size of the change
3.2.4. Maintenance testing is difficult if specifications are out of date or missing, or if testers with domain knowledge are not available.
4. Static Testing
4.1. Introduction
4.1.1. It finds the causes of failures (i.e. defects) rather than the failures themselves.
4.1.2. It can find omissions in code, which are hard to find with dynamic testing
4.1.3. Typical defects found: deviations from standards, requirement defects, design defects, insufficient maintainability, incorrect interface specifications
4.1.4. It is good practice to use checklists in review meetings
4.2. Types of reviews
4.2.1. Informal Review
4.2.1.1. E.g. pair programming or a technical lead reviewing designs and code
4.2.1.2. Vary in usefulness depending on the reviewers
4.2.2. Walkthrough
4.2.2.1. Peer group participation
4.2.2.2. Meeting led by the author
4.2.3. Technical Reviews
4.2.3.1. Includes peers and technical experts, optional management participation
4.2.3.2. Led by a trained moderator (Not Author)
4.2.4. Inspection
4.2.4.1. Main purpose is to find defects
4.3. Static Analysis
4.3.1. Control Flow
4.3.2. Data Flow
4.3.3. It is typically done by developers. It can find many types of defects, such as dead code or referencing a variable with an undefined value (see the sketch below). Note that compilers are also static analysis tools.
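A minimal sketch of the two defect types named above, in illustrative Python (function names are hypothetical); a static analysis tool such as pylint, or a compiler for a compiled language, can report both without executing the code:

```python
def apply_discount(price, is_member):
    # Data-flow defect: 'rate' is assigned only on one branch, so the
    # return statement may reference a variable with an undefined value.
    if is_member:
        rate = 0.10
    return price * (1 - rate)

def log_and_return(value):
    return value
    print("logged", value)  # Control-flow defect: dead code after return
```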
5. Test Management
5.1. Typical testers: component and integration testing are usually done by developers; acceptance testing by business experts and users; operational acceptance testing by operators or system administrators (see 8.4.3.2).
5.2. Test plan and estimation
5.2.1. Test planning is a continuous activity performed in all life cycle processes and activities; feedback from test activities may lead to plan adjustments.
5.2.2. The test plan includes scope, risks, objectives, the definition of test levels and of entry and exit criteria, resources, workload, and schedule.
5.2.3. Entry criteria
5.2.3.1. Test environment availability and readiness
5.2.3.2. Test tool readiness in the test environment
5.2.3.3. Testable code availability
5.2.3.4. Test data availability
5.2.4. Exit Criteria
5.2.4.1. Thoroughness measures, such as coverage of code, functionality or risk
5.2.4.2. Estimates of defect density or reliability measures
5.2.4.3. Cost
5.2.4.4. Residual risks, such as defects not fixed or lack of test coverage in certain areas
5.2.4.5. Schedules such as those based on time to market
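As an illustration of the defect density measure in 5.2.4.2 (numbers are invented): 30 defects reported against a 15 KLOC component gives a density of 30 / 15 = 2.0 defects per KLOC, which can be compared against an agreed exit threshold.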
5.2.5. Estimation of test effort
5.2.5.1. Metrics-based approach, based on metrics from former or similar projects, or on typical values (see the worked example after 5.2.5.3)
5.2.5.2. Expert-based approach, by the owner of tasks or by experts
5.2.5.3. Test effort depends on:
5.2.5.3.1. Characteristics of the product: quality of the specification and other documentation, size and complexity of the product, requirements for reliability, security, and documentation
5.2.5.3.2. Characteristics of the development process: stability of the organisation, tools used, test process, skills of the people involved, time pressure
5.2.5.3.3. The outcome of testing: the number of defects and the amount of rework required
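As a worked illustration of the metrics-based approach in 5.2.5.1 (numbers are invented): if a similar previous project averaged 10 executed test cases per person-day and the current project plans roughly 400 test cases, the execution effort estimate is about 400 / 10 = 40 person-days, to be refined by expert judgement.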
5.2.6. Test approach
5.2.6.1. Analytical approaches, such as risk-based testing
5.2.6.2. Model-based approaches, such as reliability growth models or operational profiles
5.2.6.3. Methodical approaches, such as error guessing, fault attacks, experience-based, checklist-based, quality characteristic-based
5.2.6.4. Process or standard approaches
5.2.6.5. Dynamic and heuristic approaches, such as exploratory testing
5.2.6.6. Consultative approaches
5.2.6.7. Regression-averse approaches
5.3. Progress Monitor and Control
5.3.1. Test progress monitoring, using metrics such as:
5.3.1.1. test case preparation
5.3.1.2. test environment preparation
5.3.1.3. Case execution
5.3.1.4. Defect information (density, found/fixed counts, failure rate, re-test results)
5.3.1.5. Coverage of requirements, risks or code
5.3.1.6. Dates of test milestones
5.3.1.7. Testing costs (e.g. the cost of finding the next defect or of running the next test case)
5.3.2. Test control
5.3.2.1. Making decisions based on the information from test monitoring
5.3.2.2. Re-prioritising tests
5.3.2.3. Changing the test schedule due to availability or unavailability of the test environment
5.4. Configuration management
5.4.1. All items of testware are identified, version controlled, and tracked for changes, so that traceability can be maintained throughout the test process
5.4.2. All documents and software items are referenced unambiguously in test documentation.
5.5. Risk control
5.5.1. Project risks
5.5.1.1. Organisational factors
5.5.1.2. Technical issues
5.5.1.3. Supplier issues
5.5.2. Product risks (used to decide the test techniques and the extent of testing, and to prioritise testing so that critical defects are found early; testing also identifies new risks, helps determine which risks to reduce, and lowers uncertainty about risks)
5.5.2.1. Failure-prone software delivered
5.5.2.2. Potential harm to individual or company
5.5.2.3. Poor software characteristics, data integrity and quality
5.5.2.4. Software not perform its intended functions
5.6. Incident management: discrepancies between actual and expected outcomes are logged as incidents and tracked to resolution, typically with a defect management tool (a sketch of a typical incident record follows below).
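A minimal sketch of the information an incident record typically carries; the field names below are illustrative, not the schema of any particular tool:

```python
from dataclasses import dataclass

@dataclass
class IncidentReport:
    incident_id: str      # unique reference, e.g. "INC-042"
    summary: str          # short description of the discrepancy
    expected_result: str  # what the test expected
    actual_result: str    # what was actually observed
    severity: str         # impact on the system, e.g. "major"
    priority: str         # urgency of fixing, e.g. "high"
    status: str           # lifecycle state: new / open / fixed / re-tested / closed

report = IncidentReport(
    incident_id="INC-042",
    summary="Order total not updated after removing an item",
    expected_result="Total decreases by the removed item's price",
    actual_result="Total unchanged",
    severity="major",
    priority="high",
    status="new",
)
```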
6. Test Objects: Business processes on a fully integrated system, Operational and maintenance processes, User procedures, Forms, Reports, Configuration Data.
7. SW Development Models
7.1. V-model
7.1.1. Component (Unit) testing
7.1.2. Integration Testing
7.1.3. System Testing
7.1.4. Acceptance Testing
7.1.5. Software work products, such as business scenarios or use cases, requirements, specifications, etc., are often the basis of testing, used in one or more test levels.
7.2. Iterative-incremental Development
7.2.1. New increment
7.2.2. Regression testing, increasingly important across all iterations
7.2.3. Verification and Validation, can be carried out on each increment
7.3. Whatever the model, testing is best involved as early as possible, and every development activity should have a corresponding test activity.
8. Test Levels
8.1. Component testing
8.1.1. Test Basis: Component requirement, Detailed design, Code.
8.1.2. Test Objects: Components, Programs, Data conversion, DB modules
8.1.3. May follow a test-first approach (test-driven development): the developer tests small pieces of the component's code cyclically, in an automated and incremental way, and stops when the component is complete (see the sketch below)
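A minimal sketch of that test-first cycle, assuming a hypothetical `add_item` component: the developer writes a small automated test first, then just enough code to make it pass, and repeats until the component is complete.

```python
import unittest

# Production code, written incrementally to make the test below pass.
def add_item(cart, item, price):
    """Add an item to a shopping-cart dict and return the new total."""
    cart[item] = price
    return sum(cart.values())

class TestAddItem(unittest.TestCase):
    def test_total_updates_when_items_are_added(self):
        cart = {}
        self.assertEqual(add_item(cart, "book", 10.0), 10.0)
        self.assertEqual(add_item(cart, "pen", 2.5), 12.5)

if __name__ == "__main__":
    unittest.main()
```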
8.2. Integration Testing
8.2.1. Test Basis: SW and System design, Architecture, Workflows, Use cases
8.2.1.1. At the integration testing stage, testers focus on the integration itself rather than on the individual modules.
8.2.2. Test Objects: Subsystems, DB implementation, Infrastructure, Interfaces, System and DB configuration
8.2.2.1. May cover both functional and non-functional characteristics
8.2.3. There may be multiple levels of integration testing
8.2.3.1. Component integration testing, after component testing
8.2.3.2. System integration testing, test interaction between different systems or HW or SW.
8.2.3.2.1. Normally the developing organisation controls only one side of the interface; the other sides may be a source of risk.
8.2.3.2.2. Integration may span a series of systems, in which case cross-platform issues may be significant.
8.3. System Testing
8.3.1. Test basis: System and SW requirement specification, Use cases, Functional specification, Risk analysis reports, etc.
8.3.1.1. The test basis may also include high-level descriptions, models of system behaviour, and interactions with the operating system and system resources
8.3.2. Test Objects: System, Manuals, System and data configuration
8.3.3. The test environment should correspond to the final target environment to minimise the risk of environment-specific failures not being found in testing.
8.3.4. It can include both black-box and white-box tests. It is usually performed by an independent test team.
8.4. Acceptance Testing
8.4.1. Test basis: System and SW requirement specification, Use cases, Risk analysis reports, Business processes
8.4.2. It is usually performed by customers or users of the system, and possibly other stakeholders. Its goal is to establish confidence in the system, not to find defects.
8.4.2.1. It is not necessarily the final level of testing; for example, a large-scale system integration test may take place after acceptance testing of an individual system.
8.4.2.2. It can occur at various times in the life cycle.
8.4.3. Acceptance Testing Forms
8.4.3.1. User Acceptance Testing, by business users
8.4.3.1.1. Fitness for use of the system
8.4.3.2. Operational (acceptance) testing, by system administrator
8.4.3.2.1. Backup, restore
8.4.3.2.2. Disaster recovery
8.4.3.2.3. User Management
8.4.3.2.4. Maintenance tasks
8.4.3.2.5. Data load
8.4.3.2.6. Periodic checks of security vulnerabilities
8.4.3.3. Contract and Regulation acceptance testing
8.4.3.4. Alpha and beta testing
9. Test Design Techniques
9.1. Test Development Process
9.1.1. Test Design Specification, Test Case Specification, Test Procedure Specification
9.1.1.1. The test design specification identifies the test conditions; the test case specification defines the content of test cases, which should include input values, preconditions, expected results, and postconditions.
9.1.1.2. Test Procedure Specification specifies the sequence of actions for test execution
9.1.2. Test condition, test case, test procedure
9.1.2.1. Test condition: an item or event that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.
9.1.3. The quality of test cases is evaluated by, e.g., clear traceability to requirements and well-defined expected results (a sketch of a concrete test case follows below)
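A minimal sketch of how one test condition is refined into a concrete test case; the login feature, values, and field names are hypothetical:

```python
# Test condition: "a registered user can log in with valid credentials".
test_case = {
    "id": "TC-LOGIN-001",
    "precondition": "user 'alice' exists with password 's3cret'",
    "inputs": {"username": "alice", "password": "s3cret"},
    "expected_result": "user is redirected to the dashboard",
    "postcondition": "an active session exists for 'alice'",
}
# A test procedure specification would then order the execution steps,
# e.g. open the login page, enter the inputs, submit, check the result.
```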
9.1.4. Black-box
9.1.4.1. Equivalence partitioning
9.1.4.1.1. Partitions include valid and invalid values, and can be identified for inputs, outputs, internal values, time-related values, interface parameters, etc. (see the sketch below)
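A minimal sketch, assuming a hypothetical input field that accepts ages from 18 to 65: the input domain splits into one valid and two invalid partitions, and one representative value is tested from each.

```python
# Hypothetical validator: accepts ages in the range 18..65 inclusive.
def is_valid_age(age):
    return 18 <= age <= 65

# One representative value per equivalence partition.
partitions = {
    "invalid_below": (10, False),  # age < 18
    "valid":         (30, True),   # 18 <= age <= 65
    "invalid_above": (80, False),  # age > 65
}

for name, (value, expected) in partitions.items():
    assert is_valid_age(value) == expected, name
```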
9.1.4.2. Boundary value analysis
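For the same hypothetical 18..65 age field, a boundary value analysis sketch exercises the values on each side of every partition boundary:

```python
def is_valid_age(age):  # same hypothetical validator as above
    return 18 <= age <= 65

boundary_cases = [
    (17, False), (18, True),  # lower boundary of the valid partition
    (65, True), (66, False),  # upper boundary of the valid partition
]

for value, expected in boundary_cases:
    assert is_valid_age(value) == expected, value
```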
9.1.4.3. Decision tables
9.1.4.3.1. It is a good way to capture combinations of logical conditions (see the sketch below)
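A minimal decision-table sketch for a hypothetical loan-approval rule with two conditions; each rule (column) of the table becomes at least one test case:

```python
# Hypothetical rule: a loan is approved only if the applicant is an adult
# AND has sufficient income. Each entry is one rule (column) of the table:
# (adult?, sufficient_income?) -> expected action (approve?)
decision_table = [
    ((True,  True),  True),   # Rule 1: approve
    ((True,  False), False),  # Rule 2: reject
    ((False, True),  False),  # Rule 3: reject
    ((False, False), False),  # Rule 4: reject
]

def approve_loan(adult, sufficient_income):
    return adult and sufficient_income

for (adult, income_ok), expected in decision_table:
    assert approve_loan(adult, income_ok) == expected
```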
9.1.4.4. State transition Testing
9.1.4.4.1. State, transitions, inputs or events that trigger transitions
9.1.4.4.2. Tests can be designed to cover a typical sequence of states, every state, every transition, specific sequences of transitions, or invalid transitions (see the sketch below)
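A minimal sketch using a hypothetical two-state door model; the tests cover every valid transition and one invalid transition:

```python
# Hypothetical state machine: (current state, event) -> next state.
TRANSITIONS = {
    ("closed", "open_cmd"):  "open",
    ("open",   "close_cmd"): "closed",
}

def next_state(state, event):
    # An invalid transition leaves the state unchanged.
    return TRANSITIONS.get((state, event), state)

# Cover every valid transition...
assert next_state("closed", "open_cmd") == "open"
assert next_state("open", "close_cmd") == "closed"
# ...and one invalid transition (event not accepted in the current state).
assert next_state("closed", "close_cmd") == "closed"
```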
9.1.4.5. Use case
9.1.4.5.1. Use cases can be described at an abstract (business) level or at system level. They are very useful for designing acceptance tests.
9.2. Experience-based Techniques
9.2.1. It is to augment systematic techniques, especially when applied after formal approaches.
9.2.2. Can be Error Guessing and Exploratory testing
9.3. Categories
9.3.1. White-box
9.3.1.1. Statement testing
9.3.1.1.1. Typically done at component level; aims to increase statement coverage
9.3.1.2. Decision coverage
9.3.1.2.1. Related to branch testing. 100% decision coverage guarantees 100% statement coverage, but not vice versa.
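A minimal sketch of why 100% statement coverage does not guarantee 100% decision coverage: with only the first hypothetical test below, every statement executes, but the False outcome of the decision is never exercised.

```python
def apply_bonus(score, passed_exam):
    if passed_exam:   # decision with two outcomes: True / False
        score += 10   # the only statement inside the decision
    return score

# One test executes every statement (100% statement coverage)...
assert apply_bonus(50, True) == 60
# ...but the False branch is never taken, so decision coverage is only 50%.
# A second test is needed to reach 100% decision coverage:
assert apply_bonus(50, False) == 50
```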
9.3.1.3. Other structural coverage
9.3.1.3.1. Condition coverage and multiple condition coverage are stronger coverage levels beyond decision coverage
10. Common Concepts
10.1. Software that does not work as expected can lead to many problems, including loss of money, time, or business reputation, and even injury or death
10.2. Causes of defects: human mistakes, time pressure, complex code, complexity of infrastructure, changing technologies, system interactions, or environmental conditions (radiation, magnetism, etc.)
10.3. Role of testing: reduce the risk of problems, contribute to the quality of the software system, and meet contractual or legal requirements and industry-specific standards
10.4. Testing and quality: measures the quality of software; lessons learned from previous projects; part of quality assurance (alongside development standards, training, and defect analysis)
11. Tool support for testing
11.1. Types of test tools
11.1.1. Management
11.1.1.1. Test Management Tools, like QC
11.1.1.2. Requirements Management Tools, which can identify requirements with missing test coverage
11.1.1.3. Incident Management tool
11.1.1.4. Configuration Management Tools, like version control software
11.1.2. Static Testing
11.1.2.1. Provide a cost-effective way of finding defects at an early stage of the life cycle.
11.1.2.2. Review tools, online review for geographically dispersed teams
11.1.2.3. Static Analysis Tools, used by developers; help enforce coding standards and analyse code structures and dependencies
11.1.2.4. Modeling Tools, used by developers
11.1.3. Test Specification
11.1.3.1. Test Design Tools
11.1.3.2. Test Data Preparation Tools
11.1.4. Test Execution and Logging
11.1.4.1. Test Execution Tools, automation
11.1.4.2. Test Comparators
11.1.4.3. Security Testing Tools
11.1.4.4. Unit Test Framework Tools (D = more likely to be used by developers)
11.1.4.5. Coverage Measurement Tools(D)
11.1.5. Performance and monitoring
11.1.5.1. Dynamic Analysis Tools(D)
11.1.5.1.1. Time dependencies
11.1.5.1.2. Memory leaks
11.1.5.2. Performance testing / load testing / stress testing tools
11.1.5.3. Monitoring Tools
11.1.6. Specific Testing Needs
11.1.6.1. Data Quality Assessment