1. Testing Process (420 min)
1.1. Planning, Monitoring, Controlling
1.1.1. Test Management Roles
1.1.2. Test Planning
1.1.2.1. Performed for each test level, from initiation through completion of that level
1.1.2.2. Identifies: *testing approach; *activities; *resources; *how to gather and track metrics; *training needs; *tool needs; *documentation guidelines; *how to relate work products to the test basis; *what is in and out of scope; *initial test environment specification (done with the architects); *external dependencies and SLAs (resources, vendors, deployment, other projects, etc.)
1.1.2.2.1. (Traceability matrix: used to create many-to-many relationships between the test basis and: -design specifications; -business requirements; -work products; a minimal sketch follows below)
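A traceability matrix like this can be kept as a simple pair of mappings. Below is a minimal Python sketch (the identifiers REQ-1/TC-1 and the link helper are hypothetical, not part of the syllabus) showing how one basis item can trace forward to many work products and vice versa:

```python
# Minimal sketch of a many-to-many traceability matrix.
from collections import defaultdict

forward = defaultdict(set)   # test basis item -> work products covering it
backward = defaultdict(set)  # work product -> test basis items it covers

def link(basis_item: str, work_product: str) -> None:
    """Record a bidirectional trace between a basis item and a work product."""
    forward[basis_item].add(work_product)
    backward[work_product].add(basis_item)

link("REQ-1", "TC-1")  # hypothetical requirement and test case IDs
link("REQ-1", "TC-2")  # one requirement covered by many test cases
link("REQ-2", "TC-2")  # one test case covering many requirements

print(sorted(forward["REQ-1"]))   # ['TC-1', 'TC-2']
print(sorted(backward["TC-2"]))   # ['REQ-1', 'REQ-2']
```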
1.1.2.3. The test plan is directly affected by the chosen test strategy (e.g., risk-based: more effort on the highest risks; reactive: test charters and tools for dynamic testing)
1.1.3. Test Monitoring and Control
1.1.3.1. Establish: *testing schedule; *monitoring framework (with detailed measures and targets);
1.1.3.2. Relate the status of test work products and activities to the test basis
1.1.3.3. Track test work products and resources against the plan
1.1.3.4. Define targets and measure progress based on test conditions
1.1.3.5. Is an ongoing activity
1.1.3.6. Compares actual progress against the plan and implements corrective actions when needed (see the sketch after this list)
1.1.3.7. Guides the testing to fulfill the mission, strategies, and objectives, including revisiting the test planning activities as needed
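As a toy illustration of monitoring against the plan, the sketch below compares hypothetical planned targets with actual figures and flags where corrective action may be needed (the metric names and the 90% threshold are invented for the example):

```python
# Compare actual test progress against planned targets (all figures hypothetical).
planned = {"tests designed": 200, "tests executed": 200, "tests passed": 180}
actual = {"tests designed": 190, "tests executed": 120, "tests passed": 96}

for metric, target in planned.items():
    done = actual[metric]
    pct = 100 * done / target
    status = "on track" if pct >= 90 else "consider corrective action"
    print(f"{metric}: {done}/{target} ({pct:.0f}%) - {status}")
```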
1.2. Test Analysis
1.2.1. Defines “what” is to be tested in the form of test conditions.
1.2.1.1. Derived from the test basis, test objectives, and product risks
1.2.1.2. Test conditions are traced backward to their origins and forward to test design and test work products
1.2.1.3. Can be performed as soon as the test basis for that level is available
1.2.1.4. Formal test techniques and other general analytical techniques (e.g., analytical risk-based strategies and analytical requirements-based strategies) can be used
1.2.1.5. Varies from basic to very detailed
1.2.1.5.1. Depending on: *detail/quality of the test basis; *system/software complexity; *project/product risk; *skills and knowledge of test analysts; *maturity of the test process; *availability of stakeholders;
1.2.1.5.2. Advantages: *better correlation with the test basis, test work products, and test objectives; *better defect prevention; *easier understanding by stakeholders; *ability to influence and direct testing and development; *optimization of test design, implementation, and execution; *better traceability
1.2.1.5.3. Disadvantages: *time-consuming; *harder to maintain in a changing environment; *formality must be adopted consistently across the team;
1.2.1.5.4. Less detail is typical in component and acceptance testing; more detail on complex or high-risk projects, or to compensate for a lack of detail in the specifications or test design
1.3. Test Design
1.3.1. Defines "how" something is to be tested in the form of test cases.
1.3.1.1. Test cases can be high-level (logical) or low-level (concrete)
1.3.1.2. Related directly, or indirectly via test conditions, to the test basis and test objectives
1.3.2. Performed once test conditions are identified and enough information is available to produce test cases
1.4. Test Implementation
1.4.1. Organization and prioritization of tests (captured in the test execution schedule)
1.4.1.1. The TM monitors constraints, risks, and priorities to define the execution order
1.4.2. Responsible: test analyst
1.4.3. Implementation of the test design as concrete test cases, procedures, and data
1.4.4. Creation of stored test data
1.4.5. Checks readiness to execute: *test environment; *test data; *code;
1.4.6. Detailed description of test environment and test data
1.4.7. More detail is needed if tests will be reused for regression testing or if regulatory aspects apply
1.4.8. Early implementation may lead to rework if requirements change, but can help identify weaknesses in the specifications.
1.5. Test Execution
1.5.1. Begins when the test object is delivered and entry criteria are satisfied
1.5.2. Tools should be in place
1.5.3. Test results and metrics tracking should be working and known to all team members
1.5.4. Standards for test logging and defect reporting should be published
1.5.5. Tests are executed according to the test cases (except in reactive testing, which is experience-based or defect-based)
1.5.6. The TM monitors progress and takes control actions if needed
1.6. Evaluating exit criteria and reporting
1.6.1. Consolidates the results of all process activities (analysis, design, implementation, execution)
1.6.2. Through planning, the TM determines how information will be gathered and reported
1.6.3. Through monitoring and control, the TM makes sure each team member reports their part of the work
1.7. Test Closure
1.7.1. Test completion check: *all planned tests run or deliberately skipped; *all defects reported and either fixed, deferred, or accepted
1.7.2. Test artifact handover, e.g.: *known defects communicated to the support team; *tests and their environments handed over to the maintenance testing team;
1.7.3. Lessons learned, e.g.: *was the risk analysis good enough, or were too many defect clusters unanticipated? *were the estimates accurate? *does cause-and-effect analysis of the defects reveal any patterns? *what process improvements are possible?
1.7.4. Archive results, logs, reports, documents, and work products in the configuration management system.
2. General concepts:
2.1. Testing approach: defines which test levels will be employed; the goals and objectives for each level; and which test techniques will be used at each level;
2.2. Metrics: used to guide the project, determine adherence to the plan, and assess achievement of the objectives.
2.3. Testing strategy: based on the project's needs, how testing will be driven, planned, and executed. Examples: risk-based; reactive;
3. Test Management (750 min)
3.1. Test Management in Context
3.1.1. Main role of a manager is to secure and utilize resources to carry out value-adding processes.
3.1.1.1. The test manager must optimally arrange the test processes, activities, and work products according to the needs of the stakeholders
3.1.2. Testing Stakeholders
3.1.2.1. Direct or indirect
3.1.2.2. *developers, dev leads, and managers: results; *architects and designers: results; *marketing and business analysts: definitions and results; *senior management, product managers, and sponsors: definitions and results; *project managers: support; *tech support, helpdesk: results; *users: the final result (product);
3.1.2.3. The TM plans testing to meet their needs
3.1.3. Development lifecycle activities
3.1.3.1. Testing is closely interconnected with the following activities; the TM's role for each: *requirements engineering and management: scoping and estimation; *project management: schedule and resource requirements; *configuration/release management: establish test object delivery processes; *software development and maintenance: coordinate deliveries and defect management; *technical support: delivery of results and workarounds for issues, and identify quality improvement needs; *technical documentation production: ensure documentation is received in time for testing and find defects in documents;
3.1.4. Software Development Models
3.1.4.1. Sequential (waterfall, V-model, W-model)
3.1.4.1.1. Activities are serialized
3.1.4.1.2. Test execution proceeds sequentially
3.1.4.1.3. The system test level aligns as follows: *test planning with project planning; *test analysis and design with requirements specification and high- and low-level design specification; *test implementation with coding and component testing; *test execution when entry criteria are met (e.g., component testing/component integration testing done); *exit criteria evaluation and reporting throughout execution; *closure when exit criteria are met.
3.1.4.2. Iterative or incremental (RAD, RUP)
3.1.4.2.1. The whole project is split into groups of features (phases)
3.1.4.2.2. Each phase is done either sequentially or overlapped
3.1.4.2.3. Iterations run sequentially or overlapped
3.1.4.2.4. High-level test planning and analysis occur during project initiation; detailed test activities occur at the beginning of each iteration.
3.1.4.2.5. Same tasks as the V-model, but for each iteration, as a mini-project
3.1.4.3. Agile (Scrum, XP)
3.1.4.3.1. Very short iterations (2-4 weeks)
3.1.4.3.2. Iterations are sequential (all activities complete within an iteration; the next starts upon completion of the previous)
3.1.4.3.3. The TM plays a technical authority/advisory role
3.1.4.3.4. Same tasks as the V-model, but for each iteration, as a mini-project
3.1.4.4. Spiral
3.1.4.4.1. Prototypes are used to confirm feasibility, experiment with the design, and determine which technical problems to resolve.
3.1.4.4.2. Once the main problems are resolved, the project proceeds either sequentially or iteratively
3.1.4.5. Test levels
3.1.4.5.1. Characterized by (CTFL): *specific objectives; *test basis; *test object; *typical defects; *approaches and responsibilities;
3.1.4.5.2. Each level has its own (CTAL): *objectives and goals; *test scope and items; *test basis and corresponding traceability; *entry and exit criteria; *test deliverables and results reporting; *test techniques and coverage; *measures and metrics; *tools; *resources (e.g., environment); *responsible individuals and groups.
3.1.4.5.3. CTFL (base levels): *component; *integration; *system; *acceptance;
3.1.4.5.4. CTAL (additional levels): *hardware-software integration testing; *system integration testing; *feature interaction testing; *customer product integration testing;
3.1.5. Managing non-functional testing
3.1.5.1. Non-functional testing can be too expensive, so the TM must select which non-functional tests to run according to risks and constraints
3.1.5.2. Non-functional test planning activities are delegated to the TTA (Technical Test Analyst)
3.1.5.2.1. General factors considered: *stakeholder requirements; *required tool acquisition and training; *test environment requirements; *organizational considerations; *data security considerations.
3.1.5.3. Shouldn't wait until the end of functional testing; prioritize it according to risk
3.1.5.4. May happen outside the iterations if it takes longer than a single iteration.
3.1.6. Managing Experience-based testing
3.1.6.1. Efficient at finding defects and at checking the completeness of other techniques
3.1.6.2. Difficult to determine coverage and to reproduce defects, especially if multiple testers are involved
3.1.6.3. Break testing into test sessions: *30-120 minutes each; *each covers a test charter (a list of test conditions); see the sketch below
3.1.6.4. Integrating experience-based testing into pre-designed testing: *exploring beyond the explicit steps; *adding spare time to explore
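A session record can be as small as a charter, a time box, and the notes and defects produced. Below is a minimal Python sketch (the fields and the BUG-123 ID are illustrative; real session sheets vary by team):

```python
# Minimal sketch of a session-based test management record.
from dataclasses import dataclass, field

@dataclass
class TestSession:
    charter: str                  # the test conditions to explore
    duration_min: int             # typically 30-120 minutes
    tester: str
    notes: list[str] = field(default_factory=list)    # observations made during the session
    defects: list[str] = field(default_factory=list)  # IDs of defects raised

session = TestSession(charter="Explore import of malformed CSV files",
                      duration_min=90, tester="alex")
session.notes.append("Importer silently drops rows with embedded quotes")
session.defects.append("BUG-123")  # hypothetical defect ID
print(session)
```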
3.2. Risk-based testing and Test prioritization and effort allocation
3.2.1. TM challenge: proper selection, allocation, and prioritization of tests to optimize the effectiveness and efficiency of testing work to be done.
3.2.2. Risk-based testing
3.2.2.1. Risk is the possibility of a negative or undesirable outcome or event that could decrease the perception of quality or success. *Product risks: also called product quality risks or quality risks; *Project risks: also called planning risks;
3.2.2.2. Product quality risk analysis is done with the stakeholders
3.2.2.3. Tests mitigate risks whether they find defects (which can then be fixed) or not (which builds confidence)
3.2.2.4. Four risk-based testing activities:
3.2.2.5. Risk identification
3.2.2.5.1. Stakeholders can identify risks through: *expert interviews; *independent assessments; *use of risk templates; *project retrospectives; *risk workshops; *brainstorming; *checklists; *calling on past experience.
3.2.2.5.2. Often produces by-products, like project risks;
3.2.2.6. Risk assessment
3.2.2.6.1. Categorization (ISO 25000 quality characteristics): *functionality; *reliability; *usability; *efficiency; *maintainability; *portability.
3.2.2.6.2. Assessing PROBABILITY: *complexity of technology and teams; *personnel and training issues; *conflict within the team; *contractual problems; *geographically distributed teams; *legacy vs. new approaches; *weak management and leadership; *time, resource, budget, and management pressure; *lack of earlier QA; *high change rates; *high earlier defect rates; *interfacing and integration issues;
3.2.2.6.3. Assessing IMPACT: *frequency of use of the affected feature; *criticality of the feature; *damage to reputation; *loss of business; *financial, ecological, or social losses or liability; *civil or criminal legal sanctions; *loss of license; *lack of reasonable workarounds; *negative publicity; *safety.
3.2.2.6.4. Assessed quantitatively or qualitatively
3.2.2.6.5. Combine IMPACT and PROBABILITY into an aggregate risk score (risk priority number); see the worked example after this list
3.2.2.6.6. The risk assessment should be agreed by consensus among the stakeholders
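One common scheme multiplies a probability rating by an impact rating; the worked example below uses hypothetical 1-5 scales, and the test-ordering comments are only one possible interpretation:

```python
# Aggregate risk score (risk priority number) = probability rating x impact rating.
LIKELIHOOD = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}
IMPACT = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "critical": 5}

def risk_priority_number(likelihood: str, impact: str) -> int:
    return LIKELIHOOD[likelihood] * IMPACT[impact]

# A frequently used payment feature built on a complex new integration:
print(risk_priority_number("high", "critical"))  # 4 * 5 = 20 -> test early and deeply
# A rarely used help screen with a simple implementation:
print(risk_priority_number("low", "minor"))      # 2 * 2 = 4 -> test late and lightly
```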
3.2.2.7. Risk mitigation
3.2.2.7.1. Safety-related standards: *FAA DO-178B/ED-12B; *IEC 61508;
3.2.2.7.2. The risk level influences decisions like: *effort and priority for developing and executing tests; *level of independence of the test team; *level of experience of the tester; *degree of confirmation and regression testing performed; *need for reviews;
3.2.2.7.3. Periodic adjustment of the quality risk analysis should occur at least at major milestones: *identifying new risks; *re-assessing the level of risks; *evaluating effectiveness of mitigation activities.
3.2.2.8. Risk management
3.2.2.8.1. Occurs throughout the entire lifecycle
3.2.2.8.2. Test policy/strategy document should describe how risk management is integrated into all stages
3.2.2.8.3. Takes place at many levels, not just for testing
3.2.2.8.4. Might identify the sources of risks and their consequences
3.2.2.8.5. Risk-based testing methods: *depth-first: all high-risk tests run before any lower-risk one; *breadth-first: all risk levels receive some coverage at the same time, in proportion to their risk (see the sketch after this list).
3.2.2.8.6. If time runs out before testing is complete, management decides whether to extend testing or pass the remaining risks forward.
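The sketch below contrasts the two orderings on hypothetical test cases and risk priority numbers; breadth-first is approximated here by round-robin over the risk levels, highest first:

```python
# Depth-first vs. breadth-first risk-based test ordering (scores hypothetical).
from itertools import zip_longest

tests = {"TC-pay": 20, "TC-auth": 20, "TC-report": 9, "TC-export": 9, "TC-help": 2}

# Depth-first: every higher-risk test runs before any lower-risk test.
depth_first = sorted(tests, key=tests.get, reverse=True)

# Breadth-first: one test from each risk level per round, so every level
# gets some early coverage.
levels = {}
for tc, score in tests.items():
    levels.setdefault(score, []).append(tc)
rounds = zip_longest(*(levels[s] for s in sorted(levels, reverse=True)))
breadth_first = [tc for rnd in rounds for tc in rnd if tc is not None]

print(depth_first)    # ['TC-pay', 'TC-auth', 'TC-report', 'TC-export', 'TC-help']
print(breadth_first)  # ['TC-pay', 'TC-report', 'TC-help', 'TC-auth', 'TC-export']
```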
3.2.3. Risk-based testing techniques
3.2.3.1. Lightweight techniques