
1. **1. Fundamentals of Testing**
1.1. 1.1 What is testing
1.1.1. Test Objectives
1.1.1.1. Evaluating work products (requirements, user stories, designs, ...)
1.1.1.2. Causing failures and finding defects
1.1.1.3. Ensuring required coverage of a test object
1.1.1.4. Reducing the risk level of inadequate software quality
1.1.1.5. Verifying whether specified requirements have been fulfilled
1.1.1.6. Verifying that a test object complies with contractual, legal, and regulatory requirements
1.1.1.7. Providing information to stakeholders to allow them to make informed decisions
1.1.1.8. Building confidence in the quality of the test object
1.1.1.9. Validating whether the test object is complete and works as expected by the stakeholders
1.1.2. Testing and debugging
1.1.2.1. Testing
1.1.2.1.1. Causing failures that are caused by defects (Dynamic)
1.1.2.1.2. Directly finding defects (Static)
1.1.2.2. Debugging
1.1.2.2.1. Finding the causes of failures (Dynamic)
1.1.2.2.2. Fixing the defect
1.2. 1.2 Why is testing necessary
1.2.1. Testing's contribution to Success
1.2.1.1. **Indirectly** contributes to higher quality test objects
1.2.1.2. **Directly** evaluates the quality of test objects at various phases of the SDLC
1.2.1.3. Testing provides **users** with indirect representation on the development project
1.2.2. Testing and QA
1.2.2.1. Testing and QA are not the same
1.2.2.1.1. **Testing** is product-oriented
1.2.2.1.2. Testing is a major form of quality control
1.2.2.1.3. **QA** is process-oriented
1.2.2.1.4. A good process leads to a good product
1.2.2.1.5. QA applies to both the development and testing processes
1.2.2.2. Test results are used by both testing and QA
1.2.2.2.1. Testing uses them to fix defects
1.2.2.2.2. QA uses them to provide feedback on the processes
1.2.3. Errors, Defects, Failures, and Root Causes
1.2.3.1. Error -> Defect -> Failure
1.2.3.1.1. **Error** is a mistake made by a human being
1.2.3.1.2. **Defects** (fault/bug) are found in test scripts, source code, supporting work products, requirements, etc.
1.2.3.1.3. **Failures** are caused by defects or by environmental conditions
1.2.3.2. Root causes
1.2.3.2.1. Time pressure
1.2.3.2.2. Complexity of the work product
1.2.3.2.3. Fatigue or lack of adequate training
1.2.3.2.4. Misunderstandings
1.3. 1.3 Testing Principles
1.3.1. Testing shows the presence, not the absence of defects
1.3.2. Exhaustive testing is impossible
1.3.3. Early testing saves time and money
1.3.4. Defects cluster together
1.3.5. Tests wear out
1.3.6. Testing is context dependent
1.3.7. Absence-of-defects fallacy
1.4. 1.4 Test activities, Testware and Test Roles
1.4.1. Test activities
1.4.1.1. Test planning (Test Manager)
1.4.1.1.1. Defining the test objectives and selecting the test approach
1.4.1.2. Test monitoring and test control (Test Manager)
1.4.1.2.1. Related to all test activities
1.4.1.3. Test analysis (Testing role)
1.4.1.3.1. Analyzing the test basis and test objects, and defining and prioritizing test conditions
1.4.1.4. Test design (Testing role)
1.4.1.4.1. Elaborating test conditions into test cases
1.4.1.4.2. Defining test data and designing the test environment
1.4.1.4.3. Identifying needed tools and infrastructure
1.4.1.5. Test implementation (Testing role)
1.4.1.5.1. Prioritizing test procedures
1.4.1.5.2. Creating test data
1.4.1.5.3. Ensuring the test environment is built and set up correctly
1.4.1.6. Test execution (Testing role)
1.4.1.6.1. Executing the test cases
1.4.1.7. Test completion (Test manager)
1.4.1.7.1. Occurs at project milestones
1.4.1.7.2. For any unresolved defects, change requests or product backlog items are created
1.4.1.7.3. Testware is handed over to the appropriate teams
1.4.1.7.4. A test completion report is created
1.4.1.7.5. Lessons learned and improvements are identified
1.4.2. Test Process in Context
1.4.2.1. Test activities are an integral part of the development processes and are influenced by context factors such as:
1.4.2.1.1. Stakeholders (needs, expectations, requirements, willingness to cooperate, etc.)
1.4.2.1.2. Team members (skills, knowledge, level of experience, availability, training needs, etc.)
1.4.2.1.3. Business domain (criticality of the test object, identified risks, market needs, specific legal regulations, etc.)
1.4.2.1.4. Technical factors (type of software, product architecture, technology used, etc.)
1.4.2.1.5. Project constraints (scope, time, budget, resources, etc.)
1.4.2.1.6. Organizational factors (organizational structure, existing policies, practices used, etc.)
1.4.2.1.7. Software development lifecycle (engineering practices, development methods, etc.)
1.4.2.1.8. Tools (availability, usability, compliance, etc.)
1.4.3. Traceability between the Test Basis and Testware
1.4.3.1. In order to implement effective test monitoring and test control, traceability should be established and maintained between test basis elements and testware
1.4.3.1.1. Traceability provides information to assess product quality, process capability, and project progress against business goals
1.4.4. Testware
1.4.4.1. Output from test activities
1.4.4.1.1. Test planning work products include: test plan, test schedule, risk register, entry criteria and exit criteria
1.4.4.1.2. Test monitoring and test control work products include: test progress reports, documentation of control directives and information about risks
1.4.4.1.3. Test analysis work products include: (prioritized) test conditions and defect reports regarding defects in the test basis (if not fixed directly)
1.4.4.1.4. Test design work products include: (prioritized) test cases, test charters, coverage items, test data requirements and test environment requirements.
1.4.4.1.5. Test implementation work products include: test procedures, manual and automated test scripts, test suites, test data, test execution schedule, and test environment items
1.4.4.1.6. Test execution work products include: test logs, and defect reports
1.4.4.1.7. Test completion work products include: test completion report, action items for improvement of subsequent projects or iterations, documented lessons learned, and change requests
1.4.5. Roles in testing
1.4.5.1. Different people may take on these roles at different times. For example, the test management role can be performed by a team leader, by a test manager, by a development manager, etc. It is also possible for one person to take on the roles of testing and test management at the same time.
1.5. 1.5 Essential Skills and Good Practices in Testing
1.5.1. Generic skills required for testing
1.5.1.1. Testing knowledge (to increase effectiveness of testing, e.g., by using test techniques)
1.5.1.2. Thoroughness, carefulness, curiosity, attention to details, being methodical (to identify defects, especially the ones that are difficult to find)
1.5.1.3. Good communication skills, active listening, being a team player (to interact effectively with all stakeholders, to convey information to others, to be understood, and to report and discuss defects)
1.5.1.4. Analytical thinking, critical thinking, creativity (to increase effectiveness of testing)
1.5.1.5. Technical knowledge (to increase efficiency of testing, e.g., by using appropriate test tools)
1.5.1.6. Domain knowledge (to be able to understand and to communicate with end users/business representatives)
1.5.2. Whole team approach
1.5.2.1. A practice coming from Extreme programming (XP)
1.5.2.1.1. In the whole team approach any team member with the necessary knowledge and skills can perform any task, and everyone is responsible for quality
1.5.2.1.2. The whole team approach improves team dynamics, enhances communication and collaboration within the team, and creates synergy by allowing the various skill sets within the team to be leveraged for the benefit of the project
1.5.2.1.3. Collaborating with business representatives to help them create suitable acceptance tests
1.5.2.1.4. Working with developers to agree on the test strategy and decide on test automation approaches
1.5.2.1.5. Transferring testing knowledge to other team members
1.5.2.1.6. Might be inappropriate in some contexts (e.g., when a high level of test independence is required)
1.5.3. Independence of testing
1.5.3.1. No independence (Tested by the author)
1.5.3.2. Some independence (By Author peer from the same team)
1.5.3.3. High independence (Outside author team but within organization)
1.5.3.4. Very high independence (Outside org)
1.5.3.5. An independent tester can verify, challenge, or disprove assumptions made by stakeholders during specification and implementation of the system
1.5.3.6. Recognize different kinds of failures and defects compared to developers
2. **2. Testing throughout SDLC**
2.1. 2.1 Testing in the context of a SDLC
2.1.1. Impact of the SDLC on testing
2.1.1.1. The SDLC impacts the:
2.1.1.1.1. Scope and timing of test activities (e.g., test levels and test types)
2.1.1.1.2. Level of detail of test documentation
2.1.1.1.3. Choice of test techniques and test approach
2.1.1.1.4. Extent of test automation
2.1.1.1.5. Role and responsibilities of a tester
2.1.1.2. Sequential development models (Waterfall, V-model)
2.1.1.2.1. Initial phases: testers typically participate in requirement reviews, test analysis and test design
2.1.1.2.2. Later phases: executable code is produced, so dynamic testing can typically only be performed late in the SDLC
2.1.1.3. Iterative & incremental development models
2.1.1.3.1. In each iteration, both static and dynamic testing may be performed at all test levels
2.1.1.3.2. Requires fast feedback and extensive regression testing
2.1.1.4. Agile software development uses lightweight work product documentation and extensive test automation; most manual testing tends to use experience-based test techniques
2.1.2. SDLC and Good testing practices
2.1.2.1. All development activities are subject to quality control
2.1.2.2. Different test levels have specific and different test objectives, which allows for testing to be appropriately comprehensive while avoiding redundancy
2.1.2.3. Test analysis and design for a given test level begins during the corresponding development phase of the SDLC
2.1.2.4. Testers are involved in reviewing work products as soon as drafts of these work products are available
2.1.3. Testing as a Driver for Software Development
2.1.3.1. TDD
2.1.3.1.1. Directs the coding through test cases (instead of extensive software design)
2.1.3.1.2. Tests are written first, then the code is written to satisfy the tests, and then the tests and code are refactored
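As an illustration only, here is a minimal test-first sketch in Python/pytest; the `add` function and its values are hypothetical, not taken from the syllabus. In TDD the test below would be written (and fail) before the production code is implemented.

```python
# Hypothetical TDD sketch: the test is written first and fails until the
# production code below is implemented to satisfy it.
def add(a, b):
    return a + b  # written only after the failing test existed

def test_add_two_numbers():
    assert add(2, 3) == 5  # run with pytest
```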
2.1.3.2. ATDD
2.1.3.2.1. Derives tests from acceptance criteria as part of the system design process
2.1.3.2.2. Tests are written before the part of the application is developed to satisfy the tests
2.1.3.3. BDD
2.1.3.3.1. Expresses the desired behavior of an application with test cases written in a simple form of natural language, which is easy to understand by stakeholders – usually using the Given/When/Then format
2.1.3.3.2. Test cases should then automatically be translated into executable tests
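A minimal sketch of the Given/When/Then idea, expressed here as an executable pytest test with the scenario steps as comments; the `withdraw` function, the account values and the mapping to code are invented for illustration (dedicated BDD tooling would translate the natural-language scenario into executable tests automatically).

```python
# Scenario (hypothetical): Given an account with a balance of 100,
# When the user withdraws 30, Then the remaining balance is 70.
def withdraw(balance, amount):
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_withdraw_within_balance():
    balance = 100                        # Given
    new_balance = withdraw(balance, 30)  # When
    assert new_balance == 70             # Then
```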
2.1.4. DevOps and Testing
2.1.4.1. Benefits
2.1.4.1.1. Fast feedback on the code quality, and whether changes adversely affect existing code
2.1.4.1.2. CI promotes shift left in testing by encouraging developers to submit high quality code accompanied by component tests and static analysis
2.1.4.1.3. Automated processes such as CI/CD are promoted, which facilitates establishing stable test environments
2.1.4.1.4. The visibility on non-functional quality characteristics increases (e.g., performance efficiency, reliability)
2.1.4.1.5. Automation through a delivery pipeline reduces the need for repetitive manual testing
2.1.4.1.6. The risk of regression is minimized due to the scale and range of automated regression tests
2.1.4.2. Risks and Challenges
2.1.4.2.1. The DevOps delivery pipeline must be defined and established
2.1.4.2.2. CI / CD tools must be introduced and maintained
2.1.4.2.3. Test automation requires additional resources and may be difficult to establish and maintain
2.1.5. Shift Left Approach
2.1.5.1. Good practice
2.1.5.1.1. Reviewing the specification from the perspective of testers.
2.1.5.1.2. Writing test cases before the code is written and have the code run in a test harness during code implementation
2.1.5.1.3. Using CI and even better CD as it comes with fast feedback and automated component tests to accompany source code when it is submitted to the code repository
2.1.5.1.4. Completing static analysis of source code prior to dynamic testing, or as part of an automated process
2.1.5.1.5. Performing non-functional testing starting at the component test level.
2.1.5.2. Challenges
2.1.5.2.1. Shift left might result in extra training, effort and/or costs earlier in the process but is expected to save efforts and/or costs later in the process.
2.1.5.2.2. For shift left it is important that stakeholders are convinced and bought into this concept
2.1.6. Retrospectives and Process Improvement
2.1.6.1. Participants (testers, developers, architects, product owners, business analysts)
2.1.6.1.1. Typically discuss what was successful, what could be improved, and how to incorporate the improvements and retain the successes
2.1.6.1.2. Benefits include increased test effectiveness and efficiency, increased quality of testware, team bonding and learning, and better cooperation between development and testing
2.2. Test levels and test types
2.2.1. **Test levels** In sequential SDLC models, the test levels are often defined such that the exit criteria of one level are part of the entry criteria for the next level. In some iterative models, this may not apply
2.2.1.1. Component testing (unit testing)
2.2.1.1.1. Normally performed by developers in development environment
2.2.1.2. Component integration testing (unit integration testing)
2.2.1.2.1. Testing interfaces and interaction between components
2.2.1.3. System testing
2.2.1.3.1. Focuses on the overall behavior and capabilities of an entire system or product
2.2.1.3.2. Includes functional testing of end-to-end tasks and non-functional testing
2.2.1.3.3. Based on specifications for the system (e.g., business rules)
2.2.1.4. System integration testing
2.2.1.4.1. Focuses on testing the interfaces of the system under test and other systems and external services
2.2.1.5. Acceptance testing
2.2.1.5.1. Fulfil the user's business needs
2.2.1.6. Test levels are distinguished by the following non-exhaustive list of attributes, to avoid overlapping of test activities
2.2.1.6.1. Test object
2.2.1.6.2. Test objective
2.2.1.6.3. Test basis
2.2.1.6.4. Defects and failures
2.2.1.6.5. Approach and responsibility
2.2.2. **Test types**
2.2.2.1. Functional testing
2.2.2.1.1. What test object should do
2.2.2.2. Non-functional testing
2.2.2.2.1. How well the system behaves
2.2.2.2.2. Non-functional testing sometimes needs a very specific test environment, such as a usability lab for usability testing
2.2.2.3. Black-box testing
2.2.2.3.1. Specification-based
2.2.2.4. White-box testing
2.2.2.4.1. Structure-based
2.2.3. **Confirmation testing and Regression testing** Confirmation testing and/or regression testing for the test object are needed on all test levels if defects are fixed and/or changes are made on these test levels
2.2.3.1. Confirmation testing
2.2.3.1.1. Confirms whether an original defect has been successfully fixed
2.2.3.1.2. When time or money is short when fixing defects, confirmation testing might be restricted to simply exercising the test steps that should reproduce the failure
2.2.3.2. Regression testing
2.2.3.2.1. Confirms that no adverse consequences have been caused by a change, including a fix that has already been confirmation tested
2.2.3.2.2. It is advisable first to perform an impact analysis to recognize the extent of the regression testing. Impact analysis shows which parts of the software could be affected
2.2.4. Maintenance Testing
2.2.4.1. Impact analysis may be done before a change is made, to help decide if the change should be made
2.2.4.1.1. Triggers for maintenance testing include modifications (e.g., planned enhancements, corrective changes, hot fixes), upgrades or migrations of the operational environment, and retirement of the application
3. **3. Static testing**
3.1. **3.1 Static testing basics** Static analysis can identify problems prior to dynamic testing while often requiring less effort, since no test cases are required, and tools are typically used.
3.1.1. Work Products Examinable by Static Testing
3.1.1.1. Any work product that can be read and understood can be the subject of a review. However, for static analysis, work products need a structure against which they can be checked
3.1.1.2. Work products that are not appropriate for static testing include those that are difficult to interpret by human beings and that should not be analyzed by tools (e.g., 3rd party executable code due to legal reasons)
3.1.2. Value of static testing
3.1.2.1. Fulfils the principle of early testing
3.1.2.2. Provides the ability to evaluate the quality of, and to build confidence in, work products
3.1.3. **Differences between Static Testing and Dynamic Testing**
3.1.3.1. Static testing and dynamic testing (with analysis of failures) can both lead to the detection of defects, however there are some defect types that can only be found by either static or dynamic testing.
3.1.3.2. Static testing finds defects directly, while dynamic testing causes failures from which the associated defects are determined through subsequent analysis
3.1.3.3. Static testing may more easily detect defects that lay on paths through the code that are rarely executed or hard to reach using dynamic testing
3.1.3.4. Static testing can be applied to non-executable work products, while dynamic testing can only be applied to executable work products
3.1.3.5. Static testing can be used to measure quality characteristics that are not dependent on executing code (e.g., maintainability), while dynamic testing can be used to measure quality characteristics that are dependent on executing code (e.g., performance efficiency)
3.1.4. **Typical defects that are easier and/or cheaper to find through static testing include**
3.1.4.1. Defects in requirements
3.1.4.2. Design defects
3.1.4.3. Certain types of coding defects
3.1.4.4. Deviations from standards (e.g., lack of adherence to naming conventions in coding standards)
3.1.4.5. Incorrect interface specifications
3.1.4.6. Specific types of security vulnerabilities (e.g., buffer overflows)
3.1.4.7. Gaps or inaccuracies in test basis coverage (e.g., missing tests for an acceptance criterion)
3.2. **3.2 Feedback and Review Process**
3.2.1. **Benefits of Early and Frequent Stakeholder Feedback**
3.2.1.1. Early communication of potential quality problems
3.2.1.2. A failure to deliver what the stakeholder wants can result in costly rework, missed deadlines, blame games, and might even lead to complete project failure
3.2.1.3. Prevent misunderstandings about requirements and ensure that changes to requirements are understood and implemented earlier
3.2.2. **Review Process Activities** The size of many work products makes them too large to be covered by a single review. The review process may be invoked multiple times to complete the review for the entire work product
3.2.2.1. Planning
3.2.2.1.1. Scope of the review
3.2.2.1.2. The work product to be reviewed, quality characteristics to be evaluated, areas to focus on, exit criteria, supporting information
3.2.2.2. Review initiation
3.2.2.2.1. To make sure that everyone and everything involved is prepared to start the review (making sure that every participant has access to the work product under review, understands their role and responsibilities and receives everything needed to perform the review)
3.2.2.3. Individual review
3.2.2.3.1. Every reviewer performs an individual review to assess the quality of the work product under review.
3.2.2.3.2. The reviewers log all their identified anomalies, recommendations, and questions.
3.2.2.3.3. Review techniques such as checklist-based reviewing and scenario-based reviewing can be used
3.2.2.4. Communication and analysis
3.2.2.4.1. Since the anomalies identified during a review are not necessarily defects, all these anomalies need to be analyzed and discussed
3.2.2.4.2. Decide what the quality level of reviewed work product is and what follow-up actions are required
3.2.2.5. Fixing and reporting
3.2.2.5.1. Defect reports should be created so that corrective actions can be followed up
3.2.3. Roles and Responsibility in Reviews
3.2.3.1. Manager – decides what is to be reviewed and provides resources, such as staff and time for the review
3.2.3.2. Author – creates and fixes the work product under review
3.2.3.3. Moderator (also known as the facilitator) – ensures the effective running of review meetings, including mediation, time management, and a safe review environment in which everyone can speak freely
3.2.3.4. Scribe (also known as recorder) – collates anomalies from reviewers and records review information, such as decisions and new anomalies found during the review meeting
3.2.3.5. Reviewer – performs reviews. A reviewer may be someone working on the project, a subject matter expert, or any other stakeholder
3.2.3.6. Review leader – takes overall responsibility for the review such as deciding who will be involved, and organizing when and where the review will take place
3.2.4. Review Types
3.2.4.1. Informal review
3.2.4.1.1. Does not require a formal documented output or a defined process; mainly used for detecting anomalies
3.2.4.2. Walkthrough
3.2.4.2.1. Led by author
3.2.4.2.2. Evaluating quality and building confidence in the work product
3.2.4.2.3. Educating reviewers
3.2.4.2.4. Gaining consensus, generating new ideas, motivating and enabling authors to improve and detecting anomalies
3.2.4.2.5. Reviewers may perform an individual review in advance, but this is not required
3.2.4.3. Technical Review
3.2.4.3.1. Led by Moderator
3.2.4.3.2. Performed by technically qualified reviewers
3.2.4.3.3. Gain consensus and make decisions regarding a technical problem, but also to detect anomalies
3.2.4.4. Inspection
3.2.4.4.1. The most formal review type; follows the complete generic review process
3.2.4.4.2. To find maximum number of anomalies
3.2.4.4.3. Metrics are collected and used to improve the SDLC, including the inspection process
3.2.4.4.4. Author cannot act as the review leader or scribe
3.2.5. Success Factors for Reviews
3.2.5.1. Defining clear objectives and measurable exit criteria. Evaluation of participants should never be an objective
3.2.5.2. Choosing the appropriate review type to achieve the given objectives, and to suit the type of work product, the review participants, the project needs and context
3.2.5.3. Performing reviews on small chunks, so that reviewers do not lose concentration during an individual review and/or the review meeting (when held)
3.2.5.4. Providing feedback from reviews to stakeholders and authors so they can improve the product and their activities
3.2.5.5. Providing adequate time to participants to prepare for the review
3.2.5.6. Support from management for the review process
3.2.5.7. Making reviews part of the organization’s culture, to promote learning and process improvement
3.2.5.8. Providing adequate training for all participants so they know how to fulfil their role
3.2.5.9. Facilitating meetings
4. Drawbacks of test independence
4.1. Lack of collaboration and communication with the development team
4.2. Developers may lose a sense of responsibility for quality
4.3. May be seen as a bottleneck or be blamed for delays in release
5. **4. Test Analysis and Design**
5.1. **4.1 Test techniques Overview**
5.1.1. Black-box test techniques
5.1.1.1. Specification-based
5.1.1.2. Not related to software implementation and its internal structure
5.1.2. White-box test techniques
5.1.2.1. Structure-based
5.1.2.2. Test cases can only be created after design or implementation of test objects
5.1.3. Experience-based test techniques
5.1.3.1. Depends heavily on the tester’s skills
5.2. **4.2 Black-Box Test Techniques**
5.2.1. Equivalence Partitioning
5.2.1.1. The simplest coverage criterion in the case of multiple sets of partitions is called Each Choice coverage. Each Choice coverage requires test cases to exercise each partition from each set of partitions at least once (see the sketch below)
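A minimal sketch, assuming a hypothetical `ticket_price` function whose `age` input falls into three partitions: invalid (negative), child (0 to 17) and adult (18 and above). One representative value is chosen from each partition.

```python
import pytest

# Hypothetical function under test: child price below 18, adult price from 18.
def ticket_price(age):
    if age < 0:
        raise ValueError("invalid age")
    return 10 if age < 18 else 20

# One representative value per valid equivalence partition.
@pytest.mark.parametrize("age, expected", [(5, 10), (30, 20)])
def test_valid_partitions(age, expected):
    assert ticket_price(age) == expected

# One representative value from the invalid partition.
def test_invalid_partition():
    with pytest.raises(ValueError):
        ticket_price(-1)
```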
5.2.2. Boundary Value Analysis (BVA)
5.2.2.1. 2-value BVA
5.2.2.1.1. This boundary value and its closest neighbor belonging to the adjacent partition
5.2.2.2. 3-value BVA
5.2.2.2.1. This boundary value and both its neighbors
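Continuing the hypothetical `ticket_price` sketch above, the boundary between the child and adult partitions lies between 17 and 18. A 2-value BVA exercises the boundary value and its closest neighbor in the adjacent partition; a 3-value BVA would add 16 as well.

```python
import pytest

def ticket_price(age):  # same hypothetical function as in the partitioning sketch
    if age < 0:
        raise ValueError("invalid age")
    return 10 if age < 18 else 20

# 2-value BVA at the child/adult boundary: the boundary value (17) and its
# closest neighbor in the adjacent partition (18).
@pytest.mark.parametrize("age, expected", [(17, 10), (18, 20)])
def test_child_adult_boundary(age, expected):
    assert ticket_price(age) == expected
```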
5.2.3. Decision Table Testing
5.2.3.1. Decision tables are used for testing the implementation of requirements that specify how different combinations of conditions result in different outcomes. Decision tables are an effective way of recording complex logic, such as business rules
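A minimal sketch of deriving tests from a decision table, using an invented discount rule with two conditions (membership, order total of at least 100). Each parametrized entry corresponds to one column (rule) of the table, so executing all four gives full decision table coverage.

```python
import pytest

# Hypothetical business rule: members get 5%, orders of 100 or more get a
# further 10%; otherwise no discount.
def discount(is_member, order_total):
    pct = 0
    if is_member:
        pct += 5
    if order_total >= 100:
        pct += 10
    return pct

# Columns of the decision table: (is_member, total >= 100) -> discount %
@pytest.mark.parametrize("is_member, total, expected", [
    (True,  150, 15),   # rule 1: T, T
    (True,   50,  5),   # rule 2: T, F
    (False, 150, 10),   # rule 3: F, T
    (False,  50,  0),   # rule 4: F, F
])
def test_discount_rules(is_member, total, expected):
    assert discount(is_member, total) == expected
```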
5.2.4. State Transition Testing
5.2.4.1. All states coverage
5.2.4.1.1. To achieve 100% all states coverage, test cases must ensure that all the states are exercised
5.2.4.1.2. The coverage items are the states
5.2.4.1.3. Coverage is measured as the number of exercised states divided by the total number of states and is expressed as a percentage
5.2.4.2. Valid transitions coverage (0-switch coverage)
5.2.4.2.1. The coverage items are single valid transitions
5.2.4.2.2. To achieve 100% valid transitions coverage, test cases must exercise all the valid transitions
5.2.4.2.3. Coverage is measured as the number of exercised valid transitions divided by the total number of valid transitions and is expressed as a percentage
5.2.4.3. All transition coverage
5.2.4.3.1. The coverage items are all the transitions shown in a state table
5.2.4.3.2. To achieve 100% all transitions coverage, test cases must exercise all the valid transitions and attempt to execute invalid transitions
5.2.4.3.3. Testing only one invalid transition in a single test case helps to avoid defect masking, i.e., a situation in which one defect prevents the detection of another
5.2.4.3.4. Coverage is measured as the number of valid and invalid transitions exercised or attempted to be covered by executed test cases, divided by the total number of valid and invalid transitions, and is expressed as a percentage
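A small sketch of the valid transitions (0-switch) coverage calculation described above, using an invented three-state document workflow; the states, events and helper function are hypothetical.

```python
# Hypothetical state model: Draft --submit--> Review --approve--> Published,
# plus Review --reject--> Draft.
VALID_TRANSITIONS = {
    ("Draft", "submit"): "Review",
    ("Review", "approve"): "Published",
    ("Review", "reject"): "Draft",
}

def valid_transition_coverage(exercised):
    """Exercised valid transitions divided by all valid transitions, as a percentage."""
    covered = set(exercised) & set(VALID_TRANSITIONS)
    return 100.0 * len(covered) / len(VALID_TRANSITIONS)

# Two of the three valid transitions exercised -> about 66.7% coverage.
print(valid_transition_coverage([("Draft", "submit"), ("Review", "approve")]))
```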
5.3. **4.3 White-Box Test Techniques**
5.3.1. Statement Testing and Statement Coverage
5.3.1.1. When 100% statement coverage is achieved, it ensures that all executable statements in the code have been exercised at least once
5.3.1.2. Exercising a statement with a test case will not detect defects in all cases
5.3.1.3. It may not detect defects that are data dependent
5.3.1.4. 100% statement coverage does not ensure that all the decision logic has been tested as, for instance, it may not exercise all the branches in the code
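A hypothetical illustration of the last point: the single test below executes every statement of `safe_ratio` (100% statement coverage) yet never exercises the false outcome of the decision, so the defect on that path goes undetected.

```python
# Hypothetical function with a defect on the untested path:
# 'result' is never assigned when b == 0.
def safe_ratio(a, b):
    if b != 0:
        result = a / b
    return result

def test_statement_coverage_only():
    assert safe_ratio(10, 2) == 5  # every statement executed, defect not found
```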
5.3.2. Branch Testing and Branch Coverage
5.3.2.1. A branch is a transfer of control between two nodes in the control flow graph, which shows the possible sequences in which source code statements are executed in the test object. Each transfer of control can be either unconditional (i.e., straight-line code) or conditional (i.e., a decision outcome)
5.3.2.2. Coverage is measured as the number of branches exercised by the test cases divided by the total number of branches and is expressed as a percentage
5.3.2.3. Conditional branches typically correspond to a true or false outcome from an “if...then” decision, an outcome from a switch/case statement, or a decision to exit or continue in a loop
5.3.2.4. However, exercising a branch with a test case will not detect defects in all cases.
5.3.2.5. Any set of test cases achieving 100% branch coverage also achieves 100% statement coverage (but not vice versa)
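Continuing the hypothetical `safe_ratio` sketch above, two tests exercise both outcomes of the decision, giving 100% branch coverage (and therefore 100% statement coverage); exercising the false outcome demonstrates the failure caused by the defect (asserted here explicitly for illustration).

```python
import pytest

def safe_ratio(a, b):  # same hypothetical function as above
    if b != 0:
        result = a / b
    return result

def test_true_outcome():
    assert safe_ratio(10, 2) == 5

def test_false_outcome():
    # Exercising the false outcome of the decision triggers the failure
    # caused by the defect.
    with pytest.raises(UnboundLocalError):
        safe_ratio(10, 0)
```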
5.3.3. The Value of White-box Testing
5.3.3.1. Entire software implementation is taken into account during testing, which facilitates defect detection even when the software specification is vague, outdated or incomplete
5.3.3.2. A corresponding weakness is that if the software does not implement one or more requirements, white-box testing may not detect the resulting defects of omission
5.3.3.3. White-box test techniques can be used in static testing
5.3.3.4. Performing only black-box testing does not provide a measure of actual code coverage. White-box coverage measures provide an objective measurement of coverage and the necessary information to allow additional tests to be generated to increase this coverage, and subsequently increase confidence in the code
5.4. **4.4 Experience-based Test Techniques**
5.4.1. Error Guessing
5.4.1.1. How the application has worked in the past
5.4.1.2. The types of errors the developers tend to make and the types of defects that result from these errors
5.4.1.3. The types of failures that have occurred in other, similar applications
5.4.1.4. Fault attacks are a way to implement error guessing. This test technique requires the tester to create or acquire a list of possible errors, defects and failures, and to design tests that will identify defects associated with the errors, expose the defects, or cause the failures.
5.4.2. Exploratory Testing
5.4.2.1. Tests are simultaneously designed, executed, and evaluated while the tester learns about the test object.
5.4.2.2. Sometimes performed using session-based testing to structure the testing
5.4.2.3. The tester uses a test charter containing test objectives to guide the testing
5.4.2.4. Usually followed by a debriefing that involves a discussion between the tester and stakeholders interested in the test results of the test session
5.4.2.5. Exploratory testing is useful when there are few or inadequate specifications or there is significant time pressure on the testing
5.4.2.6. Exploratory testing will be more effective if the tester is experienced, has domain knowledge and has a high degree of essential skills, like analytical skills, curiosity and creativeness
5.4.3. Checklist-Based Testing
5.4.3.1. In checklist-based testing, a tester designs, implements, and executes tests to cover test conditions from a checklist.
5.4.3.2. Can be built based on experience, knowledge about what is important for the user, or an understanding of why and how software fails
5.4.3.3. Should not contain items that can be checked automatically, items better suited as entry criteria, exit criteria, or items that are too general
5.4.3.4. Checklist items are often phrased in the form of a question.
5.4.3.5. Can be created to support various test types, including functional and non-functional testing
5.4.3.6. If the checklists are high-level, some variability in the actual testing is likely to occur, resulting in potentially greater coverage but less repeatability
5.5. **4.5.Collaboration-based Test Approaches**
5.5.1. Collaborative User Story Writing
5.5.1.1. Card – the medium describing a user story
5.5.1.2. Conversation – explains how the software will be used
5.5.1.3. Confirmation – the acceptance criteria
5.5.1.4. The collaboration allows the team to obtain a shared vision of what should be delivered, by taking into account three perspectives: business, development and testing
5.5.1.5. Good user stories should be: Independent, Negotiable, Valuable, Estimable, Small and Testable (INVEST)
5.5.2. Acceptance Criteria
5.5.2.1. Define the scope of the user story
5.5.2.2. Reach consensus among the stakeholders
5.5.2.3. Describe both positive and negative scenarios
5.5.2.4. Serve as a basis for the user story acceptance testing
5.5.2.5. Allow accurate planning and estimation
5.5.2.6. **How to write?**
5.5.2.6.1. Scenario-oriented (e.g., Given/When/Then format used in BDD)
5.5.2.6.2. Rule-oriented (e.g., bullet point verification list, or tabulated form of input-output mapping)
5.5.3. Acceptance Test-driven Development (ATDD)
5.5.3.1. ATDD is a test-first approach
5.5.3.2. Test cases are created prior to implementing the user story. The test cases are created by team members with different perspectives, e.g., customers, developers, and testers
5.5.3.3. Test cases may be executed manually or automated
5.5.3.4. Steps in ATDD
5.5.3.4.1. First, the user story and its acceptance criteria are analyzed, discussed, and written by the team members
5.5.3.4.2. Next, the test cases are created, based on the acceptance criteria, as examples of how the software works
5.5.3.5. This will help the team implement the user story correctly
5.5.3.6. Typically, the first test cases are positive, confirming the correct behavior without exceptions or error conditions, and comprising the sequence of activities executed if everything goes as expected
5.5.3.7. The test cases must cover all the characteristics of the user story and should not go beyond the story
5.5.3.8. The acceptance criteria may detail some of the issues described in the user story
5.5.3.9. No two test cases should describe the same characteristics of the user story
5.5.3.10. When captured in a format supported by a test automation framework the developers can automate the test cases by writing the supporting code as they implement the feature described by a user story
5.5.3.11. The acceptance tests then become executable requirements
6. **5. Managing the Test Activities**
6.1. **5.1 Test Planning**
6.1.1. Purpose and Content of a Test Plan
6.1.1.1. To describe the test objectives, resources and processes for a test project
6.1.1.2. A test plan
6.1.1.2.1. Documents the means and schedule for achieving test objectives
6.1.1.2.2. Helps to ensure that the performed test activities will meet the established criteria
6.1.1.2.3. Serves as a means of communication with team members and other stakeholders
6.1.1.2.4. Demonstrates that testing will adhere to the existing test policy and test strategy (or explains why the testing will deviate from them)
6.1.1.3. Typical content of a test plan includes
6.1.1.3.1. Context of testing (e.g., test scope, test objectives, test basis)
6.1.1.3.2. Assumptions and constraints of the test project
6.1.1.3.3. Stakeholders (e.g., roles, responsibilities, relevance to testing, hiring and training needs)
6.1.1.3.4. Communication (e.g., forms and frequency of communication, documentation templates)
6.1.1.3.5. Risk register (e.g., product risks, project risks)
6.1.1.3.6. Test approach (e.g., test levels, test types, test techniques, test deliverables, entry criteria and exit criteria, independence of testing, metrics to be collected, test data requirements, test environment requirements, deviations from the test policy and test strategy)
6.1.1.3.7. Budget and schedule
6.1.2. Tester's Contribution to Iteration and Release Planning
6.1.2.1. Release planning
6.1.2.1.1. Looks ahead to the release of a product, defines and re-defines the product backlog, and may involve refining larger user stories into a set of smaller user stories
6.1.2.1.2. It also serves as the basis for the test approach and test plan across all iterations
6.1.2.1.3. Testers involved in release planning participate in writing testable user stories and acceptance criteria, participate in project and quality risk analyses, estimate test effort associated with user stories, determine the test approach, and plan the testing for the release.
6.1.2.2. Iteration planning
6.1.2.2.1. Looks ahead to the end of a single iteration and is concerned with the iteration backlog
6.1.2.2.2. Testers involved in iteration planning participate in the detailed risk analysis of user stories, determine the testability of user stories, break down user stories into tasks (particularly testing tasks), estimate test effort for all testing tasks, and identify and refine functional and non-functional aspects of the test object
6.1.3. Entry Criteria and Exit Criteria
6.1.3.1. Entry Criteria
6.1.3.1.1. Define the preconditions for undertaking a given activity
6.1.3.1.2. If entry criteria are not met, it is likely that the activity will prove to be more difficult, time-consuming, costly, and riskier
6.1.3.1.3. Typical entry criteria include: availability of resources (e.g., people, tools, environments, test data, budget, time), availability of testware (e.g., test basis, testable requirements, test cases), and the initial quality level of the test object (e.g., all smoke tests have passed)
6.1.3.2. Exit Criteria
6.1.3.2.1. Define what must be achieved to declare an activity completed
6.1.3.2.2. Typical exit criteria include: measures of thoroughness (e.g., achieved level of coverage, number of unresolved defects, defect density, number of failed test cases) and completion criteria (e.g., planned tests have been executed, static testing has been performed, all defects found are reported, all regression tests are automated)
6.1.3.2.3. Running out of time or budget can also be viewed as valid exit criteria
6.1.3.3. Entry criteria and exit criteria should be defined for each test level, and will differ based on the test objectives
6.1.4. Estimation Techniques
6.1.4.1. Metrics-based techniques
6.1.4.1.1. Estimation based on ratios
6.1.4.1.2. Extrapolation
6.1.4.2. Expert-based techniques
6.1.4.2.1. Wideband Delphi
6.1.4.2.2. Three-point estimation
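As a sketch, assuming the commonly used three-point (PERT-style) formula E = (a + 4m + b) / 6, where a is the most optimistic, m the most likely and b the most pessimistic estimate; the effort figures below are invented.

```python
# Three-point estimation with hypothetical effort figures (person-days).
def three_point_estimate(optimistic, most_likely, pessimistic):
    return (optimistic + 4 * most_likely + pessimistic) / 6

print(three_point_estimate(6, 9, 18))  # (6 + 36 + 18) / 6 = 10.0 person-days
```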
6.1.5. Test Case Prioritization
6.1.5.1. Risk-based prioritization, where the order of test execution is based on the results of risk analysis. Test cases covering the most important risks are executed first.
6.1.5.2. Coverage-based prioritization, where the order of test execution is based on coverage (e.g., statement coverage). Test cases achieving the highest coverage are executed first. In another variant, called additional coverage prioritization, the test case achieving the highest coverage is executed first; each subsequent test case is the one that achieves the highest additional coverage.
6.1.5.3. Requirements-based prioritization, where the order of test execution is based on the priorities of the requirements traced back to the corresponding test cases. Requirement priorities are defined by stakeholders. Test cases related to the most important requirements are executed first
6.1.5.4. Ideally, test cases would be ordered to run based on their priority levels, using, for example, one of the above-mentioned prioritization strategies. However, this practice may not work if the test cases or the features being tested have dependencies. If a test case with a higher priority is dependent on a test case with a lower priority, the lower priority test case must be executed first
6.1.5.5. The order of test execution must also take into account the availability of resources. For example, the required test tools, test environments or people that may only be available for a specific time window
6.1.6. Test Pyramid
6.1.6.1. The test pyramid is a model showing that different tests may have different granularity
6.1.6.2. The higher the layer, the lower the test granularity
6.1.6.3. Also, the higher the layer, the lower the test isolation (i.e., the more the tests depend on other elements of the system) and the higher the test execution time
6.1.6.4. Tests in the bottom layer are small, isolated, fast, and check a small piece of functionality, so usually a lot of them are needed to achieve a reasonable coverage
6.1.6.5. The top layer represents complex, high-level, end-to-end tests
6.1.6.6. These high-level tests are generally slower than the tests from the lower layers, and they typically check a large piece of functionality, so usually just a few of them are needed to achieve a reasonable level of coverage
6.1.6.7. The number and naming of the layers may differ
6.1.7. Testing Quadrants
6.1.7.1. Groups the test levels with the appropriate test types, activities, test techniques and work products in Agile software development
6.1.7.2. This model also provides a way to differentiate and describe the test types to all stakeholders, including developers, testers, and business representatives
6.1.7.3. The model supports test management in visualizing these to ensure that all appropriate test types and test levels are included in the SDLC and in understanding that some test types are more relevant to certain test levels than others
6.1.7.4. There are four quadrants
6.1.7.4.1. Quadrant Q1 (technology facing, support the team). This quadrant contains component tests and component integration tests. These tests should be automated and included in the CI process.
6.1.7.4.2. Quadrant Q2 (business facing, support the team). This quadrant contains functional tests, examples, user story tests, user experience prototypes, API testing, and simulations. These tests check the acceptance criteria and can be manual or automated
6.1.7.4.3. Quadrant Q3 (business facing, critique the product). This quadrant contains exploratory testing, usability testing, user acceptance testing. These tests are user-oriented and often manual.
6.1.7.4.4. Quadrant Q4 (technology facing, critique the product). This quadrant contains smoke tests and non-functional tests (except usability tests). These tests are often automated.
6.2. **5.2 Risk Management**
6.2.1. Allows organizations to increase the likelihood of achieving objectives, improve the quality of their products and increase the stakeholders’ confidence and trust
6.2.2. Risk management activities are
6.2.2.1. Risk analysis (consisting of risk identification and risk assessment)
6.2.2.1.1. To provide an awareness of product risk to focus the test effort in a way that minimizes the residual level of product risk. Ideally, product risk analysis begins early in the SDLC
6.2.2.1.2. Consists of risk identification and risk assessment
6.2.2.1.3. May influence the thoroughness and scope of testing; its results are used to determine the test levels and test types to perform, the test techniques and coverage to achieve, and the prioritization of testing
6.2.2.2. Risk control (consisting of risk mitigation and risk monitoring)
6.2.2.2.1. Comprises all measures that are taken in response to identified and assessed product risks
6.2.2.2.2. Actions that can be taken to mitigate the product risks by testing include selecting testers with the right experience and skills, applying an appropriate level of test independence, performing reviews and static analysis, applying the appropriate test techniques, coverage levels and test types, and performing dynamic testing, including regression testing
6.2.3. The test approach, in which test activities are selected, prioritized, and managed based on risk analysis and risk control, is called risk-based testing
6.2.4. Risk Definition and Risk Attributes
6.2.4.1. Risk is a potential event, hazard, threat, or situation whose occurrence causes an adverse effect
6.2.4.2. Risk can be characterized by two factors
6.2.4.2.1. Risk likelihood – the probability of the risk occurrence (greater than zero and less than one)
6.2.4.2.2. Risk impact (harm) – the consequences of this occurrence
6.2.4.3. These two factors express the risk level, which is a measure of the risk. The higher the risk level, the more important its treatment is. Risk level = likelihood * impact (see the sketch below)
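A tiny worked example of the formula above; the likelihood and impact values are invented, and in practice many teams use ordinal scales (e.g., low/medium/high) instead of numbers.

```python
# Risk level = risk likelihood * risk impact (hypothetical values).
likelihood = 0.3   # probability of occurrence (between 0 and 1)
impact = 8         # harm on an assumed 1-10 scale
print(likelihood * impact)  # -> 2.4, used to compare and prioritize risks
```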
6.2.5. Project Risks and Product Risks
6.2.5.1. Project risks
6.2.5.1.1. Organizational issues (e.g., delays in work products deliveries, inaccurate estimates, cost cutting)
6.2.5.1.2. People issues (e.g., insufficient skills, conflicts, communication problems, shortage of staff)
6.2.5.1.3. Technical issues (e.g., scope creep, poor tool support)
6.2.5.1.4. Supplier issues (e.g., third-party delivery failure, bankruptcy of the supporting company)
6.2.5.1.5. Project risks, when they occur, may have an impact on the project schedule, budget or scope, which affects the project's ability to achieve its objectives
6.2.5.2. Product risks, when they occur, may result in various negative consequences, including:
6.2.5.2.1. User dissatisfaction
6.2.5.2.2. Loss of revenue, trust, reputation
6.2.5.2.3. Damage to third parties
6.2.5.2.4. High maintenance costs, overload of the help desk
6.2.5.2.5. Criminal penalties
6.2.5.2.6. In extreme cases, physical damage, injuries or even death
6.3. **5.3.Test Monitoring, Test Control and Test Completion**
6.3.1. **Test monitoring** is concerned with gathering information about testing
6.3.1.1. This information is used to assess test progress and to measure whether the exit criteria or the test tasks associated with the exit criteria are satisfied, such as meeting the targets for coverage of product risks, requirements, or acceptance criteria
6.3.2. **Test control** uses the information from test monitoring
6.3.2.1. To provide, in the form of control directives, guidance and the necessary corrective actions to achieve the most effective and efficient testing
6.3.2.1.1. Reprioritizing tests when an identified risk becomes an issue
6.3.2.1.2. Re-evaluating whether a test item meets entry criteria or exit criteria due to rework
6.3.2.1.3. Adjusting the test schedule to address a delay in the delivery of the test environment
6.3.2.1.4. Adding new resources when and where needed
6.3.3. **Test completion** collects data from completed test activities to consolidate experience, testware, and any other relevant information
6.3.3.1. Test completion activities occur at project milestones such as when a test level is completed, an agile iteration is finished, a test project is completed (or cancelled), a software system is released, or a maintenance release is completed
6.3.4. **Metrics used in Testing** to show progress against the planned test schedule and budget, the current quality of the test object, and the effectiveness of the test activities with respect to the test objectives or an iteration goal
6.3.4.1. Common test metrics include:
6.3.4.1.1. Project progress metrics (e.g., task completion, resource usage, test effort)
6.3.4.1.2. Test progress metrics (e.g., test case implementation progress, test environment preparation progress, number of test cases run/not run, passed/failed, test execution time)
6.3.4.1.3. Product quality metrics (e.g., availability, response time, mean time to failure)
6.3.4.1.4. Defect metrics (e.g., number and priorities of defects found/fixed, defect density, defect detection percentage)
6.3.4.1.5. Risk metrics (e.g., residual risk level)
6.3.4.1.6. Coverage metrics (e.g., requirements coverage, code coverage)
6.3.4.1.7. Cost metrics (e.g., cost of testing, organizational cost of quality)
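As one small worked example (numbers invented), the defect detection percentage (DDP) mentioned above is commonly computed as the defects found by testing divided by the total of defects found by testing plus those found afterwards (e.g., in production).

```python
# Defect detection percentage (DDP), a common defect metric (hypothetical numbers).
def ddp(found_in_testing, found_after_release):
    return 100.0 * found_in_testing / (found_in_testing + found_after_release)

print(ddp(90, 10))  # -> 90.0 percent
```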
6.3.5. Purpose, Content and Audience for **Test Reports**
6.3.5.1. During test monitoring and test control, the test team generates test progress reports for stakeholders to keep them informed. Test progress reports are usually generated on a regular basis (daily, weekly, ...)
6.3.5.2. Typical test progress reports include:
6.3.5.2.1. Testing period
6.3.5.2.2. Test progress (e.g., ahead or behind schedule), including any notable deviations
6.3.5.2.3. Impediments for testing, and their workarounds
6.3.5.2.4. Test metrics
6.3.5.2.5. New and changed risks within testing period
6.3.5.2.6. Testing planned for the next period
6.3.5.3. A test completion report is prepared during test completion, when a project, test level, or test type is complete and when, ideally, its exit criteria have been met
6.3.5.4. This report uses test progress reports and other data. Typical test completion reports include:
6.3.5.4.1. Test summary
6.3.5.4.2. Testing and product quality evaluation based on the original test plan (i.e., test objectives and exit criteria)
6.3.5.4.3. Deviations from the test plan (e.g., differences from the planned test schedule, duration, and effort)
6.3.5.4.4. Testing impediments and workarounds
6.3.5.4.5. Test metrics based on test progress reports
6.3.5.4.6. Unmitigated risks, defects not fixed
6.3.5.4.7. Lessons learned that are relevant to the testing
6.3.5.5. Different audiences require different information in the reports and influence the degree of formality and the frequency of test reporting. Test progress reporting to others in the same team is often frequent and informal, while test completion reporting follows a set template and occurs only once.
6.3.6. Communicating the Status of Testing
6.3.6.1. The options include
6.3.6.1.1. Verbal communication with team members and other stakeholders
6.3.6.1.2. Dashboards (e.g., CI/CD dashboards, task boards, and burn-down charts)
6.3.6.1.3. Electronic communication channels (e.g., email, chat)
6.3.6.1.4. Online documentation
6.3.6.1.5. Formal test reports
6.4. **5.4 Configuration Management**
6.4.1. Provides a discipline for identifying, controlling, and tracking work products such as test plans, test strategies, test conditions, test cases, test scripts, test results, test logs, and test reports as configuration items.
6.4.2. Configuration management keeps a record of changed configuration items when a new baseline is created. It is possible to revert to a previous baseline to reproduce previous test results.
6.4.3. To properly support testing, CM ensures the following
6.4.3.1. All configuration items, including test items (individual parts of the test object), are uniquely identified, version controlled, tracked for changes, and related to other configuration items so that traceability can be maintained throughout the test process
6.4.3.2. All identified documentation and software items are referenced unambiguously in testware
6.4.4. Continuous integration, continuous delivery, continuous deployment and the associated testing are typically implemented as part of an automated DevOps pipeline, in which automated CM is normally included
6.5. **5.5.Defect Management**
6.5.1. The defect management process includes a workflow for handling individual defects or anomalies from their discovery to their closure and rules for their classification
6.5.2. The process must be followed by all involved stakeholders
6.5.3. It is advisable to handle defects from static testing (especially static analysis) in a similar way
6.5.4. Typical defect reports have the following objectives
6.5.4.1. Provide those responsible for handling and resolving reported defects with sufficient information to resolve the issue
6.5.4.2. Provide a means of tracking the quality of the work product
6.5.4.3. Provide ideas for improvement of the development and test process
6.5.5. A defect report logged during dynamic testing typically includes
6.5.5.1. Unique identifier
6.5.5.2. Title with a short summary of the anomaly being reported
6.5.5.3. Date when the anomaly was observed, issuing organization, and author, including their role
6.5.5.4. Identification of the test object and test environment
6.5.5.5. Context of the defect (e.g., test case being run, test activity being performed, SDLC phase, and other relevant information such as the test technique, checklist or test data being used)
6.5.5.6. Description of the failure to enable reproduction and resolution including the test steps that detected the anomaly, and any relevant test logs, database dumps, screenshots, or recordings
6.5.5.7. Expected results and actual results
6.5.5.8. Severity of the defect (degree of impact) on the interests of stakeholders or requirements
6.5.5.9. Priority to fix
6.5.5.10. Status of the defect (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed, rejected)
6.5.5.11. References (e.g., to the test case)
7. **6.1 Tool Support for Testing**
7.1. Test tools support and facilitate many test activities. Examples include, but are not limited to
7.1.1. Test management tools – increase the test process efficiency by facilitating management of the SDLC, requirements, tests, defects, configuration
7.1.2. Static testing tools – support the tester in performing reviews and static analysis
7.1.3. Test design and test implementation tools – facilitate generation of test cases, test data and test procedures
7.1.4. Test execution and test coverage tools – facilitate automated test execution and coverage measurement
7.1.5. Non-functional testing tools – allow the tester to perform non-functional testing that is difficult or impossible to perform manually
7.1.6. DevOps tools – support the DevOps delivery pipeline, workflow tracking, automated build process(es), CI/CD
7.1.7. Collaboration tools – facilitate communication
7.1.8. Tools supporting scalability and deployment standardization (e.g., virtual machines, containerization tools)
7.1.9. Any other tool that assists in testing (e.g., a spreadsheet is a test tool in the context of testing)
8. **6.2 Benefits and Risks of Test Automation**
8.1. Potential benefits of using test automation include:
8.1.1. Time saved by reducing repetitive manual work (e.g., execute regression tests, re-enter the same test data, compare expected results vs actual results, and check against coding standards)
8.1.2. Prevention of simple human errors through greater consistency and repeatability (e.g., tests are consistently derived from requirements, test data is created in a systematic manner, and tests are executed by a tool in the same order with the same frequency)
8.1.3. More objective assessment (e.g., coverage) and providing measures that are too complicated for humans to determine
8.1.4. Easier access to information about testing to support test management and test reporting (e.g., statistics, graphs, and aggregated data about test progress, failure rates, and test execution duration)
8.1.5. Reduced test execution times to provide earlier defect detection, faster feedback and faster time to market
8.1.6. More time for testers to design new, deeper and more effective tests
8.2. Potential risks of using test automation include:
8.2.1. Unrealistic expectations about the benefits of a tool (including functionality and ease of use).
8.2.2. Inaccurate estimations of time, costs, effort required to introduce a tool, maintain test scripts and change the existing manual test process.
8.2.3. Using a test tool when manual testing is more appropriate.
8.2.4. Relying on a tool too much, e.g., ignoring the need of human critical thinking.
8.2.5. The dependency on the tool vendor which may go out of business, retire the tool, sell the tool to a different vendor or provide poor support (e.g., responses to queries, upgrades, and defect fixes).
8.2.6. Using open-source software which may be abandoned, meaning that no further updates are available, or whose internal components may require quite frequent updates as part of further development.
8.2.7. The automation tool is not compatible with the development platform.
8.2.8. Choosing an unsuitable tool that does not comply with regulatory requirements and/or safety standards.