1. (2) White-Box Test Techniques
1.1. Statement testing
1.1.1. <--About it
1.1.1.1. The aim is to design test cases that exercise statements in the code until an acceptable level of coverage is achieved
1.1.1.2. 100% statement coverage does not ensure that all the decision logic has been tested
1.1.2. coverage
1.1.2.1. coverage = (number of statements exercised by the test cases / total number of executable statements in the code) × 100%
1.1.2.2. When 100% statement coverage is achieved, all executable statements in the code have been exercised at least once
1.2. Branch testing
1.2.1. <--About it
1.2.1.1. the aim is to design test cases to exercise branches in the code until an acceptable level of coverage is achieved
1.2.1.2. Even 100% branch coverage does not ensure that all the decision logic has been tested (e.g., outcomes of individual conditions within compound decisions)
1.2.2. coverage
1.2.2.1. coverage = (number of branches exercised by the test cases / total number of branches) × 100%
1.2.2.2. When 100% branch coverage is achieved, all branches in the code, both unconditional and conditional, have been exercised by test cases
1.2.3. Any set of test cases achieving 100% branch coverage also achieves 100% statement coverage (but not vice versa).
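The subsumption relation above can be seen in a small sketch (hypothetical function, not from the source): one test case reaches 100% statement coverage yet only half the branches.

```python
# Illustrative sketch: a function whose `if` has no `else`, so a single test
# can execute every statement without exercising every branch.

def shipping_fee(order_total: int) -> int:
    """Return a flat fee of 5, waived for orders of 50 or more."""
    fee = 5
    if order_total >= 50:  # True branch, plus an implicit False branch
        fee = 0
    return fee

# This single test executes every statement (100% statement coverage) but
# never takes the False branch of the `if` (50% branch coverage):
assert shipping_fee(60) == 0

# A second test exercises the remaining branch; together the two tests reach
# 100% branch coverage, which in turn implies 100% statement coverage:
assert shipping_fee(10) == 5
```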
1.3. Value of White-box Testing
1.3.1. White-box techniques can be used in static testing
1.3.2. well suited to reviewing code that is not yet ready for execution
1.3.3. Performing only black-box testing does not provide a measure of actual code coverage
2. Collaboration-based Test Approaches
2.1. focus on defect avoidance by collaboration and communication
2.2. Collaborative User Story Writing
2.2.1. User stories have three critical aspects, together called the “3 C’s”
2.2.1.1. Card – the medium describing a user story
2.2.1.2. Conversation – explains how the software will be used
2.2.1.3. Confirmation – the acceptance criteria
2.2.2. The most common format for a user story
2.2.2.1. “As a [role], I want [goal to be accomplished], so that I can [resulting business value for the role]”
2.2.3. Good user stories should be (INVEST)
2.2.3.1. Independent
2.2.3.2. Negotiable
2.2.3.3. Valuable
2.2.3.4. Estimable
2.2.3.5. Small
2.2.3.6. Testable
2.3. Acceptance Criteria
2.3.1. They are the conditions that an implementation of the user story must meet to be accepted by stakeholders
2.3.2. <---used to
2.3.2.1. Define the scope of the user story
2.3.2.2. Reach consensus among the stakeholders
2.3.2.3. Describe both positive and negative scenarios
2.3.2.4. Serve as a basis for the user story acceptance testing
2.3.2.5. Allow accurate planning and estimation
2.3.3. most common formats to write acceptance criteria for a user story
2.3.3.1. Scenario-oriented (e.g., Given/When/Then format used in BDD)
2.3.3.2. Rule-oriented (e.g., bullet point verification list, or tabulated form of input-output mapping)
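A minimal sketch of the scenario-oriented format (hypothetical user story and function names, not from the source): a Given/When/Then acceptance criterion expressed as an executable check, including a negative scenario as recommended above.

```python
# Hypothetical story: "As a customer, I want to apply a voucher,
# so that I can pay less."

def apply_voucher(cart_total: int, voucher_value: int) -> int:
    """Deduct the voucher from the cart total; the total never goes below zero."""
    return max(cart_total - voucher_value, 0)

def test_voucher_reduces_total():
    # Given a cart totaling 100
    cart_total = 100
    # When the customer applies a voucher worth 30
    new_total = apply_voucher(cart_total, 30)
    # Then the total is reduced to 70
    assert new_total == 70

def test_voucher_never_makes_total_negative():
    # Negative scenario: a voucher larger than the cart total
    assert apply_voucher(20, 30) == 0

test_voucher_reduces_total()
test_voucher_never_makes_total_negative()
```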
2.4. Acceptance Test-driven Development (ATDD)
2.4.1. it is a test-first approach
2.4.2. Test cases are created prior to implementing the user story
2.4.3. The test cases are created by team members with different perspectives
2.4.4. Test cases may be executed manually or automated
2.4.5. After the positive test cases are done, the team should perform negative testing
2.4.6. The test cases must cover all the characteristics of the user story
3. (1) Black-Box Test Techniques
3.1. Equivalence Partitioning (EP)
3.1.1. About it-->
3.1.1.1. Divides data into partitions
3.1.1.2. including inputs, outputs, configuration items, internal values, time-related values, and interface parameters
3.1.1.3. The partitions may be continuous or discrete, ordered or unordered, finite or infinite
3.1.1.4. The partitions must not overlap and must be non-empty sets
3.1.1.5. An invalid partition contains invalid values; a valid partition contains valid values
3.1.2. coverage
3.1.2.1. coverage = (number of partitions exercised by at least one test case / total number of identified partitions) × 100%
3.1.2.2. To achieve 100% coverage with this technique, test cases must exercise all identified partitions (including invalid partitions), covering each partition at least once
3.1.2.3. Each Choice coverage
3.1.2.3.1. The simplest coverage criterion in the case of multiple sets of partitions: each partition from every set must be exercised by at least one test case
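As a sketch (assumed example input, not from the source), the partitions for an "age" field of a hypothetical registration form, with the coverage formula applied:

```python
# Each partition maps to one representative test value.
partitions = {
    "invalid: negative age": -1,
    "valid: minor (0-17)": 10,
    "valid: adult (18-64)": 30,
    "valid: senior (65+)": 70,
}

# Suppose the test cases written so far exercise only three of the four
# partitions (the senior partition is still untested).
exercised = {"invalid: negative age", "valid: minor (0-17)", "valid: adult (18-64)"}

# coverage = exercised partitions / identified partitions * 100%
coverage = len(exercised) / len(partitions) * 100
print(f"EP coverage: {coverage:.0f}%")  # 3 of 4 partitions -> 75%
```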
3.2. Boundary Value Analysis (BVA)
3.2.1. About it-->
3.2.1.1. focuses on the boundary values of the partitions because developers are more likely to make errors with these boundary values
3.2.1.2. used only for ordered partitions
3.2.1.3. The minimum and maximum values of a partition are its boundary values
3.2.2. 2-value BVA
3.2.2.1. two coverage items for each boundary value
3.2.2.2. the boundary value and its closest neighbor belonging to the adjacent partition
3.2.3. 3-value BVA
3.2.3.1. three coverage items for each boundary value
3.2.3.2. the boundary value and both its neighbors.
3.2.4. coverage
3.2.4.1. coverage = (number of boundary values exercised by at least one test case / total number of boundary values) × 100%
3.3. Decision Table Testing
3.3.1. About it-->
3.3.1.1. effective way of recording complex logic, such as business rules
3.3.1.2. the conditions and the resulting actions of the system are defined
3.3.1.3. Each column corresponds to a decision rule
3.3.1.4. It provides a systematic approach to identify all the combinations of conditions
3.3.1.5. helps to find any gaps or contradictions in the requirements
3.3.2. coverage
3.3.2.1. coverage = (number of decision rules exercised by at least one test case / total number of decision rules) × 100%
3.3.2.2. To achieve 100% coverage with this technique, test cases must exercise all columns (decision rules) of the table
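A sketch (hypothetical business rule, not from the source): a small decision table for a loan check, one entry per decision rule, with the rule-coverage formula applied. Any condition combination missing from the table would surface as a gap in the requirements.

```python
# Conditions (has_income, good_credit) -> action, one column (rule) each.
decision_table = {
    "R1": ((True, True), "approve"),
    "R2": ((True, False), "refer"),
    "R3": ((False, True), "refer"),
    "R4": ((False, False), "reject"),
}

def decide(has_income: bool, good_credit: bool) -> str:
    for _, (conditions, action) in decision_table.items():
        if conditions == (has_income, good_credit):
            return action
    raise ValueError("uncovered combination -- a gap in the rules")

# Two test cases exercise two of the four rules:
assert decide(True, True) == "approve"   # exercises R1
assert decide(False, False) == "reject"  # exercises R4

covered_rules = {"R1", "R4"}
coverage = len(covered_rules) / len(decision_table) * 100  # 50% so far
```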
3.4. State Transition Testing
3.4.1. state transition diagram
3.4.1.1. It models the behavior of a system by showing its possible states and valid state transitions
3.4.1.2. A transition is initiated by an event
3.4.1.3. The transitions may sometimes result in the software taking action
3.4.1.4. The common transition labeling syntax is as follows: “event [guard condition] / action”
3.4.2. State table
3.4.2.1. equivalent to a state transition diagram
3.4.2.2. Its rows represent states, and its columns represent events
3.4.2.3. Table entries (cells) represent transitions, and contain the target state
3.4.3. coverage
3.4.3.1. all states coverage
3.4.3.1.1. coverage = (number of visited states / total number of states) × 100%
3.4.3.2. valid transitions coverage (0-switch coverage)
3.4.3.2.1. coverage = (number of exercised valid transitions / total number of valid transitions) × 100%
3.4.3.3. all transitions coverage
3.4.3.3.1. coverage = (number of valid and invalid transitions exercised by executed test cases / total number of valid and invalid transitions) × 100%
3.4.3.4. Achieving full “all transitions” coverage guarantees both full “all states” coverage and full “valid transitions” coverage
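The state table and the three coverage levels can be sketched as follows (assumed door model, not from the source): the table is a mapping of (state, event) to target state; any pair absent from the table is an invalid transition, which a robust implementation should reject.

```python
state_table = {
    ("closed", "open"): "opened",
    ("opened", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
}

def step(state: str, event: str) -> str:
    if (state, event) not in state_table:
        raise ValueError(f"invalid transition: {event!r} in state {state!r}")
    return state_table[(state, event)]

# One test sequence exercising every valid transition (100% valid transitions
# coverage), which necessarily also visits all three states:
state = "closed"
for event in ["open", "close", "lock", "unlock"]:
    state = step(state, event)
assert state == "closed"

# "All transitions" coverage additionally requires attempting invalid
# transitions, e.g. trying to lock an opened door:
try:
    step("opened", "lock")
except ValueError:
    pass  # invalid transition correctly rejected
```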
4. (3) Experience-based Test Techniques
4.1. Error guessing
4.1.1. a technique used to anticipate the occurrence of errors, defects, and failures, based on the tester’s knowledge, including:
4.1.1.1. How the application has worked in the past
4.1.1.2. The types of errors the developers tend to make
4.1.1.3. The types of failures that have occurred in other, similar applications
4.1.2. Fault attacks
4.1.2.1. methodical approach to the implementation of error guessing
4.1.2.2. requires the tester to create or acquire a list of possible errors, defects, and failures, and to design tests that will identify the associated defects
4.1.2.3. These lists can be built based on experience, defect and failure data, or from common knowledge
4.2. Exploratory testing
4.2.1. It is useful-->
4.2.1.1. when there are few or inadequate specifications or there is significant time pressure on the testing.
4.2.1.2. to complement other more formal test techniques.
4.2.1.3. will be more effective if the tester is experienced
4.2.1.4. can incorporate the use of other test techniques
4.2.1.5. In a session-based approach, exploratory testing is conducted within a defined time-box
4.2.2. It is used to learn more about the test object, to explore it more deeply with focused tests, and to create tests for untested areas
4.2.3. The tester uses a test charter containing test objectives to guide the testing
4.3. Checklist-based testing
4.3.1. tester designs, implements, and executes tests to cover test conditions from a checklist
4.3.2. Checklists can be built based on--->
4.3.2.1. experience
4.3.2.2. knowledge about what is important for the user
4.3.2.3. understanding of why and how software fails
4.3.3. Checklists should not contain--->
4.3.3.1. items that can be checked automatically
4.3.3.2. items better suited as entry/exit criteria
4.3.3.3. items that are too general
4.3.4. Checklists can be created to support various test types, including functional and non-functional testing
4.3.5. In the absence of detailed test cases, it can provide guidelines and consistency
4.3.6. If the checklists are high-level, some variability in the actual testing is likely to occur, resulting in potentially greater coverage but less repeatability.