1. Software Faults, Failures, and Errors
1.1. Faults in software are equivalent to design mistakes in hardware.
1.2. Software Fault: A static defect in the software (a programmer mistake). Ex. The doctor tries to diagnose the root cause: the ailment.
1.3. Software Failure: External, incorrect behavior with respect to the requirements or another description of the expected behavior. Ex. A patient gives a doctor a list of symptoms.
1.4. Software Error: An incorrect internal state that is the manifestation of some fault (refers to the difference between the actual output and the expected output). Ex. The doctor may look for anomalous internal conditions (high blood pressure, irregular heartbeat, bacteria in the blood stream).
1.5. Bug is used informally. Sometimes speakers mean fault, sometimes error, sometimes failure ... often the speaker doesn't know what it means!
1.6. Spectacular Software Failure
1.6.1. NASA’s Mars lander
1.6.2. THERAC-25 radiation machine
1.6.3. Ariane 5 explosion
1.6.4. Intel’s Pentium FDIV fault
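The fault/error/failure distinction above can be illustrated with a small sketch (the function and values are hypothetical, not from the notes):

```python
# Hypothetical example: a function meant to return the average of a list.
def average(values):
    # FAULT: a static defect in the code -- the divisor should be
    # len(values), not len(values) - 1.
    total = sum(values)
    return total / (len(values) - 1)

# ERROR: during execution of average([2, 4, 6]) the internal state is
# incorrect (divisor 2 instead of 3), even though no exception is raised.
# FAILURE: the externally observable output differs from the expected one.
result = average([2, 4, 6])
print(result)  # prints 6.0, but the expected output is 4.0
```

Note that the fault exists even when the function is never called; the error and the failure only occur when the faulty code is executed.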
2. Cost of not Testing
2.1. Testing is the most time-consuming and expensive part of software development
2.2. Not testing is even more expensive
2.3. If too little testing effort is invested early, the cost of testing increases later
2.4. Planning for testing only after development is prohibitively expensive
3. Test cases
3.1. A test case is a set of test inputs, execution conditions, and expected results developed for a particular requirement or method.
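The parts of a test case can be sketched as a simple record (the field names, requirement ID, and `square` function are illustrative, not from the notes):

```python
# A minimal sketch of a test case: inputs, execution conditions,
# and the expected result for a particular requirement.
test_case = {
    "requirement": "REQ-7: square() returns the square of its input",
    "inputs": {"x": 5},
    "execution_conditions": "no special setup required",
    "expected_result": 25,
}

# Hypothetical method under test.
def square(x):
    return x * x

# Executing the test case: run the method with the test inputs and
# compare the actual result against the expected result.
actual = square(**test_case["inputs"])
assert actual == test_case["expected_result"]
```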
4. Stages of testing
4.1. Development testing, where the system is tested during development to discover bugs and defects.
4.1.1. Development testing includes all testing activities that are carried out by the team developing the system.
4.1.1.1. Unit testing
4.1.1.1.1. where individual program units or object classes are tested. Unit testing should focus on testing the functionality of objects or methods.
4.1.1.2. Component testing
4.1.1.2.1. where several individual units are integrated to create composite components. Component testing should focus on testing component interfaces.
4.1.1.3. System testing
4.1.1.3.1. where some or all of the components in a system are integrated and the system is tested as a whole. System testing should focus on testing component interactions.
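A minimal unit-testing sketch using Python's `unittest` module; the `Stack` class is a hypothetical unit under test, and each test method exercises the functionality of one object in isolation:

```python
import unittest

# Hypothetical class under test.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items

class StackTest(unittest.TestCase):
    # Unit tests focus on the functionality of one class or method.
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_new_stack_is_empty(self):
        self.assertTrue(Stack().is_empty())
```

Such tests are typically run with `python -m unittest`, which discovers and executes every `test_*` method.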
4.2. Release testing, where a separate testing team tests a complete version of the system before it is released to users.
4.2.1. Release testing is a form of system testing
4.2.2. The aim of release testing is to check that the system meets its requirements and is good enough for external use
4.2.3. Release testing is usually a black-box testing process where tests are only derived from the system specification
4.2.4. Release testing, therefore, has to show that the system delivers its specified functionality, performance and dependability, and that it does not fail during normal use.
4.2.5. Performance testing is part of release testing
4.2.5.1. Stress testing is a form of performance testing where the system is deliberately overloaded to test its failure behaviour.
4.3. User testing, where users or potential users of a system test the system in their own environment.
4.3.1. The stage in the testing process in which users or customers provide input and advice on system testing
4.3.2. Types of user testing
4.3.2.1. Alpha testing
4.3.2.1.1. Users of the software work with the development team to test the software at the developer’s site.
4.3.2.2. Beta testing
4.3.2.2.1. A release of the software is made available to users to allow them to experiment and to raise problems that they discover with the system developers. Primarily for software products that are used in many different environments.
4.3.2.3. Acceptance testing
4.3.2.3.1. Customers test a system to decide whether or not it is ready to be accepted from the system developers and deployed in the customer environment. Primarily for custom systems.
4.3.2.4. Agile development typically uses only alpha testing, because the release is not publicly available
4.3.2.5. Acceptance testing exists in agile development, but not as a separate activity.
5. Testing Techniques
5.1. Black-box testing
5.1.1. Looking at the program from an external point of view and deriving test cases based on the specification
5.1.2. The only criterion on which the program is judged is whether it produces the correct output for a given input.
5.2. White-box testing
5.2.1. Testing based on analysis of internal logic (design, code, etc.), but expected results still come from the requirements.
5.2.2. White-box testing techniques apply primarily to lower levels of testing (e.g., Unit and Component).
5.2.3. Also called structural testing
5.2.4. White-box testing techniques
5.2.4.1. Control flow testing
5.2.4.1.1. ▶ The control structure of a program can be represented by the Control Flow Graph (CFG).
5.2.4.2. Data flow testing
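Control flow testing can be sketched as follows: tests are derived from the program's internal branch structure rather than from the specification alone (the `classify` function and its threshold are hypothetical):

```python
# Hypothetical function: its CFG has one decision node with two outgoing edges.
def classify(temp):
    if temp > 100:             # decision node
        label = "boiling"      # true branch
    else:
        label = "not boiling"  # false branch
    return label               # junction node

# Branch coverage: choose at least one test input per outgoing edge
# of the decision node, so every branch of the CFG is executed.
assert classify(150) == "boiling"      # covers the true edge
assert classify(20) == "not boiling"   # covers the false edge
```

Note that the expected values ("boiling", "not boiling") still come from the requirements; only the choice of inputs is driven by the code's structure.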
6. Software inspections and testing
6.1. Software inspections: concerned with analysis of the static system representation to discover problems (static verification or static V&V) ▶ Inspections and reviews analyze and check the system requirements, design models, the program source code, and even proposed system tests ▶ May be supplemented by tool-based document and code analysis ▶ The software does not need to be executed to verify it
6.1.1. May be applied to requirements specifications, software architectures, UML design models, programs, and database schemas.
6.2. Software testing: concerned with exercising and observing product behaviour (dynamic verification) ▶ The system is executed with test data and its operational behaviour is observed.
6.2.1. May be applied to programs and prototypes.
6.3. ▶ Inspections and testing are complementary, not opposing, verification techniques. ▶ Both should be used during the V&V process. ▶ Inspections can check conformance with a specification but not conformance with the customer's real requirements. ▶ Inspections cannot check non-functional characteristics such as performance, usability, etc.
7. Testing can reveal the presence of errors, NOT their absence
8. Testing is intended to show that a program does what it is intended to do and to discover program defects before it is put into use.
9. Program testing goals
9.1. To demonstrate that the software meets its requirements; this leads to validation testing
9.1.1. Validation testing. You expect the system to perform correctly using a given set of test cases that reflect the system’s expected use.
9.1.1.1. A successful test shows that the system operates as intended.
9.2. To discover situations in which the behavior of the software is incorrect, undesirable, or does not conform to its specification; this leads to defect testing
9.2.1. Defect testing. The test cases are designed to expose defects. The test cases in defect testing can be deliberately obscure and need not reflect how the system is normally used.
9.2.1.1. A successful test is a test that makes the system perform incorrectly and so exposes a defect in the system.
10. Testing is part of a more general Verification and Validation
10.1. Verification ▶ "Are we building the product right?" ▶ The software should conform to its specification ▶ The aim of verification is to check that the software meets its stated functional and non-functional requirements.
10.2. Validation ▶ "Are we building the right product?" ▶ The software should do what the user really requires. ▶ The aim of validation is to ensure that the software meets the customer’s expectations.
11. Test-Driven Development
11.1. An approach to program development in which you interleave testing and code development
11.2. Tests are written before code and ‘passing’ the tests is the critical driver of development.
11.3. You develop code incrementally; you don’t move on to the next increment until the code that you have developed passes its tests.
11.4. Test-Driven activities
11.4.1. Start by identifying the increment of functionality that is required. This should normally be small and implementable in a few lines of code.
11.4.2. Write a test for this functionality and implement this as an automated test.
11.4.3. Run the test, along with all other tests that have been implemented. Initially, you have not implemented the functionality so the new test will fail.
11.4.4. Implement the functionality and re-run the test.
11.4.5. Once all tests run successfully, you move on to implementing the next chunk of functionality.
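One iteration of the activities above can be sketched as follows (the `is_leap_year` increment is a hypothetical example, not from the notes):

```python
# Steps 1-2: identify a small increment of functionality and write an
# automated test for it BEFORE the implementation exists.
def test_leap_year():
    assert is_leap_year(2024) is True
    assert is_leap_year(2023) is False
    assert is_leap_year(1900) is False  # century rule
    assert is_leap_year(2000) is True   # 400-year rule

# Step 3: running the test now fails, because is_leap_year
# has not been implemented yet.

# Step 4: implement just enough functionality to make the test pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 5: re-run the test; once it (and all other tests) pass,
# move on to the next increment.
test_leap_year()
```

The key discipline is the ordering: the failing test is written first and drives what gets implemented.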
11.5. Benefits of test-driven development
11.5.1. Code coverage
11.5.2. Regression testing
11.5.2.1. Regression testing is testing the system to check that changes have not ‘broken’ previously working code.
11.5.3. Simplified debugging
11.5.4. System documentation
11.5.4.1. The tests themselves act as a form of system documentation that describes what the code should be doing.
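Regression testing, one of the benefits listed above, can be sketched as a suite that is re-run in full after every change (the functions and suite layout are hypothetical):

```python
# Hypothetical code under development.
def add(a, b):
    return a + b

def multiply(a, b):
    return a * b

# A regression suite: (function, arguments, expected result) triples
# accumulated from all previously implemented increments.
REGRESSION_SUITE = [
    (add, (2, 3), 5),
    (add, (-1, 1), 0),
    (multiply, (2, 3), 6),
]

def run_regression_suite():
    # Re-running EVERY test after a change checks that the change
    # has not 'broken' previously working code.
    failures = [
        (func.__name__, args)
        for func, args, expected in REGRESSION_SUITE
        if func(*args) != expected
    ]
    return failures

assert run_regression_suite() == []  # no regressions introduced
```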