Types of tests

Types of tests grouped by family (more than 100 typologies identified)

1. Tests by key users / project owners (MOA) (functional)

1.1. Qualification Testing

1.1.1. Testing against the specifications of the previous release, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements.

1.2. Acceptance testing

1.2.1. Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. It is usually performed by the customer.

2. Tests by end users

2.1. Alpha Testing

2.1.1. Type of testing of a software product or system conducted at the developer's site. It is usually performed by end users.

2.2. Beta Testing

2.2.1. Final testing before releasing the application commercially. It is typically done by end users or other external parties.

3. End-to-end tests

3.1. End-to-end Testing

3.1.1. Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. It is performed by QA teams.

3.2. Interface Testing

3.2.1. Testing conducted to evaluate whether systems or components pass data and control correctly to one another. It is usually performed by both testing and development teams.

3.3. Inter-Systems Testing

3.3.1. Testing technique that focuses on ensuring that the interconnection between applications functions correctly. It is usually done by the testing teams.

4. Manual functional tests (testing team)

4.1. Active testing

4.1.1. Type of testing that consists of introducing test data and analyzing the execution results. It is usually conducted by the testing team.

4.1.2. Manual Scripted Testing

4.1.2.1. Testing method in which the test cases are designed and reviewed by the team before executing them. It is done by manual testing teams.

4.1.2.2. High-level tests (limited coverage)

4.1.2.2.1. Basis Path Testing

4.1.2.2.2. Breadth Testing

4.1.2.3. Specification validation

4.1.2.3.1. Requirements Testing

4.1.2.3.2. Black box Testing

4.1.2.3.3. Model-Based Testing

4.1.2.3.4. Scenario Testing

4.1.2.3.5. Functional Testing

4.1.2.3.6. GUI software Testing

4.1.2.4. Advanced coverage

4.1.2.4.1. All-pairs Testing

4.1.2.4.2. Equivalence Partitioning Testing

4.1.2.4.3. Path Testing

4.1.2.4.4. Assertion Testing

4.1.2.4.5. Boundary Value Testing
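
To make the coverage techniques above concrete, here is a minimal Python sketch combining equivalence partitioning and boundary value testing. The `shipping_fee` function and its business rule (free shipping at 50 and above) are hypothetical, chosen only to illustrate the technique:

```python
# Hypothetical function under test: free shipping for orders of 50 or more.
def shipping_fee(order_total):
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return 0 if order_total >= 50 else 5

# Equivalence partitions: invalid (< 0), paying (0..49.99), free (>= 50).
# Boundary values sit at the partition edges: -0.01, 0, 49.99, 50.
def run_boundary_tests():
    assert shipping_fee(0) == 5        # lower edge of the paying partition
    assert shipping_fee(49.99) == 5    # just below the free-shipping boundary
    assert shipping_fee(50) == 0       # exactly on the boundary
    try:
        shipping_fee(-0.01)            # just inside the invalid partition
    except ValueError:
        return True                    # invalid input was rejected as expected
    return False
```

The idea is to pick one representative per partition plus the values immediately on and around each boundary, rather than testing arbitrary mid-range values.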

4.1.2.5. Methodologies

4.1.2.5.1. Agile Testing

4.1.2.5.2. Context Driven Testing

4.1.2.5.3. Gorilla Testing

4.1.3. Ad-hoc Testing

4.1.3.1. Testing performed without planning or documentation - the tester tries to 'break' the system by randomly exercising its functionality. It is performed by the testing team.

4.1.4. Exploratory Testing

4.1.4.1. Black box testing technique performed without planning and documentation. It is usually performed by manual testers.

4.1.5. Regression Testing

4.1.5.1. Type of software testing that seeks to uncover software errors after changes to the program (e.g. bug fixes or new functionality) have been made, by retesting the program. It is performed by the testing teams.

4.1.6. Comparison Testing

4.1.6.1. Testing technique which compares the product's strengths and weaknesses with previous versions or other similar products. It can be performed by testers, developers, product managers or product owners.

4.1.7. Formal verification Testing

4.1.7.1. The act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics. It is usually performed by QA teams.

4.2. Passive Testing

4.2.1. Testing technique that consists of monitoring the results of a running system without introducing any special test data. It is performed by the testing team.

5. High-level automated tests on the application (functional)

5.1. Keyword-driven Testing

5.1.1. Also known as table-driven testing or action-word testing, is a software testing methodology for automated testing that separates the test creation process into two distinct stages: a Planning Stage and an Implementation Stage. It can be used by either manual or automation testing teams.
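
The two-stage split can be sketched in a few lines of Python. In this hypothetical example, the action-word table is the planning stage and the small functions behind each keyword are the implementation stage (the key-value "application" and all names are invented for illustration):

```python
# A toy "application" under test (hypothetical): an in-memory key-value store.
store = {}

# Implementation stage: one small function per action word.
def do_set(key, value):
    store[key] = value

def do_delete(key):
    store.pop(key, None)

def check_equals(key, expected):
    assert store.get(key) == expected, f"{key!r} != {expected!r}"

keywords = {"set": do_set, "delete": do_delete, "check": check_equals}

# Planning stage: the test itself is just a table of action words + arguments,
# writable by non-programmers once the keywords exist.
test_table = [
    ("set", "user", "alice"),
    ("check", "user", "alice"),
    ("delete", "user"),
    ("check", "user", None),
]

def run_table(table):
    for action, *args in table:
        keywords[action](*args)   # dispatch each action word to its implementation
```

Because the table never mentions implementation details, the same test plan survives refactoring of the underlying keyword functions.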

5.2. Workflow Testing

5.2.1. Scripted end-to-end testing technique which duplicates specific workflows which are expected to be utilized by the end-user. It is usually conducted by testing teams.

5.3. User Interface Testing

5.3.1. Type of testing which is performed to check how user-friendly the application is. It is performed by testing teams.

5.4. Orthogonal array Testing

5.4.1. Systematic, statistical way of testing which can be applied in user interface testing, system testing, regression testing, configuration testing and performance testing. It is performed by the testing team.

6. Integration tests (functional)

6.1. Integration Testing

6.1.1. The phase in software testing in which individual software modules are combined and tested as a group. It is usually conducted by testing teams.

6.2. Parallel Testing

6.2.1. Testing technique which has the purpose to ensure that a new application which has replaced its older version has been installed and is running correctly. It is conducted by the testing team.

6.3. System integration Testing

6.3.1. Testing process that exercises a software system's coexistence with others. It is usually performed by the testing teams.

6.4. Integration methodologies

6.4.1. Top Down Integration Testing

6.4.1.1. Testing technique that involves starting at the top of a system hierarchy at the user interface and using stubs to test from the top down until the entire system has been implemented.

6.4.2. Thread Testing

6.4.2.1. A variation of top-down testing technique where the progressive integration of components follows the implementation of subsets of the requirements. It is usually performed by the testing teams.

6.4.3. Bottom Up Integration Testing

6.4.3.1. In bottom-up integration testing, modules at the lowest level are developed first, and the other modules that lead towards the 'main' program are integrated and tested one at a time. It is usually performed by the testing teams.

6.4.4. Big Bang Integration Testing

6.4.4.1. Testing technique which integrates individual program modules only when everything is ready. It is performed by the testing teams.

6.4.5. Hybrid Integration Testing

6.4.5.1. Testing technique which combines top-down and bottom-up integration techniques in order to leverage the benefits of both kinds of testing. It is usually performed by the testing teams.

6.5. Minimal validation tests

6.5.1. Sanity Testing

6.5.1.1. Testing technique which determines if a new software version is performing well enough to accept it for a major testing effort. It is performed by the testing teams.

6.5.2. Smoke Testing

6.5.2.1. Testing technique which examines all the basic components of a software system to ensure that they work properly. Typically, smoke testing is conducted by the testing team immediately after a software build is made.

7. Module / component tests (functional)

7.1. Component Testing

7.1.1. Testing technique similar to unit testing but with a higher level of integration - testing is done in the context of the application instead of just directly testing a specific method. It can be performed by testing or development teams.

7.2. API Testing

7.2.1. Testing technique similar to unit testing in that it targets the code level. API testing differs from unit testing in that it is typically a QA task and not a developer task.

7.3. Gray Box Testing

7.3.1. A combination of black box and white box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings. It can be performed by either development or testing teams.

7.4. Negative Testing

7.4.1. Also known as "test to fail" - testing method where the tests' aim is showing that a component or system does not work. It is performed by manual or automation testers.
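
A minimal negative-testing sketch in Python: every input below is expected to be rejected, and an accepted bad input counts as a bug. The `parse_age` function and its valid range are hypothetical:

```python
# Hypothetical function under test: parse a user-supplied age string.
def parse_age(text):
    age = int(text)                 # raises ValueError on non-numeric input
    if not 0 <= age <= 130:
        raise ValueError(f"age out of range: {age}")
    return age

# Negative tests: each input is EXPECTED to be rejected by the function.
def run_negative_tests():
    bad_inputs = ["", "abc", "-1", "999", "12.5"]
    results = []
    for text in bad_inputs:
        try:
            parse_age(text)
            results.append((text, "accepted"))   # a bug: bad input got through
        except ValueError:
            results.append((text, "rejected"))   # expected outcome
    return results
```

The symmetry with positive testing matters: the suite passes only when every deliberately invalid input produces the documented failure, not a silent success.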

7.5. Behavior Driven Development

7.5.1. Behaviour-driven development (BDD) is an agile programming method that encourages collaboration between the developers, quality engineers and non-technical or business participants in a software project. It encourages teams to use conversation and concrete examples to formalize a shared understanding of how the application should behave. BDD combines the techniques and principles of test-driven development with ideas from domain-driven design and object-oriented design, so that development teams and other stakeholders share a common method and common tools. BDD is greatly facilitated by the use of a simple dedicated language built from natural-language constructs that can express the expected behavior and outcomes. This lets developers focus on why the code should be created rather than on technical details, and minimizes the translation between the technical language in which the code is written and the domain language spoken by the business, users, stakeholders, project management, and so on.
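
The given/when/then style at the heart of BDD can be sketched in plain Python without a framework; in practice teams would use a BDD tool such as Cucumber or behave, which parses the natural-language scenario text itself. The shopping-cart domain here is hypothetical:

```python
# A plain-Python sketch of a BDD-style scenario (hypothetical cart domain).
# Real BDD tools (Cucumber, behave) would parse Gherkin text such as:
#   Given an empty cart
#   When I add 2 books at 10 each
#   Then the total is 20

class Cart:
    def __init__(self):
        self.items = []

    def add(self, quantity, unit_price):
        self.items.append((quantity, unit_price))

    def total(self):
        return sum(q * p for q, p in self.items)

def scenario_cart_total():
    # Given an empty cart
    cart = Cart()
    # When I add 2 books at 10 each
    cart.add(2, 10)
    # Then the total is 20
    assert cart.total() == 20
    return cart.total()
```

The point is not the assertions themselves but that the scenario reads as a sentence the business side can review before any code exists.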

8. Unit tests (functional)

8.1. Static Testing

8.1.1. A form of software testing in which the software isn't actually executed; it checks mainly the sanity of the code, algorithm, or document. It is used by the developer who wrote the code.

8.2. Dynamic testing

8.2.1. Term used in software engineering to describe the testing of the dynamic behavior of code. It is typically performed by testing teams.

8.2.2. Unit Testing

8.2.2.1. Software verification and validation method in which a programmer tests whether individual units of source code are fit for use. It is usually conducted by the development team.
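
A minimal unit test using Python's standard `unittest` module; the `slugify` function under test is hypothetical, picked because it is a pure unit with no external dependencies:

```python
import unittest

# Hypothetical unit under test: a pure function with no external dependencies.
def slugify(title):
    """Lower-case a title and join its words with single hyphens."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_spaces(self):
        self.assertEqual(slugify("  Many   Spaces  "), "many-spaces")

# Run the suite programmatically (equivalent to `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test method exercises one behavior of one unit in isolation, which is what distinguishes unit tests from the integration and end-to-end levels above.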

8.2.3. Glass box Testing

8.2.3.1. Similar to white box testing, based on knowledge of the internal logic of an application's code. It is performed by development teams.

8.2.4. Structural Testing

8.2.4.1. White box testing technique which takes into account the internal structure of a system or component and ensures that each program statement performs its intended function. It is usually performed by the software developers.

8.2.5. Domain Testing

8.2.5.1. White box testing technique which checks that the program accepts only valid input. It is usually done by software development teams and occasionally by automation testing teams.

8.2.6. White box Testing

8.2.6.1. Testing technique based on knowledge of the internal logic of an application's code and includes tests like coverage of code statements, branches, paths, conditions. It is performed by software developers.

8.2.7. Code-driven Testing

8.2.7.1. Testing technique that uses testing frameworks (such as xUnit) that allow the execution of unit tests to determine whether various sections of the code are acting as expected under various circumstances. It is performed by the development teams.

8.2.8. Coverage

8.2.8.1. Statement Testing

8.2.8.1.1. White box testing which satisfies the criterion that each statement in a program is executed at least once during program testing. It is usually performed by the development team.

8.2.8.2. Condition Coverage Testing

8.2.8.2.1. Type of software testing in which each condition is evaluated to both true and false at least once. It is typically performed by automation testing teams.

8.2.8.3. Loop Testing

8.2.8.3.1. A white box testing technique that exercises program loops. It is performed by the development teams.

8.2.8.4. Decision Coverage Testing

8.2.8.4.1. Type of software testing in which each condition/decision is evaluated to both true and false at least once. It is typically performed by automation testing teams.

8.2.8.5. Branch Testing

8.2.8.5.1. Testing technique in which all branches in the program source code are tested at least once. It is done by developers.
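
A minimal illustration of branch coverage: the `taken` set manually records which branch executed (a coverage tool such as coverage.py would instrument this automatically), and the test inputs are chosen so both branches run. The `classify` function is hypothetical:

```python
# Hypothetical function with two branches; `taken` records which branch ran.
taken = set()

def classify(n):
    if n % 2 == 0:
        taken.add("even")          # branch 1
        return "even"
    else:
        taken.add("odd")           # branch 2
        return "odd"

def run_branch_tests():
    # Branch testing: choose inputs so every branch executes at least once.
    assert classify(4) == "even"   # exercises branch 1
    assert classify(7) == "odd"    # exercises branch 2
    return taken == {"even", "odd"}   # True only with full branch coverage
```

With only one of the two inputs, statement coverage of the tested path would be 100% while branch coverage would not, which is why the criteria are listed separately above.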

8.2.9. Mutation Testing

8.2.9.1. Method of software testing which involves modifying a program's source code or byte code in small ways in order to check whether the existing tests detect the change; mutants that survive reveal code that is poorly exercised by the tests. It is normally conducted by testers.

8.3. Methodologies

8.3.1. Pair Testing

8.3.1.1. Software development technique in which two team members work together at one keyboard to test the software application. One does the testing and the other analyzes or reviews the testing. This can be done between one Tester and Developer or Business Analyst or between two testers with both participants taking turns at driving the keyboard.

9. Compliance

9.1. Compliance Testing

9.1.1. Type of testing which checks whether the system was developed in accordance with standards, procedures and guidelines. It is usually performed by external companies which offer certification, such as the "Certified OGC Compliant" brand.

10. Non-functional tests

10.1. Testing technique which focuses on testing of a software application for its non-functional requirements. Can be conducted by the performance engineers or by manual testing teams.

10.2. Performance

10.2.1. Performance Testing

10.2.1.1. Testing conducted to evaluate the compliance of a system or component with specified performance requirements. It is usually conducted by the performance engineer.

10.2.2. Age Testing

10.2.2.1. Type of testing which evaluates a system's ability to perform in the future. The evaluation process is conducted by testing teams.

10.2.3. Benchmark Testing

10.2.3.1. Testing technique that uses representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration. It is performed by testing teams.

10.2.4. Configuration Testing

10.2.4.1. Testing technique which determines minimal and optimal configuration of hardware and software, and the effect of adding or modifying resources such as memory, disk drives and CPU. Usually it is performed by the Performance Testing engineers.

10.2.5. Concurrency Testing

10.2.5.1. Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. It is usually done by performance engineers.

10.2.6. Volume Testing

10.2.6.1. Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner. It is usually conducted by the performance engineer.

10.2.7. Destructive Testing

10.2.7.1. Type of testing in which the tests are carried out to the specimen's failure, in order to understand a specimen's structural performance or material behavior under different loads. It is usually performed by QA teams.

10.2.8. Endurance Testing

10.2.8.1. Type of testing which checks for memory leaks or other problems that may occur with prolonged execution. It is usually performed by performance engineers.

10.2.9. Load Testing

10.2.9.1. Testing technique that puts demand on a system or device and measures its response. It is usually conducted by the performance engineers.
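
A toy load-test harness in Python showing the measurement side of the technique: drive the system with repeated requests and report latency percentiles. Real load tests would use a dedicated tool (JMeter, Locust, k6) and concurrent clients; the `handle_request` function here is a hypothetical stand-in for an HTTP request or database query:

```python
import time

# Hypothetical operation under load (stands in for an HTTP request or query).
def handle_request(payload):
    return sum(ord(c) for c in payload)

def load_test(n_requests=1000, payload="x" * 100):
    latencies = []
    for _ in range(n_requests):
        start = time.perf_counter()
        handle_request(payload)                       # one simulated request
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    return {
        "requests": n_requests,
        "p50_ms": latencies[len(latencies) // 2] * 1000,        # median latency
        "p95_ms": latencies[int(len(latencies) * 0.95)] * 1000,  # tail latency
    }
```

Reporting percentiles rather than averages is the usual practice, since load problems show up in the latency tail first.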

10.2.10. Stress Testing

10.2.10.1. Testing technique which evaluates a system or component at or beyond the limits of its specified requirements. It is usually conducted by the performance engineer.

10.2.11. Ramp Testing

10.2.11.1. Type of testing that consists of raising an input signal continuously until the system breaks down. It may be conducted by the testing team or the performance engineer.

10.2.12. Scalability Testing

10.2.12.1. Part of the battery of non-functional tests which tests a software application for measuring its capability to scale up - be it the user load supported, the number of transactions, the data volume etc. It is conducted by the performance engineer.

10.3. Usability

10.3.1. Usability Testing

10.3.1.1. Testing technique which verifies the ease with which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component. It is usually performed by end users.

10.3.2. Accessibility Testing

10.3.2.1. Type of testing which determines the usability of a product for people with disabilities (e.g. deaf, blind, or cognitively impaired users). The evaluation process is conducted by persons with disabilities.

10.3.3. i18n

10.3.3.1. Localization Testing

10.3.3.1.1. Part of the software testing process focused on adapting a globalized application to a particular culture/locale. It is normally done by the testing teams.

10.3.3.2. Globalization Testing

10.3.3.2.1. Testing method that checks proper functionality of the product with any of the culture/locale settings using every type of international input possible. It is performed by the testing team.

10.3.3.3. Internationalization Testing

10.3.3.3.1. The process which ensures that the product's functionality is not broken and that all messages are properly externalized when it is used with different languages and locales. It is usually performed by the testing teams.

10.4. Security

10.4.1. Security Testing

10.4.1.1. A process to determine that an information system protects data and maintains functionality as intended. It can be performed by testing teams or by specialized security-testing companies.

10.4.2. Vulnerability Testing

10.4.2.1. Type of testing which addresses application security and aims to prevent problems that may affect the application's integrity and stability. It can be performed by internal testing teams or outsourced to specialized companies.

10.4.3. Penetration Testing

10.4.3.1. Testing method which evaluates the security of a computer system or network by simulating an attack from a malicious source. It is usually conducted by specialized penetration-testing companies.

10.5. Maintainability

10.5.1. Operational Testing

10.5.1.1. Testing technique conducted to evaluate a system or component in its operational environment. Usually it is performed by testing teams.

10.5.2. Modularity-driven Testing

10.5.2.1. Software testing technique which requires the creation of small, independent scripts that represent modules, sections, and functions of the application under test. It is usually performed by the testing team.

10.5.3. Manual-Support Testing

10.5.3.1. Testing technique that involves testing all the functions performed by people while preparing data for, and using data from, an automated system. It is conducted by testing teams.

10.6. Reliability

10.6.1. Stability Testing

10.6.1.1. Testing technique which attempts to determine whether an application will crash. It is usually conducted by the performance engineer.

10.6.2. Recovery Testing

10.6.2.1. Testing technique which evaluates how well a system recovers from crashes, hardware failures, or other catastrophic problems. It is performed by the testing teams.

10.6.3. Storage Testing

10.6.3.1. Testing type that verifies that the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. It is usually performed by the testing team.

10.6.4. Behavior on error

10.6.4.1. Error-Handling Testing

10.6.4.1.1. Software testing type which determines the ability of the system to properly process erroneous transactions. It is usually performed by the testing teams.

10.6.4.2. Fault injection Testing

10.6.4.2.1. Element of a comprehensive test strategy that enables the tester to concentrate on the manner in which the application under test is able to handle exceptions.

10.6.4.3. Fuzz Testing

10.6.4.3.1. Software testing technique that provides invalid, unexpected, or random data to the inputs of a program - a special area of mutation testing. Fuzz testing is performed by testing teams.
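
A minimal fuzzing sketch in Python: feed random printable garbage to a parser and treat any uncaught exception as a failure. Real fuzzers (AFL, libFuzzer) are coverage-guided rather than purely random; the `parse_config` target below is hypothetical:

```python
import random
import string

# Hypothetical parser under test: reads "key=value" lines into a dict.
def parse_config(text):
    config = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            config[key.strip()] = value.strip()
    return config

def fuzz(iterations=500, seed=42):
    """Feed random garbage to the parser; any uncaught exception is a failure."""
    rng = random.Random(seed)           # fixed seed keeps failures reproducible
    alphabet = string.printable
    for _ in range(iterations):
        length = rng.randrange(0, 80)
        text = "".join(rng.choice(alphabet) for _ in range(length))
        result = parse_config(text)     # must never raise, whatever the input
        assert isinstance(result, dict)
    return iterations
```

Seeding the random generator is a common fuzzing practice: when a crashing input is found, the same seed reproduces it exactly.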

10.7. Compatibility

10.7.1. System Testing

10.7.1.1. The process of testing an integrated hardware and software system to verify that the system meets its specified requirements. It is conducted by the testing teams in both development and target environment.

10.7.2. Backward Compatibility Testing

10.7.2.1. Testing method which verifies the behavior of the developed software with older versions of the test environment. It is performed by the testing team.

10.7.3. Compatibility Testing

10.7.3.1. Testing technique that validates how well software performs in a particular hardware/software/operating system/network environment. It is performed by the testing teams.

10.7.4. Upgrade Testing

10.7.4.1. Testing technique that verifies whether assets created with older versions can be used properly and whether users' prior learning is not challenged. It is performed by the testing teams.

10.7.5. Dependency Testing

10.7.5.1. Testing type which examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality. It is usually performed by testing teams.

10.8. Portability

10.8.1. Binary Portability Testing

10.8.1.1. Technique that tests an executable application for portability across system platforms and environments, usually for conformation to an ABI specification. It is performed by the testing teams.

10.8.2. Conversion Testing

10.8.2.1. Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

10.8.3. Install/uninstall Testing

10.8.3.1. Quality assurance work that focuses on what customers will need to do to install and set up the new software successfully. It may involve full, partial or upgrades install/uninstall processes and is typically done by the software testing engineer in conjunction with the configuration manager.