1. Reliable
1.1. When a test fails, it should mean something inadvertent happened in the product, not in the test
1.2. Flakiness is the enemy
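A common source of flakiness in UI suites is a fixed sleep racing the page. A minimal sketch, assuming Selenium with pytest (the element ID is a placeholder), that waits on an explicit condition instead, so a failure means the element genuinely never appeared:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for_order_confirmation(driver, timeout=10):
    # Wait on an explicit condition instead of time.sleep(), so the test
    # fails only when the confirmation genuinely never shows up.
    return WebDriverWait(driver, timeout).until(
        EC.visibility_of_element_located((By.ID, "order-confirmation"))
    )
```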
2. Logging
2.1. Minimize mean time to diagnose a failure
2.2. Capture screenshots/videos on failure and attach them to bug reports
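A minimal pytest sketch of the screenshot idea, assuming Selenium and a fixture named `driver` (the fixture name and artifacts path are assumptions, not a fixed convention); the hook saves a screenshot whenever a test body fails, so the image can go straight into the bug report:

```python
# conftest.py
import os
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # Only act on failures of the test body itself, not setup/teardown.
    if report.when == "call" and report.failed:
        driver = item.funcargs.get("driver")  # assumes a fixture named "driver"
        if driver is not None:
            os.makedirs("artifacts", exist_ok=True)
            driver.save_screenshot(os.path.join("artifacts", f"{item.name}.png"))
```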
3. Scalable
3.1. Devices
3.1.1. The same set of tests can be run on multiple devices, browser sizes, etc.
3.2. Parallel
3.2.1. Tests can be run in parallel
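Parallel runs only work when tests share no state. A hedged sketch, assuming pytest and Selenium with Chrome: a function-scoped fixture gives every test its own browser session, so a runner such as pytest-xdist (`pytest -n auto`) can spread tests across workers safely.

```python
# conftest.py
import pytest
from selenium import webdriver

@pytest.fixture  # function scope by default: one isolated browser per test
def driver():
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()
```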
4. Readable
4.1. Test names and steps should be high-level
4.2. Follow the Arrange/Act/Assert/Cleanup pattern
4.3. Be explicit about intent, stating expected and actual values
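A hedged illustration of the points above, assuming the `driver` fixture sketched earlier; the URL, locators, and credentials are placeholders. The name reads as a high-level statement, the body follows Arrange/Act/Assert/Cleanup, and the assertion names expected and actual explicitly:

```python
from selenium.webdriver.common.by import By

def test_registered_user_sees_dashboard_after_login(driver):
    # Arrange: open the login page with a known test account.
    driver.get("https://stg.example.com/login")
    driver.find_element(By.ID, "email").send_keys("test-user@example.com")
    driver.find_element(By.ID, "password").send_keys("correct-password")

    # Act: perform the behaviour under test.
    driver.find_element(By.ID, "submit").click()

    # Assert: expected and actual are both named, so the intent is explicit.
    expected_heading = "Dashboard"
    actual_heading = driver.find_element(By.TAG_NAME, "h1").text
    assert actual_heading == expected_heading, (
        f"expected heading {expected_heading!r}, got {actual_heading!r}"
    )

    # Cleanup: browser teardown is handled by the driver fixture.
```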
5. Repeatable
5.1. They can be run from
5.1.1. Local
5.1.2. Build machine
5.1.3. Cloud labs
5.2. Repeated runs produce the same results, establishing that the tests are not flaky
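One way to keep the same suite runnable from a laptop, a build machine, or a cloud lab is to push the "where" decision into configuration. A hedged sketch extending the driver fixture above; `SELENIUM_REMOTE_URL` is an invented variable name that would point at a grid or cloud provider when set:

```python
# conftest.py
import os
import pytest
from selenium import webdriver

@pytest.fixture
def driver():
    options = webdriver.ChromeOptions()
    remote_url = os.environ.get("SELENIUM_REMOTE_URL")  # hypothetical variable name
    if remote_url:
        # Build machine / cloud lab: drive a remote grid.
        drv = webdriver.Remote(command_executor=remote_url, options=options)
    else:
        # Local run: spin up a local browser.
        drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()
```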
6. Data driven
6.1. No hardcoding (see the sketch at the end of this section)
6.1.1. Locators
6.1.2. Strings
6.2. Same tests can run for different
6.2.1. User accounts
6.2.2. Environments
6.2.2.1. OTE
6.2.2.2. Stg
6.2.2.3. Prod
6.2.3. Browsers/devices
6.2.3.1. Chrome
6.2.3.2. Firefox
6.2.3.3. IE
6.2.3.4. Devices, if using Appium
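A hedged sketch pulling the data-driven points together: environment URLs and locators live in one place rather than being hardcoded in tests, and custom command-line options select the environment and browser. The URLs, option names, locator values, and accounts are all invented for illustration.

```python
# conftest.py (sketch)
import pytest
from selenium import webdriver

BASE_URLS = {  # environment base URLs live in one place (values are placeholders)
    "ote": "https://ote.example.com",
    "stg": "https://stg.example.com",
    "prod": "https://www.example.com",
}

def pytest_addoption(parser):
    parser.addoption("--env", action="store", default="stg", help="ote, stg or prod")
    parser.addoption("--browser", action="store", default="chrome", help="chrome or firefox")

@pytest.fixture
def base_url(request):
    return BASE_URLS[request.config.getoption("--env")]

@pytest.fixture
def driver(request):
    name = request.config.getoption("--browser")
    drv = webdriver.Firefox() if name == "firefox" else webdriver.Chrome()
    yield drv
    drv.quit()


# test_login.py (sketch) -- the same test runs for several accounts on any --env/--browser
import pytest
from selenium.webdriver.common.by import By

class LoginLocators:
    # In a real suite these would sit in a shared locators/page-object module.
    EMAIL = (By.ID, "email")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.ID, "submit")

@pytest.mark.parametrize("email,password", [
    ("basic-user@example.com", "placeholder"),  # placeholder accounts; real credentials
    ("admin-user@example.com", "placeholder"),  # would come from config or a secret store
])
def test_login_succeeds(driver, base_url, email, password):
    driver.get(f"{base_url}/login")
    driver.find_element(*LoginLocators.EMAIL).send_keys(email)
    driver.find_element(*LoginLocators.PASSWORD).send_keys(password)
    driver.find_element(*LoginLocators.SUBMIT).click()
```

Run as, for example, `pytest --env=ote --browser=firefox` to exercise the same tests against OTE in Firefox.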
7. Fast feedback
7.1. The whole test suite should run in under 10 minutes
8. Transparent
8.1. The team is notified of failures immediately
8.2. Everybody on the team can look at failures and attempt fixes
9. DRY
9.1. Minimal
9.1.1. Have only a few high-level smoke tests for E2E scenarios
9.2. Right level of abstraction
9.2.1. Knowing what is and isn't a good candidate for each kind of test
9.2.2. Shared understanding of
9.2.2.1. Unit tests
9.2.2.2. Functional tests
9.2.2.3. Partial integration tests
9.2.2.4. E2E/Smoke/Integration/UI tests
10. Concrete strategies
10.1. Integrated into build pipelines
10.1.1. PR jobs for code merge
10.1.2. Large daily jobs
10.1.3. Build validation tests
10.2. Same dev repo
10.2.1. Same set of coding standards as dev code
10.2.2. Contributions from devs
10.2.3. Re-use helpers and locators
11. Maintainable
11.1. Organized
11.1.1. By
11.1.1.1. Components
11.1.1.2. Time to run
11.1.1.3. Size of the tests
11.1.1.4. Ignored/Flaky
11.2. Flexible
11.2.1. Subsets of the tests can be run individually
11.2.2. Using various data arguments
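A hedged sketch of the flexibility point: markers (the names here are examples) let subsets be selected on the command line, and the data arguments from the earlier data-driven sketch pick the environment.

```python
# conftest.py -- register markers so subsets can be selected with -m
def pytest_configure(config):
    config.addinivalue_line("markers", "smoke: small, high-value end-to-end checks")
    config.addinivalue_line("markers", "checkout: tests for the checkout component")


# test_checkout.py (sketch)
import pytest

@pytest.mark.smoke
@pytest.mark.checkout
def test_guest_can_complete_checkout(driver, base_url):
    ...  # placeholder body for an illustrative checkout flow
```

Then `pytest -m smoke --env=stg` runs only the smoke subset against staging, and `pytest -m "checkout and not smoke"` selects a different slice.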