1. Benefits
1.1. Implementing changes more efficiently
1.2. Higher product quality
1.3. Less rework
1.4. Better alignment of the activities of different roles on a project
2. Key Process Patterns
2.1. Deriving scope from goals
2.1.1. Building the right scope
2.1.1.1. Understand the "WHY" and "WHO"
2.1.1.1.1. Understanding why something is needed and who needs it is crucial to evaluating a suggested solution.
2.1.1.2. Understand where the value is coming from
2.1.1.3. Understand what outputs the business users expect
2.1.1.3.1. Instead of trying to collaborate with business users on specifying how to put things into the system, we should start with examples of outputs. This helps engage business users in the discussion and gives them a clear picture of what they’ll get out of the system.
2.1.1.4. Ask how something would be useful
2.1.1.4.1. Instead of a technical feature specification, we should ask for a high-level example of how a feature would be useful. This will point us towards the real problem.
2.1.1.5. Ask for an alternative solution
2.1.1.5.1. This helps business users to express the value of a given feature.
2.2. Specifying Collaboratively
2.2.1. All-team workshops (When: Starting out with SBE; example: JPBR)
2.2.2. Collaborate with the development team
2.2.3. Pair-write the specifications
2.2.4. Try informal conversations; face-to-face conversation is considered best
2.2.5. Involve stakeholders
2.2.6. Undertake detailed preparation and review up front (When: Remote stakeholders)
2.2.7. Don't hinder discussion by over-preparing
2.3. Illustrating specifications using examples
2.3.1. Examples should be precise
2.3.1.1. Don’t have yes/no answers in your examples (When: The underlying concept isn’t separately defined)
2.3.1.2. Avoid using abstract classes of equivalence (When: You can specify a concrete example)
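As a rough illustration of precise examples, the sketch below (pytest) encodes a hypothetical free-delivery rule: VIP customers ordering five or more books get free delivery. The shop module, the delivery_fee function, and the fee amounts are assumptions introduced for the sketch; the point is that concrete values around the boundary replace yes/no answers and abstract classes such as "a large order".

```python
# A minimal sketch (pytest), assuming a hypothetical free-delivery rule and
# a hypothetical shop.delivery_fee function. Concrete order sizes and fees
# make the examples precise and testable.
import pytest

from shop import delivery_fee  # assumed module under test


@pytest.mark.parametrize(
    "customer_type, books_ordered, expected_fee",
    [
        ("VIP", 5, 0.00),       # exactly at the free-delivery boundary
        ("VIP", 4, 4.99),       # just below the boundary
        ("regular", 10, 4.99),  # a big order alone is not enough
    ],
)
def test_delivery_fee(customer_type, books_ordered, expected_fee):
    assert delivery_fee(customer_type, books_ordered) == expected_fee
```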
2.3.2. Examples should be complete
2.3.2.1. Experiment with data
2.3.2.2. Ask for an alternative way to check the functionality (When: Complex/legacy infrastructures)
2.3.3. Examples should be realistic
2.3.3.1. Avoid making up your own data (When: Data-driven projects)
2.3.3.2. Get basic examples directly from customers (When: Working with enterprise customers)
2.3.4. Examples should be easy to understand
2.3.4.1. Avoid the temptation to explore every combinatorial possibility
2.3.4.2. Look for implied concepts
2.3.5. Illustrating nonfunctional requirements
2.3.5.1. Get precise performance requirements (When: Performance is a key feature; see the sketch below)
2.3.5.2. Use a checklist for discussions (When: Cross-cutting concerns)
2.3.5.3. Build a reference example (When: Requirements are impossible to quantify)
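Where performance is a key feature, a vague statement like "search must be fast" can be turned into a precise, checkable requirement. The sketch below assumes a hypothetical search_products function and illustrative numbers (95th percentile under 2 seconds); the real figures would come from the business discussion.

```python
# A minimal sketch of a precise performance requirement, e.g. "95% of product
# searches complete within 2 seconds". search_products and the thresholds are
# assumptions for illustration only.
import time
from statistics import quantiles

from shop import search_products  # assumed function under test


def test_search_p95_under_two_seconds():
    samples = []
    for _ in range(100):
        start = time.perf_counter()
        search_products("agile testing")
        samples.append(time.perf_counter() - start)

    p95 = quantiles(samples, n=20)[18]  # 95th percentile of the samples
    assert p95 < 2.0
```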
2.4. Refining the specifications
2.4.1. What to focus on when refining specifications
2.4.1.1. Examples should be precise and testable
2.4.1.2. Scripts are not specifications
2.4.1.3. Don’t create flow-like descriptions
2.4.1.4. Specifications should be about business functionality, not software design
2.4.1.5. Avoid writing specifications that are tightly coupled with code
2.4.1.6. Resist the temptation to work around technical difficulties in specifications (When: Working on a legacy system)
2.4.1.7. Don’t get trapped in user interface details (When: Web projects)
2.4.1.8. Specifications should be self-explanatory
2.4.1.8.1. Don’t overspecify examples
2.4.1.8.2. Start with basic examples; then expand through exploring (When: Describing rules with many parameter combinations)
2.4.1.9. Specifications should be focused
2.4.1.9.1. Use “Given-When-Then” language in specifications, in order to make the tests easier to understand (see the sketch below)
2.4.1.9.2. Don’t explicitly set up all the dependencies in the specification (When: Dealing with complex dependencies/referential integrity)
2.4.1.9.3. Apply defaults in the automation layer
2.4.1.9.4. Don’t always rely on defaults (When: Working with objects with many attributes)
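The sketch below shows one way the ideas above can fit together, using behave-style step definitions (the Customer and Account classes are assumptions): the specification reads "Given a customer from Germany, when the customer opens an account, then the account currency is EUR", while every attribute the rule doesn't care about is defaulted inside the automation layer.

```python
# features/steps/account_steps.py -- a minimal behave-style sketch.
# Customer, Account, and the attribute names are assumptions; the point is
# that defaults for irrelevant attributes live here, not in the specification.
from behave import given, when, then

from shop import Account, Customer  # assumed domain code


@given("a customer from {country}")
def step_given_customer(context, country):
    context.customer = Customer(
        name="Test Customer",      # irrelevant to the rule, so defaulted
        email="test@example.com",  # defaulted as well
        country=country,           # the only attribute the scenario mentions
    )


@when("the customer opens an account")
def step_open_account(context):
    context.account = Account.open_for(context.customer)


@then("the account currency is {currency}")
def step_check_currency(context, currency):
    assert context.account.currency == currency
```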
2.4.1.10. Specifications should be in domain language
2.5. Automating validation without changing specifications
2.5.1. Starting with automation
2.5.1.1. To learn about tools, try a simple project first (When: Working on a legacy system)
2.5.1.2. Plan for automation upfront
2.5.1.3. Don’t postpone or delegate automation
2.5.1.4. Avoid automating existing manual test scripts
2.5.1.5. Gain trust with user interface tests (When: Team members are skeptical about executable specifications)
2.5.2. Managing the automation layer
2.5.2.1. Don’t treat automation code as second-grade code
2.5.2.2. Describe validation processes in the automation layer
2.5.2.3. Don’t replicate business logic in the test automation layer
2.5.2.4. Automate along system boundaries (When: Complex integrations)
2.5.2.5. Don’t check business logic through the user interface
2.5.2.6. Automate below the skin of the application (When: Checking session and workflow constraints)
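As a sketch of automating below the skin of the application, the check below talks to a hypothetical service-layer API (OrderService, wired with in-memory repositories) instead of driving the user interface, so a workflow rule is validated without a browser. All names are assumptions.

```python
# A minimal sketch: exercise the service layer directly rather than the UI.
# OrderService and in_memory_repositories are assumed names, not real APIs.
from shop.services import OrderService
from shop.testing import in_memory_repositories


def test_order_cannot_be_shipped_before_payment():
    service = OrderService(**in_memory_repositories())

    order = service.place_order(customer_id=42, items=["book-123"])
    result = service.ship(order.id)

    # the workflow constraint is checked without going through any screen
    assert result.rejected
    assert result.reason == "payment outstanding"
```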
2.5.3. Automating user interfaces
2.5.3.1. Specify user interface functionality at a higher level of abstraction
2.5.3.2. Check only UI functionality with UI specifications (When: User interface contains complex logic)
2.5.3.3. Avoid recorded UI tests
2.5.3.4. Set up context in a database
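Setting up context in a database might look like the pytest fixture below, where a throwaway SQLite file stands in for the test database and the schema is invented for the sketch: the customer the UI test needs is inserted directly instead of being created by clicking through registration screens.

```python
# A minimal sketch of preparing test context directly in a database.
# The schema and table are assumptions; SQLite stands in for the real store.
import sqlite3

import pytest


@pytest.fixture
def db_with_registered_customer(tmp_path):
    conn = sqlite3.connect(tmp_path / "test.db")
    conn.execute(
        "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, status TEXT)"
    )
    conn.execute(
        "INSERT INTO customers (name, status) VALUES (?, ?)",
        ("Test Customer", "registered"),
    )
    conn.commit()
    yield conn
    conn.close()


def test_registered_customer_sees_order_history(db_with_registered_customer):
    # drive only the screen under test against the prepared database;
    # the UI steps are omitted in this sketch
    pass
```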
2.5.4. Test data management
2.5.4.1. Avoid using prepopulated data (When: Specifying logic that’s not data driven)
2.5.4.2. Try using prepopulated reference data (When: Data-driven systems)
2.5.4.3. Pull prototypes from the database (When: Legacy data-driven systems)
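Pulling prototypes from the database could be sketched as below: a representative record is loaded once from the legacy store (via an assumed helper), and each check copies it and overrides only the attributes that matter, so the example stays realistic without hand-crafting full records.

```python
# A minimal sketch of the prototype approach. load_representative_policy,
# calculate_premium, and the dataclass-style Policy record are assumptions.
from dataclasses import replace

from legacy.testdata import load_representative_policy  # assumed helper
from legacy.domain import calculate_premium              # assumed rule

PROTOTYPE = load_representative_policy("standard-household")


def test_premium_discount_for_alarm_system():
    # copy the realistic prototype, changing only the attribute under test
    policy = replace(PROTOTYPE, has_alarm_system=True)
    assert calculate_premium(policy) < calculate_premium(PROTOTYPE)
```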
2.6. Validating the system frequently
2.6.1. Reducing unreliability
2.6.1.1. Find the most annoying thing, fix it, and repeat (When: Working on a system with bad automated test support)
2.6.1.2. Identify unstable tests using CI test history (When: Retrofitting automated testing into a legacy system)
2.6.1.3. Set up a dedicated continuous validation environment
2.6.1.4. Employ fully automated deployment
2.6.1.5. Create simpler test doubles for external systems (When: Working with external reference data sources)
2.6.1.6. Selectively isolate external systems (When: External systems participate in work)
2.6.1.7. Try multistage validation (When: Large/multisite groups)
2.6.1.8. Execute tests in transactions (When: Executable specifications modify reference data)
2.6.1.9. Run quick checks for reference data (When: Data-driven systems)
2.6.1.10. Wait for events, not for elapsed time (see the sketch below)
2.6.1.11. Make asynchronous processing optional (When: Greenfield projects)
2.6.1.12. Don’t use executable specifications as end-to-end validations (When: Brownfield projects)
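The "wait for events, not for elapsed time" idea can be as simple as the polling helper below: instead of sleeping for a fixed interval, the check polls a condition with a deadline, so it passes as soon as the asynchronous work finishes and fails only after the timeout. The usage shown in the comment assumes a hypothetical order_service.

```python
# A minimal sketch of waiting on a condition instead of a fixed sleep.
import time


def wait_until(condition, timeout=10.0, poll_interval=0.2):
    """Poll `condition` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    return False


# Usage in an executable specification (order_service is hypothetical):
#   submit_order(order_service, order)
#   assert wait_until(lambda: order_service.status(order.id) == "confirmed")
```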
2.6.2. Getting feedback faster
2.6.2.1. Introduce business time (When: Working with temporal constraints)
2.6.2.2. Break long test packs into smaller modules
2.6.2.3. Avoid using in-memory databases for testing (When: Data-driven systems)
2.6.2.4. Separate quick and slow tests (When: A small number of tests take most of the time to execute)
2.6.2.5. Keep overnight packs stable (When: Slow tests run only overnight)
2.6.2.6. Create a current iteration pack
2.6.2.7. Parallelize test runs (When: You can get more than one test environment)
2.6.2.8. Try disabling less risky tests (When: Test feedback is very slow)
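Separating quick and slow tests, or temporarily deselecting less risky ones, might be done with a pytest marker as sketched below (the test name is illustrative): the fast feedback run deselects them with `pytest -m "not slow"`, while a separate pack runs them with `pytest -m slow`.

```python
# A minimal sketch of splitting packs with a pytest marker.

# conftest.py
def pytest_configure(config):
    # register the marker so pytest doesn't warn about an unknown mark
    config.addinivalue_line("markers", "slow: long-running, lower-risk checks")


# test_statements.py (illustrative name)
import pytest


@pytest.mark.slow
def test_full_statement_reconciliation():
    ...  # a long-running check excluded from the quick pack
```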
2.6.3. Managing failing tests
2.6.3.1. Create a known regression failures pack
2.6.3.2. Automatically check which tests are turned off (When: Failing tests are disabled, not moved to a separate pack)
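A known-regression pack can be kept visible with a marker rather than by deleting or silently skipping tests, as sketched below: the main pack runs with `pytest -m "not known_regression"`, the regression pack with `pytest -m known_regression`, and `pytest -m known_regression --collect-only -q` lists what is currently turned off. The ticket ID and test name are illustrative.

```python
# A minimal sketch of a known-regression pack using a pytest marker.

# conftest.py
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "known_regression(ticket): failing test tracked in the backlog"
    )


# test_invoices.py (illustrative name)
import pytest


@pytest.mark.known_regression("BUG-1234")
def test_invoice_rounding_for_mixed_currencies():
    ...  # currently failing; tracked and reviewed, not forgotten
```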
2.7. Evolving living documentation
2.7.1. Living documentation should be easy to understand
2.7.1.1. Don’t create long specifications
2.7.1.2. Don’t use many small specifications to describe a single feature
2.7.1.3. Look for higher-level concepts
2.7.1.4. Avoid using technical automation concepts in tests (When: Stakeholders aren’t technical)
2.7.2. Living documentation should be consistent
2.7.2.1. Evolve a language
2.7.2.2. Base the specification language on personas (When: Web projects; see the sketch below)
2.7.2.3. Collaborate on defining the language (When: Choosing not to run specification workshops)
2.7.2.4. Document your building blocks
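Basing the specification language on personas might look like the behave-style sketch below, where "Mike the frequent shopper" and "Sara the first-time visitor" are illustrative personas: each is a documented building block created in one place, so scenarios can simply say "Given Mike has signed in". The Customer class and its attributes are assumptions.

```python
# features/steps/personas.py -- a minimal behave-style sketch of persona-based
# steps. The personas and the Customer class are assumptions for illustration.
from behave import given

from shop import Customer  # assumed domain code

PERSONAS = {
    # documented building blocks: who they are and what makes them distinct
    "Mike": Customer(name="Mike", orders_per_month=8, loyalty_tier="gold"),
    "Sara": Customer(name="Sara", orders_per_month=0, loyalty_tier="none"),
}


@given("{persona} has signed in")
def step_persona_signs_in(context, persona):
    context.current_user = PERSONAS[persona]
```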
2.7.3. Living documentation should be organized for easy access
2.7.3.1. Organize current work by stories
2.7.3.2. Reorganize stories by functional areas
2.7.3.3. Organize along UI navigation routes (When: Documenting user interfaces)
2.7.3.4. Organize along business processes (When: End-to-end use case traceability is required)
2.7.3.5. Use tags instead of URLs when referring to executable specifications (When: You need traceability of specifications)
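Referring to executable specifications by tag rather than by URL might be sketched with a pytest marker carrying a story ID (the "UM-12" reference is illustrative): release notes or a wiki can then point at "the checks tagged UM-12", and `pytest -m um_12` still finds them after files are moved or renamed.

```python
# A minimal sketch of tagging an executable specification with a story ID.
# Register the marker in conftest.py as in the earlier sketches.
import pytest


@pytest.mark.um_12  # user story "UM-12: password self-service reset" (illustrative)
def test_reset_link_expires_after_one_hour():
    ...
```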
2.7.4. Listen to your living documentation
3. Living Documentation
3.1. Why we need authoritative documentation
3.1.1. Need to know what the system does (what the business functionality is)
3.1.2. Easy and cheap to maintain
3.1.3. Documentation can be kept consistent with the system functionality even when the underlying code changes frequently