7 Ways to Find Software Defects http://bit.ly/LJ8seU by Matt Heusser

Seven Ways to Find Software Defects Before They Hit Production


1. #2: Equivalence & Boundary Conditions

1.1. Behavior & criteria rules

1.2. Insurance example

1.2.1. Ages 0-91 (do you really want to do 92 test cases?)

1.2.2. Each test case

1.2.2.1. Multiplied by number of points on license

1.2.2.2. Variety of available discounts

1.2.2.3. Types of coverage

1.2.2.4. Other variables

1.2.2.5. Lots of test cases! Combinatorial explosion

1.2.3. Boundary testing

1.2.3.1. Instead of 92 test cases, group the ages into a table of 8 columns

1.2.3.1.1. Table

1.2.3.2. Test each column once

1.2.3.3. Data within a column might be equal or equivalent

1.2.3.4. Designed to catch off-by-one errors

1.2.3.4.1. Program used "greater than" to compare two numbers

1.2.3.4.2. Should have used "greater than or equal to"

1.2.3.4.3. Catches "age 45 problem" (see doc)

1.2.3.5. Test every transition, both before & after

1.2.3.6. All cases are covered

1.2.3.7. A technique to reduce an infinite test set into something manageable

1.2.3.8. Shows that requirements are covered

1.2.3.9. Look out for other hidden classes that may exist
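The boundary/equivalence idea above can be sketched in code. This is a hypothetical example: the age brackets, class names, and `rate_class` function are invented for illustration, not taken from the article's actual insurance table.

```python
# Hypothetical sketch of equivalence & boundary testing.
# rate_class partitions ages into equivalence classes; instead of 92
# cases (ages 0-91), we test one value per class plus both sides of
# every transition, which is exactly where off-by-one bugs hide.

def rate_class(age):
    """Return an (invented) insurance rate class for a driver's age."""
    if age < 16:
        raise ValueError("too young to insure")
    if age < 25:
        return "young"
    if age < 45:          # off-by-one risk: "<" vs "<=" at the age-45 boundary
        return "standard"
    if age < 65:
        return "mature"
    return "senior"

# Values just before and just after each transition, plus the endpoints.
boundary_cases = [16, 24, 25, 44, 45, 64, 65, 91]
expected = ["young", "young", "standard", "standard",
            "mature", "mature", "senior", "senior"]

for age, want in zip(boundary_cases, expected):
    assert rate_class(age) == want, f"age {age}: got {rate_class(age)}"
```

Eight targeted checks stand in for 92 exhaustive ones, and a `>` that should have been `>=` at any boundary would fail immediately.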

2. #7: Regression & High-volume Test Techniques

2.1. Prove out new functionality

2.2. Make sure nothing that worked in the previous version broke

2.3. Make sure software did not regress (something worked yesterday but fails today)

2.4. Example

2.4.1. Record input & output data

2.4.2. Send input data to old/new versions of app

2.4.3. Gather output & compare against the recorded output

2.4.4. If output is different, possible bug
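The record-and-replay steps above can be sketched as follows. Everything here is hypothetical: `old_version` and `new_version` stand in for two builds of the same transformation, and the regression is deliberately seeded so the diff has something to find.

```python
# Sketch of high-volume regression testing by record and replay.
# Step 1: record inputs and outputs from the old build.
# Step 2: replay the same inputs against the new build.
# Step 3: diff the outputs; any mismatch is a possible regression.

def old_version(x):
    return x * 2

def new_version(x):
    # Deliberately broken for negatives, to simulate a regression.
    return x * 2 if x >= 0 else x * 2 + 1

recorded_inputs = [0, 1, -1, 100]
recorded_outputs = [old_version(x) for x in recorded_inputs]  # "yesterday's" run

mismatches = [(x, old, new_version(x))
              for x, old in zip(recorded_inputs, recorded_outputs)
              if new_version(x) != old]

print(mismatches)  # non-empty list flags inputs where behavior changed
```

In practice the recorded data comes from production logs or a captured test run, and the comparison tolerates intentional changes; the skeleton is the same.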

3. #6: Code-based Coverage Models

3.1. Statement Coverage

3.1.1. Turn on when you start testing

3.1.2. Turn off when testing finished

3.1.3. See which lines of code are untested (red)

3.1.4. Improve testing of red functions & branches

3.1.5. Tools that record every line of code executed
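A real coverage tool (such as coverage.py) does this far more robustly, but the core idea of statement coverage can be sketched with Python's `sys.settrace`: turn tracing on, run the tests, turn it off, and report which lines never executed. The `grade` function and its inputs are invented for illustration.

```python
# Minimal statement-coverage sketch: record every line executed inside
# one function, then report the lines the tests never reached ("red").
import sys

executed = set()

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "grade":
        executed.add(frame.f_lineno)
    return tracer

def grade(score):
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    return "C"

sys.settrace(tracer)   # "turn on when you start testing"
grade(95)              # our only test: exercises just the >= 90 branch
sys.settrace(None)     # "turn off when testing finished"

first = grade.__code__.co_firstlineno
body_lines = set(range(first + 1, first + 6))   # the 5 body lines of grade
untested = sorted(body_lines - executed)
print(len(untested))   # 3 body lines were never executed
```

The three untested lines (the `>= 80` branch and both lower returns) are exactly what a coverage report would paint red, prompting a test with a score below 90.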

4. #5: Use case & Soap Opera Tests

4.1. Use cases

4.1.1. Who, what, why behaviors of users

4.2. Scenarios

4.2.1. How someone might use system

4.3. Soap opera tests

4.3.1. Crazy, wild combos of improbable scenarios

4.3.2. Can provide a quick, informal estimate of software quality

4.3.3. If the soap opera test succeeds, simpler scenarios will likely work too

5. #4: State-transition Diagrams

5.1. Map through application

5.1.1. List places/pages

5.1.2. Links between places/pages

5.1.3. Tests for every transition through the application

5.1.4. Work w/team for "hidden" states

5.1.5. Compare other diagrams w/yours for differences

5.1.5.1. Can indicate gaps in requirements

5.1.5.2. Defects in software

5.1.5.3. Different expectations among team

5.1.6. Diagram

5.1.6.1. http://bit.ly/LJ8seU
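The map described above can be captured as a simple adjacency structure and expanded into one test per transition. The page names here are invented, not taken from the article's diagram.

```python
# Sketch: model the application's pages and the links between them as a
# state-transition map, then enumerate every transition as a test case.

transitions = {
    "login":   ["home"],
    "home":    ["search", "account", "login"],  # "login" edge = logout
    "search":  ["results", "home"],
    "results": ["search", "home"],
    "account": ["home"],
}

# Every edge in the diagram becomes exactly one transition test.
test_cases = [(src, dst)
              for src, dsts in transitions.items()
              for dst in dsts]

print(len(test_cases))  # number of transitions to test through the app
```

Comparing this structure against a teammate's version (a set difference on the edge lists) surfaces the "hidden" states and mismatched expectations the outline mentions.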

6. #3: Common Failure Mode

6.1. Device loses network coverage

6.2. Many applications open at the same time on a low-memory device

7. #1: Quick Attacks

7.1. Purpose

7.1.1. Little or no prior knowledge of system

7.1.2. Do not know the requirements

7.1.3. Attack the system by filling in the wrong thing, trying to send it into a panicked state

7.2. Attacks

7.2.1. Leave required field blank

7.2.2. Different workflow than implied

7.2.3. Number field

7.2.3.1. Type alpha chars

7.2.3.2. Number too large for system to handle

7.2.3.3. If expects whole number, use decimal

7.2.4. Alpha field

7.2.4.1. Type numerics

7.2.4.2. Start > Run > charmap

7.2.4.3. French-Canadian & Spanish chars

7.2.5. Combine things programmers did not expect w/common failure modes of platform

7.2.6. Web app

7.2.6.1. Resize browser window

7.2.6.2. Flip back/forth quickly between tabs

7.2.7. Submit buttons should submit

7.2.8. Other

7.2.8.1. Test Heuristics Cheat Sheet (E. Hendrickson)

7.2.8.1.1. http://bit.ly/yteMp8

7.2.8.2. How to Break Web Software (Andrews/Whittaker)

7.2.8.2.1. http://amzn.to/WO7diB
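The quick attacks listed above lend themselves to a data-driven sketch. The `validate_age_field` function and its rules are hypothetical, invented to show the pattern: throw blanks, alpha characters, oversized numbers, decimals, and accented characters at a numeric field and check that each is rejected gracefully rather than crashing or being accepted.

```python
# Data-driven quick attacks against a toy whole-number input field.

def validate_age_field(text):
    """Hypothetical validator for a whole-number 'age' field."""
    if not text.strip():
        return "error: required"
    try:
        value = int(text)
    except ValueError:
        return "error: not a whole number"
    if value < 0 or value > 130:
        return "error: out of range"
    return "ok"

# Attack values mirroring the list above: blank required field, alpha
# chars in a number field, a huge number, a decimal where a whole number
# is expected, accented (French/Spanish) characters, a negative.
attacks = ["", "abc", "999999999999999999", "3.5", "café", "-1"]
results = {a: validate_age_field(a) for a in attacks}

# The attack "passes" when every bad input is rejected with a clear error.
assert all(r.startswith("error") for r in results.values())
```

The point of quick attacks is that this list needs no knowledge of the requirements; the same attack values work against almost any input field.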

8. Strengths/Weaknesses: Techniques 1-7

8.1. http://bit.ly/LJ8seU

9. Combine techniques

9.1. What can be automated?

9.2. Which are business facing (used by a human being)?

9.3. Which overlap?

9.4. Which are completely unrelated?

9.5. Challenges

9.5.1. How many sets of possible inputs & transformations?

9.5.2. Limit ideas to ones that will provide the most information right now

9.5.3. Balance single tests we could automate in an hour vs. the 50 we could run by hand in that hour

9.5.4. Figure out what we learned in that hour & how to report it