Software Testing World Cup


1. Which heuristics do we want to use?

1.1. http://karennicolejohnson.com/wp-content/uploads/2012/11/KNJohnson-2012-heuristics-mnemonics.pdf

1.2. http://karennicolejohnson.com/2012/07/testing-mnemonics-as-a-card-deck/

1.3. http://testobsessed.com/wp-content/uploads/2011/04/testheuristicscheatsheetv1.pdf

1.4. To be clear, I'm cool with SFDPOT - Claire

2. Questions for Customer

2.1. Non-functional

2.1.1. Do you care about testability?

2.1.2. Do you care about usability?

2.1.3. Do you care about accessibility?

2.1.4. Do you care about security?

2.1.5. Do you care about performance?

2.1.5.1. Can we load or performance test against this system?

2.2. DO YOU HAVE REQUIREMENTS YOU CAN GIVE US TO USE FOR TESTING?

2.3. WHEN SHOULD WE STOP ASKING QUESTIONS?

2.4. Functional?

2.4.1. Weinberg recommends...

2.4.1.1. Attribute wish list

2.4.1.1.1. dimensions we are interested in

2.4.1.1.2. vs points along those dimensions

2.4.1.2. How would you (the client) like to use this product?

2.4.1.2.1. Decisions

2.4.1.3. What is the purpose of this product?

2.4.1.3.1. What are you most looking forward to?

2.4.1.3.2. Which part of the system will be most valuable to you?

2.4.1.3.3. Expectations list

2.4.1.3.4. Replacing existing product?

2.4.1.4. How would others like to use this product? (i.e. user list)

2.4.1.4.1. Doctrine of Reasonable Use: we'll know what is reasonable when we see it

2.4.1.4.2. Measuring satisfaction

2.4.1.4.3. What groups of people have what troubles with the present product?

2.4.1.4.4. How much time do people spend on various parts of the present product?

2.4.1.4.5. How much money do people spend on various parts of the present product?

2.4.1.5. Don't worry about how much it would cost. What would you like?

2.4.1.5.1. What's it worth?

2.4.1.6. Don't worry if it can really be done. What would you like?

2.4.1.6.1. When do you need it?

2.4.1.6.2. What is the expertise of the estimators?

2.4.1.7. Evident (visible) vs Hidden (non-obvious or taken for granted) vs Frill (only if free)

2.4.1.8. Ambiguity

2.4.1.8.1. Problem-statement

2.4.1.8.2. Design-process

2.4.1.8.3. Final-product

2.5. What risks are you concerned about?

2.5.1. What is important?

2.5.2. What do my customers care about?

2.5.3. Priority/risk

2.5.4. Consequences of failure

2.5.5. Project details

2.6. Use any of Weinberg's context-free questions?

2.6.1. Process

2.6.1.1. Where else can the solution to this design problem be obtained? Can we copy something that already exists?

2.6.1.1.1. (Doesn't seem useful)

2.6.1.2. Should we use a single design team or more than one?

2.6.1.2.1. (Doesn't seem useful)

2.6.1.2.2. Who should be on the teams?

2.6.1.3. Who is the client?

2.6.1.3.1. (I think we know Matt's the client rep, but may not be the ultimate end-user type of customer)

2.6.1.4. How much time do we have for this project? What is the trade-off between time and value?

2.6.1.4.1. (Doesn't seem useful)

2.6.1.5. What is a highly successful solution really worth to this client?

2.6.1.5.1. What is the real reason for wanting to solve this problem?

2.6.1.6. Are you comfortable with the process right now?

2.6.1.7. Is there any reason you don't feel you can answer freely?

2.6.2. Product

2.6.2.1. What problems does this system solve?

2.6.2.2. What problems could this system create?

2.6.2.3. What environment is this system likely to encounter?

2.6.2.4. What kind of precision is required or desired in the product?

2.6.3. Metaquestions

2.6.3.1. Am I asking you too many questions?

2.6.3.2. Do my questions seem relevant?

2.6.3.3. Are you the right person to answer these questions?

2.6.3.3.1. (Seems like Matt is all we get, maybe judge team)

2.6.3.3.2. When I asked X about that, she said Y. Do you have any idea why she might have said Y?

2.6.3.3.3. I notice that you don't seem to agree with that reply. Would you tell us about that?

2.6.3.3.4. What can you tell me about the other people on this project?

2.6.3.3.5. How do you feel about the other people working with us on this project?

2.6.3.3.6. Is there anybody we need on this project whom we don't have?

2.6.3.3.7. Is there anybody we have on this project whom we don't need?

2.6.3.3.8. Can you tell me more about that person?

2.6.3.4. Are your answers official?

2.6.3.5. In order to be sure we understand each other, I've found that it helps me to have things in writing so I can study them at leisure. May I write down your answers and give you a written copy to study and approve?

2.6.3.5.1. (Seems inherent in the YouTube commenting process)

2.6.3.6. The written material has been helpful, but I find that I understand some things better if I can discuss them face to face. Can we get together at some point so we can know each other better and can clarify some of these points?

2.6.3.6.1. (Not sure that we'll get any live "face time" with judges, so probably not relevant)

2.6.3.7. Is there anyone else who can give me useful answers?

2.6.3.7.1. (Seems like Matt is all we get, maybe judge team)

2.6.3.8. Is there someplace I can go to see the environment in which this product will be used?

2.6.3.9. Is there anything else I should be asking you?

2.6.3.9.1. I notice that you hesitated a long time before answering that question. Is there something else we should know?

2.6.3.10. Is there anything you want to ask me?

2.6.3.11. May I return or call you with more questions later, in case I don't cover everything this time?

2.6.3.11.1. (I think the process said we have a limited question-asking window and it's up-front)

2.7. from demo

2.7.1. Are we only testing U.S. site? Canada too?

3. Tools

3.1. YouTube comments

3.2. Skype/Google Hangout

3.3. Google Docs

3.4. Mindmeister

4. Specializations

4.1. Accessibility

4.1.1. Curtis

4.2. Security

4.2.1. Nawwar

4.3. Question Asking

4.3.1. Claire

4.4. Report Beautification

4.4.1. Elizabeth

5. Deliverables

5.1. Issues found

5.1.1. Lean Testing

5.1.2. Emailed instructions on bug reporting

5.2. Test report

5.2.1. Format?

5.2.1.1. Could include Mindmeister?

5.2.1.2. Describing the state of the SUT

5.2.1.3. what was most important to test

5.2.1.4. details of strategy

5.2.1.5. major issues found

5.2.2. Purpose?

5.2.2.1. Help decision-maker figure out whether to ship

5.2.2.2. Needs more fixes?

5.2.2.3. What to invest in next?

5.2.3. Content

5.2.3.1. How you decided what to test

5.2.3.2. How team spent its time

5.3. Submission

5.3.1. title

5.3.1.1. Functional Test Report for TEAM_NAME

5.3.1.2. North America in the subject line

5.3.2. email address

5.3.2.1. [email protected]

5.3.3. time

5.3.3.1. grace period of 5 minutes after end time

5.4. Score

5.4.1. On mission

5.4.1.1. <= 20 pts

5.4.2. Quality of bug reports

5.4.2.1. <= 20 pts

5.4.3. Quality of test report

5.4.3.1. <= 20 pts

5.4.4. Accuracy of test report

5.4.4.1. <= 20 pts

5.4.5. Non-functional testing

5.4.5.1. <= 20 pts

5.4.6. Interacting with judges/customer

5.4.6.1. bonus <= 10 pts

6. Focus

6.1. Major issues

6.2. Informing the customer

6.3. Suggesting fixes & their importance

6.3.1. Bug advocacy

7. Product Owner inputs!!

7.1. helping consumers make decisions

7.1.1. credit cards

7.1.1.1. helping consumer select best rewards card for them

7.1.2. bank accounts

7.1.3. cell phones

7.2. text at the top of the page

7.2.1. subject/verb mismatch

7.3. prioritized/sorted list of credit cards suitable for you

7.4. filters

7.4.1. more tab

7.4.1.1. annual fee

7.4.1.2. foreign transaction fee percentage

7.5. inputs

7.5.1. credit score

7.5.2. types of cards

7.5.2.1. card issuer

7.5.3. rewards you want

7.5.3.1. redemption categories affect value of points

7.5.3.1.1. airfare makes points worth more

7.5.4. time horizon for rewards

7.5.4.1. 1 year

7.5.4.2. 10 years

7.5.5. spending patterns

7.5.5.1. monthly

7.5.5.2. categories
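
The inputs above (spending patterns, earn rates, point values that vary by redemption category, a time horizon) combine into a simple expected-reward estimate, which is likely what the sorted list is built on. A minimal sketch in Python (the product is reportedly written in R; all category names, rates, and fees here are hypothetical, not taken from the product):

```python
# Hypothetical sketch: net reward value of one card over a chosen time horizon.
# All rates, fees, and category names are made-up illustrations.

def estimated_reward(monthly_spend, earn_rates, point_value, annual_fee, years):
    """monthly_spend: {category: dollars}; earn_rates: {category: points per dollar};
    point_value: dollars per point (depends on redemption category, e.g. airfare);
    returns the net reward in dollars over `years`."""
    annual_points = 12 * sum(
        dollars * earn_rates.get(category, 1.0)  # assume 1x on unlisted categories
        for category, dollars in monthly_spend.items()
    )
    return years * (annual_points * point_value - annual_fee)

# Redeeming for airfare makes each point worth more, so the same card and
# spending pattern yields a higher estimate under the airfare point value.
spend = {"groceries": 400, "travel": 150, "other": 600}
cash_back = estimated_reward(spend, {"groceries": 2, "travel": 3}, 0.01, 95, 10)
airfare = estimated_reward(spend, {"groceries": 2, "travel": 3}, 0.015, 95, 10)
```

Testing this kind of calculation end to end would mean checking a few hand-computed cards against the site's displayed totals, per the "accuracy of calculations" concern below.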

7.6. toggles

7.6.1. sorting preferences

7.6.1.1. plain sorting

7.6.1.1.1. avg annual reward over chosen time horizon

7.6.1.2. annual fees

7.6.1.3. prioritized

7.6.1.4. order of sorts matters
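
Since "order of sorts matters", the sorting toggles presumably compose keys by priority, so swapping the primary and secondary key changes the result. A sketch of that behavior with hypothetical card data (field names are assumptions, not the product's schema):

```python
# Hypothetical sketch: multi-key sorting where key priority changes the order.
cards = [
    {"name": "A", "annual_fee": 95, "avg_annual_reward": 300},
    {"name": "B", "annual_fee": 0,  "avg_annual_reward": 250},
    {"name": "C", "annual_fee": 0,  "avg_annual_reward": 300},
]

# Primary key: reward (descending); tie-break: fee (ascending).
by_reward_then_fee = sorted(
    cards, key=lambda c: (-c["avg_annual_reward"], c["annual_fee"]))

# Swapping priority (fee first, then reward) orders the same cards differently.
by_fee_then_reward = sorted(
    cards, key=lambda c: (c["annual_fee"], -c["avg_annual_reward"]))
```

A useful test idea: apply the same toggles in two different orders and check whether the site's ordering matches the composed-key expectation.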

7.6.2. Pareto ranking

7.6.2.1. tradeoffs

7.6.2.2. finding best options depending on your credit score
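
One plausible reading of "Pareto ranking" is a non-dominated filter: a card stays in the top set unless some other card is at least as good on every dimension and strictly better on one. A sketch under that assumption, using two hypothetical dimensions (lower annual fee is better, higher average annual reward is better):

```python
# Hypothetical Pareto-front sketch over two dimensions of card quality.

def dominates(a, b):
    """True if card `a` is at least as good as `b` on both dimensions
    and strictly better on at least one."""
    return (a["annual_fee"] <= b["annual_fee"]
            and a["avg_annual_reward"] >= b["avg_annual_reward"]
            and (a["annual_fee"] < b["annual_fee"]
                 or a["avg_annual_reward"] > b["avg_annual_reward"]))

def pareto_front(cards):
    # Keep every card that no other card dominates.
    return [c for c in cards
            if not any(dominates(other, c) for other in cards if other is not c)]

cards = [
    {"name": "A", "annual_fee": 95, "avg_annual_reward": 400},
    {"name": "B", "annual_fee": 0,  "avg_annual_reward": 250},
    {"name": "C", "annual_fee": 95, "avg_annual_reward": 300},  # dominated by A
]
front = pareto_front(cards)  # A and B survive; neither dominates the other
```

This also matches the "better than" directed graph below: an arrow from worse to better is exactly the dominance relation, and the front is the set of nodes with no outgoing "worse than" arrow.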

7.7. better than diagram

7.7.1. directed graph

7.7.2. arrows point from worse option to better option

7.7.3. no errors, not much other value

7.7.4. lower priority

7.8. customize columns display

7.8.1. groups of columns

7.9. concerns

7.9.1. accuracy

7.9.1.1. calculations

7.9.1.1.1. not focused on small details

7.9.1.1.2. respond well

7.9.1.1.3. What can people do to compromise the calculations?

7.9.1.2. avoid "gross inaccuracy"

7.9.1.2.1. legal issues

7.10. browsers

7.10.1. using websocket

7.11. accessibility

7.11.1. easy fixes

7.11.2. not complete overhaul

7.12. availability

7.12.1. concurrent users

7.12.1.1. up to 20?

7.13. privacy

7.13.1. PII?

7.13.1.1. nope

7.14. responsive design

7.15. locale

7.15.1. USA

7.15.1.1. larger potential user base

7.15.2. Canada

7.15.2.1. affiliate kickback

7.15.3. wrong aspect ratio on US flag

7.16. source of data

7.16.1. card issuer websites

7.17. programming language

7.17.1. R

8. Possible bugs

8.1. The "..." at the bottom looks like it should be clickable but is not