Project X
by Sara Åkerlindh
1. Test result
1.1. Test execution So how did it actually go? In a real project you could add this information to the test scenarios you have already written, or simply mark them as not run/run with issues/run ok and present the result in its own part of the mind map. It's up to you! For this project we felt it would be easier to list the results separately, to make it clearer that they are different parts of the workshop. You should add as much information as makes sense in the context but NOT more. Try to make it clear which results are important and which are not and, as usual, WHY. You can group them by severity, by area of the system, by complexity, by part of the requirement or any other way that makes sense to you and your context. We will not include any examples right now as they might affect your testing. :-) You will get them afterwards.
2. Feedback on my testing
2.1. Documentation and review Take a few minutes to consider how you actually did. What did you test? What did you plan to test but not get to? Why? What did you not plan for but did anyway? Why? Were there issues that caused your tests to go less smoothly? Environments perhaps? Test data? Maybe you were not focused and feel that you might have missed things? It could look something like this: "I had time to complete the sessions I planned for the two configurations that were set to 'must haves'. I had to stop my tests on mobile because I found that the screen resolution caused the site to crash. Since this was a 'nice to have' according to the client, I decided not to spend more time on it. I have a feeling that there is some security issue but could not prove it; I will do a quick check with the developers and see if they can shed some light on it. If I can't find more information I will just leave a note about it. I feel I could have done a better job with the engraving tests, but they were not a priority, so I will add more time for them in my next session."
3. Suggested plan moving forward
3.1. As a tester, your job is not to decide whether a release candidate will be deployed, or even whether the release needs to be postponed. You might feel frustrated by that, or feel pressured into acting as the gatekeeper (or into "accepting" a release), but that is NOT your job! You should present as much important information as possible, in a way that makes sense to the receivers, to allow others to make an informed decision. Those "others" might be a project manager, a product owner, a client or the team, but unless it is actually formalized as one of your responsibilities (and if it is, you should make sure to get rid of it!) you should stop thinking in terms of "I accept/reject this release". What information is important and how it will be best received depends on context, but telling a compelling story usually makes it easier to get your recommendation accepted. As testers our focus is often that quality is the most important aspect of development, but that might not always be the case, and a lot of the time we end up frustrated with bugs being ignored or versions being released even though we know they have issues. This could be for a number of reasons, among them:
- The person in charge of the decision knows the profit is worth the risk
- You did not manage to explain the severity of the issue
- The fix is planned for a patch in a few days/weeks
- The functionality will be obsolete in x days/weeks/months because the next release has a complete refactoring of that part
- Someone did not trust you
- Someone convinced someone that no user will find the issue
HOW you phrase yourself can make a ton of difference in how you are received, both on the business side and the development side. Speaking in absolutes is usually not helpful, and avoiding them also makes the few times you feel the need to use them have more impact (think of the boy who cried wolf). Speaking in words that feel too personal or critical can make you less friendly with the developers.
Usually you can say the same thing without sounding like you think their job was really bad, and suddenly it will be easier to collaborate. Gaining trust is so important! Thinking one step beyond "STOP THE RELEASE" is also helpful. Maybe there is a way forward that you (or others) have not considered? Maybe the critical bugs you found are misunderstandings that could be fixed in an hour if someone just discussed the functionality with the devs? Maybe a pair programming/testing session could work wonders? Maybe removing a few extra-hard stories will allow the rest to be released on time? Or you might actually feel that releasing the application will cause lives to be lost - then make sure to convince them! I have not added an example because it might affect how you tackle the assignment. You will get one afterwards :-)
4. Assumptions/Questions
4.1. This is a short list of the assumptions you make that are not spelled out in the oracles you test against. There is no such thing as a perfect spec that is impossible to misinterpret. Forcing yourself to write your assumptions down will make it easier to communicate with different stakeholders afterwards, and could save time if you run them by the developers and/or the client beforehand. We make a lot of these assumptions all the time, and it takes practice to see them and challenge them. The developers make their own assumptions, and the person writing the spec certainly makes them! In the end a lot of the requirements, the implementation and the tests/bugs reported are based on assumptions, leading to the famous "That's not in the requirement!" comment that all testers love. Of course you should not take it to the absurd and write down everything, but take a few minutes to consider your own bias. Assumptions could be:
- That standard icons have the same meaning (save, open, shopping cart etc.)
- That you should be able to empty/change the content of a shopping cart
- That the general flow of a system/web page goes from top left to bottom right
- That tab orders follow a logical flow between fields
- That date fields handle all possible (and non-valid!) dates
- That 3+ means "three and more" (Or? Maybe you would say four and more?)
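The assumption about date fields can be probed directly rather than left implicit. Below is a minimal Python sketch; the candidate dates and the strict ISO format are assumptions chosen for illustration, not taken from any spec:

```python
from datetime import datetime

# Probe which date strings a strict ISO parser accepts.
# Mix of valid, boundary (leap day) and clearly invalid dates.
CANDIDATES = ["2024-02-29", "2023-02-29", "2024-13-01", "2024-00-10", "2024-12-31"]

def is_accepted(value: str) -> bool:
    """Return True if the string parses as a real YYYY-MM-DD date."""
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True
    except ValueError:
        return False

results = {value: is_accepted(value) for value in CANDIDATES}
# 2024 is a leap year, so "2024-02-29" is valid while "2023-02-29" is not.
```

Running a table like this against the system's own date fields quickly shows whether your assumption about "all possible (and non-valid!) dates" actually holds.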
5. My mission
5.1. Analysis and planning This is a short description of how you interpret your mission. The mission sets the scene for everything from now on, so make sure you think it through. You could for example focus on:
- What would potentially cost the customer the most if it's broken?
- Where is the spec (the most) unclear/ambiguous?
There are different kinds of missions; some examples are:
- Discovery missions:
- Quick scan
- Touring
- Targeted
One mission could be: "My testing mission is to do a targeted scan of the discount functionality. My main purpose is to provide information to the stakeholders, in this case the CEO and marketing director, about problems in the area that might affect the 2-year plan of increasing the monthly order amount 10x. Discounts and targeted campaigns are crucial in that plan."
6. Testing Approach
6.1. This is a short summary of the approach you will take. Considering different methods, techniques and/or mindsets will have a big impact on how the testing proceeds and how you adapt to findings. There are many approaches; some examples are:
- Risk-based
- Requirement-based
- Model-based
- Experience-based
- Ad hoc
Test techniques could, among others, include:
- Boundary value analysis
- Equivalence partitioning
- Decision tables
- Cause-effect analysis/graphing
- Error guessing
- Exploratory
You should also try to put into words how you categorize issues and why. A summary could be: "My approach will be a combination of risk- and requirement-based, where I will design a set of quick checks against the requirements, mainly using boundary value analysis and decision tables, and a few end-to-end tests using a session-based exploratory approach built on error guessing and my previous experience of where errors tend to appear. I will divide issues into red, yellow and green, where red is only issues that stop a major function, can't be worked around and could cause a major loss of money; green is things that are only mildly annoying, probably only affect a few users and will not have a big impact on revenue; yellow is everything in between."
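To make boundary value analysis concrete, here is a minimal Python sketch. The free-shipping rule, the 500 threshold and the 49 fee are invented for illustration; the technique itself is simply to check just below, at, and just above the boundary:

```python
# Hypothetical rule under test: "orders of 500 or more ship for free".
FREE_SHIPPING_THRESHOLD = 500
SHIPPING_FEE = 49

def shipping_fee(order_total: int) -> int:
    """Toy stand-in for the shop's logic: the fee is waived at the threshold."""
    return 0 if order_total >= FREE_SHIPPING_THRESHOLD else SHIPPING_FEE

# Boundary value analysis: one case just below, one at, one just above.
boundary_cases = {
    499: SHIPPING_FEE,  # just below the boundary: fee applies
    500: 0,             # at the boundary: free shipping
    501: 0,             # just above the boundary: free shipping
}

for total, expected in boundary_cases.items():
    assert shipping_fee(total) == expected, (total, expected)
```

Off-by-one mistakes (`>` instead of `>=`) live exactly at these three values, which is why the technique pays off even with so few cases.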
7. Test plan
7.1. This is a rough plan of the tests you hope to do. If there is time we would advise you to add any idea that comes to mind, even ones that are discarded or not prioritized. When asked afterwards you will be glad you documented them, and maybe they will end up important in the next session. How you write them, what level of detail and how many is all up to you and the time span you are allowed! Use the heuristics you like, or try a new one just to learn something new. A few ideas are included, with some links for your convenience. We like to add some information about configurations, and a priority for each configuration and test idea/scenario. An easy red/yellow/green or must/should/could will be enough, and again: when you are asked two weeks later why you didn't find a particular bug, you can look back and see that you decided not to test on IE 9.0 because of... reasons...
Scenario-based test ideas could be written as: "When checking out with a valid order and all mandatory customer details added, the user should get an invoice with correct information and any details needed to follow up on the order."
Requirement-based checking ideas could be written as:
- VAT (25%) should be added to the listing price in the shopping cart
- Shipping fee should be included in the total price before adding VAT
- VAT must be specified separately due to legal requirements
A security-based session could be written as: "I will use Bug Magnet to try to force SQL injections and cross-site scripting in the major fields in the top 3 web browsers based on Google Analytics statistics: IE 11, Firefox and Google Chrome. I will also try the basic analysis in OWASP ZAP on the shop but not do any deeper scan. I will only focus on OWASP top 3, as 4-10 are out of scope for now."
A performance-based session could be written as: "I will set up ten basic user flows and create agents simulating a concurrent number of users exceeding the expected amount of daily orders by ten. I will use Tool X in our Azure environment."
An accessibility-based session could be written as: "Run the different pages through the A11Y Compliance Platform to check that we meet WCAG AAA."
Link to OWASP
Link to A11Y Compliance Platform
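The requirement-based checking ideas above (25% VAT on top of listing price plus shipping, with VAT specified separately) can be turned into executable assertions. This Python sketch uses a toy totalling function as a stand-in for the real shop's calculation; the 400/49 amounts are made-up test data:

```python
VAT_RATE = 0.25  # 25% VAT, per the requirement

def invoice_totals(listing_price: float, shipping_fee: float) -> dict:
    """Return net, VAT and gross; shipping is added to the total before VAT."""
    net = listing_price + shipping_fee
    vat = round(net * VAT_RATE, 2)
    return {"net": net, "vat": vat, "gross": round(net + vat, 2)}

totals = invoice_totals(listing_price=400.0, shipping_fee=49.0)

# Shipping fee is included in the total price before adding VAT...
assert totals["vat"] == round((400.0 + 49.0) * VAT_RATE, 2)
# ...and VAT is specified separately, so net + VAT must equal gross.
assert totals["gross"] == totals["net"] + totals["vat"]
```

Checks like these make each requirement falsifiable: if the real shop applies VAT before shipping, or hides VAT inside the gross amount, the corresponding assertion fails and points straight at the violated requirement.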