Challenges (Mind Map)

1. Predictability level is low

1.1. QA: I can’t estimate

1.1.1. The US is so big that I can't predict the whole effort (A)

1.1.2. My business knowledge lacks detail

1.1.2.1. I've rarely or never touched the area, and the documentation doesn't help me

1.1.2.2. I don’t actually know how the user is using the app. Should I test 10 scenarios or 100?

1.2. QA: My estimations are inaccurate/wrong

1.2.1. 30% - The areas estimated to be affected by development differ from the areas actually affected, so I need to test more than I predicted

1.2.2. 20% - The AC changes during development

2. Solutions

2.1. Business Knowledge Workshop

2.1.1. Dedicated to QAs, but the whole team is involved

2.1.2. Documentation and videos as an outcome

2.1.3. Done with real users, PO, POS

2.1.4. Define Happy flow scenarios in the Regression tests -> Build the Automation suite

2.2. QA Processes and Procedures applied by the book

2.2.1. FPR executed following the rules

2.2.1.1. Test Plan before the meeting (but no validation)

2.2.1.2. Dev presenting happy flows, e2e

2.2.1.3. Test Plan adjustments (after the meeting)

2.2.1.4. Clarity about trigger: the dev. If it conflicts with QA capacity -> TL decision

2.2.2. Test Plan

2.2.2.1. Test plan task is closed before FPR

2.2.2.1.1. adjustments < 5 tc -> put the external effort consumed into the execution task

2.2.2.1.2. adjustments > 5 tc -> create a task called “Test Plan updated”

2.2.2.1.3. Parameterize shared steps: configs, etc. Ex: Agent invoice
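
The adjustment rule above can be sketched as a simple decision function. The 5-test-case threshold and the “Test Plan updated” task name come from the bullets; everything else is illustrative:

```python
# Sketch of the test-plan adjustment routing rule.
# Threshold and task name follow the map; the function itself is hypothetical.

ADJUSTMENT_THRESHOLD_TC = 5  # test cases

def route_adjustment(changed_test_cases: int) -> str:
    """Return where the adjustment effort should be booked."""
    if changed_test_cases < ADJUSTMENT_THRESHOLD_TC:
        return "log effort on the execution task"
    return 'create a "Test Plan updated" task'

print(route_adjustment(3))
print(route_adjustment(8))
```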

2.2.3. Test Review

2.3. Refinement improvements

2.3.1. Product: US improvements

2.3.1.1. US granularity is adjusted based on INVEST and team input

2.3.1.2. New requirements/AC trigger a new US

2.3.1.3. Scenario isolation: only scenarios used in Prod and agreed with stakeholders

2.3.2. Team: the US details field contains the following

2.3.2.1. Dev: areas impacted by development

2.3.2.2. QA: test plan highlights

2.4. Environments improvements

2.4.1. QA env to be linked to Prod DB clone
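
One way to realize the Prod DB clone could be a scheduled dump-and-restore job. This sketch assumes a PostgreSQL backend; the host names, database names, and file path are placeholders, and any PII masking after the restore is out of scope here:

```python
# Sketch: build the commands a scheduled job would run to refresh the
# QA database from a Prod clone. Assumes PostgreSQL (pg_dump/pg_restore);
# all connection details are illustrative placeholders.

PROD = {"host": "prod-db.example.internal", "db": "app"}
QA = {"host": "qa-db.example.internal", "db": "app_qa"}

def build_clone_commands(prod: dict, qa: dict) -> list:
    """Return the dump and restore commands for the refresh job."""
    dump = ["pg_dump", "--format=custom", "--file=/tmp/prod.dump",
            "--host", prod["host"], prod["db"]]
    restore = ["pg_restore", "--clean", "--if-exists",
               "--host", qa["host"], "--dbname", qa["db"], "/tmp/prod.dump"]
    return [dump, restore]

for cmd in build_clone_commands(PROD, QA):
    print(" ".join(cmd))
```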

2.5. Dev test is done thoroughly

2.6. Retros improvements

2.6.1. Take the task plan updates and discuss them. Outcome: causes and solutions

2.6.2. Take the defects and discuss them. Outcome: causes and solutions

2.6.3. US-changes KPI and how to reduce the number over time

2.6.4. Idea: a meeting-free day?

2.7. Daily Sync

2.7.1. Not focused on details/solutions

2.7.2. What do you promise to deliver today?

2.7.3. What you did yesterday vs what you committed to do

3. My work efficiency is affected

3.1. Context switching when US move on/off the sprint

3.2. Takes time to build test data

3.3. Happy flows not presented by the dev in the FPR

3.4. I'm spotting defects very soon after starting to test

3.5. No# of defects is high -> collect a new dataset and re-run the tests

3.6. Interruptions

3.6.1. unplanned discussions

3.6.2. planned meetings - high number

4. Metrics

4.1. Focus on existing KPIs

4.1.1. Track and adjust no# of Defects

4.1.2. Track PO satisfaction level

4.1.3. Track bugs in INT/Prod

4.2. New KPIs

4.2.1. Track Estimations vs Actuals for Test Plans

4.2.2. Track Estimations vs Actuals for Test executions

4.2.3. Track no# of “Task plan adjustments” per Sprint

4.2.4. Track no# of US changes during a Sprint

4.2.5. Track QA satisfaction level
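
The estimation-vs-actuals KPIs above could be computed from sprint task data roughly as follows. The field names and sample hours are made up for illustration; only the idea of comparing estimates to actuals per task type comes from the map:

```python
# Sketch: estimate-vs-actual deviation per task, for the Test Plan and
# Test Execution KPIs. Field names and sample data are illustrative only.

def deviation_pct(estimated_h: float, actual_h: float) -> float:
    """Signed deviation of actual from estimate, as a percentage."""
    return round((actual_h - estimated_h) / estimated_h * 100, 1)

tasks = [
    {"kind": "test_plan", "estimated": 4.0, "actual": 5.0},
    {"kind": "execution", "estimated": 8.0, "actual": 12.0},
]

for t in tasks:
    print(t["kind"], deviation_pct(t["estimated"], t["actual"]))
```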

5. What we did so far

5.1. Gathering Input

5.1.1. From QAs, 2-3 sessions

5.1.2. From TL, 1-2 sessions

5.1.3. From PO & POS, 1 session

5.2. Processing data

5.2.1. Impact analysis

5.2.2. Identifying root causes

5.2.3. Build cause-effect relations

5.2.4. Prioritizing topics

5.3. Offering input on Processes

5.3.1. FPR Checklist

5.3.2. Test Plan HowTo

5.4. Draft Solutions

5.4.1. Link solutions to causes

5.4.2. Validate solutions

5.4.3. Metrics to track the solutions efficiency

6. The number of Prod bugs is high

6.1. I can't find bugs earlier

6.1.1. Prod data complexity is different from what we have in QA

6.1.2. Some user flows are different from what we know and test

6.2. Engineering mindset "Plan before you act" is rarely used

6.2.1. Not able to create a test plan

6.2.1.1. The level of understanding of the US impact is low

6.2.1.2. The unknown is too big, I need to explore

6.2.2. Creating a plan in advance might be wasting time

6.2.2.1. Changes due to dev/PO input

6.2.2.2. When I start testing I find new stuff I didn't think of and need to re-plan