Eovendo improvements

1. Blockers

1.1. Set up automated Jira alerts, e.g. for issues in blocked status for more than 5 days (see the sketch at the end of this section)

1.2. Assign blockers to the people responsible for unblocking them (or to the team lead)

1.3. Introduce a mandatory weekly review of blockers
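
A minimal sketch of the alert from 1.1, assuming Jira's standard REST search endpoint; the instance URL and credentials are hypothetical placeholders, and the JQL uses the built-in CHANGED ... BEFORE predicate to find issues that entered Blocked more than 5 days ago:

```python
# Minimal sketch: list Jira issues stuck in "Blocked" for more than 5 days.
# JIRA_URL and AUTH are placeholders for your own instance and API token.
import requests

JIRA_URL = "https://your-company.atlassian.net"   # hypothetical instance
AUTH = ("alert-bot@your-company.com", "API_TOKEN")  # hypothetical credentials

# Issues currently Blocked that moved into Blocked more than 5 days ago.
JQL = "status = Blocked AND status CHANGED TO Blocked BEFORE -5d"

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": JQL, "fields": "summary,assignee"},
    auth=AUTH,
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    fields = issue["fields"]
    assignee = (fields.get("assignee") or {}).get("displayName", "unassigned")
    print(f'{issue["key"]}: {fields["summary"]} (assignee: {assignee})')
```

Scheduled via cron or a CI job, the output can feed an e-mail or chat notification; Jira's built-in Automation rules can achieve the same without code.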

2. Priorities

2.1. Review confirmed rule violations

2.1.1. If any are found, discuss them with all teammates at the retrospective

2.2. Onboarding: make sure the rules are properly communicated to new team members

2.3. Daily stand-up: explicitly communicate the next tasks (priorities)

2.4. Document violations and keep a score for each developer

3. Team

3.1. Regular visits by Martin to Kyiv

3.1.1. Improve team involvement through regular syncs and sharing of the vision, roadmap, and key company goals

3.2. Hold regular retrospectives, focused on continuous improvement

3.3. Introduce a proper onboarding process

3.4. Ensure a quick transition from the storming phase to the performing phase

3.5. Hold regular one-to-one meetings to minimize attrition and internal conflicts

3.6. Maintain work discipline

3.7. Enable releases on Friday

3.7.1. To enable Friday releases, introduce weekend overtime rates

4. Scope

4.1. Introduce estimations

4.1.1. Measure velocity

4.1.1.1. Martin says that the team is not getting faster, but a feeling is not something we can work with. We need to measure the velocity trend and act on it accordingly; only this trend can show the ground truth about speeding up or slowing down (see the sketch below).

4.1.1.2. To enable this measurement, estimations have to be introduced first
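
A minimal sketch of the trend measurement from 4.1.1.1, with made-up story-point numbers: fit a least-squares line through completed points per sprint, and let the sign of the slope, not a feeling, say whether the team is speeding up or slowing down.

```python
# Minimal sketch: quantify the velocity trend across sprints.
# The story-point numbers are illustrative, not real project data.
velocity = [21, 24, 19, 26, 28, 25, 30]  # completed points per sprint

n = len(velocity)
xs = range(n)
mean_x = sum(xs) / n
mean_y = sum(velocity) / n

# Least-squares slope: points gained (or lost) per sprint on average.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, velocity)) \
        / sum((x - mean_x) ** 2 for x in xs)

print(f"average velocity: {mean_y:.1f} points/sprint")
print(f"trend: {slope:+.2f} points per sprint "
      f"({'speeding up' if slope > 0 else 'slowing down'})")
```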

4.1.2. Measure scope trend

4.1.2.1. Martin claims that the number of tasks has increased from 700 to 1,000. That alone does not show that the team works more slowly: team speed is quantified by velocity, while a higher task count only means the product owner created more work. Moreover, the team has grown from 10 to 15 people (to 150%), and the task count has grown roughly proportionally (to ~143%). We need to measure not just the quantity of tasks but their size as well, to see how the added scope changes over time (see the sketch below).
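
To separate product-owner output from team speed, normalize scope by team size. A minimal sketch using the two data points from 4.1.2.1 (700 tasks / 10 developers vs. 1,000 tasks / 15 developers); the average task sizes are hypothetical placeholders until estimations exist:

```python
# Minimal sketch: is scope per developer really growing?
# Task counts and head counts are from the text; the average
# story-point sizes are hypothetical placeholders.
periods = [
    # (label, tasks, developers, avg task size in story points)
    ("before", 700, 10, 3.0),
    ("now", 1000, 15, 3.0),
]

for label, tasks, devs, avg_size in periods:
    scope = tasks * avg_size  # total scope in story points
    print(f"{label:>6}: {tasks} tasks, {tasks / devs:.0f} tasks/dev, "
          f"{scope / devs:.0f} points/dev")
```

With equal average task size, the load per developer is essentially flat (70 vs. 67 tasks/dev), which supports the point that the raw task count says nothing about the team slowing down.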

5. Start measuring Quality

5.1. Static code analysis

5.1.1. SonarQube

5.1.2. PVS-Studio

5.2. QA metrics

5.2.1. Defect trend

5.2.2. Measure the number of defect reopenings per developer (see the sketch below)

5.2.3. Bug scorecard per developer (meaningful only with proper severity levels, so not applicable under the current severity approach)
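
A minimal sketch for 5.2.2, counting reopen transitions per developer from an exported list of issue status changes; the event tuples and sample data are a hypothetical export format:

```python
# Minimal sketch: count defect reopenings per developer.
# Each event is (issue_key, developer, from_status, to_status);
# the sample data is hypothetical.
from collections import Counter

events = [
    ("EOV-101", "alice", "Resolved", "Reopened"),
    ("EOV-102", "bob",   "Resolved", "Closed"),
    ("EOV-101", "alice", "Resolved", "Reopened"),
    ("EOV-103", "carol", "Resolved", "Reopened"),
]

reopens = Counter(
    dev for _key, dev, _src, dst in events if dst == "Reopened"
)

for dev, count in reopens.most_common():
    print(f"{dev}: {count} reopenings")
```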

5.3. Use proper severity levels

5.3.1. At the moment only the Blocker and Critical levels exist. The number of issues at those levels has to be limited, as otherwise this is a 'fake' prioritization approach.

5.4. Definition of done

5.4.1. Create a developer checklist of quality attributes to verify against each completed task, so that nothing important is missed (to be updated on a regular basis)

5.5. Make code review a mandatory practice

5.6. Architecture audit

5.6.1. To identify issues that "Martin is afraid to imagine", such as two databases storing the same data

5.7. Introduce TDD

5.8. Always cover fixes with tests

5.9. Automated smoke suite

5.9.1. Possibly even run it on each merge request (see the sketch below)
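
A minimal sketch for 5.9, using pytest's standard marker mechanism: tag a fast subset of tests as smoke tests so the merge-request pipeline can run only that subset; the test itself is a placeholder.

```python
# Minimal sketch: a smoke-tagged test, selectable with `pytest -m smoke`.
# Register the marker in pytest.ini to avoid warnings:
#   [pytest]
#   markers = smoke: fast checks run on every merge request
import pytest

@pytest.mark.smoke
def test_service_health():
    # Placeholder check; a real smoke test would hit the deployed service.
    assert 1 + 1 == 2
```

The merge-request pipeline then runs `pytest -m smoke`, while the full suite runs nightly.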

5.10. Performance quality

5.10.1. Do we need to ensure performance is appropriate?

5.11. Security testing

5.11.1. Do we need to ensure there are no major security vulnerabilities?

5.12. Proactively validate the TOP-X most important company / financial reports

5.13. Create a dedicated stream to rework critical modules instead of fixing never-ending defects