Building products


1. Desirable quality

1.1. Define success criteria up front so we know whether a delivered product increment was worth it

1.1.1. Make sure we're building product increments and not custom features.

1.1.1.1. When we suspect we're building a custom feature, calculate a cost of ownership for that feature

1.2. Talk to customers more often

1.2.1. Create more opportunities for this to happen

1.2.2. Some PMs do this a lot, some not at all. What accounts for the difference?

1.2.3. Buy/Pay vs. PlatServ is tricky: PlatServ's customers are Buy/Pay

1.3. Test suites

1.3.1. Flaky, so people don't trust them

1.3.2. Extensions to the suite are difficult to learn and implement, so people avoid writing them

1.4. Performance should be measured and benchmarked more

1.4.1. It's not a concern taken seriously

1.4.2. We don't set benchmarks or goals up front; we only measure once we realise something is broken

1.4.3. Poor metrics make it hard to identify issues proactively

1.5. UX testing

1.5.1. We do too much UX testing after a product has been built. It should be done up-front when it's more disposable.

1.5.1.1. UX mockups = cheap, easy to change

1.5.1.2. Engineering work = expensive, people are reluctant to change it

1.5.2. We just hired a team with this specific role in the last ~6 months.

1.5.2.1. (I don't think this team is big enough)

1.5.2.2. Team is distributed across many teams

1.6. Quality assurance teams

1.6.1. Some teams have them. Many PMs want them.

1.6.2. This is the most obvious way forward. Someone who "owns" quality for a team, and helps ensure the team are thinking about it consistently.

1.6.3. "Everyone is responsible for quality" = nobody is truly responsible if that responsibility is distributed

1.7. Reducing "product debt"

1.7.1. Aggressively remove features we know aren't used much

1.7.2. Make being on the latest version of UBL a priority at all times

1.7.2.1. So we don't have to hack the existing one and create all kinds of headaches for ourselves

1.7.2.2. Or continue hacking our existing one into TSUBL-version-2 and integrate standard UBL at the "edges" via Babelway. Stop people integrating directly with our pipeline?

1.7.3. Success criteria would help us determine whether we need to keep a feature/app or kill it. Whether it actually contributed to the overall health of the platform.

1.7.3.1. Often we don't know what will work. We are guessing.

1.7.3.2. We are "building without learning" as John Cutler would say

1.7.4. Legacy APIs we continue to support

1.8. Reduce product scope

1.8.1. Accept fewer feature requests and prioritise doing the ones we chose "properly"

1.8.2. Reduce the number of products and apps we support by deleting them, merging them, reducing redundant features and apps

1.8.3. Aggressively remove features we know aren't used much so we can focus on delivering the remaining things really well.

1.8.4. Sometimes our "they don't need that, they need this!" is too confident

1.8.4.1. We should actually verify these things

1.8.4.2. We lean on this too heavily: we don't know best a lot of the time, but it's a common crutch

2. Desirable cost

2.1. One-person teams mask capacity and create unrealistic expectations

2.1.1. This also reduces quality and introduces risk: code reviews can't be as thorough when only one person understands the area, and if that person quits we're screwed

2.2. More extensibility points make it possible for partners or customers themselves to build functionality instead

2.2.1. This increases cost unless the unit economics of building and maintaining those extension points bear out

2.3. Hire fewer people but hire more experienced people

2.4. Focus on building things in a way that requires less re-work at the engineering phase

2.4.1. Improved up front UX testing

2.4.2. Improved validation of building the "right thing" up front

2.4.3. Improved gates on when we consider a customer request. Make it easier for PMs to say no to things. Right now it's not obvious who has the authority to draw the line, or when.

2.5. Improve underlying tooling to give engineering/design "force multipliers"

2.5.1. CI/CD, test suites, etc.

2.6. Duplicated admin work can be time-consuming, e.g. communication with different functional groups.

2.6.1. Can we embed these deeper in RISEs?

2.6.2. Can we embed these within teams?

3. Delivered quickly

3.1. People

3.1.1. Hire more experienced people

3.1.1.1. Show me: mistakes/choices we'd change because of the seniority of our people?

3.1.2. Hire more people

3.1.3. Invest in educating them

3.1.3.1. Teams should invest in educating each other!

3.1.3.2. On the product side: building without learning. On the engineering side: building without teaching? Eng syncs are good but could be more in-depth, mandatory, run as workshops, etc.

3.1.3.3. Show me: how many people have cited wanting to learn more in their exit interviews?

3.2. Clear engineering principles

3.2.1. Hire a chief architect!

3.2.1.1. Someone to defer to when large decisions need to be made

3.2.1.2. Someone to identify systemic issues and identify plans to fix them

3.2.1.3. Show me: a number of similar things implemented differently

3.2.2. Set clear targets and expectations

3.2.3. Set clear templates for different types of work - avoid re-inventing the wheel

3.2.4. Perhaps we are "empowering" engineers with freedom too early in their careers at Tradeshift. Until you've done things the standard way a few times, you should have a really good reason to deviate from it.

3.2.5. Better documentation all round

3.2.5.1. This is a cultural problem

3.2.5.2. We don't "care" about documentation

3.2.5.3. And we're only just starting to take quality seriously

3.2.5.4. Show me: get senior leadership to build some apps? (and presumably struggle with it)

3.3. Invest in CI/CD tooling

3.3.1. The faster a change can be made, the less attached anyone is to it. This leads to improved quality.

3.3.2. The faster a change can be made, the more changes will be made. This helps fix small bugs, UI quirks, etc. that we avoid fixing because the time-to-fix sucks

3.3.2.1. Key metric: time between merge to master and deployment (under 10 minutes?)

3.3.2.2. Show me: time spent deploying, maintaining stacks, etc. that isn't spent developing. If we have points - show how much more we could accelerate the roadmap
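The merge-to-deploy metric above is easy to compute once merge and deploy timestamps are available. A minimal sketch, assuming we can export (merged, deployed) timestamp pairs from CI logs; the event data, the 10-minute budget, and the function name are illustrative only, not an existing tool:

```python
from datetime import datetime, timedelta

# Hypothetical (merged-to-master, deployed) timestamp pairs pulled from
# CI logs -- the values here are made up for illustration.
deploys = [
    ("2024-01-10T09:00:00", "2024-01-10T09:07:00"),
    ("2024-01-10T11:30:00", "2024-01-10T11:52:00"),
    ("2024-01-11T14:05:00", "2024-01-11T14:09:00"),
]

TARGET = timedelta(minutes=10)  # proposed merge-to-deploy budget

def lead_times(pairs):
    """Return the merge-to-deploy duration for each (merged, deployed) pair."""
    return [
        datetime.fromisoformat(deployed) - datetime.fromisoformat(merged)
        for merged, deployed in pairs
    ]

times = lead_times(deploys)
within_budget = sum(t <= TARGET for t in times)
print(f"{within_budget}/{len(times)} deploys within {TARGET}")  # 2/3 deploys within 0:10:00
```

Tracking the fraction of deploys inside the budget (rather than only the average) surfaces the slow outliers that make people dread shipping small fixes.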

3.4. Standardise development environments

3.4.1. Many features have to work across V4, Grails, Frontend 2.5 etc.

3.4.2. We should aim to kill as many bespoke weird bits as possible, as soon as possible, so people don't have to think about them

3.4.2.1. Because thinking about it is expensive

3.4.2.2. And "accidentally" not thinking about it causes bugs and regressions

4. How do we "pile up the gloves" for these topics?