Uko's Ph.D.

1. Quality rules

1.1. Dedicated reports

1.1.1. Do dedicated reports increase the interest of developers in a critic?

1.1.2. If a custom fixing UI is available, do developers resolve this kind of issue more often?

1.2. Summarization

1.2.1. How can we summarize multiple critics about multiple entities?

1.2.2. Does live information about "quality index" trends encourage developers to produce better code? Does a gamification layer based on code quality make developers write better code?

1.3. Rule definition

1.3.1. Can we reduce the cost of defining a custom rule? Which UContracts rules can be implemented with rewrite rules? What kind of tooling do developers need to implement rules? Can we reduce the cost of writing rules by relying on code snippets? (Case with deprecations)
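The snippet-based idea can be sketched outside Smalltalk's rewrite-rule engine. Below is a minimal, language-neutral sketch in Python (all names hypothetical): a before/after snippet pair with `@name` metavariables, in the spirit of the `` `@ `` variables of Smalltalk rewrite rules, is compiled into a search/replace rule for a deprecation.

```python
import re

def rule_from_snippets(before, after):
    """Build a (search, replace) rewrite rule from an example snippet pair.

    Metavariables written as `@name` in the snippets match any expression,
    mimicking the metavariables of Smalltalk rewrite rules.
    """
    pattern = re.escape(before)
    # Turn (possibly escaped) metavariables back into named capturing groups.
    pattern = re.sub(r"\\?@(\w+)", r"(?P<\1>.+?)", pattern)
    replacement = re.sub(r"@(\w+)", r"\\g<\1>", after)
    return pattern, replacement

def apply_rule(rule, source):
    """Apply a rewrite rule to a piece of source code."""
    pattern, replacement = rule
    return re.sub(pattern, replacement, source)

# Hypothetical deprecation: `openFile(@path)` was renamed to `File.open(@path)`.
rule = rule_from_snippets("openFile(@path)", "File.open(@path)")
print(apply_rule(rule, "handle = openFile('data.txt')"))
# -> handle = File.open('data.txt')
```

A real implementation would match on the AST rather than on text, but the cost argument is the same: the rule author only writes the two snippets.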

1.3.2. Add API change miner rules

2. Quality model

2.1. Model for external properties

2.1.1. Can dependencies make use of EPM?

2.1.2. Can test coverage make use of EPM? Interactive test coverage properties

2.1.3. Can issues make use of an EPM?

2.2. Criticizing objects

2.2.1. Is it easier to use different kinds of builders if they provide custom critics about their objects?

2.2.2. Will the development process in dynamic languages improve if inspecting an object also provides a list of critics about it?

3. Non-quality

3.1. Recommending a semantic version
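A minimal sketch of such a recommender, assuming the public API of two releases has already been mined into name-to-signature maps (Python, all names hypothetical): removed or changed symbols break clients and force a major bump, purely added symbols a minor one, anything else a patch.

```python
def recommend_bump(old_api, new_api):
    """Recommend a semantic version bump by diffing public API signatures.

    `old_api` / `new_api` map symbol names to a description of their
    signature (any comparable value). Per semver: breaking change -> major,
    backward-compatible addition -> minor, otherwise -> patch.
    """
    removed = old_api.keys() - new_api.keys()
    changed = {name for name in old_api.keys() & new_api.keys()
               if old_api[name] != new_api[name]}
    added = new_api.keys() - old_api.keys()
    if removed or changed:
        return "major"
    if added:
        return "minor"
    return "patch"

old = {"openFile": "(path)", "close": "()"}
new = {"openFile": "(path, mode)", "close": "()", "flush": "()"}
print(recommend_bump(old, new))  # changed signature -> "major"
```

The hard part of the research question is of course building the API model from the code history; the decision rule itself stays this small.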

3.2. Mining collaboration from super-repositories

3.2.1. Does estimation of the "collaboration value" improve collaboration visualization?

3.2.2. Can we estimate the amount of collaboration by mining repositories? (MSR 2013)

4. Quality Tools

4.1. Inspector

4.1.1. Moldable hierarchical inspector

4.1.2. ViDI (ICSE 2015 Demo, SANER 2015)

4.2. Dev Plugins

4.2.1. QualityAssistant: Do intrusive tools work better than on-demand ones? Which kind of tools do developers prefer? How do developers react to intrusions? Does the introduction of an intrusive tool cause a change in a project's code quality?

4.3. Diff report

4.3.1. Do developers rewrite their code if they are presented with the quality changes before committing?

4.3.2. Does critic information embedded in a diff view help to understand the reason for a change?

5. Mining Critics

5.1. Deriving rules

5.1.1. Can we create developer-specific "best practice" bundles?

5.2. Cost/Value

5.2.1. What is the impact of a fix?

5.2.2. What is the cost of a fix?

5.2.3. Which critics are less likely to be false positives?

5.3. Rule properties

5.3.1. Which critics are more/less common?

5.3.2. Which kind of critics do developers (not) care about?

5.3.3. Can we identify critics more relevant for a certain user?

5.3.4. What kind of critics do developers solve immediately, and which ones do they postpone?

5.3.5. Do developers use the autofix feature of custom rules more than that of generic ones?

6. Process

6.1. Can critics enhance code review?

6.1.1. Can change untangling be enhanced by critics?

6.2. Can a rewrite rule, created after API changes, help library clients to migrate their code?

6.2.1. Development/evolution workflow idea

6.3. What do I do if I will not resolve a critic (now)?

6.3.1. Can I postpone a critic?

6.3.2. Can I change the rule or ban it? Do developers care about metric rules more if they can tweak them? What is the reason for a threshold change? What changes do developers make to rules?