Uko's Ph.D.

1. Quality rules

1.1. Dedicated reports

1.1.1. Do dedicated reports increase the interest of developers in a critic?

1.1.2. If a custom fixing UI is available, do developers resolve this kind of issue more often?

1.2. Summarization

1.2.1. How can we summarize multiple critics about multiple entities?

1.2.2. Does live information about "quality index" trends encourage developers to produce better code?

1.2.2.1. Does a gamification layer based on code quality make developers write better code?

1.3. Rule definition

1.3.1. Can we reduce the cost of defining a custom rule?

1.3.1.1. What rules of UContracts can be implemented with rewrite rules?

1.3.1.2. What kind of tooling do developers need to implement rules?

1.3.1.3. Can we reduce the cost of writing rules by relying on code snippets? (Case with deprecations)
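
A minimal sketch of what a snippet-based rule (1.3.1.3) could look like in Pharo, assuming only the Refactoring Browser classes RBParser and RBParseTreeRewriter; the pattern and replacement are plain code snippets with backtick pattern variables, the parsed method source is just an illustration, and exact selectors such as #formattedCode may differ between Pharo versions.

```smalltalk
"Sketch: a rule defined by a pair of code snippets (pattern -> replacement),
using the Refactoring Browser's rewrite engine that ships with Pharo."
| rewriter tree |
rewriter := RBParseTreeRewriter new.
"`@receiver and `@block are pattern variables that match any expression."
rewriter
	replace: '`@receiver isNil ifTrue: `@block'
	with: '`@receiver ifNil: `@block'.
tree := RBParser parseMethod: 'check
	foo isNil ifTrue: [ ^ self error: ''missing foo'' ]'.
(rewriter executeTree: tree)
	ifTrue: [ Transcript show: rewriter tree formattedCode; cr ]
```

The same pair of snippets could serve both as the rule's check (does the pattern match?) and as its autofix (apply the replacement).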

1.3.2. Add API change miner rules

2. Quality model

2.1. Model for external properties

2.1.1. Can dependencies make use of the external properties model (EPM)?

2.1.2. Can test coverage make use of EPM?

2.1.2.1. Interactive test coverage properties

2.1.3. Can issues make use of the EPM?
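
A purely hypothetical sketch of an external property for the test-coverage case (2.1.2): the class, selectors, and package below are illustrative placeholders, not an existing Pharo or Renraku API, and the class-definition message assumes a recent Pharo.

```smalltalk
"Hypothetical sketch: test coverage expressed as an external property of a
method, so quality tools could display it next to the method's critics."
Object subclass: #CoverageProperty
	instanceVariableNames: 'method ratio'
	classVariableNames: ''
	package: 'QualitySketches'.

CoverageProperty >> method: aCompiledMethod ratio: aFraction
	method := aCompiledMethod.
	ratio := aFraction

CoverageProperty >> title
	"Short label a tool could show alongside other properties of the method."
	^ String streamContents: [ :s |
		s << method selector << ' is covered at '
		  << (ratio * 100) rounded asString << '%' ]
```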

2.2. Criticizing objects

2.2.1. Is it easier to use different kinds of builders if they provide custom critics about their objects?

2.2.2. Will the development process in dynamic languages improve if inspecting an object also provides a list of critics about it?
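
A purely hypothetical sketch for 2.2.2: an object that can gather critics about itself so an inspector pane could list them; #applicableQualityRules and #critiquesFor: are placeholders for whatever rule-lookup API would actually be used.

```smalltalk
"Hypothetical sketch: any object can answer the critics that apply to it,
so an inspector could show them in a dedicated pane. Both selectors sent
to the rules are placeholders, not an existing API."
Object >> critics
	^ self applicableQualityRules
		inject: OrderedCollection new
		into: [ :critics :rule |
			critics addAll: (rule critiquesFor: self); yourself ]
```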

3. Quality Tools

3.1. Inspector

3.1.1. Moldable hierarchical inspector

3.1.2. ViDI

3.1.2.1. ICSE 2015 Demo

3.1.2.2. SANER 2015

3.2. Dev Plugins

3.2.1. QualityAssistant

3.2.1.1. Do intrusive tools work better than on-demand ones?

3.2.1.1.1. Which kind of tools do developers prefer?

3.2.1.1.2. How do developers react to intrusions?

3.2.1.1.3. Does the introduction of an intrusive tool cause a change in a project's code quality?

3.3. Diff report

3.3.1. Do developers rewrite their code if they are presented with the quality changes before committing?

3.3.2. Does critic information embedded in a diff view help to understand the reason for a change?

4. Non-quality

4.1. Recommending a semantic version

4.2. Mining collaboration from super-repositories

4.2.1. Does estimating a "collaboration value" improve collaboration visualization?

4.2.2. Can we distinguish the amount of collaboration by mining repositories?

4.2.2.1. MSR 2013

5. Mining Critics

5.1. Deriving rules

5.1.1. Can we create developer-specific "best practice" bundles?

5.2. Cost/Value

5.2.1. What is the impact of a fix?

5.2.2. What is the cost of a fix?

5.2.3. Which critics are less likely to be false positives?

5.3. Rule properties

5.3.1. Which critics are more/less common?

5.3.2. Which kinds of critics do developers (not) care about?

5.3.3. Can we identify critics more relevant for a certain user?

5.3.4. Which kinds of critics do developers resolve immediately, and which ones do they postpone?

5.3.5. Do developers use the autofix feature of custom rules more often than that of generic ones?

6. Process

6.1. Can critics enhance code review?

6.1.1. Can change untangling be enhanced by critics?

6.2. Can a rewrite rule created after an API change help library clients migrate their code?

6.2.1. Development/evolution workflow idea
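
A minimal sketch of the mechanism 6.2 asks about, assuming a Pharo version that supports transforming deprecations (Object>>#deprecated:transformWith:); MyLibraryClass, #oldApi, and #newApi are illustrative names.

```smalltalk
"Sketch, assuming Pharo's transforming deprecations: the deprecated method
carries a rewrite rule, so a client call site is rewritten to the new API
and the call is forwarded, letting clients migrate as their code runs."
MyLibraryClass >> oldApi
	self
		deprecated: 'Use #newApi instead'
		transformWith: '`@receiver oldApi' -> '`@receiver newApi'.
	^ self newApi
```

In this setup the library author writes the rewrite rule once, at the moment of the API change, and every client gets the migration for free.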

6.3. What do I do if I will not resolve a critic (now)?

6.3.1. Can I postpone a critic?

6.3.2. Can I change the rule or ban it?

6.3.2.1. Do developers care about metric rules more if they can tweak them?

6.3.2.1.1. What is the reason for a threshold change?

6.3.2.1.2. What are the changes that developers make to rules?
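
A purely hypothetical sketch for 6.3.2.1: a metric rule whose threshold is a plain setting, so a project can tweak it and every tweak could be logged and mined later (6.3.2.1.1, 6.3.2.1.2); the class is not tied to any existing rule framework, all names are illustrative, and #ast on CompiledMethod is assumed to be available.

```smalltalk
"Hypothetical sketch: a 'long method' metric rule with a tweakable threshold.
Recording each change to the threshold would allow mining the reasons behind tweaks."
Object subclass: #LongMethodRule
	instanceVariableNames: 'maxStatements'
	classVariableNames: ''
	package: 'QualitySketches'.

LongMethodRule >> initialize
	super initialize.
	maxStatements := 10	"default threshold; a project can tweak it via #maxStatements:"

LongMethodRule >> maxStatements: aNumber
	"The point where a change-log entry could be produced for later mining."
	maxStatements := aNumber

LongMethodRule >> violates: aCompiledMethod
	"Criticize methods whose body has more top-level statements than the threshold."
	^ aCompiledMethod ast body statements size > maxStatements
```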