
1. Start Here!
1.1. What follows from 'loss of opportunity'?
1.2. What follows from 'lack of engagement'?
1.5. What follows from 'research effort discontinuity'?
1.4. What follows from 'overconfident problem analysis'?
1.5. Why does 'no RCA frame available' happen?
1.6. Why does 'RCA frame inaccessible' happen?
1.7. How does 'No longer RCA & reading tools uncoupling' happen?
1.8. What theoretical construct represents 'No longer RCA & reading tools uncoupling'?
1.9. Assess Level of Coercive Power for 'PhD students'
1.10. Assess Level of Normative Power for 'PhD students'
1.11. Assess Level of Utilitarian Power for 'PhD students'
1.12. Assess Level of Legal Legitimacy for 'PhD students'
1.13. Assess Level of Contractual Legitimacy for 'PhD students'
1.14. Assess Level of Customary Legitimacy for 'PhD students'
1.15. Assess Level of Moral Legitimacy for 'PhD students'
1.16. Assess the Time Sensitivity for 'PhD students'
1.17. Assess the Criticality for 'PhD students'
1.18. Click icon to address 'No longer re-reads'
2. Describe Practice
3. Explicate Problem
3.1. Set Problem Statement
3.1.1. PhD students not bearing “the RCA frame” in mind when reading
3.2. Analyse Stakeholders
3.2.1. Identify Stakeholders
3.2.1.1. Add Client(s)
3.2.1.2. Add Decision Maker(s)
3.2.1.3. Add Professional(s)
3.2.1.4. Add Witness(es)
3.3. Assess Problem as Difficulties
3.3.1. Ascertain Consequences
3.3.1.1. important facts overlooked
3.3.1.1.1. ...follows from...
3.3.1.2. boredom
3.3.1.2.1. ...follows from...
3.3.1.3. references not tracked back to RCA rationales
3.3.1.3.1. ...follows from...
3.3.2. Ascertain Causes
3.3.2.1. RCA frame unavailable
3.3.2.1.1. Why?
3.3.2.2. RCA frame inaccessible
3.3.2.2.1. ...leads to...
3.4. Assess Problem as Solutions
3.4.1. Alleviate Consequences
3.4.1.1. No longer important facts overlooked
3.4.1.1.1. ...follows from...
3.4.1.2. No longer boredom
3.4.1.2.1. ...follows from...
3.4.1.3. No longer references not tracked back to RCA rationales
3.4.1.3.1. ...follows from...
3.4.2. Lessen Causes
3.4.2.1. No longer RCA frame unavailable
3.4.2.2. No longer RCA frame inaccessible
3.4.2.2.1. ...leads to...
4. Manage DSR Risks
4.1. Characterise DSR Project Context
4.1.1. Identify and analyse research constraints
4.1.1.1. What are the time constraints?
4.1.1.1.1. How long do you have?
4.1.1.1.2. Is that enough?
4.1.1.2. What are the funding constraints?
4.1.1.2.1. What sources of funding do you have?
4.1.1.2.2. How much funding do you have?
4.1.1.2.3. What do you need to spend it on?
4.1.1.2.4. How much will that cost?
4.1.1.2.5. Is that enough?
4.1.1.3. Do you have access to needed hardware and/or software?
4.1.1.3.1. What hardware do you need?
4.1.1.3.2. What software do you need?
4.1.1.3.3. Do you have access?
4.1.1.4. Do you need and can you get access to organisations for evaluation?
4.1.1.4.1. Do you need access to one or more organisations for problem analysis and/or evaluation?
4.1.1.5. Do you have the skills needed to conduct the research?
4.1.1.5.1. What skills are needed?
4.1.1.5.2. For each needed skill, do you have sufficient skills?
4.1.1.5.3. For each insufficient skill, can you learn and obtain sufficient skills?
4.1.1.6. Are there any ethical constraints that limit what you can or should do on your research?
4.1.1.6.1. Animal research constraints? (List them)
4.1.1.6.2. Privacy constraints? (List them)
4.1.1.6.3. Human research subject constraints? (List them)
4.1.1.6.4. Organisational risk constraints? (List them)
4.1.1.6.5. Societal risk constraints? (List them)
4.1.2. Identify development and feasibility uncertainties
4.1.2.1. Technical feasibility
4.1.2.2. Human usability
4.1.2.3. Organisational feasibility
4.2. Identify Risks
4.2.1. A. Business Needs: Risks arising from identifying, selecting, and developing understanding of the business needs (problems and requirements) to address in the research.
4.2.1.1. A-1. Selection of a problem that lacks significance for any stakeholder.
4.2.1.2. A-2. Difficulty getting information about the problem and the context.
4.2.1.3. A-3. Different and even conflicting stakeholder interests (some of which may not be surfaced).
4.2.1.4. A-4. Poor understanding of the problem to be solved.
4.2.1.5. A-5. Solving the wrong problem, i.e., a problem that isn’t a main contributor to undesirable outcomes that motivate the problem solving.
4.2.1.6. A-6. Poor/vague definition/statement of problem to be solved, with potential misunderstanding by others.
4.2.1.7. A-7. Inappropriate choice or definition of a problem according to a solution at hand.
4.2.1.8. A-8. Inappropriate formulation of the problem.
4.2.2. B. Grounding: Risks arising from searching for, identifying, and comprehending applicable knowledge in the literature.
4.2.2.1. B-1. Ignorance or lack of knowledge of existing research relevant to the problem understanding and over-reliance on personal experience with or imagination of the problem.
4.2.2.2. B-2. Ignorance or lack of knowledge of existing design science research into solution technologies for solving the problem, i.e., lack of knowledge of the state of the art.
4.2.2.3. B-3. Ignorance or lack of knowledge of existing relevant natural and behavioural science research forming kernel theories for understanding or solving the problem.
4.2.3. C. Build: Risks arising from designing artefacts, including instantiations, and developing design theories.
4.2.3.1. C-1. Development of a conjectural (un-instantiated) solution which cannot be instantiated (built or made real).
4.2.3.2. C-2. Development of a hypothetical (untried) solution which is ineffective in solving the problem, i.e., the artefact doesn’t work or doesn’t work well in real situations with various socio-technical complications.
4.2.3.3. C-3. Development of a hypothetical (untried) solution which is inefficient in solving the problem, i.e., requiring overly high resource costs.
4.2.3.4. C-4. Development of a hypothetical (untried) solution which is inefficacious in solving the problem, i.e., the artefact isn’t really the cause of an improvement observed during evaluation.
4.2.3.5. C-5. Development of a hypothetical (untried) solution which cannot be taught to or understood by those who are intended to use it, e.g., overly complex or inelegant.
4.2.3.6. C-6. Development of a hypothetical (untried) solution which is difficult or impossible to get adopted by those who are intended to use it, whether for personal or political reasons.
4.2.3.7. C-7. Development of a hypothetical (untried) solution which causes new problems that make the outcomes of the solution more trouble than the original problem, i.e., there are significant side effects.
4.2.4. D. Evaluation: Risks arising from evaluating purposeful artefacts and justifying design theories or knowledge.
4.2.4.1. D-1a. Tacit requirements (which by definition cannot be surfaced) are not dealt with when evaluating the solution technology, leading to failure of the solution technology to meet those requirements.
4.2.4.2. D-1b. Failure to surface some or all of the relevant requirements leads to those requirements not being dealt with when evaluating the solution technology, leading to failure of the solution technology to meet those requirements.
4.2.4.3. D-2. Incorrectly matching the articulated requirements to the meta-requirements of the ISDT leads to the testing of the ISDT and evaluation of an instantiation of the meta-design in a situation for which neither should be applied.
4.2.4.4. D-3. Incorrectly matching the meta-design or the design method to the meta-requirements (not following the ISDT correctly) leads to evaluation of something other than the correct solution technology or the ISDT as stated.
4.2.4.5. D-4. Improper application of the meta-design or the design method (not in accordance with the ISDT) in designing an instantiation leads to evaluation of something other than the correct solution technology or the ISDT as stated.
4.2.4.6. D-5. Improperly building an instantiation of the solution technology (such that it does not properly embody the meta-design) leads to evaluation of something other than the correct solution technology or the ISDT as stated.
4.2.4.7. D-6. Difficulties in implementing the solution technology during naturalistic evaluation, due to such things as unforeseen complications within the business/organization, prevent the instantiation of the solution technology from successfully meeting its objectives.
4.2.4.8. D-7. Success of the solution technology to meet its objectives is not obtained due to dynamic or changing requirements beyond the scope of the solution technology.
4.2.4.9. D-8. Success of the solution technology to meet its objectives is not achieved due to poor change management practices.
4.2.4.10. D-9. Determination of success or failure in reaching the objectives of the solution technology is error-prone or impossible due to disagreement about objectives or inability to measure.
4.2.4.11. D-10. Existing organizational culture, local organizational culture differences (sub-cultures), political conflicts, etc. complicate the evaluation process or weaken the ability to make meaningful measurement of the achievement of the objectives of the solution technology.
4.2.4.12. D-11. Existing organizational priorities, structures, practices, procedures, etc. complicate the evaluation process or ability to make/measure the achievement of the objectives.
4.2.4.13. D-12. Emergence of new organizational or individual practices, structures, priorities, norms, culture, or other aspects that complicate the acceptability, workability, or efficiency of the application of the solution technology in a naturalistic setting.
4.2.5. E. Artefact dissemination and use: Risks arising from disseminating new purposeful artefacts and design theories or knowledge to people and organisations for use in practice to address business need(s).
4.2.5.1. E-1. Implementation in practice of a solution does not work effectively, efficiently, and/or efficaciously.
4.2.5.2. E-2. Misunderstanding the appropriate context for and limitations of the solution.
4.2.5.3. E-3. Misunderstanding how to apply the solution.
4.2.5.4. E-4. Inappropriate handling of adoption, diffusion, and organizational implementation.
4.2.6. F. Knowledge additions: Risks arising from publishing new design artefacts and design theories or knowledge and adding them to the body of knowledge.
4.2.6.1. F-1. Inability to publish or present research results.
4.2.6.2. F-2. Publication of low-significance research.
4.2.6.3. F-3. Publication of incorrect research.
4.2.6.4. F-4. Design artefacts prove too unique to disseminate.
4.3. Analyse Risks
4.4. Prioritise Risks
4.4.1. High
4.4.2. Medium
4.4.3. Low
4.5. Determine Risk Treatments
4.6. Enact Risk Treatments
5. Formulate Design Theory
5.1. PhD students not bearing “the RCA frame” in mind when reading
5.1.1. It is hypothesized that this problem leads to
5.1.2. It is hypothesized that this problem causes
5.1.2.1. RCA frame inaccessible
5.1.2.1.1. ...leads to...
6. Go to Upper Right
7. Decide Requirements and Capture Design Ideas
7.1. Functional Requirements
7.1.1. Requirements for Achieving purpose and benefits
7.1.2. Requirements for Reducing causes of the problem
7.1.2.1. No longer RCA & reading tools uncoupling
7.1.2.1.1. How?
7.2. Non-functional Requirements
7.2.1. In the checklists below, tick any non-functional requirements that are relevant to your artefact. Where alleviating causes from your problem analysis above can help, Copy&Paste the causes to be alleviated onto the 'How?' nodes for the relevant non-functional requirements below.
7.2.1.1. No longer RCA & reading tools uncoupling
7.2.1.1.1. How?
7.2.2. Structural
7.2.2.1. Coherence
7.2.2.2. Consistency
7.2.2.3. Modularity
7.2.2.4. Conciseness
7.2.2.5. Add your own
7.2.2.5.1. provide a name
7.2.2.5.2. provide a description
7.2.3. Usage
7.2.3.1. Usability
7.2.3.2. Comprehensibility
7.2.3.3. Learnability
7.2.3.4. Customisability
7.2.3.5. Suitability
7.2.3.6. Accessibility
7.2.3.7. Elegance
7.2.3.8. Fun
7.2.3.9. Traceability
7.2.3.10. Add your own
7.2.3.10.1. provide a name
7.2.3.10.2. provide a description
7.2.4. Management
7.2.4.1. Maintainability
7.2.4.2. Flexibility
7.2.4.3. Accountability
7.2.4.4. Add your own
7.2.4.4.1. provide a name
7.2.4.4.2. provide a description
7.2.5. Environmental
7.2.5.1. Expressiveness
7.2.5.2. Correctness
7.2.5.3. Generality
7.2.5.4. Interoperability
7.2.5.5. Autonomy
7.2.5.6. Proximity
7.2.5.7. Completeness
7.2.5.8. Effectiveness
7.2.5.9. Efficiency
7.2.5.10. Robustness
7.2.5.11. Resilience
7.2.5.12. Add your own
7.2.5.12.1. provide a name
7.2.5.12.2. provide a description
8. Design Purposeful Artefact <nameYourArtefact>
8.1. Description
8.2. Artefact inputs
8.3. Artefact outputs
8.4. Intended user(s)
8.4.1. PhD students
8.5. Drag&Drop the "hows" from the requirements and design ideas above to the components you add below that will have responsibility or carry out that "how".
8.6. Enumerate top level components
9. Finish Here!
10. Evaluate Purposeful Artefact (and Its Design Theory)
10.1. Identify Evaluation Purpose & Goal Priorities
10.1.1. Determine and characterise the artefact(s) to be evaluated
10.1.2. Determine evaluation purpose(s)
10.1.2.1. Develop evidence supporting my artefact's utility and design theory
10.1.2.2. Develop evidence my artefact has better utility for its purpose than other artefacts do
10.1.2.3. Identify side effects of the use of my artefact
10.1.2.4. Identify weaknesses and ways of improving my artefact
10.1.3. Determine evaluation goal(s)
10.1.3.1. Develop rigorous evidence of the efficacy of my artefact for achieving its purpose(s)
10.1.3.2. Develop rigorous evidence of the effectiveness of my artefact for achieving its purpose(s)
10.1.3.3. Evaluate my artefact efficiently and within resource constraints
10.1.3.4. Conduct my evaluations ethically
10.1.4. Prioritise evaluation purposes, goals, constraints, and uncertainties, so you can address higher priorities as early as possible
10.1.4.1. Evaluation Goals
10.1.4.2. Evaluation Purposes
10.1.4.3. Research Constraints
10.1.4.4. Feasibility Uncertainties
10.2. Choose Evaluation Strategy/Trajectory
10.2.1. DSR Evaluation Strategy Selection Framework
10.2.2. One or more strategies are suggested (ticked) below. Revise as appropriate.
10.2.3. Human Risk and Effectiveness Strategy
10.2.4. Technological Risk and Efficacy Strategy
10.2.5. Purely Technical Strategy
10.2.6. Quick and Simple Strategy
10.3. Define theoretical constructs and measures of requirements to be evaluated
10.3.1. No longer RCA & reading tools uncoupling
10.3.1.1. What theoretical construct represents this?
10.4. Choose and Design Evaluation Episodes
10.4.1. Copy&Paste the requirements to be evaluated above into an evaluation episode in one of the four quadrants
10.4.1.1. No longer RCA & reading tools uncoupling
10.4.2. Identify requirements to be evaluated early (formatively) and artificially (lower left quadrant)
10.4.2.1. Formative Artificial Evaluation Episode 1
10.4.2.1.1. Paste property(ies) to be evaluated in this episode here
10.4.2.1.2. Choose evaluation method
10.4.2.1.3. Research Method Literature for Chosen Evaluation Method(s)
10.4.2.1.4. What do you want to learn from the evaluation?
10.4.2.1.5. Record details of evaluation design here (add nodes or include a link to a file)
10.4.2.1.6. Enact this Evaluation Episode
10.4.2.1.7. Record Learning from the Evaluation here (e.g. link to files)
10.4.2.1.8. Reconsider and Revise Evaluation Plan as DSR Progresses
10.4.3. Identify requirements to be evaluated early (formatively) and naturalistically (upper left quadrant)
10.4.3.1. Formative Naturalistic Evaluation Episode 1
10.4.3.1.1. Paste property(ies) to be evaluated in this episode here
10.4.3.1.2. Choose evaluation method
10.4.3.1.3. Research Method Literature for Chosen Evaluation Method(s)
10.4.3.1.4. What do you want to learn from the evaluation?
10.4.3.1.5. Record details of evaluation design here (add nodes or include a link to a file)
10.4.3.1.6. Enact this Evaluation Episode
10.4.3.1.7. Record Learning from the Evaluation here (e.g. link to files)
10.4.3.1.8. Reconsider and Revise Evaluation Plan as DSR Progresses
10.4.4. Identify requirements to be evaluated late (summatively) and artificially (lower right quadrant)
10.4.4.1. Summative Artificial Evaluation Episode 1
10.4.4.1.1. Paste property(ies) to be evaluated in this episode here
10.4.4.1.2. Choose evaluation method
10.4.4.1.3. Research Method Literature for Chosen Evaluation Method(s)
10.4.4.1.4. What do you want to learn from the evaluation?
10.4.4.1.5. Record details of evaluation design here (add nodes or include a link to a file)
10.4.4.1.6. Enact this Evaluation Episode
10.4.4.1.7. Record Learning from the Evaluation here (e.g. link to files)
10.4.4.1.8. Reconsider and Revise Evaluation Plan as DSR Progresses
10.4.5. Identify requirements to be evaluated late (summatively) and naturalistically (upper right quadrant)
10.4.5.1. Summative Naturalistic Evaluation Episode 1
10.4.5.1.1. Paste property(ies) to be evaluated in this episode here
10.4.5.1.2. Choose evaluation method
10.4.5.1.3. Research Method Literature for Chosen Evaluation Method(s)
10.4.5.1.4. What do you want to learn from the evaluation?
10.4.5.1.5. Record details of evaluation design here (add nodes or include a link to a file)
10.4.5.1.6. Enact this Evaluation Episode
10.4.5.1.7. Record Learning from the Evaluation here (e.g. link to files)
10.4.5.1.8. Reconsider and Revise Evaluation Plan as DSR Progresses