StrategicReading Public Copy


1. Formulate Design Theory

1.1. PhD students not bearing “the RCA frame” in mind when reading

1.1.1. It is hypothesized that this problem leads to

1.1.2. It is hypothesized that this problem causes

1.1.2.1. RCA frame inaccessible

1.1.2.1.1. ...leads to...

2. Design Purposeful Artefact <nameYourArtefact>

2.1. Description

2.2. Artefact inputs

2.3. Artefact outputs

2.4. Intended user(s)

2.4.1. PhD students

2.5. Drag&Drop the "hows" from the requirements and design ideas above onto the components you add below that will have responsibility for, or carry out, that "how" (see the sketch after this list).

2.6. Enumerate top level components
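
A minimal sketch of the resulting mapping as plain data, assuming hypothetical component names and "hows" (nothing below is prescribed by the map):

    # Illustrative only: component names and "hows" are hypothetical placeholders.
    # Each top-level component maps to the "hows" it is responsible for.
    artefact_components = {
        "ReadingChecklist": [
            "Prompt the reader with RCA questions before each paper",
        ],
        "AnnotationTemplate": [
            "Link each highlighted passage back to an RCA node",
        ],
    }

    for component, hows in artefact_components.items():
        print(component)
        for how in hows:
            print(f"  - {how}")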

3. Go to Upper Right

4. Finish Here!

5. Describe Practice

6. Evaluate Purposeful Artefact (and Its Design Theory)

6.1. Identify Evaluation Purpose & Goal Priorities

6.1.1. Determine and characterise the artefact(s) to be evaluated

6.1.2. Determine evaluation purpose(s)

6.1.2.1. Develop evidence supporting my artefact's utility and design theory

6.1.2.2. Develop evidence my artefact has better utility for its purpose than other artefacts do

6.1.2.3. Identify side effects of the use of my artefact

6.1.2.4. Identify weaknesses and ways of improving my artefact

6.1.3. Determine evaluation goal(s)

6.1.3.1. Develop rigorous evidence of the efficacy of my artefact for achieving its purpose(s)

6.1.3.2. Develop rigorous evidence of the effectiveness of my artefact for achieving its purpose(s)

6.1.3.3. Evaluate my artefact efficiently and within resource constraints

6.1.3.4. Conduct my evaluations ethically

6.1.4. Prioritise evaluation purposes, goals, constraints, and uncertainties, so you can address higher priorities as early as possible

6.1.4.1. Evaluation Goals

6.1.4.2. Evaluation Purposes

6.1.4.3. Research Constraints

6.1.4.4. Feasibility Uncertainties

6.2. Choose Evaluation Strategy/Trajectory

6.2.1. DSR Evaluation Strategy Selection Framework

6.2.2. One or more strategies are suggested (ticked) below. Revise as appropriate (see the sketch after this list).

6.2.3. Human Risk and Effectiveness Strategy

6.2.4. Technological Risk and Efficacy Strategy

6.2.5. Purely Technical Strategy

6.2.6. Quick and Simple Strategy
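
The choice among the four strategies above follows the FEDS heuristics: roughly, prefer Quick and Simple when the design is small and low-risk, Purely Technical when no human users are foreseeable, Human Risk and Effectiveness when the major design risk is social or user-oriented, and Technological Risk and Efficacy otherwise. A rough Python sketch of that heuristic with simplified yes/no inputs, not the framework's authoritative decision procedure:

    # A rough sketch of the FEDS selection heuristics; inputs are
    # simplified yes/no judgements, not the framework's full criteria.
    def suggest_strategy(major_risk_is_social: bool,
                         purely_technical_artefact: bool,
                         small_simple_low_risk: bool) -> str:
        if small_simple_low_risk:
            return "Quick and Simple Strategy"
        if purely_technical_artefact:
            return "Purely Technical Strategy"
        if major_risk_is_social:
            return "Human Risk and Effectiveness Strategy"
        return "Technological Risk and Efficacy Strategy"

    # Example: a reading-support artefact whose main risk is non-adoption by PhD students.
    print(suggest_strategy(major_risk_is_social=True,
                           purely_technical_artefact=False,
                           small_simple_low_risk=False))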

6.3. Define theoretical constructs and measures of requirements to be evaluated

6.3.1. No longer RCA & reading tools uncoupling

6.3.1.1. What theoretical construct represents this?

6.4. Choose and Design Evaluation Episodes

6.4.1. Copy&Paste the requirements to be evaluated above into an evaluation episode in one of the four quadrants (a record sketch of the episode template follows this branch).

6.4.1.1. No longer RCA & reading tools uncoupling

6.4.2. Identify requirements to be evaluated early (formatively) and artificially (lower left quadrant)

6.4.2.1. Formative Artificial Evaluation Episode 1

6.4.2.1.1. Paste property(ies) to be evaluated in this episode here

6.4.2.1.2. Choose evaluation method

6.4.2.1.3. Research Method Literature for Chosen Evaluation Method(s)

6.4.2.1.4. What do you want to learn from the evaluation?

6.4.2.1.5. Record details of evaluation design here (add nodes or include a link to a file)

6.4.2.1.6. Enact this Evaluation Episode

6.4.2.1.7. Record Learning from the Evaluation here (e.g. link to files)

6.4.2.1.8. Reconsider and Revise Evaluation Plan as DSR Progresses

6.4.3. Identify requirements to be evaluated early (formatively) and naturalistically (upper left quadrant)

6.4.3.1. Formative Naturalistic Evaluation Episode 1

6.4.3.1.1. Paste property(ies) to be evaluated in this episode here

6.4.3.1.2. Choose evaluation method

6.4.3.1.3. Research Method Literature for Chosen Evaluation Method(s)

6.4.3.1.4. What do you want to learn from the evaluation?

6.4.3.1.5. Record details of evaluation design here (add nodes or include a link to a file)

6.4.3.1.6. Enact this Evaluation Episode

6.4.3.1.7. Record Learning from the Evaluation here (e.g. link to files)

6.4.3.1.8. Reconsider and Revise Evaluation Plan as DSR Progresses

6.4.4. Identify requirements to be evaluated late (summatively) and artificially (lower right quadrant)

6.4.4.1. Summative Artificial Evaluation Episode 1

6.4.4.1.1. Paste property(ies) to be evaluated in this episode here

6.4.4.1.2. Choose evaluation method

6.4.4.1.3. Research Method Literature for Chosen Evaluation Method(s)

6.4.4.1.4. What do you want to learn from the evaluation?

6.4.4.1.5. Record details of evaluation design here (add nodes or include a link to a file)

6.4.4.1.6. Enact this Evaluation Episode

6.4.4.1.7. Record Learning from the Evaluation here (e.g. link to files)

6.4.4.1.8. Reconsider and Revise Evaluation Plan as DSR Progresses

6.4.5. Identify requirements to be evaluated late (summatively) and naturalistically (upper right quadrant)

6.4.5.1. Summative Naturalistic Evaluation Episode 1

6.4.5.1.1. Paste property(ies) to be evaluated in this episode here

6.4.5.1.2. Choose evaluation method

6.4.5.1.3. Research Method Literature for Chosen Evaluation Method(s)

6.4.5.1.4. What do you want to learn from the evaluation?

6.4.5.1.5. Record details of evaluation design here (add nodes or include a link to a file)

6.4.5.1.6. Enact this Evaluation Episode

6.4.5.1.7. Record Learning from the Evaluation here (e.g. link to files)

6.4.5.1.8. Reconsider and Revise Evaluation Plan as DSR Progresses
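
Each evaluation episode above repeats the same eight sub-nodes. A minimal sketch of that template as a Python record; field names paraphrase the node labels, and the example method is a hypothetical choice:

    from dataclasses import dataclass

    # One record per evaluation episode, mirroring the repeated sub-nodes above.
    @dataclass
    class EvaluationEpisode:
        timing: str                      # "formative" or "summative"
        setting: str                     # "artificial" or "naturalistic"
        properties_evaluated: list[str]  # requirements pasted into this episode
        method: str                      # chosen evaluation method
        method_literature: list[str]     # research-method references for the method
        learning_goals: str              # what you want to learn from the evaluation
        design_notes: str = ""           # evaluation design details, or a file link
        learning_record: str = ""        # learning from the evaluation, e.g. file links

    # Example episode for the lower-left quadrant; the method is a hypothetical choice.
    episode = EvaluationEpisode(
        timing="formative",
        setting="artificial",
        properties_evaluated=["No longer RCA & reading tools uncoupling"],
        method="think-aloud walkthrough",
        method_literature=[],
        learning_goals="Does the artefact keep the RCA frame in view while reading?",
    )
    print(episode.timing, episode.setting, episode.properties_evaluated)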

7. Decide Requirements and Capture Design Ideas

7.1. Functional Requirements

7.1.1. Requirements for Achieving purpose and benefits

7.1.2. Requirements for Reducing causes of the problem

7.1.2.1. No longer RCA & reading tools uncoupling

7.1.2.1.1. How?

7.2. Non-functional Requirements

7.2.1. In the checklists below, tick any non-functional requirements that are relevant to your artefact. Where alleviating one of the causes (recast as opportunities) from your problem analysis above can help, Copy&Paste the cause to be alleviated onto the "How?" node for the relevant non-functional requirement below (a data sketch follows these checklists).

7.2.1.1. No longer RCA & reading tools uncoupling

7.2.1.1.1. How?

7.2.2. Structural

7.2.2.1. Coherence

7.2.2.2. Consistency

7.2.2.3. Modularity

7.2.2.4. Conciseness

7.2.2.5. Add your own

7.2.2.5.1. provide a name

7.2.2.5.2. provide a description

7.2.3. Usage

7.2.3.1. Usability

7.2.3.2. Comprehensibility

7.2.3.3. Learnability

7.2.3.4. Customisability

7.2.3.5. Suitability

7.2.3.6. Accessibility

7.2.3.7. Elegance

7.2.3.8. Fun

7.2.3.9. Traceability

7.2.3.10. Add your own

7.2.3.10.1. provide a name

7.2.3.10.2. provide a description

7.2.4. Management

7.2.4.1. Maintainability

7.2.4.2. Flexibility

7.2.4.3. Accountability

7.2.4.4. Add your own

7.2.4.4.1. provide a name

7.2.4.4.2. provide a description

7.2.5. Environmental

7.2.5.1. Expressiveness

7.2.5.2. Correctness

7.2.5.3. Generality

7.2.5.4. Interoperability

7.2.5.5. Autonomy

7.2.5.6. Proximity

7.2.5.7. Completeness

7.2.5.8. Effectiveness

7.2.5.9. Efficiency

7.2.5.10. Robustness

7.2.5.11. Resilience

7.2.5.12. Add your own

7.2.5.12.1. provide a name

7.2.5.12.2. provide a description
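
One way to keep the ticked checklists traceable is to record each non-functional requirement together with the causes it helps alleviate. A minimal Python sketch, with hypothetical ticks:

    # Hypothetical ticks for illustration; tick whatever is relevant to your artefact.
    non_functional_requirements = {
        "Usage / Usability": {
            "ticked": True,
            "alleviates": ["No longer RCA & reading tools uncoupling"],
        },
        "Usage / Learnability": {"ticked": True, "alleviates": []},
        "Structural / Conciseness": {"ticked": False, "alleviates": []},
    }

    relevant = [name for name, nfr in non_functional_requirements.items() if nfr["ticked"]]
    print(relevant)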

8. Start Here!

8.1. What follows from 'loss of opportunity'?

8.2. What follows from 'lack of engagement'?

8.3. What follows from 'research effort discontinuity'?

8.4. What follows from 'overconfident problem analysis'?

8.5. Why does 'RCA frame unavailable' happen?

8.6. Why does 'RCA frame inaccessible' happen?

8.7. How does 'No longer RCA & reading tools uncoupling' happen?

8.8. How does 'No longer RCA & reading tools uncoupling' happen?

8.9. What theoretical construct represents 'No longer RCA & reading tools uncoupling'?

8.10. Assess Level of Coercive Power for 'PhD students'

8.11. Assess Level of Normative Power for 'PhD students'

8.12. Assess Level of Utilitarian Power for 'PhD students'

8.13. Identify the property or right of the level of legal legitimacy for 'PhD students'

8.14. Identify the property or right of the level of contractual legitimacy for 'PhD students'

8.15. Assess Level of Customary Legitimacy for 'PhD students'

8.16. Assess Level of Moral Legitimacy for 'PhD students'

8.17. Assess the Time Sensitivity for 'PhD students'

8.18. Assess the Criticality for 'PhD students'

8.19. Click icon to address 'No longer re-reads'

9. Manage DSR Risks

9.1. Characterise DSR Project Context

9.1.1. Identify and analyse research constraints

9.1.1.1. What are the time constraints?

9.1.1.1.1. How long do you have?

9.1.1.1.2. Is that enough?

9.1.1.2. What are the funding constraints?

9.1.1.2.1. What sources of funding do you have?

9.1.1.2.2. How much funding do you have?

9.1.1.2.3. What do you need to spend it on?

9.1.1.2.4. How much will that cost?

9.1.1.2.5. Is that enough?

9.1.1.3. Do you have access to needed hardware and/or software?

9.1.1.3.1. What hardware do you need?

9.1.1.3.2. What software do you need?

9.1.1.3.3. Do you have access?

9.1.1.4. Do you need and can you get access to organisations for evaluation?

9.1.1.4.1. Do you need access to one or more organisations for problem analysis and/or evaluation?

9.1.1.5. Do you have the skills needed to conduct the research?

9.1.1.5.1. What skills are needed?

9.1.1.5.2. For each needed skill, do you have sufficient skills?

9.1.1.5.3. For each insufficient skill, can you learn and obtain sufficient skills?

9.1.1.6. Are there any ethical constraints that limit what you can or should do on your research?

9.1.1.6.1. Animal research constraints? (List them)

9.1.1.6.2. Privacy constraints? (List them)

9.1.1.6.3. Human research subject constraints? (List them)

9.1.1.6.4. Organisational risk constraints? (List them)

9.1.1.6.5. Societal risk constraints? (List them)

9.1.2. Identify development and feasibility uncertainties

9.1.2.1. Technical feasibility

9.1.2.2. Human usability

9.1.2.3. Organisational feasibility

9.2. Identify Risks

9.2.1. A. Business Needs: Risks arising from identifying, selecting, and developing understanding of the business needs (problems and requirements) to address in the research.

9.2.1.1. A-1. Selection of a problem that lacks significance for any stakeholder.

9.2.1.2. A-2. Difficulty getting information about the problem and the context.

9.2.1.3. A-3. Different and even conflicting stakeholder interests (some of which may not be surfaced).

9.2.1.4. A-4. Poor understanding of the problem to be solved.

9.2.1.5. A-5. Solving the wrong problem, i.e., a problem that isn’t a main contributor to undesirable outcomes that motivate the problem solving.

9.2.1.6. A-6. Poor/vague definition/statement of problem to be solved, with potential misunderstanding by others.

9.2.1.7. A-7. Inappropriate choice or definition of a problem according to a solution at hand.

9.2.1.8. A-8. Inappropriate formulation of the problem.

9.2.2. B. Grounding: Risks arising from searching for, identifying, and comprehending applicable knowledge in the literature.

9.2.2.1. B-1. Ignorance or lack of knowledge of existing research relevant to the problem understanding and over-reliance on personal experience with or imagination of the problem.

9.2.2.2. B-2. Ignorance or lack of knowledge of existing design science research into solution technologies for solving the problem, i.e., lack of knowledge of the state of the art.

9.2.2.3. B-3. Ignorance or lack of knowledge of existing relevant natural and behavioural science research forming kernel theories for understanding or solving the problem.

9.2.3. C. Build: Risks arising from designing artefacts, including instantiations, and developing design theories.

9.2.3.1. C-1. Development of a conjectural (un-instantiated) solution which cannot be instantiated (built or made real).

9.2.3.2. C-2. Development of a hypothetical (untried) solution which is ineffective in solving the problem, i.e., the artefact doesn’t work well in real situations with various socio-technical complications.

9.2.3.3. C-3. Development of a hypothetical (untried) solution which is inefficient in solving the problem, i.e., requiring overly high resource costs.

9.2.3.4. C-4. Development of a hypothetical (untried) solution which is inefficacious in solving the problem, i.e., the artefact isn’t really the cause of an improvement observed during evaluation.

9.2.3.5. C-5. Development of a hypothetical (untried) solution which cannot be taught to or understood by those who are intended to use it, e.g., overly complex or inelegant.

9.2.3.6. C-6. Development of a hypothetical (untried) solution which is difficult or impossible to get adopted by those who are intended to use it, whether for personal or political reasons.

9.2.3.7. C-7. Development of a hypothetical (untried) solution which causes new problems that make the outcomes of the solution more trouble than the original problem, i.e., there are significant side effects.

9.2.4. D. Evaluation: Risks arising from evaluating purposeful artefacts and justifying design theories or knowledge.

9.2.4.1. D-1a. Tacit requirements (which by definition cannot be surfaced) are not dealt with when evaluating the solution technology, leading to failure of the solution technology to meet those requirements.

9.2.4.2. D-1b. Failure to surface some or all of the relevant requirements leads to those requirements not being dealt with when evaluating the solution technology, leading to failure of the solution technology to meet those requirements.

9.2.4.3. D-2. Incorrectly matching the articulated requirements to the meta-requirements of the ISDT leads to the testing of the ISDT and evaluation of an instantiation of the meta-design in a situation for which neither should be applied.

9.2.4.4. D-3. Incorrectly matching the meta-design or the design method to the meta-requirements (not following the ISDT correctly) leads to evaluation of something other than the correct solution technology or the ISDT as stated.

9.2.4.5. D-4. Improper application of the meta-design or the design method (not in accordance with the ISDT) in designing an instantiation leads to evaluation of something other than the correct solution technology or the ISDT as stated.

9.2.4.6. D-5. Improperly building an instantiation of the solution technology (such that it does not properly embody the meta-design) leads to evaluation of something other than the correct solution technology or the ISDT as stated.

9.2.4.7. D-6. Difficulties in implementing the solution technology during naturalistic evaluation, due to such things as unforeseen complications within the business/organization, prevent the instantiation of the solution technology from successfully meeting its objectives.

9.2.4.8. D-7. Success of the solution technology to meet its objectives is not obtained due to dynamic or changing requirements beyond the scope of the solution technology.

9.2.4.9. D-8. Success of the solution technology to meet its objectives is not achieved due to poor change management practices.

9.2.4.10. D-9. Determination of success or failure in reaching the objectives of the solution technology is error-prone or impossible due to disagreement about objectives or inability to measure.

9.2.4.11. D-10. Existing organizational culture, local organizational culture differences (sub-cultures), political conflicts, etc. complicate the evaluation process or weaken the ability to make meaningful measurement of the achievement of the objectives of the solution technology.

9.2.4.12. D-11. Existing organizational priorities, structures, practices, procedures, etc. complicate the evaluation process or ability to make/measure the achievement of the objectives.

9.2.4.13. D-12. Emergence of new organizational or individual practices, structures, priorities, norms, culture, or other aspects that complicate the acceptability, workability, or efficiency of the application of the solution technology in a naturalistic setting.

9.2.5. E. Artefact dissemination and use: Risks arising from disseminating new purposeful artefacts and design theories or knowledge to people and organisations for use in practice to address business need(s).

9.2.5.1. E-1. Implementation in practice of a solution does not work effectively, efficiently, and/or efficaciously.

9.2.5.2. E-2. Misunderstanding the appropriate context for and limitations of the solution.

9.2.5.3. E-3. Misunderstanding how to apply the solution.

9.2.5.4. E-4. Inappropriate handling of adoption, diffusion, and organizational implementation.

9.2.6. F. Knowledge additions: Risks arising from publishing new design artefacts and design theories or knowledge and adding them to the body of knowledge.

9.2.6.1. F-1. Inability to publish or present research results.

9.2.6.2. F-2. Publication of low significance research.

9.2.6.3. F-3. Publication of incorrect research.

9.2.6.4. F-4. Design artefacts prove too unique to disseminate.

9.3. Analyse Risks

9.4. Prioritise Risks (a register sketch follows this branch)

9.4.1. High

9.4.2. Medium

9.4.3. Low

9.5. Determine Risk Treatments

9.6. Enact Risk Treatments
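
Steps 9.3 to 9.6 can be tracked in a simple risk register. A minimal Python sketch, assuming a likelihood x impact score mapped onto the High/Medium/Low bands above; the scales, thresholds, and treatments are illustrative assumptions, not part of the map:

    # Illustrative scoring: likelihood and impact on 1-5 scales; the thresholds
    # mapping scores onto High/Medium/Low are assumptions, not part of the map.
    def priority(likelihood: int, impact: int) -> str:
        score = likelihood * impact
        if score >= 15:
            return "High"
        if score >= 8:
            return "Medium"
        return "Low"

    risk_register = [
        # (id, description, likelihood, impact, treatment)
        ("A-2", "Difficulty getting information about the problem and the context",
         3, 4, "Line up interviews with PhD students early"),
        ("C-6", "Solution is difficult to get adopted by intended users",
         4, 4, "Involve PhD students in formative evaluation episodes"),
    ]

    for rid, desc, lik, imp, treatment in risk_register:
        print(f"{rid} [{priority(lik, imp)}]: {desc} -> {treatment}")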

10. Explicate Problem

10.1. Set Problem Statement

10.1.1. PhD students not bearing “the RCA frame” in mind when reading

10.2. Analyse Stakeholders

10.2.1. Identify Stakeholders

10.2.1.1. Add Client(s)

10.2.1.2. Add Decision Maker(s)

10.2.1.3. Add Professional(s)

10.2.1.4. Add Witness(es)

10.3. Assess Problem as Difficulties

10.3.1. Ascertain Consequences

10.3.1.1. important facts overlooked

10.3.1.1.1. ...follows from...

10.3.1.2. boredom

10.3.1.2.1. ...follows from...

10.3.1.3. references not tracked back to RCA rationales

10.3.1.3.1. ...follows from...

10.3.2. Ascertain Causes

10.3.2.1. RCA frame unavailable

10.3.2.1.1. Why?

10.3.2.2. RCA frame inaccessible

10.3.2.2.1. ...leads to...

10.4. Assess Problem as Solutions

10.4.1. Alleviate Consequences

10.4.1.1. No longer important facts overlooked

10.4.1.1.1. ...follows from...

10.4.1.2. No longer boredom

10.4.1.2.1. ...follows from...

10.4.1.3. No longer references not tracked back to RCA rationales

10.4.1.3.1. ...follows from...

10.4.2. Lessen Causes

10.4.2.1. No longer RCA frame unavailable

10.4.2.2. No longer RCA frame inaccessible

10.4.2.2.1. ...leads to...
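
The '...leads to...' and '...follows from...' connectors above form a small cause-and-effect graph around the problem statement. A minimal Python sketch of that graph, using only nodes that appear in the map:

    # Directed edges read "cause -> effect"; node names are taken from the map above.
    problem = "PhD students not bearing 'the RCA frame' in mind when reading"

    causes = {
        "RCA frame unavailable": [problem],
        "RCA frame inaccessible": [problem],
    }
    consequences = {
        problem: [
            "important facts overlooked",
            "boredom",
            "references not tracked back to RCA rationales",
        ],
    }

    for graph in (causes, consequences):
        for node, effects in graph.items():
            for effect in effects:
                print(f"{node} -> {effect}")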