Mind Map: Assessments

1. Complex-Performance

1.1. Scores: rubric based

1.2. Language: open-ended questions, written responses

1.3. Purposes: to reflect long-term instructional goals; to solve complex problems and perform complex tasks

1.4. Types: projects, laboratory experiments, essays, oral presentations

1.5. Subjective

2. Fixed-Choice

2.1. Scores: objective and reliable

2.1.1. Interpret scores in terms of mastery/non-mastery and error bands (confidence bands)

2.2. Criterion referenced: designed to measure performance that is interpretable in terms of a clearly defined and delimited domain of learning tasks

2.3. Norm referenced: compare student performance relative to a group

2.4. Purposes: to test large quantities of information in a short period of time. Easy grading, objective scoring, high reliability, and cost effectiveness. Tests skills and memory.

2.5. Types: multiple choice, matching, true/false, standardized tests

3. A procedure to gain information about student learning; used to influence instruction and increase learning progress

4. Formative: assessment given during instruction to monitor progress and influence instruction

5. Placement: assessment given before instruction to determine current understanding

6. Diagnostic: assessment given during instruction to diagnose any learning difficulties

7. Summative: assessment given after instruction to determine understanding gained

8. Validity: the evaluation of the adequacy and appropriateness of the interpretations of assessment results. The content of the test determines the results, so results should be interpreted only in relation to the learning goals.

8.1. Content: must contain a representative sample of the content to be tested; items must be relevant to and representative of the domain

8.2. Content: a table of specifications ensures the test contains a balanced sample of the content and skills to be measured

8.3. Content: the test selected should reflect the learning goals in order to produce results that can be adequately interpreted

8.4. Construct validation: does the assessment adequately represent the intended construct? Is performance influenced by factors that are ancillary or irrelevant to the construct?

8.5. Assessment-criterion relationship: compare the results of one test with the results of another test based on similar criteria

8.6. Consequences: evaluate the effects and effectiveness of the assessment's use
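The assessment-criterion relationship above comes down to correlating two sets of scores. A minimal Python sketch (the score data and function name are hypothetical, not from the source):

```python
# Correlate scores from one test with scores from a second test built
# on similar criteria (assessment-criterion relationship).
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

test_a = [78, 85, 62, 90, 71]  # hypothetical scores on test A
test_b = [74, 88, 65, 93, 70]  # hypothetical scores on a similar-criteria test B
print(round(pearson_r(test_a, test_b), 3))  # → 0.964
```

A coefficient near 1 suggests the two tests rank students similarly; a low coefficient suggests they are measuring different things.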

9. Reliability: the consistency of assessment results; concerned with consistency over time

9.1. Test-retest: give the same test twice with time in between tests, then correlate the two sets of scores (measure of stability)

9.2. Equivalent forms: give two forms of the test to the same group in close succession, then correlate the two sets of scores (measure of equivalence)

9.3. Split-half: give the test once; score two equivalent halves of the test; correct the correlation between halves with the Spearman-Brown formula (measure of internal consistency)

9.4. Interrater: judgmental scoring of two or more raters who independently score the responses (measure of consistency of ratings)

9.5. Test-retest with equivalent forms: give two forms of the same test to the same group with an increased time interval between forms (measure of stability and equivalence)

9.6. Kuder-Richardson and coefficient alpha: the test is given once and the Kuder-Richardson or Cronbach's alpha formula is applied (measure of internal consistency)

9.7. Standard error of measurement: the amount of expected variation in a student's scores; test scores should be interpreted as a band of scores rather than a single point

9.7.1. Low reliability: large standard error; wide variation in a student's assessment results

9.7.2. High reliability: small standard error

9.8. Objectivity: the degree to which equally competent scorers obtain the same results
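The reliability quantities above can be computed directly. A minimal Python sketch of the Spearman-Brown correction, Cronbach's alpha, and the standard error of measurement (the item scores are hypothetical, not from the source):

```python
import statistics

def spearman_brown(r_half):
    """Step a split-half correlation up to full test length: 2r / (1 + r)."""
    return 2 * r_half / (1 + r_half)

def cronbach_alpha(items):
    """Coefficient alpha; items is one score list per item, same students."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var_sum = sum(statistics.pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / statistics.pvariance(totals))

def sem(scores, reliability):
    """Standard error of measurement: s * sqrt(1 - reliability)."""
    return statistics.pstdev(scores) * (1 - reliability) ** 0.5

# Hypothetical per-item scores for four students on a three-item quiz
items = [[2, 4, 3, 5], [1, 4, 3, 5], [2, 5, 2, 4]]
alpha = cronbach_alpha(items)
totals = [sum(s) for s in zip(*items)]
band = sem(totals, alpha)
# Interpret a total score as observed score +/- 1 SEM (a confidence band):
# high reliability shrinks the band, low reliability widens it.
```

This illustrates the relationships in 9.7: as the reliability coefficient approaches 1, the standard error of measurement approaches 0 and the score band collapses toward the observed score.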

10. Reliability of results depends on those grading the assessments