Assessments

1. Fixed-Choice

1.1. Scores: objective and reliable

1.1.1. Interpret scores in terms of mastery/non-mastery or error bands (confidence bands)

1.2. Criterion referenced: designed to measure performance that is interpretable in terms of a clearly defined and delimited domain of learning tasks

1.3. Norm referenced: compare student performance relative to a group

1.4. Purposes: to test large amounts of content in a short period of time. Easily graded, objectively scored, highly reliable, and cost-effective. Tests discrete skills and memory.

1.5. Types: multiple choice, matching, true/false, standardized tests

2. A procedure to gain information about student learning; used to influence instruction and increase learning progress

3. Validity: the evaluation of the adequacy and appropriateness of the interpretations of assessment results. Because the content of the test determines the results, results should be interpreted only in relation to the learning goals

3.1. Content: must contain a representative sample of the content to be tested; must be relevant to and representative of the domain

3.2. Content: a table of specifications ensures the test contains a balance of the content and skills to be measured

3.3. Content: the test selected should reflect the learning goals in order to produce results that can be adequately interpreted

3.4. Construct validation: does the assessment adequately represent the intended construct? Is performance influenced by factors that are ancillary or irrelevant to the construct?

3.5. Assessment-criterion relationship: compare the results of one test with the results of another test based on a similar criterion

3.6. Consequences: evaluate the effectiveness of the assessment

4. Reliability: the consistency of assessment results; concerned with consistency over time

4.1. Test-retest: give the same test twice with time in between tests, then correlate the two sets of scores (measure of stability)
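The stability coefficient described above is simply the Pearson correlation between the two administrations' scores. A minimal sketch (the score lists below are invented for illustration):

```python
# Sketch of a test-retest (stability) coefficient: the Pearson correlation
# between two administrations of the same test. Score lists are invented.

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

first = [70, 82, 90, 65, 88]   # first administration
second = [72, 80, 91, 68, 85]  # same students after a time interval
stability = pearson_r(first, second)  # close to 1.0 means stable scores
```

The same correlation applies to the equivalent-forms and combined designs below; only what is correlated (and the time interval) changes.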

4.2. Equivalent forms: give two forms of the test to the same group in close succession, then correlate the two sets of scores (measure of equivalence)

4.3. Split-half: give the test once; score two equivalent halves of the test; correct the correlation between the halves with the Spearman-Brown formula (measure of internal consistency)
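The Spearman-Brown correction steps the half-test correlation up to an estimate of full-length reliability: r_full = 2·r_half / (1 + r_half). A minimal sketch, with an invented half-test correlation:

```python
# Sketch of split-half reliability: the Spearman-Brown formula steps the
# half-test correlation up to full-test reliability. r_half is invented.

def spearman_brown(r_half):
    """Estimate full-length reliability from a half-test correlation."""
    return 2 * r_half / (1 + r_half)

r_half = 0.70                    # correlation between the two test halves
r_full = spearman_brown(r_half)  # higher than r_half: longer tests are more reliable
```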

4.4. Interrater: judgmental scoring of two or more raters who independently score the responses (measure of consistency of ratings)

4.5. Test-retest with equivalent forms: give two forms of the same test to the same group with an increased time interval between forms (measure of stability and equivalence)

4.6. Kuder-Richardson and coefficient alpha: the test is given once and the Kuder-Richardson or Cronbach's alpha formula is applied (measure of internal consistency)
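Cronbach's alpha can be computed from item-level scores as k/(k-1) · (1 - Σ item variances / total-score variance); with right/wrong (0/1) items it reduces to the Kuder-Richardson KR-20 formula. A sketch with invented dichotomous item data:

```python
# Sketch of Cronbach's alpha on invented item-level data; with 0/1 items
# this is equivalent to the Kuder-Richardson KR-20 formula.

def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """item_scores: one list of per-student scores for each item."""
    k = len(item_scores)
    totals = [sum(col) for col in zip(*item_scores)]  # each student's total
    item_var = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Invented data: 3 dichotomous items answered by 5 students
items = [
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [1, 1, 0, 1, 0],
]
alpha = cronbach_alpha(items)
```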

4.7. Standard error of measurement: an estimate of the amount of measurement error in scores; test scores should be interpreted as a band of scores

4.7.1. Low reliability: large variations in the student's assessment results

4.7.2. High reliability: small standard error of measurement
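The standard error of measurement follows from the score standard deviation and the reliability coefficient: SEM = SD·√(1 - reliability), and a band of ±1 SEM around an observed score covers roughly 68% of cases. A sketch with invented numbers:

```python
import math

# Sketch of the standard error of measurement (SEM) and the resulting
# score band; the SD and reliability values below are invented.

def sem(sd, reliability):
    """SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

def confidence_band(score, sd, reliability):
    """Band of +/- 1 SEM around an observed score (~68% coverage)."""
    e = sem(sd, reliability)
    return (score - e, score + e)

band = confidence_band(75, sd=10, reliability=0.91)  # approximately (72.0, 78.0)
```

Note how the band shrinks as reliability rises, matching the low/high reliability points above.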

4.8. Objectivity: the degree to which equally competent scorers obtain the same results

5. Complex-Performance

5.1. Scores: rubric based

5.2. Language: open-ended questions, written responses

5.3. Purposes: to reflect long-term instructional goals; requires solving complex problems and performing complex tasks.

5.4. Types: projects, laboratory experiments, essays, oral presentations

5.5. Subjective: scoring depends on rater judgment

6. Formative: assessment given during instruction to monitor progress and influence instruction

7. Placement: assessment given before instruction to determine current understanding

8. Diagnostic: assessment given during instruction to diagnose any learning difficulties

9. Summative: assessment given after instruction to determine understanding gained

10. Reliability of results depends on those grading the assessments