1. Two views of the reflections
1.1. Thematic view: several themes related to education, assessment, and the role of AI in teaching and learning
1.1.1. 1. Assessment and Rubrics
1.1.1.1. 1) How might instructors devote a section to reflection on one of the VALUE rubrics? How can they structure a rubric with reflection to make it a “fun” project? 2) Can it be said that feedback from a good rubric is more effective than traditional grading? 3) How is VALUE different from traditional assessment methods? Can these be applied to standardized tests, too? 4) How can I use VALUE while creating an assessment strategy for my course? 5) Unlike traditional tests, performance-based assessments lack a clear standard, so how can educators ensure fairness and consistency in grading? 6) Are current assessment practices truly capturing the depth of student learning, or are they reinforcing superficial knowledge? 7) How can we assess the assessments themselves? How do we know if we are using the right assessment methods?
1.1.2. 2. AI in Education
1.1.2.1. 1) Could there be a relationship between AI and the assessment key points addressed by Wiggins and McTighe (2005)? 2) How can we integrate performance-based assessments as a form of standardized testing in an AI-driven world? 3) How can AI be used to facilitate shifts in assessment that allow students to feel increased agency and motivation? 4) How can educators be transparent with their students about the implications of AI? 5) What role should human oversight play in AI-driven assessments to maintain educational integrity? 6) How can we make sure teachers are not overly using AI to grade students or design classes? 7) How can educators overcome their fear of new technologies like AI? 8) Can AI one day be trained to grade and provide descriptive feedback for students to learn successfully? 9) Will the existence of on-screen pedagogical agents make students more dependent, weakening their ability to learn and think independently? 10) What are the best practices for educators to design AI-powered companions that support student learning?
1.1.3. 3. Student Motivation and Emotional Engagement
1.1.3.1. 1) I wonder if cultural differences play a role in what students use as motivation for learning; are some students more motivated by quantitative scores? 2) How do emotional engagement and prior knowledge affect learning? 3) How can we help students detach their value from grades while also decentering grading from learning? 4) In regions where grades are highly emphasized, how can we shift students’ and parents’ perspectives on grades to prevent discouragement? 5) If teachers rely too heavily on AI, can we still truly call ourselves teachers?
1.1.4. 4. Performance-Based and Multimodal Assessments
1.1.4.1. 1) How can multimodal assessments be structured to ensure they measure deep learning without overwhelming students? 2) How do we design assessments to prioritize growth and understanding over performance metrics? 3) Even though performance tasks better evaluate students’ learning, how do these tasks align with real-world situations and learning objectives?
1.1.5. 5. Feedback and Grading Practices
1.1.5.1. 1) Research shows that most students either do not read or do nothing with feedback. How do we make feedback accessible and readable? 2) How can educators assess and score students effectively without depending on letter grades? 3) How do educators decide which materials should be presented, and in what form, to make them more conducive to student acceptance?
1.1.6. 6. Instructional Design and Teaching Strategies
1.1.6.1. 1) What instructional design method(s) result in a more adequate understanding? 2) Wiggins and McTighe (2005) mention “thinking like an assessor” and “thinking like an activity designer.” How can educators switch between these roles? 3) How can we create a welcoming and engaging learning environment that motivates students to learn? 4) Empathy mapping may add burden to teachers. How can we make sure teachers are not overloaded while designing classes based on individual differences?
1.1.7. 7. Equity and Subjectivity in Education
1.1.7.1. 1) What are the implications of subjectivity in assessment for equity in education? 2) How do cultural differences impact motivation for learning, and can we accommodate these differences in the classroom? 3) What are meaningful ways K-12 educators can utilize empathy mapping in lessons? 4) How do we ensure subject-specific assessments align with real-world contexts?
1.1.8. 8. Standardized Tests and Professional Assessments
1.1.8.1. 1) Are professional tests like the Law Bar, CPA, and CFP still relevant in today’s society and education landscape? 2) What are the pitfalls of focusing a course solely on passing an assessment?
1.2. Holistic view
1.2.1. 1. Human-AI Integration in Assessment and Learning Design
1.2.1.1. This category captures the dynamics of using AI in educational assessments and instructional design, reflecting on both opportunities and challenges. It focuses on AI's role in grading, feedback, and instructional design while ensuring human elements remain central to learning.
1.2.1.1.1. 1) Could there be a relationship between AI and the assessment key points addressed by Wiggins and McTighe (2005)? 2) If teachers rely too heavily on AI, can we still truly call ourselves teachers? 3) How can AI be used to facilitate shifts in assessment that allow students to feel increased agency and motivation? 4) How can educators be transparent with their students about the implications of AI? How can we balance the efficiency and personalization of AI-driven assessments while ensuring that human elements, such as empathy and contextual understanding, remain central? 5) Given the belief that AI threatens teaching jobs, can AI be trained to grade and provide sufficiently descriptive feedback for successful student learning? 6) What are best practices for educators to design AI-powered companions that support student learning? 7) How can educators overcome their fear of new technologies like AI? 8) What role should human oversight play in AI-driven assessments to maintain educational integrity? 9) How can AI be utilized to provide students with more meaningful feedback that extends beyond mere letter grades? 10) Will the existence of on-screen pedagogical agents make students more dependent and weaken their ability to think independently? 11) How can we make sure teachers are not overly using AI to grade students or design classes?
1.2.2. 2. Designing Assessments for Deep Learning and Authentic Understanding
1.2.2.1. This category includes questions on designing assessments that go beyond surface learning, focus on real-world applications, and foster a deep understanding of content. It covers multimodal assessments, performance tasks, rubric design, and feedback strategies to support authentic learning.
1.2.2.1.1. 1) Are current assessment practices truly capturing the depth of student learning, or are they reinforcing superficial knowledge? 2) How can assessments be designed to prioritize growth and understanding over mere performance metrics? 3) How can multimodal assessments be structured to ensure they measure deep learning without overwhelming students? 4) Can feedback from a good rubric be more effective than traditional grading? 5) What are meaningful and effective ways for K-12 educators to utilize empathy mapping in a whole-group lesson? 6) How can educators assess and score students effectively without depending on letter grades? 7) What are the implications of subjectivity in assessment for equity in education? 8) Are some interactive environments better suited for specific disciplines? Would it be worthwhile to develop several interactive options for the same course? 9) How do designers decide which teaching materials should be presented in what form to maximize acceptance by most students? 10) How can self-assessment contribute to the development of metacognitive skills in students? 11) Can a project on photosynthesis or Asian history be structured with reflection to make it a "fun" project so students will WANT to learn?
1.2.3. 3. Equitable and Culturally Responsive Assessment Practices
1.2.3.1. This category includes questions related to fairness, cultural responsiveness, and inclusivity in assessments. It addresses how educators can design assessment strategies that account for diverse learner needs and cultural differences.
1.2.3.1.1. 1) How can we help students detach their value from grades while also working to decenter grading from learning? 2) How do cultural differences play a role in what students use as motivation for learning? Can we accommodate these differences? 3) In regions where grades are highly emphasized, how can we shift students' and parents' perspectives on grades? 4) How can educators ensure fairness and consistency in grading performance-based assessments? 5) What are the pitfalls of focusing a learning engagement or course only on passing an assessment? 6) What are the implications of subjectivity in assessment for equity in education? 7) How do we ensure standards established by school districts, states, etc., align with what our students need to learn?
1.2.4. 4. Rethinking Traditional Assessment Models and Standards
1.2.4.1. This category addresses questions on the relevance, evolution, and structure of traditional assessment models, such as standardized tests and professional exams. It explores how to integrate new assessment types that better reflect learning goals and real-world contexts.
1.2.4.1.1. 1) How can performance-based assessments be integrated into an AI-driven world where traditional standardized tests are prone to cheating? 2) How can we assess the assessments themselves—are we using the right methods that support learning? 3) Are professional tests (e.g., Law Bar, CPA) still relevant in today’s society and education landscape? 4) How is VALUE different from traditional assessment methods? Can these be applied to standardized tests? 5) Even though performance tasks better evaluate students’ learning, how do these tasks align with real-world situations and subject-specific learning objectives? 6) How can self-assessment contribute to the development of metacognitive skills? 7) What instructional design method(s) result in a more adequate understanding? 8) If learning standards do not align with our students' needs, how do we proceed?
2. Objective of the class: Provide an overview of the use of assessments and rubrics in education.
2.1. Assessment: A key tool for measuring learning and guiding instruction. VALUE Rubrics: A qualitative approach to assess competencies in college students, promoting transparency and equity. Multimodal Evaluation: Use of multiple methods to adapt to students' needs and optimize learning.
2.1.1. VALUE Rubrics for Assessment: Assess key competencies such as critical thinking, effective communication, and social responsibility. Establish clear criteria that promote transparency and fairness in evaluation. Enable comparability of results between institutions and the development of transferable skills. Reduce teacher workload by focusing on meaningful feedback.
2.1.2. Interactive Multimodal Learning Environments: Combine media (text, images, interactive activities) to maximize meaningful learning. Design principles: guided activity, feedback, reflection, and learner-paced control. Minimizing unnecessary cognitive load is essential to allow for deep learning.
2.2. Teaching More by Grading Less (or Differently)
2.2.1. Critique of Traditional Assessments: Traditional methods often promote competition rather than reflecting true learning. Grades tend to encourage extrinsic motivation, such as fear of failure. Suggested Alternatives: Descriptive feedback. Self-assessment and peer evaluation to promote more meaningful learning.