Tuesday, May 10, 2011

Ensuring Valid and Reliable Assessments of Student Learning Workshop

Megan Oakleaf, Syracuse University
Thursday, May 5, 2011, 1:00pm - 4:30pm
2011 LOEX Preconference

The first part of the workshop focused on outcomes and performance assessment. The second part covered rubrics.

Outcomes
Megan Oakleaf briefly discussed the necessity for "clear, meaningful, transferable learning outcomes." How will students be able to use what they learn in other contexts? Ensure that the outcomes are relatable but not overwhelming.

There are a lot of different formulas for writing outcomes, but all good outcomes begin with active verbs. See Assessment-as-Learning.

Performance Assessment
Oakleaf stressed active learning. She briefly discussed the "Understanding by Design" approach before turning to performance assessment. Performance assessments “focus on student’s tasks or products/artifacts of those tasks.” These assessments “simulate real life application of skills.” I was surprised to learn that your instruction/learning tool can also be your assessment tool. It seems so obvious, but I had never thought of it that way. For example, observing students completing a class activity such as a database search can be an assessment. Develop a checklist and check off criteria as each student meets them. She used the example of tallying how many students locate an appropriate article. In our introductory workshop, we could keep track of how many students are able to locate a book on their own in our find-a-book activity.
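
To make the checklist idea concrete, here is a minimal sketch of how an observation tally could be recorded. The criteria names and observation data are invented for illustration; they aren't from the workshop.

```python
# Hypothetical observation checklist for a database-search activity.
# Criteria names and observations are invented for illustration.
CRITERIA = [
    "chooses an appropriate database",
    "searches with relevant keywords",
    "locates an appropriate article",
]

def tally(observations):
    """Count how many observed students met each criterion.

    `observations` holds one dict per student, mapping
    criterion -> True/False as checked off during class.
    """
    counts = {criterion: 0 for criterion in CRITERIA}
    for student in observations:
        for criterion in CRITERIA:
            if student.get(criterion, False):
                counts[criterion] += 1
    return counts

# Two students observed: one met every criterion, one stalled early.
observations = [
    dict.fromkeys(CRITERIA, True),
    {"chooses an appropriate database": True,
     "searches with relevant keywords": False,
     "locates an appropriate article": False},
]

for criterion, met in tally(observations).items():
    print(f"{met}/{len(observations)} students: {criterion}")
```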

Rubrics
Rubrics “describe student learning in 2 dimensions”: indicators/criteria and “levels of performance.” Oakleaf introduced us to the RAILS (Rubric Assessment of Information Literacy Skills) project: www.railsontrack.com.

Rubric Creation Process
  1. Reflecting: "why did we create this assignment/assessment?" "What happened the last time we gave it?" "What is the relationship between this assignment/assessment and the rest of what students will learn?"
  2. Listing: "What specific learning outcomes do we want to see in the completed assignment/assessment?" "What evidence can students provide in this assignment/assessment that would demonstrate their learning?" "What are our expectations of student work? What does it look like?"
  3. Grouping and labeling: "Can we group our brainstorms in categories?" "How can we label them?" The labeled groups are now the "criteria."
  4. Creating: Draft the performance descriptions. Define the highest level of or best possible student performance. Define the worst and other developmental levels as needed.
To avoid the most common rubric design flaws, describe what a student at each level ‘looks’ like.
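
To illustrate the two-dimensional structure, here is a hypothetical sketch of a rubric as criteria crossed with performance levels, where each cell describes what a student at that level looks like. The criteria and wording are invented, not drawn from RAILS or the workshop.

```python
# Hypothetical rubric: criteria x performance levels, with a
# performance description in each cell. All wording is invented.
rubric = {
    "identifies keywords": {
        "beginning": "Types the assignment prompt into the search box verbatim.",
        "developing": "Extracts some topic words but misses key concepts.",
        "exemplary": "Extracts every key concept and lists synonyms for each.",
    },
    "evaluates sources": {
        "beginning": "Cites the first results without judging relevance.",
        "developing": "Checks relevance but not authority or currency.",
        "exemplary": "Weighs relevance, authority, and currency before citing.",
    },
}

def describe(criterion, level):
    """Return the performance description for one cell of the rubric."""
    return rubric[criterion][level]

for level in ["beginning", "developing", "exemplary"]:
    print(f"{level}: {describe('identifies keywords', level)}")
```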

Rubric Norming Process
Oakleaf recommends norming whenever multiple graders will use the same rubric; this limits discrepancies among evaluators.
  1. Think aloud about an example. Criterion by criterion, share how you came to your performance rating.
  2. Raters independently evaluate examples.
  3. Raters come together to identify similarities and differences in scoring patterns.
  4. “Discuss and reconcile inconsistent scores.”
  5. Repeat steps 2-4 with new samples until raters reach consensus.
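
As one way to spot the scoring patterns in step 3, here is a hypothetical sketch that computes exact percent agreement between two raters. The scores and criteria names are invented for illustration.

```python
# Hypothetical norming check: exact percent agreement between two
# raters across sample papers. Scores (1=beginning, 2=developing,
# 3=exemplary) and criteria names are invented for illustration.
rater_a = {"identifies keywords": [3, 2, 2, 1], "evaluates sources": [2, 2, 3, 1]}
rater_b = {"identifies keywords": [3, 2, 1, 1], "evaluates sources": [2, 3, 3, 2]}

def percent_agreement(a, b):
    """Share of samples on which both raters gave the same score."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

for criterion in rater_a:
    pct = percent_agreement(rater_a[criterion], rater_b[criterion])
    print(f"{criterion}: {pct:.0%} exact agreement")
# Criteria with low agreement flag the scores to discuss and
# reconcile before scoring the next batch of samples.
```

A more formal statistic such as Cohen's kappa could stand in for simple percent agreement here.
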
Closing the Loop
Based on your assessment, “Enact decisions to increase learning.” Assessment is useless if you don’t do this.
