Lesson Plan 5

What students had as an assignment: Assignment 4

[10 mins] Part 1: Students Share Refined Learning Goals

  • Activity: Students go around and present the refined learning objectives they wrote for their personal topic in Assignment 4 (during this first iteration, no comments or feedback are given).

[20 mins] Part 2: Discussion: Refining Learning Goals and Bloom’s Taxonomy

  • Writing the rubric can help evaluate learning goals and identify the holes in your learning goals. This can be an iterative process that you use to then refine the learning goals, material, evaluation, and rubric.
  • Learning goals are only as good as what you can evaluate.
  • It can be difficult to identify expert blind spots in learning goals.
  • When it comes to learning objectives and rubrics, there’s no right or wrong answer, but you do have to figure out what the class will be about. This iterative process creates a chance for you to explore, in a clear and concise way, what you think the class should actually be about – that’s what the learning goals should capture.
  • Q: Does anyone have any thoughts or feelings about the Bloom’s taxonomy reading?
  • Bloom’s taxonomy is helpful, but it’s important to understand that it is opinion, not science, so it isn’t a source of absolute truth. It’s OK to use verbs that are not included in the taxonomy.
  • We are explicitly (but gently) pushing back on Bloom’s taxonomy because some people treat it like a static rule that must be followed.
  • Learning goals can have a lot of value when co-teaching because it can help make sure both instructors are on the same page on what should be covered, so they can be used as a communication tool
  • If you can turn a learning goal into a rubric, it means you really know how you’re going to evaluate it. Doing so, and adding corresponding point values to each learning objective, can make students care a lot more about the objectives.
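The idea above of turning learning goals into a rubric with explicit point values can be sketched as data. This is a minimal sketch; the objectives, point values, and helper functions below are hypothetical examples, not part of the course materials.

```python
# A rubric as data: each learning objective maps to a point value,
# which makes the weighting of objectives explicit up front.
# The objectives and point values here are hypothetical examples.
rubric = {
    "Define 3 common coverage metrics": 10,
    "Compute branch coverage for a small function": 20,
    "Choose an appropriate coverage metric for a given project": 20,
}

def total_points(rubric):
    """Total points available across all learning objectives."""
    return sum(rubric.values())

def weight(rubric, objective):
    """Fraction of the overall grade tied to one objective."""
    return rubric[objective] / total_points(rubric)
```

Making the weights explicit like this lets students see up front how much each learning objective counts toward their grade.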

[20 mins] Part 3: Whiteboard Demo: Iteratively Refining Learning Goals by Designing Evaluations

  • Go through example of taking a learning goal and trying to design questions to evaluate it, then using that to iteratively refine the learning goal
  • Wrong answer options on multiple choice questions are called “distractors” because they distract from the correct answer
  • Example Learning Goal: Remember– Define 3 common coverage metrics
  • Q: What kind of question could you ask to evaluate this learning goal?
    • Multiple Choice Question: What is branch coverage?
  • If you struggle to come up with questions to evaluate a learning goal, then you should change the learning goal (not just its language), usually by expanding or contracting it so it aligns with the desired learning outcome for the students
  • Learning objectives should be things that you can evaluate
  • Q: What would be a bad learning goal about coverage metrics that we wouldn’t be able to evaluate?
    • “Be able to work with coverage metrics”
  • What kind of other question could we use to evaluate this “Define” learning goal?
    • Fill in the blank
  • What other type of verb could we use here that we could ask a different type of question for? (Feel free to refer to Bloom’s taxonomy, however in situations like this sometimes the taxonomy shows its age because it doesn’t contain technology-specific verbs we might want to use like “debug”)

[20 mins + 10 mins] Part 4: Discussion: HLW Appendix C: Designing Rubrics

Discussion: Generating Rubrics

  • Q: Does anybody have any questions or thoughts about the readings about rubrics?
  • Nuance in rubrics means pain in grading, and you need to evaluate how much pain per student assignment you and your grading team can handle. This is a realistic tradeoff you need to make when designing rubrics and learning goals.
  • There is a tradeoff: questions with no nuance are much easier to grade, but the downside is they are an all-or-nothing proposition. One way to address this is to ask multiple low-nuance questions evaluating the same topic that together give a more holistic picture of student knowledge. How much nuance you want to evaluate in a question often depends on how important that learning goal is to the class.
  • There is also a consideration for students with varying levels of English proficiency: verbose multiple choice questions take longer for them to read, so they spend more time reading and trying to understand the question than they would with a free response question. On the other hand, multiple choice questions take less time to grade than free response questions. This is a balance to keep in mind when designing evaluations.

Discussion: Philosophy of Grading

  • Q: Are people familiar with specification-based grading?
  • Idea: Students should be able to decide what grade they want to get in the class based on the amount of effort they put into the class
  • Argues you should have tiers of effort that students can choose from, each corresponding to a grade. The rubric should be given to students ahead of time and should be clear, so students can turn in the work that corresponds to the score and amount of effort they’d like to aim for.
  • You objectively set the standard for performance before students work on the assignment
  • It can be very helpful to provide rubrics to students before they start working on the assignments. Ideally students should know what grade they’re going to get when they submit the assignment.
  • Specification-based grading argues for making it clear upfront how points will be allocated, so you don’t have to fight with students about points (assuming the rubric was applied correctly to the assignment)

[10 mins] BREAK

[15 mins] Part 5: Discussion of Grading Mechanics

  • Grading varies widely by the type of course (e.g., capstone course vs large introduction course)
  • Practically speaking: When writing rubrics you should be realistic about what you can do in terms of grading resources you have available to you
  • If you are going to give more nuanced questions, you need more grading resources and support
  • Principles for grading:
    • Give feedback; it is important. Feedback should be consistent and prompt.
    • Grading should not take a long time; feedback should be swift
    • Grading should be reliable and consistent across graders
  • When you have multiple people grading the same assignment, a big concern is making sure grading is consistent across graders, which can be difficult in practice. And the more granular the scoring, the harder it can be
  • Bucket grading can be an effective approach for making grading more consistent across different graders
  • Another strategy is to have one grader grade all of a given question
  • Discussion of Late Days and Extensions:
    • Deadlines can help students by giving them structure, but there is a balance regarding how much flexibility to give
    • Sometimes giving students too much freedom causes students to fall behind on assignments to an extent that makes it difficult to catch up by the end of the course
    • Students who need support the most often benefit from some sort of structure
    • Transparent policies about the number of late days can be helpful
  • Social Expectations on Student Side
    • Sometimes students will come to you saying things like “My mom won’t let me quit CS” because they have external pressures, and being empathetic about that can be helpful when they’re pressuring you to bump their grade
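The bucket grading approach mentioned in Part 5 can be sketched in code. This is a minimal sketch; the bucket names and point values below are hypothetical examples.

```python
# Bucket grading: instead of awarding fine-grained points, each submission
# is placed into one of a few coarse buckets, which makes it easier for
# multiple graders to score consistently.
# The bucket definitions here are hypothetical examples.
BUCKETS = [
    ("full credit", 10),    # correct approach and correct result
    ("partial credit", 5),  # right idea, but significant mistakes
    ("no credit", 0),       # little or no meaningful progress
]

def bucket_score(bucket_name):
    """Map a grader's bucket choice to points."""
    for name, points in BUCKETS:
        if name == bucket_name:
            return points
    raise ValueError(f"unknown bucket: {bucket_name}")
```

Because graders only choose among a few coarse buckets rather than assigning arbitrary point deductions, there is less room for grader-to-grader drift.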