Having written a piece on how to prepare for #comprehensiveexams (comps), I was asked how I approached evaluating responses to questions.
I was taken aback by the question - because while I have a personal process - and every place I have worked distributes a #rubric to graders - I have never seen anyone offer direct guidance to faculty, or insight to students, on the evaluation process.
I’m sure there are many approaches - so this post reflects mine - and lessons learned at three US-based universities.
So for #PhDstudents, understand this is my view, not your faculty’s view.
So for faculty, understand that I know there are many views on evaluating comps. I’m not suggesting you adopt mine, but first-time raters might find it helpful.
First, the evaluation process.
Usually, every #compquestion has multiple raters - never fewer than two, sometimes as many as six.
Usually, the #faculty have agreed upon a rating scheme: fail, low pass, pass, and high pass.
Usually, the raters do not talk to each other about specific responses or scores.
Usually, at least one rater adds a plus or minus to the evaluation, which is ignored, but makes them feel good.
Usually, despite not talking to each other and not adhering to the rating scheme, the scores are remarkably consistent.
Never have I seen a bimodal evaluation of comps, where half the raters pass and half fail a student.
Usually, the raters only meet as a group if the aggregate score suggests a need to fail or take remedial action on a student.
Never have I seen an exam casually passed because no one wants to attend another meeting.
So a pass/fail is serious & reflects many raters’ scores.
So how do I rate comps?
First, I read the answer & see if it responds directly to the question.
Many students write answers to the question they want, not to the presented question.
This earns a low mark.
Second, I ask, is the answer coherent?
Many students do an incoherent data dump.
This earns a #lowmark.
Third, I ask, is the answer accurate?
An accurate answer responds directly to the question and supports its claims with references.
This earns a #highmark.
Fourth, I ask, is the answer unexpected?
Assuming the answer to 3 is yes, unexpected answers show you might someday advance the literature.
This earns a high mark.
If the answer to 3 is no, then the student is in trouble.
Finally, I ask whether the question was reasonable.
Question writers tend to write independent questions minimally edited by the #examcoordinator.
Sometimes, #insanequestions slip through the process.
If so, I go back to 1 & restart the evaluation.
I’ve found my marks out of step with others on rare occasions. Invariably, the #PhD student has passed the exam.
In those moments, I have gone back & re-read the exam to evaluate why. This helps me prepare for the following year.
I hope this takes some of the mystery out of #compsgrading for students!
Best of luck!