Grading papers always provides a good opportunity to think through the process of writing, reading, and assessing student work. Over the last few weeks, I've graded a stack of undergraduate papers from both a mid-level course for history majors and an introductory-level survey course. As I worked my way through the stack, my mind kept returning to a few points (and, yes, one of those points was "how many papers are left in this stack?").
1. When did people stop grading with red pens? I am always excited to rip into my box of new pens, pull out the strangely colored ones, and set them aside for grading. (I can't imagine taking notes at a faculty meeting in a purple pen...). I never grade in red any more. I think somewhere, perhaps in graduate school, someone convinced me that it was traumatic to grade a student with a red pen. Red stood for blood, fear, anger, and possibly communism. So I dutifully stopped grading in red and have brought on board every other color in the rainbow.
2. Writing and Reading. Reading student papers, even the good ones, frequently gets me wondering about what my students read (outside of my class, of course!). Some of the characteristics of student writing (in a general sense), like the short paragraphs and the tendency to rely heavily on direct quotation, suggest the strong influence of journalism. The long, tenuously organized sentences that appear in so many student papers must reflect some other major influence on their writing. Perhaps the wandering, grammatically ambiguous sentence does not have roots in a written context at all, but reflects the influence of everyday speech patterns on writing. The spoken word might even account for the tendency to write almost exclusively in passive voice. Certainly passive voice is used by us far more frequently in speech than in writing.
3. Assessment. Like the hula hoop, Cabbage Patch Kids, and the so-called "iPod", the assessment craze is sweeping our country. It seems to me that the goal of assessment is to ascertain more clearly what makes a good teacher or a good course effective. To do this, it is necessary to break down the course into small parts -- grading, lecturing, leading discussion, witty banter, collegiality -- and subject as many of these facets of the classroom experience to assessment as possible. In most cases, the instructor conducts the assessment using some kind of common template. Since it is necessary to aggregate most assessment results for presentation to assessment committees or panels, most assessment techniques have a quantitative component.
As someone who feels fairly comfortable dealing with numbers, samples, and quantitative data, I find it almost fun to break down the process of grading, for example, into assessable steps. At present our assessment techniques focus heavily on student learning as a barometer for faculty effectiveness. Consequently, there is less emphasis on what a faculty member does in the classroom and more on whether the student improves over the course of the semester or the degree program. Each student's performance on their papers, for example, gets broken down into certain assessable categories (do they have a clear thesis, do they use evidence successfully, do they advance a coherent argument, and is their paper structured rationally?). Invariably these categories coincide with the key categories of the more traditional mode of assessment (i.e., grading). I record both the assessment data and the grades to determine exactly how closely our assessment rubric and the students' grades coincide, both now, at the midterm point of the semester, and again with the final papers. Improvement in grades should correlate closely with improvement in the assessable skills, but writing style (and, of course, grading pen color) falls to the blurrier margins of our assessment criteria and could produce some deviation between a student's grade and assessable performance.