Adjusting GPAs: A Statistician’s Effort to Tackle Grade Inflation

Nearly two decades ago, Texas A&M University statistician Valen E. Johnson found himself in the trenches of one of the most contentious fights in higher education: curbing grade inflation — a problem that still persists nationwide, as does Johnson’s interest in finding a solution.
As a professor at Duke University in 1997, Johnson became the face and statistical muscle behind a first-of-its-kind effort to address grade inflation. The plan would have created an adjusted grade-point average (GPA), but it was defeated because of opposition, mostly from faculty members in the humanities and students worried about having lower GPAs.
A recent analysis of 200 colleges and universities published in the Teachers College Record found that 43 percent of all letter grades awarded in 2008 were A’s, compared to 16 percent in 1960. And Harvard’s student paper recently reported that the median grade awarded to undergraduates at the elite school is now an A-.
“I think it’s resulting in something of a reduction in academic standards,” said Johnson, the author of Grade Inflation: A Crisis in College Education, published in 2003 and now in its second printing. “There’s no real incentive for change. The students want higher grades. The faculty — because their promotion, tenure and merit increases are based, to some extent, on student evaluations — know they’re more likely to get better evaluations if they give better grades. And administrators don’t want to jump in to impose reform.”
Proposal created ‘adjusted GPA’
Johnson, who joined the Texas A&M Statistics faculty in 2012, is a renowned expert in Bayesian statistics, which uses probability distributions to represent uncertainty about all unknown quantities. His involvement in the Duke fight, which attracted national headlines at the time, started when the university’s provost asked Johnson to sit on a committee to examine grade inflation. Johnson requested and received a trove of data from the registrar’s office. It quickly became clear, he said, that different faculty members were using vastly different standards to assign grades. For instance, on aggregate, humanities instructors graded more leniently than social sciences instructors, who graded more leniently than natural sciences instructors.
“So I created a peer-reviewed statistical model that accounted for the differences in grading and came up with an adjusted GPA,” Johnson said. “There was then a semester-long discussion before the dean of undergraduate affairs proposed that the adjusted GPA replace GPA on student transcripts starting the next semester. That was a really bad move. Nobody knew what their adjusted GPA was. Students all thought they were taking easier classes than everybody else, so everyone thought their GPAs would go down, even though the overall average would remain the same.”
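The article does not spell out how the adjusted GPA was computed, and Johnson’s published model is a Bayesian one. As a rough, purely illustrative sketch of the underlying idea — not his actual method — one could re-center each grade against the average grade awarded in the course that produced it. All names and numbers below are hypothetical:

```python
from collections import defaultdict

# Hypothetical grade records: (student, course, grade points on the 4.0 scale).
# These names and values are illustrative only, not data from the Duke study.
records = [
    ("alice", "HUM101", 4.0), ("alice", "SCI201", 3.0),
    ("bob",   "HUM101", 3.7), ("bob",   "SCI201", 3.3),
    ("carol", "HUM101", 4.0), ("carol", "SCI201", 2.7),
]

# Average grade awarded in each course -- a crude proxy for grading leniency.
course_grades = defaultdict(list)
for _, course, gp in records:
    course_grades[course].append(gp)
course_mean = {c: sum(g) / len(g) for c, g in course_grades.items()}

# Overall mean across all grades, used to keep the adjusted scale familiar.
overall_mean = sum(gp for _, _, gp in records) / len(records)

# Adjusted GPA: each grade is measured relative to its course's mean,
# then shifted back onto the usual 4.0-style scale.
student_adjusted = defaultdict(list)
for student, course, gp in records:
    student_adjusted[student].append(gp - course_mean[course] + overall_mean)

for student, adj in sorted(student_adjusted.items()):
    print(student, round(sum(adj) / len(adj), 2))
```

Under this toy adjustment, a B earned in a stringently graded course counts for more than an A earned in a course where nearly everyone receives an A, while the campus-wide average stays the same — the property Johnson describes above.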
Johnson said that five universities had already asked him to compute adjusted GPAs and that he believed, had a powerhouse like Duke taken the first step, the other schools would have followed. But after contentious debate, Duke’s Arts and Sciences Council, which had the final say on the matter, defeated the proposal on a 19-14 vote.
Following the failed effort, the provost gave Johnson $10,000 to further study questions about grades raised by the debate. His research revealed a pair of key findings: Students gravitate toward taking courses offered by instructors they deem to have laxer standards, and they also tend to give better evaluations to instructors who gave them higher grades.
Better grades, better evaluations
As part of his follow-up research, Johnson set up a website where students could see mean grades that instructors had given in the past and course evaluations of instructors left by other students. When students looked at a past course’s mean grade, Johnson received a record of it. He then looked at whose courses those same students signed up for in the spring and examined the relationship between the variables.
“For example, if they looked at three sections of calculus with different instructors, and one instructor graded easier than the other instructors, I could see how much more likely they were to register in the class in which instructors graded more leniently,” Johnson said. “It turns out they were about twice as likely to enroll in a course that was graded with an A-minus average versus a B average after they looked at the course mean grades.”
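The article reports that finding but not the analysis behind it. Assuming the website logs recorded which sections a student viewed and which one they ultimately enrolled in, a minimal sketch of the comparison might look like the following; the section names, mean grades and enrollment records are hypothetical:

```python
# Hypothetical viewing/enrollment log: for each section a student compared,
# the mean grade of that section and whether the student enrolled in it.
views = [
    {"student": "s1", "section": "CALC-A", "mean_grade": 3.7, "enrolled": True},
    {"student": "s1", "section": "CALC-B", "mean_grade": 3.0, "enrolled": False},
    {"student": "s2", "section": "CALC-A", "mean_grade": 3.7, "enrolled": True},
    {"student": "s2", "section": "CALC-B", "mean_grade": 3.0, "enrolled": False},
    {"student": "s3", "section": "CALC-A", "mean_grade": 3.7, "enrolled": False},
    {"student": "s3", "section": "CALC-B", "mean_grade": 3.0, "enrolled": True},
]

LENIENT = 3.5  # treat sections averaging roughly A-minus or better as "leniently graded"

def enroll_rate(rows):
    return sum(r["enrolled"] for r in rows) / len(rows)

lenient = [r for r in views if r["mean_grade"] >= LENIENT]
strict = [r for r in views if r["mean_grade"] < LENIENT]

ratio = enroll_rate(lenient) / enroll_rate(strict)
print(f"Relative likelihood of enrolling in the leniently graded section: {ratio:.1f}x")
```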
To study the impact of course grades on student evaluations, Johnson had freshmen fill out evaluations in the fall for courses they were currently taking, along with their grade expectations for each course. He then had those same students fill out evaluations for the same courses again in the spring, this time knowing the grades they had actually earned.
“That allowed me to look directly at the influence of course grades on student evaluations,” Johnson said. “As you might expect, the effect of either expected course grade or received course grade is very powerful in student evaluations of teaching. If a student was getting a C in a course, he or she was very unlikely to rate the instructor highly. If they were getting an A in the course, they’re more likely to rate the instructor highly. I think this provides quantitative evidence for something most instructors know: If they grade easier, they will tend to get better course evaluations.”
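Here, too, the piece gives the conclusion rather than the computation. One minimal way to summarize such paired data is to compare ratings by the grade the student ultimately received, and to look at how each rating shifted between fall and spring; the values below are hypothetical:

```python
from statistics import mean

# Hypothetical paired evaluations: the rating a freshman gave in the fall
# (knowing only an expected grade) and again in the spring (knowing the
# received grade), on a 1-5 scale. Illustrative values only.
pairs = [
    {"received": "A", "fall_rating": 4.2, "spring_rating": 4.6},
    {"received": "A", "fall_rating": 4.0, "spring_rating": 4.5},
    {"received": "B", "fall_rating": 4.1, "spring_rating": 3.9},
    {"received": "C", "fall_rating": 3.8, "spring_rating": 3.0},
    {"received": "C", "fall_rating": 3.9, "spring_rating": 2.8},
]

for grade in ("A", "B", "C"):
    rows = [p for p in pairs if p["received"] == grade]
    spring = mean(p["spring_rating"] for p in rows)
    shift = mean(p["spring_rating"] - p["fall_rating"] for p in rows)
    print(f"Grade {grade}: mean spring rating {spring:.1f}, shift after learning the grade {shift:+.1f}")
```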
Recommendations
Today, Johnson is circumspect about the Duke effort. In hindsight, he calls it “a disaster.” And if he had it to do over again, he would have recommended a different approach rather than eliminating the current GPA system, which caused apprehension among faculty and students alike. He now recommends keeping the same GPA measure, but perhaps using the adjusted GPA to distinguish students with a special mark or honor so that graduate schools and employers know the student stood out.
To alleviate some of the fear instructors have of receiving negative student evaluations in response to awarding poorer grades, Johnson said administrators should consider an approach that would eliminate a certain percentage of an instructor’s lowest evaluations. The percentage eliminated would be tied to the number of lower grades the instructor assigned.
“At least when students come to argue about grades to try to get them up, the instructor would feel more comfortable in holding the line, so that incentive would disappear,” Johnson said.
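The recommendation is described only in outline. A minimal sketch of how such a policy could be parameterized might look like this; the 2 percent per low grade and the 25 percent cap are assumptions for illustration, not figures from Johnson:

```python
def evaluations_to_keep(ratings, low_grades_assigned, drop_per_low_grade=0.02, max_drop=0.25):
    """Discard a share of an instructor's lowest evaluations, with the share
    tied to how many low grades the instructor assigned.

    The 2% per low grade and the 25% cap are illustrative assumptions.
    """
    drop_fraction = min(low_grades_assigned * drop_per_low_grade, max_drop)
    n_drop = int(len(ratings) * drop_fraction)
    # Sort ascending and discard the lowest n_drop ratings.
    return sorted(ratings)[n_drop:]

# Example: an instructor who assigned 10 low grades has 20% of their
# lowest ratings removed before the average is computed.
ratings = [2, 2, 3, 3, 4, 4, 4, 5, 5, 5]
kept = evaluations_to_keep(ratings, low_grades_assigned=10)
print(kept, sum(kept) / len(kept))
```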
Johnson noted efforts to include the average or median grade for a course alongside the actual grade earned haven’t helped correct grade inflation when tried at other universities. While well intentioned, he said such measures have merely given students a new mechanism to figure out how to find the most lenient instructors. In addition, Johnson said, such a move actually could put the affected students at a disadvantage when potential employers compare them with students from other universities that don’t have such requirements.
To learn more about Johnson, his research into grade inflation and his applications of statistics to solve an eclectic range of issues impacting a variety of industries, visit http://www.stat.tamu.edu/~vjohnson/.