A friend and colleague who read and sympathized with my previous post passed along to me an essay by the late C. Q. Drummond, a long-time member of the Department of English at the University of Alberta. The essay is called “On the Duties of Professors,” and it addresses many of the same issues as my post, particularly the competition for attention, resources, and rewards between research and teaching. As competitions go, all academics know, this is a distinctly unequal one these days. Officially, university policies may stress the equal importance of both duties, but inadequacy or irresponsibility in teaching will never hold back someone’s tenure or promotion if they have a “strong” publication record. And while the administrative infrastructure for research is large and powerful, topping out at the Vice Presidential level, if the two duties are really equally important, where, Drummond rightly asks, is the “Vice President (Teaching)”? (Here at Dalhousie, our office of Research Services has 22 staff, including a VP and an Associate VP. Our Center for Learning and Teaching has 10, with a Director and Associate Director at the top.) Not that Drummond wants to see an expansion of teaching-related bureaucracy–though I quite like his idea for how a VP (Teaching) would go about his or her business: this VP “would move through all the Faculties, visiting classes, hearing lectures, attending seminars, drinking coffee, joining oral examinations, talking into the night.” Through qualitative engagement with teachers and students, this VP would become “another source of evidence, besides tabulated student assessments, for who teaches well and who poorly.”
Drummond’s remarks are directed specifically at his own situation: at the time of writing (around 1984), he had recently been “penalize[d] for insufficient publication during a year in which [his Faculty] received extraordinary evidence of his merit as a teacher.” There’s a polemical thrust to them, as a result, but Drummond uses the occasion to place his own professional experience into its larger context: the increasing dominance of precisely the kind of quantitative measures of research “output” about which I was complaining yesterday. Actually, there is one difference that signals the 30-year gap between us: I didn’t notice any mention of research grants in his piece. I expect he would have objected still more strenuously to measuring scholarly success by level of external funding. He directs his criticism at “forced publication,” and at the reductive equation of publication with research or scholarship:
The Salaries and Promotions Committee certainly does not ask for wisdom; it does not ask for erudition or for scholarship; it does not ask for learning, or even for research; it asks for output, something to be measured or counted. . . . What good does such output do anyone? If research in an Arts Faculty means humane learning, then we all hope our teachers are as much involved in research as they possibly can be. We want them to know better and better what they are talking about, so that they will have, and will continue to have, something intelligent and important to profess to their students. But if research means output or publication, as it so often does today, how do the students profit? And how does the scholarly world profit from the forced production of ephemera? Most professors in Arts Faculties would be better off reading more and publishing less, and their students would be better off too, and so would the world of scholarship.
The very term “research” is, he argues, part of the problem. He quotes George Whalley, who argued in an essay of his own that “research” suggests a goal-oriented activity, work carried out in pursuit of something in particular. “The functions of research,” Whalley writes, “are specialized and limited; . . . the word research is not a suitable term for referring to the central initiative and purpose of sustained inquiry in ‘the humanities’ . . . ‘The humanities’ is what ‘humanists’ do; not only what they study, but how they study, and why . . . .”
Drawing on the Handbook published by the CAUT (invoked by his Dean in response to Drummond’s appeal of the Committee’s decision), Drummond himself brings in the vocabulary of knowledge “dissemination,” which is once again very current in discussions of our aims:
Research should result in teaching, and might result in publication, teaching and publication being the most important means of dissemination of knowledge. We may teach those near at hand in our lectures, discussions, tutorials, apprenticeships, and supervised practical training, or we may teach those distant through our published papers, articles, essays, and books. But in either case we will have to have found out and shown something worth lecturing about, discussing, or writing down. And where will we have our greatest effect in disseminating what we have found out and know? . . . Dissemination has to do with sowing seed; what we hope when we disseminate is that the seed will take root and grow. . . . So much of the seed one sows in publication falls by the wayside and is devoured by birds, or falls on stony ground, or among thorns and yields no fruit. What the good teacher sows in his class or tutorial is far more likely to find the good ground, spring up, increase, and itself bring forth.
He reiterates at intervals throughout the piece that he is not opposed to either research or publication, only to a mechanistic understanding of both, especially when it “drives out teaching”–which almost inevitably follows: institutional systems of measurement and incentives are set up not “to encourage the combination of knowing and teaching,” but to “encourage the production of printed pages,” and “because we live in a world in which time itself is scarce, the time taken for one must be taken from the other.” Again, it’s not that he wishes teaching, in its turn, to drive out research–teaching depends on research, broadly understood as inquiry.
It’s not, in my turn, that I wish to drive out either research or publication, both of which are essential (as Drummond too acknowledges) to learning, teaching, and knowledge dissemination. What bothers me is the incessant identification of “productive” scholarly activity with a narrow model of output, a cloistered, specialized, self-referential kind of publishing supported, ideally, by as large an external grant as possible. It’s a shame that the faux-scientific model Drummond objects to is now so firmly entrenched–so deeply entangled in the values, practices, and especially the finances of our universities–that it seems unimaginable that we could ever undo it. Some might argue that we have won more by it than we have lost–that without playing the game that way, we would have forfeited any place in the contemporary academy. Others might reply that, yes, we are playing the game, but on terms by which we can only, ultimately, lose: however vast our research output, will we ever win either the public or the institutional respect enjoyed by the sciences? Hasn’t our preoccupation with research actually isolated us and cost us public support? And in our effort to insist on the goal-oriented practicality of our fields, we may have flagged in our defense of their intrinsic value. Again, it’s not that I think we should not do research, or publish what it teaches us–but it’s a shame that the system is so rigged in favor of hurrying it along and rushing it into print–not to mention aiming it at a specific (and very narrow) audience. “I know for a fact,” Drummond observes, “that policies of forced publication never brought into being–nor could ever have brought into being–those critical books that have been to me most valuable.” That’s certainly true of my reading as well. The narrow concept of research and the pressure to publish, when made the primary measures of professional success, also marginalize undergraduate teaching.
(The emphasis in grantsmanship on teaching and funding graduate students, or “HQP” [Highly Qualified Personnel], is another whole area of trouble.) Finally, it seems to me paradoxically retrograde to be urging or following a model that measures productivity by grant size or output of peer-reviewed publications at a time when the entire landscape of scholarly communication is changing. We can circulate our ideas, enhance our and others’ understanding, pursue our inquiries and disseminate our knowledge in more, and often cheaper, ways than ever before. As long as we are all using our time in service of the university’s central mission–the advancement of knowledge, including through teaching–by the means best suited to the problems we think are most important and interesting to pursue, aren’t we doing our duty as professors?
But as the Associate Vice President who spoke to my Faculty on Thursday said repeatedly, there aren’t “metrics” for those other ways of doing (or discussing) research or measuring its impact: they do not yield data that can be counted, measured, and easily compared across departments, faculties, and campuses. Apparently, that means we have to set them aside–or, at any rate, that the VP (Research) will do so, when reporting to us on our “performance.”
The essay I discuss here is in the volume In Defence of Adam: Essays on Bunyan, Milton, and Others by C. Q. Drummond, edited by John Baxter and Gordon Harvey (Edgeways Books, 2004).