Reading and Research Redux: The Somerville Novelists Project

I admit, my earlier question “When is reading research?” was a bit disingenuous: obviously, research is purposeful reading. Of course, this definition can get batted around a bit too, depending on how you define your purpose: the pursuit of pleasure? aesthetic enrichment? familiarity with current best-sellers? Perhaps it’s better to say that, at least in a university context, research is reading in pursuit of knowledge, or reading directed towards solving a problem or answering a question or accomplishing a task. As Jo VanEvery also points out in her recent post on this topic, though, we have become preoccupied with the results of that reading, so that oddly, the process of exploration fundamental to defining a question in the first place has become devalued. And in universities we have also become preoccupied with research funding as a measure of productivity and success. If you don’t have a grant, you aren’t doing it right. Here, for instance (with specifics expunged), is what the Assistant Dean of Research for my Faculty reported at the last Faculty meeting:

X has been awarded a —- Grant; X and Y have received a —- Grant for a conference… —- Grant applications this year are numerous and promising; X’s project on Y received a very positive mid-term review [from its funding agency].

At a recent presentation from one of our VPs for research, at which he tracked our “success” and goals exclusively in terms of granting dollars, he made the point that money is measurable and thus is the easiest aspect of research to track and evaluate. The same is true, of course, of publications. But (as I and others pointed out to him emphatically in the Q&A that followed) that’s only true if the rubric you want to use is a pie chart or bar graph. If you really understand (as he claimed to) that research funding does not tell the whole story about research productivity, much less about the value of any given research project (especially in the arts and humanities), why continue using such inadequate tools? Perhaps there are fields in which research is better explained in a narrative than in a PowerPoint slide. Would it be too much, I wonder, to try to change our habits so that we acknowledge other dimensions of research activity–and stop sending the incessant message that the best research is the most expensive? And what about research that culminates in new classes? Isn’t that work valuable to the university? Isn’t that a purpose to which universities are fundamentally committed? You wouldn’t think so, by the way the term “research” is typically used on campus.

In any case, I can tell when my own reading has crossed into research of that more recognizable kind because I start to think about it in terms of obligations–things I should look up, things I need to know in order to achieve my purpose. I start to think in terms of depth and definition: more about this and this and this, but not that. Still, it’s always hard to draw the lines: there are no external rules about relevance, so you have to keep reading somewhat open-endedly as you figure out just how it is that you are going to define your project. There’s not a question “out there” waiting for me to turn my attention (and my students’ attention) to it: I have to mess around in all kinds of material until I see what I could do with it that is interesting and new. This conceptual work is, for me, among the most interesting and creative phases: there’s the whole “tempting range of relevancies called the universe,” and then there’s your part of it, but where that begins and ends, and why, is something that, in literary research at least, is rarely self-evident.

I’m in that happy stage right now with my Somerville novelists reading. I have defined a purpose for it–my fall seminar–and the reading I had been doing out of personal interest, which had included all of Brittain’s Testament volumes as well as the volume of Brittain and Holtby’s journalism, some of their fiction (as well as Margaret Kennedy’s), and some biographical materials, is now the first phase of a more deliberate investigation. I think this phase is happy for me because it involves focus but not the kind of micro-specialization that would be required to say or do anything research-like on Middlemarch now. Instead of having to read abstruse ruminations on theoretical or other kinds of topics that have less and less to do with the things that excite me about Middlemarch, reading I would be doing only out of a weary sense of professional duty (must keep up with the latest!), I’m doing reading I’m genuinely interested in–maybe because this material has simply not attracted the degree of scholarly attention Middlemarch has, it’s still possible to talk about it quite directly and with a real sense of discovery.

Here are some of the books I’ve collected so far for this research:

Letters from a Lost Generation: First World War Letters of Vera Brittain and Four Friends. Ed. Alan Bishop and Mark Bostridge (I’ll be posting a bit about this soon, as I’m over halfway through – the stories are familiar from Testament of Youth but the letters in full have a remarkable immediacy and personality)

Winifred Holtby, Women and A Changing Civilization (I have a sad feeling that this 1934 book may have more relevance today than we’d like – “Wherever a civilisation deliberately courts its old memories, its secret fears and revulsions and unacknowledged magic, it destroys that candour of co-operation upon which real equality only can be based,” Holtby observes near the end – and flipping another page, I find “we must have effective and accessible knowledge of birth control.” Yes, I thought we’d had some of these fights before!)

Vera Brittain, The Women at Oxford

Vera Brittain, Lady into Woman: A History of Women from Victoria to Elizabeth II (I’m curious to see what this reads like in comparison to the many volumes of women’s historical biography I worked with for my Ph.D. thesis, later my book)

Susan Leonardi, Dangerous By Degrees: Women at Oxford and the Somerville College Novelists (as far as I know, this is the only critical work specifically dedicated to my seminar topic, and so far it is my main source for other relevant titles)

Behind the Lines: Gender and the Two World Wars. (This collection includes an essay by Lynne Layton specifically on “Vera Brittain’s Testament(s)” as well as some useful-looking contextual ones.)

Jane Roland Martin, Reclaiming a Conversation: The Ideal of the Educated Woman.

This list shows some of the frameworks that I expect will be important to talking about the core readings for the seminar in a rich and informed way: the stories of the writers; their works (our “primary” sources); the history of women at Oxford and in WWI (which means making sure I am reasonably well-prepared about general contexts); and theories and contexts on women and education, particularly university education. Each of the writers we’ll look at in detail will also raise more particular questions: with Sayers, for instance, the history of detective fiction will be of some relevance.

Doesn’t this sound like fun? That I’m excited about it makes me think it isn’t really research after all: research is work, right? Reading for pleasure isn’t work. And yet it can be, of course, and that’s the ideal of this kind of career–that it lets you do what you love, as well as you can, to make your living. That love itself can’t be the sole purpose of your reading makes sense in a professional context, but I’ve read an awful lot of scholarly writing that seems motivated by nothing more than the need to make certain moves in order to pass professional hurdles. In a previous post I quoted C. Q. Drummond saying “policies of forced publication never brought into being–nor could ever have brought into being–those critical books that have been to me most valuable.” Too much of the apparatus and discourse of research in the university seems to me to emphasize and reward everything but love of learning: it favors, as I said in that earlier post, “a narrow model of output, a cloistered, specialized, self-referential kind of publishing supported, ideally, by as large an external grant as possible.” This project so far has been supported only by me, with some help from my university library. So it won’t ever get me mentioned in the Assistant Dean’s report (just as my publications in Open Letters had no place, literally, at the display of recent books and articles put on in my Faculty)–especially if its only output is a class, not an academic article or book. I haven’t ruled out that kind of result down the road, but I haven’t defined it as a plan yet either. In the meantime, I’m going to keep calling what I’m doing “research.”

“On the Duties of Professors”: Research vs. Scholarship

A friend and colleague who read and sympathized with my previous post passed along to me an essay by the late C. Q. Drummond, a long-time member of the Department of English at the University of Alberta. The essay is called “On the Duties of Professors,” and it addresses many of the same issues as my post, particularly the competition for attention, resources, and rewards between research and teaching. As competitions go, all academics know, this is a distinctly unequal one these days: officially, university policies may stress the equal importance of both duties, but inadequacy or irresponsibility in teaching will never hold back someone’s tenure or promotion if they have a “strong” publication record. And while the administrative infrastructure for research is large and powerful, topping out at the Vice Presidential level, if the two duties are really equally important, where, Drummond rightly asks, is the “Vice President (Teaching)”? (Here at Dalhousie, our office of Research Services has 22 staff, including a VP and an Associate VP. Our Center for Learning and Teaching has 10, with a Director and Associate Director at the top.) Not that Drummond wants to see an expansion of teaching-related bureaucracy–though I quite like his idea for how a VP (Teaching) would go about his or her business: this VP “would move through all the Faculties, visiting classes, hearing lectures, attending seminars, drinking coffee, joining oral examinations, talking into the night.” Through qualitative engagement with teachers and students, this VP would become “another source of evidence, besides tabulated student assessments, for who teaches well and who poorly.”

Drummond’s remarks are directed specifically at his own situation: at the time of writing (around 1984), he had recently been “penalize[d] for insufficient publication during a year in which [his Faculty] received extraordinary evidence of his merit as a teacher.” There’s a polemical thrust to them, as a result, but Drummond uses the occasion to place his own professional experience into its larger context: the increasing dominance of precisely the kind of quantitative measures of research “output” about which I was complaining yesterday. Actually, there is one difference that signals the 30-year gap between us: I didn’t notice any mention of research grants in his piece. I expect he would have objected still more strenuously to measuring scholarly success by level of external funding. He directs his criticism at “forced publication,” and at the reductive equation of publication with research or scholarship:

The Salaries and Promotions Committee certainly does not ask for wisdom; it does not ask for erudition or for scholarship; it does not ask for learning, or even for research; it asks for output, something to be measured or counted. . . . What good does such output do anyone? If research in an Arts Faculty means humane learning, then we all hope our teachers are as much involved in research as they possibly can be. We want them to know better and better what they are talking about, so that they will have, and will continue to have, something intelligent and important to profess to their students. But if research means output or publication, as it so often does today, how do the students profit? And how does the scholarly world profit from the forced production of ephemera? Most professors in Arts Faculties would be better off reading more and publishing less, and their students would be better off too, and so would the world of scholarship.

The very term “research” is, he argues, part of the problem. He quotes George Whalley, who argued in an essay of his own that “research” suggests a goal-oriented activity, work carried out in pursuit of something in particular. “The functions of research,” Whalley writes, “are specialized and limited; … the word research is not a suitable term for referring to the central initiative and purpose of sustained inquiry in ‘the humanities’ . . . ‘The humanities’ is what ‘humanists’ do; not only what they study, but how they study, and why . . . .”

Drawing on the Handbook published by the CAUT (invoked by his Dean in response to Drummond’s appeal of the Committee’s decision), Drummond himself brings in the vocabulary of knowledge “dissemination,” which is once again very current in discussions of our aims:

Research should result in teaching, and might result in publication, teaching and publication being the most important means of dissemination of knowledge. We may teach those near at hand in our lectures, discussions, tutorials, apprenticeships, and supervised practical training, or we may teach those distant through our published papers, articles, essays, and books. But in either case we will have to have found out and shown something worth lecturing about, discussing, or writing down. And where will we have our greatest effect in disseminating what we have found out and know? . . . Dissemination has to do with sowing seed; what we hope when we disseminate is that the seed will take root and grow. . . . So much of the seed one sows in publication falls by the wayside and is devoured by birds, or falls on stony ground, or among thorns and yields no fruit. What the good teacher sows in his class or tutorial is far more likely to find the good ground, spring up, increase, and itself bring forth.

 He reiterates at intervals throughout the piece that he is not opposed to either research or publication, only to a mechanistic understanding of both, especially when it “drives out teaching”–which almost inevitably follows: institutional systems of measurement and incentives are set up not “to encourage the combination of knowing and teaching,” but to “encourage the production of printed pages,” and “because we live in a world in which time itself is scarce, the time taken for one must be taken from the other.” Again, it’s not that he wishes teaching, in its turn, to drive out research–teaching depends on research, broadly understood as inquiry.

It’s not, in my turn, that I wish to drive out either research or publication, both of which are essential (as Drummond too acknowledges) to learning, teaching, and knowledge dissemination. What bothers me is the incessant identification of “productive” scholarly activity with a narrow model of output, a cloistered, specialized, self-referential kind of publishing supported, ideally, by as large an external grant as possible. It’s a shame that the faux-scientific model Drummond objects to is now so firmly entrenched–so deeply entangled in the values, practices, and especially the finances of our universities–that it seems unimaginable that we could ever undo it. Some might argue that we have won more by it than we have lost–that without playing the game that way, we would have forfeited any place in the contemporary academy. Others might reply that, yes, we are playing the game, but on terms by which we can only, ultimately, lose: however vast our research output, will we ever win either the public or the institutional respect enjoyed by the sciences? Hasn’t our preoccupation with research actually isolated us and cost us public support? And in our effort to insist on the goal-oriented practicality of our fields, we may have flagged in our defense of their intrinsic value. Again, it’s not that I think we should not do research, or publish what it teaches us–but it’s a shame that the system is so rigged in favor of hurrying it along and rushing it into print–not to mention aiming it at a specific (and very narrow) audience. “I know for a fact,” Drummond observes, “that policies of forced publication never brought into being–nor could ever have brought into being–those critical books that have been to me most valuable.” That’s certainly true of my reading as well. The narrow concept of research and the pressure to publish also, when made the primary measures of professional success, marginalize undergraduate teaching. (The emphasis in grantsmanship on teaching and funding graduate students, or “HQP” [Highly Qualified Personnel], is another whole area of trouble.) Finally, it seems to me paradoxically retrograde to be urging or following a model that measures productivity by grant size or output of peer-reviewed publications at a time when the entire landscape of scholarly communication is changing. We can circulate our ideas, enhance our and others’ understanding, pursue our inquiries and disseminate our knowledge in more, and often cheaper, ways than ever before. As long as we are all using our time in service of the university’s central mission–the advancement of knowledge, including through teaching–by the means best suited to the problems we think are most important and interesting to pursue, aren’t we doing our duty as professors?

But as the Associate Vice President who spoke to my Faculty on Thursday said repeatedly, there aren’t “metrics” for those other ways of doing (or discussing) research or measuring its impact: they do not yield data that can be counted, measured, and easily compared across departments, faculties, and campuses. Apparently, that means we have to set them aside–or, at any rate, that the VP (Research) will do so, when reporting to us on our “performance.”

The essay I discuss here is in the volume In Defence of Adam: Essays on Bunyan, Milton, and Others by C. Q. Drummond, edited by John Baxter and Gordon Harvey (Edgeways Books, 2004).

This Week at Work: Reflections on Our Research Culture

Yesterday I received a reminder from the Mellon Foundation about a follow-up survey they are doing of people who did Ph.D.s supported by Mellon Fellowships. I remember how exciting it was when I learned I had won one of these fellowships, which was both generous and prestigious. I had mixed success with my actual Ph.D. applications–indeed, I was rejected by many more schools than accepted me–and I’ve often thought that the crucial factor in my winning the Mellon was the interview. I was (am?) more charming in person than on paper–it’s something about my sense of humor, I think, which apparently doesn’t carry over much into my writing!

In any case, winning a Mellon Fellowship made me a more attractive target for the schools that had offered me places: I ended up with the luxury of comparing complete five-year funding packages from a couple of excellent schools, and the even greater luxury of comparing these North American alternatives to using a Commonwealth Scholarship to go to the UK. In the end, I chose Cornell, starting in 1990 and finishing in 1995 with job offer in hand–job offers, in fact: while my job market success was also mixed and I got a lot of rejections, when I got close, I did pretty well (speaking of rejection, though, I’ll never forget the message telling me I was not offered the job I wanted most of all, which hit me like an emotional bomb when I read it in the dank basement computer lab where, in those olden days, I had to go to check my email–would it have been so hard to give me a phone call so I could have absorbed the blow in private?). Anyway, I chose Dalhousie, and (though I have made a few attempts over the years to move on) here I still am today.

The Mellon survey focused primarily on career paths and job satisfaction. Most of the questions were pretty easy (how many peer-reviewed articles did you publish before tenure? what kind of pre-tenure mentoring did you get? were there explicit expectations about the kind or quantity of publications you’d need for tenure?), but towards the end there were some more open-ended ones, and the very final one proved a real poser: If you had to do it all over again, they asked, would you do the same? Same degree, same school? Same degree, different school? Different degree? Or no Ph.D. at all?

Maybe this would not have been such a stumper of a question if they’d asked it on a different day, but yesterday was kind of a tough day for me at work. It’s not that I was busier than usual or overwhelmed with new tasks or dealing with confrontational students upset with their grades, or dead-ended on a writing project or behind in my class preparation. Rather, it was a day (one of many recent days) in which different priorities clashed in the department and I ended up feeling that more and more, we are steering by (or allowing ourselves to be steered by) the wrong values. There are a lot of moving parts behind the motions we have voted on recently, but the net effect is that a majority of the department has carried through an agenda by which we will reduce class offerings at all levels and increase class sizes at the undergraduate level, in order to bring our nominal teaching load down and thus clear more time for research during the academic term.

I emphasize that last clause because we have dedicated research time already (the spring and summer terms, when we do not regularly teach undergraduate classes, as well as our sabbaticals); the argument was being made for the importance of making more time for research while teaching, and thus the new plan deliberately favors reducing our contact hours and prep time. We’ll remain individually responsible for the same number of students, so any time savings won’t come from reducing our grading. Now, I find marking assignments as tedious as the next prof. What I don’t find tedious or want less of is face time with my students. My hours in the classroom are almost the only hours during which I have no doubts about my answer to the Mellon Foundation’s question. It’s true that class prep can be relentless, and in the middle of my heavier teaching term, I’m too busy with it–too overwhelmed by it, in combination with the marking–to do anything ambitious regarding other research or writing projects. Not nothing at all, but nothing much. But class prep can also be intellectually stimulating, and often is itself research, or feeds into ongoing research interests: I didn’t like the presumed opposition between teaching and research that dominated the arguments for the latest motion.

The problem is that this pitting of two of our essential tasks against each other is in large part a consequence of the pervasive research culture promulgated especially by administrators who talk about “productivity” and “output” in terms of grant dollars pursued and won, and of quantity (rather than quality and significance) of (conventionally peer-reviewed) publications. Tomorrow, for instance, we are invited to a “presentation” on “trends in FASS [Faculty of Arts and Social Sciences] research performance.” Let’s just say I will be pleasantly surprised if the emphasis is not squarely on those kinds of quantifiable measures. Everyone I’ve spoken to about it fully anticipates that the event has been set up as an occasion to chastise us for our failure to measure up, both to other faculties on campus and perhaps also to comparable faculties at other universities.

But the conversation we should be having is about the adequacy of the measures, about the damage they do and the absurdities they create. We should be talking about whether it’s really a good use of time for a humanities scholar to spend weeks, months even, on a grant proposal for a program with a success rate below 25%; we should be talking about the culture of greed and hypocrisy and cynicism that has been created by the pressure to ask for more and more money whether you need it or not, because big grants bring prestige (and support graduate students–and that’s another can of worms right there); about the flawed logic of trying to get grants because the university relies on its share of them to cover ‘indirect costs.’ We should be resisting the pressure to increase our research productivity according to such ill-fitting measures, and we should especially resist chipping away at our curriculum and at our undergraduate students’ educational experience because we want to look like the kind of “productive researchers” the university seems exclusively to recognize and reward. I don’t measure my “performance” as a scholar exclusively by my output of specialized peer-reviewed publications, or by my success at competing for external funding, and I don’t think my university should either. Here too, there are a lot of moving parts, and the funding challenges universities face are not something I take lightly (or understand completely, given their intricacies). But that doesn’t change the oddity of trying to twist and bend humanistic inquiry into something that looks like scientific research, and of treating us as failures precisely because we don’t do expensive projects.

Let me be clear: I don’t think there’s no point in our doing our research. I don’t think it’s a waste of time; I do think that there are both intellectual and social pay-offs from our efforts to understand the world better by way of understanding its literature. But I do think we produce enough of it already. I don’t think Mark Bauerlein makes a particularly fair or coherent argument about its excesses, but I also don’t think we need to “protect” more time to produce more of it faster. I actually think we should slow down and produce less of it, especially in conventional forms. How much “output” is enough? It’s not the quantity that should matter. How much research time is enough? If we let go of the artificial urgency fueled by the kind of presentation I’m looking forward to tomorrow, I think we’d find we already have enough time.

Now, to be fair, we haven’t exactly decimated our program, and we still have plenty of classes on the small side. But the pressure is undoubtedly upward. Big classes are routine elsewhere, I’m told, and a lower teaching load for full-time faculty is also the norm at other “research institutions.” But is this a good thing? Is this the way we want our resources distributed? Well, judging by yesterday’s voting, the answer for a lot of us is ‘yes.’ I understand why, but I feel that we’re in pursuit of a model of success or excellence that I just don’t believe in anymore. Sometimes sitting with my colleagues I feel like a nonbeliever in church! And it’s a church in which two things are sacrosanct: our research, and our graduate program–in the interests of which we have made all of the recent changes to our overall curriculum.

And this is why the Mellon survey question was so hard to answer. How can I be sorry that I’ve been able to pursue this career, which in many ways suits me so well? How can I regret that I can dedicate my time to things I not only think are really important, but love? In what other job can you be paid to spend hours and hours a week concentrating on literature, and working with bright, eager students to nurture their love of reading and their interest in the kinds of questions it opens up? But the other values of the profession have troubled me from the start of my Ph.D. work, and the systems of incentives and rewards, and of prestige and reputation too, skew very far in one direction. How can I not feel I’m out of step and perhaps unsuited for the career I chose when I can’t commit myself wholeheartedly to two of its central pursuits?

If I had the choice, would I do the same again? Today, I’m not sure. But ask me again after my small group discussion of Great Expectations on Friday. I bet my answer then will be “of course!”