What We Talk About When We Talk About Academic Blogging

Logistics and institutional issues: how do you find time for it, where (if anywhere) should it go on your c.v., and how should tenure and promotion committees evaluate it?

At least, this is what the audience questions were almost exclusively about when I spoke about blogging at my faculty’s “research retreat” on Friday. Here’s a link to the Prezi I used, which is basically a condensed version of the one I prepared for the British Association of Victorian Studies conference in August. I was supposed to speak for only 8-10 minutes, so I just highlighted the arguments for and against blogging as I see them and quickly pointed out what the illustrative quotations were, on the principle that interested parties can easily find the Prezi and read them (and follow the links) themselves. What I really tried to emphasize in my own remarks is that if we think about why we do research and publish it in the first place–to advance or improve a conversation–then writing online makes perfect sense. I also stressed that for me, the real benefits are intellectual. I specifically invited follow-up questions about ways my blogging had affected my teaching, my research, my writing, and/or my intellectual life. I didn’t get any questions about that at all, leading me to think that the single most important quotation in the presentation is the one from Jo VanEvery: “Scholars lose sight of the fact that academic publishing is about communication. Or, perhaps more accurately, communication appears disconnected from the validation process.” What people wanted to talk about was “validation.” As I said at the close of the discussion, I think that preoccupation in itself is worth reflecting on. It’s inevitable, perhaps, because we are professionals trying to get and keep jobs and build careers, but I think concern about bureaucratic processes should follow on reaching a better understanding of the value of the activity, to the individual scholar, to the university, and to the broader community. Maybe people were taking for granted that blogging could be beneficial in the ways I was describing and so didn’t need to ask about it, but the impression I got (perhaps unfairly) was that they couldn’t quite imagine those benefits trumping the low likelihood of professional rewards for the time spent. The one specific positive benefit someone raised from the floor was that blogging might help lay the groundwork for a grant application–but as I noted, that assumes that getting grants is itself a priority. What if we don’t need them to do the work we think is important? (You certainly don’t need a grant to keep a blog.)

And my responses to the questions that were asked? Well, the “how do you find time” question is not one that gets asked about activities that we do not perceive as “extra” to our “real” work, so the answer to that would depend on how you find time for anything you think should be among your priorities. I don’t have a strong opinion about what heading the blog should be under on a c.v. except that I think it should in some way be treated as a research, writing, and publishing project, not as “service.” And I think tenure and promotion committees should evaluate it by reading it — not one post, or even a few posts at once, but ideally by following it for a while as well as exploring the archives. I think bloggers (and academics involved in any non-traditional kinds of work) need to help by explaining clearly what they are up to and contextualizing it so that people who have never read a blog before (and there are still many of these people in academia) have some appropriate frameworks for what they are looking at, and they should also help by thinking about how to curate their blogs so that newcomers can easily grasp their range as well as follow key examples. In my own case, I think (I hope!) the index pages I’ve built are useful in this way. As indicated in the new MLA guidelines for evaluating digital scholarship, I also think that tenure and promotion committees need to include people who understand new forms of scholarly communication, including as external reviewers. Someone who is also a blogger, for instance, is more likely to appreciate and fairly assess the quality and contribution of another blogger’s site than someone who reads only conventional scholarship.

The other panelists were talking primarily about newspaper op-eds and letters to the editor. It was interesting to me that, in general, they expressed more discomfort with, or outright dislike of, the experience of being exposed to the unfiltered world of the internet than I have. Being social scientists and historians, though, they were talking about writing on political topics, so they are engaging in conversations where stupid, virulent attacks are more likely, not just because a national newspaper is much higher profile than my own quiet corner here, but also because politics riles people up more than whatever someone happens to think about The Good Soldier or Lightning Rods.* I can understand why one piece of advice they had, then, was simply not to read the comment threads that follow, but instead to wait for the wave of attention to pass, hoping to have made a small difference to the public conversation and perhaps to have created further networking or writing opportunities for yourself through the exposure. I felt lucky, really, that though I am not Utopian or idealistic about the openness of the internet, my own experience of it has been, by and large, really positive and rewarding.

*Though it is possible to rile people up a bit on these topics, if you have the right audience!

Research That Matters: Knowledge and Novelty

OK, I admit it. My previous post about reading and research is also disingenuous. In a university context, research is not just “purposeful reading” or “reading in pursuit of knowledge” or “reading directed towards solving a problem or answering a question.” University-level research, research that is publishable in professional venues, research that is eligible for funding, is research that produces new knowledge.  The research mission of a university is to move the frontier of knowledge, to add to the world’s sum of knowledge, to be at the cutting edge of knowledge… I know that! I’m only sort of pretending not to know it when I ask why research that serves other academic purposes, including teaching and individual intellectual development, does not earn a researcher the same support or the same professional credit.

But I’m pretending not to know it because “the pursuit of new knowledge” is not as obvious, or as easily applied, a principle as it sounds. One possible line of questioning begins with “new to whom?” The degree of hyper-specialization that characterizes the contemporary university is the result of the standard answer: new to other specialists in the field. This is obviously the right standard, isn’t it? It doesn’t advance knowledge to repeat what has been done before, to redo what has been tested. You can’t discover what is already known; you can’t have progress in a field unless you are constantly finding out something new.

This makes perfect sense, right? And yet it isn’t 100% obvious that what I’ve just said applies as well to literary research as it does to, say, research in genetics. What counts as a “discovery” in literary scholarship? Turning up a lost manuscript? OK, that’s an easy one. Explicating and contextualizing the work of a previously unknown or little-known author? Yes, good. Overturning a longstanding theoretical paradigm? Yup, I think so. Proposing a new reading of a novel based on paying attention to a detail nobody has ever paid attention to before? Well, OK. Contradicting a proposed new reading of a novel based on an alternative interpretation of a detail nobody has ever paid attention to before? Constructing a large theoretical claim based on readings of novels that pay attention to details usually disregarded? Yes, fine. Applying a theoretical framework from another discipline to a novel in order to read it in a way that it has never been read before? Yes! Of course! These are exactly the kinds of things literary scholars do (not all the things they do, but how long did you want this paragraph to get?). I wouldn’t argue that understanding texts in new ways doesn’t produce something reasonably called “knowledge.” At any rate, all of these activities affect the way we think about things. If our activity leads us and others to think in a new way, to see something in a new light, that moves some kind of frontier, surely.

But it seems to me there’s a difference that is at least worth thinking about between the importance of doing something new in genetics (or whatever) and pursuing novelty in literary studies. In some kinds of research, work that isn’t new and that doesn’t take into account every other recent discovery will be useless and irrelevant to anyone. But the drive towards novelty and hyper-specialization in literary studies is itself generating a great deal of work that is relevant only to other specialists, and even then, not so much. There is no large project or inquiry, after all, towards which incremental additions are being made; there’s just a proliferation of pieces often with little connection to each other. Even to other specialists, the work of keeping up is not only nearly impossible now, but also (and relatedly) of diminishing importance.

I’ve written about some aspects of this situation before, here in this post on Mark Bauerlein’s “The Research Bust.” I don’t think this kind of observation has to lead into an argument for the cessation of literary research, or for insisting that literary scholars return to doing only certain kinds of research that are more measurably productive of new information (for the case against literary “readings” and in favor of “a more traditionally scholarly conception of literary study”, see this post by D. G. Myers, also triggered by Bauerlein). One reason that I would support people continuing to do new readings is that we can’t be sure where inquiry will take us, and our sense of which “more traditionally scholarly” research should be a priority might well be affected by ideas arising from rethinking texts we thought we already knew. If these new readings are truly driven by intellectual curiosity, by attempts to puzzle through problems, however abstract, then there’s value in them, for the researcher as well as for the audience of other people also interested. “Who can say,” as George Eliot remarks in Middlemarch, “what will be the effect of writing?” And I think we ought to have the same open-minded assumption about thinking. (If the research is not truly curiosity-driven, on the other hand, then we might remark, with Dorothea, “what could be sadder than so much ardent labour all in vain?”)

But I don’t think that the only paradigm for valuable work in literary studies should be one derived from a scientific model, as if a similar cumulative advance of information is ongoing, or one that disregards the other kinds of audiences there are for literary understanding. The reason the umpteenth interpretation of Middlemarch is important to at least some specialists is that they already know a whole lot about Middlemarch — but lots of people don’t, people who would be interested in knowing more. Our focus on novelty underestimates the value of what we already know, even though unlike old theories of the atom, old ideas about books have not lost their real-life significance; it also undervalues the skills we have at making what we know accessible to people who don’t know it yet, and reduces our audience to each other instead of trying to imagine how we could be part of the broader literary culture. The ‘cutting edge’ is actually a much less important place to be in literary studies (as well as a much more shifting territory).

Reading and Research Redux: The Somerville Novelists Project

I admit, my earlier question “When is reading research?” was a bit disingenuous: obviously, research is purposeful reading. Of course, this definition can get batted around a bit too, depending on how you define your purpose: the pursuit of pleasure? aesthetic enrichment? familiarity with current best-sellers? Perhaps it’s better to say that, at least in a university context, research is reading in pursuit of knowledge, or reading directed towards solving a problem or answering a question or accomplishing a task. As Jo VanEvery also points out in her recent post on this topic, though, we have become preoccupied with the results of that reading, so that, oddly, the process of exploration fundamental to defining a question in the first place has become devalued. And in universities we have also become preoccupied with research funding as a measure of productivity and success. If you don’t have a grant, you aren’t doing it right. Here, for instance (with specifics expunged), is what the Assistant Dean of Research for my Faculty reported at the last Faculty meeting:

X has been awarded a —- Grant; X and Y have received a —- Grant for a conference… —- Grant applications this year are numerous and promising; X’s project on Y received a very positive mid-term review [from its funding agency].

At a recent presentation from one of our VPs for research, at which he tracked our “success” and goals exclusively in terms of granting dollars, he made the point that money is measurable and thus is the easiest aspect of research to track and evaluate. The same is true, of course, of publications. But (as I and others pointed out to him emphatically in the Q&A that followed) that’s only true if the rubric you want to use is a pie chart or bar graph. If you really understand (as he claimed to) that research funding does not tell the whole story about research productivity, much less about the value of any given research project (especially in the arts and humanities), why continue using such inadequate tools? Perhaps there are fields in which research is better explained in a narrative than in a PowerPoint slide. Would it be too much, I wonder, to try to change our habits so that we acknowledge other dimensions of research activity–and stop sending the incessant message that the best research is the most expensive? And what about research that culminates in new classes? Isn’t that work valuable to the university? Isn’t that a purpose to which universities are fundamentally committed? You wouldn’t think so, by the way the term “research” is typically used on campus.

In any case, I can tell when my own reading has crossed into research of that more recognizable kind because I start to think about it in terms of obligations–things I should look up, things I need to know in order to achieve my purpose. I start to think in terms of depth and definition: more about this and this and this, but not that. Still, it’s always hard to draw the lines: there are no external rules about relevance, so you have to keep reading somewhat open-endedly as you figure out just how it is that you are going to define your project. There’s not a question “out there” waiting for me to turn my attention (and my students’ attention) to it: I have to mess around in all kinds of material until I see what I could do with it that is interesting and new. This conceptual work is, for me, among the most interesting and creative phases: there’s the whole “tempting range of relevancies called the universe,” and then there’s your part of it, but where that begins and ends, and why, is something that, in literary research at least, is rarely self-evident.

I’m in that happy stage right now with my Somerville novelists reading. I have defined a purpose for it–my fall seminar–and the reading I had been doing out of personal interest, which had included all of Brittain’s Testament volumes as well as the volume of Brittain and Holtby’s journalism, some of their fiction (as well as Margaret Kennedy’s), and some biographical materials, is now the first phase of a more deliberate investigation. I think this phase is happy for me because it involves focus but not the kind of micro-specialization that would be required to say or do anything research-like on Middlemarch now. Instead of having to read abstruse ruminations on theoretical or other kinds of topics that have less and less to do with the things that excite me about Middlemarch, reading I would be doing only out of a weary sense of professional duty (must keep up with the latest!), I’m doing reading I’m genuinely interested in–maybe because this material has simply not attracted the degree of scholarly attention Middlemarch has, it’s still possible to talk about it quite directly and with a real sense of discovery.

Here are some of the books I’ve collected so far for this research:

Letters from a Lost Generation: First World War Letters of Vera Brittain and Four Friends. Ed. Alan Bishop and Mark Bostridge (I’ll be posting a bit about this soon, as I’m over half way through – the stories are familiar from Testament of Youth but the letters in full have a remarkable immediacy and personality)

Winifred Holtby, Women and A Changing Civilization (I have a sad feeling that this 1934 book may have more relevance today than we’d like – “Wherever a civilisation deliberately courts its old memories, its secret fears and revulsions and unacknowledged magic, it destroys that candour of co-operation upon which real equality only can be based,” Holtby observes near the end – and flipping another page, I find “we must have effective and accessible knowledge of birth control.” Yes, I thought we’d had some of these fights before!)

Vera Brittain, The Women at Oxford

Vera Brittain, Lady into Woman: A History of Women from Victoria to Elizabeth II (I’m curious to see what this reads like in comparison to the many volumes of women’s historical biography I worked with for my Ph.D. thesis, later my book)

Susan Leonardi, Dangerous By Degrees: Women at Oxford and the Somerville College Novelists (as far as I know, this is the only critical work specifically dedicated to my seminar topic, and so far it is my main source for other relevant titles)

Behind the Lines: Gender and the Two World Wars. (This collection includes an essay by Lynne Layton specifically on “Vera Brittain’s Testament(s)” as well as some useful-looking contextual ones.)

Jane Roland Martin, Reclaiming a Conversation: The Ideal of the Educated Woman.

This list shows some of the frameworks that I expect will be important for talking about the core readings for the seminar in a rich and informed way: the stories of the writers; their works (our “primary” sources); the history of women at Oxford and in WWI (which means making sure I am reasonably well-prepared about general contexts); and theories and contexts on women and education, particularly university education. Each of the writers we’ll look at in detail will also raise more particular questions: with Sayers, for instance, the history of detective fiction will be of some relevance.

Doesn’t this sound like fun? That I’m excited about it makes me think it isn’t really research after all: research is work, right? Reading for pleasure isn’t work. And yet it can be, of course, and that’s the ideal of this kind of career–that it lets you do what you love, as well as you can, to make your living. That love itself can’t be the sole purpose of your reading makes sense in a professional context, but I’ve read an awful lot of scholarly writing that seems motivated by nothing more than the need to make certain moves in order to pass professional hurdles. In a previous post I quoted C. Q. Drummond saying “policies of forced publication never brought into being–nor could ever have brought into being–those critical books that have been to me most valuable.” Too much of the apparatus and discourse of research in the university seems to me to emphasize and reward everything but love of learning: it favors, as I said in that earlier post, “a narrow model of  output, a cloistered, specialized, self-referential kind of publishing supported, ideally, by as large an external grant as possible.” This project so far has been supported only by me, with some help from my university library. So it won’t ever get me mentioned in the Assistant Dean’s report (just as my publications in Open Letters had no place, literally, at the display of recent books and articles put on in my Faculty)–especially if its only output is a class, not an academic article or book. I haven’t ruled out that kind of result down the road, but I haven’t defined it as a plan yet either. In the meantime, I’m going to keep calling what I’m doing “research.”

When is Reading Research?

I’ve been thinking more about what we mean when we say “research.” In my post on the ‘duties of professors,’ I quote C. Q. Drummond’s remark,

If research in an Arts Faculty means humane learning, then we all hope our teachers are as much involved in research as they possibly can be. We want them to know better and better what they are talking about, so that they will have, and will continue to have, something intelligent and important to profess to their students. But if research means output or publication, as it so often does today, how do the students profit?

In his turn, Drummond quotes George Whalley, who suggests that the word "research" is altogether misleading or inappropriate when applied to humanistic inquiry: "The functions of research are specialized and limited; … the word research is not a suitable term for referring to the central initiative and purpose of sustained inquiry in 'the humanities.'" "Most professors in Arts Faculties," Drummond proposes, "would be better off reading more and publishing less." Of course, reading is research for most humanists–that is, it's the research process. But not all reading is research–or is it?

When we talk about “doing research,” I think we conventionally mean reading in service of a particular research project, that is, reading in pursuit of a foreseen research product, a published essay or book. Does that mean that reading for which we cannot already identify such an outcome is not research, then? Certainly it’s reading for which we can get no particular institutional support. For instance, if I want to get a research grant, it does me no good to justify my budget on the grounds that I am gathering materials on subjects about which I would simply like to know more than I do, or in which I have a developing interest but, as yet, no idea what, if any, payoff there will be in terms of publications. I also can’t get research support to develop new classes. I might be able to get a grant from our Center for Teaching and Learning–although peering at their page, the only grants I see them offering are for “faculty members who are seeking new and innovative ways to incorporate technology into their teaching practice” and “high impact initiatives that address student engagement activities/projects in the first year of their studies.” Too bad if I just want to follow my curiosity, acquire new expertise, and then gather students up to share it through reading and discussion.

My own new class on the 'Somerville Novelists' may, in fact, incorporate technology (brace yourselves, students–I'm thinking wikis again!), but it will have been developed from reading I did initially purely out of interest–and from books I bought with my own money. I don't mind about the money–though it's sometimes frustrating to realize how much the university relies on our willingness to do things "on our own," things without which the institution would be a much poorer place, and by that I don't mean poorer financially. (I bought the laptop I'm using with my own money too–the university doesn't provide "home" or portable computers, or at least our faculty doesn't, but imagine how academic work would grind to a halt if we could not work evenings and weekends, or could do so only by coming in to campus. But that's another issue…sort of.) I don't really draw strict lines between what I do for work and what I do for myself, precisely because being a professor is not just having a job but having a certain identity–filling (or aspiring to fill) a certain kind of role in the world. But especially since reading Drummond's essay I've been thinking about the way our particular understanding of "research," one that yokes together the process and the product, undervalues other kinds of reading. I do mind about that, because I think it artificially narrows both that job and that identity.

Is there really only one professionally worthwhile kind of reading? I’ve recently bought Rebecca West’s Black Lamb and Grey Falcon. I bought it out of interest: I’ve been exhilarated by learning about other early 20th-century women writers, and West is a major figure. I’m not sure where to place her: she’s not specifically in the Somerville crowd I’ve been looking into, and she’s not really a Modernist (I don’t think). I’m curious to figure out more about her. Reading The Return of the Soldier made me more curious. She is not–and Black Lamb and Grey Falcon is not–obviously continuous with any issues or genres I have an explicit “research” interest in. There are plenty of books in “my field” of Victorian literature that I haven’t read, and there are also plenty of books about Victorian literature that I haven’t read. I have some declared “research” projects that have not reached the official finishing point of publication in an academic journal (much less an academic monograph). Clearly, if I read (when I read) Black Lamb and Grey Falcon I am doing it only for myself: it’s not research. And yet reading it will almost certainly  help me have “something intelligent and important to profess to [my] students,” and that I don’t know exactly what else will come of it isn’t necessarily a bad thing. It isn’t even necessarily a bad thing that nothing concrete (beyond some blog posts) may ever come of it. But by some measures–the only ones that mean much, professionally, these days–it would be more productive for me to read the umpteenth specialized analysis of Middlemarch. Now that would be research.

“On the Duties of Professors”: Research vs. Scholarship

A friend and colleague who read and sympathized with my previous post passed along to me an essay by the late C. Q. Drummond, a long-time member of the Department of English at the University of Alberta. The essay is called "On the Duties of Professors," and it addresses many of the same issues as my post, particularly the competition for attention, resources, and rewards between research and teaching. As competitions go, all academics know, this is a distinctly unequal one these days: officially, university policies may stress the equal importance of both duties, but inadequacy or irresponsibility in teaching will never hold back someone's tenure or promotion if they have a "strong" publication record. And while the administrative infrastructure for research is large and powerful, topping out at the Vice Presidential level, if the two factors are really equally important, where, Drummond rightly asks, is the "Vice President (Teaching)"? (Here at Dalhousie, our office of Research Services has 22 staff, including a VP and an Associate VP. Our Center for Learning and Teaching has 10, with a Director and Associate Director at the top.) Not that Drummond wants to see an expansion of teaching-related bureaucracy–though I quite like his idea for how a VP (Teaching) would go about his or her business: this VP "would move through all the Faculties, visiting classes, hearing lectures, attending seminars, drinking coffee, joining oral examinations, talking into the night." Through qualitative engagement with teachers and students, this VP would become "another source of evidence, besides tabulated student assessments, for who teaches well and who poorly."

Drummond’s remarks are directed specifically at his own situation: at the time of writing (around 1984), he had recently been “penalize[d] for insufficient publication during a year in which [his Faculty] received extraordinary evidence of his merit as a teacher.” There’s a polemical thrust to them, as a result, but Drummond uses the occasion to place his own professional experience into its larger context: the increasing dominance of precisely the kind of quantitative measures of research “output” about which I was complaining yesterday. Actually, there is one difference that signals the 30-year gap between us: I didn’t notice any mention of research grants in his piece. I expect he would have objected still more strenuously to measuring scholarly success by level of external funding. He directs his criticism at “forced publication,” and at the reductive equation of publication with research or scholarship:

The Salaries and Promotions Committee certainly does not ask for wisdom; it does not ask for erudition or for scholarship; it does not ask for learning, or even for research; it asks for output, something to be measured or counted. . . . What good does such output do anyone? If research in an Arts Faculty means humane learning, then we all hope our teachers are as much involved in research as they possibly can be. We want them to know better and better what they are talking about, so that they will have, and will continue to have, something intelligent and important to profess to their students. But if research means output or publication, as it so often does today, how do the students profit? And how does the scholarly world profit from the forced production of ephemera? Most professors in Arts Faculties would be better off reading more and publishing less, and their students would be better off too, and so would the world of scholarship.

The very term “research” is, he argues, part of the problem.  He quotes George Whalley, who argued in an essay of his own that “research” suggests a goal-oriented activity, work carried out in pursuit of something in particular. “The functions of research,” Whalley writes, “are specialized and limited; … the word research is not a suitable term for referring to the central initiative and purpose of sustained inquiry in “the humanities” . . . “The humanities” is what “humanists” do; not only what they study, but how they study, and why . . . .”

Drawing on the Handbook published by the CAUT (invoked by his Dean in response to Drummond's appeal of the Committee's decision), Drummond himself brings in the vocabulary of knowledge "dissemination," which is once again very current in discussions of our aims:

Research should result in teaching, and might result in publication, teaching and publication being the most important means of dissemination of knowledge. We may teach those near at hand in our lectures, discussions, tutorials, apprenticeships, and supervised practical training, or we may teach those distant through our published papers, articles, essays, and books. But in either case we will have to have found out and shown something worth lecturing about, discussing, or writing down. And where will we have our greatest effect in disseminating what we have found out and know? . . . Dissemination has to do with sowing seed; what we hope when we disseminate is that the seed will take root and grow. . . . So much of the seed one sows in publication falls by the wayside and is devoured by birds, or falls on stony ground, or among thorns and yields no fruit. What the good teacher sows in his class or tutorial is far more likely to find the good ground, spring up, increase, and itself bring forth.

 He reiterates at intervals throughout the piece that he is not opposed to either research or publication, only to a mechanistic understanding of both, especially when it “drives out teaching”–which almost inevitably follows: institutional systems of measurement and incentives are set up not “to encourage the combination of knowing and teaching,” but to “encourage the production of printed pages,” and “because we live in a world in which time itself is scarce, the time taken for one must be taken from the other.” Again, it’s not that he wishes teaching, in its turn, to drive out research–teaching depends on research, broadly understood as inquiry.

It’s not, in my turn, that I wish to drive out either research or publication, both of which are essential (as Drummond too acknowledges) to learning, teaching, and knowledge dissemination. What bothers me is the incessant identification of "productive" scholarly activity with a narrow model of output, a cloistered, specialized, self-referential kind of publishing supported, ideally, by as large an external grant as possible. It's a shame that the faux-scientific model Drummond objects to is now so firmly entrenched–so deeply entangled in the values, practices, and especially the finances of our universities–that it seems unimaginable that we could ever undo it. Some might argue that we have won more by it than we have lost–that without playing the game that way, we would have forfeited any place in the contemporary academy. Others might reply that, yes, we are playing the game, but on terms by which we can only, ultimately, lose: however vast our research output, will we ever win either the public or the institutional respect enjoyed by the sciences? Hasn't our preoccupation with research actually isolated us and cost us public support? And in our effort to insist on the goal-oriented practicality of our fields, we may have flagged in our defense of their intrinsic value. Again, it's not that I think we should not do research, or publish what it teaches us–but it's a shame that the system is so rigged in favor of hurrying it along and rushing it into print–not to mention aiming it at a specific (and very narrow) audience. "I know for a fact," Drummond observes, "that policies of forced publication never brought into being–nor could ever have brought into being–those critical books that have been to me most valuable." That's certainly true of my reading as well. The narrow concept of research and the pressure to publish also, when made the primary measures of professional success, marginalize undergraduate teaching. (The emphasis in grantsmanship on teaching and funding graduate students, or "HQP" [Highly Qualified Personnel], is another whole area of trouble.) Finally, it seems to me paradoxically retrograde to be urging or following a model that measures productivity by grant size or output of peer-reviewed publications at a time when the entire landscape of scholarly communication is changing. We can circulate our ideas, enhance our and others' understanding, pursue our inquiries and disseminate our knowledge in more, and often cheaper, ways than ever before. As long as we are all using our time in service of the university's central mission–the advancement of knowledge, including through teaching–by the means best suited to the problems we think are most important and interesting to pursue, aren't we doing our duty as professors?

But as the Associate Vice President who spoke to my Faculty on Thursday said repeatedly, there aren’t “metrics” for those other ways of doing (or discussing) research or measuring its impact: they do not yield data that can be counted, measured, and easily compared across departments, faculties, and campuses. Apparently, that means we have to set them aside–or, at any rate, that the VP (Research) will do so, when reporting to us on our “performance.”

The essay I discuss here is in the volume In Defence of Adam: Essays on Bunyan, Milton, and Others by C. Q. Drummond, edited by John Baxter and Gordon Harvey (Edgeways Books, 2004).

This Week at Work: Reflections on Our Research Culture

Yesterday I received a reminder from the Mellon Foundation about a follow-up survey they are doing of people who did Ph.D.s supported by Mellon Fellowships.  I remember how exciting it was when I learned I had won one of these fellowships, which was both generous and prestigious. I had mixed success with my actual Ph.D. applications–indeed, I was rejected by many more schools than accepted me–and I’ve often thought that the crucial factor in my winning the Mellon was the interview. I was (am?) more charming in person than on paper–it’s something about my sense of humor, I think, which apparently doesn’t carry over much into my writing! In any case, winning a Mellon Fellowship made me a more attractive target for the schools that had offered me places: I ended up with the luxury of comparing complete five-year funding packages from a couple of excellent schools, and the even greater luxury of comparing these North American alternatives to using a Commonwealth Scholarship to go to the UK. In the end, I chose Cornell, starting in 1990 and finishing in 1995 with job offer in hand–job offers, in fact: while my job market success was also mixed and I got a lot of rejections, when I got close, I did pretty well (speaking of rejection, though, I’ll never forget the message telling me I was not offered the job I wanted most of all, which hit me like an emotional bomb when I read it in the dank basement computer lab where, in those olden days, I had to go to check my email–would it have been so hard to give me a phone call so I could have absorbed the blow in private?). Anyway, I chose Dalhousie, and (though I have made a few attempts over the years to move on) here I still am today.

The Mellon survey focused primarily on career paths and job satisfaction. Most of its questions were pretty easy (how many peer-reviewed articles did you publish before tenure? what kind of pre-tenure mentoring did you get? were there explicit expectations about the kind or quantity of publications you'd need for tenure?), but towards the end there were some more open-ended ones, and the very final one proved a real poser: If you had to do it all over again, they asked, would you do the same? Same degree, same school? Same degree, different school? Different degree? Or no Ph.D. at all?

Maybe this would not have been such a stumper of a question if they’d asked it on a different day, but yesterday was kind of a tough day for me at work. It’s not that I was busier than usual or overwhelmed with new tasks or dealing with confrontational students upset with their grades, or dead-ended on a writing project or behind in my class preparation. Rather, it was a day (one of many recent days) in which different priorities clashed in the department and I ended up feeling that more and more, we are steering by (or allowing ourselves to be steered by) the wrong values. There are a lot of moving parts behind the motions we have voted on recently, but the net effect is that a majority of the department has carried through an agenda by which we will reduce class offerings at all levels and increase class sizes at the undergraduate level, in order to bring our nominal teaching load down and thus clear more time for research during the academic term. I emphasize that last clause because we have dedicated research time already (the spring and summer terms, when we do not regularly teach undergraduate classes, as well as our sabbaticals); the argument was being made for the importance of making more time for research while teaching, and thus the new plan deliberately favors reducing our contact hours and prep time. We’ll remain individually responsible for the same number of students, so any time savings won’t come from reducing our grading. Now, I find marking assignments as tedious as the next prof. What I don’t find tedious or want less of is face time with my students. My hours in the classroom are almost the only hours during which I have no doubts about my answer to the Mellon Foundation’s question. It’s true that class prep can be relentless, and in the middle of my heavier teaching term, I’m too busy with it–too overwhelmed by it, in combination with the marking–to do anything ambitious regarding other research or writing projects. Not nothing at all, but nothing much. But class prep can also be  intellectually stimulating, and often is itself research, or feeds into ongoing research interests: I didn’t like the presumed opposition between teaching and research that dominated the arguments for the latest motion.

The problem is that this pitting of two of our essential tasks against each other is in large part a consequence of the pervasive research culture promulgated especially by administrators who talk about "productivity" and "output" in terms of grant dollars pursued and won, and of quantity (rather than quality and significance) of (conventionally peer-reviewed) publications. Tomorrow, for instance, we are invited to a "presentation" on "trends in FASS [Faculty of Arts and Social Sciences] research performance." Let's just say I will be pleasantly surprised if the emphasis is not squarely on those kinds of quantifiable measures. Everyone I've spoken to about it fully anticipates that the event has been set up as an occasion to chastise us for our failure to measure up, both to other faculties on campus and perhaps also to comparable faculties at other universities. But the conversation we should be having is about the adequacy of the measures, about the damage they do and the absurdities they create. We should be talking about whether it's really a good use of time for a humanities scholar to spend weeks, months even, on a grant proposal for a program with a success rate of below 25%; we should be talking about the culture of greed and hypocrisy and cynicism that has been created by the pressure to ask for more and more money whether you need it or not, because big grants bring prestige (and support graduate students–and that's another can of worms right there); about the flawed logic of trying to get grants because the university relies on its share of them to cover 'indirect costs.' We should be resisting the pressure to increase our research productivity according to such ill-fitting measures, and we should especially resist chipping away at our curriculum and at our undergraduate students' educational experience because we want to look like the kind of "productive researchers" the university seems exclusively to recognize and reward. I don't measure my "performance" as a scholar exclusively on my output of specialized peer-reviewed publications, or on my success at competing for external funding, and I don't think my university should either. Here too, there are a lot of moving parts, and the funding challenges universities face are not something I take lightly (or understand completely, given their intricacies). But that doesn't change the oddity of trying to twist and bend humanistic inquiry into something that looks like scientific research, and of treating us as failures precisely because we don't do expensive projects.

Let me be clear: I don’t think there’s no point in our doing our research. I don’t think it’s a waste of time; I do think that there are both intellectual and social pay-offs from our efforts to understand the world better by way of understanding its literature. But I do think we produce enough of it already. I don’t think Mark Bauerlein makes a particularly fair or coherent argument about its excesses, but I also don’t think we need to “protect” more time to produce more of it faster. I actually think we should slow down and produce less of it, especially in conventional forms. How much “output” is enough? It’s not the quantity that should matter. How much research time is enough? If we let go of the artificial urgency fueled by the kind of presentation I’m looking forward to tomorrow, I think we’d find we already have enough time.

Now, to be fair, we haven't exactly decimated our program, and we still have plenty of classes on the small side. But the pressure is undoubtedly upward. Big classes are routine elsewhere, I'm told, and a lower teaching load for full-time faculty is also the norm at other "research institutions." But is this a good thing? Is this the way we want our resources distributed? Well, judging by yesterday's voting, the answer for a lot of us is 'yes.' I understand why, but I feel that we're in pursuit of a model of success or excellence that I just don't believe in anymore. Sometimes sitting with my colleagues I feel like a nonbeliever in church! And it's a church in which two things are sacrosanct: our research, and our graduate program–in the interests of which we have made all of the recent changes to our overall curriculum.

And this is why the Mellon survey question was so hard to answer. How can I be sorry that I’ve been able to pursue this career, which in many ways suits me so well? How can I regret that I can dedicate my time to things I not only think are really important, but love? In what other job can you be paid to spend hours and hours a week concentrating on literature, and working with bright, eager students to nurture their love of reading and their interest in the kinds of questions it opens up? But the other values of the profession have troubled me from the start of my Ph.D. work, and the systems of incentives and rewards, and of prestige and reputation too, skew very far in one direction. How can I not feel I’m out of step and perhaps unsuited for the career I chose when I can’t commit myself wholeheartedly to two of its central pursuits?

If I had the choice, would I do the same again? Today, I'm not sure. But ask me again after my small group discussion of Great Expectations on Friday. I bet my answer then will be "of course!"

Mark Bauerlein’s “The Research Bust”

I have mixed but mostly negative feelings about Mark Bauerlein's recent piece in the Chronicle of Higher Education about literary research. Reporting on a study* he did for the Center for College Affordability and Productivity, Bauerlein argues that (most) literary research and publishing is not worth the investment of time and money that goes into it. His major evidence in support of this argument is that academic books and articles aren't cited very much. Interestingly, he doesn't argue that this is because they aren't any good, that they aren't worth doing because they contribute nothing to knowledge or understanding, or because they are opaque to the lay reader (popular forms of the attack on academic criticism, both of which are to be found in the long comments thread on his post). In fact, he opens with the example of an article that is "learned, wide-ranging, and conversant with scholarship on the poet and theoretical currents in literary studies. The argument is dense, the analysis acute, on its face a worthy illustration of academic study deserving broad notice and integration into subsequent research in the field." What he finds, however, after diligently entering the article's title into Google Scholar, is only "a handful of sentences of commentary on the original article by other scholars in the 10 years after its publication." There's a dramatic imbalance, as he sees it, between the input ("100-plus hours of hard work by a skilled academic, plus the money the university paid the professor to conduct the research") and the impact ("we can be sure of only a few scholars who incorporated it into their work").

There’s plenty to be said about Bauerlein’s methodology, and some of the comments on the piece are sharp about the reliability of citation indexes in general and Google Scholar in particular, as well as about his very reductive notion of impact, which doesn’t consider the impact of scholarship that is read but not formally cited, read as teaching preparation, and so on. That we can’t count something doesn’t make it irrelevant, and all practising scholars know from their own first-hand experience, I’m sure, that they read and are influenced in their thinking by a great deal of material that never makes it into their footnotes or bibliographies.

But suppose we grant Bauerlein a modified version of his quantitative point: suppose it’s true that much specialized research does not change the conversation the way its authors probably hope it will. In fact, my own experience to some extent supports this–not only of watching the fate of my own publications, but of burrowing through masses of work by other scholars that really does, as Bauerlein says, “overwhelm[] the capacity of individuals to absorb the annual output.” What puzzles and disturbs me is what Bauerlein believes follows from this ‘finding,’ which is that we ought to stop doing (or at least funding) literary scholarship (he doesn’t actually say this in so many words, and at some points seems to be making the more temperate suggestion that we simply scale back expectations and output). Along the way to this modest proposal he also makes some dubious further claims–or at least claims that would require a great deal more nuance and specificity to be satisfactory.

Further to his point about the overwhelming mountain of publications, for instance, he proposes that we have "reached a saturation point, the cascade of research having exhausted most of the subfields." But his examples are Melville and George Eliot, two of the most emphatically canonical authors imaginable. Yes, it's a near impossibility to read "all of the 80 items of scholarship that are published on George Eliot each year": I can't do it–I wouldn't want to do it. But the realities of specialization are also such that I don't need to do it: there isn't one subfield of George Eliot scholarship anymore but a multiplicity of potential angles on George Eliot, and the researcher's task is to navigate among the available material to find what's relevant. Yes, that's difficult, even frustrating at times, but it's hard to see how a continuing "cascade of research" is a sign of exhaustion: surely it's a sign that people are still finding questions to ask, and doing their best to answer them? In these cases it may be true that the results will matter only to "a microscopic audience of interested readers," but that's what happens in all highly specialized fields, not just in the humanities. The objection, then, can only be that for some reason literary subjects are not suited to specialization, which seems a suspect argument, one that harks back to a time when literary scholars were comfortably certain they knew what needed to be known and said about the books that really mattered, and those books and that knowledge could be neatly summed up and pronounced upon.

Having said this much, I should acknowledge what readers of this blog (certainly, any who have read it from its early days!) already know about me, which is that I have often complained about the pressure of specialization and the related trend towards metacriticism. I started blogging in part because of my own dissatisfaction with the norms of academic literary criticism. My early complaints about that got me in hot water with a commenter who charged me with "offering nourishment" to those who want "to eliminate literary studies from university curricula altogether." Though I know more now than I did then about these kinds of criticisms of and attacks on the academic humanities, my view continues to be that what we need is not to end, but to diversify the kinds of research and writing that institutions recognize and support as valuable uses of academic expertise. There needs to be room for 'knowledge dissemination' that serves non-specialist purposes and audiences, for instance. Some researchers have less inclination and talent for microspecialization, but excel at synthesis and exposition–I think that is actually where my own strengths lie. But ask any academic whether writing a textbook or a popularization (or a series of reviews and essays in a non-academic, non-peer-reviewed journal) "counts" the same way that the 5001st study of Melville will, when it comes time for hiring, tenure, or promotion, or just for earning the respect and support of your institution and administration…

To return to Bauerlein’s argument, the 5001st article on Melville may yet have its value to the small group of Melville specialists, provided it is, like the article he mentions in his opening, a high quality piece of professional scholarship. But it’s true that it can’t maximize its impact if it is not widely read, and the burden of reading 5000 other studies may be too much for most scholars. I think Bauerlein is right to suggest that quantitative measures for tenure and promotion are detrimental to individual scholars as well as to the profession as a whole.  (I interviewed for one position where I was told I would need two books or six articles for tenure. That’s absurd, not least because the fetishization of books creates what I described in an earlier post as “the corrupting pressure to inflate, not only our prose and our manuscripts, but our claims.”) The MLA has been making the argument for decentering the monograph for years now, but as Bauerlein points out, “nobody wants to take the first step in reducing the demand.” Between the crisis in academic publishing and the changing demands and expectations of scholars themselves, perhaps eventually the ‘publish or perish’ model will be reformed.

But let’s consider, again, the article Bauerlein opens with. The problem Bauerlein identifies is not that the author's time (and the university's resources) were wasted because the article never needed to be written in the first place, but because the article had little measurable impact–it didn't make a conspicuous difference to the field. Again, Bauerlein's claims are undermined by their lack of specificity: depending on how specialized the essay's argument is, perhaps nobody should expect it to transform the overall discussion about that particular canonical poet. The 10-year time frame also does not allow for the glacially slow pace of academic publishing. But let's, again, grant him a modified version of his premise, this time that the impact of the piece really was inappropriately (or unfortunately) light. Why isn't that a reason, not to stop producing learned, wide-ranging, acute analysis, but to change the mechanisms for circulating it? What's wrong with the processes, the apparatus, of our scholarship, if good ideas are not circulating as widely as they should? How can we open up the research and publishing process so that scholars engage each other in more direct, productive conversations? Why aren't the scholars working in this area actually talking to each other–not face to face, but through Twitter, blogs, listservs, or other kinds of scholarly networks? Is it that there are too many of them, each of them individually overwhelmed by the difficulty of trying to keep up with the output of scholarship from others? Or is it something about their work habits–keeping their heads down, trying to beat the tenure clock, looking only so far and no further? Is the sheer pressure to publish a lot a disincentive to more exhaustive research? What are the logistical impediments, in other words, to improving the circulation of ideas? Also, how can we change the way we work so that the value Bauerlein himself claims to recognize in an essay such as that one can be perceived by readers outside the academy as well? Why is the best response to a (perceived) oversupply of exemplary scholarship to denigrate or even halt the scholarship, rather than to champion it and ask that we and our institutions work to solve the problem of its reception and distribution?

From Bauerlein's perspective, the answer would appear to be that he thinks literary research has already run its course–that there's really nothing left of any significance for scholars to find out, at least not on behalf of the rest of us. But here his choice of George Eliot and Melville is misleading, if not disingenuous. I'm prepared to concede that the latest articles on George Eliot are pretty specialized. Indeed, I have nearly lost interest in reading them myself, and I don't want to be compelled to contribute to them. Curiosity-driven research can hardly, in all consistency, be made compulsory. But I don't think that means they have no value (why should my interests and preferences be the arbiter?), and I wouldn't want to propose (as Bauerlein certainly implies) that "saturation" means "completion"–what would it mean to be finished studying something? how could we ever be sure we have found out everything there is to know? "We can no longer pretend … that studies of Emily Dickinson are as needed today," Bauerlein proclaims, but how can he know this? There's some irony in his relying on simple quantity of research to decide there's nothing of interest or value left to be said. Still, perhaps in these cases scholars are working mostly for each other. Again, this happens in all fields once you reach a certain level of specialization.

But is every subfield as densely populated as the ones he cites? I've been looking up Winifred Holtby: there's very little scholarship about her novels, compared to the vast output on her Bloomsbury contemporaries. That absence of material is already provocative, to my mind: what has given one literary movement so much more critical value? In learning more about Holtby and Brittain, I feel that I am also learning (again) about the ways our scholarship is shaped by expectations and priorities that are not intrinsic to the literature but may, in fact, interfere with our understanding of its forms and ideas. Much was made at one time of the "end of history": does Bauerlein believe we have reached the end of literary history? Surely not. The landscape of literary studies is in constant flux, not just in the theoretical apparatus readers bring to primary texts, but in our selection of primary texts to look at in the first place. Imagine if we had concluded, as a profession, that Leavis's The Great Tradition was the last word on the British novel, or that the list of Oxford World's Classics as of, say, 1970, was definitive. In my own undergraduate course on the Victorian novel, in 1988, the term "sensation fiction" never came up–and neither did Elizabeth Gaskell. In our discussion of Jane Eyre, at no point did we consider whether British imperialism was a significant context. In my own academic lifetime and my own specialized field, that is, there have been enormous changes in just a couple of decades. It's easy to take the horizons of our own interests and knowledge as actual limits on what is worth asking or knowing, but surely the last 100 years of literary studies have shown us just how limiting and even dangerous that assumption can be. What a depressingly anti-intellectual proposition, that we have nothing more to learn or say, or that even if we do, it's not worth finding out. It's precisely because we can't foresee the significance of research that we need to preserve a space to do it open-mindedly, in a spirit of sheer intellectual curiosity. Up close, in the moment, it may be difficult to discern how or where the multitude of individual projects is moving us–but look back and see what a different place we are in now. Who, in 1900, or 1950, or even 1980, could have told us what would turn out to make the most difference?

Ah, but you see, all that research is expensive. As Bauerlein says, “we cannot devote our energies to projects of little consequence”–but note the presumed correlation between measurable impact and “consequence.” And, again, “impact” is a complex issue, one hardly amenable to simple metrics. What will those “undergraduate reading groups” Bauerlein wants us to lead (in lieu of going to conferences or archives) be talking about in 10 or 20 years, if specialized research grinds to a halt? Exactly the same things we would bring to them today, I suppose–but why would we want time to stand still in that way? Or, how will he decide who will carry on the research while the rest of us focus on mentoring undergraduates (not, presumably, to be scholars) and “pushing foreign languages in general-education requirements”? (How that last is the particular responsibility of English professors, I’m not clear.) Bauerlein argues,

 If a professor who makes $75,000 a year spends five years on a book on Charles Dickens (which sold 43 copies to individuals and 250 copies to libraries, the library copies averaging only two checkouts in the six years after its publication), the university paid $125,000 for its production. Certainly that money could have gone toward a more effective appreciation of that professor’s expertise and talent.

But that professor’s “expertise” is surely in part defined (and expanded) precisely by that long-term effort to know more about Dickens. Why is it a “more effective appreciation” of that professor to discourage (and perhaps even to prevent, by withholding time and resources) the research and publication of the book? (How can you judge the importance of the book’s ideas from the number of times it has been checked out, anyway? Haven’t you ever just sat in the stacks and read stuff?)

Bauerlein is right to challenge the reigning paradigm that values quantity over quality and specialization over synthesis and accessibility. But throughout the piece, there's an uneasy slippage between making the case for a more rational, deliberative research model and dismissing the entire enterprise wholesale. At one point he acknowledges that "research is an intellectual good," but then he shrugs it off as "ineffectual toil." He concedes that those who object to his position are not wrong "on principle"–but then rules them out of order on grounds of pragmatism. He agrees that research "makes better teachers and colleagues" but then characterizes it as the pursuit of an identity that is alluring because it "flatter[s] people that they have cutting-edge brilliance"–as if literary research is no more than egotistical posturing. (Perhaps he has been reading Eugenides?) He concludes by looking forward to the waning of "the research years of literary study." As many of the comments on his piece show, this kind of thing is music to the ears of those who see no value at all in what we do–his gestures towards moderation and reform are eclipsed by his larger narrative of excess and waste.

That Bauerlein’s column is clearly having a large impact (as measured by external links–including both Arts and Letters Daily and the Book Bench–as well as by the number of comments it has garnered) seems to me pretty good evidence that we need better ways to measure what a piece of writing is really worth.

*I haven’t read the entire study; my response is just to the presentation of its main ideas in the Chronicle article.