A wearying discussion in the media today attempts to answer a single question: “Should I go to graduate school?” Its yes-or-no structure accurately hints at what a reader can expect from this debate: polarized answers based mostly on each author’s own experience.
The situation is about as bad as it sounds; those who dropped out or failed to achieve a tenure-track job write blistering objections to the entirety of academia, while offering inane or impossible advice. One of the worst cases is Ron Rosenbaum’s cranky Slate article titled “Should I Go to Grad School? My Story.” It features a stock photo of a woman staring despairingly at a pile of open books and characterizes the “academic mindset” as “the sort of tin-eared arrogance that would consign to the dustbin on no good authority 35 eloquently tormented lines of self-reflection by one of the greatest characters in world literature [Hamlet]—a character defined by his penchant for introspection and self-reflection—on the basis of a half-baked theory.” Rosenbaum’s personal story of success outside of academe amounts to a series of lucky breaks, a classic case of being in the right place at the right time with the added benefit of a kind of innate talent for journalism. Still, it is almost as unsettling to advocate for graduate school in light of some of the facts about the job market, the length of graduate studies, and the quality of life common to graduate students.
It is thus that Joshua Rothman finds himself “impressed by the clarity of the opinion of these essays” in his New Yorker blog post, “The Impossible Decision.” Unlike the other authors of advice articles, he is willing to admit that he just doesn’t know whether you should go to grad school. “I’ve come to feel that giving good advice about graduate school is impossible,” he writes. “It’s like giving people advice about whether they should have children, or move to New York, or join the Army, or go to seminary.”
Rothman’s article makes the claim that the decision to go to grad school is less a career or education decision and more a philosophically fraught existential choice. He borrows George Eliot’s term, “a fragment of life,” to drive this home. The value of graduate school is particularly difficult to judge because of the sheer scope of our lives—both in breadth and depth—that it alters; decisions like these are always thorny to evaluate because their meaning is dependent on how our lives ultimately end and what the final consequences of our decisions are. In other words, they must be seen in the context of the entire life to be rightly understood. And yet, even at the end of our lives, these choices won’t necessarily be any more transparent; human beings’ legacies after death are just as difficult to judge as their major decisions in life.
Ultimately, the grad school decision is obscured in the “perfectly ordinary ways” that all of life is, to some extent. In Rothman’s words, “I’m aware that there are too many unknowns. There are too many ways in which a person can be disappointed or fulfilled. It’s too unclear what happiness is. It’s too uncertain how the study of art, literature, and ideas fits into it all… And, finally, life is too variable, and subject to too many influences.” Perhaps the only defining difference is that PhD programs allow us to choose, in a single decision, a way of life that promises to span a large chunk of our limited number of years, require much of our energy and ability, and radically transform our daily lived experience. Most other life paths require a series of decisions in which the next decision may be completely obscured or unknown, and so you could find yourself accidentally, but nonetheless profoundly, fulfilled by your career in journalism without ever having consciously made one single, fraught decision to become a journalist. Graduate school, however, is easy to frame as a single choice (in some cases, literally the press of a button that says “I accept”) and thus seems to be a qualitatively different kind of decision altogether, “an existential quandary” rather than a mere “career conundrum.”
Looking at it another way, though, it is precisely because of its particularly existential nature that the decision to go to grad school is not a single, all-encompassing choice, but a series of choices that happen almost moment to moment for the duration of your life in grad school. Accepting the offer of admission to a PhD program does not doom you, nor is it some kind of once-and-for-all salvific transformation that will automatically lead you on the path toward tenure. Grad school is a life that must be lived, and thus continually chosen, in a way, by a host of small, mundane gestures: picking up a book to read, submitting a paper for a conference, typing the next word of your dissertation. You can continue to do these things, or you can simply stop and pursue some other way of life. This perspective may not make the decision between graduate school and some other option any clearer in individual cases, but it does save the decision from becoming unduly saturated with existential dread by the media, friends, and parents.
Should you go to grad school? Weigh the options, seek advice from trusted professors and friends, work out the things that will imbue your life with meaning and bring you satisfaction, and then decide for yourself. But don’t make the decision even more difficult than it already is; avoid things like the catalogue of existential horror that is The Chronicle of Higher Education’s advice section.
Elaine Scarry’s fascinating article in the Boston Review last summer, which draws parallels between literature and the establishment of social structures intended to “diminish acts of injuring,” is worth reading. Among a sea of articles that trendily and superficially question the good of literary scholarship, or even literature itself, “Poetry Changed the World” is a smart summary of Scarry’s ideas about the ethics of reading.
She starts with Steven Pinker’s The Better Angels of Our Nature, which asserts the diminution of certain types of violence over the past fifty centuries and attempts to explain the part reading played. Scarry is rightly suspicious of Pinker’s general claim, but is persuaded by “his documentation of the many specific forms of cruelty that have subsided.” The telling link between this ebb of violent acts and literature is the simultaneity in the seventeenth and eighteenth centuries of an increase in book production (as well as literacy rates) and the abatement of “an array of brutal acts—executing accused witches, imprisoning debtors, torturing animals, torturing humans, inflicting the death penalty, [and] enslaving fellow human beings.”
But Scarry is convinced that the link can be established even earlier, with ancient and medieval disputation poetry. “In their own time,” she says, “these poems helped to give rise to new civic institutions in which disputation was carried out obsessively.” The twelfth through fifteenth centuries show a simultaneous burgeoning of poetic disputation and of public institutions like universities, the Inns of Court, and Parliament, akin to the later pairing of the rise of the novel with the Humanitarian Revolution.
Scarry is careful to point out that these connections only imply correlation and that the direction of influence is more complex and probably reciprocal. But these points serve to support a more general one: that literature does, after all, have the capacity to change us in very particular ways by providing a space in which we can practice experiencing the counterfactual, the points of view that our own convictions would keep us from fully investigating otherwise. The power of what are, for Scarry, some of literature’s most useful tools—its beauty and its promotion of empathy—seems to echo that espoused in Eric Wilson’s recent article in The Chronicle of Higher Education. Literature promotes an “ethical imagination,” he claims, citing Keats’s “negative capability” as an example. And that ethical imagination challenges preconceived assumptions about the world; “The purpose of suspending stereotypes is to make one more sensitive to the irreducible intricacies of the real, and so be better able to forge informed judgments about what is right and wrong.” In Scarry’s parlance: “Imagine Pamela, and her right to be free of injury will become self-evident to you.” This translates to the law—in her expression of the formula used by Hunt and Pinker—as, “We are not interested in your imaginative abilities or disabilities; whether or not you can imagine Pamela, you are prohibited from injuring her.”
This highlights something crucial about the importance of literary studies, something that seems to be missing from much of the discourse around their decline. Imaginative meditation through a specific text is necessarily, at least in part, a consideration of the particular over the general. Yes, we can and should and do extrapolate from Pamela to ideas about women and domesticity or the novel in general. But we are all the while still talking about Pamela in particular. For Wilson, this takes the form of an encounter with the uncanny and is central to ethical considerations of reality, pedagogy, and the strategies of thinking that should be employed as part of good scholarship. “I continue to hope,” he concludes his article, “that during a Monday-morning class, when the weather and the mood are right, I can chant Keats’s reverie of the ‘murmurous haunt of flies on summer eves’ and a drowsy student will jerk awake. Green-blue bugs will buzz eerily in his head. Suddenly nothing is right. Something has happened.” Elaine Scarry’s article convincingly argues that the “something” can add up to “shifts in ethical behavior” and to the “sea change across wide populations of readers” that they require.
A friend drew to our attention a book published last year: The Colorful Conservative: American Conversations with the Ancients from Wheatley to Whitman, by R. O. P. López. Professor Justice comments:
I’m grateful that I got to read this serious and challenging book, which presents a core of strong, persuasive critical discussions of earlier American literature wrapped in layers of provocative, deeply felt polemic. At its heart there are chapters on Wheatley, Poe, Thoreau, Whitman, and William Wells Brown. American literature is not my field, and so I can’t say how López’s readings fit into it. But they are stirring readings, compellingly argued but also compellingly imagined: the authors that emerge look quite unlike their anthology portraits. Thoreau gains from the Iliad an experience of intensity and the imperative of mastering it; and the civil disobedience he practiced and advanced celebrates not impulse but discipline. The Aeneid is for Whitman first the occasion of a glib and spurious superiority, then a challenge he cannot shake; but then he goes beyond both Virgil’s poetry and his own, finding in the nursing of the wounded soldiers and the verse that emerges from it an understanding of a fully engaged comradeship that includes all the registers of erotic, amiable, idealistic, and democratic aspiration.
Outside this core of the book is an argument about how to understand the relations through time of readers to authors and authors to predecessors. A judicious chapter takes us through theorists of literary-historical relation (Eliot, Bloom, Foucault, Henry Louis Gates). Moving through this progression, almost a century’s worth, we find the historical aspect of the historical relation treated with increasing depth and justness, but the relational aspect of it growing impoverished. To these he adds his own model, conversation. It’s a deliberately modest term he chooses, and its implication is immediately evident: thinking of conversation with a tradition lets you imagine both listening to it, even being changed by it, and talking back to it. He is able to cite Anzaldúa and Bhabha as models for his borderland, though his is one that borders on different times. In this evocative model, a conversation is somewhere you can go to be changed.
And to be pulled from what is easy, obvious, narrow in the moment. For the outer layer of this book is, as its title suggests, an argument about, and for, a style of conservatism he calls colorful; not “conservativism of color,” as if his were some exotic species, but a thing broader in some directions and more specific in others. “Colorful” conservatives are those who value tradition as a specific against convention, those who appeal to the “other customs” of “other times” in order to get some distance on, some freedom from and power over, oppressively right-thinking and non-thinking attitudes of their own moment. He contrasts these with liberals of most stripes on the one hand, who value convention as a specific against tradition, and Burkean conservatives, who value tradition as a sort of long-term convention. His critiques of both these energies are trenchant, though there is something more Burkean than he would care to admit in his own model; for, like Burke, López seems to offer few substantive grounds for the value (the liberating value in his case) of particular traditions.
López’s argument about the politics of academic literary studies is original. It is also deliberately provoking, beginning with an unapologetic screed that quotes those conservative journalists most notorious among and most obnoxious to the academic world of which he is himself part—Glenn Beck, Ann Coulter, Rush Limbaugh. It is clearly meant to stagger readers, and it staggered me a bit. I can’t say I like it, but I take the implicit point: when a professor at a major research university suggested a perfectly middlebrow MSNBC commentator might make a good university president, the reaction was polite; had he suggested a Fox News commentator, it would have been contemptuous. At the other pole of the book, the final chapter concludes with an unapologetically admiring and unconventional reading of Brokeback Mountain, which seems designed to get on the nerves of those who love the movie and those who hate it. It’s a fight in which I have no dog, but I thought the reading superb.
Here’s the thing. Political and cultural discussion in the humanities, and still more the shared attitudes that underlie them, are generally monotone, thoughtless, and banal. When conservatives complain about that, they are right. But when they complain that the research and thinking done within their disciplines is just a con, a self-rewarding game, a fog of empty jargon, they are wrong. The problem is that too few of them have been willing to engage intelligently with poststructuralism, ethnic literary studies, queer studies, and to sort out what is valuable and rational in them from what is not. López has had the wit and courage to do just that. We could use more of it.
Helen Vendler’s recent article in Harvard magazine poses a lot of compelling questions. “The critical question for us is not whether we are admitting a large number of future doctors and scientists and lawyers and businessmen (even future philanthropists): we are. The question is whether we can attract as many as possible of the future Emersons and Dickinsons. How would we identify them? What should we ask them in interviews? How would we make them want to come to us?” It is refreshing to see a poetry critic—someone who clearly values the arts and humanities—put her money where her mouth is and ask the practical and thorny questions of how to reflect that value in the actual constitution of the university at the most basic level: admission standards.
No less refreshing is the way she lets these practical considerations lead to even more difficult questions about how these values disclose our convictions about the good of a human life. “Can we preach the doctrine of excellence in an art; the doctrine of intellectual absorption in a single field of study; even the doctrine of unsociability; even the doctrine of indifference to money?…Can we preach a doctrine of vocation in lieu of the doctrine of competitiveness and worldly achievement?” These are the kinds of questions that are easy to dismiss as naive or idealistic; Austin Allen, while agreeing with Vendler, wonders if “in a country that now graduates over 20,000 MFAs per decade in creative writing alone, cynics might question how many more young people need universities encouraging exclusive cultivation of their delicate creative flowers.”
But the central focus of these considerations perhaps shouldn’t be whether we are doing justice to creative genius. That pertains to the larger concern about how universities can and should prepare students for what comes after. Vendler says that it is important for a university like Harvard to admit artists and students in the humanities because they have an ultimately greater “cultural resonance.” By this she means the wider culture and the immense cultural influences of major figures like Homer, Aeschylus, Gandhi, Beethoven, Dickinson, Lincoln, Picasso, Wittgenstein, and Woolf. But how many of the thousands of students admitted for such prowess will mature into such icons? Vendler is to be lauded for “explod[ing] the ideal of the ‘well-rounded’ student,” but isn’t the image of the solitary, quiet artistic genius a conventional ideal as well? It is tempting to juggle mythical categories like this into some kind of impossible cost-benefit analysis (“if we admit this kind of student, then we can fulfill the needs of society in the following ways”). But universities exist primarily to educate their students, not to churn out individuals useful to society (the latter is a product of the former; at least it had better be). Encountering other ways of doing and thinking the world—creative or rational or traditional or what have you—is an essential part of a university education. Is this reason enough to challenge and question the priorities set by admissions standards?
Nicholas Dames’ article about the generation exposed to Theory in the institutionalized context of the university classroom is not without some surprising parts. But at the heart of his essay is a consideration of the effects of Theory (he gives it an honorary capital, and so will we) on a life, which takes a tired criticism of the University—that it no longer adequately prepares its students for the “real world”—and makes it something worthy of serious consideration.
The most striking instance of this comes as an implicit question: Can a student take his education too seriously? Dames’ materials are six novels written by authors who grew up with Theory and semiotics as a dominating part of their university education, and so the answer for their doomed “bookish and diffident” characters is of course “yes.” Its effects can be comic and deleterious: a semiotics student in The Marriage Plot proclaims “I’m finding it hard to introduce myself, actually, because the whole idea of social introductions is so problematized.”
But the comedy is born out of serious considerations—“What kind of a person does Theory make? What did it once mean to have read theorists? What does it mean now? How does Theory help you hold a job? Deal with lovers, children, bosses, and parents? Decide between the restricted alternatives of adulthood?”—the kind of questions that Theory “could only recognize as regressive or naive.” The problem with Theory exhibited by these novels is that it is not of this world. It is too forward-looking, almost apocalyptic in the way that it prepares students, not for the world they will actually enter, but for “the different world to come: a world of genuine difference genuinely encountered… a world that would be more transparent and, as a result, less painful.” Dames calls it “utopian,” “a training in interpreting the world as a path toward changing it.” And so it is by “taking their educations so seriously” that these novels’ characters “disabled themselves from the supposed rewards of education.”
But could we not just as easily say that these characters are simply failed readers? It seems that it is not Theory, not a too-serious investment in what they were taught, that has failed these “erudite misfits,” but a misunderstanding of what interpretation of this sort is for. A good reading is productive; it reveals the ways in which meaning can be negotiated from a text. And this is precisely where Theory’s forward-looking character is most profitable: it points to the manifold ways in which texts, people, and the world resist attempts at interpretation. If these novels’ characters are looking for a world that is less painful as a result of its transparency, of course they are disappointed and ill-equipped to encounter it. But if we look instead toward the endless generative means by which we can continue to encounter others and the world, and even derive pleasure or fulfillment from seeing through a glass darkly, there is no reason why we cannot also grasp these meanings and live a life that is both resolute in seeing the world as it is and intent on exploring its possibilities.
Since this past July, there’s been a lot of talk about the struggle for work-life balance. Anne-Marie Slaughter’s Atlantic article on whether American women can “have it all” brought a flood of responses from various perspectives across the Internet—and the discussion expanded from the lives of American women to “having it all” as such. But all responses followed Slaughter’s model of interpreting the dilemma of work-life balance as an issue of time—as a competition between the personal and the professional in an economy where time is the only resource we feel we can control.
Between public career advancement and private household commitments, other things must be negotiated. Professional demands edge in on those goods required for personal fulfillment. To define any serious pursuit as a machine that only requires the fuel of time is to belittle the very place of work in a flourishing human life; to define the intellectual life as such a machine belittles it all the more, and demands that we deceive ourselves about what that life really requires. Similarly, the development of one’s private life, of personal relationships and moral commitments, requires more than merely a space in a schedule. Both demand the whole self.
But the model of making time for things leaves no “whole self” to devote to anything wholly demanding and wholly worthwhile; it divides the self into separate functions and assigns each its allotted time. Each scheduled sphere is guided by its own norms and codes for success, for the virtues esteemed in the office or the lab are not necessarily prioritized in the home or the classroom. Our participation in each scheduled activity is therefore often justified by only the operational codes intrinsic to it. Our individual functions and actions can become alienated experiences, and our lives can become a series of demarcated role-playing, with the dilemma of time management never solved, and no whole self to devote to anything.
Only in the pursuit of something larger, something that encompasses each of our smaller roles, can anyone actually commit the whole self. A unified life comes only when each partitioned function is deemed worthwhile not only by the code of its internal logic, but by the logic of the whole. Time can never be managed if there is no system of evaluation external to those within our time-consuming tasks—but when each smaller goal is subjected to a larger good, competing demands on our personal resources are no longer incommensurable and therefore no longer really compete. We can reasonably negotiate what our tasks require when our concern is not the tasks themselves, but the larger pursuit they altogether constitute. Each function becomes intelligible in the context of the others, and the creativity and vision demanded by each are shared across the different sectors of our lives, lavishly enriched.
Exemplifying this, an article from UC Berkeley’s Greater Good Science Center reports that feelings of awe can create feelings of having more time. “Awe-eliciting experiences might offer one effective solution to the feelings of time starvation that plague so many people in modern life,” the study’s researchers say. But awe is not merely pain-relief for the sound of a ticking clock—it may be an index of a life that has a sense of the tremendous—of a larger vision that shakes everything loose (as the word implies) and scales down all lesser tasks. Lives without hermetically sealed divisions afford intellectual connections and pragmatic correspondences, drawing new clarity and therefore deeper unity. Knowing how work, relationships, beliefs, and commitments all fit into a bigger picture might be the best resource in the struggle for work-life balance.