Academia Shrugged? or, Why You Should Just Quit Your Ph.D. Now

July 27, 2011

Grad school and academia as a potential career have taken a real beating in the media lately. It seems the world has finally woken up and smelled the (three-day-old, re-used) coffee that is all grad students can afford. The bottom line is that humanities students should run, not walk, away from a life of debt and uncertainty, and a “dream job” that will never quite pan out.

In an article for Slate.com, William Pannapacker, himself a professor at a liberal arts college, proposes a few steps to fix graduate school in the humanities. Some of what he advises – such as providing networking opportunities and internships, and recognizing that it may be better to keep one’s passion for obscure fields of study as a hobby – is similar to what I proposed in my own post on a Three-Pronged Approach to Saving Humanities Departments.

But I was really intrigued by his addition of a final, “nuclear option”: quit. In his words:

Just walk away. Do not let your irrational love for the humanities make you vulnerable to ongoing exploitation. Do not remain a captive to dubious promises about future rewards. Cut your losses, now. Accumulate work experiences and contacts that will enable you to support yourself, have health coverage, and something like a normal life. Even the more privileged students I mentioned earlier—and the ones who are not seeking traditional employment—could do a lot of good by refusing to support the current academic labor system. It exists because so many of us who care about the humanities and higher education in a sincere, idealistic way have been passively complicit with the destruction of both. You don’t have to return to school this fall, but the academic labor system depends on it.

Wow. A group of highly intelligent, capable individuals upon whom “the system” depends, but who are scorned by it, decide one day to “go on strike” in the hopes of seeing said system implode and leave behind a twitching lump of ingrates who suddenly beg them to return and save the day.

This sounds familiar. Where did I read about that recently? Oh, yes – in Atlas Shrugged, Ayn Rand’s infamous treatise on objectivism. In it, the heroes grapple with their own superior morality in a world of incompetent ingrates and eventually come to realize they are complicit in the very system that condemns them for their unchecked ambition and capitalistic greed. (Of course, unchecked ambition and capitalistic greed are positive attributes in Rand’s heroes.) So, one by one, they go on strike by withdrawing to a hidden, magical place where labour is justly rewarded, and nobody ever gives anybody anything they haven’t worked for, while they watch the world crumbling around them without their input.

(There, I’ve saved you from slogging through 1000+ pages of libertarian/objectivist bluster that would probably outrage and offend anyone who believes in silly things like equality of opportunity and altruism.)

But putting aside the absurd pairing of tweed-jacketed academics and Wall Street “fat cats,” let’s think a minute about the implications of this Randian proposition for academics. Would it work? As Pannapacker points out, there is always the possibility of having a day job with health care and indulging in one’s “irrational love for the humanities” as a hobby. As he says, “more and more people are learning [that] universities do not have a monopoly on the ‘life of the mind.’”

Maybe. But I think universities should at least have a competitive edge on it, or else they stand to become exactly what vocationalists want them to be: training for jobs that exist today and have clear mandates and required preparation. This would certainly be the case if all the most brilliant liberal arts minds suddenly decided to be brilliant elsewhere in the world.

Because if not universities, then where? Will we have to start paying people to hear about their ideas? Will we have the same level of interaction if everyone is decentralized and off thinking interesting thoughts in different industries? How will we prepare people to think innovatively, and prepare them for the jobs of tomorrow that don’t have clear mandates or preparatory courses?

The whole point of a university is that it is a hub, a place a little apart from the rest of the world (yes, perhaps even an ivory tower) where people can reflect on the “big questions” that may not be directly related to the pressing fads of the moment. What happens when this education becomes more decentralized? Can we trust that individuals will independently seek out all of the different perspectives they’re likely to get in an undergraduate humanities course?

I reflect on what Stanley Fish wrote in the New York Times a few weeks ago: basically, that academics, and by extension universities, should just abandon trying to be relevant and focus instead on academic inquiry for its own sake. I think that would be unwise. Knowledge for the sake of knowledge is a great thing, and we need independent places (i.e. not ones funded by corporations) that will ensure that people continue to seek it. But relevance is important too, and while it should not be the only goal, it needs to be a goal.

In short, the current academic system needs to be refined from within, not by walking away and shrugging off all its problems. (Besides, academic types don’t have a magical land in Colorado where we can retreat and live out our ideals of hard work and free love and no taxes.) Professors and administrators could start by being honest about the reality of life as a grad student, i.e. mostly unemployed and without the health coverage Pannapacker so enjoys having. And they should stop denigrating non-academic career choices, framing them instead as a continuation of the path of intelligent, creative thinking, not a deviation from it.

And then we – all of us – can start changing the way we view “amateur” non-academics outside the system, and invite them in. Let’s not exclude people with insider jargon and inaccessible writing. Let’s make a place inside the ivory tower for people who think about the “big questions” of life outside of it, so we can examine the practical implications of our ideas. Let’s show the vocationalists that “academic” is not a dirty word but one that can bring significant insight and positive change to the world outside universities, as well as in its libraries.

Let’s ask people to help us hold up the world, instead of just dropping it.


Tiger Moms, Drop-Outs, and Credentialism (oh my!)

May 31, 2011

Now here’s a controversial news item: Peter Thiel, famous for having founded PayPal and investing early in Facebook, and now a billionaire, is paying young entrepreneurs to drop out of school. His Thiel Foundation has just named 24 “fellows” under 20, each of whom receives $100 000 and mentoring opportunities with high-powered and successful entrepreneurs in order to launch a profitable initiative. They are all drop-outs (of college or high school), a requirement for the prize.

His logic is that many would-be students of elite schools would be better off going right out into the world to generate “significant wealth,” rather than learning about the theories behind what others have said and done and invented. And while I would never blindly advocate that anyone drop out of school, given the prevailing societal opinion about education and the very real value of exposure to new ways of thinking, his initiative is perhaps a useful antidote to those who do blindly advocate more schooling as the solution to all of society’s ills. Education is a wonderful thing – I would even say that it is the key to solving many of the world’s great scourges, such as intolerance, authoritarianism, and the solid grip of misinformation. In a way, Thiel is saying that it is the ideas, and the work required to make them successful, that count, not the name of one’s alma mater (or even the existence of one).

Credentialism – in the form of collecting degrees and designations from acclaimed institutions – has become a powerful shorthand for measuring societal status. It comes iron-clad in an aura of meritocracy, because in theory only the best are able, and would choose, to obtain a degree or three at the world’s (and especially America’s) finest educational institutions. But, as with all shorthands, a focus on credentials alone as a stand-in for intellectual or societal worth is insufficient and at times unfair.

The education situation in many developed countries is dire. Every year, millions of the world’s best students vie for a place in one of the mere hundreds of top institutions, trying to best each other in any way possible. A recent issue of the Atlantic explores the phenomenon in depth as part of an extended look at the “tiger mom” debate brought about by the now-infamous book by Amy Chua (there is a good write-up on it here, in case you live under a rock and missed it). Much of the furore over the book was caused by the implicit challenge of Chua’s winner-take-all style of parenting. In refusing to give in to her daughters’ tears, frustration, exhaustion, and in some cases disturbing behaviour (biting the piano?), Chua claims she paved the way to their happiness by allowing them to know what they were capable of. More recently, her eldest daughter’s acceptance to Harvard has renewed the wave of anxious hand-wringing by “Western” parents who think they aren’t pushing their children hard enough to get into good schools and assure their futures.

But are the hours of heartache, rebellions and tooth marks on family instruments worth it? Is pushing a child to his or her limit, encouraging activities like building orphanages in Africa, chairing the local youth orchestra, and volunteering as an assistant surgeon on weekends in order to secure a spot at the Ivies the key to lifelong success and happiness? Is it even likely to yield a coveted admission letter? Not really, according to what Caitlin Flanagan writes in response to Chua’s book:

Elite-college admissions offices drive professional-class parents crazy because in many respects they do not operate as meritocracies. Consider, for example, those students admitted via one of the two programs that stand as strange mirror opposites: those that give preferential treatment to the sons and daughters of alumni, and those that extend it to the children of underrepresented minorities. The latter practice suggests that generations of injustice and prejudice can be redressed by admission to a fancy college, the former that generations of inclusion and privilege demand their own special prize; the two philosophies would seem to cancel one another out, but each has its place in the larger system.

In fact, when you account for all of the “hooked” seats in the freshman class—spaces specifically set aside for kids who have some kind of recruited talent or family connection or who come from an underrepresented minority group—you accomplish, at the most selective colleges, two things: you fill a large percentage of the class (some researchers believe the figure is as high as 60 percent), and you do so with kids whose average grades and scores are significantly lower than your ideal. Now it’s time to swing a meritocracy into place; the caliber of the class is at stake. All of the unhooked students are now going to be thrown into a hypercompetitive pool, the likes of which the layperson can’t imagine. As daunting as the median grades and test scores of the typical Princeton admittee may appear, those statistics have taken into account all of the legacies and volleyball players and rich people’s children who pushed the averages down.

Sounds terrifying, doesn’t it? And what’s more, there is a growing pile of literature that argues it isn’t worth it. These days, people go to university for four main reasons:

  1. To attain practical/vocational knowledge that will tangibly help them get a job.
  2. To attain theoretical or other knowledge that will expand their minds in an area of interest.
  3. To please their parents/society/employers who consider a post-secondary education to be a mandatory status symbol. The better the reputation of the school, the better the status symbol.
  4. To make connections with peers and professors.

The main benefits of a “top-tier” education, as opposed to one from a large public American or Canadian school, lie in the last two: status and connections. Sharing a room with a future Mark Zuckerberg or getting to vacation on the yachts of the rich and famous must be worth the price of admission, right?

William D. Cohan thinks not, writing in the New York Times that getting into an Ivy League school is a “Pyrrhic victory,” with the outcome of monstrous student debt (from fees of $50 000+ per year) and only slightly better-than-average job prospects in a glum economy. Many other American schools have astronomical fees, and even relatively cheap Canadian educations place graduates in debt. There is also, as I referred to above, the non-monetary cost of an education at a prestigious school. Whole families are swept up in the hyper-competitive race to the top, where lazy summer vacations and boredom and play are replaced with summer volunteer trips to Kenya, SAT prep courses, and the endless repetition of mastering a musical instrument.

But the saddest part is that it may all be for naught. A sister article in the same Atlantic issue as the above quotation charts the potential life course of many products of tiger-led households:

Harangued by my own Tiger Dad, I grew up believing in crack math skills and followed—at least initially—a stereotypical Chinese path of acing my tests; getting into the world’s most prestigious science university, Caltech (early admission, no less); majoring in the hardest, most rarefied subject, physics … And then what? Almost 50 years old now, some 30 years after graduation, I look at my Caltech classmates and conclude that math whizzes do not take over the world. The true geniuses—the artists of the scientific world—may be unlocking the mysteries of the universe, but the run-of-the-mill really smart overachievers like me? They’re likely to end up in high-class drone work, perfecting new types of crossword-puzzle-oriented screen savers or perhaps (really) tweaking the computer system that controls the flow in beer guns at Applebee’s. As we know, in this tundra-like new economy, even medical degrees, and especially law degrees, may translate into $250,000 of unrecoverable higher-education debt and no job prospects, despite any amount of hard work and discipline.

The reality, of course, is that there is life after graduation, and I imagine that a lot of students and parents who sacrifice their lives perfecting their viola performance and polishing their resumes will get there and wonder what the hell happened — and what to do next. The same is true for all graduates who feel lost after school, and who may have underplayed their social and entrepreneurial skills in favour of tailoring their lives to academic pursuits that will not help them once they have their degrees. And so I support Peter Thiel’s initiative because it addresses the fact that it takes more than a few letters from any school to achieve success in life.


A Three-Pronged Approach to Saving Humanities Departments

October 29, 2010

So you graduated with a humanities degree. Well, what are you going to do with that?

I really, really hate this question. There are only 3 answers that make sense to the people who ask it:

  1. I’m going to teachers college/law school.
  2. I’m going to grad school (be careful – this one only staves off the questions for another few years and then they come back louder and more persistently than ever).
  3. I have no idea. I just wasted the last four years of my life. Yep, I’m unemployed, bitter, and poor.

For many humanities majors, the trouble with life is that it doesn’t end with university – unless you seek to become a professor in one for the rest of your life, which is a whole different story that I’m not going to talk about today. In reality, most humanities majors will not apply their deep knowledge of the sea battles of 1812 or the role of family in Hegel’s Philosophy of Right in their day-to-day jobs. Many do not even want to. They aren’t able to respond to the many, many people who ask the question above without feeling as though they have to either defend their choice of degree because it makes them “well rounded” and “interesting” or denounce it as useless in helping them find employment.

So a lot of commentators think this means humanities programs are useless, and call for eliminating French departments or combining Comparative Literature departments with a whole host of others to save on administration costs. I’m not going to get into why this is a bad thing; I think that’s fairly obvious and, besides, I write about it all the time. Instead, I’m going to advance a theory about how to fix it.



The Rise and Fall of the Grand Narrative

August 12, 2010

Those of you who read my blog regularly will know how frequently I lament the increasing specificity required of academic writing, and how it threatens to render the profession obsolete due to lack of readership or general interest in the subject matter. My thoughts were echoed in a recent book review which, in discussing the life of Hugh Trevor-Roper, a prominent historian, remarked that he could never be the great academic he wanted to be – an E.P. Thompson, or a Thomas Macaulay, or an Edward Gibbon – because of two key factors. The first was the passing of the “grand narrative” approach to history, which is now seen as unprofessional, or worse, imperialistic in the Marxist teleological sense. The second was a result of his being British, and, as the article notes, “By Trevor-Roper’s day … Britain had become too insignificant to provide the subject of a grand narrative of progress in the style of Macaulay.” The only nation that could conceivably produce historians claiming to write the story of its own empire today would be the United States, and those who do are usually right-wing polemicists who garner little respect in academic circles.

It’s true that the grand narrative has its drawbacks, as I’ve written before. Huge swaths of history that don’t fit in can be glossed over or ignored entirely in order to weave a tight story. And the grand narrative remains a common way for writers to (consciously or otherwise) impose a single, usually Western, trajectory upon world events that can be interpreted as modern intellectual imperialism. But it remains an anchoring lens through which historical events can be contextualized and patterns examined, and is usually more interesting than a narrow study. So what has caused the violent turn away from the grand narrative?  Is it justified?



What is History Writing Now?

April 27, 2010

People reach post historical all the time by searching for odd little historical, philosophical and political science-related phrases. Given the obscure nature of many of these terms to those not deep within postcolonial or imperial studies, I assume they’re doing research for some paper or project. I wonder if they feel they can trust what they read. Am I a reliable source? Are my ideas sound? Can one cite a blog, or is this an even bigger research no-no than citing Wikipedia?

If it is, why? Consider this blogger: I have a graduate history degree from a good school, which, for many, constitutes formal “training” in the discipline.  I know how to cite sources and (hopefully) construct a logical and well-supported argument. Does this make me “qualified” to comment on things? Does being qualified today require being intelligent, well-trained, and peer-reviewed (in the traditional sense), or does it come from an even more democratic approvals process based on sheer number of readers? Would having six million hits to my blog make me a “qualified” opinion leader? Or do I need to have published six books through a university press that only 6 000 people will ever read in order to be a real “expert”?  And is either something to which I should aspire?

These questions have far-reaching implications for me as I go through the process of deciding whether to continue on with studying history as a career, or do something else entirely – something more practical, that would affect people more directly than a well-researched book in an obscure field and a few impassioned lectures about Lord Curzon and the Raj for a dwindling number of undergraduates who don’t care. Because it’s very important to me that I influence the way people think, not in a creepy mind control kind of way but by presenting a fresh perspective that makes them reconsider the world around them and how things work within it.

I’m not sure academic writing is the best way to do that: its scope is too narrow, and its audience is those who are already predisposed to thinking from many angles, and who likely know a lot about the subject already. Traditional academic writing is also very dry. It connects with the reader because it is persuasive, and offers a sourced argument with little personal point of view. Blogs and new media, in contrast, connect with readers because they cover current events and are often based on personal biases or feelings. They are inherently populist, because the vast majority of bloggers want others to read their blogs, and so they talk about things that appeal to a large audience: fashion, entertainment, celebrities, popular political news, etc. And the vast majority of people who read blogs read about the above topics. But does this make them experts in their fields? And does it translate to “academic” subjects like history?

One of my main goals for post historical is to bridge this gap with a forum that is flexible enough to talk about current events and timeless philosophical questions at the same time, yet with a focus that isn’t so personal or academically specialized as to be unappealing to a broad audience outside of a strict historical discipline. One might call this “accessible” writing, though as I wrote about in my last post, “accessible” can be a bit of a loaded term. What matters most to me is making an impact in a way that is direct and tangible, which is why the thought of another history degree and a life as a struggling academic is slightly off-putting at times. It’s very clear what such a life could do for me: I’d be a recognized expert in my field; I wouldn’t have to get out of bed torturously early every morning to go to another soul-crushing corporate meeting; I’d be able to have great chats over coffee with fellow bright people and give speeches about things like maps; I could help out engaged students by giving them interesting research suggestions; and I would generally get to run around having people think I was a big smartypants. Clearly, these things all sound fantastic. But what would a life like that do for others, even if I did manage to actually get a job out of it (which these days, as my fellow blogger and history professor trivium points out on his excellent blog, almost nobody does)? How would it contribute to my big life goal of being a respected public intellectual who makes people think in exciting new ways?

I don’t mean to criticize academics, who are generally brilliant, insightful, creative people. It’s the system that is at fault, a system that encourages people to go to school for 10 years with absolutely no hope of finding employment of any kind at the end of it, a system that encourages killing trees by publishing books nobody cares about, and a system that has created the popular feeling that it is so removed from the everyday that it serves only to train its own. I fear academia is becoming so specialized that it just doesn’t have the impact, or the scope, or the popular appeal, to be taken seriously. When the people who make the most money and important decisions all have MBAs and law degrees, humanities majors are in for some trouble. Actually, we’re all in trouble because we’re losing out on diversity of thought and experience – big time.

As I’ve written before, I think great writing is all about having a conversation, which necessitates a connection between readers and writers. One of the great things about blogs, and Wikipedia, and other new media is that the connection – and the feedback, via the comments or revisions – is immediate, and the process of forming consensus iterative. This is when history and philosophy are really exciting (and this is why I love to receive comments and feedback from readers, particularly when you disagree or want to point out something I’ve missed). Traditional academic writing just isn’t set up to react quickly enough to changes in events, or popular feeling.

So, to paraphrase the great E.H. Carr, what is history writing now? One would think that it would adapt to the changing relationship between reader and writer, from words sent down from a lofty perch in an ivory tower to those that are picked over in comments forums around the world. It hasn’t. And we’ve all lost something in the process.  The Economist ran an article today about how this election (in Britain) is bound to be a landmark one, and yet has no landmark book or philosophy written about the popular mood to match it, or to spur discussion, as was the case in 1945, 1964, 1979 and 1997. (I was particularly excited to see that the article cited one of my historian idols, Linda Colley, as having written a significant work from which Tony Blair drew inspiration in 1997.)

Can it be that nobody has written anything groundbreaking in the past five or ten years that bears mention? Or is it that the political audience is too fragmented – or too busy writing their own blog posts – to notice? Is there still a place for the academic as a public intellectual, or has academic writing been pushed to the fringes of literate society by virtue of being irrelevant to everyday concerns? And if academia is on the fringes, who is in the centre?

I suppose we can all take comfort in the fact that there is still the expectation of something by the intelligent people who read and write for publications like The Economist. There is an intellectual void that will always need filling, by academics or writers or particularly insightful and far-reaching bloggers. The question for the next few years, it seems, is whether those who step up to fill it will have new job titles, and if so, what they will be.


Jargon and Power: Why “Touching Base” Equals Linguistic Imperialism

April 26, 2010

I’ve always thought that jargon was just another way to measure inclusivity. Newcomers to the corporate scene are often barraged with inscrutable acronyms, and people who want to “touch base” and “connect” in order to decide on “actionable next steps.” Other favourites of mine are the ever-present “deck,” otherwise known as a PowerPoint presentation in which one expands five sentences into thirty slides with swirling slide transitions, and the “ask” [n.], which, from what I’ve been able to discern, is a way to cut down on the syllables required to say “request.” Efficiency indeed.

In academia, it’s even worse. It seems that no book or article can be taken seriously until the author has proven his or her credentials by name-checking every obscure work that has been written on the subject. This practice serves only to repeat the same tired debates ad nauseam, with little new beyond increasing specialization, which I’ve attacked at length before.

Considering how pernicious the Plain Language Movement considers it to be, there is shockingly little popular or academic treatment of the subject of jargon. Perhaps it is because, as New Left academic Peter Ives says in his fantastic 1997 article “In defense of jargon,” “jargon is only jargon for those who don’t use it.” Maybe we like to be inscrutable because it makes us feel more intelligent. Or maybe the world is changing so quickly these days that we need something familiar to hold onto, and clichéd language represents a security blanket of sorts.

The ways in which jargon has evolved seem to support this theory. In “‘As Per Your Request’: A History of Business Jargon,” Kitty Locker writes that jargon has eras, identifying the pre-1880s, 1880s-1950s, and post-1950s as distinct periods in business communication. (Given that the article appears in a relatively obscure academic journal and was published in 1987, it obviously doesn’t touch the Internet age, and so I imagine the author would have to add another for the post-1990s period and all of the tech speak we use now.) But considering that the 1880s-1950s (when jargon use was apparently at its peak) also saw the rise of corporate America, and with it an emphasis on professionalism and specialization, we can see the early roots of corporate-style conformity. And today there is just as much human need for conformity, but more arenas from which to choose one’s allegiance: corporate, social, technological, generational, geographical, etc.

Locker argues that corporate jargon and ‘stock phrases’ came about primarily because new employees tended to copy old correspondence, either in style or in actual phraseology. Often letters doubled as legal documents, and so the terminology had to be fairly set. Then, from the 1920s onward, American firms were interested in improving business communication, with big companies often having a person or department who monitored it and tried to get everyone to use the same words and phrases. (O, that I could have the job of whipping corporate employees’ communications into shape! Alas, cost cutting.)

Today, I suspect jargon use comes less from official processes than from subtle attempts to reinforce unofficial corporate/academic norms and hierarchies with new employees. Using jargon – in the form of acronyms, company-specific words, or highly technical language – creates a sense of inclusivity among workers, which is exactly why, if senior executives/group leaders ever thought about it, they would have a vested interest in keeping it around. It is a badge of honour even today for new recruits to master the new group’s/company’s lingo.

Interestingly, Locker points out that companies have had little success in eliminating jargon even when they have tried. A bank in the 1960s tried to freshen up its letters by taking out the standard greetings and salutations, and received numerous complaints from customers who were having trouble recognizing the letters for what they were. As she amusingly quotes, “the value the reader places on the distinctiveness of a business letter can easily be overestimated.” (Indeed.) And it is a daring academic who braves the censure of his or her peers by not mentioning what Foucault thought about the issue, or how “post-x” something is. (One might wonder if s/he even had an advanced degree.) It seems that there is comfort in the conventionality of jargon for both user and receiver.

I wonder if this emphasis on conventionality spreads beyond the walls of corporations and academia. Familiarity and belonging are powerful emotions, after all, and it takes a lot more effort to be fresh and original than to retreat into the comfort of clichéd words and phrases. It is often easier to be anonymous than to be articulate.

Jargon may also have more sinister undertones. Peter Ives argues that most of the jargon we use today (he was writing in 1997) originated in the right-wing military/political/business elite. It seems that we are endorsing a pro-capitalist, individualist language, because the section of society that uses such words also happens to have the means to diffuse its particular linguistic preferences more broadly.

By this logic, even our exhortations to “speak plainly” in language that is “accessible” can be read as elitist, because, as Ives asks, who gets to determine what “accessible” is? Democracy? If so, Chinese would be most accessible. Instead, we assume that “plain English” wins out, and enforce that presumption upon everyone else. Such is the stuff of linguistic imperialism.

It seems language is inextricably tied to power structures, existing hierarchies, and even imperialism. So next time someone asks you to “touch base” later, consider that by deciding just to “talk” instead, you’re standing up for the little guy.


The Modern Good Life, Part 3: The End of Progress

March 25, 2010

What is the modern “good life,” and how do we know if we are living it?  Is what we have now “good”? Can we honestly look to the past and say that the way we live now is better? And can we reasonably expect that things will continue to improve? These are the questions that started me thinking about this series in the first place.

In Part 1, I wrote about our peculiarly modern bias to action, and in Part 2 I discussed the different ways in which we can become slaves to history. In Part 3, I will address our unconscious and seemingly unshakeable assumption of human progress, and how our current historical “moment” is unsettling because it may be challenging that assumption’s dominance.

Gen Y is supposed to be more optimistic than past generations: according to a recent article in Time magazine, 88% of its members believe that one day they will lead the lives they desire. The “hope gap” (presumably the ‘gap’ is with reality) increases with age, apparently, as people get more disillusioned — but deep down we all remain subscribers to a fundamentally optimistic narrative of our present. It is the progress narrative, never better articulated than by John Stuart Mill, its eternally optimistic Victorian proponent, when he said that the goal of progress was for individuals to live long, happy lives without physical or mental suffering, including “indigence, unkindness, worthlessness or premature loss of objects of affection.” Who can argue with that?

I’m sure many of you have heard of Whig History, the idea that humans are progressing toward an ever more liberal and enlightened society: freer, more peaceful, more democratic, more comfortable, and more convenient. Historians like to scoff that Whiggish histories are teleological, Eurocentric, and poorly sourced. We criticize the philosophies of Mill and G.W.F. Hegel, among others, who argued that modern European (now “Western”) society was located at the apex of historical development, and was its logical conclusion. We laugh that Mill and his contemporaries considered nineteenth-century British society to be the most advanced on the scale of civilizations, a trajectory based on liberal criteria such as constitutional checks on rulers, and the freedom of the individual to make full use of his faculties. But in reality, we think the same thing in our own time. We know that things have been continually improving, and expect that they will continue to do so. And we expect that we too will always be at the apex of historical progress.

Amongst all of this certainty, the past few years have been a stumbling block. Suddenly, the balance of media coverage is negative. Is it a temporary setback, we wonder, or a lasting trend? We feel a deep-seated unease as a reputable voice – or collection of voices – begins to think that the past was better than the present. And the main area in which we have concerns is ethical, societal, moral. We can see that technology is advancing, making us smarter (perhaps), wealthier, and more comfortable. But we are no more able to solve society’s eternal ills – poverty, violence, want, fear – than before. New technologies, government policies, or even human kindnesses still have not managed to create our Utopia.

Of course, it isn’t rational to expect Utopia. We all know that. But secretly, we hope that we can achieve it, and we have a vision of the future as “the best of all possible worlds,” as our Panglossian friends would say. And we want to be a part of it, and we want to help it along. We have a bias toward action.

So the question becomes: has the West become a slave to its own idea of progress? I wrote in my last post that today we are unique in seeing history as linear and cumulative. But have we been fooled, and is the “progress” we have seen not really progress at all? Could our technological progress in fact be contributing to a moral decline?

This line of thinking has certainly had its supporters. Several centuries ago, Jean-Jacques Rousseau contested the established idea of progress in his time: economic development, the creation of a state and protection of private property, and the ability to live comfortably. (It appears not much has changed since the eighteenth century.) As he wrote in his Second Discourse:

Due to a multitude of new needs, [man is] subjected…to all of nature and especially to his fellow-men, whose slave he becomes in a sense even in becoming their master; rich, he needs their services; poor, he needs their help.

It certainly isn’t a powerful exhortation to buy that new flat screen TV. Though it is perhaps a given that having more things engenders a need for more things, it doesn’t seem to say much for our evolution as a species. In Toward a History of Needs, Ivan Illich writes that “The model American puts in 1600 hours to get 7500 miles: less than five miles per hour.” Most of us can walk almost that fast, with a lot less effort spent selling our souls for a salary.

Nietzsche continued this anti-progress train of thought in the Genealogy of Morals, deriding those who thought comfort and luxury were the end of life:

The diminution and leveling of European man constitutes our greatest danger…We can see nothing today that wants to grow greater, we suspect that things will continue to go down, down, to become thinner, more good-natured, more prudent, more comfortable, more mediocre, more indifferent…there is no doubt that man is getting “better” all the time.

For both Rousseau and Nietzsche, the economic and technological progress that had led to large societies, sedentary means of acquiring food (i.e. non-hunter-gatherer communities), and the general ease of life that Mill had in mind had caused humans to lose something along the way. That something was morality. Each defined it differently, but they meant much the same thing.

In truth, I don’t think morality is declining, not even with the advent of sexting, or video games, or La-Z-Boy recliners. It’s natural that, when we measure them against objective progress in so many other areas, our human constants of good and evil will inevitably make us feel like failures. Because there certainly is evidence of objective progress. Are we, the middle class in a developed country, better off today than 25, 50, or 100 years ago? In a multitude of ways, absolutely: we have extended many basic rights to larger populations (de jure and de facto), have much more advanced medical care (and likely better access to it), use a host of labour-saving devices which reduce the amount of manual drudgery we have to endure day to day, have technologies that allow us to control our reproductive output (and therefore better manage our careers, financial situations, etc.), and, perhaps most importantly, can access vast amounts of information near-instantaneously.

Utopia? Certainly not. But I feel pretty good about being part of a society that is free, and liberal, and generally supportive of those who can’t support themselves. And I have a recurring dream in which (dork alert!) John Stuart Mill comes to visit me in the present, and he’s pretty pleased with how things have turned out as well, though of course we still have a lot of work to do.

In an excellent article on the idea of progress, a columnist in The Economist writes that our constant striving for morality is like aiming for an “unattainable horizon,” and that the eternal battle between the forces of altruism and selfishness keeps society on an even keel (clearly, this author also has a bias to action). I think it’s more important that we keep up the faith that we’ll get there. Gen Y has it right: optimism is one of the keys to happiness. Society may not be perfect, but we have to believe we can keep improving it.

I started this post series with Madonna, so it only seems appropriate to end with the Beatles: I’ve got to admit it’s getting better; a little better all the time.

Read the other posts in this series:

Part 1: The Bias to Action

Part 2: History and its (Ab)Uses