What is History Writing Now?

April 27, 2010

People reach post historical all the time by searching for odd little historical, philosophical and political science-related phrases. Given the obscure nature of many of these terms to those not deep within postcolonial or imperial studies, I assume they’re doing research for some paper or project. I wonder if they feel they can trust what they read. Am I a reliable source? Are my ideas sound? Can one cite a blog, or is this an even bigger research no-no than citing Wikipedia?

If it is, why? Consider this blogger: I have a graduate history degree from a good school, which, for many, constitutes formal “training” in the discipline. I know how to cite sources and (hopefully) construct a logical and well-supported argument. Does this make me “qualified” to comment on things? Does being qualified today require being intelligent, well-trained, and peer-reviewed (in the traditional sense), or does it come from an even more democratic approval process based on sheer number of readers? Would six million hits to my blog make me a “qualified” opinion leader? Or do I need to have published six books through a university press that only 6 000 people will ever read in order to be a real “expert”? And is either something to which I should aspire?

These questions have far-reaching implications for me as I go through the process of deciding whether to continue studying history as a career, or do something else entirely – something more practical that would affect people more directly than a well-researched book in an obscure field and a few impassioned lectures about Lord Curzon and the Raj for a dwindling number of undergraduates who don’t care. Because it’s very important to me that I influence the way people think – not in a creepy mind-control kind of way, but by presenting a fresh perspective that makes them reconsider the world around them and how things work within it.

I’m not sure academic writing is the best way to do that: its scope is too narrow, and its audience is those who are already predisposed to thinking from many angles, and who likely know a lot about the subject already. Traditional academic writing is also very dry. It connects with the reader because it is persuasive, and offers a sourced argument with little personal point of view. Blogs and new media, in contrast, connect with readers because they cover current events and are often based on personal biases or feelings. They are inherently populist, because the vast majority of bloggers want others to read their blogs, and so they talk about things that appeal to a large audience: fashion, entertainment, celebrities, popular political news, etc. And the vast majority of people who read blogs read about the above topics. But does this make them experts in their fields? And does it translate to “academic” subjects like history?

One of my main goals for post historical is to bridge this gap with a forum that is flexible enough to talk about current events and timeless philosophical questions at the same time, yet with a focus that isn’t so personal or academically specialized as to be unappealing to a broad audience outside of a strict historical discipline. One might call this “accessible” writing, though as I wrote in my last post, “accessible” can be a bit of a loaded term. What matters most to me is making an impact in a way that is direct and tangible, which is why the thought of another history degree and a life as a struggling academic is slightly off-putting at times. It’s very clear what such a life could do for me: I’d be a recognized expert in my field; I wouldn’t have to get out of bed torturously early every morning to go to another soul-crushing corporate meeting; I’d be able to have great chats over coffee with fellow bright people and give speeches about things like maps; I could help out engaged students by giving them interesting research suggestions; and I would generally get to run around having people think I was a big smartypants. Clearly, these things all sound fantastic. But what would a life like that do for others, even if I did manage to actually get a job out of it (which these days, as my fellow blogger and history professor trivium points out on his excellent blog, almost nobody does)? How would it contribute to my big life goal of being a respected public intellectual who makes people think in exciting new ways?

I don’t mean to criticize academics, who are generally brilliant, insightful, creative people. It’s the system that is at fault, a system that encourages people to go to school for 10 years with absolutely no hope of finding employment of any kind at the end of it, a system that encourages killing trees by publishing books nobody cares about, and a system that has created the popular feeling that it is so removed from the everyday that it serves only to train its own. I fear academia is becoming so specialized that it just doesn’t have the impact, or the scope, or the popular appeal, to be taken seriously. When the people who make the most money and important decisions all have MBAs and law degrees, humanities majors are in for some trouble. Actually, we’re all in trouble because we’re losing out on diversity of thought and experience – big time.

As I’ve written before, I think great writing is all about having a conversation, which necessitates a connection between readers and writers. One of the great things about blogs, and Wikipedia, and other new media is that the connection – and the feedback, via the comments or revisions – is immediate, and the process of forming consensus iterative. This is when history and philosophy are really exciting (and this is why I love to receive comments and feedback from readers, particularly when you disagree or want to point out something I’ve missed). Traditional academic writing just isn’t set up to react quickly enough to changes in events, or popular feeling.

So, to paraphrase the great E.H. Carr, what is history writing now? One would think that it would adapt to the changing relationship between reader and writer, from words sent down from a lofty perch in an ivory tower to those that are picked over in comments forums around the world. It hasn’t. And we’ve all lost something in the process.  The Economist ran an article today about how this election (in Britain) is bound to be a landmark one, and yet has no landmark book or philosophy written about the popular mood to match it, or to spur discussion, as was the case in 1945, 1964, 1979 and 1997. (I was particularly excited to see that the article cited one of my historian idols, Linda Colley, as having written a significant work from which Tony Blair drew inspiration in 1997.)

Can it be that nobody has written anything groundbreaking in the past five or ten years that bears mention? Or is it that the political audience is too fragmented – or too busy writing their own blog posts – to notice? Is there still a place for the academic as a public intellectual, or has academic writing been pushed to the fringes of literate society by virtue of being irrelevant to everyday concerns? And if academia is on the fringes, who is in the centre?

I suppose we can all take comfort in the fact that there is still the expectation of something by the intelligent people who read and write for publications like The Economist. There is an intellectual void that will always need filling, by academics or writers or particularly insightful and far-reaching bloggers. The question for the next few years, it seems, is whether those who step up to fill it will have new job titles, and if so, what they will be.

Jargon and Power: Why “Touching Base” Equals Linguistic Imperialism

April 26, 2010

I’ve always thought that jargon was just another way to measure inclusivity. Newcomers to the corporate scene are often barraged with inscrutable acronyms, and people who want to “touch base” and “connect” in order to decide on “actionable next steps.” Other favourites of mine are the ever-present “deck,” otherwise known as a PowerPoint presentation in which one expands five sentences into thirty slides with swirling slide transitions, and the “ask” [n.], which, from what I’ve been able to discern, is a way to cut down on the syllables required to say “request.” Efficiency indeed.

In academia, it’s even worse. It seems that no book or article can be taken seriously until the author has proven his or her credentials by name-checking every obscure phrase that has been written on a subject. This practice serves only to repeat ad nauseam the same tired debates over and over with little new beyond increasing specialization, which I’ve attacked at length before.

Considering how pernicious the Plain Language Movement finds it, however, there is shockingly little popular or academic treatment of jargon. Perhaps it is because, as New Left academic Peter Ives says in his fantastic 1997 article “In defense of jargon,” “jargon is only jargon for those who don’t use it.” Maybe we like to be inscrutable because it makes us feel more intelligent. Or maybe the world is changing so quickly these days that we need something familiar to hold onto, and clichéd language represents a security blanket of sorts.

The ways in which jargon has evolved seem to support this theory. In “‘As Per Your Request’: A History of Business Jargon,” Kitty Locker writes that jargon has eras, identifying the pre-1880s, the 1880s-1950s, and the post-1950s as distinct periods in business communication. (Given that the article appears in a relatively obscure academic journal and was published in 1987, it obviously doesn’t touch the Internet age, so I imagine the author would have to add another period, the post-1990s, for all of the tech speak we use now.) But if we consider that the 1880s-1950s (when jargon use was at its peak, apparently) saw the rise of corporate America, and with it an emphasis on professionalism and specialization, we can see the early roots of corporate-style conformity. And today there is just as much human need for conformity, but more arenas from which to choose one’s allegiance: corporate, social, technological, generational, geographical, etc.

Locker argues that corporate jargon and ‘stock phrases’ came about primarily because new employees tended to copy old correspondence, either in style or in actual phraseology. Often letters doubled as legal documents, and so the terminology had to be fairly set. Then, from the 1920s onward, American firms were interested in improving business communication, with big companies often having a person or department who monitored it and tried to get everyone to use the same words and phrases. (O, that I could have the job of whipping corporate employees’ communications into shape! Alas, cost cutting.)

Today, I suspect jargon use comes less from official processes than from subtle attempts to reinforce unofficial corporate/academic norms and hierarchies with new employees. Using jargon – in the form of acronyms, company-specific words, or highly technical language – creates a sense of inclusivity among workers, which is exactly why, if senior executives/group leaders ever thought about it, they would have a vested interest in keeping it around. It is a badge of honour even today for new recruits to master a group’s or company’s lingo.

Interestingly, Locker points out that companies have had little success in eliminating jargon even when they have tried. A bank in the 1960s tried to freshen up its letters by taking out the standard greetings and salutations, and received numerous complaints from customers who were having trouble recognizing the letters for what they were. As she amusingly quotes, “the value the reader places on the distinctiveness of a business letter can easily be overestimated.” (Indeed.) And it is a daring academic who braves the censure of his or her peers by not mentioning what Foucault thought about the issue, or how “post-x” something is. (One might wonder if s/he even had an advanced degree.) It seems that there is comfort in the conventionality of jargon for both user and receiver.

I wonder if this emphasis on conventionality spreads beyond the walls of corporations and academia. Familiarity and belonging are powerful emotions, after all, and it takes a lot more effort to be fresh and original than to retreat into the comfort of clichéd words and phrases. It is often easier to be anonymous than to be articulate.

Jargon may also have more sinister undertones. Peter Ives argues that most of the jargon we use today (he was writing in 1997) originated in the right-wing military/political/business elite. It seems that we are endorsing a pro-capitalist, individualist language, because the section of society that uses such words also happens to have the means to diffuse their particular linguistic preferences more broadly.

By this logic, even our exhortations to “speak plainly” in language that is “accessible” can be read as elitist, because, as Ives asks, who gets to determine what “accessible” is? Democracy? If so, Chinese would be most accessible. Instead, we assume that “plain English” wins out, and enforce that presumption upon everyone else. Such is the stuff of linguistic imperialism.

It seems language is inextricably tied to power structures, existing hierarchies, and even imperialism. So next time someone asks you to “touch base” later, consider that by deciding just to “talk” instead, you’re standing up for the little guy.

Today’s Nihilism: The Sound and Fury of Apathy

April 19, 2010

Nihilism is often defined as believing in nothing: having no purpose, no loyalties, and no true beliefs. For Nietzsche, who is most commonly identified with it, nihilism is the realization of the extreme subjectivity of human existence. By this philosophy, all of the structures and beliefs we are raised with are simply imposed upon us, not objective realities. In the twentieth century, nihilism is most commonly of the existential type – the belief that life is purposeless and meaningless.

Existential nihilism may have been common among early pagans, but ever since the major religions took hold, life has been thought by most to have a higher purpose: kindness to others, enlightenment, contemplation, even “making something” of ourselves. We may not think of ourselves as nihilists today: there are still a lot of people who believe in an afterlife to strive for, or in the possibility of easing the burdens of others in this life.

But there is a steadily growing cynicism in the Western world that approaches the limit of what humans can take and still have any hope for the future. One of the less common definitions of nihilism is, in fact, extreme skepticism. In some ways, this is just hipster culture writ large.

I was recently directed to an interesting article in Adbusters by one of my readers (thanks!) that speaks of the “coming barbarism,” an extreme anti-capitalist reaction by Gen Y that includes a wilful return to “barbaric” unreason. It is choosing to be in the dark as the antidote to, as the article quotes, “all the cant and bullshit and sales commercials fed to us by politicians, bishops and academics…People are deliberately re-primitivizing themselves.”

This is meta-post-modernism, in the sense that everything is a parody of itself, somewhere back along the line. It is an arresting idea, that there is no escape from having one’s sincerely-held beliefs turned into the backdrop of a music video or an ironic ad campaign for jeans. And it leads to a society in which the ideologies of inter-generational conflict play out almost as though they’re scripted by those in power. As the article puts it:

Unlike Gen Xers, many of whom found ways to express anticapitalist sentiment through subculture, Gen Y has nowhere to run or hide. All forms of cultural rebellion have long since been appropriated and integrated into the ideology of capital. Marketing firms and advertising agencies now enjoy an unprecedented relationship with the avant-garde, so much so that they’ve become one and the same.

By this logic, war protests are not unwelcome but expected as part of the success of a functioning democracy. Ad execs steal from hipsters in order to market jeans to their mothers. Even the existential nihilist is an identifiable brand, a beret-wearing, cigarette-smoking philosopher who takes his expected societal place even in the midst of his narrative of pointlessness.

Is this not nihilism by abstraction, this sense that we are all players in someone else’s game, with our moves predetermined and ultimately ineffective? And, it seems, the only remedy is opting out – the coming barbarism of which the article speaks. It is the willing removal of oneself from any genuine commitments or passions. And it is terrifying. With no passion comes no investment, and so on in a vicious cycle.

Adbusters advocates political involvement as the solution, to “storm and occupy whatever political and economic space we can.” But I suspect this new meta-nihilism has spread to politics as well: how can one support a politician when in five years it is conceivable that he will have “crossed the floor” or possibly be found in hypocritical violation of every principle he espoused? (Former Representative Mark Foley, anyone?) Even Obama, who actually managed to turn the tide and reach those who traditionally wouldn’t care with his politics of audacious hope, seems to have let us down by not being the messiah he was purported to be. The disillusionment has spread, because if he can’t change Washington (or our world as we know it), stop climate change, end the wars, quiet radical Islam, and bridge all divides, then who can?

Still, for those who can swim against the tide of apathy, perhaps political action is indeed a cure. I can think of another: power.  When (if) the members of the current generation stop listening to Lady Gaga and start to buy into (pun intended) the established structures and perks of capitalism – pensions, mortgages, SUVs, fancy titles on business cards – they will not only have more invested in making the system work, but the ability to actually effect change. And maybe then the rebellious thing to do will be just that: the gradual dismantlement of what we know and what we know is wrong with it into an uncertain future. It would be a postmodern reckoning with the structures and beliefs in which we have been raised, and an examination of whether they hold true as objective “good.”

It would be nihilism … with purpose. A new dark age indeed.

A Culture of Free = Communism 2.0

April 14, 2010

Past ideologies and cultural movements were usually associated with a class, or a gender, or a specific subset of the population that wears funny hats and goes to art shows. These days they’re associated with whole generations, Gen Y in particular. And Gen Y’s ideological leanings are ambitious. A recent article in Adbusters claims that “there is a revolutionary current running through the subconscious of this generation that has yet to be realized or defined. We champion piracy, instinctively believing that information should be free and open, that intellectual property law is contra-progress and that capital is not a necessary intermediary for social organization.”

Capital is not necessary for social organization – that is, we want things to be free. Today, everything from news to music to classified ad services has a new benchmark to attract our attention: no cost to us, and preferably none of those pesky ads, either.  It is a race to the bargain basement, which Gen Y began, but which now encompasses everyone. Meanwhile, content providers are struggling to keep up (and many are not). And the “culture of free” has become an ideology.

I’m going to make the bold statement that I don’t believe we are revolutionary for wanting to get things for free. This supposed worldview – that information should be accessible and open – was and is just a convenient position to hold right now. It is no coincidence that the noble championship of piracy arose when most of the members of Gen Y were teenagers, making them a) not yet old enough to be generating capital of their own and happy to get something for nothing; b) at the age when they wanted to “stick it to the man” (or however that sentiment is phrased now), especially the capitalist pigs who profited off their (parents’) hard-earned cash; and c) able to master new pirating technologies before anybody else could devise a clever way to stop them doing it.

Piracy therefore developed so rapidly simply because there was an option. People figured out how to share music (and books, and movies, and opinions) in a new way, and so they did.  It started with just ripping and burning, making mix CDs rather than mix tapes.  Eventually P2P took off because it was offering a useful new service: downloading music directly to one’s computer.  There was no legal competitor, so the free (not-yet-illegal-but-definitely-immoral) version took off. And now it is expected by all that the thieving will continue unabated. We are affronted when record labels attempt to regain their lost profits from hapless downloaders. We scorn those who prosecute the programmers at Pirate Bay. And we revel in the fact that blogs are thriving while subscription-fuelled media giants are hemorrhaging readers.

Now, there is certainly something to the argument that the freer exchange of copyrighted materials that enables piracy can be a good thing. It exposes people to more music, and many people who “pirate” music become engaged fans who then purchase more music or go to more concerts than they would otherwise. But I dispute the idea that the free-stuff movement is anything more than a convenient justification of existing behaviour. Ideologies rarely stick solely because they are noble and altruistic. More often they are useful, and solve practical problems. The “free stuff” movement solved the problem of paying for copyrighted materials.

History has seen many excellent, convincing justifications for getting something for nothing. Real pirates and thieves perfected the art of it, and were/are only stopped by effective policing (whether by international tribunals or the more traditional method of hanging). Aristocrats, priests, and the nobility for most of human existence claimed a divine right to profit from their vassals’ labour, and were only coerced into some semblance of fairness by the threat or occurrence of violent uprisings. And communists claimed the right to free things with an ideology based on the natural inequalities between humans: from each according to his abilities, to each according to his needs, as Karl Marx wrote. But communism has never really been successful, because whenever some people are getting something for free (or with minimal labour), others are working hard and getting nothing.

This is a fact that modern pirates inherently know: the record label executives and established acts are suffering, but so are the sound engineers and indie artists. So how did stealing from others turn into an altruistic ideology?

Part of it is the appeal of the “web 2.0” culture: it is democratic, innovative, and authentic. It bypasses the elitist filters of Hollywood, publishing or old media. According to Andrew Keen, a leading critic of the Web 2.0 movement, it “worships the creative amateur: the self-taught filmmaker, the dorm-room musician, the unpublished writer. It suggests that everyone–even the most poorly educated and inarticulate amongst us–can and should use digital media to express and realize themselves.” It is appealing because it allows us all to be dabblers, part-time writers (through blogs) or directors (through YouTube) or experts (through Wikipedia). This is, Keen argues, exactly what Marx promised of Communism:

[I]n communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, shepherd or critic.

And yet the dabbling, for all its appeal, is why the “culture of free” is ultimately unsustainable. Humans want recognition as individuals, and Gen Y wants this more than anybody. But dabblers are rarely experts, and their output is rarely singled out for recognition. As Keen notes, the problem with the democratization of media is that it creates a situation in which everybody has an opinion but nobody has an audience. And no audience means no capital, which will become a problem when Gen Y moves out of their capitalist-pig-baby-boomer-parents’ houses and has to pay for their own internet connections.

The Educated Class and Its Discontents

April 13, 2010

In a recent Special Report on Germany, the Economist criticized the traditional German system of education – excellent though it is at producing great engineers and skilled trade workers – for its rigidity and unfairness. In Germany, ten-year-olds are marked out for a career of manual labour (skilled or otherwise), white-collar work, or the bureaucratic/professional work that comes after university, and sent to separate schools accordingly. Ten is too young, critics argue, to give a child a direction for life, which will become difficult to change later on in guild-like labour markets that prohibit entry into professions without the right qualifications. And many complain that Germany does not have equality of opportunity: family background is more likely to determine test scores and social status in Germany than in any other country.

With any talk of equality of opportunity, it comes up again: that old aspirational myth of moving between classes, the Horatio Alger or perhaps Will Hunting story of a genius saved from poverty by good education, mentoring or his own perseverance to rise to a different class. Because it is about class. Germans (and the writers of the Economist) are concerned not so much about eventual income distribution, which is quite fair, as about having the opportunity to do something else: move up the social ladder.

Focusing on class seems to be a very Old Europe thing. Only in Europe do we see that holdover of a very, very privileged elite (or aristocracy) with old family wealth, and a poor or working class that never really seems to shrink outside of meddling with statistics – and isn’t going to, because those within it have a sense of pride in being working class. A recent article on class and politics in Britain in the Economist seems to describe the six established statistical class divisions as essentially fixed. David Cameron must appeal to the same middle-class voters as Margaret Thatcher, who appreciated their aspirations to “improve their homes and their lives; to get gradually better cars, washing machines and televisions; to go on holiday in Spain rather than Bournemouth.” Hardly a rapid rise to the upper echelons of power – really just a desire to keep up with what is expected of the “middle class.”

In fact, it seems the most common way of achieving a material increase in living standards is immigration. The quality of life is much higher in “New World” countries like Canada and Australia because the basic cost of living is lower, while health care and education are still available at the same high standard, or higher. It’s hard not to notice that eight out of ten cities ranked “most liveable” by the Economist last year were in Canada, Australia, and New Zealand.

And there is more opportunity for movement between classes in the New World (a term I’ll keep using despite the fact that it makes me sound like Columbus, because I can’t think of a better one), not least because there is less emphasis on “class” in general as something that matters, at least explicitly. The class system of North America has less of a focus on income and history and more on the combination of these with other factors, such as education. My theory is that because New World societies were formed based on merit, and evolved with much less distinction based on income or family wealth (since most everyone was a poor immigrant upon arrival), education and occupation became the primary means of separating out the kind of people with whom one should associate.

The North American system is thus designed to provide more equality of opportunity. In theory, all have the same access to education – even, in some ways, up to the university level. It is a noble goal, and higher education is certainly more accessible in Commonwealth countries and the US than in continental Europe, as this 2005 study ranking university enrollment in developed countries shows.

But the result of our comparatively open and well-attended university system has been a generation or two of liberal arts or natural science graduates who spend ten years flailing around the entry-level job market before eventually settling into corporate middle management in a completely unrelated field, making essentially the same money they would have made had they been pre-classified at age ten, as in Germany. Most look back fondly on the days they spent at university, but more for the social connections they made than for the time spent reading Cicero. And we, as a society, have trouble finding enough people to sell us mortgages or build our houses, because there aren’t really university programs that teach those skills. Universities have become training grounds for the “middle class” as a whole – including the low end of white-collar work – instead of training grounds for occupations where they actually provide valuable preparation: the “upper-middle-class” work of medicine, law, academia and the like.

If nothing else, we North Americans are certainly losing efficiency with all of this finding ourselves that comes after attaining our university qualifications. We’ve also created a society in which having a B.A. means you’re under-qualified for many jobs – either in experience, or because everyone else applying also has an M.A. or the college-level diploma that is all that’s really required to do the job. It isn’t going to change, though, because we value two things too highly: our “right” to attend school (especially university) for as long as we want to, and the class position that doing so will get us.

True, recently there has been a real push by the government and colleges to recognize skilled labour and professional work as viable career options for high school graduates to consider, and one often hears flippant comments about the world needing more plumbers and electricians, who “actually make a fair bit of money.” (Reality check: this website puts a plumber’s average hourly wage at $24 in Toronto, which over a year works out to about $47 000. This is around what your average white collar worker earns, at least at first, and a plumber doesn’t carry the same student loan debt.)

But while the logic of matching skills to actual jobs may have (almost) caught up, the overall effect on what class one will end up in has not. Doctors and lawyers are still far more likely to associate with white collar workers who have attended university than electricians who earn the same amount, because education and occupation are still important class signifiers.

What would it take to change these biases? And would changing the biases reverse the trend toward hiring managers requiring ever more degrees when hiring someone to answer telephones and make photocopies? Is there a happy medium between the German and North American systems, where there is still mobility between classes, and still equality of opportunity, but more cultural acceptance that skilled trades and professional work are respectable ways to earn a living? I’m not sure – but for all that, I would still struggle to recommend that anybody give up learning about politics or history or biology and instead learn about practical data models in order to secure a job. We are fortunate to have the privilege of being able to buy those three or four (or more) years of time to learn. I would advise anybody who asked to enjoy it while it lasts, because there’s plenty of time for uninspiring desk work later, if they so choose.

Toward a Hierarchy of National Needs

April 6, 2010

My last post was about how we use our own bodies as the lens and language through which we describe the world, based on a hypermasculine focus on physicality that is in many ways a holdover from the later Victorian period. In this one I’d like to explore how we tend to anthropomorphize nations as well, and consider what this means for a “national hierarchy of needs.”

Humans feel national consciousness so deeply that in some cases the nation becomes an extension of ourselves. Gandhi once famously said of the post-independence partition of India and Pakistan that “before partitioning India, [his] body [would] have to be cut into two pieces.” The Economist recently described the German Federal Republic as a “matronly 60” and unification approaching a “post-adolescent 20.” And the “body politic” is a familiar concept to all of us. 

But what about national needs? I’m sure many of you are familiar with the hierarchy of individual human needs first articulated by American psychologist Abraham Maslow in 1943: 

Maslow's Hierarchy of Needs

 Maslow’s hierarchy charts the progression of human needs from basic physiological survival – breathing, eating, etc. – to the highest-order need of self-actualization, which involves acceptance of oneself as is, being all that one is capable of becoming, and living with internally-motivated purpose. 

What does this look like when applied to whole nations? Because this kind of thing keeps me up at night, I created a conceptual model of what might be included in a national hierarchy of needs: 

Exon's Hierarchy of National Needs (Click for a larger version)

The Hierarchy: Definitions

Like Maslow’s personal needs hierarchy, the national one assumes that lower-level needs must be met before progressing to the next level. And like Maslow’s pyramid, the upward progression through the different kinds of needs is one from physical security/territorial necessities to more psychological or social ones. In both, the apex represents the fulfillment of potential and is the optimal state.

The most basic national need is territorial integrity, through a defined physical space, and both de facto and de jure independence. It seems obvious that physical borders have a huge impact on the populations therein: they sort people into school districts, tax codes and trade permits, as well as, on a more general level, forcing them to go through particular national channels to conduct their daily business. They represent official languages and religions, laws and norms, war and peace. We often term nations that do not have this need satisfied “failed states”: failed, perhaps, because they cannot sustain any higher level without it.

Next is a free government, and by that I really mean a functioning representative democracy. I debated whether or not to put democracy here, as there are certainly many examples of states in history and in the present day that seem to function on higher levels in some ways without fully democratic government. But I do think that in order to secure the kind of economic and cultural freedom of the higher levels, democracy is a must. The government must have fair laws enshrined in a constitution that are not easy to meddle with (there goes Italy, I suppose), and civil rights. Perhaps the majority of nations today haven’t passed this stage, especially without universal adult suffrage.

The third need I identified is a free economy, because a nation that is unable to sustain its independence without bailouts from international organizations, or one that is subject to a “colonial economy,” where natural resources are exported and manufactured elsewhere, can never really fulfil the higher-order needs that come next. (Let’s not get into the fact that many modern nations weren’t able to choose their borders and/or natural resource allocations and whether or not this is fair. It isn’t.)

The fourth is a thriving public sphere. Really this is the nation’s view of itself, through its political discussions, art, literature, and history. It is at this level that individuals really start to use the nation as a cultural touch point for identity, as I wrote about in an earlier post. This is where national pride comes from. An appropriate term for this is “imagined communities.” I have borrowed here from Benedict Anderson’s landmark book by the same name, which explores how nations are formed by citizens who “imagine” national political communities. These communities exist at a higher level than one-on-one interaction and as such are in the mind, which makes them a powerful force. I also wrote here of the concept of “loyal opposition,” which, from the British tradition, means a party or individual can disagree with the policies or ideas of the governing party but still respect the authority by which it is in power – an essential trait of a functioning democracy. Implicit in this is respect for the opinions/culture of others (which I sometimes fear is being lost in many political debates today). 

The highest-order national need, which occupies the same place as “self-actualization” on Maslow’s pyramid, is that of global leadership. At this point, a nation achieves the pinnacle of influence. It will probably exert extensive “hard power” through international organizations for the betterment of other nations (e.g. the United Nations or NATO). But more importantly, it will have soft power in the form of a defined image outside of its borders which other individuals and nations respect. An example that sums this concept up perfectly is the “American Dream,” the idea that, in the United States, one can achieve anything with hard work and determination. Soft power like this is a powerful force – much more so than armies or multinational corporations. As The Economist put it in a recent article, “the greatest strength of America is that people want to live there.”

Inconsistencies Within the Model 

As with the personal model, progression through the needs is not always linear or complete. Nations may exist on several levels simultaneously (as people do), or may fulfil higher-order needs without yet having satisfied lower-level ones. Would nineteenth-century Britain, which in many ways could be seen to have positive global influence and a set of national ideals, be considered a global leader? Certainly – and yet it had attained neither universal adult suffrage nor peaceful relations with its neighbours.

Another obvious contradiction that springs to mind is a colonial state, which may have a thriving public sphere, entrenched civil rights, or international influence without having the basic needs of independence, its own elected government, or an independent economy. India in the 1930s and 1940s is a good example of this, certainly in that it had an active public sphere, well-established art and literature, and global sympathy to its cause — while still under the yoke of the British Empire and held back by its own anachronistic caste system and bitter history of conflict. (Perhaps this is why they did not remain under the yoke much longer?) 

One might also point to “nations” that are not independent political entities, like Quebec, which have a defined culture separate from the rest of Canada. Interestingly, Gilles Duceppe, leader of the federal secessionist Bloc Quebecois, recently used his own corporal metaphor in referring to his party as a young twenty years old, a “nice age…Especially when compared to the Liberal and Conservative parties, which are 143 years old… When you’re 20, you have the energy to fight against the system, which in our case is the federal system.” But in response to Duceppe’s incendiary claim that Quebec separatists are akin to French Resistance fighters in WWII (!), I would hold up a federally united Canada as an example of a self-actualized nation of imagined communities, at the top of the pyramid. Canada is strong and unique because of unity among its differences, linguistic, cultural, historical – whatever. It is the peaceful acceptance of dissenting and disparate views within (and without) that allows a nation to have such global influence. This is what separates Canada from, say, Iraq, and why it is one of the most common destinations for immigrants. 

But what do you think? Does the model make sense to you? Have I missed anything out? Is anything in the wrong place, in your opinion? Do you believe Canada and the United States are “self-actualized” nations? If not, why not?

Shuffling Off Our Mortal Coils – Or Making Them Our Centres?

April 5, 2010

These days we seem obsessed with our bodies: thinning them out, bulking them up, getting them into “shape,” perfecting their curves and improving their features, and generally doing all we can to modify or preserve our outer encasements. My body is my temple, as the saying goes.  And I will worship it with margarine and Red Bull.

The body today is seen as the beginning of life: we must treat it well in order to function well in other areas of existence. We must get a good breakfast with plenty of protein and fat (but only good fats!) to fire up our metabolism for the day. We are advised to consume more carbohydrates to get our brains in gear. Running will help us sleep better. Sleeping better will help us live longer. Living longer will give us more time to watch what we eat.

I don’t disagree with any of the above advice, but I do wonder when our mortal coils became separate entities from our minds. In The Republic, Socrates notes that both music and gymnastic education contribute toward forming the whole person (note that to him, music was more important). Physical activity, moreover, was meant mainly to avoid illness. Socrates points out (as is explored in this article) that the soul comes first and produces a good body: a healthy intellect results in a healthy body. The mind is the primary concern, and the instigator of physicality.

Perhaps the corporal obsession comes from our modern need for control. We don’t feel it anymore over our minds. We sense that life is one big game of Survivor with people out to outwit, outplay, and outlast us: advertisers trying to con us into buying more products we don’t need, politicians lying about what they’ll do if they are elected, even subliminal messages that influence how we think without our knowledge. But we can slim and sculpt and swap out bits of our physical exteriors that we don’t like. As Olivia Newton-John would say, let’s get physical.

Or perhaps it goes back further, to the late nineteenth-century fixation upon masculinity that took root in Western culture and never quite left. In the logic of British imperialism, for example, “masculine” traits like aggression, control, competition and power were all inherent qualities of a successful imperial people, in contrast to the primitive effeminacy and weakness that characterized the “lesser races” they sought to civilize. This hypermasculinity found its expression in an overt and growing militarism, spurred on by the imperialist canon of Robert Baden-Powell and Rudyard Kipling, among others. Men delighted in proving themselves in war, perhaps an outlet of barbarism in their cloistered, prim, restrictive society.

In this period, Teddy Roosevelt (my personal favourite president) advocated a “strenuous life” of strife and toil, as individuals and as a nation (in the form of imperialism), in order to “ultimately win the goal of true national greatness.” [Come to think of it, he may have been one of the founders of our modern bias toward action that I wrote about in an earlier post.] Individual strength, ruggedness, and power would lead to national victories, particularly in the imperialistic wars that were coming, in Europe and around the world.

And they prepared for war with sport, and play. It’s no coincidence that the Olympics were revived in the middle of all of this imperial scrambling, in 1896. Though we have since created a story about how the games are rooted in friendly international competition, they were no doubt seen by many then as a proxy for battle. (Some modern commentaries on the national medal counts make this apparent even today.) And though Robert Baden-Powell’s Scouting for Boys launched a century of camping, orienteering, and general outdoorsy skills being taught to young men in Boy Scouts (hilariously anachronistic title notwithstanding), its origins were in a survival manual Baden-Powell had written for his fellow army men camped in India. It was adopted largely as preparation for future imperial warfare.

Even today, we worship those whose bodies are their primary known attributes much more than those whose minds are – at least with our money. Consider how many more people know who David Beckham is than, say, Tom Standage, or how many more watch America’s Next Top Model than the Reach for the Top provincial finals.

Corporal strength, power, even perfection, has become the ideal we seek and worship, and often the lens and language through which we describe the world. My next post will discuss how this applies to nations, but for now I’ll leave you with two images:

The heroes of their day…

…and of ours.

What do you think? Has the hypermasculine focus on physicality of the high imperial age stayed with us to the present day, or do we have a new ideal now? Do you think corporality is the primary way through which we understand and describe the world? Do you use your mind to serve your body, or vice versa?

Human Definitions: Evolution and the Next Theoretical Phase of Being

April 3, 2010

I’m fascinated by the idea that space is not a fixed but a fluid concept, and one subject to change depending on the era. A major consequence of industrialization in the nineteenth century was the new popular sense that, through inventions like the telegraph or steam engine, space had been conquered by humans. The world got even smaller, theoretically, in the post-war era with more advanced telecommunications and increased travel by air. A fantastic article I read recently by Sadeq Rahimi describes this as “depriving geography of identity,” a hallmark of the modern era.

We are fairly confident in our mastery of space, I think, in the age of blogs and Google Maps. But, as the same article points out, we have not yet mastered time. We exist as time-bound creatures, both as individuals and as a species (since, eventually, our sun will go cold and unless we’ve found another inhabitable solar system, that’s it). And we define ourselves in time, as products of our age: as postmodern, perhaps, or Baby Boomers, or twenty-first-century humans. But this becomes problematic when time shifts quickly enough that we struggle to form coherent identities. As Rahimi notes:

The conflict is fundamental: if self-identification has traditionally always already implied a reference in time, then acceleration is inherently the enemy of identity, by continuously curtailing the ‘stuff’ identity is made of. It is not a coincidence perhaps that the concerns of social sciences have gradually moved from being able to predict the future to being content with simply explaining the present, as the high speed of change leaves little room for the luxury of prediction.

If this continues, we will ultimately be unable to define ourselves as we do, in time, as human. We would need to be constantly re-inventing and re-iterating ourselves, re-imagining who we are. In a theoretical sense, this would be termed “posthuman.”

Since I just love terms that are prefixed by “post-,“ I did some research into the concept of posthumanness, to see how much it had to do with its cousins, postmodernism, postfeminism, and posthistoricalism. Turns out, it has as much to do with Battlestar Galactica.

Theoretical posthumanism dates back to Steve Nichols’s 1988 The Posthuman Manifesto, a controversial work that argued that because we are now so advanced, we are already posthuman compared with our ancestors. The debate has evolved to centre on the idea that (and this is according to Wikipedia, so take it with a grain of salt), “posthuman is not necessarily human in the first place, but is rather an embodied medium through which critical consciousness is manifested.”

Heavy stuff. But it makes sense, when we think of constantly re-imagining our present-day identities, to also talk of re-imagining our future selves. According to many posthumanists, the next stage of our evolution is the gradual incorporation of technology with biology, a coming roboticization of the human race. Some might recoil in horror or fear at the thought of becoming machines. It doesn’t seem human somehow, not to be distinct, and flawed, and somewhat messy. The uniformity and perfectibility of technology seems to be at odds with the very idea of what it means to be human. But it’s only natural, they argue, and in some ways I’d even argue that it has already happened. We have pacemakers and various plastic augmentations inside our bodies, and corrective lenses and steel limbs outside of them. Is it so much of a stretch to think that next will be automated GPS mapping or artificial memory recall implants, as some have suggested?

I suspect the reason people fear such a future is because they fear the inequalities it would create. Visions of a Brave New World-like universe abound, in which a small group of oligarchs rule, and all others exist in a semi-comatose state effected by drug dependency and decayed mental faculties. In such a world, there would be an upper class of posthumans who’d have the capital and access to technology to improve upon their human blueprint, and an underclass who wouldn’t. The upper class would be so far superior to the other that they would rule absolutely. They would own both robot and human means of production, in essence forming a Marxist superbourgeoisie. And, their production needs satisfied by machines or ordinary humans, they would exist solely to seek amusement from others.

In some ways, these doomsday visions aren’t so different from what we see today. At the risk of sounding like a conspiracist, I’d argue that there is indeed an upper class today with access to the means of production, money, and technology – not quite a superbourgeoisie, but one closer to Marx’s idea of the ruling class. In a broad way, this includes most members of the Western world, who are healthier, wealthier, and more able to access the things they want and need in life than those, say, roughly south of the equator. In a more narrow sense, it is the powerbrokers of society: the wealthy tycoons, the politically powerful, and those who have the authority to make military or other government policy decisions.

And the rest of the world does in some ways exist in a kind of semi-comatose, consumption-fuelled dream (nightmare?) already. Much of the fear surrounding the “Great Recession,” as we are apparently now calling it, has grown from the realization that Western societies operate almost entirely as service economies in the kind of era-specific invented industries that I spoke about in earlier posts: hairdressers, insurance adjusters, pop psychiatrists, entertainment bloggers. In some ways we already do exist solely to amuse each other, particularly if we count acquiring and then playing with money as amusement. And new needs are invented every day, from material objects like iPads, to emotions, like the need to be “in the know” (through using Twitter) or the growing popularity of being “green.” And since we live only to consume, and rarely to produce, we in the West are in trouble, slaves to a growing China because of our love of cheap TVs from Wal-Mart.

I’ve always thought that much of the fear people have for the future comes from a deep, if unacknowledged, sense that they currently hold an advantageous position relative to their peers and that this might not be so were things to change. This is why so many of the robot-takeover or China-ascendant polemicists are from advanced Western societies: they don’t want their own status to change. And who would? But I doubt they need worry, as the trajectory of this new evolution favours just those who are currently powerful. This is where technology is so dangerous. As a tool, it vastly augments the existing inequalities within human relations. Humans have never been able to control their evolution before, and being able to do so would only magnify the differences that are now but slight, in a biological, evolutionary sense. Biological inequalities can be the most deeply-rooted. But there will no doubt be a power struggle that will even the playing field – again.

What I’ve read of a coming posthuman era seems to me only to validate the tendencies of the current, all-too-human age, the desire to have power over others paramount among them. Visions of the future always reflect the present (and past). Indeed, the very idea of posthumanity itself betrays a very human (and apparently ‘postmodern’) need to identify everything within the linearity of time. If being posthuman is, as Wikipedia so confusingly asserts, not necessarily human at all but rather “an embodied medium through which critical consciousness is manifested,” we need to re-think what being human is, because I’m pretty sure I meet embodied media every day that display critical consciousness (to varying degrees…). By this definition, I think my blog might be posthuman, since there’s a fair bit of critical consciousness in the comments sections.

And I suspect that anything with critical consciousness will seek to form an identity, if not through time then through language, or physicality, or some other “human” means of classification. Would this not be human, then? So much of theory seems to come down to semantics, anyway – the world is what we think it is, as individuals and as a society. So our identity, in the end, comes down to what we say it is: space-bound, time-bound, human-entity-bound… or not. To paraphrase a famous Ford saying (as we are now in the Year of Our Ford…what, 107?): whether you think you are, or you think you aren’t, you’re right.

Gosh, isn’t that such a posthuman thing to say?