The Rise and Fall of the Grand Narrative

August 12, 2010

Those of you who read my blog regularly will know how frequently I lament the increasing specificity required of academic writing, and how it threatens to render the profession obsolete for lack of readership or general interest in the subject matter. My thoughts were echoed in a recent book review which, in discussing the life of the prominent historian Hugh Trevor-Roper, remarked that he could never be the great academic he wanted to be – an E.P. Thompson, or a Thomas Macaulay, or an Edward Gibbon – because of two key factors. The first was the passing of the “grand narrative” approach to history, which is now seen as unprofessional, or worse, as imperialistic in the Marxist teleological sense. The second was a result of his being British: as the article notes, “By Trevor-Roper’s day … Britain had become too insignificant to provide the subject of a grand narrative of progress in the style of Macaulay.” The only nation that could conceivably produce historians claiming to write the story of its own empire today would be the United States, and those who do are usually right-wing polemicists who garner little respect in academic circles.

It’s true that the grand narrative has its drawbacks, as I’ve written before. Huge swaths of history that don’t fit in can be glossed over or ignored entirely in order to weave a tight story. And the grand narrative remains a common way for writers to (consciously or otherwise) impose a single, usually Western, trajectory upon world events that can be interpreted as modern intellectual imperialism. But it remains an anchoring lens through which historical events can be contextualized and patterns examined, and is usually more interesting than a narrow study. So what has caused the violent turn away from the grand narrative?  Is it justified?



Changing Landscapes: A New Kind of Public Space

May 13, 2010

There’s a lot of talk about public space, especially in urban centres.  In Toronto there are whole movements dedicated to using it, preserving it and creating more of it – more parks, sidewalks, markets, waterfronts, and civic centres. Many see public space as a fundamental pillar of democracy, particularly at the local level where grassroots community organization can impact politics to a greater degree than at the national level.

Central Park, A Classic Example of Physical Public Space

A lot of these campaigns are led by the young and largely propertyless, often leftists, renters or students. Case in point: the authors of the Project for Public Spaces website ranked the 20 best neighbourhoods in North America in their “Great Public Spaces” section. Of course, I looked for Toronto and wasn’t surprised to find Kensington Market – somewhere I personally find not “one of the most vibrant places in Toronto” but a sketchy and slightly smelly collection of ramshackle shops – coming in at 8th place. Hardly Rockefeller Plaza. And the local councillors who support the cause are on the political left too, even considering that municipal politics in Toronto slants quite heavily that way in general.

I wonder if the leftist slant of public space advocacy is because the availability of public space is more important to those who don’t have/want private spaces of their own.  With the opportunity to own a tiny condo or one-room loft, concern for public space for general use seems to decline. It’s more appealing, perhaps, to be able to control one’s environment, despite the costs. And the clamouring for public space grows ever quieter as the students move from starter lofts to semi-detached homes with lawns, commutes, and bigger environmental footprints.

There is definitely, therefore, a generational aspect to the use of, and maybe even need for, public space.  Private space is increasingly necessary in raising a family or seeking financial security. And the kinds of activities that occur in public spaces – socializing, meeting new people, and acting on common interests – move further into the realm of private space as we grow older and tend to seek out the same friends, colleagues, or associations we’ve had for years. More private space makes our worlds more insular.

In part this trend is due to the decline of public-private spaces that in the past would encourage intergenerational socializing. Consider that in the Victorian era, a huge amount of time was spent at semi-private dances, in gentlemen’s clubs, or in church – all institutions whose memberships shrink year after year. Like never before, space has been divided up into “privatopias,” whether owned by individuals or corporations, with access to them tightly controlled by invitation only.

It is also increasingly commercialized, something urban studies scholars have been writing about for ages. Whereas 200+ years ago town squares, public parks, and fairs abounded, today the majority of “public space” has a commercial bent, such as restaurants, arenas, nightclubs, and shops. Starbucks, renowned for its marketing campaign to make its cafes the “third place,” in the end really only wants you to buy its lattes and frou-frou yogurt cups. And the best example of the commercialized space, and one of the most popular “public” spaces for youth, is the mall. What is more likely to incite consumerism than a collection of stores, kiosks and food courts? There is much more of an incentive to do something (usually buy something) within public spaces today, whereas in the past the only thing one was expected to do was socialize.

The New Public Space

It is perhaps not surprising, then, given the generational divide in the use of public space and how increasingly partitioned it is into commercial zones, that young advocates of public space have turned to a generally no-cost option for interaction: the Internet. The most abundant public space today is the virtual kind. Spaces like chat rooms in the early days, and later Facebook, MySpace and YouTube, were revolutionary because they allowed large groups to get together to exchange news, form communities, and interact in real time. And they’ve since gone one better by adding the advantage of collaboration outside of real time: through “walls,” posts and message boards visible to everyone, groups can affiliate without everybody having to be present in the same place at the same time. The Internet seems to be the solution to the ever-declining amount of physical public space: no governments need to be lobbied or protests staged; no corporations need to be fought for land; and the degree of commercialization is smaller, with relatively unobtrusive ads (so far).

But what is lost without physical space? True, there will be no “guerrilla gardening” online (except, perhaps, in other people’s FarmVilles). And it is easy to argue that virtual space is less accessible – to the underprivileged and those who can’t easily navigate the Internet – than its physical equivalent. It is also vulnerable to the same kinds of privatization that threaten space in the outside world, through access controls or commercialization.

But the potential for grassroots activism is surely greater, since the Internet is vast and largely unpoliced, unlike physical spaces. I wonder: will declining physical public space affect the quality of democracy for good or ill? It is perhaps a truism, even after the short time Facebook has been around, that people are more likely to support a cause by joining a group than by leaving the house to protest something. And joining Facebook groups is no doubt less effective than voting for a local politician who can actually effect change. But the far-flung and ever-present nature of virtual public space carries advantages here too: people are perhaps more likely to find out about something in the first place, because the exchange of information is more rapid than before and isn’t immediately lost the way it would be in a physical gathering.

The challenge, then, is bringing the activism and accessibility of the physical public space into the virtual world. Ideally, of course, the public space of the future will be a hybrid of the virtual and physical kinds. Perhaps the use of each will inspire support for the other. But in the short term, I nominate Facebook as one of the “great public spaces” of our time. I wonder what Habermas and the public space committee would say to that.


How Gen Y Can Reinvent Work-Life Balance

May 4, 2010

It’s May again, that exciting time of year when newly-minted college graduates venture out into the world and attempt to find a job. Or perhaps go to Europe and attempt to find themselves instead until the hiring freezes are lifted.  What will increase their chances of success?

It seems as though it’s getting harder and harder just to get onto the bottom rung of the “career ladder” (a term which, as someone who works in HR, I can tell you is on its way out as an inappropriate metaphor for the working world – think less in terms of defined rungs and more in terms of the moving staircases in the Harry Potter movies: you never know where you’ll end up). What happened to slogging through a terrible entry-level job booking meeting rooms and fetching coffee, paying one’s dues in order to move up to a better job in a year or two? Is that still necessary, or have things changed?

Well, as it turns out, a lot of things have changed. Many articles have been written about them: an economic slump which has meant declining hire rates and more people being let go; a majority of baby boomers who were supposed to be leaving the workforce to live out their golden years on pensions we’re paying for, but who are not; a glut of “over-qualified” university graduates with little practical experience (which, as we all know, entry-level coffee-making jobs require) who are driving up competition for the few full-time jobs that are out there; and organizational structures that are getting flatter, with fewer roles at the top. So the situation now is that one can work making coffee and booking meeting rooms for three or four years and perhaps find there’s no promotional pot of gold at the end of the rainbow, or find that it’s still a few years out.

So where does that leave new graduates? If “paying your dues” was the baby boomer way to climb the corporate ladder (which actually existed then), what happens now? As my favourite career blogger, Penelope Trunk, once wrote: paying dues is out; that kind of lifestyle doesn’t allow for real growth or balance at work, because it forces new recruits to work ridiculous hours doing menial tasks. (It also sets a precedent that’s hard to follow once you have commitments outside of work.)

What’s better? In theory, doing many different things to acquire enough experiences to figure out what we really want to do over the long term. One of the advantages new grads have is the freedom to move around and go where the jobs are. But the trouble with this theory is that the way the job market is structured now, we need to be very sure of what jobs we want, specialize early, and be prepared to slog it out for several years gaining “relevant experience” in our field. There is little room now for dilettantism, or having jobs “on the side.” Everything is a career choice.

Take the classic “job on the side” for everything from aspiring writers to rock stars: teaching. Teaching used to be the kind of thing that anybody could do (and there were, accordingly, great teachers and some not-so-great teachers in the mix). Now students are fighting tooth and nail to get a place at teacher’s college, often resorting to attending a school in a different country. And once they graduate, the job market looks terrible – there is a two-stage application process even to be considered for a supply teaching job.  And don’t even get me started on academia as a career.

So despite the fact that it’s better to do different things, we’re now seeing a kind of apprenticeship model reborn, with high entrance requirements to every guild. Career experts say that Gen Yers will have something like 10 different careers in their lives – but in order to do so, we’ll need to have transferable skills, and know very well how to market them. In practical terms, this means that job-hopping, or even industry-hopping, is key to demonstrating all the different places in which one’s skills have been useful. It’s a kind of paradox in which focus and diversity of experience are battling for dominance.

One solution might be to have multiple income streams, or to get experience with various combinations of paid and unpaid work. (Or maybe to start a blog and wind up with a movie or book deal out of it.) Like the realization that your romantic partner can’t be everything to you, we’re now seeing the idea that your main job can’t be everything either, from a remunerative or skills-building perspective. (Forget the idea that a job by itself can’t make you happy in life; we exposed that fallacy several years ago.) This trend is called having a “portfolio career,” that is, using a functional skill to diversify revenue streams.

We’re used to seeing this with careers in things like music, where a conductor will (for example) have a community choir, a church gig, some wedding performances on the go, and a few students all at the same time. When one revenue stream dries up, he or she will pick up another. But it’s new for accountants, or for those who might want to mix traditional employment (at a major corporation, say) with self-employment. The key is diversity within a specialization: having skills that people will pay for and capitalizing on them in several different ways.

It also means that members of this generation will have to live with more uncertainty about their careers. Perhaps this is the price we’ll pay for more control over the skills we use and how we spend our time day-to-day. Does this signify a shift back to a pre-industrial time where people could choose how much they worked? Not fully, I’m sure, but it may be the beginning of a new, hybrid system where workers can control their output and work to their real interests more. Maybe this is the new “work-life balance.”

If, that is, all these new grads ever manage to get hired into that first job.

What do you think? Will you try to mix paid and unpaid work? Do you plan on job-hopping or industry-hopping? Do you anticipate that many members of Gen Y will choose to have multiple/multifaceted careers? Or is this a trend that will only affect a small subset of the population? Is it better to work a terrible (paying) job for three years or to get lots of volunteer experience instead?


A Culture of Free = Communism 2.0

April 14, 2010

Past ideologies and cultural movements were usually associated with a class, or a gender, or a specific subset of the population that wears funny hats and goes to art shows. These days they’re associated with whole generations, Gen Y in particular. And Gen Y’s ideological leanings are ambitious. A recent article in Adbusters claims that “there is a revolutionary current running through the subconscious of this generation that has yet to be realized or defined. We champion piracy, instinctively believing that information should be free and open, that intellectual property law is contra-progress and that capital is not a necessary intermediary for social organization.”

Capital is not necessary for social organization – that is, we want things to be free. Today, everything from news to music to classified ad services has a new benchmark to attract our attention: no cost to us, and preferably none of those pesky ads, either.  It is a race to the bargain basement, which Gen Y began, but which now encompasses everyone. Meanwhile, content providers are struggling to keep up (and many are not). And the “culture of free” has become an ideology.

I’m going to make the bold statement that I don’t believe we are revolutionary for wanting to get things for free. This supposed worldview – that information should be accessible and open – was and is just a convenient position to hold right now. It is no coincidence that the noble championship of piracy arose when most of the members of Gen Y were teenagers, making them a) not yet old enough to be generating capital of their own and happy to get something for nothing; b) at the age when they wanted to “stick it to the man” (or however that sentiment is phrased now), especially the capitalist pigs who profited off their (parents’) hard-earned cash; and c) able to master new pirating technologies before anybody else could devise a clever way to stop them doing it.

Piracy therefore developed so rapidly simply because there was an option. People figured out how to share music (and books, and movies, and opinions) in a new way, and so they did.  It started with just ripping and burning, making mix CDs rather than mix tapes.  Eventually P2P took off because it was offering a useful new service: downloading music directly to one’s computer.  There was no legal competitor, so the free (not-yet-illegal-but-definitely-immoral) version took off. And now it is expected by all that the thieving will continue unabated. We are affronted when record labels attempt to regain their lost profits from hapless downloaders. We scorn those who prosecute the programmers at Pirate Bay. And we revel in the fact that blogs are thriving while subscription-fuelled media giants are hemorrhaging readers.

Now, there is certainly something to the argument that the freer exchange of copyrighted materials that enables piracy can be a good thing. It exposes people to more music, and many people who “pirate” music become more engaged and then go on to purchase more music or attend more concerts than they would otherwise. But I dispute the idea that the free stuff movement is anything more than a convenient justification of existing behaviour. Ideologies rarely stick solely because they are noble and altruistic. More often they are useful, and solve practical problems. The “free stuff” movement solved the problem of paying for copyrighted materials.

History has seen many excellent, convincing justifications for getting something for nothing. Real pirates and thieves perfected the art of it, and were/are only stopped by effective policing (whether by international tribunals or the more traditional method of hanging). Aristocrats, priests, and the nobility for most of human existence claimed that they deserved, by divine right, to profit from their vassals’ labour. They were only coerced into some semblance of fairness by the threat or occurrence of violent uprisings. And communists claimed the right to free things with an ideology based on the natural inequalities between humans: from each according to his abilities, to each according to his needs, as Karl Marx wrote. But Communism has never really been successful, because whenever some people are getting something for free (or with minimal labour), others are working hard and getting nothing.

This is a fact that modern pirates inherently know: the record label executives and established acts are suffering, but so are the sound engineers and indie artists. So how did stealing from others turn into an altruistic ideology?

Part of it is the appeal of the “web 2.0” culture: it is democratic, innovative, and authentic. It bypasses the elitist filters of Hollywood, publishing or old media. According to Andrew Keen, a leading critic of the Web 2.0 movement, it “worships the creative amateur: the self-taught filmmaker, the dorm-room musician, the unpublished writer. It suggests that everyone–even the most poorly educated and inarticulate amongst us–can and should use digital media to express and realize themselves.” It is appealing because it allows us all to be dabblers, part-time writers (through blogs) or directors (through YouTube) or experts (through Wikipedia). This is, Keen argues, exactly what Marx promised of Communism:

[I]n communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, shepherd or critic.

And yet the dabbling, for all its appeal, is why the “culture of free” is ultimately unsustainable. Humans want recognition as individuals, and Gen Y wants this more than anybody. But dabblers are rarely experts, and their output is rarely singled out for recognition. As Keen notes, the problem with the democratization of media is that it creates a situation in which everybody has an opinion but nobody has an audience. And no audience means no capital, which will become a problem when Gen Y moves out of their capitalist-pig-baby-boomer-parents’ houses and has to pay for their own internet connections.


Shuffling Off Our Mortal Coils – Or Making Them Our Centres?

April 5, 2010

These days we seem obsessed with our bodies: thinning them out, bulking them up, getting them into “shape,” perfecting their curves and improving their features, and generally doing all we can to modify or preserve our outer encasements. My body is my temple, as the saying goes.  And I will worship it with margarine and Red Bull.

The body today is seen as the beginning of life: we must treat it well in order to function well in other areas of existence. We must get a good breakfast with plenty of protein and fat (but only good fats!) to fire up our metabolism for the day. We are advised to consume more carbohydrates to get our brains in gear. Running will help us sleep better. Sleeping better will help us live longer. Living longer will give us more time to watch what we eat.

I don’t disagree with any of the above advice, but I do wonder when our mortal coils became separate entities from our minds. In The Republic, Socrates argues that education in both music and gymnastics serves chiefly to form the whole person (and note that, for him, music was the more important of the two). Physical activity, moreover, was meant mainly to avoid illness. Socrates points out (as this article explores) that the soul comes first and produces a good body: a healthy intellect results in a healthy body. The mind is the primary concern, and the instigator of physicality.

Perhaps the corporal obsession comes from our modern need for control. We no longer feel in control of our minds. We sense that life is one big game of Survivor, with people out to outwit, outplay, and outlast us: advertisers trying to con us into buying more products we don’t need, politicians lying about what they’ll do if they are elected, even subliminal messages that influence how we think without our knowledge. But we can slim and sculpt and swap out the bits of our physical exteriors that we don’t like. As Olivia Newton-John would say, let’s get physical.

Or perhaps it goes back further, to the late nineteenth-century fixation upon masculinity that took root in Western culture and never quite left. In the logic of British imperialism, for example, “masculine” traits like aggression, control, competition and power were all inherent qualities of a successful imperial people, in contrast to the primitive effeminacy and weakness that supposedly characterized the “lesser races” they sought to civilize. This hypermasculinity found its expression in an overt and growing militarism, spurred on by the imperialist canon of Robert Baden-Powell and Rudyard Kipling, among others. Men delighted in proving themselves in war, perhaps an outlet for barbarism in their cloistered, prim, restrictive society.

In this period, Teddy Roosevelt (my personal favourite president) advocated a “strenuous life” of strife and toil, as individuals and as a nation (in the form of imperialism), in order to “ultimately win the goal of true national greatness.” [Come to think of it, he may have been one of the founders of our modern bias toward action that I wrote about in an earlier post.] Individual strength, ruggedness, and power would lead to national victories, particularly in the imperialistic wars that were coming, in Europe and around the world.

And they prepared for war with sport, and play. It’s no coincidence that the Olympics were revived in the middle of all of this imperial scrambling, in 1896. Though we have since created a story about how the games are rooted in friendly international competition, they were no doubt seen by many then as a proxy for battle. (Some modern commentaries on the national medal counts make this apparent even today.) And though Robert Baden-Powell’s Scouting for Boys launched a century of camping, orienteering, and general outdoorsy skills being taught to young men in Boy Scouts (hilariously anachronistic title notwithstanding), its origins were in a survival manual Baden-Powell had written for his fellow army men camped in India. It was adopted largely as preparation for future imperial warfare.

Even today, we worship those whose bodies are their primary known attributes much more than those whose minds are – at least with our money. Consider how many more people know who David Beckham is than, say, Tom Standage, or how many more watch America’s Next Top Model than the Reach for the Top provincial finals.

Corporal strength, power, even perfection, has become the ideal we seek and worship, and often the lens and language through which we describe the world. My next post will discuss how this applies to nations, but for now I’ll leave you with two images:

The heroes of their day…

…and of ours.

What do you think? Has the hypermasculine focus on physicality of the high imperial age stayed with us to the present day, or do we have a new ideal now? Do you think corporality is the primary way through which we understand and describe the world? Do you use your mind to serve your body, or vice versa?


The Modern Good Life, Part 3: The End of Progress

March 25, 2010

What is the modern “good life,” and how do we know if we are living it?  Is what we have now “good”? Can we honestly look to the past and say that the way we live now is better? And can we reasonably expect that things will continue to improve? These are the questions that started me thinking about this series in the first place.

In Part 1, I wrote about our peculiarly modern bias to action, and in Part 2 I discussed the different ways in which we can become slaves to history. In Part 3, I will address our unconscious and seemingly unshakeable assumption of human progress, and how our current historical “moment” is unsettling because it may be challenging that assumption’s dominance.

Gen Y is supposed to be more optimistic than past generations: according to a recent article in Time magazine, 88% of its members believe that one day they will lead the lives they desire. The “hope gap” (presumably the ‘gap’ is with reality) increases with age, apparently, as people get more disillusioned – but deep down we all remain, at heart, subscribers to a fundamentally optimistic narrative of our present. It is the progress narrative, never articulated better than by John Stuart Mill, its eternally optimistic Victorian proponent, when he said that the goal of progress was for individuals to live long, happy lives without physical or mental suffering, including “indigence, unkindness, worthlessness or premature loss of objects of affection.” Who can argue with that?

I’m sure many of you have heard of Whig History, the idea that humans are progressing toward an ever more liberal and enlightened society: freer, more peaceful, more democratic, more comfortable, and more convenient. Historians like to scoff that Whiggish histories are teleological, Eurocentric, and poorly sourced. We criticize the philosophies of Mill and G.W.F. Hegel, among others, who argued that modern European (now “Western”) society was located at the apex of historical development, and was its logical conclusion. We laugh that Mill and his contemporaries considered nineteenth-century British society to be the most advanced on the scale of civilizations, a trajectory based on liberal criteria such as constitutional checks on rulers and the freedom of the individual to make full use of his faculties. But in reality, we think the same thing in our own time. We know that things have been continually improving, and expect that they will continue to do so. And we expect that we too will always be at the apex of historical progress.

Amongst all of this certainty, the past few years have been a stumbling block. Suddenly, the balance of media coverage is negative. Is it a temporary setback, we wonder, or a lasting trend? We feel a deep-seated unease as a reputable voice – or collection of voices – begins to think that the past was better than the present. And the main area in which we have concerns is ethical, societal, moral. We can see that technology is advancing, making us smarter (perhaps), wealthier, and more comfortable. But we are no more able to solve society’s eternal ills – poverty, violence, want, fear – than before. New technologies, government policies, or even human kindnesses still have not managed to create our Utopia.

Of course, it isn’t rational to expect Utopia. We all know that. But secretly, we hope that we can achieve it, and we have a vision of the future as “the best of all possible worlds,” as our Panglossian friends would say. And we want to be a part of it, and we want to help it along. We have a bias toward action.

So the question becomes: has the West become a slave to its own idea of progress? I wrote in my last post that today we are unique in seeing history as linear and cumulative. But have we been fooled, and is the “progress” we have seen not really progress at all? Could our technological progress in fact be contributing to a moral decline?

This line of thinking has certainly had its supporters. Several centuries ago, Jean-Jacques Rousseau contested the established idea of progress in his time: economic development, the creation of a state and protection of private property, and the ability to live comfortably. (It appears not much has changed since the eighteenth century.) As he wrote in his Second Discourse:

Due to a multitude of new needs, [man is] subjected…to all of nature and especially to his fellow-men, whose slave he becomes in a sense even in becoming their master; rich, he needs their services; poor, he needs their help.

It certainly isn’t a powerful exhortation to buy that new flat screen TV. Though it is perhaps a given that having more things engenders a need for more things, it doesn’t seem to say much for our evolution as a species. In Toward a History of Needs, Ivan Illich writes that “The model American puts in 1600 hours to get 7500 miles: less than five miles per hour.” Most of us can walk almost that fast, with a lot less effort spent selling our souls for a salary.
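(Incidentally, Illich’s figure is just a division – and, as I understand his argument, the 1600 hours count not only the time spent driving but also the time spent earning the money to buy, fuel, insure and park the car:

7500 miles ÷ 1600 hours ≈ 4.7 miles per hour

– in other words, roughly a brisk walking pace.)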

Nietzsche continued this anti-progress train of thought in the Genealogy of Morals, deriding those who thought comfort and luxury were the end of life:

The diminution and leveling of European man constitutes our greatest danger…We can see nothing today that wants to grow greater, we suspect that things will continue to go down, down, to become thinner, more good-natured, more prudent, more comfortable, more mediocre, more indifferent…there is no doubt that man is getting “better” all the time.

For both Rousseau and Nietzsche, the economic and technological progress that had led to large societies, sedentary means of acquiring food (i.e. non-hunter-gatherer communities), and the general ease of life that Mill had in mind had caused humans to lose something along the way. That something was morality. They defined it differently, but they meant much the same thing.

In truth, I don’t think morality is declining, not even with the advent of sexting, or video games, or La-Z-Boy recliners. It’s natural that, measured against objective progress in so many other areas, the constancy of human good and evil will make us feel like failures. Because there certainly is evidence of objective progress. Are we, the middle class in a developed country, better off today than 25, 50, or 100 years ago? In a multitude of ways, absolutely: we have extended many basic rights to larger populations (de jure and de facto), have much more advanced medical care (and likely better access to it), use a host of labour-saving devices which reduce the amount of manual drudgery we have to endure day to day, have technologies that allow us to control our reproductive output (and therefore better control our careers, financial situations, and so on), and, perhaps most importantly, can access vast amounts of information near-instantaneously.

Utopia? Certainly not. But I feel pretty good about being part of a society that is free, and liberal, and generally supportive of those who can’t support themselves. And I have a recurring dream in which (dork alert!) John Stuart Mill comes to visit me in the present, and he’s pretty pleased with how things have turned out as well, though of course we still have a lot of work to do.

In an excellent article on the idea of progress, a columnist in The Economist writes that our constant striving for morality is like aiming for an “unattainable horizon,” and that the eternal battle between the forces of altruism and selfishness keeps society on an even keel (clearly, this author also has a bias to action). I think it’s more important that we keep up the faith that we’ll get there. Gen Y has it right: optimism is one of the keys to happiness. Society may not be perfect, but we have to believe we can keep improving it.

I started this post series with Madonna, so it only seems appropriate to end with the Beatles: I’ve got to admit it’s getting better; a little better all the time.

Read the other posts in this series:

Part 1: The Bias to Action

Part 2: History and its (Ab)Uses


High-Speed Rail and Globalization: On the Right Track?

March 18, 2010

Like any good student of British Imperial History, I adore trains, which is why I’m particularly excited about the current trend toward building high-speed rail (HSR) capability in the developed world. For the past 50 years, Japan and France have been at the forefront of high-speed rail innovation – today Japan has trains that run at almost 200 mph and carry over 300 million riders annually, which is impressive given that its population is 128 million – but now the US, UK and China are getting in on the act as well. I spent some time reading government policy proposals (scintillating reading!) to understand the reasons behind the plans, and whether this trend marks a new era in the history of transportation. But my conclusions were more about people’s fears for the past and hopes for the future, played out in changing feelings about trains.

The history of train travel is a rocky one. Initially heralded as one of the major breakthroughs in transportation technology (and indeed, in technology in general), travel by railroad was, in the nineteenth century, the quickest and often most cost-effective means of getting places. The effects of train travel on language, culture, and industry were profound (far too much so to get into here), not least for creating the sense of, as Marx once put it, “the annihilation of space by time.” Intercity rail travel precipitated one of the first major waves of thinking about global interconnectivity and “globalization.”

However, after a century of dominance, train travel went out of fashion. Almost as soon as cars became affordable enough to be widely accessible, in the early part of the twentieth century, they became the primary means of transportation. By 1916, automobiles – albeit rather dangerous, primitive ones – were being used for recreational purposes, and as the Ford Model T decreased in price to $360 (a bargain for the middle classes even then), the number of automobile owners rapidly increased.

Suddenly trains were inconvenient, expensive, and restrictive. (Just think about how you feel now about the TTC, but worse.) And as national funding for interstate highways and air travel steadily increased, the number of riders on inter-city railroads decreased.  Today there are few private corporations in an industry that was once dominated by Vanderbilts, and both funding and riders are scarce.

So why the return to train, in the form of high-speed rail networks, and is this a case of history repeating itself?

One of the key benefits cited by the government proposals is energy efficiency and environmental responsibility. The charmingly-titled Command Paper issued by the UK Department for Transport cites four key benefits, three of them “green”-themed: “building a robust, green economy, gaining energy independence, reversing global climate change, and fostering more livable, connected communities.” I find this ironic considering that the popular mindset toward railroads a century ago was that they took people away from nature.

In my previous life as a grad student I studied travel and tourism in the American West in trains and automobiles (no planes, though I do know a fair bit about flight attendants). I discovered that for many travellers, cars were a means to get a sense of closeness to nature that was not possible through a train window, on a set schedule, from a rail line. (I suppose ‘off-roading’ was more of a thing back then.) Early car travellers referred to themselves as “gypsies” who could wander at will, on their own “natural” schedule. Of course, the way “nature” was understood then is not the same as how the “environment” and “environmental sustainability” are understood now – and, accordingly, cars are seen much differently too – but I find the complete role-reversal of cars and trains very interesting. It indicates that environmentalism now is less about feeling a part of nature than about helping to preserve it, a much more active, and less passive, idea.

The new HSR kick is also evidence that our attitudes toward technology and industrialization have changed in a big way in the last century. In the early days, cars were seen as the “rugged” and adventurous way to travel, more natural and democratic than taking the train. (The irony, of course, is that cars were equally cutting-edge technologically, if not more so.) While technological advancement certainly always had its advocates, there were many who felt as though humans’ dependence upon it had made them weak. Nothing exemplified this idea better than the comfort and luxury of rail travel.

Today, this idea that relying on technology is bad has all but disappeared. Reliance upon technology in the developed world is now ubiquitous and barely raises a concern — until it stops working. Certainly, the concerns over the increased efficiency of personal transportation vehicles, or improvements in air transport, have been addressed by all of these documents, and high-speed rail still comes out on top for moderate-to-high-density intercity travel between 100 and 600 miles. But will a perceived “lesser-technology option” (as the automobile was understood in its early days) arrive in the next few years and siphon riders away from high-speed rail, just as the car did a century ago?

It seems unlikely, given that many of the positives first associated with automobiles – freedom from set schedules and monopolistic train companies, not having to deal with third party service providers, the appeal of the “open road” – are now mitigated for many by the drawbacks, such as high fuel prices, congestion, or guilt over carbon emissions.

 Perhaps the most interesting benefit of HSR that all of the proposals identified was what one termed “Interconnected livable communities.” It was also (to me) the most surprising. It seems that we are again talking about collapsing the gap between space and time with railroads, but now, instead of the physical distance between two points, it is the “time-space” between high-density regions that matters. These cities might actually be quite close together but still it is difficult to travel between them because of, as the report puts it, the lack of an “efficient local access and egress system” [in English, that means fewer entry and exit points].

Consider this map from the 2009 High-Speed Rail Strategic Plan, written by the U.S. Department of Transportation. Most of the rail corridors connect places that aren’t actually that far apart. And most are in places where there is already very high population density (also note: it is no fun to live in Wyoming if you like fast trains…or people).

All of this indicates to me that already-large cities connected with strategic links are going to be the drivers of prosperity in the future – a much more specific (and, to my mind, accurate) picture of “globalization” than the idea of everyone being connected to everyone else. It also indicates that direct, interpersonal contact is still critical for the future. Technological improvements in communications have a limit. The British government report, for example, does not anticipate that videoconferencing will reduce the need for intercity travel: it would initially cut travel by an estimated 30%, but the total amount of business conducted would increase as a result, and (with follow-up meetings and so on) the net effect would be no reduction in actual travel.
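To see how the arithmetic of that estimate could work out (the offsetting figure below is my own illustration, not a number from the report): if videoconferencing cuts the travel required per unit of business by 30%, then an increase of roughly 43% in the amount of business being conducted is enough to bring total travel right back to where it started, since

0.70 × 1.43 ≈ 1.0

– a reminder that efficiency gains have a way of being absorbed by the extra activity they enable.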

My sense is that high-speed rail has the potential to be a long-term, sustainably popular alternative to cars – especially if it can seem to collapse space and time again. The one constant of popular transportation technology is that it increases time-efficiency, and this factor will always win out. Time is the one hurdle we haven’t managed to conquer yet, and humans will get behind anything that helps us maximize the little time we have.

What do you think of the future of HSR? Would you use it? Do you use it now? Do you think trains or cars have the edge on a romantic, “closer-to-nature” image? And what do you think about the idea of direct interpersonal connections as the basis for globalization?


Who Will Fight For Thee? An Ode to Sewer Grates

March 12, 2010

This country is falling apart – really. At least, that’s what Margaret Wente claimed last month in an article about Canada’s ancient infrastructure, the physical underlay that allows us to live in a modern city, such as water mains, and bridges, and roads.

The trouble is, nobody wants to stand up and fight for the sewer systems and corroded pipes by shelling out the estimated $33-billion needed to upgrade them over the next two decades. Why not? Wente claims that it is a symptom of our country’s progressive “demosclerosis”: a democratic government’s propensity to hand out cash to the special interest groups that agitate for money the loudest, rather than to quieter but necessary projects, like infrastructure, that represent no gains in political capital.

Perhaps. But I suspect that it has more to do with our overall lack of emphasis today on the physical aspects of nation-building, in favour of the intangible ones. When Canadians are asked about what makes their country great, and modern, and progressive, most talk about health care, or civil liberties, or multiculturalism. Few comment on our excellent bridges or highways, or public buildings.

Improving the solidity of the built environment used to be a key element of national and imperial pride, two hundred years ago. Improvements in infrastructure are one of the few positive things that are generally associated with imperialism (though, of course, there are ways that one could quibble with the claim that they were beneficial in the long run). Good planning and solid civil engineering were considered the hallmarks of modernity and progress – and were appropriately celebrated.

Consider the London Sewage System. When it was built in the 1860s and 1870s, under the far-sighted direction of Sir Joseph Bazalgette, it was (rightly) lauded as a triumph of engineering and public health. It was an extensive project, constructed for the then-largest metropolitan area in the world, and it led to a reduction in cholera and typhoid fever outbreaks that had plagued the city for years. Bazalgette himself was knighted and there stands to this day a memorial to his genius on the Victoria Embankment.

There are Canadian examples also: the Prince Edward Viaduct was a celebrated work of art when it opened in 1918, and of course, one need look no further than the stunning architecture of the Ontario Provincial Legislature, opened in 1893, or Union Station, built 1914-1920, to see the kind of pride that was placed in public buildings in this country as well.

I can’t think of any sewer engineers who’ve been knighted recently. (If you can, by all means, send them over.) And most public buildings constructed today lack the opulence and grandeur of their predecessors. Today, functional utilitarianism and beauty don’t seem to be compatible, and the emphasis rests on the former. Consider the Victorian Abbey Mills pumping (sewer) station near London:

Abbey Mills Pumping Station

And its modern equivalent:

New sewer station

Also consider another celebrated imperial building, the Chhatrapati Shivaji (formerly Victoria) Terminus in Mumbai. It was built during the British Raj, in the last two decades of the nineteenth century, and still stands as a glorious example of functional beauty:

Victoria Terminus

Now consider Shanghai’s main railway station, built in 1987:

Shanghai Railway Station

It is all evidence that physical infrastructure today is little more than just that – infrastructure. It does not represent national prowess so much as an uninteresting feature of daily life. In fact, as Wente points out, things like water mains or electrical grids are really only ever noticed when they cease to function as they should. And no wonder: they are ugly, or uninteresting, and certainly not celebrated. Quite the opposite: I’ll admit that I too find the endless reconstruction of Bloor Street a pain – and I don’t even have to drive through it.

The root cause, I believe, is a change in how we speak of ourselves as a nation, and what we consider to be important. These days nation-building in the developed world is associated with ideals: democracy, equality of opportunity, or winning many Olympic gold medals, for example. It isn’t really building at all.

Is it that these things are no longer new and shiny (literally) and revolutionary enough to be worth our notice? Are we “beyond” physical infrastructure and public buildings as markers of progress? Is there some national hierarchy of needs (similar to Maslow’s personal one) that puts basic infrastructure at the bottom and higher-level ideas at the top of the pyramid? Or is it that we consider freedom and democracy and health care so basic, so integral to our idea of ourselves as a nation, that these examples are what populate our speeches?

I wonder. For now I’m going to be thankful that my internet connection is fast enough that I can upload this post before heading out onto our barely functional, disastrously ugly subway to go for dinner. And along the way I’m going to make a point of noticing the sewer grates, and feel proud to be Canadian.


Suitable – For Men Only

March 11, 2010

Clothing is a funny thing. Some people argue that it means nothing, and is a mere distraction from what lies underneath (figuratively speaking). Many others argue that it sends critical messages about its wearer, and obsess over what those messages are.

The most polarizing issues are always related to women: everything from whether Hillary Clinton’s sensible trouser suits make her look qualified or matronly, to whether followers of Islam should be permitted/forced to wear clothes that cover their faces or hair. I once heard a model claim that all fashion is women’s fashion, and that we only let men borrow it periodically. It was a joke, but one that implies that the control lies in the hands of women. I believe it is the opposite: precisely because they are free from all the attention, it is really men who have the power in this regard.

I wrote a paper a few years ago about how men’s fashion in the nineteenth century was instrumental in shifting feelings of “otherness” from those of class to those of gender. That is, the key differentiators in society before the 1800s were class-based, and reflected in clothing styles. After the 1800s, the key differentiators were between the sexes. Now, before all of you political historians tune out because you think I’m going to start using wacky postcolonial/postmodern/psychoanalytic/feminist arguments, let me say this: what people wear, and especially what they wear to work, speaks volumes about the values of the society in which they live.

(And, for what it’s worth, most of this post will be about men anyway.)

The nineteenth century was notable for spawning the first modern ideas about working: there was a real middle class for the first time, and it generally participated in a public sphere of manufacturing and commerce. Trade was no longer considered dirty by the upper classes; instead, it was England’s “nation of shopkeepers” that was leading the charge of modernity and Empire, and entrepreneurs were raised to the level formerly attained only by military men and the aristocracy. For the first time, hard work and professional expertise had respect, and this sense of respect bonded men together. Of course, it also separated them from women, who were rarely if ever allowed to participate in this glorious public work – they had to stay at home and raise children (which, of course, isn’t work at all, right? It’s pure joy! That’s why women don’t get paid for it!).

Thus arose the suit. Ah, the suit. That most modern uniform that signifies utilitarianism, seriousness, and piety (through its emphasis on black exterior and white collared shirt overlay, like priests!) all at once. The package that is so simple, easy and flattering that men (and those who see them) don’t even have to think about it. The modern suit was so revolutionary, after so many years of tights and funny short pants and ruffs and wigs, that one eminent historian of fashion has said that since its adoption, women’s fashion has been reduced to its imitation.

Because before the suit, all fashion was men’s fashion. Think ducks: men had to be ostentatious and showy while women merely had to be pure and able to produce offspring. And after, it was only women who had clothing that was complicated, deceptive, and silly. (Don’t even get me started on the kinds of mishaps that could occur while wearing a hoop skirt.)

So what’s changed? Is the suit still master of the professional clothing universe? I think it still represents all of the above (with maybe the exception of ‘piety’) and is still the defining answer to the question of what is appropriate to wear to work. Of course, there are signature looks (Steve Jobs and his black turtlenecks, Richard Branson’s lack of ties, and “Casual Fridays”) but these are remarkable because they stand out from the norm. The suit is so powerful because it is a uniform and gives the wearer immediate currency in the professional world because he does not need to talk about it. But women aren’t included: if a woman wears a suit proper, she stands out. If she wears a pantsuit, she stands out for being too much like Hillary Clinton. If she wears something more feminine, she stands out for that too – perhaps for overly expensive designer elitism, a la Sarah Palin. If she doesn’t wear a suit, she is unprofessional — or worse. Whatever she wears, she stands out. If you don’t believe me, check out this picture of world leaders and tell me who stands out to you.

In casual wear, of course, it doesn’t matter – the separation between work and fun is clear and thus lacks a value judgement about competence. And besides, everyone, of all classes and both genders, wears jeans. But overall very little has changed in the professional world: the classes may mingle, but the genders remain distinct.

So what? you may ask. Clothing doesn’t actually change how competent (or incompetent) a person is. Of course it doesn’t – but isn’t it interesting that as a society we still can’t get past using the outside packaging as an excuse for our real opinions? Without all of the discussion about pantsuits, would Hillary still be considered “traditional” and a “feminist”? And don’t even get me started on shoes…

What do you think? Does it matter to you what people in positions of power are wearing? Do you respect a suit more than a skirt? Do you think clothing enslaves us? If so, how do we escape?


What’s Your Personal Brand?

March 8, 2010

The last post I wrote looked at how countries are attempting to portray themselves internationally through their brands. It is perhaps a bit odd to speak of nations through the lens of branding, as though they are things that can be commoditized and “sold” like sneakers and cola. However, I believe it is part of the zeitgeist; everything these days seems to have a commercial lens, and anything can be processed, packaged, and marketed for a profit. Call it the triumph of capitalism. (Lloyd Dobler would be unimpressed.)

Because the commoditization of everything has come to make a bit of sense to us, I want to examine in more detail another concept I think is novel, and more than slightly alarming: personal branding. I do a lot of workshops about this at work because of its seeming ubiquity as a concept in the business world right now. But what does it really mean, and why is it so popular? Is it a change in how we see each other, or just an iteration of something else?

Some have criticized personal branding as emphasizing “packaging” oneself well over focusing on self-improvement. I don’t think that’s actually true. I went back to what some have identified as the first extended discussion on personal branding, a 1997 article in Fast Company titled “The Brand Called You,” by Tom Peters, to see how he positions it. Peters posits that in the late twentieth-century knowledge economy and era of the Internet, workers are no longer mere employees in others’ corporations – they are instead “CEOs of Me, Inc.” The new professional world is all about the individual. He advises readers to describe, in 15 words or less, what their unique skills and contributions are – their “feature-benefit model.” This is their personal brand.

I decided to do a bit of historical contextualization to determine if the advent of personal branding really did up the ante for artifice in the business world, or if it was just another incarnation of self-help advice. I started with one of my favourite books, Stephen Covey’s “Seven Habits of Highly Effective People,” first published in 1989. His opening section discusses the history of self-help books, and distinguishes between what he refers to as the “personality” and “character” ethics. The character ethic – a long-term approach to self-improvement that gets at the fundamental roots of behaviour in order to integrate sound principles into one’s life – dominated the literature until about the 1920s, when a new idea, the personality ethic, rose to prominence. The personality ethic, he says, was much more about strategies for achieving success by, essentially, showing others what they wanted to see. By implication, what others wanted to see may not have been one’s genuine self, and in time those who subscribed to the personality ethic might be exposed as insincere frauds. (The implicit criticism of books like “How to Win Friends and Influence People” here is hilarious.) Covey calls for a return to the character ethic – a principles-based approach rather than a superficial one.

So we see that, according to one of the leading writers in the genre, packaging oneself for success is not a new thing. Moreover, those who advocate personal branding do emphasize self-improvement. Peters in “The Brand Called You” clearly advocates building one’s skills in order to improve one’s personal product suite – but the benefits he cites are largely extrinsic. The difference, then, is not in the lack of focus on personal improvement, but in the desired outcomes from it. According to Peters, the beneficial outcomes are more power, more authority, and, most notably, more visibility. Presumably these benefits add up to personal happiness and fulfilment, but the link is not made explicit. Visibility in particular seems to be an end in itself. I suspect this is a change from the “personality ethic” kind of self-help, because that was much more focused on the interpersonal level.

To tease out the differences some more, I went back to the original in the self-help genre, Samuel Smiles’s 1859 work Self-Help, widely considered the literary embodiment of liberal-progressive Victorian morality. Contrast the personal branding mantra of visibility with what Smiles says about anonymous self-improvement:

Even the humblest person, who sets before his fellows an example of industry, sobriety, and upright honesty of purpose in life, has a present as well as a future influence upon the well-being of his country; for his life and character pass unconsciously into the lives of others, and propagate good example for all time to come.

Not exactly the same as buying a building to have one’s name on it, or founding a scholarship program, or sponsoring a business school. And the return to Smiles highlights another difference. A key part of the visibility end is that it stops at the individual, whereas Smiles advocates self-betterment for the “greater good.” National progress is the Victorian self-help goal, whereas advancement of the self is the personal branding-era goal.

Perhaps the most alarming difference is in the means by which personal branding achieves its goals of visibility – through the commoditization of the self.  Such a concept had to be American, the true home of capitalism and democracy. It seems sometimes as though the history of the United States is the story of competition. Everything is settled through democratic process, and the best “product” wins. Think: religious freedom, competition among school districts, election of neighbourhood dog catchers, etc. (You can debate this concept with me in the comments section, if you like.) These days, it’s all about the money. How can I “sell” myself in a way that people want to invest in me?

Further evidence can be found in the fact that “self-help” as a genre has gone from an offshoot of liberal political philosophy to sitting largely within “business literature,” because it is practical and concerned, at root, with the effectiveness of capitalist organizations and the individuals within them. At least, that’s where the legitimate self-help authors have gone, gearing their advice toward executives or else facing relegation to the “New Age” section.

All of this leads me to believe that a desire for fame (or perhaps notability or notoriety would be a better word) is one of the defining characteristics of our era. But what do you think? Am I taking too much from the visibility angle? Do you think self-help does reside primarily in the business section now? And are you as alarmed by people casually discussing how to “brand” themselves as I am? I’d love to discuss this with you, so please leave a comment below if something is on your mind!