History Through Rose-Coloured Glasses

November 12, 2014

Rarely have there been so many meanings so definitively associated with the same colour.

From the innocence of childhood to the sexy, all-night glow of Las Vegas neon, pink has a colourful and controversial history, associated by turns with the noble and the common, the demure and the gaudy, the masculine and the feminine. And it wasn’t even known as “pink” (in English) until the late 1600s, centuries after its purported opposite, blue, really arrived on the scene, both linguistically and in the popular consciousness.

Madame de Pompadour, mistress to King Louis XV

Some have argued that pink’s “golden” age was the eighteenth century, when it was the mode for high-fashion ladies of the French court. At that time, of course, they were among the few people who could afford the expensive dyes that coloured the fabrics they wore. Madame de Pompadour, mistress to King Louis XV, popularized pink amid a bevy of other pastels favoured in the Rococo period.

Pink continued to be associated with the rich and royal until the twentieth century, when chemical dyes allowed for its more widespread use in clothing that could be washed repeatedly without the colour fading. It was also around this time that pink transitioned from a pastel hue associated with the innocence of children to a bolder, more exotic shade. The new dyes allowed for deeper and darker versions of pink that spread around the world in the fashions of the 1920s.

The new and the neon

Buildings started to be sheathed in rose around the same time. In the 1920s and 30s, at the height of the Art Deco movement, vivid colours emerged as an alternative to the drab sameness and deprivation of Depression-era interiors. A splash of bright paint could change the tone of a whole room. And with a focus on modern, technologically enabled streamlining of form, the architecture and products of this age contrasted both with the ornate and intricate styles from earlier in the century and with the contemporary European countertrend of functionalist, Mies van der Rohe-style block modernism.

Pink on pink at the Hotel De Anza, a classic example of Art Deco in San Jose, California

Art Deco was colourful and accessible — and immensely popular. This was particularly the case in America, where, as architectural historian Robert M. Craig puts it, “Art Deco was jazzy, bright, sexy, loud, and visually appealing.” It was everywhere: from department stores to movie theatres to the new motels that had sprung up all over the country to provide for a growing motoring class.

Pink walls and pink fashions were a way to stand out and be noticed, and thus the colour was increasingly used in advertising, from splashy storefronts to the neon signs that dominated the landscape starting in the 1920s. In this way pink came to be associated with both the egalitarianism of commerce and the allure of material things: stylish perfume bottles, vacation homes in South Beach, new living room walls. Marilyn Monroe’s iconic pink dress features on the poster for the 1953 film Gentlemen Prefer Blondes. Elvis’s famous pink convertible, purchased in 1955, was seen as the height of post-war luxury and is featured at Graceland.

Gentlemen Prefer Blondes (in pink) — Marilyn Monroe in the 1953 movie poster.

Flight of the pink flamingos 

Pink is everywhere in California, as it is in many places where there are beaches, single-story construction, and a touch of the exotic. It is the colour of soft sunsets (a result of Rayleigh scattering: the atmosphere scatters the shorter wavelengths out of the light’s path, so mostly the longer, red-to-yellow wavelengths reach the eye) and of flowering plants. And in its heyday in the 1950s, it represented the triumph of modernism and new frontiers.
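(For the technically curious, a minimal sketch of the physics, standard Rayleigh theory rather than anything specific to this post: the intensity of scattered light scales as

$$I_{\text{scattered}} \propto \frac{1}{\lambda^{4}},$$

so blue light at roughly 450 nm is scattered about $(700/450)^{4} \approx 5.9$ times more strongly than red light at 700 nm. Over the long, low-angle atmospheric path of a sunset, the blues are scattered out of the beam well before it reaches the eye, leaving the reds, oranges and pinks.)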

Then its meaning shifted again. From being the bright colour of the future, it became a gaudy holdover from a bygone age. The lights of Las Vegas started to look a bit too commercial, too fake. Pink houses now stand out, “island[s] of whimsy in a sea of drab conformity,” and as such aren’t always viewed positively by the neighbours. Gradually pink came to represent the Miami Vice-like excesses of the 1980s or the wastefulness of neon tube lighting, first patented almost 100 years ago.

Nothing symbolizes the pink backlash more than the popular conception of lawn flamingos. Elegant and exotic, flamingos can be found across the globe in warm and wet areas, from India to Chile. The first pink lawn ornament was created in 1957 and was a smash hit. But by the late 1960s, the negative image of the plastics industry and the “unnatural” look of giant pink birds on the lawn led to a spiralling decline in their popularity. Now, of course, they are popular again, an ironic wink and nod to the kitsch of an earlier time.

Gentlemen prefer … pink?

This was not, however, the greatest reversal in the popular perception of pink. It is perhaps surprising today to imagine that pink was for most of its history considered a very masculine colour. Contrasted (as it always is) with blue, pink was seen as the more stimulating and active of the two, appropriate for clothing young boys, while the soft daintiness of blue was thought more appropriate for young girls (think: Cinderella’s dress at the ball). Pink remains a symbol of strength to this day in Japan, where it is associated with cherry blossoms, said to represent fallen warriors.

In nineteenth-century Britain, when military might was signalled by red uniforms, boys wore pink as a kind of lesser red. And let’s not forget that the standard map of the British Empire is coloured pink, symbolizing the strength and breadth of British power, from the Cape to Cairo, and Whitehorse to Wellington. The old pink maps cemented the idea of empire in the popular consciousness of the time, creating what Linda Colley (my favourite scholar of the British Empire) has termed “a sense of absolutely uncomplicated, uncompromising power.”

Imperial Federation Map of the British Empire, 1886, by John Charles Ready Colomb

Pink now, of course, is considered near-exclusively feminine. It is often used idiomatically to refer to women’s or gay rights issues, as in “pink-collar” work or “the pink economy.” Marketers have reinforced this image for almost seventy years, both shaping tastes in colour and hewing to common perceptions of it. Pink became a target in the 1970s, when feminists pushed back against the confines of gendered clothing. As women started to dress in a more unisex and stereotypically masculine way, pink was eschewed. As an interesting overview in the Smithsonian notes, there was a time in that decade when even major retailers such as Sears, Roebuck didn’t sell pink baby clothes, for girls or boys.

Living in a material world

2011 Color of the Year, “Honeysuckle”

The shift toward the ownership of colour could be said to have begun with Pantone’s codification of colours for matching purposes (the Pantone Matching System, first published in 1963). In recent colour analyses of brands, pink is considered warm, sensitive and nurturing, and is commonly used in products or campaigns targeted at women, such as Cosmopolitan and Victoria’s Secret. And that most enduring lightning rod of femininity, Barbie, naturally has her own shade. Barbie pink (Pantone 219C) has been associated with everything Barbie from the very beginning, including a fuzzy pink bathroom scale released in 1965 that was permanently (and controversially) set to 110 lbs.

Love in pink. Photo courtesy of Flickr user Chris Goldberg.

And yet pink remains an aspirational colour, just as it was when Madame de Pompadour wore it at the French court. In 2011, Pantone chose Honeysuckle (18-2120), a bright variation of classic pink, as its Color [sic] of the Year, citing its “confidence, courage and spirit to meet the exhaustive challenges that have become part of everyday life.” It is a colour for the zeitgeist, a necessary perk in the dark days of our latest recession, with its many pink slips. According to Leatrice Eiseman, Pantone’s Executive Director, “In times of stress, we need something to lift our spirits. Honeysuckle is a captivating, stimulating color that gets the adrenaline going – perfect to ward off the blues.”

So often viewed in opposition to something, pink can nonetheless be understood as a world unto itself. Whether seen as high or low, kitschy or elegant, soft or strong — or all of the above — it seems doubtful we’ve reached peak pink. Who knows what it will signify next?

What makes a city great? Toward a hierarchy of urban needs

April 3, 2014

A few years ago I created a conceptual model of national needs, shown below, based on Maslow’s hierarchy of (personal) needs. It has become one of the most-read posts on this blog, indicating that our identification with nations, and Maslow’s framework itself, both continue to resonate today, decades after their creation.

Some context: Maslow’s Hierarchy of Needs, for individuals

Of course, it is difficult to map the idea of the progressive needs of an individual cleanly onto a political entity. Nations, like people, continue to evolve, and the role of nations in the world is changing too. Nonetheless, the idea of a hierarchy, in which basic needs must be satisfied before one can progress to a higher level of actualization and the fulfilment of one’s whole potential, can be applied to countries in various stages of development.

Since I wrote my National Needs post in 2010, a new country, South Sudan, has been created. It is still struggling (as indeed are many other nations) with the lowest level of the hierarchy, securing territorial integrity and peaceful borders, and this remains its primary focus. The struggle for survival must come before feelings of security, esteem and morality.

Exon [Smith]’s Hierarchy of National Needs, c. 2010 (Click for a larger version)

Yet there are other geographical entities with which we commonly identify, and which are becoming more and more important as centres of culture and economy as a greater percentage of the world’s population moves into them: cities. It is estimated that for the first time in human history, more people live in urban areas than outside of them, and cities are becoming important political players in their own right.

Since moving to California in late 2013 (and spending a lot of time on the Atlantic Cities channel), I have been thinking about how fundamentally important cities are. What makes them truly great? What makes them “cities” at all, in a sense apart from the obvious population requirements? For example, I live in San Jose, which is the third largest city in California, ahead of San Francisco in both population and area, and yet its own inhabitants curiously refer to San Francisco as “the city.” Why? What has to happen for a place to transform into a world-class city from a mere urban area?

So, as I am wont to do, I created a new model to explore the needs of a city, also along the lines of Maslow. I’m calling it the “Hierarchy of Urban Needs.” Note that I am assuming that this city exists within the context of a nation that ensures the rights and privileges of, as well as general governance over, its citizens.  Some discussion of the stages is below.

Exon Smith’s hierarchy of urban needs (Click for a larger version)

Basic services 

At the most fundamental level, cities need key services delivered in an efficient and cost-effective way. (This is true even if such services aren’t necessarily paid for by the cities themselves, as is the case with, say, healthcare in Canadian cities.) This includes fire, police, and ambulance services; waste management; housing inspections to ensure the safety and affordability of housing; water treatment; and the like. For many cities, this means controlling the tax base and being able to levy taxes on the population as necessary.

World-class cities will also have exceptional healthcare options and a focus on sustainability woven through even these fundamentals, such as extensive recycling and compost programs. San Francisco, for example, deploys teams to examine what its residents recycle properly and what they don’t so the city can mount better educational campaigns.

Of course, the city’s basic operations must be free of corruption, and the city must be able to pay its bills, lest it face a Detroit-style bankruptcy or the revolving door of mayors Montreal has recently endured.

Infrastructure

Historically, cities developed around major ports and, later, railway depots. Even today, no major cities exist without some kind of harbour, airport, train station or freeway linking them with the outside world. Inter-city transportation, undergirded by solid infrastructure, is a critical component of economic progress.

Cities with poor transit are at a huge disadvantage. Jakarta, a city of nearly ten million people and the largest city in the world without a metro of any kind, has notoriously been working on an underground transit network for 20 years. Traffic congestion is thought to cost the city $1 billion a year. In another cautionary tale, it can take 12 hours to travel 40 miles in Lagos, Nigeria, and the way is fraught with crime and other dangers, a threat to legitimate trade.

Intra-city transportation is also a key factor, and how best to support the movement of people within a city is a subject of almost universal debate. Subways vs. light rail, bike lanes vs. car lanes, pedestrian-only roads and congestion pricing – these are major issues for all cities, and the thinking on public transportation keeps evolving.

This is one area in which San Jose currently struggles but has big plans for the future. My theory is that older cities, built before car use was predominant, have an easier time planning for pedestrian and bike access. Those (like San Jose) that were built after the advent of freeways and a Cadillac for every nuclear family tend to struggle to retrofit density into a downtown core whose points of interest are already quite far-flung.

And yet. San Jose is a critical location for high-speed rail between Los Angeles and San Francisco, as well as a hub for transportation around the San Francisco Bay (linking to San Francisco and Oakland), and has reserved space downtown for new transit links. It is planning for increased density to accompany the new transportation. Hopefully use of public transportation within city limits will also increase, because at the moment the city is hugely dependent on the car. Inefficient public transit routes poorly serve the population, resulting in, for example, 78% (!) of San Jose commuters travelling to work in single-occupancy vehicles.

Central Park

Infrastructure also includes sewers and other large-scale public works, including parks and other green space. More and more research indicates that green spaces make for happier communities, and many major cities can be identified by their parks alone (e.g. Central Park, Golden Gate Park, the Bois de Boulogne, Sanjay Gandhi National Park). As I’ve said before, I love sewers, water mains and bridges, personally, and think more campaigns should be fought around securing funding for them. The recent, tragic gas explosion in Harlem only underlines the need to think the way the Victorians did about how cities really run, and about how we can leave a legacy for the future that is perhaps not glamorous but is critically important. One of Toronto’s great strengths, as is the case in many other cities, is the numerous cranes on the skyline building new architectural wonders (as well as a few duds). Would that we could focus on what lies beneath the soil as well.

A brief interlude on mayors…

Thinking about these lower levels of needs, it strikes me that the level of a city’s discourse (and thus its position on this hierarchy) can often be seen through the lens of its mayoral elections. Toronto’s 2010 election (as its 2014 election most likely will) centred on the issues of transportation and waste in providing city services, leaving little room for discussion of higher-order issues (such as, ahem, drug use among elected officials). New York’s 2013 election, in which Bill de Blasio won almost three quarters of the votes, turned largely on issues of income inequality and pre-kindergarten education, the next level in my hierarchy. And the major issues of London’s 2012 election, won by incumbent Boris Johnson and his hair, were the economy, tackling crime, public transportation, and affordable housing.

Boris: Campaigning on Transit

It makes sense that the basics need to be taken care of, and continually improved upon, before a successful cultural scene can take root, in the same way that humans must be fed and watered, feel physically and emotionally safe, and feel a sense of belonging before they can achieve self-actualization.

…and then back to the hierarchy: Educational and research institutions

A strong educational foundation at every level is critical, and a well-educated population requires relative equality in the quality of schools. This is one of the main reasons cities should not fund their schools through neighbourhood taxes (and thus subject schools to the vagaries of house prices), as many cities in the United States do. A well-educated citizenry contributes more to the economy than a poorly educated one.

The presence of leading research and teaching institutions draws in talent and sows the seeds of innovation. This is why “cluster economies” such as Silicon Valley are the next big thing: they concentrate research and development in localities with populations educated enough to supply them with employees. Every one of the world’s greatest cities has a leading university at its heart, without exception – this cannot be a coincidence.

Diversity is the key here. Cities built around just one industry are like monocultures: potentially dominant for a short while, but vulnerable to disastrous decline. Take any of the grand old cities in the Rust Belt: Buffalo, for example, was one of America’s greatest cities one hundred years ago, built on a strong grain-milling and shipping/railroad industry. After almost a century of decline, it is, well, no longer great – but it has managed to slow the decline by diversifying into the education and medical fields. Glasgow, once the premier city of Scotland, faced a similar decline due to its emphasis on a resource-based economy and de-emphasis on education.

Robust arts, sports and cultural scene

This stage is where the jump occurs from a merely livable city to a great one. A safe, well-run, working city is lovely, but a city with a thriving cultural scene is one to fall in love with. In fact, social offerings, a broad category encompassing art, music, sport, religion and other community activities, are among the most significant contributing factors to residents’ feelings of attachment to their community, ranking even above security and the state of the economy.

This stage of course includes not only major municipal institutions such as museums, symphonies and ballets, but also spontaneous or smaller-scale, citizen-led activities. Being able to participate in a Sing-Along Messiah or see an independent movie at a film festival is as important as having the Bolshoi nearby, and also makes the arts more accessible to a wider population. Having Old Trafford around the corner is great, but so is the local curling league.

Doha’s Museum of Islamic Art

 

An arts and culture scene, moreover, is a key driver of tourism, which in turn feeds the economy and the general feeling of being in a place worth being. (Just imagine Paris without the Louvre, or New York without the Empire State Building.) Older cities naturally have an advantage here because of the in-built history in ancient cathedrals, palaces or public art, but some newer cities have benefited by investing heavily in creating an arts scene. Doha, once little more than an oily afterthought, is planning for the time when its resources run out by creating a strong film industry and a thriving place for modern art. It is also newly host to a major international economic forum, and will host the 2022 World Cup. (Probably.)

Openness to influence; becoming a symbolic beacon

Give me your tired, your poor,
Your huddled masses yearning to breathe free!

These words adorn the base of the Statue of Liberty and represent what I have spoken of before: being a city of the imagination. These cities are the subject of books, films, Broadway musicals, and countless daydreams, and have a romance and level of impact that serves to draw people to them, for a visit or for good.

These cities, in turn, receive their tourists and immigrants in a more or less accommodating way, taking from them the best of their cultures and using that to strengthen and further diversify the metropolis. Cuzco, Islamic Seville, and the Florence of the Medici were all historical examples of the power of such “mixing bowls” of culture: out of their cultural milieu came the starting point for a massive empire, the Golden Age of exploration, and the Uffizi Gallery. Modern equivalents spring to mind precisely because they have this pull on our hearts and minds.

The last two levels of the hierarchy are mutually reinforcing: the greater the cultural scene and economy, the greater the draw a city has for immigrants, who then enrich the culture further. It is difficult to find a world-class city without a large percentage of immigrants, who bring with them new traditions, great ideas, ambition, and excellent food. It is in fact difficult to overestimate the importance – both historically and in the present day – of immigrants to cities’ successes, which is why openness to influence and disruption may be the most important trait a city can have.

 

So there’s the model. I’d love to hear your thoughts!


The Empire Strikes Back … with Hammers

March 4, 2014

This is a post about curling.

It is also a post about colonialism and the sadness and rhetoric that accompanies the sunset of an empire.

Toward the end of the 2014 Olympics came the men’s curling final, a dramatic showdown between Great Britain and Canada. Watching in Europe, as I was, meant coverage courtesy of the BBC, with commentary by two storied skips from the grand Team GB of yesteryear. (Let’s put aside the fact that, like most British curlers, the commentators and players were all Scottish, because they all displayed a sufficient amount of “national” pride to be considered British. I will get into the whole Scottish nationalism affair later.) The stage was set: the Canadian women had beaten the British women’s team in the semi-finals and gone on, undefeated, to win the gold medal the day before. There was an enormous amount of pressure from home on the Canadian men to repeat their gold-medal successes of the 2006 and 2010 Games. The tension was palpable.

Canada ended up winning a lopsided 9-3 for the gold.

Now, the Canadians were the odds-on favourites in this match. Though curling is originally a Scottish sport, Canada is its foremost powerhouse nation. Since curling was reintroduced to the Winter Olympics at Nagano in 1998, Canada has won medals in both the women’s and men’s tournaments every time. Only Sweden comes close. This particular Team GB was also very good – they have won several World and European Curling Championships – but I doubt many people would have bet on them for the gold.

Our Boys Aren’t Like That

And yet, to listen to the BBC commentary, the victory was Britain’s almost by rights. The commentators made a valiant effort at neutrality at first, but later abandoned impartiality to lament the way the game was going for “our boys.” What was most fascinating to me, as a student of nationalism and empire, was the language they used. I’ve written before about how the Olympics brings out the very best/worst in our jingoistic selves and allows the media and advertising to fall back on hoary old national tropes (the whole #wearewinter Canadian Twitter campaign being just one example – do they not have winter elsewhere?). But I had never seen this rhetoric play out between a former imperial power and its precocious colony before. According to the BBC, the Canadian team was (and please say this with a Scottish accent in your heads, because I assure you it’s better) “a wee bit too aggressive,” “quite loud with their calls” and “not as polite as some of the other teams.” At one point, jokes were made that the Canadians’ shirts were too tight — or perhaps their biceps were too big? It was all just too masculine for Britain! “Our boys aren’t like that.”

 

Canadian curling skip Brad Jacobs: too much muscle mass and yelling for Britain!

 

Uncouth colonies! How dare you go to the gym and yell at the rink and celebrate your victories! It was a distant echo of the accusations that have always been aimed at settlement colonies, like Australia and Canada – and internal colonies, like the untamed “Wild West” within the United States – as justifications for the continuation of central control. Australia, incidentally, has never shaken off its image as the raucous outpost of empire “Down Under.” (Google Suggest says: “Why are Australians so…” “Racist? Obnoxious? Violent?” Notably masculine traits, and not in a good way.)

It is odd that the British should still be falling back on this language. Perhaps sport commentary, like holiday foods, preserves tradition longer than the everyday. After all, it is hardly news that the games that originated in the former imperial capitals have since spread around the world and been mastered by foreign nationals to a far greater degree than those in the home country. Golf, a typically Scottish exercise in hitting objects with sticks, has been perfected by Americans like Tiger Woods or Fijians like Vijay Singh. Cricket is now the almost exclusive realm of South Asians. And then of course there is (sigh) soccer, an originally English sport which is now dominated at the international level by South Americans and Southern Europeans, much to my biennial chagrin.

Rugger for the Empire

Perhaps the general British population is now past the point with these sports that they feel they should win, as the original players. But that is patently not the case with every sport. For comparison, I thought a look at another English game – rugby, a product of the Victorian English public school system – would be interesting. Rugby spread about as far as the former settlement colonies of Australia, New Zealand and South Africa (though really not much further, to look at the top teams), and my hypothesis is that British commentary would deem those foreign players rough and aggressive as well. Indeed, a short search of British news outlets finds the formidable NZ All Blacks masters of “thuggery” and the English team still fending off accusations of being hampered by its antiquated class system and uselessness on the pitch. One author, a former English international rugby player, talks about how the “relentless,” “ruthless” All Blacks laughed at him and assaulted his manliness when he twisted his knee, and how a recent match between the Aussies and the All Blacks was “a frightening gauntlet thrown down to all the players in the northern hemisphere.” You can’t make this stuff up.

 

The New Zealand All Blacks: to the English, “all things dark and Kiwi”

 

This language is competitive and familiar, with overtones of parent-child conflict. The same language was appropriated by the colonies themselves to justify their independence from Mother England: “You’re right: we are stronger and healthier and more willing to get our hands dirty, so we’ll have that control of our own government now, thank you.” Canada and Australia in particular used the physical superiority of their young men as evidence that the centres of empire should shift to these places, where willing hands were stronger at carrying its mission forth. As one Canadian Governor General once said, “It is in climates and countries where the white man may multiply…that we must look for the strongest elements of Empire, and it is only at the Cape of Good Hope, in British North America, and in Australasia that we find these conditions realized.” And so it was that British men became stereotyped as effete weaklings more interested in their cravats than the serious business of governing a plurality of the world’s population.

And we’re still talking about it, a century later.

Hammer Time

In curling, the team that gets to throw the last stone in each end (and so has the best opportunity to score points) has the “hammer.” At the moment, the imperial hammer lies with the United States. And yet Olympic jingoism was muted this year in the US: various news outlets decried the “step back” from previous triumphs, with fewer medals and some surprise podium shut-outs. Much national hand-wringing and poor sportsmanship ensued, perhaps signs of an empire uncertain of its own strength.

A sign of decline? Stay tuned for accusations of China’s uncouth aggression.

Oh wait…

US News Reports of Chinese Aggression


Scandal, Scandal! Lisez Plus Ici…or Not

June 7, 2011

What is it with the French?

Despite the puritanical Anglo-American attitude toward sex that supposedly stifles our expression of sexual content in North America, the French press is muzzled to a far greater extent than our own. Titillating details of adultery, hypocrisy and intrigue remain untold. As one weekly puts it, “News always stops at the bedroom door.”

There has been a wave of self-examination on the part of the French media in response to the recent scandal involving former IMF managing director Dominique Strauss-Kahn, a hotel maid, and a rape charge. Amid this, Matthew Fraser, formerly editor of the National Post and now an academic in France, wrote a thought-provoking explanation of why the French media clam up just when politicians’ private sins and indiscretions could be selling millions of papers. He describes modern France as a guilt-free land of entitlement where power essentially allows the ruling elites – historically monarchs, but now politicians and top-level bureaucrats – to do whatever they want without fear of it being reported. And even if it is reported, they respond with a Gallic shrug as if to say, “And?”

While I’m not sure I agree that a culturally Catholic country can be devoid of guilt (!), or that French journalists are mostly unconcerned with facts (another argument Fraser makes), I am intrigued by his remarks on privacy. In France, privacy trumps freedom of speech. In Canada, the US, and especially Britain, it is just the opposite. Britain doesn’t even have a formal privacy law; thus, newspapers tend to print first and ask questions (or defend against a libel claim) later. Case in point: my favourite footballer is currently embroiled in an adultery scandal that he (unsuccessfully) attempted to quash before publication with a full court superinjunction. The matter even came up in Parliament.

Going to such lengths to stop the presses seems ridiculous, but on the other hand, once a story is out, and has been seized upon and exaggerated beyond recognition by numerous blogs, tweets, and other retellings, the damage is done – even if the content is inaccurate. Such lengths are standard in France. The French legal system, in order to treat all citizens as equals before the law, grants everyone the same level of privacy. For famous people, this amounts to establishing legal walls which severely limit the stories that can be told by the official press. There are cultural walls too, which results in a lot of open secrets in France that are never officially acknowledged.

The Public Face and the Tipping Point

Do we really need to know all the gory details? Perhaps we Anglo-American types have baser instincts for needing juicy gossip, because I suspect that if the French public were really clamouring for a story, the media would give it to them, particularly in an age when newspapers are going bankrupt on a weekly basis. But it is difficult to argue that salacious tales of seduction by the ruling elites are really essential information for the public at large.

Unless, that is, they reflect poorly on a leader’s judgment or character. Does personal biography matter? So asked the New York Times recently, in an interesting series of short opinion pieces that explored how much we really need to know about our elected officials. Should they be considered differently because they are famous? The general consensus is no. Should they be considered differently because they are powerful? Absolutely. Hypocrisy and corruptibility are certainly unattractive characteristics in figures of authority, and even I will admit to a healthy sense of schadenfreude when an undeserving hero is brought down by an enterprising journalist. The trouble arises when determining what information the public needs to judge a public figure’s accountability. What is the line between a public role and the private person? Are both real? Are both fair game for reporting?

An important duty of the media is to hold public figures to account for their actions. Sometimes they don’t go far enough. Fraser writes that in France:

…there [is] a legal barrier between private and public lives — though when Mitterrand installed his parallel family in a state residence at taxpayers’ expense, the French media still observed obedient silence.

Then-President Mitterrand’s tacit second family may not have been newsworthy, but there is evidently a tipping point, and one that has been reached recently: with the explosion of the DSK scandal in all its gory detail, particularly the charge of rape, a line was crossed and the media floodgates opened. Several prominent French women have since opened up about the sexual harassment they have faced from politicians, colleagues, and others. It’s a dialogue that needs to be had, certainly, in order to advance women’s rights in France and break down one more barrier that prevents women from speaking up.

It is the job of the media to advance that debate, and perhaps they can do so most persuasively by bringing in anecdotal evidence of famous persons and their misdeeds. The joy and curse of leadership is the opportunity to set an example for others. Those in the public eye are often leaders, by virtue of their skills, hard work, or simply that others look to them for guidance. As such, they are not mere private citizens, and their actions – all of them – deserve scrutiny. Scandals show that leaders are human too, for better or worse, and knowing about them helps the public evaluate which leaders should stand and which should fall.


It Takes a Village: Why Not Outsource Childcare?

March 14, 2011

The 100th anniversary of International Women’s Day last week got me thinking about how glad I am not to be Betty Draper. Yet despite our advances, the promise of happier people – which of course includes happier families – has not been borne out. The feminist movement has made great strides toward equality, but often at the expense of children, many of whom now grow up in an environment with no parents at home. We could debate at length why so many families feel the need to have two working parents (is it that corporations no longer pay a “family wage”? or have standards changed, so that families now believe they need more things, bigger houses, etc.?), but it would not alter the fact that most families have not substituted a father working all the time – and a mother at home – with two parents alternating working half the time. Throw in a divorce rate hovering around 50% in the Western world, and single parents who have no choice but to work long hours, and the result is millions of children with almost no parental direction for much of the time, let alone quality time with two parents.

One of the enduring themes of this blog is the increasing over-specialization of work, study, and entertainment, but I have yet to touch on the arena of parenthood. So allow me to play Jonathan Swift for a moment with my own modest proposal: outsourcing childcare to those who can do it efficiently and – most important – effectively.

Why not outsource parenting? We seem to have made most of the rest of our lives as efficient as possible. Instead of each of us owning farms that grow all our own food, we have created supermarkets and other supercentres that sell not only food but everything from pharmaceuticals to car tires. Millions of office drones sit in cubicles doing the white-collar equivalent of screwing a bolt into a chassis over and over for eight or more hours a day, the epitome of over-specialized corporate work.

And childcare itself has changed from the days of one parent teaching her young how to get on in life. Public schools were established 1,000 years ago to teach Latin to poor children who could not afford private tutors. Today it is a legal requirement in most countries that children spend their weekdays in classrooms full of other children. (And most do: the latest statistics for homeschooled children that I could find put the number at only about 3% in the United States.) We have already outsourced the majority of education to professional teachers, from the fundamentals of literacy and numeracy to advanced calculus and classic literature.

At an even more basic level, many working parents outsource childcare to day cares, nannies or relatives. Crèches, the forerunner of modern day care, were established in France in the 1840s near factories so working women could drop their children off there during the day. Today they are everywhere. As the percentage of working women (in Canada) aged 25-54 rose from around 50% in the 1970s to over 80% today, there was an accompanying rise in the number of children in non-parental care.  In 2002, 54% of Canadian parents had someone else look after their children during the day, up from 42% in the mid-nineties. In the U.S., almost two-thirds of pre-schoolers are in non-parental child care.

So outsourcing our parenting – if I can be forgiven for using such a cold, economic term – is certainly palatable to the majority of parents, at least some of the time. And there is most definitely a broader, though less quantifiable, need for it. I needn’t go into the many social ills connected with a lack of parental influence, attention, or role-modelling during childhood, as these are well known.

There are many bad parents out there, but while we are quick to want to get rid of other minders who are ineffective, like teachers or nannies, social and biological conventions dictate that it is a lengthy and difficult process to “fire” parents. Leaving children exposed on mountaintops or in the care of a nunnery (in which something like 80% of the unfortunates dropped off died anyway) has gone out of fashion in developed countries, except in certain safe havens like Nebraska, so instead they remain with bad parents, or in foster care, which for most is not the optimal solution. Even parents who love their children can make bad child-rearing decisions with the best of intentions.

But what if the default option for raising children, like public schooling, was communal (or private) care by qualified parent-like figures? The right to “home parenting” (like home schooling) could be awarded only to those who are qualified to practice it, with regular supervision by a central body. Consider: specialist “parents” rearing children in groups is hardly a radical idea. The old African proverb about a child needing more than one knee, or the much more famous one that serves as the title of this post, indicates that our modern way of raising children is little more than a hiccough in the trajectory of human history.

Most parents raise only a few children, but almost all say that it gets easier the more they have, as they build experience and knowledge. Specialized parent substitutes would have the benefit of raising perhaps tens of children, and, what’s more, they would love it, because it would be their career of choice. Children would also have the benefit of a diversity of tried-and-true, centrally vetted and approved child care methods, culled from what has been proven to work well internationally and throughout history — call it a “best practice” approach to parenting. Just think of what costs could be reduced or eliminated in a society with a higher proportion of well-adjusted children – everything from healthcare (therapy and counselling) to policing and incarceration costs.

Clearly, this is not likely to happen anytime soon, and I no doubt open myself up to charges of everything from heartless communism to wanting to run state finances into the ground by proposing elaborate centralized childcare schemes such as these. But consider: we wouldn’t trust spinal surgery to someone who has never done it before and who would spend half the time we’re in the operating theatre off in corporate meetings somewhere else or on his BlackBerry. We wouldn’t want an unqualified engineer building a bridge we have to drive over, especially one who laid the foundations on almost zero sleep. Yet we allow complete amateurs to raise their own children armed with little more than evolved instinct and maybe a copy of Dr. Spock. Does that really make more sense?


Silence and Schematics: The Things You Don’t See

December 16, 2010

In my last post I wrote about context and perspective in mapping, and the biases that are inherent in the information presented in different kinds of maps. Biases, of course, can be dangerous because we generally trust the information maps give us. They are more powerful for their apparent objectivity. The science behind them is sound, we think – after all, cartography is based on empirical data.

But just as maps can inform us, they can also make us ignorant – of context, of specific details, and of what we don’t know – even while they’re giving us other information. It isn’t just what we see in the frame that matters, but also what we don’t see, what’s left out. In conveying information, art can be as important as accuracy, and sometimes even more so.

Most early maps contained a lot of information. When little was known about the area beyond what had been explored, cartographers would create a sense of danger and excitement by inserting allegorical images, fantastical creatures, or mythical mountain ranges. They would decorate the frames with pictorial Biblical references, or symbols of their nation’s prowess at exploration and conquest.

A very busy map of Africa from the 1600s

In the above (relatively complete!) map of Africa from the 1600s, note the prevalence of mountain ranges and large rivers (that don’t really exist) and the animal drawings used to take up space. Also note the many decorations of ships in the ocean around the frame (side note: web address watermark not included on the original). What is silent? The cartographer’s ignorance – about the interior topography and other geographical markers. But a casual observer then would not have known this.

It was considered a great leap forward when, in 1759, cartographers – influenced by French mapmaker Jean Baptiste Bourguignon d’Anville and the Enlightenment tradition dictating that all maps be empirically verifiable – began to leave blank spaces where precise information about the areas they were mapping was lacking. The practice served to encourage new forays into the “unconquered” and “uninhabited” areas depicted, to determine, for example, the as-yet undiscovered mouths of rivers or the potential treasure/glory/conquest that lay beyond established borders. But primarily these blank spaces lent increasing credibility to what was shown (whether it was accurate or not), by silencing everything else.

Accentuating some pieces of information over others with emphasis and silence grew in popularity even further as the centuries progressed. The most common world map we see, for example, privileges the northern hemisphere over the southern through the use of Mercator’s projection. It also puts the Western world – whether Europe or North America – in the centre of the frame, relegating all other areas to the peripheries.
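As a rough aside on the mathematics (standard Mercator geometry, not something the original argument depends on): the projection sends latitude $\varphi$ to the vertical coordinate

$$y = R \ln \tan\left(\frac{\pi}{4} + \frac{\varphi}{2}\right),$$

which inflates areas by a factor of $\sec^{2}\varphi$. At Greenland’s latitude of roughly 70°, that factor is about $1/\cos^{2} 70^\circ \approx 8.5$, which is why Greenland can look comparable to Africa on the familiar wall map despite being around one-fourteenth its size.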

"The Queen's Dominions at the End of the Nineteenth Century"

In the map above, the bright red colour of Britain’s imperial territories contrasts with the neutral colour of other lands. Islands of small geographical significance jump from the page with red underlines and heavy black labels indicating that they are strategic refuelling outposts, places that ship spices back to Britain, or simply more territory in red. Mercator’s projection is used to great effect, enlarging North America even above the bounds of the map’s frame, at the expense of the southern hemisphere.

It is all intended to provide a sense of a vast, interconnected Empire. While looking at this, viewers might fail to notice the absence of information not related to Britain’s imperial conquest. About other lands, the map is relatively silent, because they are not the focus.

Maps are now used for all kinds of things – everything from directions to websites or thoughts. The proliferation of maps has tended to swell the number of those used for a single purpose, and the trend seems to be toward more specificity but less context.

Consider subway maps, most of which are a legacy from the modernist era. They fall squarely into the “art” over “accuracy” way of conveying information, and are characterized by highly stylized lines, multiple colours and use of sans-serif fonts. The most famous, of course, is Harry Beck’s map of the London Underground, which dates to 1931. Its genius lies in its abstraction, its ability to draw order in the form of clean and easy-to-read visuals from the confusion and complexity of the actual system. Compare the official underground map with the actual map of the subway stations from above the ground:

Schematic Tube Map, Zone 1

 

Tube Lines Mapped to Actual Geography

It takes a certain genius to create order from chaos in a schematic subway map; no doubt this is the reason these maps are such iconic art pieces, found on buttons, t-shirts, and posters the world over. It’s fascinating to me that they are so simple and so focused – and yet divorced from the actual geography they represent. Almost every major city is the same.

Paris:

Paris Metro

Washington DC:

Washington DC Metro

Moscow:

Moscow Subway Map - Like an Alien Creature

Even maps of New York City’s frenzied system are relatively simple. But sometimes accuracy wins out over art. In 1975, the New York City transit authority determined that the map it had been using to that point was too abstract, and commissioned something that would line up more closely with the streets above ground. (You will find a fascinating interview with the designer of the 1979 map, which was only just retired a few years ago, as well as several old subway maps from NYC, here.) Yet even this more “accurate” and “realistic” map has some deviations from reality: Manhattan, and lower Manhattan in particular, has been expanded to accommodate the landmarks and subway lines that all seem to converge there, while Brooklyn and the other boroughs are made relatively smaller than their actual size.

It would seem that for clarity or for a great story, some alteration is always necessary, and a bit of silence too. No map designed to emphasize transit lines could hope to show every street, and of course designers realize this. People are perhaps more willing to put up with silence and abstraction in maps now because they are used to them, and because maps are no longer expected to be geographically accurate to be authoritative. It’s an interesting trend that points to our increasing ability to cope with the abstraction and de-contextualization of cartography, even as the broader minimalist modernism movement appears to be winding down (the ever-popular clean lines of IKEA products notwithstanding). What does it mean for the future of maps? Will the definition of a map become ever broader as we incorporate variations from site maps to schematics? Or do we need a new name for this kind of information vehicle altogether?

This post is part two of a three-part series on the past, present and future of mapping. Check back for a wrap-up later this week.


Encore, Encore! On Music and Unpredictability

November 18, 2010

I attended a remarkable performance at the Toronto Symphony Orchestra last night, and after a partial standing ovation, I was surprised to discover that we would be treated to an “encore” of sorts. Naturally, as is now the custom, it was not a repeat of anything we had just heard, but a different piece entirely. I recalled other acts I’ve seen where encores were welcome and the pieces increasingly popular (for example, George Michael, who did three, each containing songs better known than the last). There were others where encores were notably absent, and the audience felt almost as though they hadn’t had their money’s worth from the evening.

I assumed it was a growing trend, this repeated-encore thing, perhaps showing my bias of believing my contemporaries far sillier than our ancestors in putting up with and propagating it. Some research, however, has proven me wrong on this score. According to Oxford music historian Michael Burden, giving “encore” performances in fact dates to the early eighteenth-century Italian opera circuit in London, when audiences would call for a repeat of an aria by a particularly good prima donna or primo uomo – sometimes right after the initial performance of the piece itself. This means audiences, who had already heard the main theme twice (as per the common ABA da capo format of such music), would ask for it again, and sometimes multiple times, with increasing ornamentation each time from the singer. It all got to be a bit much for some opera-goers, fatigued by performances that were already getting to be increasingly long, sometimes running to one o’clock in the morning. (No doubt this was especially hard on those who only attended the opera for fashion’s sake.) It also became too much for many singers, who would often become exhausted and even have to take an extended break to rest their voices. Yet those who did not capitulate would be punished, sometimes for years, in the form of hissing amongst audience members and derision in the fashionable papers.

Thus, a tradition was invented. It appears we are now able to exercise some restraint in our calls for “encores,” and yet we still expect them. It is part of the performance, the elaborate dance between the musicians on stage and the audience. We are all performers now – we play our parts as appreciative audiences with the requisite ovation, even perhaps the standing sort – over the course of an evening. It can be tiresome, all this pageantry, when one might simply prefer to attend a concert, hear the lovely music, pay due appreciation, and depart. (And please feel free to debate with me in the comments section whether you believe standing ovations to be too common and expected – as I do – or audiences too stingy if they fail to leap to their feet – as I’ve heard.)

But the pageantry is now one of the only defining features of live music, encores included. The music is usually not new to us, as it was to eighteenth-century opera-goers. We can hear it whenever we like. So why attend a concert in person when we all have access to world-class recordings of any imaginable piece we would want to hear at the click of a button? Why bother with the expense, the inconvenience of travelling to and fro, the irritation of listening to hacking coughers rattling lozenge wrappers in the seats behind us, when we can simply enjoy the same music in surround sound with sub-woofer enhancements from the comfort of our own homes?

It’s the unpredictability, the multi-sensory experience, the feel of being in the audience. Pick-and-choose music downloading programs like iTunes (and of course Napster, LimeWire, and the like) have brought the recording industry to its knees. They’ve also hampered the ability of artists to choose how their music will be enjoyed (i.e. in the form of track layout on albums, etc.). But it is the appeal of live music, with its surprises, unpredictability and interactivity, that will ensure the continuation of the music industry. Differentiation will come in the form of the unexpected, even if we as audiences expect some kind of extras to make attending worth our while. We will ask musicians to push the limits of how we experience music. After all, as Burden points out, the whole idea of an encore is “not simply to hear it again, but by definition, … to hear it differently.”

We might as well expect more of them. Encore.