A Communism of Pain

April 10, 2011

When I was younger I thought often about the idea of a communism of pain.  If all humans were somehow linked to the extent that pain could spread itself out among many, what would be the net effect at the individual level? How much pain – in terms of an impossible-to-quantify objective amount – is out there in the world? Would the extreme suffering of the few spread out to a chronic, if manageable, level of pain for the rest of us? Or would it, distributed amongst the billions of humans on the planet, amount to almost nothing in a single one?

Of course, I understand that pain is a biological imperative, our bodies’ way of telling us that something is wrong and that we should stop whatever we are doing that is causing it. But from a purely sociological (or maybe political) perspective, what would be the result of averaging it out? Perhaps equal distribution wouldn’t be optimal – after all, communism in theory espouses taking from each according to his ability, and giving to each according to his need. Varying pain thresholds might in some way be taken into account. Or perhaps those most in a position to inflict pain could be those who felt it most deeply. (No pain, no gain, as it were.)

Actual sharing of pain through embedded receptors or similar technological enhancements is more in the realm of science fiction or post/transhumanism than reality at present. But empathetic pain-sharing does in fact exist. Recent research has indicated that the same areas of the brain are activated in those observing someone in pain as in the actual sufferer. In both cases, our anterior insular cortex, the area that monitors how we feel about things inside our bodies, and the anterior cingulate cortex, the part of the brain that processes emotions and attention, are engaged. Moreover, the empathetic response is greater the higher the level of affection for, or perceived identification with, the sufferer.

Pain expert Sean Mackey theorizes that pain empathy played a role in mammalian evolution by signalling which members were in distress, so a pack could stick together, heal together, and prosper. Noted primatologist Frans de Waal would agree. He studies bonobos, the great apes scientists now believe are as closely related to humans as chimpanzees are. He has concluded, after studying bonobos extensively, that empathy is a much more basic instinct than many consider it to be, and much less intellectual. Rather than a rationalization about fairness, or an act of imagining oneself in another’s position, empathy on his account is something deeper and simpler. His theory explains why infants show empathetic responses to other children crying, but only learn theory of mind, the more intellectual basis for understanding others, around age four. Incidentally, a physical basis for empathy also explains the contagious nature of yawning, as he has explored in other research.

A communist bonobo (picture slightly adapted) - does he feel our pain?

Bonobos are also noted for their very sexy way of solving all kinds of problems, and for generally displaying much more cooperative and less competitive behaviour than that of chimpanzees. This is significant because the narrative of competition has coloured much of the modern period’s image of itself, and its image of the way early humans lived – nasty, brutish, and short, as Hobbes once wrote. De Waal locates the competitiveness myth around the time of the Industrial Revolution, as a necessary backbone for the proto-capitalist system that was then forming, and which has now come to dominate global economics and politics.

The political bent of the concept might be significant. A growing number of studies have pointed to those on the more liberal left end of the political spectrum being more open-minded and thus more empathetic than their more conservative counterparts. Tolerance, inclusiveness, and a passion for social justice have recently been linked with both political liberalism and high levels of empathy. (One might ask if this implies that communism is a political representation of empathy, which could set off hours of debate, I’m sure.)

Given the general trend toward a more liberal way of thinking and behaving over the past hundred or so years, and the ever-expanding list of encounters with “others” that telecommunications, air travel, and globalization have allowed us, is it possible that humans are in fact more empathetic today than they were, say, when Victoria ruled England? Or when Arthur did? Would the apparent recent setback of declining empathy and rising conservatism then be a blip, or a reversal?

And if we are more empathetic now, does that mean we inflict less pain on others than in the past?  Sadly, I believe conflicts arising out of urbanization, a skyrocketing global population, and scarce resources – coupled with the arrival every year of new ways to maim and torture others – would signal otherwise. After all, it appears that humans also share enjoyment of schadenfreude, the pleasure in seeing others’ misfortune (apparently as much as a good meal). Similar to the way being in a group can magnify feelings of competitiveness, it can also augment satisfaction in seeing rivals fail. This enjoyment also carries a political twist: in one study, Democrats were found to be secretly happy when reading about the recession, thinking it might benefit the party at the next election. And the stronger the political identification, the stronger the sense of schadenfreude.

It seems, then, that we are hardwired both for empathy towards those in pain, and a delicious satisfaction with seeing it. Perhaps a communism of pain would therefore make us more sensitive to the suffering of others, but all the more likely to enjoy it.

(Note: Almost all of the articles linked to in this post were fascinating to read; I’d highly recommend perusing the ones on primates and schadenfreude in particular.)


Knowledge and Power in a Skeptical, Connected World

March 18, 2011

Who do we listen to, and why? In an age when we can find almost any information quickly, what does it take to be a voice that rises above many others? What kind of power does this represent?

I read in the latest edition of the Harvard Business Review that in 2011 companies are anticipating an increased focus not just on broadly saturating target markets with facebook ads and silly “viral” videos, but on targeting “influencers” as part of their “social media” strategies. These individuals are those who shape culture and get other people on board with new trends and ways of thinking. Oprah is an influencer. Radiohead are influencers. Steve Jobs is an influencer. And a lot of random bloggers, tweeters, and other social media characters whom you’ve never heard of are influencers, and they are going to be targets of corporations because they are both cheaper and perceived (perhaps) as more authentic shills than their more famous counterparts.

You can be sure that by the time something gets elevated to the level of an HBR trend to watch, it has already set the Internet abuzz. Further research on “measuring influence” yielded far more twenty-first-century social media examples than any others. It seems that organizations have (finally!) learned that a “social media strategy” on its own is of little benefit without real, grassroots endorsement. However, I’m more interested in what “influence” looked like in the past, before it morphed into a social media concept to be made into the next corporate buzzword, and what characteristics have stayed with perceived “influencers” since.

It seems it is a tricky thing to quantify, or even define. An article I discovered about the role of influence in economic history discusses how it is closely related to communication, but can range from impression to force in the amount of strength it implies. The other critical factors in determining long-term influence were time and space. The example given was Saint Thomas Aquinas, whose ideas were central to much medieval thought (throughout the Latin-speaking world, at least), but are relatively inconsequential today.

Influence and Power – and Money

Influence, as the article points out, is closely related to power. One of the concepts that has stayed with me since learning it in an Organizational Behaviour class years ago is that of differences in the kinds of power wielded by individuals. They can have positional power – power stemming from one’s role as, say, a manager, a parent, or some other official and likely formalized figure of authority – or personal power, which stems from an individual’s character or beliefs and tends to be more informal in nature. The difference between them parallels that between practical/mental authority and emotional authority, and the general consensus is that emotional authority goes much further in influencing others because it does not rely on a potentially temporary and wholly external power differential the way practical authority does.

When I consider what influence looked like in the past, it seems there was little distinction between the two types of power mentioned above. Perhaps the theory I just articulated is fallout from our comparatively recent fixation on merit over birth status as a rationale for power. Indeed, the ideas (and names associated with them) that have survived best throughout history to influence many others have always been backed by great financial power. Take religion, for example, which has been perpetuated by wealthy organizations that held positional power in their communities. The familiar expression about history having been written by the victors speaks to the tendency of dominant individuals, families or states to justify their authority with historical precedent. And most of the theories in every field that are still with us today were dreamed up by men with solid financial backing and the ability to spend large amounts of time reading and philosophizing. (Even Marx lived off the generosity of his bourgeois co-author, after all.)

But today that is changing — to an extent. YouTube, twitter and other media that celebrate memes and all things viral can make ordinary people famous astonishingly quickly. Such fame is often fleeting and of dubious value to society, but savvier types can sometimes parlay their sudden name recognition into the more lasting sort of influence (Justin Bieber, anyone?). This can happen because influence is magnetic and self-perpetuating. Mommy bloggers who are already widely read and respected are natural candidates to push brand-name diaper bags or whatever else new mothers supposedly need and want. That corporations want to latch onto such people is hardly surprising – they are merging their corporate power with bloggers’ influence in new markets, and the bloggers want to in turn increase their own profile through association (or maybe just get free products).

Self-perpetuating influence applies to companies as well. The new techie term for this concept is “network effects” – as the Economist defined it recently, “the more users [services like facebook, eBay, etc.] have, the more valuable they become, thus attracting even more users.” Whereas in the past money and power begat more of the same, today we can add hits and click-throughs to the mix.

Knowledge Brokering from Darwin to Wikipedia

The common link between these people and corporations is the way they treat knowledge. They are what the corporate world now refers to as “knowledge brokers,” a title that refers to the ability to clarify and share information with different audiences or spheres, and determine what the common elements are between, say, Paul Revere, corporate marketing, and the AIDS epidemic. Knowledge brokering (and a bit of luck) is what separates widely-read bloggers from those who write solely for themselves (whether they want to or not). It is the ability to write things that people find interesting and useful. The CIA is investing heavily in such people after a series of incidents that demonstrated how segregated and impotent its different bodies of knowledge were.

Knowledge brokering is more than simply aggregating (though smart aggregators of information are helpful too). It is the ability to analyze and draw connections while becoming a trusted conduit of information. Knowledge brokers are perhaps an antidote to the pervasive and growing tendency to overspecialize, because they connect many specialists and their ideas with a broad audience. They are the reason we know about Darwin’s ideas. Or Jesus. Or celebrities’ latest faux-pas. Wikipedia is one giant knowledge broker, with an army of volunteers – knowledge brokers in their own right – mobilized on its behalf. That is power.

But what makes us listen to them? I suspect the key is authenticity. A lingering distaste and a keen sense for corporate marketing disguised as something else define our era. Perhaps the main difference between influencers from the past and those of today lies in the type of power they wield, as I outlined above. Personal power – like that wielded by bloggers and Oprah – is seen as more trustworthy because it lacks an agenda (whether or not this is true). Positional power is usually distrusted simply because of what it is. We only listen to Steve Jobs because we truly believe he has our best interests – in being cool and technologically savvy, regardless of the product – at heart. In contrast, many Americans discount everything Obama says because they believe he merely wants to increase his own power and impose his secret socialist agenda on an unwilling populace.

Is this a reflection of our philosophical allegiance to free-market democracy? Are influence and power of all kinds just the ability to get people to like and trust you? If so, many corporations are going to need a lot more than “influencers” on their side.

Food for thought: How do those with positional power gain credibility? Is this knee-jerk anti-authoritarian mindset in society as prevalent as I say it is? Do people who seek to perpetuate their influence by getting behind corporations somehow weaken their own authority (i.e. do they lose their ‘cred’)? Hm.

MARGINALIA: Though I did not explicitly link to it in this post, the Economist’s Intelligent Life ran a fascinating piece recently on The Philosophical Breakfast Club, a group of four Victorian scientists who were definitely knowledge brokers (and nifty polymaths) and who were key influencers in their time. I’d recommend reading it.


Coffee vs. Alcohol: A better brew?

February 28, 2011

Almost everyone enjoys a good brew, but some brews are more acceptable than others, it seems. Around the world, coffee consumption has far outstripped that of alcoholic beverages, with around 2.9 pounds of coffee – roughly 30 litres once brewed – consumed per person, on average, in one year. Compared with a worldwide average of 5 litres of alcohol per person per year, it seems we are much more inclined to be hitting a Starbucks than a bar on an average day.

Global average alcohol consumption

Coffee is also a critically important trading commodity, second only to oil in terms of dollar value globally. I won’t get into the cultural influence of Starbucks, Tim Hortons and the like, but the impact on consumers and on the business world has been significant – much more so than any individual brand of alcohol in recent history.

Coffee is a relatively modern beverage. There is no Greek god of coffee, like there is of wine (though if there were, no doubt he would be a very spirited half-child of Zeus who enjoyed bold flavours, waking up early, and being chipper). The first evidence of coffee drinking as we know it today is generally placed in the fifteenth-century Middle East. Evidence of wine and beer consumption, in contrast, dates to 6000 BC and 9500 BC, respectively, or even earlier. Yet for such a young contender, coffee’s rise in popularity has been impressive.

No doubt part of this rise in Europe was related to the appeal of the exotic, like chocolate and the other novel goods then arriving from distant lands. It is also likely that, like sugar, coffee was just tasty and appealing in its own right, and those who tried it liked it and wanted more. And certainly there is the social aspect, the rise of coffeehouse culture across France and Britain in the eighteenth century, which brought together politics, business and social interaction in a public forum as never before. The purported offspring of the coffeehouses, such as the stock market, French Enlightenment ideals, and even democracy, were significant. In a TED talk I watched recently, author Steven Johnson slyly remarked that the English Enlightenment was curiously closely tied to the changeover from imbibing large amounts of depressants to large amounts of stimulants with the rise of the coffeehouse (go figure).

The best part of waking up?

Today, it seems that coffee has generally been linked to a host of other caffeinated beverages that are considered “good” (such as tea and cola) and alcohol has been linked with commodities that are “bad” and “unhealthy” (such as drugs and cigarettes). Why? Perhaps it is because colas, tea and coffee are unregulated, entirely legal, and (to a point) even considered safe for children, while the opposite can be said of alcohol, drugs and cigarettes.

Is the association fair? Hardly. While the dangers of addiction may be greater for the latter group, and public drunkenness more severely chastised than public hyperactivity, coffee and sugary colas (as fantastic as they are) are hardly the healthiest choices of beverages.

I suspect it is something else, something in the inherent nature or promotion of coffee that makes it seem less threatening than alcohol. Coffee suffers from none of the religious ordinances forbidding its consumption the way alcohol does (though, interestingly, coffee was also banned in several Islamic countries in its early years). It has also never endured the smug wrath of teetotalers or wholesale prohibition.

Alcohol is generally placed into the realms of evenings and night-times, bars, and sexy movies, while coffee is the drink of busy weekday mornings, weekends with the paper, and businesspeople. Both are oriented toward adults, but coffee is in some ways more socially acceptable. Consider the difference between remarking that you just can’t get started in the morning without your coffee versus saying the same about your morning shot of whiskey. Similarly, asking someone out for a drink connotes much more serious intentions than asking someone for a coffee. And vendors are catching on: in Britain, many pubs are weathering the downturn in business caused by the recession and changing attitudes by tapping into the morning market of coffee drinkers.

Worldwide annual average coffee consumption (graphic courtesy of ChartsBin)

I wonder if the trend toward increased coffee consumption is coming at the expense of alcohol. I also wonder if it mirrors the general cultural shift toward an American orientation. The global dominance of Starbucks and other coffee shops seems to me to be supplanting the role of the local pub or the licensed hangouts of the old world with a chirpy kind of Americanism and a whole new roster of bastardized European terms and ideas like “caramelo” and “frappuccino.” The New York Times backs up the idea of American dominance, noting that the U.S. accounts for 25% of global coffee consumption and was a primary instigator of the takeover of coffee shop chains. Yet coffee is also extremely popular in Europe (especially in Scandinavia, as fans of Stieg Larsson would be unsurprised to discover) and even Japan.

Is this another case of American cultural colonialism, whereby traditions from Europe are adopted, commercialized, and re-sold to captive populations who want to tap into a small piece of American corporate and social culture? Or is the global interest in coffee indifferent to American opinion?

Reading the tea leaves (coffee grinds?) to tell the future of consumption

Will coffee culture continue to increase in popularity, eventually supplanting the role of alcohol in social meetings? Two factors are worth considering here. The first is that while demand for alcoholic beverages in the developed world is shrinking, there is a growing interest in all kinds of alcohol (and especially wine) in emerging markets. Take, for instance, the rise of wine as a drink of choice and status symbol in China and Hong Kong as disposable incomes have grown. A similarly proportioned increase in coffee consumption there could be monumental – will it occur?

The second factor is the great cost of producing coffee. Putting aside the fact that most coffee is produced in comparatively poorer countries than those that refine, sell, and consume the finished product, the environmental cost is staggering. Waterfootprint asserts that every cup of coffee requires 141 litres of water (mostly at the growing stage). Compare this figure with 75 litres for a similarly sized glass of beer and 120 litres for the average glass of wine, and it would seem that a rise in coffee culture at the expense of alcohol could be disastrous for the environment.

Do the above statistics figure largely in the minds of those who drink any of these beverages? Likely not. But in time they might – and likely will – affect production, and the economics of supply and demand will come into play, changing the equation once more and making it even harder to determine which is the better brew.


I’ll Take ‘The Obsolescence of Trivia’ for $500, Please

February 15, 2011

I once heard that Albert Einstein didn’t know his own phone number, because he never bothered to memorize anything that could be written down or looked up in less than two minutes. Even for someone like me, who always prided herself on being able to remember things, from trivia to birthdays to obscure historical facts (before my memory became a sieve, that is), such a thoughtful approach to using one’s brain seemed incredibly intelligent. Theoretically, all the space that was freed by not having to remember pedestrian things like one’s telephone number could be put to use coming up with, say, the Theory of Relativity and blueprints for the atomic bomb. What an efficient use of that magic 10% of our brain power.

I wonder what Einstein would have done with the Internet.

The ability to find almost any fact with a few clicks has to be one of the defining characteristics of our age. Case in point: I just verified the above story by searching for it online. It took about 4 seconds. I didn’t have to recall which book I’d read it in and then go searching for an hour through my Library of Congress-ordered bookshelves hoping the tome in question had an index so I could easily locate the passage I needed. I also didn’t have to think about who might have mentioned it to me and then look for his or her phone number and (horror!) call to ask about it.

The ability to search in this way is literally changing how our brains work. We have become “shallower” thinkers, who absorb less because we can find information so quickly and have our comprehension constantly interrupted with new information being presented to us (for example, by blue underlined links in a body of text). Things like Wikipedia have made us more able to find information easily, but are we less able to process it?

"Watson" tries to beat Jeopardy! champs Ken Jennings and Brad Rutter, but can't respond to context

It could be that knowledge easily acquired has less likelihood of being retained. (Of course, it may be that I notice this because, as I get older, I learn much more rapidly but also forget more rapidly than I did when I was young.) Instead of coming up with ways to store knowledge in our long-term memory, we are becoming adept at determining how to find it in the external world. Instead of savouring text or indulging in slow reading, as I wrote about in my last post, we skim, knowing we can go back later if we need to find something. Knowledge is largely transactional, facts over tone or style. A tradition like Islam’s, in which followers should be able to memorize and recite the Qur’an, would be unlikely to take off if established today, it seems. Most of us can barely get through an article.

University administrators are talking about fundamentally changing the way information is taught in schools. What is the point of spending a few hours a week standing in a lecture format imparting facts, when facts can be discovered within seconds? Even if professors are teaching a way of analyzing facts, this too can be discovered in the form of lesson plans, course outlines, and sample teaching schedules for those so inclined to look for them. The kind of knowledge that students need today (one could argue, perhaps, that they have always needed) is of a much higher order and involves critical thinking as opposed to simple rote learning and memorization. Certainly, this appears to be one of the few arenas left in which computers can’t best us: an article on ars technica today reports that “Watson,” a computer created by IBM to compete against repeat Jeopardy! champions Ken Jennings and Brad Rutter, mostly knows the facts (by querying its own database) but doesn’t know how to take other contestants’ wrong answers into account when preparing ‘his’ own.

It is certainly possible that our future will involve less fact recall. To an extent, however, it will always be necessary as a building block to learning (think: simple math, the alphabet), so we won’t lose it entirely. The real question is whether the change is good or bad, if the kind of thinking we’re doing instead is beneficial or detrimental.

It’s hard to put a value judgment on the change. One could make the case that, from an evolutionary perspective, being able to recall facts, such as where the highest-yield coconut trees were located or what time of year animals would be in a certain location, would be beneficial. This later transitioned into an affinity among many for trivia games and quizzes of all kinds.

But is this kind of knowledge as useful today, when it can so easily be obtained online? Are we missing the problem-solving and interpersonal skills associated with acquiring it? An article on Slate.com last week lamented the rise of the Internet because it has made obscure treasures like minor league baseball hats too easy to find, without the letter writing, sleuthing and travel that tracking such things down required in (in this case) the 1980s. Now we are limited only by what we can imagine – if we can think of it, it’s probably out there. So is the free space in our brain dedicated to imagining more of what is possible, and less of how we’ll find out about it? Or are we just getting lazy?

Time will likely tell. But will it be a human characteristic change, or merely a culture-specific one? Another thing to consider is that access to the Internet and its potentially game-changing brain alterations is anything but ubiquitous. Being able to find anything online depends on both access to technology and freedom of information. Granted, the study linked to above mentions that it takes only about 5 days to gain the brain activity of an old hand Internet searcher. But no doubt some of the more profound changes to our neural pathways will evolve more slowly, with repeated exposure. Will the unconnected, firewalled world catch up in time?

Perhaps we’ll be too busy watching computers best each other on Jeopardy! to notice.


Minimum Impact, Maximum Time, and the Goodness of Work

February 10, 2011

Is ambling antithetical to success? Is a life of purpose the only path to happiness? And is Gen Y really all that different from previous generations in wanting meaningful work?

On Marx, Meaning, and Materialism

I think often on Marx’s theory of alienation; namely, that under the capitalist system of increasing specialization, workers become alienated from the fruits of their labour, and from their own capacity as workers to work/produce things and grow in doing so. Instead of seeing work as an end in itself, and gaining feelings of fulfilment from seeing the fruit of one’s labour go from raw materials to completed items, according to Marx work had become but a means to an end as workers were increasingly slotted into automated lines of production. Instead of creating the whole shoe, they would nail in a piece of the sole, as it were, with no satisfaction in seeing the “end-to-end process” (as we might say in today’s corporatenewspeak).

Certainly, with the rise of industrialization, Fordist assembly lines and globalization, the idea of work as a means to an end gained popularity as a way to describe life in the twentieth century. And in some ways, this was acceptable. In the 1930s, one was fortunate to have a job at all – any job. One did not pick and choose. The generation after that (those ubiquitous Boomers) observed their parents’ work ethic and adopted it without thinking, as a means to gain material prosperity. Nice cars, big houses, creature comforts, holidays in Boca Raton, and well-educated children became status symbols, ends worth working for. A life of middle management drudgery and rarely seeing one’s children was, for many, an acceptable trade-off.

But we expect so much more from our work today. Making a living, and a living that will support the lifestyle we’re used to, is mere “table stakes” (more corporatenewspeak). Because, with good education and attentive parenting and the opportunity to develop our skills as children, we have so many options for a career. Consequently, we expect much, much more out of the time we spend at work. (And before someone brings up 40% unemployment among global youth, yes, the recession has, to an extent, made Gen Ys a little less choosy – but only for now.)

The theory of work as an end in itself – and a means to happiness and fulfilment – has important research to back it up. A study out of California a few years ago remarked on the importance of hard work and purpose in achieving happiness in life. The conclusion is worth quoting at length:

A central presumption of the ‘‘American dream’’ is that, through their own efforts and hard work, people may move towards greater happiness and fulfillment in life. This assumption is echoed in the writings of philosophers, both ancient and modern. In Nicomachean Ethics, Aristotle (1985) proposed that happiness involves engagement in activities that promote one’s highest potentials. And, in the Conquest of Happiness, Bertrand Russell (1930/1975) argued that the secrets to happiness include enterprise, exploration of one’s interests, and the overcoming of obstacles. …Our data suggest that effort and hard work offer the most promising route to happiness.

Wow. Good work, it seems, is the answer to all our problems. The only thing left to do is find work that contains enough meaty, purposeful, interesting content – related to our skills, of course, and with excellent “work-life balance” and good benefits – to meet our needs. Simple!

But is this expectation reasonable?

Really, it’s a wonder anybody finds jobs like this, let alone the majority of people. Even Marx’s (clearly idealized) autonomous, cottage industry shoe-makers (or soldiers, or second sons forced into trade…) no doubt achieved very little of this all-encompassing fulfilment through their work. Yet today we pile the expectations on our jobs. While there are certainly those out there who caution that work will not make anybody happy all on its own, the prevailing narrative remains that fulfilling work is the surest route to happiness. Consider: it’s just not socially acceptable for anyone able to participate in the “knowledge economy” to opt out and instead choose to make money solely as a means to an end with no other agenda – let alone anyone under 30. Do you know anyone? And do they want the situation to be permanent?

Minimizing Impact: Lowering our expectations? Or relieving the pressure?

While I was vacationing in the vineyards of Mendoza (rewards for a life of corporate drudgery?), I got to thinking meta thoughts about what people tend to expect from life. We use a lot of language today that revolves around impact. We want to “make a splash.” We long to stand out in interviews, on dates, and in applications. People everywhere seek to be famous for something (anything! Jersey Shore, anyone?) or to leave a legacy, something that will let current and future generations know they existed as individuals, and left something behind. Modern society refers to the more noble side of this feeling as the desire to change the world, whether through volunteering, winning a Nobel Prize or raising well-adjusted children. We have, as I have pointed out before, a strong bias to action which makes us want to do good and make things “better.” Most of us put a lot of pressure on ourselves, a vague kind of weight that is associated with the Victorian ideal of the innate goodness of work and the possibility of having a hand in making a better future. The idea of finding work that allows us to, as the above-quoted study notes, “promote [our] highest potentials,” is tied up in this pressure.

At the same time we are acutely aware that life is, as an honourary TED talk I watched recently put it, fragile and vulnerable – and short. (This fact creates a very un-Hobbesian empathy, the talk argued, not only for those with whom we share blood ties, but with other humans, other creatures, and the biosphere generally. Worth watching.) It is little wonder that, with the perception of the sand in the hourglass ever running out, we feel pressed for time, overwhelmed, and run off our feet. We try to make every moment count. We multi-task and are always tied to a communication device of some kind. Most things are done for a purpose: we educate ourselves in order to gain employment, money and “success”; we sleep and eat for our health; we watch our health to extend our lives (so we can keep doing it all longer). It has been often noted with bitter irony that with all the myriad time-saving devices we employ on a daily basis, we find ourselves busier than ever before. Trying to do things in the minimum amount of time has not made us happy.

So I decided to try an experiment in reverse-thinking. What if we sought to – even just for a day – minimize our impact, and maximize the amount of time we spent doing things? What would this look like? What does “counter-urgency” feel like in practice? Would it lessen the pressure?

Experiments in living “Slow”

I suspect that it would in many ways resemble the slow movement, which has grown exponentially in popularity recently in response to the speed of life and destruction of the environment and local communities in the name of convenience. It must also be a response to the pressure of the purposeful life. The slow movement includes slow food, which is (in contrast to fast food) grown locally, often organically, and savoured. Slow reading is similar, and involves savouring text instead of skimming or summarizing, or any other kind of speed-reading I learned about in university.

A minimum-impact day would also result in fewer outputs (and here I use a very corporatenewspeak word deliberately). We would do purposeless things: ambling with no direction, daydreaming, journaling, writing poetry, reading fiction. There would be no book club to report to. No destination. Poetry, lyrics and plays could be memorized for the sake of the words themselves, lines savoured like chocolates instead of potential “gobbets” to drop into future conversations or be recalled on trivia nights.

Sadly, my brief experiment in slowly minimizing my impact was a failure: I wanted outputs. I wanted to write about it, to share it on this blog. I wanted to tie it into my life’s work and be fulfilled by it.

I sense I would not be unique in feeling this way. Is our desire for impact innate, or learned? Here we have contradictory evidence. An article in the Economist a few months ago referred to a study that concluded that the desire for good, hard work actually isn’t all that innate, particularly in Britain. But if learned, if part of the Marxist legacy we hold that says that fulfilling work is an end in itself, how do we handle the pressure of finding such fulfilment?

Perhaps the idea of work-as-end is a way to rationalize the fact that our time on Earth is short and that we spend most of it working. But are we destined not to find all we seek in our jobs? Is it possible to use work only as currency to “buy” time for our true passions? Should we seek to maximize the good in our work (whether employment at all, a means to material comfort and status, or even autonomous shoe-making) — even if we hate it? Do you amble purposelessly?

I’d love to hear your thoughts…


Buenos Aires: Grandeur and Decline

January 19, 2011

We arrived in Buenos Aires on a Sunday afternoon. As our hotel was located in the older part of the city in San Telmo, we were a stone’s throw from the famous Sunday market that takes place along one of the neighbourhood’s many cobbled streets. The fair featured stall after stall of traditional antiques, leather goods and artisan crafts, though it is on the verge of becoming a typically commercialized attraction in which the majority of what is on offer consists of t-shirts, wooden mugs for mate, smoking paraphernalia, custom wine bottle holders, and other cheap, tourist-oriented kitsch. The atmosphere was fun, crowded, and noisy, with street performers of all kinds competing with the noise of various versions of Depeche Mode and Sting songs rearranged as tangos (very weird) and the occasional shout of a tourist discovering that his wallet had walked off with a local. Roving bands of drummers paraded up and down the street, drowning out all other sounds and sweeping everyone up in their rhythm. I even saw the elderly couples who had been dancing tango in the market’s central square stop for a moment and unconsciously move their heads and hips to the drumbeats.

The beautiful old Confitería Del Molino: closed to the public

Later that night, we discovered that the crowds had masked sidewalks that were filthy and falling apart, littered with feces and crawling with cockroaches, and so uneven that taking one’s eyes off them would make tripping every three steps a certainty. Perhaps, again, this is what they mean by “bohemian,” or “full of character,” in which case Buenos Aires certainly qualifies. It is European in style and influence, but I found it lacking the sparkle and excitement of European cities. It felt run down, in disrepair, tired. More than anything else, I was saddened by what I found there.

Everywhere there is the feeling of former grandeur in gradual and unimpeded decline. Blocks of high rises with stunning turn-of-the-century beaux-arts architecture are shuttered with security fences and alarm systems, or crumbling and “undergoing refurbishment” (for decades). Almost every public monument – and there are many – is closed off to the public with a high, permanent fence, and visually marred by layers of graffiti. A tour guide explained that Buenos Aires had been a wealthy city in the late nineteenth century, and underwent a building boom at that time, but when the global market crash hit in the 1930s, many families lost everything; their homes have never been cared for since and have fallen into disrepair. The exceptions are the ones that were adopted by the government and turned into state buildings – but they certainly do not escape the graffiti treatment that is near-ubiquitous across the city.

A typical sidewalk

Buildings at ground level are often shuttered and covered with graffiti

As a whole, Buenos Aires looked as saggy and tired as an aging tango dancer, still wearing the revealing, sparkly dress and 4-inch heels of days gone by, but gradually slowing in her movements and with a little less of the famous kicking.

The locals seemed to echo my sense of disenchantment. Everywhere we went in Argentina, there were complaints about the interference, partisanship, and general ineptitude of the government led by Cristina Fernández de Kirchner. Its latest schemes include firing chemicals into the clouds to break up hail in Mendoza (Cloudbusting, anyone?), switching the capital’s coin-driven bus system to one with cards (as they already have in Santiago – but here already a year overdue with no apparent progress), and making the banking system more stable. (Incidentally, while we were in Buenos Aires a bank robbery caused a run on banks that meant we had to go to four different bank ATMs before finding one that had money left to dispense.) Where in Chile there was hope and excitement about the future, in Argentina and especially in Buenos Aires we found a general sense of pessimism and hopelessness that matched the state of the city’s once-great buildings. I recalled that in The Economist‘s “The World in 2011” publication, the South American leader chosen to expound on the hopes for the future of a continent that has not lived up to its potential in the past 200 years was Chile’s president, Sebastián Piñera. Kirchner is up for re-election in October. I asked one of our tour guides who she thought would win and she shrugged hopelessly and said that even though nobody likes the president, she would likely be re-elected because the opposition is in even worse shape.

A beautiful mansion, now part of the Four Seasons Hotel, in Recoleta. Note the sidewalk.

And yet, like the aging dancer, the city still has its charms. The upscale neighbourhood of Recoleta features beautifully kept mansions and intact sidewalks (a novelty), as well as wide, tree-lined avenues. Puerto Madero, the old port area that has been completely revitalized in the last 10 years with hundreds of condos, restaurants, and pedestrian paths, has the feel of any world-class, modern city. The historic cafés and restaurants we found all served excellent food, and with their wooden tables, tango posters, and colourful clientele, it wasn’t hard to believe that they really hadn’t changed much in 100 years. The wine flowed freely, the people were interesting and friendly, and the weather was warm and breezy throughout our stay.

And, of course, there was the tango itself. It is a dance of anger and sadness, and it fits the city well (even though I must note here that Montevideo, across the river in Uruguay, claims to be the city in which it was born). One hears the tango everywhere, from street musicians to cafés to the radios of taxi drivers. And people really do dance in the streets. We saw a tango show that was about as touristy as we had predicted it would be, but that was nonetheless an impressive display of athleticism, beauty, and styling gel. We took a tango lesson from a local, which was more focused on our working together as a couple than the steps themselves, which indicated to me how much more the dance is about feeling and style than it is about technical mastery (those famous ankle-over-the-shoulder kicks notwithstanding). And, on our last night, we went to a real-life tango club, nestled deep in the stylish neighbourhood of Palermo Soho. The crowd was mostly middle-aged, though we were by no means the only young people there. All the women, regardless of age, wore very high heels and very clingy dresses. The dancing was beautiful – not showy, or technically perfect, but full of emotion. Couples would regularly switch partners, and spend almost as much time talking and laughing as dancing.

Cafe Tortoni, famous for being the meeting place of the artistic elite of Buenos Aires for decades

I liked this side of Buenos Aires, the part that wasn’t posturing for tourists (and often failing to impress), but that showed off the still-vibrant core made up of the people who live there. I would return to the city for this feeling, one I still can’t put my finger on; Buenos Aires, despite my disappointment, still fascinates me because I felt as though history was alive and ever-present there, in the fairs and the foods and the tangos. There is still the porteño spirit in the air that was once behind all those buildings and monuments. I think the city is too proud and too feisty to stay in decline for long.


Silence and Schematics: The Things You Don’t See

December 16, 2010

In my last post I wrote about context and perspective in mapping, and the biases that are inherent in the information presented in different kinds of maps. Biases, of course, can be dangerous because we generally trust the information maps give us. They are more powerful for their apparent objectivity. The science behind them is sound, we think – after all, cartography is based on empirical data.

But just as maps can inform us, they can also make us ignorant – of context, of specific details, and of what we don’t know – even while they’re giving us other information. It isn’t just what we see in the frame that matters, but also what we don’t see, what’s left out. In conveying information, art can be as important as accuracy, and sometimes even more so.

Most early maps contained a lot of information. When little was known about the area beyond what had been explored, cartographers would create a sense of danger and excitement by inserting allegorical images, fantastical creatures, or mythical mountain ranges. They would decorate the frames with pictorial Biblical references, or symbols of their nation’s prowess at exploration and conquest.

A very busy map of Africa from the 1600s

In the above (relatively complete!) map of Africa from the 1600s, note the prevalence of mountain ranges and large rivers (that don’t really exist) and the animal drawings used to take up space. Also note the many decorations of ships in the ocean around the frame (side note: web address watermark not included on the original). What is silent? The cartographer’s ignorance – about the interior topography and other geographical markers. But a casual observer then would not have known this.

It was considered a great leap forward when, in 1759, cartographers – influenced by French mapmaker Jean Baptiste Bourguignon d’Anville and the Enlightenment tradition dictating that all maps be empirically verifiable – began to leave blank spaces when precise information about parts of the areas they were mapping was lacking. The practice served to encourage new forays into the “unconquered” and “uninhabited” areas they depicted to determine, for example, the as-yet undiscovered mouths of rivers or the potential treasure/glory/conquest that lay beyond established borders. But primarily these blank spaces lent increasing credibility to what was shown (whether it was accurate or not), by silencing everything else.

Accentuating some pieces of information over others with emphasis and silence grew in popularity even further as the centuries progressed. The most common world map we see, for example, privileges the northern hemisphere over the southern through the use of Mercator’s projection. It also puts the Western world – whether Europe or North America – in the centre of the frame, relegating all other areas to the peripheries.

"The Queen's Dominions at the End of the Nineteenth Century"

In the map above, the bright red colour of Britain’s imperial territories contrasts with the neutral colour of other lands. Islands of small geographical significance jump from the page with red underlines and heavy black labels indicating that they are strategic refuelling outposts, places that ship spices back to Britain, or simply more territory in red. Mercator’s projection is used to great effect, enlarging North America even above the bounds of the map’s frame, at the expense of the southern hemisphere.

It is all intended to provide a sense of a vast, interconnected Empire. While looking at this, viewers might fail to notice the absence of information not related to Britain’s imperial conquest. About other lands, the map is relatively silent, because they are not the focus.

Maps are now used for all kinds of things – everything from directions to websites or thoughts. The proliferation of maps has multiplied the number designed for a single purpose, and the trend seems to be toward more specificity but less context.

Consider subway maps, most of which are a legacy from the modernist era. They fall squarely into the “art” over “accuracy” way of conveying information, and are characterized by highly stylized lines, multiple colours and use of sans-serif fonts. The most famous, of course, is Harry Beck’s map of the London Underground, which dates to 1931. Its genius lies in its abstraction, its ability to draw order in the form of clean and easy-to-read visuals from the confusion and complexity of the actual system. Compare the official underground map with the actual map of the subway stations from above the ground:

Schematic Tube Map, Zone 1

Tube Lines Mapped to Actual Geography

It takes a certain genius to create schematic subway map order from chaos; no doubt this is the reason these maps are such iconic art pieces, found on buttons, t-shirts, and posters the world over. It’s fascinating to me that they are so simple and so focused – and yet divorced from the actual geography they represent. Almost every major city is the same.

Paris:

Paris Metro

Washington DC:

Washington DC Metro

Moscow:

Moscow Subway Map - Like an Alien Creature

Even maps of New York City’s frenzied system are relatively simple. But sometimes accuracy wins out over art. In 1975, the New York City transit authority decided that the map it had been using until then leaned too far toward art, and commissioned something that would line up more closely with the streets above ground. (You will find a fascinating interview with the designer of the 1979 map, which was only just retired a few years ago, as well as several old subway maps from NYC, here.) Yet even this more “accurate” and “realistic” new map has some deviations from reality: Manhattan, and lower Manhattan in particular, has been expanded to accommodate the landmarks and subway lines that all seem to converge there; Brooklyn and the other boroughs are made relatively smaller than their actual size.

It would seem that for clarity or for a great story, some alteration is always necessary, and a bit of silence too. No map designed to emphasize transit lines could hope to show every street, and of course designers realize this. People are perhaps more willing to put up with silence and abstraction in maps now because they are used to it, and because maps no longer need to be geographically accurate to be authoritative. It’s an interesting trend that points to our increasing ability to cope with the abstraction and de-contextualization of cartography, even as the broader minimalist modernism movement appears to be winding down (the ever-popular clean lines of IKEA products notwithstanding). What does it mean for the future of maps? Will the definition of a map become ever-broader as we incorporate variations from site maps to schematics? Or do we need a new name for this kind of information vehicle altogether?

This post is part two of a three-part series on the past, present and future of mapping. Check back for a wrap-up later this week.