25 Reasons Today Is a Great Time to Be Alive

June 22, 2011

Amid all the morose and maudlin whinging about how we are losing our sense of self, our ability to self-moderate, or more generally our minds, merit, and money, today I offer up a list of things to get excited about in 2011.

Here are 25 reasons life (and mine specifically) is better today than it would have been…

250 Years Ago

25. I can reasonably expect to live longer than 33 years.

24. In my life, I’ve visited over 10 countries on 3 continents. And among my friends, I’m not well travelled. In 1761, people rarely left their hometown, let alone the country.

23. Last night I heard superb music by 10 different composers, played by a world-class orchestra, for under $30. (And I waved a Union Flag while doing it! “And did those feet, in ancient time…”) In 1761, only a fraction of the population could hear such music – and not cheaply.

22. Indoor plumbing! Sewers! Need I say more?

21. I can buy a great book for under $5; in 1761 it would have cost the equivalent of about $1000.

100 Years Ago

Hunger Strikes Among Suffragettes

20. As a woman, I can choose who runs my country/province/city (at least in theory). And I didn’t have to be jailed and force-fed by a tube in order to have the right to do it – all I had to do was reach the age of majority.

19. I didn’t die of chickenpox, infection, or the flu when I was a child, as many children did in 1911.

18. I do laundry by putting a bunch of clothes into a machine, pouring in some liquid, and pressing a few buttons, instead of spending two days with the household staff, soaking it, wringing it out repeatedly, and stirring it around in crazy chemicals with washing bats. It’s like magic.

17. Electricity — in my home! Amazing.

16. In one of my history classes we watched 1900 House, a documentary in which a family of six lived as they would have in 1900 for three months. A memorable take-away? Modern shampoo is a hell of a lot more effective than egg yolk and citrus. “I just smell really, sort of, omelette-y.”

50 Years Ago

15. I did not have to promise to obey my husband when we got married.  In fact, I didn’t even have to get married to get all the legal benefits of a long-term committed relationship.

14. I can wear pants! and shorts! And neither is prohibited by law.

Katharine Hepburn, a pants-blazer and personal heroine

13. I can eat any kind of food I want to, and could probably find somebody from its country of origin to talk to about it. Every day just walking down the street I see a greater degree of diversity than at any other time in history, all in one place, living (relatively) harmoniously together.

12. I can choose whether or not to spawn, with near-certainty that my wishes will be protected by law and the wisdom of modern medicine.

11. A century-old saying has it that “horses sweat, men perspire — ladies merely glisten.” But when I go to the gym, I can sweat all I like, and feel healthy doing it. Moreover, certain terrifying Amazonian female athletes step it up a notch by adding a soundtrack.

25 Years Ago

10. I live in the charmingly-labelled “Rainbow Village” area of Toronto, where I can watch men walk down the street holding hands, or carrying impossibly tiny dogs wearing designer hats in large purses.

9. I can find out what’s going on in any part of the world in under 10 seconds, at the click of a button.

8. I feel reasonably secure knowing that many heinous crimes are solved using DNA evidence. Bonus: I can watch any of the fascinating procedural dramas stemming from said advancements in forensic science. Bring it on, Grissom!

7. I can listen to “Tarzan Boy” over and over and over again without having to rewind, ever.

6. It’s exciting that people are taking steps to protect the environment more than at any other time in modern history. Or, at least, they’re aware of how to protect it.

10 Years Ago

5. I can press a button on a machine and be talking to my grandmother, 3500 miles away, in under 10 seconds. For free. (And I feel like God every time I do it. Think about it: your computer is calling someone else’s! This is the kind of thing they dreamed about in SciFi movies 50 years ago.)

4. I can access the Internet everywhere I go. Want to know if the restaurant I’m walking by is any good? I can read reviews. If I’m lost? I can get directions. Wondering if it’s going to rain? I can check the weather. Instantaneously.

3. Buying a home during a recession meant we got an insane deal on our mortgage.

2. I can watch things like this all day if I want to:

1. I have a place to share my latest thoughts, pictures, or links to rambling blog posts with my closest friends, and get feedback from any or all of them, immediately. Communication is more frequent than ever before. I can feel like part of a community without even having to leave my desk. (Or put on the pants I was so excited about earlier.)

What are you excited about in 2011 that you would add to this list?


Revisiting Posthumanism: Technology and Empathetic Fallacies

April 8, 2011

Empathy is a critical component of human interactions, and has been essential to our evolution as a social species. It lies at the root of our dominance over other species that do not share the collaboration mindset. Effective social interactions and behaviour modelling create group cohesion and action. And as the world becomes ever more urban and crowded, empathy is more important than ever. There is among some scientists a palpable fear that modern technology decreases empathy, lessening our intuitive social skills. But the potential for technology to actually increase empathetic feelings is immense — so can the use of technologies therefore make us more human?

An article in the New York Times this week queried the effect of facebook on relationships: does using facebook make people less inclined to invest in face-to-face contact? It may be too soon to tell, but a recent study has indicated that technology is still just the medium. Those inclined toward fulfilling relationships will use facebook as a tool to expand and deepen them. Those inclined to withdraw from society will use facebook to withdraw still further.

One insightful commenter disagreed, noting that his own studies have found that, among college-age students, empathy has been declining over the past 30 years, and markedly so over the last 10. His findings jibed with a recent article in Scientific American on the same subject. The implication, of course, is that all that time at the keyboard, along with the general trend toward social isolation, reading less fiction for pleasure, and an uptick in the number of youth who describe themselves as conservative, has re-wired our brains in such a way that we can no longer relate as well to each other. Moreover, technology makes it easier for people to be exposed to only what they want to be exposed to, and only world views that align with their own – incomprehensible amounts of such one-sided content, in fact. Limiting exposure to those who think the same way is a choice increasingly made by those who can afford to do so.

But I can’t help but wonder if technology is, again, just the medium through which all of this plays out. Those who don’t want to encounter anyone who votes for a different political party or has a lesser socio-economic status and who consequently cloister themselves in a one-note Internet news digest, for example, are the same people who will live in a gated community in the real world, lessening empathy and social cohesion in that way.

And technology can also help empathy expand and grow in the real world. A TED talk I watched recently showed a historical extension of empathy from individual to blood relatives, clans, nations, and even other species. The key is exposure, understanding, and a feeling of shared goals. Without the Internet, there would be a lot less exposure to and understanding of different cultures. Would the “jasmine revolutions” have spread so quickly without knowledge sharing between underemployed 20-somethings with Twitter accounts? Thomas Friedman thinks not, and also credits other technologically-reliant factors with helping to spur them on. Among these are widespread reporting of what corrupt officials were up to through Al Jazeera, the ability to see vast swaths of underutilized government-owned farmland via Google Earth, and an image of China on the rise from the Beijing Olympics.

As far as shared goals, technological interactions are in some places considered to set the standard for cooperation and teamwork. A recent Economist article argued that playing World of Warcraft or similar team-oriented role-playing games can increase engagement and skill-building, leading to greater success in the workplace. (Hey, it worked for the CIO of Starbucks.)

The narrative of the game may be key here. Writing in the Journal of Evolution & Technology, PJ Manney locates storytelling at the centre of empathy. Stories are compelling ways of showing how humans share the same desires, values, hopes, dreams, and fears, she says, and technology has always been important to diffuse stories between different cultures. As the evolution of technology has taken us from the printing press and the novel to instantaneous news and the explosion of opinions in blogs, storytelling has become more immediate, more prolific, and more visual. And, returning to the theme of post-humanism (or the near-synonym transhumanism, or “H+”, which Manney refers to), the future human that has made use of sensory technologies to the point of incorporating them into his or her make-up can be even more directly connected to others by literally experiencing the world as they do. Manney refers to this sensory augmentation as “a more effective connection with others, through a merging of thought or telepathic link or internalized instant messaging.” This is WoW with human-human interactions, instead of human-character role-playing.

Posthumanism/transhumanism is feared, as I wrote about in an earlier post, because some believe technological “enhancements” would create inherent inequalities among humans. Yet it is possible that technology could incite a great equalization of feeling and experience — and empathy. Such changes would, in effect, make us better able to relate to each other, and in the end more human, not less so.


Knowledge and Power in a Skeptical, Connected World

March 18, 2011

Who do we listen to, and why? In an age when we can find any information quickly, what does it take to be a voice that rises above the many others? What kind of power does this represent?

I read in the latest edition of the Harvard Business Review that in 2011 companies are anticipating an increased focus not just on broadly saturating target markets with facebook ads and silly “viral” videos, but on targeting “influencers” as part of their “social media” strategies. These individuals are those who shape culture and get other people on board with new trends and ways of thinking. Oprah is an influencer. Radiohead are influencers. Steve Jobs is an influencer. And a lot of random bloggers, tweeters, and other social media characters whom you’ve never heard of are influencers, and they are going to be targets of corporations because they are both cheaper and perceived (perhaps) as more authentic shills than their more famous counterparts.

You can be sure that by the time something gets elevated to the level of an HBR trend to watch, it has already set the Internet abuzz. Further research on “measuring influence” yielded far more twenty-first-century social media examples than any others. It seems that organizations have (finally!) learned that a “social media strategy” on its own is of little benefit without real, grassroots endorsement. However, I’m more interested in what “influence” looked like in the past, before it morphed into a social media concept to be made into the next corporate buzzword, and what characteristics have stayed with perceived “influencers” since.

It seems it is a tricky thing to quantify, or even define. An article I discovered about the role of influence in economic history discusses how it is closely related to communication, but can range from impression to force in the amount of strength it implies. The other critical factors in determining long-term influence were time and space. The example given was Saint Thomas Aquinas, whose ideas were central to much medieval thought (throughout the Latin-speaking world, at least), but are relatively inconsequential today.

Influence and Power – and Money

Influence, as the article points out, is closely related to power. One of the concepts that has stayed with me since learning it in an Organizational Behaviour class years ago is that of differences in the kinds of power wielded by individuals. They can have positional power – stemming from one’s role as, say, a manager, a parent, or some other official and likely formalized figure of authority – or personal power, stemming from an individual’s character or beliefs, and likely more informal in nature. The difference between them parallels that of practical/mental authority vs. emotional authority, and the general consensus is that emotional authority goes much further in influencing others because it does not rely on a (potentially temporary) and wholly external power differential the way practical authority does.

When I consider what influence looked like in the past, it seems there was little distinction between the two types of power mentioned above. Perhaps the theory I just articulated is fallout from our comparatively recent fixation on merit over birth status as a rationale for power. Indeed, the ideas (and names associated with them) that have survived best throughout history to influence many others have always been backed by great financial power. Take religion, for example, which has been perpetuated by wealthy organizations that held positional power in their communities. The familiar expression about history having been written by the victors speaks to the tendency of dominant individuals, families or states to justify their authority with historical precedent. And most of the theories in every field that are still with us today were dreamed up by men with solid financial backing and the ability to spend large amounts of time reading and philosophizing. (Even Marx lived off the generosity of his bourgeois co-author, after all.)

But today that is changing — to an extent. YouTube, twitter and other media that celebrate memes and all things viral can make ordinary people famous astonishingly quickly. Such fame is often fleeting and of dubious value to society, but savvier types can sometimes parlay their sudden name recognition into the more lasting sort of influence (Justin Bieber, anyone?). This can happen because influence is magnetic and self-perpetuating. Mommy bloggers who are already widely read and respected are natural candidates to push brand-name diaper bags or whatever else new mothers supposedly need and want. That corporations want to latch onto such people is hardly surprising – they are merging their corporate power with bloggers’ influence in new markets, and the bloggers want to in turn increase their own profile through association (or maybe just get free products).

Self-perpetuating influence applies to companies as well. The new techie term for this concept is “network effects” – as the Economist defined it recently, “the more users [services like facebook, eBay, etc.] have, the more valuable they become, thus attracting even more users.” Whereas in the past money and power begat more of the same, today we can add hits and click-throughs to the mix.
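The scale of this effect is easy to see with a little arithmetic. As a back-of-the-envelope sketch (using Metcalfe’s familiar rule of thumb, not a figure from the Economist piece), the number of possible connections between users grows roughly with the square of the user count:

```python
# Back-of-the-envelope illustration of network effects: the number of
# distinct user-to-user links in a network of n users is n*(n-1)/2,
# which grows roughly with the square of n.

def possible_connections(n: int) -> int:
    """Number of distinct pairwise links among n users."""
    return n * (n - 1) // 2

for users in (10, 100, 1000):
    print(users, possible_connections(users))
```

A tenfold increase in users yields roughly a hundredfold increase in possible connections, which is one reason an early lead in users compounds so quickly.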

Knowledge Brokering from Darwin to Wikipedia

The common link between these people and corporations is the way they treat knowledge. They are what the corporate world now refers to as “knowledge brokers,” a title that refers to the ability to clarify and share information with different audiences or spheres, and determine what the common elements are between, say, Paul Revere, corporate marketing, and the AIDS epidemic. Knowledge brokering (and a bit of luck) is what separates widely-read bloggers from those who write solely for themselves (whether they want to or not). It is the ability to write things that people find interesting and useful. The CIA is investing heavily in such people after a series of incidents that demonstrated how segregated and impotent their different bodies of knowledge were.

Knowledge brokering is more than simply aggregating (though smart aggregators of information are helpful too). It is the ability to analyze and draw connections while becoming a trusted conduit of information. Knowledge brokers are perhaps an antidote to the pervasive and growing tendency to overspecialize, because they connect many specialists and their ideas with a broad audience. They are the reason we know about Darwin’s ideas. Or Jesus. Or celebrities’ latest faux-pas. Wikipedia is one giant knowledge broker, with an army of volunteers – knowledge brokers in their own right – mobilized on its behalf. That is power.

But what makes us listen to them? I suspect the key is authenticity. A lingering distaste and a keen sense for corporate marketing disguised as something else define our era. Perhaps the main difference between influencers from the past and those of today lies in the type of power they wield, as I outlined above. Personal power – like that wielded by bloggers and Oprah – is seen as more trustworthy because it lacks an agenda (whether or not this is true). Positional power is usually distrusted simply because of what it is. We only listen to Steve Jobs because we truly believe he has our best interests – in being cool and technologically savvy, regardless of the product – at heart. In contrast, many Americans discount everything Obama says because they believe he merely wants to increase his own power and unveil his secret socialist agenda on an unwilling populace.

Is this a reflection of our philosophical allegiance to free-market democracy? Are influence and power of all kinds just the ability to get people to like and trust you? If so, many corporations are going to need a lot more than “influencers” on their side.

Food for thought: How do those with positional power gain credibility? Is this knee-jerk anti-authoritarian mindset in society as prevalent as I say it is? Do people who seek to perpetuate their influence by getting behind corporations somehow weaken their own authority (i.e. do they lose their ‘cred’)? Hm.

MARGINALIA: Though I did not explicitly link to it in this post, the Economist’s Intelligent Life ran a fascinating piece recently on The Philosophical Breakfast Club, a group of four Victorian scientists who were definitely knowledge brokers (and nifty polymaths) and who were key influencers in their time. I’d recommend reading it.


Some Loose Thoughts on Americans and Trains

March 9, 2011

Apparently there is a movie version of Atlas Shrugged coming out soon, and while I have neither seen it nor read the book (something I plan to remedy within the next month or so), I have read a few of the many criticisms and laments out there about the book and philosophies contained within it. These come from all sides of the political spectrum, but one of the more interesting ones to me concerns the role of infrastructure and the changing nature of support among conservatives and libertarians for large-scale rail projects on American soil. While Ayn Rand’s magnum opus features libertarian railroad moguls who plough vast sums into railroad development, railroads today are pariahs of American transportation infrastructure – to no group more than the political right.

David Weigel on Slate summarizes the opposition to high-speed rail (and rail in general) from the American right mainly as opposition to state subsidies. There is a widespread belief that money pours from government coffers into railroads – at a cost to the taxpayer of between 13 and 30 cents per passenger, compared with between 1 and 4 cents for highways and other roads. Whether these numbers are accurate is not the point of this post; the mere perception that they are true is (as with most subjects in American politics these days) enough to colour the popular and official debate substantially. I’ve heard others comment that rail travel is seen as a form of communism.

The irony of that idea, of course, is that railroad owners were among the first übercapitalists of American business, sucking profits from their trade with an almost monopolistic hold on the industry. Names like Vanderbilt and J.P. Morgan are known to us now because many of these obscenely wealthy railroad barons wanted their legacies to live on in the form of grand houses, universities, and other large-scale public charitable works.

I’ve written before about how cars in the early days of automobile travel were seen as a “less technological” option than railroads, more rugged and democratic and, well, American. Travelling by car in those days was both challenging (tires exploding or parts falling off every few miles) and exhilarating (unprecedented access to tourist sites that railroads just didn’t go to). The ideal of the open road, and by extension the “open West,” has echoed down through the annals of American history from beat poets to “Boys of Summer,” and was undergirded by the Eisenhower administration’s creation of the extensive Interstate Freeway System in the 1950s.

But I never picked up on the “communism” angle, in part because that wasn’t a concern or a term bandied about frequently in American political discourse until the second decade of the twentieth century at least. Today, of course, high-speed rail and trains in general aren’t seen as feats of American engineering and technical prowess, but symbols of European- and Chinese-style communism.

Attitudes have changed: both railroads and cars have largely lost their breathless romantic and innovative associations and have become part of the humdrum reality of everyday transportation. Many people view their cars more as prisons (especially when stuck in rush-hour traffic) than gateways to the wonders of nature. And while European-ness today still has some cachet if it involves sitting in a café in Paris on vacation, Americans are confident enough in their own government that they certainly don’t aspire to managing their infrastructure like the Europeans.

The last paragraph of Weigel’s article clearly illustrates the link between railroads, communism and other un-American ideas:

Before and after 9/11, George Will was talking up rail as a way to take more people off planes and make America less vulnerable to terrorists. That argument has more or less vanished. Why? “It helped that somebody bombed a train in Spain,” says O’Toole. “If you concentrate people in one vehicle, then the vehicle is vulnerable. You concentrate society, and it’s vulnerable. So maybe it’s not a good thing to concentrate people.”

Makes sense. People concentrated together in one vehicle are vulnerable to attack without the ability to pick up and go whenever and wherever they want to, as in cars. Similarly, people who have a shared and singular collective mindset are vulnerable without the influence of democratic choices. Looks a lot like communism, right? So perhaps we shouldn’t be surprised the next time a state governor turns down a billion-dollar high-speed rail line subsidized by the federal government. He’s probably imagining that it’s the last stop on the Lenin line before Revolution Station…


I’ll Take ‘The Obsolescence of Trivia’ for $500, Please

February 15, 2011

I once heard that Albert Einstein didn’t know his own phone number, because he never bothered to memorize anything that could be written down or looked up in less than two minutes. Even for someone like me, who always prided herself on being able to remember things, from trivia to birthdays to obscure historical facts (before my memory became a sieve, that is), such a thoughtful approach to using one’s brain seemed incredibly intelligent. Theoretically, all the space that was freed by not having to remember pedestrian things like one’s telephone number could be put to use coming up with, say, the Theory of Relativity and blueprints for the atomic bomb. What an efficient use of that magic 10% of our brain power.

I wonder what Einstein would have done with the Internet.

The ability to find almost any fact with a few clicks has to be one of the defining characteristics of our age. Case in point: I just verified the above story by searching for it online. It took about 4 seconds. I didn’t have to recall which book I’d read it in and then go searching for an hour through my Library of Congress-ordered bookshelves hoping the tome in question had an index so I could easily locate the passage I needed. I also didn’t have to think about who might have mentioned it to me and then look for his or her phone number and (horror!) call to ask about it.

The ability to search in this way is literally changing how our brains work. We have become “shallower” thinkers, who absorb less because we can find information so quickly and have our comprehension constantly interrupted with new information being presented to us (for example, by blue underlined links in a body of text). Things like Wikipedia have made us more able to find information easily, but are we less able to process it?

"Watson" tries to beat Jeopardy! champs Ken Jennings and Brad Rutter, but can't respond to context

It could be that knowledge easily acquired has less likelihood of being retained. (Of course, it may be that I notice this because as I get older I learn much more rapidly, but also forget more rapidly than I did when I was young.) Instead of coming up with ways to store knowledge in our long-term memory, we are becoming adept at determining how to find it in the external world. Instead of savouring text or indulging in slow reading, as I wrote about in my last post, we skim, knowing we can go back later if we need to find something. Knowledge is largely transactional, facts over tone or style. A tradition similar to that of Islam, that followers should be able to memorize and recite the Qur’an, would be unlikely to take off if established today, it seems. Most of us can barely get through an article.

University administrators are talking about fundamentally changing the way information is taught in schools. What is the point of spending a few hours a week standing in a lecture format imparting facts, when facts can be discovered within seconds? Even if professors are teaching a way of analyzing facts, this too can be discovered in the form of lesson plans, course outlines, and sample teaching schedules for those so inclined to look for them. The kind of knowledge that students need today (one could argue, perhaps, that they have always needed) is of a much higher order and involves critical thinking as opposed to simple rote learning and memorization. Certainly, this appears to be one of the few arenas left in which computers can’t best us: an article on Ars Technica today reports that “Watson,” a computer created by IBM to compete against repeat Jeopardy! champions Ken Jennings and Brad Rutter, mostly knows the facts (by querying its own database) but not how to take other contestants’ wrong answers into account when preparing ‘his’ own.

It is certainly possible that our future will involve less fact recall. To an extent, however, it will always be necessary as a building block to learning (think: simple math, the alphabet), so we won’t lose it entirely. The real question is whether the change is good or bad, if the kind of thinking we’re doing instead is beneficial or detrimental.

It’s hard to put a value judgment on the change. One could make the case that, from an evolutionary perspective, being able to recall facts, such as where the highest-yield coconut trees were located or what time of year animals would be in a certain location, would be beneficial. This later transitioned into an affinity among many for trivia games and quizzes of all kinds.

But is this kind of knowledge as useful today, when it can so easily be obtained online? Are we missing the problem-solving and interpersonal skills associated with acquiring it? An article on Slate.com last week lamented the rise of the Internet because it made obscure treasures like minor league baseball hats too easy to find, stripping away the letter writing, sleuthing and travel required to track down such things in (in this case) the 1980s. Now we are limited only by what we can imagine – if we can think of it, it’s probably out there. So is the free space in our brain dedicated to imagining more of what is possible, and less of how we’ll find out about it? Or are we just getting lazy?

Time will likely tell. But will it be a human characteristic change, or merely a culture-specific one? Another thing to consider is that access to the Internet and its potentially game-changing brain alterations is anything but ubiquitous. Being able to find anything online depends on both access to technology and freedom of information. Granted, the study linked to above mentions that it takes only about 5 days to gain the brain activity of an old hand Internet searcher. But no doubt some of the more profound changes to our neural pathways will evolve more slowly, with repeated exposure. Will the unconnected, firewalled world catch up in time?

Perhaps we’ll be too busy watching computers best each other on Jeopardy! to notice.


Mapping in Three Dimensions

December 14, 2010

It’s standard fare these days to think of maps as political tools, each with its own particular bias or bent. From Mercator’s projection to Greenwich Mean Time, what we see in maps is often exactly what we have been conditioned to see by generations of cartographers and a tradition of Western mapping that prioritizes specific views, divisions, and perspectives. J.B. Harley, a preeminent scholar of maps, characterized cartography as a language, with the cartographer in a significant position of power over the viewer. Yet maps are changing. No longer static lines on a canvas, they can now be seen and experienced with modern tools like GPS and Google Maps. In this first post in my series on modern mapping, I’m going to zero in on a specific element of maps and how we read them that is changing rapidly: perspective.


A delightful aerial view map of the St. George Campus in 1932 (From the University of Toronto Archives)

Consider: when you think of a map, you likely think of an aerial, “bird’s eye” view of a terrain, probably subdivided by political boundaries into nations, states, provinces, or other political entities. Land is probably marked as green. Water is blue.  It’s probably centred on Europe or North America. This marks you (and me) as a student of fairly recent Western-style cartography. The method of representing the world through political divisions dates to the era of the rise of the nation-state, but the predominance of the aerial view is also a legacy of the nineteenth century and grew with the rise of scientific cartography. Imperial cartographers, often at the forefront of discovery and seizure of territory, derived much of their feeling of superiority over the populations that inhabited the lands they discovered from what they perceived as their more advanced cartographic knowledge.

To geographers who could explore, measure, name and represent large areas of land in two dimensions, ownership was the logical next step, and a natural right. Organizing territory in this way made sense, and particularly reinforced (for example) the British obsession with characterizing and arranging things. Their power was often exerted even by the act of mapping itself.  Imperial mapmakers used tools and methodology that colonial populations rarely understood, which in the minds of cartographers clearly legitimized the modernizing imperial mission. To the British, a nation that could not identify its own resources, borders and population through mapping could legitimately be colonized by one that could, as various historians from Ian Barrow to Anne McClintock (of Freudian Imperial Leather fame) have explored.  To map using scientific methods was to differentiate colonizer from colonized and project Western “progress” onto the landscape – and its peoples – through logical and rational classification and categorization.

The Great Trigonometric Survey of India

A view of India while undergoing the “Great Trigonometric Survey” in the 19th century, at the zenith of British imperial scientific cartography (from the Utah Education Network)

The irony, of course, is that science has caused the imperial map of two dimensions to become almost obsolete for practical purposes. Of course, people still pore over maps, hang them on their walls as artefacts, and even sometimes use them to find their way, but the way we actually use maps has shifted.

Native populations in what would later become colonial territories (in Africa, for example) had their own ways of categorizing and describing territory. Boundaries ran along tribal lines, and were dynamic and flexible as tribal lands changed. Rivers – often unexplored to their full extent, absent the British fixation with discovering their mouths and sources – could carry a variety of names as they passed through the different stages of their existence, from spring to rapids to a wide oasis on a dry savannah that shifted with the season. Geographical markers made sense, and were referenced, in terms of local context and use, not neat high-level aerial classification. Above all, land was experienced in three dimensions, as humans really see it, not as birds do. Native peoples were closer to the lands they lived on, without the distance of science, and experienced their fluctuations and nuances deeply.

Today we appear to want maps to be more this way. With distance may come power, but with experience comes understanding. Last weekend I was repeatedly tripped up by a new GPS system that depicted the area I was driving through in three dimensions. Apparently this form of navigating is far more popular than its bird’s-eye equivalent: drivers feel more a part of the territory through which they are driving – and there is the added benefit of a textual overlay with street names and important markers.

Mapping UofT in Three Dimensions

And there is little distance at all with applications like Google Earth and Google Maps. With Street View, people can experience geographies without ever having been there; they can effect an instant déjà-vu and scout territory before arriving – familiarity without experience. Mapping has returned to three dimensions.

But what are the costs? I can’t help but think of two articles I read in the last year, one of which described the “barbell” effect of living in a city, where we tend to know the areas in which we live and work, with little knowledge of the neighbourhoods in between. The other referred to the rise of GPS as sounding a death knell for getting lost, a tragedy because it forestalls ever getting to know an area to which we are not explicitly travelling. (Clearly this author had never experienced a faulty GPS that landed him in a sketchy parking garage 40 minutes from the movie theatre he had intended to visit, as I have.) Nonetheless, being lost can be an experience of discovery. But how often do we look to explore areas on Google Maps with which we aren’t already familiar?

We now have a scientific three-dimensional view, in some ways the best of both worlds. But it may be that we lose both the overall sense of continuity a scientific, small-scale map brings and the sense of connectedness and local context that comes with intimate knowledge of a certain small piece of territory.

I started this post talking about bias and manipulation by cartographers. With the variety of perspectives we can simultaneously employ now, much of that manipulation has disappeared. What we choose to see and seek out is now up to us, and the bias has become less the cartographer’s than our own. We’ll learn in time which one obscures more.

This post is part one of a three-part series on the past, present and future of mapping. Stay tuned.


Secrets and Lies — and Google

November 29, 2010

An almost hysterical paranoia permeates the air these days concerning the information that is being collected about us and is helping us make decisions, in many cases without our knowledge. What am I talking about? The secret of success for everything from Google searches to your mortgage approval: algorithms.

And secret they are. Much of the fear about them is that they are “black boxes” about which we know little, and yet they are making decisions for us every day. In the process, some worry, they are taking away our capacity for decision-making and automating processes in which human input may be necessary to correct inconsistencies or mistakes. An extended report in The Globe & Mail last week examined the impact such incomprehensible and inaccessible mathematical formulas can have: according to the data collected, buying floor protectors at Canadian Tire might signal a conscientious borrower; late-night website browsing may indicate anxiousness and, in combination with a host of other minor activities, derail a credit application.

Google is another example: it uses complex algorithms to filter information to find exactly what it thinks we need, or, as its mission statement says, to “organize the world’s information and make it universally accessible and useful.” It also provides us with ads, of course, based on our search history and preferences and in theory tailored to our needs. Even online dating websites such as OkCupid and eHarmony make extensive use of algorithms to predict who will make a good match. The information that comes out of such sites is a fascinating look at the likes and dislikes of a broad cross-section of the population.

The formulas used are secret, of course, in order to protect the competitive advantages of the organizations they serve. What puzzles me is why there is such intense fear of these unknown equations that guide our choices. We are not forced to click on any of the links Google serves up. We’re not even forced to use Google as our search engine. If we want a local plumber, we can always use the Yellow Pages, where prominence is determined by advertising payments. Is this any better?

Perhaps it is the lack of control that is so terrifying. Because algorithms filter information for us, there is an unimaginable amount that we just never see. We literally don’t know what we don’t know. Somehow this seems more sinister than the way it used to be when we were all relatively more ignorant, perhaps because, through the Internet, we are now aware of there being a lot more information out there.

Does Google have a sinister hidden agenda? One would think that such a thing would go against its code of conduct of not being evil. Does OkCupid? Likely not, but in filtering information to satisfy our (perceived) needs and wants, argues Alexis Madrigal in this month’s Atlantic, algorithms can serve to maintain the status quo – or even prevent shifts in societal norms:

By drawing on data about the world we live in, [algorithms] end up reinforcing whatever societal values happen to be dominant, without our even noticing. They are normativity made into code—albeit a code that we barely understand, even as it shapes our lives.

Madrigal goes on to say that Google, OkCupid and their ilk give us only “a desiccated kind of choice,” and that we need to break the patterns by choosing against type. We need to make ourselves less predictable, to click unexpected links and choose unexpected partners, presumably in order to ensure that society in general doesn’t stagnate. Don’t trust The Man and all that.
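Madrigal’s point – that data-driven filtering entrenches whatever is already dominant – can be sketched in a few lines of Python. This is a toy simulation, not how any real recommender works; the item names and numbers are entirely made up. A feed ranks items purely by past clicks, most users click whatever appears first, and a small initial gap hardens into a permanent one:

```python
from collections import Counter
import random

random.seed(0)  # make the toy run reproducible

# Toy "feed": two items, one starting out slightly more popular.
clicks = Counter({"mainstream": 5, "niche": 1})

def ranked_feed():
    # Rank purely by past popularity -- "normativity made into code".
    return [item for item, _ in clicks.most_common()]

# Simulate 1000 users who mostly click whatever is shown first.
for _ in range(1000):
    feed = ranked_feed()
    choice = feed[0] if random.random() < 0.9 else feed[1]
    clicks[choice] += 1

print(clicks)  # the initial 5-vs-1 gap widens into hundreds of clicks
```

Clicking “against type” – the 10% chance above – is exactly Madrigal’s prescription, but in this sketch it is never enough to let the niche item surface.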

The paranoia that unseen and unchecked forces are predicting – even controlling – our behaviour seems to be spreading even faster than fear of Yemeni terrorists. I think it relates back to our deepening cynicism and distrust toward all large organizations. Believing in anything at all is seen by many as a mug’s game. Trust in governments declines the more we find out about how they conceal the truth from citizens, or tap their phone lines, or watch their goings-on. People now, on average, trust NGOs (even ones affiliated with large government organizations) much more than governments themselves, and certainly more than the politicians and bureaucrats that staff them. Faith in organized religion has plummeted amid endless sex scandals that are officially acknowledged too late (if at all), refusals from the highest levels to acknowledge the damage done by outdated policies, and values that diverge from those of most Westerners on gay marriage, reproductive rights, and female clergy.

I’ve written before about what apathy and extreme cynicism look like in modern society. I neglected to mention an obsession with knowing the “truth,” even if part of us believes that truth to be fictional or compromised. Hence the enduring popularity of the “exposé,” tabloid journalism, insider specials, and now WikiLeaks, the non-profit whistle-blower organization that is making news (again) this week with the release of thousands of diplomatic cables sent by US ambassadors. Despite pleas from the White House not to release the information (potentially jeopardizing thousands of lives, and undermining US diplomacy and counter-terrorism efforts), the obsession to reveal won out, and the cables were posted anyway.

Why? Secrets may not be entirely benign, but what seems to be missing from the discussion is the possibility that their release isn’t either. In an age of over-sharing, of laying open our most personal thoughts for the world to see, is even the necessary secrecy of diplomacy unwelcome? It has fallen victim to the public’s need to know anything and everything — or else there must be some ominous conspiracy at play. In democracies, utter transparency seems to be the only option palatable to citizens; we are unnerved when it isn’t available, so we turn to (often illegal) means of obtaining information, such as WikiLeaks.

It seems we are experiencing a seismic shift in the way we use and desire information. Should we expect it to be entirely accessible at all times, to all people? Knowledge is power, as they say, and everybody wants more. The irony, of course, is that everybody also wants privacy: WikiLeaks, for example, will not disclose its sources, or its founders. One wonders how long they can expect to keep that a secret.