The rise of lottery professions and why it’s so hard to get a decent job

May 4, 2014

In early September 2002, a young singer named Kelly Clarkson won the inaugural season of a new reality competition called American Idol. Her first single set a new record for the fastest rise to number one, vaulting past the previous record holders from 1964, the Beatles. Her rise to fame was characterized as meteoric, the kind of rags-to-riches story so beloved in America, and one that would be repeated, with more or less success, over the next 12 seasons and in several other similar contests.

Kelly Clarkson is fabulously talented, and also the beneficiary of a windfall. After years of struggling, she rose to the top of what has traditionally been known as a “lottery profession”: one with many aspirants, very few of whom succeed, while the rest do menial jobs in the hope that one day their “big break” will come.

Often it never does. There are thousands of talented singers who never appear on our TV screens and never get Grammy awards because they are unlucky. They don’t win the professional “lottery.” (And in some cases it is a literal lottery: I’ve been to Idol auditions in Canada that have random draws of ticket stubs to determine who is even allowed an audition at the first stage.)

The concept of a “lottery profession” is usually applied to the performing arts – dance, acting, singing – and the literary ones – novel- and poetry-writing – as well as sports and other fields in which aspirants need to be exceptionally talented, and also distinctive. And in these areas it has only become more difficult to succeed in the last hundred years.

The Poor Poet (Der arme Poet), by Carl Spitzweg

 

Breakaway: how the best of the best get more market share

Chrystia Freeland writes in Plutocrats of how communications technology and economies of scale have made famous people even more famous. In the nineteenth century and before, a singer’s reach (and therefore income) was limited to those who could afford a seat in a theatre; today, she can make money from records, huge live shows and merchandise. An early twentieth-century soccer player’s income was limited to what match-goers paid at the gate – likely one reason we don’t hear about many famous professional athletes pre-twentieth century – while today his face is on Nike ads and jerseys, and television licensing deals earn him millions per season just to be seen getting yellow-carded.

And where the limited reach of a theatre or soccer pitch allowed a greater number of quite talented individuals to succeed, the limitless reach of television and the internet allows the über-talented to divide greater spoils amongst a much smaller number. Why bother going to see the local hotshot when you can watch Lionel Messi?

 

A Moment Like This: the lottery gets bigger

The trouble is, artists and athletes aren’t the only ones risking their livelihoods on a proverbial lottery ticket anymore. New “lottery professions” emerge all the time, often out of professions that were once solidly middle class, with the salary and benefits to support a family. To give but a few examples:

  • Investment banking and stock trading, formerly quite boring, now require numerous credentials (CFA, MBA, etc.), and a good network in the right places, to get into;
  • Law work is now frequently outsourced to the developing world with intense competition for fewer and fewer spots at top firms in the West;
  • Tenured positions in academia, as I’ve already written a lot about, are quickly being eliminated with little hope for current Ph.D. holders;
  • Firefighters have a 15% chance of acceptance at the local fire training academy, and most fire-fighting professionals hold second jobs;
  • Medicine, nursing and social work programs accept fewer applicants for even fewer jobs, despite growing demand for these professionals – demand increasingly met by temporary foreign workers;
  • Even teaching, that bastion of middle-class professional employment, is a tough job to get these days, and, as anyone who has done any teaching will tell you, it ain’t a glamorous or lucrative gig.

Some of these changes are the effects of globalization, of course, which has pulled many in the developing world into the middle class even as it has displaced work from North America and Europe. The result is intense competition for what are really quite banal professions with long hours and few perquisites. After all, we are talking about work here, and while many of us can hope to enjoy what we do, most people still think of a job as a way to make money rather than a calling.

 

People Like Us: what happens to the “winners”

Who holds the winning tickets? Extraordinarily talented, hardworking and lucky people like Kelly Clarkson and Lionel Messi. Those who inherit wealth. And those who are connected in the right ways. This is a self-perpetuating circle, with fame and money increasingly intertwined.

Success in one area seems to imply expertise in another, which is why we have a rise in parenting and home-making literature from people made famous by playing characters on TV and in movies. Famous actors try their hand at everything from making wine to formulating foreign policy. Just look at Dancing with the Stars, that barometer of cross-professional fame: this season alone it has deemed that comedians, “real housewives,” and Olympic gold medallists will make money for it, ostensibly as dancers. Previous contestants include politicians, scientists, and athletes galore.

Science! …and cross-promotion.

The links here are fame and money, and while using either to get what you want in a new realm is nothing new, the potential reach of both (and the opportunity to make more of each) has increased exponentially. And there is a new opportunity for unprecedented fame from the patronage of modern plutocrats, as witnessed by the pantheon of celebrity chefs, celebrity dog trainers, and celebrity litigators. The über-rich want the best, and the best take a disproportionate slice of the industry pie.

 

Thankful: the rise of corporate patronage

So what happens to all those quite talented people who would have played to full theatres two hundred years ago? (Apart from making YouTube videos, that is.)

I’ve been thinking for years now that the “high” arts (theatre, ballet, classical music, dance) have come to depend on wealthy patrons for survival, much as they did before they became popular attractions in the modern period. Those patrons today are largely corporate sponsors instead of wealthy individuals: the companies get cultural cachet and corporate social responsibility bonus points, while the performers gain a living.

The trend goes beyond the arts. In Silicon Valley (and elsewhere in the US), corporations and wealthy benefactors are extending their philanthropy beyond traditional areas of giving. Mark Zuckerberg sponsors New Jersey school districts. Mike Bloomberg helps municipalities with their tech budgets. The Clinton Global Initiative finances green retrofits in the built environment. As the public sector falls apart, we become more dependent on the proclivities of wealthy people and the companies they run, for better or worse.

Your discretionary income at work!

 

Don’t Waste Your Time: what happens to everyone else

Those without a good corporate job or corporate patronage can still have interesting weekends. The last twenty years have seen a rise in hobby culture. No longer just for hipsters, farming, knitting, and brewing now count as hobbies, as it becomes harder and harder to actually make any money doing them. Assembly-line economics prompted a decline in bespoke items in favour of cheaper, ready-to-use/ready-to-wear equivalents, and with it the near-demise of artisan production. Hence, hobby culture has taken over. Many people today run side businesses that were once considered a main income stream, such as making crafts (e.g. through Etsy), photography (helped by the rise of Pinterest and Instagram) or self-publishing. I suspect this trend will only grow as 3D printing becomes more popular.

And for everyone else holding tickets and waiting for their numbers to come up, there is retail. The old stereotype of underemployed actors waiting tables persists because it is still true, and some are servers forever. In some industries and some places (for example, grocery cashiers in Toronto), service jobs are a means to an end, some spare cash earned while in school. In others – much of the United States, and suburban areas generally – people work in retail and/or service (the largest category of employment in North America) because they have no other option.

The result is a proliferation of companies pushing a “service culture,” a movement toward glorifying the customer experience everywhere from fast food to discount clothing stores. And while there is a long history of service as a noble profession (for example, in high-end restaurants), and giving clients what they desire is a laudable goal, claiming a service mandate while maintaining a pay gap between customer-facing employees and top management of 20, 80 or 200 times is deceitful, the false empowerment of the economically disenfranchised.

All of the above trends reflect a growing inequality in the workforce, one that becomes ever-more entrenched. Inequality is a major hot-button issue in politics at the moment, and a number of initiatives have been proposed to combat it, including raising the minimum wage. The long-term success of any solution, however, requires recognizing that the ability to earn a living can’t depend on holding a winning ticket.

 


7 Things I’ve Learned About History Since Moving to the Land of the Future

April 25, 2014

“Why on earth did you study history?” I was asked last night, and on many days since I arrived in what is perhaps the world’s most future-oriented place. What answer can I give to an engineer or venture capitalist who can’t rotate his perspective enough to look backward, or see the importance of doing so? I usually say that I love to explore the rich context of our modern world, so much of which was influenced by the past. Or that history, like all the humanities, is a mirror that shows us a different version of ourselves.

But such answers will not satisfy many people here, and in wondering why, I realize I’ve learned a few things about history and its uses since learning the way (to San José):

1. America ≠ California and American History ≠ Californian History.

I write a lot about nationalism, because it is one of the ways we identify as part of a group with a shared history. I feel very Canadian, and not very Ontarian at all, because I don’t see Ontario’s history as disconnected from the Canadian historical narrative. So I assumed it would be very “American” here, like the places I’ve been on the East Coast and in the Midwest.

I was wrong.

The United States, though a young country, seems to be very aware of (certain parts of) its history. After all, how many other countries refer so frequently to, and preserve so faithfully the intentions of, their founding documents? America has an acute sense of its founding myths, and the historical reenactment culture here is an ongoing source of fascination and delight. (Who wants to be that Union soldier who gets shot in the first moment of battle and lies on the field the rest of the day in period costume? Is there a hierarchy, and does one get promoted each successive year based on seniority until eventually he is General Lee, or is it merit-based, depending on how well you keel over in your fleeting moment of glory? Such pressing questions.)

California Republic

California is not, however, America. It is, as the t-shirts say, “California Republic,” with its “Governator” and strange direct democracy and fiercely independent, contrarian streak. Very few people here identify as “American” so much as “Californian,” and they don’t seem to share the same historical touch points. More common are nods to the Spanish and Mexican roots of the region, through the missions and street names, or a focus on the history of global trade and cosmopolitan capitalism.

2. People have a different definition of “history” in Silicon Valley.

Silicon Valley is a different animal altogether (a shark, perhaps?).

In a place where the next iOS release, must-have gadget or earnings report is breathlessly anticipated, “history” becomes something that matters mostly in your browser. “Legacies” and “artifacts” are usually bad things to Valley dwellers, being outmoded or standing in the way of progress. The tech industry does not look kindly on the past – or rather, doesn’t think much of it at all, an indifference which is, as we all know, much more the opposite of love than dislike.

San José then…

Silicon Valley isn’t kind to its physical history either. The historic orchards and cherry trees that once ringed San José have been paved to make way for sprawling, two-story rental accommodations and carefully landscaped corporate lawns. Giant redwoods are regularly felled to allow for a better view of the advertisements on the sides of buildings (seen from the freeway, of course). Dome-shaped Space Age cinemas once frequented by Steven Spielberg are in danger of being torn down, likely so newer, bigger malls can rise up in their places.

Even churches, those bastions of beautiful architecture, look like something out of an IKEA catalogue, all light wood and glass – nary a flying buttress in sight. It’s a full-on assault of the past by the present, in the name of the future.

3. Transience produces ambivalence and a lack of investment in the past.

Many people are new here, as the region’s explosive growth over the last 30 years attests. Others are “just passing through.” So a lot of people feel disconnected from anything greater than their jobs or family/friend networks, and there is a pervasive sense of rootlessness.

So why bother to invest in their communities? Or care what they used to look like? So goes the logic, and thus the “San José Historic District” encompasses a single square block, with fewer than ten historic monuments – mainly just buildings that have survived earthquakes, vacancy and neglect. This website catalogs the “boneyard of unwanted San José monuments” that are slowly crumbling away near the freeway and the very shiny corporate HQ of Adobe.

Santa Clara County Courthouse

The courthouse, crumbling in disrepair. San José is falling down, falling down, falling down…

It’s not all that surprising though when you consider that…

4. …it is personal history that fosters pride and connection.

Perhaps I and others feel disconnected from the history here because so much of historical connection depends on identifying with who made the history in the first place. Several recent studies from the British Commonwealth (Britain itself, Canada, and Australia) and the US indicate that museum attendance increases where a greater percentage of the population identifies with the ancestry of the area. That is, if you are of Scottish origin in Toronto, you are more likely to be interested in a museum about Canadian history, which was largely shaped by Scots, than if you are a Native Canadian whose world was essentially trampled by those same Scots. You’re likely still less interested if you are a recent immigrant to Toronto from Bangladesh. Feeling as though a part of you helped to make a place what it is makes it more real and more interesting. Rightly or wrongly, you feel as if you have more of a stake in the future because “your people” had more of a stake in the past.

Even people who grew up here can barely recognize it, and so feel as though a part of their past has been taken from them. Wherefore the cherry blossoms and apple orchards that used to dot the landscape of the “Valley of the Heart’s Delight”? One woman told me her family used to live bordering a fruit farm, and moved six times as the farms were paved over by housing developments, until “we lived backing on to the mountain, and there were no farms left.”

…and San José now.

And yet I can only conclude, from my experiences in Toronto – where historical consciousness, like love and Christmas, is all around – that history is critical.

Thus:

5. History is often the most beautiful part.

I used to love walking through downtown Toronto because every so often a beautiful Art Deco or neo-Gothic gem would emerge amid the drab tower blocks of the 1960s and 1970s. Variations in architectural style provide interest and colour in an otherwise monotonous world of glassy office towers and utilitarian apartment buildings. Grand plazas, churches and monuments make statements about what is important to a place, and what it values.

What do these people value? It is worth cherishing and celebrating the few beautiful examples of history that exist here.

Like this one!

 

6. Historical traditions provide comfort.

This surprised me. History, of course, is about customs passed down as much as it is about actual events or physical buildings. Traditions ground us and give us some consistency in a world that changes rapidly. This is part of the reason weddings, funerals, and general church-going still exist. We need traditions to mark the big events in life.

We also need traditions to mark out who we are and how we should behave. To take a small but non-trivial example I wrote about recently: our clothing sends out signals about who we are and what we expect from life. There are no standards of dress here, at work or at play. Twenty-five-year-old men dictate the business ambiance, so beards, flip flops and holey t-shirts abound, and you can’t find a restaurant in California fancy enough that you can’t wear jeans.

It is utterly unconventional, which is perhaps precisely the point. Wearing jeans to a meeting with someone in a suit will instantly destabilize them. It’s the same idea with non-standard working hours, perfected by the tech industry, and with turning work into play (both the work itself and the space in which it is done). Even the critical and traditional accent in “José” has all but disappeared, which leads me to wonder whether people in the future will think this city was pronounced as something that rhymes with “banjos.”

It is groundbreaking to blow up established norms, but also somewhat unsettling. And history is necessary, if only to have something to conscientiously reject.

7. Culture clusters around history.

Life without history would not only be ignorant and untethered, but very boring.

People often view San José and its surrounds as soulless, and it’s easy to see why. One need only look at the cultural draw San Francisco has on the region to appreciate why places with deep roots are attractive. Most of San Francisco’s biggest tourist attractions are historical landmarks. What would the City be without the bridge, cable cars, Alcatraz, Haight-Ashbury, the Ferry Building, or Pier 39? Just a bunch of expensive apartments and hills, really.

History infuses places with meaning, and communities gather to add more layers. So next time someone asks me why on earth I would bother to study history, I think I will tell him that it’s because I care about beauty and culture and connection to the people and places around me — and that if he wants to live somewhere even half-decent, he should too.

History, paved over


Tasting Notes: A Scientific Justification for Hating Kale

April 17, 2014

There is a new East-West arms race, and it is full of bitterness. Literally.

Since moving to the West Coast, I have been struck by the preponderance of bitter foods and beverages. The coffee, beer and lettuce producers here appear to be locked in a bitterness arms race to see who can make the least palatable product, with no clear victor. The West Coast versions of all of these products (think: dark-roast Starbucks, exceedingly hoppy pale ales, and kale) are significantly more bitter than their East Coast counterparts (think: more traditional lighter-roast coffees, lagers, and Boston Bibb).

Hops: beer’s bittering agent, liberally applied on the liberal left coast

What’s going on here? Are people’s taste buds addled from years of sipping California’s notoriously strong Cabernets? Is our future all about green smoothies and kale chips? And what are picky eaters (like, ahem, this blogger) to do?

It turns out I am not alone in opposing such bitterness, and the evolution of taste is on my side. And, moreover, the future may be friendly.

A taste of history

Humans can taste five distinct flavours: sweet, salty, sour, bitter and umami (otherwise known as “savoury,” the flavour of cooked meat, among other things). Each of our taste buds contains receptors for all five flavours, so taste sensation is not concentrated in certain regions of the tongue, as previously thought, but dispersed throughout. For example, we probably lick ice cream cones because they are too cold to eat with our teeth, not because sweet receptors are located at the front of our tongues.

We can also taste all five flavours simultaneously yet distinctly; if you were to eat something that contained all of the flavour elements, you would taste each in turn (and probably not enjoy it very much – I can’t imagine what such a food would taste like). Tasting is a multi-sensory experience, in fact. As any aspiring sommelier will know, flavour is produced both by the five taste sensations and the olfactory receptors in our nose, which give foods and drinks a much more complex and multi-layered profile. Temperature, texture, and auditory inputs such as crunch also influence our experience of “taste.” No wonder we love to eat.

Humans have such developed tasting abilities because we are omnivores with varied diets, and require a plethora of nutrients found in many foods to survive. Other animals do not require such diversity of nutrients, so cannot taste such variety. Pandas, who have evolved to eat almost exclusively bamboo, cannot taste umami. Cats and chickens “lost” the ability to taste sweetness at some point in their history.

How sweet it is

It is thought that sweetness was among the first tastes we developed, because we need simple sugars as a fundamental building block of nutrition. Today healthy sugars and sweet tastes come from fruits and breads. Salty food indicates the presence of sodium (or lithium, or potassium), and a certain amount of sodium is necessary for our bodies to function, since humans lose salt through sweat.

Sour foods, such as lemons, are typically acidic (in the chemical sense) and a sour taste can signify that food is rancid. Sour is also good, however: humans need a certain amount of Vitamin C, found in sour foods, to survive, so our taste buds developed to seek this flavour out. An emerging theory is that our sweet and sour tastes evolved simultaneously from exposure to fruit, which contains both tastes. Both flavours are also present in fermented foods and cooked meat, the former being important in providing good bacteria to aid digestion and the latter in being more easily digested than raw meat.

Bitterness is the most complex of the taste receptors, and it is thought that humans can perceive some 25 different kinds of bitterness. Bitter foods are frequently basic (again, in the chemical sense), and bitterness is an innately aversive taste. Babies will turn away from bitter foods – such as leafy green vegetables – just as they will naturally gravitate toward sweet ones. As one article I read succinctly put it:

“Many people do not like to eat vegetables—and the feeling is mutual.”

Bitter melon. Shudder.

Evolutionarily, our aversion makes sense. Plants secrete pesticides and toxins to protect themselves from being eaten. Even now, if we taste a strongly bitter food, our bodies behave as though they are preparing to ingest a toxin, activating nausea and vomiting reflexes to protect us. Pregnant women are particularly sensitive to bitterness because their bodies are hypervigilant about anything that could harm the baby. It is also now thought that small children have some justification for hating Brussels sprouts and other green, leafy vegetables, in that their younger taste buds are particularly sensitive, and averse, to bitter flavours. Picky eaters vindicated!

It’s a bird! It’s a plane! It’s … Supertaster?

A relatively recent finding that has the tasting world abuzz (ataste?) is the existence of so-called “supertasters,” individuals with a greater number of taste receptors (the typical number of taste buds in humans ranges from about 3,000 to over 10,000). Some experts also theorize that supertasters may have normal receptors but more efficient neural pathways to process the tastes. Supertasters are more likely to be female, and of African or Asian descent, and some estimates put them at 25% of the population.

Supertasters are particularly sensitive to bitter flavours present in such foods and drinks as grapefruit, coffee, wine, cabbage and dark chocolate. They are also thought to be more sensitive to sour and fatty foods, which means they are usually slim, but their aversion to vegetables makes them more susceptible to various cancers. And they are most certainly susceptible to the ire of their parents, friends at dinner parties, and anyone else who tries to feed them.

Like an evil mutant flower.

Leaving a bitter taste in our mouths

So why would anyone, supertaster or no, desire to eat foods that we humans have convinced ourselves, over millennia, are toxic and therefore to be avoided?

In fact, many scientists theorize that we only learn to like bitter foods after seeing the other positive effects they can have on us, often pharmacological ones. Consider coffee, which makes us more alert, and wine, which makes us more relaxed. This can be the only reason anybody with taste receptors eats spinach or kale, right?

A fondness for bitterness seems, in my entirely unscientific analysis, to centre on warmer regions, where bitter foods such as coffee, olives, grapefruit, and bitter melon are traditionally grown. See, for example, a traditional Mediterranean diet pyramid, which contains several bitter foods.

A Mediterranean traditional diet pyramid

Perhaps more significantly, though, scientists have discovered a link between eating bitter foods and socioeconomic status. One study in France found that men who ate a greater variety of bitter foods were more likely to be well-educated and have a lower body mass index (BMI). Women who ate a greater variety of bitter foods also had lower BMIs and were less likely to have diabetes.

It would seem that bitter foods today pose less of a threat of toxicity and yield real health benefits (well, perhaps kale more than IPAs). Likely this reasoning is behind the West Coast health food craze, and indeed why bitter foods are most commonly consumed for their health benefits where populations are, as a whole, more educated and wealthier.

Science will continue to play a role as well. We may know in our heads that Brussels sprouts are good for us but still dislike the taste. Food producers will likely try to engineer foods to keep the benefits without the drawbacks. In fact, many foods are already “debittered” by the food industry, from oil to chocolate to orange juice.

So good news for West Coast dwellers, supertasters, children and those averse to toxins everywhere: one day you may be able to have your kale chips and eat them too — happily.

Kale: the world’s ugliest vegetable? It’s coming for you!

 


How people we hardly know cause us to have more serendipitous, lonelier, busier lives 

April 11, 2014

Imagine you live in a small town, circa 1750. Your daily life is spent working – maybe farming, or maybe you make shoes or are a teacher. You eat, drink, sleep, look after children, and socialize. Your social circle consists, for the most part, of others of the same class and gender, and you will most likely spend your whole life living with, farming with, marrying into, reproducing with, and dying with the other families that live in your village. You know these people really, really well.

Perhaps someone in your family emigrates – to London, or to one of the settlement colonies, say – and so you spend a bit of time every month writing letters to them, knowing that it’s a bit pointless, because anyone who moves more than a few hundred miles away will likely never come back. Every so often a traveller or vagrant will come by, and sometimes people will move in or away, but for the most part social circles are set. There is no networking to change your lot in life, or to make new friends; there is just living.

Now imagine the richness and diversity of your current social circle. It is probably more like a multinational organization than a village. It probably includes people living in several countries, from different backgrounds. It is probably quite large. You probably don’t know many of them very well, but may spend a lot of time, like I do, writing emails, talking on the phone, or communicating with them in other ways. I spend much of what time I have left over in my day feeling guilty that I haven’t spent more time writing more emails or making more phone calls. When I lived in Toronto, I must have had 25 people at any given time that I had honestly been meaning to “catch up” with for about six months. Now that I live farther away, it is even more important (and time-consuming) to keep up links with everyone back “home.” (I am that immigrant mentioned above! Doubly so. So many letters.)

Of course, this doesn’t even include time spent on the more common definition of “networking” – the kind that makes me want to take a shower – which is to purposefully make connections with the hope of them being useful at some point hence, in a search for a new job or piece of advice.

Network Proliferation

The abundance of communication methods and social networking technologies has made all kinds of networking almost unconscious, but quite time-consuming. Modern networks are kept alive either by accepting an inferior means of communication (email, letters, FaceTime) as satisfactory grounds to sustain them, or by the faint hope of a better way of interacting occurring again in the future. But it appears that quality decreases even as time spent increases, and we are left holding many more threads of connection without the time to forge many of them into lasting companionships.

If we are being honest, it is highly impractical to spend so much time maintaining friendships with friends of friends, those who live outside of our immediate geography, or people who were major players in our lives years ago but no longer cross our minds very often. So why do we do it? What is so inherently appealing about having far-flung networks of others who share our interests and experiences?

I see the main points of the cost-benefit analysis as follows:

  • The social inclusion high. With the breakdown of actual barriers of geography through telecommunications and easier global travel, and of imagined barriers of social class, we are much more likely to find others who share commonalities with us. And most of us are willing to spend time and energy building a social circle of like-minded peers, over and above the time and energy required to simply exist in the world with those who may not necessarily be like-minded (e.g. colleagues, extended family members, baristas at the coffee shop, the mailman, etc.).
  • Imagined future benefits. Slightly more self-serving, but no doubt also a factor is the potential usefulness of knowing an old travel companion who lives in Auckland, NZ in case you ever need a place to stay, or a contact in the federal government in case of a future career change. This is, basically, the only reason LinkedIn exists.
  • Guilt. It’s harder to terminate a relationship than keep it vaguely open-ended. It is much easier to have friends from elementary school connected by a thin thread on a Facebook feed than acknowledge that there is no real reason to be part of each other’s lives. In this case the cost may be low (provided they don’t constantly spam us with game requests or multiple smarmy medical school acceptance status updates), but it also makes me wonder if our village-dwelling ancestors were more comfortable with saying goodbye and just letting go of outdated relationships.

Dunbar redux

There are very real advantages to having large, loose networks of connections, but the cost of all of this network upkeep is time and anxiety. According to a well-known study by anthropologist Robin Dunbar, the optimal size of a human’s social network is about 150. This number refers to how many people we can cognitively sustain stable relationships with, and is directly related to the size (and thus functionality) of our neocortex. (For a fantastic and hilarious illustration of Dunbar’s number, see this piece.)
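For the quantitatively inclined, the number falls out of a regression: Dunbar plotted stable group size against neocortex ratio (neocortex volume relative to the rest of the brain) across primate species and extrapolated to humans. Here is a back-of-the-envelope sketch in Python, using the coefficients commonly quoted from his 1992 paper – reported values from secondary sources, so treat them as assumptions rather than gospel:

    import math

    def predicted_group_size(neocortex_ratio):
        """Predict stable group size from neocortex ratio, via the
        log-log regression commonly quoted from Dunbar (1992)."""
        log_size = 0.093 + 3.389 * math.log10(neocortex_ratio)
        return 10 ** log_size

    # Humans have a neocortex ratio of roughly 4.1, which yields the
    # familiar figure of about 150 stable relationships.
    print(round(predicted_group_size(4.1)))  # -> 148

Plug in a chimpanzee-like ratio of about 3.2 and you get a troop of roughly 65 – which is the whole point: bigger neocortex, bigger stable social group.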

Dunbar’s number has obvious applicability to real-world organizations, but has more recently also been found to apply to our online social networks, in the number of people we frequently interact with online. But with ever-larger networks to maintain, something has to give: the quality of our relationships, the amount of time we are willing to spend communicating with others, or our physiology itself, shifting so that we can cognitively sustain a greater number of stable connections.

It seems that quality is the first thing to go. A 2007 study showed that Facebook has many positive social attributes, in that it enables us to “keep tabs” on others very easily, thus “convert[ing] latent ties into weak ties,” increasing the serendipity factor in our lives. As is already widely known, however, it also carries costs. The constant identity curation necessitated by Facebook and similar social networks is exhausting. We want to project an image of ourselves as (relatively) happy, successful and social. It’s stressful, and it also makes us lonely.

I pick on Facebook, but we use the same techniques to keep up appearances with all of our weak ties across networks, and this is made easier by not being around people in person for sustained periods of time. And it isn’t just in our personal lives. Image production has become an increasingly useful skill for knowledge workers who have to justify the value of their work through self-promotion or “personal branding,” either within an organization to get that excellent performance review, or to win more business as a sole proprietor. Such conscious displays of our better sides (I won’t go so far as to say artifice) would have been impossible to keep up in the village, with so many strong ties and so few weak ones.

Back to the village…

Perhaps it is a symptom of our modern greed that we expect to have so much capital interpersonally and intellectually, as well as physically. Since we have “progressed” beyond the village, we can now create and maintain more opportunities: opportunities for more knowledge about the world, more interesting friends, better social activities, and better jobs. This is good news if you don’t want to be a shoemaker who sees the same 50 people every year for the rest of your life, but bad news if you want to have an empty inbox and be ulcer-free.

I see it as a social manifestation of the “paradox of choice” (a book I highly recommend for anyone feeling swamped by choice). Having more options actually makes us less happy, because the stress inherent in choosing between them, and the time it takes to do so, often outweighs the potential benefits of a better choice (if there even is a better choice). More weak ties naturally means more choice, and more stress.

So maybe those who withdraw from frequent socializing are (intentionally or not) limiting their options, and maybe they are happier for it. They moved back to a slightly bigger village, and they’re enjoying the lifestyle.


Greater Understanding but Less Choice? The Decline of Free Will

March 18, 2014

Every human’s behaviour is constrained. Legal or social censure, theological or other conceptions of morality, and physical restrictions all affect our agency. The idea of any of us having true free will is contested.

Technology and biological research have only accelerated the debate. For example, in less than six months, you will likely have advertising delivered to your cell phone based on your geographical position. (“Did you know that sweaters are on sale at the Macy’s you just walked past?”) In less than a year, service alerts and other helpful information will likely be added. (“Don’t take King Street on your way home; there is a traffic snarl up by the freeway entrance.”) If you use Google as your primary search engine, you already see only a fraction of all the search results available for your query, because predictive search technology has selected the ones it thinks will be most relevant to you, based on your location and search history.
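Mechanically, there is little magic in location-triggered advertising: at bottom it is a distance check against a list of store coordinates. A minimal sketch in Python, with the store name and coordinates invented for illustration:

    import math

    EARTH_RADIUS_M = 6371000  # mean Earth radius in metres

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two (lat, lon) points, in metres."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
        return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    # Hypothetical store location (lat, lon).
    stores = {"Macy's (hypothetical)": (37.3337, -121.8907)}

    def nearby_offers(user_lat, user_lon, radius_m=200):
        """Return stores close enough to trigger a push notification."""
        return [name for name, (lat, lon) in stores.items()
                if haversine_m(user_lat, user_lon, lat, lon) <= radius_m]

    print(nearby_offers(37.3339, -121.8910))  # walking past the store

The production versions layer consent prompts, ad auctions and battery-friendly geofencing on top, but the core determination – you are here, the sale is there – really is that simple.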

If not direct constraints, these are certainly strong determining factors in our behaviour. It’s the world, curated especially for us. Choices about what we want, and what we should do and see, are made for us, continually narrowing our conceptual field of vision. Spontaneity and serendipity, in an online world where our doings are tracked and analyzed, may be things of the past.

Nudge, Nudge

Greater understanding of human brains and decision-making also explains the choices we make, and can be exploited to influence them. The nudge theory of behaviour – popular with the Obama and Cameron governments, who think of it as a way to combine paternalism and libertarianism – advocates providing incentives to subtly change behaviour toward a more rational course. Give people tax credits for eco-friendly home improvements, and they’re more likely to go with the low-flush toilet and reflective window coating. Place the salad bar in a prominent location closer to the entrance of the cafeteria than the mac ’n’ cheese and you may end up with diners making healthier food choices. There is even a group dedicated to implementing such “nudges” within the UK government.

Recycling bins

Here’s a nudge toward recycling more – in Toronto, recycling and organics bins are free, while larger garbage bins cost more money

Successes claimed by nudge theorists include everything from a reduction in traffic fatalities (on curves where the lines are painted in such a way as to unconsciously encourage drivers to slow down) to less urine in public toilets (where an insect is painted on the men’s urinals to attract attention). And yet nudge theory has come under fire from various libertarian groups, who believe we should at least be made aware when our perception is being manipulated, so that it is less of an infringement upon our conscious choices. Some philosophers even argue that free will is not free without knowledge of the potential outcomes of different choices.

And yet, so many of our choices seem to be unconscious. Recent neurological research into what is called “haptic sensation” has inexorably chipped away at any concept of free will we may have had. Studies indicate that holding a warm mug of tea or coffee while interviewing a candidate, or having his/her resume presented on a heavy clipboard, can lead to a more favourable outcome for the interviewee than iced tea and a flimsy page. Soft furniture in a conference room can lead to more harmonious meetings than wooden benches. Sitting in a hot room can amplify anti-social tendencies like aggression.

Similarly, priming female students before math tests to consider their gender results in a much poorer performance than prompting them to consider more positive characteristics, such as their attendance at an elite school. The fear of conforming to gender stereotypes (“women are worse at math than men”) affects performance, in a phenomenon known as “stereotype threat.” The same effect has been shown on test-takers who are members of visible minority groups.

Why bother studying those limit laws when a few spoken words or a poorly placed demographics question before an exam can have significant negative effects?

Who is guilty when nobody is responsible?

Better understanding of human emotions and the human brain has already significantly affected our conception of human accountability and the choices we make. So what happens when everything can be explained away, or rationalized with a new theory of behaviour? The first verdict of “diminished responsibility” paved the way for a trend of exculpatory evidence that now encompasses hundreds of conditions rendering us unaware of, or unable to control, our actions. Everything from brain tumours to hormonal imbalances has been shown to lead to often drastic, out-of-character behaviour.

This is often where politics draws a line between more conservative advocates for “punishment” and liberal advocates for “rehabilitation” of anti-social behaviours. And free will is an essential part of the argument, mainly because it is often linked with morality. Kant says that actions cannot be moral without being free: if we are not in control of our own actions, how can we choose to be moral or otherwise?

If you fail to slow down where nudging lines have been drawn closer together on the road, are you an unsafe driver, or merely someone on whom the psychological trick didn’t work? Should a manager be sued if he failed to hire the “better” candidate because he was sitting on too firm a chair during the interview? If two medications interact in an unprecedented way and you assault someone, are you culpable?

3d speed bumps

Slow down for the fake speed bumps! Or else?

A middle path

Perhaps there is a way to be somewhat but not entirely responsible. Many people view free will as an illusion, and consider the lack of it a freeing, positive thing. The well-known atheist writer Sam Harris, who wrote a book on free will in 2012, argues on his blog that believing in the absence of free will actually lessens unhelpful emotions like pride and hatred, by chalking up a good portion of the cause of our actions to unconscious reflexes and brain chemistry. If we focus less on hating “bad” people for their actions, he argues, we can spend more time meting out appropriate punishments to ensure they do not reoffend. And yet he still leaves room for persistence, hard work and other actions that enable success and prosperity in the longer term.

The theme of intentional, long-term, repetitive choices being a proxy for free will (if not the same thing) jibes with another discussion I read while researching this post. It mentioned religion, still the prevailing global codification of human morality, as essentially acting in one’s long-term self-interest (that is, ensuring one’s place in heaven). And really, this is what almost all morality comes down to: ensuring the harmonious relations of the species so we don’t all kill each other, couched in terms of individuals keeping those around them happy by not stealing from them, lying to them, or killing them. It is enlightened self-interest to foster mutual support networks. In this light, all historical constraints on free will (such as laws) are for “our own good.”

Perhaps the increasing awareness of the restrictions on our consciousness – and of the manipulations of them – is actually prosocial. Perhaps it, like morality, has given us a way to preserve our real-life networks by encouraging rehabilitation over punishment and understanding over mystery. It is certainly possible that as more behaviours are justified or explained (away), society will become more liberal in meting out criminal “justice”. Many will consider this moral progress.

Will the passage of time and progress of science eventually explain all actions we take? Will we be living in some real-life version of “Minority Report”? Will Google ever be able to know when I need a good game of trivia and be able to tell me where I have the most fun (and win)?

Perhaps – the jury is still out.

I’d be interested in hearing your thoughts in the comments below – do you think free will exists? Does it matter?


The Brands That Still Matter

February 13, 2014

Dannon Oikos’s nostalgic Super Bowl spot was a great advertisement for both the French multinational and its new yogurt product.

But will knowing Oikos is a Dannon product make consumers want to purchase it? Or will they turn instead to nutritional content, probiotic count, or price to make their decision? How strong is this brand?

How strong, these days, is any brand?

What I need right now is some good advice

Well, it depends. An excellent piece in the New Yorker this week explores “the end of brand loyalty” and whether this spells the decline of an age in which brands were useful shorthands for purchasing everything from baked beans to luxury sedans. In an era in which the customer review is king, all companies must compete product by product, it says, even established giants. The field is open for upstarts and smaller rivals who can win over a market on the strength of a single, well-reviewed product.

It’s easier than ever for young companies to establish a foothold with a signature product: think Crocs, which have expanded from those foam clogs to flip flops and winter gear, creating a whole new hideous/comfortable footwear market. What propelled Crocs to fame was the strength of customer testimonials saying that it really was worth the price and the look to get that level of comfort.

The same trends that allowed Crocs to happen also signal the decline of major brands. When we have so much information at the click of a button, the promise of a consistent level of quality – which is really all a brand is – becomes less important than the facts: actual product reviews. Why trust a company to make things you know you’ll love when you can trust other users to tell you their opinions instead? It’s true: trust in a product’s brand as a shorthand for making a good purchasing decision is at its nadir.

However, the decline of product brands has led to the rise of service brands, particularly those giving advice. Booking a holiday? The competition for who gives the best advice on hotels, restaurants and attractions seems to have been decisively won by TripAdvisor. Purchasing a book? Bypass the publishing house and read the reviews on Amazon, and then let the site recommend a choice for you. Looking for a good movie? Hardly anybody makes decisions about movies based on the studios that produce them, but Netflix can tell you what to watch based on what you’ve seen before.
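Under the hood, “people like you also liked” suggestions can be surprisingly simple. Here is a toy sketch of user-based collaborative filtering in Python – an illustration of the idea only, not any company’s actual algorithm, and with the ratings invented:

    from math import sqrt

    # Invented ratings: user -> {title: score out of 5}.
    ratings = {
        "alice": {"Drama A": 5, "Comedy B": 1},
        "bob":   {"Drama A": 4, "Thriller E": 5},
        "carol": {"Comedy B": 5, "Comedy D": 4},
    }

    def cosine(u, v):
        """Cosine similarity between two sparse rating vectors."""
        common = set(u) & set(v)
        if not common:
            return 0.0
        dot = sum(u[i] * v[i] for i in common)
        norm_u = sqrt(sum(x * x for x in u.values()))
        norm_v = sqrt(sum(x * x for x in v.values()))
        return dot / (norm_u * norm_v)

    def recommend(user, k=1):
        """Suggest unseen titles rated highly by the k most similar users."""
        others = sorted((n for n in ratings if n != user),
                        key=lambda n: cosine(ratings[user], ratings[n]),
                        reverse=True)
        seen = set(ratings[user])
        return [title for n in others[:k]
                for title, score in ratings[n].items()
                if title not in seen and score >= 4]

    print(recommend("alice"))  # -> ['Thriller E']: bob shares alice's tastes

The point survives the toyishness: the value lives in the aggregated opinions of other users, not in whoever produced “Thriller E” – which is exactly the shift from product brand to advice platform.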

These are all Internet-based examples, because the advice industry has moved online for the most part, but brick-and-mortar service brands have also maintained their strength amid the fall of brand loyalty for products. Banks are judged on the selection of products they have curated for their customers, but more importantly on how they advise their clients, particularly in the higher-end, higher-margin businesses of wealth management and institutional and corporate banking. Consulting firms continue to prosper through economic slowdowns because they can advise on both growing revenue (in good economic climates) and streamlining expenses (in bad). And it all began with the likes of Consumer Reports, J.D. Power, and other ranking agencies that built their reputations on being the ones who choose the products that matter, and whose advice you can trust.

The service brand becomes personal

Those who host the platforms that enable others to recommend products – the information aggregators and analysts – are poised to be the big winners of the near economic future. And this extends to individuals as well, which explains the push in the last ten years to develop “personal brands.” I’ve written before about how this makes many feel a bit icky, and yet if we think of skills as “products” and analytical ability as “service,” it makes sense to have a personal brand that emphasizes how you think and relate to others, as opposed to what you know. (This is why most personal brands focus on a combination of attitude and experience, e.g. Oprah’s signature empathy, which grew out of her life experiences.)

Skills can be learned and degrees earned by many individuals, just like many companies can manufacture clothing. They are interchangeable. But proof of being able to think well, in the form of awards, complementary experiences, and attitudes, expressed through a self-aware brand, is unique.

This is likely why LinkedIn has moved to a model that goes beyond listing skills and abilities to providing references (“recommendations” and “endorsements”) to indicate past performance, and “followers” to show how popular one’s ideas are. These serve the exact same function as the ranking and number of reviews a destination has on TripAdvisor.

No doubt this has contributed to the large number of individuals wanting to strike out on their own. At a recent networking meeting I attended, 100% of attendees were looking to become independent personal nutritionists, career or life coaches, or consultants. They didn’t want to sell things; they wanted to sell themselves and their advice.

A strong brand – a personal one – is essential for this kind of career change, and part of creating a strong brand is ensuring consistency. Working for an organization whose values don’t align with yours – even if you are doing the same thing you’d want to do independently – is a brand mis-match.

All of this highlights another key similarity to traditional product brands: service brands, once established, have a grip on market share. Most companies would prefer to have an accountant at an established firm do their taxes over a sole proprietor. TripAdvisor has few competitors in the travel advice industry, which is why travel agencies are faring so poorly. The barriers to entry are high, and name recognition and brand still count for a lot.

My advice to newcomers: time to call up Uncle Jesse to make an ad for you and get some brand recognition.


Sparkling Water or Water Lilies? The Comfort vs. Beauty Problem

January 29, 2014

First things first: posthistorical is back! I am very excited to be blogging again. The world seems much the same: Obama, Harper, and Merkel have won more elections; politicians everywhere squabble over ridiculously trivial things and generally accomplish nothing; we collectively still spend way too much time on Facebook. And yet much has changed: this blogger now lives in the Golden State instead of the True North Strong and Free, and with a government-enforced sabbatical now has a lot fewer excuses not to post frequently.

It’s also 4 years (ish) since I started posting on this blog, and that means the exciting quadrennial spectacle of nationalism that got many of my juices flowing last time (otherwise known as the Winter Olympics) will soon be upon us. Once more, in Russian! More to come.

But first!

A dichotomy for the ages

One of the things that started me on blogging again was a rush of ideas I encountered while re-reading Aldous Huxley’s Brave New World for a book club meeting. I will likely tease out a number of themes and their repercussions in the modern world in future posts, but the one that resonated most strongly with me was the dichotomy that one of the book’s main characters presents between truth & beauty and comfort & happiness. To have beauty and truth, he reasons, one needs to endure political and emotional instability, heartbreak, misery and terror – basically, opportunities to show mercy, wisdom and courage in the face of overwhelmingly bad odds. Happy and comfortable people have no need to rise above their situations in such a manner.

But who would choose discomfort and misery, given the choice?

The general trend of world history has been toward comfort, both in a material way and in the sense of social stability. If the nineteenth century was the century of engineering and industry, the twentieth century was the century of comfort. It was the century of spandex, widespread air conditioning and La-Z-Boy. More people than ever before were lifted out of poverty, and industrialization led to middle-class (or relative middle-class) comfort worldwide.

The number of people who choose sneakers over high heels, or jeans and t-shirts over Little Lord Fauntleroy suits, seems to back up comfort’s victory over beauty. And judging from the range of falsehoods – from “spin” to blatant lies – evident in government, advertising and many other areas, truth doesn’t seem to do very well either.

Have we already made the choice? And if so, is this progress?

The truth/beauty vs. happiness/comfort dichotomy mirrors the idea of moral vs. technological progress. Some thinkers, such as John Gray, whose anti-humanist work Straw Dogs I’ve written about before, believe that technological progress is in theory limitless, but that our moral progress as humans is essentially stalled. Nuclear technology, to use an example he gave, while a huge technological boon that can supply power to millions, has simultaneously allowed us to wipe cities off the map, a more efficient killing machine than had ever been known before.

Systematic discrimination

Perhaps truth and beauty – or moral progress, if we can equate the two – have seemingly lost out to comfort and happiness – technological progress – because the large-scale systems that largely control our lives have focused mainly on the latter. Take governments: funding for truth and beauty (whatever that would look like) will almost always come second to funding for hospitals, police, and even infrastructure – that is, the necessary building blocks for a comfortable life. The Brave New World character I mentioned earlier also points out that rule of the people leads to an emphasis on comfort and happiness over truth and beauty – certainly, this is the credo of America: “life, liberty, and the pursuit of happiness,” not, incidentally, the pursuit of truth. Comfort, or at least freedom from harm and repression, was the first priority of the revolutionaries.

I went back to examine some other modern revolutionaries. Re-reading The Communist Manifesto, I discovered that the aims of Communism also begin with comfort before proceeding to truth, even if the ideals contained within the movement are based on so-called universal truths. Guaranteeing subsistence was the first step, through a radical change in property rights, the tax system, etc., followed by universal education (i.e., the pursuit of truth and beauty).

The other large system that governs our lives, free market capitalism, is also geared toward profits, which can more easily be made from comfort than from beauty. This is why Procter & Gamble, which sells deodorant and diapers, made US$81 billion in 2013, while the New York Times, winner of more Pulitzer Prizes than any other newspaper, struggles each quarter to make a profit. Perhaps this also explains the existence of the phrase “starving artist.”

First things first

There may be a way to see a positive outcome in this. Perhaps it is not so much a dichotomy between truth/beauty and comfort/happiness, as a ladder, or hierarchy, if you will. Perhaps, like ol’ Maslow said, we focus first on satiating the need for food, clean water and safety before striving for self-actualization.

Now, we all know how much I love Maslow (and so does everyone else, apparently, because this is by far my most-read post). But this theory would disagree with Huxley’s characters, who imagine the choice as either a comfortable, drugged-out existence devoid of anything so confusing and challenging as truth, OR starving artists capitalizing on their misery and discomfort by creating beauty – that is, skipping straight to the top of the hierarchy.

I posit this theory: those who can truly move to the self-actualization stage can only do so because they feel their more basic needs have already been met. This is true even though they live in the same world as those more susceptible to advertising campaigns which introduce needs we never knew we had (for the new iPhone, elective rhinoplasty, or gluten-free dog food, for example). Maybe it’s just that those seeking truth and beauty seem deprived and miserable to those who couldn’t imagine taking their places.

Our need for comfort will stay the same even as our definition of comfort changes; perhaps those who can be comfortable enough already, without soma and shiny new things, can have their truth/beauty cake and eat it too — happily.