History Through Rose-Coloured Glasses

November 12, 2014

Rarely have so many meanings been so definitively associated with a single colour.

From the innocence of childhood to the sexy, all-night glow of Las Vegas neon, pink has a colourful and controversial history, associated by turns with the noble and the common, the demure and the gaudy, the masculine and the feminine. And it wasn’t even known as “pink” (in English) until the late 1600s, centuries after its purported opposite — blue — arrived on the scene, both linguistically and in the popular consciousness.

Madame de Pompadour, mistress to King Louis XV

Some have argued that pink’s “golden” age was the eighteenth century, when it was the mode for high-fashion ladies of the French court. At that time, of course, they were among the few who could afford the expensive dyes that coloured the fabrics they wore. Madame de Pompadour, mistress to King Louis XV, popularized pink amid a bevy of other pastels favoured in the Rococo period.

Pink continued to be associated with the rich and royal until the twentieth century, when chemical dyes allowed its more widespread use in clothing that could be washed repeatedly without the colour fading or washing out. It was also around this time that pink transitioned from a largely pastel hue associated with the innocence of children to a bolder, more exotic shade. The new dyes allowed for deeper and darker versions of pink that spread around the world in the fashions of the 1920s.

The new and the neon

Buildings started to be sheathed in rose around the same time. In the 1920s and 30s, at the height of the Art Deco movement, vivid colours emerged as an alternative to the drab sameness and deprivation of Depression-era interiors. A splash of bright paint could change the tone of a whole room. And with its focus on modern, technologically enabled streamlining of form, the architecture and products of this age contrasted both with the ornate, intricate styles from earlier in the century and with the contemporary European counter-trend of functionalist, Mies van der Rohe-style block modernism.

Pink on pink at the Hotel De Anza, a classic example of Art Deco in San Jose, California

Art Deco was colourful and accessible — and immensely popular. This was particularly the case in America, where, as architectural historian Robert M. Craig puts it, “Art Deco was jazzy, bright, sexy, loud, and visually appealing.” It was everywhere: from department stores to movie theatres to the new motels that had sprung up all over the country to serve a growing motoring class.

Pink walls and pink fashions were a way to stand out and be noticed, and thus the colour was increasingly used in advertising, from splashy storefronts to the neon signs that dominated the landscape starting in the 1920s. In this way pink came to be associated with both the egalitarianism of commerce and the allure of material things: stylish perfume bottles, vacation homes in South Beach, new living room walls. Marilyn Monroe wore an iconic pink dress in the 1953 film Gentlemen Prefer Blondes. Elvis’s famous pink convertible, purchased in 1955, was seen as the height of post-war luxury and is featured at Graceland.

Gentlemen Prefer Blondes (in pink) — Marilyn Monroe in the 1953 movie poster.

Flight of the pink flamingos 

Pink is everywhere in California, as it is in many places where there are beaches, single-storey construction, and a touch of the exotic. It is the colour of flowering plants and of soft sunsets (thanks to Rayleigh scattering, which deflects the shorter blue wavelengths out of sunlight’s long path through the atmosphere, leaving the longer red-orange wavelengths to reach the eye). And in its 1950s heyday, it represented the triumph of modernism and new frontiers.
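
The physics here is simple enough to put in numbers. A minimal sketch (my own illustration, not from any source cited in this post), using the standard result that Rayleigh scattering intensity scales with the inverse fourth power of wavelength:

```python
# Rayleigh scattering intensity is proportional to 1 / wavelength^4, so short
# blue wavelengths are scattered out of low-angle sunlight far more strongly
# than long red ones -- which is what leaves sunsets pink and red.
BLUE_NM, RED_NM = 450, 650  # approximate wavelengths, in nanometres

def rayleigh_intensity(wavelength_nm: float) -> float:
    """Relative scattering intensity, proportional to wavelength ** -4."""
    return wavelength_nm ** -4

ratio = rayleigh_intensity(BLUE_NM) / rayleigh_intensity(RED_NM)
print(f"Blue light scatters about {ratio:.1f}x more strongly than red")  # ~4.4x
```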

Then its meaning shifted again. From being the bright colour of the future, it became the gaudy holdover from a bygone age. The lights of Las Vegas started to look a bit too commercial, too fake. Pink houses now stand out, “island[s] of whimsy in a sea of drab conformity,” and as such aren’t always viewed positively by the neighbours. Gradually pink started to represent the Miami Vice-like excesses of the 1980s or the wastefulness of neon tube lighting, first patented almost 100 years ago.

Nothing symbolizes the pink backlash more than the popular conception of lawn flamingos. Elegant and exotic, flamingos can be found across the globe in warm and wet areas, from India to Chile. The first pink lawn ornament was created in 1957 and was a smash hit. But by the late 1960s, the negative image of the plastics industry and the “unnatural” look of giant pink birds on the lawn led to a spiralling decline in their popularity. Now, of course, they are popular again, an ironic wink and nod to the kitsch of an earlier time.

Gentlemen prefer … pink?

This was not, however, the greatest reversal in the popular perception of pink. It is perhaps surprising today to imagine that pink was for most of its history considered a very masculine colour. Contrasted (as it always is) with blue, pink was seen as the more stimulating and active of the pair, appropriate for clothing young boys, while the soft daintiness of blue was thought more suitable for young girls (think: Cinderella’s dress at the ball). Pink remains a symbol of strength in Japan to this day, where it is associated with cherry blossoms, said to represent fallen warriors.

In nineteenth-century Britain, when military might was shown in red uniforms, boys wore pink as a kind of lesser red. And let’s not forget that the standard map of the British Empire was coloured pink, symbolizing the strength and breadth of British power, from the Cape to Cairo and Whitehorse to Wellington. The old pink maps cemented the idea of empire in the popular consciousness of the time, creating what my favourite scholar of the British Empire, Linda Colley, has termed “a sense of absolutely uncomplicated, uncompromising power.”

Imperial Federation Map of the British Empire, 1886, by John Charles Ready Colomb

Pink now, of course, is considered near-exclusively feminine. It is often used idiomatically to refer to women’s or gay rights issues, as in “pink-collar” work or “the pink economy.” Marketers have reinforced this image for almost seventy years, both shaping tastes in colour and hewing to common perceptions of them. Pink became a target during the 1970s, amid the feminist backlash against the confines of gendered clothing. As women started to dress in a more unisex and stereotypically masculine way, pink was eschewed. As an interesting overview in the Smithsonian notes, there was a time in that decade when even major retailers such as Sears Roebuck didn’t sell pink baby clothes, for girls or boys.

Living in a material world

2011 Color of the Year, “Honeysuckle”

The shift toward the ownership of colour could be said to have begun with Pantone’s codification of colours for matching purposes in the early 1960s. In recent colour analyses of brands, pink is considered warm, sensitive and nurturing, and is commonly used in products and campaigns targeted at women, such as Cosmopolitan and Victoria’s Secret. And that most enduring lightning rod of femininity, Barbie, naturally has her own shade. Barbie pink (Pantone 219C) has been associated with everything Barbie from the very beginning, including a fuzzy pink bathroom scale released in 1965 that was permanently (and controversially) set to 110 lbs.
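
The core of that codification is just a shared table of named swatches that everyone can match against. A minimal sketch of the idea (my own illustration; Pantone’s real system is a proprietary set of ink formulas, and the hex values below are commonly cited sRGB approximations, not official conversions):

```python
# A tiny "colour matching system": a shared table of named swatches, plus a
# nearest-match lookup. The RGB values are approximate, for illustration only.
SWATCHES = {
    "Barbie pink (219 C)": (0xDA, 0x18, 0x84),
    "Honeysuckle (18-2120)": (0xD9, 0x4F, 0x70),
}

def nearest_swatch(rgb: tuple[int, int, int]) -> str:
    """Return the codified swatch closest to an arbitrary colour."""
    return min(
        SWATCHES,
        key=lambda name: sum((a - b) ** 2 for a, b in zip(rgb, SWATCHES[name])),
    )

print(nearest_swatch((220, 30, 120)))  # -> Barbie pink (219 C)
```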

Love in pink. Photo courtesy of Flickr user Chris Goldberg.

And yet pink remains an aspirational colour, just as it was when Madame de Pompadour wore it at the French court. In 2011, Pantone chose Honeysuckle (18-2120), a bright variation on classic pink, as its Color [sic] of the Year, citing its “confidence, courage and spirit to meet the exhaustive challenges that have become part of everyday life.” It is a colour for the zeitgeist, a necessary perk in the dark days of our latest recession, with its many pink slips. According to Leatrice Eiseman, Pantone’s Executive Director, “In times of stress, we need something to lift our spirits. Honeysuckle is a captivating, stimulating color that gets the adrenaline going – perfect to ward off the blues.”

So often viewed in opposition to something, pink can nonetheless be understood as a world unto itself. Whether seen as high or low, kitschy or elegant, soft or strong — or all of the above — it seems doubtful we’ve reached peak pink. Who knows what it will signify next?

The rise of lottery professions and why it’s so hard to get a decent job

May 4, 2014

In early September 2002, a young singer named Kelly Clarkson won the inaugural season of a new reality competition called American Idol. Her first single set a new record for the fastest rise to number 1, vaulting past the previous record holders from 1964, the Beatles. Her rise to fame was characterized as meteoric, the kind of rags-to-riches story so beloved in America, and one that would be repeated, with more or less success, over the next 12 seasons and in several other similar contests.

Kelly Clarkson is fabulously talented, and also the beneficiary of a windfall. After years of struggling, she rose to the top of what has traditionally been known as a “lottery profession”: one with many aspirants, few of whom succeed, while the rest work menial jobs in the hope that one day their “big break” will come.

Often it never does. There are thousands of talented singers who never appear on our TV screens and never get Grammy awards because they are unlucky. They don’t win the professional “lottery.” (And in some cases it is a literal lottery: I’ve been to Idol auditions in Canada that have random draws of ticket stubs to determine who is even allowed an audition at the first stage.)

The concept of a “lottery profession” is usually applied to the performing arts – dance, acting, singing – and the literary ones – novel- and poetry-writing – as well as to sports and other fields in which aspirants need to be exceptionally talented, and distinctive besides. And in these areas it has only become more difficult to succeed over the last hundred years.

The Poor Poet (Der Arme Poet), by Carl Spitzweg

 

Breakaway: how the best of the best get more market share

Chrystia Freeland writes in Plutocrats of how communications technology and economies of scale have made famous people even more famous. In the nineteenth century and before, a singer’s reach (and therefore income) was limited to those who could afford a seat in a theatre; today, she can make money from records, huge live shows and merchandise. An early twentieth-century soccer player’s income was limited to what spectators paid at the match – likely one of the reasons we don’t hear about many famous professional athletes pre-twentieth century – while today his face is on Nike ads and jerseys and he earns millions per season as fans pay to watch him get yellow-carded on television.

And where the limited reach of a theatre or soccer pitch allowed a greater number of quite talented individuals to succeed, the limitless reach of television and the internet allow the über-talented to divide greater spoils amongst a much smaller number. Why bother going to see the local hotshot when you can watch Lionel Messi?
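
A toy model makes the arithmetic of this shift stark. The sketch below is my own illustration with made-up numbers, not Freeland’s: cap a performer’s reach at a theatre’s seating and incomes cluster; remove the cap and most of the market flows to a single star.

```python
# Superstar economics in miniature: the same fan spending, divided first by
# venue capacity, then by preference for the single best performer.
TOTAL_FANS, TICKET = 1_000_000, 20          # assumed market size and price

# Pre-broadcast: reach capped by the venue, so every performer who can fill
# a 1,000-seat theatre earns the same night's take.
local_income = 1_000 * TICKET               # 20,000 per performer

# Post-broadcast: no cap; say 95% of fans watch the one star, and the rest
# split among 99 also-rans.
star_income = int(TOTAL_FANS * 0.95) * TICKET          # 19,000,000
rest_income = int(TOTAL_FANS * 0.05) // 99 * TICKET    # ~10,100 each

print(local_income, star_income, rest_income)
```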

 

A Moment Like This: the lottery gets bigger

The trouble is, artists and athletes aren’t the only ones risking their livelihoods on a proverbial lottery ticket anymore. There are more new “lottery professions” all the time, often emerging out of professions that were once solidly middle class, able to support a family, with good salary and benefits. To give but a few examples:

  • Investment banking and stock trading, formerly quite boring, now require numerous credentials (CFA, MBA, etc.), and a good network in the right places, to get into;
  • Legal work is now frequently outsourced to the developing world, with intense competition for fewer and fewer spots at top firms in the West;
  • Tenured positions in academia, as I’ve written about at length, are quickly being eliminated, with little hope for current Ph.D. holders;
  • Firefighters have a 15% chance of acceptance at the local fire training academy, and most fire-fighting professionals have second jobs;
  • Medicine, nursing and social work programs accept fewer applicants for even fewer jobs, despite greater demand for these professionals, with employers instead hiring temporary foreign workers;
  • Even teaching, that bastion of middle-class professional employment, is a tough job to get these days, and, as anyone who has done any teaching will tell you, it ain’t a glamourous or lucrative gig.

Some of these changes are the effects of globalization, of course, which has pulled many in the developing world into the middle class even as it has displaced work from North America and Europe. The result is intense competition for what are really quite banal professions with long hours and few perquisites. After all, we are talking about work here, and while many of us can hope to enjoy what we do, most people still think of a job more as a way to make money than as a calling.

 

People Like Us: what happens to the “winners”

Who holds the winning tickets? Extraordinarily talented, hardworking and lucky people like Kelly Clarkson and Lionel Messi. Those who inherit wealth. And those who are connected in the right ways. This is a self-perpetuating circle, with fame and money increasingly intertwined.

Success in one area seems to imply expertise in another, which is why we have a rise in parenting and home-making literature from people made famous by playing characters on TV and in movies. Famous actors try their hand at everything from making wine to formulating foreign policy. Just look at Dancing with the Stars, that barometer of cross-professional fame: this season alone it has featured comedians, “real housewives,” and Olympic gold medallists, all earning money ostensibly as dancers. Previous contestants include politicians, scientists, and athletes galore.

Science! …and cross-promotion.

The links here are fame and money, and while using either to get what you want in a new realm is nothing new, the potential reach of both (and the opportunity to make more of each) has increased exponentially. And there is a new opportunity for unprecedented fame from the patronage of modern plutocrats: witness the pantheon of celebrity chefs, celebrity dog trainers, and celebrity litigators. The über-rich want the best, and the best take a disproportionate slice of the industry pie.

 

Thankful: the rise of corporate patronage

So what happens to all those quite talented people who would have played to full theatres two hundred years ago? (Apart from making YouTube videos, that is.)

I’ve been thinking for years now about how the “high” arts (theatre, ballet, classical music, dance) depend on wealthy patrons for survival, much as they did before they became popular attractions in the modern period. Those patrons today are largely corporate sponsors rather than wealthy individuals: the companies get cultural cachet and corporate social responsibility bonus points, while the performers gain a living.

The trend goes beyond the arts. In Silicon Valley (and elsewhere in the US), corporations and wealthy benefactors are extending their philanthropy beyond traditional areas of giving. Mark Zuckerberg sponsors New Jersey school districts. Mike Bloomberg helps municipalities with their tech budgets. The Clinton Global Initiative finances green retrofits in the built environment. As the public sector falls apart, we become more dependent on the proclivities of wealthy people and the companies they run, for better or worse.

Your discretionary income at work!

 

Don’t Waste Your Time: what happens to everyone else

Those without a good corporate job or corporate patronage can still have interesting weekends. The last twenty years have seen a rise in hobby culture. No longer just for hipsters, farming, knitting, and brewing now count as hobbies precisely because it has become harder and harder to actually make any money doing them. Assembly-line economics prompted a decline in bespoke items in favour of cheaper, ready-to-use/ready-to-wear equivalents, and with it the near-demise of artisan production. Hence, hobby culture has taken over. Many people today run side businesses in what was once considered a main income stream, such as making crafts (e.g. through Etsy), photography (helped by the rise of Pinterest and Instagram) or self-publishing. I suspect this trend will only grow as 3D printing becomes more popular.

And for everyone else holding tickets and waiting for their numbers to come up, there is retail. The old stereotype of underemployed actors waiting tables persists because it is still true, and some are servers forever. In some industries and some places (for example, grocery cashiers in Toronto), service jobs are a means to an end, some spare cash earned while in school. In others, like much of the United States and suburban areas generally, people work in retail and/or service (the largest category of employment in North America) because they have no other option.

The result is a proliferation of companies pushing a “service culture,” a movement toward glorifying the customer experience everywhere from fast food to discount clothing stores. And while there is a long history of service as a noble profession (for example, in high-end restaurants), and giving clients what they desire is a laudable goal, claiming a service mandate while maintaining a pay gap between customer-facing employees and top management of 20, 80 or 200 times is deceitful, the false empowerment of the economically disenfranchised.

All of the above trends reflect a growing inequality in the workforce, one that becomes ever-more entrenched. Inequality is a major hot-button issue in politics at the moment, and a number of initiatives have been proposed to combat it, including raising the minimum wage. The long-term success of any solution, however, requires recognizing that the ability to earn a living can’t depend on holding a winning ticket.

 


Tasting Notes: A Scientific Justification for Hating Kale

April 17, 2014

There is a new East-West arms race, and it is full of bitterness. Literally.

Since moving to the West Coast, I have been struck by the preponderance of bitter foods and beverages. The coffee, beer and lettuce producers here appear to be locked in a bitterness arms race to see who can make the least palatable product, with no clear victor. The West Coast versions of all of these products (think: dark-roast Starbucks, exceedingly hoppy pale ales, and kale) seem significantly more bitter than their East Coast counterparts (think: more traditional lighter-roast coffees, lagers, and Boston Bibb).

Hops: beer’s bittering agent, liberally applied on the liberal left coast

What’s going on here? Are people’s taste buds addled from years of sipping California’s notoriously strong Cabernets? Is our future all about green smoothies and kale chips? And what are picky eaters (like, ahem, this blogger) to do?

It turns out I am not alone in opposing such bitterness, and the evolution of taste is on my side. And, moreover, the future may be friendly.

A taste of history

Humans can perceive five distinct tastes: sweet, salty, sour, bitter and umami (otherwise known as “savoury,” the taste of cooked meat, among other things). Each of our taste buds contains receptors for all five, so taste sensation is not concentrated in certain regions of the tongue, as was previously thought, but dispersed throughout. For example, we probably lick ice cream cones because they are too cold to eat with our teeth, not because sweet receptors are located at the front of our tongues.

We can also taste all five flavours simultaneously yet distinctly; if you were to eat something that contained all of the flavour elements, you would taste each in turn (and probably not enjoy it very much – I can’t imagine what such a food would taste like). Tasting is a multi-sensory experience, in fact. As any aspiring sommelier will know, flavour is produced both by the five taste sensations and the olfactory receptors in our nose, which give foods and drinks a much more complex and multi-layered profile. Temperature, texture, and auditory inputs such as crunch also influence our experience of “taste.” No wonder we love to eat.

Humans have such developed tasting abilities because we are omnivores with varied diets, and require a plethora of nutrients found in many foods to survive. Other animals do not require such diversity of nutrients, so cannot taste such variety. Pandas, who have evolved to eat almost exclusively bamboo, cannot taste umami. Cats and chickens “lost” the ability to taste sweetness at some point in their history.

How sweet it is

It is thought that sweetness was among the first tastes to develop, because we need simple sugars as a fundamental building block of nutrition. Today healthy sugars and sweet tastes come from fruits and breads. Salty food indicates the presence of sodium (or lithium, or potassium), and a certain amount of sodium is necessary for our bodies to function, since humans lose salt through sweat.

Sour foods, such as lemons, are typically acidic (in the chemical sense) and a sour taste can signify that food is rancid. Sour is also good, however: humans need a certain amount of Vitamin C, found in sour foods, to survive, so our taste buds developed to seek this flavour out. An emerging theory is that our sweet and sour tastes evolved simultaneously from exposure to fruit, which contains both tastes. Both flavours are also present in fermented foods and cooked meat, the former being important in providing good bacteria to aid digestion and the latter in being more easily digested than raw meat.

Bitterness has the most complex receptors, and it is thought that humans can perceive 25 different kinds of bitterness. Bitter foods are frequently basic (again, in the chemical sense), and bitterness is an innately aversive taste. Babies will turn away from bitter foods – such as leafy green vegetables – just as they will naturally gravitate toward sweet ones. As one article I read succinctly put it:

“Many people do not like to eat vegetables—and the feeling is mutual.”

Bitter melon. Shudder.

Evolutionarily, our aversion makes sense. Plants secrete pesticides and toxins to protect themselves from being eaten. Even now, if we taste a strongly bitter food, our bodies behave as though they are preparing to ingest a toxin, activating nausea and vomiting reflexes to protect us. Pregnant women are particularly sensitive to bitterness because their bodies are primed to protect the baby. It is also now thought that small children have some justification for hating brussels sprouts and other green, leafy vegetables: their younger taste buds are particularly sensitive, and averse, to bitter flavours. Picky eaters vindicated!

It’s a bird! It’s a plane! It’s … Supertaster?

A relatively recent development that has the tasting world abuzz (ataste?) is the discovery of so-called “supertasters,” individuals with a greater number of taste receptors (the typical number of taste buds in humans ranges from about 3 000 to over 10 000). Some experts also theorize that supertasters may have normal receptors but more efficient neural pathways to process the tastes. They are more likely to be female, and of African or Asian descent, and some estimates put them at 25% of the population.

Supertasters are particularly sensitive to bitter flavours present in such foods and drinks as grapefruit, coffee, wine, cabbage and dark chocolate. They are also thought to be more sensitive to sour and fatty foods, which means they are usually slim, but their aversion to vegetables makes them more susceptible to various cancers. And they are most certainly susceptible to the ire of their parents, friends at dinner parties, and anyone else who tries to feed them.

Like an evil mutant flower.

Leaving a bitter taste in our mouths

So why would anyone, supertaster or no, desire to eat foods that we humans have, over millennia, convinced ourselves are toxic and therefore to be avoided?

In fact, many scientists theorize that we only learn to like bitter foods after seeing the other positive effects they can have on us, often pharmacological ones. Consider coffee, which makes us more alert, and wine, which makes us more relaxed. This can be the only reason anybody with taste receptors eats spinach or kale, right?

A fondness for bitterness seems, in my entirely unscientific analysis, to centre on warmer regions, where bitter foods such as coffee, olives, grapefruit, and bitter melon are traditionally grown. See, for example, a traditional Mediterranean diet pyramid, which contains several bitter foods.

A Mediterranean traditional diet pyramid

Perhaps more significantly, though, scientists have discovered a link between eating bitter foods and socioeconomic status. One study in France found that men who ate a greater variety of bitter foods were more likely to be well-educated and have a lower body mass index (BMI). Women who ate a greater variety of bitter foods also had lower BMIs and were less likely to have diabetes.

It would seem that bitter foods today pose less of a threat of toxicity and yield great health benefits (well, kale more than IPAs, perhaps). This reasoning likely lies behind the West Coast health food craze, and explains why bitter foods are more commonly consumed for their health benefits where populations are, as a whole, more educated and wealthier.

Science will continue to play a role as well. We may know in our heads that Brussels sprouts are good for us but still dislike the taste. Food producers will likely try to engineer foods that keep the benefits without the drawbacks. In fact, many foods are already “debittered” by the food industry, from oil to chocolate to orange juice.

So good news for West Coast dwellers, supertasters, children and those averse to toxins everywhere: one day you may be able to have your kale chips and eat them too — happily.

Kale: the world’s ugliest vegetable? It’s coming for you!

 


The Brands That Still Matter

February 13, 2014

Dannon Oikos’s nostalgic Super Bowl spot was a great advertisement for both the French multinational and its new yogurt product.

But will knowing Oikos is a Dannon product make consumers want to purchase it? Or will they turn instead to nutritional content, probiotic count, or price to make their decision? How strong is this brand?

How strong, these days, is any brand?

What I need right now is some good advice

Well, it depends. An excellent piece in the New Yorker this week explores “the end of brand loyalty” and whether this spells the decline of an age in which brands were useful shorthands for purchasing everything from baked beans to luxury sedans. In an era in which the customer review is king, all companies must compete product by product, it says, even established giants. The field is open for upstarts and smaller rivals who can win over a market on the strength of a single, well-reviewed product.

It’s easier than ever for young companies to establish a foothold with a signature product: think Crocs, which have expanded from those foam clogs to flip flops and winter gear, creating a whole new hideous/comfortable footwear market. What propelled Crocs to fame was the strength of customer testimonials saying that it really was worth the price and the look to get that level of comfort.

The same trends that allowed Crocs to happen also signal the decline of major brands. When we have so much information at the click of a button, the promise of a consistent level of quality – which is really all a brand is – becomes less important than the facts: actual product reviews. Why trust a company to make things you know you’ll love when you can trust other users to tell you their opinions instead? It’s true: trust in a product’s brand as a shorthand for making a good purchasing decision is at its nadir.

However, the decline of product brands has led to the rise of service brands, particularly those giving advice. Booking a holiday? The competition for who gives the best advice on hotels, restaurants and attractions seems to have been decisively won by TripAdvisor. Purchasing a book? Bypass the publishing house and read the reviews on Amazon, and then let the site recommend a choice for you. Looking for a good movie? Hardly anybody chooses movies based on the studios that produce them, but Netflix can tell you what to watch based on what you’ve seen before.

These are all Internet-based examples, because the advice industry has largely moved online, but brick-and-mortar service brands have also maintained their strength amid the fall of brand loyalty for products. Banks are judged on the selection of products they have curated for their customers, but more importantly on how they advise their clients, particularly in the higher-end, higher-margin businesses of wealth management and institutional and corporate banking. Consulting firms continue to prosper through economic slowdowns because they can advise on both growing revenue (in good economic climates) and streamlining expenses (in bad). And it all began with the likes of Consumer Reports, J.D. Power, and other ranking agencies, which built their reputations on being the ones who choose the products that matter, and whose advice you can trust.

The service brand becomes personal

Those who host the platforms that enable others to recommend products – the information aggregators and analysts – are poised to be the big winners of the near economic future. And this extends to individuals as well, which explains the push in the last ten years to develop “personal brands.” I’ve written before about how this makes many feel a bit icky, and yet if we think of skills as “products” and analytical ability as “service,” it makes sense to have a personal brand that emphasizes how you think and relate to others as opposed to what you know. (This is why most personal brands focus on a combination of attitude and experience, e.g. Oprah’s signature empathy, which grew out of her life experiences.)

Skills can be learned and degrees earned by many individuals, just like many companies can manufacture clothing. They are interchangeable. But proof of being able to think well, in the form of awards, complementary experiences, and attitudes, expressed through a self-aware brand, is unique.

This is likely why LinkedIn has moved to a model that goes beyond listing skills and abilities to providing references (“recommendations” and “endorsements”) to indicate past performance, and “followers” to show how popular one’s ideas are. These serve the exact same function as the ranking and number of reviews a destination has on TripAdvisor.

No doubt this has contributed to the large number of individuals wanting to strike out on their own. At a recent networking meeting I attended, 100% of attendees were looking to become independent personal nutritionists, career or life coaches, or consultants. They didn’t want to sell things; they wanted to sell themselves and their advice.

A strong brand – a personal one – is essential for this kind of career change, and part of creating a strong brand is ensuring consistency. Working for an organization whose values don’t align with yours – even if you are doing the same thing you’d want to do independently – is a brand mismatch.

All of this highlights another key similarity to traditional product brands: service brands, once established, have a grip on market share. Most companies would prefer to have an accountant at an established firm do their taxes over a sole proprietor. TripAdvisor has few competitors in the travel advice industry, which is why travel agencies are faring so poorly. The barriers to entry are high, and name recognition and brand still count for a lot.

My advice to newcomers: time to call up Uncle Jesse to make an ad for you and get some brand recognition.


Sparkling Water or Water Lilies? The Comfort vs. Beauty Problem

January 29, 2014

First things first: posthistorical is back! I am very excited to be blogging again. The world seems much the same: Obama, Harper, and Merkel have won more elections; politicians everywhere squabble over ridiculously trivial things and generally accomplish nothing; we collectively still spend way too much time on Facebook. And yet much has changed: this blogger now lives in the Golden State instead of the True North Strong and Free, and with a government-enforced sabbatical has a lot fewer excuses not to post frequently.

It’s also 4 years (ish) since I started posting on this blog, and that means the exciting quadrennial spectacle of nationalism that got many of my juices flowing last time (otherwise known as the Winter Olympics) will soon be upon us. Once more, in Russian! More to come.

But first!

A dichotomy for the ages

One of the things that started me on blogging again was a rush of ideas I encountered while re-reading Aldous Huxley’s Brave New World for a book club meeting. I will likely tease out a number of themes and their repercussions in the modern world in future posts, but the one that resonated most strongly with me was the dichotomy that one of the book’s main characters presents between truth & beauty and comfort & happiness. To have beauty and truth, he reasons, one needs to endure political and emotional instability, heartbreak, misery and terror – basically, opportunities to show mercy, wisdom and courage in the face of overwhelmingly bad odds. Happy and comfortable people have no need to rise above their situations in such a manner.

But who would choose discomfort and misery, given the choice?

The general trend of world history has been toward comfort, both in a material way and in the sense of social stability. If the nineteenth century was the century of engineering and industry, the twentieth century was the century of comfort. It was the century of spandex, widespread air conditioning and La-Z-Boy. More people than ever before were lifted out of poverty, and industrialization led to middle-class (or relative middle-class) comfort worldwide.

The number of people who choose sneakers over high heels or jeans and t-shirts over Little Lord Fauntleroy suits seems to back up comfort’s victory over beauty. And from the range of falsehoods – from “spin” to blatant lies – evident in government, advertising and many other areas, truth doesn’t seem to do very well either.

Have we already made the choice? And if so, is this progress?

The truth/beauty vs. happiness/comfort dichotomy mirrors the idea of moral vs. technological progress. Some thinkers, such as John Gray, whose anti-humanist work Straw Dogs I’ve written about before, believe that technological progress is in theory limitless, but that our moral progress as humans is essentially stalled. Nuclear technology, to use an example he gave, while a huge technological boon that can supply power to millions, has simultaneously allowed us to wipe cities off the map, a more efficient killing machine than had ever been known before.

Systematic discrimination

Perhaps truth and beauty – or moral progress, if we can equate the two – have seemingly lost out to comfort and happiness – technological progress – because the large-scale systems that largely control our lives have focused mainly on the latter. Take governments: funding for truth and beauty (whatever that would look like) will almost always come second to funding for hospitals, police, and even infrastructure – that is, the necessary building blocks of a comfortable life. The Brave New World character I mentioned earlier also points out that rule of the people leads to an emphasis on comfort and happiness over truth and beauty – certainly, this is the credo of America: “life, liberty, and the pursuit of happiness,” not, incidentally, the pursuit of truth. Comfort, or at least freedom from harm and repression, was the first priority of the revolutionaries.

I went back to examine some other modern revolutionaries. Re-reading The Communist Manifesto, I discovered that the aims of Communism also begin with comfort before proceeding to truth, even if the ideals contained within the movement are based on so-called universal truths. Guaranteeing subsistence was the first step, through a radical change in property rights, the tax system, and so on, followed by universal education (i.e., the pursuit of truth and beauty).

The other large system that governs our lives, free market capitalism, is also geared toward profits, which can more easily be made from comfort than from beauty. This is why Procter & Gamble, which sells deodorant and diapers, made US$81 billion in 2013, while the New York Times, winner of more Pulitzer Prizes than any other newspaper, struggles each quarter to make a profit. Perhaps this also explains the existence of the phrase “starving artist.”

First things first

There may be a way to see a positive outcome in this. Perhaps it is not so much a dichotomy between truth/beauty and comfort/happiness, as a ladder, or hierarchy, if you will. Perhaps, like ol’ Maslow said, we focus first on satiating the need for food, clean water and safety before striving for self-actualization.

Now, we all know how much I love Maslow (and so does everyone else, apparently, because this is by far my most-read post). But this theory would disagree with Huxley’s characters, who imagine either a comfortable, drugged-out existence devoid of anything so confusing and challenging as truth, or starving artists capitalizing on their misery and discomfort by creating beauty – that is, skipping straight to the top of the hierarchy.

I posit this theory: those who can truly move to the self-actualization stage can only do so because they feel their more basic needs have already been met. This is true even though they live in the same world as those more susceptible to advertising campaigns which introduce needs we never knew we had (for the new iPhone, elective rhinoplasty, or gluten-free dog food, for example). Maybe it’s just that those seeking truth and beauty seem deprived and miserable to those who couldn’t imagine taking their places.

Our need for comfort will stay the same as our definition of comfort changes; perhaps those who can be comfortable enough already, without soma and shiny new things, can have their truth/beauty cake and eat it too – happily.


New Money and How To Buy Things Anonymously

June 16, 2011

The more I read, the more convinced I am that privacy/anonymity vs. openness/sharing will be the defining dichotomy of our age. The more websites track what we buy and sell, where we browse, and what we like, the greater the number of calls for regulation and privacy protection. The battle lines between privacy and the power of information have been drawn.

But now there is a way to keep spending private, at least. Bitcoin, a digital currency allegedly created by a hacker writing under the name Satoshi Nakamoto, uses cryptographic signatures and hashes that allow its holders to buy and sell anything, anywhere in the world over the Internet, without revealing their real names or having to pay any kind of exchange fees or taxes. (For an interesting and accessible overview of Bitcoin and its implications, see this article in Ars Technica.) Bitcoin has all the advantages of cash – anonymity – but without the hassle of having to physically transport it anywhere. It also has all the advantages of a “trust-based electronic currency,” such as credit cards, in that it allows instant, ubiquitous transactions, but without the need for an identity attached to them.
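
To make “cryptographic” a little more concrete: the network keeps everyone honest not with names but with math, by requiring hashes that are costly to produce and cheap to verify. Here is a toy sketch of that proof-of-work idea (my own illustration; the real protocol hashes whole blocks of signed transactions against a network-adjusted difficulty):

```python
# Toy proof-of-work: find a nonce whose SHA-256 digest starts with a run of
# zeros. Producing the nonce takes brute force; any peer can verify it with a
# single hash, so the network can agree on a ledger without a trusted bank.
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Search for a nonce that meets the difficulty target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("Alice pays Bob 5 BTC")
print(nonce, digest)  # verification is a single sha256 call
```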

Bitcoin has consequently been embraced by Anonymous, an anarchic online community that first came to mass public attention when it disrupted the sites of PayPal, MasterCard, Visa and others in response to the perceived censorship of WikiLeaks last year. It is disrupting them again with Bitcoin, but this time more indirectly.

Normally, when new currencies appear on the scene, they have a hard time with what is termed “adoption and valuation,” that is, getting people to use them, and determining what they are worth compared with other currencies. New currencies are usually the prerogative of federal governments, or supranational ones (as in the case of the Euro), which automatically gives them a head start because citizens need to pay taxes in the new currency and generally use it to make purchases. Even then, as this history of the Euro points out, there are remarkably complex logistical and emotional hurdles to overcome, from swapping the money found in ATMs to choosing the images and words for the notes that so many people identify with to establishing the value of the new currency against other existing ones.

It is very rare for new currencies to spring up without national backing, and perhaps Bitcoin has only been able to gain attention and adoption by the market because it is digital, and thus doesn’t have the physical and logistical barriers to overcome. But why are people using it? Just like a new national currency, Bitcoin has appeared and boldly declared that it stands for a new order, in a sense. Its users can now engage in economic activity outside the sphere of government control, or the control of multinational credit corporations, in total privacy.

As an article on BigThink puts it, “You don’t need a banking or trading account to buy and trade Bitcoins – all you need is a laptop. They’re like bearer bonds combined with the uber-privacy of a Swiss bank account, mixed together with a hacker secret sauce that stores them as 1’s and 0’s on your computer.” Bitcoin represents the complete disengagement of the buyer from the seller, the furthest distance yet discovered from bartering or exchanging one good for another. Purchases now require approval from no-one.

Is this radical new territory, or a return to what currency is intended to be? As a means of exchange, currency technically need not have an identity attached to it. It stands as a measure of commensurability; buyers and sellers can rely on the value of the currency as a standard without having to ascertain the value of goods being exchanged every time they buy or sell. And it was only very recently in the trajectory of human history that currency was created with no direct correlation to an existing good like gold (called fiat currency), but with instead the backing of a national government with whose laws and regulations the buying and selling parties tacitly agree to comply. New virtual currencies like Bitcoin are similar to all modern government currencies in that their value is not intrinsic but imposed by decree (and perceived rarity, and a bunch of other factors). But they lack the oversight of institutions and regulators that comes with a national means of exchange.

Whether Bitcoins will remain as seemingly ominous and valuable as they have recently become is questionable. This week, the Bitcoin plot thickened with an apparent heist in which approximately $500 000 worth of Bitcoins was stolen from one veteran user. The theft pointed to the limits of exchange without third-party oversight, whether in the form of a government or a corporation to monitor fraud and prosecute offenders. Is the anonymity of exchange worth the risk?

It seems as though this has come down to the same “privacy vs. security” debate that has dominated public discourse since the rise of the Internet (and, of course, September 11). In all likelihood, some third-party institutions will step in to regulate Bitcoin trading with limited liability and criminal activity investigations, as the above-linked article details. But these would decrease the anonymity of the users of the currency, in some ways negating the whole point. Perhaps the main take-away of Bitcoin is that anonymity, in today’s world, has its trade-offs too, and can never be an absolute good.


Coffee vs. Alcohol: A better brew?

February 28, 2011

Almost everyone enjoys a good brew, but some brews are more acceptable than others, it seems. Around the world, coffee consumption far outstrips that of alcoholic beverages, with around 2.9 pounds of coffee (roughly 30 litres, brewed) consumed per person, on average, in one year. Compared with an average worldwide consumption of 5 litres of alcohol per person per year, it seems we are much more inclined to hit a Starbucks than a bar on an average day.

Global average alcohol consumption

Coffee is also a critically important trading commodity, second only to oil in terms of dollar value globally. I won’t get into the cultural influence of Starbucks, Tim Hortons and the like, but the impact on consumers and on the business world has been significant – much more so than any individual brand of alcohol in recent history.

Coffee is a relatively modern beverage. There is no Greek god of coffee, as there is of wine (though if there were, no doubt he would be a very spirited half-child of Zeus who enjoyed bold flavours, waking up early, and being chipper). The first evidence of coffee drinking as we know it today is generally placed in the fifteenth-century Middle East. Evidence of wine and beer consumption, in contrast, dates to 6000 BC and 9500 BC, respectively, or even earlier. Yet for such a young contender, coffee’s rise in popularity has been impressive.

No doubt part of this rise in Europe was related to the appeal of the exotic, as with chocolate and the other luxury goods then arriving from abroad. It is also likely that, like sugar, coffee was simply tasty and appealing in its own right, and those who tried it liked it and wanted more. And certainly there is the social aspect: the rise of coffeehouse culture across France and Britain in the eighteenth century, which brought together politics, business and social interaction in a public forum as never before. The purported offspring of the coffeehouses, such as the stock market, French Enlightenment ideals, and even democracy, were significant. In a TED talk I watched recently, author Steven Johnson slyly remarked that the English Renaissance was curiously tied to the changeover from imbibing large amounts of depressants to large amounts of stimulants with the rise of the coffeehouse (go figure).

The best part of waking up?

Today, it seems that coffee has generally been linked to a host of other caffeinated beverages that are considered “good” (such as tea and cola) and alcohol has been linked with commodities that are “bad” and “unhealthy” (such as drugs and cigarettes). Why? Perhaps it is because colas, tea and coffee are unregulated, entirely legal, and (to a point) even considered safe for children, while the opposite can be said of alcohol, drugs and cigarettes.

Is the association fair? Hardly. While the dangers of addiction may be greater for the latter group, and public drunkenness more severely chastised than public hyperactivity, coffee and sugary colas (as fantastic as they are) are hardly the healthiest choices of beverages.

I suspect it is something else, something in the inherent nature or promotion of coffee that makes it seem less threatening than alcohol. Coffee suffers from none of the religious ordinances forbidding its consumption the way alcohol does (though, interestingly, coffee was also banned in several Islamic countries in its early years). It has also never endured the smug wrath of teetotalers or wholesale prohibition.

Alcohol is generally placed into the realms of evenings and night-times, bars, and sexy movies, while coffee is the drink of busy weekday mornings, weekends with the paper, and businesspeople. Both are oriented toward adults, but coffee is in some ways more socially acceptable. Consider the difference between remarking that you just can’t get started in the morning without your coffee versus saying the same about your morning shot of whiskey. Similarly, asking someone out for a drink connotes much more serious intentions than asking someone for a coffee. And vendors are catching on: in Britain, many pubs are weathering the downturn in business caused by the recession and changing attitudes by tapping into the morning market of coffee drinkers.

Worldwide annual average coffee consumption (graphic courtesy of ChartsBin)

I wonder if the trend toward increased coffee consumption comes at the expense of alcohol. I also wonder if it mirrors a general cultural shift toward an American orientation. The global dominance of Starbucks and other coffee chains seems to me to be supplanting the role of the local pub and the licensed hangouts of the old world with a chirpy kind of Americanism and a whole new roster of bastardized European terms and ideas like “caramelo” and “frappuccino.” The New York Times backs up the idea of American dominance, noting that the U.S. accounts for 25% of global coffee consumption and was a primary instigator of the takeover of coffee shop chains. Yet coffee is also extremely popular in Europe (especially in Scandinavia, as fans of Stieg Larsson would be unsurprised to discover) and even Japan.

Is this another case of American cultural colonialism, whereby traditions from Europe are adopted, commercialized, and re-sold to captive populations who want to tap into a small piece of American corporate and social culture? Or is the global interest in coffee indifferent to American opinion?

Reading the tea leaves (coffee grinds?) to tell the future of consumption

Will coffee culture continue to increase in popularity, eventually supplanting the role of alcohol in social meetings? Two factors are worth considering here. The first is that while demand for alcoholic beverages in the developed world is shrinking, there is a growing interest in all kinds of alcohol (and especially wine) in emerging markets. Take, for instance, the rise of wine as a drink of choice and status symbol in China and Hong Kong as disposable incomes have grown. A similarly proportioned increase in coffee consumption there could be monumental – will it occur?

The second factor is the great cost of producing coffee. Putting aside the fact that most coffee is produced in countries comparatively poorer than those that refine, sell, and consume the finished product, the environmental cost is staggering. Waterfootprint asserts that every cup of coffee requires 141 litres of water (mostly at the growing stage). Compare this figure with 75 litres for a similarly sized glass of beer and 120 litres for an average glass of wine, and it would seem that a rise in coffee culture at the expense of alcohol could be disastrous for the environment.
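
Combining those virtual-water figures with the per-capita consumption numbers at the top of this post gives a rough sense of scale. A back-of-the-envelope sketch (the serving sizes are my assumptions, not Waterfootprint’s):

```python
# Rough annual virtual-water cost per person, using the post's figures:
# ~30 L of coffee and ~5 L of alcohol consumed per year, at 141 L of water
# per cup of coffee and 75 L per glass of beer (treating all alcohol as beer).
CUP_L, GLASS_L = 0.125, 0.25      # assumed serving sizes, in litres

cups_per_year = 30 / CUP_L        # ~240 cups of coffee
glasses_per_year = 5 / GLASS_L    # ~20 glasses of beer

print(f"Coffee: {cups_per_year * 141:,.0f} L of water per person per year")
print(f"Beer:   {glasses_per_year * 75:,.0f} L of water per person per year")
# -> roughly 33,840 L vs. 1,500 L: a 20-fold difference
```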

Do the above statistics figure largely in the minds of those who drink any of the above beverages? Likely not. But all might – and likely will – in time affect production, and the economics of supply and demand will come into play, changing the equation once more and making it even harder to determine which is the better brew.