What makes a city great? Toward a hierarchy of urban needs

April 3, 2014

A few years ago I created a conceptual model of national needs, shown below, based on Maslow’s hierarchy of (personal) needs. It has become one of the most-read posts on this blog, suggesting that both nations and Maslow’s framework continue to resonate today, decades after their creation.


Some context: Maslow’s Hierarchy of Needs, for individuals

Of course, it is difficult to map the idea of the progressive needs of an individual cleanly onto a political entity. Nations, like people, continue to evolve, and the role of nations in the world is changing too. Nonetheless, the idea of a hierarchy, in which basic needs must be satisfied before one can progress to a higher level of actualization and fulfil one’s whole potential, can be applied to countries at various stages of development.

Since I wrote my National Needs post in 2010, a new country, South Sudan, has been created. It is still struggling (as indeed are many other nations) with the lowest level, securing territorial integrity and peaceful borders, and this remains its primary focus. The struggle for survival must come before feelings of security, esteem and morality.

Exon [Smith]’s Hierarchy of National Needs, c. 2010

Yet there are other geographical entities with which we commonly identify, and which are becoming ever more important as centres of culture and economy as a greater percentage of the world’s population moves into them: cities. It is estimated that, for the first time in human history, more people now live in urban areas than outside them, and cities are becoming important political players in their own right.

Since moving to California in late 2013 (and spending a lot of time on the Atlantic Cities channel), I have been thinking about how fundamentally important cities are. What makes them truly great? What makes them “cities” at all, beyond the obvious population requirements? For example, I live in San Jose, the third-largest city in California, ahead of San Francisco in both population and area, and yet its own inhabitants curiously refer to San Francisco as “the city.” Why? What has to happen for a place to transform from a mere urban area into a world-class city?

So, as I am wont to do, I created a new model to explore the needs of a city, also along the lines of Maslow’s. I’m calling it the “Hierarchy of Urban Needs.” Note that I am assuming this city exists within the context of a nation that ensures the rights and privileges of, and general governance over, its citizens. Some discussion of the stages is below.

Exon Smith’s hierarchy of urban needs
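(For the programmers in the audience: the ordering at the heart of the model can be captured in a few lines. What follows is a toy sketch only – the level names come from the diagram above, while the list, the function, and the assumption that a city satisfies whole levels at a time are my own illustrative simplifications.)

```python
# The five levels of the proposed Hierarchy of Urban Needs, bottom level first.
URBAN_NEEDS = [
    "Basic services",
    "Infrastructure",
    "Educational and research institutions",
    "Robust arts, sports and cultural scene",
    "Openness to influence; becoming a symbolic beacon",
]

def next_focus(levels_satisfied: int) -> str:
    """Return the level a city should focus on next, given how many
    lower levels (counted from the bottom) it has already satisfied."""
    return URBAN_NEEDS[min(levels_satisfied, len(URBAN_NEEDS) - 1)]

# For example, a city with only its basic services squared away
# should be working on its infrastructure next:
print(next_focus(1))  # -> Infrastructure
```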

Basic services 

At the most fundamental level, cities need key services delivered in an efficient and cost-effective way. (This is true even if such services aren’t necessarily paid for by the cities themselves, as is the case with, say, healthcare in Canadian cities.) These include fire, police, and ambulance services; waste management; housing inspections to ensure both the safety and the affordability of housing; water treatment; and the like. For many cities, this means being able to control the tax base and to levy taxes on the population as necessary.

World-class cities will also have exceptional healthcare options and a focus on sustainability woven through even these fundamentals, such as extensive recycling and compost programs. San Francisco, for example, deploys teams to examine what its residents recycle properly and what they don’t so the city can mount better educational campaigns.

Of course, the city’s basic administration must be free of corruption, and the city must be able to pay its bills, so that it avoids a Detroit-style bankruptcy filing or the succession of mayors Montreal has recently had.

Infrastructure

Historically, cities developed around major ports and, later, railway depots. Even today, no major cities exist without some kind of harbour, airport, train station or freeway linking them with the outside world. Inter-city transportation, undergirded by solid infrastructure, is a critical component of economic progress.

Cities with poor transit are at a huge disadvantage. Jakarta, a city of nearly ten million people and the largest city in the world without a metro of any kind, has notoriously been working on an underground transit network for 20 years. Traffic congestion is thought to cost the city $1 billion a year. In another cautionary tale, it can take 12 hours to travel 40 miles in Lagos, Nigeria, and the way is fraught with crime and other dangers – a threat to legitimate trade.

Intra-city transportation is also a key factor, and how best to support the movement of people within a city is a subject of almost universal debate. Subways vs. light rail, bike lanes vs. car lanes, pedestrian-only roads and congestion pricing – these are major issues for all cities, and the thinking on public transportation keeps evolving.

This is one area in which San Jose currently struggles but has big plans for the future. My theory is that older cities, built before car use was predominant, have an easier time planning for pedestrian and bike access. Those (like San Jose) that were built after the advent of freeways and a Cadillac for every nuclear family tend to struggle to retrofit density into a downtown core whose points of interest are already quite far-flung.

And yet. San Jose is a critical stop for the planned high-speed rail between Los Angeles and San Francisco, as well as a hub for transportation around the San Francisco Bay (linking to San Francisco and Oakland), and it has reserved space downtown for new transit links. It is planning for increased density to accompany the new transportation. Hopefully, use of public transportation within city limits will also increase, because at the moment the city is hugely dependent on the car: inefficient public transit routes serve the population poorly, with the result that, for example, 78% (!) of San Jose commuters travel to work in single-occupancy vehicles.

Central Park

Infrastructure also includes sewers and other large-scale public works, including parks and other green space. More and more research indicates that green spaces make for happier communities, and many major cities can be identified by their parks alone (e.g. Central Park, Golden Gate Park, the Bois de Boulogne, Sanjay Gandhi National Park). As I’ve said before, I love sewers, water mains and bridges, personally, and think more campaigns should be fought around securing funding for them. The recent, tragic gas explosion in Harlem only underlines the need to think the way the Victorians did about how cities really run, and about how we can leave a legacy for the future that is perhaps not glamorous but is critically important. One of Toronto’s great strengths, as in many other cities, is the number of cranes on the skyline building new architectural wonders (as well as a few duds). Would that we could focus on what lies beneath the soil as well.

A brief interlude on mayors…

Thinking about these lower levels of needs, it strikes me that the level of a city’s discourse (and thus its position on this hierarchy) can often be seen through the lens of its mayoral elections. Toronto’s 2010 election (as its 2014 election most likely will) centred on the issues of transportation and waste in providing city services, leaving little room for discussion of higher-order issues (such as, ahem, drug use among elected officials). New York’s 2013 election, in which Bill de Blasio won almost three quarters of the vote, turned largely on issues of income inequality and pre-kindergarten education, the next level in my hierarchy. And the major issues of London’s 2012 election, won by incumbent Boris Johnson and his hair, were the economy, crime, public transportation, and affordable housing.

Boris: Campaigning on Transit

It makes sense that the basics need to be taken care of, and continually improved upon, before a successful cultural scene can take root, in the same way that humans must be fed and watered, feel physically and emotionally safe, and feel a sense of belonging before they can achieve self-actualization.

…and then back to the hierarchy: Educational and research institutions

A strong educational foundation at every level is critical, and a well-educated population requires relative equality in the quality of schools. This is one of the main reasons cities should not fund their schools through neighbourhood property taxes (and thus subject schools to the vagaries of house prices), as many cities in the United States do. A well-educated citizenry contributes more to the economy than a poorly educated one.

The presence of leading research and teaching institutions draws in talent and sows the seeds of innovation, which is why “cluster economies” such as Silicon Valley are the next big thing: they concentrate research and development in localities with populations educated enough to supply them with employees. Every one of the world’s greatest cities has a leading university at its heart – this cannot be a coincidence.

Diversity is the key here. Cities built around just one industry are like monocultures: potentially dominant for a short while, but vulnerable to disastrous decline. Take any of the grand old cities of the Rust Belt: Buffalo, for example, was one of America’s greatest cities a hundred years ago, built on strong grain-milling and shipping/railroad industries. After almost a century of decline it is, well, no longer great – but it has managed to slow the slide by diversifying into the education and medical fields. Glasgow, once the premier city of Scotland, suffered a similar decline because of its emphasis on a resource-based economy and its de-emphasis on education.

Robust arts, sports and cultural scene

This stage is where the jump occurs from a merely livable city to a great one. A safe, well-run, working city is lovely, but a city with a thriving cultural scene is one to fall in love with. In fact, social offerings (a broad category encompassing art, music, sport, religion and other community activities) are among the most significant factors contributing to residents’ feelings of attachment to their community, ranking even above security and the state of the economy.

This stage includes not only major municipal institutions such as museums, symphonies and ballets, but also spontaneous or smaller-scale, citizen-led activities. Being able to participate in a Sing-Along Messiah or see an independent movie at a film festival is as important as having the Bolshoi nearby, and it also makes the arts accessible to a wider population. Having Old Trafford around the corner is great, but so is the local curling league.

Doha’s Museum of Islamic Art


An arts and culture scene, moreover, is a key driver of tourism, which in turn feeds both the economy and the general feeling of being in a place worth being. (Just imagine Paris without the Louvre, or New York without the Empire State Building.) Older cities naturally have an advantage here because of the history built into ancient cathedrals, palaces and public art, but some newer cities have benefited by investing heavily in creating an arts scene. Doha, once little more than an oily afterthought, is planning for the time when its resources run out by building a strong film industry and a thriving home for modern art. It is also newly host to a major international economic forum, and will host the 2022 World Cup. (Probably.)

Openness to influence; becoming a symbolic beacon

Give me your tired, your poor,
Your huddled masses yearning to breathe free…

These words adorn the base of the Statue of Liberty and represent what I have spoken of before: being a city of the imagination. Such cities are the subjects of books, films, Broadway musicals, and countless daydreams, and they have a romance and level of impact that draws people to them, for a visit or for good.

These cities, in turn, receive their tourists and immigrants in a more or less accommodating way, taking from them the best of their cultures and using it to strengthen and further diversify the metropolis. Cuzco, Islamic Seville, and the Florence of the Medici are all historical examples of the power of such “mixing bowls” of culture: out of their milieux came the starting point for a massive empire, the Golden Age of exploration, and the Uffizi Gallery. Modern equivalents spring to mind precisely because they have this pull on our hearts and minds.

The last two levels of the hierarchy are quite iterative: the greater the cultural scene and economy, the greater a city’s draw for immigrants, who then enrich its culture further. It is difficult to find a world-class city without a large percentage of immigrants, who bring with them new traditions, great ideas, ambition, and excellent food. It is in fact difficult to overestimate the importance, both historically and in the present day, of immigrants to cities’ successes, which is why openness to influence and disruption may be the most important trait a city can have.


So there’s the model. I’d love to hear your thoughts!


What is History Writing Now?

April 27, 2010

People reach post historical all the time by searching for odd little historical, philosophical and political science-related phrases. Given the obscure nature of many of these terms to those not deep within postcolonial or imperial studies, I assume they’re doing research for some paper or project. I wonder if they feel they can trust what they read. Am I a reliable source? Are my ideas sound? Can one cite a blog, or is this an even bigger research no-no than citing Wikipedia?

If it is, why? Consider this blogger: I have a graduate history degree from a good school, which, for many, constitutes formal “training” in the discipline. I know how to cite sources and (hopefully) construct a logical, well-supported argument. Does this make me “qualified” to comment on things? Does being qualified today require being intelligent, well-trained, and peer-reviewed (in the traditional sense), or does it come from a more democratic approvals process based on sheer number of readers? Would having six million hits to my blog make me a “qualified” opinion leader? Or do I need to have published six books through a university press that only 6,000 people will ever read in order to be a real “expert”? And is either something to which I should aspire?

These questions have far-reaching implications for me as I decide whether to continue with history as a career or to do something else entirely – something more practical, something that would affect people more directly than a well-researched book in an obscure field and a few impassioned lectures about Lord Curzon and the Raj for a dwindling number of undergraduates who don’t care. Because it’s very important to me that I influence the way people think – not in a creepy mind-control kind of way, but by presenting a fresh perspective that makes people reconsider the world around them and how things work within it.

I’m not sure academic writing is the best way to do that: its scope is too narrow, and its audience is those who are already predisposed to thinking from many angles, and who likely know a lot about the subject already. Traditional academic writing is also very dry. When it connects with the reader, it does so because it is persuasive, offering a sourced argument with little personal point of view. Blogs and new media, in contrast, connect with readers because they cover current events and are often based on personal biases or feelings. They are inherently populist: the vast majority of bloggers want others to read their blogs, and so they talk about things that appeal to a large audience – fashion, entertainment, celebrities, popular political news – and the vast majority of people who read blogs read about those topics. But does this make bloggers experts in their fields? And does the model translate to “academic” subjects like history?

One of my main goals for post historical is to bridge this gap with a forum that is flexible enough to talk about current events and timeless philosophical questions at the same time, yet with a focus that isn’t so personal or academically specialized as to be unappealing to a broad audience outside of a strict historical discipline. One might call this “accessible” writing, though as I wrote in my last post, “accessible” can be a bit of a loaded term. What matters most to me is making an impact in a way that is direct and tangible, which is why the thought of another history degree and a life as a struggling academic is slightly off-putting at times. It’s very clear what such a life could do for me: I’d be a recognized expert in my field; I wouldn’t have to get out of bed torturously early every morning to go to another soul-crushing corporate meeting; I’d be able to have great chats over coffee with fellow bright people and give speeches about things like maps; I could help out engaged students by giving them interesting research suggestions; and I would generally get to run around having people think I was a big smartypants. Clearly, these things all sound fantastic. But what would a life like that do for others, even if I did manage to get a job out of it (which these days, as my fellow blogger and history professor trivium points out on his excellent blog, almost nobody does)? How would it contribute to my big life goal of being a respected public intellectual who makes people think in exciting new ways?

I don’t mean to criticize academics, who are generally brilliant, insightful, creative people. It’s the system that is at fault, a system that encourages people to go to school for 10 years with absolutely no hope of finding employment of any kind at the end of it, a system that encourages killing trees by publishing books nobody cares about, and a system that has created the popular feeling that it is so removed from the everyday that it serves only to train its own. I fear academia is becoming so specialized that it just doesn’t have the impact, or the scope, or the popular appeal, to be taken seriously. When the people who make the most money and important decisions all have MBAs and law degrees, humanities majors are in for some trouble. Actually, we’re all in trouble because we’re losing out on diversity of thought and experience – big time.

As I’ve written before, I think great writing is all about having a conversation, which necessitates a connection between readers and writers. One of the great things about blogs, and Wikipedia, and other new media is that the connection – and the feedback, via the comments or revisions – is immediate, and the process of forming consensus iterative. This is when history and philosophy are really exciting (and this is why I love to receive comments and feedback from readers, particularly when you disagree or want to point out something I’ve missed). Traditional academic writing just isn’t set up to react quickly enough to changes in events, or popular feeling.

So, to paraphrase the great E.H. Carr: what is history writing now? One would think that it would adapt to the changing relationship between reader and writer – from words sent down from a lofty perch in an ivory tower to words picked over in comment forums around the world. It hasn’t. And we’ve all lost something in the process. The Economist ran an article today about how this election (in Britain) is bound to be a landmark one, and yet no landmark book or philosophy about the popular mood has been written to match it, or to spur discussion, as there was in 1945, 1964, 1979 and 1997. (I was particularly excited to see the article cite one of my historian idols, Linda Colley, as having written a significant work from which Tony Blair drew inspiration in 1997.)

Can it be that nobody has written anything groundbreaking in the past five or ten years that bears mention? Or is it that the political audience is too fragmented – or too busy writing their own blog posts – to notice? Is there still a place for the academic as a public intellectual, or has academic writing been pushed to the fringes of literate society by virtue of being irrelevant to everyday concerns? And if academia is on the fringes, who is in the centre?

I suppose we can all take comfort in the fact that there is still the expectation of something by the intelligent people who read and write for publications like The Economist. There is an intellectual void that will always need filling, by academics or writers or particularly insightful and far-reaching bloggers. The question for the next few years, it seems, is whether those who step up to fill it will have new job titles, and if so, what they will be.


The Modern Good Life, Part 2: History and its (Ab)Uses

March 24, 2010

My brother was a history major 10 years before I ever was, and I distinctly remember one weekend when he was visiting from university and asked me why we (as a people) study history. “Because we need to know about the past so we don’t make the same mistakes in the future,” I answered, quite proud of myself. (Not the most inspiring answer, but I was 8. Give me some credit here.) I think he was impressed too – little did he know I would grow up to write a nerdy history blog! Ha HA!

What I said then is not a novel idea: historians have long advocated knowing about the past in order to inform our decisions in the present and to justify those decisions once made. And everybody loves history, because they love the stories of overcoming great odds, or seeing how much things have changed (or, indeed, stayed the same), or thinking about how, with one small shift, things could have been very different.

But we tend to forget that our fascination with the past is unique; other worldviews don’t share it. To the followers of many Eastern religions, and to humans through most of human history, the past was just a series of fluctuations around the same human constant. I’ll go back to John Gray’s Straw Dogs, where he argues that attempting to make sense of history, and giving history a meaning that has the potential to inform the present and future, is just a “Christian folly,” part of Christianity’s central, mistaken assumption that humans are different from other animals and can direct our lives. “History” was not always considered cumulative or linear; it was cyclical. It was not studied. It was not important. It was as much an unknown as the future. And it certainly did not direct anybody’s actions in the present.

There is a concept within the discipline of the “silent referent”: a particular narrative or idea that acts as the standard against which something else is measured. The narrative is usually the European, Marxist master narrative that charts the “progressive” transition from a feudal, mythical, communal past to a capitalist, secular, modern present. This narrative is celebratory, teleological, and complete. It wraps us all up in the confidence that we have trod a good path that has ended in a happy, modern present. [More on this in my next post.] The idea of the “silent referent” is often used in postcolonial history, most notably in a landmark book by Dipesh Chakrabarty titled Provincializing Europe. Chakrabarty argues that Indian history needs to escape from this master European narrative, of which it was never a part and against which it can never measure up.

We would all do well, I think, to take note of his caution. I’m not sure even we can measure up. We run the risk today of being so tied to this celebratory history we have told ourselves that we can barely function without referencing it, or live outside of its temporality. The silent referent of our lives today is the past.

Perhaps this is a simplistic statement. It is natural that the future we imagine for ourselves is a direct output of the past we have experienced; we can hardly imagine anything else. (This is why aliens in movies look like small, green people.) But our high regard for preserving our history – even if it is largely unconscious – is unique to our species, our culture, and our age. We live very historically contextualized, temporized lives. The title of this blog (elitist meta-moment alert!) is an ironic note that even those who try to escape “time” or “history” by adding the prefix “post-” to things are still temporizing themselves: by saying that we are in the phase that comes after, they reinforce the idea that linear time is of paramount importance.

There are two traps in particular that we might fall into: overspecialization and overgeneralization. The first occurs when we endlessly analyze, categorize, and pull apart the past in an attempt to preserve it for future generations. This kind of history ends up treating everything from the past as equally worth preserving, with no distinction (historians specializing in German shoemakers of Frankfurt between 1522 and 1523, take note!). Wallowing in the “good old days” is a recipe for disaster, especially because even the most objective historical narrative has a bias and an angle. Nietzsche wrote about this tendency in The Genealogy of Morals, warning that it can effectively prevent any innovation or aspiration for the future.

I have written before about the dangers of over-specialization: information overload leading to a societal inability to discern what is really important, and even paralysis by analysis – the inability to do anything for fear of breaking too strongly with the past. This is exactly what Nietzsche was talking about. Individuals become slaves to history and cannot act outside of or without it. Is this, perhaps, some of what plagues us today?

Maybe. I do suspect we as a population need to be wary of those who seek to ‘preserve’ a traditional way of life, or go back to it – and I don’t mean that we should stop the trend of going back to 80s fashion before it really takes off. I mean that factions arguing for “traditional family values” or established religion carry mistaken and destructive beliefs that contribute to our present woes.

However, I think we are more often slaves to the past in a different way, one that Nietzsche also considered problematic: overgeneralization. This is the kind of history that seeks out role models and teachers from the past when we feel unable to find them in the present. In excess, it creates typologies that serve as standard scripts for the present, which, as Herr N. wrote, “generalize and finally equate differences,” doing the past a disservice by masking its historical (and geographic) particularities. Think of how many times you’ve heard “the worst economic climate since the Depression” or “the largest deficit we’ve ever seen” in the past two years – is this contextualization helpful? Does it help to know that Hillarycare failed in 1993? It isn’t 1993. How many other unchecked assumptions about the past are we dragging around and using as props to justify not changing or trying something new?

History is an anchor, and a necessary one, but it can also be a deadweight that prevents us from moving on. Being tied to the past – afraid to spend more money because we have never spent so much before, unwilling to make bold moves, content merely to speculate about our downfall as a society – doesn’t serve us well, because we have no script that prepares us for the present. History never repeats itself, except in overwhelmingly general terms.

There is much to learn from the past. We can find the human characteristics that will inspire us in the present – perseverance, ingenuity, humility, and many more – but not the right political or economic blueprint for the future we’re trying to build.

Previous post in this series: The Bias to Action

Next post in this series: The End of Progress


Cultural Intermediaries in the Wikipedian Age

February 22, 2010

I spend a lot of my time thinking about how technology has changed the way we communicate. It has obviously changed the tenor of our conversations:  they happen much more quickly now, and many at once, and in many different forms and forums.  We talk to more people, and different ones, with experiences different from our own.  But has technology changed the content of our communications? And has the level of quality changed?

Cultural intermediaries then…

I read a statement about culture in an article for a class a few years ago, and it has stayed vividly in my memory. The author was Pierre Bourdieu, an important sociologist and thinker of the past century, and he was discussing cultural intermediaries: those who fit between “legitimate” culture and mass-produced culture, the popularizers of the world. He wrote in 1984, but his perspective seemed much older: he described the petite bourgeoisie and their love of what he calls the “minor forms of legitimate culture,” such as “light operas, science programmes, [and] poetry readings.” The intermediaries give life to dry institutional competence, as he puts it, but as presenters they are devoid of any intrinsic value in and of themselves. They instead stage moderate cultural revolutions by canonizing “not-yet-legitimate arts” and masquerading as experts, surrounding themselves with a veneer of cultural authority in the form of (and this is worth quoting in full) “Academician contributors to painless history magazines, Sorbonne professors debating on TV, Menuhins gracing ‘quality’ variety shows.” (I had to look up Menuhin too; he was considered one of the twentieth century’s greatest violinists.) I like to think of this as giving street cred to high art, and vice versa.

…and now

I think of this passage often. I see its effects when I read web pages dedicated to collecting strange maps or entertainment blogs covering pop culture. I see it when I watch American Idol and note how it attempts to associate itself with the new “legitimate” musical culture (producers like Quincy Jones, judges like Elton John, and YES!, performers like George Michael). I see it when I watch YouTube videos or ze frank’s peculiar brand of comedy. It is exactly what Bourdieu means: everywhere today, individuals without the “standard cultural credentials” (whatever those may be) have essentially cornered the market on some small area of life in which there is great popular interest. (Interestingly, that area is often themselves – think of how most Internet memes come out of nowhere, an individual’s particular fancy writ large.) I read the writers’ commentary about American Idol and Survivor on EW fairly religiously, all the while thinking it crazy that there are people out there whose livelihood is earned by acerbically describing the proceedings of a variety show to an audience mostly made up of people who watched it the night before. How did this happen? And at least their subjects are real people: there is also extensive discussion of TV dramas, and comedies, and everything in between.

And what discussion of the canonization and genre-ization of life would be complete without Wikipedia? Wikipedia is founded upon the classification and summarization of life’s minutiae: characters in popular movies, levels in video games, contestants on reality television shows. Approximately 45% of its content is Culture & Arts or Biographies & Persons (1% is Thought & Philosophy), and I’m guessing most of that covers current pop culture. To gain entry, a topic must be “notable”; that is, it must have “received significant coverage in secondary reliable sources (i.e., mainstream media or major academic journals) that are independent of the subject of the topic.” Consider: this is the veneer of cultural “legitimacy” Bourdieu speaks of, an association with longstanding cultural pillars like the established news media. The irony is that old media are dying, and Wikipedia grows every minute. It has become a cultural compass in its own right.

Cultural legitimacy in the 21st century

Which leads me to ask what “important” and “legitimate” mean today. Has technology changed the content of communication after all? Does it, being transient and inclusive by nature, privilege popular, mass-produced, lowbrow culture? And if so, is that bad? Are certain types of culture intrinsically better or more valuable than others? Bourdieu would likely say “yes,” in that his discussions of “legitimate” and “illegitimate” culture included an inherent value judgment. The debate rises to new levels of importance – and not just culturally, but politically – with so much more “out there” and accessible. Viewers can pick and choose which elements of culture they pay attention to, essentially filtering out all viewpoints that do not converge with their own. (The Atlantic ran a fascinating article last month on the shift to new media and how it has affected politics – check it out if you have a chance.)

But who are the culture brokers who determine what culture is good or bad, highbrow or lowbrow, worthy or not worthy of attention?

It’s relevant and interesting to me because post historical turns one month old today. I sometimes look at the most popular posts on the WordPress Dashboard and sigh that they are always about things like “the truth about Nicole Richie’s engagement ring,” a “LOST exclusive” or a change in the top 24 of American Idol (full disclosure: I clicked on that one). No person decides what is most popular on and around WordPress; numbers do. Undoubtedly, because of today’s technology, culture is more democratized than ever before. And there are many, many more intermediaries than ever before, because technology has played upon how fractured our culture can be. More and more people out there don’t want to be mere consumers of information and culture, but producers of it as well. Many more people are finding their voices than ever before, and creating new forums to talk about culture. I am one of these people, and I love being able to be one.

New technology has made it easier to become a cultural intermediary, and in doing so has legitimized – at least in part – hundreds of new forms of culture. The content has indeed changed – significantly. It has caused a revolutionary shift in cultural studies as well, and changed the lenses through which we see and categorize the world for the purposes of analysis.

So who are the cultural intermediaries of the twenty-first century? Perhaps we might call them “aggregators,” in either their automated or their human forms. They are those who collect seemingly disparate bits of data and combine them into a coherent and meaningful whole. I believe that all this technology makes cultural intermediaries – the trusted, popular, consistently competent ones – more important than ever. Career counsellors and management thinkers in the “information age” constantly point out that, with all of the information being thrown at us, the need only increases for those who can rise above it all and provide an intelligent layer of analysis to help others sift through it and realize what is important.

So my focus as I move forward in life and with this blog is going to be to provide a lens through which disparate things make sense and are interesting. Wish me luck – and stay tuned!

I’m interested in hearing from you! Are you a cultural intermediary? If so, how do you see your role? Who do you think are the most significant cultural intermediaries of this new century? And what is “legitimate” culture now? Does it still exist?


Gaga’s Not the Only One Thinking About The Fame Monster

February 2, 2010

I am not ashamed to admit that I am a huge fan of Survivor. It isn’t because I delight in watching others drink lumpy sea cucumber and soy sauce smoothies, or enjoy the thought of 12 men and women who haven’t showered in 26 days mud-wrestling, or even because Jeff Probst’s transition from good-natured adventurer to gruff, sarcastic Simon Cowell equivalent has been so amusing. It’s because I’m curious to see how many different types of people are motivated by a paltry $1m USD (not even that, after taxes) to play out a 39-day process of pseudo (and highly manufactured) “self-discovery” in front of millions of TV watchers. Young people. Old people. Attractive people. Ugly people (well, perhaps fewer now than in seasons past, but comparatively). Americans, Britons, lawyers, doctors, janitors – all kinds of people are willing to endure over a month of Hell for money they could often make at home in that time anyway – 5-star South Pacific vacation for the whole family included. I’ve concluded that it’s all about the fame. (Such as it is.)

One of the ideas I’m working through now is when and how the twenty-first-century (twentieth-century?) obsession with being famous arose, and what factors contributed to its rapid growth. Obviously it isn’t manifested only in the Survivor vein of reality TV – though this is often considered the first wave – but is also evident in the meteoric rise of sites like Facebook, MySpace, LinkedIn and Twitter, in comment boxes on just about every information site, &c, &c.

In fact, I’m thinking about it seriously enough to consider extending it into a larger project. Some might call it a “dissertation.”

Some questions I’m letting percolate in my brain now:

  • How do societies differ in what they characterize as “fame”? How is our (modern, Western) idea of it now different from what it was in the past, and how and when did the modern idea of “fame” come about?
  • What conditions need to be in place in order for “fame” to occur? i.e. some way of preserving some part of the self (an idea, a treatise, a video recording, a blog), a captive audience large enough to “appreciate” it (as in, we can all watch a YouTube video if we have an Internet connection, but if you are an illiterate goldsmith in eighteenth-century Ireland, you probably can’t appreciate Plato’s Republic), a gradual decrease in personal privacy barriers …
  • What technologies are particularly associated with fostering and/or spreading fame?
  • What is the net effect of fame on society? Is it positive or negative?

It’s an idea still very much in an embryonic form. As such, I’d love your input on any or all of the following:

  • Are you aware of anyone currently working in this field (academically or otherwise)?
  • Is there an established “canon” of theory associated with the idea of fame?
  • What other directions could I go in with this? What leaps into your mind?
  • Is this a topic that I could later expand upon? Is it a topic that is useful and interesting and relevant?
  • Any ideas on how I might turn this into a dissertation (that is, a 300-page tome) that a reputable History establishment might want to get behind?
  • What other questions should I be asking?

Please post your thoughts below!


New Year’s Resolutions to post historical(ly)

January 13, 2010

It is the beginning of a new year, which, along with the requisite top-10 lists from the year (and decade) that has passed, brings talk of the oft-dreaded New Year’s resolutions. I’m all for the idea of renewal and re-focusing, so I can support this tradition to some extent. Among my personal goals for 2010: truly connect with my friends more often; get married with as little stress as possible; figure out what my life’s path is.

I know, all very simple and SMART. More generally, I will be audacious (if Obama can do it, why can’t I?), authentic and original. (Perhaps at a later date I will write about whether it is better to say and do the unexpected and risk offending people, or better to attend the dinner party you’re obliged to attend and just make nice small talk. My goal for 2010 is more of the former and less of the latter.)

For now, though, I have started a blog, with grand expectations that it will be the outlet I’m seeking for the beginnings and ends of thoughts that pop into my head on a relatively frequent basis. And because we live in a world where the personal is interpersonal news, I am sharing what I hope to achieve in an effort to be transparent. I am counting on you, dear readers, to help “keep me honest,” as they say!