The Internet is great, but it’s no telegraph or automobile.
That’s one of the key takeaways from economist Tyler Cowen’s “The Great Stagnation,” which, to my great embarrassment, I only got around to reading today, more than two years after he first published it. I’ve read Cowen’s blog, Marginal Revolution, since before then, but I never plopped down the $4 to buy the e-book. It’s not much longer than a long-form magazine article; I read it in just over an hour.
Cowen sets out to explain the last five years and the last 40 — the sudden recession the world has not yet fully emerged from, and the decades before it of slow, fitful, partially illusory economic progress.
The two are largely one and the same, Cowen diagnoses — a reflection of a society that has (with one key exception) largely run out of big new ideas but doesn’t realize it. The lack of ideas has slowed the economy; the self-delusion breeds bubbles and crashes.
His chief concept is that of “low-hanging fruit” — ideas, innovations and changes that produce a lot of economic bang for relatively little effort. For example, taking a largely uneducated population and sending everyone to high school and the best to college produces huge dividends. That’s low-hanging fruit. Trying to find a way to make existing schools five percent more effective is not low-hanging — it has relatively high costs compared to the payoff. Going from no car to owning a car transforms one’s life; going from the kind of cars they made in 1950 to the kind of cars they make in 2010 doesn’t. That’s not to say those more incremental improvements aren’t worth it, but they’re not the kind of changes that produce economic booms or transform the economy.
America and the West have largely picked the low-hanging fruit of the Industrial Revolution, Cowen says. He breaks out charts to demonstrate it: stagnating family incomes, slowing productivity and falling patent counts. Until we find more low-hanging fruit, society might just have to lower its sights and accept that the good old days are over.
The elephant in the room, of course, is the Internet and computing power — the single greatest invention of the past half century. Cowen acknowledges this and tries to explain why, as impactful as the computing age has become, it hasn’t yielded much low-hanging fruit for society at large:
- As of yet, the Internet just hasn’t made that much money. Some computer companies have made a great deal of money, of course. But “relative to how much it shapes our lives and thoughts, the revenue component of the internet is comparatively small.”
- Not everyone has benefitted from the Internet in the way that plumbing or education helped everyone. Anyone can find the Internet useful for entertainment, but Cowen says its value as a productive tool flows primarily to an intellectual elite, a small subset of knowledge workers with the “cognitive abilities to exploit” it.
But perhaps the Internet just hasn’t yet fully come of age. One of the more interesting observations in the book is the comparison to the heyday of the Industrial Revolution in the 19th century. Then, Cowen notes, many of the biggest advances in science and technology could be made by clever amateurs, self-taught dilettantes with an idea or good luck. Today, it’s specialists with years of advanced training who make the advances in most fields. The exception is the Internet — people like Mark Zuckerberg and Steve Jobs were amateurs when they made their breakthroughs. That suggests a field still ripe for advances, one that could lift society out of its Great Stagnation and into a new boom.
Until then, however, Cowen suggests we should recognize that “relatively slow rates of technological progress will be with us for at least a few more years, possibly much longer.” The biggest problems with moving slowly can come when you think you’re moving quickly.
Here’s one of my biases: I’m more disposed to agree with something if it can be framed in an intellectual manner.
So it was with this week’s Internet tempest-in-a-teapot, over the use of the word “derp.” After one commentator used the term to slam his opponent (as “derpy”), various people like me took to the web to debate its appropriateness. I was at first inclined to agree with Gawker’s Max Read, who said the word was juvenile, silly and vaguely offensive:
“Derp,” a word for “stupidity,” was not a particularly funny joke when it was a throwaway line in the Trey Parker-Matt Stone BASEketball. It didn’t get funnier when it crossed over to 4chan and YTMND ten years ago, especially since message-board posters managed to turn it from a nonce word into one with connotations of disability.
It’s the sound of the word. Unlike calling someone “stupid” or an “idiot,” calling them “derpy” adds something of the low humor of the mimic — just imagine someone repeating everything you say, but replacing all your words with “herp derp herp derp.” (That’s exactly what one browser extension does.) Surely there are ways to conduct a debate that aren’t so demeaning to all parties involved.
But then I mostly changed my mind after reading economist Noah Smith, who offers a much more sophisticated definition of “derp” than “stupidity.” It has to do — notice I didn’t say this was a simpler definition — with Bayesian probability, a branch of statistics that deals with situations (such as, say, most of real life) where the truth of a proposition is not certain.*
In Bayesian inference, you start your analysis of a question with a “prior belief” — what you think before you consider any evidence. This is shortened to your “prior.” Your “posterior belief” or “posterior” is what you conclude after considering the evidence. Smith:
What does it mean for a prior to be “strong”? It means you really, really believe something to be true. If you start off with a very strong prior, even solid evidence to the contrary won’t change your mind. In other words, your posterior will come directly from your prior.
Having strong priors — strong a priori beliefs that you hold to even when the evidence suggests otherwise — is NOT necessarily irrational in Bayesian probability. In one example, if you are trying to determine whether your friend’s baby is a boy, a girl, or a dog, you would be justified in rejecting the third option based on your prior belief that humans can’t give birth to dogs even if your only evidence, say, is a photo of a puppy.
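To make that concrete, here’s a minimal sketch of the arithmetic behind such an update, in Python. The numbers are invented purely for illustration; only the structure (multiply each prior by the likelihood of the evidence under each hypothesis, then renormalize) comes from Bayes’ rule.

```python
# A toy Bayesian update for the boy/girl/dog example above.
# All probabilities here are made up for illustration only.

def bayes_update(priors, likelihoods):
    """Multiply each prior by the likelihood of the observed evidence
    under that hypothesis, then renormalize so the results sum to 1."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: value / total for h, value in unnormalized.items()}

# Prior: you are all but certain humans don't give birth to dogs.
priors = {"boy": 0.4999, "girl": 0.4999, "dog": 0.0002}

# Evidence: your friend sends a photo of a puppy. That photo would be very
# likely if the "baby" were a dog, and unlikely (but not impossible -- maybe
# it's the family pet, or a joke) if the baby is a boy or a girl.
likelihoods = {"boy": 0.05, "girl": 0.05, "dog": 0.95}

print(bayes_update(priors, likelihoods))
# The prior is strong enough that "dog" stays below 1 percent even after
# evidence that, taken on its own, points the other way.
```

Run with these made-up numbers, the posterior for “dog” comes out to roughly 0.4 percent: a rational holdout against the photo, precisely because the prior was so strong.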
Using the example of people who believe that solar power will never be cost-competitive with fossil fuels, Smith says there are limits to how much we should tolerate people clinging to their priors:
But here’s the thing: When those people keep broadcasting their priors to the world again and again after every new piece of evidence comes out, it gets very annoying. After every article comes out about a new solar technology breakthrough, or a new cost drop, they’ll just repeat “Solar will never be cost-competitive.” That is unhelpful and uninformative, since they’re just restating their priors over and over. Thus, it is annoying. Guys, we know what you think already.
English has no word for “the constant, repetitive reiteration of strong priors”. Yet it is a well-known phenomenon in the world of punditry, debate, and public affairs. On Twitter, we call it “derp”.
There’s a certain elegance to that definition that appeals to me. Someone who is derpy is someone who constantly refuses to change their views in the face of conflicting evidence. (That’s not to say they have to adopt the position of their opponents — perhaps they could adopt a more moderate position that’s less in conflict with the evidence, or concede the validity of the evidence but proffer new evidence to bolster their own position.) I still don’t like “derp” as a word, but Smith is right that there’s no other word that expresses the concept.
Now, of course, I’m second-guessing myself, wondering whether I’m only intrigued by this idea because it was expressed in a way that appeals to my intellectual vanity. Anyone have any other arguments one way or the other before I arrive at a posterior belief on the value of “derp”?
*Note: my grasp of Bayesian inference and related areas is very shaky, but I’ve read several intriguing pieces lately that rely heavily on it. If anyone knows a good layman’s introduction to the concept I would be very grateful for the tip; I’d like to learn more without getting too into the math.
Earlier this year, a discussion of taxes on a South Dakota political blog spurred me to write a short essay about the subject. It was primarily sourced from one of my favorite nonfiction books (and this is a somewhat telling statement about my character), “A Free Nation Deep In Debt: The Financial Roots of Democracy.” That book’s author, James Macdonald, is a former investment banker, but before he starts getting into analyses of bonds and interest rates and defaults, he delves into political philosophy.
On taxes, tribes and freedom
Dating back to ancient times, to be taxed was to be seen as, to some degree, unfree. More precisely: to be directly taxed was to be seen as unfree.
To the ancient Greeks, taxes “were an offense to the dignity of the citizen. State revenues from publicly owned property were acceptable, as were taxes on foreigners and limited indirect taxes. But the citizen would not have his money taken from him at the behest of some leviathan” (Macdonald, pp. 32-33).
To raise money for vast undertakings (read: wars) without taxes, free societies instead resorted to expedients such as forced but repayable gifts. Citizens would be obligated (by social pressure if not by law) to give freely of their fortunes to the state, which would then repay the donation out of the spoils of the presumably victorious war.
Direct taxes were only levied by despots, tyrants and emperors.
Later on, as various Germanic tribes set up successor states around the remnants of the Roman Empire, they shared these beliefs about freedom. As conquerors, “the Goths and the Franks were not subject to tax any more than other successful tribal conquerors in past times” (p. 61).
Indeed, the word “Frank,” the name of a tribe, increasingly began to acquire another connotation — “‘free’ — and especially ‘free from tax’” (p. 61). (The word “franchise,” as in “having the franchise,” or the right to vote, is another descendant of this meaning of “Frank.”) The nobility of France would maintain their exemption from direct taxation until the verge of the Revolution. Their social inferiors, descendants of the conquered rather than the conquerors, had no such exemption and so bore most of the tax burden.
The medieval Italian republics shared this same aversion to taxation: “Like the Athenians of ancient Greece, the merchants of Venice disliked taxing themselves. Indirect taxes were easier to collect and fell on every resident and visitor…” (p. 72). As the costs of the incessant wars of the Middle Ages mounted, Venice and other republics relied on loans from their citizenry — increasingly compulsory loans.
These repayable levies were a medieval response to the perennial question of how to tax free citizens. They neatly expressed the duality of the position of citizenship in a small state: the obligation to undertake extra burdens and responsibilities to ensure the survival and prosperity of their state; and the exemption from the insult of direct taxes. (pp. 73-74)
Eventually, this ancient aversion to direct taxation would lessen — but never entirely. A key condition was that such taxes be voluntary, voted by the citizens on themselves. (Citizens had long voted taxes onto non-citizens, who, it’s important to remember, at this time made up a large, sometimes even overwhelming, share of a republic’s population. But under the logic of the time, non-citizens didn’t enjoy any freedom from direct taxation.)
The Dutch, who had protested so vigorously about their very low tax payments to the Habsburgs, ended up paying unimaginably greater sums to their new autonomous government… In 1595, an English observer commented on the political paradox: “The Tributes, Taxes and Customes, of all kinds imposed by mutuall consent — so great is the love of liberty or freedome — are very burthensome, and they willingly beare them, though for much less exactions imposed by the King of Spaine…” (p. 155; [sic])
In the American Revolution, recall, the grievance of the colonists was not that Parliament was taxing them, but that they were being taxed without representation.
Today, we live in a republic with universal suffrage — and direct taxation. Part of the rhetoric used to back taxation is the idea that it’s the civic duty of a citizen to pay their taxes — a possible evolution of the earlier attitudes about the obligation of citizens to give to the state.
But notice that direct taxation — the income tax, the estate tax, the capital gains tax — provokes far more indignation and opposition than indirect taxes like those on sales. In some sectors, the rhetoric used to describe the income tax is that of government “confiscation” of private wealth — perhaps a necessary evil, but an evil nonetheless, and one to be kept to an absolute minimum. As one can see from history, this indignation is of ancient provenance.
Moreover, when tax-averse leaders support a new tax (such as the tourism surtax Cory finds bewildering), don’t overlook the key factor: the people who pay such a tax requested it themselves. In the committee hearings on the tourism tax, tourism promoters and business owners alike emphasized how they saw the benefits of the tax outweighing its costs; conservative Republican lawmakers in turn highlighted the willingness of the tourism business owners to tax themselves.
Very few Americans have anything one could even charitably call a classical education (and I include myself in this benighted horde), so this would seem not to be a case of explicit imitation of ancient values. But perhaps modern Americans are indirectly tapping into some more profound intellectual well.
A few disclaimers: In sketching out this theory I’m not presenting any sort of normative argument, either that taxes are bad because the ancients opposed them, or that the ancients were silly for opposing direct taxes. I’m simply attempting to describe. Modern American democracy has many intellectual parents, not simply classical republicanism; to privilege that tradition is anyone’s right, but there’s no necessary reason why one MUST rank it above all others. Finally, the ancient republics were of course not saintly — while citizens of these republics enjoyed far more freedom and power, and checked their rulers more strictly, than the subjects of non-republican states, their citizenry comprised a relatively small subset of the population, with disenfranchised non-citizens and often large populations of slaves living in the polis but not voting, and often subjugated foreign cities paying tribute while having no citizenship rights at all. (This is true even if you set aside the half of the population who could not vote because they were female.) In many ancient republics, the dominant conflict was between the aristocratic republican elite on one hand and despots promoting the rights of the common man on the other. Freedom and equality were in constant tension. Whether you cheer for a Caesar or a Cato, both have their flaws.
J.J. Abrams’ new film Star Trek Into Darkness has just enough thoughts floating around its unexceptional script to make the viewer conscious of what could have been, but not enough to make it interesting. Its visuals are flashy enough to entertain but not dynamic enough to transfix. It is a profoundly disappointing movie.
The plot, briefly: prematurely promoted starship captain James T. Kirk disobeys standing orders to try to save a primitive species of aliens. He is caught, demoted, then swept up into a manhunt for a wanted fugitive and a conspiracy within Starfleet command. To the degree it can be said to have a theme, it warns about the dangers of militarization. There’s rich potential in analyzing the tensions of a highly armed military body genuinely devoted to peace, potential the movie is uninterested in examining. Indeed, to do so would betray a greater sense of self-awareness than Star Trek Into Darkness appears to possess, because it is itself a more militarized, action-oriented version of past Star Treks.
Character development is less stunted, though hardly impressive. The film is a coming-of-age tale in which its only two dynamic characters, Kirk and Spock, must learn to accept the trappings of adulthood — Kirk setting aside his considerable ego to care for his crew, and Spock abandoning his attempt to deny emotion altogether. But these are simply basic character beats, drawn broadly and not dwelled upon at any length. They are interesting primarily as a counterpoint to this film’s antecedent, Star Trek II: The Wrath of Khan, which featured a 51-year-old William Shatner and addressed, with considerably more elegance than Abrams musters, a group of adults coming to terms with the decline of middle age.
Abrams intends for viewers to draw comparisons with Wrath of Khan, sprinkling references to that classic, arguably the greatest Star Trek film, throughout. Throwaway lines, gags, characters and subplots all reference the prior film, in what is doubtless the cleverest aspect of the script by Roberto Orci, Alex Kurtzman and Damon Lindelof. But even this highlight is telling, in that it is a shallow mockery of intellectualism, content to merely refer to ideas and art without actually engaging them. Even the original, low-budget 1967 TV episode on which Wrath of Khan was based managed to be more thoughtful than Into Darkness, whose plentiful references reflect the shortcomings of this Age of the Remix.
It is not an unenjoyable movie, with generally fine acting, typically excellent special effects and a series of mildly entertaining action set pieces. But it was not meant to be a mere popcorn flick. Its plot and characters could have been so much more interesting, but they fall sadly short through a general lack of interest from Abrams and the writing trio (who were reportedly concerned with making the movie appeal to an international, non-English-speaking audience by shying away from the talky stuff).
The most curious thing about Star Trek Into Darkness is not actually about the movie at all, but rather why its similarly action-first and intellectually unambitious 2009 predecessor did not provoke the same disappointment. The 2009 Star Trek film was actually very entertaining despite its shortcomings. Partly it gets graded on a curve because it shoulders the burden of introducing all the characters and rebooting a universe. That film also drew much of its limited intellectual energy from the clash between the archetypes of the major characters — a well that can only be drawn on so many times before it begins to go dry.
If there is optimism to be found in this latest Star Trek film, it lies at the end, when after two entire movies the USS Enterprise finally gets its famous five-year mission to explore new worlds. That setup leaves creative room for a more interesting sequel, while financially the movie’s overseas success could build the brand loyalty to bring audiences back to a less flashy sequel. With Abrams himself set to jump ship to his own true sci-fi love, Star Wars, perhaps this rebooted Star Trek franchise could belatedly take the next step to become not simply entertaining but satisfying.
It’s one thing to know something. It’s another thing entirely to be conscious of how much or little you know compared to other people.
Last week, “Saturday Night Live” aired a sketch called “Game of Game of Thrones,” about a game show focused on HBO’s fantasy series “Game of Thrones.” The fundamental joke was that the contestants seemed to know the most arcane, obscure details about a fictional television show, but absolutely nothing about anything else:
A deeper — probably unintended — level of humor for me is that the questions and answers on the quiz show sounded very complex but were not terribly obscure within the universe of the show. They weren’t asking about minor trivia, background characters, facts mentioned in passing, or even touching the incredible depth of detail contained in the books on which the show is based. Instead, the questions were about major plot points and characters. “Who did Jaime Lannister kill to earn the name Kingslayer?” may sound complicated to someone who’s never watched the show, but the fact that Jaime killed the Mad King Aerys Targaryen is one of the two fundamental facts of his character, referenced and discussed repeatedly over the show’s three seasons, both directly and in passing.
It seemed to me that a normal viewer of the show — someone who hadn’t read George R.R. Martin’s source books, rewatched every episode repeatedly or spent hours looking up facts online, but had merely watched every episode once during its original run — should be able to answer many of those questions.
Immediately, I realized the absurdity of that thought. How odd — and telling of the times — to describe a casual viewer of a show as being someone who had “merely” watched every single episode. A decade ago, certainly two, someone who never missed an episode of a show might very well be described as a fanatical viewer. Our era of DVRs, DVDs and on-demand streaming is one when it’s easy to fit a show around your schedule rather than rearranging your schedule to fit a show. It’s enabled an unprecedented level of serialization on television, shows like “Game of Thrones” where episodes don’t stand alone but are inextricably bound together. Miss an episode of “Game of Thrones,” aptly called the “most complicated show on TV,” and you’ll be lost when you tune in the next week.
But just a day or two after watching the SNL sketch, I realized that even someone who never misses an episode can be pretty lost following the show.
Bill Simmons, the sports journalist who founded and runs the website Grantland, is a fan of “Game of Thrones.” Simmons is not a stupid man, and he regularly immerses himself in pop culture. He hasn’t read Martin’s books or gone to any great lengths to research the show, but he has still watched all the episodes — exactly my definition above of the normal fan who I thought should be able to identify major plot and character points in the show.
But on this podcast (starting about 44 minutes in), Simmons talks about being unable to follow the plot and the characters.
“We’re six episodes in, and I don’t know what the [bleep] is going on,” Simmons says. “I really don’t, I just don’t know. I don’t understand. I feel dumb. The show makes me feel inadequate. I have to go on Wikipedia after to figure out who’s who.”
Simmons talks about not recognizing many characters, and not being able to remember the names of the ones he does recognize. There’s “Lady Whatever” and “Queen Whatever,” “the one who’s trying to get King Joffrey,” “the guy who’s been by her side for most of the time, with the beard,” and of course the immortal “the one guy.”
“Stuff happens in ‘Game of Thrones’ and I literally have no idea why it’s happening,” Simmons says.
Now, I’ve read the books, multiple times over the past decade or so. I’ve watched most episodes more than once. I’ve gone to online reference points to look up characters and events and places. So I am under no delusions about the fact that I know more about the show than most people watching it. But even as I warn people about how complicated the show is, I don’t fully appreciate just how daunting it can be even for someone who does it “the right way” and watches all the episodes, in order.
It’s not that I’m a stranger to that experience. Last week (at the same time the SNL sketch was running), I watched the new “Anna Karenina” movie. This was a two-hour film condensed from a massive, dense Russian novel, and it took me about half an hour into the film and a few trips to Wikipedia on my phone before I had the main characters straight and could tell which one was Anna and who was in love with whom — pretty fundamental questions. If I’d read the Tolstoy novel, I might have followed along with no difficulty — and if I’d been watching it with a Tolstoy fan, their experience might have been very similar to mine watching “Game of Thrones” with a newbie.
Now if only they’d actually make that “Game of Game of Thrones” game show. As someone who is both intimately familiar with Westeros and can identify prominent public figures like Supreme Court justices, I’m pretty sure I’d clean up.
I’ve traveled reasonably broadly, though far from thoroughly. As I plan how I’m going to use my vacation time this year, it occurred to me that there’s one vast region of the country I haven’t even touched: the South.
My travels have taken me to the Southwest, to the Northeast, to the entire Midwest, to parts of the Inter-Mountain West, to the Mid-Atlantic, and, through Washington State, to the Pacific coast. (California, Oregon, Nevada and Utah make up the other major hole in my American travels.)
Here’s a visual look:
This fall, I’ll hopefully be swinging over from Texas to visit some friends in Atlanta, knocking five more states off my checklist in the process. Some of the states I’ve already visited — chiefly Ohio and Pennsylvania — I passed through when I was very small, and if I can, I’d like to hit them again for good measure.
(As you can tell from the map, I’ve never visited either Florida or California. I don’t consider this a huge deal, but certain Disney-loving friends of mine say this is a crime against humanity. Tough — even if and when I get around to going to those states, I doubt I’d waste money and time going to an amusement park.)
I’ll be sure to plan future years’ road trips to take me up into Maine and down to California — and who knows, one evening last year I plotted out exactly how a road trip through Canada to Alaska would go. Visiting every state in the country isn’t a terribly important goal, but it’s one I’d like to check off sooner rather than later.
This is mildly embarrassing, but also amusing, so I thought I would share:
Up until a number of years ago, when people referred to “Williamsburg” as a haven for hipsters, I thought they were referring to Colonial Williamsburg. This was mildly confusing — when hipsters said they liked vintage clothing, I didn’t think they meant the 1740s — but this fact, if true, would not have been the weirdest thing about hipster subculture. So I just sort of rolled with it.
In the years since I became aware that Williamsburg was also a neighborhood of Brooklyn, the above paragraph has gone from misguided belief to desperate wish.
UPDATE: Sharon Wegner-Larsen points to the TV show Portlandia’s take on the hipster-historical reenactment joke:
This evening, I happened to remember some advice for campaign reporters that I’d once seen shared on Twitter, and I tracked it down on Google. Some of it is pretty situational to campaign reporting, but other bits ring true for any news reporter, whether you’re covering a presidential race or the city council. Worth keeping in mind for any scribe:
- Young reporters gearing up for campaign coverage, I have two words: neck pillow.
- More campaign advice for young reporters: befriend the photographers/cameramen.
- I know it’s a pain, but keep track of those receipts and file every week. Just do it. I swear.
- Charge every piece of equipment any chance you get.
- Keep your eyes peeled. I met my future wife at the Chequers Pub at the Hotel Fort Des Moines.
- If you make it to South Carolina without being yelled at by the campaign, you’re not doing your job.
- Be the reporter who challenges a candidate’s false claim. If you’re not, be the one who follows up.
- Even the people who you like and trust on the campaign will lie to you.
- Food that is plentiful and seemingly free does not equal non-fattening.
- Don’t send that angry email. Save it. And then reconsider in the morning. You’re exhausted.
- Someone somewhere thinks things you say and do are interesting and reportable.
- “The news was first reported by (reporter) of (rival organization).” Do it. Applies to blogs too.
- Do not rely upon the hotel wake-up call. And don’t forget time zones.
I’ve edited this slightly from the original to transform Twitter shorthands into regular sentences.
Fellow Sioux Falls journalist Kristi Eaton is writing a blog about “quarterlife crises” — the pressure and problems 20-somethings can feel as they grow into full adulthood. Her most recent post contained a passing anecdote that struck me:
Remember “Ally McBeal” from the 1990s? I never watched any episodes but was well aware of the show and how popular it was. I knew she was a lawyer living in New York City (I think?) and seemed to be so grown up. Guess how old she was? 27. Today a similarly career-focused woman is on TV: Liz Lemon from “30 Rock.” She’s another woman freaking out about whether she’ll never have the opportunity to have kids because of her age. She was 36 when the series started, a full decade older than Ally McBeal.
One simple pop culture comparison sums up so much. Medicine has advanced, allowing safe childbirth later and later for more and more women. Family-formation norms have shifted, with more and more people delaying childbirth until their careers are established. The economic change from the boom years of the 1990s to the less-vibrant economy of 2006 (when “30 Rock” premiered) to the recession a few years later has forced people to delay having expensive children. Modern-day careers require more delays, as young adults pursue internships and graduate degrees before they can even enter the workforce.
Some of those changes seem enduring, others perhaps transitory. It’ll be interesting to see what things look like in another decade with regard to the expected ages for various milestones of adulthood.
As a promotion for the upcoming radio dramatization of Neil Gaiman’s “Neverwhere,” BBC Radio 4 released a clip today of “Sherlock” star Benedict Cumberbatch, in character as the angel Islington, singing a haunting dirge. Listen to it here:
The Radio Times suggested readers apply a simple tweak to the song for an even odder effect:
For an extra-eerie effect, open this page in two (or more) windows and start the clip playing for a second time, say 15/20 seconds after the first one has started (the more windows you open, the weirder it gets…)
Indeed, it works great. But it was tricky to get the timing just right. So I put the sound clip into GarageBand and arranged it into a semi-precise canon that, to me, sounds fantastic. Give it a listen:
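For the technically inclined, the same offset-and-overlay trick can also be scripted rather than done by hand. Here’s a minimal sketch using the pydub audio library, assuming the clip has been saved locally as clip.mp3 (a hypothetical filename) and using the 15-second offset the Radio Times suggests; this isn’t how I built my version, just one way to reproduce the effect.

```python
# Minimal sketch: mix a delayed copy of the clip over itself to make a rough
# two-voice canon. Assumes the clip is saved as "clip.mp3" (hypothetical name)
# and that pydub plus ffmpeg are installed.
from pydub import AudioSegment

OFFSET_MS = 15_000  # start the second "voice" 15 seconds in

clip = AudioSegment.from_file("clip.mp3")

# Pad the original with silence so the delayed copy isn't cut off at the end,
# since overlay() won't extend the base segment.
base = clip + AudioSegment.silent(duration=OFFSET_MS)

# Overlay the second voice starting at the offset.
canon = base.overlay(clip, position=OFFSET_MS)

canon.export("canon.mp3", format="mp3")
```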
A note on rights: I believe this remix constitutes fair use of the original BBC recording. My interest in posting it is to promote the “Neverwhere” radio adaptation, which I greatly anticipate. If the BBC believes I am in error on this point, I will be happy to remove it from the Internet.
The “Neverwhere” production kicks off on March 16 on BBC Radio 4.