Several weeks ago, while discussing the oncoming winter with my Southern-raised girlfriend, we reached an impasse over what exactly constituted weather cold enough to get alarmed about. Coming from Louisiana, she insisted that anything even in the 40s Fahrenheit was frigid, weather to cause people to stay indoors, bundled up in front of the fireplace. I, having grown up with bitter Chicago winters, said you can’t start calling weather “cold” until it at least falls into the 30s — and that even then, extreme cold doesn’t start until the thermometer falls to the single digits.
But clearly our perspectives were entirely subjective. The only way out of this situation, for any good rationally minded person, is to get more data.
So I went to my Facebook page and posted the following query:
Above what temperature would you generally consider the weather to be “hot,” as opposed to merely “warm”? Below what temperature would you generally consider the weather to be “cold,” as opposed to merely “cool”? (For context, please also provide the part of the country/world you grew up in.)
Twenty different people responded: nine men (counting me) and 11 women. Here’s what I found:
- The warmest temperature anyone considered cold was 62, though that may be an outlier — that respondent gave a range of only 11 degrees between cold and hot, much less than the average. Next up was 55 degrees, from a southerner.
- The coldest temperature that anyone considered not cold was a mere 11 degrees, from someone raised on a farm on the central South Dakota prairie.
- The coldest temperature anyone considered hot, aside from that same outlier (who said 73), was 85 degrees, while the highest threshold for the onset of true heat was 97.
- One person commented, “I think that you’ll find that the survey results will show that women get colder at a much warmer temperature than men.” And, in fact, he was right. The median female respondent said coldness began at 45 degrees, while the median male said coldness didn’t begin until 32 degrees. (Means told a similar story.) This wasn’t a function of a sample including a lot of females from warmer climes — the median latitude was about the same for both genders.
- But there was no difference when it came to when hotness began. Both men and women had a median hotness temperature of 90 degrees.
- Indeed, there was remarkable agreement about what constitutes heat. Setting aside the outlier, the hotness answers varied over a range of only 12 degrees. The coldness answers varied over a range of 45 degrees.
- Where people grew up, unsurprisingly, mattered. Using a little bit of judgment for people who had moved around (I defaulted to the town people listed as their hometown on their Facebook page), I plotted a latitude for each person. The southern half of the latitudes (a dividing line right through the southern part of the Chicago area) said cold began at a median of 42.5 degrees. The northern half said 33.5 degrees. (There was only a 2.5 degree difference on heat — the southern half said 92.5, while the northern half said 90.) A quick sketch of how these splits shake out follows these notes.
- The key difference, as shown on the below chart, was that while some northerners can stand the cold, no one from the south (minus one person who split his time as a kid between Indonesia and Alaska — he’s plotted as Indonesia and is a clear outlier, but clearly is someone who experienced both extremes) could. (Note that this is actually a chart of the absolute value of latitude, because the southern hemisphere latitude of Jakarta looked weird, and distance from the equator is the more important factor.)
- The heat differences, again, are less dramatic:
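For the curious, the gender and latitude splits above boil down to a couple of sort-and-aggregate steps. Here’s a minimal sketch in Python; the rows are hypothetical stand-ins, not the actual survey responses:

```python
# Minimal sketch of the survey analysis; the rows below are hypothetical
# stand-ins, not the actual 20 responses.
from statistics import median

# (gender, hometown latitude, "cold below" deg F, "hot above" deg F)
responses = [
    ("M", 43.5, 32, 90),  # e.g., a South Dakotan
    ("F", 32.5, 45, 90),  # e.g., a Louisianan
    ("M", 41.8, 30, 92),
    ("F", 41.9, 40, 88),
    # ... and so on for the rest of the respondents
]

def threshold_medians(rows):
    """Median (cold, hot) thresholds for a group of respondents."""
    return (median(r[2] for r in rows), median(r[3] for r in rows))

# Split by gender.
men = [r for r in responses if r[0] == "M"]
women = [r for r in responses if r[0] == "F"]
print("men (cold, hot):", threshold_medians(men))
print("women (cold, hot):", threshold_medians(women))

# Split into southern and northern halves by hometown latitude.
by_lat = sorted(responses, key=lambda r: r[1])
south, north = by_lat[: len(by_lat) // 2], by_lat[len(by_lat) // 2 :]
print("south (cold, hot):", threshold_medians(south))
print("north (cold, hot):", threshold_medians(north))
```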
This study didn’t actually end up proving anything or resolving my debate with my girlfriend. (For one thing, I’d prefer to have a sample size of several thousand points, not just a score.) But I had fun doing it, which is really the point of [social] science.
Interestingly, in our conversations, my girlfriend and I have agreed that the extremes aren’t actually where people disagree. That is, when it’s 102, everyone agrees it’s really hot, even if some people are more bothered by it than others. The same when the weather hits single digits — everyone agrees it’s really, uncomfortably cold. The conflicts arise in the middle ground — whether it’s warm enough to open the windows, or cold enough to require a comforter on top of bed sheets.
Making small-talk at a friend’s wedding in Waco, Texas, after talking about my life in South Dakota, I was more than once asked the same question: “So when did your flight come in?”
It didn’t, I’d reply. I drove the 950 miles down to Texas. And things were just getting started.
Every few years I like to hop in the car and put some mileage on it, seeing as many places as possible on a moderately circuitous route between home and some distant point. The road trip is, for those with more time than money (but a decent amount of each), the ideal way to travel. Flying is good to see a single destination, but driving lets you see things all along the way, too.
So four years ago a friend and I drove to Arizona in March, seeing a half-dozen Spring Training baseball games along with the Grand Canyon and various sites in between. Two years ago I went solo, visiting a friend in Denver, a volcano in New Mexico, a canyon in West Texas and relatives in San Antonio. Last month I retraced some of that — nearly 750 miles were duplicated, the north-south swing from South Dakota to Texas. But after attending the central Texas wedding that was the primary purpose of the trip, I veered off into new territory.
Also new this year: I wasn’t alone. When spending the better part of two weeks driving, it helps to have someone to share the wheel with. Fortunately, coming along with me for most of the ride was my girlfriend, Allison, my partner-in-banter for hours of driving, my guest at the wedding and my host for a surprise visit to her family’s home in northern Louisiana:
But that doesn’t come until a bit later.
Day 1: Sioux Falls, SD, to Omaha, NE (186 miles, 3 hours)
I started out with an evening drive down to Omaha after getting off work. My college roommate Ian (and 2009 road-trip companion) put me up for a night, and I crashed on his futon after getting roundly schooled in a series of Mario Kart heats.
Day 2: Omaha to Oklahoma City, OK (456 miles, 7 hours)
After a greasy-spoon diner breakfast in Omaha, the trip began in earnest. The morning drive to Kansas City was uneventful except very near the end, when I came over a hill to find that a truck had apparently just flipped around a too-tight curve (speed limit down to 55 mph from 70!). Traffic was at a standstill, just starting to pile up. Given where I came to a halt, I decided to engage in a bit of judicious lawbreaking and drove the wrong way up an onramp to detour around the blockage.
Lunch involved Kansas City BBQ, which was pretty good even though barbecue is not my favorite cuisine. Then it was off for the long, flat, quiet drive across Kansas and Oklahoma to OKC.
Day 3: Oklahoma City to Waco, TX (288 miles, 5 hours)
The actual drive to Waco, the site of my friend Abby’s wedding, is supposed to be around four hours. But that best-case scenario doesn’t reckon with Dallas traffic and road construction, a giant headache even without driving through the center of the city. (It was even worse northbound.) Still, we got there in plenty of time for the opening festivities of the wedding weekend: the rehearsal dinner at a ranch outside of the city.
All that came after an unplanned visit to the Oklahoma City bombing memorial, which happened to be across the street from where I parked for breakfast. It was impressive and somber, and I’m glad I stopped.
I post that photo here mostly because somehow, I didn’t take any photos at the rehearsal dinner! You can view some pictures from the wedding photographer here, though neither Allison nor I are in any of them.
Anyway, it was a fun time in pleasant surroundings. I met a bunch of college classmates again for the first time in six years, played soccer out on the lawn with Abby’s new stepson, and got just inebriated enough to make a toast. (It was short and non-embarrassing, I think.) There were buses back to the hotel, though not buses with good ventilation, and I surprised everyone by managing to fall asleep in the uncomfortable heat.
Day 4: Wedding in Waco
Again, my inner photographer appears not to have shown up for this vacation, for which I apologize. Most of the day was pretty quiet — sleeping in, seeing a bit of Waco, a trip to the drug store to get cough medicine for Allison’s unfortunately timed respiratory infection, then getting ready for the wedding. You’ll have to take my word for it that we looked snazzy, because, you know.
The ceremony itself was Jewish, a mixture of traditional rituals and modern sensibilities, helpfully narrated in both English and Hebrew by the rabbi. (Unfortunately the prime seats I snagged ended up being not so prime when it turned out the wedding party stood in between us and the couple.) The dinner was very fancy, and tasty, though the careful seating arrangements were a bit blunted by the sheer volume in the room that made conversation difficult with anyone but one’s immediate neighbors.
To the certain shock of anyone who has ever known me, I even danced a little bit once the music came on. I’m sure I’d have never ventured onto the floor without a better-coordinated girlfriend present, but I mostly didn’t regret it. (Note to acquaintances I will run into at future social gatherings: this is not a precedent.)
Day 5: Waco to Todd Mission, TX, to Huntsville, TX (179 miles, 3 hours)
It was the middle of November, but southern Texas didn’t get the memo. (Back home, Sioux Falls did, with the mercury falling to around 0 during the week I was away.) This Sunday was easily in the 80s and sunny, which was good, because we weren’t trapped in the car. Instead, Allison and I met my aunt, uncle and cousin north of Houston for a visit to the Texas Renaissance Festival.
Renaissance festivals, or at least this particular one, are basically a cross between state fairs and Comic-Con. There’s a lot of overpriced food and merchandise, except everything has a vaguely medieval/fantasy veneer to it — along with a healthy dose of science fiction, steampunk, and things with only the most tenuous connection to genre fiction.
They’re also pretty fun, and this time, I remembered to break out the camera.
It was definitely good to get out of the car, into the sun, and see some people wandering around with outlandish costumes and outlandishly fake accents for a day. We ate unhealthy food, drank overpriced alcohol (mead!), saw performances of carillons, bagpipes and insults, and marveled at all the merchandise that would have been all too tempting had we merely been millionaires. And did I mention I rode an elephant? (It’s not particularly medieval, but why pass it up? Sure, it was $8 for about 60 seconds, but if you’re not prepared to waste some money, don’t go to the fair.) Had I gone to the fair when I was a fantasy-loving 12-year-old, I might have exploded with excitement. My slightly more sober adult self merely enjoyed himself a great deal.
On top of all of this, there were good times with friends and family:
Day 6: Huntsville, TX, to West Monroe, LA (349 miles, 5 hours)
After the excitement and cacophony of the fair, Monday was much quieter — but in its own way, more intense. We drove through east Texas and Louisiana, bound for West Monroe — home of Allison’s parents, who I was about to meet for the first time. Oh, and they didn’t know we were coming.
But everything ended up going pretty well. We announced our surprise visit via phone early in the day’s drive, so everyone was ready when we pulled up at her family’s home, which they had fully remodeled a decade ago. Everyone was very polite as we chatted, then went out to dinner for some traditional Louisiana food. (I felt bad I couldn’t finish, though it was probably my own fault for eating appetizers.) The all-important meeting went off without any issue, something I suspect was a big relief to all parties involved.
Day 7: West Monroe, LA, to Memphis, TN (256 miles, 5 hours)
On the way out of town, Allison and I stopped by the only reason most people have heard of West Monroe — the home base of the Duck Commander duck calls made by the Robertson family, stars of the smash reality show Duck Dynasty.
I tried to muster the appropriate level of excitement.
The trip involved five hours of driving, but it ended up taking a little bit longer because I found something much more interesting: the Vicksburg National Military Park, memorializing Ulysses S. Grant’s siege of Vicksburg, which helped sever the South in two and ensure Union victory in the Civil War.
The actual park was somewhat overgrown with trees, which took away from some of the splendor. But the rolling terrain was suitably impressive:
That’s a curved panorama of one of the Confederate redoubts during the siege, which was stormed unsuccessfully by Union troops up a narrow trench during one of Grant’s failed attempts to seize the city by force. He finally settled down and starved the rebels out.
The most notable thing about the Vicksburg battlefield is the monuments dotting it, built by the siege’s veterans decades after the war. For whatever reason, the Illinois contingent paid for by far the grandest memorial, a huge, echoey pavilion. (Or perhaps it’s no surprise, given that Illinois was home to both Grant and President Abraham Lincoln.) Another memorial was a huge spire so tall I had to use panorama mode on my phone to get it all in one shot. Unfortunately, my unsteady hands meant the ramrod-straight spire appeared to go all wobbly halfway up, so I’ll withhold the photo. Those same veterans who erected the memorials also put up signs around the battlefield. The plaques, red for Confederates and blue for Union, are laid out along each side’s lines and contain information about the units who fought there and descriptions of notable actions.
Also impressive were the remains of a Union gunboat that was sunk in the Yazoo River during the Vicksburg campaign and re-floated and restored a century later. You could walk inside the ruins of the U.S.S. Cairo and tour a museum explaining its significance and displaying plenty of artifacts from its crew.
Sadly, it turns out that seeing Vicksburg properly requires a lot of time — more time than we had, driving as we were to meet Allison’s cousin and her family for dinner in Memphis. So after two hours, we cut our trip short before getting a chance to explore the heart of the Confederate lines.
No matter, though — an after-dark arrival in Memphis brought a chance to sample that town’s own barbecue options, followed by some quality time with a one-year-old and then much-needed sleep.
Day 8: Memphis to Huntsville, AL (215 miles, 3.5 hours)
Wednesday was a lazy day — sleeping in and not much driving. Before leaving Memphis we went downtown to Beale Street, the heart of the city’s blues culture, and with just an hour or two to spare, stopped in at the city’s Rock N Soul Museum for an abbreviated tour of music history.
It wasn’t the most exciting museum, but it was well done. A complimentary audio tour included with admission offered lots of music samples and narration, giving capsule biographies of many stars of 50s, 60s and 70s music and placing the music in the cultural context of the racial tensions of the time. There were also plenty of artifacts, though none that really jumped out at me as I was looking for iconic photos to take.
Well, okay, there was one thing that was sort of attention-grabbing, alongside things like Ike Turner’s piano: the flamboyant costume of professional wrestler Sputnik Monroe, a Memphis native who was an advocate for desegregation:
We didn’t get a chance to fully take in the museum, since we were on a timer. Eventually we set out across northern Mississippi and Alabama, bound for Huntsville, Alabama (not to be confused with our Day 5 stop in Huntsville, Texas). There, I met for the first time in person a decade-long pen-pal, Cliff:
Outside the restaurant, in downtown Huntsville, Allison and I cleverly disposed of the leftover chips we had taken home in a doggie bag a few days earlier and never eaten, though the way the little devils swarmed I was nervous for a second. (I needn’t have been. Allison has a black belt.)
Day 9: Huntsville to Atlanta, GA (220 miles, 3.5 hours)
Huntsville’s primary claim to fame is that it’s home to NASA’s Marshall Space Flight Center. And while we couldn’t go inside, we could tour the U.S. Space and Rocket Center museum, which was top-notch for a pair of overgrown children like Allison and me. Lots of rockets and space vehicles were on display (some authentic, others replicas), along with interactive demonstrations and — for some reason — a traveling display about Leonardo da Vinci.
In fact, the museum was so much fun we spent altogether too much time there, and were nearly late for our evening engagement several hours away in Atlanta. But before things came to that, we had fun doing things like Mars-themed rock-climbing:
Life-size space shuttle replicas:
phallic symbols rockets:
And various other things to pose in front of:
Disappointed we couldn’t linger, we hit the road for Atlanta and our date in the evening: the Atlanta Symphony Orchestra’s concert featuring Shostakovich’s Fifth Symphony.
Though the evening ended up being something of a fiasco as we were under fierce time constraints and at one point running in full dress clothes through downtown Atlanta to get to our restaurant on time, I did enjoy being able to dress up and attend a classy night out:
Day 10: Relaxing in Atlanta and a night at the theater
After the tumult of the night before, Friday was spent at a more leisurely pace. We slept in, and strolled instead of ran through Atlanta to get lunch. That night ended up an unqualified success, as we went to a fine performance of King Lear at Atlanta’s Shakespeare Tavern. Their setup is something of a dinner-theater model: the audience is seated at tables, where they can get dinner before the show, and drinks and dessert throughout.
Post-show, despite the late hour and the early day we had ahead of us on Saturday, we couldn’t resist proceeding from the Shakespeare Tavern to the Marlow’s Tavern a mile away for a nightcap.
Day 11: Atlanta to St. Louis, MO (555 miles, 8 hours)
At an ungodly hour of the pre-dawn morning, we stumbled groggily out of our motel for our last drive together of the trip: to the Atlanta airport, where Allison was flying home to Philadelphia. She caught her flight without a hitch, and I found myself faced with about 15 hours to do eight hours of driving. (I sure wasn’t going to drive 15 hours on a half-night’s sleep.) So I set off, circuitously, first visiting Georgia’s Stone Mountain — Gutzon Borglum’s first attempt at carving giant men into a mountainside, before he moved on to a more famous work at Mount Rushmore, South Dakota.
As an artistic work, Stone Mountain is pretty impressive, and more so the closer you get:
Of course, any monument to the Confederacy is always going to be a little uncomfortable to take in. The figures of Robert E. Lee, Stonewall Jackson and Jefferson Davis aside, the informative plaques on display were rather light on the existence of slavery, the institution all three were fighting (to varying degrees) to defend.
Not that Lost Cause revisionism seemed to be at the forefront of anyone’s mind at Stone Mountain these days. In fact, I seemed rather out of place as a tourist. Everyone else at the mountain that morning was busy setting up a winter village attraction that would soon open (though with fake snow, of course, because Georgia):
It was another few hours before my next stop, also Civil War themed: the battlefield of Chickamauga, where in 1863 Braxton Bragg’s Confederate army routed the federal Army of the Cumberland and would have destroyed it but for the courageous stand of General George Thomas, who rallied fleeing troops on Horseshoe Ridge and held his ground against ferocious attacks until making an orderly withdrawal under cover of darkness. (Thomas got his nickname, “The Rock of Chickamauga,” when a Union officer observing the situation wrote General William Rosecrans that “Thomas is standing like a rock.” That officer was a young Brigadier General James Garfield, the future president.)
My favorite part of the battle, though, was the performance of the “Lightning Brigade,” among the most technologically advanced units on the battlefield. Col. John T. Wilder equipped his men — with their own personal funds, until the Army, embarrassed, paid up — with the new Spencer repeating rifles, which could fire up to 20 rounds per minute, compared to three or four from traditional single-shot weapons. This gave Wilder’s men a huge advantage and let them hold off many times their number. As mounted infantry, they could also move around the battlefield rapidly but still hold a line. After a blunder left the Union army fleeing in a rout, the Lightning Brigade not only held its own but was preparing to counterattack and turn the advancing Confederate flank until Assistant Secretary of War Charles Dana got in the way and demanded to be immediately escorted to safety.
The battlefield was a driving loop with a cell phone-based audio tour. I was in a bit of a rush but still enjoyed seeing firsthand a battle I had read about just a few months prior in the New York Times’ excellent “Disunion” series, which chronicles the Civil War in real time, 150 years to the day after the war’s events.
From Chickamauga I traveled up through Tennessee, Kentucky, and southern Illinois to just west of St. Louis, where, exhausted, I finally stopped for the night.
Day 12: St. Louis to Sioux Falls (611 miles, 8.5 hours)
The final day’s travel was quiet, if tiring — west to Kansas City, then north past Omaha to Sioux Falls. I took a straight shot with no detours, no tourism and no photos. After nearly two weeks on the road, and the last two days with heavy solo driving, I just wanted to be home.
That whole 12-day endeavor meant more than just checking a few more states off my to-visit list. It was also both fun and refreshing, a good break from work. Most of all, it was a great time with a swell lady. Even someone as introverted as I can admit that some things are just more fun to do with someone else.
All told I put around 3,300 miles on my car through 14 states, five of which I had never visited before:
The most important thing to know about a new board game is what role chance has in play. To pick extreme examples, children’s classic “Candy Land” is entirely luck — you can’t be good or bad at Candy Land; you just draw randomly shuffled cards and do what they say. Chess, on the other hand, is pure strategy and no luck — both sides are perfectly balanced and there are no random elements.
Many of the best games have an element of both — a heavy role for strategy, so players’ abilities are tested, but some role for luck as an equalizer, to help keep less-experienced players in the game to the end. Games can be fun with lots of one and a little bit of the other, but generally speaking, I prefer an emphasis on strategy over luck — controlling your own destiny is more interesting to me than depending on the whims of a dice roll.
All that is prelude to saying that German import “Power Grid” (translated from the original, fantastic German name “Funkenschlag”) is my very favorite board game right now. It’s not purely strategy — there’s a deck of partially shuffled cards, for example — but generally speaking, what happens in the game is almost entirely the result of players’ choices. But the brilliance of Power Grid is that it didn’t get rid of chance at the expense of game balance. To the contrary, several elegant (if intricate) mechanisms subtly penalize players in the lead and boost those trailing. The result is a gripping game where every action (or deliberate inaction!) has consequences, where a low-key initial game builds to a high-pressure finish. If you like games that force you to think, strategize, and weigh difficult choices, you’ll love Power Grid.
Power Grid puts the players in the role of competing power magnates, trying to expand their company to dominate the electrical industry of the United States, or Germany (or other countries and regions sold separately). The core of the game involves three different steps repeated each turn:
First, players buy power plants, bidding against each other in an auction. At the start of the game, power plants are relatively inefficient — requiring a lot of fuel to power just a city or two. Consequently, they’re cheaper, costing just a few bucks of the game’s currency to buy — unless other players fix their eyes on the same plant and raise your bids. These bidding wars can drive the cost way up from its opening offer, and sometimes that’s even worth it. There are a few different kinds of power plants, each of which takes a different kind of fuel — coal, oil, garbage or uranium, plus some “green” plants with no required inputs.
This matters because of the second step: players buy fuel from the market to power their plants. This is done in turn order, not via auction — but like the auctions, the law of supply and demand is on full display. There’s a limited supply of each resource, and the more players buy a resource, the higher the price gets. Don’t let the talk of economics scare you — this is clearly indicated on the board, not something involving finicky math. So even though coal starts out as cheap and abundant, and uranium is several times more expensive, if all your rivals have coal plants they could soon find it scarce and pricey, while you have cheap uranium all to yourself.
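To make that concrete, here’s a toy model of a rising-price resource track. The tier prices and quantities are illustrative rather than the game’s actual numbers, but the cheapest-units-first dynamic is the point:

```python
# Toy model of a Power Grid-style resource market: the board holds a
# fixed number of units at each price tier, and buyers always take the
# cheapest units first, so heavy demand pushes the price up.
# Tier prices and quantities here are illustrative, not the real game's.

class ResourceMarket:
    def __init__(self):
        # price -> units available at that price (cheapest sold first)
        self.tiers = {1: 3, 2: 3, 3: 3, 4: 3, 5: 3, 6: 3, 7: 3, 8: 3}

    def buy(self, quantity):
        """Buy `quantity` units at the cheapest available prices;
        returns the total cost."""
        cost = 0
        for price in sorted(self.tiers):
            while quantity > 0 and self.tiers[price] > 0:
                self.tiers[price] -= 1
                quantity -= 1
                cost += price
        if quantity > 0:
            raise ValueError("not enough resources on the market")
        return cost

market = ResourceMarket()
print(market.buy(4))  # first buyer pays 1+1+1+2 = 5
print(market.buy(4))  # the identical purchase now costs 2+2+3+3 = 10
```

The second buyer pays double for the identical purchase, which is the whole supply-and-demand squeeze in miniature.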
Finally, players build infrastructure to cities — so they can sell the electricity they generate to customers and earn money. This costs money for each city — plus extra money for overland connections between two cities. The cities of New England are cheap to reach, while the vast expanses west of the Mississippi will require a lot of spending. On the other hand, you might not have as much competition there because of the price, letting you keep expanding as others find their reach stymied.
After all that, players burn off their resources to power as many cities as they can. The more cities you power, the more money you get — but the more money it cost you to get there. And you can never, ever, rest on your laurels — every other turn at the minimum, if not every turn, you’ll be emptying your wallet on plants, resources and cities. Sometimes doing little or nothing for a turn can be the right move, to avoid overpaying for something, or to husband your money for the next turn when a much better power plant will become available. But the competition is fierce and laggards will pay the price.
Another plus is that the game doesn’t involve direct conflict between players. There’s no combat or attacks, no destroying other players’ hard work. But unlike some hobby games which can seem more like everyone playing their own solo game at the table, it does involve player interaction — and indeed makes it integral to the game.
The way the game incorporates competition and supply and demand is its most elegant aspect. But key to the game’s success is the system it puts in place to ensure balance. This is primarily done through artificially manipulating turn order, so players who are doing better are the last ones to buy resources and build into cities — meaning they’ll pay more and find their routes blocked. Leaders also are the first ones to bid on power plants, which hurts them because later in the auctions better power plants tend to become available. (Veteran players talk about the concept of “leading from behind” — intentionally keeping your income low to benefit from this system even as you position yourself for a late surge to the front. Of course, the fact that veterans can game this mechanic like this is partially a downside, in that it doesn’t help new players as much as you might think.)
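As I remember the rules, the reordering itself is simple to express; here’s a sketch with hypothetical player records (the tiebreaker, if memory serves, is the highest-numbered power plant owned):

```python
# Sketch of the turn-order rebalancing (as I understand the rules):
# the leader is ranked first, which is a penalty, since auctions run in
# rank order while resource buying and building run in reverse.
# The player data here is hypothetical.

players = [
    {"name": "Alice", "cities": 7, "biggest_plant": 30},
    {"name": "Bob",   "cities": 9, "biggest_plant": 25},
    {"name": "Carol", "cities": 9, "biggest_plant": 33},
]

# Most cities first; ties broken by highest-numbered power plant.
rank = sorted(players, key=lambda p: (p["cities"], p["biggest_plant"]),
              reverse=True)

auction_order = [p["name"] for p in rank]           # leader bids first
market_order = [p["name"] for p in reversed(rank)]  # leader buys last

print(auction_order)  # ['Carol', 'Bob', 'Alice']
print(market_order)   # ['Alice', 'Bob', 'Carol']
```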
These mechanics, combined with a few others, are the source of one of the primary downsides of the game: it’s got a lot of little complicated elements that can be too much for some people — especially if no one at the table has played before to help teach and run things. Calculating the changes in turn order, figuring out how many resources to add each turn and handling all the intricacies of the auctions can all seem overwhelming. Plus, many of these rules are artificial, without any grounding in theme, and don’t flow intuitively from the rest of the game.
Even setting all that aside, the end game can involve quite a bit of math as players try to stretch their bank accounts for the final push. For me, this is a thrill (though I like to play with a pen and paper so I can jot down the various possibilities as I wait for my turn), but I can see how it would be a chore for people who like more casual games.
Power Grid isn’t for everyone. It’s involved, stressful and moderately complicated. People who like more casual games, or games with more of a random element, probably wouldn’t have fun with Power Grid. But for people who thrive on competition and strategy, it’s nigh perfect.
The game can incorporate anywhere from two to six players, though I’m told it’s best with four to six. (I’ve only ever played it with the larger groups.) Games take about 90 minutes to two hours. It involves both small pieces and math, so it probably isn’t suitable for any but the most precocious children.
You can buy Power Grid at a local hobby store or on Amazon. Alternately, I own it and will gladly play it with you. Apologies in advance for beating you.
A question raised just now at work: if something can be preemptive, why can’t it just be emptive?
It’s a somewhat obscure example of a linguistic phenomenon that pops up periodically. More famous, perhaps, is the question of why we can be “overwhelmed” and “underwhelmed” but are never just “whelmed.”
What happens is a word loses its meaning. In Old or Middle English, you have a word like “to whelm,” which means “overcome, as with emotions or perceptual stimuli.” That word gets a modifier, like “over.” Then, over the centuries, people gradually start using the compound form more and stop using the original root, until today, “whelm” is basically meaningless without a modifier.
The same thing happened with preemption. Originally, in the late 15th Century, “emption” was a real word, a noun meaning “buying.” Emption meant you were buying something; preemption, about a century later, meant you were buying something before someone else. Over the years, it got generalized to mean doing anything before somebody else — chiefly some sort of blow or strike. Meanwhile, “emption” fell out of the language.
So if a preemptive attack is to attack before someone else, an emptive attack would be just an attack, without reference to relative chronology. In other words, it’d be a pretty meaningless term. In this case, then, there’s a good reason why we don’t use “emption” in the modern sense of “preemption.”
Can you think of more good examples?
- Another example is disgruntled/gruntled. We no longer say someone is “gruntled,” from “gruntle” which originally meant “to grumble” or to “grunt.” “Dis” is an intensifier. So someone who is disgruntled grumbles a lot. But only the compound form survived. (Via Larry Kurtz)
- Couth/uncouth. Our word “uncouth,” meaning “lacking good manners or refinement,” derives from the Old English uncuð and originally meant “unknown,” from cuð, the past participle of cunnan, “to know.” In the 16th Century or so it got its modern meaning. To the degree we say “couth” any more, it’s a back-formation from “uncouth.” (Via Pinedale Roundup.)
This post has been updated with the term “bound morpheme” and one or more examples.
Hating on the Star Wars prequels is a favorite pastime among those of a geekier persuasion. They have their moments, but are also heavily uneven, tediously paced and largely lacking resonance. But what if the prequels were good — REALLY good? That’s the question asked by filmmaker Belated Media, whose name does not appear to be anywhere on his YouTube, Facebook or Tumblr. This nameless video auteur proceeds to answer his own question by sketching out changes to the prequel scripts that actually seem like they’d produce pretty awesome movies.
He’s done episodes 1 and 2 so far, and hopefully won’t wait another year to finish a third installment. Among the changes: turning Darth Maul into a recurring villain, giving Obi-Wan Kenobi an arc involving his need for revenge for Qui-Gon’s death, introducing Anakin as a teenager instead of a child, drastically simplifying the plotlines, setting much of the action on Alderaan (so its later destruction means more to us), introducing Bail Organa earlier, focusing on the relationship between Anakin and Obi-Wan instead of Anakin and Padme, and removing Yoda’s (admittedly cool) lightsaber fighting.
And Jar Jar’s gone, too, but I hope that was assumed.
The two videos are a bit long but fun to watch. If you like Star Wars, give them a look.
See also: the even longer but equally entertaining takedowns of the prequels by Red Letter Media (I, II and III), and the proper order in which the six current Star Wars movies should be watched with the versions of the prequels we’re stuck with.
When most people talk about “efficiency,” they talk about it in one of two ways. For some people, it’s an unabashed good thing, a goal in and of itself. For other people, it may be a good thing but is often used as an excuse to bring bad things — firing employees, or replacing traditional tasks with soulless machines, or the like.
But I’ve been struck lately by several cases where people have defended inefficiency as a virtue. It’s not just that inefficiency has side benefits, but that the direct impact of the lack of efficiency is praiseworthy itself. Usually this happens where the activity in question is frowned upon — but perhaps not so much as to warrant an outright ban.
For example, take alcohol laws in the United States. Under the “three-tier” system, the alcohol industry is divided into producers (breweries and distilleries), wholesalers and retailers. The producers, in other words, not only can’t sell directly to the public, but they can’t sell directly to stores that sell to the public, either.
This is doubtlessly inefficient. It “adds to the price of the drink at every step,” it “produces patently ludicrous scenarios,” it harms small producers who are unable to find distribution, and has “fleeced customers for decades,” according to author Kevin R. Kosar.
It’s also exactly the point.
“By deliberately hindering economies of scale and protecting middlemen in the booze business, America’s system of regulation was designed to be willfully inefficient, thereby making the cost of producing, distributing, and retailing alcohol higher than it would otherwise be and checking the political power of the industry,” writes journalist Tim Heffernan.
The inefficiency is deliberate, Heffernan writes, precisely because of the pernicious qualities of alcohol when abused. An outright ban didn’t work, so Americans settled for making the system inefficient, raising the costs and putting an artificial brake on something seen as undesirable.
In contrast, Britain has none of these restrictions — and because of that, Heffernan argues, a much worse national drinking problem.
Whether you agree with Kosar or Heffernan, it’s a striking argument to consider.
It’s the same case with a current issue-of-the-day, drones.
Many people are very upset with the U.S. government for using drones to spy on people and to kill people. And yet, something about that seems odd. Why are people so upset that DRONES are doing the spying or the killing? Some human is the one giving the orders or operating the controls. Why aren’t people complaining about the killing or the spying instead of the method by which that killing or spying is being done?
The key difference between using a drone to fire a missile at a terrorist compound and using, say, a manned aircraft, is that the drone is more efficient. You can conduct the operation without putting an expensive human being at risk. You don’t need to get those expensive human beings to the site in question; they can operate the unmanned vehicle from across the world. By using drones, the costs of conducting killing or spying go way down, thus allowing more of it. (Plus, decades of techno-phobic science fiction have made people a little leery of drones, which doesn’t hurt the public case, but among the people who are really worked up about the subject, efficiency is the real issue.)
Some people might want to ban the government from killing people or spying on people, others just want to limit it. But both can agree that just making it more difficult to do the frowned-upon activity by circumscribing the latest efficient technology is a good first step.
Are there other examples where inefficiency is praised as a self-evident good?
Many Democrats in Republican-dominated states like South Dakota derive what little political pleasure they have from the party’s national victories. While electing a Democratic governor in South Dakota can seem nigh-impossible at times, Democrats like Barack Obama and Bill Clinton regularly take control of the national reins of power thanks to more liberal voters in other states.
But in South Dakota, at least, history suggests Democratic national victories are actually devastating for the party’s hopes of winning local elections.
Dating back more than 60 years, Democrats have controlled around 10 extra seats in the South Dakota Legislature, on average, in the years when a Republican occupies the White House, compared to when a Democrat is running things in Washington, D.C.
In those years when a Republican is president, moreover, Democrats have tended to gain an average of around three or four seats per election.
But when a Democrat such as Lyndon Johnson, Jimmy Carter or Clinton has been president, South Dakota’s Democrats lose an average of four seats in each election.
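The arithmetic behind those averages is just a group-by over election years. Here’s a minimal sketch, with hypothetical rows standing in for the real dataset (linked at the end of this post):

```python
# Sketch of the seat-change-by-presidential-party calculation.
# The rows here are hypothetical stand-ins for the real dataset.
from statistics import mean

# (election year, president's party at election time, Dem seat change)
elections = [
    (1954, "R", +5),
    (1956, "R", +8),
    (1964, "D", +18),
    (1966, "D", -20),
    (1978, "D", -6),
    (1982, "R", +4),
    # ... the real analysis covers more than 60 years of elections
]

for party in ("R", "D"):
    changes = [change for _, pres, change in elections if pres == party]
    print(party, "president: avg Dem seat change =", round(mean(changes), 1))
```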
This trend is striking when viewed graphically. The following chart shows the percentage of seats in the South Dakota Legislature occupied by Democrats, and is colored by the party affiliation of the president in the year of each election. (So 1976 is red even though Jimmy Carter won that year because Gerald Ford was president at the time of the vote.)
The slope is unmistakable — with a few outliers (notably the 1964 Lyndon Johnson landslide), Democrats steadily gain seats in the GOP-run years, and fall backwards otherwise.
These trends don’t prove that Democratic presidents are causing Republican victories in South Dakota, merely that the two tend to happen at the same time.
But Jon Schaff, a political science professor at Northern State University in Aberdeen, said he can see how Democratic presidents might make it tough for local Democrats in conservative states such as South Dakota.
“I could see that when a Democrat is in office, doing things that maybe a majority of the state doesn’t like, the … unpopularity of the national Democratic Party gets attached to local politicians,” Schaff said.
Another explanation for the trend could be the effect of Democratic presidents on Republican voters.
“At times when the national Democratic Party is in ascendancy, Republicans in the state become more partisan,” he said.
The only times Democrats have taken control of one or both houses of the Legislature — the Senate in 1958, 1972, 1974 and 1992, with a deadlocked House in ‘72 — have all been elections with a GOP president. Similarly, the only five times in this period that a Democrat has been elected governor were during Republican presidencies.
That’s not to say there aren’t plenty of other possible explanations. Democratic gains in the 1970s under presidents Richard Nixon and Gerald Ford could involve the national anti-GOP backlash after the Watergate scandal. Their gains in the 1980s under President Ronald Reagan might be connected to Reagan’s unpopular farm policy rather than anything more fundamental about Reagan’s political affiliation.
One political watcher, former South Dakota Republican Party chairman Joel Rosenthal, said the effect could simply be a coincidence. The real factor, he suggested, was that some Republican governors have been more vigorous at energizing and organizing the state GOP than others.
“It could have a lot more to do with Bill Janklow being on the ticket or being governor, or Dennis Daugaard,” Rosenthal said, pointing to two governors who have served at times when Republicans did well in the Legislature.
Only three governors were in office for both Democratic and Republican presidential administrations: Janklow (under Carter, Reagan, Clinton and Bush II), Mike Rounds (under Bush II and Obama), and Sigurd Anderson (under Harry Truman and Dwight Eisenhower). The sample size there is too small to draw any strong conclusions, but Janklow under Democrats saw Democrats lose an average of just under three seats per election, while Janklow under Republicans saw Democrats gain an average of a quarter seat. Rounds’ one election with a Democratic president was a Republican landslide in which Democrats lost more than 13 percent of their seats, while his elections under Bush saw Democrats pick up an average of several seats each election. Anderson’s election under Truman saw the Democrats nearly wiped out in the state, while once Eisenhower took office they rebounded vigorously.
Shortly before the 2012 election, the then-chairman of the South Dakota Democratic Party dismissed the correlation.
“I think there’s probably a number of factors outside who’s in the White House at play in those elections,” said Ben Nesselhuf, chairman of the South Dakota Democratic Party until July 2013. “By and large, South Dakotans have a pretty good understanding of what their Legislature’s about and who they’re sending there, and the difference between the national parties and the state parties.”
But after the voters in November 2012 returned the same number of Democratic lawmakers despite a vigorous SDDP campaign, Nesselhuf privately pointed to this analysis as a reason why Democrats didn’t regain some of the seats they had lost in the 2010 Republican landslide. Some commentators, including this writer, had expected Democrats to gain back some seats under the evocatively named theory of the “dead cat bounce” — a party that suffers a landslide defeat will lose seats they normally win, and thus will be well-positioned to retake them once the landslide is over. That didn’t happen, and Nesselhuf suggested it was because South Dakota Democrats were struggling against the massive unpopularity in South Dakota of a Democratic president.
Rep. Bernie Hunhoff, the Democratic leader in the state House of Representatives, said Democrats tend to get “none of the benefits” of having a Democratic president, since the national party rarely invests significant resources in South Dakota.
“Both parties write off the state,” Hunhoff said. “And yet we get whatever negatives there might be, because everyone likes to blame the party in charge. That was certainly true under Clinton and Obama.”
View the data used in this analysis here.
I’ve been watching the political tumult in the Middle East with interest since before the “Arab Spring” first broke out in late 2010. After writing a research paper on the political economy of the United Arab Emirates for one college class, I decided to keep my research focused in the Arab world for my next class, on the “Diffusion of Democracy,” in the spring of 2008. I wrote a case study of three different Arab countries in different situations — oil-rich monarchy Kuwait, impoverished monarchy Jordan, and massive then-dictatorship Egypt.
While Jordan and Kuwait have had little political change since then, Egypt has been in near-continuous uproar. And I’m still proud that much of my analysis proved prescient. The full essay is here or embedded below, but here’s some choice excerpts.
By way of quick background, I analyzed the countries from two perspectives: structural and process. Structural analysis looks at what socioeconomic characteristics correlate with democracy, and sees how the undemocratic countries compare to try to predict which are likely to become democratic. I used two primary datasets. One was assembled by Ronald Inglehart and Christian Welzel, who took the massive World Values Survey and assigned every country a value based on its citizens’ responses on two axes — one between “traditional” and “secular/rational” values associated with industrialization, and one between “survival” and “self-expression” associated with the rise of the consumer-based economy. Inglehart finds that the self-expression values correlate well with support for democracy. Here’s a fascinating map showing how various countries fall; you can see right away it is not optimistic about the Muslim world:
Secondly, the Egyptian political scientist Moataz Fattah conducted a survey of literate Arabs throughout the Muslim world to try to gauge support for democracy. (The literate-only dataset is a significant limitation that probably serves to overestimate democratic values.) Fattah divided the Islamic population into four general camps — “traditionalist Islamists” who reject democracy as contrary to Islam, “modernist Islamists” who want a democracy compatible with Islam, “autocratic statists” who are secular supporters of dictatorship, and “liberal pluralists,” secular supporters of democracy. In general, where the modernists and the pluralists outnumber the traditionalists and the statists, support for democracy is strong.
But while structural analysis has a pretty good track record of predicting democratization in the long run, it doesn’t have much to say about when those transitions are likely to occur. That’s the emphasis of a process-oriented approach, which takes a qualitative look at the structure of a regime and how it is likely to bend or break to popular democratic pressure. This approach also looks at what groups there are in society that might be able to exert effective pressure to demand democratic reforms.
In Inglehart’s model, Egypt scored a -1.57 on the traditional-rational axis and a -0.4 on the survival-expression axis. That’s not good — only a handful of countries that far to the left on the survival-expression axis are democratic, and none are paragons. Still, Inglehart did find a substantial constituency for democracy in the Egyptian survey — 30 percent of respondents had self-expression values, comparable to Venezuela, Peru, South Korea and Portugal. Egypt’s Freedom House scores (under the Mubarak regime) were actually less free than Inglehart’s model would predict. My conclusions from this model:
This suggests that Egypt is due for at least some formal democratization over the long term because its government is out of tune with the beliefs of its people. It would not take very much expansion of self-expression values to lead to a major change in this modernization model: “any society in which more than half the population emphasizes self-expression values scores at least 90 percent of the maximum score on liberal democracy.” On Inglehart and Welzel’s measure of “effective” democracy (this is to say, elite corruption), Egypt is a poor performer, ranking among the most corrupt countries in the world. Even if Egypt were to acquire formal democratic institutions, Inglehart and Welzel do not predict any rapid improvements in effective democracy. (Emphasis added)
In light of what happened three years after this paper, I think this paragraph holds up very well.
Fattah’s model suggested “Egyptians have among the strongest preferences for democratic institutions in the Muslim world (higher even than Muslims living in the United States) and also show high support for ‘democratic values.’” Only three percent of literate Egyptians were “traditionalist” (undoubtedly the actual figure is higher) and seven percent statist, compared to a huge 63 percent of modernist Islamists and 27 percent secular pluralists. In contrast, the rates of traditionalists in other Arab countries were 26 percent in Syria, 25 percent in Algeria and 46 percent in Saudi Arabia. But the dominance of modernist Islamists over secular pluralists in the Egyptian literate population — only about 70 percent of Egyptians can read, so the pluralists are surely even more outnumbered than this suggests — doesn’t bode well for people hoping for a Western-style liberal democracy there.
From a transition studies approach, I suggested the Egyptian military might not prove loyal to Mubarak if push came to shove. Though Mubarak had lavished resources on the military, he hadn’t taken certain steps to ensure its loyalty (such as that taken by Kuwait, stocking the military’s upper echelons with family members of the leader):
On the other hand, where some countries have bound the military to the regime through patrimonialism—placing relatives and key supporters who have a stake in the regime’s survival in key posts—Egypt’s military is “highly institutionalized” and capable of acting independently. Herb and others agree with Bellin that: “where the coercive apparatus is institutionalized, the security elite have a sense of corporate identity separate from the state. They have a distinct mission and identity and career path. Officers can imagine separation from the state.” O’Donnell and Schmitter find that where the military is professionalized and independent, “the only route to political democracy is a pacific and negotiated one.” It is conceivable (despite the close ties between regime and army) that the military could switch to the opposition or remain neutral during a transition.
In fact, when millions of protesters packed Tahrir Square and other Egyptian streets, the military refused Mubarak’s orders to crack down and ultimately removed him from power. Mohammad Morsi, Mubarak’s democratically elected successor, proved no better at bringing the army to heel and suffered the same fate.
Of course, in order for the military to respond to democratic pressures, there had to be those pressures. I examined two key sectors of Egyptian society that might prove capable of organizing politically to oppose Mubarak (or run a government after his ouster): the secular political parties, and the Muslim Brotherhood.
The former are “largely a sorry bunch” who had been intentionally emasculated by Mubarak:
Saad Eddin Ibrahim, a liberal Egyptian academic and dissident, notes with dismay (in a chapter written from a prison where he was serving a seven-year sentence for opposition activities) that “Egypt’s [secular] democracy advocates are the weakest of the three salient actors at present,” along with the regime and Islamists, because instead of “viewing them as an ally against extremism, the state has repeatedly repressed democracy advocates.” Unless things change drastically, the liberal opposition looks to be an ineffective advocate for democracy.
The Muslim Brotherhood, though banned under Mubarak, were nonetheless popular and organized. Even while officially illegal, members managed to dominate aspects of Egyptian society, often by acting as nominal independents. But the “biggest question with the Brotherhood is not whether they are strong enough to be a credible opposition organization—if they are not, then no one is, and all indications are that they are—but what kind of opposition group they would be.”
On the one hand, the Brotherhood has, since renouncing violence, consistently endorsed democratic principles. Brotherhood rhetoric uses terms like “democracy,” “liberty” and “freedom” “freely and repeatedly,” and the Brotherhood “consistently dismiss the argument that Islam and democracy are incompatible.”
On the other hand, the Brotherhood’s talk of democracy can sound suspicious to secular liberals. Brotherhood democracy is Islamic democracy based on sharia law. “Western critics,” notes Sana Abed-Kotob, “are fearful that the Brethren are using elections as a tactic to gain power and subsequently do away with the democracy that gave them their voice.” … Even granting the Brotherhood the best of intentions, a Brotherhood-led democracy will probably contain many objectionable elements to secular liberals. But for democracy advocates in Egypt, firm military support for the regime means that the Brotherhood is the only effective opposition group. On the Brotherhood’s good intentions may ride the prospects for Egyptian democracy. (Emphasis added)
As it turned out, the Brotherhood was not intimately involved in Mubarak’s overthrow, and neither were the secular political parties. His downfall came from a massive, unorganized popular uprising that I did not expect, spurred by an international popular movement. But when examining what came next, as various interest groups tried to organize to control Egyptian democracy, the above paragraphs are still useful. The Muslim Brotherhood, once it made the decision to contest elections, was clearly the dominant group, winning both a large majority in the parliament and the presidency. And Morsi’s rule did indeed “contain many objectionable elements to secular liberals,” which combined with his mismanagement of the economy and the disloyalty of the military contributed to his downfall.
Even if Egypt’s liberals are able to organize effectively, they’ll still be outnumbered. Democratic institutions in Egypt are likely to return Islamist governments. But Egypt does have a solid core of around a third of the population who support democratic values, which is non-negligible and could shape the country’s political future for years to come.
It was doubly unfortunate for English Catholics that in 1570 Pope Pius V issued a Bull, Regnans in Excelsis, condemning Queen Elizabeth as a heretic and absolving her subjects from their allegiance to her. The Bull had been intended to help the northern rebels, but it was not issued and advertised in England until after they had been defeated (with reckless bravery, a Catholic gentleman called John Felton tacked a copy of it to the gate of the Bishop of London’s palace, and suffered the usual hideous execution of a traitor when he was caught). It provided a new embarrassment for Catholics instead of helping them. … Pius’s action was so generally recognized as a political blunder that it was even remembered in the 1930s when the papacy considered how to react to Adolf Hitler’s regime: discreet voices in the Vatican privately recalled the bad precedent, and behind the scenes it was a factor in preventing a public papal condemnation of Nazism.
“Jurassic Park,” the 1993 film*, is so misunderstood that even the film’s writers (including source material author Michael Crichton) and director Steven Spielberg got it wrong.
The ostensible message of the film is that playing God with nature is bad. Jurassic Park (the park) was doomed to fail, we are told, because humanity had exceeded its grasp by restoring dinosaurs to life and deluding ourselves into thinking we could control them.
Take it from Ian Malcolm, the Jeff Goldblum character who served as the authorial voice in the film:
Dr. Ian Malcolm: John, the kind of control you’re attempting simply is… it’s not possible. If there is one thing the history of evolution has taught us it’s that life will not be contained. Life breaks free, it expands to new territories and crashes through barriers, painfully, maybe even dangerously, but, uh… well, there it is.
Dr. Ian Malcolm: I’ll tell you the problem with the scientific power that you’re using here, it didn’t require any discipline to attain it. You read what others had done and you took the next step. You didn’t earn the knowledge for yourselves, so you don’t take any responsibility for it. You stood on the shoulders of geniuses to accomplish something as fast as you could, and before you even knew what you had, you patented it, and packaged it, and slapped it on a plastic lunchbox, and now you’re selling it, you wanna sell it. Well…
John Hammond: I don’t think you’re giving us our due credit. Our scientists have done things which nobody’s ever done before…
Dr. Ian Malcolm: Yeah, yeah, but your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.
Hogwash. The lesson of “Jurassic Park” was not that it was foolish for humans to resurrect dinosaurs. It was that humans were foolish to resurrect velociraptors.
If you want to be picky, you could add T-Rex to the list, but really, I would argue that was a perfectly acceptable risk to take to be able to say you have a T-Rex, the most popular and famous dinosaur species, in your dinosaur park.
Let’s review the sequence of events in the film:
- Jurassic Park is created. Even with all systems operational, a hapless worker is killed by velociraptors
- Treacherous programmer Dennis Nedry shuts down the island’s security system in order to escape. He is killed by a dilophosaurus after a coincidental tropical storm messes up his carefully rehearsed exit strategy
- This allows the tyrannosaurus rex to break out of its (in hindsight under-engineered) enclosure
- By pure coincidence, a civilian tour group gets caught immediately in front of the T-Rex pen. The tyrannosaur kills one and injures another; the rest escape
- The park’s computer engineer restarts the software to restore the fences. This releases the velociraptors, which until this point had remained mercifully contained
- The velociraptors kill the engineer and the highly trained game warden, and somehow fail to kill two small children
- The T-Rex enters the center area and kills the velociraptors, allowing the surviving humans to escape
Nedry got what was coming to him, so we’ll ignore the dilophosaurus. If you crash your jeep into a lion cage at a zoo you might get eaten too, but no one would cite that as a reason to not have lions in zoos.
The only reason the T-Rex hurt anyone was a freak coincidence: at the exact moment the system was disabled, a tour group was sitting in front of its cage. Now, the T-Rex incident clearly exposed several system vulnerabilities — the electric jeeps need some sort of backup system, the cages around carnivorous dinosaurs need to be built to contain them even when power is disabled, and guests should probably be given a safety briefing about T-Rex vision before setting off on their safari. If you want to be incredibly cautious, keep the T-Rex out of your park. I’m sure the kiddos will understand when you tell them why their favorite dinosaur isn’t there. But even with the park’s systems shutting down, it was only by chance that the T-Rex was able to harm anyone.
And the other dinosaurs in the park? Glorious. Peaceful. Stayed in their enclosures and didn’t harm a single human. Don’t punish them for the crimes of other species.
Namely, the velociraptors. While the T-Rex was a force of nature, the velociraptors were shown to be intelligent, aggressive and determined. Even with all systems operational they probed their enclosure, searching for weaknesses, and managed to kill a very wary park employee. Plus, no one cared much about velociraptors (or at least no one did until “Jurassic Park” was released, and anyway those were really more like utahraptors). These guys are BAD NEWS. Leave them in the Cretaceous where they belong, and good riddance.
Without the velociraptors, the number of innocents killed in the park’s singular disaster would be one: someone who was just in the wrong place at the wrong time. And that guy was a lawyer, so I’m not even sure he counts as innocent. (I KID.) Of the 10 people on the island when Nedry turned off the security, two would have died, counting Nedry. In the actual incident, twice as many people bit the dust, solely because there were velociraptors present, and it really should have been more.
So scientists: if you ever manage to invent technology to recreate dinosaurs, don’t let the misinterpretations of Michael Crichton dissuade you from creating a new Jurassic Park. Such a park would be AWESOME, and minimally dangerous, as long as you are not dumb enough to include velociraptors in the mix. They’re clever girls.
*Also the 1990 Michael Crichton book on which the movie was based. UPDATE: As Tim points out in the comments, the above light-hearted post is not actually accurate when it comes to Crichton’s book, which more thoroughly expounded his techno-skepticism. I haven’t read it in more than a decade, and forgot elements he mentions. I stand by my argument when it comes to the movie.
The Internet is great, but it’s no telegraph or automobile.
That’s one of the key takeaways from economist Tyler Cowen’s “The Great Stagnation,” which to my great embarrassment I only got around to reading today, more than two years after he first published it. I’ve read Cowen’s blog, Marginal Revolution, since before then, but never plopped down the $4 to buy the e-book. It’s not much longer than a long-form magazine article; I read it in just over an hour.
Cowen sets out to explain the last five years, and the last 40 — the sudden recession the world has not yet fully emerged from, and decades prior to that of slow, fitful, partially illusory economic progress.
The two are largely one and the same, Cowen diagnoses — a reflection of a society that has (with one key exception) largely run out of big new ideas but doesn’t realize it. The lack of ideas has slowed the economy; the self-delusion breeds bubbles and crashes.
His chief concept is that of “low-hanging fruit” — ideas, innovations and changes that produce a lot of economic bang for relatively little effort. For example, taking a largely uneducated population and sending everyone to high school and the best to college produces huge dividends. That’s low-hanging fruit. Trying to find a way to make existing schools five percent more effective is not low-hanging — it has relatively high costs compared to the payoff. Going from no car to owning a car transforms one’s life; going from the kind of cars they made in 1950 to the kind of cars they make in 2010 doesn’t. That’s not to say those more incremental improvements aren’t worth it, but they’re not the kind of changes that produce economic booms or transform the economy.
America and the West have largely picked the low-hanging fruit of the Industrial Revolution, Cowen says. He breaks out charts to demonstrate it: charts of stagnating family incomes, slowing productivity and falling patent counts. Until we find more low-hanging fruit, society might just have to lower its sights and accept that the good old days are over.
The elephant in the room, of course, is the Internet and computing power — the single greatest invention of the past half century. Cowen acknowledges this, and tries to explain why, as impactful as the computing age has become, it hasn’t yielded much low-hanging fruit for society at large:
- As of yet, the Internet just hasn’t made that much money. Some computer companies have made a great deal of money, of course. But “relative to how much it shapes our lives and thoughts, the revenue component of the internet is comparatively small.”
- Not everyone has benefited from the Internet in the way that plumbing or education helped everyone. Anyone can find the Internet useful for entertainment, but Cowen says its value as a productive tool flows primarily to an intellectual elite, a small subset of knowledge workers with the “cognitive abilities to exploit” it.
But perhaps the Internet just hasn’t yet fully come of age. One of the more interesting observations in the book is the comparison to the heyday of the Industrial Revolution in the 19th century. Then, Cowen notes, many of the biggest advances in science and technology could be made by clever amateurs, self-taught dilettantes with an idea or good luck. Today, it’s specialists with years of advanced training who make advances in most fields. The exception is the Internet — people like Mark Zuckerberg and Steve Jobs were amateurs when they made their breakthroughs. That suggests a field that is still ripe for advancements, one that could lift society out of its Great Stagnation and into a new boom.
Until then, however, Cowen suggests we should recognize that “relatively slow rates of technological progress will be with us for at least a few more years, possibly much longer.” The biggest problems with moving slowly can come when you think you’re moving quickly.
Here’s one of my biases: I’m more disposed to agree with something if it can be framed in an intellectual manner.
So it was with this week’s Internet tempest-in-a-teapot, over the use of the word “derp.” After one commentator used the term to slam his opponent (as “derpy”), various people like me took to the web to debate its appropriateness. I was at first inclined to agree with Gawker’s Max Read, who said the word was juvenile, silly and vaguely offensive:
“Derp,” a word for “stupidity,” was not a particularly funny joke when it was a throwaway line in the Trey Parker-Matt Stone BASEketball. It didn’t get funnier when it crossed over to 4chan and YTMND ten years ago, especially since message-board posters managed to turn it from a nonce word into one with connotations of disability.
It’s the sound of the word. Unlike calling someone “stupid” or an “idiot,” calling them “derpy” adds something of the low humor of the mimic — just imagine someone repeating everything you say, but replacing all your words with “herp derp herp derp.” (That’s exactly what one browser extension does.) Surely there are ways to conduct a debate that aren’t so demeaning to all parties involved.
But then I mostly changed my mind after reading economist Noah Smith offer a much more sophisticated definition of “derp” than “stupidity.” It has to do — notice I didn’t say this was a simpler definition — with Bayesian probability, an approach to statistics that deals with situations (such as, say, most of real life) where the truth of a proposition is not certain.*
In Bayesian inference, you start your analysis of a question with a “prior belief” — what you think before you consider any evidence. This is shortened to your “prior.” Your “posterior belief” or “posterior” is what you conclude after considering the evidence. Smith:
What does it mean for a prior to be “strong”? It means you really, really believe something to be true. If you start off with a very strong prior, even solid evidence to the contrary won’t change your mind. In other words, your posterior will come directly from your prior.
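In symbols, that relationship is just Bayes’ rule (the textbook formula, not anything particular to Smith’s post), which gives the posterior in terms of the prior:

\[
P(\text{belief} \mid \text{evidence}) = \frac{P(\text{evidence} \mid \text{belief}) \cdot P(\text{belief})}{P(\text{evidence})}
\]

The standalone P(belief) term is the prior; the left-hand side is the posterior. Push the prior close enough to 0 or 1 and no plausible evidence term can move the posterior very far, which is exactly the “your posterior will come directly from your prior” behavior Smith describes.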
Having strong priors — strong a priori beliefs that you hold to even when the evidence suggests otherwise — is NOT necessarily irrational in Bayesian probability. To take one example: if you are trying to determine whether your friend’s baby is a boy, a girl, or a dog, you would be justified in rejecting the third option based on your prior belief that humans can’t give birth to dogs, even if your only evidence, say, is a photo of a puppy.
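For the concretely minded, here’s a minimal sketch of that boy-girl-dog update in Python. Every number in it is invented purely for illustration (none of this comes from Smith’s post); the only point is the mechanics of multiplying a prior by a likelihood and normalizing:

```python
# Toy Bayesian update for the boy/girl/dog example above.
# All probabilities are invented for illustration only.

hypotheses = ["boy", "girl", "dog"]

# Prior: what you believe before seeing any evidence. Humans don't
# give birth to dogs, so "dog" gets a vanishingly small prior
# (nonzero only to keep the arithmetic honest).
prior = {"boy": 0.4999995, "girl": 0.4999995, "dog": 0.000001}

# Likelihood: how probable the evidence (a puppy photo) would be
# under each hypothesis.
likelihood = {"boy": 0.01, "girl": 0.01, "dog": 0.99}

# Bayes' rule: posterior is proportional to likelihood times prior,
# normalized so the posteriors sum to 1.
unnormalized = {h: likelihood[h] * prior[h] for h in hypotheses}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

for h in hypotheses:
    print(f"{h}: prior={prior[h]:.7f}, posterior={posterior[h]:.7f}")
```

Run it and the posterior on “dog” comes out around 0.0001: evidence that favors the dog hypothesis ninety-nine to one barely dents a one-in-a-million prior. That’s a strong prior behaving rationally; the derp, per Smith, comes from treating every prior that way and announcing it over and over.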
Using the example of people who believe that solar power will never be cost-competitive with fossil fuels, Smith says there are limits to how much we should tolerate people clinging to their priors:
But here’s the thing: When those people keep broadcasting their priors to the world again and again after every new piece of evidence comes out, it gets very annoying. After every article comes out about a new solar technology breakthrough, or a new cost drop, they’ll just repeat “Solar will never be cost-competitive.” That is unhelpful and uninformative, since they’re just restating their priors over and over. Thus, it is annoying. Guys, we know what you think already.
English has no word for “the constant, repetitive reiteration of strong priors”. Yet it is a well-known phenomenon in the world of punditry, debate, and public affairs. On Twitter, we call it “derp”.
There’s a certain elegance to that definition that appeals to me. Someone who is derpy is someone who constantly refuses to change their views based on conflicting evidence. (That’s not to say they have to adopt the position of their opponents — perhaps they could adopt a more moderate position that’s less in conflict with the evidence, or concede the validity of the evidence but proffer new evidence to bolster their own position.) I still don’t like “derp” as a word, but Smith is right that there’s no other word for the concept.
UPDATE: One final, necessary element of the argument that I originally omitted: what separates derpiness as a concept from mere stubbornness is that someone who is derpy not only holds on to his or her belief in the face of conflicting evidence, but loudly persists in professing that original belief even as it is disproved again and again.
Now, of course, I’m second-guessing myself, wondering whether I’m only intrigued by this idea because it was expressed in a way that appeals to my intellectual vanity. Anyone have any other arguments one way or the other before I arrive at a posterior belief on the value of “derp”?
*Note: my grasp of Bayesian inference and related areas is very shaky, but I’ve read several intriguing pieces lately that rely heavily on it. If anyone knows a good layman’s introduction to the concept I would be very grateful for the tip; I’d like to learn more without getting too into the math.
Earlier this year, a discussion of taxes on a South Dakota political blog spurred me to write a short essay about the subject. It was primarily sourced from one of my favorite nonfiction books (and this is a somewhat telling statement about my character), “A Free Nation Deep In Debt: The Financial Roots of Democracy.” That book’s author, James Macdonald, is a former investment banker, but before he starts getting into analyses of bonds and interest rates and defaults, he delves into political philosophy.
On taxes, tribes and freedom
Dating back to ancient times, to be taxed was viewed as being to some degree unfree. More precisely: to be directly taxed was viewed as being unfree.
To the ancient Greeks, taxes “were an offense to the dignity of the citizen. State revenues from publicly owned property were acceptable, as were taxes on foreigners and limited indirect taxes. But the citizen would not have his money taken from him at the behest of some leviathan” (Macdonald, pp. 32-33).
To raise money for vast undertakings (read: wars) without taxes, free societies instead resorted to expedients such as forced, repayable gifts. Citizens would be obligated (by social pressure if not by law) to give freely of their fortunes to the state, which would then repay this donation out of the spoils of the presumably victorious war.
Direct taxes were only levied by despots, tyrants and emperors.
Later on, as various Germanic tribes set up successor states around the remnants of the Roman Empire, they shared similar beliefs about freedom. As conquerors, “the Goths and the Franks were not subject to tax any more than other successful tribal conquerors in past times” (p. 61).
Indeed, the word “Frank,” the name of a tribe, increasingly began to acquire another connotation — “‘free’ — and especially ‘free from tax’” (p. 61). (The word “franchise,” in terms of “having the franchise,” or having the right to vote, is another descendant of this meaning of “Frank.”) The nobility of France would maintain their exemption from direct taxation until the verge of the Revolution. The social inferiors of the nobility, descendants of the conquered rather than the conquerors, had no such exemption and so bore most of the tax burden.
The medieval Italian republics shared this same aversion to taxation: “Like the Athenians of ancient Greece, the merchants of Venice disliked taxing themselves. Indirect taxes were easier to collect and fell on every resident and visitor…” (p. 72). As the costs of the incessant wars of the Middle Ages mounted, Venice and other republics relied on loans from their citizenry — increasingly compulsory loans.
These repayable levies were a medieval response to the perennial question of how to tax free citizens. They neatly expressed the duality of the position of citizenship in a small state: the obligation to undertake extra burdens and responsibilities to ensure the survival and prosperity of their state; and the exemption from the insult of direct taxes. (pp. 73-74)
Eventually, this ancient aversion to direct taxation would lessen — but never entirely. A key aspect was that such taxes be voluntary, imposed on citizens by a vote of the citizens themselves. (Citizens had long voted taxes onto non-citizens, who it’s important to remember at this time comprised a large, sometimes even overwhelming, percentage of a republic’s population. But under the logic of the time, non-citizens didn’t enjoy any freedom from direct taxation.)
The Dutch, who had protested so vigorously about their very low tax payments to the Habsburgs, ended up paying unimaginably greater sums to their new autonomous government… In 1595, an English observer commented on the political paradox: “The Tributes, Taxes and Customes, of all kinds imposed by mutuall consent — so great is the love of liberty or freedome — are very burthensome, and they willingly beare them, though for much less exactions imposed by the King of Spaine…” (p. 155; [sic])
In the American Revolution, recall, the grievance of the colonists was not that Parliament was taxing them, but that they were being taxed without representation.
Today, we live in a republic with universal suffrage — and direct taxation. Part of the rhetoric used to back taxation is the idea that it’s the civic duty of citizens to pay their taxes — a possible evolution of the earlier attitudes about the obligation of citizens to give to the state.
But notice that direct taxation — the income tax, the estate tax, the capital gains tax — provokes far more indignation and opposition than indirect taxes like those on sales. In some sectors, the rhetoric used to describe the income tax is that of government “confiscation” of private wealth — perhaps a necessary evil, but an evil nonetheless, and one to be kept to an absolute minimum. As one can see from history, this indignation is of ancient provenance.
Moreover, when tax-averse leaders support a new tax (such as the tourism surtax Cory finds bewildering), don’t overlook the key factor: the people who pay such a tax requested it themselves. In the committee hearings on the tourism tax, tourism promoters and business owners alike emphasized how they saw the benefits of the tax outweighing the costs; conservative Republican lawmakers in turn highlighted the willingness of the tourism business owners to tax themselves.
Very few Americans have anything one could even charitably call a classical education (and I include myself in this benighted horde), so this would seem not to be a case of explicit imitation of ancient values. But perhaps modern Americans are indirectly tapping into some more profound intellectual well.
A few disclaimers: In sketching out this theory I’m not presenting any sort of normative argument, either that taxes are bad because the ancients opposed them, or that the ancients were silly for opposing direct taxes. I’m simply attempting to describe. Modern American democracy has many intellectual parents, not simply classical republicanism; to privilege one is anyone’s right, but there’s no necessary reason why one MUST rank it above all others. Finally, the ancient republics were of course not saintly. While citizens of these republics enjoyed far more freedom and power, and checked their rulers more strictly, than the subjects of non-republican states, their citizenry comprised a relatively small subset of the population, with disenfranchised non-citizens and often large populations of slaves living in the polis but not voting, and often subjugated foreign cities paying tribute and having no citizenship rights at all. (This is true even if you set aside the half of the population who could not vote because they were female.) In many ancient republics, the dominant conflict was between the aristocratic republican elite on one hand, and despots promoting the rights of the common man on the other. Freedom and equality were in constant tension. Whether you cheer for a Caesar or a Cato, both have their flaws.
J.J. Abrams’ new film Star Trek Into Darkness has just enough thoughts floating around its unexceptional script to make the viewer conscious of what could have been, but not enough to make it interesting. Its visuals are flashy enough to entertain but not dynamic enough to transfix. It is a profoundly disappointing movie.
The plot, briefly: prematurely promoted starship captain James T. Kirk disobeys standing orders to try to save a primitive species of aliens. He is caught, demoted, then swept up into a manhunt for a wanted fugitive and a conspiracy within Starfleet command. To the degree it can be said to have a theme, it warns about the dangers of militarization. There’s rich potential in analyzing the tensions of a highly armed military body genuinely devoted to peace, potential the movie is uninterested in examining. Indeed, to do so would betray a greater sense of self-awareness than Star Trek Into Darkness appears to possess, because it is itself a more militarized, action-oriented version of past Star Treks.
Character development is less stunted, though hardly impressive. The film is a coming-of-age tale in which its only two dynamic characters, Kirk and Spock, must learn to accept the trappings of adulthood — Kirk setting aside his considerable ego to care for his crew, and Spock abandoning his attempt to deny emotion altogether. But these are simply basic character beats, drawn broadly and not dwelled upon at any length. They are interesting primarily as a counterpoint to this film’s antecedent, Star Trek II: The Wrath of Khan, which featured a 51-year-old William Shatner and addressed, with considerably more elegance than Abrams musters, a group of adults coming to terms with the decline of middle age.
Abrams intends for viewers to draw comparisons with Wrath of Khan, sprinkling references to that classic, arguably the greatest Star Trek film, throughout. Throwaway lines, gags, characters and subplots all reference the prior film, in what is doubtless the cleverest aspect of the script by Roberto Orci, Alex Kurtzman and Damon Lindelof. But even this highlight is telling, in that it is a shallow mockery of intellectualism, content to merely refer to ideas and art without actually engaging them. Even the original low-budget 1967 TV episode on which Wrath of Khan was based managed to be more thoughtful than Into Darkness, whose plentiful references reflect the shortcomings of this Age of the Remix.
It is not an unenjoyable movie, containing generally fine acting, typically excellent special effects and a series of mildly entertaining action set pieces. But it should not be judged as a mere popcorn flick. Its plot and characters could have been so much more interesting; they fall sadly short through a general lack of interest on the part of Abrams and the writing trio (who were reportedly concerned with making the movie appeal to an international, non-English-speaking audience by shying away from the talky stuff).
The most curious thing about Star Trek Into Darkness is not actually about the movie at all, but rather why its similarly action-first and intellectually unambitious 2009 predecessor did not provoke the same disappointment. The 2009 Star Trek film was actually very entertaining despite its shortcomings. Partly it gets graded on a curve because it shoulders the burden of introducing all the characters and rebooting a universe. That film also drew much of its limited intellectual energy from the clash between the archetypes of the major characters — a well that can only be drawn on so many times before it begins to go dry.
If there is optimism to be found in this latest Star Trek film, it lies at the end, when after two entire movies the USS Enterprise finally gets its famous five-year mission to explore new worlds. That setup leaves creative room for a more interesting sequel, while financially the movie’s overseas success could build the brand loyalty to bring audiences back to a less flashy sequel. With Abrams himself set to jump ship to his own true sci-fi love, Star Wars, perhaps this rebooted Star Trek franchise could belatedly take the next step to become not simply entertaining but satisfying.
It’s one thing to know something. It’s another thing entirely to be conscious of how much or little you know compared to other people.
Last week, “Saturday Night Live” aired a sketch called “Game of Game of Thrones,” about a game show focused on HBO’s fantasy series “Game of Thrones.” The fundamental joke was that the contestants seemed to know the most arcane, obscure details about a fictional television show, but absolutely nothing about anything else:
A deeper — probably unintended — level of humor for me is that the questions and answers on the quiz show sounded very complex but were not terribly obscure within the universe of the show. They weren’t asking about minor trivia, background characters or facts mentioned in passing, let alone touching the incredible depth of detail contained in the books on which the show is based. Instead, the questions were about major plot points and characters. “Who did Jaime Lannister kill to earn the name Kingslayer?” may sound complicated to someone who’s never watched the show, but the fact that Jaime killed the Mad King Aerys Targaryen is one of the two fundamental facts of his character, referenced and discussed repeatedly over the show’s three seasons, both directly and in passing.
It seemed to me that a normal viewer of the show — someone who hadn’t read George R.R. Martin’s source books, rewatched every episode repeatedly or spent hours looking up facts online, but had merely watched every episode once during its original run — should be able to answer many of those questions.
Immediately, I realized the absurdity of that thought. How odd — and telling of the times — to describe a casual viewer of a show as someone who had “merely” watched every single episode. A decade ago, certainly two, someone who never missed an episode of a show might very well be described as a fanatical viewer. Our era of DVRs, DVDs and on-demand streaming is one in which it’s easy to fit a show around your schedule rather than rearranging your schedule to fit a show. It’s enabled an unprecedented level of serialization on television, shows like “Game of Thrones” where episodes don’t stand alone but are inextricably bound together. Miss an episode of “Game of Thrones,” aptly called the “most complicated show on TV,” and you’ll be lost when you tune in the next week.
But just a day or two after watching the SNL sketch, I realized that even someone who never misses an episode can be pretty lost following the show.
Bill Simmons, the sports journalist who founded and runs the website Grantland, is a fan of “Game of Thrones.” Simmons is not a stupid man, and he immerses himself regularly in pop culture. He hasn’t read Martin’s books or gone to any great lengths to research the show, but he has watched all the episodes — exactly my definition above of the normal viewer who I thought should be able to identify major plot and character points in the show.
But on this podcast (starting about 44 minutes in), Simmons talks about being unable to follow the plot and the characters.
“We’re six episodes in, and I don’t know what the [bleep] is going on,” Simmons says. “I really don’t, I just don’t know. I don’t understand. I feel dumb. The show makes me feel inadequate. I have to go on Wikipedia after to figure out who’s who.”
Simmons talks about not recognizing many characters, and not being able to remember the names of the ones he does recognize. There’s “Lady Whatever” and “Queen Whatever,” “the one who’s trying to get King Joffrey,” “the guy who’s been by her side for most of the time, with the beard,” and of course the immortal “the one guy.”
“Stuff happens in ‘Game of Thrones’ and I literally have no idea why it’s happening,” Simmons says.
Now, I’ve read the books, multiple times over the past decade or so. I’ve watched most episodes more than once. I’ve gone to online references to look up characters and events and places. So I am well aware that I know more about the show than most people watching it. But even as I warn people about how complicated the show is, I hadn’t fully appreciated just how daunting it can be even for someone who does it “the right way” and watches all the episodes, in order.
It’s not that this experience is alien to me. Last week (at the same time the SNL sketch was running), I watched the new “Anna Karenina” movie. This was a two-hour film condensed from a massive, dense Russian novel, and it took me about half an hour into the film and a few trips to Wikipedia on my phone before I had the main characters straight and could tell which one was Anna and who was in love with whom — pretty fundamental questions. If I’d read the Tolstoy book, I might have followed along with no difficulty — and if I’d been watching with a Tolstoy fan, their experience might have been very similar to mine watching “Game of Thrones” with a newbie.
Now if only they’d actually make that “Game of Game of Thrones” game show. As someone who is both intimately familiar with Westeros and can identify prominent public figures like Supreme Court justices, I’m pretty sure I’d clean up.