An introduction to Opeth

I'm going to try to introduce you to a band called Opeth. Opeth plays progressive metal*, which sounds scary and inaccessible, and is, a little bit, especially because they use growled or grunted vocals in many of their songs. I'll give you that: when you first hear growls, you might find them disturbing -- or funny. They're not something most people appreciate on first listen.

But Opeth is about a lot more than a few grunts. Their singer is also an accomplished "clean" vocalist, and all the musicians are first-rate. The songwriting is complex and diverse, encompassing an elaborate and intoxicating combination of genres. The result is music of startling depth, beauty, variety, and power.

So I'd like to introduce them to you one step at a time. At their quietest:

"Coil"

Perhaps this is more accessible than you expected. Let's add a two-part song structure (most of their songs go through three to five parts), and some distortion in the second half. This song is nearly eight minutes long, but be patient.

"Face of Melinda"

We're getting pretty kickass here. Same quiet opening, but then -- louder, more powerful, killer riffs. Notice how strong the melodic core of the song is, even at its loudest moments.

Get ready for the biggest shift: this next song is a fully-realized Opeth composition, with loud parts, quiet acoustic parts, clean singing, growling, multiple movements, and a nine-minute running time. Right off the bat you'll hear the growling, and hopefully you'll appreciate how it fits into the mood. A guest vocal by Porcupine Tree's Steven Wilson opens up the clean section at 3:25, and by 5:20 we're off in different territory altogether, before hitting the reprise.

"Bleak"

Yes, this is tough to listen to if you've never heard this kind of music before. But if you're willing to put in the time to habituate to the new noises and wrap your head around the song structures, you'll find yourself amply rewarded.

I'll leave you with one last track, which traverses a dizzying breadth of sonic terrain, at 0:00, 2:30, 3:30, 5:05, 7:15, 8:10, and a rhythmically interesting coda at 10:00 that I dare you to try to tap along with the first time you hear it.

"Reverie / Harlequin Forest"

How many people do I expect to convert with this post? Realistically, zero. But it's worth a shot...

* Many people actually place Opeth in the sub-genre "death metal", as opposed to its cousin, "black metal". Though music labels are generally imprecise, I think of death metal as "feeling bad about yourself" and black metal as "feeling bad about everyone else". I've never really gotten into black metal, but I do have a few albums. It's pretty extreme -- there's occasionally animal blood at shows, and the musicians sometimes do crazy things, including burning churches, killing themselves, and even committing the occasional murder... all of which I agree is ridiculous. A friend of mine was shocked that I'd even consider listening to black metal, given such behavior, and at the time, I kind of had to admit he had a point. In retrospect, though, it's very similar to rap (which we both listen to), in that both often glorify a violent, hateful worldview and occasionally live up to that promise in real life. In both cases, I don't think you have to endorse the lifestyle to appreciate the music. Black metal is a little comical in its absurdity, but there are still some jaw-dropping moments (give yourself at least until the "clean" vocals about halfway through this video), and impressive technical prowess (see the guitar work here, a live version of the next song on the same album by the same band).

Wrong way down a one-way street.

I actually keep a list of topics to blog about, because it's a lot easier to think of something interesting than it is to painstakingly convert that thought into prose. And I'm lazy.

Anyway, about six months ago, I thought of the following topic. I procrastinated like crazy on it (like I have the other dozen or so topics I have buffered up), until just last week, when I read an article that described exactly the phenomenon I wanted to write about. Good thing... saved me a lot of work. So here's the topic:

Imagine you're good friends with someone. You've known each other a long time, and you've had ample time and evidence to come to believe that your friend is a good person.

Then one day your friend does something mean. Not anything earth-shattering, but definitely something that, if observed in isolation, you'd attribute to a mean person. And there's the crux of it. Do you

  (a) weigh the action against the body of evidence you've accrued and chalk it up to an anomaly, so that your underlying impression of your friend is only marginally altered for the worse; or
  (b) assume that your friend, like most people, puts on a public persona, and view the action as an insight past this persona and into his or her inner, true personality, so that your underlying impression of your friend is radically altered for the worse?

I've always felt that people disproportionately opt for choice (b). And this article my brother sent me confirms this feeling:

With warmth, the inverse applies. Someone who does something nice, like helping an elderly pedestrian across an intersection, is not necessarily seen as a generally nice person. But a single instance of negative-warmth behavior—kicking a dog, say—is likely to irredeemably categorize the perpetrator as a cold person.

In other words, people feel that a single positive-competent, or negative-warmth, act reveals character. “You can purposely present yourself as warm—you can control that,” Cuddy explains. “But we feel that competence can’t be faked. So positive competence is seen as more diagnostic. On the other hand, being a jerk—well, we’re not very forgiving of people who act that way.”

I guess I understand the rationale, but it makes one huge assumption: that people are drastically different on the inside from how they act on the outside. Furthermore, I'd wager that if you had asked someone who chose (b) whether that was the case -- that he didn't really know his friend's "inner" personality -- before the friend committed the act, he'd have vehemently denied it.

I prefer to believe the alternative, that any given person has some variation in his behavior (chemical, situational, whatever), and occasionally you'll see different aspects of it. In the end it just makes sense to consider the totality of your friend's behavior. Anything in isolation is simply insufficient. But am I still subject to the phenomenon described in the article, even though I am aware of it?


I really want to do a song-a-day blog kind of thing. These are songs I own on CD that I've ripped. What's a good medium? (Not LJ, I don't think.) Buzz/Facebook? If I embed a flash player (is that even possible in those media?), is that more legal than just linking to a file? Help me...

Another one

Wow. I just found out that Aimee Mann provides the female vocals on Rush’s “Time Stand Still”. Given how many times I’ve heard (a) that song and (b) her albums, you’d think I’d have figured it out on my own. But I didn’t. I’m still surprised, hearing it now.

Weather Permitting

I've been to a lot of weddings in my day -- 27 by last count.*  The one we went to last weekend might have had the most unique atmosphere, literally. It was on Hang Glider Site #3 at Mount Tam, and the conditions were pretty much whiteout fog. I think we were standing on an outcropping overlooking the sea, but who knows. Anyway, it was super awesome. Congrats to Dave and Natalie!

A while back, I discussed The Meaning of Liff, in which real place names are associated with "things that there aren't any words for yet", and suggested some additional examples. Here are some more:

LUCKNOW (n.)
The experience, marked by a quiet elation, of reading a New Yorker article about a random topic and realizing that it is going to be ridiculously interesting.

NOME (n.)
The combination of irritation and anger that results when one fills a bowl with cereal only to realize that one is out of milk.

HELL'S KITCHEN (n.)
Where such an event takes place.

Okay, that last one's a cheap shot, but I couldn't resist.

-----
* Unfortunately, we're at the point in our lives where we sadly can't attend all of them anymore. Which reminds me: you know you've grown up when (a) the thing you lack is no longer opportunity or money, but time; and (b) you have no desire to be famous.

Is Catfish real?

Casey Affleck has already admitted that his bizarre Joaquin Phoenix "documentary" I'm Still Here was staged, essentially a self-destructive, two-year-long performance piece.

What about Catfish?

The filmmakers insist that it is. Discerning viewers beg to differ.

Is it life imitating art, or is it merely art imitating art?

I haven't seen Catfish, and don't intend to. But hey, it's the internet, so I believe that makes me fully qualified to offer an opinion: the footage is, for the most part, real, but the filmmakers knew what was going on early on and manipulated the events to their liking.

Does it matter whether it's real? Yes, it does. All documentaries expose the viewpoints of their creators to some extent, but my guess is that this one crosses far enough over the line of manipulation to make it worthless.

Yosemite 2010

A few weekends ago my college roommate James came out to visit. We went to Yosemite for four days. It was pretty sweet. Here are some pictures; I picked ones that I liked, so they're not necessarily comprehensive, and some might look very similar to each other.

On Saturday, we did a 14 mile hike from the valley floor up to Glacier Point, then down Panoramic Trail to Nevada and Vernal Falls, and back down to the valley. The first 5 miles were pretty strenuous, with a 3200' altitude gain, but after that it was very pleasant and quite beautiful. Probably the most scenic hike in Yosemite. The domes are immense. It's hard to appreciate their size until you happen to notice a tiny-looking tree or three on the top to put things in perspective.

On Sunday, we checked out a number of spots on Tioga Rd: Olmsted Point, Tenaya Lake, and Tuolumne Meadows.

Because we didn't get permits in advance, we waited til Monday for our most challenging hike, up to Half Dome. It was about 17 miles roundtrip, with a 4900' altitude gain. We made great time, about 3.5 hours, up to the base of Half Dome (the top of the sub dome). But the cables up Half Dome were crowded, and it took us an hour to get up to the top. By the way, I last did Half Dome 8 years ago, and in the intervening time I had forgotten how intense the cables are. Even now it doesn't seem that bad... but when you're actually on the cables, leveraging yourself up and down steep, steep granite rendered slick by thousands of human feet, your adrenaline is pumping.

Third photo here courtesy of James. On the last day, we checked out Mariposa Grove, home of some huge sequoias. That last photo is of some moss (actually a staghorn lichen; thanks Chris!) on a sequoia.

Yosemite is a pretty dangerous place, especially in bad weather. Luckily for us, the weather was perfect. If you want to read about some amazing search-and-rescues, though, check out the Friends of YOSAR rescue page. Read some harrowing stories here, here, here, and here. Oh, and there's a good photo showing the steepness of the cable ascent (this shot is taken pointing almost straight down; note, if you can, how small the people and supplies at the bottom are, and imagine climbing it without a carabiner or anything else to tie you on).

Why has laptop technology stagnated?

I used to use a laptop every day. In my mind, the most important aspects of a laptop are: screen size and resolution; keyboard quality; thickness and weight; and battery life.

If you asked someone today to choose a top-of-the-line laptop, he’d likely pick a MacBook Pro, the 15” model of which has the following key features:

Surely an impressive machine. However, in 2003 I got an IBM Thinkpad T40, which has the following corresponding specs:

  • 14.1” screen with 1400x1050 resolution
  • Amazing keyboard
  • 1.0” thick
  • 4.9 lbs
  • 4:40 real-world battery life

It’s remarkably similar, and actually better on several counts. Sure,

  • The processor is slower, but it was state-of-the-art at the time, and was probably more power hungry.
  • The screen is smaller, but it’s higher resolution. Also, the 13.3” MacBook Pro weighs 4.5 lbs, so 14.1”/4.9 is still very competitive.
  • “Macs have better build quality, blah blah”. Does not appear to be the case, at least for this example. The T40 in question was used every day as my primary computer for years. It’s been dropped, while open and running, at least 5 times, with no ill effects. In fact, it is still running, seven years on, as a streaming Netflix server for our TV. It won’t die.

What gives? I know battery technology hasn’t improved much. But what about everything else? Shouldn’t new laptops be half the weight, or have fold-up “retina display” screens, or something?

Okay, here’s the most advanced laptop I could find with an optical drive… the Lenovo ThinkPad T410s.

  • 14” screen with 1440x900 resolution
  • Apparently great keyboard
  • 0.83” thick
  • 3.9 lbs
  • 4:10 real-world battery life

I guess that’s decent. But still. 7 years ago, I listened to music on a portable abacus, and now I have it piped directly into the aural processing unit of my brain. Why can’t laptops catch up?

Edit: In responding to a comment, I came across a better laptop with an optical drive, the ThinkPad X301:

  • 13.3” screen with 1440x900 resolution
  • Apparently great keyboard
  • 0.7” thick
  • 2.9-3.3 lbs
  • 3:40 real-world battery life

That's not bad. Too bad it starts at like $2,500.

A layperson's guide to P ?= NP

You might have heard whispers about a computer scientist posting a game-changing paper on the internet a few weeks ago. Vinay Deolalikar's attempt at a P != NP proof might net him a million dollars.

More likely, you've never heard about P and NP and have no idea what I'm talking about. If you're curious at all about one of the most fundamental problems in computer science, and why proving a simple statement is worth $1M to anybody, or just want to understand a bit about how CS people think about computers, here's a layperson's guide to P ?= NP.

How hard is a problem?
Let's say you have to look up a name in the phone book. You might start at the beginning of the book and thumb through the letters until you find the first letter of the name you're looking for, and repeat for the second letter amongst the names that start with the first, and so on.

"Let's see... Smith. [thumbs through to S] A, B, C, …. Q, R, S... okay, S. [now thumbs through S to Sm] Sa, Sb, … Sl, Sm... great. Sma... "

Or you might flip the book open to the middle and see if the name you’re looking for comes before or after the names on the open page, and focus on the first or second half of the book, respectively. You'd repeat this halving procedure, getting down to smaller and smaller sections of the phone book, until you’ve found the name you're looking for.

"Let's see... Smith... [opens book to Nelson] that's after Nelson... [open halfway between Nelson and the end, gets Talbot] before Talbot, so between Nelson and Talbot... [halfway between Nelson and Talbot: Quinn] after Quinn..."

Which way is easier? You may have an intuitive idea. Imagine for a minute that your job is to look up names in a phone book all day, as fast as possible. Now you care a lot more about which of these two methods, or algorithms, you should use. You'd probably want something better than a rough guess; you'd want a quantified understanding of difficulty.

There is a field of computer science called complexity theory that deals with classifying problems by difficulty. Classification by difficulty is very useful. For instance,

  • If you have two proposed algorithms for solving the same problem (as with the phone book lookup above), you can determine which is more efficient, and thus which one to use.

  • If you have a fixed amount of time to prime your thrusters before the space shuttle launches, or else it’ll blow up, you may want to prove that the priming algorithm will take no more than that fixed amount of time to run.

  • If you are searching for an efficient solution to a difficult problem, you may be able to expose the problem as something that is simply too hard to solve by traditional means – one that could literally take longer than the age of the universe to solve, depending on the parameters. Knowing this could save you a lot of time fruitlessly looking for a solution.

What does difficulty mean when solving a problem? If you’re a human, you might think it’s a measure of hard work, intuition, and insight. Of these three, computers are really only good at hard work, so computer scientists measure difficulty in two primary ways, time and space.

Say you’re trying to solve a problem, and you're using a whiteboard for your step-by-step scratch work. Time represents the number of instances you write or erase something on the whiteboard as you work through your solution, and space represents the size of the whiteboard you’d need to fit in all of your scratch work (assuming that you could erase work you don’t need anymore along the way). Analogously, for a computer, time is the number of instructions it executes, and space is the amount of memory (RAM) it needs.

For this essay, we’ll only deal with time.1 How do we measure how much time a given algorithm takes? There are a couple of different ways to measure time: what's the fastest the algorithm could run? The average time it takes over different inputs? For simplicity, we'll focus on the worst-case time: the longest the algorithm could take to run. That way, no matter what parameters we are given, our thrusters will be primed before the space shuttle launches.

And then there's the issue of size. Looking up a name in the Nome, Alaska phone book takes less time than looking one up in the New York City phone book, right? The time it takes to solve a problem with a given algorithm depends on the size of the input to the problem (what you are asked to do – look a name up in Nome, or NYC?). If the input is a list of words, the size is the number of words. If the input is a single number, the size is the number of digits it has. And so on.

Let’s take a simple example. Say you want to sort a list of n numbers from smallest to biggest:

4  13  21  5  6  9  18  12  7  0

(Here, n is 10, but imagine that it could be anything.) How might you sort the numbers? Here’s one way: scan through the list and find the smallest number. In this case, that’s 0. Take that number out of the list and put it aside. Now, scan through the remaining n-1 numbers to find the next smallest number – in this case, 4. Put that next to the 0. Keep repeating this process, removing one number at a time, until you have no numbers left in the list, and a sorted pile of numbers to the side. Perfect.

How much time did that take? Well, you looked through the n numbers and found the smallest one, and pulled it aside; you repeated this process for each of the n numbers in the list. So you took n*n = n^2 steps to sort the list2. We say that sorting numbers with this algorithm, called selection sort, takes n^2 time. If there are 100 items in the list, it’ll take around 100*100 = 10,000 time steps to sort them. That’s pretty neat! You give me an input – the list of numbers to sort – and I can give you a limit on how long it’ll take me to sort them, without having to do the actual sorting at all.
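
Here is that procedure as a short Python sketch, using the list of numbers from the example above:

    def selection_sort(numbers):
        """Repeatedly pull the smallest remaining number onto a sorted pile."""
        remaining = list(numbers)        # work on a copy of the input list
        sorted_pile = []
        while remaining:
            smallest = min(remaining)    # scan the remaining numbers for the smallest
            remaining.remove(smallest)   # take it out of the list
            sorted_pile.append(smallest) # put it aside, in order
        return sorted_pile

    print(selection_sort([4, 13, 21, 5, 6, 9, 18, 12, 7, 0]))
    # [0, 4, 5, 6, 7, 9, 12, 13, 18, 21]

Each pass through the while loop scans up to n numbers, and there are n passes, which is where the n^2 comes from.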

Here’s another sorting algorithm. Given a list of n numbers, we permute them (jumble them up) in a random order and see if this new order is sorted or not. If it is, great! We’re done. If not, we jumble them up again in a new random order and repeat. This algorithm will eventually find the right sorted order. But how long will it take? Well, to permute the numbers, we select one of the n to be the first number in the new list. Then of the remaining n-1 numbers, we select another, and so on. The total number of permutations is n*(n-1)*(n-2)*…*1, otherwise written as n! (factorial). So in the worst case, when we come across the right sorted order last, this permutation sort algorithm takes n! time to sort. If there are 100 items in the list, that’s 100!, or 100*99*98*…, which is big – something like 1 followed by 158 zeros.
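
Here is a sketch of permutation sort. It walks through the orderings systematically rather than jumbling them at random, but the worst case is the same n! either way. Don't run it on a long list:

    from itertools import permutations

    def is_sorted(numbers):
        return all(a <= b for a, b in zip(numbers, numbers[1:]))

    def permutation_sort(numbers):
        """Try orderings one by one until one happens to be sorted."""
        for ordering in permutations(numbers):   # there are n! of these
            if is_sorted(ordering):
                return list(ordering)

    print(permutation_sort([4, 13, 21, 5, 0]))   # fine for 5 numbers; hopeless for 100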

So you just saw two algorithms for sorting numbers, with vastly different execution times. How difficult, then, is the problem of sorting? We say that a problem is as difficult as the most efficient algorithm we know of to solve it. So in some sense, it doesn’t matter that permutation sort is really slow; selection sort does a better job, so sorting is at worst an n^2 problem. This is a crucial point: if the only algorithm you know of is permutation sort, you think sorting is a really hard problem. When you discover selection sort, suddenly sorting looks easy and life is good. Our classification of problems, then, is often limited by our own creativity and knowledge. There are many problems out there that might be easier – that might be solved with more efficient algorithms – if only we could come up with these better algorithms to solve them.3

In fact, we know of even faster algorithms that can sort in n*log(n) time, which is faster than n^2, so sorting really takes no more than n*log(n) time in the general case.
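
Merge sort is one such faster algorithm: split the list in half, sort each half, and merge the two sorted halves back together. A sketch:

    def merge_sort(numbers):
        """Sort by splitting in half, sorting each half, and merging the results."""
        if len(numbers) <= 1:
            return list(numbers)
        mid = len(numbers) // 2
        left, right = merge_sort(numbers[:mid]), merge_sort(numbers[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:           # repeatedly take the smaller front item
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]  # tack on whatever is left over

    print(merge_sort([4, 13, 21, 5, 6, 9, 18, 12, 7, 0]))

The repeated halving contributes the log(n) factor, and each level of merging touches all n numbers, hence n*log(n).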

What are P and NP?
So far we’ve shown how problems can be classified by the amount of time it takes to solve them with the most efficient algorithms available. From here we can provide a very simple explanation for P:

P is the set of problems that can be solved in polynomial time.

A polynomial looks like n^k: n, n^2, n^100, etc. So sorting, which takes no more than n^2 time, is in P. So are searching, computing greatest common divisors, zip data compression, multiplying, primality checking4, and a host of other common problems. In fact, with few exceptions, the world of computers is built on algorithms that run in polynomial time. In addition to simple stuff like sorting, generic algorithmic sledgehammers like dynamic and linear programming, which solve problems from spell checking to resource allocation, are in P.

Other problems are exponential: their times look like 2^n, 100^n, and so on.

We consider polynomial problems to be easy: if n is 100, then n^2 is 10,000, which isn’t too bad. Exponential problems, on the other hand, are very hard: if n is 100, then 2^n is 1 followed by 30 zeros. So if you have a problem, you’d better hope it’s in P and not exponential, or something even worse.5 In a broad sense, problems in P are considered practical, and problems harder than P are cause for concern.

NP is a little more nuanced.

NP is the set of problems that can be checked in polynomial time.

By checking I mean: if I pose a problem and you provide a solution, I can check if your solution is correct relatively quickly, even if solving the problem took a lot of time.

For instance, if I give you a really complicated maze, and you give it back to me with a line drawn from start to finish, I don’t have to solve the whole maze myself to verify your answer; I can just check that your line goes from start to finish without crossing over any other lines, which is a lot easier.

Or: if I ask for the prime factors of a large number, and you give me a list of supposed factors back, it’s easy for me to check if you’re right. If the numbers you give me are prime (which I can check in P), and when I multiply them (which I can also do in P) I get my original number, you’re right. Otherwise you’re wrong.
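
As a sketch of that check in Python (leaning on sympy's primality test, which is assumed to be installed, since a proper polynomial-time primality test is more than a few lines):

    from sympy import isprime

    def check_factorization(n, claimed_factors):
        """Verify a claimed prime factorization of n without factoring n ourselves."""
        if any(not isprime(f) for f in claimed_factors):
            return False             # every claimed factor must itself be prime
        product = 1
        for f in claimed_factors:
            product *= f
        return product == n          # and the factors must multiply back to n

    print(check_factorization(105, [3, 5, 7]))   # True
    print(check_factorization(105, [3, 35]))     # False: 35 isn't prime

Checking takes a handful of multiplications and primality tests; producing the factors in the first place is, as far as anyone knows, much harder.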

In each of these cases, I don’t have to do the work of solving the problem myself.

There are many other interesting problems in NP, including:

  • Given a bunch of integers, can you pick some of them that together add up to 0?
  • Given a map of roads and cities, what is the shortest possible trip that visits all the cities exactly once?

Note that NP specifies nothing about how hard it is to actually solve one of its problems; some problems might be easy to solve and others hard. We really don't know. In fact, it is this unknown -- how hard it is to solve problems that are easily checkable -- that is at the heart of P ?= NP.

What does "P ?= NP" mean?
Read "P ?= NP" as "Does P equal NP?" Are the problems in P the same as the problems in NP? In other words, is every problem that is checkable in polynomial time also solvable in polynomial time?6

Here's another way to phrase the question. For certain problems in NP, the best known algorithms involve exhaustively enumerating every possible combination of inputs – an exponential task – to find a solution (much like permutation sort, above), even though checking that solution is ultimately easy to do. P ?= NP asks "Are there some problems for which brute-force (exhaustive) searching for a solution is the best possible algorithm, even if the solution is easy to check?" In other words, are some problems so hard that brute-force is the best we can do?
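
The first problem in the list above (picking integers that add up to 0) shows the asymmetry nicely. Here is a sketch: the check is a one-liner, while the obvious solver tries every possible selection, and the number of selections doubles with every integer you add.

    from itertools import combinations

    def check(selection):
        """Polynomial-time check: does this (non-empty) selection sum to 0?"""
        return len(selection) > 0 and sum(selection) == 0

    def brute_force_solve(numbers):
        """Exponential-time search: try every possible selection of the numbers."""
        for size in range(1, len(numbers) + 1):
            for selection in combinations(numbers, size):   # 2^n selections in all
                if check(selection):
                    return selection
        return None

    print(brute_force_solve([3, -7, 12, 5, 2, -9, 4]))   # finds (3, -7, 4)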

You can begin to see why this is a hard question. You have to show that either:

  1. All problems that are quickly checkable are quickly solvable (P = NP). And your proof better hold for problems that we haven't even thought up yet.

  2. A single problem that's quickly checkable is not quickly solvable (P != NP). Sure, you can easily show that the best known algorithm for this problem is not in P, but you also have to show that no algorithm for solving the problem in P can possibly exist, whether we know of it yet or not.

We know of many problems in NP that may or may not be in P. (I'll describe one in the next section.) The solutions to these problems can be checked in polynomial time, but arriving at the solutions using the best algorithms we've got takes exponential time. We simply do not know whether polynomial time algorithms exist to solve these problems. No one has invented (or discovered) one yet, and we don't know whether that's because none exist, or because we just haven't been smart enough to find one.

If it were ever shown that P = NP, the world would be a different place. Many problems that we consider totally out of reach could be solved. Sure, a few that we hope are hard (such as ones that would allow a hacker to decode your credit card number when you buy something on Amazon) would become simple, which would be bad. But on the whole, it would be an amazing broadening of powers for computers everywhere.

Life couldn't be that easy, could it? Intuitively, most theorists believe that P != NP: that some problems are hard to solve, even if they are easy to check. But no one has been able to show it yet. For the record, Deolalikar claims to show that in fact P != NP. The current consensus is that his proof is incorrect, even though the underlying claim is most likely true.

And that's P ?= NP.

(Feel free to stop reading here.)

NP-completeness
Of course, there's more to the story than that. Thousands of papers have been published about P and NP. One of the more intriguing discoveries is the notion of NP-completeness. A problem is NP-complete if it is in NP (checkable in P) and "as hard as" any other problem in NP.

What does that mean? It's pretty astounding. If I have an NP-complete problem, and you give me any other problem in NP, I can convert its inputs7 into a formulation of my problem and then solve that problem.

So if you give me an NP problem about scheduling traffic, and I have an NP-complete problem that finds cliques of friends amongst all the relationships on Facebook, I can solve your problem by somehow converting your arrangement of cars into a social graph. These conversions can be technical and complicated, but the bottom line is that I can solve your problem with my problem, and so mine is as hard to solve as yours. (If it were easier, then I shouldn't be able to solve your problem, right?) Furthermore, this property holds for any problem you can find in NP; my problem is as hard as anything else out there.

There are many NP-complete problems, each of which can be used to solve the others; you could say they're all equally hard8. They allow for some interesting implications:

  1. If you find a single NP-complete problem that is in P, then all NP problems are in P, since they're all no harder than that single problem.

  2. Conversely, if you find a single NP problem that's not in P, then all NP-complete problems are not in P, since they're all at least as hard as that single problem.

Despite their significance, NP-complete problems are not bizarre theoretical constructs. I'll describe one, just to show you how simple they can be.

Imagine you're going on a camping trip and have a knapsack that can hold a given amount of weight, say p pounds. You have a whole bunch of gear that you'd like to take along: a sleeping bag (4 lbs), a stove (1 lb), a tent (6 lbs), and so on. What items do you put in your knapsack so that you get the closest to p pounds of stuff in there without going over?

Amazingly, the best known way to get the correct answer is to enumerate combinations of items until you've found the solution9, an exponential task. No one, to date, can do better. If you can – you win a million bucks. More amazingly, this knapsack problem is NP-complete, simple though it is. You can transform any other problem in NP (factoring, summing to 0, route finding, etc.) to a knapsack formulation and solve it.
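
Here is a sketch of that enumeration, with a made-up gear list:

    from itertools import combinations

    def best_packing(items, capacity):
        """Try every combination of items; keep the heaviest one that still fits."""
        best, best_weight = (), 0
        for size in range(1, len(items) + 1):
            for combo in combinations(items, size):   # 2^n combinations in all
                weight = sum(w for _, w in combo)
                if best_weight < weight <= capacity:
                    best, best_weight = combo, weight
        return best, best_weight

    gear = [("sleeping bag", 4), ("stove", 1), ("tent", 6), ("water", 8), ("food", 5)]
    print(best_packing(gear, capacity=11))   # finds an 11-lb packing, the best possible here

Five items means 31 combinations to try; fifty items means over a quadrillion. That's the exponential wall.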

On the other side of things, people have tried to approximate the knapsack problem with polynomial time algorithms, since solving it outright takes so long. If you can't get the correct answer efficiently, might as well see how close you can get, right? So rather than enumerate all combinations of items, these algorithms heuristically pick and choose certain combinations, sacrificing total correctness for speed. And they've succeeded: the best polynomial approximations can get arbitrarily close to the right answer, though not all the way there. So the problem is NP-complete – super hard – but also very easy to approximate. Crazy.10
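
The approximation schemes that get arbitrarily close are too involved to sketch here, but even a crude greedy heuristic (my own illustration, not the scheme described above) shows the flavor of trading correctness for speed:

    def greedy_packing(items, capacity):
        """Heuristic: consider items heaviest-first and take each one that still fits."""
        packed, total = [], 0
        for name, weight in sorted(items, key=lambda item: item[1], reverse=True):
            if total + weight <= capacity:
                packed.append((name, weight))
                total += weight
        return packed, total

    gear = [("sleeping bag", 4), ("stove", 1), ("tent", 6), ("water", 8), ("food", 5)]
    print(greedy_packing(gear, capacity=11))   # packs 9 lbs; the brute-force answer was 11

One quick pass through the items instead of 2^n combinations, at the cost of sometimes missing the best answer.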

In addition to knapsack, the two problems I described above as in NP, integer summing and shortest trip, are also NP-complete. But other hard problems, such as the prime factorization one, are not known to be NP-complete. It's a mysterious world out there.

If you find any of this interesting, Wikipedia is a good place to start. Oh yeah, bonus points if you can figure out which phone book lookup method is better.


1 Note that since it takes one unit of time to write something down in one unit of space, space requirements can never exceed time requirements. Back to text

2 Well, not quite. The first time you scanned the list for the smallest number, you looked through n numbers. But the second time, you’d already taken a number out, so there were only n-1 numbers. And so on. The real number of steps is n + (n-1) + (n-2) + … + 1 = (n^2+n)/2. In such cases, we look for the "biggest" term – the one that will yield the largest result as n gets larger – and discard the others as relatively insignificant. In this case, n^2 is the biggest term, so we forget about the rest. Back to text

3 In certain cases it’s possible to prove a lower bound on problem difficulty. That is, you could prove that a problem can’t take less than a certain amount of time, say t. If you know of an algorithm that takes t time, then you can rest easy, because you know that no other faster algorithm can exist. Here’s one example: finding the smallest number in a list of n numbers must take at least n time steps: one each to examine every number in the list. If you ever try to take fewer than n steps, you can’t be examining all the numbers, and I’ll foil you by giving you an input list in which the smallest number is one of the ones you didn’t look at. Boom – you’ve overlooked the smallest number in the list and your algorithm is incorrect. Many problems are complex enough, however, that proving a useful lower bound is very difficult. Back to text

4 Interestingly, primality checking was only shown to be in P in 2002! Back to text

5 Wait a minute, you say. If my problem takes n^100 time, it’s in P, but it’ll still take really long! You’re right: in practice, the actual constants of the polynomial (in this case, the number 100) make a difference. An n^100 problem is a lot harder than an n^2 problem. And in fact, there are certain problems (such as linear programming) for which an exponential algorithm can find a solution faster than a polynomial algorithm in many common cases. But as n gets really large, even the largest polynomial is smaller than the smallest exponential: n^1000000 is smaller than 2^n (or even 1.0001^n) for very large values of n. And geeky computer scientists like to think very big. Back to text

6 We know that the reverse is true: problems solvable in polynomial time are checkable in polynomial time; just solve them again and see if your result agrees with the proposed one. In other words, we know that all the problems in P are in NP. But we don't know whether all the problems in NP are in P. Back to text

7 For the record, this conversion process must itself take no more than polynomial time. Back to text

8 You can actually differentiate among NP-complete problems by how well you can approximate them with faster algorithms, but that's a topic for another time. (If you are surprised and curious about the fact that problems that are equally hard to solve can be approximated to different degrees, then start reading up, and welcome to the world of theoretical computer science!) Back to text

9 Without bogging you down with details, I'll acknowledge that there are other known ways to solve this problem, but they're all still exponential in the size of the input. Back to text

10 Just for kicks, one more twist: if you add just one more constraint to the knapsack, for instance giving it a volume in addition to a weight limit, you can no longer approximate it to an arbitrary degree of accuracy. Back to text

O RILY?

Rick Reilly is one of the most celebrated sportswriters of my lifetime. He's the guy who writes the back-page column for whatever magazine he works for. He's been voted National Sportswriter of the Year eleven times.

More critical acclaim, from his own bio page:

He is the winner of the 2009 Damon Runyon Award for Outstanding Contributions to Journalism, an honor previously won by Jimmy Breslin, Tim Russert, Bob Costas, Mike Royko, George Will, Ted Turner and Tom Brokaw, among others. Three times his columns have been read into the record in the U.S. Congress.

He is “the Tiger Woods of sports columnists,” says Bloomberg News.

As for his sports writing, the New York Daily News called him “one of the funniest humans on the planet.” Publishers Weekly called him, “an indescribable amalgam of Dave Barry, Jim Murray, and Lewis Grizzard, with the timing of Jay Leno and the wit of Johnny Carson.”

He is, in a word, impressive.

He's also one of the worst, most unpleasant writers I've ever read. In content alone, he's mean-spirited, self-satisfied, sanctimonious, painfully unfunny (oh yeah, and still mean-spirited), and flat out lame. When you get past his pathetic attempts to adopt the Sports Guy's style (with more faux-authority and far less authenticity), you find out that he's just a giant smarmy douchebag. Oh yeah, and he's also technically terrible. Purple prose, painful metaphors, hyperbole at every turn.

It's kind of astounding to read his columns and then his C.V. and try to reconcile the two. Am I the only one who is flabbergasted by his accolades? Or what am I missing?

(For the record, I like Bill Simmons.)

Edit: Apparently I'm not alone. And there's even this.