Review of “Inadequate Equilibria,” by Eliezer Yudkowsky


Inadequate Equilibria: Where and How Civilizations Get Stuck is a little gem of a book: wise, funny, and best of all useful (and just made available for free on the web).  Eliezer Yudkowsky and I haven’t always agreed about everything, but on the subject of bureaucracies and how they fail, his insights are gold.  This book is one of the finest things he’s written.  It helped me reflect on my own choices in life, and it will help you reflect on yours.

The book is a 120-page meditation on a question that’s obsessed me as much as it’s obsessed Yudkowsky.  Namely: when, if ever, is it rationally justifiable to act as if you know better than our civilization’s “leading experts”?  And if you go that route, then how do you answer the voices—not least, the voices in your own head—that call you arrogant, hubristic, even a potential crackpot?

Yudkowsky gives a nuanced answer.  To summarize, he argues that contrarianism usually won’t work if your goal is to outcompete many other actors in a free market for a scarce resource that they all want too, like money or status or fame.  In those situations, you really should ask yourself why, if your idea is so wonderful, it’s not already being implemented.  On the other hand, contrarianism can make sense when the “authoritative institutions” of a given field have screwed-up incentives that prevent them from adopting sensible policies—when even many of the actual experts might know that you’re right, but something prevents them from acting on their knowledge.  So for example, if a random blogger offers a detailed argument for why the Bank of Japan is pursuing an insane policy, it’s a priori rather plausible that the random blogger is right and the Bank of Japan is wrong—even though the same would not be true if the random blogger said that IBM stock was mispriced or that P≠NP is easy to prove.

The high point of the book is a 50-page dialogue between two humans and an extraterrestrial visitor.  The extraterrestrial is confused about a single point: why are thousands of babies in the United States dying every year, or suffering permanent brain damage, because (this seems actually to be true…) the FDA won’t approve an intravenous baby food with the right mix of fats in it?  Just to answer that one question, the humans end up having to take the alien on a horror tour through what’s broken all across the modern world, from politicians to voters to journalists to granting agencies, explaining Nash equilibrium after Nash equilibrium that leaves everybody worse off but that no one can unilaterally break out of.
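To make that last idea concrete, here’s a minimal sketch of my own (not from the book): a toy two-player payoff table, in Python, in which mutual defection is a Nash equilibrium that leaves both players worse off than mutual cooperation, yet neither player can improve things by deviating alone.  The action names and payoff numbers are illustrative assumptions.

```python
# A minimal illustration (mine, not the book's) of a Nash equilibrium that
# leaves everybody worse off.  All payoff numbers are illustrative assumptions.

# payoffs[(row_action, col_action)] = (row player's payoff, column player's payoff)
payoffs = {
    ("Cooperate", "Cooperate"): (3, 3),
    ("Cooperate", "Defect"):    (0, 4),
    ("Defect",    "Cooperate"): (4, 0),
    ("Defect",    "Defect"):    (1, 1),
}
actions = ["Cooperate", "Defect"]

def is_nash(row, col):
    """True iff neither player can gain by unilaterally switching actions."""
    row_payoff, col_payoff = payoffs[(row, col)]
    no_better_row = all(payoffs[(r, col)][0] <= row_payoff for r in actions)
    no_better_col = all(payoffs[(row, c)][1] <= col_payoff for c in actions)
    return no_better_row and no_better_col

for r in actions:
    for c in actions:
        tag = "  <-- Nash equilibrium" if is_nash(r, c) else ""
        print(f"({r}, {c}): payoffs {payoffs[(r, c)]}{tag}")

# Only (Defect, Defect) gets marked, even though (Cooperate, Cooperate)
# pays both players more: neither player can escape by deviating alone.
```

The point of the toy example is the same one the dialogue keeps hammering: each choice is locally optimal given everyone else’s choices, so the bad outcome is stable even though everyone can see it’s bad.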

I do have two criticisms of the book, both relatively minor compared to what I loved about it.

First, Yudkowsky is brilliant in explaining how institutions can produce terrible outcomes even when all the individuals in them are smart and well-intentioned—but he doesn’t address the question of whether we even need to invoke those mechanisms for more than a small minority of cases.  In my own experience struggling against bureaucracies that made life hellish for no reason, I’d say that about 2/3 of the time my quest for answers really did terminate at an identifiable “empty skull”: i.e., a single individual who could unilaterally solve the problem at no cost to anyone, but chose not to.  It simply wasn’t the case, I don’t think, that I would’ve been equally obstinate in the bureaucrat’s place, or that any of my friends or colleagues would’ve been.  I had to accept that I was now face-to-face with an alien sub-intelligence—i.e., with a mind that fetishized rules made up by not-very-thoughtful humans over demonstrable realities of the external world.

Second, I think the quality of the book noticeably declines in the last third.  Here Yudkowsky recounts conversations in which he tried to give people advice, but he redacts all the object-level details of the conversations—so the reader is left thinking that this advice would be good for some possible values of the missing details, and terrible for other possible values!  It’s then hard to take away much from these chapters.

In more detail, Yudkowsky writes:

“If you want to use experiment to show that a certain theory or methodology fails, you need to give advocates of the theory/methodology a chance to say beforehand what they think they predict, so the prediction is on the record and neither side can move the goalposts.”

I only partly agree with this statement (which might be my first substantive disagreement in the book…).

Yes, the advocates should be given a chance to say what they think the theory predicts, but then their answer need not be taken as dispositive.  For if the advocates are taken to have ultimate say over what their theory predicts, then they have almost unlimited room to twist themselves in pretzels to explain why, yes, we all know this particular experiment will probably yield such-and-such result, but contrary to appearances it won’t affect the theory at all.  For science to work, theories need to have a certain autonomy from their creators and advocates—to be “rigid,” as David Deutsch puts it—so that anyone can see what they predict, and the advocates don’t need to be continually consulted about it.  Of course this rigidity needs to be balanced, in practice, against the fact that the advocates probably understand how to use the theory better than anyone else, but it remains a real consideration.

In one conversation, Yudkowsky presents himself as telling startup founders not to bother putting their prototype in front of users, until they have a testable hypothesis that can be confirmed or ruled out by the users’ reactions.  I confess to more sympathy here with the startup founders than with Yudkowsky.  It does seem like an excellent idea to get a product in front of users as early as possible, and to observe their reactions to it: crucially, not just a binary answer (do they like the product or not), confirming or refuting a prediction, but more importantly, reactions that you hadn’t even thought to ask about.  (E.g., that the cool features of your website never even enter into the assessment of it, because people can’t figure out how to create an account, or some such.)

More broadly, I’d stress the value of the exploratory phase in science—the phase where you just play around with your system and see what happens, without necessarily knowing yet what hypothesis you want to test.  Indeed, this phase is often what leads to formulating a testable hypothesis.

But let me step back from these quibbles, to address something more interesting: what can I, personally, take from Inadequate Equilibria?  Is academic theoretical computer science broken/inadequate in the same way a lot of other institutions are?  Well, it seems to me that we have some built-in advantages that keep us from being as broken as we might otherwise be.  For one thing, we’re overflowing with well-defined problems, which anyone, including a total outsider, can get credit for solving.  (Of course, the “outsider” might not retain that status for long.)  For another, we have no Institutional Review Boards and don’t need any expensive equipment, so the cost to enter the field is close to zero.  Still, we could clearly be doing better: why didn’t we invent Bitcoin?  Why didn’t we invent PageRank?  Do we value mathematical pyrotechnics too highly compared to simple but revolutionary insights?  It’s worth noting that a whole conference, Innovations in Theoretical Computer Science, was explicitly founded to try to address that problem—but while ITCS is a lovely conference that I’ve happily participated in, it doesn’t seem to have succeeded at changing community norms much.  Instead, ITCS itself converged to look a lot like the rest of the field.

Now for a still more pointed question: am I, personally, too conformist or status-conscious?  I think even “conformist” choices I’ve made, like staying in academia, can be defended as the right ones for what I wanted to do with my life, just as Eliezer’s non-conformist choices (e.g., dropping out of high school) can be defended as the right ones for what he wanted to do with his.  On the other hand, my acute awareness of social status, and when I lacked any—in contrast to what Eliezer calls his “status blindness,” something that I see as a tremendous gift—did indeed make my life unnecessarily miserable in all sorts of ways.

Anyway, go read Inadequate Equilibria, then venture into the world and look for some $20 bills lying on the street.  And if you find any, come back and leave a comment on this post explaining where they are, so a conformist herd can follow you.


