Jul 17, 2013

Anirban’s Angle: Randomization, Bootstrap and Sherlock Holmes

Contributing Editor Anirban DasGupta writes:

It might well be an exercise in frivolity, but I see a common thread between Sherlock Holmes and the bootstrap. It’s randomized inference. A standard example in a statistics class is that if a coin is tossed 20 times, the 5% UMP unbiased test concludes that the coin is fair if $10 ± 3$ heads are observed, that the coin is unfair if fewer than 6 or more than 14 heads are observed, and, if exactly 6 or exactly 14 heads are observed, leaves the decision to the toss of a customized coin that produces heads about 11.6% of the time. When in a quandary, leave it to a machine. Crazy?
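The randomization probability follows directly from the Binomial$(20, 1/2)$ tail: the boundary points $X = 6$ and $X = 14$ absorb whatever level is left over after the outright rejection region is accounted for. A quick sketch (variable names are my own):

```python
from math import comb

n, alpha = 20, 0.05
pmf = [comb(n, k) / 2**n for k in range(n + 1)]  # Binomial(20, 1/2) probabilities

# Reject outright for fewer than 6 or more than 14 heads ...
outer = sum(pmf[:6]) + sum(pmf[15:])
# ... and spend the leftover level on the two boundary points 6 and 14.
gamma = (alpha - outer) / (pmf[6] + pmf[14])
print(round(gamma, 4))  # -> 0.1165, the customized coin's heads probability
```

By the symmetry of the fair-coin null, the two boundary points share the leftover level equally, so a single randomization probability suffices.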

Notwithstanding the prestige of the dainty and enduring Neyman–Pearson theory, and the fact that Wald himself considered using post-data randomization in his 1950 book, randomized tests and confidence intervals have been met with polite scorn and a cold shrug. Even the staunchest believer in optimal decisions runs away from post-data randomization (see the discussions of Basu, 1980, JASA).

There is one celebrated exception, the bootstrap, and to a lesser extent Pitman’s permutation tests. It is not my intention to knock or malign a wildly popular method. But, because bootstrap Monte Carlo is all but essential in estimating a bootstrap distribution, or its functionals, the final bootstrap inference is machine randomized. In different sittings, the machine would produce different answers for the same question and the same data. Sometimes visibly different. But it hasn’t caused the bootstrap a smidgen of a dent in its popularity (Efron and Tibshirani, 1993, CRC Press; Edgington, 1995, CRC Press).

I quote a small part of an example. Take the usual one-dimensional iid $F$ scenario, and consider the mean absolute median-deviation

$T_n = \frac{1}{n} \sum_{i=1}^n |X_i − M_n|$,

$M_n$ being the median of the data. Under two moments, $\sqrt{n}\,[T_n − E|X−\xi|]$ is asymptotically normal, $\xi$ being any median of $F$. For general absolutely continuous $F$, modern empirical process theory (Donsker classes) can be used to rigorously obtain the asymptotic variance. I take $F$ to be a Laplace distribution with parameters $\mu, \sigma$, for, in that case, we have the crisp result

$\frac{\sqrt{n}\,[T_n-\sigma]}{\sigma}$

is asymptotically $N(0, 1)$. So the traditional percentiles for the 95% CLT interval would be $±1.95996 ≈ ±1.96$. With $n = 35$, and one fixed simulated dataset, I bootstrapped 15 different times, using each of five values of $B$ three times, $B$ = 600, 750, 900, 1000, 1200 [the choice of $B$ is discussed in Hall (1986, AoS), Horowitz (1994, JoE), and Shao and Tu (1995, Springer)]. The bootstrap substitutes for 1.96 varied between 1.651 and 2.030, with an average of 1.845, and the lower percentile varied between −2.021 and −1.777, with an average of −1.883. Thirteen of the 15 times, the bootstrap interval was shorter than the CLT answer, and twice, essentially identical. I would like to be corrected, but I am not sure that in practice the bootstrap is repeated (even if with the same $B$), and the different randomizations properly recombined; I do not have the space to discuss sensible recombination here. The jackknife is secure on that front. Nonetheless, the bootstrap is a singular success story for randomized inference.
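The run-to-run variability is easy to reproduce. A minimal sketch (not the code behind the numbers above; the seed, $B$, and function names are my own choices), bootstrapping the studentized root $\sqrt{n}\,(T_n^* − T_n)/T_n$ on one fixed Laplace dataset:

```python
import numpy as np

rng = np.random.default_rng(7)

def t_stat(x):
    """Mean absolute deviation about the sample median."""
    return np.mean(np.abs(x - np.median(x)))

n, B = 35, 1000
x = rng.laplace(loc=0.0, scale=1.0, size=n)  # one fixed dataset; sigma = 1
tn = t_stat(x)

# Two independent bootstrap runs on the same data with the same B:
for run in (1, 2):
    boot = np.array([t_stat(rng.choice(x, size=n, replace=True))
                     for _ in range(B)])
    # bootstrap analogue of sqrt(n) (T_n - sigma) / sigma
    roots = np.sqrt(n) * (boot - tn) / tn
    lo, hi = np.percentile(roots, [2.5, 97.5])
    print(f"run {run}: percentiles ({lo:.3f}, {hi:.3f}) vs CLT ±1.960")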

Let me proceed to the Sherlock Holmes example, one of wide notoriety. This is the story of The Final Problem. Holmes is fleeing London to escape the ruthless revenge of his mortal enemy, the certified evil genius and “Napoleon of crime,” Professor Moriarty. I apologize to the world that the Professor was a mathematician, and one of “phenomenal faculty”; Euler must be bowing his head in shame. Holmes boards the train in London, intending to get off at the terminal station, Dover, and then to take a ship to the continent. The train has one intermediate stop, at Canterbury. As the train leaves Victoria station, Holmes sees Moriarty on the platform, and must assume that Moriarty knows he is on this train. Moriarty can surely arrange express transportation to beat him to Dover. Anticipating this, Holmes may instead get off at Canterbury. But being the wily master mathematician that he is, Moriarty will anticipate what Holmes anticipated, and may himself proceed instead to Canterbury. Now, Holmes of course is mighty astute, and so surely he anticipates that Moriarty anticipates what Holmes first anticipated, and so on. Yes, we have two great stalwarts, adversaries in a decision problem: where to alight? Philip Stark kindly pointed out that the Sicilian scene in The Princess Bride is formally equivalent to the Holmes–Moriarty game.

There is excellent literature on this fascinating example. Let me cite only Morgenstern (1935, NYU Press), Clayton (1986, discussion of Diaconis and Freedman, 1986, AoS), Eichberger (1995, GEB), Case (2000, AMM), and Koppl and Rosser (2002, SCE). The Holmes–Moriarty problem may be set up as a decision problem with a loss function. Each of Holmes’s non-randomized actions, $a_0$ = detrain at Canterbury and $a_1$ = detrain at Dover, is admissible as well as minimax. Given the infinite chain of reasonings each makes (“I believe that you believe that I believe that…”), paradoxes of self-reference arise and convergence is not attained. Randomized decisions seem to make sense here, and only those seem to make sense! Suppose Holmes’s loss is $L$ should he find himself at the same station as Moriarty, zero should he detrain at Canterbury while Moriarty merrily proceeds to Dover, and $cL$, $c<1$, should Moriarty detrain at Canterbury while Holmes continues to Dover. Then Holmes’s optimum randomized strategy is $pa_0 + (1−p)a_1$ and Moriarty’s is $(1−p)a_0 + pa_1$, where $p=\frac{1-c}{2-c}$, and in this case the game is a stalemate in the sense of von Neumann. And a stalemate is reasonable in a battle of two equal giants.
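The equalizing property of $p=\frac{1-c}{2-c}$ is easy to check numerically: against this mixture, Holmes’s expected loss is the same, $L/(2-c)$, whichever pure action Moriarty takes, so Moriarty gains nothing by further outguessing. A small sketch, with $L = 1$ and an illustrative $c = 0.5$ (variable names are mine):

```python
# Holmes's loss: L if both alight at the same station, 0 if Holmes stops at
# Canterbury while Moriarty rides on to Dover, and c*L (c < 1) if Moriarty
# stops at Canterbury while Holmes continues to Dover.
L, c = 1.0, 0.5
p = (1 - c) / (2 - c)  # Holmes detrains at Canterbury with probability p

loss_vs_canterbury = p * L + (1 - p) * c * L   # Moriarty picks Canterbury
loss_vs_dover = p * 0 + (1 - p) * L            # Moriarty picks Dover

print(p, loss_vs_canterbury, loss_vs_dover)    # both losses equal L / (2 - c)
```

Varying $c$ shows the sensible comparative static: as the penalty $cL$ for being outmaneuvered at Canterbury rises toward $L$, Holmes should head for Dover with ever higher probability.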

The Sherlock Holmes stories are such monuments of first-rate literature, unequalled and transcendent, that I know connoisseurs who do not leave home for long without Holmes in their suitcase. As in a laughing baby, a rose, a Mozart symphony, sunset over the ocean, raindrops on the window, or a beautiful theorem, in Holmes a man can find his solace. Sir Arthur Conan Doyle chose his favorite 19 Holmes stories: The Final Problem is on that list; The Dancing Men is categorically statistical. The British TV Sherlock Holmes series, while romancing all that is bizarre, is also marvelous entertainment.
