A few years have passed, so we now present an update to that list:

**Nobel Prize in Economic Sciences**

| Name | Year |
|---|---|
| Ragnar A. Frisch | 1969 |
| Kenneth J. Arrow | 1972 |
| Tjalling C. Koopmans | 1975 |
| Milton Friedman | 1976 |
| Trygve Haavelmo | 1989 |

**Abel Prize**

| Name | Year |
|---|---|
| S. R. Srinivasa Varadhan | 2007 |

**International Prize in Statistics**

| Name | Year |
|---|---|
| David R. Cox | 2017 |
| Bradley Efron | 2019 |

**Knighted**

| Name | Year |
|---|---|
| Roy G. D. Allen | 1966 |
| David R. Cox | 1985 |
| Maurice G. Kendall | 1974 |
| John F. C. Kingman | 1985 |
| Bernard W. Silverman | 2018 |
| Adrian F. M. Smith | 2011 |

**Medal for Merit**

| Name | Year |
|---|---|
| John von Neumann | 1947 |

**Medal of Freedom**

| Name | Year |
|---|---|
| John von Neumann | 1956 |

**Presidential Medal of Freedom**

| Name | Year |
|---|---|
| Milton Friedman | 1988 |

**National Medal of Science**

| Name | Year |
|---|---|
| Sewall G. Wright | 1966 |
| Jerzy Neyman | 1968 |
| William Feller | 1969 |
| John W. Tukey | 1973 |
| George B. Dantzig | 1975 |
| Joseph L. Doob | 1979 |
| Milton Friedman | 1988 |
| Samuel Karlin | 1989 |
| C. R. Rao | 2001 |
| Kenneth J. Arrow | 2004 |
| Bradley Efron | 2005 |
| S. R. Srinivasa Varadhan | 2010 |
| David H. Blackwell | 2014 (posthumously) |
| Thomas Kailath | 2014 |

**National Medal of Technology and Innovation**

| Name | Year |
|---|---|
| W. Edwards Deming | 1987 |

**Kyoto Prize**

| Name | Year |
|---|---|
| Hirotugu Akaike | 2006 |

**MacArthur Fellowship**

| Name | Year |
|---|---|
| Peter J. Bickel | 1984 |
| Persi W. Diaconis | 1982 |
| David L. Donoho | 1991 |
| Bradley Efron | 1983 |
| Susan A. Murphy | 2013 |

**Shaw Prize**

| Name | Year |
|---|---|
| David L. Donoho | 2013 |

**Norbert Wiener Prize in Applied Mathematics**

| Name | Year |
|---|---|
| David L. Donoho | 2010 |

**Wolf Prize for Mathematics**

| Name | Year |
|---|---|
| Gregory F. Lawler | 2019 |
| Jean-François Le Gall | 2019 |

**Wolf Prize for Physics**

| Name | Year |
|---|---|
| Benoit Mandelbrot | 1993 |

**Carl Friedrich Gauss Prize**

| Name | Year |
|---|---|
| David L. Donoho | 2018 |

**Australian Academy of Science**

| Name | Year | Membership |
|---|---|---|
| Noel A. C. Cressie | 2018 | Fellow |
| Joseph M. Gani | 1976 | Fellow |
| Peter G. Hall | 1987 | Fellow |
| Christopher C. Heyde | 1977 | Fellow |
| Henry O. Lancaster | 1961 | Fellow |
| Kerrie L. Mengersen | 2018 | Fellow |
| Edwin J. G. Pitman | 1954 | Fellow |
| Terence P. Speed | 2001 | Fellow |
| Matthew P. Wand | 2008 | Fellow |
| Alan H. Welsh | 2007 | Fellow |
| Ruth J. Williams | 2018 | Corresponding Member |

**Academy of the Social Sciences in Australia**

| Name | Year | Membership |
|---|---|---|
| Christopher C. Heyde | 2003 | Fellow |

**The Royal Society of Canada**

| Name | Year | Membership |
|---|---|---|
| Martin T. Barlow | 1998 | Fellow |
| David R. Brillinger | 1985 | Fellow |
| David R. Cox | 2010 | Fellow |
| Miklos Csörgő | 1989 | Fellow |
| Donald A. Dawson | 1987 | Fellow |
| Stephen E. Fienberg | 2004 | Fellow |
| Donald A. S. Fraser | 1967 | Fellow |
| Christian Genest | 2015 | Fellow |
| V. P. Godambe | 2002 | Fellow |
| Agnes M. Herzberg | 2008 | Fellow |
| John D. Kalbfleisch | 1994 | Fellow |
| Jerald F. Lawless | 2000 | Fellow |
| Neal N. Madras | 2002 | Fellow |
| Edwin A. Perkins | 1988 | Fellow |
| Jonnagadda N. K. Rao | 1991 | Fellow |
| Nancy M. Reid | 2001 | Fellow |
| Jeffrey Rosenthal | 2012 | Fellow |
| Gordon Slade | 2000 | Fellow |
| David Sprott | 1975 | Fellow |
| Mary E. Thompson | 2006 | Fellow |

**Royal Danish Academy of Sciences and Letters**

| Name | Year | Membership |
|---|---|---|
| O. E. Barndorff-Nielsen | 1980 | Member |
| David R. Cox | 1983 | Foreign Member |
| William Feller | ? | Foreign Member |
| Anders H. Hald | ? | Member |
| Søren Johansen | 2002 | Member |
| Steffen L. Lauritzen | 2008 | Foreign Member |
| Thomas Mikosch | 2004 | Member |
| Michael Sørenson | 2006 | Member |
| Sewall G. Wright | ? | Foreign Member |

**Académie des sciences (France)**

| Name | Year | Membership |
|---|---|---|
| Georges Darmois | 1955 | Member |
| Paul Deheuvels | 1994 | Correspondent |
| Paul Deheuvels | 2000 | Member |
| David L. Donoho | 2009 | Foreign Associate |
| Joseph L. Doob | 1975 | Foreign Associate |
| Marie-Joseph Kampé de Fériet | 1954 | Correspondent |
| Jean-François Le Gall | 2013 | Member |
| Gilles Pisier | 2002 | Member |
| Michael S. Waterman | 2005 | Foreign Associate |

**Indian Academy of Sciences**

| Name | Year | Membership |
|---|---|---|
| Krishna B. Athreya | 1974 | Fellow |
| Raj R. Bahadur | 1975 | Fellow |
| Arup Bose | 2006 | Fellow |
| Probal Chaudhuri | 2003 | Fellow |
| Jayanta K. Ghosh | 1990 | Fellow |
| Thomas Kailath | 2013 | Honorary Fellow |
| S. K. Mitra | 1990 | Fellow |
| Rahul Mukerjee | 2001 | Fellow |
| B. L. S. Prakasa-Rao | 1992 | Fellow |
| C. R. Rao | 1974 | Fellow |
| S. S. Shrikhande | 1974 | Fellow |
| S. R. S. Varadhan | 2004 | Honorary Fellow |

**Indian National Science Academy**

| Name | Year | Membership |
|---|---|---|
| Raj R. Bahadur | 1959 | Fellow |
| Arup Bose | 2007 | Fellow |
| Probal Chaudhuri | 2009 | Fellow |
| Somesh Das Gupta | 1992 | Fellow |
| Jayanta K. Ghosh | 1982 | Fellow |
| Thomas Kailath | 2014 | Foreign Fellow |
| Gopinath Kallianpur | 1979 | Fellow |
| S. K. Mitra | 1981 | Fellow |
| Rahul Mukerjee | 2004 | Fellow |
| Gilles I. Pisier | 2011 | Foreign Fellow |
| B. L. S. Prakasa-Rao | 1984 | Fellow |
| C. R. Rao | 1954 | Fellow |
| Samarendra N. Roy | 1946 | Foreign Fellow |
| S. S. Shrikhande | 1963 | Fellow |

**Royal Society of New Zealand**

| Name | Year | Membership |
|---|---|---|
| Estate V. Khmaladze | 2010 | Fellow |
| Alastair J. Scott | 1990 | Fellow |

**The Royal Netherlands Academy of Arts and Sciences**

| Name | Year | Membership |
|---|---|---|
| Peter J. Bickel | 1995 | Foreign Member |
| Jan de Leeuw | 1989 | Corresponding Member |
| W. Th. Frank den Hollander | 2005 | Member |
| Richard D. Gill | 1999 | Member |
| Mark Kac | 1981 | Foreign Member |
| Johannes H. B. Kemperman | 1964 | Member |
| Harry Kesten | 1980 | Corresponding Member |
| Tjalling C. Koopmans | 1950 | Corresponding Member |
| David van Dantzig | 1949 | Member |
| Sara A. van de Geer | 2006 | Corresponding Member |
| Aad W. van der Vaart | 2009 | Member |
| Bartel L. van der Waerden | 1949 | Member |
| Willem R. van Zwet | 1979 | Member |
| John von Neumann | 1950 | Foreign Member |

**The Norwegian Academy of Science and Letters**

| Name | Year | Membership |
|---|---|---|
| Theodore W. Anderson, Jr. | 1994 | Foreign Member |
| David R. Brillinger | 2004 | Foreign Member |
| Niels Keiding | 2005 | Foreign Member |
| Ester Samuel-Cahn | ? | Foreign Member |
| Howell Tong | 2000 | Foreign Member |
| S. R. Srinivasa Varadhan | 2009 | Foreign Member |

**The Royal Swedish Academy of Sciences**

| Name | Year | Membership |
|---|---|---|
| Ulf Grenander | 1966 | Member |
| Peter Jagers | 1989 | Member |
| Jerzy Neyman | 1963 | Foreign Member |
| Holger Rootzén | ? | Member |
| Herman O. A. Wold | 1960 | Member |

**The Royal Society (UK)**

| Name | Year | Membership |
|---|---|---|
| Alexander C. Aitken | 1936 | Fellow |
| David J. Aldous | 1994 | Fellow |
| Kenneth J. Arrow | 2006 | Foreign Member |
| Martin T. Barlow | 2005 | Fellow |
| Maurice S. Bartlett | 1961 | Fellow |
| Julian E. Besag | 2004 | Fellow |
| George E. P. Box | 1985 | Fellow |
| David R. Cox | 1973 | Fellow |
| Jack Cuzick | 2016 | Fellow |
| Henry E. Daniels | 1980 | Fellow |
| A. Philip Dawid | 2018 | Fellow |
| Donald A. Dawson | 2010 | Fellow |
| Peter J. Donnelly | 2006 | Fellow |
| Alison M. Etheridge | 2015 | Fellow |
| David J. Finney | 1955 | Fellow |
| Peter J. Green | 2003 | Fellow |
| Geoffrey R. Grimmett | 2014 | Fellow |
| Peter G. Hall | 2000 | Fellow |
| John M. Hammersley | 1976 | Fellow |
| Thomas Kailath | 2009 | Foreign Member |
| David G. Kendall | 1964 | Fellow |
| John F. C. Kingman | 1971 | Fellow |
| Steffen L. Lauritzen | 2011 | Fellow |
| Terence J. Lyons | 2002 | Fellow |
| Peter McCullagh | 1994 | Fellow |
| Jerzy Neyman | 1979 | Foreign Member |
| Egon S. Pearson | 1966 | Fellow |
| Edwin A. Perkins | 2007 | Fellow |
| H. Vincent Poor | 2014 | Foreign Member |
| C. R. Rao | 1967 | Fellow |
| Nancy M. Reid | 2018 | Fellow |
| Bernard W. Silverman | 1997 | Fellow |
| Gordon Slade | 2017 | Fellow |
| Adrian F. M. Smith | 2001 | Fellow |
| Terence P. Speed | 2013 | Fellow |
| Simon Tavaré | 2011 | Fellow |
| Robert J. Tibshirani | 2019 | Fellow |
| John W. Tukey | 1991 | Foreign Member |
| S. R. Srinivasa Varadhan | 1998 | Fellow |
| Peter Whittle | 1978 | Fellow |
| Sewall G. Wright | 1963 | Foreign Member |

**The British Academy**

| Name | Year | Membership |
|---|---|---|
| Roy G. D. Allen | 1952 | Fellow |
| Kenneth J. Arrow | 1976 | Corresponding Fellow |
| David J. Bartholomew | 1987 | Fellow |
| David R. Cox | 1997 | Honorary Fellow |
| James Durbin | 2001 | Fellow |
| Maurice G. Kendall | 1970 | Fellow |
| Oliver B. Linton | 2008 | Fellow |
| Peter C. B. Phillips | 2008 | Corresponding Fellow |
| Donald B. Rubin | 2009 | Corresponding Fellow |
| Peter M. Robinson | 2000 | Fellow |

**The Royal Society of Edinburgh**

| Name | Year | Membership |
|---|---|---|
| John Aitchison | 1968 | Fellow |
| Alexander C. Aitken | 1925 | Fellow |
| Rosemary A. Bailey | 2015 | Fellow |
| O. E. Barndorff-Nielsen | 2001 | Corresponding Fellow |
| David R. Cox | 2013 | Honorary Fellow |
| David J. Finney | 1955 | Fellow |
| Anders H. Hald | 2001 | Corresponding Fellow |
| Peter G. Hall | 2002 | Corresponding Fellow |
| Terence J. Lyons | 1988 | Fellow |
| H. Vincent Poor | 2013 | Corresponding Fellow |
| Maurice H. Quenouille | 1952 | Fellow |
| Nancy M. Reid | 2015 | Corresponding Fellow |
| Brian D. Ripley | 1990 | Fellow |
| S. D. Silvey | 1967 | Fellow |
| Donald M. Titterington | 1991 | Fellow |
| John Wishart | 1931 | Fellow |
| Sewall G. Wright | 1951 | Honorary Fellow |

**The Royal Society of Literature**

| Name | Year | Membership |
|---|---|---|
| Alexander C. Aitken | 1964 | Fellow |

**National Academy of Sciences**

| Name | Year | Membership |
|---|---|---|
| David J. Aldous | 2010 | Foreign Associate |
| Theodore W. Anderson, Jr. | 1976 | Member |
| Kenneth J. Arrow | 1968 | Member |
| Maurice S. Bartlett | 1993 | Foreign Associate |
| James O. Berger | 2003 | Member |
| Joseph Berkson | 1979 | Member |
| Peter J. Bickel | 1986 | Member |
| David H. Blackwell | 1965 | Member |
| Raj Chandra Bose | 1976 | Member |
| Maury Bramson | 2017 | Member |
| Leo Breiman | 2001 | Member |
| Emery N. Brown | 2014 | Member |
| Lawrence D. Brown | 1990 | Member |
| Donald L. Burkholder | 1988 | Member |
| Herman Chernoff | 1980 | Member |
| William G. Cochran | 1974 | Member |
| David R. Cox | 1988 | Foreign Associate |
| Gertrude M. Cox | 1975 | Member |
| Harald Cramér | 1984 | Foreign Associate |
| George B. Dantzig | 1971 | Member |
| Persi W. Diaconis | 1995 | Member |
| David L. Donoho | 1998 | Member |
| Joseph L. Doob | 1957 | Member |
| Richard T. Durrett | 2007 | Member |
| Eugene B. Dynkin | 1985 | Member |
| Bradley Efron | 1986 | Member |
| Steven N. Evans | 2016 | Member |
| William Feller | 1960 | Member |
| Stephen E. Fienberg | 1999 | Member |
| Milton Friedman | 1973 | Member |
| Donald Geman | 2015 | Member |
| Stuart Geman | 2011 | Member |
| Leo Goodman | 1974 | Member |
| Ulf Grenander | 1996 | Member |
| Peter G. Hall | 2013 | Foreign Associate |
| Morris H. Hansen | 1976 | Member |
| Theodore E. Harris | 1988 | Member |
| Trevor J. Hastie | 2018 | Member |
| Wassily Hoeffding | 1976 | Member |
| Harold Hotelling | 1970 | Member |
| Dunham Jackson | 1935 | Member |
| Iain M. Johnstone | 2005 | Member |
| Michael I. Jordan | 2010 | Member |
| Mark Kac | 1965 | Member |
| Thomas Kailath | 2000 | Member |
| Samuel Karlin | 1972 | Member |
| Harry Kesten | 1983 | Member |
| Jack C. Kiefer | 1975 | Member |
| John F. C. Kingman | 2007 | Foreign Associate |
| Tjalling C. Koopmans | 1969 | Member |
| Gregory F. Lawler | 2013 | Member |
| Erich L. Lehmann | 1978 | Member |
| Thomas M. Liggett | 2008 | Member |
| Benoit B. Mandelbrot | 1987 | Member |
| Jacob Marschak | 1973 | Member |
| Frederick Mosteller | 1974 | Member |
| Susan A. Murphy | 2016 | Member |
| Charles M. Newman | 2004 | Member |
| Jerzy Neyman | 1963 | Member |
| Yuval Peres | 2016 | Foreign Associate |
| H. Vincent Poor | 2011 | Member |
| Adrian E. Raftery | 2009 | Member |
| C. R. Rao | 1995 | Member |
| Nancy M. Reid | 2016 | Foreign Associate |
| Herbert E. Robbins | 1974 | Member |
| Kathryn Roeder | 2019 | Member |
| Murray Rosenblatt | 1984 | Member |
| Gian-Carlo Rota | 1982 | Member |
| Donald B. Rubin | 2010 | Member |
| Lawrence A. Shepp | 1989 | Member |
| David O. Siegmund | 2002 | Member |
| David Slepian | 1977 | Member |
| Frank Spitzer | 1981 | Member |
| Charles M. Stein | 1975 | Member |
| Charles J. Stone | 1993 | Member |
| Simon Tavaré | 2018 | Foreign Associate |
| Robert J. Tibshirani | 2012 | Member |
| John W. Tukey | 1961 | Member |
| S. R. Srinivasa Varadhan | 1995 | Member |
| John von Neumann | 1937 | Member |
| Grace Wahba | 2000 | Member |
| Larry A. Wasserman | 2016 | Member |
| Michael S. Waterman | 2001 | Member |
| Ruth Williams | 2012 | Member |
| Jacob Wolfowitz | 1974 | Member |
| Wing Hung Wong | 2009 | Member |
| Sewall G. Wright | 1934 | Member |
| Bin Yu | 2014 | Member |

**National Academy of Engineering**

| Name | Year | Membership |
|---|---|---|
| Emery N. Brown | 2015 | Member |
| Thomas M. Cover | 1995 | Member |
| George B. Dantzig | 1985 | Member |
| W. Edwards Deming | 1983 | Member |
| Donald P. Gaver | 2009 | Member |
| Peter W. Glynn | 2012 | Member |
| J. Michael Harrison | 2008 | Member |
| Donald L. Iglehart | 1999 | Member |
| Michael I. Jordan | 2010 | Member |
| Gerald J. Lieberman | 1987 | Member |
| H. Vincent Poor | 2001 | Member |
| Howard Raiffa | 2005 | Member |
| David Slepian | 1976 | Member |
| Arthur F. Veinott, Jr. | 1986 | Member |
| Michael S. Waterman | 2012 | Member |
| Peter Whittle | 2016 | Foreign Associate |
| C. F. Jeff Wu | 2004 | Member |
| Moshe Zakai | 1989 | Foreign Associate |

**National Academy of Medicine**

| Name | Year | Membership |
|---|---|---|
| Kenneth J. Arrow | 1974 | Member |
| Emery N. Brown | 2007 | Member |
| Alicia L. Carriquiry | 2016 | Member |
| Nicholas P. Jewell | 2017 | Member |
| Stephen W. Lagakos | 2002 | Member |
| Xihong Lin | 2018 | Member |
| Paul Meier | 1992 | Member |
| Frederick Mosteller | 1971 | Member |
| Susan A. Murphy | 2014 | Member |
| Lawrence A. Shepp | 1992 | Member |

**American Academy of Arts and Sciences**

| Name | Year | Membership |
|---|---|---|
| David J. Aldous | 2004 | Member |
| Theodore W. Anderson, Jr. | 1974 | Member |
| Kenneth J. Arrow | 1959 | Member |
| Raj R. Bahadur | 1986 | Member |
| Gérard Ben Arous | 2015 | Member |
| Peter J. Bickel | 1986 | Member |
| Patrick Billingsley | 1986 | Member |
| David H. Blackwell | 1969 | Member |
| George E. P. Box | 1974 | Member |
| Leo Breiman | 1999 | Member |
| David R. Brillinger | 1993 | Member |
| Emery N. Brown | 2012 | Member |
| Lawrence D. Brown | 2013 | Member |
| Donald L. Burkholder | 1992 | Member |
| Herman Chernoff | 1974 | Member |
| Victor Chernozhukov | 2016 | Member |
| William G. Cochran | 1971 | Member |
| Thomas M. Cover | 2003 | Member |
| David R. Cox | 1974 | Foreign Honorary Member |
| Harald Cramér | 1961 | Foreign Honorary Member |
| George B. Dantzig | 1975 | Member |
| Arthur P. Dempster | 1997 | Member |
| Persi W. Diaconis | 1989 | Member |
| David L. Donoho | 1992 | Member |
| Joseph L. Doob | 1965 | Member |
| Richard T. Durrett | 2002 | Member |
| Aryeh Dvoretzky | 1985 | Foreign Honorary Member |
| Eugene B. Dynkin | 1978 | Member |
| Bradley Efron | 1983 | Member |
| William Feller | 1958 | Member |
| Stephen E. Fienberg | 2007 | Member |
| Irving Fisher | 1912 | Member |
| David A. Freedman | 1991 | Member |
| Milton Friedman | 1959 | Member |
| Ragnar A. K. Frisch | 1960 | Foreign Honorary Member |
| Hilda Geiringer | 1955 | Member |
| Irving J. Good | 1985 | Member |
| Leo Goodman | 1973 | Member |
| Ulf Grenander | 1995 | Member |
| Louis Guttman | 1975 | Foreign Honorary Member |
| Trygve Haavelmo | 1976 | Foreign Honorary Member |
| Morris H. Hansen | 1985 | Member |
| Wassily Hoeffding | 1985 | Member |
| Peter J. Huber | 1987 | Member |
| Edward V. Huntington | 1913 | Member |
| Dunham Jackson | 1915 | Member |
| Iain M. Johnstone | 2003 | Member |
| Michael I. Jordan | 2011 | Member |
| Mark Kac | 1959 | Member |
| Joseph B. Kadane | 2010 | Member |
| Thomas Kailath | 1994 | Member |
| Samuel Karlin | 1970 | Member |
| Truman L. Kelley | 1931 | Member |
| Harry Kesten | 1999 | Member |
| Jack C. Kiefer | 1972 | Member |
| Tjalling C. Koopmans | 1960 | Member |
| William H. Kruskal | 1973 | Member |
| Thomas G. Kurtz | 2005 | Member |
| Gregory F. Lawler | 2005 | Member |
| Lucien M. Le Cam | 1976 | Member |
| Erich L. Lehmann | 1975 | Member |
| Thomas M. Liggett | 2012 | Member |
| Benoit B. Mandelbrot | 1982 | Member |
| Jacob Marschak | 1961 | Member |
| Peter McCullagh | 2002 | Member |
| Paul Meier | 1980 | Member |
| Lincoln E. Moses | 1981 | Member |
| Frederick Mosteller | 1954 | Member |
| Charles M. Newman | 2006 | Member |
| Jerzy Neyman | 1976 | Member |
| H. Vincent Poor | 2005 | Member |
| John W. Pratt | 1988 | Member |
| Adrian E. Raftery | 2003 | Member |
| Howard Raiffa | 1968 | Member |
| C. R. Rao | 1975 | Foreign Honorary Member |
| Herbert E. Robbins | 1975 | Member |
| Gian-Carlo Rota | 1963 | Member |
| Donald B. Rubin | 1993 | Member |
| Laurent Saloff-Coste | 2011 | Member |
| Lawrence A. Shepp | 1993 | Member |
| David O. Siegmund | 1994 | Member |
| Leslie E. Simon | 1956 | Member |
| David Slepian | 1990 | Member |
| Stephen M. Stigler | 1987 | Member |
| John W. Tukey | 1964 | Member |
| S. R. Srinivasa Varadhan | 1988 | Member |
| Richard E. von Mises | 1944 | Member |
| John von Neumann | 1944 | Member |
| Grace Wahba | 1997 | Member |
| W. Allen Wallis | 1964 | Member |
| Michael S. Waterman | 1995 | Member |
| Samuel S. Wilks | 1963 | Member |
| Ruth Williams | 2009 | Member |
| Herman O. A. Wold | 1978 | Foreign Honorary Member |
| Jacob Wolfowitz | 1970 | Member |
| Sewall G. Wright | 1948 | Member |
| Bin Yu | 2013 | Member |
| Ofer Zeitouni | 2019 | Member |
| Marvin Zelen | 1991 | Member |

**American Philosophical Society**

| Name | Year | Membership |
|---|---|---|
| Kenneth J. Arrow | 1968 | Member |
| David H. Blackwell | 1990 | Member |
| David R. Cox | 1990 | International Member |
| Persi W. Diaconis | 2005 | Member |
| David L. Donoho | 2019 | Member |
| William Feller | 1966 | Member |
| Milton Friedman | 1957 | Member |
| Leo Goodman | 1976 | Member |
| Robert Henderson | 1927 | Member |
| Edward V. Huntington | 1933 | Member |
| Mark Kac | 1969 | Member |
| Samuel Karlin | 1995 | Member |
| Benoit B. Mandelbrot | 2004 | Member |
| Frederick Mosteller | 1961 | Member |
| Stephen M. Stigler | 2006 | Member |
| John W. Tukey | 1962 | Member |
| Samuel S. Wilks | 1948 | Member |
| Sewall G. Wright | 1932 | Member |

**American Academy of Political and Social Science**

| Name | Year | Membership |
|---|---|---|
| David R. Cox | 2003 | Harold Lasswell Fellow |
| Stephen Fienberg | 2004 | Thorsten Sellin Fellow |

The amendment also passed.

The new Council members will be joining 10 other Council members: Peter Hoff, Greg Lawler, Antonietta Mira, Axel Munk and Byeong Park will serve another year; Christina Goldschmidt, Susan Holmes, Xihong Lin, Richard Lockhart and Kerrie Mengersen another two. Jean Bertoin, Song Xi Chen, Mathias Drton, Elizaveta Levina and Simon Tavaré will be stepping down after their three-year terms on Council.

Council is also made up of the Executive Committee members and Editors. From the coming IMS meeting, the Executive Committee will be Susan Murphy as President, Xiao-Li Meng as Past President, Regina Liu as President-Elect, Zhengjun Zhang as Treasurer, Ming Yuan as Program Secretary, and Edsel Peña as Executive Secretary. Alison Etheridge will leave the Executive Committee this year. The Editors are Francois Delarue and Peter Friz (Annals of Applied Probability), Amir Dembo (Annals of Probability), Karen Kafadar (Annals of Applied Statistics), Richard Samworth and Ming Yuan (Annals of Statistics), Cun-Hui Zhang (Statistical Science) and T.N. Sriram (Managing Editor).

Thanks to all the Council candidates, to the outgoing members of the committees, and to all of you who voted.

David Donoho has made fundamental contributions to theoretical and computational statistics, as well as to signal processing and harmonic analysis. His algorithms have contributed significantly to our understanding of the maximum entropy principle, of the structure of robust procedures, and of sparse data description. His research interests include “the mathematics of statistical inference and theoretical questions arising in applying harmonic analysis to various applied problems.” They have ranged from data visualization to problems in scientific signal processing, image processing, and inverse problems.

The American Philosophical Society, the oldest learned society in the United States, was founded in 1743 by Benjamin Franklin for the purpose of “promoting useful knowledge.” The Society’s activities reflect the founder’s spirit of inquiry, provide a forum for the free exchange of ideas, and convey the conviction of its members that intellectual inquiry and critical thought are inherently in the public interest.

At next year’s Joint Statistical Meetings (August 1–6, 2020, in Philadelphia, PA) there will be three Medallion Lectures: Susan Holmes, Roger Koenker and Paul Rosenbaum.

The IMS lecturers at the 2020 Bernoulli–IMS World Congress in Probability and Statistics (August 17–21, 2020, at Seoul National University, Seoul, Korea) are as follows: the Wald Lecturer will be Martin Barlow; the Blackwell Lecture will be given by Gábor Lugosi, and the five IMS Medallion Lectures will be given by Gérard Ben Arous, Andrea Montanari, Elchanan Mossel, Laurent Saloff-Coste and Daniela Witten. The IMS Presidential Address will be given by Susan Murphy. There are also two named IMS/BS Lectures: the Doob Lecture, which will be given by Nicolas Curien, and the Schramm Lecture, given by Omer Angel.

Also at the World Congress, there will be five Bernoulli Society named lectures. Persi Diaconis will give the Kolmogorov Lecture, Alison Etheridge the Bernoulli Lecture, Massimiliano Gubinelli the Lévy Lecture, Tony Cai the Laplace Lecture and Sara van de Geer will give the Tukey Lecture.

**Nominate now for Wald, Le Cam, Medallion lectures in 2021/2022**

You can nominate a special lecturer for future years: we are accepting nominations for the Wald Lecturer in 2021 and 2022; the Le Cam Lecturer in 2021; and the Medallion Lecturers in 2022. Submit your nomination (by October 1, 2019) at https://www.imstat.org/ims-special-lectures/nominations/

The lecture and the interview are open to the public: see the meeting announcement on page 24 of the August 2019 Bulletin issue.

The Distinguished Statistician Colloquium series ran from 1978 until 2012 and was renewed in 2018. The colloquium series has featured C.R. Rao, Bradley Efron, D.R. Cox, and many more. For a complete list, see https://stat.uconn.edu/pfizer-colloquium/. The purpose of the Colloquium is to provide a forum for a distinguished statistician to share and disseminate their unique perspective and work in the theory and/or application of statistics. Starting from 2018, the series has been co-sponsored by Pfizer, the American Statistical Association, and the Department of Statistics at the University of Connecticut.

Dipak Dey, Chair of the selection committee, thanks Pfizer and the ASA for their generous financial support. He also thanks the members of the selection committee: Dan Meyer and Demissie Alemayehu from Pfizer, Ron Wasserstein and Nancy Flournoy from the ASA, Joseph Glaz and Ming-Hui Chen from UConn (Professor Chen also represents the New England Statistical Society, NESS).

Charles Bordenave gave a Medallion Lecture at the INFORMS-APS 2019 meeting, which took place July 3–5, 2019, in Brisbane, Australia. He is pictured here with Ruth Williams. You can read the preview of his lecture here.

Trevor Hastie is the John A Overdeck Professor of Statistics at Stanford University. Prior to joining Stanford University in 1994, he worked at AT&T Bell Laboratories for nine years, where he helped develop the statistical modeling environment popular in the R computing system. He received a BSc (Hons) in statistics from Rhodes University in 1976, an MSc from the University of Cape Town in 1979, and a PhD from Stanford in 1984. In 2018 he was elected to the National Academy of Sciences. Trevor’s main research contributions have been in applied statistics, particularly in the fields of statistical modeling, bioinformatics and machine learning; he has published over 200 articles and written five books in this area: *Generalized Additive Models* (with R. Tibshirani, 1991), *Elements of Statistical Learning* (with R. Tibshirani and J. Friedman, 2001; 2nd edn 2009), *An Introduction to Statistical Learning, with Applications in R* (with G. James, D. Witten and R. Tibshirani, 2013), *Statistical Learning with Sparsity* (with R. Tibshirani and M. Wainwright, 2015) and the IMS Monograph, *Computer Age Statistical Inference* (with Bradley Efron, 2016). He has also made contributions in statistical computing, co-editing (with J. Chambers) a large software library on modeling tools in the S language (*Statistical Models in S*, 1992), which form the foundation for much of the statistical modeling in R. His current research focuses on applied statistical modeling and prediction problems in biology and genomics, medicine and industry. Trevor’s Wald Lectures will be delivered at JSM Denver, July 27–August 1, 2019.

This series of three talks takes us on a journey that starts with the introduction of the lasso in 1996 by Rob Tibshirani, and brings us up to date on some of the vast array of applications that have emerged. In 2015 I published a research monograph by the same name with Rob Tibshirani and Martin Wainwright (*Statistical Learning with Sparsity: The Lasso and Generalizations*, Hastie, Tibshirani, Wainwright, Chapman and Hall, 2015). These talks will focus on some of the topics from this book.

The community of people that have worked on sparsity and high-dimensional statistical inference is by now very large (the lasso paper alone has over 28K citations!). My work with my colleagues and students has concentrated on applied methodology, and in particular algorithms and software for employing these powerful tools. All the applications I present are accompanied by software (mostly in R) that my students and I actively support and improve.

There are three Wald lectures, and they focus on different applications.

**Wald Lecture I:**

I motivate the need for sparsity with wide data, and then chronicle the invention of the lasso and the quest for good software. After some false starts, my colleagues and I settled on an algorithm known as coordinate descent, which is surprisingly efficient for fitting a sequence or path of sparse models. Along with our so-called strong rules for screening the active set, our glmnet package in R (also available in Python and Matlab) has remained popular. Several examples will be given, culminating with a special adaptation of glmnet called snpnet for fitting lasso models for polygenic traits using GWAS (truly massive data). I end with a survey of some active areas of research not covered in the remaining two talks.
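The coordinate-descent idea behind this is simple enough to sketch in a few lines. The toy below (plain numpy; the function name `lasso_cd` and the standardized-columns assumption are illustrative, and it omits everything that makes glmnet fast, such as warm starts, strong rules and sparse-matrix handling) cycles through the coordinates, soft-thresholding a univariate least-squares fit to the partial residual:

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator: sign(z) * max(|z| - gamma, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1.
    Assumes the columns of X are centered and standardized."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta                    # current residual
    col_sq = (X ** 2).sum(axis=0) / n   # = 1 for standardized columns
    for _ in range(n_iter):
        for j in range(p):
            r = r + X[:, j] * beta[j]   # partial residual excluding coordinate j
            rho = X[:, j] @ r / n       # univariate least-squares fit
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
            r = r - X[:, j] * beta[j]   # restore full residual
    return beta
```

Each coordinate update is closed-form, which is what makes a whole path of solutions over a grid of `lam` values cheap to compute with warm starts.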

**Wald Lecture II:**

With real applications, we often encounter missing data, typically regarded as a nuisance. Depending on the application, we have different ways of sweeping the problem under the rug, some more natural than others. With principal components and the SVD, there is a natural way of accommodating NAs, which appears to have been in the statistical folklore for a long time. Matrix completion re-emerged during the Netflix competition as a way to compute a low-rank SVD in the presence of a large amount of missing data, and for imputing missing values. I discuss some aspects of this problem, and describe several algorithms for finding a path of solutions. Here sparsity comes in two forms: sparsity in the entries in the observed matrix, and sparsity in the singular values of the solutions. I illustrate with applications in a variety of areas, including recommender systems and the modeling of sparse longitudinal multivariate data.
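The fill-then-shrink iteration described above can be sketched as follows. This is a hedged toy version of the Soft-Impute idea, not production code (the function name and defaults are illustrative): each pass fills the missing entries from the current low-rank estimate and soft-thresholds the singular values of the filled matrix, which is where the sparsity in the spectrum comes from:

```python
import numpy as np

def soft_impute(X, lam, n_iter=100):
    """Complete a matrix with NaNs by alternating (1) filling missing
    entries from the current estimate and (2) soft-thresholding the
    singular values of the filled matrix."""
    mask = ~np.isnan(X)
    Z = np.where(mask, X, 0.0)          # start with zeros in the missing slots
    for _ in range(n_iter):
        filled = np.where(mask, X, Z)   # observed entries fixed, rest imputed
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)    # sparsity in the singular values
        Z = (U * s) @ Vt                # shrunk low-rank reconstruction
    return Z
```

Larger `lam` yields a lower-rank (more heavily shrunk) reconstruction; a path of solutions over decreasing `lam` can reuse each fit as a warm start.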

**Wald Lecture III:**

As the sparsity literature has progressed over the years, some ingenious extensions have been proposed. One of these is the group lasso (Yuan and Lin, 2007 *JRSS B*), which selects for groups of variables. I briefly outline three projects that have employed these ideas; two concerning generalized additive model selection, and one for selecting interactions in a linear model. Then, in a different direction, the graphical lasso builds sparse inverse covariance matrices to capture the conditional independencies in multivariate Gaussian data. I discuss this approach and extensions, and then illustrate its use for anomaly detection and imputation with high-dimensional data.
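The groupwise selection performed by the group lasso comes down to a blockwise soft-thresholding (proximal) operator, which is easy to illustrate. The helper below is a minimal sketch of that operator only, not of a full solver (the name `group_soft_threshold` is mine): a group whose norm falls below the threshold is dropped entirely, which is selection by groups rather than by coordinates:

```python
import numpy as np

def group_soft_threshold(beta, groups, gamma):
    """Proximal operator of the group-lasso penalty gamma * sum_g ||beta_g||_2.
    Each group is shrunk toward zero, and zeroed out entirely when its
    Euclidean norm is below gamma."""
    out = np.zeros_like(beta)
    for g in groups:                     # g is an index array for one group
        norm = np.linalg.norm(beta[g])
        if norm > gamma:
            out[g] = (1.0 - gamma / norm) * beta[g]
    return out
```

Inside a proximal-gradient solver, this operator would be applied after each gradient step on the smooth part of the loss.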

Liza Levina is the Vijay Nair Collegiate Professor of Statistics at the University of Michigan, as well as affiliated faculty at the Michigan Institute for Data Science and the Center for the Study of Complex Systems. She received her PhD in Statistics from UC Berkeley in 2002, and has been at the University of Michigan since. She is well known for her work on high-dimensional statistical inference and statistical network analysis. Her current application interests are focused on neuroimaging. She is a recipient of the ASA Noether Young Scholar Award, a fellow of the ASA and the IMS, a 2016 Web of Science Highly Cited Researcher, and an invited speaker at the 2018 International Congress of Mathematicians. She has served the IMS in multiple capacities and is currently a council member and a co-chair of the IMS Task Force on Data Science. Liza Levina will deliver her Medallion Lecture at JSM Denver, July 27–August 1, 2019.

Network data have become increasingly common in many fields, with interesting scientific phenomena discovered through the analysis of biological, social, ecological, and various other networks. Among various network analysis tasks, community detection (the task of clustering network nodes into groups with similar connection patterns) has been one of the most studied, due to the ubiquity of communities in real-world networks and the appealing mathematical formulations that lend themselves to analysis. For the most part, community detection has been formulated as the problem of finding a single partition of the network into some “correct” number of communities. However, it is both well known in practice and supported by theory that nearly all the algorithms and models proposed for this type of community detection do not work well when the number of communities is large. We argue that for large networks, a hierarchy of communities is preferable to such a partition, since multiple partitions at different scales frequently make more sense in real networks, and the hierarchy can be scientifically meaningful, like an evolutionary tree. A hierarchical tree, with larger communities subdivided into smaller ones, offers a natural and very interpretable representation of community structure, and simplifies the problem of estimating the potentially large number of communities from the entire network. In addition, a hierarchy gives us much more information than any “flat” partition, by indicating how close communities are through their tree distance. Finally, recursive splitting is more computationally efficient, and, as we show, in some settings is more accurate. In particular, we show that even when the full community structure corresponding to the leaves of the tree is below the recovery threshold, we can still consistently recover the top levels of the tree as long as they are well separated, giving us partial but accurate information where a flat partition method would fail.

Many existing algorithms for hierarchical clustering can be modified to apply to networks. We adopt a simple top-down recursive partitioning algorithm, once popular in the clustering literature. It requires two tools that, in turn, can be chosen among many existing methods: an algorithm to partition a given network into two, and a stopping rule to decide whether there is more than one community in a given subnetwork. Given these two tools, the recursive (bi-)partitioning algorithm proceeds by starting with all nodes in one community, applying the stopping rule to decide whether a split is needed, applying the splitting algorithm to split into two communities if so, and continuing to apply this to every resulting subnetwork until the stopping rule indicates there are no further splits to make. This class of algorithms can be made model-free and tuning-free, and is computationally efficient, with the computational cost growing logarithmically in the number of communities rather than linearly, which is the case for most flat partition methods. We implement recursive partitioning by using regularized spectral clustering as the splitting rule, and the Bethe-Hessian estimator of the number of communities as the stopping rule, although any other consistent method can be used instead.
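A stripped-down version of this recursive bipartitioning loop might look as follows. It is a sketch under strong simplifying assumptions: the splitting rule here is plain spectral bisection of the adjacency matrix with a crude regularizer, rather than the regularized spectral clustering used in the paper, and the stopping rule is a user-supplied callback (in practice the Bethe-Hessian estimator mentioned above; any consistent test of "one community vs. more" could be plugged in):

```python
import numpy as np

def spectral_bisect(A, nodes):
    """Split the subnetwork on `nodes` in two by the sign of the eigenvector
    of the second-largest eigenvalue of the (crudely regularized) adjacency."""
    sub = A[np.ix_(nodes, nodes)].astype(float)
    sub += sub.mean()                  # crude regularization for sparse subgraphs
    vals, vecs = np.linalg.eigh(sub)   # eigenvalues in ascending order
    v = vecs[:, -2]                    # second-largest eigenvalue's eigenvector
    return nodes[v >= 0], nodes[v < 0]

def recursive_partition(A, nodes, stop, depth=0, max_depth=10):
    """Top-down recursive bipartitioning: keep splitting until `stop`
    declares a single community; returns the tree as nested lists."""
    if depth >= max_depth or stop(A, nodes):
        return nodes.tolist()
    left, right = spectral_bisect(A, nodes)
    if len(left) == 0 or len(right) == 0:
        return nodes.tolist()
    return [recursive_partition(A, left, stop, depth + 1, max_depth),
            recursive_partition(A, right, stop, depth + 1, max_depth)]
```

On a network with two clearly separated blocks, a single split recovers them and the recursion stops at the leaves, so the returned nesting mirrors the community hierarchy.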

We analyze the algorithm’s theoretical performance under a natural framework for this setting, the binary tree stochastic block model. Under this model, we prove that the algorithm correctly recovers the entire community tree under mild growth assumptions on the average degree, allowing for sparse networks. Further, the assumptions to recover each level of the tree, which we make explicit, get strictly stronger as we move down the tree, illuminating the regime where recursive partitioning can correctly recover mega-communities at the higher levels of the hierarchy even when it cannot recover every community at the bottom of the tree. We show that in practice recursive partitioning outperforms “flat” spectral clustering on multiple performance metrics when the number of communities is large, and illustrate the algorithm on a dataset of statistics papers, constructing a highly interpretable tree of statistics research communities.

This is joint work with Tianxi Li (Univ. Virginia), Lihua Lei (UC Berkeley), Sharmodeep Bhattacharyya (Oregon State Univ.), Purnamrita Sarkar (Univ. Texas, Austin), and Peter J. Bickel (UC Berkeley). The manuscript is available at arXiv:1810.01509.

Yee Whye Teh is a Professor of Statistical Machine Learning at the Department of Statistics, University of Oxford and a Research Scientist at DeepMind. He was programme co-chair for AISTATS 2010 and ICML 2017. His research interests span across machine learning and Bayesian statistics, including probabilistic methods, Bayesian nonparametrics and deep learning.

Yee Whye’s Medallion lecture will be delivered at the Joint Statistical Meetings in Denver, July 27–August 1, 2019.

A shorter version of this article appears below, or you can download a longer PDF version here.

Historically, machine learning has its roots in pattern recognition and connectionist systems whose intelligent behaviours are learnt from data. In the 90s, the community started realising the widespread connections with statistics, which led to a period when statistical approaches flourished and became the dominant framework for both theoretical foundations and methodological developments. In the last decade, with the growing popularity of deep learning, this coming together with statistics has started to unravel, and the research frontier moved from statistical learning to artificial intelligence, from graphical models to neural networks, and from Markov chain Monte Carlo to stochastic gradient descent.

In this new era, what is the role of statistical thinking in advancing the state-of-the-art in machine learning? It is my belief, and that of many others, that statistical thinking continues to play an important role in machine learning. The deep theoretical roots of statistics and probability have continued to nourish our understanding of learning phenomena; in unsupervised learning, generative modelling continues to be a popular paradigm; and the deep concern for uncertainty and robustness prevalent in statistics is now being increasingly felt as machine learning techniques are applied in the real world. In the following I will illustrate how statistical thinking has helped with two inter-related examples from my own research.

**Meta Learning and Neural Processes**

While much of machine learning excels for large datasets, there is increasing interest in systems that can learn efficiently from much less data. For example, in few-shot image classification, with just a few example images of each class, we would like a system that can generalise well to classifying other images. Meta learning is an idea whereby if our system has seen many examples of such few-shot image classification tasks (each with its own small dataset), we might conceivably expect there to be sufficient information spread across tasks for a system to learn to generalise sensibly from few examples.

While most recent approaches to meta learning are based on the idea of optimising learning algorithms, an interesting alternative, which we call neural processes, considers it from the statistical perspectives of hierarchical Bayes and stochastic processes (Garnelo et al., 2018a,b; Kim et al., 2019; Galashov et al., 2019). The idea is that in order to learn effectively from small datasets, prior knowledge is necessary, which from a Bayesian perspective takes the form of the prior distribution. In the case of image classification and supervised learning, each task corresponds to a function, and the prior of interest is a distribution over functions, i.e. a stochastic process. While standard approaches in Bayesian nonparametrics might posit simple prior distributions that enable tractable posterior computation, we instead propose to use neural networks to directly learn the predictive distributions induced by the stochastic process from data.

Viewing meta learning from a statistical perspective has allowed us to better understand the underlying learning phenomena. This has in turn allowed us to make links with other ideas like Bayesian nonparametrics and Gaussian processes, and motivated new approaches which better handle uncertainty (Garnelo et al., 2018b) and learn more accurately (Kim et al., 2019), as well as new applications of meta learning in Bayesian optimisation and sequential decision making (Galashov et al., 2019).

**Probabilistic Symmetries and Neural Networks**

In neural processes, the central function being learnt has the form $y = f(x, \mathcal{D}^{\mathsf{train}})$: an output $y$ given an input $x$ and an iid training set $\mathcal{D}^{\mathsf{train}} = \{(x_i^{\mathsf{train}}, y_i^{\mathsf{train}})\}_{i=1}^n$. The question is: how should we choose the architecture of the neural network used to learn it? Specifically, the function should be invariant with respect to permuting the indices of $\mathcal{D}^{\mathsf{train}}$. We enforced this permutation invariance explicitly by choosing a specific neural architecture,

$f(x, \{(x_i^{\mathsf{train}}, y_i^{\mathsf{train}})\}_{i=1}^n) = h\left(x, \sum_{i=1}^n g(x_i^{\mathsf{train}}, y_i^{\mathsf{train}}) \right)$

where both $g$ and $h$ are neural networks.
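The invariance can be checked numerically. Below is a minimal sketch with fixed random weights standing in for the trained networks $g$ and $h$ (everything here, including the tiny dimensions and the example data, is illustrative rather than the architecture used in the papers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random weights standing in for the trained networks g and h
# (purely illustrative; a real neural process learns these end to end).
W_g = rng.normal(size=(2, 8))   # g: one (x_i, y_i) pair -> 8-dim embedding
W_h = rng.normal(size=(9, 1))   # h: (x, pooled embedding) -> prediction

def g(pair):
    """Embed a single training pair (x_i, y_i)."""
    return np.tanh(np.asarray(pair) @ W_g)

def h(x, r):
    """Combine the query x with the sum-pooled embedding r."""
    return np.tanh(np.concatenate([[x], r]) @ W_h)

def f(x, train):
    """Permutation-invariant predictor: sum-pool the pair embeddings."""
    r = sum(g(pair) for pair in train)
    return h(x, r)

train = [(0.1, 1.0), (0.5, -1.0), (0.9, 2.0)]
shuffled = [train[2], train[0], train[1]]
# f(0.3, train) and f(0.3, shuffled) agree, since addition commutes.
```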

By construction the function is invariant to permutations of the dataset, since addition is commutative. However, there are other commutative operators, for example element-wise product, max, or min. This raises the following questions: Which operator is best? Are there other neural architectures or function classes that have this permutation invariance property? And can we characterise all permutation-invariant functions?

In Bloem-Reddy and Teh (2019), we developed a general framework to answer these questions using tools from probabilistic symmetries and statistical sufficiency. The core idea is that an invariance means that some information is ignorable. The rest of the information then forms an adequate statistic for computing the function, and we can identify what the adequate statistic is. In the case of permutation invariance this is the empirical measure, and the implication is that the form we chose above is the natural one.

We have generalised this result in a few ways. Firstly, we can generalise to invariance under the action of some compact group. The results are structurally the same, except that the empirical measure is replaced by an appropriate adequate statistic called a maximal invariant. We have also derived analogous results for a different notion of symmetry called equivariance, where transformations of the input lead to output that is transformed in the same way.

**References**

Bloem-Reddy, B. and Teh, Y. W. (2019). Probabilistic symmetry and invariant neural networks. arXiv:1901.06082.

Galashov, A., Schwarz, J., Kim, H., Garnelo, M., Saxton, D., Kohli, P., Eslami, S., and Teh, Y. W. (2019). Meta-learning surrogate models for sequential decision making. In *ICLR Workshop on Structure & Priors in Reinforcement Learning*. arXiv:1903.11907.

Garnelo, M., Rosenbaum, D., Maddison, C. J., Ramalho, T., Saxton, D., Shanahan, M., Teh, Y. W., Rezende, D. J., and Eslami, S. (2018a). Conditional neural processes. In *International Conference on Machine Learning (ICML)*.

Garnelo, M., Schwarz, J., Rosenbaum, D., Viola, F., Rezende, D. J., Eslami, S., and Teh, Y. W. (2018b). Neural processes. In *ICML Workshop on Theoretical Foundations and Applications of Deep Generative Models*. arXiv:1807.01622.

Kim, H., Mnih, A., Schwarz, J., Garnelo, M., Eslami, A., Rosenbaum, D., Vinyals, O., and Teh, Y. W. (2019). Attentive neural processes. In *International Conference on Learning Representations (ICLR)*. arXiv:1901.05761.

Hao Helen Zhang is a Professor in the Department of Mathematics at the University of Arizona, as well as a faculty member of the Statistics Graduate Interdisciplinary Program (GIDP). Helen Zhang obtained a PhD in Statistics from the University of Wisconsin–Madison in 2002. She was an assistant and then associate professor of Statistics at North Carolina State University from 2002–11. Her research areas include statistical machine learning, high-dimensional data analysis, nonparametric smoothing, and biomedical data analysis. With Bertrand Clarke and Ernest Fokoué, she is the author of the book *Principles and Theory for Data Mining and Machine Learning*. Helen is currently Editor-in-Chief of ISI’s *Stat*, and Associate Editor of *Journal of the Royal Statistical Society Series B*, *Journal of the American Statistical Association*, *Journal of Computational and Graphical Statistics*, and *Statistical Analysis and Data Mining*. She is a Fellow of IMS and ASA, and an elected member of the International Statistical Institute.

Helen’s Medallion lecture will be delivered at the Joint Statistical Meetings in Denver, July 27–August 1, 2019:

**Breaking the Curse of Dimensionality in Nonparametrics**

The “curse of dimensionality” refers to the sparseness of high-dimensional data and the associated challenges in statistical analysis. Traditional nonparametric methods provide flexible modeling tools to discover nonlinear and complex patterns in data, but they often experience theoretical and computational difficulties when handling high-dimensional data. In the modern computer age, rapid advances have occurred in nonparametrics to break the curse of dimensionality and enable sparse, efficient, and interpretable function estimation for high-dimensional regression and classification problems. A variety of state-of-the-art nonparametric methods, theory, and scalable algorithms have been developed to extract low intrinsic dimension from data and accommodate high-dimensional data analysis more effectively.

In this talk, I will survey some recent work on nonparametric methods for model selection, dimension reduction, curve estimation, and inference in high-dimensional regression, classification, and density estimation problems. Related issues and open challenges will be discussed as well. In addition, there is an intrinsic connection between nonparametrics and statistical machine learning. The talk will also highlight a variety of nonparametric machine learning algorithms widely used in modern data science.

It has been so long since I was quarantined by the joy of learning as a student, a form of joy whose purity many of us only recognize decades after we lost our innocence. I was therefore indebted to the organizers of the 2019 IEEE Data Science Workshop (DSW). They provided me the opportunity to experience that joy again; on a breezy, refreshing Sunday I limbered up with “Large-scale Optimization for Machine Learning” in the morning and tangoed with “Tensors in Data Science” in the afternoon.

However, a real “*aha*” moment came during the welcome reception that evening. A dean and an ex-president of IEEE’s Signal Processing Society (SPS) delivered welcoming remarks, and reminded the mixed audience of engineers, applied mathematicians, computer scientists, and statisticians that the mission of SPS has long been about “generation, transformation, extraction, and interpretation of information.” Isn’t that pretty much what Data Science (DS) is about? After all, who would care much about data if they don’t ultimately lead to actionable or at least understandable information?

I share much of my fellow statisticians’ and probabilists’ frustration that our consistent and substantial contributions to DS have generally not been properly recognized. But this remark reminded me that we are still the luckier ones. What percentage of the Venn diagrams on data science that you can find online include “signal processing”, either as a participating discipline or a skill set? So far that percentage from my search is smaller than the probability that my mother country will win the 2026 World Cup. The OR (Operations Research) community is in a similar situation; its contributions to optimization methods, which are the bread and butter of machine learning, are essentially infinite compared to the attention the community has received in the media frenzy over DS or AI.

No matter how frustrated, or even outraged, any individual group or discipline in DS is, there is no DS deity we can blame for unfairly favoring some groups over others. If anything is to blame, it is our long and collective failure to communicate with and learn from each other. Period.

The good news is that this period is about to end. There is an increasing awareness that it is much more effective to engage in outreach than in outrage, so to speak. That computer scientists and statisticians were invited to IEEE DSW represents the SPS’s effort. That the ACM and IMS reached out to each other last year is another such indication. As I wrote in my second President’s column, this outreach resulted in the establishment of the IMS task force, co-chaired by Liza Levina (Michigan) and David Madigan (Columbia), on the partnership with ACM, the world’s largest computing society with nearly 100,000 members. I am very happy to report that this effort is now expanding to a much larger-scale collaboration by multiple disciplines, as encouraged by NAS (the National Academies of Sciences, Engineering, and Medicine), and with ACM and IMS as its co-leading organizations.

Specifically, the first ACM-IMS Interdisciplinary Summit on the Foundations of Data Science was held on June 15, 2019 in the grandiose Palace Hotel of San Francisco, just prior to the ACM award ceremony, which conferred the latest Turing Award to the “Fathers of the deep learning revolution.” The Summit co-chair, Columbia computer scientist Jeannette Wing, concluded her opening remarks [*which you can watch on the Livestream at *https://www.acm.org/data-science-summit/livestream*—see screenshot below*] by emphasizing that,

“*While today’s event focuses primarily on computer science and statistics, I want to acknowledge that the foundations of data science also draw on other fields—for example, signal processing from Electronic Engineering, optimization from Operation Research, analysis from Applied Mathematics, and more. David and I expect that the future events in the foundations of data science will reach out to these fields*.”

The joint leadership of ACM and IMS in reaching out to many disciplines is a *Big Deal*. I am deeply grateful to the ACM leadership team, especially its Executive Director and CEO, Vicki Hanson, to the Summit co-chairs, Jeannette Wing and David Madigan, and to all the members of the Steering Committee, which include IMS representatives Chris Holmes (Oxford), Ryan Tibshirani (CMU), and Daniela Witten (UW), for having formally kicked off this joint effort in less than eight months. The first joint Summit was a great success by almost all measures. And it is about this “almost” qualification that I am writing to ask for your help, urgently.

As you will see from the program of the Summit [*below*], it was an extremely well-crafted program in terms of coverage of the topics and representativeness of the presenters. Indeed, the six-hour program packed with keynotes and panel debates was very inspiring and intense, so much so that one panel triggered the fire alarm—you can see how long we had to leave the auditorium. However, while the auditorium was packed with about 250 participants, the size of the IMS registered audience was smaller than the number of statisticians on the program.

I realize that the membership ratio of ACM to IMS is about 25:1, and hence the ratio at the Summit was not completely out of proportion. Nevertheless, if IMS truly wants to be a leading voice in DS, we have to move our collective feet to where our mouths say we want to be. We cannot keep complaining that we don’t have a seat at the table but not show up in numbers when we are invited or, worse, when we’re the co-hosts. The matter is very simple. If we don’t take these seats reserved for us, many others will. And few would keep reserving seats for those who don’t show up, no matter how important they are.

Of course, the IMS leadership needs to be more creative in finding ways to encourage members to attend such outreach events. For that, I am particularly grateful to David Madigan, together with Jeannette Wing, for leading the effort to secure an NSF (US National Science Foundation) grant which sponsored over 35 students and young researchers’ attendance at the Summit. It is telling that all of these funds were taken up within 24 hours of the award announcement, almost surely by CS students and young researchers.

This last observation makes me particularly appreciate a new emphasis by another IMS task force, co-chaired by Joseph Blitzstein (Harvard) and Deborah Nolan (Berkeley), which was inspired by Jon Wellner’s 2017 Presidential Address, *Teaching Statistics in the Age of Data Science*. Its general task is as hard to accomplish as it is easy to state: **to determine what the PhD curriculum for statistics should be, in the age of data science**. The task force is charged with complementing the work done at the NSF’s 2018 “Statistics at a Crossroads” workshops, one of which focused on PhD education. The complementary roles IMS can play are in (at least) two dimensions: going beyond the United States, and going deeper into probability. Its membership therefore reflects these dimensions: David Aldous (Berkeley), Emmanuel Candès (Stanford), Antonietta Mira (Università della Svizzera Italiana), Guy Nason (Bristol), Richard Samworth (Cambridge), Nike Sun (MIT), Qi-Man Shao (Southern University of Science and Technology of China), and Harrison Zhou (Yale). I am extremely grateful to this most prominent task force, which has been working diligently via monthly conference calls: no small feat considering the wide range of time zones! (I will leave this as a trivia question: what is the optimal call time the task force identified?)

The task force is working on a report that consists of four major parts:

**International Training**: Compare and contrast the programs in different countries, using various metrics, such as median length of program, number of required courses, topic breadth in required courses, and the depth of professional development.

**Resources**: Create, curate, and share course materials on emerging topics for which a textbook-style reference is not easy to find, and work out how to incentivize such efforts.

**Leadership**: Develop more PhD students into outstanding communicators and ambassadors for the importance of statistics and statistical thinking, in an era when the general public often hears about AI and ML but may have little understanding of the critical roles statistics plays, or even what it is.

**Probability**: Update the probability curriculum to better reflect the statistical and data scientific challenges students are starting to encounter, addressing the old debate on how much measure theory to include in the core probability course, and recent questions about the roles of CS and DS in the probability curriculum.

I am particularly grateful for and pleased to see the task force’s emphasis on building leadership while one is still a student. It is not a secret that for too long “leadership” has not been viewed as an essential skill, and in some faculty members’ minds it was (and perhaps still is) even a distraction, subtracting from one’s scholarship. The end result is that our profession simply does not have enough “outstanding communicators and ambassadors” out there to explain—and promote the importance of—what we do. Promotion is not a dirty word as long as we have substance to be promoted, and we absolutely do. The lack of general leadership training in statistics is hurting us in real terms, including in our pockets. At the latest NAS Committee on Applied and Theoretical Statistics (CATS) meeting I attended, representatives from NSF reminded the committee once again of a painful reality: the suggestions regarding what kinds of DS research the NSF should fund come almost exclusively from outside of the statistical community.

This was why I invited Juan Meza, the Director of the Division of Mathematical Sciences at NSF, to write to us directly last November. Meza told us about the **Harnessing the Data Revolution** initiative and asserted that, as DS evolves, “new strategies, methods, and theory will be needed to address all of the complex data issues arising.” He concluded with a call to action for statisticians and probabilists: “And who better to do this than those who have already contributed so much to data sciences?” But apparently such messages need to be repeated periodically, as we are simply a shy profession, especially compared to CS which has a much faster-paced and action-oriented culture.

Regardless of whether or not we feel our fellow disciplines are moving too aggressively, no one can hear us if all we do is to complain to each other that others don’t hear us. If we want to be a leading voice in the DS era, we must go out, communicate with other disciplines, speak to funding agencies, talk to the general public, etc. That is, we must work for what we wish for, just as we should always practice what we preach.

This is my departing wish as the IMS President. I look forward to thanking you in person for your trust in me when I see you at an ACM symposium or an AMS meeting or an IEEE workshop or an INFORMS conference.

Until then, please consider giving one presentation to your favorite high school. Thank you!

—

Watch the whole Summit on the Livestream at https://www.acm.org/data-science-summit/livestream

**9:00-9:05 AM – Introduction**

- Jeannette Wing, *Columbia University*

**9:05-9:40 AM – Keynote Talk: “Making the Black Box Effective: What Statistics Can Offer”**

- Emmanuel Candès, *Stanford University*
- Introduction: David Madigan, *Columbia University*

**9:40-10:20 AM – Panel: Deep Learning, Reinforcement Learning, and Role of Methods in Data Science**

- Moderator: Joseph Gonzalez, *University of California, Berkeley*
- Panelists:
  - Shirley Ho, *Flatiron Institute*
  - Sham Kakade, *University of Washington*
  - Suchi Saria, *Johns Hopkins University*
  - Manuela Veloso, *J.P. Morgan AI Research, Carnegie Mellon University*

10:20-10:35 AM – Break

**10:35-11:15 AM – Panel: Robustness and Stability in Data Science**

- Moderator: Ryan Tibshirani, *Carnegie Mellon University*
- Panelists:
  - Aleksander Madry, *Massachusetts Institute of Technology*
  - Xiao-Li Meng, *Harvard University*
  - Richard J. Samworth, *University of Cambridge, The Alan Turing Institute*
  - Bin Yu, *University of California, Berkeley*

**11:15-11:55 AM – Panel: Fairness and Ethics in Data Science**

- Moderator: Yannis Ioannidis, *National and Kapodistrian University of Athens*
- Panelists:
  - Joaquin Quiñonero Candela, *Facebook*
  - Alexandra Chouldechova, *Carnegie Mellon University*
  - Andrew Gelman, *Columbia University*
  - Kristian Lum, *Human Rights Data Analysis Group (HRDAG)*

11:55 AM-1:00 PM – Lunch

**1:00-1:35 PM – Keynote Talk: “Deep Learning for Tackling Real-World Problems”**

- Jeffrey Dean, *Google*
- Introduction: Suchi Saria, *Johns Hopkins University*

**1:35-2:10 PM – Keynote Talk: “Machine Learning: A New Approach to Drug Discovery”**

- Daphne Koller, *insitro*
- Introduction: Kristian Lum, *Human Rights Data Analysis Group*

2:10-2:20 PM – Break

**2:20-2:55 PM – Panel: Future of Data Science**

- Moderator: David Madigan, *Columbia University*
- Panelists:
  - Michael I. Jordan, *University of California, Berkeley*
  - Jeannette Wing, *Columbia University*

**2:55-3:00 PM – Closing Remarks:** David Madigan and Jeannette Wing, *Columbia University*


All of us were told as undergraduates, or perhaps Masters students, that an essential property of a point estimator is that it be consistent. And indeed, we usually or even always select estimators that are consistent. We are going to ask a provocative question in this month’s puzzle: is there consistency in the real world? The problem posed asks you to show that, in fact, what we believe to be consistent, when computed, is not.

Where does the unavoidable inconsistency in the real world come from? Although the human body is an amazing machine, it is not a perfect one. The practical inconsistency comes from human limitations in the precision of a measurement. This inconsistency is incurable, and a large sample won’t fix it. Here is this month’s exact problem.

Suppose we have an iid sequence of exponential random variables $X_1, X_2, \cdots $ with mean $\lambda $. Suppose the values are rounded off using an often-used rule: an observation $X$ is written down as zero if $X \leq 0.005$, as $0.01$ if $X$ is between $0.005$ and $0.015$, as $2.00$ if it is between $1.995$ and $2.005$, and so on.

Call this recorded value $Y$ and consider the mean $\bar{Y}$ for a sample of size $n$.

(a) Prove that $\bar{Y}$ is not a consistent estimator of $\lambda $.

(b) Find in closed form a parametric function $h(\lambda )$ such that $\bar{Y}$ is a consistent estimator of $h(\lambda )$.

(c) Derive an asymptotic expansion to one, or if you can, two terms, for $h(\lambda) - \lambda$ as $\lambda \to \infty$.
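As a numerical sanity check of the claim in (a), and not a substitute for a proof, one can evaluate the expectation of the rounded variable directly by summing the series (the function name and the choice $\lambda = 0.02$ are ours, for illustration):

```python
import math

DELTA = 0.01  # rounding grid of the recording rule

def mean_of_rounded(lam, terms=5000):
    """E[Y], where Y rounds an Exponential(mean=lam) variable to the nearest
    multiple of DELTA: sum over k of k*DELTA * P((k-1/2)DELTA < X <= (k+1/2)DELTA)."""
    return sum(
        k * DELTA * (math.exp(-(k - 0.5) * DELTA / lam)
                     - math.exp(-(k + 0.5) * DELTA / lam))
        for k in range(1, terms)
    )

lam = 0.02
bias = mean_of_rounded(lam) - lam
# bias is nonzero, so the sample mean of the rounded values converges to
# E[Y] rather than lam, and cannot be consistent for lam.
```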

**Send us your answer by September 5! Email bulletin@imstat.org.**

Congratulations to the *four* student members who sent in correct answers—some more complete than others. They are **Prakash Chakraborty**, Purdue University; **Sihan Huang**, Columbia University; **Kumar Somnath**, The Ohio State University; and **Andrew Thomas**, Purdue University.

Now for the solution. Denote the number of steps required to reach the point $(n,n)$ by $S_n$. The quickest that the particle can reach the point $(n,n)$ is in $2n$ steps, which happens if exactly $n$ heads and $n$ tails are produced in $2n$ tosses of our fair coin. This has probability $\frac{{2n \choose n}}{2^{2n}}$.

Next, for any given integer $k \geq 1$, $P(S_n = 2n+k) = {2n+k-1 \choose n-1}\,2^{-2n-k+1}$.

Thus, $\mu _n = E(S_n) = 2n+\sum_{k=1}^\infty k\,{2n+k-1 \choose n-1}\,2^{-2n-k+1} = 2n\,\bigg (1+\frac{{2n \choose n}}{2^{2n}}\bigg )$, with a little bit of calculation. In particular, $\mu_3 = \frac{63}{8} = 7.875$, and on using Stirling’s series for $n!$, we get

$\mu_n = 2n+\frac{2\,\sqrt{n}}{\sqrt{\pi }}-\frac{1}{4\,\sqrt{n\pi }}+O(n^{-3/2})$.
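The distribution, the closed form, and the value $\mu_3 = 63/8$ can all be checked numerically with a few lines of Python (this sketch is ours, not part of the published solution):

```python
from math import comb

def mean_steps(n, terms=400):
    """E[S_n] summed from P(S_n = 2n+k) = C(2n+k-1, n-1) * 2^{-(2n+k-1)},
    together with the closed form 2n * (1 + C(2n, n) / 4^n)."""
    series = 2 * n + sum(
        k * comb(2 * n + k - 1, n - 1) * 2.0 ** (-(2 * n + k - 1))
        for k in range(1, terms)
    )
    closed = 2 * n * (1 + comb(2 * n, n) / 4 ** n)
    return series, closed

series3, closed3 = mean_steps(3)   # both equal 63/8 = 7.875
```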

The Penn Department of Statistics is now in its 88th year. Over this period of time it has experienced many changes, and one of these was especially transformative. I’m referring to Larry joining the Department in 1994. In addition to his strength and accomplishments as a researcher, he brought to the Department an exceptionally high level of excellence in teaching and mentoring.

During his career Larry was the dissertation adviser for 37 students, with 21 of them completing their degrees at Penn during the years 1998–2016. [*Fourteen of those 21 were present at the event, as well as many others who benefited from Larry’s teaching, advice, and collaboration.*]

In 2011 Larry received a Provost’s Award for Distinguished PhD Teaching and Mentoring at the University of Pennsylvania. Comments submitted at the time included: *“all you could hope for in an adviser”; “has a special way of finding important problems that highlight the talents of each of his doctoral students”; “willing to advise any student, weak or strong”; “he somehow manages to bring out their full potential. Getting to know each and every one of them, he consistently manages to come up with a fertile new research direction that is suitably tailored to their interests and talents. … In this way, he provides the perfect preparation for an academic research career, giving his students the confidence and self-resourcefulness that is so critical for success”; “devotes endless amount of attention and time to each of his students, and by doing so, he truly portrays what academics is about: nurturing young minds to investigate new problems and come up with solutions.”*

At the December 15–17, 2010 conference in honor of his 70th birthday, Larry’s students presented him with a plaque that read, in part: *“Happy Birthday Larry! We love you! We thank you! For teaching us how to be successful as a professional and a human; for inspiring us to achieve our full potential; for taking care of us; … and for being such a great role model for us.”*

Larry’s devotion to guidance and mentoring was always accompanied by personal modesty.

I want to conclude by talking about intercollegiate athletics at Cal Tech. Larry played basketball at Cal Tech, first on the freshman team and then three years on the varsity. One of his teammates, Mike Perlman, provided some information about the team.

As we all know, Cal Tech is not an athletic powerhouse. However, during Larry’s tenure on the Cal Tech basketball team, it did experience a respectable number of wins. In 1959–1960, it recorded 6 wins and 15 losses. Mike wrote: “Besides Larry, who was an intense competitor, the team featured Fred Newman, who was a first-rate player—he held many conference scoring records. Another name you may know is Roger Noll, who became a well-known economist.” [Noll is now an emeritus Professor of Economics at Stanford.] In 1960–1961 the team had 8 wins and 12 losses. “I’m not certain, but there may not have been as good a record since—the Legendary Losing Streaks began sometime after that,” Mike added.

In recent years Cal Tech teams have experienced very long losing streaks—articles have appeared in the press when these streaks have been broken. On several occasions, I talked to Larry after reading such news. One such event occurred on February 22, 2011. Cal Tech basketball defeated Occidental 46–45 in its final game of the season. In doing so, it scored the final nine points, and the winning basket was the first of two free throws, with three seconds left. The second attempt missed, and a desperation shot from halfcourt by Occidental did not connect. This win ended a 310-game losing streak. I had a lot of fun talking to Larry about this!

But there was a far worse losing streak, which occurred in baseball. On March 31, 2017, Cal Tech defeated Pomona–Pitzer 4–3 with a walk-off win. It had not won in the SCIAC previously since 1988—a span that included 587 games. I went to talk to Larry again. He said, “Oh, I once played baseball for Cal Tech.” As he explained, the team was short-handed for a game. The first baseman was his roommate and persuaded him to fill in for a game. It’s just another example of how cooperative and helpful Larry always was.

—

**Lawrence D. Brown** (1940–2018), Miers Busch Professor and Professor of Statistics at The Wharton School, University of Pennsylvania, had a distinguished academic career with groundbreaking contributions to a range of fields in theoretical and applied statistics. He was an IMS Fellow, Wald Lecturer, and a former IMS President. Moreover, he was an enthusiastic and dedicated mentor to many graduate students. Larry’s firm dedication to research, teaching and service set an exemplary model for generations of new statisticians. Therefore, the IMS is introducing a new award in his honor: the **IMS Lawrence D. Brown PhD Student Award**. [*The deadline was July 15, 2019.*] This annual travel award will be given to three PhD students, who will present their research at a special invited session during the IMS Annual Meeting. The winners of the inaugural 2020 award will be announced in a future issue.

**Donations are welcome**, through https://www.imstat.org/contribute-to-the-ims/ under “IMS Lawrence D. Brown Ph.D. Student Award Fund.”

We mark the passing of Dr. Joel Zinn, Professor Emeritus of Mathematics, Texas A&M University, on December 5, 2018, at his home in Westlake Village, California. He was 72 years old.

Joel was born on March 16, 1946, in Brooklyn, New York. His family said, about Joel’s early attraction to the subject, “He fell in love with mathematics at an early age, determining in the sixth grade that it would become his life’s work.” Having graduated from high school at 16, Joel earned his bachelor’s degree in mathematics from Queens College. While there, he met Michele; they married in 1968 and moved to Madison, Wisconsin, for Joel to pursue his PhD in Mathematics under Jim Kuelbs. Joel’s early career took him to the University of Minnesota, University of Massachusetts and Michigan State; in 1981, he became Associate Professor at Texas A&M, where he would remain for 36 years, becoming Professor Emeritus in 2007.

By 1978, Joel had published five papers, three as sole author. Topics included 0–1 laws, stable measures, translation of measures on vector spaces, and recurrence in stationary sequences. During ’78–’81, he published 13 more papers on topics including: probability on *L^{p}* spaces, stable laws, iterated logarithm laws on Banach spaces, weighted empirical processes, limit theorems in Banach spaces, and random sets.

Many mathematicians will have shared the experience of being caught up in a mathematical question, thrilled to the point where they lost track of time. Joel spent a lot of time in such a world, making his choices and producing innovative mathematical results. Excellence is about getting particular things done and bearing up when something doesn’t work. Working with others is in many cases essential if progress is to be made, and such collaboration can create a highly efficient and supportive culture.

Joel was earning a reputation for steady production of fundamental research published in top journals, working and communicating with an impressive list of strong researchers. Joel co-authored 22 publications with Evarist Giné (who passed away in 2015).

As of today, Joel’s most highly cited publication is the 61-page 1984 *Annals of Probability* Special Invited Paper, co-authored with Giné: “Some limit theorems for empirical processes.” In some important parts, it was their improvement of basic results underpinning existing proofs that made the difference. In structuring the paper, they adopted spare, unambiguous notation and a narrative examining years of advances on this topic by luminaries in probability theory, some of it unpublished work by Le Cam. Their treatment of details, old and new, is refreshing, accurate, author-credited, and powerful. They had discovered ways to get a handle on the performance of empirical probabilities of events used as estimates of their actual probabilities, for all events in unusually large classes of events. Such results were particularly needed to support the role of the Bootstrap in complex problems. Not to be overlooked are passages that give one the feel of being in the same room with lots of people you admire.

As the list of Joel Zinn’s different co-authors expanded, so too did the scope of his research topics. Most remained in the category of basic research, partly because the fundamentals he continually worked on were designed to punch through roadblocks standing in the way of broadly applicable mathematical formulas. His later topics included: uniform convergence of weighted kernel density estimators, various laws of the iterated logarithm, a central limit theorem for empirical processes involving time-dependent data, and the question of when the Student *t*-statistic is asymptotically standard normal.

Another highly cited publication is Joel’s 1990 paper, again co-authored with Evarist Giné: “Bootstrapping general empirical measures.” Building on the ’84 paper, it was timely, represented an important commitment of probability talent, and genuinely extended the role of the Bootstrap to complex probability models still of interest today. These results, not so many years after the Bootstrap burst forth, elevated yet another part of probability to a higher level of mathematical maturity, clarity, generality, and leadership.

Joel’s many substantive publications, including his co-authorship (with V. Koltchinskii, R. Nickl, S. van de Geer and J. Wellner) of an obituary for Evarist Giné, often share elements of acute clarity, precise notation, wonderful narratives, and complete airing of important strengths and weaknesses of component parts.

Considering the pace of development, it seems that much of our precious Probability is carried around in the heads of those in the network of knowledgeable persons working that ground, many of whom are close at hand from shared academic ancestry. Like art and music, our subject will change over time. What is timeless is the value of colleagues like Joel Zinn who leave things in proper order for those who follow.

We close with a Toast: *“To Probabilists, Joel Zinn, et al.”*

*—*

*Written by Raoul LePage, Michigan State University*