The Institute of Mathematical Statistics has selected **Elyse Gustafson** as the recipient of this year’s Harry C. Carver Medal. The award is made for Elyse’s exceptional service and dedication as Executive Director of the IMS over the past 20 years. Throughout this time, which included relocation of the IMS office, unpredictable fiscal challenges and substantial changes in the publishing industry, the IMS functioned smoothly as a preeminent society and publisher, under the administrative leadership of Elyse Gustafson. As the sole permanent staff person, Elyse has admirably managed a team of dedicated contractors and provided outstanding support for the IMS Executive Committee, Council, multiple IMS committees and journal editorial boards. She is especially recognized for her extraordinary ability to cooperate efficiently and cheerfully with a huge number of members who volunteer their time to help with IMS activities and who have a wide range of ideas and working styles.

Elyse will receive the Carver Medal at the IMS Presidential Address and Awards ceremony on Monday, July 31, at JSM in Baltimore (8:00pm in Ballroom 1; see the JSM online program).

On hearing about her award, Elyse said, “I am surprised and honored to receive the Carver Medal. Working for the IMS for the last 20 years has been incredibly fulfilling. The volunteer leadership of the IMS is deeply dedicated to the mission of the organization. They are what makes this job so rewarding. Cultivating the organization together with these leaders has been more than I can hope for. I look forward to many more years together.”

The Carver Medal was created by the IMS in honor of Harry C. Carver, Founding Editor of the *Annals of Mathematical Statistics* and one of the founders of the IMS. The medal is for exceptional service specifically to the IMS and is open to any member of the IMS who has not previously been elected President. See http://www.imstat.org/awards/carver.html for more information on the nomination process (it’s not too early to start thinking about nominations for next year! You can check the list of past recipients).

Maury D. Bramson, professor of mathematics at the University of Minnesota, Minneapolis, was among the 84 new members and 21 foreign associates elected to the US National Academy of Sciences. Members and Associates are elected in recognition of their distinguished and continuing achievements in original research. Those elected this year bring the total number of active members to 2,290 and the total number of foreign associates to 475.

Maury Bramson works in probability theory, including interacting particle systems (with applications to mathematical physics, physical chemistry, and biological systems), branching Brownian motion (with applications to mathematical physics and biological systems), and stochastic networks (with applications to electrical and industrial engineering, computer science, and operations research). Among his honors, Bramson is a fellow of IMS and the American Mathematical Society, and was an invited speaker at the 1998 International Congress of Mathematicians.

I am very happy to bring you news of **Leonid Mytnik**’s Humboldt-Forschungspreis. Leonid and I met, not entirely by chance, in Bamberg, Franconia, in late March. The von Humboldt symposium held there over a few days was an exceptional event for me in many ways. It was the first time I had seen Leonid in person for about ten years. It was my first time in Bamberg, a UNESCO World Heritage site, unknown to me a year ago. It was the first time I had participated in a ceremony honoring 46 scientists from many different disciplines simultaneously. Leonid was not among the 46—he will be honored later this year at the von Humboldt Annual Meeting in Berlin—but I was. Last November I received a Friedrich Wilhelm Bessel-Forschungspreis, an analog of Leonid’s award in my (lighter) scientific category. My host is Anja Sturm at Georg-August-Universität, Göttingen. I look forward to this exciting year. Since there is no “free lunch”, at the moment I am paying in time spent on all the practical aspects, and I will most likely be quiet for a while. Before that, let me use this opportunity to thank again those who made my year.

**The Sixteenth Annual Janet L. Norwood Award**

The Department of Biostatistics and the School of Public Health, University of Alabama at Birmingham (UAB), are pleased to request nominations for the Sixteenth Annual Janet L. Norwood Award for Outstanding Achievement by a Woman in the Statistical Sciences. The award will be conferred on Wednesday, September 13, 2017. The award recipient will be invited to deliver a lecture at the UAB award ceremony, and will receive the award, a $5,000 prize, and reimbursement of all expenses.

Eligible individuals are women who have completed their terminal degree, have made extraordinary contributions, and have an outstanding record of service to the statistical sciences, with an emphasis both on their own scholarship and on teaching and leadership (of the field in general and of women in particular). Nominees must be willing, if selected, to deliver a lecture at the award ceremony. For additional details about the award, please visit our website at http://www.soph.uab.edu/awards/norwoodaward.

To nominate, please send a full curriculum vitae accompanied by a letter of not more than two pages in length describing the nature of the candidate’s contributions. Contributions may be in the area of development and evaluation of statistical methods, teaching of statistics, application of statistics, or any other activity that can arguably be said to have advanced the field of statistical science. Self-nominations are acceptable.

Please send nominations to Charity Morgan, PhD, Assistant Professor, Biostatistics: cjmorgan@uab.edu. The deadline for receipt of nominations is **June 23**, 2017. Electronic submissions of nominations are encouraged. The winner will be announced by July 3.

Previous recipients of the award, starting in 2002, are: Jane F. Gentleman, Nan M. Laird, Alice S. Whittemore, Clarice R. Weinberg, Janet Turk Wittes, Marie Davidian, Xihong Lin, Nancy Geller, L. Adrienne Cupples, Lynne Billard, Nancy Flournoy, Kathryn Roeder, Judith D. Singer, Judith D. Goldberg and Francesca Dominici.

**Ulf Grenander Prize**

The American Mathematical Society’s Ulf Grenander Prize in Stochastic Theory and Modeling is a new prize that recognizes exceptional theoretical and applied contributions in stochastic theory and modeling. It is awarded for seminal work, theoretical or applied, in probabilistic modeling, statistical inference, or related computational algorithms, especially for the analysis of complex or high-dimensional systems. The prize was established by colleagues of Ulf Grenander, who died in 2016. A longtime faculty member and chair of the Brown University Department of Applied Mathematics, Grenander received many honors. He was a fellow of IMS, the American Academy of Arts and Sciences and the National Academy of Sciences, as well as a member of the Royal Swedish Academy. See his obituary: http://bulletin.imstat.org/2017/04/obituary-ulf-grenander-1923-2016/

Nominations are open until **June 30** for the first Grenander Prize, which will be awarded in January 2018. For details and to nominate, please visit the AMS website at http://www.ams.org/profession/prizes-awards/ams-prizes/grenander-prize

**AMS Bertrand Russell Prize**

The AMS has also created the Bertrand Russell Prize, to honor research or service contributions of mathematicians in promoting good in the world and to recognize how mathematics furthers human values. Nominate by **June 30**: http://www.ams.org/profession/prizes-awards/russell-prize

Born in Brooklyn, New York, in 1920, Stein was a prodigy who entered the University of Chicago at age 16. There, he fell under the spell of abstraction through Saunders Mac Lane and Adrian Albert. His first applied statistical work was done making weather forecasts during World War II. Working with another youngster, Gil Hunt, Stein studied the interaction of invariance and accuracy, a lifelong theme.

Suppose that $P_\theta(dx)$, $\theta \in \Theta$, is a family of probabilities on a space $X$, and suppose that a group $G$ acts on $X$, taking $x$ to $x^g$ (think of taking Fahrenheit to Celsius). The group is said to act on the family if, for every $g$, there is a $\bar{g}$ such that $P_\theta(x^g)=P_{\theta^{\bar{g}}}(x)$. An estimator $\hat{\theta}(x)$ is equivariant if $\hat{\theta}(x^g)=\bar{g}\hat{\theta}(x)$. If $L(\theta,\hat{\theta})$ is a loss function, an estimator $\theta^*$ is minimax if

$$\inf_{\hat{\theta}}\sup_{\theta} EL(\hat{\theta},\theta) = \sup_{\theta} EL(\theta^*,\theta).$$

The Hunt–Stein theorem shows that if an estimator $\theta^*$ is minimax among all equivariant estimators, then it is minimax among all estimators, provided that the group is *amenable*. This is a remarkable piece of work: often it is straightforward to write down all equivariant estimators and find the best one. That such an estimator is globally optimal, and that amenability intervenes, is remarkable; that this was done under wartime conditions by two college kids is astounding.
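A textbook illustration of how the pieces fit together (our example, not taken from Hunt and Stein's work): estimating a normal location parameter under squared-error loss.

```latex
% Location model: X ~ N(\theta, 1), loss L(\theta, d) = (\theta - d)^2.
% The translation group acts by x^g = x + g, with \theta^{\bar g} = \theta + g.
% Equivariance forces \hat\theta(x + g) = \hat\theta(x) + g, i.e. \hat\theta(x) = x + c:
\hat\theta_c(x) = x + c, \qquad
E_\theta\, L\bigl(\theta, \hat\theta_c(X)\bigr) = E(Z + c)^2 = 1 + c^2
\quad (Z \sim N(0,1)).
```

The risk $1+c^2$ is constant in $\theta$ and minimized at $c=0$, so the best equivariant estimator is $\hat\theta(x)=x$. Since the translation group $(\mathbb{R},+)$ is abelian, hence amenable, the Hunt–Stein theorem upgrades this local conclusion: $x$ is minimax among *all* estimators.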

Stein’s work on invariance under a group energized Erich Lehmann, who wrote up the Hunt–Stein theorem in the first edition of his testing book (the original manuscript is lost) and established equivariance as a general statistical principle.

After the war, Stein entered graduate school at Columbia to work with Abraham Wald. Following Wald’s tragic death in an airplane accident, Stein’s thesis was read by Harold Hotelling and Ted Anderson.

Stein’s thesis [ref1] solved a problem posed by Neyman: find a fixed-width confidence interval for a normal mean when the variance is unknown. The usual t-interval has random width, governed by the sample standard deviation, and George Dantzig had proved that no such confidence interval exists based on a fixed sample of size $n$. Stein introduced a two-stage procedure: a preliminary sample is taken; this is used to determine the size of a second sample; and the combined sample finally yields the estimator. Combining these ideas to get an exact procedure takes a very original piece of infinite-dimensional analysis, still impressive 70 years later.
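The two-stage logic is easy to simulate. The sketch below is our own illustration, not Stein's exact analysis (the sample sizes, constants, and function name are ours): stage one estimates the variance, and stage two enlarges the sample just enough that a half-width-$d$ interval around the grand mean achieves the nominal coverage, whatever the unknown variance is.

```python
import numpy as np
from scipy.stats import t as t_dist

def stein_two_stage(rng, mu, sigma, n0=15, d=0.5, alpha=0.05):
    """One run of a Stein-style two-stage fixed-width interval for a normal mean.

    Stage 1: n0 observations give s^2.  Stage 2: enlarge the sample so that
    t_crit * s / sqrt(N) <= d, making the width-2d interval around the grand
    mean valid at level 1 - alpha regardless of the unknown sigma.
    """
    x1 = rng.normal(mu, sigma, n0)
    s2 = x1.var(ddof=1)
    tcrit = t_dist.ppf(1 - alpha / 2, df=n0 - 1)
    # total sample size chosen from the first-stage variance estimate
    N = max(n0, int(np.ceil(tcrit**2 * s2 / d**2)))
    x2 = rng.normal(mu, sigma, N - n0)
    mean = np.concatenate([x1, x2]).mean()
    return mean - d, mean + d   # the interval always has width exactly 2d

rng = np.random.default_rng(0)
covered = sum(lo <= 3.0 <= hi
              for lo, hi in (stein_two_stage(rng, 3.0, 2.0) for _ in range(2000)))
print(covered / 2000)  # Stein's guarantee: coverage at least 1 - alpha
```

The point of the simulation is that the interval width is fixed in advance at $2d$, yet the coverage guarantee holds for every value of the nuisance parameter $\sigma$, which a fixed-$n$ procedure cannot achieve.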

Stein taught at UC Berkeley from 1947 to 1950, then, having refused to sign Berkeley’s loyalty oath during the McCarthy era, he moved from Berkeley to Chicago and then to Stanford University’s Department of Statistics in 1953, where he spent the rest of his career.

A celebrated contribution to decision theory is Stein’s necessary and sufficient condition for admissibility. Roughly, this says that any admissible procedure in a statistical decision problem is a limit of Bayes rules. The setting is general enough to encompass both estimation and testing. Stein had a lifetime aversion to the “cult of Bayes”: in the *Statistical Science* interview with DeGroot [ref2] he said, “The Bayesian point of view is often accompanied by an insistence that people ought to agree to a certain doctrine, even without really knowing what that doctrine is”. He told us that it took him five years to publish the result, until he could find a non-Bayesian proof. He softened in later years: when discussing his estimate of the multivariate mean, the theory allows shrinkage towards any point, and Stein said, “I guess you might as well shrink towards your best guess at the mean.” He made one further philosophical point to us regarding his theorem: the theory says that good estimators are Bayes, but it is perfectly permissible to use one prior to estimate one component of a vector of parameters and a completely different prior to estimate the other coordinates. For Stein, priors could suggest estimators, but their properties should be understood through the mathematics of decision theory.

Stein contributed to several other areas of statistics. “Stein’s lemma” [ref3], bounding the tails of stopping times in sequential analysis, is now a standard tool. In nonparametrics, he showed that to estimate $\theta$ given $X_i=\theta+\epsilon_i$, where the law of $\epsilon_i$ is unknown, one can first estimate the law of $\epsilon$ by nonparametric density estimation and then combine that with a Pitman-type estimator; the result has remarkable optimality properties.

The “Sherman–Stein–Cartier–Fell–Meyer” theorem formed the basis of Stein’s (unpublished) Wald lecture. This started the healthy field of comparison of experiments, brilliantly developed by Lucien Le Cam [ref4]. Two brief but telling notes [ref5, ref6] show Stein’s idiosyncratic use of clever counter-examples to undermine preconceptions that everyone believed true.

Throughout his statistical work, Stein preferred “properties over principles”. Here is the way he explained the content of his shrinkage estimate to us: if you ask working scientists whether estimators should obey natural transformation rules (changing from feet to meters), they will agree this is mandatory. Most would also prefer an estimator which always has a smaller expected error. Stein’s paradox shows that these two principles are incompatible (shrinkage estimators are not equivariant).

Roughly, the second half of Stein’s research career was spent developing a new method of proving limit theorems (usually with explicit finite sample error bounds): what is now called Stein’s method. This separation is artificial, because Stein saw the subjects of statistics and probability as intertwined. For example, in working out better estimates of an unknown covariance matrix, Stein discovered, independently, Wigner’s semi-circle law for the eigenvalues of the usual sample estimator. His estimator shrank those eigenvalues to make them closer to the true ones, and he then proved that this estimator beats the naive one. Stein’s method of exchangeable pairs seems to have been developed as a new way of proving Hoeffding’s combinatorial central limit theorem, whose starting point is a non-random $n\times n$ matrix $A$. Form the random diagonal sum $W_{\pi}=\sum_{i=1}^n A_{i\pi(i)}$, where $\pi$ is a uniformly chosen permutation of $\{1,2,\ldots,n\}$; under mild conditions on $A$, this $W_\pi$ has an approximately normal limit. This unifies the normality of sampling without replacement from an urn, limit theorems for standard nonparametric tests, and much else. Stein compared the distributions of $W_\pi$ and $W_{t\pi}$, where $t$ is a random transposition. He showed that these differ by a small amount, and that this mimicked his famous characterization of the normal: a random variable $W$ has a standard normal distribution if and only if $E(Wf(W))=E(f'(W))$ for all smooth bounded $f$. The $W_\pi$ satisfy this identity approximately, and Stein proved this was enough. The earliest record of this work is in class notes taken by Lincoln Moses in 1962 (many faculty regularly sat in on Stein’s courses). His first publication on this approach used Fourier analysis and is recorded in [ref7]. The Fourier analysis was dropped and the method expanded into a definitive theory, published as a book in the IMS monograph series [ref8].
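Both the combinatorial CLT and the characterizing identity are easy to see numerically. The simulation below is our own illustration (the matrix, sample sizes, and test function $f=\tanh$ are arbitrary choices): the mean of $W_\pi$ matches $\frac{1}{n}\sum_{ij}A_{ij}$, and the standardized values approximately satisfy $E(Wf(W))=E(f'(W))$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = rng.uniform(0, 1, (n, n))          # a fixed, non-random matrix

# W_pi = sum_i A[i, pi(i)] for a uniformly random permutation pi
W = np.array([A[np.arange(n), rng.permutation(n)].sum() for _ in range(20000)])

print(W.mean(), A.sum() / n)           # E W_pi = (1/n) * sum_ij A_ij

# Check Stein's characterization E[W f(W)] = E[f'(W)] on standardized values,
# using f = tanh, so f' = 1 - tanh^2
Z = (W - W.mean()) / W.std()
lhs = (Z * np.tanh(Z)).mean()
rhs = (1 - np.tanh(Z)**2).mean()
print(lhs, rhs)                        # close, reflecting approximate normality
```

Exact equality of the two sides for every smooth bounded $f$ would force $Z$ to be exactly standard normal; approximate equality is exactly the hook that Stein's method exploits to get quantitative error bounds.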

Two important parts of Stein’s life were family and politics. Charles met Margaret Dawson while she was a graduate student at UC Berkeley. She shared his interest in statistics; they translated A. A. Markov’s letters to A. A. Chuprov [ref9] and worked together as political activists. While Charles almost always had his head in the clouds, Margaret made everything work and guided many of his professional decisions. Their three children, Charles Jr, Sarah and Anne, grew up to be politically active adults. Margaret passed away a few months before her husband. He is survived by two daughters: Sarah Stein, her husband, Gua-su Cui, and their son, Max Cui-Stein, of Arlington, Massachusetts; by Anne Stein and her husband, Ezequiel Pagan, of Peekskill, NY; and by his son Charles Stein Jr. and his wife, Laura Stoker, of Fremont, California.

Politics of a very liberal bent were a central part of the Steins’ world. Charles led protests against the war (and was even arrested for it), Margaret was a singing granny and a community organizer. The family traveled to the Soviet Union and the kids went around to churches and schools upon returning to try to humanize the USSR’s image.

Charles shared his ideas and expertise selflessly. He read our papers, taught us his tools and inspired all of us by his integrity, depth and humility. All of our worlds are a better place for his being.

—

Written by Persi Diaconis and Susan Holmes, Stanford, CA

**References:**

[1] Stein, C.M. (1953) *A two-sample test for a linear hypothesis having power independent of the variance*. PhD thesis, Columbia University.

[2] DeGroot, M.H. (1986) A conversation with Charles Stein. *Statistical Science* **1**: 454–462.

[3] Stein, C. (1946) A note on cumulative sums. *Ann Math Statist* **17**: 498–499.

[4] Le Cam, L., Yang G.L. (2012) *Asymptotics in statistics: some basic concepts*. Springer Science & Business Media.

[5] Stein, C. (1959) An example of wide discrepancy between fiducial and confidence intervals. *Ann Math Statist* **30**: 877–880.

[6] Stein, C. (1962) A remark on the likelihood principle. *J Roy Statist Soc Ser A* **125**:565–568.

[7] Stein, C. (1972) A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. *Proc. Sixth Berkeley Symp. Math. Statist. Probab.* **2**: 583–602.

[8] Stein, C. (1986) *Approximate computation of expectations*, volume 7 of IMS Lecture Notes–Monograph Series.

[9] Markov, A., Chuprov, A. (1971) *The correspondence on the theory of probability and mathematical statistics*. Springer-Verlag. Translated from the Russian by Charles M. and Margaret D. Stein.

At the SPA meeting, Richard Kenyon will be delivering the **Schramm Lecture** [link], and Takashi Kumagai [link] and Marta Sanz-Solé [link] will deliver **Medallion Lectures**. At JSM the **COPSS Fisher Lecturer** is Rob Kass [link], the **Wald Lecturer** is Emmanuel Candès [link], and one of the five **Medallion Lecturers** is Mark Girolami [link].

The other IMS lecturers at JSM are Martin Wainwright (**Blackwell lecture**), Jon Wellner (**Presidential Address**), and Edo Airoldi, Emery Brown, Subhashis Ghosal and Judith Rousseau (**Medallion lectures**)—look out for previews in the next issue. [Note that Thomas Mikosch was due to give his **Medallion lecture** at the APS meeting in Evanston (July 10–12), but this has been rescheduled to next year’s IMS annual meeting, in Vilnius.]

We invite nominations for special IMS lectures: see this link.

Robert E. (Rob) Kass is the Maurice Falk Professor of Statistics and Computational Neuroscience at Carnegie Mellon University. Rob received his PhD in Statistics from the University of Chicago in 1980. His early work formed the basis for his book *Geometrical Foundations of Asymptotic Inference*, co-authored with Paul Vos. His subsequent research has been in Bayesian inference and, since 2000, in the application of statistics to neuroscience. Rob Kass is known for his methodological contributions, and for several major review articles, including one with Adrian Raftery on Bayes factors (*JASA*, 1995), one with Larry Wasserman on prior distributions (*JASA*, 1996), and a pair with Emery Brown on statistics in neuroscience (*Nature Neuroscience*, 2004, also with Partha Mitra; *Journal of Neurophysiology*, 2005, also with Valerie Ventura). His book *Analysis of Neural Data*, with Emery Brown and Uri Eden, was published in 2014. Kass has also written widely-read articles on statistical education. Recently, he and several co-authors published “Ten Simple Rules for Effective Statistical Practice” (*PLOS Computational Biology*, 2016).

Kass has served as Chair of the Section for Bayesian Statistical Science of the American Statistical Association, Chair of the Statistics Section of the American Association for the Advancement of Science, founding Editor-in-Chief of the journal *Bayesian Analysis*, and Executive Editor of *Statistical Science*. He is an elected Fellow of IMS, ASA and AAAS. He has been recognized by the Institute for Scientific Information as one of the 10 most highly cited researchers, 1995–2005, in the category of mathematics (ranked #4). In 2013 he received the Outstanding Statistical Application Award from the ASA for his 2011 paper in the *Annals of Applied Statistics* with Ryan Kelly and Wei-Liem Loh. In 1991 he began the series of eight international workshops, Case Studies in Bayesian Statistics, which were held every two years at Carnegie Mellon, and was co-editor of the six proceedings volumes that were published by Springer. He also founded and has co-organized the international workshop series Statistical Analysis of Neural Data, which began in 2002; the eighth iteration takes place in May, 2017. In 2014 Kass chaired an ASA working group that produced the forward-looking report Statistical Research and Training Under the BRAIN Initiative.

Kass has been on the faculty of the Department of Statistics at Carnegie Mellon since 1981; he joined the Center for the Neural Basis of Cognition (CNBC, run jointly by CMU and the University of Pittsburgh) in 1997, and the Machine Learning Department (in the School of Computer Science) in 2007. He served as Department Head of Statistics from 1995 to 2004 and was appointed Interim CMU-side Director of the CNBC in 2015.

JSM 2017 in Baltimore, MD, USA, Wednesday, August 2, 4:00pm

The brain’s complexity is daunting, but much has been learned about its structure and function, and it continues to fascinate: on the one hand, we are all aware that our brains define us; on the other hand, it is appealing to regard the brain as an information processor, which opens avenues of computational investigation.

While statistical models have played major roles in conceptualizing brain function for more than 50 years, statistical thinking in the analysis of neural data has developed much more slowly. This seems ironic, especially because computational neuroscientists can—and often do—apply sophisticated data analytic methods to attack novel problems. The difficulty is that in many situations, trained statisticians proceed differently than those without formal training in statistics.

What makes the statistical approach different, and important? I will give you my answer to this question, and will go on to discuss a major statistical challenge, one that could absorb dozens of research-level statisticians in the years to come.

Emmanuel Candès is the Barnum-Simons Chair in Mathematics and Statistics, and professor of Electrical Engineering (by courtesy) at Stanford University, where he currently chairs the Department of Statistics. Emmanuel’s work lies at the interface of mathematics, statistics, information theory, signal processing and scientific computing: finding new ways of representing information and of extracting information from complex data. Candès graduated from the Ecole Polytechnique in 1993 with a degree in science and engineering, and received his PhD in Statistics from Stanford in 1998. He received the 2006 NSF Alan T. Waterman Award, the 2013 Dannie Heineman Prize from Göttingen, SIAM’s 2010 George Pólya Prize, and the 2015 AMS-SIAM George David Birkhoff Prize in Applied Mathematics. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences.

The Wald Lectures are delivered this year at JSM in Baltimore.

For a long time, science has operated as follows: a scientific theory can only be tested empirically, and only after it has been advanced. Predictions are deduced from the theory and compared with the results of decisive experiments so that they can be falsified or corroborated. This principle, formulated independently by Karl Popper and by Ronald Fisher, has guided the development of scientific research and statistics for nearly a century. We have, however, entered a new world where large data sets are available prior to the formulation of scientific theories. Researchers mine these data relentlessly in search of new discoveries and it has been observed that we have run into the problem of irreproducibility. Consider the April 23, 2013 *Nature* editorial: “Over the past year, *Nature* has published a string of articles that highlight failures in the reliability and reproducibility of published research.” The field of Statistics needs to re-invent itself and adapt to this new reality in which scientific hypotheses/theories are generated by data snooping. In these lectures, we will make the case that statistical science is taking on this great challenge and discuss exciting achievements.

An example of how these dramatic changes in data acquisition have informed a new way of carrying out scientific investigation is provided by genome-wide association studies (GWAS). Nowadays we routinely collect information on an exhaustive collection of possible explanatory variables to predict an outcome or understand what determines an outcome. For instance, certain diseases have a genetic basis, and an important biological problem is to find which genetic features (e.g., gene expressions or single nucleotide polymorphisms) are important for determining a given disease. Even though we believe that disease status depends on a comparably small set of genetic variations, we have a priori no idea about which ones are relevant and therefore must include them all in our search. In statistical terms, we have an outcome variable and a potentially gigantic collection of explanatory variables, and we would like to know which of the many variables the response depends on. In fact, we would like to do this while controlling the false discovery rate (FDR) or other error measures, so that the results of our investigation do not run into the problem of irreproducibility. The lectures will discuss problems of this kind. We introduce “knockoffs,” an entirely new framework for finding the variables the response depends on while provably controlling the FDR in finite samples and complicated models. The key idea is to make up fake variables—knockoffs—which are created from knowledge of the explanatory variables alone (not requiring new data or knowledge of the response variable) and can be used as a kind of negative control to estimate the FDR (or any other type-1 error). We explain how one can leverage haplotype models and genotype imputation strategies for the distribution of alleles at consecutive markers to design a full multivariate knockoff processing pipeline for GWAS!
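The "negative control" logic can be sketched in a few lines. This is our own toy illustration of the knockoff filter's selection step (the statistics here are simulated directly, standing in for genuine knockoff statistics): for null variables the $W_j$ are symmetric about zero, so the count of large negative $W_j$ estimates the number of false positives among the large positive ones.

```python
import numpy as np

def knockoff_select(W, q=0.1):
    """Barber-Candes selection step: given statistics W_j that are symmetric
    about 0 for null variables (large positive W_j = evidence of signal),
    pick the smallest threshold t whose estimated FDP is at most q."""
    for t in np.sort(np.abs(W[W != 0])):
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return np.flatnonzero(W >= t)
    return np.array([], dtype=int)

# Toy statistics: 450 nulls symmetric about 0, 50 signals pushed positive
rng = np.random.default_rng(2)
W = np.concatenate([rng.normal(0, 1, 450), rng.normal(4, 1, 50)])
selected = knockoff_select(W, q=0.1)
false = int(np.sum(selected < 450))
print(len(selected), false)   # mostly the 50 signals, few false selections
```

The "+1" in the numerator is what turns this estimate into a provable finite-sample FDR guarantee; nothing about the response is used in constructing the controls themselves.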

The knockoffs machinery is a selective inference procedure in the sense that the method finds as many relevant variables as possible without having too many false positives, thus controlling a type-1 error averaged over the selected variables. We shall discuss other approaches to selective inference, where the goal is to correct for the bias introduced by a model constructed after looking at the data, as is now routinely done in practice. For example, in the high-dimensional linear regression setup, it is common to use the lasso to select variables. It is well known that if one applies classical techniques after the selection step—as if no search had been performed—inference is distorted and can be completely wrong. How then should one adjust the inference so that it is valid? We plan on presenting new ideas from Jonathan Taylor and his group to resolve such issues, as well as from a research group including Berk, Brown, Buja, Zhang and Zhao on post-selection inference.

Some of the work I will be presenting is joint with many great young researchers including Rina Foygel Barber, Lucas Janson, Jinchi Lv, Yingying Fan, Matteo Sesia as well as many other graduate students and post-docs, and also with Professor Chiara Sabatti who played an important role in educating me about pressing contemporary problems in genetics. I am especially grateful to Yoav Benjamini: Yoav visited Stanford in the Winter of 2011 and taught a course titled “Simultaneous and Selective Inference”. These lectures inspired me to contribute to the enormously important enterprise of developing statistical theory and tools adapted to the new scientific paradigm — *collect data first, ask questions later.*

Richard Kenyon received his PhD from Princeton University in 1990 under the direction of William Thurston. After a postdoc at IHES, he held positions at CNRS in Grenoble, Lyon, and Orsay, before becoming a professor at UBC for 3 years and then moving to Brown University where he is currently the William R. Kenan Jr. University Professor of Mathematics. He was awarded the CNRS bronze medal, the Rollo Davidson prize and the Loève prize; he is a member of the American Academy of Arts and Sciences, and is currently a Simons Investigator.

Richard Kenyon’s 2017 Schramm lecture will be given at the 39th Conference on Stochastic Processes and their Applications (SPA) in Moscow (July 24–28, 2017). See http://www.spa2017.org/

The boxed plane partition (see Figure 1) is a tiling of a hexagon of side length $n$ by “lozenges”: tiles consisting of $60^{\circ}$ rhombi in one of the three possible orientations. One can also think of it as the projection of a stack of cubes packed into an $n\times n\times n$ box in such a way that the surface of the stack projects monotonically to the plane $x+y+z=0$.

*Fig. 1: The boxed plane partition*

In the limit $n\to\infty$ under rescaling there is a well-known “limit shape phenomenon” [ref1, ref2], under which this surface in $\mathbb{R}^3$, defined by a uniform random boxed plane partition, when scaled by $n$, converges to a *nonrandom* surface. This surface is the unique surface spanning the boundary and minimizing a certain *surface tension* functional, which we can write as

$$\iint_{H} \sigma(\nabla h)\,dx\,dy,$$

where $H$ is the hexagon and $h:H\to\mathbb{R}$ is the function giving the height of the surface above the plane $x+y+z=0$.

There is a similar limit shape phenomenon for tilings of any other region, obtained by minimizing the surface tension with other boundary conditions [ref2, ref4].
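For readers who want the variational step spelled out (this is standard calculus of variations, not specific to this work): a smooth minimizer $h$ of the surface tension functional with fixed boundary values satisfies the Euler–Lagrange equation

```latex
% First variation of \iint_H \sigma(\nabla h)\,dx\,dy vanishes at a minimizer:
\frac{\partial}{\partial x}\,\sigma_s(h_x, h_y)
  + \frac{\partial}{\partial y}\,\sigma_t(h_x, h_y) = 0,
\qquad \text{where } (s,t) = \nabla h,
```

i.e. the divergence of $\nabla\sigma$ evaluated along $\nabla h$ vanishes. This is the PDE that, for the five-vertex model, takes a remarkably simple form when rewritten in the auxiliary variables $z,w$.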

The main tool for studying the lozenge tiling model is the determinantal formula describing the correlations between individual tiles. These are based on the formula due to Kasteleyn [ref3] which shows that the number of lozenge tilings of a simply connected polygonal region is the determinant of the adjacency matrix of an underlying graph.

In joint work with Jan de Gier and Sam Watson we consider a generalization of the lozenge tiling model, which we call the *five-vertex model* since it is a special case of the well-known six-vertex model in which one of the six local configurations is disallowed. This model is, concretely, a different measure on the same space of lozenge tilings: we simply give a configuration probability proportional to $r$ raised to the number of adjacencies between two of the three types of tiles. The lozenge tiling model is the case $r=1$ of this model.

This new measure is no longer determinantal. Thus we must rely on the *Bethe Ansatz method* for counting the number of configurations and computing correlations. This is notoriously difficult to carry out and indeed the solution to the general six-vertex model is a well-known open problem. Somewhat remarkably, this calculation can be performed for the five-vertex model to get a complete limit shape theory: we can give an explicit PDE describing the limit shapes associated to the model.

Like the lozenge model, limit shapes are obtained by minimizing a surface tension $\sigma_r(\nabla h)$ for given boundary values. Again $\nabla h=(s,t)$ varies

over the triangle 𝒩$=cvx\{(0,0),(1,0),(0,1)\}$. The surface tension $\sigma_r:$𝒩$\to$ℝ is an explicit smooth convex function also involving the dilogarithm,

$$Li_2(z) := -\int_0^z\frac{\log(1-\zeta)}{\zeta}\,d\zeta.$$

see Figure 3. Unlike the lozenge case there is a certain curve in 𝒩

$$\gamma=\{(s,t)~:~1-s-t+(1-r^{-2})st=0\}$$

along which $\sigma$ is not analytic; for $(s,t)$ on the side of $\gamma$ near the line $s+t=1$, the underlying Gibbs measure $\mu_{s,t}$ is not *extremal*. Physically, we think of the paths consisting of the white and light gray lozenges as *attracting* each other; for typical configurations the paths clump together. On the other side of the curve the paths repel each other. The surface tension is conveniently written in terms of a pair of auxiliary complex variables $z,w$ lying on the *spectral curve* $0=P(z,w) = 1-z-w+(1-r^2)zw$.
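As a quick numerical sanity check (a sketch, with illustrative values of $r$ and $w$ of my own choosing): the dilogarithm can be summed from its power series $\mathrm{Li}_2(z)=\sum_{k\geq 1} z^k/k^2$ for $|z|\leq 1$, with the classical value $\mathrm{Li}_2(1)=\pi^2/6$, and solving $P(z,w)=0$ for $z$ gives $z=(1-w)/(1-(1-r^2)w)$.

```python
import math

def li2(z, terms=100000):
    """Dilogarithm Li_2(z) via its power series (valid for |z| <= 1)."""
    total, p = 0.0, 1.0
    for k in range(1, terms + 1):
        p *= z
        total += p / k**2
    return total

# classical value Li_2(1) = pi^2 / 6
assert abs(li2(1.0) - math.pi**2 / 6) < 1e-4

def z_on_curve(w, r):
    """Solve the spectral curve P(z,w) = 1 - z - w + (1-r^2) z w = 0 for z."""
    return (1 - w) / (1 - (1 - r**2) * w)

r, w = 0.8, 0.3 + 0.4j            # illustrative values
z = z_on_curve(w, r)
P = 1 - z - w + (1 - r**2) * z * w
assert abs(P) < 1e-12              # (z, w) lies on the spectral curve
```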

The relation between $(s,t)$ and $(z,w)$ is described in Figure 2.

*Fig. 2: Given $r$ and a point $(s,t)\in\mathcal{N}$ (on the left side of the curve $\gamma$), there is a unique $w$ in the upper half plane satisfying the property that $t = \arg(w)/\arg(\frac{w}{1-w})$ and $s=\arg(z)/\arg(\frac{z}{1-z})$. Here $\bar z$ is the reflection of $w$ in the circle of radius $r/(1-r^2)$ centered at $1/(1-r^2)$.*

The Euler-Lagrange equation for the variational problem is the PDE that any minimizer must satisfy. In this case the PDE, when written in terms of the variables $z,w$ with $P(z,w)=0$ (each of which is a function of $x,y$), reduces to the wonderfully simple form

$$\frac{\partial B(w)}{\partial w} w_y - \frac{\partial B(z)}{\partial z} z_x = 0,$$

where $B(z) = \arg z\,\log|1-z| + \operatorname{Im} \mathrm{Li}_2(z)$ is a nonanalytic function.

This research was supported by the NSF and the Simons Foundation.

*Fig. 3: Minus the surface tension as a function of $(s,t)\in\mathcal{N}$ with $r=0.8$. The black line is the curve $\gamma$.*

**References**

[1] H. Cohn, M. Larsen, J. Propp, The shape of a typical boxed plane partition, *New York J. Math.* 4 (1998), 137–165.

[2] H. Cohn, R. Kenyon, J. Propp, A variational principle for domino tilings. *J. Amer. Math. Soc.* 14 (2001), no. 2, 297–346.

[3] P. Kasteleyn, Dimer statistics and phase transitions. *J. Mathematical Phys.* 4 (1963), 287–293.

[4] R. Kenyon, A. Okounkov, Limit shapes and the complex Burgers equation. *Acta Math.* 199 (2007), no. 2, 263–302.

*Takashi Kumagai studied at Kyoto University, where he defended his PhD thesis in 1994 (supervisor: Shinzo Watanabe). After working at Osaka University and Nagoya University, he went back to Kyoto University in 1998. He is now a professor at the Research Institute for Mathematical Sciences (RIMS), Kyoto University. His research areas are anomalous diffusions on disordered media such as fractals and random media, and potential theory for jump processes on metric measure spaces. He gave the St. Flour lectures in 2010, was a plenary speaker at SPA 2010 in Osaka, and was an invited speaker at the International Congress of Mathematicians in Seoul in 2014.*

*Takashi Kumagai’s Medallion Lecture will be given at SPA 2017 in Moscow, July 2017.*

There has been a long history of research on heat kernel estimates and Harnack inequalities for diffusion processes. Harnack inequalities and Hölder regularities for harmonic/caloric functions are important components of the celebrated De Giorgi-Nash-Moser theory in harmonic analysis and partial differential equations. In the early 1990s, equivalent characterizations of parabolic Harnack inequalities (that is, Harnack inequalities for caloric functions) that are stable under perturbations were obtained independently by Grigor’yan and Saloff-Coste, and later extended in various directions.

Such a stability theory has been developed only recently for symmetric jump processes, despite its fundamental importance in analysis. In 2002, Bass and Levin obtained heat kernel estimates for Markov chains with long-range jumps on the $d$-dimensional lattice. Motivated by this work, Chen and Kumagai (2003) obtained two-sided heat kernel estimates and parabolic Harnack inequalities for symmetric stable-like processes (perturbations of symmetric stable processes) on Ahlfors regular subsets of Euclidean spaces. The results include equivalent conditions, stable under perturbations, for two-sided stable-type heat kernel estimates. There has been a vast amount of work related to potential theory for symmetric stable-like processes since around that time. Definitive answers are given in the recent trilogy by Chen, Kumagai and Wang on the stability of heat kernel estimates and parabolic Harnack inequalities for symmetric jump processes on general metric measure spaces. While both are stable under perturbations, heat kernel estimates are not equivalent to parabolic Harnack inequalities in the jump case, unlike the diffusion case.
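To give a concrete picture of such long-range jumps (an illustrative sketch of my own, not a construction from the works cited): a symmetric random walk on $\mathbb{Z}$ whose jump distribution decays like $|k|^{-1-\alpha}$ with $0<\alpha<2$ is a discrete analogue of a symmetric $\alpha$-stable process.

```python
import random

def stable_like_jump_probs(alpha, K):
    """Symmetric jump distribution on {-K,...,-1, 1,...,K} with
    p(k) proportional to |k|^(-1-alpha), a truncated stable-like kernel."""
    weights = {k: abs(k) ** (-1 - alpha) for k in range(-K, K + 1) if k != 0}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

probs = stable_like_jump_probs(alpha=1.5, K=1000)
assert probs[7] == probs[-7]                           # symmetry
assert abs(probs[2] / probs[1] - 2 ** (-2.5)) < 1e-12  # power-law decay

# a short sample path of the walk
random.seed(0)
steps, weights = zip(*sorted(probs.items()))
path = [0]
for _ in range(1000):
    path.append(path[-1] + random.choices(steps, weights)[0])
```

Occasional very large jumps dominate the displacement of such a walk, in contrast with the diffusive behaviour of a nearest-neighbour walk.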

In the talk, I will summarize developments of the De Giorgi-Nash-Moser theory for symmetric jump processes and discuss its applications. The applications include discrete approximations of jump processes and random media with long-range jumps. The talk is based on joint works with my collaborators: M.T. Barlow, R.F. Bass, Z.-Q. Chen, A. Grigor’yan, J. Hu, P. Kim, M. Kassmann and J. Wang.

*Marta Sanz-Solé is a professor of Mathematics at the University of Barcelona. She is a member of the Institute of Catalan Studies and a Fellow of the IMS. Her research interests are in the field of stochastic analysis, especially Malliavin calculus and stochastic partial differential equations. She is, and has been, a member of the editorial boards of several journals, including the Annals of Probability. Marta Sanz-Solé has served on international scientific committees, advisory boards and evaluation panels. During the period 2011–2014, she served as President of the European Mathematical Society.*

*Marta’s Medallion lecture will be given at the 39th Conference on Stochastic Processes and their Applications (SPA) in Moscow (July 24–28, 2017). See http://www.spa2017.org/*

The theory of stochastic partial differential equations (SPDEs) emerged about thirty years ago and has since undergone a dramatic development. This new field of mathematics is at the crossroads of probability and analysis, combining tools from stochastic analysis and the classical theory of partial differential equations. Motivations for the study of SPDEs arise both within mathematics and from applications to other scientific settings possessing an inherent component of randomness. This randomness can appear, for example, in the initial conditions, in the environment, or as an external forcing. Fundamental problems in the theory of SPDEs are the existence and uniqueness of solutions, and the properties of their sample paths. In the relevant examples, these are non-smooth random functions.

A basic question in probabilistic potential theory is to determine whether a random field ever visits a fixed deterministic set $A$. This leads to a quantitative analysis of the hitting probabilities in terms of geometric measure notions, like the Hausdorff measure or the Bessel–Riesz capacity of the set $A$. For Gaussian (and also for Lévy) processes, the subject (initiated in the 1940s) has reached a state of maturity. Over the last decade, the study of hitting probabilities for sample paths of systems of SPDEs has been a focus of interest. Substantial progress has been made, contributing to the understanding of qualitative features of SPDEs. There remain, however, many unsolved problems for further investigation.
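The flavour of such hitting questions can already be seen for one-dimensional Brownian motion, where the reflection principle gives the hitting probability $P(\max_{t\le 1}B_t\ge a)=2P(B_1\ge a)$ in closed form; a small Monte Carlo sketch (with illustrative parameters of my own choosing, not material from the lecture):

```python
import math, random

def hit_prob_mc(a=1.0, n_steps=500, n_paths=5000, seed=1):
    """Monte Carlo estimate of P(max_{t<=1} B_t >= a) for Brownian motion,
    using discretized paths with n_steps Gaussian increments."""
    random.seed(seed)
    step_sd = math.sqrt(1.0 / n_steps)
    hits = 0
    for _ in range(n_paths):
        b = 0.0
        for _ in range(n_steps):
            b += random.gauss(0.0, step_sd)
            if b >= a:          # the path has hit the level a
                hits += 1
                break
    return hits / n_paths

est = hit_prob_mc()
# reflection principle: 2 * P(B_1 >= 1) = 2 * (1 - Phi(1))
exact = 2 * (1 - 0.5 * (1 + math.erf(1 / math.sqrt(2))))
```

The discretized maximum slightly underestimates the true probability, since the path is only sampled at grid points.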

In my lecture, I will describe the mathematical approach to obtaining upper and lower bounds for hitting probabilities of random fields in terms of Hausdorff measure and Bessel–Riesz capacity, respectively. The roles of the dimensions, the roughness of the sample paths, the one-point and two-point joint distributions, and particularly the structure of the covariance, will be highlighted. Then I will focus on a class of SPDEs defined through linear partial differential operators, with nonlinear Gaussian external forcing. The random field solutions to systems of such equations are random vectors on an abstract Wiener space. Except in very simple cases, they are not Gaussian stochastic processes.

Malliavin calculus provides a powerful toolbox to tackle many questions about probability laws on abstract Wiener spaces. The existence and properties of densities relative to our SPDEs can be proved using this calculus. In particular, the qualitative behaviour of the covariance of two-point joint distributions, when these points get close to each other, can be captured by a detailed analysis of the Malliavin matrix, an infinitesimal covariance-type matrix on the Wiener space.

I will illustrate the implementation of this approach with two classical examples: the nonlinear stochastic heat and wave equations. Finally, I will mention some on-going work and open questions.

*Mark Girolami is an EPSRC Established Career Research Fellow (2012–2018) and was previously an EPSRC Advanced Research Fellow (2007–2012). He is the Director of the Lloyd’s Register Foundation Turing Programme on Data Centric Engineering, and previously led the EPSRC-funded Research Network on Computational Statistics and Machine Learning which is now a Section of the UK’s Royal Statistical Society. In 2011 he was elected Fellow of the Royal Society of Edinburgh and also awarded a Royal Society Wolfson Research Merit Award. He was one of the founding Executive Directors of the Alan Turing Institute for Data Science from 2015 to 2016, before taking leadership of the Data Centric Engineering Programme at The Alan Turing Institute. His paper on Riemann Manifold Hamiltonian Monte Carlo methods was Read before the Royal Statistical Society, receiving the largest number of contributed discussions of a paper in its 183-year history.*

*Mark Girolami’s Medallion lecture will be given at the 2017 Joint Statistical Meetings in Baltimore (July 29–August 4, 2017). See the online program at http://ww2.amstat.org/meetings/jsm/2017/onlineprogram/index.cfm*

Consider the consequences of an alternative history. What if Leonhard Euler had happened to read the posthumous publication of the paper by Thomas Bayes, “An Essay towards solving a Problem in the Doctrine of Chances”? This paper was published in 1763 in the *Philosophical Transactions of the Royal Society*; had Euler read it, we can wonder whether the section on the numerical solution of differential equations in his three-volume work *Institutionum calculi integralis*, published in 1768, might have been quite different.

Would the awareness by Euler of the “Bayesian” proposition of characterising uncertainty due to unknown quantities using the probability calculus have changed the development of numerical methods and their analysis to one that is more inherently statistical?

Fast-forward the clock two centuries to the late 1960s in America, when the mathematician F.M. Larkin published a series of papers on the definition of Gaussian measures in infinite-dimensional Hilbert spaces, culminating in the 1972 work “Gaussian Measure on Hilbert Space and Applications in Numerical Analysis”. In that work, the mathematical tools required to consider average-case errors in Hilbert spaces for numerical analysis were formally laid down, and methods such as Bayesian Quadrature or Bayesian Monte Carlo were developed in full, long before their independent reinvention in the 1990s and 2000s brought them to a wider audience.
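In its simplest modern form, Bayesian quadrature places a Gaussian process prior on the integrand and reports the integral of the posterior mean; for a squared-exponential kernel and a standard Gaussian integration measure, the required kernel integrals are available in closed form. A minimal sketch (with illustrative nodes and length-scale of my own choosing, not Larkin's original formulation):

```python
import numpy as np

def bayes_quadrature(f, nodes, ell=1.0, jitter=1e-8):
    """Bayesian quadrature estimate of the integral of f against N(0,1),
    using a GP prior with kernel k(x,y) = exp(-(x-y)^2 / (2 ell^2))."""
    x = np.asarray(nodes, dtype=float)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * ell**2))
    K += jitter * np.eye(len(x))       # stabilize the linear solve
    # closed-form kernel mean: z_i = integral of k(x, x_i) dN(x; 0, 1)
    z = ell / np.sqrt(ell**2 + 1) * np.exp(-x**2 / (2 * (ell**2 + 1)))
    return float(z @ np.linalg.solve(K, f(x)))

nodes = np.linspace(-4.0, 4.0, 25)
est = bayes_quadrature(lambda x: x**2, nodes)  # true value: E[X^2] = 1
```

The same construction also yields a posterior variance quantifying the numerical error, which is precisely the average-case viewpoint described above.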

Now in 2017 the question of viewing numerical analysis as a problem of Statistical Inference in many ways seems natural and is being demanded by applied mathematicians, engineers and physicists who need to carefully and fully account for all sources of uncertainty in mathematical modelling and numerical simulation.

Now we have a research frontier that has emerged in scientific computation, founded on the principle that the error in numerical methods, for example solvers of differential equations, entails uncertainty that ought to be subjected to statistical analysis. This viewpoint raises exciting challenges for contemporary statistical and numerical analysis, including the design of statistical methods that enable the coherent propagation of probability measures through a computational and inferential pipeline.
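One concrete instance of this programme (a sketch in the spirit of additive-noise probabilistic ODE solvers, with illustrative parameters of my own choosing): perturb each Euler step by Gaussian noise scaled like the local truncation error, so that the spread of an ensemble of runs quantifies the uncertainty introduced by discretization.

```python
import math, random

def probabilistic_euler(f, y0, t_end, h, noise_scale, seed=0):
    """Euler method for y' = f(y), with an additive Gaussian perturbation
    of size O(h^{3/2}) per step modelling discretization uncertainty."""
    random.seed(seed)
    y = y0
    for _ in range(round(t_end / h)):
        y += h * f(y) + random.gauss(0.0, noise_scale * h**1.5)
    return y

# ensemble for y' = -y, y(0) = 1; the exact solution has y(1) = e^{-1}
samples = [probabilistic_euler(lambda y: -y, 1.0, 1.0, h=0.01,
                               noise_scale=1.0, seed=s) for s in range(200)]
mean = sum(samples) / len(samples)
spread = max(samples) - min(samples)   # ensemble spread = solver uncertainty
```

The ensemble mean tracks the classical Euler solution, while the spread gives the statistical quantification of numerical error that this viewpoint calls for.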

**2019 Wald Lecturer:** The Wald Memorial Lectures honor Professor Abraham Wald. The Wald Lecturer gives two, three or four one-hour talks on one subject. This gives sufficient time to develop material in some detail and make it accessible to non-specialists.

**2019 Rietz Lecturer:** The Rietz Lectures are named after the first President of the IMS, Henry L. Rietz. The lectures are intended to be of broad interest and serve to clarify the relationship of statistical methodology and analysis to other fields.

**2020 Medallion Lecturers:** The Committee on Special Lectures invites eight individuals to deliver Medallion Lectures annually.

The deadline for nominations is **October 1, 2017**. To nominate someone, you will need a nomination letter (half a page, including your name, the nominee’s name and the name of the IMS lecture for which the nominee is nominated), and a list of five of their most relevant publications, with a URL where these publications are accessible.

For more information visit: http://imstat.org/awards/lectures/nominations.htm