Michael DeWeese received his BA (1988) in physics from UC Santa Cruz and his PhD (1995) in physics from Princeton. From 1995 to 1999 he held a computational postdoctoral appointment at the Salk Institute in La Jolla, California, supported by a fellowship from the Alfred P. Sloan Foundation. He then pursued experimental neuroscience as a postdoctoral researcher at Cold Spring Harbor Laboratory on Long Island, NY, from 2000 to 2006. In 2007 he joined the faculty at UC Berkeley, where he is currently an Associate Professor of Physics and Neuroscience.

## Research Interests

Our group’s research spans three broad areas: nonequilibrium statistical mechanics, machine learning theory, and systems neuroscience. Our work in these different areas is linked by several unifying ideas, including stochasticity, high dimensionality, non-convex optimization, learning and prediction, and the statistics of natural data. Many of our projects apply inspiration or tools from one of these fields to questions from another.

**Nonequilibrium Statistical Mechanics:**

Uncovering the principles underlying the operation of biomolecules and designing molecular-scale machines will ultimately require a deep understanding of nonequilibrium statistical mechanics. The past couple of decades have witnessed major breakthroughs in our understanding of nonequilibrium processes, but many fundamental questions remain unsolved. We are especially interested in deriving and applying geometrical techniques for computing optimal protocols for driving nonequilibrium systems in various settings, including point-to-point paths through control parameter space as well as cyclic processes such as heat engines. In addition, we develop work-energy theorems and other fundamental relations to better understand naturally occurring active matter systems that continually dissipate energy even in steady state.
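The geometric approach to optimal protocols mentioned above can be illustrated with a minimal numerical sketch. In the linear-response framework, the excess dissipation of a slow protocol λ(t) is an integral of a thermodynamic metric g(λ) times the squared driving speed, and the minimizing path traverses parameter space at constant "thermodynamic speed". The metric used here is a hypothetical one-dimensional example chosen for simplicity, not derived from any particular physical system:

```python
import numpy as np

# In the slow-driving (linear-response) regime, excess work is
#   W_ex = ∫ g(λ) (dλ/dt)² dt,
# and the optimizer moves at constant thermodynamic speed √g(λ) · dλ/dt.
# The metric g below is a hypothetical toy example.

def optimal_schedule(g, lam0, lam1, n=1001):
    """Return times t in [0, 1] and λ(t) minimizing ∫ g (dλ/dt)² dt."""
    lam = np.linspace(lam0, lam1, n)
    speed = np.sqrt(g(lam))                       # local metric factor √g(λ)
    # cumulative thermodynamic arc length via the trapezoid rule
    s = np.concatenate([[0.0], np.cumsum(0.5 * (speed[1:] + speed[:-1]) * np.diff(lam))])
    t = s / s[-1]                                 # constant-speed time parametrization
    return t, lam

def excess_work(t, lam, g):
    """Numerically integrate g(λ) (dλ/dt)² over the protocol."""
    dldt = np.gradient(lam, t)
    f = g(lam) * dldt**2
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

g = lambda lam: 1.0 / lam**2                      # hypothetical metric: stiff at small λ
t, lam = optimal_schedule(g, 0.5, 2.0)
naive_t = np.linspace(0.0, 1.0, len(lam))         # linear ramp, for comparison

print(excess_work(t, lam, g), excess_work(naive_t, lam, g))
```

For this metric the geodesic cost is exactly (∫√g dλ)² = (ln 4)² ≈ 1.92, beating the linear ramp's 2.25; the Cauchy–Schwarz inequality guarantees the constant-speed schedule is optimal among all paths with the same endpoints and duration.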

**Machine Learning:**

We use ideas from physics and neuroscience to develop theories of learning and inference and to derive new machine learning algorithms. Deep neural networks in particular have proven extraordinarily useful for a wide array of learning problems, but theoretical understanding of their high performance is famously lacking. We aim to develop the sort of first-principles theory needed to quantify and explain the remarkable performance of modern artificial neural networks. In addition, we use concepts from physics to devise efficient algorithms for important tasks, such as fitting complex probabilistic models to large data sets and obtaining representative samples from these models once they are learned.
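As one illustrative example of physics-inspired sampling (a generic sketch, not a specific algorithm from the group's papers), Langevin dynamics draws approximate samples from a model distribution p(x) ∝ exp(−E(x)) by following noisy gradient descent on the energy E. The quadratic energy below is a toy stand-in for a learned model:

```python
import numpy as np

# Unadjusted Langevin dynamics: x ← x − h·∇E(x) + √(2h)·ξ, with ξ ~ N(0, 1).
# Its stationary distribution approximates p(x) ∝ exp(−E(x)) for small step h.
rng = np.random.default_rng(0)

def energy_grad(x):
    # Gradient of a toy energy E(x) = x²/2, i.e. a standard Gaussian target.
    return x

def langevin_samples(x0, n_steps=20000, step=0.1):
    x, out = x0, []
    for _ in range(n_steps):
        x = x - step * energy_grad(x) + np.sqrt(2 * step) * rng.standard_normal()
        out.append(x)
    return np.array(out[n_steps // 2:])           # discard the first half as burn-in

samples = langevin_samples(3.0)
print(samples.mean(), samples.std())              # ≈ 0 and ≈ 1 for this target
```

The finite step size introduces a small, controllable bias in the stationary distribution; a Metropolis acceptance step can remove it exactly, at the cost of occasional rejections.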

**Systems Neuroscience:**

Despite the wealth of neural data acquired in recent years, our understanding of how the brain works remains rudimentary. In order to gain insight into the nervous system, we are developing biologically plausible algorithms to model sensory processing and other forms of computation. Our theories typically rely on coding principles, such as maximizing sparseness or information flow, similar to familiar concepts from physics, such as minimizing free energy or maximizing entropy. Our models are designed to clarify the computational roles of different neural populations and to provide specific, falsifiable experimental predictions about the structure and activity patterns in biological neural networks.
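The sparseness-maximizing coding principle mentioned above can be sketched with a standard inference procedure: given a fixed dictionary D, find coefficients a minimizing ‖x − Da‖²/2 + λ‖a‖₁ via iterative soft thresholding (ISTA). The random dictionary and synthetic signal here are toy stand-ins, not fit to natural data:

```python
import numpy as np

# Sparse coding inference via ISTA: gradient step on the reconstruction error,
# followed by soft thresholding, which drives most coefficients exactly to zero.
rng = np.random.default_rng(1)

def ista(D, x, lam=0.1, n_iter=500):
    L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)                     # gradient of ½‖x − Da‖²
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)                    # unit-norm dictionary atoms
a_true = np.zeros(50)
a_true[[3, 17, 41]] = [1.0, -2.0, 1.5]            # 3-sparse ground-truth code
x = D @ a_true

a_hat = ista(D, x)
print(np.count_nonzero(np.abs(a_hat) > 1e-3))     # recovered code is sparse
```

The L1 penalty plays a role analogous to an energy term favoring silent units: only a few coefficients stay active, mirroring the sparse firing observed in sensory cortex.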

## Publications

A.G. Frim and M.R. DeWeese. A geometric bound on the efficiency of irreversible thermodynamic cycles. arXiv:2112.10797 (2021).

E.N. Evans, Z. Wang, A.G. Frim, M.R. DeWeese, and E.A. Theodorou. Stochastic optimization for learning quantum state feedback control. arXiv:2111.09896 (2021).

J.B. Simon, M. Dickens, and M.R. DeWeese. Neural Tangent Kernel Eigenvalues Accurately Predict Generalization. arXiv:2110.03922 (2021).

A.G. Frim and M.R. DeWeese. Optimal finite-time Brownian Carnot engine. arXiv:2107.05673 (2021).

J.B. Simon, S. Anand, and M.R. DeWeese. On the Power of Shallow Learning. arXiv:2106.03186 (2021).

C.G. Frye, J. Simon, N.S. Wadia, A. Ligeralde, M.R. DeWeese, and K.E. Bouchard. Critical Point-Finding Methods Reveal Gradient-Flat Regions of Deep Network Losses. Neural Computation 33 (6), 1469-1497 (2021).

A.G. Frim, A. Zhong, S.F. Chen, D. Mandal, and M.R. DeWeese. Engineered swift equilibration for arbitrary geometries. Physical Review E 103 (3), L030102 (2021).

E.A. Holman, Y.S. Fang, L. Chen, M.R. DeWeese, H.Y.N. Holman, and P.W. Sternberg. Autonomous adaptive data acquisition for scanning hyperspectral imaging. Communications Biology 3 (1), 1-7 (2020).

T.E. Yerxa, E. Kee, M.R. DeWeese, and E.A. Cooper. Efficient sensory coding of multidimensional stimuli. PLoS Computational Biology 16 (9), e1008146 (2020).

N.S. Wadia, R.V. Zarcone, and M.R. DeWeese. A Solution to the Fokker-Planck Equation for Slowly Driven Brownian Motion: Emergent Geometry and a Formula for the Corresponding Thermodynamic Metric. arXiv:2008.00122 (2020).

P.S. Sachdeva, J.A. Livezey, and M.R. DeWeese. Heterogeneous synaptic weighting improves neural coding in the presence of common noise. Neural Computation 32 (7), 1239-1276 (2020).

L. Kang and M.R. DeWeese. Replay as wavefronts and theta sequences as bump oscillations in a grid cell attractor network. eLife 8, e46351 (2019).

A full list of publications can be found on my Google Scholar profile.