Physics and the Brain
There are on the order of 10^11 neurons in your brain, and each of these neurons is connected to about 10,000 other neurons. Neuroscientists have a special name for these facts—namely, “job security”—because it’s hard to imagine that we’ll be able to catalog all the details of something as vast and complex as the brain anytime soon. However, physicists have been very successful at characterizing large, many-body systems, such as magnets and galaxies, in ways that leave out most of the details while retaining interesting macroscopic properties—and this is accomplished with a set of underlying equations short enough to fit on a T-shirt.
Are there reasons we should expect the brain to be understandable from a physics-like point of view requiring just a few basic “principles”? One hint that we should is that the entire human genome contains only about 1.5 gigabytes of information (uncompressed), and that’s for the development of the whole body, not just the brain—not to mention the fact that roughly 90% of the genome doesn’t seem to code for any proteins. Another hopeful sign is that modern humans, from musicians to brain surgeons to string theorists, evolved in only six million years from a common ancestor we share with modern apes. That’s a huge leap in brain power in only a few hundred thousand generations of random mutation and natural selection.
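The numbers above are easy to sanity-check. Here is a rough back-of-the-envelope version in Python; the base-pair count and the diploid (two-copies-per-chromosome) assumption are my approximations, not figures from the text:

```python
# Back-of-the-envelope check of the figures above (all values approximate).

BASES_HAPLOID = 3.1e9   # ~3.1 billion base pairs in one copy of the genome
BITS_PER_BASE = 2       # four bases (A, C, G, T) -> log2(4) = 2 bits

# Counting both copies of each chromosome (diploid) lands near the
# ~1.5 GB figure quoted in the text.
genome_gb = 2 * BASES_HAPLOID * BITS_PER_BASE / 8 / 1e9
print(f"genome: ~{genome_gb:.2f} GB uncompressed")

# A naive, explicit "wiring diagram" of the brain would dwarf that:
NEURONS = 1e11               # ~10^11 neurons
SYNAPSES_PER_NEURON = 1e4    # ~10^4 connections per neuron
synapse_bits = NEURONS * SYNAPSES_PER_NEURON  # ~10^15 bits at 1 bit/synapse
print(f"wiring diagram: ~{synapse_bits / 8 / 1e12:.0f} TB at 1 bit/synapse")
```

The gap between the two numbers is the crux of the argument: roughly a gigabyte of genome cannot specify on the order of 10^15 synapses one by one, so development must be governed by compact rules.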
It’s also noteworthy that a relatively high level of intelligence has evolved independently in several species, including dolphins, elephants, and grey parrots, which diverged from our ancestral line roughly 65, 105 and 300 million years ago, respectively…presumably long before each of these species got so smart. These observations suggest that natural intelligence might rely on a small number of basic principles, rather than a welter of finely tuned “hacks.”
Corroborating this circumstantial evidence, the physical structure of the cerebral cortex also points to the existence of simple principles. The cortex is a sheet of neurons, a few millimeters thick, that comprises the “gray matter” at the surface of the mammalian brain. Different areas of the cortex are clearly involved in different cognitive tasks, and these areas often have clear physical boundaries.
These different cortical areas communicate with one another via axonal projections (outputs from individual neurons) that travel from one cortical area to another via the “white matter” beneath the cortical sheet. More complex brains, such as ours, typically contain more cortical areas than those of less intelligent mammals, and there are strong structural similarities among essentially all cortical areas within and between species, suggesting that cortex is modular.
In other words, it seems plausible that there are universal rules for hooking up different areas of the cortex, so that making smarter brains just requires adding more square footage to the homogeneous cortical sheet and breaking it up into more distinct cortical areas that get wired to one another like a switchboard. This appealing idea has been floating around for quite some time now, but it is still unclear to what extent there might be a canonical cortical micro-circuit—analogous to, say, a transistor or an AND gate in a digital computer—that can help us understand how the larger circuit is built up from smaller components. There are many well-known regularities in local circuitry, but most workers in the field would agree that we do not yet grasp their functional significance.
Complementing the reductionist, bottom-up strategy of studying detailed circuitry to elucidate brain function, there are several top-down approaches based on various optimization principles, which are all rooted in Darwin’s notion of evolutionary pressure for optimality through survival of the fittest. Some popular candidate principles that have a physics flavor include: maximizing the (Shannon mutual) information transmission rate (useful in the sensory periphery); maximizing the reliability of transmitted information (useful in the motor periphery); minimizing metabolic costs (fairly ubiquitous); maximizing the population sparseness of a neural ensemble (useful in parts of cortex, for example); and matching the response properties of sensory neurons to the statistics of natural sensory data such as natural sounds and images (useful in the retina, auditory nerve, and sensory cortical areas). All of these principles have yielded predictions borne out by experimental observation, and several of them have been successfully developed by members of the Redwood Center for Theoretical Neuroscience here at UC Berkeley.
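To make one of these principles concrete: population sparseness has standard quantitative forms. The sketch below implements a Treves–Rolls-style measure (one common convention among several; the example firing-rate vectors are hypothetical), which scores how concentrated activity is across a population:

```python
def population_sparseness(rates):
    """Treves-Rolls style sparseness for a vector of firing rates.

    Returns a value near 0 when activity is spread evenly across the
    population and near 1 when a few neurons carry all of the activity.
    (One common convention; others rescale or invert this quantity.)
    """
    n = len(rates)
    mean_sq = (sum(rates) / n) ** 2           # (mean rate)^2
    sq_mean = sum(r * r for r in rates) / n   # mean of the squared rates
    return 1.0 - mean_sq / sq_mean

# Hypothetical firing-rate vectors for a population of 100 neurons:
dense = [1.0] * 100            # every neuron equally active
sparse = [0.0] * 99 + [25.0]   # a single highly active neuron
print(population_sparseness(dense))   # 0.0
print(population_sparseness(sparse))  # 0.99
```

A sparseness-maximization principle posits that parts of cortex adjust their representations to push a measure like this up while preserving the information carried about the stimulus.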
Dynamics play an important role in neural circuits. Two ubiquitous properties of the nervous system in general, and the cortex in particular, are adaptation and plasticity. Neurons are constantly readjusting the strengths, and even the existence, of their connections with neighboring neurons. Though it is far from fully understood, synaptic plasticity of this sort is widely believed to be a crucial component of learning and memory. Individual neurons can also adapt their responses to the changing statistics of their inputs. These readjustments in connection strength and neural responsiveness occur on a wide range of timescales.
Moreover, large regions of the brain can dynamically reroute information in a concerted fashion as it is being processed, presumably to allocate finite neural resources in an efficient manner as task contingencies change. This dynamic ability, coupled with the large number of highly interconnected neurons in the cortex all working in parallel, somehow manages to outperform the fastest modern computers running the latest software when it comes to intelligently processing real-world data, such as isolating a single voice out of many at a loud cocktail party. How the cortex directs selective auditory attention on a moment-to-moment basis is the primary question that drives the work in my own laboratory.
With the rapid advancement of novel experimental techniques for monitoring and manipulating neurons in the intact cortex of model systems, we are in a better position than ever before to take systems neuroscience beyond mere description and finally uncover the mechanisms underlying cortical function. There have been many recent theoretical advances as well, but only time will tell whether systems neuroscientists are entering an era akin to the birth of modern physics just before the turn of the previous century, or whether we are actually still in the Dark Ages and a comparable revolution in our understanding of the brain is hundreds of years away. The answer is likely somewhere in between, but I am optimistic about the prospects for the former scenario.