The Puzzle of How Large-Scale Order Emerges in Complex Systems

A complex system exhibits emergence, according to the new framework, by organizing itself into a hierarchy of levels that each operate independently of the details of the lower levels. The researchers suggest we think about emergence as a kind of “software in the natural world.” Just as the software of your laptop runs without having to keep track of all the microscale information about the electrons in the computer circuitry, so emergent phenomena are governed by macroscale rules that seem self-contained, without heed to what the component parts are doing.

Using a mathematical formalism called computational mechanics, the researchers identified criteria for determining which systems have this kind of hierarchical structure. They tested these criteria on several model systems known to display emergent-type phenomena, including neural networks and Game of Life–style cellular automata. Indeed, the degrees of freedom, or independent variables, that capture the behavior of these systems at microscopic and macroscopic scales have precisely the relationship that the theory predicts.
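To get a concrete feel for the kind of model system involved, here is a minimal Game of Life sketch in Python. The grid size, the glider placement, and the choice of the glider's drift as the macroscale description are illustrative assumptions, not the researchers' actual test setup or their closure criteria. The point is only that the macro-level rule "the glider shifts one cell down and to the right every four generations" predicts the future configuration without tracking the cell-by-cell dynamics:

```python
import numpy as np

def life_step(grid):
    """One synchronous update of Conway's Game of Life on a wrap-around grid."""
    # Count the eight neighbours of every cell via circular shifts.
    neighbours = sum(
        np.roll(np.roll(grid, dr, axis=0), dc, axis=1)
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell lives next step if it has exactly 3 neighbours,
    # or if it is alive now and has exactly 2.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# Place a glider on an otherwise empty 20 x 20 grid.
grid = np.zeros((20, 20), dtype=int)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r + 1, c + 1] = 1

# Micro-level evolution: apply the cell-by-cell rule four times.
state = grid.copy()
for _ in range(4):
    state = life_step(state)

# Macro-level prediction: the whole pattern shifts one cell down and right.
prediction = np.roll(np.roll(grid, 1, axis=0), 1, axis=1)
print(np.array_equal(state, prediction))  # expected: True
```

The wrap-around boundary is only a convenience for counting neighbours; on a grid this size it has no effect on the four generations simulated here.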

No new matter or energy appears at the macroscopic level in emergent systems that isn’t there microscopically, of course. Rather, emergent phenomena, from Great Red Spots to conscious thoughts, demand a new language for describing the system. “What these authors have done is to try to formalize that,” said Chris Adami, a complex-systems researcher at Michigan State University. “I fully applaud this idea of making things mathematical.”

A Need for Closure

Rosas came at the topic of emergence from multiple directions. His father was a famous conductor in Chile, where Rosas first studied and played music. “I grew up in concert halls,” he said. Then he switched to philosophy, followed by a degree in pure mathematics, giving him “an overdose of abstractions” that he “cured” with a PhD in electrical engineering.

A few years ago, Rosas started thinking about the vexed question of whether the brain is a computer. Consider what goes on in your laptop. The software generates predictable and repeatable outputs for a given set of inputs. But if you look at the actual physics of the system, the electrons won’t all follow identical trajectories each time. “It’s a mess,” said Rosas. “It’ll never be exactly the same.”

The software seems to be “closed,” in the sense that it doesn’t depend on the detailed physics of the microelectronic hardware. The brain behaves somewhat like this too: There’s a consistency to our behaviors even though the neural activity is never identical in any circumstance.

Rosas and colleagues figured that in fact there are three different types of closure involved in emergent systems. Would the output of your laptop be any more predictable if you invested lots of time and energy in collecting information about all the microstates—electron energies and so forth—in the system? Generally, no. This corresponds to the case of informational closure: As Rosas put it, “All the details below the macro are not helpful for predicting the macro.”
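In information-theoretic terms, informational closure is often stated as a vanishing conditional mutual information. The following is a sketch of that standard form, not necessarily the exact formalism of the paper; write $X_t$ for the microscopic state and $Y_t$ for the macroscopic state at time $t$:

$$ I\big(X_t \,;\, Y_{t+1} \mid Y_t\big) = 0 . $$

Once the current macro state is known, in other words, the micro details carry no additional information about the macro future.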

What if you want not just to predict but to control the system—does the lower-level information help there? Again, typically no: Interventions we make at the macro level, such as changing the software code by typing on the keyboard, are not made more reliable by trying to alter individual electron trajectories. If the lower-level information adds no further control of macro outcomes, the macro level is causally closed: It alone is causing its own future.
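One conventional way to sketch this condition uses the intervention (do) notation of causal inference; the authors' own formalism may differ in its details. The macro level is causally closed if setting the micro state to any configuration compatible with a given macro state yields the same distribution over future macro states:

$$ P\big(Y_{t+1} \mid \operatorname{do}(X_t = x)\big) = P\big(Y_{t+1} \mid \operatorname{do}(X_t = x')\big) \quad \text{whenever } x \text{ and } x' \text{ realize the same macro state.} $$

On this reading, micro-level interventions influence the macro future only through the macro state they happen to realize.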
