Tuesday, December 13, 2016

Emergence: The Connected Lives of Ants, Brains, Cities, and Software



Book Review: Emergence: The Connected Lives of Ants, Brains, Cities, and Software – by Steven Johnson (Scribner 2001)

This was a pretty good and engaging book about self-organization and its technological and future implications. Self-organizing systems can embody what have been termed 'emergent' properties, moving from individual behavior to group behavior, from local behavior to global behavior. Collections of simple constituents can display complex behavior when aggregated into a system. Such systemic behavior typically (but apparently not always) serves adaptive functions in biology. Biological systems like the human brain and human social behavior also display self-organizing properties.

The book begins with the work of Japanese scientist Toshiyuki Nakagaki, who in 2000 announced that he had 'trained' an amoeba-like organism, the slime mold, to find the most efficient path through a maze to a food source, despite the organism having no cognitive resources. Slime mold behavior confounded scientists for some time until its 'superorganism' functions were discovered. Through much of its life it lives as distinct single-celled units, but under certain conditions these cells will join to form a single organism that can move across the ground as a unit, a swarm. Several newer fields of study converge on such behavior: non-equilibrium thermodynamics, non-linear dynamics, complexity theory, mathematical biology, and 'morphogenesis,' for instance. In studying slime mold aggregation, scientists first hypothesized 'pacemaker' cells that initiated the behavior. They knew that a substance, acrasin (cyclic AMP), was released prior to the aggregation. But alas, no pacemaker cells were found. It was later shown that individual cells release cyclic AMP and that other cells follow suit, tracking the chemical trails their neighbors leave behind. Thus it is an 'emergent' group behavior without a leader (pacemaker). This explanation derived partly from Alan Turing's work on morphogenesis. It took a while before scientists would abandon the pacemaker idea and accept the existence of 'collective behavior.' This is one example of the development of the new science of self-organization; Darwin, Engels, Adam Smith, and Turing had all inadvertently contributed to it. The author notes that a new phase in the study of self-organization is happening with software and video games, where such functions can be programmed in so that new patterns may emerge. This is termed artificial emergence and will likely be an aspect of artificial intelligence (AI). Johnson also mentions the core principles of the field of self-organization: "neighbor interaction, pattern recognition, feedback, and indirect control."
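Just to make the idea concrete (this is my own toy sketch, not anything from the book), here is how leaderless slime-mold-style aggregation can be captured in a few lines of Python: each simulated cell deposits a bit of "cyclic AMP" on a grid and drifts toward the strongest nearby signal, and clumps form with no pacemaker anywhere.

```python
import random

SIZE, N_CELLS, STEPS = 40, 120, 200
EVAPORATION, DEPOSIT = 0.02, 1.0       # invented parameters, tuned only for illustration

grid = [[0.0] * SIZE for _ in range(SIZE)]
cells = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(N_CELLS)]

def neighbors(x, y):
    """The eight surrounding squares, wrapping around the edges."""
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

for _ in range(STEPS):
    moved = []
    for x, y in cells:
        grid[x][y] += DEPOSIT                          # secrete a pulse of "cyclic AMP"
        # climb the local gradient: step to the strongest neighboring signal
        best = max(neighbors(x, y), key=lambda p: (grid[p[0]][p[1]], random.random()))
        moved.append(best)
    cells = moved
    for row in grid:                                   # the signal slowly evaporates
        for j in range(SIZE):
            row[j] *= 1.0 - EVAPORATION

print(f"{N_CELLS} cells ended up on {len(set(cells))} distinct squares")
```

No cell is in charge and no cell sees the whole grid, yet the count of occupied squares keeps shrinking as the run goes on.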

Next he visits Deborah Gordon at Stanford's Gilbert Biological Sciences Building in Palo Alto. She studies ants, in this case harvester ants. Depictions of ant colonies as command economies or Stalinist communes are simply wrong. There is no centralized rule. Instead, ant behavior is quite decentralized and emerges from the bottom up. She shows him that the ants instinctively place their cemetery as far from the colony as possible, and their trash dump as far as possible from both the colony and the cemetery. The idea of an ant "queen" who actually directs behavior is bogus. The queen simply births the ants; her function is genetic and has nothing to do with directing ant behavior. {although with modern understanding of epigenetics one might want to revisit that possibility}

Next he moves on to the idea of a self-organizing city. Manchester, England is an obvious choice due to its co-emergence with the Industrial Revolution. Between 1700 and 1850 Manchester became a booming industrial city, seeing a ten-fold increase in population in 75 years as people moved there to work in the coal- and steam-powered factories. It was not recognized as a city, and had no city government or city services, until the end of this period. There was no city planning. However, there was fairly precise segregation of the poor and the rich. Engels noted with astonishment that this had happened with no formal planning, describing Manchester's layout as a kind of "systematic" complexity. The author treats the development of Manchester as a form of emergent behavior. Famous urban thinkers Lewis Mumford and especially Jane Jacobs have pointed out that cities tend to develop and cluster in very specific ways without central planning. Artists, bankers, crafters, etc. all seem to coalesce in various parts of a city. From the mid-19th century onward Manchester had an area where gay men would meet, even in a world where such activity was an extreme taboo. The area is officially recognized today and is popular with sightseers. Alan Turing was ostracized for frequenting it, lost his status, and took his own life. Turing, the most famous code-breaker of World War II and developer of the earliest form of the computer, also wrote a seminal paper on 'morphogenesis'; that line of work, on self-assembly, was tragically cut short by his death. Turing discussed the merits of the paper with the Belgian chemist Ilya Prigogine (later a Nobel laureate), whose own work in non-equilibrium thermodynamics had overlapping implications. Turing worked for some months at Manhattan's Bell Labs and met another code-breaker, Claude Shannon, the developer of information theory. Shannon urged him to make his "thinking machine" (computer) more brain-like, adding culture to it. Pattern recognition was key to the development of the computer, information theory, and AI. Warren Weaver would later write a review of scientific research developments that was perhaps the first official recognition of complexity theory, which is based in part on Shannon's information theory as well as computer science, molecular biology, physics, and genetics. Later would come chaos theory, although its development was already underway at the same time. Weaver distinguished problems of disorganized complexity from problems of organized complexity. Disorganized complexity involves vast numbers of independent variables and can only be treated statistically, while organized complexity involves a moderate number of interrelated variables and can yield organized macrobehavior. Weaver recognized that with new tools come new paradigms, as Thomas Kuhn would further relate. It would be computers, with their ability to tackle large data sets and crunch numbers, that would come to empower these new scientific ideas: complexity theory, information theory, chaos theory. Thus it is a tragedy, notes the author, that Turing did not live to see the intersection of two ideas he was instrumental in developing: computers and complexity.
    
City planner and social theorist Jane Jacobs read Weaver's essay and noticed that organized complexity, or complex order, was an issue in how some cities developed and why some parts functioned better than others. This was in the early 1960s. Jacobs saw the city as an organism with interacting parts. Shannon's work in the '40s emphasized the importance of pattern recognition and feedback in information systems, while E.O. Wilson's discovery in the '50s that ants use pheromone signals in social communication (similar to the cyclic AMP signaling of slime molds) further boosted the new 'science' of complexity. Meanwhile, Ilya Prigogine was showing through his non-equilibrium thermodynamics that the laws of entropy could be locally and temporarily circumvented, with higher-level order emerging from the chaos. Turing and Shannon's colleague Norbert Wiener would show the importance of feedback in any 'cybernetic' system. Wiener's student Oliver Selfridge and Marvin Minsky would work similarly with machine learning and AI, developing better means of pattern recognition. Selfridge developed the first emergent software program with his 'Pandemonium.' John Holland would expand on Selfridge's ideas to develop 'evolving' software programs, based loosely on genetics. His 'genetic algorithm' was built on the idea that the code is like a genotype and what the code does is like a phenotype. UCLA professors David Jefferson and Chuck Taylor furthered the idea in the late 1980s, writing software (run on the massively parallel Connection Machine) that simulated evolving life, so that replication was imperfect, as it is in life, rather than exact. Their format was virtual ants following pheromone trails, an emergent behavior, so they showed it could be reproduced in software. The ideas of the people mentioned above and others forged new ways of thinking, from a 'bottom-up' perspective rather than a 'top-down' one. The Santa Fe Institute was founded in 1984. James Gleick's book, Chaos: Making a New Science, came out in 1987. (I am about halfway through that one.) Before that, in 1979, came Douglas Hofstadter's classic, Gödel, Escher, Bach. In 1989 came Will Wright's program SimCity, which would become a popular video game, one that exhibited some self-organizing behavior and emergent properties.
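Holland's genotype/phenotype idea is easy to sketch in code. The following is a minimal, generic genetic algorithm of my own (the bit-string target and all the parameters are invented for illustration, not taken from Holland or the book):

```python
import random

TARGET = [1] * 20                       # arbitrary goal: a genome of all ones
POP, GENERATIONS, MUTATION = 30, 60, 0.02

def fitness(genome):
    """The 'phenotype' judgment: how well the genome matches the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    """Splice two parent genomes at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < MUTATION else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[: POP // 2]                  # selection: keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print(f"best fitness {fitness(population[0])}/{len(TARGET)} after {gen + 1} generations")
```

The code never says how to reach the target; selection, crossover, and imperfect copying find it anyway, which is the whole point of the approach.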

Humans aside, ants are likely the most successful species on earth. It is likely that the 'collective intelligence' of this 'eusocial' species is the key to its success. Ants change their individual 'local' behavior to meet the 'global' needs of the colony. There are no leaders. They change tasks according to need. There is no overseer of the system. As E.O. Wilson and his colleague Bert Hölldobler noted, "pheromones play the central role in the organization of colonies." Ant communication is based on ten signs, nine of which are pheromone-based, the other being tactile. Through 'gradient detection' ants can discover the source area of pheromone trails. They can also assess the frequency of these 'semiochemicals' (presumably how many sources there are of them and/or how many emission events there are), which may allow them to assess colony size and adjust tasks if necessary. Such abilities allow the colony to be efficient, with the right number of ants dedicated to the various tasks. Deborah Gordon's harvester ants exhibit five principles of bottom-up organization: 1) More is different: enough ants need to be around to make a colony, and they need to know what to do based on its size; 2) Ignorance is useful: it is a plus that no individual can assess the overall state of the system, which works best when no one knows it is a system; 3) Encourage random encounters: the many random encounters allow the ants to assess the needs of the colony and promote macrobehavior; 4) Look for patterns in the signs: pattern detection through pheromone trails and task distribution allows ants to find and exploit food sources and optimize tasks; 5) Pay attention to your neighbors: local info can lead to global knowledge, or swarm logic.
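To see how "pay attention to your neighbors" can add up to swarm logic, here is a toy task-allocation sketch of my own (not Gordon's actual model): each simulated ant samples a few random encounters and sometimes switches toward a task it has been meeting too rarely, and the colony-wide mix of tasks drifts toward the assumed demands with no overseer anywhere.

```python
import random

N_ANTS, STEPS, ENCOUNTERS = 200, 60, 5
DEMAND = {"forage": 0.6, "maintain": 0.3, "patrol": 0.1}   # assumed colony needs
TASKS = list(DEMAND)

ants = [random.choice(TASKS) for _ in range(N_ANTS)]       # start with a random mix

for _ in range(STEPS):
    updated = []
    for task in ants:
        met = random.sample(ants, ENCOUNTERS)              # a handful of random encounters
        for needed, share in DEMAND.items():
            # switch (sometimes) toward a task met less often than the colony seems to need
            if met.count(needed) / ENCOUNTERS < share and random.random() < 0.2:
                task = needed
                break
        updated.append(task)
    ants = updated

for t in TASKS:
    print(f"{t}: {ants.count(t) / N_ANTS:.2f} of ants (assumed demand {DEMAND[t]})")
```

No ant ever counts the colony; each one only reacts to the handful of ants it bumps into, yet the overall allocation roughly tracks the needs.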
Since ant colonies typically last about 15 years, the lifespan of the queen, Gordon began studying them on longer time scales, which had not been done much before. She discovered that the age of the colony is a factor, since colonies have phases; she defined three: infancy, adolescence, and maturity. Younger colonies respond more variably to changes than older ones. Individual ants live no longer than a year, yet the whole colony still develops and matures while its individuals last a short time. The queen is the only one who lives longer, but she never sees the light of day except when mating and is quite separate from the day-to-day lives of the worker ants. Her mates live such a short time (a few days at most) that genetics doesn't outfit them with mandibles like the rest of the ant types. One might see human cells as a cooperative hive or colony as well. DNA might be seen as a directing influence, which is top-down. However, cells also learn from their neighbors, which is bottom-up.

“Cells draw selectively upon the blueprint of DNA: each cell nucleus contains the entire genome for the organism, but only a tiny segment of that data is read by each individual cell.”

The idea of ‘emergence’ might have more to do with biological development (morphogenesis) than biological function. 
  
“Cells self-organize into more complicated structures by learning from their neighbors.”

Cells communicate through chemical messengers (salts, sugars, amino acids, proteins, and nucleic acids). These chemical messengers are akin to the pheromones of ants. We begin life as a single-celled embryo, but soon the dividing cells differentiate into compartments, a head end and a tail end, and join the multicellular ranks, each part with different 'instructions.' As cells further divide into more 'heads' and 'tails' and the embryo grows, cell 'collectives' form. Cells, like ants, lack a 'bird's eye view' of life and only experience it from what the author calls 'street level.' Cells take cues from neighboring cells, and these cues influence which genes get switched on, what is known as 'gene expression.' Neighbors and neighborhoods are also the domain of cities, as well as of AI software that learns and evolves. The author points out the similarities of the SimCity game with ants and embryos as well as with cities: they all use local interactions to affect global behavior. Economist Paul Krugman wrote about the 'self-organizing economy' in the '90s. He noted that certain businesses coalesce in particular city areas, presumably to share a customer base. Businesses also tend to prefer having their competitors close by rather than a little further away. He says businesses will cluster in these ways over time no matter how a city is first organized. Thus are formed what might be called "hubs" of certain activity in an area. Ethnic and lifestyle-similar groups also tend to cluster in certain areas of a city. Favorable interactions with neighbors make areas within a city safer, noted Jacobs: another example of random local interactions leading to global order. Jacobs saw the sidewalk as the necessary place where these local interactions occur, the interface. Johnson does note that there is an important and obvious difference between ants and humans: ant colony coherence is enhanced by the ignorance of individual ants, or rather their inability to make 'conscious' decisions, at least compared to humans. Ant decisions are much more based on genetics (and pheromones) than human decisions are. But our 'free will' may not matter so much at different scales. Regardless, our social clustering has significant predictability based on systems analysis. If we scale out to hundreds or thousands of years, the city as a human superorganism will seem much more like an ant colony. As more humans move to cities, our (individually unseen) emergent and collective behavior will become more important, one would think. The author mentions the guilds of medieval Europe, and notes that the silk weavers, once part of the goldsmith guilds, are still in the same section of Florence as they were as early as 1100.
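The clustering point can be illustrated with the classic Schelling-style neighborhood model, which Johnson does not present but which makes the same local-to-global argument: agents with only mild preferences about their immediate neighbors still produce strongly clustered, city-scale patterns. A rough sketch of my own:

```python
import random

SIZE, EMPTY_FRAC, SIMILAR_WANTED, ROUNDS = 30, 0.2, 0.4, 40

# Fill a grid with two types of agents ("A" and "B") plus some empty squares.
cells = {}
for x in range(SIZE):
    for y in range(SIZE):
        r = random.random()
        cells[(x, y)] = None if r < EMPTY_FRAC else ("A" if r < (1 + EMPTY_FRAC) / 2 else "B")

def similar_fraction(pos):
    """Share of occupied neighboring squares holding the same type as pos."""
    x, y = pos
    nbrs = [cells.get(((x + dx) % SIZE, (y + dy) % SIZE))
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    occupied = [n for n in nbrs if n is not None]
    return occupied.count(cells[pos]) / len(occupied) if occupied else 1.0

for _ in range(ROUNDS):
    unhappy = [p for p, v in cells.items() if v and similar_fraction(p) < SIMILAR_WANTED]
    empties = [p for p, v in cells.items() if v is None]
    random.shuffle(unhappy)
    for p in unhappy:                       # unhappy agents move to a random empty square
        if not empties:
            break
        new = empties.pop(random.randrange(len(empties)))
        cells[new], cells[p] = cells[p], None
        empties.append(p)

occupied = [p for p, v in cells.items() if v]
avg = sum(similar_fraction(p) for p in occupied) / len(occupied)
print(f"average same-type neighbor share: {avg:.2f} (each agent only wanted {SIMILAR_WANTED})")
```

The striking result is that the final clustering is far stronger than anything any individual agent asked for, which is the street-level-to-global dynamic Krugman and Jacobs are describing.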

Recognizing and responding to anomalies and changing patterns is something we do both consciously and unconsciously. As with the guilds persisting in certain areas, one might even see "traditions" as patterns enduring through time. Cathedrals and universities also often keep their spatial configurations through time, and there are of course practical reasons for this, such as the uniqueness of the structures themselves. Places become known for things and such knowledge may endure. Such districts become network nodes and hubs in manufacturing and trade. What are called 'economies of agglomeration' may develop due to the advantages of sharing resources and services.

“Cities were creating user-friendly interfaces thousands of years before anyone even dreamed of digital computers. Cities bring minds together and put them into coherent slots.”

Cities store and transmit information, such as 'how-to' knowledge of new technologies. Neighborhoods often come to be self-organizing clusters. There is a need to process and prioritize information. There are more people in a city and more specialized knowledge. More people in a group usually leads to more specialization, and more specialization leads to networking nodes and hubs. This is perhaps not too distant from the task specialization of ants. Johnson says that information management is the latent purpose of a city, more akin to unconscious pattern recognition. Johnson speculates about why cities emerge and grow, particularly those that began again after the fall of the Roman Empire, when there had been a contraction and loss of cities. Technology, especially for food production, such as the heavy wheeled plow of the Germanic peoples and crop rotation, allowed areas to support larger populations, which in turn tends to lead to more macrobehavior.

He compares the brain and ants, analogizing ants with neurons and pheromones with neurotransmitters. Much like the collective knowledge of the ant colony is the sum of decisions by simple and ignorant individual ants, so too is the brain the sum of decisions of individual neurons. 
Some, like Robert Wright, see the World Wide Web as an heir to cities in developing bottom-up self-organization. Others disagree, noting that there are no 'higher orders' manifesting in the highly disordered web. Steven Pinker explained how the internet is very different from the human brain: the brain is imbued with specific "goal-directed organization" while the internet has no such organization. The Web is great with connections but lousy with structure, says Johnson. He calls it 'networked chaos.' One problem, he says, is that HTML-based links are one-directional; there are no mechanisms for feedback, and it is feedback that allows self-organizing systems to become more ordered. Nowadays there are quite a few feedback algorithms, many involved with advertising and marketing. The algorithms are designed to recognize patterns and make recommendations based on them. They search and recognize our website-clicking patterns, our seeming preferences, so we can be targeted. It works in many cases. However, the feedback systems of the web are rarely if ever adaptive.
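The kind of click-pattern feedback Johnson has in mind can be sketched very simply. This is my own toy co-occurrence recommender, not any particular site's algorithm: it just counts which items show up together in click histories and feeds those counts back as suggestions.

```python
from collections import defaultdict
from itertools import combinations

click_histories = [                        # hypothetical user sessions
    ["slime_molds", "ant_colonies", "cities"],
    ["ant_colonies", "cities", "sim_city"],
    ["chaos_theory", "cities", "sim_city"],
    ["slime_molds", "ant_colonies"],
]

# Count how often each pair of items appears in the same session.
co_counts = defaultdict(lambda: defaultdict(int))
for session in click_histories:
    for a, b in combinations(set(session), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def recommend(item, n=2):
    """'People who clicked this also clicked...' from raw co-occurrence counts."""
    ranked = sorted(co_counts[item].items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:n]]

print(recommend("cities"))   # e.g. ['ant_colonies', 'sim_city']
```

It is pattern recognition plus feedback, but, as Johnson says of the web's feedback systems generally, nothing in a loop this simple is adaptive in the way a colony or a brain is.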

Neurologist Richard Restak says that habit and memory involve repetition, which involves "the establishment of permanent and semi-permanent neuronal circuits." The brain is made up of connections; networking is a major function, and feedback is the key to that functional interconnectedness. Johnson compares the media, what he calls the 'mediasphere,' to the brain in that there are numerous feedback loops. Interest in media events seems to "blossom," possibly in response to the strength of the feedback loops. Feedback can drive media stories, especially nowadays with the internet and social media being a major source of news rather than the tightly controlled "mainstream media" of the past. Johnson talks about the then-new (ca. 2001) CNN news feeds, where subscribing local news outlets could select among a pool of stories and present them in the old news-style format that tended to "reverberate" with watchers and listeners. Of course, these feedbacks were also not adaptive.

Johnson explains "negative feedback" as incorporating previous and present conditions to regulate a system, as in a thermostat controlling the temperature of a room. Negative feedback is a regulating mechanism, while positive feedback is a mechanism for pushing onward in one direction. The use of information as a medium for negative feedback was first explored by Norbert Wiener in his 1948 book, Cybernetics. For many real-world applications, making decisions through negative feedback required a way to make sense of the data, to analyze it through number crunching. Thus Wiener was also involved in the development of early computing in the era of ENIAC.

"For negative feedback is not solely a software issue, or a device for your home furnace. It is a way of indirectly pushing a fluid, changeable system toward a goal. It is, in other words, a way of transforming a complex system into a complex adaptive system."

“At its most schematic, negative feedback entails comparing the current state of a system to the desired state, and pushing the system in a direction that minimizes the difference between the two states.”

That is what Wiener meant by "homeostasis." In Wiener's words:

“When we desire a motion to follow a given pattern, the difference between this pattern and the actually performed motion is used as a new input to cause the part regulated to move in such a way as to bring its motion closer to that given by the pattern.”
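That error-correcting loop is simple enough to write down in code. Here is a minimal thermostat-style sketch of my own (the room model and the gain value are invented for illustration): compare the current state to the desired state and push in the direction that shrinks the difference.

```python
def thermostat_step(current_temp, target_temp, gain=0.3):
    """One negative-feedback step: push against the error between actual and desired."""
    error = target_temp - current_temp        # how far we are from the goal
    heating = gain * error                     # respond in proportion to the error
    return current_temp + heating

temp, target = 12.0, 20.0
for minute in range(10):
    temp = thermostat_step(temp, target)
    temp -= 0.2                                # heat constantly leaks out of the room
    print(f"minute {minute}: {temp:.1f} C")

# The temperature settles near the target instead of running away:
# that is negative feedback acting as a regulator (homeostasis in miniature).
```

Positive feedback would be the same loop with the sign flipped, which is why it amplifies rather than regulates.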

The human body is a "massively complex homeostatic system" where many of the feedback mechanisms are controlled by the brain. Our sleep cycles and circadian rhythms are controlled by negative feedback. That the brain and body are homeostatic systems is why artificial feedback methods such as biofeedback can be successful. Through practice and habituation we can learn to control, to some extent, some of our internal bodily processes. Neurofeedback uses brainwave patterns, represented graphically, as the target. Different brainwave signatures correlate to different states of consciousness and degrees of tranquility or excitation. Neurofeedback involves pattern amplification and recognition. Johnson sees the media over-amplifying certain stories through excessive coverage as a positive feedback loop. Neurons suffer brief fatigue states (less than a millisecond) while the media does not fatigue, he notes.

Urban thinkers Lewis Mumford and Jane Jacobs had a feud about the breakdown of self-organization in cities. While Mumford thought Jacobs' ideas worked great in small, intimate cities, he also thought much was lost in larger cities, especially without the direct feedbacks and feedback enablers: sidewalks and dedicated neighborhoods. Meanwhile the early web-based communities, the electronic bulletin boards, were mostly top-down, with leaders picking topics and moderators, so hierarchies of sorts did develop. But homeostasis did not happen, nor did much self-organization. Johnson thinks one reason is the lack of social feedback in non-face-to-face discussion. In face-to-face encounters there is a vast amount of social feedback through voice tones, facial expressions, gestures, and other body language. We become "social thermostats," he notes. Threaded discussions often consist of active participants and lurkers. The lurkers give no feedback, as they are invisible. If a "crank" appears to disrupt discussions (a crank might be a precursor to what we now call a troll), he may be booted by active participants, but lurkers can't be appreciated, abhorred, or policed unless participation is mandatory, which I would guess is rare. Thus, when lurkers are factored in, online groups may be less self-organizing than face-to-face groups, because parts of the system, the lurkers, communicate in only one direction and provide no feedback. Thus, no homeostasis. He talks about an online community called Slashdot that exhibited some self-organization and that, as it grew, faced the decision to stay small and preserve quality or to grow and risk losing that quality, not unlike Mumford's city size at which self-organization breaks down. Slashdot was partially based on moderators rating others' posts, with points (called karma) awarded based on those ratings, which in turn yielded privileges. Thus there is plenty of feedback. Moderation was limited, which created scarcity, while the karma rewards created value, so the system functioned like a kind of currency; it could be seen as a pricing standard for community participation. Valuation by user ratings is still in full swing today, especially online. However, depending on how the valuation is designed, one might create a "tyranny of the majority" that demotes minority viewpoints.
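The karma mechanics are easy to caricature in code. This is my own drastic simplification, not Slashdot's real system: scarce moderation points, post scores, karma, and a privilege threshold together form the feedback loop.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    karma: int = 0
    mod_points: int = 0        # a scarce resource handed out to some users

@dataclass
class Post:
    author: User
    text: str
    score: int = 1

def moderate(moderator: User, post: Post, up: bool) -> bool:
    """Spend a moderation point to rate a post; the author's karma moves with it."""
    if moderator.mod_points <= 0 or moderator is post.author:
        return False           # scarcity, and no rating your own posts
    moderator.mod_points -= 1
    post.score += 1 if up else -1
    post.author.karma += 1 if up else -1
    return True

def can_moderate(user: User) -> bool:
    return user.karma >= 5     # a privilege earned through the feedback loop

alice, bob = User("alice", karma=6, mod_points=3), User("bob")
post = Post(bob, "ants are just wet robots")
moderate(alice, post, up=True)
print(post.score, bob.karma, can_moderate(bob))   # 2 1 False
```

Scarcity plus reward is why the original felt like a currency, and a threshold like the one here is also where a "tyranny of the majority" can creep in.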

Mitch Resnick's self-organizing program/game StarLogo simulates slime-mold behavior: virtual cells flash colors that mimic c-AMP secretion, and other cells detect and respond to the flashes. StarLogo is basically a simulator designed to help understand emergent behavior. AI guru Marvin Minsky saw Resnick's program and initially made incorrect assumptions about it, that it was directed rather than self-organized, although after Resnick explained how it worked he revised his assumptions. The point Resnick makes in telling this story is that we are accustomed to thinking of system design in top-down, centrally planned ways, and even an expert in emergent systems (Minsky) could be fooled at first. Of course, the programmer can be considered a centralized authority, so in the case of simulators there is some top-down control. Wiener derived the term cybernetics from the Greek for 'steersman,' so control or direction by feedback can be considered a way of steering a moving system, or driving it. Johnson goes through other learning/emergent software innovations such as the number-sorting software of programmer Danny Hillis, where the machine takes over from the programmer. The author goes through several other (early) 'interactive' software and game products and projects where users/players can only direct self-controlling systems in limited ways. Perhaps the uncertainty of interactive games keeps players from becoming bored or disinterested too quickly. Wright even had to 'dumb down' some of his AI-like creations in the subsequent SimCity games to keep things interesting, perhaps in the way ants are dumb compared to their unseen (by them) collective intelligence. Johnson calls the programmers or controllers of such games 'control artists,' suggesting that part of their work is art.

The next section begins with mind reading. Psychologists have found that at about age four, children begin to be able to accurately predict the intentions of others. Other primates can't really do it. Some have theorized the existence of certain neurons, called mirror neurons, that are dedicated to mirroring the motor activity of others; there is much debate about that. Autistic people may have difficulty with mind reading. We do know that the mind makes estimates and predictions about parts of sensory reality that are missing or unclear and fills in the gaps; visually, we do it with our blind spots. Apparently, we intuitively track the success rates of these predictions, our expectations for sensory reality. Similarly, we make predictions and develop expectations about the intentions and behavior of others. This is all part of what philosophers, psychologists, and cognitive neuroscientists call 'theory of mind,' and it suggests that our self-awareness may be a by-product of reading the intentions of others. It is thought that this development drove an increase in brain size, particularly in the prefrontal lobes. Johnson notes that mind reading and its relative, self-awareness, "is clearly an emergent property of the brain's neural networks." This involves feedback-heavy interactions; the brain is always rewiring its circuitry.

“Amazingly, this process has come full circle. Hundreds of thousands – if not millions – of years ago, our brains developed a feedback mechanism that enabled them to construct theories of other minds. Today, we are beginning to create software applications that are capable of developing a theory of our minds.”

Will our media come to really know us? Perhaps. It sometimes seems a bit eerie when those Amazon bots pick a good book for you but perhaps less so when they fail. But targeted ads can and sometimes do save the ad makers and the customer time and annoyance. 

“… the invention of the graphic interface – was itself predicated on a theory of other minds. The design principles behind the graphic interface were based on predictions about the general faculties of the human perceptual and cognitive systems. Our spatial memory, for instance, is more powerful than our textual memory, so graphic interfaces emphasize icons over commands. We have a natural gift for associative thinking, thanks to the formidable pattern-matching skills of the brain’s distributed network, so the graphic interface borrowed visual metaphors from the real world: desktops, folders, trash cans. Just as certain drugs are designed specifically as keys to unlock the neurochemistry of our gray matter, the graphic interface was designed to exploit the innate talents of the human mind and to rely as little as possible on our shortcomings.”

Of course, software and interface design is decidedly top-down, attempting integration based on mere predictions about an average human mind. Interactive computing is often first applied to virtual reality, and sometimes VR pornography, as we humans seem to seek to technologize our urge-fulfillment. The bot ads and targeted ads based on user histories and clicks can be seen as self-organized ad media. eBay works because seller ratings work; otherwise there would be more scamming.
Some high-tech companies experimented with neural-net-like organizational structures with decentralized intelligence. I am not sure how much of this is around today, but surely some; CEOs are still around and still extremely well paid. He also mentions the decentralized nature of protest movements, which probably began with the Seattle anti-globalization protests and continued in more recent times with the direct-democracy and consensus styles of Occupy Wall Street, although I can see some serious flaws in those set-ups. Today's smart technologies and devices rely on the ability to learn. While the programming is general, the learning fills in the specifics if the system is self-organizing.

Johnson reminds us that emergence happens on different scales (or zoom levels) in different systems. That reminds me of fractals and Fibonacci patterns, scales within scales; indeed emergence and chaos are related, since both are often features of 'developing' systems, both organic and inorganic.

“ … understanding emergence has always been about giving up control, letting the system govern itself as much as possible, letting it learn from the footprints.”

Great, thought-provoking book, even if outdated in some respects. It was undoubtedly ahead of its time when published, so that makes up for some of the outdatedness. I hear Johnson now and then as a guest or commentator on NPR's Science Friday or Radiolab, but I may have to look and see what he has written lately, especially if he is still ahead of things technologically.