Iyad Rahwan, the director and principal investigator of the Media Lab’s Scalable Cooperation group, wrote, “We’re seeing the rise of machines with agency, machines that are actors making decisions and taking actions autonomously. This calls for a new field of scientific study that looks at them not solely as products of engineering and computer science, but additionally as a new class of actors with their own behavioral patterns and ecology.”

The day after the blog post, the group published a paper in the journal Nature titled “Machine Behaviour.” It outlines a broad scientific research agenda to study machine behavior in a way that integrates computer science and the sciences that study the behavior of biological agents. The researchers point out that just as animal and human behavior can’t be understood apart from the context in which it occurs, machine behavior requires a coordinated study of the algorithms and the social environments in which those algorithms operate.

“Scale of Inquiry in the Machine Behavior Ecosystem” from “Machine Behaviour,” MIT Media Lab


In general, the concern “about the broad, unintended consequences of AI agents that can exhibit behaviors and produce downstream societal effects—both positive and negative—that are unanticipated by their creators” is growing louder and coming from a wider spectrum of commentators and scholars. At its core is the possibility of losing human oversight over intelligent machines.

There is also the possibility of missed benefits that “AI agents can offer society by supporting and augmenting human decision making.”

The paper describes three primary motivations for the creation of a new scientific discipline addressing machine behavior:

  1. There is an expanding variety of algorithms operating in our culture, and these solutions are taking on more significant roles in our everyday lives.
  2. The complexity of these algorithms and the environments in which they operate can create virtual black boxes, impenetrable to all but a few specialists.
  3. The combination of the ubiquity of these AI agents and their complexity makes it difficult for us to predict the effects of intelligent algorithms on humanity.

Compounding the growth problem, the effects of autonomous agents “are increasingly likely to affect collective behaviors, from group-wide coordination to sharing.” Adding to the complexity, computer systems learn from data that may be imperfect or flawed in the way it was collected. Further, the source code and model structure of the most frequently used algorithms are often proprietary, as is the data those algorithms consume. Their corporate owners keep them locked away as black boxes, preventing outsiders from seeing and understanding them.

Examples of questions that fall into the domain of machine behavior. From “Machine Behaviour,” MIT Media Lab

Some of the most useful information gathered in studies of human and animal behavior is predictive. But to arrive at reliable predictions, you need to understand behaviors and their contexts. Along with the problems already mentioned, Janine Liberty, a Media Lab expert on digital content and strategy, writes about another obstacle: “Even if big tech companies decided to share information about their algorithms. . . there is an even bigger barrier to research and investigation, which is that AI agents can acquire novel behaviors as they interact with the world around them and with other agents. The behaviors learned from such interactions are virtually impossible to predict, and even when solutions can be described mathematically, they can be ‘so lengthy and complex as to be indecipherable.’”


The Media Lab’s “Machine Behaviour” insists on the importance of the interplay of human and machine behaviors. “Intelligent machines can alter human behavior, and humans also create, inform and mold the behaviors of intelligent machines. We shape machine behaviors through the direct engineering of AI systems and through the training of these systems on both active human input and passive observations of human behaviors through the data that we create daily.” And the authors of the paper are careful to remind us throughout the 10 pages that the study of machine behavior is critically important for the potential benefits AI can provide our society.

At the same time, they caution against oversimplifications. In the concluding outlook they write, “Machines exhibit behaviors that are fundamentally different from animals and humans, so we must avoid excessive anthropomorphism and zoomorphism. Even if borrowing existing behavioral scientific methods can prove useful for the study of machines, machines may exhibit forms of intelligence and behavior that are qualitatively different—even alien—from those seen in biological agents.” Machine behavioral science will venture into unfamiliar territory as well as into our familiar social, economic, political, and intellectual environments.

The researchers explain that this isn’t the creation of a new field of study from scratch. It will combine computer science with research from psychologists, social scientists, economists, and others.

A measure of the potential success of this initiative can be seen in the list of the 23 researchers who contributed to the paper: members of the MIT Media Lab along with colleagues from the Max Planck Institutes, Stanford University, the University of California San Diego, Google, Facebook, and Microsoft. The Scalable Cooperation group at the Media Lab has released a series of interviews with several of the paper’s authors, and it has organized upcoming conferences for those working on machine behavior across a wide variety of fields.
