“Human intelligence is something deeply social”

A few words on the importance of society and multi-agent learning in AI Research

Maciek Wiatrak
Published in DataDrivenInvestor · 7 min read · Jan 15, 2019


DeepMind artwork of cooperating ants. (Image: DeepMind)

We tend to think of intelligence as something self-made, a product of our own thoughts, and to assume that this inborn capacity is what allowed humans to gain such a dominant status among the beings we know. A similar mindset is common in the field of machine intelligence, where recent advances in the single-agent framework built on deep learning have excelled at skills previously thought unattainable for machines (e.g. image recognition). Yet, however helpful the single-agent approach is for particular tasks, it remains doubtful that it will be the complete solution for matching human cognitive capabilities and thus creating a ‘general artificial intelligence’.

What these machines often find exceptionally challenging are social interactions, including things like reciprocity and communication, which for humans appear trivial. One reason for this is that when building systems and algorithms, we often omit the social aspect of human intelligence and the fact that it has been constantly changing and re-shaping over thousands of years of evolution, adapting to its environment. Indeed, recent research indicates that what allowed us to gain an advantage over other species was neither our physical abilities nor our mental sharpness, but our ability to form large groups and cooperate effectively within them, enabling us to divide tasks, share responsibilities and build large systems of knowledge. One example of such a system is Wikipedia: an enormous database of evolving knowledge from which everyone can benefit. Keeping up with one discipline such as machine learning can be hard, and keeping up with many is almost impossible. Luckily, Wikipedia is there to help out, providing support in things such as writing this article.

But effective cooperation is not unique to humans; nature is full of examples of it. Ants, wolves, bees and a number of other species excel at cooperation. What makes us different is that we do not need to be connected through blood ties or territory to work together effectively. Instead, we have found another way: forming so-called intersubjective realities. Intersubjectivity, defined as a mutual agreement between people on a given set of meanings and definitions, is a cornerstone of almost all human societies. Take human rights law as a first example. We see it as something objective that should be granted to every human being, yet had these rights been drawn up in medieval times in a similar format, the content would surely have differed strongly. The law is nothing more than a reflection of the current state of the world; what's more, it remains debatable, as many societies today would not fully agree with it.

An even more down-to-earth example is the establishment of companies and institutions. Take Coca-Cola: we all agree that there is such a company specializing in producing soda drinks, but if we take apart a can of its drink, we find nothing but a set of chemicals mixed together. Coca-Cola, other brands, institutions and human rights law exist solely because we agree that they exist. The list could be extended to religions, football teams and, finally, countries. At the end of the day, what makes a Berliner and a Londoner different? Biologically, the differences between humans are not significant, and what are tradition and culture if not a set of rules and rituals formed and maintained over a substantial amount of time? Ultimately, our culture is not inborn.

This leads to the observation that human intelligence did not evolve in isolation, but as a result of cumulative cultural evolution emerging from constant competition and cooperation. Our world is intrinsically a multi-agent world, and if we wish to create more generally intelligent agents, our systems should reflect that. Another reason to develop multi-agent systems is their potential robustness and scalability. Being intrinsically dynamic, multi-agent environments are among the most complex we can construct and could yield cutting-edge architectures to be deployed in more general tasks. Finally, the current state of the world is multi-agent, and the institutions around us, such as governments, economies or local markets, are multi-agent as well.

Incorporating the multi-agent framework could give us systems capable of a higher degree of generalisation, and thus possibly better performance at things such as social interactions. But to reach such complexity, we need a definition of intelligence that allows us to test and benchmark results. This is exceptionally hard, as the question of what intelligence is remains open and, as argued above, we cannot currently capture it fully. Nevertheless, a candidate definition used by a number of researchers, and one that could shed some more light on the problem, has been proposed by Legg & Hutter:

“Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”

This informal definition is debatable; however, its advantage is that it can be formulated as a relatively simple equation that captures the importance of generalization well:

Υ(π) = Σ_{μ ∈ E} 2^(−K(μ)) · V^π_μ

definition of intelligence by Legg & Hutter (2007)

The idea behind the formula is to measure the intelligence Υ of an agent π as the value V achieved by the agent π in an environment μ, summed over all environments:

Σ_{μ ∈ E}

summed over all environments

with each value achieved in the environment adjusted by a weighting factor:

2^(−K(μ))

the weighting factor

The weighting factor plays a key role here, as it weights the agent's value in a given environment inversely proportionally to that environment's complexity. This reflects the fact that we want the system to be general-purpose and cover a large number of simple tasks before moving on to more complex ones. The complexity of the environment in the weighting factor is measured by its Kolmogorov complexity K, defined as the length of the shortest computer program that describes (computes) the environment. Importantly, Kolmogorov complexity encapsulates the principle known as Occam's razor, which essentially states that, given a number of possible solutions, the simplest should be preferred. That is widely regarded as the rational thing to do and is often reflected in IQ tests, which examine one's ability to apply Occam's razor.
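To make the formula concrete, below is a minimal Python sketch of how the measure could be evaluated over a handful of made-up environments. Since K(μ) is not computable, the complexities here are simply assumed description lengths, and the per-environment values are invented for illustration; none of the numbers come from Legg & Hutter's paper.

```python
# Toy sketch of the Legg & Hutter universal intelligence measure.
# K(mu) is not computable, so the "complexities" below are assumed
# description lengths (in bits) for imaginary environments.

def universal_intelligence(values, complexities):
    # Sum the agent's value in each environment, weighted by 2^(-K(mu)).
    return sum(v * 2 ** (-k) for v, k in zip(values, complexities))

# Hypothetical per-environment values V(pi, mu), scaled to [0, 1].
values = [0.9, 0.7, 0.4]
# Assumed Kolmogorov complexities K(mu) of the three environments.
complexities = [3, 8, 20]

print(universal_intelligence(values, complexities))
# The simple 3-bit environment contributes 0.9 * 2**-3 = 0.1125,
# while the 20-bit one adds only about 4e-7: simple tasks dominate.
```

The choice of 2^(−K(μ)) means an agent earns most of its score by handling simple environments well, which is exactly the generality bias the definition is after.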

An example of this presented by Legg & Hutter is a common IQ-test question: predict the next number in the sequence 2, 4, 6, 8. Most humans find the solution obvious: the numbers increase by 2 at a time, hence the nth item in the sequence is 2n and the next item will be 10. The catch is that the polynomial 2n⁴ − 20n³ + 70n² − 98n + 48 also fits the observed pattern, and under that rule the next number in the sequence is 58 instead of 10. For humans, the choice of 10 comes almost naturally, as we subconsciously apply Occam's razor. Without something like Kolmogorov complexity, however, a computer might not see any difference between the two solutions, as both meet the objective.
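As a quick sanity check (my own snippet, not part of Legg & Hutter's example), the code below verifies that both rules reproduce 2, 4, 6, 8 and then diverge on the fifth element:

```python
# Both rules reproduce the observed sequence 2, 4, 6, 8,
# yet they disagree on the fifth element.

def simple_rule(n):
    return 2 * n

def quartic_rule(n):
    return 2 * n**4 - 20 * n**3 + 70 * n**2 - 98 * n + 48

for n in range(1, 5):
    assert simple_rule(n) == quartic_rule(n)  # 2, 4, 6, 8 for n = 1..4

print(simple_rule(5), quartic_rule(5))  # prints: 10 58
```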

In this way, we have outlined a general definition of intelligence: measuring an agent's ability to achieve goals in a wide range of environments, while weighting each result by the complexity of the environment it was achieved in. However, the generality of the definition can also be seen as a limitation, not to mention the main technical drawback: the Kolmogorov complexity K cannot be computed, only approximated (for more details, please refer to Fortnow (2001)).

Such a brief introduction to the definition of machine intelligence gives an understanding of what multi-agent, and intelligent systems in general, should aim to achieve. Unfortunately, however, it does not get us much closer to building one. To do that, we first need to face a number of challenges, one of which is the computational complexity of these systems, especially relevant to multi-agent designs.

Let us get back to the social aspect of evolution and the argument that intelligence has evolved together with human societies over thousands of years. The main bottleneck is that, even if this turns out to be true, we are still very far from grasping how intelligence evolved: what the causes and implications were, and how we could use them to construct a more general artificial intelligence. An example of this is language: for us, an integral part of the human experience, something that has evolved over years of communicating and interacting and that can be learned without any formal feedback or training. Nevertheless, despite the effort that has gone into studying it, we are far from fully understanding how it works, let alone reconstructing it.

This is only one theory about the importance of society in intelligence, its definition and what we could do to understand it better. But as Minsky, a prominent researcher in this area and the author of “The Society of Mind”, which discusses a multi-agent view of the mind, has said: “The very concept of intelligence is doomed to change the more we learn about us and just like the concept of the “unexplored regions of the world”, it disappears as soon as we discover it.”

Note: Feedback is highly welcome and I am open to discussion. This article is a reflection of my personal views.


University College London student, previously Data Scientist @ Growbots. Interested in Machine Intelligence and Multi-Agent Reinforcement Learning.