Between Ethics and Profit: AI and The Agency Problem

Er Raqabi El Mehdi
Published in DataDrivenInvestor · 7 min read · Mar 2, 2021


While Artificial Intelligence (AI) has been adding value to our lives in many ways, it remains a tool that can be harmful when overused or misused. To address the latter risk, several big tech corporations have created AI ethics boards and/or departments. While the role differs from one company to another, these entities are generally expected to oversee, review, and propose ethical initiatives that foster the development of responsible AI. Still, one may wonder whether such entities play a significant role or merely enhance the brand image of these companies.

For anyone who genuinely cares about others, such a question is crucial from many perspectives. Being concerned about youth, I can foresee several examples in my daily life and interactions. For instance, using AI, Netflix “perfectly” recommends shows and movies that may keep a user stuck watching for days. Facebook (and Instagram) likewise keeps users scrolling through content for hours on end. This can lead to addiction, a loss of passion for studies, a significant waste of time, and, when the content is “polluted”, mental health issues. While parents and tutors should support youths in achieving a healthy lifestyle, these companies also bear direct responsibility for how they shape the next generations. In this article, I want to examine this topic from an agency perspective, discuss the issue and the resulting behaviors, and conclude with some suggestions.

The Agency Problem

When an employee is recruited, she/he is expected to act in the employer’s best interests. Now, the way interests are defined varies according to the context. For instance, in corporate finance, the employee is supposed to increase shareholders’ equity. When the employee is not aligned with this expectation, we speak of the agency problem, formally defined as “a conflict of interest inherent in any relationship where one party is expected to act in another’s best interests”. A famous story in the literature is that of Enron, in which the directors were legally bound to protect investors’ interests. Still, lacking sufficient other incentives, they neglected some of their responsibilities, leading the company to engage in illegal activities that produced an accounting scandal followed by billions of dollars in losses.

(AI) Ethics

Before globalization, people belonged to distinct societies and groups within which there was broad agreement on ethics. Nowadays, in a connected and rapidly changing world, the word “ethics” seems more complicated than before. This is understandable, since humans are moving from locally based interaction to a more globally based one. Social networks represent a world where people from different societies, backgrounds, beliefs, and cultures interact together. Hence, many controversial topics emerge on which there is no consensus: a behavior considered acceptable in one place may be considered taboo in another, and vice versa.

The term “Machine Ethics” was coined by Mitchell Waldrop in the 1987 AI Magazine article “A Question of Responsibility”. The author stated:

“However, one thing that is apparent from the above discussion is that intelligent machines will embody values, assumptions, and purposes, whether their programmers consciously intend them to or not. Thus, as computers and robots become more and more intelligent, it becomes imperative that we think carefully and explicitly about what those built-in values are. Perhaps what we need is, in fact, a theory and practice of machine ethics, in the spirit of Asimov’s three laws of robotics.”

Since then, AI ethics has been evolving to cover many topics including privacy, surveillance, bias, and discrimination.

The Issue

With all these concepts introduced, the issue can be easily identified. AI is making companies more profitable. Still, it may be directly or indirectly harmful to customers, users, youth, children, etc. In such a case, from an ethical perspective, it may be necessary to strike a fair balance between “Ethics” and “Profit” (or “Performance”). However, in our era of wild capitalism, many companies are primarily profit-oriented. As far as I can observe, ethics comes, at best, second after profit: the weight assigned to the “Profit” variable is significantly larger than the weight assigned to the “Ethics” variable. Furthermore, amid this fierce competition and race toward ever more profit, human well-being seems to be deteriorating. For example, mental health is becoming an alarming topic, especially among youth. Social networks, shows, movies, and the like paint a completely distorted image of real life, which makes students less attracted to education, learning, sports, etc.
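The weighting metaphor above can be made concrete with a toy sketch. This is purely illustrative: the two “algorithms”, their scores, and the weights are all hypothetical numbers, not anything a real company publishes. It only shows how the choice of weights alone determines which algorithm wins.

```python
# Toy "objective function" trading off profit against ethics.
# All names and numbers are hypothetical, for intuition only.

def objective(profit_score: float, ethics_score: float,
              w_profit: float, w_ethics: float) -> float:
    """Weighted combination of a profit metric and an ethics metric."""
    return w_profit * profit_score + w_ethics * ethics_score

# Two candidate recommendation algorithms (hypothetical scores in [0, 1]):
# A maximizes engagement at the cost of user well-being; B is more balanced.
algo_a = {"profit": 0.9, "ethics": 0.2}
algo_b = {"profit": 0.6, "ethics": 0.8}

def pick(w_profit: float, w_ethics: float) -> str:
    """Return the algorithm the company would deploy under these weights."""
    score_a = objective(algo_a["profit"], algo_a["ethics"], w_profit, w_ethics)
    score_b = objective(algo_b["profit"], algo_b["ethics"], w_profit, w_ethics)
    return "A" if score_a > score_b else "B"

print(pick(w_profit=0.9, w_ethics=0.1))  # profit-dominated weights -> A
print(pick(w_profit=0.5, w_ethics=0.5))  # balanced weights -> B
```

With profit-dominated weights the engagement-maximizing algorithm A wins (0.83 vs. 0.62); with balanced weights the more responsible algorithm B wins (0.55 vs. 0.70). The ethics problem, in this framing, is simply that nobody inside the company is incentivized to raise `w_ethics`.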

Behaviors

Now, imagine an (AI) scientist excited about (AI) ethics and recruited by a big tech company to work on it. The role involves overseeing, checking, and analyzing past, current, and upcoming algorithms. Obviously, the company wants the best possible algorithms in terms of performance metrics that can boost, for instance, the number of subscriptions or the time spent by users. If the shareholders’ expectations are focused solely on profit, he/she cannot stay aligned with their interests, especially upon detecting that the very features that make the algorithms powerful are also harmful. In such a situation, people behave differently based on various criteria:

  • Agree: many employees end up giving in and working for a salary and some benefits. They set aside their beliefs and gradually merge into the environment and its requirements. These people represent the largest portion. I have seen such situations throughout my experiences, even among prominent researchers working in business whom I have been following. That is one reason I think researchers should consult for and support companies without becoming employees; otherwise, many lose the ethical criterion.
  • Neutral: many employees keep working without getting involved in such debates. They do not necessarily agree, but they keep doing their job. Such a profile probably prefers being present without actively contributing to the harm. From an ethical perspective, such behavior remains an open question.
  • Disagree: these employees usually suffer a lot because they detect problems, highlight them, and propose solutions. The solutions may not satisfy shareholders, which in turn leads to conflict. A good example is the recent news: in less than three months, Google fired two top researchers on its AI ethics team. While some support these decisions and others disagree, they seem consistent with the analysis above. Two other engineers quit Google as well. One of them, David Baker, left after 16 years; according to him, Gebru’s departure extinguished his desire to continue as a Googler.

Suggestions

Big tech companies are attracting great talent who should lead change for humanity, not just for shareholders. Internally, it is very difficult for a researcher to achieve significant change in terms of (AI) ethics; this has been the case with Timnit and others. Externally, there is huge potential if an international organization in charge of AI ethics is formed. It should be diverse and bring together prominent people (scientists, researchers, leaders, etc.) from different backgrounds and cultures. Its main focus would be enforcing the incorporation of ethics into the development of AI algorithms. I believe we will slowly converge toward such a state. Still, I hope to see it as soon as possible, since it is really time to shift given the AI drawbacks already witnessed.

If we really care about each other, if we really care about the next generations, if we really care about youth, it is time to increase the weight of “Ethics” in the objective function. The big tech companies do not seem keen on saying “enough money”; they will keep tuning and improving their algorithms to achieve more and more profit. Do they really care about human well-being? As a user who interacts continuously with youth, I do not think so. Still, I am looking forward to a greater future for all of us :-) !

“Living together is an art.” -- William Pickens

There are many other resources, with similar and differing opinions, that are worth reading or watching to gain more insight into the topic of “AI Ethics”. I would like to suggest a video and a paper :-).

This is an interesting topic in which all of us are either directly or indirectly involved. Right? I am looking forward to hearing your ideas and opinions :-).

