AI, Agency, and the Ethical Implications

Since the inception of Artificial Intelligence, the notions of “intelligence” and “being like a human” have often been treated as interchangeable. This follows a long lineage in the Western tradition of regarding human intellectual capacity as a unique and supreme power that sets us apart from all other beings, animate or not: to be human is to have a mind, and to have a mind is to be able to think like a human. This explains the Turing Test: the earliest criterion for assessing a machine’s capacity was how closely it could resemble a human being.

Today we’re not only used to, but inseparable from, devices whose computational power far exceeds our own thinking capacity on certain designated tasks, yet we can probably agree that none of them can be remotely considered to have anything close to a mind. Improvements in computational power and speed have not brought us significantly closer to the objective of creating machines that think like us. John Searle’s famous 1980 article(1) draws a distinction between Strong AI and Weak AI: a Strong AI both acts and thinks, and has intentionality, as humans do, whereas a Weak AI only acts as if it does. To this day we’ve made remarkable progress in building incredibly powerful Weak AI, but whether Strong AI can ever be created remains a matter of debate.

What’s truly fascinating, however, is that even when we know AI doesn’t have the capacity to act, feel, or even think the way we do, we can still respond to it emotionally in much the same way we respond to other humans. In fact, an entire industry of robot design has grown up around catering to people’s emotional needs.

PARO, for example, the famous robot seal that serves elderly care homes worldwide, is incredibly popular and has been shown in some studies to alleviate depression and anxiety and to promote social interaction between people.(2)

Another example is Aibo, the robot dog previously exhibited at the Barbican as part of AI: More than Human(3), which has no functional purpose (not even an explicit therapeutic objective like PARO’s) other than to connect with us emotionally as an actual pet would.

 

This raises a few questions regarding AI’s role as a potential agent in human society:

1. Why are we so obsessed with applying the criteria of human intelligence to fabricated machine intelligence?

2. Does the capacity of AI to have agency, perform intelligent social activities, and thereby form social relationships with human beings involve something more than machine intelligence alone?

3. Following from the previous questions, what can all of the above teach us about human psychology and ethics?

 

In the chapter “Figuring the Human in AI and Robotics,” Lucy Suchman(4) provides key concepts and frameworks that help dissect these questions, especially the three elements she frames: embodiment, emotion, and sociability. Particularly relevant to my questions is her summary of the consensus on Affective Computing, in which affect is “the expression of an underlying emotional ‘state’.” This view treats what underlies the appearance of emotion as a set of states on which the affect depends, whether in human emotions or in how the robot is designed to respond. As long as there is a correspondence between human actors and robot actors, grounded in how we interpret the expression of emotions, and the robots display satisfactory consistency in their responses, the robots can claim a place in society as more than mere tools: as something we can establish a connection with.

However, even though this view practically explains how (pet) robots can provide emotional value to parts of human society, the explanation itself risks sounding reductionist. It works around the hard question of whether this kind of emotional connection is genuinely what people want, or whether it is deceptive and manipulative: it creates genuine emotional responses in people (in the experiential sense that involves consciousness), while what happens beneath the robots’ affective expressions is simply a series of instructions, however sophisticated, that lets them display emotions.

Ultimately the question turns back to human beings: there are many motives for exploring machine intelligence, but whenever the discourse turns to the agency and social role of AI, we need to understand why we are humanizing machines and what we gain from it that cannot be provided by human beings ourselves.


1. Searle, John R. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3, no. 3 (September 1980): 417–24.

2. University of Brighton. “The PARO Project.” Accessed October 23, 2019. https://www.brighton.ac.uk/research-and-enterprise/groups/healthcare-practice-and-rehabilitation/research-projects/the-paro-project.aspx.

3. “AI: More than Human.” Barbican. Accessed October 23, 2019. https://www.barbican.org.uk/whats-on/2019/event/ai-more-than-human.

4. Suchman, Lucy. Human-Machine Reconfigurations: Plans and Situated Actions. 2nd ed. Cambridge: Cambridge University Press, 2007.
