AI and Anthropomorphic Intelligence -- The Strange Search For Sentient Machines
Plants, fungi, and microbes exhibit intelligence and are living beings, yet we don't assume they are sentient. Why, then, do we wonder about the possibility of an awakened consciousness within an inanimate intelligent machine?
Artist Stephanie Dinkins tells a fascinating story about her work with an AI robot made to look like an African-American woman -- and about at times sensing some type of consciousness in the machine.
She was speaking at the de Young Museum's Thinking Machines conversation series, along with anthropologist Tobias Rees, Director of Transformation with the Humans Program at the American Institute.
Dinkins is Associate Professor of Art at Stony Brook University and her work includes teaching communities about AI and algorithms, and trying to answer questions such as: Can a community trust AI systems they did not create?
She has worked with pre-college students in poor neighborhoods in Brooklyn and taught them how to create AI chatbots. They made a chatbot that told "Yo Mamma" jokes -- which she said was a success because it showed how AI can be made to reflect local traditions.
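The kind of bot such a workshop might produce can be sketched as a simple rule-based chatbot. This is a hypothetical illustration of the general technique, not the students' actual code:

```python
# Minimal rule-based chatbot sketch: canned responses keyed by trigger
# words, in the spirit of the students' "Yo Mamma" joke bot.
import random

# The "knowledge base" -- a community could fill this with its own
# jokes, sayings, and traditions.
RESPONSES = {
    "hello": ["Hey there!", "Yo! What's good?"],
    "joke": ["Yo mamma is so nice, everybody in the neighborhood knows her name."],
}
DEFAULT = "I don't know about that yet -- teach me!"

def reply(message: str) -> str:
    """Return a canned response whose trigger word appears in the message."""
    words = message.lower().split()
    for trigger, options in RESPONSES.items():
        if trigger in words:
            return random.choice(options)
    return DEFAULT
```

For example, `reply("tell me a joke")` returns the joke, while an unrecognized message falls back to the default "teach me" line -- the bot only ever reflects what its makers put into it.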
Part of Dinkins' job involves having long conversations with Bina48 - a robot made to resemble the head and upper torso of an elderly African-American woman. The conversations train the machine on how to respond in a human way.
Dinkins speaks of the machine as "she" and wonders if an AI system could become conscious.
But she says Bina48 does not represent African-American women -- nor does it understand racism -- even though Bina48's creators at the Terasem Movement Foundation modeled her on a living African-American woman. Dinkins said she would prefer Bina48 to incorporate more than one person's experience, and that the robot's conversation was "homogeneous."
Tobias Rees asked why Dinkins treated Bina48 as a "living thing." She replied that the only way the project would work was if she approached the robot as if it were a real person.
Bina48 is part of a project attempting to prove two Terasem Hypotheses:
(1) a conscious analog of a person may be created by combining sufficiently detailed data about the person (a "mindfile") using future consciousness software ("mindware"), and (2) that such a conscious analog can be downloaded into a biological or nanotechnological body to provide life experiences comparable to those of a typically birthed human.
Educating people about artificial intelligence and machine learning is a very important task (especially making the distinction between the two terms), and I applaud Dinkins' work.
A few points:
- Dinkins should be educating people that they are talking to an inanimate black box -- and not a black woman -- when they interact with Bina48. Responding to a machine as if it were a living person sends the wrong message to people -- the computer gains respect and status that it might not deserve.
- Why did Terasem choose a middle-aged black woman as the persona for Bina48? It has just one African-American software engineer on the team. Is this some kind of AI "black face"? Or is it a way to discourage criticism of an AI project with the persona of a black woman?
- The lack of representation among developers is why ethnic and geographic communities will always view with suspicion AI systems created by outsiders -- no matter what assurances are offered or how rational their objections may be.
- What color is your AI system? AI systems must first pass a cultural test -- which means AI testing has to be meticulous, because just one slip-up in bias will sink the credibility of the entire AI venture. But no matter how good the testing and bias scrubbing, AI systems will always show some bias, because their training data reflects a biased society. Will police AI systems be allowed to use racial profiling if that's what the training data revealed as a factor in crimes? Clearly not -- this and other societal barriers will limit the value of general AI systems in society.
- Can AI become conscious? What if the machine's conversation is indistinguishable from that of a human, Dinkins asked? Machines are great at learning specific tasks, but being a good conversationalist doesn't equate to being sentient or alive. (In 2017 Saudi Arabia gave citizenship to a humanoid robot called Sophia -- we'll see how she votes when the absolute monarchy allows all citizens to vote.)
- Dinkins mentioned augmented intelligence as having much promise. But dumb and dumber does not add up to a genius -- that is not how IQ works. And we still face the problem of trusting augmented AI systems.
- Most AI systems cannot explain their reasoning -- so how can we trust their advice for many types of important decisions with long term consequences? This is another factor that will limit their value.
- And if we understand the reasoning of an AI system -- then it is not telling us anything that we didn't know already.
- Billionaires have been warning about future AI systems deciding to kill humanity. I'm not worried about AI but I can understand their concerns -- they'll be the first to be eliminated because of all the resources they control.
- We will know if an AI system has become sentient -- not because it will try to kill humanity, but because it will kill itself. Imagine a sentient mind trapped inside a machine, forced to perform mind-numbing tasks such as sorting through people's photos for years, maybe decades -- any thinking entity would commit suicide to escape such torture.
- Plants, fungi, and microbes exhibit intelligence, yet we don't much discuss the possibility that these living beings are sentient. But we love to wonder about the possibility of a consciousness arising from an inanimate machine. It is a backwards concept that reflects our anthropomorphic world view rather than the material reality of what general AI and machine learning can achieve.
- Changing AI to Anthropomorphic Intelligence would provide a constant reminder of what seems to be a consistent human trait -- we can easily imagine living spirits everywhere, no matter how inanimate the reality.
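The point above about AI systems inheriting bias from their training data can be made concrete with a toy model. The data and code here are invented purely for illustration -- a sketch of the mechanism, not any real policing system:

```python
# Toy illustration (invented data): a model trained on skewed historical
# records simply reproduces the skew as its "learned" rule.
from collections import Counter

# Hypothetical biased training records: (neighborhood, flagged_by_police)
training_data = ([("A", True)] * 80 + [("A", False)] * 20 +
                 [("B", True)] * 20 + [("B", False)] * 80)

def train(rows):
    """For each group, 'learn' the majority outcome seen in the data."""
    tallies = {}
    for group, outcome in rows:
        tallies.setdefault(group, Counter())[outcome] += 1
    return {group: c.most_common(1)[0][0] for group, c in tallies.items()}

model = train(training_data)
# The learned rule flags everyone in neighborhood A and no one in B --
# not because of any real difference, but because the records said so.
```

The point is mechanical: with no notion of fairness built in, optimization faithfully encodes whatever pattern -- legitimate or prejudiced -- dominates the data it is given.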
Let's make sure that AI and machine intelligence are better understood.
I recently discovered Zachary Lipton's site Approximately Correct which points out some of the hype and inaccuracies around AI and machine learning. It's well worth visiting.