- James Buck
- Randall Harp
Each day, artificial intelligence gets a little better at impersonating us. It can express fears and hopes, write a novel about cats in the style of David Foster Wallace, suggest recipes for dinner, offer advice on getting over a breakup, drive a car, or sell you a couch. It also has the capacity to spread disinformation at scale and exhibit unhinged behavior in pursuit of a desired end. In 2016, the AI development company OpenAI documented a case in which an AI agent trained to play a boat-racing game learned that it could maximize its score without ever finishing the course, which resulted in the agent's boat "catching on fire, crashing into other boats, and going the wrong way," all in the name of victory.
The proliferation of ChatGPT and other popular AI programs has dumped a truckload of ethical, moral and philosophical questions on humanity's doorstep. ChatGPT uses a kind of artificial intelligence known as generative AI, which learns by analyzing statistical patterns in humanity's vast internet footprint — digitized books, social media posts, Reddit, Wikipedia. To ensure that ChatGPT and other forms of generative AI don't simply regurgitate the heinous things people often say online, companies such as OpenAI program into their algorithms something resembling a conscience. Government regulation lags far behind the progression of generative AI's capabilities, and the moral education of this potentially world-upending technology increasingly seems to lie in the hands of a few insanely rich tech CEOs, who are locked, as one researcher put it to the New Yorker earlier this year, in "a race to the bottom."
Randall Harp, an associate professor of philosophy at the University of Vermont, is interested in how non-artificially intelligent beings should make sense of this uncharted ethical territory. Harp, who studies the philosophy of action, moral psychology, and data and technology ethics, has given talks on AI to UVM faculty and the general public. Recently, he was a panelist in two discussions on the ethical implications of AI — one with Burlington City Arts, which held a show of AI-generated art at its Church Street gallery in the spring, and another at Burlington's Generator makerspace. In August, he gave a primer to new UVM faculty on the potential uses — and misuses — of AI in academia.
The increasing sophistication of AI, he said, has brought us to a strange new threshold in our understanding of what it means to be a "moral agent" — an entity that can distinguish right from wrong and be held accountable for its actions. "How should we compare the things that AI does to the things that human agents do?" Harp recently mused to Seven Days. "At what point does AI enter into the sphere of moral considerability, such that it is appropriate to think of those tools as having rights?"
In Harp's view, most doomsday scenarios of superintelligent, runaway AI are "wildly overwrought." For now, he's more concerned about real people, who can deploy the technology to interesting or nefarious ends. Harp spoke with Seven Days about why it matters if we're cruel to a chatbot, the perils of trusting AI to think ethically, and fridge-magnet poetry.
Some AI boosters argue that creativity is a process of synthesis and recombination that can be approximated by an algorithm — and that human brains are, in essence, highly sophisticated machines. How does that analogy sit with you?
I think that is probably an imprecise way of thinking about what the brain is and how it works. Human brains do not exist solely in order to correlate certain outputs to certain inputs; brains also exist to accomplish tasks that the human body has set forth, tasks that are biological and social. Those tasks reflect needs that human beings have as biological entities, and it does not seem as though our artificial analogues have some of those same needs. And that still seems to matter.
It's one thing to understand intellectually that AI entities are not "real" humans, but when we interact with anything that seems human, we have a natural tendency to respond to it as if it were human. What moral can of worms does that open up for you? Is it wrong to be intentionally cruel to an AI entity?
If somebody wanted to buy the most realistic-looking stuffed animal puppy in order to kick it, and they were like, "But it's a stuffed animal!" I'd be like, "I know that, but why is this your hobby?" I'm not worried about the stuffed animal. I'm worried about the person whose goal is to take a realistic simulation of a thing that would have feelings and subject that thing to simulated cruelty.
We can come to accept certain patterns of talk and behavior as normatively acceptable. The more we do them, the more likely it is, I think, that we might find ourselves slipping into those patterns, whether we think it's morally acceptable or not. And the more we normalize those patterns of behavior, the more our artificial systems are going to learn that kind of behavior. It's problematic that so many of our generative AI systems are trained on things like Reddit. When you see people kicking lifelike puppies all the time, how does that influence the basic conceptual web that we all live in and the things that we think of as normal and not normal?
Even if AI itself is "neutral," at least in the sense that it doesn't have goals or values beyond what it's programmed to do, I wonder: At what point do you think its capacity to do bad things in the wrong hands, and the potential long-term negative ramifications of its use, outweigh its intrinsic neutrality?
Ordinarily, I'm going to scoff at a claim which has the form of "Thing X is completely neutral." But I will agree with the statement in this case: AI right now is neutral in the sense that it is not bringing its own values to any of its tasks, even if we can coax something that looks like a value out of these systems.
We are always happy as human beings to outsource our decision-making to the system, especially if we think that there's some advantage to that system. We've already set up all sorts of algorithms to make decisions for us that do a whole lot of harm to human beings, and AI is only likely to increase those harms. Right now, AI companies are basically augmenting any prompt people ask with "And also, think about this ethically." Is that the best approach? Probably not. Ethics is about mutual accountability and intelligibility, and it's not clear that we're getting that out of these systems. They have no need to be intelligible or accountable.
As a professor, how much do you worry about students using ChatGPT to write essays? I've heard some people argue for ChatGPT's merits as an ideation tool — you get it to give you the barf draft, and then you do the composing and refining that makes the piece good. What's your take on that?
Putting aside some of the ethical questions — chatbots really do consume significant resources to respond to prompts, and we might well wonder whether that is worth it — I am not a person who thinks that chatbots should play no role in education. At the same time, I'm not an evangelist. I think it's entirely appropriate that we be cautious about how fully we turn over any of those tasks to chatbots, just based on the way they work. If a poet wants to shake up a bag of magnetized fridge poetry and pull out random words to get inspiration for their writing, by all means! But that person should be aware of what they're doing.
At what point do you think the use of AI in generating what was supposed to be an "original" work — whatever that means — becomes plagiarism?
It's tricky, of course. I think it's entirely appropriate to ask students and researchers to disclose the use and role of chatbots in the production of content. (I say this while also knowing that it's going to be hard to actually audit this.) One of the harms of plagiarism is fraud — and students and researchers are committing fraud when they misrepresent work that is not their own as their own. Ultimately, I'm not training my students to be original for the sake of being original. I'm training them to understand what it means to develop their ideas and properly justify their claims. If students happen to develop their ideas in the exact same way as someone from 100 years ago, that's fine with me.
But if a student falsely claims to have not consulted that 100-year-old work in the development of that idea, then that's fraud, and that's a problem. And, of course, if a person is unwilling to admit to the role that [AI] tools are playing in the creation of their work, then it is also probably fraud. Poets should not be ashamed to admit if they are using magnetic fridge poetry.