
Is it okay to ask a chatbot for moral advice?


[Photo: A hand holds a smartphone, the screen blurred and obscured by multicolored light spots.]

There's nothing wrong with using Google Maps to learn how to get to your destination, but what about using a service called "Google Morals" that always tells you the right thing to do, in any situation?


This is the premise of Robert Howell's (2014) paper "Google Morals, Virtue, and the Asymmetry of Deference." Howell thinks that there's something suspect about using Google Morals that makes it relevantly different from using Google Maps, even if Google Morals always gets it right.


Using Google Maps: ✅


Using Google Morals: 😬


To set up the puzzle and figure out what that difference is, Howell assumes that you are deferring to Google Maps. Deferring basically involves believing something because someone else (or your past or future self) believes it. In Howell's puzzle, this means that you believe that something is the right thing to do because Google Morals says it's the right thing to do.


Howell also assumes that deferring is sometimes fine and even good. Generally, you should believe the mechanic when she tells you that your old Honda CR-V is making noises because the sway bar is no longer secure. If you don't have the relevant knowledge, then it's good to defer to the experts.


I've also argued that, sometimes, you ought to defer when it comes to moral knowledge. If, say, an Indigenous person testifies to me about their experience of oppression, I should generally believe them. They have relative expertise and knowledge in that area that I lack, and I extend trust and respect by believing them and/or taking the actions that they recommend.


But what happens when you start using Google Morals more and more? What if Google Morals isn't always right?



ChatGPT and Moral Advice


I have a small confession to make. I've asked ChatGPT for moral advice.


I know I'm not alone.


I was tired and sad, and I felt guilty for making the choice to prioritize my own wellbeing over spending time with a friend that weekend. I wanted external confirmation that it was okay for me to do so.


When I was playing around with ChatGPT with another friend, she suggested asking it how to respond to a friend in a sensitive situation: she felt conflicted about a decision that friend was making but still wanted to be supportive [specifics redacted for anonymity].


These are both straightforwardly questions seeking moral advice.


Are they necessarily cases of deference? That depends on whether you believe what ChatGPT says because it says it. I think I was certainly relying on ChatGPT when I asked it for confirmation, but it's unclear that I was deferring full stop. I already knew what I thought. My friend just wanted some ideas.


At the same time, we might find that we have similar icky feelings about asking ChatGPT for moral advice as we do about deferring to Google Morals.



Howell's Diagnosis


Howell thinks that deferring to Google Morals is bad because it prevents us from developing virtue and good character. If we're not actively thinking about how our moral judgments fit together and learning how subtle differences in situations warrant different responses, then we're not really developing the skill of moral judgment.


I think that asking ChatGPT for advice makes me feel weird for similar, yet distinct reasons. There's a key difference between Howell's puzzle and the ChatGPT puzzle: Google Morals always gets it right. ChatGPT is a hallucinating hobgoblin created from the internet and trained to avoid specific ethical pitfalls.


Even if it generally gives helpful answers and people are already using it for therapy, it still gets things wrong and makes things up a good deal of the time. It's also unclear that it can generate new ideas that further elucidate morality - it's mainly regurgitating common answers and platitudes.



The Chronic ChatGPT Morals User (Deference)


I think I agree with Howell that there's a danger of moral laziness with ChatGPT. If ChatGPT can tell you what to do with much less effort than it would take to think through the issue yourself, it could exacerbate existing moral laxity. Even if ChatGPT got the moral question right more often than the user would, we might be concerned that the chronic "ChatGPT Morals" user doesn't really care about learning what is right for themselves or even about what is true. If they're just deferring to ChatGPT, we should probably be concerned.


The Occasional ChatGPT Morals User (Reliance)


If you, like me, only use ChatGPT occasionally for moral advice, there may still be some danger. Even if I'm thinking carefully about the issue and deciding whether to accept what ChatGPT says on the basis of the reasons it offers, it's still shaping my thinking. If I have a healthy diet of moral views and only use ChatGPT occasionally, it's probably not shaping my thinking that strongly. However, if I start relying on it more, I may rightly be worried that it's giving me an incomplete picture that lacks the full breadth of human creativity.


The Conversational ChatGPT Morals User (Engagement)


What if I'm using ChatGPT to learn about ethical theories and see how they could be applied to different cases? In this case, many of the previous worries disappear. If you are explicitly asking the bot for a variety of views and engaging with it conversationally to form your own moral understanding, then it seems like you are more straightforwardly working to develop your own virtue and character. If, however, ChatGPT is your only conversation partner, then you're missing out on important perspectives and experiences.



Some Final Thoughts


Q: Is this any weirder than asking a friend for moral advice? Your friend doesn't get things right all the time, and asking a friend for moral advice occasionally is just fine.


A: Unlike ChatGPT, you tend to know your friend and the experiences that inform their perspective. There's also an interpersonal relationship of trust, which allows you to hold your friend accountable for their advice if it turns out to be bad. Also, if your friend just makes things up about 10% of the time with no regard for the truth, you probably wouldn't ask that friend for advice.



Q: Isn't ChatGPT an expert? It at least knows more about a lot of stuff than I do, so shouldn't I defer?


A: In my mind, ChatGPT seems to be more of a generalist than an expert. It can handle most things at the high school and undergraduate levels, but it can't yet handle my dissertation. Even if it counts as an expert, it's not a reliable expert. It doesn't warrant our trust because it gets things wrong so frequently.



Q: We're basically trained on the internet and the prevailing moral views of our time, so why think that we would get things right any more often than ChatGPT would?


A: Unlike ChatGPT, we have direct experiences. I imagine you know what it's like to be in a place with bad rules and structures that make your life worse. Even if you're told that they are good, you might eventually realize that "this is the bad place!" We certainly get things wrong a lot of the time, but we have a better chance of snapping out of it.



Even if there are short-term benefits to using ChatGPT for therapy or for affirmation, we should at the very least be cognizant of how we are using the chatbot, how we are relating to it, and how it's affecting our tendency to think for ourselves.


What do you think? Do you feel weird about people using ChatGPT for moral advice? Is it for the same reasons I've pointed out? Or do you think that using ChatGPT for moral advice is just fine?


Let me know in the comments!



Photo Credit: Rodion Kutsaiev
