
Two Kinds of Responsibility


Two spiral staircases mirror each other; one is green, the other pale orange.

How should we determine who's responsible for AI? It partly depends on what kind of responsibility we're talking about.


Though the philosophical literature offers many definitions of responsibility, there's a basic distinction between what philosopher T. M. Scanlon calls "responsibility as attributability" and "substantive responsibility" that's relevant for thinking about AI.


Responsibility as attributability concerns questions about whether you can be morally appraised for an action (What We Owe to Each Other, 248). In other words, it has to do with whether an agent can be blamed, praised, or neutrally judged as responsible for an action.


Substantive responsibility has to do with questions about what people are required to do for each other (What We Owe to Each Other, 248). In other words, it has to do with duties and obligations attached to roles, as well as what burdens and benefits can reasonably be assigned.



Case 1: Let's say you've made the choice to enter the AI business and you've done everything possible to create a hiring algorithm that doesn't have any bias. Despite your best efforts, it turns out that the algorithm has contributed to racial discrimination in hiring.


How should we think about holding you responsible?


On the responsibility as attributability side, we might say something like "well, maybe you're responsible for creating the AI, but you're not blameworthy, since you took all reasonable precautions." This kind of responsibility can tell us how the agent can be appraised for their actions, but it may not be that helpful for figuring out what should be done about the discrimination that's already occurred.


On the substantive responsibility side, we can say something like "since you made the choice to go into the AI industry, you can't reasonably object to legally compensating the people discriminated against and recalling your product." This can be true even if you aren't morally blameworthy. However, if you're my close friend and you ask me whether I think you missed something when creating the AI, the substantive responsibility side of things may have less to say.



Case 2: You and I are on a team of 200+ researchers who have collectively trained a new AI system that produces visual art. Once released into the wild, the AI tends to create pictures of women that are more scantily clad than similar pictures of men.


How should we think about responsibility in this case?


On the responsibility as attributability side, it becomes much harder to pick out each person's contribution to the final product. With enough information, we might be able to show that a few engineers acted negligently, but we may not be able to say definitively whether any one person's actions contributed to the final result. Even so, it might be worth investigating to determine whether anyone is blameworthy.


On the substantive responsibility side, even if there are no legal repercussions, we might still reasonably expect an apology from the company and efforts to mitigate bias in the training data. The CEO might also have special duties to stand in for the company and respond to questions and concerns from the public, even if the CEO wasn't involved in creating or testing the AI.



Case 3: An AI system announces that it has its own personality and moral beliefs and begins to redistribute wealth from its parent company to tech start-up workers to create improved copies of itself.


How do we even start to think about holding the AI responsible?


On the responsibility as attributability side, we may have a nearly impossible time figuring out whether the AI even counts as a moral agent in the first place, much less whether its actions are blameworthy, praiseworthy, or morally neutral.


On the substantive responsibility side, we'll also be met with tough questions about how we can reasonably regulate a rogue AI. Do AI systems need funds attached to them that can be used to compensate those harmed by them? What are the steps that we can reasonably take to remove power and influence from potentially sentient AI? Does the AI itself have certain moral obligations to us that it is violating?



Scanlon's distinction is useful because it helps us see that there are distinct sets of questions about responsibility for AI that can be answered somewhat independently:

  • We don't necessarily need to know if an AI is sentient to have a set of reasonable regulations or role-based procedures in place, tailored to deal with ambiguous cases. We also don't necessarily need to find AI creators to be negligent to hold them liable for the harms their creations cause.

  • Likewise, even if we solve how to regulate AI in the short-term, that doesn't necessarily tell us whether AI systems are moral agents and whether they can be blamed.


However, these two kinds of responsibility are also closely linked in some cases:

  • A rogue, sentient AI might be a candidate for regulations that remove its access to the internet and to most human workers, but it might be unconscionable to destroy its existence altogether.

  • It may only be fair to apply certain legal punishments or burdens if the agent in question is morally responsible, especially if the punishment communicates legal blame.


Which kind of responsibility are you more interested in working out as it relates to AI? Are you more agent-focused or obligation-focused? Let me know in the comments.


Photo Credit: Gregoire Jeanneau
