
The "U" Behind "TESCREAL"

Updated: Sep 19, 2023


[Image: An old-school calculator with shades of orange keys collects dust on a table with other calculators.]

If you've not been steeped in the online AI ethics discussion that's happening right now, "TESCREAL" may look like alphabet soup to you. But each letter of TESCREAL represents a piece of a questionable ideology that is emerging in certain corners of the AI ethics conversation. More on this in a moment.


There is a single "U" that unites each of the elements of TESCREAL, and that is utilitarianism. If you can understand how utilitarianism functions, then the rest of the TESCREAL soup should be much easier to digest.


In what follows, I'll very briefly take you through basic definitions of the ideologies that make up TESCREAL and then explain how utilitarianism forms the backbone of some truly weird AI ethics approaches (that also promote eugenics).



TESCREAL


"TESCREAL" is a term coined by Dr. Timnit Gebru and Émile Torres to identify the philosophical underpinnings of contemporary AI doomerism and explore their connections to modern day eugenics:


T = Transhumanism advocates for the enhancement and improvement of human capabilities through technology and medical intervention. In other words, we should make ourselves better by changing our bodies and minds. (If you'd like to learn more, Philosophy Tube did a video on transhumanism a while back.)


E = Extropianism is a brand of transhumanism with a number of different principles and commitments, but the two that matter for the purposes of this blog are the ideas that 1) we should extend human lifespans and 2) we should expand outwards into the universe.


S = Singularitarianism is the idea that the creation of superintelligence (something that's more effective than us at everything) will happen in the near future and that we need to ensure we benefit from its creation. The "singularity" refers to the moment the superintelligence is created.


C = Cosmism was a Russian school of thought developed by Nikolai Fedorov in the late nineteenth century. Fedorov believed that we have a moral obligation to cure death and that outer space would be the place where we could live forever and access infinite resources.


R = Rationalism states that reason, as opposed to emotion, should be the basis for knowledge. In contemporary online communities, rationalists tend to want to perfect their own reasoning abilities and can sometimes fall into cult-y patterns.


EA = Effective Altruism claims that we should use evidence and reason to determine how to help others as much as possible, and then we should use those methods to help people. Effective altruists commonly rank charities based on (sometimes questionable) metrics. (I've written my own critique of EA as well.)


L = Longtermism is the idea that most of the value exists in the future (because most of the people exist in the future), and consequently that the key moral priority of our time is to preserve the possibility of a good long-term future on the scale of millions and billions of years.


Together, these letters paint a picture in which we create a superintelligent AI that either builds a simulation where better versions of ourselves live happily forever, or delivers the medical and scientific breakthroughs that let us cure all disease and expand into outer space.


The danger, on this picture, is creating a superintelligent AI that doesn't align with TESCREAL values and subsequently destroys humanity, or falling into some other scenario in which we destroy our own potential to reach that long-term future of continued human reason and improvement of the species.



Utilitarianism


Utilitarianism is a moral theory created by Jeremy Bentham and popularized by John Stuart Mill. The basic thesis is that the morally right action to take is the one that maximizes pleasure and minimizes pain. (There are a whole bunch of different versions of utilitarianism on offer, but we'll just stick with act utilitarianism for now.) Determining the morally right action requires a careful calculus of all the potential pleasures and pains that might come out of a particular action, as well as how significant those pleasures and pains are.


Let's say you're trying to figure out whether you should create an AI that can help first-time home buyers access housing. You've run a few tests, and you've discovered that the best version of the AI would help about 80% of first-time home buyers but discriminate against about 20%. Let's say the 20% are Hispanic and Black women.


Now it's time to run the utilitarian calculus. Assume that not creating the AI would result in a net zero gain of pleasure. Let's say that the 80% of first-time home buyers who use the AI service each get +5 pleasure and that the 20% who are discriminated against each get -7 pain. Weighted by group size, that's 0.8 × 5 = +4 in pleasure against 0.2 × 7 = 1.4 in pain, for a net of +2.6. On this assessment of pleasures and pains, creating the discriminatory AI would overall create more pleasure than not creating the AI at all. As such, utilitarianism would say that you have a moral obligation to create it.
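
To make the arithmetic explicit, here's a minimal Python sketch of that calculus. The group shares and pleasure/pain scores are the illustrative numbers from this example, not real data, and the function name is mine, purely for illustration:

# A minimal sketch of the act-utilitarian calculus in the example above.
def expected_utility(outcomes):
    """Sum each group's population share times its pleasure (+) or pain (-)."""
    return sum(share * value for share, value in outcomes)

# Option 1: build the AI -- 80% helped at +5 each, 20% discriminated against at -7 each
build_the_ai = [(0.80, +5), (0.20, -7)]

# Option 2: don't build it -- assumed to be a net zero gain of pleasure
do_nothing = [(1.00, 0)]

print(expected_utility(build_the_ai))  # 0.8 * 5 + 0.2 * (-7) = 2.6
print(expected_utility(do_nothing))    # 0.0

# Since 2.6 > 0, act utilitarianism concludes you are morally
# obligated to build the discriminatory AI.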


This is a bad result!


There are all sorts of ways different philosophers have tried to make utilitarianism more plausible: Perhaps some pleasures and pains are just qualitatively on a different level, such that we'd always prefer some to others. Maybe we need to think about which rules would maximize pleasure and minimize pain. What if we're just really bad at calculating the numbers and we need to get better at identifying how to go about assessing pleasure and pain?


In my view, answering these questions won't fix utilitarianism's main problems:

  1. Utilitarianism treats people as containers for value, not as valuable in themselves. You only matter because you can produce pleasure in the world, both in the pleasure that you feel and in the pleasure that you cause others to feel. This can lead to perverse conclusions: for example, that a depressed person is morally required to commit suicide if they're producing more pain than pleasure in the world.

  2. Utilitarianism requires you to care about total pleasure and pain, rather than personal relationships. If both Beyoncé and your non-famous soulmate are drowning and you can only save one, utilitarianism dictates that you are obligated to save Beyoncé. (Even if you get a version of utilitarianism that says that we should set moral rules that allow you to save the ones you love, the background justification is still the maximization of pleasure, not the obligations produced by loving relationships.)

  3. For each action, utilitarianism asks you to think through all the possible consequences and act accordingly. While we tend to have some heuristics that normally work in favor of acts that maximize pleasure and minimize pain, the devout utilitarian would spend quite a bit of time trying to determine the best possible courses of action. This is a markedly demanding form of moral thought.


The "U" Behind "TESCREAL"


Now that we have a basic understanding of utilitarianism on board, we can easily see how it informs each of the elements of TESCREAL.


If we work to improve ourselves in the way that transhumanism suggests, we might be able to minimize human pain and suffering from aging bodies and sickness. Smarter humans would also be better at determining which acts would maximize pleasure for everyone.


Extending human lifespans in the way extropianism advocates would allow us to feel pleasure for longer. If we continue to expand out into the universe, we can continue to create more humans that feel more pleasure.


If we, as singularitarianism claims we might, create a superintelligence that's aligned with our [ahem... utilitarian] values, then the superintelligence can help us solve problems that threaten humanity and allow us to ascend into a simulation that is more pleasurable than our current mode of being.


Curing death and accessing infinite resources in space, as cosmism suggests we should, would let us create infinite people living infinitely long and pleasurable lives.


If we want to be the best utilitarian thinkers, then we need to be eminently rational in the way that rationalism asks us to be. We've got to carefully calculate exactly which courses of action will maximize pleasure and minimize pain.


Which charities should you support on the utilitarian view? The ones that produce the most pleasure for the least cost. This is the thesis of effective altruism.


If people are just containers for value and most of the people exist in the future, then most of the pleasure that will ever exist is in the future. Longtermism, or the thesis that the future matters more than the present, falls out of utilitarianism.



The Eugenics Connection


If I learned anything in my time coaching IU's Ethics Bowl team, it's that you're always two steps away from eugenics and ecofascism. It's surprisingly easy to fall into the traps of both. For now, however, we'll just focus on eugenics, or the noxious idea that you can (and should) selectively breed better humans.


So, let's say you want to create as many happy people in the future as you can. What are your chief concerns?


One of the many existential risk scenarios that Nick Bostrom, the author of Superintelligence, worries about is dysgenic pressures: what would happen to humanity if too many stupid people bred with each other and we lost our intellectual ability, which would make the transhumanist project impossible. (If you've seen the movie Idiocracy, it's a version of the same worry.) Bostrom has also recently come under fire for a 26-year-old email in which he supported the claim that “Blacks are more stupid than whites.”


Set aside the particular question of Bostrom's character for now. If you accept a transhumanist picture on which the aim is to become better humans, what makes some humans better than others? In seeing some humans as better than others, do you implicitly value the lives of people with intellectual disabilities less? Where do they fit in the transhumanist utopian future, if at all?


Additionally, there is a long history of race and constructions of disability operating as two sides of the same coin. For example, immigrants who landed at Ellis Island were officially rejected on the grounds that they were disabled in some way, but this was also a convenient tactic for rejecting many people of color. And the US has a long history of the forced sterilization of women of color, justified by reference to "low IQ" and other supposedly undesirable traits. Where do people who are not white, wealthy tech bros fit into the transhumanist future with AI?


If we look at contemporary utilitarian views, we can also see that the specter of eugenics looms large. Peter Singer has famously argued that parents should have the right to euthanize newborns with Down syndrome, spina bifida, and hemophilia, because “the child’s life prospects [are] significantly less promising than those of a normal child.” On a utilitarian calculus, keeping children with certain disabilities alive may produce less pleasure on the whole than ending their lives as newborns, and it may even be morally obligatory to euthanize them. But this is surely a perverse result.


Finally, assume that a higher IQ allows one to be a better effective altruist and create more pleasure in the world than other, less intellectually endowed individuals. Do more rational, sophisticated reasoners live more valuable lives because they can create more pleasure in the world? Should their lives be prioritized over people who haven't had the same opportunities to develop their innate talents?


As you can see, any attempt to describe a certain set of humans as innately "better" than others starts to lead us down the eugenics pathway and fails to recognize the full breadth of human value.


The idea that eugenics is in the background of TESCREAL projects isn't a conspiracy theory; it just turns out that we have a long history of thinking in eugenic terms and that utilitarianism, in particular, lends itself to conclusions that favor eugenics. Because utilitarianism is the basic framework that informs the TESCREAL cluster, it's no surprise that our old sins have resurfaced in our new visions of the future.



Photo credit: Ömer Haktan Bulut
