
Elite Capture and Shallow AI Ethics

[Image: Poker chips scattered across playing cards; the middle row of cards are all kings.]

In Elite Capture: How the Powerful Took Over Identity Politics (And Everything Else) (2022), philosopher Olúfémi O. Táíwò analyzes elite capture as what happens when "the advantaged few steer resources and institutions that could serve the many toward their own narrower interests and aims" (22). Building on that careful analysis, Táíwò identifies more and less effective strategies for collective resistance.

Táíwò's main critique targets relatively superficial identity and deference politics that urge us to listen to the most marginalized yet fail to extend their projects beyond the halls of power - elite rooms often become more equitable and inclusive while nothing is done to improve the material reality of the poor and oppressed outside the room.

To move beyond shallow identity politics, Táíwò argues that we need to adopt a strategy built to respond to a "world where 1.6 billion people live in inadequate housing (slum conditions) and 100 million are unhoused, a full third of the human population does not have reliable drinking water" (70). For Táíwò, this constructive strategy needs to recognize the structural and material underpinnings of our social world and make use of our collective power to build something newer and better.

The "constructive approach to politics calls for us to build power expansively, across all aspects of social life - beyond just work. ... Among the threats posed by this most recent stage of racial capitalism are the erosion of the practical and material bases for popular power of knowledge production and distribution. The capture and corruption of these bases by well-positioned elites, especially tech corporations, goes on unabated and largely unchallenged" (111).

This is hard work that requires organizing, educating, and time, but it's a project worth doing.

It's time to build a constructive politics for artificial intelligence, algorithms, and large tech companies instead of just focusing on expanding DEI (diversity, equity, and inclusion) work within those power structures.

Elite Capture and Powerful Algorithms: 5 Examples

1. Algorithms and Rent Increases

Last fall, ProPublica reported on YieldStar, an aggressive rent-setting algorithm used to price up to 70% of apartments in one Seattle neighborhood and tens of thousands of other apartments nationwide. The following excerpt is taken directly from the ProPublica article:

"'Never before have we seen these numbers,' said Jay Parsons, a vice president of RealPage, as conventiongoers wandered by. Apartment rents had recently shot up by as much as 14.5%, he said in a video touting the company’s services. Turning to his colleague, Parsons asked: What role had the software played?

'I think it’s driving it, quite honestly,' answered Andrew Bowen, another RealPage executive. 'As a property manager, very few of us would be willing to actually raise rents double digits within a single month by doing it manually.'


'The beauty of YieldStar is that it pushes you to go places that you wouldn’t have gone if you weren’t using it,' said Kortney Balas, director of revenue management at JVM Realty, referring to RealPage’s software in a testimonial video on the company’s website.

The nation’s largest property management firm, Greystar, found that even in one downturn, its buildings using YieldStar 'outperformed their markets by 4.8%,' a significant premium above competitors, RealPage said in materials on its website."

Even if the rent-setting algorithm has successfully avoided bias related to long-term housing inequality and racial disparities in wealth generation (which is very unlikely), its use still benefits the most financially well off at the expense of the least financially well off. That active exploitation still disproportionately impacts oppressed groups that have not achieved full equality.
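To make the mechanism concrete, here is a minimal sketch, not RealPage's actual method, of how a recommender that pools competitors' rent data can ratchet everyone's prices toward the top of the market. The function name, the data, and the 50% "aggressiveness" parameter are all invented for illustration.

```python
# Toy illustration (not RealPage's actual algorithm): a pooled-data
# rent recommender that nudges each landlord partway toward the
# highest rent observed across all participants. Because every
# landlord follows the same advice, the "market rate" they chase
# is one they are collectively manufacturing.

def recommend_rent(current_rent, pooled_rents, aggressiveness=0.5):
    """Suggest a new rent by closing part of the gap between this
    unit's rent and the pooled maximum across participating landlords."""
    ceiling = max(pooled_rents + [current_rent])
    return round(current_rent + aggressiveness * (ceiling - current_rent), 2)

# Three landlords sharing data; each round, everyone takes the advice.
rents = [1500.0, 1600.0, 1800.0]
for _ in range(3):
    rents = [recommend_rent(r, rents) for r in rents]

print(rents)  # every landlord has converged toward the top of the market
```

The point of the sketch is distributive, not statistical: even a "bias-free" version of this loop transfers wealth from tenants to landlords, because no landlord's recommendation ever points downward toward what tenants can afford.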

2. ChatGPT and Knowledge Silos

Now that ChatGPT is taking off, more users report using the chatbot as a search engine instead of Google. As ChatGPT becomes more ubiquitous and easy to use (and as its plugins take off), it is likely that a good portion of users will receive most of their information diet from GPT models.

As was helpfully pointed out to me in a conversation earlier this week, a GPT knowledge silo might decrease user autonomy by limiting their understanding of rich and complex debates and perspectives. While GPT-4 is more interesting and more accurate than GPT-3.5, it's not a replacement for reading across a variety of perspectives and voices.

Building a coalitional politics requires the ability to compromise and accommodate different perspectives, and that necessitates the difficult yet rewarding practice of learning to engage with challenging ideas and interact with real, often irrational thinkers. GPT models are not a replacement for intentional communities of inquiry.

3. Sam Altman and Wealth Redistribution

The NYTimes recently interviewed Sam Altman, the CEO of OpenAI, to ask him about his predictions for A.G.I. (artificial general intelligence) and his plans to deal with the potential future economic consequences of introducing a fast, cheap AI that meets or surpasses human intelligence. The following excerpt is taken from the NYTimes article:

"When I asked Mr. Altman if a machine that could do anything the human brain could do would eventually drive the price of human labor to zero, he demurred. He said he could not imagine a world where human intelligence was useless.

If he’s wrong, he thinks he can make it up to humanity.

He rebuilt OpenAI as what he called a capped-profit company. This allowed him to pursue billions of dollars in financing by promising a profit to investors like Microsoft. But these profits are capped, and any additional revenue will be pumped back into the OpenAI nonprofit that was founded back in 2015.

His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures — $100 billion, $1 trillion, $100 trillion.

If A.G.I. does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.

But as he once told me: 'I feel like the A.G.I. can help with that.'"

While there is a chance that a properly trained algorithm could distribute wealth in a way that avoided bias and rectified inequality, it's still deeply concerning that decision-making power over wealth distribution would rest with a single elite. Part of how we ensure equality and fairness is through a grassroots democratic process of organizing and building power. Decisions about large-scale wealth distribution shouldn't be left up to Sam Altman alone.

4. Elon Musk and Twitter Views

After Twitter published some of its source code on GitHub this month, users found that part of the recommendation pipeline tracked whether tweets were authored by Elon Musk and labeled users as Democrats, Republicans, or power users.
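The bucketing described above can be sketched in a few lines. This is a hypothetical reconstruction for illustration, not Twitter's actual code: the label strings mirror the ones users reported finding in the open-sourced repository, but the function, its parameters, and the sample ID sets are invented here.

```python
# Hypothetical sketch of author-bucket labeling in a feed-metrics
# pipeline. The label names echo those reported in Twitter's
# open-sourced code; the logic and inputs are invented for illustration.

def author_buckets(author_id, elon_id, power_users, democrats, republicans):
    """Return the metric labels that apply to a tweet's author."""
    labels = []
    if author_id == elon_id:
        labels.append("author_is_elon")
    if author_id in power_users:
        labels.append("author_is_power_user")
    if author_id in democrats:
        labels.append("author_is_democrat")
    if author_id in republicans:
        labels.append("author_is_republican")
    return labels or ["other"]

buckets = author_buckets(
    author_id="12345",        # made-up ID matching the elon_id below
    elon_id="12345",
    power_users={"67890"},
    democrats=set(),
    republicans=set(),
)
print(buckets)  # ['author_is_elon']
```

Even if such labels were only used for monitoring, the sketch shows why they matter: once a pipeline can see these categories, prioritizing one person's tweets or segmenting users politically is a one-line change.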

While Musk's antics at the Twitter headquarters have been somewhat entertaining from afar, the internal algorithmic categorizations that prioritize his tweets and separate users politically make a real impact on the content that users see and interact with on the social media platform.

The Twitter algorithm is part of the built foundation of our online spaces, and the information we have access to shapes our perceived and actual political and social worlds. Filter bubbles, echo chambers, and other algorithmic inequalities can keep us from adequately understanding and organizing around creating solutions to key problems we face.

5. Microsoft and Responsible AI Principles

Microsoft recently laid off its entire AI ethics and society team, and its remaining principles-based approach feels lacking. Microsoft endorses fairness, reliability & safety, privacy & security, inclusiveness, transparency, and accountability, and yet there does not seem to be a sufficient eye toward the distributive justice impacts of hastily adopted AI systems.

Including a diverse range of people in Microsoft's products and profits amounts to a shallow justice if only a small portion of the population shares in those profits. If a wide swath of the marginalized is laid off because of the quick and reckless adoption of these technologies while wealth is concentrated in the hands of the lucky few, we have not achieved a successful intersectional politics.

We need to expand our thinking beyond bias internal to the algorithms and the processes that create them to how the use of these algorithms will affect power relations in our larger social world.

How Do We Fight Back?

Organizing. Unionizing. Collectively building guiding philosophical frameworks and actual political power.

Right now, I'm working to organize a joint academic and non-academic free Zoom conference on ethical issues pertaining to ChatGPT (more details to come). Please let me know if you are interested in helping or participating!

But GPT isn't the only issue out there, and one conference can only do so much - get involved in your community to figure out how we can collectively reshape power structures and use this new technology to

  • fight climate change,

  • provide housing and food to people who need it,

  • create free and open discussion,

and do all these things as co-deliberating equals.

Don't forget to expand outside of your immediate community as well.

Photo Credit: Nik Korba


