Newsletter

AI and the Fallacy Of Equality

How AI researchers and practitioners should scrutinize themselves for a better future.

Dr. Adam Hart
Published in Towards AI
7 min read · Oct 11, 2021


Picasso’s Guernica © Almudena_Sanz_Tabernero, courtesy of Pixabay

“At the end of the day, the success of AI research will be measured by how it has empowered all people, helping tackle the many wicked problems facing the planet, from the climate emergency to increasing inequality within and between countries.”

Artificial intelligence is now part of our everyday lives — and its growing power is a double-edged sword — The Conversation October 11, 2021

The “cult” of code

If I were a betting person and conducted a survey of all people who self-identify as technologists, I would bet that a fair proportion of them are familiar with, and perhaps even adore, the 1970s works of Philip K. Dick, the 1980s works of William Gibson, or <insert favorite tech-genre author and year here>.

One of the best-known authors, well propagated through cinema and VOD, Gene Roddenberry, had a nobler goal in mind than the dystopian visions of Dick and Gibson. He wrote about a possible utopian future in which egalitarian and enlightened scientists zoom around sharing wisdom and following the prime directive, after the discovery of warp drive enabled a divided humanity to join a galactic federation, one variously threatened by Klingons who eventually became passive buddies.

My doctoral supervisor many years ago posited a “cult of code”. He believed that coders of all sorts ultimately derive their inspiration, and their joy of coding, from sci-fi. If this is plausible, it is also plausible that the dystopian/utopian dichotomy is a necessary part of a coder’s “discourse” or “premise” or “a priori” or “belief system” or “cult”.

The belief in equality is the belief in a good shepherd.

Just so, in other, less technological disciplines (political science, international relations, law, economics et al.), there is a strong belief in favor of utopian equality. Equality of all kinds: sexual; medical; economic; informational; ethnic; social.

If you read the Christian ethos, you will quickly see that this “equal” utopia, one where no one is disadvantaged, no one is hungry, and there is no war, is a direct carry-over from it.

Michel Foucault spoke about the prerequisites for this kind of utopia, one where we are each a passive member of a pastorate, tended to by a benign good shepherd. While the free Greeks would never have submitted to such an ideology, many members of the above disciplines want the general public to believe in this kind of “equal” future, albeit in the absence of an all-knowing and benign shepherd.

In post-religious times, the role of the good shepherd was perhaps theoretically meant to be filled by Government, but surely in this 21st century the average person is dissuaded by any Government’s evident self-interest and lack of benignness.

There ain’t no Dalai Lama as President.

The dominance of growth and efficiency

So, between the dystopia of technological disaster that many technologists believe in (including the AI researchers above, as well as the likes of Nick Bostrom, the Future of Life Institute, and the Global Risk Institute) and the utopia of equality that policymakers ask us to uphold in the absence of a truly good shepherd, we have the dominant reality actually occurring in front of us: big tech, the race to monetize space, the megabuck billionaire.

The dominant reality is upheld by business leaders who have the resources and capital to continue the belief in growth and efficiency.

Think of the global top 100: Google, Apple, Amazon, Tesla, BHP, Citibank, Starbucks, Siemens, Sony, SoftBank et al.

These global conglomerates’ sole prime directive is growth and efficiency. While small elements within them speak about ethics and equality, their market capitalization says otherwise. Their business is growth, efficiency, and continued dominance of markets.

The Lee Kum Kee case study speaks to this. From the accidental discovery of oyster sauce (using real oysters) over 100 years ago, this conglomerate’s chairman and board literally believe in a 1,000-year multi-generational dynasty, predicated on growth and efficiency.

The notion of a dystopian future where their wealth is compromised is simply a risk management strategy, a scenario to be avoided. A utopian future is one where they become the good shepherd. The central premise of the top 100 is likely along these lines.

Are we still living in medieval times?

“Japanese are orderly, polite, sophisticated and proudly nationalistic. In a way, the society is still feudal, with strict stratification, customs and regulations.”

N. Hacko Newsletter

“Kings and Queens (Politicians, CEOs and Board members) of times gone by took their lands (assets) by force and put the serfs (consumers) to work on their lands, charging rent (rent) for the land and tax (tax) on the crops. When taxes (prices) were raised the serfs (consumers) complained and were disciplined by the knights (police and media) of the realm (country). That it was necessary and reasonable became expected (media desensitisation through saturation).

The Royalty enlisted the church (academia) to promote the ideas of goodness and equality in Latin (academic speak), all the while seeking to avoid being murdered (thrown out of office) and acquiring new lands (Taiwan) and peoples (competitive visas).

New technology like flintlock pistols and padlocks (ML/AI) was employed to fight off invaders and threaten the populace with punishment (poverty and jail) while the kings’ and queens’ lands and wealth grew (efficiency and growth).

When the people became vociferous and agitated, knightly, regally endorsed contests were organised to quell dissent (regular global sporting contests). When they were not passive and protested against the orders (emergency health orders), witches and warlocks (AI-enabled face recognition used by law enforcement and social media surveillance) were sent into the community to locate, punish (fine) and torture (incarcerate) them for their insolence.”

Putting AI in its historical context

“At the end of the day, the success of AI research will be measured by how it has empowered all people, helping tackle the many wicked problems facing the planet, from the climate emergency to increasing inequality within and between countries.” (cf. Above)

In the context of the “cult of code”, the belief in equality and the dominance of growth and efficiency, this statement by leading Professors of AI looks nonsensical [1].

In the context of the possibility we are still in a medieval-like present with the social, governmental, corporate, and other discourses in a network of medieval relations, it looks equally silly.

While AI [2] seems, in general, to be plausibly presented as a systemic threat to human existence, to human freedoms; a weapon of untold capacity for harm; or a tool for justice and good, this too is nonsensical. This mindset simply channels the dystopic/utopic dichotomy that may be at the heart of every technologist's ideology — whether they are aware of it or not — dressed up in fallacious arguments.

AI is better seen as another emergent technology (like plate mail armour, the crossbow, or the Colt 45) that can be weaponized and that has arisen through a conjunction of forces in various stages of mature power relations:

1. The propensity of humans to tinker and fiddle with things that are better left alone, yet they cannot leave them alone;

2. The great desire of certain individuals and groups to seize and retain wealth and assets at all costs, irrespective of what is widely accepted (e.g. Elon & Zuck), and the failure to regulate this, which disempowers the many in favor of the few;

3. The failure of any ideology or political system of government to adequately regulate the challenge of 1 & 2, and the trend towards authoritarian legal controls as the only way forward;

4. Humanity perhaps hasn’t really matured society beyond the Medieval.

We all know that tools are in the hands of the user (mostly conglomerates, or governments and their police forces). AI, and the way even highly educated academics talk about it, is still stuck in a dystopic/utopic or ethical discourse, irrespective of the dominant global reality of the success of conglomerates’ growth-and-efficiency discourse.

AI therefore cannot be unbiased, because no human will ever be unbiased and no data related to humans can be unbiased. AI can never be equal because the dominant reality is that the growth and efficiency (and ownership) objectives of conglomerates will ensure it is never equal and unbiased.

We are cursed to be willing, passive, silent subjects of Governments, birth-to-grave consumers, living in a democracy that is nothing like what the free Greeks thought of as democracy. This is already a reality irrespective of AI. AI cannot solve this problem; believing it can is techno-solutionism.

Here, then, are three questions for AI researchers and practitioners:

1. Can they (you) ever move away from the dystopic/utopic and tech-as-equality-empowering ideology?

2. Can they (you) recognize themselves and their flaws, and establish balanced, effective power relations within the dominant growth-and-efficiency discourse of the conglomerates (and Governments) who rule us?

3. Can they (you) themselves, not their tech, organize towards balance and participate in regulating towards the balance that is missing in the face of the dominant reality?

An ethic in AI use (like enabling virtue) is possible and balanced; equality in every dimension, like the dichotomies themselves, is a fallacy. That AI can be made equal from inside the AI tech is a fallacy.

Footnotes.

[1] Especially when Universities seek funding from conglomerates, and these professors themselves are in a hierarchical politico-academic contest to acquire and maintain their positions. They do not see themselves as equal to the rest of us either, nor do they engage in genuine dialogue with the general public, just remote pontification based on positional authority.

[2] AI in this article means AI in current use, i.e. deep neural networks like AlphaGo Zero et al., not the ASI of Bostrom and Dennett. That ASI may be essentially alien and unknowable within the framework presented here. We’re talking about AI that is employed by humans for growth and efficiency.
