P5: Behavioral Economics and The Algorithm

steve wright
Conches

--

Two barrels of the shotgun pointing at our heads

Intro, Part 1, Part 2, Part 3, Part 4, Part 5, Part 6

Rationality and Economics

It is now consensus. Humans are irrational. This enlightened understanding emerged in 2008 at the beginning of the Great Recession, when super-banker-man and former Chair of the Federal Reserve, Alan Greenspan, declared:

“I made a mistake in presuming that the self-interest of organizations, specifically banks and others, was such that they were best capable of protecting their own shareholders.”

Homo Economicus (Sofia Verano)

20th century economists like Greenspan grounded their theories in what they called Homo Economicus, the perfectly rational actor. But perfect rationality is impossible. Assuming it anyway is like physicists studying motion without accounting for friction, or biologists studying disease without understanding contagion. In 2002 Daniel Kahneman won the Nobel Prize in Economics for showing that, like Pluto and the Brontosaurus, Homo Economicus doesn’t exist and human economic behavior is not rational. This was the beginning of the new field of Behavioral Economics. In 2019, Esther Duflo won the same prize (the second woman after Elinor Ostrom) for defining rigorous experimental methods that demonstrate empirically what irrationality looks like and why we actually make the specific economic choices we do. This shift provides an opportunity to see the world more accurately and build equity amongst economic actors to create a vibrant and resilient economy. It also provides an opportunity to lean into Capitalist mythology, to exploit privilege and manipulate consumer irrationality for personal gain.

In 2009 in the Harvard Business Review, Behavioral Economist Dan Ariely declared “The End of Rational Economics”.

We are now paying a terrible price for our unblinking faith in the power of the invisible hand. We’re painfully blinking awake to the falsity of standard economic theory — that human beings are capable of always making rational decisions and that markets and institutions, in the aggregate, are healthily self-regulating. If assumptions about the way things are supposed to work have failed us in the hyperrational world of Wall Street, what damage have they done in other institutions and organizations that are also made up of fallible, less-than-logical people? And where do corporate managers, schooled in rational assumptions but who run messy, often unpredictable businesses, go from here?

We now know the answer to Ariely’s question. Corporate managers have weaponized Behavioral Economics to manipulate our choices, fracture our society and cement the American Oligarchy. To be sure, this is not new. Capitalism has been working on these goals from the very beginning, but in this moment we are fighting the dark serendipity of the simultaneous rise of Behavioral Economics and the algorithm.

The Algorithm

Before electricity we knew illumination. Before the hammer we would pound. Before the computer we calculated. But through human invention we built refined tools to make these activities mundane. The task performed became abstracted into the name of the invention without a need to understand how it works. We turn on the lights. We hammer a nail. We ask the Internet.

A computer is a blunt object made refined by human ingenuity. We like to say that computers have intelligence, that computers learn, but that’s a lie. Not a Trump-won-the-election lie but a this-product-is-shiny-new-and-delicious lie; something that people who think they are smart say to people they think are stupid. A computer’s intelligence is artificial. A computer’s learning is mechanized. Clouds are atmospheric phenomena and not the place where the Internet is. A computer is a tool commanded by humans to pound things we want pounded; to illuminate things we want illuminated; to calculate things we want calculated.

In reality, a computer is a box filled with switches; highly specialized, very fast switches. On, off, one, zero. And an algorithm is just instructions; instructions that can be understood and manipulated by both humans and machines. An algorithm is text, exactly like the text you are reading now but written in a language of painstakingly precise and explicitly unambiguous human words that can be translated into massive but finite strings of ones and zeros. Instructions for switches. Ones and zeros. On and off.
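
To see how unmagical that translation is, here is a toy sketch in Python (the sentence and the choice of language are mine, purely for illustration): one human instruction, and the finite string of ones and zeros a box of switches would actually hold.

```python
# A human-readable instruction and its machine-readable form.
# Nothing intelligent happens here; it is only translation.

instruction = "turn the light on"

# Each character becomes one byte: eight switches, each on (1) or off (0).
as_bits = " ".join(format(byte, "08b") for byte in instruction.encode("utf-8"))

print(instruction)   # turn the light on
print(as_bits)       # 01110100 01110101 01110010 01101110 ...
```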

Through human intelligence we wield boxes of switches to automate decisions in highly controlled environments. This is artificial intelligence. It is no different than a calculator that takes an input, generates an output and then repeatedly uses that output for its next calculation. Someday artificial intelligence may become superhuman. Maybe. Someday we may learn to accept grace and value love. Maybe. At this moment I am skeptical of both. We personify and elevate the computer and the algorithm because they obscure human industry and human intention. If the Internet is in the clouds it is ubiquitous, omniscient even. If computers are intelligent, we don’t need to be. If the algorithm personalizes my experiences and curates my life then I am chosen, I am pure.
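
One way to picture that loop is the oldest trick in numerical computing, sketched here in Python (the example is mine, not the essay's): Newton's method for a square root, where each output is fed straight back in as the next input.

```python
# A calculator that reuses its own output: Newton's method for sqrt(2).
# The "intelligence" is nothing more than repetition.

def next_guess(x, guess):
    # One calculation: average the guess with x divided by the guess.
    return (guess + x / guess) / 2

x = 2.0
guess = 1.0
for _ in range(6):
    guess = next_guess(x, guess)  # yesterday's output is today's input
    print(guess)

# The guesses settle toward 1.41421356... No insight, no intention, just iteration.
```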

Student accused of stealing sign in Capitol welcomed ‘INFAMY’ on Instagram, FBI says — https://www.washingtonpost.com/dc-md-va/2021/01/18/gracyn-courtright-capitol-riot/

“To get just an inkling of the fire we are playing with, consider how content-selection algorithms function on social media. They aren’t particularly intelligent, but they are in a position to affect the entire world because they directly influence billions of people. Typically, such algorithms are designed to maximize click-through, that is, the probability that the user clicks on presented items. The solution is simply to present items that the user likes to click on, right? Wrong. The solution is to change the user’s preferences so that they become more predictable. A more predictable user can be fed items they are more likely to click on, thereby generating more revenue. People with more extreme political views tend to be more predictable in which items they will click on. … Like any rational entity, the algorithm learns how to modify the state of its environment — in this case, the user’s mind — in order to maximize its own reward. The consequences include the resurgence of fascism, the dissolution of the social contract that underpins democracies around the world, and potentially the end of the European Union and NATO.” — Stuart Russell, Human Compatible
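
The dynamic Russell describes is small enough to caricature in a few lines of code. This is a toy sketch in Python, not any real platform's system; the topics, probabilities, and drift rate are all invented. A greedy selector shows whatever got clicked most, and every click nudges the simulated user's own preference upward, so the user becomes more predictable and the feed narrows.

```python
import random

random.seed(0)

# A simulated user: probability of clicking each topic when it is shown.
user = {"gardening": 0.30, "local news": 0.30, "outrage": 0.35}
clicks = {topic: 0 for topic in user}
shown = {topic: 0 for topic in user}

for _ in range(2000):
    # Greedy selection: show the topic with the best observed click rate
    # (unshown topics start optimistically at 1.0 so each gets a first try).
    topic = max(user, key=lambda t: clicks[t] / shown[t] if shown[t] else 1.0)
    shown[topic] += 1
    if random.random() < user[topic]:
        clicks[topic] += 1
        # Exposure shifts preference: each click makes the next click likelier.
        user[topic] = min(0.99, user[topic] + 0.01)

print("final click probabilities:", user)
print("times each topic was shown:", shown)
# The selector never got smarter; the user got narrower.
```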

Misinformation dropped dramatically the week after Twitter banned Trump and some allies — https://www.washingtonpost.com/technology/2021/01/16/misinformation-trump-twitter/

Computers are rational. Humans are irrational. So we, humans, manufacture intentionally duplicitous machines meticulously designed to change human behavior, and we do this, it seems, for no meaningful reason at all. I choose to see Mark Zuckerberg, Jack Dorsey, Sundar Pichai, and Jeff Bezos as children shouting to the world “Look what I can do! Look what I can do!” while they jump not so very high and run not so very fast.

This is the sixth post in a series:
Intro: Replacing Milton Friedman with All Of Us
Part 1: The Neighborhood
Part 2: Incentivizing the Health of the Human Neighborhood
Part 3: Difference Defines Us
Part 4: On Freedom
Part 5: Behavioral Economics and The Algorithm
Part 6: We Can Build That

--

The protocols of neighborliness are in contestation with the protocols of purity and the most important question we can ask ourselves is “Who is my neighbor?”