We Are Being Asked To Contort Ourselves Into The Shape of Machine

steve wright
Published in Conches
8 min read · Aug 18, 2023

Artificial Intelligence, Teaching and Learning

Image generated by Substack’s AI from the prompt “The Wizard of Oz Teaching in a Classroom”.

I believe there is a significant danger in the coming Artificial Intelligence revolution, but I don’t think it has anything to do with super-human robot overlords. It’s far more mundane than that. AI, like the Internet that is its sustenance, is just another disingenuous, homogenizing engine for propagating the dominant culture. For a rabbit-hole vacation of exactly this, the Rolling Stone article “These Women Tried to Warn Us About AI” is a great place to start.

To accept Artificial Intelligence, we are being asked to give up on understanding the nuance of being human in favor of living a simplified analogy for life in a reductive digital simulation. We are being asked to ignore the spaces between the extremes and view reality in the radically simplified terms that machines can understand: yes or no, true or false, right or wrong. We are being asked to live reductive binary lives so that we can be algorithmically manipulated and economically exploited for the benefit of those who own the machines, for the benefit of those who believe they own the future.

We are being asked to contort ourselves into the shape of machine because only then can machines be human.

This isn’t new. This has always been the way. Human development, human liberation, has never been the goal of education, of American society. Cultural and political support for liberation has always been possible but rarely intended. Artificial Intelligence is just a better tool for doing what societies have always tried to do: control those who dare to want more, and offer liberation only to those with the fewest incentives to pursue it, only to the children of the future owners.

Techsplaining Artificial Intelligence usually begins by equating learning with computation, but I am going to work in the opposite direction. I am a Computer Science teacher, and while I understand computers, I care about humans, especially the development of young humans. So what is learning? What is teaching?

In 1960 Jerome Bruner wrote The Process of Education, which was unique because it focused not on cognitive development but on effective teaching: teaching that helps students grow, value their creativity, and build intuition about our world. The Process of Education was the output of a ten-day meeting hosted by the National Academy of Sciences, attended by educators and scientists from around the country interested in the best ways for teachers to ensure students could learn science in an era of rapid advances. Contrary to the trend in education today, the meeting DID NOT come up with a list of stuff to be memorized and regurgitated by students.

Mastery of the fundamental ideas of a field involves not only the grasping of general principles, but also the development of an attitude toward learning and inquiry, toward guessing and hunches, toward the possibility of solving problems on one’s own. … To instill such attitudes by teaching requires something more than the mere presentation of fundamental ideas. Just what it takes to bring off such teaching is something on which a great deal of research is needed, but it would seem that an important ingredient is a sense of excitement about discovery — discovery of regularities of previously unrecognized relations and similarities between ideas, with a resulting sense of self confidence in one’s abilities.

Bruner believed that an outcome of good teaching is for a student to develop a “sense of self confidence in one’s abilities”. He described how increased self-confidence leads to trust in one’s own academic intuition, which in turn leads students to better and deeper questions and deeper understandings.

Bruner believed that pattern recognition, or in his words the “discovery of regularities of previously unrecognized relations”, is an important stage in human cognitive development. Pattern recognition is also something that computers do really well. As a matter of fact, when it comes to producing the simple outcome of a discovered pattern, computers do it far better than humans, and it is not a stretch to say that this has become the dominant activity of computing.

Humanity has been repurposed as a massive data input engine. Each of us typing away at our documents, spreadsheets, presentations, emails, Zoom meetings, and social media posts, sending it all to the giant pattern recognition machine that is the commercial Internet. The owners of these machines have no interest in humans developing the ability to discover patterns, develop insight, draw connections, or grow to better understand and manage our human lives. The owners of the machines profit by manufacturing those connections for us, feeding us alternative truths connected by artificially bright lines to our excavated and misappropriated fears and desires. It works better for them if most of us are unable to create our own insights.

Bruner wrote The Process of Education within a few years of Seymour Papert’s co-founding of the MIT Artificial Intelligence Lab (around 1967). Papert talked about how, as a child, he fell in love with a gearbox from an old car, and how playing with and building an understanding of that gearbox was a catalyst for learning throughout his life. He also described how it would likely be a mistake to teach children about gears in an effort to replicate his transformative learning experience. Of his love for that gearbox he said:

“This is something that cannot be reduced to purely ‘cognitive’ terms. Something very personal happened, and one cannot assume that it would be repeated for other children in exactly the same form.”

Papert saw in early computers a chance to build models that could simulate learning, models that could give humans the chance to play with ideas and gain new insights. He saw computing as an opportunity to gain insight into humans, humans as generative beings. He was particularly interested in how computers could provide environments for academic play and exploration. Many years later, in 2002, looking back on his career, Papert said,

“I have been through three movements that began on a galactic scale and were reduced and trivialized.”

The three movements were efforts in the 1960s to advance our understanding of 1) child development, 2) human intelligence through artificial intelligence, and 3) computers as generative tools for education. Both Bruner and Papert believed that emotion, the affective realm, is as important to education as the cognitive, and that the natural inspiration that comes from human experience is an essential component of learning.

The current state of education in America is, as Papert described, reduced and trivialized, and it is not just the political right that designs and deploys education as an engine for obedience. The left also fundamentally understands the purpose of education as indoctrination into an existing and specific reality. To be clear, the craven, self-serving wormtongues of the Donald Trump era Republican Party have designed the most detestable dystopia, but the political left also lacks the courage to embrace the reality of our individual mortality and the inevitability of a world managed differently by others. The impossibly static worlds dully imagined by both the right and the left are the conceits of those who believe that they own the future. Education bent on maintaining any specific power structure, education bent on obedience, is oppression.

Professor Samir Rawashdeh is an Artificial Intelligence researcher at the University of Michigan. I am sure he is a highly competent Artificial Intelligence researcher and that he works in a sufficiently rigorous and productive academic department. The Google Machine found him for me. I picked his article “AI’s mysterious ‘black box’ problem explained” not because it stood out or was exceptional in any way. I picked the article because it is one of thousands of articles parroting the same insidious and infantilizing ideas about Artificial Intelligence: what AI is, and how the mysteries of AI are for AI and AI alone to understand. Professor Samir Rawashdeh is just another herald of Artificial Intelligence, the new Wizard of Oz. He and all the other heralds always begin their cry by declaring that the human brain is a machine.

“Learning by example is one of the most powerful and mysterious forces driving intelligence, whether you’re talking about humans or machines. Think, for instance, of how children first learn to recognize letters of the alphabet or different animals. You simply have to show them enough examples of the letter B or a cat and before long, they can identify any instance of that letter or animal. The basic theory is that the brain is a trend-finding machine.”

This is the sleight of hand that keeps us from looking behind the curtain. We have moved from Papert’s illustrative analogy to an equality: from “the brain is like a machine” to “the brain is a machine”. This hidden leap of faith erases all of the metaphysical, ontological questions. Actually, that’s not true. The heralds are very thorough; they first went about redefining ontology. Look it up and you will see two totally separate and incompatible definitions for the same term.

Ontology, noun

  1. The branch of metaphysics dealing with the nature of being.
  2. A set of concepts and categories in a subject area or domain that shows their properties and the relations between them.

The herald cries: “The human brain is a machine.”

The second step is for the herald to describe how the machine brain that is Oz mimics the brain machine because it uses something the heralds created and then named “neural networks”. Neural, that’s a brain word. Brains are machines. Machines are brains. Herald Rawashdeh continues…

“In fact, deep learning algorithms are trained much the same way we teach children. You feed the system correct examples of something you want it to be able to recognize, and before long, its own trend-finding inclinations will have worked out a ‘neural network’ for categorizing things it’s never experienced before.”
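To see what the heralds are actually describing, here is a minimal sketch of “learning from examples”: a single artificial neuron (a perceptron, the ancestor of the units in a “neural network”) shown labeled points until its weights settle on a rule it was never given explicitly. The data, labels, and numbers are all invented for illustration; real deep learning stacks millions of such units.

```python
# A toy perceptron: learn a rule purely from labeled examples.

def train(examples, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), label in examples:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = label - pred       # 0 when correct, +/-1 when wrong
            w0 += lr * err * x0      # nudge the weights toward the answer
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

def predict(params, x0, x1):
    w0, w1, b = params
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0

# "Correct examples": the label is 1 when x0 + x1 > 1, but we never tell
# the neuron that rule. It has to find the pattern in the data.
data = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.4, 0.3), 0),
        ((0.7, 0.9), 1), ((0.2, 0.1), 0), ((0.8, 0.6), 1)]
params = train(data)
print(predict(params, 0.95, 0.9))  # unseen input -> 1
print(predict(params, 0.05, 0.1))  # unseen input -> 0
```

Note what survives training: three numbers. The rule is in there, but the examples that produced it are not.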

Notice how Oz is always described as if it were autonomous. As if it wasn’t built. As if its code wasn’t written. As if it wasn’t plugged in. As if it wasn’t owned and intended. As if there wasn’t a man behind the curtain pulling its strings.

The final step is to throw up our hands at the mystery of it all. Herald Rawashdeh says:

“Just like our human intelligence, we have no idea of how a deep learning system comes to its conclusions. It “lost track” of the inputs that informed its decision making a long time ago. Or, more accurately, it was never keeping track.”

Even Oz can’t know its own magnificence! Is it really not possible for Oz to drop breadcrumbs along its decision tree, so that even Oz itself could find its way back? I am skeptical, but then, I am not a herald.
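The skepticism is warranted: per-decision breadcrumbs are possible. For any linear piece of a model, each input’s contribution to the output is just weight times input, so the machine can report which feature pushed a given decision and by how much. What the trained weights genuinely do not retain is which training examples shaped them; that history is averaged away. A minimal sketch, with all names and numbers invented for illustration:

```python
# A toy "cat" detector with per-decision breadcrumbs (feature attributions).
weights = {"fur": 2.0, "whiskers": 1.5, "wings": -3.0}

def decide(features):
    score = sum(weights[k] * v for k, v in features.items())
    # the breadcrumb: each feature's signed contribution to THIS decision
    trail = {k: weights[k] * v for k, v in features.items()}
    return ("cat" if score > 0 else "not cat"), trail

label, trail = decide({"fur": 1.0, "whiskers": 1.0, "wings": 0.0})
print(label)                       # -> cat
print(max(trail, key=trail.get))   # -> fur (this feature pushed hardest)
```

Deep networks make such trails harder to compute, not impossible; an active research field (“interpretability”) exists precisely because the breadcrumbs can, with effort, be recovered.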

There is a final insult by the future owners to the injury of humanity. The advances that we have made in AI, the insights that drove the development of machine learning, came from the study of children, human children. One particularly critical discovery was that in early childhood learning the presence of a caregiver is essential to create a foundation of care so that a child can take risks even in the face of negative feedback. This is important because taking risks is essential to learning. This little tidbit, that caregiving is an important aspect of teaching, has been really helpful … for machines.

Jerome Bruner famously said that any subject, any body of knowledge, can be made to be understandable by anyone if you can just find the right place to start. The Heralds of Oz are not interested in our understanding. The Heralds of Oz are interested in us leaving them alone to work behind the curtain, the curtain that marks the limits of what we are allowed to understand. The Heralds of Oz are interested in infantilizing our understanding of the Great and Powerful Oz.

We are being asked to give up on understanding the nuance of being human in favor of living a simplified analogy for life, in favor of living in a reductive digital simulation, owned and operated by those who believe they own the future.

