Writer Meghan O’Gieblyn on AI, Consciousness, and Creativity


These days, we’re inundated with speculation about the future of artificial intelligence—and specifically how AI might take away our jobs, or steal the creative work of writers and artists, or even destroy the human species. The American writer Meghan O’Gieblyn also wonders about these things, and her essays offer pointed inquiries into the philosophical and spiritual underpinnings of this technology. She’s steeped in the latest AI developments but is also well-versed in debates about linguistics and the nature of consciousness.

O’Gieblyn also writes about her own struggle to find deeper meaning in her life, which has led her down some unexpected rabbit holes. A former Christian fundamentalist, she later stumbled into transhumanism and, ultimately, plunged into the exploding world of AI. (She currently also writes an advice column for Wired magazine about tech and society.)

When I visited her at her home in Madison, Wisconsin, I was curious if I might see any traces of this unlikely personal odyssey. 

I hadn’t expected her to pull out a stash of old notebooks filled with her automatic writing, composed while working with a hypnotist. I asked O’Gieblyn if she would read from one of her notebooks, and she picked this passage: “In all the times we came to bed, there was never any sleep. Dawn bells and doorbells and daffodils and the side of the road glaring with their faces undone …” And so it went—strange, lyrical, and nonsensical—tapping into some part of herself that she didn’t know was there. 

That led us into a wide-ranging conversation about the unconscious, creativity, the quest for transcendence, and the differences between machine intelligence and the human mind.

SINGULAR MIND: Writer Meghan O’Gieblyn grew up as a Christian fundamentalist. As a young adult, she lost her faith in God and found the potential for transcendence in a new realm. Photo courtesy of Meghan O’Gieblyn.

Why did you go to a hypnotist and try automatic writing?

I was going through a period of writer’s block, which I had never really experienced before. It was during the pandemic. I was working on a book about technology, and I was reading about these new language models. GPT-3 had just been released to researchers, and the algorithmic text was just so wildly creative and poetic. 

So you wanted to see if you could do this, without using an AI model?

Yeah, I became really curious about what it means to produce language without consciousness. As my own critical faculty was getting in the way of my creativity, it seemed really appealing to see what it would be like to just write without overthinking everything. I was thinking a lot about the Surrealists and different avant-garde traditions where writers or artists would do exercises either through hypnosis or some sort of random collaborative game. The point was to try to unlock some unconscious creative capacity within you. And it seemed like that was, in a way, what the large language models were doing.

You have an unusual background for a writer about technology. You grew up in a Christian fundamentalist family.

My parents were evangelical Christians. My whole extended family are born again Christians. Everybody I knew growing up believed what we did. I was homeschooled along with all my siblings, so most of our social life revolved around church. When I was 18, I went to Moody Bible Institute in Chicago to study theology. I was planning to go into full-time ministry.

But then you left your faith.

I had a faith crisis when I was in Bible school, which metastasized into a series of doubts about the validity of the Bible and the Christian God. I dropped out of Bible school after two years and pretty much left the faith. I began identifying as agnostic almost right away.

But my sense is you’re still extremely interested in questions of transcendence and the spiritual life.

Absolutely. I don’t think anyone who grew up in that world ever totally leaves it behind. And my interest in technology grew out of those larger questions. What does it mean to be human? What does it mean to have a soul? 

A couple of years after I left Bible school, I read The Age of Spiritual Machines, Ray Kurzweil’s book about the singularity and transhumanism. He had this idea that humans could use technology to further our evolution into a new species, what he called post-humanity. It was this incredible vision of transcendence. We were essentially going to become immortal.

The algorithmic text was just so wildly creative and poetic.

There are some similarities to your Christian upbringing.

As a 25-year-old who was just starting to believe that I wasn’t going to live forever in heaven, this was incredibly appealing to think that maybe science and technology could bring about a similar transformation. It was a secular form of transcendence. I started wondering: What does it mean to be a self or a thinking mind? Kurzweil was saying our selfhood is basically just a pattern of mental activity that you could upload into digital form.

So Kurzweil’s argument was that machines could do anything that the human mind can do, and more.

Essentially. But there was a question that was always elided: Is there going to be some sort of first-person experience? And this comes into play with mind-uploading. If I transform my mind into digital form, am I still going to be me or is it just going to be an empty replica that talks and acts like me, with no subjective experience? 

Nobody has a good answer for that because nobody knows what consciousness is. That’s what got me really interested in AI, because that’s the area in which we’re playing out these questions now. What is first-person experience? How is that related to intelligence?

Isn’t the assumption that AI has no consciousness or first-person experience? Isn’t that the fundamental difference between artificial intelligence and the human mind?

That is definitely the consensus, but how can you prove it? We really don’t know what’s happening inside these models because they’re black box models. They’re neural networks that have many hidden layers. It’s a kind of alchemy.

A sophisticated large language model like ChatGPT has accumulated a vast reservoir of language by scraping the internet, but does it have any sense of meaning?

It depends on how you define meaning. That’s tricky because meaning is a concept we invented, and the definition is contested. For the past hundred years or so, linguists have generally held that meaning depends on embodied reference in the real world. To know what the word “dog” means, you have to have seen a dog and belong to a linguistic community where that has some collective meaning.

Language models don’t have access to the real world, so they’re using language in a very different way. They’re drawing on statistical probabilities to create outputs that sound convincingly human and often appear very intelligent. And some computational linguists say, “Well, that is meaning. You don’t need any real-world experience to have meaning.”

What does it mean to be human? What does it mean to have a soul? 

These language models are constructing sentences that make a lot of sense, but is it just algorithmic wordplay?

Emily Bender and some engineers at Google came up with the term “stochastic parrots.” Stochastic refers to a set of statistical probabilities with a certain amount of randomness, and they’re parrots because they’re mimicking human speech. These models were trained on an enormous amount of real-world human texts, and they’re able to predict what the next word is going to be in a certain context.
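The prediction step she describes can be sketched with a toy bigram model: count which words follow which in a small corpus, then sample the next word in proportion to those counts. This is a minimal illustration of next-word prediction with randomness, not how production language models actually work; the tiny corpus here is invented for the example.

```python
import random
from collections import Counter, defaultdict

# A tiny invented corpus, split into words.
corpus = "the dog barks . the dog sleeps . the cat sleeps".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to its bigram count."""
    counts = following[word]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# After "the", only "dog" or "cat" can follow, and "dog" is twice
# as likely because it appeared twice in the corpus.
print(predict_next("the"))
```

The “stochastic” part is the weighted random draw: the model doesn’t deterministically pick the most frequent continuation, it samples, which is why the same prompt can yield different outputs.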

To me, that feels very different than how humans use language. We typically use language when we’re trying to create meaning with other people. 

In that interpretation, the human mind is fundamentally different than AI.

I think it is. But there are people like Sam Altman, the CEO of OpenAI, who famously tweeted, “I am a stochastic parrot, and so r u.” There are people creating this technology who believe there’s really no difference between how these models use language and how humans use language.

We think we have all these original ideas, but are we just rearranging the chairs on the deck?

I recently asked a computer scientist, “What do you think creativity is?” And he said, “Oh, that’s easy. It’s just randomness.” And if you know how these models work, there is a certain amount of correlation between randomness and creativity. A lot of the models have what’s called a temperature gauge. If you turn up the temperature, the output becomes more random and it seems much more creative. My feeling is that there’s a certain amount of randomness in human creativity, but I don’t think that’s all there is.
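The temperature gauge she mentions can be sketched concretely: a model scores each candidate next word, and dividing those scores by a temperature before converting them to probabilities controls how random the output is. A minimal sketch, with made-up scores:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw scores into probabilities. Higher temperature flattens
    the distribution, making unlikely words more likely to be sampled."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate words

cold = softmax_with_temperature(logits, 0.5)  # sharper: top word dominates
hot = softmax_with_temperature(logits, 2.0)   # flatter: more surprising picks

print(cold[0] > hot[0])  # low temperature concentrates probability on the top word
```

Turning the temperature up doesn’t add new ideas; it just spreads probability onto less likely words, which is why higher-temperature output reads as more surprising or “creative.”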

As a writer, how do you think about creativity and originality? 

I think about modernist writers like James Joyce or Virginia Woolf, who completely changed literature. They created a form of a consciousness on the page that felt nothing like what had come before in the history of the novel. That’s not just because they randomly recombined everything they had read. The nature of human experience was changing during that time, and they found a way to capture what that felt like. I think creativity has to have that inner subjective quality. It comes back to the idea of meaning, which is created between two minds.

It’s commonly assumed that AI has no thinking mind or subjective experience, but how would we even know if these AI models are conscious?  

I have no idea. My intuition is that it would have to say something convincing enough to show that it has experience, which includes emotion but also self-awareness. But we’ve already had instances where the models have spoken in very convincing terms about having an inner life. There was a Google engineer, Blake Lemoine, who was convinced that the chatbot he was working on was sentient. This is going to be fiercely debated.

Artificial general intelligence is creating something that’s essentially going to be like a god.

A lot of these chatbots do seem to have self-awareness. 

They’re designed to appear that way. There’s been so much money poured into emotional AI. This is a whole subfield of AI—creating chatbots that can convincingly emote and respond to human emotion. It’s about maximizing engagement with the technology. 

Do you think a very advanced AI would have godlike capacities? Will machines become so sophisticated that we can’t distinguish between them and more conventional religious ideas of God?

That’s certainly the goal for a lot of people developing this technology. Sam Altman, Elon Musk—they’ve all absorbed the Kurzweil idea of the singularity. They are essentially trying to create a god with AGI—artificial general intelligence. It’s AI that can do everything we can and surpass human intelligence.

But isn’t intelligence, no matter how advanced, different than God? 

The thinking is that once it gets to the level of human intelligence, it can start doing what we’re doing, modifying and improving itself. At that point it becomes a recursive process where there’s going to be some sort of intelligence explosion. This is the belief. 

But there’s another question: What are we trying to design? If you want to create a tool that helps people solve cancer or find solutions to climate change, you can do that with a very narrowly trained AI. But the fact that we are now working toward artificial general intelligence is different. That’s creating something that’s essentially going to be like a god.

Why do you think Elon Musk and Sam Altman want to create this?

I think they read a lot of sci-fi as kids. (Laughs) I mean, I don’t know. There’s something very deeply human in this idea of, “Well, we have this capacity, so we’re going to do it.” It’s scary, though. That’s why it’s called the singularity. You can’t see beyond it. It’s an event horizon. Once you create something like that, there’s really no way to tell what it will look like until it’s in the world. 

I do feel like people are trying to create a system that’s going to give answers that are difficult to come by through ordinary human thought. That’s the main appeal of creating artificial general intelligence. It’s some sort of godlike figure that can give us the answers to persistent political conflicts and moral debates.

If it’s smart enough, can AI solve the problems that we imperfect humans cannot?

I don’t think so. It’s similar to what I was looking for in automatic writing, which is a source of meaning that’s external to my experience. Life is infinitely complex, and every situation is different. That requires a constant process of meaning-making.

Hannah Arendt talks about thinking and then thinking again. You’re constantly making and unmaking thought as you experience the world. Machines are rigid. They’re trained on the whole corpus of human history. They’re like a mirror, reflecting back to us a lot of our own beliefs. But I don’t think they can give us that sense of meaning that we’re looking for as humans. That’s something that we ultimately have to create for ourselves.

This interview originally aired on Wisconsin Public Radio’s nationally syndicated show To the Best of Our Knowledge. You can listen to the full interview with Meghan O’Gieblyn here.

Lead image: lohloh / Shutterstock