The Third Wave: AI Scientist Peter Tu’s Work With Machines Is Often About What It Means to Be Human

May 29, 2024 | by Chris Norris

To hear Peter Tu describe it, the future of artificial intelligence is child’s play. “Think of how a child learns in their first two or three years of life,” says GE Aerospace Research’s chief AI scientist. Just like a toddler, the burgeoning ocean of neural networks we collectively refer to as “AI” is exploring and learning about the world. “It realizes there are objects, there are entities with agency, there are things happening outside itself,” Tu says. And through this “play,” he adds, AI might soon make the kind of intuitive leaps hitherto known only to human evolution.

All of which can make for awkward chats at cocktail parties. “A few years ago, if you talked about consciousness in machines, people would be like, ‘OK, here comes a lunatic,’” says Tu with a chuckle. “But now it’s this elephant in the room. We don’t talk about it because we still don’t really have the language for it. But we’re past talking about or contemplating this thing and we’re actually experimenting with it. That’s why it’s such an exciting time.”

Few scientists are better poised than Tu to help GE Aerospace and the larger world negotiate AI’s challenges and opportunities at such a heady time. “At GE Aerospace, we’re trying to understand AI’s implications and what aspects of AI the business should take advantage of,” Tu says. “It’s about seeking comprehensive strategies to integrate AI in a more holistic way.” At the same time, he’s a sort of global emissary of the field. Tu’s new book, The Practical Philosophy of AI-Assistants, maps out some of the terrain he’s spent much of the past two decades exploring for GE Aerospace Research and its various research partners. 

A History Deep in AI Learning

Tu joined the company in 1997, after completing a doctorate in computer vision at Oxford, where he used powerful algorithms to enhance a digital camera’s ability to see more precisely and with greater comprehension. He applied this work in collaboration with the FBI, performing fingerprint analysis and facial reconstruction from human skulls. “Back then, it was about understanding an environment, the limits of seeing and perception,” Tu says. “As we developed more capabilities, it moved beyond simply recognizing an object to understanding behavior.”

More recently, Tu’s work on a series of programs with the Defense Advanced Research Projects Agency (DARPA) and various universities pushed these capabilities closer to human levels of situational awareness. In 2019, Tu’s team partnered with scientists from Siena College’s Institute of Artificial Intelligence on a DARPA project called Grounded Artificial Intelligence Language Acquisition (GAILA), which studied AI’s ability to achieve childlike language acquisition and understanding based on visual and contextual cues. 
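
The article doesn’t detail GAILA’s models, but the flavor of grounded, childlike word learning is easy to sketch. The Python toy below is a minimal cross-situational learner, with invented scenes and object names standing in as assumptions: it grounds words in objects using nothing but co-occurrence across scenes, a crude stand-in for the visually and contextually cued learning GAILA studied at far greater sophistication.

```python
from collections import defaultdict

# Illustrative sketch only: a cross-situational word learner, a classic
# toy model of grounded language acquisition. Each scene pairs an
# utterance with the set of objects visible when it was spoken; the
# learner infers word meanings purely from co-occurrence across scenes.
scenes = [
    (["the", "red", "ball"], {"ball", "table"}),
    (["a", "blue", "ball"], {"ball", "cup"}),
    (["the", "blue", "cup"], {"cup", "table"}),
]

# word -> object -> number of scenes in which they co-occurred
counts = defaultdict(lambda: defaultdict(int))
for words, objects in scenes:
    for word in words:
        for obj in objects:
            counts[word][obj] += 1

# Guess each content word's referent: the object it co-occurs with most.
for word in ("ball", "cup"):
    guess = max(counts[word], key=counts[word].get)
    print(f"'{word}' is most strongly grounded in: {guess}")
```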

Subsequent DARPA programs, like Context Reasoning for Autonomous Teaming (CREATE), expanded on this work. Using Sherlock, a dynamic sensing and controller platform, AI agents observed people interacting and learned to mimic the way human intuition lets people cooperate through nothing more than a few verbal and nonverbal cues. AI agents capable of such improvisational interactions with each other are better able to address an array of unforeseen challenges in unpredictable environments, such as the 24-hour gas station Tu observed for a project aimed at monitoring hazardous driver behavior. “Over 24 hours, you see all these behaviors that you’d never expect or predict,” he says. “That’s why they require a human attendant.” At least for now.

The New Wave

But it’s in just such strange, unknowable fields of operation that Tu sees future AI surging to the forefront as the technology enters its decades-in-the-making third wave. He traces its evolution to this point: “The first wave was what I’d call knowledge exploitation,” he says. “We have certain propositions we know to be true, and we combine them to discover new things.” The second wave was driven by what Tu calls statistical inference: “It’s where you have a large number of observations and you want to infer a variable you can’t directly see,” he says. “Second-wave AI is neural networks doing what we call ‘deep learning,’ which has allowed some astonishing breakthroughs — facial recognition, self-driving cars — but still assumes that a person has anticipated the problem.”
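
The contrast between those first two waves can be sketched in a few lines of Python. The toy below is illustrative only, with made-up facts and a made-up sensor: naive forward chaining over known propositions stands in for first-wave knowledge exploitation, and estimating a hidden value from many noisy observations stands in for second-wave statistical inference.

```python
import random

# First wave, "knowledge exploitation": combine propositions known to be
# true to derive new ones, here via naive forward chaining.
facts = {"is_bird(tweety)"}
rules = [("is_bird(tweety)", "can_fly(tweety)")]  # (premise, conclusion)
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True
print(facts)  # the derived fact can_fly(tweety) now appears

# Second wave, "statistical inference": infer a variable you can't
# directly see from a large number of observations, here the hidden
# mean behind noisy sensor readings.
hidden = 7.0
observations = [hidden + random.gauss(0.0, 0.5) for _ in range(10_000)]
estimate = sum(observations) / len(observations)
print(f"inferred hidden value: {estimate:.2f}")  # close to 7.0
```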

Third-wave AI is for situations like the 24-hour gas station: unpredictable, unanticipated scenarios where agents must get an almost human “gist” of a situation and take the right action. This is an area Tu is currently tackling through GE Aerospace Research’s work on another DARPA program called Environment-Driven Conceptual Learning (ECOLE), the goal of which is to develop neural symbolic representations of real-world phenomena like objects and actions, pushing machines past simple pattern recognition and toward an understanding of what the things they’re seeing actually are. “Not ‘Can you recognize a horse?’” Tu explains. “But ‘Can you tell me what the essential attributes of a horse are?’” 
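
ECOLE’s actual neural symbolic representations aren’t spelled out here, but the shift Tu describes can be sketched. In the hypothetical Python below, every attribute and name is an assumption: a second-wave-style recognizer emits only a label, while an explicit concept records which attributes are essential, so the system can report what it assumed rather than observed.

```python
from dataclasses import dataclass

# Second-wave style: a recognizer that emits a label and nothing else.
def recognize(features: set) -> str:
    return "horse" if {"four_legs", "mane", "hooves"} <= features else "unknown"

# Third-wave style: an explicit concept separating the attributes a horse
# must have from those any particular horse merely happens to have.
@dataclass
class Concept:
    name: str
    essential: set   # attributes without which it isn't the concept
    incidental: set  # attributes that vary across instances

horse = Concept(
    name="horse",
    essential={"four_legs", "mane", "hooves", "herbivore"},
    incidental={"brown", "saddled"},
)

observed = {"four_legs", "mane", "hooves", "brown"}
print(recognize(observed))         # -> 'horse': a label, nothing more
print(horse.essential - observed)  # -> {'herbivore'}: assumed, not seen
```

The difference is in the question each can answer: the recognizer can only say “horse,” while the concept can say what makes a horse a horse.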

This is where discussions of AI start to get, if not human, then humanist. “It’s a daunting problem, technically and philosophically,” Tu says of designing agents for the next phase of AI. “How does one gather the wisdom, creativity, and recognition of a human mind?” 

In addressing such themes, Tu’s The Practical Philosophy of AI-Assistants explores the ideas of computer science visionary Alan Turing, the psychologist Daniel Kahneman, and the cognitive scientist David Chalmers, among others. And while Tu brings their work to bear on all four general areas of AI development — recognition, communication, explanation, and civility — it’s the last that provokes some of the most profound questions, about both machines and humans. 

“It’s a fragile world we live in,” Tu acknowledges. “Engineering for civility means asking questions about ourselves and society. What does it mean for an agent to participate in our civilized society? What does it mean to recognize and support a norm, to provide the support that buttresses us against assaults? How can agents explain the way the world works, the way things happen? All of this is premised on our own ability to recognize and understand the world as it is.”

He’s not saying that large language models (LLMs) are conscious. Not exactly. But their capacities are approaching ours in scale. “We’re talking about a couple hundred billion neurons,” Tu says of today’s neural networks. “That’s not too far off from where we’re at. But this neural network is observable. We can look at what those neurons are doing.” 

And as these digital children grow closer and closer to having human levels of insight, they may reveal more of what it is to be a human being. “As smart as we are, human beings actually have fairly little capacity to observe ourselves. I mean, fMRI gives us some aggregate statistics, but we’re really still a mystery to ourselves. That’s part of what’s so exciting about AI today,” says Tu. “There are emergent properties that weren’t expected. And one of them might be to provide a window into ourselves.”
