On a path to a new branch of engineering
“In the 21st century, lifts are interesting because they’re one of the first places that AI will touch you without you even knowing it happened. In many buildings all around the world, the lifts are running a set of algorithms. A form of proto-artificial intelligence. That means before you even walk up to the lift to press the button, it’s anticipated you being there. It’s already rearranging all the carriages. Always going down, to save energy, and to know where the traffic is going to be. By the time you’ve actually pressed the button, you’re already part of an entire system that’s making sense of people and the environment and the building and the built world”, says Genevieve Bell in her TED talk about artificial intelligence (AI).
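The behaviour Bell describes, anticipating demand and repositioning cars before anyone presses a button, can be illustrated with a toy heuristic: remember where calls have historically come from at each hour of the day, and park idle cars near those floors. The sketch below is only an illustration of that idea in Python; it is not how any real lift controller works, and the class, method names and traffic model are all hypothetical.

    from collections import Counter

    class PredictiveDispatcher:
        """Toy pre-positioning heuristic: park idle cars where calls usually originate."""

        def __init__(self, num_floors: int) -> None:
            self.num_floors = num_floors
            # hour of day -> Counter of floors where hall calls historically originate
            self.call_history: dict[int, Counter] = {}

        def record_call(self, hour: int, floor: int) -> None:
            # Log a hall call so traffic at this hour can be anticipated later.
            self.call_history.setdefault(hour, Counter())[floor] += 1

        def park_floors(self, hour: int, num_cars: int) -> list[int]:
            # Choose floors to park idle cars at, favouring historically busy floors.
            history = self.call_history.get(hour)
            if not history:
                return [0] * num_cars  # no data yet: keep every car at the lobby
            busiest = [floor for floor, _ in history.most_common(num_cars)]
            return busiest + [0] * (num_cars - len(busiest))  # spare cars go to the lobby

    # Example: morning traffic concentrates at the lobby (floor 0), with some from floor 12.
    dispatcher = PredictiveDispatcher(num_floors=20)
    for _ in range(50):
        dispatcher.record_call(hour=8, floor=0)
    dispatcher.record_call(hour=8, floor=12)
    print(dispatcher.park_floors(hour=8, num_cars=3))  # -> [0, 12, 0]

Even a heuristic this simple already makes decisions about people before they act, which is exactly the quiet, systems-level presence of AI that Bell points to.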
AI is already all around us: in buildings, in systems, in infrastructure we rarely notice. More than 200 years of industrialization suggest that AI will find its way to systems-level scale relatively easily. The stories of mechanization, automation and digitization all point to the role of technology and its importance. “Those stories also put the focus squarely on technology and technology change. But I believe that scaling a technology and building a system requires something more”, says Bell, one of the founders of the 3Ai Institute at the Australian National University.
The institute has a simple mission: to establish a new branch of engineering to take AI safely, sustainably and responsibly to scale. Its researchers raise six big questions, about autonomy, agency, assurance, interfaces, indicators and intentionality.
Is the system autonomous? Is it able to act without being told to act?
Does the system have agency? Does it have controls and limits that live somewhere and prevent it from doing certain kinds of things under certain conditions?
How do we think about all the pieces of assurance: safety, security, trust, risk, liability, manageability, explicability, ethics, public policy, law and regulation? And how would we show that the system is safe and functioning?
What will the human interfaces with these AI-driven systems be? Will people talk to them? Will the systems talk to people, and will they talk to each other?
What will the indicators be to show that they are working well? Two hundred years of the industrial revolution tell us that the two most important ways to think about a good system are productivity and efficiency. In the 21st century, you might want to expand that just a little bit: is the system sustainable, is it safe, is it responsible?
What’s its intent? What’s the system designed to do and who said that was a good idea? Or put another way, what is the world that this system is building, how is that world imagined, and what is its relationship to the world we live in today? Who gets to be part of that conversation? Who gets to articulate it?
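Taken together, the six questions can be read as a checklist that travels with a system from design through decommissioning. The sketch below shows one minimal way such a checklist might be represented in code; the structure and field names are our own invention, not anything published by the 3Ai Institute.

    from dataclasses import dataclass, field

    # The six questions, in the order the institute poses them.
    QUESTIONS = ("autonomy", "agency", "assurance", "interfaces", "indicators", "intentionality")

    @dataclass
    class SystemReview:
        """Tracks which of the six questions have been considered for a given system."""
        system_name: str
        notes: dict = field(default_factory=lambda: {q: None for q in QUESTIONS})

        def open_questions(self) -> list[str]:
            # Questions that have not yet been considered for this system.
            return [q for q, note in self.notes.items() if note is None]

    review = SystemReview("building lift controller")
    review.notes["autonomy"] = "Repositions cars without being told to; bounded by the building."
    print(review.open_questions())
    # -> ['agency', 'assurance', 'interfaces', 'indicators', 'intentionality']

The point is not the data structure but the discipline: the helper simply makes visible which of the six dimensions has not yet been considered for a given system.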
Genevieve Bell is sure that there are no simple answers to these questions. “Instead, they frame what’s possible and what we need to imagine, design, build, regulate and even decommission. They point us in the right directions and help us on a path to establish a new branch of engineering”, she says.