To control AI, we need to understand more about humans

From Frankenstein to I, Robot, we have for centuries been intrigued by and terrified of creating beings that might develop autonomy and free will.

And now that we stand on the cusp of the age of ever-more-powerful artificial intelligence, the urgency of finding ways to ensure our creations always do what we want them to do is growing.

For some in AI, like Mark Zuckerberg, AI is just getting better all the time, and if problems come up, technology will solve them. But for others, like Elon Musk, the time to start figuring out how to regulate powerful machine-based systems is now.

On this point, I’m with Musk. Not because I think the doomsday scenario Hollywood loves to scare us with is around the corner, but because Zuckerberg’s confidence that we can solve any future problems is contingent on Musk’s insistence that we need to “learn as much as possible” now.

And among the things we urgently need to learn more about is not just how artificial intelligence works, but how humans work.

Humans are the most elaborately cooperative species on the planet. We outflank every other animal in cognition and communication – tools that have enabled a division of labor and shared living in which we have to depend on others to do their part. That’s what our economies and systems of government are all about.

But sophisticated cognition and language – which AIs are already starting to use – are not the only features that make people so wildly successful at cooperation.

Humans are also the only species to have developed “group normativity” – an elaborate system of rules and norms that designates what is collectively acceptable and unacceptable for people to do, kept in check by group efforts to punish those who break the rules.

Many of these rules are enforced by formal institutions such as police, prisons, and courts, but the simplest and most common punishments are enacted in groups: criticism and exclusion – refusing to play, in the park, market, or workplace, with those who violate norms.

When it comes to the risks of AIs exercising free will, then, what we are really worried about is whether they will continue to play by, and help enforce, our rules.

So far, the AI community and those funding AI safety research – investors like Musk and several foundations – have mostly turned to ethicists and philosophers to help think through the challenge of building AI that plays nice. Thinkers like Nick Bostrom have raised questions about the values that AI, and those who build AI, should care about.

But our complex normative social orders are less about ethical choices than they are about the coordination of billions of people making millions of choices every day about how to behave.

How that coordination works is something we do not really understand. Culture is one set of rules, but what makes it change – sometimes slowly, sometimes quickly – is something we have yet to fully grasp. Law is another set of rules, one we can change deliberately in theory but far less easily in practice.

As the newcomers to our group, therefore, AIs are a cause for suspicion: what do they know and understand, what motivates them, how much respect will they have for us, and how willing will they be to find constructive solutions to conflicts? AIs will only be able to integrate into our elaborate normative systems if they are built to read, and participate in, those systems.

In a future with more pervasive AI, people will be interacting with machines on a regular basis. What will happen to our willingness to drive, or to follow traffic laws, when some of the cars are autonomous and talking to each other but not to us? Will we trust a robot to care for our children in school or our aging parents in a nursing home?

Social psychologists and roboticists are thinking about these questions, but we need more research of this type, and more that focuses on the features of a system, not just the design of an individual machine or process. That will require expertise from people who think about the design of normative systems.

Are we prepared for AIs that start building their own normative systems – their own rules about what is acceptable and unacceptable for a machine to do – in order to coordinate their interactions? I expect this will happen: like humans, AI agents will need a basis for predicting what other machines will do.

We have already seen AIs surprise their developers by creating their own language to improve their performance on cooperative tasks. But Facebook’s ability to shut down the cooperating AIs that developed a language humans could not follow is not an option that will always exist.

As AI researcher Stuart Russell emphasizes, smarter machines will figure out that they cannot do what humans have tasked them to do if they are dead – and hence we must start thinking now about how to design systems that ensure they continue to value human input and oversight.

To build smart machines that follow the rules that multiple, conflicting, and sometimes inchoate human groups help to shape, we will need to understand a lot more about what makes each of us do just that, every day.