CNA Statement to UN Group of Governmental Experts on Lethal Autonomous Weapons Systems, August 29, 2018

This statement was made by the Director of CNA’s Center for Autonomy and AI to the UN’s Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS) in Geneva during its August 27–31, 2018 meeting.

Ambassador Gill, thank you for your leadership on such a complex and critical issue, which is at the heart of the future of war. I have been part of these discussions since 2016, first as a diplomat and now as a scientist. Overall, we have seen some progress in the past few years. I point in particular to the UK and Dutch positions, which illuminate a broader framework for human control over the use of force.

Yet it is clear that there is a fundamental disagreement in this body on the way forward. Some have said today that this is because some States are stalling. But I offer another explanation.

Einstein said this: “If I had an hour to solve a problem, I would spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.”

I believe this wisdom applies here. We have not made more progress in the past few years because we have not sufficiently defined the problem. States and other groups are still talking past one another.

Consider one example. There are two types of AI: general and narrow. General AI is often described as an intelligent or even superintelligent AI that can solve many types of problems; it does not exist today and may never exist. Narrow AI refers to machines performing specific, pre-programmed tasks for specific purposes. This is the type of AI we see in use today in many applications. These two types of AI carry very different risks. When we fail to discriminate between them in our discussions, we talk past one another and cause confusion. Framing this discussion around narrow AI – the kind of technology that is actually available to us now and in the near future – would help us focus on the specific risks of AI and autonomy that need to be mitigated.

That is one example, and there are others. I discuss a number of such risks in CNA’s new report, AI and Autonomy in War: Understanding and Mitigating Risks.

Finally, there is much discussion of civilian casualties in this forum. It is surprising that, as a Group of Governmental Experts, we have not explored this particular problem in more depth. Such exploration is very possible. Having led many studies on how civilian casualties occur in military operations, I believe there is much we as a group can learn about risks to civilians, how autonomy can introduce specific risks, and how technology can mitigate those risks.

Overall, I believe we can learn from Einstein: this is a problem we can solve, but we still have work to do to frame it adequately.
