
Department Of Defense Press Briefing on the Adoption of Ethical Principles for Artificial Intelligence

STAFF: Good afternoon, everyone. Today we are joined by the Honorable Dana Deasy, the DOD's chief information officer, and Lieutenant General Jack Shanahan, the director of the Joint A.I. (Artificial Intelligence) Center, who will discuss the rollout of the A.I. principles for the Department of Defense.

We'll begin with short statements from each of the principals, followed by a question-and-answer session. When the question-and-answer session begins, please raise your hand and I'll call on you. Please identify your name and organization before asking your question. I'll ask that you ask one question at a time; that way, we can get through all of your questions and then cycle back through.

With that said, I'll now turn it over to Mr. Deasy to begin today's session. 

CIO DANA DEASY: Thank you. Is this on? Yeah? OK.

Well, good afternoon, and thank you all for coming today. It's great to have so many of you with us as we announce our formal adoption of ethical principles for artificial intelligence.

Our work here today builds upon centuries of honorable service in defense of the nation and our consistent commitment to upholding our values. In 2018, the Department of Defense issued the National Defense Strategy, which stated that the DOD will accelerate the adoption of artificial intelligence in order to ensure and extend our competitive military advantage.

Secretary Esper has consistently stated that A.I. is his number one technology modernization priority for the DOD. The department's A.I. strategy, which was included as an annex to the NDS, has as one of its core pillars leading in military ethics and A.I. safety.

For this reason, the department asked the Defense Innovation Board to propose A.I. ethical principles for the Department of Defense. The DIB conducted a comprehensive and robust 15-month study that included consultation with many leading A.I. and technical experts, current and former DOD leaders and the American public. We thank them for their extraordinary and thoughtful work. 

I'm delighted to share with you that just this week, I received direction from Secretary Esper to proceed with formal adoption of the five A.I. ethical principles based upon the DIB's recommendation. 

Today's announcement by the secretary is a significant milestone. It lays the foundation for the ethical design, development, deployment, and use of A.I. by the Department of Defense. These principles build upon the department's long history of ethical adoption of new technologies.

The five principles – responsible, equitable, traceable, reliable and governable – each will apply to both combat and non-combat A.I. technologies used by the department. 

Just last year, President Trump signed an executive order, the American A.I. Initiative, which provided the support and direction needed for the U.S. to modernize and to maintain leadership in every facet of A.I. 

This extends to the private sector and internationally, where the DOD is working closely with important stakeholders and the American tech industry, academia, and our allies and partners. 

In addition to the DOD's CIO (Chief Information Officer) and the JAIC (Joint Artificial Intelligence Center), both acquisition and sustainment and research and engineering are integral partners when it comes to implementing these new principles. We will continue to work closely with Ms. Ellen Lord's team to create agile software and capability acquisition policies that allow us to deliver impactful technology at relevant speed and at scale. 

We are committed to fostering A.I. collaboration with R&E (Research and Engineering) under the direction of Dr. Mike Griffin to advance the state of the art of artificial intelligence, finding innovative solutions for our warfighters.

Most of you have heard me say by now that A.I. is one of the key components of the DOD Digital Modernization Strategy, and enterprise cloud capability is the foundation to modernize our digital infrastructure at DOD.

The new A.I. technologies will provide effective systems for the warfighter to operate in both combat and non-combat operations. Building on cloud and A.I., I cannot stress enough the importance of command, control, and communications. We must have top-of-the-line systems for warfighter communications.

Finally, every system that DOD employs must have secure networks. Cybersecurity should be baked into every system from start to finish. A.I. technology will truly provide the warfighter with the transformative tools needed to remain competitive in a constantly evolving global threat environment, but each component of the Digital Modernization Strategy must also be fully integrated.

I will now turn it over to the Director of the JAIC, Lieutenant General Jack Shanahan, who will tell you more about the JAIC's leadership and coordination of the DOD's implementation of these principles. Thank you, everybody, for coming. I look forward to taking your questions.

LIEUTENANT GENERAL JACK SHANAHAN: All right, thank you, sir. Good afternoon, ladies and gentlemen. I am, as was already said, Lieutenant General Jack Shanahan, Director of the DOD Joint Artificial Intelligence Center, or the JAIC. It's good to be with you again and I'm glad to see so many familiar faces in the audience.

I'll begin by echoing a point that Secretary Esper has emphasized. Technology changes, but the U.S. military's commitment to upholding the highest ethical standards will not. Make no mistake, the adoption of A.I. principles is about achieving ethical outcomes as we field A.I.-enabled capabilities. 

The stakes for A.I. adoption are high. A.I. is a powerful emerging and enabling technology that is rapidly transforming culture, society, and eventually even war fighting. Whether it does so in a positive or negative way depends on our approach to adoption and use.

The complexity and the speed of warfare will change as we build an A.I.-ready force of the future. We owe it to the American people and our men and women in uniform to adopt A.I. ethics principles that reflect our nation's values of a free and open society.

As I've mentioned before, this is a multi-generational challenge that will require a multi-generational commitment. The DOD's ethics principles are a steadfast symbol of that commitment. While we firmly believe that the nation that masters A.I. first will prevail on the battlefield for many years, we also believe that the nation that successfully implements A.I. principles will lead in A.I. for many years. The U.S. military intends to do just that. 

Last month, I was in Brussels for meetings with our NATO and European Union counterparts to discuss the prospects for A.I. partnerships. My conversations with our allies and partners in Europe revealed that we have much in common regarding principles related to the ethical and safe use of A.I.-enabled capabilities in military operations.

This runs in stark contrast to Russia and China, whose use of A.I. technology for military purposes raises serious concerns about human rights, ethics, and international norms. Conversely, the U.S. system of democratic values and transparency, which led us to the development of the DOD's A.I. ethics principles, provides a framework for likeminded nations to follow as they look to develop their own A.I. principles.

With the adoption of A.I. ethics – ethics principles today, the U.S. is forging a path to increase dialogue and cooperation abroad to include the goal of advancing interoperability with key allies and partners. Our interest in A.I. partnerships extends to our interagency partners, the American tech industry, academia and others.

These relationships are vital for the Department to successfully adopt A.I. at scale. We value these partnerships. In fact, we would not be where we are today without the benefit of insights from many A.I. experts in government, industry and academia, to include from some who object to DOD's use of A.I.-enabled capability.

And I especially want to thank the members of the Defense Innovation Board, or the DIB, for their work in getting us to this point. Our dialogue with all of these stakeholders will continue and we look forward to sharing best practices and lessons learned.

I followed through on my previous commitment to hire someone to lead our work on implementation of A.I. ethics principles. She has the right blend of technical, policy, legal, and ethics experience. In the coming months, she, along with the rest of our JAIC policy team, will be bringing together thought leaders from across the rest of the Department to work on the implementation of A.I. principles.

This will be a rigorous process aimed at creating a continuous feedback loop to ensure the Department remains current with emerging technology innovations in A.I. Our teams will also be developing procurement guidance, technological safeguards, organizational controls, risk mitigation strategies and training measures.

We will make these implementation measures available to the Department and develop governance standards. These are proactive and deliberate actions that will lay a strong foundation for the implementation of A.I. ethics principles while allowing for flexibility to adapt as technology evolves.

As long and as arduous as the DIB's journey has been over the past 15 to 18 months, in some ways that was the easy part. Implementing the A.I. ethics principles will be hard work. The Department's efforts over the next year will shape the DOD's future with A.I.

Our intentions are clear – we will do what it takes to ensure that the U.S. military lives up to our nation's ethical values while maintaining and strengthening America's technological advantage. The Department's adoption of A.I. ethics principles demonstrates our commitment to the American people, the men and women in uniform, and our allies and partners to be ready to deter an A.I. enabled fight and win when our nation calls.

The road ahead will bring challenges; yet I'm optimistic about the prospect of an A.I.-ready force of the future that is grounded in A.I. ethics principles. And with that, we'll turn it over to our questions. Thank you.

MR. DEASY: Right, I'll start from – from the back. Travis, will you start us off, please?

Q: Sure. Travis Tritten with Bloomberg. Thanks for doing this. You mentioned the tech industry. I'm wondering if you think adopting these ethical guidelines are going to ease some of the concerns in the tech industry about working on A.I. applications for the military and create some space for collaboration? And if so, how is that going to benefit the work you're trying to do right now?

GEN. SHANAHAN: Yeah, I – we would be doing these A.I. ethics principles regardless of – of the angst in the tech industry and – and sometimes I think the angst is a little hyped but we do have people who have serious concerns about working with the Department of Defense.

However, we do see this as a unique opportunity to work with the tech industry and academia on a set of principles. I think that we'll find we have far more in common than we do differences. In fact, the – the – the person I chose to lead our ethics implementation plan was out on the West Coast recently with some other members of – of the JAIC and had some discussions with some of the biggest companies in industry – I – I won't name them but I'll tell you there was a thirst for having this discussion.

First of all, they unanimously praised the DIB's work and they thought it was a very, very good piece of work but what the team also found in talking with the big companies is nobody is very far along in – in this – in this area of ethics implementation. There's been a lot of great talk about it but everybody finds there's some challenges when you actually take principles and apply them to every aspect of the A.I. fielding pipeline.

So I actually would like to believe this is an excellent conversation starter with the tech industry, and I hope it shows that we have more in common than most people might suspect from hearing some of the stories about what we did in the past with Project Maven and so on. So it's a good scene-setter, the way I look at it.

STAFF: Sir, will you take the next question?

Q: (inaudible) What does this – so you've been working with A.I. before, obviously, but what does this mean for the average person working in A.I., now that you're implementing these policies? What kind of changes are they going to be seeing?

GEN. SHANAHAN: Well, let me – let me start with that. I think for everybody, the real hard part of this is taking the A.I. delivery pipeline and understanding where those ethics principles need to be applied, and it's going to be everything, I believe, from, where does your data come from? What does the data look like? Is the data representative – a very small sample size, as opposed to the very diverse set of data that would be necessary to develop a high-performing algorithm – all the way through things like test and evaluation. What do we need to do during T&E (test and evaluation) and V&V (validation and verification) to show that we're meeting, in this case, I would say the reliable piece of the five principles? Reliable, to me, is about T&E and V&V, and I could run you through every one of the five principles and find a place where people would understand where they fit in.

But probably more important than anything else is that everybody will now understand that when the department is going to field an A.I. capability, it will be representative of the five ethics principles, as opposed to just going out and trying something, maybe doing some T&E, but not applying the other four principles that are associated with it.

MR. DEASY: Yeah, I would – I would say when we created the JAIC, from day one we said, "Look, we've got to put some real rigor in about, what do we mean when we say we want to field an A.I. solution?" And so we've created this joint foundation, which is about technology and tools that people can use. But one of the things we've always said throughout this is we always knew that if we got the A.I. principles right, which I absolutely believe we've gotten right here, this'll bring even more structure and discipline to how we go about building solutions, fielding solutions, and the ongoing operation of those solutions. So I think this is additive to how the average A.I. technician might see their job, in terms of bringing more discipline to the process.

GEN. SHANAHAN: And if I might just go back to one thing you said earlier and tie two things together – sometimes the speed to market in the tech industry matters more than anything else. That's not the case at the Department of Defense. We will move as fast as we can, but while abiding by these five principles. I think that's very important.
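To make the general's point about the reliable principle concrete, here is a minimal sketch of what a pre-fielding T&E/V&V gate could look like in code. It is illustrative only – the metrics, thresholds, and names are assumptions for this example, not DOD specifications.

```python
# Minimal sketch of a pre-fielding "reliable" gate: an algorithm is cleared
# for fielding only if it meets pre-declared T&E thresholds on a held-out
# evaluation set. All metrics and thresholds here are illustrative.

from dataclasses import dataclass

@dataclass
class EvalReport:
    accuracy: float            # overall performance on held-out data
    false_positive_rate: float
    samples_evaluated: int     # guards against tiny, unrepresentative test sets

def clears_reliability_gate(report: EvalReport,
                            min_accuracy: float = 0.95,
                            max_fpr: float = 0.01,
                            min_samples: int = 10_000) -> bool:
    """Return True only if every declared T&E criterion is met."""
    return (report.samples_evaluated >= min_samples
            and report.accuracy >= min_accuracy
            and report.false_positive_rate <= max_fpr)

report = EvalReport(accuracy=0.97, false_positive_rate=0.004, samples_evaluated=25_000)
print("cleared for fielding:", clears_reliability_gate(report))
```

The design point mirrors what is said above: the gate is declared before testing begins, and a capability that fails any criterion simply does not field.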

STAFF: Ma'am, go ahead.

Q: Thanks. Courtney Kube with NBC News. So General, I want to ask you about something that you said in your opening, that the nation that masters A.I. first will prevail on the battlefield. Do you think China has mastered A.I. first, and are they – does that give them a technological advantage over the (inaudible)?

GEN. SHANAHAN: No, no, I do not – I do not think that China has mastered A.I. We know a lot about China. Some of what we know is on the open side; some is on the classified side. I'll tell you, what we know is their intent is to move fast and aggressively, with high levels of investment and extraordinarily large numbers of people, to advance A.I. They're running into the same challenges in adopting A.I. as any country in the world. Their data problems are representative of our data problems, just different problem sets.

So I give them credit for having the intention, from the Chairman Xi level on down to sort of the prefecture level – a nationwide intent to lead in A.I. by 2030. That doesn't mean they have mastered A.I., and I suggest they have not. The United States has very deep structural advantages in everything from, say, hardware and microelectronics to our academic institutions to our talent and our ability to innovate and be agile.

However, we can't afford to slow down, either. Those structural advantages will not be in place forever. We have to keep at this and move very aggressively ourselves. I do not believe, sitting here in this room this afternoon, that China or Russia are having any sort of conversation like we're having today – that the PLA or the Russian military would be on a stage in front of the media talking about the importance of A.I. and these principles. So what I worry about with both countries is that they move so fast that they're not adhering to what we would say are mandatory principles of A.I. adoption and integration.

Q: Can you just give us a sense – I'm still unclear about, like, the practical implication, the way that this is practically going to be applied to A.I. So one big thing I think of with A.I. is facial recognition, right? We know a little bit about how China might apply that, but what would be the difference in how the U.S. would apply it, given these ethical principles?

GEN. SHANAHAN: Well, let me give you a two-part answer to that. First of all, the idea of test and evaluation and validation and verification. I would suggest that some authoritarian nations are less concerned about high performance of algorithms, and more of just, "OK, we will make some mistakes and accept those, and move on." We will not field an algorithm until we are convinced it meets our level of performance and our standard, and if we don't believe it's going to be used in a safe and ethical manner, we won't field it.

Another one of the principles, the first principle that we're adopting is the idea of responsible. So translate the word responsible – is accountable. We have a strong track record in the Department of Defense of holding people accountable for mistakes that happen in a battlefield situation. So part of responsibility is determining who in their A.I. fielding pipeline is held accountable for a product that might have mistakes once it's fielded. 

MR. DEASY: You know – yeah.

GEN. SHANAHAN: I think those are two that come to mind.

MR. DEASY: I think one I'd add is this one around equitable. So everybody knows that one of the successes with A.I. is being able to accumulate mass quantities of data. The more data you feed an A.I. algorithm, the more you start to get to solutions that then allow you to make decisions. But we also know you need to be very, very thoughtful about where that data is coming from, what the genesis of that data was, and how that data was previously being used. Otherwise you can end up in a state of unintended bias, and therefore end up developing a solution that creates an algorithmic outcome that's different from what you were actually intending. How much time they will spend truly trying to appreciate the sort of data they've collected, where it came from, and what bias may be embedded in it is questionable.
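As one illustration of the equitable concern Mr. Deasy describes, here is a minimal sketch of a pre-training data check that flags under- or over-represented subgroups. The group names, expected shares, and tolerance are hypothetical.

```python
# Minimal sketch of an "equitable" data check: before training, compare how
# each subgroup is represented in the dataset against a reference share, and
# flag any group whose share diverges beyond a tolerance. Groups, shares,
# and the tolerance are illustrative, not a DOD specification.

from collections import Counter

def representation_gaps(labels: list[str],
                        expected_shares: dict[str, float],
                        tolerance: float = 0.05) -> dict[str, float]:
    """Return subgroups whose observed share deviates from the expected share."""
    counts = Counter(labels)
    total = sum(counts.values())
    gaps = {}
    for group, expected in expected_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = observed - expected
    return gaps

labels = ["urban"] * 800 + ["maritime"] * 150 + ["desert"] * 50
expected = {"urban": 0.4, "maritime": 0.3, "desert": 0.3}
print(representation_gaps(labels, expected))  # flags over/under-represented groups
```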

STAFF: Sidney, will you take our next question, please?

Q: Thank you. Gentlemen, thanks, first, for (inaudible). The other question is about the principles – you know, there's been a discussion about them. As they are, they're – you know, on (inaudible) here, admirable, but almost unbearably vague in application, which I think is driving all the questions from my colleagues. In DOD, it's all about the implementation guidance, about the big, thick documents that people have to really, actually go through to check with (inaudible). How are you getting, with this new employee whom we'd love you to name, from these very broad principles and goodwill to specific, actionable guidance for different people in the department and different organizations at different steps of that pipeline you've mentioned?

MR. DEASY: Yeah, I'll – I'll take that one to start with. So I – I would say that we're actually in step two. Step one was the – the signing of the memo that took place. Step two was now starting communications. It's really important that we get out and actively communicate what we mean by this. The very question that you raised is a very appropriate one. So what do we mean by actually implementing this?

So the way we've done this is we've stood up a DOD A.I. executive steering committee. We already knew that these principles were coming, so there's a subgroup underneath that A.I. executive committee that will deal specifically with this. What is the "this" that we are talking about? There is how you actually bring data in – what are the questions we need to ask ourselves? Like the whole example I just gave you on conscious bias or unintended bias.

Then, how do you actually develop a solution, and in that development process, what principles will we use as someone actually builds the application or the algorithm? Then there's the testing of that.

What are the principles that you have to apply – of those five we just outlined – around how you are actually going to test to see if you're getting to results that are, for example, within the parameters of success?

What would the human-in-the-loop intervention need to look like when you go to deploy this, for example, from an operational standpoint? And then the operator needs to be trained as these A.I. solutions get handed over to the field.

How do we need to train them, and what do we need to teach them to look for to see if there's something about (inaudible) that's not operating correctly? Those will be examples of very actionable things that we will have to do across the board within each of the services, and with the combatant commands as well.
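A minimal sketch of the human-in-the-loop intervention Mr. Deasy raises: the model acts autonomously only on routine, high-confidence cases, and everything else is routed to a trained operator. The threshold, action names, and routing function are hypothetical, not drawn from any DOD system.

```python
# Minimal sketch of a human-in-the-loop deployment gate: low-confidence or
# consequential calls are routed to a trained operator; only routine,
# high-confidence cases proceed automatically. All names are illustrative.

from typing import Callable

CONSEQUENTIAL = frozenset({"ground_aircraft"})   # always needs a human decision

def decide(action: str, confidence: float,
           operator_review: Callable[[str, float], str],
           auto_threshold: float = 0.99) -> str:
    if action in CONSEQUENTIAL or confidence < auto_threshold:
        return operator_review(action, confidence)   # human decides
    return action                                    # routine, high-confidence case

def operator_review(action: str, confidence: float) -> str:
    print(f"routed to operator: {action} (confidence {confidence:.2f})")
    return "operator decision"

print(decide("flag_for_maintenance", 0.92, operator_review))
```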

GEN. SHANAHAN: And, Sidney, I'll probably tack a couple of things on to that. There was a reason I said, in sort of the concluding paragraph of my opening remarks, that as hard as it was, as challenging as it was, for the DIB to do the 15-month study – in some ways, that was the easy part. We're about to embark on the really challenging part.

If it was easy, it would have been done by now. And you're right, they're broad principles, but they're broad principles for a reason. As everybody in this room who writes on tech understands – tech advances, tech evolves – the last thing we wanted to do was put handcuffs on the department and say what you could not do. So the principles now have to be translated into implementation guidance.

And in addition to what Mr. Deasy said about the executive steering committee, there is now a subcommittee – the Responsible Use of A.I. Subcommittee. This will bring in people from across the entire department, especially those interested in research and engineering, test and evaluation, DOD (inaudible) and the services and others, to really learn what it takes to implement the principles as currently written.

I almost look at these as confidence-building measures. We're on the ground floor. If there was one thing I would emphasize more than anything, it's that we're on the ground floor. We're not 15 years into adoption of A.I. on the fielding side. We've done the research for five decades.

But on the fielding side, I believe we can say we're on the ground floor, and we're having these conversations with the tech industry, academia, and others so that we work together on common principles – which includes our allies and partners as we look to field this.

So yes, there's an awful lot of work ahead. We've started some initial steps, such as model cards – we took that from industry – and a data card that shows the provenance of the data, a human- and machine-readable card that suggests what the data can and should not be used for.

We're looking at some non-obligatory language in contracts that would ask industry: can you abide by these principles? What would it look like to develop an algorithm or a model in accordance with these principles? Not binding, but at least having these conversations with industry right now.
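As an illustration of the data-card idea the general references, here is a minimal sketch of a human- and machine-readable record of a dataset's provenance and permitted uses. The field names and values are hypothetical, not a DOD format.

```python
# Minimal sketch of a "data card": a record of where a dataset came from and
# what it can and should not be used for, kept in a form both humans and
# machines can read. All fields and values here are illustrative.

import json
from dataclasses import dataclass, field, asdict

@dataclass
class DataCard:
    name: str
    source: str                    # provenance: where the data came from
    collected: str                 # collection period
    intended_uses: list[str] = field(default_factory=list)
    prohibited_uses: list[str] = field(default_factory=list)
    known_gaps: list[str] = field(default_factory=list)  # under-represented conditions

card = DataCard(
    name="overhead-imagery-v1",
    source="declassified archival imagery",
    collected="2015-2018",
    intended_uses=["object detection research"],
    prohibited_uses=["facial recognition"],
    known_gaps=["few night-time samples"],
)
print(json.dumps(asdict(card), indent=2))  # the machine-readable form
```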

Q: Is there a timeline for when (inaudible) those things out?

MR. DEASY: Yes. I mean, one of the things I've tried to share across the department is, if you just look at what it takes to put an A.I. solution out into the field, it is a highly iterative process. You bring data in, you test that against the algorithm, you then get results. You then iterate this loop.

When you think about those five principles, we're going to be embedding now those five principles and all those steps I talked about, how we bring data in, how we develop, how we test. Every time we roll something out, we're going to learn something further. We're going to have that – kind of that ah-ha moment and go OK, this has to be adjusted. 

This principle now brings to light this problem we haven't thought through. And so there is no such thing as an end state. I would say that we will continue to always use these principles as we continue this journey towards what I like to call bringing A.I. alive at scale and at the right speed across the Department of Defense.
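A minimal sketch of the iterative loop Mr. Deasy describes, with a principle check gating each step and lessons fed back into the next pass. Every function here is an illustrative stub, not a DOD interface.

```python
# Minimal sketch of the iterative fielding loop: bring data in, develop,
# test, field, then fold lessons learned back in, with a principle check
# gating each step. All functions and thresholds are illustrative stubs.

def provenance_known(data) -> bool:
    return all("source" in record for record in data)    # responsible/equitable

def develop(data):
    return {"model": "v1", "trained_on": len(data)}      # traceable: record lineage

def meets_thresholds(model) -> bool:
    return model["trained_on"] >= 2                      # stand-in for a real T&E gate

def field(model):
    print(f"fielding {model['model']} with a disengage path (governable)")

def iterate(data, rounds=2):
    for _ in range(rounds):
        assert provenance_known(data), "untraceable data: stop the pipeline"
        model = develop(data)
        assert meets_thresholds(model), "reliable: failed T&E/V&V"
        field(model)
        data = data + [{"source": "lessons-learned"}]    # the feedback loop

iterate([{"source": "sensor-a"}, {"source": "sensor-b"}])
```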

Q: OK, a follow up on ... 

STAFF: Sir, I'm – I'm actually going to just move to the next person. I'll get to you, Travis, in just a moment.

Q: Thanks. Patrick Tucker from Defense One. So the original publication of the Defense Innovation Board's principles and guidelines included a supporting document that runs to almost 80 pages, which includes a bunch of appendixes that go into more detail about how to actually implement this stuff.

Is that being formally adopted as well or is that at the discretion of this steering committee to implement?

GEN. SHANAHAN: Yes, at the discretion of the steering committee – I wouldn't use the term "formally adopted," but it's a tremendous starting point that gives us all the work of the 15 months the DIB spent putting it together.

Q: OK. And can you go into – like, elaborate on how you're working with the interagency to help – like, for instance, ODNI as they try to develop A.I. principles as well? Are they sort of following on what's happening here or how is that working?

MR. DEASY: Yeah, I will start – when we were getting ready to roll these out, there is an interagency A.I. group that comes together. We sat down, we walked through that. Many of them made the comment that these are so foundationally applicable to other agencies across the government that I think we've actually helped create a blueprint that other agencies will be able to use as they roll out their appropriate adoptions of A.I. ethics.

GEN. SHANAHAN: Yeah, that's the A.I. Select Committee and the IC is represented in that. I have a close relationship with Dean Soleylus on the AIM (Augmenting Intelligence using Machines) initiative or the innovation hub. I – as – as they go forward with sort of principles, you won't find much difference. In fact, as the White House looks to release some A.I. principles this summer, you will find strong similarities between our – our sets of documents, not surprisingly.

STAFF: Ma'am, will you go ahead please?

Q: Yeah, thank you. Sandra Erwin with Space News. Following up on that question on the Intelligence Community, some of the agencies use A.I. extensively in support of – of DOD so how does that work? Do they have to follow DOD guidelines or will they have to follow whatever guidelines ODNI adopts?

GEN. SHANAHAN: There will be nothing, when we're working with somebody else from across the interagency, that would be sort of a line you shall not cross. These are principles; we will discuss them as we're looking to bring in a certain technology from commercial industry or academia. And that's largely what my mission, at least in the JAIC, is – to bring in capabilities that exist in academia and industry today that are considered state of the art, and then getting them to the field, in (inaudible) of it.

But in these discussions across the interagency – again, I go back to what the White House was working on. Those will be coordinated across the entire United States government. And when you compare the – their eventual document with ours, the words won't be precisely the same, but the spirit and intent will be the same.

And so I – I just don't think it's going to be that hard to work across the interagency. I think the important part is bringing people together to have the discussion about what DOD is doing versus what some other part of the United States government is doing.

Q: And when are these discussions going to be taking place?

GEN. SHANAHAN: As we get the subcommittee up and running – and pretty much we're already ongoing. Even though we didn't have a formal executive steering group established, we've been having these discussions. We've been working throughout the entire interagency.

For us, the relationship we have with GSA (General Services Administration) A.I. Center of Excellence has been a very strong one, so some of these discussions have been going on already with them and the White House.

MR. DEASY: Yeah, you should note that even though the Secretary's letter was issued today, this conversation – and knowing the steps that we would need to take – has been well underway already between General Shanahan and myself, the entire JAIC team, and the services.

So it's not like we're starting from ground zero today.

STAFF: Justin, go ahead.

Q: Hey, thanks. Justin Doubleday with Inside Defense. Just really quickly following up, and I have another question: who's on the steering committee, and who leads it?

GEN. SHANAHAN: The individual that I mentioned – and I don't want to give her name out right now; her name will become known not too far down the road – she leads the subcommittee. I am the director of the executive steering group, and there's a DOD A.I. working group which brings people throughout the entire Department together.

This Responsible Use of A.I. Subcommittee will be – I think even a little broader. We're bringing in some of the people from across the interagency to have – to have these discussions because nobody has solved this yet. 

I go back to what I said earlier about industry, they're – they're wrestling with the very same challenges of how to take very broad documents that talk about principles and turn them into actionable items.

Q: Got it. And then – so some organizations have recommended that DOD update its lethal autonomous weapons systems directive, DOD Directive 3000.09. Do you see a need to review that and potentially update it right now?

GEN. SHANAHAN: Let me just clarify – the title is "Autonomy in Weapon Systems"; that's DOD Directive 3000.09. Those discussions are ongoing. That document was written really in a pre-A.I. time. A lot of it applies to A.I. capabilities, but we tend to conflate A.I. and autonomy, as opposed to A.I.-enabled autonomous systems.

As we build a directive that applies for A.I. across the Department, we're going to look at how do we bring those – those principles in and – and that guidance into maybe another document.

Q: (Inaudible) committee is reviewing?

GEN. SHANAHAN: We will – we will talk about that through the executive steering group.

Q: OK.

STAFF: Zach, go ahead.

Q: Zach Biggs with the Center for Public Integrity. So I wanted to ask about this sort of responsible, accountable question. Obviously there's systems that are currently being developed. By 2021, you're looking at your first lethal application of A.I. as part of the JAIC's process.

It's iterative, I grant, but right now, what do you view as responsible meaning in terms of a Commander? Is it accountable, the Commander is responsible? What are you viewing as that obligation? And I – granting (inaudible), right now, when you're developing systems, what does responsible A.I. mean?

GEN. SHANAHAN: That is part of what we need to work through. When we look at everything from 3000.09 to the capabilities we're talking about, we do not have capabilities being fielded right now that have us at the point in the Department where we have anything close to resembling lethal autonomous weapons systems.

By being on the ground floor and doing tabletop exercises and working through some scenarios, we begin to draw out some of those points in the development process where we have to question who is held responsible, all the way from what it looks like on the software development side of the house to the fielding side.

Because I've spent 35 and a half years in uniform, I come from a place where we hold people accountable in battlefield operations. That's not going to change, whether or not A.I.-enabled capabilities are on the battlefield.

We will still adhere to those principles. We will investigate – we have a long track record in the Department of Defense of investigating when things go wrong and holding somebody accountable. What we don't have a full appreciation of, which gets to your point, is where in an A.I. fielding process do all of those come into play? That's part of what we're going to be addressing through this subcommittee.

MR. DEASY: Yeah, and part of responsible depends on where you are in the A.I. cycle. So if you're in the development part, part of that responsibility is where you're bringing your data in from. If you're on the test side, part of your responsibility is, what's the upper and lower range? Let's say an A.I. solution is supposed to give you a predictable result – are you staying within that?

And then it depends on – when we say fielded, who's it being fielded to? Who are the operators, versus who's the decision maker acting on what the operators are gleaning from what the system is saying? So this question of responsibility is a really good one.

Part of why we have to get started is that we're going to learn, through the first few of these, how we think about responsible in each part of that phase of rollout.
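A minimal sketch of the test-side responsibility Mr. Deasy describes – declaring the upper and lower range a solution is expected to stay within, then verifying its outputs against that range. The bounds and numbers are illustrative.

```python
# Minimal sketch of a test-phase range check: declare the upper and lower
# range a solution's predictions are expected to stay within, then flag any
# output that escapes it. The bounds and values are illustrative.

def out_of_range_indices(predictions: list[float],
                         lower: float, upper: float) -> list[int]:
    """Return the indices of any predictions outside the declared range."""
    return [i for i, p in enumerate(predictions) if not lower <= p <= upper]

preds = [0.71, 0.64, 1.38, 0.59]   # e.g., predicted remaining part life, years
violations = out_of_range_indices(preds, lower=0.0, upper=1.0)
print("out-of-range predictions at indices:", violations)   # -> [2]
```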

Q: Sir, to follow up – right now, is there any sort of delay that will be caused in terms of fielding so that you can sort out the responsibility question, or ...

GEN. SHANAHAN: No. The department's moving forward with development of capabilities. We will do this commensurate with development of capabilities. And let me go back to something you said earlier, and it's a good question – if I go back to my time in Maven, there are still humans all over the loop in Maven capabilities.

So we're sort of doing this incrementally. You acknowledged the evolution of technology, and that's going to be the case with A.I. There's going to be human, human-machine, and at some point machine-to-machine.

Machine-to-machine at a more advanced capability, as opposed to sort of the standard command and control machine-to-machine that goes on today. At every step of the way, we'll be learning those lessons – back to what Mr. Deasy said earlier about a process by which we will continually review and assess how we did in previous fielded capabilities, learn from that, and then adapt.

Again, principles are broad for a reason. We're going to apply those and then get more specific as we learn through the fielding process.

MR. DEASY: So think about the fact that the JAIC is just now a little over a year old. We actually stood up the Defense Innovation Board part of this work way back when the JAIC was being stood up.

And one of the reasons why I did that was I knew that someday we would be running into this question of how we smartly build in all of these principles we've talked about. And so this was one of the things I saw very early on that we had to get done for us to be able to start doing more and more A.I. rollout at scale.

I would say that's the very responsible view, versus pushing out a bunch of A.I. solutions and then suddenly waking up one day and saying we haven't addressed the ethics. We've worked on the ethics from the very start of the process of standing up the JAIC.

STAFF: OK. I'm going to go to you.

Q: Hi. Lauren Williams with Federal Computer Week. General, earlier you mentioned including non-obligatory language in contracts – for, I guess it would be, defense contractors. How does the DOD plan to enforce that – to assess whether these companies are abiding by these principles?

GEN. SHANAHAN: Yes. Thanks. I would say I was not suggesting enforcement at the beginning of it. These are early conversations to be had with our industry partners to say, now that we have established these five principles for A.I. ethics, could you develop capabilities that address each of the five at some point along the way?

And maybe it's not all five. It could be sort of on the reliable piece of it, or more on the responsible piece, which we have been doing very deliberately through my Project Maven days; and now some of our early JAIC work is really focused on what T&E and validation and verification look like.

So I'm not suggesting that we would go out and put in language that has to be enforced – it's far too early in the process for that. But what I'm really excited about is the opportunity to speak with our industry partners, just to have this conversation.

And it turns out the biggest companies of the world, the start-ups of the world, want to have this conversation. It's in their best interest to be developing capabilities in accordance with these principles as well. They may just have a slightly different flavor in their own company.

MR. DEASY: I'd actually say that, with these principles now being rolled out, think about what it gives the marketplace. It gives them access to how we think, what's important to us, how we want to drive A.I. I think this will actually stimulate the marketplace to now start to create technology to solve very specific problems, if you look across each of those five principles.

Some of those principles will need some unique technology solutions to help bring those to life.

STAFF: Sir, go ahead.

Q: Jackson Barnett with FedScoop. To follow up on that, are you hoping this is going to be a document that leads the private sector in developing their own principles? And say, for example, a company were contracting, you know, A.I. as a service or a similar type of mechanism – would that company be required, or in some way kind of propelled, to adopt principles similar to (inaudible) out here today?

MR. DEASY: Yes, I'll start that by saying, having been on both sides, there is nothing in these principles, as you read them, that is uniquely and only specific to the DOD. Any one of these is absolutely applicable to private industry as well.

Now, am I trying to suggest that we are going to be the leaders in driving this out in the corporate world? No, the corporate world will pick up on that and deal with it in the appropriate way, but I think these are very applicable to private industry.

GEN. SHANAHAN: And the only thing I would add is, we would not say that DOD has all the answers. We have some answers, and we want to understand what else is out there. So, to what Mr. Deasy said, we're going to learn as much from industry and academia; we're just proud of the fact that the Secretary of Defense has put his signature on this memo and we're tasked to implement it.

Q: If I could – if I could clarify one thing: is this a straight copy and paste from what the DIB submitted? Were there any changes in the draft from what they submitted?

MR. DEASY: Go ahead.

GEN. SHANAHAN: There are some changes but let me say right off the top that if you were to compare what the DIB submitted and what the Secretary signed, there's clear intent that both versions are entirely consistent.

Where you'll see some differences is that where the DIB document had "should," the Secretary of Defense wrote "will." So it is actionable – you will go do this as the Department of Defense. I think that makes the document stronger.

There's a few other places where, as they were recommendations from the DIB, the Secretary of Defense had to turn around and write them in a formal memo to the rest of the Department. So the lawyers had to make sure the language was appropriate in all – in all cases.

There are some other places where it was even a little bit stricter, I think, on the DOD side. In one of them – I think it was on the traceable or equitable side – the DIB version talked about this applying to the people with the "relevant technical expertise"; now it just says "relevant personnel." So it's even broader – anybody who touches this will have some responsibility.

But the five principles themselves are identical. The words are a little bit different in some places, but largely the same, and the spirit and intent entirely consistent.

STAFF: I'm going to go to the back of the room but, Travis, I think you had a follow up?

Q: Yeah, I wanted to follow up on – and just to be crystal clear, I haven't seen the language, so I apologize if it's explicit in there. The Defense Innovation Board, when they made these recommendations last year, really underscored this idea of having a human (inaudible) that could shut down these systems if something goes wrong.

Is that preserved in there and how important is that element in and of itself?

GEN. SHANAHAN: It is – it is preserved. In some ways, it's even a little bit broader than it was in the DIB language. It now says – at the end of the sentence – "possessing the ability to detect and avoid unintended consequences, and to disengage or deactivate deployed systems that demonstrate unintended behavior." That could be either human or automated ways of doing that.

So again, the language is consistent between the two, this just is a little bit broader applicability to how that would actually happen. It doesn't specify human or automated ways of disengagement, it just says there has to be a way to disengage or deactivate deployed systems that demonstrate unintended behavior.
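To make the quoted governable language concrete, here is a minimal sketch of a deployed system watched for unintended behavior, with both the human and the automated disengagement path the general describes. The signals, thresholds, and class names are hypothetical.

```python
# Minimal sketch of the "governable" principle: a deployed system can be
# disengaged either by an operator or automatically when it demonstrates
# unintended behavior. Signals and thresholds are illustrative only.

class DeployedSystem:
    def __init__(self):
        self.engaged = True

    def disengage(self, reason: str):
        self.engaged = False
        print(f"system deactivated: {reason}")

def monitor(system: DeployedSystem, anomaly_score: float,
            auto_threshold: float = 0.9, operator_halt: bool = False):
    if operator_halt:
        system.disengage("operator-directed halt")          # human path
    elif anomaly_score > auto_threshold:
        system.disengage("unintended behavior detected")    # automated path

deployed = DeployedSystem()
monitor(deployed, anomaly_score=0.95)   # trips the automated disengage
```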

MR. DEASY: And you should note that it (inaudible) the case that the DIB wrote these, handed them over to us, and then we went alone at it. We've had the DIB in for, I don't know, a multitude of conversations – some really good, healthy discussions and debates – working through what they came up with and sharing with them how we were going to enhance them, change some wording so it made more sense in terms of how the Department of Defense talks.

STAFF: Sir, you've had your hand up. Go ahead, please.

Q: Matt Beinart, Defense Daily. I was just wondering about the JAIC's current national mission initiatives. Any updates you're able to provide in terms of how far along those are? I know, like, the Blackhawk maintenance tool, disaster relief – and then, just any new NMIs that have started?

GEN. SHANAHAN: Yeah, I don't want to get into a lot of details in this setting, but we are making progress on our cyber defense mission initiative, as well as a warfighter health one, which is still one of our earlier ones. We've assembled a very capable team led by, actually, a Navy O-6, an active-duty cardiothoracic surgeon, who wrote a War College paper on A.I. for (medicine ?), so I think we've got a (unicorn ?) leading our warfighter health initiative, and all of the surgeons general from the rest of the services have been happy to provide body (assistance ?) because they don't have the capacity to do it.

And then broadly, one I talked about, called joint warfighting, looking at different capabilities having to do with terrestrial surveillance capabilities – sort of small UASs for perception, cognitive assistance and so on.

And then the one that we're really sort of picking up steam on, that Mr. Deasy's very familiar with, is what I would call intelligent business automation or process automation. Those are the places where we're going to find near-term return on investment. They're not pure A.I. capabilities, but I've got a superstar leading that project who's really working across all of the services and has a thirst for getting that fielded very quickly.

MR. DEASY: I think one of the ones I'm most excited about is the one where, every time we meet, I always ask, "How is it going?" And that is the Joint Common Foundation. I've always said the JAIC's success lies in its ability to scale at speed by creating repeatable processes, technology tools, and libraries. And so there's this whole foundation being built that is starting to pick up some real momentum.

And this is what the services will be using as they start to build and deploy their solutions.

STAFF: Next question? Ma'am, go ahead.

Q: (Inaudible) National Defense. A lot of the conversation was about how this would affect industry and whatnot, but what kind of role did they play in actually drafting the principles? I mean, what kind of feedback did you get?

GEN. SHANAHAN: Yeah, on the principles – the report that the DIB wrote – they engaged industry throughout the entire 15-month period. They had very open sessions, bringing in academic expertise and industry experts, open to the public to come in. And as I said in my remarks, people that objected, that did not like the fact that DOD is using A.I.-enabled capabilities, had a chance to come to the table and express why they didn't like it. So industry was infused throughout.

In our conversations – and that goes back to something Mr. Deasy said earlier – even before the publication of the DIB's report, we were having conversations with industry but they were a little broader because we didn't have the principle set yet.

We just wanted to know, if we are going to field a capability – and I always go back to the one that we call reliable; that's the one, I mean, we put the most time and thought into, in Project Maven and the beginning of the JAIC – how do I know that this will perform as advertised once fielded? That's the test and evaluation, validation and verification.

And even when it's a capability that's fielded, there has to be real-time feedback on how it's performing in the real world, as opposed to against the data sets that were representative of the real world – things happen differently when you get into those situations. So then we have to update the test and evaluation as we put new models out the door.
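A minimal sketch of the real-time feedback the general describes: compare the data a fielded model is seeing against the evaluation data it was tested on, and flag drift that should trigger a fresh round of T&E. The statistic and threshold are illustrative choices, not a DOD method.

```python
# Minimal sketch of a fielded-model drift check: if the mean of what the
# model sees in the field sits far outside the evaluation distribution,
# flag it for re-testing. The rough z-style statistic and the threshold
# are illustrative, not a prescribed method.

from statistics import mean, stdev

def drifted(eval_sample: list[float], field_sample: list[float],
            z_threshold: float = 3.0) -> bool:
    """Flag drift if the fielded mean falls far outside the eval distribution."""
    mu, sigma = mean(eval_sample), stdev(eval_sample)
    standard_error = sigma / len(field_sample) ** 0.5
    return abs(mean(field_sample) - mu) > z_threshold * standard_error

eval_scores = [0.10, 0.20, 0.15, 0.12, 0.18, 0.14, 0.16, 0.11]
field_scores = [0.42, 0.39, 0.47, 0.44]
print("re-test needed:", drifted(eval_scores, field_scores))   # -> True
```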

So industry has been in this conversation throughout, to include the visit that I mentioned – our team went out to the West Coast last week, and industry was very, very eager to have this conversation. Even some that have some reluctance about things like Project Maven are enthusiastically coming to the table to talk about the A.I. ethics principles, so it's really a conversation everybody wants to have. It's a way to get ourselves in the door more often than not.

MR. DEASY: When I meet with CEOs in industry, you know, the first question is, "Do you think we got these right? Is there anything you think we're missing?" And they immediately want to start having a conversation about what they can provide in terms of solutions now that can help us actually bring these to life.

So, you know, it's a classic model: they're starting to look at what the business opportunity is, now that they can take these principles and create solutions for us. That's what the conversations I'm having with industry are – they want to talk about how they can actually start to help us create solutions around these things.

STAFF: I think we have time for one more. Justin?

Q: I just wanted to ask – one of the interesting things about Project Maven was the fact that the controversy was – Google was able to keep its relationship with the project secret until an employee discovered it.

I know a lot of work that you guys do is classified for security reasons but to what extent can you commit to transparency in who you're working with in the commercial sector on these projects?

MR. DEASY: Maybe I'll answer the question in a slightly different way and say: if we had had the A.I. ethics principles three years ago or two and a half years ago, and our starting point with one of the big tech companies had been that – and we had been transparent about what we were trying to do and why we were trying to do it – maybe we would've had a different outcome.

You've heard me talk before – there were lessons learned on both sides about how all that played out. But if we could at least have had a starting point of: the Department of Defense intends to abide by these A.I. ethics principles; now, with that in mind, here are the capabilities that we're looking for – then we start the conversation there.

We didn't know that at the time. It was brand new to the Department; we were trying to accelerate what we were doing; we knew some core principles were in play. We just didn't have them defined in the very nice way that the DIB has done and that the Secretary of Defense has now promulgated to the rest of the Department.

But I think there would have been a different conversation if we had had the principles two and a half or three years ago, as opposed to where we are today. On the other hand, I think everybody's trying to work hard at improving their relationships with the Department of Defense.

STAFF: OK. All right, ladies and gentlemen, that'll conclude our press conference today. There will be a transcript posted to defense.gov later in the evening, and you'll also find the live stream online. I believe that's been delayed momentarily, but it'll be up as well, so you can review both of those products too.

If you have questions, if you have follow-ups, I'm available as well. Most of you have my contact information, or Lieutenant Colonel Carver's as well. So thank you very much, and we'll see you next time.

(UNKNOWN): Thank you.

(UNKNOWN): Thank you.