
You have accessed part of a historical collection. Some of the information contained within may be outdated and links may not function. Please contact the DOD Webmaster with any questions.

Honorable Robert O. Work, Vice Chair, National Security Commission on Artificial Intelligence, and Marine Corps Lieutenant General Michael S. Groen, Director, Joint Artificial Intelligence Center Hold a Press Briefing on Artificial Intelligence

STAFF: Hey, good morning, ladies and gentlemen. Welcome to today's press conference on DoD (Department of Defense) artificial intelligence (AI). I'm Lieutenant Commander Arlo Abrahamson, and I'll be moderating today's briefing.

With us today is the Honorable Robert O. Work, vice chair of the National Security Commission on Artificial Intelligence (NSCAI), and Lieutenant General Michael Groen, the director of the DoD Joint Artificial Intelligence Center (JAIC). We'll begin this morning's briefing with an opening statement from both principals, then we'll go to questions. I plan to go out to the people on the phones. We have a few people in the room as well. Please identify your name and outlet to the principals before you ask your question.

And with that, I'll now turn it over to Mr. Work and General Groen to deliver their opening statements.

VICE CHAIR ROBERT O. WORK: Well, thank you.

And good morning, everybody, those here and also who are following online.

I'd like to start with just two overarching comments.

First, for the first time since World War II, the United States' technical predominance, which undergirds both our economic and our military competitiveness, is under severe threat from the People's Republic of China. Bill Burns, in his confirmation hearing as the director of the CIA, said that in the strategic competition with China, technology competition is the central pillar, and the AI Commission agrees totally with that.

The second broad thought is that within this technological competition, the single most important technology the United States must master is artificial intelligence and all of its associated technologies. Now, we view AI much like Thomas Edison viewed electricity. He said, "It is a field of fields. It holds the secrets which will reorganize the life of the world." Now, that may sound like a little hyperbole, but we actually believe it.

It is a new way of learning which will change everything. It will help us utilize quantum computing better. It will help us in health. It will help us in finance. It will help us in military competition. It is truly a field of fields.

So with that as background, we said, "Look, we are not organized to win this competition." We just are not. We say we're in a competition, which is a good thing -- the first thing you have to do is admit you have a problem. So Houston, we have a problem. But we have not organized ourselves to win the competition; we do not have a strategy to win the competition; and we do not have the resources to implement a strategy, even if we had one.

So the first thing is we have got to take this competition seriously and we need to win it. We need to enter it with the one single goal -- we will win this technological competition.

Now, the best way to think about this is that we are not organized now, and we need to get organized. We said that by 2025, the department and the federal government should have the foundations in place for widespread integration of AI across the federal government and particularly in DoD.

Now, there are three main building blocks to achieve this vision. First, you have to have top-down leadership. You cannot say AI is important and then let all of the agencies and subordinate departments figure out what that means. You have to have someone from the top saying "this is the vector, you will follow the vector. If you do not follow the vector, you will be penalized. If you do follow the vector, you will gain extra resources." So you have to have top-down leadership.

Now, one of the first recommendations we made concerned the JAIC, which was underneath the CIO (Chief Information Officer) and, in many ways administratively, underneath DISA (Defense Information Systems Agency). We said if you want to make AI your central technological thrust, it needs to be elevated, and we recommended that the JAIC report either to the Secretary or the Deputy Secretary. That was actually included in the NDAA (National Defense Authorization Act), and now the JAIC reports to the Deputy Secretary of Defense. That's a very good first step.

But we think the next step is to establish a steering committee on emerging technology. This would be a tri-chaired organization -- the Deputy Secretary, the Vice Chairman of the Joint Chiefs of Staff and the Principal Deputy Director of National Intelligence. They would sit and they would look at all of the technologies, they would drive the thrust towards an AI. future and they would coordinate all activities between the Intelligence Community and DoD, which is a righteous thing. They would be the ones who identify lack of resources, address that problem and also remove any bureaucratic obstacles.

The steering committee would oversee the development of a technology annex to the National Defense Strategy. The last time we had a list of technologies, there were 10 on the list. All 10 of those were very, very important, but when you have 10 things as your priorities, you have no priorities. You have to establish some type of prioritization and enforce it. The technology annex to the National Defense Strategy would do just that. Also, the department should set AI readiness performance goals by the end of this fiscal year, 2021, with an eye towards 2025, when we need to be AI ready.

So top-down leadership is the first big pillar. The second is to ensure that we have in place the resources, processes and organizations to enable AI integration into the force. Now, the commission said you need to establish a common digital ecosystem. The JAIC has established the Joint Common Foundation. There are a lot of similarities between the two, although the commission's view is a little bit broader than the Joint Common Foundation at this point. But the point is that everyone sees the necessity for an ecosystem that provides all users in the department access to software, trained models, data, computing and a secure development environment for DevSecOps.

We recommended that you designate the JAIC as the AI accelerator. We actually assessed that China is a little bit ahead of the United States in fielding applications at scale. We can catch up with them, and we believe the JAIC is the logical place in the department to be the accelerator for AI applications at scale.

The department has to increase its S&T (science and technology) spending on AI and all of R&D (research and development). We think it should be a minimum of 3.4 percent of the budget and we recommend that the department spend about $8 billion on AI R&D annually. That will allow us, we think, to cover down on all of the key research areas.

There are all sorts of specialized acquisition pathways and contracting authorities out there. We continually need to refine them because many of them are not perfectly applicable to software. And I know the JAIC is working on this, but we have to have an updated approach to the budget and oversight process for these things. So the second big pillar is ensuring you have the resources, the processes and the organizations.

And third, you have to accelerate and scale tech adoption. You really have to push this. So we recommend standing up an AI development team at every single combatant command (COCOM), with forward-deployable elements that leverage technological knowledge to develop innovative operational concepts and essentially establish a pull for AI-enabled applications that will help them accomplish their missions.

The department should prioritize adoption of commercial AI solutions, especially for all of the back-office work. There's really no reason to do a lot of research on those types of applications. The commercial industry has plenty of them. You just have to prioritize identifying the ones that can be modified for our use and bring them in as quickly as possible.

We think the department should establish a dedicated AI fund under the control of the Undersecretary of Defense for Research and Engineering (R&E), who is the Chief Technology Officer of the department. That fund would allow the Undersecretary to get small, innovative AI companies across the Valley of Death.

Now, the things that cut across all of these are talent, ethics and international partnerships. Let me talk about talent first. We think we have to have a DoD Digital Corps modeled after the Medical Corps. These are digitally savvy warriors, administrators and leaders; we just need to know who they are, we need to code them in some way and we need to make sure they're in the places that have the highest return on investment. We need to train and educate warfighters to develop core competencies in using and responsibly teaming with machine systems, understanding their limitations, understanding what they should not be asked to do, etc.

And equally, AI and other emerging technologies need to feature prominently in senior leader education and training, with a key focus on ethics, the ethical use of AI, and I'll go right into that.

We're in a competition with authoritarian regimes. Authoritarian regimes will use technology in ways that reflect their own governing principles. We already know how China wants to use AI. They want to use it for population surveillance, they want to use it to suppress minorities, they want to use it to curtail individual privacy and trample on civil liberties. That's not going to work for a democratic nation like the United States. And so this is as much a values competition as it is a technological competition.

The way Eric Schmidt, our chairman, talks about this is that these technologies are going to be deployed through platforms. Just think about how 5G worked: Huawei's 5G technologies allow a country that uses them to essentially surveil its population. So these values are very, very critical and an important part of the competition.

And finally, we're not going to succeed if we do it alone. This is central to U.S. defense strategy. So we have to promote AI interoperability and the adoption of emerging technologies among our allies and our partners.

We are absolutely confident, as a commission, we can win this competition. But we will not win it if we do not organize ourselves and have a strategy and have resources for the strategy and a means by which to implement the strategy and make sure that everyone is doing their part.

Thank you.

LIEUTENANT GENERAL MICHAEL S. GROEN: Good morning, everybody, and thank you very much for participating in this important session.

And first I want to say thank you to Secretary Work and the National Security Commission on AI team. Just incredible work. If you haven't read the report, I encourage you to go to the website and look at the NSCAI final report.

What you see is a deep understanding and a deep analysis, down to first principles, down to bare metal, of what it takes to integrate AI and preserve our military effectiveness. What they produced is critically important for us in the department, but it is also critically important for our national competitiveness.

In the same breath, I'd like to say thank you to Congress and department leadership, both of which clearly understand the importance and the need to innovate and modernize the way we fight and the way we do business.

And I'm happy to report, as the director of the JAIC, positive momentum toward implementation of AI at scale. We certainly have a long way to go, but you can see the needle trending positive.

With bipartisan support from Congress and great support from DoD leadership, the services are beginning to develop AI initiatives and expand operational experimentation, taking those first steps. The Defense agencies are reaching out daily to share their best practices with us and with each other.

The combatant commands, especially the combatant commanders, have caught a glimpse of what the future might look like through a series of integrative exercises. They like it and they're eager to gain these capabilities.

The JAIC is now aligned under the deputy secretary, which gives her and the rest of the department leadership access to the tools and processes to reinforce their priorities, underline our ethical foundations, integrate our enterprises and transform our business processes, and we are eagerly looking forward to that work.

Like the NSCAI, we see AI as a core tenet of defense modernization. And when I say "AI," I want to be clear, I'm not just talking about the JAIC. All AI -- the efforts of the services, the departments and the agencies -- rides on the foundations of good networks, good data services, good security and good partnerships. And an important part of the JAIC's business model is to build those as part of our AI infrastructure.

And with lots of budget work ahead -- as F.Y. '22 (fiscal year 2022) is relooked and the POM (Program Objective Memorandum) for '23 (2023) to '27 (2027) is developed -- we'll hear a lot about modern weapons systems and concepts. And it's important that we understand that the potential of those weapon systems and concepts to modernize our warfighting rides on the foundational data, the networks and the algorithms that we build to integrate and inform them.

We'll have to talk about these technical foundations and architectures in the same conversation in which we talk about platforms. Getting AI right and our secure data fabric environment right will be central to our ability to compete effectively with the Chinese and the Russians, or any modern threat for that matter. And there's more, actually.

So in an era of tightening budgets and a focus on squeezing out of the budget the things that are legacy or not important, the productivity and efficiency gains that AI can bring to the department, especially through business process transformation, actually become an economic necessity. So in a squeeze play between modernizing warfare that moves at machine speed and tighter budgets, AI is doubly necessary.

So what am I talking about when I talk about AI? As Secretary Work's comments convey, the integration of AI across the government and the Department of Defense is much more than just a facile layer of technology applied on top. It's not about shiny objects.

You've heard the phrase: amateurs study tactics and professionals study logistics. Well, in this environment, amateurs talk about applications and professionals talk about architectures and networks. And elevating the AI dialogue in the department so that we are talking about the foundations of all of our modern capabilities is a really important task, one that we're working hard on.

The core business model -- that is, what the department gives to the American people, what our mission is -- doesn't change. But a modernized, data-driven, software-heavy organization will do things in a different way. It really represents a transformation of our operating model: how do we do the things that we do as the Department of Defense?

And that operating model will have to create a common data environment, where data is shared, data is authoritative and data is available. Data feeds and algorithms across the department will create productivity gains, accelerate processes, and provide management visibility and insights into markets.

And if all of that sounds like a modern software-driven company -- think of our tech giants and the smaller innovative companies across the U.S. economy -- it's because it is. It's the same challenge. It's the same problem. And so we have examples, right? There's very little magic here.

It's about making our organization -- the Department of Defense, in this case -- as productive and efficient as any of these modern, successful, data-driven enterprises. But there's so much more, because all of this technology applies equally to our warfighting capabilities and to the broad range of supporting activities, from all the Defense agencies and other places, that make up the business of the department.

We've created positive momentum for AI and we continue to build on that now. But now comes the real critical test. As in any transformation, the hardest part is institutional change and change management of the workforce, practices and processes that drive a business. This step will not be easy, even within the Department of Defense. But it's foundational to our competitive success, our accountability and our affordability.

As the NSCAI work reveals, we have a generational opportunity here. For AI to be our future, we must act now. We need to start putting these pieces into place now. So I want to quickly describe our position through two different lenses. One is competition and the other is opportunity.

First of all, with respect to AI competition, I think it's illustrative to talk about the economic impacts of artificial intelligence as a first order. Economic forecasts predict a $16 trillion AI economy in the next 10 years. And this could amount to massive GDP increases for those who participate in this competitive AI marketplace -- as high as 26 percent for China, as high as 15 percent for the United States. And if we do participate, this core economic competitiveness of the United States then needs to be reflected in a core military competitiveness in this space as well.

It's important to note that, while we talk about a $16 trillion market in the next decade, this happens to coincide pretty closely with China's declared and often-repeated intent to be globally dominant in AI by 2030. So the transformation of our economy has to be accompanied by close attention to the emerging threats that are declaring their intention to use this as a point of competition between autocracies and democracies.

Our forces must operate with tempo, with data-driven decisions, with human-machine teaming. Our forces must have broad situational awareness and multi-domain integration. The PRC has a robust entrepreneurial AI environment. We're all familiar with Ant Financial, Alibaba, Tencent -- these are global companies.

But we're also very familiar with the artifacts of population surveillance and minority oppression under the Chinese Communist Party's rule, the things that Secretary Work talked about. We read about Beijing's large-scale tech campuses and the state-owned enterprises that create a pipeline from entrepreneurs and innovators in China, through civil-military fusion, that takes those capabilities directly into the PLA and military capabilities without intervening accountability or transparency.

Their organizational efficiency under autocratic rule -- they count that as an advantage -- is being applied directly to their AI development. And they are surging forward in capability. This has to give us pause. What does China's dominance in AI mean for us if they intend that dominance by 2030? What does that imply for us?

But we can also look through the lens of opportunity. Our best opportunities lie in American innovation. Academia and small companies are brimming with good ideas in the AI space. The number of AI companies is proliferating rapidly. We have warfighters across the department, especially young ones, who can visualize use cases in their operating environments, in the things they need to do from a military capability perspective. They're good at this. They know how to operate in a data-driven and app-based environment because they grew up that way. And they expect the same from their defense systems.

We have the best science and the best AI research available in academia inside the United States and in small companies. And we also benefit from the fact that we have a tech inversion in place, where the AI technology that we need to run our department and change our operating model exists literally right across the street. And in the modern AI-driven, data-driven companies that have survived in a very competitive market, we have lots of good examples to look at.

We also have a rock-solid ethical baseline that drives a principled approach -- that drives our test and evaluation, our verification, our validation, our policy and, in the end of the analysis, our trust in our AI systems. And I welcome your questions about that.

The good news: we have a thousand flowers blooming inside the department through the initiative of the services, the agencies and the activities of the department, and we're doing better at integrating industry technical expertise with warfighting functional expertise so that we can responsibly and responsively implement technology in the places that matter most.

We have the opportunity to drive the productivity, efficiency and effectiveness of the department to new heights, and the performers across the department -- in the JAIC, in the services and other places -- are very excited and count themselves lucky to be part of this work.

And with that, we very much look forward to your questions and appreciate your attention.

STAFF: All right, everybody. We've got about 16 or 17 reporters on the line, so if we could ask just one question at a time, I promise I will get to you for a second question if we have time.

So the first question's going to go out to Mr. Aaron Gregg from the Washington Post.

Aaron, I believe you're on the line. Go ahead.

Q: Thank you guys for doing this.

How does the enterprise cloud strategy play into all of this? Is -- is this hodgepodge that you're currently working with working for the department, and what does the strategy look like under this new administration and the new SecDef?

GEN. GROEN: So I'll take that one first.

So what we have today -- you're right -- is development environments, and pretty mature development environments, in each of the services. Some of the services have multiple development environments. And so one of the things that we have to look at is: what degree of resilience do we gain from having multiple dev environments? But also, what advantages do we gain by stitching those development environments together into a fabric?

So that is our intent, and that is what we're mapping out now. What we need is a network of development environments that shares, through a containerized process, authority to operate on networks; that shares access to data sources; that shares algorithms; and that shares even developmental tools and environments. And so this is what we're trying to construct today so that we can broaden the base of developmental work.

But on top of that, we need an operating layer and an operating network, and this is kind of the next step. Because if you take those developmental algorithms and you're going to employ them on a steady-state basis -- in a combatant command, in a warfighting situation, wherever -- then you need a network of operating platforms where you can do the same thing. And so this is the next step: as we evolve developmental platforms into a fabric, we move that up to the operational level and integrate service networks into a global network. This will give us the capability to have global situational awareness, and then to achieve the goals of what's described in JADC2 (Joint All-Domain Command and Control), which is any sensor, any shooter, any decision-maker. We're going to build that network, the data storage and the processes that make that possible, and we're going to do that as a team across the department. And the JAIC hopes to help coordinate the alliance that brings that together.

MR. WORK: I can't add to that.

STAFF: OK, we'll go to the next question, sir.

Go ahead.

Q: Hi. I'm Luis Martinez of ABC News. Just a question for both of you, please.

General, Secretary Work talked about how China is way ahead on this. In terms of what you just spoke about, worldwide awareness, China right now is really still more of a regional player trying to become a worldwide player. Does AI make that leap for them, or is the AI advantage that they have still strictly only regional?

And Mr. Work, if I could ask you about, I think the final report talked about the importance of the human element in AI. Can you talk about that, especially as some people may have concerns about, since we're here at the Pentagon, talking about how AI relates to the weaponization of that technology?

GEN. GROEN: Yeah. So thank you, Luis, for the question.

I think it's important to pay attention to what China's relationship with AI and the technology is. For example, the Chinese export autonomous systems to nations around the world, in some places that have some pretty ugly conflicts underway, with lots of human suffering and, in some cases, not a lot of world attention.

So here you are: you have a nation that's proliferating autonomous systems, with no ethical baseline, no sets of controls and no transparency, into those very dangerous, small brushfire wars that are going on in a lot of different places. So that proliferation of technology is something that we need to pay attention to.

Similarly, look at the Chinese ships underway right now, moving east. As a demonstration of capability, it shows you their willingness to push the boundaries and to be considered something more than a regional power.

So that ambition, I think, is linked to their technological ambition of AI dominance. And so we have to ask: if these things are coupled today, what does that hold for the future, in 2025 or 2030? We have to be prepared for that, and we have to be as agile and as competitive in this space as the Chinese intend to be.

MR. WORK: Luis, it's a great question, and I'd like to clarify something I said -- we do not believe China is ahead right now in AI. The way we went about it as a commission is we said, look, AI is not a single technology; it is a bundle of technologies, and we referred to it as the AI stack. The AI stack has talent -- the people who are going to use this -- data, the hardware that actually runs the algorithms, algorithms, applications and integration. And so what we tried to do is look at each of the six and ask, where does the U.S. have an advantage, and where does China have an advantage?

We believe the U.S. has an advantage in talent right now. We definitely are the global magnet for the best talent. There's a lot changing in that, and unless we're smart about our immigration policies, etc., we could lose it. But right now, we judge that we have better talent.

Second, we know we have an advantage in hardware -- the United States and the West more broadly -- and we think we have an advantage in our algorithms, although the Chinese are really pushing hard. We think they could catch up with us within five to 10 years.

Now, they have an advantage, in our view, in data. They have a lot of data, and they don't have the restrictions on privacy, etc., that we do. They have an advantage in applications -- they're very good at that -- and we think they have an advantage in integration, because they have a coherent strategy to pull the whole AI stack together to give them a national advantage. Now, because talent, hardware and algorithms are so central and important to the stack, we judge that the United States actually is ahead of China in AI technologies more broadly. But what we're seeing is that the Chinese are far more organized for a competition, have a strategy to win the competition and are putting in a lot of resources.

So as Lieutenant General Groen said, they want to be the world leader in AI technology by 2030. As soon as they say that, it means to me they recognize that they are not the world AI leader now, and they think it's going to take them about eight to 10 years to surpass the United States. That's why we say, look, we had better be in this competition full on by 2025. If we're not, then we run the risk of them surpassing us.

So I just wanted to clarify that. I wasn't saying that China is ahead of us in AI. On the second part of your question -- all you've got to do is look at what they did with Huawei to see that the way they think about becoming a global power is not by invading countries; it is by putting out technology platforms that allow their values to proliferate around the world, and that's what happened with Huawei. And the other place they're going really hog wild is global standard-setting, which is kind of in our wheelhouse -- we've been doing that since the end of World War II -- and the Chinese are actually coordinating with the Russians to set global standards in AI that favor their type of technology.

So without question, I agree with Lieutenant General Groen. The Chinese have ambitions to be a global power. They say by 2050 -- actually, it's 2049, the 100-year anniversary -- they want to have the largest economy in the world and they want to be the foremost military power in the world. That's not a future the United States should say "yeah, let's just let that happen" to. Let's compete, because we want to be the world's foremost military power and we want to be the most dynamic, innovative economy in the world.

So the Chinese definitely have global ambitions. They are a regional power now but they're really starting to move more broadly on the world stage.

STAFF: Next question goes to Sydney Freedberg from Breaking Defense. Go ahead.

Q: Hi, thank you for doing this. Sydney Freedberg, Breaking Defense here.

Let me ask a question, particularly for General Groen. Of the various recommendations in the AI Commission final report, which ones is DOD contemplating, which ones are actually concurred with -- that you're trying to put forward by yourselves or by asking Congress for legislation -- and which ones do you actually not concur with? Things like the (inaudible), like the steering committee, setting the various targets, coming up with a strategy annex and so forth. Can you go through the checklist of things the commission wants you to do that you are green light, yellow light or red light on proceeding with?

GEN. GROEN: Yeah, great question, Sydney, good morning. Really good question.

Now, the NSCAI report, if you look at it in its full breadth -- a lot of the recommendations are at the national level, a place where defense may play a part but defense might not lead. There is a subset of recommendations, on the order of 40, that we've taken a hard look at, that are military-specific and that really, by all rights, defense would lead.

So as we look at that list -- I'm sorry, it's closer to 100 recommendations -- a good number of them, about half, maybe a little bit more, we're already moving out on to a significant degree.

So in those cases, it's really just a matter for us of taking a look at the NSCAI recommendations in detail to make sure that we've considered the full scope of what might appear in one of those recommendations, and then seeing if what we're doing today aligns with those. So that's kind of one large subset -- the majority.

Then there's another set of recommendations that we've looked at but we really don't have a plan for yet. We recognize that it's a problem, but we're not quite ready to move out in that direction, just because of limited bandwidth here. So that's another subset that we're looking at.

And then there's a third subset -- those that we really have to look hard at. There are things that we hadn't thought about before, and we really need to pull the strings on the implications of those. So there's that third subset.

When you talk about which ones we agree with or don't agree with, I can't think of any that we don't agree with. The things that are most pressing, that most closely align with what we're doing today, are the ideas associated with starting to create an enterprise of capabilities, and all of the recommendations about the ethical foundations. We are all about fleshing out our ethical foundations and really integrating that into every aspect of our process. As for the recommendations about organizing with defense priorities -- that will be a subject for the department, so we as an AI community can advocate, but it's the department process that will decide what the priorities are, and we'll adhere to those priorities however they are articulated.

The recommendation about workforce development -- the family of recommendations about workforce development -- we could not agree more. So how do we have a full range of training and education environments? That includes everything from short-duration tactical training -- for example, for a coder to get onto a platform -- all the way to building service academies or building ROTC (Reserve Officer Training Corps) scholarships and that sort of thing.

So across the department, some of these recommendations, with large scale and large scope, start to supersede what just the AI community in the department does, too. So we work closely with Research and Engineering, we work closely with Personnel and Readiness and Acquisition and Sustainment, to start to form the coalitions to get after the problems that are underneath those recommendations -- to make sure that we understand them and that we are actually moving toward this new operational model for how we are going to operate as a department.

STAFF: Thank you, Sydney.

MR. WORK: Sydney, I guess the way I would answer this -- I can't really add too much more to what Lieutenant General Groen said -- is, you know, just a little while ago, Secretary of Defense Mark Esper said "AI is the number one priority for me, as the Secretary of Defense," and he went on to say "the competitor that really wins in the AI competition will have a battlefield advantage for decades."

Now, if you believe that, and I certainly do, and I believe the commission -- I would think it's a unanimous consensus -- if you really believe that, you can't keep doing what we're doing now.

I mean, the Defense Science Board said in 2014 that the one thing you've got to get right is AI and AI-enabled autonomy. So here we are, seven years later, and we're saying, "OK, if we really believe that AI is going to give a competitor an advantage for decades, are we satisfied with the progress that has happened since 2014?" And if the answer is no, then you have to say that we've got to change things up.

And, of course, people are going to say, "Hey, why would you make the undersecretary of defense for R&E the co-chair and the chief science officer of the JROC (Joint Requirements Oversight Council)? The JROC works perfectly."

Well, does every single program have a plug in it for AI -- for being able to receive data for machine learning chips? Does it have the ports to allow it to pass on information? If the answer is no, we're not doing well enough.

I think Lieutenant General -- excuse me, General Hyten, the vice chairman -- has said this very clearly. He's not satisfied with the way the JROC is functioning, and he wants to change it so it really pushes these broader, joint system-of-systems things that Lieutenant General Groen was talking about.

So from the commissioners' point of view, look, right now we do not believe we are moving as fast as we should. And if the department agrees with that general assessment, then they need to change things.

Q: So what's the most glaring deficiency you see? You've got a long list of recommendations.

Q: Kristina Anderson, AWPS News.

I wonder if you could speak to getting the data -- the secure data fabric right?

And then taking that up a notch to, kind of, the global structure of AI -- how can we think about building the structure so that security is one of its fundamental elements? That's one of the criticisms of the Internet right now: notwithstanding the tremendous benefits that we have, it was not built with security in mind.

Thank you.

GEN. GROEN: Thanks, Kristina. That's an excellent question.

And to me, that's the operative question. Because I think there's good alignment as we talk about the operational effects that we want to achieve. There's good alignment when we talk about building platforms and how we're going to integrate data and share data.

The -- the very first question we start to ask at that point is, OK, how are we going to secure this? How do we secure this environment?

And so, we have a full-court press on this. Of course, we have native cloud security, plus additional security that we've been able to add. We've got lots of cybersecurity specialists helping us look at this problem set.

But more importantly, we're trying to keep an eye on the entire research and development ecosystem. So not just from a cybersecurity perspective, but how do we deal with adversarial AI, for example? How do we deal with the purposeful intent to intervene with, interfere with, or spoof our algorithms?

So this is certainly the top priority, and probably our largest effort right now from a research and development perspective: how do we make sure that as we build this out, we squeeze out all the vulnerabilities that we can? We will never have a perfect system. We will never have a perfect Internet. But we need to protect it like we would protect any weapon system or any other critical node.

Thank you.

MR. WORK: An essential question, Kristina.

As Lieutenant General Groen said, we're moving into an era of AI competition, and poisoning data is a way to gain an advantage. We have to be able to guard against that.

We need to red team the heck out of our databases. We need to have people trying to break into the database and poison data often so that we can identify vulnerabilities and fix them. We have to have means by which to check the data.

And there are all sorts of different things here -- the commercial sector is doing this also. They're looking at how you protect the data, and how you protect your algorithms to make sure that no biases are inserted.

So, look, we don't have all the answers for this yet, but it's central to the thinking of the JAIC, I think you heard. And our AI has to be better than their AI. All you have to do is envision an AI-enabled cyberattack: if their AI is better on offense than our AI is on defense, that's going to be a bad day for us.

So, you know, constant red-teaming, constant development with DevSecOps in mind, constant testing and evaluation, validation and verification. This is our future now. It's going to be something we just have to take as a matter of course.

STAFF: Next question goes out to Tony from Bloomberg News.

Go ahead, Tony.

Q: Hi, this is Tony Capaccio.

I have a question -- an operational application question -- that I think most citizens can relate to. Next month marks the 10th anniversary of the bin Laden raid by SEAL Team Six. Conceptually, if AI had been in widespread use in 2011, how might it have been employed in planning and executing the raid? I'm thinking facial recognition, pinpointing the movements of activity in or around the compound, calculating the height of the walls and their thickness, et cetera. Can you think outside the box and give us a couple of examples of how it might have been used in that raid?

GEN. GROEN: Yes, hey, so great question, Tony.

And I think that raises -- I mean, when I look at that, remember when I said, you know, amateurs study apps, professionals study architectures? If we take any military operation -- I can't really speak to that particular event, but any military operation -- it's easy to get fixated on the applications that exist at the tactical edge.

But when you walk back a military problem, you start with those tactical warnings on the objective or near the objective. Then you back up a step, and you need to be broadly situationally aware. And you back up another step, and you need to be aware of not just the red capabilities in the red force, but you also need to know where the blue forces -- your own forces -- are, and their readiness and their availability.

We also need to understand the green forces -- those partner forces that we might have in the area -- or the white forces, the innocent civilian populations who might be in the area. So all of those kinds of situational awareness activities can be worked through AI, right? That can be done much better than a human being can do it, by leveraging AI to work on all that data.

And you start backing up even further. You talk about effects integration -- when do you get onto the objective, and how do you coordinate with an adjacent unit? How do you make sure that your fires are safe and are focused on the good targets?

Again, AI can help with the information flow that informs that decision-making.

Q: I got it.

GEN. GROEN: Back up further -- weather effects. Do we have global weather in a database that everybody can use and integrate into their applications? Do we have a threat picture that's integrated into our applications and defenses? Do we know threatening behavior -- have we modeled that? Do we use it for understanding human populations, for predictive modeling? And the list goes on and on.

And the further you go back into the institution, you're talking about modeling and simulation, platform maintenance -- preventive maintenance for helicopter platforms, for example -- integrated logistics, contingency management, fleet maintenance. Think of an electric car company that broadcasts updates to its entire fleet of vehicles.

These are the sorts of capabilities that AI brings to the department, and when you start stacking those up, you really see how it focuses. When you focus that lens on a tactical military problem, it's not just the AI at the tactical edge -- it's all of the AI that has contributed all the way back to the back offices of the Pentagon, where we're doing financial records, right, or inventory management --

Q: Right.

GEN. GROEN: -- or all of the business of defense, focused through data into that objective. So I hope that helps.

I'll just give you one other point. For almost every military activity, there's a commercial analog to that activity. Think about a large-scale online shopping network that has to deal with ordering and buying and recommending and presenting options and selecting options and delivering. Every one of those has a parallel in the military space. The AI that we integrate from commercial industry today -- technology that's readily available -- helps us do those same things with the efficiency and productivity of any large-scale, successful commercial corporation. And from a business perspective, that's exactly what we need to have.

Q: Makes sense. Can I ask a quickie?


MR. WORK: (inaudible) --

Q: The YouTube -- OK, good thank you.

MR. WORK: -- and to me, the biggest change would be our ability to look at enormous amounts of social media data, etc., to make predictive analysis, and also, make judgments.

I'm a movie aficionado, so everything I know about the bin Laden raid I learned in "Zero Dark Thirty".


MR. WORK: If "Zero Dark Thirty" is correct, what the director of the CIA, Leon Panetta, was constantly asking is, "How sure are we that he's in the compound? Before we execute a raid in another sovereign country, how sure are we?"

Well, I just go to the shootdown of the Malaysian airliner over eastern Ukraine, and we knew the Russians did it immediately through national technical means and other sources, but we didn't want to release that because of sources and methods. There was a group called Bellingcat that essentially put together the storyboard for the entire shootdown using social media. They had a picture of a TEL (transporter erector launcher) with three surface-to-air missiles on it -- a picture of it crossing the border into eastern Ukraine with the serial number on the side. They had another picture of a missile contrail right next to the village where the shootdown occurred. They had another picture of the same TEL with the same serial number going back into Russia with two instead of three missiles. They put together a storyboard just using social media. It was 100 percent -- any objective person would say, "Whoa, the Russians really did shoot down that airliner." And had we had the capability we have now to go through all sorts of data, then I think the analysts would have been able to tell Director Panetta, "We are 100 percent certain that bin Laden is in that compound, and here's all of the data that we can show you."

And then predictive analysis, like Lieutenant General Groen said. The president might have asked, "What do we expect to be the reaction of the Muslim community if it becomes aware that we executed a raid and killed bin Laden?" AI is able to do that type of predictive I&W (indications and warning). We're doing it right now in Afghanistan, using AI to predict when attacks might occur, or to predict actions by our adversaries.

I don't think AI would have made that much difference for the raid force itself, unless they had specific applications they needed -- to ask, what is the most up-to-date intelligence? What is happening? Do we need to change our plan, etc. But to me, AI gives you a tool that we've never, ever really had.

One of our commissioners, Ken Ford, refers to this as "AI gives commanders eyeglasses for the mind," and I thought that was such a pithy observation. It helps you look through enormous amounts of data that a human would be incapable of interpreting, and the AI is able to find patterns, make inferences, etc.

So that's what we mean by human-machine collaboration. You let the machine do all that hard number-crunching and stuff like that, and you leave the human commander to exercise their creative spirit, their initiative and their understanding of the broader strategic concept. Human-machine collaboration is a big, big deal in the future of AI.

Q: Good -- good clear answer. Thank you.

STAFF: (inaudible) goes out to Jasmine from National Defense.

Q: Hi. Thank you so much for doing this.

My question has to do with comments that the chairman of the commission, Eric Schmidt, has made before. He said that China is maybe two years behind the United States. Lieutenant General Groen, I was wondering if you agree with that assessment, or do you think we have a bit more of an advantage?

GEN. GROEN: Yes, thanks, Jasmine.

I think I would echo what Secretary Work articulated before. Trying to measure advantage in a space like this is a very difficult undertaking. I think you can look at places where there's clear superiority on the U.S. side -- like our academic environments; the United States academic community is unsurpassed globally. You look at our small, innovative companies and the things that they're working on -- almost every company these days is an AI company, and a lot of them have really good vertical stovepipe capabilities. So there's great innovation all across the United States. On the Chinese side, you do have the organizational efficiency of autocracy, and you have all of the moral impacts of that as well.

But I think the competition, if you really wanted to simplify it, might be, in a sense, organizational efficiency versus innovation. And when you look at the competition through those two lenses, you really have to pay attention to both, right? How do we achieve organizational efficiency in our efforts so that we can keep pace with a bigger machine? But then also, how can we continue to innovate so that we're not stuck in yesterday's technology, and we continue to push the envelope?

So it's a really hard thing to measure. I think both countries have demonstrated significant global capabilities, and so we have to be in this fight, for sure.

MR. WORK: Yeah, I mean, I agree. This is really a tough thing to judge. The way we did it, as I explained earlier, is we broke down the AI stack into its six components. We judged that we are ahead or slightly ahead in three of the six, and China is ahead or slightly ahead in three of the six. So it's a really, really tight competition.

We admitted that the Chinese could probably catch up with us in algorithms within five to 10 years. We also said that we're 100 miles away from going from being two generations ahead in hardware to being two generations behind -- if, for example, China seizes Taiwan and the chip fabricating facilities that are on Taiwan.

So Eric Schmidt has been working in this area for a long time, and his judgment is, "Look, I think we're about two years ahead." But he will tell anyone who listens that the Chinese are coming on fast. They're ahead in some areas, we're ahead in some. We need to take this competition like a politician takes a political race: you have to run like you're losing. And so it's important that we really gear up and go.

STAFF: OK, we have time for one more question that'll go out to Jackson from FedScoop. Go ahead, Jackson.

Q: Thank you so much.

I hope I have my dates right here, but Lieutenant General Groen, I believe we're six months out from your announcement of JAIC 2.0 and the shift to being more of an enabling force. I'm hoping you can give us an update on how that change is going.

And if I could ask specifically -- are you now sending out officials to be liaisons to specific AI offices across the force? How is that going? Is there any tension with JAIC (inaudible) showing up and offering help? How has that been successful, and are there any things you might change in the future?

And then if I could also ask Mr. Work -- previously, you've said that the JAIC should take a naval nuclear reactor, Rickover-type strategy to being kind of an AI coordination office. Do you think that holds any tension with the kind of thousand-flowers-blooming approach that's being taken? What is your current stance on that?

Thank you.

GEN. GROEN: So I'll start with -- thanks, Jackson, great question.

As you, I think, accurately described, with JAIC 2.0 we realized our initial business model wasn't getting us where we needed to go. It was not transformational enough. And so we really started focusing on broad enablement, and I think we've been fairly successful in that space.

We do have great outreach organizations. We pay keen attention to all of the service developments and we try to partner with all of them. We pay keen attention to the demand signals from the combatant commands, and we want to work with anybody who is doing AI today. But here's how we approach that problem set, right -- one of the things that we do well is measure our success and the success of others.

And the second thing that I think we do well is that we don't go to these organizations, or partner with these organizations, from a position of teacher-student. We come in as archivists of best practice across the department and say, "Hey, show us how you're doing that, let us learn from you." And then we can share, "Hey, there's another agency in the department that has a problem very similar to yours, and here's how they're addressing it."

So we play broker for information and expertise across agencies, across services, across combatant commands. And then, because of our congressional authority now to do our own acquisition, for example, we can actually start providing a much broader array of support services and enabling services that help make all of those customers successful.

We think we're a force for good here. We approach the challenge with humility, we measure our success and the success of others, and that has gotten us a long way. I will say this: as I look at the challenge that Secretary Work has laid out so effectively, even now I wonder, is JAIC 2.0 enough? Are we moving fast enough to create enterprises of capability and overcome stovepiped developments? Are we moving fast enough to really change our operating model to data-driven, with data visibility across the department? Are we moving fast enough in integrating innovative technology into the department?

And sometimes I lie awake at night, and the answer's no. That challenge -- feeling the hot breath on the back of our necks -- is what keeps the JAIC motivated and keeps us working hard every day, because we recognize how big this is, the scale of the Department of Defense, and how necessary this transformation is at scale.

Thanks for the question, that's great.

MR. WORK: Jackson, you know, every now and then somebody asks me a question like yours and I think, "God, did I really say that?"


But at the time, what I was saying is: do we really believe that we're going to build the department around the capabilities of AI and AI-enabled autonomy? With nuclear reactors, you're going to build a submarine around the reactor, and you're going to have to have the people who understand everything about how that reactor works and how it interfaces with all of the other systems on the submarine.

We're going to make sure that we pick the people who are in charge; we're going to set the standards; no one can touch the standards except for us. So at the time, I was saying there are a lot of advantages to this. But over the last two years, working with 14 other brilliant commissioners, the recommendations that we put into the commission's report, I'm fully behind. And I'll just lay my cards on the table -- we thought of this as a blueprint. We said, "Look, you really shouldn't look at all of our recommendations and say, 'I kind of like that one, I'll pull that off the wall.'" You have to do them all together to get the effect that the commission feels is important.

So right now, I would say I've changed from the nuclear reactor model to the national commission on artificial intelligence model. And I would just like to say thanks again to all of the people who listened in. The report is voluminous -- it's over 760 pages -- but our staff, which is a world-class staff, did everything they could to make it interactive, so that you can go into that final report and find the information you would like.

There are so many recommendations -- this is why I have so much paper. I mean, I can't keep track of all of the recommendations in the report; I need to be reminded of them. But I would ask all of you to read the report, because we feel it is so important for our economic competitiveness and our military competitiveness. I want to thank you for hosting us today and allowing us to kind of pitch our product.

STAFF: Thank you for the 760-page to-do list, sir.


I'm afraid we're out of time but for those of you that we didn't get to with questions, please submit your questions to OSD Public Affairs and we can answer those. So thanks everyone on the lines and everybody here today for attending and thank you very much.

Q: Thank you.