Transcript

Lt. Gen. Michael S. Groen, Director, Joint Artificial Intelligence Center, Keynote Remarks at the Virtual National Defense Industrial Association's National Security AI Conference and Exhibition

March 23, 2021

HAWK CARLISLE:  And so our first keynote speaker today is somebody that's in the heart of making that happen.  And it is truly a pleasure to introduce our first guest speaker, Lieutenant General Mike Groen, United States Marine Corps.  Great American, as he says, just a great small-town boy from the Midwest, but a wonderful American working hard.

He assumed his current position as the Director of the Joint Artificial Intelligence Center (JAIC) in October 2020, took over from Jack Shanahan.  And as Head of the JAIC team, he leads the transformation of U.S. Joint warfighting and departmental processes through the integration of Artificial Intelligence (AI).

Incredibly challenging job when you think about it.  As we were just talking about before this started, unfortunately, sometimes we in the Department of Defense have a tendency to stovepipe things.  And unfortunately, that is exactly the wrong thing to do when you look at the way Joint warfighting is transitioning today and certainly how Artificial Intelligence is going to play into that.

Prior to his nomination for this position, General Groen was assigned to the National Security Agency and served as the Deputy Chief of Computer Network Operations.  He's also served as Director of Intelligence for the Joint Staff, the J2, in direct support of the Chairman of the Joint Chiefs of Staff.  And he also served as Director of Marine Corps Intelligence, so he's got an incredible background in all of this.

He's been down to the level of battalion, division.  He's worked across the board and directly for the Commandant on Marine Corps strategic group initiatives.  A well-decorated and incredible Marine, and as we said earlier, just a great American doing great work.

So again, ladies and gentlemen, it's wonderful to have you.  We are very honored to have our opening keynote speaker today, Lieutenant General Mike Groen from the JAIC.

Mike, over to you and again, thanks for being here.

LIEUTENANT GENERAL MICHAEL S. GROEN:  Hey, thank you, General Carlisle, really detailed introduction.  And thanks to the members of the NDIA for inviting me.

NDIA is an incredibly important organization, and it's an organization that's changing.  I think this -- this virtual conference is a great indicator in that direction and just in time, right, because your customers are changing, too.  And I think it's an important dialogue as we kind of share like, okay, what are those changes and how do we react to those.

So, I especially appreciate the opportunity to talk to the AI and National Security Conference because I don't think you can separate AI and national security.  And I don't think you can separate the role that NDIA and your members play from national security either.  You're a critical component of that national security.  And that's what I'd like to talk to today.

I think, you know, across the board we're all seized with sort of the opportunities for AI and national security -- in the Department of Defense, in industry, in Congress certainly, the individual services, academia.  I mean, we've got great partnerships across the board.

And it's really exciting, but I hope that we're thinking fast enough, and I hope that we're thinking broadly enough about the transformational changes that are occurring not just in the information environment and in all of our commercial activities, but also in defense, in the way warfighting is conducted, and in the way the Department needs to prepare for that.

So, I think General Carlisle mentioned the NSCAI (National Security Commission on Artificial Intelligence) report.  I just want to, you know, give a shout out to Eric Schmidt, Bob Work and the other really, really capable and insightful commissioners that produced that report.  That's been a labor of love for a couple of years to pull together broad-based recommendations, you know, for defense, but across the nation.

And so that's really great work.  We are working diligently to harvest all of the goodness that's in that NSCAI report for implementation in defense, and so we really appreciate that partnership as well.

So, I'd like to give you a little perspective from my own view.  Specifically, you know, the partnerships that we have with you across the NDIA are critical, right?  And I look forward to getting a chance to hear your questions.

But, you know, I think we are all seized -- we should be -- with the sense that there has never been a more important time for our national security, right?  So, I don't think I need to tell anybody, you know, in this audience about AI, but I'd ask you to think about, you know, American competitiveness, especially if the estimates bear true, even if they only, you know, somewhat bear true: estimates of a $16 trillion global AI industry by 2030.

And, you know, through a national security lens, a strong economy -- a strong economic presence, our role in that $16 trillion industry -- is going to be critical for American competitiveness and, by extension, our national defense.

You know, the academic communities that are involved in AI -- just incredible capabilities, incredible insight and thinking going on in those places, across the research entities.  It's really exciting.  The challenge now is to bring this broad innovation into real practical implementation.  And that's one of the things that the JAIC, the Joint Artificial Intelligence Center, is designed to do for the Department.

We are a do tank, not a think tank -- a do tank.  So, our job is to reach across the valley of death and pull capabilities into the Department from the research and development community, from the commercial community, from members like you and your capabilities.  And this relationship is incredibly powerful, and it's the core of our national security.

And I don't want to underestimate, and I don’t want you to underestimate, the key role that you play in underpinning national security. Your participation in this dialogue and your participation in this transformation is going to be absolutely critical.

So it's really important for the Department of Defense at this point, and I think there's broad awareness of AI and how transformational it can be.  But what we need to start thinking about to a much greater degree is implementation at scale.

And I think you can see 1,000 flowers blooming across the Department of Defense, and that's really powerful.  It's a step in the right direction, but we need to start building on it.  And there's a truism here that I think bears repeating again and again.

Look, if we want artificial intelligence to be our future, then we have to start building it in the present, right?  We need decision-makers, commanders, policymakers, everybody who could benefit from this transformation to start thinking about implementation now -- how do we start with the mature technologies that we're surrounded by, and then continue to build as the industry expands and the technology improves.  Getting started now is critical.

Think about this: the PRC (People's Republic of China) has articulated very loudly, vocally, and continuously their intent to dominate the AI space by 2030.  The PRC also has the Made in China 2025 initiative, which they're actually accelerating.  And I think we're all very cognizant of the fact that Made in China 2025 means not made in Germany, not made in Japan, not made in Korea, not made in the E.U. (European Union), not made in the United States.  And so, there is an American competitive component to this entire conversation that I don't think we can overestimate.

The PRC is likely to be the largest beneficiary of the evolution of this $16 trillion industry.  Projections are that this will mean about 26 percent GDP growth for the People's Republic of China and about 15 percent GDP growth for the United States.  And so again, from an American perspective, the American competitiveness perspective, we really need to pay attention to this dialogue.

Then when we start talking about the AI space, think about the Chinese Communist Party: the surveillance of citizens, the social credit scores, the health report cards that you carry around on your cell phone, the suppression of ethnic populations, the integration of AI into the People's Liberation Army.

And the reason I bring these things up is because it's not just the scale of AI implementation and the competitiveness of AI implementation that matters; it's the ethical foundations.  It's the ecosystems that you create, the systems that you use to control and govern and base your AI developments on, that really make a huge difference, right?  And I think that is one of the marked differences between the way we're approaching AI in the Department of Defense and maybe some of the other actors that are out there.

It's essential now, from both an ethical baseline perspective and an American competitiveness perspective, that we re-craft the vigor, the vitality, the agility, the advanced capabilities of our defense and commercial industries, right?  We have to do this, and we have to do it especially in the AI space.  So, we're certainly thinking about that in defense.

I think the last time we spoke in a virtual setting, and maybe some of you recall, I talked about 1915 and lancers, right?  We talked about lancers riding into machine guns at the beginning of World War I, where the impact of the Industrial Age on warfare became startlingly clear, with horrific effect, as citizens of the Industrial Age, surrounded by the artifacts of the Industrial Age, failed to recognize the impact of those artifacts on Industrial Age warfare, right?  So, things that were eminently foreseeable were not foreseen.

And so, I repeat that now just because I think we should be asking ourselves where those artifacts point now, right?  We're surrounded by the artifacts of the Information Age.  We're surrounded by the artifacts of an intelligent age, one that's using information in new ways.

We are surrounded by the artifacts in the personal and commercial space that we freely employ every day, right?  You know, option identification, recommendation engines, decision support, data synthesis, market opportunities, contracting for services, payment systems.  I mean, you name it.  It's all run by AI in the commercial space.

And that's great, but we should be thinking now about what is foreseeable in those methodologies and those technologies that would impact warfare in an Information Age or an intelligent age.

And I think through a national security lens, it's critical that we think about what does that mean for like the execution of warfare, what does it mean about tempo of warfare or the scope of warfare or the integration of warfare, you know, across stovepipes like General Carlisle mentioned on the front end here.

This is truly becoming a system competition.  And the defense sector and even the public sector are now part of the conversation.  We could talk for an hour about exploitation of the defense industrial base and data, seizure or stealing of data, clandestine surveillance and monitoring of networks.  I mean, all of this stuff is in the press every day, so you know that this fight is on from a system perspective.  And so, I think it's really critical that we should be thinking about not only the technology itself, but, no kidding, what are the artifacts of that technology and how do they drive decision-making across the Department.

I'll give you another historical reference.  Admiral Yamamoto, remember, back in World War II, right after the strike on Pearl Harbor, said something to the effect of, I fear we've only awakened a sleeping giant.  And when you think about that, he was right, right?  The industrial might of the United States swung into action.  And over the course of years, we developed an industrial capacity and capability that trounced dictators around the world.

But, it's different today, right?  Today there's no time, right?  Today with the threat of system warfare, an enemy can have intercontinental effects, orbital effects, cyber effects, information effects, almost at will at intercontinental ranges.  So, there's no time, right?

So if a conflict began, there's no time to read reports.  There's no time to pull data together.  There's no time to consider response options, when you have multiple hypersonic missiles inbound, right?

There's no time to start thinking about AI implementation or data strategies or data-driven decision-making, autonomy, robotics -- all these things we need to be investing in today.  We need to mature our capability today, and we need to mature our system today, so that we are prepared for tomorrow's fight and systemic warfare.

So, the risk today is that that sleeping giant wakes up, but wakes up bound by a thousand threads and cannot react in a timely fashion.  And so I think getting our system ready for this is the business of defense and the National Defense Industrial Association, your membership, you -- we've got to solve this problem, right?  This is a vulnerability that we have to close, and we have to close it together.

So, when you look at the artifacts -- and again, this is a great mental exercise that you can practice every day -- what do the artifacts point to, right?  Where should we be looking?

There are four themes I want to throw at you about these artifacts, because these things are fairly shocking to me, right?

The first is how close this is, right?  So, think about the PRC's intent to be dominant in the AI space by 2030, and their acceleration of Made in China 2025.  Right now in the Department of Defense, we're working POM (Program Objective Memorandum) '23 to '27, right -- the five-year budget plan.

So, when you think again about the Chinese intent for 2030 and the intent that we're articulating in the POM for even, like, FY (fiscal year) '27, to a Marine that's like danger close, right?  We're in the same timeframe in which the PRC is accelerating their AI industry.

I don't think that we can afford to stay on the sidelines of that conversation.  I don't think that we can afford to tiptoe into AI integration or tiptoe into data-driven high-quality decision-making in defense.  And this is where we need your help.

The second theme that I see in the artifacts is how dangerous this is, right?  Because, as we talked about, if our system is not prepared, then we're not prepared, we're not ready -- if our architectures are not coherent, if our integration is lackluster, if we still operate in stovepipes, whether those are service stovepipes or agency stovepipes or functional stovepipes.

If we're, if we are not in an integrated enterprise, then we're going to fail, right?  We will not have the capability that we need to succeed in this data-driven environment.

I mean, if we're still flying hard drives around, for example -- because we can't connect our networks, so it's more efficient to take a hard drive in one place on the planet and fly it to another place on the planet -- then that's a symptom that we are not where we need to be, right?  And so, I think getting our data enterprise in order, getting the data infrastructure in order, is the number one priority for us.

I think about things like this: if we still have Airmen staring at screens and monitors, looking at live data feeds, then we're not ready, right?  Our system is not ready, because these things are readily done by artificial intelligence.

The third thing that I think about here -- beyond how close it is and how dangerous it is -- is how different this is, right?  And this is a real challenge, because I think a lot of us, old guys like me, are not conditioned to think at scale, enterprise-wide, data-driven, right?  And so, a lot of our processes are very sequential.  They're very limited in scale and scope.  And we need to think about how we actually build architectures to make decisions at scale and make them quickly by being data-driven, right?

So, we're not going to be ready for tomorrow's fight if we're not prepared to do that. I think you share that perception, right?  And that's why we're here today.

The fourth one, or I guess the next category: how do we respond to this?  So, given the criticality of the challenge and the absolute imperative -- the economic imperative, moral imperative, defense imperative -- to actually take action now, I think there are a couple of thoughts about how we have to do this.

The first one is we have to do this comprehensively, right?  So, transformation has to be wholesale if it's going to be effective, right?  Because the magic really starts happening when you connect automated processes, right?  So if you have a data-driven process and you can drive another data-driven process, now you're starting to execute at scale, right?  Now you're starting to link and integrate systems and processes.

So, for our warfighting architectures, we have to think about enterprise effects, decision tools that derive from massive data flows, and integrated infrastructure that allows any sensor to inform any decision-maker, or any sensor to inform any system, whether that's a fires system or what have you, right?  I think our warfighting enterprise has got to be modernized, it has to be integrated, and it has to be at scale to be transformed.

There's a second broad category of areas that requires transformation that I think is of interest to you, and that is our broad support enterprises.  So, you think about the Department of Defense, there's a warfighting end of the Department of Defense, but there's the large gears that turn underneath the Department that make warfighting possible and make warfighting successful.

Things like our defense agencies, think about Defense Health Agency, Defense Logistics Agency, Defense Intelligence Agency, Counter Proliferation Agency [ed. Note: National Counterproliferation Center], think about all of these activities that occur that are really the gears that the Department rides on for effective warfighting.

These enterprises are sitting on massive amounts of data, and that's a natural target for AI implementation to create more efficiencies, economies and effectiveness in those large-scale enterprises, right?  It's the Willie Sutton Rule, right?  Why rob banks?  Because that's where the money is, right?

Why AI -- why focus AI on these broad support agencies?  Because that's where our capital is, right?  So, with massive capital investments in these places, these are things that, as taxpayers and as a defense industry, we need to tackle for our national security, right?  We need to get after them.

The third component of this comprehensive task is our business processes, right?  So, the Department historically has been challenged from an auditability perspective -- being able to account for where all of our dollars are going, how they're spent, matching transactions, that sort of thing.  It's a natural playground, a natural implementation ground, for artificial intelligence, right, in our business processes.

Our objective -- something that we're aiming for in the JAIC here -- is to advocate and implement to the point where the Department of Defense operates with the efficiency and effectiveness of any large-scale commercial enterprise, right?  We believe that's possible, and we think that's necessary.

Another thought is that we have to do this right.  And I'll just allude back to my earlier comments about competing systems for artificial intelligence: what are their ethical foundations?  What are their baselines?  And so, we have a very rigorous effort and team across the Department, with lots of great partners in the academic and industrial communities, to help us with our artificial intelligence ethical baseline and with trusted AI and responsible AI ecosystems.  And so, I think that's a really important point.  We can talk about that more later.

I think another important component is a software engineering approach, and this is where I think you can help us a lot, because the standard defense model, derived from the 1960s, has the Department create a very robust requirement and toss that over the fence to the industrial base, which produces a large system, an expensive system, in response to that requirement set and tosses it back over the fence.

There's a tech inversion in place today.  It used to be that the Department used that methodology to drive research and development, and sometimes that research and development spilled over into commercial industry.  Today, commercial industry has very mature and robust technology, especially in the artificial intelligence space and the data space, that we need to incorporate into the Department.

You need to help us do that, right?  The machine doesn't run backwards very well, right?  The machine is used to pumping out technology that then proliferates.  We have to do the reverse, which means we have to change our approach.  We have to take a much more software engineering approach, I would say, rather than a hardware engineering approach -- one where we learn how to iterate, where we learn how to cooperate and work together, to try, to fail, to try again, to implement in small steps.  This is how we will achieve the large-scale software programs that we're faced with.

And so, I think it's really important, even though it kind of flips the Department on its head.  And we need your expertise, and we need your advice on how we address that kind of challenge.

I think another component of that is a fair market, right?  So, we have to think through what the right contracting methodologies are, right?  How do we protect government intellectual property from a defense perspective?  How do we avoid vendor lock?  Because quite frankly, if we don't have an open, integrated enterprise, we will not be successful.

Therefore, we can't implement proprietary systems in segments of our enterprise or give away our I.P. so that we are then hostage to a specific environment.  That's not acceptable in a modern software engineering model that's trying to build a system that can fight effectively in our national defense.

We need your help with this obviously.  We need your insights into how that's done.  We need infrastructure expertise.  And we need your good faith partnership.  And I think we have that today.  I think the urgency of our situation is that we just have to continue to grow together in ensuring our national security.

A third one: we have to think like a team, right?  And I think I've talked about this pretty well, but our defense processes, as General Carlisle articulated on the front end, are still very service-driven stovepipes.  Our system was intentionally formed that way.  We have to recover from that, right?  And so, we have to continue thinking about system warfare -- multi-domain, multi-theater, multi-tempo operations.

And our service stovepipes will not magically coalesce into cohesive enterprises that are efficient and effective, right?  So, we need help with infrastructure, policies and data sharing.  And so, defense needs to change the way that we interact with you so that we can achieve those objectives.

So finally, I'll just end with this.  Look, we also have to think differently, right?  This is probably the hardest component.  Even our digital natives in the Department -- and we've got lots of young folks in the Department -- even our digital natives have a hard time thinking at scale, right, have a hard time imagining sort of a broad AI transformation.

So, it's a mix of technical expertise and functional expertise that we need to pull together, so that we're actually asking the aviators how we need to do aviation in an AI-driven way.

We need to ask our artillery people how we do fires in a data-driven way.  We need your help with that, right?  I think of where we are with self-driving cars -- it kind of redefines what a legacy system is inside the Department, right?

If you think of a self-driving car as a computer with a car wrapped around it -- the body of a car and some wheels wrapped around it -- you acquire something like that a certain way.  You make sure that that computer is exactly what you want, and you're kind of flexible about how big the tires are, what the frame looks like, right?

We don't think that way in the Department, and I'm not sure that we think that way in the defense industry either, right?  We think about platforms, and then we think, well, what kind of computer can we jam into this platform?

I think we've got that backwards, right?  And so, I think this is an artifact of how we need to think about systems, connectivity and processes.

So, there are a number of litmus tests that I think will arise naturally here as we consider systems that we'll acquire across the Department of Defense.  I mean, one litmus test, really, is the level of autonomy, right?  Is this system -- is this capability -- designed for autonomy, so that we can enjoy the effectiveness and efficiency that we need across the Department?

Another litmus test is what is the level of data integration that occurs in the system or process or architecture, right?  If we're not integrating data, then we're not helping ourselves, right?  We're not moving toward the objective.

A third one is integrated architectures and infrastructures, right?  We can't have everybody have their own development platform and their own data stores without, you know, cost-sharing, et cetera, et cetera.

And a final litmus test, because I want to get your questions, is buy-in from defense decision-makers, right?  We need warfighters and functional experts to work side by side with technical experts from your industries so that we can achieve the kind of effects that we need to as a Department.

Again, I apologize for going on long here, but I think this is so critical to national security.  And I think it's so important that we recognize the teaming between us -- the way that we need to team, right, and how we need to learn from each other and do things in different ways.

And so, the JAIC is proud to be one of the change agents that's helping with contractual methodologies and that sort of thing, so that we can do this effectively.  We look forward very much to working with all of you, big companies and small, as we work to transform the Department.

So, if I'm passionate today, it's because of the scale and scope of our challenge, right, and because of the implications to our national security.  This is a competition that's real.  It's a competition that's live, and it's a competition that, frankly, I don't think we're running as fast as we need to.  And so, we need your help in doing that.  And I look forward to your questions, so thank you very much.

MR. CARLISLE:  Hey, Mike, thanks.  Great comments.  I really appreciate it, and I love your passion.  Frankly, my friend, I think it's exactly what we need in this country, so thanks.  Great comments.  I know our industry partners are very excited to hear your comments.

So I'm going to start off with a very tough question for you.

(Laughter.)

Okay.  And it basically goes to your passion and some of the challenges we face.  And I think it has to do with the values that we as a nation hold dear.  The question comes from Patrick Tucker, and it basically asks: how do you compete with China on more quickly implementing and integrating AI solutions while also adhering to ethical guidelines?  Do not those ethical guidelines and considerations, by their very nature and design, slow down the process by which you implement these changes?

GEN. GROEN:  So thanks for the question, Patrick.  Great question.  I think the answer is unambiguously no, right?  So I think when you start thinking about the ethical baselines in the context of responsible AI -- and I'm speaking from experience here, because I had my own sort of conversion here.

When I first came into this space and heard "AI ethics," I thought, well, we're ethical people, so of course we'll do everything ethically.  Can there be more to it than that?  And there actually is, right, because from an ethical baseline comes trust in AI systems.  It comes from a test and evaluation, verification and validation environment where you're actually ensuring that systems work, that they work effectively in the areas where they're going to be employed, and that they work effectively, you know, with other kinds of systems, right, so you're achieving the overall operational effect that you're trying to achieve.

If you don't do those things -- if you don't have that ethical baseline, and the test and evaluation baseline that rides on top of it, and the validation and verification baseline that rides on top of that -- then you're setting yourselves up for AI that you do not trust, right?  And if AI is not trusted in the Department by commanders, decision-makers, etc., then it won't be used, right?  And rightly so.

So, this AI ethical baseline is absolutely critical to responsible AI and trusted AI at the end of the day.  So, I think that's an important point.

But Patrick, I know that doesn't quite answer your question because still you have to talk about systems here, too.

I would say this: democracy doesn't have an inherent capability of, like, working together, right?  The power of democracy comes when you're unified as a people and you're unified in an approach; then democracies are unshakable, right, because they're working together on a common goal.

That's not an inherent capability of democracy if you don't have that cooperation, right, and that unified approach and that unified perception.  And so, it's the difference between the sleeping giant waking up and pulling out his war club, or the sleeping giant waking up and finding himself tied down by a thousand strands.

And I think in an environment where you have not only sort of systems warfare but also information warfare -- you know, fake news, global fake news generated by Russians or Chinese or whoever -- this is actually war against society, right?  And so, this again tends to pick apart our capability to come together, organized and unified, in building a defense capability.

Do I worry about that?  Yeah, I do.  I think that organizations like this, conversations like this with the great patriots of NDIA who are committed to our national defense and committed to our national security with the Department of Defense who's obviously chartered to do that, I think partnerships like ours are the answer to how do we get over that bridge.

So, thanks for the question, Patrick.  That's a really good one.

MR. CARLISLE:  Yeah, Mike, thanks.  Great answer.  I couldn't agree more.  You know, your point is spot-on.  It really is about the fact that we become more innovative and we actually are better at implementing things when we do it through a democratic process and through our values.  Well stated.

A couple of questions on updates for you (inaudible) okay -- from Jackson Barnett.

GEN. GROEN:  Of course.

MR. CARLISLE:  The first one is what's the latest on the Joint Common Foundation and where are we at with JAIC rolling out that platform?

GEN. GROEN:  Yeah, okay, great question.  We're very excited about the Joint Common Foundation.  So for those of you who don't know what that is: more and more, you know, flowers are blooming across the Department.  More and more elements across the services and the agencies want to get involved in an AI journey, right?  They see their challenge, they see the problems that they're having with data.  They want to make better data-driven decisions.

The first thing they need to find is a platform, right, that first assesses their data readiness.  Like, okay, do you understand your data?  And if you do understand your data, then the next challenge is, okay, well, what do I do with all this stuff?  Where do I put it?  What's the dev environment that I use?  How do I share best practices and tools?  How do I store our algorithms where I have security and can also share with other developers across the Department?

The Joint Common Foundation is our DevSecOps platform that we built as the JAIC to allow users from across the Department to find a home to start their AI journey.

In many places -- there are other development platforms in the Department of Defense.  You know, the Air Force has got Platform One.  The Army's got their own development platform.  So we recognize that the JCF is one of those platforms that creates a development environment.  We think that the market we especially serve through the JCF is those users that don't have access to the other platforms -- a service platform or otherwise.

And so, we're working very effectively in that regard.  We've got a couple of services signed up already, and we're at IOC, Initial Operating Capability, for the JCF.  So the JCF is live.  We have the tools.  We're starting to develop.  We're starting to host data.  We're starting to host algorithms.

And what we hope to do from what we call IOC, Initial Operating Capability, is grow that into Full Operational Capability by doing a block upgrade every month.  So, every month we want to add more services, more capacity, more capability to the JCF.  But it's up and running.

And we've done some great work.  Just in the last month, through user surveys, we've been talking across the architecture, across the defense enterprise, talking to lots of different users to figure out exactly what they need so that we can provide the services that they need.  And we think that is a key tool for broad enablement across the Department in the transformation of AI, right?  So that's what we're after there.

I would just note this because I think it's important.  One of the key efforts at this point is to stitch together these development enterprises, these development platforms, into a fabric of platforms, right?  So as we build our Joint Common Foundation, we're thinking about this joint common fabric.  How do you stitch these development and operational environments together so that you can actually share data readily, you know, from an Army sensor into an Air Force system or vice versa?

So we're expanding our approach as we start to flesh out our DevSecOps environment across the Department.  Thanks for the question.

MR. CARLISLE:  Great answer, Mike.  Appreciate that.  That update on the system is incredibly important to our membership, so thanks for sharing that.

A great question, and this is from Jay Dunbar.  And it goes to the fact that our asymmetric advantage is our people, trying to weave in the talent that we have in our folks.  And it says: in order to make the DOD AI-ready by 2025, like the National Security Commission on AI Final Report says, how important is it and how do we go about re-skilling the 3.2 million-person DOD workforce before that date, and what does that look like training-wise?  It's a huge, huge question.

GEN. GROEN:  Yeah.

MR. CARLISLE:  If you could just -- a couple of things out there that we're thinking about would be great.

GEN. GROEN:  Right, right.  I mean, if only we could just get them all into a schoolhouse, spend a year, and give them all data science degrees.  But you're right to point out, Jay, this is a massive challenge.  And I think the NSCAI commission did a really good job of talking about all of the manifestations of this challenge.

So, I think we're binning it into a couple of different things.  The first is kind of triage, right?  So we're taking existing pilot training efforts -- every service has got one of these where they're taking some cadre and training them.  Some are more sophisticated than others: some are sending people to college for a couple of years, some are giving sort of off-the-cuff training, some are, you know, using online courses in catalogs.

What we're trying to do -- we have our own program where we're training to a couple of different relationships in the AI architecture, right?  So not everybody needs to be a data scientist, not everybody needs to be able to do test and evaluation, not everybody needs to be able to write code, but we need all of those skills.  So, what we're trying to do, working in close cooperation with some of our vendors, is identify a pipeline of training for each archetype of AI consumer.  So, if you are an AI operator, perhaps you only need, you know, a certain level of AI coursework that we can readily give you.

If you are a data scientist or if you're a coder, maybe you need something a little bit more sophisticated.  If you're in a leadership role and you really don't do AI, but you manage it or you oversee it, then we have a training package that we're putting together for that archetype as well.

So, when we look at all of these archetypes, what we're trying to do is flesh out at least a minimum viable capability for training and educating across the force.  And we're leveraging the work that the services have done to make sure that, you know, if one service has got a great idea, we're sharing best practice across the other services and across the Department to make that work.
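The archetype approach described above amounts to a lookup from workforce role to a minimum viable course list.  A minimal sketch follows; the archetype names and courses are purely illustrative assumptions, not an actual JAIC curriculum:

```python
# Illustrative sketch only: archetype names and course lists are hypothetical,
# not an actual JAIC training catalog.
CURRICULA = {
    "operator":       ["AI fundamentals", "Operating fielded AI tools"],
    "data_scientist": ["AI fundamentals", "Statistics", "Model development",
                       "Test and evaluation"],
    "coder":          ["AI fundamentals", "Software engineering", "MLOps"],
    "leader":         ["AI fundamentals", "Overseeing AI programs"],
}

def training_plan(archetype: str) -> list[str]:
    """Return the minimum viable course list for a given workforce archetype."""
    if archetype not in CURRICULA:
        raise ValueError(f"unknown archetype: {archetype!r}")
    return CURRICULA[archetype]
```

The design point is that every archetype shares a common foundation, with deeper coursework layered on only where the role demands it -- not everyone becomes a data scientist.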

So that's good.  I think that's a great triage approach, right?  Like, okay, how do we stop the bleeding?  But I think there are broader requirements in place that we're working on now with, like, you know, the Assistant Secretary for Personnel and Readiness, right?

So, you know, do we need to build digital academies, for example?  Do we need an ROTC-like program for digital natives who can maybe get support for a college education and enroll in the Department of Defense in a digital role?

Hiring authority for folks that have digital expertise, you know, in cyber roles or in artificial intelligence roles, so that we can directly take talent from industry -- people who want to serve, you know, people who probably could make more money doing something else, but want to serve and want to be active participants in this national security challenge.  We have those people and we have those kinds of roles, and so we bring people on board with that.

So, I mean, the answer to a broad question like this is that we need a broad range of strategies, and we have these in various states of maturity.  And we're pedaling as fast as we can to try to start doing this at scale.

One thing that still works against us is stovepiping, right?  If Service A says, no, I've got it, I'm going to train my people and I don't want any help, and I don't want anybody to look over my shoulder, then we're going to wind up with, you know, at least five different approaches and at least five different levels of certification.  So if you go into a joint job, what level are you actually certified at as an AI expert, right?  We won't know if we just stick with a stovepipe approach.  So, we're trying to leverage the best of our stovepipe approaches to create a broader AI education ecosystem.

But it is critically important, right?  I mean, this stuff just doesn't work without people who know what they're doing.  And so turning that pipeline on and scaling it at the same time we scale AI in the Department -- those have got to be mirror images.

MR. CARLISLE:  Great answer, Mike.  I couldn't agree more.  And I think, you know, your point, too, is that to some degree it's generational.  Part of it is, if we give the training and kind of unleash the power of the younger generation, they're going to do more with what we give them than we ever thought possible.  So, I think that's part of the solution, but the training part is critically important.  So well stated.  Thank you.

GEN. GROEN:  Yeah, I'm a big fan of the younger generations that now populate the Department of Defense.  But just because you are good at video games doesn't mean that you're AI-ready.  It doesn't mean you understand the implementation of architectures and technology.

So, even digital natives need help thinking about the application of AI at scale and about how you solve big problems.  To our great fortune, many of our young people in the Department of Defense get it quickly.  They inherently understand how apps should work, how data should be shared.  And so we have a lot of raw material, a lot of raw talent that we can take advantage of.  We just need to do it faster than we're doing it, quite frankly.

MR. CARLISLE:  Yes, Sir.  But ultimately, our asymmetric advantage against our adversaries is our people and incredible talent.

A question from Todd Grego, and this is fascinating because I think it's one that's at the heart of what the JAIC does.  And it says that one of the findings of the NSCAI final report was the need to look at a larger picture, to include all of the AI and autonomy architectures utilizing hybrid AI solutions.

But it appears that many organizations, both government and commercial, are focused purely on the application of machine learning vice the larger picture.  Can you comment on how we get the DOD to think in the larger picture, outside of just machine learning and into the hybrid solutions?

GEN. GROEN:  Yeah, great question.  So, one of the tools that the JAIC has is a great tie-in with the academic community and the research communities, right?  So, the JAIC is not a research and development organization, but we have great partnerships with a bunch of research and development entities.  And that includes the academic leaders across the United States.  One of our key advantages is our academic enterprise and just the expertise and the great research and knowledge and insight that's being gained from our academic environment.

We work hard to harvest all of that.  And we're focused now on, you know, learning continuously and continuing to move up the AI value chain, if you will.  So, I think there's a couple of ways to look at it.  I wish I could draw a diagram here on the whiteboard.

But basically, there's a level of AI readiness where the technology -- for machine learning, for example -- is mature, it's available, it's readily accessible, it's understandable.  And so there's a wide swath of problems that can utilize machine learning, and so it's readily done.  These are sort of the early victories of our transformation, because when you think about the scale of the Department of Defense -- you're talking about three million people, $700 billion a year plus the capital investment -- it's not just one and done, you know, it's one million and done, right?

And so, how do we train, how do we get this AI instantiation, and the next one, and the next one, and the next one?  Grabbing the simple methodologies is important to us today as we expand the scope and the scale of AI implementation.

But the point is a really good one because we can't stop there, right?  So when we start talking about more integrative activities and some of the more complex operational environments and some of the more complex situations, as that technology matures, then we'll bring that into the Department as well from an implementation perspective.

There's a lot of great work going on in advanced AI methodologies at DARPA (Defense Advanced Research Projects Agency), in the research and engineering community, and in the service laboratories.  So, the R&D environment is really aggressive and really good inside the Department.  We're tracking all of the next generation of AI technology, but in our implementation space, which is kind of where the JAIC lives, we're not trying to push the envelope.  We're trying to push the envelope in research and development, and then transition that effectively into implementation.

So hopefully, that makes sense.  We recognize the scale of the challenge.  We recognize that there's a lot more out there.  I think what you will see is a rollout and a move up the AI value chain as we mature and expand the scale and scope of our environment.

One of the things that we want to do through implementation and this broad enablement -- and this is why it's our number one priority, broad enablement -- is to start a thousand fires, right?  And as those thousand fires burn, there's going to be a demand signal for more complexity and better artificial intelligence and shareable architectures that start to stitch these things together.

So, we're not trying to solve every problem today; we're trying to start a fire underneath that gets hotter and hotter as it expands, you know, across the Department.

MR. CARLISLE:  Great answer.  Thanks, Mike.  I appreciate that.  I know we've kept you a long time, so I don't want to overdo it, but I would like to ask you just kind of one last question and then any closing comments you have as we go through this.  A great question from William Taylor.

In the concept of data is the new oil, which I think is spot-on, what are the JAIC and the DOD doing to establish relevant, labeled, curated data sources for military conflict and competition that may be made available through the DOD or the DIB (Defense Industrial Base), or basically the whole-of-government, in general?

GEN. GROEN:  Yeah, William, that's an awesome question, right?  And I love the data-is-the-new-oil analogy because it's a perfect analogy.  Because oil, it's deep underground, it's hidden.  It's full of sulfur, it's full of methane.  It's hard to get to.  You have to figure out specific methodologies to get to it.  And once you pump the oil out, well, then you have to remove all the impurities, and you have to burn off the methane, and you have to refine it to, you know, become fuel.

So I think data -- it's a perfect analogy because oil is really hard, right?  You know, we put gasoline in our tanks without thinking about it.  But the process from deep underground to your tank is complex, and it takes a lot of work and a lot of infrastructure and architecture to make that happen.

Data is the same, right?  We are sitting on mountains of data, gold mines of data, but to extract that data, use it, and turn it into usable fuel, we've got to build the architecture, right?  We have to build the refineries.  We have to build the gas stations.  We have to build the standard-sized pumps and nozzles, right?  So, that whole industry has to be replicated metaphorically in a data environment.  And this is exactly what we're trying to do with things like the Joint Common Foundation.  So there's the hardware infrastructure component of this that we're already building.

There's the whole data policy, data-sharing, algorithm-sharing set of work that has to get done, too, right?  And this is where we have a great partnership with the Chief Data Officer of the Department of Defense.  You know, shockingly, there wasn't a Chief Data Officer in the Department of Defense just, you know, a year ago, right?  So now we have a Chief Data Officer, and chief data officers at all of the services and all of the agencies.

And this Chief Data Officer environment now can govern the shareability and the security of data across the Department.  What we're hoping to achieve is to move beyond where we are today -- we still have some legacy programs with stovepiped data that's jealously guarded; they won't share it.

And so we're trying to move beyond that approach, which is kind of the natural instinct, into a much more mature environment where data can be shared readily, where it's catalogued, you know, at the Department level, and then held wherever it's held.  You know, service data is probably held by the service, some data held by the JAIC, some data held by others, but it's catalogued and available and visible.  So that if, for example, you know, the Army does a human resources AI development, well, we want to make sure that that algorithm is available for, you know, the Navy or the Marine Corps to use.

And similarly, if the Marine Corps has a great set of data from ISR (intelligence, surveillance and reconnaissance) platforms, for example, well, we want to make sure that the Air Force can leverage that data for training their own algorithms.  So, setting up this whole -- again, I'm an old guy, so I'm thinking of a lending-library sort of construct, right, where data is housed in lots of different places, but it's connected by a cataloguing agent at the Department level.
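The lending-library construct -- assets held by their owners but discoverable through a single Department-level catalog -- might be sketched like this.  All class names, fields, and example entries are illustrative assumptions, not an actual DoD schema:

```python
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    """One dataset or algorithm: registered centrally, held by its owner."""
    name: str   # human-readable asset name
    owner: str  # component that physically holds the asset
    kind: str   # "dataset" or "algorithm"

class DepartmentCatalog:
    """Department-level index: assets stay in place; the catalog makes them discoverable."""
    def __init__(self) -> None:
        self._entries: list[CatalogEntry] = []

    def register(self, entry: CatalogEntry) -> None:
        self._entries.append(entry)

    def find(self, kind: str) -> list[CatalogEntry]:
        """Discover every registered asset of a given kind, regardless of owner."""
        return [e for e in self._entries if e.kind == kind]

# Hypothetical usage: the Marine Corps registers ISR data, the Army an HR model;
# any other component can now discover both without either asset moving.
catalog = DepartmentCatalog()
catalog.register(CatalogEntry("ISR imagery", "USMC", "dataset"))
catalog.register(CatalogEntry("HR screening model", "Army", "algorithm"))
```

The design choice mirrors the library metaphor: the catalog holds only metadata, never the data itself, so each component keeps custody of its own holdings while the whole Department can see what exists.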

And then -- and this is kind of one of our big challenges right now -- how do you build a secure environment in which you can share datasets, share training data for algorithms, share algorithms, you know, across components, and then have an interpreter that can tell you, if you're a defense whatever agency, hey, as we look at your problem, your problem looks just like something that this company did for a different part of the Department of Defense.  Let's connect you, not just with a phone number, but perhaps with a contract with a vendor who knows exactly how to address your specific type of problem.

So generating this market intelligence inside the Department, so that we know who's available, who's a player in this niche of the data environment, who's a player in this niche of the algorithm environment -- generating that market intelligence, through things like the Defense Innovation Unit out in Silicon Valley, is important for us, right?  So that we can actually make oil -- eventually gasoline -- available at the pump in any neighborhood on any block.  That's what we're after, but that's another big challenge to put on the stack.

Great question there, William.

MR. CARLISLE:  Yeah, Mike, thanks very much.  You know, that is critical.  And as you stated at the outset, we know that our potential adversary, and certainly the pacing threat in China, the way they draw on their entire population gives them an advantage.  And we will figure out a way to take advantage of our democracy to get that as well as we go forward.

GEN. GROEN:  Yeah, even in the face of the scope of this challenge, I am optimistic, right?  I see great things happening every day.  It's just that we need to accelerate, right?  We need to work faster.  We need to work harder, and we need to expand and scale.  So, we need to contaminate more enterprises with an AI data-driven approach.  And the more we do that, the more fires we light, the bigger and hotter it will burn across the Department.

And then in partnership with organizations like the NDIA and your members, that's where we're going to start implementing.  We're going to start swinging lots of hammers at this problem, and eventually we will evolve to capabilities at scale.  And we're really excited about that, right, because we can't do it without you.  We absolutely need your help.

I hope this conversation has helped you understand kind of where we are in this evolution.  It's a massive project to transform the Department of Defense, but we're getting there, right?  And so forgive us for, you know, a thousand small steps, but I think those are a thousand small steps in the right direction.  And with your help, we can make those steps bigger and bigger, and eventually get to the warfighting capability that we really need.

So, thanks, General.  I really appreciate the opportunity.

MR. CARLISLE:  Well, Mike, we can't tell you how much we appreciate it.  Ladies and gentlemen, having Lt. Gen. Mike Groen here from the Joint AI Center is incredibly valuable.

And, Mike, you've got to know we're in this together.  So, we're your partners; we're going to help.  You want to bring industry together?  NDIA is here.  Our membership is here to support you in everything as we go forward.  So, we can't thank you enough, and we really look forward to working with you.

For everyone, thanks for being on today.  Again, great opening keynote.  We have a congressional panel later today, which could be fascinating as well.  That was great to hear from Mike.

Now we're on networking break.  Please take advantage of the networking break.  Go to sponsorship area as well as exhibitor area.  There's some great information out there.

And again, Mike, thank you very much.  And again, we're here to support you any way we can.

GEN. GROEN:  Thank you, General.  I really appreciate it.

ai.mil -- ai.mil is the window into the JAIC.  And you're going to have a couple of JAIC members talking, you know, through the rest of the week.  So, if there are hard questions, I recommend that you ask them those hard questions.  And I can feed you some questions if that would be helpful.

MR. CARLISLE:  Perfect.  Thanks, Mike.

GEN. GROEN:  All right.  Thank you.

MR. CARLISLE:  Okay.  Have a great one.