
Joint Artificial Intelligence Center Press Briefing

LIEUTENANT COMMANDER ARLO ABRAHAMSON: My name is Lieutenant Commander Arlo Abrahamson. I'll be moderating today's press briefing. Today it's my pleasure to introduce the director of the Department of Defense [Joint] Artificial Intelligence Center (JAIC), Lieutenant General Michael Groen.

Lieutenant General Groen is joined today by Dr. Jane Pinelis, who is the Chief of Test and Evaluation for the JAIC, and Ms. Alka Patel, who is the Chief of Responsible AI (Artificial Intelligence).

We'll begin today's press briefing with an opening statement followed by questions. We've got people out on the line, and of course, folks in the room. And I think we'll be able to get to everybody today.

So with that, sir, over to you for the opening statement.

LIEUTENANT GENERAL MICHAEL S. GROEN: Thank you, Arlo. Well, good afternoon. And greetings to the members of the Defense Press Corps, really glad to be here with you today. I hope many of you got the opportunity to listen in to at least some of the AI symposium and technology exchange that we had this week.

This week was our second annual symposium. We had over 1,400 participants across three days of virtualized content. I want to say thank you, first of all, to all those senior leaders who participated in that dialogue over the past three days. We've heard from senior leaders across the department, including Deputy Secretary Hicks and the Honorable Robert Work, former deputy secretary and currently the Vice Chair of the National Security Commission on Artificial Intelligence (NSCAI).

Ms. Michele Flournoy joined us this week as well. She's been a tireless advocate for AI in the policy community, and a number of other senior defense officials participated as well.

We especially thank the vice chairman, General Hyten, and the U.S. Special Operations Command Commander, General Clarke, who brought not only their insights but the voice of the warfighter into our conversation, and it was really valuable to have them here as part of the session.

We all have benefited from the terrific work of the NSCAI, the National Security Commission on Artificial Intelligence and remain grateful for their insights into how we truly achieve the modernized force that we need.

Finally, we never forget the enormous support we get from Congress, who continue to recognize the transformational nature of our current challenges. Congress's steady, steadfast support of the DOD's AI initiatives is one of the keys to victory.

So the level of the dialogue and the participation in the symposium from senior leadership really demonstrates how seriously the department and our broader national security community take artificial intelligence, and the generational opportunity we have to preserve our military advantage through broad artificial intelligence implementation at scale across the force.

As this symposium demonstrated, we have true leadership from the top in bringing data, artificial intelligence, and new technical approaches to our most difficult challenges. And that's a really important thing that we do.

The competition is clearly working hard on this. Many have cited the ruthless efficiency of totalitarian organizations like the Chinese Communist Party or Russia. America and our ethically aligned international partners have always counted on the innovation of free societies.

And I'm happy to report after this symposium that the lights of American innovation are on and they're shining brightly, and they're really positioned to help us as we make our way through this transformation.

This is truly the challenge of a generation. And it's clear that our industry partners and industry leadership are equally concerned and engaged in meeting the demands of this competition. I hope you caught Deputy Secretary Hicks' keynote on Tuesday, where she discussed the brand new AI and Data Acceleration initiative, along with her recent signing of a memorandum affirming the department's commitment to responsible AI.

I think it's really important. The juxtaposition of these two announcements clearly marks the department's intent to modernize our capabilities, but to always do so standing on a rock-solid ethical foundation. I think it's a powerful signal that both of these things have happened here in the last 30 days.

Symposiums like these are important. They help us build and strengthen the AI ecosystem that will drive broad transformation across the department. This is our vision, and it's what we focus on in the Joint Artificial Intelligence Center every day.

Through fora like these, we broadly enable defense transformation by illuminating, ideating, and integrating our thinking about the transformation underway.

The symposium talked about implementation. We talked about platforms and technologies and scale and data, and lots of other technical aspects of artificial intelligence implementation.

But at the pointy end of that spear, the hard work of creating successful environments and implementing AI in the dirty, dangerous, challenging warfighting environments, right at the edge, is what really matters. And that's what we wanted to focus on this week.

Accelerating capabilities to our warfighters at the tactical edge was really at the heart of our conversation. The explosion of innovation that we uncovered this week is really encouraging. And the seriousness with which this group takes on our most pressing national challenges is really humbling.

As you may have heard on Tuesday from the Deputy Secretary, AI and data accelerator (ADA) seeks to expand our understanding. We want to understand things like latency challenges. We want to understand reliability and uptime requirements. We want to understand restrictive policy environments that may be a holdover from an earlier age, yet still hold us back in our implementation of AI, especially at the edge.

We want to discover the technical, bureaucratic, process, and cultural obstacles to change and remove them from the path of our warfighters. And that's what the AI and data accelerator is all about. And I hope we have some questions about that later.

We want to understand the challenges to implementation of this technology.

There are a couple of things that you may have taken away from Deputy Secretary Hicks's comments; certainly I did. The first is a department-level incentivization of this experimentation.

The department leadership knows our challenges, and they want to accelerate the transformation. They have made AI a priority for resourcing. And we have an awareness on both sides of the Potomac, in Congress as well, that this transformation to data-driven, artificial intelligence-enabled human-machine teaming is a really important transformation that we need for a modernized force.

A second thing we heard Tuesday is that the ADA starts with real warfighting challenges. Our combatant commanders have some of the most intense decision-making environments, but have yet to have the opportunity to apply the latest tools to responsive decision support. And we want to correct that. And we want to do that in a repeatable way.

We also want to do that in a way that scales. If we make progress at one combatant command and help their decision processes, we expect to be able to rapidly scale those capabilities across other combatant commands to help their decision-making as well. We want to do that in a way that illuminates a path for software capabilities that might be different than our historic norm.

The shift in balance from hardware-defined to software-defined capabilities will really require us to think differently about how we approach development. Through ADA, we are teaching ourselves how to implement software-based capabilities, how to support them with infrastructure, and how to achieve them at scale.

A few things I know we will discover: this is not a transformation that you can make at the surface or with a series of shiny objects. We will have to dig deep into AI architecture, data curation, and network planning, and we'll have to ensure our development and operational planners for decision support are secure, reliable, and tested.

A new coat of paint will not get us to the transformed decision-making and tempo-generating machine that modernized defense capability demands.

There are very clear implications of a transformative defense environment. Foundationally, it depends on transforming the Department of Defense's technical operating model. The business model of what the Department of Defense does for the nation doesn't change. But the ways, the operating model for how we accomplish those goals, certainly will change.

And we should think of this as the beginning of a joint operating system. We might compare that to a specific vendor ecosystem or vendor architecture. You know, there are many examples out there where the pieces fit together by design.

This is what a new operating model looks like for defense, pieces that purposely fit together, situational awareness that is automatically generated and widely shared, any sensor available to feed any decision-maker. The deputy secretary's AI and data accelerator sets us on that path.

To be honest, there's very little magic here. We have multiple models to copy in the commercial environments, in the industrial environment and elsewhere.

This is all about making the Department of Defense as productive and efficient as any modern and successful data driven enterprise.

And as we look at this, it's pretty easy to see the scale of the challenge that we face. In some ways, this transformation will require an integrated operating environment that could actually make jointness look easy. And here's what I mean by that.

Operating with data and human-machine teaming across every domain, and integrated across domains, demands a level of process and technical integration and data commonality that far exceeds what we practice today. What we're talking about here implies a much higher level of integration in platforms, in data, and in domain awareness that exceeds our current standards. It is truly transformational, and it is truly necessary.

We won't achieve this by a scattered yard of shiny objects and stovepipe developments. We need to begin planning and developing for a purposeful operating system that stitches our various capabilities together. We look forward to your questions on this for sure.

Before we get into questions, though, and before I close here, I just want to acknowledge two really important leaders who are here with me today. One is Alka Patel, Esquire, if I may, whom I know some of you know very well. Alka has been an irresistible force in building our ethical foundations and baselines, and I hope you have some questions for her today.

The other is Dr. Jane Pinelis, who is also a thought leader and a real leading technical expert on the emerging discipline of AI testing and evaluation, a critical component of AI development and AI integration. So I look forward to hearing from Dr. Jane. And I hope you have some questions for her as well.

These women have led the JAIC and the DOD through critical junctures in our development, and I hope you take advantage of the opportunity to speak with them.

We're grateful for their leadership, and we'll continue to lean on them as we mature our responsible AI and testing and evaluation initiatives for the future.

With that, we're very happy to take your questions. Thank you.

CMDR. ABRAHAMSON: Thank you, General Groen.

First question will go up to Tony from Bloomberg News, sir.

Q: For Dr. Patel, excuse me, on test and evaluation. I've covered DOT&E (Director, Operational Test and Evaluation) a lot over the years. Are you working with them in terms of testing matrices and modeling criteria for determining effectiveness and suitability, what would pass or fail in terms of the AI construct?

And for the general, can you give a couple of examples of where in the next couple of years AI might be fielded? You came out with this really neat U-2 copilot issue, I think it was last year. Is the U-2 at some point soon going to have an AI copilot, basically?

GEN. GROEN: Sure. Great questions, Tony.

So Jane, you want to go first, please.

DR. JANE PINELIS: Sure. So we work with DOT&E extensively. We talk to them several times a week. They're a very important partner for us in testing and evaluation, as are the service operational test commands, right. We have the OSD component, and then the service component as well.

We primarily interact with their chief scientist, Dr. Greg Zacharias. And we work with them on a variety of issues, anything from test planning and updating test planning guidance, how to actually write a test and evaluation master plan, to how you measure the security of an AI-enabled system, to operationally testing an AI-enabled system and the infrastructure that's involved, et cetera. So we work with them and coordinate with them probably a few times every week.

Q: Do you have any current systems in testing that you can talk about? Just give a couple of examples.

DR. PINELIS: So we at the JAIC perform testing for all of JAIC's acquired systems. We are actually partnering with DOT&E for a couple of them. Our various systems, right, are at different stages of development. And so some of them, for instance our force protection tools, we're currently testing at the algorithm level. So we have vendor models, and we evaluate whether those models are accurate in terms of their predictions on a withheld test data set. But also, to the extent that some of our systems are fielded now, we're able to evaluate their effectiveness with the human, right. We care about things like human-system integration.

And in fact, the JAIC recently came out with our human-system integration framework, which we were able to distribute to all of our DOD test partners, and that helps others evaluate their systems for human factors as well.

Q: Okay, thanks. That's a good answer.

GEN. GROEN: Great.

And just quickly, with respect to the types of AI that we'll continue to field, obviously there are layers of AI. For example, there are those AIs that run specific systems, like co-piloting in the U-2. Most of those are service-developed or service-led developments for specific pieces of equipment or weapon systems or other constructs.

One of the things that we're really focused on, that we think matches the maturity of AI technology today as we're implementing it, is decision support, teeing up good decisions for commanders. It helps commanders make decisions based on sound data, either patterns in historical data or knowledge of things that are happening on the battlefield, with the red force or with the blue force, but helping good decision-making.

If we can enable good decision-making and have informed decision-makers, we think that is the most significant application of artificial intelligence. And then we'll continue to go from there into other functions. The list is endless: moving logistics successfully around the battlefield, understanding what's happening based on historical pattern and precedent, understanding the implications of weather or terrain on maneuvers. All of those things can be assisted by AI.

So you will see a rapid proliferation of really enabling tools for decision-makers across the wide range of warfighting functions.

Q: Is any of this part of the Pacific Defense Initiative, you know, fielding AI-enabled projects over there? I don't have a great example, but anything Pacific Defense Initiative (PDI)?

GEN. GROEN: I'm not aware of what specifically the Pacific Defense Initiative is going to resource. I mean, historically, we've had a European Defense Initiative for years, and in those kinds of environments, combatant commanders have the ability to experiment with and implement capabilities that they didn't have before. So I don't know any specifics of PDI. But I suspect that those kinds of things are on the table at least.

CMDR. ABRAHAMSON: Okay, we're going to go to the phones for the next question. The next question will go out to Sydney Freedberg of Breaking Defense. Sydney, go ahead.

And I believe there may be an audio issue. So Sydney's question is about the AI and Data Initiative. And his question, I think there are a few reporters that had similar lines here, is about how the JAIC will work with the Chief Data Officer and how they work with the COCOMs (Combatant Commands), and some details on that. Sir, over to the team.

GEN. GROEN: Yeah. Okay. Great. Thank you. Thank you, Sydney. So the ADA, the AI and Data Accelerator Initiative is something that is moving really fast. And frankly, I think it makes a lot of the historical defense process kind of uncomfortable, right, because we're moving so quickly.

But what we want to do, at its core, really started from a series of combatant command exercises, where combatant commanders wanted to try new things. They wanted to experiment with data-driven decision-making, making sense out of noise, and creating options for commanders to consider in execution.

This idea of rapid idea generation and support really drove us to a conversation about, okay, how do we really accelerate the data readiness of our combatant commanders, and the artificial intelligence tools that they have at their disposal to make good decisions.

And the combatant commanders were chosen specifically because, one, they have their own exercise environments, but they also have real decision environments, really the toughest decision environments of anybody. And yet they often don't have a lot of tools to deal with those kinds of things. So we wanted to help them with that.

And it was clear that there were two lines of effort, or two real problems, that we wanted to address. The first one was data readiness. As in any large enterprise, if you're going to use artificial intelligence and start bringing those sorts of data-driven tools to bear, you have to understand your data. You have to clean up your data. You have to get the data where you want it.

And so data curation, data conditioning, data quality control, data management, all become really important functions. Combatant commanders and their staffs are built to fight the U.S. Joint Force, right. They're not built to do those technical functions. They need help.

So the first part of ADA is to bring in data teams, operational data teams we'll call them, and they will work with combatant command staffs, headquarters, and commanders to get the data in a good place, right, to explore all the sources of data that combatant commanders can use in their decision-making, and then create access to that data.

The second piece of this is the challenge of process flow. So today in our joint force, we have many processes that are a series of stovepipes, individual efforts with individual systems, with individual sources of data, for example, each contributing one piece of knowledge to a commander's decision-making environment.

What we want to do is take all of those stovepipes and turn them into a collection of observations that are integrated and fused in a way that helps the commander make better decisions from a fused picture of what's going on, not having to assemble in his or her head the contributions of 30 different systems.

And so, beyond cleaning up the data environment, we're then looking at the workflows at a combatant command. There are multiple workflows, as you might imagine, all the different functions that occur under the auspices of that headquarters, that really could use machine assistance, right.

And so we're going to help build machines that make the decision processes smoother, that make the processes of integration of the combatant commands smoother, and help them with this. We're going to do that piece of it, the AI piece of it, with something we call "fly-away" teams. We'll have a persistent engagement with our combatant command headquarters.

But what we will do, when they're ready, linked into their decision cycles or their exercise cycle, their experimentation cycle, when they have the time to look at this, is fall in on their efforts and help them experiment in this space. We're going to experiment with process flow. We're going to experiment with workflow.

And if we can develop something that works well for them, ideally, we'll leave that in place, and they'll be one step better than they were before. And then we'll come back. And we'll do it again. And we'll make them one step better again.

And so through this series of experimental activities, we hope to really start to gain real capability. If this sounds familiar, it's because this is a conventional software engineering approach. We're talking about largely software-derived capabilities, so it only makes sense for us to use a software engineering approach for testing, experimenting, and implementing these capabilities.

And one of the magics of doing this in a software way is that if you can create opportunities for one combatant command to streamline their decision processes, then that scales pretty readily to other combatant commands who have very similar challenges. They may have different data, they may have a different theater, but the challenges in the staff actions are largely the same. So we hope to be able to experiment rapidly and then scale across the joint force as much as we can.

CMDR. ABRAHAMSON: Thank you, sir. Ma'am?

Q: Thank you very much. About the limitations of AI. What are some areas where AI cannot help? And how will you cover those areas?

GEN. GROEN: So that's a great question. We spend a lot of time thinking about what AI can do. Obviously, for any process that is data-driven or requires inputs from a broad spectrum of data producers, it is very natural for artificial intelligence to help humans sort through large volumes of data.

And so AI is good at that sort of thing. And that's really the sweet spot for where we want to help commanders.

AI may not be as useful in decision-making that is integrated with humans and human emotions and working with individuals. I would submit there are still ways that artificial intelligence can help in those interactions. The defense health enterprise, for example, has AI applications for lots of different aspects of treatment of a variety of illnesses, both physical and mental.

And so there is a lot of work that AI can do to help doctors make better decisions, to help organizations make better policy. But those don't jump out at you quite as cleanly as some of the ones that are working with tactical data or situational data on the battlefield, for example, or logistics data, which are very data-driven enterprises. AI falls naturally into those things.

So we're going to pick carefully which ones have the data to actually support a data-driven analytical engine, and which ones we may want to hold off on until we have more mature technology.

CMDR. ABRAHAMSON: Next question goes out to Jackson Barnett from FedScoop. Go ahead, Jackson.

Q: Thank you very much for doing this. My question is directed toward Ms. Patel. What is the implementation or other type of guidance that you have created for understanding the ethics principles and responsible AI? And how does the new memo signed by Deputy Secretary Hicks change or alter in any way the timeline for developing such guidance?

ALKA PATEL: Sure, thanks, Jackson, thanks for the question. And I really appreciate your diligence in holding me accountable in terms of our efforts on responsible AI at the department.

So as you alluded to, the Deputy Secretary of Defense signed a memo on May 26 that's really focused on how we implement responsible AI at the department. In addition to affirming the AI ethics principles, which were adopted last year, this memo actually sets out six foundational tenets, and those foundational tenets lay out the structure for our strategy and implementation plan going forward.

So we've taken a step forward. To answer your question more specifically, I think you're aware we have a Responsible AI Subcommittee that was convened last year. We meet on a monthly basis and have met over 12 times, with representation from individuals across the department. So it's not just JAIC individuals who are working on and trying to solve this problem, but really a cross-sectional representation from the entire department, and all of those discussions are what led to identifying the foundational tenets.

And so we've taken that step. And that came through a lot of learning and experimentation, as the general was talking about earlier. And then our next piece, as you'll see in the memo, is a set of very specific action items, deliverables with corresponding timelines. And you'll see that many of those timelines are fairly short, in the sense that we recognize the urgency around this work and how important and critical it is.

And therefore, by September, October, we will have a final version of a responsible AI strategy and implementation plan for the department.

One other thing I will just add briefly is that in addition to that memo, this Tuesday at the symposium, both the deputy secretary of defense and the general again highlighted the priority of responsible AI for the department. But we also announced the release of a responsible AI RFI, a Request for Information, through our acquisition vehicle, Tradewind.

And so in that RFI, what we're asking is for individuals, whether from industry, academia, nonprofit organizations, from all sectors, who have subject matter expertise, who have solutions, services, products, or best practices in the responsible AI area, to respond. That information will actually inform and guide what the department needs to do to build that operating infrastructure that you were alluding to, general, and really build that across the department. Because what we've learned with AI is it's not just about the technology. There are a number of different pieces that impact this. And so we have to look at this holistically.

And so that has really been the focus of our efforts, and it's come to a culmination in the last couple of weeks, between the memo and the RFI. We've also recently had the third convening of the AI Partnership for Defense, where we bring our international partners together, and responsible AI is at the heart of those conversations. So there have been three convenings to date, and all of them have responsible AI as a foundation.

Additionally, there are other efforts on the acquisition side, which are contracting vehicles to be released. And two of them specifically, one is on data readiness, but the other one is on testing and evaluation. And those are really critical vehicles in terms of how we actually operationalize the principles.

And the last thing I'll just mention is that talent is also something that we're thinking about. This is still a fairly new area. And so how do we make sure that we are bringing in the necessary talent to think about all the areas on responsible AI, as well as internally thinking about workforce education to upskill our workforce to really be able to address this issue.

And so there are a lot of different pieces that we're looking at and working on holistically, where we're building this plane as we fly it, so to speak. There is no playbook. You've seen that the tech industry has been working on this for years, and there isn't one solution. And so we are making progress. And hopefully, by the end of the fall, you will see a published DOD responsible AI strategy and implementation guide.

GEN. GROEN: If I can, let me just pile on for one second. I think it was enormously encouraging to me. As the new administration came on board early this year and was settling into their jobs, you know, those of us in this business often get very excited about the technology and the technological aspects of it.

And as Deputy Secretary Hicks took her position, her first impulse and her first attention to the artificial intelligence conversation was all about ethical foundations. And as maybe a technology person, I was a little bit set back on my heels for a second there. But how encouraging that is, and what a wonderful way for us to start our interaction and really put a mark down by this administration for responsible AI. An ethical AI baseline as a baseline for everything we do in this space is so critically important.

I think it's just so insightful to make that the first thing that we did here in the department. And so I'm very encouraged by that. And as Alka can attest, we're really making good progress based on that baseline now.

DR. PINELIS: If I can add a little bit as well. We've been able to tie our human-system integration framework in a really big way to the responsible AI principles. And that's an important tie-in, because it means that to the extent that some of the responsible AI principles can be tested against in our framework, we're not adding time or any kind of financial cost. It's something that we're doing already, because ultimately, using AI-enabled systems responsibly is very much connected to using them effectively.

And so, just to give you a couple of examples: for instance, we measure whether the warfighter has the information that they need to know, when they need to know it, in a way that they understand. That's a very common human factors question. And that also ties very much to the responsible principle and to the traceability principle.

When we talk about whether the operator can use the system to do precisely what they want to do with that system, in human factors we may call that usability or function allocation; in responsible AI terms, that's governability, et cetera. So there are a lot of these really important ties, which means that some of these principles are not going to be as difficult, as new, or as costly to assess as one might imagine (because we didn't necessarily ask these questions of conventional systems previously).

CMDR. ABRAHAMSON: Thank you. Next question, we'll go to Luis Martinez from ABC News. Go ahead, Luis.

Q: Thank you, Arlo. General, you spoke about building new machines. But I think what I would like to know is, tangibly, how does one see AI in the military framework? When a commander, a tactical commander, a warfighter hears AI, do they grasp what it is, what they will see? Tony asked about an autopilot. I mean, that sounds like something that's internal to an aircraft. Is it tied to a broader network while it's in the air? What can someone actually see? Or is it, as you said, software-driven to the point where the only thing you will see is the team that is being created?

GEN. GROEN: That's a great question, Luis. Part of the challenge of this transformation is that there's an educational aspect to this, to be sure. Here's one thing that we have, what we're trying to accomplish when I talk about an operating model and a defense operating system.

This idea of access to large volumes of data, wherever they are, being able to write applications against that data to navigate through traffic or navigate through some battlefield situation or make some decision about a supply movement or transaction, this comes so naturally to the younger members of our force. They grew up in this environment, writing apps against data. Many of them do this as a hobby, right.

And so we have a great swath of the force that grew up as digital natives or near digital natives. And they understand this implicitly. Because their minds have been trained, they can see the advantages of things operating at scale. They see how large online marketplaces work, how having access to that data and being given potential things to buy, maybe with recommendations based on things that you've said before, works. They understand that implicitly, and all of those models fall right into military processes.

For almost every commercial application there is a military analog, so those algorithms and the processes for using artificial intelligence just fall right in there.

For older folks, not me, but, you know, people much older than me, it doesn't come as naturally, right. And so some of the senior folks think of AI as a black box that's going to come in and make their decisions for them. We're getting past that as a larger and larger proportion of the force really understands that, no, actually, we're taking your decision processes, and we're taking all of the hard data work, and we're making that really easy for you. That's what this is all about.

So commanders own their decision processes. And what we're trying to do is give them tools to make better decisions, to make better decisions based on data. And the number of commanders who are now starting to appreciate how this works is growing rapidly. You can really see the light bulbs coming on, just in the last six or eight months that I have been here in the department.

It's incredible to me how fast, on a Department of Defense scale, this transformation is taking hold, and how more and more people understand. It's not just about the shiny objects. It's not about the black box. It's about the architecture. It's about decision-making. It's about responsible decision-making, predictive decision-making based on the level of confidence that you get from understanding what's actually going on around you.

I mean, humans are famously really bad at operating on large volumes of data, but we're really good at intuiting the right answers. We're building both, right, and we're bringing them together. In a bigger and bigger swath of the department leadership, commanders really are starting to take hold of this, and they want it. And I think that is only accelerating in the department now. It's really exciting to see.

CMDR. ABRAHAMSON: Okay, we'll go on to the phones. We have Will Knight on the line from Wired. Will, go ahead.

Q: Hello, thank you. Yes, I wanted to ask a question about the kind of data and tools that you'll be using from industry. So there was a report out of Georgetown a couple of days ago talking about the risks posed by AI data and tools built around that data.

And so I'm just wondering, when you're going to be using a lot of tools and data coming out of industry, how you're going to be sure that that data hasn't been poisoned. And one of the recommendations in this report is that you have a red team, a machine learning red team, to test tools to make sure that they cannot be used or misused by an adversary. So yeah, I'm wondering how you're going to be vetting that.

GEN. GROEN: Yeah, great question, Will. And I'm going to start and then I'm going to turn it over to Jane here in just a second. But I think there are a couple of aspects here that are really important. One I would suggest is that the idea that in a human-driven environment we have no risks is not true, right? In many cases, by bringing in algorithms and protecting and securing our data, we actually can get to ground truth, and we can actually make better decisions without some of the risks that humans bring into the chain.

So I don't say that to downplay the risks of artificial intelligence. But in every aspect of this business, there's the comparative: if we don't use machines, what do we do?

So in this environment, when we start to bring in data, you're absolutely right. Just as when the first tank was invented, the next thing that was invented was an anti-tank grenade, and when the first ship was invented, the next thing was a cannonball or a missile or something that would sink a ship, in the evolution of AI, especially as it's applied to military systems, that same dynamic is surely present.

And so we're working through the dynamics of artificial intelligence, anti-artificial intelligence, and anti-anti-artificial intelligence. This cycle of development and securing your data is going to continue. And so we are highly cognizant of the research. We're highly cognizant of the implementation. We have great relationships with academic environments and with commercial environments that really help us stay on the cutting edge, so we understand where the threats are and what the threats are. We will never be able to eliminate all threats in this competition of AI and counter-AI.

But what we want to do is be as informed as possible about what is possible at the cutting edge, how we best secure our systems, and how we make sure that we're still informing good decision-making. Sorry, Dr. Jane, please.

DR. PINELIS: So to build a little bit on the general's statement, part of operational testing is testing your system in its realistic operational conditions, against a realistic adversary, with the information that is available. So to that end, we test our systems for a variety of robustness and resiliency issues.

The first one is being resilient to cyber threats, of course. There's also being resilient to even just natural perturbations, right. You think of a sensor, maybe in a computer vision problem; that sensor could get attacked itself, but also it could just be cloudy that day, or the image could be blurry, for whatever reason.

And then of course, we actually have a red team at the JAIC that tests our systems with respect to real adversarial threats.

Having said that, once the model is actually deployed, once the tool is deployed, there are additional things that we worry about as far as robustness and resilience. So we worry about data drift. We worry about model drift. These are all runtime monitoring types of questions, because monitoring these systems doesn't stop once the systems are deployed, unlike how we've traditionally tested things in this department.

We partner on this with a few Federally Funded Research and Development Centers, with DARPA, and with a couple of university affiliated research centers as well, because a lot of this research is both operational in nature, but also somewhat academic in nature, too.

So those are the important relationships there.

And then as far as data poisoning specifically, at the JAIC we have a variety of operational data that we're able to share in a very secure way with our developers, so that their models are developed on extremely operationally relevant data. But of course, now we also have the data decrees that recently came out from the CDO's office (Chief Data Office).

I think those will be a nice step toward providing secure data sharing between organizations, because as we develop these data of utmost quality, we need to ensure their security, as you mentioned.

MS. PATEL: And if I could also add a few additional comments in terms of some of the efforts that we're doing, because data is critical to all those principles as we think about responsible AI. One of the efforts that we've done at the JAIC recently, and you've heard me talk about this before, is the use of data cards.

So if we go back to the earlier stages of the development lifecycle and think about when we're designing and developing, when we're designing the use case: how we identified the right sets of data, do we really need all that data, how we looked at that data, have we separated training versus testing, and so forth. We're using tools such as data cards for documentation purposes, and also having a governance process.

So this all comes down to thinking through risk mitigation, right? How do we mitigate the risk as much as we can? How do we monitor, when it comes to runtime monitoring once the systems are deployed, to really be able to identify when there is data drift, so to speak?

And so how do we build a robust governance system or structure around that, so that we make sure that our use of data and our selection of data are aligned with our principles, and with the scope and the intent of the project as well. And so the data decrees, but also the data ethics principles that are out there as well, all of these go hand in hand, frankly.

Q: Thank you.

CMDR. ABRAHAMSON: Ma'am, would you like to ask the question?

Q: Yes. I want to talk a little bit about the concern about the competition with China and Russia in the field of AI, especially given that there may be different ethical constraints.

GEN. GROEN: Yeah, great question. Thanks. Clearly, one of the things that we pay key attention to, as we've already talked about, is the sound ethical baseline on which we base our AI development. It begins with AI principles and works its way through things like test and evaluation, and it works its way through responsible AI integration.

So we have this entire process that is built foundationally on trust. And the result of our process, beginning right from those very principles, is building in trust: building in trust through testing, building in trust through evaluation, building in trust through human systems integration, building in trust in operational employment doctrine to make sure that we're using our AIs where it's appropriate and where they can add value, and validating and verifying our AI algorithms so that we can be sure that they not only perform as they're designed to, but also perform to design in the context that we want them to. They achieve the right effects that we want them to achieve.

So we have this very complex and multi-step process, under the heading of responsible AI, that is just foundational to the way we do AI, right. Everything we do has to pass those tests.

And so as a result, we think that actually creates tempo for us, because a trusted AI is an AI that a commander or an operator will use, will be comfortable using, and will know when it can be used and when it cannot be used.

I answered your question that way because some would contrast the speed and tempo. If an authoritarian regime, like the Russian regime, develops a weaponized AI capability, for example, without this ethical baseline and the self-questioning all the way through, well, then you may not be able to use that weapon effectively. You may not have the trust of the operators or a commander that those things will be effective.

We think that we actually gain tempo and speed and capability by bringing in AI principles and ethics right from the very beginning. And we're not alone in this. We currently have the AI Partnership for Defense with 16 nations, all of whom embrace the same set of ethical principles and have banded together to help each other think through and work through how you actually develop AI in this construct.

And so we just had our third meeting of the AI Partnership for Defense with those 16 nations. We just added three this last go-around. All of these nations want to approach the same AI development from the same ethical baseline. And so it's an enormously powerful team.

And when we get together, it's not just a talk shop about philosophy. We actually share real examples of how you can develop AI in ethical ways, how you can build trust in your operators, all of the aspects that make this effective.

We think doing it this way makes our AI actually much more effective as a capability than it would be if we just handed out an algorithm that wasn't tested, that wasn't trusted, that we weren't sure where it would work or where it wouldn't work. That kind of distrust would come from an AI development that's not based on those same kinds of principles, that doesn't adhere to the transparency, visibility, and accountability of process that ours does. Thanks for the question.

CMDR. ABRAHAMSON: We'll go to the phones to Jaspreet Gill from InsideDefense.

Q: My question was actually answered. So I'll just hand over to the next.

CMDR. ABRAHAMSON: Okay. Sir, over to you?

Q: So I'm relatively new to this AI stuff. Can you explain what is ethical AI versus what might be unethical, with some practical examples? And what is the department doing to ensure that it always has ethical AI?

GEN. GROEN: Yeah. So I'll start with the easy stuff. I'll start with the ethical principles, right, because that's where we start this conversation.

So our ethical principles include reliability and transparency and equitability, and then move on from there, so that we have not only fair and transparent and traceable AI algorithms, but also a sense of reliability; we know they work. Then that goes to the next AI principles, which include reliability and governability.

And so if you're building to those principles, first of all, you're ensuring that your AI actually works as it's designed to work. You're assuring that you understand any biases that those systems might have. Almost any AI will have a natural bias, or will grow a natural bias as it's trained over time.

And so this is a real aspect of artificial intelligence development that you have to pay keen attention to. And if you understand how the AI helps your decisions, it's traceable, so that you actually know how the algorithm works and how it comes to the conclusions or the predictions that it comes to.

If you understand those things, and you build your AI consistent with those principles, then you kind of graduate to the next step, which is, okay, does it do this in a responsible way? And does it do this in a governable way? Right? Can you actually ensure that the AI is acting responsibly in a human systems integration environment, or a verification and validation environment where you're trying to test an AI algorithm in the context in which it's supposed to perform?

And then finally, is it governable? At the end of the day, can you pull the plug on it and decide, you know what, I don't trust the data that's coming out of this particular algorithm in this context, so I'm not going to use the algorithm for that decision-making.

That's the core of what ethical AI is, rather than, here's a black box, listen to the black box, and whatever the black box tells you to do, that's what you're going to do. That's an unethical application of artificial intelligence in our mind. Alka, you can probably answer this question much better than I can. So please.

MS. PATEL: I think job well done there, general. We could talk about this for hours, so I want to be mindful of time. Let me just go back for a quick second and say the department already has a strong foundation, a strong enduring foundation, in history and ethics. Right.

And so to go back to your question earlier, we are the U.S. Department of Defense, and we have that strong foundation. Just because others may or may not doesn't mean we don't continue with our values and leading with our values. So I just wanted to reiterate that point.

When we talk about AI ethics, it's still a fairly new area, right, and it's being developed. There are sort of two ways to think about it. One is from the philosophical perspective, thinking about the context of, should we or should we not use AI for certain uses. We've seen other countries use it for tracking purposes, using facial recognition in instances where you would not want it to be utilized, and that are not consistent with our values.

And the other aspect of this is thinking about ethics from an applied perspective, which is what we're really talking about when it comes to our principles, right? So the principles set out the values; the five principles that the general was talking about set the values.

So the next step is, how do you actually take those values and turn them into process-driven steps, into checkpoints, into guardrails? As we're building these technologies, which are unique, we need to make sure that we're building those safeguards into the process when we think about the use case and the potential harms it might cause, because this is a socio-technical issue, right? This isn't just about technology where you're always going to get the same output all the time. That's not how the technology works.

And so we're thinking about what those safeguards and guardrails look like for the use case, for the data, for the model, for the output, and so it's more process-driven. Those principles outline how we think about it from a higher-level values perspective. But the implementation that I was alluding to earlier, that strategy and implementation plan, is how do we identify the actual processes? How do we think about the people who are responsible for those different steps? How do we think about those process flows, and how do we think about governance, to make sure that we always stay consistent and aligned with the principles?

CMDR. ABRAHAMSON: Ma'am?

Q: Lee Hudson with Aviation Week. So I understand that with the ADA initiative you're working with the COCOMs. But I wanted to see how you're actually going to be interacting with the individual services. For example, will you be participating in the Army's Project Convergence or any of the ABMS (Advanced Battle Management System) demonstrations the Air Force is doing? If you could talk about that.

GEN. GROEN: We absolutely will. And we're partnered very closely with all of the service development efforts. And so this is what gives us confidence as we kind of go into this ADA environment. You know, we have technologies in the JAIC that we think are going to be really helpful for combatant commanders.

But we also know that we have a deep bench of AI capabilities that have been developed by the services, and those are also really good tools that we might want to bring into an ADA environment. So this is not just JAIC technology that we're talking about. We're talking about technology that's already been built and tested and employed by the services, that we might be able to bring to combatant commands' decision-making space really quickly.

So I think we are key partners with both the ABMS series of exercises and with Project Convergence, and we have a great relationship with the services. We're working closely with the Navy on their Project Overmatch.

You know, the thing about this technology is that in the department there are so many parties that are eager to get started on this journey that we have flowers blooming all over the place, right? People are doing really good work.

And so what we want to do is, one, illuminate where somebody has something that's scalable from one service to another, or to a different echelon, or to a combatant command. We want to be kind of keepers of best practice, so we understand what's available and can make it broadly available across the force.

And that works two ways, because as we maybe proliferate technology that the Navy is working on, and we talk to a defense agency about that, that defense agency might also have some best practices that we can bring back to the Navy.

So one of the key elements of the JAIC, what we think is an important part of our mission, is to kind of be this broker of best practice, right? We learn from the people we work with, and then we can teach the people we work with from that body of knowledge that we collect. So absolutely. And the more we integrate, the better our collective output is going to be.

CMDR. ABRAHAMSON: And last question of the day is to Jared from Federal News Network. Jared, go ahead.

Okay, Jared is not on the line. So we'll go --

Q: I'm sorry, I'm here. Can you guys hear me now?

CMDR. ABRAHAMSON: Oh, Jared, there you are.

Q: Yup, didn't hit the mute button.

CMDR. ABRAHAMSON: Please go ahead, sir.

Q: My fault. Sorry about that.

CMDR. ABRAHAMSON: No worries.

Q: Thanks for doing this, everybody. I wanted to go back to the ADA initiative. The OCONUS (Outside the Continental U.S.) Cloud Strategy the department put out recently talks in a fair amount of detail about some of the challenges COCOMs have just in terms of basic IT infrastructure, you know, a lack of access to commercial cloud services and reachback to CONUS.

And I'm just curious, if that's right, how much of a hindrance is that going to be to these fly-away teams? I mean, can they do real, meaningful AI implementation if they're working in a basic nuts-and-bolts IT environment that's kind of primitive and siloed by modern standards?

GEN. GROEN: Yeah, great question, Jared. And honestly, that's the reason we're doing ADA, right, because what we want to do is experiment in the environments in which we expect our algorithms to work. You can do it in a lab, but when you bring that lab-tested capability out to the combatant commander or somewhere on the tactical edge, you're going to realize, holy cow, the latency here is horrible or it's intermittent, or holy cow, the reliability and the uptime of the servers that are required is not sufficient, right.

But by doing this ADA experimentation in the place that we expect our algorithms to work, we will discover the bureaucratic obstacles, the cultural obstacles, and the technical obstacles to making these things successful. And then we can bring that back. So we're great partners. We mentioned the CDO and their role in the data enterprise. But we're also great partners with the CIO, right?

So with the chief information officer, who actually owns, operates, builds, and fixes these networks, we can use what we observe in the real working environment to help inform upgrades to networks, upgrades to architecture, rearchitecting things that maybe have to be completely redone in a data-driven environment.

And then also from a policy perspective, maybe it's insufficient to have an authority-to-operate structure where you make decisions about what can go on what network in a very deliberate way, and maybe that's something we could update. That's the kind of stuff that we hope to understand: policy obstacles, cultural obstacles, technical obstacles, network obstacles.

And if we learn what those obstacles are, then we can address the real impediments to AI implementation. And I think that's critical. I'm sorry to continue to go on here, but I think it's a really important question. Because we can do design documents in the lab and build AI in the lab forever, but until we can actually employ it in the environments it's expected to operate in, and then expect it to work, we're not going to know. And that's unacceptable to us.

And so ADA is designed exactly for that purpose. We can find out for sure: does this work or does it not? Thanks for the question, Jared. It's a good question.
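[Editor's note: As a purely illustrative aside, the kind of edge-environment check General Groen describes, discovering whether latency and server uptime in the field can actually support a lab-tested capability, might look something like the minimal sketch below. This is a hypothetical example, not DOD tooling; the endpoint URL and thresholds are assumptions for illustration only.]

    # Hypothetical sketch: probe latency and availability of an edge inference
    # endpoint before deciding whether a lab-tested AI workload can run there.
    import statistics
    import time
    import urllib.request

    def probe_endpoint(url: str, attempts: int = 10, timeout: float = 2.0):
        """Return (success_rate, median_latency_seconds) for a simple HTTP health check."""
        latencies, successes = [], 0
        for _ in range(attempts):
            start = time.monotonic()
            try:
                with urllib.request.urlopen(url, timeout=timeout):
                    successes += 1
                    latencies.append(time.monotonic() - start)
            except OSError:
                pass  # count timeouts and connection failures against availability
        success_rate = successes / attempts
        median_latency = statistics.median(latencies) if latencies else float("inf")
        return success_rate, median_latency

    if __name__ == "__main__":
        # "http://edge-inference.local/health" is a placeholder, not a real endpoint.
        rate, latency = probe_endpoint("http://edge-inference.local/health")
        print(f"availability={rate:.0%} median_latency={latency:.3f}s")
        if rate < 0.99 or latency > 0.5:
            print("Environment may not support this workload as tested in the lab.")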

Q: Can I follow real quick just to ask, do you know how soon these teams are going to deploy? And where you're going to recruit from to actually staff them up?

GEN. GROEN: Yeah. So we're going to push our first data reinforcements out, I think, within 30 days or so. And we'll be working with combatant commands on fly-away teams, largely from the JAIC, along with folks we will contract to come with us, within 60 or 90 days. So this is coming really quickly.

And as I indicated before, combatant commands are busy people and busy staffs, and they have a lot going on. They have large chunks of the world that they're responsible for. So we have to be very attuned to their battle rhythm and to when they're available to commit to experimenting with us. We'll align to their schedule to make sure it works effectively for them, and then we'll report the results across the force.

And through repetition, we expect to do this about once a quarter: we'll get into an experimentation cycle with a combatant command and build capability about once a quarter. The data aspect of this is a little more continuous, so we'll have people there on a long-term, steady-state basis to help them shape their data and get it into the right condition.

Q: Can I ask a quick question before we wrap up?

CMDR. ABRAHAMSON: Okay. Who's on the line please?

Q: This is Jack Poulson from Tech Inquiry.

CMDR. ABRAHAMSON: Hi, Jack. Go ahead, sir.

Q: So there have been several companies that the Department of Defense has procured from, whether X-Mode Social or Clearview AI or (inaudible), that have arguably violated the AI principles' commitment to auditable data trails, whether that's X-Mode Social reportedly having sourced some of its data from a Muslim prayer app, or Clearview AI being sued in the state of Illinois for the way it scrapes social media.

I guess I'd be curious if you could detail what the retrospective process might have looked like for those companies, in analyzing whether they have met the Defense Innovation Board's recommended AI principles.

GEN. GROEN: Jane?

MS. PATEL: Sure, I'm happy to try to address this. So the DOD AI principles, right, were founded on, as you alluded to, the Defense Innovation Board's recommendations. And one of the efforts we are undertaking through our Tradewind project at the JAIC is looking at how we are going to build responsible AI recommendations and practices into the AI acquisition process.

So to your point, as we work with vendors, we want to make sure that the processes, practices, and principles they may have on their end align with ours. Oftentimes we talk about technical interoperability, but there's also this aspect of principles-and-practices interoperability.

And so what we're looking to do, working with the Responsible AI Institute via our Tradewind vehicle, is to map out the AI acquisition lifecycle and find all the various entry points where we can ask certain questions and do our own due diligence.

So, for example, understanding these organizations' practices around data: how they are doing their own data governance and where the data is coming from; thinking about and asking about their own ethics maturity assessment, so, does that organization even have principles, and if so, what are they doing to put them into practice; and understanding what their supply chain might look like from a responsible AI perspective.

And so that is something that is top of mind for us, and it's one of the efforts we're working on through our Tradewind project. And hopefully, again, as part of the memo that has a number of deliverables, there's one that is focused on acquisition. And I think that deliverable, which is due sometime in the early fall, addresses really what you're trying to get at.
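[Editor's note: As an illustrative aside only, the entry-point due-diligence idea Ms. Patel describes, questions about data governance, ethics maturity, and supply chain keyed to stages of an acquisition lifecycle, could be represented along the lines of the minimal sketch below. The stage names and questions are assumptions for illustration, not an official DOD checklist or the Tradewind design.]

    # Hypothetical sketch: vendor due-diligence questions mapped to notional
    # acquisition lifecycle entry points, so open items can be tracked per stage.
    from dataclasses import dataclass, field

    @dataclass
    class DueDiligenceItem:
        question: str
        answered: bool = False
        notes: str = ""

    @dataclass
    class AcquisitionStage:
        name: str
        items: list[DueDiligenceItem] = field(default_factory=list)

        def open_items(self) -> list[str]:
            # Questions not yet answered at this entry point
            return [i.question for i in self.items if not i.answered]

    lifecycle = [
        AcquisitionStage("market research", [
            DueDiligenceItem("Does the vendor publish AI or data-ethics principles?"),
        ]),
        AcquisitionStage("source selection", [
            DueDiligenceItem("How does the vendor govern its data, and where does the data come from?"),
            DueDiligenceItem("What does the vendor's supply chain look like from a responsible AI perspective?"),
        ]),
        AcquisitionStage("contract award", [
            DueDiligenceItem("Has the vendor completed an ethics maturity assessment?"),
        ]),
    ]

    for stage in lifecycle:
        print(stage.name, "->", stage.open_items())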

GEN. GROEN: Yeah, I think that's a great question, and I'm glad you asked it. Because, as evidenced this afternoon, we often think of AI ethics and the ethical principles of responsible AI in the context of the algorithms themselves, right?

And one of the most important pieces we're doing, and Alka alluded to it here, is building an AI acquisition capability that produces AI acquisition experts, so that in the future, as this capability matures, we'll actually have people who are trained to look at that sort of thing. And I could see these kinds of questions being part of the source selection criteria for vendors as we bring vendors onto new projects.

And so I think that speaks to a systemic implementation of responsible AI principles and ethical principles right from the get-go, before we ever even bring somebody on contract to do this. So we can look inside our own house, certainly, and I think we're getting better at learning how to look outside our house to make sure that the wrong kinds of practices don't come into the tent.

CMDR. ABRAHAMSON: Okay, thank you very much. That will conclude today's press conference. Some of you have my contact information. For those who don't, if there are follow-ups, you can contact the OSD Public Affairs Duty Officer and they'll connect you with me. So thank you very much for attending tonight.

GEN. GROEN: Can I make one more comment? 

CMDR. ABRAHAMSON: Please go ahead.

GEN. GROEN: So, for those of you who are regular members of the circuit here, we in the JAIC have been served just tremendously by our public affairs officer, Commander Abrahamson. And I tell you what, he has done a fantastic job, and I hope you've had the same experience.

For me, he's been thoughtful and very patient, and he's pulled these events together for a long time. Arlo is going to be moving on. He's moving up, one success after another for this guy. He's going to go back to work for the Navy for a little while, and we hope we see him back in the joint force here soon.

You know, maybe he'll take Mr. Kirby's job someday. But I tell you what, in the presence of those of you who have worked with Arlo, I really wanted to say thank you, Arlo, for what a great job you've done as a PAO (Public Affairs Officer).

CMDR. ABRAHAMSON: Thank you, sir.

GEN. GROEN: I appreciate it.

CMDR. ABRAHAMSON: It's my pleasure. All right. Thank you very much, everybody.

VIDEO | 1:05:40 | Joint Artificial Intelligence Center Director Holds News Conference