Good afternoon. Thank you for joining us to talk about the State of AI in the Department of Defense.
So, earlier this week President Biden and Vice President Harris spoke eloquently about the Administration’s commitment to advancing the safe, secure, and trustworthy development and use of artificial intelligence — and the President signed an executive order that lays out a strong, positive vision for government-wide responsible AI adoption, setting a model for industry and the world. As part of that, here at DoD we look forward to working with the White House and other national security agencies on a National Security Memorandum on AI that we expect will build on the responsible AI work we’ve done here at DoD.
Because this is a topic that we care about a lot. And we’ve been working on it for quite some time. For not only is AI on the minds of many Americans today; it’s a key part of the comprehensive, warfighter-centric approach to innovation that Secretary Austin and I have been driving from day one.
After all, DoD is hardly a newcomer to AI. The Pentagon has been investing in AI and fielding data- and AI-enabled systems for over 60 years:
- from DARPA funding for the first academic research hubs of AI, at MIT, Stanford, and Carnegie Mellon in the 1960s;
- to the Cold War-era SAGE air defense system, which could ingest vast amounts of data from multiple radars, process it in real time, and produce targeting information for intercepting aircraft and missiles;
- to the Dynamic Analysis and Re-planning Tool, DART, that DoD started using in the early 1990s, saving millions of dollars and logistical headaches in moving forces to the Middle East for Operations Desert Shield and Desert Storm.
More recently, Apple’s Siri has roots not just in decades of DoD-driven research on AI and voice recognition, but also in a specific DARPA project to create a virtual assistant for military personnel.
Of course, increasingly over the last dozen years, advances in machine learning have heralded and accelerated new generations of AI breakthroughs, with much of the innovation happening outside DoD and government. And so our task in DoD is to adopt these innovations wherever they can add the most military value.
That’s why we’ve been rapidly iterating and investing over the past two-plus years to deliver a more modernized, data-driven, and AI-empowered military now.
In DoD we always succeed through teamwork, and here we’re fortunate to work closely with a strong network of partners: in national labs, universities, the intelligence community, traditional defense industry, and also non-traditional companies, in Silicon Valley and hubs of AI innovation all across the country. In several of those, we’re physically present, including through offices of the Defense Innovation Unit, which we recently elevated to report directly to the Secretary.
As we’ve focused on integrating AI into our operations responsibly and at speed, our main reason for doing so has been straightforward: it improves our decision advantage.
From the standpoint of deterring and defending against aggression, AI-enabled systems can help commanders decide faster, and improve the quality and accuracy of those decisions — which can be decisive in deterring a fight, and in winning a fight.
And from the standpoint of managing across the world’s largest enterprise — since our vast scale can make it difficult for DoD to see itself clearly, spot problems and solve them — leveraging data and AI can help leaders make choices that are smarter, faster, and even lead to better stewardship of taxpayer dollars.
Since the spring of 2021, we’ve undertaken many foundational efforts to enable all of this, spanning data and talent and procurement and governance. For instance:
- We issued data decrees to mandate all DoD data be visible, accessible, understandable, linked, trustworthy, interoperable, and secure.
- Our AI and Data Acceleration (ADA) initiative deployed data scientists to every Combatant Command, where they’re integrating data across applications, systems, and users.
- We awarded Joint Warfighting Cloud Capability contracts to four leading-edge commercial cloud providers, ensuring we have computing, storage, network infrastructure, and advanced data analytics to scale on demand.
- We stood up DoD’s Chief Digital and Artificial Intelligence Office, or CDAO, to accelerate adoption of data, analytics, and AI from the boardroom to the battlefield. The Secretary and I are ensuring CDAO is empowered to lead change with urgency, from the E Ring to the tactical edge.
- We’ve also invested steadily and smartly in accompanying talent and technology: more than $1.8 billion in AI and machine learning capabilities alone over the coming fiscal year.
- And today, we’re releasing a new Data, Analytics, and AI Adoption Strategy — which not only builds on DoD’s prior-year AI and data strategies, but also includes updates to account for recent industry advances in federated environments, decentralized data management, generative AI, and more. I’m sure our CDAO, Dr. Craig Martell, will say more about that when you all speak with him later this afternoon.
All this and more is helping realize Combined Joint All-Domain Command and Control, CJADC2. To be clear, CJADC2 isn’t a platform or single system we’re buying. It’s a whole set of concepts, technologies, policies, and talent that are advancing a core U.S. warfighting function: the ability to command and control forces.
So we’re integrating sensors and fusing data across every domain, while leveraging cutting-edge decision support tools to enable high-optempo operations. It’s making us even better than we already are at joint operations and combat integration.
CJADC2 is not some futuristic dream. Based on multiple Global Information Dominance Experiments, work in combatant commands like INDOPACOM and CENTCOM, and work in the military services, it’s clear these investments are rapidly yielding returns.
That’s the beauty of what software can do for hard power. Delivery doesn’t take several years or a decade. Our investments in data, AI, and compute are empowering warfighters in the here and now — in a matter of months, weeks, and even days.
We’ve worked tirelessly, for over a decade, to be a global leader in the fast and responsible development and use of AI technologies in the military sphere, creating policies appropriate for their specific uses. Safety is critical, because unsafe systems are ineffective systems.
The Pentagon first issued a responsible use policy for autonomous systems in 2012. And we’ve maintained our commitment since, as technology has evolved: adopting and affirming ethical principles for using AI; issuing a new strategy and implementation pathway last year focused on responsible use of AI technologies; and updating that original 2012 directive earlier this year, to ensure we remain the global leader of not just development and deployment, but also safety.
As I’ve said before, our policy for autonomy in weapon systems is clear and well-established. There is always a human responsible for the use of force. Full stop.
Because even as we are swiftly embedding AI in many aspects of our mission — from battlespace awareness, cyber, and reconnaissance, to logistics, force support, and other back-office functions — we are mindful of AI’s potential dangers, and determined to avoid them.
Unlike some of our strategic competitors, we don’t use AI to censor, constrain, repress or disempower people. By putting our values first and playing to our strengths, the greatest of which is our people, we’ve taken a responsible approach to AI that will ensure America continues to come out ahead.
Meanwhile, as commercial tech companies and others continue to push forward the frontiers of AI, we’re making sure we stay at the cutting edge — with foresight, responsibility, and a deep understanding of the broader implications for our nation.
For instance, mindful of the potential risks and benefits offered by large language models and other generative AI tools, we stood up Task Force Lima to ensure DoD responsibly adopts, implements, and secures these technologies.
Candidly, most commercially available systems enabled by large language models aren’t yet technically mature enough to comply with our ethical AI principles, which is required for responsible operational use.
But we have found over 180 instances where such generative AI tools could add value for us, with oversight — like helping to debug and develop software faster, speeding analysis of battle-damage assessments, and verifiably summarizing text from both open-source and classified datasets.
Not all of these use cases are notional. Some DoD components started exploring generative AI tools before ChatGPT and similar products captured the world’s attention. A few even made their own models: isolating foundation models; fine-tuning them for specific tasks with clean, reliable, secure DoD data; and taking the time to further test and refine the tools.
While we have much more evaluating to do, some of these models may make fewer factual errors than publicly available tools, in part because, with effort, they can be designed to cite their sources clearly and proactively.
Although it would be premature to call most of them “operational,” it’s true that some are actively being experimented with, and even used as part of people’s regular workflows — of course, with appropriate human supervision and judgement, not just to validate but also to continue improving them.
We are confident in the alignment of our innovation goals with our responsible AI principles. Our country’s vibrant innovation ecosystem is second-to-none precisely because it’s powered by a free and open society committed to responsible-use values and ideals.
We are world leaders in the promotion of the responsible use of AI and autonomy, with our allies and partners. One example is the Political Declaration that we launched back in February and that Vice President Harris highlighted in London this week, which creates strong norms for responsible behavior. As the Vice President noted, over 30 countries have endorsed the declaration, ranging from members of the G7 to countries in the Global South.
Another example is our AI Partnership for Defense, where we work with allies and partners to talk through how we can turn our commitment to responsible AI into reality.
Those common values are a big reason why America and the U.S. military have many capable allies and partners around the world, and why growing numbers of world-leading commercial tech innovators want to work with us. Our strategic competitors can’t say that — and we are better off for it.
Those nations take a different approach. It’s deeply concerning, for instance, to see some countries using generative AI for disinformation campaigns against America, as has been reported by tech companies and the press.
But there is still time to work toward more responsible approaches.
For example, in the 2022 Nuclear Posture Review the United States made clear that in all cases, we will maintain a human “in the loop” for all actions critical to informing and executing decisions by the President to initiate, terminate, or change nuclear weapons employment.
Other nations have drawn similar bright lines. We call on more countries to do the same, and would welcome those commitments. And we should be able to sit down, talk, and try to figure out how to make such commitments credible. We hope all nations would agree.
As we’ve said previously, the United States does not seek an AI arms race with any country, including the PRC, just as we do not seek conflict. With AI and all our capabilities, we seek only to deter aggression and defend our country, our allies and partners, and our interests.
That’s why we will continue to encourage all countries to commit to responsible norms of military use of AI. And we will continue to ensure our own actions clearly live up to that commitment: from here at the Pentagon and across all our commands and bases worldwide, to the flotilla of uncrewed ships that recently steamed across the entire Pacific, to the thousands of all-domain attritable autonomous systems we aim to field in the next two years through DoD’s recently announced Replicator initiative.
The state of AI in DoD is not a short story, nor is it static — we must keep doing more, safely and swiftly, given the nature of strategic competition with the PRC, our pacing challenge.
At the same time, we benefit from a national position of strength, and our own uses grow stronger every day. And we will be keeping up the momentum, ensuring we make the best possible use of AI technology responsibly and at speed.
With that, I’ll take your questions.