Secretary of Defense Austin Remarks at the Global Emerging Technology Summit of The National Security Commission on Artificial Intelligence (As Delivered)

Good afternoon.

Thank you, General Kumashiro, for that kind introduction and for your important work with this commission.

Thank you all so much for being here, including those of you joining us online.

And the fact that so many of us can gather safely in person here in Washington is a testament to what science and leadership can do.

You know, it says a lot about this commission that you’ve pulled together such an impressive lineup of speakers for this summit.

You’ve also brought together your impressive commissioners, including Eric Schmidt, Safra Catz, Gilman Louie, Chris Darby, and Katharina McFarland.

It’s truly great to see so many friends here, including Bob Work, who’s made such tremendous contributions as a commissioner and as a distinguished leader of the Department.

As this commission has argued, cooperation is key to ensuring that the forces of technology support the forces of democracy.

I couldn’t agree more, and I’m very grateful for all you do.

Now, I’d like to talk today about some big changes that we are bringing to the Department of Defense with respect to artificial intelligence. And they represent some big changes to some old ways of thinking.

This commission calls AI “the most powerful tool in generations for benefiting humanity.”

It’s a capability that this Department urgently needs to develop even further.

AI is central to our innovation agenda, helping us to compute faster, share better, and leverage other platforms.

And that’s fundamental to the fights of the future.

And we are all now present at the creation… all part of a new age of technology.

As President Biden has said, we are determined to work with our like-minded partners to shape the rules and the norms that will govern those sweeping advances.

That means standing up for democratic values even—and especially—in times of great change.

And it means ensuring that technologies like AI are, as the President has put it, “used to lift people up, not used to pin them down.”

So we are renewing our efforts to posture ourselves for what I would call the future fight.

Now, obviously, if it comes down to fighting, we will do so.

And we will win, and we will win decisively.

But our first goal should be to prevent conflict… and to deter adversaries. 

And that demands of us a new vision for deterrence in this century.

We call this vision integrated deterrence.

I’ll have more to say about this in the weeks to come, but basically, integrated deterrence is about using the right mix of technology, operational concepts, and capabilities—all woven together in a networked way that is so credible, and flexible, and formidable that it will give any adversary pause.

Integrated deterrence means working closely with our friends and partners, just as this commission has urged.

It means using some of our current capabilities differently.

It means developing new operational concepts for things that we already do.

And it means investing in cutting-edge capabilities for the future, in all domains of potential conflict.

America’s integrated deterrence relies on both innovation and investment. And we understand that those are interwoven.

Innovation requires the resources to develop new ideas and scale them appropriately.

And investment pays off when it’s focused on the challenges of tomorrow, and not yesterday.

Tech advances like AI are changing the face and the pace of warfare.

But we believe that we can responsibly use AI as a force multiplier… one that helps us to make decisions faster and more rigorously, to integrate across all domains, and to replace old ways of doing business.

AI and related technologies will give us both an information and an operational edge… and that means a strategic advantage.

But we know that truly successful adoption of AI isn’t just like, say, procuring a better tank.

You know, a closer analogy might be the Department’s use of computers.

And that began with a few critical applications, and over the decades became embedded in nearly every military system.

In a future that increasingly feels as if it’s already here, AI holds the promise of superior performance across a wide range of platforms and systems.

Over just the past decade, progress in AI research, especially in machine learning, has accelerated dramatically.

We see AI as a transformative technology, one that will require new processes, new policies, and new procedures across the Department.

Used right, AI capabilities can play a critical role in all four areas of the Joint Warfighting Concept that I approved this spring: Joint Fires, Joint All Domain Command and Control, Contested Logistics, and Information Advantage.

Today, across the Department, we have more than 600 AI efforts in progress… significantly more than just a year ago.

That includes the Artificial Intelligence and Data Acceleration initiative, which brings AI to bear on operational data.

It includes Project Salus, a predictive tool for finding patterns in COVID-19 data that the Department built from scratch with some top Silicon Valley companies starting last March.

And it includes the Pathfinder project, which is an algorithm-driven system that helps us better detect airborne threats by using AI to fuse data from military, commercial, and government sensors in real time.

Of course, the Department, and especially DARPA, has a long history of AI research.

You know, in the 1960s, DARPA research shaped the so-called “first wave” of AI.

And today, through its multi-year investment of more than 2 billion dollars, DARPA’s “AI Next” campaign is paving the way for the future “third wave.”

I recently visited the professionals up at DARPA and I was so impressed to learn about their more than 60 programs that are applying AI, including using it to detect and patch cyber vulnerabilities.

And we’re just getting started.

And DARPA is just one of the many research, test, and evaluation organizations across the Department.

As this commission has recommended, we elevated the Joint Artificial Intelligence Center so that it reports directly to the Deputy Secretary, ensuring that we have the focus from senior leaders needed to drive AI transformation.

Over the next five years, the Department will invest nearly 1.5 billion dollars in the center’s efforts to accelerate our adoption of AI.

Done responsibly, leadership in AI can boost our future military tech advantage—from data-driven decisions to human-machine teaming.

And that could make the Pentagon of the near future dramatically more effective, more agile, and more ready.

But obviously, we aren’t the only ones who understand the promise of AI.

China’s leaders have made clear they intend to be globally dominant in AI by the year 2030.

Beijing already talks about using AI for a range of missions, from surveillance to cyberattacks to autonomous weapons.

In the AI realm as in many others, we understand that China is our pacing challenge.

We’re going to compete to win, but we’re going to do it the right way.

We’re not going to cut corners on safety, security, or ethics.

And our watchwords are responsibility and results.

And we don’t believe for a minute that we have to sacrifice one for the other.

We’re going to rely upon the longstanding advantages of our open system, and our civil society, and our democratic values.

That’s our roadmap to success, and I wouldn’t trade it for anyone else’s.

You know, American power has long been rooted in American innovation.

And that’s even truer today.

Our powerhouse universities and our nimble small businesses are brimming with good ideas.

And we’re working as their partners, through initiatives like our recently launched Institute for Nascent Innovation Consortium, which brings together small companies in a problem-solving network to tackle some of the government’s hardest tech challenges.

But ultimately, AI systems only work when they are based in trust.

We have a principled approach to AI that anchors everything that this Department does.

We call this Responsible AI, and it’s the only kind of AI that we do.

Responsible AI is the place where cutting-edge tech meets timeless values.

And again, you see, we don’t believe that we need to choose between them—and we don’t believe doing so would work.

The commission speaks of establishing “justified confidence” in AI systems.

And we want that confidence to go beyond just ensuring that AI systems function, to ensuring that they support our founding principles.

So our use of AI must reinforce our democratic values, protect our rights, ensure our safety, and defend our privacy.

Of course, we clearly understand the pressures and the tensions. And we know that evaluations of the legal and ethical implications of novel tech can take time.

AI is going to change many things about military operations, but nothing is going to change America’s commitment to the laws of war and the principles of our democracy.

So we have established core principles for Responsible AI.

Our development, deployment, and use of AI must always be responsible, equitable, traceable, reliable, and governable.

We’re going to use AI for clearly defined purposes.

We’re not going to put up with unintended bias from AI.

We’re going to watch out for unintended consequences.

And we’re going to immediately adjust, improve, or even disable AI systems that aren’t behaving the way that we intend.

To underscore this culture of responsibility, in May, the Department reaffirmed its commitment to our AI Ethics Principles.

And that includes training a workforce ready for Responsible AI; establishing structures for oversight; and cultivating a robust ecosystem for Responsible AI.

And I should note the outstanding efforts of our Deputy Secretary, Kath Hicks, in this crucial effort. An amazing job by a very talented professional.

Now, our wider vision of integrated deterrence relies on our unmatched network of allies and partners worldwide.

And so does our approach to Responsible AI.

We’re working together with our like-minded friends to advance global norms grounded in our shared values.

So the Department and 15 of our allied and partner countries are meeting several times a year in the AI Partnership for Defense.

As we’ve accelerated our integration of AI, we have, of course, relied heavily on expert advice, including recommendations from this commission.

You’ve pushed us to increase our investments in AI development and fielding…

 And in June, we announced the creation of the Rapid Defense Experimentation Reserve, which helps us get promising tech across the so-called “valley of death” and into new prototypes, capabilities, and concepts.

You’ve urged us to build the technical backbone to support AI systems throughout their life cycle…

And we have just launched the Department's new AI and Data Acceleration initiative, which will help us harness data at scale and speed up the gains from leveraging AI.

Your report also recommends that the Department’s budget focus more on science and technology.

And you know what, you're exactly right.

That’s why this year’s budget asks for 112 billion dollars for research, development, testing, and evaluation.

It is the Department’s largest R&D request ever.

And in that request, AI is one of the Department’s top tech modernization priorities.

And we’re not just investing in individual AI applications, either.

We’re investing in the infrastructure and the reforms to make our efforts more effective.

And our final and most important investment is in our people.

We're going to have to do a lot better at recruiting, training, and retaining talented people, often young people, who can lead the Department into and through the AI revolution.

And that means creating new career paths and new incentives.

It means including tech skills as a part of basic-training programs.

And it means a significant shift in the way this institution thinks about tech.

You know, some of our troops leave homes that are decked out in state-of-the-art personal tech and then they spend their workday on virtually obsolete laptops.

You’re familiar with this.

And we still see college graduates and newly minted Ph.D.s who would never think about a career in the Department.

So we have to do better. We have to do better.

Emerging technologies must be central to our strategic development.

We need to tackle our culture of risk aversion.

We need to smarten up our sluggish pace of acquisition.

And we need to more vigorously recruit talented people, and not scare them away.

In today’s world, in today’s Department, innovation cannot be an afterthought.

It is the ballgame.

As President Biden has noted, we’re going to “see more technological change in the next 10 years than we saw in the last 50.”

And we know that some of our competitors think they see an opening.

But we are determined, as the President says, “to develop and dominate the products and technologies of the future.”

That’s central to our agenda.

And that mission is far easier because of two of America’s greatest assets: the creativity of an open society and the ingenuity of an open mind.

We’re going to need the help of our friends in all this… and believe me, we’re going to continue to lean on you.

But we’re going to get this done.

And we’re going to get it done right.

And we’re going to get it done together.

Thank you very much.