
Chief Digital and AI Officer Craig Martell Holds a Press Briefing to Discuss DOD's 2023 Data, Analytics and AI Adoption Strategy

DR. MARTELL:  Thank you all for joining this call this afternoon.  I'm here today to provide background on the department's new Data, Analytics and AI Adoption Strategy.  First, I want to recognize that we are publishing this strategy at a time when the attention on this technology has perhaps never been greater and America's clear leadership on A.I. has never been more important.

Earlier this week, the White House issued an executive order prioritizing the safe development and deployment of A.I. technologies, and yesterday, Vice President Harris announced during the A.I. Safety Summit in London that 30 of our global partners and allies have joined us in signing a political declaration that commits to the responsible military use of A.I.

In this light, I want to reinforce Dr. Hicks' remarks from earlier.  A.I. is not new to the Department of Defense.  We've been doing it for over 60 years.  A.I. is like any other technology that we consider in the department.  We determine what use cases the technology is suited for and we do the work to make sure that we deploy it responsibly.

In the CDAO, we are responsible for accelerating the DOD's adoption of data analytics and A.I. and ensuring scalable A.I.-driven solutions are delivered for enterprise and joint use cases. 

I want to emphasize that this is not a capability development strategy.  Technologies evolve, things are going to change next week, next year, and next decade, and what wins today might not win tomorrow.

Rather than identify a handful of A.I.-enabled warfighting capabilities that will beat our adversaries, our strategy outlines the approach to strengthening the organizational environment within which our people can continuously deploy data, analytics and AI capabilities for decision advantage. 

STAFF:  All right, with that, we will start Kristina Wong from Breitbart.

Q:  Hey, thank you so much, Dr. Martell, for doing this.  Is CDAO leading the implementation of this strategy?  And what's the timeline for doing that?  And then also, can you just give us an update on the steps CDAO has taken since the Pulse Survey earlier this year on improvement...

DR. MARTELL:  Sure, sure...

Q:  ... the leadership and morale?  Thank you so much.

DR. MARTELL:  Yes.  We are leading the deployment of this, and there's no one timeline.  We have specific timelines for specific things we're trying to drive, and I will say the main way that we are getting at the attitude that we want to see in the department is by picking marquee use cases, and these are the marquee use cases that we believe are going to create stickiness, that people — once there's an administration change, once all the leadership here changes, that the work that we've done will maintain.

And those use cases are, first, putting a metrics dashboard — a business health dashboard — on the Deputy Secretary's desk.  And if you think about it, a dashboard on the Deputy Secretary's desk that gives her sort of the state of the business means that the data had to percolate all the way up from the bottom to her desk.

So, we're using that as a way to figure out what's the right data and the right metrics that the individual components need to be able to run their section of the business correctly and report up to her.  So, that's one use case — the business health metrics.          

And so, we have a set of initiatives with varying timelines. 

And the other use case, which I think we've talked about many times before, is CJADC2, because our team is building out the data integration layer, and we're using that data integration layer really as a pathfinder demonstration of the right way to get data right, make it accessible and get it where it needs to be in a timely manner.

With respect to CJADC2, it really is about the combatant commanders getting the right situational awareness to make the right decisions quickly.

So, that was your first question.  And then about the Pulse Survey: we hired an organization called Gaping Void, and they're helping us think through and work through these issues — they're doing surveys throughout the whole department. 

We've had a number of offsites, a number of town halls and Gaping Void is going through all of our leadership and all of the teams and figuring out where they feel things are frustrating, where they feel things are working, and then they work with us and the leadership teams to figure out the kinds of changes we can make to address those issues.  

Q:  Thank you so much.

STAFF:  All right, next we'll go to Jaspreet Gill from Breaking Defense.

Q:  Hi, thank you for doing this.  Can you tell us what the top changes are in this version of the AI strategy compared...

DR. MARTELL:  Sure...

Q:  ... the '18 one?  I — I know the Deputy Secretary just spoke about how it's taking into account advancements from industry, but if you can just hit on some other areas?

DR. MARTELL:  Yes, absolutely.  In 2018 — and look, the following is not a criticism, I think it was the right thing to do for 2018 — the then-JAIC focused on building a centralized AI/ML pipeline, and that made a lot of sense for 2018, because even industry hadn't yet figured out how to deliver that as a product to customers.

But by 2022, every one of the major vendors was delivering a robust, industrial-scale ML ops pipeline.  So, there's really no need for us to build that internally.  In fact, that's one of the first things that we sunset when I arrived. 

And we didn't sunset it because it was bad, we sunset it because it doesn't make any sense for the Department of Defense to run a centralized ML ops pipeline — first of all, the Department of Defense is the world's largest organization, and putting it in one place doesn't scale.  And every one of the major vendors has one that's better.

And so, our view now is let's let any component use whichever ML ops pipeline they need, as long as they're abiding by the patterns of behavior that we need them to abide by. 

And those patterns are going to include things like how are you monitoring your model to make sure that your model's bringing continual value.  Because models change.  The value of the model degrades over time and you have to always keep it up to date.  You always have to retrain it.  So, how are you evaluating that?

How are you doing your data labeling before you build the model?  And how are you making your data and the results of that model accessible to the rest of the department? 

So, the strategy really is about how do you get that data right, how do you make that data accessible and how do you make that data easily discoverable so that anybody can bring what tools they have and build an AI model?  Because I want to be really clear: you cannot build an AI model if you don't have high quality data.  That's a nonsensical statement or a fool's errand — pick either one.

Q:  Thank you.  And I have a follow up, is there an implementation plan for this strategy?  And if so, when will that be released?

DR. MARTELL:  I'm going to defer to Andrew about timing on the implementation plan.

ANDREW PEPPLER, CDAO:  Yes, that's right — can folks hear me?

(UNKNOWN):  Yes.

PEPPLER:  Yes, so we are developing implementation guidance that will accompany the strategy, and we expect that to be published in the next couple of months here.  But it will look a bit different from traditional implementation plans because we need to think differently about agile approaches to adoption like the one that we've captured in this strategy.

DR. MARTELL:  Yes, thank you, and let me piggyback on top of that.  Our implementation plan is really going to be here's a set of best practices, here's a set of patterns.  It's not going to look like a traditional plan where you have to do exactly A, exactly B, exactly C, exactly D. 

Because the components — think of the services — each have wildly different needs, they're at wildly different points in their journey, and they have wildly different infrastructure.  So we're going to insist on patterns of shareability, patterns of accessibility, patterns of discoverability, and we're going to allow a lot of variance in how those are implemented.

STAFF:  Okay, up next, Georgina DiNardo from Inside Defense.

Q:  Hi.  As the strategy fact sheet states, the Department identified an AI hierarchy of needs.  Could you talk a little bit about how the hierarchy was established and how it will be addressed and followed in a timely manner?  Thank you.

DR. MARTELL:  Sure.  Let’s think about the hierarchy of needs as a logical hierarchy of needs and not a temporal hierarchy of needs.  And I'll unpack that in a second, but look, logically speaking, if you do not have high quality data, again analytics is a fool's errand.  If you do not have high quality data, AI is a fool's errand. 

So, the thing that has to come logically first — and I'll explain that difference in a second — is that we have to really focus on getting the data high quality and getting the data available and accessible.

Now, does that mean that we won't make it available till the data's high quality?  Absolutely not.  We're going to make less than high quality data available, and then we'll iterate on that and make it better and better as we go, and we'll discover what needs to be iterated upon by building apps on top of it. 

So, this is an iterative approach: we build things, the world changes, so we adjust and adapt; we build them again, we adjust, we adapt.  This agile approach is the way we're thinking about this.

So, getting data right is logically first.  And logically second is analytics and metrics.  And I put those two together because, one, if you don't have metrics, then how do you know how well you're doing?  And we need data-driven metrics, and so those depend upon high quality data.

And analytics is in that layer because I think I've said this publicly before, but you know, 60 to 75 percent of the use case demands that I've seen for AI aren't really demands for AI, they're demands for high quality data and visibility into what that data is saying, i.e. analytics.

So, getting analytics and metrics right is extremely important.  And finally, AI: with high quality data and rigorous metrics, we can then actually build high quality AI.

So, that hierarchy of needs is a logical hierarchy.  And it was developed when I came in and asked how our energy should be divided up.  We have limited energy and we have to build a sustainable solution — a solution that persists after the current set of folks leave — and to do that, getting high quality data right is the right thing to do first, getting metrics and analytics right is second, and then finally comes some really high value AI.

Now, the reason I say it's logically first and not temporally first is because we can't wait to build analytics until the data's right, and we can't wait to build AI until the data's right.  But the way it stands now, if we contract with a vendor and they build a really excellent AI solution, there's going to be some hackiness in the way that data flows. 

There's going to be some hackiness in who can have access to the data that's under that AI — it's going to be a little bit stovepiped, and that's okay for now.  We have to take that on as debt, but once we are done getting the data layer right, we're going to have to pay that back and refactor that model so it's using the open data and writing its results back into that data layer. 

I don't know if that made sense?  Please feel free to follow up if it didn't.

Q:  No...

Q:  ... thank you, it did. 

STAFF:  Okay, up next, we have Joseph Gedeon from Politico.

Q:  Hi, yes, thanks for doing this.  You talked about this a little bit earlier, but the executive order earlier this week emphasizes how AI development needs to be tied with cyber safety and security to protect it from foreign adversaries and attackers and hackers and all that. 

So, can you get a little bit more specific on how some of your cyber standards will match up with what you're working on?

DR. MARTELL:  Sure.  We abide by whatever strong guidance the CIO has given about how to deploy software — a model's just software, right — so how to deploy software, how to make sure that all of the tools that we're building and the models that we deploy are wrapped in the appropriate zero trust, for example.

So, our view is that what we're delivering is software, and for that software we are abiding by all of the things that the CIO has provided us.  And we work very closely with them to make sure that assumption I just made holds — that there's not something different about models when it comes to where there could be security leaks.

And I'll give an example where it's a little bit different from "just software": when you're using a large language model that includes your own data.  So we're working with them right now, as part of Task Force Lima, to decide: is it the case that we can bring in some off-the-shelf large language model, add our data to it, and be assured that that data doesn't leak back to the originator of that model? 

And there's some really hard questions there.  We don't even know all of them, to be honest with you, and so that's a big part of what Task Force Lima is doing.  Before we feel good giving the seal of approval to a particular use case, we need to figure out what it means to validate that large language model use case with respect to the security criteria that the CIO has developed for us.

Now — and again, we might change — we're working with them, they might have to adapt if Gen AI presents a bigger challenge than we think.

Q:  Thank you. 

STAFF:  Okay, Pat Tucker from Defense One?

Q:  Hey, thank you for doing this.  So, one of the things that you and others have talked about before is of course that the United States isn't developing this capability in a vacuum.  There are other very large high-tech nations — mostly China — that are moving very quickly on military applications, security applications, influence applications, economic applications, et cetera, for AI.

Can you speak to how this strategy specifically allows the US to outpace China in — in either developing...


Q:  ... capabilities or is it more that this is what we have to do, we have principles that guide our movement and allies that have partnered with us to develop this stuff, and so, all of those principles mean we have to work twice as hard, because we're facing an adversary that doesn't have the same principles, ethics or regulations on use?

DR. MARTELL:  I see, great.  Yes — I think the strategy is really an architectural strategy, and a building strategy.  And it's actually specifically designed to address speed and agility.  So, here's how I think about when and where you might apply AI.   

So, let's start with this: AI is not a monolithic technology.  I know every vendor wants you to believe that their AI out of the box is the thing that solves everything.  But it's not true.  AI is not a monolithic technology; it's a whole set of statistically based technologies, which are amenable to some use cases and not amenable to others.

So, you can apply it to some and it works great; you can apply it to others and it doesn't work well.  And so, this agility allows us to test that — find use cases where we believe an AI-based solution might be helpful, and then very quickly empirically test whether it is.

Think about it: if you have the data right, if you have the data labeled right, you can quickly do an experiment to see if you're going down the right road.  And those experiments fail all the time — they fail more often than they succeed, because that's the nature of this kind of business.  The model-building experiments are what I mean — I can just see the headlines saying something different.

The model-building experiments fail often, and we try something new and we try something new and we try something new.  And that sort of agility has to be part of the culture if we're going to keep pace with any of our adversaries.

So, this is really designed — I think it's the right way to do it in general, but it also has the side effect of increasing our agility and speed.

And then with respect to use cases that our potential adversaries might be using AI for, some of them are going to work and some of them are not, and we just have to pay attention to which vectors from their side we need to defend against, and which ones we don't.

You know, we definitely need to defend against misinformation, but when people say, you know, we're going to apply AI to X, that's a fairly meaningless statement to me.  I need to understand what use case you're applying it to, what are your criteria for success, what data are you using and how are you measuring that success?

So, I really want us to think about this as use case by use case by  use case.  There isn't a single box that you can buy or a single thing floating in the sky called AI that monolithically solves all problems.

Q:  Okay.  And just super quick follow up, given now that you have the strategy that you're producing and you have at DOD the ducks in a row to move out on this strategy, what would you say is the next big imperative for the US government broadly in terms of sharpening their position in that competition, particularly against China?  Is it workforce?  Is it chip supply?  Is it different ethics?  Where would you direct focus?

DR. MARTELL:  I'm going to stay closer to me than the broad question that you asked, and I would say the next big thing we need to figure out is this: now that we have clarity about what we want to build — and you can use the data mesh as the metaphor for what the underlying layer is going to look like, though that's kind of a vague metaphor — we can't build that ourselves.  It's not something the government should build.  

We have to work very closely with our industrial partners, and what we have to figure out now — and it's really an acquisitions question — is how do we get our industrial partners to work with us in a way where they help us build out this open-standard data layer and the data that they provide isn't locked up in a silo.  That's going to be our biggest challenge.

If we end up having providers continually locking data up in silos and not in this data mesh that allows for free discovery and accessibility of the data, then that's going to be a blocker, so that has to be a real challenge we have to break through.

But I feel confident — you know, my conversations with all of the big folks, my conversations with Palantir, with Google, with Oracle, with Microsoft have all been very fruitful, and I think everybody's on board with this new way of thinking. 

And think about it, if a vendor comes to the game and there's a bunch of data available to them, they can build a significantly better app than if they have to go do the hard work to find the data they need and silo it. 

So, given the way we're thinking about it, I actually think industry is really going to jump on the bandwagon, because it's actually beneficial to them as well.

Q:  Okay, thank you.

STAFF: Frank from the Associated Press.

Q:  Hi, thanks, Craig.  I wanted to ask about resources and talent for the big challenges with AI — testing it, evaluating the models, making sure the data labeling is good.  Because you're going to be getting so much of this from the private sector, what kind of resources do you have in terms of human resources right now?  Do you expect to need a lot more?  And isn't there a retention problem right now...


Q:  ... because AI's data scientists are — you know, they're getting a lot more money...

DR. MARTELL:  Pretty valuable, pretty valuable, yes...

Q:  ... yes.

DR. MARTELL:  Yes.  Well, that's a huge can of worms, so let me start with this: we always need more people, right.  But we've just stood up, within the CDAO, a Digital Talent Management Office.  And the first thing to note is that we are the functional community manager, which means we're responsible for the careers of these 10 new digital AI, data and analytics job roles.

So, there's a whole new set of job roles — I don't have all 10 at my fingertips — that were announced by the CIO about a year and a half ago now, but they include data engineer, data scientist, machine learning engineer, et cetera. 

And we're responsible for understanding how to grow those careers, and we're also responsible for thinking hard about how to fill roles for folks in those careers.  So, we've just stood up — we recognize exactly the challenge that you said, Frank.  I will accept the challenge almost verbatim the way you said it. 

We understand that challenge.  We've stood up this digital talent management org being headed by Angela Cough, and they're in the middle of pilots right now with the new Chief Talent Management Officer to actually start addressing some of these questions.  

For example, do we really  need to think about hiring as hire to retire?  Are we going to get somebody who has these kind of skills to stay within the Department of Defense for 20, 30 years?  Probably not.  But what if we can get them to stay for three or four?  What if we pay for their college and they pay us back for three or four?  And then we give them real-world experience that when they go off to get hired by Silicon Valley, they'll have a leg up. 

We're thinking really creatively like this.  For example, can we be part of a diversity pipeline?  Can we go to HBCUs and say, hey, folks, you know, I know Silicon Valley is not knocking on your door even though they should be, what if you come work for us for three or four years and we give you a killer apprenticeship and then Silicon Valley's knocking on the door on the other end?  Or the tech world in general, not just Silicon Valley, but the tech world in general. 

So we're thinking really hard about creative ways that we can get folks in, not for their whole career.  You know, in my industry experience, the average tenure of a software engineer is 18 months to two years, right?  And Google — Google seems to be able to build some remarkable products.  You know, LinkedIn can build some remarkable products with engineers who are only there for a while. 

So that's a management challenge, but that management challenge can be overcome.  And we can learn how to deal with people who are going to be there for two or three or four years and use it to actually grow them and make their lives better and more lucrative on the back end.  And so if we can do that right, then people are going to be wanting to knock on our door on the front end. 

But, frankly, this is all at the beginning.  I'd say give us nine months to a year and we'll have much more robust things to say about this. 

Q:  Thanks. 

STAFF:  All right, we have time for one more question.  We will go to Jon Harper at DefenseScoop. 

Q:  Thank you.  I was wondering about this CDAO procurement forum with industry that you're going to be hosting at the end of the month — to what extent will the feedback you get from that shape your implementation plan for everything that was rolled out today? 

DR. MARTELL:  Yes, absolutely.  You know, we called it the procurement forum because I think folks seem to think industry day has a — you know, a particular meaning that we didn't want.  What we really want is a forum.  And at our last — if you ask anybody who came, we really did treat it like a forum.  We told these folks what our vision was.  And then we had really great conversations about how we can partner to deliver on this. 

And so how we partner with industry, as I said earlier, is going to be extremely important to delivering the strategy.  We will not be able to do this without industry partners, without academic partners, and without our actual, you know, country partners and allies.  So it's going to have a big impact. 

If I come with a vision that says, here's how I want to pay you because this is what I need, and they all say, nope, that's not going to work, well, great.  Then I have to rethink that.  And then I have to ask them, well, you know, what is it that's going to be sustainable for your business?  Because I need those — look, I need those industrial partners to continue to build and sustain this.  If I have some crazy idea about what I want to build and nobody wants to build it for me, well, that's not going to work, right? 

So we absolutely have to do this in partnership with lots of folks, in particular to your question, industry. 

Q:  Thank you.

STAFF:  All right, Dr. Martell, we thank you for your time. 

As I said at the beginning, we will have a transcript from this briefing posted later tonight. 

DR. MARTELL:  Thank you all very much for coming.  We're really excited about the strategy, and we have already started moving out in this direction.  The real take-away is that we just need to get agile, and then learn, and then be agile, and then learn, and then be agile, and then learn.  The old way of sort of having perfect foresight about what the world is going to be five or 10 years from now just doesn't work with software and AI. 

So, thank you all for coming. 

Q:  You're going to post a strategy to...

STAFF:  So it's on right now and we will get to shortly.  But I can send you a link, Patrick. 

Q:  OK, thank you. 

DR. MARTELL:  OK, thank you, everybody.