Transcript

Lt. Gen. Jack Shanahan Media Briefing on A.I.-Related Initiatives within the Department of Defense

Aug. 30, 2019
Lt. Gen. John N.T. "Jack" Shanahan, Director, Joint Artificial Intelligence Center, Office of the Department of Defense Chief Information Officer

STAFF:  Good morning, ladies and gentlemen.  Thank you for being here today.  I'm Lt. Cmdr. Arlo Abrahamson, and I will be moderating today's press conference.

Today's media briefing is on the record, off-camera, but audio is okay.  Our host for this morning's engagement will be Lt. Gen. Jack Shanahan, who is the director of the Artificial Intelligence Center -- the Joint Artificial Intelligence Center.  Gen. Shanahan will begin shortly with an opening statement, and then I'll turn it over to you all for questions and answers.

We'll have plenty of time, so we'll get to as many questions as we can.  I will call on you for questions, so please raise your hand.  When you get called, please kindly give us your name and your media outlet before asking your question.  We request that each person ask only one question and one follow-up.

Also, as a friendly reminder, today's discussion is on artificial intelligence, so we will not be talking about the JEDI [Joint Enterprise Defense Infrastructure] contract, which was discussed in a previous briefing.  Transcripts for that are online.

And just a public service announcement:  We do have a Twitter presence, and you guys can find that at @JAICOnline.  And also, in the next couple of weeks we will have our AI.mil, which will be coming online.

And without further ado, I'd like to turn it over to Lt. Gen. Shanahan to give opening remarks.

LT. GEN. JOHN SHANAHAN:  Well, good morning, everybody.  And if you didn't catch that, we have a little bit of a coup that we pulled off getting AI.mil -- not quite up and running yet, but within two weeks we'll have it out there and we'll post all our products on there.

Well, I'm Lt. Gen. Jack Shanahan, the director of DOD's Joint A.I. Center, or the JAIC [Joint Artificial Intelligence Center].  I've been in this position since January, and before this, as many of you have heard in the room, I led Project Maven, the artificial intelligence, machine-learning pathfinder project under the under secretary of defense for intelligence.

And let me start with the JAIC's mission:  to accelerate DOD's adoption and integration of A.I. to achieve mission impact at scale.

Leadership in the military application of A.I. is critical to our national security.  The table stakes are high.  For that reason, I doubt I will ever be entirely satisfied that we're moving fast enough when it comes to DOD's adoption of A.I.  My sense of urgency remains palpable.

Yet, at the same time, it's important to acknowledge the myriad challenges that come with building and sustaining an A.I.-ready force across the department, both in terms of people and weapon systems.

As a DOD A.I. and machine-learning pathfinder, Project Maven was always focused on building a product delivery pipeline.  The JAIC was designed from the beginning to be an A.I. center of excellence, expanding beyond product delivery to full A.I. capability delivery by adding other elements such as strategic engagement and policy, plans and analysis, intelligence and more with an operating model of centralized direction, a common foundation and decentralized development and experimentation.

I want you to know what we've accomplished since the JAIC was established a year ago.  At this time last year, the JAIC only had a handful of people, no money and no permanent spaces from which to operate, and we did not get the majority of our fiscal year '19 funding until the beginning of March this year.  We now have over 60 government employees, a real home, a healthy fiscal year '20 budget, and we are delivering some initial A.I.-enabled capabilities.

I am proud of our team's talent and diversity:  government civilians, active duty military, National Guard, Reservists, contractors, and, until their departure a couple of weeks ago, even two college summer interns.

We are seeing initial momentum across the department in terms of fielding A.I.-enabled capabilities.  You can see some evidence of this in the fiscal year '20 service and component budgets with even more investments expected in the fiscal year '21 FYDP [Future Years Defense Program] POM [Program Objective Memorandum].

Yet, we still have a long way to go to help bring pilots, prototypes and pitches across the technology valley of death to fielding and updating A.I.-enabled capabilities at speed and at scale.

It is difficult work, yet it is critically important work.  It's a multi-generational problem requiring a multi-generational solution.  It demands the right combination of tactical urgency and strategic patience.

We have to move beyond the hype, where we don't view A.I. as another technology flash in the pan but instead focus on what it takes to weave A.I. into the very fabric of DOD.  And we'll know we have succeeded when we've gained irreversible momentum and A.I. has become ubiquitous.

I will briefly mention our ongoing and planned mission initiatives and can provide more details on each of them in the Q&A session that follows.

Our ongoing projects include predictive maintenance for the H-60 helicopter; humanitarian assistance and disaster relief, or HA/DR, with an initial emphasis on wildfires and flooding; cyber sense-making, focusing on event detection, user activity monitoring and network mapping; information operations; and intelligent business automation.

For fiscal year '20, our biggest project will be what we are calling A.I. for maneuver and fires, with individual lines of effort or product lines oriented on warfighting operations; for example, operations intelligence fusion, joint all-domain command and control, accelerated sensor-to-shooter timelines, autonomous and swarming systems, target development and operations center workflows.

We are also embarking with DIU [Defense Innovation Unit] and the services’ Surgeons General, as well as many others, on a predictive health project, with several proposed lines of effort, to include health records analysis, medical imagery classification and PTSD [Post-Traumatic Stress Disorder] mitigation/suicide prevention.

Our other major effort, one that is instrumental to our A.I. center of excellence concept, is what we are calling the Joint Common Foundation, or JCF.  The JCF will be a platform that will provide access to data, tools, environments, libraries and to other certified platforms to enable software and A.I. engineers to rapidly develop, evaluate, test and deploy A.I.-enabled solutions to warfighters.

It is designed to lower the barriers of entry, democratize access to data, eliminate duplicative efforts and increase value added for the department.  This platform will reside on top of an enterprise cloud infrastructure.

I would now like to share a few of our biggest lessons learned from the past three years -- my two years at Project Maven and not quite a year as the JAIC director.  These lessons learned are hardly unique to DOD.  For those of you who are following A.I. in the corporate world, these will all sound very familiar.

First, problem framing.  I cannot overstate the importance of a comprehensive, user-defined, data-driven workflow analysis to determine if A.I. is even the right solution to the problem.  If there are any A.I. silver bullets or A.I. easy buttons, I have not yet found them, though I am optimistic that the pace of technological change over the next year and beyond will yield better and faster ways to simplify the A.I. delivery pipeline.

Second, data is at the heart of every A.I. project.  We are addressing challenges related to data collection, data access, data quality, data ownership and control, intellectual property protections, and data-related policies and standards.  In short, we have to liberate data across the DOD.

Next, DOD's A.I. adoption capacity is limited by the pace of broader digital modernization.  Along with enterprise cloud, cyber and C3 [command, control and communications], A.I. is one of Chief Information Officer Dana Deasy's four digital modernization pillars.  These four pillars are going to converge in such a way that digital modernization and warfighting modernization become synonymous.

In terms of culture, in DOD we need to match the rate of institutional change to the rate of change of commercial technology.  As I said earlier, this is a multi-generation commitment.

We have a lot of work ahead to build a data-literate force across the department.  Within the JAIC, we are cultivating a leading A.I. workforce with the aim of attracting world-class A.I. talent through training, targeted recruitment and industry and academia engagement.

We face hard decisions ahead in the department about striking the right balance between adapting legacy systems, legacy data practices and legacy workflows to A.I. -- in effect, bolting cutting-edge technologies onto old systems -- and accepting a certain level of sunk costs by divesting legacy systems to accelerate the development and fielding of A.I.-ready systems.

Finally, we are thinking deeply about the ethical, safe and lawful use of A.I.  At its core, we are in a contest for the character of the international order in the digital age.  Along with our allies and partners, we want to lead and ensure that that character reflects the values and interests of free and democratic societies.  I do not see China or Russia placing the same kind of emphasis in these areas.

To conclude, contrary to a lot of the hype prevalent today, we don't view A.I. as a magical solution, a specific thing to be sprinkled on top of any problem to yield miraculous results.  A.I. is an enabler, much more like electricity than a gadget, a widget or a weapons system.

A.I.'s most valuable contributions will come from how we use it to make better and faster decisions.  This includes gaining a deeper understanding of how to optimize human-machine teaming.  We want A.I. to increase operational effectiveness, accelerate integration with autonomous systems, and enhance efficiency across the department.

This is less about any individual technology than it is about how we design, experiment with and deploy A.I.-enabled operating concepts to gain competitive advantage, from the tactical edge to the strategic level.  In some cases, perhaps only gaining a fleeting upper hand, a temporal advantage.  In others, achieving a sustained strategic advantage against a peer competitor.

And as we look to a future of informatized warfare, comprising algorithm against algorithm and widespread use of autonomous systems, we need to design operating concepts that harness A.I., 5G, enterprise cloud, robotics and eventually quantum.  This critical path from a hardware-centric to an all-domain digital force will shape the department for decades to come.

And finally, I am optimistic that 2020 will be a breakout year for the department when it comes to fielding A.I.-enabled capabilities.

And with that, I'm ready to take the questions.

STAFF:  Why don't you start, Sydney?  Good morning.

Q:  Sure, thank you.  Sydney Freedberg, Breaking Defense. 

You mentioned the move to things that are closer to, you know, warfighting functions, though it doesn't sound like you're yet, you know, building the killer robots that people panic about.  Can you tell us in more detail how that's going to be different from the sort of predictive maintenance and other back-office functions you've been doing?

And, you know, to ask the devil's advocate question, we've heard, you know, Air Combat Command commander Gen. Holmes say, you know, "I have doubts about relying on Project Maven," which has, you know, been out there for, as you say, a couple of years now.

You know, how can you -- you know, meet those concerns and get these new A.I. fire maneuver systems to the point where combat commanders like Gen. Holmes say, "Yes, I am totally confident in relying on that in a real world operation"?

LT. GEN. SHANAHAN:  So I will take a little issue with how you characterized Gen. Holmes' -- his article.  He said, "I don't -- I don't -- I'm not fully there yet with Project Maven."

He understands we field it as a prototype.  We're in a sprint process where, from the moment we fielded sprint one in December of 2017, we knew, and the warfighters knew, they were getting a minimum viable product.  That's the whole point of agile software development:  fielding these systems as quickly as possible.

And while I'm thinking about it, I was in a room no more than a week ago with a group of people from SOCOM [Special Operations Command] and JSOC [Joint Special Operations Command] who were almost, in a literal sense, pounding the table saying, "Just give us the capabilities.  We know it's not perfect yet, we're here to wring it out and tell you how to make it better," which has been a consistent theme from the special operators from the beginning of Project Maven.

So I think Gen. Holmes said exactly what I would have expected him to say:  that it's promising, but the technology's not quite there yet.  And in sprint one it was not there.  They're fielding sprint two in Project Maven right now.  It still will not be where we intend it to be a year from now, when it gets to what the Maven team's calling sprint three.

So it's that rapid progression of getting the algorithms, the models, better and better with updates.  The problem with Maven was the lack of an enterprise cloud solution, and we weren't getting enough data.  The frequency of those updates was not happening fast enough.

The team now is doing those on at least a monthly basis.  It's what we're thinking about in the department in terms of continuous integration, continuous delivery.  That's the world we have to get to.

And then to finish up answering your question, what I think is -- is really getting at the heart of what you were asking, Sydney, is learning from the predictive maintenance and HA/DR -- very important.

Like, every piece of advice we ever got on starting an A.I. program at the department was start small, prove it, build some expertise and credibility, and then scale from there.

These next ones on maneuver and fires will be warfighting-focused.  Again, we'll start relatively small scale.  Maven's got a head start on the intelligence.  What you'll hear is the term "smart system" in Maven, which is taking the Maven metadata, the products coming out of Maven, fusing them with other sources of intelligence, and then taking what have been a common operating picture, a common intelligence picture and a sensor picture and collapsing those into one.  So for operations -- we've talked about this for years -- we may be seeing the indications in a year or so that we actually have a place where ops, intel and sensor information is available to everybody at the same time.

So Maven's working on the intelligence side; what we're going to be doing in the JAIC is working on the operations and C2 [command and control] side, working very closely with the Joint Staff J6 [Command, Control, Communications & Computers/Cyber] and the J7 [Joint Force Development], who are charged with coming up with joint all-domain command and control.

Other things, like fire support coordination.  The team just got access to tens of thousands of real-world records from Iraq and Afghanistan on calls for fire.  We'll start curating that data, and the more we learn from it, using natural language processing and probably some deep learning, the better we can answer:  how do we get through the fire support coordination process much faster, much more efficiently?
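
[Editor's illustration:  the kind of structured extraction from free-text records the general describes might be sketched as below.  The record format, field names and regex approach are purely hypothetical -- real call-for-fire data and the JAIC's actual NLP models are not public -- and a regex is only a crude stand-in for a trained language model.]

```python
import re

# Hypothetical, simplified call-for-fire record; purely illustrative.
SAMPLE_RECORD = (
    "FIRE MISSION. GRID NK453221. TARGET: troops in the open. "
    "METHOD: adjust fire. REQUESTED: 2 rounds HE."
)

def extract_fields(record: str) -> dict:
    """Pull structured fields out of a free-text record with regexes,
    a crude stand-in for the NLP models such a project would train."""
    grid = re.search(r"GRID\s+(\w+)", record)
    target = re.search(r"TARGET:\s*([^.]+)", record)
    method = re.search(r"METHOD:\s*([^.]+)", record)
    return {
        "grid": grid.group(1) if grid else None,
        "target": target.group(1).strip() if target else None,
        "method": method.group(1).strip() if method else None,
    }

print(extract_fields(SAMPLE_RECORD))
# {'grid': 'NK453221', 'target': 'troops in the open', 'method': 'adjust fire'}
```

Curating tens of thousands of such records into consistent structured fields is the kind of preparation step that would precede any deep-learning work on the workflow itself.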

Those are some of the efforts.  They're not game-changing in the sense of lethal autonomous weapons considerations, but they'll make an enormous impact day-to-day on the warfighter by getting through command and control faster.

If we're having challenges with that in today's fight, imagine what that time cycle looks like against a peer competitor.  It's all about getting through that decision cycle much faster.

STAFF:  Sir, go ahead.

Q:  Travis Tritten with Bloomberg Government.  Thank you very much for doing this.  We appreciate it.

You mentioned a healthy budget, and Congress is putting together the NDAA appropriations now.  Are you anticipating a significant increase in funding, and how would that enable some of these new warfighting applications that you're talking about?

LT. GEN. SHANAHAN:  Now -- thanks.

Last year's budget was $93 million, and that was a fair budget, because we were PowerPoint-deep at the beginning.  As I said, a year ago there was only a handful of people in the JAIC.  We had to make our case.  There was a hold on some of that money, which was released -- that's why we didn't get our full funding until March of this year; we had to show what our plan was.

In fiscal year '20, it is -- it is a very good budget.  Now, we have been refining our plans for exactly how to use that budget and that's challenging in this business.  As I learned from Project Maven, there is no perfect analogy in what an A.I. budget looks like.  This is -- this is part of our challenge of trying to take a startup culture into the institutional bureaucracy.  It's just sometimes those -- those are a little bit at odds with each other.  And I don't know, as any startup would not know, exactly what I'm going to be spending on a year from now, but we have been refining those budget numbers very carefully, very deliberately, over the past just two months to really show that we have a -- a deliberate plan on the mission initiatives.

So we'll have five big mission initiatives that I already talked about.  The predictive maintenance and HA/DR will continue, the health one will pick up, the maneuver and fires will get going, and then we're looking at some component initiatives, which are service or component projects where we're there helping them as opposed to leading those.

And then, for others, if you're going to ask a question about the fiscal year '21 budget, of course I can't get out in front of the building here.  But as you would expect, if we are an A.I. Center of Excellence, we'll be seeking a commensurate level of funding support for an A.I. Center of Excellence for the Department of Defense.

But I am very happy -- I'll just tell you that Congress has been extraordinarily helpful in looking at A.I. and at how the department needs to accelerate what we're doing, and it's been a bipartisan issue.

STAFF:  We'll move over to the other side of the room and we'll work our way back.  Sir, go ahead.

Q:  Hi, Scott Maucione with Federal News Network.

We hear everyone just saying, you know, "We're using A.I. in the Defense Department," and it's kind of hard to understand exactly what that means.  But could you tell us, in the private sector, who's the gold standard, what are their capabilities, and where is DOD compared to that?

LT. GEN. SHANAHAN:  Yeah, it's hard to say that there's any gold standard.

And to your very point about people who say A.I. -- what do they even mean by A.I.?  It's the classic:  if I ask three economists, I get seven different opinions.  Sometimes in A.I. you find the same thing.

One of our tasks in the NDAA for '19 is to have a definition of A.I., which, in general terms, is along the lines of machines performing at or better than the level of human performance.

Where we look to commercial industry, it's a panoply:  everyone from the biggest companies on the planet, the cloud and A.I. companies, to startups stood up by people coming from the great A.I. institutions across the country, showing they have fantastic capabilities that might have applicability to DOD problem sets.

What I'm challenged with is that a vendor may have an A.I. solution to a particular narrow slice of a much bigger problem that includes what I call the DOD A.I. enablers, with data being one of the core problems we run into every time.

We would say, in general terms, up to this point -- and this is a little bit of a blanket statement -- that we spend 80 percent of our time and resources on the enablers and the other 20 percent on the algorithm and developing the model.

We're going commercial first, and that's the same model we're using in JAIC as we used in Maven, because there's such -- there's such good capability out in the -- in commercial industry.

But in terms of the gold standard, you know, the biggest A.I. companies in the world show what they can do every day:  Google, Amazon, Microsoft, and, you know, Facebook for a different reason -- not necessarily for DOD business, but just to see how they're incorporating it -- and Netflix.  They're all using it in different ways.

And what I find is, I learn something new every day through blogs, like The Wall Street Journal's blog on A.I., about how corporations are tackling A.I.  All of them are finding it's harder than it appears on the surface.  And that's why I say we have to be careful about the hype.

The visionary view 15, 20 years in the future, I understand that; I generally acknowledge where people think A.I. can go.  But in the here and now, it's groundwork:  get your sleeves rolled up, dive in, and figure out how to curate your data.  First of all, where do I get my data from?  How do I curate the data?  Then what do I need to do to train against it?  And once I've trained a model, how do I actually integrate it into a DOD system?  That's much harder than people realize, we found with Project Maven.

And then the last step:  once I've integrated it into the system, how do I sustain it?  What does continuous integration look like for the Department of Defense when it comes to A.I.?  We're not used to thinking about that broadly, at scale, for the department.
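
[Editor's illustration:  the lifecycle the general walks through -- source the data, curate it, train a model, integrate it into a system, then sustain it with continuous updates -- can be sketched as a minimal pipeline.  The stages, the toy "model" and the data below are purely illustrative and bear no relation to any DOD system.]

```python
# Each function stands in for one stage of the A.I. delivery lifecycle.

def source_data():
    # Stand-in for pulling raw, messy records from a data store.
    return [(" cat ", 1), ("DOG", 0), ("cat", 1), ("dog ", 0)]

def curate(raw):
    # Normalize text and labels; in practice this "enabler" work is
    # where the bulk of the time and resources go.
    return [(text.strip().lower(), label) for text, label in raw]

def train(dataset):
    # Toy "model": memorize the majority label seen for each input.
    seen = {}
    for text, label in dataset:
        seen.setdefault(text, []).append(label)
    return {text: max(set(v), key=v.count) for text, v in seen.items()}

def integrate(model):
    # Wrap the model in the interface a fielded system would call.
    return lambda text: model.get(text.strip().lower(), None)

def sustain(dataset, new_examples):
    # Continuous integration/delivery: fold in new data, retrain, redeploy.
    return integrate(train(curate(dataset + new_examples)))

predict = integrate(train(curate(source_data())))
print(predict("CAT"))  # 1
```

The point of the sketch is that "sustain" is not an afterthought:  it reruns the whole curate-train-integrate loop on fresh data, which is what monthly (or faster) model updates require.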

Q:  Real quick, what was your 2020 request?

LT. GEN. SHANAHAN:  $268 million.  And we got marked, but we're still in very good shape.

STAFF:  Kristina, good morning.  You can go next.

Q:  Oh.  Okay, thank you.

Thank you for doing this, General.  Kristina Wong with Breitbart News.

In the beginning, you mentioned urgency.  Can you talk about why there's a sense of urgency, and what challengers and near-peer threats, you know, China and others, may be doing?

LT. GEN. SHANAHAN:  All right.  I'll answer that in two parts.

First of all, we just see what the potential for artificial intelligence, machine learning, reinforcement learning is.  We're seeing it happen, unfold in corporations and these big companies every day.  So you can't ignore it.

It's the idea that we see a future of digital modernization, yet we, in some ways, are still looking at the world through an industrial-age lens.  We're trying to figure out how to make that leap from the industrial age to the digital age.

So we see that it's out there.  Seeing the potential of it, by itself, gives us that sense of urgency.

To your other point, is, yes, our potential adversaries are moving very deliberately towards a future of artificial intelligence.  And I pause for a second, because Russia looks at it a little bit differently than China does.  I'd say Russia, generally more on the robotics and automation side.  But China, the whole breadth of artificial intelligence capabilities.

And from President Xi Jinping down to the provincial level, they have had a strategy of accelerating the adoption and integration of artificial intelligence -- well beyond the resources, well beyond the theory, to actually fielding those capabilities.

And as a regime that's made it clear that artificial intelligence is part of their strategy as they look toward 2035, they've made no secret about what they want to do on the military side as well:  everything from autonomous weapons to a big emphasis on how you use A.I. in command and control.

I don't ever look at this as -- I actually stay away from any phrase like "arms race."  I would say we understand how fast we need to move, and we see our adversaries moving faster.  It's a strategic competition, not an arms race.  They're going to keep doing what they're doing; we acknowledge that.

And what I don't want to see is a future where our potential adversaries have a fully A.I.-enabled force and we do not.  It goes back to this question of time and decision cycles:  I don't have the time luxury of hours or days to make decisions.  It may be seconds and microseconds where A.I. can be used to our competitive advantage.

Q:  Is there a timeframe when we might field the first?

You mentioned, you know, several systems or projects that the JAIC is working on.  Is there a timeframe for when something might be fielded?

LT. GEN. SHANAHAN:  It's a little hard to say.  We're deep into the problem-framing stage, because I didn't have the funding for this in fiscal year '19.  And as I said, one of our biggest lessons learned -- it's the classic Einstein quote:  the more time we spend on problem-framing, the better off we'll be.

And part of that problem-framing is working with a service or a component on an agreement, both from the user side and from the program offices who might be asked to integrate this thing in a year.  So I am optimistic we'll see progress in a couple of our different lines of effort within six months from October.  That's pretty quick.

And back to the earlier question, there'll be a minimum viable product, as Sydney was alluding to with Gen. Holmes, and then it'll get better and better as we get quicker at it.

Now, this isn't just the JAIC, of course.  The services themselves are beginning to move out, trying to find some projects that they can go after.

I will emphasize that the JAIC, as an organization, does not make these up.  We're doing the projects we're doing because they came as a result of requirements from a service or component, and then we have a governance process that whittles down the number of requests we have.  And we're really trying to reach out to everybody possible to give them an incentive to come in and get our support.

So at the same time the JAIC is providing centralized direction and beginning to do our own projects, we're also really turning to the services and components to make sure they're accelerating their own A.I. fielding plans.

STAFF:  Sir, why don't you take the next one?

Q:  Thanks.  Zach Biggs with the Center for Public Integrity.

You mentioned, both in your blog post earlier this week and today, looking at the difficult ethical questions associated with A.I.  There's obviously a couple of external processes under way, whether it's the DIB [Defense Innovation Board] or the A.I. commission.  But what are you doing internally to look at --

LT. GEN. SHANAHAN:  Yeah.

Q:  -- those ethical questions?  And do you think that there needs to be some kind of limitation or restriction on the application of A.I. for a military purpose?

LT. GEN. SHANAHAN:  Okay.  To -- again, I'll answer it in two parts.

On the internal side, we've spent a lot of time talking this through.  As I alluded to earlier -- I actually explicitly said -- we have a center of excellence concept in the JAIC.  Part of that center of excellence is a strategic engagement and policy team.  Within that, our team is spending a lot of time working with the Defense Innovation Board, but also just internally and with the services and components, on this question of the ethical use of A.I., the safe use of A.I., the lawful use of artificial intelligence.

I will tell you that in 35-plus years in uniform, I have never spent the amount of time I'm spending now thinking about things like the ethical employment of artificial intelligence.  We do take it very seriously.  It's core to what we do in the Department of Defense with any weapon system.

A.I. is different in some respects.  The technology is different enough that people are nervous about how far it can go.  But as a department, this is what we do.  Before we field a weapon system, we think about how it can be used and should it be used in a certain manner.

To the second part of your question, I am strongly in favor of discussions internationally about things like norms.  I think, at this point, it would be counterproductive to have outright bans on things when people don't even fully understand what they mean when they say, "Ban this."  What do you mean by that?  Nobody has fully defined it.

There's a tendency, a proclivity, to jump to a killer-robot discussion when you talk A.I., and yet if you came and watched what my systems in Project Maven were doing, what we're working on, it's as far from that end of the spectrum as you could possibly imagine.

But it is a -- it is a completely valid conversation we should be having on things like international norms.  And those are ongoing.  The Convention on Certain Conventional Weapons in Geneva -- there are ongoing forums at the international level.

What I'm interested in doing is having a DOD and State Department partnership to understand what the future should be in terms of this question of norms and behavior.

Q:  And just, when you talk about that internal conversation, is there a formalized structure?  Is there an expectation of some sort of report?  I mean, what is the actual formal process?

LT. GEN. SHANAHAN:  Yeah.

There is no formal process per se right now.  We're building one as part of the JAIC through our governance process.  And as the Defense Innovation Board comes in with its proposed A.I. principles for defense, they're presenting that to the department, and then internal to the department, we will offer recommendations to the secretary on what to do with that.

That's not just the JAIC.  It's the JAIC.  It's the lawyers.  It's research and engineering.  This'll be a coordinated effort.  And the National Security Commission on A.I. is also looking at ethics as part of its broader mandate to study artificial intelligence for the United States.  We will take all of those inputs in together.

We are going to put somebody into the JAIC.  Realize, I'm still standing up an organization, so I'm still trying to fill a lot of gaps.  But one of the positions we are going to fill will be someone who's not just looking at technical standards, but who's an ethicist.

I think that's a very important point, and one we would not have thought about a year ago, I'll be honest with you.  In Maven, these questions really did not rise to the surface every day, because it was really still humans looking at object detection, classification and tracking.  There were no weapons involved in that.  So we are going to bring in someone who will have a deep background in ethics, and then, with the lawyers within the department, we'll be looking at how we actually bake this into the future of the Department of Defense.

And oh, I didn't mention OSD [Office of the Secretary of Defense] Policy, but of course, OSD Policy is part of this, as well.

STAFF:  We'll take some questions in the back.  Sir, why don't you go ahead?

Q:  Hi.  Yeah, Justin Doubleday with Inside Defense.  Thanks for doing this, Gen. Shanahan.

LT. GEN. SHANAHAN:  Yeah.

Q:  And just back to the fires and maneuver project, really quickly on the budget, how much additional funding will you need above that $268 million to get that started?

LT. GEN. SHANAHAN:  That's -- we already planned for that.  It's already -- I've already built it in.

It's -- you know, it was interesting, because you just never know how to build a startup organization sometimes.  So last fall -- I was advising the JAIC at the time through my Maven role; I had not yet been confirmed -- we were trying to come up with what the budget should look like, and there are not a lot of very helpful models to use, because DOD just has a lot of differences from commercial industry.  At that point, A.I. maneuver and fires wasn't even on the books.  Officially, what we had asked for at the time was just the three projects that I've already mentioned.

I realized very quickly after taking the job -- and I was thinking about this before I took the position in January -- it would be insane not to come up with a warfighting operations-focused project.  We just hadn't -- we hadn't put a lot of detail behind it at that point, but we had thought about what does a budget need to look like when you bring in an additional project like that?

So we have been refining those numbers.  I am biasing a little bit more funding toward that, not yet understanding how much it will cost.  We'll find areas where there may be fielded solutions available that we just need to begin to integrate, so a much lower cost.  But as we get further along -- we're talking six months to a year down the road -- we may find, just like we did with Maven, big projects that have difficult integration challenges that may bring financial resource, or just overall resource requirements, that are unexpected.  We just don't know that right now.

And that's the dilemma I have.  I said I had some amount of money for each of the projects, but there has to be an element of fungibility here with my dollars, because at one point we may find that we are accelerating a project much faster than we realized, and we have to take some money away from maneuver -- or from predictive maintenance -- not to harm that one, but we may have overestimated the costs.  It's just one of these things.  We have to work this day in, day out, just based on what the supported unit really needs us to do.

Q:  And how do you expect to run that fires and maneuver program?  Will you have, you know, some sort of industry day, or who are you going to go to to do that?

LT. GEN. SHANAHAN:  Yeah, so just a couple of months ago my project lead actually did a workshop just to bring in -- we thought we were going to get 40 people.  Over 130 people showed up.  We had to turn some people away just for fire-code considerations.  Really well received by the services and combatant commands.

And we're -- again, back to what I said earlier, this is really one of the first, if -- it really is the first big project that the JAIC's going to take on that involves the combatant commands, not just the services and some of the other components.

So very, very strong appetite for this.  What we have to do is begin to take all the good ideas that came forward in that workshop, down-select to a manageable number of them, and then begin to bring teams together.

One of the things we're asking for in our budget is to account for product planning teams; nothing new for anybody doing this in industry.  Maven was so much of a small team, it was hard to get everybody out everywhere, 24/7.  We're trying to build those teams in so we're out with the combatant commands, we're out with the services, we're out talking to the users and the program offices to really understand their requirements and get this right on the front end, so that we know what we're asking for.

There has been a tendency in every A.I. project inside or outside the department to jump to an A.I. solution before you fully understand the problem.  We may jump into something, find out the data is too messy, it doesn't exist or it's just too complicated, and we'll stop the line of effort.  That's entirely possible, and that's okay.  Or we may start one and fail at it.  I would suggest that's okay as well, because we're going to learn big lessons, just like we learned in the last two years of Project Maven.

STAFF:  (Inaudible), I'll go ahead and take your question.

Q:  (Inaudible).

You had mentioned allies and partners in your opening remarks.  A lot of DOD initiatives are prioritizing those international efforts with partners and allies.  Do you see A.I. as something that the DOD needs to be more protective of, or are you looking for those international partners?

LT. GEN. SHANAHAN:  Well, what -- it's -- it's -- it's both.

We'll always have concerns about technology protection.  But in this case, because of the ubiquity of artificial intelligence technologies today -- which are open source to a great extent until you train them and they become secret sauce based on data -- we protect our data.  That may be the most important thing we need to protect.  And once the model is developed, then we protect that.  So we have efforts underway to make sure we're protecting our entire A.I. ecosystem in the department.

But, that aside, very interested in actively engaging a number of international partners, because if you envision a future in which the United States is employing A.I. in its military capabilities and other nations are not, what does that future look like?  Does the commander trust one and not the other?

What we have to do is figure out interoperability.  How do we actually go out and -- and plan to have crisis management together, fight together?  That is a question of interoperability.

So the same Strategic Engagement and Policy Team I mentioned, small as it is -- it only has a few people -- has been very active in outreach to, I think at this point, over 50 different nations.

And these are just initial starting points of, "Here is what the DOD A.I. strategy is.  What does your strategy look like?  If you don't have one, when are you going to have one, especially as it relates to your military use of artificial intelligence?"

And from there, to your point, we will then try to prioritize who those partners are.

I did have a discussion over at State Department this week.  We'd be very interested in having a very tight partnership with State on finding who those -- who those partners should be on a priority list.  Even though I think all of them are interested in A.I., are some far enough along that those are the countries we should be engaging first and foremost?

But if you were to name any country, we're trying to just reach out to them and have reached out to them to begin the initial conversations, and then there will be this question of priorities.

And it's hard to do because we only have so many people to go along.  The common answer is, all are interested in A.I., and all are at the same point of, "This is hard work.  We're still trying to figure out how we organize for artificial intelligence" in each of those individual countries, just like we've gone through here in the department.

STAFF:  Just a friendly reminder, if you could just tell us where you're from and what your outlet is.

Tony, go ahead.

Q:  Okay.  Tony Capaccio with Bloomberg.

On Google, what impact did their reticence about continuing in Project Maven have on the rest of the industry that you're concerned about?  Did it resonate?  Or were Google's concerns a one-off?

And, two, did Google's concerns lead directly to the need for an ethicist, and reshape your thinking on the need to address ethics of war issues?

LT. GEN. SHANAHAN:  No.

The second part first, the -- do not -- I do not directly link those two -- those two issues, the fact that Google did not renew their contract with Maven.

We've just been thinking about ethics from the beginning of the JAIC, because it is such a relevant topic.  In every single engagement that I personally participate in with the public, people want to talk about ethics, which is appropriate.

So we just knew it was important.  And we just didn't have people.  Just -- we have to build it, build the team up.

But I would say, two years ago, the fact that we'd even be thinking about an ethicist in an artificial intelligence organization -- I wasn't thinking about it, I'll be honest with you.  But it's at the forefront of my thinking now.

Now, to your first point on Google, no.  Speaking as one person, in my three years of working this with Maven and now the JAIC, I did not see that reticence translate into other companies.  Every company is different.  Every company, internal to its workforce, deals with it separately.  There are always concerns in any workforce about what is this technology going to be used for?

What is the Department of Defense going to use it for?  It's incumbent upon us -- and I think we have to do a better job, quite honestly -- to provide a little bit more clarity and transparency about what we're doing with artificial intelligence, without ever having to delve into deep operational details.

But to even be talking about ethical use of A.I., I hope, will give some people a reassurance, or just an assurance, that the department is serious about this.  But what I did not see is that reticence transmit like wildfire across other companies.

As you saw, some of those other companies came out very publicly in the aftermath of -- of Google and Maven and said, "We're in with the Department of Defense."

Now, we may -- we may have internal deliberations about how far we go in a particular technology.  But I'm telling you, the -- I have not seen what I would call a widespread backlash against the department.

There are people who have real concerns about A.I. in DOD.  Fair, and we need to address those, partly by being a little bit more transparent about what we're actually trying to do with these capabilities.

And as I said, there are companies that are actually founded on working with the Department of Defense, because they know the importance of getting to that digital modernization future that I've talked about.

Q:  One quickie.

LT. GEN. SHANAHAN:  Yeah?

Q:  The DIA [Defense Intelligence Agency], two years ago, was using Watson, one of the most famous A.I. voices, algorithms or whatever, to sift through Islamic State documents.  What did you learn from that use of Watson and is -- does Watson have a potential use in the -- the joint --

LT. GEN. SHANAHAN:  I can't -- I cannot speak for those that are still using Watson.  And there're -- some agencies are still using it.

I would postulate that if IBM were in the room, they'd say Watson, like others, was a pilot project that they learned a lot from.  It didn't quite perform initially as everybody had -- had hoped, but gets better and better over time.

I can't speak for agencies who may be using Watson right now.  But with Project Maven -- as you know, one of their projects is enemy materials.  You collect voluminous amounts of information off of a battlefield, or just somewhere in part of the world -- what do you do with that?  So they have a line of effort getting through that.

There are a lot of commercial companies in this space -- everything from natural language processing, optical character recognition, computer vision, facial recognition, those sorts of things.

We all know that's an important one to go after because we're overwhelmed by information.  It's like full motion video, just a different -- just a different domain than full motion video.

STAFF:  We'll take a question on this side.  Sir, go ahead, please.

Q:  Yeah, I was wondering if you -- Nathan Strout, C4ISRNET.

I was wondering if you could expound a little bit on the Joint Common Foundation?  I don't know if you guys have a timeline for when we could see something from that or any big contracts that would -- coming down from that?

LT. GEN. SHANAHAN:  Yeah, the timeline is we're just about launched on Version 0.5.  So it was really -- think of it as a minimal viable product.

We were held up a little bit because we were trying to go out and actually get an enterprise cloud.  But absent the JEDI contract, we had to go out and come up with a sort of interim solution, just to be able to provide that enterprise cloud environment.

So the team's building that out as we speak, putting the initial, sort of, DevSecOps tools on it, some additional A.I. capabilities, machine learning tools.  We'll get -- we'll continue to build that out over the next six months.

We have some challenges, like everybody in the department, getting through accreditation and information assurance.  Getting -- getting behind the cloud access point is the next big step.  Even that takes -- takes time.

I will tell you, by the time I left Project Maven, the team was trying to get algorithms delivered at a rate where information assurance was the limiting factor.  They worked their way through that, but that's going to be a problem for us -- not a major problem, we've just got to get through it in the next couple of months to get the common foundation really where we want to go.

Now, where do we really want to build it out to be?  As I said earlier, a platform.  How much can we put in there?  And we're -- we're just in the middle of these discussions right now.

There will be the standard tools available that someone can get in and get access to, but the question is how many other platforms we can bring people onto.  What I want to build the Joint Common Foundation into is an incentive for people to come into the JAIC and get away from all of the bespoke solutions that they've had to stand up across the department, mostly in research labs, because they had no other choice.

But if I can give somebody an environment and a platform to come into, and I can do that democratized access today -- easier said than done, but let's say it's there -- then we give people tools, and it's an AI/ML [artificial intelligence/machine learning] marketplace of a type, and then we give other vendors opportunities to be a part of that same platform.

So it is early in the process right now.  We were delayed a little bit just getting the cloud environment stood up.  The 0.5 version, even as we speak, is just about to be launched.  But this is an internal, sort of, test environment to begin with, starting to bring a couple of our mission initiative workflows into it.

So if you go back to predictive maintenance and HA/DR, we had to do those in other environments.  We had to do the development with Carnegie Mellon in one, and using the Maven infrastructure in another.  We want to now bring those workflows into the common foundation.

And, by the way, one of the things that we weren't thinking a lot about until recently is access to HPC, High Performance Compute, which we're using right now, because it offers an alternative to an enterprise cloud.  Now, it's a little bit more restricted in how we do it, but it's actually a pretty good solution.  We're doing that with one of our projects.

Q:  Can I just follow up real quick?

STAFF:  Move on to the next question.

Q:  Okay.

STAFF:  Sir, go ahead.

Q:  Yes, thank you.  My name is (inaudible), I'm a reporter with Congressional Quarterly.

So, there's a lot of concern in Congress about several elements of A.I. -- for example, facial recognition, pattern recognition and -- and the, sort of like, the downsides of all of these technologies, and lots of members of Congress have, you know, potential legislation that could address those.

How do you think some of that law, legislation, regulation could affect the overall development of A.I. for Pentagon purposes?

LT. GEN. SHANAHAN:  Yeah, go back to something I said earlier.  We're very fortunate that A.I. is such a bipartisan issue in Congress, and I am confident that with any draft legislative proposals, we would get an opportunity to comment on the potential ramifications were that legislation to be approved.

And I know facial recognition is one of the hotter topics right now.  Not something Maven was working on, not something the JAIC is working on, but I understand the concerns, mostly around security in the United States and police forces' use or non-use of it.

But what I'm most interested in is just making sure that the department has the ability to voice any potential concerns or, in some cases on legislation, why it's beneficial.  On the recruiting side, for instance -- giving us new incentives to bring people in faster and give them more money to be competitive with the civilian market -- we love those legislative proposals.

But to your point, on some of the ones that take the approach that A.I. could potentially be damaging or harmful and violate civil liberties and privacy, we're very interested in making sure that we have some voice in that process, just in terms of unintended consequences more than anything else.

It might sound like a great solution to say this capability will not be used internal to the United States, but what will it do to our capabilities fielded in combat in the Middle East, for example?

STAFF:  Sir, go ahead.

Q:  Thank you.  Jeff Schogol with Task & Purpose.

The United States is constrained by privacy concerns and laws that the Russians and Chinese are not.  Would it be accurate to say, because the Russians and Chinese have access to a greater pool of data, their A.I. is smarter than what the United States military has at the moment?

LT. GEN. SHANAHAN:  Not necessarily.  It depends on what data you have available to it.

You could say that -- depending on who rank-orders the A.I. nations of the world -- China is said to have an advantage over the United States in adoption, speed of adoption and also data, and I take that point, because they don't have the same restrictions.  At least, nothing that I've seen shows that they have those restrictions that we would put on every company, the Department of Defense included, in terms of privacy and civil liberties.  But if it's social media data or credit card data, that's a different sort of data than full motion video from Afghanistan and Iraq.

Now, the counterpoint to my own point would be yes, but they're learning every time they bring in more data and build an A.I. delivery pipeline.  But just the fact that they have data does not tell me that they have an inherent strength in fielding in their military organizations.

It's important for us in the JAIC, as the department's center of excellence for A.I., to really get to the facts on what China and Russia are doing on the military side.  There's great reporting on the open-source side of what they're doing; I'm equally interested in what they are really fielding.

But to your broader point, I would agree with you that having fewer restrictions on privacy and civil liberties gives them some advantages in getting data faster, and then building capabilities faster as a result of what they have available in data.  But just by itself, it does not grant them an inherent strength over the United States.  It depends what it's used for.

Q:  Thank you.

STAFF:  Take a question on this side of the room.  Ma'am, go ahead, please.

Q:  Hi.  Yasmin Tadjdeh, National Defense Magazine.

Back in spring you attended a JAIC pitch day in New York City and heard from some startups.  I was wondering, since then, have you done something like that, and if there are plans to do so in the future?

LT. GEN. SHANAHAN:  Yeah.  So, thanks for asking, because yes, we have one in Michigan next month, in September, that I'm right now planning to attend.  It has nothing to do with the fact that I'm a University of Michigan graduate.  It is going to be in Ann Arbor.  That was a coincidence.  It's not driven by the JAIC director by any stretch of the imagination.

But the -- and we were looking at another one in the springtime, maybe on the West Coast.  We're trying to not isolate it to the classic Silicon Valley or Boston area.  We're interested in all different locations where there's a thriving A.I. ecosystem, where there's a venture capital presence.

And so yes, we are -- next month we have one scheduled, and then we're looking at one in the spring of next year.

(CROSSTALK)

STAFF:  Sir, did I call on you yet?

Q:  No, sir.

STAFF:  Did -- did you get a question?  (inaudible)

Q:  No, sir.

STAFF:  Okay, please go ahead.  Thank you.

Q:  Thank you.  Ryan Pickrell with Business Insider.

So, you were talking just a minute ago about who -- the advantage that China has.  China also enjoys tremendous military, government and industry support for their A.I. programs.  In this environment, can DOD really compete with that, given fluctuations in the domestic industry support here?

LT. GEN. SHANAHAN:  Yeah.  You hit on -- it's a really important point, the idea of civil-military integration in China.

I asked somebody who spends time in China working A.I., "Could there ever be a Google-Project Maven situation in China?"  He laughed and said, "Not for very long."  The idea of that civil-military integration does give strength in terms of their ability to take commercial technology and make it military as fast as they can integrate it, or to have certain companies actually working on behalf of the military.

What we're dealing with over the course of the last 25 years or so in the Department of Defense is how the balance has shifted in many ways -- especially, or most importantly, in this digital modernization age, where these new developments, this cutting-edge tech, are coming from commercial industry, some of whom have never worked with the Department of Defense before.  So that's where part of that wariness comes in:  "Why should we work with DOD?  I can't even understand their contracting rules, never mind what they're going to do with this."  So that is a limitation for us.

The objective of us being more transparent about what we're doing is to increase the strength of the relationship among industry, academia and the government.  It's what fueled Silicon Valley beginning in the 1950s.  And in the last 25 years, for a variety of reasons, it splintered -- Snowden, Apple encryption -- I mean, there were a number of reasons for it.  Not judging one way or the other; it just happened.

If we don't find a way to strengthen those bonds between the United States government, industry and academia, then I would say we do have the real risk of not moving as fast as China when it comes to this.  Because it is mandated, from President Xi Jinping down:  "This is what we will do together, civil and military integration."

Is it perfect?  Probably not.  There are companies that want to work with the United States or other Western nations, but because they're tied so closely to the military, they do not have that opportunity, because companies don't want to work with them if that's the case.

So it's not all a great advantage for China, but it does, I'd say, give them a leg up.  And we have to work hard on strengthening the relationships we have with commercial industry and prove that we're a good partner, and they'll be good partners with us.  As I said before, we want to work with companies that want to work with us.  It's a two-way street, and we're really trying to get those relationships better.

STAFF:  Sir, go ahead.

Q:  (inaudible) from Fox News.

General, what is the most critical part of your A.I. system that if Russia or China stole it, you'd call Secretary Esper and say, "We have a serious problem here"?

LT. GEN. SHANAHAN:  If I told you that --

(Laughter)

I will just say, when you look at what it takes to deliver A.I. capabilities, you begin with data.  So it's data.  If I know where the data came from and what data you used, I can almost figure out what that model's going to look like on the other end.  So, protecting our data.

Protecting our actual models that get built.  A commercial company will come in.  They will take their version of an algorithm and turn it into a model based on DOD data.  Very careful about protecting that.

And then from there, it's when we put it into weapon systems.  Now it's just part of the broader problem that DOD has of protecting its weapon systems -- cybersecurity.  All of those are as germane for A.I. as they are for any other weapon system.

But this includes people, as well.  So if we're using commercial companies, we need to understand those commercial companies' supply chains.  Supply chain risk management is as important to A.I. as it is to everything else.  If an adversary can get on the inside with the microelectronics, a circuit board or whatever, okay, that's a potential problem.

That is the same with A.I., but with A.I., really, I focus on the data, and then the model once it's developed, and then how it's actually performing, and feedback from that system.

Without getting into details -- I mean, there's a lot of literature on this.  It really is a cottage industry across academia right now:  How can I poison data?  How can I fool your algorithms?  How can I deceive your algorithms?
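To make the "fool your algorithms" point concrete, here is a minimal sketch of the fast-gradient-sign idea from the adversarial-examples literature.  Every number in it is a toy assumption, not anything from a fielded system:  for a linear scorer f(x) = w·x + b, nudging each input feature a small step against the gradient can flip the decision while changing the input only slightly.

```python
# Toy illustration (invented weights and inputs, no real model): how a
# small, targeted perturbation can flip a linear classifier's decision.
import numpy as np

w = np.array([0.9, -0.4, 0.7, 0.2])   # hypothetical trained weights
b = -0.1

def score(x):
    # Linear decision score: positive -> class 1, negative -> class 0.
    return float(w @ x + b)

x = np.array([0.3, 0.2, 0.1, 0.4])    # a "clean" input, classified positive
assert score(x) > 0

# Fast-gradient-sign-style perturbation: for a linear model the gradient
# of the score with respect to x is just w, so step each feature by eps
# against the sign of w.
eps = 0.2
x_adv = x - eps * np.sign(w)

print(score(x))                  # positive: clean input is class 1
print(score(x_adv))              # negative: perturbed input flips to class 0
print(np.abs(x_adv - x).max())   # no feature moved by more than eps
```

The same mechanism, applied through the gradients of a deep network rather than a hand-written linear scorer, is what much of that academic cottage industry studies.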

This is why, when I said it in my remarks, we have to be careful about the hype, because it is still brittle, it is still fragile.  We're learning a lot of lessons just from Project Maven alone, plus all the research work that's going on at the service research labs.

The more we learn about those vulnerabilities, the more we can strengthen them and learn which aspects could really be most damaging to us, right?

If I field a model that is a zero percent performer, easy answer:  Throw it away and start over.  If it's 100 percent -- do I really believe anything is 100 percent?  If it's 50 percent, that's a coin flip.  Now, do I even trust the algorithm?  What am I going to do about that?
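A back-of-the-envelope sketch of the "50 percent is a coin flip" point, with toy numbers assumed for illustration:  given n held-out examples, you can ask how often pure guessing would match an observed accuracy, which is what separates a trustworthy score from noise.

```python
# Toy illustration: how surprising is an observed accuracy if the model
# were really just flipping a coin?  (n and the accuracies are invented.)
from math import comb

def p_chance_at_least(k, n):
    # P(a fair coin gets at least k of n predictions right).
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

n = 200  # hypothetical evaluation set size
for acc in (0.50, 0.60, 0.80):
    k = int(acc * n)
    print(f"accuracy {acc:.0%}: P(coin does this well) = "
          f"{p_chance_at_least(k, n):.3g}")
```

On this toy setup, 50 percent is entirely consistent with guessing, while 60 percent on 200 examples is already very unlikely by chance alone; distinguishing the two is part of deciding whether to trust the algorithm at all.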

So we understand that when we field a sprint-one version, it is at that 50 to 60 percent, and it gets better and better.  That was what Gen. Holmes was referring to in the article that Sydney talked about:  it's going to take a while to get that performance.  But the warfighters are demanding the capabilities.  Let them chew it up, and then tell us what needs to be better.

But it is looking at the entire lifecycle of -- of the ecosystem, when it comes to protection.

Q:  Are you confident that commercial industry can protect this data?

LT. GEN. SHANAHAN:  We're working very closely with commercial industry to let them know what risks and vulnerabilities might be out there for companies that might never have worked with DOD, and that didn't understand what China or Russia might be trying to do to get entry into their networks and to get access to the data.  That's a two-way education, and we are working with all the stakeholders from across the government to ensure that.  This isn't just the JAIC.  This is the services.  This is the ODNI [Office of the Director of National Intelligence].  They're augmenting intelligence for machines.  All have a very deep interest in securing our ecosystem.

STAFF:  Take a question on this side of the room, if there is one.

(CROSSTALK)

STAFF:  Excuse me.  I'm sorry.  I -- I want to get to everybody here that is -- is there anybody that I have not called on?  Please raise your hand.

Q:  Yes, sir, right here.

STAFF:  Okay, sir?  Go ahead.

Q:  Hi.  Wes Morgan with Politico.

You mentioned that you had a recent meeting with -- with special operations leaders who were, kind of, clamoring for this, as are other combatant commanders.  Can you talk a little bit more about what it is they want to do with it?

LT. GEN. SHANAHAN:  I'm not going to go into a lot of operational details, but when I -- when I talked earlier about smart systems, that's what -- what I was referring to.

We have all suffered for years and years from disparate systems telling different stories.  We've always had this vision of a global command and control system where the picture could be tailored and you could see whatever picture you needed to see.  It never really has existed.  Part of it's been a classification problem -- bringing in a lot of disparate sources at different classification levels -- but we just never had the opportunity to do that.

But in the downrange operations for Special Operations Command and JSOC, they're trying to get to that common operational picture, the common intelligence picture, and the picture coming off of a sensor.  Think of those three as a Venn diagram whose three circles collapse into one in a perfect world.  Now, let's say I do the John Madden telestrator on my operational picture.  It shows up in the intelligence picture.  And vice versa:  if an analyst wants to make an annotation on the intel picture, it shows up in the operational picture.
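The mechanic he describes -- mark one picture, and the mark appears on every other picture -- is essentially a publish-subscribe pattern.  A minimal sketch, with all class and method names invented for illustration (this is not any DOD system):

```python
# Hypothetical sketch of shared annotations across "pictures": a single
# bus publishes every annotation to all subscribed views, so the
# operational and intelligence pictures never diverge.
from dataclasses import dataclass, field

@dataclass
class Picture:
    name: str
    annotations: list = field(default_factory=list)

    def receive(self, note):
        self.annotations.append(note)

@dataclass
class SharedAnnotationBus:
    pictures: list = field(default_factory=list)

    def subscribe(self, picture):
        self.pictures.append(picture)

    def annotate(self, source, note):
        # Publish to every subscribed picture, not just the source view.
        for p in self.pictures:
            p.receive((source, note))

bus = SharedAnnotationBus()
ops = Picture("operational")
intel = Picture("intelligence")
bus.subscribe(ops)
bus.subscribe(intel)

bus.annotate("operational", "circle building at grid 123456")
bus.annotate("intelligence", "flag convoy on route green")

# Both views hold identical annotation histories -- the three circles
# of the Venn diagram collapse into one shared picture.
assert ops.annotations == intel.annotations
```

In practice the hard part is not the pattern but everything around it -- classification levels, disparate sources, and the enterprise cloud backbone he mentions next.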

Why is that important?  A massive increase in situational awareness.  It allows things to go faster, and it helps mitigate the chances of human mistakes.

I'm never going to say that A.I. is going to eliminate friction, chaos, the fog of war.  Never.  I'd be very careful about that.  But what it does is help mitigate human mistakes and let the humans spend time on what humans really do best -- so for SOCOM and JSOC, where do I need to put my brain power and think about the next steps I may recommend to a commander in the field?  That is, if I can get the machine through the enormous volume of information much faster, find the signal in the noise, and present that in a different way.

That's about as broad as I want to make it right now, without getting into the specifics of it.  But it is that idea of shared awareness across a battlefield, with an enterprise cloud infrastructure backbone that allows all this data to be available to all participants.  That's a little bit utopian-sounding, but in terms of where A.I. needs to go -- in terms of the massive amounts of data that will make algorithms better and better -- I need an enterprise cloud solution.

Q:  Thank you.

Q:  This is inherently multi-domain, I presume?

LT. GEN. SHANAHAN:  Inherently multi-domain.  All domains -- I stopped using "multi-."  I'm using all domains.

I want to just note that we are working closely with the Joint Staff.  The J6, Lt. Gen. Shwedo, has been given the task of standing up a joint all-domain command and control cross-functional team.  And the Joint Staff J7 has been tasked similarly to come up with operating concepts for an A.I.-enabled future.

We see our A.I. for maneuver and fires fitting in nicely with both of those efforts.  And we're already embarked on that, trying to understand our first couple of projects.

STAFF:  Sir, thanks for your patience, please go ahead.

Q:  Hi, way back here.  Luis Martinez with ABC News.

I'm trying to -- my question's going to be trying to bring in all of these different elements that I've heard back here today, which maybe people up front have heard, as well.

You talked about JSOC and SOCOM pounding the table; they want it now.  You talked about 50 percent (inaudible) to get there.  You also talked about how we don't really know what A.I. is, that we need to define it.  And then you talked about the hype.

Where are we in this, where warfighters may not even have an idea of what it is that you're doing based on what they need?

LT. GEN. SHANAHAN:  It's a -- it's -- it's an important question.  It's a good question.

This is about education and training as much as it is about anything else.

I was just reading the stories of Jack Ma and Elon Musk talking yesterday at the A.I. Conference.  They're talking about a vision that is so far out in the future, it has nothing to do with my practical applications of A.I. in the here and now.

I have probably 90-plus percent of the department -- maybe the number's higher, I -- I -- I'm just guessing -- that don't even understand the basics of what this capability is or is not.  It's as important to understand the limitations inherent in A.I. as it is to understand the strengths.  But until we give them something in the field, they don't know either way.

So there's a multi-faceted way we need to come after -- at this, and one is give them capabilities, take their requirements, field something as a minimal viable product and then get into continuous integration, continuous delivery.

We can only get to that future on the infrastructure and enterprise cloud.  We have to do this faster and faster.

I will give you a case in point from Project Maven.  There was a little bit of a contracting hiccup along the way, and it took six months to get one algorithm updated to the next version.  At that point, the warfighter was very frustrated, as you expect them to be.  How long does it take in your iPhone to get an update on -- on whatever app you've got on there?  It's -- it's hours, it's days at worst.  And then once they're contracted, now they're back into this monthly cycle.

So it's the idea of showing them the art of the possible, both the education and training piece.  When I say multi-faceted, I mean we have to go back to where we bring people into the services.  Do we teach them coding skills from the beginning or just general principles of artificial intelligence?

One of them -- one of our roles in -- in the JAIC, as a center of excellence, is to begin to work with -- with the DIB and others in the services on what should those training programs even look like.

From the senior most leaders in the department down to the junior most, I would say in general, the people that are coming in to the military service today, whether civilians or -- or uniform wearers, have a pretty good idea of what this technology is capable of without knowing the coding behind it or the mathematics behind -- behind it.  But they understand what they're capable of doing.  Now they just need to see how it could be applied in the field.

The whole point I want to emphasize on the decentralized development experimentation part of the JAIC model is to allow the people that are going to use this day in and day out to experiment and play around with it.

But they just don't see it yet.  Until we give them something to -- to -- to use, they don't even know what to do.

But I'm -- but I'm hopeful -- I'm hopeful that in a year or two years -- I hope less -- the Common Foundation will be one of the JAIC's real critical elements, so people can come in and use that environment while they're out there doing that decentralized development.

That somebody on a battlefield in -- in Afghanistan can get access to the Joint Common Foundation, build their own app and just do it in real time -- I don't even know about it; I'm just the beneficiary of whatever data comes off of that and feeds back into the Joint Common Foundation.

That's the vision that I think is a promising one, but it's a -- it's that balance between the hype of -- of killer robots versus I'm just trying to detect, classify and track an object in four classes.  And -- and there is -- there is a disparity.

I -- we have a CTO [Chief Technology Officer] now on board with 25 years of experience in Silicon Valley, and his view of -- of the JAIC -- and I -- and I have plagiarized ruthlessly from him -- is the hockey stick analogy.  We're still in the blade until maybe 2023.  We'll incrementally get better, and then all of a sudden, we'll go up the rest of the stick and really begin to take off.

We don't know which ones are going to succeed and become that big breakthrough.  That's why we're going to rely on users in the field to just wring it out.  And then based on that, we'll be able to do some things in the JAIC to field that at scale.

But this is about getting fielding at speed and at scale.  That's why we exist.  It's adoption and integration, not research and development.  We have a close relationship with research and engineering, with Dr. Griffin and Dr. Porter and Matt Daniels on that side, because they're looking at A.I. next, we're looking at A.I. now, and we're trying to figure out can we bring some things across that technology valley of death faster.

You know, there's some really great research going on in these areas.  How fast can we bring it in while still adhering to tests and evaluation requirements in that?

Q:  Thank you.

STAFF:  We've got a bit of time for those that might want to do follow ups, but I want to see if there's anybody that didn't ask a question that wants to ask one.  So if that's you, raise your hands.

Go ahead, ma'am.

Q:  Hi.  Lauren Williams, Federal Computer Week.

We're talking about making sure that you can field at least minimal -- minimal viable products.  Are you having any work force challenges --

LT. GEN. SHANAHAN:  Yes.

Q:  -- making sure --

LT. GEN. SHANAHAN:  Oh, yeah.

Q:  Can you --

LT. GEN. SHANAHAN:  (inaudible)

Q:  Can you talk about how you're -- you'd want to overcome them?

LT. GEN. SHANAHAN:  Yes.

No, it's -- if you were to ask me, what are the three, kind of, day-to-day biggest challenges that we had in Maven and that we have in JAIC, it's data, culture and talent, or -- or expertise.

Why?  Because it's just, there's a limited pool of it out there.  As we stood up the JAIC and we brought people in as detailees initially -- remember, we don't even have our -- our official positions on our books until 1 October of this year, so we were bringing detailees from the services -- and it's hard to find people with deep experience in artificial intelligence/machine learning, or in product development and delivery, which is really a commercial industry skill.  There's -- Kessel Run's a good model for it.  Compile (inaudible) on the Navy side.

(UNKNOWN):  Yeah.

LT. GEN. SHANAHAN:  But there's just not a lot of deep bench in these areas.  So we're having to build it somewhat, but we're also bringing in people to help us upskill the force.  A large part of this is going to be, how quickly can we upskill the force?

I've got a Marine colonel as part of the team who had never done any machine learning before.  He sat out there on Coursera, taking courses and teaching himself.  A lot of this is self-initiative.  We have to build credibility and expertise in the JAIC.  If we want people to come into the JAIC as -- as the center of excellence, we have to have that credibility and expertise.  It's just going to take us a little while to build it.

So we're -- we have -- we have a lot of different commercial and academic institutions -- commercial companies, academic institutions that are interested in helping us get there.  But part of this is, we just have to build it over time.

The -- the vacancies I have right now across the JAIC are -- really, the most critical ones are in the two areas I mentioned -- AI/ML expertise and product development expertise.  It just takes a while to build it.

It's common across the entire government.  I'd say it's common across society.  How do I incentivize somebody to come in at a 50 to 75 percent salary cut and work in a government where our rules and regulations are a little different than they are in commercial industry?  They have to do that for -- for personal motivation reasons that I hope they're going to jump at, like the CTO.  He came in because he wanted to serve the government, help bring A.I. into the Department of Defense in terms of fielding.

But as you're getting at, this is -- this is common across the entire United States right now.  It's just, there's a great competition for those resources.  I want to build some, but I need some outside help, as well.

STAFF:  We have time for a few follow-ups.  Sir, you've been waiting patiently, so why don't you go ahead?

Q:  Thank you.

You've used different terms to describe these new warfighting applications.  But for the layman, can you just paint a picture, some real-world examples of how you envision these applications being used on the battlefield?

LT. GEN. SHANAHAN:  Yeah.

I -- the way I'd love to see it is this: five years down the road, people don't even know it's A.I. anymore, just like they don't know it's A.I. when they're getting recommendations from Netflix or Amazon on what to see or what to buy.  It just becomes ubiquitous, becomes baked into the very fabric, so it's just allowing better and faster decisions, allowing the machine to get through massive amounts of data quickly, find signal in the noise and offer recommendations.

Right now we're still in the early stages of what I call perception -- Project Maven was -- was a perception project -- which is just, can I -- can I automatically detect, classify, track and maybe provide a little bit of extra information so that a human doesn't have to stare at a video screen for 11 hours at a time, which is how we've been doing it for the last 20 years in the Department of Defense.

So this is about, let the machines go through the data as fast as possible, make recommendations or -- or options to an analyst, to a commander, to an operator.  And it just gets through decision-making processes better and gives humans time back.
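
The detect-classify-track triage he describes can be sketched in a few lines: a model emits labeled detections with confidence scores, and only the high-confidence ones go straight to the analyst while the rest are queued for human review.  This is an illustrative toy only; the class names and threshold are assumptions, not anything fielded:

```python
# Toy sketch of the perception triage described above: keep high-confidence
# detections for the analyst, queue the rest for human review.

CLASSES = {"vehicle", "building", "person", "boat"}  # assumed object classes

def triage(detections, threshold=0.7):
    """Split (label, confidence) detections into confident vs. needs-review."""
    confident, review = [], []
    for label, conf in detections:
        if label in CLASSES and conf >= threshold:
            confident.append((label, conf))
        else:
            review.append((label, conf))
    return confident, review

confident, review = triage([("vehicle", 0.92), ("boat", 0.41), ("person", 0.88)])
```

The point of the threshold is the human-machine division of labor: the machine burns through volume, and the analyst's time goes only to the ambiguous cases.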

So it -- it's a little hard to describe, because it is not -- it's not so tangible that I can say, "Here's a piece of A.I., and let me tell you what it's going to do for you."  What it is is fielding that minimum viable product and having somebody say, "Now I understand what you're talking about.  Could it do this?"  "Well, actually, yeah.  We didn't know about that.  Didn't think about that.  Let's make it better."  Then we field in an updated version.

So it's going to take a little while for people to understand or to put their -- put their hands on A.I., so to speak.  But those who have been playing with it see the future and understand, and they want a -- they want a role in -- in shaping this.  And the JAIC, working with the services and components and combatant commands, is here to help them get through that process faster.

But it is -- it is a little hard to describe.  It's just a thing, but I would just go back to, pull out your phone.  What's on that phone?  Why do you think it's giving you that recommendation?  Why is Waze telling you to go that way, but not that way?  It's an optimization problem.  That's what it is, and that's what artificial intelligence is.  It's probabilistic.  It's not deterministic, which is why we're careful to say that humans will always be somewhere in that process.  A human/machine team, machine-to-machine interfaces, all of it designed to let humans get to decision-making faster.
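
That probabilistic-versus-deterministic distinction can be made concrete with a small sketch: the model turns raw scores into a probability distribution and only offers a recommendation when one option clearly dominates, deferring to a human otherwise.  The labels and the 0.8 threshold are assumptions for illustration:

```python
import math

def recommend(scores, decision_threshold=0.8):
    """Softmax the raw scores into probabilities; defer to a human if unsure."""
    exps = {k: math.exp(v) for k, v in scores.items()}
    total = sum(exps.values())
    probs = {k: e / total for k, e in exps.items()}
    best = max(probs, key=probs.get)
    # Probabilistic, not deterministic: the machine offers an option only
    # when confidence is high; otherwise a human stays in the loop.
    if probs[best] >= decision_threshold:
        return best, probs[best]
    return "HUMAN_REVIEW", probs[best]

choice, p = recommend({"route_a": 3.0, "route_b": 0.5})  # clearly favors route_a
```

With near-equal scores, no option clears the threshold and the sketch returns `"HUMAN_REVIEW"`, which is the human-machine-team behavior the general keeps emphasizing.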

STAFF:  Sir, did -- did you have a follow-up from earlier?

Q:  I did, yes.

STAFF:  (inaudible)  Go ahead.

Q:  Thank you.

STAFF:  Yup.

Q:  Yeah, going back to the Joint --

LT. GEN. SHANAHAN:  Yeah.

Q:  -- Common Foundation, I was just curious how much that idea of a suite of A.I. tools being applied to a set of data -- how -- how much that's building off of, relating to or tying into similar attempts in the intelligence community --

LT. GEN. SHANAHAN:  Yes.

Q:  -- like MARS, or --

LT. GEN. SHANAHAN:  Yeah, no, I'm glad you brought up MARS [Machine-assisted Analytic Rapid-repository System], because I just -- just met with the team the other day.  It's more on the USDI [Under Secretary of Defense for Intelligence] side with Project Maven, because what MARS is trying to do is replace MIDB [Modernized Integrated Database].

For those of you who don't know it, it's just a massive database of everything -- everything you could know about any potential target anywhere in the world.  But it's a very old Industrial Age system; it can't keep up with the volume of data coming in today.  So MARS is designed to get to that future.

But imagine if all of that metadata coming off the Project Maven capabilities can feed into MARS.  I can't even -- I don't even know if that begins with a "Z", zettabytes, whatever.  Whatever the amount is, it's going to be massive.  But we can handle that in the enterprise cloud future that we all see coming.  MARS can take that data.  Now what do you do with it?  I build algorithms on top of algorithms.  Now we go from perception to actually reasoning.

That's -- that's what Maven's working on now.  I think we all see that that's the next big step, is how do I actually make reasoning recommendations?  If you saw these three things, here is what we think might be happening behind the scenes: it's likely that they're moving a missile from here to here, and based on previous analysis, we think there's a 70 percent chance they will fire this missile at this target for this reason.

That's why MARS is so important to be connected in an A.I. future, a machine-learning future -- for those -- for those massive amounts of data that humans are never going to get through.

If you're familiar with DIA [Defense Intelligence Agency], they're bringing in social media now, but it's -- it's -- it's millions to billions of records, and you just can't possibly get through it manually.  So they've already got some projects going on to sift through that faster.  It's what commercial industry is doing, as well.

But that is -- that is the future as I see it, and then how do we, Maven, DIA and others tie in together?

STAFF:  Sir?  Go ahead.

Q:  (inaudible) with (inaudible) Network again.

I know we're not talking JEDI, but can you just talk a little bit about how not having a large enterprise cloud is hindering what you (inaudible)?

LT. GEN. SHANAHAN:  Yeah, so -- yeah, so I'll just put this at just an enterprise cloud discussion, no more than that.

What do I need from modern A.I.?  I need massive amounts of data.  I need massive amounts of compute, and elastic, at that, so it goes up and goes down based on my needs.  I'm doing a training run, massive amount.  Inference, I might need a little bit less.  So data, compute, bandwidth/transport -- massive amounts.  And then I need continuous integration, continuous delivery.

How do I get all of that simultaneously?  It's in hyper-scale commercial cloud right now.

That's what -- that's why the future of an enterprise cloud is so important to the JAIC.  If you have disjointed compute platforms or disjointed architecture, your solutions will be disjointed.  This is about getting to a common -- it's not going to be the only cloud.  I think Mr. Deasy made that very clear in -- in his talks.  This is -- department's going to be multiple cloud for -- forever, I -- I think.  But this is -- as an enterprise cloud for A.I. purposes, we need that to do continuous integration, continuous delivery.

The stat I heard just two weeks ago was that in training runs for the big companies that are doing A.I. at hyper-scale, the amount of compute available in those training runs is doubling every three and a half months.  If you were to try to build a data center, it would be obsolete in seven months because you'd just -- you'd be out -- you'd be out.  (Inaudible) rely on an enterprise cloud (inaudible).
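
Taking that doubling figure at face value, the arithmetic behind the obsolete-in-seven-months claim is straightforward; here is a quick sketch (the 3.5-month doubling period is the stat he cites, not an established constant):

```python
def compute_growth(months, doubling_period=3.5):
    """Growth factor in state-of-the-art training compute over `months`,
    assuming it doubles every `doubling_period` months."""
    return 2 ** (months / doubling_period)

# Seven months is two doubling periods: a data center sized for today's
# training runs faces four times the demand by then.
factor = compute_growth(7)
```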

This is a discussion that should have -- should have been over with five years ago, but we just didn't realize where this was -- was all headed five years ago.  Now it should just be -- it's part of our infrastructure.

You know, look at infrastructure as a service -- it's the platform on top of that that really matters to the future of A.I.  We just want to make that part of the fabric of the JAIC, so it's just there.  But we -- we are only going to get that right now through an enterprise cloud.

It's not ever going to be the only solution, but when we have data coming off of things like platforms and sensors that want to talk to MARS, then, boy, it should be -- be a lot more beneficial, so I don't have problems with interoperability of pulling data out of one cloud infrastructure and putting it into another.

This is about trying to get to that Common Foundation.

STAFF:  We started this morning with Sydney and we'll take our last question from Sydney.

But before we do that, I just want to make sure for the folks here, did -- did everybody that wanted to ask a question, were you able to -- to ask it in -- in the crowd here?

Okay, great.  Sydney, go ahead.

Q:  Thanks.

Final question about the 50 percent solution, you know, which (inaudible) projects.  Let me ask, you know, in rough terms, how does that compare to everything else?

'Cause obviously traditional means of intelligence are hardly infallible, either.  Is it a matter of, well, actually, 50 percent is, you know, pretty close to what the -- the current system does, or is it rather that this 50 percent is something you would never have seen before because there was no way to winnow it out from the massive data?

LT. GEN. SHANAHAN:  So, I think -- yeah, the -- when I say 50 percent, let's say we're doing intelligent business automation and, sort of, robotic business automation.  That could be a 100 percent solution the first time it's fielded, because it's a relatively simple problem.

So it -- it depends on the complexity of the problem we're going after.  It could be that predictive maintenance is an 80 percent solution when first fielded.  So it will -- it will be different.

Full motion video's a complex scenario.  Admittedly a little bit more complex than we probably expected at the time we took this on.  We had no choice, we had to do it; we ran out of analysts and -- and had to -- had to do it.

But the important point, Sydney, that I'd emphasize is as long as there is an understanding between the user and the developer of what they're getting, that's the most important part, if they understand it's a 50 percent or 70 percent or 80 percent.  Because if they don't understand that and they find that it's a coin flip, and they didn't realize it was a coin flip, they'll give up and not use the system.

This is why user engagement on the very front end -- in the development phase -- is so important; this is just commercial industry standard right now.  You're in there talking about this on the front end all the way through integration, when the service ends -- ends up picking it up.  And then having things like dynamic retraining, where a button on the machine that an intel analyst is using says, you got that one wrong.  That data feeds back, it's blended with the other problems that were identified, goes into an algorithm update and gets fielded in -- in -- within a month.
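
The dynamic-retraining button he describes amounts to a simple correction queue that feeds the next algorithm update; a minimal sketch follows (the field names and structure are assumptions, not the actual Maven interface):

```python
class FeedbackLoop:
    """Collect analyst corrections and batch them into the next retraining set."""
    def __init__(self):
        self.pending = []

    def flag_wrong(self, example_id, correct_label):
        # The "you got that one wrong" button: record the correction.
        self.pending.append({"id": example_id, "label": correct_label})

    def build_update_batch(self):
        # Blend the corrections into one retraining batch, then clear the queue.
        batch, self.pending = self.pending, []
        return batch

loop = FeedbackLoop()
loop.flag_wrong("frame_0421", "vehicle")
loop.flag_wrong("frame_0498", "building")
batch = loop.build_update_batch()
```

The design choice worth noting is the cadence: corrections accumulate continuously in the field but are folded into the model on a fixed update cycle, which is what makes the monthly fielding rhythm possible.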

That's why this is different than we've done things in the past.

And -- and to your point, which I think is a very important one, is humans are fallible.  In combat, humans are particularly fallible and mistakes will happen.  A.I. can help mitigate the chances of those mistakes -- not eliminate them, but mitigate, reduce.  Maybe we have a lower incidence of civilian casualties or collateral damage because we're using artificial intelligence.

So your point is right, whether it's a 50 percent number or not, humans are very fallible in combat and we want to give them the best possible tools.

This is about protecting U.S., allied and partner service members fighting for their nations.  This is a really important point.

I go back to Desert Storm, where somebody in a squadron I used to be in fired a High-speed Anti-Radiation Missile, a HARM, off an F-4G in Iraq.  Heat of battle, lot of friction -- he designated a Patriot weapons system instead of what he thought was a surface-to-air missile system in Iraq.  There was no failsafe mode in the HARM -- he couldn't recall it -- and it hit and killed American soldiers.  I am convinced that had artificial intelligence been around, that would not have happened, for a variety of different reasons.

But it's the real-world case studies of people who have been downrange in combat that said, "If I had had this, here is the difference it would have made in my decision-making."

Again, not perfect, and we have a lot of work to do in that area.  But the idea of just putting it out in the field and letting people understand what the art of the possible is, is what we're focused on right now.

STAFF:  Ladies and gentlemen, thank you very much for attending today.

If you have any other follow-ups over today and the weekend, I will provide my e-mail, and you all know to get ahold of Heather and Elissa, who have the desk for the CIO and A.I. accounts for OSD(PA) [Office of the Secretary of Defense for Public Affairs].  You can contact them and they can connect you over to me.

So thank you for being here and we'll see you next time.

LT. GEN. SHANAHAN:  Thanks, everyone.  Thank you.