
DOD Official Briefs Reporters on Artificial Intelligence Developments

STAFF: Good afternoon, ladies and gentlemen, and welcome to today's press briefing. Today, the acting director of the Department of Defense (DOD) Joint Artificial Intelligence Center (JAIC), Mr. Nand Mulchandani, will provide an update and overview of ongoing AI initiatives in the Department of Defense. After the opening remarks, we'll go out to the room and to the phones. I'll ask that each person in the room raise their hand; I've got a list of people who have called in, and I'll go out to the phones. We'll alternate, and that will provide an opportunity for everybody to ask their question.

With respect to time, we ask each person to ask only one question and a follow-up, and then, if we get to everybody, we'll go back around if there are additional questions. As standard, just identify yourself with your name and your organization before asking your question.

And with that, we'll -- I'll now turn it over to our director, Mr. Mulchandani, for his opening remarks.

ACTING DIRECTOR NAND MULCHANDANI: Great, thank you, Arlo. Good afternoon, ladies and gentlemen. It's so great to have everyone here and on the phones today.

I'm both personally excited and humbled to be addressing all of you in my first press -- press briefing here at the Pentagon. After a long career in the technology industry, my pivot to public service at the DOD has been both educational in how the department works and exciting in how much change is happening in adopting new technologies and practices.

Last month I took over the leadership of the JAIC from Lieutenant General Jack Shanahan, who left behind an amazing track record both as a career Air Force officer and as the founder of AI at the DOD, having led Project Maven and then the JAIC. General Shanahan was a great mentor and a friend to me, helping me learn about operating in the department, and we worked together on fusing the best ideas from the American technology industry and customizing them to work at the DOD. He's missed by all of us, and he leaves behind this transformation that we're driving right now. I'm personally grateful for the time that I got to work with him.

Now let's turn to a discussion of the state of the JAIC, laid out as a story of a classic startup that is operationalizing new technology with an innovative business model. As someone who has cofounded and led multiple startup companies and has been involved in the venture capital industry for many years, I know that starting a new organization to productize new technology is always a challenge, and that's doubly true for a new technology start-up inside the DOD. And I know this is obvious to everyone, but in spite of the fact that AI is an emerging area of tech, it is still technology and not magic. 

The initial challenge that we're working on right now is adopting it for use in new or existing capabilities with other horizons of challenges beyond that. Having been through these types of major technology shifts, I am a big believer in the classic hockey stick curve. It takes time and investment to get going, but technology maturity and adoption typically hits an accelerating curve once it's ready for scaling. The modulation of additional investment in adjusting this curve is a core part of how you manage tech transitions like this.

So matching the adoption of this new technology without getting ahead of this curve is where the game gets won or lost. When the JAIC was stood up in 2018, the focus was on picking low-technology-risk but solid-payout projects in areas such as disaster relief and predictive maintenance. These products are now maturing, and we're working closely with the services and combatant commands to transition them into production and deliver value, such as the H-60 helicopter engine health model that's now in production at SOCOM (U.S. Special Operations Command), and other announcements that we'll be making in the future.

As an organization, we've learned a great deal from our successes and setbacks in managing these mission initiatives, and those lessons are reflected in our refined business model and plan that we call JAIC 2.0. We're now in an even stronger position to operationalize those lessons learned, drive scale, and catalyze long-term change in the DOD through AI.

Just to remind everyone, the JAIC itself is not just a technology and product organization. Our missions team engages deeply with our customers, the services and combatant commands, and our acquisition, strategy and policy, human resources, and education and training initiatives enable the JAIC to fulfill its mandate as the DOD center of excellence for AI. We view the mission to transform the DOD through AI as a whole-team effort. More broadly, the JAIC now leads the DOD-wide AI governance process, which involves participation from most of the senior leaders across the combatant commands, DOD components, and the services, and allows us to synchronize efforts and share learnings across the DOD.

Just as with traditional software, AI software is relevant to the full spectrum of DOD activities, from the back office to the front lines of the battlefield. The JAIC has six mission initiatives underway that are all making exciting progress: joint warfighting operations; warfighter health; business process transformation; threat reduction and protection, which is what used to be called HADR (humanitarian assistance and disaster relief); joint logistics, which covers our predictive maintenance efforts; and our newest one, joint information warfare, which also covers cyber operations.

As we have matured, we are now devoting special focus to our joint warfighting operations mission initiative, which is focused on the priorities of the National Defense Strategy and its goal of preserving America's military and technological advantages over our strategic competitors.

In late May of this year, the JAIC awarded the joint warfighting operations prime contract, with an $806 million ceiling, to Booz Allen Hamilton. The list of companies supporting the JAIC and the Joint Warfighting Operations (JWO) Mission Initiative (MI) now includes not only many of America's largest and most recognizable technology companies but also a host of innovative, smaller defense technology start-ups as well.

In partnership with American industry and organizations across the DOD, the AI capabilities that the JAIC is developing as part of the joint warfighting operations mission initiative will use mature AI technology to create decisive advantage for the American warfighter.

As just one example, we are developing, in collaboration with the Marine Corps Warfighting Lab and Army PEO C3 (Program Executive Office Command, Control and Communications), the fire support cognitive assistant, which will help commanders triage incoming communications and support Joint All-Domain Command and Control, also called JADC2. For FY (Fiscal Year) '20, spending on joint warfighting is roughly greater than the combined spending on all of the JAIC's other mission initiatives, just to underscore the importance of this new initiative for us.

Having covered our strategic focus, let me now emphasize speed and customer focus. I know some of you here are already familiar with Project Salus, which is an AI-enabled predictive analytics platform that was developed by our team and is currently enabling the Common Operational Picture, the COP, interface at U.S. Northern Command headquarters in Colorado, where it is helping operational planners with supply chain management in support of the federal COVID-19 response.

By embedding a team of customer engagement professionals directly with them, this product went from concept to deployed code in a few weeks, which gives you a basic idea of how quickly we can get from an idea to production. We plan to use this development model across the other products that we're building in an effort toward scaling our products.

So I hope this has been a good overview of where things are with the JAIC. Now for the last part of this discussion, I wanted to take this opportunity to directly address a few issues that always come up around any discussion of our organization.

As we all know, any significant advances that we're going to make in new technology areas will be in partnership with our industry and academic partners. I know that in the past some of you have written about the challenges that the Department of Defense has had working with the American tech industry, such as with Project Maven, and there is no question there have been and always will be specific incidents that make the news.

However, it turns out that we have had overwhelming support and interest from the tech industry in working with the JAIC and the DOD, and we have commercial contracts and work going on with all of the major tech and AI companies, including Google and many others, on projects that have a major impact on U.S. national security.

They're free to engage with us on projects across the spectrum, from bending the cost curve to combat systems, but in all cases the engagements and relationships have been incredibly valuable and our bonds are only getting stronger.

The other issue that I'd like to address is the concept of AI superiority and whether our peer competitors are somehow ahead of us in AI. While it is true that the United States faces formidable technological competitors and challenging strategic environments, the reality is that the United States continues to lead in AI and its most important military applications.

First, AI as a single monolithic technology does not exist. When talking about AI technology and products, it is best to do so on a case-by-case basis, looking at the specific technology and vertical focus. As with other parts of business, anything that has value gets investment. Let's remember that this current wave of AI was driven by online shopping and advertising through click analytics. So the key point to be made here is that leadership in military AI is inherently application-specific and context-specific.

There are some areas where China's military and police authorities undeniably have the world's most advanced capabilities, such as unregulated facial recognition for universal surveillance and control of their domestic population, trained on Chinese video, gathered from their systems, and Chinese language text analysis for Internet and media censorship.

It is not that the United States military is technologically incapable of developing such systems; it is that our Constitution and privacy laws protect the rights of U.S. citizens, including how their data is collected and used, and therefore we simply don't invest in building such universal surveillance and censorship systems.

Further, we know that China and Russia are developing and exporting AI-enabled surveillance technologies and autonomous combat systems without providing any evidence of adequate technical or ethical safeguards and policies. By contrast, the United States has openly published its policies on both military autonomy and AI ethics. So put simply, the United States is not behind in a race for these AI applications, we simply deny that their development and use for the ends of state repression and control represents forward progress. 

However, for the specific national security applications where we believe AI will make a significant impact on the future balance of military power and strategic competition, I believe the United States is not only leading the world but is also taking many of the steps needed to preserve U.S. military advantage over the long term, as evidenced by what we just discussed and what we're doing here at the JAIC.

In stark contrast to our strategic competitors, we have done this while preserving and doubling down on our very public commitment to developing safe, ethical, and responsible AI technology and our commitment to cooperating with our allies and partners.

During my recent trip to NATO and the European Union (EU) in January, it was very apparent that we share much in common with our European partners on our commitment to military ethics and the benefits of AI to modernize our military forces and strengthen our alliances and partnerships for the digital age. So through all of this, just like any other great start-up organization, we'll maintain a sense of urgency in all we do. We understand the stakes are high, the competition is busy, and that we can't afford to slow down. 

So I want to thank all of our leadership in the department, including the Secretary of Defense, the Deputy Secretary, the service secretaries, the Chairman and Vice Chairman of the Joint Chiefs of Staff, and my boss, DOD CIO (Chief Information Officer) Dana Deasy, as well as our leadership in Congress, our partners in industry and academia, our allies and militaries across the world, and of course the American people, for your confidence in and support of the JAIC to deliver on our mandate to transform the DOD through AI.

We have a lot of work in front of us but I'm absolutely confident about the direction we're heading in and the impact we'll have on our military over the next couple of years. With that, happy to take your questions. Thank you.

STAFF: Thank you, sir, for those remarks. And I'm going to go out to a journalist in the room first and then we'll go to the phones. And ma'am, if you could lead us off if you have a question?

Q: Yes, thank you -- thank you. Sandra Erwin with Space News.

MR. MULCHANDANI: Hi.

Q: Hi. I wanted to ask you about your connections with the Intelligence Community as far as their investments that they're making in AI for analytics, which is a significant investment for geospatial intelligence and all the tools that they need for military commanders. How are you leveraging that, how are you interacting with that side of the national security community? Thank you. 

MR. MULCHANDANI: Yeah, great, great question, thank you. 

As you noted, there are huge investments going on on that side of the DOD and in other agencies as well. Right now, a lot of the work going on in AI is fundamental building-block AI, if you will. So if you look at other products going on on the intel side, or even the work we're doing, we're working on fundamental AI algorithms, data collection, et cetera, that basically build algorithms for things like vision and full motion video.

We do share a lot of common building blocks and models with them. However, we do keep a strict separation in terms of data, which obviously is the lifeblood of AI, honoring all of the legal limits and policies around sharing information that may spill over into citizens' data or other things there.

But from a fundamental building blocks perspective, we actually have a lot of work going on with other projects -- like, for instance, Maven and others -- but also with the broader intelligence community there.

Q: And just a quick follow-up, how do you work with the private sector? Do you -- do they pitch ideas to you, do they tell you what's going on? What is sort of your interaction? 

MR. MULCHANDANI: Actually, extensive interaction. First, the individual who runs industry relations for the JAIC is actually located out in Mountain View, California, not in Washington, D.C. So we're right at the heart of Silicon Valley, collocated with DIU (Defense Innovation Unit), with whom we work very closely.

Second, all of our solicitations, in terms of our RFIs (Requests for Information) or other work that we do, are posted publicly. We tend to get dozens and dozens of inbound inquiries, both through our RFPs (Requests for Proposals) but also on a regular basis. We have an entire data and AI team that actually spends time evaluating new products, looking at the frontier of what's coming down the pipe, both in academic research but also in industry research.

The other thing I'd say is, the JAIC is -- you know, we are located under the CIO, focused on commercializing technology that is scaled out there. 

There's another entire part of the DOD -- obviously DARPA (Defense Advanced Research Projects Agency) and R&E (Research and Engineering) -- that does other types of engineering and fundamental research on AI. And one of the things that we're doing is working very closely with them on productizing -- taking technologies as they're maturing and bringing them into production.

And so the JAIC is part of a much larger system that we have at the DOD that goes all the way from fundamental research to delivery and deployment. The JAIC is a key part of it, but we're part of a much larger machine that deals with industry, deals with tech and all the other pieces there.

Q: Thank you.

STAFF: I'm now going to go out to the phones. The first question will come from Gopal Ratnam from Congressional Quarterly. 

Gopal, go ahead. 

Q: Yes, hello, can you hear me?

MR. MULCHANDANI: Yes. 

STAFF: Hi, Gopal. 

Q: Yes, hi, thanks for doing this call, Nand. You mentioned in the context of AI and China, how that country's facial recognition program, for example, is completely unregulated. And so I want to ask a question on that front. 

In the context of the current civil rights protests all across the country, lots of companies that are in the business of making facial recognition software and technology -- Amazon, Microsoft, and others -- have said that they would either suspend or completely halt, you know, any further development, investment in those technologies, especially for police use. 

Is there any -- how do you think about use of facial recognition or related technologies in military applications? 

MR. MULCHANDANI: So a couple of things to say on that. Number one, private companies are obviously absolutely free -- this is a free country -- to build any products and sell any products they'd like. Local and state governments and other parts of the U.S. government are free to adopt any technologies that are within the law or within their policies or other local constraints there.

As far as the DOD goes, though, we are strictly regulated in terms of not only dealing with U.S. citizens' data and personal data but also doing any work inside the United States. So from that standpoint, we at the DOD and the JAIC do not invest or work in any of that type of technology, especially when it comes to handling any form of data on U.S. citizens or other things.

So I can't comment on what other organizations may be doing outside the DOD, but when it comes to the DOD itself, we are strictly regulated by those policies and laws, and as a matter of fact, I can clearly tell you that at the JAIC, we do not have work going on in any form of facial recognition technology today.

Q: Great. If I can just ask one more question on a slightly different topic -- you referenced the Project Salus work and said how that has been transitioned and is now being used across the country. Can you offer any concrete example of how that particular technology is being deployed? I think this was created in the context of dealing with shortages driven by the COVID pandemic, so can you offer a specific example? Thank you.

MR. MULCHANDANI: Yeah, so this product was developed in direct work with NORTHCOM (U.S. Northern Command) and the National Guard. They obviously have a very unique role to play in ensuring that resource shortages -- whether it be water, medicine, supplies, et cetera -- are harmonized across an area that's dealing with a disaster. What they did not have before is predictive analytics on where the shortages will occur, as well as real-time analytics in terms of supply and demand.

So we have, now, roughly about 40 to 50 different data streams coming into Project Salus at the data platform layer, and then we have another 40 to 45 different AI models that are all running on top of the platform that allow for General O'Shaughnessy, the NORTHCOM Operations team, all of them, to actually get real-time information but also actually get predictive analytics on where shortages and things will occur. 

So for instance, based on a particular weather event hitting, the system is able to predict where traffic bottlenecks will happen, where hotel vacancies may happen, where military bases are that could stockpile, you know, food and other things, as well as retail information, flow-through retail information.

So there's a tremendous amount of data that we've aggregated -- again, all fully vetted by our lawyers, by OSD policy and others, to make sure that there's no personally identifiable data; it's down to the zip code level. But this is a product where we worked incredibly closely with NORTHCOM, collocated with them, and built it in record speed. This is a really exciting product for the JAIC.
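To make the idea of predictive analytics over aggregated data streams concrete, here is a minimal Python sketch of a Salus-style shortage forecast. The ZIP codes, feed values, and simple linear-trend model are illustrative assumptions, not the actual Salus implementation, which ingests roughly 40 to 50 live feeds and runs dozens of models.

```python
"""Minimal sketch of a Salus-style predictive supply pipeline.

Hypothetical stream names, data, and model choice; the real system
aggregates dozens of live feeds and runs many AI models on top.
"""
from dataclasses import dataclass
from statistics import mean


@dataclass
class Observation:
    zip_code: str
    day: int          # days since the start of the event
    supply: float     # units on hand
    demand: float     # units requested


def linear_trend(xs, ys):
    """Ordinary least-squares slope and intercept for a 1-D trend."""
    x_bar, y_bar = mean(xs), mean(ys)
    denom = sum((x - x_bar) ** 2 for x in xs) or 1.0
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / denom
    return slope, y_bar - slope * x_bar


def forecast_shortages(observations, horizon_days=7):
    """Flag ZIP codes whose projected demand exceeds projected supply."""
    by_zip = {}
    for obs in observations:
        by_zip.setdefault(obs.zip_code, []).append(obs)

    flagged = []
    for zip_code, rows in by_zip.items():
        rows.sort(key=lambda o: o.day)
        days = [o.day for o in rows]
        future_day = days[-1] + horizon_days
        s_slope, s_icpt = linear_trend(days, [o.supply for o in rows])
        d_slope, d_icpt = linear_trend(days, [o.demand for o in rows])
        supply_proj = s_slope * future_day + s_icpt
        demand_proj = d_slope * future_day + d_icpt
        if demand_proj > supply_proj:
            flagged.append((zip_code, round(demand_proj - supply_proj, 1)))
    return sorted(flagged, key=lambda t: -t[1])


if __name__ == "__main__":
    # Synthetic feed: supply drifting down, demand drifting up in 80202.
    feed = [Observation("80202", d, 100 - 4 * d, 60 + 6 * d) for d in range(5)]
    feed += [Observation("22102", d, 200 - d, 50 + d) for d in range(5)]
    print(forecast_shortages(feed))
```

The design point is the same one described above: merge per-location supply and demand feeds, project both forward, and surface the locations where demand is about to outrun supply, leaving the planners to act on the result.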

STAFF: I'd now like to go -- sir, you can go ahead and take a question.

Q: Thanks. 

Zach with the Center for Public Integrity. I wanted to ask about the DOD AI principles. 

MR. MULCHANDANI: Yeah?

Q: So General Shanahan, when they were approved by the secretary, noted that the principles themselves are fairly vague, and that it was really going to come down to implementation and...

(CROSSTALK)

Q: ... the actions taken to ensure those principles are part of the process for acquiring and developing AI. What have -- what's being done now, how far along are we in implementing those principles, and how has it worked, getting them to be part of the process?

MR. MULCHANDANI: Great question. I think about them all the time. We have an entire team at the JAIC focused on nothing but policy -- AI policy, AI ethics. Alka Patel is our lead ethicist and leads that work, Sunmin Kim is on the policy side, and Mark Beall, who is our Chief of Strategy and Policy, runs that.

So, exactly the point that you've made: how do we take these lofty, bigger goals and actually productionalize them? And that's where the JAIC comes in. So we have a multi-pronged approach that we're working on.

Number one, we have deep engagement with industry and other teams that have already created their own ethics principles, whether they be for advertising, for bias, or other things, and we are now going out and picking and choosing the best parts of how they've implemented them -- how they've taken those goals and actually put them in code.

As a former computer engineer and an entrepreneur in the tech industry, I've spent my whole career taking English requirements and translating them into actual deployed code.

We're also giving our policy team a seat at the table with our product development teams. So, for instance, the Joint Warfighting RFP that I just mentioned, which Booz Allen Hamilton won -- if you go back and read it, it is the first RFP that the DOD has ever delivered where we actually embedded some of the ethical principles into the actual RFP, not as a requirement, but as information for us to understand how vendors start answering these questions.

Because we do understand that as the DOD hopefully standardizes on these types of principles and we put them into RFPs, this becomes a structure that industry has to respond to. So we have an iterative process where we're learning how this will work, so we don't over-regulate or do something that cuts off good commerce there.

The third is the AI governance process that I mentioned across the DOD; there's an entire track focused on ethical principles and other pieces there. And then, last but not least, we have an international engagement strategy where Mark's team, led by Stefani Coverson, has direct contact with other countries, our partners, allies, et cetera, all working on harmonizing these principles.

So I could probably go on for another hour about the work that we're doing, but we take this work very seriously, and we have an entire team on it. This is an incredibly important JAIC product: beyond all the tech products we've talked about, these principles are going to be one of the most important products that the JAIC delivers.

Q: Let me just ask a quick follow-up on that. One of the initiatives is the Joint Warfighting initiative that General Shanahan had described, which potentially refers to a lethal AI application, and he described 2021 as when testing on it would start.

So, first off, is that still the plan? Because he described that there was going to be testing on the first lethal AI implementation in 2021. And then the second component, which is why I wanted to raise it: how have those principles impacted what could be the first lethal AI in the industry?

MR. MULCHANDANI: I see. So, let me -- let me actually -- this is a very interesting question you're asking, but let me parse that out a little bit. 

I don't want to start straying into issues around autonomy versus lethality, or lethality itself. So yes, it is true that many of the products we work on will go into weapon systems.

None of them right now are going to be autonomous weapon systems. We're still governed by DOD Directive 3000.09 on autonomy in weapon systems; that principle still stays intact. None of the work, or anything that General Shanahan may have mentioned, crosses that line, period.

Now, we do have projects under Joint Warfighting that are going to be going into testing. They are very much tactical edge AI, is the way I describe it. That work is going to be tested; it's very promising work, and we're very excited about it. As I talked about with the pivot from predictive maintenance and others to Joint Warfighting, that is probably the flagship product that we're thinking about and talking about that will go out there.

But, it will involve, you know, operators, human in the loop, full human control, all of those things are still absolutely valid. 

STAFF: OK, I'm going to go onto the phone now, I'd like to go out to Mr. Travis Tritten from Bloomberg News. 

Q: Hi, can you hear me okay?

MR. MULCHANDANI: Yes, thank you.

Q: Fantastic. Thanks for doing this. You know, Congress is putting together the annual authorization and spending bills. So, I'd like to ask you again about your Joint Warfighting portfolio. You've touched on this a couple of times. 

I'm just wondering if -- if you could talk a little bit more over whether you'll be expanding that work in fiscal year '21? 

And what are your priorities, specifically battlefield capabilities and projects you'll be pursuing? It seems like you've kind of touched on that, but can you be a little bit more specific about what you're planning for FY 21?

MR. MULCHANDANI: Sure. Sure, here's how we're tackling the problem. Obviously the JAIC is a small organization inside a very large DOD, so changing or transforming it is going to take time and other things there. 

So what we're doing is focusing on key projects going on across the military that are the largest change agents. One of them, which I did mention, is called JADC2 -- Joint All-Domain Command and Control.

Now JADC2 is not a single product. It is a collection of platforms that get stitched together and woven together into effectively a platform and the JAIC is spending a lot of time and resources focused on building the AI components on top of JADC2. 

So if you can imagine a command and control system as it's currently configured today, our job and role is to build out the AI components, from a data, modeling, and training perspective, and then deploy those. So that's going to be a big focus for us.

And then there's a whole area around cognitive assistance that I talked about for instance. The way we see AI evolving over time is it's going from things like numerical data that you see in things like predictive maintenance, et cetera, we've got obviously healthcare, which directly addresses operational readiness. 

And then you move to Joint Warfighting, where we believe the current crop of AI systems, because they're not going to go into fully automated mode, are going to be cognitive assistants, very similar to Project Salus in terms of predictive analytics or picking out particular things of interest. And those types of information-overload cleanup are the kinds of products that are being invested in.

And then we move to more autonomy over time, but that's purely a progression in terms of time and AI maturity. So we're riding the maturity curve like I talked about, but cognitive assistants, JADC2, command and control -- these are all pieces there, along with the technical components like tactical edge AI, working in denied or jammed environments. These are all the realities that we're looking at in terms of the environments systems will need to deploy into in the future -- very low communication bandwidth environments, degraded environments, denied environments, et cetera. Those are all things that we're spending a lot of time focused on as part of joint warfighting.

STAFF: Okay, I'll go back to the room...

(CROSSTALK)

Q: If I could just follow up? As far as your joint warfighting budget, could you talk about how you want to expand that from FY '20 to FY '21? What type of an expansion are we looking at? Thank you.

MR. MULCHANDANI: It's a little early, because of the model -- I don't know if any of you have been following how we select products. It's a very different model: we think of this as an enterprise sales or a venture capital model.

So think of a giant funnel of ideas that you create that are all competing with each other for money. And if you think about building a healthy pipeline, you want the healthiest pipeline you possibly can so you can actually funnel them down to a set of products you're actually going to invest in. 

We have a very healthy pipeline that's actually getting bigger. And that process of whittling down the funnel into the projects that we focus on from a portfolio perspective in '21 is happening right now. We've just closed out FY '20.

The joint warfighting contract that you saw has an $806 million ceiling, which means up to that amount. We clearly aren't going to be spending all of that money, because the entire JAIC budget for a couple of years is actually $800-some million. So I'll just leave it at that, but there's a lot of room there. But the fact is that the actual products, what we select, and what we announce -- we'll be doing that over time.

STAFF: I'm going to go back to the room...

MR. MULCHANDANI: Thank you.

STAFF: ... Sir, you can go ahead and field your question.

Q: Thank you, I'll just go ahead and remove my...

(CROSSTALK)

MR. MULCHANDANI: Yes, please. Thank you.

Q: ... Okay. A moment ago you talked a little bit about information warfare as something that you were looking at. And I know that you're going to be limited in terms of details. But can you talk a little bit about how you see AI changing the delivery of cyber effects and also the defense against AI cyber effects?

MR. MULCHANDANI: Great question. I'll break it out into what you would consider to be traditional cybersecurity and then information security, which are probably two parallel but similar pieces. And both have, obviously, offensive and defensive modes. So there are really four quadrants to talk about.

On the defensive side, AI -- I've had a 20-plus year career in security products and what you find is AI has actually been used for a very, very long time in analysis, event analysis, or malware analysis, or attack analysis, et cetera. So that's a well-trodden path. There are actually very mature products in that space, et cetera. But that's really for defending the DOD networks and other tactical networks.

On the offensive side, I actually think there's huge potential for using AI in offensive capabilities. Obviously, CYBERCOM (U.S. Cyber Command) is going to be the focus there -- that's the combatant command that we focus on and work with.

I can't specifically talk about any areas of technology themselves, but I can point to things like vulnerability assessments and vulnerability discovery. There's a huge amount of work there in terms of finding attack surface and weaknesses in networks, anomaly detection, or actually being able to map networks. So there's a huge goldmine of work there that the industry has just barely started attacking.

Now, when you talk about cybersecurity, there's offense and defense. On the defense side, you deal with the issue of false positives. With AI technologies in general, as all of you know, you're dealing with probabilities. There are no certainties.

Based on the training data and the records that you take of the network traffic, et cetera, to train models, you may end up with things like false positives, which then either generate network outages, if you put the system in denial mode, or mean you have to actually go scrub those events. So there's that aspect.
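To make that false-positive trade-off concrete, here is a small Python sketch: a detector assigns each network event a probability of being an attack, and moving the alert threshold trades missed detections against false alarms that someone has to scrub. The event scores and labels are synthetic assumptions for illustration; a real detector's probabilities would come from a trained model.

```python
"""Sketch of the alert-threshold trade-off for a probabilistic detector.

The event scores and labels below are synthetic; any real detector's
probabilities would come from a trained model, not this list.
"""

# (model_probability_of_attack, ground_truth_is_attack)
events = [
    (0.97, True), (0.91, True), (0.88, False), (0.74, True),
    (0.66, False), (0.52, False), (0.45, True), (0.30, False),
    (0.22, False), (0.08, False),
]


def confusion_at(threshold, events):
    """Count detections, misses, and false alarms at a given threshold."""
    tp = sum(1 for p, y in events if p >= threshold and y)
    fn = sum(1 for p, y in events if p < threshold and y)
    fp = sum(1 for p, y in events if p >= threshold and not y)
    return tp, fn, fp


if __name__ == "__main__":
    for threshold in (0.9, 0.7, 0.5, 0.3):
        tp, fn, fp = confusion_at(threshold, events)
        print(f"threshold={threshold:.1f}  detected={tp}  missed={fn}  "
              f"false_alarms={fp}")
```

Lowering the threshold catches more real attacks but generates more false alarms to scrub, or outages if the system is set to block on alert, which is exactly the defensive-side tension described above.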

But in the case of offense, it's actually a lot better and easier. And on information warfare, I'll just cover two pieces there.

When it comes to the analysis side, there is a huge amount of work, as you know, going on in NLP (Natural Language Processing), in being able to do text recognition across multiple languages, et cetera. That work is getting scaled in industry; we have multiple companies out there that can do this.

NLP and speech-to-text is actually a fairly mature AI technology that can be deployed in production. And that is going to be used in reducing information overload -- being able to scan vast quantities of open-source information and surface the nuggets and important stuff using NLP.
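As a rough illustration of that triage idea, here is a toy Python sketch that scores open-source text snippets and surfaces the most relevant ones first. The keyword weights and snippets are made-up assumptions; a fielded system would rely on trained NLP models, multilingual text recognition, and speech-to-text rather than keyword matching.

```python
"""Toy sketch of surfacing 'nuggets' from a flood of open-source text.

A real system would use trained NLP models; the keyword weights here
are illustrative assumptions only.
"""

KEYWORD_WEIGHTS = {          # hypothetical terms of interest
    "outage": 3.0,
    "shortage": 3.0,
    "convoy": 2.0,
    "closure": 2.0,
    "flood": 1.5,
}


def score(snippet: str) -> float:
    """Crude relevance score: weighted keyword hits, normalized by length."""
    words = snippet.lower().split()
    hits = sum(KEYWORD_WEIGHTS.get(w.strip(".,"), 0.0) for w in words)
    return hits / max(len(words), 1)


def triage(snippets, top_k=3):
    """Return the top_k snippets an analyst should look at first."""
    return sorted(snippets, key=score, reverse=True)[:top_k]


if __name__ == "__main__":
    feed = [
        "Local market reports a fuel shortage after the road closure.",
        "Weather service expects light rain tomorrow afternoon.",
        "Power outage reported across the northern district, convoy delayed.",
        "Sports scores from the weekend league.",
    ]
    for item in triage(feed):
        print(f"{score(item):.2f}  {item}")
```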

And then when it comes to offensive operations, I can't, obviously, talk about that. But you can read the news in terms of what I think our adversaries are doing out there and you can imagine that there's a lot of room for growth in that area.

Q: Can I ask a quick question on a slightly different topic? So the JAIC sits at the center of a nexus between the Defense Department and Silicon Valley, and you have a lot of Silicon Valley experience. Every tech CEO (Chief Executive Officer) I've ever spoken to has emphasized the extreme importance of H-1B visas to what they do and to the American technology innovation model.

So in your conversations with different partners that are enthusiastically embracing what the JAIC is doing, when do they think that they -- a suspension in H-1B visas might begin to impact their ability to create the things that we need them to create? What might be the larger effect of a suspension in H-1B visas on what you do?

MR. MULCHANDANI: I've got to say, in this year that I've been at the DOD, that really hasn't been a topic of discussion directly with any of the companies or CEOs. Those are really internal matters.

What we focus on is where the AI talent is. What's clear with private industry, the larger companies, et cetera -- and there was a CEO who talked about this -- is what we often talk about inside the JAIC, what we call the war for talent.

There is a huge war, a global war, for talent in AI. There's no question about it. You graduate right now with a degree in statistics or physics or AI or computer science or whatever, and the salaries, the bonuses, the stock options are incredible, because there's just a lot of money chasing these problems.

We at the DOD agree that we're not going to win this war for talent, so to speak, by just overpaying or things like that. Some of us are here for a variety of reasons. But the fact is that we can leverage that talent. So to the extent that any of these policies impact the talent that these American companies can hold onto, the more talent they can hold onto, the better it is, I think, for the DOD and for the country.

STAFF: Okay. I'm going to go out to the phones. I'd like to go out to Lauren Williams from FCW, if she's on the line. Go ahead.

Q: Yes, thank you for doing this. I want to kind of follow-up on that question of talent. Does the JAIC have enough of the STEM (Science Technology Engineering Math) talent to fulfill its missions? And also, what is the JAIC doing to kind of spread that sort of AI training and develop talent within DOD?

MR. MULCHANDANI: That's a great question. Well, I will flat out say we'll never have enough talent. We could double, triple, quadruple the number of STEM people we have and never have enough. But from when General Shanahan and the team started the JAIC to now, we have made incredible progress on that front.

Our product team is organized into four key pieces, just to tell you how we're organized. We're organized very much like a software company that we would start in the Valley.

We have a core team that focuses on product and project management. So we actually have real product managers who own products and deliver them out to market. So it's really about taking the customer needs and requirements, getting the products on contract, managing them through, and transitioning.

We have a data science and AI team led by Marcus Comiter and Nate Bastian, who is a Major. Both of them have done a great job of building out that team. Marcus just got his Ph.D. from Harvard and came over to the JAIC -- I got to know him when I was there -- and he's been an incredible resource but also a win for the JAIC, just in terms of the quality of applicants and people that we're getting to come work at the JAIC, both outside civilians but also internally.

Nate actually just recently set up this thing called Brain Camp that we're now replicating across the DOD. It was an all-day training session on AI, all the way from the fundamental basics to algorithm development and other pieces. We have an AI Symposium coming up, which will have some technical tracks. And there's an entire body of AI training and education policy work that Mark and his team own.

Oh, and then there's our test and evaluation team, led by Dr. Jane Pinelis. She has assembled an incredible team; she's one of the world's leading experts in AI testing, and she's helping set the standards across the DOD for test and evaluation, which is obviously an incredibly important part of deploying and fielding AI.

So when it comes to the quality of employees and people who understand and know tech, we obviously don't rival some of the larger tech companies, but the core nucleus and the leverage that we get out of the people that we have is incredible.

STAFF: Okay. Go back to the room here, sir. We -- do you have a question that you'd like to ask? Okay, so I'll go back out to the phones. This time, I'll go out to Mr. Sydney Freedberg from Breaking Defense.

MR. MULCHANDANI: Hi, Sydney.

Q: Hello, hi. Sydney Freedberg, Breaking Defense. Thank you, Mr. Mulchandani. Let me ask -- you know, on the one hand, we -- we have things, you know, like Google and so forth, you know, and people quitting over working with Maven or, you know, wanting -- they're coming to sign declarations. On the other hand, we have a DOD that applies words like, you know, "lethality" to things that -- you know, that word gets placed on anything sometimes. It's not the things that sound scary but perhaps are not the killer robot people imagine.

You, of course, have been put in both worlds. You know, how do you explain -- I mean, take the -- the cognitive fire support, for example. You know, how do you take something like that and explain to, you know, people in the Valley who might have ethical concerns and lack, you know, much experience with the Armed Forces, you know, that that is not the Terminator. What does that actually do and why should they be comfortable working on it?

MR. MULCHANDANI: Yeah. You know, I -- I'll -- I -- the only thing I can say to that, Sydney, is all of these systems are running on software and hardware systems today. What we're doing with AI is either making them more efficient, making them faster from a decision cycle time, making them more accurate, in many cases reducing overload in terms of cost, operational, you know, overhead.

So there's nothing magic that we're really doing here other than applying AI to already existing processes or systems. When I got here from Silicon Valley a year ago, I -- I didn't know what a JTL (Joint Target List) was and now I do. I hope you won't make me actually tell you what the acronym stands for but it's -- it's how targeting is done.

So these are great examples of where you take existing processes -- there's nothing magic about them -- they have a workflow, they have a set of well-defined criteria that people go through. What's really interesting about the military, I've found, is the amount of training and process put in place to make many of these things happen. And they lend themselves incredibly well to automation: you have to train soldiers to follow these processes, and you can automate them very easily because they're so well defined. So there's nothing really magic here on that side.

Now, the word "lethality" -- and I think this is where the killer robots and Terminator stuff comes in -- the edge case that everyone is -- that focuses on is the -- such an outer edge case and we are nowhere near, from a platforms, technology, capability, hardware, software, algorithms perspective to get anywhere close or near to that but that obviously is where everyone jumps to.

So you know, with the cognitive assistant for helping with that, there's still a human sitting in front of a screen who's being assisted by an AI algorithm. So again, it's always a human in the loop; it's making things faster and more accurate, nothing more than that.

STAFF: Okay. So I'm going to try to get to one or two more questions before we wrap up today. The next question I'd like to go out to Jazmine from National Defense, if she's on the line.

Q: Yes, I'm on the line, thank you. Hi, thank you so much for doing this, sir. My question is about the budget. So it seems that officials are pretty resigned to either having a flat budget or having a declining budget.

How is the JAIC -- you know, how do you see the budget going for the JAIC in the future? Do you think you're at all (inaudible) to making cuts and how can you make the case that you guys need, you know, continued funding and keep levels up? Thank you.

MR. MULCHANDANI: Yeah, great question. Obviously it's impossible to foresee the future. We are super grateful for the budget that Congress gives us, and through the FYDP (Future Years Defense Program) we have visibility into what the budget would look like.

The way we operate, like I talked about, is that we have a funnel. The funnel can elastically help us expand or contract the number of products that we do. It can move around based on the amount of budget or other resources available to us.

So we effectively can plan for or deal with whatever scenario comes up. Our expectation and hope is that, if you look at the top one, two, or three priorities for the Department of Defense, just as the Secretary of Defense and other officials at the DOD talk about, AI is literally one of the top initiatives that we have to focus on.

So even in a case of declining or cut budgets, we believe that the powers that be, both Congress as well as DOD management, will focus on funding the right priorities that are important to them, and I believe AI will hopefully be one of them. We'll see.

But when it comes to planning, you know, I'm a veteran of multiple start-ups. Money comes, money goes. You have to really just get the job done with whatever funding you have. So we don't sweat it a lot, because it's an unknown, it's something that we can't control, and, you know, we obviously want the budget to expand because we've got great projects to deliver on. But in the grand scheme of things, we'll see where things end up and we'll do what's necessary.

STAFF: Okay, let me go back out to the phones, this time to Justin Doubleday from Inside Defense. Over to you, Justin.

Q: Hey, thanks for doing this. I just wanted to go back to the joint warfighting project and ask about Project Maven as a comparative example. They got things done pretty quickly -- got some products out in the field, I think, in six months -- but they were building off of a somewhat established field of object recognition.

I was wondering, for the joint warfighting projects and things like the fire support cognitive assistant, is this building off of some existing technologies, and how quickly are you looking to get things out in the field? Are there things already being tested in the field? Thank you.

MR. MULCHANDANI: Oh, yes. Yes, that's a great question actually. So with the fire support cognitive assistant, the core technology there is NLP. When you think of how targeting actually gets done and how it's assisted, you have multiple chat windows and lots of information coming in.

The product manager runs that, and when you actually go look at what happens on the screen, there are literally 10 to 15 different chat windows all moving at the same time. So we're applying NLP technology to a lot of that information, to condense it down, raise the importance level, and actually structure the information over time.
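Here is a minimal Python sketch of that kind of condensing step: merging several chat streams and raising the most important messages to the top. The channel names, priority phrases, and weights are hypothetical assumptions; a fielded cognitive assistant would score messages with trained NLP models rather than a fixed phrase list.

```python
"""Sketch of condensing many chat streams into one prioritized view.

Channel names, priority rules, and messages are hypothetical; a real
assistant would score messages with trained NLP models.
"""
from dataclasses import dataclass


@dataclass
class ChatMessage:
    channel: str
    text: str


URGENT_TERMS = {"fire mission": 5.0, "casualty": 5.0, "troops in contact": 5.0,
                "request": 2.0, "update": 1.0}   # illustrative weights


def importance(msg: ChatMessage) -> float:
    """Score a message by urgent-phrase hits; real systems learn this."""
    text = msg.text.lower()
    return sum(w for phrase, w in URGENT_TERMS.items() if phrase in text)


def condensed_view(messages, limit=5):
    """Merge all channels and surface the highest-priority messages first."""
    ranked = sorted(messages, key=importance, reverse=True)
    return [(m.channel, m.text) for m in ranked[:limit] if importance(m) > 0]


if __name__ == "__main__":
    windows = [
        ChatMessage("log-net", "Routine fuel update for tomorrow."),
        ChatMessage("fires-net", "Fire mission request, grid to follow."),
        ChatMessage("medevac-net", "Casualty reported at checkpoint 4."),
        ChatMessage("admin-net", "Meeting moved to 1400."),
    ]
    for channel, text in condensed_view(windows):
        print(f"[{channel}] {text}")
```

The point of the sketch is the workflow, not the scoring: many parallel windows collapse into one prioritized view, with a human still reading and deciding.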

So that's using NLP. In the case of, say, Maven, the core building blocks around object recognition and video -- I wouldn't say that was a scaled technology just yet. Being able to get that level of detail is one of the hardest problems to solve in AI right now. The level of complication in terms of sensor technology, angles, sunlight, et cetera -- you would not imagine the complexity in doing this correctly and right.

So yes, they did a brilliant thing in terms of fielding functionality quickly, and then continuously iterating over the past couple of years. We're using exactly the same model, which is break down the products into smaller chunks, identify and isolate the risk, make sure you can contain the risk from a funding perspective and keep that manageable, and then iterate, iterate, iterate off of that.

And so that's literally how we do things in the Valley, and that's exactly how we're doing things at the JAIC. And there's some incredible work going on around DevOps, cloud infrastructure, standardizing the ATO (Authority to Operate) process, et cetera, that I think is going to have a huge impact on the way we develop code here at the DOD.

So not only are we getting better from a development perspective in terms of how we build products and scale them out, but the development process itself is getting faster. It's taken a little while here at the DOD -- like I said, the hockey stick: you invest, invest, invest -- but I think all the right ideas are in place and the right fires are lit, and they're going to come together in a really good way over the next couple of years.

STAFF: Okay, we'll take just one or two more questions. I'm going to go to Jackson Barnett from FedScoop, if he's on the line.

Q: Yes, thank you very much. There was a recent DOD IG (Inspector General) report that urged the DOD to be more cohesive in its AI efforts. Is there currently a study or project underway to understand the full scope of DOD's AI efforts, and how are you working across the service-level AI task forces and things like that to understand who is doing what and where you can collaborate? What's kind of the effort there?

MR. MULCHANDANI: Yes, good question. So on the IG report piece, let me just say, I had a couple of questions earlier about that as well. Things are moving so fast that the IG report, and then the RAND report that all of you may have seen, largely focused on the early days of the JAIC -- standing it up, and what the issues were for AI across the DOD and everything.

Ironically, we obviously loved the transparency of having them do the report. It turns out that most or all of the issues had already been addressed by the time the report came out, or we addressed them even while the report was being written. So we welcome the report, but the JAIC is moving so fast and things have changed so fast that many of the findings are no longer relevant.

To the broader question of how we're synchronizing AI work across the department, the main work going on is what we call the ESG (Executive Steering Group) process, which is a three-star-level engagement process across the services, combatant commands, and other parts of the department, like R&E. There are a number of committees and subcommittees that have been set up to address different parts of AI -- AI technology, policy and ethics, training, testing, et cetera -- and they are all actually meeting.

And that has become the watering hole, if you will, for everybody to bring together AI work across the DOD. That's the whole idea of synchronization -- the JAIC is not supposed to be the central authority doing all AI work.

Not only do we provide scaling capabilities and certain products that are scaled products, but for the most part -- and this is something I stole from General Shanahan -- at the DOD, we don't have a department of fire or a department of electricity anymore, because all of that has now just been integrated into the system.

The hope and expectation is that there doesn't need to be a department of AI at the DOD, because AI is going to get integrated in. It's like the internet, it's like mobile phone development -- when those new technologies came out, it took time for industry to absorb them, but now, for instance, nobody actually has a mobile phone development department anymore.

We believe that's simply going to be the case for the DOD. The next five years, the next 10 years -- I don't know how long -- but certainly there's probably going to be a center of excellence for AI in the future, and it's going to look very, very different from what the JAIC looks like today.

STAFF: Okay, so we have time for one more question, and I'm going to go out to Mr. Jeff Schogol from Task & Purpose.

Q: Thank you for doing this. What steps is DOD taking to make sure that artificial intelligence does not become self-aware and declare war on humanity?

MR. MULCHANDANI: The big red switch in the back. But more seriously, I don't know about self-awareness -- I wouldn't even know how to design an algorithm to make the thing self-aware. But the trite answer I would give you is, first, there are policies and laws that we have to follow.

And I will tell you, the other big learning I've had here at the DOD is that in the tech industry, if you're dealing with personalization on a page, say, and you suggest a bad film to somebody by accident because of their browsing history or the training data that was used, the worst that happens is you end up with a bad movie night or, you know, a tchotchke bought off an online shopping site that you really didn't want.

Here at the DOD, things are taken very seriously in terms of the systems that get deployed, which is why there tends to be this negative connotation that the DOD is slow, the government is slow, et cetera. Well, it's slow for a reason. There's a maturity of technology you have to get to, and there's test and eval that is taken very, very seriously here. The work that Jane and her team do is on par with, or even more important than, the product development that we do.

So with the care that goes into things like a cognitive assistant and other things there, we're going to take this very deliberately and slowly when it comes to building products, fielding them, and then training our troops and the workforce in how to wield these products correctly and how to take feedback.

We're just in the early stages of that, but I think we're taking a very deliberate approach -- it may seem a little slower, but it is really, really important. So that outcome that you're looking for, I don't know, but I can tell you all of the stuff we're doing to hopefully prevent it in the future.

That's as good an answer as I can give.

(Laughter.)

STAFF: All right, ladies and gentlemen, thank you for attending today. If you're on the line and I didn't get to your question, I'm so sorry. You can contact me afterwards, most of you have my contacts, and we'll help you get your question answered.

Thanks very much for being here today and I'll sign off.

MR. MULCHANDANI: Thank you all. Thanks so much.

(UNKNOWN): Thank you.