Artificial Intelligence in the Department of Defense with General Michael Groen

Lieutenant General Mike Groen joins ACME General Corp. to talk about leading the Joint Artificial Intelligence Center and the potential of AI to transform the Department of Defense.

Lt Gen Groen is the director of the Joint Artificial Intelligence Center (JAIC), as well as a distinguished Marine Corps officer and veteran of multiple combat deployments. Learn more about the JAIC at ai.mil and find them on Twitter at @DoDJAIC.

ACME General: My guest today is Lieutenant General Mike Groen, a distinguished Marine Corps officer and veteran of multiple combat deployments who now serves as the commander of the Joint Artificial Intelligence Center. The JAIC, as it’s often called, exists to, and I’m quoting, “Transform the U.S. Department of Defense by accelerating the delivery and adoption of AI to achieve mission impact at scale.” General Groen, I hope you’re ready to talk about that. Welcome to Accelerate Defense.

Mike Groen: Hey, thanks Ken. I really appreciate it. Thanks for having me here. And I have to say I truly enjoy Accelerate Defense. Great, insightful people, great context for your audience. I mean, General Bolden was my favorite. You do a really good job. And I want to say on the front end, too, thanks for your Team Rubicon work, a really exciting cause, a real opportunity and capability. So I appreciate that.

ACME General: Thanks General for the shout out. I will make sure to pass that on. It’s in incredible hands now, since I left, but as you can imagine, they’ve got their hands pretty full with everything going on.

MG: Absolutely.

ACME General: I definitely want to talk about AI transformation, but I want to get a bit of your backstory first, because you’re an intel officer by training, but you’ve got a combat action ribbon. You bring not just the warfighter’s ethos to the business of AI integration and transformation, but experience. And I’m wondering how that informs the work you do and how your experience as a combat Marine led you to become such an outspoken champion for AI transformation in the Department of Defense.

MG: Yeah. Great. Thanks Ken. So, I mean, I’ve been a Marine for over three decades now. So, I’ve had a couple opportunities to get my boots dirty. And it’s true what they say, every Marine is a rifleman. And that means if you want to be a good intelligence officer, I mean, you’ve got to be right in the fight. You’ve got to be where the action is so that you can understand it and help other people understand it too. That being said, I mean, I think I probably had the strangest career you could imagine, and I’ll tell you this because, at the end of the day, it actually makes a lot of sense, right. I planned, like a lot of people, to do just a couple of years in the Marine Corps and then move on.

But I found every time I got to a transition point, I mean, there was something there that intrigued me, something that I said, “Hey, that might be pretty cool. I’m going to try that.” And so most of the time it panned out, right. But even if it didn’t, even if I didn’t like the job, I still loved what I was, right. I was proud to serve as a Marine in any case. And I mean, you probably experienced a lot of the same in your military career, right. You’re in the cockpit and you’re back ashore and you’re doing other things. So in that back and forth of a career where you spend time in the operating forces and in other billets I found myself actually in a lot of technical billets.

So I mean it’s great the way a military career takes you back and forth, right? You’re carrying a pack one month and then the next year in grad school, right. And you’re leading Marines in the field, and then you find yourself designing organizations, or managing a program of equipment. You get to be an operational commander and then you serve in a national intelligence agency or the Joint Staff. So none of this is any sort of plan, but when I look back, I can see how all the pieces fell in place over the years, operational assignments, technical assignments, leadership assignments. And the reason I bring that up is because we’re going to talk about technology here. But one of the things that’s really true, right, being an agent of change is made possible by understanding the environment that you’re trying to change.

So it takes more than technical skills, it takes more than policy insights. And I’ve had the great fortune to gain experiences and insights in a lot of different places, overseas, around the world. And those formative experiences as an operational Marine and as a technologist have really set us up well for managing technical programs inside the department. We bring in lots of technologists to help us, but having our feet grounded in operational reality and understanding how we fight so that we can bring the technology along is a really important set of skills.

ACME General: You described yourself as an agent of change there. And I think it goes without saying that for large institutions, change is often resisted. It’s often feared. How are your peers receiving what you’re selling, broadly speaking?

MG: Yeah. So it’s a great question. And I mean, it really leads you down the path of organizational culture. Here’s what I find in the Department today. The senior leadership of the Department, without exception, absolutely understands this transition and is absolutely backing it and driving this to the degree they can. And so that intent is pretty clear from the top. But throughout the Department and the institutions of the Department, not everybody understands the technological change that this represents. Some people think of AI as a black box, something that operates on its own and somehow influences operational decision making, for example. And nothing could be further from the truth.

I mean, we’re talking about artificial intelligence. It actually enables decision makers. It enables commanders. It lets commanders move quicker and make decisions faster and better. And so getting the military services, the departments and agencies across the board to understand what artificial intelligence actually represents is a challenge. And I think we’re winning. I think we’re gaining ground there, but there’s still a little bit of, not resistance, but maybe a lack of understanding of how transformational this technology is. And if you don’t believe it, all you have to do is look out the window and see all of the companies in the marketplace today that have transformed themselves through data and artificial intelligence. And that’s what we’re after: this broader understanding of the promise of the change and the real transformation that occurs when you start adopting that approach.

ACME General: Well, let’s level set here. You’re right, AI is all around us. We both don’t see it and take it for granted. Can you talk about the stakes of this competition at a strategic level? AI has been described as a winner takes all technology. You have said that you don’t think we’ve really thought through the implications of the information age on warfare. And I would imagine there is some concern that we are behind our competitors. What are the stakes in this competition?

MG: Yeah, that’s an important question, because, first, I don’t think it’s winner take all, but we have to think about our capabilities, frame our capabilities, in the context of system competitiveness. So it’s not the implementation of any given technology or any given platform, for example, but we need to think about the system that we have in place. Is our system competitive? And I can talk a little bit more about that. But clearly, if you look at the context, President Putin famously said that the nation that leads in AI will be the ruler of the world. And so here we have an opponent that’s ruthless and has focused military research in this area. Think what you want about the Russians and the Russian military, but their scientific capabilities are great, and they have lots of clever people.

And here you have that in an environment that’s fairly ruthless in application. And what kind of threat does that represent? What do you need to do to compete with that? And similarly, if you look at the Chinese Communist Party and the People’s Liberation Army, one of the things that advantages them is their civil-military fusion. That is, any capability, China of course is a very entrepreneurial society, any capability that is generated there is immediately hoovered up into the People’s Liberation Army and becomes a military capability. There are no checks or balances in that system. And so here is an opponent who is world leading in surveilling its citizens, right?

Social credit scores and health checks on the street, a very entrepreneurial culture, but one where all of this is focused on military capability at scale. The organization is superb. And so I think when you think about the competition, there’s an element of organization versus innovation. I mean, we pride ourselves as innovators and we’ve got a long track record of innovation in the United States. And it’s a great draw, and it’s something to be proud of. When we think about system competition, you have to ask yourself, how does that steady drumbeat of organization, building up a machine that operates reliably and at scale, fare when it competes with an innovative opponent that may not be as organized? And so one of the things that we’ll talk about is getting all of our capabilities focused into a systemic competitiveness. That is, let’s be competitive as a department, as a joint force, with that level of organization, just enough organization, that allows us to still innovate and take advantage of the unsurpassed academic environment in the United States and the unsurpassed number and quality of technology companies.

I mean, we have enormous innovation. It’s really just a question, can we organize ourselves well enough and fast enough so that we can compete with organizations that may not be as innovative, but are well organized, and in some cases, very ruthless?

ACME General: So make this head-to-head competition real for us. For the uninitiated, how does that manifest itself? You gave a couple examples, social credit scores in China, for example. What are other real world examples of this in action, so we can wrap our heads around it?

MG: Yeah. So clearly, this is a transformational technology. And what it allows you to do is, if you have access to data, if you collect data on everything that impacts your ability to operate, whether you’re talking about a commercial or a national application, if you invest in those data sources, if you invest in the ability to connect those data sources and then draw insights from that collection of data, you are going to start enjoying the benefits of artificial intelligence, just like all of the commercial enterprises that are still around in the market today. So that means if you have a sensing system, for example, you can use artificial intelligence to understand what you’re sensing and use that in an integrated way so that you can make better, better-informed decisions.

If you have a large-scale enterprise, whether that’s a logistics enterprise where you’re distributing fuel, or even a fleet of ships whose deployment you want to optimize, machines can help you with that, based on the data that you have on historic precedent, what you’ve seen in war games, et cetera. So what we see our organized opponents getting really good at is pulling lots of different types of data together, so that they can gain a broad understanding. And this is exactly the place that we are in too. And so all of us are probably at about the same level of technological maturity. So it’s a question, again, of system competitiveness and our ability to take this technology, which we understand pretty well and which is continuing to grow, and actually turn it into capability. That’s the real magic. We have to think purposefully not just about technology, but about how you take technology and turn it into capability.

ACME General: You’re talking about integrating AI into existing systems, into existing approaches, and you gave some great examples. But surely, when you talk about AI as a transformational technology, it promises inventing whole new approaches, ways of doing business and war fighting that we haven’t even imagined yet. Is that too far over the horizon, or can you envision scenarios, approaches, where we throw out the entire existing system and build from scratch with AI-enabled thinking, I’ll call it?

MG: Yeah. Well, when we think about artificial intelligence in defense, I mean, I think it’s really important to, again, kind of anchor our thinking, because we have a very important example, and that is commercial industry. And when we talk about the transformation of commercial industry in an information age, commercial companies, financial companies, retail companies, service delivery companies, social media companies, all of them are using data to inform their consumers, to inform their advertising, to inform the services that they provide, and then to scale those services. So if you have a car delivery application, right, if you’re an Uber, then you can start expanding your business model to include Uber Eats and the things that that data allows you to branch out into, and to optimize your capability.

This is the exact same thing that we need to achieve in the military space. And again, it’s not so much just thinking about the technology, it’s thinking about the capabilities that that technology represents. And that transformation, the information-age transformation for defense applications of artificial intelligence, is hugely significant. This will be as significant as gunpowder or mechanized warfare. I mean, this is a real transformation in capability that results in much greater operational tempo, much greater precision on the battlefield, much greater ability to anticipate what’s happening around you and react to that, or pre-act to that, so that you can maintain tempo on the battlefield.

Tempo, of course, is the rate of your operations over the rate of your opponent’s. And if you have a tempo advantage, that means you can act faster than your opponent, and you can continue to do that in ways that continue to frustrate his plans and actually allow you to achieve your objectives, because you’re always one step ahead. This is what’s on offer. This is what we have to gain, if we just follow the same model that we see in the industry around us.

ACME General: That’s an optimistic take on AI transformation within DoD. And I agree with you that it has the potential to be as revolutionary as the advent of gunpowder, but where are we right now? And I’m going to use an example that you have used before. Are we the horse cavalry at the outset of World War I?

MG: No, I don’t think we’re the cavalry in World War I, which is a great example for not thinking through the implications of technological change. And of course, in that example, we had lancers, literally men on horseback, with long sticks, with iron spikes on the end, riding into machine guns and integrated mass fires and poison gas and all of these things of a technological age that was all around them, but they had failed to adapt to that in their military thinking, and they failed to understand the implications of that technology on war fighting. We have the opportunity to not make that mistake. We have the opportunity to actually embrace the data that we collect, embrace our processes and our algorithms that will make us operate at speed.

And so I think we’re doing a really good job, but there’s a place for learning. And so we’ve been learning for the last couple of years. And I can tell you, just having seen this for a couple of years now, it’s extraordinary, the progress that we’re making. It may seem slow, but if you look at the proliferation of the technology, the growing number of people in the department, in uniform or not in uniform, who understand or start to understand the implications of this technology, you see a great enthusiasm for adoption and a great enthusiasm for technological change. So I think we’re in a good place as far as understanding the opportunities in front of us. The real question is: can we culturally adapt to the ways that AI can help us, and to the way you need to think if you’re going to develop AI? That’s the real magic, can we actually think about this the right way so that we can start moving from technology to capability?

ACME General: So tell us about that effort at the JAIC, where the JAIC sits at the intersection of all these competing interests and how you are positioning it.

MG: So the JAIC’s mission is twofold. That is, to accelerate the adoption and integration of artificial intelligence, at scale, for mission effect. So think about those two words, really important ones: adoption and integration. Like any technology, we’ve been on an adoption journey. And I think Congress has helped us a lot here, making sure that we’re not missing this transformation that’s happening all around us. So with Congress creating the JAIC and resourcing this development across all the service efforts, it’s really important that we understand how we go down that adoption and integration journey. Because the adoption has brought us some really great insights. We have lots of algorithms, we have lots of projects, hundreds of them across the department, and all of those are creating insights into how we actually use this technology.

But soon, we’re going to be at a place where we have to take all of our examples, our illustrative capabilities, these individual algorithms that we’ve curated by hand, and start thinking about how you actually operate that at scale. So this is why an organization like the JAIC, or the Chief Digital and Artificial Intelligence Officer, is really focused on how you take adoption, which is good, and turn it into integration. And that is better. That represents true war fighting capability. And we’re getting started down that path. In its early days, the JAIC built a lot of AI algorithms and partnered with a lot of organizations across the department to kind of get the fire to start to burn.

There are lots of illustrative and illuminating capabilities that kind of show how AI can help you in a data environment. About a year ago, we started to move from the adoption model to what we call the integration model. We realized that adoption was not enough. Adoption was not transformational. So getting lots of people experimenting with AI is great, but now we need to start thinking about how we fight with artificial intelligence across the joint force. And when you think about all of the applications of artificial intelligence that will start to populate all of our systems, any system that collects data, any system where you can draw inferences from that data to actually help you make better decisions on the battlefield, in the cockpit, wherever, all of those things need to start to become stitched together. So we’re at a place today where every service has an AI capability of some sort, some level of infrastructure, some level of development environments.

And so what we need to do through an integration lens is now let’s make that a competitive system. Let’s federate all those capabilities so that you can actually share data from one service to another. And as a matter of fact, you want to make it easy. You want to be able to move code, move algorithms seamlessly across an enterprise of capabilities. If we can get to an integrated enterprise, now, instead of just individual capabilities, we have a competitive system, and that’s where we need to keep our eyes. Let’s build that competitive system by federating the capabilities that are emerging today within the services, but stitching them together in a place that’s actually competitive at scale. When you think about that, the joint force at war, we’re talking about thousands of algorithms, maybe tens of thousands of algorithms, in platforms, in autonomous systems, in command and control nodes, in the places that combat decisions are made.

All of these places are going to be populated with decision aids and tools that will help commanders make better decisions. That’s a really powerful thing, but it won’t emerge spontaneously. It’s something that we have to purposely think about and pull together.

ACME General: Where are the remaining nodes of resistance to that integration? And it might not be explicit, it just might be bureaucratic, is there a training resistance? Where are you finding the greatest difficulty with that integration crusade?

MG: So the greatest obstacle to integration really still is cultural. If you think about the way the Department of Defense is designed, our processes are really oriented around service efforts that are brought together by a joint force and brought together by the department. But the services are sovereigns in their own right, and they control the development and architectures and all of the components of their war fighting capability in their domain. And so we have great domain war fighting capabilities, but I think everybody understands that we’re going to be really effective, that we’ll be competitive as a system, when we can fight seamlessly across domains. So today we have things that bring us together, things like the joint force. So we are used to operating in a joint way, where service capabilities can operate alongside other service capabilities within a joint command and control environment.

But when you think about artificial intelligence and the ability for data to drive your decision-making and accelerate your operations, that data environment is much more pervasive than anything that we’ve experienced in jointness. In jointness, you preserve your own identity and your own sovereignty, if you will, within your service. In an integrated war fighting environment, that’s a competitive system, you have a data environment that’s intrusive, right? You’re all operating from the same data environment. You can all access any sensor for any shooter or any stream of data to inform any decision. This is the core of what’s called JADC2, right? The Joint All-Domain Command and Control concept. So JADC2 is not a program, it’s a concept, right?

It’s a construct for how you actually build all-domain capabilities. And the answer is that you build it with an integrated data environment, the ability to share code seamlessly, and the ability to federate your capabilities, to gain resilience but also to gain tempo, across all domains. When I was on the Joint Staff, I mean, one of our big challenges was the idea of global integration. How do you leverage service capabilities and command decision environments and stitch those together? How do you inform from one domain to another domain so that you can ensure that the effects are complementary? This is a really hard challenge if you’re not integrated in a way that you can gain insights into those environments and quickly understand what’s really happening and quickly make decisions that allow you to operate across domains seamlessly. All of that comes from a competitive environment that is built on a federated set of AI platforms, a federated stream of data, and a federated model for how we make decisions in that space.

ACME General: What do you think the inherent strengths and weaknesses of that federated model are compared to the total lack of any such divides in, for example, the Chinese system. I mean, they can move quickly, but innovation is lacking.

MG: Yeah, exactly. And so when you think about that, the Chinese model at scale, I mean, it is quite something to look at when you see how they control their population, how they surveil their population. Clearly, that’s not what we’re interested in. The warfare model that we’re interested in is much more dynamic. It’s unpredictable, but it’s measurable. And so we want to move to a place where we can rapidly measure what’s happening. And when I say measure, that means detecting objects on the battlefield, detecting platforms wherever they happen to be, understanding their distribution and how they fight, and being able to be predictive about what’s going to happen next. So in that kind of model, our innovative ability to think about capabilities and counter-capabilities, our integrated capability to think, hey, we could get after this if we could leverage different data streams coming from a different domain, that kind of thinking is going to help us succeed against a well-organized opponent.

And again, we have lots of allies in this conversation. In our academic environment, we have some of the best universities in the United States, and by extension the best universities in the world, who are really interested in helping us, in helping us understand the technology and understand how you bring that technology to bear in responsible ways. And so we’ve got great support from our academic environment, and we have great support from the vendors that support us. The vendor space that supports AI adoption in the Department of Defense has absolutely exploded. So not only the standard defense primes, but there is a whole ecosystem of smaller, innovative companies, and we’ve created acquisition capabilities so we can get at those small, innovative companies. And it’s resulting in just a blooming of capabilities that are available across the department. Innovation is alive and well, we just need to start thinking about implementation at a system level, a competitive system that allows us to stay one step ahead in this competition.

ACME General: I’m glad you brought up the vendor space, because you said this recently about the JAIC. You, I’m quoting, said, “We are a do tank, not a think tank. Our job is to reach across the valley of death and pull capabilities into the Department from the research and development community, from the commercial community.” How important are the smaller innovators to that effort and how are you removing the typical barriers to entry for these non-traditional actors?

MG: Yeah, that’s a great and important question. Across the Department of Defense, we have relationships with all of the major vendors in the AI space, and those names are pretty obvious. But we have made a concerted effort to expand the way that we integrate, the way that we advertise, and the way that we engage with the smaller companies in the environment, small, innovative companies who may have just one small capability. I’ll give you an example: there are companies that build great chatbots, right, chatbots that can roll through an environment and answer questions and use data to help people get through a process quickly. For small vendors that have boutique capabilities, but capabilities that are important as part of an enterprise, we have created a number of acquisition vehicles specifically to bring those capabilities, to bring those types of providers, on board.

And so when you think about the range and scale of applications of artificial intelligence in the Department of Defense, there are military applications to be sure. But when you think about the Department of Defense, you can think about the department as a business entity. Here’s a company, if you will, with a $750 billion budget, right. And so we should have every expectation, the taxpayers should have every expectation, that we’re using the same sort of modernized, integrative management structures for the Department of Defense that are being used, of course, in all of the companies that surround the Pentagon. And so this is what we’re after: to bring in that innovation, bring in that business process transformation, so that we can apply the same degree of technology to our business processes that we apply to our war fighting capabilities.

And when you do that, that expands the range of companies that you can bring aboard and how you bring them together. There’s one more point that I would make. When you’re bringing in vendors, and there are thousands of them, and when you’re trying to build an integrated, federated enterprise with the ability to seamlessly operate across the scale of the Department of Defense, to integrate our war fighting capabilities, then it’s really important that you have an open systems architecture. You can’t have an open, federated enterprise that has a proprietary component sitting in the middle of it.

And so we’ve got to think very carefully about what is the relationship we want to have. How do we protect government intellectual property? How do we protect our data and partner with industry to help us use that data to greater effect? And we’ve got a bunch of budding relationships and really good relationships across the industry. And we’re starting to think better. We’re becoming a much better customer in this space so that we can clearly identify what we want and then know how to do that in a way that’s fair and also protects government intellectual property.

ACME General: Let’s talk about AI ethics, because this transformation is happening as fast as probably any technological transformation in human history, in spite of those occasional nodes of resistance. I’m wondering if you have time to pause and think about the implications and how to deal with them?

MG: Yeah. So, the conversation about artificial intelligence and ethics is absolutely foundational. And so I don’t have to pause to think about it. We’re thinking about it all the time. As a matter of fact, we really start almost every conversation with that sensing of where we are on the ethical environment. For all the talk of Hollywood versions of the Department of Defense, I tell you, the conversations are responsible, they’re ethical, and they’re really focused on, are we doing the right thing here, and what are the risks and how are we going to mitigate those risks? And when I think about the ethical foundations of what we do, one, it’s absolutely central to everything that we do here when we do artificial intelligence development. But I look through the lens of three things, and that is legal imperatives, moral imperatives, and what I call comparatives.

And so legal imperatives are those things that everybody in the department is very familiar with: the law of armed conflict, the protection of innocents, proportionality, the minimization of collateral damage, et cetera. For all of the unfortunate artifacts of war, we’re very careful to think about the ethics of how we build capabilities. So clearly there’s a legal element to this. There’s a moral element, of course, and that is we represent the values of the American people, and we fight for those principles and for those values that the American people represent. And so we think very carefully about the morality of the capabilities that we might field, and about second- and third-order effects. We think very carefully about that.

And then we also think about the comparatives, because in the environments that we are in today, when you compare what’s possible with artificial intelligence to what we do today, you start to see some very striking opportunities. Today, we make lots of decisions with minimal information, or with tired commanders who’ve been fighting for days. Or with human watch standers who may look away from the screen, or may lose sight of something that’s really important on the battlefield. If we bring machines to help us with that, then suddenly we have the ability to actually make better decisions, to be more discriminating in our targeting. And so, to a point, when we look at the ethical foundations of artificial intelligence in application, today, we send young men and women into very dirty and dangerous places.

We put their lives at risk. And similarly, we count on watch standers and commanders who may have been on duty for 24 hours. Maybe they’re tired, maybe they missed a piece of data or something that came into the command post. Artificial intelligence will help us shore that up so that we actually make better decisions, because we have a better picture of the data environment. We actually understand what’s going on in the environment, so we can act more effectively with greater precision and greater care. What we’re building toward, and it’s probably the most important ethical aspect of artificial intelligence, is how our consumers, how our commanders, consider that technology. Do they trust it? Do they know how to use it? This is what we call the journey to trust. So clearly we start with a set of ethical principles. We’ve got the ethical principles articulated, but it’s much more than just the articulation of a set of ethical principles.

What we do with artificial intelligence is ensure that we have a test and evaluation environment, so that we actually know whether our algorithms work and under what conditions they work. So we test and evaluate. We also validate and verify: that is, validate that the system produces the outcomes that we expect in the environment that we operate it in, and then verify that not only does the system work, but the system works in the context of what it’s intended to accomplish. And so we have very careful measurements and very careful controls to ensure we understand the boundaries of where an algorithm is effective and where it’s not effective. And so we can measure that very carefully, but then also work with human systems integration, so that our operators and anybody who uses AI understands the risks, understands the boundaries of effectiveness of a specific algorithm or a specific capability, and can make good decisions about whether it’s appropriate to apply that AI algorithm at that time.

I’ll give you an example. We have a lot of object detection and imagery capability derived from objects in a desert environment. If you take that trained algorithm, which is used to seeing objects in a desert environment, and try to use it in a snowy environment or a wooded environment, the algorithm won’t work very well. And so understanding where applications are used, how they’re used, and the context in which they’re used are really important ethical foundations for how we build artificial intelligence and journey all the way to the trust of commanders. If we build artificial intelligence that’s not trusted by commanders or operators, then we’ll have wasted our time, we’ll have wasted our resources. Because we have to work the humans and the systems in an integrated way, so that they know how to work it, they know what its limits are, and they trust it. And when we get to that environment, then we’ll be more effective.
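
To make the “boundaries of effectiveness” idea concrete, here is a minimal sketch of the kind of per-environment check the general describes: evaluating a detector separately in each environment before deciding where it can be trusted. This is illustrative only, not JAIC code; the detector stub, the labeled samples, and the 0.8 accuracy threshold are hypothetical placeholders.

```python
from collections import defaultdict

ACCURACY_THRESHOLD = 0.8  # hypothetical bar for "trusted in this environment"

def model_detects_vehicle(sample):
    # Hypothetical stand-in for a trained detector; a real detector would
    # consume imagery, not a hand-made feature value.
    return sample["contrast"] > 0.5

# Hypothetical labeled test samples, tagged with the environment they came from.
test_samples = [
    {"environment": "desert", "contrast": 0.9, "vehicle_present": True},
    {"environment": "desert", "contrast": 0.2, "vehicle_present": False},
    {"environment": "snow",   "contrast": 0.3, "vehicle_present": True},
    {"environment": "snow",   "contrast": 0.1, "vehicle_present": False},
]

# Score the detector separately in each environment it would be asked to operate in.
correct = defaultdict(int)
total = defaultdict(int)
for sample in test_samples:
    env = sample["environment"]
    correct[env] += int(model_detects_vehicle(sample) == sample["vehicle_present"])
    total[env] += 1

# Report where the detector is, and is not, inside its boundary of effectiveness.
for env in sorted(total):
    accuracy = correct[env] / total[env]
    verdict = "within boundary" if accuracy >= ACCURACY_THRESHOLD else "outside boundary, do not rely on it"
    print(f"{env}: accuracy={accuracy:.2f} -> {verdict}")
```

In this toy run, the desert-trained stub scores well on desert samples and poorly on snow samples, which is exactly the kind of result that would tell operators not to rely on the algorithm in that second environment.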

ACME General: I understand that framework, the legal, moral, comparative ethical framework. But it seems to me that a lot of decision-making still rests on if not intuition, then the moral compass of individual leaders and individual operators. My question is, is the legal framework yet up to the task? Have the legal guardrails been able to keep up with the acceleration of this technology?

MG: Yeah, that’s a great question, and I like the way you framed that too. Because in the environment today, the state of the art of artificial intelligence today, it helps commanders make decisions, but the decisions are still decisions made by humans, and made by commanders. They may be informed in a much better way by data, historic precedent, by new things that have populated the battlefield. So the machines are helping commanders understand the environment much better, not just the threat environment, but their own environment. So you can understand yourself, that is what is the disposition of my force? What is the status of ammunition? Where are my force elements and are they in a position that they can actually accomplish the mission that I want to do?

Do I have enough fuel for tomorrow’s operations, and is the fuel for the day after that on its way? All of that data-driven understanding of the operational environment is what is on offer with the state of the art of artificial intelligence that’s fielded today. And so it’s a very useful tool for decision-making, because commanders can now plan more effectively and then act much quicker. One of the processes that takes the most time on a modern battlefield is deconfliction: ensuring that you’re striking the right threat and that you’re not putting other blue capabilities at risk. And so if you have a much better understanding, a much clearer understanding, of where everything is on the battlefield, then you can make those decisions much faster.

And the famous strategist Sun Tzu, when he talks about know your enemy and know yourself, this is exactly what artificial intelligence allows commanders to do: to know themselves in a broad way, so that they can make better decisions and gain tempo. Because when you understand your environment, you can make decisions faster and you can execute faster. And when you execute faster, you gain tempo. You always keep the opponent off balance, because you’re always one step ahead in understanding what’s going on right now and predicting what’s going to happen next. And so from a military application standpoint, this decision support environment, command and control, is important. But you can extend it beyond command and control, because now, not only can you understand the environment and make better decisions, but you can execute much faster too. So the same sort of ideas for a commander in a command post apply at a firing battery level, for example.

So your effectors, those capabilities of the department, or the capabilities of the force, to actually achieve an effect on the enemy, they can prepare better, they can plan better. They can understand their own environment better. They know when they can respond, when they can’t respond, they know timelines, all of this generated from an understanding derived from data. And this is the magic. It’s not about machines making decisions. It’s about machines creating a data environment and making sense of that, so humans can make decisions much better than they do today.

ACME General: What are your biggest concerns about this whole scale integration of AI? I imagine unintended escalation has to be one of them.

MG: Yeah. So the primary concerns, clearly from a policy level and thinking about the future of the technology, we clearly spend a lot of time thinking about those effects that are down the road, beyond the state of the art today. So we think about the application today, there are questions of how fast can we do this and how can we do it responsibly? And so we’re always balancing speed of employment versus the responsibility of doing this in an ethical way and ensuring that we’re anticipating all the different artifacts.

When you think about the application or the integration of artificial intelligence in the Department of Defense, the thing that I worry about the most, when you think about the competitive influence, is our ability to start thinking about this capability in a different way. One of the things that we spend a lot of time looking at is the integration of capability into the department, and how we do that, and how we think about that. One of the things that’s introduced with artificial intelligence, and this is part of the natural progression of the information age, is that we now operate with a lot more software-derived capability, and we have to figure out how we build capabilities in software. And in the information age, I mean, our software-based capabilities almost start to rival the hardware-based capabilities that we’re very comfortable with.

All of the processes of the Pentagon are derived from this idea of building hardware capabilities. The way we think about building a requirement, the way we think about acquisition, the way we think about testing at the end of our acquisition pipeline, a software capability environment is much different than that. And so in the software capability, you start with a problem and you start introducing solutions to the problem. And you continue to refine that. If you achieve a little bit of success, then you continue to move down that path. If you run up against failure, then you can either move to a different effort, or you can maneuver around that failure through software capabilities. And so the whole approach of capability design and capability generation has to start to shift, to think about software-derived capabilities with the same level of importance that we think about hardware capabilities.

And that’s a cultural shift because it impacts the way we do program management. It impacts the way that we do acquisition. It impacts the way that we do testing and all of those things require cultural change and organizational change that we advocate for. We try to help describe what that change is going to be, and then try to help the department along in building the capabilities to respond to those changes in the way that we gain our operational capabilities and our mission.

ACME General: Last question. And if you would put your Marine infantryman hat back on for this one, what is this going to look like, if you’re successful, at the squad level in five, 10 and 20 years?

MG: Obviously, a lot of the conversation is about commanders at higher-level commands. But all of the same applications of technology can be applied at a much smaller scale. So for example, in a modernized environment some years from now, a squad leader will likely be able to get immediate, responsive fire support when that squad runs into trouble. Today, that’s a process that takes a long time, where Marines or soldiers or sailors or airmen or guardians are in positions where they’re at risk or their mission is at risk, and we’re waiting to deconflict, or we’re waiting to understand where capabilities are.

So the ability of the force to be much more responsive to distributed elements is going to be much greater, because it understands where those elements are, it understands their role in the mission and their preparation to do it, and it understands the status of fire support or logistics support or ammo or fuel or whatever. So the ability of the force to support smaller elements is going to be much better and much faster. But even within the squad, for example, or a platoon, there’s the ability to understand the health of your soldiers or Marines, the ability to anticipate the requirements over the next hill or the next day, and to pre-plan and gain immediate support. When those processes are automated to the degree that support is anticipated and made readily available, now you have even small-unit leaders across the battlefield being able to access the opportunities that a data-informed environment creates.

And that’s really powerful. Squads will have the ability to understand their environment, because they have an integrated command and control environment in their palms, for example. Think about how Google Maps can project the traffic that you can expect on the way home. Now imagine a squad leader able to project the threat posed by the enemy in his or her environment. That’s going to be really powerful. So you have these intuitive interfaces, so that squad leaders, platoon commanders, battalion commanders can actually understand their force better, understand their mission in context better, and understand the enemy better. All of those benefits rapidly go down to the lowest level. This is not a capability that’s held at higher levels. It’s a capability that proliferates anywhere you have data and you have a system that you want to influence and use that data to make better decisions.

I mean, this is truly something transformational, and we have great support across the department. I tell you, our young leaders, our young service members, those folks who grew up expecting that if they had a problem, there was an app for that. They’re looking for that from defense. They want an app for their situation, whatever their situation is. And that’s a family of applications, and that’s an enterprise of capabilities, that informs and allows them to operate with the level of flexibility and freedom and effectiveness that they have in their personal lives, in their economic lives. They’re looking for the same level of productivity and accomplishment that they know they can achieve at home. They want to be able to do that on the battlefield as well. And so we’re really focused on getting that capability into their hands so that they can actually operate with that kind of effectiveness.

ACME General: Well, General Groen, thanks. This has been really illuminating, and I’m looking forward to seeing what you’re able to achieve over there at the JAIC.

MG: Thanks very much, Ken. I appreciate the opportunity.

ACME General: Thanks again to General Mike Groen for joining us on this episode of Accelerate Defense.

If you enjoyed today’s episode, please rate and review Accelerate Defense on Apple Podcasts – it really helps other listeners find the show.

And follow the series today wherever you get your podcasts, so you get each episode in your feed when it comes out.

Accelerate Defense is a podcast from ACME General Corp. Our producer is Isabel Robertson. Audio engineer is Sean Rule-Hoffman. Special thanks to the team at ACME. I’m Ken Harbaugh, and this is Accelerate Defense. Thanks for listening.
