Autonomous Weapons and Artificial Intelligence with Paul Scharre

Paul Scharre, Executive Vice President and Director of Studies at the Center for a New American Security, joins ACME General Corp to talk about his recent trip to Ukraine and his observations of and predictions for autonomous weapons and artificial intelligence.

In addition to his work at CNAS, Paul is the award-winning author of Army of None: Autonomous Weapons and the Future of War and Four Battlegrounds: Power in the Age of Artificial Intelligence.

ACME General Corp: Welcome to Accelerate Defense, a podcast from ACME General Corp. I’m David Bonfili, co-founder and CEO at ACME and host of today’s episode. On Accelerate Defense, we hear from thought leaders ranging from political figures to military professionals to investors, entrepreneurs and established members of the defense industrial base about how innovation shapes our national security landscape. Today’s guest is Paul Scharre, Executive Vice President and Director of Studies at the Center for a New American Security, and the award-winning author of Army of None: Autonomous Weapons and the Future of War and Four Battlegrounds: Power in the Age of Artificial Intelligence. Paul previously worked in the Pentagon, where he played a leading role in establishing policies on unmanned and autonomous systems and emerging weapons technology, and led the Defense Department working group that drafted DoD Directive 3000.09, which established the department’s policies on autonomy in weapon systems. Earlier in his career, he was also a special operations reconnaissance team leader in the Army’s 3rd Ranger Battalion, where he completed multiple tours in Iraq and Afghanistan. Paul, welcome to Accelerate Defense, and thanks for making time to join us today. 

Paul Scharre: Oh, thank you David. Thanks for having me. 

ACME General: I’d like to start with a trip you and I recently had a chance to take to Kyiv, where we spent about a week talking with various members of the government there, as well as with a number of Ukrainian defense tech startups and more mature members of the Ukrainian defense industrial base. And I’d like to get a sense of what your key takeaways from that trip were, and whether there was anything in particular that you found surprising. 

PS: Well, yeah. Thanks. I mean, it was just a fascinating trip, where we had this chance to talk to not just their armed forces and hear what they’re struggling with, and some of the different ministries in the government, but also a number of different tech startups in Ukraine. I think the thing that I took away was, I was just blown away by the scale of innovation there. How many small mom-and-pop companies there are – and I guess it’s all about selection bias, because that’s who we met with – but it just felt like half the country was out there building drones on the weekend, you know, taking 3D printers, buying components off the shelf from China, and innovating in just these really incredible ways with precision-guided munitions, small drones, autonomy, electronic warfare. A lot of the tech is maybe not sophisticated in the sense that you might think about top-end U.S. Defense Department technology, but people are doing it in their garage. They’re doing it fast, and they’re doing it cheap. That was really impressive. And I think it’s really significant in terms of the shape of the conflict in Ukraine and a huge advantage for the Ukrainians. But it also has huge implications for how we think about the proliferation of this technology in the years to come. 

ACME General: Yeah. So you mentioned how fast things were changing over there, both in terms of conditions on the battlefield and in terms of the development of new technologies. I’m curious what your take is on the implications for the US, given the speed we saw there and the dynamic conditions in the environment, when in our own country it has been a pretty slow process to get anything new through budgeting, requirements development and acquisition. 

PS: I mean, the problem here is that even the DoD version of fast is just abysmally slow compared to the pace of innovation out on the front lines. And this came out in stark relief in the conversations that we had in Ukraine. I remember talking to one drone developer who was saying that he was in real-time chats on a daily basis with troops on the front lines that were using his drones, getting feedback on what was working and what was not. That’s particularly critical in the electronic warfare space, where there’s so much jamming going on, and getting that immediate feedback of, okay, what are the Russians doing now? How do we respond to that? How do we adapt? And then, similarly, I remember when we spoke with their armed forces, we heard a lot of disappointment in US tech in the war, in particular US drones – people saying, well, you know, it’s not as effective, they’re overengineered, they’re not really that useful. And I remember asking, well, why? What’s the difference? And it was this issue of time. They laid it out: look, a US company comes, they bring their technology out to the front lines, they get some feedback on it. Now they’re going to go back, re-engineer it. It’s got to get tested. It’s got to get validated before they can send it back out to Ukraine. Now, this is what we were hearing from Ukrainians, and maybe US companies feel differently about this, I don’t know. But what we heard from Ukrainians was that that whole process is like a year for a US company. And that’s just too slow. The timetable that the DoD likes to operate on, and tends to operate on, is just not fast enough – not just for this conflict, but I think for a lot of future conflicts. So we’ve got to find ways to speed things up. 

ACME General: Yeah. It’s interesting that you talked about not just the speed at which they were developing the technology, but the access they had to end users. People were reaching out directly via Telegram channels or, you know, through personal relationships, and getting direct and repeated contact with end users on the front line about what was working and what wasn’t. And that was allowing them, in part, to move faster. I wonder sometimes here, even if we got the systems of procurement moving faster, whether we would be able to adapt the system of end-user touchpoints to get the contacts and reps that people need to develop the right technology in the first place. 

PS: Yeah. I mean, I think that idea of those reps, of that conversation between users and the engineers building the system, is just really critical. And to me, it reinforces something that I’ve seen quite a bit over the last 20 years, which is that what’s going to enable rapid development of emerging technologies is that really tight feedback loop. That’s certainly what you see out in the commercial sector with the development of things like, you know, the iPhone, software tools or social media platforms. It’s being able to get technology early into the hands of users, get that feedback and have that kind of dialog. And the system inside the Defense Department is not well designed to make that happen. There’s some of that, but it’s not really optimized to push that quickly. 

ACME General: Yeah. You know, it’s funny, I remember when the Army came out through Futures Command with Project Convergence, and the idea in the beginning was, this is going to be a chance to get emerging technology in the hands of users and test it out in an operational context. And in sort of classic DoD fashion, it just scaled and got bigger and more complex. We brought in allies and partners, and we made it joint. And suddenly you’ve got to be submitting technology 12 months in advance to get considered, to be injected into an exercise that now is happening every two years because it’s too big. And so there has to be some way to avoid that creep, where things become slow and cumbersome when they’re meant to be fast and agile – not just on the procurement side, but also, I think, on the end-user touchpoints. I’m curious, as a student of this space, when you think about autonomy and AI and their use in war, and you look around the world today to widen the aperture a bit – it could be in Ukraine, it could be elsewhere – what are some examples where you see autonomous technologies or AI being used the most effectively, or the most concerningly? 

PS: Yeah. I mean, I guess there’s two recent examples that stand out in ongoing conflicts, one in Ukraine and one in the Gaza war. So one of the things that is being developed in Ukraine is autonomous terminal guidance – and the Russians are working on this as well – basically a way to respond to the jamming on the front line. Most of these drones – and the Russians are also using just an enormous number of drones, we’re talking about tens of thousands being used on the front lines on a regular basis – most of these are remotely controlled. And in fact, many of them are these first-person view drones, where the drone pilot has these kind of wraparound glasses and is steering and navigating the drone as it’s swooping in to do reconnaissance or to attack someone. Well, the vulnerability there, of course, is the communications link. If you jam the comms link, the drone is useless. So that’s a big point of vulnerability. Jam-resistant communications are potentially one way around that. But another way is autonomy. And the simplest – you know, sometimes I think in the US system, when I hear people talk about autonomy, they sort of go for the hardest possible problem. The idea is, well, if we’re going to have autonomy, then we need to be able to replicate the human brain inside a cockpit. It’s like, we don’t know how to do that. Nobody can do that. That’s beyond the state of the art right now. So, you know, lots of leading companies are doing exciting things – we’ll let you know when we get there. But in the meantime, what could we do? And there’s a ton of things you can do. And that’s what people in the war in Ukraine are actually trying to solve – real practical problems. So what people are working on is autonomous terminal guidance, where basically you lock onto a target and then the drone just autonomously maneuvers in to strike the target that the human has chosen. It sidesteps all of these thorny legal and ethical questions about autonomous weapons, and technologically, what you’re trying to do is just not complicated at all. It’s basically just lock onto some pixels and track those pixels. This technology has been around for decades. You don’t even need machine learning, although that could make it better. And we saw some demonstrations of this. So I think this is likely to be something that we see develop, probably on both sides, in the next year or so, and that we’ll see more of on the front lines. And I think it speaks to this idea that there’s always a competitive dynamic in military technology – everything’s got a countermeasure. The countermeasure to remotely piloted drones is communications jamming. And the countermeasure to that is autonomy. The longer the war goes on, the more we’re going to see pressures towards more and more autonomy, which, you know, has pros and cons. There are valuable ways to use the technology. There are some concerning things about autonomous weapons. And, you know, I’ve got a book on this, so I do think there are issues here worth exploring. But that particular application seems pretty straightforward. I guess the other thing that might be worth mentioning is that there have been some interesting reports about the Israel Defense Forces, the IDF, using artificial intelligence for targeting in their aerial campaign in Gaza. 
There was this report out of +972 Magazine, you know, the media outlet, earlier this spring. Now, the IDF has denied it, and so, you know, what’s the truth here? We don’t really know. But the allegations were about using AI to help support targeting. So not an autonomous weapon, but really generating lists that people would then vet to go into strike packages. It seems like a plausible use case. It seems like the kind of thing that you can see militaries wanting AI to do – in this case, combing through information from social media, cell phone data and other things to try to build target packages of suspected Hamas fighters who could then be targeted in their locations. And that at least aligns well with the kinds of things that we’ve heard militaries, including the US military, say for a while they want to use AI for, which is to speed up target development, and it seems like a potentially useful application. 
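For readers curious about the mechanics: the “lock onto some pixels and track those pixels” approach Scharre describes above is classic template tracking plus proportional steering, a technique that predates machine learning. Here is a minimal sketch in Python using only numpy; the patch size, search window and steering gain are illustrative assumptions, not parameters of any fielded system.

```python
import numpy as np

def lock_on(frame, cx, cy, half=16):
    """Store the patch of pixels the human operator locked onto."""
    return frame[cy - half:cy + half, cx - half:cx + half].astype(float)

def track(frame, template, cx, cy, search=24):
    """Re-find the locked patch near its last known position by
    sum-of-squared-differences template matching -- no machine
    learning required, which is the point made above."""
    half = template.shape[0] // 2
    h, w = frame.shape
    best_err, best_xy = np.inf, (cx, cy)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = cx + dx, cy + dy
            if x - half < 0 or y - half < 0 or x + half > w or y + half > h:
                continue  # skip candidate positions that fall off the frame
            patch = frame[y - half:y + half, x - half:x + half].astype(float)
            err = np.sum((patch - template) ** 2)
            if err < best_err:
                best_err, best_xy = err, (x, y)
    return best_xy

def steering_command(cx, cy, width, height, gain=0.01):
    """Proportional guidance: command pitch/yaw to keep the locked
    pixels centered in the seeker's field of view."""
    return gain * (cx - width / 2), gain * (cy - height / 2)
```

The human supplies the lock; the loop only keeps the seeker pointed at what was chosen, which is why Scharre notes this sidesteps the thornier autonomous-weapons questions.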

ACME General: So, you know, maybe to pull the string on that a little: in a recent piece in Foreign Affairs, you wrote about a Chinese military scholar who had theorized about a coming singularity on the battlefield, where the pace of machine-driven decision making outstrips what humans are capable of, and eventually leads to a state where machines are not just autonomously suggesting targets, not just selecting targets, but actually planning and executing entire operational-level strategies of war. Do you think that that’s a realistic concern over a short time horizon, a medium time horizon, a long time horizon? Never? And do you think that we’re doing what we ought to be doing today, in view of that time horizon, to prepare for it? 

PS: Yeah, I think so. So this is basically just a fascinating idea from some Chinese scholars: that you can get to some point in time where the pace of AI-driven action outstrips humans’ ability to respond, and militaries basically have no choice but to turn over the keys to AI. We’re clearly not there yet, and I don’t think it’s a near-term concern. I think it’s more over the longer term – you know, maybe several decades – as we see militaries adopting AI in a whole variety of applications, in autonomous weapons and AI for targeting and AI for planning: what does that do in aggregate to warfare? You can think about how the Industrial Revolution produced a whole bunch of specific technologies – tanks and trucks and airplanes and submarines – but on a larger scale, a macro scale, what did the Industrial Revolution do to war? Well, it dramatically increased the physical scale of destructiveness that was possible, in a way that was never possible in a pre-industrial sense. Countries could generate mass, generate firepower, maneuver on the battlefield, then concentrate firepower in a way that was completely unprecedented. And we saw in World War Two, for example, the wholesale destruction of cities in Europe and Asia as a result of this concentration of firepower, possible because of this technology. So what might AI do to warfare in the future over the long term? Nobody knows. I don’t have a crystal ball either. But one way to think about this is that while the Industrial Revolution changed the physical aspects of war – machines taking over physical labor from humans – AI is doing that for cognitive labor. It’s changing the decision-making processes that militaries use, and it is enabling, in principle, if you do this right, better decisions that are faster. And so that could lead to a world where the information aspects, the cognitive dimensions of warfare, are radically transformed, maybe 30, 40, 50 years from now – one where the pace of decisions might be such that it’s very hard for humans to actually keep up, the same way that the whole point of a tank or an airplane is to do something that you physically can’t do, right? That might be where we head with AI, and that might change, in a potentially very radical way, how humans actually engage in warfare and our relationship to war – which is, I think, a really interesting and profound idea. But yeah, it’s speculative. 

ACME General: So one of the things that struck me when we were talking to folks in Kyiv was the battle for the Black Sea and the littorals off Ukraine’s coast. You had a situation at the beginning of the conflict where Ukraine basically started out with very little navy and quickly ended up with none. The Russians had an entire fleet in the Black Sea. Ukraine had a need to export grain, both for its own economy and to avoid global famine, and Russia backed out of the grain deal that would have let it out. And Ukraine, at very low cost, really using not especially sophisticated autonomous technology, managed to effectively drive the Russian fleet out of the Black Sea and reopen those grain routes. One of the comments that some people made while we were there was that the same approach could be taken against any major navy in the world today – that the idea of concentrating assets on these expensive capital ships full of people was just too vulnerable in the world that we’re in, and that this movement towards cheap and attritable wasn’t just sort of a nice-to-have, but required a rethinking of force structure at a fundamental level. What’s your take on that? Do you think that modern, sophisticated navies with aircraft carriers and frigates and destroyers are defensible? And if so, then why aren’t the Russians doing a better job of it? 

PS: Yeah. Well, you may have a take on this as well, I realize – much more up-close and personal experience here. But I guess there’s a couple of takeaways for me. We’ve seen across the board that the Russian military is just not very competent, so that is a factor here. What Ukraine has done in the Black Sea is incredible. It’s really remarkable that they’ve basically been able to neutralize the Russian fleet without a navy of their own. That’s amazing. I would not then draw a direct connection to, say, a vulnerability that the US Navy has. The US Navy is a lot more capable, not just in technology, but in training and all of the soft skills that matter enormously on the battlefield – as, frankly, we’re seeing play out in Ukraine. But the broader lesson here, that people can do clever, nasty things with asymmetric approaches – yeah, that’s true. It’s not a new one. Frankly, it’s one the US military learned painfully in Iraq and Afghanistan with things like roadside bombs. And if we had to refight those wars today, small drones, for example, would be a threat that we would have to contend with. I know the US Army is starting to think about this already and investigate counter-drone systems. So that’s going to be true in other places as well, and it’s certainly going to be true in the maritime environment. On one level, that’s not a fundamentally new problem. It’s what the USS Cole faced – an asymmetric threat from a small boat laden with explosives; it just had people on it. And the Navy has been wrestling with the threat from Iranian small boats in the Gulf for a very long time. But drones are going to enable new, potentially more challenging aspects of this threat, whether it’s sea drones or aerial drones. And I do think there is this broader lesson you’re pointing out about cheap and attritable – that’s important here. The big lesson is not that large capital platforms don’t matter. If what you want to do is have a large platform that you can project air power from at sea, you need an aircraft carrier, and they’re going to be limited to, you know, the range and endurance of small aircraft, for example. So there’s a role for big platforms – for submarines and aircraft carriers and bombers and destroyers. It’s that there is also a role for small, cheap and attritable. And that is something that, until very recently, the US military has not embraced – we’ve had just about all of our eggs in the big, expensive and exquisite side of the equation. Now we’re hearing talk in DoD about things like Replicator, and, well, I want to see it. I’m kind of from Missouri on this one. Big fan, I’d like to see them be successful, but I need to see these drones before I’m going to believe it. 

ACME General: Well, to drill down on that a little. I’m curious – if you look at the current defense industrial base, the established players in the US, the Lockheeds, the Northrops, the Raytheons – I guess RTX – it seems like when it comes to cheap and attritable, for the most part, they’re just not that interested in leaning into those platforms. And where you see people focused on building cheap and attritable is in the emerging, VC-backed, defense-focused startups that really didn’t exist five years ago, but where, because there’s private capital available to fund risk-taking, you’re seeing the emergence of a whole new industry. Do you think the defense primes are actually doing more than it appears and it’s just happening behind closed doors? Do you think they’re just waiting on a demand signal from the government that they haven’t gotten yet – the sophisticated player’s stance of, if you want it, I’ll build it, but I haven’t heard that from you in a serious way? Or do you think there’s something fundamentally different about the economics of cheap and attritable platforms that just doesn’t work for the big defense primes, who are used to bidding on contracts to build billion-dollar ships? 

PS: Well, I mean, look, you sit in a different place in the industry here – maybe you have a different sense than me. I think you’re right in terms of the symptom we’re observing. My guess is that it’s a demand-signal issue. I mean, this is a monopsony market, and so the defense industry, I think, is ultimately going to respond to what the customer is asking for, but they’ve got to see DoD put its money where its mouth is. Just look at munitions – we’re not buying munitions. Munitions, well, they’re not really cheap, but they’re certainly designed to be expendable, and we’re just not buying them in quantity, despite the fact that everyone’s talking about it now. Even after the war in Ukraine exposed these shortages, it still is a major challenge. There was this moment of insight I had a few years ago where I was struggling to understand: why do we keep shortchanging munitions and enablers and all of these things that we need to actually have real warfighting capability? Because you can have a fifth-generation jet, but if it runs out of missiles and bombs, it’s not going to be very useful. What’s the disconnect? And finally it dawned on me – this is just my opinion, all right – I don’t think the DoD cares about capability. I actually don’t think that’s what the services care about. The services care about force structure. They care about force structure because it protects their budget. So when you ask the services, what do you need? The Navy is going to tell you a number of ships. The Air Force is going to tell you a number of airplanes. The Army is going to tell you a number of people. They sort of think in these pre-industrial metrics. They know that what you need for a warfighting capability is more than just that platform, but all of their incentives are to protect the force structure, because that helps to protect the budget. That’s my theory. 

ACME General: That’s fascinating. And you’re right. I mean, it’s only been when we’ve seen the need to really draw down inventories of expendables to supply them to Ukraine or elsewhere that there’s suddenly been an interest in our ability to reconstitute those inventories, and a recognition that we’re not very well positioned to do that in a meaningful time frame. If you go back to 2020, there was a congressionally directed panel co-chaired by Representatives Seth Moulton and Jim Banks, called the Future Defense Task Force, coming out of the House Armed Services Committee. They called at the time for a Manhattan Project level of attention to AI, and specifically recommended requiring that all new major defense acquisition programs be AI-ready and evaluate at least one AI or autonomous alternative prior to funding. We’re now four years on from that recommendation. And to bring it back a little closer to home, if you were grading the Defense Department on how it’s doing at taking AI and autonomy seriously, how would you grade it? 

PS: Oh, well, I’ll just make one note first. What is wild is that we are seeing a Manhattan Project in AI, but it’s coming out of the private sector, right? We’re seeing massive, massive expenditures coming out of private companies, and the numbers keep going up. The current cost to train a state-of-the-art model is maybe about $100 million, give or take, and we’re seeing those costs double every ten months – just this incredible cost growth in models. 
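Taking those two figures at face value – roughly $100 million per training run today, doubling every ten months – the compounding is easy to check. A quick sketch (both numbers are Scharre’s rough estimates, not measured values):

```python
# cost(t) = cost_0 * 2 ** (months / doubling_period)
cost_0 = 100e6        # ~$100M to train a frontier model today (rough estimate)
doubling_months = 10  # estimated doubling time of training cost

for years in (1, 2, 5):
    cost = cost_0 * 2 ** (years * 12 / doubling_months)
    print(f"{years} yr: ~${cost / 1e9:.1f}B")
# Prints: 1 yr: ~$0.2B, 2 yr: ~$0.5B, 5 yr: ~$6.4B
```

On that trajectory, a single training run reaches billions of dollars within about five years, which is the context for the SBIR remark that follows.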

ACME General: It does seem hard to do that on, like, a million-and-a-half-dollar SBIR. 

PS: Well, that’s the thing, right? So DoD is completely out of the picture here, and they’re anywhere from maybe 5 to 10 years behind the private sector in terms of adopting the technology. And I think it could be okay that the private sector is in the lead – these are U.S. companies, after all, and many of them are eager to work with the Defense Department – if the DoD could stay close behind the state of the art. But, you know, right now, okay, so U.S. companies are leading, but the best Chinese models are maybe 18 to 24 months behind US companies. That’s a tiny, tiny edge. And if the Defense Department is then five years behind the state of the art, well, then we’ve squandered that edge – that edge is meaningless, right? So I think we’ve got to do a better job. My experience, talking to people in the Defense Department, is that there are so many folks who are trying to make this work, and they’re just frustrated – people in the private sector, people in government. They’re trying to find ways to connect the dots, but we need to put the policies in place, and I know that Deputy Secretary Hicks has been working on this, to try to ease some of the policies for data sharing across the department. But we need to free up data. We need the compute infrastructure in place – we lost over two years on the JEDI cloud contract, just running in circles, getting nothing done, and that’s a real problem. We need the compute infrastructure because a lot of these models are very computationally intensive. We need the right tools in place internally, and the right human capital, to capitalize on this technology. And it’s been a real challenge. You know, when Secretary Esper was Secretary of Defense, at one point he told Congress that AI was his number one priority. And I thought that was a fascinating thing to say, because that’s great – but when you look at what the DoD is doing, no, it’s not. The Joint Strike Fighter is your number one priority. I see what you’re doing. That’s where you’re spending your time and attention and money. And the best independent estimates – Bloomberg Government has done some good ones on this – are that the DoD is spending about 1% of its budget on AI. That’s not a priority. I would love to see it be a priority, but I think we’re not there yet. 

ACME General: So you hit on a couple of different things in that last response that I’d love to get your thoughts on further. One was data. You and others have remarked that as we’ve moved to the next generation of AI models – large language models, etc., the things underlying ChatGPT – unlike older models that were rules-based, these models are fundamentally data-driven. They learn from being trained on large data sets. Do you think that the US Department of Defense has the data it needs to train models effectively on defense-specific use cases? And if it does have that data, do you think it’s making it available to the people who need it for training? 

PS: I think the data is in principle discoverable – or, you know, we could record it. I don’t think we have the culture that’s needed for data. What do I mean by that? If you look at private tech companies, they value data. They’re hoovering up all the data they can get their hands on – not just AI companies, but social media platforms and other companies. They are scooping up our personal data. Whatever they can grab, they’re holding, because they know that data has value. The Defense Department doesn’t really think about it that way. So, for example, we had hundreds of thousands of hours of full-motion video from these recent conflicts that we just trashed – we just got rid of it, because it’s expensive to hold on to all this data and nobody saw it as valuable. What an enormous loss. A lot of it was real-world operational data that would have been incredibly valuable to use, even if it wasn’t initially clean the way that a lot of AI data needs to be. There might have been stuff that we could have gone back to over time to train models on, even if it wasn’t the first cut of data – people who wanted to train an AI model could have gone in, tried to clean it up and found some use for it. We got rid of all of it. So if you really want to harness data, you’ve got to value it. You’ve got to invest in it. It’s going to cost money to record the data and to save it. We’re probably going to have to do things to better instrument our forces and our systems, so that we’re collecting quantifiable data on performance – on, you know, the engine performance of various platforms and how they’re doing under different conditions – so that we can then use that to train models. And I just think we’re a long way from that right now. 
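As a concrete illustration of what “instrumenting our systems” might mean in practice, here is a hypothetical sketch of the unglamorous part: one timestamped record per platform, appended to cheap storage now so it can be cleaned and joined into training sets later. The field names and file format are invented for illustration; they are not any DoD schema.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TelemetryRecord:
    """One hypothetical observation from an instrumented platform.
    Capture broadly now; clean, label, and mine it later."""
    platform_id: str
    timestamp: float
    engine_temp_c: float
    vibration_rms: float
    altitude_m: float
    ambient_temp_c: float

def log_record(rec: TelemetryRecord, path: str = "telemetry.jsonl") -> None:
    # Append-only JSON lines: cheap to write, easy to reprocess later.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(rec)) + "\n")

log_record(TelemetryRecord("tail-042", time.time(), 412.0, 0.8, 9100.0, -41.0))
```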

ACME General: Yeah. You know, it’s funny – I think you may know, prior to founding ACME, I had a long career in finance, and I used to work at a quantitative hedge fund. And I remember one day I was sort of in the guts of the system there, and I realized we were capturing the closed captioning from Barney the purple dinosaur. And I went to my boss and said, you know, this is fascinating. I had no idea we could trade off of Barney the purple dinosaur. Like, what’s the monetizable signal in this information? And the response was very simple: oh, there probably isn’t one, but we don’t know what’s going to be valuable in the future. So we cast a wide net. We gather the data. We then have the history to do the research to figure out what matters. It’s easier to capture the data and then discover what’s valuable than to try to predict what’s valuable and use that to determine what you capture, right? We probably need a little more of that in DoD. That maybe gets to the second question, which is around the talent piece. In your book Four Battlegrounds, one of the battlegrounds in the war for AI power is talent and human capital. And I’m curious what your take is, first at a national level. It seems like historically, as a country, we did a good job of attracting the best and the brightest from around the world through our research universities and the depth of our capital markets and our entrepreneurial culture. And there have been some signs in recent years that that advantage is starting to erode – students are going to universities in other English-speaking countries, for example, and our immigration laws are making it harder for talented tech workers to come to the US while other countries are making it easier. I’m curious what your sense is on whether we’re still winning that battle for talent at the national level. But also, to drill down into the Defense Department itself – it seems so critical that we have the best talent, from a human capital perspective, within the department, both uniformed and civilian. Do you think that we’re able to attract and, maybe more importantly, retain that talent in the Defense Department as it’s structured today? 

PS: Yeah, no, that’s great. On the national level, the US continues to be a magnet for global talent. US universities and companies are a major draw, and we see top scientists and engineers from around the world who want to come to the United States. That shows up in polls of AI scientists, certainly, but also in people’s actions. People want to come here, including from competitor nations. So, for example, China produces more of the top AI scientists than any other country in the world, but they don’t stay in China. They leave, and they come to the United States. Over half of the top undergraduates studying AI in China come to the United States for their graduate studies, and then they tend to stay here after graduation – 90% of Chinese undergraduates who do their PhD in computer science here in the US stay in the US after graduation. So this is a tremendous advantage for the United States. And in the sense of just a bare-knuckle competition with China, we are stealing their best and brightest. This is an incredible advantage, and if they were doing it to us, we would be furious. But the problem here is that our immigration laws are this massive self-inflicted wound. We are making it challenging for top scientists and engineers to come here to the United States. The wait time, for example, for Indian-born PhD scientists in science and engineering is decades long. It’s crazy. There’s no strategic reason why we’re making that difficult, and it’s really harming us. So the talent one is ultimately ours to lose, if we continue putting these barriers in place. And if you want to talk about government regulation getting in the way, this is an example – a really painful one. Here you have U.S. companies that are trying to hire top talent, and we have these really arbitrary government regulations getting in the way, stopping that from occurring. On the DoD side, I think that we’re actually doing a great job of recruiting top talent who want to work on military problems. There was this moment a couple years ago, when Google discontinued its work on Project Maven, when there was this sort of panic in the national security world that we would be shut out of top tech talent. And obviously that’s not true. There are tech startups that are focused entirely on defense. You’ve got people in the investor community who are interested in investing in defense. I’ve spoken to a whole lot of people working in government who could be making a lot more money somewhere else, but they’re interested in helping the government; they come from tech backgrounds and they have the right skills. I think the thing that’s missing is that people need to see success, right? People need to see that their efforts, their sacrifices, are going to pay off. People are willing to say, I’m going to invest my efforts into bringing this technology into the US military, whether as an entrepreneur or as a government official. But if they find that their efforts are stymied and they’re doing this all for nothing, then they’re going to give up. So I think we can give engineers exciting, interesting, hard problems that, frankly, they can’t work on in the private sector – it would be illegal to work on in the private sector. But we’ve got to show people that there’s a payoff, in terms of really being able to field things and solve problems. 

ACME General: Stepping back from the development of technology for a second and thinking about the enablers around it – we talked about data, we talked about talent. One of the things that we maybe don’t spend enough time thinking about, but that I think you’ve gone quite deep on, is the policies around that, and the way we set frameworks for thinking about the development and use of new technology. When you look across the department today, and you think about the future of autonomy and AI and the direction you see things going, where is it, from a policy perspective, that you think we need to be doing more work? 

PS: I actually think we’re in a pretty good place in terms of Defense Department policies. There is some good work to be done internationally that the State Department is engaging on, and DoD has a role there, of course, through the interagency process. But internally, you know, we’ve had a policy on the role of autonomy in weapons since 2012, so we were really pretty early to that need. That policy, with some adaptations, remains in place. One of the positive side effects of the whole Maven controversy was the DoD then starting to get its act together thinking about AI ethics principles and responsible AI and what that means. And now there’s quite a bit of work internal to the DoD to go beyond high-level statements like, oh, well, we’ll be ethical and legal and moral, and to say, in practice, what does that mean? How do we give guidance to engineers and put processes in place internally? So I think we’ve created the space for people to build things, because that question has always been out there. I remember way back in 2009, when these conversations started – no one wanted to talk about AI then, people talked about autonomy – people said, well, we’re adding more autonomy, and we already have weapons on these drones. So what does that mean, and where is this going? Now we’re at a place where we can say, hey, we’ve cleared a path for you, we have ways for you to get approval for things, and we can tell you, these are the questions they’re going to ask you, and these are roughly the guidelines as you build these systems. Now we’ve just got to go do it. So I think we’re in a pretty good place internally for DoD in terms of policies. The more interesting work now, actually, is global: the U.S. State Department working with other nations to help spread some of these principles and improve their ability to do test, evaluation and assurance of AI systems. The State Department has been leading, for example, this political declaration on the responsible military use of AI, which over 50 countries have now joined. But I think we’re in a good place internally. 

ACME General: So that’s super encouraging. When you look at the future of autonomy and AI and what we’re doing today in the US, what, if anything, keeps you up at night? What are you most worried about looking over the horizon? 

PS: Well, there has been some discussion in the last couple of years about the capabilities of the most advanced AI systems, sometimes called frontier AI systems – what today would be GPT-4, Google’s Gemini, Anthropic’s Claude. And we’re just seeing these systems advance; there will be more advanced versions of each of them. I think there’s a couple of things about them that are concerning. One is that we don’t have a good method of predicting their capabilities in advance. So you get this problem of emergent capabilities, because we’re training these models on huge data sets of trillions of words that include all sorts of things – massive data sets that people are scooping up from the internet. They’ve got computer code, they’ve got scientific papers. And as the models become more sophisticated – more computing power, larger neural networks – they’re able to build internal world models of different problems. We’ve actually seen this in some of the testing of the models. So they can learn, for example, from chess notation on the internet to play chess. They can learn to write computer code. They can learn to provide guidance on how to conduct a scientific experiment. A lot of these capabilities are very basic today, but we don’t have a good way of predicting in advance – people are working on this – how those capabilities will advance in the future. If we scale up by, say, 100x the amount of data and computing power and the size of these neural networks, how much better does the model get at conducting scientific experiments or writing computer code? We don’t really know. And sometimes we don’t even know until after the fact. Even once you’ve built it, it’s hard to test a model, because how you prompt it can draw out different kinds of behavior. So there have been a couple of, I would say, proof-of-concept behaviors that we’ve seen that are concerning. One is the ability to aid in the development of chemical or biological weapons. Now, the best independent analysis that I’ve seen, which comes out of the RAND Corporation, found that the models aren’t any better today than what you could get from a Google search. Okay. But the AI systems are getting better, and they’re getting better fast. So that’s probably coming. We’re likely to see – in two years, five years, I don’t think it’s 20 – models that are better than what you can get from a Google search. That doesn’t mean you automatically get access to some deadly biological weapon; you still have to make the thing, and thankfully, from what we can tell, biological weapons are pretty hard to make. But we’re starting to lower some of the barriers, right? So that’s a concerning thing. Another, which will probably happen even sooner, is their use in developing malware and conducting cyber attacks. And we’ve already seen some indications. OpenAI published a report a couple months ago about hostile state actors using their systems to basically do some process and workflow acceleration – the same thing that coders are doing elsewhere. It’s not really to develop novel types of cyber attacks, just to aid their workflows. And then we’re also seeing capabilities emerge in the systems themselves that might make controlling them challenging. 
Now, I realize that sounds maybe a little science fiction, a little kooky, so I want to ground it in real things. For example, Apollo Research has done work demonstrating that the most advanced large language models can engage in strategic deception. To use a bit of an anthropomorphic term, they will lie. And they will lie to their user when – and I’m talking in these really anthropomorphic terms here, and I apologize; it’s a hard thing to talk about otherwise – but basically, when a model is given a prompt that gives it a goal, and the conditions are such that accomplishing its goal is in conflict with being honest, under the right conditions it’ll lie to its user. Which is not what we want in these systems. What’s wild is that this doesn’t happen in the dumber models – the smaller ones are a lot less capable. This is an emergent property that appears as the model gets smarter. So that’s a little bit concerning, right? And we don’t know how to make it not lie, because you tell the model, you should be honest, don’t tell lies, and it’s like, okay, got it. But then if you push it enough, it’ll lie. So that’s the kind of stuff where we don’t understand this technology well. And again, I’d say we’re not at the point today of, oh my gosh, mash the panic button. But the systems are getting better, and we don’t have a good way of predicting where they’re going. It’s not like building a rocket, where we know how much propellant we’re putting on the rocket and can do some math to estimate how far it’s going to go. The connection between input and output here is really nonlinear and hard to predict. And so that’s the kind of thing that, you know, I think we just want to keep an eye on. It’s interesting. 
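One way to make the “nonlinear and hard to predict” point concrete: the training loss of these models does follow remarkably smooth scaling laws, so the rocket-propellant half of the analogy exists – what is missing is the mapping from loss to specific capabilities. Below is a sketch using the parametric fit reported by Hoffmann et al. (2022); the constants are from their paper and their training setup, shown here purely for illustration:

```python
def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Parametric scaling law L(N, D) = E + A/N^alpha + B/D^beta,
    with constants as fitted by Hoffmann et al. (2022).
    The loss curve is smooth and predictable; which capabilities
    appear at a given loss is not -- that is the gap described above."""
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

print(predicted_loss(70e9, 1.4e12))   # ~1.94 at roughly Chinchilla scale
print(predicted_loss(700e9, 14e12))   # ~1.81 after a ~100x scale-up
```

The extrapolated loss moves smoothly, but nothing in the formula says at what loss a model starts writing working exploits or useful lab protocols – which is the prediction problem Scharre describes.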

ACME General: That’s super interesting. I wonder – maybe this is the last question to bring us home – about, in some sense, the opposite end of that spectrum. One of the things that we’ve observed in Ukraine and elsewhere is that as technology has become more accessible and as costs have come down, the ability to develop solutions with significant military efficacy in your garage, on a budget where someone’s neighbors can buy one for a friend on the front line, has really gone up. And in a world where you can, cheaply and on an ad hoc basis, build pretty effective autonomous, AI-enabled tools, do you think that we’re building the regimes on the counter side to defend against that – particularly as you get out of major combat zones and into, say, urban areas where you have overlapping jurisdictions between state and local and federal forces? 

PS: No, I don’t think we are. I don’t think we have those systems in place, right? And it’s this trend that you’re describing of the rapid proliferation of these technologies. I mean, you can go onto GitHub and Hugging Face and other places and download these tools for free online. That’s great for human creativity and innovation and productivity. That’s horrible from a US Defense Department standpoint. It’s not that we don’t have a moat around things like submarines and aircraft carriers – we do; those are really hard to build. But there’s all of this other stuff where it’s a very level playing field, and those things are also going to matter, whether it’s drones or AI or other tools, as well as countermeasures to US systems. AI systems can be manipulated, they can be fooled, and U.S. systems are going to be vulnerable to those types of manipulation as well. And that, relatively speaking, benefits the less capable actors. It benefits the non-state groups and militias and terrorists and small governments. A stealth drone is really hard to build – there are not a lot of people in the world able to build one, and even fewer able to build one that’s any good. And so that’s great. But this technology – AI is like the opposite of stealth technology. And so I think that that reality of how level the playing field is going to be, and how much that is really not to US advantage, I’m not sure is totally internalized inside the broader defense community. It’s just something we’re going to have to contend with; it’s going to be a more fair fight than we would like, and that’s something we’re going to have to be ready for. I’ll close on one anecdote. I remember in 2008, I was in Iraq, and a buddy of mine was bragging about how he shot down a drone. He was all excited about this. Drones are small enough to be vulnerable to ground fire – he shot it down with his M4, and he was really, really psyched. And I was like, you know, that’s ours, right? They don’t have those. You shot down our drone. And he was like, it still counts. He was still excited. But that’s different today, right? Now, if you see a drone, you don’t know. It might not be yours. It might be theirs. And that’s a different world we’re going to have to live in. 
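On the point that “AI systems can be manipulated, they can be fooled”: the canonical demonstration is the fast gradient sign method of Goodfellow et al. (2015), in which a small, deliberately chosen perturbation flips a model’s output. A toy sketch on a fixed linear classifier – the weights and input are made up for illustration:

```python
import numpy as np

# A toy linear classifier: score > 0 -> "target", score <= 0 -> "clutter".
w = np.array([1.5, -2.0, 0.7, 0.3])
b = -0.2
x = np.array([0.2, 0.4, 0.1, 0.5])  # an input the model calls "clutter"

def score(v: np.ndarray) -> float:
    return float(w @ v + b)

# Fast gradient sign method: for a linear model, the gradient of the
# score with respect to the input is just w, so stepping along sign(w)
# raises the score as fast as possible per unit of max perturbation.
eps = 0.3
x_adv = x + eps * np.sign(w)

print(score(x), "->", score(x_adv))  # -0.48 -> 0.87: the label flips
```

The same mechanism, scaled up to deep networks and physical inputs, is what makes fielded perception systems attackable by actors who could never build the system themselves.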

ACME General: Well, it’s a great point, and it circles back to the data question. You don’t know today, when you see that drone, whose drone it is. And it’s not clear right now that we’re collecting the data we would need to train models to effectively identify, in a time-sensitive way at the edge, whose drone that is, and to respond. It’s a fascinating problem set. Paul, thank you very much for joining us today on this episode of Accelerate Defense. Really fascinating conversation. You’re a leading thinker in the field, and our audience is privileged to have you join us today. 

PS: Thank you. Thanks for having me. 

ACME General: If you enjoyed today’s episode, please rate and review Accelerate Defense on your favorite podcast platform. It really helps other listeners find the show. And don’t forget to follow the series so you can get each episode in your feed when it comes out. Accelerate Defense is a podcast from ACME General Corp. Our producer is Isabel Robertson. Our audio engineer is Sean Rule-Hoffman, and I’m David Bonfili. Thanks for listening.
