Mission-Focused Artificial Intelligence w/ Nick Beim

Investor and Venrock partner Nick Beim joins ACME General Corp. to talk about Rebellion Defense and building mission-focused artificial intelligence products for the defense industry.

Nick is a partner at Venrock and one of the lead investors in Rebellion Defense. Follow him on Twitter at @NickBeim and find Venrock at @Venrock and Rebellion at @RebellionDef. Learn more about Venrock at venrock.com/about-venrock and Rebellion Defense at rebelliondefense.com.

ACME General: Welcome back to another season of Accelerate Defense, a podcast from ACME General Corp. I’m Ken Harbaugh, Principal at ACME, and host of this month’s episode. On Accelerate Defense, we hear from political figures, military professionals, and other thought leaders about how innovation shapes our national security landscape.

My guest today is Nick Beim, partner at Venrock, an expert in artificial intelligence, software, and FinTech, and one of the lead investors in Rebellion Defense, which builds mission-focused AI products for the defense industry. Nick, welcome to Accelerate Defense.

Nick Beim: Thanks very much for having me.

ACME General: We normally start with the backstory: what got you on this path? But I came across a description of Rebellion Defense that just leapt off the page at me, and I have to ask you about the SWAT team of nerds that you have invested in. Give us the backstory of that.

NB: Sure. Rebellion Defense is an AI-focused startup, as you know, that is defense first, which is unusual for venture-backed startups: it focused initially and exclusively on national security organizations. The founders were civil servants. The primary founder, Chris Lynch, who's the CEO, had been a successful software entrepreneur who left Seattle, joined the government, and ended up leading the Defense Digital Service to help the department solve some of its biggest technology problems. And Rebellion for him is really a continuation of that mission: to bring the best technologies to the department, to help it address some of its biggest technology needs.

And Chris's view fundamentally, and he's incredibly right on this point, is that the way you solve these big technology problems is to get the smartest minds on the planet, which in the technology world tend to be Silicon Valley types. They need not live in Silicon Valley, they could be in New York or Texas or anywhere in the country, but they work with the most innovative companies and have experience doing that, and you bring them to government to help solve some of the government's biggest problems. So he calls that the SWAT team of nerds approach. And he's assembled quite a SWAT team of nerds at Rebellion. It's humbling. It's a very impressive group.

ACME General: You just described Rebellion Defense as a defense-first startup, and you said that's unusual for this space. That's a theme that we have explored quite a bit here, but why does that barrier exist between the startup world and the defense contracting world? I'm sure a lot of it's cultural, and some of it is just inertia and policy driven. How would you diagnose that challenge?

NB: Yeah, there are many dimensions to it. Certainly there's a cultural dimension, but I would start maybe with a historical one: the Department of Defense wasn't set up to adopt new technologies, and particularly new digital technologies, rapidly. It has succeeded in doing that in times of crisis. We can all remember post-Sputnik, with the creation of DARPA and the vast amount of new funding and really innovative work that was done inside the department, with contractors, and with universities to make sure that we stayed ahead of the Soviet Union during the Cold War. It can certainly happen, but the current Department of Defense is built on really industrial structures that were put in place primarily by Bob McNamara when he was Secretary of Defense.

And you remember, he had also been the president of the Ford Motor Company, and it's sort of a 1950s view of how to manage a large organization: incredibly bureaucratic, with very, very long planning cycles, and without the flexible, fast-moving, iterative approach that the most successful private sector companies take today. I think one of the biggest problems is that the DoD, which is the biggest part of government and the place where the need to adopt new technologies, and especially digital technologies, is most acute, given our geopolitical competition primarily with China and also with Russia, just isn't set up for the task that's most important to it today.

And so it's trying to change, it needs some structural change, and there's a lot of circumvention in the meantime. I can dig deeper into why it is so ineffective at adopting new technologies, but I think that's really one of the most important reasons.

ACME General: Well, you’ve obviously become an expert on not only that history, but the current landscape. Did something in your background push you towards this? How’d you get here?

NB: Yeah, it's a good question. I'm a hybrid creature: I've spent my professional career focused primarily on technology, with 20 years in the venture capital industry, and in that time, over the last 10 years, really primarily focused on artificial intelligence and the transformational capabilities that it brings. But prior to that, and alongside my professional career, I've always had a deep interest in international security. Interestingly, it started when I was in college, when I served as the lead researcher for the book Charlie Wilson's War, which I'm sure you've read, and no doubt some listeners have read the book or seen the movie. I was bitten by the bug of how interesting a world international security is. I studied international relations in grad school and have been involved in a variety of think tanks since I started in venture capital.

I've gotten heavily involved in a couple, particularly the Council on Foreign Relations and, recently, the Center on Global Energy Policy. And what's happened as I've evolved in the venture world is that the areas I've focused on, particularly AI, have become critically important for the national security world. Since this was a world I already knew, it was very natural for me to invest in companies that were focused on it. I also really cared about the mission. So I've ended up being one of really a handful of venture capitalists who have been willing to take the risk that Rebellion, for example, is taking in trying to serve the national security community as a primary customer, in these defense-first startups.

It's hard. There are so many problems with defense acquisition, it's just very hard to break through, and government is an unpredictable customer. But it's so important for the country, and the biggest winners, when they succeed, and I think SpaceX and Palantir are two good examples, can be immensely successful. That's how I'd trace the journey at a high level.

ACME General: Well, I definitely want to get into that mission driven approach you take to venture capital, but I can’t let the Charlie Wilson’s War comment go. We’re going to try to avoid the explicit rating on this episode. But when you say you were the lead researcher, does that include like the hot tub stuff? Or are you talking about the geopolitical mujahideen, Afghan stuff?

NB: I was not involved in direct hot tub research or any of the more controversial parts of Charlie Wilson's private life. But I was focused on a lot of primary research in the Soviet Union. I actually spent a year there; it was the final year of the Soviet empire, 1990 to '91. The reason I got the chance to do this is that I had been working as a production assistant for the 60 Minutes producer who wrote the book, and we had done a number of 60 Minutes documentaries together. He was actually the producer of the 60 Minutes documentary that was the original story of Charlie Wilson's War. At the end of it, he said, "Nick, this is such a great story, let's make this into a book, and would you help me research it?" And I said, "Sure, this sounds like an adventure, sounds like fun." So I focused primarily on the Russian side of the story.

ACME General: Well, it had to have been an amazing adventure, even if it was a fraction of what Charlie Wilson himself experienced. I'm going to go reread that. Your investing background focuses heavily on AI, and the thing that strikes me about your commentary and your writing about this world is how philosophical you become. You see AI as a transformational technology. A lot of people have made that observation, but you also think it might have the power to force humanity itself to become more introspective. Is it fair to say that you are an optimist when it comes to your AI evangelism, that you see a bright future enabled by AI?

NB: At a high level, yes. I think it's a technology that will force us to think very carefully about how we make decisions, how AI makes decisions, and how we work with AI, and to be very sensitive to the ethics of decision making and to the things that can go wrong. In some respects, we've done this with many past technologies, from computers to more significant military technologies. I do think AI is very philosophically interesting because its capabilities are evolving so quickly in so many dimensions, and it really does replicate certain types of human-like intelligence. So in many respects, we're sort of looking at a different intelligence. We inevitably compare it to ourselves, and we want to make sure that it doesn't take over things that we can do uniquely well. But it's interesting.

And I think, by virtue of that, we become more introspective about how we make our own decisions. For example, people talk a lot about the many types of biases in AI decision making, which is true: AI has a lot of biases based on the data sets it's using and the algorithms it's founded on. But in making those judgments, we have to recognize that we are incredibly biased in all sorts of ways ourselves. We're just beginning to understand that the human mind is an evolutionary kludge of many different types of decision making. And when we start to recognize how biased we are and how we can make decisions better, we start to think: how can we and AI work together and correct each other's biases?

And I think a lot of the early writing about AI focused on AI as something that competes with us. Is it going to take our jobs? Is it going to be smarter than we are? Is there going to be an artificial general intelligence that will take over the world, and killer robots? Is it smarter than we are, or will it become smarter than we are? I should say, as a preface, that the same thing happened with computers: when computers first came out, there was a lot of concern that they were going to destroy jobs and outthink us. But both for computers, and even more so for AI, I think the key question is really how humans work most effectively in partnership with AI to make the best decisions and the most helpful innovations.

It's really that teaming that I think is at the essence of the productivity increases and the new capabilities that we'll see. I do have faith that, while there are going to be many bumps and stops and starts, we will be able to harness AI in the same way that we've harnessed computers, and not be taken over by it. I think some of the thinking about artificial general intelligence was pretty far-fetched about what it would do, and to assume it would be human-like was, I think, a very anthropomorphic way of looking at things. But I'm optimistic that we will be able to work well with AI and do pretty incredible things, and that in the process of figuring out how to do that, we'll become much more thoughtful and deliberate in how we make decisions, which is a good thing independently.

ACME General: I'm drawn to one of your quotes. You said, "It's almost as if by encountering a new form of intelligence, we learn more about ourselves and become more conscious and deliberate about how we make decisions." It's this framing of AI as a new form of intelligence that strikes me. And in your last answer, you used the words evolving and replicating to describe AI. You really are identifying it as a, well, I don't want to say a separate life form, because I think you captured it well in calling it a new intelligence.

NB: Yeah. It highlights, I think, one of the most important things about AI, which is that it learns. That's why it's so powerful, and we learn with it. If we do the kind of partnering I was discussing, I think we can help it learn in the right ways. But it really gets to the heart of what's different about AI from a lot of modern digital technologies: computers don't improve themselves. They don't evolve; we evolve them by creating ever faster chips and better software. AI really does evolve. And it doesn't evolve in a way where it effectively becomes an organic life form that wants to survive and starts competing with us. There are certain boundaries within which it evolves: it gets better and better at the specific tasks we set for it, with more and more data and more and more effective algorithms.

But I think the way to think about AI is that it does have flashes of human-like intelligence, but in small, defined tasks. The area of AI, interestingly, where there's been a real dearth of breakthrough progress is judgment. Complex decision making, induction, logic: those areas of judgment are areas that humans are pretty uniquely good at right now. Even with the big breakthroughs AI has brought in image recognition and robotic movement, judgment is a harder nut to crack, or a harder set of nuts to crack, and it's something that we do particularly well. As long as we continue to evolve with AI and help it make decisions in the right way, I think it will become one of the most powerful, perhaps the defining, technology of our age. And we play a big role in how it learns. We have to be very responsible about how we do that.

ACME General: You've said that general AI is pretty far away. Do you think it's ever possible? Are there strong theoretical barriers to AI reaching a level where it can actually compete with humans in terms of judgment and those complex decisions?

NB: It's a great question, and answering questions about the future of AI has historically been a pretty humbling exercise; it's just difficult to say what will happen. But maybe the best place to start is to ask what we mean by artificial general intelligence. I think what most people implicitly mean is an intelligence like us, one that can make complex decisions, that wants to survive and excel, perhaps replicate: a human-like, holistic form of intelligence that can solve the problems we can solve. Which is a very human way of looking at it. We may end up with AI that's way beyond what we ever thought possible in specific tasks, but that never develops the kind of holistic, cross-functional set of judgments and priorities that tend to make us distinctly human.

I also think there's a really interesting question of how we learn and how AI learns, whether those are fundamentally different, and whether AI could ever have human-like intelligence of that type of complexity. A big debate in the academic AI world is between nature and nurture, and it relates to the debates in human psychology on the same subject. There are some people, you could call them the empiricists, who say, "If you give machine learning enough data, it will infer the laws of physics and the laws of human behavior; it will infer the world, be able to understand the world, and be that type of artificial general intelligence." There are others who say, "That may not be possible ever, and in addition, it may not be possible until the sun burns out, just given the amount of computing power and time and data that would be required."

But a faster path for progress in AI, and this has historically been where some of the biggest AI breakthroughs have come from, is to make AI more human-like. The thinking of this group is that there are precognitive elements in our minds: we have a proclivity for language, and a proclivity to understand certain core facts about the world, that give us a big head start in learning. If you pre-program some of those elements into your AI, rather than waiting for it to figure out the world from scratch, it will learn much faster, and that might grow into a more human-like intelligence. It's hard to say, but I certainly don't know if an artificial general intelligence will become possible.

And if so, at what point. It's certainly possible, and I think it's more likely if we program AI in more human-like ways. But I think we are generally very aware of the dangers of AI growing at rates, or being able to do things, that would put humans in a difficult spot. So I just don't think the technology will be that capable, or that we will tend to follow the paths that would put us in a position where we'd really have to worry about artificial general intelligence. But it's a great open-ended topic, and I'm certainly not the… It's hard to be an expert on something so futuristic.
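To make that head-start idea concrete, here is a toy sketch, with invented data and standard scikit-learn models rather than anything from the interview, comparing a learner whose built-in prior matches the world against a blank-slate learner that must infer everything from its examples:

```python
# A toy illustration (synthetic data, purely hypothetical) of the "head start"
# idea: a learner whose built-in assumptions match the world needs far less
# data than a blank-slate learner inferring everything from scratch.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Only five labeled examples of a world that is, in truth, linear: y = 3x.
X_train = rng.uniform(-1, 1, (5, 1))
y_train = 3 * X_train.ravel() + rng.normal(0, 0.1, 5)

X_test = np.linspace(-1, 1, 200).reshape(-1, 1)
y_test = 3 * X_test.ravel()

# "Pre-programmed" learner: assumes linearity, like a proclivity for language.
with_prior = LinearRegression().fit(X_train, y_train)
# Blank-slate learner: just memorizes the five points.
from_scratch = KNeighborsRegressor(n_neighbors=1).fit(X_train, y_train)

for name, model in [("with prior", with_prior), ("from scratch", from_scratch)]:
    mse = np.mean((model.predict(X_test) - y_test) ** 2)
    print(f"{name}: mean squared error = {mse:.4f}")
# The learner with the right prior generalizes from five examples;
# the blank-slate learner is far less accurate between and beyond them.
```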

ACME General: Yeah. Can you give an example for the layperson of just how differently humans and AI approach data and problems? I'm thinking of an observation from probably a few years ago, where an AI fed images from, I think it was CAT scans, assumed that the ruler in the image was relevant and associated the presence of a ruler with the existence of tumors. It's just a window into how, in parsing data, we take certain assumptions for granted as humans.

NB: Yeah, absolutely. What AI is supremely good at is finding correlations, often correlations that we wouldn't even think about, and in dimensions we can't understand. You could think of AI as operating in million-dimensional space. It can see relationships between entities that we can't. For example, it'll find connections between words and be able to do translation, and figure out grammar and language, in ways that humans just don't think. These can be incredible shortcuts. It can find, maybe, evidence of tumors where our existing technologies don't yet see them, but they exist. But AI is terrible at causality. Understanding that the ruler is there to measure a tumor is not something an AI would know; it would just see the correlation. For pattern recognition, though, it's incredibly strong.
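Here is a minimal sketch of that ruler failure mode on synthetic data; the "tissue" and "ruler" features and all the numbers are invented for illustration, not taken from any real medical dataset. A model latches onto an artifact that merely correlates with the label, then fails when the correlation breaks:

```python
# Synthetic demonstration of shortcut learning: the model learns the ruler,
# not the medicine, and collapses when the ruler stops tracking the tumor.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

def make_scans(ruler_tracks_tumor: bool):
    y = rng.integers(0, 2, n)                 # 1 = tumor present, 0 = healthy
    tissue = y + rng.normal(0, 2.0, n)        # weak, noisy medical signal
    if ruler_tracks_tumor:
        ruler = y.astype(float)               # ruler photographed with tumors
    else:
        ruler = rng.integers(0, 2, n).astype(float)  # ruler appears at random
    return np.column_stack([tissue, ruler]), y

X_train, y_train = make_scans(ruler_tracks_tumor=True)
X_test, y_test = make_scans(ruler_tracks_tumor=False)

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # near-perfect
print("test accuracy: ", model.score(X_test, y_test))    # falls toward chance
print("weights [tissue, ruler]:", model.coef_[0])         # ruler dominates
```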

And it can recognize patterns that we didn't think were possible. For example, I've invested in a company called Dataminr that ingests hundreds and hundreds of thousands of publicly available data sets, nothing with any sort of private or personal data, and figures out from those what events are happening in the world. It sort of determines news before it's news, with incredible accuracy, and it finds a lot of other things happening in the world that the news just hasn't picked up. And so the company has kind of become the news: it's the primary source of breaking news for most media organizations.

And it's just an incredible superpower where, in looking through all this public data, it can say, "Wait a minute, something really significant happened over there." Maybe it was a political scandal, maybe it was a terrorist attack, maybe it was a protest, maybe it was something else, and before humans are aware of it, or before humans who aren't immediately in that vicinity are aware of it. It is pattern recognition where AI excels, but it's understanding context and causality where it still lags.

ACME General: Got it. It's pretty clear that the feedstock for effective AI is data. I mean, there are other elements, you need computing power, you need the right algorithms, but without a massive amount of data, your AI can't do anything. And this leads us to the part of the interview where we talk about incumbents versus disruptors. How can you possibly empower disruptors when the essential ingredient is a massive amount of data, for which the incumbents have such an incredible head start?

NB: Very interesting question, and something I think a lot about, both in the commercial context and in the national security context. I'd say several things. One is that the amount of data you have does remain the most important factor in determining the capabilities of your AI, as long as it's AI-ready, and a lot of organizations, particularly government organizations but also commercial ones, have tons of data that's not AI-ready. It takes a lot of expense to get it AI-ready, and a lot of internal expertise that these organizations sometimes don't have. I'd also say that some of the most interesting new areas of innovation in AI are in low-shot learning: being able to learn really quickly, kind of like a child or a baby does. Maybe the original successful image recognition apps used by Google needed to see millions of images of a cat to recognize a cat; a baby figures it out on the second sighting.

It doesn't need all those examples. So there are interesting developments in low-shot learning. However, if you look at the biggest AI projects, GPT-3 is an example. This is a natural language processing program that you can feed a lot of text to, and it will turn out what it thinks should be the text that follows. It's sort of eerily impressive: if you start writing a story or an essay, it can finish parts of it for you in unexpected ways. It's still rife with challenges, but it got there by swallowing as much data as it could. And if you look at what OpenAI, which developed it, and DeepMind and Google and others are doing, generally they start with: let's get all the possible data we can.
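As a concrete illustration of the low-shot idea, here is a toy one-shot classifier; the 2-D "embeddings" are invented points of my own, not output from any real vision model or anything described in the interview:

```python
# A toy sketch of low-shot learning: a nearest-neighbor learner that
# generalizes from a single labeled example per class, rather than from
# millions of images.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Pretend these points are image embeddings: one labeled cat, one labeled dog.
support_X = np.array([[0.0, 0.0],    # the single cat example
                      [5.0, 5.0]])   # the single dog example
support_y = np.array(["cat", "dog"])

# One-shot learner: label each query by its nearest labeled example.
clf = KNeighborsClassifier(n_neighbors=1).fit(support_X, support_y)

queries = np.vstack([rng.normal(0, 1, (5, 2)),   # unseen points near the cat
                     rng.normal(5, 1, (5, 2))])  # unseen points near the dog
print(clf.predict(queries))
# In practice the hard part is learning an embedding space where "near" means
# "similar", which is where pretraining and built-in priors do the heavy lifting.
```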

With all that being true, and with it being true, as you said, that incumbents have a lot of existing data, particularly in the private sector the big internet and cloud platforms, Google and Amazon and Microsoft and Facebook, which have so much data already, what's also true is that there's a vast explosion of ever new data being created outside these organizations. One example of a company that was not even created as an AI company, but just as an electric car company, and then suddenly realized, "Hey, wait a minute, we're generating so much cool data. We can become the leading autonomous driving company," was Tesla.

It instrumented its cars to be able to ingest all of this really relevant data, and through the creation of all this new data it succeeded in developing AI capabilities and becoming a leader. I think that pattern will repeat itself, and the world is generating ever more data. But I think the core truth remains that AI, at least today, is primarily what you might call a feudal game, where those who own the most data tend to win, or certainly have an advantage. They may need the help of other companies in unlocking the value of that data, but to the extent they own the data, it's often hard to catch them unless you take some orthogonal approach, at least with where AI is today.

ACME General: Do you worry about the monopolistic tendencies of the industry writ large or the monopolistic impulses of the large actors?

NB: I do, very much. I think technology naturally lends itself to monopoly, particularly in platform technologies, and I think AI greatly enhances that. As a country, we have to get more thoughtful about antitrust and really understand it. I don't think the traditional knee-jerk reaction of "oh, let's break up Facebook" will work. What'll happen is that one of the resulting pieces will again become dominant, because that's the nature of the network effect that drives their business, and then we have the problem all over again. But I do think it requires a very sophisticated form of regulation, where you don't want to stop innovation, but you do want to prevent monopolistic practices that keep others out of innovation.

One interesting thing that's happened in the venture capital world, now that the consumer internet platforms are so strong, is that there's been very little innovation in the consumer internet. If you try something good, Facebook will copy it right away, or some other platform, Amazon, will copy it right away. So venture capitalists are reluctant to invest in those areas unless they see real, credible defensibility, and entrepreneurs are more cautious about starting companies in those areas.

ACME General: You said we need a very sophisticated form of regulation, and I'm trying to suppress a laugh here. Do you think our government understands AI well enough to regulate it in a sophisticated way?

NB: It's interesting. It certainly doesn't, generally. And I think the lag time between the creation of a big social problem that needs government attention and an effective government response is much greater today, because it takes more time for government to understand. I think we all remember what happened when the heads of a number of social media companies went to speak in the Senate a few years ago. It was just embarrassing how little understanding the septuagenarians and octogenarians in the Senate had of how these companies even worked, and it didn't inspire a lot of confidence.

I think Congress is getting better, and there's a big generational dynamic here. We're getting younger people who are digital natives and who understand these technologies into important positions on congressional staffs, in Congress itself, and in regulatory bodies; Lina Khan is an example. In the FCC and the FTC, I think there's a lot of important progress that can be made, but it will take time. And I think there's also a bigger challenge with unintended consequences with many of these platform companies than there has been historically with companies that were not tied to so many different parts of the economy.

I think we're getting better. Some of the recent antitrust initiatives against the big platform companies coming out of this administration have been focused on good issues, not all of the issues and not always in the most effective way, but much, much better than government has done historically. And I don't think, as a country, we can afford a hands-off approach of saying, let them be monopolists, because that really would stifle innovation and potentially create other problems.

ACME General: When it comes to that regulatory regime, surely you’ve thought about more than just the need for antitrust considerations. I mean there have to be guardrails on the technology itself, right?

NB: Yes, for sure. I think that's an area where government, or at least the traditional regulatory authorities, are generally always going to be behind the ball; they're catching up to the technology that was significant five years ago. When it comes to thinking about future-oriented guardrails, I don't think they're great. But in more specific government contexts, like the use of AI in defense, I think those government bodies can be much more forward-looking, working with vendors and proactively suggesting guidelines, as the DoD has done: the Defense Innovation Board suggested some very helpful ethical guidelines for the use of AI, which were, I think, further articulated by DIU for the defense contractor base, and that can be pretty effective.

Ultimately, I think this evolution of guardrails will work best through iterative discussions involving not big regulatory authorities, but the more specific parts of government that really understand the application of the technology to their domain, together with vendors, with nonprofits, and with leading academic thinkers. The kind of process the Defense Innovation Board went through is a healthy one, and it's also a learning process; we'll learn as we go. But investing in that early is incredibly important, and it's likely to be much more effective than waiting for the FCC or FTC to solve the problems of future technology guardrails.

ACME General: Do you think we'll reach the point where we can rely on culturally enforced norms, instead of legislatively or legally enforced ones, where industry actors hold each other accountable for good behavior?

NB: What's interesting is that we are a values-driven species, and when the values are sufficiently important, we legislate around them. I think there are some areas where it is going to be important to legislate. Legislating around potential future technology development is a hard thing, though not impossible. For example, we can say we never want a technology that does a specific thing, like launch a kill order, and potentially be grossly mistaken, with no human involved.

There are certainly very specific things we can say, but technology is too nuanced, and too future-focused, for legislation to be really effective on its own. I think the future-focused stuff is better dealt with through iterative discussions with the actual users of the technology and the public constituents who care a lot, including government. I just think it's hard to legislate about the future too effectively.

ACME General: Yeah, I think you're probably right there. I want to ask you about the mission-driven ethos that you bring to Rebellion Defense, especially in light of new geopolitical realities. Have you sensed a shift in the overall conversation around defense, innovation, and national security, given the changing geopolitical realities and the wake-up call that we all received a month and a half ago when Russia tore up the rule book of international norms and invaded Ukraine?

NB: For sure, and I think it actually started earlier, as China adopted a much more aggressive foreign policy, with its actions in Hong Kong and in India. It became increasingly clear that the rules-based international order that we had all, I think, benefited greatly from, and really taken for granted, was not going to define the decades ahead, and that we were in a much more unstable and dangerous world of a very tech-centric great power competition. It's interesting: a lot of the early discussions about the disconnects between the tech world and the national security community started, understandably, in the wake of the Edward Snowden revelations, and there was a lot of distrust that resulted from those. But I think today the attitudes are much more informed by the instability and war that we're seeing in the world, and the importance for democracies of coming together and making sure that we stay ahead both militarily and economically.

And I think there's been a clear recognition, both in the private sector and at the senior levels of government and defense, that we're in a geopolitical game today where the great power that innovates most rapidly and adopts innovations most effectively will be dominant. There's been a growing realization that it is critical that the US not lose its status as the leading scientific and technological innovator in the world, and the military and economic dominance that have come with that. We have a very interesting strategic competitor in China, one that's much more economically potent and sophisticated than the Soviet Union was, and I think there's a realization that that's the core game we're playing. That's the long-term game we're playing, and it's going to be a multi-generational game.

And it's been great to see the senior levels of national security organizations embrace that and understand it. It has not yet permeated the organizations more broadly. I feel generally confident in our ability to innovate most rapidly as a country, particularly given our recent rededication to investing more aggressively in basic research through organizations like the NSF and DARPA, and there's some important legislation pending, USICA and the CHIPS Act, that will, I think, fundamentally reinvigorate the basic research that really gave rise to Silicon Valley in the first place and helped create the semiconductor, the computer, and the internet. We also clearly have an extraordinarily innovative entrepreneurial sector. I feel generally confident, but I want to make sure we get these bills passed.

I should say we are behind in a few key areas. We're behind China in visual intelligence, and we're behind in 5G, and I think there are a lot of lessons to be learned about how we fell behind in those areas and what we could do better going forward. But what I'm quite worried about is the second part of what I said: not just innovating most rapidly, but adopting innovations most effectively. In the private sector, I see a just ferocious competition among companies to adopt new technologies, and particularly new digital technologies, most rapidly and to change the way they do business. Those that do, win; those that don't often go out of business. The much higher turnover in the S&P 500 is one kind of measure of how turbulent these times are, reflecting that kind of competition.

But when I look at the DoD, and to a lesser extent the IC, they're not built to adopt digital technologies rapidly, to test and innovate, to take risks in the way that the best private sector organizations can, and the successful ones have to in order to survive. Historically, in the US and in most countries, you get the most innovative adoption of new technologies in the military after a major crisis: after Pearl Harbor, after Sputnik. I don't think we can afford to wait for a crisis. I think we need a lot more urgency in the bureaucratic middles of these organizations, not just at the senior level.

Because I don't see it at that middle level. E.O. Wilson had a great quote that I think summarizes this problem. He said, "The real problem with humanity is that we have paleolithic emotions, medieval institutions and godlike technology." In national security, our institutions are unusually medieval, or as I said earlier, kind of early industrial; they haven't updated and adapted their practices, and they really need to do that.

ACME General: Can you talk about the importance of non-traditional innovators in addressing these national security problems, and also the challenges that they face, which the primes and other large incumbents do not, in actually turning those innovations into usable tools?

NB: Sure. So many innovations, really transformational innovations, come from small and mid-size companies. If you were to talk to the CEOs of Google, Amazon, and Facebook, they would tell you the same thing: as we've gotten bigger, it's become harder and harder to innovate, and that's why we like to buy lots of small and mid-size companies. And many of these companies are non-traditional. They're started by founders who have an orthogonal view of the world that turns out to be right, and they kind of build that future, and eventually the rest of the world catches up and thinks, "Oh yeah," as in the early days of Amazon, the internet really will take over commerce. I spend a lot of my time in venture capital working with those types of innovators and investing in them.

And those types of organizations have an incredibly hard time trying to sell to the Department of Defense. I think the Department of Defense recognizes that it needs to improve in these areas. Some of the improvements it needs to make are related to big structural things that are not going to happen soon but are critically important: the PPBE process, with its incredibly long resource planning cycles and its commitments to huge platforms that absorb most of the money and lock you in for decades, without the kind of quick, iterative processes and associated acquisition practices that would enable it to do better. There's just a lot of structural stuff that needs to be done. And it's certainly true that small companies are at a massive disadvantage relative to the usual suspects, the traditional primes, who kind of run much of the acquisition machinery.

They're so good at contracting; they know how to win big contracts, and sometimes, independent of their ultimate performance, they consistently win these big contracts. And I think an important realization that's happened in the DoD and in the intelligence community is that the traditional defense industrial base doesn't have this kind of innovator in it, or not many, and doesn't have the kind of top software and AI development talent that you need to bring these new capabilities.

But the venture-backed world does; this is what Silicon Valley is all about. Now, when I say Silicon Valley, I mean that type of tech startup; it could be anywhere in the country. So I think there's a real focus now on how we work most effectively with these companies. There are a bunch of things that I think we need to do differently, but maybe let me highlight a couple of quick things that I would put at the top of my list.

The most important is just more urgency. As I mentioned earlier, a lot of senior defense and intelligence leaders understand that we need to bring in more innovative talent, that we need to work with these kinds of companies, and that we've got to make acquisition easier. But that has not translated to the behavior of the bureaucratic middle of these organizations that actually gets this work done, or more often than not, when it comes to some key technology areas, fails to, or does it in a super delayed fashion. One thing I'd recommend: make contracting officers across the DoD and the IC familiar with all the different contracting authorities they have access to, the OTAs and POTs especially, so they don't default to traditional FAR-based contracting, which is just Ptolemaic; it's so complicated and it slows things down so much.

It's not good for cutting-edge acquisition. Two, hire a lot more tech people, so they can understand these companies and understand how you cooperate with these vendors to develop new capabilities iteratively, instead of through the classical requirements process, where someone who is not a technologist and who won't be an end user of the technology writes down specs that are out of date within months of being put on paper. That's not how great technology is developed. But if you have more technologists in the mix, you're going to do better contracting and better purchasing. A third, I'd say, is that we've got to fix the moral hazard problem, which is that the party that decides whether or not to use commercial software is often not the end customer. It's not the Department of Defense or a particular organization in it.

It's not the IC or a particular organization in it. It's the prime contractor, who has every incentive to choose not to adopt commercial software and to staff up whatever is needed with large service projects of their own. I think that is a moral hazard problem that a lot of customers aren't sufficiently aware of. I've seen it really block out a lot of startups who would do a far, far better job and often have technology ready to go, but the prime wants to keep that business for themselves. And guess what? They're the decision makers, so they do.

Maybe one last thing I'd mention is to change the culture of procurement to reward rapid iteration and to reward risk taking. There's just a lot of hesitancy to take risks; the DoD in particular has an incredibly conservative culture where people don't want to stick their necks out because they think it could hurt their career. In the private sector, there was once a saying that you could never go wrong by buying IBM, that you'd never get fired for buying IBM, until you could. It suddenly changed, and in a lot of areas there are now far better vendors. I think we need to change the culture of DoD procurement.

People should be rewarded if they try out new technologies and figure something out, which inevitably requires iteration, some success and some failure, learning from those, getting better and better, and working closely with vendors. That's just what happens in the private sector. If people do that well, they should be promoted, they should hire more people like themselves, and they should be given leadership positions. And I worry about risk aversion, because if you want to win and stay ahead, you don't just play defense and organize your activities to never make a mistake. Because then you're not going to be reaching for the higher rewards that are going to enable you to win.

ACME General: I'm with you there. I don't think the concept of fail fast has ever occurred to… Well, has rarely occurred to folks in uniform, because of that risk aversion when it comes to tech. You talked about the mismatch between paleolithic emotions, medieval institutions, and godlike technology. How should leaders think about the ability of AI to enhance their decision making without subverting it or taking over?

NB: I think the most important thing for leaders to recognize is that making AI most effective is a process of continuous learning. No one has all the answers, and even if they had the answers for a particular point in time, the answers are always changing. So you have to structure your organization to learn effectively: to learn how to use a particular technology internally, and to figure out which new technologies are going to help you make decisions most effectively. You have to learn and constantly iterate on how to make sure you understand how these black boxes of AI work and remain accountable, and how to make sure the ethical principles that are central to your business are put into place. Right now, all of those areas require a lot of thinking, a lot of great people who come from different kinds of backgrounds, a lot of testing, and a lot of learning.

If one is hoping to plug and play some cool new AI that's going to radically change things, that's unlikely. There are some great new products that can provide pockets of that, but overall you have to be dedicated to learning. So I would point to examples, and I think particularly good examples for those in government: private sector organizations that had to completely change the way they did things with the emergence of the internet, and now with the emergence of AI, and earlier with the emergence of computers, by learning, by testing, by asking the hard questions, by hiring new people who had the right capabilities.

One group of organizations I know well that I would highlight, two groups maybe, are investment banks and hedge funds. Trading is part of investment banks' core business, and much of the investment activity done by hedge funds was being radically disrupted by the combination of high-powered computing and the emergence of AI, and the best organizations leaned into that.

They said, "What do we have to change? What are we going to get wrong? How do we stay ahead? How do we win?" And they changed the way they did things. They put technology people in much more senior positions. They tried things; some succeeded, some failed, but they were on a constant learning curve. Today, if you look at an organization I worked at a long time ago, Goldman Sachs, they've done a terrific job of making themselves an organization that really understands how to use these modern technologies to hold a very significant position in their market.

That's true when it comes to trading and to supporting a lot of the other activities they do. Contrast that, and I think this is a really telling example when it comes to the DoD, with Kodak in the early days of digital photography. Kodak had its own internal DARPA. Kodak invented digital photography, but then failed to adopt it.

There were a lot of innovator's-dilemma-type dynamics at work. They were focused on their current customers, they had more conservative management, they didn't want to mess things up with this new technology, and eventually they lost out big time. Their core business was hurt tremendously, they had to go through bankruptcy, and they learned the hard way what happens when you don't adopt new technology fast enough.

And I think in defense, you only learn how far ahead or behind you are in moments of conflict, as we're learning right now from Russia. It's really important for us to follow more of the Goldman Sachs path, and that of many other organizations, Walmart is another great example, and stay on the edge of constant learning.

ACME General: Well, thanks, Nick. This has been enlightening and we’d love to have you back.

NB: Anytime. Thanks so much for having me.

ACME General: Thanks again to Nick Beim for joining us on this episode of Accelerate Defense.

If you enjoyed today's episode, please rate and review Accelerate Defense on Apple Podcasts; it really helps other listeners find the show. And follow the series today wherever you get your podcasts, so you get each episode in your feed as it comes out.

Accelerate Defense is a podcast from ACME General Corp. Our producer is Isabel Robertson. Audio engineer is Sean Rule-Hoffman. Special thanks to the team at ACME. I’m Ken Harbaugh, and this is Accelerate Defense. Thanks for listening.
