Oct. 9, 2024

Why AI Isn't Where We Thought It Would Be | Itamar Novick from Recursive Ventures

In the latest episode of Understanding VC, Itamar Novick from Recursive Ventures examines the transformative potential of artificial intelligence (AI), likening the current moment to the late-1990s tech boom. Itamar highlights AI's ability to enhance productivity in sectors like consumer services and enterprise operations, though it may also displace certain jobs. He stresses the need for innovative applications in areas such as search, fraud detection, and cybersecurity, while discussing the technology's current limitations and regulatory challenges.

🕰️ Timestamps:

00:00 - Introduction

00:50 - Opportunity in AI

02:39 - Over-Investment in LLMs

05:37 - Impact of AI on Jobs

07:19 - Value Creation in the Application layer

10:32 - SaaS evolving into AI Agents

12:48 - Exaggeration of Capabilities in Gartner Hype Cycle

19:22 - Human Brain vs LLMs

21:53 - Cost of doing Search is High

27:42 - What is Accelerated Computing

31:42 - Limitations of LLMs

36:57 - LLM progress curve is Plateauing?

38:49 - Opportunity for VC in AI Space

43:15 - Will future Startups be High Margin?

44:25 - Conclusion

📍 About:

Itamar is a solo capitalist and the founder of Recursive Ventures, a pre-seed fund focused on fintech, AI and emerging tech startups. Itamar has been on all sides of the startup table: as a founder and executive, an institutional VC, and an angel investor. He has supported over 50 successful startups, including Deel, Honeybook, Placer, Credible (IPO), MileIQ (acquired by Microsoft), Automatic Labs (acquired by SiriusXM), Tile (acquired by Life360), SafeGraph, and Armory. He’s been recognized by Business Insider as a Top 100 global seed investor. As an operator, he helped take Life360 from Seed to IPO, scaling the business to over $250m in revenue. Before that, Itamar was a founding team member and head of Product at Gigya (acquired by SAP). He holds an MBA from Berkeley Haas and an undergraduate degree in computer science from the Tel-Aviv Jaffa College.

 

🔗 Follow us:

🏠Website : https://understandingvc.com/

🤝🏻LinkedIn : https://www.linkedin.com/company/understanding-vc/

🎧 Spotify : https://open.spotify.com/show/1q7DxW3FEyP7EhH8m0VvAD

🍎 Apple Podcasts: https://podcasts.apple.com/in/podcast/understanding-vc/id1551524895

 

💌 Connect with Rahul: 

🤝🏻 LinkedIn : https://www.linkedin.com/in/rahulthayyalamkandy/

📩 Email : understandingvc@gmail.com

🐣 Twitter : https://twitter.com/rahul0720

 

📍 About: 

Understanding VC is a podcast that provides founders with the knowledge and resources they need to understand venture capital. Our goal is to create the best and most comprehensive resource on venture capital on the internet, so that founders can make informed decisions about their businesses and secure the funding they need to succeed.

Transcript

Itamar: [00:00:00] I think the impact of AI on us, you know, humans and on human society is going to be massive, in the trillions. Investors are just piling onto the same opportunities, and in a way, this is very much akin to what we saw in the late 1990s. What we're looking for is not just value creation, but value creation that stays and keeps generating that value over tens of years, potentially. If you don't know what's going to be successful in the application layer,

Itamar: what you do instead is, you know, focus on betting on all the enablers, because it's more like a sure bet, right?

Rahul: Hi, Itamar. Thank you so much for joining me today. Uh, it's great to record a podcast, in person here in Singapore.

Itamar: Rahul, I'm so excited to be here with you, in person in Singapore. And thank you for having me back on your podcast again.

Rahul: Yeah. So I was listening to, um, couple of interviews, uh, from the All In Summit last week with people like Larry Page and Elon Musk.

Rahul: [00:01:00] Everyone keeps mentioning that they have not seen anything like AI, in terms of the progress it's making, in their lifetime. So this is going to be important, but then where is the opportunity in AI?

Itamar: Yeah, so I have a little bit of a contrarian view myself. I think we've seen a big step function with the introduction of Generative AI.

Itamar: And now we've moved from things like classifiers using, you know, more traditional machine learning and computer vision where we would know how to, for example, differentiate between a cat and a dog or do all sorts of tasks like that into the age of generative AI, where Computer systems, AI, generative AI is basically producing, right?

Itamar: Generating, content, whether it's text, image, or video. and I think we've seen a lot of progress there, but we're actually at a point where I believe we're plateauing a little bit. And even, you know, on that topic, Bill Gates actually more recently said that he doesn't expect GPT 5 to be significantly better than GPT 4.

Itamar: I think we're seeing optimizations, incremental increases, and not necessarily more [00:02:00] step functions. Um, so. That's just my personal view. Uh, but then, you know, more toward what you're talking about, the opportunities. I myself am a firm believer that the majority of value creation is going to happen actually at the application layer.

Itamar: So when you think about sort of the stack of AI, there's obviously semiconductors, a lot of people would throw NVIDIA in there. Then there's, uh, data and machine learning model infrastructure, another layer there. Then obviously there's LLMs. Yeah. And then on top of all that stack we'll have applications built.

Itamar: So I believe the vast majority of opportunity is actually at the application layer.

Rahul: But, uh, the majority of the investment so far has gone into the LLMs, right? Why is that then?

Itamar: Oh, I think it's a misallocation. I think it's a mistake. It's actually easier to pour money into NVIDIA and big LLMs. It's investing in picks and shovels. [00:03:00] If you don't know what's going to be successful in the application layer, what you do instead is, you know, focus on betting on all the enablers, because it's more like a sure bet, right?

Itamar: You're not taking a lot of risk. So I think it's herd mentality, investors just piling onto the same opportunities. And in a way, this is very much akin to what we saw in the late 1990s, where, you know, Cisco was valued at trillions of dollars because they were the ones that were going to build the infrastructure for the internet.

Itamar: Cisco and other sort of network providers. But what we ended up seeing then, which I think is very likely similar to what we're going to see in the future, is that the Ciscos of this generation are going to rise in value and then, you know, go down in value significantly, like we've seen in 1999, while the applications built on top of, you know, that infrastructure, applications that in the internet era were things like Google

Itamar: and Facebook, the guys building the applications, [00:04:00] they're actually going to be the ones that create long-lasting value. So I think we're seeing something very similar with this cycle.

Rahul: Okay, so the application layer is the opportunity. Specifically in which sort of, maybe, industry or sector?

Rahul: Yeah. Or what sort of applications?

Itamar: this is where I'm in line with a lot of the other, great leaders that, you know, were in the All In Summit, where I think the impact of, AI, over us, humans and over the human society is going to be massive. In the trillions. Okay. So I think it's going to impact everywhere.

Itamar: It's going to impact consumer services, and it's going to make a significant impact on how businesses and enterprises operate, right? In the enterprise, for example, in each department, whether it's sales or marketing or operations or customer support or finance and so on and so forth, we're going to see significant penetration of AI-based capabilities.

Itamar: In some cases, we're going to [00:05:00] see AI significantly enhancing the productivity of humans. In other cases, we're going to see AI potentially replacing humans in some of their activities and maybe even eclipsing humans. So, I don't think we have a clear enough answer on how big of an impact we're going to have in each one of these.

Itamar: But some of the early indications we're seeing are pretty impressive. For example, with customer support, we've seen what Klarna has been able to do with the help of OpenAI: basically resolving more than half of their incoming customer support tickets with AI, to the satisfaction of their customers.

Rahul: Yeah. Other than customer support, what other industries you think are like, you know,going to be replaced or what sort of jobs are going to be replaced soon?

Itamar: Yeah. let's actually also address consumer. I'll give you a few, Use cases, maybe, that I think are going to be interesting. So, obviously, there's a big race now, for search.

Itamar: Search is the best business on the planet, right? [00:06:00] Google has basically built this massive empire based on search. And that's why we're seeing companies like Perplexity and even OpenAI chasing down market share in search, because that's going to be a huge business, right? Similarly, on the consumer side, we know we're going to have a slew of personal assistants.

Itamar: And Potentially even robots that are going to do, you know, tasks inside our house and so on and so forth. Those things are going to be enabled by AI, if not today, in the next 10 to 20 years. moving more into sort of enterprise, sort of B2B use cases, which is what we focus on more at Recursive Ventures, the venture firm that I run.

Itamar: We are seeing a lot of opportunities around workflows in the enterprise that are very data-rich. Accounting, FP&A, as an example. The ability to engage with customers in various ways, whether it's replacing SDRs, sales development representatives, in what they do, whether it's automating things like fraud detection and cyber [00:07:00] security using AI.

Itamar: I think those are some of the opportunities that are potentially in hand. While other opportunities where there's less data involved and require more human discretion and potentially even human intuition are going to be harder for AI to replace in the near term future.

Rahul: Yeah. Yeah. Consumer use case. I mean, there is not an element of my job that is not done with the help of, these tools.

Rahul: Actually, I used Perplexity to prepare for this. Um, yeah, you mentioned that most of the value creation is going to be in the application layer, right?

Itamar: I believe so.

Rahul: Does that also mean most of the value capture will be there or that it will be separate?

Itamar: Yeah, I believe most of the value is going to be captured in applications for consumer and enterprise.

Itamar: It's not all, but most. And also, it's a question of, what type of value will be captured. Is it going to be a long lasting value or sort of a [00:08:00] shorter lasting value? And that goes back to my example with the Ciscos of the world, where obviously Cisco is still a very successful and significant company.

Itamar: But, what we're looking for is not just value creation, but value creation that stays and keeps generating that value over, tens of years potentially.

Rahul: So value creation not always equals value capture, right? Yeah,

Itamar: that is correct.

Rahul: Yeah. Good point

Itamar: though. Huh? Good point, Rahul.

Rahul: No, I was thinking about it earlier, uh, in preparing for this, because, you know, there are industries, uh, where, like with Nvidia, uh, the chip manufacturing, uh, captures very low value in comparison to the chip design by Nvidia.

Itamar: Right. Manufacturing versus chip design. Yep. Correct. Right.

Rahul: Yeah. Oh, yeah. One other point that I wanted to really discuss was like, how is a user interface going to change? Because until now we were using a lot of tools, right? [00:09:00] And this is more like it's actually doing the job for me. And I think people are going to call it AI agents or something.

Rahul: So how do you think the interface is going to change?

Itamar: That's a very, very interesting question. And I don't think I have... I don't know what the future holds. So my answer here is not going to be a complete one. But I'll tell you what I do feel. I do feel that the existing interfaces that we're seeing, that are really chatbot-based, are really not going to be the end-user experience that we're going to see with a lot of applications.

Itamar: I think we're going to see a mix of conversational back and forth, chat, um, but we're also going to see other types of interfaces that we're more familiar with, graphical interfaces, toggles of all sorts. And really, what I think we're going to be able to do with generative AI, because generative AI can also create user interfaces, is we'll have sort of ad hoc, dynamic interfaces that enable [00:10:00] humans to control AI, humans in the loop, in the most productive and effective way for the point of control that they need to influence at that point in time.

Itamar: So sometimes the best way to feed tokens into these, you know, uh, LLMs is by inputting a lot of text. Sometimes it's by clicking a button or choosing between a few different options. And I think what the AI is going to be able to do is create dynamically the right interface to be the most productive with the human that's guiding it toward the right path.
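To make the "dynamic interface" idea concrete, here is a minimal sketch in Python. It assumes a hypothetical llm_complete() helper standing in for whatever model API you use, and an illustrative JSON control schema; neither is any specific product's API.

```python
import json

def llm_complete(prompt: str) -> str:
    # Placeholder for a real LLM call (hypothetical); returns a canned spec here.
    return ('[{"type": "choice", "label": "Tone", "options": ["formal", "casual"]},'
            ' {"type": "toggle", "label": "Include pricing"}]')

def propose_controls(task_description: str) -> list[dict]:
    """Ask the model to propose the control surface a human needs right now,
    instead of hard-coding a chat box for every interaction."""
    prompt = (
        "Return a JSON list of UI controls for the task below. Each control has "
        "'type' (text_input, toggle, or choice), 'label', and optional 'options'.\n"
        f"Task: {task_description}"
    )
    spec = json.loads(llm_complete(prompt))
    allowed = {"text_input", "toggle", "choice"}
    # Validate before trusting model output: keep only control types the app can render.
    return [c for c in spec if c.get("type") in allowed]

print(propose_controls("Draft a sales follow-up email"))
# Instead of one long free-text prompt, the human gets a tone picker and a pricing
# toggle: the "ad hoc, dynamic interface" described above.
```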

Rahul: Yeah. SaaS industry is going to sort of morph into this AI agents industry, right? Otherwise they die.

Itamar: yes and no. Okay. I think there's a couple of things happening all at the same time.

Itamar: So, yes, I think we're going to see significant disruption of SaaS companies with AI-first, next-generation SaaS companies, essentially.

Itamar: But, you know, I also think that we might be exaggerating, especially in the short and maybe even in the medium term, [00:11:00] in terms of the ability of AI to cover all the use cases, all the different workflows that people have in their daily lives or their jobs. I think there's still a point for SaaS as it is today to help us as humans do the things that we need to do.

Itamar: And then, kind of taking it one more level deeper, um, I would say that we're going to see a lot of existing SaaS companies, incumbents, if you will, uh, who are going to integrate AI into the existing capabilities of their software in very smart ways and kind of mesh between the two, right? And at the same time, we're going to see startups potentially emerge with a completely different approach to solving the same problem, with sort of an, uh, AI-native, uh, approach and interface.

Itamar: We're also going to see that, but I think what's most interesting, especially for venture capitalists, is actually focusing on the use cases. that were not really [00:12:00] enabled before AI, right? Use cases that you couldn't have imagined before AI. And when you think about the parallels, it's actually pretty similar to what happened in mobile, where, like, a perfect example is, Webvan in the late nineties, right?

Itamar: It's like, oh, you're going to get your groceries delivered to you, right? Obviously that didn't happen; we were 15 years ahead. But when we combined mobile and payments and very smart delivery networks, suddenly we can have Webvan, and it's called Instacart. But it took 15 years to figure that out.

Rahul: I think Uber or Grab is a perfect example. Without Google Maps, it's not possible, right?

Itamar: Yeah, Google Maps and people with always on connected smartphones in their hands.

Rahul: Yeah, good point. And you mentioned some of these capabilities are exaggerated. So, uh, the one, uh, sort of thing that I keep track of is obviously the Gartner hype cycle, [00:13:00] and I have a copy of the AI version of it here.

Rahul: What are the things that are, like where we are exaggerating in terms of capabilities? AI is at the peak of the hype cycle at the moment.

Itamar: I'd say in Silicon Valley, we're past the peak. We're in disillusionment. we're like, we're understanding that this thing has much more, many more limitations than what we initially thought it would have.

Itamar: Yeah.

Rahul: I'm just curious about the, a few other things here, in the hype cycle.

Rahul: What are things like ModelOps? Edge AI?

Itamar: Yeah, so ModelOps is really part of the infrastructure that you need to be able to do the sort of technical operations of delivering the outputs of models to all sorts of use cases, to consumers and in businesses. There have been heavy investments in that space, similar to the investments that we've seen in semiconductors and in the LLMs themselves.

Itamar: And when you're over-investing in infrastructure before you even have the application, then [00:14:00] obviously that's a miss, a misallocation of the investment. And probably some of your viewers have noted, uh, Sequoia's $600 billion infrastructure investment article that came out more recently, I think it was a couple of months ago.

Itamar: And I think that actually does a good job of describing what we've seen with tools like MLOps. And then what, what was the second category you wanted to talk about? The second thing

Rahul: is, Edge AI. What is that?

Itamar: Edge AI is really all about the ability to deliver generative AI models in,an edge computing situation.

Itamar: So that would be on your phone or on an IoT device that's connected to the cloud, where there are significant limitations on compute, right, and accelerated computing, which is what you need to actually do inference. So that piece is actually very early. Like, we're not yet at the point where we know how to deliver very good models at the edge on some of those clients.

Itamar: And I think one of the interesting tests that we're going to [00:15:00] see, and obviously there's a lot of, speculative thoughts about that is what Apple is going to do with the iPhone Yeah.

Rahul: Yeah.

Itamar: Yeah. So that's going to be intriguing. I just got the beta, I think, a few days ago with the first version of some of the AI features coming out.

Itamar: And I just can't wait to see if this thing is going to be another version of ChatGPT or maybe something bigger. We'll see. Yeah.
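One concrete lever for the edge constraint Itamar describes is shrinking the model so inference fits on-device. A hedged sketch using PyTorch dynamic quantization; the tiny network here is a stand-in, and real edge stacks typically export to a specialized mobile runtime afterwards.

```python
import torch
import torch.nn as nn

# Stand-in for a small model; real edge deployments quantize much larger networks.
model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 512)).eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8  # 8-bit weights: smaller, faster on CPUs
)

x = torch.randn(1, 512)
with torch.no_grad():
    print(quantized(x).shape)  # same interface, a fraction of the memory footprint
```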

Rahul: Okay. One last thing. I was surprised to see autonomous vehicle. It's along the slope of enlightenment, but it's still like five to 10 years away.

Itamar: Yeah. I think Gartner might've gotten that wrong, but we're still seeing a lot of limitations.

Itamar: I'm an early investor in a company called May Mobility, uh, which is, I think they're a unicorn or almost a unicorn. They're pretty far along. They drive minibuses in cities in the U.S., completely autonomous; there's no driver, there's not even a wheel, um, and they drive people around, and it's actually free. You can hop on one and just drive around town.

Itamar: [00:16:00] It's a fixed route, and along the route they actually have sensors that communicate with the minibuses and can help them figure out what's going on. So it is starting to work in some areas, but it's still the last-mile problem of the crazy number of edge cases that we just don't have enough data about, which makes autonomous driving just not as reliable as it should be.

Itamar: I've been in the Tesla beta forever, and they've done a great transformation with their ability when they introduced what's called imitation learning by actually using generative AI techniques to replicate how humans drive based on a lot of videos of human driving. It's improving. It's far from perfect.

Itamar: Would I rely on it every day and kind of go to sleep while the thing is driving? Absolutely not. So I think that's the gap that, you know, Gartner is talking about.

Rahul: But, uh, okay. So the one thing I keep [00:17:00] thinking is, okay, human drivers are still better than the best version of an AI driver. Is that true or not at the moment, you think?

Itamar: I think we have different expectations from humans than we have from AI.

Rahul: If you've driven in India, maybe...

Itamar: Yes, for sure. I have, and I still miss the beeps everywhere. It's like, I'm here, I'm here, you know, with the beeps. It's funny. Um, I think we would give humans way more slack, and we're not going to give the AI slack.

Itamar: The AI has to be really so much better, so much more trustworthy, than what we're expecting from humans. And because it's not there, basically the bar is higher, right? And I think that's part of why this is still a challenge, and a couple of years out for sure.

Rahul: Yeah. But don't you think, if you don't allow experiments, like what San Francisco or many cities in the US are doing, if you don't allow it to run in actual,

Rahul: uh, sort of, scenarios, that's the only way to improve, right?[00:18:00]

Itamar: Very likely.

Itamar: Uh, like anything in AI, the way to improve is to create a massive amount of data that actually has significant overlap with the type of use cases you're trying to cover. The issue is the amount, the number of edge cases, use cases in driving is actually endless. So you would need, Almost an endless amount of data to be able to cover all the things that could happen while you're driving.

Itamar: While the way the human brain works, it enables us to leapfrog on no or very little amount of data to actually solve problems. And that is actually, one thing that very nicely illustrates the gap between how humans perceive AI today and what AI can actually do. Generative AI is not like the human brain.

Itamar: It's a statistical model that knows how to predict a word or an image or whatever, based on a [00:19:00] lot of data it's seen before. And we think about it, some humans think about it more as like AGI. This thing has context. This thing is smart. It's not. It's not. If it doesn't have a lot of data that directly relates to what it's trying to do now, you're going to get a much, much lower quality response because there's no intuition.

Itamar: There is no context. This is not a human. This is a statistical model.
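To make "a statistical model that predicts the next word" concrete, here is a minimal greedy-decoding sketch, assuming the Hugging Face transformers library and a small causal LM such as gpt2. It repeatedly picks the highest-probability next token; there is no step where the model reasons about whether the continuation is true.

```python
# pip install transformers torch  (assumed environment)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("Autonomous driving is hard because", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits           # a score for every vocabulary token
        next_id = logits[0, -1].argmax()     # greedy: the most statistically likely token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
# The continuation reflects what was frequent in the training data,
# not the result of reasoning about driving.
```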

Rahul: So, like, how is the human brain different, uh, from how LLMs work?

Itamar: Uh, so I'm probably not the best person to answer that question. I'm not a researcher, I'm a VC that invests in, in, in AI applications. So I can't comment on the human brain as much, but I can comment on what AI, LLMs, specifically generative AI, can do. And, and I think, again, it comes down to a couple of things, and it's about setting expectations, like everything in life, you know?

Itamar: So again, humans [00:20:00] expect this thing to be sort of this sentient general intelligence thing, and it's not. And the types of issues that we're seeing are stemming from a few different things. First of all, unlike traditional, pre-AI systems, it's actually not entirely deterministic, right? We're expecting to put input X in and get output Y out of computer systems.

Itamar: This thing, if you give it the same input with a very slight tweak, you're suddenly getting a completely different output because it correlates to, you know, some other data point that it has. So it makes it a little bit unpredictable for humans on one hand. Then obviously you've got all those issues around hallucination.

Itamar: So hallucination is this: again, you have a statistical model that's just replicating, outputting based on statistics everything that it says it has seen. It doesn't have the context of whether this thing makes [00:21:00] sense or not, whether it's real or not.

Itamar: You can have this discussion with ChatGPT, there's kind of a known blog post about that, where the thing argues with you for hours that strawberry only has two R's, right? And why? Not because it has actually spelled out strawberry and counted the number of R's, which is what you would expect a human to do.

Itamar: But because it has all this data, all this text, and there is statistical significance somewhere in there that says that strawberry has two R's, and that's what it's going to output. So it's actually not that simple, deriving the results in the same way that we expect from humans. And that's why we're not getting the results that we want a pretty significant number of times.

Itamar: Very disappointing actually.
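The strawberry anecdote comes down to tokenization: the model predicts over subword chunks and never spells the word out letter by letter. A toy Python illustration (the subword split shown is hypothetical, real tokenizers differ):

```python
word = "strawberry"

# What a human (or a trivial program) does: spell it out and count.
print(word.count("r"))  # 3

# What an LLM "sees" is closer to a few opaque subword IDs, e.g. something like:
hypothetical_tokens = ["str", "aw", "berry"]  # illustrative split, not a real tokenizer's output

# The model predicts the next token from statistics over such chunks; there is no
# step where it inspects individual letters, so "how many r's?" gets answered from
# whatever co-occurrence patterns exist in the training text, not by counting.
```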

Rahul: Yeah. Yeah. you mentioned Search is going to be a big business, right? Using [00:22:00] LLM. But then I also, um, I was going through your deck and then I saw that it's not a good business as in the cost of doing search, especially for just, if you're searching for information, the cost of doing it is very high.

Rahul: So Perplexity is not a good business?

Itamar: thank you for looking at my, uh, good, bad and ugly of, uh, AI investing in Silicon Valley. the point I was trying to make there, which is very similar to what you said is that we've actually been able over the last two to three years to realize an order of magnitude decrease in cost when it comes to inference, right?

Itamar: Which is great. The issue is we have one or two more orders of magnitude to go for us to get to a point where we can enable, from a unit-economics standpoint, uh, many of the use cases. And search is a classic example, where, you know, with search today, you basically have this [00:23:00] very wide, healthy margin from a unit-economics standpoint when you're using classic search tools.

Itamar: But if you let LLMs do the search, then you're running into a big problem. And part of it is also tied to the underlying architecture of transformers, where you have two cost centers in delivering LLMs. One is the training, the accelerated computing. I call it the NVIDIA tax, right? The difference in cost between GPUs in the cloud, specifically the GPUs that are used to train, and CPUs is pretty significant.

Itamar: It's just crazy, right? And then the second cost center is the inference, right? And what happens is, the more parameters you put into training, and the more data you put into inference, in tokens essentially, the cost does not go up linearly. It goes up exponentially or more. So that [00:24:00] architecture is actually driving the opposite result from what we wanted, right?

Itamar: We want to feed it more tokens. We want to feed it more data, but then it's exponentially taking us longer to get the results that we want, and it's costing us more. And that is a pretty significant gap that we have not yet been able to close.
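A rough back-of-the-envelope for why longer prompts blow up inference cost: in a vanilla transformer, self-attention work grows roughly with the square of the sequence length (the speaker says "exponentially or more"; quadratic-or-worse is the usual approximation). A sketch with illustrative, made-up constants:

```python
def attention_cost(seq_len: int, d_model: int = 4096, layers: int = 32) -> float:
    """Very rough proxy for per-forward-pass attention work in a vanilla
    transformer: O(layers * seq_len^2 * d_model). Constants are illustrative."""
    return layers * (seq_len ** 2) * d_model

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} tokens -> {attention_cost(n):.2e} units of work")

# 10x more context tokens -> roughly 100x more attention work, which is why feeding
# the model "more data" at inference time hurts unit economics instead of helping.
```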

Rahul: Yeah. And, um I also read again from your deck that the applications built on top, if it's using open source model plus serverless architecture, they can reduce the cost by 89%.

Rahul: Yeah.

Itamar: Yeah, order of magnitude, exactly. That's what my companies are realizing today. So yeah, with the open-source models, obviously, you know, it's open source, so you don't have to pay the OpenAI tax, if you will, or API costs that cost as much. Um, another kind of big trend there, which is related, is that with these very large, hundreds-of-billions-of-parameters [00:25:00] models, the bigger the model gets, the more it costs to run inference with it.

Itamar: So what a lot of companies in Silicon Valley are doing now is they're increasingly using smaller open-source models and then doing significant fine-tuning, so post-training and iterative training sessions with data that is either industry-specific or use-case-specific.

Itamar: And the results that they get from doing that work, in terms of being able to cover those specific use cases and workflows with a high quality of coverage, are actually much better than what we're seeing with a big generic LLM. Not only are the results better, it's also significantly cheaper. And that's why we're seeing this rise of open-source, smaller models and more advanced infrastructure driving costs down by an order of magnitude.
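A minimal sketch of the "small open-source model plus fine-tuning" pattern, assuming the Hugging Face transformers, peft, and datasets libraries. The model name, data file, and hyperparameters are placeholders for illustration, not a recommendation of Itamar's portfolio companies' actual stacks.

```python
# pip install transformers peft datasets  (assumed environment)
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"   # placeholder small open-source model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token  # needed for batching
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA: train a few million adapter weights instead of the whole model,
# which is what keeps industry-specific fine-tuning cheap.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Your domain data, e.g. past support tickets or accounting workflows (placeholder file).
data = load_dataset("json", data_files="support_tickets.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments("ft-out", per_device_train_batch_size=4, num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```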

Rahul: Yeah. Meta is doing open-source sort of modeling. Why is that? How does it [00:26:00] help them? If you look at, okay, this usually happens: when Google was initially competing with Microsoft, they gave away a lot of things for free because it helped, you know, feed their ad business.

Rahul: Is it something along those lines?

Itamar: Yes and no. So I have two theories. I'll give them to you really quickly. Let's start with... it's not exactly my theory, but I think it's the fun one because it's a conspiracy theory. The conspiracy theory is that Meta is trying to erode away all the value generated in AI in the future by open sourcing everything.

Itamar: So there won't be a rise of a significant competitor. Like an open AI, which basically captures significant value from the ecosystem. That's sort of one take on it. The second take on it, which is interesting is, um, let me actually pose a question to you, uh, there, uh, there, interviewer. where do you think the vast majority, like 90 percent of value today is generated using AI?

Rahul: Uh, NVIDIA. Mm mm. [00:27:00] No?

Itamar: I don't know. Predicting what you want to read next on social media and optimizing which ads you're going to get served. Yeah. That is the number one use case and value creation in AI, and it's not even new.

Rahul: Yeah, it's been, that's how they really optimized and improved the business over the years, both.

Itamar: So, Meta, by open sourcing AI models and building communities of data scientists to leap forward with those, can actually leverage those same tools to enhance their advertising business and find the best ad for you. That is another take on it.

Rahul: Yeah. Um, I don't know the context of this, but then, you also talk about accelerated computing, and how that's

Rahul: insanely expensive. what is accelerated computing?

Itamar: Accelerated computing is GPUs, the NVIDIA tax. It's the cost of [00:28:00] training models.

Rahul: Okay. Okay.

Itamar: Yeah. just in general, we have CPUs, which are, uh, computes. And they typically, don't operate in parallel.

Rahul: Yeah.

Itamar: Well, accelerated computing, the architecture of the chips, the GPUs, is all about computing a lot in parallel.

Itamar: And that's what you need to train models. yeah.
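A tiny illustration of the parallelism point, assuming PyTorch and, optionally, a CUDA-capable machine: the same matrix multiplication runs on the CPU and, if available, on a GPU, which executes the many independent multiply-adds in parallel.

```python
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.time()
_ = a @ b                                  # CPU: a handful of largely sequential cores
cpu_s = time.time() - t0

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()
    t0 = time.time()
    _ = a_gpu @ b_gpu                      # GPU: thousands of threads working in parallel
    torch.cuda.synchronize()               # wait for the asynchronous kernel to finish
    print(f"CPU {cpu_s:.3f}s vs GPU {time.time() - t0:.3f}s")
else:
    print(f"CPU {cpu_s:.3f}s (no GPU available)")
```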

Rahul: Okay. You invest in B2B businesses, and you make this argument that most of the POCs now, especially with enterprises, fail. Um, why so?

Itamar: Well, um, so yeah,a recent data point came up and, there's always potentially some contradictory data.

Itamar: And the data point was that CIOs and CTOs at bigger organizations are saying that approximately 90 percent of POCs with generative AI are failing to reach production. Now, don't freak out, because POCs fail sometimes, and probably 40 to 50 percent of POCs would fail anyway. But [00:29:00] obviously 90 percent is a big number.

Itamar: And, I think there's several reasons why that is happening. One is obviously the limitations of the technology. Which we started touching upon. And actually, my deck has a list of issues that we know the transformer architecture exhibits, right? So, you know, the technology has limitations. The second thing is, we've started talking about that as well a little bit, is human expectations.

Itamar: We expect this thing to be an AGI. Well, it is not. Yeah. And that, that, that makes it hard. And then the other reasons are like more classic reasons of, tech adoption, around, uh, change management. Obviously, a lot of people are concerned about, oh, yeah, it's going to take our jobs, right? So, um, Switching costs.

Rahul: Maybe you can define it...

Itamar: As switching costs or not, I think this is more about human nature. Okay, if we're going to replace Rahul with an AI which is going to ask Itamar the best [00:30:00] questions, Rahul might freak out. He might not want to adopt this piece of AI software that's being thrown his way, right?

Itamar: He might resist, and it's just human nature. But I think with this cycle of innovation and technology, it's way more in your face than in previous cycles, because up until now we've been moving people away from doing farm jobs and automating blue-collar jobs, but now we're automating knowledge workers, right?

Itamar: We're automating white collar jobs, and that's scary for many people, right?

Rahul: Okay, is it really though? Because, okay, I maybe can't imagine this, but what would the people who first started using computers have felt? Like maybe 40, 50 years ago, they would have also felt the same, right? Like suddenly, I was doing pen and paper, and then you have this tool that can do it much faster.

Itamar: Well, you realize that when those PCs started coming out, everybody was like, nobody's going to, nobody's [00:31:00] ever going to have these things in their homes. There's actually a famous interview with Steve Jobs somewhere in the, I don't know when, where he's like, everybody's going to have one of these in their homes and they're going to use it to do shopping, blah, blah.

Itamar: And everybody was laughing at him. It didn't make sense. Um, so I think early on they didn't really understand the power of computers. And I think AI is actually very similar in how it's going to impact our society to the introduction of personal computing. Um, but. Now we're much smarter about this, and we understand that this thing can replace a lot of the things that we do, and I think that creates panic, and there's a lot of unknown.

Itamar: So that's a big part of the issue.

Rahul: Yeah. You mentioned earlier that the limitations of LLMs are one of the reasons for the failure in adoption. Could we, like, go deep there? In the sense... double click? Yeah. Double click is the word a lot of people use.

Itamar: Yeah. it's a Silicon Valley thing.

Itamar: Obviously very digital. Yeah. So there are [00:32:00] quite a few. Um, obviously there are limitations around data: getting data, privacy, using user data, HIPAA compliance for health-related data. That's actually a big barrier because, again, we have already established that these LLMs are just very data-heavy.

Itamar: And if we're going to have all those kinds of data walled gardens, that's going to be an issue for actually delivering the quality that we want outputted from LLMs. So privacy and privacy concerns are one barrier. Obviously hallucinations: it's just the way this thing is architected, it just makes up stuff.

Itamar: Yeah. And that doesn't really exist. And it doesn't have contextual mechanisms like the human brain to actually say, oh no, wait. This output doesn't make sense because there's no, there are no dragons out there, right? the LLM be led, depending on what data it's being fed, to believe that dragons are real.

Itamar: It just depends on what data you feed the [00:33:00] thing, right? so hallucination is a big issue and it leads us to a point of mistrust, right? Um , this thing also doesn't have guardrails. let's say we take the strawberry example I had before, and you put this thing in front of one of your end customers, right?

Itamar: Of your customers. It's going to argue with your customer about strawberry, and it's like, no, it's going to make my brand look really bad, hurt my reputation. I don't know if I can trust it without having a human in the loop. And that's again a big problem, because we're designing this thing to take humans out of the loop, but it's not ready for that.

Itamar: in Silicon Valley, we're all very worried in different ways about the regulators. you have, Administrations all over the world saying that we need to control AI, that we need to inhibit AI. I think that's scary for innovation, and I think it's also very scary for big companies. So if you're a bank, if you're an insurance [00:34:00] carrier, you know, kind of a company operating in a highly regulated environment, then this is a concern that is creating limitations on the capabilities of generative AI.

Itamar: The next one, which I think is really important, is explainability. This kind of adds to trust. There are a lot of companies out there that can tell you today that, oh gosh, we can explain. And even, like, Perplexity, which we threw in as an example, gives you the source of where it got the answer from.

Itamar: But that's actually not explainability. Nobody really knows how to do explainability well because there's a black box in there in that transformer architecture that even our best scientists do not understand. Yeah. So the fact that at the end it has given you a result and now it can attribute where it got that result from does not mean that it can explain how it got there.

Itamar: And actually, we don't know how to explain that yet. And if we can't, we have a black box. And if [00:35:00] we have a black box, and we let the black box make a decision for our lives as humans, that's kind of scary. Um, the last issue that we're seeing, which is big, is overfit. We've talked about this a little bit.

Itamar: Uh, too much data leads to this thing bouncing around, getting out of context. Inputting too much data in tokens, like the example that you described where we have a very long token sequence and it's got stuff in the middle and stuff at the end, and so the LLM picks different parts of it and doesn't weigh them properly in terms of their impact on what the response should be. Again, that sort of generates, um, results that are just lower quality.

Itamar: Yeah. And obviously, if it's not high quality enough results, what is this good for? And by the way, it can't even tell us what is the probability that this thing is right versus wrong, which is another big problem.
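On the guardrails and human-in-the-loop point above, here is a minimal sketch of the kind of gate companies put in front of a model before it reaches an end customer; the llm_reply helper and the escalation terms are hypothetical placeholders, not any vendor's actual API.

```python
def llm_reply(ticket: str) -> str:
    # Placeholder for a call to whichever model drafts support replies (hypothetical).
    return "I'm not sure, but you may be eligible for a refund."

FLAG_TERMS = ("refund", "legal", "cancel my account")  # illustrative escalation triggers

def answer_ticket(ticket: str) -> dict:
    draft = llm_reply(ticket)
    # Crude guardrail: anything touching sensitive topics, or where the model hedges,
    # is routed to a human reviewer instead of being sent automatically.
    needs_human = (any(term in ticket.lower() for term in FLAG_TERMS)
                   or "i'm not sure" in draft.lower())
    return {"draft": draft, "auto_send": not needs_human}

print(answer_ticket("Please cancel my account and issue a refund."))
# {'draft': ..., 'auto_send': False}  -> a human stays in the loop for the risky cases.
```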

Rahul: Yeah, I've listened to a bunch of interviews, even the CTO of OpenAI, nobody knows how this works.

Rahul: It's a black box. Yeah,

Itamar: That [00:36:00] is insane. Yeah. But by the way, I, I don't want to come across as, like, super negative here. I think AI is absolutely going to change the world. It's going to be the biggest change we've seen with computing since the invention of the personal computer. It's just going to take longer and it's going to be more complicated than we think it is today, and that's why we shouldn't get ahead of our skis.

Itamar: We should understand what this technology is capable of delivering now, make the best of it, and then continue to explore and figure out what is the right architecture, the right approach to solving some of the other problems that we might have, and eventually lead the road to AGI, which, you know, is a very, uh, high level term, like nobody even knows how to define AGI, but.

Itamar: I think there will be more and more capabilities that will be unraveled that I don't think the transformer, the LLM technology, can deliver today.

Rahul: Yeah, earlier you mentioned, uh, the Bill [00:37:00] Gates, uh, quote that GPT-5 will not be considerably better. So, um, that means that, maybe, the sort of technology progress curve with LLMs, the current sort of transformer architecture, is slowing down.

Rahul: Plateauing,

Itamar: Potentially. Plateauing. Which, by the way, should come as no surprise. You have technology step functions, and then what you get from there is not necessarily another step function. It's more of an optimization. Yeah. So this is almost as expected.

Rahul: More like our phones nowadays.

Itamar: Yeah, like iPhone 14 versus 15 versus 16.

Itamar: All the same. Sure, there's a slightly better camera, and that's as expected. So I think that is what... you know, I open up YouTube, and there are so many YouTubers talking about AI saying, oh, this thing is moving so fast. It's like, they are assuming that we're going to continue to see a series of step functions.

Itamar: Well. I would assert that we've seen a very significant step [00:38:00] function. Yeah, this is massive, right? But the assumption that those step functions are going to keep coming in spades at that pace is unlikely. And those YouTubers need to get views. So they're trying to convince us that this thing with another 50 billion parameters is going to solve all these problems.

Itamar: Well, likely, that's actually not the right path forward.

Rahul: Yeah. So to summarize, it's like that saying, right? We underestimate what is going to happen in the long term and overestimate things in the short term.

Itamar: That's a hundred percent, like any technology cycle. I don't think this is different. And I think that a lot of people out there who think that this is going to happen so much faster,

Itamar: I think they might be wrong. I'm not saying this is going to take 15 years, but it's definitely not there today. And it's going to take us a while to figure out.

Rahul: Yeah. if it takes us a while to figure out, then as a VC, where is the opportunity? you invest in, uh, specifically B2B, [00:39:00] um, companies.

Rahul: Operating in AI sort of data space, right? So where are you seeing the opportunities?

Itamar: Yeah. I think again, we're going through a significant misallocation phase. Investors are. We're misallocating on timing. We're jumping ahead of ourselves.

Itamar: The Gartner hype cycle, right? We're in, we're in a bubble. And we're also misallocating in the stack. We're putting all this money into NVIDIA, semiconductors, infrastructure, LLMs, but the money, again, is going to be in applications. So that really is at the core of our thesis at Recursive Ventures and how we invest: because we're very early investors, pre-seed and seed, we invest in companies today that are going to change the world five, ten years from now.

Itamar: So that's a significant amount of time in the technology world. And we do that in the application layer. And I think that's where, at least, private capital, venture capital, should flow into. Well, the numbers speak very [00:40:00] differently. Actually, an interesting stat for you, Rahul: data for 2023, and early data about 2024, is indicating that there's actually a decrease in VC investments in vertical applications.

Itamar: That is, you know, negative correlation to what I believe we should be investing in.

Rahul: Yeah. Okay. Just to clarify: you mentioned the stack, that we should be allocating more resources at the application layer, right. So timing-wise, how are we misallocating?

Itamar: The Gartner hype cycle. We think that AI is coming tomorrow, and it's actually going to be another five to ten years.

Itamar: So we're investing hundreds of billions of dollars in semiconductors and tools to deliver trillions of, value in AI applications. And those applications are not there yet.

Rahul: Oh, but this usually happens, right? going back to the dot com thing. I think a lot of companies invested billions of [00:41:00] dollars in internet infrastructure, laying down all the cables and stuff.

Rahul: A lot of them went bankrupt. Um... oh, one final thing. Um, just one: how do you think this is going to change?

Itamar: Oh, that's a massive question. I have so many different interesting answers for you. first of all, there's a couple of things happening at the same time. productivity lift, especially for cutting edge knowledge workers, is on the rise.

Itamar: I have companies in my portfolio where the CTOs are telling me that they're getting, you know, two, three, five X more productivity from a single engineer than they could have before. Are we going to need hundreds of engineers to build software in a world where, you know, AI, uh, is able to build a lot and test a lot of this software for us?

Itamar: Um, same goes for go-to-market functions: marketing, sales, sales development. [00:42:00] Like, we're increasingly seeing AI agents and agentic workflows that are delivering massive productivity or potentially replacing humans there. So are you going to need that, yeah, absolutely, $500 million check from Andreessen Horowitz to take it to the next level, right?

Itamar: For your company? I am not entirely convinced that's what we're going to see. So I believe we're going to see the VC industry potentially shrinking, where we're going to see smaller funds, because of that increased productivity. And I think we're going to be able to build software, AI software, much faster

Itamar: and much cheaper. And the result would be a narrowing down of the sizes of venture funds. And, to be completely honest and blunt about this, I don't think that's what VCs want to see, because they want to grow their AUM and get more fees, but they will have to adapt. And that's also part of why, for us at Recursive Ventures, [00:43:00] we're very focused on staying small, staying scrappy, and focused on the inception stages of companies, versus trying to find ways to inject hundreds of millions of dollars into companies at a later stage when they don't necessarily need that.

Rahul: So what you're saying is, future startups would be a lot more capital-efficient, right? Will they also be high-margin, similar to software?

Itamar: That's an interesting question. I believe that eventually, yes, but it's going to be a little bit of a journey until we get there. Obviously, with what we've talked about, the cost of training, the cost of fine-tuning, the cost of inference, we're not seeing the type of unit economics that we want to see.

Itamar: You know, we're not seeing the potentially 80-plus percent terminal margin profile for companies today, but I think we're very likely to see it in the future, because at the end of the day, AI models exhibit the same type of behavior that [00:44:00] software exhibits, where theoretically you build it once, but the cost of servicing the n-plus-one user trends to zero.

Itamar: So I think we will get there, but we're not there today.

Rahul: Yeah. And if it's capital-efficient and high-margin, then VC should do really well.

Itamar: Yeah, but what if it doesn't need as much capital? It will do well, but not with a 5 billion fund.

Rahul: That has to change, probably. Maybe. That's my bet. Cool. Thank you so much.

Rahul: This is great. Thank you so much for taking the time to do this.

Itamar: Yeah. Thanks for having me. This is always fun. And I hope we can do this again in person in Singapore.

Rahul: Yeah. Probably next year, same time. Yeah. For sure.


Itamar Novick

Founder & Solo Capitalist at Recursive Ventures
