Artificial Intelligence

Will the $1 trillion of generative AI investment pay off?

A tide of investment is pouring into generative artificial intelligence. Will it be worth it?

“That’s the center of all debate right now,” says Sung Cho of Goldman Sachs Asset Management. Investment is flowing into everything from the silicon underpinning the training of artificial intelligence models to the power companies that supply electricity to acres of data centers.

To see where the industry is headed, Cho and Brook Dane, portfolio managers on the Fundamental Equity team in Goldman Sachs Asset Management, met with executives from 20 leading technology companies driving AI innovation. Those conversations — with public and private firms, from semiconductor makers to software giants — indicate that some companies are already generating returns from AI, and some would buy even more AI hardware if they could get their hands on it.

“Our confidence continues to increase that this technology cycle is real,” Dane says. “It's going to be big, as they say.”

But there are also risks. Cho and Dane say it’s possible that the large language models being built by a handful of companies will find they are competing in a winner-takes-all market. The use cases, or killer apps, that fully justify the intense investment are yet to emerge. They also point out that, in a year in which US stock indexes have set successive record highs, no rally in tech stocks ever goes in a straight line. “You get these waves of both investment-digestion and hype-reality,” Dane says. “And the two of them play out across a multi-year horizon.”

Cho and Dane’s insights come as Goldman Sachs Research recently published an examination of the immense amount of investment in AI, featuring interviews with Daron Acemoglu, Institute Professor at MIT, and Jim Covello, Goldman Sachs' Head of Global Equity Research. The report, titled “Gen AI: too much spend, too little benefit?” notes that mega tech firms, corporations, and utilities are set to spend around $1 trillion on capital expenditures in the coming years to support AI.

We spoke with Goldman Sachs Asset Management’s Cho and Dane about the prospects for the industry’s return on its investments, whether this trend is primarily playing out in public or in private markets, and the companies that can hope to capitalize on the AI boom.

How much investment are you seeing in these models? And do you think it's realistic to see a return on investment that justifies that capital anytime soon?

Sung Cho: That’s the center of all debate right now.

Brook Dane: The biggest question in the marketplace right now is: Are we getting a return on the investment? I’m reasonably comfortable that we are seeing that return. And there are a couple of data points I'm looking at that give me comfort.

First: We spent a lot of time on this trip talking with the CFO of a hyperscaler who had just come back from their strategic planning process, where they do their one-, three-, and five-year forward looks. This person talked very openly, though without sharing any numbers, about how they were doing RoI calculations across the clusters where they were deploying GPUs, and how they were finding the spending very accretive from a return standpoint.

Now, this company is already running massive inferencing workloads (using already-trained AI models to reason or make predictions) across their infrastructure for recommendation engines. They're seeing results in the form of increased time spent on their platforms as these models predict the next piece of content to serve.

So for them, the RoI is probably the simplest to calculate: you deploy a cluster, you run a more sophisticated recommendation algorithm, that algorithm leads to more time spent, more time spent creates more advertising surface, and that additional surface drives revenue.
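As a purely illustrative sketch of that chain, the toy calculation below strings those steps together in one formula. The cluster_roi function, its parameter names, and any inputs passed to it are hypothetical; they are not figures from the interview or from any company Dane describes.

```python
# Hypothetical back-of-envelope framing of the RoI chain Dane describes:
# cluster cost -> better recommendations -> more time spent -> more ad
# surface -> incremental revenue. All inputs are made-up placeholders.

def cluster_roi(cluster_cost: float,
                baseline_ad_revenue: float,
                engagement_lift: float,
                ad_load_ratio: float = 1.0) -> float:
    """Return a simple RoI multiple for one GPU cluster.

    engagement_lift: fractional increase in time spent driven by the
        better recommendation model (e.g. 0.05 for +5%).
    ad_load_ratio: how much of the extra time converts into extra ad
        surface (1.0 means proportionally).
    """
    incremental_revenue = baseline_ad_revenue * engagement_lift * ad_load_ratio
    return incremental_revenue / cluster_cost


# Example with clearly hypothetical numbers: a $0.5bn cluster against a
# $100bn ad business and a 2% engagement lift.
print(cluster_roi(cluster_cost=0.5e9,
                  baseline_ad_revenue=100e9,
                  engagement_lift=0.02))  # -> 4.0x multiple on the cluster cost
```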

The second thing comes from following the industry over the long term and from lots of recent discussions with another hyperscaler about its capital spending plans. We know how disciplined this company has always been historically, and it is seeing both incremental revenue pick up and incremental returns on its capital spending. Its CFO is emphatic that they have the money, and that if they could get more GPUs to deploy, they would.

Having known this person for 20 years, and understanding how they approach capital budgets and how they spend their capital, I know they wouldn't be doing that if there weren't a genuine, tangible return they can see in front of them. And they're pretty emphatic.

But it's early, and the other downside is that for these frontier models you can't fall off the front end of the wave. You can't be the fourth frontier model that doesn't spend the incremental $1 billion to make your model better. So for those guys there's a bit of an arms race here, and there's a little bit of a leap of faith embedded in that.

Sung, what’s your view on the RoI question?

Sung Cho: This is one of the most important questions. And that's what's going to dictate the direction of markets over the next six to 12 months at least, and whether tech continues to outperform or not.

Obviously, with any RoI question, you have to understand the scale of what's been invested so far. NVIDIA did $26 billion in revenue in all of calendar year 2022, and it did $26 billion in revenue in this most recent quarter alone. Annualize that quarterly figure and you get roughly $104 billion, so in basically two years NVIDIA has quadrupled its revenue run rate. And if you compare what's being spent on NVIDIA with total cloud capital expenditures, nearly 50% is going into NVIDIA chips.
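As a quick back-of-envelope check on that quadrupling claim, the snippet below annualizes the quarterly figure Cho quotes and compares it with the full-year 2022 number. The two $26 billion figures come straight from his remarks above; the rounding and variable names are only illustrative.

```python
# Back-of-envelope check on Cho's point: $26bn for all of calendar 2022
# versus $26bn in the most recent quarter alone.
revenue_2022_full_year = 26e9   # USD, calendar year 2022
revenue_latest_quarter = 26e9   # USD, most recent quarter cited

annualized_run_rate = revenue_latest_quarter * 4            # ~$104bn
growth_multiple = annualized_run_rate / revenue_2022_full_year

print(f"Annualized run rate: ${annualized_run_rate / 1e9:.0f}bn")  # ~$104bn
print(f"Multiple vs. 2022:   {growth_multiple:.1f}x")              # ~4.0x
```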

So the investment in AI has been massive. And if you think about RoI, the starting point is around the “I,” which has been very, very high.

If you are a bull, the most important thing here, and Brook mentioned this, is that right now there's a race to see who can build the best foundational model (a general-purpose model that can be applied to many applications). That race isn't going to slow down anytime soon. From an RoI perspective, if you look at it over the next one or two years, maybe the RoI isn't great. But if building the best tech stack today gives you a return stream of 20 years, then you can certainly justify the investment.

On the flip side, NVIDIA thinks they're going to get one million times more efficient at processing AI over the next decade. That's one million times on the same kind of chip infrastructure. And also, when you speak to them, you understand that the infrastructure that's being built for training right now is the same infrastructure we're going to use for inference. So as the world moves from training to inference, it's going to be fungible. It's not like you have to build an entirely new infrastructure for inference.

And we look around and we say: OK, there are some cool applications. But there isn't this killer application that's consuming a lot of capacity right away.

So we understand the reasons why there’s so much investment in AI. There could be a pause in the near term, obviously, and that's going to dictate the shorter-term direction of markets. But I think we're both confident over any medium- to long-term horizon that AI remains one of the biggest trends we've seen in our history. And so I think it really just depends on your time frame. But we understand deeply both sides of the argument.

So where do the market and its focus go from here?

Brook Dane: To Sung’s point, and that is a massively important point, no tech cycle goes up in a linear fashion. They just don't. You get these waves of both investment-digestion and hype-reality. And the two of them play out across a multi-year horizon.

Our perspective right now is that we're putting all this infrastructure in place to run these things. And we're seeing incredible improvements in how these models perform and what they can do. But, as Sung mentioned, we really need to see, at some point over the next year to year-and-a-half, applications that use this technology in a way that's more profound than coding and customer service chatbots.

If this ends up just doing coding and customer service, we're massively overspending on this. Again, I think both of us are very convinced that, over the intermediate term, we're going to see those applications and those use cases. And it's going to profoundly change how all of us do our work.

But I think the whole market is trying to figure out what else needs to take place for new applications and use cases to develop, and what you will see coming out the other side. So we're in that period right now where we need to see progress on that.

It sounds like you're saying this is a winner-takes-all market. Is this like the development of the internet, where there’s going to be a dominant player as we have seen in search or email platforms?

Brook Dane: This is another big topic.

Sung Cho: We know there's not going to be more than four. There's nobody else that can compete. The only companies that can make this level of investment are Meta, Google, OpenAI, and Anthropic.

Brook Dane: But what we don't know yet is: As these models mature, and as you stop seeing these step-function increases, which will happen at some point, will the best model three years from now be so much better than everybody else's model that it takes 80% of the market share? Or do we have four really good models, and people will use them for different use cases, in different areas, and in different ways? Will we have four models at scale?

If there are four equally robust models, you would think that gets commoditized pretty quickly. Whereas if one of them becomes the clear, dominant leader, then it's going to have incredible economics. We don't know that answer yet.

But what we do know is that none of the four can afford to fall off the pace of innovation. Because if you do, if you stop at this first-year-college-student level of intelligence and the other guys reach PhD-level intelligence, it may be hard for you to have a market when they're that much better and they're riding the efficiency and cost curves down faster than you are.

Sung Cho: What I think is going to happen over time is: There are going to be vertical specialists. And so I think the race beyond just raw speed and intelligence is: How can we build models that are much more efficient for specific subsectors and use cases?

Our colleagues in Goldman Sachs Research have said AI favors the big incumbent tech companies. I'm hearing the same thing from you today. Do you see anything challenging that narrative?

Sung Cho: From an infrastructure perspective, the race is largely over.

But in terms of building out vertical, industry-specific LLMs and models, and a lot of edge use cases, I don't think that's been settled yet, and I think that's where a lot of the innovation is going to come from.

Brook Dane: And what I would add is that I don't think the winners in this market will be just a handful of mega-cap names. Beyond the model-training piece, the most important question is: What data do you have that's unique, that you can bring to bear to help clients?

So what we're really looking for, on that software layer, is companies that have deep proprietary data that they can use to create differentiated use cases and experiences.

But this is largely a public markets phenomenon from an investment opportunity perspective. There's not going to be a host of private companies that emerge to disrupt the structures of these industries.

So to sum up: What’s your biggest takeaway from this research?

Sung Cho: My main takeaway is a little bit more specific to semiconductors. NVIDIA obviously has been dominating the AI silicon landscape for the better part of two years. But now there are real alternatives that are going to start to hit the marketplace. And I think there's a real debate as to whether NVIDIA is going to continue to maintain 100% share or start to cede some of that share to others. We're starting to believe that there are going to be other beneficiaries beyond just NVIDIA over the next couple of years.

Brook Dane: My main takeaway is that we are very early in a very profound, hugely impactful technology transition, and the state of these models and how they're moving forward gives me confidence that they're going to drive the structural changes we have all been thinking, and hoping, were going to play out.

Our confidence continues to increase that this technology cycle is real. It's going to be big, as they say.

 

This article is being provided for educational purposes only. The information contained in this article does not constitute a recommendation from any Goldman Sachs entity to the recipient, and Goldman Sachs is not providing any financial, economic, legal, investment, accounting, or tax advice through this article or to its recipient. Neither Goldman Sachs nor any of its affiliates makes any representation or warranty, express or implied, as to the accuracy or completeness of the statements or any information contained in this article and any liability therefore (including in respect of direct, indirect, or consequential loss or damage) is expressly disclaimed.
