00:00 Speaker A
Dan, I want to make sure we get to CoreWeave here because they’re having their best day in about two months, and this is off the news of a multi-year agreement with Anthropic to support the development and deployment of Claude models. So talk to us a little bit about this move.
00:11 Dan
Yeah, we didn’t get any pricing details on this, or any gigawatt details. So it’s kind of just a deal, right? Usually we get those kinds of numbers, but they weren’t released here. But it’s a big deal for CoreWeave. This is a huge customer, obviously. Anthropic is working with a number of companies when it comes to getting that capacity, whether that’s Amazon or Google or Microsoft as well.
00:43 Dan
And so this is just another way for them to build out capacity as they train and run their models. And look, training is still a big part of this. Obviously, when you’re one of these frontier labs, you’re going to want to train a lot, and that takes a lot of power. So you’re going to have to get that capacity. And then, as you get more popular, you’re going to need a lot more servers that can then run this stuff.
01:13 Dan
And so that’s what this kind of deal signifies: that demand, that capacity constraint, is still going on. You listen to these companies, whether it’s Anthropic or OpenAI or Microsoft or Google or Amazon, and they all talk about capacity constraints. One of the issues for Microsoft has been that while they try to build out their own models, they’re also serving OpenAI, and then they’re also serving their own customers.
01:46 Dan
And so that’s why they’re so capacity constrained. So this is just another way for Anthropic to build that out and ensure that they’re able to both train and run models and serve their customers.