BlackRock comment on DeepSeek developments and implications for U.S. tech companies

What happened? U.S. tech sold off on Monday as investors focused on the implications of Chinese AI startup DeepSeek’s latest model, which is said to rival the performance of OpenAI’s GPT-4 at a fraction of the cost. In particular, the selloff reflects market doubts over whether the AI buildout will require the hundreds of billions of dollars of investment in chips, infrastructure and electricity planned and underway by U.S. hyperscalers. Semiconductor and utility shares were among the hardest hit.

What do we know? DeepSeek seems to have made some remarkable advances in efficiency by using a rule-based system that encourages the algorithm to break problems down into steps and to allocate computation time in proportion to each step’s complexity. Yet the exact amount of compute used is not certain. The results suggest these models could solve problems in a fraction of the time of other models that rely on unstructured training on massive data sets and that therefore have massive computation and electricity needs. Running costs certainly appear lower for DeepSeek, but the true development costs are uncertain, especially since the model builds on existing technology rather than being a built-from-scratch innovation: it functions similarly to other U.S. models and does not expand current AI capabilities.
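The intuition behind that efficiency gain can be sketched in a toy way: spend compute in proportion to each step's estimated difficulty rather than a fixed worst-case budget on every step. This is an illustrative sketch only; the function names, difficulty scores and budget numbers below are hypothetical and do not represent DeepSeek's actual method.

```python
# Toy model of step-wise compute allocation. All numbers are hypothetical.

def adaptive_budget(difficulties, base=1):
    """Spend compute in proportion to each step's estimated difficulty."""
    return [base * d for d in difficulties]

def uniform_budget(difficulties, per_step=10):
    """Spend the same worst-case compute on every step."""
    return [per_step for _ in difficulties]

# A hypothetical problem broken into five steps of varying difficulty
# (1 = easy, higher = harder).
steps = [1, 2, 1, 8, 1]

adaptive = sum(adaptive_budget(steps))
uniform = sum(uniform_budget(steps))
print(f"adaptive: {adaptive}, uniform: {uniform}")  # prints: adaptive: 13, uniform: 50
```

In this stylized example, matching compute to step complexity cuts total compute to roughly a quarter of the uniform worst-case budget, which is the kind of efficiency claim driving the market reaction.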

What does it imply for AI adoption? If this is indeed a significant cost and efficiency gain, it is likely to substantially accelerate the AI mega force. A few months ago, there was anxiety that AI hopes were overdone on worries that the exponential growth in computing capacity was reaching its limits. This development points in the opposite direction. Think of it as if the chips themselves had become 10 times faster. We think this should increase the number of AI use cases and the speed of adoption. The winners of the AI transformation can broaden as it accelerates.

What does it imply for U.S. big tech? Markets viewed these more efficient algorithms, and the reduced reliance on crunching massive data sets, as calling into question the extent of AI infrastructure needed and thus the investment pouring into the buildout. The amount of infrastructure needed might be in question if AI were already widely adopted. But adoption has only just begun, and we are barely scratching the surface of potential use cases. Instead, we think this could open up new avenues of AI demand, and the capacity U.S. hyperscalers are building can still be redeployed elsewhere. For example, heavy capital spending might still be needed to keep advancing toward artificial general intelligence (AGI). So we think it is much too soon to conclude that the capex spent by U.S. tech on the AI buildout is overdone.

Expertise is key. The fact that these AI model improvements came as such a surprise to markets reveals that some of the AI hopes reflected in valuations may rest on weak assumptions. Expertise is key to tracking who the winners of the AI transformation will be. We keep our overweight to U.S. equities but monitor developments to identify the AI beneficiaries through the different phases of AI’s evolution.