Zhitong Finance App learned that the stock price of CoreWeave (CRWV.US), the cloud AI computing power leasing leader dubbed "Nvidia's son," fell sharply in US pre-market trading after management announced its latest funding plan to raise 2 billion US dollars by issuing convertible bonds (debt that can be converted into shares). Generally speaking, the conversion terms of such bonds give investors a meaningful discount or implied return, and conversion into new shares would substantially dilute existing shareholders' earnings per share and per-share value. Hence, on hearing the news, the stock price reacted first and fell sharply.
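To illustrate the dilution mechanism in rough terms, the sketch below uses purely hypothetical figures for the share count, net income, and conversion price; none of these numbers come from CoreWeave's actual disclosures.

```python
# Hypothetical illustration of how convertible-bond conversion dilutes earnings per share (EPS).
# All figures are invented for the example; none are taken from CoreWeave's filings.

bond_principal = 2_000_000_000   # $2B raised via convertible bonds
conversion_price = 100.0         # hypothetical conversion price per share
existing_shares = 480_000_000    # hypothetical shares outstanding before conversion
net_income = 600_000_000         # hypothetical annual net income

new_shares = bond_principal / conversion_price           # shares created on full conversion
eps_before = net_income / existing_shares
eps_after = net_income / (existing_shares + new_shares)

print(f"New shares on full conversion: {new_shares:,.0f}")
print(f"EPS before conversion: ${eps_before:.2f}")
print(f"EPS after conversion:  ${eps_after:.2f}")
print(f"Dilution: {(1 - eps_after / eps_before) * 100:.1f}%")
```

With these made-up inputs, full conversion would add 20 million shares and trim EPS by about 4%; the actual dilution depends on the final conversion terms the company sets.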
In pre-market trading, the company's stock price fell as much as 7% to $82.10. According to the announcement, the AI computing power leasing company plans to issue convertible bonds due in 2031 through a private placement, with an option to upsize the offering by an additional 300 million US dollars.
CoreWeave completed its initial public offering (IPO) on the US stock market in March of this year, attracting investors looking to bet on a full-blown boom in AI capital spending. The company is headquartered in Livingston, New Jersey, and is one of the closest partners of AI chip superpower Nvidia (NVDA.US); its largest customers include OpenAI and Microsoft (MSFT.US).
The company plans to use part of the proceeds for a derivatives transaction aimed at reducing the significant risk of a dilution-driven slide in its share price if the bonds are converted into shares. The remainder will be used to support the company's business operations.
Global demand for AI computing power has undoubtedly continued to explode, which is why the valuations of cloud AI computing power leasing leaders such as Fluidstack and CoreWeave have kept expanding this year. Computing power requirements tied to AI training and inference have pushed the underlying infrastructure clusters to their capacity limits, and even the large-scale AI data centers that have been expanding recently cannot satisfy the extremely strong global demand for computing power.
After Google launched the Gemini 3 AI application ecosystem in late November, these cutting-edge AI applications quickly caught on worldwide, driving an instant surge in demand for Google's AI computing power. Once released, the Gemini 3 series brought an enormous AI token-processing load, forcing Google to sharply curtail free access to Gemini 3 Pro and Nano Banana Pro and to impose temporary limits even on Pro subscribers. Combined with South Korea's recent trade export data showing continued strong demand for HBM memory systems and enterprise-grade SSDs, this further confirms that "the AI boom is still in the early build-out stage, where computing power infrastructure is in short supply."
Just what kind of company is CoreWeave, the firm dubbed "Nvidia's son"?
As one of the earliest companies to lease out Nvidia data-center graphics processors (GPUs) in the cloud, CoreWeave won the favor of Nvidia's venture capital arm by seizing the wave of demand for data-center AI computing power, and even secured priority allocations of the heavily sought-after Nvidia H100/H200 and Blackwell series AI GPUs, prompting cloud giants such as Microsoft, Google, and Amazon to rent cloud AI computing power from CoreWeave.
As early as August 2024, CoreWeave became the first cloud computing service provider to deploy the Nvidia H200 Tensor Core GPU, a high-performance AI GPU that lets it offer customers extremely powerful computing capabilities. Driven by the AI wave, and especially in 2023, CoreWeave's standing in the cloud AI GPU computing power market grew rapidly thanks to large-scale procurement of high-end Nvidia AI GPUs (such as the H100/H200) and deep cooperation with Nvidia across the CUDA software and hardware ecosystem.
The most prominent feature of CoreWeave's AI cloud computing power rental service is its focus on providing high-end AI GPU clusters (especially Nvidia GPUs) at scale, so that users can obtain high-performance GPU computing on demand in the cloud, that is, cloud AI computing power, for workloads such as machine learning, deep learning, and inference. CoreWeave supports large-scale elastic deployment: users can quickly scale the number of AI GPUs up or down according to project needs, making the service suitable both for training AI models (such as large language models and computer vision systems) and for massive inference workloads that require real-time processing. Beyond AI, CoreWeave's Nvidia GPU resources can also serve traditional HPC scenarios such as scientific computing, molecular simulation, and financial risk analysis.