As the world becomes increasingly enveloped in artificial intelligence (AI) innovation, a critical question has emerged: is it necessary to keep making substantial investments in computational power? The question gained prominence after the surge of interest in Chinese AI startup DeepSeek, which stirred concern among investors about the sustainability of large-scale compute spending.
Since the beginning of 2023, Chinese and American tech companies have shared an understanding that developing significant AI models requires extensive resources. The mantra echoing through the industry has been “it takes immense effort to achieve extraordinary outcomes”: substantial computational power and funding are indispensable for continued technological advancement and for the commercial viability of AI models. NVIDIA has excelled in this environment, becoming the primary beneficiary of the surging demand for compute. Its AI chips dominate the supply chain as both Chinese and American firms buy them in bulk to fuel their AI ambitions.
Nevertheless, an unexpected proposition arose from DeepSeek at the start of 2025: a novel approach dubbed “using light forces to achieve great effects.” The company asserted that it could match the performance of OpenAI's flagship models using just 2,048 NVIDIA H800 chips at a training cost of approximately $5.5 million, a mere fraction of what established tech giants normally spend. Furthermore, its inference pricing amounted to roughly one-thirtieth that of OpenAI's comparable models. Such claims challenge the prevailing notion that piling on computational power is the only path to success.
The pivotal question remains: should the future of large models hinge on the traditional paradigm of “immense effort” or on the novel “light forces” approach? For now, at least four major American tech companies, Microsoft, Amazon, Google, and Meta, are sticking with the tried-and-true “immense effort” model and the heavy computational investments that underpin it.
In 2024, these tech giants recorded unprecedented capital expenditure, as their recent financial reports show. Microsoft, Amazon, Google, and Meta reported 2024 capital outlays of $75.6 billion, $77.7 billion, $52.5 billion, and $37.3 billion respectively, representing growth of 83%, 62%, 63%, and 35%. The combined total reached $243.1 billion, or approximately 1.8 trillion yuan, a striking year-on-year increase of 63%.
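As a sanity check, the combined total and the aggregate 63% growth rate can be reproduced from the per-company figures above (a minimal sketch; all values are taken from the text, and each company's 2023 spend is backed out from its reported growth rate):

```python
# 2024 capital expenditure (USD billions) and reported YoY growth rates,
# as cited in the text.
capex_2024 = {"Microsoft": 75.6, "Amazon": 77.7, "Google": 52.5, "Meta": 37.3}
growth_2024 = {"Microsoft": 0.83, "Amazon": 0.62, "Google": 0.63, "Meta": 0.35}

total_2024 = sum(capex_2024.values())

# Back out each company's implied 2023 spend from its growth rate,
# then compute the aggregate year-on-year growth.
capex_2023 = {co: v / (1 + growth_2024[co]) for co, v in capex_2024.items()}
combined_growth = total_2024 / sum(capex_2023.values()) - 1

print(f"2024 total: ${total_2024:.1f}B")           # $243.1B
print(f"Aggregate growth: {combined_growth:.0%}")  # 63%, matching the report
```

The four individual growth rates are mutually consistent with the $243.1 billion total and the 63% aggregate increase.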
Looking ahead to 2025, capital expenditures at these tech leaders will keep rising, though the pace may moderate. Earnings communications indicate that Microsoft anticipates quarter-on-quarter increases in capital spending, while Amazon plans to allocate $100 billion, implying close to 30% growth. Google expects to spend $75 billion, with growth likely to exceed 40%, and Meta estimates its investment at between $60 billion and $65 billion, a growth rate of 60% to 75%.
Based on commentary from the earnings calls, the combined capital expenditures of these giants for 2025 will likely eclipse $320 billion, roughly 30% growth. The trend serves as a bellwether for Chinese companies: American firms' spending reflects their reading of the market and signals whether to scale back cautiously or to keep deepening investments in computational power.
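Since Microsoft has not disclosed a full-year figure, a back-of-the-envelope sketch can back out its implied share of the projected total. This is an assumption-laden illustration, not a disclosed figure: Meta is taken at the midpoint of its range, and the other numbers come from the text above.

```python
# 2025 capex guidance (USD billions), as cited in the text. Meta's figure is
# an assumption: the midpoint of its $60-65B guidance range.
amazon_2025 = 100.0
google_2025 = 75.0
meta_2025 = (60.0 + 65.0) / 2
projected_total = 320.0  # the article's combined estimate
total_2024 = 243.1       # combined 2024 figure reported earlier

# Microsoft gave no full-year number, so compute its implied residual share.
implied_microsoft = projected_total - (amazon_2025 + google_2025 + meta_2025)
growth_vs_2024 = projected_total / total_2024 - 1

print(f"Implied Microsoft 2025 capex: ${implied_microsoft:.1f}B")  # $82.5B
print(f"Combined growth vs 2024: {growth_vs_2024:.0%}")            # 32%
```

The implied aggregate growth of about 32% is consistent with the "roughly 30%" cited in the earnings commentary.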
Historical data shows that investment in computing has reached new peaks. Successful training and inference of ever-larger models depend squarely on computational capability, and large tech enterprises typically allocate over 60% of their annual capital expenditure to computing resources: chip procurement, servers, networking equipment, land leases, and data center construction.
Charting the capital expenditures of Microsoft, Amazon, Google, and Meta over the past 20 years shows that 2024 stands as a historic peak. Tellingly, executives at all four companies conveyed on their earnings calls that demand for AI capacity drastically outstrips supply, and that their high capital spending is driven by market need.
In a recent conference call, Microsoft's management indicated that capital spending would rise in subsequent quarters, though details of 2025's spending plans were not divulged. They underscored that AI capacity remains constrained and noted that more than half of their computational investment produces long-term assets poised to drive commercial gains over the next 15 years.
Financial reports show that by Q4 2024, Microsoft's annualized AI revenue exceeded $13 billion, a growth rate of 175%. The Azure platform recorded $105.4 billion in revenue with AI contributing 12.3% of that income, significant traction within just two years of expansion into AI services. Meanwhile, Amazon disclosed plans to surpass $100 billion in capital expenditure in 2025, with a major focus on computational power, particularly as it builds a compute cluster featuring thousands of its proprietary Trainium 2 AI chips.
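The Azure figures are internally consistent; a quick arithmetic check (values from the text above):

```python
# Azure figures as cited in the text: $105.4B platform revenue, 12.3% from AI.
azure_revenue = 105.4  # USD billions
ai_share = 0.123

ai_revenue = azure_revenue * ai_share
print(f"Implied annualized AI revenue: ${ai_revenue:.1f}B")  # $13.0B
```

The result of roughly $13 billion matches the annualized AI revenue figure reported for Q4 2024.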
Similar narratives unfolded at Google and Meta. Google disclosed plans for $75 billion of capital expenditure in 2025 to meet surging demand, noting that AI training now consumes eight times the computational power it did 18 months ago. Meta, whose business model differs starkly in that it operates no cloud computing service, nonetheless plans to raise its AI investment to between $60 billion and $65 billion. Its strategy hinges on applying AI to advertising, recommendation systems, and content feeds, while acknowledging that monetization of its AI pursuits remains at an early stage.
Despite investor concerns about the efficiency and returns of these four companies' computational spending, the firms continue to lean heavily into AI. Recent swings in technology stocks such as NVIDIA prompted speculation that the sector might be unwinding, but such bearish sentiment quickly dissipated as NVIDIA's value rebounded significantly, reaffirming market confidence.
Notably, DeepSeek played a crucial role in reshaping investor perceptions by showing that effective AI models can be built without massive quantities of expensive compute. While skepticism about the accuracy of DeepSeek's claims persists, the implications of its cost-effective model prompted a reassessment of the industry's heavy investment practices.
The debate over computational power has split into opposing camps: pessimists fear the burst of an investment bubble, while optimists argue that falling model costs could expand the entire computational ecosystem. Either way, the extensive capital outlays of Microsoft, Amazon, Google, and Meta reflect an unwavering commitment to technology and growth, and optimism about future innovation.
In conclusion, while sustained high-intensity computational investment inevitably raises questions of fiscal sustainability, the overall trajectory points to spending growth moderating toward more normal levels rather than retrenchment. The experience of the American tech giants can serve as a guide for their Chinese counterparts facing similar conditions. Stakeholders must remain vigilant in balancing investment against escalating demand as the technological landscape continues to evolve briskly.