• Semiconductor design giant Arm revealed on March 25, 2026, that it will market its own chips for the first time, aiming for $15 billion in annual sales.
• CEO Rene Haas discussed the move in exclusive interviews, marking a shift from licensing designs to direct competition in AI and data center markets.
• The announcement boosted SoftBank shares and highlighted re-industrialization trends in U.S. tech supply chains amid AI demand.
• US prosecutors in the Southern District of New York charged three individuals affiliated with Super Micro Computer, including co-founder Yih-Shyan Liaw, with smuggling Nvidia AI chips to China.
• The scheme involved illegally diverting billions in advanced semiconductors intended for US servers to unauthorized Chinese entities, violating export controls.
• This case highlights escalating US efforts to curb AI technology transfers amid national security concerns over China's military advancements.
• Samsung Electronics unveiled an $82 billion investment plan for chip manufacturing and AI technology, even as its union warned of potential labor action, signaling internal tensions over working conditions amid the expansion.
• The company expects to distribute approximately 9.8 trillion won (roughly $7.3 billion) in regular dividends for 2026, with additional returns possible if surplus funds remain available.
• Samsung's major capital commitment reflects intensifying competition in semiconductor and AI markets, particularly as global demand for chips and AI infrastructure accelerates.
• Elon Musk stated that Tesla's and SpaceX's AI efforts will continue making large Nvidia chip purchases even as Tesla advances its AI5 chip, which is optimized for edge compute in Optimus and the Robotaxi.
• Tesla's Terafab AI chip manufacturing facility is set to launch within seven days of March 14, potentially by March 21.
• Musk praised Nvidia CEO Jensen Huang, noting that AI5's efficiency in a half-reticle format could halve fab capacity needs, with AI6 potentially matching the performance of two AI5 chips.
• Nvidia CEO Jensen Huang is unveiling new chips and software at the GTC conference to solidify the company's position as the industry shifts from training giant foundation models to inference workloads that power real-world applications.
• The strategic pivot targets lower-cost, high-throughput AI deployments within enterprise workflows, autonomous systems, and product integrations rather than just headline-grabbing training clusters.
• Nvidia's next moves will shape the cost, speed, and competitive structure of the global AI software and infrastructure market as the industry enters a new phase of AI economics.
NVIDIA announced the Rubin platform, named after astronomer Vera Rubin, marking a shift from Blackwell with extreme co-design across six new chips targeting 1.6nm processes for massive computing power gains. First Rubin systems will launch in the second half of 2026 through AWS, Microsoft, and Google, with Microsoft integrating them into 'Fairwater' AI superfactories. This hardware leap also supports edge-centric AI via specialized ASICs, enabling real-time insights, amid demand for global datacenter leases projected to exceed $700 billion.
ByteDance is working with Aolani to deploy approximately 500 Nvidia Blackwell computing systems in Malaysia, comprising roughly 36,000 of Nvidia's most advanced B200 chips, according to reports citing people familiar with the matter. The hardware deployment is valued at more than $2.5 billion and represents a significant effort by the Chinese tech giant to access top-tier AI processors despite US export restrictions. ByteDance's AI portfolio includes Dola chatbot, Dreamina video creation tool, Gauth homework assistant, and Seedance video generation model, which has gained attention for its ability to convert written scripts into realistic short film scenes. The move underscores ongoing tensions around US-China tech competition and export controls on advanced semiconductor technology.
Amazon Web Services began deploying AI chips from Groq alongside its Trainium2 processors in US data centers on March 13, 2026, to diversify its inference capabilities. The Groq hardware complements the more than 100,000 Inferentia chips already in use, with the aim of cutting customers' AI training costs by 50%. This hybrid strategy enhances AWS's competitiveness against Nvidia amid chip shortages, boosting scalability for enterprise AI workloads. Expansion to additional regions is slated for April 2026.
The datacentre investment boom is one of the biggest infrastructure gambles of this era, and Britain may be uniquely exposed.

Stargate was to be the world’s biggest AI investment: a $500bn infrastructure project to “secure American leadership in AI”. Never shy of hyperbole, its key backer, the ChatGPT-maker OpenAI, promised “massive economic benefit for the entire world” with facilities to help people “use AI to elevate humanity”.

Now, OpenAI appears to be dropping out of part of the deal: the expansion of a flagship datacentre stretching across a swathe of land in Abilene, Texas, which has become one of the most visible manifestations of a frenzy of investment in the chips and power plants required to build and run AI. There has been a breakdown in negotiations over project financing, as well as over the timeline for when the expanded capacity might come online.
Nvidia announced on March 13, 2026, major investments in capital and its latest chips to develop the next layer of AI cloud infrastructure, targeting US data centers. The initiative involves partnerships with hyperscalers to deploy Blackwell GPUs at scale. This positions Nvidia to dominate AI compute markets valued at over $100 billion annually. Experts note it accelerates US leadership in AI amid global competition.