The US announced a partial rollback of AI semiconductor export restrictions on March 13, 2026, targeting key technologies amid global competition. The move is poised to boost domestic chipmakers ahead of Nvidia's GTC event next week, putting Nvidia, HBM suppliers, and related AI plays in heightened focus with potential upside. The policy shift aims to balance national security and industry growth.
NVIDIA announced the Rubin platform, named after astronomer Vera Rubin, marking a shift from Blackwell with extreme co-design across six new chips targeting 1.6nm processes for massive gains in computing power. The first Rubin systems will launch in the second half of 2026 through AWS, Microsoft, and Google, with Microsoft integrating them into its 'Fairwater' AI superfactories. This hardware leap supports edge-centric AI via specialized ASICs, enabling real-time insights, amid rising demand projected at over $700 billion in global datacenter leases.
ByteDance is working with Aolani to deploy approximately 500 Nvidia Blackwell computing systems in Malaysia, comprising roughly 36,000 of Nvidia's most advanced B200 chips, according to reports citing people familiar with the matter.[1] The hardware deployment is valued at more than $2.5 billion and represents a significant effort by the Chinese tech giant to access top-tier AI processors despite US export restrictions.[1] ByteDance's AI portfolio includes Dola chatbot, Dreamina video creation tool, Gauth homework assistant, and Seedance video generation model, which has gained attention for its ability to convert written scripts into realistic short film scenes.[1] The move underscores ongoing tensions around US-China tech competition and export controls on advanced semiconductor technology.
The US Commerce Department withdrew a planned rule on artificial-intelligence chip exports on Friday, marking the latest policy reversal by the Trump administration regarding technology trade controls.[4] The withdrawal represents a significant shift in the administration's approach to regulating AI semiconductor exports, which had been a contentious issue affecting companies like Nvidia and other chipmakers.[4] This decision follows earlier statements from administration officials indicating a more flexible stance on AI chip distribution policies compared to previous regulatory frameworks.[4] The reversal signals potential changes in how the US will manage competition with foreign nations in advanced semiconductor and AI technology sectors.
Amazon Web Services began deploying AI chips from Groq alongside its Trainium2 processors in US data centers on March 13, 2026, to diversify inference capabilities. The deployment complements the more than 100,000 Inferentia chips already in use and aims to cut AI training costs by 50% for customers. This hybrid strategy enhances AWS's competitiveness against Nvidia amid chip shortages, boosting scalability for enterprise AI workloads. Expansion to additional regions is slated for April 2026.
Neuron-powered computer chips can now be easily programmed to play a first-person shooter game, bringing biological computers a step closer to useful applications.
The datacentre investment boom is one of the biggest infrastructure gambles of this era, and Britain may be uniquely exposed. Stargate was to be the world's biggest AI investment: a $500bn infrastructure project to 'secure American leadership in AI'. Never shy of hyperbole, its key backer, the ChatGPT-maker OpenAI, promised 'massive economic benefit for the entire world' with facilities to help people 'use AI to elevate humanity'. Now, OpenAI appears to be dropping out of a part of the deal: the expansion of a flagship datacentre stretching across a swathe of land in Abilene, Texas, which has become one of the most visible manifestations of a frenzy of investment in the chips and power plants required to build and run AI. There has been a breakdown in negotiations over project financing, as well as over the timeline of when the expanded capacity might come online.
The US Commerce Department withdrew a planned rule on Friday that would have required permits for exports of advanced AI chips from companies like Nvidia and AMD to global customers. The abandoned Trump administration proposal aimed to involve case-by-case reviews by the Commerce Department's licensing office, contingent on factors such as government agreements and end-user computing power needs. Commerce officials rejected returning to the prior administration's 'burdensome, overreaching and disastrous' AI diffusion framework. This decision eases restrictions on the semiconductor industry amid ongoing trade tensions.[1][2]
Nvidia announced on March 13, 2026, major investments of capital and its latest chips to develop the next layer of AI cloud infrastructure, targeting US data centers. The initiative involves partnerships with hyperscalers to deploy Blackwell GPUs at scale, positioning Nvidia to dominate AI compute markets valued at over $100 billion annually. Experts note it accelerates US leadership in AI amid global competition.
The US Commerce Department withdrew a planned rule on AI chip exports on March 13, 2026, marking another policy reversal by the Trump administration amid shifting trade priorities. The rule had aimed to tighten controls on advanced semiconductor shipments, particularly to adversaries. This decision eases pressures on US chipmakers like Nvidia, potentially boosting exports while maintaining national security reviews. Tech industry groups welcomed the move, citing reduced regulatory burdens for $50 billion in annual chip trade.