Meta Platforms shares rose to $657.26 (+0.49%) today after the company unveiled a roadmap for four new in-house artificial intelligence chips designed to power its rapidly expanding data center infrastructure. The announcement highlights Meta’s growing focus on custom silicon as demand for AI computing surges across the technology industry.
The new chips are part of the company’s Meta Training and Inference Accelerator (MTIA) program and are intended to support both recommendation systems and advanced generative AI workloads across Meta’s platforms, including Facebook and Instagram. As AI becomes central to product development and advertising systems, Meta is increasingly looking to design specialized processors that can handle its massive computing needs more efficiently.
Meta introduces four AI chips under MTIA program
The company introduced four processors: the MTIA 300, MTIA 400, MTIA 450, and MTIA 500. The MTIA 300 is already in use and powers Meta’s ranking and recommendation systems, which determine the content users see in their feeds. These systems are critical to engagement and advertising performance across Meta’s social platforms.
The next-generation chip, the MTIA 400, is designed to handle both recommendation workloads and generative AI tasks. According to Meta, the processor can be deployed in large server rack configurations containing up to 72 chips, similar to advanced AI hardware systems built by companies such as Nvidia and AMD.
The company’s later processors, the MTIA 450 and MTIA 500, are focused primarily on inference workloads. Inference refers to the stage where trained AI models respond to user requests, generate content, or make recommendations in real time. These chips are expected to feature faster high-bandwidth memory and improved performance for large-scale AI applications.
Meta said the chips will be released at roughly six-month intervals, reflecting the speed at which the company is building new data centers and scaling its AI infrastructure.
Focus on inference as AI demand explodes
Meta executives emphasized that inference workloads are becoming a key priority. While training large AI models receives much of the industry’s attention, running those models across billions of daily user interactions requires enormous computing resources.
“We see inference demand exploding at the moment and that’s what we’re currently focused on,” Meta Vice President of Engineering Yee Jiun Song said in an interview.
The company’s later MTIA chips are designed specifically to handle this surge in demand. By optimizing chips for inference workloads, Meta hopes to deliver faster responses while reducing the cost and energy consumption associated with running AI models at global scale.
Meta still buying chips from Nvidia and AMD
Despite its push into custom processors, Meta is continuing to rely heavily on third-party suppliers. The company recently signed multiyear agreements worth tens of billions of dollars to purchase AI chips from Nvidia and Advanced Micro Devices.
Custom chip development therefore complements rather than replaces existing partnerships. In the near term, Meta plans to combine its own processors with commercial GPUs in its data centers.
Still, the strategy reflects a broader trend among hyperscale technology companies. Alphabet, Amazon, and Microsoft have all invested heavily in designing their own AI chips to reduce reliance on external vendors and improve cost efficiency.
Broadcom and TSMC involved in chip development
Meta is not building the processors entirely on its own. The company has partnered with Broadcom for certain elements of the chip design process, although it has not publicly detailed which specific components Broadcom is responsible for.
The processors themselves are manufactured by Taiwan Semiconductor Manufacturing Co. (TSMC), the world’s largest contract chipmaker and a key supplier to many of the technology industry’s biggest companies.
This type of collaboration is common in the semiconductor industry, where companies design chips internally but rely on specialized partners for fabrication and certain engineering functions.
Massive spending on AI infrastructure
Meta’s chip roadmap comes as the company dramatically increases its capital spending to support AI development. In January, Meta projected total capital expenditures of between $115 billion and $135 billion this year, much of which will go toward building new data centers.
The scale of investment mirrors similar spending plans across the technology sector. Amazon, Microsoft, Google, and Meta are expected to collectively spend as much as $650 billion on capital expenditures in 2026, with the majority directed toward AI infrastructure.
For Meta, controlling more of the hardware stack could become an important financial advantage. Chips optimized for specific workloads can potentially use less energy and deliver better performance than general-purpose processors.
That efficiency could help Meta manage the enormous computing costs associated with training and running large AI models.
Generative AI training remains a challenge
Meta has previously struggled with its ambition to build chips capable of handling the most demanding generative AI training tasks. While the company has made progress with inference processors, developing a training chip capable of competing with Nvidia’s most advanced GPUs remains a longer-term challenge.
The MTIA 400 represents a step in that direction. Meta says the chip is designed to deliver performance that is competitive with leading commercial products while also providing cost advantages.
Still, the company has not specified exactly which competing products it is targeting.
Implications for the AI chip market
Meta’s announcement adds another dimension to the competition among major technology companies to control the infrastructure behind artificial intelligence. As hyperscalers expand their data centers and AI capabilities, the balance between custom silicon and commercial chips could reshape the semiconductor landscape.
For Nvidia and AMD, hyperscale customers remain critical. Nvidia recently said that slightly more than half of its data center revenue comes from hyperscale cloud providers.
However, those same companies are increasingly developing their own hardware to complement external purchases.
More information about Meta’s AI strategy is available in the company’s official newsroom, while broader coverage of the AI chip race is regularly reported by Reuters technology news.
Market reaction
Investors responded positively to the announcement, pushing Meta shares up 0.49% to $657.26. While the immediate stock move was modest, the roadmap reinforces Meta’s long-term strategy of building deeper control over the hardware that powers its AI ecosystem.
As artificial intelligence becomes a central driver of growth for the technology sector, companies that control both software and silicon could gain a strategic edge.
Meta’s latest chip roadmap suggests the company intends to be one of them.