On Wednesday, Meta Platforms released information regarding the upcoming iteration of their proprietary AI accelerator hardware.
The company announced the development on its official blog as it invests billions in its own AI efforts to overtake competitors in the generative AI market.
According to the announcement, the business is looking at several hardware solutions as part of a larger bespoke silicon project that includes the new Meta Training and Inference Accelerator (MTIA) processor.
Beyond manufacturing the chips and hardware, Meta has invested heavily in developing the software needed to harness the full power of its infrastructure.
According to Meta, the improved MTIA is now deployed in 16 of its data centre regions and delivers up to three times the overall performance of its predecessor, MTIA v1.
If the "3x" claim strikes you as vague, you're not alone; we thought so too. Meta would only say that the figure came from benchmarking the performance of "four key models" on both chips.
In a blog post, Meta claims, “We can achieve greater efficiency compared to commercially available GPUs because we control the whole stack.”
The company also disclosed plans to build silicon that works in concert with its existing infrastructure.
“We’re designing our custom silicon to work in cooperation with our existing infrastructure as well as with new, more advanced hardware (including next-generation GPUs) that we may leverage in the future.
“Meeting our ambitions for our custom silicon means investing not only in compute silicon but also in memory bandwidth, networking and capacity, as well as other next-generation hardware systems,” it stated.
As we reported earlier, Meta is currently inviting developers to register for access to the Threads API, scheduled to launch in June, so they can build experiences for the platform.