Ampere Alters Course, Adding Its Own Cores: Why Now, and For Whom?

The system-on-a-chip maker had been scoring points on scalability and performance with its Neoverse N1-based Altra series. If Altra truly is a Xeon-slayer, why take it out of the fight?

Scott Fulton III, Contributor

June 14, 2021


There are Arm processor cores, and there are cores built with Arm-licensed technology. The difference is not at all subtle: The former is a complete core design licensed to any number of chip makers, while the latter uses the concepts and intellectual property introduced by Arm Ltd. to produce a design that is unique to that one licensee. Up to now, Ampere has been producing 80- and 128-core Altra SoCs using Arm Neoverse N1 cores.

In May, Ampere announced that, starting in 2022, as it transitions to a 5 nm lithography node, it intends to produce a new line of Arm-based processors with its own core design. The emerging headlines contained a variety of colorful verbs, including the softer “transition,” the more pragmatic “switch,” and the somewhat derogatory “dump.” Ampere CEO Renee James tagged the changeover a “follow-on,” suggesting that 7 nm Altra chips may still be available for some time following the introduction of the new, unnamed product line.

Whatever verb fits best, why would Ampere change horses this far into the race, when its current mount is already poised to make the pass? As Ampere Chief Product Officer Jeff Wittich told us, the altered course is part of a calculated strategy the company has been executing from the beginning.


“From the start, we had always had a plan to develop our own cores,” Wittich told us.  “Obviously, we weren’t able to develop these cores in a year. So we had been working on these for the last three-and-a-half years. And the real reason it’s so important to us is because of the fact that our design point is uniquely the cloud. It’s that type of scale-out compute. It’s inherently multi-tenant. It needs to be very, very dense compute. Isolation is important. There’s a lot of manageability features required, that may not be very useful in other types of infrastructure, but are useful with this type of infrastructure.”

The Wrath of Ouroboros?

This new line of processors, which has yet to be given an official designation, may very well be the dream that launched the company: a hyperscale-centered core that addresses hyperscale providers’ exclusive needs. One of those needs is secure, isolated execution — which, Wittich told us, isn’t so much reliant upon the chip’s instruction set architecture (ISA) as the underlying microarchitecture.

[Image: Ampere’s basic product roadmap]

It’s this latter part that Ampere is looking to produce for itself, beginning with a 2022 sampling period, while continuing to use Arm’s licensed instruction set. Isn’t Ampere openly admitting that it believes servers deployed in cloud and hyperscale environments require at least some features it won’t be able to provide in Altra, with its Neoverse cores?


“For us to have a core that is very unique to this use case — which happens to be a massive use case,” continued Wittich, “we really did need to be able to innovate all the way down to the microarchitecture level. It came down to doing our own cores because it gives us optimal performance, [and] performance consistency. We can ensure we’re able to deliver not just the highest performance, but also very consistent performance. And don’t [get me wrong], Neoverse cores deliver great performance — no knock on them, they’re great cores. Our Altra and Altra Max products are delivering amazing amounts of performance, and I expect the same thing will be true from the Arm roadmap going forward. We just wanted to deliver the performance in a way that’s a little bit more tailored for what our customer base is looking to do in the cloud.”

Just who would be inciting this round of “tailoring” is no longer a secret. Ampere announced last month that it had already signed on Microsoft Azure, Oracle Cloud, and Shenzhen-based cloud giant Tencent as long-term collaborative partners, suggesting they were already providing insights into new designs. Last December, Microsoft let slip (probably on purpose) that it was exploring designing its own Arm-based server processors, which it could still conceivably do.  (Microsoft explores many production avenues it doesn’t necessarily take.)

In the meantime, though, Ampere’s partners may find themselves in a more advantageous position now than if they were to make a go of it themselves, as Apple did with its M1. Here is a company willing to risk derailing the success of its own established product line to implement the improvements its high-volume customers are asking for. Wittich told Data Center Knowledge that Microsoft has already begun adapting its platforms and software for Ampere’s next-gen core.

The implication there is not that Ampere will be designing its own new core, but that it already has.

“Ampere’s vision to create a new generation of cloud-scale data center processors enables Microsoft to optimize our products to improve scalability, performance, and power efficiency,” stated Microsoft Azure Core Solutions CVP Erin Chapple in a recent Ampere video.  “Their enhancements in hardware security and impactless maintenance will enable seamless delivery of security and stability improvements that cloud customers require.”

Arm Ltd. may have been under the impression that it would be the one innovating the features cloud customers so urgently demand. At least Arm put on a game face last April, when it unveiled the next generation of its Neoverse core architecture, making a point of citing Ampere as a partner and even taking credit for Ampere’s Oracle Cloud and Tencent partnerships.

“When Arm’s partners are offering full cores, compared to traditional threads,” remarked Arm SVP and GM Chris Bergey at the time, “there will be no question which offers better performance and value to the end customer. That’s primarily a cloud statement, but when you consider the performance we’ll be offering, with the power efficiency Arm is known for, it extends equally to all infrastructure markets — from HPC to the cloud to the edge. The time for Arm Neoverse across all infrastructure segments has come, and starts now.”

Ampere’s move makes it clear that not even all of Arm’s own partners quite believe a single microarchitecture can, or should, extend across all infrastructure segments.

The Next Generation of Hyperconvergence?

Cloud service providers may still want the kind of manageability originally promised by hyperconverged infrastructure (HCI), only this time for real, down at the microarchitecture level. When HCI was first introduced, journalists tended to perceive it as synonymous with hyperscale, using the two terms interchangeably. Compute, storage, memory, and, in emerging use cases, network capacity were treated as liquid commodities, each made available to workloads according to their configurations. That was the HCI ideal, at least as it was originally marketed.

As things actually turned out, HCI appealed to smaller enterprises more than hyperscalers — to the evident dismay of some HCI equipment makers. Now, with Intel Xeon-based server makers Dell and HPE working to pivot HCI back toward its original course, Wittich told us, Ampere is gearing up “to be hyperconvergence” — to take back the original mantle.

“The basic idea is to create giant pools of resources,” he remarked.  “That way, you turn it into a bin-packing problem. With big pools of resources, bin-packing is easier, and you can just provision whatever you need, from a resource perspective, out of a big pool, and you’ve got great efficiency. And being able to replace things as you want, being on a different refresh cycle for compute, memory, storage, accelerators, has advantages as well.

“The holy grail here is memory,” Ampere’s Wittich continued. Already, hyperscalers have succeeded in disaggregating storage, moving it out into separate, networked arrays. But a completely disaggregated system with resource pools will need to include memory.  “And we’re still not quite there yet,” he said.  “Let’s see where CXL [Compute Express Link, a developing memory interconnect standard] gets. But if we can crack the high-performance, pooled memory problem, and it all is exposed to the system as though it really is local memory, then that’s a game-changer, from an architectural perspective.”
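
Wittich’s bin-packing framing is easy to make concrete. The sketch below is a minimal illustration, not anything Ampere or its cloud partners actually run: it uses a first-fit-decreasing heuristic to place workload requests against pooled cores and memory, and the Host, Workload, and place_workloads names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    """A node drawing on the shared pools: spare cores and spare memory (GB)."""
    name: str
    free_cores: int
    free_mem_gb: int
    placed: list = field(default_factory=list)

@dataclass
class Workload:
    """A provisioning request: how many cores and how much memory it wants."""
    name: str
    cores: int
    mem_gb: int

def place_workloads(workloads, hosts):
    """First-fit-decreasing: sort the biggest requests first, then drop each
    one onto the first host with enough spare cores and memory."""
    unplaced = []
    for w in sorted(workloads, key=lambda wl: (wl.cores, wl.mem_gb), reverse=True):
        for h in hosts:
            if h.free_cores >= w.cores and h.free_mem_gb >= w.mem_gb:
                h.free_cores -= w.cores
                h.free_mem_gb -= w.mem_gb
                h.placed.append(w.name)
                break
        else:
            unplaced.append(w.name)  # nothing in the pool fits; time to scale out
    return unplaced

if __name__ == "__main__":
    # Two dense, many-core hosts standing in for Wittich's "big pools of resources."
    hosts = [Host("node-a", free_cores=128, free_mem_gb=512),
             Host("node-b", free_cores=128, free_mem_gb=512)]
    demands = [Workload("db", 64, 256), Workload("web", 16, 32),
               Workload("cache", 32, 128), Workload("batch", 96, 384)]
    leftover = place_workloads(demands, hosts)
    for h in hosts:
        print(h.name, h.placed, f"{h.free_cores} cores / {h.free_mem_gb} GB left")
    print("unplaced:", leftover)
```

The larger the shared pools, the fewer requests land in the unplaced list, which is the efficiency argument Wittich is making; pooled memory over a fabric such as CXL would simply add another dimension to the same packing problem.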

Wittich’s enthusiasm for this evolutionary direction leads us to believe Ampere may already have devised a next-gen Arm core better adapted to cloud providers’ push for disaggregation. That would open the door to rack-scale architectures and radically new data center configurations, many of which are bound to leak into the enterprise and colocation spaces. Keep in mind, Equinix is one of those Ampere partners.

All this taken into account, though, the phasing out of Ampere’s Altra line may be much more gradual than the company’s charts implied.

Ampere adheres to an annual product cadence, Jeff Wittich reminded us, and will continue to do so. By 2024, both Altra (including the 128-core Altra Max) and the new SoC will share Ampere’s product space. A full product generation, he told us, may last about four years, and Altra Max’s has just begun. But a majority of Ampere’s sales by that year, he projects, will be “own-core.”

“Not a knock on Neoverse at all,” Wittich repeated.  “Not that we’re saying that we’re somehow replacing Neoverse, and will have moved to ours. It’s just, our latest-and-greatest will be our own core.”

About the Author

Scott Fulton III

Contributor

Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.
