Intel pulls the plug on Omni-Path networking fabric architecture

Intel’s battle to gain ground in the high-performance computing (HPC) market isn’t going so well. The Omni-Path Architecture it had pinned its hopes on has been scrapped after one generation.

An Intel spokesman confirmed to me that the company will no longer offer second-generation Intel Omni-Path Architecture (OPA) products to customers, but it will continue to encourage customers, OEMs, and partners to use the first-generation OPA 100 in new designs.

“We are continuing to sell, maintain, and support OPA 100. We actually announced some new features for OPA back at International Supercomputing in June,” the spokesperson said.

Intel said it continues to invest in connectivity solutions for its customers and that the recent acquisition of Barefoot Networks is an example of Intel’s strategy of supporting end-to-end cloud networking and infrastructure. It would not say whether Barefoot’s technology would be the replacement for Omni-Path.

While Intel owns the supercomputing market, it has not been so lucky with the HPC fabric, the network that connects CPUs and memory for faster data sharing. Market leader Mellanox, with its High Data Rate (HDR) InfiniBand framework, rules the roost, and now Mellanox is about to be acquired by Intel’s biggest nemesis, Nvidia.

Technically, Intel was a bit behind Mellanox. OPA 100 runs at 100 Gbits, and OPA 200 was intended to reach 200 Gbits, but Mellanox was already at 200 Gbits and is set to introduce 400-Gbit products later this year.

Analyst Jim McGregor isn’t surprised. “They have a history where if they don’t get high uptick on something and don’t think it’s of value, they’ll kill it. A lot of times when they go through management changes, they look at how to optimize. Paul Otellini did this extensively. I would expect Bob Swan, the newly minted CEO, to do that and say these things aren’t worth our investment,” he said.

The recent sale of the 5G smartphone modem unit to Apple is another example of Swan cleaning house. McGregor notes that Apple had been hounding Intel to invest more in 5G while at the same time trying to hire its people away.

The writing was on the wall for Omni-Path as far back as March, when Intel introduced Compute Express Link (CXL), a cache-coherent accelerator interconnect that basically does what Omni-Path does. At the time, people were wondering where this left Omni-Path. Now they know.

The problem once again is that Intel is swimming upstream. CXL competes with Cache Coherent Interconnect for Accelerators (CCIX) and OpenCAPI, the former championed by basically all of Intel’s competition and the latter promoted by IBM.

All are built on PCI Express (PCIe) and bring features, such as cache coherence, that PCIe does not have natively. Both CXL and CCIX can run on top of PCIe and co-exist with it. The trick is that the host and the accelerator must have matching support: a host with CCIX can only work with a CCIX device; there is no mixing them.
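The matching-support constraint can be illustrated with a toy sketch (hypothetical code, not any real CXL or CCIX API): a coherent link only comes up when both ends of the PCIe connection speak the same protocol.

```python
# Toy model of the host/accelerator matching constraint: a
# cache-coherent link is only established when host and device
# share a protocol on top of PCIe. Names here are illustrative.

COHERENT_PROTOCOLS = {"CXL", "CCIX", "OpenCAPI"}

def negotiate_link(host_protocols, device_protocols):
    """Return the coherent protocols both ends support, if any."""
    common = set(host_protocols) & set(device_protocols)
    return common & COHERENT_PROTOCOLS

# A CCIX host with a CCIX accelerator can bring up a coherent link...
assert negotiate_link({"CCIX"}, {"CCIX"}) == {"CCIX"}
# ...but a CXL-only host cannot use a CCIX-only accelerator.
assert negotiate_link({"CXL"}, {"CCIX"}) == set()
```

The empty result in the second case is the "no mixing them" situation: the devices may share a PCIe slot, but without a common coherent protocol they fall back to plain PCIe semantics.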

As I said, CCIX’s supporters are basically everybody but Intel: ARM, AMD, IBM, Marvell, Qualcomm, and Xilinx are just a few of its backers. CXL’s include Intel, Hewlett Packard Enterprise, and Dell EMC. The sane thing to do would be to merge the two standards, take the best of both, and make one standard. But anyone who remembers the HD-DVD/Blu-ray battle of last decade knows how likely that is.
