Hewlett-Packard Enterprise is making what sounds like an audacious claim: that it now has the fastest, most powerful single-memory computer in the world.
However, there are two facts to realize here: a) this claim is very likely true; and b) this enormous computer occupies a single rack in Fort Collins, Colo.
HPE, after more than 13 years of “drawing-on-a-napkin-to-hardware” development, on May 16 finally showed a prototype of what it calls The Machine at its lab in Colorado, home to much of the company’s R&D and the place where its workstations are designed and original parts are built.
The Machine, announced as a project back in 2014 with a mere 8TB of memory after 10 years of development, turns conventional computing on its head. It takes mechanical servers, storage and networking and virtualizes all the moving parts into one large central memory, in this case a whopping 160TB of main memory.
If 8TB were a mere muscle-car engine, 160TB would be a SpaceX rocket.
“This is the latest scaled prototype of the research project we call The Machine, and it represents the approach we have to computing, called memory-driven, which will drive everything from data analytics to high-performance computing to everything else, you name it,” HPE Fellow Andrew Wheeler told eWEEK. Wheeler is also Vice President, CTO and Deputy Labs Director at Hewlett-Packard Labs.
“It’s the umbrella for this new computer architecture.”
Largest R&D Project in HP’s History
HPE said that The Machine is the largest R&D program in the history of the company.
The single most important feature of the mega-server is its 160TB of memory; no single server on Earth comes close to that capacity. The Machine has more than three times the memory capacity of HPE’s Superdome X and anything IBM puts into a data center.
The Machine comprises 1,280 Cavium ARM CPU cores. The memory and 40 32-core ARM chips, separated into four Apollo 6000 enclosures, are linked by an ultra-fast fabric interconnect. Multiple co-processors, as many as a user needs, can be plugged in for whatever size workload is in play.
The connections are arranged in a mesh network so memory and processor nodes can easily communicate with each other. FPGAs (field-programmable gate arrays) provide the controller logic for the fabric.
“This is memory that every processor in the system can directly address through the load/store mechanism,” Wheeler said. “It’s not an I/O block like an SSD or storage; it’s literally main memory.”
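The load/store distinction Wheeler draws can be illustrated with ordinary operating-system memory mapping. The sketch below is purely illustrative and uses the standard `mmap` facility, not HPE's fabric-attached memory interface: block-style access goes through explicit read/write calls, while a memory-mapped region is touched with plain indexing, the way a CPU addresses main memory.

```python
import mmap
import os

# Illustrative sketch only: contrasts block-style I/O with
# load/store-style access to a memory-mapped region. This is
# ordinary OS mmap, not HPE's actual fabric-memory API.

path = "demo.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)  # one page of zeroes

# Block-style access: explicit seek/write calls through the I/O stack.
with open(path, "r+b") as f:
    f.seek(128)
    f.write(b"\x2a")  # write the byte value 42 at offset 128

# Load/store-style access: the file is mapped into the address
# space, so bytes are read and written with plain indexing.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mem:
        value = mem[128]        # a "load"
        mem[129] = value + 1    # a "store"
        print(value, mem[129])  # -> prints 42 43

os.remove(path)  # clean up the demo file
```

The point of the contrast is that in a memory-driven design there is no translation through a storage stack at all; data structures are addressed in place.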
The prototype unveiled May 16 is capable of simultaneously working with the data held in every book in the Library of Congress five times over, or approximately 160 million books. It has never been possible to hold and manipulate whole data sets of this size in a single-memory system; this is just a glimpse of the immense potential of Memory-Driven Computing, Wheeler said.
Scalability and Societal Implications
Based on the current prototype, HPE expects the architecture could easily scale to an exabyte-scale single-memory system and, beyond that, to a nearly limitless pool of memory: 4,096 yottabytes. For context, that is 250,000 times the entire digital universe today, HPE said.
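HPE's 250,000x comparison can be sanity-checked with back-of-envelope arithmetic. The ~16-zettabyte size of the 2017 digital universe used below is IDC's widely cited estimate, assumed here rather than stated in the article:

```python
# Back-of-envelope check of HPE's "250,000 times the digital
# universe" comparison. The ~16 ZB figure for 2017 is an assumed
# estimate (widely attributed to IDC), not taken from HPE.

YB = 10**24  # bytes in a yottabyte
ZB = 10**21  # bytes in a zettabyte

pool = 4_096 * YB           # HPE's cited upper bound on the memory pool
digital_universe = 16 * ZB  # assumed size of the 2017 digital universe

ratio = pool / digital_universe
print(f"{ratio:,.0f}x")  # -> prints 256,000x
```

At roughly 256,000x, the figure is consistent with HPE's rounded "250,000 times" claim.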
With that amount of memory, it will be possible to work simultaneously with every digital health record of every person on earth, every piece of data from Facebook, every trip of Google’s autonomous vehicles and every data set from space exploration, getting to answers and uncovering new opportunities at unprecedented speeds.
Memory-Driven Computing puts memory, not the processor, at the center of the computing architecture, Wheeler said.
Technical Specs
The new prototype features the following:
–160 TB of shared memory spread across 40 physical nodes, interconnected using a high-performance fabric protocol;
–an optimized Linux-based operating system (OS) running on ThunderX2, Cavium’s flagship second-generation, dual-socket-capable ARMv8-A workload-optimized system-on-a-chip (SoC);
–photonics/optical communication links, including the new X1 photonics module, are online and operational; and
–software programming tools designed to take advantage of abundant persistent memory.
For more information about Memory-Driven Computing and The Machine research program, go here.