RedNeurons (Shanghai) Information Technology Co., Ltd. announced today the completion of the Tensor MPU2016 High Performance Embedded HPC technology demonstration and development platform. RedNeurons’ technology plan is designed to reduce the cost per Giga-flop/sec for the largest supercomputers, which currently ranges from 500 to 1,500 US dollars, to less than half that amount before the end of 2009.
“This product is a key milestone in delivering to the world’s scientific community a realistic, cost-effective method of achieving performance in the Peta-flop/sec (1,000 Tera-flops per second) range on standard benchmarks such as Linpack,” stated Yuefan Deng, PhD (Columbia University), CEO of RedNeurons and Professor of Applied Mathematics at Stony Brook University (SUNY). A 20-year veteran of the HPC research field, Dr. Deng added, “Tensor is an apt name for this patent-pending architecture, as it embodies a true multi-dimensional full-mesh topology. Tensor equations were favored by Einstein as a simple way to describe multi-dimensionality. By balancing the network fabric evenly with the processor, we have managed to reduce cabling complexity, increase scalability, and preserve the use of standard processors and HPC legacy programs created with standard high-level languages and MPI (Message Passing Interface) functions.”
Jack Dongarra, Distinguished Professor of Computer Science at the University of Tennessee and primary author of the Linpack benchmark library, said, “Dr. Deng’s team in Shanghai has designed and completed initial benchmark testing on a new HPC platform in under 12 months; this is unprecedented.”
Chi Xuebin, PhD (Chinese Academy of Sciences, CAS), a frequent contributor in the HPC research field, stated, “RedNeurons has an approach that offers a startling advantage in acquisition and operational costs over other approaches; their MPU system integrates the network, computational, and storage resources in a very beneficial, balanced manner.”
The Tensor MPU2016, a second-generation HPC development platform developed with support from the People’s Republic of China Ministry of Science and Technology and the Shanghai Science and Technology Commission, is currently being used to develop the interconnect hardware and software logic for the third-generation RedNeurons Tensor MPU3064 platform, which will form the foundation for a 100 Tera-flop/sec machine slated for construction next year.
According to RedNeurons’ CTO, Alex Korobka, PhD (SUNY-Stony Brook), “The Tensor MPU2016, with 16 processor cards containing Freescale 8641D SoC (system-on-chip) processors and Xilinx Virtex-4 FX FPGAs, is an ideal platform for companies working on high-performance solutions for the embedded systems market. MPU, or Master Processing Unit, is a novel approach that provides high density and reliability while preserving CPU and interconnect flexibility. Initial performance tests achieved a High Performance Linpack (HPL) benchmark score of 32 Giga-flops in a single-chassis configuration, tripling the performance demonstrated by the prototype Tensor MPU1016 system produced by RedNeurons in the first quarter of 2007.”
RedNeurons (redneurons.com) is a leading High Performance Computing technology design firm specializing in the use of embedded systems components and advanced interconnection architectures to drive practical and cost-effective HPC across a broad range of form factors and sizes.
Please Note: This press announcement may be freely reproduced in its entirety, or quoted with advance approval, incorporating appropriate attribution and with a notice sent to the media contact noted herein. All trademarks are the property of their respective owners. The information in this document is believed to be correct but cannot be guaranteed. Opinions constitute our judgment as of this date and are subject to change without notice. This document is not intended as an offer or solicitation to buy or sell securities.