AI feeds on a diet of big data; gaining insight from these enormous data sets demands powerful compute capabilities. This is why industries such as healthcare, automotive, manufacturing and finance are turning to HPC to rapidly accelerate the understanding they can get from their information.
Cheaper than a supercomputer
Unlike supercomputers, which are hugely expensive and require specialized experts, HPC provides a way of affordably aggregating compute power. According to the High Performance Computing Advisory Council (HPCAC), there is no clear definition of HPC other than doing exactly what it says on the tin: providing high performance computing. The HPCAC has noted a growing trend away from specialized supercomputers toward clusters of commoditized, affordable and scalable general-purpose systems, which has put the technology within the reach of more enterprises.
Orange Business, for example, is looking to expand the graphics processing unit (GPU) specific hardware for its Flexible Engine public cloud platform, an IaaS/PaaS platform developed in partnership with Huawei, to support HPC. It will be complemented by InfiniBand, a networking communications standard used in HPC that provides very high throughput and very low latency. This will open up HPC to enterprises for applications such as big data and high performance analytics, AI, machine learning and IoT.
Higher power, on tap
The arrival of HPC as a service (HPCaaS) is also enabling greater system flexibility, giving enterprises on-demand access to scalable, high performance compute resources. Access to this type of compute power makes R&D processes much leaner, cutting time to market. Instead of wasting time testing every prototype, for example, HPC and AI can be invaluable in filtering prototypes and picking out the best performing models to move forward.
HPC is already being used in engineering simulations and predictive maintenance. OTTO Motors in Canada, for example, uses HPC to run petabytes of simulation data hundreds of thousands of times to test its autonomous vehicles before they are deployed to customers. The heavy-duty driverless vehicles are designed to move pallets and massive payloads in dynamic production spaces.
HPC also powers the AI for these autonomous vehicles, which enables them to build maps, make decisions about moving around their environment and predict any issues.
HPC systems are designed to handle huge volumes of data, making them more than capable of supporting high performance data analysis. Their innate power provides faster data processing with high accuracy. It is for these reasons that AI and deep learning will drive the HPC market to hit $43 billion by 2021, up from around $20 billion in 2016, according to HPC industry watcher Intersect360 Research, which has singled them out as the key drivers for HPC growth in the commercial sector.
The emergence of AI on HPC is critical to achieving the potential of AI, according to Pradeep Dubey, Intel fellow and director of the Parallel Computing Lab, part of Intel Labs. “Machines that don’t just crunch numbers but help us make better and more informed complex decisions,” he explains. It is this combination that Dubey believes will, in the future, “help make important business decisions in real time and more”.
The collaboration between AI and HPC is relatively straightforward on the surface. AI applications and associated technologies, such as machine learning and deep learning, allow organizations to train systems using enormous amounts of data to gain insight. HPC clusters provide the horsepower to join the dots between this data in hours as opposed to weeks.
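The divide-and-conquer pattern behind that speed-up can be sketched in a few lines. This is a toy illustration, not production HPC code: the `train_on_shard` function and its simple sum are stand-ins for real training work, and a real cluster would distribute shards across nodes rather than local processes.

```python
from multiprocessing import Pool

def train_on_shard(shard):
    """Stand-in for processing one slice of a large data set.
    Here it just sums the shard; a real job would fit model parameters."""
    return sum(shard)

def parallel_train(data, workers=4):
    # Split the data into one shard per worker and process the shards
    # concurrently, then combine the partial results -- the same pattern
    # an HPC cluster applies across many nodes to turn weeks into hours.
    shards = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(train_on_shard, shards)
    return sum(partials)

if __name__ == "__main__":
    print(parallel_train(list(range(1_000_000))))
```

The combined result is identical to the serial one; only the wall-clock time changes as workers (or nodes) are added.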
“From concept to engineering, from design to test and manufacturing, from weather prediction to medical discoveries, our day-to-day life depends more and more on HPC simulations,” said a spokesperson for the HPCAC.
Real-time analysis and insight depend on fast throughput and data sharing. InfiniBand, which offers high throughput and low latency, is still one of the most used interconnect standards, according to the HPCAC, along with 10 Gigabit Ethernet, which can rapidly push traffic between cluster servers or nodes.
The attraction of HPC is that it is totally adaptable to different workloads and industries. Software-defined infrastructures have made it possible to assign a defined number of cores to particular tasks, optimizing performance and speeding up processes. AI and HPC are already being used to test components without the need to build physical prototypes. In the future, AI and HPC could help to further optimize complex logistics and supply chains, and underpin the modeling and simulation of real manufacturing to iron out any problems, for example.
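Assigning a fixed pool of cores to a batch of simulated component tests can be sketched as follows. This is a minimal, hypothetical example: `simulate_component` and its toy "stress score" stand in for a real engineering simulation, and a cluster scheduler would make the same core-allocation decision across whole nodes rather than one machine.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def simulate_component(params):
    # Stand-in for one virtual test run; returns a toy "stress score".
    return sum(p * p for p in params)

def run_batch(jobs, cores=None):
    # Dedicate a chosen number of cores to this workload -- the
    # software-defined equivalent of carving out part of a cluster.
    cores = cores or os.cpu_count()
    with ProcessPoolExecutor(max_workers=cores) as pool:
        return list(pool.map(simulate_component, jobs))

if __name__ == "__main__":
    print(run_batch([(1, 2), (3, 4)], cores=2))  # -> [5, 25]
```

Raising or lowering `cores` trades resource cost against turnaround time without changing the results, which is what makes the approach adaptable across workloads.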
HPC systems have long been used in science and academia for data-intensive workloads in simulation and data analytics. It is little surprise that business has decided to learn from those experiences and embrace the power of HPC for its own AI and data analysis, to get ahead of the data insight curve.
Jan has been writing about technology for over 22 years for magazines and web sites, including ComputerActive, IQ magazine and Signum. She has been a business correspondent on ComputerWorld in Sydney and covered the channel for Ziff-Davis in New York.