Machine learning

GPU servers for machine learning

Greater computing power via GPU deployment

Machine learning, neural networks and deep learning are disciplines in the field of artificial intelligence that have experienced an enormous boom in recent years and are increasingly shaping our lives – from voice assistants to autonomous vehicles. Most machine learning algorithms require high computing power. High-performance servers powered by highly specialized graphics cards with thousands of cores are particularly suited to these applications.

Applications for machine learning

Machine learning has opened up completely new possibilities for the long-standing development of artificial intelligence in science, research and industry – particularly through neural networks and deep learning.

For example, adaptive voice assistants such as Siri (Apple), Alexa (Amazon), Google Assistant and Cortana (Microsoft) are based on deep learning. Machine learning systems are also increasingly being used in Industry 4.0, for example to equip autonomous systems with AI, build networks of smart IoT devices or make big data usable by evaluating sensor data. Medicine leverages the strengths of machine learning primarily in diagnostics, when evaluating medical imaging. Last but not least, machine learning algorithms are increasingly used in IT security to improve the detection of anomalies in critical systems.

Machine learning methods are primarily used when patterns need to be recognized in very large volumes of unstructured data, such as images, sensor readings, transactions, video or even spoken language. The more data an algorithm is given, the better the system learns to distinguish typical from anomalous behavior. The number of computing operations required for this can be extremely high, but these operations have one key advantage: they can be parallelized far more readily than in many other applications.
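As a minimal sketch of why such workloads parallelize so well (the sensor readings, baseline value and chunk size below are all invented for the example), consider counting anomalous readings: each slice of the data can be scanned independently, with no coordination between workers.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sensor readings; a real workload would stream millions of values.
readings = [10.1, 9.8, 10.3, 55.0, 10.0, 9.9, 48.7, 10.2, 10.1, 9.7, 10.4, 10.0]
NORMAL = 10.0      # assumed known normal operating value (illustrative)
TOLERANCE = 5.0    # deviation beyond which a reading counts as anomalous

def count_outliers(chunk):
    # Each chunk is scanned independently of every other chunk:
    # exactly the structure that maps well onto many parallel cores.
    return sum(1 for x in chunk if abs(x - NORMAL) > TOLERANCE)

# Split the data into independent chunks and scan them concurrently.
chunks = [readings[i:i + 4] for i in range(0, len(readings), 4)]
with ThreadPoolExecutor() as pool:
    total_outliers = sum(pool.map(count_outliers, chunks))

print(total_outliers)  # -> 2 (the readings 55.0 and 48.7)
```

A GPU applies the same one-worker-per-data-slice pattern, but with thousands of hardware cores instead of a handful of threads, which is why these data-parallel workloads benefit so strongly from it.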

GPU computing for machine learning

Machine learning workloads generally place very high demands on the hardware in terms of computing power, heat dissipation, power consumption and memory bandwidth. Conventional processors (CPUs), with their relatively small number of power-hungry cores, are poorly suited to the algorithms themselves – especially for deep learning with neural networks. The algorithms are therefore usually offloaded from the CPU to graphics processors (GPUs), whose architecture provides thousands of logical cores. NVIDIA in particular develops high-performance GPUs that are designed specifically for this kind of parallel data processing and are programmed via its CUDA platform. Specialized GPUs such as the NVIDIA Tesla series and its successors accelerate machine learning applications considerably: even a single Volta-generation GPU card often runs these applications around 40 times faster than would be possible with CPUs alone – and our GPU servers offer space for up to eight GPU cards.

Do you have any questions about our machine learning GPU servers, or do you need help designing your GPU server? Our experienced experts will be happy to advise you on the configuration of your new server system for machine learning.