AI Inference

Transforming data, shaping the future

AI inference is the crucial step in which our trained models apply their acquired knowledge to make precise decisions or predictions based on new data. By training our AI models on large data sets, they gain the ability to recognize patterns and process information. In the inference phase, these models use the learned knowledge to generate relevant insights or solve specific problems. This process allows our AI to respond to queries in real time and deliver informed decisions. In turn, we can apply these responses to practical applications in various fields.
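The training/inference split described above can be illustrated with a minimal, self-contained sketch (pure Python, no ML framework; the least-squares fit and the `train`/`infer` names are illustrative only, not part of any product): training derives parameters from historical data once, and inference then applies those frozen parameters to new inputs.

```python
# Illustrative sketch of the training/inference split.
# "Training" fits a least-squares line to known data; "inference" applies
# the frozen, learned parameters to unseen inputs without retraining.

def train(xs, ys):
    # Learning phase: ordinary least squares for y = a*x + b.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def infer(params, x):
    # Inference phase: apply the learned parameters to new data in real time.
    a, b = params
    return a * x + b

params = train([1, 2, 3, 4], [2, 4, 6, 8])  # model learns y = 2x
print(infer(params, 10))                    # prediction for an unseen input
```

Real-world inference works the same way at a much larger scale: the expensive learning phase happens once on large data sets, while the deployed model only evaluates learned weights against incoming data, which is the workload the systems below are sized for.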

MIC-711D-ON
Highlights
Server (actively cooled) in mini-format with graphics chip for AI inference.
The development kit does not have a chassis cover.
  • Incl. 4 GB RAM
  • Upgradeable to 8 GB RAM (1 DIMM)
  • Incl. 128 GB M.2 NVMe
  • 2x Nano SIM card holder
  • 51 mm (H)
  • 125 mm (W)
  • 125 mm (D)
  • 1 x Gbit/s LAN (RJ-45)
  • Price incl. Arm Cortex-A78AE (6 cores)
starting at 450
MIC-711D-OX
Highlights
Server (actively cooled) in mini-format with graphics chip for AI inference.
The development kit does not have a chassis cover.
  • Incl. 8 GB RAM
  • Upgradeable to 16 GB RAM (1 DIMM)
  • Incl. 128 GB M.2 NVMe
  • 2x Nano SIM card holder
  • 51 mm (H)
  • 125 mm (W)
  • 125 mm (D)
  • 1 x Gbit/s LAN (RJ-45)
  • Price incl. Arm Cortex-A78AE (6 cores)
starting at 715
MIC-711-ON
Highlights
Silent server in mini format with graphics chip for AI inference.
  • Incl. 4 GB RAM
  • Upgradeable to 8 GB RAM (1 DIMM)
  • Incl. 128 GB M.2 NVMe
  • 2x Nano SIM card holder
  • 46 mm (H)
  • 130 mm (W)
  • 130 mm (D)
  • 1 x Gbit/s LAN (RJ-45)
  • Price incl. Arm Cortex-A78AE (6 cores)
starting at 685
MIC-711-OX
Highlights
Silent server in mini format with graphics chip for AI inference.
  • Incl. 8 GB RAM
  • Upgradeable to 16 GB RAM (1 DIMM)
  • Incl. 128 GB M.2 NVMe
  • 2x Nano SIM card holder
  • 46 mm (H)
  • 130 mm (W)
  • 130 mm (D)
  • 1 x Gbit/s LAN (RJ-45)
  • Price incl. Arm Cortex-A78AE (6 cores)
starting at 1,085
MIC-743-AT
Highlights
Embedded system with NVIDIA Jetson T5000, delivering up to 2,070 TFLOPS (FP4)
  • NVIDIA Jetson T5000
  • 2560 NVIDIA® CUDA® cores, 96 5th-gen Tensor cores, MAXN: 1.57 GHz
  • 128 GB LPDDR5X
  • 1x QSFP28 (4x 25GbE)
  • 1x 5GbE, 4x USB 3.2 Gen 2, 2x M.2
  • Supports a wide range of large AI models for generative AI applications
starting at 5,175
NEW
NVIDIA DGX Spark
  • GPU architecture: NVIDIA Grace Blackwell
  • Tensor computing units: 5th generation
  • RT computing units: 4th generation
  • Tensor performance: 1,000 AI TOPS
  • CUDA computing units: Blackwell generation
  • CPU: 10x ARM Cortex-X925 + 10x ARM Cortex-A725
  • Memory: 128 GB LPDDR5x
Price on request

All prices are net prices and do not include statutory VAT; they are intended exclusively for entrepreneurs (Section 14 of the German Civil Code (BGB)), legal entities under public law and special funds under public law.

AI Inference in use: case studies & success stories

Advantech provides AI-powered solutions for coffee producers to analyze and sort beans based on various characteristics across the supply and value chain.

Read the case study now

Harvesting robots equipped with AI models and image processing can recognize ripe fruit and collect it with a robotic arm, resulting in an efficient and accurate fruit harvest.

Read the case study now

With AI-based early detection systems for animal health management, farmers can identify sick cows and take immediate action to prevent the further spread of disease.

Read the case study now

NVIDIA Metropolis e-book

This e-book on Metropolis gives you a comprehensive overview of the new generation of AI applications.

Read the NVIDIA Metropolis e-book NOW

Optimized hardware for AI inference

When processing image or speech data, a large number of complex connections must be weighted and evaluated. The computation can therefore take a long time and place high demands on the CPU(s), RAM and power supply. A suitable hardware configuration helps you avoid unnecessarily long compute times, and our online shop offers high-performance systems optimized for exactly this purpose.

Efficiency through automation

An AI can only make qualified decisions that serve your business objectives after an extensive learning phase, in which it analyzes a large amount of relevant data that forms the basis for its future decisions. Once this learning phase is complete, AI inference lets the model make decisions in your company's interest fully autonomously, saving you time, effort and resources.

Would you like to learn more about AI inference at Thomas-Krenn?
Get in touch!
