Machine learning acceleration for the data center

A new scalable accelerator is available in the data center today.

Emerging machine learning workloads need tightly coupled hardware and software solutions to meet stringent latency, cost and power requirements. At the heart of these accelerators are multiple of Myrtle’s proprietary, performant and highly efficient MAU™ cores. These cores can be deployed immediately in the world’s most popular cloud infrastructure.

Myrtle provides software tools that create purpose-optimized accelerators for your recurrent neural network (RNN) workloads.

At the edge

Our expertise in compressing recurrent neural networks allows us to do more at the edge today. Our designs are re-configurable, future-proofing them for whatever AI may come tomorrow.

Real-time voice channels supported in a single chip
Faster inference than a multi-core Xeon
Performance-per-watt improvement compared with a V100 GPU
1/29 the latency for batch-1 inference versus a V100 GPU

Why this matters

Machine learning, our flagship expertise, is expected to fuel a significant proportion of data center growth. Our vision is to produce data center solutions that accelerate machine learning whilst reducing its power consumption: advancing the vast potential of machine learning to enhance our lives without costing the planet.

Machine learning models change so fast that hardware optimized for current models can quickly become inefficient. We view machine learning optimization as a three-part problem: models must be designed with close attention to their numerics; to their quantization, sparsity and compression; and with a clear understanding of the hardware cost of implementing them on each available platform.
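To make the quantization idea concrete, here is a minimal sketch of symmetric post-training int8 weight quantization. This is a generic illustration of the technique, not Myrtle's actual tooling; all function names are hypothetical.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0  # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

# Quantize a small random weight matrix and measure the worst-case error.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = np.max(np.abs(w - w_hat))
print(f"max quantization error: {max_err:.4f} (scale step: {scale:.4f})")
```

Because rounding is to the nearest quantization step, the reconstruction error is bounded by half the scale step, which is the trade-off the hardware cost analysis weighs against the 4x storage and bandwidth savings of int8 over float32.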

Only by understanding machine learning model behavior, efficiencies in model computation and optimal design of computation hardware, can we generate truly efficient solutions. We advocate algorithm-accelerator co-design to create world class solutions. Models have unique properties that can be capitalized upon. New data center boards from Xilinx and Intel allow optimized data structures to be continually redesigned and deployed, allowing new machine learning models to be refined for performance and power consumption.

We target recurrent neural networks, which account for 29% of all data center machine learning inference, powering major services such as speech synthesis, speech transcription and machine translation.

Speech Synthesis
Speech Transcription
Machine Translation
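These workloads share the same recurrent structure. A minimal vanilla (Elman) RNN cell, sketched below in NumPy purely for illustration, shows why batch-1 inference is latency-bound: each time step depends on the previous hidden state, so the matrix-vector products must run sequentially and cannot be batched across time.

```python
import numpy as np

def rnn_step(x, h, W_x, W_h, b):
    """One step of a vanilla RNN cell: h' = tanh(W_x @ x + W_h @ h + b)."""
    return np.tanh(W_x @ x + W_h @ h + b)

rng = np.random.default_rng(1)
input_dim, hidden_dim, seq_len = 8, 16, 5
W_x = rng.standard_normal((hidden_dim, input_dim)) * 0.1
W_h = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
b = np.zeros(hidden_dim)

# Batch-1 inference over a sequence: the loop is inherently sequential,
# since step t needs the hidden state produced by step t-1.
h = np.zeros(hidden_dim)
for t in range(seq_len):
    x_t = rng.standard_normal(input_dim)
    h = rnn_step(x_t, h, W_x, W_h, b)

print("final hidden state norm:", float(np.linalg.norm(h)))
```

This sequential dependency is what makes low-latency accelerators for batch-1 RNN inference a distinct design problem from throughput-oriented GPU batching.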

Trusted By

One of ten global MLPerf benchmark owners