QTML 2025: Scalable Neural Decoders for Practical Real-Time Quantum Error Correction
Author: Centre for Quantum Technologies
Uploaded: 2026-03-11
Description:
Authors: Changwon Lee, Tak Hur and Daniel Kyungdeock Park
Abstract: Implementing efficient and scalable decoders for quantum error correction is essential for practical quantum computing. Recurrent transformer-based architectures such as AlphaQubit achieve high decoding accuracy but suffer from prohibitive computational costs. To address this, we introduce a Mamba decoder that replaces each Multi-Head Attention block of AlphaQubit with a Mamba module.
On Google’s Sycamore memory experiment, our Mamba decoder matches transformer-level performance, achieving logical error rates of $2.98\times10^{-2}$ at distance 3 and $3.03\times10^{-2}$ at distance 5. We further evaluate real-time performance over 400 cycles with a latency-dependent noise model tied to computational complexity.
The transformer’s prohibitive $O(d^4)$ complexity leads to a severe accumulation of decoder-induced errors, whereas the Mamba decoder’s efficient $O(d^2)$ scaling avoids this problem, demonstrating more robust performance. Our results thus highlight Mamba’s superior speed-accuracy trade-off, establishing it as a viable architecture for large-scale, real-time decoders for quantum error correction.
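The scaling argument above can be made concrete with a back-of-the-envelope sketch. A distance-$d$ surface code produces roughly $d^2$ stabilizer measurements per cycle, so a decoder that attends over all tokens in a cycle pays $O(d^4)$ per cycle, while a linear-time state-space scan pays $O(d^2)$. The token count and unit operation costs below are illustrative assumptions for this sketch, not figures from the talk:

```python
# Illustrative sketch (not the authors' code): per-cycle operation
# counts for an attention-based decoder vs. a linear-time (Mamba-style)
# state-space decoder on a distance-d surface code.

def stabilizer_count(d: int) -> int:
    """A distance-d surface code has d^2 - 1 stabilizers per cycle."""
    return d * d - 1

def attention_ops(d: int) -> int:
    """Self-attention over n tokens costs ~n^2 ops: O(d^4) in d."""
    n = stabilizer_count(d)
    return n * n

def mamba_ops(d: int) -> int:
    """A linear-time state-space scan costs ~n ops: O(d^2) in d."""
    return stabilizer_count(d)

for d in (3, 5, 11, 25):
    n = stabilizer_count(d)
    print(f"d={d:2d}  tokens/cycle={n:4d}  "
          f"attention~{attention_ops(d):9d}  scan~{mamba_ops(d):4d}")
```

At $d = 3$ the gap is modest (64 vs. 8 unit operations per cycle), but at $d = 25$ attention costs ~389,000× the tokens per cycle while the scan stays linear, which is why decoder-induced latency errors accumulate for the transformer but not for the Mamba decoder in the 400-cycle experiment.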
This talk was presented at the Quantum Techniques in Machine Learning (QTML) 2025 conference in Singapore.