Articles

16 Publications

We present the Hybrid Polar Decoder (HyPD), a hybrid classical–quantum decoder design for Polar error correction codes, which are becoming widespread in today’s 5G and tomorrow’s 6G networks. HyPD employs CMOS processing for the Polar decoder’s binary tree traversal, and Quantum Annealing (QA) processing for the Quantum Polar Decoder (QPD), a Maximum-Likelihood QA-based Polar decoder submodule. QPD’s design efficiently transforms a Polar decoder into a quadratic polynomial optimization form, then maps this polynomial onto the physical QA hardware via QPD-MAP, a customized problem mapping scheme tailored to QPD. We have experimentally evaluated HyPD on a state-of-the-art QA device with 5,627 qubits, for 5G-NR Polar codes with a block length of 1,024 bits, in Rayleigh fading channels. Our results show that HyPD outperforms Successive Cancellation List decoders of list size eight by half an order of magnitude in bit error rate, and achieves a 1,500-byte frame delivery rate of 99.1%, at 1 dB signal-to-noise ratio. Further studies present QA compute time considerations. We also propose QPD-HW, a novel QA hardware topology tailored for the task of decoding Polar codes. QPD-HW is sparse, flexible to code rate and block length, and may be of potential interest to the designers of tomorrow’s 6G wireless networks.
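The abstract describes transforming a decoder into a quadratic polynomial (QUBO) optimization that QA hardware can minimize. As a hedged illustration of that general idea only — not the paper's actual QPD or QPD-MAP scheme — the sketch below builds a QUBO for maximum-likelihood decoding of a toy 3-bit repetition code; all names here are hypothetical.

```python
import itertools

def repetition_qubo(y, penalty=4.0):
    """Build a QUBO for maximum-likelihood decoding of a length-n
    repetition code under BPSK (bit x -> symbol 1 - 2x).

    Minimizes sum_i (y_i - (1 - 2 x_i))^2, with the all-bits-equal
    constraint enforced by quadratic penalties penalty * (x_i - x_j)^2.
    Constant terms are dropped; Q maps (i, j) with i <= j to coefficients.
    """
    n = len(y)
    Q = {}
    # Channel term: (y_i - 1 + 2 x_i)^2 expands to 4 (y_i - 1) x_i + 4 x_i^2
    # plus a constant, and x_i^2 == x_i for binary variables.
    for i in range(n):
        Q[(i, i)] = 4.0 * (y[i] - 1.0) + 4.0
    # Penalty term: (x_i - x_j)^2 == x_i + x_j - 2 x_i x_j for binary vars.
    for i in range(n):
        for j in range(i + 1, n):
            Q[(i, i)] += penalty
            Q[(j, j)] += penalty
            Q[(i, j)] = -2.0 * penalty
    return Q

def brute_force_min(Q, n):
    """Stand-in for the annealer: exhaustively minimize the QUBO."""
    def cost(bits):
        return sum(c * bits[i] * bits[j] for (i, j), c in Q.items())
    return min(itertools.product([0, 1], repeat=n), key=cost)
```

On real QA hardware the dictionary `Q` would be handed to the annealer (after embedding onto the qubit topology) rather than enumerated; the exhaustive search here only makes the tiny example self-checking.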

In order to meet mobile cellular users’ ever-increasing data demands, today’s 4G and 5G networks are designed mainly with the goal of maximizing spectral efficiency. While they have made progress in this regard, controlling the carbon footprint and operational costs of such networks remains a long-standing problem among network designers. This paper takes a long view on this problem, envisioning a NextG scenario where the network leverages quantum annealing for cellular baseband processing. We gather and synthesize insights on power consumption, computational throughput and latency, spectral efficiency, operational cost, and feasibility timelines surrounding quantum annealing technology. Armed with these data, we analyze and project the quantitative performance targets future quantum annealing hardware must meet in order to provide a computational and power advantage over CMOS hardware while matching its whole-network spectral efficiency. Our quantitative analysis predicts that quantum annealing hardware operating at an 82.32 μs problem latency with 2.68M qubits will achieve spectral efficiency equal to CMOS computation while reducing power consumption by 41 kW (45% lower) in a 5G base station scenario with 400 MHz bandwidth and 64 antennas, and by 160 kW (55% lower) using 8.04M qubits in a C-RAN setting with three 5G base stations.
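The reported savings and percentages imply CMOS baseline power figures, which can be back-solved directly (assuming, as seems intended, that the percentages are relative to the CMOS baseline):

```python
# Implied CMOS baseline power from the abstract's figures:
# a 41 kW saving that is "45% lower" implies a ~91 kW baseline,
# and a 160 kW saving that is "55% lower" implies a ~291 kW baseline.
single_bs_baseline = 41 / 0.45   # kW, one 5G base station (400 MHz, 64 antennas)
cran_baseline = 160 / 0.55       # kW, C-RAN with three 5G base stations
```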

Forward Error Correction (FEC) provides reliable data flow in wireless networks despite the presence of noise and interference. However, its processing demands a significant fraction of a wireless network’s resources, due to its computationally expensive decoding process. This forces network designers to compromise between performance and implementation complexity. In this paper, we investigate a novel processing architecture for FEC decoding, one based on the quantum approximate optimization algorithm (QAOA), to evaluate the potential of this emerging quantum compute approach in resolving the decoding performance–complexity tradeoff.

We present FDeQ, a QAOA-based FEC Decoder design targeting the popular NextG wireless Low Density Parity Check (LDPC) and Polar codes. To accelerate QAOA-based decoding towards practical utility, FDeQ exploits temporal similarity among the FEC decoding tasks. This similarity is enabled by the fixed structure of a particular FEC code, which is independent of any time-varying wireless channel noise, ambient interference, and even the payload data. We evaluate FDeQ at a variety of system parameter settings in both ideal (noiseless) and noisy QAOA simulations, and show that FDeQ achieves successful decoding with error performance on par with state-of-the-art classical decoders at short FEC code block lengths. Furthermore, we present a holistic resource estimation analysis, projecting quantitative targets for future quantum devices in terms of the required qubit count and gate duration, for the application of FDeQ in practical wireless networks, highlighting scenarios where FDeQ may outperform state-of-the-art classical FEC decoders.
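FDeQ's actual cost Hamiltonian is not given here, but the kind of objective a QAOA decoder minimizes has a simple classical counterpart: penalize unsatisfied parity checks plus Hamming distance to the received word. The sketch below illustrates that objective on a toy parity-check matrix (an assumption for illustration, not an actual 5G-NR LDPC or Polar code).

```python
import itertools

# Toy 3 x 6 parity-check matrix; rows are parity checks.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def decoding_cost(x, r, weight=2.0):
    """Classical counterpart of a QAOA decoding cost: weighted count of
    unsatisfied parity checks plus Hamming distance to received word r."""
    unsat = sum(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)
    dist = sum(xi != ri for xi, ri in zip(x, r))
    return weight * unsat + dist

def ml_decode(r):
    """Exhaustive minimization, standing in for the QAOA sampling loop."""
    return min(itertools.product([0, 1], repeat=len(r)),
               key=lambda x: decoding_cost(x, r))
```

A QAOA decoder would encode `decoding_cost` as a diagonal cost Hamiltonian and sample low-cost bitstrings from the circuit; the fixed structure of `H` is what makes the circuit reusable across received words, which is the temporal similarity the abstract exploits.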

With unprecedented increases in traffic load in today's wireless networks, design challenges shift from the wireless network itself to the computational support behind the wireless network. In this vein, there is new interest in quantum-compute approaches because of their potential to substantially speed up processing, and so improve network throughput. However, quantum hardware that actually exists today is much more susceptible to computational errors than silicon-based hardware, due to the physical phenomena of decoherence and noise. This paper explores the boundary between the two types of computation, classical–quantum hybrid processing for optimization problems in wireless systems, envisioning how wireless can simultaneously leverage the benefits of both approaches. We explore the feasibility of a hybrid system with a real hardware prototype using one of the most advanced experimentally available techniques today, reverse quantum annealing. Preliminary results on a low-latency, large MIMO system envisioned in the 5G New Radio roadmap are encouraging, showing approximately 2–10× better performance in terms of processing time than prior published results.
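The optimization problem being offloaded in this line of work is maximum-likelihood (ML) MIMO detection: find the transmitted symbol vector minimizing the residual against the received signal. A minimal brute-force reference for a tiny BPSK system is sketched below (illustrative only — the paper's approach is reverse quantum annealing, not exhaustive search, precisely because this search is exponential in the number of spatial streams).

```python
import itertools

def ml_detect(Hm, y, symbols=(-1, 1)):
    """Exhaustive ML MIMO detection: minimize ||y - Hm @ x||^2 over all
    candidate symbol vectors x drawn from the given constellation.
    Complexity is |symbols|**n_streams, motivating annealing heuristics."""
    n = len(Hm[0])

    def residual(x):
        return sum((yi - sum(hij * xj for hij, xj in zip(row, x))) ** 2
                   for row, yi in zip(Hm, y))

    return min(itertools.product(symbols, repeat=n), key=residual)
```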

Overcoming the conventional trade-off between throughput and bit error rate (BER) performance versus computational complexity is a long-term challenge for uplink Multiple-Input Multiple-Output (MIMO) detection in base station design, both for the cellular 5G New Radio roadmap and for next-generation wireless local area networks. In this work, we present ParaMax, a MIMO detector architecture that for the first time brings to bear physics-inspired parallel tempering algorithmic techniques [28, 50, 67] on this class of problems. ParaMax can achieve near-optimal maximum-likelihood (ML) throughput performance in the Large MIMO regime: Massive MIMO systems where the base station has additional RF chains, approaching the number of base station antennas, in order to support even more parallel spatial streams. ParaMax achieves near-ML BER performance up to 160 × 160 and 80 × 80 Large MIMO for low-order modulations such as BPSK and QPSK, respectively, while requiring fewer than tens of processing elements. With respect to Massive MIMO systems, in 12 × 24 MIMO with 16-QAM at 16 dB SNR, ParaMax achieves 330 Mbits/s near-optimal system throughput with 4–8 processing elements per subcarrier, approximately 1.4× the throughput of linear detector-based Massive MIMO systems.
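Parallel tempering itself can be sketched in a few lines: several Metropolis chains run at different temperatures on the same energy landscape, periodically swapping replicas so that hot chains explore and cold chains refine. The toy below (a hedged illustration on a small Ising instance, not ParaMax's implementation or parameters) finds the ground state of a 4-spin ferromagnet.

```python
import math
import random

def energy(J, s):
    """Ising energy E(s) = -sum_{i<j} J[i][j] * s_i * s_j."""
    n = len(s)
    return -sum(J[i][j] * s[i] * s[j]
                for i in range(n) for j in range(i + 1, n))

def parallel_tempering(J, betas=(0.1, 0.5, 1.0, 2.0), sweeps=200, seed=1):
    """Minimal parallel tempering: one Metropolis chain per inverse
    temperature, with replica swaps between neighboring temperatures.
    Returns the best spin vector seen."""
    rng = random.Random(seed)
    n = len(J)
    reps = [[rng.choice([-1, 1]) for _ in range(n)] for _ in betas]
    best = min((rep[:] for rep in reps), key=lambda s: energy(J, s))
    for _ in range(sweeps):
        # Metropolis sweep within each replica.
        for b, s in zip(betas, reps):
            for i in range(n):
                e0 = energy(J, s)
                s[i] = -s[i]
                dE = energy(J, s) - e0
                if dE > 0 and rng.random() >= math.exp(-b * dE):
                    s[i] = -s[i]  # reject the flip
            if energy(J, s) < energy(J, best):
                best = s[:]
        # Attempt swaps between adjacent temperatures.
        for k in range(len(betas) - 1):
            dB = betas[k + 1] - betas[k]
            dE = energy(J, reps[k + 1]) - energy(J, reps[k])
            if dB * dE >= 0 or rng.random() < math.exp(dB * dE):
                reps[k], reps[k + 1] = reps[k + 1], reps[k]
    return best
```

In a detector setting, the ML MIMO objective would first be rewritten as such an Ising/QUBO energy; the number of parallel chains plays the role of the "processing elements" the abstract counts.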