All Publications


In order to meet mobile cellular users' ever-increasing data demands, today's 4G and 5G networks are designed mainly with the goal of maximizing spectral efficiency. While they have made progress in this regard, controlling the carbon footprint and operational costs of such networks remains a long-standing problem among network designers. This paper takes a long view on this problem, envisioning a NextG scenario where the network leverages quantum annealing for cellular baseband processing. We gather and synthesize insights on power consumption, computational throughput and latency, spectral efficiency, operational cost, and feasibility timelines surrounding quantum annealing technology. Armed with these data, we analyze and project the quantitative performance targets future quantum annealing hardware must meet in order to provide a computational and power advantage over CMOS hardware, while matching its whole-network spectral efficiency. Our quantitative analysis predicts that with quantum annealing hardware operating at an 82.32 μs problem latency and 2.68M qubits, quantum annealing will achieve a spectral efficiency equal to that of CMOS computation while reducing power consumption by 41 kW (45% lower) in a 5G base station scenario with 400 MHz bandwidth and 64 antennas, and by 160 kW (55% lower) with 8.04M qubits in a C-RAN setting with three 5G base stations.
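As a rough plausibility check on the quoted figures, the implied CMOS baseline power can be back-computed from each absolute saving and its relative percentage. The short Python sketch below does only this arithmetic; it is not the paper's power model, and the helper name is ours.

```python
# Back-of-the-envelope check of the power figures quoted above
# (illustrative arithmetic only, not the paper's full power model).

def baseline_power_kw(reduction_kw: float, fraction_lower: float) -> float:
    """Infer the implied CMOS baseline power from an absolute
    reduction and the quoted relative saving."""
    return reduction_kw / fraction_lower

# Single 5G base station (400 MHz, 64 antennas): 41 kW saved, 45% lower
print(baseline_power_kw(41, 0.45))   # ~91 kW implied CMOS baseline

# C-RAN with three 5G base stations: 160 kW saved, 55% lower
print(baseline_power_kw(160, 0.55))  # ~291 kW implied CMOS baseline
```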

Tomorrow's massive-scale IoT sensor networks are poised to drive uplink traffic demand, especially in areas of dense deployment. To meet this demand, however, network designers leverage tools that often require accurate estimates of Channel State Information (CSI), which incur high overhead and thus reduce network throughput. Furthermore, this overhead generally scales with the number of clients, and so is of special concern in such massive IoT sensor networks. While prior work has used transmissions over one frequency band to predict the channel of another frequency band on the same link, this paper takes the next step in the effort to reduce CSI overhead: predicting the CSI of a nearby but distinct link. We propose Cross-Link Channel Prediction (CLCP), a technique that leverages multi-view representation learning to predict the channel response of a large number of users, thereby reducing channel estimation overhead further than previously possible. CLCP's design is highly practical, exploiting existing transmissions rather than dedicated channel sounding or extra pilot signals. We have implemented CLCP for two different Wi-Fi versions, namely 802.11n and 802.11ax, the latter being the leading candidate for future IoT networks. We evaluate CLCP in two large-scale indoor scenarios involving both line-of-sight and non-line-of-sight transmissions with up to 144 different 802.11ax users and four different channel bandwidths, from 20 MHz up to 160 MHz. Our results show that CLCP provides a 2× throughput gain over the baseline and a 30% throughput gain over existing prediction algorithms.
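To make the prediction task concrete, the following sketch frames cross-link CSI prediction as a toy regression problem on synthetic, correlated channels: observe CSI on link A and predict the CSI of nearby link B without ever sounding B. This is a hedged stand-in only; CLCP uses multi-view representation learning rather than the linear least-squares predictor below, and all data, shapes, and names here are assumptions.

```python
# Toy stand-in for cross-link channel prediction: given CSI observed on
# link A, predict CSI on a nearby link B. A linear least-squares map
# replaces CLCP's multi-view representation learning for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_sub = 1000, 64              # training frames, OFDM subcarriers

# Correlated synthetic channels for two nearby links (assumption)
shared = rng.normal(size=(n_train, n_sub))
csi_a = shared + 0.1 * rng.normal(size=(n_train, n_sub))        # observed
csi_b = 0.8 * shared + 0.1 * rng.normal(size=(n_train, n_sub))  # target

# Fit B ~= A @ W by least squares (the "training" step)
W, *_ = np.linalg.lstsq(csi_a, csi_b, rcond=None)

# Predict link B's CSI from a fresh observation on link A alone
shared_new = rng.normal(size=(1, n_sub))
csi_b_hat = (shared_new + 0.1 * rng.normal(size=(1, n_sub))) @ W
print(csi_b_hat.shape)                 # (1, 64): no pilots sent on link B
```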

This paper presents SoundSticker, a system for steganographic, in-band data communication over an acoustic channel. In contrast with recent works that hide bits in inaudible frequency bands, SoundSticker embeds hidden bits in audible sounds, making them survive audio codecs and bandpass filtering more reliably, while achieving a higher data rate and remaining imperceptible to a listener. The key observation behind SoundSticker is that the human ear is less sensitive to changes in an audio signal's phase than to changes in its frequency and amplitude, which leaves an opportunity to alter the phase of an audio clip to convey hidden information. We take advantage of this opportunity and build an OFDM-based physical layer. To make this PHY-layer design work for a variety of end devices with heterogeneous computational resources, SoundSticker addresses multiple technical challenges, including perceivable waveform artifacts caused by phase-based modulation, bit rate adaptation without channel sounding, and real-time preamble detection. Our prototype on both smartphones and ESP32 platforms demonstrates SoundSticker's superior performance over the state of the art, while preserving excellent sound quality and remaining unaffected by common audio codecs like MP3 and AAC. Audio clips produced by SoundSticker can be found at https://soundsticker.github.io/.
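The core idea, altering phase while preserving magnitude, can be illustrated in a few lines. The sketch below overwrites the phases of a handful of FFT bins in one audio frame with BPSK-mapped payload bits and recovers them on the receive side; the bin indices, phase mapping, and single-frame setup are illustrative assumptions, not SoundSticker's actual OFDM PHY parameters.

```python
# Toy phase-based embedding: keep each FFT bin's magnitude (which
# dominates perception) and overwrite the phases of a few bins with
# the payload. Bin choice and BPSK mapping are assumptions.
import numpy as np

fs, n = 44100, 1024
t = np.arange(n) / fs
frame = 0.5 * np.sin(2 * np.pi * 440 * t)        # host audio frame

bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])        # payload bits
data_bins = np.arange(40, 40 + bits.size)        # carrier bins (assumed)

spec = np.fft.rfft(frame)
mag, phase = np.abs(spec), np.angle(spec)
phase[data_bins] = np.where(bits == 1, np.pi / 2, -np.pi / 2)

stego = np.fft.irfft(mag * np.exp(1j * phase), n)  # stego audio frame

# Receiver: read the bits back from the sign of the phase in those bins
rx_phase = np.angle(np.fft.rfft(stego))[data_bins]
print((rx_phase > 0).astype(int))                # -> [1 0 1 1 0 0 1 0]
```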

Exploiting (near-)optimal MIMO signal processing algorithms in next generation (NextG) cellular systems holds great promise for achieving significant wireless performance gains in spectral efficiency and device connectivity, to name a few. However, it is extremely difficult to enable optimal processing methods in these systems, since the required computation grows exponentially with more users and higher data rates, while the available processing time is strictly limited. In this regard, quantum signal processing has recently been identified as a promising potential enabler of (near-)optimal algorithms, since quantum computing could dramatically speed up the computation via non-conventional effects based on quantum mechanics. Given existing quantum decoherence and noise on quantum hardware, parallel quantum optimization could accelerate the process even further at the expense of more qubit usage. In this paper, we discuss the parallelization of quantum MIMO processing and investigate a spin-level preprocessing method for finer-grained decomposition that can support more flexible parallel quantum signal processing, compared to the recently reported symbol-level decomposition method. We evaluate the method on the state-of-the-art analog D-Wave Advantage quantum processor.
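For readers unfamiliar with the underlying formulation, the sketch below shows the standard reduction of ML MIMO detection with BPSK symbols to an Ising energy minimization, the problem form an annealer consumes: argmin ||y - Hs||^2 over s in {-1, +1}^N expands into couplings J = H^T H (off-diagonal) and fields h = -2 H^T y. Exhaustive search stands in for the D-Wave hardware, and the paper's decomposition and parallelization steps are not shown.

```python
# ML MIMO detection as an Ising problem (BPSK case). Brute force
# replaces the quantum annealer purely for illustration.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_rx = 4, 4
H = rng.normal(size=(n_rx, n_tx))                # channel matrix
s_true = rng.choice([-1, 1], size=n_tx)          # transmitted spins
y = H @ s_true + 0.1 * rng.normal(size=n_rx)     # received vector

G = H.T @ H
J = np.triu(G, 1)            # pairwise spin couplings
h = -2 * H.T @ y             # per-spin linear fields

def ising_energy(s):
    # equals ||y - H s||^2 up to an s-independent constant
    return 2 * s @ J @ s + h @ s

best = min(itertools.product([-1, 1], repeat=n_tx),
           key=lambda s: ising_energy(np.array(s)))
print(np.array(best), s_true)    # detected vs. transmitted (match at this SNR)
```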

This paper presents Monolith, a high-bitrate, low-power, metamaterials surface-based Orbital Angular Momentum (OAM) MIMO multiplexing design for rank-deficient, free-space wireless environments. Leveraging ambient signals as the source of power, Monolith backscatters these ambient signals by modulating them into several orthogonal beams, where each beam carries a unique OAM. We provide insights into the design of a low-power, programmable metamaterials-based surface. Our results show that Monolith achieves an order of magnitude higher channel capacity than traditional spatial MIMO backscattering networks.
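As a minimal illustration of where the extra channel capacity comes from, the sketch below generates the azimuthal phase profiles exp(j*l*phi) that a programmable surface would impose for topological charge l, and verifies that beams with distinct integer charges are orthogonal over azimuth. This is a simplified abstraction of the physics, not Monolith's surface design.

```python
# Orthogonality of OAM modes over azimuth: exp(j*l*phi) profiles with
# distinct integer charges l have (near-)zero inner product, so each
# charge can carry an independent data stream.
import numpy as np

phi = np.linspace(0, 2 * np.pi, 360, endpoint=False)  # azimuth samples

def oam_mode(l):
    return np.exp(1j * l * phi)

for l1 in (1, 2, 3):
    for l2 in (1, 2, 3):
        ip = np.vdot(oam_mode(l1), oam_mode(l2)) / phi.size
        print(f"l = {l1},{l2}: |inner product| = {abs(ip):.2f}")
# prints ~1.00 when l1 == l2 and ~0.00 otherwise
```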


Mobile video applications have gained increasing popularity and become part of everyone's daily experience. Video quality has a significant impact on both users' quality of experience in video streaming and the accuracy of video analytics systems, which in turn affects application revenue. The challenge in building a consistently high-quality video delivery system lies in two aspects. On the application side, emerging video applications are becoming more user-interactive, so existing prefetch and buffering algorithms no longer work properly. On the network side, the wireless network itself is fundamentally dynamic and unreliable due to multipath effects and interference on the wireless channel. In this thesis, we present cross-layer optimizations spanning the application, network, and physical layers to improve the quality of video streaming over wireless networks, through the design and implementation of the following systems. Dashlet is a short video streaming system tailored for a high quality of experience under dynamic user actions; it proposes a novel out-of-order video chunk pre-buffering mechanism that leverages a simple, non-machine-learning model of users' swipe statistics to determine the pre-buffering order and bitrate. Spider is a multi-hop, millimeter-wave (mmWave) wireless relay network design that maximizes the video analytics accuracy of the delivered video; it integrates a low-latency Wi-Fi control plane with a mmWave relay data plane, allowing agile re-routing around blockages, and couples a novel video bit-rate allocation algorithm with a scalable routing algorithm that maximizes application-layer video analytics accuracy. LAIA is a system that programmatically controls the wireless channel so that the wireless network can deliver consistently high throughput for robust video delivery; with its programmable interface to the wireless channel, LAIA can improve wireless channels on the fly for single- and multi-antenna links, as well as for nearby networks operating on adjacent frequency bands. Put together, this thesis demonstrates a set of optimizations at different layers of the network stack for building a high-quality, robust wireless video delivery system. Extensive evaluation demonstrates significant improvements in both quality of experience for video streaming and accuracy for video analytics.
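A hypothetical sketch of Dashlet-style out-of-order pre-buffering follows: rank (video, chunk) pairs by the probability that the user actually reaches them under a simple swipe model, and fetch in that order. The geometric swipe model and every parameter below are assumptions for illustration, not Dashlet's actual statistics.

```python
# Out-of-order pre-buffering driven by swipe statistics (toy model).

def prebuffer_order(n_videos, chunks_per_video, p_swipe=0.4):
    """Return (video, chunk) fetch order by expected watch probability.

    Assumed model: a user reaches video v after swiping past v earlier
    videos (prob p_swipe**v) and reaches chunk c of the current video
    only by not swiping away first (prob (1 - p_swipe)**c).
    """
    candidates = []
    for v in range(n_videos):
        for c in range(chunks_per_video):
            p_watch = (p_swipe ** v) * ((1 - p_swipe) ** c)
            candidates.append((p_watch, v, c))
    candidates.sort(reverse=True)
    return [(v, c) for _, v, c in candidates]

# Note the out-of-order result: the first chunk of the *next* video
# outranks deeper chunks of the current one.
print(prebuffer_order(3, 4)[:6])
# -> [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (0, 3)]
```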

A central design challenge for future generations of wireless networks is to meet users' ever-increasing demand for capacity, throughput, and connectivity. Recent advances in the design of wireless networks to this end, including the NextG efforts underway, call in particular for the use of Large and Massive multiple-input multiple-output (MIMO) antenna arrays to support many users near a base station. These techniques are coming to fruition, yielding significant performance gains by spatially multiplexing information streams concurrently. To fully realize MIMO's gains, however, the system requires sophisticated signal processing to disentangle the mutually-interfering streams from each other. Currently deployed linear filters have the advantage of low computational complexity, but suffer rapid throughput degradation as the number of parallel streams grows. Theoretically optimal Maximum Likelihood (ML) processing can significantly improve throughput over such linear filters, but quickly becomes infeasible due to its computational complexity and the strict limits on processing time. The base station's computational capacity is thus becoming one of the key limiting factors on performance gains in wireless networks. Quantum computing is a potential tool to address this computational challenge. It exploits unique information processing capabilities based on quantum mechanics to perform fast calculations that are intractable by traditional digital methods. This dissertation presents four design directions for quantum compute-enabled wireless systems to expedite ML processing in MIMO systems, which would unlock unprecedented levels of wireless performance: (1) quantum optimization on specialized hardware, (2) quantum-inspired computing on classical computing platforms, (3) hybrid classical-quantum computational structures, and (4) scalable and elastic parallel quantum optimization. We introduce our prototype systems (QuAMax, ParaMax, IoT-ResQ, X-ResQ), implemented on real-world analog quantum processors, experimentally demonstrating their substantial achievable performance gains in many aspects of wireless networks. As an initial guiding framework, this dissertation provides system design guidance with underlying principles and technical details, and discusses future research directions based on the current challenges and opportunities observed.
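The bottleneck the dissertation targets can be stated compactly: the ML detector searches over every candidate transmitted symbol vector, so its search space grows exponentially with the number of spatial streams. In standard notation (our rendering of the textbook formulation, consistent with the abstract):

```latex
% ML MIMO detection: y is the received vector, H the channel matrix,
% \mathcal{O} the per-stream constellation, N_t the number of streams.
\hat{\mathbf{x}}_{\mathrm{ML}}
  = \operatorname*{arg\,min}_{\mathbf{x} \in \mathcal{O}^{N_t}}
    \bigl\lVert \mathbf{y} - \mathbf{H}\mathbf{x} \bigr\rVert^2 ,
\qquad
\left| \mathcal{O}^{N_t} \right| = |\mathcal{O}|^{N_t}
% e.g., 16-QAM with N_t = 12 streams already gives 16^{12} (about 2.8e14)
% candidates per detection instance.
```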

We present the Hybrid Polar Decoder (HyPD), a hybrid classical-quantum decoder design for Polar error correction codes, which are becoming widespread in today's 5G and tomorrow's 6G networks. HyPD employs CMOS processing for the Polar decoder's binary tree traversal, and Quantum Annealing (QA) processing for the Quantum Polar Decoder (QPD), a Maximum-Likelihood QA-based Polar decoder submodule. QPD's design efficiently transforms a Polar decoder into a quadratic polynomial optimization form, then maps this polynomial onto the physical QA hardware via QPD-MAP, a customized problem mapping scheme tailored to QPD. We have experimentally evaluated HyPD on a state-of-the-art QA device with 5,627 qubits, for 5G-NR Polar codes with a block length of 1,024 bits, in Rayleigh fading channels. Our results show that HyPD outperforms Successive Cancellation List decoders of list size eight by half an order of magnitude in bit error rate, and achieves a 1,500-byte frame delivery rate of 99.1% at 1 dB signal-to-noise ratio. Further studies present QA compute time considerations. We also propose QPD-HW, a novel QA hardware topology tailored to the task of decoding Polar codes. QPD-HW is sparse, flexible to code rate and block length, and may be of potential interest to the designers of tomorrow's 6G wireless networks.
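To make the QPD submodule's objective concrete, the toy sketch below performs ML decoding of a tiny N = 4 Polar code by exhaustive search over the information bits, i.e., the same Euclidean-distance minimization that HyPD reformulates as a quadratic polynomial for the annealer. The code size, frozen-bit pattern, and noise level are illustrative assumptions.

```python
# Toy ML Polar decoding by brute force; HyPD's QPD casts this same
# objective as a quadratic optimization for quantum annealing.
import itertools
import numpy as np

F = np.array([[1, 0], [1, 1]])
G4 = np.kron(F, F) % 2                 # Polar generator matrix for N = 4
info_pos = [1, 2, 3]                   # u[0] frozen to 0 (assumed pattern)

def encode(info_bits):
    u = np.zeros(4, dtype=int)
    u[info_pos] = info_bits
    return (u @ G4) % 2

tx = encode([1, 0, 1])                 # transmitted codeword
rng = np.random.default_rng(2)
y = (1 - 2 * tx) + 0.5 * rng.normal(size=4)   # BPSK over a noisy channel

# ML decoding: minimize ||y - BPSK(encode(b))||^2 over information bits
best = min(itertools.product([0, 1], repeat=len(info_pos)),
           key=lambda b: np.sum((y - (1 - 2 * encode(b))) ** 2))
print(best)                            # -> (1, 0, 1) at this noise level
```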