Articles

68 Publications

C

The Coronavirus disease (COVID-19) pandemic has caused a social and economic crisis across the globe. Contact tracing is a proven, effective way of containing the spread of COVID-19. In this paper, we propose CAPER, a Cellular-Assisted deeP lEaRning based COVID-19 contact tracing system built on cellular network channel state information (CSI) measurements. CAPER leverages a deep neural network based feature extractor to map cellular CSI to a neural network feature space, within which the Euclidean distance between points strongly correlates with the proximity of devices. By doing so, we maintain user privacy, ensuring that CAPER never propagates one client's CSI data to its server or to other clients. We implement a CAPER prototype using a software defined radio platform, and evaluate its performance in a variety of real-world situations, including indoor and outdoor scenarios, crowded and sparse environments, and differing data traffic patterns and cellular configurations in common use. Microbenchmarks show that our neural network model runs in 12.1 microseconds on the OnePlus 8 smartphone. End-to-end results demonstrate that CAPER achieves an overall accuracy of 93.39% in determining whether two devices are within six feet of each other, outperforming a BLE-based approach by 14.96%, and misses only 1.21% of close contacts. CAPER is also robust to environment dynamics, maintaining an accuracy of 92.35% after running for ten days.
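
The abstract describes the mechanism only at a high level, so the following is a minimal sketch, assuming a simple MLP encoder; the CSI dimension, embedding size, and distance threshold are illustrative placeholders, not the paper's values.

```python
# Sketch of CAPER's core idea: a neural feature extractor maps cellular CSI
# to an embedding space where Euclidean distance tracks physical proximity.
# Architecture, dimensions, and threshold below are assumptions.
import torch
import torch.nn as nn

CSI_DIM = 128            # assumed size of a flattened CSI measurement
EMBED_DIM = 32           # assumed embedding dimensionality
PROXIMITY_THRESH = 1.0   # hypothetical distance threshold for "within six feet"

class CSIEncoder(nn.Module):
    """Maps a CSI vector to a point in the learned feature space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CSI_DIM, 64), nn.ReLU(),
            nn.Linear(64, EMBED_DIM),
        )

    def forward(self, csi):
        return self.net(csi)

def close_contact(encoder, csi_a, csi_b):
    # Each client embeds its own CSI locally; only embeddings (never raw
    # CSI) are compared, matching the privacy property the abstract claims.
    with torch.no_grad():
        distance = torch.dist(encoder(csi_a), encoder(csi_b))
    return distance.item() < PROXIMITY_THRESH

encoder = CSIEncoder()
a, b = torch.randn(CSI_DIM), torch.randn(CSI_DIM)
print(close_contact(encoder, a, b))
```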

In order to meet mobile cellular users’ ever-increasing data demands, today’s 4G and 5G networks are designed mainly with the goal of maximizing spectral efficiency. While they have made progress in this regard, controlling the carbon footprint and operational costs of such networks remains a long-standing problem among network designers. This paper takes a long view of this problem, envisioning a NextG scenario where the network leverages quantum annealing for cellular baseband processing. We gather and synthesize insights on power consumption, computational throughput and latency, spectral efficiency, operational cost, and feasibility timelines surrounding quantum annealing technology. Armed with these data, we analyze and project the quantitative performance targets future quantum annealing hardware must meet in order to provide a computational and power advantage over CMOS hardware, while matching its whole-network spectral efficiency. Our quantitative analysis predicts that with quantum annealing hardware operating at an 82.32 μs problem latency and 2.68M qubits, quantum annealing will achieve a spectral efficiency equal to CMOS computation while reducing power consumption by 41 kW (45% lower) in a 5G base station scenario with 400 MHz bandwidth and 64 antennas, and a 160 kW power reduction (55% lower) using 8.04M qubits in a C-RAN setting with three 5G base stations.
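
As a quick sanity check on these figures, the CMOS baselines implied by the quoted reductions can be back-derived; only the reductions and percentages below come from the abstract, while the baseline values are derived here and are not stated in the paper.

```python
# Back-of-the-envelope check of the quoted power figures: if a 41 kW
# reduction is "45% lower", the implied CMOS baseline is 41 / 0.45 kW.
def implied_baseline(reduction_kw, fraction_lower):
    return reduction_kw / fraction_lower

for name, reduction, fraction in [("5G base station", 41.0, 0.45),
                                  ("C-RAN, 3 base stations", 160.0, 0.55)]:
    base = implied_baseline(reduction, fraction)
    print(f"{name}: CMOS ~{base:.0f} kW -> QA ~{base - reduction:.0f} kW")
```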

Tomorrow's massive-scale IoT sensor networks are poised to drive uplink traffic demand, especially in areas of dense deployment. To meet this demand, however, network designers leverage tools that often require accurate estimates of Channel State Information (CSI); acquiring these estimates incurs high overhead and thus reduces network throughput. Furthermore, the overhead generally scales with the number of clients, and so is of special concern in such massive IoT sensor networks. While prior work has used transmissions over one frequency band to predict the channel of another frequency band on the same link, this paper takes the next step in the effort to reduce CSI overhead: predicting the CSI of a nearby but distinct link. We propose Cross-Link Channel Prediction (CLCP), a technique that leverages multi-view representation learning to predict the channel response of a large number of users, thereby reducing channel estimation overhead further than previously possible. CLCP's design is highly practical, exploiting existing transmissions rather than dedicated channel sounding or extra pilot signals. We have implemented CLCP for two different Wi-Fi versions, namely 802.11n and 802.11ax, the latter being the leading candidate for future IoT networks. We evaluate CLCP in two large-scale indoor scenarios involving both line-of-sight and non-line-of-sight transmissions with up to 144 different 802.11ax users and four different channel bandwidths, from 20 MHz up to 160 MHz. Our results show that CLCP provides a 2× throughput gain over the baseline and a 30% throughput gain over existing prediction algorithms.
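
The sketch below illustrates the cross-link prediction idea in the spirit of CLCP: learn a representation of one link's observed CSI and decode it into a prediction for a nearby link. The use of a plain MLP encoder-decoder (rather than the paper's multi-view model), the layer sizes, and the subcarrier count are all assumptions for brevity.

```python
# Illustrative cross-link channel predictor: observed CSI from link A is
# encoded into a shared latent and decoded into a prediction for link B.
import torch
import torch.nn as nn

N_SUBCARRIERS = 256  # assumed; real values depend on bandwidth and standard

class CrossLinkPredictor(nn.Module):
    def __init__(self, dim=N_SUBCARRIERS, latent=64):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())
        self.decode = nn.Linear(latent, dim)

    def forward(self, observed_csi):
        # observed_csi: CSI magnitudes of link A; output: prediction for link B
        return self.decode(self.encode(observed_csi))

model = CrossLinkPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training pairs would come from opportunistically overheard transmissions,
# not dedicated sounding; random tensors stand in for real measurements here.
link_a = torch.randn(32, N_SUBCARRIERS)
link_b = torch.randn(32, N_SUBCARRIERS)

opt.zero_grad()
loss = loss_fn(model(link_a), link_b)
loss.backward()
opt.step()
print(f"training loss: {loss.item():.3f}")
```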

LoRaWAN has emerged as an appealing technology to connect IoT devices, but it functions without explicit coordination among transmitters, which can lead to many packet collisions as the network scales. State-of-the-art work proposes various approaches to deal with these collisions, but most function only in high signal-to-interference ratio (SIR) conditions and thus do not scale to real scenarios where weak receptions are easily buried by stronger receptions from nearby transmitters. In this paper, we take a fresh look at LoRa’s physical layer, revealing that its underlying linear chirp modulation fundamentally limits the capacity and scalability of concurrent LoRa transmissions. We show that by replacing linear chirps with their non-linear counterparts, we can boost the throughput of concurrent LoRa transmissions and empower the LoRa receiver to successfully receive weak transmissions in the presence of strong colliding signals. Such a non-linear chirp design further enables the receiver to demodulate fully aligned collision symbols, a case that none of the existing approaches can handle. We implement these ideas in a holistic LoRaWAN stack based on the USRP N210 software-defined radio platform. Our head-to-head comparison with two state-of-the-art research systems and a standard LoRaWAN baseline demonstrates that CurvingLoRa improves the network throughput by 1.6–7.6× while sacrificing neither power efficiency nor noise resilience.
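
The following sketch contrasts a standard linear LoRa chirp with one non-linear alternative. The quadratic frequency trajectory used here is an illustrative choice to show the principle, not necessarily the paper's exact chirp design, and the sample rate is assumed.

```python
# Linear vs. non-linear ("curved") chirp symbols. Lower cross-correlation
# between the two shapes is what makes colliding transmissions that use
# different trajectories easier to separate at the receiver.
import numpy as np

BW = 125e3           # 125 kHz LoRa bandwidth
SF = 7               # spreading factor
T = (2 ** SF) / BW   # symbol duration
fs = 1e6             # sample rate (assumed)
t = np.arange(0, T, 1 / fs)

# Linear chirp: instantaneous frequency f(t) = -BW/2 + (BW/T) * t,
# integrated to phase 2*pi * (-BW/2 * t + BW * t**2 / (2*T))
lin_phase = 2 * np.pi * (-BW / 2 * t + BW * t ** 2 / (2 * T))
linear_chirp = np.exp(1j * lin_phase)

# Quadratic chirp: f(t) = -BW/2 + BW * (t/T)**2, same start/end frequencies,
# integrated to phase 2*pi * (-BW/2 * t + BW * t**3 / (3*T**2))
quad_phase = 2 * np.pi * (-BW / 2 * t + BW * t ** 3 / (3 * T ** 2))
curved_chirp = np.exp(1j * quad_phase)

# Normalized cross-correlation between the two symbol shapes
xcorr = np.abs(np.vdot(linear_chirp, curved_chirp)) / len(t)
print(f"normalized cross-correlation: {xcorr:.3f}")
```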

D

We present the Hybrid Polar Decoder (HyPD), a hybrid of classical CMOS and quantum annealing (QA) computational structures for decoding Polar error correction codes, which are becoming widespread in today's 5G and tomorrow's 6G networks. Our results show that HyPD outperforms successive cancellation list decoders of list size eight by half an order of magnitude in bit error rate at 1 dB SNR. Further studies address QA compute time at various coding rates and with increased qubit counts.
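
Quantum annealing searches for the minimum of an Ising/QUBO energy landscape, so decoding must be cast as energy minimization. The toy below brute-forces the minimum-energy message for a tiny N=4 polar code as a stand-in for the annealer; the code size, frozen-bit positions, and channel model are assumptions, and the paper's actual QUBO formulation is not reproduced here.

```python
# Toy decoding-as-energy-minimization for an N=4 polar code: enumerate all
# messages, encode, and pick the codeword with the lowest "energy" (squared
# distance to the received BPSK signal). An annealer would search the same
# landscape expressed in Ising/QUBO form.
import itertools
import numpy as np

F = np.array([[1, 0], [1, 1]])
G = np.kron(F, F) % 2    # N=4 polar transform (bit-reversal omitted for brevity)
frozen = [0, 1]          # assumed frozen-bit positions (fixed to 0)
info = [2, 3]            # information-bit positions

def encode(msg_bits):
    u = np.zeros(4, dtype=int)
    u[info] = msg_bits
    return (u @ G) % 2

def energy(codeword, received):
    # Squared Euclidean distance to the received signal plays the role of
    # the Ising energy that the annealer minimizes.
    bpsk = 1 - 2 * codeword.astype(float)
    return float(np.sum((received - bpsk) ** 2))

rng = np.random.default_rng(0)
true_msg = np.array([1, 0])
received = (1 - 2 * encode(true_msg)) + 0.5 * rng.standard_normal(4)

best = min(itertools.product([0, 1], repeat=len(info)),
           key=lambda m: energy(encode(np.array(m)), received))
print("decoded message:", best)
```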