63 Publications



The Coronavirus disease (COVID-19) pandemic has caused a global social and economic crisis. Contact tracing is a proven, effective way of containing the spread of COVID-19. In this paper, we propose CAPER, a Cellular-Assisted deeP lEaRning based COVID-19 contact tracing system built on cellular network channel state information (CSI) measurements. CAPER leverages a deep neural network based feature extractor to map cellular CSI into a neural network feature space, within which the Euclidean distance between points strongly correlates with the proximity of devices. By doing so, we preserve user privacy by ensuring that CAPER never propagates one client's CSI data to its server or to other clients. We implement a CAPER prototype using a software-defined radio platform, and evaluate its performance in a variety of real-world situations, including indoor and outdoor scenarios, crowded and sparse environments, and differing data traffic patterns and cellular configurations in common use. Microbenchmarks show that our neural network model runs in 12.1 microseconds on the OnePlus 8 smartphone. End-to-end results demonstrate that CAPER achieves an overall accuracy of 93.39% in determining whether two devices are within six feet of each other, outperforming a BLE-based approach by 14.96%, and misses only 1.21% of close contacts. CAPER is also robust to environment dynamics, maintaining an accuracy of 92.35% after running for ten days.
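The core idea above, comparing devices in an embedding space rather than exchanging raw CSI, can be sketched in a few lines. The feature extractor, function names, and distance threshold below are illustrative assumptions, not CAPER's actual model:

```python
import numpy as np

# Hypothetical sketch of CAPER-style proximity detection: each client shares
# only a neural-network embedding of its local CSI (never the raw CSI), and
# closeness is judged by Euclidean distance in the embedding space.
# embed_csi is a toy stand-in (one linear layer + ReLU); the real system
# uses a trained deep feature extractor, and the threshold is illustrative.

def embed_csi(csi, weight, bias):
    """Map a CSI vector into feature space (toy one-layer extractor)."""
    return np.maximum(weight @ csi + bias, 0.0)

def within_six_feet(emb_a, emb_b, threshold=1.0):
    """Declare close contact when embeddings are near in feature space."""
    return np.linalg.norm(emb_a - emb_b) < threshold
```

The privacy property follows from the design: only embeddings leave the device, so neither the server nor other clients ever observe a client's CSI measurements.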

In order to meet mobile cellular users' ever-increasing network usage, today's 4G and 5G networks are designed mainly with the goal of maximizing spectral efficiency. While they have made progress in this regard, controlling the carbon footprint and operational costs of such networks remains a long-standing problem among network designers. This Challenge paper takes a long view of this problem, envisioning a NextG scenario where the network leverages quantum annealing computation for cellular baseband processing. We gather and synthesize insights on power consumption, computational throughput and latency, spectral efficiency, operational cost, and deployment timelines surrounding quantum technology. Armed with these data, we analyze and project the quantitative performance targets future quantum hardware must meet in order to provide a computational and power advantage over silicon hardware while matching its whole-network spectral efficiency. Our quantitative analysis predicts that with quantum hardware operating at a 140 μs problem latency and 4.3M qubits, quantum computation will achieve a spectral efficiency equal to silicon's while reducing power consumption by 40.8 kW (45% lower) in a representative 5G base station scenario with 400 MHz bandwidth and 64 antennas, and an 8 kW power reduction (16% lower) using 2.2M qubits in a 200 MHz-bandwidth 5G scenario.
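The stated power figures can be sanity-checked with simple arithmetic: a saving that is a given fraction of the baseline implies that baseline directly. This is an illustrative check only; the baselines are inferred from the abstract's numbers, not quoted from the paper:

```python
# Back out the implied silicon baseline power from each stated reduction.
# A 40.8 kW saving at "45% lower" implies a baseline of 40.8 / 0.45 ≈ 90.7 kW;
# an 8 kW saving at "16% lower" implies a baseline of 8 / 0.16 ≈ 50 kW.

def implied_baseline(saving_kw, fraction_lower):
    """Baseline power implied by an absolute saving and its fraction."""
    return saving_kw / fraction_lower

base_400mhz = implied_baseline(40.8, 0.45)   # ≈ 90.7 kW (400 MHz scenario)
base_200mhz = implied_baseline(8.0, 0.16)    # ≈ 50 kW (200 MHz scenario)
```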

LoRaWAN has emerged as an appealing technology for connecting IoT devices, but it operates without explicit coordination among transmitters, which can lead to many packet collisions as the network scales. State-of-the-art work proposes various approaches to deal with these collisions, but most function only in high signal-to-interference ratio (SIR) conditions and thus do not scale to real scenarios where weak receptions are easily buried by stronger receptions from nearby transmitters. In this paper, we take a fresh look at LoRa's physical layer, revealing that its underlying linear chirp modulation fundamentally limits the capacity and scalability of concurrent LoRa transmissions. We show that by replacing linear chirps with their non-linear counterparts, we can boost the throughput of concurrent LoRa transmissions and empower the LoRa receiver to successfully receive weak transmissions in the presence of strong colliding signals. Such a non-linear chirp design further enables the receiver to demodulate fully aligned collision symbols, a case that none of the existing approaches can handle. We implement these ideas in a holistic LoRaWAN stack based on the USRP N210 software-defined radio platform. Our head-to-head comparison with two state-of-the-art research systems and a standard LoRaWAN baseline demonstrates that CurvingLoRa improves network throughput by 1.6–7.6× while sacrificing neither power efficiency nor noise resilience.
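The distinction between linear and non-linear chirps can be made concrete in baseband. The snippet below generates a standard linear-sweep chirp and one quadratic-sweep alternative; it is a minimal sketch of the modulation difference, not CurvingLoRa's actual waveform family or parameters:

```python
import numpy as np

# Illustrative baseband chirps. A conventional LoRa symbol sweeps its
# instantaneous frequency linearly across the bandwidth; a non-linear chirp
# sweeps along a curve, which changes how concurrent symbols interfere
# after dechirping. Parameters here are arbitrary examples.

def linear_chirp(n, bw, fs):
    """Unit-amplitude chirp with instantaneous frequency f(t) = k*t."""
    t = np.arange(n) / fs
    k = bw / (n / fs)                       # linear sweep rate (Hz/s)
    return np.exp(2j * np.pi * (0.5 * k * t**2))

def quadratic_chirp(n, bw, fs):
    """Unit-amplitude chirp with instantaneous frequency f(t) = bw*(t/T)^2."""
    t = np.arange(n) / fs
    T = n / fs
    phase = bw * t**3 / (3 * T**2)          # integral of f(t)
    return np.exp(2j * np.pi * phase)
```

Because both waveforms are pure phase signals, they keep LoRa's constant-envelope property, consistent with the paper's claim that the redesign does not sacrifice power efficiency.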


We present the Hybrid Polar Decoder (HyPD), a hybrid of classical CMOS and quantum annealing (QA) computational structures for decoding Polar error correction codes, which are becoming widespread in today's 5G and tomorrow's 6G networks. Our results show that HyPD outperforms successive cancellation list decoders of list size eight by half an order of magnitude in bit error rate at 1 dB SNR. Further studies address QA compute time at various coding rates and with increased qubit counts.

Princeton Advanced Wireless Systems Lab
35 Olden Street
Princeton, NJ 08540 USA

Department of Computer Science
School of Engineering and Applied Sciences