Abstract
Rapid delay variations in today's access networks impair the quality of experience (QoE) of low-latency, interactive applications such as video conferencing. To tackle this problem, we propose Athena, a framework that correlates high-resolution measurements from Layer 1 to Layer 7 to remove the fog from the window through which today's video-conferencing congestion-control algorithms see the network. This cross-layer view of the network empowers the networking community to revisit and re-evaluate network designs, application scheduling, and rate-adaptation algorithms in light of the complex, heterogeneous networks in use today, paving the way for network-aware applications and application-aware networks.
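To make the cross-layer idea concrete, the following is a minimal sketch of correlating timestamped measurements from two layers. The layer names, fields, and merge tolerance are assumptions for illustration, not Athena's actual measurement schema or pipeline.

```python
"""Illustrative sketch: align application-layer events with recent
physical-layer samples so delay spikes can be attributed to lower-layer
conditions. All column names and values below are hypothetical."""
import pandas as pd

# Hypothetical high-resolution logs from two layers (PHY and application).
phy = pd.DataFrame({
    "t": pd.to_datetime(["2024-01-01 00:00:00.000",
                         "2024-01-01 00:00:00.010",
                         "2024-01-01 00:00:00.020"]),
    "snr_db": [28.1, 14.5, 27.9],          # brief PHY-layer dip
})
app = pd.DataFrame({
    "t": pd.to_datetime(["2024-01-01 00:00:00.012",
                         "2024-01-01 00:00:00.022"]),
    "frame_delay_ms": [95.0, 18.0],        # application-layer stall
})

# Pair each application sample with the most recent PHY sample (<= 15 ms old).
joined = pd.merge_asof(app.sort_values("t"), phy.sort_values("t"),
                       on="t", tolerance=pd.Timedelta("15ms"))
print(joined)
```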
Abstract
The Coronavirus disease (COVID-19) pandemic has caused a global social and economic crisis. Contact tracing is a proven, effective way of containing the spread of COVID-19. In this paper, we propose CAPER, a Cellular-Assisted deeP lEaRning based COVID-19 contact tracing system built on cellular network channel state information (CSI) measurements. CAPER leverages a deep-neural-network feature extractor to map cellular CSI to a neural network feature space, within which the Euclidean distance between points strongly correlates with the proximity of devices. By doing so, we preserve user privacy by ensuring that CAPER never propagates one client's CSI data to its server or to other clients. We implement a CAPER prototype using a software-defined radio platform and evaluate its performance in a variety of real-world situations, including indoor and outdoor scenarios, crowded and sparse environments, and differing data traffic patterns and cellular configurations in common use. Microbenchmarks show that our neural network model runs in 12.1 microseconds on the OnePlus 8 smartphone. End-to-end results demonstrate that CAPER achieves an overall accuracy of 93.39% in determining whether two devices are within six feet of each other, outperforming a BLE-based approach by 14.96%, and misses only 1.21% of close contacts. CAPER is also robust to environment dynamics, maintaining an accuracy of 92.35% after running for ten days.
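The core idea, an embedding space where Euclidean distance tracks physical proximity, can be sketched as follows. The network shape, input dimensions, and decision threshold are assumptions for illustration and are not the paper's actual model or parameters.

```python
"""Illustrative sketch of a CSI feature extractor and proximity test.
Architecture, sizes, and threshold are hypothetical, not CAPER's."""
import torch
import torch.nn as nn

class CSIEncoder(nn.Module):
    """Maps a flattened CSI vector to an embedding in which Euclidean
    distance is meant to track physical proximity between devices."""
    def __init__(self, csi_dim: int = 128, embed_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(csi_dim, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, csi: torch.Tensor) -> torch.Tensor:
        return self.net(csi)

encoder = CSIEncoder()

# Each device embeds its own CSI locally and shares only the embedding,
# so raw CSI never leaves the client (the privacy property described above).
csi_a = torch.randn(1, 128)   # device A's locally measured CSI (synthetic here)
csi_b = torch.randn(1, 128)   # device B's locally measured CSI (synthetic here)
emb_a, emb_b = encoder(csi_a), encoder(csi_b)

# Proximity decision: small embedding distance => likely within six feet.
# The threshold is a placeholder; in practice it would be calibrated.
THRESHOLD = 1.0
distance = torch.dist(emb_a, emb_b).item()
print("close contact" if distance < THRESHOLD else "not a close contact")
```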
Abstract
In order to meet mobile cellular users’ ever-increasing data demands, today’s 4G and 5G networks are designed mainly with the goal of maximizing spectral efficiency. While they have made progress in this regard, controlling the carbon footprint and operational costs of such networks remains a long-standing problem among network designers. This paper takes a long view on this problem, envisioning a NextG scenario where the network leverages quantum annealing for cellular baseband processing. We gather and synthesize insights on power consumption, computational throughput and latency, spectral efficiency, operational cost, and feasibility timelines surrounding quantum annealing technology. Armed with these data, we analyze and project the quantitative performance targets future quantum annealing hardware must meet in order to provide a computational and power advantage over CMOS hardware while matching its whole-network spectral efficiency. Our quantitative analysis predicts that quantum annealing hardware operating at an 82.32 μs problem latency with 2.68M qubits will match the spectral efficiency of CMOS computation while reducing power consumption by 41 kW (45% lower) in a 5G base station scenario with 400 MHz bandwidth and 64 antennas, and by 160 kW (55% lower) using 8.04M qubits in a C-RAN setting with three 5G base stations.
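As a back-of-the-envelope check of the quoted figures, the snippet below derives the baseline power each reduction implies. Only the absolute reduction (kW) and the percentage are taken from the abstract; the implied baseline and post-reduction power are computed here and are not values stated in the paper.

```python
"""Derive the CMOS baseline power implied by 'X kW reduction (Y% lower)'.
The scenario labels come from the abstract; the derived numbers are not."""
scenarios = {
    "5G base station (400 MHz, 64 antennas)": (41.0, 0.45),
    "C-RAN with three 5G base stations":      (160.0, 0.55),
}

for name, (reduction_kw, fraction) in scenarios.items():
    baseline_kw = reduction_kw / fraction      # reduction = fraction * baseline
    remaining_kw = baseline_kw - reduction_kw
    print(f"{name}: implied CMOS baseline ~{baseline_kw:.0f} kW, "
          f"with quantum annealing ~{remaining_kw:.0f} kW")
```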
Abstract
Tomorrow's massive-scale IoT sensor networks are poised to drive uplink traffic demand, especially in areas of dense deployment. To meet this demand, however, network designers leverage tools that often require accurate estimates of Channel State Information (CSI), which incurs high overhead and thus reduces network throughput. Furthermore, this overhead generally scales with the number of clients, and so is of special concern in such massive IoT sensor networks. While prior work has used transmissions over one frequency band to predict the channel of another frequency band on the same link, this paper takes the next step in the effort to reduce CSI overhead: predicting the CSI of a nearby but distinct link. We propose Cross-Link Channel Prediction (CLCP), a technique that leverages multi-view representation learning to predict the channel response of a large number of users, thereby reducing channel estimation overhead further than previously possible. CLCP's design is highly practical, exploiting existing transmissions rather than dedicated channel sounding or extra pilot signals. We have implemented CLCP for two different Wi-Fi versions, namely 802.11n and 802.11ax, the latter being the leading candidate for future IoT networks. We evaluate CLCP in two large-scale indoor scenarios involving both line-of-sight and non-line-of-sight transmissions with up to 144 different 802.11ax users and four different channel bandwidths, from 20 MHz up to 160 MHz. Our results show that CLCP provides a 2× throughput gain over the baseline and a 30% throughput gain over existing prediction algorithms.
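The sketch below illustrates the general shape of cross-link prediction through a shared latent representation: encode CSI observed on one link, decode the CSI of a nearby but distinct link. The dimensions, architecture, and training loop are assumptions for illustration, not CLCP's actual multi-view model.

```python
"""Illustrative sketch of cross-link channel prediction via a shared latent
representation. All sizes and synthetic data below are hypothetical."""
import torch
import torch.nn as nn

N_SUBCARRIERS = 64  # hypothetical per-link CSI length for one bandwidth setting

# Encode the observed link's CSI into a latent view, then decode the CSI of a
# nearby but distinct link from that view.
encoder = nn.Sequential(nn.Linear(N_SUBCARRIERS, 32), nn.ReLU(), nn.Linear(32, 16))
decoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, N_SUBCARRIERS))

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-ins for CSI magnitudes observed on link A and measured on
# link B during training; real inputs would come from existing transmissions
# rather than dedicated channel sounding.
csi_link_a = torch.randn(256, N_SUBCARRIERS)
csi_link_b = torch.randn(256, N_SUBCARRIERS)

for step in range(200):
    pred_b = decoder(encoder(csi_link_a))
    loss = loss_fn(pred_b, csi_link_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At run time, link B's channel is predicted from link A's observed CSI alone,
# avoiding a separate channel-estimation exchange for link B.
predicted_b = decoder(encoder(csi_link_a[:1]))
```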