Haoyue Tang

Research Interests

My research explores fundamental limits, designs efficient algorithms, and builds prototypes for decision making in stochastic networks and systems. Recently I am interested in:

  • Data Driven Network Resource Allocation

  • Semantic and Task Oriented Networking

  • Sensing Data Retrieval

  • Network Anomaly Detection

Below are my past research projects:

Online Timely Data Sensing


How can we optimally manage the freshness of information updates (i.e., maintain a good estimate of \(X_t\) at the destination) sent from a source node to a destination over a channel whose statistics are unknown? Using the Age of Information (AoI) as a freshness metric, we first present a stochastic approximation algorithm that learns the optimum sampling policy almost surely, and we prove that the cumulative regret of the proposed algorithm is minimax order optimal. By incorporating more information about the time-varying process and designing content-agnostic data collection policies, our algorithm further lowers the estimation error of \(X_t\). This project is supported by the NSF AI Institute Athena. Slides, Poster
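To convey the flavor of the learning algorithm, here is a minimal sketch, not the paper's exact update rule: a threshold policy samples once the AoI reaches \(\beta\), and a Robbins-Monro iteration drives \(\beta\) toward the root of \(\mathbb{E}[R(\beta)] - \beta\,\mathbb{E}[L(\beta)] = 0\), where \(R\) is the AoI area and \(L\) the length of a frame between deliveries. The exponential delay model and the \(1/k\) step size are assumptions made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_threshold(num_frames=200_000):
    beta = 1.0                            # sampling threshold (initial guess)
    d_prev = rng.exponential(1.0)         # delay of the first delivered sample
    for k in range(1, num_frames + 1):
        wait = max(beta - d_prev, 0.0)    # wait until the AoI reaches beta
        d_next = rng.exponential(1.0)     # channel delay of the next sample
        frame_len = wait + d_next         # time between consecutive deliveries
        # integral of the AoI sawtooth over this frame
        age_area = d_prev * frame_len + 0.5 * frame_len ** 2
        # Robbins-Monro step toward the root of E[R] - beta * E[L] = 0
        beta += (age_area - beta * frame_len) / k
        d_prev = d_next
    return beta

print(f"learned threshold (= average AoI estimate): {learn_threshold():.3f}")
```

At the fixed point the threshold coincides with the achieved average AoI, which is why a single scalar iteration suffices in this toy setting.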

Highlights

  • Theoretical Contributions:

    • New convergence results for stochastic optimization in an open set.

    • New converse results for online learning algorithms based on non-parametric statistics.

Publications

Image Inversion using Diffusion Models


We notice that sensed timely data may be blurred or of low resolution. Retrieving the original image can be viewed as an image inverse problem, i.e., generating a high-fidelity image that matches the given input (the low-resolution or masked image). By using the input image as guidance, we can leverage a pretrained diffusion generative model to solve a wide range of image inverse tasks without task-specific model fine-tuning. To precisely estimate the guidance score function of the input image, we propose Diffusion Policy Gradient (DPG), a tractable computation method that views the intermediate noisy images as policies and the target image as the state selected by the policy. Experiments show that our method is robust to both Gaussian and Poisson noise degradation on multiple linear and non-linear inverse tasks, achieving lower FID and LPIPS scores on the FFHQ, ImageNet, and LSUN datasets.
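As a rough illustration of the policy-gradient view, and a sketch rather than the paper's implementation: at each reverse-diffusion step one can treat a Gaussian around the denoiser's clean-image prediction as a policy, reweight its samples by the measurement likelihood, and form a REINFORCE-style estimate of the guidance score. The names x0_hat, forward_op, r_t, and sigma_y below are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def guidance_estimate(x0_hat, y, forward_op, r_t, sigma_y, n_samples=16, rng=None):
    """REINFORCE-style estimate of grad log p(y | .) at the clean-image
    prediction x0_hat.  The Gaussian N(x0_hat, r_t^2 I) plays the role of a
    'policy' over clean images, whose samples are reweighted by the
    measurement likelihood.  (In a full sampler this gradient would still be
    chained through the denoiser back to the noisy image x_t.)"""
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal((n_samples, *x0_hat.shape))
    x0_samples = x0_hat + r_t * eps                       # candidate clean images
    # log-likelihood of the measurement y under each candidate
    resid = y - np.stack([forward_op(x) for x in x0_samples])
    log_w = -0.5 * np.sum(resid.reshape(n_samples, -1) ** 2, axis=1) / sigma_y ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                          # self-normalized weights
    # weighted score of the Gaussian policy
    return np.tensordot(w, x0_samples - x0_hat, axes=1) / r_t ** 2
```

The same estimator applies whether forward_op is linear (downsampling, masking) or non-linear, which is what makes this view attractive for a broad family of inverse tasks.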

Cross-Layer Scheduling for Data Freshness Optimization


Previous work reveals that, to keep data fresh, it is important to guarantee (i) low latency, (ii) high data rate, and (iii) service regularity. Given that sensors in wireless networks have energy constraints and the wireless channels are time-varying, how can we opportunistically generate, transmit, and deliver data to solve this multi-objective optimization problem? Based on dynamic programming, bandits, and large deviation analysis, I propose a joint data sampling, power control, and scheduling algorithm that is optimal in large-scale networks.
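For intuition only, here is a toy AoI-aware scheduler in the spirit of, but much simpler than, the proposed policy: each slot, the \(M\) users with the largest AoI-times-channel-quality index are served. The uniform channel model and the particular index are assumptions made for this sketch, not the paper's design.

```python
import numpy as np

def aoi_maxweight_sim(num_users=50, bandwidth=5, horizon=10_000, seed=0):
    """Toy max-weight scheduler: serve the M users with the largest
    AoI * success-probability index in each slot."""
    rng = rng = np.random.default_rng(seed)
    aoi = np.ones(num_users)
    total_age = 0.0
    for _ in range(horizon):
        p = rng.uniform(0.2, 1.0, num_users)     # per-slot success probabilities
        index = aoi * p                          # max-weight style priority
        served = np.argsort(index)[-bandwidth:]  # pick the M largest indices
        success = rng.random(bandwidth) < p[served]
        aoi += 1                                 # every user ages by one slot
        aoi[served[success]] = 1                 # delivered users reset their AoI
        total_age += aoi.mean()
    return total_age / horizon                   # time-average network AoI

print(f"average AoI: {aoi_maxweight_sim():.2f}")
```

Keeping the ratio \(N/M\) fixed while scaling \(N\) is the regime in which the optimality gap result below is stated.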

Highlights

  • The first optimal multi-user scheduling algorithm in the AoI literature: we show that for a network with \(N\) users and bandwidth \(M\), fixing \(N/M\) as a constant, the average AoI optimality gap between the proposed algorithm and the lower bound is \(\mathcal{O}(1/\sqrt{N})\), indicating that the proposed algorithm is optimal in large-scale networks. ITW 2020 presentation, Allerton 2019 presentation

Publications

Signal Processing for mmWave Channel Estimation

By exploiting the sparse spatial structure of mmWave channels, we propose an angle-domain off-grid channel estimation algorithm for uplink millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems. The proposed method identifies the angles and gains of the scatterer paths. Compared with conventional channel estimation methods for mmWave systems, the proposed method achieves better performance in terms of mean square error. Numerical simulation results verify the superiority of the proposed algorithm.
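To sketch the angle-domain idea, here is an on-grid simplification (the paper's off-grid method additionally refines angle estimates between grid points): greedily pick the steering vector most correlated with the residual, then re-fit all path gains by least squares. The half-wavelength ULA model and grid size are assumptions of this sketch.

```python
import numpy as np

def steering(theta, n_ant):
    """ULA steering vector with half-wavelength antenna spacing."""
    k = np.arange(n_ant)
    return np.exp(1j * np.pi * k * np.sin(theta)) / np.sqrt(n_ant)

def omp_angles(y, n_ant, n_paths, grid_size=720):
    """On-grid OMP over a dictionary of steering vectors: returns estimated
    path angles (radians) and complex gains."""
    grid = np.linspace(-np.pi / 2, np.pi / 2, grid_size)
    A = np.stack([steering(t, n_ant) for t in grid], axis=1)   # dictionary
    resid, support = y.copy(), []
    for _ in range(n_paths):
        support.append(int(np.argmax(np.abs(A.conj().T @ resid))))
        As = A[:, support]
        gains, *_ = np.linalg.lstsq(As, y, rcond=None)         # LS gain re-fit
        resid = y - As @ gains
    return grid[support], gains
```

Because mmWave channels typically have only a few scatterer paths, n_paths is small and the greedy loop is cheap relative to a full grid search over all path combinations.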

Publications

Domain Adaptation and Out-of-Distribution Generalization using Causal Inference


Conventional supervised learning methods, especially deep ones, are sensitive to out-of-distribution (OOD) examples, largely because the learned representation mixes the semantic factor with the variation factor due to their domain-specific correlation, even though only the semantic factor causes the output. To address this problem, we propose the Causal Semantic Generative model (CSG), which is based on causal reasoning and models the two factors separately, and we develop a variational Bayesian method for training CSG that enables OOD prediction from a single training domain (a minimal model sketch follows the highlights below). This work was done during my internship at Microsoft Research. Poster

  • We prove that under certain conditions, CSG can identify the semantic factor by fitting training data.

  • Empirical study shows improved OOD performance over prevailing baselines.
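The following minimal sketch conveys the separation of the two factors. The network shapes, the independent standard-normal prior, and the Gaussian likelihood are simplifications for illustration; in particular, CSG's actual prior correlates \(s\) and \(v\) in the training domain, so this is not the paper's exact design.

```python
import torch
import torch.nn as nn

class CSGSketch(nn.Module):
    """Toy CSG-style model: a semantic factor s and a variation factor v are
    modeled separately; x is generated from (s, v) while only s generates y."""
    def __init__(self, x_dim=784, s_dim=16, v_dim=16, n_cls=10):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * (s_dim + v_dim))  # variational q(s, v | x)
        self.dec = nn.Linear(s_dim + v_dim, x_dim)        # p(x | s, v)
        self.cls = nn.Linear(s_dim, n_cls)                # p(y | s): only s causes y
        self.s_dim, self.v_dim = s_dim, v_dim

    def elbo(self, x, y):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterize
        s, _v = z.split([self.s_dim, self.v_dim], dim=-1)
        rec = -((x - self.dec(z)) ** 2).sum(-1)                # Gaussian p(x | s, v)
        sup = -nn.functional.cross_entropy(self.cls(s), y, reduction="none")
        kl = 0.5 * (mu ** 2 + log_var.exp() - 1 - log_var).sum(-1)  # vs N(0, I)
        return (rec + sup - kl).mean()
```

Since the classifier sees only \(s\), a spurious domain-specific correlation routed through \(v\) cannot directly influence the prediction, which is the intuition behind the improved OOD performance.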