Rachel A. Ward
Assistant Professor of Mathematics
and ICES member
University of Texas at Austin
2515 Speedway, RLM 10.144
Austin, TX 78712
My research interests include mathematical signal processing, applied harmonic analysis, compressed sensing, theoretical computer science, and machine learning.
- Dimension reduction via random projections, University of Wisconsin math colloquium, May 2014.
- Stochastic gradient descent with importance sampling, IPAM, February 2014.
- Strengthened Sobolev inequalities for a random subspace of functions, University of Arkansas spring lecture series, April 2013.
- Sparse recovery without incoherence, Rice Math/Applied Math colloquium, February 2013.
- Deficiencies in chi-square and classical exact tests of goodness-of-fit, CalTech CMS colloquium, October 2011.
- Near-equivalence of the Restricted Isometry Property and Johnson-Lindenstrauss Lemma, MSRI, September 2011.
- Fall 2014: Graduate course on Applied and Computational Harmonic Analysis, UT Austin. Class webpage.
- Spring, Fall 2013: Real Analysis I, UT Austin
- Fall 2012: Graduate course on compressive sensing, UT Austin
- May 2011: Sparsity and Computation, Program for Women and Mathematics, Institute for Advanced Study, Princeton.
- Summer 2009: Discrete dynamical systems with applications to A/D conversion, Summer Workshop In Mathematics (SWIM) for high school students, Princeton University.
Selected preprints and publications
A full publication list can be found in my vita or on the arXiv.
Coherent matrix completion
(with Y. Chen, S. Bhojanapalli, and S. Sanghavi). Matrix completion concerns the recovery of a low-rank matrix from a subset of its revealed entries, and nuclear norm minimization has emerged as an effective surrogate for this combinatorial problem. Here, we show that nuclear norm minimization can recover an arbitrary n by n matrix of rank r from O(nr log^2(n)) revealed entries, provided that revealed entries are drawn proportionally to the local row and column coherences (closely related to leverage scores) of the underlying matrix. Our results are order-optimal up to logarithmic factors, and extend existing results for nuclear norm minimization, which require strong incoherence conditions on the types of matrices that can be recovered because the revealed entries are assumed to be uniformly distributed. We further provide extensive numerical evidence that a proposed two-phase sampling algorithm can perform nearly as well as local-coherence sampling without requiring a priori knowledge of the matrix coherence structure. Finally, we apply our results to quantify how weighted nuclear norm minimization can improve on unweighted minimization given an arbitrary set of sampled entries. pdf >
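As a toy illustration of the sampling model in this abstract (my own sketch, not the paper's code; function names are illustrative), the snippet below computes the local row and column coherences of a matrix from its rank-r SVD and the entrywise sampling probabilities proportional to their sum:

```python
import numpy as np

def local_coherences(M, r):
    """Row and column leverage scores of the best rank-r approximation of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    U, V = U[:, :r], Vt[:r, :].T
    mu = np.sum(U**2, axis=1)   # row leverage scores; they sum to r
    nu = np.sum(V**2, axis=1)   # column leverage scores; they sum to r
    return mu, nu

def sampling_probabilities(mu, nu):
    """Entry (i, j) is revealed with probability proportional to mu_i + nu_j."""
    P = mu[:, None] + nu[None, :]
    return P / P.sum()

# tiny rank-1 example with nonuniform row coherences
n, r = 8, 1
M = np.outer(np.arange(1, n + 1.0), np.ones(n))
mu, nu = local_coherences(M, r)
P = sampling_probabilities(mu, nu)
```

Rows with larger leverage scores receive proportionally more samples, which is what lets the result avoid the uniform-incoherence assumptions of earlier work.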
Stable and robust sampling strategies for compressive imaging
(with F. Krahmer). IEEE Transactions on Image Processing, 2013. To date, the theory for compressive sampling with frequency measurements has only been developed for bases that, like the canonical or 'pixel' basis, are incoherent with the Fourier basis. In many applications, such as Magnetic Resonance Imaging (MRI) or inverse scattering, one instead acquires images that are sparse in transform domains, such as spatial finite differences or wavelets, which are not incoherent with the Fourier basis. For these applications, overwhelming empirical evidence and heuristic arguments have suggested that superior image reconstruction can be obtained through certain variable density sampling strategies that concentrate on lower frequencies. Here we fortify these empirical studies with theoretical reconstruction guarantees, showing that sampling frequencies according to suitable power-law densities enables image reconstructions that are stable to sparsity defects and robust to measurement noise. Our results hinge on proving that the coherence between the Fourier and Haar wavelet bases is sufficiently concentrated on low frequencies that an incoherent preconditioned system results from resampling the Fourier basis appropriately. pdf >
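To make the variable density idea concrete, here is a small Python sketch (my own illustration, not the paper's construction) that draws 1-D Fourier frequencies from a power-law density so that samples concentrate at low frequencies; the exponent alpha is an assumed illustrative parameter, not the specific density from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def power_law_frequencies(N, m, alpha=2.0, rng=rng):
    """Draw m frequencies from {-N/2, ..., N/2 - 1} with probability
    proportional to 1 / max(1, |k|)**alpha, so low frequencies are
    sampled far more often than under a uniform density."""
    k = np.arange(-N // 2, N // 2)
    p = 1.0 / np.maximum(1, np.abs(k)) ** alpha
    p /= p.sum()
    return rng.choice(k, size=m, replace=True, p=p)

freqs = power_law_frequencies(256, 64)
```

With alpha = 2, the bulk of the draws land within a few multiples of the lowest frequencies, mirroring the empirical MRI sampling patterns the abstract describes.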
Interpolation via weighted l1 minimization
(with H. Rauhut). Submitted. Functions of interest are often smooth and sparse in some sense, and both priors should be taken into account when interpolating sampled data. Classical linear interpolation methods are effective under strong regularity assumptions, but cannot incorporate nonlinear sparsity structure. At the same time, nonlinear methods such as l1 minimization can reconstruct sparse functions from very few samples, but do not necessarily encourage smoothness. Here we show that weighted l1 minimization effectively merges the two approaches, promoting both sparsity and smoothness in reconstruction. More precisely, we provide specific choices of weights in the l1 objective that achieve approximation rates for functions with coefficient sequences in weighted lp spaces, p <= 1. We consider the implications of these results for spherical harmonic and polynomial interpolation, in both the univariate and multivariate settings. Along the way, we extend concepts from compressive sensing, such as the restricted isometry property and null space property, to accommodate weighted sparse expansions; these developments should be of independent interest in the study of structured sparse approximations and continuous-time compressive sensing problems. pdf >
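One way to see how a weighted l1 objective acts coefficient-wise is through its weighted soft-thresholding (proximal) operator. The sketch below is my own minimal illustration, not the paper's algorithm, and the growing weight choice w_j = j is an assumed example of weights that penalize high-order (less smooth) coefficients more strongly:

```python
import numpy as np

def weighted_soft_threshold(z, w, lam):
    """Proximal operator of lam * sum_j w_j |x_j|: coefficient j is
    shrunk by lam * w_j, so heavily weighted coefficients are
    suppressed more aggressively."""
    return np.sign(z) * np.maximum(np.abs(z) - lam * w, 0.0)

# illustrative growing weights and a decaying coefficient sequence
j = np.arange(1, 9)
w = j.astype(float)
z = np.array([3.0, -2.0, 1.5, 1.0, 0.5, 0.4, 0.3, 0.2])
x = weighted_soft_threshold(z, w, lam=0.2)
```

Low-order coefficients survive nearly intact while high-order ones are zeroed out, which is the sense in which the weighted objective promotes both sparsity and smoothness.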
Stable image reconstruction using total variation minimization
(with D. Needell). SIAM Journal on Imaging Sciences, 2013. This article presents near-optimal guarantees for accurate and robust image recovery from under-sampled noisy measurements using total variation minimization; to our knowledge, these results are the first of their kind. In particular, we show that from O(s log(N)) nonadaptive linear measurements, an image can be reconstructed to within the best s-term approximation of its gradient, up to a logarithmic factor, and this factor can be removed by taking slightly more measurements. Along the way, we prove a strengthened Sobolev inequality for functions lying in the null space of a suitably incoherent matrix. pdf >
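For readers new to total variation, here is a minimal Python sketch (illustrative only, not the paper's code) of the anisotropic TV seminorm, i.e. the l1 norm of the discrete image gradient; a piecewise-constant image has an exactly sparse gradient, which is the structure that makes O(s log(N)) measurements suffice:

```python
import numpy as np

def tv_norm(X):
    """Anisotropic total variation: l1 norm of the discrete gradient,
    i.e. the sum of absolute horizontal and vertical differences."""
    dx = np.diff(X, axis=1)  # horizontal differences
    dy = np.diff(X, axis=0)  # vertical differences
    return np.abs(dx).sum() + np.abs(dy).sum()

# piecewise-constant test image: a bright square on a dark background,
# whose gradient is nonzero only along the 16 boundary transitions
X = np.zeros((8, 8))
X[2:6, 2:6] = 1.0
```

Each of the four interior rows and columns contributes exactly two unit jumps, so tv_norm(X) = 16 even though the image has 64 pixels.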
Testing Hardy-Weinberg equilibrium with a simple root-mean-square statistic.
(with R. J. Carroll). Biostatistics, 2013. We provide evidence that a root-mean-square test of goodness-of-fit can be significantly more powerful than state-of-the-art exact tests in detecting deviations from Hardy-Weinberg equilibrium. Unlike Pearson's chi-square test, the log-likelihood-ratio test, and Fisher's exact test, which are sensitive to relative discrepancies between genotypic frequencies, the root-mean-square test is sensitive to absolute discrepancies. This can increase statistical power, as we demonstrate using benchmark datasets and through asymptotic analysis. With the aid of computers, exact P-values for the root-mean-square statistic can be calculated effortlessly using the code available on the author's webpage. pdf >
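The contrast between relative and absolute discrepancies is easy to see numerically. The toy Python snippet below (my own illustration with hypothetical genotype counts, not the paper's implementation; the R code listed under the software links is the real one) computes both statistics on counts where two cells have the same absolute discrepancy but different expected sizes:

```python
import numpy as np

def chi_square_stat(obs, exp):
    """Pearson's chi-square: weighs each squared discrepancy by 1/exp,
    so it is sensitive to relative discrepancies."""
    return np.sum((obs - exp) ** 2 / exp)

def rms_stat(obs, exp):
    """Root-mean-square statistic: weighs all cells equally,
    so it is sensitive to absolute discrepancies."""
    return np.sqrt(np.mean((obs - exp) ** 2))

# hypothetical genotype counts: two cells each off by 10
obs = np.array([60.0, 30.0, 10.0])
exp = np.array([50.0, 40.0, 10.0])
```

Chi-square assigns the two equal absolute discrepancies different contributions (2 vs 2.5) because the expected counts differ, whereas the root-mean-square statistic treats them identically.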
New and improved Johnson-Lindenstrauss embeddings via the Restricted Isometry Property
(with F. Krahmer). SIAM Journal on Mathematical Analysis, Volume 43 (2011). Consider an m by N matrix Phi with the Restricted Isometry Property of order k and level delta; that is, the norm of any k-sparse vector in R^N is preserved to within a multiplicative factor of 1 +- delta under application of Phi. We show that by randomizing the column signs of such a matrix Phi, the resulting map with high probability embeds any fixed set of p = O(e^k) points in R^N into R^m without distorting the norm of any point in the set by more than a factor of 1 +- delta. Consequently, matrices with the Restricted Isometry Property and randomized column signs provide optimal Johnson-Lindenstrauss embeddings up to logarithmic factors in N. In particular, our results improve the best known bounds on the necessary embedding dimension m for a wide class of structured random matrices; for partial Fourier and partial Hadamard matrices, we improve the recent bound m = O(delta^(-4) log(p) log^4(N)) of Ailon and Liberty to m = O(delta^(-2) log(p) log^4(N)), which is optimal up to the logarithmic factors in N. Our results also have a direct application in the area of compressed sensing for redundant dictionaries. pdf >
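The column-sign randomization at the heart of this construction is a one-liner. Below is a small Python sketch (illustrative only) of the map Phi -> Phi diag(eps) for a random sign vector eps; since eps * x has the same support and norm as x, the randomized matrix inherits the Restricted Isometry Property of Phi:

```python
import numpy as np

rng = np.random.default_rng(42)

def randomize_column_signs(Phi, rng=rng):
    """Form Phi @ diag(eps) for a uniform random sign vector
    eps in {-1, +1}^N by flipping the sign of each column
    independently; entry magnitudes are unchanged."""
    eps = rng.choice([-1.0, 1.0], size=Phi.shape[1])
    return Phi * eps, eps

Phi = np.arange(15, dtype=float).reshape(3, 5)
Psi, eps = randomize_column_signs(Phi)
```

By construction, applying Psi to a vector x is the same as applying Phi to the sign-flipped vector eps * x, which is the observation that transfers the RIP to the randomized map.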
- Jason Jo
- Chris White
- Soledad Villar
Funding
I am supported by an NSF CAREER award, an AFOSR Young Investigator Program Award, and ONR Grant N00014-12-1-0743.
- Matlab code implementing the 1-bit compressive sensing with norm estimation algorithms from the paper: "One-bit compressive sensing with norm estimation."
- R code for computing root-mean-square and chi-square P-values in testing Hardy-Weinberg equilibrium, as described in the paper "A simple and powerful metric of Hardy-Weinberg proportions." Created by Shubhodeep Mukherji.
- Matlab code implementing the iteratively reweighted least squares algorithm for low-rank matrix completion from the paper: "Low-rank matrix recovery via iteratively reweighted least squares minimization". Created by Stephan Worm.