Rachel A. Ward
Assistant Professor
Mathematics Department
University of Texas at Austin
2515 Speedway, RLM 10.144
Austin, TX 78712
rward@math.utexas.edu
My research spans signal processing (applied harmonic analysis and probability),
numerical linear algebra, and statistics. Much of my work is in compressed
sensing.
News
- 4/2/13: I will be co-organizing the Fall 2014 ICERM Semester Program on High-dimensional Approximation
- 2/3/13: UT Austin's first Sonia Kovalevsky High School Mathematics Day will be held on the UT Austin campus Saturday, April 20. Register here.
- 2/1/13: Chris White created a fast Sudoku solver using L1 minimization, based on the paper Linear systems, sparse solutions, and Sudoku by Babu, Pelckmans, Stoica, and Li. Hard Sudoku puzzles are solved instantly!
Selected talks
- Sparse recovery without incoherence, Rice Math/Applied Math colloquium, February 2013.
- Chi-square and classical exact tests often wildly misreport significance; the remedy lies in computers, CalTech CMS colloquium, October 2011.
- Near equivalence of the Restricted Isometry Property and Johnson-Lindenstrauss Lemma, MSRI, September 2011.
Selected preprints and publications
A full publication list can be found in my vita or on the arXiv.
Beyond incoherence: stable and robust sampling strategies for compressive imaging
(with Felix Krahmer). Submitted. To date, the theory for compressive sampling with frequency measurements has only been developed for bases that, like the canonical or 'pixel' basis, are incoherent with the Fourier basis. In many applications, such as Magnetic Resonance Imaging (MRI) or inverse scattering, one instead acquires images that are sparse in transform domains, such as spatial finite differences or wavelets, which are not incoherent with the Fourier basis. For these applications, overwhelming empirical evidence and heuristic arguments have suggested that superior image reconstruction can be obtained through certain variable density sampling strategies which concentrate on lower frequencies. Here we fortify these empirical studies with theoretical reconstruction guarantees, showing that sampling frequencies according to suitable power-law densities enables image reconstructions that are stable to sparsity defects and robust to measurement noise. Our results hinge on proving that the coherence between the Fourier and Haar wavelet bases is sufficiently concentrated on low frequencies that an incoherent preconditioned system results by resampling the Fourier basis appropriately.
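A toy Python sketch of the variable-density sampling idea (the grid size N, power-law exponent alpha, and sample count m below are illustrative choices, not values prescribed by the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                          # number of 1-D frequencies (illustrative)
alpha = 1.0                      # power-law exponent (an assumed choice)
freqs = np.arange(-N // 2, N // 2)

# power-law density concentrating on low frequencies
density = 1.0 / np.maximum(1, np.abs(freqs)) ** alpha
density /= density.sum()         # normalize into a probability distribution

m = 64                           # number of Fourier measurements to draw
samples = rng.choice(freqs, size=m, p=density)
```

Low frequencies are drawn far more often than high ones, while every frequency retains positive probability.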
Stable image reconstruction using total variation minimization
(with Deanna Needell). Submitted. This article presents near-optimal guarantees for accurate and robust image recovery from under-sampled noisy measurements using total variation minimization, and our results may be the first of this kind. In particular, we show that from O(s log(N)) nonadaptive linear measurements, an image can be reconstructed to within the best s-term approximation of its gradient up to a logarithmic factor, and this factor can be removed by taking slightly more measurements. Along the way, we prove a strengthened Sobolev inequality for functions lying in the null space of a suitably incoherent matrix.
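For readers unfamiliar with the objective, a minimal Python sketch of the anisotropic discrete total variation seminorm that such guarantees are stated in terms of; the 4x4 test image is an arbitrary illustration:

```python
import numpy as np

def tv_norm(img):
    # anisotropic total variation: l1 norm of the discrete gradient
    dx = np.diff(img, axis=1)    # horizontal differences
    dy = np.diff(img, axis=0)    # vertical differences
    return np.abs(dx).sum() + np.abs(dy).sum()

# a piecewise-constant image has a very sparse gradient, hence small TV
img = np.zeros((4, 4))
img[:, 2:] = 1.0                 # one vertical edge
```

Natural images are approximately piecewise constant, which is why their gradients are compressible even when the images themselves are not.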
Testing Hardy-Weinberg equilibrium with a simple root-mean-square statistic.
Submitted. We provide evidence that a root-mean-square test of goodness-of-fit can be significantly more powerful than state-of-the-art exact tests in detecting deviations from Hardy-Weinberg equilibrium. Unlike Pearson's chi-square test, the log-likelihood-ratio test, and Fisher's exact test, which are sensitive to relative discrepancies between genotypic frequencies, the root-mean-square test is sensitive to absolute discrepancies. This can increase statistical power, as we demonstrate using benchmark datasets and through asymptotic analysis. With the aid of computers, exact P-values for the root-mean-square statistic can be calculated effortlessly, and can be easily implemented using the code available on the author's webpage.
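A hypothetical Python sketch of the statistic (the reference implementation is the R code linked under Code below; the Monte Carlo p-value here is a plug-in approximation for illustration, not the exact computation described in the paper):

```python
import numpy as np

def hwe_expected(counts):
    # Hardy-Weinberg genotype frequencies under the plug-in allele frequency
    n = counts.sum()
    p = (2 * counts[0] + counts[1]) / (2 * n)   # MLE frequency of allele A
    return np.array([p ** 2, 2 * p * (1 - p), (1 - p) ** 2])

def rms_stat(counts):
    # root-mean-square distance between observed and expected genotype frequencies
    return np.sqrt(np.mean((counts / counts.sum() - hwe_expected(counts)) ** 2))

def mc_pvalue(counts, trials=2000, seed=0):
    # Monte Carlo p-value: simulate genotype counts under the fitted HWE model
    rng = np.random.default_rng(seed)
    t0 = rms_stat(counts)
    sims = rng.multinomial(counts.sum(), hwe_expected(counts), size=trials)
    return np.mean([rms_stat(c) >= t0 for c in sims])

counts = np.array([25, 50, 25])   # genotypes (AA, Aa, aa) in perfect HWE proportions
```

A sample in perfect Hardy-Weinberg proportions yields a statistic of zero and hence the largest possible p-value.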
Chi-square and classical exact tests often wildly misreport significance; the remedy lies in computers
(with Will Perkins and Mark Tygert). Submitted. If a discrete probability distribution in a model being tested for goodness-of-fit is not close to uniform, then forming the Pearson chi-square statistic can involve division by nearly zero. This often leads to serious trouble in practice, even in the absence of round-off errors, as the present article illustrates via numerous examples. Fortunately, with the now widespread availability of computers, avoiding all the trouble is simple and easy: without the problematic division by nearly zero, the actual values taken by goodness-of-fit statistics are not humanly interpretable, but black-box computer programs can rapidly calculate their precise significance.
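A small Python illustration of the failure mode (the model and counts are made-up numbers chosen only to expose the division by a near-zero expectation):

```python
import numpy as np

# model with one very small cell; a single stray observation there
# dominates the chi-square statistic via division by a tiny expectation
model = np.array([0.98, 0.019, 0.001])
observed = np.array([97, 2, 1])          # counts from n = 100 draws
n = observed.sum()

expected = n * model                      # expected counts: [98, 1.9, 0.1]
cells = (observed - expected) ** 2 / expected
chi2 = cells.sum()                        # inflated by the 0.1-count cell

# a root-mean-square statistic on frequencies has no such division
rms = np.sqrt(np.mean((observed / n - model) ** 2))
```

One observation in a cell with expected count 0.1 contributes (1 - 0.1)^2 / 0.1 = 8.1 to chi-square, swamping the other cells, while the RMS statistic stays small.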
Sparse Legendre expansions via l1-minimization
(with Holger Rauhut). Journal of Approximation Theory, Volume 164, Issue 5 (2012). We consider the problem of recovering polynomials that are sparse with respect to the basis of Legendre polynomials from a small number of random samples. In particular, we show that a Legendre s-sparse polynomial of maximal degree N can be recovered from m = O(s log^4(N)) random samples that are chosen independently according to the Chebyshev probability measure. As an efficient recovery method, l1-minimization can be used. We establish these results by verifying the restricted isometry property of a preconditioned random Legendre matrix. We then extend these results to a large class of orthogonal polynomial systems, including the Jacobi polynomials, of which the Legendre polynomials are a special case. Finally, we transpose these results into the setting of approximate recovery for functions in certain infinite-dimensional function spaces.
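A numerical sanity check of the preconditioning idea in Python (the degree count and quadrature size are illustrative; this verifies only the expectation identity behind the preconditioning, not the restricted isometry property itself):

```python
import numpy as np
from numpy.polynomial.legendre import legvander

# Check E[ v(x)^2 L_j(x) L_k(x) ] = delta_{jk} under the Chebyshev measure,
# where v(x) = sqrt(pi/2) (1 - x^2)^(1/4) and L_j are the Legendre polynomials
# normalized in L2(dx/2), using Gauss-Chebyshev quadrature.
N = 8                                         # polynomial degrees 0..N-1
n = 4000                                      # quadrature nodes
k = np.arange(1, n + 1)
x = np.cos((2 * k - 1) * np.pi / (2 * n))     # Gauss-Chebyshev nodes (weight pi/n)

L = legvander(x, N - 1) * np.sqrt(2 * np.arange(N) + 1)   # normalized Legendre
v2 = (np.pi / 2) * np.sqrt(1 - x ** 2)                     # squared preconditioner
G = (L * v2[:, None]).T @ L / n               # approximates the expectation; ~ I
```

The identity holds because v(x)^2 times the Chebyshev density is exactly the uniform density dx/2, so the preconditioned system is orthonormal in expectation.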
New and improved Johnson-Lindenstrauss embeddings via the Restricted Isometry Property
(with Felix Krahmer). SIAM Journal on Mathematical Analysis, Volume 43 (2011). Consider an m by N matrix Phi with the Restricted Isometry Property of order k and level delta, that is, the norm of any k-sparse vector in R^N is preserved to within a multiplicative factor of 1 ± delta under application of Phi. We show that by randomizing the column signs of such a matrix Phi, the resulting map with high probability embeds any fixed set of p = O(e^k) points in R^N into R^m without distorting the norm of any point in the set by more than a factor of 1 ± delta. Consequently, matrices with the Restricted Isometry Property and with randomized column signs provide optimal Johnson-Lindenstrauss embeddings up to logarithmic factors in N. In particular, our results improve the best known bounds on the necessary embedding dimension m for a wide class of structured random matrices; for partial Fourier and partial Hadamard matrices, we improve the recent bound m = O(delta^(-4) log(p) log^4(N)) appearing in Ailon and Liberty to m = O(delta^(-2) log(p) log^4(N)), which is optimal up to the logarithmic factors in N. Our results also have a direct application in the area of compressed sensing for redundant dictionaries.
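A quick Python illustration of the column-sign trick (a Gaussian matrix stands in for a RIP matrix here, which is an assumption of this sketch, not one of the structured matrices the theorem targets):

```python
import numpy as np

rng = np.random.default_rng(0)
m, N = 128, 512
Phi = rng.normal(size=(m, N)) / np.sqrt(m)   # stand-in for a RIP matrix (assumed)
signs = rng.choice([-1.0, 1.0], size=N)
Psi = Phi * signs                            # randomize the column signs

# embed a fixed (non-sparse) point set and measure the norm distortion
X = rng.normal(size=(10, N))
ratios = np.linalg.norm(X @ Psi.T, axis=1) / np.linalg.norm(X, axis=1)
```

The point of the theorem is that the sign randomization upgrades a sparse-vector guarantee (RIP) to a guarantee for arbitrary fixed point sets (Johnson-Lindenstrauss).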
Quiet sigma delta quantization
Nonlinearity, Volume 23, Number 9 (2010). In this paper, we introduce a family of second-order sigma delta quantization schemes for analog-to-digital conversion which are 'quiet': quantization output is guaranteed to fall to zero at the onset of vanishing input. In the process, we prove that the origin is a globally attractive fixed point for the related family of asymmetrically-damped piecewise affine maps. Our proof of convergence is twofold: first, we construct a trapping set using a Lyapunov-type argument; we then take advantage of the asymmetric structure of the maps under consideration to prove convergence to the origin from within this trapping set.
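To illustrate only the 'quiet' property, here is a much simpler first-order greedy sigma delta sketch in Python with a ternary alphabet; it is not the second-order family constructed in the paper:

```python
import numpy as np

def sigma_delta(x):
    # greedy first-order sigma delta with ternary alphabet {-1, 0, 1};
    # once the input vanishes, the bounded state keeps the output at zero
    u, q = 0.0, []
    for xn in x:
        w = u + xn
        qn = float(np.clip(np.round(w), -1, 1))
        q.append(qn)
        u = w - qn                    # state error stays bounded by 1/2
    return np.array(q)

# constant input 0.3 for 500 steps, then the input vanishes
x = np.concatenate([np.full(500, 0.3), np.zeros(50)])
q = sigma_delta(x)
```

While the input is active, the running average of the output tracks it (a first-order property); after the input vanishes the quantizer goes quiet immediately.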
Compressed sensing with cross validation
IEEE Transactions on Information Theory, Volume 55, Issue 12 (2009). Compressed sensing decoding algorithms can efficiently recover an N-dimensional real-valued vector x to within a factor of its best k-term approximation by taking m = 2k log(N/k) measurements y = Phi x. If the sparsity or approximate sparsity level of x were known, then this theoretical guarantee would imply quality assurance of the resulting compressed sensing estimate. However, because the underlying sparsity of the signal x is unknown, the quality of a compressed sensing estimate x* using m measurements is not assured. Nevertheless, we demonstrate that sharp bounds on the error || x - x* ||_2 can be achieved with almost no effort. More precisely, we assume that a maximum number of measurements m is pre-imposed; we reserve 4 log(p) of the original m measurements and compute a sequence of possible estimates (x_j)_{j=1}^p to x from the m - 4 log(p) remaining measurements; the errors ||x - x*_j ||_2 for j = 1, ..., p can then be bounded with high probability. As a consequence, numerical upper and lower bounds on the error between x and the best k-term approximation to x can be estimated for p values of k with almost no cost. Our observation has applications outside of compressed sensing as well.
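A toy Python sketch of the holdout idea (hard thresholding of the proxy Phi^T y stands in for a real decoder, and the split sizes are illustrative, not the 4 log(p) reserved measurements from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, m_cv = 200, 80, 40
x = np.zeros(N)
x[:5] = [3, -2, 1.5, 1, -0.5]                 # toy 5-sparse signal
Phi = rng.normal(size=(m, N)) / np.sqrt(m)    # recovery measurements
Phi_cv = rng.normal(size=(m_cv, N))           # reserved (holdout) measurements
y, y_cv = Phi @ x, Phi_cv @ x

# candidate estimates x_j: keep the j largest entries of the proxy Phi^T y
proxy = Phi.T @ y
errs_true, errs_est = [], []
for j in [1, 2, 3, 5, 8]:
    xj = np.zeros(N)
    idx = np.argsort(np.abs(proxy))[-j:]
    xj[idx] = proxy[idx]                      # crude decoder, illustration only
    errs_true.append(np.linalg.norm(x - xj))
    # holdout residual estimates the unknown error ||x - x_j||_2
    errs_est.append(np.linalg.norm(y_cv - Phi_cv @ xj) / np.sqrt(m_cv))
```

The reserved measurements never see the decoder, so the scaled holdout residual concentrates around the true recovery error for each candidate.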
PhD Students
- Jason Jo
- Karin Knudson (co-advised with Jonathan Pillow)
- Chris White
Collaborators
- Boris Alexeev @Princeton
- Nicolas Burq @Universite Paris-Sud
- Raymond Carroll @Texas A&M
- Rick Durrett @Duke
- Semyon Dyatlov @Berkeley
- Massimo Fornasier @TU München
- Mark Iwen @Duke
- Felix Krahmer @Göttingen
- Deanna Needell @Claremont McKenna
- Will Perkins @Georgia Tech
- Holger Rauhut @Hausdorff Center, Bonn
- Rayan Saab @Duke
- Fadil Santosa @University of Minnesota
- Mark Tygert @Courant Institute
- Maciej Zworski @Berkeley
Teaching
- May 2011: Sparsity and Computation, Program for Women and Mathematics, Institute for Advanced Study, Princeton.
- Spring 2010/Fall 2010: Calculus III.
- Summer 2009: Discrete dynamical systems with applications to A/D conversion, Summer Workshop In Mathematics (SWIM) for high school students, Princeton University.
- Fall 2008: Integrated Math, Engineering, and Physics (similar to Calculus III), Princeton University.
Code
- R code for computing root-mean-square and chi-square P-values in testing Hardy-Weinberg equilibrium, as described in the paper "A simple and powerful metric of Hardy-Weinberg proportions." Created by Shubhodeep Mukherji.
- Matlab code implementing the iteratively reweighted least squares algorithm for low-rank matrix completion from the paper: "Low-rank matrix recovery via iteratively reweighted least squares minimization". Created by Stephan Worm.