We consider the rank minimization problem from quadratic measurements, i.e., recovering a rank $r$ matrix $X \in \mathbb{R}^{n\times r}$ from $m$ scalar measurements $y_i=a_i^T XX^T a_i,\;a_i\in \mathbb{R}^n,\;i=1,\ldots,m$. Such problems arise in a variety of applications, such as quadratic regression and quantum state tomography. We present a novel algorithm, termed the *exponential-type gradient descent algorithm*, to minimize the non-convex objective function $f(U)=\frac{1}{4m}\sum_{i=1}^m(y_i-a_i^T UU^T a_i)^2$. The algorithm starts with a careful initialization and then refines this initial guess by iteratively applying exponential-type gradient descent. In particular, a good initial guess of $X$ can be obtained as long as the number of Gaussian random measurements is $O(nr)$, and the iteration converges linearly to the true $X$ (up to an orthogonal matrix) with $m=O\left(nr\log (cr)\right)$ Gaussian random measurements.
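To make the two-stage structure concrete, here is a minimal NumPy sketch of the pipeline the abstract describes: a spectral initialization followed by iterative refinement of $f(U)$. The abstract does not specify the exponential-type update rule or the paper's exact initialization scaling, so this sketch substitutes plain gradient descent with a heuristic step size and a common spectral-initialization variant; all function names and constants below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def spectral_init(A, y, r):
    """Spectral initialization (an assumed common variant, not necessarily
    the paper's).  For Gaussian a_i, E[(1/m) sum_i y_i a_i a_i^T]
    = 2 X X^T + ||X||_F^2 I, so we subtract the trace estimate and halve."""
    m, n = A.shape
    Y = (A.T * y) @ A / m                # (1/m) sum_i y_i a_i a_i^T
    theta = y.mean()                     # estimates ||X||_F^2
    vals, vecs = np.linalg.eigh((Y - theta * np.eye(n)) / 2)
    top = np.argsort(vals)[::-1][:r]     # top-r eigenpairs
    return vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))

def refine(A, y, r, mu=0.2, iters=2000):
    """Minimize f(U) = (1/4m) sum_i (y_i - a_i^T U U^T a_i)^2 by plain
    gradient descent (a stand-in for the paper's exponential-type step).
    Gradient: grad f(U) = (1/m) sum_i (a_i^T U U^T a_i - y_i) a_i a_i^T U."""
    m, _ = A.shape
    U = spectral_init(A, y, r)
    step = mu / np.linalg.norm(U, 2) ** 2   # heuristic step size
    for _ in range(iters):
        AU = A @ U
        q = np.einsum('ij,ij->i', AU, AU)   # q_i = a_i^T U U^T a_i
        grad = (A.T * ((q - y) / m)) @ AU
        U -= step * grad
    return U
```

A quick synthetic check, noting that recovery is only possible up to a right orthogonal factor, so the error is measured on $UU^T$:

```python
rng = np.random.default_rng(0)
n, r = 50, 2
m = 10 * n * r                           # assumed oversampling for the demo
X = rng.standard_normal((n, r))
A = rng.standard_normal((m, n))
y = np.einsum('ij,ij->i', A @ X, A @ X)  # y_i = a_i^T X X^T a_i
U = refine(A, y, r)
print(np.linalg.norm(U @ U.T - X @ X.T) / np.linalg.norm(X @ X.T))
```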