Least squares solutions of linear systems $Ax=b$ are central to parameter
estimation in engineering, applied mathematics, and statistics. Several methods are available for
their solution, including QR decomposition, Cholesky decomposition applied to the normal equations,
singular value decomposition (SVD), and Krylov subspace methods. Krylov subspace methods were
developed for the sparse matrices $A$ that arise in the solution of partial differential equations.
The QR method (and its rank-revealing variant, RRQR) and the SVD are commonly used for the dense
matrices $A$ that arise in engineering and statistics. Although the Cholesky decomposition is
backward stable and is known to have the lowest operation count, several authors recommend QR in
applications. In this article, we
take a fresh look at least squares problems for dense $A$ matrices with full column rank using
numerical experiments guided by recent results from the theory of random matrices. Contrary to
currently accepted belief, comparisons of the sensitivity of the Cholesky and QR solutions to random
parameter perturbations for various low to moderate condition numbers show no significant
difference to within machine precision. Experiments for matrices with artificially high condition
numbers reveal that the relative difference between the two solutions is on average only of the
order of $10^{-6}$. Finally, Cholesky is found to be markedly faster than QR: the mean computational
time for QR is between two and four times that of Cholesky, and the standard deviation of the
Cholesky computation times is about one third of that of QR. Our conclusion is that for systems
$Ax=b$ in which $A$ has full column rank, if the condition numbers are low or moderate, then the
normal equation method with Cholesky decomposition is preferable to QR.
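
For reference, the two procedures compared here can be sketched in standard notation, assuming
$A\in\mathbb{R}^{m\times n}$ with $m\ge n$ and full column rank:
\[
  \text{Normal equations with Cholesky:}\qquad
  A^{\mathsf T}A\,x = A^{\mathsf T}b,\qquad
  A^{\mathsf T}A = R^{\mathsf T}R \ \ \text{($R$ upper triangular)},\qquad
  R^{\mathsf T}y = A^{\mathsf T}b,\qquad Rx = y.
\]
\[
  \text{QR:}\qquad
  A = QR \ \ \text{($Q$ with orthonormal columns)},\qquad
  Rx = Q^{\mathsf T}b.
\]
The standard textbook operation counts, roughly $mn^{2}+n^{3}/3$ flops for the normal equation
route and $2mn^{2}-2n^{3}/3$ flops for Householder QR, are consistent with the factor of two to
four observed in the timings.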