Wilkinson on a priori error analysis

I’ve been reading a lot of NLA lately (e.g., a recent paper on communication-avoiding RRQR), which has necessarily meant brushing up on details I paid scant attention to in my NLA courses, like the different types of pivoting. That reading led me to this quote from a famous numerical analyst:

There is still a tendency to attach too much importance to the precise error bounds obtained by an a priori error analysis. In my opinion, the bound itself is the least important part of it. The main object of such an analysis is to expose the potential instabilities, if any, of an algorithm so that hopefully from the insight thus obtained one might be led to improved algorithms. Usually the bound itself is weaker than it might have been because of the necessity of restricting the mass of detail to a reasonable level and because of the limitations imposed by expressing the errors in terms of matrix norms. A priori bounds are not, in general, quantities that should be used in practice. Practical error bounds should usually be determined by some form of a posteriori error analysis, since this takes full advantage of the statistical distribution of rounding errors and of any special features, such as sparseness, in the matrix.

Can I get an amen? This could be the epigraph of the career I’m building. I strive for a priori analyses, whether of algorithms or physical systems, because in the best cases they enhance our understanding of the factors relevant to our problems. I seek them out in others’ work and try to provide them in my own because I’m deeply skeptical of purely empirical results: without sufficient theory, how do you know you’re not just avoiding the inputs that would expose some failing in your idea? This is why I’m an applied mathematician.
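
To make Wilkinson’s contrast concrete, here is a minimal sketch (my own illustration, not from Wilkinson or the RRQR paper) for solving Ax = b by Gaussian elimination with partial pivoting: the classical a priori backward-error bound involves the dimension and the worst-case pivot growth factor, while the a posteriori Rigal–Gaches backward error is computed from the residual of the solution you actually obtained. The random test matrix, the dimension, and the crude use of the worst-case growth factor 2^(n-1) are assumptions chosen purely to make the gap visible.

```python
import numpy as np

# Sketch contrasting a worst-case a priori bound with an a posteriori
# backward-error estimate for Ax = b (illustration only; the test matrix,
# dimension, and worst-case growth factor are assumptions).

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x = np.linalg.solve(A, b)      # LU with partial pivoting under the hood
u = np.finfo(float).eps / 2    # unit roundoff for double precision

# A priori flavour: Wilkinson-style worst-case backward error for Gaussian
# elimination with partial pivoting, roughly c(n) * growth * u, here with
# the worst-case growth factor 2^(n-1) and c(n) ~ n^2 to make the point.
apriori_bound = n**2 * 2.0**(n - 1) * u

# A posteriori: normwise backward error (Rigal-Gaches) computed from the
# residual of the solution we actually have in hand.
r = b - A @ x
aposteriori = np.linalg.norm(r, np.inf) / (
    np.linalg.norm(A, np.inf) * np.linalg.norm(x, np.inf)
    + np.linalg.norm(b, np.inf)
)

print(f"worst-case a priori bound:   {apriori_bound:.2e}")
print(f"a posteriori backward error: {aposteriori:.2e}")
```

On a typical random matrix the a posteriori number comes out near unit roundoff, while the worst-case a priori bound is astronomically pessimistic, which is exactly the distinction Wilkinson is drawing between the two kinds of analysis.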