Handling wavelet expansions in numerical methods
Abstract
When we use wavelet expansions in numerical methods, we can exploit the approximating power of wavelets for the operator only, or for both the operator and the operand. The former is simpler; the latter is more promising.
In the latter case there is no longer a fixed finite index set; the indices used in the expansions now come from a virtually infinite index set. Applying an operator is then equivalent to computing the product of a (quasi-)sparse matrix and a sparse vector. For each nonzero component of the vector one needs to approximate one column of the matrix, and the accuracy required of such an approximation may depend on the absolute value of that vector component.
Cohen et al. proposed the following approximation w_j to a product Bv: w_j = sum_{k=0}^{j} B_{j-k}(v_k - v_{k-1}), where v_k is the best 2^k-term approximation to v (with v_{-1} = 0), and (B_0, B_1, ...) is a sequence of approximations to B with increasing accuracy. The drawback of this construction is that all entries of v need to be sorted by absolute value to find the best approximations v_k.
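The construction can be sketched as follows (a minimal illustration, not the authors' implementation; the helper names and the use of dense NumPy matrices for the B_l are my own assumptions):

```python
import numpy as np

def best_n_term(v, n):
    """Best n-term approximation of v: keep the n entries that are
    largest in absolute value, zero out the rest. Note that this
    requires sorting all entries of v by absolute value."""
    w = np.zeros_like(v)
    if n > 0:
        idx = np.argsort(np.abs(v))[-n:]
        w[idx] = v[idx]
    return w

def cdv_product(B_seq, v, j):
    """Approximate B @ v as sum_{k=0}^{j} B_{j-k} @ (v_k - v_{k-1}),
    where v_k is the best 2^k-term approximation of v and v_{-1} = 0.
    B_seq[l] stands in for B_l, an approximation of B whose accuracy
    increases with l (here simply a list of matrices)."""
    w = np.zeros(B_seq[0].shape[0])
    v_prev = np.zeros_like(v)
    for k in range(j + 1):
        v_k = best_n_term(v, min(2**k, v.size))
        w += B_seq[j - k] @ (v_k - v_prev)
        v_prev = v_k
    return w
```

The telescoping structure applies the most accurate matrix approximation B_j to the few largest vector entries and cheaper approximations to the many small corrections, which is the point of the scheme.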
We propose alternative approximations that can be constructed without sorting all the terms. These approximations are based on a division of the vector components into groups of similar absolute value. We consider two divisions of different coarseness. In general one cannot find the best 2^j-term approximation based on this division alone; to achieve the same accuracy, we use up to four times as many components. In practice we seldom need that many extra elements.
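The grouping idea can be illustrated roughly as follows (a sketch under my own assumptions; the binning scheme and helper names are illustrative, not the thesis's exact construction): entries are placed into dyadic bins by absolute value in a single linear pass, and an approximation then takes whole bins, largest magnitudes first.

```python
import numpy as np

def magnitude_bins(v, num_bins=32):
    """Group the indices of v into bins of similar absolute value without
    sorting: bin b holds entries with 2^-(b+1)*vmax < |v_i| <= 2^-b*vmax.
    This is one O(N) pass, whereas a full sort costs O(N log N)."""
    absv = np.abs(v)
    vmax = absv.max()
    bins = [[] for _ in range(num_bins)]
    for i, a in enumerate(absv):
        if a == 0.0:
            continue  # zero entries never contribute
        b = min(int(np.floor(np.log2(vmax / a))), num_bins - 1)
        bins[b].append(i)
    return bins

def approx_n_term(v, n, bins):
    """Near-best n-term approximation: keep whole bins, starting from the
    largest magnitudes, until at least n entries are kept. This may keep
    more than n entries, trading a few extra components for not sorting."""
    w = np.zeros_like(v)
    kept = 0
    for group in bins:
        if kept >= n:
            break
        for i in group:
            w[i] = v[i]
        kept += len(group)
    return w
```

Because a bin is either taken whole or not at all, the result need not be the best n-term approximation, but every kept entry is within a factor 2 of any discarded one, which bounds the loss of accuracy.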
We have tested the best 2^j-term approximations and our alternative ones in an iterative solution method for a certain boundary integral equation. For low m, using the best 2^j-term approximations gives slightly faster iterations. If the number of elements in v is large enough, the cost of sorting all the elements becomes higher than the cost of processing the extra elements. Processing more elements improves the accuracy, whereas sorting the elements only costs time. In our test case the alternative approximations lead to more efficient computations.