
Compressed Sensing-Based SARS-CoV-2 Pool Testing

Bubacarr Bah
Hendrik Bernd Petersen
Peter Jung

Communicated by Notices Associate Editor Reza Malek-Madani

Introduction

In most countries, it seems that the population will have to live with COVID-19 for at least a while. We hope that most readers of this article will require very little convincing of the need to continually monitor the level of infection in the population. However, the monitoring has financial implications, in particular the cost of test kits. This results in a strong desire to reduce the number of tests required to identify infected individuals. One approach is pool testing, where test samples of individuals are pooled together and tested as one sample. If this sample turns out negative, then every individual whose sample is in the pool is declared negative. Otherwise, the group of samples can be subdivided and retested. This has the potential to significantly reduce the number of tests required. It has been intensively studied for COVID-19 (see, for example, SHB20, MNB20), and the United States Food and Drug Administration (FDA) granted pooled testing methods an emergency use authorization in July 2021.

The mathematical field of group testing is concerned with this pooling problem and has therefore also gained interest recently. It was invented by Dorfman Dor43. The methods from classical group testing often suffer from several drawbacks, including slowness due to the adaptivity of the tests and sensitivity to errors DH06; more on group testing follows in the next section. Compressed sensing, on the other hand, is a mathematical field concerned with recovering a vector with many zero entries from as few nonadaptive linear measurements as possible Don06. Compressed sensing has achieved several theoretical goals, including the optimal number of measurements, independence of the measurements from each other, robustness to noise, and sometimes even error-correcting properties Don06, CRT06, LDB09. A more detailed discussion of compressed sensing follows after the discussion of group testing.

In this manuscript, we revisit the use of compressed sensing for pool testing. We propose to use compressed sensing instead of classical group testing for viral detection since it tackles some of the weak points of group testing mentioned above. We by no means claim originality in this; one of the first works is GIS08. However, we advocate for the use of a practical pooling strategy out of the forest of strategies proposed in the literature, and for a new algorithm with performance comparable to the state of the art in the field but with additional desirable properties that give it an advantage over other algorithms for pool testing.

The use of compressed sensing for viral detection of pools is premised on the fact that it is possible to measure the viral loads of patients. Presently, a standard test for the detection of SARS-CoV-2 is the real-time or quantitative polymerase chain reaction (qPCR).Footnote 1 The qPCR measurements are fluorescence intensities recorded at every DNA amplification cycle, and potentially every one of those measurements can be used to compute viral loads.

1

This should not be confused with reverse transcription PCR, which is often abbreviated RT-PCR SLB06.

Remark 1.

The main reason for pooling, be it using group testing or compressed sensing, is to reduce the number of tests needed to identify infected individuals. This is why we focus on the number of pools (directly or indirectly, in evaluating the performance of pooling designs and algorithms), typically given in complexity ("Big-O") notation.

This article is organized as follows. The next two sections give a more detailed introduction to group testing and compressed sensing. Then we discuss the modeling of the pooling process and the identification of infections. We end with a discussion and conclusion.

Group testing

Group testing is concerned with discovering a class of interest from a larger set containing non-members of the class of interest, without testing each member of the set. This is done by subdividing the set into subsets and testing each subset as one element. The goal is that, with the number of subsets/groups being far smaller than the total number of elements in the set, one is able to successfully identify the class of interest. More precisely, group testing seeks to identify $k$ members of a class of interest out of a set of $n$ elements by performing $m$ tests of groups, where $m \ll n$. In the disease identification setting, we would like to identify $k$ infected individuals from a population of $n$ individuals by performing $m$ group (pool) tests; again we require that $m \ll n$. The ratio $k/n$ (denoted $p$) is known as the prevalence. Usually, group testing works well when $p$ is small. It can be shown by a combinatorial argument that we need at least $\log_2\binom{n}{k} \approx k\log_2(n/k)$ tests for pool testing to work Hwa72. Figure 1 shows an example of pool testing to identify an infected individual. Here each individual participates in only one pool.
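The counting argument behind this lower bound is short enough to sketch here (a standard information-theoretic argument, stated in the notation above rather than quoted from Hwa72):

% m binary test outcomes can distinguish at most 2^m configurations,
% while there are \binom{n}{k} possible sets of infected individuals.
\[
  2^{m} \;\ge\; \binom{n}{k}
  \qquad\Longrightarrow\qquad
  m \;\ge\; \log_2\binom{n}{k} \;\approx\; k \log_2\!\left(\frac{n}{k}\right)
  \quad \text{for } k \ll n .
\]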

Figure 1.

Testing a population of size 24 with only 1 infected individual (in white), i.e., a prevalence of $1/24$. If each individual is tested, it would require 24 tests (i.e., 24 test kits). Using group testing with pool sizes of 6, members of pools 2, 3, and 4 are declared negative, while those in pool 1 are positive. Each individual in pool 1 is then retested, making it a total of 10 tests for group testing as opposed to 24 tests for testing each individual.

Figure 2.

Testing a population of size 25 with only 1 infected individual (white dot), i.e., a prevalence of $1/25$. Individual testing would require 25 tests (i.e., 25 test kits). Using group testing with pool sizes of 5 and each individual participating in two pools, the infected individual is identified by the row and column pools which contain their sample. This pooling strategy requires only 10 tests.


We can have pool testing setups where individuals participate in more than one pool. Figure 2 illustrates this with each individual participating in 2 pools. This can be represented on a two-dimensional (2D) grid, while the example in Figure 1 can be considered one-dimensional (1D). Going from 1D to 2D may reduce the need to retest. Moreover, this can be extended to a $d$-dimensional ($d$D) setting, where each individual will participate in $d$ pools. This is what the successfully applied hypercube method is about; see MNB20.Footnote 2 The 1D, 2D, …, $d$D pool testing schemes are instances of so-called array testing; see PS94.

2

This is the approach being used in Rwanda.
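A minimal simulation of the two-dimensional scheme of Figure 2 may help make this concrete (the 5-by-5 grid, the single infected individual, and the random placement below are illustrative choices, not data from the article):

import numpy as np

rng = np.random.default_rng(0)

side = 5                                   # 5 x 5 grid, population of 25
population = np.zeros((side, side), dtype=bool)
infected = (int(rng.integers(side)), int(rng.integers(side)))
population[infected] = True                # exactly one infected individual

# Pool every row and every column: 2 * 5 = 10 pooled tests in total.
row_pools = population.any(axis=1)         # True = that pooled sample tests positive
col_pools = population.any(axis=0)

# With a single infected individual, the positive row and the positive column
# intersect in exactly one cell, which identifies the infected individual.
recovered = (int(np.flatnonzero(row_pools)[0]), int(np.flatnonzero(col_pools)[0]))
print("true:", infected, "recovered:", recovered, "tests used:", 2 * side)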

Mathematically, we can represent the population of size $n$ by a binary vector $x \in \{0,1\}^{n}$, where there are ones for the infected individuals and zeros for the non-infected individuals. Pooling samples of a group together is equivalent to evaluating

$y = \bigvee_{j=1}^{n} a_j x_j$  (1)

for some appropriate binary vector $a \in \{0,1\}^{n}$. For a matrix $A \in \{0,1\}^{m \times n}$ whose rows represent the groups, we set

$y = A \vee x, \quad \text{where } (A \vee x)_i := \bigvee_{j=1}^{n} a_{ij} x_j, \; i = 1, \dots, m.$  (2)

The vector $y \in \{0,1\}^{m}$ is referred to as the measurement/observation vector.

The recovery of the infected individuals, equivalently of $x$, from observing $y$ and knowing $A$ would lead to solving the following binary system of equations, since both $A$ and $x$ are binary:

find $z \in \{0,1\}^{n}$ such that $A \vee z = y$.  (3)

The setting described above, where the groups are fixed, is known as the nonadaptive case of group testing. Alternatively, one may form a group after knowing the outcome of the preceding test(s). In that case we perform adaptive group testing. Nonadaptive approaches are faster, but they require more measurements than adaptive approaches. Adaptive methods achieve the optimal $O(k\log(n/k))$ number of measurements Hwa72, while nonadaptive methods need $O(k^{2}\log(n)/\log(k))$ measurements AA13.

Many recovery algorithms for group testing have been proposed. One popular such algorithm for nonadaptive group testing is Combinatorial Orthogonal Matching Pursuit (COMP). In the noiseless setting, the COMP algorithm declares a sample positive (i.e., infected) if and only if all the tests containing that sample satisfy $y_i = 1$. COMP will work perfectly if the pooling strategy is such that we have "$k$-disjunct" sets of measurements (see PBJ20, Definition 4.1). Another algorithm, for the adaptive case, is the binary splitting algorithm by Hwa72.
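A direct implementation of the COMP rule takes only a few lines. The sketch below uses the binary model (2); the random pooling design and the sizes are illustrative choices, not a design from the article, and if the design happens not to be $k$-disjunct COMP may flag a few extra (non-infected) individuals, although in the noiseless model it never misses an infected one:

import numpy as np

def comp_decode(A, y):
    """COMP: declare individual j infected iff every pool containing j tested positive."""
    return np.array([np.all(y[A[:, j] == 1] == 1) for j in range(A.shape[1])], dtype=int)

# toy example: random pooling design, 2 infected among 100 individuals
rng = np.random.default_rng(1)
m, n, k, d = 40, 100, 2, 10
A = np.zeros((m, n), dtype=int)
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1   # each individual joins d pools
x = np.zeros(n, dtype=int)
x[rng.choice(n, size=k, replace=False)] = 1          # infection indicator vector
y = (A @ x > 0).astype(int)                          # noiseless pooled outcomes as in (2)

print("declared:", np.flatnonzero(comp_decode(A, y)), "true:", np.flatnonzero(x))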

Compressed sensing

Mathematically, compressed sensing is concerned with the construction of a linear operator and the solution of the underdetermined system of linear equations resulting from the application of that linear operator. More precisely, let $A \in \mathbb{R}^{m \times n}$ be our linear operator, let $x \in \mathbb{R}^{n}$ be the unknown variable vector, and let the application of $A$ to $x$ result in $y \in \mathbb{R}^{m}$. Therefore, with $m < n$, we have an underdetermined linear system (note both $A$ and $x$ are real):

$y = Ax.$  (4)

In compressed sensing parlance, $A$ is called the sensing/measurement matrix, $y$ is the measurement vector, and $x$ is the signal of interest. Linear algebra tells us that there are infinitely many solutions to (4). However, it is possible to obtain a unique solution if we make some assumptions on $A$ and $x$. The main assumption on $x$ is that it has some simplicity/redundancy. The simplicity of $x$ is either sparsity or compressibility. By sparsity of $x$ we mean that it has few nonzero elements. Let $S$ be the support of $x$, i.e., $S := \mathrm{supp}(x) = \{i : x_i \neq 0\}$, and let $k = |S|$; then $x$ is said to be $k$-sparse. Conventionally, $\|x\|_0$ (the $\ell_0$-norm of $x$)Footnote 3 is used to denote the sparsity of $x$. Precisely,

3

This is not a norm because $\|\lambda x\|_0 = \|x\|_0$ for all scalars $\lambda \neq 0$ and all vectors $x$, so homogeneity fails.

$\|x\|_0 := \sum_{i=1}^{n} \mathbb{1}(x_i \neq 0),$  (5)

where $\mathbb{1}(\cdot)$ is the indicator/characteristic function. On the other hand, $x$ is said to be $k$-compressible when $x$ can be approximated quite well by a $k$-sparse vector. Furthermore, $x$ is $k$-sparse ($k$-compressible) in a basis $\Phi$ when the coefficient vector of $x$ in $\Phi$ is $k$-sparse ($k$-compressible). We denote the restriction of $x$ to its support as $x_S$ and the restriction of $x$ to the complement of the support as $x_{S^c}$.
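In code, the quantity in (5) is simply a count of nonzero entries; the tiny example below (the vector is made up) also illustrates the point of Footnote 3, namely that rescaling does not change it:

import numpy as np

x = np.array([0.0, 3.1, 0.0, 0.0, -1.2, 0.0, 0.7, 0.0])
support = np.flatnonzero(x)                      # S = {i : x_i != 0}
print("supp(x) =", support, " ||x||_0 =", len(support))   # x is 3-sparse
print("||2x||_0 =", np.count_nonzero(2 * x))     # equals ||x||_0: no homogeneity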

The assumption on $A$ is that it is an information-preserving projection, or a bi-Lipschitz linear metric space embedding of all $k$-sparse vectors into $\mathbb{R}^{m}$ (henceforth referred to as a stable linear embedding). The conditions for $A$ to fulfill the information preservation requirement include the following.

(i)

Restricted Isometry Property (RIP). A more general definition of the RIP, i.e., the $\ell_q$-norm restricted isometry property ($\mathrm{RIP}_q$) for $q \ge 1$, is the following.

Definition 2.

Matrix $A$ has $\mathrm{RIP}_q$ of order $k$, with constant $\delta_k \in (0,1)$, if for all $k$-sparse $x$ it satisfies

$(1 - \delta_k)\|x\|_q^q \le \|Ax\|_q^q \le (1 + \delta_k)\|x\|_q^q.$  (6)

For small $\delta_k$, we have $A$ being a near isometry on $k$-sparse vectors, and this implies that it is information preserving. Note that $\mathrm{RIP}_2$ is typically written as RIP (without the subscript $2$).

(ii)

Nullspace Property (NSP), here defined in the noiseless settings.

Definition 3.

Matrix $A$ has the nullspace property of order $k$ if, for any $v \in \ker(A) \setminus \{0\}$ and any set $S$ with $|S| \le k$, we have

$\|v_S\|_1 < \|v_{S^c}\|_1.$  (7)

If $A$ satisfies this property, then there exists a unique $k$-sparse vector solving (4).

(iii)

The $\mathcal{M}^{+}$ criterion is defined as follows.

Definition 4.

Matrix $A$ obeys the $\mathcal{M}^{+}$ criterion with vector $t \in \mathbb{R}^{m}$ and constant $\kappa$, if $w := A^{\top} t > 0$ entrywise and $\kappa = \max_i w_i / \min_i w_i$.

Note that $\kappa$ is actually the condition number of the diagonal matrix with diagonal $w = A^{\top} t$.

An important goal of compressed sensing is to have the projection by $A$ map to a space of very small dimension (i.e., small $m$) such that recovery is possible up to a certain error tolerance. This translates to finding matrices with small $m$ which have a good restricted isometry property or a good nullspace property. It is known that matrices can have $\mathrm{RIP}_q$ with $q = 1$ or $q = 2$, or the nullspace property of order $k$, if the number of measurements $m$ is of the order $k \log(n/k)$ CDD09. This is referred to as an optimal sampling rate.
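Certifying these properties for a given matrix is computationally hard in general, but for toy sizes one can estimate the $\mathrm{RIP}_2$ constant in (6) by brute force over all supports. The following sketch (the Gaussian ensemble and the sizes are illustrative choices, not from the article) does exactly that:

import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
m, n, k = 12, 24, 2
A = rng.standard_normal((m, n)) / np.sqrt(m)     # columns approximately unit norm

# delta_k is the largest deviation of ||A_S x||_2^2 from ||x||_2^2 over all
# supports S of size k, i.e., over the squared singular values of the submatrices A_S.
delta_k = 0.0
for S in combinations(range(n), k):
    sv = np.linalg.svd(A[:, list(S)], compute_uv=False)
    delta_k = max(delta_k, sv.max() ** 2 - 1.0, 1.0 - sv.min() ** 2)
print(f"estimated RIP_2 constant of order {k}: {delta_k:.3f}")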

The second part of the compressed sensing problem is the reconstruction of $x$ from the projection $y$. The system in (4) is underdetermined, which means $A$ is not invertible. The aim of getting the sparsest solution that is faithful to the data leads to what is called the $\ell_0$-minimization problem, i.e.,

$\min_{z \in \mathbb{R}^{n}} \|z\|_0 \quad \text{subject to} \quad Az = y.$  (8)

This is a combinatorial and nonconvex problem, which has been shown to be NP-hard to solve. However, many discrete (also referred to as “greedy”) algorithms have been proposed for this problem with provable recovery guarantees. These include iterative hard thresholding (IHT), orthogonal matching pursuit (OMP), and compressive sampling matching pursuit (CoSaMP) FR13.
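As an illustration of the greedy family, here is a minimal orthogonal matching pursuit loop (a textbook-style sketch in the spirit of FR13, not code from the authors; the Gaussian test instance is an illustrative choice):

import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k columns, refit by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))    # column most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(3)
m, n, k = 40, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
x_hat = omp(A, A @ x_true, k)
print("recovery error:", np.linalg.norm(x_hat - x_true))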

On the other hand, the $\ell_0$ problem can be relaxed to a convex one. A typical case is the Basis Pursuit (BP) algorithm that solves

$\min_{z \in \mathbb{R}^{n}} \|z\|_1 \quad \text{subject to} \quad Az = y.$  (9)

It has been well established that the solution of BP coincides with the solution of the $\ell_0$ problem (8) under most of the conditions on $A$ and $x$ stated above. The recovery of all compressed sensing algorithms is expected to be stable and to obey an instance optimality ($\ell_1$-approximation) guarantee, according to which any solution $\hat{x}$ satisfies

$\|\hat{x} - x\|_1 \le C\, \sigma_k(x)_1,$  (10)

where $C$ is an absolute constant independent of $x$ and

$\sigma_k(x)_1 := \min_{z : \|z\|_0 \le k} \|x - z\|_1,$  (11)

that is, the error of the best $k$-term approximation of $x$. There is a more general definition of the instance optimality encompassing the noise setting too FR13.

We conclude this section by mentioning compressed sensing with non-negativity constraints. This problem is typically formulated in the following way:

$\min_{z \in \mathbb{R}^{n},\, z \ge 0} \|Az - y\|.$  (12)

In the case relevant to the focus of this manuscript, which is pool testing, the norm in (12) is taken to be $\|\cdot\|_1$ (i.e., the $\ell_1$-norm). In this setting (12) is known as the non-negative least absolute deviation (NNLAD) problem. The authors of PBJ21 proposed an efficient and tuning-free algorithm (dubbed NNLAD) for this problem.

Viral Detection in Pooled Tests

Testing design

Suppose we want to find $k$ individuals infected with a virus among $n$ individuals. According to the information-theoretic lower bound we require at least $\log_2\binom{n}{k} \approx k\log_2(n/k)$ tests to find the infected individuals. The binary splitting algorithm proposed by Hwa72 finds the infected individuals with a number of tests in $O(k\log(n/k))$. However, since the tests are adaptive, each subsequent test is designed depending on the outcome of previous tests, and thus each test has to wait for the result of the previous one. If it takes a long time to perform a test, it might be desirable to perform multiple tests at once. In such cases nonadaptive methods are preferable. There are many deterministic nonadaptive methods for group testing. Most of these prove their results using disjunct matrices. For instance, in AA13 a nonadaptive group testing method is presented whose number of tests is in $O(k^{2}\log(n)/\log(k))$.
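For contrast with the nonadaptive approach advocated here, a drastically simplified adaptive splitting routine (a sketch only loosely in the spirit of Hwa72, not the generalized binary splitting algorithm itself) can be written as follows; test() stands for a hypothetical pooled-test oracle that reports whether a group contains at least one infected individual:

def find_infected(individuals, test):
    """Adaptively find all infected individuals by recursively halving positive groups.

    `individuals` is a list of IDs and `test(group)` returns True iff the pooled
    sample of `group` is positive. Returns the infected IDs and the number of
    pooled tests used."""
    infected, tests_used = [], 0

    def split(group):
        nonlocal tests_used
        tests_used += 1
        if not test(group):
            return                      # whole group negative: everyone in it is cleared
        if len(group) == 1:
            infected.append(group[0])   # a positive singleton is an infected individual
            return
        mid = len(group) // 2
        split(group[:mid])              # each test depends on the previous outcome,
        split(group[mid:])              # which is what makes the procedure adaptive

    split(list(individuals))
    return infected, tests_used

truth = {13, 47}                        # toy example: 2 infected among 64 individuals
found, used = find_infected(range(64), lambda g: any(i in truth for i in g))
print(found, "found with", used, "pooled tests")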

We propose to use compressed sensing with additional nonnegativity constraints to solve this problem. We collect specimens from the $n$ individuals arranged in a vector $x$, and the amount of virus in the specimen of the $j$th individual is denoted by the non-negative quantity $x_j$. We assume that viruses are evenly distributed in each specimen, meaning that if we take a fraction $a \in [0,1]$ of the volume of the specimen of the $j$th individual, it will contain roughly $a x_j$ viruses. Since $k$ individuals are infected, we have $\|x\|_0 = k$. Let the sample of the $i$th test contain a fraction $a_{ij}$ of the amount of specimen of the $j$th individual. The sample of the $i$th test thus contains, up to rounding errors, the amount of viruses

$\sum_{j=1}^{n} a_{ij} x_j = (Ax)_i,$

where $A$ is an $m \times n$ matrix with entries $a_{ij}$ and column sums of at most one. The number of tests is thus $m$. Often one makes the assumption that $x$ is a random vector with i.i.d. elements, for instance a Poisson random variable multiplied by a Bernoulli random variable with parameter $p$. Instead we take it here as a deterministic unknown. It is assumed that qPCR can be used to generate an estimate $y_i$ of the amount of virus $(Ax)_i$ in the $i$th test. This procedure is not accurate and errors might occur. We try to recover the amount of virus in the specimens of the individuals according to

$y = Ax + e,$  (13)

with a possibly small noise $e$ and $m$ as small as possible. As discussed above, this is exactly a compressed sensing problem where, due to the nature of the problem, $x$ is non-negative and exactly $k$-sparse (compressed sensing guarantees often also work for compressible vectors, which are well approximated by sparse vectors). The theory of compressed sensing states that there exist matrices and efficient decoders that allow recovery of $x$ if $m$ is of the order $k\log(n/k)$ Don06, CRT06. It remains to design a suitable measurement/sensing matrix and a reconstruction algorithm.
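To make the measurement model (13) concrete, the following sketch builds a simple random left-regular pooling design (a stand-in for the deterministic constructions discussed in the next subsection; all sizes and noise levels are illustrative) and simulates noisy pooled qPCR readouts:

import numpy as np

rng = np.random.default_rng(5)
n, m, d, k = 100, 30, 5, 3      # individuals, pools, pools per individual, infected

# Each individual contributes an equal fraction 1/d of their specimen to d randomly
# chosen pools, so every column of A has d entries equal to 1/d and column sum one.
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0 / d

x = np.zeros(n)                                        # viral loads; zero if healthy
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1e3, 1e5, size=k)

e = rng.normal(scale=10.0, size=m)                     # illustrative qPCR read errors
y = A @ x + e                                          # noisy pooled measurements (13)
print("infected:", np.flatnonzero(x), " first pooled readouts:", np.round(y[:5], 1))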

Measurement matrices

Recall from the discussion on compressed sensing above that there exist matrices that allow recovery of $x$ from (13) if $m$ is in $O(k\log(n/k))$, referred to as the optimal scaling. The testing design described above is a special compressed sensing task where the elements of the matrix and the unknown sparse vector are non-negative. Non-negative $A$ does not have $\mathrm{RIP}_2$ in the optimal regime, and instead one has to resort to $\mathrm{RIP}_1$ or use tools like the NSP (7). For example, it is known that matrices with independent and uniformly distributed entries on $\{0,1\}$ achieve optimal scaling KJ18 using the NSP, while adjacency matrices of expander graphs have the optimal scaling using $\mathrm{RIP}_1$. However, deterministic (i.e., non-random) construction of such matrices is an open problem.

As far as we know, the best deterministic construction (in terms of optimal scaling) of binary compressed sensing matrices (expander matrices) is GUV09. This construction has a near optimal scaling but is difficult to implement. Precisely, for any $\alpha > 0$ there exists a constant $C_\alpha$ such that the number of rows of the matrix is

$m \le C_\alpha\, k^{1+\alpha} \big(\log(k)\log(n)\big)^{2 + 2/\alpha}.$  (14)

The downside of this result is that the constant $C_\alpha$ is rather large for small $\alpha$.

For ease of implementation (i.e., practical reasons), we propose using the sub-optimal matrices with explicit constructions in LV20 and KS64, i.e., adjacency matrices of low-girth left-regular bipartite graphs and $k$-disjunct matrices respectively, in which the number of measurements is of the order $k\sqrt{n}$. Actually, we established the equivalence of the two constructions in PBJ20, Theorem 4.3. Such $k$-disjunct matrices do not achieve the optimal rate for fixed $k$ as $n \to \infty$ according to DR82. However, for viral detection we have a fixed prevalence in mind. The prevalence might be larger in the beginning of a pandemic and smaller when healthcare professionals are testing regularly and asymptomatically, but for each of these applications we consider a fixed $p$. In this scenario our result achieves the rate

$m = (k+1)\sqrt{n},$  (15)

which can outperform results that achieve optimality according to DR82. For instance, the construction of AA13, Corollary 1 achieves a number of measurements

$m \le 2k^{2}\log_k(n).$  (16)

For suitable choices of $k$ and $n$, this upper bound and other constructions can be outperformed by the construction we propose.

Determining infections

The classical decoding procedure for disjunct matrices (known as COMP) iterates over all individuals $j$ and, if for a fixed $j$ the $i$th test is positive for all $i$ with $a_{ij} \neq 0$, it declares the $j$th individual as infected. This process is simple, and if $A$ is scaled to be $k$-disjunct with column sums of at most one, there is no noise, and there are no more than $k$ infected individuals, then it is guaranteed to find all infected individuals KS64. However, every false negative test will result in at least one individual being falsely flagged as not infected. Thus, this decoding procedure is incredibly sensitive to noise. There are extensions of these matrices which tolerate a fixed number of errors under restrictive conditions AA13.

Compressed sensing, on the other hand, not only detects infected individuals but also estimates the viral load, which may have further benefits for medical practitioners. The noise $e$ in (13) is nonzero in general, since the estimate $y$ is affected by several errors, including rounding errors and the inaccuracy of the qPCR. In GAR20 the noise is modeled as a heavy-tailed random variable depending on the unknown $x$.

Therefore, it is difficult to apply recovery methods from compressed sensing designed for independent additive noise out of the box. Parameter tuning, using, e.g., cross validation, is often crucial for most of these methods in such noise settings. Interestingly, non-negativity helps with such noise models. Combining the heavy-tailed noise model with the non-negativity of the viral load data (i.e., $x \ge 0$), we recommend using the parameter-tuning-free non-negative least absolute deviation (NNLAD), proposed in PBJ21, for recovery, which is any minimizer

$\hat{x} \in \arg\min_{z \ge 0} \|Az - y\|_1.$  (17)

In PBJ21, the authors showed that for certain matrices $A$, for example what they refer to as random walk matrices of lossless expander graphs (which can be considered to include disjunct matrices), the convex NNLAD approach is indeed sparsity promoting and allows for an estimate of the form

$\|\hat{x} - x\|_1 \le C\, \|e\|_1,$  (18)

for some constant $C$ independent of $k$, $x$, and $e$.

Finally, given a certain threshold $\tau > 0$, one declares the $j$th individual to be infected if $\hat{x}_j > \tau$, for small $\tau$. If the noise level is small enough, this method will guarantee that these are exactly the infected individuals. Thus, in this sense compressed sensing gives a nonadaptive testing procedure which provably finds the $k$ infected individuals among $n$ with the scaling-optimal number of tests, and small errors have no effect on the test result. The guarantees in PBJ21 follow from a self-regularization feature in the non-negative case which has already been worked out for other cases, like the non-negative least squares in KJ18. Note that a comprehensive comparative study of the performance of NNLAD vis-à-vis other traditional compressed sensing algorithms was conducted in PBJ21.
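The NNLAD program (17) is itself a linear program and can be solved with off-the-shelf LP solvers; the sketch below (using scipy's linprog on a toy instance with an illustrative random left-regular design, not the authors' implementation) attempts the recovery and then applies a small threshold tau to declare infections:

import numpy as np
from scipy.optimize import linprog

def nnlad(A, y):
    """Solve min_{z >= 0} ||A z - y||_1 as an LP with slack variables t >= |A z - y|."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])         # minimize the sum of the slacks
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])  # A z - t <= y and -(A z) - t <= -y
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + m), method="highs")
    return res.x[:n]

rng = np.random.default_rng(6)
n, m, d, k = 100, 30, 5, 3
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0 / d
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.uniform(1e3, 1e5, size=k)
y = A @ x + rng.normal(scale=10.0, size=m)                # noisy pooled measurements (13)

x_hat = nnlad(A, y)
tau = 100.0                                 # declare infected if estimated load exceeds tau
print("declared infected:", np.flatnonzero(x_hat > tau), " true:", np.flatnonzero(x))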

The take-home message of this section is that we suggest a quasi-optimal pooling procedure with an efficient noise-robust recovery algorithm. This is a practical setup that can be implemented in a straightforward way at a medical lab doing COVID testing. In particular, for the measurement matrix/design we trade off optimal scaling for ease of use, while for the decoder, i.e., NNLAD, we trade off ease of decoding for noise robustness (when compared to COMP) and for the absence of parameter tuning (when compared to many compressed sensing decoders).

Discussion

Now we try to put the above theoretical results into the right perspective by giving a somewhat more detailed comparison of our proposed approach with other approaches proposed for group/pool testing. We also discuss some empirical results and then draw conclusions.

Nonadaptive methods

Consider the nonadaptive method from AA13 with number of tests less than $2k^{2}\log_k(n)$; see (16). Some empirical example comparisons are made in Table 1. In many scenarios, our method requires a similar number of tests as other methods from nonadaptive group testing, and sometimes it even requires fewer.

Table 1.

A comparison of the method of AA13 to ours, looking at different prevalences and population sizes. We denote the prevalence by $p$, the number of infected individuals by $k$, the population size by $n$, the pool size by $q$, and the number of tests by $m$. Note that the rates ($m/n$) are based on approximations of the exact expected number of tests.

Work    p       n       q         m/n
AA13    0.01    900     unknown   0.5573
Ours    0.01    900     30        0.3333
AA13    0.001   10000   unknown   0.0800
Ours    0.001   10000   100       0.1100

Adaptive methods

Adaptive methods outperform nonadaptive methods in general. Although our method is nonadaptive, we compare it to adaptive methods in Table 2 below. Many adaptive group testing methods require half as many tests as our method, but ours has the advantage that it only requires one stage and can thus be performed faster.

Table 2.

A comparison of our method to other methods, including Dor43 and MNB20, for different prevalences and population sizes. We denote the prevalence by $p$, the number of infected individuals by $k$, the population size by $n$, the pool size by $q$, and the number of tests by $m$. Note: (i) the rates ($m/n$) are based on approximations of the exact expected number of tests; (ii) where these approximations can be computed without involving $n$, we put independent (abbreviated indep.) of $n$ under the $n$ column.

Work     p       n             q     m/n
Dor43    0.01    indep. of n   10    0.2000
MNB20    0.01    indep. of n   35    0.1168
Ours     0.01    900           30    0.3333
Dor43    0.001   indep. of n   32    0.0633
MNB20    0.001   indep. of n   350   0.0179
Ours     0.001   10000         100   0.1100

Error correcting properties

We demonstrate an advantage of the NNLAD approach over other compressed sensing decoders in a short experiment with synthetic data. One common error model is to use multiplicative noise, i.e., $e_i = \nu_i\, (Ax)_i$ for some random variable $\nu_i$. This means that the magnitudes of certain noise components are significantly larger than others, and one ends up with a peaky noise. This motivates us to model the additive noise vector $e$ approximately as a sparse random vector. Following PBJ20, Theorem 4.5 for group sizes of and , we are guaranteed to detect up to infected people among by using tests. However, empirically the identification will succeed even if more than individuals are infected and even if multiple measurements are corrupted. In Figure 3, we vary the prevalence and the fraction of corrupted measurements and plot the probability that the NNLAD estimator is sufficiently close to the true signal in the $\ell_1$-norm.
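To see why multiplicative qPCR errors behave like approximately sparse additive noise, one can simulate them directly; the heavy-tailed log-normal factors and the sizes below are illustrative choices, not the noise model fitted in GAR20:

import numpy as np

rng = np.random.default_rng(7)
m = 300
z = rng.uniform(1e3, 1e5, size=m)                  # idealized pooled amounts (A x)_i
nu = rng.lognormal(mean=-4.0, sigma=3.0, size=m)   # heavy-tailed multiplicative factors
e = nu * z                                         # additive noise e_i = nu_i * (A x)_i

# A handful of components typically dominate, so e is approximately sparse.
sorted_e = np.sort(np.abs(e))[::-1]
print("largest / median |e_i|:", round(float(sorted_e[0] / np.median(np.abs(e))), 1))
print("share of total |e| in the 10 largest components:",
      round(float(sorted_e[:10].sum() / sorted_e.sum()), 2))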

Figure 3.

A phase-transition plot of the probability of recovery as a function of the prevalence and the number of corrupted measurements for pool sizes of and .


We see that as guaranteed by PBJ20, Theorem 4.5 for and , the recovery succeeds. But empirically the recovery also succeeds for and , i.e., 10 times higher than guaranteed. This suggests that and thus could be reduced and still recover whenever for instance, which might be sufficient for a prevalence of .

Further, the NNLAD seems to successfully recover even in the presence of sparse noise, i.e., when $\|e\|_0$ is small. This is not surprising, as the NNLAD minimizes the $\ell_1$-norm over all possible noises and $\ell_1$-minimization is sparsity promoting CT05. Recovery seems to succeed whenever and . This error correcting property gives the NNLAD advantages over other compressed sensing decoders in the presence of heavy outliers.

However, the error correcting properties cannot be guaranteed uniformly for all sparse $e$. If the noise components are active exactly on the support of a column of $A$ where the signal is nonvanishing, the measurement might correspond to a different signal with the same support. Hence, recovery might fail as soon as $\|e\|_0$ is at least as large as the number of nonzero entries in a column of $A$. However, there are many possible supports for such a sparse noise vector, but only $n$ of these appear as columns of the matrix. Thus, if the noise support is drawn uniformly at random, such an event is highly unlikely, and we see that the NNLAD empirically corrects more errors than expected.

Conclusion

We have explained how compressed sensing can be used to solve the viral detection problem. It generates a nonadaptive testing procedure. Further, we have presented a construction of design matrices that can be used for classical nonadaptive group testing and for compressed sensing based viral detection. The construction requires roughly as many tests as other methods from nonadaptive group testing and can possibly outperform them. Adaptive group testing methods still require fewer tests but can only be performed sequentially, which is a critical problem in a pandemic with an exponential growth rate. We have proposed to use the NNLAD as a compressed sensing decoder since, compared to other compressed sensing decoders, it is robust against heavy-tailed noise and does not require knowledge of the noise level. In particular, it also has certain error correcting properties. Lastly, our method also computes the viral load of infected individuals.

References

[AA13]
Rudolf Ahlswede and Harout Aydinian, New construction of error-tolerant pooling designs, Information theory, combinatorics, and search theory, Lecture Notes in Comput. Sci., vol. 7777, Springer, Heidelberg, 2013, pp. 534–542, DOI 10.1007/978-3-642-36899-8_26. MR3076127.
[CT05]
Emmanuel J. Candes and Terence Tao, Decoding by linear programming, IEEE Trans. Inform. Theory 51 (2005), no. 12, 4203–4215, DOI 10.1109/TIT.2005.858979. MR2243152.
[CRT06]
Emmanuel J. Candès, Justin Romberg, and Terence Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory 52 (2006), no. 2, 489–509, DOI 10.1109/TIT.2005.862083. MR2236170.
[CDD09]
Albert Cohen, Wolfgang Dahmen, and Ronald DeVore, Compressed sensing and best $k$-term approximation, J. Amer. Math. Soc. 22 (2009), no. 1, 211–231, DOI 10.1090/S0894-0347-08-00610-3. MR2449058.
[Don06]
David L. Donoho, Compressed sensing, IEEE Trans. Inform. Theory 52 (2006), no. 4, 1289–1306, DOI 10.1109/TIT.2006.871582. MR2241189.
[Dor43]
Robert Dorfman, The detection of defective members of large populations, The Annals of Mathematical Statistics 14 (1943), no. 4, 436–440.
[DH06]
Ding-Zhu Du and Frank K. Hwang, Pooling designs and nonadaptive group testing: Important tools for DNA sequencing, Series on Applied Mathematics, vol. 18, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2006, DOI 10.1142/9789812773463. MR2282446.
[DR82]
A. G. D′yachkov and V. V. Rykov, Bounds on the length of disjunctive codes (Russian), Problemy Peredachi Informatsii 18 (1982), no. 3, 7–13; English transl., Problems Inform. Transmission 18 (1982), no. 3, 166–171 (1983). MR711896.
[FR13]
Simon Foucart and Holger Rauhut, A mathematical introduction to compressive sensing, Applied and Numerical Harmonic Analysis, Birkhäuser/Springer, New York, 2013, DOI 10.1007/978-0-8176-4948-7. MR3100033.
[GAR20]
Sabyasachi Ghosh, Rishi Agarwal, Mohammad Ali Rehan, Shreya Pathak, Pratyush Agrawal, Yash Gupta, Sarthak Consul, Nimay Gupta, Ritika Goyal, Ajit Rajwade, and Manoj Gopalkrishnan, A compressed sensing approach to group-testing for COVID-19 detection, arXiv:2005.07895 (2020).
[GIS08]
Anna C Gilbert, Mark A Iwen, and Martin J Strauss, Group testing and sparse signal recovery, 2008 42nd Asilomar Conference on Signals, Systems and Computers, IEEE, 2008, pp. 1059–1063.
[GUV09]
Venkatesan Guruswami, Christopher Umans, and Salil Vadhan, Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes, J. ACM 56 (2009), no. 4, Art. 20, 34, DOI 10.1145/1538902.1538904. MR2590822.
[Hwa72]
F. K. Hwang, A method for detecting all defective members in a population by group testing, Journal of the American Statistical Association 67 (1972), no. 339, 605–608.
[KS64]
W. Kautz and R. Singleton, Nonrandom binary superimposed codes, IEEE Transactions on Information Theory 10 (1964), no. 4, 363–377.
[KJ18]
Richard Kueng and Peter Jung, Robust nonnegative sparse recovery and the nullspace property of 0/1 measurements, IEEE Trans. Inform. Theory 64 (2018), no. 2, 689–703, DOI 10.1109/TIT.2017.2746620. MR3762586.
[LDB09]
J. N. Laska, M. A. Davenport, and R. G. Baraniuk, Exact signal recovery from sparsely corrupted measurements through the Pursuit of Justice, 2009 Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers, 2009, 1556–1560.
[LV20]
Mahsa Lotfi and Mathukumalli Vidyasagar, Compressed sensing using binary matrices of nearly optimal dimensions, IEEE Trans. Signal Process. 68 (2020), 3008–3021, DOI 10.1109/TSP.2020.2990154. MR4114738.
[MNB20]
Leon Mutesa, Pacifique Ndishimye, Yvan Butera, Jacob Souopgui, Annette Uwineza, Robert Rutayisire, Emile Musoni, Nadine Rujeni, Thierry Nyatanyi, Edouard Ntagwabira, Muhammed Semakula, Clarisse Musanabaganwa, Daniel Nyamwasa, Maurice Ndashimye, Eva Ujeneza, Ivan Emile Mwikarago, Claude Mambo Muvunyi, Jean Baptiste Mazarati, Sabin Nsanzimana, Neil Turok, and Wilfred Ndifon, A strategy for finding people infected with SARS-CoV-2: optimizing pooled testing at low prevalence, medRxiv (2020).
[PBJ20]
Hendrik Bernd Petersen, Bubacarr Bah, and Peter Jung, Practical high-throughput, non-adaptive and noise-robust SARS-CoV-2 testing, arXiv preprint arXiv:2007.09171 (2020).
[PBJ21]
Hendrik Bernd Petersen, Bubacarr Bah, and Peter Jung, Efficient tuning-free $\ell_1$-regression of nonnegative compressible signals, Frontiers in Applied Mathematics and Statistics 7 (2021), 615573.
[PS94]
RM Phatarfod and Aidan Sudbury, The use of a square array scheme in blood testing, Statistics in medicine 13 (1994), no. 22, 2337–2343.
[SLB06]
Jan H Schefe, Kerstin E Lehmann, Ivo R Buschmann, Thomas Unger, and Heiko Funke-Kaiser, Quantitative real-time RT-PCR data analysis: current concepts and the novel “gene expression’s CT difference” formula, Journal of molecular medicine 84 (2006), no. 11, 901–910.
[SHB20]
Michael Schmidt, Sebastian Hoehl, Annemarie Berger, Heinz Zeichhardt, Kai Hourfar, Sandra Ciesek, and Erhard Seifried, FACT-Frankfurt adjusted COVID-19 testing-a novel method enables high-throughput SARS-CoV-2 screening without loss of sensitivity, medRxiv (2020), 2020–04.

Credits

All figures and photos of the authors are courtesy of the authors.