In this paper we examine complexity issues in DNA computation. We believe that these issues are paramount in the search for so-called "killer applications", that is, applications of DNA computation that would establish the superiority of this paradigm over others in particular domains. An assured future for DNA computation can only be established through the discovery of such applications. We demonstrate that current measures of complexity fall short of reality. Consequently, we define a more realistic model, a so-called strong model of computation, which provides better estimates of the resources required by DNA algorithms. We compare the complexities of published algorithms within this new model and within the weaker, extant model that is commonly (often implicitly) assumed. We also argue that "killer applications" are most likely to come from algorithms employing polynomial volumes of DNA and running in polylogarithmic time. Such algorithms are likely to construct solutions rather than filter them from an exponentially large initial sample of DNA.
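To make this criterion concrete, one may state it as follows (the notation $V(n)$ and $T(n)$ is introduced here purely for illustration, and is not the model's formal definition): for an input of size $n$, the advocated algorithms should satisfy
\[
V(n) = O(n^{k}) \quad \text{and} \quad T(n) = O(\log^{c} n)
\]
for some constants $k$ and $c$, whereas filtering methods begin with a candidate library of volume $V(n) = \Omega(2^{n})$, which rapidly becomes physically unrealisable as $n$ grows.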