By Sharonmoyee Goswami*
INTRODUCTION
Genetic diagnostic tests promise a wealth of benefits. They can determine whether a particular drug will work in your body, tell you what disease you have, or predict one you might develop. There are two main types of genetic diagnostic tests: commercially developed “kits”—complete test systems with all the components and instructions needed to conduct the test that are sold to multiple labs—and “laboratory-developed tests” (LDTs)—preassembled test systems intended for use at a single laboratory.[1] Unlike kits, LDTs are sold to individual health care providers and directly to patients.[2] Until recently, the Food and Drug Administration (FDA) regulated commercially developed kits but did not regulate LDTs, even when the two performed the same function.[3] In 2010, the FDA decided to start regulating all laboratory-developed tests.

However, regulation does not occur in a vacuum, and increased regulation necessitates increased outlay for companies seeking to market genetic diagnostic tests. For drug companies, the risk that a product will require extensive ex ante investment in clinical trials but may not be approved for market release is mitigated by the presence of drug patents and, to a lesser extent, by statutory data exclusivity.[4] Both ensure that if a company succeeds in getting a drug past clinical trials and onto the market, a competitor will not be able to copy the drug and unfairly profit without an initial outlay. Because the patentability of diagnostic tests is uncertain,[5] diagnostic test companies do not enjoy the same level of protection as drug companies. If the FDA expands the scope of regulation without the backstop of patent protection, it seems likely that the current booming market for genetic diagnostic tests will wane, to the detriment of consumers. To combat this result, I address three principal options:
- Forgo new regulation and make no change in the status of patentable subject matter. Effectively, this option maintains the status quo and responds to critics’ suggestions that the FDA is expanding its reach too far by regulating genetic diagnostic tests.
- Pursue regulation, but create a per se patentable subject matter rule for diagnostic tests to balance the additional expense (notwithstanding additional barriers to patentability), creating a baseline similar to that for pharmaceutical drugs.
- Create a new regulatory regime that balances the additional expense of regulation through data exclusivity and by limiting additional regulation to only the most necessary areas.
In exploring these options, this paper proceeds in five parts. Part I outlines the current regulatory framework and the FDA’s recent shift in regulation. Part II covers the main problems with the current unregulated market for genetic diagnostic tests and concludes by eliminating the first principal option. Part III addresses the current framework for patentable subject matter and discusses whether a per se rule would fit within this regime. It determines that a twenty-year patent term may provide too much protection for genetic diagnostic tests, particularly considering that enforcement of such patents serves to limit consumer choice and may stifle innovation, and it finds an appropriate alternative in FDA-mediated data exclusivity, as seen in the Hatch-Waxman Act. Part IV addresses what regulatory changes are necessary to accommodate the unusual needs of genetic diagnostic tests. The solutions suggested would reduce the ex ante uncertainty facing diagnostic test companies by clarifying the classification regime within the FDA, creating mandatory maximum times for approval, streamlining complaint procedures to ensure that any issues with tests are quickly corrected, particularly for software and internet-based tests, and, most importantly, creating a data exclusivity backstop for diagnostic tests that fall outside the scope of patentable subject matter. Corresponding solutions would benefit consumers by strengthening labeling requirements and genetic counseling for particular test varieties and by improving methods of keeping up with new technology in this field. Finally, Part V applies this new regulatory system to a purely computational genetic diagnostic test, which I believe to be emblematic of genetic diagnostic tests in the future.
I. Current Regulatory Framework for Genetic Diagnostic Tests and the Dangers of Increased Regulation
In this section, I discuss the current regulatory framework for genetic diagnostic tests, beginning with a short background of how institutional relationships led to the FDA’s current framework. Subsequently, I address the main problems with the regulatory framework that increased regulation would exacerbate and the negative impact that expansion could have on the genetic diagnostic test industry, namely: (1) the delay in time-to-market for the tests; (2) the uncertainty as to the classification of a particular test; and (3) the absence of any exclusivity backstop to prevent copying by competitors after approval.
A. Institutional Frameworks at the FDA
According to the traditional “transmission belt” theory,[6] Congress creates rules that govern agency action, the agency adopts procedures that adhere to those Congressional directives, and these processes are kept in check through judicial review. However, unlike other agencies that adhere to the traditional “transmission belt” theory, the FDA is a particularly powerful agency that takes a greater role in governance than mere transmission.[7] The increasingly difficult subject matter that the FDA regulates creates a situation where Congress and the courts simply do not have the necessary expertise to create specific policies governing drugs and devices. Indeed, well-known practitioner Thomas Austern criticized the FDA’s substantial power, alleging that it was “delegation running riot.”[8]

Historically, Congress and the courts have stepped in to curtail FDA power in one of two situations: (1) when a high-risk product creates public outcry and (2) when FDA regulation injures industry substantially. The FDA originated out of concern about high-risk products—Congress created the agency in response to Upton Sinclair’s 1906 book The Jungle and its exposé of the meatpacking industry.[9] Subsequent expansion continued in response to other high-risk disasters. The 1938 Act[10] responded to medicines for children that had been mixed with antifreeze.[11] The 1962 Kefauver-Harris Amendments responded to the narrowly averted Thalidomide disaster.[12] Further, during the 1960s, all but one of the Congressional oversight hearings were conducted to criticize the FDA for failing to take adequate regulatory action against products that the committee concluded were unsafe or ineffective.[13]

Congress also traditionally steps in when FDA involvement threatens to substantially injure industry. This second situation may be a result of agency capture and Congressional interests in the industry. Established companies that get many products approved by the FDA have developed long-standing relationships with the agency, particularly given the “revolving door” between the agency and those companies. Since these established companies are familiar with existing FDA procedures, they are unlikely to lobby Congress to change that framework. Even when FDA procedures are onerous, established companies are better equipped to deal with them, often to the detriment of smaller entities. As those small companies are unlikely to have sway in Congress, the framework remains unchanged until problems become large enough to attract the attention of the general public or established companies. The high-risk disasters discussed supra are examples of FDA change in response to problems that affect the general public. More recently, changes at the FDA have come from established companies that helped enact legislation to make the FDA more efficient, most notably the Prescription Drug User Fee Act,[14] or to reduce adverse effects on industry from increased regulation, as with the Hatch-Waxman Act,[15] discussed infra.

Because of the significant discretion given to the FDA by Congress and the courts, the agency is adept at responding to technological changes. Instead of relying on rulemaking or case-by-case adjudication like most agencies, the FDA promulgates new policies through guidelines, which often have a de facto binding effect because of the close relationship regulated companies have with the agency.
If a company is able to follow the guidelines, it is in its best interest to do so, because going against the FDA’s policies could put its entire line of products at risk. The agency would not retaliate by refusing to approve a particular product outright, but it may delay approval or request additional clinical trials, either of which increases expenses to the company with little corresponding benefit to the consumer.

Finally, the FDA and its counterpart state agencies have fared extremely well in the courts.[16] When faced with complex scientific and technical issues, judges have been reluctant to overrule decisions made by the FDA, and the Supreme Court has repeatedly upheld the power of the FDA to protect the public from “dangerous products.”[17] Furthermore, because judges generally do not choose which cases they will adjudicate (although the Supreme Court does to a certain extent), the courts cannot provide a reliable mechanism for effecting policy, except in a reactive fashion. Despite these limitations, the courts have acted to influence policy in this area in two ways: (1) by curbing FDA procedural irregularities and (2) by marshalling public opinion of high-risk products through lawsuits. The main target of courts is the non-binding guidelines discussed supra, which have been struck down when they become too much like binding regulations without any of the requisite procedural mechanisms.[18] The result has been an FDA policy in which the guidelines are never actively enforced, but upheld through extra-legal mechanisms. As for the marshalling of public opinion, many tort suits and class actions have changed the way that the FDA approaches regulation. Although most of these suits target the individual companies rather than the FDA, because companies seek to avoid similar situations in the future, large entities may actually seek increased regulation from the FDA to avoid tort liability.

This backdrop has led to the current state of genetic diagnostic test regulation. Until recently, the FDA had not exercised its enforcement discretion to regulate most genetic diagnostic tests. Because the adverse effects of the unregulated market for genetic diagnostics are not likely to surface for many years, given the tests’ predictive nature, there is not likely to be a “Thalidomide moment” in Congress or the courts for this industry. Furthermore, most industry players have benefited financially from the FDA’s neglect, and consumers are much less able than either large or small companies to lobby Congress for increased regulation.
B. FDA Shift in Regulating Genetic Diagnostic Tests
The majority of genetic diagnostic tests fall within the category of “lab-developed tests,” which until recently were completely unregulated by the FDA. Instead, laboratory procedures used in such tests were subject to minimal regulation under the Clinical Laboratory Improvement Amendments of 1988 (CLIA) through the Centers for Medicare and Medicaid Services (CMS).[19] CLIA applies to all clinical laboratories that operate or provide testing services in the United States[20] and mandates that laboratories not accept materials from the human body for testing without adequate certification. In 1992, CMS issued regulations implementing CLIA, creating “specialty areas” for laboratories that perform high-complexity tests, but did not include genetic diagnostic tests.[21] As of 2010, CMS had not instituted specific requirements for molecular or biochemical genetic testing laboratories.[22] Likewise, CLIA can only regulate a laboratory’s analytical validity (whether a test properly measures the characteristic it was intended to measure[23]), leaving clinical validity considerations (whether the test actually diagnoses the condition) up to the laboratory director.[24]

The FDA currently regulates clinical validity for experimental diagnostics to some extent.[25] For example, suppose that a laboratory hypothesizes that a given gene sequence is associated with a particular disease. If the laboratory wished to offer this test to volunteers to bolster its confidence in this correlation, the FDA would require that the Institutional Review Board of the institution oversee all such studies and inform the volunteers of the test’s experimental status.[26]

More generally, the FDA regulates devices using biological materials outside the body (in vitro) differently depending on whether they are commercially available kits or lab-developed tests. The FDA regulates kits as medical devices,[27] which are organized into three classes. In contrast, lab-developed tests presently fall within the discretion given to the laboratory director under CLIA,[28] leading to two separate regulatory pathways for genetic diagnostic tests: one for commercially available kits and another for lab-developed tests administered by CLIA-regulated labs.[29] Many have complained that this creates an uneven playing field between the two categories.[30]

In 2007, the FDA changed course and decided to regulate lab-developed tests more stringently to address the uneven field. There were a number of reasons for this change. Early lab-developed tests were limited to small entities, with close relationships between the physician or technician performing the test and the patient.[31] More recently, it became clear that the laboratory developing a test might actually be a large corporation interacting with the patient only through the postal system.[32] The complexity of the tests had also increased in recent years.[33] Finally, the Secretary’s Advisory Committee on Genetic Testing had recommended that the FDA become more involved in pre-market review of these tests.[34] The FDA’s 2007 draft guidance stated that the agency would require pre-market review for a limited subset of lab-developed tests known as in vitro diagnostic multivariate index assays (hereinafter, “algorithm assays”). These algorithm assays analyze laboratory data using an algorithm to generate a result for diagnosing, treating, or preventing disease.[35]
The agency was particularly concerned about this subset of lab-developed tests because they use proprietary methods to calculate patient-specific results that healthcare providers are unable to independently derive or confirm.[36] In 2010, the FDA decided not to issue final guidance on algorithm assays, and instead chose to pursue comprehensive regulation of all laboratory-developed tests.[37] In doing so, it recognized that the absence of oversight may make it easier for laboratories to develop and offer tests quickly, but believed that comprehensive regulation would create a level playing field.[38]
C. The Three Classes of Devices for FDA Approval
This section examines the framework currently used for those genetic diagnostic tests (kits) that are regulated by the FDA. Because the FDA will likely proceed under a similar framework for future genetic diagnostics that fall within its enforcement discretion, I treat this classification system as a baseline for the suggestions for regulation in Part IV.

Under the Medical Device Amendments of 1976,[39] once the FDA classifies a test as a medical device, it places the product into one of three classes of regulation. Class I, or low-risk, products require no approval before sale, although the FDA monitors adverse effect reports after sale.[40] Class I devices include items like bandages, where there are likely to be few ill effects even from device misuse.[41] Class II, or moderate-risk, products require clearance of a pre-market notification submission known as a 510(k).[42] The 510(k) submission requires a comparison of the submitted device with a legally marketed device and a showing that the two are substantially equivalent.[43] There is also a de novo classification pathway for Class II items that have no identifiable predicate device.[44] Class II devices may require additional clinical testing, but generally do not. Clinical testing in this context does not necessarily mean randomized, double-blind studies; submissions can rely on published studies or earlier data, although the FDA prefers studies where samples are prospectively collected.[45] The 510(k) requirements were envisioned as an efficient regulatory option, allowing companies to build upon established clinical and scientific evidence of safety.[46] As a result, the 510(k) option is more widely used than the Pre-Market Approval (PMA) option, discussed infra.[47]

Finally, Class III, or high-risk, products require submission of an application for Pre-Market Approval.[48] PMA is the FDA process of scientific and regulatory review to evaluate safety and effectiveness. Class III devices are those that support or sustain human life, are of substantial importance in preventing impairment of human health, or present an unreasonable risk of illness or injury, including patient misuse.[49] Examples include diagnostics for Hepatitis B and C and for HPV.[50] Regardless of whether the company follows the Class II or Class III pathway, the FDA may request that the company provide clinical data to support clearance or approval.[51] After the introduction of any product (of any class), adverse events must be reported to the FDA, even if such malfunctions do not cause any injury.[52] The FDA estimates that 50 percent of regulated devices are Class I, 42 percent are Class II, and only 8 percent are Class III.[53] In the 2007 draft guidance, the FDA stated that most algorithm assays would be Class II or Class III devices, depending partly on the seriousness of the disease measured.[54]

Although the requirements following classification are very clear, the FDA has been unclear about why certain devices fall into certain classes. Furthermore, given the deference accorded the FDA by the courts, it is difficult to change the classification of a device after the FDA has assigned it. The overlap between Class II and Class III indicates a need for increased clarity in device classification, including ex ante categories for genetic diagnostic tests.
This is particularly necessary given the letters that the FDA has sent out to diagnostic test companies requesting that they submit information for approval and regulation pursuant to its enforcement discretion, discussed infra.
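To make the three-class framework concrete, the following short sketch encodes the pathways described above. It is purely illustrative: the class labels and submission types track the statutory categories summarized in this section, but the mapping function itself is a simplification, not the FDA’s actual classification procedure.

```python
# A minimal sketch of the three-class device framework described above.
# The mapping is illustrative only; it is not the FDA's decision process.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    risk: str            # "low", "moderate", or "high"
    has_predicate: bool  # is there a legally marketed equivalent device?

def required_submission(device: Device) -> str:
    if device.risk == "low":
        return "Class I: no pre-market submission; post-market adverse-event reporting"
    if device.risk == "moderate":
        if device.has_predicate:
            return "Class II: 510(k) notification showing substantial equivalence"
        return "Class II: de novo classification (no predicate device)"
    return "Class III: Pre-Market Approval (PMA) with supporting clinical data"

# Example: a hypothetical high-risk genetic diagnostic with no predicate.
print(required_submission(Device("genomic diagnostic chip", "high", False)))
```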
D. Choice of Classification Delays Market Entry and Can Be Uncertain
The FDA’s decision to place a product into one of these three classes can make a substantial difference in both the expense associated with clinical trials and the amount of time it takes before the product can enter the market. At the same time, empirical data show that the European Union (EU) regulates many of the same products, yet these products are introduced many months or even years earlier in Europe than in the United States.[55] Because the FDA is planning to expand its regulation further, previously unregulated products may be particularly affected by this disparity in time-to-market. The data suggest two solutions: (1) that the FDA create more stringent time limits on the approval of medical devices, and (2) that the FDA provide greater transparency as to why it has chosen to place a particular device into a given class, thereby enabling companies to better predict their expenses.

A recent survey of 204 public or venture-backed medical technology companies compared the efficiency of the FDA’s regulation to equivalent EU practices and concluded that, despite similar safety outcomes, the FDA’s current practices are less efficient than those of its European counterparts.[56] Although some delays were caused by personnel changes during the approval process,[57] the average review time for a Class II product from first filing to clearance was ten months, suggesting that the FDA requirements are particularly onerous.[58] For those companies that communicated with the FDA prior to making a 510(k) submission, the total time from that first communication was nearly three years, compared to seven months in Europe.[59] The average total cost for participants to bring a low-to-moderate-risk 510(k) product from concept to clearance was approximately $31 million, with $24 million spent on FDA-dependent activities.[60]

For Class III products, the average review time increased to 54 months, almost four years longer than European agency review for the same products.[61] At least some of this delay stems from increased risk aversion to new products at the FDA.[62] For higher-risk products that require pre-market approval, the average total cost from concept to approval was approximately $94 million, with $75 million spent in stages linked to the FDA.[63] Because of the additional time spent in the regulation process, as well as the additional funds used, earlier expansions in FDA regulatory authority have been financially detrimental to the regulated industry. This extensive time frame is exacerbated by uncertainty as to how the FDA will classify newly regulated diagnostic tests.
In 2010, the FDA sent several letters to genetic testing companies, including a letter to the genomic chip manufacturer Illumina.[64] The letter to Illumina stated that its chip (used to sequence genes) was classified as a device under the FDCA and had not been submitted for pre-market clearance or approval.[65] The letter further suggested that the agency could consider the chip a Class III device.[66] Similar letters were sent to deCODE Genetics, Navigenics, and 23andMe, which use the Illumina chip in their testing kits.[67] These letters stated that the companies’ products did not fall under the category of lab-developed tests because they were not “developed by and used in a single laboratory”[68] and because the collection kits were distributed through a website or by a third-party distributor.[69] All three letters intimated that such kits were considered Class III devices and required Pre-Market Approval submissions. A fifth letter was sent to Knome, which provides whole-genome sequencing and software to interpret the resulting data.[70] Here, the FDA specifically stressed that, as a software program that analyzes genetic test results generated by an external laboratory, the product is a diagnostic device requiring pre-market approval under the FDCA, and that because it was not used within a single laboratory, it was not considered a laboratory-developed test (a category then unregulated).[71] Fourteen additional letters were sent to other companies suggesting that their products might be devices under Section 201(h) of the FDCA requiring pre-market approval, although these letters did not indicate whether the devices would be Class II or Class III.[72] Together, these letters suggest that genetic diagnostic test manufacturers may need to go through the onerous Class III approval process, likely delaying the time-to-market for these devices even more.
E. Historical Expansion of Regulation Has Been Alleviated by Commensurate Expansion in Patent and FDA-Mediated Data Exclusivity
This section shows that historical expansions of regulation to previously unregulated products had a negative impact on those industries, and suggests that similar concerns about increased regulation of genetic diagnostic tests today are well-founded. Congress responded to those negative results and to complaints from industry by providing for greater patent exclusivity and for a new data-exclusivity backstop, which helped to reverse the decline in drug approvals that followed increased drug testing regulation. This history suggests that there is no need to wait for a negative impact on diagnostic tests before aiding the industry, particularly since many of these companies may otherwise choose to relocate to Europe, and it provides support for the exclusivity solutions explored infra in Part III.

As mentioned, the expansion of pre-market scrutiny by the FDA is not new. In 1962, the revolutionary Kefauver-Harris Amendments significantly expanded the FDA’s ability to regulate drugs before they went to market.[73] As a result, thirteen out of fourteen new products regulated by the FDA took longer to get to market in the United States than in other countries.[74] Without the protection of patents, it is unclear whether manufacturers would have invested the significant funds required for product approval.[75] Indeed, the delay in reaching market was already hurting pioneer drug manufacturers by eating away at their patent terms. In response, because generic manufacturers were subject to similarly stringent requirements, the pioneer drug manufacturers obtained court decisions preventing generic manufacturers from beginning clinical trials while a brand drug was still under patent.[76]

The solution to these tactics was the Drug Price Competition and Patent Term Restoration Act of 1984 (hereinafter “Hatch-Waxman,” after the names of its two sponsors), which tied the extension of patent terms to the period spent in the FDA approval stage.[77] Notably, in addition to providing a mechanism for resolving conflicts between pioneer and generic drug manufacturers, Hatch-Waxman also provides data exclusivity: five years for new chemical entities and three years for approvals supported by new clinical studies.[78] This means that for a generic company to gain FDA approval even after the expiry of the pioneer patent term, it must either perform its own clinical trials rather than rely on the data from clinical trials performed by the pioneer drug manufacturer, or wait until the period of exclusivity expires.

The Hatch-Waxman exclusivity period does not apply to medical devices, but the subsequent Safe Medical Devices Act of 1990 created a six-year data exclusivity provision for Class III devices[79] subject to PMA requirements.[80] As discussed earlier, very few Class III devices are approved each year, but some Class II devices still require clinical trials. A baseline level of exclusivity for any diagnostic test that needs clinical trials for approval or reclassification by the FDA would help to alleviate the uncertainty created by the current patentability status discussed in Part III. To ensure that the expansion of regulation does not hurt companies, it is insufficient to simply apply the current medical device framework used for kits to all genetic diagnostic tests (including LDTs).
Instead, we should pursue a framework that would reduce the significant time required for device approval and clarify device classifications, so that diagnostic companies can anticipate the investment that the approval process will require. In addition, companies’ investments in clinical trials should be protected with either a patent or a data-exclusivity backstop.
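For readers who want the patent-term restoration arithmetic made explicit, the following is a hedged sketch of the Hatch-Waxman formula summarized above: roughly half the clinical-testing period plus the full approval period, subject to a five-year cap on the extension and a fourteen-year cap on effective patent life after approval. The input figures are invented for illustration.

```python
# A hedged sketch of Hatch-Waxman patent-term restoration (35 U.S.C. § 156)
# as summarized in this section. All input figures are invented.
def term_restoration(testing_years: float, approval_years: float,
                     remaining_term_at_approval: float) -> float:
    extension = 0.5 * testing_years + approval_years
    extension = min(extension, 5.0)  # statutory five-year cap on the extension
    # effective patent life after approval may not exceed fourteen years
    return min(extension, max(0.0, 14.0 - remaining_term_at_approval))

# Six years in trials and two in FDA review, with ten years of the original
# term left at approval, would restore four years of term in this sketch.
print(term_restoration(testing_years=6, approval_years=2,
                       remaining_term_at_approval=10))
```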
II. Why We Need Better Regulation of Genetic Diagnostic Tests
As discussed in Part I, the FDA is currently overhauling the regulation of diagnostic tests to include additional regulation of genetic diagnostic tests. Additional regulation, particularly where the FDA’s classification of such devices is unclear, could be damaging for the industry. Critics see the expansion of FDA oversight into this sector as emblematic of FDA overreach stifling industry and suggest that the agency avoid expanding its enforcement discretion to regulate laboratory-developed tests.[81] However, this section discusses why additional regulation is necessary for these laboratory-developed genetic diagnostic tests and why this expansion is beneficial for consumers.

Diagnostic tests detect the presence of a condition by measuring some quantity associated with the condition. Similarly, genetic diagnostic tests can predict an individual’s propensity for a condition based on their nucleotide sequence, as discussed infra. If such tests worked perfectly to predict disease and provided completely clear information to patients who use the tests, there would be no need for regulation. Unfortunately, this is not the case. First, I address common errors made in genetic diagnostic testing and the role that regulation may play in minimizing them. These common errors include: (1) errors caused by using molecular markers rather than sequencing the whole genome; (2) errors due to variations in gene expression between individuals; and (3) errors resulting from the statistical studies relied upon in designing the test, which are compounded by the complexity of gene interactions. Oversight by a regulatory agency, such as the FDA, would reduce the occurrence of these errors and increase the utility of such tests for consumers.

To understand these problems, this section discusses the principles that enable genetic diagnostic tests to function and explains where those principles break down, resulting in errors in test results. All humans have strands of DNA comprising four fundamental nucleotides: Adenine (A), Thymine (T), Guanine (G), and Cytosine (C).[82] The sequence of these nucleotides forms genes, which determine which proteins the body produces, and thus how the body functions.[83] Within every species, there are variants of genes known as alleles.[84] To use a simplified example, not all humans have the same eye color. However, we all have genes that affect eye color, such that some of us have a blue-eye allele whereas others may have alleles for brown or green eyes.[85] Certain alleles are more likely to lead to diseased conditions than others.[86] For example, a mutant allele of a gene known to repair damaged DNA may be less effective at repairing such damage than the normal allele.[87] Because the DNA cannot be repaired, individuals with the mutant allele may develop diseases caused by the damaged DNA, including colon cancer.[88] Therefore, a genetic diagnostic test may be able to predict the development of cancer by detecting the presence of the mutant allele. One method for detecting the presence of a mutant allele is by sequencing all of the nucleotides.
A possible issue with sequencing is that a genetic diagnostic test may return an incorrect sequence, reading “ATAC” when a subject actually has “ATGC”, demonstrating a failure of analytical validity.[89] A recent study compared the data of five individuals sequenced by two different genetic testing companies for thirteen diseases.[90] The two companies’ results agreed 99.7% of the time, suggesting that the analytical validity of unregulated tests (the status quo) is quite high.[91] Even those companies that have been criticized for their inaccuracy do not show analytical validity below 99%.[92] Therefore, the remainder of this section focuses on problems other than analytical validity.
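The analytical-validity comparison described above reduces to a simple per-site concordance calculation, sketched below with invented sequences.

```python
# A minimal sketch of per-site concordance between two laboratories'
# reported nucleotide calls. The sequences are invented for illustration.
def concordance(calls_a: str, calls_b: str) -> float:
    """Fraction of positions where the two labs report the same nucleotide."""
    assert len(calls_a) == len(calls_b), "compare calls at the same positions"
    matches = sum(a == b for a, b in zip(calls_a, calls_b))
    return matches / len(calls_a)

lab_1 = "ATGCATGCAT"
lab_2 = "ATGCATACAT"  # one discordant call at the seventh position
print(f"concordance: {concordance(lab_1, lab_2):.1%}")  # 90.0% in this toy case
```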
A. Molecular Markers That Do Not Correctly Predict the Presence of a Disease Allele
Although whole genome sequencing (where every nucleotide is sequenced) is the best way to detect the presence of certain genes, it currently costs approximately $20,000 per person.[93] To save costs, companies use molecular markers that are statistically associated with the presence of certain mutant alleles.[94] The use of molecular markers means that rather than sequencing A, T, G, and C, a company may sequence only the starting “A” and infer the remaining nucleotides based on its statistical analyses.

A simplified example can help to illustrate how molecular markers are able to replicate full sequencing. Suppose that, in addition to having brown eyes and blonde hair, your mother also had an unusual allele that protected against the bubonic plague.[95] If your mother’s ancestors lived in areas where the plague was prevalent, those individuals who did not have this allele would be more likely to die. Soon, the population would have a high prevalence of the plague-protection allele and of the alleles surrounding that allele on the chromosome. If those alleles were for blonde hair and brown eyes, this population would likely share those characteristics as well. If the correlation between blonde hair and the plague allele were perfect, there would be no error from using blonde hair as a marker for the actual allele. As discussed infra, however, such a perfect correlation is seldom the case.

Sometimes the statistical assumptions used may be incorrect, causing errors in the genetic results reported to consumers. Events such as “cross-over” between genes may cause such errors. Humans have two copies of each gene, one from each parent.[96] For example, while your mother might have blonde hair and brown eyes, your father might have black hair and blue eyes. Let us also assume that the genes corresponding to hair and eye color are next to each other on the same chromosome.[97] If you inherited one copy of each gene from each parent, you would have one copy with blue eyes and black hair and the other copy with brown eyes and blonde hair. Assuming that your father’s genes were dominant, no child would ever have blue eyes with blonde hair or brown eyes with black hair.[98] Therefore, to create beneficial variation in the population, chromosomes “cross over” one another in sex cells, yielding new chromosome combinations like blue eyes with blonde hair and brown eyes with black hair.[99]

Such natural variation makes the job of genetic diagnostic test companies more difficult. Returning to your mother’s hypothetical plague-protection allele associated with her blonde hair and brown eyes, a genetic test company may always assume that any blonde-haired, brown-eyed individual possesses the unusual allele. However, if an individual has had a cross-over event between the plague allele and the hair color allele, the genetic diagnostic test will yield incorrect results. An error like this can lead to a false negative or a false positive result. Such errors are more likely when the company is using a “weak-effect” marker—one where the correlation between the marker and the full nucleotide sequence is statistically tenuous. Additional regulation can alleviate this concern by either requiring full sequencing for certain regions of the genome that incur many cross-over events or by requiring that companies rely on multiple markers (or a single “strong-effect” marker) rather than a single “weak-effect” marker.[100]
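A brief simulation can make the weak- versus strong-effect marker problem concrete. In the sketch below, a marker travels with the disease allele except when a cross-over has decoupled them; the allele frequency and linkage values are assumptions chosen for illustration, not empirical estimates.

```python
# A hedged simulation of marker-based testing error. A cross-over event
# decouples the marker from the disease allele; the test reports only the
# marker. All frequencies and linkage values are invented for illustration.
import random

def marker_error_rate(linkage: float, allele_freq: float = 0.10,
                      trials: int = 100_000) -> float:
    random.seed(0)
    errors = 0
    for _ in range(trials):
        has_allele = random.random() < allele_freq
        if random.random() < linkage:
            marker_present = has_allele   # marker still travels with the allele
        else:
            # cross-over decoupled them: marker status is now independent
            marker_present = random.random() < allele_freq
        if marker_present != has_allele:  # the test reports the marker, not the allele
            errors += 1
    return errors / trials

print(f"strong-effect marker (99% linked): {marker_error_rate(0.99):.2%} errors")
print(f"weak-effect marker   (80% linked): {marker_error_rate(0.80):.2%} errors")
```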
B. Mistakes in Genetic Test Interpretation Occur Because of Genetic and Environmental Variation
Another regulatory criterion is clinical validity, the accuracy of a test in actually diagnosing a particular condition.[101] Regulating clinical validity is more difficult because of the interpretation involved, and because of the genetic concepts of “penetrance” and “expressivity.” Consumers can find it difficult to understand that a heightened risk of breast cancer from a single gene can mean anything from never developing breast cancer to dying from the disease within ten years. Penetrance is the percentage of people with a particular genotype (nucleotide sequence) who show the diseased phenotype (physical manifestation of the allele).[102] Expressivity, on the other hand, is the intensity of the phenotype expressed by someone who possesses a particular genotype.[103] These variations among individuals present the most difficult challenges for genetic test regulation.

Genetic diagnostic tests would be easier for consumers to understand if explained by a physician intermediary, who could show that the same mutation could lead to widely varying outcomes in onset of the disease, as discussed in the earlier example. However, unlike similar non-genetic diagnostic tests, genetic diagnostics are sold and marketed directly to consumers, presenting a challenge for regulators. This problem is exacerbated because an interpretation error can lead to dire results. Suppose a company sequenced a woman’s DNA-repair genes and found that she carried the disease allele of BRCA1, a gene associated with the onset of breast cancer.[104] The woman may never develop breast cancer in her lifetime—a penetrance problem—or may develop a form that does not rapidly metastasize—an expressivity problem. Indeed, very few diseases have 100% penetrance, where all of the individuals with a particular genotype have the corresponding disease.[105] As a result, interpretation of the test results remains difficult even with knowledge of the full sequence.[106] Expressivity is a lesser concern for clinical validity because such variation arises once the individual has the disease. However, for serious diseases, individuals may take action based on these disease risk profiles when in reality their disease expression may be very low based on other risk factors, such as age and lifestyle.[107]

As a result, much of the regulation in this area will depend on what information the company provides to consumers. If the genetic diagnostic testing company only provides the sequence with no additional information, very little regulation may be necessary. On the other hand, if the company provides extensive information, such as “breast cancer will appear at age thirty-four,” such claims require more regulation because consumers are more likely to rely on them for treatment purposes.[108] The molecular markers problem, compounded with expressivity and penetrance, strengthens the case for regulation; even with a perfect correlation between “A” and “ATGC”, it is not clear that the disease associated with “ATGC” will present in the individual. It is even less likely that the disease associated with “ATGC” will present in an individual who actually has “AAGT”.
Because of the limited populations studied in the research used to create these correlations, an individual whose characteristics differ from the majority study population, particularly in race, has a much higher likelihood of receiving test results that are not actually correlative.[109] These factors support the use of strong-effect markers, which move diagnostic companies closer to “perfect correlation”. As it stands, when companies sequence strong-effect markers, the agreement between multiple testing companies is higher than when they use weak-effect markers.[110] To use the earlier example, suppose your mother’s ancestors did not live in an environment where the plague was prevalent. Without the evolutionary pressure imposed by the plague, individuals lacking the protective allele would survive at the same rate as the rest of the population. As a result, the population would show a very different distribution of alleles from the original example, and the molecular marker of blonde hair would be less helpful. Regulation can mitigate such problems by requiring that consumers receive additional information about the studies that produced the genetic testing data and by creating greater incentives to include a wider population in initial genome-wide association studies.[111]
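A worked example makes the penetrance point concrete: even a perfectly accurate genotype call does not determine whether disease will appear. The penetrance and baseline figures below are assumptions for illustration, not clinical estimates for any real gene.

```python
# A worked sketch of penetrance. The 60% penetrance and 5% baseline-risk
# figures are assumptions for illustration, not clinical estimates.
def lifetime_risk(carries_allele: bool, penetrance: float = 0.60,
                  baseline_risk: float = 0.05) -> float:
    """Probability of ever developing the disease, given the genotype."""
    return penetrance if carries_allele else baseline_risk

# A carrier faces a 60% lifetime risk, not a certainty, and a non-carrier's
# risk does not fall to zero; reporting "you have the allele" without this
# context is exactly the interpretive gap discussed above.
print(f"carrier:     {lifetime_risk(True):.0%}")
print(f"non-carrier: {lifetime_risk(False):.0%}")
```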
C. Mistakes in Genetic Test Interpretation Occur Because of Unforeseen Interactions between Genes
Not all genetic tests simply convey the likelihood of a single disease based on a single gene. Many tests predict propensity for a disease or for recurrence of a condition based on algorithms that approximate the disease pathway.[112] For example, scientists believe that many genes affect the development of schizophrenia.[113] By putting together multiple genotype markers from different locations on the genome, a company could predict an individual’s likelihood of developing schizophrenia. However, if there were errors in developing this algorithm, this clinical prediction would be wrong. Indeed, the chance of error is very high because the interactions between genes can be unpredictable and small-effect genes can be difficult to find. For instance, one study found that a predictive test using the cumulative effects of thousands of small-effect genes was more accurate at predicting the existence of schizophrenia than a test using only major-effect genes.[114]

One way that this sort of error can occur is when a lab relies on a meta-analysis, a type of study that combines data from multiple studies to deduce the interactions between different genes. The problem with meta-analyses is that the actual interaction between two particular genes may not have been tested directly, which can lead to incorrect conclusions. One possible cause for this error is when the two genes are on the same pathway that leads to the disease.[115] In most cases, if gene Z affects a function earlier in a molecular pathway than gene Y, it does not matter whether the allele at gene Y carries a mutation, since the pathway will already have been altered by the mutated allele at gene Z.[116] For a simple example, consider the genes that control certain properties of human hair: suppose that gene Z governs hair growth and gene Y governs hair texture. If an individual carries an allele at gene Z that prevents hair growth entirely, it does not matter whether the allele at gene Y codes for curly or straight hair; the individual will be unaffected. A meta-analysis of genes like Z and Y might infer a compounded effect from mutations at both genes when the actual effect is no greater than that of the upstream mutation alone (a simplified sketch of this masking effect appears at the end of this Part).

Even with perfect testing results, there are dangers in how companies present their data to patients.[117] Is it sufficient to say that the patient has an increased likelihood of developing the disease sometime in her life? The answer may depend on the seriousness of the disease tested, and even the character of the testing service. The common errors with (1) molecular marker technology, (2) disease phenotype variation from expressivity and penetrance, and (3) the interaction between different genes show that regulation of genetic diagnostic tests is necessary. Therefore, continuing the status quo of minimal regulation is not a valid option, despite the possible adverse effects on industry.
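The following sketch illustrates both points made above: an “algorithm assay” that sums many small-effect markers, and the pathway interaction in which an upstream gene Z masks a downstream gene Y. All gene names, weights, and genotypes are invented; real assays’ algorithms are proprietary.

```python
# A minimal, illustrative "algorithm assay": an additive score over several
# small-effect markers, plus one epistatic rule in which an upstream gene Z
# masks a downstream gene Y on the same pathway. All values are invented.
def risk_score(genotype: dict[str, int], weights: dict[str, float]) -> float:
    # additive contributions from each marker (0, 1, or 2 mutant copies)
    score = sum(weights[g] * genotype.get(g, 0) for g in weights)
    # pathway interaction: if gene Z already blocks the pathway, a mutation
    # at gene Y adds nothing; a meta-analysis of single-gene studies that
    # never tested Z and Y together would double-count this effect
    if genotype.get("Z", 0) and genotype.get("Y", 0):
        score -= weights["Y"] * genotype["Y"]
    return score

weights = {"M1": 0.02, "M2": 0.01, "M3": 0.015, "Y": 0.30, "Z": 0.40}
patient = {"M1": 1, "M2": 2, "M3": 0, "Y": 1, "Z": 1}
print(f"risk score: {risk_score(patient, weights):.3f}")  # 0.440, not 0.740
```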
III. Is a Per Se Patentable Subject Matter Rule for Genetic Diagnostic Tests the Solution?
Now that we have established that there must be some regulation of genetic diagnostic tests, this section examines statutory or judge-made exclusivity for diagnostic tests to help underwrite the costs of such regulation and bolster the financial viability of genetic testing companies. One such option is a per se patentable subject matter rule. As discussed earlier, with increased regulation of diagnostic tests, companies become more interested in protecting the investment they have made in new technologies because of the additional financial outlays required to fund regulatory submissions and possible clinical trials.

Patents are one form of protection for these companies. Under the patent system, companies in the United States may file a specification and claims (which must fall within the scope of patentable subject matter and meet the patentability criteria of novelty, non-obviousness, enablement, and written description) in exchange for twenty years of exclusive rights to the claimed technology.[118] The patent system protects the patent-holder from those who copy the invention by allowing the patent-holder to sue for damages or an injunction if another company’s product infringes the patent.[119] Aside from patenting the genetic identity associated with the disease, companies may also be interested in patenting the algorithm that predicts the propensity for metastasis of tumors or the interaction of multiple genes.

While the Hatch-Waxman framework helped counter the delay to market for drugs through patent-term extension, this solution is insufficient for genetic diagnostic tests, given the current uncertainty as to whether such tests fall within the scope of patentable subject matter. Therefore, to provide the same protection to diagnostic test companies as we do to drug companies, we must create an equally effective exclusivity backstop. This may be accomplished by bringing all genetic diagnostic tests within the scope of patentable subject matter through the creation of a per se rule (making all genetic diagnostic tests “patent-eligible”), or through FDA-mediated data exclusivity, a feature also found within Hatch-Waxman. This section explores two issues with respect to the creation of a per se rule: (1) does it fit within the current patentable subject matter jurisprudence, and (2) is patentability really necessary for the financial success of genetic diagnostic tests, or will the lesser protection of data exclusivity be sufficient? As this section will show, data exclusivity is the superior option because of the possibility that patent over-enforcement will stifle innovation in the diagnostic test arena.
A. A Per Se Rule Does Not Fit within the Patentable Subject Matter Framework
The current era of controversy over patentable subject matter began in 1980 with the seminal Supreme Court case Diamond v. Chakrabarty, where the inventor, Chakrabarty, filed a patent application claiming “a bacterium from the genus Pseudomonas”—not merely process claims for producing the genetically modified bacterium, but the bacteria themselves.[120] The Court determined that this bacterium was not an unpatentable natural phenomenon, but rather a non-naturally occurring manufacture or composition of matter, and therefore held that it was patentable.[121] A related case from many years earlier is Parke-Davis & Co. v. H.K. Mulford Co., where Judge Learned Hand reasoned in dicta that purified adrenaline was patentable subject matter because it was materially different from the naturally occurring adrenaline found in the human body.[122] Parke-Davis provides some of the support for the modern patenting of genes: such genes are said to be patentable because they are purified forms—the inventor has separated the gene from the rest of the chromosome and from the other cellular matter in its naturally occurring state.[123]

Four recent cases have called into question the patentability of genetic diagnostic tests. This section discusses each case and concludes not only that the framework for patentable subject matter is unpredictable and fact-specific for diagnostic tests, but also that a per se patentability rule would be unfeasible to implement without disrupting the patentability of unrelated technologies. The first such case is Laboratory Corporation v. Metabolite Laboratories,[124] where the patent at issue claimed a process that detected deficiency of two vitamins, folate and cobalamin, by measuring the level of homocysteine in body fluid. If the homocysteine levels fell within a particular range, the patent suggested that there was a vitamin deficiency.[125] In his dissent from the Court’s unusual decision to dismiss the writ of certiorari as improvidently granted, Justice Breyer wrote that the patent office should not have permitted this patent, as it “claim[ed] a monopoly over a basic scientific relationship,” yielding an interpretation under which doctors’ medical diagnoses would infringe the patent.[126] Although Justice Breyer’s view was not ultimately adopted by the Court, his remark suggested that the area of patentable subject matter was far from settled and created an opening for subsequent cases.[127]

The second major case relevant to this analysis of patentable subject matter is In re Bilski (later Bilski v. Kappos), which dealt not with diagnostic tests but with software and business method patents.[128] The applicant, Bilski, claimed a method of hedging risk.[129] The Federal Circuit disposed of Bilski’s application by determining that an algorithm falls within the scope of patentable subject matter only if it is implemented with a non-trivial machine or involves the transformation of matter, reaffirming the so-called “machine-or-transformation test.”[130] As Bilski’s method neither involved any such transformation of matter nor was implemented in a non-trivial machine, the claimed method was held to be unpatentable.[131] It was immediately unclear how Bilski would apply to diagnostic tests and methods. The Federal Circuit applied the “machine-or-transformation test” in a third major case, Prometheus Laboratories v.
Mayo Collaborative Services.[132] Prometheus is the exclusive licensee of two patents claiming methods for calibrating the dosage of thiopurine drugs.[133] Although doctors had used such drugs to treat autoimmune diseases for many years, their efficacy was limited by non-responsiveness and drug toxicity in some patients.[134] To address this complication, the patents claimed the following “method of treatment”: (1) administer a particular dosage of the drug; (2) determine the levels of the drug metabolites in the patient’s body fluid; and (3) depending on the levels of those metabolites, increase or decrease the dosage subsequently administered.[135] In the original Federal Circuit decision in 2009, Judge Lourie applied the “machine-or-transformation test” and found the requisite transformation in the method of treatment.[136] This reversed the District Court’s judgment that the patent was invalid because the administration step was a trivial “data-gathering step” (rather than fundamental to the invention) and the claims relied on a naturally occurring correlation.[137]

Subsequently, when In re Bilski was appealed to the Supreme Court as Bilski v. Kappos, the Court found the “machine-or-transformation” test too restrictive, holding that although the test could provide “a useful and important clue” as to the patentability of an invention, it was not the sole test for patentability.[138] In dicta, Justice Kennedy noted that the formalistic “machine-or-transformation test” might have the unfortunate side effect of rendering “advanced diagnostic medical techniques” unpatentable.[139] Indeed, he was not alone in this fear: when a lower court invalidated Prometheus’ patent based on the Federal Circuit’s machine-or-transformation test, one response alleged that the decision would “threaten to invalidate the entire field of medical treatment and diagnostic patents on which the innovative and lifesaving biotech industry is built.”[140] When the Federal Circuit revisited Prometheus after Bilski v. Kappos, it adhered to its initial holding that the claims were patent-eligible.[141] The court recognized that Congress envisioned a permissive approach to patent eligibility to ensure that “ingenuity should receive a liberal encouragement.”[142] The Federal Circuit found that the decision turned on whether the claims covered a natural phenomenon, the patenting of which would entirely preempt use of the underlying correlation, or only a particular application of that phenomenon.[143] The court held that it was the latter, and therefore within the scope of patentable subject matter.[144] Whether the Supreme Court will uphold the Federal Circuit’s decision is another question.

The final major case I address is Association for Molecular Pathology v. United States Patent and Trademark Office, or “the Myriad Case.”[145] The patents enforced by Myriad Genetics claimed the particular nucleotide sequences associated with genes for breast cancer, in effect claiming isolated forms of the genes in the manner prefigured by Parke-Davis.[146] These patents enabled Myriad to hold a monopoly over nearly all diagnostic tests for breast cancer caused by the two major breast cancer genes, BRCA1 and BRCA2,[147] causing outrage in the scientific community, as well as amongst patients seeking testing for susceptibility to breast cancer. Because this case deals explicitly with genetic diagnostic tests, it best illustrates the state of affairs that arises in the absence of regulation.
Myriad offers a laboratory-developed test, and although its lab is accredited under CLIA, discussed supra, the FDA has never approved the actual test offered by Myriad.[148] Therefore, there is a greater chance that the test conducted by Myriad Genetics leads to incorrect results, and without a legally available second opinion, a patient is left without recourse.[149] In his opinion in the Myriad Case, Judge Sweet determined that genes fall outside the scope of patentable subject matter and struck down the nucleotide sequence claims (and many other claims in the two patents).[150] The argument was that purified genes, despite whatever Judge Learned Hand may have said in Parke-Davis, are not “markedly different” from naturally occurring genes, and thus the patent covered a product of nature, rendering it invalid.[151]

After this decision was appealed to the Federal Circuit, the United States Department of Justice (the “DOJ”) filed an amicus brief that effectively straddled the positions of the two parties: the DOJ supported the decision to disallow purified gene patents but argued that complementary DNA patents should remain valid.[152] The distinction the DOJ drew is that because complementary DNA does not contain introns,[153] it is sufficiently different from naturally occurring genomic DNA.[154] In contrast, isolated DNA is no different from cotton fibers isolated from the cotton plant; it may be necessary to isolate a substance to make use of it, but isolation by itself does not yield an invention.[155] The DOJ distinguished the purified adrenaline in Parke-Davis from the purified gene in Myriad by raising a crucial point: with genes, it is the similarity to the naturally occurring substance, rather than the differences introduced by purification, that yields the benefits of the patent for diagnostic and medical purposes.[156] Subsequently, the Federal Circuit reversed Judge Sweet’s decision in part, holding that Myriad’s composition claims to “isolated” DNA molecules were patent-eligible, but that the claims comparing DNA sequences were patent-ineligible because they involved only mental steps.[157]

These inconsistent positions have created a great deal of uncertainty about which products fall within the scope of patentable subject matter, making it difficult for companies to make ex ante investments in this technology. Indeed, the positions adopted by Judge Sweet in the Myriad Case and in the DOJ amicus brief might be seen as arbitrary. Both interpret the words of the Constitution, “to promote the progress of science and useful arts,” to exclude, without any clear scientific demarcation, a type of invention that they do not feel would promote that progress.[158] This analysis shows that there is no clear answer regarding whether genes should or should not be patentable, especially given that the patentability of genes and diagnostic tests is not clearly prohibited by the language of the patent statute. If one conceptualizes the relationships between segments of DNA and diseases as correlations, then they are no different from the correlation upheld by the Federal Circuit in Prometheus. If one conceptualizes them as products of nature, the boundary shifts depending on how natural something must be to fall within that unpatentable class. Therefore, even where courts declare otherwise, the patentable subject matter question must be driven by policy: would it be beneficial for the progress of innovation to allow the patentability of these technologies?
Given the flux in the law on this matter, it would be difficult to create a per se rule declaring that all genetic diagnostic tests constitute patentable subject matter. Not only would Congress or the courts need to contend with the many scientists and academics who oppose such a position, but the rule itself would throw into question the entire jurisprudence of patentable subject matter, with effects on other technologies, as discussed by Justice Kennedy in Bilski.[159] Therefore, it would be unwise for the genetic diagnostic test industry to rely on predicting the direction of the Supreme Court in deciding whether a particular test falls within the scope of patentable subject matter. Most importantly, even if Congress supported such a rule, it is not clear that blanket patent-eligibility would “promote the Progress of Science and useful Arts,” as the Intellectual Property Clause of the Constitution requires,[160] as shown by the innovation-stifling actions taken by some patent-holders in this arena, which I discuss next.
B. In the Absence of a Per Se Rule, Companies Still Need a Baseline Level of Exclusivity
In the absence of a per se rule, is the solution to have extensive regulation of genetic diagnostic tests, but no patent protection? It is crucial to remember that the Myriad Case took place against a backdrop of no regulation, and even then, companies were concerned about their initial investment in the technology. This section discusses the possibility of data exclusivity as a lower tier of protection that would enable diagnostic test companies to recoup investment in new tests even if courts find that such tests fall outside the scope of patentable subject matter. Based on this discussion, I conclude that FDA-mediated data exclusivity for all tests that require clinical testing for approval is the best option.

Many articles have discussed blocking patent issues in gene patents.[161] Some have argued that patents are not necessary to incentivize the development of genetic diagnostic tests.[162] Gene patents’ claims cover nearly twenty percent of the human genome, and those interested in doing research on the claimed genes must overcome the hurdle of whatever licensing costs the patent-holder chooses to impose.[163] University researchers have also asserted that “[the prospect of] patents do not affect research in this area,” as most research is done in a university setting, funded by government grants.[164] At the same time, private companies conduct a large amount of research building on university research in developing diagnostic tests,[165] and that work does depend on patents.[166] A recent study found that at least one patent in either Europe or the United States covered 19 of the 22 most prevalent hereditary diseases, and that many were covered by several patents.[167] In both the United States and Europe, universities are the top patent-holders,[168] and most genetic diagnostic patents originate in the United States,[169] which the study authors speculate stems from the liberal patent policy in this country.

In Heller and Eisenberg’s seminal article, the anti-commons is characterized as many cross-cutting patents that require new entrants into the field to license from each of these actors.[170] However, if patents can be circumvented or “designed around,” no such licenses are necessary. Therefore, the current state of genetic diagnostic patents is not necessarily an anti-commons as imagined by Heller and Eisenberg: as the Huys study recognizes, although 25% of patents claim particular genes, only 3% of these patents cannot be circumvented.[171] However, 38% of these gene patents claimed diagnostic methods, which are generally more difficult to circumvent.[172] The Huys study made no substantive conclusions about the existence of any patent thicket, but noted that the uncertainty associated with the patentability of such tests created more difficulties for inventors in the development of technologies associated with gene patents.[173]

More recently, a series of extensive studies, led by Professor Cook-Deegan, focused on ten hereditary diseases and the effects of patents on their treatment and research.[174] Here, I focus first on breast cancer and associated cancers, and then on Alzheimer’s Disease tests, given their high rates of occurrence and significant impact on the population. Breast cancer is a particularly lucrative disease for patent-holders like Myriad, which is the exclusive licensee and sole provider of tests based on BRCA1 and BRCA2 in the United States.
For colorectal cancer, tests are available from multiple laboratories (including Myriad), but no provider has the same monopoly position that Myriad holds in breast cancer testing.[175] Therefore, the comparison between breast cancer and colorectal cancer testing is a useful gauge of the effects of diagnostic patents. Even as a monopoly entity, Myriad often acts in the public interest. As of August 2008, Myriad had submitted over 18,000 entries for the 2,600 unique mutations to the Breast Cancer Information Core database (a publicly available central repository for information regarding mutations and polymorphisms in breast cancer susceptibility genes).[176] However, Myriad has also limited certain types of research by using a very broad definition of what constitutes infringement—when the Genetic Diagnostics Laboratory began testing patients using National Cancer Institute protocols, while additionally providing breast cancer results to patients, Myriad claimed that this constituted patent infringement.[177] Myriad does not enforce its patents against non-commercial research or against laboratories providing tests that it does not sell,[178] although its ambiguous policy may still create a chilling effect. In contrast, the studies found no similar chilling effect with the two tests offered for colorectal cancer.[179] In House Judiciary Committee hearings, some scientists argued that Myriad purposely did not adopt more cost-effective testing methods.[180] However, the sequencing methods used by Myriad for breast cancer tests are actually cheaper than those used by Myriad and other providers for similar colorectal cancer tests; if the technology has advanced, Myriad’s competitors have not adopted it either.[181] Still, even assuming that Myriad is using the most cost-effective testing methods, its tests remain prohibitively expensive compared to molecular marker tests: breast cancer gene sequencing as offered by Myriad costs $2,400 per patient on average, compared with $99 for the 23andMe test that includes preliminary results for various breast cancer risk factors, including breast tissue density.

Similarly, Alzheimer’s Disease, which cost the U.S. healthcare system $61 billion in 2002, is a disease where patents can be very lucrative.[182] The majority of those with the disease have late-onset Alzheimer’s, which has only one clearly established risk factor, known as APOE.[183] A small percentage of cases arise from early-onset Alzheimer’s, for which there are three dominant mutations, each found in one of three genes.[184] In the United States, genetic testing for Alzheimer’s is provided almost exclusively by Athena, which holds licenses to three major Alzheimer’s gene patents[185] and offers the test for $475. However, Graceful Earth, a direct-to-consumer testing company, offers a non-FDA-approved test for Alzheimer’s for $280,[186] using indirect markers rather than the strong-effect markers used by Athena.[187]

In a well-regulated environment, consumers could expect near certainty in the risk results provided by Athena or Myriad, and reasonable certainty from the marker tests offered by Graceful Earth or 23andMe. However, such regulation is costly for companies. In the absence of any data exclusivity or patent protection, Athena or Myriad would likely need to provide extensive clinical data to the FDA in order to market their products, while Graceful Earth and 23andMe could use that data free of charge and benefit from Athena or Myriad’s initial investment.
Such copying by other actors might make inventors like Athena or Myriad reluctant to take on the initial burden under additional FDA regulation. The crucial question becomes whether, in the presence of additional regulation, we need the complete protection offered by patents—twenty years of exclusivity in exchange for the information disclosed to the public—or whether a lower tier of exclusive protection, such as clinical data exclusivity, would suffice.[188] The answer seems to be that a lower tier of protection would be sufficient, even beneficial. Patents are often enforced indiscriminately against copyists and innovators alike, stifling new growth in this field. A lower tier of exclusive protection would enable new genetic diagnostic tests to be developed in a short period, while still allowing innovators to protect their investment in the technology and in producing clinical data for the FDA. Furthermore, the disclosure provided by the patent system in this field is minimal compared to the information found in scientific publications, suggesting that the bargain struck by the patent system may not be fulfilled in the area of genetic diagnostic tests.[189] Finally, concerns about lower test quality from lack of competition could be allayed by stringent FDA regulation of these tests.[190]

Therefore, a per se patentable subject matter rule is not the solution for protecting the interests of consumers and genetic diagnostic test developers. Not only would it fit poorly within the current patentable subject matter framework, but it is also not the best promoter of innovation. At the same time, a baseline level of exclusivity for a shorter period is required to allay some of the uncertainty facing diagnostic test developers, particularly given extensive regulation by the FDA. For these reasons, innovation policy would be better served by granting genetic diagnostic test developers data exclusivity rather than patent protection.
IV. Suggestions for a New Regulatory Regime for Genetic Diagnostic Tests
With the earlier discussions in mind, this section presents a comprehensive framework for the regulation of diagnostic tests. In response to the regulatory problems discussed in Part II, some legislators have suggested the creation of an entirely new department within the FDA to address the unique needs of genetic diagnostic tests.[191] However, such drastic measures are unnecessary. Notably, I do not recommend that the FDA change its current procedures for testing analytical validity (allowing the CMS to continue such regulation), nor that it significantly change the current three-class system for medical devices before applying it to genetic diagnostic tests. Instead, I propose the following five simple changes to the regulatory system for genetic diagnostic tests at the FDA.
- Increase Requirements for Labeling and Genetic Counseling That Keep up with Developing Technology through Monthly Meetings
With a genetic diagnostic test, adverse consequences from an incorrect result can be significant, leading to mismanagement of the disease or unnecessary surgery.[192] In light of the known errors in the genetic test methods outlined earlier, I would add more to the FDA’s analysis. Firstly, there should be different oversight for tests that rely on molecular markers and for tests that rely on sequencing. The former should have clear labeling explaining that molecular markers may not predict a patient’s risk at all, particularly for patients of certain backgrounds. Molecular marker tests that frequently lead to disparate results when using different markers should be required to use multiple-effect markers for the same gene, or to consistently use strong-effect markers.

Secondly, the FDA should mandate expansive labels outlining the factors required in the interpretation of diagnostic tests. In a long-term study of the effects of a representative genotyping test, researchers found no indications of test-related distress in 90.3% of participants.[193] However, the Bloss study’s authors acknowledged the unusual demographics of the study cohort: most of those who completed the study had some post-graduate education, and of the 44% of participants who failed to follow up after testing, most had completed a four-year college degree.[194] To address this limitation, the FDA should expand the labeling requirements in the test results for the presentation of certain disease alleles. Many companies that provide genotyping for multiple markers have associated websites and blogs that are continually updated with new information associated with the subject’s disease alleles. If the initial test results are provided on the website, this format is adequate. If they are communicated via mail, the FDA should mandate that associated genetic risk information and context for the most severe disease alleles appear in the printed report.

Thirdly, certain tests should require consultations with a genetic counselor. Informally, the FDA has indicated that the availability of consultation with certain tests will be taken into account in the approval process, but no official guidelines have mentioned such treatment.[195] I recommend these consultations despite the Bloss study’s finding that only 10% of cohort participants elected free genetic counseling,[196] partly because 26% of cohort participants elected instead to show their test results to their physician. Because most physicians in the United States feel ill-equipped to deal with such genomics data,[197] these results suggest that genetic counselors should augment the role of the physician by taking into account family histories and environmental factors—elements that can affect the expression of the gene in question and help reduce errors from penetrance and expressivity—when interpreting test results.
Occasionally, the FDA may choose to allow a company to offer a particular genetic diagnostic test only to members of a particular population or racial cohort, or to require additional supporting scientific studies from the manufacturer, as many genetic diagnostic tests are reliable only within certain populations.[198] The FDA has a reasonably well-defined set of non-binding regulations governing the collection of race and ethnicity data in clinical trials, and in these regulations it requires application sponsors to analyze whether dosage must be altered for certain sub-groups.[199] However, although the FDA is cognizant of these issues, it may be difficult to give proper weight to each issue when a test offers data for multiple traits, each predicated on different studies.[200]

Finally, a committee should convene monthly within the FDA to determine future regulations and options for genetics-based diagnostic tests. This committee should have limited conflict-regulation options.[201] There should be options for regulations that deal with race- and sex-based differences in the test data. Although the FDA has stated that it would be difficult to hold these meetings with regularity to address new developments in the field, it would be difficult to regulate this technology without them.[202] Until a decade ago, most genes were discovered using extensive family histories from cultures that kept good records and often had a history of inbreeding. With the advent of better sequencing technology, genome-wide association studies have become more widespread.[203] As a result, the information that is translated into diagnostic tests, particularly genetic diagnostic tests, is rapidly advancing, which means that a command-and-control approach will likely become obsolete every few years. This suggestion allows the FDA to move incrementally, detailing how the present medical device framework will apply to diagnostic tests, with frequent consultation from advisory groups. These groups can specifically examine the problems with genetic tests discussed earlier: expressivity, penetrance, and complex traits involving multiple genes. As genes become better-defined, these committees can ensure that tests do not rely on outdated data.
- Provide a Data Exclusivity Backstop for Tests That Require Clinical Data for Approval

The FDA has admitted that it does not take into account costs or the patent system in its analysis.[204] However, as discussed supra, biotechnology and pharmaceutical companies do take into account the protections that the patent system offers when deciding which products to pursue on a commercial scale.[205] Furthermore, the patentability of the diagnostic tests that the FDA seeks to regulate is not clear. While Judge Sweet and the subsequent amicus brief by the Department of Justice have made a reasonable argument for limiting the patentability of the “purified form of a gene,” this position does not clarify the patentability status of other genetic products. Nor will such questions be resolved quickly. Indeed, the biggest lament of biotechnology companies has been that the status of these products is so uncertain that it is difficult to invest in these new technologies.

At the same time, numerous studies on the development of genetic diagnostic tests for breast cancer, colorectal cancer, and Alzheimer’s Disease have shown that much of the research that has led to these discoveries is not privately funded, but rather funded by the government.[206] In fact, it is generally not until the commercialization stage that private actors enter the picture. In the previously open regulatory climate, the comparatively small investment required to transfer a laboratory invention into a commercial product meant that patents on such technologies were not always beneficial, and were occasionally harmful when enforced stringently. However, the regulatory situation has changed, which means that those interested in providing diagnostic tests must invest more to bring their tests to market. Given the comparatively smaller investment required to develop diagnostic tests compared to pharmaceutical drugs,[207] as well as the smaller scope of clinical trials that must be completed, there is an appropriate role for the FDA to provide a baseline level of data exclusivity, expanding the current Safe Medical Devices Act regime and the exclusivity regimes under the Orphan Drug Act and the Pediatric Exclusivity Act.[208] In the latter models, companies receive data exclusivity from the FDA when they voluntarily undertake clinical studies suggested by the FDA to allow the use of their current drugs on children or to treat rare diseases. The Pediatric Exclusivity Act provides six months of exclusivity for all uses of the drug, not just pediatric uses, while the Orphan Drug Act provides an extraordinary seven years of exclusivity.[209] Here, a three-year exclusivity term for Class II diagnostics would incentivize companies and even university groups to go the extra step of preparing data for regulation without worrying about rival groups using their test data for their own approval, for the creation of a performance standard, or for the classification of their device.
- Create Mandatory Maximum Times for Class II and Class III Approval Processes or Increase the Scope of Class II Devices to Decrease Overall Approval Times
Suggested guidelines for Class II and III approval processes include: (a) no 510(k) clearance for an in vitro diagnostic should take longer than 6 months from start to finish, and (b) no PMA for an in vitro diagnostic should take longer than 12 months from start to finish. It is crucial that, in providing an additional safety factor, these additional regulations do not themselves institute a new obstacle. The Center for Devices and Radiological Health has a two-fold mission: to protect public health while promoting public health through new medical technologies.[210] This second mission cannot be fulfilled through an inefficient approval process. In implementing these new time-frame limitations, the FDA must also implement a more transparent monitoring system to track how long the approval process really takes. The FDA has often argued that this additional time provides additional safety for consumers. However, the studies discussed supra make evident that although some additional time can be useful, the extreme amounts of time that the FDA has taken have not translated into commensurate benefits for consumers.[211] Nor is it clear that the safety gains from such delays outweigh the adverse effects on patients of delays in approving life-saving drugs.[212] The timelines suggested here are based on the average review times for equivalent products in the European Union and the review times that the FDA itself espouses in the FDA white paper discussed earlier. However, it may be insufficient merely to require shorter review times. As the FDA white paper issued after the passage of the Prescription Drug User Fee Act shows, the agency has already contemplated review times of this length.[213] Therefore, I also suggest that, in order to classify a genetic diagnostic test into Class III, the burden be placed on the FDA to show why prior adverse events have necessitated this stringent classification, making Class II and the 510(k) clearance process the default position for most genetic diagnostic tests.
- Streamline Complaint Procedures for Faulty Tests

The complaint procedures for faulty tests must be streamlined because, under the proposed scheme, consumers with direct access to genetic diagnostic tests, and not just healthcare providers, will use the software. All companies with web presences—even if they do not have a software or computational test—must clearly link patients to a complaint page provided by the FDA, currently available through the MedWatch system.[214] Indeed, experts estimate that the current voluntary reporting system by physicians captures only about ten percent of all adverse events.[215] Consumers may submit such complaints for slow turnaround times, a frequent complaint made about small testing operations, in addition to incorrect test results and misleading language. Turnaround time can be important: for example, PGxHealth, a commercial provider of genetic tests, returns results within two months, compared to one year for research-based providers of the same genetic tests.[216] Many companies have offered direct-to-consumer genetic testing earlier in the treatment cycle than other diagnostic tests, treating them as a “first-pass-through” before submitting a patient to extensive medical treatment or physician analysis.[217] If genetic diagnostic test results are not available quickly, this role as a preliminary diagnostic measure is lost.
- Accommodate Software-Based and Frequently Updated Tests

Despite some jurisdictional issues, it is clear that software-based tests are “systems intended for use in the diagnosis of disease or other conditions” within the meaning of the Medical Device Amendments of 1976, even though they do not directly involve the withdrawal of samples from a patient. As whole-genome sequencing becomes an increasingly viable option for consumers,[218] such software-based tests will become more common. Armed with a copy of their full sequence, users can run their data through frequently updated software that takes into account more recent meta-analyses and scientific studies. Indeed, such software is similar to the Promethease program already used by many in analyzing the raw data provided by 23andMe and Navigenics.[219] If the FDA could regulate only the companies that actually take samples from their clients to create the whole-genome sequences, it would not be regulating clinical validity at all, but rather the analytical validity of high-throughput sequencing machines, a mostly useless task. Furthermore, it would be unusual to require clinical trials from entities that may consist exclusively of programmers. If the programmers have properly designed the software from existing data in published papers, the FDA should be able to verify that data easily. This situation may require the FDA to alter the types of individuals who work in its testing division, such as including more individuals familiar with computer programming or statistical techniques, and to take a more ad hoc approach to such tests, allowing verification based on paper data, that is, data that comes not from clinical trials of the particular genetic test but from previously published research on the particular genetic correlation. If the test cannot be verified based on published data, then the programmer cannot make it available to consumers. If the programmer wishes to take on the burden of clinical testing to have the option of offering the test to consumers, this option should be left open. Additionally, with software-based tests and other tests that are updated frequently, the FDA should implement a simple online submission system for previously verified tests. This system would function differently from pre-market or post-market options, acting more as a constant monitoring system: the entity seeking approval would submit the change to the test along with the study that supports the change, allowing minimal FDA oversight to verify test changes. A sketch of how such a software-based test operates appears below.
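To make the category concrete, the following is a minimal sketch, in Python, of the kind of software-based test described above: a program that reads a consumer’s raw genotype export and flags genotypes matching a table of risk alleles drawn from published studies. Every rsid, allele, condition, and citation in the table is an invented placeholder, not a real clinical association; the assumed input is the tab-separated rsid/chromosome/position/genotype layout used by several direct-to-consumer services.

```python
import csv

# Hypothetical lookup table of risk alleles. A real tool would rebuild this
# table from current peer-reviewed studies with each software update, which
# is precisely the kind of change the online submission system proposed
# above would track.
RISK_ALLELES = {
    "rs0000001": {"allele": "T", "condition": "condition A", "source": "placeholder study 1"},
    "rs0000002": {"allele": "G", "condition": "condition B", "source": "placeholder study 2"},
}

def scan_raw_genotypes(path):
    """Yield (rsid, genotype, record) for each risk allele found in the file."""
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            if not row or row[0].startswith("#") or len(row) < 4:
                continue  # skip header, comment, and malformed lines
            rsid, _chromosome, _position, genotype = row[:4]
            record = RISK_ALLELES.get(rsid)
            if record and record["allele"] in genotype:
                yield rsid, genotype, record

if __name__ == "__main__":
    for rsid, genotype, record in scan_raw_genotypes("raw_genotypes.txt"):
        print(f"{rsid} ({genotype}): reported association with "
              f"{record['condition']} ({record['source']})")
```

Because the program’s medical content lives entirely in the lookup table, the FDA could verify such a test against the published studies cited in the table, consistent with the paper-data approach suggested above.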
V. Application of Proposed Regulatory Regime to a Challenge for the FDA
This section analyzes the application of this regulation to a test that flouts the conventions for both patents and FDA regulation, what Nature describes as “The Renegade Gene Test.” This test, also known as the Salzberg test, is a computer program that can check any genome for 68 gene mutations that increase the risk of breast cancer and other cancers.[220] Although over 1,000 gene mutations have allegedly been linked with the occurrence of breast cancer, the programmers believe that their program can be easily expanded to include the effects of those mutations when the time comes. Their software is freely available under an open source license that allows others to use, modify, and redistribute the program. A simplified sketch of how such a panel check operates appears below.
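The following is a simplified illustration, not the actual open-source Salzberg-Pertea code, of how a computational test can check any genome for a fixed panel of mutations. It assumes variant calls in the standard VCF text format, and the genomic coordinates, alleles, and labels in the panel are invented placeholders.

```python
# Hypothetical stand-in for the 68-mutation panel: (chromosome, position,
# reference allele, alternate allele) tuples mapped to a descriptive label.
MUTATION_PANEL = {
    ("17", 43100000, "A", "G"): "placeholder BRCA1-region variant",
    ("13", 32300000, "C", "T"): "placeholder BRCA2-region variant",
}

def check_genome(vcf_path):
    """Return the panel mutations present in the user's variant calls."""
    found = []
    with open(vcf_path) as vcf:
        for line in vcf:
            if line.startswith("#"):
                continue  # skip VCF header lines
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 5:
                continue  # skip malformed lines
            chrom, pos, _variant_id, ref, alt = fields[:5]
            label = MUTATION_PANEL.get((chrom, int(pos), ref, alt))
            if label:
                found.append((chrom, pos, label))
    return found

for chrom, pos, label in check_genome("my_genome.vcf"):
    print(f"chr{chrom}:{pos} matches panel entry: {label}")
```

Note that the program never touches a biological sample; its regulatory significance lies entirely in how faithfully the panel reflects the published literature.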
A. Regulation of Analytical Validity and Classification of the Tests
Under the suggested regime, neither the Myriad test nor the Salzberg test would require any testing for analytical validity. Since the Salzberg test is an open source project, this might create problems, but they would likely be resolved through the user-generated-content model that has been successful with Linux and Wikipedia. Since breast cancer is a serious disease, both the Myriad test and the Salzberg test could fall under either Class II or Class III. Because women may undergo preemptive mastectomies and other serious treatment after taking the Myriad test, but might not pursue the same options after taking the Salzberg test, Salzberg might argue for a lower level of regulation. As a result, Myriad could be regulated as a Class III device, while Salzberg could be regulated as a Class I or II device. Regardless, the Myriad test would likely be approved with little difficulty within the time frames suggested supra, although the FDA might require warnings addressing the utility of a negative result on the test, as many breast cancer patients do not have mutations in either BRCA1 or BRCA2, the two genes associated with the onset of breast cancer.

Salzberg would first argue that computational tests fall outside of the FDA’s regulatory jurisdiction. This argument would likely be unsuccessful because computational tests clearly fall within the language of the Medical Device Amendments of 1976.[221] The approval of the Salzberg test would also be somewhat difficult. Because it looks at only 68 gene mutations when over 1,000 are possible, the test does not “represent a comprehensive list of BRCA mutations.”[222] Since the clinical studies published on breast cancer do not rely on such a small subset of gene mutations, the FDA could not assume that the computational test provides the same accuracy as if it covered all 1,000 mutations. At this juncture, this Note has recommended that the FDA refrain from requiring full clinical trials from computational test providers. Instead, the FDA may require that Salzberg either cite an earlier study demonstrating that 68 mutations can predict the disease with the same accuracy as 1,000 mutations, or at least show the level of accuracy provided by these 68 mutations (or molecular markers, as the case may be), as sketched below. Because of the considerable expense involved in conducting their own clinical trials, it is likely that Salzberg and Pertea would instead need to refine the test to include all 1,000 mutations. Even if they provided a test that did use all 1,000 mutations, the FDA would likely require them to place certain warnings on their website detailing the possible errors from inputting one’s raw genome data on one’s own, as well as a link to the FDA complaint page suggested supra.
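One way to “show the level of accuracy provided” by a fixed panel, as suggested above, is a simple coverage calculation: given the share of mutation-positive patients attributable to each reported variant in the published literature, estimate what fraction of those patients the panel would detect. The sketch below illustrates the arithmetic only; all frequencies are invented placeholders, and variants are assumed not to co-occur for simplicity.

```python
# Hypothetical: fraction of mutation-positive patients attributable to each
# reported variant, as one might compile from published studies.
FRACTION_OF_CARRIERS = {
    "variant_001": 0.040,
    "variant_002": 0.035,
    "variant_003": 0.001,
    # ...entries for the remaining reported variants would follow...
}

PANEL = {"variant_001", "variant_002"}  # stand-in for the 68-variant panel

def panel_sensitivity(fractions, panel):
    """Fraction of mutation-positive patients whose variant is on the panel."""
    return sum(share for variant, share in fractions.items() if variant in panel)

print(f"Estimated panel sensitivity: {panel_sensitivity(FRACTION_OF_CARRIERS, PANEL):.1%}")
```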
B. Possible Patent Protection and Data Exclusivity
At minimum, the Salzberg test would infringe upon a number of existing gene patents. However, because individuals would have to input their own genetic data to perform the actual test covered by the patents, Myriad could sue the programmers Salzberg and Pertea only for inducing infringement, rather than for direct infringement.[223] Let us suppose that the Supreme Court strikes down the Federal Circuit’s Myriad decision and that the only option is data exclusivity under the plan outlined in Part IV. Given the clear correlations available between the BRCA1/BRCA2 genes and breast cancer, it is unlikely that Myriad would need additional clinical trials for approval at the FDA. Since it does not need to submit new data and can rely on published data, there is no option for data exclusivity at the FDA, and Myriad would receive no protection from those who wish to replicate its test. This is likely a good thing, given that the earlier studies (discussed supra in Part III) show that more breast cancer test options would be available at lower prices if the Myriad patents were not enforced as stringently. However, it could also cause future companies like Myriad to avoid publishing their results when a new gene is discovered, particularly if the patent system is no longer an option and thus there is no mechanism forcing disclosure. This threat is largely illusory, however, as the identity of the gene can be derived from the publicly available test. Therefore, the proposed regulatory scheme in this paper provides the proper incentives for both innovators and consumers.
Conclusion
This paper could not possibly cover all of the issues arising from the regulation of diagnostic tests or the questions of their patentability. The proposed regulation aims to minimize the effects of three common errors in genetic diagnostic tests affecting consumers by increasing labeling and genetic counseling requirements and adopting new testing protocols, while reducing ex ante uncertainty for genetic diagnostic test companies by clarifying classification regimes and approval times and providing for a data exclusivity backstop of three years if a test is found to be outside the scope of patentable subject matter. The regulation also aims to improve FDA administration by enhancing adverse event reporting and maintaining monthly meetings to address the rapid improvements in the field.
[223] 35 U.S.C. § 271(b)–(c) (2006) (outlining the requirements for actionable induced infringement, including the requirement that the underlying direct infringement be successful).