Health Privacy in Research

Group Compromise: Perfect Cases Make Problematic Generalizations

Leslie Pickering Francis, University of Utah
John G. Francis, University of Utah

Rothstein (2010) argues that groups may be harmed by research on deidentified data. He concludes that researchers are obligated to minimize group harms and demonstrate respect for a studied group through robust opt-out capacities, information about the possibility of group-based harms, and publications referencing the group that reflect “extraordinary caution and precision.”

Like other commentators, Rothstein uses as a touchstone example for group harm the research at Arizona State University involving the Havasupai. The Havasupai contended that research originally intended to study diabetes, a significant health problem for them, was extended without permission to mental illness and migratory patterns, the latter challenging critical cultural traditions. The tribe’s lawsuit was recently settled on published terms that included payment of $700,000 to 41 individual plaintiffs, benefits for the tribe such as telemedicine, and scholarships for tribal members to several Arizona universities (Capriccioso 2010).

Yet the Havasupai are an unusual example for analyzing group harms. Research using health information from members of such a group has the recognizable potential to affect both the group and its individual members. With a group as identifiable as the Havasupai, risks may be relatively clear before research is undertaken, and there may be recognized structures for seeking consultation with group members. Most publicized was the possible discovery of the inaccuracy of the group’s tradition of origin in the Grand Canyon, a discovery challenging both long-held cultural traditions and the role of these traditions in attracting tourists or establishing claims to the land (Pritzker 2000). But there are other possibilities for compromise in research with such a small group.
Consider the risk of discovery that a percentage of the group’s members do not have the familial relationships they believe they have; discovery of a percentage of cases of nonpaternity, for example, might have deleterious consequences ranging from others stigmatizing the group as promiscuous to efforts within the group to identify those whose membership is suspect. Or consider the discovery of patterns of illness among members of the group that might have partial genetic explanations, or explanations citing common practices (diet?) or common experiences (toxic exposures?). With a comparably small group, such discoveries might lead group members to fear for their own health, or lead others to view them as risks to themselves or to others. These risks do not require individually identifying information. People do not understand probability well and may believe that if a given percentage of a group has a characteristic, each group member has that probability of having that characteristic. Or framing effects may lead people to associate membership in a group with a published characteristic, no matter how inaccurate the association in a particular case. Individual tribal members thus may find their own choices to opt out of little efficacy when others opt in.

The factors that constitute the Havasupai are rarely shared in such clear fashion by other groups. Of the approximately 650 enrolled members of the “People of the Blue-Green Water,” some 450 live in close proximity in Supai village (Certification 1946). The tribe relies on tourism for much of its income and is governed by an elected, seven-member council. This is a group with close familial, cultural, geographic, governmental, and economic ties (http://www.havasupaitribe.com, accessed May 1, 2010).

More fundamentally, what it is to be a member of most groups may be contested on many levels. The idea of a group may be metaphysically contested: Is the group linked biologically, through function as a category of social explanation, or through identification by group members, or should talk of the group be eliminated altogether (Mallon 2006)? “Identity”-based groups, such as women, LGBT (lesbian, gay, bisexual, and transgendered) people, people with disabilities, and African-Americans, may have no shared geographical location or political institutions and indeed may themselves have multiple identities (Appiah 2005). Crucially, when shared characteristics are identified by analysis of health data, people may not even be aware of their likely connections to groups until after publication of the research. In such circumstances, two of Rothstein’s suggested protections, robust opt-out capacities and information about the potential for group harm, function weakly at best.
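The misreading of group-level statistics described above can be made concrete with a small sketch. The group size echoes the enrollment figure mentioned in the text, but the prevalence figure and trait are entirely invented for illustration:

```python
# Hypothetical illustration (the prevalence is invented): a published
# finding that 12% of a 650-member group carries some trait is easily
# misread as "each member carries the trait with probability 0.12,"
# even though the finding itself implies that most members do not.
group_size = 650
prevalence = 0.12

carriers = round(group_size * prevalence)   # members the finding describes
non_carriers = group_size - carriers        # members it does not describe

print(f"{carriers} members carry the trait; {non_carriers} do not.")
print(f"An outsider applying the group rate to every member nonetheless "
      f"treats all {group_size} as equally suspect.")
```

The arithmetic underscores the point in the text: a finding that accurately describes only 78 people can, once attached to the group label, color perceptions of all 650, including any who opted out.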
Only the most general kind of consent to data use is possible: warning that unexpected findings may have negative consequences, such as being believed to be at risk or a risk to others, or being disparaged, mocked, or stigmatized. But it will be very difficult to concretize these possibilities during an informed consent process conducted in advance. “Opt-out” may seem irrelevant unless the reason for it is highly salient, a problem compounded by the well-known tendency to accept default settings rather than taking active steps to opt out. Once findings have been published, moreover, “opt-out” possibilities are no longer realistic protections for anyone who might be identified with the finding about the group. Indeed, even those who have opted out from the very beginning will for practical purposes be included in the messages drawn from the research if they are otherwise identified with the group.

This leaves only Rothstein’s third suggested protection: “extraordinary caution and precision” in publication. Rothstein does not say more about what these might mean in practice; a legal analogy is “strict scrutiny,” requiring a compelling state interest and regulation narrowly tailored to its achievement (Obasogie 2008). On this analogy, publication must await clear confirmation: ill-confirmed speculations about a “gay” gene or about intelligence and racial groups would be inappropriate. Publications must explain any findings with care, pointing out limits to significance and confounding factors; discussions of diabetes among particular ethnic groups, for example, must reference poverty. Publications must also draw accurate lines of inclusion and exclusion; references to spatial intelligence among women or to mental retardation and people with autism spectrum disorder would be both under- and overinclusive.

But scientific publication, even when robustly confirmed and cautiously described, may be met by misunderstanding and inflammatory re-description. The results may reasonably threaten the trust of patients who learn belatedly that the findings under discussion are, rightly or wrongly, taken to apply to them (Francis 2010). Nonetheless, research involving large and relatively complete sets of health data is important for science; public health; health care quality, safety, and efficacy; and individual patient care. Herein lies the apparent appeal of deidentified data: critical benefits without concomitant risks to individuals.

Address correspondence to Leslie Pickering Francis, University of Utah, Law/Philosophy, 332 S. 1400 E, Salt Lake City, UT, USA. E-mail: [email protected]

September, Volume 10, Number 9, 2010
Rothstein argues persuasively why the deidentification strategy is inadequate, but frames his recommendations around an example that oversimplifies the problems his proposed solutions face. Are we, then, left with the dilemma of choosing between utilitarian benefits and compromise of groups or of those associated with group characteristics? Several further strategies may help.

The first is far more transparency and public discussion of uses of data than are occurring now. In this vein, Rothstein suggests “public engagement” and study before regulatory regimes are extended to deidentified data sets. Damage may be done, however, before and despite the extension of regulation; transparency and engagement can only be part of the picture. Ours is an age of health data set collection and availability at many levels: health care systems, health information exchanges, state health departments, and the federal government. Regulatory regimes vary from state to state, but findings from data collected in one state with a relaxed regulatory regime may have substantial implications not only for members of a group resident in the state of collection but also for members of the same group residing elsewhere (Francis and Francis 2010). Where regulatory regimes are decentralized and groups lack effective voice, there are obvious limits to regulating unanticipated harm to groups through the political process. What is meant by “public engagement” will thus be problematic, and it is useful to consider additional recourse for unintended harm from the publication of findings construed to have implications for people identified with a given group.

Currently, antidiscrimination law in the United States comes into play for the most part after harms have occurred. Individuals who have lost jobs, promotions, or insurance must first recognize that they have been discriminated against, bring suit (albeit with possible help from the Civil Rights Commission), shoulder any costs of litigation delay, and bear the burden of persuasion (especially difficult in “mixed”-motive cases). This is true even of the Genetic Information Nondiscrimination Act (GINA) of 2008, which prohibits “acquisition” of genetic information in addition to its discriminatory use. By contrast, in the area of worker safety the Occupational Safety and Health Administration has the power to inspect for violations before harm occurs. With health reform, the risk of discriminatory insurance denials may diminish, although employers may still try to dismiss employees out of fears, drawn from health information, that they will be less productive and more expensive generally. Thus, in the United States, the predominance of employment at will and the lack of social safety nets more generally remain problems that must be recognized and addressed concomitantly with the power and promise of health data.

REFERENCES

Appiah, K. A. 2005. The ethics of identity. Princeton, NJ: Princeton University Press.

Capriccioso, R. 2010. Havasupai blood case settled. Indian Country Today, April 21. Available at: http://www.indiancountrytoday.com/archive/91728874.html (accessed May 1, 2010).

Certification. 1946. Corporate charter of the Havasupai of the Havasupai Reservation, Arizona. Available at: http://books.google.com/books?id=ko7vYv7otNgC (accessed May 1, 2010).

Francis, J. G., and L. Francis. 2010. Rights variation within a federalist system: The importance of mobility. Political Research Quarterly, doi:10.1177/1065912909340893.

Francis, L. 2010. The physician–patient relationship and a national health information network. Journal of Law, Medicine & Ethics 38: 36–46.

Genetic Information Nondiscrimination Act. 2008. Public Law 110–233, 122 Stat. 881.

Mallon, R. 2006. ‘Race’: Normative, not metaphysical or semantic. Ethics 116(3): 525–551.

Obasogie, O. 2008. Beyond best practices: Strict scrutiny as a regulatory model for race-specific medicine. Journal of Law, Medicine & Ethics 36: 491–496.

Pritzker, B. M. 2000. A Native American encyclopedia: History, culture, and peoples, 30–31. New York: Oxford University Press.

Rothstein, M. A. 2010. Is deidentification sufficient to protect health privacy in research? American Journal of Bioethics 10(9): 3–11.

Guiding Deidentification Forward

Melissa M. Goldstein, George Washington University Medical Center

The Health Information Technology for Economic and Clinical Health (HITECH) Act (Division A, Title XIII of the American Recovery and Reinvestment Act [ARRA] 2009, §§ 13101–13424, 123 Stat. 115, 228–279) requires the Secretary of the U.S. Department of Health and Human Services (HHS), in consultation with stakeholders, to issue guidance on how best to implement the requirements for the deidentification of protected health information as designated in the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule (ARRA 2009, § 13424(b), 123 Stat. at 277 [codified at 42 U.S.C.A. § 17953 (2010)]).

To solicit input from stakeholders with practical, technical, and policy experience, the federal Office for Civil Rights (OCR) at HHS organized a two-day public workshop in March 2010. The workshop consisted of multiple panels geared toward collecting views and addressing policy concerns regarding implementation of the current deidentification standard and potential changes to it. The panels addressed specific topics related to the Privacy Rule’s deidentification methodologies and policies, including: methodological issues associated with HIPAA Privacy Rule deidentification; statistical disclosure control and HIPAA Privacy Rule protections; anonymization and the HIPAA Privacy Rule; policy interpretations of HIPAA Privacy Rule deidentification requirements; and deidentification and legal contracts (see http://www.hhshipaaprivacy.com/index.php). OCR plans to synthesize input from the workshop panelists, as well as general comments, into guidance to be posted for public comment (see http://www.hhs.gov/ocr/privacy/hipaa/understanding/coveredentities/Deidentification/deidentificationworkshop2010.html).
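The Safe Harbor approach among the Privacy Rule’s deidentification methodologies can be illustrated with a minimal sketch. The field names, record layout, and helper function below are invented for illustration; actual Safe Harbor compliance under 45 C.F.R. § 164.514(b)(2) requires removing all 18 enumerated identifier categories and having no actual knowledge that the remaining data could identify an individual:

```python
# A minimal, illustrative sketch of Safe Harbor-style stripping of
# direct identifiers. The thresholds follow the rule's general pattern,
# but the record layout and field names are hypothetical.

DIRECT_IDENTIFIERS = {"name", "ssn", "mrn", "phone", "email", "address"}

def safe_harbor_strip(record):
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Dates more specific than the year must go; keep only the birth year.
    if "birth_date" in out:
        out["birth_year"] = out.pop("birth_date")[:4]
    # Ages over 89 must be aggregated into a single "90 or older" category.
    if out.get("age", 0) > 89:
        out["age"] = "90+"
    # Geographic units smaller than a state must go; the first three ZIP
    # digits may be kept when that area contains more than 20,000 people.
    if "zip" in out:
        out["zip"] = out["zip"][:3] + "00"
    return out

record = {"name": "A. Patient", "ssn": "000-00-0000",
          "birth_date": "1931-05-02", "age": 92,
          "zip": "84112", "diagnosis": "type 2 diabetes"}
print(safe_harbor_strip(record))
```

Note that a record cleaned this way retains its clinical content (the diagnosis survives), which is precisely why, as both commentaries observe, deidentification at the individual level leaves group-level inferences untouched.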
Mark Rothstein (2010) provides a cogent and thorough discussion of the many policy issues surrounding the use of deidentification to protect the privacy of health information, including many of the topics discussed at the OCR meeting in March. His effort is both timely and prescient: It coincides with a rapidly evolving federal policy landscape in health information technology (HIT) and, depending on the strength and specificity of the guidance produced by OCR, could foretell a sea change in how we conceptualize the deidentification of health care information, at least in terms of compliance with the HIPAA Privacy Rule. Obviously, clarification and/or amendment of the regulatory language could have wide-ranging effects on numerous sectors, including but not limited to research and the developers of electronic health record applications. A particularly interesting question, however, is how such changes might affect patients.

Rothstein argues that the current regulatory framework regarding deidentification overlooks the autonomy interests of individuals whose health information and biological specimens are used in research without their knowledge, consent, or authorization. Indeed, the issue of whether, to what extent, and how individuals should have the ability to exercise control over their health information, even identifiable information, currently represents one of the foremost policy challenges related to the electronic exchange of health information. States and other entities engaged in the exchange of electronic health information are struggling to establish policies and procedures for patient participation in their exchange efforts. While some have adopted policies enabling patients to exercise some degree of individual choice, others prioritize the needs and concerns of other key stakeholders more highly (Goldstein and Rein 2009). I have argued elsewhere that during this nascent stage of HIT adoption it is critical that we engage in discussions regarding the proper role of personal autonomy and choice in a health care environment where electronic information sharing holds primary (and possibly rightful) importance (Goldstein 2010). As evidenced by ARRA’s statutory mandate, our need for these discussions is no less pressing in the case of deidentified data.
We know that the majority of Americans are “very concerned” about identity theft or fraud (80%), the use of their medical information for marketing purposes (77%), and the possibility that their data might become available to employers or insurance companies (56% and 55%, respectively) (McGraw et al. 2009, 417). We also know that, despite these concerns, patients want their physicians to be able to communicate with one another (McGraw et al. 2009, 417), generally support the development of HIT as a whole, and believe that it will improve care and reduce costs (Schneider 2009, 5). As Rothstein notes,

Address correspondence to Melissa M. Goldstein, Department of Health Policy, School of Public Health and Health Services, The George Washington University Medical Center, 2021 K St., NW, Suite 800, Washington, DC 20006, USA. E-mail: [email protected]



Copyright of American Journal of Bioethics is the property of Routledge and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use.