Wednesday, July 29, 2015
Go straight to the survey, or read on.
The Community Ethics Committee (CEC) has become a valued resource for the ethics leadership of Massachusetts General Hospital, Boston Children’s Hospital and other institutions affiliated with Harvard Medical School.
Now you can provide a valuable voice to the CEC in its current study of an especially vulnerable patient population. These patients are known by various names -- unrepresented, unbefriended, adult orphans, etc. -- but the CEC refers to them as Unknown and Unrepresented Patients.
Due to dementia, brain trauma, substance abuse or other reasons, these patients at least temporarily cannot comprehend their medical situation or influence decisions. Further compounding the mystery, they have no one to speak for them, and there is no available documentation of their wishes.
Decisions that sometimes involve life-sustaining treatment can be left to overburdened courts, inadequately resourced public guardianship, or decision-making practices and biases unique to each physician, hospital or care facility. The result ranges from neglect to over-treatment, potentially harming both the patient and the moral values and integrity of doctors and nurses caring for them.
This patient population is growing in step with the nation’s elder population, which is projected to double by 2030. A conservative estimate is that 25,000 of these unknown patients die in American critical care wards annually. Most of these deaths involve a decision to withhold or withdraw life-sustaining treatment.
How, and by whom, ought medical decisions be made for patients who cannot make their own and have no one who knows them well enough, or cares enough, to make them on their behalf?
You can inform this important project by answering 10 questions in this survey. The time commitment is small, but the contribution is great.
The CEC is grateful to Talia Burstein, a student at Emory College and a graduate of Brookline High School, for her thoughtful and diligent work in creating this survey.
By Paul McLean at 9:45 AM
Tuesday, July 28, 2015
Robert Klitzman insisted on a question mark.
When “The Ethics Police” was in consideration as the attention-grabbing title for Klitzman’s probing and perceptive book about institutional review boards, his initial reaction may also have involved an exclamation point. But key to agreement was a question mark.
Klitzman, a psychiatrist and director of Columbia University’s master’s program in bioethics, had no such concern with the subtitle, “The Struggle to Make Human Research Safe,” because that is precisely what his book is about. Making human research safe is indeed a struggle -- a profoundly difficult balancing of the public’s eagerness for cures and treatments with the scientific community’s eagerness to respond.
With $100 billion spent annually on biomedical research in the US, the money at stake is staggering and aggressively fought over, ethical shortcuts are tempting, patients themselves increasingly insist on leap-frogging the science, and the terrain is rife with conflicts of interests.
Klitzman’s interest in research and oversight is not purely professional. His father died at age 78 of a blood cancer after an experimental round of chemotherapy that Klitzman talked him into.
“The treatment had arguably made him suffer more, not less,” he writes. “I wondered whether my mother’s wish to let him die in peace had been right -- whether I had been biased, too ‘pro-science?’”
He wondered, too, whether his father’s doctor had been overly optimistic about the treatment. He took solace in the hope that his father’s experimental treatment contributed to the care of future patients.
Klitzman’s personal story is no less important to “The Ethics Police?” than his bioethics and medical credentials, his experience as a researcher or his interviews with IRB members. Indeed, the personal gives this book its power.
Klitzman’s role in his father’s consenting to treatment -- treatment that Klitzman’s mother opposed -- provides a view into the complexities of informed consent, and the importance of a physician knowing how much to explain so that consent is indeed “informed.” Easier said than done, to be sure.
How much of the decision reflected the father’s wishes, and how much was deference to the son? How ought the physician to have weighed this?
In such a story, informed consent can seem a fool’s errand, as the most important “facts” of risk versus benefit often are known only in retrospect, and yet the consent process has long been widely accepted as crucial to safety in research. For reasons of research integrity and transparency, it’s easy to see the value in requiring that the researcher explain risks and benefits to the patient/subject, and that both actually understand the difference between “patient” and “subject” (or “participant”).
“Many of the achievements that have advanced biomedicine have also, by necessity, involved human subjects, posing myriad cultural, ethical, and legal questions,” according to Klitzman. “These trade-offs are so intricate that every year scandals erupt, unforeseen by those involved.”
For more than four decades, institutional review boards, or IRBs, have been responsible for overseeing human research and avoiding those scandals. They in a sense act as arbiters of the social contract: tasked with halting unethical research, approving valid research, and readily recognizing the difference. They also have a reputation among some researchers for slowing the advance of science. Thus, the nickname: ethics police.
Klitzman insists “The Ethics Police?” is not “an anti-IRB book” and credits IRBs with doing “extremely valuable work” as “mandated committees generally struggling to do their best to protect subjects.”
The great irony within the pages of “The Ethics Police?” is that many of these committees, established by Congress in 1974 to ensure research ethics and safety, are themselves subject to little or no scrutiny or oversight. There are now 4,000 IRBs in the US, many essentially playing by their own sets of rules, though all are subject to the federal research regulations known as “the Common Rule.” (A long-awaited revision to the Common Rule may be forthcoming later this summer.)
Klitzman doesn’t see IRBs as police, except in perhaps the exceptional cases -- those controversial occasions that shed more heat than light on the process.
And yet he has had the experience of having his own research mired in IRB quicksand, losing valuable time and opportunity to bureaucratic inefficiency. (In this, it is not hard to see aspects of the debate over the 21st Century Cures Act and its intended reforms of the Food and Drug Administration’s processes of drug approval.)
Perhaps “Ethics Traffic Cops” would be the more apt title: keeping safe and ethical research flowing while directing more troubling traffic to an alternate route and avoiding gridlock.
But in no small way, public trust in health care, especially among the marginalized and most vulnerable, depends on IRBs acting as guardians at a time when the distinction between research and care is blurring. And yet few know who IRBs are or what they do, and many IRBs apparently like it that way.
Klitzman writes convincingly that this lack of transparency must change. He wants researchers to appreciate that “IRBs are not the enemy,” and he calls for more public education about the research and oversight process.
He also calls for greater transparency from IRBs, which can be varied in approach and secretive in practice. According to Klitzman, “Whatever criticisms IRBs provoke, reforms cannot be designed or implemented effectively without fuller comprehension of the perspective of the individuals serving on the boards.”
Also needed, he writes, are studies into “how frequently committees construe and apply regulations differently due to particular psychological and institutional factors, and whether and how educational or other interventions can reduce problematic IRB variations.”
The success Klitzman had interviewing IRB members only underscores the need for more such access.
“I think we need to explore far more the interpretations of ethics,” Klitzman writes. “Many philosophers and others have tended to see ethical principles as simply involving universals that are either present or absent in particular arguments.
“(The interviewees) illustrate how these principles get interpreted and applied in variable ways, molded by broader social, institutional, and psychological influences. This is not by any means to say that all interpretations are equally strong, valid, or well-reasoned: some interpretations are better than others.”
Clinical medicine increasingly turns to multidisciplinary teams for guidance in difficult areas of decision making and care. There is an important place in research for such teams, as well.
According to Klitzman, “We need models that are based not solely on science, law, or ethics, but are instead multidisciplinary, integrating the humanities with natural and social science. Much of research ethics is, and will always need to be, done by consensus, and negotiated over time by a complex array of researchers, ethicists, and others. We should strive to reach the best consensus we can. Much depends on this balance.”
The Ethics Police? The Struggle to Make Human Research Safe. Oxford University Press. Robert L. Klitzman, M.D., author.
Also posted at paulcmclean.com
By Paul McLean at 10:22 AM