How long ago it seems. While we await the final negotiations in Washington to figure out some solution to the budgetary battles--which will likely produce a bill, endorsed by the President, that will be either "very right" or "extremely right" but will somehow be billed as "centrist"--we at the Billy Rubin Blog are feeling nostalgic tonight for those heady days of the summer of 2009, when a health care bill was slowly working its way through Congress.
Remember that? What passed for reasonable dialogue got hijacked by a very noisy rabble of Know Nothings, a group that takes descriptions like "willfully ignorant" and "anti-intellectual" as praise, who screeched at town hall meetings and demanded that "government get its hands off my Medicare." But the meme that took the cake at the time, the Chant of Nitwits as it were, was that the government was secretly planning to convene "Death Panels," and the paranoia of the imbecilic mob sent the few remaining sensible Republican politicians running for cover and pandering to save their skins (as Chuck Grassley of Iowa did here).
Between the media, who largely buy into the myth of false equivalence (the notion that every story must have two equally valid sides) and thus reported on the protests without pointing out the basic stupidity of the protesters, and the politicians, too many of whom were too spineless to shout down the nonsense, the silliness carried the day. The Obama administration backpedaled in the public relations game, ceding ground to its political foes (sound familiar?), and the consequence was that the country ended up with two presents. The first was a not especially progressive health care bill. The second was the Tea Party.
Which brings us up to the present, more or less. So whatever happened to the Death Panels? Well, they never left, argues the massively awesome blog Health Care Renewal. HCR links to Bloomberg News reporter Peter Waldman's investigation into the for-profit hospices now littering the landscape. It makes for grisly reading. For instance, Waldman relates the story of former social worker Misty Wall, who alleges in a lawsuit against Gentiva Health Services, Inc. that she was "assigned to convince people who weren't dying that they were." (A spokesman for Gentiva said that the allegations predate Gentiva's ownership of the hospice at which Ms. Wall worked; she was fired from the hospice in 2005 for refusing to continue such practices.)
Perhaps even more troubling--and that is saying something--are the allegations that for-profit hospices gave "financial kickbacks" to "referral sources" (in English, that usually means money to doctors) and tied employee bonuses to "enrollment goals," which could easily induce some employees to move the goalposts a bit and push hospice on patients for whom it is inappropriate. How's that for a "Death Panel"?! No grim bureaucrats in Washington doling out the Number of the Beast, but rather a "Death for Dollars" scheme in which the most successful recruiters, and the corporation supervising them, walk home with a tidy cash sum at the end of the day.
One notes more than a touch of righteous indignation as HCR writes,
There has been a lot of blather from politicians in the US about "death panels" in debates about health care reform. Many such politicians seem worried that the US government has or will have death panels under the new health care reform legislation. We have criticized that legislation for not addressing many important health care problems. No one, however, has convincingly demonstrated how its provisions would convene "death panels."
I couldn't agree more.
Lest I be misunderstood, hospice has been a tremendous step forward in American medicine. It allows people to die in greater comfort and with greater dignity than before. We need hospice, which is precisely why we shouldn't sully it by making it the object of some corporation's greed. But if people don't want their government involved in their health care, the business of dying will be overseen by for-profit businesses, and the Death Panels will be convened in elegant boardrooms with oak tables, plush carpeting, and executives enjoying record salaries. Sound appealing?
--br
Where a spiritual descendant of Sir William Osler and Abbie Hoffman holds forth on issues of medicine, media and politics. Mostly.
Tuesday, July 26, 2011
Saturday, July 16, 2011
The Budget "Crisis" and Medical Residencies
Among the more amusing tidbits of news to come out of the state government shutdown in Minnesota were pieces like this, in which certain voters expressed outrage over the stalemate and demanded that their elected officials just "get things done." "They could have talked more, but they get their feelings hurt a little bit, and act like a bunch of little kids," said one frustrated citizen.
Umm....no. What we witnessed in Minnesota, and what the media is playing up in Washington at this moment, is not merely a bunch of squabbling children, even if there are some childish antics involved. The problem, in Minny as in Washington, is that you have genuinely, truly divided government, with huge blocs of both parties with irreconcilable views about the proper function and structure of government, and to expect them to arrive at agreements on operating costs totally misunderstands the ideology of these blocs. You can't simply expect them to "go and get it done" because there's no consensus on even the most basic roles of government.
Despite what some commentators have said about the current inter-party spats being just more of the same-old same-old, the different visions being fought over today really are more substantive than those in any other political fight since the early 20th century. For instance, Richard Nixon and a Democratic-led Congress, while political foils, really did agree on the basics: yes, Nixon presided over some unpleasantries in Vietnam and Cambodia, and obviously sought to limit government in ways that Democrats didn't. But Nixon bought into the concept that government could play a role in preventing drag on the economy, so much so that he proposed a comprehensive health care bill not dramatically unlike the one the current President finally got passed--the hysteria over which led to the election of a House so radically different in philosophy from Nixon (Richard Nixon!) that he would have blushed. Given the budgetary concessions that President Obama has already put on the table, and his breathtaking capitulations since the arrival of the new Congress, the fact that the House "Hell No" caucus won't budge in the current debt-ceiling discussions is evidence enough that this is not just your ordinary political dust-up.
The point here is: if it really does come to pass that the US defaults because of the political stalemate, don't blame the politicians--blame the voters, who put Obama into office and then, two years later, seated not merely an opposition party but an opposition that would legislate against the sunrise if the President said "the sun will come out tomorrow" in his increasingly grating, Annie-like naiveté. It doesn't make any sense to elect a Democrat like Obama (even one as willing to adopt right-wing talking points as he is) and then vote for Republicans further to the right than even George W. Bush could have dreamed of. At least the voters of my home state of Ohio were consistent when they seated a right-wing executive and legislature in the most recent elections, and now have a new budget crafted by people with a very particular view of the role of government in the lives of its people. Let's see how that one works out for you guys in the years to come; as Chrissie Hynde of The Pretenders once lamented, "Hey, way to go Ohio."
Anyway, this blog is geared toward commenting on such political machinations in relation to the world of medicine (with a jaundiced eye befitting its name), and needless to say the world of medicine is going to undergo serious revisions if the Tea Party really does get its way and something resembling the Ryan Medicare proposal (see here if you want to read warm fuzzies about it, and here if you prefer cold pricklies) becomes federal law. (Side note: I think there's a pretty good chance that our "Democratic" President is going to avert this "crisis" by eventually signing legislation that passes with either zero, or very few, Democratic votes in the House.) And to that end, here are two stories (from the NYT and the Boston radio station WBUR) discussing the huge impact that deep Medicare cuts are going to have on teaching hospitals.
To help lay readers understand the structure: Medicare is the government-run health insurance program for senior citizens. While you almost certainly knew this (though obviously some people are not too quick on the uptake), what you may not have realized is that Medicare also finances postgraduate medical training in the US, to the tune of a little over $6 billion per year--an amount that a proposal endorsed even by Obama himself would cut by an estimated 60 percent. The feds don't pay residents directly, but rather pay the hospitals running the programs, which tend to be large academic medical centers. These places may also get hit by a shrinking NIH budget for research, as this November 2010 article on current House Whip Eric Cantor's philosophy indicates.
While places like New York and Boston are going to be hit disproportionately by major cuts to the residency training budget, you can bet that smaller places like Peoria, Illinois, with its 10 residencies affiliated with two local hospitals, will smart as well. In fact, since residency programs in smaller, more rural communities tend to serve as feeders of physicians who would otherwise not move to such places, the damage to residency training programs is going to reverberate well beyond the hospital parking lots, and has a good chance of doing so well into the future.
Perhaps this will all work itself out and the changes will benefit everyone; perhaps all that extra money from the taxes that nobody seems to be paying will give taxpayers more money to afford...well, afford something. We shall see, though hope (© Senator Barack Obama, 2008) is becoming as thin as gruel. Dickens would appreciate the consistency.
Monday, July 11, 2011
Republican Primary Maneuvering and Hypocrisy About Science, Writ Large
I keep telling anyone who will listen that we all better get used to the phrase "President Bachmann," as the Republican candidate increasingly looks like a not-so-improbable contender for the nomination. Extreme though she may be, she doesn't come with Mitt Romney's troubling baggage of being both Mormon and the former governor of a liberal state whose signature legislation was the passage of a health care law remarkably similar to what nearly all conservatives refer to as "Obamacare"; she doesn't have to resort to a two-step to explain why she worked for President Obama's administration, as Jon Huntsman does; she is a good deal more media savvy than former Alaska governor Sarah Palin; and she isn't Newt Gingrich. I'm no political expert, but in the age of Tea Party-dominated Republican primaries, I see her as having a legitimate shot, with her only substantive competition being former Minnesota Governor Tim Pawlenty, unless current Texas Governor Rick Perry gets into the mix. Maybe I'm misjudging Romney's chances, but I'd call Bachmann the favorite right now.
Representative Bachmann certainly has no fears about wading into controversy, and in doing so making herself a darling of the extreme right. Earlier in the week she became the first candidate to sign a pledge for the protection of marriage entitled "The Marriage Vow: A Declaration of Dependence on Marriage and Family," which as the NY Times noted in a blog post, puts her rivals--at least some of whom do not have such distinguished marital records themselves--into a tricky position. In a fairly short time, "The Marriage Vow" has managed to generate an uproar over whether or not it explicitly endorses banning pornography (it doesn't, as noted here); its not-so-subtle racism, as discussed here; and its religious fanaticism (see, for instance, here). In short, it highlights all of the qualities most commonly associated with the rightmost wing of the Republican Party, and quite possibly the group best positioned to put a candidate over the top for the 2012 nomination. And Bachmann got there first.
So let me pile on here and point out that one of its additional hypocrisies involves science: as part of the rationale for why the "Institution of Marriage in America is in great crisis," the Vow argues that "[the debasement of marriage continues due to an] anti-scientific bias which holds, in complete absence of empirical proof, that non-heterosexual inclinations are genetically determined, irresistible, and akin to other traits...as well as an anti-scientific bias that holds, against all empirical evidence, that homosexual behavior in particular, and sexual promiscuity in general, optimizes individual or public health." [my emphasis]
There's a lot to unpack in that pile of nonsense, but here's a start: whether "non-heterosexual inclinations" are indeed genetically determined is an open question, but to say that those who posit the theory are "anti-scientific" and are making such assertions in "complete absence of empirical proof" is patently false. The neuroanatomist Simon LeVay (author of the fascinating tome Queer Science--it proclaims its allegiance right on the cover!) pioneered studies of differences in brain structure between heterosexual and homosexual men, and while I'm skeptical of the results or even the meaning of the findings, there's no question that LeVay's work constitutes science: the empirical testing of hypotheses about the mechanisms of the world. He's hardly the only example, and the scientific literature is rife with tests, theories, and arguments about the origin and nature of human sexuality. Nobody's got a definitive answer, but the Vow's label of "anti-scientific" is really just a phrase tossed out to make the document seem respectable.
An additional yuck can be had from the fact that the author of The Vow, Bob Vander Plaats, is...well, typically anti-scientific in his fundamentalist Christianity! While running for the position of Lieutenant Governor of Iowa, Vander Plaats endorsed the teaching of "intelligent design" as an adjunct to evolution. As the redoubtable Tara Smith at the blog Aetiology points out, intelligent design isn't a scientific theory at all, something even some of its proponents realize. The casual disregard for critical thought appears to be part and parcel of the document, so a disregard for science shouldn't really be a surprise. Nor should it be a surprise that Bachmann immediately signed on to it.
--br
PS--today's Times has a fascinating article on how some schools (including Billy's medical alma mater, the University of Cincinnati!) have changed their admission interview strategies in the hopes of finding future doctors who are better team players than those who may have stellar grades but are arrogant & condescending (and who in being this way may foster poor communication leading to medical errors). I ran this past a senior colleague who works on a med school admissions committee and he seemed skeptical: "what you do in your life = what you say in a mini-interview; however, grades = long term commitment" was his quick response. Count me tentatively among those hopeful for the new system.
Sunday, June 5, 2011
Jack Kevorkian: Goodbye, Good Riddance
The news of Jack Kevorkian's death brought out a large number of laudatory comments in The New York Times (laudatory about the man rather than his death, natch). "I hope that one day the world will look back on the service Dr. Kevorkian provided and will be shocked and saddened to learn that he was ostracized and incarcerated for the practice of providing dignity and some control to those in the late stages of terminal illness," SteveBnh of Virginia wrote in a representative sample of the praise heaped on the crusader for physician-assisted suicide.
Count me among the ostracizers. As the warm comments from seemingly well-informed readers demonstrate, Kevorkian was widely perceived to be a fierce advocate for patients' rights, a promoter of death with dignity, and the victim of a hypocritical and vindictive profession hellbent on maintaining its godlike power over patients. His trial, conviction, and imprisonment in 1999 for second-degree murder have the flavor of martyrdom, reinforcing the admiration of his followers and inviting comparisons to various legendary civil-rights activists.
In reality, Kevorkian was none of these things, but rather a creepy zealot obsessed with death who knew nothing about actual patient care. (I am not using the word "creepy" lightly; read on.) Although he trained at a bona fide medical school and thus was a "doctor" in the general sense of the term, his training and subsequent practice were in pathology, where his work involved autopsies and the analysis of human tissues on slides rather than taking care of living, breathing souls with joys and fears--making his public persona as "doctor" a bit misleading, as if he were the same as Marcus Welby, M.D. Kevorkian's nickname, "Doctor Death," didn't come from the notoriety he generated in the 1980s and '90s, but from perplexed and amused housestaff during his early days, a wry observation on his peculiar fixation on photographing patients' eyes at the precise moment of death. (Various blogs and websites supportive of Kevorkian state that this is because he wanted the profession to be able to pinpoint that moment so that resuscitation could be performed, or something to that effect. It's utter nonsense: even in the 1950s, which some might consider the Dark Ages by medical standards, there were EKGs, a considerably more precise tool for determining death than staring into people's eyes, which seems positively medieval. Whatever his stated justifications, his "death photography" was pure fetish.) Long before he took up physician-assisted suicide as his cause, he bounced from hospital to hospital, disturbing various medical staffs with his distinctly unconventional preoccupations.
He was praised for his compassion despite the fact that he had not taken care of living patients since his internship and had never received training of any kind in treating depression (common enough among the terminally ill), in palliative care, or in any of the diseases he claimed to treat. His choices reflect this very poor training: among the 130 or more cases in which he was the prescriber of death, several involved patients who were neither terminally ill nor suffering, such as Janet Adkins, who had recently been diagnosed with Alzheimer's disease but, aside from mild memory loss, was in otherwise reasonably good health.
Even more disturbing were the reports of the death of Judith Curren, a 43-year-old woman who not only lacked a clear-cut underlying disorder but had reportedly been a victim of domestic violence. These are not the only such cases, but even the inclusion of these two suggests at best a sloppiness of method, and at worst a murderous instinct hidden under the guise of medical concern for suffering. ("How could I have known?" was Kevorkian's retort when confronted with the news of the messy life of the Curren family. Perhaps if his only acquaintance with them had not been through a questionnaire--if he had instead cared for Judith Curren in a legitimate medical practice for several years--such surprises wouldn't have popped up.)
In short, Dr. Kevorkian-the-Caring was a total media fabrication. He was a murderer, and if anything was treated gently by the justice system.
Other, far more responsible doctors have spoken out in favor of physician-assisted suicide--doctors who personally knew and ministered to their patients before taking that terrifying power into their hands and helping patients end their lives, doctors who gave such power its proper due, arriving at that moment only after slow and careful deliberation, wholly unlike Dr. Kevorkian's quickie-in-a-Volkswagen butchery. Perhaps the most famous of these doctors is Timothy Quill, a practicing doc in New York who challenged that state's ban on physician-assisted suicide in a case ultimately decided by the US Supreme Court; the court ruled 9-0 against Dr. Quill. Even Quill, as forceful an advocate for physician-assisted suicide as could be, found Kevorkian's behavior troubling, saying that he "is very much on the edges of what ordinary doctors do."
I have heard Timothy Quill speak on two occasions and found him an eloquent man whose concerns are ultimately for the health and happiness of his patients. That said, I still believe that physician-assisted suicide is a terrible idea. Ironically, those two lectures bracket a dramatic shift in my opinion on the subject: the first came before I started medical school, when I was strongly in favor of his ideas, while the second came a few years ago, after more than a decade of medical training, by which time my attitude had changed considerably.
Generally, the discussions about physician-assisted suicide revolve around two themes. The first is what bizzyblog refers to as "the euthanasia theme song": having a life that is not worth living. The second deals with the scenario of unbearable and unremitting suffering, which supporters of physician-assisted suicide regard as the ultimate justification for the practice. This is often where the accusations of "doctors playing God" come in--docs are supposedly so invested in keeping people alive that they consider it a personal affront to allow patients to die. (In general, my experience has been the opposite, notwithstanding the rather regrettable final few days of my father's life, in which we attempted in vain for several days to have his life support removed after an episode of sudden cardiac death. Based on what I've seen, it's usually the doctors, and not the families, who see little or no value and much suffering in store for families and patients with terminal illness requiring intubation, PEG tubes and the like, and who often have difficulty explaining to families the benefits of "letting nature take its course.")
It turns out that, not unlike the public misperceptions of Dr. Kevorkian, the picture of frequent, unremitting suffering of the terminally ill is for the most part a fiction. Curiously, over the past 20 or so years attitudes about physician-assisted suicide and euthanasia haven't changed a great deal among the general public or physicians in general (those numbers are different from one another, but stable over time). However, one group in which attitudes have changed significantly is among oncologists, who have had a steep drop in approval for those practices.
Why? It's hard to say with complete certainty, but it's likely because oncologists are more aware of, and tuned into, the multiple ways in which terminally ill patients can remain pain-free and finish their lives with meaning and dignity, to paraphrase the article in the link. A telling statistic: among oncologists, surgical oncologists, who deal with the long-term care of their patients far less often, were twice as likely to support physician-assisted suicide as their medical oncologist colleagues. In other words, the further away one gets from the actual practice of death and dying, the greater the fear of pain and suffering among laypeople and physicians alike, and the corresponding increase in support of physician-assisted suicide.
As for judging whether a life is worth living, that's much more straightforward. Physician's have no business judging the worth of any of their patients' lives. That is playing God.
It is not hard to kill onself in the US: over 30,000 people do it each year, and do it in a multiplicity of ways ranging from relatively peaceful to gruesome. And while there are technically laws on the books against suicide and no Supreme Court recognition of a "right to suicide," the practice is tacitly accepted. Suicides are allowed to be buried with everyone else, and the state does not seize their assets. So given the ease by which people can commit suicide, the debate around physicians being involved in the taking of lives has increasingly for me had an odd ring about it. Why must physicians be present to sanctify this process? It has the feel of approval-seeking, and docs shouldn't be in the business of approving or disapproving anything about a patient's lifestyle, except maybe smoking. Even then: maybe.
Doctors cannot take lives; it's not our job and should never be so. If we administer comfort medications that may hasten death to a suffering patient as a side effect, that is more than acceptable. If doctors withdraw tubes or machines that "artificially" keep patients alive, that's fine as well. But there's a big difference between maintaining a morphine drip and injecting a bolus of potassium chloride into a patient. The former is a drug with legitimate medical uses; the latter is never used under any conditions except to kill. Morphine is an everyday drug in hospices across the US; the potassium bolus was a "medication" unique to Dr. Kevorkian. May there never be another one like him again.
--br
Count me among the ostracizers. As the warm comments from seemingly well-informed readers demonstrate, Kevorkian was widely perceived to be a fierce advocate for patients' rights, a promoter of death with dignity, and the victim of a hypocritical and vindictive profession hellbent on maintaining its Godlike power over patients. His trial, conviction, and imprisonment in 1999 for second-degree murder had the flavor of martyrdom, reinforcing the admiration of his followers and inviting comparisons to various legendary civil-rights activists.
In reality, Kevorkian was none of these things, but rather a creepy zealot obsessed with death who knew nothing about actual patient care. (I am not using the word "creepy" lightly; read on.) Although he was trained at a bona fide medical school and thus was a "doctor" in the general sense of the term, his training and subsequent practice were in pathology, where his work involved autopsies and analysis of human tissues on slides rather than actually taking care of living, breathing souls with joys and fears--making his public persona as "doctor" a bit misleading, as if he were the same as Marcus Welby, M.D. Kevorkian's nickname, "Doctor Death," didn't come from the notoriety he generated in the 1980s and '90s; it was coined by perplexed and amused housestaff during his early days, a wry observation about his peculiar fixation on photographing patients' eyes at the precise moment of death. (Various blogs and websites supportive of Kevorkian state that this was because he wanted the profession to be able to pinpoint that moment so that resuscitation could be performed, or something to that effect. It's utter nonsense: even in the 1950s, which some might consider the Dark Ages by medical standards, there were EKGs, a considerably more precise tool for determining death than staring into people's eyes, which seems positively medieval. Whatever his stated justifications, his "death photography" was pure fetish.) Long before he took up physician-assisted suicide as his cause, he bounced from hospital to hospital, disturbing various medical staffs with his distinctly unconventional preoccupations.
He was praised for his compassion despite the fact that he had not taken care of living patients except during his internship, and had never received training of any kind in treating depression (common enough among the terminally ill), in palliative care, or in any of the diseases he claimed to treat. His choices reflect this poor preparation: among the 130 or more cases in which he was the prescriber of death, several involved people who had no terminal illness and were not suffering, such as Janet Adkins, who had recently been diagnosed with Alzheimer's disease but, aside from mild memory loss, was otherwise in reasonably good health.
Even more disturbing were the reports of the death of Judith Curren, a 43-year-old woman who not only lacked a clear-cut underlying disorder but had reportedly been a victim of domestic violence. These are not the only such cases, but even the inclusion of these two suggests at best a sloppiness in method, and at worst a murderous instinct hidden under the guise of medical concern for suffering. ("How could I have known?" was Kevorkian's retort when confronted with the news of the messy life of the Curren family. Perhaps if his only acquaintance with the Currens had not been through a questionnaire--if he had instead cared for Judith Curren in a legitimate medical practice for several years--such surprises wouldn't have popped up.)
In short, Dr. Kevorkian-the-Caring was a total media fabrication. He was a murderer, and, if anything, he was treated gently by the justice system.
Other, far more responsible doctors have spoken out in favor of physician-assisted suicide--doctors who personally knew and ministered to their patients before taking that terrifying power into their hands and helping patients end their lives, doctors who gave such power its proper due, arriving at that moment only after slow and careful deliberation, wholly unlike Dr. Kevorkian's quickie-in-a-Volkswagen butchery. Perhaps the most famous of these doctors is Timothy Quill, a practicing doc in New York who challenged that state's ban on physician-assisted suicide in a case ultimately decided by the US Supreme Court; the Court ruled 9-0 against Dr. Quill. Even Quill, as forceful an advocate for physician-assisted suicide as could be, found Kevorkian's behavior troubling, saying that he "is very much on the edges of what ordinary doctors do."
I have heard Timothy Quill speak on two occasions and found him an eloquent man whose concerns are ultimately for the health and happiness of his patients. That said, I still believe that physician-assisted suicide is a terrible idea. Ironically, the two lectures bracket a dramatic shift in my own opinion on the subject: the first came before I started medical school, when I was strongly in favor of his ideas; the second came a few years ago, after more than a decade of medical training, by which point my attitude had changed considerably.
Generally, the discussions about physician-assisted suicide revolve around two themes. The first is what bizzyblog refers to as "the euthanasia theme song": having a life that is not worth living. The second deals with the scenario of unbearable and unremitting suffering, which supporters of physician-assisted suicide regard as the ultimate justification for the practice. This is often where the accusations of "doctors playing God" come in--docs are supposedly so invested in keeping people alive that they consider it a personal affront to allow patients to die. (In general, my experience has been the opposite, notwithstanding the rather regrettable final few days of my father's life, in which we attempted in vain for several days to have his life support removed after an episode of sudden cardiac death. Based on what I've seen, it's usually the doctors, not the families, who see little or no value and much suffering in store for patients with terminal illness requiring intubation, PEG tubes, and the like, and who often have difficulty explaining to families the benefits of "letting nature take its course.")
It turns out that, not unlike the public misperceptions of Dr. Kevorkian, the picture of frequent, unremitting suffering among the terminally ill is for the most part a fiction. Curiously, over the past 20 or so years, attitudes about physician-assisted suicide and euthanasia haven't changed a great deal among either the general public or physicians in general (the two groups' numbers differ, but both are stable over time). One group whose attitudes have changed significantly, however, is oncologists, among whom approval of those practices has dropped steeply.
Why? It's hard to say with complete certainty, but it's likely because oncologists are more aware of, and tuned into, the multiple ways in which terminally ill patients can remain pain-free and finish their lives with meaning and dignity, to paraphrase the article in the link. A telling statistic: surgical oncologists, who deal with the long-term care of their patients far less often, were twice as likely to support physician-assisted suicide as their medical oncologist colleagues. In other words, the further one gets from the actual practice of death and dying, the greater the fear of pain and suffering--among laypeople and physicians alike--and the greater the corresponding support for physician-assisted suicide.
As for judging whether a life is worth living, that's much more straightforward: physicians have no business judging the worth of any of their patients' lives. That is playing God.
It is not hard to kill oneself in the US: over 30,000 people do it each year, in a multiplicity of ways ranging from relatively peaceful to gruesome. And while there are technically laws on the books against suicide and no Supreme Court recognition of a "right to suicide," the practice is tacitly accepted. Suicides are allowed to be buried with everyone else, and the state does not seize their assets. So given the ease with which people can commit suicide, the debate around physicians being involved in the taking of lives has increasingly had an odd ring to me. Why must physicians be present to sanctify the process? It has the feel of approval-seeking, and docs shouldn't be in the business of approving or disapproving anything about a patient's lifestyle, except maybe smoking. Even then: maybe.
Doctors cannot take lives; it's not our job and should never be. If we administer comfort medications to a suffering patient that may hasten death as a side effect, that is more than acceptable. If doctors withdraw tubes or machines that "artificially" keep patients alive, that's fine as well. But there's a big difference between maintaining a morphine drip and injecting a bolus of potassium chloride into a patient. The former is a drug with legitimate medical uses; the latter is never used under any conditions except to kill. Morphine is an everyday drug in hospices across the US; the potassium bolus was a "medication" unique to Dr. Kevorkian. May there never be another one like him.
--br
Thursday, May 5, 2011
Racism--The Gift That Keeps On Giving
My philosophy of bedside medicine is founded on trying to be aware of how patients view the world well before I bring that quarter-million-dollar scientific education to bear upon their problems. I've met docs who can form a differential diagnosis of greater length, and in faster time, than I can, but I've also observed that a lot of docs of that ilk don't have a clue about how to use their stellar clinical acumen to explain to patients what they are thinking. This inability frequently leads to all sorts of problems for patients and their families, either because they are anxious and don't understand what is being told to them, or because they don't understand instructions and are afraid to ask out of a desire not to appear stupid. Being simultaneously intimidating and clueless can have lethal consequences, even when a doc is just as smart as House.
Put yourself into their shoes, I tell my med students; think about how they're feeling when you're talking to them, before you start with the technical talk. Imagine, I say to them, how it feels to be lying there, usually half-naked, while an army of White Coats stands at your bedside, looking down at you, speaking in a language that sounds vaguely like English but makes no sense at all. (See this clip here, for instance, from the British TV miniseries The Singing Detective, which illustrates this in simultaneously hilarious and exquisitely painful detail. I mean it--follow that link, team! Not only is it a fantastic, biting satire of academic medical culture, it features a much younger Michael "Dumbledore 2" Gambon as well as Imelda "Dolores Umbridge" Staunton. And though made in the 1980s, the medical language--indeed, the medications!--hasn't changed much.) One thing Billy always does when seeing his hospitalized patients is to sit in a chair at patient's eye level, and if no chair is present, he gets down on his knees to communicate. It's symbolic, but I think it means a lot.
Being exposed and vulnerable is a universal condition of patienthood, but there are other factors that influence the physician-patient interaction as well, and race is one of the biggies. I don't believe that every interaction between black and white Americans has to "devolve" to race by necessity, but I sure as hell think it's something to be aware of when you step into a room as a white doc with an African-American patient. Neither I nor any of my ancestors was ever involved in any overtly racist act, but that doesn't mean I shouldn't at least be cognizant of the fact that African-Americans often are leery of white docs, and not unjustifiably so (more on this in a moment).
Race was on my mind today as I listened to a fascinating lecture about blood transfusions. Most Americans have at least a vague understanding that there are four major blood types (A, B, AB, and O), each of which can be either "Rh positive" or "Rh negative"--thus eight blood types in total. For a blood transfusion to proceed safely, these types must be matched to prevent immune responses. The blood types are distributed across all races, making "universal" transfusion a generally easy process. (Though some readers may recall an episode of M*A*S*H taking the topic of race and blood head-on, when a white GI needing surgery tells the surgeons not to give him "any of that black blood"--no doubt reflecting the attitudes of real people in the '50s, when the scene was set, as well as the early '70s, when the episode was filmed. Plus there's the cultural convention in many Asian communities, especially in Japan, that the ABO blood types correlate with personality; the Japanese are even more keenly aware of their blood types than Americans are of, say, our zodiac signs. So we haven't eliminated this brand of nonsense from humanity just yet, but we're getting there.)
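The matching rules themselves are simple enough to write down. Here's a toy sketch--illustrative only, emphatically not clinical software, and the function name and encoding are my own--of the standard textbook ABO/Rh compatibility rules for red-cell transfusion: a donor unit is acceptable if the donor's ABO antigens are a subset of the recipient's, and the donor is Rh-negative or the recipient Rh-positive.

```python
# Toy sketch of textbook ABO/Rh red-cell compatibility. Not clinical
# software: real cross-matching involves many more antigen systems,
# which is exactly the point of the paragraphs that follow.

ABO_ANTIGENS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}

def compatible(donor: str, recipient: str) -> bool:
    """Check red-cell compatibility, e.g. compatible('O-', 'AB+')."""
    d_abo, d_rh = donor[:-1], donor[-1]
    r_abo, r_rh = recipient[:-1], recipient[-1]
    # Donor red cells must carry no ABO antigen the recipient lacks...
    abo_ok = ABO_ANTIGENS[d_abo] <= ABO_ANTIGENS[r_abo]
    # ...and Rh-positive blood can only go to Rh-positive recipients.
    rh_ok = d_rh == "-" or r_rh == "+"
    return abo_ok and rh_ok

# Running through all eight types recovers the familiar facts:
# O-negative is the universal red-cell donor, AB-positive the
# universal recipient.
types = [abo + rh for abo in ("O", "A", "B", "AB") for rh in ("+", "-")]
assert all(compatible("O-", t) for t in types)
assert all(compatible(t, "AB+") for t in types)
```

The eight-type model in the code is the "sufficient in the majority of cases" system the lecture started from; the rare-blood problem discussed below arises precisely where this simple table stops being enough.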
It turns out that the ABO and Rh+/- system is just the beginning, and the immune response to blood is a good deal more complicated than this. But in the majority of cases the model of eight blood types is sufficient to save people with the magic of transfusions. (This assumes that people have ready access to blood, which they often don't--for instance, in many northern Mexican communities, as described here--but in the US that's almost never a problem.) The exceptions, where patients require a host of additional cross-typing, are frequently found among patients of African ancestry, in particular among those who suffer from that quintessentially African disease, sickle cell anemia. Whether this is due to inherent genetic variation in Africans, or is a consequence of sickle cell disease itself, is not fully clear to me, but from a clinician's standpoint it hardly makes any difference. Patients whose requirements go beyond the eight common types need to be cross-matched for special blood, sometimes very rare blood indeed. One patient under discussion in the lecture today had essentially one person in the entire world known to have blood she could accept--and since that donor (a close relative) was still a child, not so much a help.
Anyway, the likelihood that you'll find a match for that special blood increases if you have a large pool of donors who more closely resemble you genetically--meaning, from your ethnic or racial group. So Africans and African-Americans--who constitute a major if not the major group of people with these rare reactions to blood transfusion--are the ones most in need of blood donors of African ancestry. And there's the rub, because African-Americans are far less likely to donate blood, doing so at only 25 to 50 percent of the rate among whites. This is very much unlike the situation with thalassemias, a group of transfusion-dependent diseases that sometimes afflict people of African descent but are more often seen in Caucasians, who have a much larger pool of donors from which adequate matches can be found--hence far fewer transfusion crises and dilemmas.
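The donor-pool point is, at bottom, arithmetic: if a compatible antigen profile occurs at frequency p in a donor population, the chance of finding at least one match among n donors is 1 - (1 - p)^n. A quick sketch--with frequencies invented purely for illustration, not real antigen data--shows why a rare profile needs a vastly larger, or genetically closer, pool:

```python
# Back-of-the-envelope arithmetic: chance of finding at least one
# compatible donor in a pool of n, given a compatible antigen profile
# occurring at frequency p. Frequencies below are invented for
# illustration only.

def p_at_least_one_match(p: float, n: int) -> float:
    """Probability that a pool of n independent donors holds >= 1 match."""
    return 1 - (1 - p) ** n

# A common profile (5%) turns up even in a modest pool; a rare
# profile (0.05%) is a long shot unless the pool is very large.
for p in (0.05, 0.0005):
    for n in (100, 10_000):
        print(f"profile frequency {p:.2%}, pool of {n:>6}: "
              f"{p_at_least_one_match(p, n):.1%} chance of a match")
```

Since matching profiles cluster within ancestry groups, a donor pool drawn from the patient's own ethnic group effectively raises p, which is exactly why low African-American donation rates hit sickle cell patients hardest.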
Why is this? Well, if you don't think that American history, filled with its pernicious racism, grasps with its fetid hands at our modern system of blood donation, then you're missing just as much as the brilliant-but-clueless docs I've described above. Leave slavery and all of its ill consequences aside for a moment--just consider the treatment of African-Americans at the hands of doctors, some of them employed by the federal government of the United States, as patients were deliberately left uncured of syphilis or bombarded with radiation without their awareness or consent. Or the story of Henrietta Lacks, whose ultimately fatal cancer cells became the first human cells cultured outside the body--the workhorse cells for biomedical scientists to this day, a major source of commerce in the scientific world, worth billions of dollars--while her descendants struggle to afford health insurance.
Think about that while you swallow the statistic on the poor rates of blood donation among African Americans when your next sickler needs a transfusion. Does blood donation cause syphilis? No, of course not--but would you trust a system that had treated your brothers and sisters like this for generations? As the Tuskegee Syphilis Study Legacy Committee's 1996 report put it: "the [study] continues to cast its long shadow on the contemporary relationship between African Americans and the biomedical community."
Indeed. It has not only done that; it has also cruelly deprived some members of that community of the lifeblood they so desperately need. This is why doctors need to be as aware of history as they are of science.
--br
Wednesday, April 27, 2011
Government and Medicine: Legislative and Judicial Follies
I try--I repeat, I try--to construct eloquent blog posts as often as I write them, taking care to choose my words as I tiptoe through the minefields of cyberspastic hyperbole and vitriol. That said, I could do little more than utter "blech" at this news piece reporting that the Massachusetts State House has voted overwhelmingly, as part of an "economic development" bill, to repeal a ban on gift-giving from pharmaceutical companies to physicians that had passed in 2008. For those who have been trapped in solid ice since, say, the mid-1930s and the heyday of the Henry Cabot Lodges and William Morgan Butlers: in Massachusetts, the House belongs to the Democratic party. So how does a measure that seems to once again encourage the wink-wink, nudge-nudge relationship between docs and the pill-pushers--particularly when the same legislative body clamps down on labor's bargaining rights in an effort to rein in spending--get passed?
Amazingly, the answer appears to lie in...the dining & entertainment lobby. According to Garrett Bradley (D) of Hingham, the sponsor of the measure, the ban stifles business, hurting convention centers and "restaurants where companies typically hosted physician events and dinners." (NB--the quote is from the article, not a direct quote from Mr. Bradley, though I doubt he'd quibble if the line were attributed to him.) Never mind that this sector of the Massachusetts economy appears to be doing reasonably well, with an increase in overall revenue compared to last year; the measure's backers appear to be saying that a payola-style arrangement is perfectly fine, as long as the palms continue to be greased and the filet mignon gets served with the Cabernet.
Blogger-doc Dan Carlat has already staked out the Swiftian rhetorical territory with a delightful skewering of the follies, leaving me and others to play the straight guy. So here goes my best effort: no self-respecting physician compromises the health of his or her patients by being manipulated by claptrap. There is ample evidence--lucidly but luridly described in such books as The Truth About Drug Companies and White Coat, Black Hat--that gift-giving induces an attitude of reciprocation regardless of the actual quality of the product, and that drug reps know this and seize on the vanity of physicians to play them for dupes. Drug companies, however beneficial their societal effects may (or may not always) be, have a responsibility to shareholders, whose primary or sole interest is the generation of wealth. To anyone in any state of mind other than abject denial, that is an objective in direct conflict with the care of patients. Thus, doctors cannot accept gifts of any kind from those whose job it is to sell drugs.
A related theme is playing out in the judicial branch of government, as the US Supreme Court hears arguments on a Vermont law that bars the commercial use of physician prescription patterns. Based on the early returns, and noting previous Court decisions that take a fairly broad view of "free-speech" rights (at least if you are a corporation), it appears that the law is destined to be overturned. I can't claim to be a legal expert and thus won't even begin to take a crack at the wrangling over the First Amendment, other than to note a certain puzzlement at what passes for "free speech" these days among the Court's "strict constructionist" wing. Did the Founding Fathers really have the selling of a doctor's prescribing habits in mind when crafting the First Amendment? I'm thinking not, but I await the peals of derision from my philosophico-legal foils (and loyal readers!) such as Ted Frank, a conservative maverick (I'm not sure "conservative" is the right word for him; I'm certain "maverick" is) who ranks as the second smartest person I have ever had the pleasure of knowing. (And in case anyone might misunderstand, I'm not implying that I occupy the top spot; I doubt I crack even the top 75, and I don't have that many friends.)
Regardless of the legal principles at stake, Chief Justice John Roberts made his contribution to the follies by appearing to frame this as an argument about "restricting the flow of information to doctors," as silly a line as can be uttered in so august a house as SCOTUS. How would withholding prescription data from drug companies prevent them from making the pitch for their drugs? How does this even remotely "restrict the flow of information"? The naivete exhibited by the Chief Justice is pretty remarkable (and shared, unsurprisingly, by Justices Scalia and Kennedy). The "it's all data, it's all protected by the First Amendment" argument seems to work only when big business benefits, not the other way around. The recipe for Coke is just data, too; somehow I don't think the Coca-Cola Corporation considers that something your local Joe can just barge in and demand. Pray tell, what's the difference?
A hat-tip to Carey Goldberg at WBUR's Common Health Blog, especially for her humoring me in my response of "blech"!
--br
Amazingly, the answer appears to lie in...the dining & entertainment lobby. According to Garrett Bradley (D) of Hingham, the sponsor of the measure, the ban stifles business, hurting convention centers and "restaurants where companies typically hosted physician events and dinners." (NB--the quote is from the article, not a direct quote from Mr. Bradley, though I doubt he'd quibble if the line were attributed to him.) Never mind that this sector of the Massachusetts economy appears to be doing reasonably well, with an increase in overall revenue compared to last year; the measure's backers appear to be saying that it's perfectly fine if a payola-style arrangement is in place, as long as the palms continue to be greased and the filet mignon gets served with the Cabernet.
Blogger-doc Dan Carlat has already staked out the Swiftian rhetorical territory with a delightful skewering of the follies, leaving me and others to play the straight guy. So here goes my best effort: no self-respecting physician compromises the health of their patients by allowing themselves to be manipulated by claptrap. There is ample evidence that gift-giving induces an attitude of reciprocation regardless of the actual quality of the product--lucidly-but-luridly described in such books as The Truth About Drug Companies and White Coat, Black Hat--and that drug reps know this and seize on the vanity of physicians to play them for dupes. Drug companies, however beneficial their societal effects may (or may not always) be, have a responsibility to shareholders, whose primary or sole interest is the generation of wealth. To anyone in any state of mind other than abject denial, that is a primary objective in direct conflict with the care of patients. Thus, doctors cannot accept gifts of any kind from those whose job it is to sell drugs.
A related theme is being played out in the judicial branch of government, as the US Supreme Court is hearing arguments on a Vermont law that bars the commercial use of physician prescription patterns. Based on the early returns, and noting previous Court decisions that take a fairly broad view of "free-speech" rights (at least if you are a corporation), it appears that the law is destined to be overturned. I can't claim to be a legal expert and thus won't even begin to take a crack at the wrangling over the First Amendment, other than to note a certain puzzlement at what passes for "free speech" these days among the Court's "strict constructionist" wing. Did the Founding Fathers really have the selling of a doctor's prescription habits in mind when crafting the First Amendment? I'm thinking not, but I await the peals of derision from my philosophico-legal foils (and loyal readers!) such as Ted Frank, a conservative maverick (I'm not sure if "conservative" is the right word for him; I'm certain that "maverick" is) who ranks as the second smartest person I have ever had the pleasure of knowing in my life. (And in case anyone might misunderstand, I'm not implying that I occupy the top spot; I doubt I crack even the top 75, and I don't have that many friends.)
Regardless of the legal principles at stake, Chief Justice John Roberts made his contribution to the follies by appearing to frame this as an argument about "restricting the flow of information to doctors," as silly a line as can be uttered in such an august house as SCOTUS. How would withholding prescription info from drug companies prevent them from making the pitch for their drugs? How does this even remotely "restrict the flow of information"? The naivete exhibited by the Chief Justice is pretty remarkable (and shared, without surprise, by Justices Scalia and Kennedy). The "it's all data, it's all protected by the First Amendment" argument seems to work only when big business benefits, not the other way around. The recipe for Coke is just data, too; somehow I don't think the Coca-Cola Corporation considers that to be something your local Joe can just barge on in and demand. Pray tell, what's the difference?
A hat-tip to Carey Goldberg at WBUR's Common Health Blog, especially for her humoring me in my response of "blech"!
--br
Wednesday, March 23, 2011
Fukushima Daiichi, and the Perception of Radiation Risk
Evolutionarily speaking, we are as a species hardwired to analyze risk based on information that's directly in front of us--immediately accessible to our five senses. We're designed not to trust food that smells funny, to instantly calculate how far away we should stay from large cats capable of having us for a snack, and to do a host of other things that were very useful for eking out a living in Olduvai Gorge.
But we live in the 21st century, and nowadays our ability to perceive and estimate risk is hampered by the fact that many of today's risks are abstract, and require a reasonably sophisticated understanding of statistics. Take, for example, a recent discussion found in the Paper Of Record about income distribution in the United States. True--it's not really a round table about "risk" per se, unless you consider radically unequal wealth distribution to be a risk to democracy, as Supreme Court Justice Louis Brandeis did when he said that "we can have democracy in this country, or we can have great wealth concentrated in the hands of a few, but we can't have both." Still, the NY Times roundtable was remarkable in that all of the contributors, whether approaching the issue from a left or right viewpoint, agreed that most Americans had vastly underestimated how much wealth is held by relatively few. In particular, a study by Michael Norton and Dan Ariely found that not only do Americans believe wealth distribution to be significantly more equitable than it actually is, but they would prefer it to be even more equitable than what they (wrongly) perceive.
If this isn't a classic example of what George W. Bush would call "misunderestimation" then it's not clear what is, and moreover, it highlights the difficulties people have in making accurate estimates about things like the distribution of wealth in a hugely complex society: the information simply cannot be found by opening your eyes and looking around. In medicine, we see this all the time: people are often terrified of exotic diseases that pose little threat to them, while remaining utterly blithe about the daily assaults on their bodies--frequently self-inflicted--that are much more likely to send them six feet under. To wit: drinking, smoking, eating poorly and not exercising.
The recent events at the Fukushima Daiichi nuclear power plant have been a case study in this process of risk assessment, and not altogether surprisingly, we haven't done well collectively in harmonizing our level of panic to the actual threat that the reactors pose. Despite a good number of depressing news stories, some cataloging evasive action by non-Japanese governments, it is far from clear how huge an impact the nuclear accident is going to have. While it is already comparable to the Three Mile Island accident in 1979, and is not (yet) as catastrophic as the Chernobyl accident of 1986, the question still remains: just how dangerous is it? Though the story there is far from over, with events taking dramatic swings over short periods, the short answer is something like this: dangerous, but not as dangerous as you think. Not nearly as dangerous as you think.
That point is neatly illustrated, both in sound and visual format, in this news story from Adam Ragusea of WBUR (Boston), and is described as the "Dread-to-Risk ratio" by Andrew Revkin of the Dot Earth blog at NYT. Both pieces have the same useful graphic to give you some sense of the relative levels of radiation that we're talking about. Live within 50 miles of a nuclear power plant for one full year? That will give you about 0.1 microSieverts (uSv) of radiation. (What a Sievert is, is a longer discussion, but we'll just shorthand it here and say that it's some relative value of radiation, and that the higher the number, the more dangerous it gets.) One flight from New York to LA buys you about 400 times that amount (40 uSv). That's not even the round trip! A standard chest x-ray, meanwhile, is worth about 20 uSv. But a mammogram is a whopper, clocking in at 3 milliSieverts--thus about 150 "standard" x-rays and just under forty roundtrip flights from NY to LA. A CT scan can be worth almost twice the amount of the mammogram (5.8 mSv).
(In case I've lost some people on this micro/milli distinction, you need 1000 "micros" to make 1 "milli." I'm going to flip back and forth but will point it out when I do.)
So how do these numbers stack up to the nuclear disasters? If you lived within 10 miles of the Three Mile Island plant during the accident and didn't make a run for it, the total dose of radiation you received was 80 microSieverts--far less than one mammogram. By contrast, one area near the Fukushima plant recorded a total dose in one day of 3.6 milliSieverts: less than a CT but more than a mammogram, though of course we're only talking about one day's worth of radiation. With Chernobyl, the radiation levels fluctuated wildly both in time and place so making a general statement about the radiation is essentially impossible, but had you been moved by some weird spirit to take a stroll on the grounds just last year, about 25 years after the accident, you would have gotten two mammograms' worth of radiation for your troubles: 6 milliSieverts. I won't reproduce the pic here out of respect for copyright but highly recommend it to anyone with the time; Ragusea of WBUR translates this into a tone equivalent, and the radiation from TMI is a blip, while the sound for "mammogram" is substantially longer.
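For readers who want to double-check the arithmetic, the comparisons above boil down to simple unit conversion; here's a quick back-of-the-envelope sketch in Python (the dose values are the ones quoted in this post, and the dictionary labels are just my shorthand):

```python
# Back-of-the-envelope check of the dose comparisons above.
# All figures are the ones quoted in the post, converted to
# microsieverts (uSv); 1 millisievert (mSv) = 1000 uSv.
DOSES_USV = {
    "year living within 50 mi of a nuclear plant": 0.1,
    "NY-to-LA flight (one way)": 40,
    "chest x-ray": 20,
    "mammogram": 3 * 1000,                                # 3 mSv
    "CT scan": 5.8 * 1000,                                # 5.8 mSv
    "Three Mile Island (total, within 10 mi)": 80,
    "Fukushima area (one day)": 3.6 * 1000,               # 3.6 mSv
    "Chernobyl grounds (visit ~25 yrs later)": 6 * 1000,  # 6 mSv
}

# Express each dose in units of chest x-rays and mammograms.
for name, dose in DOSES_USV.items():
    xrays = dose / DOSES_USV["chest x-ray"]
    mammos = dose / DOSES_USV["mammogram"]
    print(f"{name}: {dose:g} uSv = {xrays:g} x-rays = {mammos:g} mammograms")

# The post's specific claims:
assert DOSES_USV["mammogram"] / DOSES_USV["chest x-ray"] == 150  # "about 150 x-rays"
round_trip = 2 * DOSES_USV["NY-to-LA flight (one way)"]
assert DOSES_USV["mammogram"] / round_trip == 37.5               # "just under forty"
```

Run it and you'll see why the TMI figure barely registers: the total dose there was a fraction of a single mammogram.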
I draw two conclusions from all of this. First, while the troubles at Fukushima are by no means trivial, and for that matter aren't yet finished, I think it's a bit premature to write the obituary for nuclear power. In terms of accidents, it's not nearly as dangerous as most people suppose. The problem with nukes, in the TMI and Chernobyl age as well as today, is what to do with the radioactive waste generated by the plants rather than the risks they pose vis-à-vis accidents. Second is that we should try to minimize mammograms! Do only the amount that will help save lives, and not one more after that. This was the logic behind the recently revised US Preventive Services Task Force recommendations, which called for no routine mammograms for women under 50, and biennial ones for those 50 and over. Despite this entirely sensible approach--based on good research with a careful eye toward the risk/benefit ratio of the radiation, it should be noted--there were howls of indignation from people purportedly speaking for women, accusing the very bureaucrats who issued the new recs of being hostile to women, or something like that.
--br
(NB--the first draft of this version, which snuck out prematurely, posted some incorrect calculations with respect to x-rays, mammograms, and NY-LA flights. The corrected version is now present.)
Monday, March 21, 2011
The "Back-Door CD4" and Its Ethics, or Lack Thereof
Every year, virtually every first-year medical student gets introduced to medical ethics by learning about the quirky religious beliefs of a small Christian sect: Jehovah’s Witnesses. Based on their interpretation of a passage from Leviticus, Jehovah’s Witnesses consider blood transfusion to be against God’s word, and will thus not accept them, even if their lives depend on it. Medical students are introduced to this situation as the classic example of how an American physician is supposed to behave: it doesn’t matter what your personal beliefs are with respect to the Witnesses, only that you accept their right to their beliefs and, if need be, protect them, defend them, and do whatever else is necessary to let them decide what to do with their bodies. We call that concept informed consent, and it’s the basis of most serious discussions about life-saving medical care in this country. (Of course, I'm talking about adults here; with kids, the situation gets stickier, and with adolescents, stickier still.)
In theory, informed consent is the process by which patients take the reins and make all the genuinely important medical decisions for themselves, while docs serve as something like advisors. Needless to say I’m not talking about the minutiae of medical decision making, like whether to switch a patient from amlodipine to atenolol, but rather the stuff that most patients worry about when they (or their loved ones) walk through the hospital doors: do I want to be resuscitated? do I want to have “everything” done for me? can I refuse some procedures the doctor recommends? These are precisely the kinds of questions where we, as physicians, have an obligation to help patients and their families figure out what they want—that is, not impose on them what we want—and guide them as best we can, even if, indeed, especially if they make decisions that we find shortsighted or wrongheaded or both.
That’s the theory. Fortunately, in terms of practice, most of the time I think we physicians do a pretty admirable job of supporting our patients. You would be hard pressed to find a doctor who thinks it’s acceptable to override the beliefs of a Jehovah’s Witness and force-feed them blood to save their lives. That said, while the core of that philosophy is wholly adopted by the profession, precisely where the boundaries lie can be contentious. And last week I was again reminded that we don’t all agree on what constitutes honoring our patients’ wishes.
It happened when I was hearing about a case of a patient in the Intensive Care Unit fit for an episode of House, M.D. The patient was older without being elderly, and had a respiratory illness that had defied diagnosis despite the best efforts of cardiologists, pulmonologists, infectious disease specialists, and a few other medical professionals to boot. The ID docs, though unsure of what was going on, thought that this was most likely some infection seen in the setting of underlying AIDS, and I, being an ID guy, shared their point of view while hearing the details. Like my colleagues, I wanted to know the answer to one question: what was the result of the HIV test? But the patient had adamantly refused HIV testing. And that’s within his right: a doctor can order most blood tests without having to discuss them with a patient, but an HIV test—just like a blood transfusion—is special, and requires a signed form saying that the patient agrees to it. In this case, where a diagnosis of HIV infection might be helpful (more on this later), such a refusal can be maddening. But that is how the rules are set up right now, and once the patient says no, then that’s all she wrote.
However, in this case, that wasn’t all she wrote. HIV is a virus that infects, and destroys, a special kind of white blood cell called a CD4 cell. In general, the further your CD4 cell count drops, the more you are at risk of being infected by the weird organisms that are the sine qua non of AIDS, things like toxoplasmosis, cryptococcosis, Penicillium marneffei and a host of other parasites, bacteria and fungi that people with healthy immune systems never develop. A person with a healthy immune system typically has a CD4 count that runs from 500 to 1500, give or take; the definition of AIDS is someone with HIV infection and a CD4 count less than 200.
One of the loopholes of informed consent for HIV testing is that it does not cover CD4 counts.
You can see where this is going. Some doc tried to do an end run around the refusal and checked the patient’s CD4 count. Surprise! It was low, less than 200, although the pattern of the CD4 cells didn’t really look like AIDS (skipping some technical detail here). So then what do you do? You have gotten no closer to the diagnosis, and you have put yourself into the uncomfortable situation where you may be tempted to take action. Sometimes the treatment for some of those weird “opportunistic infections” in AIDS is to just give a person medications for HIV, but you wouldn’t give those meds to a patient without a diagnosis of HIV. In this situation, that wasn’t the case, but what if it were? Would you throw antiretrovirals at the person because his CD4 count was suggestive of HIV—even though the patient unambiguously refused the test? To me, this smells exactly like giving blood to a Jehovah’s Witness: doing what I call a “Back-Door CD4” might seem clever, but it pretty obviously violates the spirit of the patient’s wishes, if not the letter.
Some would argue that the very need to consent people for HIV is outdated. HIV consent was established at a time when the diagnosis was severely stigmatizing, and “positives” could lose jobs, lose insurance coverage, and in general face ostracism from their communities; it also happened at a time when treatment wasn’t exactly effective. Today, the latter is definitely no longer true, and as I tell my HIV patients all the time, there is no reason to suppose that, should they take their meds every day faithfully, they shouldn't live as long as anyone else. Whether the former is still true I am a touch skeptical, but I acknowledge that the level of stigma nowhere near approaches where it was twenty years ago. Thus, proponents of the Back-Door CD4 would say that the time for HIV consent has come and gone. Way back in 2004 a doc named LA Jansen argued in the Journal of Medical Ethics that the Back-Door CD4 was a form of “conscientious subversion,” something akin to conscientious objection, where a physician acknowledged the existing legal landscape but did his or her own thing based on their personal ethics.
Jansen dresses the term up in calling it “conscientious subversion”; I prefer simpler language and think of it as a bad idea. By my compass, patients have the right to refuse tests, medications, and procedures, and any attempt at thwarting those desires not only defeats the entire point of informed consent, it belies the idea that we are advocates for our patients. Don’t get cute, is what I’d say: we are better than most professions at standing by those we serve. Let’s not mess things up by thinking we know better than they do.
—br
PS—the Jehovah’s Witness example has, for me, not been entirely academic. Twice I have cared for Witnesses who were in situations where transfusion was definitely worth considering, and in one of the cases it was pretty clearly indicated. Like most of my profession, it never entered my mind to try to push the idea on them once I learned of their religious beliefs. For more on Jehovah’s Witnesses and their philosophy behind their refusal of blood transfusions, see here, here and/or here.
Thursday, March 10, 2011
Profile in Courage, Writ Small, But Still
Today was Grand Rounds at my academic medical center. The subject was diabetes and how we--"we" being the medical system as opposed to "we" the individual doctors--can improve outcomes in this disease, which is a killer, and which we (pick whichever "we" you like) stink at treating successfully. The view of the speakers, with which I'm sympathetic, is that we need fewer gee-whiz bioscience breakthroughs than we do a comprehensive, systematic plan for identifying and following affected patients and ensuring they stay on their meds. None of their suggestions was particularly sexy, and none involved fancy technology beyond a personal computer. I was persuaded by their assertion that sometimes the simple but labor-intensive solutions in medicine are the ones with the best chance of success.
Grand Rounds at my hospital always begins with a physician "presenting a case." Typically this involves a resident summarizing a bare-bones medical history of some patient who has some affliction related to the topic being discussed: gout, Wegener's granulomatosis, multiple myeloma, sepsis, a heart attack, you name it. Often the speaker will make some remark about the case in relation to his or her talk, and then it's on with the show. This kind of case presentation is de rigueur among physicians, and after one has lived & breathed medicine for long enough (i.e. survived the third year of medical school), one becomes so acclimated to the rhetorical form that one can get fairly desensitized to the reality that it's actual human beings that are being spoken of.
I don't mean to imply that physicians speak about patients in a de-humanizing way when a case is presented--that's never acceptable--only that the process of summary and discussion of history, physical exam, and laboratory findings in the dry, sterile, & detached form of the "case presentation" is second-nature to physicians, and must be creepy as hell for patients if they have to listen to themselves being discussed. Sometimes I try to teach residents and students at the bedside in the old-fashioned manner, but I always make sure to alert patients that such feelings might overtake them as I "do some doctor-talk with my colleagues." I do everything I can think of to make that moment as comfortable as possible for patients, but ultimately my suspicion is that all my efforts, at best, help blunt the sense of creepiness rather than remove it altogether.
So you can imagine what it must have felt like for the gal today to have her case of diabetes discussed in an amphitheater with well over 100 physicians in attendance, watching the medical facts of her life, neatly summarized into three PowerPoint slides, as she sat in the fifth row. I've been part of this community for more than ten years now and I still get nervous when facing the White Coat Army en banc; I can only imagine how intimidating that must have felt for her. Then, at the end of the presentation, the presenter noted to the crowd that the patient was in attendance, and asked her if she had any thoughts to add. With what I would describe as remarkable poise, she eloquently explained some of the life circumstances that made her choose treatment options that, without that critical context, would puzzle and frustrate physicians.
She not only did this, but managed to deliver an observation with a small barb attached to the end of it: "I see that many of you here are eating really nice lunches here today, really healthy food. Well, my family has to live month-to-month because of our income, and I can tell you that a pound of pasta and some tomato sauce goes a lot further than some other food." It was a complex observation, but the sheer nerve & determination it took to march into what could very well have felt like a Lion's Den, and deliver that speech with such clarity, was quite a thing to watch. (Disclosure: lunches are not sponsored by anyone at our medical center. Mostly this woman was referring to tasty-but-modestly-sized deli sandwiches using fresh ingredients and a fruit salad.)
It's very unusual to invite patients to hear their own cases discussed in this kind of format, weirder still to give them a platform for a few minutes to speak about their challenges. Certainly in this setting it was a brilliant idea to include such a patient in the dialogue: my school gets an "A" not merely for effort but execution as well! Though at the end of the day, when the speaker concluded the lecture and the audience gave its polite applause per the cultural conventions of Grand Rounds, no one thought to give a special thanks for this woman. On that count, I think the organizers earned a D-minus.
Wednesday, March 2, 2011
BRB Link Dump
My spiritual and theological leanings are probably just enough to drive everyone concerned totally nuts: I am intellectually atheist, though functionally Jewish, plus I'm fond of various other religions (or at least certain aspects of them). We have a new Rabbi at our Synagogue and while he appears to be a very charismatic man, my own religious leanings are such that I do not look to him for spiritual leadership in any capacity, and I remain a member partly because I like going to synagogue, but mostly because I really like peace between me and my wife.
The point of this rambling being that although I long ago decided to follow my own path and look toward no other man or woman as my spiritual leader, if I had to choose a person, I quite possibly could have chosen Peter Gomes, whose life ended just a little too soon for my tastes earlier this week. Gomes was about the most polar opposite person you could pick for me to follow: he was African-American; I was white. He was Christian; I, an agnostic Jew. He was gay; I, not so much, thanks, though as Jerry Seinfeld noted, not that there's anything wrong with that. He was, for most of his life, a Republican, and I have mostly not been a Democrat because I regarded them as too far to the right. He was something of a dandy with something of a pompous manner of speaking at the most Establishment university in the United States; I am a well-educated though frequently unspeakably crude dude who went to Abbie Hoffman's school and often sneers at The Establishment. On the surface, thus, not my kind of guy.
But once you peel away his formal and sometimes antiquated mannerisms and really listen to Peter Gomes, there is naught but beauty, truth, and light. Here is a brief comment on gay marriage, while here a longer talk with Charlie Rose. He opens the conversation with Rose with a line that elegantly encapsulates why I find him so admirable: "I like the notion that there is much yet to be revealed about the Christian faith; it's not all over yet. It's not a complete story, and we're moving into it. There is much yet to be revealed, and I think our best theological days are ahead of us." If that ain't a bare-bones summary of the philosophy of the great Talmudic masters (that is, the part of the Talmudic masters that I find worthy of attention), then I don't know what is. I have not read any of his books, and my acquaintance with him is largely through talks & other coverage he received in that peculiar Harvardo-centric fishbowl of Boston media (you could catch his sermons on Sunday mornings on WHRB, the Harvard radio station). It may be a tough pill for me to swallow to read a book about Jesus, but in the coming months I may peruse one of his books to stay connected with this eminently decent and astonishingly eloquent man.
More immediate concerns that nobody else in the vicinity of Harvard Yard prematurely join the Reverend Gomes are very much on the minds of Massachusetts public health officials as they scramble to contain a measles outbreak inflicted on the city by an unvaccinated French woman working for the French consulate downtown. What a mess: the super-contagious virus may have spread to a professor at UMass Boston, and thus his students are bearing the brunt of some public health measures, but fortunately seem not to be too bothered by the whole fuss. Though make no mistake, a fuss this is: measles spreads like wildfire and--while not overwhelmingly lethal by Andromeda-strain standards--kills simply by the fact that so many can become infected so quickly. Even a low mortality rate of, say, three percent can be a lot of bodies if tens of thousands become infected. And while the vaccine for measles (the "MMR") is good, it's not perfect, so even vaccinated people are at risk of infection, especially if they haven't been vaccinated in decades. As I noted in a previous entry: this virus is a killer. How this gal got into the US and was allowed to work without having a documented MMR is not fully clear to me, but many are paying the price for her folly.
--br