Saturday, May 21, 2005
There Are Some Maps We Don't Want To Be On
The conservative Detroit News has reported on Bush's recent commencement address at Calvin College (that's a Technorati tag). mlive.com has printed the text of his address. The reason for the local focus is that Calvin College (Wikipedia link, which does link to something) is located in Grand Rapids, Michigan.
Previously, Calvin College's only claim to fame was their school scarf, which looks an awful lot like the one that Harry Potter wears. Apparently, there was quite a run on their scarves and neckties when it was noted that their colors are the same as Gryffindor's.
Mr. Bush made some laudable comments during his speech:
First, we must understand that the character of our citizens is essential to society. In a free and compassionate society, the public good depends on private character. [...]
Second, we must understand the importance of keeping power close to the people. [...]
Finally, we must understand that it is by becoming active in our communities that we move beyond our narrow interests. In today's complex world, there are a lot of things that pull us apart. We need to support and encourage the institutions and pursuits that bring us together. [...]
Of course, he would have to be a Dumbledore-class wizard to make us believe that he actually means what he says.
Before the commencement, the students met for a conference with Jim Wallis, the author of God's Politics: Why the Right Gets It Wrong and the Left Doesn't Get It, and the editor of Sojourners magazine. The student newspaper, Chimes, mentioned this:
Student Activities director Ken Heffner opened the discussion with the precaution that it is not a protest-organizing meeting but a chance to “converse about a historical event.”
So why did the student activities director specify that it was not a protest-organizing meeting? In a previous issue, Chimes reported that their Provost considered an invitation to Bush, based in part on Bush's performance at a prior speech, given at Concordia University:
“He did not in my estimation dishonor the occasion,” said Carpenter.
Does that sound like faint praise, or what? In an act of wizardly premonition, the Provost commented on the reception that Bush might receive:
In response to rumors of possible protests, Carpenter responds that although the event will unavoidably have political dimensions to it, it is “important to be good hosts and to show the personal and institutional maturity of being able to extend hospitality and a civil audience to someone whom we may disagree with.”
Maybe those scarves and neckties endow one with the ability to predict the future. According to the Detroit News after-action report, one third of the faculty, plus another 40 staff members, signed an open letter to Bush:
While welcoming the president, the letter delivers a carefully worded critique of administration policies from a Christian viewpoint. It calls the Iraq war "unjust and unjustified," expresses dismay at policies that "favor the wealthy ... and burden the poor," challenges policies of intolerance toward dissent, and criticizes environmental policies that are at odds with being "caretakers of God's good creation."
The Detroit Free Press reported that the letter was published as an advertisement in the local paper, at a cost of $2,600. They inform us also that 800 students, faculty, and staff published another open letter, taking a full page, at a cost of more than $9,500.
The letter signers view the occasion of the president's speech as a teachable moment.
"People have been saying that the president's visit will put us on the map. But there are some maps we don't want to be on," says David Crump, a Calvin professor of religion who helped draft the letter. [...]
The letter is one way to register the fact that even in the heart of Christian America, religion does not dictate politics. It reminds Americans that even at a conservative Christian school, where religious values are paramount, people have different social, political and cultural views.
It's a way, the professors say, to counter stereotypical thinking about Christian institutions.
They are insistent on a tradition of liberal thought, grounded in religious belief, that suddenly feels positively 19th century.
I could not find the text of the students' ad, but the faculty ad was posted at Daily Kos:
An Open Letter to the President of the United States of America, George W. Bush
On May 21, 2005, you will give the commencement address at Calvin College. We, the undersigned, respect your office, and we join the college in welcoming you to our campus. Like you, we recognize the importance of religious commitment in American political life. We seek open and honest dialogue about the Christian faith and how it is best expressed in the political sphere. While recognizing God as sovereign over individuals and institutions alike, we understand that no single political position should be identified with God's will, and we are conscious that this applies to our own views as well as those of others. At the same time we see conflicts between our understanding of what Christians are called to do and many of the policies of your administration.
As Christians we are called to be peacemakers and to initiate war only as a last resort. We believe your administration has launched an unjust and unjustified war in Iraq.
As Christians we are called to lift up the hungry and impoverished. We believe your administration has taken actions that favor the wealthy of our society and burden the poor.
As Christians we are called to actions characterized by love, gentleness, and concern for the most vulnerable among us. We believe your administration has fostered intolerance and divisiveness and has often failed to listen to those with whom it disagrees.
As Christians we are called to be caretakers of God's good creation. We believe your environmental policies have harmed creation and have not promoted long-term stewardship of our natural environment.
Our passion for these matters arises out of the Christian faith that we share with you. We ask you, Mr. President, to re-examine your policies in light of our God-given duty to pursue justice with mercy, and we pray for wisdom for you and all world leaders.
Concerned faculty, staff, and emeriti of Calvin College
In contrast, other students set up a website that collected more than 500 student signatures in support of Bush's visit. An article in Chimes described the controversy:
At the same time others, including Bruce Berglund, assistant professor of history, do not think political figures are appropriate for the graduation setting.
“The appearance of a politician at a commencement politicizes that event, even if that politician steers clear of explicitly political topics,” he said.
Indeed, some members of the Calvin community have expressed concern that the President’s speech may be overtly political, something they believe would be inappropriate and possibly detract from the true meaning of the occasion, which is to honor graduating students.
“If a political figure does speak at an openly non-political venue, such as commencement, the [Calvin] administration should stress that politics should not be included,” Venhuizen said.
Beyond being divisive, a political speech may also serve to further align Calvin College as an institution, and Christians in general, with the right side of the political aisle, something Berglund finds alarming.
“I find [Bush’s] visit unfortunate in that it will perpetuate the idea that committed Christians are, ipso facto, committed supporters of the Republican party,” he said.
More information on the event is blogged here, here, and here. A Google Group discussion is here. The latest post on the discussion board went up at 8:36 PM today, just after the commencement address:
Bush and the Anti-Christ
Laura B.
While the answer isn't clear for the time being, Bush has left a strong impression that he is indeed an anti-Christ. He has not only marred the good image of Christianity, but he's also made it miserably difficult for missionaries to do their jobs of spreading the Gospel. [...]
With Bush being the world-class moron, it would be so perfect for Karl Rove and Dick Cheney to steer the country down the road of total destruction. By allowing the major corporations like Halliburton to rob the hard-working Americans, the Bush administration has bankrupted this country as he promotes greed, selfishness, tyranny, corruption, cruelty, disgrace, and every possible Unbiblical and evil activity to flourish during this tenure in the White House.
Here at The Corpus Callosum, I have, from time to time, commented on the Unchristian aspects of Bush's policies. My comments don't carry as much weight, since I speak from an agnostic perspective. But the blockquote above, now that's a blistering comment. Read the whole thing, if you have the time. I couldn't do it better.
Another Long Strange Trip:
What PCP Teaches Us About Science Policy
In the category of Things Found While Looking For Other Things, I noticed that one of my former pharmacology professors, Ed Domino, has published a book: Sixty-One Years of University of Michigan Pharmacology, 1942-2003. (HPP Books, 2004.) As you can tell from the title, he now is an emeritus professor.
I have not read his book, but I did poke around a bit on the 'net, looking into the history of pharmacological research. There is an interesting story about Dr. Domino's research that teaches an important lesson for policymakers.
Dr. Domino established his reputation in the mid-twentieth century through his involvement in the discovery of phencyclidine. At the time, phencyclidine was thought to have promise as an anesthetic agent, and was patented by Parke Davis in 1958. It was withdrawn from the market in 1978. Since then, Parke Davis has undergone phagocytosis, and now is a pseudopodium of Pfizer. Phencyclidine, meanwhile, has become notorious as a street drug, more commonly known as PCP, or angel dust. PCP has caused untold suffering across the country, and across the decades since (see Dangerous Angel, by Sol Snyder). Perhaps in atonement, Dr. Domino later became involved in research at the University of Michigan Substance Abuse Research Center (UMSARC).
A derivative of phencyclidine, ketamine, attained limited commercial success as an anesthetic, but it too is abused sometimes.
In the 1950s, it was not known how or why PCP and ketamine affect the brain the way they do. It has been learned since then that the brain has a widespread network of neurons that act by release of glutamate. Glutamate, in fact, is the most abundant neurotransmitter in the human brain. It stimulates activity via three kinds of receptor: the NMDA, AMPA, and kainate receptors. Both PCP and ketamine block (antagonize) the NMDA (N-methyl-D-aspartate) receptor. It has since been proposed that blockade of the NMDA receptor may mimic schizophrenia in some important respects.
Although the history of PCP and ketamine is tainted by the illicit use of those substances, the research (1 2) on their mechanism of action since then has been more auspicious. Two drugs that modify glutamate transmission have been marketed recently.
Riluzole (Rilutek®) inhibits release of glutamate, and has demonstrated clinical utility in slowing the progression of amyotrophic lateral sclerosis (ALS). This may be due to the fact that excessive glutamate release leads to neuronal cell death. Preliminary studies indicate that riluzole may have a role in the treatment of depression as well. A single case study showed benefit for a patient with obsessive-compulsive disorder.
Memantine (Namenda®) acts by blocking NMDA receptors, and has some use in treatment of Alzheimer disease (AD). It binds weakly to the NMDA receptor, thus it modifies -- but does not prevent -- the normal physiological function of glutamate that is released by the presynaptic neuron. Like riluzole, it is thought to help prevent cell death.
In addition to the utility in treatment of Alzheimer disease, there is some evidence that memantine could reduce binge eating. Such an application might be expected to generate some excitement in the general public. The reference was electronically published ahead of the print publication on May 7 of this year. I will be curious to see if the mainstream media pick up on it.
However, there is reason for caution. There is one case report of subtle psychotic symptoms caused by memantine. Such an adverse effect may be acceptable in the treatment of a dreadful illness such as AD. It would not be acceptable to expose patients with binge eating problems (binge eating disorder) to such an adverse effect, if the frequency of the problem is appreciable. On the other hand, it is possible that further research on the role of glutamate and the NMDA receptor in the regulation of appetite will lead to something that is clinically useful, with an acceptable risk-benefit ratio.
Recall that glutamate has effects on receptors other than the NMDA receptors. AMPA (alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid) receptors, in particular, have been items of interest for pharmacologists. A class of compounds known as ampakines has been developed; some of these may turn out to be clinically useful. CX717, in particular, is in phase I development as a wakefulness-promoting drug. Org24448 and CX516 are in phase II trials, in hope of finding something to improve cognition in patients with AD, schizophrenia, fragile X syndrome, and autism.
The kainate receptor also has generated some interest. NGX424, which is an AMPA/kainate antagonist, is under development as an analgesic. There is preclinical interest in the kainate receptor subunit GluR7 as a possible target for development of a new class of antidepressants. Similarly, there is preclinical interest in the glutamate transporter.
What this story shows is that the earliest research into drugs that modulate glutamate activity resulted in a terrible drug, PCP, before anything was known about the fundamental pharmacology of the system. As our understanding of the basic science improved, it became possible to develop some drugs that are safe and at least a little bit effective for some serious illnesses. As our knowledge increases, the opportunities for development of new drugs improve even more.
This long strange trip has implications for policymakers. There is a trend in our government to increase the proportion of funding that goes to applied research and development for pharmaceuticals.
The proponents of realignment of funding priorities complain that the funding for drug development has not kept up with the explosion of knowledge in basic sciences. The counterpoint is illustrated by this post: basic science should be ahead of applied science. It is dangerous to fumble around in the dark. We need more memantines and riluzoles, not more phencyclidines.
FDA plans praised, criticized
AAMC says that return on research investment should not be measured only by drug approvals
By Paula Park
The Food and Drug Administration's (FDA) proposal to reform the drug development process has been met with criticism from the Association of American Medical Colleges (AAMC), but with praise from other groups, including the Pharmaceutical Research and Manufacturers Association (PhRMA).
The AAMC charges that the FDA plans begin with a faulty premise: that the federal biomedical investment should translate directly to new treatments. David Korn, senior vice president of Biomedical and Health Science Research at AAMC, also warned that any attempt to involve the National Institutes of Health (NIH) in drug-related research could dilute funding for "curiosity driven" investigations that have, in the end, contributed to new therapies. [...]
Thursday, May 19, 2005
Science Does Require Faith
In response to thx's comment:
Oh, I agree, science does require faith. The difference between scientific dogma and religious dogma is that scientific dogma changes faster. They both change, I realize; even the Catholic Church updates its dogma every millennium or so. Scientists do it at least once per century, sometimes once per decade. Not only that, but the methodology is different: scientists use peer review, whereas churches rely on a council of elders. I suppose some may argue that dogma that is expected to change every few years is fundamentally different from dogma that is not expected to change, and changes only rarely. But either way, it still is dogma, in the sense that ultimate proof is not possible. Scientists, however, do not insist on absolute proof; they are happy with what I call the umpire's truth.
In professional (American) football, a player is out of bounds the moment any part of the body, or the equipment covering the body, touches or crosses the white boundary line. Now let's say that a player jumps in the air, catches a pass, then comes down near the line. Let's say that there is one blade of grass with chalk on it, bent inward toward the playing area. Let's say that, after catching the pass, the tip of a shoelace touches the tip of the blade of grass. Technically, that player landed out of bounds, and the pass is no good.
To prevent this, we could use radiolabeled calcium in the chalk on the boundary lines, then have the player remove his shoes and use a germanium-lithium (GeLi) detector to check for isotopes of calcium on the shoelaces. There would be all kinds of practical problems with this, including the need to decontaminate the field and all the players after every play. We could guarantee that every play is called correctly. Of course, nobody would pay money to watch a game like that, because it would take several years. So in exchange for a game that actually is playable, the fans and teams live with a small degree of uncertainty. The game is played as though the calls are correct, even if absolute proof is not possible.
Scientists go about their business as though certain tenets are true, even if the proof is not absolute. They try to keep an open mind, and if contrary evidence comes along, they take a look at it. That is like the umpire checking an instant replay every once in a while. A few challenged calls in a game are acceptable. If every single call gets challenged, that merely annoys people and accomplishes nothing. Lack of absolute certainty is accepted, because it is the only practical way to get the job done.
So, another difference between scientific dogma and religious dogma is that scientists can accept, with casual indifference, the fact that there are weaknesses in their theories. They expect the theories to be modified over time. For example, in psychiatry, we use the DSM-IV as a standard reference. But the very day after DSM-IV was published, committees started working on a revised version. Once that (DSM-IV-TR) was published, they started work on DSM-V. They also started working on policies to guide future revisions. There have been different versions of the Christian Bible, but as far as I know, they don't start working on a revised version the moment a new edition is published.
There is a precedent in society for the dismissal of absolute proof in the service of pragmatism. In law, the standard of proof in a murder case is "beyond a reasonable doubt." Even in capital crimes, we do not insist on absolute proof, despite the fact that it may be a life-or-death matter.
I could amend what I said about ID having no place in science class; however, I do not think it has a place in an introductory class, such as in high school. All areas of inquiry have controversies; biological science is hardly unique in that respect. But you don't teach all the controversies to high school kids. They need a foundation in the majority view first, just to get an orientation to the field. If they decide to go on, then they learn about the controversies. This typically starts in the sophomore or junior year of college. For example, you have to learn Newtonian physics first, before you have any hope of understanding why it is all wrong. You have to learn the "standard" history of the attack on Pearl Harbor, before you can understand the controversies about that. You have to learn the old way of playing jazz, before you can do meaningful improvisation on your own.
To illustrate, here is part of an online syllabus for Psychology 281 at Vance-Granville Community College. There's nothing special about it, I just remember having run across it recently while looking for something else:
I suspect that there are few Sunday School teachers who tell their students, on the first day of Bible instruction, that there are different versions of scripture, and different translations, and that different religions have chosen to include some ancient writings, but not others, in their respective holy texts. Nobody is keeping those things secret, nor is anybody rejecting the truth, but everybody knows that students have to be introduced to complex topics gradually.
So perhaps I should not have said that ID has no place in science class. There would be nothing wrong with teaching it in a 200+ level college course. I suspect that is not going to happen on a secular campus anytime soon, because of the unpleasant way the controversy has developed. The majority of scientists are so suspicious of the motives of ID proponents, that they probably would not want to acknowledge it at all. That is unfortunate for those who are truly sincere and just want to understand the controversy. But from the perspective of a college professor, they have a responsibility to portray their field of expertise in a way that reflects what is really going on in the field. If they learn that something they are teaching is routinely misunderstood, they have to think hard about whether to keep teaching it -- even if what they are teaching is accurate.
The way this has played out, is that the most passionate ID proponents have tried to say that their advocacy on this issue has nothing to do with religion. If that is true, then why pick this particular topic? Why not hold hearings on whether to teach the controversies of quantum mechanics, or the controversy over the propriety of the split infinitive? To a traditional scientist, the insistence that ID has nothing to do with religion seems like -- there is no polite way to say it -- a lie. Then to hear people who claim to have the moral high ground, try to justify their activism with what appears to be a lie, well, that is going to irritate people. Rather badly, in some cases.* The scientist experiences the constant challenges from ID proponents the way football players would feel if every single play were challenged by the other team. After a while, it no longer seems sincere; it just seems annoying.
I have noticed that the most vocal proponents of ID get all the attention, and frankly I doubt their sincerity. But I also have learned that there are many people who just want to understand what all the fuss is about. They don't have any political agenda; their curiosity is a health expression of the human desire for knowledge. I would like to encourage that, not discourage it. But if I'm an instructor, and I try to present a minority view on something, yet somehow a lot of the students take the minority view as having equal footing with the majority view, then I have to decide whether to keep teaching it. If others take that and try to employ it for political purposes, then I'm pretty sure I will stop teaching it. If people start challenging me every time I enter the classroom, I'll go somewhere else to teach, or go sell used cars or something.
The view of most scientists is reflected in a recent editorial in the journal Nature.
In professional (American) football, a player is out of bounds the moment any part of the body, or the equipment covering the body, touches or crosses the white boundary line. Now lets say that a player jumps in the air, catches a pass, then comes down near the line. Now let's say that there is one blade of grass with chalk on it, bent inward toward the playing area. Let's say that, after catching the pass, the tip of a shoelace touches the tip of the blade of grass. Technically, that player landed out of bounds, and the pass is no good.
To prevent this, we could use radiolabeled calcium in the chalk on the boundary lines, then have the player remove his shoes and use a germanium-lithium (GeLi) detector to check for isotopes of calcium on the shoelaces. There would be all kinds of practical problems with this, including the need to decontaminate the field and all the players after every play. We could guarantee that every play is called correctly. Of course, nobody would pay money to watch a game like that, because it would take several years. So in exchange for a game that actually is playable, the fans and teams live with a small degree of uncertainty. The game is played as though the calls are correct, even if absolute proof is not possible.
Scientists go about their business as though certain tenets are true, even if the proof is not absolute. They try to keep an open mind, and if contrary evidence comes a long, they take a look at it. That is like the umpire checking an instant replay every once in a while. A few challenged calls in a game is acceptable. If every single call gets challenged, that merely annoys people and accomplishes nothing. Lack of absolute certainty is accepted, because it is the only practical way to get the job done.
So, another difference between scientific dogma and religious dogma is that scientists can accept, with casual indifference, the fact that there are weaknesses in their theories. They expect the theories to be modified over time. For example, in psychiatry, we use the DSM-IV as a standard reference. But the very day after DSM-IV was published, committees started working on a revised version. Once that (DSM-IV-TR) was published, they started work on DSM-V. They also started working on policies to guide future revisions. There have been different versions of the Christian Bible, but as far as as I know, they don't start working on a revised version the moment a new edition is published.
There is a precedent in society for the dismissal of absolute proof in the service of pragmatism. In law, the standard of proof in a murder case is "beyond a reasonable doubt." Even in capital crimes, we do not insist on absolute proof, despite the fact that it may be a life-or-death matter.
I could amend what I said about ID having no place in science class; however, I do not think it has a place in an introductory class, such as in high school. All areas of inquiry have controversies; biological science is hardly unique in that respect. But you don't teach all the controversies to high school kids. They need a foundation in the majority view first, just to get an orientation to the field. If they decide to go on, then they learn about the controversies. This typically starts in the sophomore or junior year of college. For example, you have to learn Newtonian physics first, before you have any hope of understanding why it is all wrong. You have to learn the "standard" history of the attack on Pearl Harbor, before you can understand the controversies about that. You have to learn the old way of playing jazz, before you can do meaningful improvisation on your own.
To illustrate, here is part of an online syllabus for Psychology 281 at Vance-Granville Community College. There's nothing special about it; I just remember having run across it recently while looking for something else:
Read ch 4
Answer in an email to me the following question:
1) The DSM IV is generally effective, but there are fundamental limitations with any diagnostic system. What problems or issues can you think of that are associated with the use of any classification system and diagnostic process? Think beyond the textbook...

Presumably, those students already have had an introduction to the basic concepts of diagnosis. Once they understand that, then they are ready to understand the weaknesses of the mainstream model.
I suspect that there are few Sunday School teachers who tell their students, on the first day of Bible instruction, that there are different versions of scripture, and different translations, and that different religions have chosen to include some ancient writings, but not others, in their respective holy texts. Nobody is keeping those things secret, nor is anybody rejecting the truth, but everybody knows that students have to be introduced to complex topics gradually.
So perhaps I should not have said that ID has no place in science class. There would be nothing wrong with teaching it in a 200+ level college course. I suspect that is not going to happen on a secular campus anytime soon, because of the unpleasant way the controversy has developed. The majority of scientists are so suspicious of the motives of ID proponents that they probably would not want to acknowledge it at all. That is unfortunate for those who are truly sincere and just want to understand the controversy. But from the perspective of a college professor, they have a responsibility to portray their field of expertise in a way that reflects what is really going on in the field. If they learn that something they are teaching is routinely misunderstood, they have to think hard about whether to keep teaching it -- even if what they are teaching is accurate.
The way this has played out is that the most passionate ID proponents have tried to say that their advocacy on this issue has nothing to do with religion. If that is true, then why pick this particular topic? Why not hold hearings on whether to teach the controversies of quantum mechanics, or the controversy over the propriety of the split infinitive? To a traditional scientist, the insistence that ID has nothing to do with religion seems like -- there is no polite way to say it -- a lie. Then to hear people who claim to have the moral high ground try to justify their activism with what appears to be a lie, well, that is going to irritate people. Rather badly, in some cases.* The scientist experiences the constant challenges from ID proponents the way football players would feel if every single play were challenged by the other team. After a while, it no longer seems sincere; it just seems annoying.
I have noticed that the most vocal proponents of ID get all the attention, and frankly I doubt their sincerity. But I also have learned that there are many people who just want to understand what all the fuss is about. They don't have any political agenda; their curiosity is a healthy expression of the human desire for knowledge. I would like to encourage that, not discourage it. But if I'm an instructor, and I try to present a minority view on something, yet somehow a lot of the students take the minority view as having equal footing with the majority view, then I have to decide whether to keep teaching it. If others take that and try to employ it for political purposes, then I'm pretty sure I will stop teaching it. If people start challenging me every time I enter the classroom, I'll go somewhere else to teach, or go sell used cars or something.
The view of most scientists is reflected in a recent editorial in the journal Nature.
[...] many of the students taught in introductory biology classes hold religious beliefs that conflict, at least on the face of things, with Darwin's framework. Professors rarely address the conflicts between faith and science in lectures, and students are drawn to intelligent design as a way of reconciling their beliefs with their interest in science. In doing so, they are helping it to gain a small, but firm, foothold on campuses around the country. [...]

The authors suggest that scientists should not ignore ID, although they stop short of recommending that ID be taught in science classes.
Scientists would do better to offer some constructive thoughts of their own. For religious scientists, this may involve taking the time to talk to students about how they personally reconcile their beliefs with their research. Secular researchers should talk to others in order to understand how faiths have come to terms with science. All scientists whose classes are faced with such concerns should familiarize themselves with some basic arguments as to why evolution, cosmology and geology are not competing with religion. When they walk into the lecture hall, they should be prepared to talk about what science can and cannot do, and how it fits in with different religious beliefs.

In order for such discussions to take place with a reasonable degree of civility, it is necessary to tone down the rhetoric and for participants to address each other with respect. I am hopeful that we can do this, even if only in a remote corner of the Blogosphere.
_________
*There is a controversy about whether it is ever proper to write a sentence with no verb. I take the minority view on that one.
(Note: The Rest of the Story/Corpus Callosum has moved. Visit the new site here.)
E-mail a link that points to this post:
Monday, May 16, 2005
Electronic Medical Records: Nobody is Doing it Right
Mark Kleiman (I finally spelled it correctly) has a post on electronic medical records (EMR). He points out that the VA hospital system has developed, at great cost, a system that actually works quite well. My wife used to work for the VA, and she thought it was a great system. There were some glitches connecting to the main hospital computer from satellite clinics, but there were workarounds that got the job done.
Apparently, there is an effort now to develop national standards for EMR. Mark wonders why the government does not simply adopt the standards of the VA system for the national standards. He points out that the system is in the public domain, so it shouldn't be difficult to do this. I guess no trade secrets would be revealed.
So why, in the scramble to develop a set of standards for national adoption, isn't there active consideration of simply making the VA system the national standard?

Of course, the private corporations that develop EMR systems would object, since they all have a vested interest in the standards.
I could see a few non-financial problems with this, although the problems are minor. For one, most medical offices are governed by state law, whereas the VA is not. Legislation pertaining to medical records may vary from state to state. This is especially true for prescriptions for controlled substances. (EMR systems generally include prescription-writing software.) Another problem has to do with privacy. The VA system allows any employee to look up any record, including those of their fellow employees. (Many VA employees are vets themselves, and thus patients at the same hospital where they work.) The system keeps track of who looks at what, and the logs actually are checked by a human. Anyone who accesses a record, without a clinical need to have that access, is subject to serious consequences. That works in the VA, but I doubt it would be acceptable or workable in the private sector: patients are very concerned about privacy.
I think those are problems that could be corrected fairly easily. Potentially more serious is the fact that EMR systems have not matured fully. There is some evidence that EMR may actually increase the rate of certain types of error:
The researchers found that the CPOE [Computerized physician order entry] system they studied facilitated 22 types of medication error risks. Examples include fragmented CPOE displays that prevent a coherent view of patients' medications, pharmacy inventory displays mistaken for dosage guidelines, ignored antibiotic renewal notices placed on paper charts rather than in the CPOE system, separation of functions that facilitate double dosing and incompatible orders, and inflexible ordering formats generating wrong orders. Three-quarters of the house staff reported observing each of these error risks, indicating that they occur weekly or more often. Use of multiple qualitative and survey methods identified and quantified error risks not previously considered, offering many opportunities for error reduction.

In practice, these systems result in a net reduction of errors; that is, they prevent more errors than they cause. Still, when it comes to the development of national standards, it would be good to be sure the technology has matured sufficiently so that we have confidence that the standards are good standards.
Counterbalancing this argument is the fact that national standards could themselves result in a reduction of errors. If a set of standards for the user interface is developed, doctors would only have to learn the one interface. They would not have to use different keystrokes going from home, to office, to hospital, to ER; it wouldn't matter if they were using a desktop, laptop, or palm device. Well, the Palm Pilot would be different, because usually one uses a stylus, not a keyboard; even so, there could be similarities in the interface.
The open source movement is already developing interface standards. For example, the Gnome desktop suite and development platform has a set of guidelines for the development of interfaces. The problem with such standards is that they make it easier for people to switch from one system to another, which makes it harder for any one company to establish and maintain a dominant market position. Companies would have to adopt a new business model. That new model is not really new; Red Hat, Mandriva, et al. have been doing it for years. They provide the software for a nominal fee, then they focus on service and support. In the case of EMR, there also is opportunity in the area of content delivery. Integrating databases such as drug information, consensus treatment guidelines, Medline, electronic textbooks, etc., would make an EMR system much more appealing to a physician. Having all that available, with one consistent interface, would be really useful.
For example: a physician who is writing a new prescription might decide to check the latest treatment guidelines for dosage recommendations. Currently, that involves opening another application, and searching for the pertinent information. With a fully integrated EMR and content management system, it would be possible to select a menu item that would take the name of the drug -- reading it right from the prescription that is being typed -- feed it into the various databases, and more or less instantly deliver the pertinent information. That could be a tremendous growth industry, since the information content becomes obsolete quickly.
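To make the idea concrete, here is a minimal sketch of what such a menu action might do behind the scenes. Everything in it is invented for illustration: the drug "databases" are stand-in dictionaries, and a real EMR would query external services (drug monographs, consensus guidelines, Medline) instead.

```python
# Hypothetical sketch: pull the drug name from the prescription being
# typed, fan it out to several information sources, and return whatever
# each one knows. The sources here are stand-in dictionaries.

DRUG_MONOGRAPHS = {
    "lisinopril": "ACE inhibitor; see monograph for dosing.",
}
TREATMENT_GUIDELINES = {
    "lisinopril": "Listed as first-line for hypertension in many guidelines.",
}

def lookup_drug_info(prescription_text: str) -> dict:
    """Read the drug name right from the prescription text and query
    each integrated database, substituting 'no entry' when a source
    has nothing on that drug."""
    drug = prescription_text.split()[0].lower()  # first token = drug name
    sources = {
        "monograph": DRUG_MONOGRAPHS,
        "guideline": TREATMENT_GUIDELINES,
    }
    return {name: db.get(drug, "no entry") for name, db in sources.items()}

info = lookup_drug_info("Lisinopril 20 mg PO daily")
```

The point of the sketch is the single entry point: the physician never leaves the prescription screen, and the same call pattern extends to however many content sources the vendor licenses.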
Another growth area would be in the area of data abstraction and analysis. A perfect EMR would be able to pull information from the records, and prepare periodic reports that would help the physician understand his or her own practice patterns. For example, they could see how often they prescribe controlled substances, compared to national or regional norms. If a new practice guideline comes out that says, say, that all patients with a new diagnosis of X should be seen for follow up in Y weeks, they could see if that is what they are actually doing. This would make it much easier to set up internal quality improvement procedures. Doctors don't have the time to learn SQL and set up the correct reports, but they might select one system over another, if it has this value-added service. This would be a monthly subscription, rather than a one-time capital expense.
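As an illustration of the kind of canned report the vendor could ship, here is a sketch of the controlled-substance comparison as a single aggregate query over the prescription records. The schema, the sample rows, and the benchmark figure are all invented for the example; a real system would run against the live EMR database.

```python
import sqlite3

# Invented schema: one row per prescription written in the practice.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rx (provider TEXT, drug TEXT, controlled INTEGER)")
conn.executemany(
    "INSERT INTO rx VALUES (?, ?, ?)",
    [
        ("dr_a", "lisinopril", 0),
        ("dr_a", "oxycodone", 1),
        ("dr_a", "metformin", 0),
        ("dr_a", "alprazolam", 1),
    ],
)

# Fraction of this provider's prescriptions that are controlled
# substances, to be compared against a national or regional norm.
(rate,) = conn.execute(
    "SELECT AVG(controlled) FROM rx WHERE provider = 'dr_a'"
).fetchone()

NATIONAL_NORM = 0.30  # hypothetical benchmark for the report
flag_for_review = rate > NATIONAL_NORM
```

The physician never writes the query; the value-added service is that the vendor maintains a library of such reports and delivers the results monthly.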
So the problem with releasing the VA system standards on a national level is twofold: private companies wouldn't like it, because it would interfere with their business model; furthermore, they are using the wrong business model.
(Note: The Rest of the Story/Corpus Callosum has moved. Visit the new site here.)
E-mail a link that points to this post:
Sunday, May 15, 2005
What Can We Do About Uninformed Stereotypes of Mental Illness?
Stereotypes applied to persons with brain disorders tend to be unfair and counterproductive (1, 2, 3). Although the problem is common in all areas of medicine, it has proven to be particularly persistent and malicious when the problems produce no outwardly visible signs of impairment. In this post, I look at an example of such a stereotype, as expressed by someone who really ought to know better, provide scientifically-based refutation of the specific stereotype, then mention what steps can be taken to combat the problem.
Dr. Adams is an associate professor in Criminology at University of North Carolina at Wilmington. In a recent Townhall column, he voiced his opinion about a student who claimed to have ADHD. This provides us with an example of an uninformed stereotype. Perhaps it is a little unfair of me to pounce on this, since it wasn't the main point of his article. Perhaps if he had thought about it some more, and perhaps tried to be more informed, he wouldn't have come across as being so nasty. It may be that, in person, he comes across as being more reasonable. In fact, we may get an opportunity to hear more: Dr. Adams will be visiting Michigan soon:
Upcoming Events:
Dr. Adams will be speaking at the Right to Life annual fund-raising dinner in Lansing, Michigan on May 11. He will address the N.C. State Republican convention in Asheville, NC on May 20 and will be signing books at the convention on May 20 and 21. He will then speak at Kalamazoo College in Michigan on May 25th.
I missed the event on the 11th. Too bad the 25th is a Wednesday. I won't be able to make it to Kalamazoo then. But then, maybe I don't have enough shame containers. That may be a bit harsh. It might hurt his feelings to see his picture like this. On the other hand, he probably can take care of himself. In one article, he speculates that liberals have not thrown a pie at him because they may know he carries a .357 magnum revolver.
I don't know Dr. Adams, but poking around on the 'net reveals many intolerant, angry invectives he has authored, and nothing nice. A recent column on Townhall.com bugged me enough to motivate me to post about his writings. His columns are listed here. Much of it is anti-diversity, anti-liberal, and seemingly anti-human being: a strange perspective for someone who characterizes himself as pro-life. I guess he's in favor of life, so long as the person who is living agrees with him. I reached this conclusion based upon a subset of his writings, which is a biased sample. But the bias is a result of his own selection: the material I encountered was the material he has promoted most avidly. Everything I found was a complaint of some sort. He appears to spend a lot of time complaining about things he doesn't like, and little if any time talking about what he does like.
In his favor, I must say he writes reasonably well, and uses sarcasm effectively. The pictures he posted of himself (on DrAdams.org) indicate that at least some people like him, even when he speaks in half-empty lecture halls. His biography indicates that he has overcome some adversity in life, which is always commendable. It mentions that he was given a Faculty of the Year award. I noticed that he backs up his arguments with his own direct observations. As an empiricist, this is something I appreciate.
Unfortunately, in the case of the column I linked above, he did not research his subject matter very well. He implies that a student with ADHD is making himself out to be a victim, and disparages his attempts to help himself. It seems counterproductive for him to do so: if someone recognizes a problem and makes an effort to fix the problem, shouldn't his professor support that effort?
Yesterday, I received your email explaining the reasons for your poor performance in my class this semester. While I was pleased that you refrained from asking for a change of grade, I was disappointed that you attributed your bad grade to adult ADHD.

What is worse, he implies that the problem, ADHD, is not a real problem and that it was basically made up by the medical profession, while the treatments are promoted by the self-serving pharmaceutical industry.
I hope you were kidding when you said that you plan to join an adult ADHD support group. Since you are an 18 year old male, I would suspect that a trip to nearby Wrightsville Beach [link added] could cure your "disorder." If you can't pay attention to the environment there, you may really have a problem.
Adult ADHD is another one of those problems we didn't have to deal with when I was growing up. But, now that a few doctors and drug companies have let us know it is out there, everyone seems to be getting it. The list of these disorders just keeps growing, doesn't it?

There is a grain of truth to this. Probably the most common view, among people who actually know about this subject, is that adult ADHD is overdiagnosed in some cases; however, those same knowledgeable people believe that it is underdiagnosed in others:
Statistics Confirm Rise in Childhood ADHD and Medication Use
Peter Jensen, who has headed major National Institute of Mental Health (NIMH) studies on ADHD and is an assistant professor of psychiatry at Columbia University, agrees with Angold and Costello's findings that the majority of children receiving stimulant medication may not fully meet the criteria.
"It's likely there is a bit of both [under diagnosis and over diagnosis]," Jensen told Education World. "This always happens when public awareness increases that there could be some over diagnosis. But under diagnosis and under treatment are still happening."
In clinical practice, a fairly common scenario occurs with children who are highly intelligent, and who do not exhibit oppositional behavior. They are intelligent enough to do well in primary and secondary school, when the academic demands are not very great. Often, their report cards will contain comments, such as "doesn't reach full potential." But if their grades are OK and they aren't causing trouble, their problem goes unrecognized. If they go on to college, however, they eventually take classes in which the ADHD problem outweighs their ability to compensate.

ATTENTION DEFICIT HYPERACTIVITY DISORDER
Psychiatry On Call
California Psychiatric Association
Briefing Papers on Diagnosis & Treatment of Brain Disorders
"Overdiagnosis" refers to children being diagnosed with ADHD when they do not have it. This can occur when an incomplete evaluation takes place, either because the clinician is not fully trained or because he/she is not allowed adequate time to do a complete assessment, often because few health insurers cover this, and because special education does not have a funded category for ADHD. To complicate matters, when a child is highly disruptive at home or in the classroom, the parents or teacher may put pressure on the physician to "fix the problem" by making a diagnosis of ADHD and prescribing medication.
Even as we recognize that ADHD is over diagnosed, the under diagnosis - children having ADHD but not being diagnosed and treated - is actually much more common. This is due to a number of barriers such as lack of information about the symptoms and causes of ADHD, the stigma of mental illness, the myth that mental illness does not exist, and misperception of the child's behavior as intentional and willful. The symptoms of ADHD - hyperactivity, inattention and impulsive behavior - can profoundly interfere with a child's ability to learn, to make and keep friends, and to feel good about himself. Yet thousands of these children are never seen by an appropriate professional for evaluation.
In such cases, the student may be accused of being lazy, or of looking for an excuse. But for many of them, if you take the time to do a complete history, you learn that they actually are trying as hard as their classmates. Often they will say that they have had to study harder than their classmates, just to get average grades. Sure, if a student skips class to go to the beach, or whatever, that is their fault. However, that usually is not the case.
In order to understand the problem with ADHD in college students, you need experience and training. Illustrative of this, PBS has posted an interview with Dr. Jensen here. He refutes Dr. Adams' concerns, pretty effectively in my opinion.
Frontline: And yet, for probably hundreds of years, there have been people with this disorder. And they have lived and survived, I assume, without medication.

Jensen: We've had diabetes for hundreds of years, and we've had hypertension for hundreds of years, and we've had asthma for hundreds of years. . . . We've had cancer. We've had lots of things for hundreds of years. That doesn't necessarily make it a good thing. And when you sit back and you allow yourself to be informed by research . . . our studies show that these kids have bad outcomes when we don't help them.

Skeptics are encouraged to review recent information on the neurobiology of ADHD. There is abundant evidence from genetic studies (Medscape article, free registration required), and from neuroimaging studies, to support the validity of the diagnosis.
Those who are really interested could peruse the abstracts on the subject, using Medline.
Of course, I have no idea whether Dr. Adams' student actually has ADHD, nor do I know if the student is inappropriately trying to adopt a victim role. My argument is not about the specific student; rather, I object to Dr. Adams casting his student as part of a group (as though the student is just like all those other kids who make excuses), then disparaging the entire group. Persons with brain disorders have struggled against this kind of thing for centuries, and it is a disservice to our entire society for a professional educator to be complicit in perpetuating such stereotypes.
What can be done to mitigate the damage done by uninformed stereotypes? The most important intervention is for people to speak up when such stereotypes are used. Being a passive listener is not a good strategy: it perpetuates the problem. Rather, it is more productive to clarify the intent of the one who expresses the stereotype, then express constructive criticism. The second most important strategy is to acquaint oneself with the facts. Note that, in the sidebar, I have a link to my prior post, Tips For Researching Medical Topics On The 'Net (which is updated periodically).
Doing the background research is important, and it is easy if you have access to the Internet. Presumably, if you are reading this, you do. There's no substitute for the facts, and no excuse for rendering judgment without first examining them. I suppose I sound like a criminologist when I say that, but it's really true.