To finish out a long title, let’s throw in insurance company executives, hospital administrators, most U.S. health economists, and most doctors.
Very few events in the advance of medical knowledge have a definite start date. Cost-effectiveness research is an exception.
The very first cost-effectiveness studies in medicine appeared in the July 31, 1975 issue of the New England Journal of Medicine. The entire issue was devoted to the new science of cost-effectiveness research, which had been partially translated to healthcare from other industries. Some of the titles in this issue included “What do we gain from the sixth stool guaiac?,” “Primer on Certain Elements of Medical Decision Making,” “Therapeutic Decision Making: A Cost-Benefit Analysis,” and “Protecting the Medical Commons: Who Is Responsible?” The authors of these papers included Stephen Pauker, MD, a pioneer in the development of the science of medical decision making; Jerome Kassirer, MD, long-time editor of the New England Journal; and others.
Every other country in the developed world uses the knowledge generated by this field of medical research to make policy decisions, with some countries being more explicit and transparent about it than others. The British National Health Service (NHS) is the most transparent about the development and dissemination of these decisions, which are provided by an independent agency called the National Institute for Health and Care Excellence (NICE). If a new drug or device is shown to meet national cost-effectiveness guidelines, it’s adopted by the NHS; if not, it’s not.
This is why mammograms begin at age 50 in the U.K. (though some critics say they shouldn’t be offered at all), colonoscopy screening happens only once at age 60, and cervical cancer screening begins at age 25: compared to 40, 50 (and every 10 years thereafter), and 21 in the U.S. The Brits have concluded that screening at younger or older ages does not meet the national cost-effectiveness guidelines, so those screenings are not provided.
No U.S. federal healthcare program (e.g., Medicare and Medicaid) or U.S. insurance company uses cost-effectiveness guidelines in its healthcare decisions. Doctors are mostly not taught about cost-effectiveness research and its findings in medical schools and residencies. Why should they be? In the U.S., no one with power or money acts on the information.
In fact, written into the law that founded Medicare is a statement that Medicare cannot consider costs in its coverage determinations. Yet other federal agencies use cost-effectiveness findings in their own fields. For example, the U.S. Army Corps of Engineers performs cost-benefit analyses when it’s asked to assess the appropriateness of building a dam or levee. They calculate how many lives would be lost in a bad flood and balance that number against the estimated cost of the project. If it costs too much per life saved, they don’t build the structure.
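The decision rule described here boils down to a simple threshold comparison: dollars spent divided by lives saved, judged against a maximum acceptable ratio. A minimal sketch of that arithmetic is below; the project figures and the threshold are invented for illustration and are not actual Army Corps of Engineers numbers.

```python
def cost_per_life_saved(project_cost: float, lives_saved: float) -> float:
    """Dollars spent per expected life saved by the project."""
    return project_cost / lives_saved

# Hypothetical levee: a $50 million project expected to save 10 lives
# in a severe flood.
ratio = cost_per_life_saved(50_000_000, 10)  # $5,000,000 per life saved

# Hypothetical funding threshold: build only if the project costs
# no more than $10 million per life saved.
THRESHOLD = 10_000_000
build = ratio <= THRESHOLD  # True: this sketch's levee clears the bar
```

The same one-line comparison is, in essence, what NICE does with drugs and devices, with cost per quality-adjusted life year standing in for cost per life saved.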
How can the U.S. healthcare system reduce costs if the payers aren’t allowed to consider costs in their coverage determinations? It can’t. It’s that simple.
Every other developed country accepts that resources put into the healthcare system cannot be put into other walks of life that also lead to happier and healthier lives: education, public safety, public health, and others. It’s high time the U.S. grew up and joined these countries in making the difficult, but necessary, decisions to limit some categories of healthcare interventions simply because they cost too much.
For anyone willing to listen with an open mind, the information on which to make these decisions has been available for 40 years now. How sad that the science started in the U.S. but has never been used here, while every other developed country has used it for its betterment.