Medicare Risk Adjustment

There are approximately 42 million Americans covered by Medicare, a well-known (but not well-understood) program that pays for health care for senior Americans. The program is administered by the Centers for Medicare and Medicaid Services (CMS).

Medicare has two basic forms: fee-for-service (FFS) and Medicare Advantage (MA), which offers seniors an HMO-style approach to health care. About 7 million Americans belong to an MA plan.

How do I know all this? In my other life (the one where I’m not a columnist for NorthBay biz), I’m the CIO for Leprechaun, LLC (www.lepmed.com), a health care services company that deals exclusively with the Medicare arena. In this position, I’ve seen how companies can use technology to look at large data sets and discover new information using a technique called “data mining,” and then apply the new information to a real-world problem (in this case, quality health care for seniors). This issue’s Health and Medicine theme gives me an opportunity to tell you a little about it.

Prior to 2004, MA organizations were paid by the government solely on the basis of a member’s demographic information. In other words, a plan was paid the same for all the 67-year-old women living in the same county. As you might expect, this led to health plans seeking to enroll healthy people, not sick ones. So Congress passed the Medicare Modernization Act of 2003, which began phasing in a risk-adjusted payment system over a four-year period. In 2005, MA plans received 50 percent of their payments based on demographics and 50 percent based on the risk profile of their membership. Last year, that split became 75 percent based on risk, and starting this past January, MA plans receive payments based entirely on the risk profile of their members.
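To make the phase-in concrete, here’s a minimal sketch of the blended payment calculation. The weights come straight from the percentages above; the dollar figures (and the function itself) are hypothetical, invented for illustration.

```python
# A sketch of the phase-in schedule described above. The weights come
# straight from the column; the dollar figures are made up.
RISK_WEIGHT = {2005: 0.50, 2006: 0.75, 2007: 1.00}

def blended_payment(demographic_rate, risk_adjusted_rate, year):
    """Blend the old demographic rate with the new risk-adjusted rate."""
    w = RISK_WEIGHT[year]
    return (1 - w) * demographic_rate + w * risk_adjusted_rate

# Hypothetical monthly rates: $700 by demographics, $900 by risk profile.
for year in (2005, 2006, 2007):
    print(year, blended_payment(700, 900, year))  # 800.0, 850.0, 900.0
```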

(This being a technology column, I’m not going to talk about the political football that is Medicare, nor the interesting mess represented by Medicare Part D, which covers prescription drugs.)

The risk assessment methodology chosen by Medicare is called “Hierarchical Condition Category” (HCC). Basically, it assigns risk factors to the diagnostic codes used by physicians to describe the physical condition of the patients they see. Related diagnoses are grouped into condition categories, and within a hierarchy of related categories, only the most severe one counts toward a patient’s score (hence “hierarchical”).

You might think these risk factors reflect how sick a patient is. But they don’t, at least not directly. The values assigned to the various HCC categories are based on a statistical analysis of how much it has historically cost to treat patients with those conditions. This leads to some interesting relative risk values: some seemingly minor conditions carry high risk factors because they require continuing care over a long period of time.
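For the technically curious, here’s a minimal sketch of how an HCC-style score might be computed. The category names, coefficients, and hierarchy below are invented for illustration; CMS publishes the real tables annually.

```python
# A toy HCC-style risk score. All coefficients and hierarchies here are
# hypothetical; CMS publishes the real tables each year.

# Hypothetical risk factors for a few condition categories.
HCC_COEFFICIENTS = {
    "CHF": 0.395,               # congestive heart failure
    "DIABETES_SEVERE": 0.438,
    "DIABETES_MILD": 0.181,
    "AMPUTATION_STATUS": 0.519,
}

# The "hierarchical" part: within a disease hierarchy, only the most
# severe category counts.
HIERARCHIES = {
    "DIABETES_SEVERE": ["DIABETES_MILD"],
}

def risk_score(demographic_factor, patient_categories):
    """Demographic factor plus HCC coefficients, after dropping any
    category trumped by a more severe one in the same hierarchy."""
    cats = set(patient_categories)
    for severe, trumped in HIERARCHIES.items():
        if severe in cats:
            cats -= set(trumped)
    return demographic_factor + sum(HCC_COEFFICIENTS[c] for c in cats)

# A patient coded with CHF and both diabetes categories: only the more
# severe diabetes category contributes to the score.
print(risk_score(0.317, ["CHF", "DIABETES_SEVERE", "DIABETES_MILD"]))
# 0.317 + 0.395 + 0.438 = 1.15
```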

There are three main challenges within this HCC system. First, Medicare resets a patient’s risk status annually. In some ways, this makes perfect sense; you want patient risk to be assessed at least annually. But it also means that, at least from a risk perspective, every patient is perfectly healthy on January 1 each year—amputated limbs magically reappear and persistent conditions like congestive heart failure are forgotten. The result is that patients who go a full year without being seen end up with incorrect risk assignments.

Second, many physicians don’t understand the oddities of the HCC risk assignment system. Doctors enter a lot of diagnostic information in a patient’s chart, but historically, they only submit the information necessary to receive payment from a health plan. Since nearly all of Medicare’s diagnosis (and hence, risk) information comes through payment claims, a lot of useful risk information isn’t getting into the system.

Finally, the IT systems health plans use are still adjusting to the changes in the law. Even if doctors wanted to submit all the diagnostic information they have, many current systems limit the number of codes that can be submitted with a single claim. So although a patient may have 20 diagnostic codes, the claims submission system used by the health plan may only allow 10. Moreover, there’s no guarantee that the 10 submitted codes are the most important to the patient’s risk profile. Medicare also recently shot itself in the foot by paying physicians to provide “quality of care” codes, which take up some of those 10 slots while doing nothing to affect patient risk scores.
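A toy illustration of that limit, with made-up codes, shows how a claim filled in chart order can squeeze out exactly the codes that matter:

```python
# A toy illustration of the 10-code limit. The codes here are made up;
# the point is that truncation in chart order is blind to risk value.
MAX_CODES_PER_CLAIM = 10

def naive_submission(chart_codes):
    """Take the first ten codes as they appear in the chart."""
    return chart_codes[:MAX_CODES_PER_CLAIM]

# Two quality-of-care codes (no effect on risk) plus 18 diagnoses.
chart = ["QUALITY_1", "QUALITY_2"] + [f"DX_{i}" for i in range(1, 19)]
submitted = naive_submission(chart)
dropped = chart[len(submitted):]
print(f"{len(dropped)} of the patient's codes never reach Medicare")  # 10
```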

One way of tackling this problem is to educate physicians about the issues involved and make sure that patients are seen regularly. But that still doesn’t address the problem of IT systems, which effectively drop data on the floor. Since Medicare allows a plan to retroactively submit diagnostic information about patients (via an arcane timeline which would rate an entire column by itself), one solution would be to look directly in patient charts for relevant diagnostics. The problem? A 20,000-member MA plan might have 80,000 to 100,000 patient charts to review. Who’s going to identify which charts have the greatest amount of missing risk information?

That’s where a company like Leprechaun comes in. Leprechaun takes all the information produced by a plan, such as medical and pharmacy claims, and analyzes it for patterns that can be exploited. Clinical experts, such as cardiologists, look at the patterns to create rules that can then be used to identify the 10,000 charts most likely to contain incomplete diagnostic information.

Some rules are simple: if you see a claim for a prosthetic device (information that cannot be submitted to Medicare), make sure the patient has a diagnostic status that reflects an amputation. Some are more complex. For example, drugs that are prescribed for congestive heart failure may be used in other circumstances as well. With a sorted list of charts, plans can allocate review resources more effectively. Trained reviewers can go back into those charts looking for the diagnostic codes that the doctor identified, but that never made it to Medicare. Patients receive better care, and plans receive the payments necessary to properly care for them.
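Here’s a minimal sketch of that first rule. The code values are samples and the record layout is hypothetical; real claims use standard code sets such as ICD-9 diagnoses and HCPCS device codes.

```python
# A minimal sketch of the prosthetic-device rule, with sample code values.
# Real claims use standard code sets (ICD-9 diagnoses, HCPCS device codes).

PROSTHETIC_DEVICE_CODES = {"L5301", "L5856"}   # sample lower-limb prosthetics
AMPUTATION_STATUS_CODES = {"V49.75", "897.0"}  # sample amputation diagnoses

def needs_chart_review(claims):
    """Flag a chart whose claims show a prosthetic device but no
    diagnosis reflecting the patient's amputation status."""
    has_prosthetic = any(c["code"] in PROSTHETIC_DEVICE_CODES
                         for c in claims if c["type"] == "device")
    has_amputation_dx = any(c["code"] in AMPUTATION_STATUS_CODES
                            for c in claims if c["type"] == "diagnosis")
    return has_prosthetic and not has_amputation_dx

claims = [
    {"type": "device", "code": "L5301"},      # prosthetic leg billed...
    {"type": "diagnosis", "code": "250.00"},  # ...but only diabetes coded
]
print(needs_chart_review(claims))  # True: this chart goes on the review list
```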

The medical world has a long history of data analysis. An excellent recent book, The Ghost Map, describes how such analysis helped end a deadly cholera outbreak in 1850s London. But almost every organization produces data that can be analyzed for trends. For example, a restaurant might look not only at which dishes are most popular, but at how much of each dish is left on people’s plates, and use that information to adjust portion sizes and reduce cost. I’m sure my astute readers will think of applications for data mining in their own businesses.

Author

  • Michael E. Duffy

    Michael E. Duffy is a 70-year-old senior software engineer for Electronic Arts. He lives in Sonoma County and has been writing about technology and business for NorthBay biz since 2001.
