One-Star Docs in a Five-Star World

Over the course of a month, I send myself multiple emails with links to stories that seem like potential grist for this column. This month, for example, I sent myself a note about the University of Utah medical center making patient satisfaction scores for its 1,200 physicians available online (see tinyurl.com/alxv6rk for the news release). This is part of a broader trend of online ratings for almost everything, represented by dedicated sites like Yelp.com and Angie’s List (www.angieslist.com) and by the customer review features of sites like Amazon.com. For better or worse, these sites reduce products and services to a one-to-five-star rating. Many businesses hate these reviews (for reasons both justified and not). At the same time, people choosing doctors don’t have a lot of information to go on.
 
Ratings like those on Amazon or Yelp suffer from a number of problems. First of all, they’re completely subjective. My five-star experience may only warrant three stars from someone who’s rating the same things but is harder to please, or who’s rating on completely different criteria. It’s particularly confusing when customers are asked to lump every aspect of their experience into a single rating. If the food was great but the service sucked, what rating do you give? Some people will give five stars because of the food, some will give one star because of the service, and some will give three stars in an effort to split the difference.
 
Pet peeve: When ratings are averaged on these sites, they always seem to present the statistical mean, such as 4.7 out of 5 stars (although Amazon actually shows you a frequency graph). This is less of a concern when a large number of reviews are available, but it’s still worth seeing the actual distribution of ratings, or at least the median value (half above, half below). Kudos to Amazon for this; Yelp, please get with the program.
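To see why the mean alone can mislead, here’s a minimal sketch (the ratings are made up, and nothing here reflects how Amazon or Yelp actually compute their numbers) comparing the mean, the median, and the frequency distribution for a hypothetical great-food, awful-service restaurant:

```python
from collections import Counter
from statistics import mean, median

# Hypothetical ratings: reviewers cluster at the extremes
# (five stars for the food, one star for the service).
ratings = [5, 5, 5, 5, 1, 1, 1, 3, 5, 1]

print(f"mean:   {mean(ratings):.1f} stars")  # 3.2 -- suggests "average"
print(f"median: {median(ratings)} stars")    # 4.0 -- half rated 4 or higher

# The frequency graph (what Amazon shows) tells the real story:
for stars, count in sorted(Counter(ratings).items(), reverse=True):
    print(f"{stars} stars: {'#' * count}")
```

With reviewers split between five-star food fans and one-star service victims, the 3.2-star mean describes almost nobody; the frequency graph makes the split obvious at a glance.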
 
In an effort to provide greater depth and clarity, most review sites actively encourage reviewers to add comments to their ratings. This is both good and bad. Good, because comments add nuance to the star rating and provide more detail about what the reviewer liked or disliked. They’re generally a good way to see whether people have a particular axe to grind or are being “fair and balanced” in their judgments. But comments are bad because, well, they’re written by people, so the quality of both the writing and the thinking is highly variable. This is less of a problem when there are multiple reviews. But when there’s only one review with comments, written by someone who seems to have slept through high school English, it can be frustrating.
 
Back to patient satisfaction. As it turns out, rating physicians isn’t a new idea: HealthGrades (www.healthgrades.com) has been around since 1998. I rated my own doctor (Dr. James G. Trapnell of Santa Rosa—a great doc, by the way), just to see how it works. Rather than a star rating, the survey asks a standard set of questions about the office (ease of scheduling, office friendliness, wait time) and the doctor (level of trust, willingness to listen, time spent), along with what some regard as the most important question of all: Would you recommend this doctor to family and friends? Each question can be answered with five choices, from “definitely not” to “definitely yes.” It also asks how many times you’ve visited the doctor in the past two years. To validate your survey, you’re sent a link via email. There’s no option to provide written comments.
 
Before I took my survey, 18 patients had reviewed Dr. Trapnell, and he had an 89 percent “patient satisfaction” rating, based on the “would you recommend him?” question. Looking at the responses to the other questions, I could see the number of reviews and average rating for each question in the survey, as well as how that compared to the national average of HealthGrades physicians (Dr. Trapnell is above average on everything except scheduling).
 
The use of a survey points out a key difference between HealthGrades and sites like Yelp.com or Amazon.com: because HealthGrades knows you’re rating a doctor, it can ask a very specific set of questions related to how people experience his or her services. This greatly improves the consistency of reviews. The lack of written comments also helps with consistency, although it makes the results feel a bit more sterile (as opposed to the distinctly social quality of Amazon and Yelp reviews).
 
HealthGrades does suffer from one problem that I alluded to earlier: a low volume of reviews. When I searched for family practice doctors near Sebastopol (where I live) and sorted by patient satisfaction, the top-ranked doctors had 100 percent satisfaction ratings, but those were based on a handful of reviews (typically three or fewer).
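The column doesn’t get into the statistics, but a short sketch shows why a 100 percent rating from three reviews is weaker evidence than 89 percent from 18. One standard tool (my choice for illustration; HealthGrades doesn’t say how, or whether, it adjusts for sample size) is the lower bound of the Wilson confidence interval, which discounts small samples:

```python
from math import sqrt

def wilson_lower_bound(positive: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the 95% Wilson confidence interval for a proportion."""
    if total == 0:
        return 0.0
    p = positive / total
    denom = 1 + z**2 / total
    center = p + z**2 / (2 * total)
    spread = z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return (center - spread) / denom

# Three reviews, all positive, vs. 16 of 18 positive (about 89 percent):
print(f"3/3:   {wilson_lower_bound(3, 3):.0%}")    # roughly 44%
print(f"16/18: {wilson_lower_bound(16, 18):.0%}")  # roughly 67%
```

By this measure, a 16-of-18 record actually outranks a perfect 3-for-3, which matches the intuition that a handful of reviews proves very little.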
 
Does your doctor encourage you to rate him or her on HealthGrades? If not, why not? It’s a good question to ask; I plan to put it to Dr. Trapnell at my next appointment.
 
The University of Utah data, on the other hand, doesn’t suffer from low volume: the university runs a regular program of collecting reviews and has amassed more than 40,000. Moreover, ratings aren’t posted until a doctor has been employed for at least six months and has received at least 30 reviews. The site also allows patient comments, both positive and negative; comments are reviewed before posting and “only edited to remove information that might identify a patient or be considered libelous or slanderous.”
 
Unlike HealthGrades, however, the University of Utah website (healthcare.utah.edu/fad) doesn’t appear to let me sort its “Find a Doctor” results by satisfaction rating. And because it’s bound by HIPAA privacy requirements, all the patient comments are anonymous, so you have to trust the system. Still, I salute the university’s approach, which attempts to address many of the potential shortcomings of online reviews, and I encourage you to read its news release (linked above) for more details.
 
After reading this column, please rate Tech Talk from one to five stars, add a comment, and send it to my boss at publisher@northbaybiz.com. In the spirit of transparency, I’ll report any and all results in a future column.

Michael E. Duffy is a 70-year-old senior software engineer for Electronic Arts. He lives in Sonoma County and has been writing about technology and business for NorthBay biz since 2001.
