Patient Satisfaction Surveys

By Amy M. Collins, editor

As an editor at AJN, I come across a lot of information on performance measures and Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) surveys. It’s a hot topic that we’ve covered several times, with some health care providers railing against these surveys and questioning whether satisfaction during a hospital stay is the same as quality care (see the September Editorial and the July 2012 Viewpoint for more on this).

Yet, as is often the case, reading about a topic isn't quite the same as experiencing it. A few days after undergoing a small, non-emergency, in-office medical procedure, I was surprised to find a patient satisfaction survey in my e-mail inbox. Busy and flooded with other e-mails, I was tempted to banish the survey to the trash, especially since I didn't feel I had much to say. But curiosity got the better of me.

The survey started off easily enough, as I clicked through questions such as “Was your waiting room time under 15 minutes?”; “Were the receptionists polite?”; “Was our facility clean?”

But as the survey crept forward, I began to feel overwhelmed by the sheer number of questions. Many seemed redundant; I answered about five related to waiting times alone. Are they trying to catch patients out giving inconsistent answers? All the while, a green bar at the top showing the percentage completed inched forward at the speed of midtown New York City traffic. I began to wish I hadn't bothered.

Some of the questions seemed to place a lot of responsibility on the patient. Had the caregivers followed adequate hygiene practices, such as handwashing and wearing gloves? I don't know; I wasn't watching them like a hawk.

Other questions brought up feelings of loyalty to my provider. Did I feel the physician in question had been understanding of my needs, had listened to me, and had spent adequate time with me? Had s/he spoken in a way I could understand? Since I happen to like my physician, part of me felt guilty giving this provider anything other than a “very good.” Surely I should have been objective; the survey is, after all, anonymous. But I still found it hard to give straight black-and-white answers.

Yes, this person is great—but my appointment wasn't. It was unexpectedly painful; there were a few minutes of uncertainty as to what was happening, with no explanation of whether everything was actually going okay. But should that translate to a point off for the provider? Should I condemn the provider for not chatting away and giving me a play-by-play while s/he was concentrating on doing her/his job? How can I express with a click of the mouse that when s/he patted my arm after everything was over, it made up for the few minutes in which s/he'd unintentionally caused me pain?

When it came to questions about the nurses and whether I felt the care provided was satisfactory, I, of course, clicked “yes.” But clicking “yes” didn’t allow me to explain that when things got painful and I began to cry, my nurse came to my side and offered to hold my hand. Or that I almost crushed her fingers with my tight squeeze because I was anxious, causing her to ask gently if I could loosen my grip.

In the end, these small acts of kindness and caring overrode the negative experience and made me click high scores for many of the questions. By the final stretch, I was clicking on anything just to finish the lengthy survey! In retrospect, I see that this does not an objective, informative survey make. And I could just as easily see someone else going the other way: clicking negative answers because of a bad experience that was out of the providers' hands.

Which brings me back to the questions at the heart of the debate over these surveys: as patients, what satisfies us? And do those things actually measure the quality of the care we receive?

