Interesting post on The Old New Thing: “People lie on surveys and focus groups, often unwittingly.”
Software engineers, usability researchers, and marketing groups often interpret what people say too literally. As humans, our words are imperfect approximations of our perceptions of reality. I always try to do what people mean, not what they say, whether that person is part of a focus group, participating in a usability session, or my boss. It's a tough challenge: it requires developing a mental model of the individual and their perspective.
“Just because people say they would do something doesn’t mean they will.”
It's a good point that people are quick to say what they want. It's harder to figure out what they need: what they will pay money for, or what they will actually spend time on. People can't necessarily imagine what the software will be like when it's finished. Sometimes they imagine that you will solve problems you haven't even thought of yet, or they can't see past a temporary flaw or limitation of a prototype. There is a gaping chasm of difference between regular folks and those of us in the software industry. The words we speak between us are founded on unspoken assumptions.
I remember a focus group where the results were reported by a particularly insightful product manager. He said that people at first responded warmly to the demo, but at the end said that they wouldn't use the new software. He remembered that there were a few times during the focus group when the new software failed (it being a prototype, rather than released software). It is likely that what the people were really saying was: the software I use now is reliable; I wouldn't use this new software because it doesn't work.
I find it easiest to find the truth in watching what people do. It's hard to do this when developing software, since you have to sit people down in front of something that basically works and carefully watch where they move the mouse and what keys they type. Did they right-click that icon because they really want to control it using a context menu, or was it that the interface failed to surface the command in a better way? Then you have to be willing to go out on a limb, change things, and test again. And at some point, you have to make a call that it's good enough, because there is no perfection in human experience.
And when people do say something in usability testing, it is often founded more on feeling than fact: "I want this to make my work easier." "That operation didn't feel right: too many things to learn."
And then you have to sort out what is really true (is the product actually too hard to learn?) from comments driven by the limits of usability testing (testers get a few minutes to play with a product instead of a few days).
For instance, I can only imagine usability testing a feature like alpha channels in Photoshop. Without the help of a Photoshop book, I never would have known all of the mathematical ops that can be done using channels. If someone in a usability test were shown this, they might have said: "You have got to be kidding me! That is way too complicated!" That's the difference between being introduced to a feature in the development stage, with no documentation or pretty example pictures, vs. seeing a well-documented feature after completion.
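To give a feel for what "mathematical ops on channels" means, here is a minimal sketch using NumPy arrays as stand-in grayscale channels. This is just an illustration of the kind of per-pixel arithmetic involved (multiply, screen, difference, masked compositing), not Photoshop's actual implementation:

```python
import numpy as np

# Two tiny 8-bit grayscale "channels" (2x2 for illustration)
a = np.array([[0, 64], [128, 255]], dtype=np.uint8)
b = np.array([[255, 128], [64, 0]], dtype=np.uint8)

# Normalize to [0, 1] floats so the arithmetic can't overflow
af = a / 255.0
bf = b / 255.0

multiply = af * bf                          # darkens, like a Multiply blend
screen = 1.0 - (1.0 - af) * (1.0 - bf)      # lightens; the inverse of multiply
difference = np.abs(af - bf)                # absolute per-pixel difference

# Use one channel as an alpha mask to composite the other over a flat gray
background = np.full_like(af, 0.5)
composited = af * bf + background * (1.0 - bf)  # bf acts as the mask

# Back to 8-bit for storage
result = (multiply * 255).round().astype(np.uint8)
```

Each of these is one line of arithmetic, but shown raw in a usability lab with no documentation, it would look exactly as intimidating as the testers' reaction suggests.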
—
As a side tangent, I'm trying to imagine what happened when the paper clip in MS Word was user tested. That thing is so annoying! Could it actually be true that users sat in a room with a one-way mirror and said "Oh yes, great feature, love it!"? Or did users think it was an annoying load of crap, and marketing ignored them? I guess we'll never know.