Ever heard the story of Google’s early user-experience tests, where researchers sat users in front of a computer and then watched to see how they interacted with the site?
The results were surprising. A researcher would fire up a browser, navigate to Google, and then move away. They expected the user to start clicking, typing, searching, etc. But they didn’t click. In fact, they didn’t do anything. Users just sat there, staring at the screen, sometimes for a full minute. And when the researcher would ask, “What are you waiting for?” the user would respond, “I’m waiting for the page to finish loading.”
Long story short: that’s why Google added the copyright symbol at the bottom of the search page. The symbol stayed there for years, and today you can still find a footer down there with links like “Privacy” and “Terms.” Google learned that its homepage was so bare that many users thought there was more to the page that simply hadn’t appeared yet. Note that I said “learned,” not “guessed.” Google identified the right product metrics and learned from them.
This is just one powerful reason that product owners and product managers need to subject their products to real-world data, testing, and feedback whenever possible.
Google’s early product team realized there was something important missing in their initial offering. And they realized it only after developing a logical, testable metric—how do users actually engage with google.com?—and then putting it to the test.
Here are a couple of other reasons your product will stand the best chance of success if you identify, track, and learn from the right metrics—rather than relying primarily on your intuition or even your general knowledge of your industry.
Quantitative Product Metrics Give Balance to Qualitative Customer Feedback
Your customers often find creative ways to use your product that even you might have missed.
One common method product teams use to gather data is to put out a survey. It’s easy to do, especially in the era of SurveyMonkey, NPS tools, etc., and with the exception of the rare “Take me off this list!” reply, surveys sent to a product manager’s email list usually yield lots of helpful data.
But this type of metric—what your customers tell you they want—might not be the most strategically beneficial. That’s because often your customers simply can’t tell you what they really need from your product, or how they’d derive the most value from it, because they don’t know themselves.
This is one reason agile can be such a smart methodology, particularly for software development. It allows you to push out products more quickly and frequently, to see how your customers are actually using your products—so you can make adjustments based on real-world feedback.
Say your company makes an online travel application, and at the heart of your tool is a platform that lets the user build her own itinerary—airline, hotel, rental car, showtimes and locations, etc.
Now let’s say you push out the early version of your tool, and you find that 95% of users aren’t building their travel itinerary in your app at all. Instead, they’re all taking advantage of a feature you considered minor: a PDF converter that lets users forward their travel details from other sites (airline, rental car) to their account on your site, and then turns those PDFs into a searchable, cohesive itinerary.
If you spent long development cycles and lots of resources building this first version of your tool, you’ll realize only after you begin watching how your customers use it that you poured your resources into the wrong functionality.
Your app should not have been built or promoted as a do-it-yourself travel itinerary builder. It should’ve been positioned as an itinerary aggregator. Because in that respect, your product is killing it out there!
The point is, you’d learn this only after examining some real-world metrics and observing some actual user behavior—particularly which features your customers are actually accessing, and which they’re ignoring. And you wouldn’t have hit on this interesting development if you’d simply put out a survey to your customers—many of whom would’ve probably checked the “Interested” or “Very Interested” box beside the question, “What is your level of interest in being able to build your own itinerary in our app?”
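If you’re curious what this kind of feature-usage metric looks like in practice, here’s a minimal sketch in Python. The event log, user IDs, and feature names are all hypothetical—your analytics tool will have its own schema—but the core calculation is just “what share of distinct users ever touched each feature”:

```python
from collections import defaultdict

# Hypothetical event log: (user_id, feature) pairs captured by your
# product analytics. The data and feature names are illustrative only.
events = [
    ("u1", "pdf_import"), ("u2", "pdf_import"), ("u3", "manual_builder"),
    ("u4", "pdf_import"), ("u5", "pdf_import"), ("u2", "pdf_import"),
]

def feature_adoption(events):
    """Return the share of distinct users who used each feature."""
    users_by_feature = defaultdict(set)
    all_users = set()
    for user, feature in events:
        users_by_feature[feature].add(user)
        all_users.add(user)
    return {f: len(u) / len(all_users) for f, u in users_by_feature.items()}

print(feature_adoption(events))
# → {'pdf_import': 0.8, 'manual_builder': 0.2}
```

Note that the metric counts distinct users, not raw event counts—otherwise one power user clicking a feature a hundred times would drown out everyone else.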
User Experience Is Rarely Objective
What you find simple and intuitive in your product might be a frustrating, hair-tearing, profanity-inducing mess for your users.
In his book Too Close to Call, about the 2000 presidential election, author Jeffrey Toobin tells a funny story about Theresa LePore, the elections supervisor of Palm Beach County, Florida. LePore was sitting in her office during the vote-recount days after the election. She slid her county’s “butterfly ballot,” which she had designed, across her desk to a colleague and said, “Vote for Gore.” She wasn’t telling her colleague which candidate he should prefer; the election was over. She was asking him to cast that ballot for Gore, because she was no longer sure it was such an easy and obvious task.
Her butterfly ballot had become a source of contention in the contested election. And now she was re-examining it after many complaints from voters who said it confused them and led them to vote for the wrong candidate. That ballot probably seemed perfectly simple and straightforward to LePore when she was designing it. But in light of that voter feedback… maybe not so much.
Another reason to identify, track, and analyze the right metrics is that, as a product owner or manager, you’re simply too close to your product to be able to objectively evaluate things like the user experience for a brand new user.
This is why behavior-analytics tools are so valuable—and why you need not only to deploy them on your site or product, but also to do deep-dive reviews of the data they’re gathering for you.
Or you can even run a simple test within your company. Ask your sales reps or your HR department, for example, to sign up for your product’s free trial, or to navigate to a specific feature within your product. Just watch them. Are these steps—which you probably find simple and intuitive—actually easy for the average non-product expert at your company?
It’s only through watching and measuring how your customers interact with your offering—where they get stuck, where they try something over and over, or where they give up and leave—that you’ll know what’s working and what needs improvement.
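The “where they get stuck, where they give up” question is usually answered with a funnel metric. Here’s a small illustrative sketch, assuming a hypothetical sign-up funnel with made-up step names and per-user event streams—a real analytics tool would supply this data, but the drop-off logic itself is simple:

```python
# Hypothetical per-user event streams from a sign-up funnel; the step
# names and sessions are assumptions for illustration only.
sessions = {
    "u1": ["landing", "signup_form", "confirm", "first_search"],
    "u2": ["landing", "signup_form"],
    "u3": ["landing"],
    "u4": ["landing", "signup_form", "confirm"],
}

FUNNEL = ["landing", "signup_form", "confirm", "first_search"]

def drop_off(sessions, funnel):
    """Count how many users reached each funnel step, in order.

    A user is credited with a step only if they also reached every
    earlier step, so the counts reveal where people quit.
    """
    reached = {step: 0 for step in funnel}
    for steps in sessions.values():
        for step in funnel:
            if step in steps:
                reached[step] += 1
            else:
                break
    return reached

print(drop_off(sessions, FUNNEL))
# → {'landing': 4, 'signup_form': 3, 'confirm': 2, 'first_search': 1}
```

Reading the output right to left tells you where to look: the biggest step-to-step drop is the stuck point worth watching session recordings for.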
Remember, that’s how Google learned it needed to add something, anything, to the bottom of its homepage to alert visitors that, yes, the site had indeed finished loading and it was time to start interacting with it.
How do you think about relating metrics to your product strategy? Share your thoughts in the comments below.