The importance of sharing prototypes with your users


If you work in e-commerce, product, or digital design, you might have come across the concept of the “painted door” test. It’s one way that teams assess whether their users might like a new concept, feature, or business model, but without the cost and time of building the entire thing. As David DeFranza succinctly puts it: “The idea is simple: Instead of building a complete solution or feature, you build the suggestion of such a feature and measure how many people try to utilize it.”

Here’s an example of how it might work. Let’s say you work for a rideshare company, and you have a hunch that your customers might want to choose a horse buggy as their mode of transportation. For your painted door test, you’d design a button for a buggy option and then see how many visitors try to click on or access the feature.
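To make that concrete, here’s a rough sketch of how the front end of such a test might be instrumented. This is illustrative TypeScript only; the `trackEvent` helper, the `/api/analytics` endpoint, and the event names are hypothetical stand-ins for whatever analytics tooling your team already uses.

```typescript
// A minimal "painted door": the buggy option is visible and clickable,
// but no real feature exists behind it. All names here (trackEvent,
// "/api/analytics", "buggy_option_clicked") are hypothetical.

// Hypothetical helper that forwards an event to your analytics service.
function trackEvent(name: string, properties: Record<string, string>): void {
  void fetch("/api/analytics", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name, properties, at: new Date().toISOString() }),
  });
}

const buggyButton = document.querySelector<HTMLButtonElement>("#ride-option-buggy");

buggyButton?.addEventListener("click", () => {
  // Recording this click is the entire "result" of the test.
  trackEvent("buggy_option_clicked", { screen: "ride_selection" });

  // There is no feature yet, so let the rider down gently.
  window.alert("Horse-and-buggy rides are coming soon!");
});
```

Notice how little the test records: a click happened, on a given screen, at a given time. That single event is all the data you will ever get back.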

The “painted door” test is a way to assess a new concept, feature, or business model.

While they can’t really order the horse to show up at their door yet, this painted door trial would help you gauge interest by quantifying the number of people who tried to interact with the option. Through the test, you might deduce that app-users are crazy for horse rides, and decide that it’s a no-fail feature to pursue in your next release.

This sounds like a reliable approach to testing an idea, right? While I’m not here to argue that a painted door test is never a good idea, I do want to say that the method is lacking. In particular, this way of testing what users want lacks the all-important why.

When you look at numbers, clicks, or engagement, you lose nuance. You lose stories and verbal feedback. This is why I’ll always encourage testing prototypes with people, instead of counting clicks.

What’s wrong with the “painted door” test?

There is a wealth of information that the painted door test can’t illuminate. Let’s return to our trusty horse-and-buggy rides. If our fictional rideshare business ended up getting tons of interest through the trial, they might decide to throw time, money, and development assets into building the feature as fast as possible. Then they could put it out into the world and be surprised when it doesn’t perform as well as anticipated.

That’s because a painted door test is binary: it tells you whether people clicked or not. It doesn’t give you insight, context, or stories. It’s data with no direction. Maybe people were clicking on the buggy button because they loved the idea. Or perhaps they were just curious because they had never heard of this lovely concept. Or maybe they liked the idea but were concerned that the buggy wouldn’t get them to dinner on time.

These are all insights that you gain from showing people your prototype and talking with them about it, not just monitoring their interaction.

There are things that the painted door test can’t tell us.

Quality over quantity

One of my issues with the painted door test is that it’s overly focused on volume. Because the trial is shown to all — or a large proportion — of your users within a specific timeframe, you probably end up with thousands of data points. From this, you can create a hypothesis about what percentage of your site traffic will engage with a concept or feature. Because it generates hard data, many companies prefer this style of prototype testing; they feel that the high volume of testers gives them more accurate information on future trends and business impact.
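To be fair, the appeal of that volume is easy to see: with enough impressions, even simple arithmetic yields a precise-looking estimate. Here’s a rough sketch with invented numbers, using a standard normal-approximation confidence interval:

```typescript
// Rough sketch of the math behind a painted door readout.
// The impression and click counts are invented for illustration.
const impressions = 20_000; // users who saw the buggy button
const clicks = 1_400;       // users who clicked on it

const rate = clicks / impressions; // 0.07, i.e. a 7% click-through rate

// 95% confidence interval via the normal approximation.
const stderr = Math.sqrt((rate * (1 - rate)) / impressions);
const margin = 1.96 * stderr;

console.log(
  `Estimated engagement: ${(rate * 100).toFixed(1)}% ` +
    `± ${(margin * 100).toFixed(2)} percentage points`,
);
// Prints "Estimated engagement: 7.0% ± 0.35 percentage points".
// A precise number, but nothing in it says why anyone clicked.
```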

But, while a painted door test may provide large numbers that can be used to validate an idea, it doesn’t help designers or developers understand the real drivers behind their customers’ behaviors. Putting up a fake feature and seeing what happens is hard to interpret and unpack. When things go “well” or “poorly,” you don’t know why.

On the other hand, individual, one-on-one qualitative interviews are not about quantity; they’re all about quality. In fact, in the Design Sprint process, you typically conduct “just” five user interviews on the last day of the sprint. These interviews are the culmination of the Design Sprint week, and they’re when you talk with users about the focused prototype you’ve created.

I’ve seen first-hand that, through these five interviews, you learn so much about your prototype. You hear what your users like, what’s confusing, what’s getting lost, and what’s intriguing. The richness and insights that come out of a handful of interviews are undeniable.

Qualitative research doesn’t provide the volume of data points that a painted door test does, but it gives you far more stories to draw from and to inspire future design decisions.

Listen and learn

I think my insistence on user testing with prototypes comes down to fidelity, or level of detail. The fidelity of your prototype influences the fidelity of the feedback. In other words, if your prototype is ambiguous, your customer feedback will be ambiguous as well.

When you have a nuanced, human conversation instead of a binary record of “did they click or not,” you have more to work with and more meaningful detail to inform the work ahead. If you send out a survey, you’ll get more data points, but the conversation is biased and constrained by what you put in the survey.

One story I shared in my new book Beyond the Prototype was about Twyla, an art startup where I was the CTO. During our Design Sprint, we decided to test an idea we wanted to build — a price transparency feature. Looking back, a painted door test on this feature wouldn’t have been helpful to us. If we had made the dummy feature and no one clicked on it, we wouldn’t have had any idea of why they didn’t like the concept.

Instead, because we showed users a prototype of the concept and interviewed them directly, we found out that they didn’t like the idea. Even more important, we learned that it actively annoyed our customers. It was invaluable information to find out sooner rather than later.

So, even if you continue to conduct painted door tests at your company, consider running qualitative user testing alongside it. I think you’ll get better direction by talking with people one-on-one.

Take your feature and show it to people. Find out what they have to say. Listen intently for the things customers tell you they want that you don’t currently offer. You’ll capture so much more, and it will help you refine your idea faster.

Use these learnings to tweak your prototype and test again. Keep going until you’ve got something so appealing that you’re very confident that people want it.