“We added an open text box to our churn survey… but most people either left it blank or wrote ‘not useful’ or ‘too expensive.’ We couldn’t tell what exactly was broken.” – B2B SaaS PM
Open-ended questions can be a gateway to rich, human-centered insights, but most fall flat. Survey fatigue, ChatGPT-written answers, and poor panel quality all play a part. But mostly, we’re asking the wrong way.
Let’s break down exactly why your open-ended questions aren’t delivering—and how to fix them.
Open-ends are meant to capture the “why” behind user behavior. But in reality, most survey responses are vague, generic, or left blank.
It’s not that open-ends don’t work—it’s that they need better design. And that starts with avoiding these common mistakes.
The 7 Mistakes (And What to Ask Instead)
Mistake 1: “What can we improve?”
This question sounds flexible—but it offers no guidance. Most users don’t know where to start, so they either skip it or reply with vague answers like “UX” or “notifications.”
Ask instead: “What can we improve? (e.g., speed, setup, notifications, design)”
This provides direction without biasing their answer. It lowers the cognitive barrier and invites clarity.
Mistake 2: “Why did you give us a 6?”
Cold “why” questions put users on the defensive and assume they’re ready to explain. But without setup, you get surface-level replies—or worse, none at all.
Ask first: “What were you trying to get done today?”
Then follow up: “What made that difficult?”
You’ll get more honest, detailed reflections by easing users in.
Mistake 3: “What would’ve made your experience better?”
This assumes something was wrong—even if the user had no issues. It skews feedback and erodes trust.
Ask instead: “What worked well—and what didn’t?”
“Was anything surprising, confusing, or especially smooth?”
These invite both positive and negative input without pressure.
Mistake 4: “What do you think of the product overall?”
This is overwhelming. It invites vague replies like “It’s okay” because users don’t know what part to focus on.
Ask instead: “What was your experience like using [feature] for the first time?”
“What’s one thing that slowed you down today?”
Specific questions generate specific, actionable stories.
Mistake 5: “How do you feel about the app?”
You’ll get shallow takes like “It’s fine” or “Pretty good.” That’s not insight—it’s vague sentiment with no substance.
Ask instead: “Can you walk me through the last time you used the app?”
“What happened when you tried to complete [task]?”
Behavior reveals more than opinion.
Mistake 6: “What would you do if we removed this feature?”
Hypothetical questions lead to guesses, not grounded insight. They force users into imaginary scenarios that may not reflect real needs.
Ask instead: “Have you ever used this feature? What for?”
“When was the last time you needed to do X—how did you do it?”
You want reality, not predictions.
Mistake 7: “How do you like the new flow?”
This lacks context. Which part? When? What happened before or after?
Ask instead: “After completing step 3, how did the next screen feel?”
“When you first used the new flow, what stood out or felt different?”
This helps users recall concrete experiences, not abstract impressions.
“We got more from one 5-minute AI voice interview than 50 open-ended survey responses.” – UX Lead at B2B SaaS
Typing is effortful. Speaking is natural.
With AI voice interviews (like UserCall), users talk casually while AI handles follow-ups and tags the insights for you.
If your open-ended responses feel flat or unhelpful, it’s rarely just a “bad panel” problem; more often it’s a design problem. The quality of insight you get is directly tied to how you ask.
Fix these 7 mistakes, and you’ll start collecting responses that are specific, honest, and actionable.
Still not getting the depth you need? Sometimes it’s not just about better questions, but better channels. Consider switching up the format: voice instead of text, async interviews instead of surveys, or smarter AI-moderated tools that help people open up.
In the right moment, with the right medium, a single conversation can unlock the pivotal insight your entire project depends on.