3 Comments

Great guide, Thomas! πŸ‘πŸ»πŸ‘πŸ»πŸ‘πŸ»

Really interesting breakdown. Conjoint analysis seems like a game changer for cutting through the noise and figuring out what users actually care about, not just what they say they want. Has anyone tried running a smaller-scale conjoint study for early-stage products? Would love to hear how it worked out, or if there are better lightweight methods for feature prioritization.

Hey Noah, I’ve run conjoint studies ranging from small internal experiments with 10-20 respondents to studies with 10,000+ participants. Conjoint is particularly ideal for feature prioritization *within the context of the broader product or pricing plans* (i.e. showing the stuff you’re considering building alongside everything else you already offer).

If you’re just looking at a list of potential problem statements or opportunity areas in isolation, then more lightweight discrete-choice methods like Pairwise Comparison (head-to-head voting) or MaxDiff Analysis (show 3-6 options at a time; respondents pick the best and worst option from each set) are better suited than conjoint.
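
To make the scoring logic behind those two methods concrete, here's a minimal count-based sketch (my own illustration, not taken from any of the guides below — real tools typically use more sophisticated models like Bradley-Terry or hierarchical Bayes, but simple counts convey the idea):

```python
from collections import defaultdict

def pairwise_scores(votes):
    """Score options from head-to-head pairwise votes.

    votes: list of (winner, loser) tuples, one per vote.
    Returns each option's win rate (wins / total appearances).
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in votes:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    return {opt: wins[opt] / appearances[opt] for opt in appearances}

def maxdiff_scores(responses):
    """Score options from MaxDiff responses.

    responses: list of (shown_options, best_pick, worst_pick) tuples,
    one per question a respondent answered.
    Returns each option's best-minus-worst count, normalized by
    how often the option was shown.
    """
    best = defaultdict(int)
    worst = defaultdict(int)
    shown = defaultdict(int)
    for options, b, w in responses:
        for opt in options:
            shown[opt] += 1
        best[b] += 1
        worst[w] += 1
    return {opt: (best[opt] - worst[opt]) / shown[opt] for opt in shown}
```

Either score gives you a simple ranked list of problems or features; the normalization by appearances/exposures matters because options shown more often would otherwise rack up more wins or "best" picks.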

Here are some guides explaining those two methods:

β€’ Pairwise Comparison β€” https://opinionx.co/blog/pairwise-comparison/

β€’ MaxDiff Analysis β€” https://opinionx.co/blog/maxdiff-analysis

For both of these methods, I personally like an approach called Customer Problem Stack Ranking as the data collection method for feature prioritization β€” https://opinionx.co/blog/customer-problem-stack-ranking/

I’ve helped run thousands of these studies, so feel free to add any follow-up questions as comments here :)
