A practical rubric template for reviewing UX Research portfolios
Assessing and improving UXR case studies across five key dimensions
Portfolios and case studies have become a central artifact in the UX Research job search—yet they’re not something we’re trained to produce, nor are they part of our regular day-to-day responsibilities. They become urgent only when we’re seeking our next opportunity. And in today’s volatile and competitive job market, where many researchers have been impacted by layoffs, the stakes feel especially high.
Researchers are doing their best to stand out, but often without clear guidance, especially when they're not landing interviews and are receiving little or no feedback from hiring managers. When feedback does arrive, it typically comes from colleagues or mentors: well-meaning, but sometimes contradictory.
This isn’t just a problem for candidates. It’s a problem for hiring teams too. In past research with hiring managers and candidates, we found that work sample reviews are often conducted with no consistent structure or shared standards—leading to subjective, inconsistent evaluations.
This article aims to fill that gap. What follows is a flexible rubric template that UX Researchers can use to self-evaluate their portfolios—and that hiring teams can adapt to bring more structure, fairness, and clarity to their review process.
Where this rubric came from
The rubric draws on three primary sources:
Research on the UX Research hiring cycle conducted by Drill Bit Labs, including interviews with hiring managers and candidates about portfolio practices, pain points, and what makes a portfolio stand out.
Direct experience on hiring teams, reviewing candidates’ work samples and seeing firsthand how difficult it is to evaluate portfolios without a shared framework.
Portfolio review experience, drawn from dozens of sessions with mentees, during which common themes and repeated advice began to emerge.
To refine and validate an early version of this rubric, we conducted five 45-minute portfolio review sessions with UX Researchers at various career stages—from entry-level to staff and principal roles. Feedback from those sessions helped shape the dimensions, language, and scoring guidelines.
How the rubric works (and how to use it well)
What we’ve produced is an evaluation rubric designed to be useful across the full spectrum of seniority, offering clear standards whether you’re just starting out or bringing years of experience to the table.
It assesses portfolios across five key dimensions: Clarity, Rigor, Impact, Engagement, and Growth. Each dimension is rated on a scale from 1 (Poor) to 5 (Exceptional). The rubric includes detailed criteria for each level, making it easier for candidates to self-assess. There’s also space for comments to note suggested changes and areas for development.

The scoring and letter grades are non-standard, so you might be surprised to see a 72 out of 100 (for example) rated as a B. This is intentional. Consistent performance at 4 (Strong) across all five dimensions yields a 90, which gives you a solid portfolio for casting a wide net and getting invited to phone screens. A few additional ratings of 5 (Exceptional) will push the score into A+ or bonus territory, which indicates a strong potential fit for later-round interviews where you present your work.
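To make the scoring concrete, here is a minimal Python sketch of how a non-linear mapping like this could work. The point values and grade bands below are hypothetical, chosen only so the arithmetic matches the examples above (straight 4s total 90, extra 5s push past 100, and a 72 falls in the B band); the published rubric's actual weights may differ.

```python
# A sketch of the rubric's non-standard scoring, assuming a hypothetical
# non-linear mapping from 1-5 ratings to points. These values are
# illustrative only, not the published rubric's weights.

DIMENSIONS = ["Clarity", "Rigor", "Impact", "Engagement", "Growth"]

# Hypothetical points per rating level, chosen so that straight 4s
# (Strong) total 90 and extra 5s (Exceptional) push past 100.
POINTS = {1: 6, 2: 10, 3: 14, 4: 18, 5: 22}

def score_portfolio(ratings: dict[str, int]) -> int:
    """Sum per-dimension points into a total score (30-110 with these values)."""
    return sum(POINTS[ratings[d]] for d in DIMENSIONS)

def letter_grade(score: int) -> str:
    """Illustrative grade bands consistent with the examples in this article."""
    if score > 100:
        return "A+ (bonus territory)"
    if score >= 90:
        return "A"
    if score >= 70:
        return "B"
    return "C or below"

straight_fours = {d: 4 for d in DIMENSIONS}
print(score_portfolio(straight_fours))   # 90 -> "A", phone-screen ready

mixed = {"Clarity": 5, "Rigor": 5, "Impact": 5, "Engagement": 4, "Growth": 4}
print(score_portfolio(mixed))            # 102 -> "A+ (bonus territory)"
```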
Use of this rubric is based on two key assumptions:
This may seem obvious, but you’ll need a portfolio—or at least a few case studies in progress—to evaluate.
You should be pursuing a targeted job search. For example, you might be focusing on Senior or Lead roles at small to mid-size companies, rather than applying indiscriminately to any opening with “Researcher” in the title.
The five dimensions that make or break a portfolio
To assess your own work or someone else's, you first need to understand the five dimensions.
1. Clarity
Definition: How well the portfolio communicates the problem, context, and relevance to the targeted role.
Strong clarity means the work is well-scoped, the goals are articulated, and the hiring team can immediately see how the work connects to the job. Weak clarity often shows up in generic case studies with vague goals or only superficial alignment with job requirements.
For example, one candidate was applying for Staff-level roles, which often emphasize non-technical competencies like mentoring and influencing strategy. However, all of the case studies presented were focused on technical execution. A key opportunity was to highlight leadership, influence, and mentorship activities that were more relevant to those target roles.
2. Rigor
Definition: The appropriateness and depth of the presented activities—often, but not exclusively, research methods—and the reasoning behind them.
Strong rigor goes beyond listing what you did. It involves explaining why those activities or methods were chosen, how they were executed, and/or how they addressed the project goals. The best examples convey this even in standalone form, without needing a candidate’s voiceover to fill in the gaps.
In one standout example from our review sessions, the candidate's portfolio took a beat to explain the chosen method (in this case, a specialized application of task analysis) and its specific strengths in relation to the questions being explored. This kind of explicit reasoning positions the candidate as an expert practitioner.
3. Impact
Definition: How well the portfolio connects activities or research insights to business or user outcomes.
Impact may take the form of product decisions, strategic direction, user value, or even financial returns to the business.
This can be a tricky one to demonstrate, since we often have little control over how our findings and recommendations ultimately get used. Projects might get shelved, the team might pivot, or the fruits of an early discovery effort may not materialize for years. Even so, if you drill deeper, you may find meaningful signals of impact: perhaps your work influenced a future line of research, led to a process improvement, or helped shape internal conversations about product direction.
One candidate we reviewed artfully navigated this constraint. Although unable to demonstrate tangible impact from the findings of the study itself, their case study was careful to show how the work was grounded in business outcomes (in this case, revenue and market share) from the earliest stages of planning. Under less-than-ideal circumstances, this indirect approach can be effective.
4. Engagement
Definition: The overall polish and flow of the portfolio, i.e., how effectively it communicates and holds the viewer's attention.
This isn’t about visual design, per se—after all, we’re researchers, not designers. But a good portfolio should avoid distracting the viewer with generic or sloppy templates, or an incoherent layout. It should communicate in a way that feels purposeful and easy to follow. This includes effective visualizations and figures, logical flow, and scannability.
One common question about the rubric is how to balance detail with brevity. How do you include enough information to score well in Clarity, Rigor, and Impact while keeping your portfolio concise? Balancing these with Engagement can take many forms. For instance, well-structured portfolios might include an executive summary, clear sections and headers, or even a hyperlinked table of contents to let viewers dig in wherever they choose.
Many portfolios we reviewed struggled to balance comprehensiveness with brevity. In one representative example, minimal detail made it hard to follow how key insights emerged from the research activities, and the case study assumed a level of context and knowledge that non-research stakeholders may not have.
5. Growth
Definition: Evidence of reflection, learning, and evolution across projects.
Growth is often the most underrepresented dimension—which makes sense. Reflecting on “failure” can be uncomfortable. But in reality, things rarely go exactly as planned. Strong portfolios show what the researcher took away from those bumps in the road, how they adapted, and how it shaped future decisions.
During our portfolio review sessions, one candidate shared their experience working in a new domain: healthcare, a regulated environment shaped by HIPAA in the U.S. They reflected on the challenges of participant recruitment and research planning in that context, showing thoughtful adaptation and professional growth.
How candidates and hiring teams can use this rubric
For candidates, start with a representative or target job description. This could be a real role you're preparing for, a position you applied to and felt well-suited to, or one related to a panel presentation you've been asked to give. Some candidates mentioned leveraging AI tools here: gather three to five job descriptions that reflect your target role and ask an LLM to synthesize them into a composite, as in the sketch below.
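As one way to do that synthesis, here is a minimal sketch using the OpenAI Python client; any LLM, or simply pasting into a chat interface, works just as well. The file names, model choice, and prompt wording are all placeholders for illustration, not part of the rubric.

```python
# Synthesize a composite job description from several real postings.
# Assumes the openai package is installed and OPENAI_API_KEY is set;
# file names and model choice below are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Save three to five job descriptions for your target role as text files.
jd_files = ["jd_senior_uxr_1.txt", "jd_senior_uxr_2.txt", "jd_lead_uxr_3.txt"]
job_descriptions = [Path(f).read_text() for f in jd_files]

prompt = (
    "Below are several job descriptions for my target UX Research role. "
    "Synthesize them into a single composite job description that captures "
    "the shared responsibilities, required skills, and seniority "
    "expectations. Keep it under 400 words.\n\n"
    + "\n\n---\n\n".join(job_descriptions)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

# Use the output as the "target job description" when applying the rubric.
print(response.choices[0].message.content)
```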
From there, this tool is most powerful when used in conversation. We all suffer from the Curse of Knowledge—we’re so close to our own work that we forget what’s obvious to us might not be obvious to someone else. So wherever possible, get out of your head and review your portfolio alongside a mentor or trusted colleague. Share your materials along with this rubric and article, have each person assess independently, and then compare notes. Use the comments column to flag areas for improvement and identify concrete action items. Iterate and revise accordingly to strengthen your application materials.
For hiring teams, one of the biggest issues we identified in our research on the UXR hiring cycle was a lack of structure. Too often, panel presentations are evaluated based on gut feel or first impressions. A more reliable and equitable approach is to systematically assess candidates based on the specific responsibilities of the role.
While this rubric is intentionally broad to accommodate a variety of UXR roles, think of it as a template that can and should be adapted for your team's needs. Consider customizing the dimensions or criteria to align with your org's expectations for a specific role. Further, we suggest sharing what you're looking for, at least in broad terms, with the candidate in advance. Even a general signal helps them tailor their presentation and helps you assess more effectively.
The bottom line
Drawing upon original research, hiring experience, and dozens of portfolio reviews, we created this rubric as a resource for candidates and a template for hiring teams. It provides structured criteria to evaluate portfolios across five key dimensions: Clarity, Rigor, Impact, Engagement, and Growth.
Interested in using it? Get your copy of the rubric in Google Drive. It’s free to use, but if you adapt or share it, please credit this source.
Special thanks to our portfolio review session participants, whose experiences, comments, and questions have improved this resource.
Drill deeper
Depth is produced by Drill Bit Labs, a consulting firm that takes a research-led approach to digital strategy. We work side-by-side with UX and product design leaders to elevate their UX strategy, delight their users, and exceed their business goals. Ways we can work together:
User research to inform confident design decisions and improve digital experiences
Live training courses that teach teams research skills
Advisory services to improve UX processes and team strategy
Let’s connect to discuss your goals and how we can help.