There is a lot of discussion about AI and its place in UX right now, in terms of both how we work (i.e., integrating AI into workflows and processes) and what we work on (i.e., how to craft experiences that meet real user needs as companies pressure product teams to ship AI features quickly).
This leaves many wondering if AI is a passing trend or a shift in the expectations for their roles. We analyzed 2,983 user research job descriptions to examine how often AI is mentioned, how that has changed over time, and the contexts in which it appears.
This article summarizes our findings and what they signal about artificial intelligence’s place in the industry.
Analyzing 2,983 user research job descriptions for AI mentions
For this analysis, we used a sample of 2,983 user research job descriptions collected between January 2024 and April 2025. Job postings are a great source of data to address questions about role expectations because they are publicly available in large enough quantities to support longitudinal analyses and include skill and qualification requirements.
We flagged the job descriptions and their metadata for any mention of AI across several related terms (e.g., AI, artificial intelligence, generative AI, ChatGPT, Copilot) and analyzed the frequency of mentions over time. Additionally, we performed thematic analyses, coding where and why AI appeared in each posting.
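To make the flagging step concrete, here is a minimal sketch of how such a pass might look in Python. The term list, file name, and column names are assumptions for illustration, not our actual pipeline:

```python
import re

import pandas as pd

# Hypothetical term list: the article names AI, artificial intelligence,
# generative AI, ChatGPT, and Copilot; the full list used in the real
# analysis is an assumption here.
AI_TERMS = [
    r"\bAI\b",
    r"\bartificial intelligence\b",
    r"\bgenerative AI\b",
    r"\bChatGPT\b",
    r"\bCopilot\b",
]
AI_PATTERN = re.compile("|".join(AI_TERMS), flags=re.IGNORECASE)

def mentions_ai(text) -> bool:
    """Return True if a job description contains any AI-related term."""
    return isinstance(text, str) and bool(AI_PATTERN.search(text))

# Assumed input: one row per posting, with a free-text 'description'
# column and a 'posted_at' date column.
jobs = pd.read_csv("job_descriptions.csv", parse_dates=["posted_at"])
jobs["mentions_ai"] = jobs["description"].map(mentions_ai)

print(jobs["mentions_ai"].mean())                                     # overall share
print(jobs.groupby(jobs["posted_at"].dt.year)["mentions_ai"].mean())  # share by year
```

Note that case-insensitive matching of short tokens like "AI" can over-match, so flagged postings still warrant a manual spot-check.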
Depth is brought to you by Drill Bit Labs — We bring a research-led lens to digital strategy.
AI in user research job descriptions: What the data shows
How often AI appears in user research job descriptions
Starting at the broadest level, we can look at how frequently AI comes up in user research job postings. Our data show that out of the 2,983 job posts sampled, 334 (11.2% of the total) mentioned AI for one reason or another.
So, most user research job posts don’t mention AI, but about 1-in-9 do, which is a fair number when you consider the overall volume of jobs across the entire industry. We should also note that job descriptions are notorious for inconsistent formatting and excluding key details, so we should treat this 11.2% as a rough directional measure.
What’s probably more noteworthy than the overall number is how it’s changing over time. When comparing the data from 2024 to what we have collected so far in 2025, we see an increase from 9.5% to 16.4%; that roughly 7-percentage-point year-over-year increase is statistically significant (z = -5.07, p < .0001).
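For readers who want to check the math, this comparison is a standard two-proportion z-test. Here is a sketch using statsmodels; the per-year counts are approximations backed out from the published totals and percentages, since the exact yearly sample sizes are not reported in this article:

```python
from statsmodels.stats.proportion import proportions_ztest

# Approximate yearly counts consistent with the article's 9.5% (2024),
# 16.4% (2025 YTD), and 334 of 2,983 overall; illustrative, not exact.
mentions = [214, 120]   # AI-mentioning postings in 2024, 2025 (approx.)
totals = [2250, 733]    # postings sampled in 2024, 2025 (approx.)

z_stat, p_value = proportions_ztest(count=mentions, nobs=totals)
print(f"z = {z_stat:.2f}, p = {p_value:.2e}")
# With these inputs, z comes out near -5 and p falls well below .0001,
# in line with the reported z = -5.07, p < .0001.
```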
It probably doesn’t surprise many that this change shows up in the data. Anecdotally, in conversations with researchers across many types of companies and industries, we hear that most organizations are scrambling to find applications for new AI technologies and asking their product teams to deliver AI-driven features.
How user research job listings reference AI
Taking the 334 job postings that mentioned AI, we performed thematic coding and found three primary reasons why AI was referenced (a posting could be coded under more than one theme, which is why the percentages below sum to slightly more than 100%; see the sketch after this list):
The role would involve research to shape AI products or features (39.8%)
AI was a part of the required experience or skills for the role (40.1%)
The posting included disclosures about AI usage within the hiring and interview process (24.3%)
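To illustrate the overlap point, here is a toy example in Python, with invented rows and theme labels, showing how per-posting percentages from multi-label coding can sum to more than 100%:

```python
import pandas as pd

# Hypothetical multi-label coding: each AI-mentioning posting carries
# one or more theme codes (rows and labels invented for illustration).
coded = pd.DataFrame({
    "job_id": [101, 102, 103, 104],
    "themes": [
        ["ai_product_research"],
        ["required_skill", "ai_product_research"],
        ["hiring_disclosure"],
        ["required_skill"],
    ],
})

# Shares are computed per posting, so overlapping codes make the
# theme percentages total more than 100%.
theme_share = (coded["themes"].explode().value_counts() / len(coded) * 100).round(1)
print(theme_share)
```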
Each of those deserves a closer look.
Of the job postings that mention AI, nearly 40% do so because they explicitly state that the role will focus on research in support of developing AI features and experiences. For example, one posting said, "You will play a crucial role in shaping the user experience of our artificial intelligence-enabled assistant and digital products." This is probably unsurprising, given the broader focus on AI across the industry. Many researchers report that their teams have been tasked with delivering AI-driven experiences.
Similarly, roughly 40% of the job postings that reference AI list it within required experience or skills; diving into that reveals some interesting nuance.
65% of the time, the posting asked for prior experience designing AI products or features. For example, it’s relatively common to see something like, "3+ years of experience in user research, preferably with a focus on AI and chatbot technology."
25% of the time, the posting asked for skills or expertise in using AI as part of the user research process. What’s particularly noteworthy is that these requirements were often described in relatively broad or generic terms.
For instance, "Knowledge of artificial intelligence and how it can be applied to UX practices," or
"Exposure to AI as part of product development, executing research, and/or increasing personal productivity."
An additional 2% of postings referenced both, seeking candidates with experience designing AI products as well as using AI in their research processes.
These data have some important implications. First, we can see that when a job asks for prior experience with AI, it's much more commonly about creating AI products, experiences, and features. It could be a strong differentiator for a job seeker to highlight prior experience working on these types of digital experiences in their resume or portfolio. Second, it was far less common to see a job description mention skills applying AI to research work and processes. Where it was mentioned, it tended to be described at a relatively high level, which reflects broader trends: AI adoption today is largely driven by individuals, and most applications are relatively simple in scope.
We were surprised to find that 24% of the postings that mentioned AI did so in a disclosure about the hiring or interview process.
Some companies have started adding a blanket statement to all of their job posts, like this example: “Please note that [redacted company] may leverage artificial intelligence and machine learning technologies in connection with applications for employment.”
On the other hand, some postings mention a new issue in hiring: candidates using ChatGPT during live interviews to generate answers on the spot and reading them verbatim. For example, one company stated, “We are committed to ensuring a fair and equitable interview process for all candidates. As part of this commitment, the use of artificial intelligence (AI) tools to generate or assist with responses during interviews is not permitted. This policy is in place to maintain the integrity and authenticity of the interview process."
Commentary on how AI is changing the nature of hiring and job-seeking is somewhat outside the scope of this particular analysis, but it may be worth revisiting in the future. In the meantime, both candidates and hiring managers should be broadly aware of these emerging implications for the current hiring environment.
The bottom line
Mentions of AI in user research job postings are becoming more common, increasing from 9.5% in 2024 to 16.4% in 2025 (YTD). This shift reflects current organizational priorities to ship AI features.
Most often, AI is cited because the role will involve researching AI features and/or because it requires prior experience performing research on AI products. Mentions of using AI for accomplishing user research tasks are less common and tend to be described in broad, non-specific terms.
In light of these data, user researchers should consider:
Gaining experience working on AI-focused products and highlighting them in their resumes or portfolios.
Experimenting with AI tools (within reason) to improve their personal workflows and efficiency, even if expectations about AI-assisted research are loosely defined.
Drill Deeper
User research is the foundation of a successful digital strategy. Let Drill Bit Labs guide you in turning user insights into concrete initiatives to shape product direction, inform go-to-market strategies, and optimize digital touchpoints.
We’ve previously used this same data to produce our data-driven UX research career ladder. Refer to that article for more details about our collection methodology.
One of the most significant AI-driven changes I've noticed in UXR is that internal stakeholders are using LLMs to translate their objectives into specific research methods, which they then ask the UXR to run, rather than relying on the UXR's expertise to inform those decisions.
I mostly see UXRs landing at OpinionX for projects like conjoint analysis studies, so I admittedly have a somewhat skewed view, but the pattern is clear and frequent: traditionally-qual UXRs being pushed by internal stakeholders who use LLMs to (correctly) identify the type of research they need, often quant-heavy methods, and then task the UXR with running it.