Research Recruitment – Why It Fails and How to Fix It

[Image: a broken paperclip chain, illustrating that research recruitment is broken]

Research is in a boom era thanks to AI, but not every area has been touched by the magic wand yet, and research recruitment is still proving particularly tricky. Based on the conversations we've had with numerous researchers, recruitment is the hardest part of their work right now.

It’s rarely talked about in case study decks, but behind the scenes, it’s where a lot of projects stall or quietly fail. Whether it’s the same participants showing up again and again, or entire screeners left hanging with barely a response, finding the right people is often messier and slower than it should be.

Panels are oversaturated. Timelines are unforgiving. And as the demand for more inclusive, more rigorous insight grows, the gap between the participants we need and the ones we can actually get feels wider than ever.

This piece is for anyone stuck somewhere in that gap. Whether you’re managing recruitment in-house, briefing an agency, or trying to get stakeholder buy-in for something a bit braver, here are some practical ways to improve how you find and select participants, and why it matters.

Why Research Recruitment Fails (At a Glance)

  • Same people, same answers. Overused panels and repeat participants dilute fresh insight.
  • Overfitted briefs. Too-specific criteria kill your pool before you start.
  • Low trust, low visibility. The right people don’t see the invite or don’t believe it’s for them.
  • Bad incentives. Underpaying = high no-shows and disengaged participants.
  • Screeners that feel like tests. People game the system, or drop off entirely.
  • No follow-up. When research feels extractive, recontact gets harder and trust erodes.

Want to fix it? Keep reading.

Why Research Recruitment Breaks Down

There’s no single reason recruitment trips up, but there are a few repeat offenders. Most of them are rooted in the same problem: we’re still treating recruitment like admin, when it’s anything but.

You can streamline reporting, automate stimulus testing, and generate transcripts in seconds, but recruitment is still very much stuck in the weeds.

Yes, there are inroads happening. We’re starting to see tools that help with matching, verification, and even automation of some outreach steps. But recruitment is still fundamentally about people. And people aren’t neat or logical; they often overpromise and underdeliver.

That means the same messiness, biases, and blind spots we’re trying to remove from our research often show up right at the start, because they’re baked into how we recruit.

Let’s break down the most common pain points we have come across:

Over-reliance on the same methods

Panels are convenient but often overused. If the same participants are resurfacing again and again, you’re not collecting fresh insight but simply rehearsed opinions.

Panel Quality Concerns

And it’s not just repetition that’s the problem; quality has been an issue for years. A study back in 2020 found that 46% of respondents from major online survey panels had to be removed for failing basic quality checks like speeding, nonsense answers, or contradictions. Another analysis, from Kantar, reported that researchers discard an average of 38% of collected data due to quality issues, with some cases reaching as high as 70%. Discard 38% of your responses and the effective cost of every usable one rises by roughly 60%. That’s a blown budget and warped outcomes.

Unrealistic criteria

Overfitted recruitment briefs (e.g. “vegan, lives in Kent, owns a dog, also uses Klarna”) seem strategic but can wipe out your pool. Worse, if you do find someone, they often don’t truly represent the wider group you’re trying to understand.

Low visibility and trust

Many of the people we should be speaking to don’t see research opportunities, or don’t believe they’re meant for them. They may not trust the process, or they’ve had a poor experience before. Many have been tokenised, misunderstood, or ignored.

Flat incentives

If the offer doesn’t reflect the value of someone’s time, you’re selecting for those who can afford to take part for less. That skews the data. It also sends a clear message about who research is really for.

And it matters more than people realise. In one recent study, remote sessions offering $160/hour had a 1% no-show rate, while those paying $60/hour saw no-shows rise to 10%. People aren’t just motivated by cash, but underpaying them often guarantees last-minute dropouts and rushed contributions.

No feedback loop

Participants rarely hear what happened after the session. That makes the whole thing feel extractive. It damages recontact rates, and it turns research into something transactional when it should be collaborative.

But here’s the deeper issue few people want to admit:

Demand characteristics are everywhere

Even when you do find the right people, they often show up trying to be helpful, agreeable, and insightful. That sounds like a good thing, until you realise it’s warping the data. People start to perform. They guess what the researcher wants. They tell you what they think a “good” answer sounds like.

This gets worse the more people take part in research. Repeat participants often know the beats of a session better than the moderator. That doesn’t mean they’re not valuable, but it does mean they’re often presenting a polished version of themselves, not a raw one.

You can’t eliminate demand characteristics altogether. But if you don’t acknowledge them and account for them in how you recruit and moderate, your insights will be built on sand.

How to Spot Where Your Recruitment Is Stalling

Before you fix recruitment, you need to know where it’s breaking. That usually means looking at when things slow down or stop altogether.

Ask yourself:

  • Are we getting enough applicants?
    If no one’s clicking through, your visibility is low. This could be down to poor targeting, weak outreach, or unclear messaging.
  • Are people dropping off during the screener?
    If there’s a spike in drop-offs partway through, your screener might be too long, too invasive, or too confusing. Look for patterns in where people quit.
  • Are we rejecting most people who apply?
    If your criteria are too narrow or unrealistic, you’re screening out the majority before they even have a chance. It might be time to rethink what really matters.
  • Are we getting the “right” people but bad data?
    You might be attracting seasoned participants who know how to pass a screener but bring polished, predictable responses. That’s not always a quality issue, but it’s a signal to change your approach.
  • Are people ghosting after confirmation?
    If you’re getting sign-ups but high no-show rates, something’s off in your process. Look at how you’re confirming, reminding, and incentivising participation.

Keeping a simple tracker that logs outreach source, application rates, drop-off points, and final attendance can help make this clearer over time. Recruitment improves when you treat it like a feedback loop, not a fixed task.
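If it helps to make that feedback loop concrete, here is a minimal sketch of what such a tracker could look like. The stage names, sources, and the funnel_report helper are illustrative assumptions, not a prescribed tool; a spreadsheet with the same columns works just as well.

```python
# Minimal sketch of a recruitment funnel tracker (illustrative field names only).
from collections import Counter
from dataclasses import dataclass

@dataclass
class Applicant:
    source: str               # e.g. "panel", "reddit", "newsletter" (assumed labels)
    applied: bool             # clicked through and started the screener
    completed_screener: bool  # finished the screener
    qualified: bool           # passed the screening criteria
    confirmed: bool           # booked a session
    attended: bool            # actually showed up

def funnel_report(applicants):
    """Print how many people reach each stage and the drop-off between stages."""
    stages = ["applied", "completed_screener", "qualified", "confirmed", "attended"]
    previous = len(applicants)
    for stage in stages:
        count = sum(getattr(a, stage) for a in applicants)
        rate = count / previous if previous else 0.0
        print(f"{stage:20} {count:4d}  ({rate:.0%} of previous stage)")
        previous = count
    # Which outreach sources actually deliver people who show up?
    attended_by_source = Counter(a.source for a in applicants if a.attended)
    print("Attendance by source:", dict(attended_by_source))

if __name__ == "__main__":
    people = [
        Applicant("panel", True, True, True, True, False),
        Applicant("reddit", True, True, False, False, False),
    ]
    funnel_report(people)
```

Run something like this after each study and compare where the biggest drop happens; that tells you whether to work on visibility, the screener, or the follow-up.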

Fixing the Research Recruitment Gaps 

Once you know what’s not working, the next step is testing something new. Recruitment won’t always respond to small tweaks. Sometimes you need to change the shape of your approach entirely.

Here are five shifts to try when the usual fixes aren’t landing.

1. Change how you describe the opportunity

If you’re struggling to get attention, forget the incentive for a moment and focus on the invitation. Most outreach sounds either flat or corporate. Instead, try writing as if you’re speaking to one person who genuinely wants to help shape something better. Be clear, human, and specific about what you need and why it matters.

2. Source from outside your usual ecosystem

If your panel or CRM isn’t delivering variety, stop expecting it to. Find one new route, whether that’s a Reddit community, a local WhatsApp group, or a paid ad in a niche newsletter, and run a short experiment. You don’t need to blow your budget, just widen the reach long enough to see who turns up.

3. Rethink what makes someone “right”

Perfect fits on paper don’t always give the richest insight. If you’ve been screening hard and still ending up with dry sessions, try relaxing a few of your filters and prioritising attitude or lived experience over tickboxes. A small shift here can open up the data without diluting it.

4. Make your screener invisible

Not literally! But it should feel intuitive. Too many screeners are obvious hurdles. If the person answering can sense which option will get them in, they’ll play the game. Instead, aim for clarity, simplicity, and a tone that feels like a warm-up, not a test.

5. Invest in the follow-up

Confirmation emails, reminders, and especially the payment are all part of the participant experience. Every message you send either builds trust or chips away at it. Treat these touchpoints with the same care you give to research questions. A short, kind message can reduce no-shows more than another automated nudge.

We Need to Make Smarter Research Recruitment a Priority

You can have the best methodology in the world, but if the people in the room aren’t quite right or don’t show up at all, you’re building on shaky ground. Good recruitment makes insight more honest, more useful, and more likely to drive real change.

AI is already speeding up other parts of the research process. But recruitment is still mostly human work. That’s not a bad thing. It just needs better support.

It’s something we’re actively exploring at Beings.

We’re not promising to automate human behaviour, and we’re not there yet. But we are building something that will take care of the admin and chaos that slows everything down. Much of the busy work that holds agencies back is already being managed with a research-specific focus. That means you can start testing these strategies properly, reach new audiences faster, and spend more time with the people who matter.

Want to be one of the first to try it? We’re working on it. But in the meantime, you can sign up for the portal here and enjoy the benefits of having Aida as your dedicated AI research assistant.
