
Stop Buying AI Recruiting Tools That Don't Work
By Graham Thornton
The rush to "do something with AI" has swept through nearly every talent acquisition team this year.
Boards are asking about it. Vendors are selling it. Competitors are announcing it.
At Talivity, we help companies evaluate and implement recruiting technology every day — which makes what I'm about to say sound strange:
Most organizations are buying AI recruiting tools to solve the wrong problems. When you point the right technology at the right problem, the results are remarkable. When you don't, you get shelfware.
Not because the tools don't work, but because they're pointed at the wrong problems.
I've had more than 100 virtual coffees with TA leaders this year. The pattern is the same:
"Our board read a Gartner report. Now I need to do something with AI."
When I ask what problem they're trying to solve, the answer is usually:
"I'm not sure — but I need to show we're not behind."
That's not a strategy. That's survival instinct.
This article breaks down how to point AI at real problems, when technology actually drives ROI, and how to avoid the trap most organizations fall into: buying solutions before defining what needs solving.
Stop Buying AI That Doesn't Work. Start Solving Problems That Matter
The rush to "do something with AI" has swept through nearly every talent acquisition team this year. Boards are asking about it. Vendors are selling it. Competitors are announcing it.
Here's what we've learned from that work:
AI recruiting tools can cut time-to-hire by 40%, reduce recruiter workload by half, and improve quality of hire by 30%. But only when applied to well-defined problems.
Most organizations skip the diagnosis and jump straight to implementation. They buy AI sourcing when their real problem is vague job descriptions. They buy AI screening when their real problem is inconsistent hiring criteria. They buy conversational AI when their real problem is a terrible candidate experience.
The technology works. The problem definition doesn't.
Why Most AI Implementations Fail (And How to Fix It)
"We needed to do something with AI."
That's not a problem. That's reacting to pressure.
Here's the difference:
Market pressure sounds like:
"We need to modernize our tech stack."
"Everyone's talking about AI—we can't be left behind."
"Our board expects us to innovate."
"Gartner says we should be doing this."
Actual problems sound like:
"Recruiters spend six hours per week manually updating candidate statuses because our ATS has 14 steps."
"We lose 45% of candidates between phone screen and first interview; they accept other offers while waiting to be scheduled."
"Hiring managers reject 60% of final-round candidates because job requirements weren't defined upfront."
Market pressure tells you what's trending. Problem definition tells you what's broken.
Only one of those creates ROI when you apply AI to it.
Example 1: Retail Client — Review Monitoring Automation
The pressure: "Our Glassdoor score is stuck at 3.2. We need to improve our employer brand reputation."
Their proposed solution: Buy more Glassdoor ads, encourage employees to leave positive reviews, respond to negative reviews publicly.
The diagnosis: We asked why their score wasn't improving. The real problem wasn't a lack of positive reviews—it was the math. They had 4,500 existing reviews. To move from 3.2 to 3.5 would require roughly 900 consecutive 5-star reviews with no new negative reviews along the way. Practically impossible.
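To make that math concrete, here's a quick back-of-the-envelope sketch. It's plain weighted-average arithmetic using the round numbers above, nothing more:

```python
# Back-of-the-envelope: how many consecutive 5-star reviews would it take
# to pull a 3.2 average across 4,500 reviews up to 3.5?
existing_reviews = 4_500
current_avg = 3.2
target_avg = 3.5

current_points = existing_reviews * current_avg  # 14,400 total rating points

# Solve (current_points + 5 * n) / (existing_reviews + n) = target_avg for n
n = (target_avg * existing_reviews - current_points) / (5 - target_avg)

print(round(n))  # ~900 perfect reviews in a row, assuming zero new negatives
```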
But here's what we found: roughly 15% of their negative reviews violated Glassdoor's community guidelines or contained verifiable inaccuracies. Manual review monitoring was consuming 10+ hours per week with inconsistent follow-through.
The solution: We implemented AI-powered review monitoring that scrapes all review platforms, flags guideline violations, and automates removal requests through a partnered law firm.
Results:
Glassdoor score improved from 3.2 to 3.6 in four months
Employer brand team time saved: 12 hours/week
Negative review removal rate: 22% of flagged content
Cost: $25K, versus the $100K+ they had been spending on ineffective Glassdoor advertising, while also eliminating 250+ hours per year of manual review monitoring.
AI worked because we identified the real problem: you can't add your way out of a Glassdoor math problem, but you can strategically remove what doesn't belong.
Example 2: Global Business Services Company — High-Volume Candidate Screening
The pressure: "We're drowning in client support applicants. We need AI screening to handle the volume."
Their proposed solution: Buy AI resume screening to automatically filter applicants.
The diagnosis: They were getting 300+ applications per client support role. On paper, that's a good problem to have, but when we sat with recruiters, we found the real problems:
Job postings were generic, attracting unqualified candidates
Application process was 14 steps, causing qualified candidates to abandon
No standardized criteria—every client lead wanted something different
The solution: First, we redesigned job postings to be specific and realistic. We cut the application to 4 steps. We created standardized evaluation criteria with hiring managers.
Then we implemented AI-powered video interviewing that asks consistent screening questions and scores responses against the agreed criteria.
Results:
Qualified applicant ratio improved from 1-in-22 to 1-in-5
Recruiter time per hire dropped from 5 hours to 90 minutes
Time-to-hire decreased by 40%
Phone screen-to-offer conversion improved from 15% to 35%
AI worked because recruiters could now spend their time with truly qualified candidates instead of manually screening hundreds of applications. But it only worked after we fixed the job postings and evaluation criteria.
Example 3: Tech Startup — AI Sourcing for Specialized Roles
The pressure: "We need AI sourcing. Our recruiters can't find enough qualified software engineers."
Their proposed solution: Buy AI sourcing tools to identify more candidates faster.
The diagnosis: We asked why qualified candidates weren't responding. The problems weren't about finding people—they were about attracting them:
"Ideal candidate profile" was vague: "senior engineer with AI experience"
Outreach messages were generic and didn't differentiate the company
Employer brand was weak: no engineering blog, no visible tech stack, no employee stories
Compensation was below market but no one had told the CEO
The solution: We defined specific technical requirements (not just "AI experience" but "experience with transformer models in production environments"). We created personalized outreach templates. We built lightweight employer brand content (engineering blog, tech stack page, team profiles). We ran a compensation benchmark and got approval to adjust salary bands.
Then we implemented AI sourcing that identified candidates matching the specific profile and scored them against technical requirements.
Results:
Candidate identification time dropped from 6 hours per role to 45 minutes
Response rates improved from 3% to 14%
Pipeline of qualified candidates increased 3x
Time-to-fill dropped from 75 days to 39 days
AI worked because it was searching for a well-defined profile and reaching out with compelling, personalized messages. Without those foundations, AI sourcing just finds more people who won't respond.
The Pattern: Fix Foundations First, and AI Multiplies the Gains
Every successful AI implementation follows the same sequence:
1. Diagnose what's actually broken
Map workflows, audit tech stack, interview stakeholders, identify root causes.
2. Fix foundational problems first
Job descriptions, evaluation criteria, stakeholder alignment, process simplification.
3. Then deploy AI pointed at specific, measured problems
Now the technology has something solid to work with.
4. Results compound
Foundation fixes can get over 50% improvement. AI multiplies from there.
The companies that skip to step 3? Their AI sits unused or delivers disappointing results. Not because the technology doesn't work, but because it's being asked to solve problems it was never designed for.
Three Questions Before You Buy
Before evaluating any AI recruiting tool, answer these:
1. What specific, measurable problem am I solving?
Not "sourcing takes too long." That's a symptom.
Better: "Recruiters spend 8 hours per week manually searching LinkedIn because our ideal candidate profile isn't defined clearly enough to delegate to junior recruiters or automation."
2. Have I fixed the foundational issues?
Ask:
Are job requirements clear and consistent?
Do hiring managers change criteria mid-search?
Is our intake process forcing us to open roles before they're defined?
Are recruiters trained on current tools?
If any of those answers points to a problem, fix it first. AI will expose these problems, not solve them.
3. How will I measure success?
Define before you buy:
Baseline metrics: Current time-to-fill, cost-per-hire, quality of hire
Success criteria: What will improve? By how much?
Timeline: What does success look like in 30, 90, 365 days?
If you can't answer these before you buy, you won't be able to prove ROI after.
Here's How This Actually Works
When a talent acquisition leader tells me they're being pressed to invest in AI, my first question is: "What's actually broken?"
Many can't answer. Not because they're incompetent, but because they're buried in the day-to-day and haven't had time to step back and diagnose.
That's the work we do in a diagnostic. Here's what happens:
We map your actual workflows. Not what the process doc says. What recruiters actually do. We sit with them chairside, watch where time goes, and find the bottlenecks no one talks about in leadership meetings.
We audit your tech stack. Most TA teams run 8-12 tools and use less than half of what each one can do. We figure out what's working, what's shelfware, and what gaps actually exist.
We interview stakeholders. Recruiters, hiring managers, and sometimes even candidates. We find the misalignments, the places where your process breaks because no one's on the same page.
Then we give you a roadmap. Not a vision deck. A prioritized list of what to fix and when:
30 days: Process changes you can implement immediately (intake redesign, workflow simplification, stakeholder alignment)
3-6 months: Training and change management to maximize tools you already own
6-12 months: The specific AI technologies worth evaluating—and exactly what problems they should solve
Here's what typically happens:
Many problems get solved through process and training improvements. These create immediate gains and build the foundation.
Then we help you identify and implement the right AI tools pointed at the right problems. Vendor evaluations. RFP management. Implementation support. We're vendor-agnostic, so we tell you what actually works—not what pays us a commission.
Recent results:
Financial services client: AI screening reduced time-to-hire by 35% after process redesign
Healthcare client: AI matching improved qualified applicant ratio 4x after fixing job postings
Tech client: AI sourcing cut research time 70% after clarifying ideal candidate profiles
AI works when the foundation is solid.
This Isn't For Everyone
If you're looking for someone to validate a decision you've already made, we're not your firm. If you need a big-name consultancy to present a 200-slide deck to your board, we're not your firm.
But if you're a TA leader who:
Feels pressure to "do something with AI" and wants to do it right
Suspects your recruiting problems might be process issues masquerading as technology gaps
Wants someone to tell you the truth about what will actually drive ROI
Needs a roadmap that fixes foundations first, then deploys AI to multiply the gains
Then we should talk.
Start Here
Book 30 minutes. No charge. We'll talk through what you're seeing, where you're stuck, and whether a diagnostic makes sense.
If it doesn't? We'll tell you. And point you toward what would help.