    The Platypus Problem in TA Technology

    By Mark Tomasino


    TA technology categories no longer reflect how hiring actually works. Learn a better framework for evaluating recruiting tools based on work, outcomes, and gaps.

    As the father of a precocious, animal-obsessed five-year-old, I get a steady stream of education on the natural world. Did you know the blue whale is not only the largest animal alive today, but the largest animal to have ever lived? Or that the peregrine falcon is technically the fastest animal on Earth, reaching speeds of up to 240 miles per hour during its hunting dive, known as a stoop?

    Then there is this one.

    What animal:

    • has webbed feet and a bill like a duck

    • lays leathery eggs and produces venom like a reptile

    • has fur and produces milk like a mammal

    • uses electroreception to hunt underwater like a shark

    That curious-sounding creature is the platypus. It is a rare type of mammal called a monotreme, found exclusively in Australia. When European scientists first examined a platypus specimen in the late 1700s, it violated their existing classification system, or taxonomy, so completely that many initially dismissed it as a hoax.

    The platypus is also the inspiration for this article.

    The more I evaluate new talent acquisition technology providers, the more platypus-like solutions I encounter. Tools that span multiple categories, defy clean classification, and challenge the way our industry organizes and talks about technology. It has become increasingly clear to me that we have a taxonomy problem in TA tech.

    When Taxonomies Worked

    I first became aware of how the industry classifies HR technology and services around 2014. At the time, I was working as a solutions architect and regularly helping large enterprise employers map their TA tech stacks, identify gaps, and build business cases for better integration and, in some cases, vendor consolidation.

    While it did not feel simple at the time, in hindsight the exercise was relatively straightforward. You identified the point solutions a client was using, noted where capabilities were missing, and evaluated whether the work could be done better, more efficiently, or at lower cost. The categories were familiar and widely understood.

    My working taxonomy looked something like this:

    • Applicant Tracking System

    • Career Site

    • Talent Network or CRM

    • Candidate Search and Sourcing

    • Job Distribution

    • Career Fairs and Hiring Events

    • Assessments

    • Video Interviewing

    • Background and Drug Screening

    • Onboarding

    • HRIS and Payroll

    • Time and Attendance

    • Benefits Administration

    • Learning Management

    • Performance Management

    The lines between categories were not perfectly clean, but they were manageable. Platforms like Avature, SmashFly, and Findly might check two or three boxes. All-in-one suites such as ADP, SAP SuccessFactors, and the then-emerging Workday covered much of the core HR functionality, while still relying on recruiting-focused point solutions around the edges.

    With ninety minutes and a whiteboard, I could usually sketch a reasonably accurate hiring system and process map, highlight redundancies, and identify clear areas of opportunity.

    This environment also made vendor evaluation easier. If you needed an ATS and were not using your HRIS provider’s option, you evaluated a short list of best-in-breed platforms such as iCIMS, Taleo, Jobvite, Greenhouse, and a few others. If you needed job distribution, you could quickly compare Broadbean, eQuest, and JobTarget and make a defensible decision.

    That is not the reality today.

    Why Taxonomy No Longer Works

    About three years ago, I began to feel that something was off with how our industry categorizes TA technology. It started with a popular post that makes the rounds on LinkedIn each year, the now well-known TA Tech landscape slide. It attempts to capture the breadth of the market by organizing hundreds of vendor logos into dozens of neatly labeled boxes. At last count, there were more than 30 solution categories represented.

    At first glance, it is visually satisfying. The grid feels orderly. The effort behind it is obvious. It sparks discussion and debate, which is part of its appeal.

    But the more time I spent with it, the more one question kept nagging at me. Is this actually useful?

    Put differently, does our current taxonomy help buyers make better decisions, or does it make the evaluation process more confusing?

    The core issue is that taxonomy asks buyers to start with categories, even though categories no longer reflect how the work of hiring actually gets done.

    Consider the categories themselves. What is the practical difference between “AI Automated Interviews and Agents” and “Interview Intelligence” tools? If you are a TA leader under pressure to modernize, how are you supposed to interpret that distinction, let alone evaluate vendors accordingly?

    Or take a well-known example like Paradox, which was recently acquired by Workday. Where does it belong? “Text bots and chat bots” feels directionally correct, but incomplete. You will also find Paradox listed under “Smart Scheduling” and “ATS Enhancements.” They later added full ATS functionality, so now they appear there as well. Should they also live under “Candidate Experience”? Each answer is defensible, which is precisely the problem.

    The same challenge applies to platforms like Eightfold, Phenom, Gem, Humanly, or Sense. None of them fit cleanly into a single labeled box. Each spans sourcing, engagement, automation, analytics, and workflow. It is no surprise that many vendors now describe themselves as “AI-powered Talent Platforms” with a carefully chosen word like “Experience” or “Intelligence” doing a lot of work. The language stretches because the categories no longer hold.

    This is not a criticism of anyone attempting to map the TA tech landscape, nor is it a knock on vendors trying to differentiate themselves. The same struggle shows up in partner directories, software review sites, and conference exhibit halls. Everyone is trying to impose order on a space that is changing faster than static labels can keep up with.

    I am also a culpable participant. One of my current projects at Talivity is rethinking how we organize our own partner solution directory. We recently launched our annual trends survey, and I spent an embarrassing amount of time agonizing over which multiple-choice options to include when asking leaders about recent and planned technology purchases. The categories felt simultaneously necessary and insufficient.

    The root cause is not hard to identify. Capabilities have converged. Tools that once did one thing now do many. AI has acted as an accelerant, not by creating entirely new kinds of work, but by cutting across existing ones and enabling vendors to expand functionality rapidly. New category labels proliferate. Feature sets grow. Marketing language gets fuzzier. The landscape becomes fragmented, noisy, and increasingly difficult to interpret.

    Unfortunately, this is not just an aesthetic concern.

    For buyers, it distorts the evaluation process. Leaders plan to invest in “AI-driven Talent Intelligence” or “Agentic AI” without a clear understanding of what those labels mean in practice. Teams burn time on demos with vendors that were never going to be a fit. Shortlists are built around categories instead of business needs. Solutions get purchased that address surface-level symptoms while leaving underlying problems untouched.

    Over time, this creates deeper damage. Trust in technology erodes. Recruiters grow skeptical of the next tool. Leaders lose confidence in their ability to make sound technology decisions. Change fatigue sets in, making future improvements harder even when the right opportunities exist.

    That is the real cost of a broken taxonomy.

    The industry does not need more categories. It needs a more durable mental model for evaluating TA technology.

    A Different Way to Think About Choosing TA Technology

    So if taxonomy no longer helps us make sense of the landscape, what do we do instead?

    I want to offer a different way of thinking about technology decisions. It is an inversion of how most TA technology evaluations happen today. Instead of starting with solution categories or vendor shortlists, it starts with three simpler questions:

    • What work actually needs to be done?

    • What outcomes matter?

    • Where is the gap between where we are today and where we want to be?

    Only after those questions are answered does it make sense to evaluate technology.

    The Core Categories of TA Work

    Stepping away from categories forces us to focus on the work itself. Not the tools. Not the features. The work.

    What I have found is that this work is far more stable than the technology that supports it. Vendors change. Capabilities expand. Labels blur. The underlying work remains largely the same.

    Most TA teams are responsible for some combination of the following:

    Running the core hiring workflow: Requisitions, approvals, compliance, and moving candidates from open role through offer and hire.

    Generating candidate traffic: Job advertising, paid media, job distribution, and other demand generation efforts.

    Finding and surfacing the right candidates: External sourcing, rediscovery of past applicants, referrals, and identification of internal or adjacent talent.

    Engaging and nurturing candidates: Campaigns, messaging, chat, and ongoing communication so strong prospects do not fall through the cracks.

    Increasing recruiter throughput: Automation, scheduling support, workflow assistance, and productivity tools that reduce administrative burden.

    Evaluating and selecting candidates: Screening, matching, interviewing, and assessment that support confident and defensible hiring decisions.

    Presenting the employer brand clearly and credibly: Career site experience and employer brand signals that help candidates understand fit.

    Measuring what is working and why: Reporting, analytics, attribution, and insights to inform better decisions.

    Planning hiring realistically: Labor market data, skills insights, and workforce intelligence to set achievable hiring plans.

    Making the tech stack work as a system: Integration, data quality improvement, skills foundations, and complexity reduction across tools.

    These are not traditional categories. They are descriptions of work. They are the things hiring teams must get right regardless of how the market evolves.

    Five Dimensions of Success

    Once the work is clear, the next question becomes: what does success actually look like?

    In practice, the key metrics TA leaders care about tend to fall into five broad dimensions:

    Time: How quickly hiring moves through the process, including time to fill, time in stage, interview scheduling latency, offer turnaround time, and total cycle time from requisition approval to start date.

    Cost: How efficiently hiring resources are used, including cost per hire, cost per applicant, advertising efficiency, agency spend, recruiter time allocation, and total cost of ownership across the tech stack.

    Quality: How well candidates and hires fit the role and perform over time, reflected in early attrition, ramp time, performance outcomes, and hiring manager confidence.

    Volume: The ability to hire at scale, shown through applicant flow, pipeline depth, fill rates during peak hiring periods, and responsiveness to growth or seasonal demand.

    Sentiment: How the hiring process is experienced by candidates, recruiters, hiring managers, and leaders.

    Every technology investment improves some outcomes while putting pressure on others. Those tradeoffs are not inherently wrong, but they need to be understood.
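
    To make that tradeoff thinking concrete, here is a minimal sketch of an outcome-first scorecard, in code. The five dimension names come from the list above; the vendor, the directional effect ratings, and the weights are all hypothetical placeholders, not a real evaluation.

```python
from dataclasses import dataclass

# The five outcome dimensions described above.
DIMENSIONS = ("time", "cost", "quality", "volume", "sentiment")

@dataclass
class Scorecard:
    """Expected directional effect of a candidate tool on each dimension:
    +1 likely improves it, 0 roughly neutral, -1 likely puts pressure on it."""
    vendor: str
    effects: dict[str, int]

    def weighted_score(self, weights: dict[str, float]) -> float:
        # Weights encode which outcomes matter most to this team right now.
        return sum(weights[d] * self.effects.get(d, 0) for d in DIMENSIONS)

# Hypothetical example: a scheduling-automation tool that speeds hiring up
# and frees recruiter time, while adding licensing cost.
tool = Scorecard(
    vendor="ExampleSchedulerCo",  # placeholder, not a real vendor
    effects={"time": 1, "cost": -1, "quality": 0, "volume": 1, "sentiment": 1},
)

# A high-volume team optimizing primarily for speed and scale.
weights = {"time": 0.35, "cost": 0.15, "quality": 0.20, "volume": 0.20, "sentiment": 0.10}
print(f"{tool.vendor}: {tool.weighted_score(weights):+.2f}")
```

    The number itself matters less than the forcing function: writing down an expected effect for every dimension, before any demo, makes the tradeoffs explicit instead of implicit.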

    Where Problems Start to Become Clear

    Once there is a firm grasp on the work that needs to be done and the outcomes that matter, problems tend to emerge on their own. Not because someone labels them, but because the gap between current reality and desired results becomes hard to ignore.

    Take time as an example. Saying “it takes too long to hire” describes the symptom, not the problem. The real work begins by looking at where time is actually being lost. Is it between initial screening and interview scheduling? Is it after interviews, when decisions stall or feedback loops break down? Each of those points to a different part of the work. And the solution is not automatically a new tool. It may be a missing capability, but it could just as easily be poor configuration, underused features, unclear ownership, or simple process friction.
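
    As a minimal sketch of that diagnostic, assuming you can export stage-transition timestamps per candidate from your ATS (the stage names and the data shape here are hypothetical):

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

# Hypothetical ATS export: (candidate_id, stage, entered_at),
# already ordered by time within each candidate's journey.
events = [
    ("c1", "applied",   datetime(2024, 3, 1)),
    ("c1", "screened",  datetime(2024, 3, 4)),
    ("c1", "interview", datetime(2024, 3, 18)),
    ("c2", "applied",   datetime(2024, 3, 2)),
    ("c2", "screened",  datetime(2024, 3, 3)),
    ("c2", "interview", datetime(2024, 3, 15)),
]

# Group each candidate's journey, then measure days between stages.
journeys = defaultdict(list)
for candidate, stage, entered_at in events:
    journeys[candidate].append((stage, entered_at))

gaps = defaultdict(list)
for steps in journeys.values():
    for (stage_a, t_a), (stage_b, t_b) in zip(steps, steps[1:]):
        gaps[f"{stage_a} -> {stage_b}"].append((t_b - t_a).days)

# The transition with the largest median gap is where time is being lost.
for transition, days in sorted(gaps.items(), key=lambda kv: -median(kv[1])):
    print(f"{transition}: median {median(days)} days")
```

    In this toy data, the screening-to-interview gap dominates, which points at scheduling rather than sourcing. On a real export, the same handful of lines tells you which part of the work the time is actually leaking from.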

    Quality follows a similar pattern. High turnover within the first 90 days is a clear signal that something is off, but again, it is not yet a diagnosis. Understanding why requires going back to the work. Are role expectations being communicated clearly during the hiring process? Are evaluation criteria aligned with what actually predicts success? Is onboarding reinforcing the right signals, or exposing mismatches too late? Measuring sentiment from hiring managers and departing employees often reveals whether the issue lives in selection, expectation-setting, onboarding, or some combination of the three.
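
    The same gap-first analysis works for quality. A small sketch, assuming a hypothetical export of hires with a start date, an exit date where applicable, and a segment to slice by (source of hire here, though hiring manager, role, or recruiter work equally well):

```python
from datetime import date

# Hypothetical hires export: (source_of_hire, start_date, exit_date or None).
hires = [
    ("referral",  date(2024, 1, 8),  None),
    ("job_board", date(2024, 1, 15), date(2024, 3, 1)),
    ("job_board", date(2024, 2, 1),  date(2024, 4, 10)),
    ("referral",  date(2024, 2, 5),  None),
    ("job_board", date(2024, 2, 12), None),
]

def early_attrition_rate(rows, window_days=90):
    """Share of hires who exited within window_days of their start date."""
    early_exits = sum(
        1 for _, start, exit_date in rows
        if exit_date is not None and (exit_date - start).days <= window_days
    )
    return early_exits / len(rows) if rows else 0.0

# Slicing turns "quality is off" into "quality is off here, specifically".
for source in sorted({s for s, _, _ in hires}):
    segment = [row for row in hires if row[0] == source]
    print(f"{source}: {early_attrition_rate(segment):.0%} early attrition ({len(segment)} hires)")
```

    A skew like this does not name the root cause on its own, but it narrows where to look: if early attrition concentrates in one source, role, or manager, the questions above about expectations, criteria, and onboarding have a much smaller search space.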

    I could keep going, but the point is not to be exhaustive. The point is that when problems are framed this way, they stop being abstract frustrations and start becoming understandable, actionable gaps. Only then does it make sense to ask whether technology can help, and if so, where.

    That shift in thinking is subtle, but it changes everything about how technology decisions get made.

    A Closing Thought

    In the natural world, taxonomies work because exceptions like the platypus are rare and evolution moves slowly. Classification systems have time to settle, and the rules tend to hold.

    TA tech does not enjoy that luxury. Capabilities converge. Tools quickly expand beyond their original purpose. AI cuts across the hiring process end to end, blurring category lines faster than they can be redefined.

    When that happens, categories stop guiding decisions and start distorting them. Teams chase labels instead of outcomes, evaluate solutions before fully understanding the work they are trying to improve, and confuse activity with progress.

    A more durable approach starts earlier. It begins with the work that must be done, the outcomes that matter, and the gaps between current reality and desired results. Only then does technology enter the conversation, not as a category to shop, but as a potential intervention with a clear job to do.

    This approach sits at the center of our work at Talivity. We help employers understand their hiring challenges, evaluate what modern TA technology can and cannot do, and make thoughtful connections between the two. When decisions are grounded in real work and real outcomes, technology has a far better chance of delivering on its promise.