Case Study

Structuring 18K candidates into a searchable database and enabling candidate re-acquisition

A deep dive into scoping and shipping the Talent Pool MVP for a modern ATS.

TL;DR

The problem: Recruiters were wasting budget re-acquiring candidates they had already vetted. Our ATS couldn't surface those past candidates, which pushed users back to spreadsheets or forced every search to start from zero.
What I did: As the Lead Designer, I owned the design from concept to code. I filtered a broad wish list into a realistic MVP, negotiated technical constraints with engineering, and QA'd the final build myself.
What changed: We shipped an MVP that successfully shifted recruiter behavior away from spreadsheets and into the product. We proved our core bets were right, accepted the cost of our technical trade-offs, and identified the next bottleneck.
The Opportunity

Helping recruiters re-acquire talent they already met

We were building a modern, AI-first ATS to replace clunky legacy systems. Our initial focus was optimizing the active pipeline, where candidates move linearly from application to hire.
Our design partners kept flagging the same inefficiency: starting every search from zero. When a new role opened, they couldn't find the runners-up from last month, so they burned budget on job posts and agency fees to re-acquire 'silver medalists' they already knew.
Our ATS lacked a way to track these high-potential candidates outside of active jobs. So, recruiters defaulted to spreadsheets and desktop folders.
This looked like the highest-value lever to improve efficiency. If we made saving and searching past candidates effortless, we would unlock the value of the network recruiters already owned.
Validating Bets

Finding what exactly to focus on

We clearly needed a separate database where recruiters could park good talent. While this exists in other tools, we needed to validate our specific approach. We broke the hypothesis down into 6 core bets:
Single source of truth: Recruiters need the full history to make decisions.
Effortless entry: If it takes more than one click, they won't use it.
Clean database: Automatic deduplication builds trust.
Easy search: It must be faster than their Excel sheet.
Personalized outreach: Bulk emailing sleeping candidates converts them.
Proactive matching: The system should suggest candidates before the recruiter searches.
We needed to find the absolute minimum scope that delivered value on day one. To validate our assumptions, I walked our design partners through lo-fi sketches.
The feedback revealed a clear hierarchy of needs. While features like 'Proactive matching' and 'Outreach' were exciting, they were classified as luxuries. The immediate pain was simply access.
Recruiters confirmed they were willing to tolerate duplicates in the short term if it meant they could stop using spreadsheets.
I sat down with our Lead Engineer to review the scope against this feedback. We identified that building the "Perfect Deduplication" logic was a massive technical risk that would delay shipping by weeks. Since users had explicitly given us permission to be imperfect here, we made a joint decision to cut it.
We agreed to accept the technical debt of a 'dirty database' in exchange for the speed of delivery, locking the V1 scope to three essentials:
Single source of truth: Recruiters need the full history to make decisions.
Effortless entry: If it takes more than one click, they won't use it.
Easy search: It must be faster than their Excel sheet.
Untangling Logic

Mapping the chaos before writing a line of code

Together with our PM, we translated the sketches into logic flows. We mapped the data model, edge cases, and error scenarios. To make sure we didn't miss anything, we built a state matrix and stress-tested the interaction flows against it.
Then, we brought in the lead engineer to poke holes in the plan and negotiate constraints.
We tackled several problems, including the following (a rough data-model sketch follows the list):
The tagging mess. How to handle tags when a candidate has multiple resumes? We could let the system automatically merge all skill tags and remove duplicates.
The search trade-off. A full-text search across all resumes would be powerful but also complex and costly to build. For the V1 we could rely exclusively on tags.
The 'wrong resume' problem. We knew recruiters would inevitably upload the wrong file, polluting the data. We identified that preventing this required a complex 'identity match' validation system to catch errors before ingestion.
GDPR and data privacy. Effortless entry could create a liability nightmare. We needed to map out how the system handles data retention periods without turning the recruiter into a compliance officer.
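To keep those decisions concrete, here is a rough data-model sketch in TypeScript that reflects the constraints above. The names and fields are illustrative assumptions, not the production schema: tags stay linked to their source resume, search runs over tags only, and retention rules live on the candidate record instead of in the UI.

```typescript
// Illustrative data-model sketch; all names and fields are hypothetical.
interface Candidate {
  id: string;
  name: string;
  email: string;
  phone?: string;
  // GDPR: retention is tracked on the record, not via UI checkboxes.
  source: 'manual' | 'sourced';   // drives which retention rule applies
  retentionExpiresAt: string;     // ISO date after which the record must be purged
  optedOutOfRetention: boolean;   // if true, 'Add to Talent Pool' is disabled
  resumes: Resume[];
  tags: SkillTag[];               // aggregated across resumes, deduplicated for display
}

interface Resume {
  id: string;
  candidateId: string;
  jobId?: string;                 // links back to the past application, if any
  uploadedAt: string;
  fileUrl: string;
}

interface SkillTag {
  value: string;                  // normalized skill, e.g. "java"
  sourceResumeId: string;         // every tag remembers which resume it was parsed from
}
```

The profile shows a deduplicated union of tags across all resumes, while the source link is what later lets us remove only a deleted resume's unique tags.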
Now, it was time to design.
Proposed Solution #1

Adding candidates to the pool: 2 quick pathways

Recruiters can add candidates to the Talent Pool either from an active application or directly through the Talent Pool page.
To add a candidate from an active application, the recruiter simply clicks the 'Add to Talent Pool' button inside the candidate's application profile. The button state toggles to provide immediate system feedback, creating a link to the candidate's new permanent profile without breaking the recruiter's review flow.
To add a candidate from the Talent Pool page, the recruiter uploads a resume or manually fills in the form in the drawer.
Identity Match: The system should validate the Name, Email, and Phone of any new upload against the existing profile. If they don't match, the upload is flagged (a minimal sketch of this check follows below).
GDPR & data privacy: We disable the 'Add to Talent Pool' button if a candidate previously opted out of retention. We also shifted complex retention rule logic (differentiating between manually added and sourced talent) directly into the database schema. This ensured backend compliance without burdening the recruiter with endless UI checkboxes.
Proposed Solution #2

Single source of truth: Talent Pool table and candidate profile

The Talent Pool view starts with a high-density table layout that mirrors the spreadsheets recruiters are already familiar with. This format puts the most valuable data front and center without forcing a click, and it allows quick scanning and sorting so recruiters can spot potential matches without using search.
Clicking a row opens the candidate drawer, the heart of the talent pool. It had to consolidate a candidate's entire history into a single, scannable view. I leveraged our existing application patterns to reduce the learning curve.
Key details and contacts are pinned at the top for a 5-second relevance check and easy access.
Resumes: A chronological view of every resume submitted. This links candidates to specific past jobs, giving recruiters context on previous rejections or offers.
Aggregated skills & XP: A unified, searchable tag cloud parsed from all uploaded documents using our source-specific logic.
Comments: A standard linear feed for async internal notes and feedback.
Smart, merged skills: The system should parse all resumes, aggregate the skills, and remove duplicates.
Source-specific tags: Tags should be linked to their specific source document. If a recruiter deletes an old resume, the system removes only the unique tags associated with that file, ensuring the profile remains accurate (see the sketch below).
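A minimal sketch of that merge-and-cleanup logic, with illustrative names rather than the shipped code:

```typescript
// Tags keep a link to the resume they were parsed from (illustrative shape).
interface SkillTag {
  value: string;           // normalized skill, e.g. "java"
  sourceResumeId: string;  // the resume this tag came from
}

// The profile shows a deduplicated union of tags parsed from every resume.
function aggregateSkills(tags: SkillTag[]): string[] {
  return [...new Set(tags.map((t) => t.value))];
}

// Deleting a resume drops only that file's tags; a skill also backed by
// another resume survives, so the aggregated profile stays accurate.
function tagsAfterResumeDeleted(tags: SkillTag[], deletedResumeId: string): SkillTag[] {
  return tags.filter((t) => t.sourceResumeId !== deletedResumeId);
}
```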
Proposed Solution #3

Easy search: structuring chaos with AI-sourced tags and autosuggest

Resumes are unstructured documents and a nightmare to search. To solve this, we used AI to parse them into structured tags upon upload.
Because we solved the data-integrity issues upstream (identity match, source-specific tagging), we could trust the tag data enough to build a fast, precise search experience that relies exclusively on those tags, rather than a heavy full-text search.
Guided Input. Autocomplete suggests existing tags and displays candidate counts. This prevents 'zero-result' dead ends and shows recruiters exactly what inventory they have before they hit enter.
Strict 'AND' Logic. We chose additive logic (Tag A + Tag B). This aligns with the recruiter's mental model of narrowing a large pool down to the perfect candidate, rather than broadening it.
Visual Evidence. The results table dynamically reorders tags to show the matching skills first. This gives immediate visual confirmation of why a candidate appeared in the results (see the sketch below).
Next actions. Clicking a result opens the full candidate profile drawer, where the recruiter can validate the match, review the resume and past notes, and add the candidate to a new active job.
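To make the search behaviour concrete, here is a rough TypeScript sketch of the strict AND filtering, the matched-tag reordering, and the count-aware autosuggest. The shapes and function names are assumptions for illustration, not the shipped implementation.

```typescript
// Illustrative shape: one row of the Talent Pool table.
interface PoolRow {
  candidateId: string;
  skills: string[]; // aggregated, deduplicated tags for the candidate
}

// Strict AND: a candidate must carry every selected tag to appear at all.
function filterByTags(rows: PoolRow[], selected: string[]): PoolRow[] {
  return rows.filter((row) => selected.every((tag) => row.skills.includes(tag)));
}

// Visual evidence: matched tags render first, so the recruiter immediately
// sees why a candidate is in the results.
function reorderTags(skills: string[], selected: string[]): string[] {
  return [
    ...skills.filter((s) => selected.includes(s)),
    ...skills.filter((s) => !selected.includes(s)),
  ];
}

// Guided input: only suggest tags that exist in the pool, with candidate counts,
// so a query can never dead-end at zero results.
function suggestTags(rows: PoolRow[], query: string): { tag: string; count: number }[] {
  const counts = new Map<string, number>();
  for (const row of rows) {
    for (const tag of row.skills) {
      if (tag.startsWith(query.trim().toLowerCase())) {
        counts.set(tag, (counts.get(tag) ?? 0) + 1);
      }
    }
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([tag, count]) => ({ tag, count }));
}
```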
Tags work well for now, but high-volume recruiting needs more power. We are already planning to add saved searches so recruiters can stop repeating the same work every day. We also want to support all-fields search and complex queries for power users. The long-term goal is to let the system surface the right candidates in one click when a new job opens.
Design Validation

Testing designs with design partners

Moving from lo-fi sketches to high-fidelity prototypes allowed us to test the workflows with our design partners. We needed to verify that the new tools would actually save time in a real recruiting environment.
We avoided asking for opinions and focused strictly on behavior. We gave our partners specific jobs to be done. We watched them try to source a candidate from an active application and filter a talent pool list using tags. This approach highlighted exactly where the design supported their habits and where it caused friction.
“Here is a candidate who was almost a perfect fit. Save him to the database for the future.”
“Imagine you need to find a Java developer you spoke with six months ago. What are your actions?”
The sessions revealed that users hesitated when search results felt opaque. They wanted to know why a specific person appeared at the top. We used this insight to add visible 'matched tags' to the results table.
Shipping

Trimming dev scope

My reality-check meeting with the lead engineer was a tough one. Looking at the timeline, there simply wasn't enough runway to build both the complex backend that handles multiple resumes and the frontend identity check that stops users from making mistakes. We had to pick one.
We decided that messing up the data was not an option. We poured our resources into the backend logic to ensure that adding or deleting resumes never created 'ghost data' that would corrupt the talent pool. The price was cutting the identity check. We launched with a risk that a recruiter could accidentally upload the wrong file, but we chose a bulletproof data model over a hand-holding interface for V1.
Design QA

A good design built wrong is a failed design

I started testing in the staging environment as soon as features were built. I tested the complete user flows against the logic we agreed on at the start.
I logged bugs for visual issues like spacing and layout breaks, as usual. But the important part was sitting down with the lead engineer to review the list. We worked together to separate the showstoppers that had to be fixed immediately from the minor polish that could wait for a fast follow-up. Among the things I checked:
Ensuring that when I deleted a resume, the system correctly removed only the tags associated with that specific file.
Verifying the button state changes instantly when adding a candidate.
Checking that the search field suggests the right tags based on real data.
Ensuring the empty search state gives helpful guidance to the user.
Testing how the profile page looks for candidates with missing tags or data.
This ensured we shipped a solid, usable tool on time without getting stuck on low-priority details.
Outcomes & learnings

What we learned when reality hit

Adoption didn't happen overnight. It took a few weeks for our pilot recruiters to build up enough candidates in the new pool for it to feel useful. But once the data was there, we saw a real shift in behavior.
Here are the three biggest takeaways from the launch.
The core bet paid off: Recruiters were willing to tolerate the manual friction and the lack of advanced automation because we solved their single biggest pain: centralized access. The usage data showed that a "clunky" database inside the ATS was far more valuable to them than a perfect spreadsheet on their desktop.
The missing guardrails caused friction: Our decision to cut the "Identity Match" validation allowed us to ship on time, but it resulted in a steady stream of support tickets where recruiters accidentally merged two different candidates, creating "Frankenstein" profiles with mixed tags. This confirmed that while our backend structure was solid, the frontend safety checks needed to be the priority for V1.1.
We found the new bottleneck: A great tool is useless if it is empty. We realized that getting their years of old data into the new system was a massive new point of friction. That is the next big problem we need to solve.
This directly changed our roadmap. We confirmed the talent pool was our strategic advantage, but we realized our next move wasn't adding more features to it.
We scrapped the plan for outreach tools. The new number-one priority was building a dead-simple data importer; we had to fix the next problem first.
Big thanks to our design partners: Alisa, Jana, and Maria for their time and valuable feedback.