🤖 Using AI To Screen Job Applicants
Bias, explainability and what HR can’t outsource to a model
AI in hiring sounds irresistible: less time screening résumés, more “objective” rankings, better candidates at scale.
But when a model quietly decides who gets an interview and who never hears back, you’ve moved your legal risk right into the algorithm.
Regulators have already started treating AI screening as just another hiring practice, not a magic exemption:
- The EEOC’s first AI-discrimination settlement involved hiring software that auto-rejected women 55+ and men 60+ for tutoring roles. (EEOC)
- New York City’s Local Law 144 requires bias audits and candidate notices for many automated employment decision tools. (New York City Government)
- Illinois regulates AI video interview tools and has passed broader AI-in-employment amendments to its Human Rights Act, effective 2026. (Illinois General Assembly)
- In the EU, AI used for recruitment and employment is treated as a high-risk category under the AI Act, with strict transparency and oversight requirements. (Artificial Intelligence Act)
This piece is about where the real line is:
- What modern AI hiring tools actually do
- How bias sneaks in, even when you ban “protected-class fields”
- Why explainability matters legally and practically
- What HR teams still must own, no matter how “smart” the software claims to be
🧩 What AI Screening Tools Actually Look Like
Most tools marketed as “AI hiring” are variations on familiar themes:
| Tool type | What it does | AI’s role | Main legal risk |
|---|---|---|---|
| Résumé parsers & ranking systems | Parse CVs, assign scores, auto-rank or auto-reject candidates | NLP models extract entities, predict “fit” from prior hiring data | Disparate impact; opaque criteria eliminate protected groups in bulk |
| Video interview analyzers | Score recorded or live interviews | Models analyze speech, word choice, sometimes facial expressions | Bias against accents, disabilities; emotion/face analysis risk; notice/consent rules (e.g. Illinois AIVIA) |
| Chatbot pre-screeners | Conduct initial Q&A, knock out candidates below a threshold | Models interpret answers, enforce cut-offs on skills, gaps, salary | Poorly chosen filters as proxies for age, disability or pregnancy |
| Gamified / psychometric tests | Online games or assessments; produce a “fit score” | Models map behavior to success based on historic employees | Encodes past bias; hard to explain; FCRA and disability issues |
| End-to-end ATS “AI layers” | Score, route and prioritize candidates through the pipeline | Combine multiple signals into a single “hireability” score | Black-box decisions; vendor/employer blame game when bias surfaces |
Legally, these tools are not special. They’re just screening criteria under existing anti-discrimination law. If a human couldn’t lawfully apply the rule, they can’t launder it through a model.
⚖️ The Legal Lens: Same Old Discrimination, New Delivery Mechanism
Core anti-discrimination rules still drive the analysis
In the U.S., the big statutes still frame the issue:
- Title VII – race, color, religion, sex, national origin
- ADA – disability and reasonable accommodations
- ADEA – age 40+
- Plus state/local protections (sexual orientation, gender identity, marital status, etc.)
A few key points that matter for AI:
- Neutral tools can still be illegal. Under disparate-impact doctrine, a seemingly neutral practice that disproportionately harms a protected group, and that cannot be justified as job-related and consistent with business necessity, violates Title VII.
- Intent is not required. “We just used what the vendor gave us” is not a defense if the impact is discriminatory.
- Vendors don’t replace employer liability. Courts and regulators increasingly treat vendors and employers as shared actors, not substitutes. (Gentry Locke Attorneys)
AI-specific and local rules are layering on
Regulators are starting to carve out AI-specific obligations:
- NYC Local Law 144 (LL 144) – requires covered AI hiring tools to undergo annual independent bias audits, and employers must post audit results and notify candidates when tools are used to evaluate them. (New York City Government)
- Illinois AI Video Interview Act (AIVIA) – requires notice, explanation and consent before using AI to analyze video interviews, along with restrictions on sharing and deletion. (Illinois General Assembly)
- Illinois HB 3773 / IHRA amendments – from 2026, explicitly restrict AI that causes discriminatory effects in employment decisions. (Duane Morris)
- EU AI Act – classifies AI used in employment (including recruitment) as high-risk, triggering obligations for risk management, data governance, human oversight and transparency, with staged enforcement between 2025 and 2027. (Artificial Intelligence Act)
The underlying theme: AI is being treated as amplified hiring, not an exotic new category.
🎯 Where Bias Creeps In (Even When You Remove “Protected Fields”)
You can strip out race, gender and age fields and still have biased models. The patterns creep back through proxies.
Typical bias channels in AI hiring
| Source of bias | What it looks like | Why it’s dangerous |
|---|---|---|
| Biased training data | Model is trained on “successful past hires,” who skew male/white/young | Model learns “be like past employees,” reproducing historical discrimination |
| Proxy variables | ZIP code, school attended, employment gaps, certain employers | Stand-ins for race, age, disability, pregnancy, caregiving duties |
| Label bias | “Good hire” label based on subjective manager ratings | Encodes supervisor bias and passes it down as “ground truth” |
| Measurement bias | Accent, speech speed, facial expression treated as performance | Penalizes disabilities, neurodivergence, cultural differences |
| Data quality gaps | Sparse or noisy data for small subgroups | Model underperforms for those groups → more false negatives |
The iTutorGroup case is the “obvious” version: the software literally rejected applicants based on age cut-offs. (EEOC)
But you can get a similar effect if your model quietly learns that:
- Applicants with older graduation dates,
- Applicants with certain employment gaps, or
- Applicants from specific low-income ZIP codes
are less likely to be hired, and then replicates that pattern at scale.
From a legal perspective, it doesn’t matter whether a biased rule came from:
- One manager’s “gut feeling,” or
- A vendor’s 60-page “proprietary AI fit model” report
If the outcome is discriminatory and not adequately justified and validated, you have a problem.
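A quick way to sanity-check outcomes is to compare selection rates across groups and compute impact ratios; the “four-fifths rule” is a screening heuristic regulators use, not a safe harbor. Below is a minimal sketch in Python, assuming you have one row per applicant with a demographic column (self-reported or statistically estimated) and an “advanced to interview” flag; the column names and data are illustrative, not from any specific ATS export.

```python
# Minimal adverse-impact check: selection rate per group and the impact ratio
# against the highest-rate group (the "four-fifths rule" heuristic, not a legal test).
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.DataFrame:
    """One row per applicant; `selected_col` is True/False, e.g. advanced to interview."""
    rates = df.groupby(group_col)[selected_col].mean().rename("selection_rate")
    out = rates.to_frame()
    out["impact_ratio"] = out["selection_rate"] / out["selection_rate"].max()
    # Ratios under ~0.8 are a common red flag that warrants deeper, validated analysis.
    out["below_four_fifths"] = out["impact_ratio"] < 0.8
    return out.sort_values("impact_ratio")

# Hypothetical data; in practice this comes from your screening logs.
applicants = pd.DataFrame({
    "sex":      ["F", "F", "F", "F", "M", "M", "M", "M"],
    "advanced": [True, False, False, False, True, True, True, False],
})
print(impact_ratios(applicants, "sex", "advanced"))
```

A ratio below roughly 0.8 does not prove discrimination, and one above it does not clear you; it tells you where to look harder and where you will need validation evidence.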
🧠 Explainability: Why “The Model Said So” Isn’t Good Enough
When AI decides who gets interviewed, rejected or hired, someone will eventually ask:
“Why didn’t I move forward?”
That’s where explainability becomes more than a buzzword.
You still need a story you can tell out loud
Even if the math is complex, HR needs to be able to say things like:
- “Your application did not move forward because the role required X, Y, Z, and the system weighted your experience in A and B lower.”
- “The assessment is designed to measure [specific job-related traits], and your score fell below our cut-off in [specific area].”
You don’t need to expose source code, but you do need:
- A high-level summary of input features
- Clarity about which features matter most (one way to report this is sketched after the list)
- A non-technical explanation of how a decision threshold works
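One lightweight way to produce that “which features matter most” summary is a model-agnostic importance report. The sketch below assumes a scikit-learn-style screening model scored on a held-out set; the model, features and data are hypothetical, not any vendor’s actual pipeline.

```python
# Sketch: a plain-language "which inputs matter most" report for a screening model,
# using permutation importance (model-agnostic, computed on held-out data).
# Everything here is illustrative; no protected traits are used as inputs.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "years_relevant_experience": rng.integers(0, 15, 500),
    "required_certification":    rng.integers(0, 2, 500),
    "skills_test_score":         rng.normal(70, 10, 500),
})
# Hypothetical "advanced" label driven by job-related signals.
y = (X["skills_test_score"] + 2 * X["required_certification"] > 75).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: importance ~ {score:.3f}")
```

The specific technique matters less than the outcome: someone in HR can read the report and translate it into the plain-language explanations above.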
Regulatory and litigation angles
Explainability ties into several doctrines and rules:
- Disparate impact – You must show the tool is job-related and consistent with business necessity, which is hard if you can’t articulate what it’s actually measuring.
- Adverse action and consumer-report laws – If you’re using third-party scoring that functions like a background report, you may have to provide adverse-action notices and a chance to dispute errors.
- EU / high-risk AI regimes – Under the EU AI Act, high-risk AI systems in employment contexts must include documentation, transparency and human oversight so people understand and can contest decisions. (Artificial Intelligence Act)
From a litigation standpoint, “the vendor says it’s fair” is not a complete answer. Courts and regulators will be interested in:
- Whether you looked at the audit reports
- How you handled red flags
- Whether you kept using the tool unchanged despite clear disparate impact
🧍‍♀️ What HR Cannot Outsource To A Model
There’s a natural temptation to think, “If we buy the right tech, it will do fairness for us.”
That’s not how any of this works.
1. Deciding what “qualified” means
Job-relatedness is not a model parameter; it’s a human judgment.
Humans still have to:
- Define the core duties of the role
- Decide which skills and traits actually correlate with success
- Reject “nice-to-haves” that are just proxies for pedigree or status
If your input is a vague label like “top performer,” the model will faithfully reproduce all the prejudice that went into past performance ratings.
2. The duty to provide reasonable accommodations
Under disability law, employers must:
- Consider reasonable accommodations, and
- Engage in an interactive process with candidates and employees
You can’t outsource that to a bot that says “your score is low, goodbye.”
You still need humans to:
- Recognize when a candidate might be disadvantaged by the AI tool (e.g., speech disabilities, anxiety, neurodivergence)
- Offer alternative formats (written answers instead of video, extended time, in-person interviews)
- Adjust or waive AI-driven thresholds when needed
3. Evaluating disparate impact and fixing it
Bias audits and impact analyses are governance, not pure math:
- Someone has to decide which metrics matter
- Someone has to decide whether an effect size is acceptable or not
- Someone has to prioritize remediation over convenience when issues show up
Even where local law requires a third-party “bias audit” (like NYC LL 144), the employer still owns the decision to keep, tweak or retire a tool. (New York City Government)
4. Dealing with close cases, exceptions and context
AI is bad at nuance that law cares about:
- Whistleblowing or protected activity
- Non-linear career paths
- Caregiving gaps, military service, immigration hurdles
- Cultural differences in communication style
Those are precisely the areas where human review has to step in and ask, “Is this a legal risk, or just an unconventional candidate?”
🧾 What HR Should Demand From AI Vendors
You’re not just buying software; you’re buying shared liability.
Here’s a useful way to frame vendor questions:
| HR’s goal | What to ask for | Red flag answers |
|---|---|---|
| Understand what the tool actually does | Clear documentation of inputs, outputs, target roles, and limitations | “It’s proprietary / magic / replaces interviews completely” |
| Assess bias and legality | Recent bias audits, methods, and subgroup performance metrics | “We don’t track demographic impact” or “our tool can’t be biased” |
| Preserve human control | Configurable thresholds, ability to override or bypass AI decisions | “You must accept our default scoring pipeline” |
| Manage data responsibly | Details on data sources, retention, deletion and sharing with third parties | Vague answers on where candidate data goes or how long it’s kept |
| Support candidates’ rights | Features for notices, consent, accommodations, and candidate appeals | No way to flag accommodations or manually re-review rejected candidates |
If a vendor is allergic to transparency, assume the enforcement agencies won’t be sympathetic when something breaks.
🧱 Designing A Legally Defensible AI Screening Program
Start with a non-AI baseline
Before you plug in anything “intelligent”:
- Define job descriptions and essential functions clearly.
- Document valid selection criteria (skills, experience, assessments) and why they’re linked to performance.
- Map your current process so you know exactly where AI will plug in and what decisions it will influence.
AI layered on a messy, undocumented process just produces a faster, more opaque mess.
Use AI as decision support, not decision replacement
Safer patterns:
- Use AI to prioritize or flag candidates, not to automatically reject them (one routing pattern is sketched after this list).
- Require human review of borderline cases, flagged anomalies and all rejections above certain risk thresholds.
- Keep humans in charge of final hiring decisions and threshold settings.
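As a concrete illustration of the “flag, don’t auto-reject” pattern, here is a minimal routing sketch. The thresholds, field names and queues are illustrative assumptions; in practice humans set them, document them and periodically re-validate them.

```python
# Sketch of "decision support, not decision replacement":
# the model score only prioritizes review queues; no candidate is auto-rejected.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    candidate_id: str
    model_score: float          # 0.0 - 1.0, from the screening model
    accommodation_requested: bool

def route(result: ScreeningResult,
          priority_cutoff: float = 0.75,
          review_band: float = 0.40) -> str:
    """Return a review queue, never a rejection. Thresholds are illustrative."""
    if result.accommodation_requested:
        return "manual_review"      # accommodation requests always get human eyes
    if result.model_score >= priority_cutoff:
        return "priority_review"
    if result.model_score >= review_band:
        return "standard_review"
    return "manual_review"          # low scores are reviewed by a human, not discarded

print(route(ScreeningResult("c-001", 0.31, False)))  # -> manual_review
```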
Implement ongoing, not one-time, bias monitoring
A good program looks more like continuous compliance than a one-off “audit”:
- Track selection and pass-through rates across protected groups (where you legally can, and often using statistical estimates); a minimal monitoring sketch follows this list.
- Watch for sudden changes after a model update or new feature.
- Document changes you make in response to impact findings: tweaks in thresholds, additional human review, alternate pathways for certain roles.
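Here is the monitoring sketch referenced above: stage-by-stage pass-through rates per group, compared before and after a model update. The columns, groups and the 10-point alert threshold are illustrative assumptions, not a legal standard.

```python
# Sketch: ongoing monitoring of pass-through rates by group and screening stage,
# comparing the periods before and after a model update. Column names are illustrative;
# group data may be self-reported or statistically estimated, subject to local law.
import pandas as pd

def pass_through_rates(events: pd.DataFrame) -> pd.DataFrame:
    """Share of candidates advancing past each stage, per group and period.

    `events` has one row per candidate per stage, with a boolean `advanced`.
    """
    return (events.groupby(["period", "stage", "group"])["advanced"]
                  .mean()
                  .unstack("group"))

# Hypothetical event log; in practice this is pulled from your ATS.
events = pd.DataFrame({
    "period":   ["pre"] * 6 + ["post"] * 6,
    "stage":    ["resume"] * 12,
    "group":    ["A", "A", "A", "B", "B", "B"] * 2,
    "advanced": [True, True, False, True, True, False,    # pre: both groups ~0.67
                 True, True, False, True, False, False],  # post: group B drops to ~0.33
})

rates = pass_through_rates(events)
drop = rates.loc["pre"] - rates.loc["post"]
# Flag groups whose pass-through fell sharply after the update; the 10-point
# threshold is a policy choice, not a legal standard.
print(drop[drop > 0.10].dropna(axis=1, how="all"))
```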
If you’re in a jurisdiction with explicit bias-audit duties (NYC, Illinois, coming state laws), this monitoring isn’t optional anyway. (Duane Morris)
🧑‍💼 Candidate Perspective: What You Can And Can’t Do About AI Screening
Candidates have less leverage but are not powerless.
Realistically possible steps:
- Read the notices carefully. In some jurisdictions, employers must tell you when AI is used and what it does.
- Where the law allows, ask about alternative formats or accommodations (e.g., a live interview instead of an AI-analyzed video).
- If you suspect discrimination, keep records – job posting, screening process description, communications, your own copies of submissions.
- For regulated roles or jurisdictions, you may have specific rights to explanation, reconsideration, or access to certain reports.
You probably can’t “opt out of all AI,” but you can insist it’s not the only word on your candidacy, especially where disability or other protected traits are at issue.
✅ Quick Checklist: What HR Can’t Outsource To A Model
If you remember nothing else, keep this mental list.
Even with the most sophisticated AI, HR still must:
- Set job-related criteria and articulate why they matter
- Decide which data is fair game and which is off-limits
- Provide reasonable accommodations and alternatives
- Perform and act on bias and disparate-impact analyses
- Maintain human oversight for edge cases and critical decisions
- Own the communication with candidates: notices, explanations, appeals
- Keep records that show your process was thoughtful, not rubber-stamped
The promise of AI screening is not “no more hard choices.” It’s faster, more consistent application of the choices you make.
If those choices are well-documented, compliant and carefully monitored, AI can help. If they aren’t, AI will simply scale your problems — and put them in front of regulators, plaintiffs’ lawyers and judges with detailed logs and time stamps.