A growing number of job seekers now face an unusual first step in recruitment: speaking not to a human recruiter, but to an algorithm. In many cases, candidates record answers on camera, submit them, and wait for a decision they will never see explained.
Behind the promise of efficiency lies a rising concern: are AI-powered interviews filtering candidates in ways that are opaque, unchallengeable, and potentially unfair?
The Rise of the Machine Interview
AI-driven hiring tools, often described as asynchronous or automated video interviews, are increasingly used to screen applicants before any human interaction takes place. These platforms typically allow employers to:
Present standardised interview questions
The system gives every candidate the same set of questions in the same format. This is meant to ensure consistency and make it easier to compare candidates fairly.
Collect recorded video responses from candidates
Instead of speaking to a live interviewer, candidates record their answers on video. These recordings are then submitted for review at a later time.
Use AI systems to evaluate speech patterns, language use, and sometimes facial cues
AI software analyses how a candidate speaks and responds, such as tone of voice, word choice, fluency, and sometimes facial expressions or eye movement, to assess communication and behaviour.
Generate automated “fit” or “hireability” scores
Based on its analysis, the system assigns a score or rating that estimates the candidate’s suitability for the job, helping employers decide who moves to the next stage.
Companies use AI interviews to quickly manage large volumes of applicants, but many candidates view them as unclear screening systems that are difficult to understand or to challenge.
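The scoring step described above can be made concrete with a minimal sketch. Everything here is hypothetical: the feature names, weights, and pass threshold are invented for illustration, and real vendor models are proprietary and far more complex. The shape, however, is the same: several signals are reduced to one number and compared against a cutoff.

```python
# Hypothetical sketch of an automated "hireability" scorer.
# Feature names, weights, and the threshold are invented for illustration.

FEATURE_WEIGHTS = {
    "speech_fluency": 0.35,   # e.g. words per minute, pause frequency
    "keyword_match": 0.40,    # overlap with the job description
    "sentiment": 0.25,        # inferred positivity of language
}
PASS_THRESHOLD = 0.6  # candidates at or above this score advance

def hireability_score(features: dict) -> float:
    """Combine normalised (0-1) feature values into a single score."""
    return sum(FEATURE_WEIGHTS[name] * features.get(name, 0.0)
               for name in FEATURE_WEIGHTS)

def screen(features: dict) -> bool:
    """Binary screening decision: only the outcome reaches the candidate."""
    return hireability_score(features) >= PASS_THRESHOLD

candidate = {"speech_fluency": 0.8, "keyword_match": 0.5, "sentiment": 0.9}
print(round(hireability_score(candidate), 3), screen(candidate))  # 0.705 True
```

Note that the candidate above clears the cutoff by a narrow margin; a slightly different weighting could reverse the outcome, which is part of why unexplained scores draw criticism.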
The Real Problem: The “Black Box” Decision
One of the most persistent concerns surrounding AI interview systems is the lack of transparency. In many cases, these tools do not clearly explain:
Why a candidate was rejected
In many AI hiring systems, candidates are not given a clear, detailed reason for rejection. They may only receive a generic “not selected” outcome, without knowing whether it was due to communication style, experience match, or algorithmic scoring thresholds.
Which signals influenced the final outcome
The system processes multiple “signals” such as word choice, speaking speed, confidence indicators, sentence structure, or even facial and vocal patterns. However, it is often not transparent which of these signals carried the most weight in the final decision.
How personality, tone, or “fit” is determined or weighted
Some AI tools attempt to infer traits like confidence, enthusiasm, or cultural “fit” by analysing tone of voice, facial expressions, and language patterns. The problem is that the exact formula or weighting of these traits is usually proprietary, meaning candidates and even recruiters often do not know how these judgments are calculated.
This creates a situation often referred to as a black-box hiring process, in which candidates are assessed and eliminated without meaningful feedback or human explanation.
Even recruiters using these systems may not fully understand how specific scoring decisions are produced, raising broader questions of accountability in hiring.
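To make the opacity concrete: even a simple linear scorer could in principle report a per-signal breakdown of its decision, yet in a black-box process that breakdown typically never reaches the candidate or, in some cases, the recruiter. The signal names, weights, and values below are hypothetical.

```python
# Hypothetical: a linear scorer CAN decompose its output into per-signal
# contributions; in black-box hiring, this breakdown usually stays internal.

WEIGHTS = {"word_choice": 0.5, "speaking_speed": 0.3, "eye_contact": 0.2}

def explain(features: dict) -> dict:
    """Per-signal contribution to the final score (weight * value)."""
    return {name: WEIGHTS[name] * features[name] for name in WEIGHTS}

features = {"word_choice": 0.4, "speaking_speed": 0.9, "eye_contact": 0.2}
contributions = explain(features)
total = sum(contributions.values())
# The candidate typically sees only an accept/reject outcome, not this table.
print(contributions, round(total, 2))
```

For models more complex than a weighted sum, even this kind of decomposition becomes an approximation, which compounds the accountability problem the section describes.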
Evidence of Bias: What Research Indicates
A growing body of research suggests that AI hiring systems can reproduce or even amplify existing biases.
Studies, including research published in Proceedings of the ACM on Human-Computer Interaction (PACM HCI) and related work on algorithmic hiring fairness, indicate that algorithmic recruitment tools can produce uneven outcomes across demographic groups even when sensitive attributes such as gender or race are removed from the model, because proxy variables embedded in the training data can still encode them. Beyond these findings, bias can emerge from several sources:
Historical hiring data that reflects past inequality
AI systems are often trained on previous human hiring decisions. If past hiring was biased (for example, favouring certain schools, genders, accents, or backgrounds), the AI can learn and replicate those patterns, even unintentionally.
Design choices in what traits are measured
The developers decide what the AI should look for, such as confidence, communication style, or “enthusiasm.” These choices can introduce bias because they may reflect subjective ideas of what a “good candidate” looks or sounds like, rather than actual job performance.
Indirect signals such as speech patterns, tone, or language style
Instead of directly measuring skills, AI may rely on indirect cues like accent, vocabulary, speaking speed, or tone of voice. These signals can unintentionally disadvantage candidates who communicate differently due to culture, education, or neurodiversity, even if they are fully qualified.
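The proxy effect underlying these sources of bias can be shown with a toy example. All of the data and the "club" feature below are invented; the point is that dropping the sensitive column does not remove the bias when a correlated feature remains in the training data.

```python
# Toy illustration of proxy bias: the sensitive attribute ("group") is removed
# before training, but a correlated feature ("club") remains. Data is invented.
from collections import defaultdict

historical = [
    # (group, club, hired) - group is dropped before "training"
    ("A", "rowing", 1), ("A", "rowing", 1), ("A", "chess", 1),
    ("B", "football", 0), ("B", "football", 0), ("B", "chess", 1),
]

# "Train" on historical hire rates per club, ignoring group entirely.
counts = defaultdict(lambda: [0, 0])  # club -> [hires, total]
for _, club, hired in historical:
    counts[club][0] += hired
    counts[club][1] += 1
hire_rate = {club: h / t for club, (h, t) in counts.items()}

def predict(club: str) -> bool:
    """Select candidates whose club historically had a majority hire rate."""
    return hire_rate.get(club, 0.0) > 0.5

# New applicants, equally qualified, differing only in a group-correlated club:
print(predict("rowing"), predict("football"))  # True False
```

Even though the model never sees the group label, its decisions track it almost perfectly, because the proxy feature carries the same information.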
In controlled evaluations, researchers have also found that AI-based hiring assessments can lead to measurable disparities between groups, raising concerns about fairness in automated screening systems.
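One common way such disparities are quantified is the selection-rate ratio, often described as the four-fifths rule of thumb in US employment guidance: if one group's pass rate falls below 80% of another's, the screen may warrant scrutiny for adverse impact. The groups and pass rates below are invented for illustration.

```python
# Measuring disparity in screening outcomes with the selection-rate ratio
# (the "four-fifths rule" heuristic). Group data below is invented.

def selection_rate(outcomes: list) -> float:
    """Fraction of a group that passed the automated screen."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a: list, group_b: list) -> float:
    """Ratio of the lower selection rate to the higher one."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

group_a = [True] * 8 + [False] * 2   # 80% pass the automated screen
group_b = [True] * 5 + [False] * 5   # 50% pass

ratio = adverse_impact_ratio(group_a, group_b)
print(round(ratio, 3))               # 0.625, below the 0.8 rule of thumb
```

A ratio below 0.8, as here, does not prove discrimination on its own, but it is the kind of measurable disparity the research literature flags.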
The Candidate Experience Problem
Beyond technical concerns, many job seekers describe AI interviews as psychologically and practically challenging. Common complaints include:
Lack of human interaction or clarification
Candidates cannot ask follow-up questions or clarify misunderstandings because there is no live interviewer, which makes the process feel rigid and one-sided.
Uncertainty about how responses are evaluated
Job seekers often do not know what the system is looking for or how their answers are being scored, which creates confusion and reduces trust in the process.
Heightened anxiety when speaking to a camera instead of a person
Many candidates feel more nervous when recording alone because there is no human feedback, body language, or conversational flow to ease pressure.
Inability to recover from mistakes or reframe answers
In a live interview, candidates can correct themselves or explain further, but recorded AI interviews usually do not allow retries or real-time adjustments.
Discomfort with strict timing and recording conditions
Strict time limits and technical requirements, such as camera setup, lighting, and internet stability, can increase pressure and hurt performance even for qualified candidates. Some applicants abandon the process altogether out of distrust of automated evaluation or frustration at the lack of feedback, and environmental factors can distort outcomes in ways unrelated to actual job ability.
Why Companies Continue to Use AI Interviews
Despite these concerns, adoption continues to grow, and the main drivers are structural:
Very high numbers of applicants per role
Many job openings attract hundreds or even thousands of applications, making it difficult for recruiters to manually review each candidate fairly and efficiently.
Pressure to reduce hiring timelines
Companies often want to fill positions quickly to avoid productivity gaps, so they adopt tools that speed up the early stages of recruitment and reduce delays.
Need to standardise early-stage screening
Employers aim to evaluate all candidates using the same criteria and questions to ensure consistency and make comparisons easier, especially when dealing with large applicant pools.
Cost efficiency in large-scale recruitment
Automating early screening reduces the need for extensive recruiter time and resources, lowering hiring costs while handling large volumes of applicants.

From an employer’s perspective, AI interviews ease the workload by filtering candidates before human interviews begin. Supporters argue these systems can improve consistency and reduce certain human biases by applying the same criteria to all applicants. However, public opinion remains divided, with many candidates questioning the fairness and transparency of algorithmic decisions.
The Ethical Flashpoint: How Should Candidates Be Evaluated?
Early versions of AI interview systems attempted to infer traits such as personality, confidence, or emotional state from facial expressions and voice patterns. These approaches have faced significant criticism due to questions about scientific validity and fairness. Key concerns raised include:
Whether emotional or personality inference is reliable
This questions how accurate AI really is when it tries to judge emotions or personality traits from voice, facial expressions, or wording. Many experts argue that these signals are too complex and context-dependent to be measured reliably by machines.
The risk of discrimination based on appearance, accent, or communication style
AI systems that analyse video or speech may unintentionally favour certain accents, facial expressions, or speaking styles, thereby disadvantaging candidates from different cultural, linguistic, or neurodiverse backgrounds.
The lack of explainability in automated scoring systems
Many AI hiring tools cannot clearly explain how they arrived at a score or decision. This makes it difficult for candidates or employers to challenge or understand the reasoning behind hiring outcomes.
These issues have led to increasing scrutiny of biometric-based evaluation methods in recruitment technology.
A System Still Evolving
The available evidence does not point to a simple conclusion, but rather to an expanding tension within modern hiring practices.
For employers:
AI-powered interviews provide speed, scalability, and efficiency, especially when handling large volumes of job applications that would otherwise overwhelm human recruiters.
For candidates:
The experience is often described as impersonal, difficult to interpret, and challenging to question or appeal, particularly when decisions are made without clear feedback or human explanation.
For researchers and critics:
These systems are viewed as neither completely reliable nor entirely flawed. Instead, they are considered high-impact decision-making tools that can significantly influence employment outcomes. As a result, they are seen as requiring stronger oversight, greater transparency, and more robust ethical safeguards to ensure fairness and accountability in hiring decisions.
Harder to Get a Job or Harder to Understand the Process?
In conclusion, AI-powered interviews are changing how hiring is done by using automated systems to screen and assess candidates. While they can accelerate and streamline recruitment, they also raise concerns about transparency, fairness, and the loss of human interaction. The main issue is not just job access, but whether the process itself is becoming harder to understand and to challenge.


