The shift from experimentation to mainstream adoption
Artificial intelligence has rapidly moved from experimentation to mainstream adoption in recruitment processes. Across Australia and internationally, employers are increasingly relying on automated tools to screen, rank and assess job candidates.
The commercial appeal is clear. AI promises faster shortlisting, greater consistency in screening and reduced administrative burden. In a tight labour market, the ability to reduce time-to-hire while freeing recruiters to focus on substantive engagement is highly attractive.
However, efficiency does not displace legal obligation. As AI becomes embedded in hiring processes, employers must ensure that technological innovation does not create exposure under discrimination law.
What do AI recruitment tools actually do?
Most AI tools currently operate at the early stages of recruitment. Common applications include:
- Automated resume screening
- Candidate scoring and shortlisting
- Online pre-screening assessments and automated interviews
- Talent scouting for future roles
The Australian Responsible AI Index 2025 indicates that a majority of Australian organisations already use AI in recruitment to some degree.
While these tools can streamline processes, they also influence who progresses - and who does not. That influence has legal consequences. Decisions shaped by algorithmic screening remain subject to discrimination law, and risks can arise where bias - whether embedded in data, design or deployment - affects outcomes.
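To make that concrete, here is a deliberately simplified, hypothetical sketch of an automated screening step. Every criterion, threshold and field name below is invented for illustration; no real vendor system is described. The point is that whoever configures these rules is setting selection criteria, and those criteria carry legal consequences.

```python
# Hypothetical screening rules - all criteria and cut-offs are invented.
REQUIRED_KEYWORDS = {"python", "sql"}   # assumed role requirements
MIN_YEARS_EXPERIENCE = 5                # seemingly neutral cut-off
MAX_CAREER_GAP_YEARS = 1                # may penalise carers and parents

def screen(candidate: dict) -> bool:
    """Return True if the candidate is shortlisted."""
    has_skills = REQUIRED_KEYWORDS <= set(candidate["keywords"])
    experienced = candidate["years_experience"] >= MIN_YEARS_EXPERIENCE
    no_long_gap = candidate["career_gap_years"] <= MAX_CAREER_GAP_YEARS
    return has_skills and experienced and no_long_gap

candidate = {
    "keywords": ["python", "sql", "aws"],
    "years_experience": 8,
    "career_gap_years": 2,   # e.g. a period of parental leave
}
print(screen(candidate))  # False: excluded solely by the career-gap rule
```

A rule like the career-gap cut-off looks neutral, but as discussed below it can disadvantage candidates with protected attributes.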
The legal framework: AI is not beyond regulation
Australia does not yet have AI-specific legislation. However, recruitment decisions made with the assistance of AI remain subject to existing anti-discrimination and employment laws.
Relevant federal legislation includes the Fair Work Act 2009 (Cth) and the Disability Discrimination Act 1992 (Cth), alongside state and territory laws such as the Equal Opportunity Act 2010 (Vic), the Anti-Discrimination Act 1977 (NSW), the Anti-Discrimination Act 1991 (Qld) and the Discrimination Act 1991 (ACT).
These statutes prohibit discrimination on the basis of attributes including age, race, disability, gender identity, religious belief and marital status. Importantly, they apply regardless of whether decisions are made by humans or machines.
Employer liability: can technology be a shield?
A central question is whether liability shifts when discrimination arises from a third-party AI tool.
Under Australian law, the likely answer is no. Employers are responsible for discriminatory conduct that occurs in recruitment if that conduct is attributable to them or their employees. Using a third-party tool as part of that process does not materially shift that responsibility.
Employers choose to deploy the system, define the selection criteria, interpret the outputs and make the ultimate hiring decision. Courts and tribunals are likely to take a purposive approach to interpretation, recognising that anti-discrimination laws exist to prevent unfair exclusion from employment, regardless of the mechanism used.
Recent litigation involving automated decision-making suggests courts are reluctant to allow technology to operate as a legal shield.
In short: if the recruitment process is discriminatory, the employer remains responsible.
How discrimination risk can arise
AI tools do not need to be intentionally biased to create unlawful outcomes.
Direct discrimination
Direct discrimination may arise if a system treats a candidate less favourably because of a protected attribute.
While overt programming of discriminatory criteria is unlikely, bias can emerge through training data. A widely reported example involved Amazon’s experimental recruitment system, which disadvantaged female candidates after being trained on historical data drawn from a male-dominated workforce, as reported by Reuters. The system replicated past bias rather than correcting it.
Employers should therefore seek assurances that AI tools are designed with bias mitigation in mind, and that providers actively and routinely test for discriminatory patterns. The lesson is clear: historical data can encode historical inequality.
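The mechanism is worth making concrete. The toy example below uses entirely invented data to show how a scorer "trained" on historical hiring outcomes can learn to penalise a keyword that acts as a proxy for a protected attribute rather than for merit.

```python
# Hypothetical illustration only: a toy scorer "trained" on invented
# historical outcomes from a male-dominated workforce. No real system,
# dataset or vendor is described.
from collections import Counter

# Each record is (resume keywords, hired?). "netball" acts here as a
# proxy for gender, not for ability.
history = [
    ({"python", "golf"}, True),
    ({"python", "rugby"}, True),
    ({"java", "golf"}, True),
    ({"python", "netball"}, False),
    ({"java", "netball"}, False),
]

hired, seen = Counter(), Counter()
for keywords, was_hired in history:
    for kw in keywords:
        seen[kw] += 1
        hired[kw] += was_hired

def score(keywords):
    """Average historical hire rate of a candidate's keywords."""
    rates = [hired[kw] / seen[kw] for kw in keywords if kw in seen]
    return sum(rates) / len(rates) if rates else 0.0

# Two equally skilled candidates; only the proxy keyword differs.
print(round(score({"python", "golf"}), 2))     # 0.83 -> shortlisted
print(round(score({"python", "netball"}), 2))  # 0.33 -> screened out
```

Nothing in the code mentions gender, yet the historical pattern does the discriminating.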
Indirect discrimination: the more significant risk
Indirect discrimination is likely to present the greater legal risk.
It arises where a requirement or practice disproportionately disadvantages people with a protected attribute and is not reasonable in the circumstances.
For example:
- Language-heavy assessments may disadvantage candidates for whom English is a second language.
- Systems that penalise non-linear career paths may disproportionately affect parents returning from parental leave.
- Algorithms that privilege certain educational backgrounds may entrench socio-economic bias.
If disproportionate impacts emerge and reasonable mitigations are available, a tribunal may find the practice unlawful.
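One practical way to detect disproportionate impact is to compare selection rates across groups. The sketch below uses invented figures and the US EEOC "four-fifths" heuristic; that 80% benchmark has no formal status under Australian law, but it is a common starting point for internal monitoring.

```python
# Hypothetical monitoring sketch with invented figures. The 80%
# ("four-fifths") threshold is a US EEOC heuristic, not an Australian
# legal test.
selection_rates = {
    "group_a": 60 / 100,  # 60 of 100 applicants shortlisted -> 0.60
    "group_b": 24 / 80,   # 24 of 80 applicants shortlisted  -> 0.30
}

highest = max(selection_rates.values())
for group, rate in selection_rates.items():
    impact_ratio = rate / highest
    status = "review" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f} ratio={impact_ratio:.2f} [{status}]")
# group_a: rate=0.60 ratio=1.00 [ok]
# group_b: rate=0.30 ratio=0.50 [review]
```

A flagged ratio is a trigger for investigation, not a legal conclusion in itself.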
Employers must also consider obligations to provide reasonable adjustments. If an AI-driven process disadvantages a candidate because of disability or another protected attribute, failure to modify the process or provide an alternative assessment may itself constitute discrimination.
Practical risk mitigation: balancing innovation and fairness
AI recruitment tools offer genuine operational benefits, but they do not operate outside the law.
Employers can reduce legal exposure by embedding safeguards into procurement and implementation decisions:
- Maintain meaningful human oversight - AI outputs should inform, not replace, human judgment (a minimal sketch of this pattern appears after this list).
- Conduct regular bias audits - Monitor outcomes for disproportionate impacts.
- Prioritise transparency - Select vendors able to explain how their systems function.
- Train HR teams - Ensure internal decision-makers understand how discrimination law applies to AI-assisted processes.
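By way of illustration only, the sketch below shows one hypothetical way to implement the first two safeguards: the AI score is recorded as an advisory input, a named human reviewer makes and documents the final decision, and the log is retained so outcomes can later be audited for disproportionate impacts. All field names and values are invented.

```python
# Hypothetical human-in-the-loop decision record - structure and field
# names are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ScreeningDecision:
    candidate_id: str
    ai_score: float    # advisory input only, never the decision itself
    reviewer: str      # the accountable human decision-maker
    decision: str      # "progress" or "decline"
    reasons: str       # documented, attribute-neutral reasons
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[ScreeningDecision] = []   # retained for later bias audits

def record_decision(candidate_id, ai_score, reviewer, decision, reasons):
    entry = ScreeningDecision(candidate_id, ai_score, reviewer,
                              decision, reasons)
    audit_log.append(entry)
    return entry

record_decision("c-104", ai_score=0.42, reviewer="j.smith",
                decision="progress", reasons="relevant project experience")
```

Records like these also make it possible to run the selection-rate monitoring shown earlier.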
As regulation continues to evolve, the core principle remains constant: efficiency must not come at the expense of equal opportunity.
All information on this site is of a general nature only and is not intended to be relied upon as, nor to be a substitute for, specific legal professional advice. No responsibility for the loss occasioned to any person acting on or refraining from action as a result of any material published can be accepted.