Facial recognition technology (FRT) remains challenging to deploy lawfully in Australia. Regulators consistently treat the technology as high risk, particularly in public-facing environments, placing a strong focus on necessity, proportionality and privacy by design.
However, the NSW Government has committed to mandating the use of FRT to support the operation of a state-wide gambling exclusion register. In this context, the government has published the NSW Code of Practice: Facial Recognition Technology in Hotels and Clubs (16 March 2026) (Code), which provides guidance for hotels and clubs in NSW that already use FRT, or wish to adopt it, before legislation mandating its use takes effect.
A use case for gambling harm minimisation
The Code sits within the specific and tightly defined policy context of gambling harm minimisation.
In NSW, self-exclusion programs allow patrons experiencing gambling harm to voluntarily exclude themselves from venues, often for extended periods. Traditionally, enforcement has relied on staff recognising excluded individuals from photographs, an approach that places the onus on staff to remember faces and stay alert in busy environments, and can lead to missed detections. By automating identification, FRT can more reliably detect self-excluded patrons and support timely intervention.
The Code demonstrates when the use of FRT may be appropriate and how its use should be governed.
Applying the Code to facial recognition technology in other contexts
For operators of gaming venues in NSW, the Code provides a clear, readily adoptable governance framework for FRT. For all other organisations, the Code offers a best practice approach to the deployment of FRT, highlighting the importance of:
- a narrow and justified use case
- deployment designed to ensure compliance with privacy obligations from the outset, and
- ongoing governance, monitoring and accountability.
The Code emphasises that careful design, documentation and oversight are critical when deploying facial recognition technology, and organisations should understand their legal obligations before doing so.
What the Code requires
The Code translates high-level privacy principles into concrete operational requirements. In particular, it requires:
- a privacy impact assessment (PIA) before deployment of FRT, addressing the necessity and proportionality of the solution
- clear and accessible privacy policies, specifically dealing with collection and handling of biometric information
- prominent signage informing patrons that FRT is in use
- strict access controls for biometric data
- storage of FRT data in Australia
- deletion of biometric data once it is no longer required, and
- ongoing monitoring and reporting, including for system accuracy (eg false positives and false negatives).
The Code also emphasises governance and accountability, requiring venues to actively assess whether the system is operating effectively and appropriately over time.
These requirements are not new legal obligations. Rather, they operationalise existing requirements under the Privacy Act 1988 (Cth) (Privacy Act) in a way that is tailored to a specific industry and use case.
Limitations of the Code
The Code has certain limitations. In particular:
- it is not a safe harbour
- compliance with the Code does not displace obligations under the Privacy Act, and
- regulators, including the Office of the Australian Information Commissioner (OAIC), retain full enforcement powers.
Complying with the Code may assist in demonstrating good practice, but does not eliminate all legal risk.
In a broader context, the risk profile increases
The position on FRT differs in broader commercial settings. The Privacy Act and the OAIC treat biometric information used for FRT as sensitive information, triggering a higher compliance threshold. The OAIC's FRT guidance emphasises that organisations looking to adopt FRT should:
- undertake a genuine assessment of necessity and proportionality
- consider whether less intrusive alternatives could achieve the same outcome, and
- be mindful of the risks of collecting data from large numbers of individuals.
In practice, this creates a high bar and makes it difficult for organisations to demonstrate that use of FRT is necessary. Broad or indiscriminate deployment is especially problematic, as it involves capturing biometric data from individuals who have no direct connection to the risk being addressed by the use of FRT.
To minimise risk, governance is the starting point
To comply with legal obligations, organisations considering deploying FRT should begin with their governance framework. At a minimum, this includes:
- a genuine, documented PIA explaining why FRT is required
- a clear articulation of the specific problem being solved
- evidence that less intrusive options are insufficient, and
- robust data handling practices, including retention and deletion controls.
Transparency is critical: passive notice alone is unlikely to be sufficient. Signage must be clear and prominent, and privacy policies must specifically address how biometric data is collected, used, stored and destroyed. Where feasible, express and informed consent should be obtained.
Authors: Matthew McMillan, Partner; Margaret Gigliotti, Partner; Keely O'Dowd, Special Counsel; and Georgina Warren, Lawyer.
For more information on how to comply with your legal obligations when deploying facial recognition technology within your organisation, please contact a member of our experienced team of privacy and technology lawyers.
All information on this site is of a general nature only and is not intended to be relied upon as, nor to be a substitute for, specific legal professional advice. No responsibility is accepted for loss occasioned to any person acting, or refraining from acting, as a result of any material published.