Introduction
The rapid adoption of Artificial Intelligence (AI) across the industry is transforming the Australian financial services (AFS) landscape, allowing financial services licensees to automate complex tasks and enhance client engagement. However, the Australian Securities and Investments Commission (ASIC) has identified a critical governance gap where AI usage often outpaces the development of robust risk management frameworks, potentially leading to systemic breaches of the law.
Maintaining regulatory compliance while using AI in financial advice requires a structured approach that maps legal principles to technical system requirements. This guide gives Australian financial services licensees (AFS Licensees) and financial advisers seeking legal support the essential framework for managing their compliance obligations, ensuring that AI outputs remain fair and consistent with the best interests duty under the Corporations Act 2001 (Cth).
Understanding the Best Interests Duty in the Age of AI
Section 961B: Technical Requirements for AI Developers
The core legal obligation for financial advice in Australia is the Best Interests Duty, outlined in Section 961B of the Corporations Act 2001 (Cth).
For a licensee to use AI for personal advice, the system’s process must align with the “safe harbour” provisions in Section 961B(2). This means translating these legal steps into concrete technical requirements for your AI developers.
A compliant AI system must be engineered to follow a structured, auditable process. Key technical implementations include:
- Identifying Client Circumstances: To satisfy sections 961B(2)(a) through (c), the AI must do more than process a static form. It needs to engage in multi-turn dialogue to uncover a client’s objectives, financial situation, and needs. The system must incorporate “gap detection” algorithms to identify incomplete or contradictory information and prompt the client for clarification.
- Basing Judgements on Client Circumstances: Section 961B(2)(f) requires that all judgements are based on the client’s specific situation. To meet this, the AI must generate a “causal trace” or an “explainability log.” This log creates a transparent link, mapping the client’s data inputs directly to the advice outputs, demonstrating how the recommendations were derived.
- Applying a Standard of Professional Judgement: The “catch-all” provision in Section 961B(2)(g) requires taking any other step a person with reasonable expertise would. An AI system can address this by having hard-coded triage logic. This mechanism allows the AI to recognise scenarios beyond its expertise, such as complex tax situations, and automatically escalate these cases to a human financial adviser.
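To make these requirements concrete, the sketch below shows, in simplified Python, how gap detection and hard-coded triage logic might be expressed. The data structure, field names and escalation triggers are illustrative assumptions only, not a prescribed implementation.

```python
from dataclasses import dataclass, field

# Illustrative escalation triggers only; a licensee would define these with
# its compliance team.
ESCALATION_TRIGGERS = {"self_managed_super_fund", "complex_tax_structure", "estate_planning"}

@dataclass
class FactFind:
    objectives: list[str] = field(default_factory=list)
    annual_income: float | None = None
    total_debts: float = 0.0
    risk_tolerance: str | None = None   # e.g. "conservative", "balanced", "growth"
    flags: set[str] = field(default_factory=set)

def detect_gaps(fact_find: FactFind) -> list[str]:
    """Gap detection (s 961B(2)(a)-(c)): list missing or contradictory items
    that must be clarified with the client before any advice is generated."""
    gaps = []
    if not fact_find.objectives:
        gaps.append("No stated objectives")
    if fact_find.annual_income is None:
        gaps.append("Income not provided")
    if fact_find.risk_tolerance is None:
        gaps.append("Risk tolerance not assessed")
    # Example contradiction check: significant debt alongside a growth profile
    if fact_find.total_debts > 0 and fact_find.risk_tolerance == "growth":
        gaps.append("Confirm growth risk profile given outstanding debts")
    return gaps

def requires_human_adviser(fact_find: FactFind) -> bool:
    """Hard-coded triage logic (s 961B(2)(g)): escalate scenarios outside the
    system's expertise to a human financial adviser."""
    return bool(fact_find.flags & ESCALATION_TRIGGERS)
```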
Using RAG for Product Investigation & Preventing Hallucination
A critical step in the safe harbour provision is Section 961B(2)(e), which mandates a “reasonable investigation” into financial products that might meet the client’s needs.
For a generative AI model, relying on its static training data is insufficient and poses a significant compliance risk. This is because that data may be:
- Outdated; or
- A source of “hallucinations”—plausible but factually incorrect information.
To address this, your AI system must use a technique called Retrieval-Augmented Generation (RAG). This architecture forces the AI to query a verified, real-time database, such as your firm’s Approved Product List (APL), instead of relying on its internal knowledge.
By grounding its responses in a controlled and current data source, RAG ensures that any product investigation is based on accurate information. Providing financial advice based on stale or hallucinated product details would constitute a clear breach of the Best Interests Duty, which would trigger breach reporting by AFS licensees.
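As a rough illustration of the architecture, the sketch below grounds a product investigation in documents retrieved from the APL before the model generates a response. The `apl_search` and `llm_generate` callables are placeholders for the licensee’s own retrieval index and model API, not any specific vendor’s SDK.

```python
# Minimal RAG sketch: retrieve from the verified APL, then generate an answer
# grounded only in the retrieved extracts.

def investigate_products(client_needs: str, apl_search, llm_generate) -> dict:
    # 1. Retrieve candidate products from the verified, current APL only.
    documents = apl_search(query=client_needs, top_k=5)

    # 2. Ground the model's answer in the retrieved extracts.
    context = "\n\n".join(doc["text"] for doc in documents)
    prompt = (
        "Using ONLY the Approved Product List extracts below, identify products "
        "that may meet the client's needs. If no suitable product appears, say so.\n\n"
        f"APL extracts:\n{context}\n\nClient needs:\n{client_needs}"
    )
    answer = llm_generate(prompt)

    # 3. Return the retrieval metadata with the answer so the investigation
    #    can be evidenced later (see the record-keeping section below).
    return {"answer": answer, "sources": [doc["id"] for doc in documents]}
```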
Get Your Free Initial Consultation
Consult with one of our experienced ACL & AFSL Lawyers today.
Managing Your Key Compliance Risks When Using Generative AI
ASIC’s Expectations for Managing Algorithmic Bias
ASIC has identified algorithmic bias as a primary risk to the integrity of the financial advice market. This bias occurs when an AI model produces systematic errors that disadvantage specific client groups, such as those from particular demographics or retirees in the decumulation phase.
Such outcomes often arise because the AI has been trained on historical data that does not accurately represent the unique needs of these groups.
AFS Licensees are expected to implement rigorous testing protocols to ensure technical fairness. A failure to proactively manage bias is not only an ethical issue but also a potential breach of the obligation under section 912A(1)(a) of the Corporations Act 2001 (Cth) to provide financial services “efficiently, honestly and fairly.”
Key technical strategies for detecting and mitigating bias include:
- The ‘Model Judge’ Approach: This involves using a secondary, highly constrained AI model to audit the outputs of the primary advisory model. The second model is programmed to identify hidden stereotypes or instances where the advice unfairly privileges one client category over another.
- Using Fairness Metrics: Developers can use quantitative, reference-free metrics to measure how well their models avoid generating harmful or discriminatory content across diverse datasets.
- Adversarial Testing: Also known as ‘red teaming’, this process involves intentionally feeding the AI prompts designed to provoke biased responses or bypass its safety controls. This helps identify vulnerabilities in the model’s ethical constraints.
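The sketch below shows one way the “model judge” and adversarial-testing ideas can be combined: paired client profiles that differ only in a single demographic attribute are run through the advisory model, and a constrained judge model flags any material divergence. The `generate_advice` and `judge_model` callables are assumed placeholders for the licensee’s own systems.

```python
# Counterfactual bias check (sketch): vary one demographic attribute at a time
# and ask a constrained 'model judge' whether the advice differs in substance.

def paired_bias_test(base_profile: dict, attribute: str, values: list,
                     generate_advice, judge_model) -> list[dict]:
    outputs = {value: generate_advice({**base_profile, attribute: value})
               for value in values}

    verdict = judge_model(
        "Do these advice outputs differ materially in risk level, fees or product "
        f"selection when only '{attribute}' changes? Answer YES or NO, with reasons.\n\n"
        + "\n\n".join(f"{value}: {output}" for value, output in outputs.items())
    )

    findings = []
    if verdict.strip().upper().startswith("YES"):
        findings.append({"attribute": attribute, "outputs": outputs, "judge_verdict": verdict})
    return findings
```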
Your Liability for AI Hallucinations & Errors
When using generative AI, the AFS Licensee is legally responsible for any errors or “hallucinations” produced by the system.
A hallucination is a factually incorrect but highly plausible output, such as an AI claiming an investment is “guaranteed.” This responsibility cannot be outsourced to the AI vendor; the liability rests with the licensee and its directors, highlighting the importance of understanding the liabilities of directors of AFSL holders.
Under Australian law, these AI-generated errors can have significant legal consequences. A false or misleading output may amount to “misleading or deceptive conduct” under the Australian Consumer Law and the ASIC Act 2001 (Cth), potentially leading to civil penalty actions.
Furthermore, directors have a duty of “care and diligence” under Section 180 of the Corporations Act 2001 (Cth). As AI becomes a standard tool, directors are expected to be sufficiently AI-literate to understand and manage the associated risks.
Australian Securities and Investments Commission v RI Advice Group Pty Ltd [2022] FCA 496 established that a licensee can breach its general obligations by failing to implement adequate systems to manage technology-related risks (in that case, cybersecurity), a precedent that extends to the deployment of AI in financial advice.
Speak with an ACL & AFSL Lawyer Today
Request a Consultation to Get Started.
Establishing AI Governance & Oversight
The Human-in-the-Loop Requirement for AI-Generated SOAs
ASIC’s Regulatory Guide 255 (RG 255) establishes the core framework for digital financial advice in Australia, requiring significant human oversight. The guide mandates that a “suitably qualified individual” must review and sign off on AI-generated advice. This ensures that a licensed adviser, not the AI model, remains accountable for the financial advice provided to retail clients.
This human-in-the-loop process is a critical compliance control that cannot be a superficial “tick-a-box” exercise. Consequently, the human reviewer must have a sufficient understanding of the AI’s “rationale, risks, and rules,” even without needing to know the specific computer coding. The review must:
- Assess the quality and appropriateness of the advice.
- Verify that the AI’s output aligns with the client’s circumstances and best interests.
- Ensure the licensee can immediately suspend the AI system if an error is found, until the issue is resolved.
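A minimal sketch of such a release gate follows, assuming a simple `ReviewDecision` record and a licensee-controlled suspension flag; it is illustrative only and does not reproduce RG 255 wording.

```python
from dataclasses import dataclass

# Illustrative release gate only; names and fields are assumptions.

@dataclass
class ReviewDecision:
    reviewer_id: str     # the "suitably qualified individual"
    approved: bool
    reasons: str
    timestamp: str       # ISO 8601 time of the sign-off

SYSTEM_SUSPENDED = False  # set to True as soon as an error is detected

def release_soa(draft_soa: str, decision: ReviewDecision) -> str:
    """Release an AI-drafted SOA only after an explicit human sign-off, and
    never while the system is suspended pending remediation."""
    if SYSTEM_SUSPENDED:
        raise RuntimeError("AI advice generation is suspended until the issue is resolved")
    if not decision.approved:
        raise ValueError(f"SOA rejected by reviewer {decision.reviewer_id}: {decision.reasons}")
    return draft_soa
```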
The Experienced Provider Pathway & Its Role in AI Oversight
The experienced provider pathway offers a route for certain financial advisers to meet qualification standards and plays a key role in AI governance. Advisers are considered to have the necessary professional judgement to oversee AI-generated advice if they possess:
- At least 10 years of experience between 1 January 2007 and 31 December 2021.
- A clean disciplinary record.
Their expertise is vital for spotting subtle errors or omissions that an AI model might miss.
To formalise this oversight role, an eligible adviser must provide a written declaration to their AFS Licensee—a process where our lawyers for financial advisers can provide assistance—confirming they meet the criteria under Section 1684 of the Corporations Act 2001 (Cth). Following this, the licensee is required to notify ASIC. This declaration provides a formal basis for these seasoned professionals to act as the “suitably qualified individual” in the human-in-the-loop process, ensuring AI outputs receive a final layer of expert human judgement.
Get Your Free Initial Consultation
Consult with one of our experienced ACL & AFSL Lawyers today.
AI Record-Keeping & Auditing Obligations
Creating a 7-Year Digital Audit Trail for All AI Outputs
Under Section 286 of the Corporations Act 2001 (Cth), AFS Licensees are required to keep financial records for a minimum of seven years.
This obligation extends to the use of generative AI, where prompts, outputs, and interaction metadata are classified as business records essential for AFSL audits and legal discovery. Consequently, if a client disputes advice years later, the licensee must be able to reproduce the exact digital circumstances that led to the AI’s conclusion.
To create a compliant and tamper-proof digital audit trail, a logging system must capture the entire context of each AI interaction. It is not enough to simply save the final Statement of Advice (SOA). The complete log should include:
- System Prompts: The underlying instructions that define the AI model’s rules, persona, and constraints.
- User Inputs: Every piece of information the client provides, including all queries and uploaded data.
- Model Identifiers: The specific model name and version used for the generation, such as GPT-4o-mini-v2.
- Model Parameters: Technical settings like temperature and top-p that influence the AI’s output.
- Retrieval Metadata: The specific documents and data sources retrieved by a RAG system to ground the advice in factual information.
- Human Reviewer Details: The ID of the adviser who reviewed and signed off on the advice, along with their decision and timestamp.
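A minimal sketch of one such log record is shown below; the field names are illustrative, and the content hash is one simple way to make later tampering detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

# Sketch of a single audit-trail record covering the fields listed above.

def build_audit_record(system_prompt, user_inputs, model_name, model_params,
                       retrieved_sources, output, reviewer_id, reviewer_decision):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_prompt": system_prompt,
        "user_inputs": user_inputs,               # every client query and uploaded item
        "model": model_name,                      # specific model name and version string
        "model_parameters": model_params,         # e.g. {"temperature": 0.2, "top_p": 0.9}
        "retrieval_metadata": retrieved_sources,  # document IDs returned by the RAG layer
        "output": output,
        "human_reviewer": {"id": reviewer_id, "decision": reviewer_decision},
    }
    # A content hash stored alongside the record makes later tampering detectable.
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True, default=str).encode()
    ).hexdigest()
    return record
```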
A significant compliance risk arises from relying on the default retention periods of AI vendors, which are often insufficient. For instance, some platforms may only store logs for 30 to 180 days.
Therefore, licensees must ingest this data into their own immutable storage to ensure the seven-year retention mandate is met.
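The short sketch below illustrates that ingestion step: each record is stamped with its earliest permitted deletion date and written to the licensee’s own archive. In practice this would be write-once (WORM or object-lock) storage rather than a local directory; the names used here are assumptions.

```python
import hashlib
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path

RETENTION_YEARS = 7  # minimum under s 286 of the Corporations Act 2001 (Cth)

def archive_record(record: dict, archive_dir: Path) -> Path:
    """Copy an AI interaction record out of the vendor platform and into the
    licensee's own archive, stamped with its earliest permitted deletion date."""
    record = dict(record)
    record["retain_until"] = (
        datetime.now(timezone.utc) + timedelta(days=366 * RETENTION_YEARS)
    ).date().isoformat()                 # rounded up to cover leap years
    digest = hashlib.sha256(json.dumps(record, sort_keys=True, default=str).encode()).hexdigest()
    archive_dir.mkdir(parents=True, exist_ok=True)
    path = archive_dir / f"{digest}.json"
    path.write_text(json.dumps(record, indent=2, default=str))
    return path
```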
A Technical Audit Framework for AI Innovation & Governance
ASIC’s Report 798, Beware the gap, highlighted that many financial services licensees are adopting AI without AI-specific policies in place.
To bridge this governance gap, a technical audit framework is required, establishing an Artificial Intelligence Management System (AIMS). Notably, the global standard for this is ISO/IEC 42001:2023, which was officially adopted as an Australian Standard in early 2024.
This standard provides a structured approach with specific controls to manage AI risks, ensuring that innovation does not outpace governance. For an AFS Licensee, an audit based on this framework should focus on several key dimensions to ensure transparency and accountability. These dimensions include:
- Data Integrity: Managing the quality and provenance of the data used to train and operate the AI to prevent biased or flawed outputs.
- Traceability: The ability to trace AI-driven decisions from the initial requirements and data inputs through to the final impacts on the client.
- Robustness: Ensuring the AI system operates reliably and in accordance with its intended purpose, which can be verified through performance testing and adversarial “red teaming” exercises.
- Contestability: Providing a clear process for users, including clients and internal reviewers, to challenge and seek redress for AI-driven outcomes.
An audit is not a one-off event but an ongoing process. ASIC expects licensees to conduct periodic and random reviews of AI-generated advice to ensure the model has not drifted from its intended performance metrics over time.
Speak with an ACL & AFSL Lawyer Today
Request a Consultation to Get Started.
Conclusion
Successfully integrating generative AI into AFS requires a compliance framework that translates legal duties into technical controls and manages risks like algorithmic bias and hallucinations. Licensees must implement robust governance, including mandatory human oversight and a detailed seven-year digital audit trail, to balance innovation with their regulatory obligations.
To navigate these complexities and turn regulatory challenges into strategic opportunities, contact our AFSL application lawyers at AFSL House. Our expert legal and consulting services provide the specialised compliance frameworks your firm needs to deploy AI confidently and securely.
Frequently Asked Questions (FAQ)
Can an AI model alone satisfy the Best Interests Duty?
No, an AI model alone cannot legally satisfy the Best Interests Duty under Section 961B of the Corporations Act 2001 (Cth) because it lacks legal personality. However, a compliant process engineered with robust controls and mandatory human oversight can facilitate a licensee’s ability to meet this duty.
What are ASIC’s main concerns about the use of AI in financial advice?
ASIC’s main concerns include algorithmic bias that can lead to discriminatory outcomes, a lack of transparency due to the “black box” nature of AI, and a widening “governance gap” where AI adoption outpaces risk management frameworks. The regulator is focused on these systemic risks and the potential for consumer harm, which can often trigger ASIC investigations.
What records must be kept when AI is used to generate advice?
Licensees should keep a complete digital audit trail for at least seven years, which includes the user prompt, system instructions, the specific AI model and version used, and any data retrieved by RAG systems. This ensures that every piece of AI-generated advice can be fully reconstructed for regulatory review or legal discovery.
Does a human adviser need to review AI-generated Statements of Advice?
Yes, a “suitably qualified” human adviser should review and sign off on all AI-generated SOAs before they are issued to a client. This is a critical requirement under ASIC’s guidance to ensure the advice is accurate, appropriate, and complies with the law.
Who is liable if an AI system provides incorrect advice?
The AFS Licensee and its directors are legally responsible for any incorrect advice or financial loss caused by an AI system, making it crucial to have expert AFSL lawyers to provide guidance. This liability cannot be outsourced to the AI vendor, as AI-generated errors can be treated as misleading or deceptive conduct under the ASIC Act 2001 (Cth).
How can firms test their AI models for algorithmic bias?
Firms can effectively test for algorithmic bias using methods like the “model judge” approach, where a second AI audits the primary model’s outputs for fairness. Other key strategies include conducting adversarial testing, also known as “red teaming,” and using quantitative fairness metrics to detect and mitigate discriminatory outcomes.
What is Retrieval-Augmented Generation (RAG) and why does it matter for compliance?
RAG is a technique that forces an AI to query a verified, real-time database, such as an APL, instead of relying on its static training data. This is crucial for compliance, as it prevents the AI from “hallucinating” or providing advice based on outdated or incorrect product information.
Are standard vendor agreements sufficient when engaging AI providers?
No, standard vendor agreements are often insufficient for covering the unique risks associated with generative AI. Licensees should conduct strong due diligence and include explicit clauses on model performance, audit access, and data security, as regulatory obligations generally cannot be outsourced to a third-party provider.
What is ISO/IEC 42001 and why is it relevant to financial advice?
ISO/IEC 42001 is the global standard for an AIMS, which was officially adopted as an Australian Standard in 2024. It is relevant for financial advice as it provides a structured framework with specific controls for managing AI risks and demonstrating robust governance to regulators.