Attorney-General’s Department Artificial Intelligence Transparency Statement

Introduction

We are trialling the use of artificial intelligence (AI) to help our staff work more effectively and support the department's responsibilities. 
We are using AI to:

  • summarise and compare information to support policy and legal work
  • draft and refine routine internal content to improve consistency and timeliness
  • classify and manage documents to support records and information management
  • identify patterns and unusual activity in system information to support cyber security and protective practices.

We only use AI where it is appropriate for the task and the information involved. AI supports staff, but it does not replace human judgement – staff remain accountable for decisions and for any content or actions informed by AI.

We are committed to using AI in ways that are transparent, accountable and fair. Our use of AI is confined to non-decision-making activities and does not directly interact with the public. All AI use is subject to governance arrangements and mandatory human oversight.

This transparency statement explains the AI systems we use, how they function and the data they rely on.

This aligns with the Australian Government's Policy for the responsible use of AI in government and the requirements for AI transparency. It also reflects our ongoing commitment to safe and responsible use of AI and innovation.

Scope and applications

The AI technologies we use include:

  • generative AI
  • machine learning
  • natural language processing
  • speech recognition
  • chatbots
  • computer vision.

We apply these technologies to:

  • automate routine administrative tasks
  • help draft and refine routine, internal and non-sensitive content
  • summarise publicly available reports and synthesise information from multiple sources
  • simplify or clarify content to make it easier to understand
  • support research by identifying publicly available information and relevant references
  • provide analytical support, such as outlining options, risks or considerations
  • automate document classification and categorisation
  • analyse sentiment to understand patterns and trends in large data sets
  • support information security by analysing system activity to detect behaviour that differs from normal usage patterns and generate alerts for human review.

Future use of AI

Any expansion of our AI use will be subject to:

  • risk assessment
  • approval through established governance processes
  • compliance with the Policy for the responsible use of AI in government.

We will update this transparency statement if our use of AI changes.

Data privacy and security

We are committed to protecting the privacy and security of personal, sensitive and classified information.

We make sure that any data used in AI systems is handled in line with the Privacy Act 1988 (Cth), the Protective Security Policy Framework and other relevant data protection laws.

We only collect, use and share personal information when necessary and in ways that comply with our privacy policy.

AI governance and oversight

We have guidance material and rigorous governance processes in place to monitor and oversee the use of AI within the department. This includes:

  • appointing the Chief Information Officer as the accountable official and a Chief AI Officer, in line with the Policy for the responsible use of AI in government and the AI Plan for the Australian Public Service
  • ensuring governance bodies oversee all AI projects
  • developing policies for staff use of AI and information technology systems
  • making AI training available to all staff
  • implementing our Data Governance Framework and Data Strategy, which serve as the foundation for managing and leveraging data effectively within the department
  • maintaining user accreditation under the Data Availability and Transparency Act 2022
  • implementing the Commonwealth's AI Impact Assessment tool.

Monitoring and assurance

We monitor AI use through a combination of policy controls, user guidance and ongoing review. This includes:

  • monitoring authorised use of AI tools
  • requiring staff to check outputs for accuracy and appropriateness
  • reviewing AI use as part of broader information and technology governance activities.

Staff can report inappropriate outputs, privacy or security concerns, or unintended disclosure of information. The appropriate areas then investigate these reports.

If issues are identified, we may restrict, correct or pause AI use while we complete a full assessment.

Usage patterns and domains

This section shows how we classify our AI use and the domains where we apply it. This is in line with the Classification system for AI use under the Policy for the responsible use of AI in government.

Usage patterns

The following usage patterns describe our use of AI.

Analytics for insights

We use AI to help analyse large amounts of data and identify patterns or trends that may not be obvious. This can support better understanding of issues, improve planning and inform policy or service delivery.

AI provides insights and summaries only. Staff review and interpret the results before they are used.

Workplace productivity

We use AI tools to assist staff with everyday tasks such as drafting documents, summarising information and organising work. This improves efficiency and allows staff to focus on more complex or high-value work.

Staff review and edit any AI-generated content before it is used. AI does not replace human judgement.

Image processing

We use AI to analyse and process images, such as identifying objects, features or patterns in photos or scanned documents. This supports tasks like classification, quality checks, or record management.

Staff check all results to make sure they are accurate and appropriate.

Domains

We use AI in the following domains:

  • service delivery
  • compliance and fraud detection
  • policy and legal
  • corporate and enabling services.

Public interaction and significant impact

The guidance under the Standard for AI transparency statements requires agencies to provide classification details for AI use where the public may directly interact with, or be significantly impacted by, AI or its outputs without human review.

None of our currently deployed AI systems result in direct public interaction or generate outputs that significantly impact an individual without mandatory human review.

Although we use technologies such as generative AI, chatbots and natural language processing across domains including service delivery, these systems are exclusively designed to assist staff with internal processes, analysis and administrative tasks.

All AI outputs and recommendations that could potentially impact a member of the public are subject to mandatory human oversight and approval before any final action or communication takes place.

Risks and mitigations

Our AI use presents risks that include:

  • incorrect or incomplete outputs
  • bias introduced through model training
  • use beyond intended purposes.

We manage these risks through defined controls. These include:

  • restricting AI use to approved tools
  • only allowing the use of publicly available AI tools with information already in the public domain
  • mandating AI training for staff
  • requiring staff to independently verify AI outputs.

Our guidance makes it clear that AI outputs are not authoritative. Any identified issues may lead to corrective action, restrictions or suspension of the tool's use.

Continuous improvement

We regularly review and update our AI policies and practices. This includes keeping up to date with new developments in AI technology, ethics and regulatory requirements.

We strive to improve the transparency, fairness, and effectiveness of our AI use through continuous learning and improvement.

We will review this statement:

  • annually
  • when we make a significant change to our AI use
  • when new factors affect this statement.

Contact us

We value feedback and engagement on our use of AI.

If you have questions or concerns, or would like more information about how we use AI, contact us using the departmental contact form.

Last updated: 1 May 2026
Due for review: 1 November 2026, or upon significant change