Corporate plan
Our corporate plan is prepared annually and is our primary strategic planning document.
Compliance priorities
Our compliance priorities outline our key areas of focus for the year. They will guide our efforts to deliver effective compliance and, where necessary, targeted enforcement action.
Data strategy
Our Data strategy 2024–25 sets out a 12-month plan to enhance digital, data and analytical capabilities across the ACMA. It aims to improve the discoverability, usability and safety of our data.
Data and analytics are critical tools and capabilities that position us for the future. As recognised in our 2024–25 corporate plan, data and analytics support our evidence-based regulatory analysis and actions, and give greater insight into, and understanding of, Australia’s communications and media sectors. This 2024–25 strategy will be updated with a 4-year strategy and governance framework in 2025–26.

Statement of intent
Our statement of intent responds to the Australian Government’s statement of expectations, which outlines what the government expects of us.
Review of the ACMA
On 12 June 2015, the then Minister for Communications announced that the Department of Infrastructure, Transport, Regional Development and Communications would conduct a wide-ranging review into the ACMA.
On 22 May 2017, the Australian Government released the final report of the ACMA review and its response.
For more information about the review, visit the website of the Department of Infrastructure, Transport, Regional Development, Communications and the Arts.
ACMA AI transparency statement
The Digital Transformation Agency's Policy for the responsible use of AI in government sets out the Australian Government's approach to embracing the opportunities of AI and providing for its safe and responsible use in the Australian Public Service.
The Australian Communications and Media Authority (ACMA) adheres to this policy and supports its principles under the "enable, engage, and evolve" framework. We will be transparent about our internal use of AI as we explore, evaluate and adopt AI technologies to benefit our work and our stakeholders.
Currently, the ACMA does not plan to use AI in services that the public may directly interact with or be significantly impacted by. If this changes, we will update this statement to detail our use of AI.
AI use
We may employ AI across various corporate and enabling functions, including software engineering, data analytics and workplace productivity.
Software engineering
The ACMA uses AI to assist with software development, debugging and testing when building digital and data solutions and administering ACMA systems. Some of these systems are accessed by the public, but all are tested by humans to ensure they behave as expected before they are released.
Data analytics
The ACMA sees benefits in using AI to assist with data management and with obtaining insights from data through interrogation and analysis. Our data and insights can influence our approach to regulation and policy, and inform our advice to government on legislation.
Workplace productivity
We see potential benefits in using AI to improve workplace productivity for staff, including:
- helping answer questions from staff regarding workplace policies and entitlements
- summarising documents, emails, instant messages and other content
- summarising and transcribing meetings
Monitoring and governing AI use
The ACMA has developed an overarching agency approach to AI and has established an AI Steering Committee to assess the opportunities and risks of using AI within the ACMA. The Steering Committee considers the benefits, risks and guidelines for AI use cases, and continues to raise staff awareness of AI.
An internal AI policy ensures the responsible, ethical and secure use of AI tools while safeguarding the privacy, confidentiality and integrity of agency data and operations. Under the policy, generative AI tools must not be used unless they have been specifically approved through a robust approval, assurance and evaluation process and staff have undertaken AI training.
The AI Steering Committee regularly reviews medium- and high-risk AI projects and solutions to ensure compliance with the policy and with AI ethics principles.
Accountable official
The Chief Information and Digital Officer is designated as the accountable official.
AI transparency statement
This AI transparency statement was first published in February 2025. It will be reviewed annually, or when any significant change is made to our approach to AI.