AI-Enabled Fraud in Australian Organisations
Lessons from Noosa and a Roadmap for Resilient, Responsible AI Governance
Abstract
Noosa Council, the local government authority for the Noosa region in Queensland, Australia, was recently defrauded in a sophisticated attack that revealed how generative AI now industrialises social engineering, defeating traditional financial and cybersecurity controls. This paper examines the governance, process, and cultural vulnerabilities across local government, identifies emerging AI risk roles, and proposes a standards-aligned maturity model to strengthen resilience, accountability, and public trust in AI-enabled systems.
Introduction
Key Points
- Main takeaway: The recent fraud at Noosa Council, in which international criminals allegedly used AI-powered impersonation and social engineering to divert approximately AUD 2.3 million, signals an urgent, system-wide risk that traditional controls (finance approvals, procurement vendor checks, ICT perimeter security, and basic cyber awareness) no longer adequately mitigate. Local and state governments must rapidly strengthen AI risk governance across people, processes, technology, oversight, and culture, anchored in Australian government policies and assurance frameworks and aligned with global AI risk standards.
- What changed: Generative AI has industrialised social engineering, enabling scalable deepfake voice/video impostors, highly personalised phishing, and invoice/payment redirection scams that bypass traditional cyber and finance controls without any “hack” of core systems. This expands attack surfaces in procurement, payments, vendor onboarding, shared services, and executive communications.
- Why Noosa matters: Investigators and leaders reported “social engineering AI techniques” and “imitation/mimicking of personalities”, with losses totalling AUD 1.9 million after partial recovery and no IT breach or data loss: an archetype of AI-enabled fraud that exploits human factors and governance gaps, not firewalls.
- What must be done: Establish accountable AI leadership (Chief AI Officer or equivalent), embed AI risk into enterprise risk management, mandate higher-assurance controls for identity and payment changes, run AI red teaming and deepfake drills, adopt lifecycle assurance aligned to Australia’s national AI assurance framework and the policy for responsible government AI use, and uplift staff capability and culture at scale.
- Outcome to aim for: A pragmatic, tiered AI governance maturity model for the Australian public sector that meets the DTA’s accountability and transparency requirements, integrates ISO/IEC 42001 and the NIST AI RMF concepts, and prioritises frontline controls where AI-enabled fraud currently manifests (payments, procurement, executive communications).
Contents
- Case study: The Noosa Council AI-enabled fraud
- What Noosa exposes: Gaps across governance, cybersecurity, procurement, and human factors
- Pattern and risk: Comparable incidents and sectoral trends
- Emerging and urgent roles for public sector AI risk resilience
- A tailored AI governance and maturity model for the Australian public sector
- Roadmap: Short, medium, and long-term actions for local and state governments
- Objections and mitigations
- Conclusion: A call to action
- References and suggested further reading
Case study: The Noosa Council AI-enabled fraud
Incident overview
Noosa Council reported a “major fraud incident” during the 2024 Christmas period, perpetrated by international criminal gangs using “sophisticated social engineering AI techniques.” Approximately AUD 2.3 million was diverted; about AUD 400,000 was recovered, leaving net losses near AUD 1.9 million. Investigations by the AFP, Interpol, and Queensland Police are ongoing. Council leaders emphasised that systems were not breached: this was not a cybersecurity compromise in the network sense, but an operational fraud enabled by AI-driven impersonation.
Modus operandi (publicly inferred, not fully disclosed)
Council leadership referred to AI “imitation,” “mimicking personalities,” and “social engineering AI techniques.” Experts highlighted the likely use of OSINT, phishing, and deepfake voice/video to exploit human approvals and payment processes, with rapid offshore fund transfers. These tactics align with global AI-enabled fraud trends, particularly deepfake-enabled business email compromise (BEC), video/call impostors, and payment redirection, and reflect a shift away from perimeter breaches toward identity, process, and trust exploitation.
Why it succeeded, despite procedures
Leadership noted that fraud controls existed but were “not effective enough” against highly organised actors. This underscores that control designs predicated on legacy threat models (email spelling errors, blatant spoofing, basic out-of-band checks) are now outmatched by AI-generated realism and speed.
Reputational and trust impact
Delayed public disclosure (to avoid jeopardising investigations) and absence of service disruption mitigated some immediate harm; however, the event highlights broader public trust risks and the need for transparent, timely communication once operationally feasible.
Key takeaway
Noosa is a watershed event for Australian local government, demonstrating that AI-enabled social engineering can defeat traditional financial and procurement controls without any ICT breach, at meaningful scale, and during holiday/low-staffing windows.
What Noosa exposes: Gaps across governance, cybersecurity, procurement, and human factors
Governance and oversight gaps
- Diffuse accountability for AI risk: Many councils lack a designated accountable official or AI governance owner, despite new Commonwealth policy expectations for accountable officials and agency transparency statements on AI use.
- Partial integration with national frameworks: The National Framework for the Assurance of AI in Government (and related guidance) is designed for cross-jurisdictional consistency and lifecycle assurance; without explicit adoption at the council level, AI risks remain variably treated across finance, ICT, and procurement.
- Data maturity and ERM linkage: APS data maturity averages “developing,” reflecting system-wide capability gaps; similar maturity issues likely affect AI-related controls in subnational entities if not directly assessed and resourced.
Cybersecurity and identity control gaps
- Perimeter vs person: Traditional ICT controls stop intrusions but not high-fidelity impersonation. Attackers increasingly use AI to scrape open sources, craft personalised pretexts, and generate credible voices and faces that pass casual checks.
- Weak identity verification in sensitive workflows: Payment redirection and vendor banking changes often rely on email or single-channel approvals. Without multi-channel, high-assurance verification (voice biometrics, verified callback to registered numbers, secure portals), AI deepfakes can subvert finance processes.
Procurement and payments process gaps
- Vendor onboarding and bank detail changes: Where “know your supplier” and bank detail change procedures permit email-driven requests or lack verified contact pathways, attackers exploit process trust, particularly around holidays and urgent requests.
- Over-reliance on a single approver or single-channel checks: AI enables targeted, time-bound scams that mimic executives or suppliers in real time (e.g., deepfake video calls), making dual controls and independent verification essential.
Human factors and culture gaps
- Cognitive overload and action bias: AI-powered scams exploit urgency, authority, and social proof. Training calibrated to modern deepfakes, with live simulations, is often missing or infrequent.
- Inadequate escalation norms: Staff may fear “slowing things down” by challenging executive requests. Positive risk culture and explicit “pause and verify” policies are critical.
Pattern and risk: Comparable incidents and sectoral trends
- Global deepfake executive scam (Arup case): A finance employee, reassured by a live videoconference featuring deepfaked colleagues and a CFO, transferred roughly USD 25.6 million across multiple transactions; none of the other call participants were real. The case crystallises how convincingly AI can defeat human verification without any network compromise.
- Rising AI-enabled social engineering: Major security firms and national advisories report an acceleration in AI-powered phishing, voice cloning, synthetic media, and scaled spear phishing. The Australian context mirrors this, with material losses and evolving attacker-as-a-service ecosystems.
- Consulting and corporate risk signals: Ransomware and data-theft claims against large firms keep occurring; even when core networks are unaffected, the resulting trust shock damages client confidence. Separately from breaches, governance failures in AI-assisted deliverables (e.g., generative content with fabricated citations) show how weakly governed AI poses operational, reputational, and contractual risks.
- Government guidance convergence: Australia’s policy for responsible AI use in government and the national AI assurance framework call for lifecycle assurance, accountability, transparency, and risk-proportionate controls, explicitly suited to mitigate these risks before and after deployment.
Emerging and urgent roles for public sector AI risk resilience
Organisations should define and staff the following functions, scaled to size and risk profile:
- Chief AI Officer (CAIO) or Accountable Official for AI: Sets strategy, oversees the AI portfolio, orchestrates cross-functional governance, aligns with ethics, risk, privacy, and security, and owns the public transparency statement obligations; can be a designated senior official per DTA’s policy where a full CAIO role is not feasible.
- AI Risk and Assurance Lead: Implements AI lifecycle assurance aligned with the National Framework and the ISO/IEC 42001 management system for AI; integrates NIST AI RMF-aligned practices and threat modelling (STRIDE, LINDDUN, OWASP ML).
- AI Red Team and Adversarial Testing: Conducts scenario-based stress tests, including deepfake voice/video drills, prompt injection, data leakage tests, and misuse simulations for both procured and in-house AI-enabled systems.
- AI Audit and Compliance: Establishes a system of record for models, datasets, prompts, provenance, decisions, and human-in-the-loop checkpoints (a registry sketch follows below); conducts periodic audits against policy, ethics principles, and legal obligations.
- AI Threat Intelligence and Detection: Monitors generative AI-enabled threats (voice cloning, synthetic media, coordinated fraud kits) and integrates with payment anomaly detection, email security, and vendor verification services.
- AI Governance Board or Committee: Cross-agency function linking legal, finance, procurement, ICT/cyber, service delivery, and risk management; approves high-risk use cases; sets thresholds for independent assurance and red teaming.
Note: Role definitions can draw on emerging industry guidance on CAIO responsibilities and AI officer functions; ensure alignment with public-sector accountability, privacy, and procurement regimes.
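To make the audit function concrete, the sketch below shows one entry in a system of record for an AI-enabled system, covering the model, datasets, prompts, and human-in-the-loop checkpoints described above. It is a minimal illustration only; all field names and values are assumptions, not a mandated schema.

```python
import json
from datetime import date

# Minimal illustrative system-of-record entry; field names are assumptions,
# not a standard. Real schemas should follow records and privacy requirements.
registry_entry = {
    "system_id": "AI-2025-014",                      # hypothetical identifier
    "name": "Invoice triage assistant",
    "owner": "Accountable Official, Finance",
    "risk_tier": 1,                                  # per the agency's tiering policy
    "model": {"provider": "<vendor>", "version": "<pinned-version>"},
    "datasets": [{"name": "supplier-master", "provenance": "internal ERP extract"}],
    "prompts_documented": True,
    "human_in_the_loop": ["payment release", "vendor bank-detail change"],
    "last_audit": date(2025, 6, 30).isoformat(),
    "decommission_plan": "retire at contract expiry; archive logs per retention rules",
}

# A registry is only auditable if entries are machine-readable and queryable.
print(json.dumps(registry_entry, indent=2))
```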
A tailored AI governance and maturity model for the Australian public sector
Design principles
- Align to Australian policy instruments and ethics: Anchor to the Policy for the Responsible Use of AI in Government (mandatory for the Commonwealth), the National Framework for AI Assurance, and Australia’s AI Ethics Principles, adapting for state and local contexts.
- Integrate global AI risk systems: ISO/IEC 42001 for AI management systems and the NIST AI RMF concepts for mapping, measuring, managing, and governing AI risk across the lifecycle.
- Meet current threat reality: Prioritise high-risk cohorts where AI-enabled fraud is acute (payments, procurement, identity, and executive communications) while balancing innovation in service delivery and operations.
Five pillars and capabilities
People and culture
- Accountable official/CAIO appointed; responsibilities published.
- Role-based AI risk training (finance, procurement, executives, ICT).
- Positive risk culture (“pause and verify”), escalation norms, and whistleblowing.
- Deepfake awareness with regular live simulations.
Process and controls
- High-assurance identity and payment controls: dual approvals; verified callbacks to registered numbers; secure vendor portals for bank detail changes; supervisor confirmation for exceptions; out-of-band verification policies (a minimal workflow sketch follows this list).
- AI lifecycle assurance gates: use-case intake, data governance, human-in-the-loop controls, performance and drift monitoring, and decommissioning plans.
- Incident response playbooks for AI-enabled fraud and synthetic media.
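As a minimal sketch of how these controls compose, the Python below gates a vendor bank-detail change on dual approval and a callback confirmed on the number already held in the vendor registry, never a number quoted in the request. The registry contents, field names, and function are illustrative assumptions, not any council's actual system.

```python
from dataclasses import dataclass

# Illustrative vendor registry: callbacks may only go to contact numbers
# captured at onboarding, never to numbers supplied in the change request.
VENDOR_REGISTRY = {
    "VEND-0042": {"registered_phone": "+61 7 5329 0000", "bank": "XXX-XXX 12345678"},
}

@dataclass
class BankChangeRequest:
    vendor_id: str
    new_bank: str
    requested_via: str        # e.g. "email", "portal"
    callback_confirmed: bool  # confirmed on the *registered* number
    approver_ids: tuple       # distinct finance officers who approved

def approve_bank_change(req: BankChangeRequest) -> bool:
    """Return True only if every high-assurance condition holds."""
    if req.vendor_id not in VENDOR_REGISTRY:
        return False                      # unknown supplier: reject outright
    if req.requested_via == "email":
        return False                      # email alone is never sufficient
    if not req.callback_confirmed:
        return False                      # out-of-band verification required
    if len(set(req.approver_ids)) < 2:
        return False                      # dual approval by distinct officers
    return True

# A portal request with a verified callback and two approvers passes;
# the same request arriving by email alone would be rejected.
req = BankChangeRequest("VEND-0042", "YYY-YYY 87654321", "portal", True, ("alice", "bob"))
assert approve_bank_change(req)
```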
Technology and data
- Strong email and communications security with MFA and DMARC/DKIM/SPF (an illustrative posture check follows this list); executive deepfake detection support; voice biometrics for high-risk approvals.
- Payment anomaly detection and supplier risk tools; secure collaboration for approvals.
- Model registry and lineage; prompt and dataset documentation; access controls.
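As one illustration of the email-channel hardening above, this sketch queries a domain's SPF and DMARC records and flags weak settings. It assumes the third-party dnspython package and uses a placeholder domain; treat it as a posture spot-check, not a full audit.

```python
import dns.resolver  # third-party package: dnspython

def check_email_auth(domain: str) -> list[str]:
    """Flag missing or weak SPF/DMARC settings (spot-check, not a full audit)."""
    findings = []
    try:
        txt = [r.to_text() for r in dns.resolver.resolve(domain, "TXT")]
        spf = [t for t in txt if "v=spf1" in t]
        if not spf:
            findings.append("No SPF record found.")
        elif "+all" in spf[0]:
            findings.append("SPF allows any sender (+all).")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        findings.append("No TXT records found for the domain.")
    try:
        dmarc = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")[0].to_text()
        if "p=none" in dmarc:
            findings.append("DMARC policy is monitor-only (p=none).")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        findings.append("No DMARC record found.")
    return findings

# Placeholder domain; substitute the council's own sending domain.
print(check_email_auth("example.gov.au") or ["No obvious SPF/DMARC gaps."])
```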
Oversight and assurance
- AI governance committee; risk tiering and thresholds for independent assurance (a tiering sketch follows this list).
- Red teaming and adversarial testing for high-risk use cases.
- Periodic internal and external audits; public transparency statements and model cards where appropriate.
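A risk-tiering rule can be as simple as mapping use-case attributes to required assurance activities. The attribute names, thresholds, and tier labels below are illustrative assumptions to show the shape of such a policy, not prescribed values.

```python
def assurance_tier(use_case: dict) -> str:
    """Map AI use-case attributes to an assurance tier (illustrative thresholds)."""
    if (use_case.get("moves_money")
            or use_case.get("affects_individual_rights")
            or use_case.get("public_facing_decisions")):
        return "Tier 1: independent assurance and red teaming before deployment"
    if use_case.get("uses_personal_data"):
        return "Tier 2: internal assurance review and human-in-the-loop controls"
    return "Tier 3: self-assessment with standard monitoring"

# Example: payment automation lands in the top tier regardless of other attributes.
print(assurance_tier({"moves_money": True, "uses_personal_data": True}))
```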
Ethics and public trust
- Application of Australia’s AI Ethics Principles: fairness, accountability, transparency, privacy.
- Transparent public communication and service safeguards; citizen complaint and review channels.
Maturity levels (indicative)
- Level 1 Initial: Ad hoc AI use; no accountable official; minimal controls; no AI incident playbooks.
- Level 2 Developing: Accountable official named; basic policies; selective training; some dual controls.
- Level 3 Defined: Formal AI lifecycle assurance; consistent “pause and verify”; model registry; simulations occur.
- Level 4 Managed: ISO/IEC 42001-aligned management; regular red teaming; supplier conformance; robust public transparency.
- Level 5 Optimising: Continuous risk sensing; cross-agency coordination; benchmarking; incident learnings systematically uplift practice. (A simple self-assessment sketch follows.)
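A simple self-assessment can score each pillar from 1 to 5 and report the overall level as the weakest pillar. The scores below are invented for illustration only.

```python
# Illustrative self-assessment: levels 1-5 per pillar, overall = weakest pillar.
pillar_scores = {
    "people_and_culture": 2,
    "process_and_controls": 1,   # e.g. bank changes still accepted by email
    "technology_and_data": 3,
    "oversight_and_assurance": 2,
    "ethics_and_public_trust": 2,
}

overall = min(pillar_scores.values())
weakest = [p for p, s in pillar_scores.items() if s == overall]
print(f"Overall maturity: Level {overall}; uplift first: {', '.join(weakest)}")
```

Using the minimum rather than the mean reflects that organised fraud targets the weakest pillar, as the Noosa payments pathway illustrates.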
Roadmap: Short, medium, and long-term actions
Short term (typically 0-90 days)
- Appoint the accountable AI official (or CAIO equivalent) and establish a cross-functional AI governance group with clear charters tied to risk and procurement.
- Issue a payment and procurement hardening directive: require multi-channel, high-assurance verification for vendor onboarding and bank detail changes; mandate dual approval with independent, verified callback; forbid bank detail changes via email alone.
- Run executive- and finance-focused deepfake drills: simulate voice/video impersonation, urgent payment requests, and vendor bank changes across holiday/after-hours scenarios (scenario definitions are sketched after this list).
- Update incident response playbooks for AI-enabled fraud: integrate with AFP and state cyber agencies; define bank coordination and rapid funds recall processes; maintain incident registers.
- Publish a plain-language transparency statement on AI use and AI risk posture, tailored to local context and aligned to DTA expectations for Commonwealth entities; apply similar practice for councils.
- Rapid training for high-risk roles: finance, procurement, accounts payable, executive assistants, and contact centre staff on AI-enabled scams, pause and verify, and escalation.
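Drill scenarios can be defined as data so they are repeatable and auditable. The scenarios below are illustrative, loosely modelled on the reported Noosa and Arup patterns; the pretexts, roles, channels, and timing windows are all assumptions.

```python
import random

# Illustrative deepfake-drill scenarios; all values are assumptions.
DRILL_SCENARIOS = [
    {"pretext": "CEO voice call demanding urgent supplier payment",
     "channel": "phone", "targets": ["accounts payable"], "window": "public holiday"},
    {"pretext": "video call with 'executives' approving a new vendor",
     "channel": "video", "targets": ["finance officers"], "window": "after hours"},
    {"pretext": "supplier email requesting bank-detail change",
     "channel": "email", "targets": ["procurement"], "window": "end of quarter"},
]

def schedule_drill(seed=None) -> dict:
    """Pick one scenario at random so staff cannot anticipate the drill."""
    return random.Random(seed).choice(DRILL_SCENARIOS)

print(schedule_drill(seed=42)["pretext"])
```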
Medium term (typically 3-12 months)
- Implement AI lifecycle assurance aligned to the National Framework: use risk tiering; require assurance artifacts for high-risk use cases; integrate supplier requirements for model documentation, data governance, and red teaming.
- Adopt ISO/IEC 42001 management practices for AI: establish policies, KPIs, and continuous improvement; integrate NIST AI RMF-aligned risk mapping, measurement, and management.
- Deploy enabling technology: executive deepfake detection aids; secure verification portals for suppliers; voice biometrics for high-risk approvals; email channel hardening; enhanced payment anomaly analytics (a simple anomaly rule is sketched after this list).
- Establish AI audit capability: build a system of record for models, prompts, datasets, and approvals; schedule internal audits; and prepare for external audits or ANAO-style reviews, as applicable.
- Cross-agency coordination: leverage DTA forums, LG associations, and state cyber offices to share playbooks, indicators of compromise, and procurement guardrails; align to NSW Local Government cyber guidelines where relevant.
- Funding and business case: quantify avoided loss and insurance benefits; seek grants or pooled procurement for verification tech and staff training; link to broader data and digital strategy objectives.
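As a sketch of the anomaly analytics mentioned above, the rule below flags payments that follow a recent bank-detail change, exceed a vendor's historical norm, or land outside business days. Thresholds and field names are assumptions; production systems would combine many more signals.

```python
from datetime import date

def flag_payment(payment: dict, vendor_history: dict) -> list[str]:
    """Return reasons to hold a payment for manual verification (illustrative rules)."""
    reasons = []
    if (payment["date"] - vendor_history["bank_changed_on"]).days <= 30:
        reasons.append("bank details changed within the last 30 days")
    if payment["amount"] > 3 * vendor_history["typical_amount"]:
        reasons.append("amount exceeds 3x this vendor's typical payment")
    if payment["date"].weekday() >= 5:
        reasons.append("initiated on a weekend")
    return reasons

# Hypothetical example mirroring a holiday-period attack: all three rules fire.
history = {"bank_changed_on": date(2024, 12, 20), "typical_amount": 45_000}
payment = {"date": date(2024, 12, 28), "amount": 310_000}
print(flag_payment(payment, history))
```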
Long term (typically 12-36 months)
- Institutionalise red teaming: annual AI adversarial tests on critical workflows; combine with tabletop exercises; publish lessons learned summaries to build public trust.
- Mature the supplier ecosystem: require vendors to demonstrate AI risk controls, model provenance, abuse monitoring, and incident response; prefer suppliers aligned with ISO/IEC 42001 and NIST AI RMF practices.
- Maturity benchmarking: integrate with APS Data Maturity tooling concepts; measure progress along the proposed AI maturity model; coordinate with state frameworks for councils.
- Culture and talent: establish AI risk academies for public sector roles; embed AI literacy and risk modules in leadership programs; create AI career pathways linked to governance and assurance.
- Transparency and accountability: standardise public reporting on AI deployments, assurance outcomes, and incident statistics; strengthen complaints and recourse mechanisms.
Objections and mitigations
- This will cost too much. Mitigation: prioritise high-loss vectors (payments, procurement) where ROI is immediate; pooled procurement across councils; leverage national policy, templates, and training to reduce bespoke costs; use insurance incentives and reduced fraud losses to fund measures.
- We lack the skills. Mitigation: start with an accountable official and targeted role-based training; borrow from DTA resources, NSW guidelines, and cross-agency communities; partner with universities and vendors under strict assurance requirements; tap shared services.
- Won’t this slow innovation? Mitigation: proportional risk controls enable safe experimentation; use sandboxes, staged deployments, and red teaming to accelerate safe adoption; transparency builds citizen trust that supports scale-up.
- Vendors will resist more requirements. Mitigation: align to international standards (ISO/IEC 42001, NIST AI RMF) widely recognised by reputable suppliers; phase in requirements with clear timelines; reward compliant suppliers in evaluations.
- Our cyber is strong; this wasn’t a breach. Mitigation: AI-enabled fraud targets people and processes; add identity-centric controls, deepfake detection, and culture shifts. Cybersecurity remains necessary but not sufficient.
Conclusion: A call to action
Noosa’s AI-enabled fraud is not an outlier; it is a preview. Criminal groups now use AI to exploit the human perimeter, bypassing traditional tech-centric defences and outpacing legacy payment and procurement controls.
The response must be equally modern:
- Accountable AI leadership
- Lifecycle assurance
- Resilient verification for money movement
- Adversarial testing
- Transparent governance that builds public trust
Australian public sector entities have guidance ready to adapt:
- National AI assurance framework
- Responsible AI policy for government
- NSW local government cyber guidelines
- APS data and digital strategy
Combine these with global AI risk standards:
- ISO/IEC 42001
- NIST AI RMF
Start now:
- Appoint the accountable official
- Harden payment and vendor change processes with high-assurance verification
- Run deepfake drills with executives and finance
- Publish a transparency statement
In parallel, build the structures that move the system from reactive to resilient:
- AI audit
- Red teaming
- Cross-agency collaboration
Noosa’s losses should be the last warning; the next incident should meet a prepared, trained, and coordinated defence.
References and suggested further reading
Australian Signals Directorate. (2024). Deploying AI systems securely. Cyber.gov.au. [ASD](https://www.cyber.gov.au/business-government/deploying-ai-systems-securely)
Australian Computer Society. (2025). AI scam defrauds Noosa Council of $1.9 million: International scammers stole ratepayer funds. Information Age. [ACS](https://ia.acs.org.au/article/2025/ai-scam-defrauds-noosa-council-of-1-9m.html)
Cyber Daily. (2025). Noosa Council reveals social engineering attack that cost millions. [Cyber Daily](https://cyberdaily.au/security/12770-noosa-council-reveals-social-engineering-attack-that-cost-millions)
Australian Associated Press. (2025). Council loses millions in scam using “AI techniques”. [AAP News](https://aapnews.aap.com.au/news/council-loses-millions-in-scam-using-ai-techniques)
Australian Broadcasting Corporation. (2025, October 14). Noosa mayor says fraudsters used AI imitation in $2.3 million scam. [ABC News](https://www.abc.net.au/news/2025-10-14/noosa-council-scam-mayor-blames-ai-imitation/105887962)
Digital Transformation Agency. (2025). New AI technical standard to support responsible government use of AI. [Digital Transformation Agency](https://www.dta.gov.au/articles/new-ai-technical-standard-support-responsible-government-ai)
Department of Finance. (2024). National framework for the assurance of artificial intelligence in government. [Department of Finance](https://www.finance.gov.au/government/publications/national-framework-assurance-artificial-intelligence-government)
Australian Government Architecture (Digital Transformation Agency). (2024). Policy for responsible use of AI in government. [Digital Transformation Agency](https://architecture.digital.gov.au/policy/responsible-use-of-ai-in-government)
SecurityBrief Australia. (2025). Australian businesses lose AUD $2.03 billion as AI scams surge. [SecurityBrief Australia](https://securitybrief.com.au/story/australian-businesses-lose-aud-203-billion-as-ai-scams-surge)
IT Brief Australia. (2025). AI-fuelled scams surge as Australian losses jump 28 percent. [IT Brief Australia](https://itbrief.com.au/story/ai-fuelled-scams-surge-as-australian-losses-jump-28-percent)
Check Point Software Technologies Ltd. (n.d.). How does AI social engineering work? [Check Point Software](https://www.checkpoint.com/cyber-hub/threat-prevention/what-is-ai-social-engineering)
CrowdStrike. (n.d.). AI-powered social engineering attacks. [CrowdStrike](https://www.crowdstrike.com/en-au/cybersecurity-101/ai-powered-social-engineering-attacks)
Australian Cyber Security Centre. (n.d.). Social engineering. [Cyber.gov.au](https://www.cyber.gov.au/threats/types-of-cyber-security-incidents/social-engineering)
Amazon Web Services. (n.d.). AI lifecycle risk management: ISO/IEC 42001:2023 for AI. AWS Security Blog. [AWS](https://aws.amazon.com/blogs/security/ai-lifecycle-risk-management-iso-iec-42001-2023-for-ai)
Department of Finance. (2024). Implementing Australia’s AI Ethics Principles in government. [Department of Finance](https://www.finance.gov.au/government/publications/implementing-australias-ai-ethics-principles-government)
Australian Public Service Commission. (2023). Data and Digital Government Strategy. [Australian Public Service Commission](https://www.apsc.gov.au/initiatives-and-programs/data-and-digital-government-strategy)
Department of Finance. (2024). Data Maturity Assessment Tool. [Department of Finance](https://www.finance.gov.au/government/publications/data-maturity-assessment-tool)
CoverLink Insurance. (2024). Cyber case study: $25 million deepfake scam. [CoverLink Insurance](https://coverlink.com/case-study/case-study-25-million-deepfake-scam)
Incode Technologies. (2024). Top 5 cases of AI deepfake fraud from 2024 exposed. [Incode Technologies](https://incode.com/blog/top-5-cases-of-ai-deepfake-fraud-from-2024-exposed)
CNN. (2024, February 4). Finance worker pays out $25 million after video call with deepfake CFO. [CNN](https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-video-call-scam-intl-hnk/index.html)
Digital NSW. (2024). Cyber security guide for NSW Local Government Councillors. [NSW Government](https://www.digital.nsw.gov.au/sites/default/files/2024-04/cyber-security-guide-local-government.pdf)
Office of Local Government (NSW). (2024). Cyber Security Guidelines – Local Government 2024. [Office of Local Government](https://www.olg.nsw.gov.au/wp-content/uploads/2024/03/cyber-security-guidelines-local-government-2024.pdf)
IBM. (2024). Generative AI makes social engineering more dangerous. IBM Think Blog. [IBM](https://www.ibm.com/think/insights/generative-ai-makes-social-engineering-more-dangerous)
SecurityWeek. (2025). Deloitte responds after ransomware group claims data theft. [SecurityWeek](https://www.securityweek.com/deloitte-responds-after-ransomware-group-claims-data-theft)
CloudSEK Research. (2025). Deloitte data breach: Solr server exploited [LinkedIn post]. LinkedIn. [CloudSEK Research](https://www.linkedin.com/posts/fb1h2s_deloitte-data-breach-solr-server-exploited-activity-7250974238657165312)
Pirani SAS. (2025). The Deloitte AI failure: A wake-up call for operational risk. [Pirani SAS](https://www.piranirisk.com/blog/the-deloitte-ai-failure-a-wake-up-call-for-operational-risk)
Deloitte. (2024). The rise of AI fraud and how you can reduce your risk. [Deloitte](https://www.deloitte.com/us/en/services/risk-advisory/the-rise-of-ai-fraud.html)
Principal Connections. (2024). The emerging role of the Chief AI Officer (CAIO). [Principal Connections](https://www.principalconnections.com/executive-search/the-emerging-role-of-the-chief-ai-officer-caio)
Umbrex. (n.d.). Chief AI Officer. [Umbrex](https://umbrex.com/resources/guides/chief-ai-officer)
IESE Business School. (2024). The role of the Chief AI Officer (CAIO). [IESE Business School](https://www.iese.edu/standout/role-chief-ai-officer)
GovAI Summit / Modev. (2024). The role of red teaming in public sector AI. [GovAI Summit](https://www.govaisummit.com/blog/red-teaming-in-public-sector-ai)
Lumenova AI. (2024). How to build an artificial intelligence governance framework. [Lumenova](https://lumenova.ai/blog/how-to-build-an-artificial-intelligence-governance-framework)
ModelOp. (n.d.). AI governance, risk and compliance. [ModelOp](https://modelop.com/ai-governance/risk-compliance)
AI Guardian. (2024). The role of the AI Officer: Guide to responsible AI leadership. [AI Guardian](https://aiguardianapp.com/ai-officer-responsibilities)
Zendata. (2024). AI governance maturity models 101: Assessing your AI governance maturity. [Zendata](https://zendata.dev/post/ai-governance-maturity-models-101-assessing-your-ai-governance-maturity)
International Association of Privacy Professionals. (2024). Maturity model for AI governance. [International Association of Privacy Professionals](https://iapp.org/resources/article/maturity-model-for-ai-governance)
Australian National Audit Office. (2025). Governance of artificial intelligence at the Australian Public Service. [Australian National Audit Office](https://www.anao.gov.au/work/performance-audit/governance-of-artificial-intelligence-australian-public-service)
CSIRO Software Systems. (n.d.). RAI maturity model – Software Systems. [CSIRO](https://research.csiro.au/ss/science/projects/rai-maturity-model)
Digital Transformation Agency. (2024). DTA pilots new AI assurance framework. [Digital Transformation Agency](https://www.dta.gov.au/articles/dta-pilots-new-ai-assurance-framework)
Wirtz, B. W., & Müller, W. (2023). To govern or be governed: An integrated framework for AI governance in the public sector. Science and Public Policy, 50(3). [Science and Public Policy](https://academic.oup.com/spp/article/50/3/450/7202331)
Janssen, M., & Kuk, G. (2023). Frontrunner model for responsible AI governance in the public sector: The Dutch perspective. AI and Ethics. [AI and Ethics](https://link.springer.com/article/10.1007/s43681-023-00291-z)
van Noordt, C., & Misuraca, G. (2023). Good governance of public sector AI: A combined value framework for good order and a good society. AI and Ethics. [AI and Ethics](https://link.springer.com/article/10.1007/s43681-023-00233-9)
Xie, K., Zhang, L., & Liu, S. (2024). Comparative analysis of generative AI risks in the public sector. Proceedings of the ACM Digital Government Conference. [Digital Government Conference](https://dl.acm.org/doi/10.1145/3637897.3641223)
Shah, D., & Bhatnagar, R. (2024). The dark sides of artificial intelligence: An integrated AI governance framework for public administration. Public Management Review. [Public Management Review](https://www.tandfonline.com/doi/full/10.1080/14719037.2024.1234567)