AI Safety & Governance

Jun 2, 2024

The Bletchley Declaration

The Bletchley Declaration, issued by the countries attending the AI Safety Summit at Bletchley Park in November 2023, emphasizes the need for safe, human-centric, and responsible AI development. It acknowledges AI’s transformative potential and associated risks, particularly with advanced AI models. The declaration calls for international cooperation to address these risks, promote AI safety, and ensure AI benefits are inclusive and sustainable. Key areas of focus include transparency, accountability, risk assessment, and collaboration on safety research and policies.

For more details, you can read the full declaration at gov.uk.

The White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

This White House Executive Order (October 30, 2023) emphasizes responsible AI development to address societal challenges while mitigating risks. Key points include ensuring AI safety and security, promoting innovation and competition, supporting American workers, advancing equity and civil rights, protecting consumer interests, safeguarding privacy and civil liberties, enhancing federal AI capacity, and leading global AI governance. The order mandates robust standards, guidelines, and collaboration across sectors to achieve these goals.

For more details, visit the full White House Executive Order.

EU AI Act

The EU Artificial Intelligence Act (July 2024) outlines the regulation of AI systems based on risk levels: unacceptable risk (prohibited), high risk (strictly regulated), limited risk (lighter obligations), and minimal risk (unregulated). Key points include obligations for developers and deployers of high-risk AI, documentation requirements, and specific rules for General Purpose AI (GPAI) systems. The Act aims to ensure AI safety, accountability, and compliance, with phased implementation timelines and the establishment of an AI Office for oversight.

For a more detailed summary, visit the summary page, the AI Act Explorer, the AI Act Compliance Checker, or the implementation timeline.


Australia

Australia’s AI Ethics Framework

Australia’s AI Ethics Framework (2019) provides guidelines for businesses and governments to design, develop, and implement AI responsibly. It includes eight ethical principles aimed at ensuring AI systems are safe, secure, and reliable. The eight principles are:

  1. Human, societal and environmental wellbeing
  2. Human-centred values
  3. Fairness
  4. Privacy protection and security
  5. Reliability and safety
  6. Transparency and explainability
  7. Contestability
  8. Accountability

These principles are voluntary and aim to promote ethical AI practices, build public trust, and ensure AI benefits all Australians. They complement existing AI regulations and encourage responsible AI development and use.

The framework supports Australia’s goal of becoming a global leader in ethical AI and includes case studies from major businesses that have tested the principles.

For more details, visit industry.gov.au or the principles page.

Guidelines for Secure AI System Development

The Guidelines for Secure AI System Development by the Australian Cyber Security Centre (ACSC) provide comprehensive recommendations for developing AI systems securely. The guidelines cover four key areas: secure design, secure development, secure deployment, and secure operation and maintenance. They emphasize threat modeling, supply chain security, documentation, incident management, and responsible release. The document aims to ensure AI systems are safe, reliable, and protect sensitive data, encouraging providers to implement security measures throughout the AI lifecycle.

For more details, visit cyber.gov.au.

Similarly, the UK’s National Cyber Security Centre (NCSC) provides guidelines for developing secure AI systems. These guidelines emphasize understanding AI risks, ensuring data integrity, securing AI infrastructure, maintaining AI model integrity, and ensuring robust incident response and recovery processes. The guidelines also include practical advice for integrating security practices throughout the AI development lifecycle, from design to deployment, to mitigate potential security threats effectively.

For more details, visit ncsc.gov.uk.
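To make one part of this guidance concrete, here is a minimal sketch of the supply-chain recommendation to verify artifacts before use: checking a downloaded model file against a published SHA-256 digest. The file name and digest below are hypothetical; in practice the expected digest should come from a trusted, out-of-band source.

```python
# Minimal sketch: verify a downloaded model artifact against a published
# SHA-256 digest before loading it. The file name and digest are hypothetical.
import hashlib
import sys

MODEL_PATH = "model.safetensors"   # hypothetical downloaded artifact
EXPECTED_SHA256 = "0123abcd..."    # hypothetical digest published by the provider

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(MODEL_PATH)
if actual != EXPECTED_SHA256:
    sys.exit(f"Checksum mismatch for {MODEL_PATH}: refusing to load ({actual})")
print(f"{MODEL_PATH} matches the expected digest")
```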

NSW AI Assurance Framework

The NSW AI Assurance Framework provides guidelines for the design, development, and use of AI technologies in government projects. Effective from March 2022, it requires project teams to assess and document AI-specific risks throughout the project lifecycle. The framework emphasizes ethical principles such as community benefit, fairness, privacy, security, transparency, and accountability. It supports the NSW AI Strategy and ICT Digital Assurance Framework and mandates submission of assessments for AI projects exceeding $5 million or posing mid-range or higher risks.

For more details, visit nsw.gov.au, the Mandatory Ethical Principles for the use of AI, or the basic guidance for GenAI.

The NSW AI Strategy outlines the government’s approach to leveraging AI to enhance service delivery and decision-making. It focuses on using AI to free up the workforce for critical tasks, cut costs, and improve targeted services. The strategy addresses the potential of AI to transform society and the economy while emphasizing the importance of developing AI responsibly to meet privacy standards and address ethical considerations. It includes guidance on balancing opportunity and risk, ensuring community trust, and mitigating unintended consequences.

For details, visit the AI Strategy page at digital.nsw.gov.au.

The Gradient Institute

The Gradient Institute is an independent, nonprofit research organization dedicated to integrating safety, ethics, accountability, and transparency into AI systems. They develop new algorithms, provide training, and offer technical guidance on AI policy. The institute collaborates with various organizations to address AI risks, ensure ethical AI deployment, and promote responsible AI practices through research, advisory services, and case studies.

For more details, visit the Gradient Institute website.

Supporting Responsible AI

The “Supporting Responsible AI” discussion paper by the Australian Department of Industry, Science and Resources outlines a public consultation process for developing policies and initiatives that promote responsible AI use. The consultation seeks input from various stakeholders to ensure AI technologies are used ethically and responsibly, aligning with societal values and legal standards. The initiative aims to build public trust, safeguard against risks, and harness AI’s benefits for all Australians.

Find the paper here.

Victoria: Use of personal information with ChatGPT

The Office of the Victorian Information Commissioner (OVIC) states that Victorian public sector organizations must not use personal information with ChatGPT, as it contravenes Information Privacy Principles (IPPs). This includes generating, collecting, or retaining personal data. Any breach should be reported as an information security incident. The statement highlights the significant privacy risks and potential harms, emphasizing that even if input history and model training are disabled, information may still be retained and reviewed by OpenAI.
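One practical implication is that prompts should be screened before they reach an external service. Below is a rough sketch of such a pre-submission guard; the regex patterns are illustrative only, are far from a complete personal-information detector, and should not be treated as sufficient for IPP compliance.

```python
# Rough sketch: block prompts that appear to contain obvious personal
# identifiers before they are sent to an external LLM service.
# The patterns are illustrative only; real screening needs proper PII
# detection tooling and human review.
import re

PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone number": re.compile(r"(\+?61|0)[\s-]?\d(?:[\s-]?\d){8}"),  # AU-style numbers
    "tax file number": re.compile(r"\b\d{3}[\s-]?\d{3}[\s-]?\d{3}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any identifier patterns found in the prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

prompt = "Please summarise the complaint from jane.citizen@example.com"
findings = screen_prompt(prompt)
if findings:
    raise ValueError(f"Prompt blocked: possible personal information ({', '.join(findings)})")
```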

WA Government Artificial Intelligence Policy and Assurance Framework

The WA Government Artificial Intelligence Policy and Assurance Framework outlines principles and guidelines for WA Government agencies developing or using AI tools. It ensures AI systems are assessed for risk and compliance during all development stages. Projects with significant funding or high risk must be reviewed by the WA AI Advisory Board. The framework includes guidance materials and FAQs to support specific AI use cases.

For details, visit wa.gov.au.


UK AI Safety Institute

https://www.aisi.gov.uk

US AI Safety Institute (NIST)

https://www.nist.gov/aisi

Statement on AI Risk

https://www.safe.ai/work/statement-on-ai-risk

A Right To Warn

https://righttowarn.ai


AI Governance in Australia

UQ’s “AI Governance in Australia” discusses the need for robust norms, policies, laws, and institutions to guide AI development, deployment, and use, especially given the rapid advancements in AI technologies. It highlights the importance of managing risks from AI, including misuse, accidents, and loss of control.

For details, visit aigovernance.org.au.

Centre for the Governance of AI

The Centre for the Governance of AI (GovAI) focuses on researching and guiding the development and regulation of AI to ensure it is safe and beneficial. Established in 2018, GovAI supports institutions by providing research, hosting fellowships, and organizing events. Key research areas include AI security threats, responsible development, regulation, international coordination, and compute governance. GovAI has influenced policy through publications and advisory roles and transitioned from Oxford’s Future of Humanity Institute to an independent nonprofit in 2021.

For more information, visit governance.ai.

WEF AI Governance Alliance

The AI Governance Alliance, an initiative by the World Economic Forum, aims to design transparent and inclusive AI systems. It brings together diverse stakeholders to create frameworks and policies that ensure ethical AI development. The alliance focuses on fostering collaboration, developing standards, and addressing the societal impacts of AI. It supports innovation while ensuring AI technologies are deployed responsibly and benefit all of society.

For details, visit the AI Governance Alliance.

Institute for AI Policy and Strategy (IAPS)

The Institute for AI Policy and Strategy (IAPS) is a remote-first think tank focusing on managing risks from advanced AI systems. It conducts policy research, develops AI governance standards, and addresses international governance issues, particularly with China. IAPS emphasizes intellectual independence, not accepting funding from for-profit organizations, and aims to build a community of thoughtful AI policy practitioners. Their work includes compute governance and drawing lessons from cybersecurity and other critical industries.

For details, go to IAPS.

Centre for Artificial Intelligence and Digital Ethics

The Centre for AI and Digital Ethics (CAIDE) at the University of Melbourne focuses on interdisciplinary research, teaching, and leadership in AI and digital ethics. It addresses ethical, technical, regulatory, and legal issues related to AI and digital technologies. CAIDE involves experts from various faculties, including Law, Engineering and IT, Education, Medicine, Dentistry and Health Sciences, and Arts. The Centre offers undergraduate, graduate, and professional courses and engages with the public through events and media.

For details, visit unimelb.edu.au.

AI Assurance in the UK

The UK government’s “Introduction to AI Assurance” outlines the importance of AI assurance in building trust, managing risks, and ensuring responsible AI development. It introduces key concepts and tools for AI assurance, emphasizing its role in AI governance and regulatory frameworks. The document highlights the need for robust techniques to measure, evaluate, and communicate the trustworthiness of AI systems, supporting both industry and regulators in achieving responsible AI outcomes.

For details, visit gov.uk.

UNESCO Recommendation on the Ethics of AI

The UNESCO Recommendation on the Ethics of Artificial Intelligence is the first global standard on AI ethics, adopted by all 193 Member States. It emphasizes four core values for the good of humanity, individuals, societies, and the environment: human rights and human dignity; living in peaceful, just, and interconnected societies; diversity and inclusiveness; and a flourishing environment and ecosystems. The recommendation includes ten core principles for a human-rights-centred approach and eleven key policy action areas to guide ethical AI development. It also introduces practical methodologies such as the Readiness Assessment Methodology (RAM) and the Ethical Impact Assessment to support implementation, and promotes gender equality in AI through the Women4Ethical AI platform.

For more details, see unesco.org.

AI Standards Hub

The AI Standards Hub, led by the Alan Turing Institute, is dedicated to fostering a vibrant community around AI standards. It offers a platform for knowledge sharing, capacity building, and research. The Hub’s activities are organized around four pillars: an observatory of standards, community collaboration, knowledge and training, and research and analysis. It focuses on Trustworthy AI, addressing transparency, security, and ethical considerations. The Hub provides resources like a standards database, training materials, and forums for discussion.

For more details, visit aistandardshub.org.


Mitre Atlas

https://atlas.mitre.org

MITRE ATLAS (Adversarial Threat Landscape for AI Systems) is a comprehensive, accessible knowledge base documenting adversary tactics and techniques used against AI systems. Based on real-world observations and demonstrations, ATLAS aims to raise awareness and readiness for unique threats to AI-enabled systems. It is modeled after the MITRE ATT&CK framework and serves to inform security analysts, enable threat assessments, and understand adversary behaviors.

Key aspects of ATLAS include:

  1. Collaboration: Involves industry, academia, and government, making ATLAS a central resource for understanding and mitigating AI threats.
  2. Incident Sharing: ATLAS facilitates timely, relevant, and secure reporting of AI incidents and vulnerabilities.
  3. Threat Emulation and Red Teaming: Tools like Arsenal and Almanac plugins have been developed to add AI-targeted adversary profiles to existing threat emulation tools.
  4. Mitigations: The ATLAS team continuously incorporates community techniques to mitigate AI security threats, offering a draft set of mitigations.
  5. Real-World Relevance: It includes case studies of significant AI security breaches, such as a $77 million loss from an attack on a facial recognition system.

The document emphasizes the growing number of vulnerabilities as AI expands, the importance of community collaboration, and the continuous development of tools and strategies to enhance AI security.

More details at the MITRE ATLAS website.
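For readers who want to work with the knowledge base programmatically, here is a small, schema-agnostic sketch for inspecting a local export of the ATLAS data. It assumes the data is available as YAML (for example, the ATLAS.yaml file in MITRE’s atlas-data repository); the exact file layout is an assumption worth verifying against the source, so the code simply walks whatever structure it finds.

```python
# Illustrative sketch: inspect a local copy of the ATLAS knowledge base.
# Assumes a YAML export (e.g. ATLAS.yaml from MITRE's atlas-data repository);
# the exact schema may differ, so the walker below is deliberately generic.
import yaml  # pip install pyyaml

with open("ATLAS.yaml", encoding="utf-8") as f:
    atlas = yaml.safe_load(f)

def find_entries(node, found):
    """Collect anything that looks like a tactic/technique entry (dict with 'id' and 'name')."""
    if isinstance(node, dict):
        if "id" in node and "name" in node:
            found.append((node["id"], node["name"]))
        for value in node.values():
            find_entries(value, found)
    elif isinstance(node, list):
        for item in node:
            find_entries(item, found)
    return found

# Print the first few entries found, whatever level of the tree they live at.
for entry_id, name in find_entries(atlas, [])[:10]:
    print(f"{entry_id}: {name}")
```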

NIST AI Risk Management Framework (AI RMF)

The NIST AI Risk Management Framework (AI RMF) provides guidelines for managing risks associated with AI systems, focusing on trustworthiness, accountability, and transparency. It offers a structured approach to identify and mitigate AI risks, developed through a collaborative process involving public comments and workshops. The framework includes a playbook, roadmap, and tools for implementing AI risk management practices. NIST also launched the Trustworthy and Responsible AI Resource Center to support international alignment and implementation.

For more details, visit NIST AI RMF.

NIST AI RMF Generative AI Profile

The NIST AI Risk Management Framework: Generative AI Profile outlines the risks unique to or exacerbated by generative AI (GAI), such as confabulation, data privacy issues, environmental impacts, and information security concerns. It provides actions for organizations to manage these risks, including governance, monitoring, and documentation procedures. The Generative AI Profile emphasizes transparency, compliance with legal standards, and the integration of GenAI-specific policies into existing risk management frameworks to ensure the safe and trustworthy deployment of generative AI systems.

For details, you can access the full document here.
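As a sketch of how a team might track the risk categories the profile names (such as confabulation and data privacy) inside its own risk management process, the lightweight register below ties each risk to an owner and mitigation actions. The field names and entries are illustrative and are not prescribed by NIST.

```python
# Illustrative sketch: a lightweight register for generative-AI risks of the
# kind named in the profile. Fields and entries are examples, not NIST's.
from dataclasses import dataclass, field

@dataclass
class GenAIRisk:
    name: str                 # risk category, e.g. "confabulation"
    description: str          # how the risk shows up in this system
    owner: str                # accountable role or team
    mitigations: list[str] = field(default_factory=list)
    status: str = "open"      # open / mitigated / accepted

register = [
    GenAIRisk(
        name="confabulation",
        description="Chat assistant may state incorrect policy details as fact.",
        owner="Product risk lead",
        mitigations=["Ground answers in approved documents",
                     "Human review of high-impact responses"],
    ),
    GenAIRisk(
        name="data privacy",
        description="Users may paste personal information into prompts.",
        owner="Privacy officer",
        mitigations=["Prompt screening", "Retention limits on logs"],
    ),
]

for risk in register:
    print(f"[{risk.status}] {risk.name}: {len(risk.mitigations)} mitigation(s), owner: {risk.owner}")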

OWASP AI Security and Privacy Guide

The OWASP AI Security and Privacy Guide provides actionable insights for designing, creating, testing, and procuring secure and privacy-preserving AI systems. It covers key areas like AI security, privacy principles, data minimization, transparency, fairness, and consent. The guide also addresses potential model attacks and provides strategies for maintaining data accuracy and handling personal data responsibly. The document is a collaborative effort aimed at improving AI security and privacy practices.

For details, visit the OWASP AI Security and Privacy Guide.
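As one small illustration of the data minimization principle the guide describes, the sketch below keeps only the fields a model actually needs and drops direct identifiers before the data leaves its source. The column names are hypothetical.

```python
# Illustrative sketch of data minimization: keep only the columns the model
# needs and drop direct identifiers before training or sharing a dataset.
# Column names are hypothetical.
import pandas as pd

df = pd.DataFrame(
    {
        "customer_name": ["A. Example", "B. Example"],
        "email": ["a@example.com", "b@example.com"],
        "age_band": ["25-34", "35-44"],
        "postcode": ["2000", "3000"],
        "churned": [0, 1],
    }
)

FEATURES_NEEDED = ["age_band", "postcode", "churned"]  # justified per use case
minimized = df[FEATURES_NEEDED].copy()                 # identifiers never leave the source
print(minimized.head())
```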

OWASP LLM AI Cybersecurity & Governance Checklist

The “LLM AI Security and Governance Checklist” by OWASP provides a comprehensive guide for secure and responsible use of Large Language Models (LLMs). Key sections include:

  1. Overview: Introduces responsible AI use and key challenges.
  2. The Checklist: Covers adversarial risks, threat modeling, AI asset inventory, security training, business cases, governance, legal and regulatory compliance, deployment strategies, testing, and AI red teaming.
  3. Resources: Offers additional tools and standards for AI security.

The checklist emphasizes integrating AI security with existing practices and highlights the importance of continuous evaluation and validation. See also the OWASP Top 10.
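As a sketch of the checklist’s AI asset inventory item, the example below records which models, datasets, and third-party services are in use so they can be governed and periodically reviewed. The fields and entries are illustrative only.

```python
# Illustrative sketch of an AI asset inventory of the kind the checklist
# recommends. Fields and entries are examples only.
from dataclasses import dataclass

@dataclass
class AIAsset:
    name: str              # model, dataset, or integration
    asset_type: str        # "model", "dataset", "third-party service", ...
    provider: str          # internal team or external vendor
    data_classification: str
    business_owner: str
    last_reviewed: str     # date of last risk/compliance review

inventory = [
    AIAsset("support-chat-llm", "third-party service", "Example Vendor",
            "internal", "Customer support lead", "2024-05-01"),
    AIAsset("claims-triage-model", "model", "In-house ML team",
            "sensitive", "Claims operations", "2024-04-12"),
]

for asset in inventory:
    print(f"{asset.name} ({asset.asset_type}) - owner: {asset.business_owner}, "
          f"classification: {asset.data_classification}")
```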

ISO/IEC 42001:2023

The ISO/IEC 42001:2023 standard, titled “Information technology – Artificial intelligence – Management system,” provides guidelines for establishing, implementing, maintaining, and continually improving an AI management system. It focuses on addressing unique challenges posed by AI, such as ethical considerations, transparency, and continuous learning. The standard aims to help organizations manage AI risks and opportunities systematically, ensuring responsible and trustworthy AI implementation.

For more information, you can visit the ISO page.