Skip to main content

Artificial Intelligence Risk Framework

Risk-First has developed two complementary risk frameworks for understanding and managing AI-related risks.

Agentic Software Development Risk Framework

agentic-software-development.riskfirst.org

A framework addressing the unique threats that emerge when AI systems autonomously write, modify, and deploy code. Existing AI governance frameworks (NIST AI RMF, ISO/IEC 42001) focus on AI as a decision-making component — but AI is becoming the primary producer and modifier of software itself.

This shifts risk from "bad AI decision" to "unsafe evolving codebase" — a completely different class of risk.

Covers capabilities such as:

  • Code Generation — AI producing source code and configurations
  • Tool Calling — Invoking external APIs and system commands
  • Autonomous Planning — Decomposing goals without human intervention
  • Multi-Agent Orchestration — Coordinating multiple AI agents

And threats including Code Security Risks, Supply Chain Risks, Prompt Injection, and Autonomy & Control Risks.

Societal AI Risk Framework

societal-ai-risk.riskfirst.org

A framework addressing civilisation-scale risks from advanced AI systems. This examines what happens to society as AI grows in capability and autonomy — risks that affect economies, political systems, human agency, and the balance of power between humans and machines.

Covers risks such as:

  • Emergent Behaviour — Unforeseen capabilities arising from scaling
  • Loss of Human Control — AI evolving beyond human oversight
  • Social Manipulation — AI-driven disinformation at scale
  • Superintelligence With Malicious Intent — Advanced AI acting against human interests

And practices including Human-In-The-Loop, Global AI Governance, Kill Switch mechanisms, and more.

Further Reading

For a comprehensive view of AI risk frameworks across all domains, see the MIT AI Risks Database which covers hundreds of different frameworks.