AI governance has reached a tipping point.
Across the EU, US, and APAC, regulators have introduced comprehensive frameworks covering risk management, accountability, transparency, and oversight. On paper, AI governance looks mature.
In practice, it is not.
Our recent research, “Global AI Governance Overview: Understanding Regulatory Requirements Across Global Jurisdictions,” shows that governance failures rarely stem from regulatory gaps — they stem from implementation gaps.
Most governance frameworks assume:
centralized systems
deterministic behavior
human-paced decision loops
Modern AI systems, especially LLM-driven and agentic architectures, violate all three assumptions. This paper is the foundation for our ongoing work on governance-by-design and execution-level AI security.
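To make the mismatch concrete, here is a minimal, purely illustrative Python sketch of an agentic loop (the `call_llm` stub, the tool names, and the `agent_loop` helper are hypothetical, not taken from the paper): sampled model output makes each run non-deterministic, tool calls push side effects outside any single centralized system, and the loop iterates at machine speed with no human approval gate between steps.

```python
import random

# Hypothetical stand-in for a sampled LLM call: identical inputs can yield
# different outputs, so behavior is not deterministic.
def call_llm(prompt: str) -> str:
    return random.choice(["search(web)", "query(internal_db)", "done"])

# Hypothetical external tools the agent can invoke on its own; each may sit
# outside the organization's perimeter, so control is not centralized.
TOOLS = {
    "search(web)": lambda: "web results",
    "query(internal_db)": lambda: "db rows",
}

def agent_loop(goal: str, max_steps: int = 5) -> str:
    """Minimal agent loop: plan, act, observe, repeat, with no human in the loop."""
    context = goal
    for step in range(max_steps):
        action = call_llm(context)        # non-deterministic decision
        if action == "done":
            break
        observation = TOOLS[action]()     # decentralized side effect
        context += f"\nstep {step}: {action} -> {observation}"
        # no approval gate here: the loop proceeds at machine pace
    return context

print(agent_loop("summarise our exposure to the EU AI Act"))
```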
We would like to thank Jakub Łatkiewicz for his meaningful contributions and expert input throughout the development of this work.
📄 Read the full paper: https://arxiv.org/abs/2512.02046

