As AI technology continues to develop, new regulations and governance approaches emerge alongside it. Security and compliance professionals now face a pressing challenge: with AI tools becoming standard in workplace operations, how can organizations adopt and implement AI governance frameworks effectively?
ARIMLABS has created an extensive mapping of two major AI compliance frameworks - AICM and the NIST AI RMF - identifying how their controls align, overlap, and where gaps exist.
KEY FRAMEWORKS
Multiple AI governance frameworks exist, but two deserve particular attention: the NIST AI Risk Management Framework (AI RMF) and the AI Control Matrix (AICM).
These frameworks take different approaches:
NIST AI RMF offers specific recommendations for security professionals, providing clear guidance on how to secure AI environments.
For example: control MAP2 has 3 subcategories that directly state what technologies or methodologies can be used
QUOTE
MAP 2.1: The specific tasks and methods used to implement the tasks that the AI system will support are defined (e.g., classifiers, generative models, recommenders).
AICM presents a comprehensive set of controls but does not include detailed implementation guidance.
For example: AIS-08 control provides the following description
QUOTE
Validate, filter, modify or block, as necessary, input against adversarial patterns, failure patterns and unwanted behaviour according to organisational policies and applicable laws and regulations.
These two frameworks share significant overlap and map directly to each other. Most AICM controls correspond on average to 2-3 NIST RMF controls. For example, AIS-08 from AICM maps to MP-2.3 and several other NIST AI RMF controls, demonstrating how these frameworks complement each other in practice.
However, the gaps exist: some AICM controls have no corresponding NIST RMF guidance. These unmapped controls represent areas where the frameworks diverge, and ARIMLABS highlights these gaps clearly in the control-mapping spreadsheet.
To supplement the compliance mapping, ARIMLABS references external guidance - including "AI Privacy Risks & Mitigations – Large Language Models (LLMs)" by Isabel Barberá - to provide context for controls that have no direct mapping.. This report presents a comprehensive risk management methodology for LLM systems, including practical mitigation measures for common privacy risks.
Building on this foundation, ARIMLABS expanded the compliance mapping by connecting AICM controls, NIST AI RMF categories, and relevant external guidance for unmapped areas. We then identified specific tools and vendors that can help organizations implement these controls in practice, creating a clear pathway from framework requirements to actionable solutions.
For those seeking a comprehensive view of the global AI regulatory landscape, our research paper "Global AI Governance Overview: Understanding Regulatory Requirements Across Global Jurisdictions" will soon be publicly available.

