Overview
The National Institute of Standards and Technology (NIST) released the first version of its long-awaited AI Risk Management Framework (AI RMF 1.0) on January 26, 2023. The voluntary framework builds on comments from hundreds of stakeholders and is intended to help organizations build trustworthiness and fairness considerations into the AI development lifecycle. It provides guidance for organizations that design, develop, deploy, or use AI systems to manage the many risks of AI technologies. Beyond providing guidance for developers, the framework is likely to inform how regulators approach AI-related investigations, as they may treat it as effectively setting baseline standards for addressing risks in AI systems.
Topics to be covered:
- Overview of artificial intelligence technologies (10 minutes)
- Introduction to the NIST Framework (5 minutes)
- Risks and harms that AI systems should address (10 minutes)
- Characteristics of trustworthy AI systems (10 minutes)
- How to manage risk with the AI RMF 1.0 Core (10 minutes)
- How to align use of the framework with regulator expectations and guidance (15 minutes)
Who Should Attend: In-house counsel, outside attorneys, and privacy, technology, software, and other industry professionals interested in artificial intelligence
Program Level: Overview
Prerequisites: None
Advanced Preparation: None
Faculty:
Benjamin R. Rossen
Baker Botts LLP