FINRA publishes overview of agentic AI in financial services
On January 27, FINRA published its observations on the emerging use of agentic AI in the financial services industry. AI agents are autonomous systems that can plan, make decisions, and execute actions to achieve goals without predefined rules or continuous human intervention, which distinguishes them from traditional automation tools. FINRA noted that these agents can operate with varying degrees of autonomy and oversight but lack human judgment and tacit knowledge, which may present challenges for transparency and supervision.
FINRA identified several risks associated with AI agents, including agents acting beyond their intended scope and authority, difficulties with auditability and transparency, misuse or disclosure of sensitive data, insufficient domain knowledge for complex tasks, misaligned reward functions, and persistent GenAI-related risks such as bias, hallucinations, and privacy concerns.
FINRA also outlined several types of AI agents, including conversational agents that interact via natural language, software development agents that automate coding and infrastructure management, fraud detection and prevention agents, trade and AML surveillance agents, process automation and optimization agents, and trade execution agents. FINRA also published an infographic illustrating these agent types.