Singapore Launches New AI Governance Framework for Agentic AI

The framework is the first in the world to include a comprehensive guide for enterprises to deploy Agentic AI responsibly


Minister for Digital Development and Information Josephine Teo announced the launch of the new Model AI Governance Framework (MGF) for Agentic AI on Thursday (Jan. 22) at the World Economic Forum (WEF) in Davos, Switzerland.

“It provides guidance to organisations on how to deploy agents responsibly, recommending technical and non-technical measures to mitigate risks, while emphasising that humans are ultimately accountable,” according to a statement released by the Infocomm Media Development Authority (IMDA).

Agentic AI systems use AI agents to plan across multiple steps to achieve specified objectives. The MGF is in line with Singapore’s practical and balanced approach to AI governance, providing space for innovation while putting guardrails in place, the statement said.

“Initiatives such as the MGF for Agentic AI support the responsible development, deployment and use of AI, so that its benefits can be enjoyed by all in a trusted and safe manner.”

While AI agents can take actions to complete repetitive tasks and allow companies to focus on value-added activities and increase productivity, there are potential risks that necessitate the MGF.

One potential risk stems from AI agents having access to sensitive data and the ability to make changes to their environment, such as updating a customer database or making a payment.

The increased capability and autonomy of AI agents can also lead companies to place too much trust and reliance on automated systems, eroding human accountability.

The framework provides guidance on managing risks in the deployment of agentic AI.

It offers a structured overview of the risks of agentic AI and emerging best practices in managing these risks. It is targeted at organisations looking to deploy agentic AI, whether through developing AI agents in-house or using third-party agentic solutions.

The framework guides organisations on the technical and non-technical measures to put in place to deploy agents responsibly, across four dimensions.

The first dimension is assessing and bounding risks upfront by selecting appropriate agentic use cases and placing limits on agents’ powers, such as their autonomy and their access to tools and data.

The second is keeping humans accountable for agents by defining significant checkpoints at which human approval is required.

The third is implementing technical controls and processes throughout the agent lifecycle, such as baseline testing and restricting agents to whitelisted services.

The fourth dimension is enabling end-user responsibility through transparency, education and training.
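To make the guardrail measures above more concrete, here is a minimal, illustrative sketch (not taken from the framework itself; all names are hypothetical) of how a deployer might restrict an agent to whitelisted services and hold high-impact actions, such as payments, at a human-approval checkpoint:

```python
# Hypothetical guardrail layer for an AI agent's tool calls, combining two
# measures the MGF describes: a whitelist of permitted services and a
# human-approval checkpoint for high-impact actions. Illustrative only.

WHITELISTED_TOOLS = {"lookup_order", "update_customer_record", "make_payment"}
REQUIRES_APPROVAL = {"make_payment"}  # significant checkpoint: human sign-off


def execute_tool_call(tool: str, args: dict, approved: bool = False) -> str:
    """Run an agent's requested tool call only if it passes the guardrails."""
    if tool not in WHITELISTED_TOOLS:
        return f"blocked: '{tool}' is not a whitelisted service"
    if tool in REQUIRES_APPROVAL and not approved:
        return f"pending: '{tool}' awaits human approval before execution"
    return f"executed: {tool}({args})"


print(execute_tool_call("lookup_order", {"id": 42}))
print(execute_tool_call("make_payment", {"amount": 100}))  # held for review
print(execute_tool_call("make_payment", {"amount": 100}, approved=True))
print(execute_tool_call("drop_database", {}))  # not whitelisted, so blocked
```

The point of such a design is that the agent never executes a sensitive action autonomously: anything outside the whitelist is refused outright, and whitelisted but high-impact actions pause until a human approves, keeping a person accountable for the outcome.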

April Chin, Co-Chief Executive Officer of Resaro, said, “MGF fills a critical gap in policy guidance for agentic AI. The framework establishes critical foundations for AI agent assurance,” according to the statement.

“It helps organisations define agent boundaries, identify risks, and implement mitigations such as agentic guardrails,” Chin added.

The original Model AI Governance Framework was first introduced in 2020, and the MGF for Agentic AI is the latest initiative by Singapore to build a global ecosystem of trust and reliability in the AI space.

The statement said Singapore is working with other countries through the AI Safety Institute (AISI) and leading the ASEAN Working Group on AI Governance (WG-AI) to develop a trusted AI ecosystem within ASEAN.

IMDA is also working to build the “Starter Kit for Testing of LLM-Based Applications for Safety and Reliability” and welcomes feedback, including the submission of case studies, through its website.

More information is available in the Model AI Governance Framework for Agentic AI.
