Solutions

ASEAN AI Guidelines Seek to Encourage Responsible Use and Deployment


An employee uses a computer in a hotel in the Philippines.
A growing number of organizations and businesses now use AI and automation because of their transformative potential and capacity to introduce new opportunities. Photo credit: ADB.

From ride-hailing and payments firm Gojek, to aspiring “techglomerate” Aboitiz Group, deep-tech startup UCARE.AI, and Singapore’s Ministry of Education, businesses and organizations across Southeast Asia are embracing artificial intelligence (AI).

With good reason. According to tech advisory firm Access Partnership, businesses in key markets in Southeast Asia can expect economic benefits of up to $835 billion by 2030, or 28% of the region’s total opportunity, if AI products and services are adopted.

The Association of Southeast Asian Nations (ASEAN) recognizes that businesses and organizations adopt AI and automation because of their transformative potential and their capacity to open new opportunities by disrupting old models. The regional grouping has therefore developed the ASEAN Guide on AI Governance and Ethics, targeted at organizations across the region that wish to design, develop, and deploy traditional AI technologies responsibly in commercial and non-military or dual-use applications. The guide, published in February 2024, also aims to increase users’ trust in AI.

“Given the profound impact that AI potentially brings to organizations and individuals in ASEAN, it is important that the decisions made by AI are aligned with national and corporate values, as well as broader ethical and social norms,” the guide said.

The guide focuses on encouraging alignment within ASEAN and fostering the interoperability of AI frameworks across jurisdictions. It also includes recommendations on national-level and regional-level initiatives that governments in the region can consider implementing for their AI systems.

In recent years, governments and international organizations have begun issuing principles, frameworks, and recommendations on AI ethics and governance. These include Singapore’s Model AI Governance Framework and the Organisation for Economic Co-operation and Development’s Recommendation of the Council on Artificial Intelligence. Until now, however, there has been no intergovernmental common standard for AI that defines the principles of AI governance and guides policymakers in the region in using AI systems responsibly and ethically.

Seven guiding principles

The guide laid out seven guiding principles to help build trust in AI and to steer the ethical design, development, and deployment of AI systems.

1. Transparency and explainability. Transparency entails disclosing when an AI system is being used, its involvement in decision-making, what kind of data it uses, and its purpose. Such disclosure helps the public become aware of AI-enabled systems and make an informed choice about whether to use them. Deployers of AI products and services should also be able to explain or communicate the reasoning behind an AI system’s decision in a way that is understandable to a range of people, as it is not always clear how an AI system has arrived at a conclusion. This lets the public know the factors contributing to the AI system’s recommendation.

2. Fairness and equity. The guide advises AI deployers to have safeguards in place to ensure that algorithmic decisions do not further exacerbate or amplify existing discriminatory or unjust practices across different demographics. The design, development, and deployment of AI systems should not result in unfair bias or discrimination. Deployers should regularly test their systems for bias and, where bias is confirmed, make the necessary adjustments to ensure equity; a minimal sketch of such a check appears after this list.

3. Security and safety. AI deployers should ensure AI systems are safe and sufficiently secure against malicious attacks. Ensuring that AI systems are safe is essential to fostering public trust in AI. This entails conducting impact or risk assessments and ensuring that known risks have been identified and mitigated. Deployers should adopt a risk-prevention approach and put precautions in place so that humans can intervene to prevent harm. An AI system should also be able to safely disengage itself in the event it makes unsafe decisions.

4. Human-centricity. AI systems should respect human-centered values and pursue benefits for society, including people’s well-being, nutrition, and happiness. This is key to ensuring that people benefit from AI design, development, and deployment while being protected from potential harms. Especially where AI systems are used to make decisions about humans or to aid them, it is imperative that these systems are designed to benefit people and do not take advantage of the vulnerable.

5. Privacy and data governance. AI systems should have mechanisms in place to maintain and protect the quality and integrity of data, and data protocols should govern who can access data and when. Data privacy and protection should be respected and upheld throughout the design, development, and deployment of AI systems, and the way data is collected, stored, generated, and deleted across the AI system lifecycle must comply with applicable data protection laws, data governance legislation, and ethical principles.

6. Accountability and integrity. Deployers should be accountable for decisions made by AI systems and for their compliance with applicable laws, internal AI governance policies, and ethical principles, and they should ensure the proper functioning of those systems. When designing, developing, and deploying AI systems, AI actors should act with integrity throughout the AI system lifecycle. In the event of a malfunction or misuse of an AI system that results in negative outcomes, those responsible should act with integrity and implement mitigating actions to prevent similar incidents from happening again.

7. Robustness and reliability. AI systems should be able to cope with errors during execution, with unexpected or erroneous input, and with stressful environmental conditions, and they should perform consistently. Where possible, AI systems should work reliably and produce consistent results across a range of inputs and situations. To prevent harm, AI systems need to be resilient to unexpected data inputs, must not exhibit dangerous behavior, and should continue to perform according to their intended purpose; a simple consistency test of this kind is sketched below.
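To make the bias testing under principle 2 concrete, here is a minimal sketch in Python of a recurring fairness check. The decision log, group labels, and the 0.2 tolerance are illustrative assumptions, not prescriptions from the guide; a deployer would substitute its own data and thresholds.

```python
# Minimal sketch of a recurring bias check (principle 2). The group labels,
# decision log, and tolerance below are illustrative assumptions.
from collections import defaultdict

def approval_rates(records):
    """Compute the positive-decision rate for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        positives[group] += int(approved)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(records):
    """Largest difference in positive-decision rates across groups."""
    rates = approval_rates(records)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decision log: (group, was the application approved?)
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]

gap, rates = demographic_parity_gap(log)
print(f"Approval rates by group: {rates}")
if gap > 0.2:  # illustrative tolerance; a deployer would set its own
    print(f"Parity gap {gap:.2f} exceeds tolerance -- review the model")
```

A gap above the tolerance does not by itself prove discrimination, but it flags the system for the review and adjustment the guide calls for.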
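The consistency test mentioned under principle 7 can be mechanized in the same spirit. The sketch below, again illustrative rather than drawn from the guide, checks that a model’s output stays within a tolerance when its inputs are slightly perturbed; the stand-in linear scorer and the noise and tolerance values are assumptions.

```python
# Minimal sketch of a robustness check (principle 7): verify that small
# perturbations of an input do not materially change the model's output.
# The scoring function is a stand-in for a real model.
import random

def model_score(features):
    """Stand-in model: a fixed linear score over numeric features."""
    weights = [0.4, -0.2, 0.1]
    return sum(w * x for w, x in zip(weights, features))

def is_stable(features, trials=100, noise=0.01, tolerance=0.05):
    """Check that the output varies little under small random input noise."""
    baseline = model_score(features)
    for _ in range(trials):
        perturbed = [x + random.uniform(-noise, noise) for x in features]
        if abs(model_score(perturbed) - baseline) > tolerance:
            return False
    return True

print(is_stable([1.0, 2.0, 3.0]))  # True if outputs stay within tolerance
```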

Use cases

The guide cited a handful of use cases illustrating how organizations operating in ASEAN have implemented AI governance measures in AI design, development, and deployment.

The guide cited Gojek of Indonesia for ensuring safeguards are in place in its use of AI in its operations. The company leverages AI in areas including, but not limited to, driver-order matching, cartography, and fraud detection. Before deploying machine-learning models, the company tests the models’ performance metrics against a set of predefined offline benchmarks. Such benchmarks allow evaluation of an AI model’s performance in a controlled environment with a fixed set of data, which helps Gojek measure the variation in model outputs across successive iterations under the same operating conditions. Conducting offline benchmarking also allows Gojek to test model performance without any real-world risk or harm to end users.
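The guide does not reproduce Gojek’s actual benchmarks or metrics, so the following Python sketch only illustrates the general pattern it describes: score a candidate model on a fixed, held-out dataset and gate deployment on predefined metric floors. The metric names, thresholds, dataset, and predict interface here are hypothetical.

```python
# Minimal sketch of pre-deployment offline benchmarking: evaluate a candidate
# model on a fixed labelled dataset and gate deployment on predefined metric
# floors. All names, thresholds, and data below are illustrative assumptions.

BENCHMARKS = {"accuracy": 0.90, "precision": 0.85}  # predefined thresholds

def evaluate(predict, dataset):
    """Compute accuracy and precision on a fixed labelled dataset."""
    tp = fp = correct = 0
    for features, label in dataset:
        pred = predict(features)
        correct += int(pred == label)
        if pred:
            tp += int(label)
            fp += int(not label)
    return {"accuracy": correct / len(dataset),
            "precision": tp / (tp + fp) if (tp + fp) else 0.0}

def ready_to_deploy(predict, dataset):
    """Deploy only if every metric meets its predefined floor."""
    metrics = evaluate(predict, dataset)
    ok = all(metrics[name] >= floor for name, floor in BENCHMARKS.items())
    return ok, metrics

# Hypothetical fraud-detection classifier and fixed evaluation set.
candidate = lambda f: f["amount"] > 100          # stand-in model
held_out = [({"amount": 150}, True), ({"amount": 40}, False),
            ({"amount": 200}, True), ({"amount": 60}, False)]

ok, metrics = ready_to_deploy(candidate, held_out)
print(metrics, "-> deploy" if ok else "-> hold back")
```

Because the dataset and thresholds are fixed in advance, successive model iterations can be compared under identical conditions, which is the property the guide highlights.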

The Aboitiz Group from the Philippines, meanwhile, was included in the guide for successfully establishing internal AI governance structures and measures. The company uses AI to support the day-to-day operations of business units spanning power, banking and financial services, food, land, construction, shipbuilding, infrastructure, and data science. Effective AI governance helps the group improve existing AI-driven processes and decisions and mitigate the risks and challenges they pose. The group has established clear responsibilities for all AI-related programs and decisions, risk assessment protocols for AI-driven decisions, and appropriate measures to address the different levels of risk. It also regularly reviews its organizational values and the incorporation of ethical principles into its use of AI.

The guide also included Singapore’s Ministry of Education among the use cases because it illustrates stakeholder interaction and communication. In developing an AI-enabled system that provides a personalized learning path for each student, the ministry engaged various stakeholders and sought their views during the design and development of the system. Ideas and feedback from policymakers, curriculum and technical experts, and users (teachers and students) were then incorporated into the planning, building, and piloting phases.

This article was first published by BIMP-EAGA on 5 June 2024.