New Draft Rules Aim To Strengthen Customer Transparency And Fair Treatment
Vietnam’s central bank has released a draft circular laying out stricter rules for the use of AI in banking. Under the proposal, banks and other financial institutions must inform customers whenever AI tools influence decisions or communications. Regulators argue that such openness is essential for building trust in fast-changing digital environments.
Banks must also disclose when AI is used to generate content such as images, audio, and video. The framework’s central aim is to protect customers from misunderstanding or misusing automated systems. Officials believe greater clarity will help customers grasp how emerging technologies are reshaping the way people interact with money.

Banks Must Notify Users Before Deploying AI-Driven Customer Interfaces
The draft requires institutions to notify customers about AI-powered chatbots, virtual assistants, or automated call systems before deploying them. These technologies are playing an increasingly central role in communication on digital banking platforms, and customers need to know when they are interacting with automated intelligence rather than a human.
Financial institutions must also disclose systems capable of emotion recognition or biometric classification. Regulators aim to prevent misuse of such sensitive analytical tools and to ensure customers remain aware of how their data is handled by emerging technologies.
Rules Prohibit Exploiting Customer Vulnerabilities Through Automated Decisions
The circular prohibits banks from using AI systems to mis-sell unsuitable financial products to customers already in difficulty. Regulators identified risks tied to age, disability, or financial hardship, and the rules aim to stop algorithms from pushing high-risk products onto customers who lack adequate protection.
Under the new rules, customers can challenge decisions made with AI. Banks must assign human staff to review disputed outcomes, a requirement intended to strengthen accountability and ensure automated systems do not operate without supervision.
Central Bank Cites Rapid Growth In AI As Reason For Stronger Governance
Officials noted that AI adoption is accelerating across Vietnam’s financial sector. Regulators worry that unchecked growth could deepen systemic weaknesses or produce unfair outcomes, and increased oversight is intended to preserve customer trust through a period of rapid technological change.
The circular seeks to limit risk while encouraging responsible innovation. Regulators want an environment in which AI can safely help modernize finance, and they stress the importance of protecting groups that may be disproportionately affected by automated systems.
Compliance Deadline Extends Into 2027 For Existing Systems
The rules are expected to take effect in March. Banks already using AI will have until September 27, 2027, to comply, a timeline that reflects how difficult it is for large institutions to update legacy systems.
Regulators believe the phased approach gives businesses time to adapt carefully. Institutions will need to examine their data practices, decision-making processes, and system outputs, and the transition period eases adoption without disrupting essential services.
Vietnam Expands AI Oversight Amid Broader National Security Directives
The new banking rules align with broader government efforts to strengthen data protection and cybersecurity. A directive issued in January set strict requirements for verifying identity in online activities, as officials seek to eliminate fake accounts, anonymous profiles, and unregistered mobile numbers nationwide.
These measures respond to growing concern over digital impersonation and AI-powered deepfake scams. Authorities now require social media platforms to verify users’ ages and identities, and national strategies reflect a heightened sense of urgency about the integrity of information in digital ecosystems.
Regional Developments Highlight Growing Focus On Responsible AI Governance
Neighboring countries are also refining their AI strategies. Laos recently announced a national framework based on international ethics-assessment methods, aiming to foster responsible technology adoption while improving digital readiness across government.
Experts say trustworthy AI must be built on secure data that guarantees accuracy, ownership, and integrity. Some point to enterprise blockchain systems as a potential backbone for managing high-quality data inputs, technologies that could help keep automated processes safe and compliant with emerging rules.













