The Development and Application of AI in Taiwan's Financial Industry

Many banks, insurance companies, and securities firms in Taiwan already use AI. A survey carried out in April this year by Taiwan's Financial Supervisory Commission (FSC) found that 33% of respondents had implemented AI, up from 29% the previous year.

The survey showed that the primary objectives for adopting AI in the financial industry include improving operational efficiency (30%), reducing manpower (18%), and enhancing customer experience (15%). Use cases focus on internal administration, smart customer service, and financial crime prevention.

The survey also found that 48% of financial institutions using AI have adopted generative AI, up 21% from last year. The most common application areas for generative AI were internal administrative tasks (39%) and intelligent customer service (15%).

In short, the FSC's data showed that financial institutions in Taiwan that have adopted AI mainly use it for internal operations, intelligent customer service, and financial crime prevention. Looking forward, the survey indicates that the ways financial institutions plan to apply AI do not differ significantly from current uses.

The level of automated decision-making by AI systems in financial institutions remains low, and it appears unlikely that, in the short term, AI will be widely used as the core of a financial product or service (e.g., fully AI-driven wealth management). Available information indicates that this direction is consistent with trends observed in many advanced countries.

AI-related rules, regulations, and guidance for the financial industry

In Taiwan, the AI regulations most relevant to financial services are those governing AI regulatory sandboxes and robo-advisory services.
Applications submitted to the FSC for entry into the regulatory sandbox may involve AI technologies, and robo-advisory services involve the use of AI to provide securities investment consulting services.

As for AI itself (regardless of industry), Taiwan, like many other countries, initially focused on ethical guidelines when discussing AI rules. In 2019, Taiwan's Ministry of Science and Technology issued the "AI Technology R&D Guidelines." The Guidelines emphasized that AI researchers should act responsibly throughout the AI development process and adhere to certain core values and principles.

After the guidelines were issued, debate arose over enacting specific laws to govern AI, and in 2024 Taiwan's National Science and Technology Council (NSTC) introduced the "Artificial Intelligence Basic Act (Draft Act)." The draft stipulates that the government should improve regulatory adaptation, assess and verify mechanisms to prevent unlawful use of AI, promote AI risk classification standards, strengthen human oversight of AI, establish accountability mechanisms for AI applications, safeguard labor rights, protect personal data and privacy, and enhance data usability.

These provisions are directed primarily at the government; they do not directly bind the private sector.

The Draft Act is still subject to review and approval by Taiwan's Legislative Yuan. Given the diverse range of opinions across society regarding its provisions, both the timeline for its passage and its final content remain uncertain.

For the financial sector, however, the government has already released relevant guidelines. In 2023, the FSC issued the "Core Principles and Related Promotion Policies for the Use of AI in the Financial Industry." In 2024, it released the "Guidelines for Artificial Intelligence (AI) Applications in the Financial Industry." Key points of the guidelines include:

● A risk-based approach: Financial institutions should adopt a risk-based approach when applying AI systems.
When outsourcing AI-related tasks to third parties, financial institutions should implement appropriate oversight and supervision measures.

● Governance and Accountability: Financial institutions should have a clear governance structure and risk management policy for AI systems. They should identify each system's purpose, applicable operations, and responsible personnel. For high-risk AI systems, institutions may, after evaluating internal resources and expertise, establish a review or audit mechanism conducted by independent third parties with AI expertise.

● Fairness: When using AI systems, financial institutions should assess fairness and strive to avoid bias and discrimination. For example, if a financial institution uses generative AI developed by a third party and cannot verify the training process or ensure the fairness of the resulting data or computations, the institution must still have personnel objectively and professionally manage the risks of the output.

● Privacy and Customer Rights: Financial institutions must protect customer privacy and guard against data breaches. The principle of data minimization should be followed to avoid collecting excessive or unnecessary sensitive data. Institutions should also respect customers' right to choose whether to use AI-based services and determine whether to offer alternatives.

● Robustness and Security: Financial institutions should ensure the robustness and security of their AI systems and implement proper cybersecurity protections. If services rely on AI systems developed or operated by third parties, institutions should conduct appropriate risk management and oversight of those third parties.

Additionally, relevant self-regulatory organizations in the financial industry may also introduce more technical rules regarding the use of AI.
For example, Taiwan's self-regulatory organization for the banking industry, the Bankers Association of the Republic of China (BAROC), issued the "Operational Guidelines for Financial Institutions on the Use of Artificial Intelligence Technologies" in 2024. BAROC's guidelines stipulate the following key points:

● When financial institutions use AI technologies, they should implement measures to comply with the principle of "fair treatment of customers."

● During the model training phase, financial institutions must consider the security of the models or algorithms selected and adopt effective measures to improve training quality, prevent the generation of inappropriate content, and enhance the accuracy and reliability of AI-generated outputs.

● Financial institutions should appoint senior executives or committees to oversee AI-related supervision and governance. A dedicated unit or dedicated personnel should be assigned to manage and promote AI.

● When developing or optimizing AI technologies, financial institutions must retain the necessary technical documents and relevant records to ensure they can be reviewed when needed.

● When using third-party AI technologies, financial institutions must carry out due diligence, evaluation, and oversight to ensure the third-party provider maintains traceable logs of AI computations for future audits. Institutions should require the provision of relevant information and clearly define the scope of responsibility.

● When AI technology is used for direct interaction with consumers, the institution should disclose that the interaction or service is automated through AI. Where applicable, consumers should have the choice of whether to use such services and be informed of any available alternatives, unless otherwise required by law.
Before providing the aforementioned AI-driven services, financial institutions should evaluate data governance, IT security, oversight mechanisms, protection of consumer rights, and contingency measures for unexpected events. Relevant internal departments, such as IT security, compliance, and risk control, should provide their opinions on these matters.

● Financial institutions should adopt a risk-based approach and establish appropriate risk management and periodic review mechanisms based on their business scale and the extent of their AI usage. Depending on the nature, scope, and severity of the risks, institutions may engage independent third-party experts with AI expertise to issue assessment reports.

● When using generative AI, financial institutions remain responsible for assessing and managing the risks associated with the AI-generated content.

● Transparency and Explainability: AI systems used by financial institutions should operate with transparency. Institutions may disclose relevant information on their websites to inform stakeholders about their use of AI and enhance market trust. As for explainability, institutions must be able to clearly explain how their AI systems work, including the logic behind their predictions or decisions.

Key Considerations of FSC's AI Guidelines for the Financial Industry

Although the Draft AI Basic Act has not yet been enacted, the guidelines above offer valuable reference points for the operations of financial institutions.

These guidelines emphasize that financial institutions must bear ultimate responsibility for AI, especially if the AI is developed by an external party. If an adverse outcome results from the use of AI, financial consumers are likely to seek compensation directly from the financial institution.

Therefore, when outsourcing such tasks, financial institutions must carefully ensure that contracts include appropriate and sufficient procedures for supervising the service provider.
Financial institutions must clearly define the service provider's obligations and specify the allocation of responsibilities between the parties.

With the rise of emerging technologies such as AI, boards of directors may increasingly encounter unfamiliar issues and challenges in their decision-making. While whether a director has fulfilled the duty of care must be assessed case by case, unfamiliarity with a subject alone is unlikely to exempt a director from liability.

It would therefore be wise to engage external experts to provide professional advice or services, thereby demonstrating that the board has exercised appropriate diligence and mitigating potential legal risks. For example, when discussing whether to apply AI in production processes, business forecasting, or risk assessment, a company could consider hiring external experts to evaluate the appropriateness and effectiveness of using such technologies and data.

The guidelines also underscore the importance of establishing robust AI governance frameworks within financial institutions. This may involve appointing senior executives or forming dedicated committees to oversee AI-related initiatives.

In Taiwan, many institutions already have Chief Information Security Officers (CISOs) in place under regulatory requirements. However, with the growing adoption of AI, it may be necessary to reassess the structure and leadership of information-related departments, such as those led by the CTO or CIO.

The respective roles of these positions, particularly the CISO, should be clearly defined. Clarifying responsibilities and establishing proper accountability will help ensure that AI systems operate effectively across their lifecycle, while maintaining an appropriate balance among efficiency, oversight, and risk control.

The financial industry stands apart from other sectors in that it collects and manages funds from the general public.
As we move into the AI era, the industry is evolving from its traditional focus on purely "financial" safety and soundness toward a growing emphasis on IT security and technological advancement.

In the current era of digital transformation, effective AI governance is a critical issue for the financial sector, both in Taiwan and around the world.

Eddie Hsiung is a partner at Lee and Li, Attorneys-at-Law, and the Director of the FinTech Industry Development Association. This article does not represent the positions of Lee and Li or the FinTech Industry Development Association.