South Korea's Framework Act on Artificial Intelligence Development and Establishment of Trust officially takes effect today, January 22, 2026, making it the first comprehensive AI regulation in the Asia-Pacific region and the second globally after the European Union's AI Act. The legislation establishes national standards for transparency, safety, and accountability while targeting high-impact AI systems with significant effects on human rights, public safety, or critical operations.

The law applies to any AI activities impacting the Korean market regardless of geographic origin, with exceptions only for systems developed exclusively for national defense or security. Companies must now comply with requirements including mandatory notification to users that an AI system is in use, clear labeling of AI-generated content, impact assessments for high-impact AI systems, risk management protocols with human oversight, and appointment of domestic representatives for foreign companies operating in Korea without a local presence.
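How the notification and labeling obligations translate into engineering practice will depend on the forthcoming enforcement decrees, but a minimal sketch of machine-readable disclosure metadata might look like the following. The `LabeledOutput` structure and its field names are illustrative assumptions, not anything mandated by the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: wrapping model output with disclosure metadata and a
# visible notice before it reaches users. Field names are assumptions, not
# statutory requirements.

@dataclass
class LabeledOutput:
    content: str
    ai_generated: bool = True
    model_name: str = "unspecified"
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def label_output(raw_text: str, model_name: str) -> LabeledOutput:
    """Attach a visible AI-disclosure notice plus machine-readable metadata."""
    notice = "[This content was generated by an AI system.]"
    return LabeledOutput(content=f"{notice}\n{raw_text}", model_name=model_name)

if __name__ == "__main__":
    labeled = label_output("Sample answer text.", model_name="demo-model")
    print(labeled.content)
```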

High-impact AI systems face the strictest scrutiny under the new framework. The Ministry of Science and ICT defines these as systems trained with massive computational power, deployed in designated high-risk sectors such as healthcare, energy infrastructure, and public services, or otherwise significantly affecting fundamental rights. Organizations operating these systems must perform comprehensive impact assessments documenting potential effects on human rights, maintain detailed safety documentation throughout the AI lifecycle, implement continuous risk monitoring with human oversight mechanisms, and prepare for regulatory audits demonstrating compliance with transparency requirements.
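For operators trying to work out which of these obligations apply to a given deployment, a rough triage step is one way to start. The sketch below is an assumption-laden illustration: the sector names and the decision logic are placeholders, and the real criteria will be set by the Ministry's enforcement decrees.

```python
# Hypothetical triage helper: flag whether a deployment likely falls into the
# high-impact category described above and list the documentation obligations
# that would follow. Sector names and logic are illustrative assumptions.

HIGH_IMPACT_SECTORS = {"healthcare", "energy", "public_services"}

def triage(sector: str, affects_fundamental_rights: bool) -> dict:
    high_impact = sector in HIGH_IMPACT_SECTORS or affects_fundamental_rights
    obligations = []
    if high_impact:
        obligations = [
            "impact assessment covering effects on human rights",
            "lifecycle safety documentation",
            "continuous risk monitoring with human oversight",
            "audit-ready transparency records",
        ]
    return {"high_impact": high_impact, "obligations": obligations}

print(triage("healthcare", affects_fundamental_rights=False))
```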

The enforcement structure takes a measured approach compared to European regulations. Maximum fines reach 30 million Korean won (approximately 21,000 US dollars) for violations, significantly lower than EU AI Act penalties. However, sanctions can escalate to include fact-finding investigations by regulators, mandatory suspension of non-compliant systems, corrective orders requiring operational changes, and potential imprisonment for severe violations. The Ministry of Science and ICT emphasized that 2026 focuses on guidance rather than penalties, allowing organizations a practical grace period to establish governance frameworks.

The legislation also establishes institutional infrastructure positioning South Korea as an AI power. The law formalizes the Presidential Council on National Artificial Intelligence Strategy as the central coordinating authority for national AI policy, reflecting Seoul's stated goal of becoming a top-three AI power globally. It mandates government-led initiatives to support production, collection, management, and distribution of AI training data, establishes integrated systems for providing high-quality datasets to the private sector, and provides administrative and financial support for AI data center construction and operation under Article 25.

The regulatory push comes as South Korea aims to bridge a competitive gap between its research excellence and commercial success. The country hosts world-class AI research institutions and contributes significantly to foundational AI breakthroughs, yet has struggled to translate academic leadership into market dominance comparable to US or Chinese firms. Industry observers note the law's dual mandate balances innovation support with trust-building measures, attempting to create conditions where Korean AI companies can scale globally while maintaining regulatory compliance.

Foreign companies operating in Korea face immediate compliance requirements. Organizations must audit existing AI deployments to determine if they qualify as high-impact systems, map data workflows and document governance responsibilities clearly, establish internal policies for risk assessment and user notification, designate compliance personnel or appoint domestic representatives, and prepare documentation outlining safety measures for potential regulatory review. Legal experts advise that the one-year transition period offers critical preparation time before enforcement intensifies in 2027.
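The audit step described above is essentially an inventory exercise. A minimal sketch of how a compliance team might walk that inventory and flag likely actions is shown below; the `Deployment` record and the action labels are assumptions for illustration, not terms from the statute.

```python
from dataclasses import dataclass

# Illustrative audit sketch: iterate over existing AI deployments and flag the
# compliance actions each one likely needs. Structure and labels are assumed.

@dataclass
class Deployment:
    name: str
    serves_korean_users: bool
    high_impact: bool
    has_local_entity: bool

def compliance_actions(d: Deployment) -> list[str]:
    actions: list[str] = []
    if not d.serves_korean_users:
        return actions  # out of scope for this audit
    actions.append("enable user notification and AI-content labeling")
    if d.high_impact:
        actions.append("run impact assessment and set up risk monitoring")
    if not d.has_local_entity:
        actions.append("appoint a domestic representative")
    return actions

inventory = [
    Deployment("chat-support", serves_korean_users=True, high_impact=False, has_local_entity=False),
    Deployment("triage-model", serves_korean_users=True, high_impact=True, has_local_entity=True),
]
for dep in inventory:
    print(dep.name, "->", compliance_actions(dep))
```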

The law consolidates 19 separate AI regulatory proposals that had been independently introduced in the National Assembly since June 2024, creating unified governance after political uncertainty surrounding April 2024 elections threatened to derail comprehensive legislation. The Ministry of Science and ICT launched an AI Basic Act Lower Statute Alignment Bureau in January 2025, led by working groups of experts from government, industry, academia, and legal sectors, to specify enforcement decree details and collect stakeholder feedback before regulations finalize.
