As AI products that handle tasks autonomously become more widespread, the Japanese government plans to require AI operators to build systems that keep humans involved in decision-making. The requirement is included in a draft revision to guidelines for businesses, municipalities, and others involved in AI development, provision, or use, unveiled on Monday by the Internal Affairs and Communications Ministry and the Economy, Trade and Industry Ministry. The guidelines, introduced in 2024, are not legally binding and carry no penalties.
The government is revising the guidelines in response to the rapid development of AI. The draft requires AI operators to establish appropriate systems for managing "physical AI," used in robots and autonomous driving, and "AI agents," which handle tasks autonomously, so as to prevent malfunctions and misuse. For instance, operators must obtain customer consent before selling expensive items through AI-based systems.
Definitions for these AI types are provided for the first time in the guidelines. Physical AI is defined as systems that obtain external information via sensors or other means, which is then processed by AI for autonomous decisions and physical actions. AI agents are described as AI systems that understand their environment and autonomously execute operations.
A panel of experts under the Internal Affairs and Communications Ministry will discuss the revisions, with the aim of finalizing them by the end of March. The guidelines have been revised twice since their introduction in 2024.