Metaverse: Lawmakers broaden the AI rulebook to cover metaverse environments that meet certain criteria. The latest compromise amendments also make significant revisions to the scope, subject matter, and obligations of high-risk AI systems in the areas of risk management, data governance, and technical documentation.
Metaverse: Lawmakers broaden the AI rulebook
A new provision broadens the regulation’s reach to cover operators of AI systems in certain metaverse environments that satisfy a set of cumulative conditions.
To qualify, the metaverse environment must:
- require a verified avatar;
- be designed for large-scale involvement;
- enable real-world social interactions;
- allow real-world financial transactions; and
- pose risks to health or fundamental rights.
The scope has also been broadened to cover any economic operator that places an AI system on the market or puts it into service.
The wording clarifies that the regulation does not prevent national laws or collective agreements from imposing stricter requirements to protect workers’ rights where businesses use AI systems. At the same time, AI systems developed purely for scientific research and development fall outside its scope.
The question, raised by several MEPs, of whether the rules should cover any AI system likely to interact with or affect minors has been postponed to a later stage.
Furthermore, an amendment from center-right MEPs that would limit the scope for AI providers or users located in a third country has been set aside for future negotiations because it is tied to the definition, according to a note in the document’s margin.
📌 Subject matter
The regulation’s rules are designed to cover not only the placing of AI systems on the market but also their development. The objectives of harmonizing the rules for high-risk systems and fostering innovation, with particular emphasis on the latter, have been added.
📌 High-risk AI requirements
Under the compromise amendments, high-risk AI systems must comply with the AI Act’s requirements throughout their lifecycle and take into account the most recent and relevant technical standards.
📌 Risk management
The risk management system must be updated after every significant modification to a high-risk AI system “to assure its continuous effectiveness.”
Risk management must now also consider risks to health, legal and fundamental rights, the potential impact on specific groups of people, the environment, and the spread of misinformation.
If AI providers consider that relevant risks remain after the risk assessment, they should provide the user with a reasoned judgment as to why these risks are acceptable.
📌 Data governance
Under the compromise amendments, high-risk AI systems built with techniques such as unsupervised learning and reinforcement learning, which do not use validation and testing datasets, must be developed on the basis of training datasets that meet a specific set of criteria.
The goal is to prevent biases from developing, an aim reinforced by the obligation to analyze potential feedback loops.
📌 Technical documentation
New wording gives SMEs more leeway in complying with the requirement to maintain technical documentation for high-risk systems, subject to the approval of national authorities.
The list of required technical information has been expanded considerably to cover items such as the user interface, how the AI system works, expected inputs and outputs, cybersecurity measures, and carbon footprint.