The European Union is on the brink of finalizing a groundbreaking regulation poised to be the most comprehensive and far-reaching legislation on artificial intelligence (AI) in the Western world.
Negotiators have reached an agreement on controls for generative AI tools, including OpenAI Inc.’s ChatGPT and Google’s Bard, capable of producing content on command.
Delegates from the European Commission, the European Parliament, and 27 member countries worked through an extended meeting, bringing them closer to a formal agreement on a broader legislative piece known as the AI Act.
Formal adoption soon
This development marks a crucial step toward a landmark AI policy that will set the standard for regulating generative AI tools, in the absence of significant action by the US or any other major government.
The agreement is particularly significant as policymakers aim to finalize the AI Act’s language and secure passage before the European elections in June, anticipating potential changes with the arrival of a new commission and parliament that could impact the legislation.
The protracted discussions highlight the contentious nature of the AI regulation debate, which has divided global leaders and tech executives.
Finding a balance
The EU, like other governments, is grappling with how to balance protecting its AI startups against the potential societal risks of generative tools like ChatGPT and Bard.
The negotiations faced challenges, with countries like France and Germany expressing concerns about rules that could potentially disadvantage local companies.
Despite these obstacles, officials were optimistic about reaching a deal, recognizing the need to address the rapid rise in popularity of generative AI tools.
The proposed plan by EU policymakers outlines requirements for developers of AI models, such as those supporting tools like ChatGPT.
Developers would need to maintain information on model training, summarize copyrighted material used, and appropriately label AI-generated content. AI systems deemed to pose “systemic risks” would be subject to an industry code of conduct, requiring cooperation with the commission, incident monitoring, and reporting.
The technical details of the act will be discussed in subsequent meetings following the anticipated agreement. The European Commission has not yet responded to requests for comments on the matter.
The proposed systems to regulate AI
Foundation models are large AI systems that serve as a base on which developers build new applications.
Researchers have been surprised by some AI behaviours, such as ChatGPT occasionally giving false but convincing answers. The underlying model is trained to predict likely sequences of text, which can lead to fluent but misleading responses, and quirks in a foundation model can produce unexpected outcomes in different situations.
EU proposals for regulating foundation models suggest companies should transparently document their system’s training data and capabilities, demonstrate efforts to reduce risks, and undergo audits by external researchers.
However, in recent weeks, influential EU countries like France, Germany, and Italy have contested these proposals. They argue that makers of generative AI models should have the freedom to self-regulate instead of adhering to strict rules.
They believe stringent regulations would hamper European companies’ competitiveness against major U.S. players like Google and Microsoft. Notably, smaller companies building on OpenAI’s models could face stricter rules than OpenAI itself.
(With inputs from agencies)
from Firstpost Tech Latest News https://ift.tt/sBrz6gC