OpenAI’s Maker Raises Concerns Over EU AI Legislation, Hints At Potential Exit


In a statement on Wednesday, OpenAI CEO Sam Altman said the company would consider leaving the European Union (EU) if it cannot comply with the bloc’s proposed AI legislation. The EU’s forthcoming regulations are expected to become the first laws specifically governing artificial intelligence.

One noteworthy provision in the legislation is a possible requirement for generative AI companies to disclose the copyrighted materials used to train the systems that generate their text and images. The move aims to address concerns raised by the creative industries, whose members accuse AI companies of using the works of artists, musicians, and actors to mimic their creations.

Altman voiced his concerns during a panel discussion at University College London, stating that OpenAI might find it technically infeasible to comply with certain safety and transparency requirements drafted in the AI Act. Despite this, Altman remains optimistic about AI’s potential to create more jobs and reduce inequality.

During the discussion, Altman delved deeper into OpenAI’s apprehensions, homing in on the classification of high-risk systems as outlined in the current draft of the EU AI law. While the legislation is still being revised, there is a potential scenario in which OpenAI’s flagship models, such as ChatGPT and GPT-4, could fall under the “high risk” classification. Such a designation would entail additional safety obligations for the companies responsible for these systems. OpenAI has consistently asserted that its general-purpose models do not inherently pose significant risks.

Altman emphasized the importance of thoroughly assessing whether OpenAI could meet the requirements stipulated for high-risk systems under the EU AI Act. He said the company would explore every avenue to fulfill these requirements, but acknowledged that practical constraints might prevent full compliance.

He also argued that, when it comes to misinformation, social media platforms play a more substantial role in disseminating disinformation than AI language models do. While models like GPT-4 can generate disinformation, he said, it is the amplification and spread of such content through social media that ultimately magnifies its impact.

Leaders Discuss Risks and Collaboration for Trustworthy AI

During the event at University College London, Altman, UK Prime Minister Rishi Sunak, and executives from DeepMind and Anthropic discussed the risks of AI, such as disinformation and threats to national security. They agreed on the importance of rules and agreements to manage these challenges.

Prime Minister Sunak, for his part, accentuated the benefits of AI in public services, while the G7 leaders emphasized the need for international collaboration to ensure trustworthy AI. The European Commission is working toward an agreement with Alphabet. Thierry Breton stressed the importance of cooperation among countries and companies, while Tim O’Reilly advocated for rules and oversight so that excessive fear does not impede progress.

As discussions on AI rules continue, the leaders want companies to work together to set clear standards, report regularly, and update the rules as circumstances change. The goal is to strike a balance between encouraging innovation in AI and ensuring it is used safely and fairly.

Protesters Challenge Altman’s Vision for the Future of AI

Altman’s appearance at a London university attracted protesters opposing OpenAI’s pursuit of Artificial General Intelligence (AGI). The demonstrators criticized Altman’s vision and distributed flyers urging public participation in shaping the future. Gideon Futterman, a student, expressed concerns about letting Silicon Valley figures dictate societal choices and referred to AGI as a risky endeavor.

Altman engaged in conversation with the protesters, acknowledging their concerns but emphasizing the connection between safety and capabilities. He denied active participation in an AI race and expressed confidence in OpenAI’s safety measures, while suggesting that AGI’s development is inevitable.

Tarim Zia
Tarim Zia is a technical writer with a unique ability to express complex technological concepts in an engaging manner. Graduating with a degree in business studies and holding a diploma in computer science, Tarim brings a diverse set of skills to her writing. Her strong background in business allows her to provide valuable insights into the impact of technology on various industries, while her technical knowledge enables her to delve into the intricacies of emerging technologies.