Thursday, 20 July 2023

‘Current AI Models are very stupid’: Nick Clegg, Meta's President of Global Affairs

Nick Clegg, the president of global affairs at Meta (formerly Facebook), downplayed the risks of current Artificial Intelligence (AI) models, stating that they are “quite stupid” and that the hype surrounding AI has outpaced the technology’s capabilities. He added that the current models are far from achieving true autonomy and the ability to think for themselves.

Meta recently announced that its large language model, LLaMA 2, will be available as an open-source tool for commercial businesses and researchers to use. This decision has sparked debates within the tech community due to concerns about the potential misuse of such a powerful tool.

The limitations of current AI models
Clegg acknowledged that large language models such as GPT, on which ChatGPT is built, are essentially trained to predict the next word in a sequence from enormous datasets of text, which means they lack true understanding and independent thinking.
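To make that point concrete, next-word prediction can be sketched with a toy counting model. This is a simplified illustration only, not Meta’s or OpenAI’s code: real LLMs use transformer networks trained on billions of words, and the tiny corpus and function names below are invented for this example.

```python
# Toy illustration of "predict the next word": simple bigram counts.
# Real LLMs learn the same objective with transformer networks at vastly larger scale.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word follows the model".split()

# Count which word follows which in the training text.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # prints the most frequent follower of "the" in this tiny corpus
```

The gap between this kind of statistical pattern-matching and genuine reasoning is what Clegg points to when he calls the current models “quite stupid”.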

While opening the model up to others through open source gives Meta free testing data and community-driven improvements, it also raises concerns about the need for strong guardrails to prevent misuse. Previous chatbot iterations have been manipulated into spreading hate speech and false information, raising questions about how Meta plans to address the potential misuse of LLaMA 2.

The collaboration with Microsoft to make LLaMA 2 available through Microsoft’s platforms like Azure indicates Meta’s ambitions in the AI field. With the deep pockets of companies like Microsoft investing in AI creators like OpenAI (the creator of ChatGPT), there are concerns about a consolidation of power in the AI industry, potentially limiting healthy competition.

Overall, the availability and use of LLaMA 2 raise important questions about the ethical use of AI and the need for robust measures to prevent its misuse.

The need to go open source
LLaMA 2, developed through a partnership between Microsoft and Meta, is offered as an open-source tool, making it free for commercial businesses and researchers to use. In contrast, GPT-4 and Google’s LLM, which powers the Bard chatbot, are not available for free use in commercial or research applications.

Recently, US comedian Sarah Silverman filed a lawsuit against both OpenAI and Meta, alleging that her copyright has been violated in the training of their AI systems.

Dame Wendy Hall, a prominent computer science professor at the University of Southampton, expressed concerns about open-sourcing AI models, particularly in terms of legislation and regulation.

AI surrounded by hyperbole
Hall raised the question of whether the industry can be trusted to self-regulate or if there is a need for government involvement in regulation. She used strong language, comparing open-sourcing AI to providing a template for building a nuclear bomb.

In response, Clegg dismissed the comparison as “hyperbole,” clarifying that Meta’s open-sourced system, LLaMA 2, cannot generate images or be used to build harmful bioweapons. However, he agreed that AI does need to be regulated.

Sir Nick emphasized that open-sourcing AI models is already common practice, and that the real concern is how to do it responsibly and safely. He asserted that the LLMs (large language models) Meta is open-sourcing, including LLaMA 2, are safer than other open-sourced AI models.



from Firstpost Tech Latest News https://ift.tt/vlpRPJj

