Prominent players in the AI industry, including OpenAI, Microsoft, Google, and Anthropic, yesterday announced the formation of the Frontier Model Forum. The forum's primary objective is to guide the development of large machine learning models, with a specific focus on ensuring they are deployed safely and responsibly.
These so-called “frontier AI models” are cutting-edge systems that exceed the capabilities of today's most advanced models. The concern, Reuters reported, is that they could develop dangerous capabilities that pose significant risks to public safety.
Among the best-known applications of these models are generative AI systems, such as the one powering chatbots like ChatGPT. These models draw on vast amounts of training data to rapidly generate responses in the form of prose, poetry, and images.
Despite the many use cases for such advanced AI models, government bodies including the European Union and industry leaders such as OpenAI CEO Sam Altman have emphasized the need for appropriate guardrails to mitigate the risks associated with AI.