Prominent players in the AI industry, including OpenAI, Microsoft, Google, and Anthropic, yesterday announced the formation of the Frontier Model Forum. The forum's primary objective is to ensure the safe and responsible development and deployment of large machine learning models.
Termed “frontier AI models,” these cutting-edge systems surpass the capabilities of today's most advanced models. The concern is that they may develop dangerous capabilities that pose significant risks to public safety, Reuters reported.
Among the best-known applications of these models are generative AI systems, such as the one powering chatbots like ChatGPT. Trained on vast amounts of data, these models can rapidly generate responses in the form of prose, poetry, and images.
Despite the numerous use cases for such advanced AI models, several government bodies, including the European Union, and industry leaders, like OpenAI CEO Sam Altman, have emphasized the need for appropriate guardrails to mitigate the risks associated with AI.