
AI NEWS

By Nicolo Finazzi

2 Min

May 5, 2023

AI PIONEER GEOFFREY HINTON RESIGNS FROM GOOGLE, WARNS OF MACHINE LEARNING RISKS

Geoffrey Hinton, a renowned computer scientist regarded by many as the "Godfather of AI", has resigned from Google, citing concerns over the ethical implications of machine learning. Hinton warned that AI could pose serious risks to society if left unchecked. He has expressed concern about AI's capacity to soon become more intelligent than humans, as well as the growing difficulty of distinguishing what is human-made from what is not. Hinton's departure comes amid growing scrutiny of the AI industry's impact on society and underscores the need for more responsible development of this technology.

OPENAI'S CHATGPT RETURNS TO ITALY FOLLOWING TRANSPARENCY DEMANDS

OpenAI's language model, ChatGPT, has resumed operations in Italy after complying with new transparency regulations. The Italian government had previously imposed restrictions on the use of AI models, citing concerns over transparency and the potential for these systems to perpetuate bias. OpenAI worked with the Italian government to ensure that ChatGPT was in compliance with these regulations, making it the first AI model to be re-approved for use in the country. ChatGPT's return to Italy is a positive sign for the future of AI regulation, as it demonstrates the potential for collaboration between governments and tech companies to create responsible and transparent AI solutions.

UK CMA LAUNCHES REVIEW OF AI MODELS

The UK's Competition and Markets Authority (CMA) has announced an initial review into the use of AI models by businesses. The review will focus on how these models are used to make decisions that affect consumers, such as credit scoring, insurance pricing, and personalized advertising. The CMA aims to identify potential harms to consumers arising from the use of AI models, such as discrimination or bias, and to ensure that businesses are transparent in their use of these technologies. The review will also examine how AI models are trained and validated, as well as how they can be audited and held accountable. The CMA's review is part of a broader effort by the UK government to establish a regulatory framework for AI and to promote responsible development of this technology.
