Welcome to AI News by Lab51. This week's highlights include a class-action lawsuit against OpenAI over data usage, Neko Health raising $65 million for AI body scanning, Valve addressing ban claims on AI-generated games, OpenAI launching a specialised team to safeguard against rogue AI, and Inflection AI's $1.3 billion funding round to develop its chatbot, Pi.
A California-based law firm has recently filed a class-action lawsuit against OpenAI, the artificial intelligence company known for its popular chatbot ChatGPT. The lawsuit alleges that OpenAI has massively violated the copyrights and privacy of countless individuals by using data scraped from the internet without their consent to train its technology.
The lawsuit seeks to test a novel legal theory, claiming that OpenAI has infringed upon the rights of millions of internet users by utilising their social media comments, blog posts, Wikipedia articles, and even family recipes.
Ryan Clarkson, managing partner of the Clarkson law firm, stated that they aim to represent "real people whose information was stolen and commercially misappropriated to create this very powerful technology." The case was officially filed in a federal court in the northern district of California.
This lawsuit brings to light a crucial question concerning the surge in "generative" AI tools like chatbots and image generators. These technologies work by ingesting vast amounts of data from the open internet and learning to make inferences based on it. As a result, these "large language models" gain the ability to generate responses, write poetry, engage in complex conversations, and even pass professional exams. However, the individuals who originally authored the billions of words used as training data never consented to OpenAI using their work for its own profit.
This class-action lawsuit adds to the growing list of legal challenges faced by companies involved in AI technology. OpenAI has been previously sued for defamation, and similar lawsuits have been filed against other companies, such as Microsoft and Stability AI.
Neko Health, a preventative healthcare startup co-founded by Spotify co-creator Daniel Ek, has recently secured $65 million in funding to advance its innovative approach to promoting wellness. The company's core offering involves providing full-body scans that enable individuals to receive comprehensive health assessments, aiding in the early detection of potential health issues.
The funding will be instrumental in further developing Neko Health's technology and expanding its operations. By leveraging advanced scanning techniques, the company aims to empower individuals to take proactive steps towards maintaining their well-being.
Neko Health's full-body scans offer a non-invasive and detailed examination of a person's overall health. These scans utilize cutting-edge imaging technology to capture a comprehensive view of various bodily systems and organs. By analyzing the gathered data, the company’s algorithms can identify potential abnormalities or early signs of health issues.
The funding round highlights the growing interest in and support for healthcare solutions focused on prevention rather than solely treating existing conditions. Neko Health's model aligns with the broader shift towards proactive, personalized care that encourages individuals to actively manage their health.
Valve, the renowned gaming company known for its popular platform Steam, has recently addressed allegations suggesting that it has imposed a ban on AI-generated games. In response to these claims, the company clarified its stance, emphasizing its openness to collaborating with developers who utilize AI technology in game development.
The initial allegations stemmed from a Reddit post by an indie developer who claimed that Valve was no longer willing to publish games featuring AI-generated content. The incident highlighted the fact that Steam, like other app platforms, has a review and approval process, and that its content guidelines can be ambiguous until developers test them with unusual cases.
With its latest statement, Valve has now clarified that it does not have a specific policy against AI-generated content on its platform. The company recognizes the potential of AI in driving innovation and pushing the boundaries of game development. While AI-generated games present unique challenges in terms of quality control and ensuring a positive user experience, Valve is open to working with developers to address these concerns.
The company emphasized that it evaluates games based on their individual merits and quality standards, regardless of whether they are developed using traditional methods or AI technology. Valve's priority is to ensure that the games offered on its platform meet the expectations of players and maintain a high level of enjoyment.
OpenAI, the popular artificial intelligence research company, has announced the establishment of a specialized team dedicated to preventing the emergence of rogue AI. The team's mission is to proactively address the potential risks associated with the development and deployment of artificial intelligence systems.
With the rapid advancement of AI technologies, concerns have arisen regarding the potential misuse or unintended consequences of AI. Such warnings have come from renowned AI researcher Geoffrey Hinton, often referred to as the "Godfather of AI," and from Sam Altman, CEO of OpenAI itself.
In response to this, the company aims to stay at the forefront of responsible AI development by actively working to ensure that AI systems are designed and deployed with strong safety measures and ethical considerations in mind.
The newly formed team will focus on researching and implementing techniques that can prevent the development of AI systems that exhibit harmful or malicious behavior. This includes exploring methodologies for robust oversight, auditing, and monitoring of AI models and systems throughout their lifecycle.
OpenAI's initiative aligns with broader efforts in the AI community to establish robust governance frameworks and ethical guidelines, such as the EU's proposed AI Act. It reflects a growing recognition of the need to ensure that AI development proceeds responsibly and in the best interests of society.
Inflection AI, a prominent player in the field of chatbot technology, has raised $1.3 billion in a recent funding round. The funding is primarily aimed at supporting the development and advancement of its flagship chatbot, Pi.
Noteworthy investors spearheaded the round, underscoring the growing recognition of and interest in AI-powered chatbot solutions. Nvidia led the investment, alongside LinkedIn co-founder Reid Hoffman, Microsoft co-founder Bill Gates, and former Google CEO Eric Schmidt. As a result of the round, Inflection reached a valuation of $4 billion, according to a source familiar with the transaction. While the company's majority shareholders, including co-founder and CEO Mustafa Suleyman, retained their ownership stakes, Inflection has declined to provide further details or comment on the deal.
At the center of the investment is Pi, Inflection AI's chatbot. Pi is an advanced conversational platform that leverages artificial intelligence and natural language processing to deliver enhanced conversational experiences. The platform aims to revolutionize customer interactions and streamline business operations by providing intelligent, automated conversational agents.
The successful funding round will accelerate the development and deployment of Pi, allowing Inflection AI to enhance the chatbot's capabilities and expand its market reach.