Stability AI's CarperAI Lab has unveiled FreeWilly1 and FreeWilly2, two powerful open-source Large Language Models (LLMs). These models are designed to push the boundaries of language understanding and reasoning. Trained with the industry-standard Alpaca format, both build on Meta's LLaMA foundation models.
Microsoft's groundbreaking Orca project influenced the team's training approach. They used high-quality instructions to generate a synthetic dataset of 600,000 data points. Despite using only about 10% of the original Orca dataset size, the FreeWilly models exhibit exceptional performance across multiple benchmarks, demonstrating proficiency in complex tasks spanning specialised disciplines such as law and mathematics, intricate reasoning, and subtle nuances of language.
Using EleutherAI's lm-eval-harness, extended with AGIEval, the team conducted thorough evaluations confirming the FreeWilly models' top-notch performance in language comprehension and their potential to advance a wide range of applications in artificial intelligence.
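For readers who want to reproduce this kind of evaluation, a minimal sketch of a benchmark run with lm-eval-harness might look like the following. The model identifier, task list, and batch size here are illustrative assumptions, not the team's exact configuration, and flag names can vary between harness versions; at the time of the FreeWilly release, the AGIEval tasks were provided via a separate extension of the harness rather than the main package.

```shell
# Install EleutherAI's evaluation harness (assumed: a recent release with the lm_eval CLI).
pip install lm-eval

# Run a handful of common benchmarks against a Hugging Face model.
# The model id and task selection below are placeholders for illustration.
lm_eval \
  --model hf \
  --model_args pretrained=stabilityai/FreeWilly2,dtype=bfloat16 \
  --tasks arc_challenge,hellaswag,truthfulqa \
  --num_fewshot 0 \
  --batch_size 8 \
  --output_path results/freewilly2.json
```

The harness writes per-task accuracy figures to the output path, which makes it straightforward to compare models side by side on the same task mix.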
Stability AI believes that these new open-access Large Language Models, FreeWilly1 and FreeWilly2, enhance our understanding of natural language and create unprecedented possibilities for innovative use cases across multiple domains.
To learn more about these topics, read the following articles: