Amazon's push into generative AI is gaining momentum. The tech giant is developing its custom-designed AWS chips, Inferentia and Trainium, which offer an alternative to the Nvidia GPUs commonly used for training large language models. Amazon is building these chips to give its Amazon Web Services (AWS) customers a robust AI option. The team, headquartered in an unmarked office building in Austin, Texas, aims to match the performance of other industry leaders.
Inferentia and Trainium are part of Amazon's strategic effort to keep pace with industry competitors, particularly Microsoft and Google, both of which have made substantial investments in generative AI. Microsoft has reportedly committed $13 billion to OpenAI, and Google has introduced its own language model, Bard.
AWS CEO Adam Selipsky expressed Amazon's readiness to meet the growing demand for generative AI: "The entire world would like more chips for doing generative AI, whether that's GPUs or whether that's Amazon's own chips that we're designing."
Although Amazon entered the generative AI landscape slightly later than its rivals, its strategy leverages its vast cloud dominance: AWS held a 40% share of the cloud infrastructure market in 2022, a significant edge. Amazon is also developing an array of AI-focused tools to cater to varied AI requirements; AWS HealthScribe and SageMaker are among the offerings that round out its AI portfolio.
With over 100,000 customers already using machine learning on AWS, Amazon's influence in the generative AI domain is poised to grow. Its cloud ecosystem, combined with its custom chips and AI tools, positions Amazon as a formidable player in the rapidly expanding field of AI technology.