
Amazon and AI research firm Anthropic have announced a partnership aimed at advancing generative artificial intelligence (AI). Anthropic will use Amazon's high-performance, low-cost machine learning accelerators, the AWS Trainium and Inferentia chips, to develop its future foundation models, and AWS will become Anthropic's primary cloud provider for mission-critical workloads. In addition, AWS customers will gain early access to unique model customisation features via Amazon Bedrock. Amazon has committed up to $4 billion to Anthropic, securing a minority ownership position in the company.
- Amazon and Anthropic form a $4 billion partnership to advance generative AI with AWS infrastructure and custom chips.
- Amazon's custom AI chips, Inferentia and Trainium, may give it a competitive edge in the AI chip market.
- Competition in the generative AI space includes Microsoft-backed OpenAI's ChatGPT and Google's Bard, with Anthropic shifting alliances.
A strategic alliance for a generative AI future
Amazon's investment in Anthropic not only underscores its confidence in the firm's technology but also signals its commitment to the advancement of generative AI. Anthropic, known for its expertise in AI safety and research, will provide AWS customers worldwide with access to future generations of its foundation models through Amazon Bedrock, a fully managed AWS service that offers secure access to leading foundation models and gives AWS customers early access to unique model customisation and fine-tuning capabilities.
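For readers who want a concrete sense of what Bedrock access looks like in practice, the sketch below shows one way to call an Anthropic model from Python using boto3's bedrock-runtime client. The region, model ID, and prompt are illustrative assumptions rather than details from the announcement; which models are actually available depends on your AWS account and region.

```python
import json
import boto3

# Minimal sketch: invoke an Anthropic model hosted on Amazon Bedrock.
# Region and model ID below are illustrative assumptions.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    # Anthropic's text-completion prompt format
    "prompt": "\n\nHuman: Summarise the Amazon-Anthropic partnership in one sentence.\n\nAssistant:",
    "max_tokens_to_sample": 200,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # illustrative model ID; check what your account offers
    contentType="application/json",
    accept="application/json",
    body=body,
)

# The response body is a streaming blob containing JSON with the completion text.
print(json.loads(response["body"].read())["completion"])
```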
The collaboration has drawn positive comments from Amazon's top brass, with Amazon CEO Andy Jassy expressing his respect for Anthropic's team and foundation models. The partnership is not merely a one-way street, however: Amazon developers and engineers will be able to build with Anthropic models through Amazon Bedrock, and this integration of generative AI capabilities promises to enhance customer experiences across Amazon's businesses.
Amazon's AI chips: A potential game-changer
Amazon's commitment to generative AI is further highlighted by its development of custom silicon chips, Inferentia and Trainium. These chips, which Anthropic will use to build, train, and deploy its future foundation models, offer an alternative to Nvidia GPUs, which have become increasingly difficult and expensive to procure.
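As a rough illustration of how the Trainium and Inferentia toolchain is used, the sketch below compiles a small PyTorch model with the AWS Neuron SDK's torch_neuronx.trace, which targets the NeuronCore accelerators on Trainium (trn1) and Inferentia2 (inf2) instances. The toy model and output filename are placeholder assumptions; Anthropic's actual training and deployment pipeline is not public.

```python
import torch
import torch_neuronx  # AWS Neuron SDK for PyTorch, available on trn1/inf2 instances

# Toy stand-in for a real model; any torch.nn.Module can be traced the same way.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 2048),
    torch.nn.GELU(),
    torch.nn.Linear(2048, 512),
).eval()

example_input = torch.rand(1, 512)

# Ahead-of-time compile the model for NeuronCores; the saved artifact can later
# be loaded with torch.jit.load and run for inference on the same instance type.
traced = torch_neuronx.trace(model, example_input)
traced.save("model_neuron.pt")  # placeholder filename
```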
Amazon's custom chips could give the company an edge in the generative AI space, particularly given that competitors Microsoft and Google do not have comparable offerings. Amazon's history of developing custom silicon, which began with Nitro in 2013, combined with the success of its Arm-based server chip, Graviton, suggests the company has the technological prowess to make its mark in the AI chip market.
Competition in the generative AI space
Against the backdrop of Amazon's collaboration with Anthropic, it's important to note the competition in the generative AI landscape. Microsoft has already gained attention for hosting OpenAI's ChatGPT, and Google has launched its own large language model, Bard. Anthropic itself, co-founded by former OpenAI employees, is a direct competitor to OpenAI and its GPT models.
Earlier this year, Google parent Alphabet invested $300m in Anthropic for a 10% stake, leading Anthropic to make Google Cloud its "preferred cloud provider". However, Amazon's recent investment and partnership with Anthropic marks a shift in Big Tech influence on the AI research firm.
The future of AI: Safety, accessibility, and collaboration
As the field of generative AI continues to evolve, the emphasis on safety, accessibility, and collaboration becomes increasingly important. Both Amazon and Anthropic are actively engaged in promoting the responsible development and deployment of AI technologies, and participate in organisations such as the OECD AI working groups, GPAI, the Partnership on AI, ISO, NIST, and the Responsible AI Institute.