News

AWS and Nvidia to Deliver Generative AI Infrastructure

Amazon Web Services (AWS) and Nvidia have announced an expanded collaboration to provide customers with advanced infrastructure, software, and services for generative artificial intelligence innovation.


The information is contained in the companies' joint press release, published on Tuesday, November 28. The firms intend to provide customers with the technology needed to train foundation models and build generative AI applications.

As part of the partnership, AWS will become the first cloud provider to bring Nvidia GH200 Grace Hopper Superchips to the cloud. These chips stand out from other products in the category thanks to multi-node NVLink technology. They will be available in Amazon Elastic Compute Cloud (Amazon EC2) instances, allowing joint customers to scale to thousands of GH200 Superchips.

The GH200 NVL32 multi-node platform connects 32 Grace Hopper Superchips with Nvidia NVLink and NVSwitch technologies in a single instance.

Also as part of the partnership, the companies will collaborate to host Nvidia DGX Cloud, Nvidia's AI-training-as-a-service, on AWS. Developers will get access to DGX Cloud with GH200 NVL32, which offers the largest shared memory available in a single instance. DGX Cloud on AWS will accelerate the training of advanced generative AI and large language models.

The two companies' cooperation also extends to a project called Ceiba. As part of this initiative, Nvidia and AWS are working together to build the world's fastest GPU-powered artificial intelligence supercomputer. The machine will be equipped with 16,384 Nvidia GH200 Superchips and will be capable of delivering 65 exaflops of AI processing power. Nvidia will use the supercomputer for its own research and development in generative artificial intelligence.

The press release also states that AWS will introduce three new Amazon EC2 instance types powered by Nvidia GPUs. P5e instances are designed for large-scale generative AI and high-performance computing workloads. G6 and G6e instances target a range of use cases, including AI fine-tuning, inference, and graphics and video workloads. The G6e, as indicated in the press release, is suited to developing 3D workflows and digital twin applications with Nvidia Omniverse.

Software is another area of cooperation between the companies. The Nvidia NeMo Retriever microservice gives customers tools for building highly accurate chatbots and summarization tools. Nvidia BioNeMo lets pharmaceutical companies speed up and simplify the training of artificial intelligence models for drug discovery.

Nvidia is currently the world's leading producer of chips used in artificial intelligence. The firm's market value passed the landmark figure of $1 trillion this year, a result driven by high demand for its chips, which are used to train generative artificial intelligence models.

As we reported earlier, IBM Expands Collaboration With AWS.

Serhii Mikhailov


Serhii’s track record of study and work spans six years at the Faculty of Philology and eight years in the media, during which he has developed a deep understanding of various aspects of the industry and honed his writing skills. His areas of expertise include fintech, payments, cryptocurrency, and financial services. He keeps a close eye on the latest developments and innovations in these fields, believing they will have a significant impact on the future direction of the economy as a whole.