# NVIDIA Announces Hopper Architecture, the Next Generation of Accelerated Computing

*The New Engine for the World's AI Infrastructure, the NVIDIA H100 GPU Makes an Order-of-Magnitude Performance Leap*

To power the next wave of AI data centers, NVIDIA today announced its next-generation accelerated computing platform with [NVIDIA Hopper™ architecture](https://www.nvidia.com/en-us/data-center/hopper-architecture/), delivering an order-of-magnitude performance leap over its predecessor.

Named for Grace Hopper, a pioneering U.S. computer scientist, the new architecture succeeds the NVIDIA Ampere architecture, launched two years ago.

The company also announced its first Hopper-based GPU, the [NVIDIA H100](https://www.nvidia.com/en-us/data-center/h100), packed with 80 billion transistors.
The world's largest and most powerful accelerator, the H100 has groundbreaking features such as a revolutionary Transformer Engine and a highly scalable NVIDIA NVLink® interconnect for advancing gigantic AI language models, deep recommender systems, genomics and complex digital twins.

"Data centers are becoming AI factories — processing and refining mountains of data to produce intelligence," said Jensen Huang, founder and CEO of NVIDIA. "NVIDIA H100 is the engine of the world's AI infrastructure that enterprises use to accelerate their AI-driven businesses."

**H100 Technology Breakthroughs**

The NVIDIA H100 GPU sets a new standard in accelerating large-scale AI and HPC, delivering six breakthrough innovations:

- **World's Most Advanced Chip** — Built with 80 billion transistors using a cutting-edge TSMC 4N process designed for NVIDIA's accelerated computing needs, the H100 features major advances to accelerate AI, HPC, memory bandwidth, interconnect and communication, including nearly 5 terabytes per second of external connectivity. The H100 is the first GPU to support PCIe Gen5 and the first to use HBM3, enabling 3TB/s of memory bandwidth. Twenty H100 GPUs can sustain the equivalent of the entire world's internet traffic, making it possible for customers to deliver advanced recommender systems and large language models running inference on data in real time.
- [**New Transformer Engine**](https://blogs.nvidia.com/blog/2022/03/22/h100-transformer-engine/) — Now the standard model choice for natural language processing, the Transformer is one of the most important deep learning models ever invented.
The H100 accelerator's Transformer Engine is built to speed up these networks by as much as 6x versus the previous generation without losing accuracy.
- **2nd-Generation Secure Multi-Instance GPU** — MIG technology allows a single GPU to be partitioned into seven smaller, fully isolated instances to handle different types of jobs. The Hopper architecture extends MIG capabilities by up to 7x over the previous generation by offering secure multitenant configurations in cloud environments across each GPU instance.
- **Confidential Computing** — The H100 is the world's first accelerator with confidential computing capabilities to protect AI models and customer data while they are being processed. Customers can also apply confidential computing to [federated learning](https://blogs.nvidia.com/blog/2021/11/29/federated-learning-ai-nvidia-flare/) for privacy-sensitive industries such as healthcare and financial services, as well as on shared cloud infrastructures.
- **4th-Generation NVIDIA NVLink** — To accelerate the largest AI models, NVLink combines with a new external NVLink Switch to extend NVLink as a scale-up network beyond the server, connecting up to 256 H100 GPUs at 9x higher bandwidth than the previous generation, which used NVIDIA HDR Quantum InfiniBand.
- [**DPX Instructions**](https://blogs.nvidia.com/blog/2022/03/22/nvidia-hopper-accelerates-dynamic-programming-using-dpx-instructions/) — New DPX instructions accelerate dynamic programming — used in a broad range of algorithms, including route optimization and genomics — by up to 40x compared with CPUs and up to 7x compared with previous-generation GPUs.
This includes the Floyd-Warshall algorithm, used to find optimal routes for autonomous robot fleets in dynamic warehouse environments, and the Smith-Waterman algorithm, used in sequence alignment for DNA and protein classification and folding.

The combined technology innovations of the H100 extend NVIDIA's AI inference and training leadership, enabling real-time and immersive applications that use giant-scale AI models. The H100 will enable chatbots using the world's most powerful monolithic transformer language model, [Megatron 530B](https://nvidianews.nvidia.com/news/nvidia-brings-large-language-ai-models-to-enterprises-worldwide), with up to 30x higher throughput than the previous generation, while meeting the subsecond latency required for real-time conversational AI. The H100 also allows researchers and developers to train massive models such as Mixture of Experts, with 395 billion parameters, up to 9x faster, reducing training time from weeks to days.

**Broad NVIDIA H100 Adoption**

NVIDIA H100 can be deployed in every type of data center, including on-premises, cloud, hybrid-cloud and edge.
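As background to the DPX bullet above: Floyd-Warshall is a textbook dynamic-programming algorithm for all-pairs shortest paths. The sketch below is plain, CPU-only Python for illustration; it does not use DPX instructions or any GPU acceleration, it only shows the min-plus recurrence that this class of hardware instruction targets.

```python
# Floyd-Warshall all-pairs shortest paths: the dynamic-programming
# recurrence dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j]).
INF = float("inf")

def floyd_warshall(dist):
    """dist: square matrix of edge weights (INF where no edge).
    Returns a new matrix of shortest-path distances between all pairs."""
    n = len(dist)
    d = [row[:] for row in dist]      # copy so the input is not mutated
    for k in range(n):                # allow paths through intermediate node k
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Example: a small 4-node weighted graph (say, warehouse waypoints)
graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
print(floyd_warshall(graph))
```

The triple loop is O(n³) with a tiny inner body of add/min operations, which is exactly the pattern that benefits from hardware acceleration of dynamic-programming inner loops.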
It is expected to be available worldwide later this year from the world's leading cloud service providers and computer makers, as well as directly from NVIDIA.

NVIDIA's fourth-generation DGX™ system, [DGX H100](https://nvidianews.nvidia.com/news/nvidia-announces-dgx-h100-systems-worlds-most-advanced-enterprise-ai-infrastructure), features eight H100 GPUs to deliver 32 petaflops of AI performance at the new FP8 precision, providing the scale to meet the massive compute requirements of large language models, recommender systems, healthcare research and climate science.

Every GPU in a DGX H100 system is connected by fourth-generation NVLink, providing 900GB/s of connectivity, 1.5x more than the prior generation. NVSwitch™ enables all eight of the H100 GPUs to connect over NVLink. An external NVLink Switch can network up to 32 DGX H100 nodes in the next-generation NVIDIA DGX SuperPOD™ supercomputers.

Hopper has received broad industry support from leading cloud service providers Alibaba Cloud, Amazon Web Services, Baidu AI Cloud, Google Cloud, Microsoft Azure, Oracle Cloud and Tencent Cloud, which plan to offer H100-based instances.

A wide range of servers with H100 accelerators is expected from the world's leading systems manufacturers, including Atos, BOXX Technologies, Cisco, [Dell Technologies](https://www.dell.com/en-us/blog/making-it-easier-than-ever-for-ai-anywhere/), Fujitsu, [GIGABYTE](https://www.gigabyte.com/Press/News/1977), H3C, [Hewlett Packard Enterprise](https://www.hpe.com/us/en/newsroom/press-release/2022/03/hpe-greenlake-edge-to-cloud-platform-delivers-greater-choice-and-simplicity-with-unified-experience-new-cloud-services-and-expanded-partner-ecosystem.html), [Inspur](https://www.inspursystems.com/newsroom/inspur-information-ai-servers-support-new-nvidia-h100-gpu/), Lenovo, Nettrix and [Supermicro](https://www.supermicro.com/en/pressreleases/supermicro-enables-deployment-nvidia-omniverse-enterprise-scale-industrys-largest).

**NVIDIA H100 at Every Scale**

The H100 will come in SXM and PCIe form factors to support a wide range of server design requirements. A converged accelerator will also be available, pairing an H100 GPU with an NVIDIA ConnectX®-7 400Gb/s [InfiniBand](https://www.nvidia.com/en-us/networking/infiniband-adapters/) and [Ethernet](https://www.nvidia.com/en-us/networking/ethernet-adapters/) SmartNIC.

NVIDIA's H100 SXM will be available in HGX™ H100 server boards in four- and eight-way configurations for enterprises with applications scaling to multiple GPUs in a server and across multiple servers. HGX H100-based servers deliver the highest application performance for AI training and inference, along with data analytics and HPC applications.

The H100 PCIe, with NVLink to connect two GPUs, provides more than 7x the bandwidth of PCIe 5.0, delivering outstanding performance for applications running on mainstream enterprise servers.
Its form factor makes it easy to integrate into existing data center infrastructure.

The [H100 CNX](https://www.nvidia.com/en-us/data-center/h100cnx), a new converged accelerator, couples an H100 with a ConnectX-7 SmartNIC to provide groundbreaking performance for I/O-intensive applications such as multinode AI training in enterprise data centers and 5G signal processing at the edge.

NVIDIA Hopper architecture-based GPUs can also be paired with [NVIDIA Grace™ CPUs](https://nvidianews.nvidia.com/news/nvidia-announces-cpu-for-giant-ai-and-high-performance-computing-workloads) over an ultra-fast [NVLink-C2C interconnect](https://nvidianews.nvidia.com/news/nvidia-opens-nvlink-for-custom-silicon-integration) for more than 7x faster communication between CPU and GPU compared with PCIe 5.0. This combination — the [Grace Hopper Superchip](https://nvidianews.nvidia.com/news/nvidia-introduces-grace-cpu-superchip) — is an integrated module designed to serve giant-scale HPC and AI applications.

**NVIDIA Software Support**

The NVIDIA H100 GPU is supported by powerful software tools that enable developers and enterprises to build and accelerate applications from AI to HPC.
This includes major updates to the [NVIDIA AI](https://nvidianews.nvidia.com/news/nvidia-ai-delivers-major-advances-in-speech-recommender-system-and-hyperscale-inference) suite of software for workloads such as speech, recommender systems and hyperscale inference.

NVIDIA also released more than [60 updates to its CUDA-X™ collection](https://nvidianews.nvidia.com/news/nvidia-introduces-60+-updates-to-cuda-x-libraries-opening-new-science-and-industries-to-accelerated-computing) of libraries, tools and technologies to accelerate work in quantum computing and 6G research, cybersecurity, genomics and drug discovery.

**Availability**

NVIDIA H100 will be available starting in the third quarter.

*Sign up for the free insideBIGDATA [newsletter](http://insidebigdata.com/newsletter/).*

*Join us on Twitter: [@InsideBigData1](https://twitter.com/InsideBigData1)*