{"id":33412,"date":"2023-09-18T02:59:00","date_gmt":"2023-09-18T09:59:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=33412"},"modified":"2023-09-18T10:20:48","modified_gmt":"2023-09-18T17:20:48","slug":"anyscale-teams-with-nvidia-to-supercharge-llm-performance-and-efficiency","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2023\/09\/18\/anyscale-teams-with-nvidia-to-supercharge-llm-performance-and-efficiency\/","title":{"rendered":"Anyscale Teams With NVIDIA to Supercharge LLM Performance and Efficiency"},"content":{"rendered":"<div class=\"wp-block-image\">\n<figure class=\"alignright size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"227\" height=\"139\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2022\/08\/Anyscale_logo.png\" alt=\"\" class=\"wp-image-30156\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2022\/08\/Anyscale_logo.png 227w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2022\/08\/Anyscale_logo-150x92.png 150w\" sizes=\"(max-width: 227px) 100vw, 227px\" \/><\/figure><\/div>\n\n\n<p>Anyscale, the AI infrastructure company built by the creators of Ray, the world\u2019s fastest-growing open-source unified framework for scalable computing, today announced a collaboration with NVIDIA to further boost the performance and efficiency of large language model (LLM) development on\u00a0<a href=\"https:\/\/www.anyscale.com\/ray-open-source\" target=\"_blank\" rel=\"noreferrer noopener\">Ray<\/a>\u00a0and the\u00a0<a href=\"https:\/\/www.anyscale.com\/platform\" target=\"_blank\" rel=\"noreferrer noopener\">Anyscale Platform<\/a>\u00a0for production AI.<\/p>\n\n\n\n<p>The companies are integrating NVIDIA AI software into Anyscale\u2019s scalable computing platforms, including Ray open source, the Anyscale Platform, and Anyscale Endpoints, announced separately today.&nbsp;<\/p>\n\n\n\n<p>The open-source integrations will bring NVIDIA software, including&nbsp;<a 
href=\"https:\/\/developer.nvidia.com\/blog\/nvidia-tensorrt-llm-supercharges-large-language-model-inference-on-nvidia-h100-gpus\" target=\"_blank\" rel=\"noreferrer noopener\">NVIDIA TensorRT-LLM<\/a>,&nbsp;<a href=\"https:\/\/developer.nvidia.com\/triton-inference-server\" target=\"_blank\" rel=\"noreferrer noopener\">NVIDIA Triton Inference Server<\/a>,&nbsp;and&nbsp;<a href=\"https:\/\/www.nvidia.com\/en-us\/ai-data-science\/generative-ai\/nemo-framework\/\" target=\"_blank\" rel=\"noreferrer noopener\">NVIDIA NeMo<\/a>&nbsp;to Ray to supercharge end-to-end AI development and deployment. Making cutting-edge AI software available via open source democratizes access and dramatically increases the audience of developers that can use this integration.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"alignleft size-full is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/08\/NVIDIA_logo_2023.png\" alt=\"\" class=\"wp-image-33093\" style=\"width:214px;height:159px\" width=\"214\" height=\"159\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/08\/NVIDIA_logo_2023.png 300w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/08\/NVIDIA_logo_2023-150x112.png 150w\" sizes=\"(max-width: 214px) 100vw, 214px\" \/><\/figure><\/div>\n\n\n<p>For production AI, the companies will certify the\u00a0<a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/products\/ai-enterprise\/\" target=\"_blank\" rel=\"noreferrer noopener\">NVIDIA AI Enterprise<\/a>\u00a0software suite for the Anyscale Platform, bringing enterprise-grade security, stability, and support to companies deploying AI. 
An additional integration with Anyscale Endpoints will bring support for the NVIDIA software to a greatly expanded pool of AI application developers via easy-to-use application programming interfaces.<\/p>\n\n\n\n<p><em>\u201cRealizing the incredible potential of generative AI requires computing platforms that help developers iterate quickly and save costs when building and tuning LLMs,\u201d said Robert Nishihara, CEO and co-founder of Anyscale. \u201cOur collaboration with NVIDIA will bring even more performance and efficiency to Anyscale\u2019s portfolio so that developers everywhere can create LLMs and generative AI applications with unprecedented speed and efficiency.\u201d<\/em><\/p>\n\n\n\n<p><em>\u201cLLMs are at the heart of today\u2019s generative AI transformation, and the developers creating and customizing these models require full-stack computing with efficient orchestration throughout the AI life cycle,\u201d said Manuvir Das, vice president of Enterprise Computing at NVIDIA. \u201cThe combination of NVIDIA AI and Anyscale unites incredible performance with ease of use and the ability to scale rapidly with success.\u201d<\/em><\/p>\n\n\n\n<p><strong>NVIDIA AI Acceleration Speeds End-to-End Anyscale Development<\/strong><\/p>\n\n\n\n<p>NVIDIA\u2019s open-source and production software helps boost accelerated computing performance and efficiency for generative AI development.&nbsp;<\/p>\n\n\n\n<p>The integration delivers numerous benefits for customers and users:&nbsp;<\/p>\n\n\n\n<ul>\n<li>NVIDIA TensorRT-LLM automatically scales inference to run models in parallel over multiple GPUs, which can provide up to 8X higher performance when running on <a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/h100\/\" target=\"_blank\" rel=\"noreferrer noopener\">NVIDIA H100 Tensor Core GPUs<\/a>, compared to prior-generation GPUs. 
These capabilities will bring further acceleration and efficiency to Ray, which ultimately results in significant cost savings for at-scale LLM development.&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul>\n<li>NVIDIA Triton Inference Server standardizes AI model deployment and execution across every workload. It supports inference across cloud, data center, edge, and embedded devices on GPUs, CPUs, and other processors, maximizing performance and reducing end-to-end latency by running multiple models concurrently to boost GPU utilization and throughput for LLMs. These capabilities will add more efficiency for developers deploying AI in production on Ray and the Anyscale Platform.<\/li>\n<\/ul>\n\n\n\n<ul>\n<li>NVIDIA NeMo is an end-to-end, cloud-native framework for building, customizing, and deploying generative AI models anywhere. It includes training and inferencing frameworks, guardrailing toolkits, data curation tools, and pretrained models, offering enterprises an easy, cost-effective, and fast way to adopt generative AI. The integration of NeMo with Ray and the Anyscale Platform will enable developers to fine-tune and customize models with enterprise data, paving the way for LLMs that understand the unique offerings of individual businesses.&nbsp;<\/li>\n<\/ul>\n\n\n\n<ul>\n<li>Anyscale Endpoints is a service that enables developers to integrate fast, cost-efficient, and scalable LLMs into their applications using popular LLM APIs. Endpoints can be tailored to specific use cases and fine-tuned with additional content and context to serve users\u2019 specific needs while ensuring the best combination of price and performance. 
Endpoints is less than half the cost of comparable proprietary solutions for general workloads and up to 10X less expensive for specific tasks.<\/li>\n<\/ul>\n\n\n\n<p>More details are available on the&nbsp;<a href=\"https:\/\/blogs.nvidia.com\/blog\/2023\/09\/18\/llm-anyscale-nvaie\/\" target=\"_blank\" rel=\"noreferrer noopener\">NVIDIA blog<\/a>.<\/p>\n\n\n\n<p><strong>Availability<\/strong><\/p>\n\n\n\n<p>NVIDIA AI integrations with Anyscale are under development and expected to be available in Q4. Practitioners interested in early access are encouraged to apply\u00a0<a href=\"https:\/\/enterpriseproductregistration.nvidia.com\/?LicType=EVAL&amp;ProductFamily=NVAIEnterprise&amp;Partner=Anyscale\" target=\"_blank\" rel=\"noreferrer noopener\">here<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Anyscale, the AI infrastructure company built by the creators of Ray, the world\u2019s fastest-growing open-source unified framework for scalable computing, today announced a collaboration with NVIDIA to further boost the performance and efficiency of large language model (LLM) development on\u00a0Ray\u00a0and the\u00a0Anyscale Platform\u00a0for production 
AI.<\/p>\n","protected":false},"author":10513,"featured_media":33232,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[526,115,182,180,67,268,56,1],"tags":[1174,1248,263,1173,96],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Anyscale Teams With NVIDIA to Supercharge LLM Performance and Efficiency - insideBIGDATA<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/insidebigdata.com\/2023\/09\/18\/anyscale-teams-with-nvidia-to-supercharge-llm-performance-and-efficiency\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Anyscale Teams With NVIDIA to Supercharge LLM Performance and Efficiency - insideBIGDATA\" \/>\n<meta property=\"og:description\" content=\"Anyscale, the AI infrastructure company built by the creators of Ray, the world\u2019s fastest-growing open-source unified framework for scalable computing, today announced a collaboration with NVIDIA to further boost the performance and efficiency of large language model (LLM) development on\u00a0Ray\u00a0and the\u00a0Anyscale Platform\u00a0for production AI.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/insidebigdata.com\/2023\/09\/18\/anyscale-teams-with-nvidia-to-supercharge-llm-performance-and-efficiency\/\" \/>\n<meta property=\"og:site_name\" content=\"insideBIGDATA\" \/>\n<meta property=\"article:publisher\" content=\"http:\/\/www.facebook.com\/insidebigdata\" \/>\n<meta property=\"article:published_time\" content=\"2023-09-18T09:59:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-09-18T17:20:48+00:00\" \/>\n<meta property=\"og:image\" 
content=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/08\/Generative_AI_shutterstock_2273007347_special.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1100\" \/>\n\t<meta property=\"og:image:height\" content=\"550\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Editorial Team\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:site\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Editorial Team\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/insidebigdata.com\/2023\/09\/18\/anyscale-teams-with-nvidia-to-supercharge-llm-performance-and-efficiency\/\",\"url\":\"https:\/\/insidebigdata.com\/2023\/09\/18\/anyscale-teams-with-nvidia-to-supercharge-llm-performance-and-efficiency\/\",\"name\":\"Anyscale Teams With NVIDIA to Supercharge LLM Performance and Efficiency - 
insideBIGDATA\",\"isPartOf\":{\"@id\":\"https:\/\/insidebigdata.com\/#website\"},\"datePublished\":\"2023-09-18T09:59:00+00:00\",\"dateModified\":\"2023-09-18T17:20:48+00:00\",\"author\":{\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\"},\"breadcrumb\":{\"@id\":\"https:\/\/insidebigdata.com\/2023\/09\/18\/anyscale-teams-with-nvidia-to-supercharge-llm-performance-and-efficiency\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/insidebigdata.com\/2023\/09\/18\/anyscale-teams-with-nvidia-to-supercharge-llm-performance-and-efficiency\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/insidebigdata.com\/2023\/09\/18\/anyscale-teams-with-nvidia-to-supercharge-llm-performance-and-efficiency\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/insidebigdata.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Anyscale Teams With NVIDIA to Supercharge LLM Performance and Efficiency\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/insidebigdata.com\/#website\",\"url\":\"https:\/\/insidebigdata.com\/\",\"name\":\"insideBIGDATA\",\"description\":\"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/insidebigdata.com\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\",\"name\":\"Editorial 
Team\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"caption\":\"Editorial Team\"},\"sameAs\":[\"http:\/\/www.insidebigdata.com\"],\"url\":\"https:\/\/insidebigdata.com\/author\/editorial\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Anyscale Teams With NVIDIA to Supercharge LLM Performance and Efficiency - insideBIGDATA","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/insidebigdata.com\/2023\/09\/18\/anyscale-teams-with-nvidia-to-supercharge-llm-performance-and-efficiency\/","og_locale":"en_US","og_type":"article","og_title":"Anyscale Teams With NVIDIA to Supercharge LLM Performance and Efficiency - insideBIGDATA","og_description":"Anyscale, the AI infrastructure company built by the creators of Ray, the world\u2019s fastest-growing open-source unified framework for scalable computing, today announced a collaboration with NVIDIA to further boost the performance and efficiency of large language model (LLM) development on\u00a0Ray\u00a0and the\u00a0Anyscale Platform\u00a0for production AI.","og_url":"https:\/\/insidebigdata.com\/2023\/09\/18\/anyscale-teams-with-nvidia-to-supercharge-llm-performance-and-efficiency\/","og_site_name":"insideBIGDATA","article_publisher":"http:\/\/www.facebook.com\/insidebigdata","article_published_time":"2023-09-18T09:59:00+00:00","article_modified_time":"2023-09-18T17:20:48+00:00","og_image":[{"width":1100,"height":550,"url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/08\/Generative_AI_shutterstock_2273007347_special.jpg","type":"image\/jpeg"}],"author":"Editorial 
Team","twitter_card":"summary_large_image","twitter_creator":"@insideBigData","twitter_site":"@insideBigData","twitter_misc":{"Written by":"Editorial Team","Est. reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/insidebigdata.com\/2023\/09\/18\/anyscale-teams-with-nvidia-to-supercharge-llm-performance-and-efficiency\/","url":"https:\/\/insidebigdata.com\/2023\/09\/18\/anyscale-teams-with-nvidia-to-supercharge-llm-performance-and-efficiency\/","name":"Anyscale Teams With NVIDIA to Supercharge LLM Performance and Efficiency - insideBIGDATA","isPartOf":{"@id":"https:\/\/insidebigdata.com\/#website"},"datePublished":"2023-09-18T09:59:00+00:00","dateModified":"2023-09-18T17:20:48+00:00","author":{"@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9"},"breadcrumb":{"@id":"https:\/\/insidebigdata.com\/2023\/09\/18\/anyscale-teams-with-nvidia-to-supercharge-llm-performance-and-efficiency\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/insidebigdata.com\/2023\/09\/18\/anyscale-teams-with-nvidia-to-supercharge-llm-performance-and-efficiency\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/insidebigdata.com\/2023\/09\/18\/anyscale-teams-with-nvidia-to-supercharge-llm-performance-and-efficiency\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/insidebigdata.com\/"},{"@type":"ListItem","position":2,"name":"Anyscale Teams With NVIDIA to Supercharge LLM Performance and Efficiency"}]},{"@type":"WebSite","@id":"https:\/\/insidebigdata.com\/#website","url":"https:\/\/insidebigdata.com\/","name":"insideBIGDATA","description":"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/insidebigdata.com\/?s={search_term_string}"},"query-input":"required 
name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9","name":"Editorial Team","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","caption":"Editorial Team"},"sameAs":["http:\/\/www.insidebigdata.com"],"url":"https:\/\/insidebigdata.com\/author\/editorial\/"}]}},"jetpack_featured_media_url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/08\/Generative_AI_shutterstock_2273007347_special.jpg","jetpack_shortlink":"https:\/\/wp.me\/p9eA3j-8GU","jetpack-related-posts":[{"id":30155,"url":"https:\/\/insidebigdata.com\/2022\/08\/23\/anyscale-unveils-ray-2-0-and-anyscale-innovations-at-ray-summit-2022\/","url_meta":{"origin":33412,"position":0},"title":"Anyscale Unveils Ray 2.0 and Anyscale Innovations at Ray Summit 2022","date":"August 23, 2022","format":false,"excerpt":"Anyscale, the company behind Ray, the unified framework for scalable computing, today announced Ray 2.0 and the enterprise-ready capabilities and roadmap for Anyscale\u2019s managed Ray platform at the Ray Summit. 
This year\u2019s Summit features dozens of organizations scaling their AI initiatives with Ray including Uber, IBM, Meta, Riot Games, Instacart\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":31003,"url":"https:\/\/insidebigdata.com\/2022\/11\/29\/the-anyscale-platform-built-on-ray-introduces-new-breakthroughs-in-ai-development-experimentation-and-ai-scaling\/","url_meta":{"origin":33412,"position":1},"title":"The Anyscale Platform\u2122, built on Ray, Introduces New Breakthroughs in AI Development, Experimentation and AI Scaling","date":"November 29, 2022","format":false,"excerpt":"Anyscale, the company behind\u00a0Ray\u00a0open source, the unified compute framework for scaling any machine learning or Python workload, announced several new advancements on the Anyscale Platform\u2122\u00a0at\u00a0AWS re:Invent\u00a0in Las Vegas, NV. The new capabilities extend beyond the advantages of Ray open source to make AI\/ML and Python workload development, experimentation, and scaling\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":33452,"url":"https:\/\/insidebigdata.com\/2023\/09\/22\/insidebigdata-ai-news-briefs-9-22-2023\/","url_meta":{"origin":33412,"position":2},"title":"insideBIGDATA AI News Briefs \u2013 9\/22\/2023","date":"September 22, 2023","format":false,"excerpt":"Welcome insideBIGDATA AI News Briefs, our timely new feature bringing you the latest industry insights and perspectives surrounding the field of AI including deep learning, large language models, generative AI, and transformers. 
We\u2019re working tirelessly to dig up the most timely and curious tidbits underlying the day\u2019s most popular technologies.\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/07\/AI-News-Briefs-column-banner.png?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":24772,"url":"https:\/\/insidebigdata.com\/2020\/07\/17\/democratizing-ai-how-to-gain-actionable-insights-through-open-source\/","url_meta":{"origin":33412,"position":3},"title":"Democratizing AI: How to Gain Actionable Insights through Open Source","date":"July 17, 2020","format":false,"excerpt":"In this special guest feature, Ion Stoica, Co-founder of Anyscale, details the state of machine learning application creation in the enterprise today, the case for democratizing AI, and what this means for the future of work. He\u2019ll also share how open source software tools such as Ray (which he helped\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":33862,"url":"https:\/\/insidebigdata.com\/2023\/11\/13\/nvidia-supercharges-hopper-the-worlds-leading-ai-computing-platform\/","url_meta":{"origin":33412,"position":4},"title":"NVIDIA Supercharges Hopper, the World\u2019s Leading AI Computing Platform","date":"November 13, 2023","format":false,"excerpt":"NVIDIA today announced it has supercharged the world\u2019s leading AI computing platform with the introduction of the NVIDIA HGX\u2122 H200. 
Based on NVIDIA Hopper\u2122 architecture, the platform features the NVIDIA H200 Tensor Core GPU with advanced memory to handle massive amounts of data for generative AI and high performance computing\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/AI_shutterstock_2287025875_special-1.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":33719,"url":"https:\/\/insidebigdata.com\/2023\/10\/23\/lambda-and-vast-data-partner-to-accelerate-ai-training-across-public-and-private-cloud-leveraging-nvidia-technology\/","url_meta":{"origin":33412,"position":5},"title":"Lambda and VAST Data Partner to Accelerate AI Training Across Public and Private Cloud, Leveraging NVIDIA Technology\u00a0","date":"October 23, 2023","format":false,"excerpt":"VAST Data, the AI data platform company and Lambda, a leading Infrastructure-as-a-Service and compute provider for public and private GPU infrastructure, today announced a strategic partnership that will enable the world's first hybrid cloud experience dedicated to AI and deep learning workloads. 
Together, Lambda and VAST are building an NVIDIA\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/09\/AI_data_storage_shutterstock_1107715973_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]}],"_links":{"self":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/33412"}],"collection":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/users\/10513"}],"replies":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/comments?post=33412"}],"version-history":[{"count":0,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/33412\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media\/33232"}],"wp:attachment":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media?parent=33412"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/categories?post=33412"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/tags?post=33412"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}