{"id":32725,"date":"2023-06-28T03:00:00","date_gmt":"2023-06-28T10:00:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=32725"},"modified":"2023-06-26T16:41:21","modified_gmt":"2023-06-26T23:41:21","slug":"generative-ai-report-mosaicml-releases-open-source-mpt-30b-llms-trained-on-h100s-to-power-generative-ai-applications","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2023\/06\/28\/generative-ai-report-mosaicml-releases-open-source-mpt-30b-llms-trained-on-h100s-to-power-generative-ai-applications\/","title":{"rendered":"MosaicML Releases Open-Source MPT-30B LLMs, Trained on H100s to Power Generative AI Applications"},"content":{"rendered":"<div class=\"wp-block-image\">\n<figure class=\"alignright size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"295\" height=\"186\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/MosaicML_logo.png\" alt=\"\" class=\"wp-image-32726\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/MosaicML_logo.png 295w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/MosaicML_logo-150x95.png 150w\" sizes=\"(max-width: 295px) 100vw, 295px\" \/><\/figure><\/div>\n\n\n<p><a href=\"https:\/\/www.mosaicml.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">MosaicML<\/a>&nbsp;announced the availability of MPT-30B Base, Instruct, and Chat, the most advanced models in their MPT (MosaicML Pretrained Transformer) series of open-source large language models. These state-of-the-art models &#8211; which were trained with an 8k token context window &#8211; surpass the quality of the original GPT-3 and can be used directly for inference and\/or as starting points for building proprietary models. They were trained on the MosaicML Platform using &#8211; in part &#8211; NVIDIA\u2019s latest-generation H100 accelerators, which are now available for MosaicML\u2019s customers. 
By building on top of MPT-30B, businesses can harness the power of generative AI without compromising security or data privacy.<\/p>\n\n\n\n<p><strong>Over 3 Million MPT Downloads since May<\/strong><\/p>\n\n\n\n<p>The MosaicML MPT family of models is already among the most powerful and popular open-source language models available for commercial use today. Since launching on May 5, 2023, the MPT-7B models (Base, Instruct, Chat, StoryWriter) have been downloaded over 3.3 million times. The new release extends the MPT family with larger, higher-quality MPT-30B models that unlock even more applications. As always, MosaicML\u2019s MPT models are optimized for efficient training and inference.<\/p>\n\n\n\n<p><strong>MPT-30B surpasses GPT-3<\/strong><\/p>\n\n\n\n<p>As GPT-3 passes its third anniversary, it is worth noting that MPT-30B was designed to surpass the quality of this iconic model. When measured using standard academic benchmarks, MPT-30B outperforms the originally published GPT-3.<\/p>\n\n\n\n<p>Furthermore, MPT-30B achieves this quality target with roughly one-sixth the number of parameters: GPT-3 has 175 billion parameters, while MPT-30B has only 30 billion. This means MPT-30B is easier to run on local hardware and much cheaper to deploy for inference. Starting today, developers and enterprises can build and deploy their own commercially viable, enterprise-grade, GPT-3-quality models. MPT-30B was also trained at a cost orders of magnitude lower than estimates for the original GPT-3, putting the ability to train custom GPT-3-class models within reach of enterprises.<\/p>\n\n\n\n<p>Finally, MPT-30B was trained on longer sequences (up to 8,000 tokens) than GPT-3, the popular LLaMA family of models, and the recent Falcon model (2,000 tokens each). 
It is designed to handle even longer sequences in practice, making it a perfect fit for data-heavy enterprise applications.<\/p>\n\n\n\n<p><strong>Training on H100 GPUs now available for MosaicML customers<\/strong><\/p>\n\n\n\n<p>MPT-30B is the first publicly known LLM trained on NVIDIA H100 GPUs, thanks to the flexibility and reliability of the MosaicML platform. Within days of hardware delivery, the MosaicML team seamlessly moved the MPT-30B training run from its original A100 cluster to a new H100 cluster, increasing throughput per GPU by over 2.4x and finishing the run sooner. MosaicML is committed to bringing the latest advances in hardware and software within reach of all enterprises, enabling them to train models faster and at lower cost than before.<\/p>\n\n\n\n<p><strong>Times and costs to pretrain MPT-30B from scratch on 1 trillion tokens.<\/strong><\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"700\" height=\"93\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/MosaicML_fig1.png\" alt=\"\" class=\"wp-image-32727\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/MosaicML_fig1.png 700w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/MosaicML_fig1-300x40.png 300w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/MosaicML_fig1-150x20.png 150w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/figure><\/div>\n\n\n<p>Times for H100 are extrapolated from a 256xH100 system. Costs are based on current MosaicML reserved cluster pricing of $2.50\/A100-40GB\/hour and $5.00\/H100-80GB\/hour as of June 22, 2023. 
Costs are subject to change.<\/p>\n\n\n\n<p><strong>Finetuning times and costs for MPT-30B on smaller systems.<\/strong><\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"700\" height=\"74\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/MosaicML_fig2.png\" alt=\"\" class=\"wp-image-32728\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/MosaicML_fig2.png 700w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/MosaicML_fig2-300x32.png 300w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/MosaicML_fig2-150x16.png 150w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/figure><\/div>\n\n\n<p>Costs are based on current MosaicML reserved cluster pricing of $2.50\/A100-40GB\/hour and $5.00\/H100-80GB\/hour as of June 22nd, 2023. Costs are subject to change.<\/p>\n\n\n\n<p><strong>MosaicML MPT: Powering a New Generation of AI Applications<\/strong><\/p>\n\n\n\n<p>Enterprises are deploying MPT models for use cases like code completion and dialogue generation, as well as fine-tuning these models with their own proprietary data.<\/p>\n\n\n\n<p>Replit, the world\u2019s leading web-based IDE, was able to build a new code generation model using their proprietary data together with MosaicML\u2019s training platform in just three days. Their custom MPT model,&nbsp;<a href=\"https:\/\/huggingface.co\/replit\/replit-code-v1-3b\" target=\"_blank\" rel=\"noreferrer noopener\">replit-code-v1-3b<\/a>, significantly improved the performance of their GhostWriter product in terms of speed, cost, and code quality.<\/p>\n\n\n\n<p><a href=\"https:\/\/scatterlab.co.kr\/\" target=\"_blank\" rel=\"noreferrer noopener\">Scatter Lab<\/a>, a cutting-edge AI startup building \u2018social AI chatbots\u2019 that enable engaging, human-like conversations, trained their own MPT model from scratch to power a custom chatbot. 
This model, one of the first multilingual generative AI models that can understand both English and Korean, enables new chat experiences for their 1.5 million users.<\/p>\n\n\n\n<p>Worldwide travel and expense management software company&nbsp;<a href=\"https:\/\/navan.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Navan<\/a>&nbsp;is building its custom LLMs on the MPT foundation.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\">\n<p>\u201cAt Navan, we use generative AI across our products and services, powering experiences such as our virtual travel agent and our conversational business intelligence agent. MosaicML\u2019s foundation models offer state-of-the-art language capabilities, while being extremely efficient to fine-tune and serve inference at scale. We are excited to see how this promising technology develops,\u201d said Ilan Twig, co-founder and chief technology officer at Navan.<\/p>\n<\/blockquote>\n\n\n\n<p>MPT-30B is designed to accelerate model development for enterprises that want to build their own language models for chat, question answering, summarization, extraction, and other language applications.<\/p>\n\n\n\n<p><strong>How Developers Can Use MPT-30B Today<\/strong><\/p>\n\n\n\n<p>MPT-30B is fully open source and is available for download through the HuggingFace Hub. Developers can fine-tune MPT-30B on their data, as well as deploy the model for inference on their own infrastructure. For a faster and easier experience, developers can run model inference using MosaicML&#8217;s MPT-30B-Instruct managed endpoint, which frees developers from the need to secure GPU capacity and takes care of the serving infrastructure. At a price of $0.005\/1K tokens, MPT-30B-Instruct is 4-6X cheaper than comparable endpoints like OpenAI DaVinci. 
Refer to the&nbsp;<a href=\"https:\/\/www.mosaicml.com\/blog\/mpt-30b\" target=\"_blank\" rel=\"noreferrer noopener\">MPT-30B blog<\/a>&nbsp;for full technical details.<\/p>\n\n\n\n<p><em>Sign up for the free insideBIGDATA&nbsp;<a href=\"http:\/\/inside-bigdata.com\/newsletter\/\" target=\"_blank\" rel=\"noreferrer noopener\">newsletter<\/a>.<\/em><\/p>\n\n\n\n<p><em>Join us on Twitter:&nbsp;<a href=\"https:\/\/twitter.com\/InsideBigData1\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/twitter.com\/InsideBigData1<\/a><\/em><\/p>\n\n\n\n<p><em>Join us on LinkedIn:&nbsp;<a href=\"https:\/\/www.linkedin.com\/company\/insidebigdata\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.linkedin.com\/company\/insidebigdata\/<\/a><\/em><\/p>\n\n\n\n<p><em>Join us on Facebook:&nbsp;<a href=\"https:\/\/www.facebook.com\/insideBIGDATANOW\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.facebook.com\/insideBIGDATANOW<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>MosaicML\u00a0announced the availability of MPT-30B Base, Instruct, and Chat, the most advanced models in their MPT (MosaicML Pretrained Transformer) series of open-source large language models. 
These state-of-the-art models &#8211; which were trained with an 8k token context window &#8211; surpass the quality of the original GPT-3 and can be used directly for inference and\/or as starting points for building proprietary models.<\/p>\n","protected":false},"author":10513,"featured_media":32680,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[526,115,182,180,67,268,56,1],"tags":[437,264,1245,1248,96],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>MosaicML Releases Open-Source MPT-30B LLMs, Trained on H100s to Power Generative AI Applications - insideBIGDATA<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/insidebigdata.com\/2023\/06\/28\/generative-ai-report-mosaicml-releases-open-source-mpt-30b-llms-trained-on-h100s-to-power-generative-ai-applications\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"MosaicML Releases Open-Source MPT-30B LLMs, Trained on H100s to Power Generative AI Applications - insideBIGDATA\" \/>\n<meta property=\"og:description\" content=\"MosaicML\u00a0announced the availability of MPT-30B Base, Instruct, and Chat, the most advanced models in their MPT (MosaicML Pretrained Transformer) series of open-source large language models. 
These state-of-the-art models - which were trained with an 8k token context window - surpass the quality of the original GPT-3 and can be used directly for inference and\/or as starting points for building proprietary models.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/insidebigdata.com\/2023\/06\/28\/generative-ai-report-mosaicml-releases-open-source-mpt-30b-llms-trained-on-h100s-to-power-generative-ai-applications\/\" \/>\n<meta property=\"og:site_name\" content=\"insideBIGDATA\" \/>\n<meta property=\"article:publisher\" content=\"http:\/\/www.facebook.com\/insidebigdata\" \/>\n<meta property=\"article:published_time\" content=\"2023-06-28T10:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-06-26T23:41:21+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2313909647_special.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1100\" \/>\n\t<meta property=\"og:image:height\" content=\"550\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Editorial Team\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:site\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Editorial Team\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/insidebigdata.com\/2023\/06\/28\/generative-ai-report-mosaicml-releases-open-source-mpt-30b-llms-trained-on-h100s-to-power-generative-ai-applications\/\",\"url\":\"https:\/\/insidebigdata.com\/2023\/06\/28\/generative-ai-report-mosaicml-releases-open-source-mpt-30b-llms-trained-on-h100s-to-power-generative-ai-applications\/\",\"name\":\"MosaicML Releases Open-Source MPT-30B LLMs, Trained on H100s to Power Generative AI Applications - insideBIGDATA\",\"isPartOf\":{\"@id\":\"https:\/\/insidebigdata.com\/#website\"},\"datePublished\":\"2023-06-28T10:00:00+00:00\",\"dateModified\":\"2023-06-26T23:41:21+00:00\",\"author\":{\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\"},\"breadcrumb\":{\"@id\":\"https:\/\/insidebigdata.com\/2023\/06\/28\/generative-ai-report-mosaicml-releases-open-source-mpt-30b-llms-trained-on-h100s-to-power-generative-ai-applications\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/insidebigdata.com\/2023\/06\/28\/generative-ai-report-mosaicml-releases-open-source-mpt-30b-llms-trained-on-h100s-to-power-generative-ai-applications\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/insidebigdata.com\/2023\/06\/28\/generative-ai-report-mosaicml-releases-open-source-mpt-30b-llms-trained-on-h100s-to-power-generative-ai-applications\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/insidebigdata.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"MosaicML Releases Open-Source MPT-30B LLMs, Trained on H100s to Power Generative AI 
Applications\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/insidebigdata.com\/#website\",\"url\":\"https:\/\/insidebigdata.com\/\",\"name\":\"insideBIGDATA\",\"description\":\"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/insidebigdata.com\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\",\"name\":\"Editorial Team\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"caption\":\"Editorial Team\"},\"sameAs\":[\"http:\/\/www.insidebigdata.com\"],\"url\":\"https:\/\/insidebigdata.com\/author\/editorial\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"MosaicML Releases Open-Source MPT-30B LLMs, Trained on H100s to Power Generative AI Applications - insideBIGDATA","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/insidebigdata.com\/2023\/06\/28\/generative-ai-report-mosaicml-releases-open-source-mpt-30b-llms-trained-on-h100s-to-power-generative-ai-applications\/","og_locale":"en_US","og_type":"article","og_title":"MosaicML Releases Open-Source MPT-30B LLMs, Trained on H100s to Power Generative AI Applications - insideBIGDATA","og_description":"MosaicML\u00a0announced the availability of MPT-30B Base, Instruct, and Chat, the most advanced models in their MPT (MosaicML Pretrained Transformer) series of open-source large language models. These state-of-the-art models - which were trained with an 8k token context window - surpass the quality of the original GPT-3 and can be used directly for inference and\/or as starting points for building proprietary models.","og_url":"https:\/\/insidebigdata.com\/2023\/06\/28\/generative-ai-report-mosaicml-releases-open-source-mpt-30b-llms-trained-on-h100s-to-power-generative-ai-applications\/","og_site_name":"insideBIGDATA","article_publisher":"http:\/\/www.facebook.com\/insidebigdata","article_published_time":"2023-06-28T10:00:00+00:00","article_modified_time":"2023-06-26T23:41:21+00:00","og_image":[{"width":1100,"height":550,"url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2313909647_special.jpg","type":"image\/jpeg"}],"author":"Editorial Team","twitter_card":"summary_large_image","twitter_creator":"@insideBigData","twitter_site":"@insideBigData","twitter_misc":{"Written by":"Editorial Team","Est. 
reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/insidebigdata.com\/2023\/06\/28\/generative-ai-report-mosaicml-releases-open-source-mpt-30b-llms-trained-on-h100s-to-power-generative-ai-applications\/","url":"https:\/\/insidebigdata.com\/2023\/06\/28\/generative-ai-report-mosaicml-releases-open-source-mpt-30b-llms-trained-on-h100s-to-power-generative-ai-applications\/","name":"MosaicML Releases Open-Source MPT-30B LLMs, Trained on H100s to Power Generative AI Applications - insideBIGDATA","isPartOf":{"@id":"https:\/\/insidebigdata.com\/#website"},"datePublished":"2023-06-28T10:00:00+00:00","dateModified":"2023-06-26T23:41:21+00:00","author":{"@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9"},"breadcrumb":{"@id":"https:\/\/insidebigdata.com\/2023\/06\/28\/generative-ai-report-mosaicml-releases-open-source-mpt-30b-llms-trained-on-h100s-to-power-generative-ai-applications\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/insidebigdata.com\/2023\/06\/28\/generative-ai-report-mosaicml-releases-open-source-mpt-30b-llms-trained-on-h100s-to-power-generative-ai-applications\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/insidebigdata.com\/2023\/06\/28\/generative-ai-report-mosaicml-releases-open-source-mpt-30b-llms-trained-on-h100s-to-power-generative-ai-applications\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/insidebigdata.com\/"},{"@type":"ListItem","position":2,"name":"MosaicML Releases Open-Source MPT-30B LLMs, Trained on H100s to Power Generative AI Applications"}]},{"@type":"WebSite","@id":"https:\/\/insidebigdata.com\/#website","url":"https:\/\/insidebigdata.com\/","name":"insideBIGDATA","description":"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning 
Strategies","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/insidebigdata.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9","name":"Editorial Team","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","caption":"Editorial Team"},"sameAs":["http:\/\/www.insidebigdata.com"],"url":"https:\/\/insidebigdata.com\/author\/editorial\/"}]}},"jetpack_featured_media_url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2313909647_special.jpg","jetpack_shortlink":"https:\/\/wp.me\/p9eA3j-8vP","jetpack-related-posts":[{"id":32736,"url":"https:\/\/insidebigdata.com\/2023\/06\/26\/databricks-signs-definitive-agreement-to-acquire-mosaicml-a-leading-generative-ai-platform\/","url_meta":{"origin":32725,"position":0},"title":"Databricks Signs Definitive Agreement to Acquire MosaicML, a Leading Generative AI Platform","date":"June 26, 2023","format":false,"excerpt":"Databricks, the Data and AI company, today announced it has entered into a definitive agreement to acquire MosaicML, a leading generative AI platform. 
Together, Databricks and MosaicML will make generative AI accessible for every organization, enabling them to build, own and secure generative AI models with their own data.\u00a0The transaction\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2313909647_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":33012,"url":"https:\/\/insidebigdata.com\/2023\/08\/04\/top-10-insidebigdata-articles-for-july-2023\/","url_meta":{"origin":32725,"position":1},"title":"TOP 10 insideBIGDATA Articles for July 2023","date":"August 4, 2023","format":false,"excerpt":"In this continuing regular feature, we give all our valued readers a monthly heads-up for the top 10 most viewed articles appearing on insideBIGDATA. Over the past several months, we\u2019ve heard from many of our followers that this feature will enable them to catch up with important news and features\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/07\/Top10-column-banner_special.png?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":32787,"url":"https:\/\/insidebigdata.com\/2023\/07\/05\/what-the-experts-expect-of-generative-ai-2023-survey-results\/","url_meta":{"origin":32725,"position":2},"title":"What The Experts Expect of Generative AI: 2023 Survey Results","date":"July 5, 2023","format":false,"excerpt":"Generative AI is quickly becoming a top priority for any company\u2019s digital transformation strategy. It is hijacking conversations everywhere, from the boardroom to the dining room \u2013 we\u2019re even seeing accelerated M&A (ie. MosaicML) to help businesses build their own foundation models and generative AI tools. 
As business leaders assess\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/AI_shutterstock_2287025875_special-1.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":34119,"url":"https:\/\/insidebigdata.com\/2023\/12\/08\/hallucination-index-identifies-best-llms-for-most-popular-ai-use-cases\/","url_meta":{"origin":32725,"position":3},"title":"Hallucination Index Identifies Best LLMs for Most Popular AI Use Cases","date":"December 8, 2023","format":false,"excerpt":"Galileo, a leading machine learning (ML) company for unstructured data, released a Hallucination Index developed by its research arm, Galileo Labs, to help users of today\u2019s leading LLMs determine which model is least likely to hallucinate for their intended application.","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/AI_shutterstock_2287025875_special-1.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":33231,"url":"https:\/\/insidebigdata.com\/2023\/08\/28\/generative-ai-report-nutanix-simplifies-adoption-of-generative-ai-with-new-nutanix-gpt-in-a-box-solution\/","url_meta":{"origin":32725,"position":4},"title":"Generative AI Report: Nutanix Simplifies Adoption of Generative AI with New Nutanix GPT-in-a-Box Solution","date":"August 28, 2023","format":false,"excerpt":"Nutanix\u00a0(NASDAQ:\u00a0NTNX), a leader in hybrid multicloud computing, announced the Nutanix GPT-in-a-Box\u2122\u00a0solution for customers looking to jump-start their artificial intelligence (AI) and machine learning (ML) innovation, while maintaining control over their data. 
The new offering is a full-stack software-defined AI-ready platform, along with services to help organizations size and configure hardware\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/08\/Generative_AI_shutterstock_2273007347_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":32282,"url":"https:\/\/insidebigdata.com\/2023\/05\/08\/insidebigdata-latest-news-5-8-2023\/","url_meta":{"origin":32725,"position":5},"title":"insideBIGDATA Latest News \u2013 5\/8\/2023","date":"May 8, 2023","format":false,"excerpt":"In this regular column, we\u2019ll bring you all the latest industry news centered around our main topics of focus: big data, data science, machine learning, AI, and deep learning. Our industry is constantly accelerating with new products and services being announced everyday. Fortunately, we\u2019re in close touch with vendors from\u2026","rel":"","context":"In &quot;AI Deep 
Learning&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/32725"}],"collection":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/users\/10513"}],"replies":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/comments?post=32725"}],"version-history":[{"count":0,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/32725\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media\/32680"}],"wp:attachment":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media?parent=32725"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/categories?post=32725"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/tags?post=32725"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}