{"id":34204,"date":"2023-12-14T02:58:00","date_gmt":"2023-12-14T10:58:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=34204"},"modified":"2023-12-14T10:37:21","modified_gmt":"2023-12-14T18:37:21","slug":"deci-unveils-decilm-7b-a-leap-forward-in-language-model-performance-and-inference-cost-efficiency","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2023\/12\/14\/deci-unveils-decilm-7b-a-leap-forward-in-language-model-performance-and-inference-cost-efficiency\/","title":{"rendered":"Deci Unveils DeciLM-7B: A Leap Forward in Language Model Performance and Inference Cost Efficiency"},"content":{"rendered":"\n<p><a href=\"http:\/\/www.deci.ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">Deci<\/a>, the deep learning company harnessing AI to build AI, unveiled the latest addition to its suite of innovative generative AI models, DeciLM-7B, a 7 billion parameter large language model. Building upon the success of its predecessor DeciLM 6B, DeciLM 7B is setting new benchmarks in the large language model (LLM) space, outperforming prominent open-source models such as Llama2 7B and Mistral 7B in both accuracy and efficiency.\u00a0<\/p>\n\n\n\n<p>DeciLM-7B stands out for its unmatched performance, surpassing open-source language models up to 13 billion parameters in both accuracy and speed with less computational demand. It achieves a 1.83x and 2.39x increase in throughput over Mistral 7B and Llama 2 7B, respectively, which means significantly faster processing speeds compared to competing models. Its compact design is ideal for cost-effective GPUs, striking an unparalleled balance between affordability and high-end performance.<\/p>\n\n\n\n<p>The remarkable performance of DeciLM-7B can be further accelerated when used in tandem with Infery-LLM, the world\u2019s fastest inference engine, designed to deliver high throughput, low latency and cost effective inference on widely available GPUs. 
This powerful duo sets a new standard in throughput, achieving speeds 4.4 times greater than Mistral 7B served with vLLM, without sacrificing quality. Leveraging DeciLM-7B in conjunction with Infery-LLM enables teams to drastically reduce their LLM compute expenses while benefiting from faster inference. This integration facilitates the efficient scaling of generative AI workloads and supports the transition to more cost-effective hardware.<\/p>\n\n\n\n<p>This synergy enables the efficient serving of multiple clients simultaneously without excessive compute costs or latency issues. This is especially crucial in sectors such as telecommunications, online retail, and cloud services, where the ability to respond to a massive influx of concurrent customer inquiries in real time can significantly enhance user experience and operational efficiency.<\/p>\n\n\n\n<p>Licensed under Apache 2.0, DeciLM-7B is available for use and deployment anywhere, including local setups, enabling teams to fine-tune it for specific industry applications without compromising data security or privacy. Its versatility allows teams to tailor it to unique use cases across a wide range of business applications, including content creation, translation, conversation modeling, data categorization, summarization, sentiment analysis, and chatbot development, among others. When fine-tuned on specific datasets, DeciLM-7B can deliver quality similar to that of much larger models such as GPT-3.5, at approximately 97% lower cost and higher speed.<\/p>\n\n\n\n<p><em>&#8220;With the increasing use of Generative AI in various business sectors, there&#8217;s a growing demand for models that are not only highly performant but also operationally cost efficient,\u201d said Yonatan Geifman, CEO and co-founder of Deci. &#8220;Our latest innovation, DeciLM-7B, combined with Infery-LLM, is a game-changer in this regard. 
It&#8217;s adaptable to diverse settings, including on-premise solutions, and its exceptional inference efficiency makes high-quality large language models more accessible to a wider range of users.\u201d<\/p>\n\n\n\n<p>DeciLM-7B\u2019s cost-effectiveness and reduced computational demand make advanced AI technologies more accessible to businesses of all sizes, fostering innovation and driving forward digital transformation across various sectors. With DeciLM-7B, companies can now leverage the full potential of AI without the prohibitive costs or complexities previously associated with high-end language models.<\/p>\n\n\n\n<p>Deci AI&#8217;s introduction of DeciLM-7B builds on its track record of innovative and efficient Generative AI models, including <a href=\"https:\/\/deci.ai\/blog\/decilm-15-times-faster-than-llama2-nas-generated-llm-with-variable-gqa\/\" target=\"_blank\" rel=\"noreferrer noopener\">DeciLM 6B<\/a>, <a href=\"https:\/\/deci.ai\/blog\/decicoder-efficient-and-accurate-code-generation-llm\/\" target=\"_blank\" rel=\"noreferrer noopener\">DeciCoder 1B<\/a>, and <a href=\"https:\/\/deci.ai\/blog\/decidiffusion-1-0-3x-faster-than-stable-diffusion-same-quality\/\" target=\"_blank\" rel=\"noreferrer noopener\">DeciDiffusion 1.0<\/a>. 
Similar to its other models, DeciLM-7B was generated with Deci&#8217;s cutting-edge Automated Neural Architecture Construction (AutoNAC) engine, the most advanced Neural Architecture Search (NAS)-based technology on the market, with a focus on efficiency.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Deci, the deep learning company harnessing AI to build AI, unveiled the latest addition to its suite of innovative generative AI models, DeciLM-7B, a 7 billion parameter large language model. 
Building upon the success of its predecessor, DeciLM 6B, DeciLM-7B sets new benchmarks in the large language model (LLM) space, outperforming prominent open-source models such as Llama 2 7B and Mistral 7B in both accuracy and efficiency.\u00a0<\/p>\n","protected":false},"author":10513,"featured_media":29633,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[526,115,182,180,67,268,56,1],"tags":[437,324,1245,1248,96],"acf":[],"jetpack_featured_media_url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2022\/06\/Backprop_shutterstock_1615182352.jpg","jetpack_shortlink":"https:\/\/wp.me\/p9eA3j-8TG"}