{"id":34091,"date":"2023-12-11T03:00:00","date_gmt":"2023-12-11T11:00:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=34091"},"modified":"2023-12-15T08:43:32","modified_gmt":"2023-12-15T16:43:32","slug":"crafting-precision-content-using-large-language-models","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2023\/12\/11\/crafting-precision-content-using-large-language-models\/","title":{"rendered":"Crafting Precision Content Using Large Language Models\u00a0"},"content":{"rendered":"\n<p>By <a href=\"https:\/\/www.dacgroup.com\/blog\/author\/kpuvanesasingham\" target=\"_blank\" rel=\"noreferrer noopener\">Kuhan Puvanesasingham<\/a><\/p>\n\n\n\n<p>The latest Large Language Models&nbsp;(LLMs), although toddlers, are in many ways much smarter than the average human is\u2014or will ever be. Not only do they know more, but they have a much greater capacity to process and act on vast arrays of instructions.&nbsp;&nbsp;<\/p>\n\n\n\n<p><a href=\"https:\/\/info.dacgroup.com\/largelanguagemodels?_ga=2.7635954.1300145646.1701105715-2138258860.1697574270\" target=\"_blank\" rel=\"noreferrer noopener\">We\u2019ve recently explored the potential of LLMs<\/a> to interpret or generate text considering numerous parameters with extraordinary precision. We have demonstrated their ability to evaluate unstructured text blocks and grade them along various scales with great success. 
Additionally, we have been able to generate specific, tailored content using a complex series of instructions that can be tuned with precision to the same set of scales.&nbsp;<\/p>\n\n\n\n<p>We are entering a new era of performance-based, data-driven content where we can transform LLMs into written content production facilities, complete with a control board of knobs and dials to numerically fine-tune optimal marketing content across various formats.&nbsp;&nbsp;<\/p>\n\n\n\n<p>Here, we share our initial <a href=\"https:\/\/info.dacgroup.com\/largelanguagemodels?_ga=2.7635954.1300145646.1701105715-2138258860.1697574270\" target=\"_blank\" rel=\"noreferrer noopener\">experiment<\/a> findings and offer a general methodology to harness LLMs to extract quantitative measures from otherwise unstructured text, opening the door to statistical analysis and optimization of content creation.&nbsp;<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"605\" height=\"319\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/12\/Fig1.jpg\" alt=\"\" class=\"wp-image-34095\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/12\/Fig1.jpg 605w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/12\/Fig1-300x158.jpg 300w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/12\/Fig1-150x79.jpg 150w\" sizes=\"(max-width: 605px) 100vw, 605px\" \/><figcaption class=\"wp-element-caption\">Source: Analytics Vidhya<\/figcaption><\/figure><\/div>\n\n\n<p><strong>Large language models are perfectly suited to plot ideas<\/strong><\/p>\n\n\n\n<p>Large language models are constructed using mathematical vectors. This foundation allows them to adeptly translate blocks of text\u2014rating, scoring, or mapping content into numeric values that capture the distinct qualities of each block. 
For example, we can ask the model to evaluate how \u2018technical\u2019 this blog is on a scale of 1-10, where the boundaries are defined by example or by user instruction. Similarly, LLMs can reverse this direction to generate blocks of text that correspond to numeric scores of a certain attribute submitted by a user. For example, you can ask the LLM to write a blog post with the degree of technicality scoring 9 out of 10.<\/p>\n\n\n\n<p>We can additively include more concepts to build a conceptual space. This is what we term a conceptual Cartesian space, which the LLM can refer to for content generation. We can plot a point in this space to define an idea based on its position relative to each of the axes that define our space.&nbsp;&nbsp;<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"624\" height=\"362\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/12\/Fig2.png\" alt=\"\" class=\"wp-image-34097\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/12\/Fig2.png 624w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/12\/Fig2-300x174.png 300w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/12\/Fig2-150x87.png 150w\" sizes=\"(max-width: 624px) 100vw, 624px\" \/><figcaption class=\"wp-element-caption\">Source: https:\/\/serokell.io\/blog\/language-models-behind-chatgpt<\/figcaption><\/figure><\/div>\n\n\n<p><strong>Our experiments and findings<\/strong><\/p>\n\n\n\n<p>We conducted&nbsp;a <a href=\"https:\/\/info.dacgroup.com\/largelanguagemodels?_ga=2.7635954.1300145646.1701105715-2138258860.1697574270\" target=\"_blank\" rel=\"noreferrer noopener\">series of experiments<\/a> to&nbsp;validate&nbsp;the effectiveness and flexibility of&nbsp;conceptual Cartesian&nbsp;mapping using LLMs.&nbsp;We took a \u2018ground-up\u2019 approach to&nbsp;validate&nbsp;our&nbsp;methodology, starting with basic experiments and&nbsp;increasing the complexity 
at each step.&nbsp;<\/p>\n\n\n\n<p><em>Gradients experiment&nbsp;<\/em><\/p>\n\n\n\n<p>This experiment explores the LLM\u2019s ability to scale content along linear gradients, providing users with control in generating or evaluating text. We examined different scale ranges (1-10, 1-100) and demonstrated the model\u2019s adherence to specific scoring frameworks. Results affirm the model\u2019s ability to methodically follow chosen gradients.<\/p>\n\n\n\n<p><em>Alternative scoring methods experiment&nbsp;<\/em><\/p>\n\n\n\n<p>In this experiment, we tested the influence of alternative scoring methods on text output. The LLM is instructed to apply various scoring frameworks, showcasing the model\u2019s adaptability. It successfully crafts responses based on specific rules, indicating the customization potential of LLMs for diverse applications. We even used a respected psychological framework to grade (or diagnose) empathy scores for a block of content.<\/p>\n\n\n\n<p><em>Multi-dimensional space experiment&nbsp;<\/em><\/p>\n\n\n\n<p>This experiment delves into the model\u2019s performance in multi-dimensional spaces. The study introduces concepts like practicality and technicality as additional axes, illustrating the LLM\u2019s ability to handle complex ideas and multiple dimensions effectively. The results indicate the model\u2019s agility in navigating intricate multi-dimensional spaces.<\/p>\n\n\n\n<p><em>Unspecified relative space experiment&nbsp;<\/em><\/p>\n\n\n\n<p>This experiment explores the LLM\u2019s capability to quantitatively analyze ideas relative to other ideas, not gradients along a single axis. 
One significant practical application for marketers is positioning content relative to competitors; we got the LLM to generate a housing policy for a fictitious mayoral candidate that is quantitatively positioned relative to several existing candidates.<\/p>\n\n\n\n<p>Our <a href=\"https:\/\/info.dacgroup.com\/largelanguagemodels?_ga=2.7635954.1300145646.1701105715-2138258860.1697574270\" target=\"_blank\" rel=\"noreferrer noopener\">study<\/a> demonstrated the model\u2019s ability to handle open content generation tasks with quantitative precision, showcasing its potential in environments where ideas lack strict predefined frameworks.<\/p>\n\n\n\n<p><strong>Attaching standard performance metrics<\/strong><\/p>\n\n\n\n<p>If we bind the conceptual Cartesian position of content with traditional metrics, we can analyze the performance of published content against newly available numeric values. For example, we can study social media metrics (e.g., likes, shares, click-through rates) against conceptual scores assigned by the model for attributes like humor, empathy, and technicality. Through statistical analysis, we can identify the optimal mix of each conceptual attribute for a given context and use that coordinate position to generate new content for enhanced performance.<\/p>\n\n\n\n<p>The innovative combination of conceptual Cartesian mapping and LLMs gives us a new, methodical, precise approach to general content creation.<\/p>\n\n\n\n<p>Businesses can tailor their messaging with precision, ensuring maximum engagement with their target audience whilst positioning their content relative to competitors. Political campaigns can craft nuanced narratives relative to other candidates or polling results. 
Educational institutions can create customized learning materials, enhancing student engagement and comprehension at the individual level.<\/p>\n\n\n\n<p><a href=\"https:\/\/info.dacgroup.com\/largelanguagemodels?_ga=2.7635954.1300145646.1701105715-2138258860.1697574270\" target=\"_blank\" rel=\"noreferrer noopener\">Access the full whitepaper here.<\/a><\/p>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>[SPONSORED POST] Our friends over at DAC recently explored the potential of LLMs to interpret or generate text considering numerous parameters with extraordinary precision. They have demonstrated their ability to evaluate unstructured text blocks and grade them along various scales to great success. Additionally, they have been able to generate specific, tailored content using a complex series of instructions that can be tuned with precision to the same set of scales.\u00a0<\/p>\n","protected":false},"author":10513,"featured_media":32763,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[526,115,182,180,67,268,56,311,1],"tags":[437,1245,1248,95],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Crafting Precision Content Using Large Language Models\u00a0 - insideBIGDATA<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/insidebigdata.com\/2023\/12\/11\/crafting-precision-content-using-large-language-models\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Crafting Precision Content Using Large Language Models\u00a0 - insideBIGDATA\" \/>\n<meta property=\"og:description\" content=\"[SPONSORED POST] Our friends over at DAC 
recently explored the potential of LLMs to interpret or generate text considering numerous parameters with extraordinary precision. They have demonstrated their ability to evaluate unstructured text blocks and grade them along various scales to great success. Additionally, they have been able to generate specific, tailored content using a complex series of instructions that can be tuned with precision to the same set of scales.\u00a0\" \/>\n<meta property=\"og:url\" content=\"https:\/\/insidebigdata.com\/2023\/12\/11\/crafting-precision-content-using-large-language-models\/\" \/>\n<meta property=\"og:site_name\" content=\"insideBIGDATA\" \/>\n<meta property=\"article:publisher\" content=\"http:\/\/www.facebook.com\/insidebigdata\" \/>\n<meta property=\"article:published_time\" content=\"2023-12-11T11:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-12-15T16:43:32+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2284999159_special.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1100\" \/>\n\t<meta property=\"og:image:height\" content=\"550\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Editorial Team\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:site\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Editorial Team\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/insidebigdata.com\/2023\/12\/11\/crafting-precision-content-using-large-language-models\/\",\"url\":\"https:\/\/insidebigdata.com\/2023\/12\/11\/crafting-precision-content-using-large-language-models\/\",\"name\":\"Crafting Precision Content Using Large Language Models\u00a0 - insideBIGDATA\",\"isPartOf\":{\"@id\":\"https:\/\/insidebigdata.com\/#website\"},\"datePublished\":\"2023-12-11T11:00:00+00:00\",\"dateModified\":\"2023-12-15T16:43:32+00:00\",\"author\":{\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\"},\"breadcrumb\":{\"@id\":\"https:\/\/insidebigdata.com\/2023\/12\/11\/crafting-precision-content-using-large-language-models\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/insidebigdata.com\/2023\/12\/11\/crafting-precision-content-using-large-language-models\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/insidebigdata.com\/2023\/12\/11\/crafting-precision-content-using-large-language-models\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/insidebigdata.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Crafting Precision Content Using Large Language Models\u00a0\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/insidebigdata.com\/#website\",\"url\":\"https:\/\/insidebigdata.com\/\",\"name\":\"insideBIGDATA\",\"description\":\"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/insidebigdata.com\/?s={search_term_string}\"},\"query-input\":\"required 
name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\",\"name\":\"Editorial Team\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"caption\":\"Editorial Team\"},\"sameAs\":[\"http:\/\/www.insidebigdata.com\"],\"url\":\"https:\/\/insidebigdata.com\/author\/editorial\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Crafting Precision Content Using Large Language Models\u00a0 - insideBIGDATA","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/insidebigdata.com\/2023\/12\/11\/crafting-precision-content-using-large-language-models\/","og_locale":"en_US","og_type":"article","og_title":"Crafting Precision Content Using Large Language Models\u00a0 - insideBIGDATA","og_description":"[SPONSORED POST] Our friends over at DAC recently explored the potential of LLMs to interpret or generate text considering numerous parameters with extraordinary precision. They have demonstrated their ability to evaluate unstructured text blocks and grade them along various scales to great success. 
Additionally, they have been able to generate specific, tailored content using a complex series of instructions that can be tuned with precision to the same set of scales.\u00a0","og_url":"https:\/\/insidebigdata.com\/2023\/12\/11\/crafting-precision-content-using-large-language-models\/","og_site_name":"insideBIGDATA","article_publisher":"http:\/\/www.facebook.com\/insidebigdata","article_published_time":"2023-12-11T11:00:00+00:00","article_modified_time":"2023-12-15T16:43:32+00:00","og_image":[{"width":1100,"height":550,"url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2284999159_special.jpg","type":"image\/jpeg"}],"author":"Editorial Team","twitter_card":"summary_large_image","twitter_creator":"@insideBigData","twitter_site":"@insideBigData","twitter_misc":{"Written by":"Editorial Team","Est. reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/insidebigdata.com\/2023\/12\/11\/crafting-precision-content-using-large-language-models\/","url":"https:\/\/insidebigdata.com\/2023\/12\/11\/crafting-precision-content-using-large-language-models\/","name":"Crafting Precision Content Using Large Language Models\u00a0 - 
insideBIGDATA","isPartOf":{"@id":"https:\/\/insidebigdata.com\/#website"},"datePublished":"2023-12-11T11:00:00+00:00","dateModified":"2023-12-15T16:43:32+00:00","author":{"@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9"},"breadcrumb":{"@id":"https:\/\/insidebigdata.com\/2023\/12\/11\/crafting-precision-content-using-large-language-models\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/insidebigdata.com\/2023\/12\/11\/crafting-precision-content-using-large-language-models\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/insidebigdata.com\/2023\/12\/11\/crafting-precision-content-using-large-language-models\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/insidebigdata.com\/"},{"@type":"ListItem","position":2,"name":"Crafting Precision Content Using Large Language Models\u00a0"}]},{"@type":"WebSite","@id":"https:\/\/insidebigdata.com\/#website","url":"https:\/\/insidebigdata.com\/","name":"insideBIGDATA","description":"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/insidebigdata.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9","name":"Editorial Team","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","caption":"Editorial 
Team"},"sameAs":["http:\/\/www.insidebigdata.com"],"url":"https:\/\/insidebigdata.com\/author\/editorial\/"}]}},"jetpack_featured_media_url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2284999159_special.jpg","jetpack_shortlink":"https:\/\/wp.me\/p9eA3j-8RR","jetpack-related-posts":[{"id":30414,"url":"https:\/\/insidebigdata.com\/2022\/09\/20\/nvidia-launches-large-language-model-cloud-services\/","url_meta":{"origin":34091,"position":0},"title":"NVIDIA Launches Large Language Model Cloud Services","date":"September 20, 2022","format":false,"excerpt":"NVIDIA today announced two new large language model cloud AI services \u2014 the NVIDIA NeMo Large Language Model Service and the NVIDIA BioNeMo LLM Service \u2014 that enable developers to easily adapt LLMs and deploy customized AI applications for content generation, text summarization, chatbots, code development, as well as protein\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":33298,"url":"https:\/\/insidebigdata.com\/2023\/09\/06\/insidebigdata-ai-news-briefs-9-8-2023\/","url_meta":{"origin":34091,"position":1},"title":"insideBIGDATA AI News Briefs \u2013 9\/8\/2023","date":"September 6, 2023","format":false,"excerpt":"Welcome insideBIGDATA AI News Briefs, our timely new feature bringing you the latest industry insights and perspectives surrounding the field of AI including deep learning, large language models, generative AI, and transformers. 
We\u2019re working tirelessly to dig up the most timely and curious tidbits underlying the day\u2019s most popular technologies.\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/07\/AI-News-Briefs-column-banner.png?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":33975,"url":"https:\/\/insidebigdata.com\/2023\/11\/23\/new-report-the-definitive-guide-to-large-language-models-and-high-performance-marketing-content\/","url_meta":{"origin":34091,"position":2},"title":"New Report: The Definitive Guide to Large Language Models and High-Performance Marketing Content","date":"November 23, 2023","format":false,"excerpt":"Phrasee, a leading innovator in brand language optimization, just released a new white paper \"The Definitive Guide to Large Language Models and High-Performance Marketing Content,\" on how enterprise marketers can build an in-house LLM solution and use it at its full potential.","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2284999159_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":33161,"url":"https:\/\/insidebigdata.com\/2023\/08\/17\/generative-ai-report-deci-releases-powerful-open-source-generative-ai-model-decicoder-to-redefine-code-generation-for-developers\/","url_meta":{"origin":34091,"position":3},"title":"Generative AI Report: Deci Releases Powerful Open-Source Generative AI Model, DeciCoder, to Redefine Code Generation for Developers\u00a0","date":"August 17, 2023","format":false,"excerpt":"Deci, the deep learning company harnessing AI to build AI, released DeciCoder, its inaugural foundation model in generative AI helping users generate programming language code. 
This groundbreaking Large Language Model (LLM), dedicated to code generation with 1 billion parameters and an expansive 2048-context window, surpasses results released in equivalent models\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2313909647_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":32878,"url":"https:\/\/insidebigdata.com\/2023\/07\/25\/video-highlights-generative-ai-with-large-language-models\/","url_meta":{"origin":34091,"position":4},"title":"Video Highlights: Generative AI with Large Language Models","date":"July 25, 2023","format":false,"excerpt":"At an unprecedented pace, Large Language Models like GPT-4 are transforming the world in general and the field of data science in particular. This two-hour training video presentation by Jon Krohn, Co-Founder and Chief Data Scientist at the machine learning company Nebula, introduces deep learning transformer architectures including LLMs.","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2313909647_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":32900,"url":"https:\/\/insidebigdata.com\/2023\/07\/19\/poll-which-company-will-lead-the-llm-pack\/","url_meta":{"origin":34091,"position":5},"title":"POLL: Which Company Will Lead the LLM Pack?","date":"July 19, 2023","format":false,"excerpt":"Since the release of ChatGPT late last year, the world has gone crazy for large language models (LLMs) and generative AI powered by transformers. The biggest players in our industry are now jockeying for prime position in this lucrative space. 
The news cycle is extremely fast-paced and technology is advancing\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/AI_shutterstock_2287025875_special-1.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]}],"_links":{"self":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/34091"}],"collection":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/users\/10513"}],"replies":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/comments?post=34091"}],"version-history":[{"count":0,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/34091\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media\/32763"}],"wp:attachment":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media?parent=34091"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/categories?post=34091"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/tags?post=34091"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}