Data Science Bows Before Prompt Engineering and Few Shot Learning

insideBIGDATA, March 13, 2023

<p>While the media, the general public, and practitioners of Artificial Intelligence are delighting in the newfound possibilities of ChatGPT, most are missing what this application of natural language technologies means for data science.</p>

<p>They have failed to see how far the discipline has come, and what it now means for everyday users that previously arcane, advanced analytics techniques have become normalized.</p>

<p>According to Abhishek Gupta, Principal Data Scientist and Engineer at <a href="https://www.talentica.com/" target="_blank" rel="noreferrer noopener">Talentica Software</a>, the language model underlying ChatGPT is <a href="https://arxiv.org/abs/2212.05206" target="_blank" rel="noreferrer noopener">GPT-3.5</a>. This model is more utilitarian than ChatGPT.
It is more proficient at generating software code and is applicable to a range of natural language tasks beyond question answering and language generation, including document classification, summarization, and analysis of textual organization.</p>

<p>Most of all, this language model is extremely amenable to prompt engineering and few-shot learning, frameworks that render all but obsolete data science’s previous limitations around feature engineering and training data volumes.</p>

<p>By tailoring GPT-3.5 with prompt engineering and few-shot learning, “Common tasks don’t require a data scientist,” Gupta pointed out. “A common person can do them just by knowing how to create the prompt and, to some extent, knowing some knowledge about GPT-3.5.”</p>

<p><strong>Prompt Engineering</strong></p>

<p><a href="https://arxiv.org/abs/2302.11382" target="_blank" rel="noreferrer noopener">Prompt engineering</a> epitomizes how GPT-3.5 has revolutionized data science by making it accessible to non-technical users. Before prompt engineering was possible with this language model, expensive, hard-to-find data scientists predominantly had to build an individual model for each application of natural language technologies.</p>

<p>But with the availability of GPT-3.5, “We can speed time-to-market now that we have this single model that we can do more intelligent prompt engineering over,” Gupta revealed. “And, it’s the same model that we can use for different tasks.” Thus, no matter how disparate the tasks (reading emails and writing responses, say, or summarizing a research article in five lines), users simply have to engineer the prompt well enough to teach the model to perform them.</p>

<p>“A prompt is a certain command we give to the model,” Gupta explained. “And, in modeling the commands, we also give it certain examples which can identify patterns. Based on these commands and patterns, the model can understand what the task is all about.” For instance, a user could simply give the model a specific text followed by TL;DR (Too Long; Didn’t Read), and the model would understand that the task is text summarization, then perform it.</p>

<p><strong>Prompt Engineering Stores</strong></p>

<p>Prompt engineering’s chief advantage is that it replaces the need to engineer features for individual models trained on a single task. Feature engineering is often time-consuming, arduous, and demanding of specialized statistical and coding knowledge. Conversely, any user can issue a natural language prompt, rendering this aspect of model tuning accessible to a much broader user base, including laymen. Its effectiveness hinges on <a href="https://arxiv.org/abs/2211.01910" target="_blank" rel="noreferrer noopener">creating the right prompt</a>.</p>

<p>“If you give a good prompt, the output will be much better than a casually given prompt,” Gupta advised. “There are certain words that will help the model understand better about the task compared to other words. There are certain automated ways to create these prompts.”</p>

<p>A best practice for prompt engineering is to employ a prompt database, which is roughly equivalent to a feature store in that it houses prompts that can be reused and modified for different purposes. “People have come up with a database of prompts which can be used for certain tasks which are usually commonly known,” Gupta mentioned.</p>

<p><strong>Few Shot Learning</strong></p>

<p>In addition to giving commands via prompts, organizations can also provide examples in prompts to train GPT-3.5 for a given task.</p>
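The article names no concrete tooling, but the TL;DR cue and the prompt-database idea can be sketched in a few lines of Python. The store name and task keys below are hypothetical illustrations, not Gupta's actual system: reusable templates play the role a feature store plays for conventional models.

```python
# Hypothetical "prompt store": reusable templates keyed by task,
# analogous to a feature store for conventional ML models.
PROMPT_STORE = {
    # Trailing "TL;DR:" cues GPT-3.5-style models to summarize the text above it.
    "summarize": "{text}\n\nTL;DR:",
    # A labeling cue turns the same model into a document classifier.
    "classify": "Classify the following document as NEWS, REVIEW, or LEGAL:\n\n{text}\n\nLabel:",
}

def build_prompt(task: str, **fields: str) -> str:
    """Fill a stored template; the resulting string is what would be
    sent to the model provider's completion endpoint."""
    return PROMPT_STORE[task].format(**fields)
```

A call such as `build_prompt("summarize", text=article_text)` yields the full prompt string; sending it to the model and reading back the completion is left to whatever client library the organization uses.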
<p>This practice is part of the <a href="https://arxiv.org/abs/2212.05206" target="_blank" rel="noreferrer noopener">few-shot learning phenomenon</a>, in which the amount of training data needed to teach a model is reduced to a few examples (few-shot learning), a single example (one-shot learning), or none at all (zero-shot learning). This reduction is remarkable compared to the volumes of training data, and the annotations that data requires, that can otherwise hamper machine learning projects.</p>

<p>In this case, one “just gives some examples of the patterns to the model and it auto-generates similar kinds of patterns for the solution’s task,” Gupta commented. If the task is for the system to identify the capital of every country, the user could give the example that New Delhi is the capital of India before asking for the capitals of other countries. This one-shot example would train the system; then, “by giving the pattern to the model you can ask any question based on that pattern,” Gupta concluded.</p>

<p><strong>Multitask Learning</strong></p>

<p>Although such an example may seem trivial, it attests to the ease of use and the lack of specialized knowledge and technical skill required to tune GPT-3.5 for almost any natural language task.</p>
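The capital-city pattern described above can be sketched as a small prompt builder. This is an illustrative assumption about how such prompts are typically formatted, not a method prescribed in the article: each example pair establishes the question-and-answer pattern, and the final unanswered question invites the model to complete it.

```python
def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Few-shot prompt: example (country, capital) pairs establish the
    pattern; the trailing unanswered question is left for the model."""
    blocks = [
        f"Q: What is the capital of {country}?\nA: {capital}"
        for country, capital in examples
    ]
    blocks.append(f"Q: What is the capital of {query}?\nA:")
    return "\n\n".join(blocks)

# One-shot version of the article's example: New Delhi / India as the
# single demonstration, then a new country as the query.
prompt = build_few_shot_prompt([("India", "New Delhi")], "France")
```

With zero example pairs this degenerates to a zero-shot prompt; with one, it is the one-shot case the article describes.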
<p>Ultimately, the utilitarian nature of GPT-3.5 evinces the effectiveness of multitask learning and the expanding accessibility of advanced machine learning models.</p>

<p><strong>About the Author</strong></p>

<p><em>Jelani Harper is an editorial consultant serving the information technology market. He specializes in data-driven applications focused on semantic technologies, data governance, and analytics.</em></p>