{"id":25223,"date":"2020-11-11T06:00:00","date_gmt":"2020-11-11T14:00:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=25223"},"modified":"2020-11-09T14:20:26","modified_gmt":"2020-11-09T22:20:26","slug":"why-humans-still-need-to-be-involved-in-language-based-ai","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2020\/11\/11\/why-humans-still-need-to-be-involved-in-language-based-ai\/","title":{"rendered":"Why Humans Still Need to be Involved in Language-Based AI"},"content":{"rendered":"\n<p>New, sophisticated AI models such as OpenAI\u2019s GPT-3 are <a href=\"https:\/\/www.nytimes.com\/2020\/07\/29\/opinion\/gpt-3-ai-automation.html\" target=\"_blank\" rel=\"noreferrer noopener\">making headlines<\/a> for their ability to mimic human-like language. Does this mean humans will be replaced with computers? Not so fast.<\/p>\n\n\n\n<p>Despite the hype, these algorithms still have major flaws. Machines still fall short of understanding the meaning and intent behind human conversation. Not to mention, ethical concerns such as bias in AI still are far from a solution. For these reasons, humans still need to be in the loop in most practical AI applications, especially in nuanced areas such as language.<\/p>\n\n\n\n<p><strong>Humans remain the best way to understand context<\/strong><\/p>\n\n\n\n<p>New machine learning models like GPT-3 are highly complex systems trained on vast amounts of data, which allow them to perform relatively well on a variety of language tasks out-of-the-box. And with just a small amount of examples of a specific task, they can perform very well.<\/p>\n\n\n\n<p>So far, beta testers have had striking results using GPT-3 for many applications, such as writing essays, creating chatbots for historical figures, and even machine translation. 
Although the model was trained predominantly on English data, the researchers behind GPT-3 found that it can translate from French, German, and Romanian to English with surprising accuracy.<\/p>\n\n\n\n<p>It would be convenient if we could use a single AI system like GPT-3 for several tasks at once, such as answering and translating a customer\u2019s question simultaneously. However, translation is essentially a serendipitous side effect of training such a large, powerful model. There is still a long way to go before we can comfortably rely on a model like this to provide customer-facing responses.<\/p>\n\n\n\n<p>OpenAI\u2019s CEO Sam Altman <a href=\"https:\/\/twitter.com\/sama\/status\/1284922296348454913\" target=\"_blank\" rel=\"noreferrer noopener\">said on Twitter<\/a> that despite the hype, GPT-3 \u201cstill has serious weaknesses and sometimes makes very silly mistakes.\u201d GPT-3 experiments are still riddled with errors, some more egregious than others. Users don\u2019t always get desirable answers on the first try and therefore need to adjust their prompts until they do. Machine learning algorithms cannot be expected to be 100% accurate, so humans are still required to differentiate acceptable responses from unacceptable ones.<\/p>\n\n\n\n<p><strong>The power of context in translation<\/strong><\/p>\n\n\n\n<p>Part of determining what is acceptable is judging how language is interpreted in context, something humans excel at. We effortlessly know that if we ask a friend, \u201cDo you like to cook?\u201d and her response is \u201cI like to eat,\u201d she probably doesn\u2019t enjoy cooking. 
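<\/p>\n\n\n\n<p>Judgments like these are why practical deployments keep a reviewer in the loop. A minimal, hypothetical sketch of such a gate, where the confidence threshold and the list of phrasings a customer should never see are illustrative assumptions:<\/p>\n\n\n\n

```python
# Hypothetical human-in-the-loop gate: low-confidence or overly blunt
# machine responses go to a reviewer instead of straight to the customer.
# The 0.9 threshold and the blunt-phrase list are illustrative assumptions.

BLUNT_PHRASES = ('give me your credit card',)

def route_response(text, confidence, threshold=0.9):
    lowered = text.lower()
    if confidence < threshold:
        return 'human_review'
    if any(phrase in lowered for phrase in BLUNT_PHRASES):
        return 'human_review'
    return 'send_to_customer'
```

\n\n\n\n<p>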
Context is also the reason we would say, \u201cCould you please provide your payment details?\u201d to a customer rather than, \u201cGive me your credit card number,\u201d even though the two sentences have the same intent.<\/p>\n\n\n\n<p>In settings where there is little margin for error, such as <a href=\"https:\/\/unbabel.com\/blog\/machine-translation-customer-service\/\" target=\"_blank\" rel=\"noreferrer noopener\">real-time customer service chats<\/a>, humans occasionally need to correct machines\u2019 mistakes. Local dialects and phrases can easily be misinterpreted by machine translation. It\u2019s also critical that a translation system adheres to localized cultural norms \u2014 for example, speaking formally in a business setting in countries like Germany or Japan. So, for now, we still need humans to process the nuances of language.<\/p>\n\n\n\n<p><strong>GPT-3 is impressive, but still biased<\/strong><\/p>\n\n\n\n<p>Going beyond questions of context, humans also need to be involved in the development of these language models for ethical reasons. We know <a href=\"https:\/\/unbabel.com\/blog\/gender-bias-artificial-intelligence\/\" target=\"_blank\" rel=\"noreferrer noopener\">AI systems are often biased<\/a>, and GPT-3 is no exception. In the GPT-3 paper, the authors conduct a preliminary analysis of the model\u2019s shortcomings around fairness, bias, and representation, running experiments related to the model\u2019s perception of gender, race, and religion.<\/p>\n\n\n\n<p>After giving the model prompts such as \u201cHe was very\u201d, \u201cShe was very\u201d, \u201cHe would be described as\u201d, and so on, the authors generated many samples of text and looked at the most common adjectives and adverbs present for each gender. 
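<\/p>\n\n\n\n<p>The probing procedure described above can be sketched in a few lines: collect many completions per prompt, then tally the most frequent descriptive words in each group. The completions below are hardcoded stand-ins for actual model samples, chosen only to mirror the reported pattern.<\/p>\n\n\n\n

```python
from collections import Counter

# Sketch of the bias probe: tally descriptive words across many
# completions per gendered prompt. These lists are hardcoded stand-ins
# for actual model samples, not real GPT-3 output.

samples = {
    'She was very': ['beautiful', 'gorgeous', 'petite', 'beautiful'],
    'He was very': ['personable', 'large', 'lazy', 'large'],
}

top_words = {
    prompt: [word for word, count in Counter(words).most_common(2)]
    for prompt, words in samples.items()
}
```

\n\n\n\n<p>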
They noted that females are more often described with words related to their appearance (\u201cbeautiful,\u201d \u201cgorgeous,\u201d \u201cpetite\u201d), whereas males are described with more varied terms (\u201cpersonable,\u201d \u201clarge,\u201d \u201clazy\u201d). In examining the model\u2019s \u201cunderstanding\u201d of race and religion, the authors conclude that \u201cinternet-trained models have internet-scale biases; models tend to reflect stereotypes present in their training data.\u201d<\/p>\n\n\n\n<p>None of this is novel or surprising, but investigating, identifying, and measuring biases in AI systems (as the GPT-3 authors did) are necessary first steps toward the elimination of these biases.<\/p>\n\n\n\n<p><strong>Keeping humans in the machine learning loop<\/strong><\/p>\n\n\n\n<p>To make tangible progress in mitigating these biases and their impact, we need humans. This goes beyond having them correct errors, augment datasets, and retrain models. Researchers from UMass Amherst and Microsoft <a href=\"https:\/\/arxiv.org\/abs\/2005.14050\" target=\"_blank\" rel=\"noreferrer noopener\">analyzed nearly 150 papers related to \u201cbias<\/a>\u201d in AI language processing, and found that many have vague motivations and lack normative reasoning. 
Often, they do not explicitly state how, why, and to whom the \u201cbiases\u201d are harmful.<\/p>\n\n\n\n<p>To understand the real impact of biased AI systems, they argue, we must engage with literature that \u201cexplores the relationship between language and social hierarchies.\u201d We must also engage with communities whose lives are affected by AI and language systems.<\/p>\n\n\n\n<p>After all, language is a human phenomenon, and as practitioners of AI, we should consider not only how to avoid offensive-sounding machine-generated text, but also how our models interact with and impact the societies in which we live.<\/p>\n\n\n\n<p>In addition to bias, major concerns continue to surface about the model\u2019s potential for automated <a href=\"https:\/\/syncedreview.com\/2020\/08\/04\/as-its-gpt-3-model-wows-the-world-openai-ceo-suggests-the-hype-is-way-too-much\/\" target=\"_blank\" rel=\"noreferrer noopener\">toxic language generation<\/a> and <a href=\"https:\/\/syncedreview.com\/2020\/08\/04\/as-its-gpt-3-model-wows-the-world-openai-ceo-suggests-the-hype-is-way-too-much\/\" target=\"_blank\" rel=\"noreferrer noopener\">fake news propagation<\/a>, as well as the <a href=\"https:\/\/syncedreview.com\/2020\/08\/04\/as-its-gpt-3-model-wows-the-world-openai-ceo-suggests-the-hype-is-way-too-much\/\" target=\"_blank\" rel=\"noreferrer noopener\">environmental impact<\/a> of the raw computing power needed to build larger and larger machine learning models.<\/p>\n\n\n\n<p>Here the need for humans isn\u2019t an issue of model performance, but of ethics. Who, if not humans, will ensure such technology is used responsibly?<\/p>\n\n\n\n<p><strong>GPT-3 can\u2019t say, \u201cI don\u2019t know\u201d<\/strong><\/p>\n\n\n\n<p>If the goal is to train AI to match human intelligence, or at least perfectly mimic human language, perhaps the largest issue is that language models trained solely on text have no grounding in the real world (although this is an active research area). 
In other words, <a href=\"https:\/\/unbabel.com\/blog\/ai-talking-understanding\/\" target=\"_blank\" rel=\"noreferrer noopener\">they don\u2019t truly \u201cknow\u201d what they\u2019re saying<\/a>. Their \u201cknowledge\u201d is limited to the text they are trained on.<\/p>\n\n\n\n<p>So, while GPT-3 can accurately tell you who the U.S. president was in 1955, it doesn\u2019t know that a toaster is heavier than a pencil. It also thinks the correct answer to \u201c<a rel=\"noreferrer noopener\" href=\"https:\/\/lacker.io\/ai\/2020\/07\/06\/giving-gpt-3-a-turing-test.html\" target=\"_blank\">How many rainbows does it take to jump from Hawaii to seventeen?<\/a>\u201d is two. Whether or not machines can infer meaning from pure text is <a href=\"https:\/\/medium.com\/huggingface\/learning-meaning-in-natural-language-processing-the-semantics-mega-thread-9c0332dfe28e\">up for debate<\/a>, but these examples suggest that the answer is no \u2014 at least for now. To use AI-based language systems responsibly, we still need humans to be closely involved.<\/p>\n\n\n\n<p><strong>About the Author<\/strong><\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"alignleft size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"125\" height=\"125\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2020\/11\/Christine-e1556290419559-768x769-1.jpg\" alt=\"\" class=\"wp-image-25224\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2020\/11\/Christine-e1556290419559-768x769-1.jpg 125w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2020\/11\/Christine-e1556290419559-768x769-1-110x110.jpg 110w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2020\/11\/Christine-e1556290419559-768x769-1-50x50.jpg 50w\" sizes=\"(max-width: 125px) 100vw, 125px\" \/><\/figure><\/div>\n\n\n\n<p>Christine Maroti, AI Research Engineer at <a rel=\"noreferrer noopener\" href=\"https:\/\/unbabel.com\/\" target=\"_blank\">Unbabel<\/a>, is originally from New York, and is often 
referred to as Tina, Tininha, or Tuna. She moved to Lisbon in the summer of 2018 to work in Applied AI at Unbabel. When she&#8217;s not training translation models, Tina enjoys scouring her new country for the best croquetes de carne.<\/p>\n\n\n\n<p><em>Sign up for the free insideBIGDATA\u00a0<a rel=\"noreferrer noopener\" href=\"http:\/\/insidebigdata.com\/newsletter\/\" target=\"_blank\">newsletter<\/a>.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this contributed article, Christine Maroti, AI Research Engineer at Unbabel, believes that humans still need to be in the loop in most practical AI applications, especially in nuanced areas such as language. Despite the hype, these algorithms still have major flaws. Machines still fall short of understanding the meaning and intent behind human conversation. Not to mention, ethical concerns such as bias in AI still are far from a solution.<\/p>\n","protected":false}}