{"id":32578,"date":"2023-06-08T06:00:00","date_gmt":"2023-06-08T13:00:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=32578"},"modified":"2023-06-08T09:02:22","modified_gmt":"2023-06-08T16:02:22","slug":"the-science-and-practical-applications-of-word-embeddings","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2023\/06\/08\/the-science-and-practical-applications-of-word-embeddings\/","title":{"rendered":"The Science and Practical Applications of Word Embeddings\u00a0"},"content":{"rendered":"\n<p>Word embeddings are directly responsible for many of the exponential advancements natural language technologies have made over the past couple years. They\u2019re foundational to the functionality of popular Large Language Models like ChatGPT and other GPT iterations. These mathematical representations also have undeniable implications for textual applications of Generative AI.<\/p>\n\n\n\n<p>From a pragmatic perspective, word embeddings are pivotal to unlocking a trove of business value from contemporary applications of Natural Language Understanding, cognitive search, and Natural Language Generation. When paired with the Generative AI capacity of some of the said language models, they drastically expedite the time required for everything from backend data management to customer-facing applications.&nbsp;&nbsp;<\/p>\n\n\n\n<p>According to&nbsp;<a href=\"https:\/\/www.pega.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Pega<\/a>&nbsp;CTO Don Schuerman, the results of these practical Generative AI use cases are transformational. Moreover, they\u2019re horizontally applicable across organizations and deployments, underpinning the basics of workflow management and application development in general.<\/p>\n\n\n\n<p>\u201cWe can say, \u2018what are the steps of a workflow to manage a loan application or onboard a new member of a healthcare plan?\u2019\u201d Schuerman said. \u201cGenerative AI will say, \u2018here\u2019s what the common processes for that would look like and here\u2019s the data model for it.\u2019\u201d<\/p>\n\n\n\n<p>The ability of language understanding models to properly comprehend such user requests and elicit the correct responses hinges on the efficacy of word embeddings.&nbsp;&nbsp;<\/p>\n\n\n\n<p><strong>Mathematical Vectors<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/arxiv.org\/abs\/1901.09069\" target=\"_blank\" rel=\"noreferrer noopener\">Word embeddings<\/a>&nbsp;are a facet of representation learning, which provides the statistical foundation for many contemporary models for natural language technologies. According to&nbsp;<a href=\"https:\/\/franz.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Franz<\/a>&nbsp;CEO Jans Aasman, these embeddings represent words as vectors. \u201cA vector is a series of elements, usually numbers,\u201d Aasman commented. \u201cFor example, you can have a vector of 10 numbers.\u201d These mathematical representations include semantic understanding and, when comparing embeddings to one another in what\u2019s oftentimes a high-dimensionality space, context. Words or phrases with similar meaning are represented closer to one another than those with dissimilar meanings are.<\/p>\n\n\n\n<p>Aasman said representing words as numerical vectors enables machine learning models to ascribe various weights to them. 
Aasman said representing words as numerical vectors enables machine learning models to ascribe various weights to them. In a specific text, which might include a prompt or, in the case of models like ChatGPT, the contents of the internet itself, "What you can do is take a window of like plus or minus five words or, like ChatGPT, plus or minus 500 words from the word you're interested in," Aasman disclosed. "You create a weight for every other word to see to what extent it influences the next word."
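The article does not prescribe an implementation, but the windowing idea can be sketched with plain co-occurrence counting: tally which words appear within a fixed window of a target word. This is a crude, hypothetical stand-in for the learned weights Aasman describes, not how ChatGPT actually computes them; the example sentence is invented.

```python
from collections import Counter, defaultdict

def window_cooccurrences(tokens: list[str], window: int = 5) -> dict:
    """For each word, count the words appearing within +/- `window` positions.
    These raw counts are the signal that models turn into per-word weights
    for predicting the next word."""
    counts = defaultdict(Counter)
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[target][tokens[j]] += 1
    return counts

tokens = "the bank raised interest rates so the bank customers complained".split()
print(window_cooccurrences(tokens)["bank"].most_common(3))
```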
**The Prompt Engineering Effect**

The applicability of word embeddings to prompt engineering, the practice of phrasing the tasks users want generative AI models to respond to, is critical because it allows models to understand what users are asking. Doing so is vital for accurate natural language technology applications such as question answering, intelligent search, and more. In this context, via word embeddings, "You get a whole bunch of words around the word you're interested in, and you can see to what extent it predicts the words after your word," Aasman noted.

The generative AI tasks prompt engineering initiates are impressive. Some involve what Gartner has termed [synthetic data](https://www.gartner.com/en/newsroom/press-releases/2022-06-22-is-synthetic-data-the-future-of-ai), which, for example, might involve "asking Generative AI to make me some sample data so I can test this application quickly," Schuerman revealed. The same concept easily extends to generating training data (or annotations for such data) for supervised learning models. "Every developer knows the experience of filling out a spreadsheet thinking of their best friend's dog's name to fill it in with different data for testing," Schuerman observed. "That wasted time is now gone." Users can also prompt Generative AI to write code, devise data models and the fields in them, and create individual procedures for workflows or applications.
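As a hedged sketch of the synthetic-data pattern Schuerman describes: a small helper that builds such a prompt, leaving the actual model call to whatever LLM client the reader uses. The helper name, table, and column names are all hypothetical, loosely echoing the loan-application example above.

```python
def synthetic_data_prompt(table: str, columns: list[str], rows: int = 5) -> str:
    """Build a prompt asking a generative model for synthetic test data.
    Sending it to a model is left to whatever client you already use."""
    return (
        f"Generate {rows} rows of realistic but entirely fictional sample data "
        f"for a '{table}' table as CSV with columns: {', '.join(columns)}. "
        f"Output only the CSV."
    )

prompt = synthetic_data_prompt(
    "loan_applications", ["loan_id", "applicant_name", "amount", "status"]
)
print(prompt)  # the model's response stands in for hand-typed test rows
```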
**Model Restrictions**

Word embeddings also affect the ability to tailor language generation models to select responses from a particular source. Because they provide the means by which models understand what users are asking for, these embeddings are amenable to prompts that focus on a particular corpus or knowledge base. "A common pattern in a GPT use case is if you want to restrict the model, or give the model a certain set of data, you can actually bake it into the prompt," Schuerman explained. For example, if an organization wants a language model to use developer documentation for question answering, one of the first steps is to classify that text into discrete concepts, words, phrases, or sections.

According to Schuerman, [GPT](https://arxiv.org/abs/2303.08774) is useful for providing those classifications. Those components then become part of the word embedding process; it's incumbent upon users to include those classifications in their prompts. This technique enables users to "do two things," Schuerman specified. "It allows us to include the most up-to-date information in responses, but also ensure that we're restricting GPT so it doesn't go to some other source that we don't trust to get this answer." This same methodology can provide timely question answering for customer service documentation, IT help desks, or searching any specific corpus for conversational responses in real time.
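A toy sketch of this bake-it-into-the-prompt pattern follows: rank corpus passages by embedding similarity to the question, then place the best matches in the prompt with an explicit restriction. `toy_embed` is a crude hash-based stand-in for a real embedding model, and the documentation passages are invented.

```python
import numpy as np

def toy_embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a real embedding model: hash words into a
    fixed-size vector. Production code would call an actual model here."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    return v / (np.linalg.norm(v) or 1.0)

def build_restricted_prompt(question: str, passages: list[str], top_k: int = 2) -> str:
    """Rank passages by similarity to the question, then bake the winners
    into the prompt along with an explicit source restriction."""
    q = toy_embed(question)
    ranked = sorted(passages, key=lambda p: -float(toy_embed(p) @ q))
    context = "\n\n".join(ranked[:top_k])
    return ("Answer using ONLY the documentation below. If the answer is not "
            f"there, say so.\n\n{context}\n\nQuestion: {question}")

docs = ["Resetting a password requires the admin console.",
        "Billing disputes are handled by the finance team.",
        "The API rate limit is 100 requests per minute."]
print(build_restricted_prompt("How do I reset a password?", docs, top_k=1))
```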
**Manifold Layout Techniques**

Oftentimes, word embeddings are vectorized in high-dimensional spaces. Depending on the dimensionality of a particular embedding or series of embeddings, the sheer number of dimensions may become cumbersome, slowing computations and delaying Natural Language Processing. There are several dimensionality reduction techniques, involving supervised and unsupervised learning, that can redress this issue.

[Indico Data](https://indicodata.ai/) CTO Slater Victoroff characterized "manifold layout techniques" as one such approach for bringing an embedding from a higher-dimensional space to a lower-dimensional one. The benefit of doing so is that it largely preserves the semantics and relationships found in the original space, in such a way "that you don't lose a lot," Aasman indicated. Manifold techniques are frequently employed in contemporary applications of word embeddings to reduce the dimensionality involved, which can accelerate computations and NLP results.
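The article names the family of techniques rather than a specific tool, but scikit-learn's t-SNE is one widely available manifold layout method. A minimal sketch, using random arrays as stand-ins for real 300-dimensional word embeddings:

```python
import numpy as np
from sklearn.manifold import TSNE  # t-SNE, one common manifold layout technique

# Stand-in for 50 word embeddings of dimension 300 (real ones come from a model).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(50, 300))

# Project to 2 dimensions while trying to preserve local neighborhoods, so
# words that were close in 300-D stay close in 2-D.
layout = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(embeddings)
print(layout.shape)  # (50, 2)
```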
Team\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"caption\":\"Editorial Team\"},\"sameAs\":[\"http:\/\/www.insidebigdata.com\"],\"url\":\"https:\/\/insidebigdata.com\/author\/editorial\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"The Science and Practical Applications of Word Embeddings\u00a0 - insideBIGDATA","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/insidebigdata.com\/2023\/06\/08\/the-science-and-practical-applications-of-word-embeddings\/","og_locale":"en_US","og_type":"article","og_title":"The Science and Practical Applications of Word Embeddings\u00a0 - insideBIGDATA","og_description":"In this contributed article, editorial consultant Jelani Harper takes a look at how word embeddings are directly responsible for many of the exponential advancements natural language technologies have made over the past couple years. They\u2019re foundational to the functionality of popular Large Language Models like ChatGPT and other GPT iterations. These mathematical representations also have undeniable implications for textual applications of Generative AI.","og_url":"https:\/\/insidebigdata.com\/2023\/06\/08\/the-science-and-practical-applications-of-word-embeddings\/","og_site_name":"insideBIGDATA","article_publisher":"http:\/\/www.facebook.com\/insidebigdata","article_published_time":"2023-06-08T13:00:00+00:00","article_modified_time":"2023-06-08T16:02:22+00:00","og_image":[{"width":300,"height":200,"url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/04\/ChatGPT_shutterstock_2249988847_small.png","type":"image\/png"}],"author":"Editorial Team","twitter_card":"summary_large_image","twitter_creator":"@insideBigData","twitter_site":"@insideBigData","twitter_misc":{"Written by":"Editorial Team","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/insidebigdata.com\/2023\/06\/08\/the-science-and-practical-applications-of-word-embeddings\/","url":"https:\/\/insidebigdata.com\/2023\/06\/08\/the-science-and-practical-applications-of-word-embeddings\/","name":"The Science and Practical Applications of Word Embeddings\u00a0 - insideBIGDATA","isPartOf":{"@id":"https:\/\/insidebigdata.com\/#website"},"datePublished":"2023-06-08T13:00:00+00:00","dateModified":"2023-06-08T16:02:22+00:00","author":{"@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9"},"breadcrumb":{"@id":"https:\/\/insidebigdata.com\/2023\/06\/08\/the-science-and-practical-applications-of-word-embeddings\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/insidebigdata.com\/2023\/06\/08\/the-science-and-practical-applications-of-word-embeddings\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/insidebigdata.com\/2023\/06\/08\/the-science-and-practical-applications-of-word-embeddings\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/insidebigdata.com\/"},{"@type":"ListItem","position":2,"name":"The Science and Practical Applications of Word Embeddings\u00a0"}]},{"@type":"WebSite","@id":"https:\/\/insidebigdata.com\/#website","url":"https:\/\/insidebigdata.com\/","name":"insideBIGDATA","description":"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/insidebigdata.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9","name":"Editorial Team","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","caption":"Editorial Team"},"sameAs":["http:\/\/www.insidebigdata.com"],"url":"https:\/\/insidebigdata.com\/author\/editorial\/"}]}},"jetpack_featured_media_url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/04\/ChatGPT_shutterstock_2249988847_small.png","jetpack_shortlink":"https:\/\/wp.me\/p9eA3j-8ts","jetpack-related-posts":[{"id":27144,"url":"https:\/\/insidebigdata.com\/2021\/09\/17\/the-next-wave-of-cognitive-analytics-graph-aware-machine-learning\/","url_meta":{"origin":32578,"position":0},"title":"The Next Wave of Cognitive Analytics: Graph Aware Machine Learning","date":"September 17, 2021","format":false,"excerpt":"In this contributed article, editorial consultant Jelani Harper discusses a number of compelling and timely topics including manifold learning, graph embeddings, and cognitive computing.","rel":"","context":"In &quot;Analytics&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":33512,"url":"https:\/\/insidebigdata.com\/2023\/09\/25\/timescale-vector-launches-to-enable-developers-to-build-production-ai-applications-at-scale-with-postgresql\/","url_meta":{"origin":32578,"position":1},"title":"Timescale Vector Launches to Enable Developers to Build Production AI Applications at Scale With PostgreSQL","date":"September 25, 
2023","format":false,"excerpt":"Timescale, the cloud database company, announced the launch of Timescale Vector, enabling developers to build production AI applications at scale with PostgreSQL. With Timescale Vector, which sits atop Timescale\u2019s production-grade cloud PostgreSQL platform, developers can now leverage a single platform for managing relational data, vector embeddings, time-series data, analytics and\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/09\/AI_data_storage_shutterstock_1107715973_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":32902,"url":"https:\/\/insidebigdata.com\/2023\/07\/22\/generative-ai-report-commandbar-releases-ai-powered-helphub-to-overlay-chatgpt-onto-any-product\/","url_meta":{"origin":32578,"position":2},"title":"Generative AI Report: CommandBar Releases AI-Powered HelpHub to Overlay ChatGPT Onto Any Product","date":"July 22, 2023","format":false,"excerpt":"Welcome to the Generative AI Report, a new feature here on insideBIGDATA with a special focus on all the new applications and integrations tied to generative AI technologies. We\u2019ve been receiving so many cool news items relating to applications centered on large language models, we thought it would be a\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/07\/ChatGPT_shutterstock_2249988847_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":32891,"url":"https:\/\/insidebigdata.com\/2023\/07\/19\/generative-ai-report-qlik-debuts-suite-of-openai-connectors-bringing-power-of-generative-ai-directly-into-the-qlik-analytics-experience\/","url_meta":{"origin":32578,"position":3},"title":"Generative AI Report: Qlik Debuts Suite of OpenAI Connectors, Bringing Power of Generative AI Directly into the Qlik Analytics Experience","date":"July 19, 2023","format":false,"excerpt":"Welcome to the Generative AI Report, a new feature here on insideBIGDATA with a special focus on all the new applications and integrations tied to generative AI technologies. We\u2019ve been receiving so many cool news items relating to applications centered on large language models, we thought it would be a\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2313909647_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":32095,"url":"https:\/\/insidebigdata.com\/2023\/04\/12\/latest-version-of-zilliz-cloud-aims-to-cure-ai-hallucinations\/","url_meta":{"origin":32578,"position":4},"title":"Latest Version of Zilliz Cloud Aims to Cure AI \u2018Hallucinations\u2019","date":"April 12, 2023","format":false,"excerpt":"Zilliz Cloud is the managed service from\u00a0Zilliz, the inventors of Milvus, the open source vector database used by more than 1,000 enterprises around the world. It\u2019s purpose-built for AI and other applications powered by unstructured data. 
It represents data as high-dimensional vectors, or embeddings \u2014 the kind generated by machine-learning\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":30964,"url":"https:\/\/insidebigdata.com\/2022\/11\/28\/2023-trends-in-artificial-intelligence-and-machine-learning-generative-ai-unfolds\/","url_meta":{"origin":32578,"position":5},"title":"2023 Trends in Artificial Intelligence and Machine Learning: Generative AI Unfolds\u00a0\u00a0","date":"November 28, 2022","format":false,"excerpt":"In this contributed article, editorial consultant Jelani Harper offers his perspectives around 2023 trends for the boundless potential of generative Artificial Intelligence\u2014the variety of predominantly advanced machine learning that analyzes content to produce strikingly similar new content.","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2018\/09\/artificial-intelligence-3382507_640.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]}],"_links":{"self":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/32578"}],"collection":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/users\/10513"}],"replies":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/comments?post=32578"}],"version-history":[{"count":0,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/32578\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media\/32245"}],"wp:attachment":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media?parent=32578"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/categories?post=32578"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/tags?post=32578"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}