{"id":25864,"date":"2021-03-31T06:00:00","date_gmt":"2021-03-31T13:00:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=25864"},"modified":"2021-04-01T09:36:35","modified_gmt":"2021-04-01T16:36:35","slug":"gaining-the-enterprise-edge-in-ai-products","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2021\/03\/31\/gaining-the-enterprise-edge-in-ai-products\/","title":{"rendered":"Gaining the Enterprise Edge in AI Products"},"content":{"rendered":"\n<p>The AI economy will benefit incumbents \u2013 if they can leverage their proprietary data. To prepare for \u2018<a href=\"https:\/\/www.technologyreview.com\/2017\/05\/12\/151722\/nvidia-ceo-software-is-eating-the-world-but-ai-is-going-to-eat-software\/\" target=\"_blank\" rel=\"noreferrer noopener\">AI eating software<\/a>,\u2019 enterprises need to standardize data collection processes across the organization.<\/p>\n\n\n\n<p><strong>AI is the next commodity<\/strong><\/p>\n\n\n\n<p>Last June, OpenAI released <a href=\"https:\/\/openai.com\/blog\/openai-api\/\" target=\"_blank\" rel=\"noreferrer noopener\">GPT-3<\/a>, their newest text-generating AI model. As seen in the <a href=\"https:\/\/gpt3examples.com\/#examples\" target=\"_blank\" rel=\"noreferrer noopener\">deluge of Twitter demos<\/a>, GPT-3 works so well that people have generated <a href=\"https:\/\/twitter.com\/ChinyaSuhail\/status\/1287110006370836480\" target=\"_blank\" rel=\"noreferrer noopener\">text-based DevOps pipelines<\/a>, <a href=\"https:\/\/twitter.com\/rajnishkumar\/status\/1288502875455475712\" target=\"_blank\" rel=\"noreferrer noopener\">complex SQL queries<\/a>, <a href=\"https:\/\/twitter.com\/jsngr\/status\/1294635175222157313\" target=\"_blank\" rel=\"noreferrer noopener\">Figma designs<\/a>, and even <a href=\"https:\/\/twitter.com\/sharifshameem\/status\/1282676454690451457\" target=\"_blank\" rel=\"noreferrer noopener\">code<\/a>. 
While the examples are obviously cherry-picked, this still represents powerful out-of-the-box functionality.<\/p>\n\n\n\n<p>This baseline predictive power indicates rapidly changing market dynamics. The AI capabilities of OpenAI, public clouds, and even <a href=\"https:\/\/www.eleuther.ai\/projects\/gpt-neo\/\" target=\"_blank\" rel=\"noreferrer noopener\">open-source variants<\/a> are rapidly evolving. Regardless of the ultimate vendor, many predictive tasks will become easily accessible via some \u2018Open\u2019 AI API in the coming years. If prediction succeeds compute (as compute succeeded electricity), the new base unit of the enterprise becomes machine intelligence.<\/p>\n\n\n\n<p><strong>This commoditization would fundamentally shift the market dynamics of industries built on AI platforms.<\/strong> Specifically, AI products will no longer be differentiated by many of their economic moats. Rather than technical differentiation from models or algorithms, the competitive edge will now come from proprietary data sources and the speed at which companies can engineer features.<\/p>\n\n\n\n<p><strong>Technical (in)differentiation: <\/strong>10x products build technology moats and win markets. But if two companies built chatbots on the same powerful (generic) API, there would likely be 10%, not 10x, differences in technology.<\/p>\n\n\n\n<p>Strong AI APIs limit technological differentiation because it becomes dramatically harder to improve much beyond baseline performance. In fact, many fine-tuned GPT-2 models <a href=\"https:\/\/www.allencheng.com\/starting-a-business-around-gpt-3-is-a-bad-idea\/\" target=\"_blank\" rel=\"noreferrer noopener\">perform worse<\/a> than an untuned GPT-3 at their specific task.<\/p>\n\n\n\n<p><strong>(Dis)economies of&nbsp;scale: <\/strong>Companies benefit from scale because of fixed infrastructural and hardware costs. 
But adding AI to a product introduces variable costs for cloud infrastructure and model training.<\/p>\n\n\n\n<p>If AI is consumed via API, there would be a linear per-prediction price. And if AI were consumed via a self-managed open-source model, the cost of the necessary infrastructure remains linear. Even if the cost of API access decreases as competitors train their own models, machine intelligence remains a fixed marginal cost. While API pricing may decrease, AI-powered product margins don\u2019t benefit from economies of scale. In fact, scaling AI systems means dealing with a long-tail containing increasingly <a href=\"https:\/\/a16z.com\/2020\/07\/24\/long-tail-problem-in-a-i\/\" target=\"_blank\" rel=\"noreferrer noopener\">hard to wrangle<\/a> data for edge cases.<\/p>\n\n\n\n<p>These open-access models give rise to undifferentiated AI products. As historical moats no longer provide an enduring competitive advantage, competing products can more easily emerge.<\/p>\n\n\n\n<p>While AI API providers and distribution platforms are the logical beneficiaries of this evolving technology, established incumbents also stand to <a href=\"https:\/\/www.allencheng.com\/starting-a-business-around-gpt-3-is-a-bad-idea\" target=\"_blank\" rel=\"noreferrer noopener\">benefit<\/a>. In Christensen\u2019s terms, \u2018Open\u2019 AI is a <a href=\"https:\/\/hbr.org\/2015\/12\/what-is-disruptive-innovation\" target=\"_blank\" rel=\"noreferrer noopener\">sustaining innovation<\/a> that benefits the existing players. <strong>Enterprises can easily integrate AI into their product, outcompeting entrants with their preexisting branding, distribution, and data.<\/strong> Further, many inhibiting factors for AI in startups don\u2019t hold for incumbents.<\/p>\n\n\n\n<p><strong>Incumbents can handle the long-tail<\/strong><\/p>\n\n\n\n<p>AI products are expected to handle the long-tail of user intent. 
Think about search: while it\u2019s trivial to return answers to commonly asked questions, e.g. <em>\u201cprice of GameStop\u201d<\/em>, Google is also expected to handle your incredibly complex, never-before-seen query. In fact, some <a href=\"https:\/\/a16z.com\/2020\/02\/16\/the-new-business-of-ai-and-how-its-different-from-traditional-software\/\" target=\"_blank\" rel=\"noreferrer noopener\">40\u201350%<\/a> of the functionality in AI products resides in this long-tail.<\/p>\n\n\n\n<p>As a solution, companies are commonly <a href=\"https:\/\/a16z.com\/2020\/08\/12\/taming-the-tail-adventures-in-improving-ai-economics\/\" target=\"_blank\" rel=\"noreferrer noopener\">recommended<\/a> to \u2018bound problems\u2019 or collect data to solve the \u2018global long-tail\u2019 problem (the same problem shared by everyone). Enterprises can more easily do both.<\/p>\n\n\n\n<p>Bootstrapping a product\u2019s capability with AI means that the problem space is already bounded. Because the intended functionality had to exist without AI, the domain should already be sufficiently narrow to benefit from more advanced techniques.<\/p>\n\n\n\n<p>Solving the global long-tail is easier for enterprises because their product should already be able to generate the data necessary to solve the problem. Their proprietary data can be used to increase model coverage.<\/p>\n\n\n\n<p><strong>The hidden costs of AI<\/strong><\/p>\n\n\n\n<p>Despite the clear value-add of AI, there are additional cost considerations. A16Z\u2019s <a href=\"https:\/\/a16z.com\/2020\/02\/16\/the-new-business-of-ai-and-how-its-different-from-traditional-software\/\" target=\"_blank\" rel=\"noreferrer noopener\">The New Business of AI<\/a> kicked off a <a href=\"https:\/\/techcrunch.com\/2020\/02\/21\/do-ai-startups-have-worse-economics-than-saas-shops\/\" target=\"_blank\" rel=\"noreferrer noopener\">flurry<\/a> of conversations around the unit economics of AI. 
Martin Casado and Matt Bornstein argue that AI startups represent a new type of business that operates like \u201cSoftware + Services\u201d due to lower gross margins, scaling challenges, and weaker defensive moats.<\/p>\n\n\n\n<p>The complexity of training and deploying ML models adds operational costs on top of typical software development costs. Managing, processing, and training with these vast corpora of proprietary data is expensive. Because of these fundamental differences, AI businesses consistently report gross margins in the <a href=\"https:\/\/www.redeye.se\/events\/788494\/artificial-intelligence-seminar-2020\" target=\"_blank\" rel=\"noreferrer noopener\">50\u201360% range<\/a>, as opposed to the <a href=\"https:\/\/tomtunguz.com\/gross-margin-trends-saas\/\" target=\"_blank\" rel=\"noreferrer noopener\">60\u201380+%<\/a> typical of SaaS.<\/p>\n\n\n\n<p><strong>Increased cost of infrastructure: <\/strong>Today\u2019s <a href=\"https:\/\/venturebeat.com\/2020\/05\/29\/openai-debuts-gigantic-gpt-3-language-model-with-175-billion-parameters\/\" target=\"_blank\" rel=\"noreferrer noopener\">massive ML models<\/a> are computationally intensive, requiring a large portion of the operating budget to be spent on cloud infrastructure. While many startups don\u2019t report specifics, it\u2019s estimated that roughly <a href=\"https:\/\/a16z.com\/2020\/02\/16\/the-new-business-of-ai-and-how-its-different-from-traditional-software\/\" target=\"_blank\" rel=\"noreferrer noopener\">25%+ of revenue<\/a> is spent on cloud resources. Further, the operational complexity of managing data and metadata (as well as models) quickly exceeds that of simple SaaS or legacy products.<\/p>\n\n\n\n<p><strong>Increased cost of&nbsp;labor: <\/strong>Gross margins are further lowered by the need for a \u2018human in the loop\u2019. During model training, the data that powers supervised learning often requires manual cleaning and labeling. 
And in production, humans still play a large role as joint decision makers. Many \u2018AI\u2019 products are no more than ML-produced suggestions fed to human moderators with final authority. Even if AI improves to a point where it technically requires less assistance, current debates around <a href=\"https:\/\/hbr.org\/2020\/10\/ai-fairness-isnt-just-an-ethical-issue\" target=\"_blank\" rel=\"noreferrer noopener\">fairness and bias<\/a> show that human oversight will likely remain to counteract the ethical implications of these black-box models.<\/p>\n\n\n\n<p><strong>Increased cost of deployment: <\/strong>Whereas a typical SaaS product can be deployed to a customer instantly with zero marginal cost, AI products can require time and resources spent during onboarding. Understanding and fitting models to the customer\u2019s data distributions can be a time-intensive process. Because inputs from each new deployment can vary dramatically, each customer represents a new <a href=\"https:\/\/www.wired.com\/2004\/10\/tail\/\" target=\"_blank\" rel=\"noreferrer noopener\">long-tail<\/a> of edge cases to consider.<\/p>\n\n\n\n<p><strong>Preparing your enterprise for AI<\/strong><\/p>\n\n\n\n<p>Collecting this proprietary data is necessary but expensive. Standardize and simplify your data collection processes to best position your enterprise for these changing market dynamics. Specifically, the steps below will help you improve the unit economics of your future AI-enabled products.<\/p>\n\n\n\n<p><strong>Tame your infrastructural complexity: <\/strong>The common trope with ML systems is their vast complexity. As captured in the quintessential <a href=\"https:\/\/papers.nips.cc\/paper\/2015\/file\/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Hidden Technical Debt in Machine Learning Systems<\/a>, the actual model represents only a small fraction of the total infrastructure. 
The rest is (hopefully not) held together with pipeline sprawl and spaghetti code.<\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"700\" height=\"246\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/03\/F5-Network-pic.png\" alt=\"\" class=\"wp-image-25867\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/03\/F5-Network-pic.png 700w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/03\/F5-Network-pic-150x53.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/03\/F5-Network-pic-300x105.png 300w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/figure><\/div>\n\n\n\n<p>Automating this infrastructure helps reduce time to serve. If proprietary data and features are the new competitive edge, reducing the complexity of feature engineering gives your product massive leverage. Luckily, whole products and industries have already emerged to solve this problem.<\/p>\n\n\n\n<p><strong>Structure your&nbsp;data: <\/strong>Structure your data up-front or invest in mechanisms to cope with permanently unstructured data. Adding this metadata on top creates a structured transactional layer. This helps both with feature development via internal data discovery and with ingestion into BI systems that aid analytic decision making.<\/p>\n\n\n\n<p><strong>Standardize your data collection: <\/strong>The benefit of incumbency comes in large part from your proprietary data. This unique corpus is a defensible moat that new entrants will struggle to recreate. However, your data is only a strategic benefit if you can both collect and process it. With ad-hoc collection processes or the lack of a systematized approach, you won\u2019t be able to leverage the full extent of your data. And without collection systems, you\u2019re completely ceding your strategic position in the market. 
Without either of these in place, your product is at risk of being cannibalized by challengers.<\/p>\n\n\n\n<p><strong>About the Author<\/strong><\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"alignleft size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"125\" height=\"125\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/03\/Taggart-Bonham.jpg\" alt=\"\" class=\"wp-image-25865\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/03\/Taggart-Bonham.jpg 125w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/03\/Taggart-Bonham-110x110.jpg 110w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/03\/Taggart-Bonham-50x50.jpg 50w\" sizes=\"(max-width: 125px) 100vw, 125px\" \/><\/figure><\/div>\n\n\n\n<p><em>Taggart Bonham is Product Manager of Global AI at F5 Networks. His background is in ML software engineering and venture capital, focusing on the AI and smart city sectors. At F5, his focus is on building and scaling AI platforms to enable customers to unlock the power of their data with application insights.<\/em><\/p>\n\n\n\n<p><em>Sign up for the free insideBIGDATA&nbsp;<a rel=\"noreferrer noopener\" href=\"http:\/\/insidebigdata.com\/newsletter\/\" target=\"_blank\">newsletter<\/a>.<\/em><\/p>\n\n\n\n<p><em>Join us on Twitter:&nbsp;@InsideBigData1 \u2013 <a href=\"https:\/\/twitter.com\/InsideBigData1\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/twitter.com\/InsideBigData1<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this contributed article, Taggart Bonham, Product Manager of Global AI at F5 Networks, discusses how last June OpenAI released GPT-3, their newest text-generating AI model. As seen in the deluge of Twitter demos, GPT-3 works so well that people have generated text-based DevOps pipelines, complex SQL queries, Figma designs, and even code. 
In the article, Taggart explains how enterprises need to prepare for the AI economy by standardizing their data collection processes across their organizations like GPT-3 so it can then be properly leveraged. <\/p>\n","protected":false},"author":10513,"featured_media":21162,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[526,115,87,180,56,97,1],"tags":[437,324,947,949,95],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Gaining the Enterprise Edge in AI Products - insideBIGDATA<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/insidebigdata.com\/2021\/03\/31\/gaining-the-enterprise-edge-in-ai-products\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Gaining the Enterprise Edge in AI Products - insideBIGDATA\" \/>\n<meta property=\"og:description\" content=\"In this contributed article, Taggart Bonham, Product Manager of Global AI at F5 Networks, discusses last June, OpenAI released GPT-3, their newest text-generating AI model. As seen in the deluge of Twitter demos, GPT-3 works so well that people have generated text-based DevOps pipelines, complex SQL queries, Figma designs, and even code. 
In the article, Taggart explains how enterprises need to prepare for the AI economy by standardizing their data collection processes across their organizations like GPT-3 so it can then be properly leveraged.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/insidebigdata.com\/2021\/03\/31\/gaining-the-enterprise-edge-in-ai-products\/\" \/>\n<meta property=\"og:site_name\" content=\"insideBIGDATA\" \/>\n<meta property=\"article:publisher\" content=\"http:\/\/www.facebook.com\/insidebigdata\" \/>\n<meta property=\"article:published_time\" content=\"2021-03-31T13:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2021-04-01T16:36:35+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2018\/09\/artificial-intelligence-3382507_640.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"640\" \/>\n\t<meta property=\"og:image:height\" content=\"426\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Editorial Team\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:site\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Editorial Team\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/insidebigdata.com\/2021\/03\/31\/gaining-the-enterprise-edge-in-ai-products\/\",\"url\":\"https:\/\/insidebigdata.com\/2021\/03\/31\/gaining-the-enterprise-edge-in-ai-products\/\",\"name\":\"Gaining the Enterprise Edge in AI Products - insideBIGDATA\",\"isPartOf\":{\"@id\":\"https:\/\/insidebigdata.com\/#website\"},\"datePublished\":\"2021-03-31T13:00:00+00:00\",\"dateModified\":\"2021-04-01T16:36:35+00:00\",\"author\":{\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\"},\"breadcrumb\":{\"@id\":\"https:\/\/insidebigdata.com\/2021\/03\/31\/gaining-the-enterprise-edge-in-ai-products\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/insidebigdata.com\/2021\/03\/31\/gaining-the-enterprise-edge-in-ai-products\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/insidebigdata.com\/2021\/03\/31\/gaining-the-enterprise-edge-in-ai-products\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/insidebigdata.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Gaining the Enterprise Edge in AI Products\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/insidebigdata.com\/#website\",\"url\":\"https:\/\/insidebigdata.com\/\",\"name\":\"insideBIGDATA\",\"description\":\"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/insidebigdata.com\/?s={search_term_string}\"},\"query-input\":\"required 
name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\",\"name\":\"Editorial Team\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"caption\":\"Editorial Team\"},\"sameAs\":[\"http:\/\/www.insidebigdata.com\"],\"url\":\"https:\/\/insidebigdata.com\/author\/editorial\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Gaining the Enterprise Edge in AI Products - insideBIGDATA","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/insidebigdata.com\/2021\/03\/31\/gaining-the-enterprise-edge-in-ai-products\/","og_locale":"en_US","og_type":"article","og_title":"Gaining the Enterprise Edge in AI Products - insideBIGDATA","og_description":"In this contributed article, Taggart Bonham, Product Manager of Global AI at F5 Networks, discusses last June, OpenAI released GPT-3, their newest text-generating AI model. As seen in the deluge of Twitter demos, GPT-3 works so well that people have generated text-based DevOps pipelines, complex SQL queries, Figma designs, and even code. 
In the article, Taggart explains how enterprises need to prepare for the AI economy by standardizing their data collection processes across their organizations like GPT-3 so it can then be properly leveraged.","og_url":"https:\/\/insidebigdata.com\/2021\/03\/31\/gaining-the-enterprise-edge-in-ai-products\/","og_site_name":"insideBIGDATA","article_publisher":"http:\/\/www.facebook.com\/insidebigdata","article_published_time":"2021-03-31T13:00:00+00:00","article_modified_time":"2021-04-01T16:36:35+00:00","og_image":[{"width":640,"height":426,"url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2018\/09\/artificial-intelligence-3382507_640.jpg","type":"image\/jpeg"}],"author":"Editorial Team","twitter_card":"summary_large_image","twitter_creator":"@insideBigData","twitter_site":"@insideBigData","twitter_misc":{"Written by":"Editorial Team","Est. reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/insidebigdata.com\/2021\/03\/31\/gaining-the-enterprise-edge-in-ai-products\/","url":"https:\/\/insidebigdata.com\/2021\/03\/31\/gaining-the-enterprise-edge-in-ai-products\/","name":"Gaining the Enterprise Edge in AI Products - 
insideBIGDATA","isPartOf":{"@id":"https:\/\/insidebigdata.com\/#website"},"datePublished":"2021-03-31T13:00:00+00:00","dateModified":"2021-04-01T16:36:35+00:00","author":{"@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9"},"breadcrumb":{"@id":"https:\/\/insidebigdata.com\/2021\/03\/31\/gaining-the-enterprise-edge-in-ai-products\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/insidebigdata.com\/2021\/03\/31\/gaining-the-enterprise-edge-in-ai-products\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/insidebigdata.com\/2021\/03\/31\/gaining-the-enterprise-edge-in-ai-products\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/insidebigdata.com\/"},{"@type":"ListItem","position":2,"name":"Gaining the Enterprise Edge in AI Products"}]},{"@type":"WebSite","@id":"https:\/\/insidebigdata.com\/#website","url":"https:\/\/insidebigdata.com\/","name":"insideBIGDATA","description":"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/insidebigdata.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9","name":"Editorial Team","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","caption":"Editorial 
Team"},"sameAs":["http:\/\/www.insidebigdata.com"],"url":"https:\/\/insidebigdata.com\/author\/editorial\/"}]}},"jetpack_featured_media_url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2018\/09\/artificial-intelligence-3382507_640.jpg","jetpack_shortlink":"https:\/\/wp.me\/p9eA3j-6Ja","jetpack-related-posts":[{"id":31884,"url":"https:\/\/insidebigdata.com\/2023\/03\/17\/video-highlights-gpt-4-developer-livestream\/","url_meta":{"origin":25864,"position":0},"title":"Video Highlights: GPT-4 Developer Livestream","date":"March 17, 2023","format":false,"excerpt":"Here is Greg Brockman, President and Co-Founder of OpenAI, for a March 14, 2023 developer demo showcasing GPT-4 and some of its capabilities\/limitations. Included are a number of very compelling new use case capabilities over the previous GPT-3.5 version.","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/02\/GPT4_shutterstock_2252419881_small.png?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":32778,"url":"https:\/\/insidebigdata.com\/2023\/07\/05\/generative-ai-report-pilot-taps-openai-to-launch-pilot-gpt\/","url_meta":{"origin":25864,"position":1},"title":"Generative AI Report: Pilot Taps OpenAI to launch Pilot GPT","date":"July 5, 2023","format":false,"excerpt":"Welcome to the Generative AI Report, a new feature here on insideBIGDATA with a special focus on all the new applications and integrations tied to generative AI technologies. 
We\u2019ve been receiving so many cool news items relating to applications centered on large language models, we thought it would be a\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2284999159_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":32861,"url":"https:\/\/insidebigdata.com\/2023\/07\/17\/brief-history-of-llms\/","url_meta":{"origin":25864,"position":2},"title":"Brief History of LLMs","date":"July 17, 2023","format":false,"excerpt":"The early days of natural language processing saw researchers experiment with many different approaches, including conceptual ontologies and rule-based systems. While some of these methods proved narrowly useful, none yielded robust results. That changed in the 2010s when NLP research intersected with the then-bustling field of neural networks. The collision\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2284999159_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":26539,"url":"https:\/\/insidebigdata.com\/2021\/06\/26\/dall-e-a-human-like-intelligence-through-multimodality\/","url_meta":{"origin":25864,"position":3},"title":"DALL-E &#8211; A Human-like Intelligence through Multimodality","date":"June 26, 2021","format":false,"excerpt":"In this special guest feature, Sahar Mor, founder of AirPaper, discusses DALL-E - a new powerful API from OpenAI that creates images from text captions. 
With this, Sahar is planning to build a few products such as a chart generator based on text and a text-based tool to generate illustrations\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2021\/06\/AirPaper1.png?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":31422,"url":"https:\/\/insidebigdata.com\/2023\/01\/18\/originality-ai-allows-users-to-quickly-detect-ai-written-content-with-a-chrome-extension\/","url_meta":{"origin":25864,"position":4},"title":"Originality.AI Allows Users to Quickly Detect AI Written Content With a Chrome Extension\u00a0","date":"January 18, 2023","format":false,"excerpt":"Originality.AI recently launched a tool that allows users to screen for content created by popular AI tools, such as ChatGPT. To increase efficiency for the user, Originality.AI has also launched a Google Chrome Extension to make it faster and easier to check content.","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/img.youtube.com\/vi\/vYZy9giw9Fw\/0.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":31750,"url":"https:\/\/insidebigdata.com\/2023\/03\/02\/ai-from-a-psychologists-point-of-view\/","url_meta":{"origin":25864,"position":5},"title":"AI from a Psychologist\u2019s Point of View","date":"March 2, 2023","format":false,"excerpt":"Researchers at the Max Planck Institute for Biological Cybernetics in T\u00fcbingen have examined the general intelligence of the language model GPT-3, a powerful AI tool. Using psychological tests, they studied competencies such as causal reasoning and deliberation, and compared the results with the abilities of humans. 
Their findings, in the\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2018\/09\/artificial-intelligence-3382507_640.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]}],"_links":{"self":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/25864"}],"collection":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/users\/10513"}],"replies":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/comments?post=25864"}],"version-history":[{"count":0,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/25864\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media\/21162"}],"wp:attachment":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media?parent=25864"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/categories?post=25864"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/tags?post=25864"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}