{"id":33752,"date":"2023-11-01T06:00:00","date_gmt":"2023-11-01T13:00:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=33752"},"modified":"2023-10-30T11:06:56","modified_gmt":"2023-10-30T18:06:56","slug":"generative-ai-report-11-1-2023","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2023\/11\/01\/generative-ai-report-11-1-2023\/","title":{"rendered":"Generative AI Report \u2013 11\/1\/2023"},"content":{"rendered":"\n<p>Welcome to the&nbsp;<strong>Generative AI Report<\/strong>&nbsp;round-up feature here on insideBIGDATA with a special focus on all the new applications and integrations tied to generative AI technologies. We\u2019ve been receiving so many cool news items relating to applications and deployments centered on large language models (LLMs), we thought it would be a timely service for readers to start a new channel along these lines. The combination of a LLM, fine tuned on proprietary data equals an AI application, and this is what these innovative companies are creating. The field of AI is accelerating at such fast rate, we want to help our loyal global audience keep pace.<\/p>\n\n\n\n<p><strong>Forbes Launches New Generative AI Search Tool, Adelaide, Powered By Google Cloud<\/strong><\/p>\n\n\n\n<p>Forbes announced the beta launch of <a href=\"https:\/\/www.forbes.com\/adelaide\/\" target=\"_blank\" rel=\"noreferrer noopener\">Adelaide<\/a>, its purpose-built news search tool using Google Cloud. The tool offers visitors AI-driven personalized recommendations and insights from Forbes\u2019 trusted journalism. The launch makes Forbes one of the first major news publishers to provide personalized, relevant news content recommendations for its global readers leveraging generative AI.<\/p>\n\n\n\n<p>Select visitors to Forbes.com can access Adelaide through the website, where they can explore content spanning the previous twelve months, offering deeper insights into the topics they are searching for. 
Adelaide, named after the wife of Forbes founder B.C. Forbes, has been crafted in-house by Forbes&#8217; dedicated tech and data team, ensuring seamless integration and optimal functionality.<\/p>\n\n\n\n<p>What sets Adelaide apart is its search- and conversation-based approach, making content discovery easier and more intuitive for Forbes&#8217; global audience. The tool generates individualized responses to user queries based exclusively on Forbes articles, and is built using Google Cloud Vertex AI Search and Conversation. With this integration, Adelaide will comb through Forbes&#8217; trusted content archive from the past twelve months, continuously learning and adapting to individual reader preferences.<\/p>\n\n\n\n<p><em>\u201cForbes was an early adopter of AI nearly five years ago \u2013 and now AI is a foundational technology for our first-party data platform, ForbesOne,\u201d said&nbsp;Vadim Supitskiy,&nbsp;Chief Digital and Information Officer, Forbes. \u201cAs we look to the future, we are enabling our audiences to better understand how AI can be a tool for good and enhance their lives. Adelaide is poised to revolutionize how Forbes audiences engage with news and media content, offering a more personalized and insightful experience from start to finish.\u201d<\/em><\/p>\n\n\n\n<p><strong>DataGPT<sup>TM<\/sup>&nbsp;Launches out of Stealth to Help Users Talk Directly to Their Data Using Everyday Language<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/datagpt.com\/datagpt\" target=\"_blank\" rel=\"noreferrer noopener\">DataGPT<\/a>, the leading provider of conversational AI data analytics software, announced the launch of the DataGPT AI Analyst, uniting the creative, comprehension-rich side of a large language model (LLM) (the \u201cright brain\u201d) with the logic and reasoning of advanced analytics techniques (the \u201cleft brain\u201d). 
This combination makes sophisticated analysis accessible to more people without compromising accuracy and impact.<\/p>\n\n\n\n<p>Companies everywhere are struggling to conduct analysis as quickly as business questions evolve. Current business intelligence (BI) solutions fall short, lacking the iterative querying needed to dig deeper into data. Similarly, consumer-facing generative AI tools are unable to integrate with large databases. Despite companies investing billions in complex tooling,&nbsp;<a href=\"https:\/\/barc-research.com\/bi-analytics-survey-23\/?ref=comparative.ai\" target=\"_blank\" rel=\"noreferrer noopener\">85% of business users<\/a>&nbsp;forgo these tools, wasting time and money as data teams manually parse through rigid dashboards, further burdened by ad hoc and follow-up requests. Conversational AI data analysis offers the best of both worlds.<\/p>\n\n\n\n<p><em>\u201cOur vision at DataGPT is crystal clear: we are committed to empowering anyone, in any company, to chat directly to their data,\u201d said Arina Curtis, CEO and co-founder, DataGPT. \u201cOur DataGPT software, rooted in conversational AI data analysis, not only delivers instant, analyst-grade results, but provides a seamless, user-friendly experience that bridges the gap between rigid reports and informed decision making.\u201d<\/em><\/p>\n\n\n\n<p><strong>Predibase Launches New Offering to Fine-tune and Serve 100x More LLMs at No Additional Cost&nbsp;\u2013 Try Now with Llama-2 for Free<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/predibase.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Predibase<\/a>, the developer platform for open-source AI, announced the availability of its software development kit (SDK) for efficient fine-tuning and serving. This new offering enables developers to train smaller, task-specific LLMs using even the cheapest and most readily available GPU hardware within their cloud. 
Fine-tuned models can then be served using Predibase\u2019s lightweight, modular LLM serving architecture that dynamically loads and unloads models on demand in seconds. This allows multiple models to be served without additional costs. This approach is so efficient that Predibase can now offer unlimited fine-tuning and serving of LLaMa-2-13B for free in a 2-week trial.&nbsp;&nbsp;<\/p>\n\n\n\n<p><em>\u201cMore than&nbsp;75% of organizations&nbsp;won\u2019t use commercial LLMs in production due to concerns over ownership, privacy, cost, and security, but productionizing open-source LLMs comes with its own set of infrastructure challenges,\u201d said Dev Rishi, co-founder and CEO of Predibase. \u201cEven with access to high-performance GPUs in the cloud, training costs can reach thousands of dollars per job due to a lack of automated, reliable, cost-effective fine-tuning infrastructure. Debugging and setting up environments require countless engineering hours. As a result,&nbsp; businesses can spend a fortune even before getting to the cost of serving in production.\u201d<\/em><\/p>\n\n\n\n<p><strong>DataStax Launches New Integration with LangChain, Enables Developers to Easily Build Production-ready Generative AI Applications<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/www.datastax.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">DataStax<\/a>, the company that powers generative AI applications with real-time, scalable data, announced a new integration with LangChain, the most popular orchestration framework for developing applications with large language models (LLMs). 
The integration makes it easy to add&nbsp;<a href=\"https:\/\/www.datastax.com\/products\/datastax-astra\" target=\"_blank\" rel=\"noreferrer noopener\">Astra DB<\/a>&nbsp;\u2013 the real-time database for developers building production Gen AI applications \u2013 or&nbsp;<a href=\"https:\/\/cassandra.apache.org\/_\/index.html\" target=\"_blank\" rel=\"noreferrer noopener\">Apache Cassandra<\/a>\u00ae as a new vector source in the LangChain framework.<\/p>\n\n\n\n<p>As many companies implement retrieval augmented generation (RAG) \u2013 the process of providing context from outside data sources to deliver more accurate LLM query responses \u2013 in their generative AI applications, they require a vector store that gives them real-time updates with zero latency on critical, real-life production workloads.&nbsp;<\/p>\n\n\n\n<p>Generative AI applications built with RAG stacks require a vector-enabled database and an orchestration framework like LangChain to provide memory or context to LLMs for accurate and relevant answers. Developers use LangChain as the leading AI-first toolkit to connect their applications to different data sources.&nbsp;<\/p>\n\n\n\n<p>The new integration lets developers leverage the power of the Astra DB vector database&nbsp;for their LLM, AI assistant, and real-time generative AI projects&nbsp;through the&nbsp;<a href=\"https:\/\/integrations.langchain.com\/vectorstores\" target=\"_blank\" rel=\"noreferrer noopener\">LangChain plugin architecture for vector stores<\/a>. Together, Astra DB and LangChain help developers take advantage of framework features like vector similarity search, semantic caching, term-based search, LLM-response caching, and data injection from Astra DB (or Cassandra) into prompt templates.<\/p>\n\n\n\n<p><em>\u201cIn a RAG application, the model receives supplementary data or context from various sources \u2014 most often a database that can store vectors,\u201d said Harrison Chase, CEO, LangChain. 
\u201cBuilding a generative AI app requires a robust, powerful database, and we ensure our users have access to the best options on the market via our simple plugin architecture. With integrations like DataStax&#8217;s LangChain connector, incorporating Astra DB or Apache Cassandra as a vector store becomes a seamless and intuitive process.\u201d&nbsp;<\/em><\/p>\n\n\n\n<p><em>\u201cDevelopers at startups and enterprises alike are using LangChain to build generative AI apps, so a deep native integration is a must-have,\u201d said Ed Anuff, CPO, DataStax. \u201cThe ability for developers to easily use Astra DB as their vector database of choice, directly from LangChain, streamlines the process of building the personalized AI applications that companies need. In fact, we\u2019re already seeing customers benefit from our joint technologies as healthcare AI company, Skypoint, is using Astra DB and LangChain to power its generative AI healthcare model.\u201d<\/em><\/p>\n\n\n\n<p><strong>Seismic Fall 2023 Release leads with new generative AI capabilities to unlock growth and boost productivity<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/seismic.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Seismic<\/a>, a leader in enablement, announced its Fall 2023 Product Release which brings several new generative AI-powered capabilities to the Seismic Enablement Cloud, including two major innovations in Aura Copilot and Seismic for Meetings. In addition, the company launched&nbsp;Seismic Exchange, a new centralized hub for all Seismic partner apps, integrations, and solutions for customers to get the most out of their tech stack.<\/p>\n\n\n\n<p><em>\u201cSeismic Aura continues to get smarter and better, leading to AI-powered products like Seismic for Meetings which will automate \u2013 and in some cases, eliminate \u2013 manual tasks for salespeople like meeting preparation, summarization, and follow-up,\u201d said Krish Mantripragada, Chief Product Officer, Seismic. 
\u201cOur new offering of Aura Copilot packages AI insights, use cases, workflows, and capabilities to enable our customers to grow and win more business. We\u2019re excited to deliver these new innovations from our AI-powered enablement roadmap to our customers and show them what the future looks like this week at Seismic Shift.\u201d<\/em><\/p>\n\n\n\n<p><strong>IBM Launches watsonx Code Assistant, Delivers Generative AI-powered Code Generation Capabilities Built for Enterprise Application Modernization<\/strong><\/p>\n\n\n\n<p>IBM (NYSE:&nbsp;IBM) launched watsonx Code Assistant, a generative AI-powered assistant that helps enterprise developers and IT operators code more quickly and more accurately using natural language prompts. The product currently delivers on two specific enterprise use cases. First, IT Automation with&nbsp;<a href=\"https:\/\/www.ibm.com\/products\/watsonx-code-assistant-ansible-lightspeed\" target=\"_blank\" rel=\"noreferrer noopener\">watsonx Code Assistant for Red Hat Ansible Lightspeed<\/a>, for tasks such as network configuration and code deployment. 
Second, mainframe application modernization with&nbsp;<a href=\"https:\/\/www.ibm.com\/products\/watsonx-code-assistant-zos\" target=\"_blank\" rel=\"noreferrer noopener\">watsonx Code Assistant for Z<\/a>, for translation of COBOL to Java.<\/p>\n\n\n\n<p>Designed to accelerate development while maintaining the principles of trust, security, and compliance, the product leverages generative AI based on&nbsp;<a href=\"https:\/\/www.ibm.com\/blog\/building-ai-for-business-ibms-granite-foundation-models\/\" target=\"_blank\" rel=\"noreferrer noopener\">IBM&#8217;s&nbsp;<\/a><a href=\"http:\/\/www.ibm.com\/blog\/building-ai-for-business-ibms-granite-foundation-models\/\" target=\"_blank\" rel=\"noreferrer noopener\">Granite foundation models<\/a>&nbsp;for code running on IBM&#8217;s watsonx platform.&nbsp;Granite uses a decoder architecture, which underpins a large language model&#8217;s ability to predict the next item in a sequence, supporting natural language processing tasks.&nbsp;IBM is exploring opportunities to tune&nbsp;watsonx Code Assistant with additional domain-specific generative AI capabilities to assist in code generation, code explanation, and the full end-to-end software development lifecycle, to continue to drive enterprise application modernization.<\/p>\n\n\n\n<p><em>&#8220;With this launch, watsonx Code Assistant joins watsonx Orchestrate and watsonx Assistant in IBM&#8217;s growing line of watsonx assistants that provide enterprises with tangible ways to implement generative AI,&#8221; said&nbsp;Kareem Yusuf, Ph.D., Senior Vice President, Product Management and Growth, IBM Software. 
&#8220;Watsonx Code Assistant puts AI-assisted code development and application modernization tools directly into the hands of developers \u2013 in a naturally integrated way that is designed to be non-disruptive \u2013 to help address skills gaps and increase productivity.&#8221;<\/em><\/p>\n\n\n\n<p><strong>Twelve Labs Breaks New Ground With First-of-its-kind Video-to-text Generative APIs<\/strong><\/p>\n\n\n\n<p><a href=\"http:\/\/www.twelvelabs.io\/\" target=\"_blank\" rel=\"noreferrer noopener\">Twelve Labs<\/a>, the video understanding company, announced the debut of game-changing technology along with the release of its public beta. Twelve Labs is the first in its industry to commercially release&nbsp;video-to-text generative APIs&nbsp;powered by its latest video-language foundation model, Pegasus-1. The model enables novel capabilities like summaries, chapters, video titles, and captioning from videos &#8211; even those without audio or text &#8211; truly extending the boundaries of what is possible.<\/p>\n\n\n\n<p>This groundbreaking release arrives at a time when language models\u2019 training objective has simply been to guess the most probable next word. That task alone enabled new possibilities to emerge, ranging from planning a set of actions to solve a complex problem, to effectively summarizing a 1,000-page text, to passing the bar exam. While mapping visual and audio content to language may be viewed similarly, solving video-language alignment, as Twelve Labs has with this release, is incredibly difficult &#8211; yet by doing so, Twelve Labs\u2019 latest functionality solves a myriad of problems no one else has been able to overcome.&nbsp;<\/p>\n\n\n\n<p>The company has uniquely trained its multimodal AI model to solve complex video-language alignment problems. 
Twelve Labs\u2019 proprietary model, evolved, tested, and refined for its public beta, leverages all of the components present in videos \u2013 action, objects, and background sounds \u2013 and learns to map human language to what&#8217;s happening inside a video. This goes far beyond existing capabilities on the market, and the APIs arrive just as OpenAI rolls out voice and image capabilities for ChatGPT, signaling a shift underway from unimodal to multimodal AI.<\/p>\n\n\n\n<p><em>\u201cThe Twelve Labs team has consistently pushed the envelope and broken new ground in video understanding since our founding in 2021. Our latest features represent this tireless work,\u201d said Jae Lee, co-founder and CEO of Twelve Labs. \u201cBased on the remarkable feedback we have received, and the breadth of test cases we\u2019ve seen, we are incredibly excited to welcome a broader audience to our platform so that anyone can use best-in-class AI to understand video content without manually watching thousands of hours to find what they are looking for. 
We believe this is the best, most efficient way to make use of video.\u201d<\/em><\/p>\n\n\n\n<p><strong>Privacera Announces the General Availability of Its Generative AI Governance Solution Providing a Unified Platform for Data and AI Security<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/privacera.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Privacera<\/a>, the AI and data security governance company founded by the creators of Apache Ranger\u2122 and&nbsp;the industry\u2019s first comprehensive generative AI governance solution, announced the General Availability (GA) of Privacera AI Governance (PAIG).&nbsp;PAIG allows organizations to securely innovate with generative AI (GenAI) technologies by securing the entire AI application lifecycle: from discovering and securing sensitive fine-tuning data, Retrieval Augmented Generation (RAG) sources, and the user interactions feeding into AI-powered models, to securing model outputs and continuously monitoring AI governance through comprehensive audit trails. Securing sensitive data and managing other risks with AI applications is crucial to enabling organizations to accelerate their GenAI product strategies.&nbsp;<\/p>\n\n\n\n<p>The emergence of Large Language Models (LLMs) is providing a vast range of opportunities to innovate and refine new experiences and products. Whether it\u2019s content creation, new experiences around virtual assistance, or improved productivity around code development, smaller and larger data-driven organizations are going to invest in diverse LLM-powered applications. With these opportunities comes an increased need to secure and govern the use of LLMs within and outside of any enterprise, small or large. 
These uses carry risks, including sensitive and unauthorized data exposure, IP leakage, abuse of models, and regulatory compliance failures.&nbsp;<\/p>\n\n\n\n<p><em>\u201cWith PAIG, Privacera is becoming the unified AI and data security platform for today\u2019s modern data applications and products,\u201d said Balaji Ganesan, co-founder and CEO of Privacera. \u201cData-driven organizations need to think about how GenAI fits in their overall security and governance strategy. This will enable them to achieve enterprise-grade security in order to fully leverage GenAI to transform their businesses without exposing the business to unacceptable risk. Our new product capabilities allow enterprises to secure the end-to-end lifecycle for data and AI applications &#8211; from fine-tuning the LLMs, protecting VectorDB to validating and monitoring user prompts and replies to AI at scale.\u201d&nbsp;<\/em><\/p>\n\n\n\n<p><strong>Domino Fall 2023 Release Expands Platform to Fast-Track and Future-Proof All AI, including GenAI<\/strong><\/p>\n\n\n\n<p>Domino Data Lab, provider of the Enterprise AI platform trusted by more than 20% of the Fortune 500, announced powerful new capabilities for building AI, including Generative AI (GenAI), rapidly and safely at scale. Its Fall 2023 platform update expands Domino\u2019s AI Project Hub to incorporate contributions from partners on the cutting edge of AI, introduces templates and code generation tools to accelerate and guide responsible data science work with best practices, and strengthens data access and governance capabilities.<\/p>\n\n\n\n<p>By enabling data scientists with the latest in open-source and commercial technologies, Domino\u2019s AI Project Hub now accelerates the development of real-world AI applications with pre-packaged reference projects integrating the best of the AI ecosystem. Both Domino customers and now partners can contribute templated projects to the AI Hub. 
Customers can adapt contributed projects to their unique requirements, IP, and data\u2014to build AI applications such as fine-tuning LLMs for text generation, enterprise Q&amp;A chatbots,&nbsp; sentiment analysis of product reviews, predictive modeling of energy output, and more.&nbsp;<\/p>\n\n\n\n<p>AI Hub Project Templates pre-package state-of-the-art models with environments, source code, data, infrastructure, and best practices &#8211; so enterprises can jumpstart AI productivity using a wide variety of use cases, including natural language processing, computer vision, generative AI, and more. With its initial release, Domino AI Project Hub includes templates to build machine learning models for classic regression and classification tasks, advanced applications such as fault detection using computer vision, and generative AI applications using the latest foundation models from Amazon Web Services, Hugging Face, OpenAI, Meta, and more.&nbsp;<\/p>\n\n\n\n<p><strong>Mission Cloud Launches New Generative AI Podcast Hosted by Generative AI<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/www.missioncloud.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Mission Cloud<\/a>, a US-based Amazon Web Services (AWS) Premier Tier Partner, launched Mission: Generate, a new, innovative podcast cutting through the noise and helping businesses and technology professionals discover what&#8217;s real and what&#8217;s simply hype when it comes to the generative AI landscape. The podcast will feature hosts Dr. RyAIn Ries and&nbsp;CAIsey Samulski&nbsp;\u2013 AI versions of Mission Cloud&#8217;s Dr.&nbsp;Ryan Ries, Practice Lead, Data, Analytics, AI &amp; Machine Learning, and&nbsp;Casey Samulski, Sr.&nbsp;Product Marketing Manager \u2013 discussing real-world generative AI applications that drive business success and industry evolution.<\/p>\n\n\n\n<p>That&#8217;s right! There are human voices on this podcast, but no actual humans talking. 
Generative AI was used to synthesize the conversations to create a generative AI podcast, by generative AI, about generative AI.<\/p>\n\n\n\n<p><em>&#8220;Generative AI is a hot topic, but how traditional media typically talks about it is pretty loose in terms of accuracy and often exclusively about chatbots, such as ChatGPT,&#8221; said Samulski. &#8220;There is much more to gen AI than chat, however. And businesses can actually accomplish incredible things and boost revenue and productivity if they know how to leverage it to build real solutions. The creation of this podcast, for example, is just the tip of the iceberg in terms of how far we can go with this technology and we hope to provide a truly educational experience by showcasing real-life use cases we&#8217;ve built for actual customers.&#8221;<\/em><\/p>\n\n\n\n<p><strong>Druva Supercharges Autonomous Protection With Generative AI<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/www.druva.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Druva<\/a>, the at-scale SaaS platform for data resiliency, unveiled Dru, the AI copilot for backup that revolutionizes how customers engage with their data protection solutions. Dru allows both IT and business users to get critical information through a conversational interface, helping customers reduce protection risks, gain insight into their protection environment, and quickly navigate their solution through a dynamic, customized interface. Dru\u2019s generative AI capabilities empower seasoned backup admins to make smarter decisions and novice admins to perform like experts.<\/p>\n\n\n\n<p>In an ever-evolving digital world, IT teams are overburdened with an increasing amount of responsibilities and complexity as businesses try to do more with less. While many organizations are promising the potential value of AI, Dru addresses real customer challenges today. 
With Dru, IT teams gain a new way to access information and drive actions through simple conversations to further increase productivity.<\/p>\n\n\n\n<p><em>\u201cWe believe that the future is autonomous, and Druva is committed to pushing the forefront of innovation. We are listening closely to customer pain points and being thoughtful with how we build and incorporate Dru into our platform to ensure we\u2019re solving real-world problems,\u201d said Jaspreet Singh, CEO at Druva. \u201cOur customers are already leveraging the efficiency benefits of our 100% SaaS, fully managed platform, and now with generative AI integrated directly into the solution, they can further expedite decision-making across their environment.\u201d<\/em><\/p>\n\n\n\n<p><strong>Prevedere Introduces Prevedere Generate, A Generative AI Solution That Brings Powerful New Forecasting Capabilities to Business Users Worldwide&nbsp;<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/prevedere.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Prevedere<\/a>, a leading provider of global intelligence and technology for advanced predictive planning, introduced a new generative AI product, Prevedere Generate, into its advanced predictive platform. Featuring a natural language chat interface, Prevedere Generate overlays the Prevedere solution suite, enabling business users to quickly and easily incorporate industry and company-specific economic and consumer data into their forecast models to gain new insights related to leading indicators and business trends.&nbsp;<\/p>\n\n\n\n<p>Since its inception, Prevedere&#8217;s predictive AI engine, driven by external and internal data, has empowered enhanced decision-making and forecasting capabilities, enabling business leaders to reduce risk, identify market opportunities, and foresee impending economic and market shifts. 
Prevedere Generate builds on this capability by offering a natural language search that allows business leaders to uncover leading external drivers and add more context about historical and future events related to these indicators in record time.<\/p>\n\n\n\n<p><em>&#8220;We&#8217;re committed to enhancing our generative AI capabilities to further accelerate customer adoption and drive market expansion worldwide,&#8221; said&nbsp;Rich Fitchen, Chief Executive Officer at Prevedere. &#8220;Our ongoing efforts combine generative AI with our patented technology and extensive econometric analysis expertise, allowing customers to quickly access actionable insights presented in easily digestible formats.&#8221;<\/em><\/p>\n\n\n\n<p><strong>Cognizant and Vianai Systems Announce Strategic Partnership to Advance Generative AI Capabilities<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/www.cognizant.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Cognizant<\/a> (Nasdaq:&nbsp;CTSH) and <a href=\"https:\/\/www.vian.ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">Vianai Systems, Inc.<\/a>&nbsp;announced the launch of a global strategic go-to-market partnership to accelerate human-centered generative AI offerings. This partnership leverages Vianai&#8217;s hila\u2122 Enterprise platform alongside Cognizant&#8217;s Neuro\u00ae&nbsp;AI, creating a seamless, unified interface to unlock predictive, AI-driven decision making. 
For both companies, this partnership is expected to enable growth opportunities in their respective customer bases, as well as through Cognizant&#8217;s plans to resell Vianai solutions.<\/p>\n\n\n\n<p>Vianai&#8217;s hila Enterprise provides clients a platform to safely and reliably deploy any large language model (LLM), optimized and fine-tuned to speak to their systems of record \u2013 both structured and unstructured data \u2013 enabling clients to better analyze, discover, and explore their data leveraging the conversational power of generative AI.<\/p>\n\n\n\n<p>In addition, vianops, the LLM monitoring component within hila Enterprise, is a next-generation monitoring platform for AI-driven enterprises; it monitors and analyzes LLM performance to proactively uncover opportunities to continually improve the reliability and trustworthiness of LLMs for clients.<\/p>\n\n\n\n<p><em>&#8220;In every business around the world, there is a hunger to harness the power of AI, but serious challenges around hallucinations, price-performance and lack of trust are holding enterprises back. That&#8217;s why we built hila Enterprise, a platform that delivers trusted, human-centered applications of AI,&#8221; said Dr.&nbsp;Vishal Sikka, Founder and Chief Executive Officer of Vianai Systems. &#8220;In Cognizant, we have found a strategic partner with a distinguished history of delivering innovative services. Together we will deliver transformative applications of AI that businesses can truly rely on, built on the trusted foundation of hila Enterprise and Cognizant&#8217;s Neuro AI platform.&#8221;<\/em><\/p>\n\n\n\n<p><em>&#8220;Being able to monitor and improve LLM performance is critical to unlocking the true power of generative AI,&#8221; said Ravi Kumar S, Cognizant&#8217;s Chief Executive Officer. 
&#8220;With Vianai&#8217;s platform and our Neuro AI platform, we believe we will be able to offer our clients a high-quality solution to support seamless data analysis with predictive decision-making capabilities.&#8221;<\/em><\/p>\n\n\n\n<p><strong>Introducing Tynker Copilot \u2013 The First-Ever LLM-Powered Coding Companion for Young Coders<\/strong><\/p>\n\n\n\n<p>Tynker, the leading game-based coding platform that has engaged over 100 million kids, introduced &#8220;Tynker Copilot.&#8221; Leveraging the capabilities of Large Language Models (LLMs), Tynker Copilot empowers young innovators aged 6-12. It provides a seamless interface for these budding developers to transform their ideas into visual block code for apps and games. Additionally, when exploring existing projects, kids benefit from the tool&#8217;s ability to explain block code fragments, ensuring a deeper understanding. Tynker Copilot allows children to build confidence as they work with AI, laying a solid foundation for their future. With this launch, coding education takes a significant leap forward.&nbsp;<\/p>\n\n\n\n<p>Large Language Models (LLMs) have excelled in text-based programming languages like Python and JavaScript. However, their application to visual block coding, the primary introduction to programming for many kids, had yet to be explored. Tynker is the first to bridge this gap. Its latest integration lets children quickly convert their ideas into block code, streamlining their initial coding experience.&nbsp;<\/p>\n\n\n\n<p>Tynker&#8217;s introduction of the Copilot feature marks a significant industry milestone. Until now, the capabilities of LLMs have not been fully utilized for the younger age group. Tynker Copilot empowers children as young as 6 to input text commands like &#8220;Design a space-themed knock-knock joke&#8221; or &#8220;Teach me how to build a Fruit-ninja style game&#8221; and receive block code outputs with step-by-step instructions. 
Moreover, when debugging existing projects, students can submit block-code snippets and benefit from LLM-generated natural language explanations.&nbsp;<\/p>\n\n\n\n<p><em>&#8220;Our ethos at Tynker is grounded in innovation and breaking boundaries,&#8221; stated Srinivas Mandyam, CEO of Tynker. &#8220;We&#8217;re exhilarated to channel the latest advancements in AI technology to serve our youngest learners, fostering an environment of curiosity and growth. While the potential for such a tool is vast, our prime objective remains to employ it as an empowering educational aid, not a shortcut.&#8221;&nbsp;<\/em><\/p>\n\n\n\n<p><strong>care.ai Builds Advanced Solutions for Nurses with Google Cloud&#8217;s Generative AI and Data Analytics on Its Smart Care Facility Platform<\/strong><\/p>\n\n\n\n<p>care.ai announced it is building Google Cloud&#8217;s generative AI and data analytics tools into its Smart Care Facility Platform to improve healthcare facility management and patient care, and move toward its vision for predictive, smart care facilities.<\/p>\n\n\n\n<p>By leveraging Google Cloud&#8217;s gen AI tools, including&nbsp;Vertex AI, and analytics and business intelligence products&nbsp;BigQuery&nbsp;and&nbsp;Looker, care.ai&#8217;s Smart Care Facility Platform aims to help reduce administrative burdens, mitigate staffing shortages, and free up clinicians to spend more time with patients in 1,500 acute and post-acute facilities where care.ai&#8217;s platform is already deployed.<\/p>\n\n\n\n<p><em>&#8220;This collaboration brings us closer to our vision of predictive smart care facilities, as opposed to the current reactive care model, which relies on latent, manual, sometimes inaccurate data entry,&#8221; said&nbsp;Chakri Toleti, CEO of care.ai. 
&#8220;Our vision is to enable the era of the smart hospital by making gen AI and ambient intelligence powerful assistive technologies, empowering clinicians, making healthcare safer, smarter, and more efficient.&#8221;<\/em><\/p>\n\n\n\n<p><strong>Talkdesk Launches New Generative AI Features that Make AI More Responsible, Accurate, and Accessible in the Contact Center<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/www.talkdesk.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Talkdesk<\/a>\u00ae, Inc., a global AI-powered contact center leader for enterprises of all sizes, announced significant product updates that deepen the integration of generative AI (GenAI) within its flagship&nbsp;<a href=\"https:\/\/www.talkdesk.com\/cloud-contact-center\/cx-cloud\/\" target=\"_blank\" rel=\"noreferrer noopener\">Talkdesk CX Cloud<\/a>&nbsp;platform and Industry Experience Clouds. Now, companies across industries can easily deploy, monitor, and fine-tune GenAI in the contact center with no coding experience, eliminate inaccurate and irresponsible AI use and subsequent brand risk, and create a powerful personalized experience for customers.<\/p>\n\n\n\n<p><em>Tiago Paiva, chief executive officer and founder of Talkdesk, said: \u201cAbout a year after its debut, GenAI significantly builds upon the benefits of artificial intelligence and has already proven to be a powerful tool for businesses. But as more enterprises deploy GenAI within business functions, it\u2019s clear that more work needs to be done to ensure accuracy, responsibility, and accessibility of the technology. 
At Talkdesk, we\u2019re taking a stand in the CCaaS industry to ensure that GenAI within the contact center does no harm to the business or its customers, provides the right level of personalized experiences across the customer journey, and gives more businesses access to its benefits.\u201d<\/em><\/p>\n\n\n\n<p><em>Sign up for the free insideBIGDATA&nbsp;<a href=\"http:\/\/inside-bigdata.com\/newsletter\/\" target=\"_blank\" rel=\"noreferrer noopener\">newsletter<\/a>.<\/em><\/p>\n\n\n\n<p><em>Join us on Twitter:&nbsp;<a href=\"https:\/\/twitter.com\/InsideBigData1\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/twitter.com\/InsideBigData1<\/a><\/em><\/p>\n\n\n\n<p><em>Join us on LinkedIn:&nbsp;<a href=\"https:\/\/www.linkedin.com\/company\/insidebigdata\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.linkedin.com\/company\/insidebigdata\/<\/a><\/em><\/p>\n\n\n\n<p><em>Join us on Facebook:&nbsp;<a href=\"https:\/\/www.facebook.com\/insideBIGDATANOW\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.facebook.com\/insideBIGDATANOW<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Welcome to the Generative AI Report round-up feature here on insideBIGDATA with a special focus on all the new applications and integrations tied to generative AI technologies. We\u2019ve been receiving so many cool news items relating to applications and deployments centered on large language models (LLMs), we thought it would be a timely service for readers to start a new channel along these lines. The combination of a LLM, fine tuned on proprietary data equals an AI application, and this is what these innovative companies are creating. 
The field of AI is accelerating at such fast rate, we want to help our loyal global audience keep pace.<\/p>\n","protected":false},"author":37,"featured_media":32680,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[526,115,1323,180,67,268,56,1],"tags":[437,324,1245,1248,96],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Generative AI Report \u2013 11\/1\/2023 - insideBIGDATA<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/insidebigdata.com\/2023\/11\/01\/generative-ai-report-11-1-2023\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Generative AI Report \u2013 11\/1\/2023 - insideBIGDATA\" \/>\n<meta property=\"og:description\" content=\"Welcome to the Generative AI Report round-up feature here on insideBIGDATA with a special focus on all the new applications and integrations tied to generative AI technologies. We\u2019ve been receiving so many cool news items relating to applications and deployments centered on large language models (LLMs), we thought it would be a timely service for readers to start a new channel along these lines. The combination of a LLM, fine tuned on proprietary data equals an AI application, and this is what these innovative companies are creating. 
The field of AI is accelerating at such fast rate, we want to help our loyal global audience keep pace.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/insidebigdata.com\/2023\/11\/01\/generative-ai-report-11-1-2023\/\" \/>\n<meta property=\"og:site_name\" content=\"insideBIGDATA\" \/>\n<meta property=\"article:publisher\" content=\"http:\/\/www.facebook.com\/insidebigdata\" \/>\n<meta property=\"article:published_time\" content=\"2023-11-01T13:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-10-30T18:06:56+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2313909647_special.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1100\" \/>\n\t<meta property=\"og:image:height\" content=\"550\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Daniel Gutierrez\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@AMULETAnalytics\" \/>\n<meta name=\"twitter:site\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Daniel Gutierrez\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"22 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/insidebigdata.com\/2023\/11\/01\/generative-ai-report-11-1-2023\/\",\"url\":\"https:\/\/insidebigdata.com\/2023\/11\/01\/generative-ai-report-11-1-2023\/\",\"name\":\"Generative AI Report \u2013 11\/1\/2023 - insideBIGDATA\",\"isPartOf\":{\"@id\":\"https:\/\/insidebigdata.com\/#website\"},\"datePublished\":\"2023-11-01T13:00:00+00:00\",\"dateModified\":\"2023-10-30T18:06:56+00:00\",\"author\":{\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2540da209c83a68f4f5922848f7376ed\"},\"breadcrumb\":{\"@id\":\"https:\/\/insidebigdata.com\/2023\/11\/01\/generative-ai-report-11-1-2023\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/insidebigdata.com\/2023\/11\/01\/generative-ai-report-11-1-2023\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/insidebigdata.com\/2023\/11\/01\/generative-ai-report-11-1-2023\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/insidebigdata.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Generative AI Report \u2013 11\/1\/2023\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/insidebigdata.com\/#website\",\"url\":\"https:\/\/insidebigdata.com\/\",\"name\":\"insideBIGDATA\",\"description\":\"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/insidebigdata.com\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2540da209c83a68f4f5922848f7376ed\",\"name\":\"Daniel 
Gutierrez\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/5780282e7e567e2a502233e948464542?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/5780282e7e567e2a502233e948464542?s=96&d=mm&r=g\",\"caption\":\"Daniel Gutierrez\"},\"description\":\"Daniel D. Gutierrez is a Data Scientist with Los Angeles-based AMULET Analytics, a service division of AMULET Development Corp. He's been involved with data science and Big Data long before it came in vogue, so imagine his delight when the Harvard Business Review recently deemed \\\"data scientist\\\" as the sexiest profession for the 21st century. Previously, he taught computer science and database classes at UCLA Extension for over 15 years, and authored three computer industry books on database technology. He also served as technical editor, columnist and writer at a major computer industry monthly publication for 7 years. Follow his data science musings at @AMULETAnalytics.\",\"sameAs\":[\"http:\/\/www.insidebigdata.com\",\"https:\/\/twitter.com\/@AMULETAnalytics\"],\"url\":\"https:\/\/insidebigdata.com\/author\/dangutierrez\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Generative AI Report \u2013 11\/1\/2023 - insideBIGDATA","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/insidebigdata.com\/2023\/11\/01\/generative-ai-report-11-1-2023\/","og_locale":"en_US","og_type":"article","og_title":"Generative AI Report \u2013 11\/1\/2023 - insideBIGDATA","og_description":"Welcome to the Generative AI Report round-up feature here on insideBIGDATA with a special focus on all the new applications and integrations tied to generative AI technologies. 
We\u2019ve been receiving so many cool news items relating to applications and deployments centered on large language models (LLMs), we thought it would be a timely service for readers to start a new channel along these lines. The combination of a LLM, fine tuned on proprietary data equals an AI application, and this is what these innovative companies are creating. The field of AI is accelerating at such fast rate, we want to help our loyal global audience keep pace.","og_url":"https:\/\/insidebigdata.com\/2023\/11\/01\/generative-ai-report-11-1-2023\/","og_site_name":"insideBIGDATA","article_publisher":"http:\/\/www.facebook.com\/insidebigdata","article_published_time":"2023-11-01T13:00:00+00:00","article_modified_time":"2023-10-30T18:06:56+00:00","og_image":[{"width":1100,"height":550,"url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2313909647_special.jpg","type":"image\/jpeg"}],"author":"Daniel Gutierrez","twitter_card":"summary_large_image","twitter_creator":"@AMULETAnalytics","twitter_site":"@insideBigData","twitter_misc":{"Written by":"Daniel Gutierrez","Est. 
reading time":"22 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/insidebigdata.com\/2023\/11\/01\/generative-ai-report-11-1-2023\/","url":"https:\/\/insidebigdata.com\/2023\/11\/01\/generative-ai-report-11-1-2023\/","name":"Generative AI Report \u2013 11\/1\/2023 - insideBIGDATA","isPartOf":{"@id":"https:\/\/insidebigdata.com\/#website"},"datePublished":"2023-11-01T13:00:00+00:00","dateModified":"2023-10-30T18:06:56+00:00","author":{"@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2540da209c83a68f4f5922848f7376ed"},"breadcrumb":{"@id":"https:\/\/insidebigdata.com\/2023\/11\/01\/generative-ai-report-11-1-2023\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/insidebigdata.com\/2023\/11\/01\/generative-ai-report-11-1-2023\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/insidebigdata.com\/2023\/11\/01\/generative-ai-report-11-1-2023\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/insidebigdata.com\/"},{"@type":"ListItem","position":2,"name":"Generative AI Report \u2013 11\/1\/2023"}]},{"@type":"WebSite","@id":"https:\/\/insidebigdata.com\/#website","url":"https:\/\/insidebigdata.com\/","name":"insideBIGDATA","description":"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/insidebigdata.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2540da209c83a68f4f5922848f7376ed","name":"Daniel 
Gutierrez","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/5780282e7e567e2a502233e948464542?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5780282e7e567e2a502233e948464542?s=96&d=mm&r=g","caption":"Daniel Gutierrez"},"description":"Daniel D. Gutierrez is a Data Scientist with Los Angeles-based AMULET Analytics, a service division of AMULET Development Corp. He's been involved with data science and Big Data long before it came in vogue, so imagine his delight when the Harvard Business Review recently deemed \"data scientist\" as the sexiest profession for the 21st century. Previously, he taught computer science and database classes at UCLA Extension for over 15 years, and authored three computer industry books on database technology. He also served as technical editor, columnist and writer at a major computer industry monthly publication for 7 years. Follow his data science musings at @AMULETAnalytics.","sameAs":["http:\/\/www.insidebigdata.com","https:\/\/twitter.com\/@AMULETAnalytics"],"url":"https:\/\/insidebigdata.com\/author\/dangutierrez\/"}]}},"jetpack_featured_media_url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2313909647_special.jpg","jetpack_shortlink":"https:\/\/wp.me\/p9eA3j-8Mo","jetpack-related-posts":[{"id":33161,"url":"https:\/\/insidebigdata.com\/2023\/08\/17\/generative-ai-report-deci-releases-powerful-open-source-generative-ai-model-decicoder-to-redefine-code-generation-for-developers\/","url_meta":{"origin":33752,"position":0},"title":"Generative AI Report: Deci Releases Powerful Open-Source Generative AI Model, DeciCoder, to Redefine Code Generation for Developers\u00a0","date":"August 17, 2023","format":false,"excerpt":"Deci, the deep learning company harnessing AI to build AI, released DeciCoder, its inaugural foundation model in generative AI helping users generate 
programming language code. This groundbreaking Large Language Model (LLM), dedicated to code generation with 1 billion parameters and an expansive 2048-context window, surpasses results released in equivalent models\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2313909647_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":33406,"url":"https:\/\/insidebigdata.com\/2023\/09\/19\/generative-ai-report-9-19-2023\/","url_meta":{"origin":33752,"position":1},"title":"Generative AI Report &#8211; 9\/19\/2023","date":"September 19, 2023","format":false,"excerpt":"Welcome to the Generative AI Report round-up feature here on insideBIGDATA with a special focus on all the new applications and integrations tied to generative AI technologies. We\u2019ve been receiving so many cool news items relating to applications and deployments centered on large language models (LLMs), we thought it would\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2313909647_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":34101,"url":"https:\/\/insidebigdata.com\/2023\/12\/07\/new-state-of-generative-ai-insiders-report-outlines-the-urgency-to-be-generative-ai-ready-in-2024\/","url_meta":{"origin":33752,"position":2},"title":"New State of Generative AI Insiders Report Outlines the Urgency to be Generative AI-Ready in 2024","date":"December 7, 2023","format":false,"excerpt":"Businesses will face substantial competitive hurdles if they are not Generative AI-ready in the next few years;\u00a0Report is the first to offer a generative AI scorecard for 2024 readiness. 
\u00a0\"The State of Generative AI Insiders Report,\" in conjunction with leaders from Insight Partners, Battery Ventures, Dataiku, Weights & Biases and\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/08\/Generative_AI_shutterstock_2273007347_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":33558,"url":"https:\/\/insidebigdata.com\/2023\/10\/03\/generative-ai-report-10-3-2023\/","url_meta":{"origin":33752,"position":3},"title":"Generative AI Report \u2013 10\/3\/2023","date":"October 3, 2023","format":false,"excerpt":"Welcome to the Generative AI Report round-up feature here on insideBIGDATA with a special focus on all the new applications and integrations tied to generative AI technologies. We\u2019ve been receiving so many cool news items relating to applications and deployments centered on large language models (LLMs), we thought it would\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2313909647_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":32957,"url":"https:\/\/insidebigdata.com\/2023\/07\/26\/generative-ai-report-servicenow-nvidia-and-accenture-team-to-accelerate-generative-ai-adoption-for-enterprises\/","url_meta":{"origin":33752,"position":4},"title":"Generative AI Report: ServiceNow, NVIDIA, and Accenture Team to Accelerate Generative AI Adoption for Enterprises","date":"July 26, 2023","format":false,"excerpt":"Welcome to the Generative AI Report, a new feature here on insideBIGDATA with a special focus on all the new applications and integrations tied to generative AI technologies. 
We\u2019ve been receiving so many cool news items relating to applications centered on large language models, we thought it would be a\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2284999159_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":33948,"url":"https:\/\/insidebigdata.com\/2023\/11\/21\/generative-ai-report-11-21-2023\/","url_meta":{"origin":33752,"position":5},"title":"Generative AI Report \u2013 11\/21\/2023","date":"November 21, 2023","format":false,"excerpt":"Welcome to the Generative AI Report round-up feature here on insideBIGDATA with a special focus on all the new applications and integrations tied to generative AI technologies. We\u2019ve been receiving so many cool news items relating to applications and deployments centered on large language models (LLMs), we thought it would\u2026","rel":"","context":"In &quot;AI Deep 
Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2313909647_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]}],"_links":{"self":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/33752"}],"collection":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/users\/37"}],"replies":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/comments?post=33752"}],"version-history":[{"count":0,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/33752\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media\/32680"}],"wp:attachment":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media?parent=33752"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/categories?post=33752"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/tags?post=33752"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}