{"id":33948,"date":"2023-11-21T03:00:00","date_gmt":"2023-11-21T11:00:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=33948"},"modified":"2023-11-22T10:16:39","modified_gmt":"2023-11-22T18:16:39","slug":"generative-ai-report-11-21-2023","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2023\/11\/21\/generative-ai-report-11-21-2023\/","title":{"rendered":"Generative AI Report \u2013 11\/21\/2023"},"content":{"rendered":"\n<p>Welcome to the&nbsp;<strong>Generative AI Report<\/strong>&nbsp;round-up feature here on insideBIGDATA with a special focus on all the new applications and integrations tied to generative AI technologies. We\u2019ve been receiving so many cool news items relating to applications and deployments centered on large language models (LLMs), we thought it would be a timely service for readers to start a new channel along these lines. The combination of an LLM fine-tuned on proprietary data equals an AI application, and this is what these innovative companies are creating. 
The field of AI is accelerating at such a fast rate that we want to help our loyal global audience keep pace.<\/p>\n\n\n\n<p><strong>NVIDIA Introduces Generative AI Foundry Service on Microsoft Azure for Enterprises and Startups Worldwide<\/strong><\/p>\n\n\n\n<p>NVIDIA introduced an AI foundry service to supercharge the development and tuning of custom generative AI applications for enterprises and startups deploying on Microsoft Azure.<\/p>\n\n\n\n<p>The NVIDIA AI foundry service pulls together three elements \u2014 a collection of&nbsp;<a href=\"https:\/\/blogs.nvidia.com\/blog\/custom-generative-ai-model-development\/\" target=\"_blank\" rel=\"noreferrer noopener\"><u>NVIDIA AI Foundation Models<\/u><\/a>,&nbsp;<a href=\"https:\/\/www.nvidia.com\/en-us\/ai-data-science\/generative-ai\/nemo-framework\/\" target=\"_blank\" rel=\"noreferrer noopener\"><u>NVIDIA NeMo<\/u><\/a>\u2122 framework and tools, and&nbsp;<a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/dgx-cloud\/\" target=\"_blank\" rel=\"noreferrer noopener\"><u>NVIDIA DGX\u2122 Cloud<\/u><\/a>&nbsp;AI supercomputing services \u2014 that give enterprises an end-to-end solution for creating custom generative AI models. Businesses can then deploy their customized models with&nbsp;<a href=\"https:\/\/www.nvidia.com\/en-us\/data-center\/products\/ai-enterprise\/\" target=\"_blank\" rel=\"noreferrer noopener\"><u>NVIDIA AI Enterprise<\/u><\/a>&nbsp;software to power generative AI applications, including intelligent search, summarization and content generation. Industry leaders SAP SE, Amdocs and Getty Images are among the pioneers building custom models using the service.<\/p>\n\n\n\n<p><em>\u201cEnterprises need custom models to perform specialized skills trained on the proprietary DNA of their company \u2014 their data,\u201d said Jensen Huang, founder and CEO of NVIDIA. \u201cNVIDIA\u2019s AI foundry service combines our generative AI model technologies, LLM training expertise and giant-scale AI factory. 
We built this in Microsoft Azure so enterprises worldwide can connect their custom model with Microsoft\u2019s world-leading cloud services.\u201d<\/em><\/p>\n\n\n\n<p><em>\u201cOur partnership with NVIDIA spans every layer of the Copilot stack \u2014 from silicon to software \u2014 as we innovate together for this new age of AI,\u201d said Satya Nadella, chairman and CEO of Microsoft. \u201cWith NVIDIA\u2019s generative AI foundry service on Microsoft Azure, we\u2019re providing new capabilities for enterprises and startups to build and deploy AI applications on our cloud.\u201d<\/em><\/p>\n\n\n\n<p><strong>Hammerspace Unveils Reference Architecture for Large Language Model Training<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/hammerspace.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Hammerspace<\/a>, the company orchestrating the Next Data Cycle, released the data architecture being used for training and inference for Large Language Models (LLMs) within hyperscale environments. This architecture enables artificial intelligence (AI) technologists to design a unified data architecture that delivers the performance of a supercomputing-class parallel file system coupled with the ease of application and research access to standard NFS.<\/p>\n\n\n\n<p>For AI strategies to succeed, organizations need the ability to scale to a massive number of GPUs, as well as the flexibility to access local and distributed data silos. Additionally, they need the ability to leverage data regardless of the hardware or cloud infrastructure on which it currently resides, as well as the security controls to uphold data governance policies. 
The magnitude of these requirements is particularly critical in the development of LLMs, which often necessitate utilizing hundreds of billions of parameters, tens of thousands of GPUs, and hundreds of petabytes of diverse types of unstructured data.&nbsp;&nbsp;<\/p>\n\n\n\n<p><em>\u201cThe most powerful AI initiatives will incorporate data from everywhere,\u201d said David Flynn, Hammerspace Founder and CEO. \u201cA high-performance data environment is critical to the success of initial AI model training. But even more important, it provides the ability to orchestrate the data from multiple sources for continuous learning. Hammerspace has set the gold standard for AI architectures at scale.\u201d<\/em><\/p>\n\n\n\n<p><strong>Pendo and Google Cloud Partner to Transform Product Management with Generative AI Capabilities and Training<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/www.pendo.io\/\" target=\"_blank\" rel=\"noreferrer noopener\">Pendo<\/a>, a leader in application experience management, announced an expanded partnership with Google Cloud to leverage its&nbsp;<a href=\"https:\/\/cloud.google.com\/ai\/generative-ai\" target=\"_blank\" rel=\"noreferrer noopener\">generative AI (gen AI) capabilities<\/a>&nbsp;across the Pendo One platform. Pendo will integrate with Vertex AI to provide product teams and application owners with features that accelerate product discovery, improve product-led growth and retention campaigns, and provide personalized app experiences to their users.<\/p>\n\n\n\n<p><em>&#8220;Google Cloud has been a critical partner for us since the earliest days of Pendo, and they continue to help us drive innovation for our customers,&#8221; said&nbsp;Todd Olson, CEO and co-founder of Pendo. 
&#8220;With gen AI fueling features across our platform, we can eliminate tedious manual work and help product teams make smarter decisions to ensure every tech investment drives returns for their companies.&#8221;<\/em><\/p>\n\n\n\n<p><em>&#8220;Generative AI can have a major impact helping product teams more effectively develop digital experiences for their customers,&#8221; said&nbsp;Stephen Orban, VP of Migrations, ISVs, and Marketplace at Google Cloud. &#8220;Our partnership with Pendo will give product managers new tools that enable them to rapidly create in-app guides and analyze user engagement, saving resources that can be used to improve product roadmaps and build new features.&#8221;<\/em><\/p>\n\n\n\n<p><strong>Redis Cloud Powers LangChain OpenGPTs Project<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/redis.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Redis, Inc.<\/a>&nbsp;announced <a href=\"https:\/\/www.langchain.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">LangChain<\/a>&nbsp;is utilizing&nbsp;<a href=\"https:\/\/redis.com\/cloud-partners\/google\/\" target=\"_blank\" rel=\"noreferrer noopener\">Redis Cloud<\/a>&nbsp;as the extensible real-time data platform for the&nbsp;<a href=\"https:\/\/twitter.com\/LangChainAI\/status\/1724099234574811149?s=20\" target=\"_blank\" rel=\"noreferrer noopener\">OpenGPTs project<\/a>. This collaboration between Redis and LangChain continues the companies&#8217; partnership to enable developers and businesses to leverage the latest innovations in the fast-evolving landscape of generative AI, such as the new&nbsp;<a href=\"https:\/\/blog.langchain.dev\/langchain-templates\/\" target=\"_blank\" rel=\"noreferrer noopener\">LangChain Template<\/a>&nbsp;for Retrieval Augmented Generation (RAG) utilizing Redis.<\/p>\n\n\n\n<p>LangChain&#8217;s OpenGPTs, an open-source initiative, introduces a more flexible approach to generative AI. 
It allows users to choose their models, control data retrieval, and manage where data is stored. Integrated with&nbsp;LangSmith&nbsp;for advanced debugging, logging, and monitoring, OpenGPTs offers a unique user-controlled experience. &#8220;The OpenGPTs project is bringing the same ideas of an agent to open source but allowing for more control over what model you use, how you do retrieval, and where your data is stored,&#8221; said Harrison Chase, Co-Founder and CEO of LangChain.<\/p>\n\n\n\n<p><em>&#8220;OpenGPTs is a wonderful example of the kind of AI applications developers can build using Redis Cloud to solve challenges like retrieval, conversational LLM memory, and semantic caching,&#8221; said Yiftach Shoolman, Co-Founder and Chief Technology Officer of Redis. &#8220;This great development by LangChain shows how our customers can address these pain points within one solution at real-time speed that is also cost-effective. We&#8217;re working across the AI ecosystem to support up-and-coming startups like LangChain to drive forward the opportunity generative AI offers the industry.&#8221;<\/em><\/p>\n\n\n\n<p><strong>Snow Software Unveils Snow Copilot, its First Generative AI Assistant, Built to Solve Large Challenges in IT Asset Management and FinOps<\/strong>&nbsp;<\/p>\n\n\n\n<p><a href=\"https:\/\/www.snowsoftware.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Snow Software<\/a>, a leader in technology intelligence, previewed Snow Copilot, the first in a series of Artificial Intelligence (AI) capabilities designed to solve large challenges in IT Asset Management (ITAM) and FinOps. Developed in its innovation incubator Snow Labs, Snow&nbsp;Copilot is an AI assistant that empowers users to ask conversational questions and receive natural language responses. 
At release, Snow Copilot is available for Software Asset Management (SAM) computer data in Snow Atlas with more use cases being explored over time.<\/p>\n\n\n\n<p>Snow Labs is a multipronged innovation initiative to help organizations make better decisions and deliver positive business outcomes with Technology Intelligence, or the ability to understand and manage all technology data, via Snow Atlas. The current project focuses on using artificial intelligence to advance data insights and further explore ways to tackle multifaceted ITAM and FinOps challenges, with Snow Copilot as the first offering powered by Snow AI.<\/p>\n\n\n\n<p><em>\u201cWe created Snow Labs as a space for rapid experimentation and prototyping, allowing us to test emerging technologies that sat outside of our standard product roadmap,\u201d said Steve Tait, Chief Technology Officer and EVP, Research and Development at Snow. \u201cArtificial intelligence is a great example of a rapidly evolving, emerging technology that could allow Snow Labs to address a myriad of challenges our customers face when making sense of their technology asset data. We believe that AI will fundamentally transform the way our customers and partners interact with their data. This is just one of many ways we are working to bring our vision around Technology Intelligence to life through innovation.\u201d<\/em><\/p>\n\n\n\n<p><strong>Matillion to bring no-code AI to pipelines<\/strong><\/p>\n\n\n\n<p>Data productivity provider&nbsp;<a href=\"https:\/\/www.matillion.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Matillion<\/a>&nbsp;announced its AI vision, with a range of GenAI functionality to put AI in the hands of every data practitioner, coders and non-coders alike. 
The addition of a low-code\/no-code graphical AI Prompt component will enable every data engineer to harness prompt engineering within LLM-enabled pipelines and materially boost productivity, whilst unlocking the infinite opportunities of unstructured data.&nbsp;<\/p>\n\n\n\n<p>The model-agnostic design will allow users to choose their preferred LLM, set the right context, and drive prompts at speed and scale. Seamlessly integrating with existing systems, the technology will enable information extraction, summarization, text classification, NLP, sentiment analysis and judgment calls on any source that Matillion connects to. With a strong emphasis on security and explainability, the solution safeguards data sovereignty, transparently articulates results and actively eliminates bias. LLM-enabled pipeline functionality within Matillion is expected to launch in the first quarter of 2024.<\/p>\n\n\n\n<p><em>Ciaran Dynes, Chief of Product at Matillion, said: \u201cThe role of the data engineer is evolving at pace. With the advent of GenAI, data engineering is about to get much more interesting.&nbsp;Matillion\u2019s core ethos is to make data more productive, and enabling users to seamlessly integrate AI into their data stack and leverage that functionality without the need for a data scientist is doing just that. Whilst all eyes are on AI, BI isn\u2019t going anywhere. 
We believe that through the Data Productivity Cloud, we have the opportunity to democratise access to AI in the context of data pipelines to augment BI projects, and to train and consume AI models.\u201d<\/em><\/p>\n\n\n\n<p><strong>Thomson Reuters Launches Generative AI-Powered Solutions to Transform How Legal Professionals Work<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/www.thomsonreuters.com\/en.html\" target=\"_blank\" rel=\"noreferrer noopener\">Thomson Reuters<\/a> (TSX\/NYSE:&nbsp;TRI),&nbsp;a global content and technology company, announced a series of GenAI initiatives designed to transform the legal profession. Headlining these initiatives is the debut of GenAI within the most advanced legal research platform,&nbsp;AI-Assisted Research on Westlaw Precision. Available now to customers in&nbsp;the United States, this skill helps legal professionals quickly get to answers for complex research questions. This generative AI skill leverages innovation from Casetext and, taking a &#8220;best of&#8221; approach, was created using the&nbsp;Thomson Reuters Generative AI Platform.&nbsp;&nbsp;<\/p>\n\n\n\n<p>The company also announced that it will be building on the AI assistant experience Casetext created with CoCounsel, the world&#8217;s first AI legal assistant. Later in 2024, Thomson Reuters will launch an&nbsp;AI assistant&nbsp;that will be the interface across Thomson Reuters products with GenAI capabilities.&nbsp;The AI assistant, called CoCounsel, will be fully integrated with multiple Thomson Reuters legal products, including Westlaw Precision, Practical Law Dynamic Tool Set, Document Intelligence, and HighQ, and will continue to be available on the CoCounsel application as a destination site. 
Customers will be able to choose the right skills to solve the problem at hand while taking advantage of generative AI capabilities.&nbsp;&nbsp;<\/p>\n\n\n\n<p><em>&#8220;Thomson Reuters is redefining the way legal work is done by delivering a generative AI-based toolkit to enable attorneys to quickly gather deeper insights and deliver a better work product. AI-Assisted Research on Westlaw Precision and CoCounsel Core provide the most comprehensive set of generative AI skills that attorneys can use across their research and workflow,&#8221; said&nbsp;David Wong, chief product officer, Thomson Reuters.&nbsp;&nbsp;<\/em><\/p>\n\n\n\n<p><strong>Dataiku Welcomes Databricks to Its LLM Mesh Partner Program&nbsp;<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/www.dataiku.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Dataiku<\/a>, the platform for Everyday AI, announced that Databricks is the latest addition to its&nbsp;<a href=\"https:\/\/blog.dataiku.com\/llm-mesh\" target=\"_blank\" rel=\"noreferrer noopener\">LLM Mesh Partner Program<\/a>. Through this integration and partnership, the two companies are paving a clearer and more vibrant path for Generative AI-driven business transformations while allowing the enterprise to capitalize on the immense potential of LLMs.<\/p>\n\n\n\n<p>LLMs offer ground-breaking capabilities but create challenges related to cost control, security, privacy, and trust. The LLM Mesh is the solution \u2014 a common backbone for securely building and scaling Generative AI applications in the enterprise context. 
It simplifies the complexities of integration, boosts collaboration, and optimizes resources at a time when&nbsp;<a href=\"https:\/\/pages.dataiku.com\/dataiku-databricks-ai-survey\" target=\"_blank\" rel=\"noreferrer noopener\">over 60% of senior AI professionals<\/a>&nbsp;are setting their sights on Generative AI, including LLMs, in the coming year.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"700\" height=\"630\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/11\/Dataiku_Databricks_LLM-Mesh_2.1.png\" alt=\"\" class=\"wp-image-33956\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/11\/Dataiku_Databricks_LLM-Mesh_2.1.png 700w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/11\/Dataiku_Databricks_LLM-Mesh_2.1-300x270.png 300w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/11\/Dataiku_Databricks_LLM-Mesh_2.1-150x135.png 150w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/figure><\/div>\n\n\n<p>Together, Dataiku and Databricks democratize access to data, analytics, machine learning, and AI, enabling a collaborative, visual experience that scales programs and accelerates the delivery of Generative AI projects.&nbsp;<\/p>\n\n\n\n<p><em>&#8220;Databricks recognizes the immense opportunities and challenges organizations face with the intricacies of Generative AI applications and the strain it can place on both technology and talent resources. 
We\u2019re excited to partner with Dataiku and look forward to enabling every enterprise to build, scale, and realize the benefits of Generative AI,\u201d said Roger Murff, VP of Technology Partners at Databricks.&nbsp;<\/em><\/p>\n\n\n\n<p><strong>Martian Invents Model Router that Beats GPT-4 by Using Breakthrough \u201cModel Mapping\u201d Interpretability Technique<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/withmartian.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Martian<\/a> emerged from stealth with the Model Router, an orchestration layer solution that routes each individual query to the best LLM in real-time. Through routing, Martian achieves higher performance and lower cost than any individual provider, including GPT-4. The system is built on the company\u2019s unique Model Mapping technology that unpacks LLMs from complex black boxes into a more interpretable architecture, making it the first commercial application of mechanistic interpretability.<\/p>\n\n\n\n<p><em>\u201cAll the effort being put into AI development is wasted if it\u2019s unwieldy, cost-prohibitive and uncharted for enterprise and everyday users,\u201d said Aaron Jacobson, partner, NEA. \u201cWe believe Martian will unlock the power of AI for companies and people en masse. 
Etan and Shriyash have demonstrated entrepreneurial spirit in their prior experiences and deep expertise in this field through high-impact peer-reviewed research that they\u2019ve been doing since 2016.\u201d<\/em><\/p>\n\n\n\n<p><em>\u201cOur goal is to consistently deliver such breakthroughs until AI is fully understood and we have a theory of machine intelligence as robust as our theories of logic or calculus,\u201d Shriyash Upadhyay, co-founder, Martian, said.&nbsp;<\/em><\/p>\n\n\n\n<p><strong>IBM Unveils watsonx.governance to Help Businesses &amp; Governments Govern and Build Trust in Generative AI<\/strong><\/p>\n\n\n\n<p>IBM (NYSE:<a href=\"http:\/\/www.ibm.com\/investor\" target=\"_blank\" rel=\"noreferrer noopener\">IBM<\/a>) announced that&nbsp;watsonx.governance will be generally available in early December to help businesses shine a light on&nbsp;<a href=\"http:\/\/www.ibm.com\/topics\/ai-model\" target=\"_blank\" rel=\"noreferrer noopener\">AI models<\/a>&nbsp;and eliminate the mystery around the data going in, and the answers coming out.<\/p>\n\n\n\n<p>While&nbsp;<a href=\"https:\/\/research.ibm.com\/blog\/what-is-generative-AI\" rel=\"noreferrer noopener\" target=\"_blank\">generative AI<\/a>, powered by Large Language Models (LLMs) or Foundation Models, offers many use cases for businesses, it also poses new risks and complexities, ranging from training data scraped from corners of the internet that cannot be validated as fair and accurate to a lack of explainable outputs. 
Watsonx.governance provides organizations with the toolkit they need to manage risk, embrace transparency, and anticipate compliance with future AI-focused regulation.<\/p>\n\n\n\n<p>As businesses today are looking to innovate with AI, deploying a mix of LLMs from tech providers and open source communities,&nbsp;<a href=\"https:\/\/www.ibm.com\/watsonx\" target=\"_blank\" rel=\"noreferrer noopener\">watsonx<\/a>&nbsp;enables them to manage, monitor and govern models from wherever they choose.<\/p>\n\n\n\n<p><em>&#8220;Company boards and CEOs are looking to reap the rewards from today&#8217;s more powerful AI models, but the risks due to a lack of transparency and inability to govern these models have been holding them back,&#8221; said&nbsp;Kareem Yusuf, Ph.D, Senior Vice President, Product Management and Growth, IBM Software. &#8220;Watsonx.governance is a one-stop-shop for businesses that are struggling to deploy and manage both LLM and ML models, giving businesses the tools they need to automate AI governance processes, monitor their models, and take corrective action, all with increased visibility. 
Its ability to translate regulations into enforceable policies will only become more essential for enterprises as new AI regulation takes hold worldwide.&#8221;<\/em><\/p>\n\n\n\n<p><strong>KX Announces KDB.AI And KX Copilot In Microsoft Azure<\/strong><\/p>\n\n\n\n<p>Representing a significant milestone in its strategic partnership with Microsoft, KX, the global pioneer in vector and time-series data management, has announced two new offerings optimized for Microsoft Azure customers: the integration of&nbsp;<a href=\"http:\/\/kdb.ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">KDB.AI<\/a>&nbsp;with Azure Machine Learning and Azure OpenAI Service; and KX Copilot.<\/p>\n\n\n\n<p>Recent estimates from McKinsey suggest that generative AI\u2019s impact on productivity could add the equivalent of $2.6 trillion to $4.4 trillion to the global economy; few companies, however, are optimized to harness the transformative power of this technology appropriately. Further integration of KX into Azure and productivity tools will help business users and technologists alike drive greater value from their data assets and AI investments for more informed decision-making.<\/p>\n\n\n\n<p>With the integration of&nbsp;<a href=\"http:\/\/kdb.ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">KDB.AI<\/a>&nbsp;with Azure Machine Learning and Azure OpenAI Service, developers who require turnkey technology stacks can significantly speed up the process of building and deploying AI applications by accessing fully configured instances of&nbsp;<a href=\"http:\/\/kdb.ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">KDB.AI<\/a>, Azure Machine Learning, and Azure OpenAI Service inside their customer subscription. 
With samples of KX\u2019s LangChain and OpenAI ChatGPT plug-ins included, developers can deploy a complete technical stack and start building AI-powered applications in less than five minutes.&nbsp;<a href=\"http:\/\/kdb.ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">KDB.AI<\/a>&nbsp;will be available in Azure Marketplace in early 2024.<\/p>\n\n\n\n<p><em>Ashok Reddy, CEO, KX:&nbsp;&#8220;With the deeper integration of our technology within the Microsoft Cloud environment, these announcements demonstrate our ongoing commitment to bring the power and performance of KX to even more customers. Generative AI is the defining technology of our age, and the introduction of these services will help organizations harness its incredible power for greater risk management, enhanced productivity and real-time decision-making.&#8221;<\/em><\/p>\n\n\n\n<p><strong>Messagepoint Announces Generative AI Capabilities for Translation<\/strong> <strong>and Plain Language Rewrites<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/www.messagepoint.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Messagepoint<\/a>&nbsp;announced enhancements to its generative AI capabilities to further support organizations in creating communications that are easy for customers to understand. As part of its Intelligent Content Hub for customer communications management, Messagepoint\u2019s AI-powered Assisted Authoring will now support translation into over 80 languages and suggest content rewrites to align communications with the ISO standard for plain language. 
Messagepoint\u2019s Assisted Authoring capabilities are governed by enterprise-grade controls that safely make it faster and easier for marketing and customer servicing teams to translate and optimize content, while still retaining complete control over the outgoing message.<\/p>\n\n\n\n<p><em>\u201cAs organizations strive to make complex topics and communications more accessible, the time and effort to support multiple languages or rewrite communications using plain language principles can be prohibitive,\u201d said Steve Biancaniello, founder and CEO of Messagepoint. \u201cBy leveraging generative AI in the controlled environment&nbsp;Messagepoint provides,&nbsp;organizations benefit from the speed and accuracy of AI-based translation and optimization without introducing risk. These capabilities represent a massive opportunity for organizations to better serve vulnerable populations and those with&nbsp;limited English proficiency.\u201d<\/em><\/p>\n\n\n\n<p><strong>Uniphore Advances Enterprise AI With Next Generation X Platform Capabilities&nbsp;<\/strong><\/p>\n\n\n\n<p>Uniphore announced breakthrough innovations for its X Platform that serves as a foundation for large enterprises to deliver better business results through enhanced customer and employee experiences, while driving a quick time-to-market and improved efficiencies.&nbsp;These innovations include the development and usage of Large Multimodal Models (LMMs) that have pre-built guardrails which help ensure the successful integration of Knowledge AI, Emotion AI, and Generative AI, leveraging all data sources including voice, video and text on its industry-leading X Platform.&nbsp; As a result, Uniphore\u2019s suite of industry-leading applications now has capabilities that are unmatched in the industry.&nbsp;<\/p>\n\n\n\n<p>While the rest of the industry is rushing to add Generative AI using open frameworks that are centered predominantly on text-based language models and in some cases, use of 
pictures and graphics, Uniphore has augmented the X Platform with an advanced LMM which addresses the shortcomings of GPT-based solutions. Uniphore customers now have access to solutions that solve today\u2019s biggest challenges such as hallucinations, data sovereignty and privacy. Enterprises benefit from Uniphore\u2019s LMMs across all its applications by humanizing the customer and employee experiences with contextual responses, accurate guidance and with complete control of data privacy and security.<\/p>\n\n\n\n<p><em>\u201cGlobal enterprises are looking for robust AI solutions to not only solve current business challenges, but find ways to deliver better customer and employee experiences to drive business forward in the future,\u201d said Umesh Sachdev, co-founder and CEO of Uniphore. \u201cCustomers have come to rely on Uniphore to ensure they get the best end-to-end AI platform that leverages Knowledge AI, Emotion AI and Generative AI across voice, video and text-based channels for a complete solution.\u201d<\/em><\/p>\n\n\n\n<p><strong>Rockset Adds Vector Search For Real-time Machine Learning At Scale<\/strong><\/p>\n\n\n\n<p><a href=\"http:\/\/rockset.com\" target=\"_blank\" rel=\"noreferrer noopener\">Rockset<\/a>, the real-time analytics database built for the cloud, announced native support for vector embeddings, enabling organizations to build high-performance vector search applications at scale, in the cloud. By extending its real-time SQL-based search and analytics capabilities, Rockset now allows developers to combine vector search with filtering and aggregations to enhance the search experience and optimize relevance by enabling hybrid search.<\/p>\n\n\n\n<p>Vector search has gained rapid momentum as more applications employ machine learning (ML) and artificial intelligence (AI) to power voice assistants, chatbots, anomaly detection, recommendation and personalization engines\u2014all of which are based on vector embeddings at their core. 
Rockset delivers fast, efficient search, aggregations and joins on real-time data at massive scale by using a <a href=\"https:\/\/rockset.com\/blog\/converged-indexing-the-secret-sauce-behind-rocksets-fast-queries\/\" target=\"_blank\" rel=\"noreferrer noopener\">Converged Index\u2122<\/a> stored on RocksDB. Vector databases such as Milvus, Pinecone and Weaviate, as well as popular alternatives like Elasticsearch, store and index vectors to make vector search efficient. With this release, Rockset provides a more powerful alternative that combines vector operations with the ability to filter on metadata, do keyword search and join vector similarity scores with other data to create richer, more relevant ML and AI powered experiences in real-time.&nbsp;<\/p>\n\n\n\n<p><em>\u201cBy extending our existing real-time search and analytics capabilities into vector search, we give AI\/ML developers access to real-time data and fast queries with a fully managed cloud service,\u201d said Rockset co-founder and CEO Venkat Venkataramani. \u201cWe now enable hybrid metadata filtering, keyword search and vector search, simply using SQL. 
Combining this ease of use with our compute efficiency in the cloud makes AI\/ML a lot more accessible for every organization.\u201d&nbsp;<\/em><\/p>\n\n\n\n<p><strong>LogicMonitor Introduces LM Co-Pilot, a Generative AI Tool Supporting Ops Teams with Interactive Experiences<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/www.logicmonitor.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">LogicMonitor<\/a>, a leading SaaS-based hybrid observability platform powered by AI, announced its generative AI-based tool, LM Co-Pilot.&nbsp;<a href=\"https:\/\/www.logicmonitor.com\/resource\/future-further-how-data-illumination-futureproofs-your-business\" target=\"_blank\" rel=\"noreferrer noopener\">With the growing demand for observability tools that provide recommendations<\/a>, LM Co-Pilot uses generative intelligence to assist users in their day-to-day operations, recognize issues and offer solutions, and empower IT and Cloud Operations teams to focus on innovation and the satisfaction of their customers.&nbsp;<\/p>\n\n\n\n<p><em>\u201cOne of the benefits of generative AI is its ability to take massive amounts of information and distill it into a rich, yet refined, interactive experience. While there are several applications for this, we want to initially target experiences that we can immediately improve,\u201d said Taggart Matthiesen, Chief Product Officer, LogicMonitor. \u201cWith Co-Pilot, we can condense multiple steps into an interactive experience, helping our users immediately access our entire support catalog at the tip of their fingers. This is really an evolutionary step in content discovery and delivery. 
Co-Pilot minimizes error-prone activities, saves our users time, and exposes them to contextually relevant information.\u201d&nbsp;&nbsp;<\/em><\/p>\n\n\n\n<p><strong>Flip AI Launches to Bring the \u2018Holy Grail of Observability\u2019 to All Enterprises<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/www.flip.ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">Flip AI<\/a>&nbsp;launched its observability intelligence platform, Flip, powered by a large language model (LLM) that predicts incidents and generates root cause analyses in seconds. Flip is trusted by well-known global enterprises, including a top media and entertainment company and some of the largest financial institutions in the world.&nbsp;<\/p>\n\n\n\n<p>Flip automates incident resolution processes, reducing the effort to minutes for enterprise development teams. Flip\u2019s core tenet is to serve as an intelligence layer across all observability and infrastructure data sources, reasoning over any modality of data no matter where or how it is stored. Flip sits on top of traditional observability solutions like Datadog, Splunk and New Relic; open source solutions like Prometheus, OpenSearch and Elastic; and object stores like Amazon S3, Azure Blob Storage and GCP Cloud Storage. Flip\u2019s LLM can work on structured and unstructured data; operates on-premises, multi-cloud and hybrid; requires little to no training; ensures that an enterprise\u2019s data stays private; and has a minimal compute footprint.&nbsp;<\/p>\n\n\n\n<p><em>\u201cWhen enterprise software doesn&#8217;t perform as intended, it directly impacts customer experience and revenue. Current observability tools present an overwhelming amount of data on application performance. Developers and operators spend hours, sometimes days, poring through data and debugging incidents,\u201d said Corey Harrison, co-founder and CEO of Flip AI. 
\u201cOur LLM does this heavy lifting in seconds and immediately reduces mean time to detect and remediate critical incidents. Enterprises are calling Flip the \u2018holy grail\u2019 of observability.\u201d<\/em><\/p>\n\n\n\n<p><strong>Monte Carlo Announces Support for Apache Kafka and Vector Databases to Enable More Reliable Data and AI Products<\/strong><\/p>\n\n\n\n<p>Monte Carlo, the data observability leader, announced a series of new product advancements to help companies tackle the challenge of ensuring reliable data for their data and AI products.<\/p>\n\n\n\n<p>Among the enhancements to its data observability platform are integrations with Kafka and vector databases, starting with Pinecone. These forthcoming capabilities will help teams tasked with deploying and scaling generative AI use cases to ensure that the data powering large language models (LLMs) is reliable and trustworthy at each stage of the pipeline. With this news, Monte Carlo becomes the first data observability platform to announce support for vector databases, a type of database designed to store and query high-dimensional vector data, typically used in RAG architectures.<\/p>\n\n\n\n<p><em>\u201cTo unlock the potential of data and AI, especially large language models (LLMs), teams need a way to monitor, alert to, and resolve data quality issues in both real-time streaming pipelines powered by Apache Kafka and vector databases powered by tools like Pinecone and Weaviate,\u201d said Lior Gavish, co-founder and CTO of Monte Carlo. \u201cOur new Kafka integration gives data teams confidence in the reliability of the real-time data streams powering these critical services and applications, from event processing to messaging. 
Simultaneously, our forthcoming integrations with major vector database providers will help teams proactively monitor and alert to issues in their LLM applications.\u201d<\/em><\/p>\n\n\n\n<p><strong>Espressive Announces Barista Live Generative Answers for Improved Employee Experiences Powered by AI<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/www.espressive.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Espressive<\/a>, the pioneer in automating digital workplace assistance, revealed Live Generative Answers, a new capability within the company\u2019s generative AI-based virtual agent Espressive Barista, which can already resolve employee issues through end-to-end automations and by leveraging internal knowledge repositories for concise answers. Now with Live Generative Answers, Barista can source answers from multiple places outside an organization, either from public sources on the internet or from large language models (LLMs) like ChatGPT and Bard. Powered by generative AI, the Barista Experience Selector understands the intent of an employee interaction and takes the action that will provide the best response. Barista harnesses automation and a number of AI technologies, including LLMs, to automate what a service desk agent does, acting as an extension of the team and taking on the work of a regular agent. Through this approach, Espressive delivers average deflection rates of 55 to 67 percent \u2013 the highest in the industry \u2013 and the industry\u2019s highest average employee adoption, at over 80 percent.<\/p>\n\n\n\n<p><em>\u201cOrganizations haven\u2019t fundamentally transformed the service desk in the past 30 years. While ITSM tools have certainly progressed, they are still adding headcount and almost 100 percent of the tickets require humans to resolve,\u201d said Pat Calhoun, founder and CEO of Espressive. 
\u201cBarista provides CIOs the ability to reduce cost, improve productivity and securely leverage LLMs and generative AI to drive business results. With our new Live Generative Answers capabilities, Barista can now collect data from multiple sources both internally and externally to ensure employees are getting the right answers quickly. Barista proactively resolves issues to transform the employee experience.\u201d<\/em><\/p>\n\n\n\n<p><strong>Vectara Unveils Open-Source Hallucination Evaluation Model To Detect and Quantify Hallucinations in Top Large Language Models<\/strong><\/p>\n\n\n\n<p>Large Language Model (LLM) builder&nbsp;<u><a href=\"https:\/\/vectara.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Vectara<\/a><\/u>, the trusted Generative AI (GenAI) platform, released its open-source Hallucination Evaluation Model. This first-of-its-kind initiative offers a commercially available, open-source model that measures the accuracy and degree of hallucination in LLMs, paired with a publicly available and regularly updated leaderboard. Vectara is also inviting other model builders such as OpenAI, Cohere, Google, and Anthropic to help define an open, free industry standard in support of self-governance and responsible AI.<\/p>\n\n\n\n<p>By launching its Hallucination Evaluation Model, Vectara is increasing transparency and objectively quantifying hallucination risks in leading GenAI tools, a critical step toward removing barriers to enterprise adoption, stemming dangers like misinformation, and enacting effective regulation. The model is designed to quantify how much an LLM strays from the facts in previously provided reference material while synthesizing a summary of it.<\/p>\n\n\n\n<p><em>\u201cFor organizations to effectively implement Generative AI solutions including chatbots, they need a clear view of the risks and potential downsides,&#8221; said Simon Hughes, AI researcher and ML engineer at Vectara. 
&#8220;For the first time, Vectara\u2019s Hallucination Evaluation Model allows anyone to measure hallucinations produced by different LLMs. As a part of Vectara\u2019s commitment to industry transparency, we\u2019re releasing this model as open source, with a publicly accessible Leaderboard, so that anyone can contribute to this important conversation.\u201d<\/em><\/p>\n\n\n\n<p><strong>Rafay Launches Infrastructure Templates for Generative AI to Help Enterprise Platform Teams Bring AI Applications to Market Faster<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/rafay.co\/\" target=\"_blank\" rel=\"noreferrer noopener\">Rafay Systems<\/a>, a leading platform provider for Cloud and Kubernetes Automation, announced the availability of curated infrastructure templates for Generative AI (GenAI) use cases that many enterprises are exploring today. These templates are designed to bring together the power of Rafay\u2019s Environment Management and Kubernetes Management capabilities, along with best-in-class tools used by developers and data scientists to extract business value from GenAI.<\/p>\n\n\n\n<p>Rafay\u2019s GenAI templates empower platform teams to efficiently guide GenAI technology development and utilization, and include reference source code for a variety of use cases, pre-built cloud environment templates, and Kubernetes cluster blueprints pre-integrated with the GenAI ecosystem. Customers can easily experiment with services such as Amazon Bedrock, Microsoft Azure OpenAI and OpenAI\u2019s ChatGPT. Support for high-performance, GPU-based computing environments is built into the templates. 
Traditional tools used by data scientists such as Simple Linux Utility for Resource Management (SLURM), Kubeflow and MLflow are also supported.&nbsp;<\/p>\n\n\n\n<p><em>&#8220;As platform teams lead the charge in enabling GenAI technologies and managing traditional AI and ML applications, Rafay\u2019s GenAI focused templates expedite the development and time-to-market for all AI applications, ranging from chatbots to predictive analysis, delivering real-time benefits of GenAI to the business,\u201d said Mohan Atreya, Rafay Systems SVP of Product and Solutions. \u201cPlatform teams can empower developers and data scientists to move fast with their GenAI experimentation and productization, while enforcing the necessary guardrails to ensure enterprise-grade governance and control. With Rafay, any enterprise can confidently start their GenAI journey today.&#8221;<\/em><\/p>\n\n\n\n<p><strong>Cresta Raises Bar with New Generative AI Capabilities that drive efficiency and effectiveness in the contact center<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/cresta.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Cresta<\/a>, a leading provider of generative AI for intelligent contact centers, announced new AI enhancements that provide contact center agents and leaders with advanced, intuitive capabilities to make data-driven decisions that drive more productive and effective customer interactions &#8211; a true game changer in AI accessibility.<br><br>The enhancements to Cresta Outcome Insights, Cresta Knowledge Assist, and Cresta Opera are powered by the latest advancements in Large Language Models and Generative AI, and represent a significant leap forward in how agents and leaders can utilize AI to elevate contact center operations. 
These new features are designed to revolutionize the way users engage with Cresta, delivering an unprecedented level of performance, insights, and productivity.<\/p>\n\n\n\n<p><em>&#8220;Cresta is using the latest innovation in LLMs and Generative AI to ensure that contact center leaders are equipped with the tools and insights they need to help agents excel before, during and after each customer interaction,&#8221; said&nbsp;Ping Wu, CEO of Cresta. &#8220;These new solutions demonstrate our commitment to helping contact center agents experience the full potential of AI to enhance their performance, seamlessly collaborate and receive personalized coaching tailored to their unique styles and skill sets.&#8221;<\/em><\/p>\n\n\n\n<p><strong>DataStax Launches RAGStack, an Out-of-the-box Retrieval Augment Generation Solution, to Simplify RAG Implementations for Enterprises Building Generative AI Applications<\/strong><\/p>\n\n\n\n<p><a href=\"https:\/\/www.datastax.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">DataStax<\/a>, the company that powers generative AI applications with real-time, scalable data, announced the launch of&nbsp;<a href=\"https:\/\/www.datastax.com\/products\/ragstack\" target=\"_blank\" rel=\"noreferrer noopener\">RAGStack<\/a>, an innovative, out-of-the-box RAG solution designed to simplify implementation of&nbsp;<a href=\"https:\/\/www.datastax.com\/guides\/what-is-retrieval-augmented-generation\" target=\"_blank\" rel=\"noreferrer noopener\">retrieval augmented generation&nbsp;(RAG)<\/a> applications built with LangChain. 
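The retrieve-then-prompt loop at the heart of RAG can be sketched in a few lines. The example below is a minimal illustration only: it uses a toy word-count embedding and a made-up three-document corpus, not the DataStax or LangChain APIs, purely to show how retrieved context is prepended to an LLM prompt.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: a bag-of-words frequency vector.
    # Real RAG stacks use a learned embedding model here.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, corpus):
    # Prepend retrieved context so the model answers from the documents,
    # not only from its training data.
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical corpus, standing in for a vector database.
corpus = [
    "Astra DB is a managed database service.",
    "RAG supplies outside context to an LLM at query time.",
    "Vector search ranks documents by embedding similarity.",
]
print(build_prompt("What is RAG?", corpus))
```

A production RAG pipeline swaps each piece for a real component (an embedding model, a vector store, an orchestration framework, an LLM call), but the data flow stays the same.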
RAGStack reduces the complexity and overwhelming choices that developers face when implementing RAG for their generative AI applications with a streamlined, tested, and efficient set of tools and techniques for building with LLMs.<\/p>\n\n\n\n<p>As many companies implement retrieval augmented generation (RAG) \u2013 the process of providing context from outside data sources to deliver more accurate LLM query responses \u2013 into their generative AI applications, they\u2019re left sifting through complex and overwhelming technology choices across open source orchestration frameworks,&nbsp;vector databases, LLMs, and more. Currently, companies often need to fork and modify these open source projects for their needs. Enterprises want a supported, off-the-shelf commercial solution.<\/p>\n\n\n\n<p><em>\u201cEvery company building with generative AI right now is looking for answers about the most effective way to implement RAG within their applications,\u201d said Harrison Chase, CEO, LangChain. \u201cDataStax has recognized a pain point in the market and is working to remedy that problem with the release of RAGStack. Using top-choice technologies, like LangChain and Astra DB among others, Datastax is providing developers with a tested, reliable solution made to simplify working with LLMs.\u201d<\/em><\/p>\n\n\n\n<p><strong>DataRobot Announces New Enterprise-Grade Functionality to Close the Generative AI Confidence Gap and Accelerate Adoption<\/strong><\/p>\n\n\n\n<p>DataRobot, a leader in Value-Driven AI, announced new end-to-end functionality designed to close the generative AI confidence gap, accelerating AI solutions from prototype to production and driving real-world value. 
Enhancements to the DataRobot AI Platform empower organizations to operate with correctness and control, govern with full transparency, and build with speed and optionality.&nbsp;<\/p>\n\n\n\n<p><em>\u201cThe demands around generative AI are broad, complex and evolving in real-time,\u201d said Venky Veeraraghavan, Chief Product Officer, DataRobot. \u201cWith over 500 of our customers deploying and managing AI in production, we understand what it takes to build, govern, and operate your AI safely and at scale. With this latest launch, we&#8217;ve designed a suite of production-ready capabilities to address the challenges unique to generative AI and instill the confidence required to bring transformative solutions into practice.\u201d<\/em><\/p>\n\n\n\n<p><strong>Snowflake Puts Industry-Leading Large Language and AI Models in the Hands of All Users with Snowflake Cortex<\/strong><\/p>\n\n\n\n<p>&nbsp;<a href=\"https:\/\/www.snowflake.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Snowflake<\/a>&nbsp;(NYSE: SNOW), the Data Cloud company, announced new innovations that&nbsp;enable all users to securely tap into the power of generative AI with their enterprise data \u2014 regardless of their technical expertise. Snowflake is simplifying how every organization can securely derive value from generative AI with&nbsp;Snowflake Cortex (private preview), Snowflake\u2019s new fully managed service that enables organizations to more easily discover, analyze, and build AI apps in the Data Cloud.<br><br>Snowflake Cortex gives users instant access to a growing set of serverless functions that include&nbsp;industry-leading large language models (LLMs) such as Meta AI\u2019s&nbsp;<a href=\"https:\/\/ai.meta.com\/llama\/\" target=\"_blank\" rel=\"noreferrer noopener\">Llama 2<\/a>&nbsp;model, task-specific models, and advanced vector search functionality. 
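Under the hood, vector search functionality of this kind reduces to nearest-neighbor ranking over embedding vectors. The sketch below is a generic illustration, not the Cortex API; the table rows and their vectors are hypothetical, standing in for embeddings produced by a real model.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors,
    # independent of their magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, table, k=1):
    # Rank stored rows by similarity to the query embedding
    # and return the k closest matches.
    ranked = sorted(table, key=lambda row: cosine_similarity(query_vec, row["vec"]),
                    reverse=True)
    return ranked[:k]

# Hypothetical rows; in practice the vectors come from an embedding model.
table = [
    {"id": "doc-a", "vec": [0.9, 0.1, 0.0]},
    {"id": "doc-b", "vec": [0.1, 0.9, 0.2]},
    {"id": "doc-c", "vec": [0.8, 0.2, 0.1]},
]
best = top_k([1.0, 0.0, 0.0], table, k=2)
print([row["id"] for row in best])  # → ['doc-a', 'doc-c']
```

Managed services wrap this primitive in scalable indexes and SQL-callable functions; the brute-force loop above is only the conceptual core.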
Using these functions, teams can accelerate their analytics and quickly build contextualized LLM-powered apps within minutes. Snowflake has also built three&nbsp;LLM-powered experiences leveraging Snowflake Cortex to enhance user productivity including&nbsp;Document AI (private preview),&nbsp;Snowflake Copilot (private preview),&nbsp;and&nbsp;Universal Search (private preview).<br><br><em>\u201cSnowflake&nbsp;is helping pioneer the next wave of AI innovation by providing enterprises with the data foundation and cutting-edge AI building blocks they need to create powerful AI and machine learning apps while keeping their data safe and governed,\u201d said Sridhar Ramaswamy, SVP of AI, Snowflake. \u201cWith Snowflake Cortex, businesses can now tap into the power of large language models in seconds, build custom LLM-powered apps within minutes, and maintain flexibility and control over their data \u2014 while reimagining how all users tap into generative AI to deliver business value.\u201d<\/em><\/p>\n\n\n\n<p><em>Sign up for the free insideBIGDATA&nbsp;<a href=\"http:\/\/inside-bigdata.com\/newsletter\/\" target=\"_blank\" rel=\"noreferrer noopener\">newsletter<\/a>.<\/em><\/p>\n\n\n\n<p><em>Join us on Twitter:&nbsp;<a href=\"https:\/\/twitter.com\/InsideBigData1\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/twitter.com\/InsideBigData1<\/a><\/em><\/p>\n\n\n\n<p><em>Join us on LinkedIn:&nbsp;<a href=\"https:\/\/www.linkedin.com\/company\/insidebigdata\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.linkedin.com\/company\/insidebigdata\/<\/a><\/em><\/p>\n\n\n\n<p><em>Join us on Facebook:&nbsp;<a href=\"https:\/\/www.facebook.com\/insideBIGDATANOW\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.facebook.com\/insideBIGDATANOW<\/a><\/em><\/p>\n","protected":false}}
:\/\/insidebigdata.com\/wp-json\/wp\/v2\/categories?post=33948"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/tags?post=33948"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}