{"id":30972,"date":"2022-11-25T06:06:00","date_gmt":"2022-11-25T14:06:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=30972"},"modified":"2022-11-23T15:08:12","modified_gmt":"2022-11-23T23:08:12","slug":"d-matrix-unlocks-new-potential-with-reinforcement-learning-based-compiler-for-at-scale-digital-in-memory-compute-platforms","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2022\/11\/25\/d-matrix-unlocks-new-potential-with-reinforcement-learning-based-compiler-for-at-scale-digital-in-memory-compute-platforms\/","title":{"rendered":"d-Matrix Unlocks New Potential with Reinforcement Learning based Compiler for at Scale Digital In-Memory Compute Platforms"},"content":{"rendered":"<div class=\"wp-block-image\">\n<figure class=\"alignright size-full is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2022\/11\/d-Matrix_logo.png\" alt=\"\" class=\"wp-image-30973\" width=\"283\" height=\"197\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2022\/11\/d-Matrix_logo.png 356w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2022\/11\/d-Matrix_logo-300x209.png 300w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2022\/11\/d-Matrix_logo-150x104.png 150w\" sizes=\"(max-width: 283px) 100vw, 283px\" \/><\/figure><\/div>\n\n\n<p><a href=\"https:\/\/www.d-matrix.ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">d-Matrix<\/a>, a leader in high-efficiency AI compute and inference, announced a collaboration with Microsoft using Microsoft\u2019s low-code reinforcement learning (RL) platform, Project Bonsai, to enable an AI-trained compiler for d-Matrix\u2019s unique digital in-memory compute (DIMC) products. 
The user-friendly Project Bonsai platform accelerates time to value, offering a product-ready solution that cuts down on development effort using an AI-based compiler that leverages ultra-efficient DIMC technology from d-Matrix.<\/p>\n\n\n\n<p>As large transformer models drive expanding demand for AI inference while memory and energy requirements approach practical limits, d-Matrix is bringing one of the first DIMC-based inference compute platforms to market. d-Matrix transforms the economics of complex transformers and generative AI with a scalable platform built to handle the immense data and power requirements of AI inference, making energy-hungry data centers more efficient. This AI compute platform from d-Matrix combines intelligent ML tools with integrated software architectures that arrange chiplets in a Lego-block grid, enabling multiple programming engines to be integrated in a common package.<\/p>\n\n\n\n<p>Combining d-Matrix technology with Project Bonsai enables the efficient creation of a compiler for the DIMC platform. Project Bonsai accelerates the prototyping, testing and deployment of trained RL agents in the compiler stack, taking full advantage of low-power AI inference technology from d-Matrix that can deliver up to ten times the power efficiency of older architectures.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>\u201cd-Matrix has built the world\u2019s most efficient computing platform for AI inference at scale,\u201d said Sudeep Bhoja, Co-Founder and CTO at d-Matrix. \u201cWhat made us gravitate towards Project Bonsai is its product-first features and ease of use. 
Microsoft\u2019s unique offering is built around machine teaching and the Inkling language, which makes RL constructs fully explainable.\u201d<\/p><\/blockquote>\n\n\n\n<p>The RL-based compiler is expected to become a key differentiator of d-Matrix\u2019s first-generation DIMC product offering, CORSAIR, on track to ship in late 2023.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>\u201cWe have been working together developing the RL-based compiler,\u201d said Kingsuk Maitra, Principal Applied AI Engineer on the Project Bonsai team at Microsoft. \u201cWe made it a point to have a product mindset from the get-go. Embodiments including the instruction set architecture have been vetted and validated on two d-Matrix test chips, NightHawk and JayHawk, and embedded into the RL training environment. Project Bonsai\u2019s low-code attributes made early development work easy and simplified the integration of statistical control parameters and other real-life chip design constraints, with very promising results so far.\u201d<\/p><\/blockquote>\n\n\n\n<p><em>Sign up for the free insideBIGDATA&nbsp;<a href=\"http:\/\/inside-bigdata.com\/newsletter\/\" target=\"_blank\" rel=\"noreferrer noopener\">newsletter<\/a>.<\/em><\/p>\n\n\n\n<p><em>Join us on Twitter:&nbsp;<a href=\"https:\/\/twitter.com\/InsideBigData1\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/twitter.com\/InsideBigData1<\/a><\/em><\/p>\n\n\n\n<p><em>Join us on LinkedIn:&nbsp;<a href=\"https:\/\/www.linkedin.com\/company\/insidebigdata\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.linkedin.com\/company\/insidebigdata\/<\/a><\/em><\/p>\n\n\n\n<p><em>Join us on Facebook:&nbsp;<a href=\"https:\/\/www.facebook.com\/insideBIGDATANOW\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.facebook.com\/insideBIGDATANOW<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>d-Matrix, a leader in high-efficiency AI-compute and inference, 
announced a collaboration with Microsoft using its low-code reinforcement learning (RL) platform, Project Bonsai, to enable an AI-trained compiler for d-Matrix\u2019s unique digital in memory compute (DIMC) products. The user-friendly Project Bonsai platform accelerates time to value, with a product-ready solution that cuts down on development efforts using an AI-based compiler that leverages ultra-efficient DIMC technology from d-Matrix.<\/p>\n","protected":false},"author":10513,"featured_media":30973,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[526,115,180,67,268,56,1],"tags":[1203,96],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>d-Matrix Unlocks New Potential with Reinforcement Learning based Compiler for at Scale Digital In-Memory Compute Platforms - insideBIGDATA<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/insidebigdata.com\/2022\/11\/25\/d-matrix-unlocks-new-potential-with-reinforcement-learning-based-compiler-for-at-scale-digital-in-memory-compute-platforms\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"d-Matrix Unlocks New Potential with Reinforcement Learning based Compiler for at Scale Digital In-Memory Compute Platforms - insideBIGDATA\" \/>\n<meta property=\"og:description\" content=\"d-Matrix, a leader in high-efficiency AI-compute and inference, announced a collaboration with Microsoft using its low-code reinforcement learning (RL) platform, Project Bonsai, to enable an AI-trained compiler for d-Matrix\u2019s unique digital in memory compute (DIMC) products. 
The user-friendly Project Bonsai platform accelerates time to value, with a product-ready solution that cuts down on development efforts using an AI-based compiler that leverages ultra-efficient DIMC technology from d-Matrix.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/insidebigdata.com\/2022\/11\/25\/d-matrix-unlocks-new-potential-with-reinforcement-learning-based-compiler-for-at-scale-digital-in-memory-compute-platforms\/\" \/>\n<meta property=\"og:site_name\" content=\"insideBIGDATA\" \/>\n<meta property=\"article:publisher\" content=\"http:\/\/www.facebook.com\/insidebigdata\" \/>\n<meta property=\"article:published_time\" content=\"2022-11-25T14:06:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2022-11-23T23:08:12+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2022\/11\/d-Matrix_logo.png\" \/>\n\t<meta property=\"og:image:width\" content=\"356\" \/>\n\t<meta property=\"og:image:height\" content=\"248\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Editorial Team\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:site\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Editorial Team\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"2 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/insidebigdata.com\/2022\/11\/25\/d-matrix-unlocks-new-potential-with-reinforcement-learning-based-compiler-for-at-scale-digital-in-memory-compute-platforms\/\",\"url\":\"https:\/\/insidebigdata.com\/2022\/11\/25\/d-matrix-unlocks-new-potential-with-reinforcement-learning-based-compiler-for-at-scale-digital-in-memory-compute-platforms\/\",\"name\":\"d-Matrix Unlocks New Potential with Reinforcement Learning based Compiler for at Scale Digital In-Memory Compute Platforms - insideBIGDATA\",\"isPartOf\":{\"@id\":\"https:\/\/insidebigdata.com\/#website\"},\"datePublished\":\"2022-11-25T14:06:00+00:00\",\"dateModified\":\"2022-11-23T23:08:12+00:00\",\"author\":{\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\"},\"breadcrumb\":{\"@id\":\"https:\/\/insidebigdata.com\/2022\/11\/25\/d-matrix-unlocks-new-potential-with-reinforcement-learning-based-compiler-for-at-scale-digital-in-memory-compute-platforms\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/insidebigdata.com\/2022\/11\/25\/d-matrix-unlocks-new-potential-with-reinforcement-learning-based-compiler-for-at-scale-digital-in-memory-compute-platforms\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/insidebigdata.com\/2022\/11\/25\/d-matrix-unlocks-new-potential-with-reinforcement-learning-based-compiler-for-at-scale-digital-in-memory-compute-platforms\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/insidebigdata.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"d-Matrix Unlocks New Potential with Reinforcement Learning based Compiler for at Scale Digital In-Memory Compute 
Platforms\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/insidebigdata.com\/#website\",\"url\":\"https:\/\/insidebigdata.com\/\",\"name\":\"insideBIGDATA\",\"description\":\"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/insidebigdata.com\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\",\"name\":\"Editorial Team\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"caption\":\"Editorial Team\"},\"sameAs\":[\"http:\/\/www.insidebigdata.com\"],\"url\":\"https:\/\/insidebigdata.com\/author\/editorial\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"d-Matrix Unlocks New Potential with Reinforcement Learning based Compiler for at Scale Digital In-Memory Compute Platforms - insideBIGDATA","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/insidebigdata.com\/2022\/11\/25\/d-matrix-unlocks-new-potential-with-reinforcement-learning-based-compiler-for-at-scale-digital-in-memory-compute-platforms\/","og_locale":"en_US","og_type":"article","og_title":"d-Matrix Unlocks New Potential with Reinforcement Learning based Compiler for at Scale Digital In-Memory Compute Platforms - insideBIGDATA","og_description":"d-Matrix, a leader in high-efficiency AI-compute and inference, announced a collaboration with Microsoft using its low-code reinforcement learning (RL) platform, Project Bonsai, to enable an AI-trained compiler for d-Matrix\u2019s unique digital in memory compute (DIMC) products. The user-friendly Project Bonsai platform accelerates time to value, with a product-ready solution that cuts down on development efforts using an AI-based compiler that leverages ultra-efficient DIMC technology from d-Matrix.","og_url":"https:\/\/insidebigdata.com\/2022\/11\/25\/d-matrix-unlocks-new-potential-with-reinforcement-learning-based-compiler-for-at-scale-digital-in-memory-compute-platforms\/","og_site_name":"insideBIGDATA","article_publisher":"http:\/\/www.facebook.com\/insidebigdata","article_published_time":"2022-11-25T14:06:00+00:00","article_modified_time":"2022-11-23T23:08:12+00:00","og_image":[{"width":356,"height":248,"url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2022\/11\/d-Matrix_logo.png","type":"image\/png"}],"author":"Editorial Team","twitter_card":"summary_large_image","twitter_creator":"@insideBigData","twitter_site":"@insideBigData","twitter_misc":{"Written by":"Editorial Team","Est. 
reading time":"2 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/insidebigdata.com\/2022\/11\/25\/d-matrix-unlocks-new-potential-with-reinforcement-learning-based-compiler-for-at-scale-digital-in-memory-compute-platforms\/","url":"https:\/\/insidebigdata.com\/2022\/11\/25\/d-matrix-unlocks-new-potential-with-reinforcement-learning-based-compiler-for-at-scale-digital-in-memory-compute-platforms\/","name":"d-Matrix Unlocks New Potential with Reinforcement Learning based Compiler for at Scale Digital In-Memory Compute Platforms - insideBIGDATA","isPartOf":{"@id":"https:\/\/insidebigdata.com\/#website"},"datePublished":"2022-11-25T14:06:00+00:00","dateModified":"2022-11-23T23:08:12+00:00","author":{"@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9"},"breadcrumb":{"@id":"https:\/\/insidebigdata.com\/2022\/11\/25\/d-matrix-unlocks-new-potential-with-reinforcement-learning-based-compiler-for-at-scale-digital-in-memory-compute-platforms\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/insidebigdata.com\/2022\/11\/25\/d-matrix-unlocks-new-potential-with-reinforcement-learning-based-compiler-for-at-scale-digital-in-memory-compute-platforms\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/insidebigdata.com\/2022\/11\/25\/d-matrix-unlocks-new-potential-with-reinforcement-learning-based-compiler-for-at-scale-digital-in-memory-compute-platforms\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/insidebigdata.com\/"},{"@type":"ListItem","position":2,"name":"d-Matrix Unlocks New Potential with Reinforcement Learning based Compiler for at Scale Digital In-Memory Compute Platforms"}]},{"@type":"WebSite","@id":"https:\/\/insidebigdata.com\/#website","url":"https:\/\/insidebigdata.com\/","name":"insideBIGDATA","description":"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning 
Strategies","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/insidebigdata.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9","name":"Editorial Team","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","caption":"Editorial Team"},"sameAs":["http:\/\/www.insidebigdata.com"],"url":"https:\/\/insidebigdata.com\/author\/editorial\/"}]}},"jetpack_featured_media_url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2022\/11\/d-Matrix_logo.png","jetpack_shortlink":"https:\/\/wp.me\/p9eA3j-83y","jetpack-related-posts":[{"id":31470,"url":"https:\/\/insidebigdata.com\/2023\/01\/24\/d-matrix-launches-new-chiplet-connectivity-platform-to-address-exploding-compute-demand-for-generative-ai\/","url_meta":{"origin":30972,"position":0},"title":"d-Matrix Launches New Chiplet Connectivity Platform to Address Exploding Compute Demand for Generative AI","date":"January 24, 2023","format":false,"excerpt":"Today, d-Matrix, a leader in high-efficiency AI-compute and inference processors, announced Jayhawk, an Open Domain-Specific Architecture (ODSA) Bunch of Wires (BoW) based chiplet platform for energy efficient die-die connectivity over organic substrates. 
Building on the back of the Nighthawk chiplet platform launched in 2021, the 2nd generation Jayhawk silicon platform\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2022\/11\/d-Matrix_logo.png?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":18240,"url":"https:\/\/insidebigdata.com\/2017\/06\/30\/bonsai-expands-tensorflow-support-gears-extending-functionality-ai-platform-enterprises-building-industrial-applications\/","url_meta":{"origin":30972,"position":1},"title":"Bonsai Expands TensorFlow Support with Gears, Extending Functionality of AI Platform for Enterprises Building Industrial Applications","date":"June 30, 2017","format":false,"excerpt":"Bonsai, provider of an AI platform that empowers enterprises to build and deploy intelligent systems, released Gears, a top feature requested by customers in the Bonsai Early Access Program. Gears further extends the value of Bonsai to data scientists, providing them with a tool to manage, deploy and scale previously\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/img.youtube.com\/vi\/6DGxiMnx2g8\/0.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":22292,"url":"https:\/\/insidebigdata.com\/2019\/03\/18\/ai-critical-measures-time-value-insights\/","url_meta":{"origin":30972,"position":2},"title":"AI Critical Measures: Time to Value and Insights","date":"March 18, 2019","format":false,"excerpt":"AI is a game changer for industries today but achieving AI success contains two critical factors to consider \u2014 time to value and time to insights.\u00a0 Time to value is the metric that looks at the time it takes to realize the value of a product, solution or offering. 
Time\u2026","rel":"","context":"In &quot;Enterprise&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2019\/03\/Charla-e1552670361849.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":29928,"url":"https:\/\/insidebigdata.com\/2022\/07\/25\/runai-releases-advanced-model-serving-functionality-to-help-organizations-simplify-ai-deployment\/","url_meta":{"origin":30972,"position":3},"title":"Run:ai Releases Advanced Model Serving Functionality to Help Organizations Simplify AI Deployment\u00a0","date":"July 25, 2022","format":false,"excerpt":"Run:ai, a leader in compute orchestration for AI workloads, announced new features of its Atlas Platform, including two-step model deployment \u2014 which makes it easier and faster to get machine learning models into production. The company also announced a new integration with NVIDIA Triton Inference Server. These capabilities are particularly\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2020\/03\/RunAILogo2020-03-31_15-07-21.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":23487,"url":"https:\/\/insidebigdata.com\/2019\/10\/27\/micron-introduces-comprehensive-ai-development-platform\/","url_meta":{"origin":30972,"position":4},"title":"Micron Introduces Comprehensive AI Development Platform","date":"October 27, 2019","format":false,"excerpt":"Micron Technology, Inc. (Nasdaq: MU), announced a powerful new set of high-performance hardware and software tools for deep learning applications with the acquisition of FWDNXT, a software and hardware startup. 
When combined with advanced Micron memory, FWDNXT\u2019s (pronounced \u201cforward next\u201d) artificial intelligence (AI) hardware and software technology enables Micron to\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":33418,"url":"https:\/\/insidebigdata.com\/2023\/09\/19\/sambanova-unveils-new-ai-chip-the-sn40l-powering-its-full-stack-ai-platform\/","url_meta":{"origin":30972,"position":5},"title":"SambaNova Unveils New AI Chip, the SN40L, Powering its Full Stack AI Platform","date":"September 19, 2023","format":false,"excerpt":"SambaNova Systems,\u00a0makers of the\u00a0purpose-built, full stack AI platform, announced a revolutionary new chip, the SN40L. The SN40L will power SambaNova\u2019s full stack large language model (LLM) platform, the SambaNova Suite, with its revolutionary new design: on the inside it offers both dense and sparse compute, and includes both large and\u2026","rel":"","context":"In &quot;AI Deep 
Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/AI_shutterstock_2287025875_special-1.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]}],"_links":{"self":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/30972"}],"collection":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/users\/10513"}],"replies":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/comments?post=30972"}],"version-history":[{"count":0,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/30972\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media\/30973"}],"wp:attachment":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media?parent=30972"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/categories?post=30972"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/tags?post=30972"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}