{"id":32788,"date":"2023-07-05T14:58:45","date_gmt":"2023-07-05T21:58:45","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=32788"},"modified":"2023-07-05T14:58:48","modified_gmt":"2023-07-05T21:58:48","slug":"research-highlights-scaling-mlps-a-tale-of-inductive-bias","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2023\/07\/05\/research-highlights-scaling-mlps-a-tale-of-inductive-bias\/","title":{"rendered":"Research Highlights: Scaling MLPs: A Tale of Inductive Bias"},"content":{"rendered":"\n<p>Multi-layer Perceptrons (MLPs) are the most fundamental type of neural network, so they play an important role in many machine learning systems and are the most theoretically studied type of neural network. A new <a href=\"https:\/\/arxiv.org\/abs\/2306.13575\" target=\"_blank\" rel=\"noreferrer noopener\">paper<\/a> from researchers at ETH Zurich pushes the limits of pure MLPs, and shows that scaling them up allows much better performance than expected from MLPs in the past. These findings may have important implications for the study of inductive biases, the theory of deep learning, and neural scaling laws. Our friends over at <a href=\"https:\/\/thegradient.pub\/\" target=\"_blank\" rel=\"noreferrer noopener\">The Gradient<\/a> provided this analysis. <\/p>\n\n\n\n<p><strong>Overview<\/strong>&nbsp;<\/p>\n\n\n\n<p>Many neural network architectures have been developed for different tasks, but the simplest form is the MLP, which consists of dense linear layers composed with elementwise nonlinearities. 
MLPs are important for several reasons: they are used directly in settings such as implicit neural representations and tabular data processing, they appear as subcomponents within state-of-the-art models such as convolutional neural networks, graph neural networks, and Transformers, and they are widely studied in theoretical work that aims to understand deep learning more generally.</p>

<p>MLP-Mixer (left) versus pure MLPs for images (right). MLP-Mixer still encodes visual inductive biases, whereas the pure MLP approach simply treats images as arrays of numbers.</p>

<p>The present work scales MLPs on widely studied image classification tasks. The pure MLPs considered in this work differ significantly from MLP-based vision models such as <a href="https://arxiv.org/abs/2105.01601" target="_blank" rel="noreferrer noopener">MLP-Mixer</a> and <a href="https://arxiv.org/abs/2105.08050" target="_blank" rel="noreferrer noopener">gMLP</a>. Those two models use MLPs in a specific way that encodes visual inductive biases, decomposing linear maps into channel-mixing and patch-mixing maps. In contrast, pure MLPs flatten entire images into vectors of numbers, which are then processed by general dense linear layers.</p>

<p>The authors consider isotropic MLPs, in which every hidden layer has the same dimension and layer normalization is applied after each layer of activations. They also experiment with inverted bottleneck MLPs, which expand and then contract the dimension of each layer and include residual connections. The inverted bottleneck MLPs generally perform much better than the isotropic ones.</p>

<p>Finetuned performance of inverted bottleneck MLPs pretrained on ImageNet21k.</p>

<p>Experiments on standard image classification datasets show that MLPs can perform quite well despite their lack of inductive biases.
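The pure-MLP approach and the inverted bottleneck block described above can be sketched in a few lines of NumPy. This is an illustrative sketch with untrained random weights and hypothetical layer sizes, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_norm(x, eps=1e-5):
    # Normalize to zero mean and unit variance (learned scale/shift omitted).
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def linear(x, d_out):
    # Fresh random weights on each call: this only illustrates shapes; nothing is trained.
    w = rng.normal(0.0, 1.0 / np.sqrt(x.shape[-1]), (x.shape[-1], d_out))
    return x @ w

def relu(x):
    return np.maximum(x, 0.0)

# A "pure" MLP treats the image as an array of numbers: flatten, then dense layers.
image = rng.normal(size=(32, 32, 3))       # a CIFAR-sized RGB image
x = relu(linear(image.reshape(-1), 1024))  # (3072,) flat vector -> isotropic hidden layer
x = layer_norm(x)

def inverted_bottleneck(x, expansion=4):
    # Expand the dimension, contract it back, then add a residual connection.
    h = relu(linear(x, expansion * x.shape[-1]))
    h = linear(h, x.shape[-1])
    return layer_norm(x + h)

x = inverted_bottleneck(x)
logits = linear(x, 10)  # 10-way classification head
print(logits.shape)     # (10,)
```

Note that no convolution or patch structure appears anywhere: unlike in MLP-Mixer, the flattened vector carries no information about which pixels were spatially adjacent.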
In particular, MLPs perform very well at transfer learning: when pretrained on ImageNet21k, large inverted bottleneck MLPs can match or exceed the performance of ResNet18s (except on ImageNet itself). Moreover, as with <a href="https://arxiv.org/abs/2001.08361" target="_blank" rel="noreferrer noopener">other modern deep learning models</a>, the performance of inverted bottleneck MLPs scales predictably with model size and dataset size. Interestingly, these scaling laws show that MLP performance is limited more by dataset size than by model size, perhaps because MLPs have fewer inductive biases and hence require more data to learn well.</p>

<p><strong>Why it's important</strong></p>

<p>Scaling laws, and the gains from scaling model and dataset sizes, are important to study, as larger versions of today's models may be powerful enough to perform many useful tasks. This work shows that MLP performance also follows scaling laws, though MLPs are more data-hungry than other deep learning models. Importantly, MLPs are extremely efficient to train: their forward and backward passes are fast, and, as shown in this work, they improve when trained with very large batch sizes. MLPs can therefore be used to study pretraining and large-dataset training efficiently.</p>

<p>The authors' observation that MLPs perform well with very large batch sizes is particularly interesting, since convolutional neural networks generally perform better with <a href="https://arxiv.org/abs/1706.02677" target="_blank" rel="noreferrer noopener">smaller batch sizes</a>.
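Scaling laws of this kind are typically power laws, and they are fit by linear regression in log-log space. A minimal sketch on synthetic, noise-free numbers (the constants here are made up for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical power law: error = a * D^(-b), where D is the dataset size.
a_true, b_true = 5.0, 0.25
D = np.array([1e4, 1e5, 1e6, 1e7])
E = a_true * D ** (-b_true)

# A power law is a straight line in log-log space: log E = log a - b * log D,
# so ordinary least squares on the logs recovers the exponent and constant.
slope, intercept = np.polyfit(np.log(D), np.log(E), 1)
b_hat, a_hat = -slope, np.exp(intercept)
print(b_hat, a_hat)  # approximately 0.25 and 5.0 on this noise-free data
```

Comparing how the fitted exponent changes as model size versus dataset size is scaled is what reveals which factor is the binding constraint; for MLPs, the paper finds it is the data.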
Thus, using MLPs as a proxy for CNNs (for instance, in theoretical work) may be misleading in this respect, as the implicit biases and other properties of the optimization process may differ significantly between the two architectures.</p>

<p>That large-scale MLPs can do well is further evidence that inductive biases may matter significantly less than model and data scale in many settings. This finding aligns with the observation that, at a large enough scale, <a href="https://arxiv.org/abs/2010.11929" target="_blank" rel="noreferrer noopener">Vision Transformers</a> outperform CNNs on many tasks, even though CNNs have more visual inductive biases built in.</p>