{"id":25109,"date":"2020-10-16T06:00:00","date_gmt":"2020-10-16T13:00:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=25109"},"modified":"2020-10-17T10:00:17","modified_gmt":"2020-10-17T17:00:17","slug":"whats-under-the-hood-of-neural-networks","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2020\/10\/16\/whats-under-the-hood-of-neural-networks\/","title":{"rendered":"What\u2019s Under the Hood of Neural Networks?"},"content":{"rendered":"\n<p>Artificial neural networks are big business these days. If you\u2019ve been on  Twitter recently, or voted in the last election, chances are your data was processed by one. They are now being used in sectors ranging from  marketing and medicine to autonomous vehicles and energy harvesting.<\/p>\n\n\n\n<p>Yet despite their ubiquity, many regard neural networks as controversial.  Inspired by the structure of neurons in the brain, they are \u201cblack boxes\u201d in  the sense that, because their training processes and their capabilities are  poorly understood, it can be difficult to keep track of what they\u2019re doing  under the hood. And if we don\u2019t know how they achieve their results, how can we be sure that we can trust them? A second issue arises because, as  neural networks become more commonplace, they are run on smaller  devices. As a result, power consumption can be a limiting factor in their performance.<\/p>\n\n\n\n<p>However, help is at hand. Physicists working at Aston University in  Birmingham and the London Institute for Mathematical Sciences have  published a study that addresses both of these problems.<\/p>\n\n\n\n<p>Neural networks are built to carry out a variety of tasks including  automated decision-making. When designing one, you first feed it  manageable amounts of information, so that you can train the network by  gradually improving the results obtained. For example, an autonomous  vehicle needs to differentiate correctly between different types of traffic  signs. 
If it makes the right decision a hundred times, you might then trust the design of the network to go it alone and do its work on another thousand signs, or a million more.<\/p>\n\n\n\n<p>The controversy stems from the lack of control you have over the training process and the resulting network once it\u2019s up and running. It\u2019s a bit like the predicament described in E.M. Forster\u2019s sci-fi short story The Machine Stops. There, the human race has created \u201cthe Machine\u201d to govern its affairs, only to find it has developed a will of its own. While the concerns over neural networks aren\u2019t quite so dystopian, they do possess a worrying autonomy and variability in performance. If you train them on too few test cases relative to the number of free parameters inside, neural networks can give the illusion of making good decisions, a problem known as overfitting.<\/p>\n\n\n\n<p>Neural networks are so called because they are inspired by computation in the brain. The brain processes information by passing electrical signals through a series of neurons linked together by synapses. In a similar way, neural networks are a collection of nodes arranged in a series of layers through which a signal navigates. These layers are connected by edges, which are assigned weights. An input signal is iteratively transformed by these weights as it works its way through successive layers of the network. The way the weights are distributed in each layer determines the overall function that is computed, and hence the output that emerges from the final layer.<\/p>\n\n\n\n<p>The <a href=\"https:\/\/arxiv.org\/abs\/2004.08930\" target=\"_blank\" rel=\"noreferrer noopener\">study<\/a>, to be published in the journal Physical Review Letters, looked at two main types of neural networks: recurrent and layer-dependent. Recurrent neural networks can be viewed as a multilayered system where the weighted edges in each layer are identical. 
In layer-dependent neural networks, each layer has a different distribution of weights. The former set-up is by far the simpler of the two, because there are fewer weights to specify, meaning the network is cheaper to train.<\/p>\n\n\n\n<p>One might expect that inherently different structures would produce radically different outputs. Instead, the team found that the opposite was true: the set of functions that the networks computed was identical. According to Bo Li, one of the co-authors, the result astonished him. \u201cAt the beginning, I didn\u2019t believe that this could be true. There had to be a difference.\u201d<\/p>\n\n\n\n<p>The authors were able to draw this unexpected conclusion because they took a pencil-and-paper approach to what is usually thought of as a computational problem. Testing how each network deals with every individual input, for all possible inputs, would have been impossible: there are far too many combinations to consider. Instead, the authors devised a mathematical expression that considers the path the signal takes through the network for all possible inputs simultaneously, along with their corresponding outputs.<\/p>\n\n\n\n<p>Crucially, the study suggests that the extra complexity brings no benefit in terms of the variety of functions that the network can compute. This has both theoretical and practical implications.<\/p>\n\n\n\n<p>With fewer free parameters, recurrent neural networks are less prone to overfitting. They require less information to specify the smaller number of weights, meaning that it\u2019s easier to keep track of what they\u2019re computing. As co-author David Saad says, \u201cmistakes can be painful\u201d in the industries where these networks are deployed, so this paves the way for a better understanding of neural network capabilities.<\/p>\n\n\n\n<p>The simpler networks also require less power. 
\u201cIn simpler networks there are fewer parameters, which means fewer resources,\u201d explains Alexander Mozeika, one of the co-authors. \u201cSo if I were an engineer, I would try to use our insights to build networks that run on smaller chips or use less energy.\u201d<\/p>\n\n\n\n<p>While the results of the study are encouraging, they also give cause for concern. Even the simple presumption that networks constructed in different ways should do different things seems to have been misguided. Why does this matter? Because neural networks are now being used to diagnose diseases, detect threats and inform political decisions. Given the stakes of these applications, it\u2019s vital that the capabilities of neural networks, and more importantly their limitations, are properly appreciated.<\/p>\n\n\n\n<p><strong>About the Author<\/strong><\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"alignleft size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"125\" height=\"125\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2020\/10\/PippaCole.png\" alt=\"In this contributed article, \" class=\"wp-image-25110\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2020\/10\/PippaCole.png 125w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2020\/10\/PippaCole-110x110.png 110w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2020\/10\/PippaCole-50x50.png 50w\" sizes=\"(max-width: 125px) 100vw, 125px\" \/><\/figure><\/div>\n\n\n\n<p><em>Pippa Cole is the science writer at the London Institute for Mathematical Sciences, where study co-author Mozeika is based. As a result, she has been able to interview all three authors of the paper mentioned above. 
She has a PhD in Cosmology from the University of Sussex and has written previously for the blog <a rel=\"noreferrer noopener\" href=\"http:\/\/www.astrobites.org\/author\/pcole\" target=\"_blank\">Astrobites<\/a>.<\/em><\/p>\n\n\n\n<p><em>Sign up for the free insideBIGDATA&nbsp;<a rel=\"noreferrer noopener\" href=\"http:\/\/insidebigdata.com\/newsletter\/\" target=\"_blank\">newsletter<\/a>.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this contributed article, Pippa Cole, Science Writer at the London Institute for Mathematical Sciences, discusses new research on artificial neural networks that has added to concerns that we don\u2019t have a clue what machine learning algorithms are up to under the hood. She highlights a new study that focuses on two completely different deep-layered machines, and found that in fact they did exactly the same thing, which was a huge surprise. It\u2019s a demonstration of how little we understand about the inner workings of deep-layered neural networks.<\/p>\n","protected":false},"author":10513,"featured_media":21208,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[526,87,180,56,97,84,1],"tags":[652,96],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>What\u2019s Under the Hood of Neural Networks? - insideBIGDATA<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/insidebigdata.com\/2020\/10\/16\/whats-under-the-hood-of-neural-networks\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"What\u2019s Under the Hood of Neural Networks? 
- insideBIGDATA\" \/>\n<meta property=\"og:description\" content=\"In this contributed article, Pippa Cole, Science Writer at the London Institute for Mathematical Sciences, discusses new research on artificial neural networks that has added to concerns that we don\u2019t have a clue what machine learning algorithms are up to under the hood. She highlights a new study that focuses on two completely different deep-layered machines, and found that in fact they did exactly the same thing, which was a huge surprise. It\u2019s a demonstration of how little we understand about the inner workings of deep-layered neural networks.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/insidebigdata.com\/2020\/10\/16\/whats-under-the-hood-of-neural-networks\/\" \/>\n<meta property=\"og:site_name\" content=\"insideBIGDATA\" \/>\n<meta property=\"article:publisher\" content=\"http:\/\/www.facebook.com\/insidebigdata\" \/>\n<meta property=\"article:published_time\" content=\"2020-10-16T13:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2020-10-17T17:00:17+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2018\/10\/Neural-Network-diagram.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"700\" \/>\n\t<meta property=\"og:image:height\" content=\"329\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Editorial Team\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:site\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Editorial Team\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/insidebigdata.com\/2020\/10\/16\/whats-under-the-hood-of-neural-networks\/\",\"url\":\"https:\/\/insidebigdata.com\/2020\/10\/16\/whats-under-the-hood-of-neural-networks\/\",\"name\":\"What\u2019s Under the Hood of Neural Networks? - insideBIGDATA\",\"isPartOf\":{\"@id\":\"https:\/\/insidebigdata.com\/#website\"},\"datePublished\":\"2020-10-16T13:00:00+00:00\",\"dateModified\":\"2020-10-17T17:00:17+00:00\",\"author\":{\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\"},\"breadcrumb\":{\"@id\":\"https:\/\/insidebigdata.com\/2020\/10\/16\/whats-under-the-hood-of-neural-networks\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/insidebigdata.com\/2020\/10\/16\/whats-under-the-hood-of-neural-networks\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/insidebigdata.com\/2020\/10\/16\/whats-under-the-hood-of-neural-networks\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/insidebigdata.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"What\u2019s Under the Hood of Neural Networks?\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/insidebigdata.com\/#website\",\"url\":\"https:\/\/insidebigdata.com\/\",\"name\":\"insideBIGDATA\",\"description\":\"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/insidebigdata.com\/?s={search_term_string}\"},\"query-input\":\"required 
name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\",\"name\":\"Editorial Team\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"caption\":\"Editorial Team\"},\"sameAs\":[\"http:\/\/www.insidebigdata.com\"],\"url\":\"https:\/\/insidebigdata.com\/author\/editorial\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"What\u2019s Under the Hood of Neural Networks? - insideBIGDATA","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/insidebigdata.com\/2020\/10\/16\/whats-under-the-hood-of-neural-networks\/","og_locale":"en_US","og_type":"article","og_title":"What\u2019s Under the Hood of Neural Networks? - insideBIGDATA","og_description":"In this contributed article, Pippa Cole, Science Writer at the London Institute for Mathematical Sciences, discusses new research on artificial neural networks that has added to concerns that we don\u2019t have a clue what machine learning algorithms are up to under the hood. She highlights a new study that focuses on two completely different deep-layered machines, and found that in fact they did exactly the same thing, which was a huge surprise. 
It\u2019s a demonstration of how little we understand about the inner workings of deep-layered neural networks.","og_url":"https:\/\/insidebigdata.com\/2020\/10\/16\/whats-under-the-hood-of-neural-networks\/","og_site_name":"insideBIGDATA","article_publisher":"http:\/\/www.facebook.com\/insidebigdata","article_published_time":"2020-10-16T13:00:00+00:00","article_modified_time":"2020-10-17T17:00:17+00:00","og_image":[{"width":700,"height":329,"url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2018\/10\/Neural-Network-diagram.jpg","type":"image\/jpeg"}],"author":"Editorial Team","twitter_card":"summary_large_image","twitter_creator":"@insideBigData","twitter_site":"@insideBigData","twitter_misc":{"Written by":"Editorial Team","Est. reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/insidebigdata.com\/2020\/10\/16\/whats-under-the-hood-of-neural-networks\/","url":"https:\/\/insidebigdata.com\/2020\/10\/16\/whats-under-the-hood-of-neural-networks\/","name":"What\u2019s Under the Hood of Neural Networks? 
- insideBIGDATA","isPartOf":{"@id":"https:\/\/insidebigdata.com\/#website"},"datePublished":"2020-10-16T13:00:00+00:00","dateModified":"2020-10-17T17:00:17+00:00","author":{"@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9"},"breadcrumb":{"@id":"https:\/\/insidebigdata.com\/2020\/10\/16\/whats-under-the-hood-of-neural-networks\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/insidebigdata.com\/2020\/10\/16\/whats-under-the-hood-of-neural-networks\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/insidebigdata.com\/2020\/10\/16\/whats-under-the-hood-of-neural-networks\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/insidebigdata.com\/"},{"@type":"ListItem","position":2,"name":"What\u2019s Under the Hood of Neural Networks?"}]},{"@type":"WebSite","@id":"https:\/\/insidebigdata.com\/#website","url":"https:\/\/insidebigdata.com\/","name":"insideBIGDATA","description":"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/insidebigdata.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9","name":"Editorial Team","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","caption":"Editorial 
Team"},"sameAs":["http:\/\/www.insidebigdata.com"],"url":"https:\/\/insidebigdata.com\/author\/editorial\/"}]}},"jetpack_featured_media_url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2018\/10\/Neural-Network-diagram.jpg","jetpack_shortlink":"https:\/\/wp.me\/p9eA3j-6wZ","jetpack-related-posts":[{"id":14937,"url":"https:\/\/insidebigdata.com\/2016\/04\/28\/movidius-announces-fathom-deep-learning-accelerator-compute-stick\/","url_meta":{"origin":25109,"position":0},"title":"Movidius Announces Fathom Deep Learning Accelerator Compute Stick","date":"April 28, 2016","format":false,"excerpt":"Movidius, a leader in low-power machine vision technology, today announced both the Fathom Neural Compute Stick \u2013 the world\u2019s first deep learning acceleration module, and Fathom deep learning software framework. Both tools hand-in-hand will allow powerful neural networks to be moved out of the cloud, and deployed natively in end-user\u2026","rel":"","context":"In &quot;Big Data&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2016\/04\/Fathom1.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":21207,"url":"https:\/\/insidebigdata.com\/2018\/10\/07\/introduction-deep-learning-neural-networks\/","url_meta":{"origin":25109,"position":1},"title":"An Introduction to Deep Learning and Neural Networks","date":"October 7, 2018","format":false,"excerpt":"In this contributed article, Agile SEO technical writer and editor Limor Wainstein outlines how deep learning, neural networks, and machine learning are not interchangeable terms. 
This article helps to clarify the definitions for you with an introduction to deep learning and neural networks.","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2018\/10\/Neural-Network-diagram.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":15217,"url":"https:\/\/insidebigdata.com\/2016\/09\/24\/visualizing-and-understanding-deep-neural-networks\/","url_meta":{"origin":25109,"position":2},"title":"Visualizing and Understanding Deep Neural Networks","date":"September 24, 2016","format":false,"excerpt":"In this presentation, Matthew Zeiler, Ph.D., Founder and CEO of Clarifai Inc, speaks about large convolutional neural networks. These networks have recently demonstrated impressive object recognition performance making real world applications possible.","rel":"","context":"In &quot;Google News Feed&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/img.youtube.com\/vi\/ghEmQSxT6tw\/0.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":19330,"url":"https:\/\/insidebigdata.com\/2017\/11\/05\/visualizing-understanding-deep-neural-networks\/","url_meta":{"origin":25109,"position":3},"title":"Visualizing and Understanding Deep Neural Networks","date":"November 5, 2017","format":false,"excerpt":"In the video presentation below, Matthew Zeiler, PhD, Founder and CEO of Clarifai Inc, speaks about large convolutional neural networks. These networks have recently demonstrated impressive object recognition performance making real world applications possible. 
However, there was no clear understanding of why they perform so well, or how they might\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/img.youtube.com\/vi\/ghEmQSxT6tw\/0.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":18911,"url":"https:\/\/insidebigdata.com\/2017\/09\/24\/rmsprop-optimization-algorithm-gradient-descent-neural-networks\/","url_meta":{"origin":25109,"position":4},"title":"RMSprop Optimization Algorithm for Gradient Descent with Neural Networks","date":"September 24, 2017","format":false,"excerpt":"The video lecture below on the RMSprop optimization method is from the course Neural Networks for Machine Learning, as taught by Geoffrey Hinton (University of Toronto) on Coursera in 2012. For all you AI practitioners out there, this technique should supplement your toolbox in a very useful way.","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/img.youtube.com\/vi\/defQQqkXEfE\/0.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":14910,"url":"https:\/\/insidebigdata.com\/2016\/04\/25\/neural-networks-and-the-future-of-machine-learning\/","url_meta":{"origin":25109,"position":5},"title":"Neural Networks and the Future of Machine Learning","date":"April 25, 2016","format":false,"excerpt":"In this special guest feature, Gary Baum, Vice President of Marketing at MyScript, talks about how handwriting recognition is enhancing machine (and human) learning. As an input method, handwriting recognition teaches machines to adapt to the user, adding in another layer to their evolving skill set. 
Those users can program\u2026","rel":"","context":"In &quot;Google News Feed&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2016\/04\/Gary_Baum.jpg?resize=350%2C200","width":350,"height":200},"classes":[]}],"_links":{"self":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/25109"}],"collection":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/users\/10513"}],"replies":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/comments?post=25109"}],"version-history":[{"count":0,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/25109\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media\/21208"}],"wp:attachment":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media?parent=25109"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/categories?post=25109"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/tags?post=25109"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}