{"id":26530,"date":"2021-06-26T06:00:00","date_gmt":"2021-06-26T13:00:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=26530"},"modified":"2021-10-12T15:55:46","modified_gmt":"2021-10-12T22:55:46","slug":"the-amazing-applications-of-graph-neural-networks","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2021\/06\/26\/the-amazing-applications-of-graph-neural-networks\/","title":{"rendered":"The Amazing Applications of Graph Neural Networks"},"content":{"rendered":"\n<p>The predictive prowess of machine learning is widely hailed as the summit of statistical Artificial Intelligence. Vaunted for its ability to enhance everything from customer service to operations, its numerous neural networks, multiple models, and deep learning deployments are considered an enterprise surety for profiting from data.<\/p>\n\n\n\n<p>But according to <a href=\"https:\/\/franz.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Franz<\/a> CEO Jans Aasman, there\u2019s just one tiny problem with this lofty esteem that\u2019s otherwise accurate: for the most part, it \u201conly works for what they call Euclidean datasets where you can just look at the situation, extract a number of salient points from that, turn it into a number in a vector, and then you have supervised learning and unsupervised learning and all of that.\u201d<\/p>\n\n\n\n<p>Granted, a generous portion of enterprise data <em>is<\/em> Euclidean and readily vectorized. However, there\u2019s a wealth of non-Euclidean, multidimensional data serving as the catalyst for astounding machine learning use cases, such as:<\/p>\n\n\n\n<ul><li><strong>Network Forecasting:<\/strong> Analysis of all the varying relationships between entities or events in complex social networks of friends and enemies yields staggeringly accurate predictions about how any event (such as a specific customer buying a certain product) will influence network participants. 
This intelligence can revamp everything from marketing and sales approaches to regulatory mandates (Know Your Customer, Anti-Money Laundering, etc.), healthcare treatment, law enforcement, and more.<\/li><li><strong>Entity Classification:<\/strong> The potential to classify entities according to events\u2014such as part failure or system failure for connected vehicles\u2014is critical for predictive maintenance. This capability has obvious implications for fleet management, equipment asset monitoring, and other Internet of Things applications.<\/li><li><strong>Computer Vision, Natural Language Processing:<\/strong> Understanding the multidimensionality of the relationships of words to one another or images in a scene transfigures typical neural network deployments for NLP or computer vision. The latter supports scene generation: instead of a machine looking at a scene of a car passing a fire hydrant with a dog sleeping near it, that scene can be described in words so the machine generates the picture.<\/li><\/ul>\n\n\n\n<p>Each of these use cases revolves around high-dimensionality data with multifaceted relationships between entities or nodes at a remarkable scale at which \u201cregular machine learning fails,\u201d Aasman noted. However, they\u2019re ideal for graph neural networks, which specialize in these and other high-dimensionality data deployments.<\/p>\n\n\n\n<p><strong>High-Dimensionality Data<\/strong><\/p>\n\n\n\n<p>Graph neural networks achieve these feats because graph approaches focus on discerning relationships between data. Relationships <a href=\"https:\/\/arxiv.org\/pdf\/1904.10146v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">in Euclidean datasets<\/a> aren\u2019t as complicated as those in high-dimensionality data, since \u201ceverything in a straight line or a two-dimensional flat surface can be turned into a vector,\u201d Aasman observed. 
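The distinction Aasman draws can be sketched in a few lines of Python: a Euclidean record vectorizes on its own, row by row, while a graph-derived feature depends on edges elsewhere in the structure. All names and values below are invented purely for illustration, not taken from the article.

```python
# A "Euclidean" record turns into a fixed-length feature vector in isolation.
customer = {"age": 42, "visits": 7, "spend": 310.0}
vector = [customer["age"], customer["visits"], customer["spend"]]

# A graph feature does not: a node's degree (a stand-in here for any
# neighborhood-dependent feature) changes when an edge is added elsewhere,
# so no node can be vectorized without consulting the rest of the graph.
edges = {("a", "b"), ("b", "c")}

def degree(node, edges):
    # Count the edges that touch this node.
    return sum(node in e for e in edges)

before = degree("b", edges)    # 2
edges.add(("b", "d"))          # the graph changes...
after = degree("b", edges)     # ...and so does b's feature: 3
```

This is the sense in which, as quoted below, each number in such a vector "would actually be dependent on other parts of the graph."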
These numbers or vectors form the basis for generating features for typical machine learning use cases.<\/p>\n\n\n\n<p>Examples of non-Euclidean datasets include things like the numerous relationships of over 100 aircraft systems to one another, links from one group of customers to four additional ones, and the myriad interdependencies of the links between those additional groups. This information isn\u2019t easily vectorized and eludes the capacity of machine learning sans graph neural networks. \u201cEach number in the vector would actually be dependent on other parts of the graph, so it\u2019s too complicated,\u201d Aasman commented. \u201cOnce things get into sparse graphs and you have networks of things, networks of drugs, and genes, and drug molecules, it becomes really hard to predict if a particular drug is missing a link to something else.\u201d<\/p>\n\n\n\n<p><strong>Relationship Predictions<\/strong><\/p>\n\n\n\n<p>When the context between nodes, entities, or events is really important (like in the pharmaceutical use case Aasman referenced or any other complex network application), graph neural networks provide predictive accuracy by understanding the data\u2019s relationships. This quality manifests in three chief ways:<\/p>\n\n\n\n<ul><li><strong>Predicting Links:<\/strong> Graph neural networks are adept at predicting links between nodes to readily comprehend if entities are related, how so, and what effect that relationship will have on business objectives. This insight is key for answering questions like \u201cdo certain events happen more often for a patient, for an aircraft, or in a text document, and can I actually predict the next event,\u201d Aasman disclosed.<\/li><li><strong>Classifying Entities:<\/strong> It\u2019s simple to classify entities based on attributes. Graph neural networks do this while considering the links between entities, resulting in new classifications that are difficult to achieve without graphs. 
This application involves supervised learning; predicting relationships entails unsupervised learning.<\/li><li><strong>Graph Clusters:<\/strong> This capability indicates how many clusters a specific graph contains and how those clusters relate to each other. This topological information is based on unsupervised learning.<\/li><\/ul>\n\n\n\n<p>Combining these qualities with data models rich in temporal information (such as the times of events, e.g. when customers made purchases) generates cogent machine learning predictions. This approach can illustrate a patient\u2019s medical future based on his or her past and all the relevant events that comprise it. \u201cYou can say given this patient, give me the next disease and the next chance that you get that disease in order of descending chance,\u201d Aasman remarked. Organizations can do the same thing for customer churn, loan failure, <a href=\"https:\/\/go.forrester.com\/blogs\/ai-is-transforming-fraud-detection-in-fsi\/\" target=\"_blank\" rel=\"noreferrer noopener\">certain types of fraud<\/a>, or other use cases.<\/p>\n\n\n\n<p><strong>Topological Text Classification, Picture Understanding<\/strong><\/p>\n\n\n\n<p>Graph neural networks render transformational outcomes when their unparalleled relationship discernment concentrates on aspects of NLP and computer vision. For the former, they support topological text classification, which is foundational for swifter, more granular comprehension of written language. Conventional entity extraction can pinpoint key terms in text. \u201cBut in a sentence, things can refer back to a previous word, to a later word,\u201d Aasman explained. 
\u201cEntity extraction doesn\u2019t look at this at all, but a graph neural network will look at the structure of the sentence, then you can do way more in terms of understanding.\u201d<\/p>\n\n\n\n<p>This approach also underpins picture understanding, in which graph neural networks understand how the different objects in a single picture relate. Without them, machine learning can just identify various objects in a scene. With them, it can glean how those objects are interacting or relate to each other. \u201c[Non-graph neural network] machine learning doesn\u2019t do that,\u201d Aasman specified. \u201cNot how all the things in the scene fit together.\u201d Coupling graph neural networks <a href=\"https:\/\/www.gartner.com\/smarterwithgartner\/4-impactful-technologies-from-the-gartner-emerging-technologies-and-trends-impact-radar-for-2021\/\" target=\"_blank\" rel=\"noreferrer noopener\">with conventional neural networks<\/a> can richly describe the images in scenes and, conversely, generate detailed scenes from descriptions.<\/p>\n\n\n\n<p><strong>Graph Approaches<\/strong><\/p>\n\n\n\n<p>Graph neural networks are based on the neural networks that were initially devised in the 20<sup>th<\/sup> century. However, graph approaches enable the former to overcome the limits of vectorization to operate on high-dimensionality, non-Euclidean datasets. 
Specific graph techniques (and techniques amenable to graphs) aiding in this endeavor include:<\/p>\n\n\n\n<ul><li><strong>Jaccard Index:<\/strong> When trying to establish whether a missing link should exist between one pair of nodes or another, for example, <a href=\"https:\/\/arxiv.org\/abs\/1911.01685.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">the Jaccard index<\/a> can inform this decision by revealing \u201cto what extent two nodes are similar in a graph,\u201d Aasman said.<\/li><li><strong>Preferential Attachment:<\/strong> This statistical concept is a \u201ctechnique they call the winner takes all where you can predict if someone is going to get everything or you won\u2019t get anything,\u201d Aasman mentioned. Preferential attachment scores a prospective link by how well connected its two endpoints already are, so heavily connected nodes tend to attract still more links.<\/li><li><strong>Centrality:<\/strong> Centrality indicates how important a node is in a network; betweenness centrality, for instance, reflects how often a node lies on paths between other nodes.<\/li><\/ul>\n\n\n\n<p>These and other graph approaches enable graph neural networks to work on high-dimensionality data without vectorizing it, thereby expanding the overall utility of enterprise machine learning applications.<\/p>\n\n\n\n<p><strong>Poly-Dimensionality Machine Learning Scale<\/strong><\/p>\n\n\n\n<p>The critical distinction between applying graph neural networks to the foregoing use cases and applying typical machine learning approaches is the complexity of the relationships analyzed\u2014and the scale of that complexity. Aasman explained a use case in which graph neural networks made accurate predictions about the actions of world leaders based on inputs spanning the better part of a year, over 20,000 entities, and nearly half a million events. Such foresight is far from academic when shifted to customer behavior, healthcare treatment, or other mission-critical deployments. 
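The Jaccard index and preferential attachment measures described above have simple closed forms, sketched here on a hypothetical toy graph (node names and neighbor sets are invented for illustration): the Jaccard index of two nodes is |N(u) &cap; N(v)| / |N(u) &cup; N(v)| over their neighbor sets, and the preferential attachment score is the product of their degrees.

```python
# Toy neighbor sets for an undirected graph (illustrative only).
neighbors = {
    "a": {"b", "c", "d"},
    "e": {"c", "d"},
}

def jaccard(u, v):
    # Neighborhood similarity: |N(u) & N(v)| / |N(u) union N(v)|.
    nu, nv = neighbors[u], neighbors[v]
    return len(nu & nv) / len(nu | nv)

def preferential_attachment(u, v):
    # Degree product: high-degree nodes tend to gain yet more links
    # (the "winner takes all" effect).
    return len(neighbors[u]) * len(neighbors[v])

# Scoring the candidate missing link a-e:
score_j = jaccard("a", "e")                   # |{c,d}| / |{b,c,d}| = 2/3
score_pa = preferential_attachment("a", "e")  # 3 * 2 = 6
```

A higher Jaccard score suggests the two nodes play similar roles and that the missing link is plausible; graph libraries such as NetworkX ship equivalent jaccard_coefficient and preferential_attachment scorers for link prediction.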
Consequently, it may be impacting cognitive computing deployments sooner than organizations realize.<\/p>\n\n\n\n<p><strong>About the Author<\/strong><\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"alignleft size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"125\" height=\"125\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/10\/Jelani-Harper.jpg\" alt=\"\" class=\"wp-image-23475\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/10\/Jelani-Harper.jpg 125w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/10\/Jelani-Harper-110x110.jpg 110w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/10\/Jelani-Harper-50x50.jpg 50w\" sizes=\"(max-width: 125px) 100vw, 125px\" \/><\/figure><\/div>\n\n\n\n<p><em>Jelani Harper is an editorial consultant servicing the information technology market. He specializes in data-driven applications focused on semantic technologies, data governance and analytics.<\/em><\/p>\n\n\n\n<p><em>Sign up for the free insideBIGDATA&nbsp;<a rel=\"noreferrer noopener\" href=\"http:\/\/insidebigdata.com\/newsletter\/\" target=\"_blank\">newsletter<\/a>.<\/em><\/p>\n\n\n\n<p><em>Join us on Twitter:&nbsp;@InsideBigData1 \u2013 <a href=\"https:\/\/twitter.com\/InsideBigData1\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/twitter.com\/InsideBigData1<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this contributed article, editorial consultant Jelani Harper points out that a generous portion of enterprise data is Euclidean and readily vectorized. However, there\u2019s a wealth of non-Euclidean, multidimensional data serving as the catalyst for astounding machine learning use cases. 
<\/p>\n","protected":false},"author":10513,"featured_media":22407,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[526,65,115,270,87,180,67,56,97,1],"tags":[437,1025,106,714,652,96],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>The Amazing Applications of Graph Neural Networks - insideBIGDATA<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/insidebigdata.com\/2021\/06\/26\/the-amazing-applications-of-graph-neural-networks\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The Amazing Applications of Graph Neural Networks - insideBIGDATA\" \/>\n<meta property=\"og:description\" content=\"In this contributed article, editorial consultant Jelani Harper points out that a generous portion of enterprise data is Euclidian and readily vectorized. 
However, there\u2019s a wealth of non-Euclidian, multidimensionality data serving as the catalyst for astounding machine learning use cases.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/insidebigdata.com\/2021\/06\/26\/the-amazing-applications-of-graph-neural-networks\/\" \/>\n<meta property=\"og:site_name\" content=\"insideBIGDATA\" \/>\n<meta property=\"article:publisher\" content=\"http:\/\/www.facebook.com\/insidebigdata\" \/>\n<meta property=\"article:published_time\" content=\"2021-06-26T13:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2021-10-12T22:55:46+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/04\/graph-analytics_SHUTTERSTOCK.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"300\" \/>\n\t<meta property=\"og:image:height\" content=\"174\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Editorial Team\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:site\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Editorial Team\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/insidebigdata.com\/2021\/06\/26\/the-amazing-applications-of-graph-neural-networks\/\",\"url\":\"https:\/\/insidebigdata.com\/2021\/06\/26\/the-amazing-applications-of-graph-neural-networks\/\",\"name\":\"The Amazing Applications of Graph Neural Networks - insideBIGDATA\",\"isPartOf\":{\"@id\":\"https:\/\/insidebigdata.com\/#website\"},\"datePublished\":\"2021-06-26T13:00:00+00:00\",\"dateModified\":\"2021-10-12T22:55:46+00:00\",\"author\":{\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\"},\"breadcrumb\":{\"@id\":\"https:\/\/insidebigdata.com\/2021\/06\/26\/the-amazing-applications-of-graph-neural-networks\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/insidebigdata.com\/2021\/06\/26\/the-amazing-applications-of-graph-neural-networks\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/insidebigdata.com\/2021\/06\/26\/the-amazing-applications-of-graph-neural-networks\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/insidebigdata.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"The Amazing Applications of Graph Neural Networks\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/insidebigdata.com\/#website\",\"url\":\"https:\/\/insidebigdata.com\/\",\"name\":\"insideBIGDATA\",\"description\":\"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/insidebigdata.com\/?s={search_term_string}\"},\"query-input\":\"required 
name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\",\"name\":\"Editorial Team\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"caption\":\"Editorial Team\"},\"sameAs\":[\"http:\/\/www.insidebigdata.com\"],\"url\":\"https:\/\/insidebigdata.com\/author\/editorial\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"The Amazing Applications of Graph Neural Networks - insideBIGDATA","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/insidebigdata.com\/2021\/06\/26\/the-amazing-applications-of-graph-neural-networks\/","og_locale":"en_US","og_type":"article","og_title":"The Amazing Applications of Graph Neural Networks - insideBIGDATA","og_description":"In this contributed article, editorial consultant Jelani Harper points out that a generous portion of enterprise data is Euclidian and readily vectorized. 
However, there\u2019s a wealth of non-Euclidian, multidimensionality data serving as the catalyst for astounding machine learning use cases.","og_url":"https:\/\/insidebigdata.com\/2021\/06\/26\/the-amazing-applications-of-graph-neural-networks\/","og_site_name":"insideBIGDATA","article_publisher":"http:\/\/www.facebook.com\/insidebigdata","article_published_time":"2021-06-26T13:00:00+00:00","article_modified_time":"2021-10-12T22:55:46+00:00","og_image":[{"width":300,"height":174,"url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/04\/graph-analytics_SHUTTERSTOCK.jpg","type":"image\/jpeg"}],"author":"Editorial Team","twitter_card":"summary_large_image","twitter_creator":"@insideBigData","twitter_site":"@insideBigData","twitter_misc":{"Written by":"Editorial Team","Est. reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/insidebigdata.com\/2021\/06\/26\/the-amazing-applications-of-graph-neural-networks\/","url":"https:\/\/insidebigdata.com\/2021\/06\/26\/the-amazing-applications-of-graph-neural-networks\/","name":"The Amazing Applications of Graph Neural Networks - 
insideBIGDATA","isPartOf":{"@id":"https:\/\/insidebigdata.com\/#website"},"datePublished":"2021-06-26T13:00:00+00:00","dateModified":"2021-10-12T22:55:46+00:00","author":{"@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9"},"breadcrumb":{"@id":"https:\/\/insidebigdata.com\/2021\/06\/26\/the-amazing-applications-of-graph-neural-networks\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/insidebigdata.com\/2021\/06\/26\/the-amazing-applications-of-graph-neural-networks\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/insidebigdata.com\/2021\/06\/26\/the-amazing-applications-of-graph-neural-networks\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/insidebigdata.com\/"},{"@type":"ListItem","position":2,"name":"The Amazing Applications of Graph Neural Networks"}]},{"@type":"WebSite","@id":"https:\/\/insidebigdata.com\/#website","url":"https:\/\/insidebigdata.com\/","name":"insideBIGDATA","description":"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/insidebigdata.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9","name":"Editorial Team","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","caption":"Editorial 
Team"},"sameAs":["http:\/\/www.insidebigdata.com"],"url":"https:\/\/insidebigdata.com\/author\/editorial\/"}]}},"jetpack_featured_media_url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/04\/graph-analytics_SHUTTERSTOCK.jpg","jetpack_shortlink":"https:\/\/wp.me\/p9eA3j-6TU","jetpack-related-posts":[{"id":30964,"url":"https:\/\/insidebigdata.com\/2022\/11\/28\/2023-trends-in-artificial-intelligence-and-machine-learning-generative-ai-unfolds\/","url_meta":{"origin":26530,"position":0},"title":"2023 Trends in Artificial Intelligence and Machine Learning: Generative AI Unfolds\u00a0\u00a0","date":"November 28, 2022","format":false,"excerpt":"In this contributed article, editorial consultant Jelani Harper offers his perspectives around 2023 trends for the boundless potential of generative Artificial Intelligence\u2014the variety of predominantly advanced machine learning that analyzes content to produce strikingly similar new content.","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2018\/09\/artificial-intelligence-3382507_640.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":21456,"url":"https:\/\/insidebigdata.com\/2018\/11\/15\/best-arxiv-org-ai-machine-learning-deep-learning-october-2018\/","url_meta":{"origin":26530,"position":1},"title":"Best of arXiv.org for AI, Machine Learning, and Deep Learning \u2013 October 2018","date":"November 15, 2018","format":false,"excerpt":"In this recurring monthly feature, we will filter all the recent research papers appearing in the arXiv.org preprint server for subjects relating to AI, machine learning and deep learning \u2013 from disciplines including statistics, mathematics and computer science \u2013 and provide you with a useful \u201cbest of\u201d list for the\u2026","rel":"","context":"In &quot;AI Deep 
Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2013\/12\/arxiv.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":21207,"url":"https:\/\/insidebigdata.com\/2018\/10\/07\/introduction-deep-learning-neural-networks\/","url_meta":{"origin":26530,"position":2},"title":"An Introduction to Deep Learning and Neural Networks","date":"October 7, 2018","format":false,"excerpt":"In this contributed article, Agile SEO technical writer and editor Limor Wainstein outlines how deep learning, neural networks, and machine learning are not interchangeable terms. This article helps to clarify the definitions for you with an introduction to deep learning and neural networks.","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2018\/10\/Neural-Network-diagram.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":21994,"url":"https:\/\/insidebigdata.com\/2019\/01\/16\/best-of-arxiv-org-for-ai-machine-learning-and-deep-learning-december-2018\/","url_meta":{"origin":26530,"position":3},"title":"Best of arXiv.org for AI, Machine Learning, and Deep Learning \u2013 December 2018","date":"January 16, 2019","format":false,"excerpt":"In this recurring monthly feature, we will filter all the recent research papers appearing in the arXiv.org preprint server for subjects relating to AI, machine learning and deep learning \u2013 from disciplines including statistics, mathematics and computer science \u2013 and provide you with a useful \u201cbest of\u201d list for the\u2026","rel":"","context":"In &quot;AI Deep 
Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2013\/12\/arxiv.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":19891,"url":"https:\/\/insidebigdata.com\/2018\/02\/09\/best-arxiv-org-ai-machine-learning-deep-learning-january-2018\/","url_meta":{"origin":26530,"position":4},"title":"Best of arXiv.org for AI, Machine Learning, and Deep Learning \u2013 January 2018","date":"February 9, 2018","format":false,"excerpt":"In this recurring monthly feature, we will filter all the recent research papers appearing in the arXiv.org preprint server for subjects relating to AI, machine learning and deep learning \u2013 from disciplines including statistics, mathematics and computer science \u2013 and provide you with a useful \u201cbest of\u201d list for the\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":22845,"url":"https:\/\/insidebigdata.com\/2019\/06\/26\/best-of-arxiv-org-for-ai-machine-learning-and-deep-learning-may-2019\/","url_meta":{"origin":26530,"position":5},"title":"Best of arXiv.org for AI, Machine Learning, and Deep Learning \u2013 May 2019","date":"June 26, 2019","format":false,"excerpt":"In this recurring monthly feature, we will filter all the recent research papers appearing in the arXiv.org preprint server for subjects relating to AI, machine learning and deep learning \u2013 from disciplines including statistics, mathematics and computer science \u2013 and provide you with a useful \u201cbest of\u201d list for the\u2026","rel":"","context":"In &quot;AI Deep 
Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2013\/12\/arxiv.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]}],"_links":{"self":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/26530"}],"collection":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/users\/10513"}],"replies":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/comments?post=26530"}],"version-history":[{"count":0,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/26530\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media\/22407"}],"wp:attachment":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media?parent=26530"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/categories?post=26530"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/tags?post=26530"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}