{"id":30968,"date":"2022-11-26T06:00:00","date_gmt":"2022-11-26T14:00:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=30968"},"modified":"2022-11-23T14:52:13","modified_gmt":"2022-11-23T22:52:13","slug":"chung-ang-university-researchers-develop-algorithm-for-optimal-decision-making-under-heavy-tailed-noisy-rewards","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2022\/11\/26\/chung-ang-university-researchers-develop-algorithm-for-optimal-decision-making-under-heavy-tailed-noisy-rewards\/","title":{"rendered":"Chung-Ang University Researchers Develop Algorithm for Optimal Decision Making under Heavy-tailed Noisy Rewards"},"content":{"rendered":"\n<p class=\"has-text-align-center\"><em>Researchers propose methods that theoretically guarantee minimal loss for worst-case scenarios with minimal prior information for heavy-tailed reward distributions<\/em><\/p>\n\n\n\n<p>The exploration algorithms for stochastic multi-armed bandits (MABs)\u2013sequential decision-making problems under uncertain environments\u2013typically assume light-tailed distributions for reward noise. However, real-world datasets often show heavy-tailed noise. In light of this, researchers from Korea propose an algorithm that can achieve minimax optimality (minimum loss under the maximum-loss scenario) with minimal prior information. The new algorithm outperforms existing ones and has potential applications in autonomous trading and personalized recommendation systems.<\/p>\n\n\n\n<p>In data science, researchers typically deal with data that contain noisy observations. An important problem explored in this context is that of sequential decision making, commonly known as a \u201cstochastic multi-armed bandit\u201d (stochastic MAB). Here, an intelligent agent sequentially explores and selects actions based on noisy rewards under an uncertain environment. 
Its goal is to minimize the cumulative regret\u2013the difference between the maximum reward and the expected reward of the selected actions. A smaller regret implies more efficient decision making.<\/p>\n\n\n\n<p>Most existing studies on stochastic MABs have performed regret analysis under the assumption that the reward noise follows a light-tailed distribution. However, many real-world datasets in fact show heavy-tailed noise distributions. These include user behavioral pattern data used for developing personalized recommendation systems, stock price data for developing automated trading systems, and sensor data for autonomous driving.<\/p>\n\n\n\n<p>In a recent study, Assistant Professor Kyungjae Lee of <a href=\"https:\/\/neweng.cau.ac.kr\/index.do\" target=\"_blank\" rel=\"noreferrer noopener\">Chung-Ang University<\/a> and Assistant Professor Sungbin Lim of the Ulsan National Institute of Science and Technology (UNIST), both in Korea, addressed this issue. In their theoretical analysis, they proved that the existing algorithms for stochastic MABs were sub-optimal for heavy-tailed rewards. More specifically, the methods employed in these algorithms\u2013robust upper confidence bound (UCB) and adaptively perturbed exploration (APE) with unbounded perturbation\u2013do not guarantee minimax (minimization of the maximum possible loss) optimality.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>\u201cBased on this analysis, minimax optimal robust (MR) UCB and APE methods have been proposed. MR-UCB utilizes a tighter confidence bound of robust mean estimators, and MR-APE is its randomized version. It employs bounded perturbation whose scale follows the modified confidence bound in MR-UCB,\u201d\u00a0explains Dr. 
Lee, speaking of their work, which was\u00a0published in the\u00a0<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9893089\" target=\"_blank\" rel=\"noreferrer noopener\"><em>IEEE Transactions on Neural Networks and Learning Systems<\/em>\u00a0on 14 September 2022<\/a>.<\/p><\/blockquote>\n\n\n\n<p><\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"700\" height=\"201\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2022\/11\/Chung-Ang_paper_fig1.png\" alt=\"\" class=\"wp-image-30969\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2022\/11\/Chung-Ang_paper_fig1.png 700w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2022\/11\/Chung-Ang_paper_fig1-300x86.png 300w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2022\/11\/Chung-Ang_paper_fig1-150x43.png 150w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/figure><\/div>\n\n\n<p>The researchers next derived gap-dependent and gap-independent upper bounds on the cumulative regret. For both proposed methods, the gap-independent bound matches the lower bound under the heavy-tailed noise assumption, thereby achieving minimax optimality. Further, the new methods require minimal prior information and depend only on the maximum order of the bounded moment of rewards. In contrast, the existing algorithms require the upper bound of this moment\u00a0<em>a priori<\/em>\u2013information that may not be accessible in many real-world problems.<\/p>\n\n\n\n<p>Having established their theoretical framework, the researchers tested their methods by performing simulations under Pareto and Fr\u00e9chet noises. 
They found that MR-UCB consistently outperformed other exploration methods and was more robust as the number of actions increased under heavy-tailed noise.<\/p>\n\n\n\n<p>Further, the duo verified their approach on real-world data using a cryptocurrency dataset, showing that the benefits of MR-UCB and MR-APE\u2013minimax optimal regret bounds and minimal prior knowledge\u2013carry over to heavy-tailed synthetic and real-world stochastic MAB problems.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\"><p>\u201cBeing vulnerable to heavy-tailed noise, the existing MAB algorithms show poor performance in modeling stock data. They fail to predict big hikes or sudden drops in stock prices, causing huge losses. In contrast, MR-APE can be used in autonomous trading systems with stable expected returns through stock investment,\u201d\u00a0comments Dr. Lee, discussing the potential applications of the present work. \u201cAdditionally, it can be applied to personalized recommendation systems since behavioral data shows heavy-tailed noise. 
With better predictions of individual behavior, it is possible to provide better recommendations than conventional methods, which can maximize the advertising revenue,\u201d\u00a0he concludes.<\/p><\/blockquote>\n\n\n\n<p><em>Sign up for the free insideBIGDATA&nbsp;<a href=\"http:\/\/inside-bigdata.com\/newsletter\/\" target=\"_blank\" rel=\"noreferrer noopener\">newsletter<\/a>.<\/em><\/p>\n\n\n\n<p><em>Join us on Twitter:&nbsp;<a href=\"https:\/\/twitter.com\/InsideBigData1\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/twitter.com\/InsideBigData1<\/a><\/em><\/p>\n\n\n\n<p><em>Join us on LinkedIn:&nbsp;<a href=\"https:\/\/www.linkedin.com\/company\/insidebigdata\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.linkedin.com\/company\/insidebigdata\/<\/a><\/em><\/p>\n\n\n\n<p><em>Join us on Facebook:&nbsp;<a href=\"https:\/\/www.facebook.com\/insideBIGDATANOW\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.facebook.com\/insideBIGDATANOW<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Researchers from Chung-Ang University, South Korea, propose methods that theoretically guarantee minimal loss for worst-case scenarios with minimal prior information for heavy-tailed reward distributions. 
<\/p>\n","protected":false},"author":10513,"featured_media":27259,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[526,115,182,180,67,268,56,77,84,1],"tags":[133,264,277,96],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Chung-Ang University Researchers Develop Algorithm for Optimal Decision Making under Heavy-tailed Noisy Rewards - insideBIGDATA<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/insidebigdata.com\/2022\/11\/26\/chung-ang-university-researchers-develop-algorithm-for-optimal-decision-making-under-heavy-tailed-noisy-rewards\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Chung-Ang University Researchers Develop Algorithm for Optimal Decision Making under Heavy-tailed Noisy Rewards - insideBIGDATA\" \/>\n<meta property=\"og:description\" content=\"Researchers from South Korean Chung-Ang University propose methods that theoretically guarantee minimal loss for worst case scenarios with minimal prior information for heavy-tailed reward distributions.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/insidebigdata.com\/2022\/11\/26\/chung-ang-university-researchers-develop-algorithm-for-optimal-decision-making-under-heavy-tailed-noisy-rewards\/\" \/>\n<meta property=\"og:site_name\" content=\"insideBIGDATA\" \/>\n<meta property=\"article:publisher\" content=\"http:\/\/www.facebook.com\/insidebigdata\" \/>\n<meta property=\"article:published_time\" content=\"2022-11-26T14:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2022-11-23T22:52:13+00:00\" \/>\n<meta property=\"og:image\" 
content=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/09\/algorithm_shutterstock_718579645.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"300\" \/>\n\t<meta property=\"og:image:height\" content=\"195\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Editorial Team\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:site\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Editorial Team\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/insidebigdata.com\/2022\/11\/26\/chung-ang-university-researchers-develop-algorithm-for-optimal-decision-making-under-heavy-tailed-noisy-rewards\/\",\"url\":\"https:\/\/insidebigdata.com\/2022\/11\/26\/chung-ang-university-researchers-develop-algorithm-for-optimal-decision-making-under-heavy-tailed-noisy-rewards\/\",\"name\":\"Chung-Ang University Researchers Develop Algorithm for Optimal Decision Making under Heavy-tailed Noisy Rewards - 
insideBIGDATA\",\"isPartOf\":{\"@id\":\"https:\/\/insidebigdata.com\/#website\"},\"datePublished\":\"2022-11-26T14:00:00+00:00\",\"dateModified\":\"2022-11-23T22:52:13+00:00\",\"author\":{\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\"},\"breadcrumb\":{\"@id\":\"https:\/\/insidebigdata.com\/2022\/11\/26\/chung-ang-university-researchers-develop-algorithm-for-optimal-decision-making-under-heavy-tailed-noisy-rewards\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/insidebigdata.com\/2022\/11\/26\/chung-ang-university-researchers-develop-algorithm-for-optimal-decision-making-under-heavy-tailed-noisy-rewards\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/insidebigdata.com\/2022\/11\/26\/chung-ang-university-researchers-develop-algorithm-for-optimal-decision-making-under-heavy-tailed-noisy-rewards\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/insidebigdata.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Chung-Ang University Researchers Develop Algorithm for Optimal Decision Making under Heavy-tailed Noisy Rewards\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/insidebigdata.com\/#website\",\"url\":\"https:\/\/insidebigdata.com\/\",\"name\":\"insideBIGDATA\",\"description\":\"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/insidebigdata.com\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\",\"name\":\"Editorial 
Team\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"caption\":\"Editorial Team\"},\"sameAs\":[\"http:\/\/www.insidebigdata.com\"],\"url\":\"https:\/\/insidebigdata.com\/author\/editorial\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Chung-Ang University Researchers Develop Algorithm for Optimal Decision Making under Heavy-tailed Noisy Rewards - insideBIGDATA","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/insidebigdata.com\/2022\/11\/26\/chung-ang-university-researchers-develop-algorithm-for-optimal-decision-making-under-heavy-tailed-noisy-rewards\/","og_locale":"en_US","og_type":"article","og_title":"Chung-Ang University Researchers Develop Algorithm for Optimal Decision Making under Heavy-tailed Noisy Rewards - insideBIGDATA","og_description":"Researchers from South Korean Chung-Ang University propose methods that theoretically guarantee minimal loss for worst case scenarios with minimal prior information for heavy-tailed reward distributions.","og_url":"https:\/\/insidebigdata.com\/2022\/11\/26\/chung-ang-university-researchers-develop-algorithm-for-optimal-decision-making-under-heavy-tailed-noisy-rewards\/","og_site_name":"insideBIGDATA","article_publisher":"http:\/\/www.facebook.com\/insidebigdata","article_published_time":"2022-11-26T14:00:00+00:00","article_modified_time":"2022-11-23T22:52:13+00:00","og_image":[{"width":300,"height":195,"url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/09\/algorithm_shutterstock_718579645.jpg","type":"image\/jpeg"}],"author":"Editorial 
Team","twitter_card":"summary_large_image","twitter_creator":"@insideBigData","twitter_site":"@insideBigData","twitter_misc":{"Written by":"Editorial Team","Est. reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/insidebigdata.com\/2022\/11\/26\/chung-ang-university-researchers-develop-algorithm-for-optimal-decision-making-under-heavy-tailed-noisy-rewards\/","url":"https:\/\/insidebigdata.com\/2022\/11\/26\/chung-ang-university-researchers-develop-algorithm-for-optimal-decision-making-under-heavy-tailed-noisy-rewards\/","name":"Chung-Ang University Researchers Develop Algorithm for Optimal Decision Making under Heavy-tailed Noisy Rewards - insideBIGDATA","isPartOf":{"@id":"https:\/\/insidebigdata.com\/#website"},"datePublished":"2022-11-26T14:00:00+00:00","dateModified":"2022-11-23T22:52:13+00:00","author":{"@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9"},"breadcrumb":{"@id":"https:\/\/insidebigdata.com\/2022\/11\/26\/chung-ang-university-researchers-develop-algorithm-for-optimal-decision-making-under-heavy-tailed-noisy-rewards\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/insidebigdata.com\/2022\/11\/26\/chung-ang-university-researchers-develop-algorithm-for-optimal-decision-making-under-heavy-tailed-noisy-rewards\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/insidebigdata.com\/2022\/11\/26\/chung-ang-university-researchers-develop-algorithm-for-optimal-decision-making-under-heavy-tailed-noisy-rewards\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/insidebigdata.com\/"},{"@type":"ListItem","position":2,"name":"Chung-Ang University Researchers Develop Algorithm for Optimal Decision Making under Heavy-tailed Noisy 
Rewards"}]},{"@type":"WebSite","@id":"https:\/\/insidebigdata.com\/#website","url":"https:\/\/insidebigdata.com\/","name":"insideBIGDATA","description":"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/insidebigdata.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9","name":"Editorial Team","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","caption":"Editorial Team"},"sameAs":["http:\/\/www.insidebigdata.com"],"url":"https:\/\/insidebigdata.com\/author\/editorial\/"}]}},"jetpack_featured_media_url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/09\/algorithm_shutterstock_718579645.jpg","jetpack_shortlink":"https:\/\/wp.me\/p9eA3j-83u","jetpack-related-posts":[{"id":32500,"url":"https:\/\/insidebigdata.com\/2023\/05\/29\/uva-researchers-built-an-ai-algorithm-that-understands-physics\/","url_meta":{"origin":30968,"position":0},"title":"UVA Researchers Built an AI Algorithm That Understands Physics","date":"May 29, 2023","format":false,"excerpt":"Normally, when testing the behavior of materials under high heat or explosive conditions, researchers have to run simulation after simulation, a data-intensive process that can take days even on a supercomputer. 
However, with a deep learning algorithm created by Stephen Baek, Phong Nguyen and their research team, the process takes\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2019\/05\/Artificial_intelligence_2_SHUTTERSTOCK.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":22953,"url":"https:\/\/insidebigdata.com\/2019\/07\/18\/best-of-arxiv-org-for-ai-machine-learning-and-deep-learning-june-2019\/","url_meta":{"origin":30968,"position":1},"title":"Best of arXiv.org for AI, Machine Learning, and Deep Learning \u2013 June 2019","date":"July 18, 2019","format":false,"excerpt":"In this recurring monthly feature, we will filter all the recent research papers appearing in the arXiv.org preprint server for subjects relating to AI, machine learning and deep learning \u2013 from disciplines including statistics, mathematics and computer science \u2013 and provide you with a useful \u201cbest of\u201d list for the\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2013\/12\/arxiv.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":24597,"url":"https:\/\/insidebigdata.com\/2020\/07\/03\/yolo-revisited\/","url_meta":{"origin":30968,"position":2},"title":"Research Highlights: YOLO Revisited","date":"July 3, 2020","format":false,"excerpt":"In the insideBIGDATA Research Highlights column we take a look at new and upcoming results from the research community for data science, machine learning, AI and deep learning. 
Our readers need to get a glimpse for technology coming down the pipeline that will make their efforts more strategic and competitive.\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"deep learning","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2018\/10\/shutterstock_1096541144.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":30315,"url":"https:\/\/insidebigdata.com\/2022\/09\/09\/ai-under-the-hood-mixing-things-up-optimizing-fluid-mixing-with-machine-learning\/","url_meta":{"origin":30968,"position":3},"title":"AI Under the Hood: Mixing Things Up &#8211; Optimizing Fluid Mixing with Machine Learning","date":"September 9, 2022","format":false,"excerpt":"Fluid mixing is an important part of several industrial processes and chemical reactions. However, the process often relies on trial-and-error-based experiments instead of mathematical optimization. While turbulent mixing is effective, it cannot always be sustained and can damage the materials involved. 
To address this issue, researchers from Japan (Tokyo University\u2026","rel":"","context":"In &quot;Academic&quot;","img":{"alt_text":"deep learning","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2018\/10\/shutterstock_1096541144.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":19891,"url":"https:\/\/insidebigdata.com\/2018\/02\/09\/best-arxiv-org-ai-machine-learning-deep-learning-january-2018\/","url_meta":{"origin":30968,"position":4},"title":"Best of arXiv.org for AI, Machine Learning, and Deep Learning \u2013 January 2018","date":"February 9, 2018","format":false,"excerpt":"In this recurring monthly feature, we will filter all the recent research papers appearing in the arXiv.org preprint server for subjects relating to AI, machine learning and deep learning \u2013 from disciplines including statistics, mathematics and computer science \u2013 and provide you with a useful \u201cbest of\u201d list for the\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":22263,"url":"https:\/\/insidebigdata.com\/2019\/03\/15\/best-of-arxiv-org-for-ai-machine-learning-and-deep-learning-february-2019\/","url_meta":{"origin":30968,"position":5},"title":"Best of arXiv.org for AI, Machine Learning, and Deep Learning \u2013 February 2019","date":"March 15, 2019","format":false,"excerpt":"In this recurring monthly feature, we will filter all the recent research papers appearing in the arXiv.org preprint server for subjects relating to AI, machine learning and deep learning \u2013 from disciplines including statistics, mathematics and computer science \u2013 and provide you with a useful \u201cbest of\u201d list for the\u2026","rel":"","context":"In &quot;AI Deep 
Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2013\/12\/arxiv.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]}],"_links":{"self":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/30968"}],"collection":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/users\/10513"}],"replies":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/comments?post=30968"}],"version-history":[{"count":0,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/30968\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media\/27259"}],"wp:attachment":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media?parent=30968"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/categories?post=30968"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/tags?post=30968"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}