{"id":27948,"date":"2021-12-17T06:00:00","date_gmt":"2021-12-17T14:00:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=27948"},"modified":"2021-12-18T09:43:56","modified_gmt":"2021-12-18T17:43:56","slug":"best-of-arxiv-org-for-ai-machine-learning-and-deep-learning-november-2021","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2021\/12\/17\/best-of-arxiv-org-for-ai-machine-learning-and-deep-learning-november-2021\/","title":{"rendered":"Best of arXiv.org for AI, Machine Learning, and Deep Learning \u2013 November 2021"},"content":{"rendered":"\n<div class=\"wp-block-image is-style-default\"><figure class=\"alignright size-full is-resized\"><img decoding=\"async\" loading=\"lazy\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2013\/12\/arxiv.jpg\" alt=\"\" class=\"wp-image-6361\" width=\"237\" height=\"200\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2013\/12\/arxiv.jpg 450w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2013\/12\/arxiv-150x127.jpg 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2013\/12\/arxiv-300x253.jpg 300w\" sizes=\"(max-width: 237px) 100vw, 237px\" \/><\/figure><\/div>\n\n\n\n<p>In this recurring monthly feature, we filter recent research papers appearing on the <a rel=\"noreferrer noopener\" href=\"https:\/\/arxiv.org\/\" target=\"_blank\">arXiv.org<\/a> preprint server for compelling subjects relating to AI, machine learning and deep learning \u2013 from disciplines including statistics, mathematics and computer science \u2013 and provide you with a useful \u201cbest of\u201d list for the past month. Researchers from all over the world contribute to this repository as a prelude to the peer review process for publication in traditional journals. arXiv contains a veritable treasure trove of statistical learning methods you may use one day in the solution of data science problems. 
The articles listed below represent a small fraction of all articles appearing on the preprint server. They are listed in no particular order with a link to each paper along with a brief overview. Links to GitHub repos are provided when available. Especially relevant articles are marked with a \u201cthumbs up\u201d icon. Consider that these are academic research papers, typically geared toward graduate students, post docs, and seasoned professionals. They generally contain a high degree of mathematics, so be prepared. Enjoy!<\/p>\n\n\n\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2111.13293v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">KNAS &#8211; Green Neural Architecture Search<\/a><\/p>\n\n\n\n<p>Many existing neural architecture search (NAS) solutions rely on downstream training for architecture evaluation, which requires enormous computation. Considering that these computations bring a large carbon footprint, this paper aims to explore a green (namely, environmentally friendly) NAS solution that evaluates architectures without training. Intuitively, gradients induced by the architecture itself directly decide the convergence and generalization results. This paper proposes the gradient kernel hypothesis: gradients can be used as a coarse-grained proxy of downstream training to evaluate randomly initialized networks. To support the hypothesis, a theoretical analysis was conducted to find a practical gradient kernel that has good correlations with training loss and validation performance. According to this hypothesis, a new kernel-based architecture search approach, KNAS, was proposed. Experiments show that KNAS achieves competitive results while being orders of magnitude faster than &#8220;train-then-test&#8221; paradigms on image classification tasks. Furthermore, the extremely low search cost enables wide application. The searched network also outperforms the strong baseline RoBERTa-large on two text classification tasks. 
The GitHub repo associated with this paper can be found <a href=\"https:\/\/github.com\/Jingjing-NLP\/KNAS\" target=\"_blank\" rel=\"noreferrer noopener\">HERE<\/a>. <\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"488\" height=\"411\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_03.png\" alt=\"\" class=\"wp-image-27969\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_03.png 488w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_03-150x126.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_03-300x253.png 300w\" sizes=\"(max-width: 488px) 100vw, 488px\" \/><\/figure><\/div>\n\n\n\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2111.09883v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Swin Transformer V2: Scaling Up Capacity and Resolution<\/a><\/p>\n\n\n\n<p>This paper presents techniques for scaling Swin Transformer up to 3 billion parameters and making it capable of training with images of up to 1,536&#215;1,536 resolution. By scaling up capacity and resolution, Swin Transformer sets new records on four representative vision benchmarks: 84.0% top-1 accuracy on ImageNet-V2 image classification, 63.1\/54.4 box\/mask mAP on COCO object detection, 59.9 mIoU on ADE20K semantic segmentation, and 86.8% top-1 accuracy on Kinetics-400 video action classification. The techniques discussed are generally applicable for scaling up vision models, which has not been as widely explored as that of NLP language models, partly due to the following difficulties in training and applications: 1) vision models often face instability issues at scale, and 2) many downstream vision tasks require high-resolution images or windows, and it is not clear how to effectively transfer models pre-trained at low resolutions to higher resolution ones. 
The GPU memory consumption is also a problem when the image resolution is high. The GitHub repo associated with this paper can be found <a href=\"https:\/\/github.com\/microsoft\/Swin-Transformer\" target=\"_blank\" rel=\"noreferrer noopener\">HERE<\/a>. <\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"437\" height=\"509\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_01.png\" alt=\"\" class=\"wp-image-27951\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_01.png 437w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_01-129x150.png 129w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_01-258x300.png 258w\" sizes=\"(max-width: 437px) 100vw, 437px\" \/><\/figure><\/div>\n\n\n\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2111.04204v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Natural Adversarial Objects<\/a><\/p>\n\n\n\n<p>Although state-of-the-art object detection methods have shown compelling performance, models often are not robust to adversarial attacks and out-of-distribution data. This paper introduces a new dataset, Natural Adversarial Objects (NAO), to evaluate the robustness of object detection models. NAO contains 7,934 images and 9,943 objects that are unmodified and representative of real-world scenarios, but cause state-of-the-art detection models to misclassify with high confidence. The mean average precision (mAP) of EfficientDet-D7 drops 74.5% when evaluated on NAO compared to the standard MSCOCO validation set. Moreover, by comparing a variety of object detection architectures, it&#8217;s found that better performance on MSCOCO validation set does not necessarily translate to better performance on NAO, suggesting that robustness cannot be simply achieved by training a more accurate model. 
NAO can be downloaded <a href=\"https:\/\/drive.google.com\/drive\/folders\/15P8sOWoJku6SSEiHLEts86ORfytGezi8\" target=\"_blank\" rel=\"noreferrer noopener\">HERE<\/a>. <\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"450\" height=\"503\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_02.png\" alt=\"\" class=\"wp-image-27953\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_02.png 450w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_02-134x150.png 134w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_02-268x300.png 268w\" sizes=\"(max-width: 450px) 100vw, 450px\" \/><\/figure><\/div>\n\n\n\n<p><a href=\"https:\/\/arxiv.org\/ftp\/arxiv\/papers\/2111\/2111.11982.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Is Dynamic Rumor Detection on social media Viable? An Unsupervised Perspective<\/a><\/p>\n\n\n\n<p>With the growing popularity and ease of access to the internet, the problem of online rumors is escalating. People rely on social media to gain information readily but often fall prey to false information. There is a lack of credibility assessment techniques for online posts to identify rumors as soon as they arrive. Existing studies have formulated several mechanisms to combat online rumors by developing machine learning and deep learning algorithms. The literature so far provides supervised frameworks for rumor classification that rely on huge training datasets. However, in the online scenario, where supervised learning is demanding, dynamic rumor identification becomes difficult. Early detection of online rumors is a challenging task, and studies addressing it are relatively few. Identifying rumors as soon as they appear online is an urgent need. 
This paper proposes a novel framework for unsupervised rumor detection that relies on an online post&#8217;s content and social features using state-of-the-art clustering techniques. The proposed architecture outperforms several existing baselines and even performs better than several supervised techniques. The proposed method, being lightweight, simple, and robust, is well suited for adoption as a tool for online rumor identification.<\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"700\" height=\"283\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_04.png\" alt=\"\" class=\"wp-image-27971\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_04.png 700w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_04-150x61.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_04-300x121.png 300w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/figure><\/div>\n\n\n\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2111.11554v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">KML &#8211; Using Machine Learning to Improve Storage Systems<\/a><\/p>\n\n\n\n<p>Operating systems include many heuristic algorithms designed to improve overall storage performance and throughput. Because such heuristics cannot work well for all conditions and workloads, system designers have resorted to exposing numerous tunable parameters to users \u2013 essentially burdening users with continually optimizing their own storage systems and applications. Storage systems are usually responsible for most latency in I\/O-heavy applications, so even a small overall latency improvement can be significant. Machine learning (ML) techniques promise to learn patterns, generalize from them, and enable optimal solutions that adapt to changing workloads. 
This paper proposes that ML solutions become a first-class component in OSs and replace manual heuristics to optimize storage systems dynamically. The paper describes a proposed ML architecture, called KML. The researchers developed a prototype KML architecture and applied it to two problems: optimal readahead and NFS read-size values. The experiments show that KML consumes few OS resources, adds negligible latency, and yet can learn patterns that can improve I\/O throughput by as much as 2.3x or 15x for the two use cases, respectively \u2013 even for complex, never-before-seen, concurrently running mixed workloads on different storage devices.<\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"421\" height=\"411\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_05.png\" alt=\"\" class=\"wp-image-27973\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_05.png 421w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_05-150x146.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_05-300x293.png 300w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_05-50x50.png 50w\" sizes=\"(max-width: 421px) 100vw, 421px\" \/><\/figure><\/div>\n\n\n\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2111.11578v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Generative Adversarial Networks for Astronomical Images Generation<\/a><\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"alignleft size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"100\" height=\"100\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/04\/ThumbsUP_shutterstock_325452782.jpg\" alt=\"\" class=\"wp-image-22440\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/04\/ThumbsUP_shutterstock_325452782.jpg 100w, 
https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/04\/ThumbsUP_shutterstock_325452782-50x50.jpg 50w\" sizes=\"(max-width: 100px) 100vw, 100px\" \/><\/figure><\/div>\n\n\n\n<p>Space exploration has always been a source of inspiration for humankind, and thanks to modern telescopes, it is now possible to observe celestial bodies far away from us. With a growing number of real and imagined images of space available on the web, and by exploiting modern deep learning architectures such as Generative Adversarial Networks, it is now possible to generate new representations of space. In this research, using a Lightweight GAN, a dataset of images obtained from the web, and the Galaxy Zoo Dataset, thousands of new images of celestial bodies and galaxies have been generated, and finally, by combining them, a wide view of the universe was created. The GitHub repo associated with this paper can be found <a href=\"https:\/\/github.com\/davide-coccomini\/GAN-Universe\" target=\"_blank\" rel=\"noreferrer noopener\">HERE<\/a>, and the generated images can be explored <a href=\"https:\/\/davide-coccomini.github.io\/GAN-Universe\/\" target=\"_blank\" rel=\"noreferrer noopener\">HERE<\/a>. 
<\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"662\" height=\"423\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_06.png\" alt=\"\" class=\"wp-image-27975\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_06.png 662w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_06-150x96.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_06-300x192.png 300w\" sizes=\"(max-width: 662px) 100vw, 662px\" \/><\/figure><\/div>\n\n\n\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2111.10831v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Learning by Active Forgetting for Neural Networks<\/a><\/p>\n\n\n\n<p>Remembering and forgetting mechanisms are two sides of the same coin in a human learning-memory system. Inspired by human brain memory mechanisms, modern machine learning systems have been working to endow machines with lifelong learning capability through better remembering, while treating forgetting as an antagonist to be overcome. Nevertheless, this idea may capture only half the picture. Recently, a growing number of researchers have argued that a brain is born to forget, i.e., forgetting is a natural and active process for abstract, rich, and flexible representations. This paper presents a learning model with an active forgetting mechanism for artificial neural networks. The active forgetting mechanism (AFM) is introduced to a neural network via a &#8220;plug-and-play&#8221; forgetting layer (P&amp;PF), consisting of groups of inhibitory neurons with an Internal Regulation Strategy (IRS) to adjust their own extinction rate via a lateral inhibition mechanism, and an External Regulation Strategy (ERS) to adjust the extinction rate of excitatory neurons via an inhibition mechanism. 
Experimental studies have shown that the P&amp;PF offers surprising benefits: self-adaptive structure, strong generalization, long-term learning and memory, and robustness to data and parameter perturbation. This work sheds light on the importance of forgetting in the learning process and offers new perspectives to understand the underlying mechanisms of neural networks.<\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"628\" height=\"680\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_07.png\" alt=\"\" class=\"wp-image-27977\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_07.png 628w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_07-139x150.png 139w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_07-277x300.png 277w\" sizes=\"(max-width: 628px) 100vw, 628px\" \/><\/figure><\/div>\n\n\n\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2111.10492v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Feature selection or extraction decision process for clustering using PCA and FRSD<\/a><\/p>\n\n\n\n<p>This paper concerns the critical decision process of extracting or selecting the features before applying a clustering algorithm. Evaluating the importance of features is not obvious, since the most popular methods for doing so are usually designed for supervised learning. A clustering algorithm is an unsupervised method, meaning there is no known output label to match against the input data. This paper proposes a new method to choose the best dimensionality reduction method (selection or extraction) according to the data scientist&#8217;s parameters, aiming to apply a clustering process at the end. 
It uses the Feature Ranking Process Based on Silhouette Decomposition (FRSD) algorithm, a Principal Component Analysis (PCA) algorithm, and a K-Means algorithm along with its metric, the Silhouette Index (SI). This paper presents five use cases based on a smart city dataset. This research also aims to discuss the impacts, the advantages, and the disadvantages of each choice that can be made in this unsupervised learning process.<\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"784\" height=\"632\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_08.png\" alt=\"\" class=\"wp-image-27979\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_08.png 784w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_08-150x121.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_08-300x242.png 300w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_08-768x619.png 768w\" sizes=\"(max-width: 784px) 100vw, 784px\" \/><\/figure><\/div>\n\n\n\n<p><a href=\"https:\/\/arxiv.org\/pdf\/2111.07631v1.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">AI in Games: Techniques, Challenges and Opportunities<\/a><\/p>\n\n\n\n<p>With the breakthrough of AlphaGo, AI for human-computer games has become a very hot topic attracting researchers all around the world, as such games usually serve as an effective benchmark for testing artificial intelligence. Various game AI systems (AIs) have been developed, such as Libratus, OpenAI Five and AlphaStar, that beat professional human players. This paper surveys recent successful game AIs, covering board game AIs, card game AIs, first-person shooter game AIs, and real-time strategy game AIs. 
This survey: 1) compares the main difficulties among different kinds of games for the intelligent decision-making field; 2) illustrates the mainstream frameworks and techniques for developing professional-level AIs; 3) raises the challenges or drawbacks in the current AIs for intelligent decision making; and 4) proposes future trends in games and intelligent decision-making techniques. Finally, the hope is that this brief review can provide an introduction for beginners and inspire insights for researchers in the field of AI in games.<\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"aligncenter size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"700\" height=\"298\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_09.png\" alt=\"\" class=\"wp-image-27981\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_09.png 700w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_09-150x64.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/12\/arXiv_2021_11_09-300x128.png 300w\" sizes=\"(max-width: 700px) 100vw, 700px\" \/><\/figure><\/div>\n\n\n\n<p><em>Sign up for the free insideBIGDATA&nbsp;<a rel=\"noreferrer noopener\" href=\"http:\/\/insidebigdata.com\/newsletter\/\" target=\"_blank\">newsletter<\/a>.<\/em><\/p>\n\n\n\n<p><em>Join us on Twitter:&nbsp;@InsideBigData1 \u2013 <a href=\"https:\/\/twitter.com\/InsideBigData1\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/twitter.com\/InsideBigData1<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this recurring monthly feature, we will filter all the recent research papers appearing in the arXiv.org preprint server for subjects relating to AI, machine learning and deep learning \u2013 from disciplines including statistics, mathematics and computer science \u2013 and provide you with a useful \u201cbest of\u201d list for the 
month.<\/p>\n","protected":false},"author":37,"featured_media":6361,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[526,182,87,180,67,56,77,84,1],"tags":[437,741,133,264,277,933,96],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Best of arXiv.org for AI, Machine Learning, and Deep Learning \u2013 November 2021 - insideBIGDATA<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/insidebigdata.com\/2021\/12\/17\/best-of-arxiv-org-for-ai-machine-learning-and-deep-learning-november-2021\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Best of arXiv.org for AI, Machine Learning, and Deep Learning \u2013 November 2021 - insideBIGDATA\" \/>\n<meta property=\"og:description\" content=\"In this recurring monthly feature, we will filter all the recent research papers appearing in the arXiv.org preprint server for subjects relating to AI, machine learning and deep learning \u2013 from disciplines including statistics, mathematics and computer science \u2013 and provide you with a useful \u201cbest of\u201d list for the month.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/insidebigdata.com\/2021\/12\/17\/best-of-arxiv-org-for-ai-machine-learning-and-deep-learning-november-2021\/\" \/>\n<meta property=\"og:site_name\" content=\"insideBIGDATA\" \/>\n<meta property=\"article:publisher\" content=\"http:\/\/www.facebook.com\/insidebigdata\" \/>\n<meta property=\"article:published_time\" content=\"2021-12-17T14:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2021-12-18T17:43:56+00:00\" \/>\n<meta property=\"og:image\" 
content=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2013\/12\/arxiv.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"450\" \/>\n\t<meta property=\"og:image:height\" content=\"380\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Daniel Gutierrez\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@AMULETAnalytics\" \/>\n<meta name=\"twitter:site\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Daniel Gutierrez\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"10 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/insidebigdata.com\/2021\/12\/17\/best-of-arxiv-org-for-ai-machine-learning-and-deep-learning-november-2021\/\",\"url\":\"https:\/\/insidebigdata.com\/2021\/12\/17\/best-of-arxiv-org-for-ai-machine-learning-and-deep-learning-november-2021\/\",\"name\":\"Best of arXiv.org for AI, Machine Learning, and Deep Learning \u2013 November 2021 - 
insideBIGDATA\",\"isPartOf\":{\"@id\":\"https:\/\/insidebigdata.com\/#website\"},\"datePublished\":\"2021-12-17T14:00:00+00:00\",\"dateModified\":\"2021-12-18T17:43:56+00:00\",\"author\":{\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2540da209c83a68f4f5922848f7376ed\"},\"breadcrumb\":{\"@id\":\"https:\/\/insidebigdata.com\/2021\/12\/17\/best-of-arxiv-org-for-ai-machine-learning-and-deep-learning-november-2021\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/insidebigdata.com\/2021\/12\/17\/best-of-arxiv-org-for-ai-machine-learning-and-deep-learning-november-2021\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/insidebigdata.com\/2021\/12\/17\/best-of-arxiv-org-for-ai-machine-learning-and-deep-learning-november-2021\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/insidebigdata.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Best of arXiv.org for AI, Machine Learning, and Deep Learning \u2013 November 2021\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/insidebigdata.com\/#website\",\"url\":\"https:\/\/insidebigdata.com\/\",\"name\":\"insideBIGDATA\",\"description\":\"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/insidebigdata.com\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2540da209c83a68f4f5922848f7376ed\",\"name\":\"Daniel 
Gutierrez\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/5780282e7e567e2a502233e948464542?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/5780282e7e567e2a502233e948464542?s=96&d=mm&r=g\",\"caption\":\"Daniel Gutierrez\"},\"description\":\"Daniel D. Gutierrez is a Data Scientist with Los Angeles-based AMULET Analytics, a service division of AMULET Development Corp. He's been involved with data science and Big Data long before it came in vogue, so imagine his delight when the Harvard Business Review recently deemed \\\"data scientist\\\" as the sexiest profession for the 21st century. Previously, he taught computer science and database classes at UCLA Extension for over 15 years, and authored three computer industry books on database technology. He also served as technical editor, columnist and writer at a major computer industry monthly publication for 7 years. Follow his data science musings at @AMULETAnalytics.\",\"sameAs\":[\"http:\/\/www.insidebigdata.com\",\"https:\/\/twitter.com\/@AMULETAnalytics\"],\"url\":\"https:\/\/insidebigdata.com\/author\/dangutierrez\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 