{"id":27484,"date":"2021-10-26T06:00:00","date_gmt":"2021-10-26T13:00:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=27484"},"modified":"2021-10-27T10:20:14","modified_gmt":"2021-10-27T17:20:14","slug":"faces-as-the-future-of-ai","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2021\/10\/26\/faces-as-the-future-of-ai\/","title":{"rendered":"Faces as the Future of AI"},"content":{"rendered":"\n<p>Humans are hardwired to look at each other\u2019s faces. Three-month-old infants <a href=\"https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC5698271\/\" target=\"_blank\" rel=\"noreferrer noopener\">prefer looking at faces<\/a> when given a chance. We have a <a href=\"https:\/\/en.wikipedia.org\/wiki\/Fusiform_face_area\" target=\"_blank\" rel=\"noreferrer noopener\">separate brain region<\/a> devoted to facial recognition, and a human can fail to recognize faces while the rest of their visual processing functions perfectly well (a condition known as <a href=\"https:\/\/en.wikipedia.org\/wiki\/Prosopagnosia\" target=\"_blank\" rel=\"noreferrer noopener\">prosopagnosia<\/a>). We are much better at recognizing faces and emotions than virtually anything else; in 1973, Herman Chernoff even suggested using <a href=\"https:\/\/en.wikipedia.org\/wiki\/Chernoff_face\" target=\"_blank\" rel=\"noreferrer noopener\">drawings of faces for multivariate data visualization<\/a>.<\/p>\n\n\n\n<p>For us humans, it makes sense to specialize in faces. We are social animals whose brains <a href=\"https:\/\/oxfordre.com\/psychology\/view\/10.1093\/acrefore\/9780190236557.001.0001\/acrefore-9780190236557-e-44\" target=\"_blank\" rel=\"noreferrer noopener\">probably evolved for social reasons<\/a> and who have an urgent need not only to distinguish individuals but also to recognize variations in emotions: the difference between fear and anger in a fellow primate might mean life or death. 
But it turns out that in artificial intelligence, problems related to human faces are also coming to the forefront of computer vision. Below, we consider some of them, discuss the current state of the art, and introduce a common solution that might advance it in the near future.<\/p>\n\n\n\n<p><strong>Common Issues in Computer Vision<\/strong><\/p>\n\n\n\n<p>First, <em>face recognition<\/em> itself has obvious security-related applications, from unlocking your phone to catching criminals with CCTV cameras. Usually, face recognition is an added layer of security, but as the technology progresses, it might rival fingerprints and other biometrics. Formally, it is a classification problem: choose the correct answer out of several alternatives. But there are <em>a lot<\/em> of faces, and we need to add new people on the fly. Therefore, face recognition systems usually operate by learning to <em>extract features<\/em>, i.e., mapping the picture of a face to a much smaller feature space and then performing information retrieval in that space. Feature learning is almost invariably done with deep neural networks. While modern face recognition systems <a href=\"https:\/\/arxiv.org\/abs\/1902.03524\" target=\"_blank\" rel=\"noreferrer noopener\">achieve excellent results<\/a> and are widely used in practice, this problem continues to give rise to new <a href=\"https:\/\/arxiv.org\/abs\/2006.13026\" target=\"_blank\" rel=\"noreferrer noopener\">fundamental ideas in deep learning<\/a>.<\/p>\n\n\n\n<p><em>Emotion recognition<\/em> (classifying facial expressions) is another human forte, and automating it is important as well. AI assistants can be more helpful if they recognize emotions, and a car might recognize whether the driver is about to fall asleep at the wheel (this technology is <a href=\"https:\/\/arxiv.org\/abs\/2103.02162\" target=\"_blank\" rel=\"noreferrer noopener\">close to production<\/a>). 
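<\/p>\n\n\n\n<p><em>As a toy illustration (not any production system), the feature-space retrieval approach to face recognition described above can be sketched in plain Python; the embedding function below is a hypothetical stand-in for a deep neural network:<\/em><\/p>

```python
import math

# Stand-in for a deep feature extractor: a real system would map a face
# image to a low-dimensional vector with a neural network.
def embed(face_pixels):
    norm = math.sqrt(sum(p * p for p in face_pixels)) or 1.0
    return [p / norm for p in face_pixels]

def cosine(u, v):
    # embeddings are unit-length, so the dot product is cosine similarity
    return sum(a * b for a, b in zip(u, v))

class FaceIndex:
    """Gallery of known identities; new people are added on the fly
    from one or two photos, without retraining the feature extractor."""
    def __init__(self):
        self.gallery = {}  # name -> reference embedding

    def enroll(self, name, face_pixels):
        self.gallery[name] = embed(face_pixels)

    def identify(self, face_pixels, threshold=0.8):
        query = embed(face_pixels)
        best_name, best_sim = None, -1.0
        for name, ref in self.gallery.items():
            sim = cosine(query, ref)
            if sim > best_sim:
                best_name, best_sim = name, sim
        return best_name if best_sim >= threshold else None

index = FaceIndex()
index.enroll("alice", [1.0, 0.0, 0.2])
index.enroll("bob", [0.0, 1.0, 0.1])
print(index.identify([0.9, 0.1, 0.2]))  # a new photo close to alice's -> "alice"
```

<p><em>The point of this design is that classification over an open, growing set of people reduces to nearest-neighbor search in the learned feature space.<\/em><\/p>\n\n\n\n<p>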
There are also numerous medical applications: emotions (or their absence) are important in diagnosing Parkinson\u2019s disease, strokes, cortical lesions, and more. Again, emotion recognition is a classification problem, and the best results are achieved by rather <a href=\"https:\/\/arxiv.org\/abs\/2105.03588\" target=\"_blank\" rel=\"noreferrer noopener\">standard deep learning architectures<\/a>, although medical applications usually augment images with other modalities such as respiration or electrocardiograms.<\/p>\n\n\n\n<p><em>Gaze estimation<\/em>, i.e., predicting where a person is looking, is important for smartphones, AR\/VR, and various eye tracking applications such as, again, car safety. This problem does not require large networks because the input images are rather small, but results keep improving, most recently with <a href=\"https:\/\/openaccess.thecvf.com\/content_ICCV_2019\/papers\/Park_Few-Shot_Adaptive_Gaze_Estimation_ICCV_2019_paper.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">few-shot adaptation to a specific person<\/a>. The current state of gaze estimation is already sufficient to create <a href=\"https:\/\/dl.acm.org\/doi\/abs\/10.1145\/3317697.3325118\" target=\"_blank\" rel=\"noreferrer noopener\">AR\/VR software<\/a> <a href=\"https:\/\/mdsoar.org\/handle\/11603\/19190\" target=\"_blank\" rel=\"noreferrer noopener\">fully controlled by gaze<\/a>, and we expect this market to grow very rapidly.<\/p>\n\n\n\n<p><em>Segmentation<\/em>, a classical computer vision problem, is important for human faces as well, mostly for video editing and similar applications. If you want to cut a person out cleanly, say, to add a custom background in your video conferencing app, segmentation turns into <em>background matting<\/em>, a much harder problem where the segmentation mask is not binary but can be \u201csemi-transparent\u201d to a degree. This is important for object boundaries, hair, glasses, and the like. 
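<\/p>\n\n\n\n<p><em>A minimal sketch of what a \u201csoft\u201d mask means in practice (single-channel toy pixels, not a real matting model): each output pixel blends foreground over background weighted by its alpha value, so boundary pixels such as hair can be partially transparent rather than strictly in or out:<\/em><\/p>

```python
# Alpha compositing: a soft matte with per-pixel alpha in [0, 1], rather
# than a binary segmentation mask, blends foreground over background.
def composite(fg, bg, alpha):
    return [a * f + (1.0 - a) * b for f, b, a in zip(fg, bg, alpha)]

foreground = [1.0, 1.0, 1.0]   # e.g., bright hair pixels
background = [0.0, 0.0, 0.0]   # new virtual background
binary_mask = [1.0, 1.0, 0.0]  # hard cutout: jagged boundary
soft_matte = [1.0, 0.5, 0.0]   # matting: middle pixel is half-transparent

print(composite(foreground, background, binary_mask))  # [1.0, 1.0, 0.0]
print(composite(foreground, background, soft_matte))   # [1.0, 0.5, 0.0]
```

<p><em>Segmentation produces the binary case; background matting must recover the fractional alphas at boundaries, which is what makes it the harder problem.<\/em><\/p>\n\n\n\n<p>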
Background matting has only very recently started getting <a href=\"https:\/\/arxiv.org\/abs\/2012.07810\" target=\"_blank\" rel=\"noreferrer noopener\">satisfactory solutions<\/a>, and much remains to be done.<\/p>\n\n\n\n<p>Many specialized face-related problems rely on <em>facial keypoint detection<\/em>, the problem of finding characteristic points on a human face. A common keypoint scheme includes several dozen points (68 in the popular <a href=\"https:\/\/ibug.doc.ic.ac.uk\/media\/uploads\/documents\/sagonas_2016_imavis.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">IBUG scheme<\/a>), all of which need to be labeled on a face. Facial keypoints can serve as the first step for tracking faces in images and video, recognizing faces and facial expressions, and numerous biometric and medical applications. There exist state-of-the-art solutions based both on <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S0167865519302910\" target=\"_blank\" rel=\"noreferrer noopener\">deep neural networks<\/a> and <a href=\"https:\/\/arxiv.org\/abs\/1902.01831\" target=\"_blank\" rel=\"noreferrer noopener\">ensembles of classical models<\/a>.<\/p>\n\n\n\n<p><strong>The Limitations of Manually Labeled Data<\/strong><\/p>\n\n\n\n<p>Face-related problems represent an important AI frontier. Interestingly, most of them struggle with the same obstacle: lack of labeled training data. There exist <a href=\"http:\/\/vis-www.cs.umass.edu\/lfw\/\" target=\"_blank\" rel=\"noreferrer noopener\">datasets with millions of faces<\/a>, but a face recognition system has to add a new person from just one or two photos. In many other problems, manually labeled data is challenging and costly to obtain. Imagine how much work it is to manually draw a segmentation mask for a human face, and then imagine that you have to make this mask \u201csoft\u201d for background matting. 
Facial keypoints are also notoriously difficult to label: in engineering practice, researchers even have to explicitly account for human labeling biases that vary across datasets. Lack of representative training data has also led to bias in deployed models, resulting in poor performance for certain ethnicities.<\/p>\n\n\n\n<p>Moreover, significant changes in conditions often render existing datasets virtually useless: you might need to recognize faces from the infrared camera of a smartphone that users hold below their chins, but the datasets only provide frontal RGB photos. This lack of data can impose a hard limit on what AI researchers can do.<\/p>\n\n\n\n<p><strong>Synthetic Data Presents a Solution<\/strong><\/p>\n\n\n\n<p>Fortunately, a solution is already presenting itself: many AI models can be trained on <em><u><a href=\"http:\/\/arxiv.org\/abs\/1909.11512\" target=\"_blank\" rel=\"noreferrer noopener\">synthetic data<\/a><\/u><\/em>. If you have a CGI-based 3D human head crafted with sufficient fidelity, this head can be put in a wide variety of conditions, including lighting, camera angles, camera modalities, backgrounds, occlusions, and much more. Even more importantly, since you control everything going on in your virtual 3D scene, you know where every pixel is coming from and can get perfect labeling for all of these problems for free, even hard ones like background matting. Every 3D model of a human head can give you an endless stream of perfectly labeled, highly varied data for any face-related problem\u2014what\u2019s not to like?<\/p>\n\n\n\n<p>Synthetic data appears to be a key solution, but it raises questions. First, synthetic images cannot be perfectly photorealistic, leading to the <em>domain shift<\/em> problem: models are trained on the synthetic domain but applied to real images. 
Second, creating a new 3D head from scratch is a lot of manual labor, and variety in synthetic data is essential, so (at least semi-) <em>automatic generation of synthetic data<\/em> will probably see much more research in the near future. However, in practice, synthetic data is already proving itself for human faces even in its most straightforward form: creating hybrid synthetic+real datasets and training standard models on this data.<\/p>\n\n\n\n<p>Let us summarize. Several important computer vision problems related to human faces are increasingly finding real-world applications in security, biometrics, AR\/VR, video editing, car safety, and more. Most of them are far from solved, and the amount of labeled data for such problems is limited because real data is expensive. Fortunately, it appears that synthetic data is picking up the torch. Human faces may well be the next frontier for modern AI, and it looks like we are well-positioned to get there.<\/p>\n\n\n\n<p><strong>About the Author<\/strong><\/p>\n\n\n\n<div class=\"wp-block-image is-style-default\"><figure class=\"alignleft size-large\"><img decoding=\"async\" loading=\"lazy\" width=\"150\" height=\"150\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/10\/Sergey-Nikolenko.jpg\" alt=\"\" class=\"wp-image-27485\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/10\/Sergey-Nikolenko.jpg 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/10\/Sergey-Nikolenko-110x110.jpg 110w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2021\/10\/Sergey-Nikolenko-50x50.jpg 50w\" sizes=\"(max-width: 150px) 100vw, 150px\" \/><\/figure><\/div>\n\n\n\n<p><em>Sergey I. Nikolenko is Head of AI at <a href=\"https:\/\/synthesis.ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">Synthesis AI<\/a>. Sergey is a computer scientist specializing in machine learning and analysis of algorithms. 
Synthesis AI is a San Francisco-based company specializing in the generation and use of synthetic data for modern machine learning models. He also serves as the Head of the Artificial Intelligence Lab at the Steklov Mathematical Institute in St. Petersburg, Russia. Sergey\u2019s interests include synthetic data in machine learning, deep learning models for natural language processing, image manipulation, and computer vision, and algorithms for networking. Sergey has authored a seminal text in the field, &#8220;<a href=\"https:\/\/www.springer.com\/gp\/book\/9783030751777\" target=\"_blank\" rel=\"noreferrer noopener\">Synthetic Data for Deep Learning<\/a>,&#8221; published by Springer. <\/em><\/p>\n\n\n\n<p><em>Sign up for the free insideBIGDATA&nbsp;<a rel=\"noreferrer noopener\" href=\"http:\/\/insidebigdata.com\/newsletter\/\" target=\"_blank\">newsletter<\/a>.<\/em><\/p>\n\n\n\n<p><em>Join us on Twitter:&nbsp;@InsideBigData1 \u2013 <a href=\"https:\/\/twitter.com\/InsideBigData1\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/twitter.com\/InsideBigData1<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this contributed article, Dr. Sergey I. Nikolenko, Head of AI at Synthesis AI, discusses how in AI, problems related to human faces are coming to the forefront of computer vision. 
The article considers some of them, discusses the current state of the art, and introduces a common solution that might advance it in the near future.<\/p>\n","protected":false},"author":10513,"featured_media":22568,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[526,115,87,180,56,97,1],"tags":[581,264,1069,96],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Faces as the Future of AI - insideBIGDATA<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/insidebigdata.com\/2021\/10\/26\/faces-as-the-future-of-ai\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Faces as the Future of AI - insideBIGDATA\" \/>\n<meta property=\"og:description\" content=\"In this contributed article, Dr. Sergey I. Nikolenko, Head of AI at Synthesis AI, discusses how in AI, problems related to human faces are coming to the forefront of computer vision. 
The article considers some of them, discusses the current state of the art, and introduces a common solution that might advance it in the near future.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/insidebigdata.com\/2021\/10\/26\/faces-as-the-future-of-ai\/\" \/>\n<meta property=\"og:site_name\" content=\"insideBIGDATA\" \/>\n<meta property=\"article:publisher\" content=\"http:\/\/www.facebook.com\/insidebigdata\" \/>\n<meta property=\"article:published_time\" content=\"2021-10-26T13:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2021-10-27T17:20:14+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/05\/Deep_Learning_shutterstock_386816095.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"300\" \/>\n\t<meta property=\"og:image:height\" content=\"240\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Editorial Team\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:site\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Editorial Team\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/insidebigdata.com\/2021\/10\/26\/faces-as-the-future-of-ai\/\",\"url\":\"https:\/\/insidebigdata.com\/2021\/10\/26\/faces-as-the-future-of-ai\/\",\"name\":\"Faces as the Future of AI - insideBIGDATA\",\"isPartOf\":{\"@id\":\"https:\/\/insidebigdata.com\/#website\"},\"datePublished\":\"2021-10-26T13:00:00+00:00\",\"dateModified\":\"2021-10-27T17:20:14+00:00\",\"author\":{\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\"},\"breadcrumb\":{\"@id\":\"https:\/\/insidebigdata.com\/2021\/10\/26\/faces-as-the-future-of-ai\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/insidebigdata.com\/2021\/10\/26\/faces-as-the-future-of-ai\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/insidebigdata.com\/2021\/10\/26\/faces-as-the-future-of-ai\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/insidebigdata.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Faces as the Future of AI\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/insidebigdata.com\/#website\",\"url\":\"https:\/\/insidebigdata.com\/\",\"name\":\"insideBIGDATA\",\"description\":\"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/insidebigdata.com\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\",\"name\":\"Editorial 
Team\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"caption\":\"Editorial Team\"},\"sameAs\":[\"http:\/\/www.insidebigdata.com\"],\"url\":\"https:\/\/insidebigdata.com\/author\/editorial\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Faces as the Future of AI - insideBIGDATA","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/insidebigdata.com\/2021\/10\/26\/faces-as-the-future-of-ai\/","og_locale":"en_US","og_type":"article","og_title":"Faces as the Future of AI - insideBIGDATA","og_description":"In this contributed article, Dr. Sergey I. Nikolenko, Head of AI at Synthesis AI, discusses how in AI, problems related to human faces are coming to the forefront of computer vision. The article considers some of them, discusses the current state of the art, and introduces a common solution that might advance it in the near future.","og_url":"https:\/\/insidebigdata.com\/2021\/10\/26\/faces-as-the-future-of-ai\/","og_site_name":"insideBIGDATA","article_publisher":"http:\/\/www.facebook.com\/insidebigdata","article_published_time":"2021-10-26T13:00:00+00:00","article_modified_time":"2021-10-27T17:20:14+00:00","og_image":[{"width":300,"height":240,"url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/05\/Deep_Learning_shutterstock_386816095.jpg","type":"image\/jpeg"}],"author":"Editorial Team","twitter_card":"summary_large_image","twitter_creator":"@insideBigData","twitter_site":"@insideBigData","twitter_misc":{"Written by":"Editorial Team","Est. 
reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/insidebigdata.com\/2021\/10\/26\/faces-as-the-future-of-ai\/","url":"https:\/\/insidebigdata.com\/2021\/10\/26\/faces-as-the-future-of-ai\/","name":"Faces as the Future of AI - insideBIGDATA","isPartOf":{"@id":"https:\/\/insidebigdata.com\/#website"},"datePublished":"2021-10-26T13:00:00+00:00","dateModified":"2021-10-27T17:20:14+00:00","author":{"@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9"},"breadcrumb":{"@id":"https:\/\/insidebigdata.com\/2021\/10\/26\/faces-as-the-future-of-ai\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/insidebigdata.com\/2021\/10\/26\/faces-as-the-future-of-ai\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/insidebigdata.com\/2021\/10\/26\/faces-as-the-future-of-ai\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/insidebigdata.com\/"},{"@type":"ListItem","position":2,"name":"Faces as the Future of AI"}]},{"@type":"WebSite","@id":"https:\/\/insidebigdata.com\/#website","url":"https:\/\/insidebigdata.com\/","name":"insideBIGDATA","description":"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/insidebigdata.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9","name":"Editorial 
Team","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","caption":"Editorial Team"},"sameAs":["http:\/\/www.insidebigdata.com"],"url":"https:\/\/insidebigdata.com\/author\/editorial\/"}]}},"jetpack_featured_media_url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2019\/05\/Deep_Learning_shutterstock_386816095.jpg","jetpack_shortlink":"https:\/\/wp.me\/p9eA3j-79i","jetpack-related-posts":[{"id":24367,"url":"https:\/\/insidebigdata.com\/2020\/05\/06\/want-a-functioning-ai-model-beware-of-biased-data\/","url_meta":{"origin":27484,"position":0},"title":"Want a Functioning AI Model? Beware of Biased Data","date":"May 6, 2020","format":false,"excerpt":"In this special guest feature, Sinan Ozdemir, Director of Data Science at Directly, points out how algorithmic bias has been one of the most talked-about issues in AI for years, yet it remains one of the most persistent challenges in the field. Despite years of research into bias detection and\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2020\/05\/Sinan.headshot.jpeg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":20938,"url":"https:\/\/insidebigdata.com\/2018\/08\/16\/melding-minds-humans-ai-make-perfect-partners\/","url_meta":{"origin":27484,"position":1},"title":"Melding The Minds: Why Humans &#038; AI Make Perfect Partners","date":"August 16, 2018","format":false,"excerpt":"In this contributed article, Imaginea Ai Co-founder & CEO Nav Dhunay discusses what\u2019s in store for humanity after the AI revolution. 
While it\u2019s hard to predict the future, there\u2019s good reason to believe that our relationship to AI will be far more positive than many people assume. The development of\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":32732,"url":"https:\/\/insidebigdata.com\/2023\/06\/27\/unlocking-the-power-of-generative-and-language-ai\/","url_meta":{"origin":27484,"position":2},"title":"Unlocking the Power of Generative and Language AI","date":"June 27, 2023","format":false,"excerpt":"In this contributed article, Amit Ben, co-founder & CEO of One AI, believes that overall, the future of both Generative AI and Language AI are full of promise. As technology continues to advance, we can expect to see continued growth and innovation in this field, with new applications and use\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2313909647_special.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":23433,"url":"https:\/\/insidebigdata.com\/2019\/10\/16\/best-of-arxiv-org-for-ai-machine-learning-and-deep-learning-september-2019\/","url_meta":{"origin":27484,"position":3},"title":"Best of arXiv.org for AI, Machine Learning, and Deep Learning \u2013 September 2019","date":"October 16, 2019","format":false,"excerpt":"In this recurring monthly feature, we will filter all the recent research papers appearing in the arXiv.org preprint server for subjects relating to AI, machine learning and deep learning \u2013 from disciplines including statistics, mathematics and computer science \u2013 and provide you with a useful \u201cbest of\u201d list for the\u2026","rel":"","context":"In &quot;AI Deep 
Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2013\/12\/arxiv.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":18857,"url":"https:\/\/insidebigdata.com\/2017\/09\/15\/crowdflower-announces-first-round-winners-1-million-ai-everyone-challenge\/","url_meta":{"origin":27484,"position":4},"title":"CrowdFlower Announces First Round Winners of $1 Million AI For Everyone Challenge","date":"September 15, 2017","format":false,"excerpt":"CrowdFlower, the essential human-in-the-loop Artificial Intelligence platform for data science and machine learning teams, announced the first-round winners of its $1 million \u201cAI For Everyone\u201d Challenge. Selected from a set of nearly two dozen submissions, the winning proposals are computer vision projects that will label millions of images to build\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":23963,"url":"https:\/\/insidebigdata.com\/2020\/02\/10\/busting-the-5-common-myths-business-leaders-get-wrong-about-ai\/","url_meta":{"origin":27484,"position":5},"title":"Busting the 5 Common Myths Business Leaders Get Wrong About AI","date":"February 10, 2020","format":false,"excerpt":"In this special guest feature, Nikolas Kairinos, CEO and co-founder of Prospex and Fountech, takes a look at the 5 common myths business leaders get wrong about AI. 
Every organisation should consider the potential impact of this technology on its strategy, and how it can be utilised to solve problems\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/27484"}],"collection":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/users\/10513"}],"replies":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/comments?post=27484"}],"version-history":[{"count":0,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/27484\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media\/22568"}],"wp:attachment":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media?parent=27484"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/categories?post=27484"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/tags?post=27484"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}