{"id":32255,"date":"2023-05-02T07:00:00","date_gmt":"2023-05-02T14:00:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=32255"},"modified":"2023-05-01T13:54:03","modified_gmt":"2023-05-01T20:54:03","slug":"understanding-4-concepts-for-avoiding-bias-in-ai-enabled-fraud-detection","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2023\/05\/02\/understanding-4-concepts-for-avoiding-bias-in-ai-enabled-fraud-detection\/","title":{"rendered":"Understanding 4 Concepts for Avoiding Bias in AI-enabled Fraud Detection"},"content":{"rendered":"<div class=\"wp-block-image\">\n<figure class=\"alignright size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"150\" height=\"231\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/05\/Ilya_headshot_2021.png\" alt=\"\" class=\"wp-image-32256\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/05\/Ilya_headshot_2021.png 150w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/05\/Ilya_headshot_2021-97x150.png 97w\" sizes=\"(max-width: 150px) 100vw, 150px\" \/><\/figure><\/div>\n\n\n<p><em>In this special guest feature, Ilya Gerner, Director of Compliance Strategy for <a href=\"https:\/\/www.gcomsoft.com\/\">GCOM<\/a>, explains why bias can be an issue when using artificial intelligence (AI) for fraud detection. By understanding key concepts of machine learning (ML), organizations can ensure greater equity in AI outputs. Ilya has over ten years of experience in advanced analytics, leading teams in the development of fraud detection algorithms, building decision-support tools, and conducting statistical analysis. 
Since 2020, he has supported the Internal Revenue Service in its Identity Theft Strategy initiative, leading efforts to provide data analytic capabilities to the Security Summit and the Information Sharing and Analysis Center (ISAC) and conducting strategic risk analysis to identify gaps in identity theft protection.<\/em><\/p>\n\n\n\n<p>Fraud can be a big problem for government agencies that deliver benefits to the public. As one example, the proportion of unemployment benefits improperly paid out by states <a href=\"https:\/\/www.dol.gov\/agencies\/eta\/unemployment-insurance-payment-accuracy\" target=\"_blank\" rel=\"noreferrer noopener\">can exceed 40%<\/a>.\u00a0<\/p>\n\n\n\n<p>Artificial intelligence (AI) can help. AI can pore through reams of data to uncover potential fraud \u2013 and do it far more quickly and accurately than humans can. So, it\u2019s no surprise that more agencies are turning to AI to help identify fraud and reduce fraudulent payouts.<\/p>\n\n\n\n<p>But AI has a known potential to introduce bias. For instance, the <a href=\"https:\/\/www.nytimes.com\/2022\/12\/07\/style\/lensa-ai-selfies.html\" target=\"_blank\" rel=\"noreferrer noopener\">Lensa AI<\/a> image generator was recently found to deliver renderings that <a href=\"https:\/\/www.nytimes.com\/2022\/12\/21\/technology\/personaltech\/how-to-use-chatgpt-ethically.html?smid=nytcore-ios-share&amp;referringSource=articleShare\" target=\"_blank\" rel=\"noreferrer noopener\">altered people\u2019s appearance<\/a> in ways that could be considered biased based on gender and race.<\/p>\n\n\n\n<p>Bias can enter machine learning (ML) models in multiple ways. One way is through historical data. If you train a model based on a dataset that itself contains bias, that bias will be baked into the model.&nbsp;<\/p>\n\n\n\n<p>Another way is through the introduction of proxy data. Imagine that you\u2019re looking for evidence of fraud in tax return filings, for example. 
A model that omits the age of the person filing the return but includes the total number of tax returns the person has previously filed could still result in disparate age-based impacts, because the number of tax returns filed in a lifetime can be a rough proxy for age.<\/p>\n\n\n\n<p>Unfairness is of particular concern for governments, which deal in datasets that include legally protected attributes such as age, gender, and race. Agencies want to avoid both disparate treatment \u2013 applying decisions to demographic groups in dissimilar ways \u2013 and disparate impacts \u2013 harming or benefiting demographic groups in dissimilar ways.<\/p>\n\n\n\n<p>But there are strategies for avoiding bias and inequity in AI-driven fraud detection. By better understanding how ML models function, organizations can help ensure fairness in AI.<\/p>\n\n\n\n<p><strong>ML Concepts for Blunting Bias<\/strong><\/p>\n\n\n\n<p>Let\u2019s look at four key approaches to mitigating unfairness in ML algorithms \u2013 Unawareness, Demographic Parity, Equalized Odds, and Predictive Rate Parity \u2013 and how they might play out in a hypothetical but realistic scenario.<\/p>\n\n\n\n<p>State tax agencies make every effort to collect overdue taxes, but resource constraints mean that they might not be able to resolve every case. So, agencies prioritize cases that will result in a high amount of money collected at a low cost.<\/p>\n\n\n\n<p>Let\u2019s say a state tax agency wants to identify 50 taxpayers to receive a mailed notification that their taxes are past due. But it wants to avoid contacting taxpayers who are likely to consume resources by calling the agency\u2019s customer service center after they receive the notification.<\/p>\n\n\n\n<p>The agency knows from historical data that taxpayers over age 45 are more likely to call the customer service center. That means age, a sensitive attribute, is part of the picture. 
This will have different implications, depending on which strategy for mitigating unfairness is applied:<\/p>\n\n\n\n<p><strong>Mitigation Approach 1:&nbsp; Unawareness.<\/strong> A model using this concept omits sensitive attributes such as age. But it doesn\u2019t account for proxies of such sensitive attributes.<\/p>\n\n\n\n<p>In our hypothetical example, the tax agency\u2019s ML model applies the Unawareness concept to select taxpayers based on their frequency of calls to the contact center. It doesn\u2019t directly use age as an attribute, but because age correlates with phone usage, it will favor younger taxpayers. As a result, the model selects 35 taxpayers under age 45, and 15 taxpayers over age 45. The outcome is that 10 of the taxpayers end up calling the contact center \u2013 not a bad result, but perhaps not ideal.<\/p>\n\n\n\n<p><strong>Mitigation Approach 2:&nbsp; Demographic Parity.<\/strong> With this concept, the model\u2019s probability of predicting a specific outcome is the same for one individual as for another individual with different sensitive attributes.<\/p>\n\n\n\n<p>Applying the Demographic Parity concept, the tax agency\u2019s ML model directly uses age to ensure equal distribution of taxpayers above and below age 45. As a result, the model selects 25 taxpayers under age 45, and 25 over age 45. The outcome is that 14 taxpayers call the contact center \u2013 a less favorable result than with the Unawareness concept.<\/p>\n\n\n\n<p><strong>Mitigation Approach 3:&nbsp; Equalized Odds.<\/strong> With Equalized Odds, if two individuals with different sensitive attributes have the same actual outcome, the probability that the model will select either individual is the same.<\/p>\n\n\n\n<p>Applying this concept, the agency\u2019s ML model uses age to ensure that true-positive and false-positive rates are the same for taxpayers above and below age 45. As a result, it selects 30 taxpayers under age 45, and 20 over age 45. 
In this case, eight taxpayers call the customer service center \u2013 the best outcome so far.<\/p>\n\n\n\n<p><strong>Mitigation Approach 4:&nbsp; Predictive Rate Parity.<\/strong> With this concept, if the model predicts the same outcome for two individuals with different sensitive attributes, the probability that the prediction turns out to be correct is the same for each.<\/p>\n\n\n\n<p>Applying this concept, the agency\u2019s ML model uses age to ensure that among taxpayers who call the contact center, an equal number are under 45 and over 45. As a result, it selects 40 taxpayers under age 45 and 10 over age 45. Eight taxpayers call the customer service center \u2013 the same outcome as with Equalized Odds.<\/p>\n\n\n\n<p>To summarize the results of this hypothetical situation, two of the models achieve a more desirable outcome, but they rely on sensitive data. The model that doesn\u2019t rely on sensitive data achieves a fairly desirable outcome, but the data it uses is a proxy for sensitive data.<\/p>\n\n\n\n<p><strong>Balancing Accuracy and Fairness<\/strong><\/p>\n\n\n\n<p>One challenge for ML modelers is that the four concepts for mitigating unfairness are generally mutually incompatible \u2013 outside of special cases, a model cannot satisfy more than one of them at once. Modelers have to select one fairness definition to apply to an algorithm, and then accept the tradeoffs.<\/p>\n\n\n\n<p>Demographic Parity, Equalized Odds, and Predictive Rate Parity all involve disparate treatment. Unawareness doesn\u2019t involve disparate treatment, but it can result in disparate impacts. Each concept has its pros and cons, and there\u2019s no correct or incorrect choice.<\/p>\n\n\n\n<p>Another challenge is that there\u2019s often a tradeoff between accuracy and fairness. A highly accurate model might not be equitable. But improving the model\u2019s fairness can make it less accurate. 
An agency might therefore choose to run a somewhat less accurate model in order to make its fraud detection more equitable.<\/p>\n\n\n\n<p>AI is helping governments more efficiently and effectively identify and prevent fraud. What\u2019s important is that they understand how ML concepts can affect treatment and outcomes, and that they be transparent about how they\u2019re using AI. By leveraging strategies to avoid bias and inequity in AI-enabled fraud detection, they can serve the public fairly.<\/p>\n\n\n\n<p><em>Sign up for the free insideBIGDATA&nbsp;<a href=\"http:\/\/inside-bigdata.com\/newsletter\/\" target=\"_blank\" rel=\"noreferrer noopener\">newsletter<\/a>.<\/em><\/p>\n\n\n\n<p><em>Join us on Twitter:&nbsp;<a href=\"https:\/\/twitter.com\/InsideBigData1\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/twitter.com\/InsideBigData1<\/a><\/em><\/p>\n\n\n\n<p><em>Join us on LinkedIn:&nbsp;<a href=\"https:\/\/www.linkedin.com\/company\/insidebigdata\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.linkedin.com\/company\/insidebigdata\/<\/a><\/em><\/p>\n\n\n\n<p><em>Join us on Facebook:&nbsp;<a href=\"https:\/\/www.facebook.com\/insideBIGDATANOW\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.facebook.com\/insideBIGDATANOW<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this special guest feature, Ilya Gerner, Director of Compliance Strategy for GCOM, explains why bias can be an issue when using artificial intelligence (AI) for fraud detection. 
By understanding key concepts of machine learning (ML), organizations can ensure greater equity in AI outputs.<\/p>\n","protected":false},"author":10513,"featured_media":32256,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[526,115,87,180,61,56,97,1],"tags":[],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Understanding 4 Concepts for Avoiding Bias in AI-enabled Fraud Detection - insideBIGDATA<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/insidebigdata.com\/2023\/05\/02\/understanding-4-concepts-for-avoiding-bias-in-ai-enabled-fraud-detection\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Understanding 4 Concepts for Avoiding Bias in AI-enabled Fraud Detection - insideBIGDATA\" \/>\n<meta property=\"og:description\" content=\"In this special guest feature, Ilya Gerner, Director of Compliance Strategy for GCOM, explains why bias can be an issue when using artificial intelligence (AI) for fraud detection. 
By understanding key concepts of machine learning (ML), organizations can ensure greater equity in AI outputs.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/insidebigdata.com\/2023\/05\/02\/understanding-4-concepts-for-avoiding-bias-in-ai-enabled-fraud-detection\/\" \/>\n<meta property=\"og:site_name\" content=\"insideBIGDATA\" \/>\n<meta property=\"article:publisher\" content=\"http:\/\/www.facebook.com\/insidebigdata\" \/>\n<meta property=\"article:published_time\" content=\"2023-05-02T14:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-05-01T20:54:03+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/05\/Ilya_headshot_2021.png\" \/>\n\t<meta property=\"og:image:width\" content=\"150\" \/>\n\t<meta property=\"og:image:height\" content=\"231\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Editorial Team\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:site\" content=\"@insideBigData\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Editorial Team\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/insidebigdata.com\/2023\/05\/02\/understanding-4-concepts-for-avoiding-bias-in-ai-enabled-fraud-detection\/\",\"url\":\"https:\/\/insidebigdata.com\/2023\/05\/02\/understanding-4-concepts-for-avoiding-bias-in-ai-enabled-fraud-detection\/\",\"name\":\"Understanding 4 Concepts for Avoiding Bias in AI-enabled Fraud Detection - insideBIGDATA\",\"isPartOf\":{\"@id\":\"https:\/\/insidebigdata.com\/#website\"},\"datePublished\":\"2023-05-02T14:00:00+00:00\",\"dateModified\":\"2023-05-01T20:54:03+00:00\",\"author\":{\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\"},\"breadcrumb\":{\"@id\":\"https:\/\/insidebigdata.com\/2023\/05\/02\/understanding-4-concepts-for-avoiding-bias-in-ai-enabled-fraud-detection\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/insidebigdata.com\/2023\/05\/02\/understanding-4-concepts-for-avoiding-bias-in-ai-enabled-fraud-detection\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/insidebigdata.com\/2023\/05\/02\/understanding-4-concepts-for-avoiding-bias-in-ai-enabled-fraud-detection\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/insidebigdata.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Understanding 4 Concepts for Avoiding Bias in AI-enabled Fraud Detection\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/insidebigdata.com\/#website\",\"url\":\"https:\/\/insidebigdata.com\/\",\"name\":\"insideBIGDATA\",\"description\":\"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning 
Strategies\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/insidebigdata.com\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9\",\"name\":\"Editorial Team\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g\",\"caption\":\"Editorial Team\"},\"sameAs\":[\"http:\/\/www.insidebigdata.com\"],\"url\":\"https:\/\/insidebigdata.com\/author\/editorial\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Understanding 4 Concepts for Avoiding Bias in AI-enabled Fraud Detection - insideBIGDATA","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/insidebigdata.com\/2023\/05\/02\/understanding-4-concepts-for-avoiding-bias-in-ai-enabled-fraud-detection\/","og_locale":"en_US","og_type":"article","og_title":"Understanding 4 Concepts for Avoiding Bias in AI-enabled Fraud Detection - insideBIGDATA","og_description":"In this special guest feature, Ilya Gerner, Director of Compliance Strategy for GCOM, explains why bias can be an issue when using artificial intelligence (AI) for fraud detection. 
By understanding key concepts of machine learning (ML), organizations can ensure greater equity in AI outputs.","og_url":"https:\/\/insidebigdata.com\/2023\/05\/02\/understanding-4-concepts-for-avoiding-bias-in-ai-enabled-fraud-detection\/","og_site_name":"insideBIGDATA","article_publisher":"http:\/\/www.facebook.com\/insidebigdata","article_published_time":"2023-05-02T14:00:00+00:00","article_modified_time":"2023-05-01T20:54:03+00:00","og_image":[{"width":150,"height":231,"url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/05\/Ilya_headshot_2021.png","type":"image\/png"}],"author":"Editorial Team","twitter_card":"summary_large_image","twitter_creator":"@insideBigData","twitter_site":"@insideBigData","twitter_misc":{"Written by":"Editorial Team","Est. reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/insidebigdata.com\/2023\/05\/02\/understanding-4-concepts-for-avoiding-bias-in-ai-enabled-fraud-detection\/","url":"https:\/\/insidebigdata.com\/2023\/05\/02\/understanding-4-concepts-for-avoiding-bias-in-ai-enabled-fraud-detection\/","name":"Understanding 4 Concepts for Avoiding Bias in AI-enabled Fraud Detection - 
insideBIGDATA","isPartOf":{"@id":"https:\/\/insidebigdata.com\/#website"},"datePublished":"2023-05-02T14:00:00+00:00","dateModified":"2023-05-01T20:54:03+00:00","author":{"@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9"},"breadcrumb":{"@id":"https:\/\/insidebigdata.com\/2023\/05\/02\/understanding-4-concepts-for-avoiding-bias-in-ai-enabled-fraud-detection\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/insidebigdata.com\/2023\/05\/02\/understanding-4-concepts-for-avoiding-bias-in-ai-enabled-fraud-detection\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/insidebigdata.com\/2023\/05\/02\/understanding-4-concepts-for-avoiding-bias-in-ai-enabled-fraud-detection\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/insidebigdata.com\/"},{"@type":"ListItem","position":2,"name":"Understanding 4 Concepts for Avoiding Bias in AI-enabled Fraud Detection"}]},{"@type":"WebSite","@id":"https:\/\/insidebigdata.com\/#website","url":"https:\/\/insidebigdata.com\/","name":"insideBIGDATA","description":"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/insidebigdata.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9","name":"Editorial Team","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","caption":"Editorial 
Team"},"sameAs":["http:\/\/www.insidebigdata.com"],"url":"https:\/\/insidebigdata.com\/author\/editorial\/"}]}},"jetpack_featured_media_url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/05\/Ilya_headshot_2021.png","jetpack_shortlink":"https:\/\/wp.me\/p9eA3j-8of","jetpack-related-posts":[{"id":32514,"url":"https:\/\/insidebigdata.com\/2023\/06\/01\/ai-empowers-microfinance-revolutionizing-fraud-detection\/","url_meta":{"origin":32255,"position":0},"title":"AI Empowers Microfinance: Revolutionizing Fraud Detection","date":"June 1, 2023","format":false,"excerpt":"In this sponsored article, Dmitry Dolgorukov, CRO and Co-Founder of HES FinTech, suggesets that to effectively combat fraud, microfinance institutions must establish robust fraud detection systems. Early detection and prevention of fraudulent activities are vital in minimizing financial impact and safeguarding the funds of vulnerable customers. Microfinance institutions face a\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2023\/05\/How-to-Avoid-Fraud-in-Digital-Lending-3.png?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":25587,"url":"https:\/\/insidebigdata.com\/2021\/02\/06\/video-highlights-scalable-object-detection-with-detr\/","url_meta":{"origin":32255,"position":1},"title":"Video Highlights: Scalable Object Detection with DETR","date":"February 6, 2021","format":false,"excerpt":"Object detection is a central problem in computer vision and underpins many applications from medical image analysis to autonomous driving. This video presentation will start with a tutorial on object detection covering basic concepts and techniques. 
Then we will dive into an interactive session where you will implement a recent\u2026","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2019\/05\/Deep_Learning_shutterstock_386816095.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":29991,"url":"https:\/\/insidebigdata.com\/2022\/08\/04\/how-ai-can-prevent-rising-candidate-fraud\/","url_meta":{"origin":32255,"position":2},"title":"How AI Can Prevent Rising Candidate Fraud","date":"August 4, 2022","format":false,"excerpt":"Candidates are lying and cheating to get hired more than ever. INSIDER recently cited a study, \"The Future of Candidate Evaluation,\" by our friends over at Glider AI that found candidate fraud has nearly doubled since before the pandemic. As INSIDER reported, more companies believe AI is the solution.","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/insidebigdata.com\/wp-content\/uploads\/2019\/10\/Artificial_intelligence_3_SHUTTERSTOCK.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":17435,"url":"https:\/\/insidebigdata.com\/2017\/03\/22\/argyle-data-extends-predictive-analytics-offerings-enterprise-data-centers\/","url_meta":{"origin":32255,"position":3},"title":"Argyle Data Extends Predictive Analytics Offerings to Enterprise Data Centers","date":"March 22, 2017","format":false,"excerpt":"Argyle Data has expanded its core machine learning and AI application suite to engage with clients in enterprise areas including IoT security, financial services and online\/mobile banking.","rel":"","context":"In &quot;AI Deep Learning&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":2072,"url":"https:\/\/insidebigdata.com\/2012\/11\/07\/ornl-to-use-yarcdata-appliance-for-healthcare-fraud-detection\/","url_meta":{"origin":32255,"position":4},"title":"ORNL to use YarcData Appliance for 
Healthcare Fraud Detection","date":"November 7, 2012","format":false,"excerpt":"This week Cray's YarcData division announced a contract to deliver a uRiKA graph-analytics appliance to the Oak Ridge National Laboratory (ORNL). Analysts at ORNL will use the uRiKA system as they conduct research in healthcare fraud and analytics for a leading healthcare payer. Identifying healthcare fraud and abuse is challenging\u2026","rel":"","context":"In &quot;Big Data Hardware&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":29182,"url":"https:\/\/insidebigdata.com\/2022\/05\/01\/hewlett-packard-enterprise-ushers-in-next-era-in-ai-innovation-with-swarm-learning-solution-built-for-the-edge-and-distributed-sites\/","url_meta":{"origin":32255,"position":5},"title":"Hewlett Packard Enterprise Ushers in Next Era in AI Innovation with  Swarm Learning Solution Built for the Edge and Distributed Sites","date":"May 1, 2022","format":false,"excerpt":"Hewlett Packard Enterprise (NYSE: HPE) announced the launch of HPE Swarm Learning, a breakthrough AI solution to accelerate insights at the edge, from diagnosing diseases to detecting credit card fraud, by sharing and unifying AI model learnings without compromising data privacy.","rel":"","context":"In &quot;AI Deep 
Learning&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/32255"}],"collection":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/users\/10513"}],"replies":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/comments?post=32255"}],"version-history":[{"count":0,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/posts\/32255\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media\/32256"}],"wp:attachment":[{"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/media?parent=32255"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/categories?post=32255"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/insidebigdata.com\/wp-json\/wp\/v2\/tags?post=32255"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}