{"id":31848,"date":"2023-03-16T06:00:00","date_gmt":"2023-03-16T13:00:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=31848"},"modified":"2023-06-23T12:36:37","modified_gmt":"2023-06-23T19:36:37","slug":"research-highlights-real-or-fake-text-we-can-learn-to-spot-the-difference","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2023\/03\/16\/research-highlights-real-or-fake-text-we-can-learn-to-spot-the-difference\/","title":{"rendered":"Research Highlights: Real or Fake Text? We Can Learn to Spot the Difference"},"content":{"rendered":"<div class=\"wp-block-image\">\n<figure class=\"alignright size-full\"><img decoding=\"async\" loading=\"lazy\" width=\"300\" height=\"113\" src=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/03\/Univ_Penn_Engineering_logo.png\" alt=\"\" class=\"wp-image-31850\" srcset=\"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/03\/Univ_Penn_Engineering_logo.png 300w, https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/03\/Univ_Penn_Engineering_logo-150x57.png 150w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/figure><\/div>\n\n\n<p>The most recent generation of chatbots has surfaced longstanding concerns about the growing sophistication and accessibility of artificial intelligence.<\/p>\n\n\n\n<p>Fears about the integrity of the job market \u2014 from the creative economy to the managerial class \u2014 have spread to the classroom as educators rethink learning in the wake of ChatGPT.<\/p>\n\n\n\n<p>Yet while apprehensions about employment and schools dominate headlines, the truth is that the effects of large-scale language models such as ChatGPT will touch virtually every corner of our lives. 
These new tools raise society-wide concerns about artificial intelligence\u2019s role in reinforcing social biases, committing fraud and identity theft, generating fake news, spreading misinformation and more.<\/p>\n\n\n\n<p>A team of researchers at the&nbsp;<a href=\"https:\/\/www.seas.upenn.edu\/\" target=\"_blank\" rel=\"noreferrer noopener\">University of Pennsylvania School of Engineering and Applied Science<\/a>&nbsp;is seeking to empower tech users to mitigate these risks. In&nbsp;a <a href=\"https:\/\/www.cis.upenn.edu\/~ccb\/publications\/real-or-fake-text-analysis.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">peer-reviewed paper<\/a>&nbsp;presented at the&nbsp;February 2023&nbsp;meeting of the&nbsp;<a href=\"https:\/\/aaai.org\/\" target=\"_blank\" rel=\"noreferrer noopener\">Association for the Advancement of Artificial Intelligence<\/a>, the authors demonstrate that people can learn to spot the difference between machine-generated and human-written text.<\/p>\n\n\n\n<p>Before you choose a recipe, share an article, or provide your credit card details, it\u2019s important to know there are steps you can take to discern the reliability of your source.<\/p>\n\n\n\n<p>The study, led by&nbsp;<a href=\"https:\/\/directory.seas.upenn.edu\/chris-callison-burch\/\" target=\"_blank\" rel=\"noreferrer noopener\">Chris Callison-Burch<\/a>, Associate Professor in the Department of Computer and Information Science (CIS), along with&nbsp;<a href=\"https:\/\/liamdugan.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Liam Dugan<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/daphnei.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Daphne Ippolito, Ph.D.<\/a> students in CIS, provides evidence that AI-generated text is detectable.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\">\n<p>\u201cWe\u2019ve shown that people can train themselves to recognize machine-generated texts,\u201d says Callison-Burch. 
\u201cPeople start with a certain set of assumptions about what sort of errors a machine would make, but these assumptions aren\u2019t necessarily correct. Over time, given enough examples and explicit instruction, we can learn to pick up on the types of errors that machines are currently making.\u201d<\/p>\n<\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote\">\n<p>\u201cAI today is surprisingly good at producing very fluent, very grammatical text,\u201d adds Dugan. \u201cBut it does make mistakes. We prove that machines make distinctive types of errors \u2014 common-sense errors, relevance errors, reasoning errors and logical errors, for example \u2014 that we can learn how to spot.\u201d<\/p>\n<\/blockquote>\n\n\n\n<p>The study uses data collected with&nbsp;<a href=\"https:\/\/roft.io\/\" target=\"_blank\" rel=\"noreferrer noopener\">Real or Fake Text?<\/a>, an original web-based training game. The game is notable for transforming the standard experimental method for detection studies into a more realistic recreation of how people actually use AI to generate text. In standard methods, participants are simply asked to indicate, in a yes-or-no fashion, whether a machine has produced a given text, and their responses are scored as correct or incorrect.<\/p>\n\n\n\n<p>The Penn model significantly refines this standard detection study into an effective training task by showing examples that all begin as human-written. Each example then transitions into generated text, and participants are asked to mark where they believe the transition begins. 
Trainees identify and describe the features of the text that indicate an error, and they receive a score.<\/p>\n\n\n\n<p>The study results show that participants scored significantly better than random chance, providing evidence that AI-created text is, to some extent, detectable.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\">\n<p>\u201cOur method not only gamifies the task, making it more engaging, it also provides a more realistic context for training,\u201d says Dugan. \u201cGenerated texts, like those produced by ChatGPT, begin with human-provided prompts.\u201d<\/p>\n<\/blockquote>\n\n\n\n<p>The study speaks not only to artificial intelligence today, but also outlines a reassuring, even exciting, future for our relationship to this technology.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\">\n<p>\u201cFive years ago,\u201d says Dugan, \u201cmodels couldn\u2019t stay on topic or produce a fluent sentence. Now, they rarely make a grammar mistake. Our study identifies the kind of errors that characterize AI chatbots, but it\u2019s important to keep in mind that these errors have evolved and will continue to evolve. The shift to be concerned about is not that AI-written text is undetectable. It\u2019s that people will need to continue training themselves to recognize the difference and work with detection software as a supplement.\u201d<\/p>\n<\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote\">\n<p>\u201cPeople are anxious about AI for valid reasons,\u201d says Callison-Burch. \u201cOur study gives points of evidence to allay these anxieties. 
Once we can harness our optimism about AI text generators, we will be able to devote attention to these tools\u2019 capacity for helping us write more imaginative, more interesting texts.\u201d<\/p>\n<\/blockquote>\n\n\n\n<p>Ippolito, the Penn study\u2019s co-leader and now a Research Scientist at Google, complements Dugan\u2019s focus on detection with her own emphasis on exploring the most effective use cases for these tools. She contributed, for example, to&nbsp;<a href=\"https:\/\/wordcraft-writers-workshop.appspot.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Wordcraft<\/a>, an&nbsp;AI creative writing tool&nbsp;developed in tandem with published writers. None of the writers or researchers found that AI was a compelling replacement for a fiction writer, but they did find significant value in its ability to support the creative process.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote\">\n<p>\u201cMy feeling at the moment is that these technologies are best suited for creative writing,\u201d says Callison-Burch. \u201cNews stories, term papers, or legal advice are bad use cases because there\u2019s no guarantee of factuality.\u201d<\/p>\n<\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote\">\n<p>\u201cThere are exciting positive directions that you can push this technology in,\u201d says Dugan. 
\u201cPeople are fixated on the worrisome examples, like plagiarism and fake news, but we know now that we can be training ourselves to be better readers and writers.\u201d<\/p>\n<\/blockquote>\n","protected":false},"excerpt":{"rendered":"<p>A team of researchers at the\u00a0University of Pennsylvania School of Engineering and Applied Science\u00a0is seeking to empower tech users to mitigate risks of AI generated misinformation. 
In\u00a0a peer-reviewed paper\u00a0presented at the\u00a0February 2023\u00a0meeting of the\u00a0Association for the Advancement of Artificial Intelligence, the authors demonstrate that people can learn to spot the difference between machine-generated and human-written text.<\/p>\n","protected":false},"author":10513,"featured_media":31850,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[526,115,87,180,67,56,84,1303,1],"tags":[437,1254,1001,1248,96],"acf":[],"jetpack_featured_media_url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/03\/Univ_Penn_Engineering_logo.png","jetpack_shortlink":"https:\/\/wp.me\/p9eA3j-8hG"}