{"id":33511,"date":"2023-09-25T02:59:00","date_gmt":"2023-09-25T09:59:00","guid":{"rendered":"https:\/\/insidebigdata.com\/?p=33511"},"modified":"2023-09-25T14:58:09","modified_gmt":"2023-09-25T21:58:09","slug":"protestors-to-meta-ai-sharing-model-weights-is-fundamentally-unsafe","status":"publish","type":"post","link":"https:\/\/insidebigdata.com\/2023\/09\/25\/protestors-to-meta-ai-sharing-model-weights-is-fundamentally-unsafe\/","title":{"rendered":"Protestors to Meta AI: \u201cSharing model weights is fundamentally unsafe\u201d\u00a0"},"content":{"rendered":"\n<p><strong>WHO: <\/strong>Concerned citizens&nbsp;<\/p>\n\n\n\n<p><strong>WHAT: <\/strong>Protest against Meta (formerly Facebook) releasing large language model (AI) weights <\/p>\n\n\n\n<p><strong>WHERE: <\/strong>The Meta building in San Francisco, 250 Howard St, San Francisco, CA 94105 (<a href=\"https:\/\/wordpress.us4.list-manage.com\/track\/click?u=436109e7501b2962c4dffa34b&amp;id=0fd2e8999e&amp;e=31152c449c\" target=\"_blank\" rel=\"noreferrer noopener\">event page<\/a>)<\/p>\n\n\n\n<p><strong>WHEN: <\/strong>4:00pm Friday, September 29, 2023\u00a0<\/p>\n\n\n\n<p><strong>WHY: <\/strong>Protestors will gather outside the Meta building in San Francisco on September 29 to call for the \u201cresponsible release\u201d of Meta\u2019s cutting-edge AI models. The protestors demand that Meta stop releasing the weights of its AI models publicly \u2013 a practice they term \u201cirreversible proliferation\u201d.&nbsp;<\/p>\n\n\n\n<p><em>Protest organizer Holly Elmore says, \u201cUnlike other top AI labs, Meta says that they plan to keep releasing the weights of their models publicly, which gives anyone with an internet connection the ability to modify them. Once model weights are released, they cannot be recalled even if it becomes clear the model is unsafe \u2013 proliferating the AI model is irreversible. 
Chief AI Scientist at Meta, Yann LeCun, has said he\u2019s not worried if bad actors gain access to powerful AIs, even if the AIs have human-level or superhuman capabilities. This is unacceptable. New tech shouldn\u2019t be creating a Wild West where my good AI fights your bad AI \u2013 AI should be regulated like we regulate pharmaceuticals or airplanes, so that it is safe before it is released.\u201d\u00a0<\/em><\/p>\n\n\n\n<p>When large language models (LLMs) are accessed through an API, users interact with a version that has been fine-tuned for safety, and the provider can apply additional input\/output filtering; neither safeguard applies when users run the model directly from the weights. Meta recently released the model weights of its LLM, Llama 2, and the company has said that it will continue to release the weights of new models, trusting that \u201cgood guy AIs\u201d will win against \u201cbad guy AIs\u201d.&nbsp;<\/p>\n\n\n\n<p>Safety measures for LLMs like Llama 2 aren\u2019t just about whether they say hateful things. Not only can safety fine-tuning be stripped from a model by anyone who holds the weights; Meta also released a base version of Llama 2 with no safety fine-tuning at all. This is dangerous because LLMs are increasingly being developed as autonomous agents that operate without a human in the loop. Researchers have found that LLMs can be used to scale phishing campaigns, suggest cyberattacks, synthesize dangerous chemicals, or help plan biological attacks. Over the coming years, as their general capabilities increase, they will only become more capable of malicious activities.<\/p>\n\n\n\n<p><em>When asked about the protest, Dr. Peter S. Park, AI Existential Safety Postdoctoral Fellow at MIT and Director of StakeOut.AI, said, \u201cAlready, widely released AI models are being misused for non-consensual pornography, hateful content, and graphic violence. 
And going forward, Llama 2 and its successors will likely enable cybercriminals, rogue actors, and nation-state adversaries to carry out fraud and propaganda campaigns with unprecedented ease.\u201d\u00a0<\/em><\/p>\n\n\n\n<p><em>Sign up for the free insideBIGDATA&nbsp;<a href=\"http:\/\/inside-bigdata.com\/newsletter\/\" target=\"_blank\" rel=\"noreferrer noopener\">newsletter<\/a>.<\/em><\/p>\n\n\n\n<p><em>Join us on Twitter:&nbsp;<a href=\"https:\/\/twitter.com\/InsideBigData1\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/twitter.com\/InsideBigData1<\/a><\/em><\/p>\n\n\n\n<p><em>Join us on LinkedIn:&nbsp;<a href=\"https:\/\/www.linkedin.com\/company\/insidebigdata\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.linkedin.com\/company\/insidebigdata\/<\/a><\/em><\/p>\n\n\n\n<p><em>Join us on Facebook:&nbsp;<a href=\"https:\/\/www.facebook.com\/insideBIGDATANOW\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.facebook.com\/insideBIGDATANOW<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Meta\u2019s frontier AI models are fundamentally unsafe. Since Meta AI has released the model weights publicly, any safety measures can be removed. Before it releases even more advanced models \u2013 which can have more dangerous capabilities \u2013 a group of concerned citizens calls on Meta to take responsible release seriously and stop irreversible proliferation. 
Join in for a peaceful protest at Meta\u2019s office in San Francisco.<\/p>\n","protected":false},"author":10513,"featured_media":32763,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":""},"categories":[526,182,81,180,67,268,56,1],"tags":[437,1343,96],"acf":[],"yoast_head_json":{"title":"Protestors to Meta AI: \u201cSharing model weights is fundamentally unsafe\u201d\u00a0 - insideBIGDATA","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/insidebigdata.com\/2023\/09\/25\/protestors-to-meta-ai-sharing-model-weights-is-fundamentally-unsafe\/","og_locale":"en_US","og_type":"article","og_title":"Protestors to Meta AI: \u201cSharing model weights is fundamentally unsafe\u201d\u00a0 - insideBIGDATA","og_description":"Meta\u2019s frontier AI models are fundamentally unsafe. Since Meta AI has released the model weights publicly, any safety measures can be removed. Before it releases even more advanced models \u2013 which can have more dangerous capabilities \u2013 a group of concerned citizens call on Meta to take responsible release seriously and stop irreversible proliferation. 
Join in for a peaceful protest at Meta\u2019s office in San Francisco.","og_url":"https:\/\/insidebigdata.com\/2023\/09\/25\/protestors-to-meta-ai-sharing-model-weights-is-fundamentally-unsafe\/","og_site_name":"insideBIGDATA","article_publisher":"http:\/\/www.facebook.com\/insidebigdata","article_published_time":"2023-09-25T09:59:00+00:00","article_modified_time":"2023-09-25T21:58:09+00:00","og_image":[{"width":1100,"height":550,"url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2284999159_special.jpg","type":"image\/jpeg"}],"author":"Editorial Team","twitter_card":"summary_large_image","twitter_creator":"@insideBigData","twitter_site":"@insideBigData","twitter_misc":{"Written by":"Editorial Team","Est. reading time":"2 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/insidebigdata.com\/2023\/09\/25\/protestors-to-meta-ai-sharing-model-weights-is-fundamentally-unsafe\/","url":"https:\/\/insidebigdata.com\/2023\/09\/25\/protestors-to-meta-ai-sharing-model-weights-is-fundamentally-unsafe\/","name":"Protestors to Meta AI: \u201cSharing model weights is fundamentally unsafe\u201d\u00a0 - 
insideBIGDATA","isPartOf":{"@id":"https:\/\/insidebigdata.com\/#website"},"datePublished":"2023-09-25T09:59:00+00:00","dateModified":"2023-09-25T21:58:09+00:00","author":{"@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9"},"breadcrumb":{"@id":"https:\/\/insidebigdata.com\/2023\/09\/25\/protestors-to-meta-ai-sharing-model-weights-is-fundamentally-unsafe\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/insidebigdata.com\/2023\/09\/25\/protestors-to-meta-ai-sharing-model-weights-is-fundamentally-unsafe\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/insidebigdata.com\/2023\/09\/25\/protestors-to-meta-ai-sharing-model-weights-is-fundamentally-unsafe\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/insidebigdata.com\/"},{"@type":"ListItem","position":2,"name":"Protestors to Meta AI: \u201cSharing model weights is fundamentally unsafe\u201d\u00a0"}]},{"@type":"WebSite","@id":"https:\/\/insidebigdata.com\/#website","url":"https:\/\/insidebigdata.com\/","name":"insideBIGDATA","description":"Your Source for AI, Data Science, Deep Learning &amp; Machine Learning Strategies","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/insidebigdata.com\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/2949e412c144601cdbcc803bd234e1b9","name":"Editorial Team","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/insidebigdata.com\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/e137ce7ea40e38bd4d25bb7860cfe3e4?s=96&d=mm&r=g","caption":"Editorial 
Team"},"sameAs":["http:\/\/www.insidebigdata.com"],"url":"https:\/\/insidebigdata.com\/author\/editorial\/"}]}},"jetpack_featured_media_url":"https:\/\/insidebigdata.com\/wp-content\/uploads\/2023\/06\/GenerativeAI_shutterstock_2284999159_special.jpg","jetpack_shortlink":"https:\/\/wp.me\/p9eA3j-8Iv"}