{"id":27804,"date":"2024-07-01T09:33:39","date_gmt":"2024-07-01T08:33:39","guid":{"rendered":"https:\/\/www.kaspersky.co.uk\/blog\/?p=27804"},"modified":"2024-07-01T09:33:39","modified_gmt":"2024-07-01T08:33:39","slug":"kaspersky-next-2024","status":"publish","type":"post","link":"https:\/\/www.kaspersky.co.uk\/blog\/kaspersky-next-2024\/27804\/","title":{"rendered":"Exploring AI and Deep Fake Disinformation"},"content":{"rendered":"<p>On June 18, 2024, Athens hosted the Kaspersky NEXT 2024 event. The conference brought together some of Europe\u2019s most brilliant minds to showcase ground-breaking research. The central theme of this year\u2019s event was the growing impact of deepfakes, focusing on how this AI-driven synthetic media is driving disinformation to unprecedented levels and enabling more severe forms of technology-enabled abuse.<\/p>\n<p>Here\u2019s a run-down of some of the key topics of discussion throughout the day.<\/p>\n<h3>Navigating the Shadows: AI and Deep Fakes<\/h3>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-27806\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/86\/2024\/07\/01092624\/knext1-1024x681.jpg\" alt=\"\" width=\"1024\" height=\"681\"><br>\nMarco Preuss and Dan Demeter from our Global Research and Analysis Team (GReAT) kicked off with their <a href=\"https:\/\/x.com\/kaspersky\/status\/1802977936707887215\" target=\"_blank\" rel=\"noopener nofollow\">session<\/a>: \u201cNavigating the Shadows: AI, Deep Fakes and Artificial Phantoms.\u201d They provided a comprehensive overview of how deepfakes are created, their potential for misuse, and the challenges they pose to digital security. 
They highlighted examples where deepfakes had been used to spread false information and manipulate public perception, emphasizing the need for advanced detection technologies and robust countermeasures.<\/p>\n<h3>Securing the AI Future<\/h3>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-27807\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/86\/2024\/07\/01092706\/knext2-1024x681.jpg\" alt=\"\" width=\"1024\" height=\"681\"><br>\nDr. Lilian Balatsou, renowned AI evangelist and cognitive neuroscientist, <a href=\"https:\/\/x.com\/kaspersky\/status\/1802983474715873449\" target=\"_blank\" rel=\"noopener nofollow\">delivered<\/a> a compelling keynote on \u201cSecuring the AI Future.\u201d She explored the double-edged nature of AI advancements, discussing how they can be harnessed for good, whilst also posing significant risks if left unchecked. Dr. Balatsou underscored the importance of developing ethical AI frameworks and the role of interdisciplinary collaboration in mitigating the risks associated with AI and deepfakes.<\/p>\n<p>The next session was a panel moderated by David Emm, entitled \u201cDecrypting The Dangers: Tackling Misinformation for a Secure Digital Future\u201d, featuring insights from Dr. Lilian Balatsou, Dimitris Dimitriadis, Marco Preuss, and Yuliya Shlychkova.<\/p>\n<p>The panellists delved into the mechanisms of misinformation, the psychological impact of fake news, and strategies to combat the spread of false information. 
They emphasized the need for public awareness campaigns and the development of tools to verify the authenticity of online content.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-27808\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/86\/2024\/07\/01092717\/knext3-1024x681.jpg\" alt=\"\" width=\"1024\" height=\"681\"><\/p>\n<h3>Mission Impossible: The One Hour Hack<\/h3>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-27809\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/86\/2024\/07\/01092728\/knext4-1024x681.jpg\" alt=\"\" width=\"1024\" height=\"681\"><\/p>\n<p>Yuliya Novikova, Head of Digital Footprint Intelligence, presented \u201cMission Impossible: The One Hour Hack.\u201d She <a href=\"https:\/\/x.com\/kaspersky\/status\/1803026507285115223\" target=\"_blank\" rel=\"noopener nofollow\">demonstrated<\/a> how hackers can crack passwords in less than an hour using sophisticated techniques and AI-driven tools. Novikova\u2019s live hacking session was a stark reminder of the vulnerabilities in our digital lives and the importance of strong, unique passwords and multi-factor authentication to protect personal and corporate data.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-27810\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/86\/2024\/07\/01092739\/knext5-1024x681.jpg\" alt=\"\" width=\"1024\" height=\"681\"><\/p>\n<p>The concluding panel, \u201cManipulated Truths: How Artificial Truth and Deepfakes Power Technology-enabled Abuse,\u201d was <a href=\"https:\/\/x.com\/kaspersky\/status\/1803045382718468205\" target=\"_blank\" rel=\"noopener nofollow\">moderated<\/a> by Christina Jankowski and featured Dr. Madeleine Janickyj from University College London, Emma Pickering from Refuge, and David Emm from our GReAT. 
The panellists discussed the ethical and social implications of deepfake technology, focusing on its use in harassment, political manipulation, and the creation of false narratives. They called for stronger legal frameworks and international cooperation to address the misuse of deepfakes.<\/p>\n<p>A significant takeaway from Kaspersky NEXT 2024 was the need for increased public awareness and education on the risks associated with deepfakes. Attendees agreed that empowering individuals with knowledge is a crucial step in combating disinformation. Educational campaigns, workshops, and public service announcements are essential to inform the public about how to recognize and report deepfake content.<\/p>\n<p>The <a href=\"https:\/\/x.com\/kaspersky\/status\/1802978427051319577\" target=\"_blank\" rel=\"noopener nofollow\">event<\/a> underscored the urgent need to address the challenges posed by AI and deepfake technology, and reinforced that education and public awareness are the first lines of defense against AI-driven disinformation and technology-enabled abuse. By staying informed and vigilant, we can better protect ourselves and our communities from the potentially devastating impacts of these technologies. 
Together, with the continued efforts of experts, policymakers, and the public, we can harness the power of AI for good and create a safer digital future for everyone.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Kaspersky NEXT 2024 in Athens brought together experts to tackle the challenges of AI and deepfakes, highlighting the importance of collaboration, education, and innovative detection technologies.<\/p>\n","protected":false},"author":437,"featured_media":27812,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[5,3659,1622,1623,2026,3608],"tags":[3142,1043,3660,3641,2049],"class_list":{"0":"post-27804","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-news","8":"category-next","9":"category-privacy","10":"category-technology","11":"category-threats","12":"category-trends","13":"tag-knext","14":"tag-ai","15":"tag-cybersecurity-conference","16":"tag-deepfake","17":"tag-next"},"hreflang":[{"hreflang":"en-gb","url":"https:\/\/www.kaspersky.co.uk\/blog\/kaspersky-next-2024\/27804\/"}],"acf":[],"banners":"","maintag":{"url":"https:\/\/www.kaspersky.co.uk\/blog\/tag\/knext\/","name":"#knext"},"_links":{"self":[{"href":"https:\/\/www.kaspersky.co.uk\/blog\/wp-json\/wp\/v2\/posts\/27804","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.kaspersky.co.uk\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.kaspersky.co.uk\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.kaspersky.co.uk\/blog\/wp-json\/wp\/v2\/users\/437"}],"replies":[{"embeddable":true,"href":"https:\/\/www.kaspersky.co.uk\/blog\/wp-json\/wp\/v2\/comments?post=27804"}],"version-history":[{"count":5,"href":"https:\/\/www.kaspersky.co.uk\/blog\/wp-json\/wp\/v2\/posts\/27804\/revisions"}],"predecessor-version":[{"id":27815,"href":"https:\/\/www.kaspersky.co.uk\/blo
g\/wp-json\/wp\/v2\/posts\/27804\/revisions\/27815"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.kaspersky.co.uk\/blog\/wp-json\/wp\/v2\/media\/27812"}],"wp:attachment":[{"href":"https:\/\/www.kaspersky.co.uk\/blog\/wp-json\/wp\/v2\/media?parent=27804"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.kaspersky.co.uk\/blog\/wp-json\/wp\/v2\/categories?post=27804"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.kaspersky.co.uk\/blog\/wp-json\/wp\/v2\/tags?post=27804"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}