Exploring the Dynamic Relationship Between AI and Cybersecurity


February 13, 2025

Dr. Costis Toregas gives his presentation.

Artificial intelligence (AI) is reshaping cybersecurity, enabling stronger defenses while introducing new vulnerabilities that demand careful study. Recognizing the need for deeper exploration of AI-enabled cybersecurity, Dr. Costis Toregas, director of GW’s Cyber Security and Privacy Research Institute (CSPRI), co-chaired a workshop at the 58th Hawaii International Conference on System Sciences (HICSS) in January alongside Drs. Eman El-Sheikh and Sagar Samtani of the University of West Florida and Indiana University, respectively.

HICSS, recognized as the longest-running scientific conference in Information Technology Management, features an array of symposia, workshops, and tutorials. In this workshop, the only one dedicated to security and privacy issues in this year’s program, Toregas led discussions with over 50 international researchers and professionals, focusing on best practices and emerging trends in integrating AI with cybersecurity. Topics covered ranged from securing open-source AI to creating intelligent regulatory compliance tools using large language models.

Workshop participants spent considerable time exploring how to translate these best practices into research-informed AI-enabled cybersecurity education and operations. Dedicated time was set aside to examine a key example of how academic researchers supported by the National Science Foundation and the National Security Agency are advancing this goal through the CyberAI Project, an initiative to develop a robust framework and Knowledge Units for faculty addressing this evolving domain.

Participants also identified pressing challenges at the intersection of AI and cybersecurity, particularly within critical infrastructure, higher education, and industry and government operations. Stephen Kaisler, an adjunct professor in GW Engineering’s Department of Computer Science who attended the workshop, pointed out several areas in need of further exploration, including developing reasoning models that can explain cybersecurity and AI functions, integrating cybersecurity practices with those of AI and machine learning, ensuring clear system explanations, and modeling adversarial attacks.

“I might paraphrase John F. Kennedy by saying, ‘What can AI/ML do for cybersecurity, and what can cybersecurity do for AI/ML?’ The current focus on quantitative and probabilistic AI methods and models seems to omit a significant aspect: How do we reason about what is happening in each of the cases mentioned above?” said Kaisler.

The workshop emphasized two main approaches to integrating AI and cybersecurity: using AI-driven techniques to strengthen cyber defenses and embedding cybersecurity directly into AI system design to guard against evolving threats. These discussions align closely with GW’s Trustworthy AI initiative, which promotes academic research and industry collaborations to investigate AI’s impact on society. As Toregas noted, both approaches present unique opportunities and challenges, and the workshop served as a platform for participants to examine their broader implications.

To maintain the momentum from this workshop, a Community of Interest was established, consisting of a global network of researchers from universities represented at HICSS. This group will work together to advance AI and cybersecurity research through collaborative publications and ongoing discussions, ultimately aiming for the adoption of frameworks and curricula that link these two fields.

At GW Engineering, Toregas encourages faculty and students to engage with CSPRI in these vital conversations. A follow-up workshop on campus will allow faculty to delve deeper into the strategies discussed at HICSS. Additionally, an upcoming CSPRI-hosted pizza party will bring students and faculty together for continued dialogue on the dynamic relationship between AI and cybersecurity.