AI and Cybersecurity: From Research to the Classroom (2021-2023)

Executive Summary

While AI has been applied to cybersecurity problems, the field is still in its infancy, making it important to explore how AI and cybersecurity can support one another and to train future generations of computing professionals about these new risks and ways to mitigate them. This collaborative NSF EAGER project between the University of Maryland, Baltimore County (UMBC) and the University of Illinois Urbana-Champaign (UIUC) will address both the research and educational aspects of combining AI and cybersecurity. It will carry out novel research on applying the latest AI techniques to cybersecurity problems and explore how attacks on AI systems can be mitigated; extend our innovative work on evaluating students' understanding of the underlying security concepts to include AI-related topics; and create and evaluate modules and exercises for undergraduate, graduate, and professional courses on both cybersecurity and AI, covering the concepts, examples, and tools that illustrate how the two fields can support one another.

Technical Challenges/Activities

Our research will be organized around three themes. The first will extend our approach for modeling and evaluating the concepts and curricula in cybersecurity education to encompass its expansion to include AI and machine learning. The second will focus on how the latest AI techniques can enhance cybersecurity and how cybersecurity ideas can be used to protect AI systems. The third will focus on creating, evaluating, and sharing specific educational modules that illustrate the intersection of cybersecurity and AI in a range of courses.

Research on Cybersecurity and AI in Education

Our research on the educational aspects of cybersecurity and AI will include two components. The first will develop new learning assessment tools covering both the AI technologies that are relevant and useful for cybersecurity and the potential vulnerabilities that AI systems face from cyberattacks of different kinds. The second will investigate how to teach students at different levels to understand and use cybersecurity and AI concepts through project-based learning activities and competitions.

Artificial Intelligence and Cybersecurity

The relationship between AI and cybersecurity started in the mid-1990s and is still evolving. Originally, AI was seen as an aid to intrusion detection, providing tools for anomaly detection, data reduction, and inducing rules that explain audit data. UMBC's early work on supporting cybersecurity with AI techniques includes using semantic ontologies to represent host or network intrusion patterns to help recognize instances of them. Since then, UMBC has explored many ways that AI systems and tools can be used in cybersecurity. This effort expands on that prior work by examining maintainable cybersecurity knowledge graphs, enhancing reinforcement learning for cybersecurity applications, and protecting AI systems from attack.
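
To make the knowledge-graph theme concrete, the sketch below (a minimal illustration, not the project's actual ontology) uses Python's rdflib with a hypothetical sec: namespace to encode a port-scan pattern as a subclass of network intrusion and to match an observed event against it with a SPARQL query.

    from rdflib import Graph, Namespace, RDF, RDFS, Literal

    SEC = Namespace("http://example.org/security#")  # hypothetical namespace
    g = Graph()
    g.bind("sec", SEC)

    # Ontology fragment: a port scan is one kind of network intrusion,
    # recognizable by many SYN packets sent to distinct ports.
    g.add((SEC.PortScan, RDFS.subClassOf, SEC.NetworkIntrusion))
    g.add((SEC.PortScan, SEC.indicator, Literal("many SYNs to distinct ports")))

    # An observed event asserted as an instance of the pattern.
    g.add((SEC.event42, RDF.type, SEC.PortScan))
    g.add((SEC.event42, SEC.sourceIP, Literal("203.0.113.7")))

    # Which observed events fall under some subclass of NetworkIntrusion?
    query = """
        SELECT ?event ?cls WHERE {
            ?cls rdfs:subClassOf sec:NetworkIntrusion .
            ?event a ?cls .
        }
    """
    for row in g.query(query, initNs={"sec": SEC, "rdfs": RDFS}):
        print(f"{row.event} matched intrusion class {row.cls}")

A maintainable knowledge graph would of course be far larger and continuously updated from threat intelligence sources, but the same pattern-matching idea applies.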

Developing Curricula and Educational Modules for AI and Cybersecurity

We will design and offer a new course on AI and Security built from independently usable modules. It will cover enough machine learning and knowledge representation to explain how these techniques can be used for defense. Cybersecurity scenarios will include traditional ones at the network and host level, such as detecting zero-day attacks and improving situational awareness, as well as protecting intelligent robots, smart IoT assistants, and AI-driven medical systems. From the AI side, it will cover the concept of adversarial learning and examples of adversarial attacks, drawing on our past and ongoing research. From the cybersecurity perspective, it will include basics such as the CIA triad, authentication, network defense, and host-oriented defense. Research from the other tasks in this project will help identify which topics should be covered and in how much depth. Our modular approach will allow the course to be adapted to diverse audiences and levels, including students in our computing and cybersecurity programs as well as those at other universities.
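
As one example of the kind of adversarial attack such a course could demonstrate, the sketch below applies the standard fast gradient sign method (FGSM) to a toy PyTorch classifier; the model and data here are placeholders, not course materials.

    import torch
    import torch.nn as nn

    # Toy, untrained classifier standing in for a real model.
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(1, 20, requires_grad=True)  # stand-in input sample
    y = torch.tensor([1])                       # its true label
    epsilon = 0.1                               # perturbation budget

    # Compute the loss gradient with respect to the input itself.
    loss = loss_fn(model(x), y)
    loss.backward()

    # FGSM: nudge the input in the sign of that gradient to raise the loss.
    x_adv = (x + epsilon * x.grad.sign()).detach()

    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

With a trained model on real data, a small perturbation budget is often enough to flip the prediction while leaving the input essentially unchanged to a human observer, which is what makes such demonstrations instructive in the classroom.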

Potential Impact

This project will make significant contributions to the science and practice of combining AI and cybersecurity and will transition these contributions to courses at our universities and beyond. It will explore how we can apply, adapt, and integrate the latest AI techniques to a range of cybersecurity problems, such as combining NLP, knowledge graphs, and embeddings to help system administrators predict and track evolving threats. It will also prototype new techniques to protect AI systems from attacks that poison their training data or exploit their lack of robustness. We will extend our cybersecurity concept index to include relevant AI concepts, along with ways to measure how well students understand them. We will disseminate our results through a project website and mailing list, research papers, and a GitHub repository holding shared datasets, presentation slides, reusable class projects, demonstration systems, software code, and documentation.
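
For instance, the effect of a data poisoning attack can be illustrated with a simple label-flipping sketch like the one below, which uses a synthetic scikit-learn dataset and a logistic regression model as stand-ins rather than any system developed in this project.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic two-class dataset standing in for real security telemetry.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # Poison the training set by flipping 20% of its labels.
    rng = np.random.default_rng(0)
    flip = rng.choice(len(y_tr), size=len(y_tr) // 5, replace=False)
    y_poisoned = y_tr.copy()
    y_poisoned[flip] = 1 - y_poisoned[flip]

    poisoned_model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

    print("accuracy trained on clean labels:   ", clean_model.score(X_te, y_te))
    print("accuracy trained on poisoned labels:", poisoned_model.score(X_te, y_te))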