True Teamwork: Human-AI Partnership Activities for K-12 Cybersecurity Education

Presenting classroom-ready activities at NICE K12 Cybersecurity Education Conference 2025

Three complete activities, each with four grade-band versions, designed to teach students that humans and AI work best as collaborative partners in cybersecurity.
Author: Ryan Straight
Published: Sunday, December 7, 2025

Presentation Context

I had the opportunity to present at the NICE K12 Cybersecurity Education Conference 2025, held December 7-9 in Nashville, alongside my colleagues Rob Honomichl and Paul Wagner. Our session was part of the Multidisciplinary & Innovative Approaches track, where we introduced a set of classroom-ready activities designed to help K-12 students learn how to collaborate with AI as genuine teammates in cybersecurity contexts rather than simply using AI as another digital tool.

Tip: Materials Available

All lesson plans, assessment rubrics, implementation guides, and research resources are available at:

ryanstraight.github.io/nicek12-2025-materials

12 lesson plans • Career connections • Annotated bibliography • Works at any resource level

The Core Shift

At their heart, these activities aim to transform how students think about AI. Instead of viewing AI as either an adversary to guard against or a passive tool to be wielded, students come to see AI as a collaborative partner they work alongside.

| Old Thinking | New Thinking |
|---|---|
| Humans use AI tools | Humans and AI as teammates |
| AI is either adversary or tool | AI is collaborative partner |
| Individual competency matters | Partnership capability matters |

This conceptual shift matters because modern cybersecurity genuinely requires humans and AI to work together. In today’s Security Operations Centers, AI systems process tens of thousands of security events every second, far more than any human team could monitor manually. Human analysts don’t replace this automated processing; instead, they coordinate with it, bringing contextual judgment and creative problem-solving to complement AI’s speed and pattern recognition.
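The division of labor described above can be sketched in a few lines of code. This is a toy illustration, not a real SOC pipeline: the event fields, the scoring heuristic, and the 0.5 threshold are all assumptions made for the classroom analogy. Automated scoring sweeps the full event stream; only events it flags reach the human queue for contextual judgment.

```python
# Toy triage split: AI-style scoring over every event, human review only
# for what the scoring escalates. All fields and thresholds are invented
# for illustration.
events = [
    {"id": 1, "type": "failed_login", "count": 3},
    {"id": 2, "type": "failed_login", "count": 500},  # brute-force-like burst
    {"id": 3, "type": "file_access", "count": 1},
]

def ai_score(event) -> float:
    """Stand-in for model-based pattern recognition: score rises with volume."""
    return min(event["count"] / 100, 1.0)

# Only high-scoring events land in the human analyst's queue.
human_queue = [e for e in events if ai_score(e) >= 0.5]
print([e["id"] for e in human_queue])  # only the burst escalates
```

Students reviewing the human queue then do what the scorer cannot: ask why this account, why now.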

The Three Activities

Security Detective Teams

In this activity, students investigate security incidents with an AI partner rather than on their own. Through the investigation, students discover firsthand that AI excels at rapid pattern recognition across large datasets, while humans bring something equally valuable: contextual understanding, intuition about human behavior, and the ability to weave disparate clues into coherent narratives.

Example

A school secretary’s account is compromised with password Lincoln2024! (mascot + year). AI recognizes the weak password pattern and brute force attack signature. Humans notice something AI cannot: the school just announced budget cuts affecting support staff, and the first unauthorized access was to personnel records. AI provides technical analysis; humans provide institutional context that explains why this person, why now.
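The "mascot + year" pattern the AI partner spots in this scenario can be mimicked with a simple rule-based check. This is a hypothetical classroom sketch, not a real detection tool: the word list, regex, and function name are assumptions, and the institutional context (budget cuts, personnel records) deliberately stays outside the code because no pattern matcher can supply it.

```python
import re

# Hypothetical word list a checker might associate with this school
# (mascot, school name, etc.). Purely illustrative.
SCHOOL_WORDS = {"lincoln", "tigers", "eagles", "wildcats"}

def flags_weak_pattern(password: str) -> bool:
    """Flag passwords shaped like <known word><recent year><optional symbol>."""
    match = re.fullmatch(r"([A-Za-z]+)(19|20)\d{2}[!@#$%]?", password)
    if not match:
        return False
    return match.group(1).lower() in SCHOOL_WORDS

print(flags_weak_pattern("Lincoln2024!"))  # the compromised password: True
print(flags_weak_pattern("x9$Lq7!vTz"))    # random string: False
```

The check captures what AI contributes here (fast, tireless pattern matching) and, by omission, what it cannot: knowing why that account was targeted this week.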

Ethics in Automated Security

This activity challenges students to design governance policies for AI security systems. Working with a fictional monitoring system called SchoolGuard, they must decide what actions the AI should take automatically and what should require explicit human approval. Privacy, safety, efficiency, and trust all come into tension.

What makes this activity distinctive is that the AI itself participates in the policy discussion, explaining both its capabilities and its limitations. In engaging with this activity, students discover that there are rarely easy answers in AI governance—only thoughtful trade-offs between competing values.
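One way students could express a SchoolGuard-style policy is as an explicit mapping from actions to approval tiers. The action names and tiers below are invented for illustration; the design point is the default: anything the policy does not name falls back to human approval, failing safe rather than open.

```python
# Illustrative draft policy for the fictional SchoolGuard system.
# Action names and tier assignments are assumptions for discussion.
POLICY = {
    "log_event":            "automatic",
    "quarantine_email":     "automatic",
    "lock_student_account": "human_approval",
    "notify_parents":       "human_approval",
}

def decide(action: str) -> str:
    """Unlisted actions default to human approval: fail safe, not fail open."""
    return POLICY.get(action, "human_approval")

print(decide("quarantine_email"))      # automatic
print(decide("lock_student_account"))  # human_approval
print(decide("delete_user_data"))      # human_approval (not in the policy)
```

Debating which rows belong in which tier, and what the default should be, is exactly the trade-off discussion the activity is built around.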

AI-Assisted Incident Response

This activity places students in defined team roles during realistic security incidents. Taking on positions like Incident Commander, SOC Analyst, Threat Intelligence Specialist, or Communications Coordinator, they must coordinate their response using AI-generated analysis while working under time pressure.

The experience directly mirrors how actual Security Operations Center teams function during real incidents, where clear roles, rapid communication, and effective human-AI coordination can mean the difference between containing a breach and watching it spread.

Grade-Band Differentiation

Each of the three activities includes four developmentally appropriate versions designed to meet students where they are. The core concepts remain consistent across grade bands, but the scenarios, vocabulary, and complexity scale appropriately.

| Grade Band | Security Detective | Ethics | Incident Response |
|---|---|---|---|
| K-2 | “Mystery Helpers” | “Robot Helper Rules” | “Fix It Team!” |
| 3-5 | “Locked Library Computers” | “Computer Rules Committee” | “Computer Problem Solvers” |
| 6-8 | Full investigation | SchoolGuard policies | Team response |
| 9-12 | SOC simulation | AI Governance Workshop | Enterprise incident |

Low-Resource Implementation

One key design principle guided our development: the framing matters more than the technology. We wanted these activities to work in any classroom, regardless of technology access. As a result, they function effectively with any level of AI availability:

  • Full access: Students partner directly with ChatGPT, Claude, etc.
  • Limited access: Rotation stations, shared accounts
  • No AI access: Pre-generated response cards, teacher as AI voice

Interestingly, the low-resource options often create better learning opportunities. When students cannot simply ask AI for answers, they must engage more deeply with the material and think critically about what AI might contribute to the problem at hand.

NICE Framework Alignment

Every activity maps directly to real cybersecurity careers as defined by the NICE Workforce Framework, helping students see that what they are learning connects to genuine professional pathways.

| Activity | Primary Work Roles |
|---|---|
| Security Detective Teams | Cyber Defense Analyst, Vulnerability Assessment |
| Ethics in Automated Security | Cyber Policy Planner, Privacy Officer, Security Manager |
| AI-Assisted Incident Response | Incident Responder, SOC Analyst, Threat Intelligence |

This alignment ensures students recognize the direct connection between what happens in the classroom and the careers they might pursue. The activities do not merely simulate cybersecurity work; they introduce the authentic cognitive demands that professionals face daily.

The Posthuman Foundation

These activities emerge from my broader research program on posthuman pedagogy in cybersecurity education. Traditional educational approaches tend to treat humans as bounded, autonomous individuals who use external technologies as tools. A posthuman perspective, by contrast, recognizes that learning increasingly occurs across distributed networks that include both human and technological agents. We do not simply use AI; we think and learn with AI in ways that blur the traditional boundaries between human cognition and machine processing.

This theoretical grounding matters practically, not just philosophically. Rather than teaching students to master AI as a tool, these activities help them develop the collaborative sensibilities needed to work effectively within human-AI assemblages. The goal is not merely to prepare students for a future where AI is ubiquitous but to equip them for a present where human-AI collaboration is rapidly becoming the norm.

Materials and Resources

All materials from this presentation are freely available and ready for classroom use:

ryanstraight.github.io/nicek12-2025-materials

  • 12 complete lesson plans (3 activities × 4 grade bands)
  • Assessment rubrics (human-AI collaboration, decision-making quality, NICE Framework application)
  • Technical setup guides (platform-specific and low-resource options)
  • Ready-to-print evidence packets, worksheets, and AI response cards
  • Annotated bibliography with research foundations and further reading
  • Career connections linking activities to NICE Framework work roles
  • Audience-specific guides for CTE programs, afterschool/outreach, and STEAM integration
Note: Speaking Engagements

Interested in similar presentations for your institution? View my speaking topics and availability →


Citation

BibTeX citation:
@unpublished{straight2025,
  author = {Straight, Ryan},
  publisher = {NIST},
  title = {True {Teamwork:} {Human-AI} {Partnership} {Activities} for
    {K-12} {Cybersecurity} {Education}},
  date = {2025-12-07},
  url = {https://ryanstraight.com/research/nicek12-2025/},
  langid = {en}
}
For attribution, please cite this work as:
Straight, Ryan. 2025. “True Teamwork: Human-AI Partnership Activities for K-12 Cybersecurity Education.” NICE K12 Cybersecurity Education Conference 2025. December 7. https://ryanstraight.com/research/nicek12-2025/.