Conference Presentation
I’m presenting this work at the 29th CISSE Colloquium (November 12-14, 2025) at Seattle University. The presentation explores how posthuman theory offers instructional designers a robust framework for AI integration that transcends individual disciplines.
View slides in full screen | Download proceedings paper (coming soon)
A Framework for All Domains
This research emerged from my cybersecurity education work, specifically from watching students navigate increasingly complex relationships with AI tools in their security coursework. The framework isn’t about cybersecurity, though. It’s about instructional design.
Whether you’re teaching nursing students to work with clinical decision support systems, helping business students interrogate algorithmic hiring tools, or preparing engineering students for AI-assisted design work, you’re facing the same fundamental challenge: how do we design learning experiences when agency distributes across human and artificial actors in ways that fundamentally reshape what it means to learn, to know, to perform?
The Distributed Agency Problem
Traditional instructional design makes a comfortable assumption. We design for individual learners. Bounded humans acquiring discrete competencies, demonstrating individual mastery, progressing through carefully scaffolded experiences toward predetermined outcomes that we can measure, validate, certify.
This worked fine when educational technology meant overhead projectors and maybe a learning management system—tools that extended human capability without fundamentally scrambling the whole agency equation. AI changes everything.
When students collaborate with large language models to analyze security vulnerabilities, when they work alongside automated penetration testing tools that probe systems faster than human cognition can track, when threat intelligence platforms aggregate and synthesize data from thousands of sources simultaneously, agency doesn’t just expand—it redistributes, reconfigures, becomes something genuinely distributed across human cognition, artificial intelligence, technical infrastructures, organizational policies, and sociotechnical contexts that shape what’s even possible to think or do. Performance emerges. From the network, not from individuals.
Many instructional designers respond by trying to stuff the genie back in the bottle. Academic integrity policies ban AI use. Learning objectives specify “without AI assistance” as the baseline competency. Assessment rubrics deduct points for using ChatGPT.
We’re designing learning experiences that deliberately exclude the actual agency configurations our students will navigate professionally, and then we wonder why there’s a disconnect between education and practice.
Four Posthuman Instructional Design Principles
The framework I’m presenting draws from posthuman theory, particularly Adams and Thompson’s (2016) methodology for investigating how humans and technologies shape each other through their entanglements. This offers instructional designers a different way forward, one that acknowledges distributed agency as legitimate rather than problematic.
1. Design for the Assemblage, Not the Individual
Instead of starting with individual learning objectives (what should each student know?), we begin with assemblage capabilities: what can the human-AI network accomplish together that neither could achieve alone?
In my cybersecurity courses, this means students don’t just learn threat modeling techniques in isolation. Rather, they learn to orchestrate distributed intelligence that combines human judgment about context and motivation, automated scanning tools that process millions of potential vulnerabilities, threat intelligence feeds updating in real-time, and collaborative analysis platforms where human expertise aggregates and amplifies. The learning target? The assemblage’s capability.
Think about how this translates to your domain. Medical education: clinical reasoning emerges from assemblages of human expertise, diagnostic algorithms, electronic health records, consultation networks. Business education: strategic decision-making involves human creativity, predictive analytics, market intelligence systems, stakeholder networks. The individual is always already part of something larger.
2. Cultivate Relationality and Response-ability
Posthuman theory emphasizes relationality. Entities don’t just interact. They fundamentally shape each other through their relationships and what Haraway (2016) calls response-ability, the ethical capacity to respond appropriately within relationships rather than simply reacting or complying.
For instructional design, this means creating learning experiences where students develop genuine ethical orientation toward their AI collaborators and the broader sociotechnical contexts they’re embedded within, not just instrumental competence in using tools. It’s the difference between knowing how to prompt ChatGPT effectively and understanding how your prompting practices participate in larger systems of knowledge production, power, and possibility.
Students learn to interrogate algorithmic bias in threat detection systems, to question whose security interests automated tools serve (hint: it’s not always the users’), to recognize how their relationships with AI either reproduce or challenge existing power structures in security work. They develop response-ability.
The principle transfers. Nursing students examining how clinical decision support systems encode particular perspectives on health and illness. Journalism students investigating how AI-generated content reshapes public discourse. It’s always about relationships, never just tools.
3. Embrace Emergence, Messiness, and Indeterminacy
Traditional instructional design loves predictability: specify the outcome, design the pathway, assess convergence on the expected result, document achievement of predetermined competencies, move to the next module. Clean and controlled. Posthuman instructional design acknowledges that when agency distributes across human-AI assemblages, what emerges can’t be fully predetermined.
In practice? Open-ended security challenges where human-AI assemblages generate solutions I never anticipated. Assessment that values the sophistication of emergent approaches over adherence to expected answers. Learning experiences that embrace the authentic messiness of distributed agency rather than artificially constraining it. Students discover capabilities they didn’t know the assemblage possessed; they encounter genuine complexity; they learn to navigate uncertainty productively rather than seeking algorithmic certainty.
4. Posthuman Assessment Approaches
If agency distributes, then assessment must evaluate distributed performance as well: what the assemblage achieves, how effectively students orchestrate human-AI collaboration, the sophistication of what emerges from these relationships. Traditional assessments that try to isolate individual contribution become not just inadequate but actually incoherent.
Students maintain reflective documentation showing how their collaborations with AI evolved, what emerged that neither human nor AI could have produced alone, how they navigated ethical dimensions of these relationships. We assess process and outcome together. Formative, developmental assessment rather than just summative judgment.
The criterion isn’t “did you get the right answer?” but “how sophisticated was your orchestration of distributed intelligence, and what did that assemblage make possible?”
Theoretical Grounding: Curriculum-as-Lived
These principles connect to Ted Aoki’s (2005) distinction between curriculum-as-planned and curriculum-as-lived: that inevitable, productive gap between what we design and what students actually experience. Traditional instructional design tries to minimize this gap. Posthuman instructional design leverages it. The learning that matters emerges from students’ lived experience navigating human-AI assemblages, not from executing predetermined objectives. We design the conditions for meaningful emergence, not the outcomes themselves.
So how do you actually implement this? The framework translates through four AI literacies that make posthuman principles concrete and actionable:
- Cognitive literacy: Understanding what AI can and can’t do within assemblages, not as isolated tools
- Civic literacy: Developing critical consciousness about AI’s role in reproducing or challenging social structures
- Creative literacy: Learning to configure novel human-AI collaborations that produce emergent solutions
- Critical literacy: Interrogating the power dynamics embedded in AI-mediated practices
These aren’t cybersecurity competencies, but rather transferable capacities for navigating AI-enhanced learning in any domain. They give instructional designers concrete targets that honor distributed agency while remaining implementable in actual courses with actual students who need actual grades.
Implementation Observations
The paper includes preliminary observations from my cybersecurity courses where I’ve been experimenting with these principles. Students’ reflections reveal something interesting: they move from seeing AI as a tool to experiencing it as a collaborator, from instrumental use toward genuine partnership, and from trying to extract value to cultivating relationships. They develop critical consciousness. They question algorithmic authority. They recognize their response-ability.
These remain observations, not validated findings, and the paper positions this as design scholarship proposing a framework for future empirical work, not presenting completed research. But the patterns suggest something worth pursuing systematically.
Research Directions
This framework opens several research trajectories that extend well beyond cybersecurity:
- Comparative studies examining how posthuman principles manifest across different disciplines—what does distributed agency look like in nursing versus engineering versus humanities education?
- Longitudinal research tracking how students’ capacity for response-ability and relationality develops over time
- Assessment validity studies investigating whether posthuman approaches actually measure what matters for professional practice
- Design process research exploring how instructional designers themselves navigate the shift from individual to assemblage thinking
Every discipline integrating AI faces these questions. The research opportunities are everywhere.
Why This Matters for Instructional Designers
AI integration isn’t just another educational technology to be incorporated into existing frameworks. It fundamentally challenges instructional design at its theoretical foundations, forcing us to reconsider basic assumptions about agency, learning, performance, and assessment.
If we respond by banning AI or treating it primarily as an integrity problem, we’re essentially preparing students for a world that no longer exists. If we embrace it uncritically, we risk turning education into training for algorithmic compliance.
Posthuman instructional design offers something different: a theoretically grounded approach that acknowledges distributed agency as legitimate while maintaining critical consciousness about what that means for education, for society, for the kinds of futures we’re creating through our instructional choices.
The cybersecurity context demonstrates that the framework can work even in high-stakes domains where errors have real consequences, where security breaches can destroy organizations, and where the adversarial nature of the work means you can’t just hope everything works out. If it works there, it can work anywhere.
References
Adams, C., & Thompson, T. L. (2016). Researching a Posthuman World: Interviews with Digital Objects. Palgrave Macmillan.
Aoki, T. T. (2005). Curriculum in a New Key: The Collected Works of Ted T. Aoki (W. F. Pinar & R. L. Irwin, Eds.). Lawrence Erlbaum Associates.
Haraway, D. J. (2016). Staying with the Trouble: Making Kin in the Chthulucene. Duke University Press.
Citation
@inproceedings{straight2025,
author = {Straight, Ryan and Herron, Josh},
title = {Distributed {Agency} in {AI-Enhanced} {Cybersecurity}
{Education:} {A} {Posthuman} {Instructional} {Design} {Framework}},
booktitle = {CISSE 2025},
volume = {29},
date = {2025},
url = {https://ryanstraight.com/research/cisse-2025/},
langid = {en}
}