Semantic Technologies for Cybersecurity Education Competencies
JSON-LD Implementation of Distributed Learning Analytics

Ryan Straight
ryanstraight@arizona.edu

University of Arizona

Aaron Escamilla
escamillaa@arizona.edu

University of Arizona

2025-11-05

Research Context

This research addresses a gap between professional cybersecurity practice and current educational technology capabilities. We begin by examining how contemporary workforce competencies reveal limitations in existing learning analytics approaches.

Current Assumptions

Current Ed Tech

Individual Humans + Passive AI Tools

Reality

Human ↔︎ AI Assemblages

Contemporary learning management systems, adaptive learning platforms, and learning analytics systems operate under an assumption of individual human learners interacting with passive technological tools. In these models, the student represents the locus of agency, while AI functions as a delivery mechanism or assessment instrument.

However, analysis of professional cybersecurity practice reveals a different operational reality. Security analysts collaborate with automated detection systems, threat intelligence platforms, and decision support tools in configurations that challenge traditional boundaries between human and machine agency.

This creates a measurement gap: current educational technology lacks the representational capacity to capture, assess, or develop competencies involving distributed human-AI collaboration.

Empirical Evidence from Pilot Study

89.4%

of education-focused NICE Framework work role competencies contain posthuman elements

How do we track/assess/teach this?

These findings come from our pilot study published in the Journal of Cybersecurity Education, Research and Practice, which analyzed TWO EDUCATION-FOCUSED work roles - not the entire 52-role framework.

IMPORTANT: These are OG-004 and OG-005 - different from the OG-015 analysis we’ll present today.

We analyzed two education-focused work roles from NICE Framework Version 1: OG-WRL-004 (Cybersecurity Curriculum Development) and OG-WRL-005 (Cybersecurity Instruction). These roles encompass responsibilities for developing and delivering cybersecurity education.

Using systematic posthumanist qualitative coding, we examined every task statement, knowledge requirement, and skill specification. Our coding schema identified nine categories of posthumanist elements, including distributed agency, technological mediation, human-technology entanglement, adaptive learning patterns, and socio-ecological awareness.

Results showed 89.4% of competency statements contained at least one posthumanist element. Analysis revealed significant co-occurrence patterns: human adaptive learning codes appeared alongside technological adaptive learning codes; complexity recognition co-occurred with acknowledgment of distributed agency.

Examples from the framework illustrate these patterns:

  • “Develop training that incorporates adaptive learning technologies” - technological mediation of pedagogy
  • “Coordinate with automated assessment systems” - distributed agency across human-technology networks
  • “Evaluate effectiveness of AI-enhanced learning platforms” - recognition of non-human agency in educational outcomes

The framework implicitly recognizes that cybersecurity education involves human-AI collaboration, distributed decision-making, and technological mediation of learning processes.

However, current educational technology systems lack mechanisms to represent, track, or assess these collaborative competencies. Learning management systems can report module completion but cannot evaluate students’ effectiveness in collaborating with AI security tools, recognizing algorithmic mediation of threat perception, or coordinating action across human-technology networks.

This represents a documented gap between workforce competency descriptions and educational measurement capabilities.

POST-PUBLICATION EXTENSION: Following acceptance of the ISCAP paper, we conducted preliminary analysis of CE-WRL-001 (Cyberspace Operations) to pilot scaling the methodology beyond education-focused roles. This preliminary extension, undertaken specifically for this presentation, revealed dramatically different posthumanist patterns in operational roles, which we'll present today as a methodological demonstration.

What is Posthumanism?

Decentering the human
Agency is distributed
Technology mediates, not just supports

Our methodological approach draws on posthumanist theory, specifically postphenomenological perspectives, which challenge three assumptions underlying current educational technology design.

First: Decentering the human. Posthumanism questions human exceptionalism - the assumption that humans constitute the sole origin of meaning, agency, and knowledge. Instead, posthumanist perspectives recognize that meaning and agency emerge through relationships between human and non-human actors.

In educational contexts, this framework suggests learning does not occur solely within individual human cognition. Rather, learning emerges through assemblages - networks of humans, technologies, content, environments, and institutional structures functioning as distributed systems.

Second: Agency is distributed. Whereas traditional educational models position students as autonomous agents using passive technological tools, posthumanist analysis reveals agency distributed across human and non-human actors.

Consider AI tutoring systems: students pose questions (human agency), while the AI determines question relevance, adjusts difficulty levels, and sequences topics (technological agency). Learning outcomes emerge from this interaction rather than from either actor independently.

This pattern characterizes contemporary cybersecurity practice. Security analysts collaborate with threat detection algorithms in configurations where neither actor could accomplish operational objectives independently. Agency operates as a distributed property of the system.

Third: Technology mediates, not merely supports. Drawing from postphenomenology - the philosophical study of how technologies shape human experience - this principle recognizes that technologies actively mediate perception, inquiry, and reasoning rather than neutrally implementing human intentions.

Threat detection systems, for example, do not simply execute human-designed rules. They mediate how analysts perceive threat landscapes, which patterns become visible, and which risks appear salient.

CRITICAL DISTINCTION FROM HCI: This is NOT just “human-computer interaction.” Traditional HCI maintains clear subject-object boundaries: humans interact WITH computers as external tools. Posthuman distributed agency recognizes that operational capability emerges FROM the assemblage itself - you literally cannot separate “what the human detected” from “what the system detected” in real-time threat analysis. The SIEM aggregates billions of data points, applies pattern matching, determines “unusual” vs “usual,” and prioritizes which signals deserve attention - all BEFORE the human analyst applies contextual knowledge and strategic judgment. The detection IS the assemblage, not the human using a tool.

Similarly, adaptive learning platforms do not passively deliver predetermined content. They mediate which knowledge appears relevant, how concepts connect, and what constitutes mastery. Technology actively shapes learning processes.

These theoretical commitments have implications for educational technology design:

Distributed agency requires learning analytics capable of evaluating human-AI collaboration effectiveness rather than isolating individual human performance.

Technological mediation necessitates assessment approaches recognizing collaborative competencies rather than treating skills as purely human attributes.

Learning-as-assemblage demands curriculum designs explicitly developing students’ capacity for effective participation in human-technology networks.

The technical solution we present implements these theoretical requirements using semantic web technologies.

Our Core Contribution

A methodological framework

Not a complete framework analysis

Theory → JSON-LD → SPARQL

Demonstrated through OG-015, with preliminary CE-001 extension

It’s critical to understand what this paper contributes: a METHODOLOGY for operationalizing posthumanist theory in computational systems.

We are NOT presenting a completed analysis of all 52 NICE Framework work roles. Instead, we demonstrate the viability of this approach through ONE comprehensive case study: OG-015 Technology Portfolio Management (73 coded instances across 9 categories).

IMPORTANT DISTINCTION: The JCERP pilot paper analyzed OG-004 and OG-005 (education roles). Today’s presentation demonstrates the methodology through OG-015 (Technology Portfolio Management) - a DIFFERENT work role showing how the approach scales beyond education-focused positions.

Our contribution is showing HOW TO:

  1. Apply posthumanist theory systematically to professional competencies
  2. Translate qualitative insights into machine-readable JSON-LD
  3. Enable computational queries through SPARQL

This is methodological innovation - proving the approach works and is reproducible. The OG-015 case study represents approximately 2% of the framework but demonstrates the methodology can be applied systematically across different role types.

ADDITIONAL EXTENSION: Today's presentation also includes preliminary analysis of CE-001 (Cyberspace Operations) to demonstrate the methodology's capacity to reveal role-specific patterns. This extension represents pilot scaling work comparing a strategic role (OG-015) with an operational role (CE-001).

CRITICAL TRANSPARENCY ON OUTCOMES: We have ZERO empirical evidence that this improves educational outcomes. We have NOT deployed this in actual courses with learner outcome measurement. This paper contributes METHODOLOGY and INFRASTRUCTURE, not pedagogical effectiveness evidence. Why does this matter, then? Current systems CANNOT assess human-AI collaborative competencies. Before testing whether posthumanist pedagogy improves outcomes, we need measurement infrastructure capable of detecting those outcomes. That's what this framework provides. A future research phase (Fall 2026-Spring 2027) will include controlled comparisons and longitudinal tracking. Should you adopt this RIGHT NOW for proven outcomes? NO. Does it enable outcomes research that was previously impossible? YES. We're building necessary infrastructure, not claiming completed outcomes research.

Future work will apply this validated methodology across all 52 roles. Today, we’re demonstrating the method works across diverse role types and reveals patterns that current educational technology cannot capture.

Implementation Approach

We developed a semantic web-based implementation translating posthumanist analysis into machine-readable formats compatible with educational technology systems.

Three-Stage Methodology

Posthuman Theory → JSON-LD Schema → Queryable Competencies

Our methodology comprises three integrated stages.

First, we apply posthumanist and postphenomenological theory to perform qualitative analysis of cybersecurity competencies. This stage generates theoretical insights regarding distributed agency, technological mediation, and human-technology entanglement patterns.

Second, we translate these theoretical insights into JSON-LD schemas - JavaScript Object Notation for Linked Data, a W3C standard for semantic web representation. This translation preserves theoretical sophistication while rendering insights machine-readable.

Third, these schemas enable SPARQL queries - the standard query language for semantic web data (SPARQL Protocol and RDF Query Language) - permitting computational analysis of human-AI collaboration patterns across the competency framework.

This approach develops new vocabulary preserving theoretical nuance while enabling computational tractability, rather than constraining posthumanist concepts within existing data structures.

DETAILED TRANSLATION PROTOCOL (For the technical audience):

  1. Systematic qualitative coding using our nine-category posthumanist framework applied to ALL NICE competency statements (tasks, knowledge, skills).
  2. Code-to-property mapping where each qualitative code (like HTE-S for Human-Technology Entanglement - Symbiosis) became a first-class JSON-LD entity with defined relationships to Schema.org vocabularies.
  3. Semantic validation ensuring theoretical relationships (like co-occurrence patterns between complexity recognition and distributed agency) remain computationally tractable through SPARQL queries.

Complete translation schema and code mappings are detailed in the published JCERP paper. This is NOT just “tagging data” - we’re creating new ontological vocabulary that preserves philosophical sophistication while enabling machine processing. Happy to share technical resources and implementation details.
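To make step 2 concrete: a minimal sketch of how a single qualitative code might be declared as a first-class JSON-LD entity. The modeling choices here (the rdfs:subClassOf arrangement, the description wording) are illustrative assumptions on our part; the published JCERP schema is authoritative:

{
  "@context": {
    "posthuman": "https://posthuman.education/ontology#",
    "schema": "https://schema.org/",
    "rdfs": "http://www.w3.org/2000/01/rdf-schema#"
  },
  "@id": "posthuman:HTE-S",
  "@type": "rdfs:Class",
  "rdfs:subClassOf": { "@id": "posthuman:HumanTechnologyEntanglement" },
  "rdfs:label": "Human-Technology Entanglement - Symbiosis",
  "schema:description": "Competency statements in which human and technological actors retain distinct roles within a mutually dependent collaboration."
}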

The Posthuman Ontology

JSON-LD Structure:
Context → Analysis → Code Frequencies

SE-C: 22 | HTE-S: 17 | NHA-S: 10

This slide shows the SIMPLIFIED structure of our JSON-LD implementation. Full technical details with complete code examples are in the published paper.

The key concept: We translate posthumanist qualitative coding into machine-readable JSON-LD format with three layers:

Context layer: Defines namespaces linking NICE Framework vocabulary, our custom posthuman ontology, and standard schema.org properties for semantic interoperability.

Analysis layer: Contains structured assessment data from our qualitative coding work.

Code Frequency layer: Records actual counts from systematic analysis of OG-015:

  • SE-C: Socio-Ecological Complexity (22 instances found)
  • HTE-S: Human-Technology Entanglement - Symbiosis (17 instances found)
  • NHA-S: Non-Human Agency - System agency (10 instances found)

This approach preserves the nuance of qualitative posthumanist analysis while making it computationally tractable - enabling queries, aggregation, and integration with other educational systems.
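A simplified instance document combining the three layers might look like the sketch below. The posthuman:posthumanAnalysis property and the namespaces match the SPARQL example later in this talk; the codeFrequency, code, and count property names and the QualitativeAnalysis type are illustrative assumptions, while the counts are the actual OG-015 figures:

{
  "@context": {
    "schema": "https://schema.org/",
    "nice": "https://nice.nist.gov/framework/terms#",
    "posthuman": "https://posthuman.education/ontology#"
  },
  "@id": "nice:OG-WRL-015",
  "@type": "nice:WorkRole",
  "schema:name": "Technology Portfolio Management",
  "posthuman:posthumanAnalysis": {
    "@type": "posthuman:QualitativeAnalysis",
    "posthuman:codeFrequency": [
      { "posthuman:code": "SE-C", "posthuman:count": 22 },
      { "posthuman:code": "HTE-S", "posthuman:count": 17 },
      { "posthuman:code": "NHA-S", "posthuman:count": 10 }
    ]
  }
}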

For implementation details, developers should refer to the complete JSON-LD schemas in the paper’s appendices.

Primary Case Study: OG-015

Technology Portfolio Management

73 coded instances

9 posthuman categories

We demonstrate this methodology through DETAILED posthumanist analysis of ONE NICE Framework work role: OG-015 Technology Portfolio Management. This is our PRIMARY case study for TODAY’S presentation - the complete, in-depth analysis demonstrating methodological rigor.

CLARIFICATION: This is NOT the same as the JCERP pilot paper, which analyzed OG-004 (Curriculum Development) and OG-005 (Instruction). OG-015 is a NEW analysis demonstrating how the methodology scales to strategic management roles beyond education-focused positions.

This role encompasses management of technology investment portfolios aligned with organizational strategic objectives. We systematically analyzed ALL task statements, knowledge requirements, and skill specifications for this single role.

Results identified 73 individual coded instances distributed across 9 posthumanist categories from this ONE work role analysis. This demonstrates the coding depth and pattern density achievable when posthumanist methodology is systematically applied to diverse role types.

This single work role represents approximately 2% of the complete NICE Framework (one work role among 52 total). This detailed case study demonstrates the methodology established in JCERP (via OG-004/005) can be applied to strategic management roles.

WHY OG-015 SPECIFICALLY? We selected Technology Portfolio Management as our primary case study because it exemplifies complex human-technology interactions characteristic of cybersecurity practice. It’s a STRATEGIC role with sophisticated technology portfolio management competencies aligning with our pedagogical focus. The 73 coded instances across multiple competency statements, tasks, and skills provide sufficient analytical depth to identify characteristic distribution patterns. The coding distribution aligns with theoretical expectations for technology-intensive professional roles, suggesting these patterns would emerge consistently across the framework’s 52 work roles. This selection enables us to demonstrate methodological rigor through rich, nuanced analysis rather than claiming representativeness from superficial coverage.

Next, we’ll show a preliminary extension to CE-001 (Cyberspace Operations) to demonstrate how the methodology reveals role-specific patterns across different work role types.

What the Data Reveals

Systematic posthumanist analysis of technology portfolio management competencies reveals distinct patterns.

Complexity and symbiosis codes show highest frequencies: SE-C (Socio-Ecological Complexity) appears in 22 instances, while HTE-S (Human-Technology Entanglement - Symbiosis) appears in 17 instances.

This pattern indicates portfolio management competencies already recognize the complex, interconnected nature of cybersecurity systems and symbiotic relationships between human strategic thinking and algorithmic analysis capabilities.

The framework implicitly acknowledges distributed agency, showing substantial recognition of non-human agency (10 instances). Technology systems function as active participants in decision-making rather than passive tools.

The low frequency of anthropocentric codes (only 1 instance each of human-first language and human-centric competencies) suggests cybersecurity education frameworks already trend beyond purely human-centered models, even though this shift is not explicitly articulated in posthumanist terminology.

The semantic web implementation enables computational analysis of these patterns, which were previously accessible only through manual reading.

Preliminary Extension: CE-001 Analysis

PRELIMINARY EXTENSION: Following the OG-015 analysis (Technology Portfolio Management), we applied the methodology to CE-001 (Cyberspace Operations) as a pilot scaling exercise. This comparison shows ACTUAL coded data from both roles.

REMINDER: The published JCERP pilot analyzed OG-004 and OG-005 (education roles). Today’s OG-015 and CE-001 analyses are NEW work demonstrating methodology scalability.

OG-015 (Portfolio Management - strategic role, shown in red): 73 coded instances emphasizing Complexity (22) and Symbiosis (17). Strategic roles show high recognition of system complexity and symbiotic human-technology collaboration where roles remain distinct.

CE-001 (Cyberspace Operations - operational role, shown in blue): 82 coded instances across 36 task statements, revealing dramatically different posthuman patterns:

  • MEDIATION dominates (24 instances, 29.3%): Operational perception and action are entirely technologically mediated. Operators cannot perceive cyber terrain without mediating technologies.
  • HIGH NON-HUMAN AGENCY (29.3% combined): System agency (18) + AI agency (6) = 24 instances. Autonomous systems exhibit substantial agency in detection, analysis, and defensive operations.
  • CO-CONSTITUTION prominent (12 instances, 14.6%): Operational capabilities emerge from co-constitutive assemblages - neither operator alone nor tool alone suffices.
  • ZERO ANTHROPOCENTRISM: Unlike OG-015’s residual human-centrism (2.7%), operational roles are functionally posthuman by necessity.

THEORETICAL IMPLICATION: Operational roles are posthuman by necessity due to tactical tempo and epistemological conditions. Strategic roles retain more anthropocentric organization even while acknowledging complexity.

This preliminary extension validates that the methodology reveals meaningful role-specific patterns and demonstrates scalability beyond education-focused roles.

Making It Queryable

PREFIX posthuman: <https://posthuman.education/ontology#>
PREFIX nice: <https://nice.nist.gov/framework/terms#>
PREFIX schema: <https://schema.org/>

SELECT ?workRole ?name ?entanglementType
WHERE {
  ?workRole a nice:WorkRole ;
            schema:name ?name ;
            posthuman:posthumanAnalysis ?analysis .

  ?analysis posthuman:primaryCategories ?category .
  ?category a posthuman:HumanTechnologyEntanglement ;
            posthuman:subtype ?entanglementType .
}
ORDER BY ?entanglementType

Standard semantic web - works with any triplestore

The semantic web implementation enables translation from qualitative insight to computational analysis.

This SPARQL query represents the W3C standard query language for semantic web data. The query retrieves all work roles exhibiting human-technology entanglement patterns, identifies entanglement types, and organizes results.

This implementation uses standard semantic web protocols rather than proprietary formats. The queries function with any RDF triplestore or SPARQL endpoint, including Apache Jena, Blazegraph, GraphDB, or other institutional infrastructure.

The posthuman ontology - including HumanTechnologyEntanglement and subtypes such as Symbiosis, Mediation, and Co-constitution - becomes queryable through first-class entity representation.

Educational systems can now systematically query questions previously requiring extensive manual analysis: “Which competencies require deep co-constitutive human-AI collaboration versus symbiotic relationships where roles remain distinct?” “Where do gaps in technological mediation recognition occur?” “Which curriculum modules require enhancement for posthuman competencies?”

The semantic web implementation renders these analytical questions computationally tractable through standard SPARQL queries.

Implications and Applications

We now examine implications of this work for learning analytics, curriculum design, and assessment practice.

Learning Analytics Implications

Before

Track: Individual performance

Now

Track: Human-AI collaboration

What we can now measure

This work has implications for educational technology measurement capabilities.

Traditional learning analytics track individual student performance metrics: time on task, quiz scores, clickstream data, assignment completion. These measurements focus on the individual human learner.

Posthumanist schemas enable tracking of human-AI collaboration effectiveness. The framework supports measuring how effectively students work alongside AI security tools, whether they recognize algorithmic mediation of perception, and whether they can manage distributed agency across human-technology networks.

Measurable dimensions include:

  • Collaboration pattern effectiveness: Identifying which human-AI pairings produce stronger security outcomes
  • Mediation recognition: Assessing learner understanding of how threat detection systems shape attention and decision-making
  • Distributed agency competence: Evaluating students’ capacity to coordinate action across human and technological actors in security operations
  • Adaptive collaboration: Measuring whether students adjust strategies based on technological capabilities and limitations

These theoretical constructs become measurable learning outcomes through computational representations.

The framework enables assessment of competencies relevant to cybersecurity workforce preparation that were previously unmeasurable.

CONCRETE INSTRUCTOR SCENARIO: How would an instructor actually USE this in a cybersecurity course? Traditional approach: students learn Security Information and Event Management (SIEM) tools, and assessment measures tool proficiency - "Did the student detect the threat?" A posthumanist-enhanced approach using our framework works as follows (see the query sketch after this list):

  1. The instructor queries the semantic schema: "Show CE-001 competencies with high technological mediation."
  2. Assessments are designed to measure BOTH tool proficiency AND mediation awareness.
  3. Students must articulate how the SIEM shapes threat visibility, pattern salience, and algorithmic biases - not just use the tool.
  4. Learning analytics track collaboration effectiveness - "How effectively did the student collaborate with automated systems?" - rather than just pass/fail on detection.

The framework provides assessment vocabulary plus tracking infrastructure for collaborative competencies. We're developing instructor implementation guides, assignment designs, rubric structures, and learning analytics dashboards for a Fall 2026 public release.
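As a hedged illustration of step 1, the instructor's request might look like the SPARQL sketch below. The nice:hasTask and posthuman:codedAs properties and the posthuman:TechnologicalMediation class are our own illustrative naming, not the published schema:

PREFIX posthuman: <https://posthuman.education/ontology#>
PREFIX nice: <https://nice.nist.gov/framework/terms#>
PREFIX schema: <https://schema.org/>

# Hypothetical sketch: CE-001 task statements coded for technological mediation
SELECT ?task ?description
WHERE {
  nice:CE-WRL-001 nice:hasTask ?task .
  ?task schema:description ?description ;
        posthuman:codedAs ?code .
  ?code a posthuman:TechnologicalMediation .
}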

Curriculum Gap Analysis

Systematic competency enhancement

Evidence-based curriculum design

The semantic framework enables systematic curriculum gap analysis across competency frameworks.

Current curriculum design for cybersecurity education relies on manual reading of competency frameworks, expert judgment, and ad hoc mapping to learning objectives - processes that are time-consuming, inconsistent, and potentially incomplete.

The framework enables SPARQL queries identifying specific gaps. For example: “Show me all competencies with high complexity recognition but low technological adaptive learning codes” identifies competencies acknowledging complex systems while not explicitly addressing system learning and evolution.

“Find competencies with human-technology symbiosis but no consideration of ethical implications” identifies collaboration competencies lacking ethical frameworks.

“Identify work roles with heavy non-human agency recognition lacking corresponding assessment of distributed responsibility” reveals gaps in accountability frameworks.

These queries generate systematic, evidence-based recommendations for curriculum enhancement through computational analysis of the competency landscape rather than relying solely on individual instructor judgment.
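To make the first of these example queries concrete, a hedged SPARQL sketch might look like the following. The codeFrequency, code, and count properties, the "TAL-T" code label for technological adaptive learning, and the thresholds are illustrative assumptions, not the published schema:

PREFIX posthuman: <https://posthuman.education/ontology#>
PREFIX nice: <https://nice.nist.gov/framework/terms#>
PREFIX schema: <https://schema.org/>

# Hypothetical sketch: high complexity recognition, low technological
# adaptive learning coding
SELECT ?workRole ?name ?complexity ?adaptive
WHERE {
  ?workRole a nice:WorkRole ;
            schema:name ?name ;
            posthuman:posthumanAnalysis ?analysis .
  ?analysis posthuman:codeFrequency ?cf1 , ?cf2 .
  ?cf1 posthuman:code "SE-C" ; posthuman:count ?complexity .
  ?cf2 posthuman:code "TAL-T" ; posthuman:count ?adaptive .
  FILTER (?complexity >= 10 && ?adaptive <= 2)
}
ORDER BY DESC(?complexity)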

Institutions can use these findings to make data-driven decisions regarding curriculum development resource allocation for workforce preparation.

Assessment Applications

Distributed Agency
Technological Mediation
Adaptive Collaboration

Now assessable, not just theoretical

The concepts of distributed agency, technological mediation, and adaptive collaboration have functioned as theoretical constructs in posthumanist scholarship - significant philosophically but difficult to operationalize in educational assessment.

The semantic framework renders these concepts assessable.

Distributed Agency: Performance assessments can require students to coordinate security responses across human analysts and automated systems, with computational evaluation of coordination effectiveness.

Technological Mediation: Assessments can evaluate whether students recognize how security tools shape threat perception - assessing not merely tool use proficiency but understanding of mediating processes.

Adaptive Collaboration: Measurements can determine whether students adjust strategies based on AI system capabilities, including recognizing appropriate reliance on algorithmic detection versus human judgment and adapting to changing system capabilities.

These constructs become measurable learning outcomes through computational representations in the semantic framework.

Educational programs can demonstrate development of collaborative competencies characterizing contemporary professional practice through data-supported claims rather than relying solely on assertions about technical skill development.

Integration Strategy

Backward Compatible + Posthuman Enhanced

Incremental adoption, not replacement

Implementation does not require replacing existing educational technology infrastructure.

JSON-LD schemas integrate with established educational metadata standards including Schema.org, IEEE Learning Object Metadata, and IMS standards - all compatible with linked data approaches.

Institutions can maintain current LMS platforms, learning analytics systems, and assessment infrastructure. Posthumanist capabilities function as enhanced metadata layered onto existing systems.

Learning management systems continue tracking traditional metrics such as completion rates, grades, and time on task. Systems with posthumanist schema awareness additionally access collaboration effectiveness data, distributed agency assessments, and technological mediation recognition measures.
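As an illustration of this layering, a course resource record might carry both standard and enhanced metadata, as in the hedged sketch below. The resource name and the category identifiers are illustrative assumptions; a legacy LMS would simply ignore the posthuman block, while a schema-aware system parses it:

{
  "@context": {
    "schema": "https://schema.org/",
    "posthuman": "https://posthuman.education/ontology#"
  },
  "@type": "schema:LearningResource",
  "schema:name": "SIEM Threat Detection Lab",
  "schema:teaches": "Coordinating threat detection with automated systems",
  "posthuman:posthumanAnalysis": {
    "posthuman:primaryCategories": [
      { "@id": "posthuman:TechnologicalMediation" },
      { "@id": "posthuman:DistributedAgency" }
    ]
  }
}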

This enables incremental adoption: institutions can begin with single courses, modules, or competency sets, experiment with posthumanist assessment approaches, evaluate outcomes, and scale gradually based on observed value.

The approach avoids wholesale infrastructure replacement, supporting experimentation without substantial technology investments or institutional disruption.

Beyond Cybersecurity

Healthcare • Climate Science • Engineering

Anywhere human-AI collaboration is fundamental

Reproducible methodology

While validated through cybersecurity education, the methodology generalizes to STEM fields where human-AI collaboration characterizes professional practice.

Healthcare: Diagnostic AI systems collaborate with physicians. Radiologists work alongside image analysis algorithms. Treatment planning involves both human clinical judgment and algorithmic analysis of patient data and medical literature. Medical education requires preparation for these collaborative practices.

Climate Science: Climate modeling involves entanglement between human interpretation and computational simulation. Environmental monitoring systems exercise autonomous decisions regarding data collection and threat assessment. Climate science education must develop these collaborative competencies.

Engineering Design: CAD systems mediate design thinking. Generative design algorithms participate in creative processes. Manufacturing automation requires coordination across human and technological actors. Engineering education must address these operational realities.

The methodological approach remains consistent: apply posthumanist analysis to professional competency frameworks, translate findings into JSON-LD schemas, and enable SPARQL queries for systematic curriculum development.

The framework represents a reproducible methodology applicable across domains where professional practice increasingly involves human-AI collaboration.

Validation and Future Work

We conclude with technical validation results and planned extensions of this work.

Technical Validation

✓ JSON-LD validated
✓ SPARQL queries work
✓ Standards compatible

The implementation has undergone technical validation across multiple dimensions.

JSON-LD validation: All posthumanist concepts successfully translate to JSON-LD representation while preserving semantic relationships. Theoretical nuance remains intact in computational representation.

SPARQL query execution: Demonstrated queries successfully retrieve posthumanist elements and relationship patterns. Complex queries identify co-occurrence patterns, collaboration effectiveness metrics, and adaptive learning sequences that current educational technology systems cannot represent.
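A co-occurrence query of the kind described here might resemble the sketch below; the category class names are illustrative stand-ins, not the ontology's confirmed identifiers:

PREFIX posthuman: <https://posthuman.education/ontology#>
PREFIX nice: <https://nice.nist.gov/framework/terms#>
PREFIX schema: <https://schema.org/>

# Hypothetical sketch: work roles where complexity recognition and
# distributed agency were both coded
SELECT ?workRole ?name
WHERE {
  ?workRole a nice:WorkRole ;
            schema:name ?name ;
            posthuman:posthumanAnalysis ?analysis .
  ?analysis posthuman:primaryCategories ?c1 , ?c2 .
  ?c1 a posthuman:SocioEcologicalComplexity .
  ?c2 a posthuman:DistributedAgency .
}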

Standards compatibility: Schema integration maintains compatibility with existing educational metadata standards. Backward compatibility ensures traditional educational technology interprets basic elements while posthumanist-aware systems access enhanced capabilities.

Statistical relationships from our analysis, including the 89.4% occurrence rate of posthumanist elements, successfully translate into semantic web format maintaining both quantitative precision and theoretical interpretation.

The implementation uses W3C standards and integrates with existing educational technology infrastructure, supporting practical deployment beyond research prototype status.

CRITICAL LIMITATIONS: This technical validation demonstrates the framework WORKS - the JSON-LD translates correctly, SPARQL queries execute successfully, and semantic relationships are preserved. However, we have NOT validated:

  1. Deployment in actual educational settings - no real courses using this yet,
  2. Learner outcome improvements - no empirical evidence of pedagogical effectiveness,
  3. Instructor usability at scale - no systematic usability testing with diverse faculty,
  4. Full framework coverage - only demonstrated through detailed case study methodology (OG-015 + preliminary CE-001), not complete 52-role analysis.

This is infrastructure and methodology validation, not deployment validation. The framework enables future pedagogical research, but we’re not claiming it’s “classroom-ready” without additional validation phases. Pilot deployments targeting Fall 2026.

Scaling the Methodology

Completed: OG-004/005 (JCERP) + OG-015 + CE-001 (ISCAP)

Next Phase: 52-role systematic analysis

Then: Public framework release + Implementation guides

Fall 2026 target for complete framework

Published JCERP work established the METHODOLOGY through analysis of OG-004 (Curriculum Development) and OG-005 (Instruction) - education-focused roles showing 89.4% posthuman elements.

Today’s presentation demonstrates scaling through OG-015 (Technology Portfolio Management, 73 coded instances) and CE-001 (Cyberspace Operations, 82 coded instances). Combined with JCERP’s OG-004/005 analysis, this covers four of the framework’s 52 work roles - roughly 8% - spanning four different role types.

SCALING VALIDATION: The CE-001 analysis confirms that the methodology reveals meaningful role-specific patterns: operational roles are functionally posthuman by necessity (29.3% mediation, 29.3% non-human agency, zero anthropocentrism), whereas strategic roles such as OG-015 retain more complexity recognition and symbiotic collaboration patterns.

Planned scaling includes:

Full 52-work-role systematic analysis: Applying this validated posthumanist coding methodology across the entire framework. The JCERP pilot (OG-004/005) established the methodological foundation; today’s OG-015 analysis demonstrates strategic-role scaling; CE-001 shows operational-role patterns. We will now systematically analyze the remaining 48 roles to provide complete workforce competency coverage.

Public framework release: Upon completion of the full analysis, JSON-LD schemas, SPARQL queries, and ontology documentation will be released as open access resources. Institutions can adopt the framework without licensing fees or proprietary restrictions.

Implementation guides: Development of practical guides for educational technology developers, curriculum designers, and institutions addressing schema integration with existing LMS platforms, assessment design for distributed agency competencies, and interpretation of query results for curriculum enhancement.

Timeline: Targeting Fall 2026 for complete framework release, allowing time for comprehensive systematic analysis across all roles, thorough documentation, and validation with early adopters.

The complete framework will be freely available, supporting transformation of educational technology to align with contemporary professional practice. Today’s presentation demonstrates both the published methodology and preliminary scaling validation.

Conclusions

1. Posthuman theory → Practical tech

2. Assess what actually matters

3. Enhance existing systems

We conclude with three contributions of this work:

First: Operationalization of posthumanist theory in computational systems. This work demonstrates that sophisticated philosophical insights regarding human-technology relations can be rendered computationally tractable. Theoretical and practical dimensions inform each other rather than operating independently.

Second: Expanded assessment capabilities. The framework enables assessment beyond knowledge recall or isolated skill performance, supporting evaluation of AI collaboration effectiveness, technological mediation recognition, and distributed agency coordination - competencies characterizing contemporary professional practice.

Third: Infrastructure-compatible enhancement. Rather than requiring wholesale replacement, institutions can adopt posthumanist approaches incrementally, layering capabilities onto existing infrastructure. This supports experimental adoption and evidence-based scaling.

Educational technology gains capacity to measure competencies aligned with workforce operational reality through this semantic web implementation.

Questions and discussion welcome.

Contact

Ryan Straight

ryanstraight@arizona.edu
ORCID: 0000-0002-6251-5662

Aaron Escamilla

escamillaa@arizona.edu

ISCAP 2025 | Straight & Escamilla | November 5, 2025
