Beyond Cheating: Why AI Demands a Rethink of Graduate Education

Reflections on a recent interview about distributed cognition and ethical AI integration in higher education

Exploring how AI challenges traditional notions of academic integrity and opens new possibilities for collaborative learning in graduate education.
Author: Ryan Straight

Published: Sunday, June 1, 2025

I was recently interviewed by Evidence in Motion about AI in graduate education, and it got me thinking about how we’re framing this whole thing wrong.

The interview came out a few weeks ago, and I keep coming back to this idea that we’re asking the wrong question. Instead of “How do we stop students from cheating with AI?” maybe we should be asking “How do we actually teach in a world where AI collaboration is just… normal?”

The Conversation That Started It All

The folks at Evidence in Motion reached out because they were intrigued by a talk I gave for them with Dr. Angela Gunder last fall. The interview ended up covering a lot of ground about what it means to learn in an age where the boundaries between human and artificial intelligence are getting increasingly blurry.

One thing I said during the interview that I keep thinking about: “Every conversation with AI contains echoes of human voices.” That wasn’t just me being poetic—it’s actually central to how I think about these tools. AI systems are trained on vast amounts of human knowledge, so when students interact with them, they’re really engaging with a kind of distributed collective intelligence.

From a postphenomenological perspective (which I’ve written about in Straight, 2024a and 2024b), these AI tools aren’t just external instruments—they’re mediators that actually reshape how we relate to knowledge itself.

Why the “Cheating” Frame Misses the Point

Here’s what’s driving me a bit crazy: universities are scrambling to install AI detection tools while students are trying to figure out what’s allowed and what isn’t. But this whole framework assumes that learning is supposed to be a solo activity, which… when was the last time you solved a complex problem entirely on your own?

In the interview, I talked about how “instead of treating AI as a black-box threat, we should assess how students use it transparently.” The key word there is how. We need to be teaching students to be thoughtful collaborators with these systems, not just trying to catch them using them.

I’ve been experimenting with assignments that actually require students to document their process of working with AI tools. Not to police them, but to help them develop what I’m calling “critical AI literacy”—understanding not just how to use these tools, but how to evaluate their outputs, recognize their limitations, and think critically about whose voices are embedded in their responses.

What This Actually Looks Like

Some practical things I’m trying in my classes:

  • Reflection portfolios where students document their human-AI collaboration process
  • Assignments that require sustained critical engagement rather than information reproduction
  • Discussion of whose knowledge is represented in AI training data and whose might be missing
  • Exploration of when AI collaboration enhances learning versus when it might short-circuit important thinking

The goal isn’t to eliminate AI use—it’s to make it visible and intentional.

The Bigger Picture

This connects to broader questions about what we value in education. Are we trying to teach students to reproduce information, or are we trying to develop their capacity for critical thinking, ethical reasoning, and meaningful engagement with complex problems?

If it’s the latter, then AI actually opens up some interesting possibilities. We can focus more on the uniquely human capacities—creativity, ethical judgment, contextual understanding—while using AI as a thinking partner for information gathering and initial analysis.

But (and this is important) this approach isn’t without risks. There’s a real danger of “techno-solutionism”: assuming that technology will solve pedagogical problems without doing the hard work of rethinking our approaches. There’s also the risk that over-reliance on AI could cause critical thinking skills to atrophy if we’re not intentional about maintaining human agency in the process.

Where Do We Go From Here?

I think we’re at one of those moments where we can either resist the change and make our educational practices increasingly irrelevant, or we can get proactive about shaping how these technologies are integrated.

That requires some courage from educators to experiment with new approaches. It also requires wisdom to stay focused on what makes education valuable in the first place.

A few questions I’m sitting with:

  • How might we redesign assessments to embrace rather than resist AI collaboration?
  • What does critical AI literacy look like in different disciplines?
  • How do we ensure that AI integration serves educational equity rather than making existing inequalities worse?

I’m continuing to work on these questions through the MA{VR}X Lab, and I’d love to hear from others who are grappling with similar challenges. If you’re experimenting with AI integration in your programs, or if you have thoughts on any of this, feel free to connect with me on Bluesky.

You can read the original article at Evidence in Motion if you want to see the whole thing.


Further Reading

EIM Editorial Team. (2025). Beyond cheating: Why AI demands a rethink of graduate education. https://eimpartnerships.com/articles/beyond-cheating-why-ai-demands-a-rethink-of-graduate-education
Straight, R. (2024a). Beyond human-centric models in cybersecurity education: A pilot posthuman analysis of the NICE workforce framework for cybersecurity. Journal of Cybersecurity Education, Research and Practice, 2024(1). https://doi.org/10.62915/2472-2707.1210
Straight, R. (2024b). Doing postphenomenology in cybersecurity education: A methodological invitation. Cybersecurity Pedagogy and Practice Journal, 3(1), 64–72. https://doi.org/10.62273/TWSH7587
Verbeek, P.-P. (2011). What things do: Philosophical reflections on technology, agency, and design. Penn State University Press.

Citation

BibTeX citation:
@misc{straight2025,
  author = {Straight, Ryan},
  title = {Beyond {Cheating:} {Why} {AI} {Demands} a {Rethink} of
    {Graduate} {Education}},
  date = {2025-06-01},
  url = {https://ryanstraight.com/posts/2025-06-01-beyond-cheating-ai-graduate-education/},
  langid = {en}
}
For attribution, please cite this work as:
Straight, R. (2025, June 1). Beyond Cheating: Why AI Demands a Rethink of Graduate Education. https://ryanstraight.com/posts/2025-06-01-beyond-cheating-ai-graduate-education/