Higher Education · Emerging Research Line

Authenticity and assessment form a connected research line spanning student-AI collaborative writing, an interview-based authenticity framework, and writing assessment design. Across these studies, the central question is how instructors should judge authorship, ownership, and legitimate performance when generative AI becomes part of student work.

Instructor perspectives · Integrated authenticity framework · Assessment redesign
Student Work

AI changes the status of process, not only product.

Drafting, prompting, revising, and reflection all become part of what must be interpreted when students work with generative AI.

Instructor Judgment

Authenticity becomes an evaluative question.

The issue is not only whether AI was used, but how instructors define ownership, legitimacy, and meaningful intellectual contribution.

Assessment Design

A good response means redesign, not only detection.

This line moves toward criteria, transparency, and design principles that help institutions evaluate AI-mediated work without collapsing into vague prohibition or surveillance.

From authenticity concerns to assessment criteria
This project line treats authenticity as a design and judgment problem: how should educational systems interpret AI-mediated work, and what kinds of assessment structures become necessary as a result?
3 connected studies: Q-method perspectives, interview-based framework building, and writing assessment design
Higher education: the primary context for student writing, instructor judgment, and institutional response
3 authenticity dimensions: investment, process integrity, and contextual integration shape how AI-mediated work is judged
Beyond detection: the line pushes toward criteria, disclosure, and task redesign rather than binary prohibition
Research Problem

Generative AI blurs familiar boundaries between assistance, authorship, revision, and outsourcing. Instructors are left to judge work that may be cognitively meaningful, partially delegated, strategically reflective, or merely polished by an AI system. This project line asks what counts as authentic academic work once those boundaries become unstable.

Why It Matters

Without clearer evaluative language, institutions tend to default either to vague integrity warnings or to surveillance-heavy responses. This line instead pushes toward better criteria, better assignments, and more defensible ways of interpreting AI-mediated work.

Integrated Authenticity Framework

The framework-building side of this line treats authenticity as a holistic judgment rather than a yes-or-no label. In the current formulation, instructors evaluate AI-mediated writing across three interacting dimensions, then interpret those dimensions through the lenses of student agency, AI integration, and educational value.

Dimension 1

Personal-Intellectual Investment

Does the work still reflect the student's own ideas, commitment, and independent contribution rather than borrowed fluency or outsourced thinking?

Dimension 2

Process Integrity

How visible is the student's judgment across drafting, prompting, revising, source evaluation, and explanation of choices made with AI?

Dimension 3

Contextual Integration

What counts as authentic depends on disciplinary norms, course level, and task purpose. The same AI move may be defensible in one context and unacceptable in another.

The practical implication is that authenticity cannot be inferred from AI use alone. It has to be evaluated across multiple dimensions and in relation to the educational purpose of the task.
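
To make the multi-dimensional point concrete, here is a minimal sketch of how such a judgment could be recorded as a profile rather than a binary flag. It assumes a simple three-level rating scale; the names (AuthenticityProfile, Rating) and the scale itself are hypothetical illustrations for this page, not instruments from the studies.

```python
# Illustrative sketch only: a hypothetical structure for recording an
# authenticity judgment as a profile across the three dimensions, not a
# pass/fail verdict inferred from AI use alone. All names are invented.
from dataclasses import dataclass
from enum import Enum

class Rating(Enum):
    WEAK = 1
    MIXED = 2
    STRONG = 3

@dataclass
class AuthenticityProfile:
    """Holistic record of one piece of AI-mediated student writing."""
    personal_investment: Rating      # Dimension 1: student's own ideas and commitment
    process_integrity: Rating        # Dimension 2: visible judgment in drafting, prompting, revising
    contextual_integration: Rating   # Dimension 3: fit with disciplinary norms and task purpose
    notes: str = ""                  # interpretive comments: agency, AI integration, educational value

    def summary(self) -> str:
        # Deliberately no single aggregate score: the framework treats the
        # profile itself as the object instructors interpret.
        return (f"investment={self.personal_investment.name}, "
                f"process={self.process_integrity.name}, "
                f"context={self.contextual_integration.name}")

profile = AuthenticityProfile(Rating.STRONG, Rating.MIXED, Rating.STRONG,
                              notes="AI used for outlining; revisions documented.")
print(profile.summary())
```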

Where Faculty Draw the Line

The spectrum work shows why blanket policy language fails. Faculty judgments tend to cluster into accepted, contested, and rejected zones, with the middle zone depending heavily on assignment design, disciplinary expectations, and whether students can document their process.

Accepted uses

  • Brainstorming and idea generation
  • Preliminary literature search
  • Grammar and mechanics support
  • Translation assistance

Contested uses

  • Stylistic enhancement
  • Structural reorganization
  • Paraphrasing existing arguments
  • Generating supporting examples

Rejected uses

  • Core argument generation
  • Substituting original analysis
  • Undisclosed full-text composition
  • Fabricated evidence or citations

The middle zone is the real assessment problem. Instructors do not just need better detection; they need criteria for judging contribution, disclosure, and alignment between AI use and the task's learning goals.
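
As a hedged illustration of why the contested zone resists blanket rules, the toy lookup below assigns default zones to the uses listed above and lets disclosure and documented process move a contested use toward either end of the spectrum. The categories and decision rules are invented for this sketch, not findings from the spectrum work.

```python
# Toy illustration: why the contested middle zone defeats blanket policy.
# Default zones follow the lists above; the adjustment rules are hypothetical.

DEFAULT_ZONE = {
    "brainstorming": "accepted",
    "grammar_support": "accepted",
    "stylistic_enhancement": "contested",
    "paraphrasing_arguments": "contested",
    "core_argument_generation": "rejected",
    "fabricated_citations": "rejected",
}

def judge(use: str, disclosed: bool, process_documented: bool) -> str:
    zone = DEFAULT_ZONE.get(use, "contested")
    if zone != "contested":
        return zone  # the ends of the spectrum are relatively stable
    # The middle zone turns on context: disclosure and a documented process
    # can make the same AI move defensible; their absence pushes it toward
    # rejection. This is the judgment problem detection alone cannot solve.
    if disclosed and process_documented:
        return "accepted (with disclosure)"
    if not disclosed:
        return "rejected (undisclosed)"
    return "contested"

print(judge("stylistic_enhancement", disclosed=True, process_documented=True))
print(judge("stylistic_enhancement", disclosed=False, process_documented=False))
```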

What This Line Studies

  • How instructors define authentic work when students draft, revise, or brainstorm with generative AI.
  • Which criteria matter most in evaluation: ownership, process transparency, judgment, disciplinary fit, and educational value.
  • How writing assessment systems and classroom routines should be redesigned so AI use can be interpreted rather than simply detected.

Connected Outputs

Q-Method Authenticity Study

A Q-method manuscript on higher education instructors' subjective judgments of authenticity in student-AI collaborative writing. This work is useful because it identifies structured viewpoints rather than flattening disagreement into a simple average opinion.

Interview-Based Framework Study

A qualitative authenticity study centered on instructor perspectives in AI-assisted academic writing. This is where the Integrated Authenticity Framework and the acceptability spectrum begin to take shape.

Writing Assessment Design

A proposal accepted at the Journal of Computing in Higher Education that moves from abstract integrity concerns to the concrete design of criteria, monitoring structures, and evidence sources for AI-mediated writing.

Programmatic Role

This line extends the learner-agency work into instructor judgment and institutional design. It is where the research agenda moves from how learners work with AI to how educational systems should interpret and assess that work fairly.

Current Position

This research line already includes Q-method work on subjective judgments, interview-based framework building, and a design-oriented assessment proposal. The next step is translating those insights into more portable assessment criteria and task design guidance.