⚠️ Action Required: Instructors to Remove AI-Detection Language from Course Materials
Article Contents
- AI Detection Being Deactivated Spring 2025
- Action Required: Instructors to Remove AI-Detection Language from Course Materials
- A Paradigm Shift: From Detection to Engagement
- Final Thoughts: A Shift Toward Meaningful Assessment
AI Detection Being Deactivated Spring 2025
UC San Diego Extended Studies is deactivating Turnitin’s AI-detection feature on April 7, 2025. After that date, instructors will no longer see the AI-detection report when reviewing the Turnitin similarity report, though plagiarism detection will continue to be available.
While instructors may still explicitly prohibit AI use in their courses, upholding academic integrity will now rely on clear guidelines, thoughtful assignment design, and more hands-on grading practices. By removing AI detection, we are shifting the focus away from making academic misconduct the central concern of student submissions. Instructors can instead prioritize assessing genuine engagement with course material and the achievement of learning outcomes. This pivot will enrich instructor-student interactions and foster an environment of engaged learning.
To ensure the success of our Extended Studies instructors during this transition, we are providing immediate action items as well as advice for revising assignment design, assessment design, and grading practices.
Action Required: Instructors to Remove AI-Detection Language from Course Materials
If your syllabus, Canvas course, or assignment instructions reference AI detection or Turnitin’s AI report, these references must be removed or revised. Please replace any mention of AI detection with clear, course-specific guidelines about AI use. Below are three example policies you might adapt for your course documentation:
- Full Prohibition – Students may not use AI tools for any stage of their work.
- Limited Use with Documentation – AI is permitted only for specific stages, such as brainstorming or outlining, with required documentation of prompts, outputs, and reflections.
- Guided AI Integration – AI use is allowed but must be clearly cited, with students explaining how they used it, what they revised, and where human judgment was applied.
These policies should be stated explicitly in your syllabi and assignment/assessment instructions so that students understand expectations clearly and consistently from day one.
A Paradigm Shift: From Detection to Engagement
With AI detection deactivated, instructors must rethink their approaches to assessment. The focus now must shift from catching AI use to designing coursework that prioritizes learning, critical thinking, and student engagement. This is not about making assignments “AI-proof”—it’s about making them AI-resilient, meaningfully engaging, and deeply connected to the learning process.
1. Write Thoughtful and Detailed Assignment Instructions
Clear, specific assignment instructions and essay/discussion prompts reduce opportunities for generic or AI-generated responses. Instead of broad essay topics like “Discuss the impact of climate change,” try one of the following approaches:
- Personalized Questions – Elicit personal responses and reflections that draw on students’ own experiences, which are difficult to outsource to AI tools. For example: “How has climate change affected your region or field of study? Use specific examples from your own experiences or industry.”
- Multi-Step Prompts – Require students to submit a proposal, outline, draft, and final version with reflections at each stage. Ask students to explain their reasoning and methods for completing these tasks.
- Analysis Over Summary – Asking students to summarize “what was learned” often invites AI use, since this type of assessment rewards regurgitation rather than demonstration of proficiency. Instead of asking students to summarize material, require them to compare, critique, or connect concepts to real-world issues.
2. Align Assignments with Student Learning Outcomes (SLOs), Grade Using Rubrics
SLO Alignment
Each assignment should explicitly tie into course SLOs, ensuring that students are demonstrating the skills they need to develop. This approach not only keeps the assessment rooted in the learning objectives but also makes it more difficult for students to outsource their work to AI. Additionally, it increases the relevance and authenticity of their submissions.
- Document Problem-Solving Processes: If an SLO focuses on problem-solving, have students document how they arrived at their solution. This ensures that the process, not just the result, is assessed.
- Critique and Reflection for Critical Thinking: If an SLO requires critical thinking, ask students to critique AI-generated responses or explain the rationale behind their choices. This encourages students to engage deeply with the material and reflect on their own thought processes.
- Real-World Application: If an SLO involves practical application, have students apply concepts to real-world scenarios, case studies, or personal experiences. This not only assesses their understanding but also ensures they can make meaningful connections between theory and practice.
Rubrics
By aligning assignments with SLOs, instructors can create detailed, comprehensive rubrics that not only enhance grading transparency but also provide valuable, actionable feedback to students. Rubrics ensure that students understand exactly what is expected of them and how their work will be evaluated, fostering a fairer and more structured grading process. They also help instructors focus on what truly matters: student learning, reasoning, and engagement with the course material. Instructors: Learn more about rubrics by using the Smart Search in the Instructor Resource Center in Canvas; search for “rubrics.”
3. Shift from Product-Only Grading to Process-Based Assessment
Traditional grading often focuses solely on the final product, but in an AI-assisted era, polished submissions don’t always reflect genuine student effort or learning. To ensure meaningful engagement, assessments should emphasize how students arrive at their final work, not just the end result. By evaluating process alongside product, instructors can gain deeper insight into student learning and critical thinking.
- Require Drafts and Reflections – Have students submit drafts along with reflections on how their ideas evolved, what challenges they faced, and what revisions they made.
- Use AI Documentation Logs – If AI is permitted, require students to provide records of their prompts, outputs, and revisions to demonstrate thoughtful and responsible AI use.
- Scaffold Large Assignments – Break assignments into multiple stages (proposal → draft → revision → final) to ensure students engage with their work over time rather than relying on last-minute AI-generated submissions.
Instructors who shift toward process-based grading can better evaluate student engagement, reinforce learning outcomes, and discourage over-reliance on AI without resorting to policing tactics.
4. Prioritize Clarity Over Precision in Writing
Traditionally, academic grading has favored flawless grammar and precise language, but this can inadvertently privilege native English speakers or those with exceptional command of the language. This approach may not only overlook the depth of a student’s thinking but also pose an equity issue for learners from diverse educational or linguistic backgrounds. It may also drive students to over-rely on AI tools, and while AI-generated text often has flawless grammar, it typically lacks depth and nuance.
Instead of grading heavily on mechanics, focus on clarity of thought, originality, and reasoning. This shift allows all students, regardless of language proficiency or educational background, to demonstrate their understanding and critical engagement with the material without being penalized for minor linguistic imperfections.
- Encourage AI-Assisted Editing – Allow students to use AI tools like Grammarly to refine mechanics while you assess their ideas and critical engagement. Ask students to provide their writing outlines if they intend to use such tools.
- Promote Thoughtful AI Use – For students who struggle with writing, encourage strategic use of AI-generated text as a starting point. However, require documentation of how AI was used, including prompts, drafts, revisions, and personal input, to ensure active engagement with the material rather than wholesale text generation.
- Dock Points Only for Comprehension Issues – Instead of punishing minor grammar mistakes, only deduct points when writing errors obscure meaning.
- Assess Structure & Argumentation – Grade based on logical flow, coherence, and depth of reasoning, not just grammatical polish.
5. Add Live or Interactive Elements to Assessments
Even in fully online courses, incorporating live discussions, presentations, or oral defenses ensures that students engage deeply with their learning. Extended Studies instructors can use Zoom or Kaltura Capture/Quick Capture to facilitate the following approaches:
- One-on-One Check-Ins – Short video meetings where students discuss their work and explain key concepts.
- Video or Audio Submissions – Instead of a written discussion post, students record a short video or audio response.
- Group Debates or Roleplays – Small-group activities in Zoom where students apply concepts in real time.
Some of your course’s asynchronous hours may need to be reallocated to these assessment moments, but these adjustments ensure integrity and deeper engagement in an AI-driven world. Contact your program manager with questions about incorporating these approaches and how they interact with contact hours.
Final Thoughts: A Shift Toward Meaningful Assessment
Turning off AI-detection tools may seem risky, but instructors who transform their approach to assessment and grading, emphasizing achievement over integrity policing, will find the rewards substantial.
In short, instructors now have the opportunity to...
- Refocus on learning over policing.
- Design assignments that prioritize depth, process, and transparency.
- Encourage students to engage critically with AI rather than relying on it blindly.
Removing outdated AI-detection language and redesigning assessments to be more process-driven, engaging, and aligned with learning outcomes will create a stronger, more resilient academic environment—one that values thinking over automation.