Students are asking for AI guidance, not just policy

Students are seeking guidance on using AI effectively in their studies, rather than just policies restricting its use. A new framework, SAGE, has been developed to help students verify AI-generated content and engage with AI tools systematically.
Universities have primarily responded to generative AI with policies and warnings, but students need more than rules. In a study of 167 ICT students, 51.5% requested guidance on verifying the accuracy of AI-generated content. The SAGE framework was developed in response, to help students check AI outputs against authoritative sources. Its steps include generate, evaluate, refine, AI critic, and reflect, and it was validated across six empirical studies involving over 500 students at five Australian university campuses. With SAGE, 73% of students verified AI outputs systematically and 81% engaged in deep revision. The framework embeds verification into assessment design, requiring students to develop cross-referencing skills.