The AI Detector for Assignments helps educators identify submissions that may warrant closer review. It is designed as a support tool for instructor judgment — not a replacement for it.
## What the detector does
Submit a learner assignment and TutorFlow analyzes the text for patterns associated with AI-generated writing. The output is a signal indicating how likely it is that the submission was fully or partially AI-generated.
This is useful for:
- Prioritizing which assignments in a large cohort to review more carefully
- Supporting academic integrity conversations with a concrete, documented signal rather than impressions alone
- Informing classroom policy decisions about AI use in coursework
## How to use the output responsibly
Detection tools are probabilistic. They can produce false positives (flagging human-written work as AI-generated) and false negatives (missing AI-generated text that has been edited or paraphrased).
The detector output is a signal, not a verdict. It should inform your review process, not replace it.
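To see why a flag on its own is weak evidence, consider some base-rate arithmetic. All of the rates below are hypothetical assumptions chosen for illustration, not published accuracy figures for the detector:

```python
# Illustrative Bayes' rule calculation. Every rate here is a hypothetical
# assumption for the example, not a measured property of the detector.

true_positive_rate = 0.90   # P(flagged | AI-generated)
false_positive_rate = 0.05  # P(flagged | human-written)
base_rate = 0.20            # assumed share of submissions that are AI-generated

# Overall probability that a submission gets flagged
p_flagged = (true_positive_rate * base_rate
             + false_positive_rate * (1 - base_rate))

# P(AI-generated | flagged), by Bayes' rule
posterior = true_positive_rate * base_rate / p_flagged

print(f"P(AI-generated | flagged) = {posterior:.2f}")  # → 0.82
```

Even under these favorable assumptions, roughly one in five flagged submissions would be human-written, which is why a flag should prompt review rather than a conclusion.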
The recommended workflow is:
1. Review the detection output: note the signal strength and any specific passages flagged.
2. Compare with assignment context: does the flagged writing match the learner's typical voice and capability? Is the prompt one that tends to produce AI-like responses even from humans?
3. Consider learner history: is this consistent with how the learner has written before?
4. Apply institutional policy: follow your school or organization's guidelines for academic integrity decisions.
5. Have a conversation: when in doubt, a direct conversation with the learner provides more information than any automated tool.
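The first step of this workflow, building a review queue from the detector signal without making any decision, can be sketched as follows. The `Submission` type and the threshold value are illustrative assumptions, not part of the product:

```python
from dataclasses import dataclass

@dataclass
class Submission:
    learner: str
    signal: float  # detector output in [0, 1]; higher = more AI-like patterns

# Hypothetical threshold chosen by the instructor, not a product default.
REVIEW_THRESHOLD = 0.7

def triage(submissions: list[Submission]) -> list[Submission]:
    """Return submissions worth a closer manual look, strongest signal first.

    This only prioritizes review. It makes no integrity decision; the
    remaining workflow steps (context, history, policy, conversation)
    stay manual.
    """
    flagged = [s for s in submissions if s.signal >= REVIEW_THRESHOLD]
    return sorted(flagged, key=lambda s: s.signal, reverse=True)

queue = triage([
    Submission("A", 0.92),
    Submission("B", 0.35),
    Submission("C", 0.74),
])
print([s.learner for s in queue])  # → ['A', 'C']
```

Sorting by signal strength keeps a large cohort manageable; submissions below the threshold are simply not prioritized, never labeled "cleared" or "guilty".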
## What it should not be used for
- As the sole basis for an academic integrity decision or disciplinary action
- To make conclusions about individual learners without other supporting evidence
- As a substitute for assignment design that makes AI assistance less useful (task specificity, personal reflection, and process-based assessment all reduce AI substitutability more effectively than detection alone)