Artificial Intelligence (AI) Policy Template

[Insert Organisation Name]
Policy Owner: [Insert Name]
Review Cycle: Annual
Next Review Date: [Insert Date]
Version: 1.0

  1. Purpose

This policy sets out the principles, expectations, and responsibilities for the ethical, effective, and safe use of Artificial Intelligence (AI) technologies across [Organisation Name]. It applies to all staff, learners, and stakeholders engaging with AI in any capacity, whether for teaching, learning, assessment, support services, or administration.

  2. Scope

This policy applies to:

  • All teaching and support staff (including contractors and visiting professionals)
  • All learners enrolled at the institution
  • All AI systems and tools used within the organisation, whether institutionally approved or individually accessed
  • Both educational and administrative uses of AI

  3. Definitions

Artificial Intelligence (AI): Technologies that perform tasks typically requiring human intelligence, such as problem-solving, natural language understanding, pattern recognition, or decision-making.

Generative AI: Tools that create new content (e.g. text, images, audio) based on input data or prompts (e.g. ChatGPT, Claude, Gemini, DALL·E).

Emerging Technologies: Technologies including, but not limited to, automation, mixed/virtual reality, machine learning, and predictive analytics.

  4. Principles

We are committed to:

  • Promoting ethical and inclusive use of AI
  • Ensuring transparency and explainability of AI tools in educational decision-making
  • Prioritising data protection, privacy, and security
  • Supporting professional development to build digital and AI literacy
  • Using AI to enhance — not replace — human judgment and educational relationships

 

  5. Roles and Responsibilities

  • SLT/Principal: Oversight and strategic direction for AI use; ensures policy implementation and resourcing.
  • AI Lead/Digital Strategy Lead: Acts as the point of contact; maintains the AI register, monitors implementation, and leads CPD.
  • Heads of Department: Embed the policy into practice within their teams; ensure compliance and promote dialogue.
  • Teaching Staff: Use AI tools responsibly and ethically; model safe practice and encourage critical thinking.
  • Learners: Use AI tools only as permitted; declare use where required; adhere to academic integrity.
  • IT/IS Team: Maintain technical safeguards; approve tools for use; monitor system usage and risks.
  • Safeguarding / Data Protection Lead: Assess risks relating to bias, safety, and data; ensure compliance with UK GDPR and safeguarding duties.

 

  6. Approved Technologies

The following AI tools and platforms are approved for institutional use. These must be reviewed regularly by the AI Lead or designated committee.

  • ChatGPT (Free or Enterprise): Lesson ideas, drafting text. Approved use: staff only; not for learner use without supervision.
  • GrammarlyGO: Writing support. Approved use: staff and learners, with guidance.
  • Microsoft Copilot: Integrated into MS365. Approved use: staff; CPD recommended.
  • AI in learning platforms (e.g. Century, Kaltura): Adaptive learning or feedback. Approved use: staff and learners where already embedded.

Caution: Requests to use unapproved tools (especially those requiring uploads of learner data) must be submitted for risk assessment before use (see Appendix A: AI Tool Risk Assessment Form).
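For organisations that also keep the AI Use Register (Appendix D) in a machine-readable form, the sketch below shows one possible way to record and review entries. It is illustrative only and not part of the policy: the Python format, the field names (tool_name, approved_users, dpia_required, review_due), and the example values are assumptions.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class AIToolRecord:
    """One illustrative row of a machine-readable AI Use Register (Appendix D)."""
    tool_name: str             # e.g. "Microsoft Copilot"
    purpose: str               # e.g. "Integrated drafting support within MS365"
    approved_users: list[str]  # e.g. ["staff"] or ["staff", "learners"]
    dpia_required: bool        # flagged by the Data Protection Lead
    review_due: date           # next scheduled review by the AI Lead


def tools_due_for_review(register: list[AIToolRecord], today: date) -> list[AIToolRecord]:
    """Return the register entries whose scheduled review date has passed."""
    return [entry for entry in register if entry.review_due <= today]


# Example entry mirroring the approved-tools list above (dates are placeholders).
register = [
    AIToolRecord(
        tool_name="Microsoft Copilot",
        purpose="Integrated drafting support within MS365",
        approved_users=["staff"],
        dpia_required=False,
        review_due=date(2026, 7, 1),
    ),
]
print(tools_due_for_review(register, date.today()))
```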

  7. Acceptable Use Guidelines

  • AI should never replace human interaction, especially in pastoral care, safeguarding, or summative assessment decisions.
  • Learners must be transparent about AI-assisted work (e.g., reflective writing, personal statements).
  • Staff must not upload personal or sensitive learner data into third-party tools unless explicitly approved.
  • Staff should provide critical literacy guidance to learners on AI-generated content (accuracy, bias, misinformation).
  • Use of AI in high-stakes assessment (e.g., grading, progression) must involve human oversight.

  8. Academic Integrity and AI

  • Plagiarism detection tools may not reliably detect AI-generated content.
  • Learners must follow assessment guidelines regarding AI use.
  • Suspected misuse (e.g., AI-generated assignment submissions) will be investigated under the [insert policy name here].

  9. Data Protection and Privacy

  • All AI tools must comply with UK GDPR and institutional data protection policies.
  • No personal or learner data is to be uploaded to AI tools that do not provide adequate data security or transparency.
  • Data processing activities must be logged, and risk assessments (e.g. DPIAs) conducted where appropriate; a sketch of one possible log entry follows this list.
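As noted in the final bullet above, a minimal sketch of how a data-processing activity could be logged is given below. It is a hypothetical illustration, not a requirement of this policy or of UK GDPR: the file name ai_processing_log.jsonl, the recorded fields, and the example values are assumptions, and the detail of any real log should be agreed with the Data Protection Lead.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_processing_log.jsonl")  # hypothetical location for the activity log


def log_processing_activity(tool: str, data_categories: list[str],
                            lawful_basis: str, dpia_completed: bool) -> None:
    """Append one data-processing activity record as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                        # e.g. "Microsoft Copilot"
        "data_categories": data_categories,  # never personal or sensitive learner data without approval
        "lawful_basis": lawful_basis,        # as agreed with the Data Protection Lead
        "dpia_completed": dpia_completed,    # True once a DPIA has been carried out where required
    }
    with LOG_FILE.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")


# Example usage with placeholder values
log_processing_activity(
    tool="Microsoft Copilot",
    data_categories=["staff drafting content"],
    lawful_basis="legitimate interests",
    dpia_completed=True,
)
```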

  10. Training and CPD

  • All staff will receive annual CPD on AI and emerging technology use, including ethical and pedagogical considerations.
  • Bespoke training will be provided for curriculum teams to support integration into schemes of work and assessment practices.
  • Induction for learners includes a Digital and AI Awareness module.

  11. Review and Continuous Improvement

  • This policy will be reviewed annually or following significant technological, legal, or sector developments.
  • Feedback from staff and learners will be sought to shape future versions.
  • Ofsted readiness: A copy of this policy and implementation examples will be made available as part of our Quality Assurance evidence base.

 

  12. Related Policies

  • Safeguarding and Prevent Policy
  • Data Protection and GDPR Policy
  • Academic Integrity and Assessment Policy
  • Digital Strategy / Learning and Teaching Strategy
  • Acceptable Use of Technology Policy

Appendices – you will need to create these forms internally

  • Appendix A: AI Tool Risk Assessment Form
  • Appendix B: Learner AI Usage Declaration Template
  • Appendix C: CPD Programme Outline for AI Readiness
  • Appendix D: AI Use Register (Live Document)
Post by FE & Skills Hero
July 29, 2025