
Wits AI Framework

Artificial Intelligence (AI) will not define us. Integrity will.

Wits University is committed to the responsible, ethical and innovative use of AI, with academic integrity at its centre. The purpose of this framework is to provide academic staff, professional and support staff (staff), and students with contextually relevant practices regarding the fair and productive use of AI in advancing research, innovation, learning, teaching, course design and the assessment of student learning at the University.

The Wits AI Framework is guided by the University’s:

The Six Wits AI Principles

The University has established six broad principles that provide a common framework for the ethical use of AI, ensuring that academic integrity and research ethics are upheld. This framework is not a policy document.

Principle 1: Upholding unwavering integrity and personal accountability

The University underscores that the individual scholar (or human team) remains fully responsible for the originality, accuracy, and integrity of their work. The use of AI must be transparently disclosed and appropriately acknowledged in line with University policy. The use of a tool does not absolve the user of responsibility for any academic misconduct, including but not limited to plagiarism, falsification, fabrication or improper attribution. In line with good research practice, researchers should commit to describing their use of AI tools transparently, as a matter of course, in dedicated sections of the final write-up of a study, to facilitate the reproducibility of their methods and findings (University of Oxford, 2025). AI cannot be an author on any work.

Principle 2: Foster AI literacy

All staff and students should be enabled to develop a foundational understanding of how AI tools work, including their capabilities, inherent limitations (e.g., bias, inaccuracy, hallucinations) and the ethical considerations of their use, acknowledging that AI access is not equitable in our context. This literacy will support an awareness of when to avoid AI and when such tools may be useful (understanding their role in the academic landscape), as well as how to use them and how to evaluate AI-generated content (critical use).

Principle 3: Adapt research, pedagogical and assessment practices

Research, teaching, learning and assessment methods must be strategically adapted to the AI landscape so that learning outcomes and research outputs remain valid and reliable, thereby protecting the reputation of the University.

This may involve redesigning curricula, learning outcomes and assessments accordingly; considering whether and how AI tools and capabilities may be used to enhance educational goals; and clearly defining the permissible and impermissible uses of AI within specific academic tasks, without compromising outcomes or the development of key skills that are considered a hallmark of a University graduate.

Principle 4: Prioritise human oversight and augmentative use

AI should be positioned and used as an augmentative and consultative tool that supports human intellect, not as a substitute for it, if the quality and integrity of a University degree are to be maintained. Final judgment, critical interpretation, creative insight and ethical decision-making must remain in the hands of the human user, who should use such tools to enhance, not supersede, their own academic work.

Principle 5: Manage institutional risks and promote responsible implementation

The University community must proactively engage with the broader risks of AI, including data security, user privacy, and intellectual property rights. This requires providing clear guidance and training to equip staff and students to safeguard sensitive data and use these tools in a manner that aligns with institutional values and legal standards.

Staff and students engaging with AI must be mindful of the risks relating to data security, confidentiality, privacy, and intellectual property. Sensitive and/or personal data should not be entered into public AI systems, as this may compromise research participants or intellectual property. Disclosure of confidential data may also destroy novelty or originality, placing intellectual property in the public domain.

Principle 6: Equitable, inclusive and socially just AI practices

The University is committed to ensuring that the integration of AI into teaching and learning, research, and administration promotes fairness and inclusivity. The implementation of AI must, as far as is reasonably possible, avoid reinforcing or worsening existing social, economic and/or educational inequalities.

The University recognises that unequal access to digital infrastructure, affordable data, and reliable devices may disadvantage some members of our University community. To address this, the University will actively work to mitigate challenges associated with the digital divide, data costs, and accessibility. Where AI is permitted, equitable alternatives and pathways will be made available to students and staff who encounter barriers to access.

In addition, the University commits to a critical and ongoing examination of AI tools for inherent biases, including those related to race, gender, language, culture and other markers of identity. The University will prioritise the responsible and ethical use of tools that are sensitive to the needs of our diverse, multilingual community and that align with the University’s broader transformation goals.
