
Generative AI - Dubai: Home

What is Generative AI?


Generative Artificial Intelligence (AI) refers to a class of AI tools that can create new content, such as text, images, or music, based on the data they have been trained on. These tools work by learning patterns from large datasets and then using those patterns to generate new material.

Examples of such tools include ChatGPT, which generates human-like text, and DALL-E, which creates images from text prompts.

Middlesex is committed to the ethical and responsible use of Artificial Intelligence (AI), including Generative AI. We believe that, used responsibly and with integrity, AI can be a powerful tool for change, helping to transform our learning, teaching, and assessment, develop students’ employability, revolutionise our research, and enhance our systems and processes. We recognise the role AI will play in our professional and personal lives, and we are committed to enabling our staff and students to gain the necessary skills to succeed in a future where AI is pervasive.

UNESCO Recommendation on the Ethics of Artificial Intelligence

Middlesex University Dubai Library follows the UNESCO Recommendation on the Ethics of Artificial Intelligence. This framework ensures AI systems serve humanity while addressing risks such as bias, discrimination, threats to dignity, and environmental harm. Grounded in human rights, cultural diversity, and sustainability, it guides responsible AI use. For students, these principles are vital as AI transforms research and learning, helping them protect privacy, uphold academic integrity, and develop critical thinking skills rather than relying uncritically on automated systems.

 

Human responsibility and academic integrity

Ultimate responsibility and accountability must always lie with natural persons, and AI systems should never replace human responsibility.

Students must maintain intellectual control over their research and writing processes while honestly acknowledging AI assistance. While AI can assist with brainstorming or initial drafts, final academic decisions, including argument development, source selection, and conclusions, must remain with the student.

Academic integrity means understanding the difference between legitimate AI assistance (like grammar checking) and inappropriate delegation of core academic tasks (like having AI write entire sections). This requires ongoing dialogue with tutors about appropriate AI use and ensuring that work represents genuine learning rather than creating dependency on automated systems.

 

Transparency of AI tools used in research and learning

Transparency is essential to ensure respect for human rights and enables public scrutiny.

In academic contexts, students must understand how AI tools work, including their training data, limitations, and response generation methods. You should be able to explain to lecturers which AI tools you used, why you chose them, and how they contributed to your work. If you cannot explain how an AI tool reached a particular conclusion, you may not be using it appropriately for academic purposes. Transparency also means choosing AI tools that provide insight into their reasoning processes rather than "black box" systems.

 

Fairness and Non-Discrimination in AI Systems

The UNESCO document emphasises that AI actors must "promote social justice and safeguard fairness and non-discrimination" while ensuring equitable access to AI benefits.

Students must critically examine whether AI tools perpetuate academic inequalities or favour certain writing styles, cultural perspectives, or disciplinary approaches. AI systems may be biased towards native English speakers or better represent certain subject areas. Fairness requires actively seeking diverse perspectives beyond AI recommendations and ensuring academic work doesn't reinforce existing biases.

 

Data protection and privacy rights

Privacy is a right essential to the protection of human dignity, human autonomy and human agency, and must be respected and protected.

Students must understand what personal and academic data AI tools collect when used for coursework and research. Your essays, research queries, and learning patterns may be stored, analysed, or used to train future AI systems. This includes understanding your rights regarding educational data, being cautious about uploading sensitive research data, and knowing how AI tools might retain your intellectual property.

 

Ethical boundaries for AI use in assignments, research, and examinations

AI use should be "appropriate to the context" and "based on rigorous scientific foundations."

Students must establish clear boundaries about when AI assistance is appropriate based on the learning objectives. For assignments and examinations, AI use may be prohibited to ensure authentic assessment. For research, AI might help with literature searches but not primary analysis. For creative assignments, AI assistance should enhance rather than replace original thinking. These boundaries depend on whether assignments aim to develop critical thinking skills, as extensive AI use may undermine educational goals. Students need to collaborate with tutors to establish discipline-specific guidelines.

 

Proper attribution and citation when using AI tools

The emphasis on transparency and accountability suggests that AI use must be properly acknowledged.

Students should follow institutional guidelines while erring on the side of over-disclosure about AI use. This includes documenting which AI tools were used, for what purposes, and how output was modified or verified. Different university programmes have varying standards: some require explicit statements about AI use, while others expect citation of AI tools like any other source. It is your responsibility to check and apply the relevant standards.

Proper attribution respects intellectual honesty principles while helping readers and tutors to understand AI's role in your academic work and contributing to broader scholarly conversations about appropriate AI integration.

 

AI evaluation and verification

The framework emphasises "rigorous scientific foundations" in AI applications.

Students must develop systematic approaches to verify AI-generated content, recognising that AI can produce credible-appearing but incorrect information, fabricate citations, and present outdated perspectives as current knowledge.

Quality assessment involves verifying AI-provided facts through primary sources, cross-checking information across multiple reliable sources, and understanding AI knowledge limitations. Students should be particularly cautious about AI-generated statistical data, citations, and technical information. This requires maintaining traditional research skills while developing new literacies for AI evaluation.

 

Recognising limitations and potential harms of AI in academic contexts

AI technologies "do not necessarily, per se, ensure human and environmental flourishing."

Students must recognise that AI assistance can harm learning outcomes by reducing critical thinking development, creating dependency on automated systems, or diminishing engagement with complex academic materials. AI limitations include lack of real-time information, inability to understand context deeply, potential for generating confident but incorrect information, and absence of genuine creativity or original insight. Students should assess whether AI use supports or undermines their educational development, understanding that over-reliance on AI tools may leave gaps in essential academic skills needed for intellectual growth.

Using Generative AI in MDX assessments?

Generative Artificial Intelligence (AI) tools may be used in your assessments as specified by the module leader.

Where the use of Generative AI is allowed you must provide, as a minimum:

  • a written acknowledgment of the use of generative artificial intelligence
  • the extent of use, and how generated materials were used
  • a description of how the information was generated (including the prompts used)
  • where generated material has not been adapted, a citation and reference using the closest source type in the relevant referencing style (e.g. “artificial intelligence” or “non-recoverable sources”).

The University encourages the ethical and responsible use of Generative AI within our assessment practices. How and when to incorporate AI will be at the discretion of the programme team and will be led by the nature of the discipline and the programme and module learning outcomes. It will be informed by the principles above and subject to a rigorous internal and external quality assurance process.

Some assessments may explicitly ask students to work with Generative AI, while others may specify that AI should not be used, or only used in specific ways. It must be made clear to students within the assessment criteria when and how they can use AI for each assessment, and how to acknowledge it appropriately when they do so.


Middlesex University AI Guidelines

Generative AI tools to get you started