Bahria University

Discovering Knowledge

Dr. Abdul Hafeez

PhD Theme/Topic: Human–AI Interaction Framework for Transparent Intelligent Learning Systems Using MCDA and Explainable Analytics

Supervisor: Dr. Abdul Hafeez, Professor
Co-Supervisor: Dr. Ansar Siddique, Professor
Contact #: +923354224793
Email: ahafeez.bulc@bahria.edu.pk
Campus/School/Dept: BULC/-/CS
RAC Approved Supervisor for Research Areas: Human Computer/AI Interaction, Multi-Criteria Decision Analysis (MCDA)

Supervisory Record:
PhD Produced: 0
PhD Enrolled: 1

MS/MPhil Produced: 25
MS/MPhil Enrolled: 1

Topic Brief Description: 

As AI tools become increasingly embedded in educational environments, they introduce new possibilities for enhancing learning while also presenting challenges related to transparency, trust, and user understanding. This research explores the intersection of Human–AI Interaction (HAI) and intelligent learning systems, focusing on how AI-powered educational tools can be designed to enhance transparency, interpretability, and user trust through Multi-Criteria Decision Analysis (MCDA) and explainable analytics. A mixed-methods approach will be employed, beginning with qualitative interviews and surveys to gather insights from students, instructors, and academic technology administrators regarding their expectations, trust concerns, and perceived challenges when using AI-driven learning systems.

Based on these insights, a prototype transparent intelligent learning system (e.g., an AI-powered feedback tool or adaptive tutoring system) will be developed. The system will incorporate MCDA to balance criteria such as accuracy, usability, fairness, and cognitive load, and will provide visual explanations through explainable analytics dashboards. Experimental studies will evaluate how students interact with the system, examining outcomes such as learning performance, engagement with AI feedback, perceived transparency, and trust in AI-generated recommendations. Usability testing will assess the system’s effectiveness in supporting informed decision-making and fostering positive Human–AI interaction. Data will be analyzed using both qualitative (thematic analysis) and quantitative (statistical) methods, ultimately aiming to establish a design framework for creating transparent, user-aligned AI learning systems.
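To make the MCDA step above concrete, the following is a minimal illustrative sketch of a weighted-sum scoring approach over the criteria named in the brief (accuracy, usability, fairness, cognitive load). The alternative names, scores, and weights are hypothetical placeholders, not values proposed by this research; the actual framework may use a different MCDA method entirely.

```python
# Illustrative sketch only: weighted-sum MCDA over normalized criterion
# scores in [0, 1]. Criteria follow the proposal; all numbers below are
# hypothetical examples, not proposed values.

def mcda_weighted_sum(scores, weights):
    """Rank design alternatives by the weighted sum of their criterion scores."""
    totals = {
        alt: sum(weights[c] * s for c, s in criteria.items())
        for alt, criteria in scores.items()
    }
    # Highest total first.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Two hypothetical system variants; "cognitive_load" is pre-inverted so
# that a higher score is better for every criterion.
scores = {
    "dashboard_A": {"accuracy": 0.9, "usability": 0.7,
                    "fairness": 0.8, "cognitive_load": 0.6},
    "dashboard_B": {"accuracy": 0.8, "usability": 0.9,
                    "fairness": 0.7, "cognitive_load": 0.8},
}
weights = {"accuracy": 0.4, "usability": 0.3,
           "fairness": 0.2, "cognitive_load": 0.1}

ranking = mcda_weighted_sum(scores, weights)
```

In practice the weights themselves would be elicited from the stakeholder interviews described above, which is precisely where the qualitative phase of the study feeds the MCDA model.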

Research Objectives/Deliverables:

  1. To investigate the role and impact of AI-driven intelligent learning systems on students’ learning behaviors, decision-making, and interactions with AI-generated feedback.
  2. To develop a human-centered Human–AI Interaction framework using MCDA and explainable analytics, providing guidelines for designing transparent AI learning systems that enhance user trust and understanding.
  3. To assess how students perceive transparency, trustworthiness, and fairness in AI-powered learning tools, and how these perceptions influence their willingness to adopt and rely on such systems.

Research Questions:

  1.  How do AI-driven learning tools influence students’ learning behaviors and decision-making when interacting with AI-generated feedback and recommendations?
  2.  What design principles and MCDA-based criteria should guide the development of transparent, explainable AI learning systems that effectively support users while minimizing potential confusion or misuse?
  3. How do students’ perceptions of transparency, explainability, and trust affect their acceptance, adoption, and responsible use of AI-driven educational tools?

Candidate’s Eligibility Profile:

  1. The applicant must have an MS/MPhil/Equivalent degree in CS/IT/SE with CGPA > 3.0. In addition, knowledge of Human–Computer Interaction, Human–AI Interaction, and AI-driven system design is essential.
  2. Candidates should thrive in an international environment and have excellent communication skills to actively contribute to team research efforts.
  3. We value independence and responsibility while promoting teamwork and collaboration among colleagues.