Avoiding overreliance on AI in higher education

ARTIFICIAL intelligence (AI) is rapidly transforming higher education, offering new tools for teaching, learning, and assessment.

From adaptive learning platforms to automated grading systems and AI-generated feedback, the appeal of efficiency and scalability is undeniable. However, alongside these benefits lies a growing concern: the risk of overreliance.

When algorithms begin to overshadow academic judgment and interpersonal engagement, the core mission of education, which is to foster critical thinking, reflection, and human connection, can be compromised.

The challenge is not whether to use AI, but how to integrate it without allowing it to replace pedagogical intent. Teaching is not simply the transmission of content; it is a dynamic, relational process shaped by context, empathy, and professional intuition.

Overdependence on AI tools can unintentionally narrow learning experiences. Students might begin to rely on generative tools to complete tasks without engaging with underlying concepts.

Educators, in turn, may be tempted to adopt AI suggestions without exercising their own academic judgment, especially under pressure to deliver content quickly or manage large cohorts. This can result in more passive learning, reduced intellectual curiosity, and a loss of creative teaching practices.


To stay grounded in pedagogy, educators must remain at the centre of instructional decisions. AI tools should be seen as support systems and not decision-makers.

For example, many universities use adaptive platforms like Coursera or Moodle that recommend learning pathways based on student performance.

While helpful, these systems are most effective when lecturers intervene to adjust recommendations based on their knowledge of the students and the broader learning goals.

When educators actively shape the AI-enhanced experience, they ensure that learning is personal, inclusive, and meaningful.

AI can also be used to enrich, rather than restrict, student choice. Too often, AI systems predict what students should learn next, creating narrow content funnels that limit exposure to diverse topics.

A student performing poorly in algorithmic thinking, for instance, might be repeatedly directed to basic exercises in data structures. Yet a thoughtful instructor might identify that the same student can engage with real-world problems like AI bias or ethical computing, thus broadening their learning journey.

By stepping in, educators help students stretch beyond algorithmic assumptions, encouraging intellectual risk-taking and confidence.

One effective way to embed AI ethically is through reflective learning models. Rather than using AI to provide definitive answers, educators can frame it as a thinking partner.

In a humanities class, for example, students could use a generative AI tool to draft the structure of a persuasive essay, then critically evaluate the logic and underlying assumptions.

This method not only builds AI literacy but also reinforces skills in argumentation, critique, and self-awareness.

Similarly, in engineering or business courses, students might be asked to compare AI-generated solutions to case studies with their own, reflecting on differences in reasoning and ethical implications.

Institutions also play a critical role in shaping a balanced approach. Clear guidelines around acceptable use of AI should be developed collaboratively across departments. These frameworks can help ensure consistency while respecting the specific needs of different disciplines.

Equally important is building AI literacy across the academic community. Faculty development programmes and classroom resources on how AI works, and where it falls short, empower both educators and students to engage with these tools thoughtfully and responsibly.


Some universities have taken the lead by creating interdisciplinary “Teaching with AI” task forces. These groups review emerging technologies, propose ethical standards, and help integrate AI into pedagogy without sacrificing academic integrity.

Perhaps the most important strategy to avoid overreliance is the regular evaluation of learning impact. Rather than focusing solely on performance metrics generated by AI tools, institutions should review whether students are genuinely engaging with content, developing higher-order thinking, and participating actively in their own learning.

This might involve classroom observations, student feedback, and peer reflection to ensure that AI is supporting and not replacing meaningful educational experiences.

AI undoubtedly has a place in the future of higher education. Its ability to support personalised learning, provide rapid feedback, and assist with routine tasks can benefit both students and educators.

However, its value depends entirely on how we use it. If treated as a shortcut, AI can lead to shallow learning and disengagement. If used with intention and pedagogical care, it can enhance creativity, reflection, and depth.

The goal is not to teach through AI, but to teach with it. That means reaffirming the role of educators as designers of learning experiences and mentors in students’ intellectual journeys.

It means treating AI as an assistant that extends human capabilities and not as a replacement for human connection. As institutions move forward, the guiding principle should remain clear: technology may shape the future, but it is pedagogy that defines its purpose. – May 21, 2025


The author is the Director of the Centre for Academic Advancement and Flexible Learning (CAFEL) and Senior Lecturer at the Department of Electrical and Electronics Engineering, College of Engineering, Universiti Tenaga Nasional (UNITEN).

The views expressed are solely those of the author and do not necessarily reflect those of Focus Malaysia.

