
Reimagining higher education in the age of generative AI

Author: Nina Volles

Figure 1: AI-generated image


The emergence of generative artificial intelligence (GenAI⇣) has intensified long-standing questions about the purpose of higher education. Early institutional responses have mostly focused on academic integrity⇣ and the perceived threat to assessment. However, recent research suggests that the real disruption lies elsewhere. GenAI reveals the fragility of curricular models built on performance, reproduction of knowledge and technocratic logic rather than understanding, judgement and relational learning (Herman and Lara-Steidel, 2025; Weng et al., 2024). Drawing on current scholarship on AI in higher education, insights from the report Balancing AI and Engagement in Higher Education, UNESCO’s Education for Sustainable Development (ESD⇣) framework, Paulo Freire’s humanising pedagogy⇣ and the quintuple helix⇣ model, this article argues that AI should be treated less as a crisis and more as a mirror. GenAI exposes how curricula, pedagogies and assessment regimes have drifted away from the deeper purposes of higher learning. The article proposes a shift from periodic curriculum reform to continuous renewal, grounded in critical AI literacy⇣, sustainability, decolonial perspectives and a nuanced understanding of employability. It concludes with principles for a human-centred, AI-aware curriculum that cultivates understanding, agency and responsibility in a rapidly changing world.


AI as a mirror of higher education’s unresolved tensions

The rapid introduction of accessible GenAI tools triggered an immediate institutional reflex. Universities published guidance on permitted use, formed working groups on academic integrity and experimented with detection tools. They redesigned assignments to be less vulnerable to machine-generated text. At first glance, these measures appear to be practical responses to a concrete challenge. Yet the intensity of the reaction reveals something deeper about higher education’s self-understanding (Concannon et al., 2023).


When a language model⇣ can produce plausible essays, reports or reflections in seconds, the instinct of institutions is to protect assessment instruments. This is understandable, because such instruments certify learning and signal value. However, an excessive focus on closing gaps that AI can exploit diverts attention from a more uncomfortable question. Why are so many academic tasks so easy to automate in the first place?

Rather than asking how to shield existing practices from AI, universities need to ask what the technology reveals about the underlying curriculum, its aims and its alignment with the worlds students inhabit and will help shape.

The problem of pace: when curricula move more slowly than the world


One of the most visible tensions is temporal. Students and staff often describe being pulled in opposite directions. Institutional processes for curriculum design, approval and review operate on cycles that span several years. The technological and social environments in which students live and work shift at a far faster rate.


Generative AI intensifies this mismatch. Its capabilities evolve in months rather than decades. Tools that did not exist when students began their studies may have become essential to a profession by the time they graduate. Labour markets begin to expect graduates who can use and critique AI systems intelligently. Professional and regulatory standards adjust accordingly. If curriculum review continues to operate on slow cycles, universities will fall behind in ways that are increasingly consequential.


This is not a call for constant reaction to every technological novelty. It is a call to approach curriculum as something living. A living curriculum is reviewed regularly and iteratively, informed by disciplinary developments, feedback from employers, student experience and societal challenges. The quintuple helix model offers a way of organising such renewal by bringing together academia, industry, government, civil society and the environment.

The realities shaping the work and lives of students and employers are accelerating so quickly that higher education risks becoming the slowest actor in the system.

A curriculum built on surface competence rather than transformative competence


The current debate on AI in higher education exposes a fundamental weakness. Many programmes still reward surface competence: the ability to reproduce information, follow familiar procedures and deliver work that looks academically correct. These performances satisfy formal criteria, but they offer only shallow evidence of a learner’s capacity to think, question or respond to complexity.


Transformative competence is different in kind. It brings knowledge, skills and values together in ways that allow learners to apply ideas in unfamiliar contexts, recognise nuance, weigh consequences and act with intellectual and ethical clarity. This is the form of competence universities claim to cultivate, yet it is rarely what their assessment practices actually measure.


GenAI exposes this gap with ruthless efficiency. It can replicate surface competence almost perfectly because these tasks rely on patterned reproduction. What it cannot do is interpret significance, exercise judgement or take responsibility for meaning. Yet when assessment privileges product over process, fluency over reasoning and format over insight, GenAI’s imitation becomes difficult to distinguish from legitimate achievement.

If AI can excel at an assessment, the assessment was never designed to capture transformative competence in the first place.

This is not a crisis created by AI. It is a crisis revealed by AI. Freire’s critique of the “banking model” remains painfully relevant: when education centres on the deposit and return of knowledge rather than on inquiry, reproduction is mistaken for learning (Freire, 1970). GenAI simply demonstrates how vulnerable such a model has always been.


Decolonial perspectives and the risk of new forms of dependency


The global debate on AI in higher education remains anchored in Northern institutions and epistemic traditions. This shapes whose knowledge is treated as default and whose realities are represented in the datasets that train large language models. These systems draw primarily on English sources and Western academic norms (Gourlay, 2024; Popenici, 2023). When universities in low- and middle-income countries adopt these systems without critical mediation, they risk importing not only tools but also the hierarchies embedded in them.


The implications are substantial. Epistemic diversity⇣ becomes fragile when local knowledge systems, indigenous languages and situated histories are poorly represented. Students who turn to AI for explanations or case studies encounter versions of knowledge that often flatten or erase their contexts. Infrastructure compounds this inequality. In places where connectivity is limited or devices are shared, AI-supported learning reaches mainly those already advantaged.

AI will reproduce global power hierarchies unless higher education actively reclaims its own epistemic ground.

Labour markets reflect similar asymmetries. Graduates in many low-income countries confront shrinking opportunities in sectors exposed to automation, while emerging AI-related roles remain constrained by structural limitations. Without deliberate intervention, AI risks deepening the divide between those who shape technological futures and those who must adapt to decisions made elsewhere.


A decolonial curriculum⇣ requires more than ethical precautions. It demands real engagement with local languages, histories and epistemologies. It involves inviting community knowledge holders into curriculum design, interrogating how AI represents particular regions and building the capacity to adapt technologies rather than adopt them uncritically. In this view, AI becomes a space where agency can be exercised and meaning negotiated.

 

Employability as part of a wider educational horizon


Employability has long shaped policy debates. AI risks narrowing this agenda by equating graduate success with productivity in AI-mediated workplaces. Yet this understanding misreads the changing expectations of employers. Organisations increasingly seek graduates who can frame problems, work collaboratively across disciplines and cultures, reflect on the ethical and ecological implications of decisions and adapt to new tools throughout their careers. These expectations align with UNESCO’s ESD framework (UNESCO, 2020), which emphasises futures literacy, systems thinking and intergenerational responsibility.


In low-income contexts, employability is inseparable from structural conditions. Graduates are often expected not only to join labour markets but to help build them. They contribute to strengthening fragile digital infrastructures, adapting global technologies to local needs and anchoring innovation in communities navigating uncertainty. Here, employability is not only a matter of skills. It is a question of justice.

In low-income contexts, employability is not simply about preparing graduates for jobs; it is about preparing them to shape systems that do not yet fully exist.

A narrow employability agenda is therefore both inadequate and inequitable. It risks reinforcing a colonial logic in which imported solutions overshadow local realities. If AI is to support meaningful transitions in these contexts, employability must be reframed as the capacity to understand, critique and adapt technologies within local social, ecological and economic systems.

In this broader view, employability becomes an expression of a deeper educational aim. It concerns the cultivation of judgement and responsibility in complex, AI-mediated environments.


Pedagogy as the hinge of transformation


Curriculum reform will not transform learning unless pedagogy changes alongside it. Students respond not only to content but to how learning is organised, how participation is invited and how their contributions are recognised.

Pedagogy for an AI era must bring AI into the open. Students need opportunities to explore what AI can and cannot do, identify hallucinations⇣, biases and omissions, and compare AI-generated outputs with their own reasoning. Research on evaluative judgement⇣ emphasises the value of giving students systematic practice in judging quality, including the quality of machine outputs (Bearman et al., 2024).



Freire’s humanising pedagogy offers helpful guidance. Rather than treating students as potential cheaters, it positions them as partners in inquiry. Dialogue, reflection on lived experience and examination of the conditions under which knowledge is produced all become essential. AI in this context is neither threat nor solution, but one element within a complex learning landscape.


Towards an AI-aware, human-centred curriculum


Bringing these strands together points to several directions for curricular renewal. Curriculum needs to shift from episodic reform to continuous review. This does not imply constant disruption, but regular, structured reflection with staff, students and external partners, including employers and community organisations. The quintuple helix model offers one way of organising this dialogue.


AI literacy should become a transversal element across disciplines. Students in fields as diverse as chemistry, literature or engineering will encounter AI differently, yet all require a basic understanding of its workings, limitations, ethical implications and socio-ecological impacts.


Education for Sustainable Development should be treated as a core orientation rather than an optional addition. Questions about the energy use of AI, its links to resource extraction, labour relations, governance and inequality belong in the curriculum as much as questions about efficiency.


Assessment should place greater emphasis on process, reasoning and dialogue. Tasks that invite students to document how they have used AI, critique its outputs or combine human and AI-generated work transparently can support integrity and learning more effectively than measures that rely on detection and punishment.

Finally, curriculum reform must reclaim the emancipatory potential of higher education. This does not mean disregarding preparation for work. It means situating that preparation within commitments to human flourishing, justice and the possibility of collective transformation.


Figure 2: Reimagining curriculum as a living system: continuously reviewed, rooted in sustainability, critically engaged with AI and oriented toward human agency and collective futures.


Conclusion


Generative AI has revealed the limits of a higher education system that relies too heavily on performance, prediction and procedural conformity. The question is not how to protect institutions from AI, but how to reclaim the deeper purposes of learning at a moment when machines can imitate some of its surface traces. What matters now is the ability to cultivate judgement, responsibility and relational understanding in contexts shaped by uncertainty and inequality. If universities treat AI as a mirror rather than a threat, this moment can become an opening to renew curricula, strengthen epistemic sovereignty and affirm education’s role in shaping just and sustainable futures. The task is demanding, yet profoundly necessary, because the stakes are nothing less than the kind of world higher education prepares people to imagine and to build.


References


Bearman, M., Ajjawi, R., & Tai, J. (2024). Developing evaluative judgement in the age of artificial intelligence. Higher Education Research & Development, 43(2), 210–224.


Concannon, J., Stewart, B., & Holmgren, R. (2023). Institutional responses to generative AI: Policy reflexes and pedagogic dilemmas. Journal of Learning Analytics, 10(3), 55–72.


Freire, P. (1970). Pedagogy of the oppressed. Continuum.


Gourlay, L. (2024). AI, academia, and the politics of knowledge: Rethinking epistemic authority in the digital university. Learning, Media & Technology, 49(1), 1–15.


Herman, B., & Lara-Steidel, J. (2025). Understanding beyond knowledge: Philosophical reflections on AI in higher education. Journal of Philosophy of Education, 59(1), 33–52.


Popenici, S. (2023). Artificial intelligence, language models and the reproduction of global epistemic inequalities. Studies in Higher Education, 48(11), 2032–2048.


UNESCO. (2020). Education for Sustainable Development: A roadmap (2020–2030). UNESCO Publishing.


Weng, P., Kellner, D., & Xu, L. (2024). AI in higher education: Moving from disruption to transformation. International Review of Education, 70(2), 265–289.

 

 

Glossary ⇣


Academic integrity: The ethical standards governing academic work, including honesty, originality, and proper attribution. In the context of GenAI, academic integrity concerns focus on how students use AI tools and whether their submitted work reflects genuine learning.


AI literacy: The ability to understand how artificial intelligence systems function in basic terms, recognise their strengths and limitations, use them responsibly and ethically, and evaluate their social and environmental impact.


Assessment regime: The formal and informal structures through which student learning is measured and certified. This includes assignment types, grading practices, expectations and the broader logic that shapes what “counts” as evidence of learning.


Banking model of education: Paulo Freire’s concept describing an approach where knowledge is delivered by teachers and passively received and reproduced by students. It contrasts with dialogic and participatory models of learning.


Decolonial curriculum: An approach to curriculum design that challenges dominant knowledge hierarchies, centres marginalised perspectives and languages, and resists uncritical adoption of Western epistemologies or technologies.


Education for Sustainable Development (ESD): A UNESCO framework emphasising knowledge, skills and values needed to work towards environmental sustainability, social justice and long-term global well-being. It promotes systems thinking, futures literacy and responsibility toward people and planet.


Emancipatory potential: The capacity of education to foster autonomy, agency and critical consciousness, enabling learners to understand and transform the conditions of their lives rather than adapting uncritically to them.


Epistemic diversity: The coexistence of multiple ways of knowing. In the context of AI, epistemic diversity highlights the risk that dominant cultural or linguistic perspectives overshadow local knowledge systems.


Evaluative judgement: The ability to make informed evaluations about the quality of work, including outputs generated by AI. It involves comparing, critiquing, and justifying quality standards.


Generative artificial intelligence (GenAI): A class of AI systems trained on large datasets to produce new content such as text, images, or code. GenAI mimics patterns from its training data but does not “understand” in a human sense.


Hallucination (AI): A phenomenon in which AI systems produce plausible-sounding but false or fabricated information. Hallucinations occur because LLMs generate text based on probability patterns rather than verified facts.


Humanising pedagogy: Freire’s pedagogical tradition that emphasises dialogue, critical reflection and mutual respect between learners and educators, treating students as subjects rather than objects in the learning process.


Knowledge (as used in the article): Information that can be recalled, summarised or reproduced. Distinguished from understanding because it does not necessarily involve grasping meaning or implications.


Large language model (LLM): A type of GenAI trained on extensive text datasets. LLMs predict the next likely word in a sequence, enabling them to generate coherent text. They may also reproduce biases or inaccuracies from their training data.
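
To make “predicting the next likely word” concrete, the toy sketch below (in Python, purely illustrative and not drawn from the article) builds the simplest possible statistical text generator: it counts which word follows which in a tiny corpus and always continues with the most frequent successor. Real LLMs replace this counting with neural networks trained on billions of words, but the underlying principle is the same: text is continued by probability, not by checking facts.

from collections import Counter, defaultdict

# Tiny "training corpus"; real LLMs are trained on billions of words.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word (a bigram model).
successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    # Return the statistically most likely continuation, not a verified fact.
    candidates = successors.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate text by repeatedly predicting the next word.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the cat"

Note how the generated sentence is grammatically fluent yet describes nothing real: the cat “sits on the cat” because that continuation is statistically likely, not because it is true. This is, in miniature, the mechanism behind the hallucinations described above.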


Performativity: A tendency in higher education to prioritise measurable outputs (grades, competencies, performance indicators) over deeper learning. Performativity often encourages students to prioritise producing what is expected rather than understanding.


Quintuple helix: A model of knowledge and innovation that includes five interacting spheres: academia, industry, government, civil society and the natural environment. It is used to support participatory, sustainable approaches to curriculum renewal.


Socio-ecological cost: The environmental and social impacts associated with technologies such as AI, including energy consumption, data-centre emissions, resource extraction, and labour conditions in global supply chains.


Tacit curriculum: The implicit norms, expectations and behaviours that shape how learning actually takes place, beyond what is written in formal curricula. This includes assumptions about speed, productivity, competition and error avoidance.


Technocratic logic: A mindset that prioritises efficiency, optimisation and control, often at the expense of human judgement, ethical reflection or contextual sensitivity. AI debates in higher education often risk falling into technocratic logic.

 

Paeradigms provides custom-made training on AI in higher education for policymakers, universities, and other organisations. Sessions are tailored to specific contexts and needs. Please reach out via info@paeradigms.org to discuss options.

 


 
 
 