As climate disruptions accelerate, as artificial intelligence challenges our understanding of intelligence itself, and as democracy faces increasing pressure worldwide, universities are confronting a quieter but equally urgent crisis: the erosion of their legitimacy.
Once celebrated as engines of enlightenment and ladders of social mobility, many higher education institutions now seem lost, caught between outdated teaching methods and a future they struggle to understand.
They promise transformation but often provide only transactional learning. They talk about relevance but cling to traditional knowledge systems and bureaucratic inertia. As tuition costs rise, faculty job security diminishes and public trust declines, universities find themselves at a critical crossroads.
Around the world, a quiet realisation is growing louder: we are preparing students for a world that no longer exists. What if we reimagined the university completely, this time in dialogue with artificial intelligence?
This exercise isn’t about edtech upgrades or digital dashboards. It’s a deeper thought experiment, one that forces us to rethink what education is for, what counts as knowledge, and who gets to define the future of learning.
Cracks in the foundation
From Santiago to Seoul, and Boston to Bamako, tensions are apparent. In the Global North, universities struggle with tuition debt, corporate management structures, and what some call reputational burnout. In the Global South, institutions often contend with donor-driven curricula, language hierarchies, and colonial legacies still woven into their frameworks.
Yet across these different geographies, a common issue persists: most universities remain designed for predictability, stability and narrow disciplinary silos, just as the world calls for ethical imagination, fluid learning and transdisciplinary insight.
If the university is to remain relevant, we must stop tweaking the system and start rethinking it entirely. What would a university look like if it were designed for emergence instead of control? For complexity instead of compliance? For dialogue instead of prestige?
How AI was used: Design by dialogue
In this speculative design exercise, AI was not approached as an oracle delivering final answers but as a co-imaginative partner in creative inquiry.
Generative models were engaged dialogically, prompted with paradoxes such as “How might a university foster complexity without collapsing into chaos?” and challenged to propose scenarios, value frameworks, and institutional alternatives that might otherwise remain unthinkable.
Rather than optimising existing systems, the aim was to destabilise inherited assumptions and surface neglected possibilities.
To guide this process, a wide spectrum of input criteria was employed to reflect the layered realities of higher education in a globally entangled, AI-mediated era.
These inputs included cultural metrics (for example, societal attitudes toward knowledge and authority), transnational academic linkages, regional and national resource capacities, faculty intellectual ecosystems, institutional adaptability, curriculum design, learner diversity, technological infrastructure, governance models, equity and inclusion frameworks, and alignment with long-term goals for sustainability and innovation.
The resulting outputs did not offer ready-made blueprints but acted as provocations, mirror fragments reflecting institutional blind spots and latent aspirations. Educators, designers, and theorists engaged these responses dialectically: interrogating, refining, and reimagining them through critical dialogue.
The process was not one of automation but of emergence, not the pursuit of streamlined efficiency but the invitation to design a university that could help us survive complexity, regenerate broken systems, and make meaning in a disoriented world.
Rethinking curricula: From majors to missions
In this university co-imagined with AI, traditional academic silos dissolve into dynamic, mission-driven constellations of inquiry. Students no longer select fixed degrees but chart evolving pathways organised around urgent planetary challenges: climate displacement, algorithmic injustice, biodiversity collapse, and democratic erosion.
These are not academic topics; they are conditions of existence.
The curriculum is designed as a spiral rather than a straight line, inviting students to return to the same fundamental questions with greater nuance and responsibility over time. Learning is not anchored in disciplinary expertise but in relational intelligence, the capacity to connect systems, cultures and histories in ways that generate meaning and possibility.
Here, students are not passive recipients of inherited knowledge but active co-creators of their educational trajectories. They traverse epistemic boundaries, integrating scientific, artistic, Indigenous, and technological ways of knowing to form new architectures of understanding.
This university rejects the false dichotomy between employability and enlightenment. Instead, it poses more elemental questions: What must we learn, not just to succeed, but to sustain and reimagine human life on a destabilised planet? What capacities do we need to repair what’s broken, to live wisely with uncertainty, and to design futures that remain unfinished, on purpose?
Artificial intelligence as collaborator
In this reimagined university, AI is not an adversary of education but a co-intellectual agent, a vital partner in expanding the boundaries of human thought. It enables deeper reflection, broader perspective-taking, and more complex forms of reasoning.
Generative models move beyond mere automation: they offer multilingual scaffolding for global learners, simulate competing worldviews to expand moral imagination, surface hidden assumptions in student reasoning, and generate counterfactual scenarios or ethical paradoxes that disrupt conventional thinking.
Yet the role of the educator is not diminished; it is radically redefined. Teachers become epistemic guides, ethical anchors, and dialogic facilitators, helping students navigate the ambiguity that AI inevitably surfaces. In a world flooded with machine-generated text, the educator’s task is no longer to deliver content but to cultivate discernment, humility, and critical judgement.
“Technology is never neutral,” the project reminds us. “It encodes worldviews, reifies values, and redistributes agency.” Every AI tool, whether curating a reading list or simulating a policy debate, is interrogated through a critical lens: What ideologies does it embed? Whose knowledge does it privilege? What futures does it render thinkable or unthinkable?
This model moves beyond digital literacy to cultivate critical technopolitical consciousness, an awareness that technology is always entangled with power, history, and design. Students don’t just learn with AI; they learn about AI, questioning its ontologies, contesting its boundaries, and shaping its uses toward more just and pluralistic ends.
Beyond grades: Trust, growth, and public value
In this future-facing university, the tyranny of traditional metrics (grades, GPAs, and standardised tests) gives way to multidimensional narratives of learning. Assessment is no longer a reductive score but a textured story of growth, contribution, and ethical engagement.
Students build living portfolios that document not only what they have learned but how they have learned it: through challenge, collaboration, and transformation.
A student who co-designs a water-monitoring system with Ugandan farmers or develops data tools for a social justice NGO in Paris is not evaluated by institutional prestige or abstract rubrics but by relational impact. What problems were addressed? What communities were involved? What values were negotiated?
Credentials are issued via blockchain, not as trophies of status, but as verifiable records of real-world learning: transparent, portable, and publicly accountable. The credential becomes less a symbol of elitism and more a declaration of service.
Most importantly, failure is not stigmatised but integrated into the very architecture of learning. Students are encouraged to reflect rigorously, take intellectual risks, and share responsibility for outcomes. In this model, accountability is not imposed from above; it is co-owned.
In an era saturated with information but starved of trust, the central question is no longer “What do you know?” but “What have you transformed, and who have you become in the process?” Education is not a race toward credentials, but a journey toward integrity, civic agency, and the public good.
Governance for the 21st century
Governance in this model doesn’t follow corporate or top-down logic. It is participatory, recursive, and co-owned. Students, faculty, community partners, and AI agents all play roles in decision-making.
AI is used not to dictate policy but to map unintended consequences or run foresight scenarios. Final decisions rest with diverse human groups, organised into advisory circles, restorative councils, and ethical assemblies.
This isn’t a university run like a corporation. It’s a university stewarded more like an ecosystem: responsive, adaptive, and alive to the feedback of the world around it.
Reclaiming the purpose of higher education
At its core, this vision advocates for a re-evaluation of the university’s mission.
Higher education, it argues, should not be defined by credentialing or workforce pipelines. It should be a civilisational practice, a space where we collectively decide what kind of intelligence we value, what kind of futures we’re willing to work for, and what kind of world we’re prepared to build.
Education is not a ladder of individual success but a loop of reflection, relation, and renewal.
A final lesson
The university co-imagined with AI is not a fantasy. It is a provocation grounded in a hard truth: 21st-century challenges cannot be met by 19th-century institutions.
“Education,” this experiment concludes, “is no longer a transaction. It is a transformation.” And transformation is never linear. It is never safe. But it is necessary and possible if we’re brave enough to ask better questions, listen to unlikely collaborators, and let go of what no longer serves the future we claim to prepare for.
James Yoonil Auh is the chair of computing and communications engineering at Kyung Hee Cyber University in South Korea. He has worked across the United States, Asia, and Latin America on projects linking ethics, technology, and education policy.