AI and digital neocolonialism: Unintended impacts on universities
GLOBAL
The rapid integration of AI technologies, including generative pre-trained transformers (GPT), generative adversarial networks, convolutional neural networks, and reinforcement learning models, has profoundly influenced sectors worldwide, with a notable impact on higher education.
These technologies are utilised across a range of tasks such as natural language processing, image and video recognition, personalised learning, and content generation, thereby transforming the educational landscape in the digital age.
GPT models, in particular, have been crucial in advancing natural language processing applications. Within the educational sector, they facilitate automated content generation, the creation of study materials, support for language learning, and the development of tutoring systems capable of interacting with students through natural language.
As these AI technologies become embedded in educational practices, content creation, and research methodologies, they open up significant opportunities for innovation while also posing shared challenges.
Importantly, the evolution of AI also risks reinforcing neocolonial patterns, underscoring the complex ethical implications of these technologies’ deployment and broader impact.
This editorial explores the intricate impact of GPT and related AI models, which can reinforce existing cultural biases, create language hierarchies, and deepen economic dependencies.
These dynamics contribute to a new form of digital neocolonialism, where technologically advanced Western countries extend their influence globally, often undermining less developed regions.
As these predominantly Western-developed AI technologies become integrated into global education systems, their influence becomes particularly evident.
The editorial calls for adopting more inclusive AI practices to mitigate these adverse effects. The integration of these technologies in educational settings requires a comprehensive evaluation by the academic community.
The aim is to minimise potential negative impacts while maximising benefits to foster greater global educational equity. This involves addressing the complex interplay of AI technologies within education to ensure they contribute to a more inclusive and fair global educational landscape.
The rise of AI in academia: Opportunities and challenges
The integration of GPT and other AI technologies into higher education represents a transformative shift, heralding new opportunities for personalised learning and the adoption of innovative teaching tools.
These technologies, often driven by advances in machine learning and artificial intelligence, have begun to reshape the landscape of academia, enabling more tailored educational experiences that can adapt to the individual needs of students.
However, the rapid adoption of AI in educational settings also brings significant challenges. AI technologies are predominantly designed and controlled by leading figures and institutions in Western, economically advanced countries. This centralisation raises several critical concerns:
1. Dependency on Western technologies:
The global spread of AI in education has been marked by a significant reliance on technologies that are predominantly developed in technologically advanced Western countries.
This trend can lead to a dependency loop where institutions in developing regions become perpetual consumers rather than innovators in educational technology.
The consequence is a restriction on technological autonomy, where these institutions lack the capacity to tailor solutions to their specific educational challenges and contexts.
Such a scenario undermines local educational systems’ ability to innovate and develop home-grown technologies, potentially stalling the development of a diverse technological landscape and contributing to a technological imbalance across regions.
2. Homogenisation of educational content:
AI-driven tools in education, particularly those that utilise machine learning and data-driven algorithms, are often designed using datasets that are largely reflective of Western, English-speaking environments.
This bias can result in educational content that is not only linguistically uniform but also culturally narrow.
As these AI systems are deployed globally, there is a risk of a ‘one-size-fits-all’ approach that does not adequately address or resonate with the cultural, historical, and societal specifics of students from non-Western backgrounds.
Such homogenisation of educational content risks stripping the educational experience of local relevance and richness, potentially leading to a disconnect between learners and the knowledge they are acquiring.
3. Cultural and intellectual imperialism:
The proliferation of Western-developed AI technologies in education systems around the world can unintentionally promote a form of cultural and intellectual imperialism.
By prioritising Western methodologies and knowledge frameworks, these technologies may marginalise other epistemologies and pedagogical traditions.
This dominance can act to overwrite local educational practices, devaluing and diminishing the presence of indigenous knowledge and methods in global education.
The result is not just a loss of intellectual diversity but also an erosion of cultural identities, as education becomes a vector for the imposition of foreign values and perspectives.
4. Equity and access:
While AI has the potential to transform education through personalisation and efficiency, the distribution of these benefits is uneven across global landscapes.
Institutions in affluent parts of the world often have the financial capability, infrastructure, and expertise to adopt and integrate the latest AI innovations, thereby enhancing their educational outcomes.
In contrast, those in poorer regions may face significant barriers to entry, such as high costs, inadequate infrastructure, and a shortage of trained professionals to implement and maintain AI technologies.
This disparity not only reinforces existing inequalities but could also widen the educational gap between the developed and developing worlds, perpetuating a cycle of educational and economic disadvantage.
To truly realise the potential of AI in academia, it is crucial to address these challenges head-on. This requires a concerted effort to develop inclusive AI systems that incorporate and respect a broad spectrum of cultural and linguistic contexts.
It also involves fostering collaborations that can reduce dependency by supporting the development of local AI solutions tailored to meet the specific needs of diverse educational environments.
By doing so, the academic community can leverage AI to enhance educational outcomes while ensuring that it enriches the cultural tapestry of global education rather than homogenising it.
Case studies
ChatGPT-assisted instruction for EFL classes
At a university in Korea, the introduction of a GPT-based platform significantly boosted student participation in English as a Foreign Language (EFL) courses. Serving as a virtual English tutor, the platform provided real-time assistance to students during reading exercises, particularly for bottom-up processing skills such as vocabulary, grammar, and translation.
While the ChatGPT-assisted instruction positively impacted student learning, the platform’s strong emphasis on standardised American English clashed with local language norms, Rakhun Kim reported in the 2024 article “Effects of ChatGPT on Korean EFL Learners”.
This points to a broader challenge regarding the cultural relevance of AI applications in education, and it underscores the need to develop AI systems that incorporate regional linguistic and cultural nuances to ensure effective and inclusive education.
One major issue is the potential standardisation of language through AI models. These systems are typically trained on large datasets that often reflect dominant Western norms.
As a result, the language they produce can become overly uniform, marginalising the diverse linguistic traditions that enrich human communication. Language is not just a structured system of communication but a representation of culture, heritage, and identity.
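The study does not describe how the Korean platform was implemented. Purely as an illustration, the minimal sketch below shows how a GPT-based reading tutor might be wired to a chat-completion API, with a system prompt written to acknowledge local usage rather than treating one variety of English as the default; the model name, prompt wording, and sample question are illustrative assumptions, not details from the study.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

SYSTEM_PROMPT = (
    "You are an English reading tutor for Korean university students. "
    "Explain vocabulary, grammar, and translation questions clearly, and "
    "where usage differs, note the variation (for example between American "
    "and other Englishes) instead of treating one variety as the default."
)

def tutor_reply(student_question: str) -> str:
    """Return a single tutoring turn for a student's reading question."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice, not the platform's actual model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": student_question},
        ],
        temperature=0.3,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(tutor_reply("What does 'bottom-up processing' mean in this passage?"))

The point of the system prompt is the design choice it represents: whose linguistic norms the tutor privileges is a configuration decision made by developers, not a technical inevitability.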
Generative AI systems display persistent stereotypical biases
Investigations into generative AI systems have revealed persistent biases related to gender, ethnicity, class, and age, significantly impacting the accuracy and fairness of AI-generated images.
For instance, when given prompts like “Maid service”, these systems often produce images of petite Asian women. Conversely, prompts such as “CEO” and “Manager” typically generate images of men. These examples highlight the deep-seated gender stereotypes ingrained in AI models, reflecting and perpetuating societal biases.
[Images generated for the prompts “Maid service”, “CEO” and “Manager”. Source: Images generated with ChatGPT-4o, 29 June 2024]
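Such patterns can be checked systematically rather than anecdotally. The sketch below is a hypothetical audit set-up, not the procedure used for the images above: it collects a fixed sample of images per occupational prompt through an image-generation endpoint, with the model name, sample size, and prompt list all illustrative assumptions. The demographic annotation of the resulting images would be done separately by human raters.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

PROMPTS = ["Maid service", "CEO", "Manager"]  # occupational prompts to probe
IMAGES_PER_PROMPT = 10  # illustrative sample size

def collect_images() -> dict:
    """Generate a small image sample per prompt and return the image URLs."""
    samples = {}
    for prompt in PROMPTS:
        samples[prompt] = []
        for _ in range(IMAGES_PER_PROMPT):
            response = client.images.generate(
                model="dall-e-3",  # illustrative choice; this model returns one image per call
                prompt=prompt,
                n=1,
                size="1024x1024",
            )
            samples[prompt].append(response.data[0].url)
    return samples

if __name__ == "__main__":
    # The collected URLs would then be annotated by human raters for perceived
    # gender, ethnicity and age, and the proportions compared across prompts.
    for prompt, urls in collect_images().items():
        print(prompt, len(urls), "images collected")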
Impact of AI-driven neocolonialism on universities and research
The impact of AI-driven neocolonialism on universities and research institutions is profound, shaping not only the types of technologies developed but also influencing the directions of academic inquiry and the distribution of knowledge globally.
This phenomenon has significant implications for how education and research are conducted, particularly in less economically developed regions.
1. Influence on research priorities and funding
• AI-driven neocolonialism often aligns research priorities with the interests and values of economically dominant countries that design and control AI technologies.
• This alignment typically channels funding and resources towards research areas deemed strategically or economically significant to affluent regions, marginalising local issues in less developed parts of the world.
• As a result, critical challenges in these areas, which may be equally or more pressing, receive inadequate attention and resources due to their lack of alignment with the economic interests of powerful nations.
For instance, significant AI research efforts focus on advancing e-commerce algorithms, catering primarily to consumer markets and commercial interests of wealthy nations.
This research attracts substantial investment and top talent due to the promise of considerable economic returns.
In contrast, pressing needs in developing countries, such as improving agricultural productivity through crop yield prediction or enhancing water resource management with advanced monitoring and predictive analytics, receive comparatively less attention and funding.
Despite their critical importance for the sustenance and development of poorer regions, these areas do not attract similar interest due to their lower immediate financial returns for investors from wealthier countries.
This skewed prioritisation hampers AI’s potential to address a broader spectrum of human challenges, perpetuating dependency on foreign technology and expertise in developing nations and entrenching global technological and economic disparities.
Consequently, this reinforces global inequalities and underscores the need for a more democratised approach to AI research and development that incorporates diverse global perspectives and addresses a wider array of human needs.
2. Control over knowledge production
AI technologies, particularly those focused on data processing and research automation, are mainly developed by corporations in economically dominant countries.
This centralisation of AI development can foster a form of intellectual neocolonialism, where the power to generate, access, and control knowledge is disproportionately concentrated.
As a result, universities in less developed regions may become overly dependent on AI tools and systems tailored to the priorities and values of foreign developers, potentially sidelining local knowledge and research needs.
This reliance on externally developed AI tools can hinder local academic and research initiatives that cater to regional priorities, and may prevent local scholars from pursuing studies that address specific regional needs.
To mitigate these issues, initiatives that promote technological sovereignty are critical. These include fostering local AI development projects, forming international research collaborations that respect and integrate local perspectives, and investing in education and training programmes to develop regional AI expertise.
Such efforts can help distribute the benefits of AI more equitably across the globe, supporting a more diversified and inclusive global knowledge economy.
3. Impact on educational sovereignty and curriculum development
Dependency on AI technologies from economically dominant countries impacts educational sovereignty, as curriculum decisions and materials are often influenced by foreign providers.
This undermines the integration of local knowledge and pedagogical practices, potentially stifling the development of education systems that resonate with local populations.
Universities in the Global South, Far East, and Southeast Asia often adopt educational models prioritising Western viewpoints, leading to homogenised curricula that may not fully reflect local cultures, languages, and historical contexts.
This trend risks diluting the richness and diversity of local educational content, essential for preserving cultural heritage and identity.
Additionally, reliance on AI-driven educational tools from economically advanced countries can create barriers for local educators. These tools, including learning management systems and automated assessment tools, are often not designed with local cultural and societal contexts in mind, making it challenging for educators to adapt them effectively.
This scenario underscores the need for a more inclusive approach to AI in education, engaging local stakeholders in creating and customising AI tools.
Such collaboration can ensure that educational technology enhances learning and teaching experiences in a way that is both globally informed and locally relevant.
4. Intellectual dependency
AI-driven neocolonialism can lead to a form of intellectual dependency in which universities in transitioning and developing countries predominantly consume knowledge produced elsewhere, rather than creating their own.
This dependency not only undermines local innovation but also stifles the development of indigenous technologies and solutions specifically tailored to meet local needs and challenges.
Furthermore, such reliance may lead to the intellectual contributions of researchers from these regions being undervalued or ignored within the global academic community.
This dynamic exacerbates existing inequalities by reinforcing a system where knowledge and technological advances are centrally controlled by more economically dominant regions, leaving less affluent areas at a continual disadvantage.
The cycle restricts the capacity of local scholars to influence and innovate within their fields, hindering not only their professional development but also the broader socio-economic progress of their communities.
5. Ethical concerns and data sovereignty
The deployment of AI in research raises significant ethical issues regarding data sovereignty and privacy, echoing historical colonial exploitation.
Universities and institutions in both transitioning and developing countries often supply vast amounts of data without sufficient control over its use or securing tangible benefits, effectively becoming unwitting data colonies that serve the technological and economic interests of more developed nations.
This dynamic underscores a form of exploitation where data is extracted to train AI systems that predominantly serve foreign interests, reinforcing biases and perpetuating inequality.
This practice not only breaches ethical standards such as informed consent, but also strips communities of their autonomy and potential for self-determination in the digital age.
The economic disparity it perpetuates is profound, with the value generated from this data accruing to tech companies and economies in the global North, enhancing their dominance.
In contrast, the data sources – the people and institutions in developing regions – receive minimal compensation, widening the global inequality gap.
To counteract this, international regulations are urgently needed to enforce fairness in data transactions, ensuring that countries and communities maintain sovereignty over their digital resources.
Additionally, stronger local capacities to manage and utilise data are essential, enabling developing countries to actively participate in shaping AI technologies that are culturally relevant and economically beneficial to them, fostering a more inclusive global digital ecosystem.
Mitigation strategies
To counteract AI-driven neocolonialism, universities and research institutions need to implement the following solutions:
1. Promote local AI development
• Invest in the development of AI technologies within local universities and research institutions.
• Fund local AI research projects, establish AI research centres, and form partnerships with local tech companies.
By fostering a homegrown AI industry, regions can develop tools and systems tailored to their specific needs and contexts, ensuring that local priorities are adequately addressed.
2. Diversify training datasets
• Ensure AI systems are trained on diverse and representative datasets, including data from various cultural, linguistic, and socio-economic backgrounds.
• Encourage global collaboration among universities and research institutions to share data and resources, creating datasets that reflect a wide range of perspectives and experiences.
This approach helps mitigate biases and ensures that AI technologies are more inclusive and effective in different contexts.
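As a concrete starting point, a minimal sketch of what such a representation check might look like is given below. The corpus records and their metadata fields are hypothetical; in practice the language and region labels would come from the dataset’s own documentation or a separate language-identification step.

from collections import Counter

# Hypothetical corpus sample: each record carries language and region metadata.
corpus_sample = [
    {"lang": "en", "region": "North America"},
    {"lang": "en", "region": "Europe"},
    {"lang": "ko", "region": "East Asia"},
    {"lang": "sw", "region": "East Africa"},
    {"lang": "en", "region": "North America"},
]

def representation_report(records: list) -> None:
    """Print the share of each language and region in a corpus sample."""
    total = len(records)
    for field in ("lang", "region"):
        counts = Counter(record[field] for record in records)
        print(f"{field} distribution:")
        for value, n in counts.most_common():
            print(f"  {value}: {n / total:.1%}")

representation_report(corpus_sample)

Even a crude report like this makes over-representation visible before a model is trained, which is where the rebalancing decisions described above have to be made.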
3. Strengthen human oversight and ethical standards
• Maintain significant human oversight in the development and deployment of AI technologies to safeguard against biases and ethical concerns.
• Establish ethical review boards and frameworks within universities to oversee AI research and implementation.
• Include diverse stakeholders, such as local community representatives, on these boards to ensure that AI technologies align with local values and needs.
4. Encourage international collaboration and knowledge sharing
• Form international research collaborations that respect and integrate local perspectives.
• Partner with universities and institutions from various regions to share knowledge and expertise more equitably.
Use these collaborations to build local capacities and ensure that AI development benefits from a diversity of viewpoints and innovations.
5. Invest in education and training (Digital Literacy for the 4th Industrial Revolution)
• Develop comprehensive education and training programmes to equip local researchers, educators, and students with the necessary skills for AI development.
• Universities should offer tailored courses and workshops on AI, data science, and ethics that address local needs and contexts.
• Provide scholarships and grants to support students and researchers, ensuring diverse participation and contribution to the AI field.
Implementing these solutions can help ensure that AI technologies are developed and utilised in ways that are equitable, inclusive, and beneficial to diverse global communities, countering the effects of AI-driven neocolonialism.
Navigating AI and digital neocolonialism in higher education
Universities face the dual challenge of leveraging AI technologies to enhance education while confronting the risks of digital neocolonialism.
To address this, they must adopt an inclusive approach to AI development that reflects global diversity.
Promoting local AI initiatives, using diverse training datasets, and maintaining robust human oversight can help universities navigate these challenges responsibly and support a more equitable academic future.
The integration of AI in academia also necessitates a collective global effort. Coordinated action among nations, organisations, communities, and individuals is essential to tackle these challenges effectively.
Preserving and recognising cultural significance within these collaborations ensures that unique perspectives and contributions from different cultures are valued.
This strategic and inclusive use of AI can prevent the perpetuation of digital neocolonialism in higher education, fostering a landscape where technology truly serves the diverse needs of the global academic community.
James Yoonil Auh is the dean of the School of IT and Design Convergence Education and the chair of computing and communications engineering at KyungHee Cyber University in South Korea.