Posted by Tina Reed

Striking a Balance: Navigating the Ethical Dilemmas of AI in Higher Education

Navigating the complexities of artificial intelligence (AI) while upholding ethical standards requires a balanced approach that considers the benefits and risks of AI adoption.

As artificial intelligence (AI) continues to transform the world, including higher education, the need for responsible use has never been more critical. While AI holds immense potential to enhance teaching and learning, ethical concerns around social inequity, environmental impact, and dehumanization continue to emerge. College and university centers for teaching and learning (CTLs), tasked with supporting faculty in best instructional practices, face growing pressure to take a balanced approach to adopting new technologies. This challenge is compounded by an unpredictable and rapidly evolving landscape: new AI tools surface almost daily, and each one multiplies the educational possibilities and challenges. Keeping up is virtually impossible, even for CTLs, which have historically served as institutional hubs for innovation. In fact, as of this writing, the “There’s an AI for That” website lists 23,208 AIs for 15,636 tasks across 4,875 jobs, with all three numbers increasing daily.

To support college and university faculty and, by extension, learners in navigating the complexities of AI integration while upholding ethical standards, CTLs must prioritize a balanced approach that considers the benefits and risks of AI adoption. Teaching and learning professionals need to expand their resources and support pathways beyond those solely targeting how to leverage AI or mitigate academic integrity violations. They need to make a concerted effort to promote critical AI literacy, grapple with issues of social inequity, examine the environmental impact of AI technologies, and promote human-centered design principles.[1]

Addressing Social Inequity

Even AI systems designed with positive intent may disadvantage some learners. One of the most pressing issues associated with AI is its potential to perpetuate and even deepen social inequities. Because AI algorithms are typically trained on historical data, they often reflect and reproduce existing societal biases. This can lead to further marginalization of students from already underrepresented groups, both in the opportunities they are offered and in the assessments they receive.

Consider, for example, AI-driven automated grading systems. These platforms can rapidly score and provide feedback on assignments and can even reduce some of the subjectivity of human grading, freeing up instructor time for other meaningful teaching activities, such as planning lessons or interacting with students. However, not all grading is equal. While automated systems may be able to grade rote assessments, providing nuanced feedback on more subjective work requires human expertise and discernment. Using automated systems to grade these more subjective assignments can introduce bias and perpetuate inequities. According to Shallon Silvestrone and Jillian Rubman at MIT Sloan, “An AI tool trained primarily on business plans from male-led startups in certain industries might inadvertently penalize business plans that address specific needs or challenges for women, non-binary, and other underrepresented gender identities.”[2] Although these tools promise efficiency, they can inadvertently discriminate against students from diverse backgrounds.
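This risk can be made concrete with a simple disparity check. The sketch below, written in Python with invented scores and hypothetical group labels (group_a, group_b), shows one way a CTL's technical partners might probe an automated grader for systematic score gaps on work of comparable quality. It is a minimal illustration under those assumptions, not a vetted fairness methodology.

```python
# Hypothetical illustration: auditing an automated grader for score
# disparities across student groups. The data below is invented for the
# example; a real audit would use an institution's own de-identified records
# of AI-assigned scores on work of comparable human-judged quality.
from statistics import mean

ai_scores = [
    ("group_a", 88), ("group_a", 91), ("group_a", 85), ("group_a", 90),
    ("group_b", 78), ("group_b", 82), ("group_b", 75), ("group_b", 80),
]

def mean_score_by_group(records):
    """Average the AI-assigned scores within each group."""
    by_group = {}
    for group, score in records:
        by_group.setdefault(group, []).append(score)
    return {group: mean(scores) for group, scores in by_group.items()}

averages = mean_score_by_group(ai_scores)
gap = max(averages.values()) - min(averages.values())
print(averages)                                  # {'group_a': 88.5, 'group_b': 78.75}
print(f"Score gap between groups: {gap:.2f}")    # Score gap between groups: 9.75

# A persistent gap on work of comparable quality is a signal to pause the
# tool and return those assessments to human graders for review.
```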

Read more about navigating the ethical dilemmas of AI in higher education at EDUCAUSE Review.
