[Image: Cartoon illustration of two people using electronic devices. Photo courtesy of EDUCAUSE]
Posted by Tina Reed

Creating a Culture Around AI: Thoughts and Decision-Making

“Given the potential ramifications of artificial intelligence (AI) diffusion on matters of diversity, equity, inclusion, and accessibility, now is the time for higher education institutions to adopt culturally aware, analytical decision-making processes, policies, and practices around AI tool selection and use.

The use of artificial intelligence (AI) tools in higher education is fraught with tension. Ever since OpenAI launched the public version of ChatGPT, generative AI has been compared to COVID-19 for its rapid onset and disruptive impact, to the calculator for its potential to reshape teaching, learning, writing, and search practices, and to fire for its potential to be either a net asset or a net liability to society. Between the hype about its promise and the panic about its impact on academic integrity and knowledge production, many higher education professionals were caught in a wait-and-see moment in 2023. Will policies govern the use of AI in teaching and learning? How will faculty and staff develop their AI literacy and skill sets, especially as generative AI tools evolve? How should or shouldn’t generative AI be used at colleges and universities?

People generally recognize that AI products currently perpetuate the inherently biased content used to train them. In August 2023, The London Interdisciplinary School released a video essay about the ways in which AI image generators, such as Midjourney and DALL-E, perpetuate harmful representational biases in images generated for roles such as nurses, drug dealers, CEOs, and even terrorists. Similarly, the Brookings Institution published a series of tests that uncovered a political bias in ChatGPT output.[1] When given a prompt that includes a reference to a cultural heritage, ChatGPT includes harmful stereotypes among its output. GPT detectors have also been found to perpetuate linguistic bias. In 2023, researchers from Stanford University reported that GPT detectors misclassified essays written by non-native English speakers as AI-generated significantly more often than essays written by native English speakers. The Modern Language Association (MLA) and Conference on College Composition and Communication (CCCC) Task Force published a working paper that placed concerns about bias in AI-generated written content in an academic context, acknowledging that this topic will continue to evolve and that more guidance is needed.”[2]

Read more about Creating a Culture Around AI at EDUCAUSE.
