How to Cover AI on Campus
Experts discussed what AI is, what it can be used for and where reporters should look for red flags.
Photo credit: C Vector Studio/Bigstock
Generative artificial intelligence has caused panic in higher education since ChatGPT debuted almost two years ago and opened a new avenue for cheating. But that's not the only way AI is being used on college campuses.
Panelists at EWA’s 2024 Higher Education Seminar at the University of Pennsylvania’s Graduate School of Education in Philadelphia busted some myths about AI, discussed how colleges are using it and offered story ideas for reporters.
Taylor Swaak, a senior reporter who regularly covers AI at The Chronicle of Higher Education, moderated this panel. Panelists were:
Artificial intelligence is not new. Sabado pointed out that it has been on college campuses since the 1950s.
Generative AI is what has caught fire more recently. In the past, he said, AI was used mainly to analyze data. By contrast, generative AI produces new content (using data it was trained on).
Panelists discussed both old and new uses for AI. Nyarko got the conversation started by explaining that AI is “any computational system that is able to complete tasks that would typically require human intelligence.”
Computation, here, is key. Computers find patterns in very large datasets. That's their main value to humans, who can only hold so much information in their heads at a time.
Historically, colleges have used AI or “machine learning” for its predictive abilities. Predictive systems take in lots of information about how things happened in the past and then make predictions about how things are likely to happen in the future. This has long helped colleges manage energy on campus and best use space and resources, panelists pointed out. It has also helped colleges identify students who are at risk of dropping out.
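To make the idea concrete, here is a minimal sketch of how such a predictive model works. This is not any campus's actual system; the grades, credit counts and absence numbers are invented for illustration, and it uses the common scikit-learn library.

```python
# A minimal sketch of a predictive "early warning" model, not any campus's
# real system. All student numbers below are hypothetical.
from sklearn.linear_model import LogisticRegression

# Past students: [GPA, credits completed, absences]; 1 = dropped out, 0 = graduated.
past_students = [
    [3.6, 60, 2], [2.1, 15, 14], [3.2, 45, 5],
    [1.8, 10, 20], [2.9, 30, 8], [3.8, 75, 1],
]
outcomes = [0, 1, 0, 1, 1, 0]

model = LogisticRegression().fit(past_students, outcomes)

# The model scores a current student based on patterns in the past data.
current_student = [[2.4, 20, 11]]
risk = model.predict_proba(current_student)[0][1]
print(f"Predicted dropout risk: {risk:.0%}")
```

The essential point for reporters: the model's only knowledge is how past students turned out, so its "prediction" is a pattern match against history.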
More recently, higher education has turned to generative AI. This type of computer model is trained on as much data as possible, and it is designed to identify patterns that ultimately equip it to mimic human language. Out of this come chatbots, automated note taking, and other forms of AI-generated writing. Colleges and universities have created chatbots to help students get quick answers about admissions, financial aid and other student services.
Sabado said a University of California AI Council survey found the university’s health centers have started using generative AI for billing, note taking and transcriptions. A recent EDUCAUSE survey showed more than half of respondents in higher education said AI is being used across their entire institutions – teaching and learning, research and insights, daily operations and business.
The stated goal for all of these uses is to make administration more efficient and to improve the student experience. Whether those gains will be worth the cost of developing AI tools is so far unclear.
Using all forms of AI carries risk. Humans are conditioned to trust what computers say. But generative AI is not like a calculator, giving the right answer every time. Generative AI tools merely offer a guess – the most likely answer based on everything the model has processed before.
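A stripped-down illustration of that guessing: a toy model that has counted word pairs in a tiny amount of text and always picks the most likely next word. Real generative AI systems work at vastly larger scale, but the principle of predicting the most probable continuation is the same. The training text here is invented.

```python
# Toy illustration: "generate" text by always choosing the most likely next word,
# based only on word pairs counted in a tiny, made-up training text.
from collections import Counter, defaultdict

training_text = (
    "students ask about financial aid deadlines and "
    "students ask about admissions requirements and "
    "students ask about housing options"
).split()

# Count which word tends to follow which.
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

# "Generate" a reply: repeatedly pick the single most probable next word.
word, output = "students", ["students"]
for _ in range(4):
    word = next_word_counts[word].most_common(1)[0][0]
    output.append(word)
print(" ".join(output))  # e.g. "students ask about financial aid"
```

The output sounds plausible only because it echoes the most common patterns it was fed, which is exactly why a fluent answer is not the same as a correct one.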
In describing early warning systems used to identify students at risk of dropping out, Nyarko discussed the “black box problem.” AI systems don’t show their work. Once trained, people can only see the answers these tools provide, not every piece of data used to come to those answers. Big, complex models simply use too much data along the way to make that practical.
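One way to see the black box problem in miniature: even a small neural network reduces its training data to a pile of numeric weights that say nothing legible about why a particular student was flagged. The sketch below, again with made-up numbers, just counts those weights.

```python
# Sketch of the "black box" point: after training, the model is a pile of
# numeric weights. You can read its answer, but not a human-readable reason.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 3))            # 200 past students, 3 made-up features
y = (X[:, 0] < 0.5).astype(int)     # invented outcome, for illustration only

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500).fit(X, y)

n_weights = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print("Prediction for one student:", model.predict(X[:1])[0])
print("Number of internal weights behind that answer:", n_weights)
```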
And biased data can lead to biased predictions, Nyarko said, creating vicious cycles in outcomes. He described a hypothetical brilliant student, doing well in school despite coming from a disadvantaged background. But the student gets labeled “at-risk” and goes to a counselor to try to figure out why. The counselor doesn’t know, but now both the student and the counselor have a kernel of doubt about the student’s chance of success.
Sometimes predictive models are trained on biased data – so they are, in effect, given free rein to flag all Black students as higher risk of dropping out simply because Black students dropped out at higher rates in the past. Nothing corrects the model to make clear that race itself is not the driver of that outcome.
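A hedged sketch of how that feedback loop can arise: if the historical records encode a disparity, a model trained on them will assign different risk to two new students who are, by every academic measure, identical. The data below is fabricated purely to show the mechanism.

```python
# Sketch of how biased training data yields biased predictions. All numbers are
# fabricated; the point is the mechanism, not any real population.
from sklearn.linear_model import LogisticRegression

# Features: [GPA, group], where "group" is a demographic indicator (0 or 1).
# In this invented history, group 1 students dropped out more often for reasons
# (funding gaps, advising gaps) that appear nowhere in the data.
past = [[3.5, 0], [3.4, 0], [2.0, 0], [3.2, 0],
        [3.5, 1], [3.4, 1], [2.0, 1], [3.3, 1]]
dropped_out = [0, 0, 1, 0,
               1, 0, 1, 1]

model = LogisticRegression().fit(past, dropped_out)

# Two current students with identical strong GPAs, differing only by group.
for student in ([3.6, 0], [3.6, 1]):
    risk = model.predict_proba([student])[0][1]
    print(f"GPA {student[0]}, group {student[1]}: predicted risk {risk:.0%}")
```

The two students differ only in the demographic field, yet the model hands the second one a higher risk score, which is the seed of the vicious cycle Nyarko described.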
Sabado pointed out that sometimes the data used to train AI models is simply incorrect. “Garbage in, garbage out,” he said. Colleges and universities have to make sure they use good, clean data to train AI systems decision-makers can trust.
Campuses have responded differently since ChatGPT’s release. Some, like Arizona State University and the University of Michigan, have leaned into generative AI and made it available to their entire institutions.
Morgan State University, where Nyarko works, has developed trainings about AI through his center. Not all institutions have the money to make ChatGPT widely accessible or to help people understand what AI is and what it can be used for. And many don’t have the desire to try.
But panelists said that work is critical.
“If a campus doesn’t have a[n AI] literacy program for faculty, staff, and students, that should be a bit of a red flag,” Nyarko said. “Good campuses are developing these literacy programs, not saying they’re going to ban it.”
Here’s how to keep up with AI news:
And one final idea: Tell the stories of people who are harmed by algorithmic bias.
“The more of those stories you tell, it hits home that it’s more than just computers running programs,” Nyarko said. “It’s people’s lives.”