How can Artificial Intelligence (AI) tools augment, shape or impact future academic endeavours? What ethical considerations need to be clearly defined when using AI in a learning environment? Can AI be used for good in this space?
Wits Centre for Journalism (WCJ) lecturer Pheladi Sethusa recently spoke at a workshop hosted by the Wits Centre for Learning, Teaching and Development (CTLD), where she addressed such questions while reflecting on the rapid development of artificial intelligence and its emerging implications for learning and teaching.
Titled Ensuring Responsibility, Equity and Access: Artificial Intelligence in Higher Education, the workshop allowed educators to share their experiences of using AI in an academic space. Sethusa told the audience of an experiment which was conducted with the WCJ’s career-entry honours in journalism and media studies class at the beginning of the 2023 academic year.
“Students were asked to find out who they were from ChatGPT by using a general prompt and then additional personalised prompts of their choosing,” said Sethusa. “Using the information on hand, they then had to write a blog post which interrogated what had been said about them. It proved an essential foundational exercise ahead of our lectures on misinformation and disinformation. It allowed us to discuss the merits and potential pitfalls of using AI tools in both academic and professional settings, and led to a lot of constructive, interesting conversations.”
Later, while exploring copyright in photography classes, students also looked at Photoshop’s Generative Fill functionality, which uses AI to fill in “missing” areas of a picture based on user-generated prompts.
“Essentially, we’ve been experimenting with how we can use AI in ways that are helpful rather than harmful, and having those conversations about where we draw that line. We don’t use it to help us with our writing, for example. The early experiments we did in class helped students realise how easily AI can lie, the bias it carries, and how it can embellish information with what it thinks is correct. It’s also important that we build AI literacy in students.”
Sethusa also took part in a roundtable discussion in Kigali, hosted by the African Journalism Educators’ Network, the Fojo Media Institute and the University of Rwanda, to discuss these findings and insights with industry peers.
Charlie Beckett, director of Polis at the London School of Economics and Political Science, told the roundtable that individuals and news organisations should experiment with AI, but in a carefully supervised way.
Beckett said reputable organisations were adopting a cautious approach, and experts or teams should be appointed to understand AI’s impact on journalism. He stressed the importance of establishing guidelines to provide advice on both the positive and negative aspects of AI. He added that these guidelines should be adaptable, given the rapidly evolving nature of AI.
Beckett also pointed out that AI’s reliance on training data introduces biases.
AI tools were assistants to journalists rather than replacements, he said. He outlined various ways in which AI could enhance journalism, such as summarising information and reformatting content for different platforms. He also noted AI’s potential to improve content search and selection, as demonstrated by a recent AI-powered search tool for Associated Press TV.
Beckett urged journalism professionals to collaborate, share experiences, and optimise the use of AI tools. He emphasised that failing to prepare future journalists with AI skills could allow nefarious actors to misuse these technologies.