AI technology such as ChatGPT is forcing universities and teachers to rethink traditional examination methods. Systems like ChatGPT enable students to take shortcuts by generating essays while remaining undetected by plagiarism checks. Linköping University’s Deputy Vice-Chancellor, Karin Axelsson, believes that AI offers fantastic opportunities and that educators should focus on leveraging its benefits. While the university’s disciplinary board has not yet detected any ChatGPT-related cases, Axelsson urges course coordinators to explore alternative ways of testing students’ knowledge. Professor Fredrik Heintz likens ChatGPT’s impact to that of calculators on mathematics and envisions more interesting tasks and new insights in education.
Enhancing human capacities and ensuring ethical standards
According to UNESCO, AI in education aims to enhance human capacities, protect human rights, and promote effective human-machine collaboration for sustainable development. UNESCO’s Assistant Director-General for Education, Stefania Giannini, emphasizes the importance of steering the AI revolution in the right direction to improve livelihoods and reduce inequalities.
AI education technology, such as personalized learning, intelligent tutoring systems, virtual mentors, and chatbots, is transforming the landscape of education by identifying students’ strengths, weaknesses, and learning styles. This allows for adaptive feedback and tailored instruction, resulting in improved engagement and better learning outcomes. The skill emphasis in education is shifting towards critical thinking, problem-solving, collaboration, creativity, digital literacy, emotional intelligence, and cultural awareness. This fosters lifelong learning and prepares students for career transitions.
AI in education and its impact on teachers and students
Ethan Mollick, an associate professor at the prestigious Wharton School, has adopted an open ChatGPT policy in his syllabus, requiring students to use the AI tool. Despite initial concerns about cheating, Mollick has observed positive early results, with students generating class project ideas using ChatGPT and critically interrogating its suggestions. While Mollick admits to alternating between enthusiasm and anxiety about AI’s impact on assessments, he encourages educators to adapt to changing times.
Another example is Stanford Assistant Professor Chris Piech, who has used AI to turn two teachers into thousands. Piech’s team develops AI grading systems to improve teacher training. The ClassInSight project uses AI-driven learning technologies to equalize opportunities in higher-infrastructure settings, while the Allo Alphabet project in Côte d’Ivoire delivers a phone-based literacy intervention for low-infrastructure settings, highlighting AI’s potential for promoting global equity in education.
Journalism professor Bart Brouwers (University of Groningen) calls ChatGPT a great opportunity for education. In an interview for the university website TotdeMax, he stresses the importance of encouraging students to experiment with ChatGPT. “The penchant for tradition in education is quite strong. That’s really a drag sometimes. On the contrary, the university should explore the new. In doing so, it is not a bad thing if students deliberately cross the lines.” As long as this is done in an honest and transparent way, Brouwers believes the tool is more positive than negative. If it were up to him, the university would think not only about software that can detect artificial intelligence but also about tools like ChatGPT and how academia could perfect them for education. “A door has now been thrown open. I would say, Dear Science, do something with that.”
Addressing AI cheating concerns and ethical implications
With the rise of AI cheating tools like text generation and code plagiarism, concerns about fairness, privacy, and ethical implications have become more pressing. To address these issues, educators must ensure equal access to technology, bridge the digital divide, and consider the socioeconomic disparities that may arise from AI adoption. Privacy and data protection, as well as surveillance and consent, must be carefully considered in developing and implementing AI tools in education.
Mollick’s open ChatGPT policy at the Wharton School is one example of addressing AI cheating concerns. His policy highlights AI as an “emerging skill,” advises students to check ChatGPT’s results against other sources, holds them responsible for errors or omissions, and requires acknowledgment of ChatGPT usage. Violation of these rules breaches academic honesty policies.
Another approach is the development of tools like GPTZero, an app created by Princeton student Edward Tian to detect machine-generated writing. Tian believes that “humans deserve to know” whether writing is human or machine-generated, and tools like GPTZero can help maintain transparency and accountability in AI-generated content.
Of course, this article was written by AI, under human direction.